From patchwork Wed Sep 30 16:09:10 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 1/8] dma-mapping: remove the {alloc,free}_noncoherent methods
Date: Wed, 30 Sep 2020 18:09:10 +0200
Message-Id: <20200930160917.1234225-2-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

It turns out allowing non-contiguous allocations here was a rather bad
idea, as we'll now need to define ways to get the pages for mmap() or
dma_buf sharing.  Revert this change and stick to the original concept.
A different API for the use case of non-contiguous allocations will be
added back later.
Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c   | 30 ------------------------------
 include/linux/dma-mapping.h |  5 -----
 kernel/dma/mapping.c        | 33 ++++++---------------------------
 3 files changed, 6 insertions(+), 62 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index c12c1dc43d312e..b363b20a9f41ce 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1055,34 +1055,6 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	return cpu_addr;
 }
 
-#ifdef CONFIG_DMA_REMAP
-static void *iommu_dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *handle, enum dma_data_direction dir, gfp_t gfp)
-{
-	if (!gfpflags_allow_blocking(gfp)) {
-		struct page *page;
-
-		page = dma_common_alloc_pages(dev, size, handle, dir, gfp);
-		if (!page)
-			return NULL;
-		return page_address(page);
-	}
-
-	return iommu_dma_alloc_remap(dev, size, handle, gfp | __GFP_ZERO,
-			PAGE_KERNEL, 0);
-}
-
-static void iommu_dma_free_noncoherent(struct device *dev, size_t size,
-		void *cpu_addr, dma_addr_t handle, enum dma_data_direction dir)
-{
-	__iommu_dma_unmap(dev, handle, size);
-	__iommu_dma_free(dev, size, cpu_addr);
-}
-#else
-#define iommu_dma_alloc_noncoherent	NULL
-#define iommu_dma_free_noncoherent	NULL
-#endif /* CONFIG_DMA_REMAP */
-
 static int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
@@ -1153,8 +1125,6 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
 	.free_pages		= dma_common_free_pages,
-	.alloc_noncoherent	= iommu_dma_alloc_noncoherent,
-	.free_noncoherent	= iommu_dma_free_noncoherent,
 	.mmap			= iommu_dma_mmap,
 	.get_sgtable		= iommu_dma_get_sgtable,
 	.map_page		= iommu_dma_map_page,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 7c77cd6f3604a7..4b9b1d64f5ec9e 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -74,11 +74,6 @@ struct dma_map_ops {
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
 			dma_addr_t dma_handle, enum dma_data_direction dir);
-	void* (*alloc_noncoherent)(struct device *dev, size_t size,
-			dma_addr_t *dma_handle, enum dma_data_direction dir,
-			gfp_t gfp);
-	void (*free_noncoherent)(struct device *dev, size_t size, void *vaddr,
-			dma_addr_t dma_handle, enum dma_data_direction dir);
 	int (*mmap)(struct device *, struct vm_area_struct *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
 
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 9669550656a0b4..06115f59f4ffbf 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -513,40 +513,19 @@ EXPORT_SYMBOL_GPL(dma_free_pages);
 void *dma_alloc_noncoherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 {
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-	void *vaddr;
-
-	if (!ops || !ops->alloc_noncoherent) {
-		struct page *page;
-
-		page = dma_alloc_pages(dev, size, dma_handle, dir, gfp);
-		if (!page)
-			return NULL;
-		return page_address(page);
-	}
+	struct page *page;
 
-	size = PAGE_ALIGN(size);
-	vaddr = ops->alloc_noncoherent(dev, size, dma_handle, dir, gfp);
-	if (vaddr)
-		debug_dma_map_page(dev, virt_to_page(vaddr), 0, size, dir,
-				   *dma_handle);
-	return vaddr;
+	page = dma_alloc_pages(dev, size, dma_handle, dir, gfp);
+	if (!page)
+		return NULL;
+	return page_address(page);
 }
 EXPORT_SYMBOL_GPL(dma_alloc_noncoherent);
 
 void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-
-	if (!ops || !ops->free_noncoherent) {
-		dma_free_pages(dev, size, virt_to_page(vaddr), dma_handle, dir);
-		return;
-	}
-
-	size = PAGE_ALIGN(size);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
-	ops->free_noncoherent(dev, size, vaddr, dma_handle, dir);
+	dma_free_pages(dev, size, virt_to_page(vaddr), dma_handle, dir);
 }
 EXPORT_SYMBOL_GPL(dma_free_noncoherent);
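With the methods removed, dma_alloc_noncoherent() and dma_free_noncoherent()
become thin wrappers around dma_alloc_pages() and dma_free_pages().  A minimal
sketch of the driver-side pattern that keeps working across this revert
(hypothetical device and helper names; the sync calls follow the usual
streaming-DMA ownership rules)::

    #include <linux/dma-mapping.h>

    /* Hypothetical helper: allocate a buffer the device will write into. */
    static void *example_alloc_buf(struct device *dev, size_t size,
                                   dma_addr_t *dma_handle)
    {
            void *vaddr;

            /* After this patch, always backed by dma_alloc_pages(). */
            vaddr = dma_alloc_noncoherent(dev, size, dma_handle,
                                          DMA_FROM_DEVICE, GFP_KERNEL);
            if (!vaddr)
                    return NULL;

            /* Give the buffer to the device before starting the transfer. */
            dma_sync_single_for_device(dev, *dma_handle, size,
                                       DMA_FROM_DEVICE);
            return vaddr;
    }

    /* Hypothetical helper: reclaim the buffer once the transfer is done. */
    static void example_free_buf(struct device *dev, void *vaddr,
                                 dma_addr_t dma_handle, size_t size)
    {
            /* Take ownership back before the CPU reads the data. */
            dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);
            /* ... consume the data ... */
            dma_free_noncoherent(dev, size, vaddr, dma_handle,
                                 DMA_FROM_DEVICE);
    }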
From patchwork Wed Sep 30 16:09:11 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 2/8] dma-mapping: document dma_{alloc,free}_pages
Date: Wed, 30 Sep 2020 18:09:11 +0200
Message-Id: <20200930160917.1234225-3-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

Document the new dma_alloc_pages and dma_free_pages APIs, and fix up
the documentation for dma_alloc_noncoherent and dma_free_noncoherent.

Reported-by: Robin Murphy
Signed-off-by: Christoph Hellwig
---
 Documentation/core-api/dma-api.rst | 45 ++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 5 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index ea0413276ddb70..a75c469dbcaa7c 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -534,11 +534,9 @@ an I/O device, you should not be using this part of the API.
 	dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 
-This routine allocates a region of <size> bytes of consistent memory.  It
+This routine allocates a region of <size> bytes of non-coherent memory.  It
 returns a pointer to the allocated region (in the processor's virtual address
-space) or NULL if the allocation failed. The returned memory may or may not
-be in the kernels direct mapping.  Drivers must not call virt_to_page on
-the returned memory region.
+space) or NULL if the allocation failed.
 
 It also returns a <dma_handle> which may be cast to an unsigned integer the
 same width as the bus and given to the device as the DMA address base of
 the region.
@@ -565,7 +563,44 @@ reused.
 Free a region of memory previously allocated using dma_alloc_noncoherent().
 dev, size and dma_handle and dir must all be the same as those passed into
 dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
-the dma_alloc_noncoherent().
+dma_alloc_noncoherent().
+
+::
+
+	struct page *
+	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+			enum dma_data_direction dir, gfp_t gfp)
+
+This routine allocates a region of <size> bytes of non-coherent memory.  It
+returns a pointer to the first struct page for the region, or NULL if the
+allocation failed.
+
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
+
+The dir parameter specifies whether data is read and/or written by the
+device, see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_single_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
+reused.
+
+::
+
+	void
+	dma_free_pages(struct device *dev, size_t size, struct page *page,
+			dma_addr_t dma_handle, enum dma_data_direction dir)
+
+Free a region of memory previously allocated using dma_alloc_pages().
+dev, size, dma_handle and dir must all be the same as those passed into
+dma_alloc_pages().  page must be the pointer returned by
+dma_alloc_pages().
 
 ::
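The documented contract is easiest to see in code.  A short sketch of a
dma_alloc_pages() user following the sync rules above (hypothetical helper
names, not part of the patch; error handling trimmed to the essentials)::

    #include <linux/dma-mapping.h>
    #include <linux/string.h>

    /* Hypothetical helper following the documented dma_alloc_pages() rules. */
    static struct page *example_alloc_pages(struct device *dev, size_t size,
                                            dma_addr_t *dma_handle)
    {
            struct page *page;

            /* GFP_DMA/GFP_HIGHMEM would be rejected here, per the text above. */
            page = dma_alloc_pages(dev, size, dma_handle, DMA_TO_DEVICE,
                                   GFP_KERNEL);
            if (!page)
                    return NULL;

            /* CPU fills the buffer, then hands ownership to the device. */
            memset(page_address(page), 0xff, size);
            dma_sync_single_for_device(dev, *dma_handle, size, DMA_TO_DEVICE);
            return page;
    }

The buffer is later released with dma_free_pages(dev, size, page,
*dma_handle, DMA_TO_DEVICE), passing back exactly the values the allocation
returned.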
From patchwork Wed Sep 30 16:09:12 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 3/8] dma-direct: check for highmem pages in dma_direct_alloc_pages
Date: Wed, 30 Sep 2020 18:09:12 +0200
Message-Id: <20200930160917.1234225-4-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

Check for highmem pages from CMA, just like in the dma_direct_alloc
path.

Signed-off-by: Christoph Hellwig
---
 kernel/dma/direct.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 121a9c1969dd3a..b5f20781d3a96f 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -309,6 +309,17 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	page = __dma_direct_alloc_pages(dev, size, gfp);
 	if (!page)
 		return NULL;
+	if (PageHighMem(page)) {
+		/*
+		 * Depending on the cma= arguments and per-arch setup
+		 * dma_alloc_contiguous could return highmem pages.
+		 * Without remapping there is no way to return them here,
+		 * so log an error and fail.
+		 */
+		dev_info(dev, "Rejecting highmem page from CMA.\n");
+		goto out_free_pages;
+	}
+
 	ret = page_address(page);
 	if (force_dma_unencrypted(dev)) {
 		if (set_memory_decrypted((unsigned long)ret,
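The check matters because callers of dma_direct_alloc_pages() (and of
dma_alloc_noncoherent(), which wraps it) immediately rely on page_address().
A brief illustration of the failure mode the dev_info() above guards against
(illustrative sketch only, not part of the patch)::

    #include <linux/mm.h>

    /*
     * Illustrative only: page_address() is valid for lowmem (direct-mapped)
     * pages, but for an unmapped highmem page it yields NULL.  Rejecting
     * highmem pages at allocation time keeps callers from dereferencing
     * such a NULL kernel address.
     */
    static void *kernel_alias_or_null(struct page *page)
    {
            if (PageHighMem(page))
                    return NULL;    /* no permanent kernel mapping exists */
            return page_address(page);
    }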
From patchwork Wed Sep 30 16:09:13 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 4/8] dma-direct: use __GFP_ZERO in dma_direct_alloc_pages
Date: Wed, 30 Sep 2020 18:09:13 +0200
Message-Id: <20200930160917.1234225-5-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

Prepare for supporting the DMA_ATTR_NO_KERNEL_MAPPING flag in
dma_alloc_pages.

Signed-off-by: Christoph Hellwig
---
 kernel/dma/direct.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b5f20781d3a96f..b5d56810130b22 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -296,9 +296,10 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
 {
 	struct page *page;
-	void *ret;
 
 	if (dma_should_alloc_from_pool(dev, gfp, 0)) {
+		void *ret;
+
 		page = dma_alloc_from_pool(dev, size, &ret, gfp,
 				dma_coherent_ok);
 		if (!page)
@@ -306,7 +307,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		goto done;
 	}
 
-	page = __dma_direct_alloc_pages(dev, size, gfp);
+	page = __dma_direct_alloc_pages(dev, size, gfp | __GFP_ZERO);
 	if (!page)
 		return NULL;
 	if (PageHighMem(page)) {
@@ -320,13 +321,11 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		goto out_free_pages;
 	}
 
-	ret = page_address(page);
 	if (force_dma_unencrypted(dev)) {
-		if (set_memory_decrypted((unsigned long)ret,
+		if (set_memory_decrypted((unsigned long)page_address(page),
 				1 << get_order(size)))
 			goto out_free_pages;
 	}
-	memset(ret, 0, size);
 done:
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
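Zeroing via __GFP_ZERO instead of a later memset() matters once
DMA_ATTR_NO_KERNEL_MAPPING is supported: the allocator can clear the pages
without the caller ever holding a kernel virtual address.  A tiny sketch of
the distinction (illustrative only)::

    #include <linux/gfp.h>

    /*
     * Illustrative only: the page allocator can hand back pre-zeroed
     * pages, so no kernel mapping of the buffer is needed to clear it.
     * A memset(page_address(page), 0, size) would require one.
     */
    static struct page *zeroed_dma_pages(gfp_t gfp, unsigned int order)
    {
            return alloc_pages(gfp | __GFP_ZERO, order);
    }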
From patchwork Wed Sep 30 16:09:14 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 5/8] dma-direct: factor out a dma_direct_alloc_from_pool helper
Date: Wed, 30 Sep 2020 18:09:14 +0200
Message-Id: <20200930160917.1234225-6-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

This ensures dma_direct_alloc_pages will use the right gfp mask, and
keeps the pool-allocation code common between the two callers.

Signed-off-by: Christoph Hellwig
---
 kernel/dma/direct.c | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b5d56810130b22..ace9159c992f65 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -147,6 +147,22 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	return page;
 }
 
+static void *dma_direct_alloc_from_pool(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp)
+{
+	struct page *page;
+	u64 phys_mask;
+	void *ret;
+
+	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
+			&phys_mask);
+	page = dma_alloc_from_pool(dev, size, &ret, gfp, dma_coherent_ok);
+	if (!page)
+		return NULL;
+	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
+	return ret;
+}
+
 void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
@@ -163,17 +179,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
 
-	if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
-		u64 phys_mask;
-
-		gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
-				&phys_mask);
-		page = dma_alloc_from_pool(dev, size, &ret, gfp,
-				dma_coherent_ok);
-		if (!page)
-			return NULL;
-		goto done;
-	}
+	if (dma_should_alloc_from_pool(dev, gfp, attrs))
+		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
 	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
@@ -297,15 +304,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 {
 	struct page *page;
 
-	if (dma_should_alloc_from_pool(dev, gfp, 0)) {
-		void *ret;
-
-		page = dma_alloc_from_pool(dev, size, &ret, gfp,
-				dma_coherent_ok);
-		if (!page)
-			return NULL;
-		goto done;
-	}
+	if (dma_should_alloc_from_pool(dev, gfp, 0))
+		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp | __GFP_ZERO);
 	if (!page)
@@ -326,7 +326,6 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 				1 << get_order(size)))
 			goto out_free_pages;
 	}
-done:
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
From patchwork Wed Sep 30 16:09:15 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 6/8] dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
Date: Wed, 30 Sep 2020 18:09:15 +0200
Message-Id: <20200930160917.1234225-7-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

Use an entirely separate code path for the DMA_ATTR_NO_KERNEL_MAPPING
case.  This avoids any confusion about the ret type, and allows the
attrs checks and helpers to be significantly simplified.  It also
ensures that common handling is applied to architectures still using
the arch alloc/free hooks.

Signed-off-by: Christoph Hellwig
---
 include/linux/dma-noncoherent.h |  13 -----
 kernel/dma/direct.c             | 100 +++++++++++++------------------
 2 files changed, 39 insertions(+), 74 deletions(-)

diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
index e61283e06576a8..73ac149fa181b4 100644
--- a/include/linux/dma-noncoherent.h
+++ b/include/linux/dma-noncoherent.h
@@ -21,19 +21,6 @@ static inline bool dev_is_dma_coherent(struct device *dev)
 }
 #endif /* CONFIG_ARCH_HAS_DMA_COHERENCE_H */
 
-/*
- * Check if an allocation needs to be marked uncached to be coherent.
- */
-static __always_inline bool dma_alloc_need_uncached(struct device *dev,
-		unsigned long attrs)
-{
-	if (dev_is_dma_coherent(dev))
-		return false;
-	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
-		return false;
-	return true;
-}
-
 void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp, unsigned long attrs);
 void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ace9159c992f65..a3c619b424edf0 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,39 +75,6 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
-/*
- * Decrypting memory is allowed to block, so if this device requires
- * unencrypted memory it must come from atomic pools.
- */
-static inline bool dma_should_alloc_from_pool(struct device *dev, gfp_t gfp,
-		unsigned long attrs)
-{
-	if (!IS_ENABLED(CONFIG_DMA_COHERENT_POOL))
-		return false;
-	if (gfpflags_allow_blocking(gfp))
-		return false;
-	if (force_dma_unencrypted(dev))
-		return true;
-	if (!IS_ENABLED(CONFIG_DMA_DIRECT_REMAP))
-		return false;
-	if (dma_alloc_need_uncached(dev, attrs))
-		return true;
-	return false;
-}
-
-static inline bool dma_should_free_from_pool(struct device *dev,
-		unsigned long attrs)
-{
-	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL))
-		return true;
-	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev))
-		return false;
-	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP))
-		return true;
-	return false;
-}
-
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -170,35 +137,45 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	void *ret;
 	int err;
 
-	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    dma_alloc_need_uncached(dev, attrs))
-		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
-
 	size = PAGE_ALIGN(size);
 	if (attrs & DMA_ATTR_NO_WARN)
 		gfp |= __GFP_NOWARN;
 
-	if (dma_should_alloc_from_pool(dev, gfp, attrs))
-		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
-
-	/* we always manually zero the memory once we are done */
-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
-	if (!page)
-		return NULL;
-
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
 	    !force_dma_unencrypted(dev)) {
+		page = __dma_direct_alloc_pages(dev, size, gfp);
+		if (!page)
+			return NULL;
 		/* remove any dirty cache lines on the kernel alias */
 		if (!PageHighMem(page))
 			arch_dma_prep_coherent(page, size);
+		*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 		/* return the page pointer as the opaque cookie */
-		ret = page;
-		goto done;
+		return page;
 	}
 
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	    !dev_is_dma_coherent(dev))
+		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
+
+	/*
+	 * Remapping or decrypting memory may block. If either is required and
+	 * we can't block, allocate the memory from the atomic pools.
+	 */
+	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
+	    !gfpflags_allow_blocking(gfp) &&
+	    (force_dma_unencrypted(dev) ||
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))))
+		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
+
+	/* we always manually zero the memory once we are done */
+	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
+	if (!page)
+		return NULL;
+
 	if ((IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	     dma_alloc_need_uncached(dev, attrs)) ||
+	     !dev_is_dma_coherent(dev)) ||
 	    (IS_ENABLED(CONFIG_DMA_REMAP) && PageHighMem(page))) {
 		/* remove any dirty cache lines on the kernel alias */
 		arch_dma_prep_coherent(page, size);
@@ -241,7 +218,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		memset(ret, 0, size);
 
 	if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    dma_alloc_need_uncached(dev, attrs)) {
+	    !dev_is_dma_coherent(dev)) {
 		arch_dma_prep_coherent(page, size);
 		ret = arch_dma_set_uncached(ret, size);
 		if (IS_ERR(ret))
@@ -269,25 +246,25 @@ void dma_direct_free(struct device *dev, size_t size,
 {
 	unsigned int page_order = get_order(size);
 
+	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
+	    !force_dma_unencrypted(dev)) {
+		/* cpu_addr is a struct page cookie, not a kernel address */
+		dma_free_contiguous(dev, cpu_addr, size);
+		return;
+	}
+
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
 	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    dma_alloc_need_uncached(dev, attrs)) {
+	    !dev_is_dma_coherent(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
 
 	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
-	if (dma_should_free_from_pool(dev, attrs) &&
+	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
 		return;
 
-	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
-		/* cpu_addr is a struct page cookie, not a kernel address */
-		dma_free_contiguous(dev, cpu_addr, size);
-		return;
-	}
-
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
 
@@ -304,7 +281,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 {
 	struct page *page;
 
-	if (dma_should_alloc_from_pool(dev, gfp, 0))
+	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp | __GFP_ZERO);
@@ -341,7 +319,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	void *vaddr = page_address(page);
 
 	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
-	if (dma_should_free_from_pool(dev, 0) &&
+	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    dma_free_from_pool(dev, vaddr, size))
 		return;
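For reference, the caller-visible contract this separate path implements:
with DMA_ATTR_NO_KERNEL_MAPPING, the value returned by dma_alloc_attrs() is
an opaque cookie (for dma-direct, the struct page pointer) rather than a
kernel virtual address.  A sketch of such a caller (hypothetical helper
names)::

    #include <linux/dma-mapping.h>

    /* Hypothetical caller that never touches the buffer from the CPU. */
    static void *example_cookie_alloc(struct device *dev, size_t size,
                                      dma_addr_t *dma_handle)
    {
            /* Must not be dereferenced; only passed back to dma_free_attrs(). */
            return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
                                   DMA_ATTR_NO_KERNEL_MAPPING);
    }

    static void example_cookie_free(struct device *dev, size_t size,
                                    void *cookie, dma_addr_t dma_handle)
    {
            dma_free_attrs(dev, size, cookie, dma_handle,
                           DMA_ATTR_NO_KERNEL_MAPPING);
    }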
From patchwork Wed Sep 30 16:09:16 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 7/8] dma-iommu: remove __iommu_dma_mmap
Date: Wed, 30 Sep 2020 18:09:16 +0200
Message-Id: <20200930160917.1234225-8-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

The function has a single caller, so open code it there and take
advantage of the precalculated page count variable.

Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b363b20a9f41ce..7922f545cd5eef 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -656,21 +656,6 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	return NULL;
 }
 
-/**
- * __iommu_dma_mmap - Map a buffer into provided user VMA
- * @pages: Array representing buffer from __iommu_dma_alloc()
- * @size: Size of buffer in bytes
- * @vma: VMA describing requested userspace mapping
- *
- * Maps the pages of the buffer in @pages into @vma. The caller is responsible
- * for verifying the correct size and protection of @vma beforehand.
- */
-static int __iommu_dma_mmap(struct page **pages, size_t size,
-		struct vm_area_struct *vma)
-{
-	return vm_map_pages(vma, pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
-}
-
 static void iommu_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
@@ -1075,7 +1060,7 @@ static int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		struct page **pages = dma_common_find_pages(cpu_addr);
 
 		if (pages)
-			return __iommu_dma_mmap(pages, size, vma);
+			return vm_map_pages(vma, pages, nr_pages);
 		pfn = vmalloc_to_pfn(cpu_addr);
 	} else {
 		pfn = page_to_pfn(virt_to_page(cpu_addr));

From patchwork Wed Sep 30 16:09:17 2020
From: Christoph Hellwig
To: Mauro Carvalho Chehab, Marek Szyprowski, Tomasz Figa,
    iommu@lists.linux-foundation.org
Cc: Robin Murphy, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-media@vger.kernel.org
Subject: [PATCH 8/8] WIP: add a dma_alloc_noncontiguous API
Date: Wed, 30 Sep 2020 18:09:17 +0200
Message-Id: <20200930160917.1234225-9-hch@lst.de>
In-Reply-To: <20200930160917.1234225-1-hch@lst.de>
References: <20200930160917.1234225-1-hch@lst.de>

Add a new API that returns a virtually non-contiguous array of pages
and a DMA address.
This API is only implemented for dma-iommu and will not be implemented
for non-iommu DMA API instances that have to allocate contiguous memory.
It is up to the caller to check if the API is available.

The intent is that media drivers can use this API if either:

 - no kernel mapping or only temporary kernel mappings are required,
   as a better replacement for DMA_ATTR_NO_KERNEL_MAPPING, or
 - a kernel mapping is required for cached and DMA-mapped pages, but
   the driver also needs the pages to e.g. map them to userspace.
   In that sense it is a replacement for some aspects of the recently
   removed and never fully implemented DMA_ATTR_NON_CONSISTENT.

Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c   | 73 +++++++++++++++++++++++++------------
 include/linux/dma-mapping.h |  9 +++++
 kernel/dma/mapping.c        | 35 ++++++++++++++++++
 3 files changed, 93 insertions(+), 24 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7922f545cd5eef..158026a856622c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -565,23 +565,12 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	return pages;
 }
 
-/**
- * iommu_dma_alloc_remap - Allocate and map a buffer contiguous in IOVA space
- * @dev: Device to allocate memory for. Must be a real device
- *	 attached to an iommu_dma_domain
- * @size: Size of buffer in bytes
- * @dma_handle: Out argument for allocated DMA handle
- * @gfp: Allocation flags
- * @prot: pgprot_t to use for the remapped mapping
- * @attrs: DMA attributes for this allocation
- *
- * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
+/*
+ * If size is less than PAGE_SIZE, then a full CPU page will be allocated,
  * but an IOMMU which supports smaller pages might not map the whole thing.
- *
- * Return: Mapped virtual address, or NULL on failure.
  */
-static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
 		unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -593,7 +582,6 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	struct page **pages;
 	struct sg_table sgt;
 	dma_addr_t iova;
-	void *vaddr;
 
 	*dma_handle = DMA_MAPPING_ERROR;
 
@@ -636,17 +624,10 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 			< size)
 		goto out_free_sg;
 
-	vaddr = dma_common_pages_remap(pages, size, prot,
-			__builtin_return_address(0));
-	if (!vaddr)
-		goto out_unmap;
-
 	*dma_handle = iova;
 	sg_free_table(&sgt);
-	return vaddr;
+	return pages;
 
-out_unmap:
-	__iommu_dma_unmap(dev, iova, size);
 out_free_sg:
 	sg_free_table(&sgt);
 out_free_iova:
@@ -656,6 +637,46 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 	return NULL;
 }
 
+static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, pgprot_t prot,
+		unsigned long attrs)
+{
+	struct page **pages;
+	void *vaddr;
+
+	pages = __iommu_dma_alloc_noncontiguous(dev, size, dma_handle, gfp,
+			prot, attrs);
+	if (!pages)
+		return NULL;
+	vaddr = dma_common_pages_remap(pages, size, prot,
+			__builtin_return_address(0));
+	if (!vaddr)
+		goto out_unmap;
+	return vaddr;
+
+out_unmap:
+	__iommu_dma_unmap(dev, *dma_handle, size);
+	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+	return NULL;
+}
+
+#ifdef CONFIG_DMA_REMAP
+static struct page **iommu_dma_alloc_noncontiguous(struct device *dev,
+		size_t size, dma_addr_t *dma_handle, gfp_t gfp,
+		unsigned long attrs)
+{
+	return __iommu_dma_alloc_noncontiguous(dev, size, dma_handle, gfp,
+			PAGE_KERNEL, attrs);
+}
+
+static void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
+		struct page **pages, dma_addr_t dma_handle)
+{
+	__iommu_dma_unmap(dev, dma_handle, size);
+	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
+}
+#endif
+
 static void iommu_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
@@ -1110,6 +1131,10 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
 	.free_pages		= dma_common_free_pages,
+#ifdef CONFIG_DMA_REMAP
+	.alloc_noncontiguous	= iommu_dma_alloc_noncontiguous,
+	.free_noncontiguous	= iommu_dma_free_noncontiguous,
+#endif
 	.mmap			= iommu_dma_mmap,
 	.get_sgtable		= iommu_dma_get_sgtable,
 	.map_page		= iommu_dma_map_page,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4b9b1d64f5ec9e..51bbc32365bb8d 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -74,6 +74,10 @@ struct dma_map_ops {
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
 			dma_addr_t dma_handle, enum dma_data_direction dir);
+	struct page **(*alloc_noncontiguous)(struct device *dev, size_t size,
+			dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
+	void (*free_noncontiguous)(struct device *dev, size_t size,
+			struct page **pages, dma_addr_t dma_handle);
 	int (*mmap)(struct device *, struct vm_area_struct *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
 
@@ -384,6 +388,11 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
 void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 		dma_addr_t dma_handle, enum dma_data_direction dir);
+bool dma_can_alloc_noncontiguous(struct device *dev);
+struct page **dma_alloc_noncontiguous(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs);
+void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct page **pages, dma_addr_t dma_handle);
 
 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 06115f59f4ffbf..6d975d1a20dd72 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -529,6 +529,41 @@ void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 }
 EXPORT_SYMBOL_GPL(dma_free_noncoherent);
 
+bool dma_can_alloc_noncontiguous(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	return ops && ops->free_noncontiguous;
+}
+EXPORT_SYMBOL_GPL(dma_can_alloc_noncontiguous);
+
+struct page **dma_alloc_noncontiguous(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (WARN_ON_ONCE(!dma_can_alloc_noncontiguous(dev)))
+		return NULL;
+	if (attrs & ~DMA_ATTR_ALLOC_SINGLE_PAGES) {
+		dev_warn(dev, "invalid flags (0x%lx) for %s\n",
+			 attrs, __func__);
+		return NULL;
+	}
+	return ops->alloc_noncontiguous(dev, size, dma_handle, gfp, attrs);
+}
+EXPORT_SYMBOL_GPL(dma_alloc_noncontiguous);
+
+void dma_free_noncontiguous(struct device *dev, size_t size,
+		struct page **pages, dma_addr_t dma_handle)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (WARN_ON_ONCE(!dma_can_alloc_noncontiguous(dev)))
+		return;
+	ops->free_noncontiguous(dev, size, pages, dma_handle);
+}
+EXPORT_SYMBOL_GPL(dma_free_noncontiguous);
+
 int dma_supported(struct device *dev, u64 mask)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
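A sketch of how a media driver might consume the proposed API, following the
intent stated in the commit message (hypothetical driver code; the vmap()
shows the "temporary kernel mapping" case, and the same page array could
instead be handed to vm_map_pages() from an mmap handler)::

    #include <linux/dma-mapping.h>
    #include <linux/string.h>
    #include <linux/vmalloc.h>

    /* Hypothetical consumer of the proposed dma_alloc_noncontiguous() API. */
    static int example_noncontig_alloc(struct device *dev, size_t size)
    {
            unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
            dma_addr_t dma_handle;
            struct page **pages;
            void *vaddr;

            /* The API is optional; callers must check for it first. */
            if (!dma_can_alloc_noncontiguous(dev))
                    return -EOPNOTSUPP; /* e.g. fall back to dma_alloc_pages() */

            pages = dma_alloc_noncontiguous(dev, size, &dma_handle,
                                            GFP_KERNEL, 0);
            if (!pages)
                    return -ENOMEM;

            /* Optional temporary kernel mapping, e.g. to initialize the buffer. */
            vaddr = vmap(pages, count, VM_MAP, PAGE_KERNEL);
            if (vaddr) {
                    memset(vaddr, 0, size);
                    vunmap(vaddr);
            }

            dma_free_noncontiguous(dev, size, pages, dma_handle);
            return 0;
    }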