From patchwork Tue Feb 7 15:38:36 2017
X-Patchwork-Submitter: Geert Uytterhoeven
X-Patchwork-Id: 9560415
X-Patchwork-Delegate: geert@linux-m68k.org
From: Geert Uytterhoeven
To: Catalin Marinas, Will Deacon, Robin Murphy
Cc: Joerg Roedel, Laurent Pinchart, Marek Szyprowski, Magnus Damm,
    linux-arm-kernel@lists.infradead.org,
    linux-renesas-soc@vger.kernel.org,
    iommu@lists.linux-foundation.org,
    Geert Uytterhoeven
Subject: [PATCH v4] arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU
Date: Tue, 7 Feb 2017 16:38:36 +0100
Message-Id: <1486481916-1892-1-git-send-email-geert+renesas@glider.be>
X-Mailing-List: linux-renesas-soc@vger.kernel.org

Add support for allocating physically contiguous DMA buffers on arm64
systems with an IOMMU. This can be useful when two or more devices with
different memory requirements are involved in buffer sharing; a short
driver-side usage sketch follows below.

Note that, as this uses the CMA allocator, setting the
DMA_ATTR_FORCE_CONTIGUOUS attribute has a runtime dependency on
CONFIG_DMA_CMA, just like on arm32.

For arm64 systems using swiotlb, no changes are needed to support the
allocation of physically contiguous DMA buffers:
  - swiotlb always uses physically contiguous buffers (up to
    IO_TLB_SEGSIZE = 128 pages),
  - arm64's __dma_alloc_coherent() already calls
    dma_alloc_from_contiguous() when CMA is available.
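From a driver's point of view the attribute is requested through the
regular DMA attributes API. A minimal sketch, assuming CONFIG_DMA_CMA is
enabled; the helper names, device pointer, and size are illustrative and
not part of this patch:

	#include <linux/dma-mapping.h>

	/* Illustrative helper: ask the IOMMU path for a physically
	 * contiguous buffer backed by CMA (needs CONFIG_DMA_CMA).
	 */
	static void *contig_buf_alloc(struct device *dev, size_t size,
				      dma_addr_t *dma_handle)
	{
		return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL,
				       DMA_ATTR_FORCE_CONTIGUOUS);
	}

	/* The same attrs must be passed back on free, so that the arch
	 * code releases the buffer via dma_release_from_contiguous().
	 */
	static void contig_buf_free(struct device *dev, size_t size,
				    void *cpu_addr, dma_addr_t dma_handle)
	{
		dma_free_attrs(dev, size, cpu_addr, dma_handle,
			       DMA_ATTR_FORCE_CONTIGUOUS);
	}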
Signed-off-by: Geert Uytterhoeven
Acked-by: Laurent Pinchart
Reviewed-by: Robin Murphy
---
v4:
  - Replace dma_to_phys()/phys_to_page() by vmalloc_to_page(), to pass
    the correct page pointer to dma_release_from_contiguous().
    Note that the latter doesn't scream when passed a wrong pointer, but
    just returns false. While this makes life easier for callers who may
    want to call another deallocator, it makes it harder to catch bugs.
v3:
  - Add Acked-by,
  - Update comment to "one of _4_ things",
  - Call dma_alloc_from_contiguous() and iommu_dma_map_page() directly,
    as suggested by Robin Murphy,
v2:
  - New, handle dispatching in the arch (arm64) code, as requested by
    Robin Murphy.
---
 arch/arm64/mm/dma-mapping.c | 63 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 48 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 351f7595cb3ebdb9..fb76e82c90edd514 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -584,20 +584,7 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 	 */
 	gfp |= __GFP_ZERO;
 
-	if (gfpflags_allow_blocking(gfp)) {
-		struct page **pages;
-		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
-
-		pages = iommu_dma_alloc(dev, iosize, gfp, attrs, ioprot,
-					handle, flush_page);
-		if (!pages)
-			return NULL;
-
-		addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot,
-					      __builtin_return_address(0));
-		if (!addr)
-			iommu_dma_free(dev, pages, iosize, handle);
-	} else {
+	if (!gfpflags_allow_blocking(gfp)) {
 		struct page *page;
 		/*
 		 * In atomic context we can't remap anything, so we'll only
@@ -621,6 +608,45 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 			__free_from_pool(addr, size);
 			addr = NULL;
 		}
+	} else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
+		struct page *page;
+
+		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+						 get_order(size));
+		if (!page)
+			return NULL;
+
+		*handle = iommu_dma_map_page(dev, page, 0, iosize, ioprot);
+		if (iommu_dma_mapping_error(dev, *handle)) {
+			dma_release_from_contiguous(dev, page,
+						    size >> PAGE_SHIFT);
+			return NULL;
+		}
+		if (!coherent)
+			__dma_flush_area(page_to_virt(page), iosize);
+
+		addr = dma_common_contiguous_remap(page, size, VM_USERMAP,
+						   prot,
+						   __builtin_return_address(0));
+		if (!addr) {
+			iommu_dma_unmap_page(dev, *handle, iosize, 0, attrs);
+			dma_release_from_contiguous(dev, page,
+						    size >> PAGE_SHIFT);
+		}
+	} else {
+		pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
+		struct page **pages;
+
+		pages = iommu_dma_alloc(dev, iosize, gfp, attrs, ioprot,
+					handle, flush_page);
+		if (!pages)
+			return NULL;
+
+		addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot,
+					      __builtin_return_address(0));
+		if (!addr)
+			iommu_dma_free(dev, pages, iosize, handle);
 	}
 	return addr;
 }
@@ -632,7 +658,8 @@ static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 
 	size = PAGE_ALIGN(size);
 	/*
-	 * @cpu_addr will be one of 3 things depending on how it was allocated:
+	 * @cpu_addr will be one of 4 things depending on how it was allocated:
+	 * - A remapped array of pages for contiguous allocations.
 	 * - A remapped array of pages from iommu_dma_alloc(), for all
 	 *   non-atomic allocations.
 	 * - A non-cacheable alias from the atomic pool, for atomic
@@ -644,6 +671,12 @@ static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 	if (__in_atomic_pool(cpu_addr, size)) {
 		iommu_dma_unmap_page(dev, handle, iosize, 0, 0);
 		__free_from_pool(cpu_addr, size);
+	} else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+		struct page *page = vmalloc_to_page(cpu_addr);
+
+		iommu_dma_unmap_page(dev, handle, iosize, 0, attrs);
+		dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
+		dma_common_free_remap(cpu_addr, size, VM_USERMAP);
 	} else if (is_vmalloc_addr(cpu_addr)) {
 		struct vm_struct *area = find_vm_area(cpu_addr);
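
As mentioned in the v4 changelog above, dma_release_from_contiguous()
does not warn when handed a page that was not allocated from the CMA
area; it merely returns false. A caller that wants to catch such bugs
during development can check the return value explicitly; a minimal
sketch, where the wrapper name and warning text are illustrative only:

	/* Illustrative only: make a silently-refused CMA release visible. */
	static void checked_cma_release(struct device *dev, struct page *page,
					size_t size)
	{
		if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
			WARN(1, "page %p was not allocated from the CMA area\n",
			     page);
	}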