From patchwork Sat Dec 8 17:36:53 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719557
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: Robin Murphy, Vineet Gupta, "Matwey V. Kornilov", Laurent Pinchart, linux-snps-arc@lists.infradead.org, Ezequiel Garcia, linux-media@vger.kernel.org, linux-arm-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, sparclinux@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linux-mips@vger.kernel.org
Subject: [PATCH 01/10] dma-direct: provide a generic implementation of DMA_ATTR_NON_CONSISTENT
Date: Sat, 8 Dec 2018 09:36:53 -0800
Message-Id: <20181208173702.15158-2-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
List-ID: linux-media@vger.kernel.org

If DMA_ATTR_NON_CONSISTENT is passed in the flags we can always just use the dma_direct_alloc_pages implementation, given that the callers will take care of any cache maintenance on ownership transfers between the CPU and the device.
Signed-off-by: Christoph Hellwig
---
 arch/arc/mm/dma.c              | 21 ++++++--------------
 arch/mips/mm/dma-noncoherent.c |  5 ++---
 arch/openrisc/kernel/dma.c     | 23 +++++++++-------------
 arch/parisc/kernel/pci-dma.c   | 35 ++++++++++++----------------------
 kernel/dma/direct.c            |  4 ++--
 5 files changed, 31 insertions(+), 57 deletions(-)

diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index db203ff69ccf..135759d4ea8c 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -24,7 +24,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	struct page *page;
 	phys_addr_t paddr;
 	void *kvaddr;
-	bool need_coh = !(attrs & DMA_ATTR_NON_CONSISTENT);
 
 	/*
 	 * __GFP_HIGHMEM flag is cleared by upper layer functions
@@ -46,14 +45,10 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	 * A coherent buffer needs MMU mapping to enforce non-cachability.
 	 * kvaddr is kernel Virtual address (0x7000_0000 based).
 	 */
-	if (need_coh) {
-		kvaddr = ioremap_nocache(paddr, size);
-		if (kvaddr == NULL) {
-			__free_pages(page, order);
-			return NULL;
-		}
-	} else {
-		kvaddr = (void *)(u32)paddr;
+	kvaddr = ioremap_nocache(paddr, size);
+	if (kvaddr == NULL) {
+		__free_pages(page, order);
+		return NULL;
 	}
 
 	/*
@@ -66,9 +61,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	 * Currently flush_cache_vmap nukes the L1 cache completely which
 	 * will be optimized as a separate commit
 	 */
-	if (need_coh)
-		dma_cache_wback_inv(paddr, size);
-
+	dma_cache_wback_inv(paddr, size);
 	return kvaddr;
 }
 
@@ -78,9 +71,7 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 	phys_addr_t paddr = dma_handle;
 	struct page *page = virt_to_page(paddr);
 
-	if (!(attrs & DMA_ATTR_NON_CONSISTENT))
-		iounmap((void __force __iomem *)vaddr);
-
+	iounmap((void __force __iomem *)vaddr);
 	__free_pages(page, get_order(size));
 }
 
diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index cb38461391cb..7576cd7193ba 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -50,7 +50,7 @@ void *arch_dma_alloc(struct device *dev, size_t size,
 	void *ret;
 
 	ret = dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
-	if (ret && !(attrs & DMA_ATTR_NON_CONSISTENT)) {
+	if (ret) {
 		dma_cache_wback_inv((unsigned long) ret, size);
 		ret = (void *)UNCAC_ADDR(ret);
 	}
@@ -61,8 +61,7 @@ void *arch_dma_alloc(struct device *dev, size_t size,
 void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_addr, unsigned long attrs)
 {
-	if (!(attrs & DMA_ATTR_NON_CONSISTENT))
-		cpu_addr = (void *)CAC_ADDR((unsigned long)cpu_addr);
+	cpu_addr = (void *)CAC_ADDR((unsigned long)cpu_addr);
 	dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs);
 }
 
diff --git a/arch/openrisc/kernel/dma.c b/arch/openrisc/kernel/dma.c
index 159336adfa2f..483adbb000bb 100644
--- a/arch/openrisc/kernel/dma.c
+++ b/arch/openrisc/kernel/dma.c
@@ -98,15 +98,13 @@ arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 
 	va = (unsigned long)page;
 
-	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) {
-		/*
-		 * We need to iterate through the pages, clearing the dcache for
-		 * them and setting the cache-inhibit bit.
-		 */
-		if (walk_page_range(va, va + size, &walk)) {
-			free_pages_exact(page, size);
-			return NULL;
-		}
+	/*
+	 * We need to iterate through the pages, clearing the dcache for
+	 * them and setting the cache-inhibit bit.
+	 */
+	if (walk_page_range(va, va + size, &walk)) {
+		free_pages_exact(page, size);
+		return NULL;
 	}
 
 	return (void *)va;
@@ -122,11 +120,8 @@ arch_dma_free(struct device *dev, size_t size, void *vaddr,
 		.mm = &init_mm
 	};
 
-	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0) {
-		/* walk_page_range shouldn't be able to fail here */
-		WARN_ON(walk_page_range(va, va + size, &walk));
-	}
-
+	/* walk_page_range shouldn't be able to fail here */
+	WARN_ON(walk_page_range(va, va + size, &walk));
 	free_pages_exact(vaddr, size);
 }
 
diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
index 04c48f1ef3fb..6780449e3e8b 100644
--- a/arch/parisc/kernel/pci-dma.c
+++ b/arch/parisc/kernel/pci-dma.c
@@ -421,29 +421,18 @@ static void *pcxl_dma_alloc(struct device *dev, size_t size,
 	return (void *)vaddr;
 }
 
-static void *pcx_dma_alloc(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs)
+static inline bool cpu_supports_coherent_area(void)
 {
-	void *addr;
-
-	if ((attrs & DMA_ATTR_NON_CONSISTENT) == 0)
-		return NULL;
-
-	addr = (void *)__get_free_pages(flag, get_order(size));
-	if (addr)
-		*dma_handle = (dma_addr_t)virt_to_phys(addr);
-
-	return addr;
+	return boot_cpu_data.cpu_type == pcxl2 ||
+		boot_cpu_data.cpu_type == pcxl;
 }
 
 void *arch_dma_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
-
-	if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl)
+	if (cpu_supports_coherent_area())
 		return pcxl_dma_alloc(dev, size, dma_handle, gfp, attrs);
-	else
-		return pcx_dma_alloc(dev, size, dma_handle, gfp, attrs);
+	return NULL;
 }
 
 void arch_dma_free(struct device *dev, size_t size, void *vaddr,
@@ -451,14 +440,14 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
 {
 	int order = get_order(size);
 
-	if (boot_cpu_data.cpu_type == pcxl2 || boot_cpu_data.cpu_type == pcxl) {
-		size = 1 << (order + PAGE_SHIFT);
-		unmap_uncached_pages((unsigned long)vaddr, size);
-		pcxl_free_range((unsigned long)vaddr, size);
+	if (WARN_ON_ONCE(!cpu_supports_coherent_area()))
+		return;
 
-		vaddr = __va(dma_handle);
-	}
-	free_pages((unsigned long)vaddr, get_order(size));
+	size = 1 << (order + PAGE_SHIFT);
+	unmap_uncached_pages((unsigned long)vaddr, size);
+	pcxl_free_range((unsigned long)vaddr, size);
+
+	free_pages((unsigned long)__va(dma_handle), get_order(size));
 }
 
 void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 308f88a750c8..4efe1188fd2e 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -206,7 +206,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp, unsigned long attrs)
 {
-	if (!dev_is_dma_coherent(dev))
+	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 	return dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
 }
@@ -214,7 +214,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
-	if (!dev_is_dma_coherent(dev))
+	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_NON_CONSISTENT))
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 	else
 		dma_direct_free_pages(dev, size, cpu_addr, dma_addr, attrs);

From patchwork Sat Dec 8 17:36:54 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719555
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: Robin Murphy, Vineet Gupta, "Matwey V. Kornilov", Laurent Pinchart, linux-snps-arc@lists.infradead.org, Ezequiel Garcia, linux-media@vger.kernel.org, linux-arm-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, sparclinux@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linux-mips@vger.kernel.org
Subject: [PATCH 02/10] arm64/iommu: don't remap contiguous allocations for coherent devices
Date: Sat, 8 Dec 2018 09:36:54 -0800
Message-Id: <20181208173702.15158-3-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
List-ID: linux-media@vger.kernel.org

There is no need to have an additional kernel mapping for a contiguous allocation if the device already is DMA coherent, so skip it.
Signed-off-by: Christoph Hellwig
---
 arch/arm64/mm/dma-mapping.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 4c0f498069e8..d39b60113539 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -255,13 +255,18 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 					size >> PAGE_SHIFT);
 			return NULL;
 		}
+
+		if (coherent) {
+			memset(addr, 0, size);
+			return addr;
+		}
+
 		addr = dma_common_contiguous_remap(page, size, VM_USERMAP,
 						   prot,
 						   __builtin_return_address(0));
 		if (addr) {
 			memset(addr, 0, size);
-			if (!coherent)
-				__dma_flush_area(page_to_virt(page), iosize);
+			__dma_flush_area(page_to_virt(page), iosize);
 		} else {
 			iommu_dma_unmap_page(dev, *handle, iosize, 0, attrs);
 			dma_release_from_contiguous(dev, page,
@@ -309,7 +314,9 @@ static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 		iommu_dma_unmap_page(dev, handle, iosize, 0, attrs);
 		dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
-		dma_common_free_remap(cpu_addr, size, VM_USERMAP);
+
+		if (!dev_is_dma_coherent(dev))
+			dma_common_free_remap(cpu_addr, size, VM_USERMAP);
 	} else if (is_vmalloc_addr(cpu_addr)){
 		struct vm_struct *area = find_vm_area(cpu_addr);
@@ -336,11 +343,12 @@ static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 		return ret;
 
 	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
-		/*
-		 * DMA_ATTR_FORCE_CONTIGUOUS allocations are always remapped,
-		 * hence in the vmalloc space.
-		 */
-		unsigned long pfn = vmalloc_to_pfn(cpu_addr);
+		unsigned long pfn;
+
+		if (dev_is_dma_coherent(dev))
+			pfn = virt_to_pfn(cpu_addr);
+		else
+			pfn = vmalloc_to_pfn(cpu_addr);
 		return __swiotlb_mmap_pfn(vma, pfn, size);
 	}
 
@@ -359,11 +367,12 @@ static int __iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
 	struct vm_struct *area = find_vm_area(cpu_addr);
 
 	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
-		/*
-		 * DMA_ATTR_FORCE_CONTIGUOUS allocations are always remapped,
-		 * hence in the vmalloc space.
-		 */
-		struct page *page = vmalloc_to_page(cpu_addr);
+		struct page *page;
+
+		if (dev_is_dma_coherent(dev))
+			page = virt_to_page(cpu_addr);
+		else
+			page = vmalloc_to_page(cpu_addr);
 		return __swiotlb_get_sgtable_page(sgt, page, size);
 	}

From patchwork Sat Dec 8 17:36:55 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719573
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: Robin Murphy, Vineet Gupta, "Matwey V. Kornilov", Laurent Pinchart, linux-snps-arc@lists.infradead.org, Ezequiel Garcia, linux-media@vger.kernel.org, linux-arm-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, sparclinux@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linux-mips@vger.kernel.org
Subject: [PATCH 03/10] arm64/iommu: implement support for DMA_ATTR_NON_CONSISTENT
Date: Sat, 8 Dec 2018 09:36:55 -0800
Message-Id: <20181208173702.15158-4-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
List-ID: linux-media@vger.kernel.org

DMA_ATTR_NON_CONSISTENT forces contiguous allocations as we don't want to remap, and is otherwise forced down the same path as if we were always on a coherent device. No new code required except for a few conditionals.
Signed-off-by: Christoph Hellwig
---
 arch/arm64/mm/dma-mapping.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index d39b60113539..0010688ca30e 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -240,7 +240,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 			dma_free_from_pool(addr, size);
 			addr = NULL;
 		}
-	} else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+	} else if (attrs & (DMA_ATTR_FORCE_CONTIGUOUS |
+			DMA_ATTR_NON_CONSISTENT)) {
 		pgprot_t prot = arch_dma_mmap_pgprot(dev, PAGE_KERNEL, attrs);
 		struct page *page;
 
@@ -256,7 +257,7 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 			return NULL;
 		}
 
-		if (coherent) {
+		if (coherent || (attrs & DMA_ATTR_NON_CONSISTENT)) {
 			memset(addr, 0, size);
 			return addr;
 		}
@@ -309,7 +310,8 @@ static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 	if (dma_in_atomic_pool(cpu_addr, size)) {
 		iommu_dma_unmap_page(dev, handle, iosize, 0, 0);
 		dma_free_from_pool(cpu_addr, size);
-	} else if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+	} else if (attrs & (DMA_ATTR_FORCE_CONTIGUOUS |
+			DMA_ATTR_NON_CONSISTENT)) {
 		struct page *page = vmalloc_to_page(cpu_addr);
 
 		iommu_dma_unmap_page(dev, handle, iosize, 0, attrs);
@@ -342,10 +344,11 @@ static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
 		return ret;
 
-	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+	if (attrs & (DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_NON_CONSISTENT)) {
 		unsigned long pfn;
 
-		if (dev_is_dma_coherent(dev))
+		if (dev_is_dma_coherent(dev) ||
+		    (attrs & DMA_ATTR_NON_CONSISTENT))
 			pfn = virt_to_pfn(cpu_addr);
 		else
 			pfn = vmalloc_to_pfn(cpu_addr);
@@ -366,10 +369,11 @@ static int __iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	struct vm_struct *area = find_vm_area(cpu_addr);
 
-	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+	if (attrs & (DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_NON_CONSISTENT)) {
 		struct page *page;
 
-		if (dev_is_dma_coherent(dev))
+		if (dev_is_dma_coherent(dev) ||
+		    (attrs & DMA_ATTR_NON_CONSISTENT))
 			page = virt_to_page(cpu_addr);
 		else
 			page = vmalloc_to_page(cpu_addr);

From patchwork Sat Dec 8 17:36:56 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719565
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: Robin Murphy, Vineet Gupta, "Matwey V. Kornilov", Laurent Pinchart, linux-snps-arc@lists.infradead.org, Ezequiel Garcia, linux-media@vger.kernel.org, linux-arm-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, sparclinux@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linux-mips@vger.kernel.org
Subject: [PATCH 04/10] arm: implement DMA_ATTR_NON_CONSISTENT
Date: Sat, 8 Dec 2018 09:36:56 -0800
Message-Id: <20181208173702.15158-5-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
List-ID: linux-media@vger.kernel.org

For the iommu ops we can just use the implementation for DMA coherent devices.
For the regular ops we need mix and match a bit so that we either use the CMA allocator without remapping, but with a special error handling case for highmem pages, or the simple allocator. Signed-off-by: Christoph Hellwig --- arch/arm/mm/dma-mapping.c | 49 ++++++++++++++++++++++++++++----------- 1 file changed, 35 insertions(+), 14 deletions(-) diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 2cfb17bad1e6..b3b66b41c450 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -49,6 +49,7 @@ struct arm_dma_alloc_args { const void *caller; bool want_vaddr; int coherent_flag; + bool nonconsistent_flag; }; struct arm_dma_free_args { @@ -57,6 +58,7 @@ struct arm_dma_free_args { void *cpu_addr; struct page *page; bool want_vaddr; + bool nonconsistent_flag; }; #define NORMAL 0 @@ -348,7 +350,8 @@ static void __dma_free_buffer(struct page *page, size_t size) static void *__alloc_from_contiguous(struct device *dev, size_t size, pgprot_t prot, struct page **ret_page, const void *caller, bool want_vaddr, - int coherent_flag, gfp_t gfp); + int coherent_flag, bool nonconsistent_flag, + gfp_t gfp); static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp, pgprot_t prot, struct page **ret_page, @@ -405,7 +408,7 @@ static int __init atomic_pool_init(void) if (dev_get_cma_area(NULL)) ptr = __alloc_from_contiguous(NULL, atomic_pool_size, prot, &page, atomic_pool_init, true, NORMAL, - GFP_KERNEL); + false, GFP_KERNEL); else ptr = __alloc_remap_buffer(NULL, atomic_pool_size, gfp, prot, &page, atomic_pool_init, true); @@ -579,7 +582,8 @@ static int __free_from_pool(void *start, size_t size) static void *__alloc_from_contiguous(struct device *dev, size_t size, pgprot_t prot, struct page **ret_page, const void *caller, bool want_vaddr, - int coherent_flag, gfp_t gfp) + int coherent_flag, bool nonconsistent_flag, + gfp_t gfp) { unsigned long order = get_order(size); size_t count = size >> PAGE_SHIFT; @@ -595,12 +599,16 @@ static 
void *__alloc_from_contiguous(struct device *dev, size_t size, if (!want_vaddr) goto out; + if (nonconsistent_flag) { + if (PageHighMem(page)) + goto fail; + goto out; + } + if (PageHighMem(page)) { ptr = __dma_alloc_remap(page, size, GFP_KERNEL, prot, caller); - if (!ptr) { - dma_release_from_contiguous(dev, page, count); - return NULL; - } + if (!ptr) + goto fail; } else { __dma_remap(page, size, prot); ptr = page_address(page); @@ -609,12 +617,15 @@ static void *__alloc_from_contiguous(struct device *dev, size_t size, out: *ret_page = page; return ptr; + fail: + dma_release_from_contiguous(dev, page, count); + return NULL; } static void __free_from_contiguous(struct device *dev, struct page *page, - void *cpu_addr, size_t size, bool want_vaddr) + void *cpu_addr, size_t size, bool remapped) { - if (want_vaddr) { + if (remapped) { if (PageHighMem(page)) __dma_free_remap(cpu_addr, size); else @@ -635,7 +646,11 @@ static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp, struct page **ret_page) { struct page *page; - /* __alloc_simple_buffer is only called when the device is coherent */ + /* + * __alloc_simple_buffer is only called when the device is coherent, + * or if the caller explicitly asked for an allocation that is not + * consistent. 
+ */ page = __dma_alloc_buffer(dev, size, gfp, COHERENT); if (!page) return NULL; @@ -667,13 +682,15 @@ static void *cma_allocator_alloc(struct arm_dma_alloc_args *args, return __alloc_from_contiguous(args->dev, args->size, args->prot, ret_page, args->caller, args->want_vaddr, args->coherent_flag, + args->nonconsistent_flag, args->gfp); } static void cma_allocator_free(struct arm_dma_free_args *args) { __free_from_contiguous(args->dev, args->page, args->cpu_addr, - args->size, args->want_vaddr); + args->size, + args->want_vaddr || args->nonconsistent_flag); } static struct arm_dma_allocator cma_allocator = { @@ -735,6 +752,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, .caller = caller, .want_vaddr = ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) == 0), .coherent_flag = is_coherent ? COHERENT : NORMAL, + .nonconsistent_flag = (attrs & DMA_ATTR_NON_CONSISTENT), }; #ifdef CONFIG_DMA_API_DEBUG @@ -773,7 +791,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, if (cma) buf->allocator = &cma_allocator; - else if (is_coherent) + else if (is_coherent || (attrs & DMA_ATTR_NON_CONSISTENT)) buf->allocator = &simple_allocator; else if (allowblock) buf->allocator = &remap_allocator; @@ -874,6 +892,7 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr, .cpu_addr = cpu_addr, .page = page, .want_vaddr = ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) == 0), + .nonconsistent_flag = (attrs & DMA_ATTR_NON_CONSISTENT), }; buf = arm_dma_buffer_find(cpu_addr); @@ -1562,7 +1581,8 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size, static void *arm_iommu_alloc_attrs(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp, unsigned long attrs) { - return __arm_iommu_alloc_attrs(dev, size, handle, gfp, attrs, NORMAL); + return __arm_iommu_alloc_attrs(dev, size, handle, gfp, attrs, + (attrs & DMA_ATTR_NON_CONSISTENT) ? 
COHERENT : NORMAL); } static void *arm_coherent_iommu_alloc_attrs(struct device *dev, size_t size, @@ -1650,7 +1670,8 @@ void __arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr, void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr, dma_addr_t handle, unsigned long attrs) { - __arm_iommu_free_attrs(dev, size, cpu_addr, handle, attrs, NORMAL); + __arm_iommu_free_attrs(dev, size, cpu_addr, handle, attrs, + (attrs & DMA_ATTR_NON_CONSISTENT) ? COHERENT : NORMAL); } void arm_coherent_iommu_free_attrs(struct device *dev, size_t size, From patchwork Sat Dec 8 17:36:57 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 10719571 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9B53B112E for ; Sat, 8 Dec 2018 17:37:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8030D2AB65 for ; Sat, 8 Dec 2018 17:37:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 73E682AE27; Sat, 8 Dec 2018 17:37:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0905A2AB65 for ; Sat, 8 Dec 2018 17:37:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726280AbeLHRhX (ORCPT ); Sat, 8 Dec 2018 12:37:23 -0500 Received: from bombadil.infradead.org ([198.137.202.133]:42962 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org 
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Subject: [PATCH 05/10] sparc64/iommu: move code around a bit
Date: Sat, 8 Dec 2018 09:36:57 -0800
Message-Id: <20181208173702.15158-6-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
See http://www.infradead.org/rpr.html Sender: linux-media-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Move the alloc / free routines down the file so that we can easily use the map / unmap helpers to implement non-consistent allocations. Also drop the _coherent postfix to match the method name. Signed-off-by: Christoph Hellwig Acked-by: David S. Miller --- arch/sparc/kernel/iommu.c | 135 +++++++++++++++++++------------------- 1 file changed, 67 insertions(+), 68 deletions(-) diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c index 0626bae5e3da..4bf0497e0704 100644 --- a/arch/sparc/kernel/iommu.c +++ b/arch/sparc/kernel/iommu.c @@ -195,72 +195,6 @@ static inline void iommu_free_ctx(struct iommu *iommu, int ctx) } } -static void *dma_4u_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_addrp, gfp_t gfp, - unsigned long attrs) -{ - unsigned long order, first_page; - struct iommu *iommu; - struct page *page; - int npages, nid; - iopte_t *iopte; - void *ret; - - size = IO_PAGE_ALIGN(size); - order = get_order(size); - if (order >= 10) - return NULL; - - nid = dev->archdata.numa_node; - page = alloc_pages_node(nid, gfp, order); - if (unlikely(!page)) - return NULL; - - first_page = (unsigned long) page_address(page); - memset((char *)first_page, 0, PAGE_SIZE << order); - - iommu = dev->archdata.iommu; - - iopte = alloc_npages(dev, iommu, size >> IO_PAGE_SHIFT); - - if (unlikely(iopte == NULL)) { - free_pages(first_page, order); - return NULL; - } - - *dma_addrp = (iommu->tbl.table_map_base + - ((iopte - iommu->page_table) << IO_PAGE_SHIFT)); - ret = (void *) first_page; - npages = size >> IO_PAGE_SHIFT; - first_page = __pa(first_page); - while (npages--) { - iopte_val(*iopte) = (IOPTE_CONSISTENT(0UL) | - IOPTE_WRITE | - (first_page & IOPTE_PAGE)); - iopte++; - first_page += IO_PAGE_SIZE; - } - - return ret; -} - -static void dma_4u_free_coherent(struct 
device *dev, size_t size, - void *cpu, dma_addr_t dvma, - unsigned long attrs) -{ - struct iommu *iommu; - unsigned long order, npages; - - npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT; - iommu = dev->archdata.iommu; - - iommu_tbl_range_free(&iommu->tbl, dvma, npages, IOMMU_ERROR_CODE); - - order = get_order(size); - if (order < 10) - free_pages((unsigned long)cpu, order); -} - static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page, unsigned long offset, size_t sz, enum dma_data_direction direction, @@ -742,6 +676,71 @@ static void dma_4u_sync_sg_for_cpu(struct device *dev, spin_unlock_irqrestore(&iommu->lock, flags); } +static void *dma_4u_alloc(struct device *dev, size_t size, + dma_addr_t *dma_addrp, gfp_t gfp, unsigned long attrs) +{ + unsigned long order, first_page; + struct iommu *iommu; + struct page *page; + int npages, nid; + iopte_t *iopte; + void *ret; + + size = IO_PAGE_ALIGN(size); + order = get_order(size); + if (order >= 10) + return NULL; + + nid = dev->archdata.numa_node; + page = alloc_pages_node(nid, gfp, order); + if (unlikely(!page)) + return NULL; + + first_page = (unsigned long) page_address(page); + memset((char *)first_page, 0, PAGE_SIZE << order); + + iommu = dev->archdata.iommu; + + iopte = alloc_npages(dev, iommu, size >> IO_PAGE_SHIFT); + + if (unlikely(iopte == NULL)) { + free_pages(first_page, order); + return NULL; + } + + *dma_addrp = (iommu->tbl.table_map_base + + ((iopte - iommu->page_table) << IO_PAGE_SHIFT)); + ret = (void *) first_page; + npages = size >> IO_PAGE_SHIFT; + first_page = __pa(first_page); + while (npages--) { + iopte_val(*iopte) = (IOPTE_CONSISTENT(0UL) | + IOPTE_WRITE | + (first_page & IOPTE_PAGE)); + iopte++; + first_page += IO_PAGE_SIZE; + } + + return ret; +} + +static void dma_4u_free(struct device *dev, size_t size, void *cpu, + dma_addr_t dvma, unsigned long attrs) +{ + struct iommu *iommu; + unsigned long order, npages; + + npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT; + iommu = 
dev->archdata.iommu; + + iommu_tbl_range_free(&iommu->tbl, dvma, npages, IOMMU_ERROR_CODE); + + order = get_order(size); + if (order < 10) + free_pages((unsigned long)cpu, order); +} + + static int dma_4u_supported(struct device *dev, u64 device_mask) { struct iommu *iommu = dev->archdata.iommu; @@ -758,8 +757,8 @@ static int dma_4u_supported(struct device *dev, u64 device_mask) } static const struct dma_map_ops sun4u_dma_ops = { - .alloc = dma_4u_alloc_coherent, - .free = dma_4u_free_coherent, + .alloc = dma_4u_alloc, + .free = dma_4u_free, .map_page = dma_4u_map_page, .unmap_page = dma_4u_unmap_page, .map_sg = dma_4u_map_sg,

From patchwork Sat Dec 8 17:36:58 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719587
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Subject: [PATCH 06/10] sparc64/iommu: implement DMA_ATTR_NON_CONSISTENT
Date: Sat, 8 Dec 2018 09:36:58 -0800
Message-Id: <20181208173702.15158-7-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
See http://www.infradead.org/rpr.html Sender: linux-media-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Just allocate the memory and use map_page to map the memory. Signed-off-by: Christoph Hellwig Acked-by: David S. Miller --- arch/sparc/kernel/iommu.c | 33 +++++++++++++++++++++++---------- 1 file changed, 23 insertions(+), 10 deletions(-) diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c index 4bf0497e0704..4ce24c9dc691 100644 --- a/arch/sparc/kernel/iommu.c +++ b/arch/sparc/kernel/iommu.c @@ -699,14 +699,19 @@ static void *dma_4u_alloc(struct device *dev, size_t size, first_page = (unsigned long) page_address(page); memset((char *)first_page, 0, PAGE_SIZE << order); + if (attrs & DMA_ATTR_NON_CONSISTENT) { + *dma_addrp = dma_4u_map_page(dev, page, 0, size, + DMA_BIDIRECTIONAL, 0); + if (*dma_addrp == DMA_MAPPING_ERROR) + goto out_free_page; + return page_address(page); + } + iommu = dev->archdata.iommu; iopte = alloc_npages(dev, iommu, size >> IO_PAGE_SHIFT); - - if (unlikely(iopte == NULL)) { - free_pages(first_page, order); - return NULL; - } + if (unlikely(iopte == NULL)) + goto out_free_page; *dma_addrp = (iommu->tbl.table_map_base + ((iopte - iommu->page_table) << IO_PAGE_SHIFT)); @@ -722,18 +727,26 @@ static void *dma_4u_alloc(struct device *dev, size_t size, } return ret; + +out_free_page: + free_pages(first_page, order); + return NULL; } static void dma_4u_free(struct device *dev, size_t size, void *cpu, dma_addr_t dvma, unsigned long attrs) { - struct iommu *iommu; - unsigned long order, npages; + unsigned long order; - npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT; - iommu = dev->archdata.iommu; + if (attrs & DMA_ATTR_NON_CONSISTENT) { + dma_4u_unmap_page(dev, dvma, size, DMA_BIDIRECTIONAL, 0); + } else { + struct iommu *iommu = dev->archdata.iommu; - iommu_tbl_range_free(&iommu->tbl, dvma, npages, IOMMU_ERROR_CODE); + iommu_tbl_range_free(&iommu->tbl, 
dvma, + IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT, + IOMMU_ERROR_CODE); + } order = get_order(size); if (order < 10)

From patchwork Sat Dec 8 17:36:59 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719583
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Subject: [PATCH 07/10] sparc64/pci_sun4v: move code around a bit
Date: Sat, 8 Dec 2018 09:36:59 -0800
Message-Id: <20181208173702.15158-8-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>

Move the alloc / free routines down the file so that we can easily use the map / unmap helpers to implement non-consistent allocations. Also drop the _coherent postfix to match the method name. Signed-off-by: Christoph Hellwig Acked-by: David S.
Miller --- arch/sparc/kernel/pci_sun4v.c | 229 +++++++++++++++++----------------- 1 file changed, 114 insertions(+), 115 deletions(-) diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c index fa0e42b4cbfb..b95c70136559 100644 --- a/arch/sparc/kernel/pci_sun4v.c +++ b/arch/sparc/kernel/pci_sun4v.c @@ -171,87 +171,6 @@ static inline long iommu_batch_end(u64 mask) return iommu_batch_flush(p, mask); } -static void *dma_4v_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_addrp, gfp_t gfp, - unsigned long attrs) -{ - u64 mask; - unsigned long flags, order, first_page, npages, n; - unsigned long prot = 0; - struct iommu *iommu; - struct atu *atu; - struct iommu_map_table *tbl; - struct page *page; - void *ret; - long entry; - int nid; - - size = IO_PAGE_ALIGN(size); - order = get_order(size); - if (unlikely(order >= MAX_ORDER)) - return NULL; - - npages = size >> IO_PAGE_SHIFT; - - if (attrs & DMA_ATTR_WEAK_ORDERING) - prot = HV_PCI_MAP_ATTR_RELAXED_ORDER; - - nid = dev->archdata.numa_node; - page = alloc_pages_node(nid, gfp, order); - if (unlikely(!page)) - return NULL; - - first_page = (unsigned long) page_address(page); - memset((char *)first_page, 0, PAGE_SIZE << order); - - iommu = dev->archdata.iommu; - atu = iommu->atu; - - mask = dev->coherent_dma_mask; - if (mask <= DMA_BIT_MASK(32)) - tbl = &iommu->tbl; - else - tbl = &atu->tbl; - - entry = iommu_tbl_range_alloc(dev, tbl, npages, NULL, - (unsigned long)(-1), 0); - - if (unlikely(entry == IOMMU_ERROR_CODE)) - goto range_alloc_fail; - - *dma_addrp = (tbl->table_map_base + (entry << IO_PAGE_SHIFT)); - ret = (void *) first_page; - first_page = __pa(first_page); - - local_irq_save(flags); - - iommu_batch_start(dev, - (HV_PCI_MAP_ATTR_READ | prot | - HV_PCI_MAP_ATTR_WRITE), - entry); - - for (n = 0; n < npages; n++) { - long err = iommu_batch_add(first_page + (n * PAGE_SIZE), mask); - if (unlikely(err < 0L)) - goto iommu_map_fail; - } - - if (unlikely(iommu_batch_end(mask) < 
0L)) - goto iommu_map_fail; - - local_irq_restore(flags); - - return ret; - -iommu_map_fail: - local_irq_restore(flags); - iommu_tbl_range_free(tbl, *dma_addrp, npages, IOMMU_ERROR_CODE); - -range_alloc_fail: - free_pages(first_page, order); - return NULL; -} - unsigned long dma_4v_iotsb_bind(unsigned long devhandle, unsigned long iotsb_num, struct pci_bus *bus_dev) @@ -316,38 +235,6 @@ static void dma_4v_iommu_demap(struct device *dev, unsigned long devhandle, local_irq_restore(flags); } -static void dma_4v_free_coherent(struct device *dev, size_t size, void *cpu, - dma_addr_t dvma, unsigned long attrs) -{ - struct pci_pbm_info *pbm; - struct iommu *iommu; - struct atu *atu; - struct iommu_map_table *tbl; - unsigned long order, npages, entry; - unsigned long iotsb_num; - u32 devhandle; - - npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT; - iommu = dev->archdata.iommu; - pbm = dev->archdata.host_controller; - atu = iommu->atu; - devhandle = pbm->devhandle; - - if (dvma <= DMA_BIT_MASK(32)) { - tbl = &iommu->tbl; - iotsb_num = 0; /* we don't care for legacy iommu */ - } else { - tbl = &atu->tbl; - iotsb_num = atu->iotsb->iotsb_num; - } - entry = ((dvma - tbl->table_map_base) >> IO_PAGE_SHIFT); - dma_4v_iommu_demap(dev, devhandle, dvma, iotsb_num, entry, npages); - iommu_tbl_range_free(tbl, dvma, npages, IOMMU_ERROR_CODE); - order = get_order(size); - if (order < 10) - free_pages((unsigned long)cpu, order); -} - static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page, unsigned long offset, size_t sz, enum dma_data_direction direction, @@ -671,6 +558,118 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist, local_irq_restore(flags); } +static void *dma_4v_alloc(struct device *dev, size_t size, + dma_addr_t *dma_addrp, gfp_t gfp, unsigned long attrs) +{ + u64 mask; + unsigned long flags, order, first_page, npages, n; + unsigned long prot = 0; + struct iommu *iommu; + struct atu *atu; + struct iommu_map_table *tbl; + struct page 
*page; + void *ret; + long entry; + int nid; + + size = IO_PAGE_ALIGN(size); + order = get_order(size); + if (unlikely(order >= MAX_ORDER)) + return NULL; + + npages = size >> IO_PAGE_SHIFT; + + if (attrs & DMA_ATTR_WEAK_ORDERING) + prot = HV_PCI_MAP_ATTR_RELAXED_ORDER; + + nid = dev->archdata.numa_node; + page = alloc_pages_node(nid, gfp, order); + if (unlikely(!page)) + return NULL; + + first_page = (unsigned long) page_address(page); + memset((char *)first_page, 0, PAGE_SIZE << order); + + iommu = dev->archdata.iommu; + atu = iommu->atu; + + mask = dev->coherent_dma_mask; + if (mask <= DMA_BIT_MASK(32)) + tbl = &iommu->tbl; + else + tbl = &atu->tbl; + + entry = iommu_tbl_range_alloc(dev, tbl, npages, NULL, + (unsigned long)(-1), 0); + + if (unlikely(entry == IOMMU_ERROR_CODE)) + goto range_alloc_fail; + + *dma_addrp = (tbl->table_map_base + (entry << IO_PAGE_SHIFT)); + ret = (void *) first_page; + first_page = __pa(first_page); + + local_irq_save(flags); + + iommu_batch_start(dev, + (HV_PCI_MAP_ATTR_READ | prot | + HV_PCI_MAP_ATTR_WRITE), + entry); + + for (n = 0; n < npages; n++) { + long err = iommu_batch_add(first_page + (n * PAGE_SIZE), mask); + if (unlikely(err < 0L)) + goto iommu_map_fail; + } + + if (unlikely(iommu_batch_end(mask) < 0L)) + goto iommu_map_fail; + + local_irq_restore(flags); + + return ret; + +iommu_map_fail: + local_irq_restore(flags); + iommu_tbl_range_free(tbl, *dma_addrp, npages, IOMMU_ERROR_CODE); + +range_alloc_fail: + free_pages(first_page, order); + return NULL; +} + +static void dma_4v_free(struct device *dev, size_t size, void *cpu, + dma_addr_t dvma, unsigned long attrs) +{ + struct pci_pbm_info *pbm; + struct iommu *iommu; + struct atu *atu; + struct iommu_map_table *tbl; + unsigned long order, npages, entry; + unsigned long iotsb_num; + u32 devhandle; + + npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT; + iommu = dev->archdata.iommu; + pbm = dev->archdata.host_controller; + atu = iommu->atu; + devhandle = pbm->devhandle; + + if 
(dvma <= DMA_BIT_MASK(32)) { + tbl = &iommu->tbl; + iotsb_num = 0; /* we don't care for legacy iommu */ + } else { + tbl = &atu->tbl; + iotsb_num = atu->iotsb->iotsb_num; + } + entry = ((dvma - tbl->table_map_base) >> IO_PAGE_SHIFT); + dma_4v_iommu_demap(dev, devhandle, dvma, iotsb_num, entry, npages); + iommu_tbl_range_free(tbl, dvma, npages, IOMMU_ERROR_CODE); + order = get_order(size); + if (order < 10) + free_pages((unsigned long)cpu, order); +} + static int dma_4v_supported(struct device *dev, u64 device_mask) { struct iommu *iommu = dev->archdata.iommu; @@ -689,8 +688,8 @@ static int dma_4v_supported(struct device *dev, u64 device_mask) } static const struct dma_map_ops sun4v_dma_ops = { - .alloc = dma_4v_alloc_coherent, - .free = dma_4v_free_coherent, + .alloc = dma_4v_alloc, + .free = dma_4v_free, .map_page = dma_4v_map_page, .unmap_page = dma_4v_unmap_page, .map_sg = dma_4v_map_sg,

From patchwork Sat Dec 8 17:37:00 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719623
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: Robin Murphy , Vineet Gupta , "Matwey V.
Kornilov" , Laurent Pinchart , linux-snps-arc@lists.infradead.org, Ezequiel Garcia , linux-media@vger.kernel.org, linux-arm-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, sparclinux@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linux-mips@vger.kernel.org Subject: [PATCH 08/10] sparc64/pci_sun4v: implement DMA_ATTR_NON_CONSISTENT Date: Sat, 8 Dec 2018 09:37:00 -0800 Message-Id: <20181208173702.15158-9-hch@lst.de> X-Mailer: git-send-email 2.19.2 In-Reply-To: <20181208173702.15158-1-hch@lst.de> References: <20181208173702.15158-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Sender: linux-media-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Just allocate the memory and use map_page to map the memory. Signed-off-by: Christoph Hellwig Acked-by: David S. Miller --- arch/sparc/kernel/pci_sun4v.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c index b95c70136559..24a76ecf2986 100644 --- a/arch/sparc/kernel/pci_sun4v.c +++ b/arch/sparc/kernel/pci_sun4v.c @@ -590,6 +590,14 @@ static void *dma_4v_alloc(struct device *dev, size_t size, first_page = (unsigned long) page_address(page); memset((char *)first_page, 0, PAGE_SIZE << order); + if (attrs & DMA_ATTR_NON_CONSISTENT) { + *dma_addrp = dma_4v_map_page(dev, page, 0, size, + DMA_BIDIRECTIONAL, 0); + if (*dma_addrp == DMA_MAPPING_ERROR) + goto range_alloc_fail; + return page_address(page); + } + iommu = dev->archdata.iommu; atu = iommu->atu; @@ -649,6 +657,11 @@ static void dma_4v_free(struct device *dev, size_t size, void *cpu, unsigned long iotsb_num; u32 devhandle; + if (attrs & DMA_ATTR_NON_CONSISTENT) { + dma_4v_unmap_page(dev, dvma, size, DMA_BIDIRECTIONAL, 0); + goto free_pages; + } + npages = IO_PAGE_ALIGN(size) >> 
IO_PAGE_SHIFT; iommu = dev->archdata.iommu; pbm = dev->archdata.host_controller; @@ -665,6 +678,7 @@ static void dma_4v_free(struct device *dev, size_t size, void *cpu, entry = ((dvma - tbl->table_map_base) >> IO_PAGE_SHIFT); dma_4v_iommu_demap(dev, devhandle, dvma, iotsb_num, entry, npages); iommu_tbl_range_free(tbl, dvma, npages, IOMMU_ERROR_CODE); +free_pages: order = get_order(size); if (order < 10) free_pages((unsigned long)cpu, order);

From patchwork Sat Dec 8 17:37:01 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719581
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Subject: [PATCH 09/10] dma-mapping: skip declared coherent memory for DMA_ATTR_NON_CONSISTENT
Date: Sat, 8 Dec 2018 09:37:01 -0800
Message-Id: <20181208173702.15158-10-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>
Memory declared using dma_declare_coherent is ioremapped and thus not always suitable for our tightened DMA_ATTR_NON_CONSISTENT definition. Skip it, given that all the existing callers don't use DMA_ATTR_NON_CONSISTENT anyway. Signed-off-by: Christoph Hellwig --- include/linux/dma-mapping.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 7799c2b27849..8c81fa5d1f44 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -521,7 +521,8 @@ static inline void *dma_alloc_attrs(struct device *dev, size_t size, BUG_ON(!ops); WARN_ON_ONCE(dev && !dev->coherent_dma_mask); - if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr)) + if (!(attrs & DMA_ATTR_NON_CONSISTENT) && + dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr)) return cpu_addr; /* let the implementation decide on the zone to allocate from: */

From patchwork Sat Dec 8 17:37:02 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10719569
From: Christoph Hellwig
To: iommu@lists.linux-foundation.org
Cc: Robin Murphy, Vineet Gupta, "Matwey V. Kornilov", Laurent Pinchart,
    linux-snps-arc@lists.infradead.org, Ezequiel Garcia,
    linux-media@vger.kernel.org, linux-arm-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, sparclinux@vger.kernel.org,
    openrisc@lists.librecores.org, linux-parisc@vger.kernel.org,
    linux-mips@vger.kernel.org
Subject: [PATCH 10/10] Documentation: update the description for DMA_ATTR_NON_CONSISTENT
Date: Sat, 8 Dec 2018 09:37:02 -0800
Message-Id: <20181208173702.15158-11-hch@lst.de>
In-Reply-To: <20181208173702.15158-1-hch@lst.de>
References: <20181208173702.15158-1-hch@lst.de>

We got rid of the odd selective consistent-or-not behavior, and now want
the normal dma_sync_single_* functions to be used for strict ownership
transfers.  While dma_cache_sync hasn't been removed from the tree yet,
it should not be used in any new caller, so its documentation is dropped
here.

Signed-off-by: Christoph Hellwig
---
 Documentation/DMA-API.txt        | 30 ++++--------------------------
 Documentation/DMA-attributes.txt |  9 +++++----
 include/linux/dma-mapping.h     |  3 +++
 3 files changed, 12 insertions(+), 30 deletions(-)

diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index ac66ae2509a9..c81fe8a4aeec 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -518,20 +518,9 @@ API at all.
 	dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
 			gfp_t flag, unsigned long attrs)
 
-Identical to dma_alloc_coherent() except that when the
-DMA_ATTR_NON_CONSISTENT flags is passed in the attrs argument, the
-platform will choose to return either consistent or non-consistent memory
-as it sees fit.  By using this API, you are guaranteeing to the platform
-that you have all the correct and necessary sync points for this memory
-in the driver should it choose to return non-consistent memory.
-
-Note: where the platform can return consistent memory, it will
-guarantee that the sync points become nops.
-
-Warning: Handling non-consistent memory is a real pain.  You should
-only use this API if you positively know your driver will be
-required to work on one of the rare (usually non-PCI) architectures
-that simply cannot make consistent memory.
+Similar to dma_alloc_coherent(), except that the behavior can be controlled
+in more detail using the attrs argument.  See Documentation/DMA-attributes.txt
+for more details.
 
 ::
 
@@ -540,7 +529,7 @@ that simply cannot make consistent memory.
 			dma_addr_t dma_handle, unsigned long attrs)
 
 Free memory allocated by the dma_alloc_attrs().  All parameters common
-parameters must identical to those otherwise passed to dma_fre_coherent,
+parameters must identical to those otherwise passed to dma_free_coherent,
 and the attrs argument must be identical to the attrs passed to
 dma_alloc_attrs().
 
@@ -560,17 +549,6 @@ memory or doing partial flushes.
 into the width returned by this call.  It will also always be a power
 of two for easy alignment.
 
-::
-
-	void
-	dma_cache_sync(struct device *dev, void *vaddr, size_t size,
-		       enum dma_data_direction direction)
-
-Do a partial sync of memory that was allocated by dma_alloc_attrs() with
-the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
-continuing on for size.  Again, you *must* observe the cache line
-boundaries when doing this.
-
 ::
 
 	int
diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
index 8f8d97f65d73..2bb3fc0a621b 100644
--- a/Documentation/DMA-attributes.txt
+++ b/Documentation/DMA-attributes.txt
@@ -46,10 +46,11 @@ behavior.
 DMA_ATTR_NON_CONSISTENT
 -----------------------
 
-DMA_ATTR_NON_CONSISTENT lets the platform to choose to return either
-consistent or non-consistent memory as it sees fit.  By using this API,
-you are guaranteeing to the platform that you have all the correct and
-necessary sync points for this memory in the driver.
+DMA_ATTR_NON_CONSISTENT specifies that the memory returned is not
+required to be consistent.  The memory is owned by the device when
+returned from this function, and ownership must be explicitly
+transferred to the CPU using dma_sync_single_for_cpu, and back to the
+device using dma_sync_single_for_device.
 
 DMA_ATTR_NO_KERNEL_MAPPING
 --------------------------
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 8c81fa5d1f44..8757ad5087c4 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -432,6 +432,9 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 #define dma_map_page(d, p, o, s, r) dma_map_page_attrs(d, p, o, s, r, 0)
 #define dma_unmap_page(d, a, s, r) dma_unmap_page_attrs(d, a, s, r, 0)
 
+/*
+ * Don't use in new code, use dma_sync_single_for_{device,cpu} instead.
+ */
 static inline void
 dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	       enum dma_data_direction dir)
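[Editor's sketch, not part of the series: the ownership-transfer rule in the
updated DMA-attributes.txt text can be made concrete with a hypothetical
driver fragment.  `my_rx_one()`, `my_dev`, and `BUF_SIZE` are invented names;
this is kernel code and not buildable standalone.  The pattern shown is only
the documented call ordering: the buffer belongs to the device on return from
dma_alloc_attrs(), so the CPU must claim it before touching it and hand it
back before the next DMA operation.]

```c
#include <linux/dma-mapping.h>

#define BUF_SIZE 4096	/* illustrative buffer size */

static int my_rx_one(struct device *my_dev)
{
	dma_addr_t dma_handle;
	void *buf;

	/* The device owns the memory as soon as it is returned. */
	buf = dma_alloc_attrs(my_dev, BUF_SIZE, &dma_handle, GFP_KERNEL,
			      DMA_ATTR_NON_CONSISTENT);
	if (!buf)
		return -ENOMEM;

	/* ... device DMAs into the buffer ... */

	/* Transfer ownership to the CPU before reading the data. */
	dma_sync_single_for_cpu(my_dev, dma_handle, BUF_SIZE, DMA_FROM_DEVICE);
	/* CPU may access buf here. */

	/* Hand ownership back to the device before the next transfer. */
	dma_sync_single_for_device(my_dev, dma_handle, BUF_SIZE,
				   DMA_FROM_DEVICE);

	dma_free_attrs(my_dev, BUF_SIZE, buf, dma_handle,
		       DMA_ATTR_NON_CONSISTENT);
	return 0;
}
```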