From patchwork Tue Aug  2 00:50:50 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kwangwoo Lee
X-Patchwork-Id: 9254983
From: Kwangwoo Lee
To: Robin Murphy , Russell King - ARM Linux ,
	Catalin Marinas , Will Deacon , Mark Rutland ,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3] arm64: mm: convert __dma_* routines to use start, size
Date: Tue, 2 Aug 2016 09:50:50 +0900
Message-Id:
 <1470099050-25407-1-git-send-email-kwangwoo.lee@sk.com>
X-Mailer: git-send-email 2.5.0
Cc: Hyunchul Kim , Kwangwoo Lee , linux-kernel@vger.kernel.org,
	Woosuk Chung

__dma_* routines have been converted to use start and size instead of
start and end addresses. The patch was originally for adding
__clean_dcache_area_poc() which will be used in pmem driver to clean
dcache to the PoC (Point of Coherency) in arch_wb_cache_pmem().

The functionality of __clean_dcache_area_poc() was equivalent to
__dma_clean_range(). The difference was that __dma_clean_range() takes
the end address, while __clean_dcache_area_poc() takes the size to
clean.

Thus, __clean_dcache_area_poc() has been implemented as a fallthrough
into __dma_clean_range(), after changing the __dma_* routines to take
start and size instead of start and end.

As a consequence of using start and size, the names of the __dma_*
routines have also been changed to follow the terminology below:

    area:  takes a start and size
    range: takes a start and end

Signed-off-by: Kwangwoo Lee
Reviewed-by: Robin Murphy
---
v3) clean up in __dma_clean_area() based on 823066d9edcd dcache_by_line_op fix
v2) change the names of __dma_* routines to use area instead of range
    fix to use __dma_flush_area() in dma-mapping.c
v1) change __dma_* routines to use start, size
    add __clean_dcache_area_poc() as a fallthrough of __dma_clean_range()

 arch/arm64/include/asm/cacheflush.h |  3 +-
 arch/arm64/mm/cache.S               | 82 ++++++++++++++++++-------------
 arch/arm64/mm/dma-mapping.c         |  6 +--
 3 files changed, 44 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index c64268d..2e5fb97 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -68,6 +68,7 @@ extern void flush_cache_range(struct vm_area_struct *vma,
 			      unsigned long start, unsigned long end);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
+extern void __clean_dcache_area_poc(void *addr, size_t len);
 extern void __clean_dcache_area_pou(void *addr, size_t len);
 extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 
@@ -85,7 +86,7 @@ static inline void flush_cache_page(struct vm_area_struct *vma,
  */
 extern void __dma_map_area(const void *, size_t, int);
 extern void __dma_unmap_area(const void *, size_t, int);
-extern void __dma_flush_range(const void *, const void *);
+extern void __dma_flush_area(const void *, size_t);
 
 /*
  * Copy user data from/to a page which is mapped into a different
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 07d7352..58b5a90 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -105,19 +105,20 @@ ENTRY(__clean_dcache_area_pou)
 ENDPROC(__clean_dcache_area_pou)
 
 /*
- *	__inval_cache_range(start, end)
- *	- start   - start address of region
- *	- end     - end address of region
+ *	__dma_inv_area(start, size)
+ *	- start   - virtual start address of region
+ *	- size    - size in question
  */
-ENTRY(__inval_cache_range)
+__dma_inv_area:
+	add	x1, x1, x0
 	/* FALLTHROUGH */
 
 /*
- *	__dma_inv_range(start, end)
- *	- start   - virtual start address of region
- *	- end     - virtual end address of region
+ *	__inval_cache_range(start, end)
+ *	- start   - start address of region
+ *	- end     - end address of region
  */
-__dma_inv_range:
+ENTRY(__inval_cache_range)
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -136,46 +137,43 @@ __dma_inv_range:
 	dsb	sy
 	ret
 ENDPIPROC(__inval_cache_range)
-ENDPROC(__dma_inv_range)
+ENDPROC(__dma_inv_area)
+
+/*
+ *	__clean_dcache_area_poc(kaddr, size)
+ *
+ *	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
+ *	are cleaned to the PoC.
+ *
+ *	- kaddr   - kernel address
+ *	- size    - size in question
+ */
+ENTRY(__clean_dcache_area_poc)
+	/* FALLTHROUGH */
 
 /*
- *	__dma_clean_range(start, end)
+ *	__dma_clean_area(start, size)
  *	- start   - virtual start address of region
- *	- end     - virtual end address of region
+ *	- size    - size in question
  */
-__dma_clean_range:
-	dcache_line_size x2, x3
-	sub	x3, x2, #1
-	bic	x0, x0, x3
-1:
-alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
-	dc	cvac, x0
-alternative_else
-	dc	civac, x0
-alternative_endif
-	add	x0, x0, x2
-	cmp	x0, x1
-	b.lo	1b
-	dsb	sy
+__dma_clean_area:
+	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
-ENDPROC(__dma_clean_range)
+ENDPIPROC(__clean_dcache_area_poc)
+ENDPROC(__dma_clean_area)
 
 /*
- *	__dma_flush_range(start, end)
+ *	__dma_flush_area(start, size)
+ *
+ *	clean & invalidate D / U line
+ *
 *	- start   - virtual start address of region
-*	- end     - virtual end address of region
+*	- size    - size in question
 */
-ENTRY(__dma_flush_range)
-	dcache_line_size x2, x3
-	sub	x3, x2, #1
-	bic	x0, x0, x3
-1:	dc	civac, x0			// clean & invalidate D / U line
-	add	x0, x0, x2
-	cmp	x0, x1
-	b.lo	1b
-	dsb	sy
+ENTRY(__dma_flush_area)
+	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-ENDPIPROC(__dma_flush_range)
+ENDPIPROC(__dma_flush_area)
 
 /*
  *	__dma_map_area(start, size, dir)
@@ -184,10 +182,9 @@ ENDPIPROC(__dma_flush_range)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_map_area)
-	add	x1, x1, x0
 	cmp	w2, #DMA_FROM_DEVICE
-	b.eq	__dma_inv_range
-	b	__dma_clean_range
+	b.eq	__dma_inv_area
+	b	__dma_clean_area
 ENDPIPROC(__dma_map_area)
 
 /*
@@ -197,8 +194,7 @@ ENDPIPROC(__dma_map_area)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_unmap_area)
-	add	x1, x1, x0
 	cmp	w2, #DMA_TO_DEVICE
-	b.ne	__dma_inv_range
+	b.ne	__dma_inv_area
 	ret
 ENDPIPROC(__dma_unmap_area)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index f6c55af..88922e8 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -168,7 +168,7 @@ static void *__dma_alloc(struct device *dev, size_t size,
 		return ptr;
 
 	/* remove any dirty cache lines on the kernel alias */
-	__dma_flush_range(ptr, ptr + size);
+	__dma_flush_area(ptr, size);
 
 	/* create a coherent mapping */
 	page = virt_to_page(ptr);
@@ -387,7 +387,7 @@ static int __init atomic_pool_init(void)
 		void *page_addr = page_address(page);
 
 		memset(page_addr, 0, atomic_pool_size);
-		__dma_flush_range(page_addr, page_addr + atomic_pool_size);
+		__dma_flush_area(page_addr, atomic_pool_size);
 
 		atomic_pool = gen_pool_create(PAGE_SHIFT, -1);
 		if (!atomic_pool)
@@ -548,7 +548,7 @@ fs_initcall(dma_debug_do_init);
 /* Thankfully, all cache ops are by VA so we can ignore phys here */
 static void flush_page(struct device *dev, const void *virt, phys_addr_t phys)
 {
-	__dma_flush_range(virt, virt + PAGE_SIZE);
+	__dma_flush_area(virt, PAGE_SIZE);
 }
 
 static void *__iommu_alloc_attrs(struct device *dev, size_t size,