From patchwork Mon Sep 12 21:32:56 2016
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9328131
From: Laura Abbott <labbott@redhat.com>
To: Sumit Semwal, John Stultz, Arve Hjønnevåg, Riley Andrews
Subject: [RFCv3][PATCH 3/5] arm64: Implement ARCH_HAS_FORCE_CACHE
Date: Mon, 12 Sep 2016 14:32:56 -0700
Message-Id: <1473715978-11633-4-git-send-email-labbott@redhat.com>
In-Reply-To: <1473715978-11633-1-git-send-email-labbott@redhat.com>
References: <1473715978-11633-1-git-send-email-labbott@redhat.com>
Cc: devel@driverdev.osuosl.org, Jon Medhurst, Android Kernel Team,
	Arnd Bergmann, Greg Kroah-Hartman, Daniel Vetter, Will Deacon,
	Russell King, linux-kernel@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org, Rohit kumar, Bryan Huntsman,
	Eun Taik Lee, Catalin Marinas, Liviu Dudau, Laura Abbott,
	Jeremy Gebben, linux-arm-kernel@lists.infradead.org

arm64 may need to guarantee the caches are synced. Implement versions
of the kernel_force_cache API to allow this.

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
v3: Switch to calling cache operations directly instead of relying on
    DMA mapping.
---
 arch/arm64/include/asm/cacheflush.h |  8 ++++++++
 arch/arm64/mm/cache.S               | 24 ++++++++++++++++++++----
 arch/arm64/mm/flush.c               | 11 +++++++++++
 3 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index c64268d..1134c15 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -87,6 +87,9 @@ extern void __dma_map_area(const void *, size_t, int);
 extern void __dma_unmap_area(const void *, size_t, int);
 extern void __dma_flush_range(const void *, const void *);
 
+extern void __force_dcache_clean(const void *, size_t);
+extern void __force_dcache_invalidate(const void *, size_t);
+
 /*
  * Copy user data from/to a page which is mapped into a different
  * processes address space. Really, we want to allow our "user
@@ -149,4 +152,9 @@ int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
 
+#define ARCH_HAS_FORCE_CACHE 1
+
+void kernel_force_cache_clean(struct page *page, size_t size);
+void kernel_force_cache_invalidate(struct page *page, size_t size);
+
 #endif
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 07d7352..e99c9a4 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -184,10 +184,6 @@ ENDPIPROC(__dma_flush_range)
  *	- dir	- DMA direction
  */
 ENTRY(__dma_map_area)
-	add	x1, x1, x0
-	cmp	w2, #DMA_FROM_DEVICE
-	b.eq	__dma_inv_range
-	b	__dma_clean_range
 ENDPIPROC(__dma_map_area)
 
 /*
@@ -202,3 +198,23 @@ ENTRY(__dma_unmap_area)
 	b.ne	__dma_inv_range
 	ret
 ENDPIPROC(__dma_unmap_area)
+
+/*
+ *	__force_dcache_clean(start, size)
+ *	- start	- kernel virtual start address
+ *	- size	- size of region
+ */
+ENTRY(__force_dcache_clean)
+	add	x1, x1, x0
+	b	__dma_clean_range
+ENDPROC(__force_dcache_clean)
+
+/*
+ *	__force_dcache_invalidate(start, size)
+ *	- start	- kernel virtual start address
+ *	- size	- size of region
+ */
+ENTRY(__force_dcache_invalidate)
+	add	x1, x1, x0
+	b	__dma_inv_range
+ENDPROC(__force_dcache_invalidate)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 43a76b0..54ff32e 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -94,3 +95,13 @@ EXPORT_SYMBOL(flush_dcache_page);
  * Additional functions defined in assembly.
  */
 EXPORT_SYMBOL(flush_icache_range);
+
+void kernel_force_cache_clean(struct page *page, size_t size)
+{
+	__force_dcache_clean(page_address(page), size);
+}
+
+void kernel_force_cache_invalidate(struct page *page, size_t size)
+{
+	__force_dcache_invalidate(page_address(page), size);
+}
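
For context, a minimal usage sketch of the API this patch exports. This is
illustrative only and not part of the series: the function name, fill pattern,
and single order-0 page are hypothetical, and error handling beyond allocation
failure is trimmed. The idea is the usual non-coherent DMA pairing: clean the
CPU's dirty lines before the device reads, invalidate before the CPU reads
what the device wrote.

/*
 * Hypothetical example, not from this patch: one page shared with a
 * non-coherent DMA-capable device. The CPU writes first, the device
 * reads and then writes its results back.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/cacheflush.h>

static int example_share_page(void)
{
	struct page *page = alloc_page(GFP_KERNEL);

	if (!page)
		return -ENOMEM;

	/*
	 * CPU fills the buffer, then cleans dirty lines to memory so the
	 * device observes the data rather than stale RAM contents.
	 */
	memset(page_address(page), 0xa5, PAGE_SIZE);
	kernel_force_cache_clean(page, PAGE_SIZE);

	/* ... device reads the page, then writes its results back ... */

	/*
	 * Drop any cached lines before the CPU reads, so the device's
	 * writes are not masked by stale cache contents.
	 */
	kernel_force_cache_invalidate(page, PAGE_SIZE);

	__free_page(page);
	return 0;
}

This mirrors the clean/invalidate split that __dma_map_area/__dma_unmap_area
perform for streaming DMA, but lets a caller such as a dma-buf exporter force
it explicitly on a struct page without going through the DMA mapping API.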