From patchwork Mon Sep 12 21:32:54 2016
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9328127
From: Laura Abbott <labbott@redhat.com>
To: Sumit Semwal, John Stultz, Arve Hjønnevåg, Riley Andrews
Subject: [RFCv3][PATCH 1/5] Documentation: Introduce kernel_force_cache_* APIs
Date: Mon, 12 Sep 2016 14:32:54 -0700
Message-Id: <1473715978-11633-2-git-send-email-labbott@redhat.com>
In-Reply-To: <1473715978-11633-1-git-send-email-labbott@redhat.com>
References: <1473715978-11633-1-git-send-email-labbott@redhat.com>
Cc: devel@driverdev.osuosl.org, Jon Medhurst, Android Kernel Team,
    Arnd Bergmann, Greg Kroah-Hartman, Daniel Vetter, Will Deacon,
    Russell King, linux-kernel@vger.kernel.org,
    linaro-mm-sig@lists.linaro.org, Rohit kumar, Bryan Huntsman,
    Eun Taik Lee, Catalin Marinas, Liviu Dudau, Laura Abbott,
    Jeremy Gebben, linux-arm-kernel@lists.infradead.org

Some frameworks (e.g. Ion) may need to do explicit cache management to
meet performance/correctness requirements. Rather than piggy-back on
another API and hope the semantics don't change, introduce a set of
APIs to force a page to be cleaned/invalidated in the cache.

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
 Documentation/cachetlb.txt | 18 +++++++++++++++++-
 include/linux/cacheflush.h | 11 +++++++++++
 2 files changed, 28 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/cacheflush.h

diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index 3f9f808..18eec7c 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -378,7 +378,7 @@ maps this page at its virtual address.
 	flush_dcache_page and update_mmu_cache. In the future, the hope
 	is to remove this interface completely.
 
-The final category of APIs is for I/O to deliberately aliased address
+Another set of APIs is for I/O to deliberately aliased address
 ranges inside the kernel.  Such aliases are set up by use of the
 vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
 subsystem assumes that the user mapping and kernel offset mapping are
@@ -401,3 +401,19 @@ I/O and invalidating it after the I/O returns.
 	speculatively reading data while the I/O was occurring to the
 	physical pages.  This is only necessary for data reads into the
 	vmap area.
+
+Nearly all drivers can handle cache management using the existing DMA model.
+There may be limited circumstances when a driver or framework needs to
+explicitly manage the cache; trying to force cache management into the DMA
+framework may lead to performance loss or unnecessary work. These APIs may
+be used to provide explicit coherency for memory that does not fall into
+any of the above categories. Implementers of this API must assume the
+address can be aliased. Any cache operations shall not be delayed and must
+be completed by the time the call returns.
+
+	void kernel_force_cache_clean(struct page *page, size_t size);
+		Ensures that any data in the cache for the page is written
+		back and visible across all aliases.
+
+	void kernel_force_cache_invalidate(struct page *page, size_t size);
+		Invalidates the cache for the given page.
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
new file mode 100644
index 0000000..4388846
--- /dev/null
+++ b/include/linux/cacheflush.h
@@ -0,0 +1,11 @@
+#ifndef CACHEFLUSH_H
+#define CACHEFLUSH_H
+
+#include <asm/cacheflush.h>
+
+#ifndef ARCH_HAS_FORCE_CACHE
+static inline void kernel_force_cache_clean(struct page *page, size_t size) { }
+static inline void kernel_force_cache_invalidate(struct page *page, size_t size) { }
+#endif
+
+#endif
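
[Editorial note, not part of the posted patch: a minimal sketch of how a caller
such as a driver or heap implementation might use the two new calls. The
function names example_buffer_to_device()/example_buffer_from_device() are
hypothetical and exist only for illustration.]

/*
 * Hypothetical usage sketch: a driver sharing a page with a device
 * outside the DMA API, using the forced cache maintenance calls
 * introduced by this patch.
 */
#include <linux/cacheflush.h>	/* kernel_force_cache_{clean,invalidate}() */
#include <linux/mm_types.h>	/* struct page */

/* CPU has written the buffer; write back dirty lines so the device sees them. */
static void example_buffer_to_device(struct page *page, size_t size)
{
	kernel_force_cache_clean(page, size);
}

/* Device has written the buffer; drop stale lines before the CPU reads it. */
static void example_buffer_from_device(struct page *page, size_t size)
{
	kernel_force_cache_invalidate(page, size);
}

[On architectures that do not define ARCH_HAS_FORCE_CACHE, both calls fall back
to the empty inline stubs in the new include/linux/cacheflush.h, so the sketch
assumes an architecture that supplies real implementations, presumably provided
elsewhere in this series.]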