From patchwork Sun May 12 05:35:56 2013
X-Patchwork-Submitter: Simon Baatz
X-Patchwork-Id: 2555191
From: Simon Baatz <gmbnomis@gmail.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas, Russell King, Simon Baatz
Subject: [PATCH V4] ARM: handle user space mapped pages in flush_kernel_dcache_page
Date: Sun, 12 May 2013 07:35:56 +0200
Message-Id: <1368336956-6693-1-git-send-email-gmbnomis@gmail.com>

Commit f8b63c1 made flush_kernel_dcache_page a no-op, assuming that the
pages it needs to handle are kernel mapped only.  However, pages with
user space mappings may occur, for example when doing direct I/O.
Thus, continue to do lazy flushing if there are no user space mappings.
Otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Cc: Catalin Marinas
Cc: Russell King
Acked-by: Catalin Marinas
---
Changes:

in V4:
 - get back to simpler version of flush_kernel_dcache_page() and use the
   logic from __flush_dcache_page() to flush the kernel mapping (which
   also takes care of highmem pages)

in V3:
 - Followed Catalin's suggestion to reverse the order of the patches

in V2:
 - flush_kernel_dcache_page() follows flush_dcache_page() now, except
   that it does not flush the user mappings

 arch/arm/include/asm/cacheflush.h |  4 +---
 arch/arm/mm/flush.c               | 38 +++++++++++++++++++++++++++++++------
 2 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index bff7138..17d0ae8 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -320,9 +320,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 }
 
 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
-static inline void flush_kernel_dcache_page(struct page *page)
-{
-}
+extern void flush_kernel_dcache_page(struct page *);
 
 #define flush_dcache_mmap_lock(mapping) \
 	spin_lock_irq(&(mapping)->tree_lock)
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 0d473cc..485ca96 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -160,13 +160,13 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 #endif
 }
 
-void __flush_dcache_page(struct address_space *mapping, struct page *page)
+/*
+ * Writeback any data associated with the kernel mapping of this
+ * page.  This ensures that data in the physical page is mutually
+ * coherent with the kernel's mapping.
+ */
+static void __flush_kernel_dcache_page(struct page *page)
 {
-	/*
-	 * Writeback any data associated with the kernel mapping of this
-	 * page.  This ensures that data in the physical page is mutually
-	 * coherent with the kernels mapping.
-	 */
 	if (!PageHighMem(page)) {
 		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
 	} else {
@@ -184,6 +184,11 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 		}
 	}
 }
+}
+
+void __flush_dcache_page(struct address_space *mapping, struct page *page)
+{
+	__flush_kernel_dcache_page(page);
 
 	/*
 	 * If this is a page cache page, and we have an aliasing VIPT cache,
@@ -301,6 +306,27 @@ void flush_dcache_page(struct page *page)
 EXPORT_SYMBOL(flush_dcache_page);
 
 /*
+ * Ensure cache coherency for kernel mapping of this page.
+ *
+ * If the page only exists in the page cache and there are no user
+ * space mappings, this is a no-op since the page was already marked
+ * dirty at creation.  Otherwise, we need to flush the dirty kernel
+ * cache lines directly.
+ */
+void flush_kernel_dcache_page(struct page *page)
+{
+	if (cache_is_vivt() || cache_is_vipt_aliasing()) {
+		struct address_space *mapping;
+
+		mapping = page_mapping(page);
+
+		if (!mapping || mapping_mapped(mapping))
+			__flush_kernel_dcache_page(page);
+	}
+}
+EXPORT_SYMBOL(flush_kernel_dcache_page);
+
+/*
  * Flush an anonymous page so that users of get_user_pages()
  * can safely access the data.  The expected sequence is:
 *
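For context, the calling convention this patch makes work again is the one documented in Documentation/cachetlb.txt: code that modifies a page cache page through a kernel mapping must call flush_kernel_dcache_page() afterwards so that the dirty kernel-alias cache lines are written back before the data is observed through a user space mapping. A minimal kernel-side sketch of that sequence (not standalone-compilable; my_fill_buffer() and its arguments are hypothetical, only the kmap/flush/kunmap pattern is from the documentation):

```c
#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Hypothetical driver helper: copy data into a page cache page via
 * its kernel mapping.  On VIVT and aliasing-VIPT D-caches the dirty
 * lines live in the kernel alias; with this patch,
 * flush_kernel_dcache_page() writes them back (unless the page has
 * no user space mappings, in which case flushing stays lazy).
 */
static void my_fill_buffer(struct page *page, const void *src, size_t len)
{
	void *vaddr = kmap_atomic(page);	/* map page into kernel space */

	memcpy(vaddr, src, len);		/* modify via kernel mapping */

	flush_kernel_dcache_page(page);		/* make the store visible */
	kunmap_atomic(vaddr);
}
```

This matches the direct I/O case from the commit message: the page being filled may simultaneously be mapped into user space, so the no-op variant removed by this patch was not safe there.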