From patchwork Sun Oct 7 11:29:12 2012
X-Patchwork-Submitter: Simon Baatz
X-Patchwork-Id: 1560991
From: Simon Baatz
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, linux@arm.linux.org.uk, jason@lakedaemon.net, andrew@lunn.ch
Subject: [PATCH V3 2/2] ARM: Handle user space mapped pages in flush_kernel_dcache_page
Date: Sun, 7 Oct 2012 13:29:12 +0200
Message-Id: <1349609352-6408-3-git-send-email-gmbnomis@gmail.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1349609352-6408-1-git-send-email-gmbnomis@gmail.com>
References: <1349609352-6408-1-git-send-email-gmbnomis@gmail.com>

Commit f8b63c1 made
flush_kernel_dcache_page() a no-op assuming that the pages it needs to
handle are kernel mapped only.  However, for example when doing direct
I/O, pages with user space mappings may occur.

Thus, do lazy flushing like in flush_dcache_page() if there are no user
space mappings.  Otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz
Cc: Catalin Marinas
Cc: Russell King
---
Changes:

in V3:
- Followed Catalin's suggestion to reverse the order of the patches

in V2:
- flush_kernel_dcache_page() follows flush_dcache_page() now, except
  that it does not flush the user mappings

 arch/arm/include/asm/cacheflush.h |    4 ++++
 arch/arm/mm/flush.c               |   42 +++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index e4448e1..eca955f 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -307,6 +307,10 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
 static inline void flush_kernel_dcache_page(struct page *page)
 {
+	extern void __flush_kernel_dcache_page(struct page *);
+	/* highmem pages are always flushed upon kunmap already */
+	if (!PageHighMem(page))
+		__flush_kernel_dcache_page(page);
 }
 
 #define flush_dcache_mmap_lock(mapping) \
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 5c474a1..59ad4fc 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -192,6 +192,48 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 			page->index << PAGE_CACHE_SHIFT);
 }
 
+/*
+ * Ensure cache coherency for the kernel mapping of this page.
+ *
+ * If the page only exists in the page cache and there are no user
+ * space mappings, we can be lazy and remember that we may have dirty
+ * kernel cache lines for later.  Otherwise, we need to flush the
+ * dirty kernel cache lines directly.
+ *
+ * Note that we disable the lazy flush for SMP configurations where
+ * the cache maintenance operations are not automatically broadcasted.
+ *
+ * We can assume that the page is no high mem page, see
+ * flush_kernel_dcache_page.
+ */
+void __flush_kernel_dcache_page(struct page *page)
+{
+	struct address_space *mapping;
+
+	/*
+	 * The zero page is never written to, so never has any dirty
+	 * cache lines, and therefore never needs to be flushed.
+	 */
+	if (page == ZERO_PAGE(0))
+		return;
+
+	mapping = page_mapping(page);
+
+	if (!cache_ops_need_broadcast()) {
+		if ((mapping && !mapping_mapped(mapping)) ||
+		    (!mapping && cache_is_vipt_nonaliasing())) {
+			clear_bit(PG_dcache_clean, &page->flags);
+			return;
+		}
+	}
+
+	__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
+	if (mapping && !cache_is_vivt())
+		__flush_icache_all();
+	set_bit(PG_dcache_clean, &page->flags);
+}
+EXPORT_SYMBOL(__flush_kernel_dcache_page);
+
 static void __flush_dcache_aliases(struct address_space *mapping,
 				   struct page *page)
 {
 	struct mm_struct *mm = current->active_mm;