From patchwork Sat Jul 28 08:41:54 2012
X-Patchwork-Submitter: Simon Baatz
X-Patchwork-Id: 1251421
From: Simon Baatz
To: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas, Russell King, Simon Baatz
Subject: [RESEND PATCH] ARM: Handle user space mapped pages in flush_kernel_dcache_page
Date: Sat, 28 Jul 2012 10:41:54 +0200
Message-Id: <1343464914-31084-1-git-send-email-gmbnomis@gmail.com>

Commit f8b63c1 made flush_kernel_dcache_page a no-op assuming that
the pages it needs to handle are kernel mapped only.  However, pages
with user space mappings may occur as well, for example when doing
direct I/O.

Thus, continue to do lazy flushing if there are no user space
mappings.  Otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz
Cc: Catalin Marinas
Cc: Russell King
---

Hi,

a while ago I sent the patch above to fix a data corruption problem
on VIVT architectures (and probably VIPT aliasing ones as well).
There has been a bit of discussion with Catalin, but no real
conclusion on how to proceed was reached.
(See http://www.spinics.net/lists/arm-kernel/msg176913.html for the
original post.)

Apparently, the case is not hit too often; the ingredients are a
PIO(-like) driver, use of flush_kernel_dcache_page(), and direct I/O.
However, there is at least one real world example (running lvm2 on
top of an encrypted block device using mv_cesa on Kirkwood) that does
not work at all because of this problem.  (A short illustration of
this access pattern is sketched after the patch below.)

- Simon

 arch/arm/include/asm/cacheflush.h |    4 ++++
 arch/arm/mm/flush.c               |   22 ++++++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 004c1bc..91ddc70 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -303,6 +303,10 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
 static inline void flush_kernel_dcache_page(struct page *page)
 {
+	extern void __flush_kernel_dcache_page(struct page *);
+	/* highmem pages are always flushed upon kunmap already */
+	if ((cache_is_vivt() || cache_is_vipt_aliasing()) && !PageHighMem(page))
+		__flush_kernel_dcache_page(page);
 }
 
 #define flush_dcache_mmap_lock(mapping) \
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 7745854..bcba3a9 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -192,6 +192,28 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 			page->index << PAGE_CACHE_SHIFT);
 }
 
+/*
+ * Ensure cache coherency for kernel mapping of this page.
+ *
+ * If the page only exists in the page cache and there are no user
+ * space mappings, this is a no-op since the page was already marked
+ * dirty at creation.  Otherwise, we need to flush the dirty kernel
+ * cache lines directly.
+ *
+ * We can assume that the page is no high mem page, see
+ * flush_kernel_dcache_page.
+ */
+void __flush_kernel_dcache_page(struct page *page)
+{
+	struct address_space *mapping;
+
+	mapping = page_mapping(page);
+
+	if (!mapping || mapping_mapped(mapping))
+		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
+}
+EXPORT_SYMBOL(__flush_kernel_dcache_page);
+
 static void __flush_dcache_aliases(struct address_space *mapping, struct page *page)
 {
 	struct mm_struct *mm = current->active_mm;
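
For illustration, here is a minimal sketch of the access pattern
described above, assuming a hypothetical PIO-style helper
(pio_copy_to_page() is a made-up name and is not part of this patch):
the driver fills a page through its kernel mapping and then calls
flush_kernel_dcache_page().  When the same page is also mapped into
user space (e.g. by direct I/O), that call must actually clean the
dirty kernel cache lines, which is what the cacheflush.h hunk above
restores.

#include <linux/highmem.h>	/* kmap_atomic(), flush_kernel_dcache_page() */
#include <linux/string.h>	/* memcpy() */

/*
 * Illustrative only: copy PIO data into a page via its kernel mapping
 * and make the result visible to any aliasing (user space) mappings.
 */
static void pio_copy_to_page(struct page *page, unsigned int offset,
			     const void *buf, size_t len)
{
	void *vaddr = kmap_atomic(page);

	/* the data is written through the kernel mapping only */
	memcpy(vaddr + offset, buf, len);

	/*
	 * On VIVT/VIPT aliasing caches the dirty lines live in the
	 * kernel alias; flush them so that a user space mapping of the
	 * same page (set up e.g. by direct I/O) reads the new data.
	 */
	flush_kernel_dcache_page(page);

	kunmap_atomic(vaddr);
}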