From patchwork Thu Apr 20 14:44:31 2017
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 9690767
Date: Thu, 20 Apr 2017 16:44:31 +0200
From: Jan Kara
To: Ross Zwisler
Cc: Andrey Ryabinin, Alexander Viro, linux-fsdevel@vger.kernel.org,
    Konrad Rzeszutek Wilk, Eric Van Hensbergen, Ron Minnich,
    Latchesar Ionkov, Steve French,
    Matthew Wilcox, Trond Myklebust, Anna Schumaker, Andrew Morton,
    Jan Kara, Jens Axboe, Johannes Weiner, Alexey Kuznetsov,
    Christoph Hellwig, v9fs-developer@lists.sourceforge.net,
    linux-kernel@vger.kernel.org, linux-cifs@vger.kernel.org,
    samba-technical@lists.samba.org, linux-nfs@vger.kernel.org,
    linux-mm@kvack.org
Subject: Re: [PATCH 1/4] fs: fix data invalidation in the cleancache during direct IO
Message-ID: <20170420144431.GA14620@quack2.suse.cz>
References: <20170414140753.16108-1-aryabinin@virtuozzo.com>
 <20170414140753.16108-2-aryabinin@virtuozzo.com>
 <20170418193808.GA16667@linux.intel.com>
 <20170419192836.GA6364@linux.intel.com>
 <20170420143510.GF22135@quack2.suse.cz>
In-Reply-To: <20170420143510.GF22135@quack2.suse.cz>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Thu 20-04-17 16:35:10, Jan Kara wrote:
> On Wed 19-04-17 13:28:36, Ross Zwisler wrote:
> > On Wed, Apr 19, 2017 at 06:11:31PM +0300, Andrey Ryabinin wrote:
> > > On 04/18/2017 10:38 PM, Ross Zwisler wrote:
> > > > On Fri, Apr 14, 2017 at 05:07:50PM +0300, Andrey Ryabinin wrote:
> > > >> Some direct write fs hooks call invalidate_inode_pages2[_range]()
> > > >> conditionally, only if mapping->nrpages is not zero. If the page
> > > >> cache is empty, a buffered read following a direct IO write would
> > > >> get stale data from the cleancache.
> > > >>
> > > >> It also doesn't feel right to check only for ->nrpages, because
> > > >> invalidate_inode_pages2[_range]() invalidates exceptional entries
> > > >> as well.
> > > >>
> > > >> Fix this by calling invalidate_inode_pages2[_range]() regardless
> > > >> of nrpages state.
> > > >>
> > > >> Fixes: c515e1fd361c ("mm/fs: add hooks to support cleancache")
> > > >> Signed-off-by: Andrey Ryabinin
> > > >> ---
> > > > <>
> > > >> diff --git a/fs/dax.c b/fs/dax.c
> > > >> index 2e382fe..1e8cca0 100644
> > > >> --- a/fs/dax.c
> > > >> +++ b/fs/dax.c
> > > >> @@ -1047,7 +1047,7 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
> > > >>  	 * into page tables. We have to tear down these mappings so that data
> > > >>  	 * written by write(2) is visible in mmap.
> > > >>  	 */
> > > >> -	if ((iomap->flags & IOMAP_F_NEW) && inode->i_mapping->nrpages) {
> > > >> +	if ((iomap->flags & IOMAP_F_NEW)) {
> > > >>  		invalidate_inode_pages2_range(inode->i_mapping,
> > > >>  					pos >> PAGE_SHIFT,
> > > >>  					(end - 1) >> PAGE_SHIFT);
> > > >
> > > > tl;dr: I think the old code is correct, and that you don't need this
> > > > change.
> > > >
> > > > This should be harmless, but could slow us down a little if we keep
> > > > calling invalidate_inode_pages2_range() without really needing to.
> > > > Really for DAX I think we need to call invalidate_inode_pages2_range()
> > > > only if we have zero pages mapped over the place where we are doing
> > > > I/O, which is why we check nrpages.
> > >
> > > The check for ->nrpages alone looks strange, because
> > > invalidate_inode_pages2_range() also invalidates exceptional radix
> > > tree entries. Is it correct that we invalidate exceptional entries
> > > only if ->nrpages > 0 and skip the invalidation otherwise?
> >
> > For DAX we only invalidate clean DAX exceptional entries so that we can
> > keep dirty entries around for writeback, but yes, you're correct that we
> > only do the invalidation if nrpages > 0. And yes, it does seem a bit
> > weird. :)
>
> Actually in this place the nrpages check is deliberate since there should
> only be hole pages or nothing in the invalidated range - see the comment
> before the if.
But thinking more about it, this assumption actually is not right in the
presence of zero PMD entries in the radix tree. So this change actually also
fixes a possible bug for DAX, but we should do it as a separate patch with a
proper changelog. Something like the attached patch. Ross?

								Honza

From da79b4b72a6fe5fcf1a554ca1ce77cb462e8a306 Mon Sep 17 00:00:00 2001
From: Jan Kara
Date: Thu, 20 Apr 2017 16:38:20 +0200
Subject: [PATCH] dax: Fix inconsistency between mmap and write(2)

When a process has a PMD-sized hole mapped via mmap and part of the file
underlying this area is later allocated using write(2), the memory mappings
may not be appropriately invalidated (since the file has no hole pages
allocated), and thus the view via mmap will not show the data written by
write(2).

Fix the problem by always invalidating memory mappings covering the part of
the file for which blocks got allocated.

Signed-off-by: Jan Kara
---
 fs/dax.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 85abd741253d..da7bc44e5725 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1028,11 +1028,11 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		return -EIO;
 
 	/*
-	 * Write can allocate block for an area which has a hole page mapped
-	 * into page tables. We have to tear down these mappings so that data
-	 * written by write(2) is visible in mmap.
+	 * Write can allocate block for an area which has a hole page or zero
+	 * PMD entry in the radix tree. We have to tear down these mappings so
+	 * that data written by write(2) is visible in mmap.
 	 */
-	if ((iomap->flags & IOMAP_F_NEW) && inode->i_mapping->nrpages) {
+	if (iomap->flags & IOMAP_F_NEW) {
 		invalidate_inode_pages2_range(inode->i_mapping,
 					      pos >> PAGE_SHIFT,
 					      (end - 1) >> PAGE_SHIFT);
-- 
2.12.0