From patchwork Fri Apr 23 17:29:29 2021
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 12221189
From: Jan Kara
Subject: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races
Date: Fri, 23 Apr 2021 19:29:29 +0200
Message-Id: <20210423171010.12-1-jack@suse.cz>
Cc: Christoph Hellwig, Amir Goldstein, Dave Chinner, Ted Tso, Jan Kara,
    ceph-devel@vger.kernel.org, Chao Yu, Damien Le Moal, "Darrick J. Wong",
    Hugh Dickins, Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
    linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org,
    linux-xfs@vger.kernel.org, Miklos Szeredi, Steve French
Wong" , Hugh Dickins , Jaegeuk Kim , Jeff Layton , Johannes Thumshirn , linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org, linux-xfs@vger.kernel.org, Miklos Szeredi , Steve French Subject: [PATCH 0/12 v4] fs: Hole punch vs page cache filling races Date: Fri, 23 Apr 2021 19:29:29 +0200 Message-Id: <20210423171010.12-1-jack@suse.cz> X-Mailer: git-send-email 2.26.2 MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 6DB77C000C52 X-Stat-Signature: qm478oyk1q8ep7ckxh137p3z5779urxy Received-SPF: none (suse.cz>: No applicable sender policy available) receiver=imf22; identity=mailfrom; envelope-from=""; helo=mx2.suse.de; client-ip=195.135.220.15 X-HE-DKIM-Result: none/none X-HE-Tag: 1619199015-22142 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Hello, here is another version of my patches to address races between hole punching and page cache filling functions for ext4 and other filesystems. I think we are coming close to a complete solution so I've removed the RFC tag from the subject. I went through all filesystems supporting hole punching and converted them from their private locks to a generic one (usually fixing the race ext4 had as a side effect). I also found out ceph & cifs didn't have any protection from the hole punch vs page fault race either so I've added appropriate protections there. Open are still GFS2 and OCFS2 filesystems. GFS2 actually avoids the race but is prone to deadlocks (acquires the same lock both above and below mmap_sem), OCFS2 locking seems kind of hosed and some read, write, and hole punch paths are not properly serialized possibly leading to fs corruption. Both issues are non-trivial so respective fs maintainers have to deal with those (I've informed them and problems were generally confirmed). Anyway, for all the other filesystem this kind of race should be closed. As a next step, I'd like to actually make sure all calls to truncate_inode_pages() happen under mapping->invalidate_lock, add the assert and then we can also get rid of i_size checks in some places (truncate can use the same serialization scheme as hole punch). But that step is mostly a cleanup so I'd like to get these functional fixes in first. Changes since v3: * Renamed and moved lock to struct address_space * Added conversions of tmpfs, ceph, cifs, fuse, f2fs * Fixed error handling path in filemap_read() * Removed .page_mkwrite() cleanup from the series for now Changes since v2: * Added documentation and comments regarding lock ordering and how the lock is supposed to be used * Added conversions of ext2, xfs, zonefs * Added patch removing i_mapping_sem protection from .page_mkwrite handlers Changes since v1: * Moved to using inode->i_mapping_sem instead of aops handler to acquire appropriate lock Acked-by: Hugh Dickins --- Motivation: Amir has reported [1] a that ext4 has a potential issues when reads can race with hole punching possibly exposing stale data from freed blocks or even corrupting filesystem when stale mapping data gets used for writeout. The problem is that during hole punching, new page cache pages can get instantiated and block mapping from the looked up in a punched range after truncate_inode_pages() has run but before the filesystem removes blocks from the file. 
In principle, any filesystem implementing hole punching thus needs to
implement a mechanism to block instantiating page cache pages during hole
punching to avoid this race. This is further complicated by the fact that
there are multiple places that can instantiate pages in the page cache. We
can have a regular read(2) or a page fault doing this, but fadvise(2) or
madvise(2) can also result in reading in page cache pages through
force_page_cache_readahead().

There are a couple of ways to fix this. The first way (currently implemented
by XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they
are serialized with hole punching. This is easy to do, but as a result all
reads would then be serialized with writes, and thus mixed read-write
workloads suffer heavily on ext4. Thus this series introduces
inode->i_mapping_sem and uses it when creating new pages in the page cache
and looking up their corresponding block mapping. We also replace
EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the necessary
serialization with hole punching for ext4. A minimal sketch of the resulting
locking pattern is appended after the CC list below.

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz

CC: ceph-devel@vger.kernel.org
CC: Chao Yu
CC: Damien Le Moal
CC: "Darrick J. Wong"
CC: Hugh Dickins
CC: Jaegeuk Kim
CC: Jeff Layton
CC: Johannes Thumshirn
CC: linux-cifs@vger.kernel.org
CC: linux-ext4@vger.kernel.org
CC: linux-f2fs-devel@lists.sourceforge.net
CC: linux-mm@kvack.org
CC: linux-xfs@vger.kernel.org
CC: Miklos Szeredi
CC: Steve French
CC: Ted Tso
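For illustration, here is a minimal sketch of the locking rule the new rwsem
is meant to enforce. This is not code taken from the series:
punch_hole_example() and fault_example() are made-up helpers, and in the
actual patches the shared acquisition largely happens in the generic page
cache paths rather than in per-filesystem callbacks. The sketch assumes the
v4 naming, i.e. the lock lives in struct address_space as
mapping->invalidate_lock (earlier versions called it inode->i_mapping_sem).

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/rwsem.h>

/*
 * Hole punch side: take mapping->invalidate_lock exclusively so that no new
 * pages (and no new block mappings) can be created in the punched range
 * between truncating the page cache and freeing the blocks.
 */
static int punch_hole_example(struct inode *inode, loff_t offset, loff_t len)
{
	struct address_space *mapping = inode->i_mapping;
	loff_t end = offset + len - 1;
	int error;

	inode_lock(inode);			/* serialize against writes */
	down_write(&mapping->invalidate_lock);	/* block page cache filling */

	error = filemap_write_and_wait_range(mapping, offset, end);
	if (error)
		goto out;
	truncate_inode_pages_range(mapping, offset, end);
	/* ... filesystem-specific removal of blocks from the file ... */
out:
	up_write(&mapping->invalidate_lock);
	inode_unlock(inode);
	return error;
}

/*
 * Page-cache-filling side (read, readahead, page fault): hold the lock
 * shared while pages are instantiated and their block mapping is looked up,
 * so the hole punch above cannot run in parallel with it.
 */
static vm_fault_t fault_example(struct vm_fault *vmf)
{
	struct address_space *mapping = file_inode(vmf->vma->vm_file)->i_mapping;
	vm_fault_t ret;

	down_read(&mapping->invalidate_lock);
	ret = filemap_fault(vmf);		/* may add pages and map blocks */
	up_read(&mapping->invalidate_lock);
	return ret;
}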