X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 12279053
From: Jan Kara
Cc: Christoph Hellwig, Dave Chinner, ceph-devel@vger.kernel.org, Chao Yu,
    Damien Le Moal, "Darrick J. Wong", Jaegeuk Kim, Jeff Layton,
    Johannes Thumshirn, linux-cifs@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, Miklos Szeredi, Steve French,
    Ted Tso, Matthew Wilcox, Jan Kara
Subject: [PATCH 12/13] ceph: Fix race between hole punch and page fault
Date: Tue, 25 May 2021 15:50:49 +0200
Message-Id: <20210525135100.11221-12-jack@suse.cz>
In-Reply-To: <20210525125652.20457-1-jack@suse.cz>
References: <20210525125652.20457-1-jack@suse.cz>
X-Mailing-List: linux-xfs@vger.kernel.org

Ceph has the following race between hole punching and page fault:

CPU1                                  CPU2
ceph_fallocate()
  ...
  ceph_zero_pagecache_range()
                                      ceph_filemap_fault()
                                        faults in page in the range
                                        being punched
  ceph_zero_objects()

And now we have a page in the punched range with invalid data.
Fix the problem by using mapping->invalidate_lock similarly to other
filesystems. Note that using invalidate_lock also fixes a similar race
wrt ->readpage().

CC: Jeff Layton
CC: ceph-devel@vger.kernel.org
Reviewed-by: Jeff Layton
Signed-off-by: Jan Kara
---
 fs/ceph/addr.c | 9 ++++++---
 fs/ceph/file.c | 2 ++
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index c1570fada3d8..6d868faf97b5 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1401,9 +1401,11 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_SIGBUS;
 	} else {
 		struct address_space *mapping = inode->i_mapping;
-		struct page *page = find_or_create_page(mapping, 0,
-				mapping_gfp_constraint(mapping,
-				~__GFP_FS));
+		struct page *page;
+
+		down_read(&mapping->invalidate_lock);
+		page = find_or_create_page(mapping, 0,
+				mapping_gfp_constraint(mapping, ~__GFP_FS));
 		if (!page) {
 			ret = VM_FAULT_OOM;
 			goto out_inline;
@@ -1424,6 +1426,7 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 		vmf->page = page;
 		ret = VM_FAULT_MAJOR | VM_FAULT_LOCKED;
 out_inline:
+		up_read(&mapping->invalidate_lock);
 		dout("filemap_fault %p %llu read inline data ret %x\n",
 		     inode, off, ret);
 	}
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 77fc037d5beb..91693d8b458e 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -2083,6 +2083,7 @@ static long ceph_fallocate(struct file *file, int mode,
 	if (ret < 0)
 		goto unlock;
 
+	down_write(&inode->i_mapping->invalidate_lock);
 	ceph_zero_pagecache_range(inode, offset, length);
 	ret = ceph_zero_objects(inode, offset, length);
 
@@ -2095,6 +2096,7 @@ static long ceph_fallocate(struct file *file, int mode,
 		if (dirty)
 			__mark_inode_dirty(inode, dirty);
 	}
+	up_write(&inode->i_mapping->invalidate_lock);
 
 	ceph_put_cap_refs(ci, got);
 unlock:
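
For readers jumping into the middle of the series, the net effect of the
hunks above can be summarized by the following schematic sketch (editorial
note, not part of the patch and not compilable on its own; all identifiers
are taken from the diff, ceph cap handling and error paths are omitted):

	/* Page fault / inline read side: hold invalidate_lock shared while
	 * instantiating page cache pages, so no page can be created inside
	 * a range that is concurrently being punched out. */
	down_read(&mapping->invalidate_lock);
	page = find_or_create_page(mapping, 0,
			mapping_gfp_constraint(mapping, ~__GFP_FS));
	/* ... fill the page with inline data ... */
	up_read(&mapping->invalidate_lock);

	/* Hole punch side: hold invalidate_lock exclusive across both the
	 * page cache zeroing and the zeroing of the underlying objects, so
	 * the fault path cannot re-instantiate a page with stale data in
	 * between the two steps. */
	down_write(&inode->i_mapping->invalidate_lock);
	ceph_zero_pagecache_range(inode, offset, length);
	ret = ceph_zero_objects(inode, offset, length);
	up_write(&inode->i_mapping->invalidate_lock);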