From patchwork Tue Nov 1 00:34:06 2022
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 13026524
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Subject: [PATCH 1/7] xfs: write page faults in iomap are not buffered writes
Date: Tue, 1 Nov 2022 11:34:06 +1100
Message-Id: <20221101003412.3842572-2-david@fromorbit.com>
In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com>
References: <20221101003412.3842572-1-david@fromorbit.com>

When we reserve a delalloc region in xfs_buffered_write_iomap_begin, we mark the iomap as IOMAP_F_NEW so that the write context understands that it allocated the delalloc region. If we then fail that buffered write, xfs_buffered_write_iomap_end() checks for the IOMAP_F_NEW flag and, if it is set, punches out the unused delalloc region that was allocated for the write.

The assumption this code makes is that all buffered write operations that can allocate space are run under an exclusive lock (i_rwsem). This is an invalid assumption: page faults in mmap()d regions call through this same function pair to map the file range being faulted, and they run holding only the inode->i_mapping->invalidate_lock in shared mode.

IOWs, we can have races between page faults and write() calls that fail the nested page cache write operation, and those races result in data loss. That is, the failing iomap_end call will punch out the data that the other racing iomap iteration brought into the page cache. This can be reproduced with generic/34[46] if we arbitrarily fail page cache copy-in operations from write() syscalls.

Code analysis tells us that the iomap_page_mkwrite() function holds the already instantiated and uptodate folio locked across the iomap mapping iterations. Hence the folio cannot be removed from memory whilst we are mapping the range it covers, and as such we do not care if the mapping changes state underneath the iomap iteration loop:

1. If the folio is not already dirty, no writeback races are possible.

2. If we allocated the mapping (delalloc or unwritten), the folio cannot already be dirty. See #1.

3. If the folio is already dirty, it must be up to date. As we hold it locked, it cannot be reclaimed from memory. Hence we always have valid data in the page cache while iterating the mapping.

4. Valid data in the page cache can exist when the underlying mapping is DELALLOC, UNWRITTEN or WRITTEN. Having the mapping change from DELALLOC->UNWRITTEN or UNWRITTEN->WRITTEN does not change the data in the page - it only affects actions if we are initialising a new page. Hence #3 applies and we don't care about these extent map transitions racing with iomap_page_mkwrite().

5. iomap_page_mkwrite() checks for page invalidation races (truncate, hole punch, etc.) after it locks the folio. We also hold the mapping->invalidate_lock here, and hence the mapping cannot change due to extent removal operations while we are iterating the folio.
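As an aside, the shape of the racy workload described above can be sketched from userspace: one thread dirties the file through a shared mapping while a second thread issues write() calls that fail part way through the page cache copy-in. This is only an illustration - the file name, the sizes, and the use of a partially unmapped source buffer to provoke a failed copy-in are assumptions of the sketch, and whether the race window is actually hit depends entirely on timing:

/*
 * Illustrative sketch only (not from the patch series): racing mmap()
 * stores against buffered write() calls whose copy-in faults part way
 * through. Assumes 4k pages and an arbitrary "testfile" path.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PGSZ		4096
#define FILE_SIZE	(1024 * 1024)
#define NLOOPS		100000

static int fd;
static char *map;

/* Dirty random file ranges through the shared mapping (->page_mkwrite path). */
static void *mmap_writer(void *arg)
{
	(void)arg;
	for (int i = 0; i < NLOOPS; i++)
		memset(map + (rand() % (FILE_SIZE - 64)), 'A', 64);
	return NULL;
}

/*
 * Issue write() calls whose source buffer faults on its second page, so the
 * copy-in terminates early (short write or -EFAULT) and the buffered write
 * path runs its ->iomap_end failure handling.
 */
static void *failing_writer(void *arg)
{
	char *buf = mmap(NULL, 2 * PGSZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	(void)arg;
	if (buf == MAP_FAILED)
		return NULL;
	mprotect(buf + PGSZ, PGSZ, PROT_NONE);

	for (int i = 0; i < NLOOPS; i++)
		pwrite(fd, buf, 2 * PGSZ, rand() % (FILE_SIZE - 2 * PGSZ));
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	fd = open("testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0 || ftruncate(fd, FILE_SIZE) < 0) {
		perror("setup");
		return 1;
	}
	map = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_create(&t1, NULL, mmap_writer, NULL);
	pthread_create(&t2, NULL, failing_writer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

A sketch like this only shows the two code paths that share xfs_buffered_write_iomap_begin/_end; actually losing the mmap'd data requires the failed write and the page fault to overlap the same delalloc range at the wrong moment.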
As such, filesystems that don't use bufferheads will never fail the iomap_folio_mkwrite_iter() operation on the current mapping, regardless of whether the iomap should be considered stale. Further, the range we are asked to iterate is limited to the range inside EOF that the folio spans. Hence, for XFS, we will only map the exact range we are asked for, and we will only do speculative preallocation with delalloc if we are mapping a hole at the EOF page. The iterator will consume the entire range of the folio that is within EOF, and anything beyond the EOF block cannot be accessed. We never need to truncate this post-EOF speculative prealloc away in the context of the iomap_page_mkwrite() iterator because if it remains unused we'll remove it when the last reference to the inode goes away. Hence we don't actually need an .iomap_end() cleanup/error handling path at all for iomap_page_mkwrite() for XFS. This means we can separate the page fault processing from the complexity of the .iomap_end() processing in the buffered write path. This also means that the buffered write path will also be able to take the mapping->invalidate_lock as necessary. Signed-off-by: Dave Chinner Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/xfs/xfs_file.c | 2 +- fs/xfs/xfs_iomap.c | 9 +++++++++ fs/xfs/xfs_iomap.h | 1 + 3 files changed, 11 insertions(+), 1 deletion(-) diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index c6c80265c0b2..fee471ca9737 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -1324,7 +1324,7 @@ __xfs_filemap_fault( if (write_fault) { xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED); ret = iomap_page_mkwrite(vmf, - &xfs_buffered_write_iomap_ops); + &xfs_page_mkwrite_iomap_ops); xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED); } else { ret = filemap_fault(vmf); diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 07da03976ec1..5cea069a38b4 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -1187,6 +1187,15 @@ const struct iomap_ops xfs_buffered_write_iomap_ops = { .iomap_end = xfs_buffered_write_iomap_end, }; +/* + * iomap_page_mkwrite() will never fail in a way that requires delalloc extents + * that it allocated to be revoked. Hence we do not need an .iomap_end method + * for this operation. 
+ */ +const struct iomap_ops xfs_page_mkwrite_iomap_ops = { + .iomap_begin = xfs_buffered_write_iomap_begin, +}; + static int xfs_read_iomap_begin( struct inode *inode, diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h index c782e8c0479c..0f62ab633040 100644 --- a/fs/xfs/xfs_iomap.h +++ b/fs/xfs/xfs_iomap.h @@ -47,6 +47,7 @@ xfs_aligned_fsb_count( } extern const struct iomap_ops xfs_buffered_write_iomap_ops; +extern const struct iomap_ops xfs_page_mkwrite_iomap_ops; extern const struct iomap_ops xfs_direct_write_iomap_ops; extern const struct iomap_ops xfs_read_iomap_ops; extern const struct iomap_ops xfs_seek_iomap_ops; From patchwork Tue Nov 1 00:34:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 13026522 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 72CF4ECAAA1 for ; Tue, 1 Nov 2022 00:34:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229902AbiKAAeW (ORCPT ); Mon, 31 Oct 2022 20:34:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229556AbiKAAeT (ORCPT ); Mon, 31 Oct 2022 20:34:19 -0400 Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D8EAF1583D for ; Mon, 31 Oct 2022 17:34:18 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id i3so12084075pfc.11 for ; Mon, 31 Oct 2022 17:34:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fromorbit-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=UpYORrtHPEFkdqKAbT3ik18Ay5BZZZbilDStDtNjXS8=; b=61c904ATR7ihnICXDiY7DEoT0knpUcuEM9TKOou3O4466QVWdIbsTUVngQkvubsPDO M3Iv8gwMu6bHf81ZdtPGxzFbq1mFZB2G8tCEZ6iiNMYr6EygMRVKVz2p1lRbJxnbRUsr FsfJhYepQjsQowj5NUJQFn+/tllCIcGYE1koACso5b+lKk9Vs1m89ogeG9Psx1Wp21sq 3KmKSoC9hhKCbbBuL5DpCqLGErmY03TJaSEjkhHnAWjzxsAbLPfKhWwAQYU2wrIT5L89 tSu8aaVwLcYS+jnHMIP5I7cXZUnM13M43pzceerhkcYunsDzlt6RqZepotmBXMJryMvR wz5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=UpYORrtHPEFkdqKAbT3ik18Ay5BZZZbilDStDtNjXS8=; b=13uXbQ+JusoKvZDidbaLmWPzrfKx3IJhkJB3mUyKdzsuU4ErmCKYKzCjw2JbAjXGe1 D6LGD6GnhfdcgUBDzx+uT/1S7yHmEpl9zp53myzyDbBl3XZC0ig38gR3FzajGG/bwTnE 3sef/KDbTaM61ydqrXx/x8MU/cR69w/dD/fOEiD/Djk1FDq8lx5F2CSTlzNhnjvPfkNT Rj5efkTidjB7xmW12Fn35RWKZ4KlB77owO4kVKhx6NtfiE80jBglHsHGsZ55LGOFmgcH /b537CqxknsyNWJ8hHyFk0bcqW3XCZYvpo/Xee3qKavHW3SU18SQz5zhBaVa9JlnFL9A O87A== X-Gm-Message-State: ACrzQf3MxAVl/aXx+EQHk8+P2Nwf4lz+DxnyvXRs95PcgF12SlPyL83d PHO1ssFXmycAPquf0myf2gEB3BjCV9KinQ== X-Google-Smtp-Source: AMsMyM4aCmFhq+ySHVIkk6+EXgD7HL2pSuSBBmcslufMmYLwVPJdsei/QPpBdPSkaOv1iDMck1TyNw== X-Received: by 2002:a05:6a02:186:b0:439:49a3:479b with SMTP id bj6-20020a056a02018600b0043949a3479bmr15024664pgb.171.1667262858203; Mon, 31 Oct 2022 17:34:18 -0700 (PDT) Received: from dread.disaster.area (pa49-181-106-210.pa.nsw.optusnet.com.au. 
[49.181.106.210]) by smtp.gmail.com with ESMTPSA id j1-20020a170902c3c100b0017dd8c8009esm5003862plj.4.2022.10.31.17.34.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Oct 2022 17:34:17 -0700 (PDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1opfEN-008muJ-4m; Tue, 01 Nov 2022 11:34:15 +1100 Received: from dave by discord.disaster.area with local (Exim 4.96) (envelope-from ) id 1opfEN-00G7da-0N; Tue, 01 Nov 2022 11:34:15 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Subject: [PATCH 2/7] xfs: punching delalloc extents on write failure is racy Date: Tue, 1 Nov 2022 11:34:07 +1100 Message-Id: <20221101003412.3842572-3-david@fromorbit.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com> References: <20221101003412.3842572-1-david@fromorbit.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner xfs_buffered_write_iomap_end() has a comment about the safety of punching delalloc extents based holding the IOLOCK_EXCL. This comment is wrong, and punching delalloc extents is not race free. When we punch out a delalloc extent after a write failure in xfs_buffered_write_iomap_end(), we punch out the page cache with truncate_pagecache_range() before we punch out the delalloc extents. At this point, we only hold the IOLOCK_EXCL, so there is nothing stopping mmap() write faults racing with this cleanup operation, reinstantiating a folio over the range we are about to punch and hence requiring the delalloc extent to be kept. If this race condition is hit, we can end up with a dirty page in the page cache that has no delalloc extent or space reservation backing it. This leads to bad things happening at writeback time. To avoid this race condition, we need the page cache truncation to be atomic w.r.t. the extent manipulation. We can do this by holding the mapping->invalidate_lock exclusively across this operation - this will prevent new pages from being inserted into the page cache whilst we are removing the pages and the backing extent and space reservation. Taking the mapping->invalidate_lock exclusively in the buffered write IO path is safe - it naturally nests inside the IOLOCK (see truncate and fallocate paths). iomap_zero_range() can be called from under the mapping->invalidate_lock (from the truncate path via either xfs_zero_eof() or xfs_truncate_page(), but iomap_zero_iter() will not instantiate new delalloc pages (because it skips holes) and hence will not ever need to punch out delalloc extents on failure. Fix the locking issue, and clean up the code logic a little to avoid unnecessary work if we didn't allocate the delalloc extent or wrote the entire region we allocated. Signed-off-by: Dave Chinner Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/xfs/xfs_iomap.c | 41 +++++++++++++++++++++++------------------ 1 file changed, 23 insertions(+), 18 deletions(-) diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 5cea069a38b4..a2e45ea1b0cb 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -1147,6 +1147,10 @@ xfs_buffered_write_iomap_end( written = 0; } + /* If we didn't reserve the blocks, we're not allowed to punch them. */ + if (!(iomap->flags & IOMAP_F_NEW)) + return 0; + /* * start_fsb refers to the first unused block after a short write. 
If * nothing was written, round offset down to point at the first block in @@ -1158,27 +1162,28 @@ xfs_buffered_write_iomap_end( start_fsb = XFS_B_TO_FSB(mp, offset + written); end_fsb = XFS_B_TO_FSB(mp, offset + length); + /* Nothing to do if we've written the entire delalloc extent */ + if (start_fsb >= end_fsb) + return 0; + /* - * Trim delalloc blocks if they were allocated by this write and we - * didn't manage to write the whole range. - * - * We don't need to care about racing delalloc as we hold i_mutex - * across the reserve/allocate/unreserve calls. If there are delalloc - * blocks in the range, they are ours. + * Lock the mapping to avoid races with page faults re-instantiating + * folios and dirtying them via ->page_mkwrite between the page cache + * truncation and the delalloc extent removal. Failing to do this can + * leave dirty pages with no space reservation in the cache. */ - if ((iomap->flags & IOMAP_F_NEW) && start_fsb < end_fsb) { - truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb), - XFS_FSB_TO_B(mp, end_fsb) - 1); - - error = xfs_bmap_punch_delalloc_range(ip, start_fsb, - end_fsb - start_fsb); - if (error && !xfs_is_shutdown(mp)) { - xfs_alert(mp, "%s: unable to clean up ino %lld", - __func__, ip->i_ino); - return error; - } + filemap_invalidate_lock(inode->i_mapping); + truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb), + XFS_FSB_TO_B(mp, end_fsb) - 1); + + error = xfs_bmap_punch_delalloc_range(ip, start_fsb, + end_fsb - start_fsb); + filemap_invalidate_unlock(inode->i_mapping); + if (error && !xfs_is_shutdown(mp)) { + xfs_alert(mp, "%s: unable to clean up ino %lld", + __func__, ip->i_ino); + return error; } - return 0; } From patchwork Tue Nov 1 00:34:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 13026523 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C618DFA3748 for ; Tue, 1 Nov 2022 00:34:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229487AbiKAAeY (ORCPT ); Mon, 31 Oct 2022 20:34:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57066 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229868AbiKAAeV (ORCPT ); Mon, 31 Oct 2022 20:34:21 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BBD7D15833 for ; Mon, 31 Oct 2022 17:34:19 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id o7so8595434pjj.1 for ; Mon, 31 Oct 2022 17:34:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fromorbit-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=QPTCsd0eA+I5KaPEF+DZNzfuYfjq5r3zeLN8iU677T4=; b=BUtQZ7nzbn6cI6DNVzSo069FmCM3AST52NoUCQ6PihDGNsgfP/r5BeNhqgdZ8Syoec qOqen9E/xi604EjlH/Nw65nTQ+fBezIy7bC1vSgCAM+vm+rUcK86ZHGO1h7bkqQ7ezq4 4wSh8lOsa2+/tj0xYAi+bdrzaeb1vxbyWua3fzap7FHrcNhSq0WA1MuE/MGTD6a0H08e SXtqFDdDgIBY11WpPOMhgYjLTJTmXty+vHV/Ej654qKSwRFsgOJLrH9BgHUkMMu2Bv7W dVSgOK0kmQ2YiFsqdZRHn6jQUYRCK89C8O3PN1D6CoMH/dd6kKhi94Y8ts4G+/5o2yx4 4aAQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QPTCsd0eA+I5KaPEF+DZNzfuYfjq5r3zeLN8iU677T4=; b=OWjIbojK5DxpxG74ZXAsxWf51gOpv/rv+/siQOk+qQfqsNX9OGRFOken8pbiJet18v BdNYhfNQqlIL3Rk8/kDF65hsnQTM6ayDQMm+SkUaloXsMcU4fpa69eMdq/C1xdE9Drvg /9jNFKA5Wi73+3alVP51VaLTO0+rMgpk5kEXv4iT3esCzsIxeelo36+k9plkzIMsre9P XGqaYENaRGPRI5fonqCmKJmEctZUhEVYtlo7BN/NiLqtw3Rs6b0ZZdPj9A4x7cTCveVU 7AnE7Q8Dkaf+SElPOPkPAnYmw+XB/mO9HDV2V0fh5iMypXtSP5/z77kEQ3BQeOfXw0HI IaUw== X-Gm-Message-State: ACrzQf21B2rm9Kho2KaMA9TnOUPywSg2jfQu63N787QM8dYGRdbwyKOJ d+Vjfs3iqF68NgrVcAZfzdZjwg== X-Google-Smtp-Source: AMsMyM7d+400J9eughdJc4e1QrBYICtQju6nFmCLwPMMv85b1tskyT99l68rGzu94HWFab6/Atakiw== X-Received: by 2002:a17:90b:388a:b0:213:d1ce:2592 with SMTP id mu10-20020a17090b388a00b00213d1ce2592mr10777549pjb.53.1667262859158; Mon, 31 Oct 2022 17:34:19 -0700 (PDT) Received: from dread.disaster.area (pa49-181-106-210.pa.nsw.optusnet.com.au. [49.181.106.210]) by smtp.gmail.com with ESMTPSA id b16-20020a170902e95000b00178b06fea7asm5006840pll.148.2022.10.31.17.34.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Oct 2022 17:34:17 -0700 (PDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1opfEN-008muM-5e; Tue, 01 Nov 2022 11:34:15 +1100 Received: from dave by discord.disaster.area with local (Exim 4.96) (envelope-from ) id 1opfEN-00G7de-0T; Tue, 01 Nov 2022 11:34:15 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Subject: [PATCH 3/7] xfs: use byte ranges for write cleanup ranges Date: Tue, 1 Nov 2022 11:34:08 +1100 Message-Id: <20221101003412.3842572-4-david@fromorbit.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com> References: <20221101003412.3842572-1-david@fromorbit.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner xfs_buffered_write_iomap_end() currently converts the byte ranges passed to it to filesystem blocks to pass them to the bmap code to punch out delalloc blocks, but then has to convert filesytem blocks back to byte ranges for page cache truncate. We're about to make the page cache truncate go away and replace it with a page cache walk, so having to convert everything to/from/to filesystem blocks is messy and error-prone. It is much easier to pass around byte ranges and convert to page indexes and/or filesystem blocks only where those units are needed. In preparation for the page cache walk being added, add a helper that converts byte ranges to filesystem blocks and calls xfs_bmap_punch_delalloc_range() and convert xfs_buffered_write_iomap_end() to calculate limits in byte ranges. Signed-off-by: Dave Chinner Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/xfs/xfs_iomap.c | 40 +++++++++++++++++++++++++--------------- 1 file changed, 25 insertions(+), 15 deletions(-) diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index a2e45ea1b0cb..7bb55dbc19d3 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -1120,6 +1120,20 @@ xfs_buffered_write_iomap_begin( return error; } +static int +xfs_buffered_write_delalloc_punch( + struct inode *inode, + loff_t start_byte, + loff_t end_byte) +{ + struct xfs_mount *mp = XFS_M(inode->i_sb); + xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, start_byte); + xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, end_byte); + + return xfs_bmap_punch_delalloc_range(XFS_I(inode), start_fsb, + end_fsb - start_fsb); +} + static int xfs_buffered_write_iomap_end( struct inode *inode, @@ -1129,10 +1143,9 @@ xfs_buffered_write_iomap_end( unsigned flags, struct iomap *iomap) { - struct xfs_inode *ip = XFS_I(inode); - struct xfs_mount *mp = ip->i_mount; - xfs_fileoff_t start_fsb; - xfs_fileoff_t end_fsb; + struct xfs_mount *mp = XFS_M(inode->i_sb); + loff_t start_byte; + loff_t end_byte; int error = 0; if (iomap->type != IOMAP_DELALLOC) @@ -1157,13 +1170,13 @@ xfs_buffered_write_iomap_end( * the range. */ if (unlikely(!written)) - start_fsb = XFS_B_TO_FSBT(mp, offset); + start_byte = round_down(offset, mp->m_sb.sb_blocksize); else - start_fsb = XFS_B_TO_FSB(mp, offset + written); - end_fsb = XFS_B_TO_FSB(mp, offset + length); + start_byte = round_up(offset + written, mp->m_sb.sb_blocksize); + end_byte = round_up(offset + length, mp->m_sb.sb_blocksize); /* Nothing to do if we've written the entire delalloc extent */ - if (start_fsb >= end_fsb) + if (start_byte >= end_byte) return 0; /* @@ -1173,15 +1186,12 @@ xfs_buffered_write_iomap_end( * leave dirty pages with no space reservation in the cache. 
*/ filemap_invalidate_lock(inode->i_mapping); - truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb), - XFS_FSB_TO_B(mp, end_fsb) - 1); - - error = xfs_bmap_punch_delalloc_range(ip, start_fsb, - end_fsb - start_fsb); + truncate_pagecache_range(inode, start_byte, end_byte - 1); + error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte); filemap_invalidate_unlock(inode->i_mapping); if (error && !xfs_is_shutdown(mp)) { - xfs_alert(mp, "%s: unable to clean up ino %lld", - __func__, ip->i_ino); + xfs_alert(mp, "%s: unable to clean up ino 0x%llx", + __func__, XFS_I(inode)->i_ino); return error; } return 0; From patchwork Tue Nov 1 00:34:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 13026525 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DBB0FA374A for ; Tue, 1 Nov 2022 00:34:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229880AbiKAAe2 (ORCPT ); Mon, 31 Oct 2022 20:34:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229877AbiKAAeV (ORCPT ); Mon, 31 Oct 2022 20:34:21 -0400 Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com [IPv6:2607:f8b0:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 11B8415A2D for ; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) Received: by mail-pg1-x533.google.com with SMTP id b5so12101937pgb.6 for ; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fromorbit-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Ao3r4ObshVMOTeg9d8XwDGgMsxE9xyLmCjV3U83plkg=; b=SNN8joXJkjRdLzgyXplE9ZuUHRkzP5ljlTwA5WE9MkbLI+OAMUWPqg9N+AjplqhD2v /PF1Jc40ShWpaSmM5UenYi6+3DX5hqNnKBIdV1jdBj/CxDg5cfL/FlZQfcrppJMEn6kM 4jV1Crr+WL1/sdKT5GJzTgDT8StlHfxGDW8q0QuZ32CuL5WZfI1qgG+fudkE0eshl5RS VTJ63u9/nIU46pnWzGOg0igTeJTUj0L+oDJGSbuXSMLMaPeq5UNGrWrZvVIh282QP0Ni P8N3YSNM0bRZCjEzWlCoLKF2zWqTyMSZEuWYCW8j7we6WnH7gEdu4NzIm+InDc6ZlEt+ Ss2g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Ao3r4ObshVMOTeg9d8XwDGgMsxE9xyLmCjV3U83plkg=; b=IVtHfL31yEF+0v3pnAEYdD4WxsOobXwf3sY0Xbg8inXHOdly0Dq4usZFUlAJ/3jUkw HYnZ3e4zkb3xwgniMjMRJrXTD/p1RL8jjX7MaZoXAPWxENlOXQuIETDKrhSMpZqYzIJC 5P4RobJYttVtmVgsG8Ili0Vc2riDsRrGs3KEBZl5P2b79LHhfkT7zo4DmmjBESJZ5C3r GPrp0x2LEB0hBDR78n+b6z8XZLMEaDOQkcxweTXqpQLJNq2/EwC/QxuP19FS9q5Kclzl 6K9URqd5zyERSZnFaZS+nQ019dC/lgN2zs7egdx4d23lTAwE0QdUG0LeuRmm6PiHc27X 90aw== X-Gm-Message-State: ACrzQf3KWW3UnmmPNH2p7AZWrAGdEFVeQDwRBZ5rWiAdDxacCj+SFsKy xqywTOZLt78fwLMSgBLWCm3qOA== X-Google-Smtp-Source: AMsMyM6Lm8qi2XI9ok+jzcb3I1bQY1v8bc4roM4WJXGuqkGT9htWSu8G2eSRIHPSbhL7Gl2Z6b7z+g== X-Received: by 2002:a05:6a00:a21:b0:562:99d6:c30a with SMTP id p33-20020a056a000a2100b0056299d6c30amr16675494pfh.35.1667262859483; Mon, 31 Oct 2022 17:34:19 -0700 (PDT) Received: from dread.disaster.area (pa49-181-106-210.pa.nsw.optusnet.com.au. 
[49.181.106.210]) by smtp.gmail.com with ESMTPSA id iz18-20020a170902ef9200b00187022627d7sm5013620plb.36.2022.10.31.17.34.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Oct 2022 17:34:17 -0700 (PDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1opfEN-008muO-6c; Tue, 01 Nov 2022 11:34:15 +1100 Received: from dave by discord.disaster.area with local (Exim 4.96) (envelope-from ) id 1opfEN-00G7dj-0Z; Tue, 01 Nov 2022 11:34:15 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Subject: [PATCH 4/7] xfs: buffered write failure should not truncate the page cache Date: Tue, 1 Nov 2022 11:34:09 +1100 Message-Id: <20221101003412.3842572-5-david@fromorbit.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com> References: <20221101003412.3842572-1-david@fromorbit.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner xfs_buffered_write_iomap_end() currently invalidates the page cache over the unused range of the delalloc extent it allocated. While the write allocated the delalloc extent, it does not own it exclusively as the write does not hold any locks that prevent either writeback or mmap page faults from changing the state of either the page cache or the extent state backing this range. Whilst xfs_bmap_punch_delalloc_range() already handles races in extent conversion - it will only punch out delalloc extents and it ignores any other type of extent - the page cache truncate does not discriminate between data written by this write or some other task. As a result, truncating the page cache can result in data corruption if the write races with mmap modifications to the file over the same range. generic/346 exercises this workload, and if we randomly fail writes (as will happen when iomap gets stale iomap detection later in the patchset), it will randomly corrupt the file data because it removes data written by mmap() in the same page as the write() that failed. Hence we do not want to punch out the page cache over the range of the extent we failed to write to - what we actually need to do is detect the ranges that have dirty data in cache over them and *not punch them out*. TO do this, we have to walk the page cache over the range of the delalloc extent we want to remove. This is made complex by the fact we have to handle partially up-to-date folios correctly and this can happen even when the FSB size == PAGE_SIZE because we now support multi-page folios in the page cache. Because we are only interested in discovering the edges of data ranges in the page cache (i.e. hole-data boundaries) we can make use of mapping_seek_hole_data() to find those transitions in the page cache. As we hold the invalidate_lock, we know that the boundaries are not going to change while we walk the range. This interface is also byte-based and is sub-page block aware, so we can find the data ranges in the cache based on byte offsets rather than page, folio or fs block sized chunks. This greatly simplifies the logic of finding dirty cached ranges in the page cache. Once we've identified a range that contains cached data, we can then iterate the range folio by folio. This allows us to determine if the data is dirty and hence perform the correct delalloc extent punching operations. 
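The data/hole walking pattern itself can be illustrated from userspace, where lseek(SEEK_DATA)/lseek(SEEK_HOLE) plays the role that mapping_seek_hole_data() plays in the kernel code. A stand-alone sketch (not part of the patch - it walks a file's on-disk extents rather than the page cache, and the file name argument is arbitrary):

/*
 * Userspace analogue of the byte-granularity data/hole walk described above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	off_t pos = 0, data, hole;
	int fd = open(argc > 1 ? argv[1] : "testfile", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (;;) {
		data = lseek(fd, pos, SEEK_DATA);	/* start of next data range */
		if (data < 0)
			break;				/* ENXIO: no data left */
		hole = lseek(fd, data, SEEK_HOLE);	/* end of that data range */
		if (hole < 0)
			break;
		/* [data, hole) is a contiguous data range - process it here */
		printf("data range: %lld..%lld\n", (long long)data, (long long)hole);
		pos = hole;
	}
	close(fd);
	return 0;
}

The kernel-side loop added below in xfs_buffered_write_delalloc_release() has the same shape, with xfs_buffered_write_delalloc_scan() doing the per-folio work at the point where the comment sits in the sketch.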
The seek interface we use to iterate data ranges will give us sub-folio start/end granularity, so we may end up looking up the same folio multiple times as the seek interface iterates across each discontiguous data region in the folio. Signed-off-by: Dave Chinner --- fs/xfs/xfs_iomap.c | 151 ++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 141 insertions(+), 10 deletions(-) diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 7bb55dbc19d3..2d48fcc7bd6f 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -1134,6 +1134,146 @@ xfs_buffered_write_delalloc_punch( end_fsb - start_fsb); } +/* + * Scan the data range passed to us for dirty page cache folios. If we find a + * dirty folio, punch out the preceeding range and update the offset from which + * the next punch will start from. + * + * We can punch out clean pages because they either contain data that has been + * written back - in which case the delalloc punch over that range is a no-op - + * or they have been read faults in which case they contain zeroes and we can + * remove the delalloc backing range and any new writes to those pages will do + * the normal hole filling operation... + * + * This makes the logic simple: we only need to keep the delalloc extents only + * over the dirty ranges of the page cache. + */ +static int +xfs_buffered_write_delalloc_scan( + struct inode *inode, + loff_t *punch_start_byte, + loff_t start_byte, + loff_t end_byte) +{ + loff_t offset = start_byte; + + while (offset < end_byte) { + struct folio *folio; + + /* grab locked page */ + folio = filemap_lock_folio(inode->i_mapping, offset >> PAGE_SHIFT); + if (!folio) { + offset = ALIGN_DOWN(offset, PAGE_SIZE) + PAGE_SIZE; + continue; + } + + /* if dirty, punch up to offset */ + if (folio_test_dirty(folio)) { + if (offset > *punch_start_byte) { + int error; + + error = xfs_buffered_write_delalloc_punch(inode, + *punch_start_byte, offset); + if (error) { + folio_unlock(folio); + folio_put(folio); + return error; + } + } + + /* + * Make sure the next punch start is correctly bound to + * the end of this data range, not the end of the folio. + */ + *punch_start_byte = min_t(loff_t, end_byte, + folio_next_index(folio) << PAGE_SHIFT); + } + + /* move offset to start of next folio in range */ + offset = folio_next_index(folio) << PAGE_SHIFT; + folio_unlock(folio); + folio_put(folio); + } + return 0; +} + +/* + * Punch out all the delalloc blocks in the range given except for those that + * have dirty data still pending in the page cache - those are going to be + * written and so must still retain the delalloc backing for writeback. + * + * As we are scanning the page cache for data, we don't need to reimplement the + * wheel - mapping_seek_hole_data() does exactly what we need to identify the + * start and end of data ranges correctly even for sub-folio block sizes. This + * byte range based iteration is especially convenient because it means we don't + * have to care about variable size folios, nor where the start or end of the + * data range lies within a folio, if they lie within the same folio or even if + * there are multiple discontiguous data ranges within the folio. + */ +static int +xfs_buffered_write_delalloc_release( + struct inode *inode, + loff_t start_byte, + loff_t end_byte) +{ + loff_t punch_start_byte = start_byte; + int error = 0; + + /* + * Lock the mapping to avoid races with page faults re-instantiating + * folios and dirtying them via ->page_mkwrite whilst we walk the + * cache and perform delalloc extent removal. 
Failing to do this can + * leave dirty pages with no space reservation in the cache. + */ + filemap_invalidate_lock(inode->i_mapping); + while (start_byte < end_byte) { + loff_t data_end; + + start_byte = mapping_seek_hole_data(inode->i_mapping, + start_byte, end_byte, SEEK_DATA); + /* + * If there is no more data to scan, all that is left is to + * punch out the remaining range. + */ + if (start_byte == -ENXIO || start_byte == end_byte) + break; + if (start_byte < 0) { + error = start_byte; + goto out_unlock; + } + ASSERT(start_byte >= punch_start_byte); + ASSERT(start_byte < end_byte); + + /* + * We find the end of this contiguous cached data range by + * seeking from start_byte to the beginning of the next hole. + */ + data_end = mapping_seek_hole_data(inode->i_mapping, start_byte, + end_byte, SEEK_HOLE); + if (data_end < 0) { + error = data_end; + goto out_unlock; + } + ASSERT(data_end > start_byte); + ASSERT(data_end <= end_byte); + + error = xfs_buffered_write_delalloc_scan(inode, + &punch_start_byte, start_byte, data_end); + if (error) + goto out_unlock; + + /* The next data search starts at the end of this one. */ + start_byte = data_end; + } + + if (punch_start_byte < end_byte) + error = xfs_buffered_write_delalloc_punch(inode, + punch_start_byte, end_byte); +out_unlock: + filemap_invalidate_unlock(inode->i_mapping); + return error; +} + static int xfs_buffered_write_iomap_end( struct inode *inode, @@ -1179,16 +1319,7 @@ xfs_buffered_write_iomap_end( if (start_byte >= end_byte) return 0; - /* - * Lock the mapping to avoid races with page faults re-instantiating - * folios and dirtying them via ->page_mkwrite between the page cache - * truncation and the delalloc extent removal. Failing to do this can - * leave dirty pages with no space reservation in the cache. 
- */ - filemap_invalidate_lock(inode->i_mapping); - truncate_pagecache_range(inode, start_byte, end_byte - 1); - error = xfs_buffered_write_delalloc_punch(inode, start_byte, end_byte); - filemap_invalidate_unlock(inode->i_mapping); + error = xfs_buffered_write_delalloc_release(inode, start_byte, end_byte); if (error && !xfs_is_shutdown(mp)) { xfs_alert(mp, "%s: unable to clean up ino 0x%llx", __func__, XFS_I(inode)->i_ino); From patchwork Tue Nov 1 00:34:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 13026526 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5AE78ECAAA1 for ; Tue, 1 Nov 2022 00:34:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229779AbiKAAea (ORCPT ); Mon, 31 Oct 2022 20:34:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57078 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229881AbiKAAeV (ORCPT ); Mon, 31 Oct 2022 20:34:21 -0400 Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5159015A2E for ; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id b29so12074659pfp.13 for ; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fromorbit-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=7bf5THXrH6S/FGzXJmin6XJ8+TXuzCVt/TSJE6RMAhU=; b=ef4Uis/ZfSiXwJVohLnEjB5VzQxyDYpdpfmhhCd09jDqMJqI7PFqHnBHbaPWpnr/if BWwJLsJHD44Qm7tu2KEpvVegkXfU8gAq1ZbmznHLZ2xsfYEXWdNWhXRWHgAcWsS17ztX FHMvNGD9gJ7Ejd2nYev9Mqw4yQxAC+71WRZa9NkSOVgXMWukXCXTAhioQH0slB1AVDi6 +rezk4oN0558o/Ln8uvQxQt60b1+Os7yHRF/qx73LiaR7by9b/vAkQGXXUiar7RUq1A9 XE3kkJTnfURqdUwWxDcvy3PsmkkN4v2onwMJ79z3wVaAgQ9lxlSYcE/H8k3QbHnSGXV6 OAFA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=7bf5THXrH6S/FGzXJmin6XJ8+TXuzCVt/TSJE6RMAhU=; b=fJyO6p9azVOv9eS93zI96MPrqlXxmDgDxxCpafGBU3cIvxtyaP/TUD7FhLCCgibj0p ZPgoiBTdwrn0L7hUURrJfHbupd6SRIgHkniktPwgzdvMakITziQWkEr3B8SkMqI3ySej ZZEHCBqFAqGcjWLh289sFjt3ahqe7o4kEW1GNLY/4KCevOZ44cEW0Q3pgtI0c8jHY47P tpDWtkuTuZzS6YvZ9dE/xtZsq9EMb9g3lDuwgQqyRSDD44NAfimpHb2pCfgqnFxXImMe tqFWcTTVS/qlDJ3nyg5whF6EgD0suotHpTpeuBhrjufCeJHj3MS5s4Cjay+UK04OIAfe fcpA== X-Gm-Message-State: ACrzQf0ckDUh+TXflB4eoaDrGAvlqyeYZgCiTz/8f855RDawFlYbJugF icF1VgY7nEClJAOGcr2Z8RLRXw/UxdHvkA== X-Google-Smtp-Source: AMsMyM5Ie78Rb0XhFA73mieyOhTwfHnt4JZptfLX7K5SSyft/xlkqMuWxF1S0muAeTYsIH9XI9VOag== X-Received: by 2002:a63:574c:0:b0:46f:c9e8:3d1c with SMTP id h12-20020a63574c000000b0046fc9e83d1cmr34494pgm.170.1667262859813; Mon, 31 Oct 2022 17:34:19 -0700 (PDT) Received: from dread.disaster.area (pa49-181-106-210.pa.nsw.optusnet.com.au. 
[49.181.106.210]) by smtp.gmail.com with ESMTPSA id n8-20020a17090a160800b002135de3013fsm4757047pja.32.2022.10.31.17.34.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Oct 2022 17:34:18 -0700 (PDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1opfEN-008muQ-7T; Tue, 01 Nov 2022 11:34:15 +1100 Received: from dave by discord.disaster.area with local (Exim 4.96) (envelope-from ) id 1opfEN-00G7dp-0f; Tue, 01 Nov 2022 11:34:15 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Subject: [PATCH 5/7] iomap: write iomap validity checks Date: Tue, 1 Nov 2022 11:34:10 +1100 Message-Id: <20221101003412.3842572-6-david@fromorbit.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com> References: <20221101003412.3842572-1-david@fromorbit.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner A recent multithreaded write data corruption has been uncovered in the iomap write code. The core of the problem is partial folio writes can be flushed to disk while a new racing write can map it and fill the rest of the page: writeback new write allocate blocks blocks are unwritten submit IO ..... map blocks iomap indicates UNWRITTEN range loop { lock folio copyin data ..... IO completes runs unwritten extent conv blocks are marked written get next folio } Now add memory pressure such that memory reclaim evicts the partially written folio that has already been written to disk. When the new write finally gets to the last partial page of the new write, it does not find it in cache, so it instantiates a new page, sees the iomap is unwritten, and zeros the part of the page that it does not have data from. This overwrites the data on disk that was originally written. The full description of the corruption mechanism can be found here: https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/ To solve this problem, we need to check whether the iomap is still valid after we lock each folio during the write. We have to do it after we lock the page so that we don't end up with state changes occurring while we wait for the folio to be locked. Hence we need a mechanism to be able to check that the cached iomap is still valid (similar to what we already do in buffered writeback), and we need a way for ->begin_write to back out and tell the high level iomap iterator that we need to remap the remaining write range. 
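The mechanism boils down to a sequence-count check: record a generation number when the mapping is handed out, bump it whenever the extent map changes, and only trust the cached mapping after the folio is locked if that number is unchanged (the XFS patch later in the series does this with the inode fork's if_seq). A stand-alone sketch of the pattern - not kernel code, and every name in it is invented for the illustration:

/*
 * Generic "validate after locking" pattern. The structures and function
 * names here are made up; they only illustrate the sequence-count scheme.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct extent_map {
	atomic_uint seq;		/* bumped on every extent map change */
	/* ... the extent tree itself would live here ... */
};

struct cached_mapping {
	unsigned int seq;		/* generation sampled when built */
	long long offset, length;
};

static void build_mapping(struct extent_map *em, struct cached_mapping *map,
			  long long offset, long long length)
{
	map->offset = offset;
	map->length = length;
	map->seq = atomic_load(&em->seq);	/* remember the generation */
}

static void change_extent_map(struct extent_map *em)
{
	/* e.g. unwritten conversion at IO completion, punch, reclaim, ... */
	atomic_fetch_add(&em->seq, 1);
}

/* Called after the folio lock is taken: can the cached mapping be used? */
static bool mapping_still_valid(struct extent_map *em,
				const struct cached_mapping *map)
{
	return atomic_load(&em->seq) == map->seq;
}

int main(void)
{
	struct extent_map em = { .seq = 0 };
	struct cached_mapping map;

	build_mapping(&em, &map, 0, 4096);
	change_extent_map(&em);		/* concurrent modification */
	printf("mapping is %s\n",
	       mapping_still_valid(&em, &map) ? "still valid" : "stale - remap");
	return 0;
}

In the series this check becomes the new ->iomap_valid() hook: iomap_write_begin() calls it with the folio locked, and a false return sets IOMAP_F_STALE so the iterator remaps the remaining range instead of using the stale mapping.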
Signed-off-by: Dave Chinner --- fs/iomap/buffered-io.c | 53 +++++++++++++++++++++++++++++++++--------- fs/iomap/iter.c | 19 ++++++++++++++- include/linux/iomap.h | 17 ++++++++++++++ 3 files changed, 77 insertions(+), 12 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 91ee0b308e13..d3c565aa29f8 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -584,8 +584,9 @@ static int iomap_write_begin_inline(const struct iomap_iter *iter, return iomap_read_inline_data(iter, folio); } -static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, - size_t len, struct folio **foliop) +static int iomap_write_begin(struct iomap_iter *iter, + const struct iomap_ops *ops, loff_t pos, size_t len, + struct folio **foliop) { const struct iomap_page_ops *page_ops = iter->iomap.page_ops; const struct iomap *srcmap = iomap_iter_srcmap(iter); @@ -618,6 +619,27 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, status = (iter->flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOMEM; goto out_no_page; } + + /* + * Now we have a locked folio, before we do anything with it we need to + * check that the iomap we have cached is not stale. The inode extent + * mapping can change due to concurrent IO in flight (e.g. + * IOMAP_UNWRITTEN state can change and memory reclaim could have + * reclaimed a previously partially written page at this index after IO + * completion before this write reaches this file offset) and hence we + * could do the wrong thing here (zero a page range incorrectly or fail + * to zero) and corrupt data. + */ + if (ops->iomap_valid) { + bool iomap_valid = ops->iomap_valid(iter->inode, &iter->iomap); + + if (!iomap_valid) { + iter->iomap.flags |= IOMAP_F_STALE; + status = 0; + goto out_unlock; + } + } + if (pos + len > folio_pos(folio) + folio_size(folio)) len = folio_pos(folio) + folio_size(folio) - pos; @@ -727,7 +749,8 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, return ret; } -static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) +static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i, + const struct iomap_ops *ops) { loff_t length = iomap_length(iter); loff_t pos = iter->pos; @@ -770,9 +793,11 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) break; } - status = iomap_write_begin(iter, pos, bytes, &folio); + status = iomap_write_begin(iter, ops, pos, bytes, &folio); if (unlikely(status)) break; + if (iter->iomap.flags & IOMAP_F_STALE) + break; page = folio_file_page(folio, pos >> PAGE_SHIFT); if (mapping_writably_mapped(mapping)) @@ -825,14 +850,15 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i, iter.flags |= IOMAP_NOWAIT; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_write_iter(&iter, i); + iter.processed = iomap_write_iter(&iter, i, ops); if (iter.pos == iocb->ki_pos) return ret; return iter.pos - iocb->ki_pos; } EXPORT_SYMBOL_GPL(iomap_file_buffered_write); -static loff_t iomap_unshare_iter(struct iomap_iter *iter) +static loff_t iomap_unshare_iter(struct iomap_iter *iter, + const struct iomap_ops *ops) { struct iomap *iomap = &iter->iomap; const struct iomap *srcmap = iomap_iter_srcmap(iter); @@ -853,9 +879,11 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length); struct folio *folio; - status = iomap_write_begin(iter, pos, bytes, &folio); + status = iomap_write_begin(iter, ops, pos, bytes, &folio); if 
(unlikely(status)) return status; + if (iter->iomap.flags & IOMAP_F_STALE) + break; status = iomap_write_end(iter, pos, bytes, bytes, folio); if (WARN_ON_ONCE(status == 0)) @@ -886,12 +914,13 @@ iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len, int ret; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_unshare_iter(&iter); + iter.processed = iomap_unshare_iter(&iter, ops); return ret; } EXPORT_SYMBOL_GPL(iomap_file_unshare); -static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero) +static loff_t iomap_zero_iter(struct iomap_iter *iter, + const struct iomap_ops *ops, bool *did_zero) { const struct iomap *srcmap = iomap_iter_srcmap(iter); loff_t pos = iter->pos; @@ -908,9 +937,11 @@ static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero) size_t offset; size_t bytes = min_t(u64, SIZE_MAX, length); - status = iomap_write_begin(iter, pos, bytes, &folio); + status = iomap_write_begin(iter, ops, pos, bytes, &folio); if (status) return status; + if (iter->iomap.flags & IOMAP_F_STALE) + break; offset = offset_in_folio(folio, pos); if (bytes > folio_size(folio) - offset) @@ -946,7 +977,7 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero, int ret; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_zero_iter(&iter, did_zero); + iter.processed = iomap_zero_iter(&iter, ops, did_zero); return ret; } EXPORT_SYMBOL_GPL(iomap_zero_range); diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c index a1c7592d2ade..79a0614eaab7 100644 --- a/fs/iomap/iter.c +++ b/fs/iomap/iter.c @@ -7,12 +7,28 @@ #include #include "trace.h" +/* + * Advance to the next range we need to map. + * + * If the iomap is marked IOMAP_F_STALE, it means the existing map was not fully + * processed - it was aborted because the extent the iomap spanned may have been + * changed during the operation. In this case, the iteration behaviour is to + * remap the unprocessed range of the iter, and that means we may need to remap + * even when we've made no progress (i.e. iter->processed = 0). Hence the + * "finished iterating" case needs to distinguish between + * (processed = 0) meaning we are done and (processed = 0 && stale) meaning we + * need to remap the entire remaining range. + */ static inline int iomap_iter_advance(struct iomap_iter *iter) { + bool stale = iter->iomap.flags & IOMAP_F_STALE; + /* handle the previous iteration (if any) */ if (iter->iomap.length) { - if (iter->processed <= 0) + if (iter->processed < 0) return iter->processed; + if (!iter->processed && !stale) + return 0; if (WARN_ON_ONCE(iter->processed > iomap_length(iter))) return -EIO; iter->pos += iter->processed; @@ -33,6 +49,7 @@ static inline void iomap_iter_done(struct iomap_iter *iter) WARN_ON_ONCE(iter->iomap.offset > iter->pos); WARN_ON_ONCE(iter->iomap.length == 0); WARN_ON_ONCE(iter->iomap.offset + iter->iomap.length <= iter->pos); + WARN_ON_ONCE(iter->iomap.flags & IOMAP_F_STALE); trace_iomap_iter_dstmap(iter->inode, &iter->iomap); if (iter->srcmap.type != IOMAP_HOLE) diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 238a03087e17..308931f0840a 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -62,8 +62,13 @@ struct vm_fault; * * IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size * has changed as the result of this write operation. 
+ * + * IOMAP_F_STALE indicates that the iomap is not valid any longer and the file + * range it covers needs to be remapped by the high level before the operation + * can proceed. */ #define IOMAP_F_SIZE_CHANGED 0x100 +#define IOMAP_F_STALE 0x200 /* * Flags from 0x1000 up are for file system specific usage: @@ -165,6 +170,18 @@ struct iomap_ops { */ int (*iomap_end)(struct inode *inode, loff_t pos, loff_t length, ssize_t written, unsigned flags, struct iomap *iomap); + + /* + * Check that the cached iomap still maps correctly to the filesystem's + * internal extent map. FS internal extent maps can change while iomap + * is iterating a cached iomap, so this hook allows iomap to detect that + * the iomap needs to be refreshed during a long running write + * operation. + * + * This is called with the folio over the specified file position + * held locked by the iomap code. + */ + bool (*iomap_valid)(struct inode *inode, const struct iomap *iomap); }; /** From patchwork Tue Nov 1 00:34:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 13026527 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0976FA3741 for ; Tue, 1 Nov 2022 00:34:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229776AbiKAAec (ORCPT ); Mon, 31 Oct 2022 20:34:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229905AbiKAAeW (ORCPT ); Mon, 31 Oct 2022 20:34:22 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DB09615FC2 for ; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id d24so12187289pls.4 for ; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fromorbit-com.20210112.gappssmtp.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=RHAzErwNVUuW7SwA2H3HgNg6ehcqGAE2cIaQYUuXWTY=; b=HRoy0hUhou5cxIn9OqcnqxWhUhZZE/wEtmN0B4AA2cwafqhqDI5lAKdUCkXwwVgrdh X9XnegJcUCZgxjtCOsczNHHtn1u3Ft9z0icQ/2zDtRGxi5SXqFETaZ17CKtdmIzaOM3h akNaPAnlmBiND16H4jzhok6kHW0Plb+iAWoAmVsbLj5pvW9Oaospod3jI2SgyEW7T4vi DYN+n+VveMmMlHRCte8t2qg1NfSP/gFeDczZSMc4skbmTJlFVrvYuaBkCywND3b1QrSp Qd1chrg5reGDpdvcNaqZJmAJmJyaEyUjXr2Uw5Uc8qSRDLTlBPBQcohphesjqCfwuCkg jXzQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RHAzErwNVUuW7SwA2H3HgNg6ehcqGAE2cIaQYUuXWTY=; b=DVKe+OYoYZphN5SZkLl63Xl0qnIe7pOfFlSezwliMS90gv8oaLBOyUqKL1AeTxVHIe E/2S72wFsc62t9JiEQjbKj3VfBe65Q49Tsif/hVFtxM1/9pPEMmh4vaZKB5d/y0rwRDC eIXk+Tnuo0LtyxLwSVVpC2h/meccEbxpKPPh9cgkByddHJXleKUY9j+Al9MGAoQXGm0b YjGtSBgf1RKfL5C+oh1SuRRpjQ6b+Sz/Yd7d5+tMGBu+SXLG0oa3IqnAfAPYt4wLYKHp w3KSw4mWatu1zKiENNSmhVFfj7MQlA8LpFvR7LBF6Aqc+N+jVkhsVE8Wh6L0FIwyzMmo Hb/Q== X-Gm-Message-State: ACrzQf05+oF5hXr0Kzf7rkrUQvyRf5QVSmthDeHd18n+9jfI6eYW5nVP Yg/NU+AcxE6pzCG+5UF7aTcHzzAoqXivQA== X-Google-Smtp-Source: 
AMsMyM4TW2mp78RxqU7H/uF7H13bro2scrrdFsQQcpq8+1X68AZ3M/txl90MjV0h60EBejV7Th+U5w== X-Received: by 2002:a17:90b:2651:b0:20a:daaf:75f0 with SMTP id pa17-20020a17090b265100b0020adaaf75f0mr16989549pjb.142.1667262860129; Mon, 31 Oct 2022 17:34:20 -0700 (PDT) Received: from dread.disaster.area (pa49-181-106-210.pa.nsw.optusnet.com.au. [49.181.106.210]) by smtp.gmail.com with ESMTPSA id i2-20020a170902cf0200b0018699e6afd8sm4998414plg.265.2022.10.31.17.34.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 31 Oct 2022 17:34:19 -0700 (PDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1opfEN-008muT-8i; Tue, 01 Nov 2022 11:34:15 +1100 Received: from dave by discord.disaster.area with local (Exim 4.96) (envelope-from ) id 1opfEN-00G7du-0k; Tue, 01 Nov 2022 11:34:15 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Subject: [PATCH 6/7] xfs: use iomap_valid method to detect stale cached iomaps Date: Tue, 1 Nov 2022 11:34:11 +1100 Message-Id: <20221101003412.3842572-7-david@fromorbit.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com> References: <20221101003412.3842572-1-david@fromorbit.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Now that iomap supports a mechanism to validate cached iomaps for buffered write operations, hook it up to the XFS buffered write ops so that we can avoid data corruptions that result from stale cached iomaps. See: https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/ or the ->iomap_valid() introduction commit for exact details of the corruption vector. Signed-off-by: Dave Chinner --- fs/xfs/libxfs/xfs_bmap.c | 4 +-- fs/xfs/xfs_aops.c | 2 +- fs/xfs/xfs_iomap.c | 69 ++++++++++++++++++++++++++++++---------- fs/xfs/xfs_iomap.h | 4 +-- fs/xfs/xfs_pnfs.c | 5 +-- 5 files changed, 61 insertions(+), 23 deletions(-) diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 49d0d4ea63fc..db225130618c 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -4551,8 +4551,8 @@ xfs_bmapi_convert_delalloc( * the extent. Just return the real extent at this offset. 
*/ if (!isnullstartblock(bma.got.br_startblock)) { - xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags); *seq = READ_ONCE(ifp->if_seq); + xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags, *seq); goto out_trans_cancel; } @@ -4599,8 +4599,8 @@ xfs_bmapi_convert_delalloc( XFS_STATS_INC(mp, xs_xstrat_quick); ASSERT(!isnullstartblock(bma.got.br_startblock)); - xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags); *seq = READ_ONCE(ifp->if_seq); + xfs_bmbt_to_iomap(ip, iomap, &bma.got, 0, flags, *seq); if (whichfork == XFS_COW_FORK) xfs_refcount_alloc_cow_extent(tp, bma.blkno, bma.length); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 5d1a995b15f8..ca5a9e45a48c 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -373,7 +373,7 @@ xfs_map_blocks( isnullstartblock(imap.br_startblock)) goto allocate_blocks; - xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0, 0); + xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0, 0, XFS_WPC(wpc)->data_seq); trace_xfs_map_blocks_found(ip, offset, count, whichfork, &imap); return 0; allocate_blocks: diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 2d48fcc7bd6f..5053ffcf10fe 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -54,7 +54,8 @@ xfs_bmbt_to_iomap( struct iomap *iomap, struct xfs_bmbt_irec *imap, unsigned int mapping_flags, - u16 iomap_flags) + u16 iomap_flags, + int sequence) { struct xfs_mount *mp = ip->i_mount; struct xfs_buftarg *target = xfs_inode_buftarg(ip); @@ -91,6 +92,9 @@ xfs_bmbt_to_iomap( if (xfs_ipincount(ip) && (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP)) iomap->flags |= IOMAP_F_DIRTY; + + /* The extent tree sequence is needed for iomap validity checking. */ + *((int *)&iomap->private) = sequence; return 0; } @@ -195,7 +199,8 @@ xfs_iomap_write_direct( xfs_fileoff_t offset_fsb, xfs_fileoff_t count_fsb, unsigned int flags, - struct xfs_bmbt_irec *imap) + struct xfs_bmbt_irec *imap, + int *seq) { struct xfs_mount *mp = ip->i_mount; struct xfs_trans *tp; @@ -285,6 +290,8 @@ xfs_iomap_write_direct( error = xfs_alert_fsblock_zero(ip, imap); out_unlock: + if (seq) + *seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, XFS_ILOCK_EXCL); return error; @@ -743,6 +750,7 @@ xfs_direct_write_iomap_begin( bool shared = false; u16 iomap_flags = 0; unsigned int lockmode = XFS_ILOCK_SHARED; + int seq; ASSERT(flags & (IOMAP_WRITE | IOMAP_ZERO)); @@ -811,9 +819,10 @@ xfs_direct_write_iomap_begin( goto out_unlock; } + seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, lockmode); trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap); - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags); + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags, seq); allocate_blocks: error = -EAGAIN; @@ -839,24 +848,25 @@ xfs_direct_write_iomap_begin( xfs_iunlock(ip, lockmode); error = xfs_iomap_write_direct(ip, offset_fsb, end_fsb - offset_fsb, - flags, &imap); + flags, &imap, &seq); if (error) return error; trace_xfs_iomap_alloc(ip, offset, length, XFS_DATA_FORK, &imap); return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, - iomap_flags | IOMAP_F_NEW); + iomap_flags | IOMAP_F_NEW, seq); out_found_cow: + seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, lockmode); length = XFS_FSB_TO_B(mp, cmap.br_startoff + cmap.br_blockcount); trace_xfs_iomap_found(ip, offset, length - offset, XFS_COW_FORK, &cmap); if (imap.br_startblock != HOLESTARTBLOCK) { - error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0); + error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq); if (error) return error; } - return xfs_bmbt_to_iomap(ip, iomap, 
&cmap, flags, IOMAP_F_SHARED); + return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, IOMAP_F_SHARED, seq); out_unlock: if (lockmode) @@ -915,6 +925,7 @@ xfs_buffered_write_iomap_begin( int allocfork = XFS_DATA_FORK; int error = 0; unsigned int lockmode = XFS_ILOCK_EXCL; + int seq; if (xfs_is_shutdown(mp)) return -EIO; @@ -1094,26 +1105,29 @@ xfs_buffered_write_iomap_begin( * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch * them out if the write happens to fail. */ + seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, XFS_ILOCK_EXCL); trace_xfs_iomap_alloc(ip, offset, count, allocfork, &imap); - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW); + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, IOMAP_F_NEW, seq); found_imap: + seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, XFS_ILOCK_EXCL); - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0); + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq); found_cow: + seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, XFS_ILOCK_EXCL); if (imap.br_startoff <= offset_fsb) { - error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0); + error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0, seq); if (error) return error; return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, - IOMAP_F_SHARED); + IOMAP_F_SHARED, seq); } xfs_trim_extent(&cmap, offset_fsb, imap.br_startoff - offset_fsb); - return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0); + return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, 0, seq); out_unlock: xfs_iunlock(ip, XFS_ILOCK_EXCL); @@ -1328,9 +1342,26 @@ xfs_buffered_write_iomap_end( return 0; } +/* + * Check that the iomap passed to us is still valid for the given offset and + * length. + */ +static bool +xfs_buffered_write_iomap_valid( + struct inode *inode, + const struct iomap *iomap) +{ + int seq = *((int *)&iomap->private); + + if (seq != READ_ONCE(XFS_I(inode)->i_df.if_seq)) + return false; + return true; +} + const struct iomap_ops xfs_buffered_write_iomap_ops = { .iomap_begin = xfs_buffered_write_iomap_begin, .iomap_end = xfs_buffered_write_iomap_end, + .iomap_valid = xfs_buffered_write_iomap_valid, }; /* @@ -1359,6 +1390,7 @@ xfs_read_iomap_begin( int nimaps = 1, error = 0; bool shared = false; unsigned int lockmode = XFS_ILOCK_SHARED; + int seq; ASSERT(!(flags & (IOMAP_WRITE | IOMAP_ZERO))); @@ -1372,13 +1404,14 @@ xfs_read_iomap_begin( &nimaps, 0); if (!error && (flags & IOMAP_REPORT)) error = xfs_reflink_trim_around_shared(ip, &imap, &shared); + seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, lockmode); if (error) return error; trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap); return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, - shared ? IOMAP_F_SHARED : 0); + shared ? 
IOMAP_F_SHARED : 0, seq); } const struct iomap_ops xfs_read_iomap_ops = { @@ -1438,7 +1471,7 @@ xfs_seek_iomap_begin( end_fsb = min(end_fsb, data_fsb); xfs_trim_extent(&cmap, offset_fsb, end_fsb); error = xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, - IOMAP_F_SHARED); + IOMAP_F_SHARED, READ_ONCE(ip->i_cowfp->if_seq)); /* * This is a COW extent, so we must probe the page cache * because there could be dirty page cache being backed @@ -1460,7 +1493,8 @@ xfs_seek_iomap_begin( imap.br_state = XFS_EXT_NORM; done: xfs_trim_extent(&imap, offset_fsb, end_fsb); - error = xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0); + error = xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, + READ_ONCE(ip->i_df.if_seq)); out_unlock: xfs_iunlock(ip, lockmode); return error; @@ -1486,6 +1520,7 @@ xfs_xattr_iomap_begin( struct xfs_bmbt_irec imap; int nimaps = 1, error = 0; unsigned lockmode; + int seq; if (xfs_is_shutdown(mp)) return -EIO; @@ -1502,12 +1537,14 @@ xfs_xattr_iomap_begin( error = xfs_bmapi_read(ip, offset_fsb, end_fsb - offset_fsb, &imap, &nimaps, XFS_BMAPI_ATTRFORK); out_unlock: + + seq = READ_ONCE(ip->i_af.if_seq); xfs_iunlock(ip, lockmode); if (error) return error; ASSERT(nimaps); - return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0); + return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, 0, seq); } const struct iomap_ops xfs_xattr_iomap_ops = { diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h index 0f62ab633040..792fed2a9072 100644 --- a/fs/xfs/xfs_iomap.h +++ b/fs/xfs/xfs_iomap.h @@ -13,14 +13,14 @@ struct xfs_bmbt_irec; int xfs_iomap_write_direct(struct xfs_inode *ip, xfs_fileoff_t offset_fsb, xfs_fileoff_t count_fsb, unsigned int flags, - struct xfs_bmbt_irec *imap); + struct xfs_bmbt_irec *imap, int *sequence); int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool); xfs_fileoff_t xfs_iomap_eof_align_last_fsb(struct xfs_inode *ip, xfs_fileoff_t end_fsb); int xfs_bmbt_to_iomap(struct xfs_inode *ip, struct iomap *iomap, struct xfs_bmbt_irec *imap, unsigned int mapping_flags, - u16 iomap_flags); + u16 iomap_flags, int sequence); int xfs_zero_range(struct xfs_inode *ip, loff_t pos, loff_t len, bool *did_zero); diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c index 37a24f0f7cd4..eea507a80c5c 100644 --- a/fs/xfs/xfs_pnfs.c +++ b/fs/xfs/xfs_pnfs.c @@ -125,6 +125,7 @@ xfs_fs_map_blocks( int nimaps = 1; uint lock_flags; int error = 0; + int seq; if (xfs_is_shutdown(mp)) return -EIO; @@ -189,7 +190,7 @@ xfs_fs_map_blocks( xfs_iunlock(ip, lock_flags); error = xfs_iomap_write_direct(ip, offset_fsb, - end_fsb - offset_fsb, 0, &imap); + end_fsb - offset_fsb, 0, &imap, &seq); if (error) goto out_unlock; @@ -209,7 +210,7 @@ xfs_fs_map_blocks( } xfs_iunlock(ip, XFS_IOLOCK_EXCL); - error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0, 0); + error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0, 0, seq); *device_generation = mp->m_generation; return error; out_unlock: From patchwork Tue Nov 1 00:34:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 13026528 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE394FA3744 for ; Tue, 1 Nov 2022 00:34:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229917AbiKAAed (ORCPT ); Mon, 31 Oct 2022 20:34:33 -0400 Received: from lindbergh.monkeyblade.net 
From: Dave Chinner 
To: linux-xfs@vger.kernel.org 
Cc: linux-fsdevel@vger.kernel.org 
Subject: [PATCH 7/7] xfs: drop write error injection is unfixable, remove it 
Date: Tue, 1 Nov 2022 11:34:12 +1100 
Message-Id: <20221101003412.3842572-8-david@fromorbit.com> 
X-Mailer: git-send-email 2.37.2 
In-Reply-To: <20221101003412.3842572-1-david@fromorbit.com> 
References: <20221101003412.3842572-1-david@fromorbit.com> 
MIME-Version: 1.0 
Precedence: bulk 
List-ID: 
X-Mailing-List: linux-fsdevel@vger.kernel.org 
From: Dave Chinner 

With the changes to scan the page cache for dirty data to avoid data corruptions from partial write cleanup racing with other page cache operations, the drop writes error injection no longer works the same way it used to and causes xfs/196 to fail. This is because xfs/196 writes to the file and populates the page cache before it turns on the error injection and starts failing -overwrites-. The result is that the original drop-writes code failed writes only -after- overwriting the data in the cache, then invalidated the cached data and punched out the delalloc extent from under that data.

On the surface, this looks fine. The problem is that page cache invalidation *doesn't guarantee that it removes anything from the page cache* and it doesn't change the dirty state of the folio. When block size == page size and we do page aligned IO (as xfs/196 does), everything happens to align perfectly and page cache invalidation removes the single page folios that span the written data. Hence the followup delalloc punch pass does not find cached data over that range and it can punch the extent out.

IOWs, xfs/196 "works" for block size == page size with the new code. I say "works" because it actually only works for the case where the IO is page aligned and no data was read from disk before the writes occur. The moment we actually read data first, the readahead code allocates multipage folios and suddenly the invalidate code goes back to zeroing subfolio ranges without changing the dirty state.

Hence, with multipage folios in play, block size == page size is functionally identical to the block size < page size behaviour, and drop-writes is manifestly broken w.r.t. this case. Invalidation of a subfolio range doesn't result in the folio being removed from the cache; the range just gets zeroed. Hence after we've sequentially walked over a folio that we've dirtied (via write data) and then invalidated, we end up with a dirty folio full of zeroed data.

And because the new code skips punching ranges that have dirty folios covering them, we end up leaving the delalloc range intact after failing all the writes. Hence failed writes now end up writing zeroes to disk in the cases where invalidation zeroes folios rather than removing them from cache.

This is a fundamental change of behaviour that is needed to avoid the data corruption vectors that exist in the old write fail path, and it renders the drop-writes injection non-functional and unworkable as it stands.
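To make that failure mode concrete, here is a minimal userspace sketch of the behaviour described above. The names (toy_folio, invalidate_subrange, can_punch_delalloc) are simplified stand-ins invented purely for illustration; they are not the real kernel, iomap or XFS interfaces.

/*
 * Toy userspace model of the failure mode described above. Sub-folio
 * invalidation only zeroes the affected range and leaves the folio
 * cached and dirty, so the new cleanup code refuses to punch the
 * delalloc extent underneath it.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define FOLIO_BYTES	16	/* pretend one multipage folio spans 16 blocks */

struct toy_folio {
	unsigned char	data[FOLIO_BYTES];
	bool		dirty;
};

/* Subfolio invalidation: the range is zeroed, the folio stays cached and dirty. */
static void invalidate_subrange(struct toy_folio *f, size_t start, size_t len)
{
	memset(f->data + start, 0, len);
}

/* The new failed-write cleanup refuses to punch ranges covered by dirty folios. */
static bool can_punch_delalloc(const struct toy_folio *f)
{
	return !f->dirty;
}

int main(void)
{
	struct toy_folio f = { .dirty = false };
	size_t i;

	memset(f.data, 'A', FOLIO_BYTES);	/* pre-existing file data */

	memset(f.data + 4, 'B', 4);		/* a buffered write dirties blocks 4-7... */
	f.dirty = true;
	invalidate_subrange(&f, 4, 4);		/* ...then fails and is invalidated */

	printf("punch delalloc? %s\ncached data:    ",
	       can_punch_delalloc(&f) ? "yes" : "no");
	for (i = 0; i < FOLIO_BYTES; i++)
		putchar(f.data[i] ? f.data[i] : '0');
	putchar('\n');

	/*
	 * Prints "punch delalloc? no" and "AAAA0000AAAAAAAA": the delalloc
	 * extent survives and the dirty, zero-filled folio is what later
	 * gets written back to disk.
	 */
	return 0;
}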
As it is, I think the error injection is also now unnecessary, as partial writes that need delalloc extent punching are going to be a lot more common with stale iomap detection in place. Hence this patch removes the drop-writes error injection completely.

xfs/196 can remain for testing kernels that don't have this data corruption fix, but those that do will report:

xfs/196 3s ... [not run] XFS error injection drop_writes unknown on this kernel.

Signed-off-by: Dave Chinner 
--- fs/xfs/libxfs/xfs_errortag.h | 12 +++++------- fs/xfs/xfs_error.c | 27 ++++++++++++++++++++------- fs/xfs/xfs_iomap.c | 9 --------- 3 files changed, 25 insertions(+), 23 deletions(-) diff --git a/fs/xfs/libxfs/xfs_errortag.h b/fs/xfs/libxfs/xfs_errortag.h index 5362908164b0..580ccbd5aadc 100644 --- a/fs/xfs/libxfs/xfs_errortag.h +++ b/fs/xfs/libxfs/xfs_errortag.h @@ -40,13 +40,12 @@ #define XFS_ERRTAG_REFCOUNT_FINISH_ONE 25 #define XFS_ERRTAG_BMAP_FINISH_ONE 26 #define XFS_ERRTAG_AG_RESV_CRITICAL 27 + /* - * DEBUG mode instrumentation to test and/or trigger delayed allocation - * block killing in the event of failed writes. When enabled, all - * buffered writes are silenty dropped and handled as if they failed. - * All delalloc blocks in the range of the write (including pre-existing - * delalloc blocks!) are tossed as part of the write failure error - * handling sequence. + * Drop-writes support removed because write error handling cannot trash + * pre-existing delalloc extents in any useful way anymore. We retain the + * definition so that we can reject it as an invalid value in + * xfs_errortag_valid(). */ #define XFS_ERRTAG_DROP_WRITES 28 #define XFS_ERRTAG_LOG_BAD_CRC 29 @@ -95,7 +94,6 @@ #define XFS_RANDOM_REFCOUNT_FINISH_ONE 1 #define XFS_RANDOM_BMAP_FINISH_ONE 1 #define XFS_RANDOM_AG_RESV_CRITICAL 4 -#define XFS_RANDOM_DROP_WRITES 1 #define XFS_RANDOM_LOG_BAD_CRC 1 #define XFS_RANDOM_LOG_ITEM_PIN 1 #define XFS_RANDOM_BUF_LRU_REF 2 diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c index c6b2aabd6f18..dea3c0649d2f 100644 --- a/fs/xfs/xfs_error.c +++ b/fs/xfs/xfs_error.c @@ -46,7 +46,7 @@ static unsigned int xfs_errortag_random_default[] = { XFS_RANDOM_REFCOUNT_FINISH_ONE, XFS_RANDOM_BMAP_FINISH_ONE, XFS_RANDOM_AG_RESV_CRITICAL, - XFS_RANDOM_DROP_WRITES, + 0, /* XFS_RANDOM_DROP_WRITES has been removed */ XFS_RANDOM_LOG_BAD_CRC, XFS_RANDOM_LOG_ITEM_PIN, XFS_RANDOM_BUF_LRU_REF, @@ -162,7 +162,6 @@ XFS_ERRORTAG_ATTR_RW(refcount_continue_update, XFS_ERRTAG_REFCOUNT_CONTINUE_UPDA XFS_ERRORTAG_ATTR_RW(refcount_finish_one, XFS_ERRTAG_REFCOUNT_FINISH_ONE); XFS_ERRORTAG_ATTR_RW(bmap_finish_one, XFS_ERRTAG_BMAP_FINISH_ONE); XFS_ERRORTAG_ATTR_RW(ag_resv_critical, XFS_ERRTAG_AG_RESV_CRITICAL); -XFS_ERRORTAG_ATTR_RW(drop_writes, XFS_ERRTAG_DROP_WRITES); XFS_ERRORTAG_ATTR_RW(log_bad_crc, XFS_ERRTAG_LOG_BAD_CRC); XFS_ERRORTAG_ATTR_RW(log_item_pin, XFS_ERRTAG_LOG_ITEM_PIN); XFS_ERRORTAG_ATTR_RW(buf_lru_ref, XFS_ERRTAG_BUF_LRU_REF); @@ -206,7 +205,6 @@ static struct attribute *xfs_errortag_attrs[] = { XFS_ERRORTAG_ATTR_LIST(refcount_finish_one), XFS_ERRORTAG_ATTR_LIST(bmap_finish_one), XFS_ERRORTAG_ATTR_LIST(ag_resv_critical), - XFS_ERRORTAG_ATTR_LIST(drop_writes), XFS_ERRORTAG_ATTR_LIST(log_bad_crc), XFS_ERRORTAG_ATTR_LIST(log_item_pin), XFS_ERRORTAG_ATTR_LIST(buf_lru_ref), @@ -256,6 +254,19 @@ xfs_errortag_del( kmem_free(mp->m_errortag); } +static bool +xfs_errortag_valid( + unsigned int error_tag) +{ + if (error_tag >= XFS_ERRTAG_MAX) + return false; + + /* Error out removed injection types */ + if (error_tag ==
XFS_ERRTAG_DROP_WRITES) + return false; + return true; +} + bool xfs_errortag_test( struct xfs_mount *mp, @@ -277,7 +288,9 @@ xfs_errortag_test( if (!mp->m_errortag) return false; - ASSERT(error_tag < XFS_ERRTAG_MAX); + if (!xfs_errortag_valid(error_tag)) + return false; + randfactor = mp->m_errortag[error_tag]; if (!randfactor || prandom_u32_max(randfactor)) return false; @@ -293,7 +306,7 @@ xfs_errortag_get( struct xfs_mount *mp, unsigned int error_tag) { - if (error_tag >= XFS_ERRTAG_MAX) + if (!xfs_errortag_valid(error_tag)) return -EINVAL; return mp->m_errortag[error_tag]; @@ -305,7 +318,7 @@ xfs_errortag_set( unsigned int error_tag, unsigned int tag_value) { - if (error_tag >= XFS_ERRTAG_MAX) + if (!xfs_errortag_valid(error_tag)) return -EINVAL; mp->m_errortag[error_tag] = tag_value; @@ -319,7 +332,7 @@ xfs_errortag_add( { BUILD_BUG_ON(ARRAY_SIZE(xfs_errortag_random_default) != XFS_ERRTAG_MAX); - if (error_tag >= XFS_ERRTAG_MAX) + if (!xfs_errortag_valid(error_tag)) return -EINVAL; return xfs_errortag_set(mp, error_tag, diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 5053ffcf10fe..8e7e51c5f56d 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -1305,15 +1305,6 @@ xfs_buffered_write_iomap_end( if (iomap->type != IOMAP_DELALLOC) return 0; - /* - * Behave as if the write failed if drop writes is enabled. Set the NEW - * flag to force delalloc cleanup. - */ - if (XFS_TEST_ERROR(false, mp, XFS_ERRTAG_DROP_WRITES)) { - iomap->flags |= IOMAP_F_NEW; - written = 0; - } - /* If we didn't reserve the blocks, we're not allowed to punch them. */ if (!(iomap->flags & IOMAP_F_NEW)) return 0;