Message ID | 38338e851f80eb505894092dcd898de19ce720bd.1722459563.git.boris@bur.io (mailing list archive)
---|---
State | New, archived
Series | btrfs: locking comment for cow_file_range_inline
On 2024/8/1 06:29, Boris Burkov wrote:
> Add a comment to document the complicated locked_page unlock logic in
> cow_file_range_inline. The specifically tricky part is that a caller
> just up the stack converts ret == 0 to ret == 1, and then another
> caller far up the callstack handles ret == 1 as a success AND returns
> without cleanup in that case, both of which "feel" unnatural and led to
> the original bug.
>
> Try to document that somewhat specific callstack logic here to explain
> the weird un-setting of locked_folio on success.
>
> Signed-off-by: Boris Burkov <boris@bur.io>

Reviewed-by: Qu Wenruo <wqu@suse.com>

Thanks,
Qu

> ---
>  fs/btrfs/inode.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index ed95040f4bb6..07858d63378f 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -744,6 +744,20 @@ static noinline int cow_file_range_inline(struct btrfs_inode *inode,
>  		return ret;
>  	}
>
> +	/*
> +	 * In the successful case (ret == 0 here), cow_file_range will return 1.
> +	 *
> +	 * Quite a bit further up the callstack in __extent_writepage, ret == 1
> +	 * is treated as a short circuited success and does not unlock the folio,
> +	 * so we must do it here.
> +	 *
> +	 * In the failure case, the locked_folio does get unlocked by
> +	 * btrfs_folio_end_all_writers, which asserts that it is still locked
> +	 * at that point, so we must *not* unlock it here.
> +	 *
> +	 * The other two callsites in compress_file_range do not have a
> +	 * locked_folio, so they are not relevant to this logic.
> +	 */
> 	if (ret == 0)
> 		locked_folio = NULL;