Message ID | f7504d38c6e6938e4d50e7cee5108a7010e9e8d8.1727660754.git.wqu@suse.com (mailing list archive)
---|---
State | New, archived
Series | btrfs: small cleanups to buffered write path
On Mon, Sep 30, 2024 at 11:20:30AM +0930, Qu Wenruo wrote:
> Currently inside prepare_pages(), we handle the leading and tailing page
> differently, and skip the middle pages (if any).
>
> This is to avoid reading pages which is fully covered by the dirty
> range.
>
> Refactor the code by moving all checks (alignment check, range check,
> force read check) into prepare_uptodate_page().
>
> So that prepare_pages() only need to iterate all the pages
> unconditionally.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/file.c | 68 +++++++++++++++++++++++++++----------------------
>  1 file changed, 37 insertions(+), 31 deletions(-)
>
> diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
> index 9555a3485670..cc8edf8da79d 100644
> --- a/fs/btrfs/file.c
> +++ b/fs/btrfs/file.c
> @@ -858,36 +858,46 @@ int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
>   */
>  static int prepare_uptodate_page(struct inode *inode,
>  				 struct page *page, u64 pos,
> -				 bool force_uptodate)
> +				 u64 len, bool force_uptodate)
>  {
>  	struct folio *folio = page_folio(page);
> +	u64 clamp_start = max_t(u64, pos, folio_pos(folio));
> +	u64 clamp_end = min_t(u64, pos + len, folio_pos(folio) + folio_size(folio));
>  	int ret = 0;
>
> -	if (((pos & (PAGE_SIZE - 1)) || force_uptodate) &&
> -	    !PageUptodate(page)) {
> -		ret = btrfs_read_folio(NULL, folio);
> -		if (ret)
> -			return ret;
> -		lock_page(page);
> -		if (!PageUptodate(page)) {
> -			unlock_page(page);
> -			return -EIO;
> -		}
> +	if (PageUptodate(page))
> +		return 0;
>
> -		/*
> -		 * Since btrfs_read_folio() will unlock the folio before it
> -		 * returns, there is a window where btrfs_release_folio() can be
> -		 * called to release the page. Here we check both inode
> -		 * mapping and PagePrivate() to make sure the page was not
> -		 * released.
> -		 *
> -		 * The private flag check is essential for subpage as we need
> -		 * to store extra bitmap using folio private.
> -		 */
> -		if (page->mapping != inode->i_mapping || !folio_test_private(folio)) {
> -			unlock_page(page);
> -			return -EAGAIN;
> -		}
> +	if (force_uptodate)
> +		goto read;
> +
> +	/* The dirty range fully cover the page, no need to read it out. */
> +	if (IS_ALIGNED(clamp_start, PAGE_SIZE) &&
> +	    IS_ALIGNED(clamp_end, PAGE_SIZE))
> +		return 0;
> +read:
> +	ret = btrfs_read_folio(NULL, folio);
> +	if (ret)
> +		return ret;
> +	lock_page(page);
> +	if (!PageUptodate(page)) {
> +		unlock_page(page);
> +		return -EIO;
> +	}
> +
> +	/*
> +	 * Since btrfs_read_folio() will unlock the folio before it
> +	 * returns, there is a window where btrfs_release_folio() can be
> +	 * called to release the page. Here we check both inode
> +	 * mapping and PagePrivate() to make sure the page was not
> +	 * released.
> +	 *
> +	 * The private flag check is essential for subpage as we need
> +	 * to store extra bitmap using folio private.
> +	 */
> +	if (page->mapping != inode->i_mapping || !folio_test_private(folio)) {
> +		unlock_page(page);
> +		return -EAGAIN;

Since you're reworking it anyway can you go ahead and convert this to
only use the folio?

Thanks,

Josef
On Mon, Sep 30, 2024 at 11:20:30AM +0930, Qu Wenruo wrote:
> +	if (force_uptodate)
> +		goto read;
> +
> +	/* The dirty range fully cover the page, no need to read it out. */
> +	if (IS_ALIGNED(clamp_start, PAGE_SIZE) &&
> +	    IS_ALIGNED(clamp_end, PAGE_SIZE))
> +		return 0;
> +read:
> +	ret = btrfs_read_folio(NULL, folio);

Please avoid this pattern of an if and goto where it's simply jumping
around a block and the goto label is used only once.
On 2024/10/2 00:50, David Sterba wrote:
> On Mon, Sep 30, 2024 at 11:20:30AM +0930, Qu Wenruo wrote:
>> +	if (force_uptodate)
>> +		goto read;
>> +
>> +	/* The dirty range fully cover the page, no need to read it out. */
>> +	if (IS_ALIGNED(clamp_start, PAGE_SIZE) &&
>> +	    IS_ALIGNED(clamp_end, PAGE_SIZE))
>> +		return 0;
>> +read:
>> +	ret = btrfs_read_folio(NULL, folio);
>
> Please avoid this pattern of an if and goto where it's simply jumping
> around a block and the goto label is used only once.

Any better alternative?

Thanks,
Qu
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 9555a3485670..cc8edf8da79d 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -858,36 +858,46 @@ int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
  */
 static int prepare_uptodate_page(struct inode *inode,
 				 struct page *page, u64 pos,
-				 bool force_uptodate)
+				 u64 len, bool force_uptodate)
 {
 	struct folio *folio = page_folio(page);
+	u64 clamp_start = max_t(u64, pos, folio_pos(folio));
+	u64 clamp_end = min_t(u64, pos + len, folio_pos(folio) + folio_size(folio));
 	int ret = 0;
 
-	if (((pos & (PAGE_SIZE - 1)) || force_uptodate) &&
-	    !PageUptodate(page)) {
-		ret = btrfs_read_folio(NULL, folio);
-		if (ret)
-			return ret;
-		lock_page(page);
-		if (!PageUptodate(page)) {
-			unlock_page(page);
-			return -EIO;
-		}
+	if (PageUptodate(page))
+		return 0;
 
-		/*
-		 * Since btrfs_read_folio() will unlock the folio before it
-		 * returns, there is a window where btrfs_release_folio() can be
-		 * called to release the page. Here we check both inode
-		 * mapping and PagePrivate() to make sure the page was not
-		 * released.
-		 *
-		 * The private flag check is essential for subpage as we need
-		 * to store extra bitmap using folio private.
-		 */
-		if (page->mapping != inode->i_mapping || !folio_test_private(folio)) {
-			unlock_page(page);
-			return -EAGAIN;
-		}
+	if (force_uptodate)
+		goto read;
+
+	/* The dirty range fully cover the page, no need to read it out. */
+	if (IS_ALIGNED(clamp_start, PAGE_SIZE) &&
+	    IS_ALIGNED(clamp_end, PAGE_SIZE))
+		return 0;
+read:
+	ret = btrfs_read_folio(NULL, folio);
+	if (ret)
+		return ret;
+	lock_page(page);
+	if (!PageUptodate(page)) {
+		unlock_page(page);
+		return -EIO;
+	}
+
+	/*
+	 * Since btrfs_read_folio() will unlock the folio before it
+	 * returns, there is a window where btrfs_release_folio() can be
+	 * called to release the page. Here we check both inode
+	 * mapping and PagePrivate() to make sure the page was not
+	 * released.
+	 *
+	 * The private flag check is essential for subpage as we need
+	 * to store extra bitmap using folio private.
+	 */
+	if (page->mapping != inode->i_mapping || !folio_test_private(folio)) {
+		unlock_page(page);
+		return -EAGAIN;
 	}
 	return 0;
 }
@@ -949,12 +959,8 @@ static noinline int prepare_pages(struct inode *inode, struct page **pages,
 			goto fail;
 		}
 
-		if (i == 0)
-			ret = prepare_uptodate_page(inode, pages[i], pos,
-						    force_uptodate);
-		if (!ret && i == num_pages - 1)
-			ret = prepare_uptodate_page(inode, pages[i],
-						    pos + write_bytes, false);
+		ret = prepare_uptodate_page(inode, pages[i], pos, write_bytes,
+					    force_uptodate);
 		if (ret) {
 			put_page(pages[i]);
 			if (!nowait && ret == -EAGAIN) {
Currently inside prepare_pages(), we handle the leading and trailing pages
differently, and skip the middle pages (if any).

This is to avoid reading pages which are fully covered by the dirty range.

Refactor the code by moving all checks (alignment check, range check,
force read check) into prepare_uptodate_page(), so that prepare_pages()
only needs to iterate over all the pages unconditionally.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/file.c | 68 +++++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 31 deletions(-)