| Message ID | 20220901133505.2510834-5-yi.zhang@huawei.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | fs/buffer: remove ll_rw_block() |
On Thu 01-09-22 21:34:55, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that the read IO is always submitted if the buffer is locked,
> so stop using it. We also switch to the new bh_readahead() helper for
> the readahead path.
>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Thu, Sep 1, 2022 at 3:24 PM Zhang Yi <yi.zhang@huawei.com> wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that the read IO is always submitted if the buffer is locked,
> so stop using it. We also switch to the new bh_readahead() helper for
> the readahead path.
>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>

Thanks for this fix; looking good.

Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>

Andreas
```diff
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index 7e70e0ba5a6c..6ed728aae9a5 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -525,8 +525,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
 
 	if (buffer_uptodate(first_bh))
 		goto out;
-	if (!buffer_locked(first_bh))
-		ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &first_bh);
+	bh_read_nowait(first_bh, REQ_META | REQ_PRIO);
 
 	dblock++;
 	extlen--;
@@ -534,9 +533,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
 	while (extlen) {
 		bh = gfs2_getbuf(gl, dblock, CREATE);
 
-		if (!buffer_uptodate(bh) && !buffer_locked(bh))
-			ll_rw_block(REQ_OP_READ | REQ_RAHEAD | REQ_META |
-				    REQ_PRIO, 1, &bh);
+		bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO);
 		brelse(bh);
 		dblock++;
 		extlen--;
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index f201eaf59d0d..1ed17226d9ed 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -745,12 +745,8 @@ static int gfs2_write_buf_to_page(struct gfs2_inode *ip, unsigned long index,
 	}
 	if (PageUptodate(page))
 		set_buffer_uptodate(bh);
-	if (!buffer_uptodate(bh)) {
-		ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
-		wait_on_buffer(bh);
-		if (!buffer_uptodate(bh))
-			goto unlock_out;
-	}
+	if (bh_read(bh, REQ_META | REQ_PRIO) < 0)
+		goto unlock_out;
 	if (gfs2_is_jdata(ip))
 		gfs2_trans_add_data(ip->i_gl, bh);
 	else
```
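For readers who have not met the new buffer_head read helpers yet: the quota.c hunk relies on bh_read() collapsing the old submit/wait/recheck sequence into a single call that returns a negative errno on failure. Below is a minimal sketch of that sync-read contract, assuming the return convention the hunk depends on (1 if the buffer was already uptodate, 0 after a successful read, negative errno otherwise); the function name `bh_read_sketch` is hypothetical and this is an illustration, not the actual buffer_head.h/fs/buffer.c implementation.

```c
#include <linux/buffer_head.h>

/*
 * Sketch of the sync-read semantics gfs2_write_buf_to_page() now
 * depends on. Unlike ll_rw_block(), a locked buffer is not skipped:
 * we wait for the lock, submit the read if it is still needed, and
 * report the outcome to the caller.
 */
static int bh_read_sketch(struct buffer_head *bh, blk_opf_t op_flags)
{
	if (buffer_uptodate(bh))
		return 1;			/* nothing to read */

	lock_buffer(bh);
	if (buffer_uptodate(bh)) {
		/* Someone completed the read while we waited for the lock. */
		unlock_buffer(bh);
		return 1;
	}

	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync;	/* unlocks bh on completion */
	submit_bh(REQ_OP_READ | op_flags, bh);
	wait_on_buffer(bh);

	return buffer_uptodate(bh) ? 0 : -EIO;
}
```

Because the helper takes the buffer lock itself instead of skipping locked buffers, `if (bh_read(bh, REQ_META | REQ_PRIO) < 0) goto unlock_out;` covers everything the removed five lines did, while bh_read_nowait() and bh_readahead() provide the non-waiting variants used in meta_io.c.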
ll_rw_block() is not safe for the sync read path because it cannot
guarantee that the read IO is always submitted if the buffer is locked,
so stop using it. We also switch to the new bh_readahead() helper for
the readahead path.

Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
 fs/gfs2/meta_io.c | 7 ++-----
 fs/gfs2/quota.c   | 8 ++------
 2 files changed, 4 insertions(+), 11 deletions(-)
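To spell out the race the commit message alludes to: ll_rw_block() only submitted a read for buffers it could trylock itself, so a buffer that happened to be locked by another task received no read I/O at all. Here is an annotated copy of the pre-patch gfs2_write_buf_to_page() sequence; the code is taken from the removed lines above, while the comments describing the failure mode are an interpretation of the commit message, not text from the patch.

```c
/* Pre-patch sync-read pattern, annotated with its failure mode. */
if (!buffer_uptodate(bh)) {
	/*
	 * If another task holds the buffer lock at this point,
	 * ll_rw_block() skips the buffer and submits no read at all.
	 */
	ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
	/*
	 * The wait below then only waits for whatever the lock holder
	 * was doing, which need not have been a successful read, so
	 * the buffer can still be !uptodate here and the caller fails
	 * even though no read was ever attempted.
	 */
	wait_on_buffer(bh);
	if (!buffer_uptodate(bh))
		goto unlock_out;
}
```

bh_read(), by contrast, waits for the lock and submits the read itself, so a negative return really does mean the read failed.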