| Message ID | 20160711152921.GB32896@bfoster.bfoster (mailing list archive) |
|---|---|
| State | Superseded, archived |
On Mon, Jul 11, 2016 at 11:29:22AM -0400, Brian Foster wrote:
> On Mon, Jul 11, 2016 at 09:52:52AM -0400, Brian Foster wrote:
> ...
> > So what is your preference out of the possible approaches here? AFAICS,
> > we have the following options:
> >
> > 1.) The original "add readahead to LRU early" approach.
> > 	Pros: simple one-liner
> > 	Cons: bit of a hack, only covers readahead scenario
> > 2.) Defer I/O count decrement to buffer release (this patch).
> > 	Pros: should cover all cases (reads/writes)
> > 	Cons: more complex (requires per-buffer accounting, etc.)
> > 3.) Raw (buffer or bio?) I/O count (no defer to buffer release)
> > 	Pros: eliminates some complexity from #2
> > 	Cons: still more complex than #1, racy in that decrement does
> > 	not serialize against LRU addition (requires drain_workqueue(),
> > 	which still doesn't cover error conditions)
> >
> > As noted above, option #3 also allows for either a buffer based count or
> > bio based count, the latter of which might simplify things a bit further
> > (TBD). Thoughts?

Pretty good summary :P

> FWIW, the following is a slightly cleaned up version of my initial
> approach (option #3 above). Note that the flag is used to help deal with
> varying ioend behavior. E.g., xfs_buf_ioend() is called once for some
> buffers, multiple times for others with an iodone callback, that
> behavior changes in some cases when an error is set, etc. (I'll add
> comments before an official post.)

The approach looks good - I think there's a couple of things we can
do to clean it up and make it robust. Comments inline.
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 4665ff6..45d3ddd 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -1018,7 +1018,10 @@ xfs_buf_ioend(
>  
>  	trace_xfs_buf_iodone(bp, _RET_IP_);
>  
> -	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
> +	if (bp->b_flags & XBF_IN_FLIGHT)
> +		percpu_counter_dec(&bp->b_target->bt_io_count);
> +
> +	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD | XBF_IN_FLIGHT);
>  
>  	/*
>  	 * Pull in IO completion errors now. We are guaranteed to be running

I think the XBF_IN_FLIGHT can be moved to the final xfs_buf_rele()
processing if:

> @@ -1341,6 +1344,11 @@ xfs_buf_submit(
>  	 * xfs_buf_ioend too early.
>  	 */
>  	atomic_set(&bp->b_io_remaining, 1);
> +	if (bp->b_flags & XBF_ASYNC) {
> +		percpu_counter_inc(&bp->b_target->bt_io_count);
> +		bp->b_flags |= XBF_IN_FLIGHT;
> +	}

You change this to:

	if (!(bp->b_flags & XBF_IN_FLIGHT)) {
		percpu_counter_inc(&bp->b_target->bt_io_count);
		bp->b_flags |= XBF_IN_FLIGHT;
	}

We shouldn't have to check for XBF_ASYNC in xfs_buf_submit() - it is
the path taken for async IO submission, so we should probably
ASSERT(bp->b_flags & XBF_ASYNC) in this function to ensure that is
the case.

[Thinking aloud - __test_and_set_bit() might make this code a bit
cleaner]

> diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> index 8bfb974..e1f95e0 100644
> --- a/fs/xfs/xfs_buf.h
> +++ b/fs/xfs/xfs_buf.h
> @@ -43,6 +43,7 @@ typedef enum {
>  #define XBF_READ	 (1 << 0) /* buffer intended for reading from device */
>  #define XBF_WRITE	 (1 << 1) /* buffer intended for writing to device */
>  #define XBF_READ_AHEAD	 (1 << 2) /* asynchronous read-ahead */
> +#define XBF_IN_FLIGHT	 (1 << 3)

Hmmm - it's an internal flag, so probably should be prefixed with an
"_" and moved down to the section with _XBF_KMEM and friends.

Thoughts?

Cheers,

Dave.
On Tue, Jul 12, 2016 at 08:44:51AM +1000, Dave Chinner wrote:
> On Mon, Jul 11, 2016 at 11:29:22AM -0400, Brian Foster wrote:
> > On Mon, Jul 11, 2016 at 09:52:52AM -0400, Brian Foster wrote:
> > ...
> > > So what is your preference out of the possible approaches here? AFAICS,
> > > we have the following options:
> > >
> > > 1.) The original "add readahead to LRU early" approach.
> > > 	Pros: simple one-liner
> > > 	Cons: bit of a hack, only covers readahead scenario
> > > 2.) Defer I/O count decrement to buffer release (this patch).
> > > 	Pros: should cover all cases (reads/writes)
> > > 	Cons: more complex (requires per-buffer accounting, etc.)
> > > 3.) Raw (buffer or bio?) I/O count (no defer to buffer release)
> > > 	Pros: eliminates some complexity from #2
> > > 	Cons: still more complex than #1, racy in that decrement does
> > > 	not serialize against LRU addition (requires drain_workqueue(),
> > > 	which still doesn't cover error conditions)
> > >
> > > As noted above, option #3 also allows for either a buffer based count or
> > > bio based count, the latter of which might simplify things a bit further
> > > (TBD). Thoughts?
>
> Pretty good summary :P
>
> > FWIW, the following is a slightly cleaned up version of my initial
> > approach (option #3 above). Note that the flag is used to help deal with
> > varying ioend behavior. E.g., xfs_buf_ioend() is called once for some
> > buffers, multiple times for others with an iodone callback, that
> > behavior changes in some cases when an error is set, etc. (I'll add
> > comments before an official post.)
>
> The approach looks good - I think there's a couple of things we can
> do to clean it up and make it robust. Comments inline.
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index 4665ff6..45d3ddd 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -1018,7 +1018,10 @@ xfs_buf_ioend(
> >  
> >  	trace_xfs_buf_iodone(bp, _RET_IP_);
> >  
> > -	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
> > +	if (bp->b_flags & XBF_IN_FLIGHT)
> > +		percpu_counter_dec(&bp->b_target->bt_io_count);
> > +
> > +	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD | XBF_IN_FLIGHT);
> >  
> >  	/*
> >  	 * Pull in IO completion errors now. We are guaranteed to be running
>
> I think the XBF_IN_FLIGHT can be moved to the final xfs_buf_rele()
> processing if:
>
> > @@ -1341,6 +1344,11 @@ xfs_buf_submit(
> >  	 * xfs_buf_ioend too early.
> >  	 */
> >  	atomic_set(&bp->b_io_remaining, 1);
> > +	if (bp->b_flags & XBF_ASYNC) {
> > +		percpu_counter_inc(&bp->b_target->bt_io_count);
> > +		bp->b_flags |= XBF_IN_FLIGHT;
> > +	}
>
> You change this to:
>
> 	if (!(bp->b_flags & XBF_IN_FLIGHT)) {
> 		percpu_counter_inc(&bp->b_target->bt_io_count);
> 		bp->b_flags |= XBF_IN_FLIGHT;
> 	}
>

Ok, so use the flag to cap the I/O count and defer the decrement to
release. I think that should work and addresses the raciness issue.
I'll give it a try.

> We shouldn't have to check for XBF_ASYNC in xfs_buf_submit() - it is
> the path taken for async IO submission, so we should probably
> ASSERT(bp->b_flags & XBF_ASYNC) in this function to ensure that is
> the case.
>

Yeah, that's unnecessary. There's already such an assert in
xfs_buf_submit(), actually.

> [Thinking aloud - __test_and_set_bit() might make this code a bit
> cleaner]
>

On a quick try, this complains about b_flags being an unsigned int. I
think I'll leave the set bit as is and use a helper for the release,
which also provides a location to explain how the count works.
> > diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
> > index 8bfb974..e1f95e0 100644
> > --- a/fs/xfs/xfs_buf.h
> > +++ b/fs/xfs/xfs_buf.h
> > @@ -43,6 +43,7 @@ typedef enum {
> >  #define XBF_READ	 (1 << 0) /* buffer intended for reading from device */
> >  #define XBF_WRITE	 (1 << 1) /* buffer intended for writing to device */
> >  #define XBF_READ_AHEAD	 (1 << 2) /* asynchronous read-ahead */
> > +#define XBF_IN_FLIGHT	 (1 << 3)
>
> Hmmm - it's an internal flag, so probably should be prefixed with an
> "_" and moved down to the section with _XBF_KMEM and friends.
>

Indeed, thanks.

Brian

> Thoughts?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 4665ff6..45d3ddd 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1018,7 +1018,10 @@ xfs_buf_ioend(
 
 	trace_xfs_buf_iodone(bp, _RET_IP_);
 
-	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
+	if (bp->b_flags & XBF_IN_FLIGHT)
+		percpu_counter_dec(&bp->b_target->bt_io_count);
+
+	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD | XBF_IN_FLIGHT);
 
 	/*
 	 * Pull in IO completion errors now. We are guaranteed to be running
@@ -1341,6 +1344,11 @@ xfs_buf_submit(
 	 * xfs_buf_ioend too early.
 	 */
 	atomic_set(&bp->b_io_remaining, 1);
+	if (bp->b_flags & XBF_ASYNC) {
+		percpu_counter_inc(&bp->b_target->bt_io_count);
+		bp->b_flags |= XBF_IN_FLIGHT;
+	}
+
 	_xfs_buf_ioapply(bp);
 
 	/*
@@ -1533,6 +1541,8 @@ xfs_wait_buftarg(
 	 * ensure here that all reference counts have been dropped before we
 	 * start walking the LRU list.
 	 */
+	while (percpu_counter_sum(&btp->bt_io_count))
+		delay(100);
 	drain_workqueue(btp->bt_mount->m_buf_workqueue);
 
 	/* loop until there is nothing left on the lru list. */
@@ -1629,6 +1639,8 @@ xfs_free_buftarg(
 	struct xfs_buftarg	*btp)
 {
 	unregister_shrinker(&btp->bt_shrinker);
+	ASSERT(percpu_counter_sum(&btp->bt_io_count) == 0);
+	percpu_counter_destroy(&btp->bt_io_count);
 	list_lru_destroy(&btp->bt_lru);
 
 	if (mp->m_flags & XFS_MOUNT_BARRIER)
@@ -1693,6 +1705,9 @@ xfs_alloc_buftarg(
 	if (list_lru_init(&btp->bt_lru))
 		goto error;
 
+	if (percpu_counter_init(&btp->bt_io_count, 0, GFP_KERNEL))
+		goto error;
+
 	btp->bt_shrinker.count_objects = xfs_buftarg_shrink_count;
 	btp->bt_shrinker.scan_objects = xfs_buftarg_shrink_scan;
 	btp->bt_shrinker.seeks = DEFAULT_SEEKS;
diff --git a/fs/xfs/xfs_buf.h b/fs/xfs/xfs_buf.h
index 8bfb974..e1f95e0 100644
--- a/fs/xfs/xfs_buf.h
+++ b/fs/xfs/xfs_buf.h
@@ -43,6 +43,7 @@ typedef enum {
 #define XBF_READ	 (1 << 0) /* buffer intended for reading from device */
 #define XBF_WRITE	 (1 << 1) /* buffer intended for writing to device */
 #define XBF_READ_AHEAD	 (1 << 2) /* asynchronous read-ahead */
+#define XBF_IN_FLIGHT	 (1 << 3)
 #define XBF_ASYNC	 (1 << 4) /* initiator will not wait for completion */
 #define XBF_DONE	 (1 << 5) /* all pages in the buffer uptodate */
 #define XBF_STALE	 (1 << 6) /* buffer has been staled, do not find it */
@@ -115,6 +116,8 @@ typedef struct xfs_buftarg {
 	/* LRU control structures */
 	struct shrinker		bt_shrinker;
 	struct list_lru		bt_lru;
+
+	struct percpu_counter	bt_io_count;
 } xfs_buftarg_t;
 
 struct xfs_buf;