Message ID: 20200721142234.2680-1-josef@toxicpanda.com (mailing list archive)
Series: Change data reservations to use the ticketing infra
On 21.07.2020 at 17:22, Josef Bacik wrote:
> v3->v4:
> - Rebased onto a recent misc-next, slight fixup because of commenting
>   around the flush states.
>
> v2->v3:
> - Rebased onto a recent misc-next
>
> v1->v2:
> - Adjusted a comment in may_commit_transaction.
> - Fixed one of the intermediate patches to properly update ->reclaim_size.
>
> We've had two different things in place to reserve data and metadata
> space, because generally speaking data is much simpler. However, the data
> reservations still suffered from the same issues that plagued metadata
> reservations: multiple tasks could race to get reservations. This causes
> problems with cases like write/delete loops, where we should be able to
> fill up the fs, delete everything, and go again. You would sometimes race
> with a flusher that is trying to unpin the free space, take its
> reservations, and then the reservation would fail.
>
> Fix this by moving the data reservations under the metadata ticketing
> infrastructure. This gets rid of that problem and further simplifies our
> enospc code by consolidating it into a single path. Thanks,
>
> Josef

I've gone through the series again and it looks good. However, btrfs_alloc_data_chunk_ondemand no longer allocates a data chunk but simply tries to reserve the requested data space. This means this series needs another patch giving it a more fitting name, something like btrfs_(alloc|reserve)_data_space or some such?
On Wed, Jul 22, 2020 at 11:35:15AM +0300, Nikolay Borisov wrote:
> On 21.07.2020 at 17:22, Josef Bacik wrote:
> > v3->v4:
> > [...]
>
> I've gone through the series again and it looks good. However,
> btrfs_alloc_data_chunk_ondemand no longer allocates a data chunk but
> simply tries to reserve the requested data space. This means this series
> needs another patch giving it a more fitting name, something like
> btrfs_(alloc|reserve)_data_space or some such?

Yes please, this could be confusing, as allocation and reservation are different things.
On Tue, Jul 21, 2020 at 10:22:11AM -0400, Josef Bacik wrote:
> [...]
>
> Fix this by moving the data reservations under the metadata ticketing
> infrastructure. This gets rid of that problem and further simplifies our
> enospc code by consolidating it into a single path. Thanks,

V4 was in for-next, so it got some testing coverage, though I haven't done enospc-targeted tests. I'm going to go through this series and put it into misc-next once rc1 is out.
On Thu, Aug 13, 2020 at 05:14:43PM +0200, David Sterba wrote:
> On Tue, Jul 21, 2020 at 10:22:11AM -0400, Josef Bacik wrote:
> > [...]
>
> V4 was in for-next, so it got some testing coverage, though I haven't
> done enospc-targeted tests. I'm going to go through this series and put
> it into misc-next once rc1 is out.

It's in misc-next now (5.9-rc1 based).