Message ID: 20231207072056.14588-1-ddiss@suse.de (mailing list archive)
Series: btrfs/282: resolve intermittent failures
On 2023/12/7 17:50, David Disseldorp wrote:
> btrfs/282 fails intermittently under some circumstances. This patchset
> adds dmdelay to make storage latencies more uniform and slightly
> increases throttled rate tolerances.

My concern with using dm_delay is: is the delay per-merged-bio, or something else?

If the delay is only per-bio (after merging), then I'm afraid it would not
be good enough. The bio plug we use in scrub has a much higher chance of
causing a difference in the scrub speed.

We may want a delay behavior which can take bio size into consideration,
at least.

Thanks,
Qu

>
>  common/dmdelay      | 13 ++++---------
>  tests/btrfs/282     | 43 +++++++++++++++++++++++++++++--------------
>  tests/btrfs/282.out |  2 +-
>  tests/xfs/311       |  2 +-
>  4 files changed, 35 insertions(+), 25 deletions(-)
>
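For context, dm-delay holds each bio it receives for a fixed interval before passing it down, regardless of the bio's size. A minimal sketch of setting up such a mapping (the device name and 30ms delay are illustrative stand-ins, not values from the common/dmdelay helpers):

```sh
# Illustrative dm-delay setup -- /dev/sdb and 30ms are hypothetical.
dev=/dev/sdb
sectors=$(blockdev --getsz "$dev")
# Table format: <start> <len> delay <device> <offset> <delay_ms>
# Every bio pays the same 30ms, whether it carries 4 KiB or 128 KiB --
# the per-bio behavior Qu is concerned about.
echo "0 $sectors delay $dev 0 30" | dmsetup create delay-test
```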
On Thu, 7 Dec 2023 20:19:00 +1030, Qu Wenruo wrote:

> On 2023/12/7 17:50, David Disseldorp wrote:
> > btrfs/282 fails intermittently under some circumstances. This patchset
> > adds dmdelay to make storage latencies more uniform and slightly
> > increases throttled rate tolerances.
>
> My concern with using dm_delay is: is the delay per-merged-bio, or something else?
>
> If the delay is only per-bio (after merging), then I'm afraid it would not
> be good enough.

The dmdelay device presents itself as a regular block device, so I think
delay_map()->delay_bio() handling comes after any merges.

> The bio plug we use in scrub has a much higher chance of causing a
> difference in the scrub speed.
>
> We may want a delay behavior which can take bio size into consideration,
> at least.

Should we manipulate it a bit by fiddling with queue/max_sectors_kb and
queue/nomerges?

Thanks, David
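The knobs David mentions live under the block device's queue sysfs directory. A hedged sketch of what such tuning could look like (the dm-0 path and the 64 KiB cap are illustrative, not from the patchset):

```sh
# Illustrative only: dm-0 stands in for the dm-delay device.
# Bounding max_sectors_kb caps how much data one request can carry,
# and nomerges=2 disables all merge attempts, so each delayed bio
# covers a more uniform amount of I/O.
echo 64 > /sys/block/dm-0/queue/max_sectors_kb
echo 2  > /sys/block/dm-0/queue/nomerges
```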
On 2023/12/8 00:19, David Disseldorp wrote:
> On Thu, 7 Dec 2023 20:19:00 +1030, Qu Wenruo wrote:
>
>> On 2023/12/7 17:50, David Disseldorp wrote:
>>> btrfs/282 fails intermittently under some circumstances. This patchset
>>> adds dmdelay to make storage latencies more uniform and slightly
>>> increases throttled rate tolerances.
>>
>> My concern with using dm_delay is: is the delay per-merged-bio, or something else?
>>
>> If the delay is only per-bio (after merging), then I'm afraid it would not
>> be good enough.
>
> The dmdelay device presents itself as a regular block device, so I think
> delay_map()->delay_bio() handling comes after any merges.
>
>> The bio plug we use in scrub has a much higher chance of causing a
>> difference in the scrub speed.
>>
>> We may want a delay behavior which can take bio size into consideration,
>> at least.
>
> Should we manipulate it a bit by fiddling with queue/max_sectors_kb and
> queue/nomerges?

I'm not sure delay is our best friend here. Something that limits the
read/write speed directly would be the best option.

Unfortunately I don't see a dm-throttle upstream. Can we limit the IO
speed on a certain device using cgroup?

Thanks,
Qu

>
> Thanks, David
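The cgroup v2 io controller can throttle a device's bandwidth directly, and it accounts by bytes rather than per bio, which would sidestep the merge-sensitivity issue. A rough sketch of the approach Qu asks about (the cgroup name, the 253:0 device numbers, and the 10 MiB/s rates are all illustrative):

```sh
# Illustrative cgroup v2 throttling -- names and numbers are hypothetical.
echo +io > /sys/fs/cgroup/cgroup.subtree_control   # enable the io controller
mkdir /sys/fs/cgroup/throttle-test
# Limit reads and writes on device 253:0 to 10 MiB/s each.
echo "253:0 rbps=10485760 wbps=10485760" > /sys/fs/cgroup/throttle-test/io.max
echo $$ > /sys/fs/cgroup/throttle-test/cgroup.procs  # throttle this shell's I/O
```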