Message ID: cover.1678777941.git.wqu@suse.com (mailing list archive)
Series: btrfs: scrub: use a more reader friendly code to implement scrub_simple_mirror()
On 14.03.23 08:36, Qu Wenruo wrote:
> - More testing on zoned devices
>   Now the patchset can already pass all scrub/replace groups with
>   regular devices.

While probably not the ultimate solution for you here, you can use
qemu to emulate ZNS drives [1].

The TL;DR is:

qemu-system-x86_64 -device nvme,id=nvme0,serial=01234 \
	-drive file=${znsimg},id=nvmezns0,format=raw,if=none \
	-device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true

[1] https://zonedstorage.io/docs/getting-started/zbd-emulation#nvme-zoned-namespace-device-emulation-with-qemu
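The backing image can be created with qemu-img, and the zone geometry
can be tuned through the zoned.* namespace properties if the defaults
don't fit. Roughly like this (the sizes and limits below are only
example values, also untested here):

# create a backing image sized to a whole number of zones
qemu-img create -f raw zns.img 16G

# all of the zoned.* properties below are optional and fall back to
# qemu's built-in defaults when omitted
qemu-system-x86_64 -device nvme,id=nvme0,serial=01234 \
	-drive file=zns.img,id=nvmezns0,format=raw,if=none \
	-device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true,\
zoned.zone_size=64M,zoned.zone_capacity=62M,zoned.max_open=16,zoned.max_active=32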
On 2023/3/14 18:58, Johannes Thumshirn wrote:
> On 14.03.23 08:36, Qu Wenruo wrote:
>> - More testing on zoned devices
>>   Now the patchset can already pass all scrub/replace groups with
>>   regular devices.
>
> While probably not the ultimate solution for you here, you can use
> qemu to emulate ZNS drives [1].
>
> The TL;DR is:
>
> qemu-system-x86_64 -device nvme,id=nvme0,serial=01234 \
> 	-drive file=${znsimg},id=nvmezns0,format=raw,if=none \
> 	-device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true
>
> [1] https://zonedstorage.io/docs/getting-started/zbd-emulation#nvme-zoned-namespace-device-emulation-with-qemu

Is there a libvirt xml binding for it?
I still prefer libvirt xml based emulation, which would benefit all my
daily runs.

Thanks,
Qu
On 14.03.23 12:06, Qu Wenruo wrote:
> On 2023/3/14 18:58, Johannes Thumshirn wrote:
>> On 14.03.23 08:36, Qu Wenruo wrote:
>>> - More testing on zoned devices
>>>   Now the patchset can already pass all scrub/replace groups with
>>>   regular devices.
>>
>> While probably not the ultimate solution for you here, you can use
>> qemu to emulate ZNS drives [1].
>>
>> The TL;DR is:
>>
>> qemu-system-x86_64 -device nvme,id=nvme0,serial=01234 \
>> 	-drive file=${znsimg},id=nvmezns0,format=raw,if=none \
>> 	-device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true
>>
>> [1] https://zonedstorage.io/docs/getting-started/zbd-emulation#nvme-zoned-namespace-device-emulation-with-qemu
>
> Is there a libvirt xml binding for it?
> I still prefer libvirt xml based emulation, which would benefit all my
> daily runs.

Not that I know of, but you can add qemu command line options into
libvirt.

Something like this (untested):

<qemu:commandline>
  <qemu:arg value='-drive'/>
  <qemu:arg value='file=/path/to/nvme.img,format=raw,if=none,id=nvmezns0'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true'/>
</qemu:commandline>
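Note that libvirt only accepts the qemu:commandline element if the qemu
XML namespace is declared on the domain element, and the nvme controller
itself likely needs to be passed through the same way so that bus=nvme0
resolves. Roughly (again untested):

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='nvme,id=nvme0,serial=01234'/>
    <qemu:arg value='-drive'/>
    <qemu:arg value='file=/path/to/nvme.img,format=raw,if=none,id=nvmezns0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true'/>
  </qemu:commandline>
</domain>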
On 2023/3/14 19:10, Johannes Thumshirn wrote:
> On 14.03.23 12:06, Qu Wenruo wrote:
>> On 2023/3/14 18:58, Johannes Thumshirn wrote:
>>> On 14.03.23 08:36, Qu Wenruo wrote:
>>>> - More testing on zoned devices
>>>>   Now the patchset can already pass all scrub/replace groups with
>>>>   regular devices.
>>>
>>> While probably not the ultimate solution for you here, you can use
>>> qemu to emulate ZNS drives [1].
>>>
>>> The TL;DR is:
>>>
>>> qemu-system-x86_64 -device nvme,id=nvme0,serial=01234 \
>>> 	-drive file=${znsimg},id=nvmezns0,format=raw,if=none \
>>> 	-device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true
>>>
>>> [1] https://zonedstorage.io/docs/getting-started/zbd-emulation#nvme-zoned-namespace-device-emulation-with-qemu
>>
>> Is there a libvirt xml binding for it?
>> I still prefer libvirt xml based emulation, which would benefit all my
>> daily runs.
>
> Not that I know of, but you can add qemu command line options into
> libvirt.
>
> Something like this (untested):
>
> <qemu:commandline>
>   <qemu:arg value='-drive'/>
>   <qemu:arg value='file=/path/to/nvme.img,format=raw,if=none,id=nvmezns0'/>
>   <qemu:arg value='-device'/>
>   <qemu:arg value='nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true'/>
> </qemu:commandline>

I guess that's the only way to go.
I'll report back after I get it running.

If everything goes well, we will see some super niche combinations, like
64K page size aarch64 with ZNS, running for tests...

Thanks,
Qu
On Tue, Mar 14, 2023 at 03:34:55PM +0800, Qu Wenruo wrote:
> This series can be found in my github repo:
>
> https://github.com/adam900710/linux/tree/scrub_stripe
>
> It's recommended to fetch from the repo, as our misc-next seems to
> change pretty rapidly.

There's a cleanup series that changed the return value type of
btrfs_bio_alloc, which is used in 3 patches, and it fails to compile.
The bio pointer needs to be switched to btrfs_bio and that touches the
logic, so it's not a trivial change I'd otherwise do.

There's also some trailing whitespace in some patches and 'git am'
refuses to apply them. Pulling the series unfortunately won't help,
please refresh the series.

Regarding misc-next updates: it gets rebased each Monday after an -rc is
released, so the potentially duplicated patches merged in the last week
disappear from misc-next. Otherwise I don't rebase it, only append
patches and occasionally update tags or some trivial bits in the code.

If you work on a patchset for a long time it may become a chasing game
for the stable base; in that case we need to coordinate and/or postpone
some series.
On 2023/3/15 02:48, David Sterba wrote:
> On Tue, Mar 14, 2023 at 03:34:55PM +0800, Qu Wenruo wrote:
>> This series can be found in my github repo:
>>
>> https://github.com/adam900710/linux/tree/scrub_stripe
>>
>> It's recommended to fetch from the repo, as our misc-next seems to
>> change pretty rapidly.
>
> There's a cleanup series that changed the return value type of
> btrfs_bio_alloc, which is used in 3 patches, and it fails to compile.
> The bio pointer needs to be switched to btrfs_bio and that touches the
> logic, so it's not a trivial change I'd otherwise do.

I'm aware of that type-safety patchset, so that's expected, and I'm
already prepared to rebase.

The reason I sent the series as-is is to get some more feedback,
especially considering how large the series is and that it's touching
the core functionality of scrub.

> There's also some trailing whitespace in some patches and 'git am'
> refuses to apply them. Pulling the series unfortunately won't help,
> please refresh the series.

That's a total surprise to me.
Shouldn't the btrfs workflow catch such problems? Or is the hook not
triggered on rebase?

Anyway, thanks for catching this new problem.

Thanks,
Qu

> Regarding misc-next updates: it gets rebased each Monday after an -rc is
> released, so the potentially duplicated patches merged in the last week
> disappear from misc-next. Otherwise I don't rebase it, only append
> patches and occasionally update tags or some trivial bits in the code.
>
> If you work on a patchset for a long time it may become a chasing game
> for the stable base; in that case we need to coordinate and/or postpone
> some series.
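PS: for the next revision I should be able to catch the whitespace
locally before sending, with something like (just a sketch, assuming
misc-next as the base):

# warn about trailing whitespace in every commit on top of the base
git log --check origin/misc-next..HEAD

# or let 'git am' fix it up while applying
git am --whitespace=fix 00*.patch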
On Tue, Mar 14, 2023 at 03:34:55PM +0800, Qu Wenruo wrote:
> - More cleanup on RAID56 path
>   Now RAID56 still uses some old facilities, so things like
>   scrub_sector and scrub_bio can not be fully cleaned up.

I think converting the raid path is something that should be done
before merging the series, instead of leaving the parallel
infrastructure in.
On 2023/3/15 15:52, Christoph Hellwig wrote:
> On Tue, Mar 14, 2023 at 03:34:55PM +0800, Qu Wenruo wrote:
>> - More cleanup on RAID56 path
>>   Now RAID56 still uses some old facilities, so things like
>>   scrub_sector and scrub_bio can not be fully cleaned up.
>
> I think converting the raid path is something that should be done
> before merging the series, instead of leaving the parallel
> infrastructure in.

The RAID56 scrub path is indeed a little messy, but I'm not sure what
the better way to clean it up is.

For now, if it's a data stripe, it's already using
scrub_simple_mirror(), so that's not a big deal.

The problem is in scrub_raid56_data_stripes_for_parity(), which does the
same data stripe scrubbing, but with slightly different behavior
dedicated to parity.

My current plan is to go with scrub_stripe for the low hanging fruit
(everything except P/Q scrub; perf and readability improvements), then
convert the remaining RAID56 code to the scrub_stripe facility.

The conversion itself may be even larger than this patchset, although
most of the change would just be dropping the old facility.

So for now, I prefer to perfect scrub_stripe for the low hanging fruit
first, leaving the old facility's days numbered, then do a proper final
conversion.

Thanks,
Qu
On 2023/3/15 15:52, Christoph Hellwig wrote:
> On Tue, Mar 14, 2023 at 03:34:55PM +0800, Qu Wenruo wrote:
>> - More cleanup on RAID56 path
>>   Now RAID56 still uses some old facilities, so things like
>>   scrub_sector and scrub_bio can not be fully cleaned up.
>
> I think converting the raid path is something that should be done
> before merging the series, instead of leaving the parallel
> infrastructure in.

BTW, I finally have a local branch with the remaining path converted to
the new infrastructure.

The real problem is not converting the raid path. The patch doing the
conversion is pretty small; thanks to the new scrub_stripe
infrastructure, we can get rid of the complex scrub_parity and
scrub_recover related code:

 fs/btrfs/scrub.c | 168 ++++++++++++++++++++++++++++++++++++++++++++---
 fs/btrfs/scrub.h |   4 ++
 2 files changed, 162 insertions(+), 10 deletions(-)

The problem is how to clean up the existing scrub infrastructure (the
scrub_sector, scrub_block, scrub_recover and scrub_parity structures
and the involved calls).

Currently I just do it in a single cleanup patch, and the result is
super awful:

 fs/btrfs/scrub.c | 2513 +---------------------------------------------
 fs/btrfs/scrub.h |    4 -
 2 files changed, 5 insertions(+), 2512 deletions(-)

To be honest, I'm more concerned about how to split the cleanup patch.

Thanks,
Qu
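One option I'm considering is to redo the removal as per-structure
commits via interactive staging, something like (just a sketch):

# undo the mega cleanup commit but keep the working tree state
git reset HEAD~

# stage and commit only the hunks removing one structure at a time
git add -p fs/btrfs/scrub.c
git commit -s -m "btrfs: scrub: remove the old scrub_parity infrastructure"

# then repeat for scrub_recover, scrub_block, scrub_sector, ...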
On Tue, Mar 28, 2023 at 05:34:03PM +0800, Qu Wenruo wrote:
> Currently I just do it in a single cleanup patch, and the result is
> super awful:
>
>  fs/btrfs/scrub.c | 2513 +---------------------------------------------
>  fs/btrfs/scrub.h |    4 -
>  2 files changed, 5 insertions(+), 2512 deletions(-)

To me this looks perfect. A patch that just removes a lot of code is
always good.
On 2023/3/29 07:37, Christoph Hellwig wrote:
> On Tue, Mar 28, 2023 at 05:34:03PM +0800, Qu Wenruo wrote:
>> Currently I just do it in a single cleanup patch, and the result is
>> super awful:
>>
>>  fs/btrfs/scrub.c | 2513 +---------------------------------------------
>>  fs/btrfs/scrub.h |    4 -
>>  2 files changed, 5 insertions(+), 2512 deletions(-)
>
> To me this looks perfect. A patch that just removes a lot of code is
> always good.

Isn't the patch split also important?

Anyway, I will try some different methods to split it.

Thanks,
Qu
On Wed, Mar 29, 2023 at 07:44:59AM +0800, Qu Wenruo wrote:
> Isn't the patch split also important?
>
> Anyway, I will try some different methods to split it.

I'll have to defer to David, as btrfs is sometimes a little different
from the rest of the kernel in its requirements, but everywhere else a
patch that just removes code can be as big as it gets.