Message ID | ZoGJRSe98wZFDK36@kernel.org (mailing list archive)
State      | New
Series     | [v2] xfs: enable WQ_MEM_RECLAIM on m_sync_workqueue
On Sun, Jun 30, 2024 at 12:35:17PM -0400, Mike Snitzer wrote: > The need for this fix was exposed while developing a new NFS feature > called "localio" which bypasses the network, if both the client and > server are on the same host, see: > https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/log/?h=nfs-localio-for-6.11 > > Because NFS's nfsiod_workqueue enables WQ_MEM_RECLAIM, writeback will > call into NFS and if localio is enabled the NFS client will call > directly into xfs_file_write_iter, this causes the following > backtrace when running xfstest generic/476 against NFS with localio: Oh, that's nasty. We now have to change every path in every filesystem that NFS can call that might defer work to a workqueue. IOWs, this makes WQ_MEM_RECLAIM pretty much mandatory for front end workqueues in the filesystem and block layers regardless of whether the filesystem or block layer path runs under memory reclaim context or not. All WQ_MEM_RECLAIM does is create a rescuer thread at workqueue creation that is used as a "worker of last resort" when forking new worker threads fail due to ENOMEM. This prevents deadlocks when doing GFP_KERNEL allocations in workqueue context and potentially deadlocking because a GFP_KERNEL allocation is blocking waiting for this workqueue to allocate workers to make progress. > workqueue: WQ_MEM_RECLAIM writeback:wb_workfn is flushing !WQ_MEM_RECLAIM xfs-sync/vdc:xfs_flush_inodes_worker > WARNING: CPU: 6 PID: 8525 at kernel/workqueue.c:3706 check_flush_dependency+0x2a4/0x328 > Modules linked in: > CPU: 6 PID: 8525 Comm: kworker/u71:5 Not tainted 6.10.0-rc3-ktest-00032-g2b0a133403ab #18502 > Hardware name: linux,dummy-virt (DT) > Workqueue: writeback wb_workfn (flush-0:33) > pstate: 400010c5 (nZcv daIF -PAN -UAO -TCO -DIT +SSBS BTYPE=--) > pc : check_flush_dependency+0x2a4/0x328 > lr : check_flush_dependency+0x2a4/0x328 > sp : ffff0000c5f06bb0 > x29: ffff0000c5f06bb0 x28: ffff0000c998a908 x27: 1fffe00019331521 > x26: ffff0000d0620900 x25: ffff0000c5f06ca0 x24: ffff8000828848c0 > x23: 1fffe00018be0d8e x22: ffff0000c1210000 x21: ffff0000c75fde00 > x20: ffff800080bfd258 x19: ffff0000cad63400 x18: ffff0000cd3a4810 > x17: 0000000000000000 x16: 0000000000000000 x15: ffff800080508d98 > x14: 0000000000000000 x13: 204d49414c434552 x12: 1fffe0001b6eeab2 > x11: ffff60001b6eeab2 x10: dfff800000000000 x9 : ffff60001b6eeab3 > x8 : 0000000000000001 x7 : 00009fffe491154e x6 : ffff0000db775593 > x5 : ffff0000db775590 x4 : ffff0000db775590 x3 : 0000000000000000 > x2 : 0000000000000027 x1 : ffff600018be0d62 x0 : dfff800000000000 > Call trace: > check_flush_dependency+0x2a4/0x328 > __flush_work+0x184/0x5c8 > flush_work+0x18/0x28 > xfs_flush_inodes+0x68/0x88 > xfs_file_buffered_write+0x128/0x6f0 > xfs_file_write_iter+0x358/0x448 > nfs_local_doio+0x854/0x1568 > nfs_initiate_pgio+0x214/0x418 > nfs_generic_pg_pgios+0x304/0x480 > nfs_pageio_doio+0xe8/0x240 > nfs_pageio_complete+0x160/0x480 > nfs_writepages+0x300/0x4f0 > do_writepages+0x12c/0x4a0 > __writeback_single_inode+0xd4/0xa68 > writeback_sb_inodes+0x470/0xcb0 > __writeback_inodes_wb+0xb0/0x1d0 > wb_writeback+0x594/0x808 > wb_workfn+0x5e8/0x9e0 > process_scheduled_works+0x53c/0xd90 Ah, this is just the standard backing device flusher thread that is running. This is the back end of filesystem writeback, not the front end. It was never intended to be able to directly do loop back IO submission to the front end filesystem IO paths like this - they are very different contexts and have very different constraints. 
This particular situation occurs when XFS is near ENOSPC. There's a
very high probability it is going to fail these writes, and so it's
doing slow path work that involves blocking, and front end filesystem
processing is allowed to block on just about anything in the
filesystem as long as it can guarantee it won't deadlock.

Fundamentally, doing IO submission in WQ_MEM_RECLAIM context changes
the submission context for -all- filesystems, not just XFS. If we
have to make this change to XFS, then -every- workqueue in XFS (not
just this one) must be converted to WQ_MEM_RECLAIM, and then many
workqueues in all the other filesystems will need to have the same
changes made, too.

That doesn't smell right to me.

----

So let's look at how back end filesystem IO currently submits new
front end filesystem IO: the loop block device does this, and it uses
workqueues to defer submitted IO so that the lower IO submission
context can be directly controlled and made to match the front end
filesystem IO submission path behaviours.

The loop device does not use rescuer threads - that's not needed when
you have a queue based submission and just use the workqueues to run
the queues until they are empty. So the loop device uses a standard
unbound workqueue for its IO submission path, and then when the work
is running it sets the task flags to say "this is a nested IO worker
thread" before it starts processing the submission queue and
submitting new front end filesystem IO:

static void loop_process_work(struct loop_worker *worker,
			struct list_head *cmd_list, struct loop_device *lo)
{
	int orig_flags = current->flags;
	struct loop_cmd *cmd;

	current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO;

PF_LOCAL_THROTTLE prevents deadlocks in balance_dirty_pages() by
lifting the dirty ratio for this thread a little, hence giving it
priority over the upper filesystem. i.e. the upper filesystem will
throttle incoming writes first, then the back end IO submission
thread can still submit new front end IOs to the lower filesystem and
they won't block in balance_dirty_pages() because the lower
filesystem has a higher limit. Hence the lower filesystem can always
drain the dirty pages on the upper filesystem, and the system won't
deadlock in balance_dirty_pages().

Using WQ_MEM_RECLAIM context for IO submission does not address this
deadlock.

The PF_MEMALLOC_NOIO flag prevents the lower filesystem IO from
causing memory reclaim to re-enter filesystems or IO devices and so
prevents deadlocks from occurring where IO that cleans pages is
waiting on IO to complete.

Using WQ_MEM_RECLAIM context for IO submission does not address this
deadlock either.

IOWs, doing front end IO submission like this from the BDI flusher
thread is guaranteed to deadlock sooner or later, regardless of
whether WQ_MEM_RECLAIM is set or not on workqueues that are flushed
during IO submission. The WQ_MEM_RECLAIM warning is effectively your
canary in the coal mine. And the canary just carked it.

IMO, the only sane way to ensure this sort of nested "back-end page
cleaning submits front-end filesystem IO" mechanism works is to do
something similar to the loop device. You most definitely don't want
to be doing buffered IO (double caching is almost always bad) and you
want to be doing async direct IO so that the submission thread is not
waiting on completion before the next IO is submitted.

-Dave.
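To make that pattern concrete, below is a condensed sketch of the whole
sequence the quoted loop_process_work() fragment belongs to: mark the worker
as a nested IO submitter, drain the submission queue, then restore the task
flags. This is an illustrative sketch, not the verbatim loop driver code,
and the queue drain itself is elided as a comment.

#include <linux/sched.h>
#include <linux/workqueue.h>

/* Runs from a plain unbound workqueue - no rescuer thread required. */
static void nested_io_workfn(struct work_struct *work)
{
	unsigned int orig_flags = current->flags;

	/*
	 * PF_LOCAL_THROTTLE: balance_dirty_pages() gives this thread more
	 * headroom than the upper filesystem, so it can always drain the
	 * upper filesystem's dirty pages.
	 * PF_MEMALLOC_NOIO: memory reclaim from this context cannot recurse
	 * back into filesystems or block devices.
	 */
	current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO;

	/*
	 * ... drain the submission queue here; each queued entry turns into
	 * a ->read_iter()/->write_iter() call on the lower filesystem ...
	 */

	current_restore_flags(orig_flags,
			      PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO);
}

Because the work function always runs its queue until it is empty, a plain
unbound workqueue is sufficient and no WQ_MEM_RECLAIM rescuer is needed.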
On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote:
> Oh, that's nasty.

Yes.

> We now have to change every path in every filesystem that NFS can
> call that might defer work to a workqueue.

Yes. That's why the kernel for a long time had the stance that using
network file systems / storage locally is entirely unsupported.

If we want to change that we'll have a lot of work to do.
On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote: > On Sun, Jun 30, 2024 at 12:35:17PM -0400, Mike Snitzer wrote: > > The need for this fix was exposed while developing a new NFS feature > > called "localio" which bypasses the network, if both the client and > > server are on the same host, see: > > https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/log/?h=nfs-localio-for-6.11 > > > > Because NFS's nfsiod_workqueue enables WQ_MEM_RECLAIM, writeback will > > call into NFS and if localio is enabled the NFS client will call > > directly into xfs_file_write_iter, this causes the following > > backtrace when running xfstest generic/476 against NFS with localio: > > Oh, that's nasty. > > We now have to change every path in every filesystem that NFS can > call that might defer work to a workqueue. > > IOWs, this makes WQ_MEM_RECLAIM pretty much mandatory for front end > workqueues in the filesystem and block layers regardless of whether > the filesystem or block layer path runs under memory reclaim context > or not. As you noticed later (yet didn't circle back to edit here) this is just triggering when space is low. But yeah, I submitted the patch knowing it was likely not viable, I just wasn't clear on all the whys. I appreciate your feedback, and yes it did feel like a slippery slope (understatement) that'd force other filesystems to follow. > All WQ_MEM_RECLAIM does is create a rescuer thread at workqueue > creation that is used as a "worker of last resort" when forking new > worker threads fail due to ENOMEM. This prevents deadlocks when > doing GFP_KERNEL allocations in workqueue context and potentially > deadlocking because a GFP_KERNEL allocation is blocking waiting for > this workqueue to allocate workers to make progress. Right. 
> > workqueue: WQ_MEM_RECLAIM writeback:wb_workfn is flushing !WQ_MEM_RECLAIM xfs-sync/vdc:xfs_flush_inodes_worker > > WARNING: CPU: 6 PID: 8525 at kernel/workqueue.c:3706 check_flush_dependency+0x2a4/0x328 > > Modules linked in: > > CPU: 6 PID: 8525 Comm: kworker/u71:5 Not tainted 6.10.0-rc3-ktest-00032-g2b0a133403ab #18502 > > Hardware name: linux,dummy-virt (DT) > > Workqueue: writeback wb_workfn (flush-0:33) > > pstate: 400010c5 (nZcv daIF -PAN -UAO -TCO -DIT +SSBS BTYPE=--) > > pc : check_flush_dependency+0x2a4/0x328 > > lr : check_flush_dependency+0x2a4/0x328 > > sp : ffff0000c5f06bb0 > > x29: ffff0000c5f06bb0 x28: ffff0000c998a908 x27: 1fffe00019331521 > > x26: ffff0000d0620900 x25: ffff0000c5f06ca0 x24: ffff8000828848c0 > > x23: 1fffe00018be0d8e x22: ffff0000c1210000 x21: ffff0000c75fde00 > > x20: ffff800080bfd258 x19: ffff0000cad63400 x18: ffff0000cd3a4810 > > x17: 0000000000000000 x16: 0000000000000000 x15: ffff800080508d98 > > x14: 0000000000000000 x13: 204d49414c434552 x12: 1fffe0001b6eeab2 > > x11: ffff60001b6eeab2 x10: dfff800000000000 x9 : ffff60001b6eeab3 > > x8 : 0000000000000001 x7 : 00009fffe491154e x6 : ffff0000db775593 > > x5 : ffff0000db775590 x4 : ffff0000db775590 x3 : 0000000000000000 > > x2 : 0000000000000027 x1 : ffff600018be0d62 x0 : dfff800000000000 > > Call trace: > > check_flush_dependency+0x2a4/0x328 > > __flush_work+0x184/0x5c8 > > flush_work+0x18/0x28 > > xfs_flush_inodes+0x68/0x88 > > xfs_file_buffered_write+0x128/0x6f0 > > xfs_file_write_iter+0x358/0x448 > > nfs_local_doio+0x854/0x1568 > > nfs_initiate_pgio+0x214/0x418 > > nfs_generic_pg_pgios+0x304/0x480 > > nfs_pageio_doio+0xe8/0x240 > > nfs_pageio_complete+0x160/0x480 > > nfs_writepages+0x300/0x4f0 > > do_writepages+0x12c/0x4a0 > > __writeback_single_inode+0xd4/0xa68 > > writeback_sb_inodes+0x470/0xcb0 > > __writeback_inodes_wb+0xb0/0x1d0 > > wb_writeback+0x594/0x808 > > wb_workfn+0x5e8/0x9e0 > > process_scheduled_works+0x53c/0xd90 > > Ah, this is just the standard backing device flusher thread that is > running. This is the back end of filesystem writeback, not the front > end. It was never intended to be able to directly do loop back IO > submission to the front end filesystem IO paths like this - they are > very different contexts and have very different constraints. > > This particular situation occurs when XFS is near ENOSPC. There's a > very high probability it is going to fail these writes, and so it's > doing slow path work that involves blocking and front end filesystem > processing is allowed to block on just about anything in the > filesystem as long as it can guarantee it won't deadlock. > > Fundamentally, doing IO submission in WQ_MEM_RECLAIM context changes > the submission context for -all- filesystems, not just XFS. Yes, I see that. > If we have to make this change to XFS, then -every- > workqueue in XFS (not just this one) must be converted to > WQ_MEM_RECLAIM, and then many workqueues in all the other > filesystems will need to have the same changes made, too. AFAICT they are all WQ_MEM_RECLAIM aside from m_sync_workqueue, but that's besides the point. > That doesn't smell right to me. > > ---- > > So let's look at how back end filesystem IO currently submits new > front end filesystem IO: the loop block device does this, and it > uses workqueues to defer submitted IO so that the lower IO > submission context can be directly controlled and made with the > front end filesystem IO submission path behaviours. 
> > The loop device does not use rescuer threads - that's not needed > when you have a queue based submission and just use the workqueues > to run the queues until they are empty. So the loop device uses > a standard unbound workqueue for it's IO submission path, and > then when the work is running it sets the task flags to say "this is > a nested IO worker thread" before it starts processing the > submission queue and submitting new front end filesystem IO: > > static void loop_process_work(struct loop_worker *worker, > struct list_head *cmd_list, struct loop_device *lo) > { > int orig_flags = current->flags; > struct loop_cmd *cmd; > > current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO; > > PF_LOCAL_THROTTLE prevents deadlocks in balance_dirty_pages() by > lifting the dirty ratio for this thread a little, hence giving it > priority over the upper filesystem. i.e. the upper filesystem will > throttle incoming writes first, then the back end IO submission > thread can still submit new front end IOs to the lower filesystem > and they won't block in balance_dirty_pages() because the lower > filesystem has a higher limit. hence the lower filesystem can always > drain the dirty pages on the upper filesystem, and the system won't > deadlock in balance_dirty_pages(). Perfect, thanks for the guidance. > Using WQ_MEM_RECLAIM context for IO submission does not address this > deadlock. > > The PF_MEMALLOC_NOIO flag prevents the lower filesystem IO from > causing memory reclaim to re-enter filesystems or IO devices and so > prevents deadlocks from occuring where IO that cleans pages is > waiting on IO to complete. > > Using WQ_MEM_RECLAIM context for IO submission does not address this > deadlock either. > > IOWs, doing front IO submission like this from the BDI flusher > thread is guaranteed to deadlock sooner or later, regardless of > whether WQ_MEM_RECLAIM is set or not on workqueues that are flushed > during IO submission. The WQ_MEM_RECLAIM warning is effectively your > canary in the coal mine. And the canary just carked it. Yes, I knew it as such, but I wanted to pin down why it died.. thanks for your help! FYI, this is the long-standing approach to how Trond dealt with this WQ_MEM_RECLAIM situation. I was just hoping to avoid having to introduce a dedicated workqueue for localio's needs (NeilBrown really disliked the fact it mentioned how it avoids blowing the stack, but we get that as a side-effect of needing it to avoid WQ_MEM_RECLAIM): https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/commit/?h=nfs-localio-for-6.11-testing&id=0cd25f7610df291827ad95023e03fdd4f93bbea7 (I revised the patch to add PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO and adjusted the header to reflect our discussion in this thread). > IMO, the only sane way to ensure this sort of nested "back-end page > cleaning submits front-end IO filesystem IO" mechanism works is to > do something similar to the loop device. You most definitely don't > want to be doing buffered IO (double caching is almost always bad) > and you want to be doing async direct IO so that the submission > thread is not waiting on completion before the next IO is > submitted. 
Yes, follow-on work is for me to revive the directio path for localio
that ultimately wasn't pursued (or properly wired up) because it
creates DIO alignment requirements on NFS client IO:
https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/commit/?h=nfs-localio-for-6.11-testing&id=f6c9f51fca819a8af595a4eb94811c1f90051eab

But underlying filesystems (like XFS) have the appropriate checks, we
just need to fail gracefully and disable NFS localio if the IO is
misaligned.

Mike
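As a rough illustration of that kind of graceful fallback, the underlying
file's direct IO constraints could be probed via the STATX_DIOALIGN
attributes the lower filesystem already reports. The helper below,
localio_can_use_dio(), is a hypothetical sketch and is not part of the
posted localio series; it only checks offset/length alignment (buffer
memory alignment via dio_mem_align is omitted for brevity).

#include <linux/fs.h>
#include <linux/fcntl.h>
#include <linux/stat.h>
#include <linux/align.h>

/* Hypothetical helper: can this write use direct IO on the lower file? */
static bool localio_can_use_dio(struct file *lower_file, loff_t pos,
				size_t len)
{
	struct kstat stat;

	if (vfs_getattr(&lower_file->f_path, &stat, STATX_DIOALIGN,
			AT_STATX_SYNC_AS_STAT))
		return false;

	/* Lower filesystem does not report DIO support for this file. */
	if (!(stat.result_mask & STATX_DIOALIGN) || !stat.dio_offset_align)
		return false;

	/* Misaligned IO: caller falls back to buffered IO (or plain nfsd). */
	return IS_ALIGNED(pos, stat.dio_offset_align) &&
	       IS_ALIGNED(len, stat.dio_offset_align);
}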
On Mon, 2024-07-01 at 10:13 -0400, Mike Snitzer wrote: > On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote: > > IMO, the only sane way to ensure this sort of nested "back-end page > > cleaning submits front-end IO filesystem IO" mechanism works is to > > do something similar to the loop device. You most definitely don't > > want to be doing buffered IO (double caching is almost always bad) > > and you want to be doing async direct IO so that the submission > > thread is not waiting on completion before the next IO is > > submitted. > > Yes, follow-on work is for me to revive the directio path for localio > that ultimately wasn't pursued (or properly wired up) because it > creates DIO alignment requirements on NFS client IO: > https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/commit/?h=nfs-localio-for-6.11-testing&id=f6c9f51fca819a8af595a4eb94811c1f90051eab > > But underlying filesystems (like XFS) have the appropriate checks, we > just need to fail gracefully and disable NFS localio if the IO is > misaligned. > Just a reminder to everyone that this is replacing a configuration which would in any case result in double caching, because without the localio change, it would end up being a loopback mount through the NFS server. Use of O_DIRECT to xfs would impose alignment requirements by the lower filesystem that are not being followed by the upper filesystem. A "remedy" where we fall back to disabling localio if there is no alignment won't fix anything. You will now have added the extra nfsd layer back in, and so have the extra networking overhead in addition to the memory management problems you were trying to solve with O_DIRECT.
On Tue, Jul 02, 2024 at 12:33:53PM +0000, Trond Myklebust wrote:
> On Mon, 2024-07-01 at 10:13 -0400, Mike Snitzer wrote:
> > On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote:
> > > IMO, the only sane way to ensure this sort of nested "back-end page
> > > cleaning submits front-end IO filesystem IO" mechanism works is to
> > > do something similar to the loop device. You most definitely don't
> > > want to be doing buffered IO (double caching is almost always bad)
> > > and you want to be doing async direct IO so that the submission
> > > thread is not waiting on completion before the next IO is
> > > submitted.
> >
> > Yes, follow-on work is for me to revive the directio path for localio
> > that ultimately wasn't pursued (or properly wired up) because it
> > creates DIO alignment requirements on NFS client IO:
> > https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/commit/?h=nfs-localio-for-6.11-testing&id=f6c9f51fca819a8af595a4eb94811c1f90051eab

I don't follow - this is page cache writeback. All the write IO from
the bdi flusher thread should be page aligned, right? So why does DIO
alignment matter here?

> > But underlying filesystems (like XFS) have the appropriate checks, we
> > just need to fail gracefully and disable NFS localio if the IO is
> > misaligned.
>
> Just a reminder to everyone that this is replacing a configuration
> which would in any case result in double caching, because without the
> localio change, it would end up being a loopback mount through the NFS
> server.

Sure. That doesn't mean double caching is desirable, and it's
something we should try to avoid if we're trying to design a fast
server bypass mechanism.

-Dave.
On Tue, 2024-07-02 at 23:04 +1000, Dave Chinner wrote: > On Tue, Jul 02, 2024 at 12:33:53PM +0000, Trond Myklebust wrote: > > On Mon, 2024-07-01 at 10:13 -0400, Mike Snitzer wrote: > > > On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote: > > > > IMO, the only sane way to ensure this sort of nested "back-end > > > > page > > > > cleaning submits front-end IO filesystem IO" mechanism works is > > > > to > > > > do something similar to the loop device. You most definitely > > > > don't > > > > want to be doing buffered IO (double caching is almost always > > > > bad) > > > > and you want to be doing async direct IO so that the submission > > > > thread is not waiting on completion before the next IO is > > > > submitted. > > > > > > Yes, follow-on work is for me to revive the directio path for > > > localio > > > that ultimately wasn't pursued (or properly wired up) because it > > > creates DIO alignment requirements on NFS client IO: > > > https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/commit/?h=nfs-localio-for-6.11-testing&id=f6c9f51fca819a8af595a4eb94811c1f90051eab > > I don't follow - this is page cache writeback. All the write IO from > the bdi flusher thread should be page aligned, right? So why does DIO > alignment matter here? > There is no guarantee in NFS that writes from the flusher thread are page aligned. If a page/folio is known to be up to date, we will usually align writes to the boundaries, but we won't guarantee to do a read-modify-write if that's not the case. Specifically, we will not do so if the file is open for write-only.
On Tue, Jul 02, 2024 at 02:00:46PM +0000, Trond Myklebust wrote: > On Tue, 2024-07-02 at 23:04 +1000, Dave Chinner wrote: > > On Tue, Jul 02, 2024 at 12:33:53PM +0000, Trond Myklebust wrote: > > > On Mon, 2024-07-01 at 10:13 -0400, Mike Snitzer wrote: > > > > On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote: > > > > > IMO, the only sane way to ensure this sort of nested "back-end > > > > > page > > > > > cleaning submits front-end IO filesystem IO" mechanism works is > > > > > to > > > > > do something similar to the loop device. You most definitely > > > > > don't > > > > > want to be doing buffered IO (double caching is almost always > > > > > bad) > > > > > and you want to be doing async direct IO so that the submission > > > > > thread is not waiting on completion before the next IO is > > > > > submitted. > > > > > > > > Yes, follow-on work is for me to revive the directio path for > > > > localio > > > > that ultimately wasn't pursued (or properly wired up) because it > > > > creates DIO alignment requirements on NFS client IO: > > > > https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/commit/?h=nfs-localio-for-6.11-testing&id=f6c9f51fca819a8af595a4eb94811c1f90051eab > > > > I don't follow - this is page cache writeback. All the write IO from > > the bdi flusher thread should be page aligned, right? So why does DIO > > alignment matter here? > > > > There is no guarantee in NFS that writes from the flusher thread are > page aligned. If a page/folio is known to be up to date, we will > usually align writes to the boundaries, but we won't guarantee to do a > read-modify-write if that's not the case. Specifically, we will not do > so if the file is open for write-only. So perhaps if the localio mechanism is enabled, it should behave like a local filesystem and do the page cache RMW cycle (because it doesn't involve a network round trip) to make sure all the buffered IO is page aligned. That means both buffered reads and writes are page aligned, and both can be done using async direct IO. If the client is doing aligned direct IO, then we can still do async direct IO to the underlying file. If it's not aligned, then the localio flusher thread can just do async buffered IO for those IOs instead. Let's not reinvent the wheel: we know how to do loopback filesystem IO very efficiently, and the whole point of localio is to do loopback filesystem IO very efficiently. -Dave.
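A minimal sketch of that submission policy, assuming the hypothetical
localio_can_use_dio() helper sketched above: aligned IO goes down as direct
IO so the lower page cache is bypassed, anything else falls back to buffered
IO from the submission worker. The function name is illustrative only, not
code from the localio series.

#include <linux/fs.h>
#include <linux/uio.h>

/* Hypothetical dispatch: direct IO when aligned, buffered otherwise. */
static ssize_t localio_submit_write(struct file *lower_file,
				    struct kiocb *iocb, struct iov_iter *from)
{
	if (localio_can_use_dio(lower_file, iocb->ki_pos,
				iov_iter_count(from)))
		iocb->ki_flags |= IOCB_DIRECT;	/* no double caching */
	else
		iocb->ki_flags &= ~IOCB_DIRECT;	/* buffered fallback */

	return call_write_iter(lower_file, iocb, from);
}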
On Sun, Jun 30, 2024 at 09:45:40PM -0700, Christoph Hellwig wrote: > On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote: > > Oh, that's nasty. > > Yes. > > > We now have to change every path in every filesystem that NFS can > > call that might defer work to a workqueue. > > Yes. That's why the kernel for a long time had the stance that using > network file systems / storage locally is entirely unsupported. > > If we want to change that we'll have a lot of work to do. Yep. These sorts of changes really need to be cc'd to linux-fsdevel, not kept private to the NFS lists. I wouldn't have known that NFS was going to do local IO to filesystems if it wasn't for this patch, and it's clear the approach being taken needs architectural review before we even get down into the nitty gritty details of the implementation. Mike, can you make sure that linux-fsdevel@vger.kernel.org is cc'd on all the localio work being posted so we can all keep track of it easily? -Dave.
On Mon, 01 Jul 2024, Christoph Hellwig wrote:
> On Mon, Jul 01, 2024 at 09:46:36AM +1000, Dave Chinner wrote:
> > Oh, that's nasty.
>
> Yes.
>
> > We now have to change every path in every filesystem that NFS can
> > call that might defer work to a workqueue.
>
> Yes. That's why the kernel for a long time had the stance that using
> network file systems / storage locally is entirely unsupported.

I know nothing of this stance. Do you have a reference?

I have put a modest amount of work into ensuring NFS to a server on
the same machine works, and last I checked it did - though I'm more
confident of NFSv3 than NFSv4 because of the state manager thread.

Also /dev/loop can be backed by a file, and have a filesystem mounted
on it, and if that didn't work I'm sure we would have complaints.

> If we want to change that we'll have a lot of work to do.

What sort of work are you thinking of?

Thanks,
NeilBrown
On Wed, Jul 03, 2024 at 09:29:00PM +1000, NeilBrown wrote:
> I know nothing of this stance. Do you have a reference?

No particular one.

> I have put a modest amount of work into ensuring NFS to a server on
> the same machine works, and last I checked it did - though I'm more
> confident of NFSv3 than NFSv4 because of the state manager thread.

How do you propagate the NOFS flag (and NOIO for a loop device) to
the server and the workqueues run by the server and the file system
called by it? How do you ensure WQ_MEM_RECLAIM gets propagated to
all workqueues that could be called by the file system on the
server (the problem kicking off this discussion)?
On Wed, Jul 03, 2024 at 07:15:28AM -0700, Christoph Hellwig wrote:
> On Wed, Jul 03, 2024 at 09:29:00PM +1000, NeilBrown wrote:
> > I know nothing of this stance. Do you have a reference?
>
> No particular one.
>
> > I have put a modest amount of work into ensuring NFS to a server on
> > the same machine works, and last I checked it did - though I'm more
> > confident of NFSv3 than NFSv4 because of the state manager thread.
>
> How do you propagate the NOFS flag (and NOIO for a loop device) to
> the server and the workqueues run by the server and the file system
> called by it? How do you ensure WQ_MEM_RECLAIM gets propagated to
> all workqueues that could be called by the file system on the
> server (the problem kicking off this discussion)?

Don't forget PF_LOCAL_THROTTLE, too.

I note that nfsd_vfs_write() knows when it is doing local loopback
write IO and in that case sets PF_LOCAL_THROTTLE:

	if (test_bit(RQ_LOCAL, &rqstp->rq_flags) &&
	    !(exp_op_flags & EXPORT_OP_REMOTE_FS)) {
		/*
		 * We want throttling in balance_dirty_pages()
		 * and shrink_inactive_list() to only consider
		 * the backingdev we are writing to, so that nfs to
		 * localhost doesn't cause nfsd to lock up due to all
		 * the client's dirty pages or its congested queue.
		 */
		current->flags |= PF_LOCAL_THROTTLE;
		restore_flags = true;
	}

This also has an impact on memory reclaim congestion throttling
(i.e. it turns it off), which is also needed for loopback IO to
prevent it being throttled by reclaim because it gets congested
trying to reclaim all the dirty pages on the upper filesystem that
the IO thread is trying to clean...

However, I don't see it calling memalloc_nofs_save() there to prevent
memory reclaim recursion back into the upper NFS client filesystem. I
suspect this issue is hidden because filesystems like XFS hard code
GFP_NOFS context for page cache allocation to prevent NFSD loopback
IO from deadlocking. We've had to do that because, historically
speaking, there hasn't been a way for high level IO submitters to
indicate they need GFP_NOFS allocation context.

However, we have had the memalloc_nofs_save/restore() scoped API for
several years now, so it seems to me that nfsd should really be using
this rather than requiring the filesystem to always use GFP_NOFS
allocations to avoid loopback IO memory allocation deadlocks...

-Dave.
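As a rough sketch of that suggestion (illustrative only, not the actual nfsd
code), the scoped NOFS API would wrap the loopback write alongside the
PF_LOCAL_THROTTLE handling quoted above; the wrapper name is hypothetical.

#include <linux/sched.h>
#include <linux/sched/mm.h>
#include <linux/fs.h>
#include <linux/uio.h>

/* Hypothetical wrapper for a write issued on behalf of a local client. */
static ssize_t loopback_write(struct file *file, struct iov_iter *from,
			      loff_t *pos)
{
	unsigned int orig_flags = current->flags;
	unsigned int nofs_flags;
	ssize_t ret;

	/* Prefer this thread in balance_dirty_pages(), as nfsd does today. */
	current->flags |= PF_LOCAL_THROTTLE;

	/* Scoped NOFS: reclaim below here cannot recurse into filesystems. */
	nofs_flags = memalloc_nofs_save();

	ret = vfs_iter_write(file, from, pos, 0);

	memalloc_nofs_restore(nofs_flags);
	current_restore_flags(orig_flags, PF_LOCAL_THROTTLE);

	return ret;
}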
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 27e9f749c4c7..dbe6af00708b 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -574,7 +574,8 @@ xfs_init_mount_workqueues(
 		goto out_destroy_blockgc;
 
 	mp->m_sync_workqueue = alloc_workqueue("xfs-sync/%s",
-			XFS_WQFLAGS(WQ_FREEZABLE), 0, mp->m_super->s_id);
+			XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM),
+			0, mp->m_super->s_id);
 	if (!mp->m_sync_workqueue)
 		goto out_destroy_inodegc;
 
The need for this fix was exposed while developing a new NFS feature
called "localio" which bypasses the network, if both the client and
server are on the same host, see:
https://git.kernel.org/pub/scm/linux/kernel/git/snitzer/linux.git/log/?h=nfs-localio-for-6.11

Because NFS's nfsiod_workqueue enables WQ_MEM_RECLAIM, writeback will
call into NFS and if localio is enabled the NFS client will call
directly into xfs_file_write_iter, this causes the following
backtrace when running xfstest generic/476 against NFS with localio:

workqueue: WQ_MEM_RECLAIM writeback:wb_workfn is flushing !WQ_MEM_RECLAIM xfs-sync/vdc:xfs_flush_inodes_worker
WARNING: CPU: 6 PID: 8525 at kernel/workqueue.c:3706 check_flush_dependency+0x2a4/0x328
Modules linked in:
CPU: 6 PID: 8525 Comm: kworker/u71:5 Not tainted 6.10.0-rc3-ktest-00032-g2b0a133403ab #18502
Hardware name: linux,dummy-virt (DT)
Workqueue: writeback wb_workfn (flush-0:33)
pstate: 400010c5 (nZcv daIF -PAN -UAO -TCO -DIT +SSBS BTYPE=--)
pc : check_flush_dependency+0x2a4/0x328
lr : check_flush_dependency+0x2a4/0x328
sp : ffff0000c5f06bb0
x29: ffff0000c5f06bb0 x28: ffff0000c998a908 x27: 1fffe00019331521
x26: ffff0000d0620900 x25: ffff0000c5f06ca0 x24: ffff8000828848c0
x23: 1fffe00018be0d8e x22: ffff0000c1210000 x21: ffff0000c75fde00
x20: ffff800080bfd258 x19: ffff0000cad63400 x18: ffff0000cd3a4810
x17: 0000000000000000 x16: 0000000000000000 x15: ffff800080508d98
x14: 0000000000000000 x13: 204d49414c434552 x12: 1fffe0001b6eeab2
x11: ffff60001b6eeab2 x10: dfff800000000000 x9 : ffff60001b6eeab3
x8 : 0000000000000001 x7 : 00009fffe491154e x6 : ffff0000db775593
x5 : ffff0000db775590 x4 : ffff0000db775590 x3 : 0000000000000000
x2 : 0000000000000027 x1 : ffff600018be0d62 x0 : dfff800000000000
Call trace:
 check_flush_dependency+0x2a4/0x328
 __flush_work+0x184/0x5c8
 flush_work+0x18/0x28
 xfs_flush_inodes+0x68/0x88
 xfs_file_buffered_write+0x128/0x6f0
 xfs_file_write_iter+0x358/0x448
 nfs_local_doio+0x854/0x1568
 nfs_initiate_pgio+0x214/0x418
 nfs_generic_pg_pgios+0x304/0x480
 nfs_pageio_doio+0xe8/0x240
 nfs_pageio_complete+0x160/0x480
 nfs_writepages+0x300/0x4f0
 do_writepages+0x12c/0x4a0
 __writeback_single_inode+0xd4/0xa68
 writeback_sb_inodes+0x470/0xcb0
 __writeback_inodes_wb+0xb0/0x1d0
 wb_writeback+0x594/0x808
 wb_workfn+0x5e8/0x9e0
 process_scheduled_works+0x53c/0xd90
 worker_thread+0x370/0x8c8
 kthread+0x258/0x2e8
 ret_from_fork+0x10/0x20

Fix this by enabling WQ_MEM_RECLAIM on XFS's m_sync_workqueue.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
---
 fs/xfs/xfs_super.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

[v2: dropped RFC, this fixes xfstests generic/476, resubmitting with
more feeling]