Message ID | 20180524114341.1101-1-mhocko@kernel.org (mailing list archive) |
---|---|
State | New, archived |
On Thu, May 24, 2018 at 4:43 AM, Michal Hocko <mhocko@kernel.org> wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> Although the api is documented in the source code, Ted has pointed out
> that there is no mention in the core-api Documentation and there are
> people looking there to find answers how to use a specific API.
>
> Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
> Cc: David Sterba <dsterba@suse.cz>
> Requested-by: "Theodore Y. Ts'o" <tytso@mit.edu>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>
> Hi Jonathan,
> Ted has proposed this at LSFMM and then we discussed that briefly on the
> mailing list [1]. I received some useful feedback from Darrick and Dave
> which has been (hopefully) integrated. Then the thing fell off my radar
> and I am rediscovering it now when doing some cleanup. Could you take the
> patch please?
>
> [1] http://lkml.kernel.org/r/20180424183536.GF30619@thunk.org
>
>  .../core-api/gfp_mask-from-fs-io.rst | 55 +++++++++++++++++++
>  1 file changed, 55 insertions(+)
>  create mode 100644 Documentation/core-api/gfp_mask-from-fs-io.rst
>
> diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
> new file mode 100644
> index 000000000000..e8b2678e959b
> --- /dev/null
> +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
> @@ -0,0 +1,55 @@
> +=================================
> +GFP masks used from FS/IO context
> +=================================
> +
> +:Date: Mapy, 2018
> +:Author: Michal Hocko <mhocko@kernel.org>
> +
> +Introduction
> +============
> +
> +Code paths in the filesystem and IO stacks must be careful when
> +allocating memory to prevent recursion deadlocks caused by direct
> +memory reclaim calling back into the FS or IO paths and blocking on
> +already held resources (e.g. locks - most commonly those used for the
> +transaction context).
> +
> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> +resp. __GFP_IO (note the later implies clearing the first as well) in

Is resp. == respectively? Why not use the full word (here and below)?

> +the gfp mask when calling an allocator. GFP_NOFS resp. GFP_NOIO can be
> +used as shortcut. It turned out though that above approach has led to
> +abuses when the restricted gfp mask is used "just in case" without a
> +deeper consideration which leads to problems because an excessive use
> +of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
> +reclaim issues.
> +
> +New API
> +========
> +
> +Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> +``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> +``memalloc_noio_restore`` which allow to mark a scope to be a critical
> +section from the memory reclaim recursion into FS/IO POV. Any allocation
> +from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> +mask so no memory allocation can recurse back in the FS/IO.
> +
> +FS/IO code then simply calls the appropriate save function right at the
> +layer where a lock taken from the reclaim context (e.g. shrinker) and
> +the corresponding restore function when the lock is released. All that
> +ideally along with an explanation what is the reclaim context for easier
> +maintenance.
> +
> +What about __vmalloc(GFP_NOFS)
> +==============================
> +
> +vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> +GFP_KERNEL allocations deep inside the allocator which are quite non-trivial
> +to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> +almost always a bug. The good news is that the NOFS/NOIO semantic can be
> +achieved by the scope api.
> +
> +In the ideal world, upper layers should already mark dangerous contexts
> +and so no special care is required and vmalloc should be called without
> +any problems. Sometimes if the context is not really clear or there are
> +layering violations then the recommended way around that is to wrap ``vmalloc``
> +by the scope API with a comment explaining the problem.
> --
> 2.17.0
>
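For reference, a minimal sketch (not from the patch) of the save/restore
pattern the "New API" section describes - ``struct foo_fs``, its lock and the
helper are hypothetical, only ``memalloc_nofs_save``/``memalloc_nofs_restore``
are the real API:

#include <linux/mutex.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Hypothetical filesystem state; fs->lock is also taken by the
 * filesystem's shrinker, i.e. from direct reclaim context. */
struct foo_fs {
	struct mutex lock;
	void *last_node;
};

static int foo_insert_node(struct foo_fs *fs)
{
	unsigned int nofs_flags;
	void *node;

	/*
	 * Any allocation below may enter direct reclaim, which may call
	 * back into this filesystem via the shrinker and block on
	 * fs->lock, so open a NOFS scope before taking the lock.
	 */
	nofs_flags = memalloc_nofs_save();
	mutex_lock(&fs->lock);

	/*
	 * Plain GFP_KERNEL is correct here: the scope implicitly drops
	 * __GFP_FS from this and any nested allocation.
	 */
	node = kmalloc(64, GFP_KERNEL);
	if (node)
		fs->last_node = node;

	mutex_unlock(&fs->lock);
	memalloc_nofs_restore(nofs_flags);

	return node ? 0 : -ENOMEM;
}

The same shape works for the NOIO variants; only the pair of functions
changes.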
On Thu 24-05-18 07:33:39, Shakeel Butt wrote:
> On Thu, May 24, 2018 at 4:43 AM, Michal Hocko <mhocko@kernel.org> wrote:
[...]
> > +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> > +resp. __GFP_IO (note the later implies clearing the first as well) in
>
> Is resp. == respectively? Why not use the full word (here and below)?

yes. Because I was lazy ;)
On 05/24/2018 04:43 AM, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> Although the api is documented in the source code, Ted has pointed out
> that there is no mention in the core-api Documentation and there are
> people looking there to find answers how to use a specific API.
>
> Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
> Cc: David Sterba <dsterba@suse.cz>
> Requested-by: "Theodore Y. Ts'o" <tytso@mit.edu>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>
> Hi Jonathan,
> Ted has proposed this at LSFMM and then we discussed that briefly on the
> mailing list [1]. I received some useful feedback from Darrick and Dave
> which has been (hopefully) integrated. Then the thing fell off my radar
> and I am rediscovering it now when doing some cleanup. Could you take the
> patch please?
>
> [1] http://lkml.kernel.org/r/20180424183536.GF30619@thunk.org
>
>  .../core-api/gfp_mask-from-fs-io.rst | 55 +++++++++++++++++++
>  1 file changed, 55 insertions(+)
>  create mode 100644 Documentation/core-api/gfp_mask-from-fs-io.rst
>
> diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
> new file mode 100644
> index 000000000000..e8b2678e959b
> --- /dev/null
> +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
> @@ -0,0 +1,55 @@
> +=================================
> +GFP masks used from FS/IO context
> +=================================
> +
> +:Date: Mapy, 2018
> +:Author: Michal Hocko <mhocko@kernel.org>
> +
> +Introduction
> +============
> +
> +Code paths in the filesystem and IO stacks must be careful when
> +allocating memory to prevent recursion deadlocks caused by direct
> +memory reclaim calling back into the FS or IO paths and blocking on
> +already held resources (e.g. locks - most commonly those used for the
> +transaction context).
> +
> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> +resp. __GFP_IO (note the later implies clearing the first as well) in

latter

> +the gfp mask when calling an allocator. GFP_NOFS resp. GFP_NOIO can be
> +used as shortcut. It turned out though that above approach has led to
> +abuses when the restricted gfp mask is used "just in case" without a
> +deeper consideration which leads to problems because an excessive use
> +of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
> +reclaim issues.
> +
> +New API
> +========
> +
> +Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> +``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> +``memalloc_noio_restore`` which allow to mark a scope to be a critical
> +section from the memory reclaim recursion into FS/IO POV. Any allocation

s/POV/point of view/ or whatever it is.

> +from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> +mask so no memory allocation can recurse back in the FS/IO.
> +
> +FS/IO code then simply calls the appropriate save function right at the
> +layer where a lock taken from the reclaim context (e.g. shrinker) and
> +the corresponding restore function when the lock is released. All that
> +ideally along with an explanation what is the reclaim context for easier
> +maintenance.
> +
> +What about __vmalloc(GFP_NOFS)
> +==============================
> +
> +vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> +GFP_KERNEL allocations deep inside the allocator which are quite non-trivial
> +to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> +almost always a bug. The good news is that the NOFS/NOIO semantic can be
> +achieved by the scope api.

I would prefer s/api/API/ throughout.

> +
> +In the ideal world, upper layers should already mark dangerous contexts
> +and so no special care is required and vmalloc should be called without
> +any problems. Sometimes if the context is not really clear or there are
> +layering violations then the recommended way around that is to wrap ``vmalloc``
> +by the scope API with a comment explaining the problem.
>
On Thu, 24 May 2018 13:43:41 +0200 Michal Hocko <mhocko@kernel.org> wrote:

> From: Michal Hocko <mhocko@suse.com>
>
> Although the api is documented in the source code, Ted has pointed out
> that there is no mention in the core-api Documentation and there are
> people looking there to find answers how to use a specific API.
>
> Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
> Cc: David Sterba <dsterba@suse.cz>
> Requested-by: "Theodore Y. Ts'o" <tytso@mit.edu>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>
> Hi Jonathan,
> Ted has proposed this at LSFMM and then we discussed that briefly on the
> mailing list [1]. I received some useful feedback from Darrick and Dave
> which has been (hopefully) integrated. Then the thing fell off my radar
> and I am rediscovering it now when doing some cleanup. Could you take the
> patch please?
>
> [1] http://lkml.kernel.org/r/20180424183536.GF30619@thunk.org
>
>  .../core-api/gfp_mask-from-fs-io.rst | 55 +++++++++++++++++++
>  1 file changed, 55 insertions(+)
>  create mode 100644 Documentation/core-api/gfp_mask-from-fs-io.rst

So you create the rst file, but don't add it in index.rst; that means
it won't be a part of the docs build and Sphinx will complain.

> diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
> new file mode 100644
> index 000000000000..e8b2678e959b
> --- /dev/null
> +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
> @@ -0,0 +1,55 @@
> +=================================
> +GFP masks used from FS/IO context
> +=================================
> +
> +:Date: Mapy, 2018

Ah...the wonderful month of Mapy....:)

> +:Author: Michal Hocko <mhocko@kernel.org>
> +
> +Introduction
> +============
> +
> +Code paths in the filesystem and IO stacks must be careful when
> +allocating memory to prevent recursion deadlocks caused by direct
> +memory reclaim calling back into the FS or IO paths and blocking on
> +already held resources (e.g. locks - most commonly those used for the
> +transaction context).
> +
> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> +resp. __GFP_IO (note the later implies clearing the first as well) in

"resp." is indeed a bit terse. Even spelled out as "respectively",
though, I'm not sure what the word is intended to mean here. Did you
mean "or"?

> +the gfp mask when calling an allocator. GFP_NOFS resp. GFP_NOIO can be

Here too.

> +used as shortcut. It turned out though that above approach has led to
> +abuses when the restricted gfp mask is used "just in case" without a
> +deeper consideration which leads to problems because an excessive use
> +of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
> +reclaim issues.
> +
> +New API
> +========
> +
> +Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> +``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> +``memalloc_noio_restore`` which allow to mark a scope to be a critical
> +section from the memory reclaim recursion into FS/IO POV. Any allocation

"from a filesystem or I/O point of view" ?

> +from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> +mask so no memory allocation can recurse back in the FS/IO.

Wouldn't it be nice if those functions had kerneldoc comments that
could be pulled in here! :)

> +FS/IO code then simply calls the appropriate save function right at the
> +layer where a lock taken from the reclaim context (e.g. shrinker) and

where a lock *is* taken ?

> +the corresponding restore function when the lock is released. All that
> +ideally along with an explanation what is the reclaim context for easier
> +maintenance.
> +
> +What about __vmalloc(GFP_NOFS)
> +==============================
> +
> +vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
> +GFP_KERNEL allocations deep inside the allocator which are quite non-trivial
> +to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
> +almost always a bug. The good news is that the NOFS/NOIO semantic can be
> +achieved by the scope api.

Agree with others on "API"

> +In the ideal world, upper layers should already mark dangerous contexts
> +and so no special care is required and vmalloc should be called without
> +any problems. Sometimes if the context is not really clear or there are
> +layering violations then the recommended way around that is to wrap ``vmalloc``
> +by the scope API with a comment explaining the problem.

Thanks,

jon
On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> Although the api is documented in the source code, Ted has pointed out
> that there is no mention in the core-api Documentation and there are
> people looking there to find answers how to use a specific API.
>
> Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
> Cc: David Sterba <dsterba@suse.cz>
> Requested-by: "Theodore Y. Ts'o" <tytso@mit.edu>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Yay, Documentation! :)

> ---
>
> Hi Jonathan,
> Ted has proposed this at LSFMM and then we discussed that briefly on the
> mailing list [1]. I received some useful feedback from Darrick and Dave
> which has been (hopefully) integrated. Then the thing fell off my radar
> and I am rediscovering it now when doing some cleanup. Could you take the
> patch please?
>
> [1] http://lkml.kernel.org/r/20180424183536.GF30619@thunk.org
>
>  .../core-api/gfp_mask-from-fs-io.rst | 55 +++++++++++++++++++
>  1 file changed, 55 insertions(+)
>  create mode 100644 Documentation/core-api/gfp_mask-from-fs-io.rst
>
> diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
> new file mode 100644
> index 000000000000..e8b2678e959b
> --- /dev/null
> +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
> @@ -0,0 +1,55 @@
> +=================================
> +GFP masks used from FS/IO context
> +=================================
> +
> +:Date: Mapy, 2018
> +:Author: Michal Hocko <mhocko@kernel.org>
> +
> +Introduction
> +============
> +
> +Code paths in the filesystem and IO stacks must be careful when
> +allocating memory to prevent recursion deadlocks caused by direct
> +memory reclaim calling back into the FS or IO paths and blocking on
> +already held resources (e.g. locks - most commonly those used for the
> +transaction context).
> +
> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> +resp. __GFP_IO (note the later implies clearing the first as well) in
> +the gfp mask when calling an allocator. GFP_NOFS resp. GFP_NOIO can be
> +used as shortcut. It turned out though that above approach has led to
> +abuses when the restricted gfp mask is used "just in case" without a
> +deeper consideration which leads to problems because an excessive use
> +of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
> +reclaim issues.
> +
> +New API
> +========
> +
> +Since 4.12 we do have a generic scope API for both NOFS and NOIO context
> +``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
> +``memalloc_noio_restore`` which allow to mark a scope to be a critical
> +section from the memory reclaim recursion into FS/IO POV. Any allocation
> +from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
> +mask so no memory allocation can recurse back in the FS/IO.
> +
> +FS/IO code then simply calls the appropriate save function right at the
> +layer where a lock taken from the reclaim context (e.g. shrinker) and
> +the corresponding restore function when the lock is released. All that
> +ideally along with an explanation what is the reclaim context for easier
> +maintenance.

This paragraph doesn't make much sense to me. I think you're trying
to say that we should call the appropriate save function "before
locks are taken that a reclaim context (e.g a shrinker) might
require access to."

I think it's also worth making a note about recursive/nested
save/restore stacking, because it's not clear from this description
that this is allowed and will work as long as inner save/restore
calls are fully nested inside outer save/restore contexts.

Cheers,

Dave.
On Fri, May 25, 2018 at 08:17:15AM +1000, Dave Chinner wrote:
> On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@suse.com>
> >
> > Although the api is documented in the source code, Ted has pointed out
> > that there is no mention in the core-api Documentation and there are
> > people looking there to find answers how to use a specific API.
> >
> > Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
> > Cc: David Sterba <dsterba@suse.cz>
> > Requested-by: "Theodore Y. Ts'o" <tytso@mit.edu>
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
>
> Yay, Documentation! :)

Indeed, many thanks!!!

	- Ted
On Thu 24-05-18 09:37:18, Randy Dunlap wrote:
> On 05/24/2018 04:43 AM, Michal Hocko wrote:
[...]
> > +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> > +resp. __GFP_IO (note the later implies clearing the first as well) in
>
> latter

?
No I really meant that clearing __GFP_IO implies __GFP_FS clearing
On Fri 25-05-18 08:17:15, Dave Chinner wrote:
> On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
[...]
> > +FS/IO code then simply calls the appropriate save function right at the
> > +layer where a lock taken from the reclaim context (e.g. shrinker) and
> > +the corresponding restore function when the lock is released. All that
> > +ideally along with an explanation what is the reclaim context for easier
> > +maintenance.
>
> This paragraph doesn't make much sense to me. I think you're trying
> to say that we should call the appropriate save function "before
> locks are taken that a reclaim context (e.g a shrinker) might
> require access to."
>
> I think it's also worth making a note about recursive/nested
> save/restore stacking, because it's not clear from this description
> that this is allowed and will work as long as inner save/restore
> calls are fully nested inside outer save/restore contexts.

Any better?

-FS/IO code then simply calls the appropriate save function right at the
-layer where a lock taken from the reclaim context (e.g. shrinker) and
-the corresponding restore function when the lock is released. All that
-ideally along with an explanation what is the reclaim context for easier
-maintenance.
+FS/IO code then simply calls the appropriate save function before any
+lock shared with the reclaim context is taken. The corresponding
+restore function when the lock is released. All that ideally along with
+an explanation what is the reclaim context for easier maintenance.
+
+Please note that the proper pairing of save/restore function allows nesting
+so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope.

 What about __vmalloc(GFP_NOFS)
 ==============================
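Since the nesting guarantee is the subtle part of the new paragraph, here is
a hypothetical sketch of what it promises - the function names are invented,
only the save/restore calls are the real API:

#include <linux/sched/mm.h>

/*
 * Each save() returns a snapshot of the current flags and each
 * restore() puts exactly that snapshot back, so scopes may nest as
 * long as the calls are properly paired.
 */
static void inner_io_path(void)
{
	unsigned int noio_flags = memalloc_noio_save();

	/*
	 * Allocations here are implicitly NOIO - and still NOFS when
	 * the caller has already opened a NOFS scope.
	 */

	memalloc_noio_restore(noio_flags);
	/* back to whatever scope the caller had */
}

static void outer_fs_path(void)
{
	unsigned int nofs_flags = memalloc_nofs_save();

	inner_io_path();	/* nested scope: perfectly fine */

	memalloc_nofs_restore(nofs_flags);
}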
On Fri, May 25, 2018 at 10:16:24AM +0200, Michal Hocko wrote:
> On Fri 25-05-18 08:17:15, Dave Chinner wrote:
> > On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
> [...]
> > > +FS/IO code then simply calls the appropriate save function right at the
> > > +layer where a lock taken from the reclaim context (e.g. shrinker) and
> > > +the corresponding restore function when the lock is released. All that
> > > +ideally along with an explanation what is the reclaim context for easier
> > > +maintenance.
> >
> > This paragraph doesn't make much sense to me. I think you're trying
> > to say that we should call the appropriate save function "before
> > locks are taken that a reclaim context (e.g a shrinker) might
> > require access to."
> >
> > I think it's also worth making a note about recursive/nested
> > save/restore stacking, because it's not clear from this description
> > that this is allowed and will work as long as inner save/restore
> > calls are fully nested inside outer save/restore contexts.
>
> Any better?
>
> -FS/IO code then simply calls the appropriate save function right at the
> -layer where a lock taken from the reclaim context (e.g. shrinker) and
> -the corresponding restore function when the lock is released. All that
> -ideally along with an explanation what is the reclaim context for easier
> -maintenance.
> +FS/IO code then simply calls the appropriate save function before any
> +lock shared with the reclaim context is taken. The corresponding
> +restore function when the lock is released. All that ideally along with

Maybe: "The corresponding restore function is called when the lock is
released"

> +an explanation what is the reclaim context for easier maintenance.
> +
> +Please note that the proper pairing of save/restore function allows nesting
> +so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope.

so it is safe to call memalloc_noio_save from an existing NOIO or NOFS
scope

> What about __vmalloc(GFP_NOFS)
> ==============================
> --
> Michal Hocko
> SUSE Labs
>
On Fri, May 25, 2018 at 10:16:24AM +0200, Michal Hocko wrote:
> On Fri 25-05-18 08:17:15, Dave Chinner wrote:
> > On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
> [...]
> > > +FS/IO code then simply calls the appropriate save function right at the
> > > +layer where a lock taken from the reclaim context (e.g. shrinker) and
> > > +the corresponding restore function when the lock is released. All that
> > > +ideally along with an explanation what is the reclaim context for easier
> > > +maintenance.
> >
> > This paragraph doesn't make much sense to me. I think you're trying
> > to say that we should call the appropriate save function "before
> > locks are taken that a reclaim context (e.g a shrinker) might
> > require access to."
> >
> > I think it's also worth making a note about recursive/nested
> > save/restore stacking, because it's not clear from this description
> > that this is allowed and will work as long as inner save/restore
> > calls are fully nested inside outer save/restore contexts.
>
> Any better?
>
> -FS/IO code then simply calls the appropriate save function right at the
> -layer where a lock taken from the reclaim context (e.g. shrinker) and
> -the corresponding restore function when the lock is released. All that
> -ideally along with an explanation what is the reclaim context for easier
> -maintenance.
> +FS/IO code then simply calls the appropriate save function before any
> +lock shared with the reclaim context is taken. The corresponding
> +restore function when the lock is released. All that ideally along with
> +an explanation what is the reclaim context for easier maintenance.
> +
> +Please note that the proper pairing of save/restore function allows nesting
> +so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope.

It's better, but the talk of this being necessary for locking makes
me cringe. XFS doesn't do it for locking reasons - it does it largely
for preventing transaction context nesting, which has all sorts of
problems that cause hangs (e.g. log space reservations can't be
filled) that aren't directly locking related.

i.e we should be talking about using these functions around contexts
where recursion back into the filesystem through reclaim is
problematic, not that "holding locks" is problematic. Locks can be
used as an example of a problematic context, but locks are not the
only recursion issue that require GFP_NOFS allocation contexts to
avoid.

Cheers,

Dave.
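Dave's transaction-context point can be made concrete with a simplified,
hypothetical sketch - ``foo_trans`` is invented and deliberately much simpler
than what XFS actually does, but the scope spans the transaction lifetime in
the same way:

#include <linux/sched/mm.h>
#include <linux/slab.h>

/*
 * Hypothetical transaction object. The NOFS scope spans the whole
 * transaction lifetime: if reclaim recursed into the filesystem from
 * here, a nested transaction could wait forever for log space that
 * only the completion of this one can free up.
 */
struct foo_trans {
	unsigned int nofs_flags;
	/* ... log reservation, dirty items ... */
};

static struct foo_trans *foo_trans_start(void)
{
	struct foo_trans *tp;

	tp = kzalloc(sizeof(*tp), GFP_NOFS);
	if (!tp)
		return NULL;

	tp->nofs_flags = memalloc_nofs_save();
	return tp;
}

static void foo_trans_commit(struct foo_trans *tp)
{
	/* ... write the transaction to the log ... */

	memalloc_nofs_restore(tp->nofs_flags);
	kfree(tp);
}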
On 25.05.2018 10:52, Michal Hocko wrote:
> On Thu 24-05-18 09:37:18, Randy Dunlap wrote:
>> On 05/24/2018 04:43 AM, Michal Hocko wrote:
> [...]
>>> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
>>> +resp. __GFP_IO (note the later implies clearing the first as well) in
>>
>> latter
>
> ?
> No I really meant that clearing __GFP_IO implies __GFP_FS clearing

Sorry to barge in like that, but Randy is right.

<NIT WARNING>

https://www.merriam-webster.com/dictionary/latter

" of, relating to, or being the second of two groups or things or the
last of several groups or things referred to

</NIT WARNING>
On 05/25/2018 09:52 AM, Michal Hocko wrote:
> On Thu 24-05-18 09:37:18, Randy Dunlap wrote:
>> On 05/24/2018 04:43 AM, Michal Hocko wrote:
> [...]
>>> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
>>> +resp. __GFP_IO (note the later implies clearing the first as well) in
>>
>> latter
>
> ?
> No I really meant that clearing __GFP_IO implies __GFP_FS clearing

In that case "latter" is the proper word AFAIK. You could also use
"former" instead of "first". Or maybe just repeat the flag names to
avoid confusion...
On Mon 28-05-18 10:21:00, Nikolay Borisov wrote:
>
> On 25.05.2018 10:52, Michal Hocko wrote:
> > On Thu 24-05-18 09:37:18, Randy Dunlap wrote:
> >> On 05/24/2018 04:43 AM, Michal Hocko wrote:
> > [...]
> >>> +The traditional way to avoid this deadlock problem is to clear __GFP_FS
> >>> +resp. __GFP_IO (note the later implies clearing the first as well) in
> >>
> >> latter
> >
> > ?
> > No I really meant that clearing __GFP_IO implies __GFP_FS clearing
>
> Sorry to barge in like that, but Randy is right.
>
> <NIT WARNING>
>
> https://www.merriam-webster.com/dictionary/latter
>
> " of, relating to, or being the second of two groups or things or the
> last of several groups or things referred to
>
> </NIT WARNING>

Fixed
diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
new file mode 100644
index 000000000000..e8b2678e959b
--- /dev/null
+++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
@@ -0,0 +1,55 @@
+=================================
+GFP masks used from FS/IO context
+=================================
+
+:Date: Mapy, 2018
+:Author: Michal Hocko <mhocko@kernel.org>
+
+Introduction
+============
+
+Code paths in the filesystem and IO stacks must be careful when
+allocating memory to prevent recursion deadlocks caused by direct
+memory reclaim calling back into the FS or IO paths and blocking on
+already held resources (e.g. locks - most commonly those used for the
+transaction context).
+
+The traditional way to avoid this deadlock problem is to clear __GFP_FS
+resp. __GFP_IO (note the later implies clearing the first as well) in
+the gfp mask when calling an allocator. GFP_NOFS resp. GFP_NOIO can be
+used as shortcut. It turned out though that above approach has led to
+abuses when the restricted gfp mask is used "just in case" without a
+deeper consideration which leads to problems because an excessive use
+of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
+reclaim issues.
+
+New API
+========
+
+Since 4.12 we do have a generic scope API for both NOFS and NOIO context
+``memalloc_nofs_save``, ``memalloc_nofs_restore`` resp. ``memalloc_noio_save``,
+``memalloc_noio_restore`` which allow to mark a scope to be a critical
+section from the memory reclaim recursion into FS/IO POV. Any allocation
+from that scope will inherently drop __GFP_FS resp. __GFP_IO from the given
+mask so no memory allocation can recurse back in the FS/IO.
+
+FS/IO code then simply calls the appropriate save function right at the
+layer where a lock taken from the reclaim context (e.g. shrinker) and
+the corresponding restore function when the lock is released. All that
+ideally along with an explanation what is the reclaim context for easier
+maintenance.
+
+What about __vmalloc(GFP_NOFS)
+==============================
+
+vmalloc doesn't support GFP_NOFS semantic because there are hardcoded
+GFP_KERNEL allocations deep inside the allocator which are quite non-trivial
+to fix up. That means that calling ``vmalloc`` with GFP_NOFS/GFP_NOIO is
+almost always a bug. The good news is that the NOFS/NOIO semantic can be
+achieved by the scope api.
+
+In the ideal world, upper layers should already mark dangerous contexts
+and so no special care is required and vmalloc should be called without
+any problems. Sometimes if the context is not really clear or there are
+layering violations then the recommended way around that is to wrap ``vmalloc``
+by the scope API with a comment explaining the problem.
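As a closing illustration of the last section, a hedged sketch of wrapping
``vmalloc`` in a scope - the caller and its context are hypothetical, while
``vmalloc`` and the save/restore calls are real kernel API:

#include <linux/sched/mm.h>
#include <linux/vmalloc.h>

/*
 * Instead of __vmalloc(size, GFP_NOFS), which the document calls
 * almost always a bug, open a scope around a plain vmalloc() - and
 * leave the comment explaining the problem, as the last paragraph
 * asks.
 */
static void *foo_vmalloc_nofs(unsigned long size)
{
	unsigned int nofs_flags;
	void *buf;

	/*
	 * Called with the (hypothetical) foo transaction context held,
	 * so memory reclaim must not recurse back into the filesystem.
	 */
	nofs_flags = memalloc_nofs_save();
	buf = vmalloc(size);	/* internal GFP_KERNEL allocations
				 * implicitly lose __GFP_FS here */
	memalloc_nofs_restore(nofs_flags);

	return buf;
}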