
[00/12] xfs: remove remaining kmem interfaces and GFP_NOFS usage

Message ID 20240115230113.4080105-1-david@fromorbit.com (mailing list archive)

Message

Dave Chinner Jan. 15, 2024, 10:59 p.m. UTC
This series does two things. Firstly it removes the remaining XFS
specific kernel memory allocation wrappers, converting everything to
using GFP flags directly. Secondly, it converts all the GFP_NOFS
flag usage to use the scoped memalloc_nofs_save() API instead of
direct calls with the GFP_NOFS.

The first part of the series (fs/xfs/kmem.[ch] removal) is straight
forward.  We've done lots of this stuff in the past leading up to
this point; this is just converting the final remaining usage to the
native kernel interface. The only down-side to this is that we end
up propagating __GFP_NOFAIL everywhere into the code. This is no big
deal for XFS - it's just formalising the fact that all our
allocations are __GFP_NOFAIL by default, except for the ones we
explicitly mark as able to fail. This may be a surprise to people
outside XFS, but we've been doing this for a couple of decades now
and the sky hasn't fallen yet.

The second part of the series is more involved - in most cases
GFP_NOFS is redundant because we are already in a scoped NOFS
context (e.g. transactions) so the conversion to GFP_KERNEL isn't a
huge issue.

However, there are some code paths where we have used GFP_NOFS to
prevent lockdep warnings because the code is called from both
GFP_KERNEL and GFP_NOFS contexts and so lockdep gets confused when
it has tracked code as GFP_NOFS and then sees it enter direct
reclaim, recurse into the filesystem and take fs locks from the
GFP_KERNEL caller. There are a couple of other lockdep false
positive paths that can occur that we've shut up with GFP_NOFS, too.
More recently, we've been using the __GFP_NOLOCKDEP flag to signal
this "lockdep gives false positives here" condition, so one of the
things this patchset does is convert all the GFP_NOFS calls in code
that can be run from both GFP_KERNEL and GFP_NOFS contexts, and/or
run both above and below reclaim, to GFP_KERNEL | __GFP_NOLOCKDEP.

This means that some allocations have gone from having KM_NOFS tags
to having GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL. There is an
increase in verbosity here, but the first step in cleaning all this
mess up is consistently annotating all the allocation sites with the
correct tags.

Later in the patchset, we start adding new scoped NOFS contexts to
cover cases where we really need NOFS but rely on code being called
to understand that it is actually in a NOFS context. An example of
this is intent recovery - allocating the intent structure occurs
outside transaction scope, but still needs to be NOFS scope because
of all the pending work already queued. The rest of the work is done
under transaction context, giving it NOFS context, but these initial
allocations aren't inside that scope. IOWs, the entire intent
recovery scope should really be covered by a single NOFS context.
The patch set ends up putting the entire second phase of recovery
(intents, unlinked list, reflink cleanup) under a single NOFS context
because we really don't want reclaim to operate on the filesystem
whilst we are performing these operations. Hence a single high level
NOFS scope is appropriate here.

The end result is that GFP_NOFS is completely gone from XFS,
replaced by correct annotations and more widely deployed scoped
allocation contexts. This passes fstests with lockdep, KASAN and
other debugging enabled without any regressions or new lockdep false
positives.

Comments, thoughts and ideas?

----

Version 1:
- based on v6.7 + linux-xfs/for-next

Comments

Pankaj Raghav (Samsung) March 25, 2024, 5:46 p.m. UTC | #1
> 
> The first part of the series (fs/xfs/kmem.[ch] removal) is straight
> forward.  We've done lots of this stuff in the past leading up to
> this point; this is just converting the final remaining usage to the
> native kernel interface. The only down-side to this is that we end
> up propagating __GFP_NOFAIL everywhere into the code. This is no big
> deal for XFS - it's just formalising the fact that all our
> allocations are __GFP_NOFAIL by default, except for the ones we
> explicitly mark as able to fail. This may be a surprise to people
> outside XFS, but we've been doing this for a couple of decades now
> and the sky hasn't fallen yet.

Definitely a surprise to me. :)

I rebased my LBS patches with these changes and generic/476 started to
break in page alloc[1]:

static inline
struct page *rmqueue(struct zone *preferred_zone,
			struct zone *zone, unsigned int order,
			gfp_t gfp_flags, unsigned int alloc_flags,
			int migratetype)
{
	struct page *page;

	/*
	 * We most definitely don't want callers attempting to
	 * allocate greater than order-1 page units with __GFP_NOFAIL.
	 */
	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
...

The reason for this is the call from xfs_attr_leaf.c to allocate memory
with attr->geo->blksize, which is set to 1 FSB. As 1 FSB can correspond
to order > 1 in LBS, this WARN_ON_ONCE is triggered.

This was not an issue before as xfs/kmem.c retried manually in a loop
without passing the __GFP_NOFAIL flag.

As not all kmalloc calls in xfs_attr_leaf.c handle ENOMEM
errors, what would be the correct approach for LBS configurations?

One possible idea is to use __GFP_RETRY_MAYFAIL for LBS configurations,
as it would resemble the way things worked before.

Let me know your thoughts.
--
Pankaj
[1] https://elixir.bootlin.com/linux/v6.9-rc1/source/mm/page_alloc.c#L2902
Dave Chinner April 1, 2024, 9:30 p.m. UTC | #2
On Mon, Mar 25, 2024 at 06:46:29PM +0100, Pankaj Raghav (Samsung) wrote:
> > 
> > The first part of the series (fs/xfs/kmem.[ch] removal) is straight
> > forward.  We've done lots of this stuff in the past leading up to
> > the point; this is just converting the final remaining usage to the
> > native kernel interface. The only down-side to this is that we end
> > up propagating __GFP_NOFAIL everywhere into the code. This is no big
> > deal for XFS - it's just formalising the fact that all our
> > allocations are __GFP_NOFAIL by default, except for the ones we
> > explicitly mark as able to fail. This may be a surprise to people
> > outside XFS, but we've been doing this for a couple of decades now
> > and the sky hasn't fallen yet.
> 
> Definitely a surprise to me. :)
> 
> I rebased my LBS patches with these changes and generic/476 started to
> break in page alloc[1]:
> 
> static inline
> struct page *rmqueue(struct zone *preferred_zone,
> 			struct zone *zone, unsigned int order,
> 			gfp_t gfp_flags, unsigned int alloc_flags,
> 			int migratetype)
> {
> 	struct page *page;
> 
> 	/*
> 	 * We most definitely don't want callers attempting to
> 	 * allocate greater than order-1 page units with __GFP_NOFAIL.
> 	 */
> 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
> ...

Yeah, that warning needs to go. It's just unnecessary noise at this
point in time - at minimum it should be gated on __GFP_NOWARN.

> The reason for this is the call from xfs_attr_leaf.c to allocate memory
> with attr->geo->blksize, which is set to 1 FSB. As 1 FSB can correspond
> to order > 1 in LBS, this WARN_ON_ONCE is triggered.
> 
> This was not an issue before as xfs/kmem.c retried manually in a loop
> without passing the __GFP_NOFAIL flag.

Right, we've been doing this sort of "no fail" high order kmalloc
thing for a couple of decades in XFS, explicitly to avoid arbitrary
noise like this warning.....

> As not all kmalloc calls in xfs_attr_leaf.c handle ENOMEM
> errors, what would be the correct approach for LBS configurations?

Use kvmalloc().

-Dave.