
[1/1] mm: handle profiling for fake memory allocations during compaction

Message ID 20240614230504.3849136-1-surenb@google.com (mailing list archive)
State New
Series [1/1] mm: handle profiling for fake memory allocations during compaction

Commit Message

Suren Baghdasaryan June 14, 2024, 11:05 p.m. UTC
During compaction, isolated free pages are marked allocated so that they
can be split and/or freed. For that, post_alloc_hook() is used inside
split_map_pages() and release_free_list(). split_map_pages() marks free
pages allocated, splits the pages and then lets alloc_contig_range_noprof()
free those pages. release_free_list() marks free pages and immediately
frees them. This usage of post_alloc_hook() affects memory allocation
profiling because these functions might not be called from an instrumented
allocator; therefore current->alloc_tag is NULL and, when debugging is
enabled (CONFIG_MEM_ALLOC_PROFILING_DEBUG=y), that causes warnings.
To avoid that, wrap such post_alloc_hook() calls in an instrumented
function which acts as the allocator that gets charged for these
fake allocations. Note that these allocations are short-lived, existing only
until they are freed, so the associated counters should usually read 0.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/compaction.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)


base-commit: c286c21ff94252f778515b21b6bebe749454a852

Comments

Andrew Morton June 15, 2024, 1:19 a.m. UTC | #1
On Fri, 14 Jun 2024 16:05:04 -0700 Suren Baghdasaryan <surenb@google.com> wrote:

> During compaction isolated free pages are marked allocated so that they
> can be split and/or freed. For that, post_alloc_hook() is used inside
> split_map_pages() and release_free_list(). split_map_pages() marks free
> pages allocated, splits the pages and then lets alloc_contig_range_noprof()
> free those pages. release_free_list() marks free pages and immediately
> frees them. This usage of post_alloc_hook() affect memory allocation
> profiling because these functions might not be called from an instrumented
> allocator, therefore current->alloc_tag is NULL and when debugging is
> enabled (CONFIG_MEM_ALLOC_PROFILING_DEBUG=y) that causes warnings.

It would be helpful to quote the warnings for the changelog.  And a
Reported-by:/Closes: if appropriate.

I'm assuming we want this in 6.10-rcX?

Please help in identifying the Fixes:, for anyone who might be
backporting allocation profiling.
Suren Baghdasaryan June 15, 2024, 3:50 a.m. UTC | #2
On Fri, Jun 14, 2024 at 6:19 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Fri, 14 Jun 2024 16:05:04 -0700 Suren Baghdasaryan <surenb@google.com> wrote:
>
> > During compaction isolated free pages are marked allocated so that they
> > can be split and/or freed. For that, post_alloc_hook() is used inside
> > split_map_pages() and release_free_list(). split_map_pages() marks free
> > pages allocated, splits the pages and then lets alloc_contig_range_noprof()
> > free those pages. release_free_list() marks free pages and immediately
> > frees them. This usage of post_alloc_hook() affect memory allocation
> > profiling because these functions might not be called from an instrumented
> > allocator, therefore current->alloc_tag is NULL and when debugging is
> > enabled (CONFIG_MEM_ALLOC_PROFILING_DEBUG=y) that causes warnings.
>
> It would be helpful to quote the warnings for the changelog.  And a
> Reported-by:/Closes: if appropriate.

This was not really reported anywhere, but if someone hit this
condition (it requires compaction to be running), the warning
would be generated.

>
> I'm assuming we want this in 6.10-rcX?

Yes please. Otherwise someone will report that they are getting this
warning when the system is under memory pressure and
CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled.

>
> Please help in identifying the Fixes:, for anyone who might be
> backporting allocation profiling.

I think we can mark it as

Fixes: b951aaff5035 ("mm: enable page allocation tagging")

but it's really a separate patch which covers a corner case.
Please let me know if you want me to send a v2 with this tag added.
Thanks,
Suren.


Vlastimil Babka June 17, 2024, 8:33 a.m. UTC | #3
On 6/15/24 1:05 AM, Suren Baghdasaryan wrote:
> During compaction isolated free pages are marked allocated so that they
> can be split and/or freed. For that, post_alloc_hook() is used inside
> split_map_pages() and release_free_list(). split_map_pages() marks free
> pages allocated, splits the pages and then lets alloc_contig_range_noprof()
> free those pages. release_free_list() marks free pages and immediately

Well, in the case of split_map_pages() only some of them end up freed; most
should be used as migration targets. But we move the tags from the source
page during migration and unaccount the ones from the target (i.e. from the
instrumented post_alloc_hook() after this patch), right? So it should be OK,
just the description here is incomplete.

> frees them. This usage of post_alloc_hook() affect memory allocation
> profiling because these functions might not be called from an instrumented
> allocator, therefore current->alloc_tag is NULL and when debugging is
> enabled (CONFIG_MEM_ALLOC_PROFILING_DEBUG=y) that causes warnings.
> To avoid that, wrap such post_alloc_hook() calls into an instrumented
> function which acts as an allocator which will be charged for these
> fake allocations. Note that these allocations are very short lived until
> they are freed, therefore the associated counters should usually read 0.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/compaction.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index e731d45befc7..739b1bf3d637 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -79,6 +79,13 @@ static inline bool is_via_compact_memory(int order) { return false; }
>  #define COMPACTION_HPAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
>  #endif
>  
> +static struct page *mark_allocated_noprof(struct page *page, unsigned int order, gfp_t gfp_flags)
> +{
> +	post_alloc_hook(page, order, __GFP_MOVABLE);
> +	return page;
> +}
> +#define mark_allocated(...)	alloc_hooks(mark_allocated_noprof(__VA_ARGS__))
> +
>  static void split_map_pages(struct list_head *freepages)
>  {
>  	unsigned int i, order;
> @@ -93,7 +100,7 @@ static void split_map_pages(struct list_head *freepages)
>  
>  			nr_pages = 1 << order;
>  
> -			post_alloc_hook(page, order, __GFP_MOVABLE);
> +			mark_allocated(page, order, __GFP_MOVABLE);
>  			if (order)
>  				split_page(page, order);
>  
> @@ -122,7 +129,7 @@ static unsigned long release_free_list(struct list_head *freepages)
>  			 * Convert free pages into post allocation pages, so
>  			 * that we can free them via __free_page.
>  			 */
> -			post_alloc_hook(page, order, __GFP_MOVABLE);
> +			mark_allocated(page, order, __GFP_MOVABLE);
>  			__free_pages(page, order);
>  			if (pfn > high_pfn)
>  				high_pfn = pfn;
> 
> base-commit: c286c21ff94252f778515b21b6bebe749454a852
Suren Baghdasaryan June 30, 2024, 7:17 p.m. UTC | #4
On Mon, Jun 17, 2024 at 1:33 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 6/15/24 1:05 AM, Suren Baghdasaryan wrote:
> > During compaction isolated free pages are marked allocated so that they
> > can be split and/or freed. For that, post_alloc_hook() is used inside
> > split_map_pages() and release_free_list(). split_map_pages() marks free
> > pages allocated, splits the pages and then lets alloc_contig_range_noprof()
> > free those pages. release_free_list() marks free pages and immediately
>
> Well in case of split_map_pages() only some of them end up freed, but most
> should be used as migration targets. But we move the tags from the source
> page during migration and unaccount the ones from the target (i.e. from the
> instrumented post_alloc_hook() after this patch), right? So it should be ok,
> just the description here is incomplete.

Sorry for the delay in replying, Vlastimil.
Yes, you are correct. Some of these pages are not immediately freed
but are migrated, and during migration the destination gets charged for
them. As a result, these new counters should still read 0 most of the
time, except for some intermediate states.
I can amend the description if this is considered important.
Thanks,
Suren.

Vlastimil Babka July 2, 2024, 9:32 a.m. UTC | #5
On 6/30/24 9:17 PM, Suren Baghdasaryan wrote:
> On Mon, Jun 17, 2024 at 1:33 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 6/15/24 1:05 AM, Suren Baghdasaryan wrote:
>> > During compaction isolated free pages are marked allocated so that they
>> > can be split and/or freed. For that, post_alloc_hook() is used inside
>> > split_map_pages() and release_free_list(). split_map_pages() marks free
>> > pages allocated, splits the pages and then lets alloc_contig_range_noprof()
>> > free those pages. release_free_list() marks free pages and immediately
>>
>> Well in case of split_map_pages() only some of them end up freed, but most
>> should be used as migration targets. But we move the tags from the source
>> page during migration and unaccount the ones from the target (i.e. from the
>> instrumented post_alloc_hook() after this patch), right? So it should be ok,
>> just the description here is incomplete.
> 
> Sorry for the delay with replying, Vlastimil.
> Yes, you are correct. Some of these pages are not immediately freed
> but migrated and during migration the destination gets charged for
> them. As a result these new counters should still read 0 most of the
> time except for some intermediate states.
> I can amend the description if this is considered important.

The fix was merged to mainline already.
Suren Baghdasaryan July 2, 2024, 3:17 p.m. UTC | #6
On Tue, Jul 2, 2024 at 9:32 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 6/30/24 9:17 PM, Suren Baghdasaryan wrote:
> > On Mon, Jun 17, 2024 at 1:33 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> >>
> >> On 6/15/24 1:05 AM, Suren Baghdasaryan wrote:
> >> > During compaction isolated free pages are marked allocated so that they
> >> > can be split and/or freed. For that, post_alloc_hook() is used inside
> >> > split_map_pages() and release_free_list(). split_map_pages() marks free
> >> > pages allocated, splits the pages and then lets alloc_contig_range_noprof()
> >> > free those pages. release_free_list() marks free pages and immediately
> >>
> >> Well in case of split_map_pages() only some of them end up freed, but most
> >> should be used as migration targets. But we move the tags from the source
> >> page during migration and unaccount the ones from the target (i.e. from the
> >> instrumented post_alloc_hook() after this patch), right? So it should be ok,
> >> just the description here is incomplete.
> >
> > Sorry for the delay with replying, Vlastimil.
> > Yes, you are correct. Some of these pages are not immediately freed
> > but migrated and during migration the destination gets charged for
> > them. As a result these new counters should still read 0 most of the
> > time except for some intermediate states.
> > I can amend the description if this is considered important.
>
> The fix was merged to mainline already.

Oh, didn't realize that. Ok, will have to keep it as is then.


Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..739b1bf3d637 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -79,6 +79,13 @@ static inline bool is_via_compact_memory(int order) { return false; }
 #define COMPACTION_HPAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
 #endif
 
+static struct page *mark_allocated_noprof(struct page *page, unsigned int order, gfp_t gfp_flags)
+{
+	post_alloc_hook(page, order, __GFP_MOVABLE);
+	return page;
+}
+#define mark_allocated(...)	alloc_hooks(mark_allocated_noprof(__VA_ARGS__))
+
 static void split_map_pages(struct list_head *freepages)
 {
 	unsigned int i, order;
@@ -93,7 +100,7 @@ static void split_map_pages(struct list_head *freepages)
 
 			nr_pages = 1 << order;
 
-			post_alloc_hook(page, order, __GFP_MOVABLE);
+			mark_allocated(page, order, __GFP_MOVABLE);
 			if (order)
 				split_page(page, order);
 
@@ -122,7 +129,7 @@ static unsigned long release_free_list(struct list_head *freepages)
 			 * Convert free pages into post allocation pages, so
 			 * that we can free them via __free_page.
 			 */
-			post_alloc_hook(page, order, __GFP_MOVABLE);
+			mark_allocated(page, order, __GFP_MOVABLE);
 			__free_pages(page, order);
 			if (pfn > high_pfn)
 				high_pfn = pfn;