
[1/1] alloc_tag: mark pages reserved during CMA activation as not tagged

Message ID 20240812184455.86580-1-surenb@google.com (mailing list archive)
State New
Series [1/1] alloc_tag: mark pages reserved during CMA activation as not tagged

Commit Message

Suren Baghdasaryan Aug. 12, 2024, 6:44 p.m. UTC
During CMA activation, pages in the CMA area are prepared and then freed
without being allocated. This triggers warnings when the memory allocation
debug config (CONFIG_MEM_ALLOC_PROFILING_DEBUG) is enabled. Fix this
by marking these pages as not tagged before freeing them.
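
For context, a minimal sketch of the pattern the fix applies; the helper
names are taken from the diff below, while the comments describing their
semantics are an interpretation rather than anything stated in the patch:

	/* Sketch only: mirrors the hunk added to init_cma_reserved_pageblock(). */
	if (mem_alloc_profiling_enabled()) {	/* is allocation profiling active? */
		/* Look up the codetag reference attached to this page, if any. */
		union codetag_ref *ref = get_page_tag_ref(page);

		if (ref) {
			/* Mark the reference as intentionally empty so the
			 * CONFIG_MEM_ALLOC_PROFILING_DEBUG checks do not warn
			 * about pages being freed without having been allocated. */
			set_codetag_empty(ref);
			put_page_tag_ref(ref);	/* paired with get_page_tag_ref() above */
		}
	}
	/* The reserved pages can now be handed to the buddy allocator. */
	__free_pages(page, pageblock_order);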

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/mm_init.c | 10 ++++++++++
 1 file changed, 10 insertions(+)


base-commit: d74da846046aeec9333e802f5918bd3261fb5509

Comments

Suren Baghdasaryan Aug. 12, 2024, 7:26 p.m. UTC | #1
On Mon, Aug 12, 2024 at 12:13 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Mon, Aug 12, 2024 at 12:11 PM Suren Baghdasaryan <surenb@google.com> wrote:
> >
> > On Mon, Aug 12, 2024 at 11:44 AM Suren Baghdasaryan <surenb@google.com> wrote:
> > >
> > > During CMA activation, pages in the CMA area are prepared and then freed
> > > without being allocated. This triggers warnings when the memory allocation
> > > debug config (CONFIG_MEM_ALLOC_PROFILING_DEBUG) is enabled. Fix this
> > > by marking these pages as not tagged before freeing them.
> >
> > This should also have:
> >
> > Fixes: d224eb0287fb ("codetag: debug: mark codetags for reserved pages as empty")
>
> And Cc: stable@vger.kernel.org # v6.10
>
> Let me post v2 with these corrections...

v2 with corrections posted at
https://lore.kernel.org/all/20240812192428.151825-1-surenb@google.com/

>
> >
> > >
> > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > ---
> > >  mm/mm_init.c | 10 ++++++++++
> > >  1 file changed, 10 insertions(+)
> > >
> > > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > > index 75c3bd42799b..ec9324653ad9 100644
> > > --- a/mm/mm_init.c
> > > +++ b/mm/mm_init.c
> > > @@ -2245,6 +2245,16 @@ void __init init_cma_reserved_pageblock(struct page *page)
> > >
> > >         set_pageblock_migratetype(page, MIGRATE_CMA);
> > >         set_page_refcounted(page);
> > > +
> > > +       /* pages were reserved and not allocated */
> > > +       if (mem_alloc_profiling_enabled()) {
> > > +               union codetag_ref *ref = get_page_tag_ref(page);
> > > +
> > > +               if (ref) {
> > > +                       set_codetag_empty(ref);
> > > +                       put_page_tag_ref(ref);
> > > +               }
> > > +       }
> > >         __free_pages(page, pageblock_order);
> > >
> > >         adjust_managed_page_count(page, pageblock_nr_pages);
> > >
> > > base-commit: d74da846046aeec9333e802f5918bd3261fb5509
> > > --
> > > 2.46.0.76.ge559c4bf1a-goog
> > >

Patch

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 75c3bd42799b..ec9324653ad9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2245,6 +2245,16 @@ void __init init_cma_reserved_pageblock(struct page *page)
 
 	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
+
+	/* pages were reserved and not allocated */
+	if (mem_alloc_profiling_enabled()) {
+		union codetag_ref *ref = get_page_tag_ref(page);
+
+		if (ref) {
+			set_codetag_empty(ref);
+			put_page_tag_ref(ref);
+		}
+	}
 	__free_pages(page, pageblock_order);
 
 	adjust_managed_page_count(page, pageblock_nr_pages);