
[v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors

Message ID 20230222195247.791227-1-peterx@redhat.com (mailing list archive)
State New
Headers show
Series [v2] mm/khugepaged: alloc_charge_hpage() take care of mem charge errors | expand

Commit Message

Peter Xu Feb. 22, 2023, 7:52 p.m. UTC
If the memory charge fails, instead of returning the hpage along with an
error, let the function clean up the folio properly, which is normally what
a function should do in this case: either return successfully, or return
with no side effects from a partial run along with the indicated error.

This also avoids the caller invoking mem_cgroup_uncharge() unnecessarily on
either the anon or the shmem path (even though it is safe to do so).

Cc: Yang Shi <shy828301@gmail.com>
Reviewed-by: David Stevens <stevensd@chromium.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
v1->v2:
- Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
---
 mm/khugepaged.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)
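
For context, a minimal caller-side sketch (illustrative only, loosely modeled
on collapse_huge_page(); the function name below is hypothetical and this is
not the exact upstream code) of what the new contract buys: on
SCAN_CGROUP_CHARGE_FAIL the caller can simply propagate the error, because
alloc_charge_hpage() has already dropped the folio and reset *hpage to NULL.

/*
 * Hypothetical caller-side sketch -- not the exact upstream code.  It only
 * illustrates the error-handling contract of alloc_charge_hpage() after
 * this patch.
 */
static int collapse_example(struct mm_struct *mm, struct collapse_control *cc)
{
	struct page *hpage;
	int result;

	result = alloc_charge_hpage(&hpage, mm, cc);
	if (result != SCAN_SUCCEED) {
		/*
		 * Nothing to uncharge or free here: on failure the folio has
		 * already been released and *hpage set to NULL.
		 */
		return result;
	}

	/* ... proceed with the collapse using hpage ... */
	return result;
}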

Comments

Yang Shi Feb. 22, 2023, 10:53 p.m. UTC | #1
On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@redhat.com> wrote:
>
> If the memory charge fails, instead of returning the hpage along with an
> error, let the function clean up the folio properly, which is normally what
> a function should do in this case: either return successfully, or return
> with no side effects from a partial run along with the indicated error.
>
> This also avoids the caller invoking mem_cgroup_uncharge() unnecessarily on
> either the anon or the shmem path (even though it is safe to do so).

Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>

>
> Cc: Yang Shi <shy828301@gmail.com>
> Reviewed-by: David Stevens <stevensd@chromium.org>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> v1->v2:
> - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> ---
>  mm/khugepaged.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 8dbc39896811..941d1c7ea910 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
>         gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
>                      GFP_TRANSHUGE);
>         int node = hpage_collapse_find_target_node(cc);
> +       struct folio *folio;
>
>         if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
>                 return SCAN_ALLOC_HUGE_PAGE_FAIL;
> -       if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> +
> +       folio = page_folio(*hpage);
> +       if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> +               folio_put(folio);
> +               *hpage = NULL;
>                 return SCAN_CGROUP_CHARGE_FAIL;
> +       }
>         count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> +
>         return SCAN_SUCCEED;
>  }
>
> --
> 2.39.1
>
Zach O'Keefe March 2, 2023, 11:21 p.m. UTC | #2
On Feb 22 14:53, Yang Shi wrote:
> On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > If the memory charge fails, instead of returning the hpage along with an
> > error, let the function clean up the folio properly, which is normally what
> > a function should do in this case: either return successfully, or return
> > with no side effects from a partial run along with the indicated error.
> >
> > This also avoids the caller invoking mem_cgroup_uncharge() unnecessarily on
> > either the anon or the shmem path (even though it is safe to do so).
> 
> Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>
> 
> >
> > Cc: Yang Shi <shy828301@gmail.com>
> > Reviewed-by: David Stevens <stevensd@chromium.org>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> > v1->v2:
> > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > ---
> >  mm/khugepaged.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 8dbc39896811..941d1c7ea910 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> >         gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> >                      GFP_TRANSHUGE);
> >         int node = hpage_collapse_find_target_node(cc);
> > +       struct folio *folio;
> >
> >         if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> >                 return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > -       if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > +
> > +       folio = page_folio(*hpage);
> > +       if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > +               folio_put(folio);
> > +               *hpage = NULL;
> >                 return SCAN_CGROUP_CHARGE_FAIL;
> > +       }
> >         count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > +
> >         return SCAN_SUCCEED;
> >  }
> >
> > --
> > 2.39.1
> >
> 

Thanks, Peter.

Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
!NUMA case (where we would preallocate a hugepage) we can depend on put_page()
to take care of that for us.

Regardless, you can have my

Reviewed-by: Zach O'Keefe <zokeefe@google.com>
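
For reference, a rough sketch of the follow-up cleanup suggested here
(illustrative only: the label and context lines are approximate and this is
not the actual follow-up patch). The idea is that once the folio has been
charged, the final put_page()/folio_put() also uncharges it, so the explicit
mem_cgroup_uncharge() on the caller's error path becomes redundant:

 out_nolock:
-	if (hpage) {
-		mem_cgroup_uncharge(page_folio(hpage));
-		put_page(hpage);
-	}
+	if (hpage)
+		put_page(hpage);	/* the final put also uncharges the memcg */
 	return result;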
Peter Xu March 3, 2023, 2:59 p.m. UTC | #3
On Thu, Mar 02, 2023 at 03:21:50PM -0800, Zach O'Keefe wrote:
> On Feb 22 14:53, Yang Shi wrote:
> > On Wed, Feb 22, 2023 at 11:52 AM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > If the memory charge fails, instead of returning the hpage along with an
> > > error, let the function clean up the folio properly, which is normally what
> > > a function should do in this case: either return successfully, or return
> > > with no side effects from a partial run along with the indicated error.
> > >
> > > This also avoids the caller invoking mem_cgroup_uncharge() unnecessarily on
> > > either the anon or the shmem path (even though it is safe to do so).
> > 
> > Thanks for the cleanup. Reviewed-by: Yang Shi <shy828301@gmail.com>
> > 
> > >
> > > Cc: Yang Shi <shy828301@gmail.com>
> > > Reviewed-by: David Stevens <stevensd@chromium.org>
> > > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > ---
> > > v1->v2:
> > > - Enhance commit message, drop "Fixes:" and "Cc: stable" tag, add R-bs.
> > > ---
> > >  mm/khugepaged.c | 9 ++++++++-
> > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > index 8dbc39896811..941d1c7ea910 100644
> > > --- a/mm/khugepaged.c
> > > +++ b/mm/khugepaged.c
> > > @@ -1063,12 +1063,19 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
> > >         gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> > >                      GFP_TRANSHUGE);
> > >         int node = hpage_collapse_find_target_node(cc);
> > > +       struct folio *folio;
> > >
> > >         if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
> > >                 return SCAN_ALLOC_HUGE_PAGE_FAIL;
> > > -       if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
> > > +
> > > +       folio = page_folio(*hpage);
> > > +       if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
> > > +               folio_put(folio);
> > > +               *hpage = NULL;
> > >                 return SCAN_CGROUP_CHARGE_FAIL;
> > > +       }
> > >         count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
> > > +
> > >         return SCAN_SUCCEED;
> > >  }
> > >
> > > --
> > > 2.39.1
> > >
> > 
> 
> Thanks, Peter.
> 
> Can we also get rid of the unnecessary mem_cgroup_uncharge() calls while we're
> at it? Maybe this deserves a separate patch, but after Yang's cleanup of the
> !NUMA case (where we would preallocate a hugepage) we can depend on put_page()
> to take care of that for us.

Makes sense to me.  I can prepare a separate patch to clean it up.

> 
> Regardless, you can have my
> 
> Reviewed-by: Zach O'Keefe <zokeefe@google.com>

Thanks!

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8dbc39896811..941d1c7ea910 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1063,12 +1063,19 @@  static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
 		     GFP_TRANSHUGE);
 	int node = hpage_collapse_find_target_node(cc);
+	struct folio *folio;
 
 	if (!hpage_collapse_alloc_page(hpage, gfp, node, &cc->alloc_nmask))
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
-	if (unlikely(mem_cgroup_charge(page_folio(*hpage), mm, gfp)))
+
+	folio = page_folio(*hpage);
+	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
+		folio_put(folio);
+		*hpage = NULL;
 		return SCAN_CGROUP_CHARGE_FAIL;
+	}
 	count_memcg_page_event(*hpage, THP_COLLAPSE_ALLOC);
+
 	return SCAN_SUCCEED;
 }