| Message ID | 1565144277-36240-3-git-send-email-yang.shi@linux.alibaba.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Make deferred split shrinker memcg aware |
On 07.08.2019 05:17, Yang Shi wrote:
> A later patch will make the THP deferred split shrinker memcg aware,
> but it needs the page->mem_cgroup information in the THP destructor,
> which currently runs after mem_cgroup_uncharge().
>
> So, move mem_cgroup_uncharge() from __page_cache_release() to the
> compound page destructor, which is called by both THP and other
> compound pages except HugeTLB, and call it in __put_single_page()
> for order-0 pages.
>
> Suggested-by: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Qian Cai <cai@lca.pw>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>

> ---
>  mm/page_alloc.c | 1 +
>  mm/swap.c       | 2 +-
>  mm/vmscan.c     | 6 ++----
>  3 files changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df02a88..1d1c5d3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -670,6 +670,7 @@ static void bad_page(struct page *page, const char *reason,
>
>  void free_compound_page(struct page *page)
>  {
> +	mem_cgroup_uncharge(page);
>  	__free_pages_ok(page, compound_order(page));
>  }
>
> diff --git a/mm/swap.c b/mm/swap.c
> index ae30039..d4242c8 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -71,12 +71,12 @@ static void __page_cache_release(struct page *page)
>  		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
>  	}
>  	__ClearPageWaiters(page);
> -	mem_cgroup_uncharge(page);
>  }
>
>  static void __put_single_page(struct page *page)
>  {
>  	__page_cache_release(page);
> +	mem_cgroup_uncharge(page);
>  	free_unref_page(page);
>  }
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index dbdc46a..b1b5e5f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1490,10 +1490,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  		 * Is there need to periodically free_page_list? It would
>  		 * appear not as the counts should be low
>  		 */
> -		if (unlikely(PageTransHuge(page))) {
> -			mem_cgroup_uncharge(page);
> +		if (unlikely(PageTransHuge(page)))
>  			(*get_compound_page_dtor(page))(page);
> -		} else
> +		else
>  			list_add(&page->lru, &free_pages);
>  		continue;
>
> @@ -1914,7 +1913,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>
>  		if (unlikely(PageCompound(page))) {
>  			spin_unlock_irq(&pgdat->lru_lock);
> -			mem_cgroup_uncharge(page);
>  			(*get_compound_page_dtor(page))(page);
>  			spin_lock_irq(&pgdat->lru_lock);
>  		} else
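For reference, the destructor indirection the patch hooks into looks roughly like this. The sketch below is trimmed from the v5.2-era definitions in include/linux/mm.h and mm/page_alloc.c (config #ifdefs dropped), so treat it as an illustration rather than the exact source:

typedef void compound_page_dtor(struct page *);

/* Destructor ids; the id for a compound page is stored in page[1]. */
enum compound_dtor_id {
	NULL_COMPOUND_DTOR,
	COMPOUND_PAGE_DTOR,	/* free_compound_page() */
	HUGETLB_PAGE_DTOR,	/* free_huge_page() */
	TRANSHUGE_PAGE_DTOR,	/* free_transhuge_page() */
	NR_COMPOUND_DTORS,
};

/* mm/page_alloc.c keeps this table in sync with the enum above. */
compound_page_dtor * const compound_page_dtors[] = {
	NULL,
	free_compound_page,
	free_huge_page,
	free_transhuge_page,
};

static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
{
	VM_BUG_ON_PAGE(page[1].compound_dtor >= NR_COMPOUND_DTORS, page);
	return compound_page_dtors[page[1].compound_dtor];
}

Since free_transhuge_page() finishes by calling free_compound_page() while free_huge_page() never does, adding mem_cgroup_uncharge() to free_compound_page() covers THP and ordinary compound pages but leaves HugeTLB alone, which is the "except HugeTLB" carve-out in the commit message.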
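As for why the destructor needs page->mem_cgroup to still be valid: the follow-up patch in this series moves the deferred split queue into struct mem_cgroup and makes the THP destructor look the queue up per memcg. A sketch of that shape, following the series as posted (the helper name and fields are taken from the series and may not match the final committed code):

/* Per-memcg queue if the page is still charged, else the node queue. */
static struct deferred_split *get_deferred_split_queue(struct page *page)
{
	struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));

	if (memcg)
		return &memcg->deferred_split_queue;
	return &pgdat->deferred_split_queue;
}

void free_transhuge_page(struct page *page)
{
	struct deferred_split *ds_queue = get_deferred_split_queue(page);
	unsigned long flags;

	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	if (!list_empty(page_deferred_list(page))) {
		ds_queue->split_queue_len--;
		list_del(page_deferred_list(page));
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	free_compound_page(page);
}

mem_cgroup_uncharge() clears page->mem_cgroup, so with the old ordering (uncharge first, destructor second) get_deferred_split_queue() would always see a NULL memcg and the page could never be removed from its memcg's queue. Uncharging inside free_compound_page(), after the queue removal, is what makes this lookup safe.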