Message ID | 20240906001047.1245-3-21cnbao@gmail.com (mailing list archive)
---|---
State | New
Series | mm: enable large folios swap-in support
On Thu, Sep 5, 2024 at 5:11 PM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Barry Song <v-songbaohua@oppo.com>
>
> With large folios swap-in, we might need to uncharge multiple entries all
> together, add nr argument in mem_cgroup_swapin_uncharge_swap().
>
> For the existing two users, just pass nr=1.
>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>

Reviewed-by: Yosry Ahmed <yosryahmed@google.com>

> Acked-by: Chris Li <chrisl@kernel.org>
> Cc: Shakeel Butt <shakeel.butt@linux.dev>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Gao Xiang <xiang@kernel.org>
> Cc: "Huang, Ying" <ying.huang@intel.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Kairui Song <kasong@tencent.com>
> Cc: Kairui Song <ryncsn@gmail.com>
> Cc: Kalesh Singh <kaleshsingh@google.com>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Nhat Pham <nphamcs@gmail.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
> Cc: Suren Baghdasaryan <surenb@google.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Yosry Ahmed <yosryahmed@google.com>
> ---
>  include/linux/memcontrol.h | 5 +++--
>  mm/memcontrol.c            | 7 ++++---
>  mm/memory.c                | 2 +-
>  mm/swap_state.c            | 2 +-
>  4 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2ef94c74847d..34d2da05f2f1 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -699,7 +699,8 @@ int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,
>
>  int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>  				  gfp_t gfp, swp_entry_t entry);
> -void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
> +
> +void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
>
>  void __mem_cgroup_uncharge(struct folio *folio);
>
> @@ -1206,7 +1207,7 @@ static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
>  	return 0;
>  }
>
> -static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
> +static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr)
>  {
>  }
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index bda6f75d22ff..c0d36ca20332 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4559,14 +4559,15 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>
>  /*
>   * mem_cgroup_swapin_uncharge_swap - uncharge swap slot
> - * @entry: swap entry for which the page is charged
> + * @entry: the first swap entry for which the pages are charged
> + * @nr_pages: number of pages which will be uncharged
>   *
>   * Call this function after successfully adding the charged page to swapcache.
>   *
>   * Note: This function assumes the page for which swap slot is being uncharged
>   * is order 0 page.
>   */
> -void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
> +void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>  {
>  	/*
>  	 * Cgroup1's unified memory+swap counter has been charged with the
> @@ -4586,7 +4587,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
>  	 * let's not wait for it. The page already received a
>  	 * memory+swap charge, drop the swap entry duplicate.
>  	 */
> -	mem_cgroup_uncharge_swap(entry, 1);
> +	mem_cgroup_uncharge_swap(entry, nr_pages);
>  	}
>  }
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 42674c0748cb..cdf03b39a92c 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4100,7 +4100,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			ret = VM_FAULT_OOM;
>  			goto out_page;
>  		}
> -		mem_cgroup_swapin_uncharge_swap(entry);
> +		mem_cgroup_swapin_uncharge_swap(entry, 1);
>
>  		shadow = get_shadow_from_swap_cache(entry);
>  		if (shadow)
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index a042720554a7..4669f29cf555 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -522,7 +522,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  	if (add_to_swap_cache(new_folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
>  		goto fail_unlock;
>
> -	mem_cgroup_swapin_uncharge_swap(entry);
> +	mem_cgroup_swapin_uncharge_swap(entry, 1);
>
>  	if (shadow)
>  		workingset_refault(new_folio, shadow);
> --
> 2.34.1
>
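For context on why the argument is needed: once a swap-in fault can allocate a large folio covering several contiguous swap entries, the uncharge has to cover the whole range in a single call rather than looping per entry. Below is a minimal sketch of such a caller, assuming the large-folio swap-in path that a later patch in this series presumably adds; the helper name is hypothetical, only folio_nr_pages() and the new mem_cgroup_swapin_uncharge_swap() signature are taken from the patch:

/*
 * Sketch only, not part of this patch: uncharge every swap entry backing a
 * swapped-in folio. @entry is the first of folio_nr_pages(folio) contiguous
 * swap entries that were charged when the folio was added to the swap cache.
 */
static void swapin_uncharge_folio_range(struct folio *folio, swp_entry_t entry)
{
	unsigned int nr_pages = folio_nr_pages(folio);	/* 1 for order-0, more for mTHP */

	/* one call drops the swap entry duplicates for all nr_pages entries */
	mem_cgroup_swapin_uncharge_swap(entry, nr_pages);
}

The two existing callers in do_swap_page() and __read_swap_cache_async() keep passing nr=1, so order-0 swap-in behaviour is unchanged by this patch.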
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2ef94c74847d..34d2da05f2f1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -699,7 +699,8 @@ int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,

 int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
-void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
+
+void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);

 void __mem_cgroup_uncharge(struct folio *folio);

@@ -1206,7 +1207,7 @@ static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
 	return 0;
 }

-static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
+static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr)
 {
 }

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bda6f75d22ff..c0d36ca20332 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4559,14 +4559,15 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,

 /*
  * mem_cgroup_swapin_uncharge_swap - uncharge swap slot
- * @entry: swap entry for which the page is charged
+ * @entry: the first swap entry for which the pages are charged
+ * @nr_pages: number of pages which will be uncharged
  *
  * Call this function after successfully adding the charged page to swapcache.
  *
  * Note: This function assumes the page for which swap slot is being uncharged
  * is order 0 page.
  */
-void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
+void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 {
 	/*
 	 * Cgroup1's unified memory+swap counter has been charged with the
@@ -4586,7 +4587,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
 	 * let's not wait for it. The page already received a
 	 * memory+swap charge, drop the swap entry duplicate.
 	 */
-	mem_cgroup_uncharge_swap(entry, 1);
+	mem_cgroup_uncharge_swap(entry, nr_pages);
 	}
 }

diff --git a/mm/memory.c b/mm/memory.c
index 42674c0748cb..cdf03b39a92c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4100,7 +4100,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			ret = VM_FAULT_OOM;
 			goto out_page;
 		}
-		mem_cgroup_swapin_uncharge_swap(entry);
+		mem_cgroup_swapin_uncharge_swap(entry, 1);

 		shadow = get_shadow_from_swap_cache(entry);
 		if (shadow)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a042720554a7..4669f29cf555 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -522,7 +522,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	if (add_to_swap_cache(new_folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;

-	mem_cgroup_swapin_uncharge_swap(entry);
+	mem_cgroup_swapin_uncharge_swap(entry, 1);

 	if (shadow)
 		workingset_refault(new_folio, shadow);