Message ID: 20210622121551.3398730-15-willy@infradead.org (mailing list archive)
State: New
Series: Folio-enabling the page cache
On Tue, Jun 22, 2021 at 01:15:19PM +0100, Matthew Wilcox (Oracle) wrote:
> mem_cgroup_charge() already assumed it was being passed a non-tail
> page (and looking at the callers, that's true; it's called for freshly
> allocated pages). The only real change here is that folio_nr_pages()
> doesn't compile away like thp_nr_pages() does as folio support
> is not conditional on transparent hugepage support. Reimplement
> mem_cgroup_charge() as a wrapper around folio_charge_cgroup().

Maybe rename __mem_cgroup_charge to __folio_charge_cgroup as well?
On Wed, Jun 23, 2021 at 10:15:20AM +0200, Christoph Hellwig wrote:
> On Tue, Jun 22, 2021 at 01:15:19PM +0100, Matthew Wilcox (Oracle) wrote:
> > mem_cgroup_charge() already assumed it was being passed a non-tail
> > page (and looking at the callers, that's true; it's called for freshly
> > allocated pages). The only real change here is that folio_nr_pages()
> > doesn't compile away like thp_nr_pages() does as folio support
> > is not conditional on transparent hugepage support. Reimplement
> > mem_cgroup_charge() as a wrapper around folio_charge_cgroup().
>
> Maybe rename __mem_cgroup_charge to __folio_charge_cgroup as well?

Oh, yeah, should have done that.  Thanks.
On Thu 24-06-21 17:42:58, Matthew Wilcox wrote:
> On Wed, Jun 23, 2021 at 10:15:20AM +0200, Christoph Hellwig wrote:
> > On Tue, Jun 22, 2021 at 01:15:19PM +0100, Matthew Wilcox (Oracle) wrote:
> > > mem_cgroup_charge() already assumed it was being passed a non-tail
> > > page (and looking at the callers, that's true; it's called for freshly
> > > allocated pages). The only real change here is that folio_nr_pages()
> > > doesn't compile away like thp_nr_pages() does as folio support
> > > is not conditional on transparent hugepage support. Reimplement
> > > mem_cgroup_charge() as a wrapper around folio_charge_cgroup().
> >
> > Maybe rename __mem_cgroup_charge to __folio_charge_cgroup as well?
>
> Oh, yeah, should have done that.  Thanks.

I would stick with __mem_cgroup_charge here. Not that I would insist but
the folio nature is quite obvious from the parameter already.

Btw. memcg_check_events doesn't really need the page argument. A nid
should be sufficient and your earlier patch is already touching the
softlimit code so maybe it would be worth changing this page -> folio ->
page back and forth.
On Fri, Jun 25, 2021 at 10:22:35AM +0200, Michal Hocko wrote:
> On Thu 24-06-21 17:42:58, Matthew Wilcox wrote:
> > On Wed, Jun 23, 2021 at 10:15:20AM +0200, Christoph Hellwig wrote:
> > > On Tue, Jun 22, 2021 at 01:15:19PM +0100, Matthew Wilcox (Oracle) wrote:
> > > > mem_cgroup_charge() already assumed it was being passed a non-tail
> > > > page (and looking at the callers, that's true; it's called for freshly
> > > > allocated pages). The only real change here is that folio_nr_pages()
> > > > doesn't compile away like thp_nr_pages() does as folio support
> > > > is not conditional on transparent hugepage support. Reimplement
> > > > mem_cgroup_charge() as a wrapper around folio_charge_cgroup().
> > >
> > > Maybe rename __mem_cgroup_charge to __folio_charge_cgroup as well?
> >
> > Oh, yeah, should have done that.  Thanks.
>
> I would stick with __mem_cgroup_charge here. Not that I would insist but
> the folio nature is quite obvious from the parameter already.
>
> Btw. memcg_check_events doesn't really need the page argument. A nid
> should be sufficient and your earlier patch is already touching the
> softlimit code so maybe it would be worth changing this page -> folio ->
> page back and forth.

I'm not a huge fan of that 'dummy_page' component of uncharge_gather,
so replacing that with nid makes sense.  I'll juggle these patches
a bit and work that in.  Thanks!
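The signature change Michal suggests can be sketched as below. The struct
definitions here are illustrative stand-ins, not the kernel's real types,
and `soft_limit_excess` is a made-up field standing in for the soft-limit
bookkeeping that memcg_check_events() actually performs:

```c
#include <assert.h>

/* Illustrative stand-ins; the real kernel structures are far larger. */
struct page { int nid; };                 /* nid: node the page lives on */
struct folio { struct page page; };
struct mem_cgroup { long soft_limit_excess; };

/* Current shape: the page argument is only consulted for its node id,
 * so folio-based callers must convert folio -> page just to pass it. */
static void memcg_check_events_page(struct mem_cgroup *memcg,
                                    struct page *page)
{
        memcg->soft_limit_excess += page->nid;  /* stand-in for real work */
}

/* Suggested shape: take the nid directly.  This also removes the need
 * for uncharge_gather's dummy_page, as the thread above notes. */
static void memcg_check_events_nid(struct mem_cgroup *memcg, int nid)
{
        memcg->soft_limit_excess += nid;
}
```

Both forms do the same bookkeeping; the nid variant simply pushes the
page-to-node lookup out to callers that already have it.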
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4460ff0e70a1..a50e5cee6d2c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -704,6 +704,8 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
+int folio_charge_cgroup(struct folio *, struct mm_struct *, gfp_t);
+
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
@@ -1216,6 +1218,12 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 	return false;
 }
 
+static inline int folio_charge_cgroup(struct folio *folio,
+		struct mm_struct *mm, gfp_t gfp)
+{
+	return 0;
+}
+
 static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
 				    gfp_t gfp_mask)
 {
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index a374747ae1c6..1d71b8b587f8 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -48,3 +48,10 @@ void mark_page_accessed(struct page *page)
 	folio_mark_accessed(page_folio(page));
 }
 EXPORT_SYMBOL(mark_page_accessed);
+
+#ifdef CONFIG_MEMCG
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp)
+{
+	return folio_charge_cgroup(page_folio(page), mm, gfp);
+}
+#endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7939e4e9118d..69638f84d11b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6503,10 +6503,9 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
 
-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+static int __mem_cgroup_charge(struct folio *folio, struct mem_cgroup *memcg,
 			       gfp_t gfp)
 {
-	struct folio *folio = page_folio(page);
 	unsigned int nr_pages = folio_nr_pages(folio);
 	int ret;
 
@@ -6519,26 +6518,26 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, page);
+	memcg_check_events(memcg, &folio->page);
 	local_irq_enable();
 out:
 	return ret;
 }
 
 /**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
+ * folio_charge_cgroup - Charge a newly allocated folio to a cgroup.
+ * @folio: Folio to charge.
+ * @mm: mm context of the allocating task.
+ * @gfp: reclaim mode
  *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
+ * Try to charge @folio to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp if necessary.
  *
- * Do not use this for pages allocated for swapin.
+ * Do not use this for folios allocated for swapin.
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
+int folio_charge_cgroup(struct folio *folio, struct mm_struct *mm, gfp_t gfp)
 {
 	struct mem_cgroup *memcg;
 	int ret;
@@ -6547,7 +6546,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 		return 0;
 
 	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	ret = __mem_cgroup_charge(folio, memcg, gfp);
 	css_put(&memcg->css);
 
 	return ret;
@@ -6568,6 +6567,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry)
 {
+	struct folio *folio = page_folio(page);
 	struct mem_cgroup *memcg;
 	unsigned short id;
 	int ret;
@@ -6582,7 +6582,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 	memcg = get_mem_cgroup_from_mm(mm);
 	rcu_read_unlock();
 
-	ret = __mem_cgroup_charge(page, memcg, gfp);
+	ret = __mem_cgroup_charge(folio, memcg, gfp);
 	css_put(&memcg->css);
 
 	return ret;
mem_cgroup_charge() already assumed it was being passed a non-tail
page (and looking at the callers, that's true; it's called for freshly
allocated pages). The only real change here is that folio_nr_pages()
doesn't compile away like thp_nr_pages() does, as folio support
is not conditional on transparent hugepage support. Reimplement
mem_cgroup_charge() as a wrapper around folio_charge_cgroup().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h |  8 ++++++++
 mm/folio-compat.c          |  7 +++++++
 mm/memcontrol.c            | 26 +++++++++++++-------------
 3 files changed, 28 insertions(+), 13 deletions(-)
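The "compile away" point in the commit message can be illustrated with a
simplified sketch. These definitions are stand-ins, not the kernel's real
ones: the real thp_nr_pages() reduces to the constant 1 when
CONFIG_TRANSPARENT_HUGEPAGE is disabled, so the compiler can fold away
multi-page code paths, whereas folio_nr_pages() always computes from the
folio's order because folios exist regardless of THP:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel types. */
struct page { unsigned int order; };
struct folio { struct page page; };

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define thp_nr_pages(page) (1u << (page)->order)
#else
/* THP disabled: a compile-time constant the optimizer can exploit. */
#define thp_nr_pages(page) 1
#endif

/* No constant fallback: always reads the stored order. */
static inline unsigned long folio_nr_pages(struct folio *folio)
{
        return 1ul << folio->page.order;
}
```

With THP compiled out, `nr_pages = thp_nr_pages(page)` lets dead multi-page
branches disappear; `nr_pages = folio_nr_pages(folio)` never does, which is
the extra cost the commit message acknowledges.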