[v2,3/5] mm: Remove references to folio in split_page_memcg()

Message ID 20250314133617.138071-4-willy@infradead.org (mailing list archive)
State: New
Series: Minor memcg cleanups & prep for memdescs

Commit Message

Matthew Wilcox March 14, 2025, 1:36 p.m. UTC
We know that the passed-in page is not part of a folio (it's a plain
page allocated with GFP_ACCOUNT), so we should get rid of the misleading
references to folios.

Introduce page_objcg() and page_set_objcg() helpers to make this
clearer.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/memcontrol.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

Comments

David Hildenbrand March 14, 2025, 9:53 p.m. UTC | #1
On 14.03.25 14:36, Matthew Wilcox (Oracle) wrote:
> We know that the passed in page is not part of a folio 

It would be great if we had a way to assert that. ... but I'm 
afraid that has to wait for the memdesc split.

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 862bb0d5c0f2..9e9027dda78c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2697,6 +2697,23 @@  static int obj_cgroup_charge_pages(struct obj_cgroup *objcg, gfp_t gfp,
 	return ret;
 }
 
+static struct obj_cgroup *page_objcg(const struct page *page)
+{
+	unsigned long memcg_data = page->memcg_data;
+
+	if (mem_cgroup_disabled() || !memcg_data)
+		return NULL;
+
+	VM_BUG_ON_PAGE((memcg_data & OBJEXTS_FLAGS_MASK) != MEMCG_DATA_KMEM,
+			page);
+	return (struct obj_cgroup *)(memcg_data - MEMCG_DATA_KMEM);
+}
+
+static void page_set_objcg(struct page *page, const struct obj_cgroup *objcg)
+{
+	page->memcg_data = (unsigned long)objcg | MEMCG_DATA_KMEM;
+}
+
 /**
  * __memcg_kmem_charge_page: charge a kmem page to the current memory cgroup
  * @page: page to charge
@@ -2715,8 +2732,7 @@  int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 		ret = obj_cgroup_charge_pages(objcg, gfp, 1 << order);
 		if (!ret) {
 			obj_cgroup_get(objcg);
-			page->memcg_data = (unsigned long)objcg |
-				MEMCG_DATA_KMEM;
+			page_set_objcg(page, objcg);
 			return 0;
 		}
 	}
@@ -3089,18 +3105,18 @@  void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
  * The objcg is only set on the first page, so transfer it to all the
  * other pages.
  */
-void split_page_memcg(struct page *first, unsigned order)
+void split_page_memcg(struct page *page, unsigned order)
 {
-	struct folio *folio = page_folio(first);
+	struct obj_cgroup *objcg = page_objcg(page);
 	unsigned int i, nr = 1 << order;
 
-	if (mem_cgroup_disabled() || !folio_memcg_charged(folio))
+	if (!objcg)
 		return;
 
 	for (i = 1; i < nr; i++)
-		folio_page(folio, i)->memcg_data = folio->memcg_data;
+		page_set_objcg(&page[i], objcg);
 
-	obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+	obj_cgroup_get_many(objcg, nr - 1);
 }
 
 void folio_split_memcg_refs(struct folio *folio, unsigned old_order,