
memcg: slub: fix SUnreclaim for post charged objects

Message ID 20241210040657.3441287-1-shakeel.butt@linux.dev (mailing list archive)
State New
Series memcg: slub: fix SUnreclaim for post charged objects

Commit Message

Shakeel Butt Dec. 10, 2024, 4:06 a.m. UTC
Large kmalloc directly allocates from the page allocator and then uses
lruvec_stat_mod_folio() to increment the unreclaimable slab stats for
both the global and the memcg level. However, when post memcg charging
of slab objects was added in commit 9028cdeb38e1 ("memcg: add charging
of already allocated slab objects"), it failed to correctly handle the
unreclaimable slab stats for memcg.

One user visible effect of that bug is that the node level
unreclaimable slab stat works correctly, but the memcg level stat can
underflow: the kernel correctly handles the free path, while the charge
path failed to increment the memcg level unreclaimable slab stat. Let's
fix this by handling the stat correctly in the post charge code path.

Fixes: 9028cdeb38e1 ("memcg: add charging of already allocated slab objects")
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/slub.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)
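
For readers unfamiliar with the accounting, the underflow can be modeled
with plain arithmetic on two counters. The sketch below is a hypothetical
userspace model, not kernel code; the counter names merely mirror the
kernel's NR_SLAB_UNRECLAIMABLE_B bookkeeping. At allocation time the folio
has no memcg, so only the node counter moves; the buggy post-charge path
attaches the memcg without touching the stats; the free path then
decrements both counters, driving the memcg one negative.

/*
 * Hypothetical userspace model of the stat underflow; nothing here is
 * kernel code. Build with: cc -o model model.c && ./model
 */
#include <stdio.h>
#include <stdbool.h>

static long node_stat;   /* node (global) unreclaimable slab bytes */
static long memcg_stat;  /* memcg unreclaimable slab bytes */

static void large_kmalloc(long size)
{
	/* At alloc time the folio has no memcg, so only the node stat moves. */
	node_stat += size;
}

static void post_charge(long size, bool fixed)
{
	if (fixed) {
		/*
		 * The fix: take the bytes out of the node-only accounting and
		 * re-add them through the path that updates both levels.
		 */
		node_stat -= size;
		node_stat += size;
		memcg_stat += size;
	}
	/* Buggy behavior: memcg attached, stats left untouched. */
}

static void free_large_kmalloc(long size)
{
	/* The folio now has a memcg, so the free path decrements both. */
	node_stat -= size;
	memcg_stat -= size;
}

int main(void)
{
	const long size = 32768; /* e.g. a 32 KiB large kmalloc */

	large_kmalloc(size);
	post_charge(size, false); /* flip to true to model the fixed kernel */
	free_large_kmalloc(size);

	printf("node:  %ld\n", node_stat);  /* 0 either way */
	printf("memcg: %ld\n", memcg_stat); /* -32768 with the bug, 0 fixed */
	return 0;
}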

Comments

Vlastimil Babka Dec. 10, 2024, 8:29 a.m. UTC | #1
On 12/10/24 05:06, Shakeel Butt wrote:
> Large kmalloc directly allocates from the page allocator and then uses
> lruvec_stat_mod_folio() to increment the unreclaimable slab stats for
> both the global and the memcg level. However, when post memcg charging
> of slab objects was added in commit 9028cdeb38e1 ("memcg: add charging
> of already allocated slab objects"), it failed to correctly handle the
> unreclaimable slab stats for memcg.
> 
> One user visible effect of that bug is that the node level
> unreclaimable slab stat works correctly, but the memcg level stat can
> underflow: the kernel correctly handles the free path, while the charge
> path failed to increment the memcg level unreclaimable slab stat. Let's
> fix this by handling the stat correctly in the post charge code path.
> 
> Fixes: 9028cdeb38e1 ("memcg: add charging of already allocated slab objects")

That's a 6.12-rc1 commit so I'm adding cc stable.

> Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>

Queued in slab/for-next-fixes, thanks!

Vlastimil

> ---
>  mm/slub.c | 21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index f62c829b7b6b..88bf2bf51bd6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2189,9 +2189,24 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
>  
>  	folio = virt_to_folio(p);
>  	if (!folio_test_slab(folio)) {
> -		return folio_memcg_kmem(folio) ||
> -			(__memcg_kmem_charge_page(folio_page(folio, 0), flags,
> -						  folio_order(folio)) == 0);
> +		int size;
> +
> +		if (folio_memcg_kmem(folio))
> +			return true;
> +
> +		if (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
> +					     folio_order(folio)))
> +			return false;
> +
> +		/*
> +		 * This folio has already been accounted in the global stats but
> +		 * not in the memcg stats. So, subtract from the global and use
> +		 * the interface which adds to both global and memcg stats.
> +		 */
> +		size = folio_size(folio);
> +		node_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, -size);
> +		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, size);
> +		return true;
>  	}
>  
>  	slab = folio_slab(folio);

Patch

diff --git a/mm/slub.c b/mm/slub.c
index f62c829b7b6b..88bf2bf51bd6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2189,9 +2189,24 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
 
 	folio = virt_to_folio(p);
 	if (!folio_test_slab(folio)) {
-		return folio_memcg_kmem(folio) ||
-			(__memcg_kmem_charge_page(folio_page(folio, 0), flags,
-						  folio_order(folio)) == 0);
+		int size;
+
+		if (folio_memcg_kmem(folio))
+			return true;
+
+		if (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
+					     folio_order(folio)))
+			return false;
+
+		/*
+		 * This folio has already been accounted in the global stats but
+		 * not in the memcg stats. So, subtract from the global and use
+		 * the interface which adds to both global and memcg stats.
+		 */
+		size = folio_size(folio);
+		node_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, -size);
+		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, size);
+		return true;
 	}
 
 	slab = folio_slab(folio);
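
Note the net effect of the fixed charge path, per the comment in the
patch: node_stat_mod_folio() removes folio_size(folio) bytes from the node
level counter, and lruvec_stat_mod_folio() immediately re-adds them there
while also adding them to the memcg level counter. The node stat is thus
unchanged overall, the memcg stat gains the folio's size, and the charge
path becomes symmetric with the free path, which decrements both.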