From patchwork Mon Aug 26 23:29:08 2024
X-Patchwork-Submitter: Shakeel Butt <shakeel.butt@linux.dev>
X-Patchwork-Id: 13778666
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
    Vlastimil Babka, David Rientjes, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Eric Dumazet, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Meta kernel team, cgroups@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v1] memcg: add charging of already allocated slab objects
Date: Mon, 26 Aug 2024 16:29:08 -0700
Message-ID: <20240826232908.4076417-1-shakeel.butt@linux.dev>

At the moment, slab objects are charged to the memcg at allocation
time. However, there are cases where slab objects are allocated at a
time when the right target memcg to charge them to is not known. One
such case is network sockets for incoming connections, which are
allocated in softirq context. A couple hundred thousand connections are
very normal on a large, loaded server, and almost all of the sockets
underlying those connections get allocated in softirq context and thus
are not charged to any memcg. However, later, at accept() time, we do
know the right target memcg to charge. Let's add a new API to charge
already allocated objects, so we can get better accounting of the
memory usage.
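To make the intended call pattern concrete, here is a minimal sketch
(not part of the patch): my_cache, struct my_obj, and both helpers are
hypothetical; only kmem_cache_charge() is the API proposed here.

	/*
	 * Sketch only: my_cache and struct my_obj are hypothetical;
	 * kmem_cache_charge() is the API added by this patch.
	 */

	/* Softirq context: the right memcg is not known yet, so the
	 * object is allocated without being charged to any memcg. */
	static struct my_obj *my_obj_alloc(void)
	{
		return kmem_cache_alloc(my_cache, GFP_ATOMIC);
	}

	/* Later, in process context, once the owner is known: charge
	 * the already allocated object to the current task's memcg. */
	static int my_obj_adopt(struct my_obj *obj)
	{
		if (!kmem_cache_charge(obj, GFP_KERNEL))
			return -ENOMEM;
		return 0;
	}

This is the pattern the inet_csk_accept() hunk below follows for TCP
sockets: allocated uncharged in softirq context, charged at accept()
time.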
To measure the performance impact of this change, tcp_crr from the
neper [1] performance suite is used. It is a network ping-pong test
that opens a new connection for each ping-pong.

The server and the client are run inside a 3-level cgroup hierarchy
using the following commands:

Server:
 $ tcp_crr -6

Client:
 $ tcp_crr -6 -c -H ${server_ip}

If the client and server run on different machines with a 50 Gbps NIC,
there is no visible impact from the change. For the same-machine
experiment, with v6.11-rc5 as the base:

              base (throughput)    with-patch
 tcp_crr      14545 (+- 80)        14463 (+- 56)

The performance impact appears to be within the noise.

Link: https://github.com/google/neper [1]
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
Changes since the RFC:
- Added a check for already charged slab objects.
- Added performance results from neper's tcp_crr.

 include/linux/slab.h            |  1 +
 mm/slub.c                       | 54 +++++++++++++++++++++++++++++++++
 net/ipv4/inet_connection_sock.c |  5 +--
 3 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..05cfab107c72 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -547,6 +547,7 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 				   gfp_t gfpflags) __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_lru(...)	alloc_hooks(kmem_cache_alloc_lru_noprof(__VA_ARGS__))
 
+bool kmem_cache_charge(void *objp, gfp_t gfpflags);
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..580683597b5c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2185,6 +2185,16 @@ void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 
 	__memcg_slab_free_hook(s, slab, p, objects, obj_exts);
 }
+
+static __fastpath_inline
+bool memcg_slab_post_charge(struct kmem_cache *s, void *p, gfp_t flags)
+{
+	if (likely(!memcg_kmem_online()))
+		return true;
+
+	return __memcg_slab_post_alloc_hook(s, NULL, flags, 1, &p);
+}
+
 #else /* CONFIG_MEMCG */
 static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
 					      struct list_lru *lru,
@@ -2198,6 +2208,13 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s,
 					struct slab *slab, void **p, int objects)
 {
 }
+
+static inline bool memcg_slab_post_charge(struct kmem_cache *s,
+					  void *p,
+					  gfp_t flags)
+{
+	return true;
+}
 #endif /* CONFIG_MEMCG */
 
 /*
@@ -4062,6 +4079,43 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru_noprof);
 
+#define KMALLOC_TYPE (SLAB_KMALLOC | SLAB_CACHE_DMA | \
+		      SLAB_ACCOUNT | SLAB_RECLAIM_ACCOUNT)
+
+bool kmem_cache_charge(void *objp, gfp_t gfpflags)
+{
+	struct slabobj_ext *slab_exts;
+	struct kmem_cache *s;
+	struct folio *folio;
+	struct slab *slab;
+	unsigned long off;
+
+	if (!memcg_kmem_online())
+		return true;
+
+	folio = virt_to_folio(objp);
+	if (unlikely(!folio_test_slab(folio)))
+		return false;
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+
+	/* Ignore KMALLOC_NORMAL cache to avoid circular dependency. */
+	if ((s->flags & KMALLOC_TYPE) == SLAB_KMALLOC)
+		return true;
+
+	/* Ignore already charged objects. */
+	slab_exts = slab_obj_exts(slab);
+	if (slab_exts) {
+		off = obj_to_index(s, slab, objp);
+		if (unlikely(slab_exts[off].objcg))
+			return true;
+	}
+
+	return memcg_slab_post_charge(s, objp, gfpflags);
+}
+EXPORT_SYMBOL(kmem_cache_charge);
+
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @s: The cache to allocate from.
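A note on the KMALLOC_TYPE test above: the mask comparison singles out
KMALLOC_NORMAL caches, i.e. caches that have SLAB_KMALLOC set but none
of the DMA, ACCOUNT, or RECLAIM_ACCOUNT variant flags. A standalone
restatement of the predicate, with illustrative (not patch-provided)
cache names in the comments:

	/* Equivalent to the check in kmem_cache_charge() above; the
	 * caches named in the comments are illustrative examples. */
	static bool is_kmalloc_normal(struct kmem_cache *s)
	{
		/* kmalloc-192:    only SLAB_KMALLOC in the mask -> true  */
		/* kmalloc-cg-192: SLAB_ACCOUNT also set         -> false */
		/* "TCP" (a named cache): SLAB_KMALLOC not set   -> false */
		return (s->flags & KMALLOC_TYPE) == SLAB_KMALLOC;
	}

Skipping KMALLOC_NORMAL caches avoids the circular dependency mentioned
in the comment, presumably because the charging metadata is itself
allocated from those caches.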
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 64d07b842e73..3c13ca8c11fb 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -715,6 +715,7 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 	release_sock(sk);
 	if (newsk && mem_cgroup_sockets_enabled) {
 		int amt = 0;
+		gfp_t gfp = GFP_KERNEL | __GFP_NOFAIL;
 
 		/* atomically get the memory usage, set and charge the
 		 * newsk->sk_memcg.
@@ -731,8 +732,8 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 		}
 
 		if (amt)
-			mem_cgroup_charge_skmem(newsk->sk_memcg, amt,
-						GFP_KERNEL | __GFP_NOFAIL);
+			mem_cgroup_charge_skmem(newsk->sk_memcg, amt, gfp);
+		kmem_cache_charge(newsk, gfp);
 
 		release_sock(newsk);
 	}