From patchwork Thu Sep 5 17:34:22 2024
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 13792749
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton, Vlastimil Babka
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
    David Rientjes, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Eric Dumazet,
    "David S. Miller", Jakub Kicinski, Paolo Abeni, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Meta kernel team,
    cgroups@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v4] memcg: add charging of already allocated slab objects
Date: Thu, 5 Sep 2024 10:34:22 -0700
Message-ID: <20240905173422.1565480-1-shakeel.butt@linux.dev>

At the moment, slab objects are charged to the memcg at allocation
time. However, there are cases where slab objects are allocated at a
time when the right target memcg to charge them to is not known. One
such case is sockets for incoming network connections, which are
allocated in softirq context. A couple hundred thousand connections
are very normal on a large loaded server, and almost all of the
sockets underlying those connections get allocated in softirq context
and thus are not charged to any memcg. However, later at accept() time
we know the right target memcg to charge. Let's add a new API to
charge already allocated objects, so we can have better accounting of
the memory usage.
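To make the intended usage pattern concrete, here is a minimal sketch
of the deferred charging this API enables. The req_cachep cache and
the surrounding error handling are illustrative assumptions, not code
from this patch:

	/* Softirq context: the right memcg is not known yet, so the
	 * object is allocated without being charged to any memcg.
	 */
	req = kmem_cache_alloc(req_cachep, GFP_ATOMIC);

	/* Later, in process context, the current task's memcg is the
	 * right target, so charge the already allocated object to it
	 * and back out if the memcg is at its limit.
	 */
	if (!kmem_cache_charge(req, GFP_KERNEL)) {
		kmem_cache_free(req_cachep, req);
		return -ENOMEM;
	}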
To measure the performance impact of this change, tcp_crr from the
neper [1] performance suite is used. It is a network ping-pong test
that sets up a new connection for each ping-pong. The server and the
client are run inside a 3-level-deep cgroup hierarchy using the
following commands:

Server:
 $ tcp_crr -6

Client:
 $ tcp_crr -6 -c -H ${server_ip}

If the client and the server run on different machines with a 50 Gbps
NIC, there is no visible impact of the change. For the same-machine
experiment, with v6.11-rc5 as the base:

             base (throughput)   with-patch
 tcp_crr     14545 (+- 80)       14463 (+- 56)

The performance impact appears to be within the noise.

Link: https://github.com/google/neper [1]
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Roman Gushchin
Reviewed-by: Yosry Ahmed
Acked-by: Paolo Abeni
---
v3: https://lore.kernel.org/all/20240829175339.2424521-1-shakeel.butt@linux.dev/
Changes since v3:
- Add kernel doc for kmem_cache_charge.

v2: https://lore.kernel.org/all/20240827235228.1591842-1-shakeel.butt@linux.dev/
Changes since v2:
- Add handling of already charged large kmalloc objects.
- Move the normal kmalloc cache check into a function.

v1: https://lore.kernel.org/all/20240826232908.4076417-1-shakeel.butt@linux.dev/
Changes since v1:
- Correctly handle large allocations which bypass slab.
- Rearrange code to avoid compilation errors for !CONFIG_MEMCG builds.

RFC: https://lore.kernel.org/all/20240824010139.1293051-1-shakeel.butt@linux.dev/
Changes since the RFC:
- Added check for already charged slab objects.
- Added performance results from neper's tcp_crr.

 include/linux/slab.h            | 20 ++++++++++++++
 mm/slab.h                       |  7 +++++
 mm/slub.c                       | 49 +++++++++++++++++++++++++++++++++
 net/ipv4/inet_connection_sock.c |  5 ++--
 4 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..68789c79a530 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -547,6 +547,26 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 			   gfp_t gfpflags) __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_lru(...)	alloc_hooks(kmem_cache_alloc_lru_noprof(__VA_ARGS__))
 
+/**
+ * kmem_cache_charge - memcg charge an already allocated slab object
+ * @objp: address of the slab object to memcg charge.
+ * @gfpflags: describe the allocation context
+ *
+ * kmem_cache_charge is the normal method to charge a slab object to the
+ * current memcg. The objp should be a pointer returned by the slab allocator
+ * functions like kmalloc or kmem_cache_alloc. The memcg charge behavior can
+ * be controlled through the gfpflags parameter.
+ *
+ * There are several cases where it will return true regardless. More
+ * specifically:
+ *
+ * 1. For !CONFIG_MEMCG or cgroup_disable=memory systems.
+ * 2. Already charged slab objects.
+ * 3. For slab objects from KMALLOC_NORMAL caches.
+ *
+ * Return: true if the charge was successful, otherwise false.
+ */
+bool kmem_cache_charge(void *objp, gfp_t gfpflags);
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
diff --git a/mm/slab.h b/mm/slab.h
index dcdb56b8e7f5..9f907e930609 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -443,6 +443,13 @@ static inline bool is_kmalloc_cache(struct kmem_cache *s)
 	return (s->flags & SLAB_KMALLOC);
 }
 
+static inline bool is_kmalloc_normal(struct kmem_cache *s)
+{
+	if (!is_kmalloc_cache(s))
+		return false;
+	return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
+}
+
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..3f2a89f7a23a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2185,6 +2185,41 @@ void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 		__memcg_slab_free_hook(s, slab, p, objects, obj_exts);
 }
 
+static __fastpath_inline
+bool memcg_slab_post_charge(void *p, gfp_t flags)
+{
+	struct slabobj_ext *slab_exts;
+	struct kmem_cache *s;
+	struct folio *folio;
+	struct slab *slab;
+	unsigned long off;
+
+	folio = virt_to_folio(p);
+	if (!folio_test_slab(folio)) {
+		return folio_memcg_kmem(folio) ||
+			(__memcg_kmem_charge_page(folio_page(folio, 0), flags,
+						  folio_order(folio)) == 0);
+	}
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+
+	/* Ignore KMALLOC_NORMAL cache to avoid circular dependency. */
+	if (is_kmalloc_normal(s))
+		return true;
+
+	/* Ignore already charged objects. */
+	slab_exts = slab_obj_exts(slab);
+	if (slab_exts) {
+		off = obj_to_index(s, slab, p);
+		if (unlikely(slab_exts[off].objcg))
+			return true;
+	}
+
+	return __memcg_slab_post_alloc_hook(s, NULL, flags, 1, &p);
+}
+
 #else /* CONFIG_MEMCG */
 static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
 					      struct list_lru *lru,
@@ -2198,6 +2233,11 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s,
 					struct slab *slab, void **p, int objects)
 {
 }
+
+static inline bool memcg_slab_post_charge(void *p, gfp_t flags)
+{
+	return true;
+}
 #endif /* CONFIG_MEMCG */
 
 /*
@@ -4062,6 +4102,15 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru_noprof);
 
+bool kmem_cache_charge(void *objp, gfp_t gfpflags)
+{
+	if (!memcg_kmem_online())
+		return true;
+
+	return memcg_slab_post_charge(objp, gfpflags);
+}
+EXPORT_SYMBOL(kmem_cache_charge);
+
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @s: The cache to allocate from.
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 64d07b842e73..3c13ca8c11fb 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -715,6 +715,7 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 	release_sock(sk);
 	if (newsk && mem_cgroup_sockets_enabled) {
 		int amt = 0;
+		gfp_t gfp = GFP_KERNEL | __GFP_NOFAIL;
 
 		/* atomically get the memory usage, set and charge the
 		 * newsk->sk_memcg.
@@ -731,8 +732,8 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 		}
 
 		if (amt)
-			mem_cgroup_charge_skmem(newsk->sk_memcg, amt,
-						GFP_KERNEL | __GFP_NOFAIL);
+			mem_cgroup_charge_skmem(newsk->sk_memcg, amt, gfp);
+		kmem_cache_charge(newsk, gfp);
 
 		release_sock(newsk);
 	}
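One subtlety of the return value worth spelling out: per case 3 in the
kernel doc above, objects from KMALLOC_NORMAL caches are skipped to
avoid a circular dependency, so post-charging a plain kmalloc() object
succeeds as a no-op. A hedged sketch of what that means for callers
(buf is illustrative):

	void *buf = kmalloc(64, GFP_KERNEL);

	/* Returns true, but buf stays uncharged: only objects from
	 * dedicated caches and large, page-backed kmalloc allocations
	 * can be post-charged this way.
	 */
	kmem_cache_charge(buf, GFP_KERNEL);

Note also that inet_csk_accept() above does not check the return value
of kmem_cache_charge(): its gfp includes __GFP_NOFAIL, which is meant
to force the charge through rather than fail at the memcg limit.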