From patchwork Sat Aug 24 01:01:39 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shakeel Butt <shakeel.butt@linux.dev>
X-Patchwork-Id: 13776153
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
    Vlastimil Babka, David Rientjes, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Eric Dumazet, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Meta kernel team, cgroups@vger.kernel.org, netdev@vger.kernel.org
Subject: [RFC PATCH] memcg: add charging of already allocated slab objects
Date: Fri, 23 Aug 2024 18:01:39 -0700
Message-ID: <20240824010139.1293051-1-shakeel.butt@linux.dev>

At the moment, slab objects are charged to the memcg at allocation time.
However, there are cases where slab objects are allocated before the
right target memcg to charge them to is known. One such case is the
sockets for incoming network connections, which are allocated in softirq
context. A couple hundred thousand connections are very normal on a
large loaded server, and almost all of the sockets underlying those
connections get allocated in softirq context and thus are not charged to
any memcg. However, later at accept() time we do know the right target
memcg to charge. Let's add a new API to charge already allocated
objects, so we can have better accounting of the memory usage.

Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
This is an RFC to get early comments, and I still have to measure the
performance impact of this charging. In particular, I am planning to
test neper's tcp_crr with this patch.

 include/linux/slab.h            |  1 +
 mm/slub.c                       | 39 +++++++++++++++++++++++++++++++++
 net/ipv4/inet_connection_sock.c |  1 +
 3 files changed, 41 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 512e7c844b7f..a8b09b0ca066 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -547,6 +547,7 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 			   gfp_t gfpflags) __assume_slab_alignment __malloc;
 #define kmem_cache_alloc_lru(...)	alloc_hooks(kmem_cache_alloc_lru_noprof(__VA_ARGS__))
 
+bool kmem_cache_post_charge(void *objp, gfp_t gfpflags);
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 kmem_buckets *kmem_buckets_create(const char *name, slab_flags_t flags,
diff --git a/mm/slub.c b/mm/slub.c
index b6b947596e26..574122ad89b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2189,6 +2189,16 @@ void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 	__memcg_slab_free_hook(s, slab, p, objects, obj_exts);
 }
 
+static __fastpath_inline
+bool memcg_slab_post_charge(struct kmem_cache *s, void *p, gfp_t flags)
+{
+	if (likely(!memcg_kmem_online()))
+		return true;
+
+	return __memcg_slab_post_alloc_hook(s, NULL, flags, 1, &p);
+}
+
 #else /* CONFIG_MEMCG */
 static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
 					      struct list_lru *lru,
@@ -2202,6 +2212,13 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 					void **p, int objects)
 {
 }
+
+static inline bool memcg_slab_post_charge(struct kmem_cache *s,
+					  void *p,
+					  gfp_t flags)
+{
+	return true;
+}
 #endif /* CONFIG_MEMCG */
 
 #ifdef CONFIG_SLUB_RCU_DEBUG
@@ -4110,6 +4127,28 @@ void *kmem_cache_alloc_lru_noprof(struct kmem_cache *s, struct list_lru *lru,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru_noprof);
 
+bool kmem_cache_post_charge(void *objp, gfp_t gfpflags)
+{
+	struct folio *folio;
+	struct slab *slab;
+	struct kmem_cache *s;
+
+	folio = virt_to_folio(objp);
+	if (unlikely(!folio_test_slab(folio)))
+		return false;
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+
+	/* Ignore KMALLOC_NORMAL cache */
+	if (s->flags & SLAB_KMALLOC &&
+	    !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT)))
+		return true;
+
+	return memcg_slab_post_charge(s, objp, gfpflags);
+}
+EXPORT_SYMBOL(kmem_cache_post_charge);
+
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @s: The cache to allocate from.
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 64d07b842e73..f707bb76e24d 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -733,6 +733,7 @@ struct sock *inet_csk_accept(struct sock *sk, struct proto_accept_arg *arg)
 		if (amt)
 			mem_cgroup_charge_skmem(newsk->sk_memcg, amt,
 						GFP_KERNEL | __GFP_NOFAIL);
+		kmem_cache_post_charge(newsk, GFP_KERNEL | __GFP_NOFAIL);
 
 		release_sock(newsk);
 	}