From patchwork Fri Mar 1 17:07:08 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13578811
From: Vlastimil Babka
Date: Fri, 01 Mar 2024 18:07:08 +0100
Subject: [PATCH RFC 1/4] mm, slab: move memcg charging to post-alloc hook
Message-Id: <20240301-slab-memcg-v1-1-359328a46596@suse.cz>
References: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
In-Reply-To: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
    Alexander Viro, Christian Brauner, Jan Kara
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Vlastimil Babka

The MEMCG_KMEM integration with slab currently relies on two hooks
during allocation. memcg_slab_pre_alloc_hook() determines the objcg and
charges it, and memcg_slab_post_alloc_hook() assigns the objcg pointer
to the allocated object(s).

As Linus pointed out, this is unnecessarily complex. Failing to charge
due to memcg limits should be rare, so we can optimistically allocate
the object(s) and do the charging together with assigning the objcg
pointer in a single post_alloc hook. In the rare case the charging
fails, we can free the object(s) back.

This simplifies the code (no need to pass around the objcg pointer) and
potentially allows separating charging from allocation in cases where
it's common for the allocation to be freed immediately, so the memcg
handling overhead can be saved.

Suggested-by: Linus Torvalds
Link: https://lore.kernel.org/all/CAHk-=whYOOdM7jWy5jdrAm8LxcgCMFyk2bt8fYYvZzM4U-zAQA@mail.gmail.com/
Signed-off-by: Vlastimil Babka
Reviewed-by: Roman Gushchin
Reviewed-by: Chengming Zhou
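For readers following the flow described above, a minimal userspace model
of the "optimistically allocate, then charge, free back on rare failure"
shape this patch establishes. Every name in it is a stand-in (malloc()
for the slab fast path, model_charge_succeeds() for the memcg charge);
it is a sketch of the control flow, not the kernel code itself:

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for the memcg charge now done in the post-alloc hook. */
static bool model_charge_succeeds(void)
{
	/* Failing the charge due to memcg limits is the rare case. */
	return true;
}

/* Optimistically allocate first, charge after, free back on failure. */
static void *model_slab_alloc(size_t size)
{
	void *obj = malloc(size);

	if (!obj)
		return NULL;

	if (!model_charge_succeeds()) {
		/* The rare path: undo the optimistic allocation. */
		free(obj);
		return NULL;
	}

	return obj;
}

int main(void)
{
	void *obj = model_slab_alloc(64);

	free(obj);
	return 0;
}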
---
 mm/slub.c | 180 +++++++++++++++++++++++++++-----------------------------------
 1 file changed, 77 insertions(+), 103 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..7022a1246bab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1897,23 +1897,36 @@ static inline size_t obj_full_size(struct kmem_cache *s)
 	return s->size + sizeof(struct obj_cgroup *);
 }
 
-/*
- * Returns false if the allocation should fail.
- */
-static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
-					struct list_lru *lru,
-					struct obj_cgroup **objcgp,
-					size_t objects, gfp_t flags)
+static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
+					 struct list_lru *lru,
+					 gfp_t flags, size_t size,
+					 void **p)
 {
+	struct obj_cgroup *objcg;
+	struct slab *slab;
+	unsigned long off;
+	size_t i;
+
 	/*
 	 * The obtained objcg pointer is safe to use within the current scope,
 	 * defined by current task or set_active_memcg() pair.
 	 * obj_cgroup_get() is used to get a permanent reference.
 	 */
-	struct obj_cgroup *objcg = current_obj_cgroup();
+	objcg = current_obj_cgroup();
 	if (!objcg)
 		return true;
 
+	/*
+	 * slab_alloc_node() avoids the NULL check, so we might be called with a
+	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
+	 * the whole requested size.
+	 * return success as there's nothing to free back
+	 */
+	if (unlikely(*p == NULL))
+		return true;
+
+	flags &= gfp_allowed_mask;
+
 	if (lru) {
 		int ret;
 		struct mem_cgroup *memcg;
@@ -1926,71 +1939,51 @@ static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
 		return false;
 	}
 
-	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s)))
+	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
 		return false;
 
-	*objcgp = objcg;
+	for (i = 0; i < size; i++) {
+		slab = virt_to_slab(p[i]);
+
+		if (!slab_objcgs(slab) &&
+		    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
+			obj_cgroup_uncharge(objcg, obj_full_size(s));
+			continue;
+		}
+
+		off = obj_to_index(s, slab, p[i]);
+		obj_cgroup_get(objcg);
+		slab_objcgs(slab)[off] = objcg;
+		mod_objcg_state(objcg, slab_pgdat(slab),
+				cache_vmstat_idx(s), obj_full_size(s));
+	}
+
 	return true;
 }
 
-/*
- * Returns false if the allocation should fail.
- */
+static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
+
 static __fastpath_inline
-bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
-			       struct obj_cgroup **objcgp, size_t objects,
-			       gfp_t flags)
+bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				gfp_t flags, size_t size, void **p)
 {
-	if (!memcg_kmem_online())
+	if (likely(!memcg_kmem_online()))
 		return true;
 
 	if (likely(!(flags & __GFP_ACCOUNT) && !(s->flags & SLAB_ACCOUNT)))
 		return true;
 
-	return likely(__memcg_slab_pre_alloc_hook(s, lru, objcgp, objects,
-						  flags));
-}
-
-static void __memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					 struct obj_cgroup *objcg,
-					 gfp_t flags, size_t size,
-					 void **p)
-{
-	struct slab *slab;
-	unsigned long off;
-	size_t i;
-
-	flags &= gfp_allowed_mask;
-
-	for (i = 0; i < size; i++) {
-		if (likely(p[i])) {
-			slab = virt_to_slab(p[i]);
-
-			if (!slab_objcgs(slab) &&
-			    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
-				obj_cgroup_uncharge(objcg, obj_full_size(s));
-				continue;
-			}
+	if (likely(__memcg_slab_post_alloc_hook(s, lru, flags, size, p)))
+		return true;
 
-			off = obj_to_index(s, slab, p[i]);
-			obj_cgroup_get(objcg);
-			slab_objcgs(slab)[off] = objcg;
-			mod_objcg_state(objcg, slab_pgdat(slab),
-					cache_vmstat_idx(s), obj_full_size(s));
-		} else {
-			obj_cgroup_uncharge(objcg, obj_full_size(s));
-		}
+	if (likely(size == 1)) {
+		memcg_alloc_abort_single(s, *p);
+		*p = NULL;
+	} else {
+		kmem_cache_free_bulk(s, size, p);
 	}
-}
-
-static __fastpath_inline
-void memcg_slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
-				gfp_t flags, size_t size, void **p)
-{
-	if (likely(!memcg_kmem_online() || !objcg))
-		return;
-	return __memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
+	return false;
 }
 
 static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
@@ -2029,14 +2022,6 @@ void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 
 	__memcg_slab_free_hook(s, slab, p, objects, objcgs);
 }
-
-static inline
-void memcg_slab_alloc_error_hook(struct kmem_cache *s, int objects,
-				 struct obj_cgroup *objcg)
-{
-	if (objcg)
-		obj_cgroup_uncharge(objcg, objects * obj_full_size(s));
-}
 #else /* CONFIG_MEMCG_KMEM */
 static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr)
 {
@@ -2047,31 +2032,18 @@ static inline void memcg_free_slab_cgroups(struct slab *slab)
 {
 }
 
-static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s,
-					     struct list_lru *lru,
-					     struct obj_cgroup **objcgp,
-					     size_t objects, gfp_t flags)
-{
-	return true;
-}
-
-static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					      struct obj_cgroup *objcg,
+static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
+					      struct list_lru *lru,
 					      gfp_t flags, size_t size,
 					      void **p)
 {
+	return true;
 }
 
 static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 					void **p, int objects)
 {
 }
-
-static inline
-void memcg_slab_alloc_error_hook(struct kmem_cache *s, int objects,
-				 struct obj_cgroup *objcg)
-{
-}
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
@@ -3751,10 +3723,7 @@ noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
 
 static __fastpath_inline
-struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
-				       struct list_lru *lru,
-				       struct obj_cgroup **objcgp,
-				       size_t size, gfp_t flags)
+struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
 {
 	flags &= gfp_allowed_mask;
 
@@ -3763,14 +3732,11 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 	if (unlikely(should_failslab(s, flags)))
 		return NULL;
 
-	if (unlikely(!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags)))
-		return NULL;
-
 	return s;
 }
 
 static __fastpath_inline
-void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
+bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 			  gfp_t flags, size_t size, void **p, bool init,
 			  unsigned int orig_size)
 {
@@ -3819,7 +3785,7 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
 		kmsan_slab_alloc(s, p[i], init_flags);
 	}
 
-	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
+	return memcg_slab_post_alloc_hook(s, lru, flags, size, p);
 }
 
 /*
@@ -3836,10 +3802,9 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 			     gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
 {
 	void *object;
-	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
-	s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags);
+	s = slab_pre_alloc_hook(s, gfpflags);
 	if (unlikely(!s))
 		return NULL;
 
@@ -3856,8 +3821,10 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	/*
 	 * When init equals 'true', like for kzalloc() family, only
 	 * @orig_size bytes might be zeroed instead of s->object_size
+	 * In case this fails due to memcg_slab_post_alloc_hook(),
+	 * object is set to NULL
 	 */
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
+	slab_post_alloc_hook(s, lru, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -4300,6 +4267,16 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	do_slab_free(s, slab, object, object, 1, addr);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
+/* Do not inline the rare memcg charging failed path into the allocation path */
+static noinline
+void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
+{
+	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
+		do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
+}
+#endif
+
 static __fastpath_inline
 void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
 		    void *tail, void **p, int cnt, unsigned long addr)
@@ -4635,29 +4612,26 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			  void **p)
 {
 	int i;
-	struct obj_cgroup *objcg = NULL;
 
 	if (!size)
 		return 0;
 
-	/* memcg and kmem_cache debug support */
-	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
+	s = slab_pre_alloc_hook(s, flags);
 	if (unlikely(!s))
 		return 0;
 
 	i = __kmem_cache_alloc_bulk(s, flags, size, p);
+	if (unlikely(i == 0))
+		return 0;
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
-	if (likely(i != 0)) {
-		slab_post_alloc_hook(s, objcg, flags, size, p,
-				     slab_want_init_on_alloc(flags, s), s->object_size);
-	} else {
-		memcg_slab_alloc_error_hook(s, size, objcg);
+	if (unlikely(!slab_post_alloc_hook(s, NULL, flags, size, p,
+					   slab_want_init_on_alloc(flags, s), s->object_size))) {
+		return 0;
 	}
-
 	return i;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
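A hypothetical caller, to illustrate the contract this patch preserves
for kmem_cache_alloc_bulk(): a zero return now also covers the case
where the objects were allocated but the memcg charge failed and they
were already freed back, so existing error handling keeps working. Only
kmem_cache_alloc_bulk() is the real API here; fill_batch() and my_cache
are made-up names:

static int fill_batch(struct kmem_cache *my_cache, void **objs, size_t n)
{
	/*
	 * Returns n on success and 0 on failure; after this patch the
	 * failure case includes "allocated, but the memcg charge failed",
	 * in which case the hook has already freed the objects back.
	 */
	if (kmem_cache_alloc_bulk(my_cache, GFP_KERNEL, n, objs) == 0)
		return -ENOMEM;

	return 0;
}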
From patchwork Fri Mar 1 17:07:09 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13578808
From: Vlastimil Babka
Date: Fri, 01 Mar 2024 18:07:09 +0100
Subject: [PATCH RFC 2/4] mm, slab: move slab_memcg hooks to mm/memcontrol.c
Message-Id: <20240301-slab-memcg-v1-2-359328a46596@suse.cz>
References: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
In-Reply-To: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
    Alexander Viro, Christian Brauner, Jan Kara
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Vlastimil Babka

The hooks make multiple calls to functions in mm/memcontrol.c, including
to current_obj_cgroup(), which is marked __always_inline. It might be
faster to make a single call to the hook in mm/memcontrol.c instead.

The hooks also use almost nothing from mm/slub.c, so obj_full_size() can
move to mm/memcontrol.c with the hooks, and cache_vmstat_idx() to the
internal mm/slab.h.

Signed-off-by: Vlastimil Babka
Reviewed-by: Roman Gushchin
---
 mm/memcontrol.c |  90 ++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h       |  10 ++++++
 mm/slub.c       | 100 --------------------------------------------------------
 3 files changed, 100 insertions(+), 100 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e4c8735e7c85..37ee9356a26c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3575,6 +3575,96 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size, true);
 }
 
+static inline size_t obj_full_size(struct kmem_cache *s)
+{
+	/*
+	 * For each accounted object there is an extra space which is used
+	 * to store obj_cgroup membership. Charge it too.
+	 */
+	return s->size + sizeof(struct obj_cgroup *);
+}
+
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				  gfp_t flags, size_t size, void **p)
+{
+	struct obj_cgroup *objcg;
+	struct slab *slab;
+	unsigned long off;
+	size_t i;
+
+	/*
+	 * The obtained objcg pointer is safe to use within the current scope,
+	 * defined by current task or set_active_memcg() pair.
+	 * obj_cgroup_get() is used to get a permanent reference.
+	 */
+	objcg = current_obj_cgroup();
+	if (!objcg)
+		return true;
+
+	/*
+	 * slab_alloc_node() avoids the NULL check, so we might be called with a
+	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
+	 * the whole requested size.
+	 * return success as there's nothing to free back
+	 */
+	if (unlikely(*p == NULL))
+		return true;
+
+	flags &= gfp_allowed_mask;
+
+	if (lru) {
+		int ret;
+		struct mem_cgroup *memcg;
+
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		ret = memcg_list_lru_alloc(memcg, lru, flags);
+		css_put(&memcg->css);
+
+		if (ret)
+			return false;
+	}
+
+	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
+		return false;
+
+	for (i = 0; i < size; i++) {
+		slab = virt_to_slab(p[i]);
+
+		if (!slab_objcgs(slab) &&
+		    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
+			obj_cgroup_uncharge(objcg, obj_full_size(s));
+			continue;
+		}
+
+		off = obj_to_index(s, slab, p[i]);
+		obj_cgroup_get(objcg);
+		slab_objcgs(slab)[off] = objcg;
+		mod_objcg_state(objcg, slab_pgdat(slab),
+				cache_vmstat_idx(s), obj_full_size(s));
+	}
+
+	return true;
+}
+
+void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+			    void **p, int objects, struct obj_cgroup **objcgs)
+{
+	for (int i = 0; i < objects; i++) {
+		struct obj_cgroup *objcg;
+		unsigned int off;
+
+		off = obj_to_index(s, slab, p[i]);
+		objcg = objcgs[off];
+		if (!objcg)
+			continue;
+
+		objcgs[off] = NULL;
+		obj_cgroup_uncharge(objcg, obj_full_size(s));
+		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
+				-obj_full_size(s));
+		obj_cgroup_put(objcg);
+	}
+}
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..3f170673fa55 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -541,6 +541,12 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
 	return false;
 }
 
+static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
+{
+	return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
+		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 /*
  * slab_objcgs - get the object cgroups vector associated with a slab
@@ -564,6 +570,10 @@ int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
 			     gfp_t gfp, bool new_slab);
 void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 		     enum node_stat_item idx, int nr);
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				  gfp_t flags, size_t size, void **p);
+void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+			    void **p, int objects, struct obj_cgroup **objcgs);
 #else /* CONFIG_MEMCG_KMEM */
 static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
 {
diff --git a/mm/slub.c b/mm/slub.c
index 7022a1246bab..64da169d672a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1875,12 +1875,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 #endif
 #endif /* CONFIG_SLUB_DEBUG */
 
-static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
-{
-	return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
-		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
-}
-
 #ifdef CONFIG_MEMCG_KMEM
 static inline void memcg_free_slab_cgroups(struct slab *slab)
 {
@@ -1888,79 +1882,6 @@ static inline void memcg_free_slab_cgroups(struct slab *slab)
 	slab->memcg_data = 0;
 }
 
-static inline size_t obj_full_size(struct kmem_cache *s)
-{
-	/*
-	 * For each accounted object there is an extra space which is used
-	 * to store obj_cgroup membership. Charge it too.
-	 */
-	return s->size + sizeof(struct obj_cgroup *);
-}
-
-static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					 struct list_lru *lru,
-					 gfp_t flags, size_t size,
-					 void **p)
-{
-	struct obj_cgroup *objcg;
-	struct slab *slab;
-	unsigned long off;
-	size_t i;
-
-	/*
-	 * The obtained objcg pointer is safe to use within the current scope,
-	 * defined by current task or set_active_memcg() pair.
-	 * obj_cgroup_get() is used to get a permanent reference.
-	 */
-	objcg = current_obj_cgroup();
-	if (!objcg)
-		return true;
-
-	/*
-	 * slab_alloc_node() avoids the NULL check, so we might be called with a
-	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
-	 * the whole requested size.
-	 * return success as there's nothing to free back
-	 */
-	if (unlikely(*p == NULL))
-		return true;
-
-	flags &= gfp_allowed_mask;
-
-	if (lru) {
-		int ret;
-		struct mem_cgroup *memcg;
-
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		ret = memcg_list_lru_alloc(memcg, lru, flags);
-		css_put(&memcg->css);
-
-		if (ret)
-			return false;
-	}
-
-	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
-		return false;
-
-	for (i = 0; i < size; i++) {
-		slab = virt_to_slab(p[i]);
-
-		if (!slab_objcgs(slab) &&
-		    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
-			obj_cgroup_uncharge(objcg, obj_full_size(s));
-			continue;
-		}
-
-		off = obj_to_index(s, slab, p[i]);
-		obj_cgroup_get(objcg);
-		slab_objcgs(slab)[off] = objcg;
-		mod_objcg_state(objcg, slab_pgdat(slab),
-				cache_vmstat_idx(s), obj_full_size(s));
-	}
-
-	return true;
-}
-
 static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
 
 static __fastpath_inline
@@ -1986,27 +1907,6 @@ bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	return false;
 }
 
-static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
-				   void **p, int objects,
-				   struct obj_cgroup **objcgs)
-{
-	for (int i = 0; i < objects; i++) {
-		struct obj_cgroup *objcg;
-		unsigned int off;
-
-		off = obj_to_index(s, slab, p[i]);
-		objcg = objcgs[off];
-		if (!objcg)
-			continue;
-
-		objcgs[off] = NULL;
-		obj_cgroup_uncharge(objcg, obj_full_size(s));
-		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
-				-obj_full_size(s));
-		obj_cgroup_put(objcg);
-	}
-}
-
 static __fastpath_inline
 void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 			  int objects)
From patchwork Fri Mar 1 17:07:10 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13578810
From: Vlastimil Babka
Date: Fri, 01 Mar 2024 18:07:10 +0100
Subject: [PATCH RFC 3/4] mm, slab: introduce kmem_cache_charge()
Message-Id: <20240301-slab-memcg-v1-3-359328a46596@suse.cz>
References: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
In-Reply-To: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
    Alexander Viro, Christian Brauner, Jan Kara
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Vlastimil Babka
As suggested by Linus, introduce a slab API function to memcg-charge an
object that was previously allocated without __GFP_ACCOUNT and from a
cache that's not SLAB_ACCOUNT. This may be useful when it's likely the
object will be freed soon, so the charging/uncharging overhead can be
avoided. In case kmem_cache_charge() is called on an already-charged
object, it's a no-op.

Suggested-by: Linus Torvalds
Link: https://lore.kernel.org/all/CAHk-=whYOOdM7jWy5jdrAm8LxcgCMFyk2bt8fYYvZzM4U-zAQA@mail.gmail.com/
Signed-off-by: Vlastimil Babka
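A hypothetical usage sketch of the API introduced below. Only
kmem_cache_alloc(), kmem_cache_free() and kmem_cache_charge() are the
real calls; get_accountable_obj() and my_cache are made-up names, and
the long_lived condition stands in for whatever criterion a caller
would use:

static void *get_accountable_obj(struct kmem_cache *my_cache, bool long_lived)
{
	/*
	 * Allocate with no memcg accounting: no __GFP_ACCOUNT, and the
	 * cache is assumed not to be SLAB_ACCOUNT.
	 */
	void *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);

	if (!obj)
		return NULL;

	/*
	 * Charge only when the object turns out to be worth accounting;
	 * GFP_KERNEL_ACCOUNT supplies the __GFP_ACCOUNT that makes the
	 * deferred charge take effect in the post-alloc hook.
	 */
	if (long_lived && kmem_cache_charge(my_cache, GFP_KERNEL_ACCOUNT, obj)) {
		kmem_cache_free(my_cache, obj);
		return NULL;
	}

	return obj;
}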
---
 include/linux/slab.h | 10 ++++++++++
 mm/slub.c            | 29 +++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index b5f5ee8308d0..0c3acb2fa3e6 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -491,6 +491,16 @@ void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) __assume_slab_ali
 void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 			   gfp_t gfpflags) __assume_slab_alignment __malloc;
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
+#ifdef CONFIG_MEMCG_KMEM
+int kmem_cache_charge(struct kmem_cache *s, gfp_t flags, void *objp);
+#else
+static inline int
+kmem_cache_charge(struct kmem_cache *s, gfp_t flags, void *objp)
+{
+	return 0;
+}
+#endif
+
 /*
  * Bulk allocation and freeing operations. These are accelerated in an
diff --git a/mm/slub.c b/mm/slub.c
index 64da169d672a..72b61b379ba1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4241,6 +4241,35 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
+#ifdef CONFIG_MEMCG_KMEM
+int kmem_cache_charge(struct kmem_cache *s, gfp_t flags, void *x)
+{
+	struct obj_cgroup **objcg;
+	struct slab *slab;
+
+	s = cache_from_obj(s, x);
+	if (!s)
+		return -EINVAL;
+
+	if (likely(!memcg_kmem_online()))
+		return 0;
+
+	/* was it already accounted? */
+	slab = virt_to_slab(x);
+	if ((objcg = slab_objcgs(slab))) {
+		unsigned int off = obj_to_index(s, slab, x);
+
+		if (objcg[off])
+			return 0;
+	}
+
+	if (!memcg_slab_post_alloc_hook(s, NULL, flags, 1, &x))
+		return -ENOMEM;
+
+	return 0;
+}
+#endif
+
 static void free_large_kmalloc(struct folio *folio, void *object)
 {
 	unsigned int order = folio_order(folio);
From patchwork Fri Mar 1 17:07:11 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13578807
From: Vlastimil Babka
Date: Fri, 01 Mar 2024 18:07:11 +0100
Subject: [PATCH RFC 4/4] UNFINISHED mm, fs: use kmem_cache_charge() in path_openat()
Message-Id: <20240301-slab-memcg-v1-4-359328a46596@suse.cz>
References: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
In-Reply-To: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
    Alexander Viro, Christian Brauner, Jan Kara
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Vlastimil Babka

This is just an example of using the kmem_cache_charge() API. I think
it's placed in a spot applicable to Linus's example [1], although he
mentions do_dentry_open() - I have followed from strace() showing
openat(2) to path_openat() doing the alloc_empty_file().

The idea is that filp_cachep stops being SLAB_ACCOUNT. Allocations that
want to be accounted immediately can use GFP_KERNEL_ACCOUNT. I did that
in alloc_empty_file_noaccount() (despite the seemingly contradictory
name - the "noaccount" refers to something else, right?) as IIUC it's
about kernel-internal opens. alloc_empty_file() no longer does the
accounting, so I added kmem_account_file(), which calls the new
kmem_cache_charge() API.

Why is this unfinished:

- there are other callers of alloc_empty_file() which I didn't adjust,
  so they simply became memcg-unaccounted. I haven't investigated for
  which of them it would also make sense to separate the allocation and
  accounting. Maybe alloc_empty_file() would need to get a parameter to
  control this.

- I don't know how to properly unwind the accounting failure case. It
  seems like a new case, because when we succeed the open, there's no
  further error path, at least in path_openat().

Basically it boils down to me being unfamiliar with VFS, so this
depends on whether the approach is deemed useful enough to finish.
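Condensed, the shape the diff below gives the open fast path is roughly
the following (a sketch, not the literal code; the failed-charge unwind
is the open question mentioned above):

	/* no memcg charge at allocation time anymore */
	file = alloc_empty_file(open_flag, current_cred());

	/* ... path walk and the actual open ... */

	if (likely(file->f_mode & FMODE_OPENED)) {
		/* charge only files that survive to a successful open */
		kmem_account_file(file);
		return file;
	}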
[1] https://lore.kernel.org/all/CAHk-=whYOOdM7jWy5jdrAm8LxcgCMFyk2bt8fYYvZzM4U-zAQA@mail.gmail.com/

Not-signed-off-by: Vlastimil Babka
---
 fs/file_table.c | 9 +++++++--
 fs/internal.h   | 1 +
 fs/namei.c      | 4 +++-
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/fs/file_table.c b/fs/file_table.c
index b991f90571b4..6401b6f175ae 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -223,6 +223,11 @@ struct file *alloc_empty_file(int flags, const struct cred *cred)
 	return ERR_PTR(-ENFILE);
 }
 
+int kmem_account_file(struct file *f)
+{
+	return kmem_cache_charge(filp_cachep, GFP_KERNEL_ACCOUNT, f);
+}
+
 /*
  * Variant of alloc_empty_file() that doesn't check and modify nr_files.
  *
@@ -234,7 +239,7 @@ struct file *alloc_empty_file_noaccount(int flags, const struct cred *cred)
 	struct file *f;
 	int error;
 
-	f = kmem_cache_zalloc(filp_cachep, GFP_KERNEL);
+	f = kmem_cache_zalloc(filp_cachep, GFP_KERNEL_ACCOUNT);
 	if (unlikely(!f))
 		return ERR_PTR(-ENOMEM);
 
@@ -468,7 +473,7 @@ void __init files_init(void)
 {
 	filp_cachep = kmem_cache_create("filp", sizeof(struct file), 0,
 				SLAB_TYPESAFE_BY_RCU | SLAB_HWCACHE_ALIGN |
-				SLAB_PANIC | SLAB_ACCOUNT, NULL);
+				SLAB_PANIC, NULL);
 	percpu_counter_init(&nr_files, 0, GFP_KERNEL);
 }
 
diff --git a/fs/internal.h b/fs/internal.h
index b67406435fc0..06ada11b71d0 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -96,6 +96,7 @@ extern void chroot_fs_refs(const struct path *, const struct path *);
 struct file *alloc_empty_file(int flags, const struct cred *cred);
 struct file *alloc_empty_file_noaccount(int flags, const struct cred *cred);
 struct file *alloc_empty_backing_file(int flags, const struct cred *cred);
+int kmem_account_file(struct file *file);
 
 static inline void file_put_write_access(struct file *file)
 {
diff --git a/fs/namei.c b/fs/namei.c
index 4e0de939fea1..fcf3f3fcd059 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -3799,8 +3799,10 @@ static struct file *path_openat(struct nameidata *nd,
 		terminate_walk(nd);
 	}
 	if (likely(!error)) {
-		if (likely(file->f_mode & FMODE_OPENED))
+		if (likely(file->f_mode & FMODE_OPENED)) {
+			kmem_account_file(file);
 			return file;
+		}
 		WARN_ON(1);
 		error = -EINVAL;
 	}