From patchwork Fri Mar 1 17:07:09 2024
From: Vlastimil Babka <vbabka@suse.cz>
Date: Fri, 01 Mar 2024 18:07:09 +0100
Subject: [PATCH RFC 2/4] mm, slab: move slab_memcg hooks to mm/memcontrol.c
Message-Id: <20240301-slab-memcg-v1-2-359328a46596@suse.cz>
References: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
In-Reply-To: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
    Alexander Viro, Christian Brauner, Jan Kara
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Vlastimil Babka <vbabka@suse.cz>
X-Mailer: b4 0.13.0

The hooks make multiple calls to functions in mm/memcontrol.c, including
to the current_obj_cgroup() marked __always_inline. It might be faster
to make a single call to the hook in mm/memcontrol.c instead. The hooks
also use almost nothing from mm/slub.c. obj_full_size() can move to
mm/memcontrol.c with the hooks, and cache_vmstat_idx() to the internal
mm/slab.h.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Roman Gushchin
---
 mm/memcontrol.c |  90 ++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h       |  10 ++++++
 mm/slub.c       | 100 --------------------------------------------------------
 3 files changed, 100 insertions(+), 100 deletions(-)
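
A note for readers on the accounting scheme the moved hooks implement: the
sketch below is a stand-alone user-space model (struct kmem_cache_model and
the main() driver are made-up names for illustration, not kernel code) of
what obj_full_size() charges per object, namely the object size plus one
obj_cgroup pointer slot in the slab's objcg vector:

	/* User-space model of obj_full_size(); compile with any C compiler. */
	#include <stdio.h>
	#include <stddef.h>

	struct obj_cgroup;		/* opaque, as in the kernel */

	struct kmem_cache_model {	/* stand-in for struct kmem_cache */
		size_t size;		/* per-object size including metadata */
	};

	/*
	 * Each accounted object is charged its own size plus the space for
	 * its obj_cgroup membership pointer, mirroring obj_full_size() below.
	 */
	static size_t obj_full_size(const struct kmem_cache_model *s)
	{
		return s->size + sizeof(struct obj_cgroup *);
	}

	int main(void)
	{
		struct kmem_cache_model s = { .size = 256 };
		size_t bulk = 8;	/* e.g. one kmem_cache_alloc_bulk() call */

		printf("charge per object: %zu bytes\n", obj_full_size(&s));
		printf("bulk charge:       %zu bytes\n", bulk * obj_full_size(&s));
		return 0;
	}

For a bulk allocation of N objects the charge is therefore
N * obj_full_size(s), which is what the post-alloc hook passes to
obj_cgroup_charge() in the patch below.
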
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e4c8735e7c85..37ee9356a26c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3575,6 +3575,96 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size, true);
 }
 
+static inline size_t obj_full_size(struct kmem_cache *s)
+{
+	/*
+	 * For each accounted object there is an extra space which is used
+	 * to store obj_cgroup membership. Charge it too.
+	 */
+	return s->size + sizeof(struct obj_cgroup *);
+}
+
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				  gfp_t flags, size_t size, void **p)
+{
+	struct obj_cgroup *objcg;
+	struct slab *slab;
+	unsigned long off;
+	size_t i;
+
+	/*
+	 * The obtained objcg pointer is safe to use within the current scope,
+	 * defined by current task or set_active_memcg() pair.
+	 * obj_cgroup_get() is used to get a permanent reference.
+	 */
+	objcg = current_obj_cgroup();
+	if (!objcg)
+		return true;
+
+	/*
+	 * slab_alloc_node() avoids the NULL check, so we might be called with a
+	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
+	 * the whole requested size.
+	 * return success as there's nothing to free back
+	 */
+	if (unlikely(*p == NULL))
+		return true;
+
+	flags &= gfp_allowed_mask;
+
+	if (lru) {
+		int ret;
+		struct mem_cgroup *memcg;
+
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		ret = memcg_list_lru_alloc(memcg, lru, flags);
+		css_put(&memcg->css);
+
+		if (ret)
+			return false;
+	}
+
+	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
+		return false;
+
+	for (i = 0; i < size; i++) {
+		slab = virt_to_slab(p[i]);
+
+		if (!slab_objcgs(slab) &&
+		    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
+			obj_cgroup_uncharge(objcg, obj_full_size(s));
+			continue;
+		}
+
+		off = obj_to_index(s, slab, p[i]);
+		obj_cgroup_get(objcg);
+		slab_objcgs(slab)[off] = objcg;
+		mod_objcg_state(objcg, slab_pgdat(slab),
+				cache_vmstat_idx(s), obj_full_size(s));
+	}
+
+	return true;
+}
+
+void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+			    void **p, int objects, struct obj_cgroup **objcgs)
+{
+	for (int i = 0; i < objects; i++) {
+		struct obj_cgroup *objcg;
+		unsigned int off;
+
+		off = obj_to_index(s, slab, p[i]);
+		objcg = objcgs[off];
+		if (!objcg)
+			continue;
+
+		objcgs[off] = NULL;
+		obj_cgroup_uncharge(objcg, obj_full_size(s));
+		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
+				-obj_full_size(s));
+		obj_cgroup_put(objcg);
+	}
+}
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..3f170673fa55 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -541,6 +541,12 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla
 	return false;
 }
 
+static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
+{
+	return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
+		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 /*
  * slab_objcgs - get the object cgroups vector associated with a slab
@@ -564,6 +570,10 @@ int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
 			     gfp_t gfp, bool new_slab);
 void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 		     enum node_stat_item idx, int nr);
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				  gfp_t flags, size_t size, void **p);
+void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+			    void **p, int objects, struct obj_cgroup **objcgs);
 #else /* CONFIG_MEMCG_KMEM */
 static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
 {
diff --git a/mm/slub.c b/mm/slub.c
index 7022a1246bab..64da169d672a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1875,12 +1875,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 #endif
 #endif /* CONFIG_SLUB_DEBUG */
 
-static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
-{
-	return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
-		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
-}
-
 #ifdef CONFIG_MEMCG_KMEM
 static inline void memcg_free_slab_cgroups(struct slab *slab)
 {
@@ -1888,79 +1882,6 @@ static inline void memcg_free_slab_cgroups(struct slab *slab)
 	slab->memcg_data = 0;
 }
 
-static inline size_t obj_full_size(struct kmem_cache *s)
-{
-	/*
-	 * For each accounted object there is an extra space which is used
-	 * to store obj_cgroup membership. Charge it too.
-	 */
-	return s->size + sizeof(struct obj_cgroup *);
-}
-
-static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					 struct list_lru *lru,
-					 gfp_t flags, size_t size,
-					 void **p)
-{
-	struct obj_cgroup *objcg;
-	struct slab *slab;
-	unsigned long off;
-	size_t i;
-
-	/*
-	 * The obtained objcg pointer is safe to use within the current scope,
-	 * defined by current task or set_active_memcg() pair.
-	 * obj_cgroup_get() is used to get a permanent reference.
-	 */
-	objcg = current_obj_cgroup();
-	if (!objcg)
-		return true;
-
-	/*
-	 * slab_alloc_node() avoids the NULL check, so we might be called with a
-	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
-	 * the whole requested size.
-	 * return success as there's nothing to free back
-	 */
-	if (unlikely(*p == NULL))
-		return true;
-
-	flags &= gfp_allowed_mask;
-
-	if (lru) {
-		int ret;
-		struct mem_cgroup *memcg;
-
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		ret = memcg_list_lru_alloc(memcg, lru, flags);
-		css_put(&memcg->css);
-
-		if (ret)
-			return false;
-	}
-
-	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
-		return false;
-
-	for (i = 0; i < size; i++) {
-		slab = virt_to_slab(p[i]);
-
-		if (!slab_objcgs(slab) &&
-		    memcg_alloc_slab_cgroups(slab, s, flags, false)) {
-			obj_cgroup_uncharge(objcg, obj_full_size(s));
-			continue;
-		}
-
-		off = obj_to_index(s, slab, p[i]);
-		obj_cgroup_get(objcg);
-		slab_objcgs(slab)[off] = objcg;
-		mod_objcg_state(objcg, slab_pgdat(slab),
-				cache_vmstat_idx(s), obj_full_size(s));
-	}
-
-	return true;
-}
-
 static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
 
 static __fastpath_inline
@@ -1986,27 +1907,6 @@ bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	return false;
 }
 
-static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
-				   void **p, int objects,
-				   struct obj_cgroup **objcgs)
-{
-	for (int i = 0; i < objects; i++) {
-		struct obj_cgroup *objcg;
-		unsigned int off;
-
-		off = obj_to_index(s, slab, p[i]);
-		objcg = objcgs[off];
-		if (!objcg)
-			continue;
-
-		objcgs[off] = NULL;
-		obj_cgroup_uncharge(objcg, obj_full_size(s));
-		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
-				-obj_full_size(s));
-		obj_cgroup_put(objcg);
-	}
-}
-
 static __fastpath_inline
 void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 			  int objects)
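
For readers unfamiliar with the objcg vector, here is a small user-space
sketch (all names hypothetical; struct slab_model and free_hook() are made
up for illustration and are not kernel code) of the bookkeeping that
__memcg_slab_free_hook() performs: each slab carries a vector of obj_cgroup
pointers indexed by object position, and freeing an accounted object clears
its slot and uncharges obj_full_size():

	#include <stdio.h>
	#include <stddef.h>

	#define OBJS_PER_SLAB	4
	#define OBJ_SIZE	256

	struct obj_cgroup { long charged; };	/* toy stand-in */

	struct slab_model {
		char objects[OBJS_PER_SLAB][OBJ_SIZE];	 /* object storage */
		struct obj_cgroup *objcgs[OBJS_PER_SLAB]; /* the objcg vector */
	};

	/* Models obj_to_index(): offset within the slab / object size. */
	static unsigned int obj_to_index(const struct slab_model *slab,
					 const void *p)
	{
		return (unsigned int)(((const char *)p -
				       &slab->objects[0][0]) / OBJ_SIZE);
	}

	static void free_hook(struct slab_model *slab, void *p, size_t full_size)
	{
		unsigned int off = obj_to_index(slab, p);
		struct obj_cgroup *objcg = slab->objcgs[off];

		if (!objcg)		/* unaccounted object: nothing to do */
			return;
		slab->objcgs[off] = NULL;
		objcg->charged -= (long)full_size; /* models obj_cgroup_uncharge() */
	}

	int main(void)
	{
		struct slab_model slab = { 0 };
		size_t full = OBJ_SIZE + sizeof(struct obj_cgroup *);
		struct obj_cgroup cg = { .charged = (long)full };

		slab.objcgs[1] = &cg;	/* object #1 was charged at allocation */
		free_hook(&slab, &slab.objects[1][0], full);
		printf("remaining charge: %ld\n", cg.charged);	/* prints 0 */
		return 0;
	}

The real hook does the same walk per freed object, additionally adjusting
the NR_SLAB_{UN,}RECLAIMABLE_B vmstat counter via mod_objcg_state() and
dropping the objcg reference, as the patch above shows.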