From patchwork Tue Oct 31 14:07:34 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13441576
From: chengming.zhou@linux.dev
To: vbabka@suse.cz, cl@linux.com, penberg@kernel.org, willy@infradead.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v4 2/9] slub: Change get_partial() interfaces to return slab
Date: Tue, 31 Oct 2023 14:07:34 +0000
Message-Id: <20231031140741.79387-3-chengming.zhou@linux.dev>
In-Reply-To: <20231031140741.79387-1-chengming.zhou@linux.dev>
References: <20231031140741.79387-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

We need all get_partial() related interfaces to return a slab, instead of
returning the freelist (or object). Use partial_context.object to pass the
freelist (or object) back to the caller for now. This patch shouldn't have
any functional changes.
Suggested-by: Vlastimil Babka
Signed-off-by: Chengming Zhou
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 63 +++++++++++++++++++++++++++++--------------------------
 1 file changed, 33 insertions(+), 30 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0b0fdc8c189f..03384cd965c5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -204,9 +204,9 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);

 /* Structure holding parameters for get_partial() call chain */
 struct partial_context {
-	struct slab **slab;
 	gfp_t flags;
 	unsigned int orig_size;
+	void *object;
 };

 static inline bool kmem_cache_debug(struct kmem_cache *s)
@@ -2269,10 +2269,11 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags);
 /*
  * Try to allocate a partial slab from a specific node.
  */
-static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
-			      struct partial_context *pc)
+static struct slab *get_partial_node(struct kmem_cache *s,
+				     struct kmem_cache_node *n,
+				     struct partial_context *pc)
 {
-	struct slab *slab, *slab2;
+	struct slab *slab, *slab2, *partial = NULL;
 	void *object = NULL;
 	unsigned long flags;
 	unsigned int partial_slabs = 0;
@@ -2288,27 +2289,28 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,

 	spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) {
-		void *t;
-
 		if (!pfmemalloc_match(slab, pc->flags))
 			continue;

 		if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
 			object = alloc_single_from_partial(s, n, slab,
 							pc->orig_size);
-			if (object)
+			if (object) {
+				partial = slab;
+				pc->object = object;
 				break;
+			}
 			continue;
 		}

-		t = acquire_slab(s, n, slab, object == NULL);
-		if (!t)
+		object = acquire_slab(s, n, slab, object == NULL);
+		if (!object)
 			break;

-		if (!object) {
-			*pc->slab = slab;
+		if (!partial) {
+			partial = slab;
+			pc->object = object;
 			stat(s, ALLOC_FROM_PARTIAL);
-			object = t;
 		} else {
 			put_cpu_partial(s, slab, 0);
 			stat(s, CPU_PARTIAL_NODE);
@@ -2324,20 +2326,21 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
-	return object;
+	return partial;
 }

 /*
  * Get a slab from somewhere. Search in increasing NUMA distances.
  */
-static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
+static struct slab *get_any_partial(struct kmem_cache *s,
+				    struct partial_context *pc)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
 	struct zoneref *z;
 	struct zone *zone;
 	enum zone_type highest_zoneidx = gfp_zone(pc->flags);
-	void *object;
+	struct slab *slab;
 	unsigned int cpuset_mems_cookie;

 	/*
@@ -2372,8 +2375,8 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)

 			if (n && cpuset_zone_allowed(zone, pc->flags) &&
 					n->nr_partial > s->min_partial) {
-				object = get_partial_node(s, n, pc);
-				if (object) {
+				slab = get_partial_node(s, n, pc);
+				if (slab) {
 					/*
 					 * Don't check read_mems_allowed_retry()
 					 * here - if mems_allowed was updated in
@@ -2381,7 +2384,7 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
 					 * between allocation and the cpuset
 					 * update
 					 */
-					return object;
+					return slab;
 				}
 			}
 		}
@@ -2393,17 +2396,18 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc)
 /*
  * Get a partial slab, lock it and return it.
  */
-static void *get_partial(struct kmem_cache *s, int node, struct partial_context *pc)
+static struct slab *get_partial(struct kmem_cache *s, int node,
+				struct partial_context *pc)
 {
-	void *object;
+	struct slab *slab;
 	int searchnode = node;

 	if (node == NUMA_NO_NODE)
 		searchnode = numa_mem_id();

-	object = get_partial_node(s, get_node(s, searchnode), pc);
-	if (object || node != NUMA_NO_NODE)
-		return object;
+	slab = get_partial_node(s, get_node(s, searchnode), pc);
+	if (slab || node != NUMA_NO_NODE)
+		return slab;

 	return get_any_partial(s, pc);
 }
@@ -3213,10 +3217,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 new_objects:

 	pc.flags = gfpflags;
-	pc.slab = &slab;
 	pc.orig_size = orig_size;
-	freelist = get_partial(s, node, &pc);
-	if (freelist) {
+	slab = get_partial(s, node, &pc);
+	if (slab) {
+		freelist = pc.object;
 		if (kmem_cache_debug(s)) {
 			/*
 			 * For debug caches here we had to go through
@@ -3408,12 +3412,11 @@ static void *__slab_alloc_node(struct kmem_cache *s,
 	void *object;

 	pc.flags = gfpflags;
-	pc.slab = &slab;
 	pc.orig_size = orig_size;
-	object = get_partial(s, node, &pc);
+	slab = get_partial(s, node, &pc);

-	if (object)
-		return object;
+	if (slab)
+		return pc.object;

 	slab = new_slab(s, gfpflags, node);
 	if (unlikely(!slab)) {