From patchwork Tue Oct 17 15:44:35 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13425536
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH 1/5] slub: Introduce on_partial()
Date: Tue, 17 Oct 2023 15:44:35 +0000
Message-Id: <20231017154439.3036608-2-chengming.zhou@linux.dev>
In-Reply-To: <20231017154439.3036608-1-chengming.zhou@linux.dev>
References: <20231017154439.3036608-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

Change slab->__unused to slab->flags so it can hold SLUB flags, which for
now include only the SF_NODE_PARTIAL flag. It indicates whether or not the
slab is on the node partial list.

The following patches will stop freezing a slab when moving it from the
node partial list to the cpu partial list, so we can no longer rely on the
frozen bit to decide whether we may manipulate slab->slab_list. Instead we
will rely on this SF_NODE_PARTIAL flag, which is protected by the node
list_lock.
Signed-off-by: Chengming Zhou
---
 mm/slab.h |  2 +-
 mm/slub.c | 28 ++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..11e9c9a0f648 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -89,7 +89,7 @@ struct slab {
 		};
 		struct rcu_head rcu_head;
 	};
-	unsigned int __unused;
+	unsigned int flags;
 
 #else
 #error "Unexpected slab allocator configured"
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..e5356ad14951 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1993,6 +1993,12 @@ static inline bool shuffle_freelist(struct kmem_cache *s, struct slab *slab)
 }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
+enum SLUB_FLAGS {
+	SF_INIT_VALUE = 0,
+	SF_EXIT_VALUE = -1,
+	SF_NODE_PARTIAL = 1 << 0,
+};
+
 static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
 	struct slab *slab;
@@ -2031,6 +2037,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	slab->objects = oo_objects(oo);
 	slab->inuse = 0;
 	slab->frozen = 0;
+	slab->flags = SF_INIT_VALUE;
 
 	account_slab(slab, oo_order(oo), s, flags);
 
@@ -2077,6 +2084,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	int order = folio_order(folio);
 	int pages = 1 << order;
 
+	slab->flags = SF_EXIT_VALUE;
 	__slab_clear_pfmemalloc(slab);
 	folio->mapping = NULL;
 	/* Make the mapping reset visible before clearing the flag */
@@ -2119,9 +2127,28 @@ static void discard_slab(struct kmem_cache *s, struct slab *slab)
 
 /*
  * Management of partially allocated slabs.
  */
+static void ___add_partial(struct kmem_cache_node *n, struct slab *slab)
+{
+	lockdep_assert_held(&n->list_lock);
+	slab->flags |= SF_NODE_PARTIAL;
+}
+
+static void ___remove_partial(struct kmem_cache_node *n, struct slab *slab)
+{
+	lockdep_assert_held(&n->list_lock);
+	slab->flags &= ~SF_NODE_PARTIAL;
+}
+
+static inline bool on_partial(struct kmem_cache_node *n, struct slab *slab)
+{
+	lockdep_assert_held(&n->list_lock);
+	return slab->flags & SF_NODE_PARTIAL;
+}
+
 static inline void __add_partial(struct kmem_cache_node *n,
 				struct slab *slab, int tail)
 {
+	___add_partial(n, slab);
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
 		list_add_tail(&slab->slab_list, &n->partial);
@@ -2142,6 +2169,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 	lockdep_assert_held(&n->list_lock);
 	list_del(&slab->slab_list);
 	n->nr_partial--;
+	___remove_partial(n, slab);
 }
 
 /*