From patchwork Sat Oct 21 14:43:12 2023
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    willy@infradead.org, pcc@google.com, tytso@mit.edu, maz@kernel.org,
    ruansy.fnst@fujitsu.com, vishal.moola@gmail.com, lrh2000@pku.edu.cn,
    hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v2 1/6] slub: Keep track of whether slub is on the per-node partial list
Date: Sat, 21 Oct 2023 14:43:12 +0000
Message-Id: <20231021144317.3400916-2-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
References: <20231021144317.3400916-1-chengming.zhou@linux.dev>
From: Chengming Zhou <chengming.zhou@linux.dev>

We currently rely on the "frozen" bit to decide whether we should
manipulate slab->slab_list; that will change in the following patches.

Instead, introduce another way to keep track of whether a slab is on
the per-node partial list: reuse the PG_workingset bit.
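As a rough userspace sketch of the invariant this patch establishes (the struct and flag word below are illustrative stand-ins, not the kernel types): the membership bit is set and cleared exactly where the slab enters and leaves the per-node partial list, so testing the bit answers the list-membership question without consulting the "frozen" state at all.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace model of the idea in this patch. SLAB_NODE_PARTIAL plays
 * the role of PG_workingset; on_list stands in for the real list
 * linkage. None of these names are the kernel API.
 */
#define SLAB_NODE_PARTIAL (1UL << 0)

struct slab_model {
	unsigned long flags;
	int on_list;
};

static bool slab_test_node_partial(const struct slab_model *slab)
{
	return slab->flags & SLAB_NODE_PARTIAL;
}

static void add_partial_model(struct slab_model *slab)
{
	slab->on_list = 1;			/* list_add()  */
	slab->flags |= SLAB_NODE_PARTIAL;	/* slab_set_node_partial() */
}

static void remove_partial_model(struct slab_model *slab)
{
	slab->on_list = 0;			/* list_del()  */
	slab->flags &= ~SLAB_NODE_PARTIAL;	/* slab_clear_node_partial() */
}
```

Because the bit is only ever toggled together with the list operation, flag state and list state can never disagree.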
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 include/linux/page-flags.h |  2 ++
 mm/slab.h                  | 19 +++++++++++++++++++
 mm/slub.c                  |  3 +++
 3 files changed, 24 insertions(+)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index a88e64acebfe..e8b1be71d722 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -478,6 +478,8 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 	TESTCLEARFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
 	TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
+	__SETPAGEFLAG(Workingset, workingset, PF_HEAD)
+	__CLEARPAGEFLAG(Workingset, workingset, PF_HEAD)
 __PAGEFLAG(Slab, slab, PF_NO_TAIL)
 PAGEFLAG(Checked, checked, PF_NO_COMPOUND)	   /* Used by some filesystems */

diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..9cff64cae8de 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -193,6 +193,25 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
 	__folio_clear_active(slab_folio(slab));
 }

+/*
+ * Slub reuse PG_workingset bit to keep track of whether it's on
+ * the per-node partial list.
+ */
+static inline bool slab_test_node_partial(const struct slab *slab)
+{
+	return folio_test_workingset((struct folio *)slab_folio(slab));
+}
+
+static inline void slab_set_node_partial(struct slab *slab)
+{
+	__folio_set_workingset(slab_folio(slab));
+}
+
+static inline void slab_clear_node_partial(struct slab *slab)
+{
+	__folio_clear_workingset(slab_folio(slab));
+}
+
 static inline void *slab_address(const struct slab *slab)
 {
 	return folio_address(slab_folio(slab));

diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..3fad4edca34b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2127,6 +2127,7 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 		list_add_tail(&slab->slab_list, &n->partial);
 	else
 		list_add(&slab->slab_list, &n->partial);
+	slab_set_node_partial(slab);
 }

 static inline void add_partial(struct kmem_cache_node *n,
@@ -2141,6 +2142,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 {
 	lockdep_assert_held(&n->list_lock);
 	list_del(&slab->slab_list);
+	slab_clear_node_partial(slab);
 	n->nr_partial--;
 }

@@ -4831,6 +4833,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)

 		if (free == slab->objects) {
 			list_move(&slab->slab_list, &discard);
+			slab_clear_node_partial(slab);
 			n->nr_partial--;
 			dec_slabs_node(s, node, slab->objects);
 		} else if (free <= SHRINK_PROMOTE_MAX)

From patchwork Sat Oct 21 14:43:13 2023
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    willy@infradead.org, pcc@google.com, tytso@mit.edu, maz@kernel.org,
    ruansy.fnst@fujitsu.com, vishal.moola@gmail.com, lrh2000@pku.edu.cn,
    hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v2 2/6] slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
Date: Sat, 21 Oct 2023 14:43:13 +0000
Message-Id: <20231021144317.3400916-3-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
References: <20231021144317.3400916-1-chengming.zhou@linux.dev>
From: Chengming Zhou <chengming.zhou@linux.dev>

Now a partial slab is frozen when taken off the node partial list, so
__slab_free() can tell from "was_frozen" that the slab is not on the
node partial list and is in use by one kmem_cache_cpu.

But we are about to change this: partial slabs will leave the node
partial list in an unfrozen state, so __slab_free() has to use the new
slab_test_node_partial() helper introduced in the previous patch
instead.
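The new condition can be summarized as a small decision table (the helper name below is hypothetical; in the patch the logic is open-coded in __slab_free()): a slab that already had free objects before this free ("prior" is non-NULL) but is not flagged as being on the per-node partial list must belong to some cpu partial list, so the free slowpath has to leave its list linkage alone.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the early-return condition added to __slab_free().
 * "prior" models "the slab already had free objects before this free"
 * (it was neither full nor about to become empty of that state), and
 * "on_node_partial" models slab_test_node_partial() read under
 * n->list_lock.
 */
static bool must_leave_list_alone(bool prior, bool on_node_partial)
{
	/* partially free but not on the node list => owned by a cpu partial list */
	return prior && !on_node_partial;
}
```

All other combinations fall through to the existing list handling, which is why the patch is a pure early return rather than a behavior change for slabs that really are on the node partial list.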
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/slub.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 3fad4edca34b..adeff8df85ec 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
 	unsigned long flags;
+	bool on_node_partial;

 	stat(s, FREE_SLOWPATH);

@@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);

+				on_node_partial = slab_test_node_partial(slab);
 			}
 		}

@@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		return;
 	}

+	/*
+	 * This slab was not full and not on the per-node partial list either,
+	 * in which case we shouldn't manipulate its list, just early return.
+	 */
+	if (prior && !on_node_partial) {
+		spin_unlock_irqrestore(&n->list_lock, flags);
+		return;
+	}
+
 	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
 		goto slab_empty;

From patchwork Sat Oct 21 14:43:14 2023
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    willy@infradead.org, pcc@google.com, tytso@mit.edu, maz@kernel.org,
    ruansy.fnst@fujitsu.com, vishal.moola@gmail.com, lrh2000@pku.edu.cn,
    hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v2 3/6] slub: Don't freeze slabs for cpu partial
Date: Sat, 21 Oct 2023 14:43:14 +0000
Message-Id: <20231021144317.3400916-4-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
References: <20231021144317.3400916-1-chengming.zhou@linux.dev>
From: Chengming Zhou <chengming.zhou@linux.dev>

Now we freeze slabs when moving them out of the node partial list to
the cpu partial list. This method needs two cmpxchg_double operations:

1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked up and used in ___slab_alloc()

Actually we don't need to freeze when moving slabs out of the node
partial list; we can delay freezing until the slab's freelist is used
in ___slab_alloc(), saving one cmpxchg_double(). There are other good
points as well:

1. The moving of slabs between the node partial list and the cpu
   partial list becomes simpler, since we don't need to freeze or
   unfreeze at all.

2. The node list_lock contention would be less, since we only need to
   freeze one slab under the node list_lock. (In fact, we can first
   move slabs out of the node partial list without freezing any slab
   at all, so the contention on the slab won't transfer to contention
   on the node list_lock.)

We can achieve this because no concurrent path manipulates the partial
slab list except __slab_free(), which is serialized now.

Note this patch changes only the parts that move the partial slabs,
for easier code review; the remaining parts will be fixed in the
following patches. Specifically, this patch changes three paths:

1. get a partial slab from the node: get_partial_node()
2. put a partial slab back to the node: __unfreeze_partials()
3. cache a partial slab on the cpu in __slab_free()

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/slub.c | 63 +++++++++++++++++--------------------------------------
 1 file changed, 19 insertions(+), 44 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index adeff8df85ec..61ee82ea21b6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2277,7 +2277,9 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	struct slab *slab, *slab2;
 	void *object = NULL;
 	unsigned long flags;
+#ifdef CONFIG_SLUB_CPU_PARTIAL
 	unsigned int partial_slabs = 0;
+#endif

 	/*
 	 * Racy check. If we mistakenly see no partial slabs then we
@@ -2303,20 +2305,22 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 			continue;
 		}

-		t = acquire_slab(s, n, slab, object == NULL);
-		if (!t)
-			break;
-
 		if (!object) {
-			*pc->slab = slab;
-			stat(s, ALLOC_FROM_PARTIAL);
-			object = t;
-		} else {
-			put_cpu_partial(s, slab, 0);
-			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
+			t = acquire_slab(s, n, slab, object == NULL);
+			if (t) {
+				*pc->slab = slab;
+				stat(s, ALLOC_FROM_PARTIAL);
+				object = t;
+				continue;
+			}
 		}

+#ifdef CONFIG_SLUB_CPU_PARTIAL
+		remove_partial(n, slab);
+		put_cpu_partial(s, slab, 0);
+		stat(s, CPU_PARTIAL_NODE);
+		partial_slabs++;
+#endif
 		if (!kmem_cache_has_cpu_partial(s)
 			|| partial_slabs > s->cpu_partial_slabs / 2)
 			break;
@@ -2606,9 +2610,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 	unsigned long flags = 0;

 	while (partial_slab) {
-		struct slab new;
-		struct slab old;
-
 		slab = partial_slab;
 		partial_slab = slab->next;

@@ -2621,23 +2622,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 			spin_lock_irqsave(&n->list_lock, flags);
 		}

-		do {
-
-			old.freelist = slab->freelist;
-			old.counters = slab->counters;
-			VM_BUG_ON(!old.frozen);
-
-			new.counters = old.counters;
-			new.freelist = old.freelist;
-
-			new.frozen = 0;
-
-		} while (!__slab_update_freelist(s, slab,
-				old.freelist, old.counters,
-				new.freelist, new.counters,
-				"unfreezing slab"));
-
-		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
+		if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
 		} else {
@@ -3634,18 +3619,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		was_frozen = new.frozen;
 		new.inuse -= cnt;
 		if ((!new.inuse || !prior) && !was_frozen) {
-
-			if (kmem_cache_has_cpu_partial(s) && !prior) {
-
-				/*
-				 * Slab was on no list before and will be
-				 * partially empty
-				 * We can defer the list move and instead
-				 * freeze it.
-				 */
-				new.frozen = 1;
-
-			} else { /* Needs to be taken off a list */
+			/* Needs to be taken off a list */
+			if (!kmem_cache_has_cpu_partial(s) || prior) {
 				n = get_node(s, slab_nid(slab));
 				/*
@@ -3675,7 +3650,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		 * activity can be necessary.
 		 */
 		stat(s, FREE_FROZEN);
-	} else if (new.frozen) {
+	} else if (kmem_cache_has_cpu_partial(s) && !prior) {
 		/*
 		 * If we just froze the slab then put it onto the
 		 * per cpu partial list.

From patchwork Sat Oct 21 14:43:15 2023
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    willy@infradead.org, pcc@google.com, tytso@mit.edu, maz@kernel.org,
    ruansy.fnst@fujitsu.com, vishal.moola@gmail.com, lrh2000@pku.edu.cn,
    hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v2 4/6] slub: Simplify acquire_slab()
Date: Sat, 21 Oct 2023 14:43:15 +0000
Message-Id: <20231021144317.3400916-5-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
References: <20231021144317.3400916-1-chengming.zhou@linux.dev>
U2FsdGVkX18LWs6aOhrx6HRXOEUa7jb2gy4kHDDJvDO+zj3CfHkUR3CPBG8ySWTDfERQaxygs0ds5tMJgU2I1jv3NqYeyBQThkf35JRTPlBphJDgJEoh91TLCKbh7yklq3A5h/wv+y7SejKDRM8nRK+AQq/pF6HkJCFwSX1KXXrutfigJPm7mkkSD1uGiilERyIQZQobwHv+PPhOyF57skToGb9ExNsERdFxebSQWzHVwc7kDA9FtPmRObhQXPdH7umQ/ciDYHSWprxXwR7CUcQdHmcw7GqlhtpfVtq3aglSCvfkf7/axIreJ/+YF4a2a454QYa935OBJGroIUbklXGWgHX25G4CBP6ZozSxREE9yeZdE/WZxj+0jC8hGWSwX2ETA5z4GEpwvjIoxOeSHhJegSK4XlQXEve89EfeQLDkG9k70+iFH5twWjcJ9WCmWAE9CTnOXfHG5hPNyQychowss/LC7WqFpclcnOdI/4324rFAS9qKEwVjh1ols2ko4g5wubDeo1BEpZvzJjbLHxQ0mXSKqkyDfd7ljn4qlJJ59tZxODI0Bj4fHDLVHjjwq+1gaqE+tI/ES88sbrp2uuoQLWnGtTK18IrcE+4yUPzX/FS3NTcRwvrBnzpH5kG1U7MYsb+3QgQPnX6W7bV6QD7BAszqEOl0jUi3aL+A0vlVwwqaWJHUiXCO03G7qOkQEy4PikCqN5WEivyh6n8rF/FryJFr3ITc5ca/Zb3UqakjoNlhr0relYPtXvO5KZec1CztbMlGxQlaAHOpr113bBkOUNr3qr841pQibdJfXFjRdYNR8g21yqAAn2YupiDEGYDI5dk9NKI1CNof4IY/ybbiyISb9vujVgpNgBOdJGK9QlcgSk9N1yagoC8K/3zl4YmuE0cwaCzi3syqt4VA+BUP9zvS6/FAIXKkurRsCEmHHTiN3kAkG8BBi9lWfYFUDn9hPqBNMZ2P102lHrO OAKP189J erkn+42FB7oiVosOQymQ6cfavQdm8/35GfXwVks/NzLiJwFSRh6uBa+n+SApXrLsIDCbM0j3SwqpIBdYRVMocU/gWROPFr93wznCzFonnsIVncKPcwNR0y77wU2ATYQ0Dvwfbr0hKZfYBPXoE6JOZOW3pmetgiPYlPIXNQ6hLV2TO0FN4bQ2LUIyWrwnlQamzm2a+LlytCv23EUsHlPx7yGcybEbN4VLxxP+721u0En4UGbc98iL2BcQwSnolunJAxziP X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Chengming Zhou Now the object == NULL is always true, simplify acquire_slab(). Signed-off-by: Chengming Zhou --- mm/slub.c | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 61ee82ea21b6..9f0b80fefc70 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2222,8 +2222,7 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, * Returns a list of objects or NULL if it fails. 
  */
 static inline void *acquire_slab(struct kmem_cache *s,
-		struct kmem_cache_node *n, struct slab *slab,
-		int mode)
+		struct kmem_cache_node *n, struct slab *slab)
 {
 	void *freelist;
 	unsigned long counters;
@@ -2239,12 +2238,8 @@ static inline void *acquire_slab(struct kmem_cache *s,
 	freelist = slab->freelist;
 	counters = slab->counters;
 	new.counters = counters;
-	if (mode) {
-		new.inuse = slab->objects;
-		new.freelist = NULL;
-	} else {
-		new.freelist = freelist;
-	}
+	new.inuse = slab->objects;
+	new.freelist = NULL;

 	VM_BUG_ON(new.frozen);
 	new.frozen = 1;
@@ -2306,7 +2301,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	}

 	if (!object) {
-		t = acquire_slab(s, n, slab, object == NULL);
+		t = acquire_slab(s, n, slab);
 		if (t) {
 			*pc->slab = slab;
 			stat(s, ALLOC_FROM_PARTIAL);

From patchwork Sat Oct 21 14:43:16 2023
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, willy@infradead.org, pcc@google.com, tytso@mit.edu, maz@kernel.org, ruansy.fnst@fujitsu.com, vishal.moola@gmail.com, lrh2000@pku.edu.cn, hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v2 5/6] slub: Introduce get_cpu_partial()
Date: Sat, 21 Oct 2023 14:43:16 +0000
Message-Id: <20231021144317.3400916-6-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
References: <20231021144317.3400916-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

Since the slabs on the cpu partial list are no longer frozen, introduce get_cpu_partial() to get a frozen slab together with its freelist from the cpu partial list. It now works much like getting a frozen slab with its freelist from the node partial list.

The other change is in get_partial(): it can now return no frozen slab even when acquire_slab() failed on every slab, because some unfrozen slabs may still have been put on the cpu partial list. We need to check for this rare case to avoid allocating a new slab unnecessarily.
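As a rough illustration of the "take one slab from the cpu partial list" step described above, here is a minimal userspace C model. It is not kernel code: the structs, the `take_cpu_partial` name, and the absence of locking are all simplifications for illustration; in the real get_cpu_partial() the pop happens under the local lock and the slab is then frozen with a cmpxchg.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative userspace model: a cpu partial list is a singly linked
 * stack of slabs; taking one slab pops the head and leaves the rest of
 * the list in place, as slub_percpu_partial()/slub_set_percpu_partial()
 * do in the patch. */
struct slab {
	struct slab *next;
	int objects;
};

struct cpu_cache {
	struct slab *partial;	/* head of the cpu partial list */
};

/* Pop one slab from the cpu partial list; returns NULL if the list is
 * empty, in which case the caller falls back to other sources. */
static struct slab *take_cpu_partial(struct cpu_cache *c)
{
	struct slab *slab = c->partial;

	if (!slab)
		return NULL;
	c->partial = slab->next;	/* advance the head past this slab */
	slab->next = NULL;		/* detach the taken slab */
	return slab;
}
```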
Signed-off-by: Chengming Zhou
---
 mm/slub.c | 87 +++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 68 insertions(+), 19 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 9f0b80fefc70..7fae959c56eb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3055,6 +3055,68 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
 	return freelist;
 }

+#ifdef CONFIG_SLUB_CPU_PARTIAL
+
+static void *get_cpu_partial(struct kmem_cache *s, struct kmem_cache_cpu *c,
+			     struct slab **slabptr, int node, gfp_t gfpflags)
+{
+	unsigned long flags;
+	struct slab *slab;
+	struct slab new;
+	unsigned long counters;
+	void *freelist;
+
+	while (slub_percpu_partial(c)) {
+		local_lock_irqsave(&s->cpu_slab->lock, flags);
+		if (unlikely(!slub_percpu_partial(c))) {
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
+			/* we were preempted and partial list got empty */
+			return NULL;
+		}
+
+		slab = slub_percpu_partial(c);
+		slub_set_percpu_partial(c, slab);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
+		stat(s, CPU_PARTIAL_ALLOC);
+
+		if (unlikely(!node_match(slab, node) ||
+			     !pfmemalloc_match(slab, gfpflags))) {
+			slab->next = NULL;
+			__unfreeze_partials(s, slab);
+			continue;
+		}
+
+		do {
+			freelist = slab->freelist;
+			counters = slab->counters;
+
+			new.counters = counters;
+			VM_BUG_ON(new.frozen);
+
+			new.inuse = slab->objects;
+			new.frozen = 1;
+		} while (!__slab_update_freelist(s, slab,
+						 freelist, counters,
+						 NULL, new.counters,
+						 "get_cpu_partial"));
+
+		*slabptr = slab;
+		return freelist;
+	}
+
+	return NULL;
+}
+
+#else	/* CONFIG_SLUB_CPU_PARTIAL */
+
+static void *get_cpu_partial(struct kmem_cache *s, struct kmem_cache_cpu *c,
+			     struct slab **slabptr, int node, gfp_t gfpflags)
+{
+	return NULL;
+}
+
+#endif
+
 /*
  * Slow path. The lockless freelist is empty or we need to perform
  * debugging duties.
@@ -3097,7 +3159,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		node = NUMA_NO_NODE;
 		goto new_slab;
 	}
-redo:

 	if (unlikely(!node_match(slab, node))) {
 		/*
@@ -3173,24 +3234,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,

 new_slab:

-	if (slub_percpu_partial(c)) {
-		local_lock_irqsave(&s->cpu_slab->lock, flags);
-		if (unlikely(c->slab)) {
-			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-			goto reread_slab;
-		}
-		if (unlikely(!slub_percpu_partial(c))) {
-			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-			/* we were preempted and partial list got empty */
-			goto new_objects;
-		}
-
-		slab = c->slab = slub_percpu_partial(c);
-		slub_set_percpu_partial(c, slab);
-		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
-		stat(s, CPU_PARTIAL_ALLOC);
-		goto redo;
-	}
+	freelist = get_cpu_partial(s, c, &slab, node, gfpflags);
+	if (freelist)
+		goto retry_load_slab;

 new_objects:

@@ -3201,6 +3247,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	if (freelist)
 		goto check_new_slab;

+	if (slub_percpu_partial(c))
+		goto new_slab;
+
 	slub_put_cpu_ptr(s->cpu_slab);
 	slab = new_slab(s, gfpflags, node);
 	c = slub_get_cpu_ptr(s->cpu_slab);

From patchwork Sat Oct 21 14:43:17 2023
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, willy@infradead.org, pcc@google.com, tytso@mit.edu, maz@kernel.org, ruansy.fnst@fujitsu.com, vishal.moola@gmail.com, lrh2000@pku.edu.cn, hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v2 6/6] slub: Optimize deactivate_slab()
Date: Sat, 21 Oct 2023 14:43:17 +0000
Message-Id: <20231021144317.3400916-7-chengming.zhou@linux.dev>
In-Reply-To: <20231021144317.3400916-1-chengming.zhou@linux.dev>
References: <20231021144317.3400916-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

Since the introduction of unfrozen slabs on the cpu partial list, we no longer need to synchronize the slab frozen state under the node list_lock. The caller of deactivate_slab() and the caller of __slab_free() won't manipulate the slab list concurrently, so we can take the node list_lock only in stage three, when we need to manipulate the slab list in this path.
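The "stage three" disposition described above depends only on the state already committed by the cmpxchg, which is why only one branch needs the list_lock. As a hedged illustration, here is a minimal pure-C model of that decision; the `stage_three` function and its parameters are hypothetical names, not kernel APIs.

```c
#include <assert.h>

/* Illustrative model of deactivate_slab()'s stage three: once the
 * freelist update has unfrozen the slab, the disposition depends only
 * on the updated state, so the node list_lock is needed only for the
 * "put back on the node partial list" case. */
enum disposition { DISCARD_SLAB, ADD_TO_PARTIAL, LEAVE_FULL };

static enum disposition stage_three(int new_inuse, int has_free_objects,
				    int nr_partial, int min_partial)
{
	/* Empty slab and the node already has enough partial slabs:
	 * free it back to the page allocator. */
	if (new_inuse == 0 && nr_partial >= min_partial)
		return DISCARD_SLAB;
	/* Slab still has free objects: this is the only path that
	 * would take n->list_lock, to add it to the partial list. */
	if (has_free_objects)
		return ADD_TO_PARTIAL;
	/* Fully allocated slab: no list manipulation, no lock. */
	return LEAVE_FULL;
}
```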
Signed-off-by: Chengming Zhou
---
 mm/slub.c | 70 ++++++++++++++++++++-----------------------------------
 1 file changed, 25 insertions(+), 45 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7fae959c56eb..29a60bfbf9c5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2493,10 +2493,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
 static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 			    void *freelist)
 {
-	enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
 	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	int free_delta = 0;
-	enum slab_modes mode = M_NONE;
 	void *nextfree, *freelist_iter, *freelist_tail;
 	int tail = DEACTIVATE_TO_HEAD;
 	unsigned long flags = 0;
@@ -2543,58 +2541,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	 * unfrozen and number of objects in the slab may have changed.
 	 * Then release lock and retry cmpxchg again.
 	 */
-redo:
-
-	old.freelist = READ_ONCE(slab->freelist);
-	old.counters = READ_ONCE(slab->counters);
-	VM_BUG_ON(!old.frozen);
-
-	/* Determine target state of the slab */
-	new.counters = old.counters;
-	if (freelist_tail) {
-		new.inuse -= free_delta;
-		set_freepointer(s, freelist_tail, old.freelist);
-		new.freelist = freelist;
-	} else
-		new.freelist = old.freelist;
+	do {
+		old.freelist = READ_ONCE(slab->freelist);
+		old.counters = READ_ONCE(slab->counters);
+		VM_BUG_ON(!old.frozen);
+
+		/* Determine target state of the slab */
+		new.counters = old.counters;
+		new.frozen = 0;
+		if (freelist_tail) {
+			new.inuse -= free_delta;
+			set_freepointer(s, freelist_tail, old.freelist);
+			new.freelist = freelist;
+		} else
+			new.freelist = old.freelist;

-	new.frozen = 0;
+	} while (!slab_update_freelist(s, slab,
+				       old.freelist, old.counters,
+				       new.freelist, new.counters,
+				       "unfreezing slab"));

+	/*
+	 * Stage three: Manipulate the slab list based on the updated state.
+	 */
 	if (!new.inuse && n->nr_partial >= s->min_partial) {
-		mode = M_FREE;
+		stat(s, DEACTIVATE_EMPTY);
+		discard_slab(s, slab);
+		stat(s, FREE_SLAB);
 	} else if (new.freelist) {
-		mode = M_PARTIAL;
-		/*
-		 * Taking the spinlock removes the possibility that
-		 * acquire_slab() will see a slab that is frozen
-		 */
 		spin_lock_irqsave(&n->list_lock, flags);
-	} else {
-		mode = M_FULL_NOLIST;
-	}
-
-
-	if (!slab_update_freelist(s, slab,
-				  old.freelist, old.counters,
-				  new.freelist, new.counters,
-				  "unfreezing slab")) {
-		if (mode == M_PARTIAL)
-			spin_unlock_irqrestore(&n->list_lock, flags);
-		goto redo;
-	}
-
-
-	if (mode == M_PARTIAL) {
+		add_partial(n, slab, tail);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 		stat(s, tail);
-	} else if (mode == M_FREE) {
-		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
-		stat(s, FREE_SLAB);
-	} else if (mode == M_FULL_NOLIST) {
+	} else
 		stat(s, DEACTIVATE_FULL);
-	}
 }

 #ifdef CONFIG_SLUB_CPU_PARTIAL
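The patch above replaces the `redo:`/`goto` flow with a `do { } while (!slab_update_freelist(...))` loop: recompute the target state from a fresh snapshot and retry if the compare-and-exchange loses a race. A minimal userspace sketch of that shape, using C11 atomics in place of `slab_update_freelist()` (the `unfreeze_counter` name and the choice of bit 0 as the "frozen" bit are illustrative assumptions):

```c
#include <stdatomic.h>

/* Illustrative retry loop: snapshot the state, compute the new state
 * (here, clear an assumed "frozen" bit 0), and retry the whole
 * computation if the compare-exchange fails because the value changed
 * underneath us. Returns the number of attempts made. */
static int unfreeze_counter(_Atomic unsigned long *counters)
{
	unsigned long old, newval;
	int attempts = 0;

	do {
		old = atomic_load(counters);	/* fresh snapshot each try */
		newval = old & ~1UL;		/* target state: frozen bit cleared */
		attempts++;
	} while (!atomic_compare_exchange_weak(counters, &old, newval));

	return attempts;
}
```

Single-threaded the loop succeeds almost immediately; under contention it simply recomputes from the new snapshot, which is exactly why the patch can move the state computation inside the loop and drop the `goto redo` bookkeeping.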