From patchwork Tue Oct 24 09:33:39 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13434153
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 1/7] slub: Keep track of whether slub is on the per-node partial list
Date: Tue, 24 Oct 2023 09:33:39 +0000
Message-Id: <20231024093345.3676493-2-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
From: Chengming Zhou

Now we rely on the "frozen" bit to decide whether we should manipulate slab->slab_list, which will change in the following patch. Instead, introduce another way to keep track of whether a slab is on the per-node partial list: reuse the PG_workingset bit.

We use __set_bit() and __clear_bit() directly instead of the atomic versions for better performance, and it's safe because the bit is only changed under the per-node list_lock.
Signed-off-by: Chengming Zhou
---
 mm/slab.h | 19 +++++++++++++++++++
 mm/slub.c |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 8cd3294fedf5..50522b688cfb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -193,6 +193,25 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
 	__folio_clear_active(slab_folio(slab));
 }

+/*
+ * Slub reuse PG_workingset bit to keep track of whether it's on
+ * the per-node partial list.
+ */
+static inline bool slab_test_node_partial(const struct slab *slab)
+{
+	return folio_test_workingset((struct folio *)slab_folio(slab));
+}
+
+static inline void slab_set_node_partial(struct slab *slab)
+{
+	__set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+}
+
+static inline void slab_clear_node_partial(struct slab *slab)
+{
+	__clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+}
+
 static inline void *slab_address(const struct slab *slab)
 {
 	return folio_address(slab_folio(slab));

diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..3fad4edca34b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2127,6 +2127,7 @@ __add_partial(struct kmem_cache_node *n, struct slab *slab, int tail)
 		list_add_tail(&slab->slab_list, &n->partial);
 	else
 		list_add(&slab->slab_list, &n->partial);
+	slab_set_node_partial(slab);
 }

 static inline void add_partial(struct kmem_cache_node *n,
@@ -2141,6 +2142,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 {
 	lockdep_assert_held(&n->list_lock);
 	list_del(&slab->slab_list);
+	slab_clear_node_partial(slab);
 	n->nr_partial--;
 }

@@ -4831,6 +4833,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 		if (free == slab->objects) {
 			list_move(&slab->slab_list, &discard);
+			slab_clear_node_partial(slab);
 			n->nr_partial--;
 			dec_slabs_node(s, node, slab->objects);
 		} else if (free <= SHRINK_PROMOTE_MAX)

From patchwork Tue Oct 24 09:33:40 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13434154
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 2/7] slub: Prepare __slab_free() for unfrozen partial slab out of node partial list
Date: Tue, 24 Oct 2023 09:33:40 +0000
Message-Id: <20231024093345.3676493-3-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>

From: Chengming Zhou

Now a partial slab will be frozen when taken off the node partial list, so __slab_free() can tell from "was_frozen" that the partial slab is not on the node partial list and is in use by one kmem_cache_cpu.
But we will change this: partial slabs will leave the node partial list in an unfrozen state, so __slab_free() has to use the newly introduced slab_test_node_partial() instead.

Signed-off-by: Chengming Zhou
---
 mm/slub.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 3fad4edca34b..f568a32d7332 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3610,6 +3610,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
 	unsigned long flags;
+	bool on_node_partial;

 	stat(s, FREE_SLOWPATH);

@@ -3657,6 +3658,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 */
 				spin_lock_irqsave(&n->list_lock, flags);

+				on_node_partial = slab_test_node_partial(slab);
 			}
 		}

@@ -3685,6 +3687,15 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		return;
 	}

+	/*
+	 * This slab was partial but not on the per-node partial list,
+	 * in which case we shouldn't manipulate its list, just return.
+	 */
+	if (prior && !on_node_partial) {
+		spin_unlock_irqrestore(&n->list_lock, flags);
+		return;
+	}
+
 	if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
 		goto slab_empty;

From patchwork Tue Oct 24 09:33:41 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13434155
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 3/7] slub: Reflow ___slab_alloc()
Date: Tue, 24 Oct 2023 09:33:41 +0000
Message-Id: <20231024093345.3676493-4-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
From: Chengming Zhou

The get_partial() interface used in ___slab_alloc() may return a single object in the "kmem_cache_debug(s)" case, in which case we just return the "freelist" object. Move this handling up to prepare for later changes.

And the "pfmemalloc_match()" check is not needed for a node partial slab, since we already check it in get_partial_node().
Signed-off-by: Chengming Zhou
---
 mm/slub.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f568a32d7332..cd8aa68c156e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3218,8 +3218,21 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	pc.slab = &slab;
 	pc.orig_size = orig_size;
 	freelist = get_partial(s, node, &pc);
-	if (freelist)
-		goto check_new_slab;
+	if (freelist) {
+		if (kmem_cache_debug(s)) {
+			/*
+			 * For debug caches here we had to go through
+			 * alloc_single_from_partial() so just store the
+			 * tracking info and return the object.
+			 */
+			if (s->flags & SLAB_STORE_USER)
+				set_track(s, freelist, TRACK_ALLOC, addr);
+
+			return freelist;
+		}
+
+		goto retry_load_slab;
+	}

 	slub_put_cpu_ptr(s->cpu_slab);
 	slab = new_slab(s, gfpflags, node);
@@ -3255,20 +3268,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,

 	inc_slabs_node(s, slab_nid(slab), slab->objects);

-check_new_slab:
-
-	if (kmem_cache_debug(s)) {
-		/*
-		 * For debug caches here we had to go through
-		 * alloc_single_from_partial() so just store the tracking info
-		 * and return the object
-		 */
-		if (s->flags & SLAB_STORE_USER)
-			set_track(s, freelist, TRACK_ALLOC, addr);
-
-		return freelist;
-	}
-
 	if (unlikely(!pfmemalloc_match(slab, gfpflags))) {
 		/*
 		 * For !pfmemalloc_match() case we don't load freelist so that

From patchwork Tue Oct 24 09:33:42 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13434156
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 4/7] slub: Change get_partial() interfaces to return slab
Date: Tue, 24 Oct 2023 09:33:42 +0000
Message-Id: <20231024093345.3676493-5-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
From: Chengming Zhou

We need all get_partial() related interfaces to return a slab, instead of returning the freelist (or object).

Use partial_context.object to pass back the freelist or object for now. This patch shouldn't have any functional changes.
Signed-off-by: Chengming Zhou Reviewed-by: Vlastimil Babka --- mm/slub.c | 63 +++++++++++++++++++++++++++++-------------------------- 1 file changed, 33 insertions(+), 30 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index cd8aa68c156e..7d0234bffad3 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -204,9 +204,9 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled); /* Structure holding parameters for get_partial() call chain */ struct partial_context { - struct slab **slab; gfp_t flags; unsigned int orig_size; + void *object; }; static inline bool kmem_cache_debug(struct kmem_cache *s) @@ -2271,10 +2271,11 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags); /* * Try to allocate a partial slab from a specific node. */ -static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, - struct partial_context *pc) +static struct slab *get_partial_node(struct kmem_cache *s, + struct kmem_cache_node *n, + struct partial_context *pc) { - struct slab *slab, *slab2; + struct slab *slab, *slab2, *partial = NULL; void *object = NULL; unsigned long flags; unsigned int partial_slabs = 0; @@ -2290,27 +2291,28 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, spin_lock_irqsave(&n->list_lock, flags); list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) { - void *t; - if (!pfmemalloc_match(slab, pc->flags)) continue; if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) { object = alloc_single_from_partial(s, n, slab, pc->orig_size); - if (object) + if (object) { + partial = slab; + pc->object = object; break; + } continue; } - t = acquire_slab(s, n, slab, object == NULL); - if (!t) + object = acquire_slab(s, n, slab, object == NULL); + if (!object) break; - if (!object) { - *pc->slab = slab; + if (!partial) { + partial = slab; + pc->object = object; stat(s, ALLOC_FROM_PARTIAL); - object = t; } else { put_cpu_partial(s, slab, 0); stat(s, CPU_PARTIAL_NODE); @@ -2326,20 +2328,21 @@ static void 
*get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, } spin_unlock_irqrestore(&n->list_lock, flags); - return object; + return partial; } /* * Get a slab from somewhere. Search in increasing NUMA distances. */ -static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc) +static struct slab *get_any_partial(struct kmem_cache *s, + struct partial_context *pc) { #ifdef CONFIG_NUMA struct zonelist *zonelist; struct zoneref *z; struct zone *zone; enum zone_type highest_zoneidx = gfp_zone(pc->flags); - void *object; + struct slab *slab; unsigned int cpuset_mems_cookie; /* @@ -2374,8 +2377,8 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc) if (n && cpuset_zone_allowed(zone, pc->flags) && n->nr_partial > s->min_partial) { - object = get_partial_node(s, n, pc); - if (object) { + slab = get_partial_node(s, n, pc); + if (slab) { /* * Don't check read_mems_allowed_retry() * here - if mems_allowed was updated in @@ -2383,7 +2386,7 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc) * between allocation and the cpuset * update */ - return object; + return slab; } } } @@ -2395,17 +2398,18 @@ static void *get_any_partial(struct kmem_cache *s, struct partial_context *pc) /* * Get a partial slab, lock it and return it. 
*/ -static void *get_partial(struct kmem_cache *s, int node, struct partial_context *pc) +static struct slab *get_partial(struct kmem_cache *s, int node, + struct partial_context *pc) { - void *object; + struct slab *slab; int searchnode = node; if (node == NUMA_NO_NODE) searchnode = numa_mem_id(); - object = get_partial_node(s, get_node(s, searchnode), pc); - if (object || node != NUMA_NO_NODE) - return object; + slab = get_partial_node(s, get_node(s, searchnode), pc); + if (slab || node != NUMA_NO_NODE) + return slab; return get_any_partial(s, pc); } @@ -3215,10 +3219,10 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, new_objects: pc.flags = gfpflags; - pc.slab = &slab; pc.orig_size = orig_size; - freelist = get_partial(s, node, &pc); - if (freelist) { + slab = get_partial(s, node, &pc); + if (slab) { + freelist = pc.object; if (kmem_cache_debug(s)) { /* * For debug caches here we had to go through @@ -3410,12 +3414,11 @@ static void *__slab_alloc_node(struct kmem_cache *s, void *object; pc.flags = gfpflags; - pc.slab = &slab; pc.orig_size = orig_size; - object = get_partial(s, node, &pc); + slab = get_partial(s, node, &pc); - if (object) - return object; + if (slab) + return pc.object; slab = new_slab(s, gfpflags, node); if (unlikely(!slab)) { From patchwork Tue Oct 24 09:33:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chengming Zhou X-Patchwork-Id: 13434157 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BDBAC00A8F for ; Tue, 24 Oct 2023 09:34:13 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 114396B0203; Tue, 24 Oct 2023 05:34:10 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0C03C6B0204; Tue, 24 Oct 2023 05:34:10 -0400 (EDT) X-Delivered-To: 
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 5/7] slub: Introduce freeze_slab()
Date: Tue, 24 Oct 2023 09:33:43 +0000
Message-Id: <20231024093345.3676493-6-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

Later patches will take slabs off the node partial list without freezing them, so we need a freeze_slab() function that freezes a partial slab and returns its freelist.
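The freezing relies on a lockless compare-and-exchange retry loop; the kernel's __slab_update_freelist() does a double-word cmpxchg covering the freelist pointer and the counters word together. Below is a minimal user-space sketch of the same retry idiom using C11 atomics on a single packed counters word; the bit layout and all `demo_*` names are invented for illustration and are not the kernel's:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Invented demo layout: low 16 bits = inuse, next 16 bits = objects,
 * top bit = frozen.  SLUB packs its real counters differently. */
#define DEMO_FROZEN    (1ULL << 63)
#define DEMO_OBJ_SHIFT 16

static inline uint64_t demo_pack(uint64_t inuse, uint64_t objects, int frozen)
{
	return inuse | (objects << DEMO_OBJ_SHIFT) | (frozen ? DEMO_FROZEN : 0);
}

static inline uint64_t demo_inuse(uint64_t c)   { return c & 0xffff; }
static inline uint64_t demo_objects(uint64_t c) { return (c >> DEMO_OBJ_SHIFT) & 0xffff; }
static inline int      demo_frozen(uint64_t c)  { return (c & DEMO_FROZEN) != 0; }

/*
 * Same shape as the patch's freeze_slab(): read the current state,
 * build the frozen state (inuse = objects, frozen = 1), and retry the
 * compare-and-exchange until no concurrent update raced with us.
 * Returns the pre-freeze counters, the way freeze_slab() returns the
 * freelist it grabbed.
 */
static uint64_t demo_freeze(_Atomic uint64_t *counters)
{
	uint64_t old = atomic_load(counters);
	uint64_t new;

	do {
		/* take all objects: inuse = objects, set the frozen bit */
		new = demo_pack(demo_objects(old), demo_objects(old), 1);
		/* on failure, 'old' is refreshed with the current value */
	} while (!atomic_compare_exchange_weak(counters, &old, new));

	return old;
}
```

On failure `atomic_compare_exchange_weak()` reloads `old` with the observed value, so the loop recomputes the target state from fresh data each iteration, exactly as the kernel loop re-reads `slab->freelist` and `slab->counters`.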
Signed-off-by: Chengming Zhou Reviewed-by: Vlastimil Babka --- mm/slub.c | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/mm/slub.c b/mm/slub.c index 7d0234bffad3..5b428648021f 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3079,6 +3079,33 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab) return freelist; } +/* + * Freeze the partial slab and return the pointer to the freelist. + */ +static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab) +{ + struct slab new; + unsigned long counters; + void *freelist; + + do { + freelist = slab->freelist; + counters = slab->counters; + + new.counters = counters; + VM_BUG_ON(new.frozen); + + new.inuse = slab->objects; + new.frozen = 1; + + } while (!__slab_update_freelist(s, slab, + freelist, counters, + NULL, new.counters, + "freeze_slab")); + + return freelist; +} + /* * Slow path. The lockless freelist is empty or we need to perform * debugging duties. From patchwork Tue Oct 24 09:33:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chengming Zhou X-Patchwork-Id: 13434158 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 591EAC07545 for ; Tue, 24 Oct 2023 09:34:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 94E3E6B0204; Tue, 24 Oct 2023 05:34:12 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8FE176B0205; Tue, 24 Oct 2023 05:34:12 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 74FAE6B0206; Tue, 24 Oct 2023 05:34:12 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id 623C66B0204 for ; Tue, 24 
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 6/7] slub: Delay freezing of partial slabs
Date: Tue, 24 Oct 2023 09:33:44 +0000
Message-Id: <20231024093345.3676493-7-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

Currently we freeze slabs when moving them from the node partial list to the cpu partial list; this approach needs two cmpxchg_double operations:

1. freeze the slab (acquire_slab()) under the node list_lock
2. get_freelist() when the slab is picked for use in ___slab_alloc()

Actually we don't need to freeze when moving slabs off the node partial list; we can delay freezing until the slab's freelist is used in ___slab_alloc(), so we can save one cmpxchg_double().
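As a toy illustration of the saving (a counting model with invented names, not kernel code): the old scheme pays one cmpxchg when the slab leaves the node partial list and another when the slab is picked, while the delayed scheme pays a single cmpxchg in freeze_slab() at use time.

```c
#include <stdatomic.h>

/* Toy model: every state transition costs one compare-and-exchange on
 * the (here single-word) counters; we just count them per path. */
struct toy_slab {
	_Atomic int word;
	int ops;
};

static void toy_cmpxchg(struct toy_slab *s)
{
	int cur = atomic_load(&s->word);

	/* single-word stand-in for the kernel's double-word cmpxchg */
	atomic_compare_exchange_strong(&s->word, &cur, cur + 1);
	s->ops++;
}

/* Old scheme: freeze when leaving the node partial list, then grab
 * the freelist again when the slab is picked for allocation. */
static int old_scheme(struct toy_slab *s)
{
	toy_cmpxchg(s);		/* acquire_slab() under node list_lock */
	toy_cmpxchg(s);		/* get_freelist() in ___slab_alloc() */
	return s->ops;
}

/* Delayed scheme: list moves touch no frozen state at all; one
 * freeze_slab() runs when the freelist is actually needed. */
static int delayed_scheme(struct toy_slab *s)
{
	toy_cmpxchg(s);		/* freeze_slab() at use time */
	return s->ops;
}
```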
And there are other good points:

- The moving of slabs between the node partial list and the cpu partial list becomes simpler, since we don't need to freeze or unfreeze at all.

- Contention on the node list_lock is reduced, since we no longer need to freeze any slab under it.

We can achieve this because no concurrent path manipulates the partial slab list except the __slab_free() path, which is now serialized.

Since the slab returned by the get_partial() interfaces is no longer frozen and there is no freelist in the partial_context, we need to use the newly introduced freeze_slab() to freeze it and get its freelist. Similarly, slabs on the CPU partial list are no longer frozen, so we need to freeze_slab() them before use.

Signed-off-by: Chengming Zhou Reviewed-by: Vlastimil Babka --- mm/slub.c | 111 +++++++++++------------------------------------------- 1 file changed, 21 insertions(+), 90 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 5b428648021f..486d44421432 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2215,51 +2215,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, return object; } -/* - * Remove slab from the partial list, freeze it and - * return the pointer to the freelist. - * - * Returns a list of objects or NULL if it fails. - */ -static inline void *acquire_slab(struct kmem_cache *s, - struct kmem_cache_node *n, struct slab *slab, - int mode) -{ - void *freelist; - unsigned long counters; - struct slab new; - - lockdep_assert_held(&n->list_lock); - - /* - * Zap the freelist and set the frozen bit. - * The old freelist is the list of objects for the - * per cpu allocation list.
- */ - freelist = slab->freelist; - counters = slab->counters; - new.counters = counters; - if (mode) { - new.inuse = slab->objects; - new.freelist = NULL; - } else { - new.freelist = freelist; - } - - VM_BUG_ON(new.frozen); - new.frozen = 1; - - if (!__slab_update_freelist(s, slab, - freelist, counters, - new.freelist, new.counters, - "acquire_slab")) - return NULL; - - remove_partial(n, slab); - WARN_ON(!freelist); - return freelist; -} - #ifdef CONFIG_SLUB_CPU_PARTIAL static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain); #else @@ -2276,7 +2231,6 @@ static struct slab *get_partial_node(struct kmem_cache *s, struct partial_context *pc) { struct slab *slab, *slab2, *partial = NULL; - void *object = NULL; unsigned long flags; unsigned int partial_slabs = 0; @@ -2295,7 +2249,7 @@ static struct slab *get_partial_node(struct kmem_cache *s, continue; if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) { - object = alloc_single_from_partial(s, n, slab, + void *object = alloc_single_from_partial(s, n, slab, pc->orig_size); if (object) { partial = slab; @@ -2305,13 +2259,10 @@ static struct slab *get_partial_node(struct kmem_cache *s, continue; } - object = acquire_slab(s, n, slab, object == NULL); - if (!object) - break; + remove_partial(n, slab); if (!partial) { partial = slab; - pc->object = object; stat(s, ALLOC_FROM_PARTIAL); } else { put_cpu_partial(s, slab, 0); @@ -2610,9 +2561,6 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) unsigned long flags = 0; while (partial_slab) { - struct slab new; - struct slab old; - slab = partial_slab; partial_slab = slab->next; @@ -2625,23 +2573,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab) spin_lock_irqsave(&n->list_lock, flags); } - do { - - old.freelist = slab->freelist; - old.counters = slab->counters; - VM_BUG_ON(!old.frozen); - - new.counters = old.counters; - new.freelist = old.freelist; - - new.frozen = 0; - - } while 
(!__slab_update_freelist(s, slab, - old.freelist, old.counters, - new.freelist, new.counters, - "unfreezing slab")); - - if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) { + if (unlikely(!slab->inuse && n->nr_partial >= s->min_partial)) { slab->next = slab_to_discard; slab_to_discard = slab; } else { @@ -3148,7 +3080,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, node = NUMA_NO_NODE; goto new_slab; } -redo: if (unlikely(!node_match(slab, node))) { /* @@ -3224,7 +3155,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, new_slab: - if (slub_percpu_partial(c)) { + while (slub_percpu_partial(c)) { local_lock_irqsave(&s->cpu_slab->lock, flags); if (unlikely(c->slab)) { local_unlock_irqrestore(&s->cpu_slab->lock, flags); @@ -3236,11 +3167,20 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, goto new_objects; } - slab = c->slab = slub_percpu_partial(c); + slab = slub_percpu_partial(c); slub_set_percpu_partial(c, slab); local_unlock_irqrestore(&s->cpu_slab->lock, flags); stat(s, CPU_PARTIAL_ALLOC); - goto redo; + + if (unlikely(!node_match(slab, node) || + !pfmemalloc_match(slab, gfpflags))) { + slab->next = NULL; + __unfreeze_partials(s, slab); + continue; + } + + freelist = freeze_slab(s, slab); + goto retry_load_slab; } new_objects: @@ -3249,8 +3189,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, pc.orig_size = orig_size; slab = get_partial(s, node, &pc); if (slab) { - freelist = pc.object; if (kmem_cache_debug(s)) { + freelist = pc.object; /* * For debug caches here we had to go through * alloc_single_from_partial() so just store the @@ -3262,6 +3202,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, return freelist; } + freelist = freeze_slab(s, slab); goto retry_load_slab; } @@ -3663,18 +3604,8 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, was_frozen = new.frozen; new.inuse -= cnt; if 
((!new.inuse || !prior) && !was_frozen) { - - if (kmem_cache_has_cpu_partial(s) && !prior) { - - /* - * Slab was on no list before and will be - * partially empty - * We can defer the list move and instead - * freeze it. - */ - new.frozen = 1; - - } else { /* Needs to be taken off a list */ + /* Needs to be taken off a list */ + if (!kmem_cache_has_cpu_partial(s) || prior) { n = get_node(s, slab_nid(slab)); /* @@ -3704,9 +3635,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, * activity can be necessary. */ stat(s, FREE_FROZEN); - } else if (new.frozen) { + } else if (kmem_cache_has_cpu_partial(s) && !prior) { /* - * If we just froze the slab then put it onto the + * If we started with a full slab then put it onto the * per cpu partial list. */ put_cpu_partial(s, slab, 1); From patchwork Tue Oct 24 09:33:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chengming Zhou X-Patchwork-Id: 13434159 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89C11C00A8F for ; Tue, 24 Oct 2023 09:34:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 83CCD6B0206; Tue, 24 Oct 2023 05:34:14 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7ED906B0207; Tue, 24 Oct 2023 05:34:14 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 68DAA6B0208; Tue, 24 Oct 2023 05:34:14 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 52BAC6B0206 for ; Tue, 24 Oct 2023 05:34:14 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 
From: chengming.zhou@linux.dev
To: cl@linux.com, penberg@kernel.org
Cc: rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, chengming.zhou@linux.dev, Chengming Zhou
Subject: [RFC PATCH v3 7/7] slub: Optimize deactivate_slab()
Date: Tue, 24 Oct 2023 09:33:45 +0000
Message-Id: <20231024093345.3676493-8-chengming.zhou@linux.dev>
In-Reply-To: <20231024093345.3676493-1-chengming.zhou@linux.dev>
References: <20231024093345.3676493-1-chengming.zhou@linux.dev>
MIME-Version: 1.0
From: Chengming Zhou

Since the introduction of unfrozen slabs on the cpu partial list, we don't need to synchronize the slab frozen state under the node list_lock. The callers of deactivate_slab() and of __slab_free() won't manipulate the slab list concurrently, so we can take the node list_lock only in the last stage, when we really need to manipulate the slab list in this path.
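After this rework, deactivate_slab() first updates freelist/counters in a cmpxchg retry loop and only then decides what to do with the slab; of the three outcomes, only putting the slab back on a partial list needs the node list_lock. A user-space sketch of that final decision, with types, names, and thresholds invented for illustration:

```c
#include <stddef.h>

/* Invented mini-model of the post-update decision in deactivate_slab() */
enum demo_action { DEMO_DISCARD, DEMO_ADD_PARTIAL, DEMO_KEEP_FULL };

struct demo_state {
	int inuse;		/* objects still allocated after the update */
	void *freelist;		/* remaining free objects, NULL if slab full */
};

/*
 * Stage three of the patch: only the ADD_PARTIAL outcome needs the
 * node list_lock (lock, add_partial(), unlock); DISCARD and KEEP_FULL
 * touch no shared list at all.
 */
static enum demo_action demo_decide(const struct demo_state *st,
				    int nr_partial, int min_partial)
{
	if (st->inuse == 0 && nr_partial >= min_partial)
		return DEMO_DISCARD;	/* empty, and enough partial slabs kept */
	if (st->freelist)
		return DEMO_ADD_PARTIAL;
	return DEMO_KEEP_FULL;		/* full slab: no list work needed */
}
```

This mirrors the patch's ordering: the old code chose an M_PARTIAL/M_FREE/M_FULL_NOLIST mode and took the lock *before* the cmpxchg (so a retry had to drop it again); deciding after the update makes the lock scope minimal and removes the redo loop around the lock.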
Signed-off-by: Chengming Zhou --- mm/slub.c | 70 ++++++++++++++++++++----------------------------------- 1 file changed, 25 insertions(+), 45 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 486d44421432..64d550e415eb 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2449,10 +2449,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s) static void deactivate_slab(struct kmem_cache *s, struct slab *slab, void *freelist) { - enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST }; struct kmem_cache_node *n = get_node(s, slab_nid(slab)); int free_delta = 0; - enum slab_modes mode = M_NONE; void *nextfree, *freelist_iter, *freelist_tail; int tail = DEACTIVATE_TO_HEAD; unsigned long flags = 0; @@ -2499,58 +2497,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab, * unfrozen and number of objects in the slab may have changed. * Then release lock and retry cmpxchg again. */ -redo: - - old.freelist = READ_ONCE(slab->freelist); - old.counters = READ_ONCE(slab->counters); - VM_BUG_ON(!old.frozen); - - /* Determine target state of the slab */ - new.counters = old.counters; - if (freelist_tail) { - new.inuse -= free_delta; - set_freepointer(s, freelist_tail, old.freelist); - new.freelist = freelist; - } else - new.freelist = old.freelist; + do { + old.freelist = READ_ONCE(slab->freelist); + old.counters = READ_ONCE(slab->counters); + VM_BUG_ON(!old.frozen); + + /* Determine target state of the slab */ + new.counters = old.counters; + new.frozen = 0; + if (freelist_tail) { + new.inuse -= free_delta; + set_freepointer(s, freelist_tail, old.freelist); + new.freelist = freelist; + } else + new.freelist = old.freelist; - new.frozen = 0; + } while (!slab_update_freelist(s, slab, + old.freelist, old.counters, + new.freelist, new.counters, + "unfreezing slab")); + /* + * Stage three: Manipulate the slab list based on the updated state. 
+ */ if (!new.inuse && n->nr_partial >= s->min_partial) { - mode = M_FREE; + stat(s, DEACTIVATE_EMPTY); + discard_slab(s, slab); + stat(s, FREE_SLAB); } else if (new.freelist) { - mode = M_PARTIAL; - /* - * Taking the spinlock removes the possibility that - * acquire_slab() will see a slab that is frozen - */ spin_lock_irqsave(&n->list_lock, flags); - } else { - mode = M_FULL_NOLIST; - } - - - if (!slab_update_freelist(s, slab, - old.freelist, old.counters, - new.freelist, new.counters, - "unfreezing slab")) { - if (mode == M_PARTIAL) - spin_unlock_irqrestore(&n->list_lock, flags); - goto redo; - } - - - if (mode == M_PARTIAL) { add_partial(n, slab, tail); spin_unlock_irqrestore(&n->list_lock, flags); stat(s, tail); - } else if (mode == M_FREE) { - stat(s, DEACTIVATE_EMPTY); - discard_slab(s, slab); - stat(s, FREE_SLAB); - } else if (mode == M_FULL_NOLIST) { + } else stat(s, DEACTIVATE_FULL); - } } #ifdef CONFIG_SLUB_CPU_PARTIAL