From patchwork Mon Mar 7 07:40:55 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12771304
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v3 1/2] mm/slub: limit number of node partial slabs only in cache creation
Date: Mon, 7 Mar 2022 07:40:55 +0000
Message-Id: <20220307074057.902222-2-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20220307074057.902222-1-42.hyeyoo@gmail.com>
References: <20220307074057.902222-1-42.hyeyoo@gmail.com>

SLUB sets the minimum number of partial slabs to keep per node (min_partial)
using set_min_partial(). SLUB holds at least min_partial slabs on a node's
partial list, even if they are empty, to avoid excessive use of the page
allocator. set_min_partial() clamps the value of min_partial between
MIN_PARTIAL and MAX_PARTIAL.

As set_min_partial() can also be called from min_partial_store(), apply this
limit only in kmem_cache_open() so that min_partial can later be changed to
any value a user wants.

[ rientjes@google.com: Fold set_min_partial() into its callers ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
Reviewed-by: Roman Gushchin
---
 mm/slub.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 261474092e43..1ce09b0347ad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4000,15 +4000,6 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 	return 1;
 }
 
-static void set_min_partial(struct kmem_cache *s, unsigned long min)
-{
-	if (min < MIN_PARTIAL)
-		min = MIN_PARTIAL;
-	else if (min > MAX_PARTIAL)
-		min = MAX_PARTIAL;
-	s->min_partial = min;
-}
-
 static void set_cpu_partial(struct kmem_cache *s)
 {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
@@ -4215,7 +4206,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	 * The larger the object size is, the more slabs we want on the partial
 	 * list to avoid pounding the page allocator excessively.
 	 */
-	set_min_partial(s, ilog2(s->size) / 2);
+	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
 
 	set_cpu_partial(s);
 
@@ -5396,7 +5388,7 @@ static ssize_t min_partial_store(struct kmem_cache *s, const char *buf,
 	if (err)
 		return err;
 
-	set_min_partial(s, min);
+	s->min_partial = min;
 	return length;
 }
 SLAB_ATTR(min_partial);
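The clamping that kmem_cache_open() now open-codes with min_t()/max_t() keeps
the computed value between MIN_PARTIAL and MAX_PARTIAL. Below is a minimal
userspace sketch of that calculation; MIN_PARTIAL, MAX_PARTIAL and ilog2_ul()
are local stand-ins for the kernel's definitions (the constant values mirror
mm/slub.c at the time), not the kernel code itself.

/*
 * Illustrative sketch of the min_partial calculation open-coded in
 * kmem_cache_open() by this patch.  All names here are stand-ins.
 */
#include <stdio.h>

#define MIN_PARTIAL 5
#define MAX_PARTIAL 10

static unsigned long ilog2_ul(unsigned long x)	/* stand-in for ilog2() */
{
	unsigned long r = 0;

	while (x >>= 1)
		r++;
	return r;
}

static unsigned long calc_min_partial(unsigned long object_size)
{
	unsigned long min = ilog2_ul(object_size) / 2;

	/* same effect as the min_t()/max_t() pair added by the patch */
	if (min > MAX_PARTIAL)
		min = MAX_PARTIAL;
	if (min < MIN_PARTIAL)
		min = MIN_PARTIAL;
	return min;
}

int main(void)
{
	/* small objects clamp up to 5, very large ones clamp down to 10 */
	printf("%lu %lu %lu\n", calc_min_partial(64),
	       calc_min_partial(4096), calc_min_partial(1UL << 30));
	return 0;
}

Note that min_partial_store() now assigns the user-supplied value directly,
so writes through sysfs are no longer clamped to this range, which is the
point of moving the limit into kmem_cache_open().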
From patchwork Mon Mar 7 07:40:56 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12771305
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [PATCH v3 2/2] mm/slub: refactor deactivate_slab()
Date: Mon, 7 Mar 2022 07:40:56 +0000
Message-Id: <20220307074057.902222-3-42.hyeyoo@gmail.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20220307074057.902222-1-42.hyeyoo@gmail.com>
References: <20220307074057.902222-1-42.hyeyoo@gmail.com>

Simplify deactivate_slab() by releasing n->list_lock and retrying
cmpxchg_double() when cmpxchg_double() fails, and by performing
add_{partial,full} only when it succeeds.

Releasing and re-taking n->list_lock here is not harmful because SLUB
avoids deactivating slabs as much as possible.

[ vbabka@suse.cz: perform add_{partial,full} when cmpxchg_double()
  succeeds; count deactivating full slabs even if the debugging flag
  is not set ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
Reviewed-by: Roman Gushchin
---
 mm/slub.c | 91 +++++++++++++++++++++++--------------------------------
 1 file changed, 38 insertions(+), 53 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1ce09b0347ad..f0cb9d0443ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2348,10 +2348,10 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
 static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 			    void *freelist)
 {
-	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE };
+	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };
 	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
-	int lock = 0, free_delta = 0;
-	enum slab_modes l = M_NONE, m = M_NONE;
+	int free_delta = 0;
+	enum slab_modes mode = M_NONE;
 	void *nextfree, *freelist_iter, *freelist_tail;
 	int tail = DEACTIVATE_TO_HEAD;
 	unsigned long flags = 0;
@@ -2393,14 +2393,10 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	 * Ensure that the slab is unfrozen while the list presence
 	 * reflects the actual number of objects during unfreeze.
 	 *
-	 * We setup the list membership and then perform a cmpxchg
-	 * with the count. If there is a mismatch then the slab
-	 * is not unfrozen but the slab is on the wrong list.
-	 *
-	 * Then we restart the process which may have to remove
-	 * the slab from the list that we just put it on again
-	 * because the number of objects in the slab may have
-	 * changed.
+	 * We first perform cmpxchg holding lock and insert to list
+	 * when it succeed. If there is mismatch then the slab is not
+	 * unfrozen and number of objects in the slab may have changed.
+	 * Then release lock and retry cmpxchg again.
 	 */
 redo:
 
@@ -2420,61 +2416,50 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	new.frozen = 0;
 
 	if (!new.inuse && n->nr_partial >= s->min_partial)
-		m = M_FREE;
+		mode = M_FREE;
 	else if (new.freelist) {
-		m = M_PARTIAL;
-		if (!lock) {
-			lock = 1;
-			/*
-			 * Taking the spinlock removes the possibility that
-			 * acquire_slab() will see a slab that is frozen
-			 */
-			spin_lock_irqsave(&n->list_lock, flags);
-		}
-	} else {
-		m = M_FULL;
-		if (kmem_cache_debug_flags(s, SLAB_STORE_USER) && !lock) {
-			lock = 1;
-			/*
-			 * This also ensures that the scanning of full
-			 * slabs from diagnostic functions will not see
-			 * any frozen slabs.
-			 */
-			spin_lock_irqsave(&n->list_lock, flags);
-		}
-	}
-
-	if (l != m) {
-		if (l == M_PARTIAL)
-			remove_partial(n, slab);
-		else if (l == M_FULL)
-			remove_full(s, n, slab);
+		mode = M_PARTIAL;
+		/*
+		 * Taking the spinlock removes the possibility that
+		 * acquire_slab() will see a slab that is frozen
+		 */
+		spin_lock_irqsave(&n->list_lock, flags);
+	} else if (kmem_cache_debug_flags(s, SLAB_STORE_USER)) {
+		mode = M_FULL;
+		/*
+		 * This also ensures that the scanning of full
+		 * slabs from diagnostic functions will not see
+		 * any frozen slabs.
+		 */
+		spin_lock_irqsave(&n->list_lock, flags);
+	} else
+		mode = M_FULL_NOLIST;
 
-		if (m == M_PARTIAL)
-			add_partial(n, slab, tail);
-		else if (m == M_FULL)
-			add_full(s, n, slab);
-	}
-
-	l = m;
 	if (!cmpxchg_double_slab(s, slab,
 				old.freelist, old.counters,
 				new.freelist, new.counters,
-				"unfreezing slab"))
+				"unfreezing slab")) {
+		if (mode == M_PARTIAL || mode == M_FULL)
+			spin_unlock_irqrestore(&n->list_lock, flags);
 		goto redo;
+	}
 
-	if (lock)
-		spin_unlock_irqrestore(&n->list_lock, flags);
-
-	if (m == M_PARTIAL)
+	if (mode == M_PARTIAL) {
+		add_partial(n, slab, tail);
+		spin_unlock_irqrestore(&n->list_lock, flags);
 		stat(s, tail);
-	else if (m == M_FULL)
-		stat(s, DEACTIVATE_FULL);
-	else if (m == M_FREE) {
+	} else if (mode == M_FREE) {
 		stat(s, DEACTIVATE_EMPTY);
 		discard_slab(s, slab);
 		stat(s, FREE_SLAB);
-	}
+	} else if (mode == M_FULL) {
+		add_full(s, n, slab);
+		spin_unlock_irqrestore(&n->list_lock, flags);
+		stat(s, DEACTIVATE_FULL);
+	} else if (mode == M_FULL_NOLIST)
+		stat(s, DEACTIVATE_FULL);
 }
 
 #ifdef CONFIG_SLUB_CPU_PARTIAL
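The reworked deactivate_slab() flow can be read as: pick the target state
first, take n->list_lock only when a list insertion will be needed
(M_PARTIAL, or M_FULL with SLAB_STORE_USER debugging), attempt the cmpxchg,
and on failure drop the lock and retry from the top; add_partial()/add_full()
and the statistics updates happen only after the cmpxchg has succeeded.
Below is a self-contained sketch of that control flow only; the lock,
cmpxchg and list helpers are trivial stand-ins, not the kernel's APIs, and
the mode-selection inputs are simplified to booleans.

/*
 * Illustrative sketch of the refactored deactivate_slab() ordering:
 * decide mode -> lock if a list is involved -> cmpxchg -> on failure
 * unlock and retry; list insertion only after a successful cmpxchg.
 */
#include <stdbool.h>
#include <stdio.h>

enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE, M_FULL_NOLIST };

static int cmpxchg_failures_left = 1;	/* fail once to exercise the retry */

static void lock_list(void)   { /* stand-in for spin_lock_irqsave(&n->list_lock, ...) */ }
static void unlock_list(void) { /* stand-in for spin_unlock_irqrestore(...) */ }

/* stand-in for cmpxchg_double_slab(): fails once, then succeeds */
static bool try_unfreeze(void)
{
	return cmpxchg_failures_left-- <= 0;
}

static void deactivate(bool has_free_objects, bool empty, bool debug)
{
	enum slab_modes mode;

redo:
	if (empty)				/* !new.inuse && enough partial slabs */
		mode = M_FREE;
	else if (has_free_objects) {
		mode = M_PARTIAL;
		lock_list();			/* lock taken before the cmpxchg ... */
	} else if (debug) {
		mode = M_FULL;
		lock_list();
	} else
		mode = M_FULL_NOLIST;

	if (!try_unfreeze()) {
		/* ... and dropped again on failure, then retry from scratch */
		if (mode == M_PARTIAL || mode == M_FULL)
			unlock_list();
		goto redo;
	}

	/* list manipulation happens only after the successful cmpxchg */
	if (mode == M_PARTIAL) {
		printf("add_partial\n");
		unlock_list();
	} else if (mode == M_FULL) {
		printf("add_full\n");
		unlock_list();
	} else if (mode == M_FREE) {
		printf("discard_slab\n");
	} else {
		printf("full slab, not tracked on any list\n");
	}
}

int main(void)
{
	deactivate(true, false, false);		/* partially used slab */
	return 0;
}

Because nothing is ever put on a list before the cmpxchg, a failed attempt no
longer has to undo a list insertion, which is what lets the patch drop the
l/m bookkeeping and the remove_partial()/remove_full() calls.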