From patchwork Tue Jul 12 02:28:05 2022
X-Patchwork-Submitter: Rongwei Wang
X-Patchwork-Id: 12914479
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, vbabka@suse.cz, 42.hyeyoo@gmail.com,
	roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
	rientjes@google.com, penberg@kernel.org, cl@gentwo.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/3] mm/slub: fix the race between validate_slab and slab_free
Date: Tue, 12 Jul 2022 10:28:05 +0800
Message-Id: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>
In use cases where slabs are frequently allocated and freed, error
messages such as "Left Redzone overwritten" or "First byte 0xbb
instead of 0xcc" can be printed while validating slabs. This happens
because an object can be filled with SLAB_RED_INACTIVE before it has
been added to the slab's freelist, and slab validation may run between
these two states. It does not mean the slab cannot work correctly, but
these confusing messages disturb slab debugging.

Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 mm/slub.c | 43 +++++++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b1281b8654bd..e950d8df8380 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1391,18 +1391,16 @@ static noinline int free_debug_processing(
 	void *head, void *tail, int bulk_cnt,
 	unsigned long addr)
 {
-	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	void *object = head;
 	int cnt = 0;
-	unsigned long flags, flags2;
+	unsigned long flags;
 	int ret = 0;
 	depot_stack_handle_t handle = 0;
 
 	if (s->flags & SLAB_STORE_USER)
 		handle = set_track_prepare();
 
-	spin_lock_irqsave(&n->list_lock, flags);
-	slab_lock(slab, &flags2);
+	slab_lock(slab, &flags);
 
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
 		if (!check_slab(s, slab))
@@ -1435,8 +1433,7 @@ static noinline int free_debug_processing(
 		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
 			 bulk_cnt, cnt);
 
-	slab_unlock(slab, &flags2);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	slab_unlock(slab, &flags);
 	if (!ret)
 		slab_fix(s, "Object at 0x%p not freed", object);
 	return ret;
@@ -3330,7 +3327,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 
 {
 	void *prior;
-	int was_frozen;
+	int was_frozen, to_take_off = 0;
 	struct slab new;
 	unsigned long counters;
 	struct kmem_cache_node *n = NULL;
@@ -3341,14 +3338,23 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	if (kfence_free(head))
 		return;
 
-	if (kmem_cache_debug(s) &&
-	    !free_debug_processing(s, slab, head, tail, cnt, addr))
-		return;
+	n = get_node(s, slab_nid(slab));
+	if (kmem_cache_debug(s)) {
+		int ret;
 
-	do {
-		if (unlikely(n)) {
+		spin_lock_irqsave(&n->list_lock, flags);
+		ret = free_debug_processing(s, slab, head, tail, cnt, addr);
+		if (!ret) {
 			spin_unlock_irqrestore(&n->list_lock, flags);
-			n = NULL;
+			return;
+		}
+	}
+
+	do {
+		if (unlikely(to_take_off)) {
+			if (!kmem_cache_debug(s))
+				spin_unlock_irqrestore(&n->list_lock, flags);
+			to_take_off = 0;
 		}
 		prior = slab->freelist;
 		counters = slab->counters;
@@ -3369,8 +3375,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				new.frozen = 1;
 
 			} else { /* Needs to be taken off a list */
-
-				n = get_node(s, slab_nid(slab));
 				/*
 				 * Speculatively acquire the list_lock.
 				 * If the cmpxchg does not succeed then we may
@@ -3379,8 +3383,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 				 * Otherwise the list_lock will synchronize with
 				 * other processors updating the list of slabs.
 				 */
-				spin_lock_irqsave(&n->list_lock, flags);
+				if (!kmem_cache_debug(s))
+					spin_lock_irqsave(&n->list_lock, flags);
 
+				to_take_off = 1;
 			}
 		}
 
@@ -3389,8 +3395,9 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		head, new.counters,
 		"__slab_free"));
 
-	if (likely(!n)) {
-
+	if (likely(!to_take_off)) {
+		if (kmem_cache_debug(s))
+			spin_unlock_irqrestore(&n->list_lock, flags);
 		if (likely(was_frozen)) {
 			/*
 			 * The list lock was not taken therefore no list

From patchwork Tue Jul 12 02:28:06 2022
X-Patchwork-Submitter: Rongwei Wang
X-Patchwork-Id: 12914478
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, vbabka@suse.cz, 42.hyeyoo@gmail.com,
	roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
	rientjes@google.com, penberg@kernel.org, cl@gentwo.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/3] mm/slub: improve consistency of nr_slabs count
Date: Tue, 12 Jul 2022 10:28:06 +0800
Message-Id: <20220712022807.44113-2-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>
References: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>

Currently, discard_slab() can change the nr_slabs count without
holding the node's list_lock. This can lead to error messages being
printed when scanning the node's partial or full list, e.g. when
validating all slabs, because it breaks the consistency of the
nr_slabs count. Here, discard_slab() is removed, and dec_slabs_node()
is called before the node's list_lock is released; dec_slabs_node()
and free_slab() are called separately to keep the nr_slabs count
consistent.

Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 mm/slub.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index e950d8df8380..587416e39292 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2065,12 +2065,6 @@ static void free_slab(struct kmem_cache *s, struct slab *slab)
 		__free_slab(s, slab);
 }
 
-static void discard_slab(struct kmem_cache *s, struct slab *slab)
-{
-	dec_slabs_node(s, slab_nid(slab), slab->objects);
-	free_slab(s, slab);
-}
-
 /*
  * Management of partially allocated slabs.
  */
@@ -2439,6 +2433,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 
 	if (!new.inuse && n->nr_partial >= s->min_partial) {
 		mode = M_FREE;
+		spin_lock_irqsave(&n->list_lock, flags);
 	} else if (new.freelist) {
 		mode = M_PARTIAL;
 		/*
@@ -2463,7 +2458,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 				old.freelist, old.counters,
 				new.freelist, new.counters,
 				"unfreezing slab")) {
-		if (mode == M_PARTIAL || mode == M_FULL)
+		if (mode != M_FULL_NOLIST)
 			spin_unlock_irqrestore(&n->list_lock, flags);
 		goto redo;
 	}
@@ -2475,7 +2470,10 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 		stat(s, tail);
 	} else if (mode == M_FREE) {
 		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
+		dec_slabs_node(s, slab_nid(slab), slab->objects);
+		spin_unlock_irqrestore(&n->list_lock, flags);
+
+		free_slab(s, slab);
 		stat(s, FREE_SLAB);
 	} else if (mode == M_FULL) {
 		add_full(s, n, slab);
@@ -2528,6 +2526,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
 			slab->next = slab_to_discard;
 			slab_to_discard = slab;
+			dec_slabs_node(s, slab_nid(slab), slab->objects);
 		} else {
 			add_partial(n, slab, DEACTIVATE_TO_TAIL);
 			stat(s, FREE_ADD_PARTIAL);
@@ -2542,7 +2541,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 		slab_to_discard = slab_to_discard->next;
 
 		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, slab);
+		free_slab(s, slab);
 		stat(s, FREE_SLAB);
 	}
 }
@@ -3443,9 +3442,10 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 		remove_full(s, n, slab);
 	}
 
+	dec_slabs_node(s, slab_nid(slab), slab->objects);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	stat(s, FREE_SLAB);
-	discard_slab(s, slab);
+	free_slab(s, slab);
 }
 
 /*
@@ -4302,6 +4302,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 		if (!slab->inuse) {
 			remove_partial(n, slab);
 			list_add(&slab->slab_list, &discard);
+			dec_slabs_node(s, slab_nid(slab), slab->objects);
 		} else {
 			list_slab_objects(s, slab,
 			  "Objects remaining in %s on __kmem_cache_shutdown()");
@@ -4310,7 +4311,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	spin_unlock_irq(&n->list_lock);
 
 	list_for_each_entry_safe(slab, h, &discard, slab_list)
-		discard_slab(s, slab);
+		free_slab(s, slab);
 }
 
 bool __kmem_cache_empty(struct kmem_cache *s)
@@ -4640,6 +4641,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 			if (free == slab->objects) {
 				list_move(&slab->slab_list, &discard);
 				n->nr_partial--;
+				dec_slabs_node(s, slab_nid(slab), slab->objects);
 			} else if (free <= SHRINK_PROMOTE_MAX)
 				list_move(&slab->slab_list, promote + free - 1);
 		}
@@ -4655,7 +4657,7 @@ static int __kmem_cache_do_shrink(struct kmem_cache *s)
 
 	/* Release empty slabs */
 	list_for_each_entry_safe(slab, t, &discard, slab_list)
-		discard_slab(s, slab);
+		free_slab(s, slab);
 
 	if (slabs_node(s, node))
 		ret = 1;

From patchwork Tue Jul 12 02:28:07 2022
X-Patchwork-Submitter: Rongwei Wang
X-Patchwork-Id: 12914477
From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, vbabka@suse.cz, 42.hyeyoo@gmail.com,
	roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
	rientjes@google.com, penberg@kernel.org, cl@gentwo.de
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/3] mm/slub: delete confusing pr_err when debugging slub
Date: Tue, 12 Jul 2022 10:28:07 +0800
Message-Id: <20220712022807.44113-3-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>
References: <20220712022807.44113-1-rongwei.wang@linux.alibaba.com>

n->nr_slabs is updated when a slab is actually allocated or freed, but
that slab is not necessarily on the node's full or partial list. That
means the total count of slabs on a node's full and partial lists is
not necessarily equal to n->nr_slabs, even after flush_all() has been
called. As an example, error messages like the following are printed
when 'slabinfo -v' is executed:

SLUB: kmemleak_object 4157 slabs counted but counter=4161
SLUB: kmemleak_object 4072 slabs counted but counter=4077
SLUB: kmalloc-2k 19 slabs counted but counter=20
SLUB: kmalloc-2k 12 slabs counted but counter=13
SLUB: kmemleak_object 4205 slabs counted but counter=4209

Here, delete this pr_err() directly.
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 mm/slub.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 587416e39292..cdac004f232f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5059,11 +5059,6 @@ static int validate_slab_node(struct kmem_cache *s,
 		validate_slab(s, slab, obj_map);
 		count++;
 	}
-	if (count != atomic_long_read(&n->nr_slabs)) {
-		pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
-		       s->name, count, atomic_long_read(&n->nr_slabs));
-		slab_add_kunit_errors();
-	}
 
 out:
 	spin_unlock_irqrestore(&n->list_lock, flags);