From patchwork Tue May 11 15:07:32 2021
X-Patchwork-Submitter: Oliver Glitta
X-Patchwork-Id: 12251309
X-Patchwork-Delegate: brendanhiggins@google.com
From: glittao@gmail.com
To: brendanhiggins@google.com, cl@linux.com, penberg@kernel.org,
 rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 vbabka@suse.cz
Cc: linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com, linux-mm@kvack.org, elver@google.com,
 dlatypov@google.com, Oliver Glitta
Subject: [PATCH v5 1/3] kunit: make test->lock irq safe
Date: Tue, 11 May 2021 17:07:32 +0200
Message-Id: <20210511150734.3492-1-glittao@gmail.com>

From: Vlastimil Babka

The upcoming SLUB kunit test will be calling kunit_find_named_resource() from
a context with disabled interrupts. That means kunit's test->lock needs to be
IRQ safe to avoid potential deadlocks and lockdep splats.

This patch therefore changes the test->lock usage to spin_lock_irqsave() and
spin_unlock_irqrestore().

Signed-off-by: Vlastimil Babka
Signed-off-by: Oliver Glitta
Reviewed-by: Brendan Higgins
---
Changes since v4
  Rebased whole series on 5.13-rc1

 include/kunit/test.h |  5 +++--
 lib/kunit/test.c     | 18 +++++++++++-------
 2 files changed, 14 insertions(+), 9 deletions(-)

-- 
2.31.1.272.g89b43f80a5

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 49601c4b98b8..524d4789af22 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -515,8 +515,9 @@ kunit_find_resource(struct kunit *test,
 			    void *match_data)
 {
 	struct kunit_resource *res, *found = NULL;
+	unsigned long flags;
 
-	spin_lock(&test->lock);
+	spin_lock_irqsave(&test->lock, flags);
 
 	list_for_each_entry_reverse(res, &test->resources, node) {
 		if (match(test, res, (void *)match_data)) {
@@ -526,7 +527,7 @@ kunit_find_resource(struct kunit *test,
 		}
 	}
 
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 
 	return found;
 }
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 2f6cc0123232..45f068864d76 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -475,6 +475,7 @@ int kunit_add_resource(struct kunit *test,
 		       void *data)
 {
 	int ret = 0;
+	unsigned long flags;
 
 	res->free = free;
 	kref_init(&res->refcount);
@@ -487,10 +488,10 @@ int kunit_add_resource(struct kunit *test,
 		res->data = data;
 	}
 
-	spin_lock(&test->lock);
+	spin_lock_irqsave(&test->lock, flags);
 	list_add_tail(&res->node, &test->resources);
 	/* refcount for list is established by kref_init() */
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 
 	return ret;
 }
@@ -548,9 +549,11 @@ EXPORT_SYMBOL_GPL(kunit_alloc_and_get_resource);
 void kunit_remove_resource(struct kunit *test, struct kunit_resource *res)
 {
-	spin_lock(&test->lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
 	list_del(&res->node);
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 	kunit_put_resource(res);
 }
 EXPORT_SYMBOL_GPL(kunit_remove_resource);
@@ -630,6 +633,7 @@ EXPORT_SYMBOL_GPL(kunit_kfree);
 void kunit_cleanup(struct kunit *test)
 {
 	struct kunit_resource *res;
+	unsigned long flags;
 
 	/*
 	 * test->resources is a stack - each allocation must be freed in the
@@ -641,9 +645,9 @@ void kunit_cleanup(struct kunit *test)
 	 * protect against the current node being deleted, not the next.
 	 */
 	while (true) {
-		spin_lock(&test->lock);
+		spin_lock_irqsave(&test->lock, flags);
 		if (list_empty(&test->resources)) {
-			spin_unlock(&test->lock);
+			spin_unlock_irqrestore(&test->lock, flags);
 			break;
 		}
 		res = list_last_entry(&test->resources,
@@ -654,7 +658,7 @@
 		 * resource, and this can't happen if the test->lock
 		 * is held.
 		 */
-		spin_unlock(&test->lock);
+		spin_unlock_irqrestore(&test->lock, flags);
 		kunit_remove_resource(test, res);
 	}
 	current->kunit_test = NULL;
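[Editor's note: the sketch below is illustrative only and is not part of the
series. The function name count_error_under_node_lock() is hypothetical; the
real code is slab_add_kunit_errors() in patch 2/3. It shows why this patch
makes test->lock IRQ-safe: the SLUB test looks up its "slab_errors" resource
from code that already holds a spinlock taken with spin_lock_irqsave(), and
lockdep warns when a non-IRQ-safe lock is acquired while an IRQ-safe one is
held.]

	/* Illustrative sketch only -- hypothetical helper, not kernel code. */
	static void count_error_under_node_lock(struct kmem_cache_node *n)
	{
		struct kunit_resource *res;
		unsigned long flags;

		spin_lock_irqsave(&n->list_lock, flags);	/* interrupts now off */

		/*
		 * kunit_find_named_resource() takes test->lock internally, so
		 * test->lock itself must be taken with spin_lock_irqsave().
		 */
		res = kunit_find_named_resource(current->kunit_test, "slab_errors");
		if (res) {
			(*(int *)res->data)++;
			kunit_put_resource(res);
		}

		spin_unlock_irqrestore(&n->list_lock, flags);
	}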
From patchwork Tue May 11 15:07:33 2021
X-Patchwork-Submitter: Oliver Glitta
X-Patchwork-Id: 12251305
From: glittao@gmail.com
To: brendanhiggins@google.com, cl@linux.com, penberg@kernel.org,
 rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 vbabka@suse.cz
Cc: linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com, linux-mm@kvack.org, elver@google.com,
 dlatypov@google.com, Oliver Glitta
Subject: [PATCH v5 2/3] mm/slub, kunit: add a KUnit test for SLUB debugging functionality
Date: Tue, 11 May 2021 17:07:33 +0200
Message-Id: <20210511150734.3492-2-glittao@gmail.com>
In-Reply-To: <20210511150734.3492-1-glittao@gmail.com>
References: <20210511150734.3492-1-glittao@gmail.com>

From: Oliver Glitta

SLUB has a resiliency_test() function which is hidden behind an #ifdef
SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody runs it. KUnit
should be a proper replacement for it.

Try changing a byte in the redzone after allocation, and changing the pointer
to the next free node, the first byte, the 50th byte and the redzone byte.
Check whether validation finds the errors.

There are several differences from the original resiliency test: the tests
create their own caches with a known state instead of corrupting the shared
kmalloc caches, and the freepointer corruption now uses the correct offset
(the original resiliency test was broken by the freepointer changes). The
"corrupt a random byte" test is dropped, because it is not meaningful in this
form, where deterministic results are needed.

Add a new Kconfig option, CONFIG_SLUB_KUNIT_TEST. The tests next_pointer,
first_word and clobber_50th_byte do not run with the KASAN option on, because
they deliberately modify non-allocated objects.

Use kunit_resource to count errors in the cache and to silence the bug
reports. Count an error whenever slab_bug() or slab_fix() is called, or when
the count of pages is wrong.

Signed-off-by: Oliver Glitta
Reviewed-by: Marco Elver
Reviewed-by: Vlastimil Babka
Acked-by: Daniel Latypov
Acked-by: Marco Elver
---
Changes since v4
  Use two tests with KASAN dependency.
  Remove setting current test during init and exit.

Changes since v3
  Use kunit_resource to silence bug reports and count errors, suggested by
  Marco Elver.
  Make the test depend on !KASAN, thanks to a report from the kernel test robot.

Changes since v2
  Use bit operation & instead of logical && as reported by kernel test robot
  and Dan Carpenter.

Changes since v1
  Conversion from kselftest to KUnit test, suggested by Marco Elver.
  Error silencing.
  Error counting improvements.

 lib/Kconfig.debug |  12 ++++
 lib/Makefile      |   1 +
 lib/slub_kunit.c  | 155 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h         |   1 +
 mm/slub.c         |  46 +++++++++++++-
 5 files changed, 212 insertions(+), 3 deletions(-)
 create mode 100644 lib/slub_kunit.c

-- 
2.31.1.272.g89b43f80a5

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 678c13967580..7723f58a9394 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2429,6 +2429,18 @@ config BITS_TEST
 
 	  If unsure, say N.
 
+config SLUB_KUNIT_TEST
+	tristate "KUnit test for SLUB cache error detection" if !KUNIT_ALL_TESTS
+	depends on SLUB_DEBUG && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  This builds SLUB allocator unit test.
+	  Tests SLUB cache debugging functionality.
+	  For more information on KUnit and unit tests in general please refer
+	  to the KUnit documentation in Documentation/dev-tools/kunit/.
+
+	  If unsure, say N.
+
 config TEST_UDELAY
 	tristate "udelay test driver"
 	help
diff --git a/lib/Makefile b/lib/Makefile
index e11cfc18b6c0..386215dcb0a0 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -353,5 +353,6 @@ obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
 obj-$(CONFIG_BITS_TEST) += test_bits.o
 obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o
+obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o
 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
 
diff --git a/lib/slub_kunit.c b/lib/slub_kunit.c
new file mode 100644
index 000000000000..f28965f64ef6
--- /dev/null
+++ b/lib/slub_kunit.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+#include
+#include
+#include "../mm/slab.h"
+
+static struct kunit_resource resource;
+static int slab_errors;
+
+static void test_clobber_zone(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_alloc", 64, 0,
+				SLAB_RED_ZONE, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kasan_disable_current();
+	p[64] = 0x12;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_free(s, p);
+	kmem_cache_destroy(s);
+}
+
+#ifndef CONFIG_KASAN
+static void test_next_pointer(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_next_ptr_free", 64, 0,
+				SLAB_POISON, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+	unsigned long tmp;
+	unsigned long *ptr_addr;
+
+	kmem_cache_free(s, p);
+
+	ptr_addr = (unsigned long *)(p + s->offset);
+	tmp = *ptr_addr;
+	p[s->offset] = 0x12;
+
+	/*
+	 * Expecting three errors.
+	 * One for the corrupted freechain and the other one for the wrong
+	 * count of objects in use. The third error is fixing broken cache.
+	 */
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 3, slab_errors);
+
+	/*
+	 * Try to repair corrupted freepointer.
+	 * Still expecting two errors. The first for the wrong count
+	 * of objects in use.
+	 * The second error is for fixing broken cache.
+	 */
+	*ptr_addr = tmp;
+	slab_errors = 0;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	/*
+	 * Previous validation repaired the count of objects in use.
+	 * Now expecting no error.
+	 */
+	slab_errors = 0;
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 0, slab_errors);
+
+	kmem_cache_destroy(s);
+}
+
+static void test_first_word(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_1th_word_free", 64, 0,
+				SLAB_POISON, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kmem_cache_free(s, p);
+	*p = 0x78;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kmem_cache_destroy(s);
+}
+
+static void test_clobber_50th_byte(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_50th_word_free", 64, 0,
+				SLAB_POISON, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kmem_cache_free(s, p);
+	p[50] = 0x9a;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kmem_cache_destroy(s);
+}
+#endif
+
+static void test_clobber_redzone_free(struct kunit *test)
+{
+	struct kmem_cache *s = kmem_cache_create("TestSlub_RZ_free", 64, 0,
+				SLAB_RED_ZONE, NULL);
+	u8 *p = kmem_cache_alloc(s, GFP_KERNEL);
+
+	kasan_disable_current();
+	kmem_cache_free(s, p);
+	p[64] = 0xab;
+
+	validate_slab_cache(s);
+	KUNIT_EXPECT_EQ(test, 2, slab_errors);
+
+	kasan_enable_current();
+	kmem_cache_destroy(s);
+}
+
+static int test_init(struct kunit *test)
+{
+	slab_errors = 0;
+
+	kunit_add_named_resource(test, NULL, NULL, &resource,
+					"slab_errors", &slab_errors);
+	return 0;
+}
+
+static void test_exit(struct kunit *test) {}
+
+static struct kunit_case test_cases[] = {
+	KUNIT_CASE(test_clobber_zone),
+
+#ifndef CONFIG_KASAN
+	KUNIT_CASE(test_next_pointer),
+	KUNIT_CASE(test_first_word),
+	KUNIT_CASE(test_clobber_50th_byte),
+#endif
+
+	KUNIT_CASE(test_clobber_redzone_free),
+	{}
+};
+
+static struct kunit_suite test_suite = {
+	.name = "slub_test",
+	.init = test_init,
+	.exit = test_exit,
+	.test_cases = test_cases,
+};
+kunit_test_suite(test_suite);
+
+MODULE_LICENSE("GPL");
diff --git a/mm/slab.h b/mm/slab.h
index 18c1927cd196..9b690fa44cae 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -215,6 +215,7 @@ DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
 DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
 extern void print_tracking(struct kmem_cache *s, void *object);
+long validate_slab_cache(struct kmem_cache *s);
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
diff --git a/mm/slub.c b/mm/slub.c
index feda53ae62ba..985fd6ef033c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include
 #include
 
@@ -447,6 +448,26 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+#if IS_ENABLED(CONFIG_KUNIT)
+static bool slab_add_kunit_errors(void)
+{
+	struct kunit_resource *resource;
+
+	if (likely(!current->kunit_test))
+		return false;
+
+	resource = kunit_find_named_resource(current->kunit_test, "slab_errors");
+	if (!resource)
+		return false;
+
+	(*(int *)resource->data)++;
+	kunit_put_resource(resource);
+	return true;
+}
+#else
+static inline bool slab_add_kunit_errors(void) { return false; }
+#endif
+
 /*
  * Determine a map of object in use on a page.
  *
@@ -677,6 +698,9 @@ static void slab_fix(struct kmem_cache *s, char *fmt, ...)
 	struct va_format vaf;
 	va_list args;
 
+	if (slab_add_kunit_errors())
+		return;
+
 	va_start(args, fmt);
 	vaf.fmt = fmt;
 	vaf.va = &args;
@@ -740,6 +764,9 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 void object_err(struct kmem_cache *s, struct page *page,
 			u8 *object, char *reason)
 {
+	if (slab_add_kunit_errors())
+		return;
+
 	slab_bug(s, "%s", reason);
 	print_trailer(s, page, object);
 }
@@ -750,6 +777,9 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
 	va_list args;
 	char buf[100];
 
+	if (slab_add_kunit_errors())
+		return;
+
 	va_start(args, fmt);
 	vsnprintf(buf, sizeof(buf), fmt, args);
 	va_end(args);
@@ -799,12 +829,16 @@ static int check_bytes_and_report(struct kmem_cache *s, struct page *page,
 	while (end > fault && end[-1] == value)
 		end--;
 
+	if (slab_add_kunit_errors())
+		goto skip_bug_print;
+
 	slab_bug(s, "%s overwritten", what);
 	pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n",
 					fault, end - 1, fault - addr,
 					fault[0], value);
 	print_trailer(s, page, object);
 
+skip_bug_print:
 	restore_bytes(s, what, value, fault, end);
 	return 0;
 }
@@ -4662,9 +4696,11 @@ static int validate_slab_node(struct kmem_cache *s,
 		validate_slab(s, page);
 		count++;
 	}
-	if (count != n->nr_partial)
+	if (count != n->nr_partial) {
 		pr_err("SLUB %s: %ld partial slabs counted but counter=%ld\n",
 		       s->name, count, n->nr_partial);
+		slab_add_kunit_errors();
+	}
 
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
@@ -4673,16 +4709,18 @@ static int validate_slab_node(struct kmem_cache *s,
 		validate_slab(s, page);
 		count++;
 	}
-	if (count != atomic_long_read(&n->nr_slabs))
+	if (count != atomic_long_read(&n->nr_slabs)) {
 		pr_err("SLUB: %s %ld slabs counted but counter=%ld\n",
 		       s->name, count, atomic_long_read(&n->nr_slabs));
+		slab_add_kunit_errors();
+	}
 
 out:
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return count;
 }
 
-static long validate_slab_cache(struct kmem_cache *s)
+long validate_slab_cache(struct kmem_cache *s)
 {
 	int node;
 	unsigned long count = 0;
@@ -4694,6 +4732,8 @@ static long validate_slab_cache(struct kmem_cache *s)
 
 	return count;
 }
+EXPORT_SYMBOL(validate_slab_cache);
+
 /*
  * Generate lists of code addresses where slabcache objects are allocated
  * and freed.
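[Editor's note: not part of the series. A quick way to exercise the new test
is the KUnit wrapper, ./tools/testing/kunit/kunit.py run, with a .kunitconfig
along the lines of the sketch below; the exact option set is an assumption.
CONFIG_KASAN is left unset so that the three KASAN-incompatible cases
(test_next_pointer, test_first_word, test_clobber_50th_byte) are compiled in.
Since CONFIG_SLUB_KUNIT_TEST is tristate, the test can alternatively be built
as a module and loaded on a SLUB_DEBUG kernel, with the TAP results appearing
in the kernel log.]

	CONFIG_KUNIT=y
	CONFIG_SLUB=y
	CONFIG_SLUB_DEBUG=y
	CONFIG_SLUB_KUNIT_TEST=y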
From patchwork Tue May 11 15:07:34 2021
X-Patchwork-Submitter: Oliver Glitta
X-Patchwork-Id: 12251307
From: glittao@gmail.com
To: brendanhiggins@google.com, cl@linux.com, penberg@kernel.org,
 rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
 vbabka@suse.cz
Cc: linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 kunit-dev@googlegroups.com, linux-mm@kvack.org, elver@google.com,
 dlatypov@google.com, Oliver Glitta
Subject: [PATCH v5 3/3] slub: remove resiliency_test() function
Date: Tue, 11 May 2021 17:07:34 +0200
Message-Id: <20210511150734.3492-3-glittao@gmail.com>
In-Reply-To: <20210511150734.3492-1-glittao@gmail.com>
References: <20210511150734.3492-1-glittao@gmail.com>

From: Oliver Glitta

The resiliency_test() function is hidden behind an #ifdef
SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody runs it. It is
replaced by the KUnit test for SLUB added by the previous patch "mm/slub,
kunit: add a KUnit test for SLUB debugging functionality".

Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
Acked-by: David Rientjes
---
 mm/slub.c | 64 -------------------------------------------------------
 1 file changed, 64 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 985fd6ef033c..88e2c1847698 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -154,9 +154,6 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  *			- Variable sizing of the per node arrays
  */
 
-/* Enable to test recovery from slab corruption on boot */
-#undef SLUB_RESILIENCY_TEST
-
 /* Enable to log cmpxchg failures */
 #undef SLUB_DEBUG_CMPXCHG
 
@@ -4951,66 +4948,6 @@ static int list_locations(struct kmem_cache *s, char *buf,
 }
 #endif	/* CONFIG_SLUB_DEBUG */
 
-#ifdef SLUB_RESILIENCY_TEST
-static void __init resiliency_test(void)
-{
-	u8 *p;
-	int type = KMALLOC_NORMAL;
-
-	BUILD_BUG_ON(KMALLOC_MIN_SIZE > 16 || KMALLOC_SHIFT_HIGH < 10);
-
-	pr_err("SLUB resiliency testing\n");
-	pr_err("-----------------------\n");
-	pr_err("A. Corruption after allocation\n");
-
-	p = kzalloc(16, GFP_KERNEL);
-	p[16] = 0x12;
-	pr_err("\n1. kmalloc-16: Clobber Redzone/next pointer 0x12->0x%p\n\n",
-	       p + 16);
-
-	validate_slab_cache(kmalloc_caches[type][4]);
-
-	/* Hmmm... The next two are dangerous */
-	p = kzalloc(32, GFP_KERNEL);
-	p[32 + sizeof(void *)] = 0x34;
-	pr_err("\n2. kmalloc-32: Clobber next pointer/next slab 0x34 -> -0x%p\n",
-	       p);
-	pr_err("If allocated object is overwritten then not detectable\n\n");
-
-	validate_slab_cache(kmalloc_caches[type][5]);
-	p = kzalloc(64, GFP_KERNEL);
-	p += 64 + (get_cycles() & 0xff) * sizeof(void *);
-	*p = 0x56;
-	pr_err("\n3. kmalloc-64: corrupting random byte 0x56->0x%p\n",
-	       p);
-	pr_err("If allocated object is overwritten then not detectable\n\n");
-	validate_slab_cache(kmalloc_caches[type][6]);
-
-	pr_err("\nB. Corruption after free\n");
-	p = kzalloc(128, GFP_KERNEL);
-	kfree(p);
-	*p = 0x78;
-	pr_err("1. kmalloc-128: Clobber first word 0x78->0x%p\n\n", p);
-	validate_slab_cache(kmalloc_caches[type][7]);
-
-	p = kzalloc(256, GFP_KERNEL);
-	kfree(p);
-	p[50] = 0x9a;
-	pr_err("\n2. kmalloc-256: Clobber 50th byte 0x9a->0x%p\n\n", p);
-	validate_slab_cache(kmalloc_caches[type][8]);
-
-	p = kzalloc(512, GFP_KERNEL);
-	kfree(p);
-	p[512] = 0xab;
-	pr_err("\n3. kmalloc-512: Clobber redzone 0xab->0x%p\n\n", p);
-	validate_slab_cache(kmalloc_caches[type][9]);
-}
-#else
-#ifdef CONFIG_SYSFS
-static void resiliency_test(void) {};
-#endif
-#endif	/* SLUB_RESILIENCY_TEST */
-
 #ifdef CONFIG_SYSFS
 enum slab_stat_type {
 	SL_ALL,			/* All slabs */
@@ -5859,7 +5796,6 @@ static int __init slab_sysfs_init(void)
 	}
 
 	mutex_unlock(&slab_mutex);
-	resiliency_test();
 	return 0;
 }