From patchwork Thu Sep  2 21:50:40 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12472661
Date: Thu, 02 Sep 2021 14:50:40 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
 cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
 penberg@kernel.org, rientjes@google.com, tglx@linutronix.de,
 torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 014/212] mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab()
Message-ID: <20210902215040.JtgJOciWe%akpm@linux-foundation.org>
In-Reply-To: <20210902144820.78957dff93d7bea620d55a89@linux-foundation.org>
From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab()

These functions differ only in irq disabling in the slow path.  We can
create a common function with an extra bool parameter to control the irq
disabling.  As the functions are inline and the parameter compile-time
constant, there will be no runtime overhead due to this change.

Also change the DEBUG_VM based irqs disable assert to the more standard
lockdep_assert based one.

Link: https://lkml.kernel.org/r/20210805152000.12817-7-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c | 62 ++++++++++++++++++++--------------------------------
 1 file changed, 24 insertions(+), 38 deletions(-)

--- a/mm/slub.c~mm-slub-unify-cmpxchg_double_slab-and-__cmpxchg_double_slab
+++ a/mm/slub.c
@@ -371,13 +371,13 @@ static __always_inline void slab_unlock(
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
-/* Interrupts must be disabled (for the fallback code to work right) */
-static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+static inline bool ___cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
-		const char *n)
+		const char *n, bool disable_irqs)
 {
-	VM_BUG_ON(!irqs_disabled());
+	if (!disable_irqs)
+		lockdep_assert_irqs_disabled();
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
     defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (s->flags & __CMPXCHG_DOUBLE) {
@@ -388,15 +388,23 @@ static inline bool __cmpxchg_double_slab
 	} else
 #endif
 	{
+		unsigned long flags;
+
+		if (disable_irqs)
+			local_irq_save(flags);
 		slab_lock(page);
 		if (page->freelist == freelist_old &&
 					page->counters == counters_old) {
 			page->freelist = freelist_new;
 			page->counters = counters_new;
 			slab_unlock(page);
+			if (disable_irqs)
+				local_irq_restore(flags);
 			return true;
 		}
 		slab_unlock(page);
+		if (disable_irqs)
+			local_irq_restore(flags);
 	}
 
 	cpu_relax();
@@ -409,45 +417,23 @@ static inline bool __cmpxchg_double_slab
 	return false;
 }
 
-static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+/* Interrupts must be disabled (for the fallback code to work right) */
+static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
 {
-#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
-    defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
-	if (s->flags & __CMPXCHG_DOUBLE) {
-		if (cmpxchg_double(&page->freelist, &page->counters,
-				   freelist_old, counters_old,
-				   freelist_new, counters_new))
-			return true;
-	} else
-#endif
-	{
-		unsigned long flags;
-
-		local_irq_save(flags);
-		slab_lock(page);
-		if (page->freelist == freelist_old &&
-					page->counters == counters_old) {
-			page->freelist = freelist_new;
-			page->counters = counters_new;
-			slab_unlock(page);
-			local_irq_restore(flags);
-			return true;
-		}
-		slab_unlock(page);
-		local_irq_restore(flags);
-	}
-
-	cpu_relax();
-	stat(s, CMPXCHG_DOUBLE_FAIL);
-
-#ifdef SLUB_DEBUG_CMPXCHG
-	pr_info("%s %s: cmpxchg double redo ", n, s->name);
-#endif
+	return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
+				      freelist_new, counters_new, n, false);
+}
 
-	return false;
+static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+		void *freelist_old, unsigned long counters_old,
+		void *freelist_new, unsigned long counters_new,
+		const char *n)
+{
+	return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
+				      freelist_new, counters_new, n, true);
 }
 
 #ifdef CONFIG_SLUB_DEBUG
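
For readers who want to see the "no runtime overhead" claim in isolation, here is a
minimal userspace sketch of the same pattern: one always-inline helper with a
compile-time-constant bool parameter, wrapped by two thin callers.  All names here
(update_counter, __update_counter, update_counter_irqsave, fake_local_irq_save/restore)
are invented for the illustration, and the "IRQ state" is just a flag; this is not the
kernel code above, only a model of the technique.

#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;	/* stand-in for the real IRQ state */

static inline void fake_local_irq_save(bool *flags)
{
	*flags = irqs_enabled;
	irqs_enabled = false;
}

static inline void fake_local_irq_restore(bool flags)
{
	irqs_enabled = flags;
}

/*
 * Common slow path.  'disable_irqs' is a compile-time constant at every
 * call site, so the compiler can drop the untaken branches entirely.
 */
static inline __attribute__((always_inline)) bool
update_counter(long *counter, long old, long new, bool disable_irqs)
{
	bool flags = false;
	bool ret = false;

	if (disable_irqs)
		fake_local_irq_save(&flags);

	if (*counter == old) {		/* cmpxchg-style compare-and-update */
		*counter = new;
		ret = true;
	}

	if (disable_irqs)
		fake_local_irq_restore(flags);

	return ret;
}

/* Caller guarantees "irqs" are already off; no save/restore is emitted. */
static inline bool __update_counter(long *counter, long old, long new)
{
	return update_counter(counter, old, new, false);
}

/* Caller may run with "irqs" on; the helper saves/restores around the update. */
static inline bool update_counter_irqsave(long *counter, long old, long new)
{
	return update_counter(counter, old, new, true);
}

int main(void)
{
	long c = 1;

	printf("irq-saving update: %d, counter = %ld\n",
	       update_counter_irqsave(&c, 1, 2), c);
	printf("plain update:      %d, counter = %ld\n",
	       __update_counter(&c, 2, 3), c);
	return 0;
}

Built with optimization, the constant 'disable_irqs' lets the compiler discard the dead
save/restore paths in each wrapper, which is the same reason the kernel change above is
expected to add no runtime overhead.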