From patchwork Mon Oct 4 13:46:36 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12534235
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 48/62] mm/slub: Convert cmpxchg_double_slab to struct slab
Date: Mon, 4 Oct 2021 14:46:36 +0100
Message-Id: <20211004134650.4031813-49-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
X-Mailer: git-send-email 2.31.1

Improve type safety for both cmpxchg_double_slab() and
__cmpxchg_double_slab().

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/slub.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 72a50fab64b5..0d9299679ea2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -448,7 +448,7 @@ static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
  * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
  * so we disable interrupts as part of slab_[un]lock().
  */
-static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
@@ -458,7 +458,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
     defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (s->flags & __CMPXCHG_DOUBLE) {
-		if (cmpxchg_double(&page->freelist, &page->counters,
+		if (cmpxchg_double(&slab->freelist, &slab->counters,
 				   freelist_old, counters_old,
 				   freelist_new, counters_new))
 			return true;
@@ -468,15 +468,15 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page
 		/* init to 0 to prevent spurious warnings */
 		unsigned long flags = 0;
 
-		slab_lock(page, &flags);
-		if (page->freelist == freelist_old &&
-					page->counters == counters_old) {
-			page->freelist = freelist_new;
-			page->counters = counters_new;
-			slab_unlock(page, &flags);
+		slab_lock(slab_page(slab), &flags);
+		if (slab->freelist == freelist_old &&
+					slab->counters == counters_old) {
+			slab->freelist = freelist_new;
+			slab->counters = counters_new;
+			slab_unlock(slab_page(slab), &flags);
 			return true;
 		}
-		slab_unlock(page, &flags);
+		slab_unlock(slab_page(slab), &flags);
 	}
 
 	cpu_relax();
@@ -489,7 +489,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page
 	return false;
 }
 
-static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
@@ -497,7 +497,7 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
     defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (s->flags & __CMPXCHG_DOUBLE) {
-		if (cmpxchg_double(&page->freelist, &page->counters,
+		if (cmpxchg_double(&slab->freelist, &slab->counters,
 				   freelist_old, counters_old,
 				   freelist_new, counters_new))
 			return true;
@@ -507,16 +507,16 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		unsigned long flags;
 
 		local_irq_save(flags);
-		__slab_lock(page);
-		if (page->freelist == freelist_old &&
-					page->counters == counters_old) {
-			page->freelist = freelist_new;
-			page->counters = counters_new;
-			__slab_unlock(page);
+		__slab_lock(slab_page(slab));
+		if (slab->freelist == freelist_old &&
+					slab->counters == counters_old) {
+			slab->freelist = freelist_new;
+			slab->counters = counters_new;
+			__slab_unlock(slab_page(slab));
 			local_irq_restore(flags);
 			return true;
 		}
-		__slab_unlock(page);
+		__slab_unlock(slab_page(slab));
 		local_irq_restore(flags);
 	}
 
@@ -2068,7 +2068,7 @@ static inline void *acquire_slab(struct kmem_cache *s,
 	VM_BUG_ON(new.frozen);
 	new.frozen = 1;
 
-	if (!__cmpxchg_double_slab(s, slab_page(slab),
+	if (!__cmpxchg_double_slab(s, slab,
 			freelist, counters,
 			new.freelist, new.counters,
 			"acquire_slab"))
@@ -2412,7 +2412,7 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
 	}
 
 	l = m;
-	if (!cmpxchg_double_slab(s, slab_page(slab),
+	if (!cmpxchg_double_slab(s, slab,
 				old.freelist, old.counters,
 				new.freelist, new.counters,
 				"unfreezing slab"))
@@ -2466,7 +2466,7 @@ static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 
 			new.frozen = 0;
 
-		} while (!__cmpxchg_double_slab(s, slab_page(slab),
+		} while (!__cmpxchg_double_slab(s, slab,
 				old.freelist, old.counters,
 				new.freelist, new.counters,
 				"unfreezing slab"));
@@ -2837,7 +2837,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
 		new.inuse = slab->objects;
 		new.frozen = freelist != NULL;
 
-	} while (!__cmpxchg_double_slab(s, slab_page(slab),
+	} while (!__cmpxchg_double_slab(s, slab,
 		freelist, counters,
 		NULL, new.counters,
 		"get_freelist"));
@@ -3329,7 +3329,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 			}
 		}
 
-	} while (!cmpxchg_double_slab(s, slab_page(slab),
+	} while (!cmpxchg_double_slab(s, slab,
 		prior, counters,
 		head, new.counters,
 		"__slab_free"));
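
For reference, every caller now passes the struct slab directly instead of
converting with slab_page() at the call site. The sketch below shows the
resulting call shape; the wrapper name try_swap_freelist and its parameters
are illustrative only and not part of this patch (the real callers are
acquire_slab(), deactivate_slab(), __unfreeze_partials(), get_freelist() and
__slab_free(), as converted above):

/*
 * Minimal sketch (not part of this patch): shape of a cmpxchg_double_slab()
 * caller after the conversion. Names and locals are illustrative only.
 */
static bool try_swap_freelist(struct kmem_cache *s, struct slab *slab,
			      void *old_freelist, unsigned long old_counters,
			      void *new_freelist, unsigned long new_counters)
{
	/* The slab is passed directly; no slab_page() conversion needed. */
	return cmpxchg_double_slab(s, slab,
				   old_freelist, old_counters,
				   new_freelist, new_counters,
				   "try_swap_freelist");
}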