From patchwork Mon Oct 4 13:46:15 2021
X-Patchwork-Id: 12534109
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 27/62] mm/slub: Convert __unfreeze_partials to take a struct slab
Date: Mon, 4 Oct 2021 14:46:15 +0100
Message-Id: <20211004134650.4031813-28-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Improves type safety while removing a few calls to slab_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 54 +++++++++++++++++++++++++++---------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f33a196fe64f..e6fd0619d1f2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2437,20 +2437,20 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 }
 
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page)
+static void __unfreeze_partials(struct kmem_cache *s, struct slab *partial_slab)
 {
 	struct kmem_cache_node *n = NULL, *n2 = NULL;
-	struct page *page, *discard_page = NULL;
+	struct slab *slab, *unusable = NULL;
 	unsigned long flags = 0;
 
-	while (partial_page) {
-		struct page new;
-		struct page old;
+	while (partial_slab) {
+		struct slab new;
+		struct slab old;
 
-		page = partial_page;
-		partial_page = page->next;
+		slab = partial_slab;
+		partial_slab = slab->next;
 
-		n2 = get_node(s, page_to_nid(page));
+		n2 = get_node(s, slab_nid(slab));
 		if (n != n2) {
 			if (n)
 				spin_unlock_irqrestore(&n->list_lock, flags);
@@ -2461,8 +2461,8 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page)
 
 		do {
 
-			old.freelist = page->freelist;
-			old.counters = page->counters;
+			old.freelist = slab->freelist;
+			old.counters = slab->counters;
 			VM_BUG_ON(!old.frozen);
 
 			new.counters = old.counters;
@@ -2470,16 +2470,16 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page)
 
 			new.frozen = 0;
 
-		} while (!__cmpxchg_double_slab(s, page,
+		} while (!__cmpxchg_double_slab(s, slab_page(slab),
 				old.freelist, old.counters,
 				new.freelist, new.counters,
 				"unfreezing slab"));
 
 		if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
-			page->next = discard_page;
-			discard_page = page;
+			slab->next = unusable;
+			unusable = slab;
 		} else {
-			add_partial(n, page, DEACTIVATE_TO_TAIL);
+			add_partial(n, slab_page(slab), DEACTIVATE_TO_TAIL);
 			stat(s, FREE_ADD_PARTIAL);
 		}
 	}
@@ -2487,12 +2487,12 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page)
 	if (n)
 		spin_unlock_irqrestore(&n->list_lock, flags);
 
-	while (discard_page) {
-		page = discard_page;
-		discard_page = discard_page->next;
+	while (unusable) {
+		slab = unusable;
+		unusable = unusable->next;
 
 		stat(s, DEACTIVATE_EMPTY);
-		discard_slab(s, page);
+		discard_slab(s, slab_page(slab));
 		stat(s, FREE_SLAB);
 	}
 }
@@ -2502,28 +2502,28 @@ static void __unfreeze_partials(struct kmem_cache *s, struct page *partial_page)
  */
 static void unfreeze_partials(struct kmem_cache *s)
 {
-	struct page *partial_page;
+	struct slab *partial_slab;
 	unsigned long flags;
 
 	local_lock_irqsave(&s->cpu_slab->lock, flags);
-	partial_page = slab_page(this_cpu_read(s->cpu_slab->partial));
+	partial_slab = this_cpu_read(s->cpu_slab->partial);
 	this_cpu_write(s->cpu_slab->partial, NULL);
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-	if (partial_page)
-		__unfreeze_partials(s, partial_page);
+	if (partial_slab)
+		__unfreeze_partials(s, partial_slab);
 }
 
 static void unfreeze_partials_cpu(struct kmem_cache *s,
 				  struct kmem_cache_cpu *c)
 {
-	struct page *partial_page;
+	struct slab *partial_slab;
 
-	partial_page = slab_page(slub_percpu_partial(c));
+	partial_slab = slub_percpu_partial(c);
 	c->partial = NULL;
 
-	if (partial_page)
-		__unfreeze_partials(s, partial_page);
+	if (partial_slab)
+		__unfreeze_partials(s, partial_slab);
 }
 
 /*
@@ -2572,7 +2572,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (slab_to_unfreeze) {
-		__unfreeze_partials(s, slab_page(slab_to_unfreeze));
+		__unfreeze_partials(s, slab_to_unfreeze);
 		stat(s, CPU_PARTIAL_DRAIN);
 	}
 }
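
To illustrate the type-safety argument, here is a minimal, self-contained sketch. The structures below are simplified stand-ins rather than the kernel's definitions (the real struct slab reuses struct page's memory layout), and unfreeze_partials_sketch() is a hypothetical analogue of the converted __unfreeze_partials(); the only point carried over is that a caller holding a struct page * is now rejected at compile time instead of being accepted silently.

/* Simplified stand-ins for illustration only; not the kernel definitions. */
#include <stddef.h>

struct page {
	void *freelist;
	unsigned long counters;
	struct page *next;
};

struct slab {
	void *freelist;
	unsigned long counters;
	struct slab *next;
};

/* Hypothetical analogue of the converted helper: only a struct slab * is accepted. */
static void unfreeze_partials_sketch(struct slab *partial_slab)
{
	while (partial_slab) {
		struct slab *slab = partial_slab;

		partial_slab = slab->next;
		/* per-slab unfreeze work would happen here */
	}
}

int main(void)
{
	struct slab s = { NULL, 0, NULL };
	struct page p = { NULL, 0, NULL };

	unfreeze_partials_sketch(&s);	/* ok: the expected type */
	/* unfreeze_partials_sketch(&p);   now a compile-time error, not a silent bug */
	(void)p;
	return 0;
}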