From patchwork Mon Oct 4 13:46:01 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12533995
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 13/62] mm/slub: Convert new_slab() to return a struct slab
Date: Mon, 4 Oct 2021 14:46:01 +0100
Message-Id: <20211004134650.4031813-14-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
We can cast directly from struct page to struct slab in alloc_slab_page()
because the page pointer returned from the page allocator is guaranteed
to be a head page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 62 +++++++++++++++++++++++++++++++-------------------------------
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0a566a03d424..555c46cbae1f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1753,8 +1753,8 @@ static void *setup_object(struct kmem_cache *s, struct page *page,
 /*
  * Slab allocation and freeing
  */
-static inline struct page *alloc_slab_page(struct kmem_cache *s,
-		gfp_t flags, int node, struct kmem_cache_order_objects oo)
+static inline struct slab *alloc_slab(struct kmem_cache *s, gfp_t flags,
+		int node, struct kmem_cache_order_objects oo)
 {
 	struct page *page;
 	unsigned int order = oo_order(oo);
@@ -1764,7 +1764,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	else
 		page = __alloc_pages_node(node, flags, order);
 
-	return page;
+	return (struct slab *)page;
 }
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
@@ -1876,9 +1876,9 @@ static inline bool shuffle_freelist(struct kmem_cache *s, struct page *page)
 }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
-static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
+static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
-	struct page *page;
+	struct slab *slab;
 	struct kmem_cache_order_objects oo = s->oo;
 	gfp_t alloc_gfp;
 	void *start, *p, *next;
@@ -1897,63 +1897,63 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min))
 		alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_NOFAIL);
 
-	page = alloc_slab_page(s, alloc_gfp, node, oo);
-	if (unlikely(!page)) {
+	slab = alloc_slab(s, alloc_gfp, node, oo);
+	if (unlikely(!slab)) {
 		oo = s->min;
 		alloc_gfp = flags;
 		/*
 		 * Allocation may have failed due to fragmentation.
 		 * Try a lower order alloc if possible
 		 */
-		page = alloc_slab_page(s, alloc_gfp, node, oo);
-		if (unlikely(!page))
+		slab = alloc_slab(s, alloc_gfp, node, oo);
+		if (unlikely(!slab))
 			goto out;
 		stat(s, ORDER_FALLBACK);
 	}
 
-	page->objects = oo_objects(oo);
+	slab->objects = oo_objects(oo);
 
-	account_slab_page(page, oo_order(oo), s, flags);
+	account_slab(slab, oo_order(oo), s, flags);
 
-	page->slab_cache = s;
-	__SetPageSlab(page);
-	if (page_is_pfmemalloc(page))
-		SetPageSlabPfmemalloc(page);
+	slab->slab_cache = s;
+	__SetPageSlab(slab_page(slab));
+	if (page_is_pfmemalloc(slab_page(slab)))
+		slab_set_pfmemalloc(slab);
 
-	kasan_poison_slab(page);
+	kasan_poison_slab(slab_page(slab));
 
-	start = page_address(page);
+	start = slab_address(slab);
 
-	setup_page_debug(s, page, start);
+	setup_page_debug(s, slab_page(slab), start);
 
-	shuffle = shuffle_freelist(s, page);
+	shuffle = shuffle_freelist(s, slab_page(slab));
 
 	if (!shuffle) {
 		start = fixup_red_left(s, start);
-		start = setup_object(s, page, start);
-		page->freelist = start;
-		for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+		start = setup_object(s, slab_page(slab), start);
+		slab->freelist = start;
+		for (idx = 0, p = start; idx < slab->objects - 1; idx++) {
 			next = p + s->size;
-			next = setup_object(s, page, next);
+			next = setup_object(s, slab_page(slab), next);
 			set_freepointer(s, p, next);
 			p = next;
 		}
 		set_freepointer(s, p, NULL);
 	}
 
-	page->inuse = page->objects;
-	page->frozen = 1;
+	slab->inuse = slab->objects;
+	slab->frozen = 1;
 
 out:
-	if (!page)
+	if (!slab)
 		return NULL;
 
-	inc_slabs_node(s, page_to_nid(page), page->objects);
+	inc_slabs_node(s, slab_nid(slab), slab->objects);
 
-	return page;
+	return slab;
 }
 
-static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
+static struct slab *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
@@ -2991,7 +2991,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto check_new_page;
 
 	slub_put_cpu_ptr(s->cpu_slab);
-	page = new_slab(s, gfpflags, node);
+	page = slab_page(new_slab(s, gfpflags, node));
 	c = slub_get_cpu_ptr(s->cpu_slab);
 
 	if (unlikely(!page)) {
@@ -3896,7 +3896,7 @@ static void early_kmem_cache_node_alloc(int node)
 
 	BUG_ON(kmem_cache_node->size < sizeof(struct kmem_cache_node));
 
-	page = new_slab(kmem_cache_node, GFP_NOWAIT, node);
+	page = slab_page(new_slab(kmem_cache_node, GFP_NOWAIT, node));
 
 	BUG_ON(!page);
 	if (page_to_nid(page) != node) {
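
The cast added in alloc_slab() is only a pointer reinterpretation: it works
because struct slab is defined to overlay the fields of the head struct page,
so no data moves at runtime. Below is a minimal, self-contained sketch of that
idea. It is not part of the patch; the struct layouts and the page_slab()/
slab_page() helpers here are simplified stand-ins rather than the kernel's
definitions.

/*
 * Illustrative sketch only: simplified stand-ins for struct page and
 * struct slab, showing why casting the head-page pointer is safe when
 * the two layouts mirror each other exactly.
 */
#include <assert.h>
#include <stddef.h>

struct page {			/* stand-in, not the kernel definition */
	unsigned long flags;
	void *freelist;
	unsigned int inuse;
	void *slab_cache;
};

struct slab {			/* stand-in for the new struct slab */
	unsigned long flags;
	void *freelist;
	unsigned int inuse;
	void *slab_cache;
};

/* Converting between the two views is a pure pointer reinterpretation. */
static struct slab *page_slab(struct page *head_page)
{
	return (struct slab *)head_page;
}

static struct page *slab_page(struct slab *slab)
{
	return (struct page *)slab;
}

int main(void)
{
	/* The overlay is only valid if the layouts really do match. */
	static_assert(sizeof(struct slab) == sizeof(struct page),
		      "struct slab must mirror struct page");
	static_assert(offsetof(struct slab, freelist) ==
		      offsetof(struct page, freelist),
		      "freelist must sit at the same offset");

	struct page head = { .flags = 0 };
	struct slab *slab = page_slab(&head);

	assert(slab_page(slab) == &head);	/* round trip is the identity */
	return 0;
}

The series itself enforces the same invariant with compile-time assertions
where struct slab is introduced, so a layout drift between the two structures
fails the build rather than corrupting the cast.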