From patchwork Mon Oct  4 13:46:44 2021
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 56/62] mm: Convert slub to use struct slab
Date: Mon,  4 Oct 2021 14:46:44 +0100
Message-Id: <20211004134650.4031813-57-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
Remaining bits & pieces.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 229fc56809c2..51ead3838fc1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -64,19 +64,19 @@
  *
  *   The slab_lock is only used for debugging and on arches that do not
  *   have the ability to do a cmpxchg_double. It only protects:
- *	A. slab->freelist	-> List of object free in a page
+ *	A. slab->freelist	-> List of object free in a slab
  *	B. slab->inuse		-> Number of objects in use
- *	C. slab->objects	-> Number of objects in page
+ *	C. slab->objects	-> Number of objects in slab
  *	D. slab->frozen		-> frozen state
  *
  *   Frozen slabs
  *
  *   If a slab is frozen then it is exempt from list management. It is not
  *   on any list except per cpu partial list. The processor that froze the
- *   slab is the one who can perform list operations on the page. Other
+ *   slab is the one who can perform list operations on the slab. Other
  *   processors may put objects onto the freelist but the processor that
  *   froze the slab is the only one that can retrieve the objects from the
- *   page's freelist.
+ *   slab's freelist.
  *
  *   list_lock
  *
@@ -135,7 +135,7 @@
  *			minimal so we rely on the page allocators per cpu caches for
  *			fast frees and allocs.
  *
- * page->frozen		The slab is frozen and exempt from list processing.
+ * slab->frozen		The slab is frozen and exempt from list processing.
  *			This means that the slab is dedicated to a purpose
  *			such as satisfying allocations for a specific
  *			processor. Objects may be freed in the slab while
@@ -250,7 +250,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 
 #define OO_SHIFT	16
 #define OO_MASK		((1 << OO_SHIFT) - 1)
-#define MAX_OBJS_PER_PAGE	32767 /* since page.objects is u15 */
+#define MAX_OBJS_PER_PAGE	32767 /* since slab.objects is u15 */
 
 /* Internal SLUB flags */
 /* Poison object */
@@ -1753,14 +1753,21 @@ static inline struct slab *alloc_slab(struct kmem_cache *s, gfp_t flags,
 		int node, struct kmem_cache_order_objects oo)
 {
 	struct page *page;
+	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
 		page = alloc_pages(flags, order);
 	else
 		page = __alloc_pages_node(node, flags, order);
+	if (!page)
+		return NULL;
 
-	return (struct slab *)page;
+	__SetPageSlab(page);
+	slab = (struct slab *)page;
+	if (page_is_pfmemalloc(page))
+		slab_set_pfmemalloc(slab);
+	return slab;
 }
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
@@ -1781,7 +1788,7 @@ static int init_cache_random_seq(struct kmem_cache *s)
 		return err;
 	}
 
-	/* Transform to an offset on the set of pages */
+	/* Transform to an offset on the set of slabs */
 	if (s->random_seq) {
 		unsigned int i;
 
@@ -1911,10 +1918,6 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	account_slab(slab, oo_order(oo), s, flags);
 	slab->slab_cache = s;
-	__SetPageSlab(slab_page(slab));
-	if (page_is_pfmemalloc(slab_page(slab)))
-		slab_set_pfmemalloc(slab);
-
 	kasan_poison_slab(slab_page(slab));
 
 	start = slab_address(slab);
@@ -3494,7 +3497,7 @@ static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	VM_BUG_ON_PAGE(!PageHead(page), page);
 	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
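
[Not part of the patch; an illustrative aside for review.]  The
(struct slab *)page cast in alloc_slab() is only sound while struct slab
is laid out as an exact overlay of struct page.  One way to make that
assumption explicit is a block of compile-time offset assertions.  The
sketch below is hypothetical for this point in the series; the struct
slab field names (__page_flags, slab_list) are placeholders for whatever
the definition introduced earlier in the series actually uses:

	#include <linux/build_bug.h>	/* static_assert() */
	#include <linux/stddef.h>	/* offsetof() */
	#include <linux/mm_types.h>	/* struct page */
	#include "slab.h"		/* struct slab, per this series */

	/*
	 * Fail the build if struct slab drifts out of sync with the
	 * struct page layout it is cast from.  Each SLAB_MATCH() pairs
	 * a struct page field with its struct slab counterpart.
	 */
	#define SLAB_MATCH(pg, sl)					\
		static_assert(offsetof(struct page, pg) ==		\
			      offsetof(struct slab, sl))
	SLAB_MATCH(flags, __page_flags);
	SLAB_MATCH(compound_head, slab_list);	/* bit 0 must stay clear */
	#undef SLAB_MATCH
	static_assert(sizeof(struct slab) <= sizeof(struct page));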
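
[Likewise illustrative.]  The PageCompound() -> PageHead() change
tightens the assertion in free_nonslab_page(): callers reach it via
virt_to_head_page(), so the page here is always a head page, and a tail
page (which PageCompound() would also accept) now trips the
VM_BUG_ON_PAGE().  Abridged caller context from the mainline kfree()
this series is based on (earlier patches in the series may already have
reworked it):

	void kfree(const void *x)
	{
		struct page *page;
		void *object = (void *)x;

		trace_kfree(_RET_IP_, x);

		if (unlikely(ZERO_OR_NULL_PTR(x)))
			return;

		page = virt_to_head_page(x);	/* always a head page */
		if (unlikely(!PageSlab(page))) {
			free_nonslab_page(page, object);
			return;
		}
		slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
	}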