From patchwork Mon Oct 4 13:46:43 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12534251
From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 55/62] mm: Convert slob to use struct slab
Date: Mon, 4 Oct 2021 14:46:43 +0100
Message-Id: <20211004134650.4031813-56-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>

Use struct slab throughout the slob allocator.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/slab.h | 15 +++++++++++++++
 mm/slob.c | 30 +++++++++++++++---------------
 2 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 7631e274a840..5eabc9352bbf 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -43,6 +43,21 @@ static inline void __slab_clear_pfmemalloc(struct slab *slab)
 	__clear_bit(PG_pfmemalloc, &slab->flags);
 }
 
+static inline bool slab_test_free(const struct slab *slab)
+{
+	return test_bit(PG_slob_free, &slab->flags);
+}
+
+static inline void __slab_set_free(struct slab *slab)
+{
+	__set_bit(PG_slob_free, &slab->flags);
+}
+
+static inline void __slab_clear_free(struct slab *slab)
+{
+	__clear_bit(PG_slob_free, &slab->flags);
+}
+
 static inline void *slab_address(const struct slab *slab)
 {
 	return page_address(slab_page(slab));
diff --git a/mm/slob.c b/mm/slob.c
index 8cede39054fc..be5c9c472bbb 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -105,21 +105,21 @@ static LIST_HEAD(free_slob_large);
 /*
  * slob_page_free: true for pages on free_slob_pages list.
  */
-static inline int slob_page_free(struct page *sp)
+static inline int slob_page_free(struct slab *sp)
 {
-	return PageSlobFree(sp);
+	return slab_test_free(sp);
 }
 
-static void set_slob_page_free(struct page *sp, struct list_head *list)
+static void set_slob_page_free(struct slab *sp, struct list_head *list)
 {
 	list_add(&sp->slab_list, list);
-	__SetPageSlobFree(sp);
+	__slab_set_free(sp);
 }
 
-static inline void clear_slob_page_free(struct page *sp)
+static inline void clear_slob_page_free(struct slab *sp)
 {
 	list_del(&sp->slab_list);
-	__ClearPageSlobFree(sp);
+	__slab_clear_free(sp);
 }
 
 #define SLOB_UNIT sizeof(slob_t)
@@ -234,7 +234,7 @@ static void slob_free_pages(void *b, int order)
  * freelist, in this case @page_removed_from_list will be set to
  * true (set to false otherwise).
  */
-static void *slob_page_alloc(struct page *sp, size_t size, int align,
+static void *slob_page_alloc(struct slab *sp, size_t size, int align,
 			      int align_offset, bool *page_removed_from_list)
 {
 	slob_t *prev, *cur, *aligned = NULL;
@@ -301,7 +301,7 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align,
 static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
 							int align_offset)
 {
-	struct page *sp;
+	struct slab *sp;
 	struct list_head *slob_list;
 	slob_t *b = NULL;
 	unsigned long flags;
@@ -323,7 +323,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
 		 * If there's a node specification, search for a partial
 		 * page with a matching node id in the freelist.
 		 */
-		if (node != NUMA_NO_NODE && page_to_nid(sp) != node)
+		if (node != NUMA_NO_NODE && slab_nid(sp) != node)
 			continue;
 #endif
 		/* Enough room on this page? */
@@ -358,8 +358,8 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
 		b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node);
 		if (!b)
 			return NULL;
-		sp = virt_to_page(b);
-		__SetPageSlab(sp);
+		sp = virt_to_slab(b);
+		__SetPageSlab(slab_page(sp));
 
 		spin_lock_irqsave(&slob_lock, flags);
 		sp->units = SLOB_UNITS(PAGE_SIZE);
@@ -381,7 +381,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
  */
 static void slob_free(void *block, int size)
 {
-	struct page *sp;
+	struct slab *sp;
 	slob_t *prev, *next, *b = (slob_t *)block;
 	slobidx_t units;
 	unsigned long flags;
@@ -391,7 +391,7 @@ static void slob_free(void *block, int size)
 		return;
 	BUG_ON(!size);
 
-	sp = virt_to_page(block);
+	sp = virt_to_slab(block);
 	units = SLOB_UNITS(size);
 
 	spin_lock_irqsave(&slob_lock, flags);
@@ -401,8 +401,8 @@ static void slob_free(void *block, int size)
 		if (slob_page_free(sp))
 			clear_slob_page_free(sp);
 		spin_unlock_irqrestore(&slob_lock, flags);
-		__ClearPageSlab(sp);
-		page_mapcount_reset(sp);
+		__ClearPageSlab(slab_page(sp));
+		page_mapcount_reset(slab_page(sp));
 		slob_free_pages(b, 0);
 		return;
 	}