From patchwork Mon Oct 4 13:45:59 2021
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 11/62] mm/slub: Convert kfree() to use a struct slab
Date: Mon, 4 Oct 2021 14:45:59 +0100
Message-Id: <20211004134650.4031813-12-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
With kfree() using a struct slab, we can also convert slab_free() and
do_slab_free() to use a slab instead of a page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 050a0610b3ef..15996ea165ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3402,11 +3402,11 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
  * with all sorts of special processing.
  *
  * Bulk free of a freelist with several objects (all pointing to the
- * same page) possible by specifying head and tail ptr, plus objects
+ * same slab) possible by specifying head and tail ptr, plus objects
  * count (cnt). Bulk free indicated by tail pointer being set.
  */
 static __always_inline void do_slab_free(struct kmem_cache *s,
-				struct page *page, void *head, void *tail,
+				struct slab *slab, void *head, void *tail,
 				int cnt, unsigned long addr)
 {
 	void *tail_obj = tail ? : head;
@@ -3427,7 +3427,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	/* Same with comment on barrier() in slab_alloc_node() */
 	barrier();
 
-	if (likely(page == c->page)) {
+	if (likely(slab_page(slab) == c->page)) {
 #ifndef CONFIG_PREEMPT_RT
 		void **freelist = READ_ONCE(c->freelist);
 
@@ -3453,7 +3453,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 
 	local_lock(&s->cpu_slab->lock);
 	c = this_cpu_ptr(s->cpu_slab);
-	if (unlikely(page != c->page)) {
+	if (unlikely(slab_page(slab) != c->page)) {
 		local_unlock(&s->cpu_slab->lock);
 		goto redo;
 	}
@@ -3468,11 +3468,11 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 #endif
 	stat(s, FREE_FASTPATH);
 	} else
-		__slab_free(s, page, head, tail_obj, cnt, addr);
+		__slab_free(s, slab_page(slab), head, tail_obj, cnt, addr);
 
 }
 
-static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
+static __always_inline void slab_free(struct kmem_cache *s, struct slab *slab,
 				void *head, void *tail, int cnt,
 				unsigned long addr)
 {
@@ -3481,13 +3481,13 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 	 * to remove objects, whose reuse must be delayed.
 	 */
 	if (slab_free_freelist_hook(s, &head, &tail))
-		do_slab_free(s, page, head, tail, cnt, addr);
+		do_slab_free(s, slab, head, tail, cnt, addr);
 }
 
 #ifdef CONFIG_KASAN_GENERIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
-	do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
+	do_slab_free(cache, virt_to_slab(x), x, NULL, 1, addr);
 }
 #endif
 
@@ -3496,7 +3496,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
+	slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_);
 	trace_kmem_cache_free(_RET_IP_, x, s->name);
 }
 EXPORT_SYMBOL(kmem_cache_free);
@@ -3621,7 +3621,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 		if (!df.slab)
 			continue;
 
-		slab_free(df.s, slab_page(df.slab), df.freelist, df.tail, df.cnt, _RET_IP_);
+		slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt, _RET_IP_);
 	} while (likely(size));
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);
@@ -4527,7 +4527,7 @@ EXPORT_SYMBOL(__ksize);
 
 void kfree(const void *x)
 {
-	struct page *page;
+	struct slab *slab;
 	void *object = (void *)x;
 
 	trace_kfree(_RET_IP_, x);
@@ -4535,12 +4535,12 @@ void kfree(const void *x)
 	if (unlikely(ZERO_OR_NULL_PTR(x)))
 		return;
 
-	page = virt_to_head_page(x);
-	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page, object);
+	slab = virt_to_slab(x);
+	if (unlikely(!SlabAllocation(slab))) {
+		free_nonslab_page(slab_page(slab), object);
 		return;
 	}
-	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
+	slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kfree);
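
For anyone reading this patch without the rest of the series: struct slab
is a typed view of the same memory that backs struct page, so slab_page()
and virt_to_slab() above are cheap conversions rather than new lookups.
Below is a minimal userspace sketch of that relationship; the struct
contents and the virt-to-slab lookup are simplified stand-ins for
illustration, not the kernel definitions.

    #include <stdio.h>

    /*
     * Simplified stand-ins: in the kernel, struct slab reuses the memory
     * of struct page and the conversion helpers are container casts.
     * Embedding one struct in the other keeps this sketch valid C
     * without relying on kernel layout tricks.
     */
    struct page {
            void *slab_cache;       /* cache this page belongs to, if any */
    };

    struct slab {
            struct page page;       /* the backing page */
    };

    /* slab_page(): the struct page backing a slab (a cast in the kernel). */
    static struct page *slab_page(struct slab *slab)
    {
            return &slab->page;
    }

    /*
     * virt_to_slab(): like virt_to_head_page(), but returns the typed
     * slab. The real lookup goes through the memmap; a single static
     * slab stands in for it here.
     */
    static struct slab demo_slab;

    static struct slab *virt_to_slab(const void *addr)
    {
            (void)addr;             /* real code derives the page from addr */
            return &demo_slab;
    }

    int main(void)
    {
            void *object = &demo_slab;      /* any address works in this model */
            struct slab *slab = virt_to_slab(object);

            /* The fast path compares against the per-cpu page, hence slab_page(): */
            printf("object %p -> slab %p -> page %p\n",
                   object, (void *)slab, (void *)slab_page(slab));
            return 0;
    }

The benefit of the typed wrapper is that the compiler rejects callers that
pass an arbitrary struct page where a slab is required, which is what lets
the rest of the series reduce the slab allocator's reliance on struct page
internals.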