From patchwork Mon Oct 4 13:45:54 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12533981
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 06/62] mm: Convert __ksize() to struct slab
Date: Mon, 4 Oct 2021 14:45:54 +0100
Message-Id: <20211004134650.4031813-7-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
MIME-Version: 1.0

slub and slob both use struct page here; convert them to struct slab.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slab.h |  6 +++---
 mm/slob.c |  8 ++++----
 mm/slub.c | 12 ++++++------
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 3c691ef6b492..ac89b656de67 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -14,7 +14,7 @@ static inline bool slab_test_cache(const struct slab *slab)
 	return test_bit(PG_slab, &slab->flags);
 }
 
-static inline bool slab_test_multi_page(const struct slab *slab)
+static inline bool slab_test_multipage(const struct slab *slab)
 {
 	return test_bit(PG_head, &slab->flags);
 }
@@ -67,7 +67,7 @@ static inline struct slab *virt_to_slab(const void *addr)
 
 static inline int slab_order(const struct slab *slab)
 {
-	if (!slab_test_multi_page(slab))
+	if (!slab_test_multipage(slab))
 		return 0;
 	return ((struct page *)slab)[1].compound_order;
 }
@@ -483,7 +483,7 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 	struct slab *slab;
 
 	slab = virt_to_slab(obj);
-	if (WARN_ONCE(!SlabAllocation(slab), "%s: Object is not a Slab page!\n",
+	if (WARN_ONCE(!slab_test_cache(slab), "%s: Object is not a Slab page!\n",
 			__func__))
 		return NULL;
 	return slab->slab_cache;
diff --git a/mm/slob.c b/mm/slob.c
index 74d3f6e60666..90996e8f7337 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -570,7 +570,7 @@ EXPORT_SYMBOL(kfree);
 /* can't use ksize for kmem_cache_alloc memory, only kmalloc */
 size_t __ksize(const void *block)
 {
-	struct page *sp;
+	struct slab *sp;
 	int align;
 	unsigned int *m;
@@ -578,9 +578,9 @@ size_t __ksize(const void *block)
 	if (unlikely(block == ZERO_SIZE_PTR))
 		return 0;
 
-	sp = virt_to_page(block);
-	if (unlikely(!PageSlab(sp)))
-		return page_size(sp);
+	sp = virt_to_slab(block);
+	if (unlikely(!slab_test_cache(sp)))
+		return slab_size(sp);
 
 	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 	m = (unsigned int *)(block - align);
diff --git a/mm/slub.c b/mm/slub.c
index 7e429a31b326..2780342395dc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4509,19 +4509,19 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 size_t __ksize(const void *object)
 {
-	struct page *page;
+	struct slab *slab;
 
 	if (unlikely(object == ZERO_SIZE_PTR))
 		return 0;
 
-	page = virt_to_head_page(object);
+	slab = virt_to_slab(object);
 
-	if (unlikely(!PageSlab(page))) {
-		WARN_ON(!PageCompound(page));
-		return page_size(page);
+	if (unlikely(!slab_test_cache(slab))) {
+		WARN_ON(!slab_test_multipage(slab));
+		return slab_size(slab);
 	}
 
-	return slab_ksize(page->slab_cache);
+	return slab_ksize(slab->slab_cache);
 }
 EXPORT_SYMBOL(__ksize);
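
For readers outside the kernel tree, the pattern the patch relies on can be sketched in plain user-space C: a dedicated slab type carries a flag word, a predicate reports whether the memory really belongs to a slab cache, and the backing size is derived from the slab's page order. This is a hypothetical, simplified sketch, not kernel code — the flag bit values, the `order` field layout, and the 4096-byte page size are all assumptions made for illustration; the names merely echo the helpers in the patch (`slab_test_cache()`, `slab_test_multipage()`, `slab_size()`).

```c
#include <stddef.h>

/* Illustrative stand-ins for the kernel's page flag bits. */
#define PG_slab (1u << 0)	/* memory is managed by a slab cache */
#define PG_head (1u << 1)	/* head of a multi-page (compound) allocation */

#define SKETCH_PAGE_SIZE 4096u	/* assumed page size for this sketch */

struct slab {
	unsigned int flags;
	unsigned int order;	/* log2 of the number of backing pages */
};

/* True when the memory really belongs to a slab cache. */
static int slab_test_cache(const struct slab *slab)
{
	return (slab->flags & PG_slab) != 0;
}

/* True for a multi-page (compound) slab; single-page slabs have order 0. */
static int slab_test_multipage(const struct slab *slab)
{
	return (slab->flags & PG_head) != 0;
}

/* Total bytes backing the slab: PAGE_SIZE << order. */
static size_t slab_size(const struct slab *slab)
{
	unsigned int order = slab_test_multipage(slab) ? slab->order : 0;

	return (size_t)SKETCH_PAGE_SIZE << order;
}
```

The point of the conversion is visible even in the sketch: once `__ksize()` holds a `struct slab *` rather than a `struct page *`, the compiler rejects accidental use of page-only helpers on slab memory, instead of that mismatch surviving to runtime.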