From patchwork Mon Oct 4 13:46:00 2021
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12533993
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 12/62] mm/slub: Convert __slab_free() to take a struct slab
Date: Mon, 4 Oct 2021 14:46:00 +0100
Message-Id: <20211004134650.4031813-13-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Provide a little more typesafety and also convert free_debug_processing()
to take a struct slab.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 52 ++++++++++++++++++++++++++--------------------------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 15996ea165ac..0a566a03d424 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1342,21 +1342,21 @@ static inline int free_consistency_checks(struct kmem_cache *s,
 
 /* Supports checking bulk free of a constructed freelist */
 static noinline int free_debug_processing(
-	struct kmem_cache *s, struct page *page,
+	struct kmem_cache *s, struct slab *slab,
 	void *head, void *tail, int bulk_cnt,
 	unsigned long addr)
 {
-	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
+	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
 	void *object = head;
 	int cnt = 0;
 	unsigned long flags, flags2;
 	int ret = 0;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	slab_lock(page, &flags2);
+	slab_lock(slab_page(slab), &flags2);
 
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
-		if (!check_slab(s, page))
+		if (!check_slab(s, slab_page(slab)))
 			goto out;
 	}
 
@@ -1364,13 +1364,13 @@ static noinline int free_debug_processing(
 	cnt++;
 
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
-		if (!free_consistency_checks(s, page, object, addr))
+		if (!free_consistency_checks(s, slab_page(slab), object, addr))
 			goto out;
 	}
 
 	if (s->flags & SLAB_STORE_USER)
 		set_track(s, object, TRACK_FREE, addr);
-	trace(s, page, object, 0);
+	trace(s, slab_page(slab), object, 0);
 	/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
 	init_object(s, object, SLUB_RED_INACTIVE);
 
@@ -1383,10 +1383,10 @@ static noinline int free_debug_processing(
 
 out:
 	if (cnt != bulk_cnt)
-		slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n",
+		slab_err(s, slab_page(slab), "Bulk freelist count(%d) invalid(%d)\n",
			 bulk_cnt, cnt);
 
-	slab_unlock(page, &flags2);
+	slab_unlock(slab_page(slab), &flags2);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	if (!ret)
 		slab_fix(s, "Object at 0x%p not freed", object);
@@ -1609,7 +1609,7 @@ static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
 
 static inline int free_debug_processing(
-	struct kmem_cache *s, struct page *page,
+	struct kmem_cache *s, struct slab *slab,
 	void *head, void *tail, int bulk_cnt,
 	unsigned long addr) { return 0; }
 
@@ -3270,17 +3270,17 @@ EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
  * have a longer lifetime than the cpu slabs in most processing loads.
  *
  * So we still attempt to reduce cache line usage. Just take the slab
- * lock and free the item. If there is no additional partial page
+ * lock and free the item. If there is no additional partial slab
  * handling required then we can return immediately.
  */
-static void __slab_free(struct kmem_cache *s, struct page *page,
+static void __slab_free(struct kmem_cache *s, struct slab *slab,
			void *head, void *tail, int cnt,
			unsigned long addr)
 
 {
	void *prior;
	int was_frozen;
-	struct page new;
+	struct slab new;
	unsigned long counters;
	struct kmem_cache_node *n = NULL;
	unsigned long flags;
@@ -3291,7 +3291,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
		return;
 
	if (kmem_cache_debug(s) &&
-	    !free_debug_processing(s, page, head, tail, cnt, addr))
+	    !free_debug_processing(s, slab, head, tail, cnt, addr))
		return;
 
	do {
@@ -3299,8 +3299,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
			spin_unlock_irqrestore(&n->list_lock, flags);
			n = NULL;
		}
-		prior = page->freelist;
-		counters = page->counters;
+		prior = slab->freelist;
+		counters = slab->counters;
		set_freepointer(s, tail, prior);
		new.counters = counters;
		was_frozen = new.frozen;
@@ -3319,7 +3319,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 
		} else { /* Needs to be taken off a list */
 
-			n = get_node(s, page_to_nid(page));
+			n = get_node(s, slab_nid(slab));
			/*
			 * Speculatively acquire the list_lock.
			 * If the cmpxchg does not succeed then we may
@@ -3333,7 +3333,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
			}
		}
 
-	} while (!cmpxchg_double_slab(s, page,
+	} while (!cmpxchg_double_slab(s, slab_page(slab),
		prior, counters,
		head, new.counters,
		"__slab_free"));
@@ -3348,10 +3348,10 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
			stat(s, FREE_FROZEN);
		} else if (new.frozen) {
			/*
-			 * If we just froze the page then put it onto the
+			 * If we just froze the slab then put it onto the
			 * per cpu partial list.
			 */
-			put_cpu_partial(s, page, 1);
+			put_cpu_partial(s, slab_page(slab), 1);
			stat(s, CPU_PARTIAL_FREE);
		}
 
@@ -3366,8 +3366,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
	 * then add it.
	 */
	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
-		remove_full(s, n, page);
-		add_partial(n, page, DEACTIVATE_TO_TAIL);
+		remove_full(s, n, slab_page(slab));
+		add_partial(n, slab_page(slab), DEACTIVATE_TO_TAIL);
		stat(s, FREE_ADD_PARTIAL);
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -3378,16 +3378,16 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
		/*
		 * Slab on the partial list.
		 */
-		remove_partial(n, page);
+		remove_partial(n, slab_page(slab));
		stat(s, FREE_REMOVE_PARTIAL);
	} else {
		/* Slab must be on the full list */
-		remove_full(s, n, page);
+		remove_full(s, n, slab_page(slab));
	}
 
	spin_unlock_irqrestore(&n->list_lock, flags);
	stat(s, FREE_SLAB);
-	discard_slab(s, page);
+	discard_slab(s, slab_page(slab));
 }
 
 /*
@@ -3468,7 +3468,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 #endif
		stat(s, FREE_FASTPATH);
	} else
-		__slab_free(s, slab_page(slab), head, tail_obj, cnt, addr);
+		__slab_free(s, slab, head, tail_obj, cnt, addr);
 
 }
 
@@ -4536,7 +4536,7 @@ void kfree(const void *x)
		return;
 
	slab = virt_to_slab(x);
-	if (unlikely(!SlabAllocation(slab))) {
+	if (unlikely(!slab_test_cache(slab))) {
		free_nonslab_page(slab_page(slab), object);
		return;
	}
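
For readers following the series, the sketch below illustrates the typesafety
argument behind this conversion: once the free path takes a struct slab,
helpers such as slab_nid() and slab_page() only accept that type, so a caller
can no longer hand an arbitrary struct page to __slab_free() by mistake. This
is a standalone, simplified model, not the kernel's actual definitions; the
field layout and helper bodies here are assumptions for illustration only.

#include <stdio.h>

/* Simplified stand-ins; the real kernel types carry far more state. */
struct page {
	unsigned long flags;
	int nid;			/* NUMA node the page lives on */
};

struct slab {
	struct page page;		/* the real series overlays struct page */
	void *freelist;
	unsigned long counters;
};

/* Only a struct slab can be converted back to its underlying page... */
static struct page *slab_page(struct slab *slab)
{
	return &slab->page;
}

/* ...or asked for its node, standing in for page_to_nid(page) in callers. */
static int slab_nid(const struct slab *slab)
{
	return slab->page.nid;
}

int main(void)
{
	struct slab s = { .page = { .flags = 0, .nid = 1 }, .freelist = NULL };

	printf("slab on node %d, page at %p\n",
	       slab_nid(&s), (void *)slab_page(&s));
	return 0;
}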