Message ID | 20211004134650.4031813-22-willy@infradead.org (mailing list archive)
---|---
State | New
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 21/62] mm/slub: Convert free_partial() to use struct slab
Date: Mon, 4 Oct 2021 14:46:09 +0100
Message-Id: <20211004134650.4031813-22-willy@infradead.org>
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
Series | Separate struct slab from struct page
diff --git a/mm/slub.c b/mm/slub.c
index ea7f8d9716e0..875f3f6c1ae6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4241,23 +4241,23 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 {
 	LIST_HEAD(discard);
-	struct page *page, *h;
+	struct slab *slab, *h;
 
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
-		if (!page->inuse) {
-			remove_partial(n, page);
-			list_add(&page->slab_list, &discard);
+	list_for_each_entry_safe(slab, h, &n->partial, slab_list) {
+		if (!slab->inuse) {
+			remove_partial(n, slab_page(slab));
+			list_add(&slab->slab_list, &discard);
 		} else {
-			list_slab_objects(s, page,
+			list_slab_objects(s, slab_page(slab),
 			  "Objects remaining in %s on __kmem_cache_shutdown()");
 		}
 	}
 	spin_unlock_irq(&n->list_lock);
 
-	list_for_each_entry_safe(page, h, &discard, slab_list)
-		discard_slab(s, page);
+	list_for_each_entry_safe(slab, h, &discard, slab_list)
+		discard_slab(s, slab_page(slab));
 }
 
 bool __kmem_cache_empty(struct kmem_cache *s)
Add a little type safety.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
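The type safety comes from iterating the partial list as struct slab and converting back with slab_page() only at the call sites that still expect a struct page. As a rough illustration of that idea — the struct layouts and helper definitions below are assumptions for the sketch, not the code added by this series — the conversion can be a plain cast as long as struct slab is a typed overlay of the same memory as struct page:

```c
/*
 * Illustrative sketch only: field names and layout are assumptions, not
 * the definitions from this patch series. The premise is that struct slab
 * reuses the memory of the underlying struct page, so converting between
 * the two views is a cast rather than a lookup.
 */
struct list_head {
	struct list_head *next, *prev;
};

/* stand-in for the kernel's struct page, reduced to the fields used here */
struct page {
	unsigned long flags;
	struct list_head slab_list;
	unsigned int inuse;
};

/* slab-specific view of the same memory */
struct slab {
	unsigned long flags;
	struct list_head slab_list;
	unsigned int inuse;
};

static inline struct page *slab_page(struct slab *slab)
{
	return (struct page *)slab;	/* same memory, page-typed view */
}

static inline struct slab *page_slab(struct page *page)
{
	return (struct slab *)page;	/* same memory, slab-typed view */
}
```

With that split, the compiler distinguishes a list of slabs from arbitrary pages inside free_partial(), and the remaining slab_page() calls mark the helpers (remove_partial(), list_slab_objects(), discard_slab()) that still take a struct page and can presumably be converted by later patches in the series.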