From patchwork Mon Oct 4 13:46:33 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12534201
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 45/62] mm/slub: Convert slab_err() to take a struct slab
Date: Mon, 4 Oct 2021 14:46:33 +0100
Message-Id: <20211004134650.4031813-46-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211004134650.4031813-1-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Push slab_page() down.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/slub.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 9651586a3450..98cc2545a9bd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -866,7 +866,7 @@ static void object_err(struct kmem_cache *s, struct slab *slab,
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
 
-static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
+static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 			const char *fmt, ...)
 {
 	va_list args;
@@ -879,7 +879,7 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct page *page,
 	vsnprintf(buf, sizeof(buf), fmt, args);
 	va_end(args);
 	slab_bug(s, "%s", buf);
-	print_page_info(page);
+	print_page_info(slab_page(slab));
 	dump_stack();
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
@@ -1024,7 +1024,7 @@ static int slab_pad_check(struct kmem_cache *s, struct slab *slab)
 	while (end > fault && end[-1] == POISON_INUSE)
 		end--;
 
-	slab_err(s, slab_page(slab), "Padding overwritten. 0x%p-0x%p @offset=%tu",
+	slab_err(s, slab, "Padding overwritten. 0x%p-0x%p @offset=%tu",
 			fault, end - 1, fault - start);
 	print_section(KERN_ERR, "Padding ", pad, remainder);
 
@@ -1093,18 +1093,18 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
 	int maxobj;
 
 	if (!slab_test_cache(slab)) {
-		slab_err(s, slab_page(slab), "Not a valid slab page");
+		slab_err(s, slab, "Not a valid slab page");
 		return 0;
 	}
 
 	maxobj = order_objects(slab_order(slab), s->size);
 	if (slab->objects > maxobj) {
-		slab_err(s, slab_page(slab), "objects %u > max %u",
+		slab_err(s, slab, "objects %u > max %u",
 			slab->objects, maxobj);
 		return 0;
 	}
 	if (slab->inuse > slab->objects) {
-		slab_err(s, slab_page(slab), "inuse %u > max %u",
+		slab_err(s, slab, "inuse %u > max %u",
 			slab->inuse, slab->objects);
 		return 0;
 	}
@@ -1134,7 +1134,7 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search)
 					"Freechain corrupt");
 				set_freepointer(s, object, NULL);
 			} else {
-				slab_err(s, slab_page(slab), "Freepointer corrupt");
+				slab_err(s, slab, "Freepointer corrupt");
 				slab->freelist = NULL;
 				slab->inuse = slab->objects;
 				slab_fix(s, "Freelist cleared");
@@ -1152,13 +1152,13 @@ static int on_freelist(struct kmem_cache *s, struct slab *slab, void *search)
 		max_objects = MAX_OBJS_PER_PAGE;
 
 	if (slab->objects != max_objects) {
-		slab_err(s, slab_page(slab), "Wrong number of objects. Found %d but should be %d",
+		slab_err(s, slab, "Wrong number of objects. Found %d but should be %d",
			slab->objects, max_objects);
 		slab->objects = max_objects;
 		slab_fix(s, "Number of objects adjusted");
 	}
 	if (slab->inuse != slab->objects - nr) {
-		slab_err(s, slab_page(slab), "Wrong object count. Counter is %d but counted were %d",
+		slab_err(s, slab, "Wrong object count. Counter is %d but counted were %d",
			slab->inuse, slab->objects - nr);
 		slab->inuse = slab->objects - nr;
 		slab_fix(s, "Object count adjusted");
@@ -1314,7 +1314,7 @@ static inline int free_consistency_checks(struct kmem_cache *s,
 		struct slab *slab, void *object, unsigned long addr)
 {
 	if (!check_valid_pointer(s, slab, object)) {
-		slab_err(s, slab_page(slab), "Invalid object pointer 0x%p", object);
+		slab_err(s, slab, "Invalid object pointer 0x%p", object);
 		return 0;
 	}
 
@@ -1328,7 +1328,7 @@ static inline int free_consistency_checks(struct kmem_cache *s,
 
 	if (unlikely(s != slab->slab_cache)) {
 		if (!slab_test_cache(slab)) {
-			slab_err(s, slab_page(slab), "Attempt to free object(0x%p) outside of slab",
+			slab_err(s, slab, "Attempt to free object(0x%p) outside of slab",
				 object);
 		} else if (!slab->slab_cache) {
 			pr_err("SLUB <none>: no slab for object 0x%p.\n",
@@ -1384,7 +1384,7 @@ static noinline int free_debug_processing(
 
 out:
 	if (cnt != bulk_cnt)
-		slab_err(s, slab_page(slab), "Bulk freelist count(%d) invalid(%d)\n",
+		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
			 bulk_cnt, cnt);
 
 	slab_unlock(slab_page(slab), &flags2);
@@ -4214,7 +4214,7 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
 	unsigned long *map;
 	void *p;
 
-	slab_err(s, slab_page(slab), text, s->name);
+	slab_err(s, slab, text, s->name);
 	slab_lock(slab_page(slab), &flags);
 	map = get_map(s, slab_page(slab));