From patchwork Tue Feb 27 19:23:31 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13574330
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 4/8] mm: Add __dump_folio()
Date: Tue, 27 Feb 2024 19:23:31 +0000
Message-ID: <20240227192337.757313-5-willy@infradead.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240227192337.757313-1-willy@infradead.org>
References: <20240227192337.757313-1-willy@infradead.org>
MIME-Version: 1.0

Turn __dump_page() into a wrapper around __dump_folio().  Snapshot the
page & folio into a stack variable so we don't hit BUG_ON() if an
allocation is freed under us and what was a folio pointer becomes a
pointer to a tail page.
Signed-off-by: Matthew Wilcox (Oracle)
Tested-by: SeongJae Park
---
 mm/debug.c | 120 +++++++++++++++++++++++++++++------------------------
 1 file changed, 66 insertions(+), 54 deletions(-)

diff --git a/mm/debug.c b/mm/debug.c
index ee533a5ceb79..96555fc78f1a 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -51,84 +51,96 @@ const struct trace_print_flags vmaflag_names[] = {
 	{0, NULL}
 };
 
-static void __dump_page(struct page *page)
+static void __dump_folio(struct folio *folio, struct page *page,
+		unsigned long pfn, unsigned long idx)
 {
-	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct address_space *mapping;
-	bool compound = PageCompound(page);
-	/*
-	 * Accessing the pageblock without the zone lock. It could change to
-	 * "isolate" again in the meantime, but since we are just dumping the
-	 * state for debugging, it should be fine to accept a bit of
-	 * inaccuracy here due to racing.
-	 */
-	bool page_cma = is_migrate_cma_page(page);
-	int mapcount;
+	struct address_space *mapping = folio_mapping(folio);
+	bool page_cma;
+	int mapcount = 0;
 	char *type = "";
 
-	if (page < head || (page >= head + MAX_ORDER_NR_PAGES)) {
-		/*
-		 * Corrupt page, so we cannot call page_mapping. Instead, do a
-		 * safe subset of the steps that page_mapping() does. Caution:
-		 * this will be misleading for tail pages, PageSwapCache pages,
-		 * and potentially other situations. (See the page_mapping()
-		 * implementation for what's missing here.)
-		 */
-		unsigned long tmp = (unsigned long)page->mapping;
-
-		if (tmp & PAGE_MAPPING_ANON)
-			mapping = NULL;
-		else
-			mapping = (void *)(tmp & ~PAGE_MAPPING_FLAGS);
-		head = page;
-		folio = (struct folio *)page;
-		compound = false;
-	} else {
-		mapping = page_mapping(page);
-	}
-
 	/*
 	 * Avoid VM_BUG_ON() in page_mapcount().
-	 * page->_mapcount space in struct page is used by sl[aou]b pages to
-	 * encode own info.
+	 * page->_mapcount space in struct page is used by slab pages to
+	 * encode own info, and we must avoid calling page_folio() again.
 	 */
-	mapcount = PageSlab(head) ? 0 : page_mapcount(page);
-
-	pr_warn("page:%p refcount:%d mapcount:%d mapping:%p index:%#lx pfn:%#lx\n",
-			page, page_ref_count(head), mapcount, mapping,
-			page_to_pgoff(page), page_to_pfn(page));
-	if (compound) {
-		pr_warn("head:%p order:%u entire_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
-				head, compound_order(head),
+	if (!folio_test_slab(folio)) {
+		mapcount = atomic_read(&page->_mapcount) + 1;
+		if (folio_test_large(folio))
+			mapcount += folio_entire_mapcount(folio);
+	}
+
+	pr_warn("page: refcount:%d mapcount:%d mapping:%p index:%#lx pfn:%#lx\n",
+			folio_ref_count(folio), mapcount, mapping,
+			folio->index + idx, pfn);
+	if (folio_test_large(folio)) {
+		pr_warn("head: order:%u entire_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
+				folio_order(folio),
 				folio_entire_mapcount(folio),
 				folio_nr_pages_mapped(folio),
 				atomic_read(&folio->_pincount));
 	}
 
 #ifdef CONFIG_MEMCG
-	if (head->memcg_data)
-		pr_warn("memcg:%lx\n", head->memcg_data);
+	if (folio->memcg_data)
+		pr_warn("memcg:%lx\n", folio->memcg_data);
 #endif
-	if (PageKsm(page))
+	if (folio_test_ksm(folio))
 		type = "ksm ";
-	else if (PageAnon(page))
+	else if (folio_test_anon(folio))
 		type = "anon ";
 	else if (mapping)
 		dump_mapping(mapping);
 	BUILD_BUG_ON(ARRAY_SIZE(pageflag_names) != __NR_PAGEFLAGS + 1);
 
-	pr_warn("%sflags: %pGp%s\n", type, &head->flags,
+	/*
+	 * Accessing the pageblock without the zone lock. It could change to
+	 * "isolate" again in the meantime, but since we are just dumping the
+	 * state for debugging, it should be fine to accept a bit of
+	 * inaccuracy here due to racing.
+	 */
+	page_cma = is_migrate_cma_page(page);
+	pr_warn("%sflags: %pGp%s\n", type, &folio->flags,
 		page_cma ? " CMA" : "");
-	pr_warn("page_type: %pGt\n", &head->page_type);
+	pr_warn("page_type: %pGt\n", &folio->page.page_type);
 
 	print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32,
 			sizeof(unsigned long), page,
 			sizeof(struct page), false);
-	if (head != page)
+	if (folio_test_large(folio))
 		print_hex_dump(KERN_WARNING, "head: ", DUMP_PREFIX_NONE, 32,
-			sizeof(unsigned long), head,
-			sizeof(struct page), false);
+			sizeof(unsigned long), folio,
+			2 * sizeof(struct page), false);
+}
+
+static void __dump_page(const struct page *page)
+{
+	struct folio *foliop, folio;
+	struct page precise;
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long idx, nr_pages = 1;
+	int loops = 5;
+
+again:
+	memcpy(&precise, page, sizeof(*page));
+	foliop = page_folio(&precise);
+	idx = folio_page_idx(foliop, page);
+	if (idx != 0) {
+		if (idx < (1UL << PUD_ORDER)) {
+			memcpy(&folio, foliop, 2 * sizeof(struct page));
+			nr_pages = folio_nr_pages(&folio);
+		}
+
+		if (idx > nr_pages) {
+			if (loops-- > 0)
+				goto again;
+			printk("page does not match folio\n");
+			precise.compound_head &= ~1UL;
+			foliop = (struct folio *)&precise;
+			idx = 0;
+		}
+	}
+
+	__dump_folio(foliop, &precise, pfn, idx);
 }
 
 void dump_page(struct page *page, const char *reason)