From patchwork Wed Jan 15 05:31:30 2020
From: Qian Cai <cai@lca.pw>
To: akpm@linux-foundation.org
Cc: mhocko@kernel.org, sergey.senozhatsky.work@gmail.com, pmladek@suse.com,
    rostedt@goodmis.org, peterz@infradead.org, david@redhat.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qian Cai <cai@lca.pw>
Subject: [PATCH -next v2] mm/hotplug: silence a lockdep splat with printk()
Date: Wed, 15 Jan 2020 00:31:30 -0500
Message-Id: <20200115053130.15490-1-cai@lca.pw>

Calling printk() with zone->lock held is guaranteed to trigger a lockdep
splat, because many places (tty, console drivers, debugobjects etc.)
allocate memory with another lock held, and fixing them all has proved
difficult.
A common workaround, until the ongoing effort to make all printk() calls
deferred lands, is to use printk_deferred() in those places, similar to
the recent commit [1] merged into the random and -next trees. However,
memory offline calls dump_page(), which needs to be deferred until after
the lock is released. So change has_unmovable_pages() so that it no
longer calls dump_page() itself; instead, it returns a "struct page *"
of the unmovable page to the caller, so that in the case of a
has_unmovable_pages() failure, the caller can call dump_page() after
releasing zone->lock. Also, make dump_page() able to report a CMA page
as well, so the reason string from has_unmovable_pages() can be removed.
While at it, remove a similar but unnecessary debug-only printk() as
well.

[1] https://lore.kernel.org/lkml/1573679785-21068-1-git-send-email-cai@lca.pw/

Signed-off-by: Qian Cai <cai@lca.pw>
---
v2: Improve the commit log and report CMA in dump_page() per Andrew.
    has_unmovable_pages() returns a "struct page *" to the caller.

 include/linux/page-isolation.h |  4 ++--
 mm/debug.c                     |  5 +++--
 mm/memory_hotplug.c            |  6 ++++--
 mm/page_alloc.c                | 18 +++++-------------
 mm/page_isolation.c            |  7 ++++++-
 5 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 148e65a9c606..da043ae86488 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -33,8 +33,8 @@ static inline bool is_migrate_isolate(int migratetype)
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
 
-bool has_unmovable_pages(struct zone *zone, struct page *page, int migratetype,
-			 int flags);
+struct page *has_unmovable_pages(struct zone *zone, struct page *page, int
+				 migratetype, int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
 			int migratetype, int *num_movable);
diff --git a/mm/debug.c b/mm/debug.c
index 0461df1207cb..2165a4c83b97 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -46,6 +46,7 @@ void __dump_page(struct page *page, const char *reason)
 {
 	struct address_space *mapping;
 	bool page_poisoned = PagePoisoned(page);
+	bool page_cma = is_migrate_cma_page(page);
 	int mapcount;
 
 	/*
@@ -74,9 +75,9 @@ void __dump_page(struct page *page, const char *reason)
 			page->mapping, page_to_pgoff(page),
 			compound_mapcount(page));
 	else
-		pr_warn("page:%px refcount:%d mapcount:%d mapping:%px index:%#lx\n",
+		pr_warn("page:%px refcount:%d mapcount:%d mapping:%px index:%#lx cma:%d\n",
 			page, page_ref_count(page), mapcount,
-			page->mapping, page_to_pgoff(page));
+			page->mapping, page_to_pgoff(page), page_cma);
 	if (PageKsm(page))
 		pr_warn("ksm flags: %#lx(%pGp)\n", page->flags, &page->flags);
 	else if (PageAnon(page))
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 7a6de9b0dcab..06e7dd3eb9a9 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1148,8 +1148,10 @@ static bool is_pageblock_removable_nolock(unsigned long pfn)
 	if (!zone_spans_pfn(zone, pfn))
 		return false;
 
-	return !has_unmovable_pages(zone, page, MIGRATE_MOVABLE,
-				    MEMORY_OFFLINE);
+	if (has_unmovable_pages(zone, page, MIGRATE_MOVABLE, MEMORY_OFFLINE))
+		return false;
+
+	return true;
 }
 
 /* Checks if this range of memory is likely to be hot-removable. */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e56cd1f33242..728e36f24aea 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8203,12 +8203,11 @@ void *__init alloc_large_system_hash(const char *tablename,
 * check without lock_page also may miss some movable non-lru pages at
 * race condition. So you can't expect this function should be exact.
 */
-bool has_unmovable_pages(struct zone *zone, struct page *page, int migratetype,
-			 int flags)
+struct page *has_unmovable_pages(struct zone *zone, struct page *page,
+				 int migratetype, int flags)
 {
 	unsigned long iter = 0;
 	unsigned long pfn = page_to_pfn(page);
-	const char *reason = "unmovable page";
 
 	/*
 	 * TODO we could make this much more efficient by not checking every
@@ -8225,9 +8224,8 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int migratetype,
		 * so consider them movable here.
		 */
		if (is_migrate_cma(migratetype))
-			return false;
+			return NULL;
 
-		reason = "CMA page";
		goto unmovable;
	}
@@ -8302,12 +8300,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int migratetype,
		 */
		goto unmovable;
	}
-	return false;
+	return NULL;
 unmovable:
	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
-	if (flags & REPORT_FAILURE)
-		dump_page(pfn_to_page(pfn + iter), reason);
-	return true;
+	return pfn_to_page(pfn + iter);
 }
 
 #ifdef CONFIG_CONTIG_ALLOC
@@ -8711,10 +8707,6 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
		BUG_ON(!PageBuddy(page));
		order = page_order(page);
		offlined_pages += 1 << order;
-#ifdef CONFIG_DEBUG_VM
-		pr_info("remove from free list %lx %d %lx\n",
-			pfn, 1 << order, end_pfn);
-#endif
		del_page_from_free_area(page, &zone->free_area[order]);
		pfn += (1 << order);
	}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 1f8b9dfecbe8..a2b730ea3732 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -20,6 +20,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	struct zone *zone;
 	unsigned long flags;
 	int ret = -EBUSY;
+	struct page *unmovable = NULL;
 
 	zone = page_zone(page);
 
@@ -37,7 +38,8 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
	 * We just check MOVABLE pages.
	 */
-	if (!has_unmovable_pages(zone, page, migratetype, isol_flags)) {
+	unmovable = has_unmovable_pages(zone, page, migratetype, isol_flags);
+	if (!unmovable) {
		unsigned long nr_pages;
		int mt = get_pageblock_migratetype(page);
 
@@ -54,6 +56,9 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
	spin_unlock_irqrestore(&zone->lock, flags);
	if (!ret)
		drain_all_pages(zone);
+	else if (isol_flags & REPORT_FAILURE && unmovable)
+		dump_page(unmovable, "unmovable page");
+
	return ret;
 }