From patchwork Wed Jun 3 22:59:24 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11586427
Date: Wed, 03 Jun 2020 15:59:24 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, dan.j.williams@intel.com,
 daniel.m.jordan@oracle.com, david@redhat.com, jmorris@namei.org,
 ktkhai@virtuozzo.com, linux-mm@kvack.org, mhocko@suse.com,
 mm-commits@vger.kernel.org, pasha.tatashin@soleen.com, sashal@kernel.org,
 shile.zhang@linux.alibaba.com, stable@vger.kernel.org,
 torvalds@linux-foundation.org, vbabka@suse.cz, yiwei@redhat.com
Subject: [patch 050/131] mm: initialize deferred pages with interrupts enabled
Message-ID: <20200603225924.kDWBwWWfU%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>

From: Pavel Tatashin
Subject: mm: initialize deferred pages with interrupts enabled

Initializing struct pages is a long task, and keeping interrupts disabled
for the duration of this operation introduces a number of problems.

1. jiffies are not updated for a long period of time, and thus incorrect
   time is reported.  See the proposed solution and discussion here:
   lkml/20200311123848.118638-1-shile.zhang@linux.alibaba.com

2. It prevents further improving deferred page initialization by allowing
   intra-node multi-threading.

We are keeping interrupts disabled to solve a rather theoretical problem
that was never observed in the real world (see commit 3a2d7fa8a3d5).

Let's keep interrupts enabled.  In case we ever encounter a scenario where
an interrupt thread wants to allocate a large amount of memory this early
in boot, we can deal with that by growing the zone (see
deferred_grow_zone()) by the needed amount before starting the
deferred_init_memmap() threads.

Before:
[    1.232459] node 0 initialised, 12058412 pages in 1ms

After:
[    1.632580] node 0 initialised, 12051227 pages in 436ms

(The 1ms figure is an artifact of problem 1 above: with interrupts
disabled, jiffies did not advance during initialization, so the reported
time is wrong.)

Link: http://lkml.kernel.org/r/20200403140952.17177-3-pasha.tatashin@soleen.com
Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages")
Reported-by: Shile Zhang
Signed-off-by: Pavel Tatashin
Reviewed-by: Daniel Jordan
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Cc: Dan Williams
Cc: James Morris
Cc: Kirill Tkhai
Cc: Sasha Levin
Cc: Yiqian Wei
Cc: <stable@vger.kernel.org>	[4.17+]
Signed-off-by: Andrew Morton
---

 include/linux/mmzone.h |    2 ++
 mm/page_alloc.c        |   20 +++++++-------------
 2 files changed, 9 insertions(+), 13 deletions(-)

--- a/include/linux/mmzone.h~mm-initialize-deferred-pages-with-interrupts-enabled
+++ a/include/linux/mmzone.h
@@ -680,6 +680,8 @@ typedef struct pglist_data {
 	/*
	 * Must be held any time you expect node_start_pfn,
	 * node_present_pages, node_spanned_pages or nr_zones to stay constant.
+	 * Also synchronizes pgdat->first_deferred_pfn during deferred page
+	 * init.
	 *
	 * pgdat_resize_lock() and pgdat_resize_unlock() are provided to
	 * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
--- a/mm/page_alloc.c~mm-initialize-deferred-pages-with-interrupts-enabled
+++ a/mm/page_alloc.c
@@ -1844,6 +1844,13 @@ static int __init deferred_init_memmap(v
 	BUG_ON(pgdat->first_deferred_pfn > pgdat_end_pfn(pgdat));
 	pgdat->first_deferred_pfn = ULONG_MAX;

+	/*
+	 * Once we unlock here, the zone cannot be grown anymore, thus if an
+	 * interrupt thread must allocate this early in boot, zone must be
+	 * pre-grown prior to start of deferred page initialization.
+	 */
+	pgdat_resize_unlock(pgdat, &flags);
+
 	/* Only the highest zone is deferred so find it */
 	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
 		zone = pgdat->node_zones + zid;
@@ -1866,8 +1873,6 @@ static int __init deferred_init_memmap(v
 		touch_nmi_watchdog();
 	}
 zone_empty:
-	pgdat_resize_unlock(pgdat, &flags);
-
 	/* Sanity check that the next zone really is unpopulated */
 	WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));

@@ -1910,17 +1915,6 @@ deferred_grow_zone(struct zone *zone, un
 	pgdat_resize_lock(pgdat, &flags);

 	/*
-	 * If deferred pages have been initialized while we were waiting for
-	 * the lock, return true, as the zone was grown. The caller will retry
-	 * this zone. We won't return to this function since the caller also
-	 * has this static branch.
-	 */
-	if (!static_branch_unlikely(&deferred_pages)) {
-		pgdat_resize_unlock(pgdat, &flags);
-		return true;
-	}
-
-	/*
 	 * If someone grew this zone while we were waiting for spinlock, return
 	 * true, as there might be enough pages already.
 	 */
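
The core of the change above is shrinking the resize lock's critical
section: only the brief first_deferred_pfn update now runs with the lock
held (and hence, in the kernel, with interrupts off, since the resize lock
is taken with spin_lock_irqsave()), while the long per-page loop runs
unlocked. Below is a minimal userspace sketch of that pattern, with a
pthread mutex standing in for the pgdat resize spinlock; the names
(resize_lock, deferred_init, init_one_page) are illustrative, not the
kernel's. Build with "cc -pthread".

/*
 * Userspace sketch (not kernel code): the long initialization loop moves
 * outside the lock, so other lock acquirers are only blocked for the
 * brief sentinel update.
 */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t resize_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long first_deferred_pfn;

static void init_one_page(unsigned long pfn)
{
	/* Stand-in for the expensive per-page struct page setup. */
	(void)pfn;
}

static void deferred_init(unsigned long start, unsigned long end)
{
	unsigned long pfn;

	pthread_mutex_lock(&resize_lock);
	/*
	 * Publish the sentinel under the lock; once we unlock, the zone
	 * can no longer be grown (mirrors the patch's new comment).
	 */
	first_deferred_pfn = ULONG_MAX;
	pthread_mutex_unlock(&resize_lock);

	/* The long-running work now happens with the lock released. */
	for (pfn = start; pfn < end; pfn++)
		init_one_page(pfn);
}

int main(void)
{
	deferred_init(0, 12058412);
	printf("deferred init done\n");
	return 0;
}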
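
The deferred_grow_zone() path that remains relies on the classic
recheck-under-the-lock idiom: after acquiring the resize lock, it checks
whether someone else already grew the zone while we were waiting, and only
does the work otherwise. A userspace sketch of that idiom, under the same
assumptions as above (grow_zone and initialized_pages are illustrative
stand-ins, not the kernel's symbols):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t resize_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long initialized_pages;

/* Ensure at least `want` pages are ready; returns true if we grew. */
static bool grow_zone(unsigned long want)
{
	bool grown = false;

	pthread_mutex_lock(&resize_lock);
	/*
	 * Recheck under the lock: another thread may have grown the zone
	 * while we were waiting, in which case there is nothing to do.
	 */
	if (initialized_pages < want) {
		initialized_pages = want;  /* stand-in for real init work */
		grown = true;
	}
	pthread_mutex_unlock(&resize_lock);
	return grown;
}

int main(void)
{
	printf("grew: %d, pages: %lu\n", grow_zone(4096), initialized_pages);
	return 0;
}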