From patchwork Tue Nov 13 05:50:49 2018
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 10679597
From: Sasha Levin
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Pavel Tatashin, Abdul Haleem, Baoquan He, Daniel Jordan, Dan Williams,
    Dave Hansen, David Rientjes, Greg Kroah-Hartman, Ingo Molnar, Jan Kara,
    Jérôme Glisse, "Kirill A. Shutemov", Michael Ellerman, Michal Hocko,
    Souptick Joarder, Steven Sistare, Vlastimil Babka, Wei Yang,
    Pasha Tatashin, Andrew Morton, Linus Torvalds, Sasha Levin,
    linux-mm@kvack.org
Subject: [PATCH AUTOSEL 4.18 35/39] mm: calculate deferred pages after skipping mirrored memory
Date: Tue, 13 Nov 2018 00:50:49 -0500
Message-Id: <20181113055053.78352-35-sashal@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181113055053.78352-1-sashal@kernel.org>
References: <20181113055053.78352-1-sashal@kernel.org>
MIME-Version: 1.0

From: Pavel Tatashin

[ Upstream commit d3035be4ce2345d98633a45f93a74e526e94b802 ]

update_defer_init() should be called only when a struct page is about to
be initialized.
It counts the number of initialized struct pages, but struct pages there
may be skipped when some of the memory is mirrored.  So move
update_defer_init() after the check for mirrored memory.

Also, rename update_defer_init() to defer_init() and reverse the returned
boolean to emphasize that this is a predicate: it tells whether the rest
of memmap initialization should be deferred.

Make the function self-contained: instead of having the caller pass in
the number of pages already initialized in this zone, track it with
static counters.

I found this bug by reading the code.  The effect is that fewer struct
pages than expected are initialized early in boot, and in some corner
cases booting may fail when mirrored pages are used.  The deferred
on-demand code should somewhat mitigate this, but it still leaves
inconsistencies compared to booting without mirrored pages, so it is
better to fix.

[pasha.tatashin@oracle.com: add comment about defer_init's lack of locking]
Link: http://lkml.kernel.org/r/20180726193509.3326-3-pasha.tatashin@oracle.com
[akpm@linux-foundation.org: make defer_init non-inline, __meminit]
Link: http://lkml.kernel.org/r/20180724235520.10200-3-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin
Reviewed-by: Oscar Salvador
Cc: Abdul Haleem
Cc: Baoquan He
Cc: Daniel Jordan
Cc: Dan Williams
Cc: Dave Hansen
Cc: David Rientjes
Cc: Greg Kroah-Hartman
Cc: Ingo Molnar
Cc: Jan Kara
Cc: Jérôme Glisse
Cc: Kirill A. Shutemov
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Souptick Joarder
Cc: Steven Sistare
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Pasha Tatashin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
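Note (not part of the upstream change): the stand-alone sketch below only
illustrates the behaviour the changelog describes, namely the reversed
boolean and the static per-zone counter. The defer_init_sketch() helper
and the SKETCH_* constants are made up for illustration and stand in for
the real node end pfn, static_init_pgcnt and PAGES_PER_SECTION; it builds
as an ordinary user-space program and is not kernel code.

  #include <stdbool.h>
  #include <stdio.h>

  #define SKETCH_NODE_END_PFN	0x20000UL	/* assumed node end pfn */
  #define SKETCH_STATIC_PGCNT	0x8000UL	/* assumed static_init_pgcnt */
  #define SKETCH_SECTION_PAGES	0x8000UL	/* assumed PAGES_PER_SECTION */

  /* Returns true when the rest of this zone's memmap init should be deferred. */
  static bool defer_init_sketch(unsigned long pfn, unsigned long end_pfn)
  {
  	/*
  	 * Static state replaces the nr_initialised pointer that the old
  	 * update_defer_init() took from its caller; reset it whenever a
  	 * new zone (a different end_pfn) is seen.
  	 */
  	static unsigned long prev_end_pfn, nr_initialised;

  	if (prev_end_pfn != end_pfn) {
  		prev_end_pfn = end_pfn;
  		nr_initialised = 0;
  	}

  	/* Always populate low zones (the zone ends before the node does). */
  	if (end_pfn < SKETCH_NODE_END_PFN)
  		return false;

  	nr_initialised++;
  	if (nr_initialised > SKETCH_STATIC_PGCNT &&
  	    (pfn & (SKETCH_SECTION_PAGES - 1)) == 0)
  		return true;	/* defer the rest of this zone */

  	return false;
  }

  int main(void)
  {
  	unsigned long pfn, end_pfn = SKETCH_NODE_END_PFN;

  	for (pfn = 0; pfn < end_pfn; pfn++) {
  		/*
  		 * A real caller would skip mirrored pfns before this point,
  		 * so only pages that are actually initialized get counted.
  		 */
  		if (defer_init_sketch(pfn, end_pfn)) {
  			printf("deferring from pfn 0x%lx\n", pfn);
  			break;
  		}
  	}
  	return 0;
  }

As in the real defer_init(), the static counters need no locking because
the kernel function runs early in boot, before smp_init().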
 mm/page_alloc.c | 45 +++++++++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 20 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 65f2e6481c99..eb3b250c7c9a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -306,24 +306,33 @@ static inline bool __meminit early_page_uninitialised(unsigned long pfn)
 }
 
 /*
- * Returns false when the remaining initialisation should be deferred until
+ * Returns true when the remaining initialisation should be deferred until
  * later in the boot cycle when it can be parallelised.
  */
-static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+static bool __meminit
+defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
+	static unsigned long prev_end_pfn, nr_initialised;
+
+	/*
+	 * prev_end_pfn static that contains the end of previous zone
+	 * No need to protect because called very early in boot before smp_init.
+	 */
+	if (prev_end_pfn != end_pfn) {
+		prev_end_pfn = end_pfn;
+		nr_initialised = 0;
+	}
+
 	/* Always populate low zones for address-constrained allocations */
-	if (zone_end < pgdat_end_pfn(pgdat))
-		return true;
-	(*nr_initialised)++;
-	if ((*nr_initialised > pgdat->static_init_pgcnt) &&
-	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
-		pgdat->first_deferred_pfn = pfn;
+	if (end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
 		return false;
+	nr_initialised++;
+	if ((nr_initialised > NODE_DATA(nid)->static_init_pgcnt) &&
+	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
+		NODE_DATA(nid)->first_deferred_pfn = pfn;
+		return true;
 	}
-
-	return true;
+	return false;
 }
 #else
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -331,11 +340,9 @@ static inline bool early_page_uninitialised(unsigned long pfn)
 	return false;
 }
 
-static inline bool update_defer_init(pg_data_t *pgdat,
-				unsigned long pfn, unsigned long zone_end,
-				unsigned long *nr_initialised)
+static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
-	return true;
+	return false;
 }
 #endif
 
@@ -5462,9 +5469,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		struct vmem_altmap *altmap)
 {
 	unsigned long end_pfn = start_pfn + size;
-	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long pfn;
-	unsigned long nr_initialised = 0;
 	struct page *page;
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 	struct memblock_region *r = NULL, *tmp;
@@ -5492,8 +5497,6 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			continue;
 		if (!early_pfn_in_nid(pfn, nid))
 			continue;
-		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
-			break;
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 		/*
@@ -5516,6 +5519,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			}
 		}
 #endif
+		if (defer_init(nid, pfn, end_pfn))
+			break;
 
 not_early:
 		page = pfn_to_page(pfn);