From patchwork Tue Jul 28 05:11:52 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11688225
From: Mike Rapoport
To: Andrew Morton
Cc: Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
	Catalin Marinas, Christoph Hellwig, Dave Hansen, Ingo Molnar,
	Marek Szyprowski, Max Filippov, Michael Ellerman, Michal Simek,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Peter Zijlstra, Russell King, Stafford Horne, Thomas Gleixner,
	Will Deacon, Yoshinori Sato, clang-built-linux@googlegroups.com,
	iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org,
	linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mm@kvack.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	linux-xtensa@linux-xtensa.org, linuxppc-dev@lists.ozlabs.org,
	openrisc@lists.librecores.org, sparclinux@vger.kernel.org,
	uclinux-h8-devel@lists.sourceforge.jp, x86@kernel.org
Subject: [PATCH 14/15] x86/numa: remove redundant iteration over memblock.reserved
Date: Tue, 28 Jul 2020 08:11:52 +0300
Message-Id: <20200728051153.1590-15-rppt@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200728051153.1590-1-rppt@kernel.org>
References: <20200728051153.1590-1-rppt@kernel.org>
MIME-Version: 1.0

From: Mike Rapoport

numa_clear_kernel_node_hotplug() first traverses the numa_meminfo regions
to set the node ID in memblock.reserved, and then traverses
memblock.reserved to update reserved_nodemask with the node IDs that were
set in the first loop.

Remove the redundant traversal over memblock.reserved and update
reserved_nodemask while iterating over numa_meminfo.

Signed-off-by: Mike Rapoport
Acked-by: Ingo Molnar
---
 arch/x86/mm/numa.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 8ee952038c80..4078abd33938 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -498,31 +498,25 @@ static void __init numa_clear_kernel_node_hotplug(void)
 	 * and use those ranges to set the nid in memblock.reserved.
 	 * This will split up the memblock regions along node
 	 * boundaries and will set the node IDs as well.
+	 *
+	 * The nid will also be set in reserved_nodemask which is later
+	 * used to clear MEMBLOCK_HOTPLUG flag.
+	 *
+	 * [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
+	 *   numa_meminfo might not include all memblock.reserved
+	 *   memory ranges, because quirks such as trim_snb_memory()
+	 *   reserve specific pages for Sandy Bridge graphics.
+	 *   These ranges will remain with nid == MAX_NUMNODES. ]
 	 */
 	for (i = 0; i < numa_meminfo.nr_blks; i++) {
 		struct numa_memblk *mb = numa_meminfo.blk + i;
 		int ret;
 
 		ret = memblock_set_node(mb->start, mb->end - mb->start, &memblock.reserved, mb->nid);
+		node_set(mb->nid, reserved_nodemask);
 		WARN_ON_ONCE(ret);
 	}
 
-	/*
-	 * Now go over all reserved memblock regions, to construct a
-	 * node mask of all kernel reserved memory areas.
-	 *
-	 * [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
-	 *   numa_meminfo might not include all memblock.reserved
-	 *   memory ranges, because quirks such as trim_snb_memory()
-	 *   reserve specific pages for Sandy Bridge graphics. ]
-	 */
-	for_each_memblock(reserved, mb_region) {
-		int nid = memblock_get_region_node(mb_region);
-
-		if (nid != MAX_NUMNODES)
-			node_set(nid, reserved_nodemask);
-	}
-
 	/*
 	 * Finally, clear the MEMBLOCK_HOTPLUG flag for all memory
 	 * belonging to the reserved node mask.
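
The change is easiest to see outside the kernel. Below is a minimal,
standalone C sketch of the single-pass idea, not the kernel implementation:
struct memblk and the meminfo[] array are simplified stand-ins for
struct numa_memblk and numa_meminfo.blk[], and a plain unsigned long bitmask
stands in for the kernel's nodemask_t. It only illustrates that the node mask
can be filled in the same loop that assigns node IDs to the reserved regions,
which is what makes the second scan over memblock.reserved unnecessary.

/* sketch.c - standalone illustration with made-up types, NOT kernel code */
#include <stdio.h>

struct memblk {				/* stand-in for struct numa_memblk */
	unsigned long start, end;
	int nid;
};

int main(void)
{
	struct memblk meminfo[] = {	/* stand-in for numa_meminfo.blk[] */
		{ 0x00000000UL, 0x40000000UL, 0 },
		{ 0x40000000UL, 0x80000000UL, 1 },
	};
	unsigned long reserved_nodemask = 0;	/* bitmask instead of nodemask_t */
	unsigned int i;

	/*
	 * Single pass: record each block's node in the mask in the same
	 * loop that would propagate the node ID to the reserved regions
	 * (memblock_set_node() in the kernel), so no second walk over the
	 * reserved regions is needed.
	 */
	for (i = 0; i < sizeof(meminfo) / sizeof(meminfo[0]); i++)
		reserved_nodemask |= 1UL << meminfo[i].nid;

	printf("reserved_nodemask = 0x%lx\n", reserved_nodemask);
	return 0;
}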