From patchwork Sat Dec 18 21:20:02 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12686289
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Yury Norov, "James E.J. Bottomley",
    "Martin K. Petersen", Michał Mirosław, "Paul E. McKenney",
Wysocki" , Alexander Shishkin , Alexey Klimov , Amitkumar Karwar , Andi Kleen , Andrew Lunn , Andrew Morton , Andy Gross , Andy Lutomirski , Andy Shevchenko , Anup Patel , Ard Biesheuvel , Arnaldo Carvalho de Melo , Arnd Bergmann , Borislav Petkov , Catalin Marinas , Christoph Hellwig , Christoph Lameter , Daniel Vetter , Dave Hansen , David Airlie , David Laight , Dennis Zhou , Emil Renner Berthing , Geert Uytterhoeven , Geetha sowjanya , Greg Kroah-Hartman , Guo Ren , Hans de Goede , Heiko Carstens , Ian Rogers , Ingo Molnar , Jakub Kicinski , Jason Wessel , Jens Axboe , Jiri Olsa , Joe Perches , Jonathan Cameron , Juri Lelli , Kees Cook , Krzysztof Kozlowski , Lee Jones , Marc Zyngier , Marcin Wojtas , Mark Gross , Mark Rutland , Matti Vaittinen , Mauro Carvalho Chehab , Mel Gorman , Michael Ellerman , Mike Marciniszyn , Nicholas Piggin , Palmer Dabbelt , Peter Zijlstra , Petr Mladek , Randy Dunlap , Rasmus Villemoes , Russell King , Saeed Mahameed , Sagi Grimberg , Sergey Senozhatsky , Solomon Peachy , Stephen Boyd , Stephen Rothwell , Steven Rostedt , Subbaraya Sundeep , Sudeep Holla , Sunil Goutham , Tariq Toukan , Tejun Heo , Thomas Bogendoerfer , Thomas Gleixner , Ulf Hansson , Vincent Guittot , Vineet Gupta , Viresh Kumar , Vivien Didelot , Vlastimil Babka , Will Deacon , bcm-kernel-feedback-list@broadcom.com, kvm@vger.kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-csky@vger.kernel.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-snps-arc@lists.infradead.org, linuxppc-dev@lists.ozlabs.org Subject: [PATCH 06/17] all: replace nodes_weight with nodes_empty where appropriate Date: Sat, 18 Dec 2021 13:20:02 -0800 Message-Id: <20211218212014.1315894-7-yury.norov@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211218212014.1315894-1-yury.norov@gmail.com> References: <20211218212014.1315894-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org Kernel code calls nodes_weight() to check if any bit of a given nodemask is set. We can do it more efficiently with nodes_empty() because nodes_empty() stops traversing the nodemask as soon as it finds first set bit, while nodes_weight() counts all bits unconditionally. Signed-off-by: Yury Norov --- arch/x86/mm/amdtopology.c | 2 +- arch/x86/mm/numa_emulation.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/mm/amdtopology.c b/arch/x86/mm/amdtopology.c index 058b2f36b3a6..b3ca7d23e4b0 100644 --- a/arch/x86/mm/amdtopology.c +++ b/arch/x86/mm/amdtopology.c @@ -154,7 +154,7 @@ int __init amd_numa_init(void) node_set(nodeid, numa_nodes_parsed); } - if (!nodes_weight(numa_nodes_parsed)) + if (nodes_empty(numa_nodes_parsed)) return -ENOENT; /* diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c index 1a02b791d273..9a9305367fdd 100644 --- a/arch/x86/mm/numa_emulation.c +++ b/arch/x86/mm/numa_emulation.c @@ -123,7 +123,7 @@ static int __init split_nodes_interleave(struct numa_meminfo *ei, * Continue to fill physical nodes with fake nodes until there is no * memory left on any of them. 
diff --git a/arch/x86/mm/amdtopology.c b/arch/x86/mm/amdtopology.c
index 058b2f36b3a6..b3ca7d23e4b0 100644
--- a/arch/x86/mm/amdtopology.c
+++ b/arch/x86/mm/amdtopology.c
@@ -154,7 +154,7 @@ int __init amd_numa_init(void)
 		node_set(nodeid, numa_nodes_parsed);
 	}
 
-	if (!nodes_weight(numa_nodes_parsed))
+	if (nodes_empty(numa_nodes_parsed))
 		return -ENOENT;
 
 	/*
diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
index 1a02b791d273..9a9305367fdd 100644
--- a/arch/x86/mm/numa_emulation.c
+++ b/arch/x86/mm/numa_emulation.c
@@ -123,7 +123,7 @@ static int __init split_nodes_interleave(struct numa_meminfo *ei,
 	 * Continue to fill physical nodes with fake nodes until there is no
 	 * memory left on any of them.
 	 */
-	while (nodes_weight(physnode_mask)) {
+	while (!nodes_empty(physnode_mask)) {
 		for_each_node_mask(i, physnode_mask) {
 			u64 dma32_end = PFN_PHYS(MAX_DMA32_PFN);
 			u64 start, limit, end;
@@ -270,7 +270,7 @@ static int __init split_nodes_size_interleave_uniform(struct numa_meminfo *ei,
 	 * Fill physical nodes with fake nodes of size until there is no memory
 	 * left on any of them.
 	 */
-	while (nodes_weight(physnode_mask)) {
+	while (!nodes_empty(physnode_mask)) {
 		for_each_node_mask(i, physnode_mask) {
 			u64 dma32_end = PFN_PHYS(MAX_DMA32_PFN);
 			u64 start, limit, end;
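For reference, here is a similar standalone sketch (again not kernel
code; every name carries a _sketch suffix to mark it as a hypothetical
stand-in) of the loop pattern the numa_emulation.c hunks use: keep
carving fake nodes out of the physical nodes, dropping each physical
node from the mask once it is exhausted, until the mask drains and the
emptiness check terminates the outer loop.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NUMNODES_SKETCH 64	/* small enough for one word */

typedef struct { unsigned long bits; } nodemask_sketch_t;

static bool nodes_empty_sketch(nodemask_sketch_t m)
{
	return m.bits == 0;
}

static void node_clear_sketch(int n, nodemask_sketch_t *m)
{
	m->bits &= ~(1UL << n);
}

int main(void)
{
	nodemask_sketch_t physnode_mask = { .bits = 0x15 };	/* nodes 0, 2, 4 */
	int i;

	/* Mirrors: while (!nodes_empty(physnode_mask)) { for_each_node_mask(i, ...) } */
	while (!nodes_empty_sketch(physnode_mask)) {
		for (i = 0; i < MAX_NUMNODES_SKETCH; i++) {
			if (!(physnode_mask.bits & (1UL << i)))
				continue;
			printf("splitting physical node %d\n", i);
			/* Pretend the node ran out of memory after one pass. */
			node_clear_sketch(i, &physnode_mask);
		}
	}
	return 0;
}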