From patchwork Sun Nov 28 03:57:01 2021
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12642715
From: Yury Norov
To: linux-kernel@vger.kernel.org, Yury Norov, "James E.J. Bottomley",
    "Martin K. Petersen", "Paul E. McKenney",
Wysocki" , Alexander Shishkin , Alexey Klimov , Amitkumar Karwar , Andi Kleen , Andrew Lunn , Andrew Morton , Andy Gross , Andy Lutomirski , Andy Shevchenko , Anup Patel , Ard Biesheuvel , Arnaldo Carvalho de Melo , Arnd Bergmann , Borislav Petkov , Catalin Marinas , Christoph Hellwig , Christoph Lameter , Daniel Vetter , Dave Hansen , David Airlie , David Laight , Dennis Zhou , Dinh Nguyen , Geetha sowjanya , Geert Uytterhoeven , Greg Kroah-Hartman , Guo Ren , Hans de Goede , Heiko Carstens , Ian Rogers , Ingo Molnar , Jakub Kicinski , Jason Wessel , Jens Axboe , Jiri Olsa , Jonathan Cameron , Juri Lelli , Kalle Valo , Kees Cook , Krzysztof Kozlowski , Lee Jones , Marc Zyngier , Marcin Wojtas , Mark Gross , Mark Rutland , Matti Vaittinen , Mauro Carvalho Chehab , Mel Gorman , Michael Ellerman , Mike Marciniszyn , Nicholas Piggin , Palmer Dabbelt , Peter Zijlstra , Petr Mladek , Randy Dunlap , Rasmus Villemoes , Roy Pledge , Russell King , Saeed Mahameed , Sagi Grimberg , Sergey Senozhatsky , Solomon Peachy , Stephen Boyd , Stephen Rothwell , Steven Rostedt , Subbaraya Sundeep , Sudeep Holla , Sunil Goutham , Tariq Toukan , Tejun Heo , Thomas Bogendoerfer , Thomas Gleixner , Ulf Hansson , Vincent Guittot , Vineet Gupta , Viresh Kumar , Vivien Didelot , Vlastimil Babka , Will Deacon , bcm-kernel-feedback-list@broadcom.com, kvm@vger.kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-csky@vger.kernel.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-snps-arc@lists.infradead.org, linuxppc-dev@lists.ozlabs.org Subject: [PATCH 6/9] lib/nodemask: add nodemask_weight_{eq,gt,le} Date: Sat, 27 Nov 2021 19:57:01 -0800 Message-Id: <20211128035704.270739-7-yury.norov@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211128035704.270739-1-yury.norov@gmail.com> References: <20211128035704.270739-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org Add nodemask_weight_{eq,gt,le} and replace nodemask_weight() where appropriate. This allows nodemask_weight_*() to return earlier depending on the condition. Signed-off-by: Yury Norov --- arch/x86/mm/amdtopology.c | 2 +- arch/x86/mm/numa_emulation.c | 4 ++-- drivers/acpi/numa/srat.c | 2 +- include/linux/nodemask.h | 24 ++++++++++++++++++++++++ mm/mempolicy.c | 2 +- 5 files changed, 29 insertions(+), 5 deletions(-) diff --git a/arch/x86/mm/amdtopology.c b/arch/x86/mm/amdtopology.c index 058b2f36b3a6..b3ca7d23e4b0 100644 --- a/arch/x86/mm/amdtopology.c +++ b/arch/x86/mm/amdtopology.c @@ -154,7 +154,7 @@ int __init amd_numa_init(void) node_set(nodeid, numa_nodes_parsed); } - if (!nodes_weight(numa_nodes_parsed)) + if (nodes_empty(numa_nodes_parsed)) return -ENOENT; /* diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c index 1a02b791d273..9a9305367fdd 100644 --- a/arch/x86/mm/numa_emulation.c +++ b/arch/x86/mm/numa_emulation.c @@ -123,7 +123,7 @@ static int __init split_nodes_interleave(struct numa_meminfo *ei, * Continue to fill physical nodes with fake nodes until there is no * memory left on any of them. 
 	 */
-	while (nodes_weight(physnode_mask)) {
+	while (!nodes_empty(physnode_mask)) {
 		for_each_node_mask(i, physnode_mask) {
 			u64 dma32_end = PFN_PHYS(MAX_DMA32_PFN);
 			u64 start, limit, end;
@@ -270,7 +270,7 @@ static int __init split_nodes_size_interleave_uniform(struct numa_meminfo *ei,
 	 * Fill physical nodes with fake nodes of size until there is no memory
 	 * left on any of them.
 	 */
-	while (nodes_weight(physnode_mask)) {
+	while (!nodes_empty(physnode_mask)) {
 		for_each_node_mask(i, physnode_mask) {
 			u64 dma32_end = PFN_PHYS(MAX_DMA32_PFN);
 			u64 start, limit, end;
diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
index 66a0142dc78c..c4f80d2d85bf 100644
--- a/drivers/acpi/numa/srat.c
+++ b/drivers/acpi/numa/srat.c
@@ -67,7 +67,7 @@ int acpi_map_pxm_to_node(int pxm)
 	node = pxm_to_node_map[pxm];
 
 	if (node == NUMA_NO_NODE) {
-		if (nodes_weight(nodes_found_map) >= MAX_NUMNODES)
+		if (nodes_weight_gt(nodes_found_map, MAX_NUMNODES - 1))
 			return NUMA_NO_NODE;
 		node = first_unset_node(nodes_found_map);
 		__acpi_map_pxm_to_node(pxm, node);
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 567c3ddba2c4..3801ec5b06f4 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -38,6 +38,9 @@
  * int nodes_empty(mask)		Is mask empty (no bits sets)?
  * int nodes_full(mask)			Is mask full (all bits sets)?
  * int nodes_weight(mask)		Hamming weight - number of set bits
+ * bool nodes_weight_eq(mask, num)	Hamming weight is equal to num
+ * bool nodes_weight_gt(mask, num)	Hamming weight is greater than num
+ * bool nodes_weight_le(mask, num)	Hamming weight is less than num
  *
  * void nodes_shift_right(dst, src, n)	Shift right
  * void nodes_shift_left(dst, src, n)	Shift left
@@ -240,6 +243,27 @@ static inline int __nodes_weight(const nodemask_t *srcp, unsigned int nbits)
 	return bitmap_weight(srcp->bits, nbits);
 }
 
+#define nodes_weight_eq(nodemask, num) __nodes_weight_eq(&(nodemask), MAX_NUMNODES, (num))
+static inline int __nodes_weight_eq(const nodemask_t *srcp,
+		unsigned int nbits, unsigned int num)
+{
+	return bitmap_weight_eq(srcp->bits, nbits, num);
+}
+
+#define nodes_weight_gt(nodemask, num) __nodes_weight_gt(&(nodemask), MAX_NUMNODES, (num))
+static inline int __nodes_weight_gt(const nodemask_t *srcp,
+		unsigned int nbits, unsigned int num)
+{
+	return bitmap_weight_gt(srcp->bits, nbits, num);
+}
+
+#define nodes_weight_le(nodemask, num) __nodes_weight_le(&(nodemask), MAX_NUMNODES, (num))
+static inline int __nodes_weight_le(const nodemask_t *srcp,
+		unsigned int nbits, unsigned int num)
+{
+	return bitmap_weight_le(srcp->bits, nbits, num);
+}
+
 #define nodes_shift_right(dst, src, n) \
 			__nodes_shift_right(&(dst), &(src), (n), MAX_NUMNODES)
 static inline void __nodes_shift_right(nodemask_t *dstp,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b1fcdb4d25d6..4a48ce5b86cf 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1154,7 +1154,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 		 * [0-7] - > [3,4,5] moves only 0,1,2,6,7.
 		 */
 
-		if ((nodes_weight(*from) != nodes_weight(*to)) &&
+		if (!nodes_weight_eq(*from, nodes_weight(*to)) &&
 					(node_isset(s, *to)))
 			continue;
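
The new helpers are thin wrappers: a nodes_weight_*(mask, num) call expands to
bitmap_weight_*(mask.bits, MAX_NUMNODES, num), so a caller replaces an
open-coded "nodes_weight(mask) <op> num" comparison and the bitmap layer is
free to stop counting bits as soon as the outcome is settled. Below is a
minimal userspace C sketch of that early-exit idea, not the kernel
implementation; the names weight_gt()/weight_eq() and the word-by-word loop
are illustrative only (the in-tree versions presumably rely on the
bitmap_weight_eq/gt/le helpers added earlier in this series).

#include <stdbool.h>
#include <stdio.h>

/* Is the number of set bits in the first nwords words greater than num?
 * Stops scanning as soon as the running count exceeds num. */
static bool weight_gt(const unsigned long *bits, unsigned int nwords,
		      unsigned int num)
{
	unsigned int w = 0;

	for (unsigned int i = 0; i < nwords; i++) {
		w += __builtin_popcountl(bits[i]);
		if (w > num)
			return true;	/* early exit */
	}
	return false;
}

/* Is the number of set bits exactly num? Bails out once the count passes num. */
static bool weight_eq(const unsigned long *bits, unsigned int nwords,
		      unsigned int num)
{
	unsigned int w = 0;

	for (unsigned int i = 0; i < nwords; i++) {
		w += __builtin_popcountl(bits[i]);
		if (w > num)
			return false;	/* already too many bits */
	}
	return w == num;
}

int main(void)
{
	unsigned long mask[4] = { 0xffUL, 0x1UL, 0UL, 0UL };	/* 9 bits set */

	printf("gt 8: %d\n", weight_gt(mask, 4, 8));	/* prints 1 */
	printf("gt 9: %d\n", weight_gt(mask, 4, 9));	/* prints 0 */
	printf("eq 9: %d\n", weight_eq(mask, 4, 9));	/* prints 1 */
	return 0;
}

In the converted callers this reads as, e.g., nodes_weight_gt(nodes_found_map,
MAX_NUMNODES - 1) instead of nodes_weight(nodes_found_map) >= MAX_NUMNODES:
the result is the same, but the scan may terminate before walking the whole
nodemask.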