From patchwork Sun Apr 30 17:18:02 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13227239
X-Mailing-List: linux-rdma@vger.kernel.org
From: Yury Norov
To: Jakub Kicinski, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Yury Norov, Saeed Mahameed, Pawel Chmielewski, Leon Romanovsky,
    "David S. Miller", Eric Dumazet, Paolo Abeni, Andy Shevchenko,
    Rasmus Villemoes, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
    Tariq Toukan, Gal Pressman, Greg Kroah-Hartman, Heiko Carstens,
    Barry Song
Subject: [PATCH v3 1/8] sched: fix sched_numa_find_nth_cpu() in non-NUMA case
Date: Sun, 30 Apr 2023 10:18:02 -0700
Message-Id: <20230430171809.124686-2-yury.norov@gmail.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>
References: <20230430171809.124686-1-yury.norov@gmail.com>

When CONFIG_NUMA is enabled, sched_numa_find_nth_cpu() searches for a
CPU in sched_domains_numa_masks. The masks include only online CPUs, so
offline CPUs are effectively skipped. When CONFIG_NUMA is disabled, the
fallback function should be consistent and skip offline CPUs as well.

Fixes: cd7f55359c90 ("sched: add sched_numa_find_nth_cpu()")
Signed-off-by: Yury Norov
---
 include/linux/topology.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index fea32377f7c7..52f5850730b3 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -251,7 +251,7 @@ extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int
 #else
 static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 {
-	return cpumask_nth(cpu, cpus);
+	return cpumask_nth_and(cpu, cpus, cpu_online_mask);
 }
 
 static inline const struct cpumask *

From patchwork Sun Apr 30 17:18:03 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13227240
From: Yury Norov
To: Jakub Kicinski, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Yury Norov, Saeed Mahameed, Pawel Chmielewski, Leon Romanovsky,
    "David S. Miller", Eric Dumazet, Paolo Abeni, Andy Shevchenko,
    Rasmus Villemoes, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
    Tariq Toukan, Gal Pressman, Greg Kroah-Hartman, Heiko Carstens,
    Barry Song
Subject: [PATCH v3 2/8] lib/find: add find_next_and_andnot_bit()
Date: Sun, 30 Apr 2023 10:18:03 -0700
Message-Id: <20230430171809.124686-3-yury.norov@gmail.com>
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>
References: <20230430171809.124686-1-yury.norov@gmail.com>

Similarly to find_nth_and_andnot_bit(), find_next_and_andnot_bit() is a
convenient helper that allows traversing bitmaps without storing
intermediate results in a temporary bitmap.

In the following patches the function is used to implement NUMA-aware
CPU enumeration.

Signed-off-by: Yury Norov
---
 include/linux/find.h | 43 +++++++++++++++++++++++++++++++++++++++++++
 lib/find_bit.c       | 12 ++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/linux/find.h b/include/linux/find.h
index 5e4f39ef2e72..90b68d76c073 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -16,6 +16,9 @@ unsigned long _find_next_andnot_bit(const unsigned long *addr1, const unsigned l
 						unsigned long nbits, unsigned long start);
 unsigned long _find_next_or_bit(const unsigned long *addr1, const unsigned long *addr2,
 					unsigned long nbits, unsigned long start);
+unsigned long _find_next_and_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
+					const unsigned long *addr3, unsigned long nbits,
+					unsigned long start);
 unsigned long _find_next_zero_bit(const unsigned long *addr, unsigned long nbits,
 					 unsigned long start);
 extern unsigned long _find_first_bit(const unsigned long *addr, unsigned long size);
@@ -159,6 +162,40 @@ unsigned long find_next_or_bit(const unsigned long *addr1,
 }
 #endif
 
+#ifndef find_next_and_andnot_bit
+/**
+ * find_next_and_andnot_bit - find the next bit set in *addr1 and *addr2,
+ *			      excluding all the bits in *addr3
+ * @addr1: The first address to base the search on
+ * @addr2: The second address to base the search on
+ * @addr3: The third address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bit number to start searching at
+ *
+ * Return: the bit number for the next set bit.
+ * If no bits are set, returns @size.
+ */
+static __always_inline
+unsigned long find_next_and_andnot_bit(const unsigned long *addr1,
+					const unsigned long *addr2,
+					const unsigned long *addr3,
+					unsigned long size,
+					unsigned long offset)
+{
+	if (small_const_nbits(size)) {
+		unsigned long val;
+
+		if (unlikely(offset >= size))
+			return size;
+
+		val = *addr1 & *addr2 & ~*addr3 & GENMASK(size - 1, offset);
+		return val ? __ffs(val) : size;
+	}
+
+	return _find_next_and_andnot_bit(addr1, addr2, addr3, size, offset);
+}
+#endif
+
 #ifndef find_next_zero_bit
 /**
  * find_next_zero_bit - find the next cleared bit in a memory region
@@ -568,6 +605,12 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 	(bit) = find_next_andnot_bit((addr1), (addr2), (size), (bit)), (bit) < (size);\
 	(bit)++)
 
+#define for_each_and_andnot_bit(bit, addr1, addr2, addr3, size) \
+	for ((bit) = 0; \
+	     (bit) = find_next_and_andnot_bit((addr1), (addr2), (addr3), (size), (bit)),\
+	     (bit) < (size); \
+	     (bit)++)
+
 #define for_each_or_bit(bit, addr1, addr2, size) \
 	for ((bit) = 0; \
 	     (bit) = find_next_or_bit((addr1), (addr2), (size), (bit)), (bit) < (size);\
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 32f99e9a670e..4403e00890b1 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -182,6 +182,18 @@ unsigned long _find_next_andnot_bit(const unsigned long *addr1, const unsigned l
 EXPORT_SYMBOL(_find_next_andnot_bit);
 #endif
 
+#ifndef find_next_and_andnot_bit
+unsigned long _find_next_and_andnot_bit(const unsigned long *addr1,
+					const unsigned long *addr2,
+					const unsigned long *addr3,
+					unsigned long nbits,
+					unsigned long start)
+{
+	return FIND_NEXT_BIT(addr1[idx] & addr2[idx] & ~addr3[idx], /* nop */, nbits, start);
+}
+EXPORT_SYMBOL(_find_next_and_andnot_bit);
+#endif
+
 #ifndef find_next_or_bit
 unsigned long _find_next_or_bit(const unsigned long *addr1, const unsigned long *addr2,
 				unsigned long nbits, unsigned long start)

From patchwork Sun Apr 30 17:18:04 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13227242
From: Yury Norov
To: Jakub Kicinski, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Yury Norov, Saeed Mahameed, Pawel Chmielewski, Leon Romanovsky,
    "David S. Miller", Eric Dumazet, Paolo Abeni, Andy Shevchenko,
    Rasmus Villemoes, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
    Tariq Toukan, Gal Pressman, Greg Kroah-Hartman, Heiko Carstens,
    Barry Song
Subject: [PATCH v3 3/8] sched/topology: introduce sched_numa_find_next_cpu()
Date: Sun, 30 Apr 2023 10:18:04 -0700
Message-Id: <20230430171809.124686-4-yury.norov@gmail.com>
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>
References: <20230430171809.124686-1-yury.norov@gmail.com>

The function searches for the next CPU in a given cpumask according to
the NUMA topology, so that it traverses CPUs hop by hop. If the CPU is
the last one in a given hop, sched_numa_find_next_cpu() switches to the
next hop and picks the first CPU from there, excluding those already
traversed.

Because only online CPUs are present in the NUMA topology masks, offline
CPUs will be skipped even if they are present in the 'cpus' mask provided
in the arguments.

Signed-off-by: Yury Norov
---
 include/linux/topology.h | 12 ++++++++++++
 kernel/sched/topology.c  | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index 52f5850730b3..da92fea38585 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -245,8 +245,13 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
 	return cpumask_of_node(cpu_to_node(cpu));
 }
 
+/*
+ * The sched_numa_find_*_cpu() family of functions traverses only accessible
+ * CPUs, i.e. those listed in cpu_online_mask.
+ */
 #ifdef CONFIG_NUMA
 int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
+int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu, int node, unsigned int *hop);
 extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
 #else
 static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
@@ -254,6 +259,13 @@ static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, i
 	return cpumask_nth_and(cpu, cpus, cpu_online_mask);
 }
 
+static __always_inline
+int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu, int node, unsigned int *hop)
+{
+	return find_next_and_bit(cpumask_bits(cpus), cpumask_bits(cpu_online_mask),
+				 small_cpumask_bits, cpu);
+}
+
 static inline const struct cpumask *
 sched_numa_hop_mask(unsigned int node, unsigned int hops)
 {
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 051aaf65c749..fc163e4181e6 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2130,6 +2130,45 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 }
 EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
 
+/**
+ * sched_numa_find_next_cpu() - given the NUMA topology, find the next cpu
+ * @cpus: cpumask to find a CPU from
+ * @cpu: current CPU
+ * @node: local node
+ * @hop: (in/out) indicates distance order of current CPU to a local node
+ *
+ * The function searches for the next CPU at a given NUMA distance, indicated
+ * by hop, and if nothing found, tries to find CPUs at a greater distance,
+ * starting from the beginning.
+ *
+ * Return: cpu, or >= nr_cpu_ids when nothing found.
+ */
+int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu, int node, unsigned int *hop)
+{
+	unsigned long *cur, *prev;
+	struct cpumask ***masks;
+	unsigned int ret;
+
+	if (*hop >= sched_domains_numa_levels)
+		return nr_cpu_ids;
+
+	masks = rcu_dereference(sched_domains_numa_masks);
+	cur = cpumask_bits(masks[*hop][node]);
+	if (*hop == 0)
+		ret = find_next_and_bit(cpumask_bits(cpus), cur, nr_cpu_ids, cpu);
+	else {
+		prev = cpumask_bits(masks[*hop - 1][node]);
+		ret = find_next_and_andnot_bit(cpumask_bits(cpus), cur, prev, nr_cpu_ids, cpu);
+	}
+
+	if (ret < nr_cpu_ids)
+		return ret;
+
+	*hop += 1;
+	return sched_numa_find_next_cpu(cpus, 0, node, hop);
+}
+EXPORT_SYMBOL_GPL(sched_numa_find_next_cpu);
+
 /**
  * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
  * @node

From patchwork Sun Apr 30 17:18:05 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13227241
From: Yury Norov
To: Jakub Kicinski, netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Yury Norov, Saeed Mahameed, Pawel Chmielewski, Leon Romanovsky,
    "David S. Miller", Eric Dumazet, Paolo Abeni, Andy Shevchenko,
    Rasmus Villemoes, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider,
    Tariq Toukan, Gal Pressman, Greg Kroah-Hartman, Heiko Carstens,
    Barry Song
Subject: [PATCH v3 4/8] sched/topology: add for_each_numa_{,online}_cpu() macro
Date: Sun, 30 Apr 2023 10:18:05 -0700
Message-Id: <20230430171809.124686-5-yury.norov@gmail.com>
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>
References: <20230430171809.124686-1-yury.norov@gmail.com>

for_each_cpu() is widely used in the kernel, and it's beneficial to
create a NUMA-aware version of the macro. The recently added
for_each_numa_hop_mask() works, but switching the existing codebase to
it is not an easy process.

The new for_each_numa_cpu() is designed to be similar to for_each_cpu().
It allows existing code to be converted to a NUMA-aware version simply
by adding a hop iterator variable and passing it to the new macro;
for_each_numa_cpu() takes care of the rest.

At the moment, we have two users of NUMA-aware enumerators. One is
Mellanox's in-tree driver, and the other is Intel's in-review driver:

https://lore.kernel.org/lkml/20230216145455.661709-1-pawel.chmielewski@intel.com/

Both real-life examples follow the same pattern:

	for_each_numa_hop_mask(cpus, prev, node) {
		for_each_cpu_andnot(cpu, cpus, prev) {
			if (cnt++ == max_num)
				goto out;
			do_something(cpu);
		}
		prev = cpus;
	}

With the new macro, it would look like this:

	for_each_numa_online_cpu(cpu, hop, node) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}

Straight conversion of the existing for_each_cpu() codebase to a
NUMA-aware version with for_each_numa_hop_mask() is difficult because
it doesn't take a user-provided cpumask, and eventually ends up with an
open-coded double loop. With for_each_numa_cpu() it shouldn't be a
brainteaser. Consider this NUMA-ignorant example:

	cpumask_t cpus = get_mask();
	int cnt = 0, cpu;

	for_each_cpu(cpu, cpus) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}

Converting it to the NUMA-aware version is as simple as:

	cpumask_t cpus = get_mask();
	int node = get_node();
	int cnt = 0, hop, cpu;

	for_each_numa_cpu(cpu, hop, node, cpus) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}

The latter is slightly more verbose, but avoids open-coding that
annoying double loop. Another advantage is that it works with a 'hop'
parameter that has the clear meaning of NUMA distance, and doesn't force
people unfamiliar with the enumerator internals to bother with the
current and previous masks machinery.

Signed-off-by: Yury Norov
---
 include/linux/topology.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index da92fea38585..6ed01962878c 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -291,4 +291,23 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
 	     !IS_ERR_OR_NULL(mask);					\
 	     __hops++)
 
+/**
+ * for_each_numa_cpu - iterate over cpus in increasing order taking into
+ *		       account NUMA distances from a given node.
+ * @cpu: the (optionally unsigned) integer iterator
+ * @hop: the iterator variable, must be initialized to a desired minimal hop.
+ * @node: the NUMA node to start the search from.
+ * @mask: the cpumask pointer
+ *
+ * Requires rcu_lock to be held.
+ */
+#define for_each_numa_cpu(cpu, hop, node, mask)				\
+	for ((cpu) = 0, (hop) = 0;					\
+	     (cpu) = sched_numa_find_next_cpu((mask), (cpu), (node), &(hop)),\
+	     (cpu) < nr_cpu_ids;					\
+	     (cpu)++)
+
+#define for_each_numa_online_cpu(cpu, hop, node)			\
+	for_each_numa_cpu(cpu, hop, node, cpu_online_mask)
+
 #endif /* _LINUX_TOPOLOGY_H */

From patchwork Sun Apr 30 17:18:06 2023
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 13227244
bh=1jGbFVuum+KtcabYGT3vt7OtFUyPnxnlWHN0/8h9s/o=; b=G94aeKvLvroKF6lcT983KCdgd7rZcBLC06Z9bBz+UyNrXSo2eiQjSWpnQCQF4NHp8Q wM4DxZvuu/uFuRDAaKOJqmNG9zxzD8/ULWCdudSfZdFbAJuI3TIBsmenceBgHI+3T044 IDtBYP5WGCp+bbfKKlTM0GMeLXfAqvG2Ee6ofJRQsoIW2wVXBMMOu2kn12TYYFCn/LkI 4Nya8atDxP3m7BTQT/zdhRgY/wNxjYNg2M/XMLKZBjJ0STKX/Cc3NRzGut08M/8uJIy5 /YK3eX6pDh1h2LZmFbnEpmQnWr1YTZfx80Q/bNuAF3TndKOxt0323pM5e6DewS2GHUEh ZNLA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682875100; x=1685467100; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1jGbFVuum+KtcabYGT3vt7OtFUyPnxnlWHN0/8h9s/o=; b=ls2rQznUz+98PmxV54tms9G01w/9KbgJWVuVcWioUzJb4XU4hVUHvqBd/fGWJFBeLr xxip6lUDUXUG3gkf66bYolLKEDEYMaWN3Ekf7UrbKmEroQ2ED/I/JcEIfB7B1LQsTluC NWN0xlnJZK+AntYY/nxDdQ6uiLuYbbm7ovxZVD889EEpCsb2w738Sr9LGvI2KD940gab h3xuoRZedP0HPZD7MU6tjgRWQsUv2AeiOWLyzHSvxJYuhrjZWC39hQkdiqhViRo9zfgi 6t2+LohJSo4qd0EjqIgmcBaxnq12cLiciPDANjYlSUTgmQ2w88NUx4KrRqcSdXFc+zc0 QNsw== X-Gm-Message-State: AC+VfDzN7kiqHm8R1SqOUBeEtXJvb1HAce+ORKlherhdBnw54CWcdfOe M+nS87WMg4eZhD6UzFtIaBY= X-Google-Smtp-Source: ACHHUZ7+J94TmMbJxTjh+YH/l81cpVyK4Si/E3roU3XQjqn065SMbwM7W5CXxf3HvL3yNTv/E/5N/g== X-Received: by 2002:a05:6a00:1402:b0:63d:2343:f9b with SMTP id l2-20020a056a00140200b0063d23430f9bmr15918991pfu.19.1682875099546; Sun, 30 Apr 2023 10:18:19 -0700 (PDT) Received: from localhost ([4.1.102.3]) by smtp.gmail.com with ESMTPSA id i21-20020a056a00225500b0063b8f33cb81sm19040360pfu.93.2023.04.30.10.18.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 30 Apr 2023 10:18:19 -0700 (PDT) From: Yury Norov To: Jakub Kicinski , netdev@vger.kernel.org, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Yury Norov , Saeed Mahameed , Pawel Chmielewski , Leon Romanovsky , "David S. 
Miller" , Eric Dumazet , Paolo Abeni , Andy Shevchenko , Rasmus Villemoes , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Valentin Schneider , Tariq Toukan , Gal Pressman , Greg Kroah-Hartman , Heiko Carstens , Barry Song Subject: [PATCH v3 5/8] net: mlx5: switch comp_irqs_request() to using for_each_numa_cpu Date: Sun, 30 Apr 2023 10:18:06 -0700 Message-Id: <20230430171809.124686-6-yury.norov@gmail.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com> References: <20230430171809.124686-1-yury.norov@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org for_each_numa_online_cpu() is a more straightforward alternative to for_each_numa_hop_mask() + for_each_cpu_andnot(). Signed-off-by: Yury Norov Reviewed-by: Tariq Toukan --- drivers/net/ethernet/mellanox/mlx5/core/eq.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c index 38b32e98f3bd..d3511e45f121 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c @@ -817,12 +817,10 @@ static void comp_irqs_release(struct mlx5_core_dev *dev) static int comp_irqs_request(struct mlx5_core_dev *dev) { struct mlx5_eq_table *table = dev->priv.eq_table; - const struct cpumask *prev = cpu_none_mask; - const struct cpumask *mask; int ncomp_eqs = table->num_comp_eqs; u16 *cpus; int ret; - int cpu; + int cpu, hop; int i; ncomp_eqs = table->num_comp_eqs; @@ -844,15 +842,11 @@ static int comp_irqs_request(struct mlx5_core_dev *dev) i = 0; rcu_read_lock(); - for_each_numa_hop_mask(mask, dev->priv.numa_node) { - for_each_cpu_andnot(cpu, mask, prev) { - cpus[i] = cpu; - if (++i == ncomp_eqs) - goto spread_done; - } - prev = mask; + for_each_numa_online_cpu(cpu, hop, 
+				 dev->priv.numa_node) {
+		cpus[i] = cpu;
+		if (++i == ncomp_eqs)
+			break;
 	}
-spread_done:
 	rcu_read_unlock();
 	ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs);
 	kfree(cpus);

From patchwork Sun Apr 30 17:18:07 2023
From: Yury Norov
Subject: [PATCH v3 6/8] lib/cpumask: update comment to cpumask_local_spread()
Date: Sun, 30 Apr 2023 10:18:07 -0700
Message-Id: <20230430171809.124686-7-yury.norov@gmail.com>
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>

Now that we have for_each_numa_online_cpu(), which is a more
straightforward replacement for cpumask_local_spread() when it comes to
enumerating CPUs with respect to NUMA topology, it's worth updating the
comment.

Signed-off-by: Yury Norov
---
 lib/cpumask.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/lib/cpumask.c b/lib/cpumask.c
index e7258836b60b..774966483ca9 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -127,11 +127,8 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  *
  * There's a better alternative based on for_each()-like iterators:
  *
- *	for_each_numa_hop_mask(mask, node) {
- *		for_each_cpu_andnot(cpu, mask, prev)
- *			do_something(cpu);
- *		prev = mask;
- *	}
+ *	for_each_numa_online_cpu(cpu, hop, node)
+ *		do_something(cpu);
  *
  * It's simpler and more verbose than above.
 * Complexity of iterator-based
  * enumeration is O(sched_domains_numa_levels * nr_cpu_ids), while

From patchwork Sun Apr 30 17:18:08 2023
From: Yury Norov
Subject: [PATCH v3 7/8] sched: drop for_each_numa_hop_mask()
Date: Sun, 30 Apr 2023 10:18:08 -0700
Message-Id: <20230430171809.124686-8-yury.norov@gmail.com>
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>

Now that we have for_each_numa_cpu(), for_each_numa_hop_mask() and all
related code is dead. Drop it.

Signed-off-by: Yury Norov
---
 include/linux/topology.h | 25 -------------------------
 kernel/sched/topology.c  | 32 --------------------------------
 2 files changed, 57 deletions(-)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index 6ed01962878c..808b5dcf6e36 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -252,7 +252,6 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
 #ifdef CONFIG_NUMA
 int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
 int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu, int node, unsigned int *hop);
-extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
 #else
 static __always_inline
 int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
 {
@@ -265,32 +264,8 @@ int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu, int node, unsigned int *hop)
 	return find_next_and_bit(cpumask_bits(cpus), cpumask_bits(cpu_online_mask),
				 small_cpumask_bits, cpu);
 }
-
-static inline const struct cpumask *
-sched_numa_hop_mask(unsigned int node, unsigned int hops)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
 #endif	/* CONFIG_NUMA */
 
-/**
- * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
- *	from a given node.
- * @mask: the iteration variable.
- * @node: the NUMA node to start the search from.
- *
- * Requires rcu_lock to be held.
- *
- * Yields cpu_online_mask for @node == NUMA_NO_NODE.
- */
-#define for_each_numa_hop_mask(mask, node)			\
-	for (unsigned int __hops = 0;				\
-	     mask = (node != NUMA_NO_NODE || __hops) ?		\
-		     sched_numa_hop_mask(node, __hops) :	\
-		     cpu_online_mask,				\
-	     !IS_ERR_OR_NULL(mask);				\
-	     __hops++)
-
 /**
  * for_each_numa_cpu - iterate over cpus in increasing order taking into account
  *		       NUMA distances from a given node.

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index fc163e4181e6..bb5ba2c5589a 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2169,38 +2169,6 @@ int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu, int node, unsigned int *hop)
 }
 EXPORT_SYMBOL_GPL(sched_numa_find_next_cpu);
 
-/**
- * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from @node
- * @node: The node to count hops from.
- * @hops: Include CPUs up to that many hops away. 0 means local node.
- *
- * Return: On success, a pointer to a cpumask of CPUs at most @hops away from
- * @node, an error value otherwise.
- *
- * Requires rcu_lock to be held. Returned cpumask is only valid within that
- * read-side section, copy it if required beyond that.
- *
- * Note that not all hops are equal in distance; see sched_init_numa() for how
- * distances and masks are handled.
- * Also note that this is a reflection of sched_domains_numa_masks, which may
- * change during the lifetime of the system (offline nodes are taken out of
- * the masks).
- */
-const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
-{
-	struct cpumask ***masks;
-
-	if (node >= nr_node_ids || hops >= sched_domains_numa_levels)
-		return ERR_PTR(-EINVAL);
-
-	masks = rcu_dereference(sched_domains_numa_masks);
-	if (!masks)
-		return ERR_PTR(-EBUSY);
-
-	return masks[hops][node];
-}
-EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
-
 #endif /* CONFIG_NUMA */
 
 static int __sdt_alloc(const struct cpumask *cpu_map)

From patchwork Sun Apr 30 17:18:09 2023
From: Yury Norov
Subject: [PATCH v3 8/8] lib: test for_each_numa_cpu()
Date: Sun, 30 Apr 2023 10:18:09 -0700
Message-Id: <20230430171809.124686-9-yury.norov@gmail.com>
In-Reply-To: <20230430171809.124686-1-yury.norov@gmail.com>

Test the for_each_numa_cpu() output to ensure that:
 - all CPUs are picked from NUMA nodes with non-decreasing distances to
   the original node;
 - only online CPUs are enumerated;
 - the macro enumerates each online CPU only once;
 - the enumeration order is consistent with cpumask_local_spread().

The latter is implementation-defined behavior: if cpumask_local_spread()
or for_each_numa_cpu() gets changed in the future, the subtest may need
to be adjusted or even removed, as appropriate. It's useful now because
some architectures don't implement node_distance(), and the generic
implementation only distinguishes local and remote nodes, which doesn't
allow testing for_each_numa_cpu() properly.
Suggested-by: Valentin Schneider (for the node_distance() test)
Signed-off-by: Yury Norov
---
 lib/test_bitmap.c | 70 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 68 insertions(+), 2 deletions(-)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index a8005ad3bd58..ac4fe621d37b 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "../tools/testing/selftests/kselftest_module.h"
@@ -71,6 +72,16 @@ __check_eq_uint(const char *srcfile, unsigned int line,
 	return true;
 }
 
+static bool __init
+__check_ge_uint(const char *srcfile, unsigned int line,
+		const unsigned int exp_uint, unsigned int x)
+{
+	if (exp_uint >= x)
+		return true;
+
+	pr_err("[%s:%u] expected >= %u, got %u\n", srcfile, line, exp_uint, x);
+	return false;
+}
 
 static bool __init
 __check_eq_bitmap(const char *srcfile, unsigned int line,
@@ -86,6 +97,18 @@ __check_eq_bitmap(const char *srcfile, unsigned int line,
 	return true;
 }
 
+static bool __init
+__check_eq_cpumask(const char *srcfile, unsigned int line,
+		   const struct cpumask *exp_cpumask, const struct cpumask *cpumask)
+{
+	if (cpumask_equal(exp_cpumask, cpumask))
+		return true;
+
+	pr_warn("[%s:%u] cpumasks contents differ: expected \"%*pbl\", got \"%*pbl\"\n",
+		srcfile, line, cpumask_pr_args(exp_cpumask), cpumask_pr_args(cpumask));
+	return false;
+}
+
 static bool __init
 __check_eq_pbl(const char *srcfile, unsigned int line,
	       const char *expected_pbl,
@@ -173,11 +196,11 @@ __check_eq_str(const char *srcfile, unsigned int line,
 	return eq;
 }
 
-#define __expect_eq(suffix, ...)				\
+#define __expect(suffix, ...)					\
	({							\
		int result = 0;					\
		total_tests++;					\
-		if (!__check_eq_ ## suffix(__FILE__, __LINE__,	\
+		if (!__check_ ## suffix(__FILE__, __LINE__,	\
					##__VA_ARGS__)) {	\
			failed_tests++;				\
			result = 1;				\
		}						\
		result;						\
	})
 
+#define __expect_eq(suffix, ...)
	__expect(eq_ ## suffix, ##__VA_ARGS__)
+#define __expect_ge(suffix, ...)	__expect(ge_ ## suffix, ##__VA_ARGS__)
+
 #define expect_eq_uint(...)		__expect_eq(uint, ##__VA_ARGS__)
 #define expect_eq_bitmap(...)		__expect_eq(bitmap, ##__VA_ARGS__)
+#define expect_eq_cpumask(...)		__expect_eq(cpumask, ##__VA_ARGS__)
 #define expect_eq_pbl(...)		__expect_eq(pbl, ##__VA_ARGS__)
 #define expect_eq_u32_array(...)	__expect_eq(u32_array, ##__VA_ARGS__)
 #define expect_eq_clump8(...)		__expect_eq(clump8, ##__VA_ARGS__)
 #define expect_eq_str(...)		__expect_eq(str, ##__VA_ARGS__)
 
+#define expect_ge_uint(...)		__expect_ge(uint, ##__VA_ARGS__)
+
 static void __init test_zero_clear(void)
 {
	DECLARE_BITMAP(bmap, 1024);
@@ -751,6 +780,42 @@ static void __init test_for_each_set_bit_wrap(void)
 	}
 }
 
+static void __init test_for_each_numa_cpu(void)
+{
+	unsigned int node, cpu, hop;
+	cpumask_var_t mask;
+
+	if (!alloc_cpumask_var(&mask, GFP_KERNEL)) {
+		pr_err("Can't allocate cpumask. Skipping for_each_numa_cpu() test");
+		return;
+	}
+
+	for_each_node(node) {
+		unsigned int c = 0, dist, old_dist = node_distance(node, node);
+
+		cpumask_clear(mask);
+
+		rcu_read_lock();
+		for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
+			dist = node_distance(cpu_to_node(cpu), node);
+
+			/* Distance between nodes must never decrease */
+			expect_ge_uint(dist, old_dist);
+
+			/* Test for coherence with cpumask_local_spread() */
+			expect_eq_uint(cpumask_local_spread(c++, node), cpu);
+
+			cpumask_set_cpu(cpu, mask);
+			old_dist = dist;
+		}
+		rcu_read_unlock();
+
+		/* Each online CPU must be visited exactly once */
+		expect_eq_uint(c, num_online_cpus());
+		expect_eq_cpumask(mask, cpu_online_mask);
+	}
+}
+
 static void __init test_for_each_set_bit(void)
 {
 	DECLARE_BITMAP(orig, 500);
@@ -1237,6 +1302,7 @@ static void __init selftest(void)
 	test_for_each_clear_bitrange_from();
 	test_for_each_set_clump8();
 	test_for_each_set_bit_wrap();
+	test_for_each_numa_cpu();
 }
 
 KSTM_MODULE_LOADERS(test_bitmap);