Message ID | 20230420051946.7463-5-yury.norov@gmail.com (mailing list archive)
---|---
State | Superseded
Series | sched/topology: add for_each_numa_cpu() macro
On 20/04/2023 8:19, Yury Norov wrote:
> for_each_numa_cpu() is a more straightforward alternative to
> for_each_numa_hop_mask() + for_each_cpu_andnot().
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 16 +++++-----------
>  1 file changed, 5 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index 38b32e98f3bd..80368952e9b1 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -817,12 +817,10 @@ static void comp_irqs_release(struct mlx5_core_dev *dev)
>  static int comp_irqs_request(struct mlx5_core_dev *dev)
>  {
>  	struct mlx5_eq_table *table = dev->priv.eq_table;
> -	const struct cpumask *prev = cpu_none_mask;
> -	const struct cpumask *mask;
>  	int ncomp_eqs = table->num_comp_eqs;
>  	u16 *cpus;
>  	int ret;
> -	int cpu;
> +	int cpu, hop;
>  	int i;
>
>  	ncomp_eqs = table->num_comp_eqs;
> @@ -844,15 +842,11 @@ static int comp_irqs_request(struct mlx5_core_dev *dev)
>
>  	i = 0;
>  	rcu_read_lock();
> -	for_each_numa_hop_mask(mask, dev->priv.numa_node) {
> -		for_each_cpu_andnot(cpu, mask, prev) {
> -			cpus[i] = cpu;
> -			if (++i == ncomp_eqs)
> -				goto spread_done;
> -		}
> -		prev = mask;
> +	for_each_numa_cpu(cpu, hop, dev->priv.numa_node, cpu_possible_mask) {

I like this clean API.

nit:
Previously cpu_online_mask was used here. Is this change intentional?
We can fix it in a followup patch if this is the only comment on the series.

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

> +		cpus[i] = cpu;
> +		if (++i == ncomp_eqs)
> +			break;
>  	}
> -spread_done:
>  	rcu_read_unlock();
>  	ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs);
>  	kfree(cpus);
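For readers outside the thread: the iterator under discussion walks the CPUs of
a cpumask in order of increasing NUMA distance from a node, carrying the current
hop in a caller-provided variable. A rough sketch of its shape, based on the
series under review (sched_numa_find_next_cpu() is the helper proposed there;
treat this as illustrative rather than the final definition):

/*
 * Walk CPUs in 'mask', nearest to 'node' first; 'hop' tracks how far
 * from the node the iterator currently is.
 */
#define for_each_numa_cpu(cpu, hop, node, mask)				\
	for ((cpu) = 0, (hop) = 0;					\
	     (cpu) = sched_numa_find_next_cpu((mask), (cpu), (node), &(hop)),\
	     (cpu) < nr_cpu_ids;					\
	     (cpu)++)

This folds the old two-level for_each_numa_hop_mask() + for_each_cpu_andnot()
walk into a single flat loop, which is exactly what the mlx5 hunk above relies on.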
On Thu, Apr 20, 2023 at 11:27:26AM +0300, Tariq Toukan wrote:
> I like this clean API.

Thanks :)

> nit:
> Previously cpu_online_mask was used here. Is this change intentional?
> We can fix it in a followup patch if this is the only comment on the series.
>
> Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

The only CPUs listed in sched_domains_numa_masks are 'available', i.e.
online, CPUs. for_each_numa_cpu() ANDs the user-provided cpumask with
the mask associated with the hop, which means that even when we AND
with the possible mask, we end up walking online CPUs only.

To make sure, I experimented with the modified test:

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 6becb044a66f..c8d557731080 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -760,8 +760,13 @@ static void __init test_for_each_numa(void)
 		unsigned int hop, c = 0;

 		rcu_read_lock();
-		for_each_numa_cpu(cpu, hop, node, cpu_online_mask)
+		pr_err("Node %d:\t", node);
+		for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
 			expect_eq_uint(cpumask_local_spread(c++, node), cpu);
+			pr_cont("%3d", cpu);
+
+		}
+		pr_err("\n");
 		rcu_read_unlock();
 	}
 }

This is the NUMA topology of my test machine after the boot:

root@debian:~# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3
node 0 size: 1861 MB
node 0 free: 1792 MB
node 1 cpus: 4 5
node 1 size: 1914 MB
node 1 free: 1823 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1915 MB
node 3 cpus: 8 9 10 11 12 13 14 15
node 3 size: 7862 MB
node 3 free: 7259 MB
node distances:
node   0   1   2   3
  0:  10  50  30  70
  1:  50  10  70  30
  2:  30  70  10  50
  3:  70  30  50  10

And this is what the test prints:

root@debian:~# insmod test_bitmap.ko
test_bitmap: loaded.
test_bitmap: parselist: 14: input is '0-2047:128/256' OK, Time: 472
test_bitmap: bitmap_print_to_pagebuf: input is '0-32767 ', Time: 2665
test_bitmap: Node 0:    0  1  2  3  6  7  4  5  8  9 10 11 12 13 14 15
test_bitmap:
test_bitmap: Node 1:    4  5  8  9 10 11 12 13 14 15  0  1  2  3  6  7
test_bitmap:
test_bitmap: Node 2:    6  7  0  1  2  3  8  9 10 11 12 13 14 15  4  5
test_bitmap:
test_bitmap: Node 3:    8  9 10 11 12 13 14 15  4  5  6  7  0  1  2  3
test_bitmap:
test_bitmap: all 6614 tests passed

Now, disable a couple of CPUs:

root@debian:~# chcpu -d 1-2
smpboot: CPU 1 is now offline
CPU 1 disabled
smpboot: CPU 2 is now offline
CPU 2 disabled

And try again:

root@debian:~# rmmod test_bitmap
rmmod: ERROR: ../libkmod/libkmod[ 320.275904] test_bitmap: unloaded.
root@debian:~# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 3
node 0 size: 1861 MB
node 0 free: 1792 MB
node 1 cpus: 4 5
node 1 size: 1914 MB
node 1 free: 1823 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1915 MB
node 3 cpus: 8 9 10 11 12 13 14 15
node 3 size: 7862 MB
node 3 free: 7259 MB
node distances:
node   0   1   2   3
  0:  10  50  30  70
  1:  50  10  70  30
  2:  30  70  10  50
  3:  70  30  50  10

root@debian:~# insmod test_bitmap.ko
test_bitmap: loaded.
test_bitmap: parselist: 14: input is '0-2047:128/256' OK, Time: 491
test_bitmap: bitmap_print_to_pagebuf: input is '0-32767 ', Time: 2174
test_bitmap: Node 0:    0  3  6  7  4  5  8  9 10 11 12 13 14 15
test_bitmap:
test_bitmap: Node 1:    4  5  8  9 10 11 12 13 14 15  0  3  6  7
test_bitmap:
test_bitmap: Node 2:    6  7  0  3  8  9 10 11 12 13 14 15  4  5
test_bitmap:
test_bitmap: Node 3:    8  9 10 11 12 13 14 15  4  5  6  7  0  3
test_bitmap:
test_bitmap: all 6606 tests passed

I used cpu_possible_mask because I wanted to keep the patch consistent:
before, we traversed the NUMA hop masks; now we traverse the same hop
masks ANDed with a user-provided mask, so the latter should include all
possible CPUs.
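Yury's argument can also be sanity-checked directly: if the per-hop masks only
ever contain online CPUs, then a walk over cpu_possible_mask must never yield
an offline CPU. A minimal, hypothetical check along these lines (not part of
the patch or the test module):

	int cpu, hop;

	rcu_read_lock();
	for_each_numa_cpu(cpu, hop, node, cpu_possible_mask)
		/* Hop masks hold only online CPUs, so this should never fire. */
		WARN_ON_ONCE(!cpu_online(cpu));
	rcu_read_unlock();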
If you think it's better to have cpu_online_mask in the driver, let's
do it in a separate patch?

Thanks,
Yury
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 38b32e98f3bd..80368952e9b1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -817,12 +817,10 @@ static void comp_irqs_release(struct mlx5_core_dev *dev)
 static int comp_irqs_request(struct mlx5_core_dev *dev)
 {
 	struct mlx5_eq_table *table = dev->priv.eq_table;
-	const struct cpumask *prev = cpu_none_mask;
-	const struct cpumask *mask;
 	int ncomp_eqs = table->num_comp_eqs;
 	u16 *cpus;
 	int ret;
-	int cpu;
+	int cpu, hop;
 	int i;

 	ncomp_eqs = table->num_comp_eqs;
@@ -844,15 +842,11 @@ static int comp_irqs_request(struct mlx5_core_dev *dev)

 	i = 0;
 	rcu_read_lock();
-	for_each_numa_hop_mask(mask, dev->priv.numa_node) {
-		for_each_cpu_andnot(cpu, mask, prev) {
-			cpus[i] = cpu;
-			if (++i == ncomp_eqs)
-				goto spread_done;
-		}
-		prev = mask;
+	for_each_numa_cpu(cpu, hop, dev->priv.numa_node, cpu_possible_mask) {
+		cpus[i] = cpu;
+		if (++i == ncomp_eqs)
+			break;
 	}
-spread_done:
 	rcu_read_unlock();
 	ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs);
 	kfree(cpus);
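For reference, the spreading loop in comp_irqs_request() reads as follows once
the patch is applied (reconstructed from the hunks above); note that the driver
never consults 'hop' itself, it only provides storage for the iterator's state:

	i = 0;
	rcu_read_lock();
	for_each_numa_cpu(cpu, hop, dev->priv.numa_node, cpu_possible_mask) {
		cpus[i] = cpu;
		if (++i == ncomp_eqs)
			break;
	}
	rcu_read_unlock();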
for_each_numa_cpu() is a more straightforward alternative to
for_each_numa_hop_mask() + for_each_cpu_andnot().

Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/eq.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)