Message ID | 20230430171809.124686-1-yury.norov@gmail.com (mailing list archive) |
---|---|
Series | sched/topology: add for_each_numa_cpu() macro |
On 30/04/23 10:18, Yury Norov wrote:
> for_each_cpu() is widely used in the kernel, and it's beneficial to create
> a NUMA-aware version of the macro.
>
> The recently added for_each_numa_hop_mask() works, but switching the
> existing codebase to it is not an easy process.
>
> This series adds for_each_numa_cpu(), which is designed to be similar to
> for_each_cpu(). It allows converting existing code to a NUMA-aware version
> as simply as adding a hop iterator variable and passing it into the new
> macro; for_each_numa_cpu() takes care of the rest.
>
> At the moment, we have two users of NUMA-aware enumerators. One is
> Mellanox's in-tree driver, and another is Intel's in-review driver:
>
> https://lore.kernel.org/lkml/20230216145455.661709-1-pawel.chmielewski@intel.com/
>
> Both real-life examples follow the same pattern:
>
> for_each_numa_hop_mask(cpus, prev, node) {
>         for_each_cpu_andnot(cpu, cpus, prev) {
>                 if (cnt++ == max_num)
>                         goto out;
>                 do_something(cpu);
>         }
>         prev = cpus;
> }
>
> With the new macro, it has a more standard look, like this:
>
> for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
>         if (cnt++ == max_num)
>                 break;
>         do_something(cpu);
> }
>
> A straight conversion of the existing for_each_cpu() codebase to the
> NUMA-aware version with for_each_numa_hop_mask() is difficult because it
> doesn't take a user-provided cpu mask, and eventually ends up with an
> open-coded double loop. With for_each_numa_cpu() it shouldn't be a
> brainteaser. Consider the NUMA-ignorant example:
>
> cpumask_t cpus = get_mask();
> int cnt = 0, cpu;
>
> for_each_cpu(cpu, cpus) {
>         if (cnt++ == max_num)
>                 break;
>         do_something(cpu);
> }
>
> Converting it to the NUMA-aware version would be as simple as:
>
> cpumask_t cpus = get_mask();
> int node = get_node();
> int cnt = 0, hop, cpu;
>
> for_each_numa_cpu(cpu, hop, node, cpus) {
>         if (cnt++ == max_num)
>                 break;
>         do_something(cpu);
> }
>
> The latter is slightly more verbose, but it avoids open-coding that
> annoying double loop. Another advantage is that it works with a 'hop'
> parameter that has the clear meaning of NUMA distance, and doesn't force
> people unfamiliar with the enumerator internals to bother with the current
> and previous masks machinery.

LGTM, I ran the tests on a few NUMA topologies and that all seems to behave
as expected. Thanks for working on this!

Reviewed-by: Valentin Schneider <vschneid@redhat.com>

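[Editorial illustration] To make the pattern in the cover letter concrete outside the kernel, below is a minimal userspace sketch of the same idea: the hop bookkeeping lives in the iterator, so the caller keeps a single flat loop. The topology table, next_numa_cpu() and for_each_numa_cpu_toy() are simplified stand-ins invented for illustration; they are not the kernel's cpumask/NUMA implementation.

/*
 * Minimal userspace sketch of the iteration pattern described above.
 * hop_cpus[], next_numa_cpu() and for_each_numa_cpu_toy() are invented
 * stand-ins for illustration, not the kernel's cpumask/NUMA machinery.
 */
#include <stdio.h>

#define NR_HOPS                 3
#define MAX_CPUS_PER_HOP        4

/* CPUs reachable from the node, grouped by hop (NUMA distance); -1 ends a row */
static const int hop_cpus[NR_HOPS][MAX_CPUS_PER_HOP] = {
        { 0, 1, -1, -1 },       /* hop 0: CPUs local to the node */
        { 2, 3,  4, -1 },       /* hop 1: one hop away           */
        { 5, 6,  7, -1 },       /* hop 2: two hops away          */
};

/* Return the next CPU at or after (*hop, *idx), advancing *hop as rows run out */
static int next_numa_cpu(int *hop, int *idx)
{
        while (*hop < NR_HOPS) {
                int cpu = hop_cpus[*hop][*idx];

                if (cpu >= 0) {
                        (*idx)++;
                        return cpu;
                }
                (*hop)++;       /* this distance is exhausted, move further out */
                *idx = 0;
        }
        return -1;              /* no CPUs left */
}

/* Toy analogue of for_each_numa_cpu(): one flat loop, 'hop' visible to the body */
#define for_each_numa_cpu_toy(cpu, hop, idx)                            \
        for ((hop) = 0, (idx) = 0;                                      \
             ((cpu) = next_numa_cpu(&(hop), &(idx))) >= 0; )

int main(void)
{
        int cpu, hop, idx, cnt = 0, max_num = 5;

        /* Same shape as the converted example in the cover letter */
        for_each_numa_cpu_toy(cpu, hop, idx) {
                if (cnt++ == max_num)
                        break;
                printf("cpu %d at hop %d\n", cpu, hop);
        }
        return 0;
}

With max_num = 5 this prints CPUs 0 through 4, i.e. the five CPUs closest to the node, which is the "pick the N nearest CPUs" use case the Mellanox and Intel drivers follow.
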
> LGTM, I ran the tests on a few NUMA topologies and that all seems to behave
> as expected. Thanks for working on this!
>
> Reviewed-by: Valentin Schneider <vschneid@redhat.com>

Thank you, Valentin. If you spent time testing the series, why don't you
add your Tested-by?

On 02/05/23 14:58, Yury Norov wrote:
>> LGTM, I ran the tests on a few NUMA topologies and that all seems to behave
>> as expected. Thanks for working on this!
>>
>> Reviewed-by: Valentin Schneider <vschneid@redhat.com>
>
> Thank you, Valentin. If you spent time testing the series, why don't you
> add your Tested-by?

Well, I only ran the test_bitmap stuff and checked the output of the iterator;
I didn't get to test on actual hardware with a Mellanox card :-)

But yeah, I suppose that does count for the rest, so feel free to add to all
patches but #5:

Tested-by: Valentin Schneider <vschneid@redhat.com>

On Sun, Apr 30, 2023 at 10:18:01AM -0700, Yury Norov wrote:
> for_each_cpu() is widely used in the kernel, and it's beneficial to create
> a NUMA-aware version of the macro.
>
> The recently added for_each_numa_hop_mask() works, but switching the
> existing codebase to it is not an easy process.
>
> This series adds for_each_numa_cpu(), which is designed to be similar to
> for_each_cpu(). It allows converting existing code to a NUMA-aware version
> as simply as adding a hop iterator variable and passing it into the new
> macro; for_each_numa_cpu() takes care of the rest.

Hi Jakub,

Now that the series is reviewed, can you consider taking it in the sched
tree?

Thanks,
Yury

On Wed, 31 May 2023 08:43:46 -0700 Yury Norov wrote:
> On Sun, Apr 30, 2023 at 10:18:01AM -0700, Yury Norov wrote:
> > for_each_cpu() is widely used in the kernel, and it's beneficial to create
> > a NUMA-aware version of the macro.
> >
> > The recently added for_each_numa_hop_mask() works, but switching the
> > existing codebase to it is not an easy process.
> >
> > This series adds for_each_numa_cpu(), which is designed to be similar to
> > for_each_cpu(). It allows converting existing code to a NUMA-aware version
> > as simply as adding a hop iterator variable and passing it into the new
> > macro; for_each_numa_cpu() takes care of the rest.
>
> Hi Jakub,
>
> Now that the series is reviewed, can you consider taking it in the sched
> tree?

Do you mean someone else or did you mean the net-next tree?

On Wed, May 31, 2023 at 10:01:25AM -0700, Jakub Kicinski wrote:
> On Wed, 31 May 2023 08:43:46 -0700 Yury Norov wrote:
> > On Sun, Apr 30, 2023 at 10:18:01AM -0700, Yury Norov wrote:
> > > for_each_cpu() is widely used in the kernel, and it's beneficial to create
> > > a NUMA-aware version of the macro.
> > >
> > > The recently added for_each_numa_hop_mask() works, but switching the
> > > existing codebase to it is not an easy process.
> > >
> > > This series adds for_each_numa_cpu(), which is designed to be similar to
> > > for_each_cpu(). It allows converting existing code to a NUMA-aware version
> > > as simply as adding a hop iterator variable and passing it into the new
> > > macro; for_each_numa_cpu() takes care of the rest.
> >
> > Hi Jakub,
> >
> > Now that the series is reviewed, can you consider taking it in the sched
> > tree?
>
> Do you mean someone else or did you mean the net-next tree?

Sorry, net-next.

On Wed, 31 May 2023 10:08:58 -0700 Yury Norov wrote:
> On Wed, May 31, 2023 at 10:01:25AM -0700, Jakub Kicinski wrote:
> > On Wed, 31 May 2023 08:43:46 -0700 Yury Norov wrote:
> > > Now that the series is reviewed, can you consider taking it in the sched
> > > tree?
> >
> > Do you mean someone else or did you mean the net-next tree?
>
> Sorry, net-next.

I'm a bit of a coward. I don't trust my ability to judge this code, and it
seems Linus has opinions about it :(

The mlx5 patch looks like a small refactoring which can wait until 6.6.
I don't feel like net-next is the best path downstream for this series :(