[0/4] cpumask: improve on cpumask_local_spread() locality

Message ID: 20221111040027.621646-1-yury.norov@gmail.com

Message

Yury Norov Nov. 11, 2022, 4 a.m. UTC
cpumask_local_spread() currently checks the local node for the presence of
the i'th CPU, and if it finds nothing, makes a flat search among all
non-local CPUs. We can do better by checking CPUs per NUMA hop.
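
For illustration, a minimal userspace C sketch of the current behavior
(hypothetical code and CPU-to-node map, not the kernel implementation):

#include <stdio.h>

#define NR_CPUS 8

/* Made-up CPU-to-node map: two CPUs per node, four nodes. */
static const int cpu_node[NR_CPUS] = { 0, 0, 1, 1, 2, 2, 3, 3 };

static int local_spread(int i, int node)
{
	int cpu;

	/* Local node first. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_node[cpu] == node && i-- == 0)
			return cpu;

	/*
	 * Flat search among the rest, in plain index order: NUMA
	 * distance is ignored, so a far CPU may be picked before a
	 * near one.
	 */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu_node[cpu] != node && i-- == 0)
			return cpu;

	return -1;
}

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		printf("%d ", local_spread(i, 1));	/* 2 3 0 1 4 5 6 7 */
	printf("\n");
	return 0;
}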

This series is inspired by Valentin Schneider's "net/mlx5e: Improve remote
NUMA preferences used for the IRQ affinity hints"

https://patchwork.kernel.org/project/netdevbpf/patch/20220728191203.4055-3-tariqt@nvidia.com/

According to Valentin's measurements, for mlx5e:

	Bottleneck on the RX side is relieved, reaching line rate (~1.8x speedup).
	~30% less CPU utilization on TX.

This series makes cpumask_local_spread() traverse CPUs based on NUMA
distance in the same way, and I expect a comparable improvement for its
users, as in Valentin's case.

I tested it on my VM with the following NUMA configuration:

root@debian:~# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3
node 0 size: 3869 MB
node 0 free: 3740 MB
node 1 cpus: 4 5
node 1 size: 1969 MB
node 1 free: 1937 MB
node 2 cpus: 6 7
node 2 size: 1967 MB
node 2 free: 1873 MB
node 3 cpus: 8 9 10 11 12 13 14 15
node 3 size: 7842 MB
node 3 free: 7723 MB
node distances:
node   0   1   2   3
  0:  10  50  30  70
  1:  50  10  70  30
  2:  30  70  10  50
  3:  70  30  50  10

And the cpumask_local_spread() traversal for each node and offset looks
like this:

node 0:   0   1   2   3   6   7   4   5   8   9  10  11  12  13  14  15
node 1:   4   5   8   9  10  11  12  13  14  15   0   1   2   3   6   7
node 2:   6   7   0   1   2   3   8   9  10  11  12  13  14  15   4   5
node 3:   8   9  10  11  12  13  14  15   4   5   6   7   0   1   2   3
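
These tables follow directly from the distance matrix: for each node, sort
all nodes by increasing distance and enumerate their CPUs in that order. A
hypothetical userspace C sketch (not the series code) that reproduces them:

#include <stdio.h>

#define NR_NODES 4
#define NR_CPUS  16

/* Distance matrix and CPU-to-node map from the numactl output above. */
static const int distance[NR_NODES][NR_NODES] = {
	{ 10, 50, 30, 70 },
	{ 50, 10, 70, 30 },
	{ 30, 70, 10, 50 },
	{ 70, 30, 50, 10 },
};
static const int cpu_node[NR_CPUS] = {
	0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
};

int main(void)
{
	for (int node = 0; node < NR_NODES; node++) {
		int order[NR_NODES] = { 0, 1, 2, 3 };

		/* Sort nodes by distance from 'node' (insertion sort). */
		for (int a = 1; a < NR_NODES; a++)
			for (int b = a; b > 0 && distance[node][order[b]] <
					distance[node][order[b - 1]]; b--) {
				int t = order[b];
				order[b] = order[b - 1];
				order[b - 1] = t;
			}

		/* Enumerate CPUs hop by hop. */
		printf("node %d:", node);
		for (int h = 0; h < NR_NODES; h++)
			for (int cpu = 0; cpu < NR_CPUS; cpu++)
				if (cpu_node[cpu] == order[h])
					printf(" %3d", cpu);
		printf("\n");
	}
	return 0;
}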

Yury Norov (4):
  lib/find: introduce find_nth_and_andnot_bit
  cpumask: introduce cpumask_nth_and_andnot
  sched: add sched_numa_find_nth_cpu()
  cpumask: improve on cpumask_local_spread() locality

 include/linux/cpumask.h  | 20 +++++++++++++++++++
 include/linux/find.h     | 33 +++++++++++++++++++++++++++++++
 include/linux/topology.h |  8 ++++++++
 kernel/sched/topology.c  | 42 ++++++++++++++++++++++++++++++++++++++++
 lib/cpumask.c            | 12 ++----------
 lib/find_bit.c           |  9 +++++++++
 6 files changed, 114 insertions(+), 10 deletions(-)

Comments

Jakub Kicinski Nov. 11, 2022, 4:25 p.m. UTC | #1
On Thu, 10 Nov 2022 20:00:23 -0800 Yury Norov wrote:
> cpumask_local_spread() currently checks the local node for the presence of
> the i'th CPU, and if it finds nothing, makes a flat search among all
> non-local CPUs. We can do better by checking CPUs per NUMA hop.

Nice.

> This series is inspired by Valentin Schneider's "net/mlx5e: Improve remote
> NUMA preferences used for the IRQ affinity hints"
> 
> https://patchwork.kernel.org/project/netdevbpf/patch/20220728191203.4055-3-tariqt@nvidia.com/
> 
> According to Valentin's measurements, for mlx5e:
> 
> 	Bottleneck on the RX side is relieved, reaching line rate (~1.8x speedup).
> 	~30% less CPU utilization on TX.
> 
> This series makes cpumask_local_spread() traverse CPUs based on NUMA
> distance in the same way, and I expect a comparable improvement for its
> users, as in Valentin's case.
> 
> I tested it on my VM with the following NUMA configuration:

nit: the authorship is a bit more complicated, it'd be good to mention
Tariq. Both for the code and attribution of the testing / measurements.
Tariq Toukan Nov. 13, 2022, 7:37 a.m. UTC | #2
On 11/11/2022 6:47 PM, Yury Norov wrote:
> 
> 
> On Fri, Nov 11, 2022, 10:25 AM Jakub Kicinski <kuba@kernel.org> wrote:
> 
>     On Thu, 10 Nov 2022 20:00:23 -0800 Yury Norov wrote:
>      > cpumask_local_spread() currently checks the local node for the
>      > presence of the i'th CPU, and if it finds nothing, makes a flat
>      > search among all non-local CPUs. We can do better by checking
>      > CPUs per NUMA hop.
> 
>     Nice.
> 

Thanks for your series.
This improves all of its users, with no changes required to the network
device drivers.

>      > This series is inspired by Valentin Schneider's "net/mlx5e: Improve
>      > remote NUMA preferences used for the IRQ affinity hints"
>      >
>      > https://patchwork.kernel.org/project/netdevbpf/patch/20220728191203.4055-3-tariqt@nvidia.com/


Find my very first version here, including the perf testing results:
https://patchwork.kernel.org/project/netdevbpf/list/?series=660413&state=*


>      >
>      > According to Valentin's measurements, for mlx5e:
>      >
>      >       Bottleneck on the RX side is relieved, reaching line rate
>      >       (~1.8x speedup).
>      >       ~30% less CPU utilization on TX.
>      >
>      > This series makes cpumask_local_spread() traverse CPUs based on NUMA
>      > distance in the same way, and I expect a comparable improvement for
>      > its users, as in Valentin's case.
>      >

Right.

>      > I tested it on my VM with the following NUMA configuration:
> 
>     nit: the authorship is a bit more complicated, it'd be good to mention
>     Tariq. Both for the code and attribution of the testing / measurements.
> 
> 
> Sure. Tariq and Valentin, please send your tags as appropriate.
> 

I wonder what fits best here?

As the contribution is based upon previous work that I developed, probably:
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>

Thanks,
Tariq
Andy Shevchenko Nov. 13, 2022, 12:29 p.m. UTC | #3
On Sun, Nov 13, 2022 at 09:37:59AM +0200, Tariq Toukan wrote:
> On 11/11/2022 6:47 PM, Yury Norov wrote:
> > On Fri, Nov 11, 2022, 10:25 AM Jakub Kicinski <kuba@kernel.org
> > <mailto:kuba@kernel.org>> wrote:
> >     On Thu, 10 Nov 2022 20:00:23 -0800 Yury Norov wrote:

..

> > Sure. Tariq and Valentine please send your tags as appropriate.
> 
> I wonder what fits best here?
> 
> As the contribution is based upon previous work that I developed, probably:
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>

Then it probably means that either Yury or you should also have a
Co-developed-by tag.
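
For reference, kernel convention (Documentation/process/submitting-patches.rst)
is that a Co-developed-by tag is immediately followed by the co-author's
Signed-off-by, with the submitter's Signed-off-by last. A hypothetical tag
block would then be:

Co-developed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>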