[RESEND,0/9] sched: cpumask: improve on cpumask_local_spread() locality

Message ID 20230121042436.2661843-1-yury.norov@gmail.com

Message

Yury Norov Jan. 21, 2023, 4:24 a.m. UTC
cpumask_local_spread() currently checks the local node for the presence of
the i'th CPU, and if it finds nothing, falls back to a flat search among
all non-local CPUs. We can do better by searching CPUs in order of
increasing NUMA hop distance.

This has significant performance implications on NUMA machines, for example
when NUMA-aware memory allocations are used together with NUMA-aware IRQ
affinity hints.
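
For context, here is a minimal sketch of what cpumask_local_spread() looks
like after this rework, condensed from patches 3-5 (the in-tree version
also warns if the result is out of range):

  /*
   * Condensed sketch: the NUMA-aware part of the search is delegated
   * to the scheduler, which already knows the hop distances between
   * nodes (sched_numa_find_nth_cpu() is added in patch 3).
   */
  unsigned int cpumask_local_spread(unsigned int i, int node)
  {
          /* Wrap: callers may pass i >= num_online_cpus(). */
          i %= num_online_cpus();

          if (node == NUMA_NO_NODE)
                  return cpumask_nth(i, cpu_online_mask);

          /* i'th online CPU, counting from the nearest hops to @node. */
          return sched_numa_find_nth_cpu(cpu_online_mask, i, node);
  }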

Performance tests from patch 8 of this series, for the Mellanox network
driver, show:

  TCP multi-stream, using 16 iperf3 instances pinned to 16 cores (with aRFS on).
  Active cores: 64,65,72,73,80,81,88,89,96,97,104,105,112,113,120,121
  
  +-------------------------+-----------+------------------+------------------+
  |                         | BW (Gbps) | TX side CPU util | RX side CPU util |
  +-------------------------+-----------+------------------+------------------+
  | Baseline                | 52.3      | 6.4 %            | 17.9 %           |
  +-------------------------+-----------+------------------+------------------+
  | Applied on TX side only | 52.6      | 5.2 %            | 18.5 %           |
  +-------------------------+-----------+------------------+------------------+
  | Applied on RX side only | 94.9      | 11.9 %           | 27.2 %           |
  +-------------------------+-----------+------------------+------------------+
  | Applied on both sides   | 95.1      | 8.4 %            | 27.3 %           |
  +-------------------------+-----------+------------------+------------------+
  
  The bottleneck on the RX side is released, reaching line rate (~1.8x
  speedup), with ~30% less CPU utilization on TX.

This series was supposed to be included in v6.2, but that didn't happen. It
has spent enough time in -next without any issues, so I hope we'll finally
see it in v6.3.

I believe the best way would be to move it with the scheduler patches, but
I'm OK with trying the bitmap branch again as well.

Tariq Toukan (1):
  net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity
    hints

Valentin Schneider (2):
  sched/topology: Introduce sched_numa_hop_mask()
  sched/topology: Introduce for_each_numa_hop_mask()

Yury Norov (6):
  lib/find: introduce find_nth_and_andnot_bit
  cpumask: introduce cpumask_nth_and_andnot
  sched: add sched_numa_find_nth_cpu()
  cpumask: improve on cpumask_local_spread() locality
  lib/cpumask: reorganize cpumask_local_spread() logic
  lib/cpumask: update comment for cpumask_local_spread()
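
As a usage illustration, here is a sketch in the style of the mlx5 change
(patch 8) showing how a driver can pick CPUs nearest hops first with the
new iterator from patches 6-7. The function name and the node/cpus/ncpus
parameters are illustrative, not a kernel API:

  /*
   * Sketch: fill @cpus with @ncpus CPU ids, nearest NUMA hops first.
   * for_each_numa_hop_mask() yields cumulative masks of CPUs at most
   * 0, 1, 2, ... hops away from @node, so masking out the previous
   * hop's CPUs visits each CPU exactly once, in increasing hop
   * distance.
   */
  static void spread_cpus_by_hops(int node, u16 *cpus, int ncpus)
  {
          const struct cpumask *mask, *prev = cpu_none_mask;
          int cpu, i = 0;

          rcu_read_lock();        /* hop masks are RCU-protected topology data */
          for_each_numa_hop_mask(mask, node) {
                  for_each_cpu_andnot(cpu, mask, prev) {
                          cpus[i] = cpu;
                          if (++i == ncpus)
                                  goto done;
                  }
                  prev = mask;
          }
  done:
          rcu_read_unlock();
  }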

 drivers/net/ethernet/mellanox/mlx5/core/eq.c | 18 +++-
 include/linux/cpumask.h                      | 20 +++++
 include/linux/find.h                         | 33 +++++++
 include/linux/topology.h                     | 33 +++++++
 kernel/sched/topology.c                      | 90 ++++++++++++++++++++
 lib/cpumask.c                                | 52 ++++++-----
 lib/find_bit.c                               |  9 ++
 7 files changed, 230 insertions(+), 25 deletions(-)

Comments

Tariq Toukan Jan. 22, 2023, 12:57 p.m. UTC | #1
On 21/01/2023 6:24, Yury Norov wrote:
> cpumask_local_spread() currently checks the local node for the presence of
> the i'th CPU, and if it finds nothing, falls back to a flat search among
> all non-local CPUs. We can do better by searching CPUs in order of
> increasing NUMA hop distance.
> 
> This has significant performance implications on NUMA machines, for example
> when NUMA-aware memory allocations are used together with NUMA-aware IRQ
> affinity hints.
> 
> Performance tests from patch 8 of this series, for the Mellanox network
> driver, show:
> 
>    TCP multi-stream, using 16 iperf3 instances pinned to 16 cores (with aRFS on).
>    Active cores: 64,65,72,73,80,81,88,89,96,97,104,105,112,113,120,121
>    
>    +-------------------------+-----------+------------------+------------------+
>    |                         | BW (Gbps) | TX side CPU util | RX side CPU util |
>    +-------------------------+-----------+------------------+------------------+
>    | Baseline                | 52.3      | 6.4 %            | 17.9 %           |
>    +-------------------------+-----------+------------------+------------------+
>    | Applied on TX side only | 52.6      | 5.2 %            | 18.5 %           |
>    +-------------------------+-----------+------------------+------------------+
>    | Applied on RX side only | 94.9      | 11.9 %           | 27.2 %           |
>    +-------------------------+-----------+------------------+------------------+
>    | Applied on both sides   | 95.1      | 8.4 %            | 27.3 %           |
>    +-------------------------+-----------+------------------+------------------+
>    
>    The bottleneck on the RX side is released, reaching line rate (~1.8x
>    speedup), with ~30% less CPU utilization on TX.
> 
> This series was supposed to be included in v6.2, but that didn't happen. It
> has spent enough time in -next without any issues, so I hope we'll finally
> see it in v6.3.
> 
> I believe the best way would be to move it with the scheduler patches, but
> I'm OK with trying the bitmap branch again as well.

Now that Yury dropped several controversial bitmap patches from the PR, 
the rest are mostly in sched, or new API that's used by sched.

Valentin, what do you think? Can you take it to your sched branch?

> 
> Tariq Toukan (1):
>    net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity
>      hints
> 
> Valentin Schneider (2):
>    sched/topology: Introduce sched_numa_hop_mask()
>    sched/topology: Introduce for_each_numa_hop_mask()
> 
> Yury Norov (6):
>    lib/find: introduce find_nth_and_andnot_bit
>    cpumask: introduce cpumask_nth_and_andnot
>    sched: add sched_numa_find_nth_cpu()
>    cpumask: improve on cpumask_local_spread() locality
>    lib/cpumask: reorganize cpumask_local_spread() logic
>    lib/cpumask: update comment for cpumask_local_spread()
> 
>   drivers/net/ethernet/mellanox/mlx5/core/eq.c | 18 +++-
>   include/linux/cpumask.h                      | 20 +++++
>   include/linux/find.h                         | 33 +++++++
>   include/linux/topology.h                     | 33 +++++++
>   kernel/sched/topology.c                      | 90 ++++++++++++++++++++
>   lib/cpumask.c                                | 52 ++++++-----
>   lib/find_bit.c                               |  9 ++
>   7 files changed, 230 insertions(+), 25 deletions(-)
>
Valentin Schneider Jan. 23, 2023, 9:57 a.m. UTC | #2
On 22/01/23 14:57, Tariq Toukan wrote:
> On 21/01/2023 6:24, Yury Norov wrote:
>>
>> This series was supposed to be included in v6.2, but that didn't happen. It
>> has spent enough time in -next without any issues, so I hope we'll finally
>> see it in v6.3.
>>
>> I believe the best way would be to move it with the scheduler patches, but
>> I'm OK with trying the bitmap branch again as well.
>
> Now that Yury dropped several controversial bitmap patches from the PR,
> the rest are mostly in sched, or new API that's used by sched.
>
> Valentin, what do you think? Can you take it to your sched branch?
>

I would if I had one :-)

Peter/Ingo, any objections to stashing this in tip/sched/core?
Tariq Toukan Jan. 29, 2023, 8:07 a.m. UTC | #3
On 23/01/2023 11:57, Valentin Schneider wrote:
> On 22/01/23 14:57, Tariq Toukan wrote:
>> On 21/01/2023 6:24, Yury Norov wrote:
>>>
>>> This series was supposed to be included in v6.2, but that didn't happen. It
>>> has spent enough time in -next without any issues, so I hope we'll finally
>>> see it in v6.3.
>>>
>>> I believe the best way would be to move it with the scheduler patches, but
>>> I'm OK with trying the bitmap branch again as well.
>>
>> Now that Yury dropped several controversial bitmap patches from the PR,
>> the rest are mostly in sched, or new API that's used by sched.
>>
>> Valentin, what do you think? Can you take it to your sched branch?
>>
> 
> I would if I had one :-)
> 

Oh I see :)

> Peter/Ingo, any objections to stashing this in tip/sched/core?
> 

Hi Peter and Ingo,

Can you please look into it, so we'll have enough time to act (in 
case...) during this kernel cycle?

We already missed one kernel...

Thanks,
Tariq
Jakub Kicinski Jan. 30, 2023, 8:22 p.m. UTC | #4
On Sun, 29 Jan 2023 10:07:58 +0200 Tariq Toukan wrote:
> > Peter/Ingo, any objections to stashing this in tip/sched/core?
> 
> Can you please look into it, so we'll have enough time to act (in 
> case...) during this kernel cycle?
> 
> We already missed one kernel...

We really need this in linux-next by the end of the week. PTAL.
Jakub Kicinski Feb. 2, 2023, 5:33 p.m. UTC | #5
On Mon, 30 Jan 2023 12:22:06 -0800 Jakub Kicinski wrote:
> On Sun, 29 Jan 2023 10:07:58 +0200 Tariq Toukan wrote:
> > > Peter/Ingo, any objections to stashing this in tip/sched/core?  
> > 
> > Can you please look into it, so we'll have enough time to act (in 
> > case...) during this kernel cycle?
> > 
> > We already missed one kernel...  
> 
> We really need this in linux-next by the end of the week. PTAL.

Peter, could you please take a look? Linux doesn't have an API for
basic, common sense IRQ distribution on AMD systems. It's important :(
Yury Norov Feb. 2, 2023, 5:37 p.m. UTC | #6
On Thu, Feb 2, 2023 at 9:33 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon, 30 Jan 2023 12:22:06 -0800 Jakub Kicinski wrote:
> > On Sun, 29 Jan 2023 10:07:58 +0200 Tariq Toukan wrote:
> > > > Peter/Ingo, any objections to stashing this in tip/sched/core?
> > >
> > > Can you please look into it, so we'll have enough time to act (in
> > > case...) during this kernel cycle?
> > >
> > > We already missed one kernel...
> >
> > We really need this in linux-next by the end of the week. PTAL.
>
> Peter, could you please take a look? Linux doesn't have an API for
> basic, common sense IRQ distribution on AMD systems. It's important :(

FWIW, it has already been in linux-next since mid-December through the
bitmap branch, and no issues have been reported so far.

Thanks,
Yury
Jakub Kicinski Feb. 8, 2023, 2:25 a.m. UTC | #7
On Mon, 23 Jan 2023 09:57:43 +0000 Valentin Schneider wrote:
> On 22/01/23 14:57, Tariq Toukan wrote:
> > On 21/01/2023 6:24, Yury Norov wrote:  
> >>
> >> This series was supposed to be included in v6.2, but that didn't happen. It
> >> has spent enough time in -next without any issues, so I hope we'll finally
> >> see it in v6.3.
> >>
> >> I believe the best way would be to move it with the scheduler patches, but
> >> I'm OK with trying the bitmap branch again as well.
> >
> > Now that Yury dropped several controversial bitmap patches from the PR,
> > the rest are mostly in sched, or new API that's used by sched.
> >
> > Valentin, what do you think? Can you take it to your sched branch?
>
> I would if I had one :-)
> 
> Peter/Ingo, any objections to stashing this in tip/sched/core?

No replies... so let me take it via networking.
patchwork-bot+netdevbpf@kernel.org Feb. 8, 2023, 4:20 a.m. UTC | #8
Hello:

This series was applied to netdev/net-next.git (master)
by Jakub Kicinski <kuba@kernel.org>:

On Fri, 20 Jan 2023 20:24:27 -0800 you wrote:
> cpumask_local_spread() currently checks the local node for the presence of
> the i'th CPU, and if it finds nothing, falls back to a flat search among
> all non-local CPUs. We can do better by searching CPUs in order of
> increasing NUMA hop distance.
> 
> This has significant performance implications on NUMA machines, for example
> when NUMA-aware memory allocations are used together with NUMA-aware IRQ
> affinity hints.
> 
> [...]

Here is the summary with links:
  - [1/9] lib/find: introduce find_nth_and_andnot_bit
    https://git.kernel.org/netdev/net-next/c/43245117806f
  - [2/9] cpumask: introduce cpumask_nth_and_andnot
    https://git.kernel.org/netdev/net-next/c/62f4386e564d
  - [3/9] sched: add sched_numa_find_nth_cpu()
    https://git.kernel.org/netdev/net-next/c/cd7f55359c90
  - [4/9] cpumask: improve on cpumask_local_spread() locality
    https://git.kernel.org/netdev/net-next/c/406d394abfcd
  - [5/9] lib/cpumask: reorganize cpumask_local_spread() logic
    https://git.kernel.org/netdev/net-next/c/b1beed72b8b7
  - [6/9] sched/topology: Introduce sched_numa_hop_mask()
    https://git.kernel.org/netdev/net-next/c/9feae65845f7
  - [7/9] sched/topology: Introduce for_each_numa_hop_mask()
    https://git.kernel.org/netdev/net-next/c/06ac01721f7d
  - [8/9] net/mlx5e: Improve remote NUMA preferences used for the IRQ affinity hints
    https://git.kernel.org/netdev/net-next/c/2acda57736de
  - [9/9] lib/cpumask: update comment for cpumask_local_spread()
    https://git.kernel.org/netdev/net-next/c/2ac4980c57f5

You are awesome, thank you!