[v2,0/2] bpf: Introduce cpu affinity for sockmap

Message ID 20241101161624.568527-1-mrpre@163.com

Jiayuan Chen Nov. 1, 2024, 4:16 p.m. UTC
Why we need CPU affinity:
Mainstream data planes, such as Nginx and HAProxy, rely on CPU affinity
by binding worker processes to specific CPUs. This avoids interference
between those processes and shields them from the impact of other
workloads.

Sockmap, an optimization that accelerates such proxy programs,
currently lacks the ability to specify CPU affinity. Its backlog
handling is based on a workqueue driven by 'schedule_delayed_work()',
which prefers to schedule the work on the local CPU, i.e., the CPU
that handled the packet in softirq.

Under extremely high traffic with large numbers of packets,
'sk_psock_backlog' becomes a long-running loop on that CPU.
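
For reference, a minimal sketch of how the backlog work is kicked
today (the helper name 'sk_psock_kick_backlog' is hypothetical; it
only wraps the existing call made from net/core/skmsg.c):
'''
#include <linux/skmsg.h>
#include <linux/workqueue.h>

static void sk_psock_kick_backlog(struct sk_psock *psock)
{
        /*
         * psock->work is the delayed_work that runs sk_psock_backlog().
         * schedule_delayed_work() queues it on the per-CPU system
         * workqueue as WORK_CPU_UNBOUND, which normally resolves to
         * the queueing CPU, i.e. the CPU that handled the packet in
         * softirq.
         */
        schedule_delayed_work(&psock->work, 0);
}
'''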

For multi-threaded programs with only one map, we expect different
sockets to run on different CPUs. Note that this feature is not a
general performance optimization. Rather, it gives users the ability
to bind sockets to specific CPUs, so they can improve overall system
utilization based on their own environments.

Implementation:
1. When updating the sockmap, support passing a CPU parameter and
save it in the psock.
2. When scheduling the psock, pick the CPU to run on from the CPU
information saved in the psock (see the sketch after this list).
3. For those sockmap entries without CPU affinity, keep the original
logic by using 'schedule_delayed_work()'.
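
A minimal sketch of steps 2 and 3 (the field name 'sched_cpu', the -1
"no affinity" sentinel, and the helper name are assumptions for
illustration, not necessarily what the patch uses):
'''
#include <linux/skmsg.h>
#include <linux/workqueue.h>

static void sk_psock_schedule_backlog(struct sk_psock *psock,
                                      unsigned long delay)
{
        /* 'sched_cpu' is an assumed field saved at map-update time. */
        if (psock->sched_cpu >= 0)
                /* Run the backlog work on the user-chosen CPU. */
                schedule_delayed_work_on(psock->sched_cpu,
                                         &psock->work, delay);
        else
                /* No affinity configured: keep the original behavior. */
                schedule_delayed_work(&psock->work, delay);
}
'''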

Performance Testing:
'client <-> sockmap proxy <-> server'

In 'iperf3' tests, with the iperf3 server bound to CPU5 and the iperf3
client bound to CPU6, throughput without CPU affinity is around
34 Gbits/s, and CPU usage is concentrated on CPU5 and CPU6.
'''
[  5] local 127.0.0.1 port 57144 connected to 127.0.0.1 port 10000
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  3.95 GBytes  33.9 Gbits/sec
[  5]   1.00-2.00   sec  3.95 GBytes  34.0 Gbits/sec
......
'''

With CPU affinity enabled, the performance is close to that of a
direct connection (without any proxy).
'''
[  5] local 127.0.0.1 port 56518 connected to 127.0.0.1 port 10000
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  7.76 GBytes  66.6 Gbits/sec
[  5]   1.00-2.00   sec  7.76 GBytes  66.7 Gbits/sec
......
'''
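
The exact iperf3 invocations are not included in this cover letter; a
plausible reproduction of the pinning described above (assumed
commands, using the port shown in the output) might look like:
'''
# Server pinned to CPU5; the listening port is an assumption here.
taskset -c 5 iperf3 -s -p 10000

# Client pinned to CPU6, connecting to 127.0.0.1 port 10000 as in the
# output above, with the sockmap proxy in the path.
taskset -c 6 iperf3 -c 127.0.0.1 -p 10000
'''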

---
v1 -> v2: fix compile error when some macros are disabled
---

mrpre (2):
  bpf: Introduce cpu affinity for sockmap
  bpf: implement libbpf sockmap cpu affinity

 include/linux/bpf.h                           |  5 ++--
 include/linux/skmsg.h                         |  8 +++++++
 include/uapi/linux/bpf.h                      |  4 ++++
 kernel/bpf/syscall.c                          | 23 ++++++++++++++-----
 net/core/skmsg.c                              | 14 +++++------
 net/core/sock_map.c                           | 13 +++++------
 tools/include/uapi/linux/bpf.h                |  4 ++++
 tools/lib/bpf/bpf.c                           | 22 ++++++++++++++++++
 tools/lib/bpf/bpf.h                           |  9 ++++++++
 tools/lib/bpf/libbpf.map                      |  1 +
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 19 +++++++++++----
 11 files changed, 96 insertions(+), 26 deletions(-)