Message ID: cover.1662295929.git.leonro@nvidia.com (mailing list archive)
Series: Extend XFRM core to allow full offload configuration
On Sun, Sep 04, 2022 at 04:15:34PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Changelog:
> v4:
>  * Changed title from "PATCH" to "PATCH RFC" per request.
>  * Added two new patches: one to update hard/soft limits and another
>    initial take on documentation.
>  * Added more info about lifetime/rekeying flow to cover letter, see
>    relevant section.
>  * perf traces for crypto mode will come later.

<...>

The series is v4 and not as written in the subject title.

Thanks

> --
> 2.37.2
>
On Sun, Sep 04, 2022 at 04:15:34PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Changelog:
> v4:

<...>

>  * perf traces for crypto mode will come later.

Single core RX:

# Children      Self  Samples  Command        Shared Object     Symbol
# ........  ........  .......  .............  ................  .................
#
    99.97%     0.00%        0  ksoftirqd/143  [kernel.vmlinux]  [k] ret_from_fork
    ---ret_from_fork
       kthread
       smpboot_thread_fn
        --99.96%--run_ksoftirqd
                  __do_softirq
          --99.86%--net_rx_action
            |--61.49%--mlx5e_napi_poll
            |  |--58.43%--mlx5e_poll_rx_cq
            |  |   --57.27%--mlx5e_handle_rx_cqe
            |  |     |--33.05%--napi_gro_receive
            |  |     |   --32.42%--dev_gro_receive
            |  |     |     --29.64%--inet_gro_receive
            |  |     |       --27.70%--esp4_gro_receive
            |  |     |         --25.80%--xfrm_input
            |  |     |           |--6.86%--xfrm4_transport_finish
            |  |     |           |  |--4.19%--__memmove
            |  |     |           |   --1.27%--ip_send_check
            |  |     |           |--6.02%--esp_input_done2
            |  |     |           |   --3.26%--skb_copy_bits
            |  |     |           |     --2.50%--memcpy_erms
            |  |     |           |--2.19%--_raw_spin_lock
            |  |     |           |--1.22%--xfrm_rcv_cb
            |  |     |           |   --0.68%--xfrm4_rcv_cb
            |  |     |           |--0.97%--xfrm_inner_mode_input.isra.35
            |  |     |           |--0.97%--gro_cells_receive
            |  |     |           |--0.69%--esp_input_tail
            |  |     |            --0.66%--xfrm_parse_spi
            |  |     |--11.91%--mlx5e_skb_from_cqe_linear
            |  |     |   --5.63%--build_skb
            |  |     |     --3.82%--__build_skb
            |  |     |       --1.97%--kmem_cache_alloc
            |  |      --9.97%--mlx5e_build_rx_skb
            |  |        --7.23%--mlx5e_ipsec_offload_handle_rx_skb
            |  |          |--3.60%--secpath_set
            |  |          |   --3.41%--skb_ext_add
            |  |          |     --2.69%--__skb_ext_alloc
            |  |          |       --2.58%--kmem_cache_alloc
            |  |          |         --0.60%--__slab_alloc
            |  |          |           --0.56%--___slab_alloc
            |  |           --2.52%--mlx5e_ipsec_sadb_rx_lookup
            |   --2.78%--mlx5e_post_rx_wqes

I have TX traces too and can add if RX are not sufficient.

Thanks
On Sun, Sep 04, 2022 at 04:15:34PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>

<...>

> Leon Romanovsky (8):
>   xfrm: add new full offload flag
>   xfrm: allow state full offload mode
>   xfrm: add an interface to offload policy
>   xfrm: add TX datapath support for IPsec full offload mode
>   xfrm: add RX datapath protection for IPsec full offload mode
>   xfrm: enforce separation between priorities of HW/SW policies
>   xfrm: add support to HW update soft and hard limits
>   xfrm: document IPsec full offload mode

Kindly reminder.

Thanks
On Thu, 8 Sep 2022 12:56:16 +0300 Leon Romanovsky wrote:
> I have TX traces too and can add if RX are not sufficient.
The perf trace is good, but for those of us not intimately familiar
with xfrm, could you provide some analysis here?
On Wed, Sep 21, 2022 at 07:59:27AM -0700, Jakub Kicinski wrote:
> On Thu, 8 Sep 2022 12:56:16 +0300 Leon Romanovsky wrote:
> > I have TX traces too and can add if RX are not sufficient.
>
> The perf trace is good, but for those of us not intimately familiar
> with xfrm, could you provide some analysis here?

The perf trace presented is for the RX path of IPsec crypto offload mode.
In that mode, the decrypted packet enters the netdev stack to perform
various XFRM-specific checks. The trace shows "the cost" of these checks,
which is 25% according to the line "--25.80%--xfrm_input".

xfrm_input() has a number of "slow" places (the other places are not fast
either), which in IPsec full offload mode are handled by the HW in
parallel, without any locks. The avoided checks include:
 * XFRM state lookup, which is a linked-list iteration.
 * Taking the lock of the whole xfrm_state, which means that parallel
   traffic is congested on this lock.
 * Double calculation of the replay window protection.
 * Update of the replay window.

https://elixir.bootlin.com/linux/v6.0-rc6/source/net/xfrm/xfrm_input.c#L459
int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
{
...
	x = xfrm_state_lookup(net, mark, daddr, spi, nexthdr, family);
...
	spin_lock(&x->lock);
...
	if (xfrm_replay_check(x, skb, seq)) {
...
	spin_unlock(&x->lock);
...
	spin_lock(&x->lock);
...
	if (xfrm_replay_recheck(x, skb, seq)) {
...
	xfrm_replay_advance(x, seq);
.....
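[Editorial note: to make the replay-window cost above concrete, here is a
minimal, self-contained C sketch of a sliding anti-replay window of the
kind xfrm_input() maintains under x->lock. It is illustrative only, not
the kernel's implementation; the 64-bit window size and all struct and
function names here are invented.]

/*
 * Toy model of anti-replay bookkeeping: a bitmap window anchored at the
 * highest sequence number seen so far. Done per packet under a lock in
 * SW; done by the HW in full offload mode.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REPLAY_WIN 64 /* window size in sequence numbers, illustrative */

struct replay_state {
	uint32_t top;     /* highest sequence number seen so far */
	uint64_t bitmap;  /* bit i set => seq (top - i) already seen */
};

/* Role of xfrm_replay_check(): reject packets too old or already seen. */
static bool replay_check(const struct replay_state *rs, uint32_t seq)
{
	if (seq > rs->top)
		return true;                  /* new highest, accept */
	if (rs->top - seq >= REPLAY_WIN)
		return false;                 /* fell out of the window, drop */
	return !(rs->bitmap & (1ULL << (rs->top - seq))); /* replay? */
}

/* Role of xfrm_replay_advance(): record an accepted packet. */
static void replay_advance(struct replay_state *rs, uint32_t seq)
{
	if (seq > rs->top) {
		uint32_t shift = seq - rs->top;

		/* Slide the window; guard against UB on huge jumps. */
		rs->bitmap = shift < REPLAY_WIN ? (rs->bitmap << shift) | 1 : 1;
		rs->top = seq;
	} else {
		rs->bitmap |= 1ULL << (rs->top - seq);
	}
}

int main(void)
{
	struct replay_state rs = { .top = 0, .bitmap = 1 };
	uint32_t pkts[] = { 1, 2, 2, 5, 3 }; /* second '2' is a replay */

	for (size_t i = 0; i < sizeof(pkts) / sizeof(pkts[0]); i++) {
		if (replay_check(&rs, pkts[i])) {
			replay_advance(&rs, pkts[i]);
			printf("seq %u accepted\n", pkts[i]);
		} else {
			printf("seq %u dropped (replay)\n", pkts[i]);
		}
	}
	return 0;
}

Note the check/advance split: in SW the check runs once, the packet is
decrypted, then the state is re-checked and advanced under the lock again,
which is the "double calculation" mentioned above.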
On Mon, Sep 19, 2022 at 12:31:11PM +0300, Leon Romanovsky wrote:
> On Sun, Sep 04, 2022 at 04:15:34PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
>
> <...>
>
> > Leon Romanovsky (8):
> >   xfrm: add new full offload flag
> >   xfrm: allow state full offload mode
> >   xfrm: add an interface to offload policy
> >   xfrm: add TX datapath support for IPsec full offload mode
> >   xfrm: add RX datapath protection for IPsec full offload mode
> >   xfrm: enforce separation between priorities of HW/SW policies
> >   xfrm: add support to HW update soft and hard limits
> >   xfrm: document IPsec full offload mode
>
> Kindly reminder.

Hi Steffen,

Can we please progress with this series? I would like to see it merged in
this merge window, so I can progress with the iproute2 and mlx5 code too.

It has been in the ML for approximately three weeks already without any
objections. The implementation proposed here has nothing specific to our
HW and is applicable to other vendors as well.

Thanks

>
> Thanks
On Thu, Sep 22, 2022 at 10:17:31AM +0300, Leon Romanovsky wrote:
> On Mon, Sep 19, 2022 at 12:31:11PM +0300, Leon Romanovsky wrote:
> > On Sun, Sep 04, 2022 at 04:15:34PM +0300, Leon Romanovsky wrote:
> > > From: Leon Romanovsky <leonro@nvidia.com>
> >
> > <...>
> >
> > > Leon Romanovsky (8):
> > >   xfrm: add new full offload flag
> > >   xfrm: allow state full offload mode
> > >   xfrm: add an interface to offload policy
> > >   xfrm: add TX datapath support for IPsec full offload mode
> > >   xfrm: add RX datapath protection for IPsec full offload mode
> > >   xfrm: enforce separation between priorities of HW/SW policies
> > >   xfrm: add support to HW update soft and hard limits
> > >   xfrm: document IPsec full offload mode
> >
> > Kindly reminder.
>
> Hi Steffen,
>
> Can we please progress with this series? I would like to see it merged in
> this merge window, so I can progress with the iproute2 and mlx5 code too.
>
> It has been in the ML for approximately three weeks already without any
> objections. The implementation proposed here has nothing specific to our
> HW and is applicable to other vendors as well.

I've just replied to our private thread about this. I'll have a look at
the patches tomorrow. I've just returned from vacation and am still a bit
backlogged...
On Wed, Sep 21, 2022 at 08:37:06PM +0300, Leon Romanovsky wrote:
> On Wed, Sep 21, 2022 at 07:59:27AM -0700, Jakub Kicinski wrote:
> > On Thu, 8 Sep 2022 12:56:16 +0300 Leon Romanovsky wrote:
> > > I have TX traces too and can add if RX are not sufficient.
> >
> > The perf trace is good, but for those of us not intimately familiar
> > with xfrm, could you provide some analysis here?
>
> The perf trace presented is for the RX path of IPsec crypto offload mode.
> In that mode, the decrypted packet enters the netdev stack to perform
> various XFRM-specific checks.

Can you provide the perf traces and analysis for the TX side too? That
would be interesting in particular, because the policy and state lookups
there still happen in software.

Thanks a lot for your effort on this!
On Sun, Sep 25, 2022 at 11:40:39AM +0200, Steffen Klassert wrote:
> On Wed, Sep 21, 2022 at 08:37:06PM +0300, Leon Romanovsky wrote:
> > On Wed, Sep 21, 2022 at 07:59:27AM -0700, Jakub Kicinski wrote:
> > > On Thu, 8 Sep 2022 12:56:16 +0300 Leon Romanovsky wrote:
> > > > I have TX traces too and can add if RX are not sufficient.
> > >
> > > The perf trace is good, but for those of us not intimately familiar
> > > with xfrm, could you provide some analysis here?
> >
> > The perf trace presented is for the RX path of IPsec crypto offload
> > mode. In that mode, the decrypted packet enters the netdev stack to
> > perform various XFRM-specific checks.
>
> Can you provide the perf traces and analysis for the TX side too? That
> would be interesting in particular, because the policy and state lookups
> there still happen in software.

Single core TX (crypto mode) from the same run. Please note that TX is not
really the bottleneck here: RX probably kept TX from being exercised
enough, and TX is also a lighter path than RX.

# Children      Self  Samples  Command  Shared Object     Symbol
# ........  ........  .......  .......  ................  ..................................
#
    86.58%     0.00%        0  swapper  [kernel.vmlinux]  [k] secondary_startup_64_no_verify
    ---secondary_startup_64_no_verify
       start_secondary
       cpu_startup_entry
       do_idle
        --86.37%--cpu_idle_poll
          --24.53%--asm_common_interrupt
            --24.48%--common_interrupt
              |--23.47%--irq_exit_rcu
              |   --23.23%--do_softirq_own_stack
              |     --23.17%--asm_call_irq_on_stack
              |               __do_softirq
              |       |--22.23%--net_rx_action
              |       |  |--20.17%--gro_cell_poll
              |       |  |   --20.02%--napi_complete_done
              |       |  |     --19.98%--gro_normal_list.part.154
              |       |  |       --19.96%--netif_receive_skb_list_internal
              |       |  |         --19.89%--__netif_receive_skb_list_core
              |       |  |           --19.77%--ip_list_rcv
              |       |  |             --19.67%--ip_sublist_rcv
              |       |  |               --19.56%--ip_sublist_rcv_finish
              |       |  |                 --19.54%--ip_local_deliver
              |       |  |                   --19.49%--ip_local_deliver_finish
              |       |  |                     --19.47%--ip_protocol_deliver_rcu
              |       |  |                       --19.43%--tcp_v4_rcv
              |       |  |                         --18.87%--tcp_v4_do_rcv
              |       |  |                           --18.83%--tcp_rcv_established
              |       |  |                             |--16.41%--__tcp_push_pending_frames
              |       |  |                             |   --16.38%--tcp_write_xmit
              |       |  |                             |     |--6.35%--tcp_event_new_data_sent
              |       |  |                             |     |   --6.22%--sk_reset_timer
              |       |  |                             |     |     --6.21%--mod_timer
              |       |  |                             |     |       --6.10%--get_nohz_timer_target
              |       |  |                             |     |         --1.87%--cpumask_next_and
              |       |  |                             |     |           --1.07%--_find_next_bit.constprop.1
              |       |  |                             |     |--5.50%--tcp_schedule_loss_probe
              |       |  |                             |     |   --5.49%--sk_reset_timer
              |       |  |                             |     |             mod_timer
              |       |  |                             |     |       --5.43%--get_nohz_timer_target
              |       |  |                             |     |         --1.37%--cpumask_next_and
              |       |  |                             |     |           --0.71%--_find_next_bit.constprop.1
              |       |  |                             |      --4.31%--__tcp_transmit_skb
              |       |  |                             |        --3.87%--__ip_queue_xmit
              |       |  |                             |          --3.54%--xfrm4_output
              |       |  |                             |            --3.26%--xfrm_output_resume
              |       |  |                             |              --2.88%--ip_output
              |       |  |                             |                --2.78%--ip_finish_output2
              |       |  |                             |                  --2.73%--__dev_queue_xmit
              |       |  |                             |                    --2.49%--sch_direct_xmit
              |       |  |                             |                      |--1.50%--validate_xmit_skb_list
              |       |  |                             |                      |   --1.32%--validate_xmit_skb
              |       |  |                             |                      |     --1.06%--__skb_gso_segment
              |       |  |                             |                      |       --1.04%--skb_mac_gso_segment
              |       |  |                             |                      |         --1.02%--inet_gso_segment
              |       |  |                             |                      |           --0.93%--esp4_gso_segment
              |       |  |                             |                      |             --0.86%--tcp_gso_segment
              |       |  |                             |                      |               --0.78%--skb_segment
              |       |  |                             |                       --0.77%--dev_hard_start_xmit
              |       |  |                             |                         --0.75%--mlx5e_xmit
              |       |  |                              --1.87%--tcp_ack
              |       |  |                                --1.66%--tcp_clean_rtx_queue
              |       |  |                                  --1.35%--__kfree_skb
              |       |  |                                    --1.21%--skb_release_data
              |       |   --1.92%--mlx5e_napi_poll
              |       |     --1.38%--mlx5e_poll_rx_cq
              |       |       --1.33%--mlx5e_handle_rx_cqe
              |       |         --0.53%--napi_gro_receive
              |       |           --0.52%--dev_gro_receive
              |        --0.77%--tasklet_action_common.isra.17
               --0.80%--asm_call_irq_on_stack
                 --0.78%--handle_edge_irq
                   --0.74%--handle_irq_event
                     --0.71%--handle_irq_event_percpu
                       --0.64%--__handle_irq_event_percpu
                         --0.60%--mlx5_irq_int_handler
                           --0.58%--atomic_notifier_call_chain
                             --0.57%--mlx5_eq_comp_int
On Mon, Sep 26, 2022 at 09:55:45AM +0300, Leon Romanovsky wrote:
> On Sun, Sep 25, 2022 at 11:40:39AM +0200, Steffen Klassert wrote:
> > On Wed, Sep 21, 2022 at 08:37:06PM +0300, Leon Romanovsky wrote:
> > > On Wed, Sep 21, 2022 at 07:59:27AM -0700, Jakub Kicinski wrote:
> > > > On Thu, 8 Sep 2022 12:56:16 +0300 Leon Romanovsky wrote:
> > > > > I have TX traces too and can add if RX are not sufficient.
> > > >
> > > > The perf trace is good, but for those of us not intimately familiar
> > > > with xfrm, could you provide some analysis here?
> > >
> > > The perf trace presented is for the RX path of IPsec crypto offload
> > > mode. In that mode, the decrypted packet enters the netdev stack to
> > > perform various XFRM-specific checks.
> >
> > Can you provide the perf traces and analysis for the TX side too? That
> > would be interesting in particular, because the policy and state
> > lookups there still happen in software.
>
> Single core TX (crypto mode) from the same run. Please note that TX is
> not really the bottleneck here: RX probably kept TX from being exercised
> enough, and TX is also a lighter path than RX.

Thanks for this! How many policies and SAs were installed when you ran
this? A run with 'many' policies and SAs would be interesting, in
particular a comparison between crypto and full offload. That would
show us where the performance of the full offload comes from.
On Tue, Sep 27, 2022 at 07:59:03AM +0200, Steffen Klassert wrote:
> On Mon, Sep 26, 2022 at 09:55:45AM +0300, Leon Romanovsky wrote:
> > On Sun, Sep 25, 2022 at 11:40:39AM +0200, Steffen Klassert wrote:
> > > On Wed, Sep 21, 2022 at 08:37:06PM +0300, Leon Romanovsky wrote:
> > > > On Wed, Sep 21, 2022 at 07:59:27AM -0700, Jakub Kicinski wrote:
> > > > > On Thu, 8 Sep 2022 12:56:16 +0300 Leon Romanovsky wrote:
> > > > > > I have TX traces too and can add if RX are not sufficient.
> > > > >
> > > > > The perf trace is good, but for those of us not intimately
> > > > > familiar with xfrm, could you provide some analysis here?
> > > >
> > > > The perf trace presented is for the RX path of IPsec crypto offload
> > > > mode. In that mode, the decrypted packet enters the netdev stack to
> > > > perform various XFRM-specific checks.
> > >
> > > Can you provide the perf traces and analysis for the TX side too?
> > > That would be interesting in particular, because the policy and state
> > > lookups there still happen in software.
> >
> > Single core TX (crypto mode) from the same run. Please note that TX is
> > not really the bottleneck here: RX probably kept TX from being
> > exercised enough, and TX is also a lighter path than RX.
>
> Thanks for this! How many policies and SAs were installed when you ran
> this? A run with 'many' policies and SAs would be interesting, in
> particular a comparison between crypto and full offload. That would
> show us where the performance of the full offload comes from.

It was a 160-CPU machine with one policy/SA per CPU and direction. In
total, 320 policies and 320 SAs.

Thanks
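[Editorial note: for readers wondering why the policy count matters here:
the software TX path has to match each packet against the installed
policies, so lookup cost grows with their number, which is part of what
full offload sidesteps. Below is a toy C model of such a linear lookup; it
is not the kernel's xfrm_policy code, and every name in it is invented for
illustration, in the spirit of the "linked list iteration" mentioned for
state lookup earlier in the thread.]

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_policy {
	uint32_t saddr, daddr;     /* selector, host byte order */
	int priority;              /* lower value wins */
	struct toy_policy *next;
};

/* Walk the whole list: with N installed policies this is O(N) per packet. */
static struct toy_policy *lookup(struct toy_policy *head,
				 uint32_t saddr, uint32_t daddr)
{
	struct toy_policy *best = NULL;

	for (struct toy_policy *p = head; p; p = p->next)
		if (p->saddr == saddr && p->daddr == daddr &&
		    (!best || p->priority < best->priority))
			best = p;
	return best;
}

int main(void)
{
	/* Build 320 policies, matching the setup described above. */
	struct toy_policy *head = NULL;

	for (uint32_t i = 0; i < 320; i++) {
		struct toy_policy *p = malloc(sizeof(*p));

		p->saddr = i;
		p->daddr = i + 1000;
		p->priority = (int)i;
		p->next = head;
		head = p;
	}

	struct toy_policy *hit = lookup(head, 5, 1005);

	printf("matched policy prio %d\n", hit ? hit->priority : -1);
	return 0;
}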
From: Leon Romanovsky <leonro@nvidia.com>

Changelog:
v4:
 * Changed title from "PATCH" to "PATCH RFC" per request.
 * Added two new patches: one to update hard/soft limits and another
   initial take on documentation.
 * Added more info about lifetime/rekeying flow to cover letter, see
   relevant section.
 * perf traces for crypto mode will come later.
v3: https://lore.kernel.org/all/cover.1661260787.git.leonro@nvidia.com
 * I didn't hear any suggestion what term to use instead of
   "full offload", so left it as is. It is used in commit messages and
   documentation only and is easy to rename.
 * Added performance data and background info to cover letter.
 * Reused xfrm_output_resume() function to support multiple XFRM
   transformations.
 * Added PMTU check in addition to driver .xdo_dev_offload_ok validation.
 * Documentation is in progress, but not part of this series yet.
v2: https://lore.kernel.org/all/cover.1660639789.git.leonro@nvidia.com
 * Rebased to latest 6.0-rc1.
 * Added an extra check in TX datapath patch to validate packets before
   forwarding to HW.
 * Added policy cleanup logic in case of netdev down event.
v1: https://lore.kernel.org/all/cover.1652851393.git.leonro@nvidia.com
 * Moved comment to be before if (...) in third patch.
v0: https://lore.kernel.org/all/cover.1652176932.git.leonro@nvidia.com

-----------------------------------------------------------------------

The following series extends the XFRM core code to handle a new type of
IPsec offload - full offload. In this mode, the HW is responsible for the
whole data path, so both the policy and the state should be offloaded.

IPsec full offload is an improved version of IPsec crypto mode. In full
mode, the HW is responsible for trimming/adding headers in addition to
decryption/encryption. In this mode, the packet arrives at the stack
already decrypted, and vice versa for TX (it leaves to the HW
unencrypted).

Devices that implement IPsec full offload mode offload policies too. In
the RX path, this means the HW can't effectively handle mixed SW and HW
priorities unless users make sure that HW-offloaded policies have higher
priorities. To give users a coherent picture, we require that HW-offloaded
policies always have higher priorities (both RX and TX) than SW ones.

To not over-engineer the code, HW policies are treated as SW ones and
don't take the netdev into account, to allow reuse of the same priorities
for different devices.

There are several deliberate limitations:
 * No software fallback
 * Fragments are dropped, both in RX and TX
 * No socket policies
 * Only IPsec transport mode is implemented

================================================================================
Rekeying:

In order to support rekeying, as the XFRM core is skipped, the HW/driver
should do the following:
 * Count the handled packets
 * Raise an event when limits are reached
 * Drop packets once the hard limit is reached

The XFRM core calls the newly introduced xfrm_dev_state_update_curlft()
function in order to sync the device statistics with the internal
structures. On a HW limit event, the driver calls
xfrm_state_check_expire() to let the XFRM core take the relevant
decisions. This separation between control logic (in XFRM) and data plane
allows us to fully reuse the SW stack. A standalone model of this flow is
sketched below.
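[Editorial note: a minimal standalone C model of the division of labor
described above, assuming the behavior stated in this cover letter. It is
a sketch only: hw_update_curlft() and check_expire() are invented
stand-ins for the roles played by xfrm_dev_state_update_curlft() and
xfrm_state_check_expire(), and the limit values are arbitrary.]

#include <stdint.h>
#include <stdio.h>

struct sa_lifetime {
	uint64_t packets;     /* mirrored from the HW counters */
	uint64_t soft_limit;  /* request a rekey above this */
	uint64_t hard_limit;  /* HW drops traffic above this */
};

enum sa_action { SA_OK, SA_REKEY, SA_EXPIRED };

/* Role of xfrm_dev_state_update_curlft(): pull counters out of the HW. */
static void hw_update_curlft(struct sa_lifetime *lft, uint64_t hw_packets)
{
	lft->packets = hw_packets;
}

/* Role of xfrm_state_check_expire(): the control-plane decision. */
static enum sa_action check_expire(const struct sa_lifetime *lft)
{
	if (lft->packets >= lft->hard_limit)
		return SA_EXPIRED;  /* HW already drops; tear down the SA */
	if (lft->packets >= lft->soft_limit)
		return SA_REKEY;    /* ask the keying daemon for a new SA */
	return SA_OK;
}

int main(void)
{
	struct sa_lifetime lft = { .soft_limit = 1000, .hard_limit = 1200 };
	/* Pretend the HW raised limit events at these counter values. */
	uint64_t events[] = { 800, 1050, 1250 };

	for (size_t i = 0; i < sizeof(events) / sizeof(events[0]); i++) {
		hw_update_curlft(&lft, events[i]);
		switch (check_expire(&lft)) {
		case SA_OK:
			printf("%llu pkts: ok\n",
			       (unsigned long long)lft.packets);
			break;
		case SA_REKEY:
			printf("%llu pkts: soft limit, rekey\n",
			       (unsigned long long)lft.packets);
			break;
		case SA_EXPIRED:
			printf("%llu pkts: hard limit, expire\n",
			       (unsigned long long)lft.packets);
			break;
		}
	}
	return 0;
}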
================================================================================
Configuration:

iproute2: https://lore.kernel.org/netdev/cover.1652179360.git.leonro@nvidia.com/

Full offload mode:
  ip xfrm state offload full dev <if-name> dir <in|out>
  ip xfrm policy .... offload full dev <if-name>

Crypto offload mode:
  ip xfrm state offload crypto dev <if-name> dir <in|out>
or (backward compatibility)
  ip xfrm state offload dev <if-name> dir <in|out>

================================================================================
Performance results:

TCP multi-stream, using an iperf3 instance per CPU.

+----------------------+-------+--------+--------+--------+---------+---------+
|                      | 1 CPU | 2 CPUs | 4 CPUs | 8 CPUs | 16 CPUs | 32 CPUs |
|                      +-------+--------+--------+--------+---------+---------+
|                      |                       BW (Gbps)                       |
+----------------------+-------+--------+--------+--------+---------+---------+
| Baseline             | 27.9  | 59     | 93.1   | 92.8   | 93.7    | 94.4    |
+----------------------+-------+--------+--------+--------+---------+---------+
| Software IPsec       | 6     | 11.9   | 23.3   | 45.9   | 83.8    | 91.8    |
+----------------------+-------+--------+--------+--------+---------+---------+
| IPsec crypto offload | 15    | 29.7   | 58.5   | 89.6   | 90.4    | 90.8    |
+----------------------+-------+--------+--------+--------+---------+---------+
| IPsec full offload   | 28    | 57     | 90.7   | 91     | 91.3    | 91.9    |
+----------------------+-------+--------+--------+--------+---------+---------+

IPsec full offload mode behaves like the baseline and reaches line rate
with the same number of CPUs.

Setup details (similar for both sides):
 * NIC: ConnectX6-DX dual port, 100 Gbps each. Single port used in the tests.
 * CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz

================================================================================
Series together with mlx5 part:
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next

Thanks

Leon Romanovsky (8):
  xfrm: add new full offload flag
  xfrm: allow state full offload mode
  xfrm: add an interface to offload policy
  xfrm: add TX datapath support for IPsec full offload mode
  xfrm: add RX datapath protection for IPsec full offload mode
  xfrm: enforce separation between priorities of HW/SW policies
  xfrm: add support to HW update soft and hard limits
  xfrm: document IPsec full offload mode

 Documentation/networking/xfrm_device.rst      |  62 +++++-
 .../inline_crypto/ch_ipsec/chcr_ipsec.c       |   4 +
 .../net/ethernet/intel/ixgbe/ixgbe_ipsec.c    |   5 +
 drivers/net/ethernet/intel/ixgbevf/ipsec.c    |   5 +
 .../mellanox/mlx5/core/en_accel/ipsec.c       |   4 +
 drivers/net/netdevsim/ipsec.c                 |   5 +
 include/linux/netdevice.h                     |   4 +
 include/net/netns/xfrm.h                      |   8 +-
 include/net/xfrm.h                            | 121 +++++++++---
 include/uapi/linux/xfrm.h                     |   6 +
 net/xfrm/xfrm_device.c                        | 103 +++++++++-
 net/xfrm/xfrm_output.c                        |  13 +-
 net/xfrm/xfrm_policy.c                        | 181 ++++++++++++++++++
 net/xfrm/xfrm_state.c                         |   4 +
 net/xfrm/xfrm_user.c                          |  19 ++
 15 files changed, 501 insertions(+), 43 deletions(-)