[xfrm-next,v9,0/8] Extend XFRM core to allow packet offload configuration

Message ID: cover.1669547603.git.leonro@nvidia.com

Leon Romanovsky Nov. 27, 2022, 11:18 a.m. UTC
From: Leon Romanovsky <leonro@nvidia.com>

Changelog:
v9:
 * Added acquire support
v8: https://lore.kernel.org/all/cover.1668753030.git.leonro@nvidia.com
 * Removed unrelated blank line
 * Fixed typos in documentation
v7: https://lore.kernel.org/all/cover.1667997522.git.leonro@nvidia.com
As was discussed in IPsec workshop:
 * Renamed "full offload" to "packet offload".
 * Added a check that the offloaded SA and policy use the same device when sending a packet
 * Added to the SAD the same lookup speed-up optimization as was done for the SPD.
v6: https://lore.kernel.org/all/cover.1666692948.git.leonro@nvidia.com
 * Fixed misplaced "!" in sixth patch.
v5: https://lore.kernel.org/all/cover.1666525321.git.leonro@nvidia.com
 * Rebased to latest ipsec-next.
 * Replaced HW priority patch with solution which mimics separated SPDs
   for SW and HW. See more description in this cover letter.
 * Dropped RFC tag, usecase, API and implementation are clear.
v4: https://lore.kernel.org/all/cover.1662295929.git.leonro@nvidia.com
 * Changed title from "PATCH" to "PATCH RFC" per request.
 * Added two new patches: one to update hard/soft limits and another
   initial take on documentation.
 * Added more info about lifetime/rekeying flow to cover letter, see
   relevant section.
 * perf traces for crypto mode will come later.
v3: https://lore.kernel.org/all/cover.1661260787.git.leonro@nvidia.com
 * I didn't hear any suggestion what term to use instead of
   "packet offload", so left it as is. It is used in commit messages
   and documentation only and easy to rename.
 * Added performance data and background info to cover letter
 * Reused xfrm_output_resume() function to support multiple XFRM transformations
 * Add PMTU check in addition to driver .xdo_dev_offload_ok validation
 * Documentation is in progress, but not part of this series yet.
v2: https://lore.kernel.org/all/cover.1660639789.git.leonro@nvidia.com
 * Rebased to latest 6.0-rc1
 * Add an extra check in TX datapath patch to validate packets before
   forwarding to HW.
 * Added policy cleanup logic in case of netdev down event
v1: https://lore.kernel.org/all/cover.1652851393.git.leonro@nvidia.com
 * Moved comment to be before if (...) in third patch.
v0: https://lore.kernel.org/all/cover.1652176932.git.leonro@nvidia.com
-----------------------------------------------------------------------

The following series extends XFRM core code to handle a new type of IPsec
offload - packet offload.

In this mode, the HW is going to be responsible for the whole data path,
so both policy and state should be offloaded.

IPsec packet offload is an improved version of the IPsec crypto mode.
In packet mode, the HW is responsible for trimming/adding headers in
addition to decryption/encryption. In this mode, packets arrive at the
stack already decrypted, and vice versa for TX (they leave to the HW
unencrypted).

Devices that implement IPsec packet offload mode offload policies too.
In the RX path, this creates a situation where the HW can't effectively
handle mixed SW and HW priorities unless users make sure that
HW-offloaded policies have higher priorities.

This means that we don't need to perform any search of inexact policies
and/or priority checks once a HW policy is matched. In that situation,
the HW will catch the packets anyway, and the HW can still implement
inexact lookups.

If no specific policy is found, we continue with the packet lookup and
check for the existence of HW policies in the inexact list.

HW policies are added to the head of the SPD to ensure a fast lookup, as
XFRM iterates over all policies in a loop.

This simple solution allows us to achieve the same benefits as separate
HW/SW policy databases without over-engineering the code to iterate over
and manage two databases in the same path.

To avoid over-engineering the code, HW policies are treated as SW ones
and don't take the netdev into account, which allows the reuse of the
same priorities for different devices.

Current limitations of packet offload mode:
 * No software fallback
 * Fragments are dropped, both in RX and TX
 * No socket policies
 * Only IPsec transport mode is implemented
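The head-insertion scheme described above can be sketched as follows. This is a toy model, not the kernel's actual xfrm code: HW policies are linked at the head of a single SPD list and SW policies at the tail, so a plain linear walk naturally prefers HW entries. All names (`policy_add`, `policy_lookup`, the `selector` field) are illustrative.

```c
#include <assert.h>
#include <stddef.h>

enum pol_type { POL_SW, POL_HW };

struct policy {
	enum pol_type type;
	int selector;            /* stand-in for the real flow selector */
	struct policy *next;
};

static struct policy *spd;       /* single policy database */

static void policy_add(struct policy *p)
{
	if (p->type == POL_HW) {         /* HW policy: insert at head */
		p->next = spd;
		spd = p;
	} else {                         /* SW policy: append at tail */
		struct policy **pp = &spd;

		while (*pp)
			pp = &(*pp)->next;
		p->next = NULL;
		*pp = p;
	}
}

static struct policy *policy_lookup(int selector)
{
	struct policy *p;

	for (p = spd; p; p = p->next)
		if (p->selector == selector)
			return p;        /* first match wins: HW before SW */
	return NULL;
}
```

With this ordering, a HW policy with the same selector as a SW one always wins the lookup, which mimics separate HW/SW databases without maintaining two lists.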

================================================================================
Rekeying:

In order to support rekeying, as the XFRM core is skipped, the HW/driver
should do the following:
 * Count the handled packets
 * Raise an event when the limits are reached
 * Drop packets once the hard limit is reached

The XFRM core calls the newly introduced xfrm_dev_state_update_curlft()
function in order to sync the device statistics with the internal
structures. On a HW limit event, the driver calls
xfrm_state_check_expire() to allow the XFRM core to take the relevant
decisions.

This separation between the control logic (in XFRM) and the data plane
allows us to reuse the SW stack.
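The driver-side accounting above can be modelled like this. Field and function names are hypothetical, not the kernel's; in the real driver, the soft-limit event would notify the stack so that the XFRM core can call xfrm_state_check_expire() and trigger rekeying.

```c
#include <assert.h>
#include <stdbool.h>

struct sa_counters {
	unsigned long packets;          /* handled packets */
	unsigned long soft_limit;       /* raise an event here */
	unsigned long hard_limit;       /* drop packets from here on */
	bool soft_event_raised;
};

/* Returns true if the packet may be handled, false once the hard
 * limit is reached (the HW would drop the packet). */
static bool sa_handle_packet(struct sa_counters *sa)
{
	if (sa->packets >= sa->hard_limit)
		return false;

	sa->packets++;
	if (!sa->soft_event_raised && sa->packets >= sa->soft_limit)
		sa->soft_event_raised = true;   /* driver raises limit event */
	return true;
}
```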

================================================================================
Configuration:

iproute2: https://lore.kernel.org/netdev/cover.1652179360.git.leonro@nvidia.com/

Packet offload mode:
  ip xfrm state offload packet dev <if-name> dir <in|out>
  ip xfrm policy .... offload packet dev <if-name>
Crypto offload mode:
  ip xfrm state offload crypto dev <if-name> dir <in|out>
or (backward compatibility)
  ip xfrm state offload dev <if-name> dir <in|out>
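As an illustration of the commands above, a hypothetical outbound transport-mode setup could look like this. Addresses, SPI, reqid, key and device name are all placeholders; note that the state and the policy must be offloaded to the same device:

```shell
# Outbound SA in packet offload mode (placeholder addresses/SPI/key)
ip xfrm state add src 192.168.1.1 dst 192.168.1.2 \
    proto esp spi 0x1000 reqid 0x1000 mode transport \
    aead 'rfc4106(gcm(aes))' 0x6162636465666768696a6b6c6d6e6f7071727374 128 \
    offload packet dev eth0 dir out

# Matching policy, offloaded to the same device as the SA
ip xfrm policy add src 192.168.1.1 dst 192.168.1.2 dir out \
    tmpl src 192.168.1.1 dst 192.168.1.2 proto esp reqid 0x1000 mode transport \
    offload packet dev eth0
```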

================================================================================
Performance results:

TCP multi-stream, using iperf3 instance per-CPU.
+----------------------+--------+--------+--------+--------+---------+---------+
|                      | 1 CPU  | 2 CPUs | 4 CPUs | 8 CPUs | 16 CPUs | 32 CPUs |
|                      +--------+--------+--------+--------+---------+---------+
|                      |                       BW (Gbps)                       |
+----------------------+--------+--------+--------+--------+---------+---------+
| Baseline             | 27.9   | 59     | 93.1   | 92.8   | 93.7    | 94.4    |
+----------------------+--------+--------+--------+--------+---------+---------+
| Software IPsec       | 6      | 11.9   | 23.3   | 45.9   | 83.8    | 91.8    |
+----------------------+--------+--------+--------+--------+---------+---------+
| IPsec crypto offload | 15     | 29.7   | 58.5   | 89.6   | 90.4    | 90.8    |
+----------------------+--------+--------+--------+--------+---------+---------+
| IPsec packet offload | 28     | 57     | 90.7   | 91     | 91.3    | 91.9    |
+----------------------+--------+--------+--------+--------+---------+---------+

IPsec packet offload mode behaves like the baseline and reaches line
rate with the same number of CPUs.

Setup details (similar on both sides):
* NIC: ConnectX6-DX dual port, 100 Gbps each.
  Single port used in the tests.
* CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz

================================================================================
Series together with mlx5 part:
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next

Thanks

Leon Romanovsky (8):
  xfrm: add new packet offload flag
  xfrm: allow state packet offload mode
  xfrm: add an interface to offload policy
  xfrm: add TX datapath support for IPsec packet offload mode
  xfrm: add RX datapath protection for IPsec packet offload mode
  xfrm: speed-up lookup of HW policies
  xfrm: add support to HW update soft and hard limits
  xfrm: document IPsec packet offload mode

 Documentation/networking/xfrm_device.rst      |  62 +++++-
 .../inline_crypto/ch_ipsec/chcr_ipsec.c       |   4 +
 .../net/ethernet/intel/ixgbe/ixgbe_ipsec.c    |   5 +
 drivers/net/ethernet/intel/ixgbevf/ipsec.c    |   5 +
 .../mellanox/mlx5/core/en_accel/ipsec.c       |   4 +
 drivers/net/netdevsim/ipsec.c                 |   5 +
 include/linux/netdevice.h                     |   4 +
 include/net/xfrm.h                            | 124 +++++++++---
 include/uapi/linux/xfrm.h                     |   6 +
 net/xfrm/xfrm_device.c                        | 109 +++++++++-
 net/xfrm/xfrm_output.c                        |  12 +-
 net/xfrm/xfrm_policy.c                        |  85 +++++++-
 net/xfrm/xfrm_state.c                         | 190 ++++++++++++++++--
 net/xfrm/xfrm_user.c                          |  20 ++
 14 files changed, 571 insertions(+), 64 deletions(-)

Comments

Steffen Klassert Dec. 2, 2022, 9:42 a.m. UTC | #1
On Sun, Nov 27, 2022 at 01:18:10PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> [changelog trimmed, same as in the cover letter above]

Please move the Changelog to the end of the commit message.

Except for the minor nit I had in patch 4/8, the patchset looks
ready for merging. I'd prefer to merge it after the upcoming
merge window. But Linus might do an rc8, so I leave it up to you
in that case.

Thanks a lot Leon for your effort to make this patchset ready!
Leon Romanovsky Dec. 2, 2022, 6:05 p.m. UTC | #2
On Fri, Dec 02, 2022 at 10:42:43AM +0100, Steffen Klassert wrote:
> On Sun, Nov 27, 2022 at 01:18:10PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@nvidia.com>
> > 
> > [changelog trimmed, same as in the cover letter above]
> 
> Please move the Changelog to the end of the commit message.
> 
> Except for the minor nit I had in patch 4/8, the patchset looks
> ready for merging. I'd prefer to merge it after the upcoming
> merge window. But Linus might do an rc8, so I leave it up to you
> in that case.

I'm sending the new version now and my preference is to merge it in this
cycle. It will allow us to easily merge the mlx5 part in the next cycle
without any ipsec tree involvement. You won't need to apply and deal
with any merge conflicts which our code can bring :).

Of course, we will CC you and ipsec ML.

Thanks
Jakub Kicinski Dec. 2, 2022, 6:10 p.m. UTC | #3
On Fri, 2 Dec 2022 20:05:19 +0200 Leon Romanovsky wrote:
> You won't need to apply and deal with any
> merge conflicts which our code can bring :).

FWIW the ipsec tree feeds the netdev tree, there should be no conflicts
or full release cycle delays. In fact merging the infra in one cycle
and driver in another seems odd, no?
Leon Romanovsky Dec. 2, 2022, 6:31 p.m. UTC | #4
On Fri, Dec 02, 2022 at 10:10:00AM -0800, Jakub Kicinski wrote:
> On Fri, 2 Dec 2022 20:05:19 +0200 Leon Romanovsky wrote:
> > You won't need to apply and deal with any
> > merge conflicts which our code can bring :).
> 
> FWIW the ipsec tree feeds the netdev tree, there should be no conflicts
> or full release cycle delays. In fact merging the infra in one cycle
> and driver in another seems odd, no?

Not really, it is a matter of trust.

The driver exists https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next
and it is a lot of code (28 patches for now), so it is more natural for us to route it through the
traditional path.

If you are not convinced, I can post all these patches to the ML right now
and Steffen will send them to you.

Thanks
Jakub Kicinski Dec. 2, 2022, 7:26 p.m. UTC | #5
On Fri, 2 Dec 2022 20:31:46 +0200 Leon Romanovsky wrote:
> Not really, it is a matter of trust.

More of a question of whether we can reasonably expect to merge all 
the driver code in a single release cycle. If not then piecemeal
merging is indeed inevitable. But if Steffen is happy with the core
changes whether they are in tree for 6.2 or not should not matter.
An upstream user can't access them anyway, it'd only matter to an
out-of-tree consumer.

That's just my 2 cents, whatever Steffen prefers matters most.

> The driver exists https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next
> and it is a lot of code (28 patches for now) which is more natural for us to route through
> traditional path.
> 
> If you are not convinced, I can post all these patches to the ML right now
> and Steffen will send them to you.
Leon Romanovsky Dec. 2, 2022, 7:45 p.m. UTC | #6
On Fri, Dec 02, 2022 at 11:26:07AM -0800, Jakub Kicinski wrote:
> On Fri, 2 Dec 2022 20:31:46 +0200 Leon Romanovsky wrote:
> > Not really, it is a matter of trust.
> 
> More of a question of whether we can reasonably expect to merge all 
> the driver code in a single release cycle. If not then piecemeal
> merging is indeed inevitable. But if Steffen is happy with the core
> changes whether they are in tree for 6.2 or not should not matter.
> An upstream user can't access them anyway, it'd only matter to an
> out-of-tree consumer.
> 
> That's just my 2 cents, whatever Steffen prefers matters most.

There are no out-of-tree users, just a ton of mlx5 refactoring to natively
support packet offload.

> 
> > The driver exists https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next
> > and it is a lot of code (28 patches for now) which is more natural for us to route through
> > traditional path.
> > 
> > If you are not convinced, I can post all these patches to the ML right now
> > and Steffen will send them to you.
Jakub Kicinski Dec. 2, 2022, 7:52 p.m. UTC | #7
On Fri, 2 Dec 2022 21:45:47 +0200 Leon Romanovsky wrote:
> > More of a question of whether we can reasonably expect to merge all 
> > the driver code in a single release cycle. If not then piecemeal
> > merging is indeed inevitable. But if Steffen is happy with the core
> > changes whether they are in tree for 6.2 or not should not matter.
> > An upstream user can't access them anyway, it'd only matter to an
> > out-of-tree consumer.
> > 
> > That's just my 2 cents, whatever Steffen prefers matters most.  
> 
> There are no out-of-tree users, just a ton of mlx5 refactoring to natively
> support packet offload.

30 patches is just two series, that's mergeable in a week.
You know, if it builds cleanly.. :S  Dunno.
Steffen Klassert Dec. 5, 2022, 9:23 a.m. UTC | #8
On Fri, Dec 02, 2022 at 11:52:13AM -0800, Jakub Kicinski wrote:
> On Fri, 2 Dec 2022 21:45:47 +0200 Leon Romanovsky wrote:
> > > More of a question of whether we can reasonably expect to merge all 
> > > the driver code in a single release cycle. If not then piecemeal
> > > merging is indeed inevitable. But if Steffen is happy with the core
> > > changes whether they are in tree for 6.2 or not should not matter.
> > > An upstream user can't access them anyway, it'd only matter to an
> > > out-of-tree consumer.
> > > 
> > > That's just my 2 cents, whatever Steffen prefers matters most.  
> > 
> > There are no out-of-tree users, just a ton of mlx5 refactoring to natively
> > support packet offload.
> 
> 30 patches is just two series, that's mergeable in a week.
> You know, if it builds cleanly.. :S  Dunno.

The core changes are ready, so there is no real reason
to hold them back.

I have not yet had a closer look at the driver changes, though.

I've just updated ipsec-next; whatever builds with ipsec-next
should build with net-next now. In case the driver changes
do not generate any fallout, I can take them into ipsec-next
as well.

The two driver series and the core series would be about 40
patches. If you are ok with taking such a last minute PR
into net-next, we can go that way.
Jakub Kicinski Dec. 6, 2022, 12:09 a.m. UTC | #9
On Mon, 5 Dec 2022 10:23:04 +0100 Steffen Klassert wrote:
> The two driver series and the core series would be about 40
> patches. If you are ok with taking such a last minute PR
> into net-next, we can go that way.

Fine by me.