
[net-next,v2,00/13] nfp: add support for multi-pf configuration

Message ID 20230816143912.34540-1-louis.peens@corigine.com (mailing list archive)

Message

Louis Peens Aug. 16, 2023, 2:38 p.m. UTC
This patch series introduces support for multiple PFs on multi-port NICs
built with the NFP3800 chip. The NFP3800 can support up to 4 PFs, which
is more in line with the modern expectation that each port/netdev is
associated with its own PF.

For compatibility with NFP4000/6000 cards, and with older management
firmware on the NFP3800, multiple ports sharing a single PF is still
supported after this change. Whether the setup is multi-PF or single-PF
is determined by the management firmware, and the driver notifies the
application firmware of the chosen setup so that both handle it correctly.

* Patch 1/13 and 2/13 are to support new management firmware with bumped
  major version.
* Patch 3/13, 4/13 and 6/13 adjust the application firmware loading
  and unloading mechanism, since multiple PFs share the same
  application firmware.
* Patch 5/13 is a small fix for an issue sparse complains about.
* Patch 7/13 is a small fix to avoid reclaiming resources by mistake in
  multi-PF setup.
* Patch 8/13 re-formats the symbols to communicate with application
  firmware to adapt multi-PF setup.
* Patch 9/13 applies one port/netdev per PF.
* Patch 10/13 is to support both single-PF and multi-PF setup by a
  configuration in application firmware.
* Patch 11/13, 12/13 and 13/13 are the adaptations necessary to use
  SR-IOV in a multi-PF setup.


Since v1:
    Modify 64-bit non-atomic write functions to avoid sparse
    warning

As part of v1 there was also some partially finished discussion about
devlink allowing to bind to multiple bus devices. This series creates a
devlink instance per PF, and the comment was asking if this should maybe
change to be a single instance, since it is still a single device. For
the moment we feel that this is a parallel issue to this specific
series, as it seems to be already implemented this way in other places,
and this series would be matching that.

We are curious about this idea though, as it does seem to make sense if
the original devlink idea was that it should have a one-to-one
correspondence per ASIC. Not sure where one would start with this
though, on first glance it looks like the assumption that devlink is
only connected to a single bus device is embedded quite deep. This
probably needs commenting/discussion with somebody that has pretty good
knowledge of devlink core.

Tianyu Yuan (4):
  nsp: generate nsp command with variable nsp major version
  nfp: bump the nsp major version to support multi-PF
  nfp: apply one port per PF for multi-PF setup
  nfp: configure VF total count for each PF

Yinjun Zhang (9):
  nfp: change application firmware loading flow in multi-PF setup
  nfp: don't skip firmware loading when it's pxe firmware in running
  io-64-nonatomic: truncate bits explicitly to avoid warning
  nfp: introduce keepalive mechanism for multi-PF setup
  nfp: avoid reclaiming resource mutex by mistake
  nfp: redefine PF id used to format symbols
  nfp: enable multi-PF in application firmware if supported
  nfp: configure VF split info into application firmware
  nfp: use absolute vf id for multi-PF case

 drivers/net/ethernet/netronome/nfp/abm/ctrl.c |   2 +-
 drivers/net/ethernet/netronome/nfp/abm/main.c |   2 +-
 drivers/net/ethernet/netronome/nfp/bpf/main.c |   2 +-
 .../net/ethernet/netronome/nfp/flower/main.c  |  19 +-
 drivers/net/ethernet/netronome/nfp/nfp_main.c | 227 ++++++++++++++++--
 drivers/net/ethernet/netronome/nfp/nfp_main.h |  28 +++
 .../net/ethernet/netronome/nfp/nfp_net_ctrl.h |   1 +
 .../net/ethernet/netronome/nfp/nfp_net_main.c | 166 ++++++++++---
 .../ethernet/netronome/nfp/nfp_net_sriov.c    |  39 ++-
 .../ethernet/netronome/nfp/nfp_net_sriov.h    |   5 +
 drivers/net/ethernet/netronome/nfp/nfp_port.c |   4 +-
 .../net/ethernet/netronome/nfp/nfpcore/nfp.h  |   4 +
 .../ethernet/netronome/nfp/nfpcore/nfp_dev.c  |   2 +
 .../ethernet/netronome/nfp/nfpcore/nfp_dev.h  |   1 +
 .../netronome/nfp/nfpcore/nfp_mutex.c         |  21 +-
 .../ethernet/netronome/nfp/nfpcore/nfp_nffw.h |   4 +
 .../ethernet/netronome/nfp/nfpcore/nfp_nsp.c  |  18 +-
 .../netronome/nfp/nfpcore/nfp_rtsym.c         |  16 +-
 drivers/net/ethernet/netronome/nfp/nic/main.c |   3 +-
 include/linux/io-64-nonatomic-hi-lo.h         |   8 +-
 include/linux/io-64-nonatomic-lo-hi.h         |   8 +-
 21 files changed, 482 insertions(+), 98 deletions(-)

Comments

Jakub Kicinski Aug. 18, 2023, 2:22 a.m. UTC | #1
On Wed, 16 Aug 2023 16:38:59 +0200 Louis Peens wrote:
> As part of v1 there was also some partially finished discussion about
> devlink allowing to bind to multiple bus devices. This series creates a
> devlink instance per PF, and the comment was asking if this should maybe
> change to be a single instance, since it is still a single device. For
> the moment we feel that this is a parallel issue to this specific
> series, as it seems to be already implemented this way in other places,
> and this series would be matching that.
> 
> We are curious about this idea though, as it does seem to make sense if
> the original devlink idea was that it should have a one-to-one
> correspondence per ASIC. Not sure where one would start with this
> though, on first glance it looks like the assumption that devlink is
> only connected to a single bus device is embedded quite deep. This
> probably needs commenting/discussion with somebody that has pretty good
> knowledge of devlink core.

How do you suggest we move forward? This is a community project after
all, _someone_ has to start the discussion and then write the code.
Louis Peens Aug. 18, 2023, 7:03 a.m. UTC | #2
On Thu, Aug 17, 2023 at 07:22:05PM -0700, Jakub Kicinski wrote:
> On Wed, 16 Aug 2023 16:38:59 +0200 Louis Peens wrote:
> > As part of v1 there was also some partially finished discussion about
> > devlink allowing to bind to multiple bus devices. This series creates a
> > devlink instance per PF, and the comment was asking if this should maybe
> > change to be a single instance, since it is still a single device. For
> > the moment we feel that this is a parallel issue to this specific
> > series, as it seems to be already implemented this way in other places,
> > and this series would be matching that.
> > 
> > We are curious about this idea though, as it does seem to make sense if
> > the original devlink idea was that it should have a one-to-one
> > correspondence per ASIC. Not sure where one would start with this
> > though, on first glance it looks like the assumption that devlink is
> > only connected to a single bus device is embedded quite deep. This
> > probably needs commenting/discussion with somebody that has pretty good
> > knowledge of devlink core.
> 
> How do you suggest we move forward? This is a community project after
> all, _someone_ has to start the discussion and then write the code.

This is a good point, though it would have been nice if things could
just be wished into existence. I will try to negotiate for some time to
be spent on this from our side, and then raise a new topic for
discussion.