[net-next,v3,00/15] Introduce Intel IDPF driver

Message ID: 20230427020917.12029-1-emil.s.tantilov@intel.com

Message

Tantilov, Emil S April 27, 2023, 2:09 a.m. UTC
This patch series introduces the Intel Infrastructure Data Path Function
(IDPF) driver. It is used for both physical and virtual functions. Except
for some of the device operations, the functionality is the same for both
the PF and the VF. IDPF uses virtchnl version 2 opcodes and structures
defined in the virtchnl2 header file, which help the driver learn the
capabilities and register offsets from the device Control Plane (CP)
instead of assuming default values.
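
For illustration only, here is a stripped-down sketch of what such a
capability exchange can look like. All of the example_* names, fields and
the opcode value below are placeholders invented for this sketch; they are
not the actual virtchnl2 or idpf symbols, and endianness handling is
omitted for brevity.

#include <linux/types.h>

/* Hypothetical, simplified capability exchange: ask the Control Plane
 * what it supports instead of assuming hard-coded defaults.
 */
struct example_caps {
        u64 other_caps;         /* feature capability flags */
        u16 max_vports;         /* how many vports the CP allows */
        u16 mbx_q_len;          /* mailbox sizing learned from the CP */
};

struct example_adapter {
        u64 caps;
        u16 max_vports;
};

/* placeholder for a mailbox "send and wait for reply" helper */
int example_send_mb_msg(struct example_adapter *adapter, u32 opcode,
                        void *buf, u16 len);

static int example_get_caps(struct example_adapter *adapter)
{
        struct example_caps caps = {};
        int err;

        /* 500 stands in for a virtchnl2-style GET_CAPS opcode */
        err = example_send_mb_msg(adapter, 500, &caps, sizeof(caps));
        if (err)
                return err;

        adapter->caps = caps.other_caps;
        adapter->max_vports = caps.max_vports;

        return 0;
}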

The series is ordered to follow the driver init flow, from probe to
interface open. To start with, probe is called and kicks off the driver
initialization by spawning the 'vc_event_task' work queue, which in turn
calls the 'hard reset' function. As part of that, the mailbox used to
send/receive virtchnl messages to/from the CP is initialized. Once that is
done, 'core init' kicks in, which requests all the required global
resources from the CP and spawns the 'init_task' work queue to create the
vports.
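
As a visual aid, here is a condensed sketch of that flow, using placeholder
example_* names rather than the driver's real symbols (in the driver itself
this logic is spread across idpf_main.c, idpf_lib.c and idpf_virtchnl.c):

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_adapter {
        struct pci_dev *pdev;
        struct workqueue_struct *vc_event_wq;
        struct delayed_work vc_event_task;
};

/* placeholder: reset the device, init the CP mailbox, run core init */
void example_hard_reset(struct example_adapter *adapter);

static void example_vc_event_task(struct work_struct *work)
{
        struct example_adapter *adapter =
                container_of(work, struct example_adapter, vc_event_task.work);

        /* hard reset -> mailbox init -> core init -> spawn init_task */
        example_hard_reset(adapter);
}

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
        struct example_adapter *adapter;

        adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
        if (!adapter)
                return -ENOMEM;

        adapter->pdev = pdev;

        adapter->vc_event_wq = alloc_workqueue("example_vc_event", 0, 0);
        if (!adapter->vc_event_wq) {
                kfree(adapter);
                return -ENOMEM;
        }

        /* everything else happens asynchronously from the work item */
        INIT_DELAYED_WORK(&adapter->vc_event_task, example_vc_event_task);
        queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
                           msecs_to_jiffies(10));

        return 0;
}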

Based on the capability information received, the driver creates the
corresponding number of vports (one or many), each associated with a
netdev. Each vport also has its own resources such as queues, vectors,
etc. From there, the rest of the netdev_ops and the data path are added.
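
A minimal sketch of that per-vport netdev association, again with
placeholder names rather than the real idpf structures:

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

struct example_vport {
        struct net_device *netdev;
        u16 num_txq;            /* per-vport queue resources */
        u16 num_rxq;
        u16 num_q_vectors;      /* per-vport interrupt vectors */
};

static struct example_vport *example_vport_alloc(u16 num_txq, u16 num_rxq)
{
        struct example_vport *vport;
        struct net_device *netdev;

        /* one netdev per vport, sized for that vport's queue counts */
        netdev = alloc_etherdev_mqs(sizeof(*vport), num_txq, num_rxq);
        if (!netdev)
                return NULL;

        vport = netdev_priv(netdev);
        vport->netdev = netdev;
        vport->num_txq = num_txq;
        vport->num_rxq = num_rxq;

        return vport;
}

Each such netdev would then be exposed with register_netdev(), with the
vport's netdev_ops and data path hanging off it.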

IDPF implements both the traditional single queue model and a split queue
model. In the split queue model, separate queues are used for completion
descriptors and for buffers, which makes it possible to implement
out-of-order completions. It also enables asymmetric queues: for example,
multiple RX completion queues can be processed by a single RX buffer
queue, and multiple TX buffer queues can be processed by a single TX
completion queue. In the single queue model, the same queue is used for
both descriptor completions and buffer completions. The driver also
supports features such as generic checksum offload and generic receive
offload (hardware GRO).
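
The difference between the two models is perhaps easiest to picture as a
data structure sketch. This is conceptual only, with made-up names; the
actual ring layouts live in idpf_txrx.h and the virtchnl2 descriptor
headers.

#include <linux/types.h>

/* Split queue model: buffer descriptors and completion descriptors live
 * in different rings, so completions can return out of order and several
 * queues can share a single peer ring.
 */
struct example_tx_complq {
        void *desc_ring;        /* completion descriptors only */
        u16 ntc;                /* next descriptor to clean */
};

struct example_txq {
        void *desc_ring;                  /* buffer descriptors only */
        struct example_tx_complq *complq; /* may be shared by several TX queues */
        u16 ntu;                          /* next descriptor to use */
};

/* Single queue model for comparison: one ring carries both the buffers
 * and their completions.
 */
struct example_singleq {
        void *desc_ring;
        u16 ntu;
        u16 ntc;
};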

v2 --> v3: link [2]
 * converted virtchnl2 defines to enums
 * fixed comment style in virtchnl2 to follow kernel-doc format
 * removed empty lines between the end of structs and the size check macro;
   checkpatch will mark these instances as CHECK
 * cleaned up unused Rx descriptor structs and related bits in virtchnl2
 * converted Rx descriptor bit offsets into bitmasks to better align with
   the use of GENMASK() and FIELD_GET() (a small example follows below)
 * added device ids to pci_tbl from the start
 * consolidated common probe and remove functions into idpf_probe() and
   idpf_remove() respectively
 * removed needless adapter NULL checks
 * removed devm_kzalloc() in favor of kzalloc(), including kfree() in the
   error and exit code paths
 * replaced kcalloc() calls where either size parameter was 1 with
   kzalloc(), as reported by smatch
 * used kmemdup() in some instances, as reported by coccicheck
 * added an explicit error code and a comment explaining the condition for
   the exit, to address a warning from smatch
 * moved build support to the last patch

[2] https://lore.kernel.org/netdev/20230411011354.2619359-1-pavan.kumar.linga@intel.com/
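
As a reference for the GENMASK()/FIELD_GET() conversion mentioned in the
list above, here is a small standalone example of that style; the field
name and bit positions are invented for illustration and are not taken
from virtchnl2_lan_desc.h.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* hypothetical 3-bit field occupying bits 12:10 of a descriptor qword */
#define EXAMPLE_RXD_QW1_PTYPE_M         GENMASK(12, 10)

static inline u32 example_rxd_ptype(u64 qword1)
{
        /* FIELD_GET() masks and shifts in one step, instead of
         * open-coding a separate shift count and width define.
         */
        return FIELD_GET(EXAMPLE_RXD_QW1_PTYPE_M, qword1);
}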

v1 --> v2: link [1]
 * removed the OASIS reference in the commit message to make it clear
   that this is an Intel vendor specific driver
 * fixed misspells
 * used comment starter "/**" for struct and definition headers in
   virtchnl header files
 * removed AVF reference
 * renamed APF reference to IDPF
 * added a comment to explain the reason for 'no flex field' at the end of
   virtchnl2_get_ptype_info struct
 * removed 'key[1]' in virtchnl2_rss_key struct as it is not used
 * set VIRTCHNL2_RXDID_2_FLEX_SQ_NIC to VIRTCHNL2_RXDID_2_FLEX_SPLITQ
   instead of assigning the same value
 * cleaned up an unnecessary NULL assignment to the rx_buf skb pointer since
   it is not used in the splitq model
 * added comments to clarify the generation bit usage in the splitq model
   (a conceptual sketch follows below)
 * introduced 'reuse_bias' in the page_info structure and made use of it
   in the hot path
 * fixed RCT format in idpf_rx_construct_skb
 * reported SPEED_UNKNOWN and DUPLEX_UNKNOWN when the link is down
 * fixed -Wframe-larger-than warning reported by lkp bot in
   idpf_vport_queue_ids_init
 * updated the documentation in idpf.rst to fix LKP bot warning

[1] https://lore.kernel.org/netdev/20230329140404.1647925-1-pavan.kumar.linga@intel.com/
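
For background on the generation bit comments mentioned in the list above,
here is a conceptual sketch of how a gen bit is commonly used to detect
valid descriptors in a completion ring. The names and layout are
illustrative only, not the idpf definitions.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_COMPL_GEN_M     BIT(15) /* hypothetical gen bit */

struct example_compl_desc {
        u16 qid_gen;            /* endianness handling omitted for brevity */
};

struct example_complq {
        struct example_compl_desc *ring;
        u16 ntc;                /* next descriptor to clean */
        u16 count;              /* ring size */
        bool gen;               /* generation expected on this pass */
};

/* The device writes each completion with the current generation value,
 * which effectively flips every time the ring wraps.  A descriptor is
 * "new" only while its gen bit matches the driver's expected value; when
 * the driver's clean index wraps, it toggles that expectation.
 */
static bool example_compl_is_new(const struct example_complq *cq)
{
        u16 val = cq->ring[cq->ntc].qid_gen;

        return !!FIELD_GET(EXAMPLE_COMPL_GEN_M, val) == cq->gen;
}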

Alan Brady (4):
  idpf: configure resources for TX queues
  idpf: configure resources for RX queues
  idpf: add RX splitq napi poll support
  idpf: add ethtool callbacks

Joshua Hay (5):
  idpf: add controlq init and reset checks
  idpf: add splitq start_xmit
  idpf: add TX splitq napi poll support
  idpf: add singleq start_xmit and napi poll
  idpf: configure SRIOV and add other ndo_ops

Pavan Kumar Linga (5):
  virtchnl: add virtchnl version 2 ops
  idpf: add core init and interrupt request
  idpf: add create vport and netdev configuration
  idpf: continue expanding init task
  idpf: initialize interrupts and enable vport

Phani Burra (1):
  idpf: add module register and probe functionality

 .../device_drivers/ethernet/intel/idpf.rst    |  162 +
 drivers/net/ethernet/intel/Kconfig            |   10 +
 drivers/net/ethernet/intel/Makefile           |    1 +
 drivers/net/ethernet/intel/idpf/Makefile      |   18 +
 drivers/net/ethernet/intel/idpf/idpf.h        |  736 +++
 .../net/ethernet/intel/idpf/idpf_controlq.c   |  644 +++
 .../net/ethernet/intel/idpf/idpf_controlq.h   |  131 +
 .../ethernet/intel/idpf/idpf_controlq_api.h   |  188 +
 .../ethernet/intel/idpf/idpf_controlq_setup.c |  175 +
 drivers/net/ethernet/intel/idpf/idpf_dev.c    |  165 +
 drivers/net/ethernet/intel/idpf/idpf_devids.h |   10 +
 .../net/ethernet/intel/idpf/idpf_ethtool.c    | 1330 +++++
 .../ethernet/intel/idpf/idpf_lan_pf_regs.h    |  124 +
 .../net/ethernet/intel/idpf/idpf_lan_txrx.h   |  293 +
 .../ethernet/intel/idpf/idpf_lan_vf_regs.h    |  128 +
 drivers/net/ethernet/intel/idpf/idpf_lib.c    | 2349 ++++++++
 drivers/net/ethernet/intel/idpf/idpf_main.c   |  266 +
 drivers/net/ethernet/intel/idpf/idpf_mem.h    |   20 +
 .../ethernet/intel/idpf/idpf_singleq_txrx.c   | 1251 +++++
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   | 4855 +++++++++++++++++
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  854 +++
 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c |  164 +
 .../net/ethernet/intel/idpf/idpf_virtchnl.c   | 3820 +++++++++++++
 drivers/net/ethernet/intel/idpf/virtchnl2.h   | 1317 +++++
 .../ethernet/intel/idpf/virtchnl2_lan_desc.h  |  448 ++
 25 files changed, 19459 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/ethernet/intel/idpf.rst
 create mode 100644 drivers/net/ethernet/intel/idpf/Makefile
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_dev.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_devids.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_ethtool.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_lan_pf_regs.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_lan_vf_regs.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_lib.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_main.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_mem.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_txrx.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_txrx.h
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
 create mode 100644 drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
 create mode 100644 drivers/net/ethernet/intel/idpf/virtchnl2.h
 create mode 100644 drivers/net/ethernet/intel/idpf/virtchnl2_lan_desc.h

Comments

Jakub Kicinski April 27, 2023, 2:46 a.m. UTC | #1
On Wed, 26 Apr 2023 19:09:02 -0700 Emil Tantilov wrote:
> This patch series introduces the Intel Infrastructure Data Path Function
> (IDPF) driver. It is used for both physical and virtual functions. Except
> for some of the device operations the rest of the functionality is the
> same for both PF and VF. IDPF uses virtchnl version2 opcodes and
> structures defined in the virtchnl2 header file which helps the driver
> to learn the capabilities and register offsets from the device
> Control Plane (CP) instead of assuming the default values.

This is not the right time to post patches, see below.

Please have Tony/Jesse take over posting of this code to the list
going forward. Intel has a history of putting upstream training on
the community, we're not going thru this again.


## Form letter - net-next-closed

The merge window for v6.3 has begun and therefore net-next is closed
for new drivers, features, code refactoring and optimizations.
We are currently accepting bug fixes only.

Please repost when net-next reopens after May 8th.

RFC patches sent for review only are obviously welcome at any time.

See: https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#development-cycle
Tantilov, Emil S April 27, 2023, 2:55 a.m. UTC | #2
On 4/26/2023 7:46 PM, Jakub Kicinski wrote:
> On Wed, 26 Apr 2023 19:09:02 -0700 Emil Tantilov wrote:
>> This patch series introduces the Intel Infrastructure Data Path Function
>> (IDPF) driver. It is used for both physical and virtual functions. Except
>> for some of the device operations the rest of the functionality is the
>> same for both PF and VF. IDPF uses virtchnl version2 opcodes and
>> structures defined in the virtchnl2 header file which helps the driver
>> to learn the capabilities and register offsets from the device
>> Control Plane (CP) instead of assuming the default values.
> 
> This is not the right time to post patches, see below.
> 
> Please have Tony/Jesse take over posting of this code to the list
> going forward. Intel has a history of putting upstream training on
> the community, we're not going thru this again.
> 
> 
> ## Form letter - net-next-closed
> 
> The merge window for v6.3 has begun and therefore net-next is closed
> for new drivers, features, code refactoring and optimizations.
> We are currently accepting bug fixes only.
> 
> Please repost when net-next reopens after May 8th.
> 
> RFC patches sent for review only are obviously welcome at any time.
> 
> See: https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#development-cycle

The v3 series is primarily intended for review on IWL (sent to
intel-wired-lan, with netdev cc-ed) as a follow-up to the feedback we
received on v2.

Was I not supposed to cc netdev in the quiet period?

Thanks,
Emil
Jakub Kicinski April 27, 2023, 3:29 a.m. UTC | #3
On Wed, 26 Apr 2023 19:55:06 -0700 Tantilov, Emil S wrote:
> The v3 series are primarily for review on IWL (to intel-wired-lan, 
> netdev cc-ed) as follow up for the feedback we received on v2.

Well, you put net-next in the subject.

> Was I not supposed to cc netdev in the quiet period?

That's what you got from my previous email? Did you read it?
The answer was there :|

The community volunteers can't be expected to help teach every team of
every vendor the process. That doesn't scale and leads to maintainer
frustration. You have a team at Intel which is strongly engaged
upstream (Jesse, Jake K, Maciej F, Alex L, Tony etc.) - I'd much rather
interface with them.

Jesse, does it sound workable to you? What do you have in mind in terms
of the process long term if/once this driver gets merged?
Jesse Brandeburg April 27, 2023, 10:23 p.m. UTC | #4
On 4/26/2023 8:29 PM, Jakub Kicinski wrote:
> On Wed, 26 Apr 2023 19:55:06 -0700 Tantilov, Emil S wrote:
>> The v3 series are primarily for review on IWL (to intel-wired-lan,
>> netdev cc-ed) as follow up for the feedback we received on v2.
>
> Well, you put net-next in the subject.

We tried to convey intent via the To: and CC: lists. This review is
continuing across multiple merge windows, and we had previously been
sending with net-next in the Subject and continued in that vein, so we
intended to convey the "request for continued review" via the headers.
We didn't mean to violate the "net-next is closed! Don't send patches
with the Subject net-next!" rule.

I reviewed these patches but didn't block Emil from sending v3 (right
now vs waiting until net-next opens).

from the other reply:
> RFC patches sent for review only are obviously welcome at any time.

In the past, we developed an allergy to using RFC when we wanted comments
back, as patches marked RFC had sometimes been ignored and then heavily
commented upon/rejected once sent as a "real submittal". This may no
longer be the case, and if so, we need to adjust our expectations and
would be glad to do so. In this case, it didn't feel right to switch a
series from "in-review" to RFC on v3.

> Jesse, does it sound workable to you? What do you have in mind in terms
> of the process long term if/once this driver gets merged?

Sorry for the thrash on this one.

We propose doing these two things in the future:
1) to: intel-wired-lan, cc: netdev until we've addressed review comments
2) use [iwl-next] or [iwl-net] in the Subject: when reviewing on
intel-wired-lan, and cc: netdev, to make the intent clear in both the
headers and the Subject line.

There are two discussions here:
1) We can solve the "net-next subject" vs cc: netdev issue via my proposal
above; I would appreciate your feedback.
2) Long term, this driver will join the "normal flow" of individual
patch series that are sent to intel-wired-lan and cc: netdev, but I'd
like those sent from Intel non-maintainers to always use [iwl-next] or
[iwl-net], and Tony or I will provide series to: maintainers, cc: netdev
with the Subject: [net-next] or [net].
Jakub Kicinski May 3, 2023, 2:20 a.m. UTC | #5
On Thu, 27 Apr 2023 15:23:12 -0700 Jesse Brandeburg wrote:
> > Jesse, does it sound workable to you? What do you have in mind in terms
> > of the process long term if/once this driver gets merged?  
> 
> Sorry for the thrash on this one.
> 
> We have a proposal by doing these two things in the future:
> 1) to: intel-wired-lan, cc: netdev until we've addressed review comments
> 2) use [iwl-next ] or [iwl-net] in the Subject: when reviewing on
> intel-wired-lan, and cc:netdev, to make clear the intent in both headers
> and Subject line.
> 
> There are two discussions here
> 1) we can solve the "net-next subject" vs cc:netdev via my proposal
> above, would appreciate your feedback.
> 2) Long term, this driver will join the "normal flow" of individual
> patch series that are sent to intel-wired-lan and cc:netdev, but I'd
> like those that are sent from Intel non-maintainers to always use
> [iwl-next], [iwl-net], and Tony or I will provide series to:
> maintainers, cc:netdev with the Subject: [net-next] or [net]

Sounds like a good scheme, let's try it!
Thanks!
Jesse Brandeburg May 3, 2023, 4:24 p.m. UTC | #6
On 5/2/2023 7:20 PM, Jakub Kicinski wrote:
> On Thu, 27 Apr 2023 15:23:12 -0700 Jesse Brandeburg wrote:
>> We have a proposal by doing these two things in the future:
>> 1) to: intel-wired-lan, cc: netdev until we've addressed review comments
>> 2) use [iwl-next ] or [iwl-net] in the Subject: when reviewing on
>> intel-wired-lan, and cc:netdev, to make clear the intent in both headers
>> and Subject line.
> 
> Sounds like a good scheme, let's try it!
> Thanks!

Ok, we'll start implementing.