
[v10,00/20] dlb: introduce DLB device driver

Message ID 20210210175423.1873-1-mike.ximing.chen@intel.com (mailing list archive)
Series dlb: introduce DLB device driver

Message

Chen, Mike Ximing Feb. 10, 2021, 5:54 p.m. UTC
---------------------------------------------------------
This is a device driver for a new HW IPC accelerator. It was submitted
to the linux-kernel mailing list. Per Greg's (maintainer of drivers/misc)
suggestion (see below), we would like to get the patch set reviewed/acked by the
networking driver community. Thanks.

>As this is a networking related thing, I would like you to get the
>proper reviews/acks from the networking maintainers before I can take
>this.
>
>Or, if they think it has nothing to do with networking, that's fine too,
>but please do not try to route around them.
>
>thanks,
>
>greg k-
---------------------------------------------------------

Introduce a new misc device driver for the Intel(r) Dynamic Load Balancer
(Intel(r) DLB). The Intel DLB is a PCIe device that provides
load-balanced, prioritized scheduling of core-to-core communication.

Intel DLB is an accelerator for the event-driven programming model of
DPDK's Event Device Library[2]. The library is used in packet processing
pipelines that arrange for multi-core scalability, dynamic load-balancing,
and a variety of packet distribution and synchronization schemes.

These distribution schemes include "parallel" (packets are load-balanced
across multiple cores and processed in parallel), "ordered" (similar to
"parallel" but packets are reordered into ingress order by the device), and
"atomic" (packet flows are scheduled to a single core at a time such that
locks are not required to access per-flow data, and dynamically migrated to
ensure load-balance).

This submission supports Intel DLB 2.0 only.

The Intel DLB consists of queues and arbiters that connect producer
cores and consumer cores. The device implements load-balanced queueing
features including:
- Lock-free multi-producer/multi-consumer operation.
- Multiple priority levels for varying traffic types.
- 'Direct' traffic (i.e., multi-producer/single-consumer).
- Simple unordered load-balanced distribution.
- Atomic lock-free load balancing across multiple consumers.
- A queue element reordering feature allowing ordered load-balanced
  distribution.

The fundamental unit of communication through the device is a queue entry
(QE), which consists of 8B of data and 8B of metadata (destination queue,
priority, etc.). The data field can be any type that fits within 8B.
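
For illustration only (the real QE layout is defined by the hardware and by
include/uapi/linux/dlb.h in this series; the field names below are made up),
a QE can be pictured roughly as:

/* Rough 16-byte picture of a queue entry; illustrative, not the real layout. */
struct dlb_qe_sketch {
        __u64 data;             /* 8B payload, opaque to the device */
        __u16 flow_id;          /* flow identifier used for atomic scheduling */
        __u8  queue_id;         /* destination device-managed queue */
        __u8  sched_type;       /* atomic / ordered / unordered / directed */
        __u8  priority;
        __u8  rsvd[3];          /* remaining metadata/control bits */
};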

A core's interface to the device, a "port," consists of a memory-mappable
region through which the core enqueues a queue entry, and an in-memory
queue (the "consumer queue") to which the device schedules QEs. Each QE
is enqueued to a device-managed queue, and from there scheduled to a port.
Software specifies the "linking" of queues and ports; i.e. which ports the
device is allowed to schedule to for a given queue. The device uses a
credit scheme to prevent overflow of the on-device queue storage.

Applications can interface directly with the device by mapping the port's
memory and MMIO regions into the application's address space for enqueue
and dequeue operations, but call into the kernel driver for configuration
operations. An application can be either polling- or interrupt-driven;
Intel DLB supports both modes of operation.

Device resources -- i.e. ports, queues, and credits -- are contained within
a scheduling domain. Scheduling domains are isolated from one another; a
port can only enqueue to and dequeue from queues within its scheduling
domain. A scheduling domain's resources are configured through a scheduling
domain file, which is acquired through an ioctl.
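
As a user-space sketch of that flow (the ioctl name, argument struct, and the
way the domain fd is returned are placeholders here; the real uapi is defined
in include/uapi/linux/dlb.h in this series):

#include <fcntl.h>
#include <sys/ioctl.h>

/* Placeholder names; see include/uapi/linux/dlb.h for the real uapi. */
int create_domain(struct dlb_create_sched_domain_args *args)
{
        int dev_fd, dom_fd;

        dev_fd = open("/dev/dlb0", O_RDWR);     /* device node name illustrative */
        if (dev_fd < 0)
                return -1;

        /* Device-wide ioctl: carve out a scheduling domain... */
        if (ioctl(dev_fd, DLB_IOC_CREATE_SCHED_DOMAIN, args) < 0)
                return -1;

        /* ...which hands back a domain fd (assumed here to be in args). */
        dom_fd = args->domain_fd;

        /* Domain-scoped ioctls (create queues/ports, link them, ...) use dom_fd. */
        return dom_fd;
}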

Intel DLB supports SR-IOV and Scalable IOV, and allows for a flexible
division of its resources among the PF and its virtual devices. The virtual
devices are incapable of configuring the device directly; they use a
hardware mailbox to proxy configuration requests to the PF driver. This
driver supports both PF and virtual devices, as there is significant code
re-use between the two, with device-specific behavior handled through a
callback interface.  Virtualization support will be added in a later patch
set.

The dlb driver uses ioctls as its primary interface (it makes use of sysfs
as well, to a lesser extent). The dlb device file supports a different
ioctl interface than the scheduling domain file; the dlb device file
is used for device-wide operations (including scheduling domain creation),
and the scheduling domain file supports operations on the scheduling
domain's resources (primarily resource configuration). Scheduling domains
are created dynamically (using a dlb device file ioctl) by user-space
software, and the scheduling domain file is created from an anonymous file
that is installed in the ioctl's calling process's file descriptor table.
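
For reference, a minimal sketch of the anonymous-fd pattern used to hand the
scheduling domain file back to user space (struct and fops names are
illustrative, not necessarily the exact ones in the patches; needs
<linux/anon_inodes.h> and <linux/file.h>):

static int dlb_domain_get_fd(struct dlb_domain *domain)
{
        struct file *file;
        int fd;

        fd = get_unused_fd_flags(O_RDWR);
        if (fd < 0)
                return fd;

        /* dlb_domain_fops: the domain file's file_operations, defined elsewhere */
        file = anon_inode_getfile("[dlb_domain]", &dlb_domain_fops,
                                  domain, O_RDWR);
        if (IS_ERR(file)) {
                put_unused_fd(fd);
                return PTR_ERR(file);
        }

        fd_install(fd, file);   /* appears in the caller's fd table */
        return fd;              /* handed back through the ioctl */
}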

[1] https://builders.intel.com/docs/networkbuilders/SKU-343247-001US-queue-management-and-load-balancing-on-intel-architecture.pdf
[2] https://doc.dpdk.org/guides/prog_guide/eventdev.html

v10:
- Addressed an issue reported by kernel test robot <lkp@intel.com>
-- Add "WITH Linux-syscall-note" to the SPDX-License-Identifier in uapi
   header file dlb.h.

v9:
- Addressed all of Greg's feedback on v8, including
-- Remove function name (__func__) from dev_err() messages, which could spam the log.
-- Replace the list and function-pointer calls in dlb_ioctl() with a switch-case
   statement and direct function calls for the ioctls.
-- Drop the compat_ptr_ioctl in dlb_ops (struct file_operations).
-- Change ioctl magic number for DLB to unused 0x81 (from 'h').
-- Remove all placeholder/dummy functions in the patch set.
-- Re-arrange the comments in dlb.h so that the order is consistent with that
   of the data structures they refer to.
-- Correct the comments on SPDX License and DLB versions in dlb.h.
-- Replace BIT_SET() and BITS_CLR() macros with direct coding.
-- Remove NULL pointer checking (f->private_data) in dlb_ioctl().
-- Use whole line whenever possible and not wrapping lines unnecessarily.
-- Remove __attribute__((unused)).
-- Merge dlb_ioctl.h and dlb_file.h into dlb_main.h

v8:
- Add a functional block diagram in dlb.rst 
- Modify change logs to reflect the links between patches and DPDK
  eventdev library.
- Add a check of power-of-2 for CQ depth.
- Move call to INIT_WORK() to dlb_open().
- Clean dlb workqueue by calling flush_scheduled_work().
- Add unmap_mapping_range() in dlb_port_close().

v7 (Intel internal version):
- Address all of Dan's feedback, including
-- Drop DLB 2.0 throughout the patch set, use DLB only.
-- Fix license and copyright statements
-- Use pcim_enable_device() and pcim_iomap_regions(), instead of
   unmanaged version.
-- Move cdev_add() to dlb_init() and add all devices at once.
-- Fix Makefile, using "+=" style.
-- Remove FLR description and mention movdir64/enqcmd usage in doc.
-- Make the permission for the domain same as that for device for
   ioctl access.
-- Use idr instead of ida.
-- Add a lock in dlb_close() to prevent driver unbinding while ioctl
   commands are in progress.
-- Remove wrappers that are used for code sharing between kernel driver
   and DPDK. 
- Address Pierre-Louis' feedback, including
-- Clean up the warnings from checkpatch
-- Fix the warnings from "make W=1"

v6 (Intel internal version):
- Change the module name to dlb (from dlb2), which currently supports Intel
  DLB 2.0 only.
- Address all of Pierre-Louis' feedback on v5, including
-- Consolidate the two near-identical for loops in dlb2_release_domain_memory().
-- Remove an unnecessary "port = NULL" initialization
-- Consistently use curly braces on the *_LIST_FOR macros
   when the for-loop contents spans multiple lines.
-- Add a comment to the definition of DLB2FS_MAGIC
-- Remove always-true if statements
-- Move the get_cos_bw mutex unlock call earlier to shorten the critical
   section.
- Address all of Dan's feedback, including
-- Replace the unions for register bits access with bitmask and shifts
-- Centralize the "to/from" user memory copies for ioctl functions.
-- Review ioctl design against Documentation/process/botching-up-ioctls.rst
-- Remove wrapper functions for memory barriers.
-- Use ilog2() to simplify a switch code block.
-- Add base-commit to cover letter.

v5 (Intel internal version):
- Reduce the scope of the initial patch set (drop the last 8 patches)
- Further decompose some of the remaining patches into multiple patches.
- Address all of Pierre-Louis' feedback, including:
-- Move kerneldoc to *.c files
-- Fix SPDX comment style
-- Add BAR macros
-- Improve/clarify struct dlb2_dev and struct device variable naming
-- Add const where missing
-- Clarify existing comments and add new ones in various places
-- Remove unnecessary memsets and zero-initialization
-- Remove PM abstraction, fix missing pm_runtime_allow(), and don't
   update PM refcnt when port files are opened and closed.
-- Convert certain ternary operations into if-statements
-- Out-line the CQ depth valid check
-- De-duplicate the logic in dlb2_release_device_memory()
-- Limit use of devm functions to allocating/freeing struct dlb2
- Address Ira's comments on dlb2.rst and correct commit messages that
  don't use the imperative voice.

v4:
- Move PCI device ID definitions into dlb2_hw_types.h, drop the VF definition
- Remove dlb2_dev_list
- Remove open/close functions and fops structure (unused)
- Remove "(char *)" cast from PCI driver name
- Unwind init failures properly
- Remove ID alloc helper functions and call IDA interfaces directly instead

v3:
- Remove DLB2_PCI_REG_READ/WRITE macros

v2:
- Change driver license to GPLv2 only
- Expand Kconfig help text and remove unnecessary (R)s
- Remove unnecessary prints
- Add a new entry in ioctl-number.rst
- Convert the ioctl handler into a switch statement
- Correct some instances of IOWR that should have been IOR
- Align macro blocks
- Don't break ioctl ABI when introducing new commands
- Remove indirect pointers from ioctl data structures
- Remove the get-sched-domain-fd ioctl command

Mike Ximing Chen (20):
  dlb: add skeleton for DLB driver
  dlb: initialize device
  dlb: add resource and device initialization
  dlb: add device ioctl layer and first three ioctls
  dlb: add scheduling domain configuration
  dlb: add domain software reset
  dlb: add low-level register reset operations
  dlb: add runtime power-management support
  dlb: add queue create, reset, get-depth ioctls
  dlb: add register operations for queue management
  dlb: add ioctl to configure ports and query poll mode
  dlb: add register operations for port management
  dlb: add port mmap support
  dlb: add start domain ioctl
  dlb: add queue map, unmap, and pending unmap operations
  dlb: add port map/unmap state machine
  dlb: add static queue map register operations
  dlb: add dynamic queue map register operations
  dlb: add queue unmap register operations
  dlb: queue map/unmap workqueue

 Documentation/misc-devices/dlb.rst            |  259 +
 Documentation/misc-devices/index.rst          |    1 +
 .../userspace-api/ioctl/ioctl-number.rst      |    1 +
 MAINTAINERS                                   |    8 +
 drivers/misc/Kconfig                          |    1 +
 drivers/misc/Makefile                         |    1 +
 drivers/misc/dlb/Kconfig                      |   18 +
 drivers/misc/dlb/Makefile                     |   11 +
 drivers/misc/dlb/dlb_bitmap.h                 |  210 +
 drivers/misc/dlb/dlb_file.c                   |  149 +
 drivers/misc/dlb/dlb_hw_types.h               |  311 +
 drivers/misc/dlb/dlb_ioctl.c                  |  498 ++
 drivers/misc/dlb/dlb_main.c                   |  614 ++
 drivers/misc/dlb/dlb_main.h                   |  178 +
 drivers/misc/dlb/dlb_pf_ops.c                 |  277 +
 drivers/misc/dlb/dlb_regs.h                   | 3640 +++++++++++
 drivers/misc/dlb/dlb_resource.c               | 5469 +++++++++++++++++
 drivers/misc/dlb/dlb_resource.h               |   94 +
 include/uapi/linux/dlb.h                      |  602 ++
 19 files changed, 12342 insertions(+)
 create mode 100644 Documentation/misc-devices/dlb.rst
 create mode 100644 drivers/misc/dlb/Kconfig
 create mode 100644 drivers/misc/dlb/Makefile
 create mode 100644 drivers/misc/dlb/dlb_bitmap.h
 create mode 100644 drivers/misc/dlb/dlb_file.c
 create mode 100644 drivers/misc/dlb/dlb_hw_types.h
 create mode 100644 drivers/misc/dlb/dlb_ioctl.c
 create mode 100644 drivers/misc/dlb/dlb_main.c
 create mode 100644 drivers/misc/dlb/dlb_main.h
 create mode 100644 drivers/misc/dlb/dlb_pf_ops.c
 create mode 100644 drivers/misc/dlb/dlb_regs.h
 create mode 100644 drivers/misc/dlb/dlb_resource.c
 create mode 100644 drivers/misc/dlb/dlb_resource.h
 create mode 100644 include/uapi/linux/dlb.h


base-commit: e71ba9452f0b5b2e8dc8aa5445198cd9214a6a62

Comments

Greg KH March 10, 2021, 9:02 a.m. UTC | #1
On Wed, Feb 10, 2021 at 11:54:03AM -0600, Mike Ximing Chen wrote:
> Intel DLB is an accelerator for the event-driven programming model of
> DPDK's Event Device Library[2]. The library is used in packet processing
> pipelines that arrange for multi-core scalability, dynamic load-balancing,
> and variety of packet distribution and synchronization schemes

The more that I look at this driver, the more I think this is a "run
around" the networking stack.  Why are you all adding kernel code to
support DPDK which is an out-of-kernel networking stack?  We can't
support that at all.

Why not just use the normal networking functionality instead of this
custom char-device-node-monstrosity?

What is missing from today's kernel networking code that requires this
run-around?

thanks,

greg k-h
Dan Williams March 12, 2021, 7:18 a.m. UTC | #2
On Wed, Mar 10, 2021 at 1:02 AM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Wed, Feb 10, 2021 at 11:54:03AM -0600, Mike Ximing Chen wrote:
> > Intel DLB is an accelerator for the event-driven programming model of
> > DPDK's Event Device Library[2]. The library is used in packet processing
> > pipelines that arrange for multi-core scalability, dynamic load-balancing,
> > and variety of packet distribution and synchronization schemes
>
> The more that I look at this driver, the more I think this is a "run
> around" the networking stack.  Why are you all adding kernel code to
> support DPDK which is an out-of-kernel networking stack?  We can't
> support that at all.
>
> Why not just use the normal networking functionality instead of this
> custom char-device-node-monstrosity?

Hey Greg,

I've come to find out that this driver does not bypass kernel
networking, and the kernel functionality I thought it bypassed, IPC /
Scheduling, is not even in the picture in the non-accelerated case. So
given you and I are both confused by this submission that tells me
that the problem space needs to be clarified and assumptions need to
be enumerated.

> What is missing from todays kernel networking code that requires this
> run-around?

Yes, first and foremost Mike, what are the kernel infrastructure gaps
and pain points that led up to this proposal?
Chen, Mike Ximing March 12, 2021, 9:55 p.m. UTC | #3
> -----Original Message-----
> From: Dan Williams <dan.j.williams@intel.com>
> Sent: Friday, March 12, 2021 2:18 AM
> To: Greg KH <gregkh@linuxfoundation.org>
> Cc: Chen, Mike Ximing <mike.ximing.chen@intel.com>; Netdev <netdev@vger.kernel.org>; David Miller
> <davem@davemloft.net>; Jakub Kicinski <kuba@kernel.org>; Arnd Bergmann <arnd@arndb.de>; Pierre-
> Louis Bossart <pierre-louis.bossart@linux.intel.com>
> Subject: Re: [PATCH v10 00/20] dlb: introduce DLB device driver
> 
> On Wed, Mar 10, 2021 at 1:02 AM Greg KH <gregkh@linuxfoundation.org> wrote:
> >
> > On Wed, Feb 10, 2021 at 11:54:03AM -0600, Mike Ximing Chen wrote:
> > > Intel DLB is an accelerator for the event-driven programming model of
> > > DPDK's Event Device Library[2]. The library is used in packet processing
> > > pipelines that arrange for multi-core scalability, dynamic load-balancing,
> > > and variety of packet distribution and synchronization schemes
> >
> > The more that I look at this driver, the more I think this is a "run
> > around" the networking stack.  Why are you all adding kernel code to
> > support DPDK which is an out-of-kernel networking stack?  We can't
> > support that at all.
> >
> > Why not just use the normal networking functionality instead of this
> > custom char-device-node-monstrosity?
> 
> Hey Greg,
> 
> I've come to find out that this driver does not bypass kernel
> networking, and the kernel functionality I thought it bypassed, IPC /
> Scheduling, is not even in the picture in the non-accelerated case. So
> given you and I are both confused by this submission that tells me
> that the problem space needs to be clarified and assumptions need to
> be enumerated.
> 
> > What is missing from todays kernel networking code that requires this
> > run-around?
> 
> Yes, first and foremost Mike, what are the kernel infrastructure gaps
> and pain points that led up to this proposal?

Hi Greg/Dan,

Sorry for the confusion. The cover letter and document did not articulate 
clearly the problem being solved by DLB. We will update the document in
the next revision.

In a brief description, Intel DLB is an accelerator that replaces shared-memory
queuing systems. Large modern server-class CPUs, with local caches
for each core, tend to incur costly cache misses, cross-core snoops,
and contention. The impact becomes noticeable at high message rates
(messages/sec), such as those seen in high-throughput packet processing and HPC
applications. DLB is used in high-rate pipelines that require a variety of packet
distribution and synchronization schemes. It can be leveraged to accelerate
user-space libraries, such as DPDK eventdev. It could show similar benefits in
kernel frameworks such as padata, if the messaging rate is sufficiently
high. As can be seen in the following diagram, DLB operations come into the
picture only after packets are received by the Rx core from the networking
devices. WCs are the worker cores which process packets distributed by DLB.
(In case the diagram gets mis-formatted, please see the attached file.)


                              WC1              WC4
 +-----+   +----+   +---+  /      \  +---+  /      \  +---+   +----+   +-----+
 |NIC  |   |Rx  |   |DLB| /        \ |DLB| /        \ |DLB|   |Tx  |   |NIC  |
 |Ports|---|Core|---|   |-----WC2----|   |-----WC5----|   |---|Core|---|Ports|
 +-----+   -----+   +---+ \        / +---+ \        / +---+   +----+   ------+
                           \      /         \      /
                              WC3              WC6 

At its heart, DLB consists of resources that can be assigned to
VDEVs/applications in a flexible manner, such as ports, queues, credits to use
queues, sequence numbers, etc. We support up to 16/32 VFs/VDEVs (depending
on the version) with SR-IOV and SIOV. The role of the kernel driver includes VDEV
composition (vdcm module), function-level reset, live migration, error
handling, power management, etc.

Thanks
Mike
Dan Williams March 13, 2021, 1:39 a.m. UTC | #4
On Fri, Mar 12, 2021 at 1:55 PM Chen, Mike Ximing
<mike.ximing.chen@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Dan Williams <dan.j.williams@intel.com>
> > Sent: Friday, March 12, 2021 2:18 AM
> > To: Greg KH <gregkh@linuxfoundation.org>
> > Cc: Chen, Mike Ximing <mike.ximing.chen@intel.com>; Netdev <netdev@vger.kernel.org>; David Miller
> > <davem@davemloft.net>; Jakub Kicinski <kuba@kernel.org>; Arnd Bergmann <arnd@arndb.de>; Pierre-
> > Louis Bossart <pierre-louis.bossart@linux.intel.com>
> > Subject: Re: [PATCH v10 00/20] dlb: introduce DLB device driver
> >
> > On Wed, Mar 10, 2021 at 1:02 AM Greg KH <gregkh@linuxfoundation.org> wrote:
> > >
> > > On Wed, Feb 10, 2021 at 11:54:03AM -0600, Mike Ximing Chen wrote:
> > > > Intel DLB is an accelerator for the event-driven programming model of
> > > > DPDK's Event Device Library[2]. The library is used in packet processing
> > > > pipelines that arrange for multi-core scalability, dynamic load-balancing,
> > > > and variety of packet distribution and synchronization schemes
> > >
> > > The more that I look at this driver, the more I think this is a "run
> > > around" the networking stack.  Why are you all adding kernel code to
> > > support DPDK which is an out-of-kernel networking stack?  We can't
> > > support that at all.
> > >
> > > Why not just use the normal networking functionality instead of this
> > > custom char-device-node-monstrosity?
> >
> > Hey Greg,
> >
> > I've come to find out that this driver does not bypass kernel
> > networking, and the kernel functionality I thought it bypassed, IPC /
> > Scheduling, is not even in the picture in the non-accelerated case. So
> > given you and I are both confused by this submission that tells me
> > that the problem space needs to be clarified and assumptions need to
> > be enumerated.
> >
> > > What is missing from todays kernel networking code that requires this
> > > run-around?
> >
> > Yes, first and foremost Mike, what are the kernel infrastructure gaps
> > and pain points that led up to this proposal?
>
> Hi Greg/Dan,
>
> Sorry for the confusion. The cover letter and document did not articulate
> clearly the problem being solved by DLB. We will update the document in
> the next revision.

I'm not sure this answers Greg's question about what is missing from
today's kernel implementation?

> In a brief description, Intel DLB is an accelerator that replaces shared memory
> queuing systems. Large modern server-class CPUs,  with local caches
> for each core, tend to incur costly cache misses, cross core snoops
> and contentions.  The impact becomes noticeable at high (messages/sec)
> rates, such as are seen in high throughput packet processing and HPC
> applications. DLB is used in high rate pipelines that require a variety of packet
> distribution & synchronization schemes.  It can be leveraged to accelerate
> user space libraries, such as DPDK eventdev. It could show similar benefits in
> frameworks such as PADATA in the Kernel - if the messaging rate is sufficiently
> high.

Where is PADATA limited by distribution and synchronization overhead?
It's meant for parallelizable work that has minimal communication
between the work units; ordering is about its only synchronization
overhead, not messaging. It's used for ipsec crypto and page init.
Even potential future bulk work usages that might benefit from PADATA,
like md-raid, ksm, or kcopyd, do not have any messaging overhead.

> As can be seen in the following diagram,  DLB operations come into the
> picture only after packets are received by Rx core from the networking
> devices. WCs are the worker cores which process packets distributed by DLB.
> (In case the diagram gets mis-formatted,  please see attached file).
>
>
>                               WC1              WC4
>  +-----+   +----+   +---+  /      \  +---+  /      \  +---+   +----+   +-----+
>  |NIC  |   |Rx  |   |DLB| /        \ |DLB| /        \ |DLB|   |Tx  |   |NIC  |
>  |Ports|---|Core|---|   |-----WC2----|   |-----WC5----|   |---|Core|---|Ports|
>  +-----+   -----+   +---+ \        / +---+ \        / +---+   +----+   ------+
>                            \      /         \      /
>                               WC3              WC6
>
> At its heart DLB consists of resources than can be assigned to
> VDEVs/applications in a flexible manner, such as ports, queues, credits to use
> queues, sequence numbers, etc.

All of those objects are managed in userspace today in the unaccelerated case?

> We support up to 16/32 VF/VDEVs (depending
> on version) with SRIOV and SIOV. Role of the kernel driver includes VDEV
> Composition (vdcm module), functional level reset, live migration, error
> handling, power management, and etc..

Need some more specificity here. What about those features requires
the kernel to get involved with a DLB2 specific ABI to manage ports,
queues, credits, sequence numbers, etc...?
Chen, Mike Ximing March 15, 2021, 8:04 p.m. UTC | #5
> From: Dan Williams <dan.j.williams@intel.com>
> On Fri, Mar 12, 2021 at 1:55 PM Chen, Mike Ximing <mike.ximing.chen@intel.com> wrote:
> >
> > In a brief description, Intel DLB is an accelerator that replaces
> > shared memory queuing systems. Large modern server-class CPUs,  with
> > local caches for each core, tend to incur costly cache misses, cross
> > core snoops and contentions.  The impact becomes noticeable at high
> > (messages/sec) rates, such as are seen in high throughput packet
> > processing and HPC applications. DLB is used in high rate pipelines
> > that require a variety of packet distribution & synchronization
> > schemes.  It can be leveraged to accelerate user space libraries, such
> > as DPDK eventdev. It could show similar benefits in frameworks such as
> > PADATA in the Kernel - if the messaging rate is sufficiently high.
> 
> Where is PADATA limited by distribution and synchronization overhead?
> It's meant for parallelizable work that has minimal communication between the work units, ordering is
> about it's only synchronization overhead, not messaging. It's used for ipsec crypto and page init.
> Even potential future bulk work usages that might benefit from PADATA like like md-raid, ksm, or kcopyd
> do not have any messaging overhead.
> 
In our PADATA investigation, the improvements come primarily from the ordering
overhead. Parallel scheduling is offloaded to a DLB ordered parallel queue,
and serialization (re-ordering) is offloaded to a DLB directed queue.
We see significant throughput increases in crypto tests using tcrypt. In our
test configuration, preliminary results show that the DLB-accelerated case
encrypts at 2.4x (packets/s) and decrypts at 2.6x of the unaccelerated case.
Chen, Mike Ximing March 15, 2021, 8:08 p.m. UTC | #6
> From: Dan Williams <dan.j.williams@intel.com>
> On Fri, Mar 12, 2021 at 1:55 PM Chen, Mike Ximing <mike.ximing.chen@intel.com> wrote:
> >
> > At its heart DLB consists of resources than can be assigned to
> > VDEVs/applications in a flexible manner, such as ports, queues,
> > credits to use queues, sequence numbers, etc.
> 
> All of those objects are managed in userspace today in the unaccelerated case?
> 

Yes, in the unaccelerated case, the software queue manager is generally
implemented in user space (except for cases like padata), so the resources are
managed in user space as well. With a hardware DLB module, these resources
will be managed by the kernel driver for VF and VDEV support.

Thanks
Mike
Chen, Mike Ximing March 15, 2021, 8:18 p.m. UTC | #7
> From: Dan Williams <dan.j.williams@intel.com>
> On Fri, Mar 12, 2021 at 1:55 PM Chen, Mike Ximing <mike.ximing.chen@intel.com> wrote:
> >
> > We support up to 16/32 VF/VDEVs (depending on version) with SRIOV and
> > SIOV. Role of the kernel driver includes VDEV Composition (vdcm
> > module), functional level reset, live migration, error handling, power
> > management, and etc..
> 
> Need some more specificity here. What about those features requires the kernel to get involved with a
> DLB2 specific ABI to manage ports, queues, credits, sequence numbers, etc...?

Role of the dlb kernel driver:

VDEV Composition
For example, writing 1024 to the VDEV_CREDITS[0] register will allocate 1024 credits to VDEV 0. In this way, VFs or VDEVs can be composed as mini-versions of the full device.
VDEV composition will leverage vfio-mdev to create the VDEV devices while the KMD will implement the VDCM.

Dynamic Composition
Such composition can be dynamic – the PF/VF interface supports scenarios whereby, for example, an application may wish to boost its credit allocation – can I have 100 more credits?

Functional Level Reset
Much of the internal storage is RAM-based and not resettable by hardware schemes. There are also internal SRAM-based control structures (BCAM) that have to be flushed.
The planned way to do this is, roughly:
  -- Kernel driver disables access from the associated ports (to prevent any SW access; the application should be dead, so this is a precaution).
  -- Kernel masquerades as the application to drain all data from internal queues. It can poll some internal counters to verify everything is fully drained.
  -- Only at this point can the resources associated with the VDEV be returned to the pool of available resources for handing to another application/VDEV.

Migration
The requirement is fairly similar to FLR. A VDEV has to be manually drained and reconstituted on another server; the kernel driver is responsible on both sides.

Error Handling
Errors include “Credit Excursions” where a VDEV attempts to use more of the internal capacity (credits) than has been allocated. In such a case, 
the data is dropped and an interrupt generated. All such interrupts are directed to the PF driver, which may simply forward them to a VF (via the PF/VF comms mechanism).

Power Management
The kernel driver keeps the device in D3Hot when not in use. The driver transitions the device to D0 when the first device file is opened or a VF or VDEV is created, 
and keeps it in that state until there are no open device files, memory mappings, or VFs/VDEVs.
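
A minimal sketch of that open/close pattern, assuming standard runtime PM
(struct layout and names illustrative; needs <linux/pm_runtime.h>):

static int dlb_open(struct inode *i, struct file *f)
{
        struct dlb *dlb = container_of(i->i_cdev, struct dlb, cdev);
        int ret;

        /* First opener brings the device from D3hot back to D0. */
        ret = pm_runtime_resume_and_get(&dlb->pdev->dev);
        if (ret < 0)
                return ret;

        f->private_data = dlb;
        return 0;
}

static int dlb_close(struct inode *i, struct file *f)
{
        struct dlb *dlb = f->private_data;

        /* When the last user goes away, runtime PM can return to D3hot. */
        pm_runtime_put(&dlb->pdev->dev);
        return 0;
}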

Ioctl interface
The kernel driver provides an ioctl interface for user applications to set up and configure dlb domains, ports, queues, scheduling types, credits,
sequence numbers, and links between ports and queues. Applications also use the interface to start, stop, and query dlb operations.

Thanks
Mike
Greg KH March 16, 2021, 9:01 a.m. UTC | #8
On Mon, Mar 15, 2021 at 08:18:10PM +0000, Chen, Mike Ximing wrote:
> > From: Dan Williams <dan.j.williams@intel.com>
> > On Fri, Mar 12, 2021 at 1:55 PM Chen, Mike Ximing <mike.ximing.chen@intel.com> wrote:
> > >
> > > We support up to 16/32 VF/VDEVs (depending on version) with SRIOV and
> > > SIOV. Role of the kernel driver includes VDEV Composition (vdcm
> > > module), functional level reset, live migration, error handling, power
> > > management, and etc..
> > 
> > Need some more specificity here. What about those features requires the kernel to get involved with a
> > DLB2 specific ABI to manage ports, queues, credits, sequence numbers, etc...?
> 
> Role of the dlb kernel driver:
> 
> VDEV Composition
> For example writing 1024 to the VDEV_CREDITS[0] register will allocate 1024 credits to VDEV 0. In this way, VFs or VDEVs can be composed  as mini-versions of the full device.
> VDEV composition will leverage vfio-mdev to create the VDEV devices while the KMD will implement the VDCM.

What is a vdev?

What is KMD?

What is VDCM?

What is VF?

And how does this all work?

> Dynamic Composition
> Such composition can be dynamic – the PF/VF interface supports scenarios whereby, for example, an application may wish to boost its credit allocation – can I have 100 more credits?

What applications?  What "credits"  For what resources?

> Functional Level Reset
> Much of the internal storage is RAM based and not resettable by hardware schemes. There are also internal SRAM  based control structures (BCAM) that have to be flushed. 
> The planned way to do this is, roughly:
>   -- Kernel driver disables access from the associated ports  (to prevent any SW access, the application should be deadso this is a precaution).

What is a "port" here?


>   -- Kernel masquerades as the application to drain all data from internal queues. It can poll some internal counters to verify everything is fully drained.

What queues?

Why would the kernel mess with userspace data?

>   -- Only at this point can the resources associated with the VDEV be returned to the pool of available resources for handing to another application/VDEV.

What is a VDEV and how does an application be "associated with it"?

> Migration
> Requirement is fairly similar to FLR. A VDEV has to be manually drained and reconstituted on another server, Kernel driver is responsible on both sides.

What is FLR?

> Error Handling
> Errors include “Credit Excursions” where a VDEV attempts to use more of the internal capacity (credits) than has been allocated. In such a case, 
> the data is dropped and an interrupt generated. All such interrupts are directed to the PF driver, which may simply forward them to a VF (via the PF/VF comms mechanism).

What data is going where?

> Power Management
> The kernel driver keeps the device in D3Hot when not in use. The driver transitions the device to D0 when the first device file is opened or a VF or VDEV is created, 
> and keeps it in that state until there are no open device files, memory mappings, or VFs/VDEVs.

That's just normal power management for any device, why is this anything
special?

> Ioctl interface
> Kernel driver provides ioctl interface for user applications to setup and configure dlb domains, ports, queues, scheduling types, credits, 
> sequence numbers, and links between ports and queues.  Applications also use the interface to start, stop and inquire the dlb operations.

What applications use any of this?  What userspace implementation today
interacts with this?  Where is that code located?

Too many TLAs here, I have even less of an understanding of what this
driver is supposed to be doing, and what this hardware is now than
before.

And here I thought I understood hardware devices, and if I am confused,
I pity anyone else looking at this code...

You all need to get some real documentation together to explain
everything here in terms that anyone can understand.  Without that, this
code is going nowhere.

good luck!

greg k-h
Dan Williams May 12, 2021, 7:07 p.m. UTC | #9
[ add kvm@vger.kernel.org for VFIO discussion ]


On Tue, Mar 16, 2021 at 2:01 AM Greg KH <gregkh@linuxfoundation.org> wrote:
[..]
> > Ioctl interface
> > Kernel driver provides ioctl interface for user applications to setup and configure dlb domains, ports, queues, scheduling types, credits,
> > sequence numbers, and links between ports and queues.  Applications also use the interface to start, stop and inquire the dlb operations.
>
> What applications use any of this?  What userspace implementation today
> interacts with this?  Where is that code located?
>
> Too many TLAs here, I have even less of an understanding of what this
> driver is supposed to be doing, and what this hardware is now than
> before.
>
> And here I thought I understood hardware devices, and if I am confused,
> I pity anyone else looking at this code...
>
> You all need to get some real documentation together to explain
> everything here in terms that anyone can understand.  Without that, this
> code is going nowhere.

Hi Greg,

So, for the last few weeks Mike and company have patiently waded
through my questions and now I think we are at a point to work through
the upstream driver architecture options and tradeoffs. You were not
alone in struggling to understand what this device does because it is
unlike any other accelerator Linux has ever considered. It shards /
load balances a data stream for processing by CPU threads. This is
typically a network appliance function / protocol, but could also be
any other generic thread pool like the kernel's padata. It saves the
CPU cycles spent load balancing work items and marshaling them through
a thread pool pipeline. For example, in DPDK applications, DLB2 frees
up entire cores that would otherwise be consumed with scheduling and
work distribution. A separate proof-of-concept, using DLB2 to
accelerate the kernel's "padata" thread pool for a crypto workload,
demonstrated ~150% higher throughput with hardware employed to manage
work distribution and result ordering. Yes, you need a sufficiently
high touch / high throughput protocol before the software load
balancing overhead coordinating CPU threads starts to dominate the
performance, but there are some specific workloads willing to switch
to this regime.

The primary consumer to date has been as a backend for the event
handling in the userspace networking stack, DPDK. DLB2 has an existing
polled-mode-userspace driver for that use case. So I said, "great,
just add more features to that userspace driver and you're done". In
fact there was DLB1 hardware that also had a polled-mode-userspace
driver. So, the next question is "what's changed in DLB2 where a
userspace driver is no longer suitable?". The new use case for DLB2 is
new hardware support for a host driver to carve up device resources
into smaller sets (vfio-mdevs) that can be assigned to guests (Intel
calls this new hardware capability SIOV: Scalable IO Virtualization).

Hardware resource management is difficult to handle in userspace
especially when bare-metal hardware events need to coordinate with
guest-VM device instances. This includes a mailbox interface for the
guest VM to negotiate resources with the host driver. Another more
practical roadblock for a "DLB2 in userspace" proposal is the fact
that it implements what are in-effect software-defined-interrupts to
go beyond the scalability limits of PCI MSI-x (Intel calls this
Interrupt Message Store: IMS). So even if hardware resource management
was awkwardly plumbed into a userspace daemon there would still need
to be kernel enabling for device-specific extensions to
drivers/vfio/pci/vfio_pci_intrs.c for it to understand the IMS
interrupts of DLB2 in addition to PCI MSI-x.

While that still might be solvable in userspace if you squint at it, I
don't think Linux end users are served by pushing all of hardware
resource management to userspace. VFIO is mostly built to pass entire
PCI devices to guests, or in coordination with a kernel driver to
describe a subset of the hardware to a virtual-device (vfio-mdev)
interface. The rub here is that to date kernel drivers using VFIO to
provision mdevs have some existing responsibilities to the core kernel
like a network driver or DMA offload driver. The DLB2 driver offers no
such service to the kernel for its primary role of accelerating a
userspace data-plane. I am assuming here that  the padata
proof-of-concept is interesting, but not a compelling reason to ship a
driver compared to giving end users competent kernel-driven
hardware-resource assignment for deploying DLB2 virtual instances into
guest VMs.

My "just continue in userspace" suggestion has no answer for the IMS
interrupt and reliable hardware resource management support
requirements. If you're with me so far we can go deeper into the
details, but in answer to your previous questions most of the TLAs
were from the land of "SIOV" where the VFIO community should be
brought in to review. The driver is mostly a configuration plane where
the fast path data-plane is entirely in userspace. That configuration
plane needs to manage hardware events and resourcing on behalf of
guest VMs running on a partitioned subset of the device. There are
worthwhile questions about whether some of the uapi can be refactored
to common modules like uacce, but I think we need to get to a first
order understanding on what DLB2 is and why the kernel has a role
before diving into the uapi discussion.

Any clearer?

So, in summary drivers/misc/ appears to be the first stop in the
review since a host driver needs to be established to start the VFIO
enabling campaign. With my community hat on, I think requiring
standalone host drivers is healthier for Linux than broaching the
subject of VFIO-only drivers. Even if, as in this case, the initial
host driver is mostly implementing a capability that could be achieved
with a userspace driver.
Greg KH May 14, 2021, 2:33 p.m. UTC | #10
On Wed, May 12, 2021 at 12:07:31PM -0700, Dan Williams wrote:
> [ add kvm@vger.kernel.org for VFIO discussion ]
> 
> 
> On Tue, Mar 16, 2021 at 2:01 AM Greg KH <gregkh@linuxfoundation.org> wrote:
> [..]
> > > Ioctl interface
> > > Kernel driver provides ioctl interface for user applications to setup and configure dlb domains, ports, queues, scheduling types, credits,
> > > sequence numbers, and links between ports and queues.  Applications also use the interface to start, stop and inquire the dlb operations.
> >
> > What applications use any of this?  What userspace implementation today
> > interacts with this?  Where is that code located?
> >
> > Too many TLAs here, I have even less of an understanding of what this
> > driver is supposed to be doing, and what this hardware is now than
> > before.
> >
> > And here I thought I understood hardware devices, and if I am confused,
> > I pity anyone else looking at this code...
> >
> > You all need to get some real documentation together to explain
> > everything here in terms that anyone can understand.  Without that, this
> > code is going nowhere.
> 
> Hi Greg,
> 
> So, for the last few weeks Mike and company have patiently waded
> through my questions and now I think we are at a point to work through
> the upstream driver architecture options and tradeoffs. You were not
> alone in struggling to understand what this device does because it is
> unlike any other accelerator Linux has ever considered. It shards /
> load balances a data stream for processing by CPU threads. This is
> typically a network appliance function / protocol, but could also be
> any other generic thread pool like the kernel's padata. It saves the
> CPU cycles spent load balancing work items and marshaling them through
> a thread pool pipeline. For example, in DPDK applications, DLB2 frees
> up entire cores that would otherwise be consumed with scheduling and
> work distribution. A separate proof-of-concept, using DLB2 to
> accelerate the kernel's "padata" thread pool for a crypto workload,
> demonstrated ~150% higher throughput with hardware employed to manage
> work distribution and result ordering. Yes, you need a sufficiently
> high touch / high throughput protocol before the software load
> balancing overhead coordinating CPU threads starts to dominate the
> performance, but there are some specific workloads willing to switch
> to this regime.
> 
> The primary consumer to date has been as a backend for the event
> handling in the userspace networking stack, DPDK. DLB2 has an existing
> polled-mode-userspace driver for that use case. So I said, "great,
> just add more features to that userspace driver and you're done". In
> fact there was DLB1 hardware that also had a polled-mode-userspace
> driver. So, the next question is "what's changed in DLB2 where a
> userspace driver is no longer suitable?". The new use case for DLB2 is
> new hardware support for a host driver to carve up device resources
> into smaller sets (vfio-mdevs) that can be assigned to guests (Intel
> calls this new hardware capability SIOV: Scalable IO Virtualization).
> 
> Hardware resource management is difficult to handle in userspace
> especially when bare-metal hardware events need to coordinate with
> guest-VM device instances. This includes a mailbox interface for the
> guest VM to negotiate resources with the host driver. Another more
> practical roadblock for a "DLB2 in userspace" proposal is the fact
> that it implements what are in-effect software-defined-interrupts to
> go beyond the scalability limits of PCI MSI-x (Intel calls this
> Interrupt Message Store: IMS). So even if hardware resource management
> was awkwardly plumbed into a userspace daemon there would still need
> to be kernel enabling for device-specific extensions to
> drivers/vfio/pci/vfio_pci_intrs.c for it to understand the IMS
> interrupts of DLB2 in addition to PCI MSI-x.
> 
> While that still might be solvable in userspace if you squint at it, I
> don't think Linux end users are served by pushing all of hardware
> resource management to userspace. VFIO is mostly built to pass entire
> PCI devices to guests, or in coordination with a kernel driver to
> describe a subset of the hardware to a virtual-device (vfio-mdev)
> interface. The rub here is that to date kernel drivers using VFIO to
> provision mdevs have some existing responsibilities to the core kernel
> like a network driver or DMA offload driver. The DLB2 driver offers no
> such service to the kernel for its primary role of accelerating a
> userspace data-plane. I am assuming here that  the padata
> proof-of-concept is interesting, but not a compelling reason to ship a
> driver compared to giving end users competent kernel-driven
> hardware-resource assignment for deploying DLB2 virtual instances into
> guest VMs.
> 
> My "just continue in userspace" suggestion has no answer for the IMS
> interrupt and reliable hardware resource management support
> requirements. If you're with me so far we can go deeper into the
> details, but in answer to your previous questions most of the TLAs
> were from the land of "SIOV" where the VFIO community should be
> brought in to review. The driver is mostly a configuration plane where
> the fast path data-plane is entirely in userspace. That configuration
> plane needs to manage hardware events and resourcing on behalf of
> guest VMs running on a partitioned subset of the device. There are
> worthwhile questions about whether some of the uapi can be refactored
> to common modules like uacce, but I think we need to get to a first
> order understanding on what DLB2 is and why the kernel has a role
> before diving into the uapi discussion.
> 
> Any clearer?

A bit, yes, thanks.

> So, in summary drivers/misc/ appears to be the first stop in the
> review since a host driver needs to be established to start the VFIO
> enabling campaign. With my community hat on, I think requiring
> standalone host drivers is healthier for Linux than broaching the
> subject of VFIO-only drivers. Even if, as in this case, the initial
> host driver is mostly implementing a capability that could be achieved
> with a userspace driver.

Ok, then how about a much "smaller" kernel driver for all of this, and a
whole lot of documentation to describe what is going on and what all of
the TLAs are.

thanks,

greg k-h
Chen, Mike Ximing July 16, 2021, 1:04 a.m. UTC | #11
> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Friday, May 14, 2021 10:33 AM
> To: Williams, Dan J <dan.j.williams@intel.com>
> > Hi Greg,
> >
> > So, for the last few weeks Mike and company have patiently waded
> > through my questions and now I think we are at a point to work through
> > the upstream driver architecture options and tradeoffs. You were not
> > alone in struggling to understand what this device does because it is
> > unlike any other accelerator Linux has ever considered. It shards /
> > load balances a data stream for processing by CPU threads. This is
> > typically a network appliance function / protocol, but could also be
> > any other generic thread pool like the kernel's padata. It saves the
> > CPU cycles spent load balancing work items and marshaling them through
> > a thread pool pipeline. For example, in DPDK applications, DLB2 frees
> > up entire cores that would otherwise be consumed with scheduling and
> > work distribution. A separate proof-of-concept, using DLB2 to
> > accelerate the kernel's "padata" thread pool for a crypto workload,
> > demonstrated ~150% higher throughput with hardware employed to manage
> > work distribution and result ordering. Yes, you need a sufficiently
> > high touch / high throughput protocol before the software load
> > balancing overhead coordinating CPU threads starts to dominate the
> > performance, but there are some specific workloads willing to switch
> > to this regime.
> >
> > The primary consumer to date has been as a backend for the event
> > handling in the userspace networking stack, DPDK. DLB2 has an existing
> > polled-mode-userspace driver for that use case. So I said, "great,
> > just add more features to that userspace driver and you're done". In
> > fact there was DLB1 hardware that also had a polled-mode-userspace
> > driver. So, the next question is "what's changed in DLB2 where a
> > userspace driver is no longer suitable?". The new use case for DLB2 is
> > new hardware support for a host driver to carve up device resources
> > into smaller sets (vfio-mdevs) that can be assigned to guests (Intel
> > calls this new hardware capability SIOV: Scalable IO Virtualization).
> >
> > Hardware resource management is difficult to handle in userspace
> > especially when bare-metal hardware events need to coordinate with
> > guest-VM device instances. This includes a mailbox interface for the
> > guest VM to negotiate resources with the host driver. Another more
> > practical roadblock for a "DLB2 in userspace" proposal is the fact
> > that it implements what are in-effect software-defined-interrupts to
> > go beyond the scalability limits of PCI MSI-x (Intel calls this
> > Interrupt Message Store: IMS). So even if hardware resource management
> > was awkwardly plumbed into a userspace daemon there would still need
> > to be kernel enabling for device-specific extensions to
> > drivers/vfio/pci/vfio_pci_intrs.c for it to understand the IMS
> > interrupts of DLB2 in addition to PCI MSI-x.
> >
> > While that still might be solvable in userspace if you squint at it, I
> > don't think Linux end users are served by pushing all of hardware
> > resource management to userspace. VFIO is mostly built to pass entire
> > PCI devices to guests, or in coordination with a kernel driver to
> > describe a subset of the hardware to a virtual-device (vfio-mdev)
> > interface. The rub here is that to date kernel drivers using VFIO to
> > provision mdevs have some existing responsibilities to the core kernel
> > like a network driver or DMA offload driver. The DLB2 driver offers no
> > such service to the kernel for its primary role of accelerating a
> > userspace data-plane. I am assuming here that  the padata
> > proof-of-concept is interesting, but not a compelling reason to ship a
> > driver compared to giving end users competent kernel-driven
> > hardware-resource assignment for deploying DLB2 virtual instances into
> > guest VMs.
> >
> > My "just continue in userspace" suggestion has no answer for the IMS
> > interrupt and reliable hardware resource management support
> > requirements. If you're with me so far we can go deeper into the
> > details, but in answer to your previous questions most of the TLAs
> > were from the land of "SIOV" where the VFIO community should be
> > brought in to review. The driver is mostly a configuration plane where
> > the fast path data-plane is entirely in userspace. That configuration
> > plane needs to manage hardware events and resourcing on behalf of
> > guest VMs running on a partitioned subset of the device. There are
> > worthwhile questions about whether some of the uapi can be refactored
> > to common modules like uacce, but I think we need to get to a first
> > order understanding on what DLB2 is and why the kernel has a role
> > before diving into the uapi discussion.
> >
> > Any clearer?
> 
> A bit, yes, thanks.
> 
> > So, in summary drivers/misc/ appears to be the first stop in the
> > review since a host driver needs to be established to start the VFIO
> > enabling campaign. With my community hat on, I think requiring
> > standalone host drivers is healthier for Linux than broaching the
> > subject of VFIO-only drivers. Even if, as in this case, the initial
> > host driver is mostly implementing a capability that could be achieved
> > with a userspace driver.
> 
> Ok, then how about a much "smaller" kernel driver for all of this, and a whole lot of documentation to
> describe what is going on and what all of the TLAs are.
> 
> thanks,
> 
> greg k-h

Hi Greg,

tl;dr: We have been looking into various options to reduce the kernel driver
size and ABI surface, such as moving more responsibility to user space,
reusing existing kernel modules (uacce, for example), and converting
functionality from ioctl to sysfs. End result: 10 ioctls will be replaced by
sysfs, and the rest (20 ioctls) will be replaced by configfs. Some concepts
will be moved to device special files rather than ioctls that produce file
descriptors.

Details:
We investigated the possibility of using uacce
(https://www.kernel.org/doc/html/latest/misc-devices/uacce.html) in our kernel
driver. The uacce interface fits well with accelerators that process user data
with known source and destination addresses. For a DLB (Dynamic Load
Balancer), however, the destination port depends on the system load and is
unknown to the application. While uacce exposes "queues" to the user, the dlb
driver has to handle much more complicated resource management, such as
credits, ports, queues, and domains. We would have to add a lot more concepts
and code to uacce, which are not useful for other accelerators, to make it
work for DLB. This may also lead to a bigger code size overall.

We also took another look at moving resource management functionality from
kernel space to user space. Much of the kernel driver supports both the PF
(Physical Function) on the host and VFs (Virtual Functions) in VMs. Since only
the PF on the host has permission to set up resources and configure the DLB
HW, all requests on VFs are forwarded to the PF via the VF-PF mailboxes, which
are handled by the kernel driver. The driver also maintains various
virtual-to-physical ID translations (for VFs, ports, queues, etc.), and
provides the virtual-to-physical ID mapping info to the DLB HW so that an
application in a VM can access the resources with virtual IDs only. Because of
the VF/VDEV support, we have to keep the resource management, which is more
than half of the code size, in the driver.

To simplify the user interface, we explored ways to reduce/eliminate the ioctl
interface, and found that we can use configfs for many of the DLB
functionalities. Our current plan is to replace all the ioctls in the driver
with sysfs and configfs. We will use configfs for most of the setup and
configuration for both the physical function and virtual functions. This may
not reduce the overall driver size greatly, but it will lessen much of the ABI
maintenance burden (with the elimination of ioctls). I hope this is in line
with what you would like to see for the driver.
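
To make the direction a bit more concrete, here is a very rough sketch of the
configfs pattern we are considering (needs <linux/configfs.h>; the group and
attribute names, the layout, and the helpers below are illustrative only, not
the final ABI). The idea is that a mkdir under the device's configfs directory
creates a scheduling domain configuration, and writes to its attributes
configure it:

struct dlb_domain_cfg {
        struct config_group group;
        u32 num_ldb_ports;
        /* queues, credits, sequence numbers, port->queue links, ... */
};

static inline struct dlb_domain_cfg *to_domain_cfg(struct config_item *item)
{
        return container_of(to_config_group(item), struct dlb_domain_cfg, group);
}

static ssize_t dlb_domain_num_ldb_ports_store(struct config_item *item,
                                              const char *page, size_t len)
{
        struct dlb_domain_cfg *cfg = to_domain_cfg(item);
        int ret = kstrtou32(page, 0, &cfg->num_ldb_ports);

        return ret ? ret : len;
}
CONFIGFS_ATTR_WO(dlb_domain_, num_ldb_ports);

static struct configfs_attribute *dlb_domain_attrs[] = {
        &dlb_domain_attr_num_ldb_ports,
        NULL,
};

/* A real version also needs ct_item_ops with a .release to free the cfg. */
static const struct config_item_type dlb_domain_type = {
        .ct_owner = THIS_MODULE,
        .ct_attrs = dlb_domain_attrs,
};

/* mkdir in the device's configfs directory creates a new domain config. */
static struct config_group *dlb_make_domain(struct config_group *group,
                                            const char *name)
{
        struct dlb_domain_cfg *cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);

        if (!cfg)
                return ERR_PTR(-ENOMEM);
        config_group_init_type_name(&cfg->group, name, &dlb_domain_type);
        return &cfg->group;
}

static struct configfs_group_operations dlb_dev_group_ops = {
        .make_group = dlb_make_domain,
};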

Thanks
Mike