[0/6] Ancillary bus implementation and SOF multi-client support

Message ID: 20200930225051.889607-1-david.m.ertman@intel.com (mailing list archive)

Message

Dave Ertman Sept. 30, 2020, 10:50 p.m. UTC
Brief history of Ancillary Bus 
==============================
The ancillary bus code was originally submitted upstream as virtual
bus through the netdev tree.  That process produced four versions (up
to v4).  The discussion can be found here:
 https://lore.kernel.org/netdev/0200520070227.3392100-2-jeffrey.t.kirsher@intel.com/T/#u

At this point, GregKH requested that we take the review and revision 
process to an internal mailing list and garner the buy-in of a respected 
kernel contributor. 

The ancillary bus (then known as virtual bus) was originally submitted
along with implementation code for the ice and irdma drivers, which
added the complication of dependencies in the rdma tree.  This new
submission uses an ancillary bus consumer in only the sound driver
tree, providing the initial implementation with a single user.

Since implementation work started on this patch set, there have been
multiple inquiries about the time frame of its completion.  It appears
that there will be numerous consumers of this functionality. 

The process of internal review and implementation using the sound
drivers generated 19 internal versions.  The changes across these
versions, including the rename from virtual bus to ancillary bus, can
be summarized as follows:

- Fixed compilation and checkpatch errors
- Improved documentation to address the motivation for virtual bus
- Renamed virtual bus to ancillary bus
- Increased maximum device name size
- Corrected ordering in Kconfig and Makefile
- Removed the adev->release mid-layer for device unregister
- Pushed adev->id management to the parent driver
- Made all error paths out of ancillary_device_register return an error code
- Made all error paths out of ancillary_device_register use put_device
- Added the adev->name element
- Required that the modname passed to register is not NULL
- Added KBUILD_MODNAME as the prefix for match_name
- Pushed adev->id responsibility to the registering driver
- Made uevent parse the adev->dev name
- Made the match_id function parse the adev->dev name
- Changed the driver probe function to also take an ancillary_device_id param
- Split ancillary_device_register into device_initialize and device_add
  (see the sketch after this list)
- Adjusted what is done in device_initialize and device_add
- Changed adev to ancildev and adrv to ancildrv
- Changed adev to ancildev in documentation
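
For illustration, here is a minimal sketch of the split registration
flow named above.  The helper names (ancillary_device_initialize /
ancillary_device_add) and the field names are inferred from these
changelog entries and may not match the series exactly:

#include <linux/ancillary_bus.h>
#include <linux/device.h>

/* Hypothetical parent-driver helper showing the two-step flow: the
 * registering driver owns name/id, and every error path after
 * initialize() tears down through put_device().
 */
static int register_ipc_test_dev(struct ancillary_device *ancildev)
{
	int ret;

	ancildev->name = "ipc-test";	/* matched as "<KBUILD_MODNAME>.ipc-test" */
	ancildev->id = 0;		/* id allocation owned by this driver */

	ret = ancillary_device_initialize(ancildev);
	if (ret)
		return ret;

	ret = ancillary_device_add(ancildev);
	if (ret) {
		put_device(&ancildev->dev);
		return ret;
	}

	return 0;
}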

This is the first time this patch set has been sent to the alsa-devel
mailing list, so it is being submitted as version 1.

==========================

This series introduces the ancillary bus implementation along with
example usage in the Sound Open Firmware (SOF) audio driver.

In some subsystems, the functionality of the core device
(PCI/ACPI/other) may be too complex for a single device to be managed as
a monolithic block or a part of the functionality might need to be
exposed to a different subsystem.  Splitting the functionality into
smaller orthogonal devices makes it easier to manage data, power
management and domain-specific communication with the hardware.  Also,
ancillary_device functionality that is common across primary devices
can be handled by a shared ancillary_device. A key requirement for such a split
is that there is no dependency on a physical bus, device, register
accesses or regmap support. These individual devices split from the core
cannot live on the platform bus as they are not physical devices that
are controlled by DT/ACPI. The same argument applies for not using MFD
in this scenario as it relies on individual function devices being
physical devices that are DT enumerated.

An example of this kind of requirement is the audio subsystem, where a
single IP handles multiple entities such as HDMI, Soundwire, local
devices such as mics/speakers etc. The split for the core's
functionality can be arbitrary or be defined by the DSP firmware
topology and include hooks for test/debug. This allows for the audio
core device to be minimal and tightly coupled with handling the
hardware-specific logic and communication.

The ancillary bus is intended to be minimal and generic, and to avoid
domain-specific assumptions. Each ancillary bus device represents a part
of its parent's functionality. The generic behavior can be extended and
specialized as needed by encapsulating an ancillary bus device within
other domain-specific structures and through the use of .ops callbacks.
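
As a hedged sketch of that encapsulation pattern (the wrapper struct
and ops names below are invented for this example; only the embedded
ancillary_device and the container_of() recovery reflect the pattern
described above):

#include <linux/ancillary_bus.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct sof_example_client;

/* Domain-specific extension point carried alongside the generic device. */
struct sof_example_ops {
	int (*ipc_rx)(struct sof_example_client *cdev, void *msg, size_t bytes);
};

struct sof_example_client {
	struct ancillary_device ancildev;	/* generic part, lives on the bus */
	const struct sof_example_ops *ops;	/* domain-specific .ops callbacks */
};

/* Recover the wrapper from the generic device the bus core hands back. */
static inline struct sof_example_client *
ancildev_to_client(struct ancillary_device *ancildev)
{
	return container_of(ancildev, struct sof_example_client, ancildev);
}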

The SOF driver adopts the ancillary bus to implement multi-client
support. A client in the context of the SOF driver represents a part of
the core device's functionality. It is not a physical device but rather
an ancillary device that needs to communicate with the DSP via IPCs.
With multi-client support, the sound card can be separated into multiple
orthogonal ancillary devices for local devices (mics/speakers etc.),
HDMI, sensing, probes, debug etc.  In this series, we demonstrate the
usage of the ancillary bus with the help of the IPC test client, which
is used for testing the serialization of IPCs when multiple clients talk
to the DSP at the same time.
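
A hedged sketch of what the IPC test client's driver skeleton could
look like under this model.  The probe signature and the
"modname.devname" match string follow the changelog above; everything
else (names, the omitted remove/shutdown, registration calls) is
illustrative only:

#include <linux/ancillary_bus.h>
#include <linux/device.h>
#include <linux/module.h>

static int ipc_test_probe(struct ancillary_device *ancildev,
			  const struct ancillary_device_id *id)
{
	/* the parent SOF core owns the hardware; from here on the client
	 * only talks to the DSP via IPCs */
	dev_info(&ancildev->dev, "IPC test client bound\n");
	return 0;
}

static const struct ancillary_device_id ipc_test_id_table[] = {
	{ .name = "snd_sof.ipc-test" },	/* "<KBUILD_MODNAME>.<name>" match */
	{}
};
MODULE_DEVICE_TABLE(ancillary, ipc_test_id_table);

static struct ancillary_driver ipc_test_driver = {
	.probe = ipc_test_probe,
	.id_table = ipc_test_id_table,
};

/* module init/exit would pair ancillary_driver_register() and
 * ancillary_driver_unregister(); error handling omitted for brevity */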

Dave Ertman (1):
  Add ancillary bus support

Fred Oh (1):
  ASoC: SOF: debug: Remove IPC flood test support in SOF core

Ranjani Sridharan (4):
  ASoC: SOF: Introduce descriptors for SOF client
  ASoC: SOF: Create client driver for IPC test
  ASoC: SOF: ops: Add ops for client registration
  ASoC: SOF: Intel: Define ops for client registration

 Documentation/driver-api/ancillary_bus.rst | 230 +++++++++++++++
 Documentation/driver-api/index.rst         |   1 +
 drivers/bus/Kconfig                        |   3 +
 drivers/bus/Makefile                       |   3 +
 drivers/bus/ancillary.c                    | 191 +++++++++++++
 include/linux/ancillary_bus.h              |  58 ++++
 include/linux/mod_devicetable.h            |   8 +
 scripts/mod/devicetable-offsets.c          |   3 +
 scripts/mod/file2alias.c                   |   8 +
 sound/soc/sof/Kconfig                      |  29 +-
 sound/soc/sof/Makefile                     |   7 +
 sound/soc/sof/core.c                       |  12 +
 sound/soc/sof/debug.c                      | 230 ---------------
 sound/soc/sof/intel/Kconfig                |   9 +
 sound/soc/sof/intel/Makefile               |   3 +
 sound/soc/sof/intel/apl.c                  |  18 ++
 sound/soc/sof/intel/bdw.c                  |  18 ++
 sound/soc/sof/intel/byt.c                  |  22 ++
 sound/soc/sof/intel/cnl.c                  |  18 ++
 sound/soc/sof/intel/intel-client.c         |  49 ++++
 sound/soc/sof/intel/intel-client.h         |  26 ++
 sound/soc/sof/ops.h                        |  14 +
 sound/soc/sof/sof-client.c                 | 117 ++++++++
 sound/soc/sof/sof-client.h                 |  65 +++++
 sound/soc/sof/sof-ipc-test-client.c        | 314 +++++++++++++++++++++
 sound/soc/sof/sof-priv.h                   |  16 +-
 26 files changed, 1233 insertions(+), 239 deletions(-)
 create mode 100644 Documentation/driver-api/ancillary_bus.rst
 create mode 100644 drivers/bus/ancillary.c
 create mode 100644 include/linux/ancillary_bus.h
 create mode 100644 sound/soc/sof/intel/intel-client.c
 create mode 100644 sound/soc/sof/intel/intel-client.h
 create mode 100644 sound/soc/sof/sof-client.c
 create mode 100644 sound/soc/sof/sof-client.h
 create mode 100644 sound/soc/sof/sof-ipc-test-client.c

Comments

Greg KH Oct. 1, 2020, 5:58 a.m. UTC | #1
On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> Brief history of Ancillary Bus 
> ==============================

<snip>

Did you send 2 copies of this?  Which one is the "correct" one to
review?

confused,

greg k-h
Greg KH Oct. 1, 2020, 7:14 a.m. UTC | #2
On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> The ancillary bus (then known as virtual bus) was originally submitted
> along with implementation code for the ice and irdma drivers, which
> added the complication of dependencies in the rdma tree.  This new
> submission uses an ancillary bus consumer in only the sound driver
> tree, providing the initial implementation with a single user.

So this will not work for the ice driver and/or irdma drivers?  It would
be great to see how they work for this as well as getting those
maintainers to review and sign off on this implementation as well.
Don't ignore those developers, that's a bit "odd", don't you think?

To drop them from the review process is actually kind of rude, what
happens if this gets merged without their input?

And the name, why was it changed and what does it mean?  For non-native
english speakers this is going to be rough, given that I as a native
english speaker had to go look up the word in a dictionary to fully
understand what you are trying to do with that name.

Naming is hard, but I think this name is really hard to explain and
understand, don't you think?

thanks,

greg k-h
Cezary Rojewski Oct. 1, 2020, 10:05 a.m. UTC | #3
On 2020-10-01 12:50 AM, Dave Ertman wrote:
> Brief history of Ancillary Bus
> ==============================
> <snip>
> 
> Since implementation work started on this patch set, there have been
> multiple inquiries about the time frame of its completion.  It appears
> that there will be numerous consumers of this functionality.
> 
> <snip>
> 

Given the description and number of possible users, one could safely
say: usage is assured. So, why not submit this bus as a standalone
patch? Isn't it better to first have a stable, complete version present
in Linus' tree and only then append the usage?

All other patches target ASoC SOF solution directly while as stated in
the commit's message, this isn't SOF specific - see the delta provided
at the bottom of cover-letter and a wide range of SOF files it touches.

Czarek


> <snip diffstat>
Greg KH Oct. 1, 2020, 10:59 a.m. UTC | #4
On Thu, Oct 01, 2020 at 10:05:13AM +0000, Rojewski, Cezary wrote:
> On 2020-10-01 12:50 AM, Dave Ertman wrote:
> > <snip>
> 
> Given the description and number of possible users, one could safely
> say: usage is assured. So, why not submit this bus as a standalone
> patch? Isn't it better to first have a stable, complete version present
> in Linus' tree and only then append the usage?

Because I want to see patches that actually _use_ the new code.  So far
the previous versions of this implementation have not shown how all of
the code will be used, making it impossible to review to see if it fits
the needs of people.

We don't add infrastructure without users.  And the normal rule of thumb
of "if we have 3 users, then it is a semi-sane api" really applies here.

thanks,

greg k-h
Jason Gunthorpe Oct. 1, 2020, 12:49 p.m. UTC | #5
On Thu, Oct 01, 2020 at 12:59:25PM +0200, gregkh@linuxfoundation.org wrote:
> We don't add infrastructure without users.  And the normal rule of thumb
> of "if we have 3 users, then it is a semi-sane api" really applies here.

Based on recent discussions I'm expecting:
 - Intel SOF
 - New Intel RDMA driver
 - mlx5 RDMA driver conversion
 - mlx4 RDMA driver conversion
 - mlx5 subdevice feature for netdev
 - Intel IDXD vfio-mdev
 - Habana Labs Gaudi netdev driver

Will use this in the short term.

I would like, but don't expect to see, the other RDMA RoCE drivers
converted - cxgb3/4, i40iw, hns, ocrdma, and qedr. It solves an
annoying module loading problem we have.

We've seen the New Intel RDMA driver many months ago, if patch 1 is
going to stay the same we should post some of the mlx items next week.

It is hard to co-ordinate all of this already, having some general
agreement that there is nothing fundamentally objectionable about
ancillary bus will help a lot.

Jason
Mark Brown Oct. 1, 2020, 12:50 p.m. UTC | #6
On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:

> In some subsystems, the functionality of the core device
> (PCI/ACPI/other) may be too complex for a single device to be managed as
> a monolithic block or a part of the functionality might need to be
> exposed to a different subsystem.  Splitting the functionality into
> smaller orthogonal devices makes it easier to manage data, power
> management and domain-specific communication with the hardware.  Also,
> ancillary_device functionality that is common across primary devices
> can be handled by a shared ancillary_device. A key requirement for such a split
> is that there is no dependency on a physical bus, device, register
> accesses or regmap support. These individual devices split from the core
> cannot live on the platform bus as they are not physical devices that

I have to say that I find the motivation behind this bus to be a bit
confusing.  In code terms it's essentially a stripped back copy of the
platform bus and we're basically assigning devices between the two based
on if they end up having a physical resource passed through to them.
That seems to result in some duplication, has some potential for devices
to need to churn between the two buses and for duplication in parent
devices if they need to create both platform and auxiliary devices.
What exactly is the problem we're trying to solve here beyond the
labelling one?  I can see that it's a bit messy but this whole area is a
bit messy and I'm not clear that we're not just pushing the mess around.

> are controlled by DT/ACPI. The same argument applies for not using MFD
> in this scenario as it relies on individual function devices being
> physical devices that are DT enumerated.

MFD has no reliance on devices being DT enumerated, it works on systems
that don't have DT and in many cases it's not clear that the split you'd
want for the way Linux describes devices is a sensible one for other
operating systems so we don't want to put it into DT.  Forcing things to
be DT enumerated would just create needless ABIs.
Greg KH Oct. 1, 2020, 12:55 p.m. UTC | #7
On Thu, Oct 01, 2020 at 09:49:00AM -0300, Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 12:59:25PM +0200, gregkh@linuxfoundation.org wrote:
> > We don't add infrastructure without users.  And the normal rule of thumb
> > of "if we have 3 users, then it is a semi-sane api" really applies here.
> 
> Based on recent discussions I'm expecting:
>  - Intel SOF
>  - New Intel RDMA driver
>  - mlx5 RDMA driver conversion
>  - mlx4 RDMA driver conversion
>  - mlx5 subdevice feature for netdev
>  - Intel IDXD vfio-mdev
>  - Habana Labs Gaudi netdev driver
> 
> Will use this in the short term.
> 
> I would like, but don't expect to see, the other RDMA RoCE drivers
> converted - cxgb3/4, i40iw, hns, ocrdma, and qedr. It solves an
> annoying module loading problem we have.
> 
> We've seen the New Intel RDMA driver many months ago, if patch 1 is
> going to stay the same we should post some of the mlx items next week.
> 
> It is hard to co-ordinate all of this already, having some general
> agreement that there is nothing fundamentally objectionable about
> ancillary bus will help a lot.

I agree, but with just one user (in a very odd way I do have to say,
more on that in the review of that specific patch), it's hard to judge
if this is useful or not, right?

So, what happened to at least the Intel SOF driver usage?  That was the
original user of this bus (before it was renamed), surely that patchset
should be floating around somewhere in Intel, right?

thanks,

greg k-h
Greg KH Oct. 1, 2020, 1:12 p.m. UTC | #8
On Thu, Oct 01, 2020 at 01:50:38PM +0100, Mark Brown wrote:
> On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> 
> > In some subsystems, the functionality of the core device
> > (PCI/ACPI/other) may be too complex for a single device to be managed as
> > a monolithic block or a part of the functionality might need to be
> > exposed to a different subsystem.  Splitting the functionality into
> > smaller orthogonal devices makes it easier to manage data, power
> > management and domain-specific communication with the hardware.  Also,
> > ancillary_device functionality that is common across primary devices
> > can be handled by a shared ancillary_device. A key requirement for such a split
> > is that there is no dependency on a physical bus, device, register
> > accesses or regmap support. These individual devices split from the core
> > cannot live on the platform bus as they are not physical devices that
> 
> I have to say that I find the motivation behind this bus to be a bit
> confusing.  In code terms it's essentially a stripped back copy of the
> platform bus and we're basically assigning devices between the two based
> on if they end up having a physical resource passed through to them.
> That seems to result in some duplication, has some potential for devices
> to need to churn between the two buses and for duplication in parent
> devices if they need to create both platform and auxiliary devices.
> What exactly is the problem we're trying to solve here beyond the
> labelling one?  I can see that it's a bit messy but this whole area is a
> bit messy and I'm not clear that we're not just pushing the mess around.

This series doesn't really show how this is meant to be used, from what I
can tell.

The goal is to NOT need a platform device/bus as that's an
overloaded/abused subsystem, and to just use a much lighter-weight
subsystem that allows one "device" (PCI/USB/whatever) to have multiple
child devices that then are bound to different subsystems (networking,
tty, input, etc.)  Yes, there will be some core "sharing" needed, but
that's up to the driver that implements this, not the generic code.

> > are controlled by DT/ACPI. The same argument applies for not using MFD
> > in this scenario as it relies on individual function devices being
> > physical devices that are DT enumerated.
> 
> MFD has no reliance on devices being DT enumerated, it works on systems
> that don't have DT and in many cases it's not clear that the split you'd
> want for the way Linux describes devices is a sensible one for other
> operating systems so we don't want to put it into DT.  Forcing things to
> be DT enumerated would just create needless ABIs.

This new bus doesn't need DT at all.  Or at least it better not...

But again, this series doesn't really feel like it is showing what this
is really for to me as there is just one "child device" that is being
created, just to handle debugfs files, which aren't even part of the
driver model.

Again, I could have this totally wrong, if so, someone needs to point
out my errors in reviewing this.

thanks,

greg k-h
Jason Gunthorpe Oct. 1, 2020, 1:26 p.m. UTC | #9
On Thu, Oct 01, 2020 at 02:55:26PM +0200, gregkh@linuxfoundation.org wrote:
> I agree, but with just one user (in a very odd way I do have to say,
> more on that in the review of that specific patch), it's hard to judge
> if this is useful or not, right?

I agree with you completely, this SOF usage is quite weird and not
what I think is representative. I never imagined this stuff would be
used inside a single driver in a single subsystem. It was imagined for
cross-subsystem sharing.

> So, what happened to at least the Intel SOF driver usage?  That was the
> original user of this bus (before it was renamed), surely that patchset
> should be floating around somewhere in Intel, right?

The first user was irdma (the New Intel RDMA driver):

https://lore.kernel.org/linux-rdma/20200520070415.3392210-1-jeffrey.t.kirsher@intel.com/

(look at patch 1, search for virtbus)

I kicked it off when I said I was sick of RDMA RoCE drivers
re-implementing the driver core register/unregister and module
management to share a PCI device between netdev and RDMA.

This has been going on for almost two years now. I did not think it
would be so hard.

Jason
Jason Gunthorpe Oct. 1, 2020, 1:42 p.m. UTC | #10
On Thu, Oct 01, 2020 at 03:12:52PM +0200, Greg KH wrote:

> Again, I could have this totally wrong, if so, someone needs to point
> out my errors in reviewing this.

I agree with your assessment

Jason
Pierre-Louis Bossart Oct. 1, 2020, 2:07 p.m. UTC | #11
>> are controlled by DT/ACPI. The same argument applies for not using MFD
>> in this scenario as it relies on individual function devices being
>> physical devices that are DT enumerated.
> 
> MFD has no reliance on devices being DT enumerated, it works on systems
> that don't have DT and in many cases it's not clear that the split you'd
> want for the way Linux describes devices is a sensible one for other
> operating systems so we don't want to put it into DT.  Forcing things to
> be DT enumerated would just create needless ABIs.

I agree the "DT enumerated" part should be removed.

To the best of my knowledge, the part of 'individual function devices 
being physical devices' is correct though. MFDs typically expose 
different functions on a single physical bus, and the functions are 
separated out by register maps. In the case where there's no physical 
bus/device and no register map it's unclear how MFDs would help?
Greg KH Oct. 1, 2020, 2:17 p.m. UTC | #12
On Thu, Oct 01, 2020 at 10:26:59AM -0300, Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 02:55:26PM +0200, gregkh@linuxfoundation.org wrote:
> > So, what happened to at least the Intel SOF driver usage?  That was the
> > original user of this bus (before it was renamed), surely that patchset
> > should be floating around somewhere in Intel, right?
> 
> The first user was irdma (the New Intel RDMA driver):
> 
> https://lore.kernel.org/linux-rdma/20200520070415.3392210-1-jeffrey.t.kirsher@intel.com/
> 
> (look at patch 1, search for virtbus)

My apologies, you are correct, it's been so long "in flight" that I
can't remember...

> I kicked it off when I said I was sick of RDMA RoCE drivers
> re-implementing the driver core register/unregister and module
> management to share a PCI device between netdev and RDMA.
> 
> This has been going on for almost two years now. I did not think it
> would be so hard.

It really isn't, I have no idea why it has taken so long.

For a while I thought it was people doing the traditional, "if I submit
something so bad, it will make the maintainer take pity and just do it
correctly themselves" method of kernel development, and if so, it failed
horribly.  Now I just have no idea why it has taken so long, sad...

greg k-h
Mark Brown Oct. 1, 2020, 2:40 p.m. UTC | #13
On Thu, Oct 01, 2020 at 03:12:52PM +0200, Greg KH wrote:
> On Thu, Oct 01, 2020 at 01:50:38PM +0100, Mark Brown wrote:

> > That seems to result in some duplication, has some potential for devices
> > to need to churn between the two buses and for duplication in parent
> > devices if they need to create both platform and auxiliary devices.
> > What exactly is the problem we're trying to solve here beyond the
> > labelling one?  I can see that it's a bit messy but this whole area is a
> > bit messy and I'm not clear that we're not just pushing the mess around.

> This series doesn't really show how this is meant to be used, from what I
> can tell.

> The goal is to NOT need a platform device/bus as that's an
> overloaded/abused subsystem, and to just use a much lighter-weight
> subsystem that allows one "device" (PCI/USB/whatever) to have multiple
> child devices that then are bound to different subsystems (networking,
> tty, input, etc.)  Yes, there will be some core "sharing" needed, but
> that's up to the driver that implements this, not the generic code.

Right, so my concern is that as soon as we decide we want to pass some
resources or platform data through to one of the subdevices it needs to
move over into being a platform device and vice versa.  That feels like
something that's going to add to the mess for some of the uses.
Parav Pandit Oct. 1, 2020, 3:08 p.m. UTC | #14
> From: gregkh@linuxfoundation.org <gregkh@linuxfoundation.org>
> Sent: Thursday, October 1, 2020 6:25 PM
> 
> On Thu, Oct 01, 2020 at 09:49:00AM -0300, Jason Gunthorpe wrote:
> > Based on recent discussions I'm expecting:
> >  - Intel SOF
> >  - New Intel RDMA driver
> >  - mlx5 RDMA driver conversion
> >  - mlx4 RDMA driver conversion
> >  - mlx5 subdevice feature for netdev
> >  - Intel IDXD vfio-mdev
> >  - Habana Labs Gaudi netdev driver
> >
> > Will use this in the short term.
> >
> > <snip>
> 
> I agree, but with just one user (in a very odd way I do have to say,
> more on that in the review of that specific patch), it's hard to judge
> if this is useful or not, right?
> 

As Jason mentioned above regarding the mlx5 subdevice feature, I'd like to provide more context before posting the patches.

I have rebased and tested mlx5 subfunction devices for netdev to use the ancillary device as per the RFC posted at [1].
These subdevices are created dynamically on user request. Typically they are in the range of hundreds.
Please grep for virtbus to see its intended use in [1].

To refresh the memory: before working on the RFC [1], mlx5 subfunction use was also discussed with Greg at [2].
Recently I further discussed the intended use of the ancillary bus (virtbus) for mlx5 subfunctions with the netdev community at [3], summarized in [4] (jump to the last slide, 22).

The mlx5 series is a bit long and is mainly waiting for the ancillary bus to be available, apart from some internal reviews to finish.

[1] https://lore.kernel.org/netdev/20200519092258.GF4655@nanopsycho/
[2] https://patchwork.kernel.org/patch/11280547/#23056985
[3] https://netdevconf.info/0x14/pub/papers/45/0x14-paper45-talk-paper.pdf
[4] https://netdevconf.info/0x14/pub/slides/45/sf_mgmt_using_devlink_netdevconf_0x14.pdf

Mark Brown Oct. 1, 2020, 3:24 p.m. UTC | #15
On Thu, Oct 01, 2020 at 09:07:19AM -0500, Pierre-Louis Bossart wrote:

> > > are controlled by DT/ACPI. The same argument applies for not using MFD
> > > in this scenario as it relies on individual function devices being
> > > physical devices that are DT enumerated.

> > MFD has no reliance on devices being DT enumerated, it works on systems
> > that don't have DT and in many cases it's not clear that the split you'd

...

> To the best of my knowledge, the part of 'individual function devices being
> physical devices' is correct though. MFDs typically expose different
> functions on a single physical bus, and the functions are separated out by
> register maps. In the case where there's no physical bus/device and no
> register map it's unclear how MFDs would help?

MFD doesn't care.  All MFD is doing is instantiating platform devices
and providing mechanisms to pass resources through from the parent
device to the child devices.  It doesn't really matter to it which if
any combination of resources are being provided to the children or what
the devices represent.
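
To make the mechanism Mark describes concrete, a hedged sketch (the
cell and resource values below are made up for illustration; the
mfd_cell/resource APIs themselves are the standard ones):

#include <linux/ioport.h>
#include <linux/mfd/core.h>

/* Two platform child devices carved out of one parent, each handed its
 * own slice of the parent's resources. */
static const struct resource example_codec_resources[] = {
	DEFINE_RES_MEM(0x0000, 0x100),	/* register window, parent-relative */
	DEFINE_RES_IRQ(0),
};

static const struct mfd_cell example_cells[] = {
	{
		.name = "example-codec",
		.resources = example_codec_resources,
		.num_resources = ARRAY_SIZE(example_codec_resources),
	},
	{ .name = "example-gpio" },	/* a cell that needs no resources */
};

/* in the parent driver's probe():
 *	mfd_add_devices(dev, PLATFORM_DEVID_AUTO, example_cells,
 *			ARRAY_SIZE(example_cells), mem_base, irq_base, NULL);
 */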
Greg KH Oct. 1, 2020, 3:32 p.m. UTC | #16
On Thu, Oct 01, 2020 at 03:40:19PM +0100, Mark Brown wrote:
> On Thu, Oct 01, 2020 at 03:12:52PM +0200, Greg KH wrote:
> > On Thu, Oct 01, 2020 at 01:50:38PM +0100, Mark Brown wrote:
> 
> > > That seems to result in some duplication, has some potential for devices
> > > to need to churn between the two buses and for duplication in parent
> > > devices if they need to create both platform and auxiliary devices.
> > > What exactly is the problem we're trying to solve here beyond the
> > > labelling one?  I can see that it's a bit messy but this whole area is a
> > > bit messy and I'm not clear that we're not just pushing the mess around.
> 
> > This series doesn't really show how this is meant to be used, from what I
> > can tell.
> 
> > The goal is to NOT need a platform device/bus as that's an
> > overloaded/abused subsystem, and to just use a much lighter-weight
> > subsystem that allows one "device" (PCI/USB/whatever) to have multiple
> > child devices that then are bound to different subsystems (networking,
> > tty, input, etc.)  Yes, there will be some core "sharing" needed, but
> > that's up to the driver that implements this, not the generic code.
> 
> Right, so my concern is that as soon as we decide we want to pass some
> resources or platform data through to one of the subdevices it needs to
> move over into being a platform device and vice versa.  That feels like
> something that's going to add to the mess for some of the uses.

There shouldn't be a need for resources or platform data to be passed
that way as they are all "owned" by the parent device that creates
these.

I don't want to see platform devices become children of real devices
(like PCI and USB and others); avoiding that is the goal here.  platform
devices are overloaded and abused enough as it is, let's not make it worse.

thanks,

greg k-h
Dave Ertman Oct. 1, 2020, 3:54 p.m. UTC | #17
Sorry about the two copies.  That was my fault.  On my first send
my subscription to alsa-devel had not gone through and I forgot
to suppress-cc on the second send.

-DaveE

> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Wednesday, September 30, 2020 10:58 PM
> To: Ertman, David M <david.m.ertman@intel.com>
> Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org; pierre-
> louis.bossart@linux.intel.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> > Brief history of Ancillary Bus
> > ==============================
> 
> <snip>
> 
> Did you send 2 copies of this?  Which one is the "correct" one to
> review?
> 
> confused,
> 
> greg k-h
Dave Ertman Oct. 1, 2020, 3:55 p.m. UTC | #18
I have also sent this patch set to netdev and linux-rdma mailing lists.

-DaveE

> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Thursday, October 1, 2020 12:14 AM
> To: Ertman, David M <david.m.ertman@intel.com>
> Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org; pierre-
> louis.bossart@linux.intel.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> > The ancillary bus (then known as virtual bus) was originally submitted
> > along with implementation code for the ice and irdma drivers, which
> > added the complication of dependencies in the rdma tree.  This new
> > submission uses an ancillary bus consumer in only the sound driver
> > tree, providing the initial implementation with a single user.
> 
> So this will not work for the ice driver and/or irdma drivers?  It would
> be great to see how they work for this as well as getting those
> maintainers to review and sign off on this implementation as well.
> Don't ignore those developers, that's a bit "odd", don't you think?
> 
> To drop them from the review process is actually kind of rude, what
> happens if this gets merged without their input?
> 
> And the name, why was it changed and what does it mean?  For non-native
> english speakers this is going to be rough, given that I as a native
> english speaker had to go look up the word in a dictionary to fully
> understand what you are trying to do with that name.
> 
> Naming is hard, but I think this name is really hard to explain and
> understand, don't you think?
> 
> thanks,
> 
> greg k-h
Mark Brown Oct. 1, 2020, 4:03 p.m. UTC | #19
On Thu, Oct 01, 2020 at 05:32:07PM +0200, Greg KH wrote:
> On Thu, Oct 01, 2020 at 03:40:19PM +0100, Mark Brown wrote:

> > Right, so my concern is that as soon as we decide we want to pass some
> > resources or platform data through to one of the subdevices it needs to
> > move over into being a platform device and vice versa.  That feels like
> > something that's going to add to the mess for some of the uses.

> There shouldn't be a need for resources or platform data to be passed
> that way as they are all "owned" by the parent device that creates
> these.

> I don't want to see platform devices become children of real devices
> (like PCI and USB and others), which is the goal here.  platform devices
> are overloaded and abused enough as it is, let's not make it worse.

How does this interact with the situation where someone makes a PCI
device that's basically a bunch of IPs glued together in a PCI memory
region (or similarly for other buses)?  The IPs all have distinct
memory regions and other resources like interrupt lines which makes them
unsuitable for auxiliary devices as proposed, especially in cases where
there's more than one copy of the IP instantiated.  There's a bunch of
PCI MFDs in tree already of admittedly varying degrees of taste, and
MFDs on other buses also use the resource passing stuff.
Greg KH Oct. 1, 2020, 4:10 p.m. UTC | #20
On Thu, Oct 01, 2020 at 03:55:46PM +0000, Ertman, David M wrote:
> I have also sent this patch set to netdev and linux-rdma mailing lists.

And not cc:ed us on it?  Wonderful, we will never see the review
comments, and the developers there will not see ours :(

{sigh}

greg k-h
Pierre-Louis Bossart Oct. 1, 2020, 4:20 p.m. UTC | #21
On 10/1/20 10:24 AM, Mark Brown wrote:
> On Thu, Oct 01, 2020 at 09:07:19AM -0500, Pierre-Louis Bossart wrote:
> 
>>>> are controlled by DT/ACPI. The same argument applies for not using MFD
>>>> in this scenario as it relies on individual function devices being
>>>> physical devices that are DT enumerated.
> 
>>> MFD has no reliance on devices being DT enumerated, it works on systems
>>> that don't have DT and in many cases it's not clear that the split you'd
> 
> ...
> 
>> To the best of my knowledge, the part of 'individual function devices being
>> physical devices' is correct though. MFDs typically expose different
>> functions on a single physical bus, and the functions are separated out by
>> register maps. In the case where there's no physical bus/device and no
>> register map it's unclear how MFDs would help?
> 
> MFD doesn't care.  All MFD is doing is instantiating platform devices
> and providing mechanisms to pass resources through from the parent
> device to the child devices.  It doesn't really matter to it which if
> any combination of resources are being provided to the children or what
> the devices represent.

I have nothing against MFD, but if this boils down to platform devices 
we are back to the initial open "are platform devices suitable as 
children of PCI devices"? I've heard Greg say no for the last year and a 
half - and he just re-stated this earlier in this thread.
Dave Ertman Oct. 1, 2020, 4:50 p.m. UTC | #22
> -----Original Message-----
> From: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
> Sent: Thursday, October 1, 2020 7:07 AM
> To: Mark Brown <broonie@kernel.org>; Ertman, David M
> <david.m.ertman@intel.com>
> Cc: alsa-devel@alsa-project.org; tiwai@suse.de;
> gregkh@linuxfoundation.org; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; parav@nvidia.com; jgg@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> 
> 
> >> are controlled by DT/ACPI. The same argument applies for not using MFD
> >> in this scenario as it relies on individual function devices being
> >> physical devices that are DT enumerated.
> >
> > MFD has no reliance on devices being DT enumerated, it works on systems
> > that don't have DT and in many cases it's not clear that the split you'd
> > want for the way Linux describes devices is a sensible one for other
> > operating systems so we don't want to put it into DT.  Forcing things to
> > be DT enumerated would just create needless ABIs.
> 
> I agree the "DT enumerated" part should be removed.
> 
> To the best of my knowledge, the part of 'individual function devices
> being physical devices' is correct though. MFDs typically expose
> different functions on a single physical bus, and the functions are
> separated out by register maps. In the case where there's no physical
> bus/device and no register map it's unclear how MFDs would help?

The MFD bus also uses parts of the platform bus in the background, including
platform_data and such.  We submitted a version of the RDMA/LAN solution
using MFD and it was NACK'd by GregKH.

-DaveE
Mark Brown Oct. 1, 2020, 4:51 p.m. UTC | #23
On Thu, Oct 01, 2020 at 11:20:39AM -0500, Pierre-Louis Bossart wrote:

> I have nothing against MFD, but if this boils down to platform devices we
> are back to the initial open "are platform devices suitable as children of
> PCI devices"? I've heard Greg say no for the last year and a half - and he
> just re-stated this earlier in this thread.

As you'll have seen from this thread and the prior version (which was
the first time I became aware of this stuff) I'm not clear how that
desire maps on to hardware, as soon as subdevices start needing to get
regions and interrupts mapped then we're going to end up reinventing
resources and regmaps AFAICT.
Mark Brown Oct. 1, 2020, 5:10 p.m. UTC | #24
On Thu, Oct 01, 2020 at 04:50:26PM +0000, Ertman, David M wrote:

> > >> are controlled by DT/ACPI. The same argument applies for not using MFD
> > >> in this scenario as it relies on individual function devices being
> > >> physical devices that are DT enumerated.

...

> The MFD bus also uses parts of the platform bus in the background, including
> platform_data and such.  We submitted a version of the RDMA/LAN solution
> using MFD and it was NACK'd by GregKH.

That does not mean that it's a good idea to write documentation that
says things that are not true.  It would be better to just not say
anything here rather than mislead people.
Dave Ertman Oct. 1, 2020, 5:13 p.m. UTC | #25
> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Thursday, October 1, 2020 9:11 AM
> To: Ertman, David M <david.m.ertman@intel.com>
> Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org; pierre-
> louis.bossart@linux.intel.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Thu, Oct 01, 2020 at 03:55:46PM +0000, Ertman, David M wrote:
> > I have also sent this patch set to netdev and linux-rdma mailing lists.
> 
> And not cc:ed us on it?  Wonderful, we will never see the review
> comments, and the developers there will not see ours :(
> 
> {sigh}
> 
> greg k-h

Yes, mea culpa, I did hose up this first version submission.  I am not used
to dealing with multiple mailing lists at the same time.

Version 2 will be submitted to all mailing lists and ccs in a single send.

I will learn from my mistake.

-DaveE
Dave Ertman Oct. 1, 2020, 5:16 p.m. UTC | #26
> -----Original Message-----
> From: Mark Brown <broonie@kernel.org>
> Sent: Thursday, October 1, 2020 10:10 AM
> To: Ertman, David M <david.m.ertman@intel.com>
> Cc: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>; alsa-
> devel@alsa-project.org; tiwai@suse.de; gregkh@linuxfoundation.org;
> Sridharan, Ranjani <ranjani.sridharan@intel.com>; parav@nvidia.com;
> jgg@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Thu, Oct 01, 2020 at 04:50:26PM +0000, Ertman, David M wrote:
> 
> > > >> are controlled by DT/ACPI. The same argument applies for not using MFD
> > > >> in this scenario as it relies on individual function devices being
> > > >> physical devices that are DT enumerated.
> 
> ...
> 
> > The MFD bus also uses parts of the platform bus in the background, including
> > platform_data and such.  We submitted a version of the RDMA/LAN solution
> > using MFD and it was NACK'd by GregKH.
> 
> That does not mean that it's a good idea to write documentation that
> says things that are not true.  It would be better to just not say
> anything here rather than mislead people.

I have removed the line about MFD devices requiring DT enumeration.

-DaveE
Dave Ertman Oct. 1, 2020, 5:52 p.m. UTC | #27
-DaveE

> -----Original Message-----
> From: Alsa-devel <alsa-devel-bounces@alsa-project.org> On Behalf Of Dave
> Ertman
> Sent: Wednesday, September 30, 2020 3:51 PM
> To: alsa-devel@alsa-project.org
> Cc: tiwai@suse.de; gregkh@linuxfoundation.org; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; pierre-louis.bossart@linux.intel.com;
> broonie@kernel.org; parav@nvidia.com; jgg@nvidia.com
> Subject: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> Brief history of Ancillary Bus
> ==============================
> The ancillary bus code was originally submitted upstream as virtual
> bus through the netdev tree.  That process produced four versions (up
> to v4).  The discussion can be found here:
>  https://lore.kernel.org/netdev/0200520070227.3392100-2-
> jeffrey.t.kirsher@intel.com/T/#u

Sorry, this is a borked link.  I will fix it with the actual link:

https://lore.kernel.org/netdev/20191111192219.30259-1-jeffrey.t.kirsher@intel.com/#t

this was the original submission of virtual bus in Nov 2019.

> <snip>
Jason Gunthorpe Oct. 1, 2020, 6:04 p.m. UTC | #28
On Thu, Oct 01, 2020 at 05:51:37PM +0100, Mark Brown wrote:
> On Thu, Oct 01, 2020 at 11:20:39AM -0500, Pierre-Louis Bossart wrote:
> 
> > I have nothing against MFD, but if this boils down to platform devices we
> > are back to the initial open "are platform devices suitable as children of
> > PCI devices"? I've heard Greg say no for the last year and a half - and he
> > just re-stated this earlier in this thread.
> 
> As you'll have seen from this thread and the prior version (which was
> the first time I became aware of this stuff) I'm not clear how that
> desire maps on to hardware, as soon as subdevices start needing to get
> regions and interrupts mapped then we're going to end up reinventing
> resources and regmaps AFAICT.

I think the truth is MFD and ancillary bus are solving almost the
same problem and could meet in the middle at some point.

Since Greg has completely NAK'd using pci_device inside MFD it looks
like this is the preference.

If people have a use case for regmaps/IRQs then they should add them
here. Right now the places I know of don't need this.

The target for this is queue-based devices where the sharing granule
is a queue. The actual thing being shared to the subsystem from the
PCI driver is basically the ability to create a queue.

Jason
Greg KH Oct. 1, 2020, 6:13 p.m. UTC | #29
On Thu, Oct 01, 2020 at 03:04:48PM -0300, Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 05:51:37PM +0100, Mark Brown wrote:
> > On Thu, Oct 01, 2020 at 11:20:39AM -0500, Pierre-Louis Bossart wrote:
> > 
> > > I have nothing against MFD, but if this boils down to platform devices we
> > > are back to the initial open "are platform devices suitable as children of
> > > PCI devices"? I've heard Greg say no for the last year and a half - and he
> > > just re-stated this earlier in this thread.
> > 
> > As you'll have seen from this thread and the prior version (which was
> > the first time I became aware of this stuff) I'm not clear how that
> > desire maps on to hardware, as soon as subdevices start needing to get
> > regions and interrupts mapped then we're going to end up reinventing
> > resources and regmaps AFAICT.
> 
> I think the truth is MFD and ancillary bus are solving almost the
> same problem and could meet in the middle at some point.

Agreed.

> Since Greg has completely NAK'd using pci_device inside MFD it looks
> like this is the preference.

I have NAK'd using platform devices as children for things that are not
platform devices.  MFD is the thing that is wed to platform devices at
the moment, so unless you want to change MFD to not enforce that (and
last I looked the MFD maintainer said no), then yes, we can't use MFD.

> If people have a use case for regmaps/IRQs then they should add them
> here. Right now the places I know of don't need this.
> 
> The target for this is queue-based devices where the sharing granule
> is a queue. The actual thing being shared to the subsystem from the
> PCI driver is basically the ability to create a queue.

And PCI resources, you will have drivers that want that.  But those are
easy to share :)

thanks,

greg k-h
Greg KH Oct. 1, 2020, 6:16 p.m. UTC | #30
On Thu, Oct 01, 2020 at 05:03:16PM +0100, Mark Brown wrote:
> On Thu, Oct 01, 2020 at 05:32:07PM +0200, Greg KH wrote:
> > On Thu, Oct 01, 2020 at 03:40:19PM +0100, Mark Brown wrote:
> 
> > > Right, so my concern is that as soon as we decide we want to pass some
> > > resources or platform data through to one of the subdevices it needs to
> > > move over into being a platform device and vice versa.  That feels like
> > > something that's going to add to the mess for some of the uses.
> 
> > There shouldn't be a need for resources or platform data to be passed
> > that way as they are all "owned" by the parent device that creates
> > these.
> 
> > I don't want to see platform devices become children of real devices
> > (like PCI and USB and others), which is the goal here.  platform devices
> > are overloaded and abused enough as it is, let's not make it worse.
> 
> How does this interact with the situation where someone makes a PCI
> device that's basically a bunch of IPs glued together in a PCI memory
> region (or similarly for other buses)?  The IPs all have distinct
> memory regions and other resources like interrupt lines which makes them
unsuitable for auxiliary devices as proposed, especially in cases where
> there's more than one copy of the IP instantiated.  There's a bunch of
> PCI MFDs in tree already of admittedly varying degrees of taste, and
> MFDs on other buses also use the resource passing stuff.

I would like to move those PCI MFDs away from that, and into this bus
instead.

If there needs to be a way to pass/share resources, great, let's add
it, there's no objection from me.

thanks,

greg k-h
Dave Ertman Oct. 1, 2020, 6:29 p.m. UTC | #31
> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Thursday, October 1, 2020 11:16 AM
> To: Mark Brown <broonie@kernel.org>
> Cc: Ertman, David M <david.m.ertman@intel.com>; alsa-devel@alsa-
> project.org; tiwai@suse.de; pierre-louis.bossart@linux.intel.com; Sridharan,
> Ranjani <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Thu, Oct 01, 2020 at 05:03:16PM +0100, Mark Brown wrote:
> > On Thu, Oct 01, 2020 at 05:32:07PM +0200, Greg KH wrote:
> > > On Thu, Oct 01, 2020 at 03:40:19PM +0100, Mark Brown wrote:
> >
> > > > Right, so my concern is that as soon as we decide we want to pass some
> > > > resources or platform data through to one of the subdevices it needs to
> > > > move over into being a platform device and vice versa.  That feels like
> > > > something that's going to add to the mess for some of the uses.
> >
> > > There shouldn't be a need for resources or platform data to be passed
> > > that way as they are all "owned" by the parent device that creates
> > > these.
> >
> > > I don't want to see platform devices become children of real devices
> > > (like PCI and USB and others), which is the goal here.  platform devices
> > > are overloaded and abused enough as it is, let's not make it worse.
> >
> > How does this interact with the situation where someone makes a PCI
> > device that's basically a bunch of IPs glued together in a PCI memory
> > region (or similarly for other buses)?  The IPs all have distinct
> > memory regions and other resources like interrupt lines which makes them
> > unsuitable for auxiliary devices as proposed, especially in cases where
> > there's more than one copy of the IP instantiated.  There's a bunch of
> > PCI MFDs in tree already of admittedly varying degrees of taste, and
> > MFDs on other buses also use the resource passing stuff.
> 
> I would like to move those PCI MFDs away from that, and into this bus
> instead.
> 
> If there needs to be a way to pass/share resources, great, let's add
> it, there's no objection from me.
> 
> thanks,
> 
> greg k-h

The sharing of information is done by having the parent driver declare
the ancillary_device as an element in a parent_struct that also contains a
pointer to the shared information.  This way, when the ancillary_driver
is probed, and a pointer to ancillary_device is passed, it can perform a
container_of on the ancillary_device and gain access to the shared data.

This keeps all requirements out of the ancillary bus code and it can be
as flexible as the implementer wants it to be.
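
A minimal sketch of that pattern, with the sof_* names being
hypothetical (the ancillary_device API and the probe signature taking
an ancillary_device_id are the ones proposed in this series):

 /* shared header agreed on by the parent driver and its client driver */
 struct sof_client_dev {
        struct ancillary_device ancildev;
        struct sof_ipc *ipc;            /* shared data owned by the parent */
 };

 /* the client driver's probe recovers the wrapper from the ancildev */
 static int sof_client_probe(struct ancillary_device *ancildev,
                             const struct ancillary_device_id *id)
 {
        struct sof_client_dev *cdev =
                container_of(ancildev, struct sof_client_dev, ancildev);

        /* cdev->ipc is now usable with no help from the bus core */
        return sof_ipc_test_start(cdev->ipc);   /* hypothetical helper */
 }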

-DaveE
Mark Brown Oct. 1, 2020, 7:23 p.m. UTC | #32
On Thu, Oct 01, 2020 at 03:04:48PM -0300, Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 05:51:37PM +0100, Mark Brown wrote:

> > As you'll have seen from this thread and the prior version (which was
> > the first time I became aware of this stuff) I'm not clear how that
> > desire maps on to hardware, as soon as subdevices start needing to get
> > regions and interrupts mapped then we're going to end up reinventing
> > resources and regmaps AFAICT.

> I think the truth is MFD and ancillary bus are solving almost the
> same problem and could meet in the middle at some point.

I do too, which is why I am pushing back.

> Since Greg has completely NAK'd using pci_device inside MFD it looks
> like this is the preference.

I know Greg has said he doesn't want this; I would like to, if not change
his mind, at least understand why, so we can understand what we are
supposed to be doing here and ideally capture it in the documentation.

> If people have a use case for regmaps/IRQs then they should add them
> here. Right now the places I know of don't need this.

If it is actually the case that we're not supposed to instantiate
platform devices as children of devices on actual physical buses then
there's a huge chunk of existing MFDs that are use cases, including some
PCI ones (though the PCI ones aren't great for the most part).  If it is
just PCI and USB devices then it becomes difficult to articulate the
underlying logic about why those bus types in particular.  I would
expect at least some devices instantiated on PCI attached FPGAs to also
need resources (it looks like at least one of the PCI MFDs that's been
in the tree for a long time, timberdale, is actually such a device
although not very dynamic like they could be).

I think the fundamental problem here is that this is all representing
hardware which doesn't fit neatly into abstractions and that trying to
separate out things based on abstract criteria (especially poorly
defined abstract criteria) is going to leave us with a pile of sharp
edges for people to run into, at least in terms of confusion and hassle.
The messiness that exists is, I fear, a reflection of reality.
Mark Brown Oct. 1, 2020, 7:38 p.m. UTC | #33
On Thu, Oct 01, 2020 at 06:29:58PM +0000, Ertman, David M wrote:

> > I would like to move those PCI MFDs away from that, and into this bus
> > instead.

> > If there needs to be a way to pass/share resources, great, let's add
> > it, there's no objection from me.

OK, if we can add resource passing then no huge problem from my end
since there'd be no case where you couldn't use an ancillary device.
It's still a bit confusing, but there are no cases where we're supposed to
use an ancillary device but it doesn't have the required features.

> The sharing of information is done by having the parent driver declare
> the ancillary_device as an element in a parent_struct that also contains a
> pointer to the shared information.  This way, when the ancillary_driver
> is probed, and a pointer to ancillary_device is passed, it can perform a
> container_of on the ancillary_device and gain access to the shared data.

> This keeps all requirements out of the ancillary bus code and it can be
> as flexible as the implementer wants it to be.

This means that every device has to reinvent the wheel for common
resources like interrupts, including things like looking them up by name
and so on.  That doesn't seem ideal.
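
For comparison, a sketch of the kind of wheel the platform bus already
provides through data-driven lookups (the platform_* helpers are the
existing API; the foo names are invented):

 static int foo_probe(struct platform_device *pdev)
 {
        void __iomem *regs;
        int irq;

        /* map the region the parent described in the resource table */
        regs = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(regs))
                return PTR_ERR(regs);

        /* look up an interrupt by name, no open coding in the child */
        irq = platform_get_irq_byname(pdev, "ctrl");
        if (irq < 0)
                return irq;

        return 0;
 }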
Dave Ertman Oct. 1, 2020, 7:54 p.m. UTC | #34
> -----Original Message-----
> From: Mark Brown <broonie@kernel.org>
> Sent: Thursday, October 1, 2020 12:39 PM
> To: Ertman, David M <david.m.ertman@intel.com>
> Cc: Greg KH <gregkh@linuxfoundation.org>; alsa-devel@alsa-project.org;
> tiwai@suse.de; pierre-louis.bossart@linux.intel.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Thu, Oct 01, 2020 at 06:29:58PM +0000, Ertman, David M wrote:
> 
> > > I would like to move those PCI MFDs away from that, and into this bus
> > > instead.
> 
> > > If there needs to be a way to pass/share resources, great, let's add
> > > it, there's no objection from me.
> 
> OK, if we can add resource passing then no huge problem from my end
> since there'd be no case where you couldn't use an ancillary device.
> It's still a bit confusing, but there are no cases where we're supposed to
> use an ancillary device but it doesn't have the required features.
> 
> > The sharing of information is done by having the parent driver declare
> > the ancillary_device as an element in a parent_struct that also contains a
> > pointer to the shared information.  This way, when the ancillary_driver
> > is probed, and a pointer to ancillary_device is passed, it can perform a
> > container_of on the ancillary_device and gain access to the shared data.
> 
> > This keeps all requirements out of the ancillary bus code and it can be
> > as flexible as the implementer wants it to be.
> 
> This means that every device has to reinvent the wheel for common
> resources like interrupts, including things like looking them up by name
> and so on.  That doesn't seem ideal.

It's a shared header file between the device and driver implementer.  If they
want to share a "struct irq_data *irq" as an element in  the shared data, that is
fine, but not everyone will need to include that.  This is why we left what is shared
up to the implementer, so that we don't impose a debt on some other
implementation that doesn't need it.
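
As a sketch, one implementer's shared header might carry an IRQ while
another's carries nothing extra (all foo names here are hypothetical):

 /* foo_client.h - owned by the implementer, not by the ancillary bus core */
 struct foo_ancil_dev {
        struct ancillary_device ancildev;
        struct irq_data *irq;           /* only added if this user needs it */
        void __iomem *regs;             /* likewise optional */
 };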

I suppose it is not the end of the world adding elements to the definition of
struct ancillary_device, but what would be a "universal" element to add?
Where do you draw the line on what you allow into the bus internals?  The
overriding goal was to make a subsystem agnostic bus that doesn't impose
implementation specific details from any single subsystem.

-DaveE
Mark Brown Oct. 1, 2020, 8:17 p.m. UTC | #35
On Thu, Oct 01, 2020 at 07:54:26PM +0000, Ertman, David M wrote:

> > This means that every device has to reinvent the wheel for common
> > resources like interrupts, including things like looking them up by name
> > and so on.  That doesn't seem ideal.

> It's a shared header file between the device and driver implementer.  If they
> want to share a "struct irq_data *irq" as an element in  the shared data, that is
> fine, but not everyone will need to include that.  This is why we left what is shared
> up to the implementer, so that we don't impose a debt on some other
> implementation that doesn't need it.

Realistically I think you're saying this because this has taken so long
already and you're being asked for more changes rather than because it's
a good design decision.  That is entirely understandable and reasonable
but I'm not sure it's the best decision when we have so many common
patterns between devices, one of the things that the current situation
handles well is allowing lots of common stuff to just be data
definitions rather than code - there used to be a lot more open coding
of resource sharing.

One thing we should agree here is that we don't actually have to
implement everything right now, we just need to understand what the
design and direction of travel are.  That means that at this point it's
probably just a fairly quick documentation update rather than
substantial code changes.

> I suppose it is not the end of the world adding elements to the definition of
> struct ancillary_device, but what would be a "universal" element to add?
> Where do you draw the line on what you allow into the bus internals?  The
> overriding goal was to make a subsystem agnostic bus that doesn't impose
> implementation specific details from any single subsystem.

I think that this needs to grow everything that platform and MFD have so
that we can avoid using platform devices to represent things that are
not in the very strictest sense platform devices (which I interpret to
be memory mapped devices that can't be enumerated in some fashion).  My
understanding here is that the goal is an abstraction cleanup, it's
possible I've misunderstood though.
Jason Gunthorpe Oct. 2, 2020, 12:47 a.m. UTC | #36
On Thu, Oct 01, 2020 at 09:17:18PM +0100, Mark Brown wrote:
> > I suppose it is not the end of the world adding elements to the definition of
> > struct ancillary_device, but what would be a "universal" element to add?
> > Where do you draw the line on what you allow into the bus internals?  The
> > overriding goal was to make a subsystem agnostic bus that doesn't impose
> > implementation specific details from any single subsystem.
> 
> I think that this needs to grow everything that platform and MFD have so
> that we can avoid using platform devices to represent things that are
> not in the very strictest sense platform devices (which I interpret to
> be memory mapped devices that can't be enumerated in some fashion).  My
> understanding here is that the goal is an abstraction cleanup, it's
> possible I've misunderstood though.

Instead of making ancillary bus giant, it might be interesting to use
a library technique to avoid the code duplication you are talking
about, eg

 struct my_ancillary_data {
      struct ancillary_device adev;
      struct ancillary_resource resources;
 };

Then each user can access the facets they need.

Not sure what else is in platform_device or mfd_cell that is
relevant:
 - There is no DMA, these devices are not for DMA. The physical device
   pointer should be used with the DMA API.
 - resources as above
 - id/override/archdata not relevant

From mfd_cell
 - enable/disable suspend/resume, these look like they duplicate the
   driver core stuff?
 - pdata/of_compatible/acpi_match - not applicable
 - properties - use struct members and container_of
 - resources as above
 - regulators - another struct member? Only two users.

Jason
Mark Brown Oct. 2, 2020, 11:19 a.m. UTC | #37
On Thu, Oct 01, 2020 at 09:47:40PM -0300, Jason Gunthorpe wrote:
> On Thu, Oct 01, 2020 at 09:17:18PM +0100, Mark Brown wrote:

> Instead of making ancillary bus giant, it might be interesting to use
> a library technique to avoid the code duplication you are talking
> about, eg

>  struct my_ancillary_data {
>       struct ancillary_device adev;
>       struct ancillary_resource resources;
>  };

> Then each user can access the facets they need.

The way most of this stuff works already, it's not a table in the device
itself, it's an array of resources with type information.  Your second
struct there is presumably just going to be the pointer and size
information which you *could* split out but I'm really not sure what
you're buying there.

The interesting bit isn't really the data in the devices themselves,
it's the code that allows devices to just supply a data table and have
things mapped through to their child devices.  That isn't particularly
complex either, but it's still worth avoiding having lots of copies of
it.  *None* of this bus stuff is hugely complex but we still don't want
each device with children having to open code it all when they could
just do something data driven.
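
To make the comparison concrete, the data-driven MFD pattern referred
to here looks roughly like this (mfd_cell, mfd_add_devices() and the
resource macros are the existing API; the foo names and addresses are
invented, and these are fragments, not a complete driver):

 static const struct resource foo_uart_resources[] = {
        DEFINE_RES_MEM(0x1000, 0x100),  /* offset into the parent's region */
        DEFINE_RES_IRQ(0),              /* relative to the parent's irq_base */
 };

 static const struct mfd_cell foo_cells[] = {
        {
                .name          = "foo-uart",
                .resources     = foo_uart_resources,
                .num_resources = ARRAY_SIZE(foo_uart_resources),
        },
 };

 /* in the parent's probe: the MFD core translates each child's
  * resources against mem_base/irq_base, nothing is open coded */
 ret = mfd_add_devices(parent, 0, foo_cells, ARRAY_SIZE(foo_cells),
                       mem_base, irq_base, NULL);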
Dave Ertman Oct. 2, 2020, 5:23 p.m. UTC | #38
> -----Original Message-----
> From: Mark Brown <broonie@kernel.org>
> Sent: Friday, October 2, 2020 4:20 AM
> To: Jason Gunthorpe <jgg@nvidia.com>
> Cc: Ertman, David M <david.m.ertman@intel.com>; Greg KH
> <gregkh@linuxfoundation.org>; alsa-devel@alsa-project.org;
> tiwai@suse.de; pierre-louis.bossart@linux.intel.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Thu, Oct 01, 2020 at 09:47:40PM -0300, Jason Gunthorpe wrote:
> > On Thu, Oct 01, 2020 at 09:17:18PM +0100, Mark Brown wrote:
> 
> > Instead of making ancillary bus giant, it might be interesting to use
> > a library technique to avoid the code duplication you are talking
> > about, eg
> 
> >  struct my_ancillary_data {
> >       struct ancillary_device adev;
> >       struct ancillary_resource resources;
> >  };
> 
> > Then each user can access the facets they need.
> 
> The way most of this stuff works already, it's not a table in the device
> itself, it's an array of resources with type information.  Your second
> struct there is presumably just going to be the pointer and size
> information which you *could* split out but I'm really not sure what
> you're buying there.
> 
> The interesting bit isn't really the data in the devices themselves,
> it's the code that allows devices to just supply a data table and have
> things mapped through to their child devices.  That isn't particularly
> complex either, but it's still worth avoiding having lots of copies of
> it.  *None* of this bus stuff is hugely complex but we still don't want
> each device with children having to open code it all when they could
> just do something data driven.

Would you recommend adding two elements to the ancillary_device like:
	void *ancillary_data;
	u32 ancildata_size;
like the platform_device uses?

-DaveE
Jason Gunthorpe Oct. 2, 2020, 5:25 p.m. UTC | #39
On Fri, Oct 02, 2020 at 05:23:53PM +0000, Ertman, David M wrote:
> > From: Mark Brown <broonie@kernel.org>
> > Sent: Friday, October 2, 2020 4:20 AM
> > To: Jason Gunthorpe <jgg@nvidia.com>
> > Cc: Ertman, David M <david.m.ertman@intel.com>; Greg KH
> > <gregkh@linuxfoundation.org>; alsa-devel@alsa-project.org;
> > tiwai@suse.de; pierre-louis.bossart@linux.intel.com; Sridharan, Ranjani
> > <ranjani.sridharan@intel.com>; parav@nvidia.com
> > Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> > support
> > 
> > On Thu, Oct 01, 2020 at 09:47:40PM -0300, Jason Gunthorpe wrote:
> > > On Thu, Oct 01, 2020 at 09:17:18PM +0100, Mark Brown wrote:
> > 
> > > Instead of making ancillary bus giant, it might be interesting to use
> > > a library technique to avoid the code duplication you are talking
> > > about, eg
> > 
> > >  struct my_ancillary_data {
> > >       struct ancillary_device adev;
> > >       struct ancillary_resource resources;
> > >  };
> > 
> > > Then each user can access the facets they need.
> > 
> > The way most of this stuff works already, it's not a table in the device
> > itself, it's an array of resources with type information.  Your second
> > struct there is presumably just going to be the pointer and size
> > information which you *could* split out but I'm really not sure what
> > you're buying there.
> > 
> > The interesting bit isn't really the data in the devices themselves,
> > it's the code that allows devices to just supply a data table and have
> > things mapped through to their child devices.  That isn't particularly
> > complex either, but it's still worth avoiding having lots of copies of
> > it.  *None* of this bus stuff is hugely complex but we still don't want
> > each device with children having to open code it all when they could
> > just do something data driven.
> 
> Would you recommend adding two elements to the ancillary_device like:
> 	void *ancillary_data;
> 	u32 ancildata_size;
> like the platform_device uses?

That doesn't seem useful here; the intent is to use container_of. If
the creator wants to pass private data, then it should be structured
into the containing struct.

platform_devices/etc don't use container_of, so they have this side-band
way to pass more data.

Jason
Mark Brown Oct. 2, 2020, 5:44 p.m. UTC | #40
On Fri, Oct 02, 2020 at 02:25:58PM -0300, Jason Gunthorpe wrote:
> On Fri, Oct 02, 2020 at 05:23:53PM +0000, Ertman, David M wrote:

> > Would you recommend adding two elements to the ancillary_device like:
> > 	void *ancillary_data;
> > 	u32 ancildata_size;
> > like the platform_device uses?

> That doesn't seem useful here; the intent is to use container_of. If
> the creator wants to pass private data, then it should be structured
> into the containing struct.

> platform_devices/etc don't use container_of, so they have this side-band
> way to pass more data.

The other thing platform_data lets you do is keep constant data separate
from dynamic data - if you have to use container_of() then you need to
either allocate a pointer to any constant data in the container or copy
it into the container.  Since the platform_data is in struct device it's
going to be there anyway and may as well get used.
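
A sketch of that mechanism (platform_device_add_data() and
dev_get_platdata() are the existing API; struct foo_pdata and the foo
names are hypothetical, and these are fragments, not a complete driver):

 struct foo_pdata {
        unsigned int rate;
 };

 /* parent side: the core copies the data, so it can stay const/rodata */
 static const struct foo_pdata foo_defaults = { .rate = 48000 };

 pdev = platform_device_alloc("foo-codec", 0);
 platform_device_add_data(pdev, &foo_defaults, sizeof(foo_defaults));
 platform_device_add(pdev);

 /* child side: no container_of and no copy needed in the driver */
 static int foo_codec_probe(struct platform_device *pdev)
 {
        const struct foo_pdata *pd = dev_get_platdata(&pdev->dev);

        return foo_codec_init(pd->rate);        /* hypothetical */
 }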
Dave Ertman Oct. 2, 2020, 8:23 p.m. UTC | #41
> -----Original Message-----
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Thursday, October 1, 2020 12:14 AM
> To: Ertman, David M <david.m.ertman@intel.com>
> Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org; pierre-
> louis.bossart@linux.intel.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> > The ancillary bus (then known as virtual bus) was originally submitted
> > along with implementation code for the ice driver and irdma drive,
> > causing the complication of also having dependencies in the rdma tree.
> > This new submission is utilizing an ancillary bus consumer in only the
> > sound driver tree to create the initial implementation and a single
> > user.
> 
> So this will not work for the ice driver and/or irdma drivers?  It would
> be great to see how they work for this as well as getting those
> maintainers to review and sign off on this implementation as well.
> Don't ignore those developers, that's a bit "odd", don't you think?
> 
> To drop them from the review process is actually kind of rude, what
> happens if this gets merged without their input?
> 
> And the name, why was it changed and what does it mean?  For non-native
> english speakers this is going to be rough, given that I as a native
> english speaker had to go look up the word in a dictionary to fully
> understand what you are trying to do with that name.

Through our internal review process, objections were raised on naming the
new bus virtual bus. The main objection was that virtual bus was too close to virtio,
virtchnl, etc., that /sys/bus/virtual would be confused with /sys/bus/virtio, and
there is just a lot of 'virt' stuff in the kernel already.

Several names were suggested (like peer bus, which was shot down because in
parts of the English speaking world the peerage means nobility), finally
"ancillary bus" was arrived at by consensus of not hating it.

adjective -
providing necessary support to the primary activities or operation of an organization,
institution, industry, or system.

Changing from ancillary would be a small pain, but do-able if ancillary is really
objectionable.  Do you have any suggestions on a name that would be more
tolerable?

I would like to finalize the name issue before going further, though.
Greg KH Oct. 3, 2020, 9:08 a.m. UTC | #42
On Fri, Oct 02, 2020 at 08:23:49PM +0000, Ertman, David M wrote:
> > -----Original Message-----
> > From: Greg KH <gregkh@linuxfoundation.org>
> > Sent: Thursday, October 1, 2020 12:14 AM
> > To: Ertman, David M <david.m.ertman@intel.com>
> > Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org; pierre-
> > louis.bossart@linux.intel.com; Sridharan, Ranjani
> > <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> > Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> > support
> > 
> > On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> > > The ancillary bus (then known as virtual bus) was originally submitted
> > > along with implementation code for the ice driver and irdma drive,
> > > causing the complication of also having dependencies in the rdma tree.
> > > This new submission is utilizing an ancillary bus consumer in only the
> > > sound driver tree to create the initial implementation and a single
> > > user.
> > 
> > So this will not work for the ice driver and/or irdma drivers?  It would
> > be great to see how they work for this as well as getting those
> > maintainers to review and sign off on this implementation as well.
> > Don't ignore those developers, that's a bit "odd", don't you think?
> > 
> > To drop them from the review process is actually kind of rude, what
> > happens if this gets merged without their input?
> > 
> > And the name, why was it changed and what does it mean?  For non-native
> > english speakers this is going to be rough, given that I as a native
> > english speaker had to go look up the word in a dictionary to fully
> > understand what you are trying to do with that name.
> 
> Through our internal review process, objections were raised on naming the
> new bus virtual bus. The main objection was that virtual bus was too close to virtio,
> virtchnl, etc., that /sys/bus/virtual would be confused with /sys/bus/virtio, and
> there is just a lot of 'virt' stuff in the kernel already.

We already have a virtual bus/location in the driver model today, has
that confused anyone?  I see this as an extension of that logic and
ideally, those users will be moved over to this interface over time as
well.

> Several names were suggested (like peer bus, which was shot down because in
> parts of the English speaking world the peerage means nobility), finally
> "ancillary bus" was arrived at by consensus of not hating it.

"not hating it", while sometimes is a good thing, for something that I
am going to have to tell everyone to go use, I would like to at least
"like it".  And right now I don't like it...

I think we should go back to "virtual" for now, or, if the people who
didn't like it on your "internal" reviews wish to participate here and
defend their choice, I would be glad to listen to that reasoning.

thanks,

greg k-h
Greg KH Oct. 3, 2020, 9:09 a.m. UTC | #43
On Sat, Oct 03, 2020 at 11:08:55AM +0200, Greg KH wrote:
> On Fri, Oct 02, 2020 at 08:23:49PM +0000, Ertman, David M wrote:
> > > -----Original Message-----
> > > From: Greg KH <gregkh@linuxfoundation.org>
> > > Sent: Thursday, October 1, 2020 12:14 AM
> > > To: Ertman, David M <david.m.ertman@intel.com>
> > > Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org; pierre-
> > > louis.bossart@linux.intel.com; Sridharan, Ranjani
> > > <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> > > Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> > > support
> > > 
> > > On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> > > > The ancillary bus (then known as virtual bus) was originally submitted
> > > > along with implementation code for the ice driver and irdma drive,
> > > > causing the complication of also having dependencies in the rdma tree.
> > > > This new submission is utilizing an ancillary bus consumer in only the
> > > > sound driver tree to create the initial implementation and a single
> > > > user.
> > > 
> > > So this will not work for the ice driver and/or irdma drivers?  It would
> > > be great to see how they work for this as well as getting those
> > > maintainers to review and sign off on this implementation as well.
> > > Don't ignore those developers, that's a bit "odd", don't you think?
> > > 
> > > To drop them from the review process is actually kind of rude, what
> > > happens if this gets merged without their input?
> > > 
> > > And the name, why was it changed and what does it mean?  For non-native
> > > english speakers this is going to be rough, given that I as a native
> > > english speaker had to go look up the word in a dictionary to fully
> > > understand what you are trying to do with that name.
> > 
> > Through our internal review process, objections were raised on naming the
> > new bus virtual bus. The main objection was that virtual bus was too close to virtio,
> > virtchnl, etc., that /sys/bus/virtual would be confused with /sys/bus/virtio, and
> > there is just a lot of 'virt' stuff in the kernel already.
> 
> We already have a virtual bus/location in the driver model today, has
> that confused anyone?  I see this as an extension of that logic and
> ideally, those users will be moved over to this interface over time as
> well.
> 
> > Several names were suggested (like peer bus, which was shot down because in
> > parts of the English speaking world the peerage means nobility), finally
> > "ancillary bus" was arrived at by consensus of not hating it.
> 
> "not hating it", while sometimes is a good thing, for something that I
> am going to have to tell everyone to go use, I would like to at least
> "like it".  And right now I don't like it...
> 
> I think we should go back to "virtual" for now, or, if the people who
> didn't like it on your "internal" reviews wish to participate here and
> defend their choice, I would be glad to listen to that reasoning.

Also, the fact that we are even talking about a core kernel function on
only the alsa-devel list is pretty horrible.  I'm just going to drop
this whole thread and wait until you can submit it properly before
responding anymore on it.

greg k-h
Parav Pandit Oct. 4, 2020, 2:26 a.m. UTC | #44
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Saturday, October 3, 2020 2:39 PM
> 
> On Fri, Oct 02, 2020 at 08:23:49PM +0000, Ertman, David M wrote:
> > > -----Original Message-----
> > > From: Greg KH <gregkh@linuxfoundation.org>
> > > Sent: Thursday, October 1, 2020 12:14 AM
> > > To: Ertman, David M <david.m.ertman@intel.com>
> > > Cc: alsa-devel@alsa-project.org; tiwai@suse.de; broonie@kernel.org;
> > > pierre- louis.bossart@linux.intel.com; Sridharan, Ranjani
> > > <ranjani.sridharan@intel.com>; jgg@nvidia.com; parav@nvidia.com
> > > Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF
> > > multi-client support
> > >
> > > On Wed, Sep 30, 2020 at 03:50:45PM -0700, Dave Ertman wrote:
> > > > The ancillary bus (then known as virtual bus) was originally
> > > > submitted along with implementation code for the ice driver and
> > > > irdma drive, causing the complication of also having dependencies in the
> > > > rdma tree.
> > > > This new submission is utilizing an ancillary bus consumer in only
> > > > the sound driver tree to create the initial implementation and a
> > > > single user.
> > >
> > > So this will not work for the ice driver and/or irdma drivers?  It
> > > would be great to see how they work for this as well as getting
> > > those maintainers to review and sign off on this implementation as well.
> > > Don't ignore those developers, that's a bit "odd", don't you think?
> > >
> > > To drop them from the review process is actually kind of rude, what
> > > happens if this gets merged without their input?
> > >
> > > And the name, why was it changed and what does it mean?  For
> > > non-native english speakers this is going to be rough, given that I
> > > as a native english speaker had to go look up the word in a
> > > dictionary to fully understand what you are trying to do with that name.
> >
> > Through our internal review process, objections were raised on naming
> > the new bus virtual bus. The main objection was that virtual bus was
> > too close to virtio, virtchnl, etc., that /sys/bus/virtual would be
> > confused with /sys/bus/virtio, and there is just a lot of 'virt' stuff in the kernel
> > already.
> 
> We already have a virtual bus/location in the driver model today, has that
> confused anyone?  I see this as an extension of that logic and ideally, those
> users will be moved over to this interface over time as well.
> 
> > Several names were suggested (like peer bus, which was shot down
> > because in parts of the English speaking world the peerage means
> > nobility), finally "ancillary bus" was arrived at by consensus of not hating it.
> 
> "not hating it", while sometimes is a good thing, for something that I am going
> to have to tell everyone to go use, I would like to at least "like it".  And right now
> I don't like it...
> 
> I think we should go back to "virtual" for now, or, if the people who didn't like it
> on your "internal" reviews wish to participate here and defend their choice, I
> would be glad to listen to that reasoning.
> 
Like Greg and Leon, I was no exception in having to look up the dictionary to understand the meaning on my first review.
But I don't have a strong opinion.

Since the intended use of the bus is to create subdevices, either for service-matching purposes or for actual subdevice usage, how about naming the bus 'subdev_bus'?
Dan Williams Oct. 4, 2020, 11:45 p.m. UTC | #45
[ Ugh, as others have lamented, I was not copied on this thread so I
could not respond in real time. Let me be another to say, please copy
all impacted lists and stakeholders on patches. ]

On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
[..]
> 
> > Several names were suggested (like peer bus, which was shot down
> > because in
> > parts of the English speaking world the peerage means nobility),
> > finally
> > "ancillary bus" was arrived at by consensus of not hating it.
> 
> "not hating it", while sometimes is a good thing, for something that
> I
> am going to have to tell everyone to go use, I would like to at least
> "like it".  And right now I don't like it...
> 
> I think we should go back to "virtual" for now, or, if the people who
> didn't like it on your "internal" reviews wish to participate here
> and
> defend their choice, I would be glad to listen to that reasoning.

I came out strongly against "virtual" because there is nothing virtual
about these devices, they are functional partitions of the parent
device. Also, /sys/devices/virtual is already the land of unparented  /
software-defined devices. Having /sys/devices/virtual alongside that is
 not quite a namespace collision, but it's certainly namespace
confusion in my view.

I proposed other names, the team came back with "ancillary" which was
not my first choice, but perfectly suitable. In deference to the people
doing the work I let their choice stand.

It is an uncomfortable position being a middle tier reviewer of pre-
release patch sets when the patch set can still be de-railed by
preference nits. A lack of deference makes it a difficult job to
convince people "hey my internal review will save you some time
upstream", when in practice they can see the opposite is true.
Dave Ertman Oct. 5, 2020, 1:18 a.m. UTC | #46
> -----Original Message-----
> From: Williams, Dan J <dan.j.williams@intel.com>
> Sent: Sunday, October 4, 2020 4:46 PM
> To: Ertman, David M <david.m.ertman@intel.com>;
> gregkh@linuxfoundation.org
> Cc: pierre-louis.bossart@linux.intel.com; parav@nvidia.com;
> broonie@kernel.org; jgg@nvidia.com; Sridharan, Ranjani
> <ranjani.sridharan@intel.com>; alsa-devel@alsa-project.org; tiwai@suse.de
> Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF multi-client
> support
> 
> [ Ugh, as others have lamented, I was not copied on this thread so I
> could not respond in real time. Let me be another to say, please copy
> all impacted lists and stakeholders on patches. ]
> 
> On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
> [..]
> >
> > > Several names were suggested (like peer bus, which was shot down
> > > because in
> > > parts of the English speaking world the peerage means nobility),
> > > finally
> > > "ancillary bus" was arrived at by consensus of not hating it.
> >
> > "not hating it", while sometimes is a good thing, for something that
> > I
> > am going to have to tell everyone to go use, I would like to at least
> > "like it".  And right now I don't like it...
> >
> > I think we should go back to "virtual" for now, or, if the people who
> > didn't like it on your "internal" reviews wish to participate here
> > and
> > defend their choice, I would be glad to listen to that reasoning.
> 
> I came out strongly against "virtual" because there is nothing virtual
> about these devices, they are functional partitions of the parent
> device. Also, /sys/devices/virtual is already the land of unparented  /
> software-defined devices. Having /sys/devices/virtual alongside that is
>  not quite a namespace collision, but it's certainly namespace
> confusion in my view.
> 
> I proposed other names, the team came back with "ancillary" which was
> not my first choice, but perfectly suitable. In deference to the people
> doing the work I let their choice stand.
> 
> It is an uncomfortable position being a middle tier reviewer of pre-
> release patch sets when the patch set can still be de-railed by
> preference nits. A lack of deference makes it a difficult job to
> convince people "hey my internal review will save you some time
> upstream", when in practice they can see the opposite is true.

I have to admit that I am biased towards the virtual bus (or virtbus) name 
as that is what I was using during the original implementation of this code.

That being said, I can also understand Dan's objection to the name.

At first, I didn't object to Parav's suggestion of subdev bus, but the more I
think of it, I don't want to have to describe to someone how to use a
subdev device's sub device element (yikes, what a mouthful!).

At this point, I just want a name that is easy to understand and doesn't
generate a lot of confusion or tongue twisting alliteration.

Can we call it the super_useful bus? (j/k)
Parav Pandit Oct. 5, 2020, 2:39 a.m. UTC | #47
> From: Ertman, David M <david.m.ertman@intel.com>
> Sent: Monday, October 5, 2020 6:49 AM
> 
> > -----Original Message-----
> > From: Williams, Dan J <dan.j.williams@intel.com>
> > Sent: Sunday, October 4, 2020 4:46 PM
> > To: Ertman, David M <david.m.ertman@intel.com>;
> > gregkh@linuxfoundation.org
> > Cc: pierre-louis.bossart@linux.intel.com; parav@nvidia.com;
> > broonie@kernel.org; jgg@nvidia.com; Sridharan, Ranjani
> > <ranjani.sridharan@intel.com>; alsa-devel@alsa-project.org;
> > tiwai@suse.de
> > Subject: Re: [PATCH 0/6] Ancillary bus implementation and SOF
> > multi-client support
> >
> > [ Ugh, as others have lamented, I was not copied on this thread so I
> > could not respond in real time. Let me be another to say, please copy
> > all impacted lists and stakeholders on patches. ]
> >
> > On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
> > [..]
> > >
> > > > Several names were suggested (like peer bus, which was shot down
> > > > because in parts of the English speaking world the peerage means
> > > > nobility), finally "ancillary bus" was arrived at by consensus of
> > > > not hating it.
> > >
> > > "not hating it", while sometimes is a good thing, for something that
> > > I am going to have to tell everyone to go use, I would like to at
> > > least "like it".  And right now I don't like it...
> > >
> > > I think we should go back to "virtual" for now, or, if the people
> > > who didn't like it on your "internal" reviews wish to participate
> > > here and defend their choice, I would be glad to listen to that
> > > reasoning.
> >
> > I came out strongly against "virtual" because there is nothing virtual
> > about these devices, they are functional partitions of the parent
> > device. Also, /sys/devices/virtual is already the land of unparented
> > / software-defined devices. Having /sys/devices/virtual alongside that
> > is  not quite a namespace collision, but it's certainly namespace
> > confusion in my view.
> >
> > I proposed other names, the team came back with "ancillary" which was
> > not my first choice, but perfectly suitable. In deference to the
> > people doing the work I let their choice stand.
> >
> > It is an uncomfortable position being a middle tier reviewer of pre-
> > release patch sets when the patch set can still be de-railed by
> > preference nits. A lack of deference makes it a difficult job to
> > convince people "hey my internal review will save you some time
> > upstream", when in practice they can see the opposite is true.
> 
> I have to admit that I am biased towards the virtual bus (or virtbus) name as
> that is what I was using during the original implementation of this code.
> 
> That being said, I can also understand Dan's objection to the name.
> 
> At first, I didn't object to Parav's suggestion of subdev bus, but the more I
> think of it, I don't want to have to describe to someone how to use a subdev
> device's sub device element (yikes, what a mouthful!).
We have fair documentation that describes what an ancillary device is in the Documentation file of this patch.
Do you plan to remove that after renaming the bus, in the spirit of your comment above about not wanting to describe it?

As Dan clearly described in a previous email what ancillary devices are -> "they are functional partitions of the parent device",
how about subfunction_bus or partdev_bus?

So I have 3 simpler names that describe its multiple use cases.
(a) subdev_bus
(b) subfunction_bus
(c) partdev_bus
Greg KH Oct. 5, 2020, 11:25 a.m. UTC | #48
On Sun, Oct 04, 2020 at 11:45:41PM +0000, Williams, Dan J wrote:
> [ Ugh, as others have lamented, I was not copied on this thread so I
> could not respond in real time. Let me be another to say, please copy
> all impacted lists and stakeholders on patches. ]
> 
> On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
> [..]
> > 
> > > Several names were suggested (like peer bus, which was shot down
> > > because in
> > > parts of the English speaking world the peerage means nobility),
> > > finally
> > > "ancillary bus" was arrived at by consensus of not hating it.
> > 
> > "not hating it", while sometimes is a good thing, for something that
> > I
> > am going to have to tell everyone to go use, I would like to at least
> > "like it".  And right now I don't like it...
> > 
> > I think we should go back to "virtual" for now, or, if the people who
> > didn't like it on your "internal" reviews wish to participate here
> > and
> > defend their choice, I would be glad to listen to that reasoning.
> 
> I came out strongly against "virtual" because there is nothing virtual
> about these devices, they are functional partitions of the parent
> device. Also, /sys/devices/virtual is already the land of unparented  /
> > software-defined devices. Having /sys/bus/virtual alongside that is
>  not quite a namespace collision, but it's certainly namespace
> confusion in my view.
> 
> I proposed other names, the team came back with "ancillary" which was
> not my first choice, but perfectly suitable. In deference to the people
> doing the work I let their choice stand.
> 
> It is an uncomfortable position being a middle tier reviewer of pre-
> release patch sets when the patch set can still be de-railed by
> preference nits. A lack of deference makes it a difficult job to
> convince people "hey my internal review will save you some time
> upstream", when in practice they can see the opposite is true.

I will publicly state that without those middle-tier reviews, this
would have been a worse submission, which is why I am _REQUIRING_ them
from Intel at the moment.

So your review did not happen in vain, and if developers complain about
this, send them to me please.

As for the name, why is everyone ignoring the fact that we have had
/sys/devices/virtual/ for decades now, with no one being confused about
that name usage?  I see this as an extension of that existing code
pattern, is everyone missing that?

thanks,

greg k-h
Dan Williams Oct. 6, 2020, 10:40 p.m. UTC | #49
On Mon, Oct 5, 2020 at 4:25 AM gregkh@linuxfoundation.org
<gregkh@linuxfoundation.org> wrote:
>
> On Sun, Oct 04, 2020 at 11:45:41PM +0000, Williams, Dan J wrote:
> > [ Ugh, as others have lamented, I was not copied on this thread so I
> > could not respond in real time. Let me be another to say, please copy
> > all impacted lists and stakeholders on patches. ]
> >
> > On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
> > [..]
> > >
> > > > Several names were suggested (like peer bus, which was shot down
> > > > because in
> > > > parts of the English speaking world the peerage means nobility),
> > > > finally
> > > > "ancillary bus" was arrived at by consensus of not hating it.
> > >
> > > "not hating it", while sometimes is a good thing, for something that
> > > I
> > > am going to have to tell everyone to go use, I would like to at least
> > > "like it".  And right now I don't like it...
> > >
> > > I think we should go back to "virtual" for now, or, if the people who
> > > didn't like it on your "internal" reviews wish to participate here
> > > and
> > > defend their choice, I would be glad to listen to that reasoning.
> >
> > I came out strongly against "virtual" because there is nothing virtual
> > about these devices, they are functional partitions of the parent
> > device. Also, /sys/devices/virtual is already the land of unparented  /
> > > software-defined devices. Having /sys/bus/virtual alongside that is
> >  not quite a namespace collision, but it's certainly namespace
> > confusion in my view.
> >
> > I proposed other names, the team came back with "ancillary" which was
> > not my first choice, but perfectly suitable. In deference to the people
> > doing the work I let their choice stand.
> >
> > It is an uncomfortable position being a middle tier reviewer of pre-
> > release patch sets when the patch set can still be de-railed by
> > preference nits. A lack of deference makes it a difficult job to
> > convince people "hey my internal review will save you some time
> > upstream", when in practice they can see the opposite is true.
>
> I will publicly state that without those middle-tier reviews, this
> would have been a worse submission, which is why I am _REQUIRING_ them
> from Intel at the moment.
>
> So your review did not happen in vain, and if developers complain about
> this, send them to me please.
>
> As for the name, why is everyone ignoring the fact that we have had
> /sys/devices/virtual/ for decades now, with no one being confused about
> that name usage?  I see this as an extension of that existing code
> pattern, is everyone missing that?

Oh, I was specifically reacting to /sys/devices/virtual being a place
where un-parented devices land. Things like raid composite devices and
other devices that just do not fit cleanly in the parent device
hierarchy, each of them has independent lifetime rules, some do not
attach to drivers they just exist in an "unbound" state when active...
it's an anything goes catch all space. This bus is the opposite, all
devices have a maximum lifetime tied to their parent device bind
lifetime, and they are not active without drivers. Placing them under
/sys/bus/virtual/devices confuses what the "virtual" term means to
sysfs.

"ancillary" devices and drivers has meaning to me, in a way that
"virtual" devices and drivers does not. If "ancillary" does not hit
the right tone what about "aux" for "auxiliary"
(/sys/bus/aux/devices)?
Greg KH Oct. 7, 2020, 9:14 a.m. UTC | #50
On Tue, Oct 06, 2020 at 03:40:58PM -0700, Dan Williams wrote:
> On Mon, Oct 5, 2020 at 4:25 AM gregkh@linuxfoundation.org
> <gregkh@linuxfoundation.org> wrote:
> >
> > On Sun, Oct 04, 2020 at 11:45:41PM +0000, Williams, Dan J wrote:
> > > [ Ugh, as others have lamented, I was not copied on this thread so I
> > > could not respond in real time. Let me be another to say, please copy
> > > all impacted lists and stakeholders on patches. ]
> > >
> > > On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
> > > [..]
> > > >
> > > > > Several names were suggested (like peer bus, which was shot down
> > > > > because in
> > > > > parts of the English speaking world the peerage means nobility),
> > > > > finally
> > > > > "ancillary bus" was arrived at by consensus of not hating it.
> > > >
> > > > "not hating it", while sometimes is a good thing, for something that
> > > > I
> > > > am going to have to tell everyone to go use, I would like to at least
> > > > "like it".  And right now I don't like it...
> > > >
> > > > I think we should go back to "virtual" for now, or, if the people who
> > > > didn't like it on your "internal" reviews wish to participate here
> > > > and
> > > > defend their choice, I would be glad to listen to that reasoning.
> > >
> > > I came out strongly against "virtual" because there is nothing virtual
> > > about these devices, they are functional partitions of the parent
> > > device. Also, /sys/devices/virtual is already the land of unparented  /
> > > > software-defined devices. Having /sys/bus/virtual alongside that is
> > >  not quite a namespace collision, but it's certainly namespace
> > > confusion in my view.
> > >
> > > I proposed other names, the team came back with "ancillary" which was
> > > not my first choice, but perfectly suitable. In deference to the people
> > > doing the work I let their choice stand.
> > >
> > > It is an uncomfortable position being a middle tier reviewer of pre-
> > > release patch sets when the patch set can still be de-railed by
> > > preference nits. A lack of deference makes it a difficult job to
> > > convince people "hey my internal review will save you some time
> > > upstream", when in practice they can see the opposite is true.
> >
> > I will publicly state that without those middle-tier reviews, this
> > would have been a worse submission, which is why I am _REQUIRING_ them
> > from Intel at the moment.
> >
> > So your review did not happen in vain, and if developers complain about
> > this, send them to me please.
> >
> > As for the name, why is everyone ignoring the fact that we have had
> > /sys/devices/virtual/ for decades now, with no one being confused about
> > that name usage?  I see this as an extension of that existing code
> > pattern, is everyone missing that?
> 
> Oh, I was specifically reacting to /sys/devices/virtual being a place
> where un-parented devices land. Things like raid composite devices and
> other devices that just do not fit cleanly in the parent device
> hierarchy, each of them has independent lifetime rules, some do not
> attach to drivers they just exist in an "unbound" state when active...
> it's an anything goes catch all space. This bus is the opposite, all
> devices have a maximum lifetime tied to their parent device bind
> lifetime, and they are not active without drivers. Placing them under
> /sys/bus/virtual/devices confuses what the "virtual" term means to
> sysfs.

"virtual" here means "not real" :)

> "ancillary" devices and drivers has meaning to me, in a way that
> "virtual" devices and drivers does not. If "ancillary" does not hit
> the right tone what about "aux" for "auxiliary"
> (/sys/bus/aux/devices)?

"aux" is nice, I'll think about that.  simple is good, and naming is
hard...

thanks,

greg k-h
Dan Williams Oct. 7, 2020, 4:19 p.m. UTC | #51
On Wed, Oct 7, 2020 at 2:14 AM gregkh@linuxfoundation.org
<gregkh@linuxfoundation.org> wrote:
>
> On Tue, Oct 06, 2020 at 03:40:58PM -0700, Dan Williams wrote:
> > On Mon, Oct 5, 2020 at 4:25 AM gregkh@linuxfoundation.org
> > <gregkh@linuxfoundation.org> wrote:
> > >
> > > On Sun, Oct 04, 2020 at 11:45:41PM +0000, Williams, Dan J wrote:
> > > > [ Ugh, as others have lamented, I was not copied on this thread so I
> > > > could not respond in real time. Let me be another to say, please copy
> > > > all impacted lists and stakeholders on patches. ]
> > > >
> > > > On Sat, 2020-10-03 at 11:08 +0200, Greg KH wrote:
> > > > [..]
> > > > >
> > > > > > Several names were suggested (like peer bus, which was shot down
> > > > > > because in
> > > > > > parts of the English speaking world the peerage means nobility),
> > > > > > finally
> > > > > > "ancillary bus" was arrived at by consensus of not hating it.
> > > > >
> > > > > "not hating it", while sometimes is a good thing, for something that
> > > > > I
> > > > > am going to have to tell everyone to go use, I would like to at least
> > > > > "like it".  And right now I don't like it...
> > > > >
> > > > > I think we should go back to "virtual" for now, or, if the people who
> > > > > didn't like it on your "internal" reviews wish to participate here
> > > > > and
> > > > > defend their choice, I would be glad to listen to that reasoning.
> > > >
> > > > I came out strongly against "virtual" because there is nothing virtual
> > > > about these devices, they are functional partitions of the parent
> > > > device. Also, /sys/devices/virtual is already the land of unparented  /
> > > > > software-defined devices. Having /sys/bus/virtual alongside that is
> > > >  not quite a namespace collision, but it's certainly namespace
> > > > confusion in my view.
> > > >
> > > > I proposed other names, the team came back with "ancillary" which was
> > > > not my first choice, but perfectly suitable. In deference to the people
> > > > doing the work I let their choice stand.
> > > >
> > > > It is an uncomfortable position being a middle tier reviewer of pre-
> > > > release patch sets when the patch set can still be de-railed by
> > > > preference nits. A lack of deference makes it a difficult job to
> > > > convince people "hey my internal review will save you some time
> > > > upstream", when in practice they can see the opposite is true.
> > >
> > > I will publicly state that without those middle-tier reviews, this
> > > would have been a worse submission, which is why I am _REQUIRING_ them
> > > from Intel at the moment.
> > >
> > > So your review did not happen in vain, and if developers complain about
> > > this, send them to me please.
> > >
> > > As for the name, why is everyone ignoring the fact that we have had
> > > /sys/devices/virtual/ for decades now, with no one being confused about
> > > that name usage?  I see this as an extension of that existing code
> > > pattern, is everyone missing that?
> >
> > Oh, I was specifically reacting to /sys/devices/virtual being a place
> > where unparented devices land. Things like RAID composite devices and
> > other devices that just do not fit cleanly in the parent device
> > hierarchy: each of them has independent lifetime rules, some do not
> > attach to drivers, they just exist in an "unbound" state when active...
> > it's an anything-goes catch-all space. This bus is the opposite: all
> > devices have a maximum lifetime tied to their parent device's bind
> > lifetime, and they are not active without drivers. Placing them under
> > /sys/bus/virtual/devices confuses what the "virtual" term means to
> > sysfs.
>
> "virtual" here means "not real" :)

Which of these aux device use cases is not a real device? One of my
planned usages for this facility is for the NPEM (Native PCIe
Enclosure Management) mechanism that can appear on any PCIe
bridge/endpoint. While it is true that the NPEM register set does not
get its own PCI-b:d:f address on the host bus, it is still
discoverable by a standard enumeration scheme. It is real auxiliary
device functionality that can appear on any PCIe device, and the
kernel can now have one common NPEM driver for all instances in the
topology.

>
> > "ancillary" devices and drivers has meaning to me, in a way that
> > "virtual" devices and drivers does not. If "ancillary" does not hit
> > the right tone what about "aux" for "auxiliary"
> > (/sys/bus/aux/devices)?
>
> "aux" is nice, I'll think about that.  simple is good, and naming is
> hard...

I have a soft spot for functional three-letter abbreviations.
Certainly "aux" is clearer than "ancil", and less overloaded than
"virt".
Mark Brown Oct. 7, 2020, 4:22 p.m. UTC | #52
On Wed, Oct 07, 2020 at 09:19:32AM -0700, Dan Williams wrote:
> On Wed, Oct 7, 2020 at 2:14 AM gregkh@linuxfoundation.org

> > "virtual" here means "not real" :)

> Which of these aux device use cases is not a real device? One of my
> planned usages for this facility is for the NPEM (Native PCIe
> Enclosure Management) mechanism that can appear on any PCIe
> bridge/endpoint. While it is true that the NPEM register set does not
> get its own PCI-b:d:f address on the host bus, it is still
> discoverable by a standard enumeration scheme. It is real auxiliary
> device functionality that can appear on any PCIe device, and the
> kernel can now have one common NPEM driver for all instances in the
> topology.

Some, if not all, of the SOF cases are entirely software-defined by the
firmware downloaded to the audio DSPs.
Dan Williams Oct. 7, 2020, 4:41 p.m. UTC | #53
On Wed, Oct 7, 2020 at 9:23 AM Mark Brown <broonie@kernel.org> wrote:
>
> On Wed, Oct 07, 2020 at 09:19:32AM -0700, Dan Williams wrote:
> > On Wed, Oct 7, 2020 at 2:14 AM gregkh@linuxfoundation.org
>
> > > "virtual" here means "not real" :)
>
> > Which of these aux device use cases is not a real device? One of my
> > planned usages for this facility is for the NPEM (Native PCIe
> > Enclosure Management) mechanism that can appear on any PCIe
> > bridge/endpoint. While it is true that the NPEM register set does not
> > get its own PCI-b:d:f address on the host bus, it is still
> > discoverable by a standard enumeration scheme. It is real auxiliary
> > device functionality that can appear on any PCIe device, and the
> > kernel can now have one common NPEM driver for all instances in the
> > topology.
>
> Some, if not all, of the SOF cases are entirely software-defined by the
> firmware downloaded to the audio DSPs.

"Software-defined" in the kernel context to me means software Linux
kernel developers have control over, so device-mapper devices, other
things that show up under /sys/devices/virtual, or
/sys/devices/system/memory/. Firmware changing device functionality is
more like just-in-time hardware than software-defined. In other words
kernel software is not involved in constructing the device
functionality.
Pierre-Louis Bossart Oct. 7, 2020, 4:42 p.m. UTC | #54
>>> "virtual" here means "not real" :)
> 
>> Which of these aux device use cases is not a real device? One of my
>> planned usages for this facility is for the NPEM (Native PCIE
>> Enclosure Management) mechanism that can appear on any PCIE
>> bridge/endpoint. While it is true that the NPEM register set does not
>> get its own PCI-b:d:f address on the host bus, it is still
>> discoverable by a standard enumeration scheme. It is real auxiliary
>> device functionality that can appear on any PCIE device where the
>> kernel can now have one common NPEM driver for all instances in the
>> topology.
> 
> Some, if not all, of the SOF cases are entirely software-defined by the
> firmware downloaded to the audio DSPs.

Correct for DSP processing/debug stuff. In some cases, though, the
firmware deals with different I/Os (HDMI, I2C) and having multiple 'aux'
devices helps expose unrelated physical functions in a more modular way.

The non-SOF audio case I can think of is SoundWire. We want to expose
SoundWire links as separate devices even though they are not exposed in
the platform firmware or PCI configs (a decision driven by Windows). We
currently use platform devices for this, but we'd like to transition to
this 'aux' bus.
Parav Pandit Oct. 7, 2020, 4:56 p.m. UTC | #55
Hi Pierre, Mark, Dan,

> From: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
> Sent: Wednesday, October 7, 2020 10:12 PM
> 
> 
> >>> "virtual" here means "not real" :)
> >
> >> Which of these aux device use cases is not a real device? One of my
> >> planned usages for this facility is for the NPEM (Native PCIe
> >> Enclosure Management) mechanism that can appear on any PCIe
> >> bridge/endpoint. While it is true that the NPEM register set does not
> >> get its own PCI-b:d:f address on the host bus, it is still
> >> discoverable by a standard enumeration scheme. It is real auxiliary
> >> device functionality that can appear on any PCIe device, and the
> >> kernel can now have one common NPEM driver for all instances in the
> >> topology.
> >
> > Some, if not all, of the SOF cases are entirely software-defined by the
> > firmware downloaded to the audio DSPs.
> 
> Correct for DSP processing/debug stuff. In some cases, though, the
> firmware deals with different I/Os (HDMI, I2C) and having multiple 'aux'
> devices helps expose unrelated physical functions in a more modular way.
> 
> The non-SOF audio case I can think of is SoundWire. We want to expose
> SoundWire links as separate devices even though they are not exposed in
> the platform firmware or PCI configs (a decision driven by Windows). We
> currently use platform devices for this, but we'd like to transition to
> this 'aux' bus.

There is a more recent version of this patch set [1] from Dave, which
copies the multiple mailing lists whose subsystems are also going to
consume this bus. This includes:
(a) mlx5 subdevices for netdev, rdma and more, carved from a PCI device;
(b) an mlx5 matching service to load multiple drivers for a given PCI
PF/VF/subfunction (a similar use case to irdma).

Greg also mentioned that he would like to see the other mailing lists
CCed, which Dave already did in PATCH v2 at [1].
So let's please discuss in thread [1]?

[1] https://lore.kernel.org/linux-rdma/20201005182446.977325-1-david.m.ertman@intel.com/
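
To make the matching in (b) concrete: under the naming scheme of this
series, each consumer module carries its own id table keyed on the
registering parent's "<KBUILD_MODNAME>.<name>" string, so several
drivers can bind to subdevices of one PCI function at once. A minimal
sketch; all strings are illustrative and not taken from the mlx5
patches:

#include <linux/ancillary_bus.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>

/* In the netdev consumer module: */
static const struct ancillary_device_id mlx5e_id_table[] = {
	{ .name = "mlx5_core.eth" },	/* parent set ancildev->name = "eth" */
	{}
};
MODULE_DEVICE_TABLE(ancillary, mlx5e_id_table);

/* In the RDMA consumer module: */
static const struct ancillary_device_id mlx5_ib_id_table[] = {
	{ .name = "mlx5_core.rdma" },	/* parent set ancildev->name = "rdma" */
	{}
};
MODULE_DEVICE_TABLE(ancillary, mlx5_ib_id_table);

One PF can thus spawn an "eth" and an "rdma" subdevice and have the
netdev and RDMA drivers bound simultaneously, the irdma-style use case
mentioned above.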