[v4,00/31] v4l: add support for multiplexed streams

Message ID 20190328200608.9463-1-jacopo+renesas@jmondi.org

Message

Jacopo Mondi March 28, 2019, 8:05 p.m. UTC
Hello,
   new iteration of multiplexed stream support patch series.

V3 available at:
https://patchwork.kernel.org/cover/10839889/

V2 sent by Niklas is available at:
https://patchwork.kernel.org/cover/10573817/

Series available at:
git://jmondi.org/linux v4l2-mux/media-master/v4

Few changes compared to v3:
- Rebased on latest media master
- Include Sakari's and Luca's comments on documentation patch
- Improve connected pad operator as suggested by Ian and Sakari
- Fix a bogus -EINVAL return in 31/31 as suggested by Sakari
- Improve adv748x afe has_route as suggested by Luca and Sakari
- Closed some checkpatch warnings (there are still a few though)

v2 -> v3
- Added IOCTLs documentation, and a section to describe multiplexed media links
  to the v4l2-subdevice interface chapter and expanded functions documentation
- Re-worked the IOCTLs so that they don't need a compat version
- Sort pads to be passed to has_route() by their index values
- Drop indirect routes support from this initial version. Might be considered
  later to simplify subdevice drivers implementations
- Add has_route() operations to adv748x and rcar-csi2 subdevice drivers
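The has_route() operation at the heart of this series lets a sub-device report whether two of its pads are internally connected. As a rough illustration, the behaviour of a 4-input multiplexer (in the spirit of the adv748x AFE discussed below) can be modeled in plain C. All names here (struct mux, mux_has_route, the pad numbering) are hypothetical; the real operation is a kernel media_entity_operations callback, not this userspace sketch.

```c
#include <stdbool.h>

#define MUX_SINK_PADS   4
#define MUX_SOURCE_PAD  4   /* pad index of the single source pad */

struct mux {
	int active_sink;        /* which sink pad is currently routed out */
};

/*
 * Return true if pad0 and pad1 are connected through the device.
 * Mirrors the semantics of the has_route() entity operation: the
 * pair is unordered and a pad always has a route to itself.
 */
static bool mux_has_route(const struct mux *mux, unsigned int pad0,
			  unsigned int pad1)
{
	if (pad0 == pad1)
		return true;

	/* Normalise so pad0 < pad1; the series sorts pads by index. */
	if (pad0 > pad1) {
		unsigned int tmp = pad0;

		pad0 = pad1;
		pad1 = tmp;
	}

	/* Only the active sink pad is routed to the source pad. */
	return pad1 == MUX_SOURCE_PAD &&
	       pad0 == (unsigned int)mux->active_sink;
}
```

The graph-walk patches in this series can then skip pads for which this check fails, so link validation only covers pads that actually carry the stream.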


Jacopo Mondi (4):
  media: entity: Add iterator helper for entity pads
  media: Documentation: Add GS_ROUTING documentation
  adv748x: afe: Implement has_route()
  media: rcar-csi2: Implement has_route()

Laurent Pinchart (4):
  media: entity: Add has_route entity operation
  media: entity: Add media_entity_has_route() function
  media: entity: Use routing information during graph traversal
  v4l: subdev: Add [GS]_ROUTING subdev ioctls and operations

Niklas Söderlund (7):
  adv748x: csi2: add translation from pixelcode to CSI-2 datatype
  adv748x: csi2: only allow formats on sink pads
  adv748x: csi2: describe the multiplexed stream
  adv748x: csi2: add internal routing configuration
  adv748x: afe: add routing support
  rcar-csi2: use frame description information to configure CSI-2 bus
  rcar-csi2: expose the subdevice internal routing

Sakari Ailus (16):
  media: entity: Use pad as a starting point for graph walk
  media: entity: Use pads instead of entities in the media graph walk
    stack
  media: entity: Walk the graph based on pads
  v4l: mc: Start walk from a specific pad in use count calculation
  media: entity: Move the pipeline from entity to pads
  media: entity: Use pad as the starting point for a pipeline
  media: entity: Skip link validation for pads to which there is no
    route to
  media: entity: Add an iterator helper for connected pads
  media: entity: Add only connected pads to the pipeline
  media: entity: Add debug information in graph walk route check
  v4l: Add bus type to frame descriptors
  v4l: Add CSI-2 bus configuration to frame descriptors
  v4l: Add stream to frame descriptor
  v4l: subdev: Take routing information into account in link validation
  v4l: subdev: Improve link format validation debug messages
  v4l: mc: Add an S_ROUTING helper function for power state changes
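The three frame-descriptor patches above extend each descriptor entry with bus information, so a receiver such as rcar-csi2 can configure its bus from the transmitter's description. A simplified model of that lookup is sketched below; the struct and function names are illustrative, not the actual kernel definitions.

```c
#include <stdint.h>

/*
 * One entry per stream on a multiplexed link, carrying the CSI-2
 * virtual channel and datatype needed to demultiplex it.
 */
struct frame_desc_entry {
	uint32_t stream;   /* stream identifier on the link */
	uint8_t  vc;       /* CSI-2 virtual channel */
	uint8_t  dt;       /* CSI-2 datatype, e.g. 0x2b = RAW10 */
};

/* Find the (vc, dt) pair for a stream; return 0 on success, -1 if absent. */
static int frame_desc_find_vc_dt(const struct frame_desc_entry *entries,
				 unsigned int num, uint32_t stream,
				 uint8_t *vc, uint8_t *dt)
{
	for (unsigned int i = 0; i < num; i++) {
		if (entries[i].stream == stream) {
			*vc = entries[i].vc;
			*dt = entries[i].dt;
			return 0;
		}
	}
	return -1;
}
```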

 Documentation/media/kapi/mc-core.rst          |  15 +-
 Documentation/media/uapi/v4l/dev-subdev.rst   |  92 ++++++
 Documentation/media/uapi/v4l/user-func.rst    |   1 +
 .../uapi/v4l/vidioc-subdev-g-routing.rst      | 139 ++++++++
 drivers/media/i2c/adv748x/adv748x-afe.c       |  91 ++++++
 drivers/media/i2c/adv748x/adv748x-csi2.c      | 136 +++++++-
 drivers/media/i2c/adv748x/adv748x.h           |   1 +
 drivers/media/media-device.c                  |  13 +-
 drivers/media/media-entity.c                  | 238 ++++++++------
 drivers/media/pci/intel/ipu3/ipu3-cio2.c      |   6 +-
 .../media/platform/exynos4-is/fimc-capture.c  |   8 +-
 .../platform/exynos4-is/fimc-isp-video.c      |   8 +-
 drivers/media/platform/exynos4-is/fimc-isp.c  |   2 +-
 drivers/media/platform/exynos4-is/fimc-lite.c |  10 +-
 drivers/media/platform/exynos4-is/media-dev.c |  20 +-
 drivers/media/platform/omap3isp/isp.c         |   2 +-
 drivers/media/platform/omap3isp/ispvideo.c    |  25 +-
 drivers/media/platform/omap3isp/ispvideo.h    |   2 +-
 .../media/platform/qcom/camss/camss-video.c   |   6 +-
 drivers/media/platform/rcar-vin/rcar-core.c   |  16 +-
 drivers/media/platform/rcar-vin/rcar-csi2.c   | 220 ++++++++++---
 drivers/media/platform/rcar-vin/rcar-dma.c    |   8 +-
 .../media/platform/s3c-camif/camif-capture.c  |   6 +-
 drivers/media/platform/vimc/vimc-capture.c    |   5 +-
 drivers/media/platform/vsp1/vsp1_video.c      |  18 +-
 drivers/media/platform/xilinx/xilinx-dma.c    |  20 +-
 drivers/media/platform/xilinx/xilinx-dma.h    |   2 +-
 drivers/media/usb/au0828/au0828-core.c        |   4 +-
 drivers/media/v4l2-core/v4l2-ioctl.c          |  25 +-
 drivers/media/v4l2-core/v4l2-mc.c             |  76 +++--
 drivers/media/v4l2-core/v4l2-subdev.c         | 296 ++++++++++++++++--
 .../staging/media/davinci_vpfe/vpfe_video.c   |  47 +--
 drivers/staging/media/imx/imx-media-utils.c   |   8 +-
 drivers/staging/media/omap4iss/iss.c          |   2 +-
 drivers/staging/media/omap4iss/iss_video.c    |  38 +--
 drivers/staging/media/omap4iss/iss_video.h    |   2 +-
 include/media/media-entity.h                  | 143 ++++++---
 include/media/v4l2-mc.h                       |  22 ++
 include/media/v4l2-subdev.h                   |  49 +++
 include/uapi/linux/v4l2-subdev.h              |  40 +++
 40 files changed, 1492 insertions(+), 370 deletions(-)
 create mode 100644 Documentation/media/uapi/v4l/vidioc-subdev-g-routing.rst

--
2.21.0

Comments

Sakari Ailus March 29, 2019, 1:17 a.m. UTC | #1
Hi Jacopo,

On Thu, Mar 28, 2019 at 09:05:37PM +0100, Jacopo Mondi wrote:
> Hello,
>    new iteration of multiplexed stream support patch series.
> 
> V3 available at:
> https://patchwork.kernel.org/cover/10839889/
> 
> V2 sent by Niklas is available at:
> https://patchwork.kernel.org/cover/10573817/
> 
> Series available at:
> git://jmondi.org/linux v4l2-mux/media-master/v4
> 
> Few changes compared to v3:
> - Rebased on latest media master
> - Include Sakari's and Luca's comments on documentation patch
> - Improve connected pad operator as suggested by Ian and Sakari
> - Fix a bogus -EINVAL return in 31/31 as suggested by Sakari
> - Improve adv748x afe has_route as suggested by Luca and Sakari
> - Closed some checkpatch warnings (there are still a few though)

Thank you for working on this.

I still need to read the documentation tomorrow, but this begins to look
like material for merging. Your SoB seems to be missing from the patches
you haven't written. Shouldn't it be there?
Jacopo Mondi March 29, 2019, 8:32 a.m. UTC | #2
Hi Sakari,

On Fri, Mar 29, 2019 at 03:17:11AM +0200, Sakari Ailus wrote:
> Hi Jacopo,
>
> On Thu, Mar 28, 2019 at 09:05:37PM +0100, Jacopo Mondi wrote:
> > Hello,
> >    new iteration of multiplexed stream support patch series.
> >
> > V3 available at:
> > https://patchwork.kernel.org/cover/10839889/
> >
> > V2 sent by Niklas is available at:
> > https://patchwork.kernel.org/cover/10573817/
> >
> > Series available at:
> > git://jmondi.org/linux v4l2-mux/media-master/v4
> >
> > Few changes compared to v3:
> > - Rebased on latest media master
> > - Include Sakari's and Luca's comments on documentation patch
> > - Improve connected pad operator as suggested by Ian and Sakari
> > - Fix a bogus -EINVAL return in 31/31 as suggested by Sakari
> > - Improve adv748x afe has_route as suggested by Luca and Sakari
> > - Closed some checkpatch warnings (there are still a few though)
>
> Thank you for working on this.
>
> I still need to read the documentation tomorrow, but this begins to look
> like material for merging. Your SoB seems to be missing from the patches
> you haven't written. Shouldn't it be there?

Should it? As "someone in the patch delivery path" (quoting
submitting-patches.rst) I probably should. I didn't, as Niklas didn't
do so in v2, but I can add it to v5.

Thanks for pointing it out!

>
> --
> Kind regards,
>
> Sakari Ailus
> sakari.ailus@linux.intel.com
Sakari Ailus April 1, 2019, 12:32 p.m. UTC | #3
On Fri, Mar 29, 2019 at 09:32:51AM +0100, Jacopo Mondi wrote:
> Hi Sakari,
> 
> On Fri, Mar 29, 2019 at 03:17:11AM +0200, Sakari Ailus wrote:
> > Hi Jacopo,
> >
> > On Thu, Mar 28, 2019 at 09:05:37PM +0100, Jacopo Mondi wrote:
> > > Hello,
> > >    new iteration of multiplexed stream support patch series.
> > >
> > > V3 available at:
> > > https://patchwork.kernel.org/cover/10839889/
> > >
> > > V2 sent by Niklas is available at:
> > > https://patchwork.kernel.org/cover/10573817/
> > >
> > > Series available at:
> > > git://jmondi.org/linux v4l2-mux/media-master/v4
> > >
> > > Few changes compared to v3:
> > > - Rebased on latest media master
> > > - Include Sakari's and Luca's comments on documentation patch
> > > - Improve connected pad operator as suggested by Ian and Sakari
> > > - Fix a bogus -EINVAL return in 31/31 as suggested by Sakari
> > > - Improve adv748x afe has_route as suggested by Luca and Sakari
> > > - Closed some checkpatch warnings (there are still a few though)
> >
> > Thank you for working on this.
> >
> > I still need to read the documentation tomorrow, but this begins to look
> > like material for merging. Your SoB seems to be missing from the patches
> > you haven't written. Shouldn't it be there?
> 
> Should it? As "someone in the patch delivery path" (quoting
> submitting-patches.rst) I probably should. I didn't, as Niklas didn't
> do so in v2, but I can add it to v5.
> 
> Thanks for pointing it out!

It basically indicates that the patch has passed through your hands, so I
think it should be there. No need to resend just for that, IMO.
Tomi Valkeinen Feb. 11, 2021, 1:44 p.m. UTC | #4
Hi all,

On 28/03/2019 22:05, Jacopo Mondi wrote:
> Hello,
>    new iteration of multiplexed stream support patch series.
> 
> V3 available at:
> https://patchwork.kernel.org/cover/10839889/
> 
> V2 sent by Niklas is available at:
> https://patchwork.kernel.org/cover/10573817/
> 
> Series available at:
> git://jmondi.org/linux v4l2-mux/media-master/v4

I'm trying to understand how these changes can be used with virtual
channels and also with embedded data.

I have an SoC with two CSI-2 RX ports, both of which connect to a
processing block with 8 DMA engines. Each of the DMA engines can be
programmed to handle a certain virtual channel and datatype.

The board has a multiplexer, connected to 4 cameras, and the multiplexer
connects to SoC's CSI-2 RX port. This board has just one multiplexer
connected, but, of course, both RX ports could have a multiplexer,
amounting to a total of 8 cameras.

So, in theory, there could be 16 streams to be handled (4 pixel streams
and 4 embedded data streams for both RX ports). With only 8 DMA engines
available, the driver has to manage them dynamically, reserving a DMA
engine when a stream is started.

My confusion is with the /dev/video nodes. I think it would be logical
to create 8 of them, one for each DMA engine (or less, if I know there
is only, say, 1 camera connected, in which case 2 nodes would be
enough). But in that case how does the user know what data is being
received from that node? In other words, how to connect, say,
/dev/video0 to second camera's embedded data stream?

Another option would be to create 16 /dev/video nodes, and document that
first one maps to virtual channel 0 + pixel data, second to virtual
channel 0 + embedded data, and so on. And only allow 8 of them to be
turned on at a time. But I don't like this idea much.

The current driver architecture is such that the multiplexer is modeled
with a subdev with 4 sink pads and one source pad, the SoC's RX ports
are subdevs with a single sink and a single output pad, and then there
are the video devices connected to RX's source pad.

And while I can connect the video node's pad to the source pad on either
of the RX ports, I don't think I have any way to define which stream it
receives.

Does that mean that each RX port subdev should instead have 8 source
pads? Isn't a pad like a physical connection? There's really just one
output from the RX port, with multiplexed streams, so 8 pads doesn't
sound right.

 Tomi
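The dynamic reservation Tomi describes (8 engines shared by up to 16 candidate streams) could be implemented as a simple first-fit allocator keyed by virtual channel and datatype, claimed at stream start and released at stream stop. The sketch below is illustrative plain C under those assumptions, not taken from any driver.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_DMA_ENGINES 8

struct dma_engine {
	bool    busy;
	uint8_t vc;     /* CSI-2 virtual channel, 0..3 per receiver */
	uint8_t dt;     /* CSI-2 datatype, e.g. 0x2b = RAW10, 0x12 = embedded */
};

static struct dma_engine engines[NUM_DMA_ENGINES];

/*
 * Reserve a free engine for a (vc, dt) stream; return its index, or -1
 * if all 8 are in use (the 16-streams-on-8-engines oversubscription case).
 */
static int dma_engine_reserve(uint8_t vc, uint8_t dt)
{
	for (int i = 0; i < NUM_DMA_ENGINES; i++) {
		if (!engines[i].busy) {
			engines[i].busy = true;
			engines[i].vc = vc;
			engines[i].dt = dt;
			return i;
		}
	}
	return -1;
}

/* Release an engine when its stream stops. */
static void dma_engine_release(int i)
{
	engines[i].busy = false;
}
```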
Sakari Ailus Oct. 6, 2022, 11:20 a.m. UTC | #5
Moi,

On Thu, Feb 11, 2021 at 03:44:56PM +0200, Tomi Valkeinen wrote:
> Hi all,
> 
> On 28/03/2019 22:05, Jacopo Mondi wrote:
> > Hello,
> >    new iteration of multiplexed stream support patch series.
> > 
> > V3 available at:
> > https://patchwork.kernel.org/cover/10839889/
> > 
> > V2 sent by Niklas is available at:
> > https://patchwork.kernel.org/cover/10573817/
> > 
> > Series available at:
> > git://jmondi.org/linux v4l2-mux/media-master/v4
> 
> I'm trying to understand how these changes can be used with virtual
> channels and also with embedded data.
> 
> I have an SoC with two CSI-2 RX ports, both of which connect to a
> processing block with 8 DMA engines. Each of the DMA engines can be
> programmed to handle a certain virtual channel and datatype.
> 
> The board has a multiplexer, connected to 4 cameras, and the multiplexer
> connects to SoC's CSI-2 RX port. This board has just one multiplexer
> connected, but, of course, both RX ports could have a multiplexer,
> amounting to a total of 8 cameras.
> 
> So, in theory, there could be 16 streams to be handled (4 pixel streams
> and 4 embedded data streams for both RX ports). With only 8 DMA engines
> available, the driver has to manage them dynamically, reserving a DMA
> engine when a stream is started.
> 
> My confusion is with the /dev/video nodes. I think it would be logical
> to create 8 of them, one for each DMA engine (or less, if I know there
> is only, say, 1 camera connected, in which case 2 nodes would be

For more complex devices, it is often not possible to define such a number.
Say, put an external ISP in between the sensor and the CSI-2 receiver, and
you may get more streams than you would from the sensor alone.

> enough). But in that case how does the user know what data is being
> received from that node? In other words, how to connect, say,
> /dev/video0 to second camera's embedded data stream?
> 
> Another option would be to create 16 /dev/video nodes, and document that
> first one maps to virtual channel 0 + pixel data, second to virtual
> channel 0 + embedded data, and so on. And only allow 8 of them to be
> turned on at a time. But I don't like this idea much.

This isn't great IMO as it is limited to pre-defined use cases.

> 
> The current driver architecture is such that the multiplexer is modeled
> with a subdev with 4 sink pads and one source pad, the SoC's RX ports
> are subdevs with a single sink and a single output pad, and then there
> are the video devices connected to RX's source pad.
> 
> And while I can connect the video node's pad to the source pad on either
> of the RX ports, I don't think I have any way to define which stream it
> receives.
> 
> Does that mean that each RX port subdev should instead have 8 source
> pads? Isn't a pad like a physical connection? There's really just one
> output from the RX port, with multiplexed streams, so 8 pads doesn't
> sound right.

If you have eight DMAs you should always have eight video nodes.

I would put one link between the sub-device and a video node, and handle
the incoming streams by routing them to the desired video nodes.

If there's any doubt about the approach, it needs to be documented in
driver documentation.
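The per-stream routing suggested here is what the series' [GS]_ROUTING ioctls express: an array of routes, each connecting a (sink pad, stream) pair to a (source pad, stream) pair, with an active flag. The model below follows the spirit of the series' routing table; field and function names are simplified and are not the exact UAPI.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct route {
	uint32_t sink_pad;
	uint32_t sink_stream;
	uint32_t source_pad;
	uint32_t source_stream;
	bool     active;
};

/*
 * Find which (source pad, stream) an incoming stream is routed to.
 * Returns the matching route, or NULL if the stream is not routed.
 */
static const struct route *route_lookup(const struct route *routes,
					unsigned int num_routes,
					uint32_t sink_pad,
					uint32_t sink_stream)
{
	for (unsigned int i = 0; i < num_routes; i++) {
		const struct route *r = &routes[i];

		if (r->active && r->sink_pad == sink_pad &&
		    r->sink_stream == sink_stream)
			return r;
	}
	return NULL;
}
```

With a table like this, one link between the sub-device and each video node is enough: which stream a node receives is decided by the active routes, not by adding extra pads.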
Tomi Valkeinen Oct. 7, 2022, 11:58 a.m. UTC | #6
On 06/10/2022 14:20, Sakari Ailus wrote:
> Moi,
> 
> On Thu, Feb 11, 2021 at 03:44:56PM +0200, Tomi Valkeinen wrote:

You found an old one =).

>> Hi all,
>>
>> On 28/03/2019 22:05, Jacopo Mondi wrote:
>>> Hello,
>>>     new iteration of multiplexed stream support patch series.
>>>
>>> V3 available at:
>>> https://patchwork.kernel.org/cover/10839889/
>>>
>>> V2 sent by Niklas is available at:
>>> https://patchwork.kernel.org/cover/10573817/
>>>
>>> Series available at:
>>> git://jmondi.org/linux v4l2-mux/media-master/v4
>>
>> I'm trying to understand how these changes can be used with virtual
>> channels and also with embedded data.
>>
>> I have an SoC with two CSI-2 RX ports, both of which connect to a
>> processing block with 8 DMA engines. Each of the DMA engines can be
>> programmed to handle a certain virtual channel and datatype.
>>
>> The board has a multiplexer, connected to 4 cameras, and the multiplexer
>> connects to SoC's CSI-2 RX port. This board has just one multiplexer
>> connected, but, of course, both RX ports could have a multiplexer,
>> amounting to total 8 cameras.
>>
>> So, in theory, there could be 16 streams to be handled (4 pixel streams
>> and 4 embedded data streams for both RX ports). With only 8 DMA engines
>> available, the driver has to manage them dynamically, reserving a DMA
>> engine when a stream is started.
>>
>> My confusion is with the /dev/video nodes. I think it would be logical
>> to create 8 of them, one for each DMA engine (or less, if I know there
>> is only, say, 1 camera connected, in which case 2 nodes would be
> 
> For more complex devices, it is often not possible to define such a number.
> Say, put an external ISP in between the sensor and the CSI-2 receiver, and
> you may get more streams than you would from the sensor alone.
> 
>> enough). But in that case how does the user know what data is being
>> received from that node? In other words, how to connect, say,
>> /dev/video0 to second camera's embedded data stream?
>>
>> Another option would be to create 16 /dev/video nodes, and document that
>> first one maps to virtual channel 0 + pixel data, second to virtual
>> channel 0 + embedded data, and so on. And only allow 8 of them to be
>> turned on at a time. But I don't like this idea much.
> 
> This isn't great IMO as it is limited to pre-defined use cases.
> 
>>
>> The current driver architecture is such that the multiplexer is modeled
>> with a subdev with 4 sink pads and one source pad, the SoC's RX ports
>> are subdevs with a single sink and a single output pad, and then there
>> are the video devices connected to RX's source pad.
>>
>> And while I can connect the video node's pad to the source pad on either
>> of the RX ports, I don't think I have any way to define which stream it
>> receives.
>>
>> Does that mean that each RX port subdev should instead have 8 source
>> pads? Isn't a pad like a physical connection? There's really just one
>> output from the RX port, with multiplexed streams, so 8 pads doesn't
>> sound right.
> 
> If you have eight DMAs you should always have eight video nodes.
> 
> I would put one link between the sub-device and a video node, and handle
> the incoming streams by routing them to the desired video nodes.

This is how it's been for quite a while. However, I think this model 
causes problems with more programmable DMA systems, where there's no 
maximum number of DMA "engines" (or the max is something like 256). But 
for now those systems can just define a sensible number of DMAs (8? 16? 
I guess it depends on the HW).

  Tomi
Laurent Pinchart Oct. 7, 2022, 12:14 p.m. UTC | #7
Hi Tomi,

On Fri, Oct 07, 2022 at 02:58:28PM +0300, Tomi Valkeinen wrote:
> On 06/10/2022 14:20, Sakari Ailus wrote:
> > On Thu, Feb 11, 2021 at 03:44:56PM +0200, Tomi Valkeinen wrote:
> 
> You found an old one =).
> 
> >> Hi all,
> >>
> >> On 28/03/2019 22:05, Jacopo Mondi wrote:
> >>> Hello,
> >>>     new iteration of multiplexed stream support patch series.
> >>>
> >>> V3 available at:
> >>> https://patchwork.kernel.org/cover/10839889/
> >>>
> >>> V2 sent by Niklas is available at:
> >>> https://patchwork.kernel.org/cover/10573817/
> >>>
> >>> Series available at:
> >>> git://jmondi.org/linux v4l2-mux/media-master/v4
> >>
> >> I'm trying to understand how these changes can be used with virtual
> >> channels and also with embedded data.
> >>
> >> I have an SoC with two CSI-2 RX ports, both of which connect to a
> >> processing block with 8 DMA engines. Each of the DMA engines can be
> >> programmed to handle a certain virtual channel and datatype.
> >>
> >> The board has a multiplexer, connected to 4 cameras, and the multiplexer
> >> connects to SoC's CSI-2 RX port. This board has just one multiplexer
> >> connected, but, of course, both RX ports could have a multiplexer,
> >> amounting to a total of 8 cameras.
> >>
> >> So, in theory, there could be 16 streams to be handled (4 pixel streams
> >> and 4 embedded data streams for both RX ports). With only 8 DMA engines
> >> available, the driver has to manage them dynamically, reserving a DMA
> >> engine when a stream is started.
> >>
> >> My confusion is with the /dev/video nodes. I think it would be logical
> >> to create 8 of them, one for each DMA engine (or less, if I know there
> >> is only, say, 1 camera connected, in which case 2 nodes would be
> > 
> > For more complex devices, it is often not possible to define such a number.
> > Say, put an external ISP in between the sensor and the CSI-2 receiver, and
> > you may get more streams than you would from the sensor alone.
> > 
> >> enough). But in that case how does the user know what data is being
> >> received from that node? In other words, how to connect, say,
> >> /dev/video0 to second camera's embedded data stream?
> >>
> >> Another option would be to create 16 /dev/video nodes, and document that
> >> first one maps to virtual channel 0 + pixel data, second to virtual
> >> channel 0 + embedded data, and so on. And only allow 8 of them to be
> >> turned on at a time. But I don't like this idea much.
> > 
> > This isn't great IMO as it is limited to pre-defined use cases.
> > 
> >> The current driver architecture is such that the multiplexer is modeled
> >> with a subdev with 4 sink pads and one source pad, the SoC's RX ports
> >> are subdevs with a single sink and a single output pad, and then there
> >> are the video devices connected to RX's source pad.
> >>
> >> And while I can connect the video node's pad to the source pad on either
> >> of the RX ports, I don't think I have any way to define which stream it
> >> receives.
> >>
> >> Does that mean that each RX port subdev should instead have 8 source
> >> pads? Isn't a pad like a physical connection? There's really just one
> >> output from the RX port, with multiplexed streams, so 8 pads doesn't
> >> sound right.
> > 
> > If you have eight DMAs you should always have eight video nodes.
> > 
> > I would put one link between the sub-device and a video node, and handle
> > the incoming streams by routing them to the desired video nodes.
> 
> This is how it's been for quite a while. However, I think this model 
> causes problems with more programmable DMA systems, where there's no 
> maximum number of DMA "engines" (or the max is something like 256). But 
> for now those systems can just define a sensible number of DMAs (8? 16? 
> I guess it depends on the HW).

Agreed, if we get to 256 (or more) DMA engines (or likely, in that case,
DMA engine contexts), then we'll need a different API, with explicit
stream support on video nodes. Hopefully someone else will solve that
problem :-)