
[v2,6/9] media: uapi: Add a control for DW100 driver

Message ID 20220308184829.38242-7-xavier.roumegue@oss.nxp.com (mailing list archive)
State New, archived
Series i.MX8MP DW100 dewarper driver

Commit Message

Xavier Roumegue (OSS) March 8, 2022, 6:48 p.m. UTC
The DW100 driver gets the dewarping mapping as a binary blob from the
userspace application through a custom control.
The blob format is hardware-specific, so create a dedicated control for
this purpose.

Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
---
 Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
 include/uapi/linux/dw100.h                          | 11 +++++++++++
 2 files changed, 18 insertions(+)
 create mode 100644 include/uapi/linux/dw100.h

Comments

Nicolas Dufresne March 8, 2022, 7:15 p.m. UTC | #1
Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
> The DW100 driver gets the dewarping mapping as a binary blob from the
> userspace application through a custom control.
> The blob format is hardware specific so create a dedicated control for
> this purpose.
> 
> Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
> ---
>  Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
>  include/uapi/linux/dw100.h                          | 11 +++++++++++
>  2 files changed, 18 insertions(+)
>  create mode 100644 include/uapi/linux/dw100.h
> 
> diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
> index 20aeae63a94f..3abad05849ad 100644
> --- a/Documentation/userspace-api/media/drivers/dw100.rst
> +++ b/Documentation/userspace-api/media/drivers/dw100.rst
> @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
>  More details on the DW100 hardware operations can be found in
>  *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
>  
> +The Vivante DW100 m2m driver implements the following driver-specific control:
> +
> +``V4L2_CID_DW100_MAPPING (integer)``
> +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
> +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
> +    dynamic array.
> +
>  .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC

This points to a document named "i.MX 8M Plus Applications Processor Datasheet
for Industrial Products", which does not contain that reference.

> diff --git a/include/uapi/linux/dw100.h b/include/uapi/linux/dw100.h
> new file mode 100644
> index 000000000000..0ef926c61cf0
> --- /dev/null
> +++ b/include/uapi/linux/dw100.h
> @@ -0,0 +1,11 @@
> +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
> +/* Copyright 2022 NXP */
> +
> +#ifndef __UAPI_DW100_H__
> +#define __UAPI_DW100_H__
> +
> +#include <linux/v4l2-controls.h>
> +
> +#define V4L2_CID_DW100_MAPPING		(V4L2_CID_USER_DW100_BASE + 1)
> +
> +#endif
Xavier Roumegue (OSS) March 8, 2022, 7:42 p.m. UTC | #2
Hello Nicolas,

On 3/8/22 20:15, Nicolas Dufresne wrote:
> Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
>> The DW100 driver gets the dewarping mapping as a binary blob from the
>> userspace application through a custom control.
>> The blob format is hardware specific so create a dedicated control for
>> this purpose.
>>
>> Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
>> ---
>>   Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
>>   include/uapi/linux/dw100.h                          | 11 +++++++++++
>>   2 files changed, 18 insertions(+)
>>   create mode 100644 include/uapi/linux/dw100.h
>>
>> diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
>> index 20aeae63a94f..3abad05849ad 100644
>> --- a/Documentation/userspace-api/media/drivers/dw100.rst
>> +++ b/Documentation/userspace-api/media/drivers/dw100.rst
>> @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
>>   More details on the DW100 hardware operations can be found in
>>   *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
>>   
>> +The Vivante DW100 m2m driver implements the following driver-specific control:
>> +
>> +``V4L2_CID_DW100_MAPPING (integer)``
>> +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
>> +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
>> +    dynamic array.
>> +
>>   .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
> 
> This point to a document names "i.MX 8M Plus Applications Processor Datasheet
> for Industrial Products" which does not contain that reference.
My bad, wrong link. :)
I'll repost with the correct link.
Nicolas Dufresne March 8, 2022, 8:28 p.m. UTC | #3
Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
> Hello Nicolas,
> 
> On 3/8/22 20:15, Nicolas Dufresne wrote:
> > Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
> > > The DW100 driver gets the dewarping mapping as a binary blob from the
> > > userspace application through a custom control.
> > > The blob format is hardware specific so create a dedicated control for
> > > this purpose.
> > > 
> > > Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
> > > ---
> > >   Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
> > >   include/uapi/linux/dw100.h                          | 11 +++++++++++
> > >   2 files changed, 18 insertions(+)
> > >   create mode 100644 include/uapi/linux/dw100.h
> > > 
> > > diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
> > > index 20aeae63a94f..3abad05849ad 100644
> > > --- a/Documentation/userspace-api/media/drivers/dw100.rst
> > > +++ b/Documentation/userspace-api/media/drivers/dw100.rst
> > > @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
> > >   More details on the DW100 hardware operations can be found in
> > >   *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
> > >   
> > > +The Vivante DW100 m2m driver implements the following driver-specific control:
> > > +
> > > +``V4L2_CID_DW100_MAPPING (integer)``
> > > +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
> > > +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
> > > +    dynamic array.
> > > +
> > >   .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
> > 
> > This point to a document names "i.MX 8M Plus Applications Processor Datasheet
> > for Industrial Products" which does not contain that reference.
> My bad.. Wrong link. :)
> Will repost with correct link.

Thanks. What I wanted to check is whether it actually makes sense to expose the
synthesized HW LUT. But for this, one needs to share the parameters / algorithm
needed to generate them. This way we can compare against other popular dewarp
algorithms / APIs and see if they have something in common.

The issue I see with this control is related to the message it sends. When adding
controls for posterity, we want these controls to actually be usable. This
is possible if the documentation makes their usage obvious, or if there is Open
Source userland to support them.

Neither of these is met, so as a side effect, this looks like NXP sneaking a
private blob control into a publicly maintained Open Source project. This isn't
truly aligned with how V4L2 controls are meant to be used. Doing trivial LUT
synthesis in the kernel could be fine, though.

Xavier Roumegue (OSS) March 8, 2022, 11:16 p.m. UTC | #4
On 3/8/22 21:28, Nicolas Dufresne wrote:
> Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
>> Hello Nicolas,
>>
>> On 3/8/22 20:15, Nicolas Dufresne wrote:
>>> Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
>>>> The DW100 driver gets the dewarping mapping as a binary blob from the
>>>> userspace application through a custom control.
>>>> The blob format is hardware specific so create a dedicated control for
>>>> this purpose.
>>>>
>>>> Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
>>>> ---
>>>>    Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
>>>>    include/uapi/linux/dw100.h                          | 11 +++++++++++
>>>>    2 files changed, 18 insertions(+)
>>>>    create mode 100644 include/uapi/linux/dw100.h
>>>>
>>>> diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
>>>> index 20aeae63a94f..3abad05849ad 100644
>>>> --- a/Documentation/userspace-api/media/drivers/dw100.rst
>>>> +++ b/Documentation/userspace-api/media/drivers/dw100.rst
>>>> @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
>>>>    More details on the DW100 hardware operations can be found in
>>>>    *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
>>>>    
>>>> +The Vivante DW100 m2m driver implements the following driver-specific control:
>>>> +
>>>> +``V4L2_CID_DW100_MAPPING (integer)``
>>>> +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
>>>> +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
>>>> +    dynamic array.
>>>> +
>>>>    .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
>>>
>>> This point to a document names "i.MX 8M Plus Applications Processor Datasheet
>>> for Industrial Products" which does not contain that reference.
>> My bad.. Wrong link. :)
>> Will repost with correct link.
> 
> Thanks. What I wanted to check is if it actually made sense to expose the
> synthetized HW LUT. But for this, one need to share the parameters / algo needed
> to generate them.
There is no special dewarping algorithm which strictly depends on the 
DW100 IP, or is optimized for the IP's capabilities.

>   This way we can compare against other popular dewarp
> algorithms / API and see if they have something in common.
The DW100 HW LUT description is rather close to how you would implement 
dewarping with OpenGL, taking advantage of the shader pipeline stage.
The main differences with an OpenGL implementation are:
- Fixed vertex coordinates (16x16 grid) vs. arbitrary
- Limited resolution on input (texture) coordinates (UQ12.4 fixed point) vs. float

Standard routines from OpenCV such as initUndistortRectifyMap()
https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
can be used to generate the binary blob, with an additional decimation 
processing stage to satisfy the 16x16 macro-block vertex grid and the 
fixed-point format.
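As a rough illustration of that pipeline (this is not code from the series — the dense per-pixel map would in practice come from cv2.initUndistortRectifyMap(), and the exact bit layout of a LUT entry must be checked against *chapter 13.15.2.3 Dewarping Remap* of the reference manual; the UQ12.4 packing with X in the low half-word and Y in the high half-word below is an assumption):

```python
# Sketch: decimate a dense remap (one source coordinate per destination
# pixel) down to a 16x16 macro-block vertex grid and pack each vertex as
# a u32 LUT entry.  Assumed (verify against the i.MX8MP RM, ch. 13.15.2.3):
# coordinates are UQ12.4 fixed point, X in bits 15:0, Y in bits 31:16.

BLOCK = 16  # macro-block size imposed by the hardware

def to_uq12_4(v: float) -> int:
    """Convert a float coordinate to UQ12.4 (12 integer bits, 4 fractional)."""
    fixed = int(round(v * 16))          # 4 fractional bits -> scale by 2**4
    return max(0, min(fixed, 0xFFFF))   # clamp to the 16-bit field

def decimate_map(map_x, map_y, dst_w, dst_h):
    """Sample dense per-pixel maps at the macro-block vertices.

    map_x / map_y are callables (x, y) -> source coordinate, standing in
    for the dense arrays returned by cv2.initUndistortRectifyMap().
    Returns a flat list of u32 LUT entries, row-major over the vertex grid.
    """
    lut = []
    for vy in range(0, dst_h + 1, BLOCK):      # one vertex per block corner,
        for vx in range(0, dst_w + 1, BLOCK):  # hence the inclusive bounds
            sx = to_uq12_4(map_x(vx, vy))
            sy = to_uq12_4(map_y(vx, vy))
            lut.append((sy << 16) | sx)        # assumed packing: Y high, X low
    return lut

# Identity mapping (comparable in spirit to the driver's built-in default):
identity = decimate_map(lambda x, y: x, lambda x, y: y, dst_w=640, dst_h=480)
print(len(identity))   # (640/16 + 1) * (480/16 + 1) = 41 * 31 = 1271 entries
```

For a real lens, only the two lambdas change: they would index into the mapx/mapy arrays produced by the calibration step, with the decimation and fixed-point conversion staying as above.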

> 
> The issue I see with this control is relate to the message it gives. When adding
> controls for the prosperity, we want these control to actually be usable. This
> is possible if the documentation makes its usage obvious, or if there is Open
> Source userland to support that.
So yes, the most famous open-source vision projects, such as OpenCV, can be 
used to generate the blob.
> 
> None of this is met, so as a side effect, this looks like NXP sneaking in
> private blob control into a publicly maintained Open Source project.
I therefore disagree with this statement, considering my previous comments.

I plan to publicly release some programming examples showing how to generate 
the dewarping map using only OpenCV library routines, aligned with 
state-of-the-art lens calibration methods.
A dedicated OpenCV module taking advantage of the DW100 will be published 
as well.

A long-term target is to add support for it in libcamera, combined with all 
the media components (CSI, ISP, ISI) pulled from the upstream kernel tree.

>   This isn't
> truly aligned with how V4L2 controls are meant to be. Doing trivial lut
> synthesis in the kernel could be fine though.
I am not sure what you meant by this comment.

As part of this patch series, an identity map is generated in the driver, 
which should be enough for anyone familiar with the dewarping process.
If you meant generating the remapping table from the lens calibration 
data, I don't think this is a reasonable option considering the 
NP-completeness of the problem.

If it is the idea of a binary blob (despite its public format 
description) which hurts you, the map can be exposed to the kernel in a 
more human-readable format such as Image_in(xin, yin) -> Image_out(xout, 
yout) in UQ1.31 format, but this will add extra runtime processing for 
something which has to be done offline anyway, plus memory overhead. But 
I don't think we can end up with a generic V4L2 control considering the 
hardware restrictions (vertex positions, limited fixed-point 
resolution, etc.).

Adding a generic dewarping API to V4L2 is possible, but this was not in the 
scope of this patchset, and in any case the missing data on other public 
dewarp hardware implementations supported by the kernel is somewhat of a 
blocker for this.

Nicolas Dufresne March 9, 2022, 8:08 p.m. UTC | #5
Le mercredi 09 mars 2022 à 00:16 +0100, Xavier Roumegue (OSS) a écrit :
> 
> On 3/8/22 21:28, Nicolas Dufresne wrote:
> > Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
> > > Hello Nicolas,
> > > 
> > > On 3/8/22 20:15, Nicolas Dufresne wrote:
> > > > Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
> > > > > The DW100 driver gets the dewarping mapping as a binary blob from the
> > > > > userspace application through a custom control.
> > > > > The blob format is hardware specific so create a dedicated control for
> > > > > this purpose.
> > > > > 
> > > > > Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
> > > > > ---
> > > > >    Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
> > > > >    include/uapi/linux/dw100.h                          | 11 +++++++++++
> > > > >    2 files changed, 18 insertions(+)
> > > > >    create mode 100644 include/uapi/linux/dw100.h
> > > > > 
> > > > > diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > index 20aeae63a94f..3abad05849ad 100644
> > > > > --- a/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > +++ b/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
> > > > >    More details on the DW100 hardware operations can be found in
> > > > >    *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
> > > > >    
> > > > > +The Vivante DW100 m2m driver implements the following driver-specific control:
> > > > > +
> > > > > +``V4L2_CID_DW100_MAPPING (integer)``
> > > > > +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
> > > > > +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
> > > > > +    dynamic array.
> > > > > +
> > > > >    .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
> > > > 
> > > > This point to a document names "i.MX 8M Plus Applications Processor Datasheet
> > > > for Industrial Products" which does not contain that reference.
> > > My bad.. Wrong link. :)
> > > Will repost with correct link.
> > 
> > Thanks. What I wanted to check is if it actually made sense to expose the
> > synthetized HW LUT. But for this, one need to share the parameters / algo needed
> > to generate them.
> There is no special dewarping algorithm which strictly depends on the 
> dw100 IP, or optimized for the IP capabilities.
> 
>   This way we can compare against other popular dewarp
> > algorithms / API and see if they have something in common.
> The dw100 hw lut description is rather close to a how you implement 
> dewarping with openGL taking benefit of the shader pipeline stage.
> The main differences with OpenGL implementation are:
> - Fixed vertices coordinates (16x16) vs any
> - Limited resolution on input (texture) coordinates (UQ12.4) vs float
> 
> Standard routines from OpenCV such as initUndistortRectifyMap()
> https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
> can be used to generate the binary blob, with an additional decimation 
> processing stage to satisfy the 16x16 macro block vertices grid and the 
> fixed point format.
> 
> > 
> > The issue I see with this control is relate to the message it gives. When adding
> > controls for the prosperity, we want these control to actually be usable. This
> > is possible if the documentation makes its usage obvious, or if there is Open
> > Source userland to support that.
> So yes, most famous vision opensource project such OpenCV can be used to 
> generate the blob.
> > 
> > None of this is met, so as a side effect, this looks like NXP sneaking in
> > private blob control into a publicly maintained Open Source project.
> I then disagree with this statement considering my previous comments.
> 
> I plan to release publicly some programming examples on how to generate 
> the dewarping map only using openCV library routines and aligned with 
> lenses calibration state of the art method.
> A dedicated openCV module taking benefit of the DW100 will be published 
> as well.
> 
> A long term target is to add its support in libcamera, combined with all 
> media components (CSI, ISP, ISI) pulled from upstream kernel tree.
> 
>   This isn't
> > truly aligned with how V4L2 controls are meant to be. Doing trivial lut
> > synthesis in the kernel could be fine though.
> I am not sure what you meant with this comment.
> 
> As part of this patch series, an identity map is generated in the driver 
> which should be enough for anyone familiar with dewarping process.
> If you meant to generate the remapping table from the lens calibration 
> data, I don't think this is a reasonable option considering the 
> NP-completeness of the problem.
> 
> If this is the idea of binary blob (despite its public format 
> description) which hurts you, the map can be exposed to the kernel in a 
> more human readable format such Image_in(xin, yin) -> Image_out(xout, 
> yout) in UQ1.31 format but will add extra processing at runtime for 
> something which has to be done anyway offline, and memory overhead. But 
> I don't think we can end with a generic v4l2 control considering the 
> hardware restrictions (vertices position, limited fixed point 
> resolution, etc..).

Please avoid the implication that I would be *hurt* by your patchset; your
imagination will make you read my comment as saying something it is not. My
comments are strictly scoped to the information you have provided with the
patchset to justify adding a vendor control, in contrast to providing controls
that could be reused by another driver later. I'm not into lenses or anything; I
strictly review the userland APIs that you submitted, with care for
documentation and usability.

Try to ignore everything you know about the inner workings of this hardware
design, and perhaps about dewarping techniques, and you may end up with a
different read of your patchset. My impression while reading it is that I would
not be able to use it due to the lack of examples. And if the NXP website were
to stop hosting the documentation, that would make it just impossible. Time has
shown that vendor controls are rarely the solution and should only be added
with great care and good documentation. For a first driver supporting a
technology like this one, it could be acceptable, assuming it is documented in
a future-proof way.

All the information and the rationale you are adding in this reply can be added
to the next submission. What I think you strictly need to address:

- The LUT format and meaning should be documented directly in the Linux kernel
documentation. Having to register an account with NXP in order to download the
documentation is not acceptable and not future-safe.
- You forgot to provide the output of v4l2-compliance; I hadn't mentioned it
yet, but that request would have come, of course.

The rest are just nice-to-haves, though generally wanted.

- The name of the control could be made more descriptive. What is the LUT
mapping, in one word? That word could be added to the name.
- The patchset could mention userland code that uses it, which would show that
this is actually tested.*
- There are other features you mention, unrelated to the dewarp feature. You
haven't said with what userland these have been tested. M2M scaling, CSC and
crop are generic and should just work with existing userland. You could use
GStreamer as an example.

* You'll find this funny, or perhaps even insulting at first, but you'd be
surprised how much code (especially from ARM SoC vendors) gets sent every year
that doesn't even compile or has never been tested after being up-ported from
an older tree. And that is only scratching the surface of the problems we have
to deal with. Notably, drivers where only 1 pixel format out of, let's say, 10
has been tested, and which come with broken stride and memory buffer size
calculations, causing much larger trouble in the system.

> 
> Adding a generic dewarping API to V4L2 is possible but this was not the 
> scope of this patchset, and anyway missing data on any existing public 
> dewarp hardware implementation supported by the kernel is somehow a 
> blocker for this.

I was asking you to share the research that made you opt out of any kind of
non-vendor control for this feature. From your original submission, it would
have been ill-advised for me to assume anything. Note that the programming
interface for a V4L2 driver does not need to be based on another hardware
vendor's interface. I'm not in this industry, but there could have been an
industry standard for expressing lens correction, produced through a
calibration process. The one thing I've been assuming is that you are in the
industry and would be able to share a bit on that.

Laurent Pinchart March 9, 2022, 8:25 p.m. UTC | #6
Hi Nicolas and Xavier,

On Wed, Mar 09, 2022 at 03:08:06PM -0500, Nicolas Dufresne wrote:
> Le mercredi 09 mars 2022 à 00:16 +0100, Xavier Roumegue (OSS) a écrit :
> > On 3/8/22 21:28, Nicolas Dufresne wrote:
> > > Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
> > > > On 3/8/22 20:15, Nicolas Dufresne wrote:
> > > > > Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
> > > > > > The DW100 driver gets the dewarping mapping as a binary blob from the
> > > > > > userspace application through a custom control.
> > > > > > The blob format is hardware specific so create a dedicated control for
> > > > > > this purpose.
> > > > > > 
> > > > > > Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
> > > > > > ---
> > > > > >    Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
> > > > > >    include/uapi/linux/dw100.h                          | 11 +++++++++++
> > > > > >    2 files changed, 18 insertions(+)
> > > > > >    create mode 100644 include/uapi/linux/dw100.h
> > > > > > 
> > > > > > diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > > index 20aeae63a94f..3abad05849ad 100644
> > > > > > --- a/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > > +++ b/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > > @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
> > > > > >    More details on the DW100 hardware operations can be found in
> > > > > >    *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
> > > > > >    
> > > > > > +The Vivante DW100 m2m driver implements the following driver-specific control:
> > > > > > +
> > > > > > +``V4L2_CID_DW100_MAPPING (integer)``
> > > > > > +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
> > > > > > +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
> > > > > > +    dynamic array.
> > > > > > +
> > > > > >    .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
> > > > > 
> > > > > This point to a document names "i.MX 8M Plus Applications Processor Datasheet
> > > > > for Industrial Products" which does not contain that reference.
> > > > My bad.. Wrong link. :)
> > > > Will repost with correct link.
> > > 
> > > Thanks. What I wanted to check is if it actually made sense to expose the
> > > synthetized HW LUT. But for this, one need to share the parameters / algo needed
> > > to generate them.
> >
> > There is no special dewarping algorithm which strictly depends on the 
> > dw100 IP, or optimized for the IP capabilities.
> > 
> >   This way we can compare against other popular dewarp
> > > algorithms / API and see if they have something in common.
> >
> > The dw100 hw lut description is rather close to a how you implement 
> > dewarping with openGL taking benefit of the shader pipeline stage.
> > The main differences with OpenGL implementation are:
> > - Fixed vertices coordinates (16x16) vs any
> > - Limited resolution on input (texture) coordinates (UQ12.4) vs float
> > 
> > Standard routines from OpenCV such as initUndistortRectifyMap()
> > https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
> > can be used to generate the binary blob, with an additional decimation 
> > processing stage to satisfy the 16x16 macro block vertices grid and the 
> > fixed point format.
> > 
> > > The issue I see with this control is relate to the message it gives. When adding
> > > controls for the prosperity, we want these control to actually be usable. This
> > > is possible if the documentation makes its usage obvious, or if there is Open
> > > Source userland to support that.
> >
> > So yes, most famous vision opensource project such OpenCV can be used to 
> > generate the blob.
> >
> > > None of this is met, so as a side effect, this looks like NXP sneaking in
> > > private blob control into a publicly maintained Open Source project.
> >
> > I then disagree with this statement considering my previous comments.
> > 
> > I plan to release publicly some programming examples on how to generate 
> > the dewarping map only using openCV library routines and aligned with 
> > lenses calibration state of the art method.
> > A dedicated openCV module taking benefit of the DW100 will be published 
> > as well.
> > 
> > A long term target is to add its support in libcamera, combined with all 
> > media components (CSI, ISP, ISI) pulled from upstream kernel tree.
> > 
> >   This isn't
> > > truly aligned with how V4L2 controls are meant to be. Doing trivial lut
> > > synthesis in the kernel could be fine though.
> >
> > I am not sure what you meant with this comment.
> > 
> > As part of this patch series, an identity map is generated in the driver 
> > which should be enough for anyone familiar with dewarping process.
> > If you meant to generate the remapping table from the lens calibration 
> > data, I don't think this is a reasonable option considering the 
> > NP-completeness of the problem.
> > 
> > If this is the idea of binary blob (despite its public format 
> > description) which hurts you, the map can be exposed to the kernel in a 
> > more human readable format such Image_in(xin, yin) -> Image_out(xout, 
> > yout) in UQ1.31 format but will add extra processing at runtime for 
> > something which has to be done anyway offline, and memory overhead. But 
> > I don't think we can end with a generic v4l2 control considering the 
> > hardware restrictions (vertices position, limited fixed point 
> > resolution, etc..).
> 
> Please avoid implication that I would be *hurt* by your patchset.

There are many developers in the community for whom code is their baby,
so being hurt by a patch series is actually a feeling that can occur
:-) I don't think anyone is trying to hurt anyone here. There was a bit
of miscommunication in the beginning, as Xavier's cover letter didn't
outline the userspace development plans, and that has now been
clarified. The driver may expose a hardware-specific configuration blob
to userspace, but that will be fully supported by an open-source
userspace implementation.

> Your imagination will make you read my comment as saying something it is not. My
> comment are strictly scoped within the information you have provided with the
> patchset to justify adding a vendor control in contrast to providing controls
> that would be reused by another driver later. I'm not into lense or anything, I
> strictly review the userland APIs that you submitted with care on documentation
> and usability.
> 
> Try and ignore everything you know and the inner of this hardware design, and
> perhaps about dewarping technique and you may endup with a different read of
> your patchset. My impression while reading it is that I would not be able to use
> it due to lack of example. And if NXP website would stop hosting the
> documentation, this would make it just impossible. Time have showed that vendor
> controls are rarely the solution and should only be added with great care and
> good documentation. For a first driver supporting a technology like this one, it
> could be acceptable, assuming it is documented in a future proof way.

While I'm not sure there would be value in trying to standardize the
control here, documenting it in the kernel instead of simply referring
to the reference manual could indeed be nice. That's a bit of extra work
for little immediate gain, but it would be more future-proof.

> All the information and the rationale you are adding in this reply can be added
> in the next submission. What I think you should strictly address:
> 
> - The LUT format and meaning should be documented directly in the Linux Kernel
> documentation. Having to register an account with NXP in order to download the
> documentation is not acceptable and not future safe.

We don't *necessarily* need to fully document the format in the kernel
documentation itself (but we *can*), it could also be documented as part
of an open-source userspace project (think about GPU command streams
that are not documented on the kernel side but in Mesa). In any case,
the documentation should be provided to merge the driver, even if it
lives as part of a userspace project.

> - You forgot to provide the output of v4l2-compliance, I didn't mention yet, but
> that would have come of course.
> 
> The rest are just nice to have, though generally wanted.
> 
> - The name of the control could be made more descriptive. The lut is mapping
> what in one word ? And that word could be added to the name.
> - The patchset could mention userland code that uses it, which show that this is
> actually tested*
> - There is other feature you mention, unrelated to the dewarp feature. You
> haven't said with what userland these have been tested. M2M scaling, csc and
> crop are generic should just work with existing userland. You could use
> GStreamer as an example.
> 
> * You'll find this funny, or perhaps even insulting at first, but you'd be
> surprise how much code (specially from ARM SoC vendors) that get sent every year
> that don't even compile or have never been tested after being up-ported from an
> older tree. And that is only scratching the surface of the problem we have to
> deal with. Notably drivers were only 1 pixel format out of let's say 10 have
> been tested that comes with broken stride and memory buffer size calculation
> causing much larger troubles in the system.

While I don't necessarily disagree, and while I understand it may make
you cautious (as we say in French, chat échaudé craint l'eau froide --
once bitten, twice shy), let's remember to be welcoming to new
contributions without any prejudice.

> > Adding a generic dewarping API to V4L2 is possible but this was not the 
> > scope of this patchset, and anyway missing data on any existing public 
> > dewarp hardware implementation supported by the kernel is somehow a 
> > blocker for this.
> 
> I was asking to share about your research that made you opt-out any kind of non-
> vendor control for this feature. From your original submission, it would have
> been ill advised for me to assume anything. Note that programming interface for
> a V4L2 driver does not need to be based on other hardware vendor interface. I'm
> not in this industry, but there could have been an industry standard for
> expressing lense correction, produce through a a calibration process. The one
> thing I've been assuming is that you are in the industry and would be able to
> share a bit on that.
> 
> > > > > > diff --git a/include/uapi/linux/dw100.h b/include/uapi/linux/dw100.h
> > > > > > new file mode 100644
> > > > > > index 000000000000..0ef926c61cf0
> > > > > > --- /dev/null
> > > > > > +++ b/include/uapi/linux/dw100.h
> > > > > > @@ -0,0 +1,11 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
> > > > > > +/* Copyright 2022 NXP */
> > > > > > +
> > > > > > +#ifndef __UAPI_DW100_H__
> > > > > > +#define __UAPI_DW100_H__
> > > > > > +
> > > > > > +#include <linux/v4l2-controls.h>
> > > > > > +
> > > > > > +#define V4L2_CID_DW100_MAPPING		(V4L2_CID_USER_DW100_BASE + 1)
> > > > > > +
> > > > > > +#endif
Xavier Roumegue (OSS) March 10, 2022, 12:20 p.m. UTC | #7
On 3/9/22 21:08, Nicolas Dufresne wrote:
> Le mercredi 09 mars 2022 à 00:16 +0100, Xavier Roumegue (OSS) a écrit :
>>
>> On 3/8/22 21:28, Nicolas Dufresne wrote:
>>> Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
>>>> Hello Nicolas,
>>>>
>>>> On 3/8/22 20:15, Nicolas Dufresne wrote:
>>>>> Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
>>>>>> The DW100 driver gets the dewarping mapping as a binary blob from the
>>>>>> userspace application through a custom control.
>>>>>> The blob format is hardware specific so create a dedicated control for
>>>>>> this purpose.
>>>>>>
>>>>>> Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
>>>>>> ---
>>>>>>     Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
>>>>>>     include/uapi/linux/dw100.h                          | 11 +++++++++++
>>>>>>     2 files changed, 18 insertions(+)
>>>>>>     create mode 100644 include/uapi/linux/dw100.h
>>>>>>
>>>>>> diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
>>>>>> index 20aeae63a94f..3abad05849ad 100644
>>>>>> --- a/Documentation/userspace-api/media/drivers/dw100.rst
>>>>>> +++ b/Documentation/userspace-api/media/drivers/dw100.rst
>>>>>> @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
>>>>>>     More details on the DW100 hardware operations can be found in
>>>>>>     *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
>>>>>>     
>>>>>> +The Vivante DW100 m2m driver implements the following driver-specific control:
>>>>>> +
>>>>>> +``V4L2_CID_DW100_MAPPING (integer)``
>>>>>> +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
>>>>>> +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
>>>>>> +    dynamic array.
>>>>>> +
>>>>>>     .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
>>>>>
>>>>> This point to a document names "i.MX 8M Plus Applications Processor Datasheet
>>>>> for Industrial Products" which does not contain that reference.
>>>> My bad.. Wrong link. :)
>>>> Will repost with correct link.
>>>
>>> Thanks. What I wanted to check is if it actually made sense to expose the
>>> synthetized HW LUT. But for this, one need to share the parameters / algo needed
>>> to generate them.
>> There is no special dewarping algorithm which strictly depends on the
>> dw100 IP, or optimized for the IP capabilities.
>>
>>    This way we can compare against other popular dewarp
>>> algorithms / API and see if they have something in common.
>> The dw100 hw lut description is rather close to a how you implement
>> dewarping with openGL taking benefit of the shader pipeline stage.
>> The main differences with OpenGL implementation are:
>> - Fixed vertices coordinates (16x16) vs any
>> - Limited resolution on input (texture) coordinates (UQ12.4) vs float
>>
>> Standard routines from OpenCV such as initUndistortRectifyMap()
>> https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
>> can be used to generate the binary blob, with an additional decimation
>> processing stage to satisfy the 16x16 macro block vertices grid and the
>> fixed point format.
>>
>>>
>>> The issue I see with this control is relate to the message it gives. When adding
>>> controls for the prosperity, we want these control to actually be usable. This
>>> is possible if the documentation makes its usage obvious, or if there is Open
>>> Source userland to support that.
>> So yes, most famous vision opensource project such OpenCV can be used to
>> generate the blob.
>>>
>>> None of this is met, so as a side effect, this looks like NXP sneaking in
>>> private blob control into a publicly maintained Open Source project.
>> I then disagree with this statement considering my previous comments.
>>
>> I plan to release publicly some programming examples on how to generate
>> the dewarping map only using openCV library routines and aligned with
>> lenses calibration state of the art method.
>> A dedicated openCV module taking benefit of the DW100 will be published
>> as well.
>>
>> A long term target is to add its support in libcamera, combined with all
>> media components (CSI, ISP, ISI) pulled from upstream kernel tree.
>>
>>    This isn't
>>> truly aligned with how V4L2 controls are meant to be. Doing trivial lut
>>> synthesis in the kernel could be fine though.
>> I am not sure what you meant with this comment.
>>
>> As part of this patch series, an identity map is generated in the driver
>> which should be enough for anyone familiar with dewarping process.
>> If you meant to generate the remapping table from the lens calibration
>> data, I don't think this is a reasonable option considering the
>> NP-completeness of the problem.
>>
>> If this is the idea of binary blob (despite its public format
>> description) which hurts you, the map can be exposed to the kernel in a
>> more human readable format such Image_in(xin, yin) -> Image_out(xout,
>> yout) in UQ1.31 format but will add extra processing at runtime for
>> something which has to be done anyway offline, and memory overhead. But
>> I don't think we can end with a generic v4l2 control considering the
>> hardware restrictions (vertices position, limited fixed point
>> resolution, etc..).
> 
> Please avoid implication that I would be *hurt* by your patchset. Your
> imagination will make you read my comment as saying something it is not. My
> comment are strictly scoped within the information you have provided with the
> patchset to justify adding a vendor control in contrast to providing controls
> that would be reused by another driver later. I'm not into lense or anything, I
> strictly review the userland APIs that you submitted with care on documentation
> and usability.
> 
> Try and ignore everything you know and the inner of this hardware design, and
> perhaps about dewarping technique and you may endup with a different read of
> your patchset. My impression while reading it is that I would not be able to use
> it due to lack of example. And if NXP website would stop hosting the
> documentation, this would make it just impossible. Time have showed that vendor
> controls are rarely the solution and should only be added with great care and
> good documentation. For a first driver supporting a technology like this one, it
> could be acceptable, assuming it is documented in a future proof way.
I fully understand uAPI changes have to be handled with care, and that
was the reason I was initially willing to use a private custom control
(as a few drivers do), without being aware of the current policy in
this regard.

I was willing to share the details of the hardware specification through
the NXP website so that one could get all the public details available on
the IP, and I was (wrongly) assuming the code would speak for itself and
give an indication of its format (which is actually pretty simple). Again,
I understand one could be mistrustful of documentation hosted out of the
kernel tree for the reasons you mentioned, even though the risk is pretty
small as NXP (like most vendors) has some long term maintenance legal
contracts to fulfill.

> 
> All the information and the rationale you are adding in this reply can be added
> in the next submission. What I think you should strictly address:
> 
> - The LUT format and meaning should be documented directly in the Linux Kernel
> documentation. Having to register an account with NXP in order to download the
> documentation is not acceptable and not future safe.
Will do, and will provide a short script example to generate the LUT.
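A minimal sketch of what such a script could look like, generating the identity map in Python. The 32-bit word layout (X in the low 16 bits, Y in the high 16 bits, both UQ12.4) and the one-extra-vertex-per-axis grid size are assumptions drawn from this thread's discussion, not taken from the NXP manual:

```python
def to_uq12_4(coord):
    # UQ12.4 fixed point: 12 integer bits, 4 fractional bits (1/16 pixel step)
    return int(round(coord * 16)) & 0xFFFF

def identity_lut(width, height, block=16):
    # One vertex per 16x16 block, plus one extra column/row closing the grid
    cols = (width + block - 1) // block + 1
    rows = (height + block - 1) // block + 1
    lut = []
    for row in range(rows):
        for col in range(cols):
            # Pack one source coordinate pair per destination grid vertex
            word = (to_uq12_4(row * block) << 16) | to_uq12_4(col * block)
            lut.append(word)
    return lut

lut = identity_lut(640, 480)
print(len(lut))  # 41 * 31 = 1271 vertices
```

A real dewarping script would replace the identity coordinates with the output of a calibration-based remap before packing.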
> - You forgot to provide the output of v4l2-compliance, I didn't mention yet, but
> that would have come of course.
The v4l2-compliance report is actually in the cover letter of the patchset.
> 
> The rest are just nice to have, though generally wanted.
> 
> - The name of the control could be made more descriptive. The lut is mapping
> what in one word ? And that word could be added to the name.
I am running out of imagination trying to figure out the right word to use.
The LUT is mapping "input pixel coordinates" to "output pixel 
coordinates".
Using OpenGL semantics, this maps texture coordinates to vertex 
coordinates. Any naming suggestions are welcome.

> - The patchset could mention userland code that uses it, which show that this is
> actually tested*

Will do.
The custom control was tested with a GStreamer pipeline using a (hacky) 
gstv4l2transform element and an OpenCV script using a custom module which 
will be shared publicly.



> - There is other feature you mention, unrelated to the dewarp feature. You
> haven't said with what userland these have been tested. M2M scaling, csc and
> crop are generic should just work with existing userland. You could use
> GStreamer as an example.
v4l2-ctl and a GStreamer pipeline using the (vanilla) gstv4l2transform 
element have been used for testing.

Unfortunately, I had to apply one-liner patches to v4l2-ctl to get 
cropping working, preventing the use of read_write_padded_frame(), 
which, if I got it right, applies a software cropping/compose for the 
FWHT cases and seems incorrect for generic m2m.

https://git.linuxtv.org/v4l-utils.git/tree/utils/v4l2-ctl/v4l2-ctl-streaming.cpp#n1112

https://git.linuxtv.org/v4l-utils.git/tree/utils/v4l2-ctl/v4l2-ctl-streaming.cpp#n1372

I will open a thread on v4l2-utils to discuss this.


> 
> * You'll find this funny, or perhaps even insulting at first, but you'd be
> surprise how much code (specially from ARM SoC vendors) that get sent every year
> that don't even compile or have never been tested after being up-ported from an
> older tree. And that is only scratching the surface of the problem we have to
> deal with. Notably drivers were only 1 pixel format out of let's say 10 have
> been tested that comes with broken stride and memory buffer size calculation
> causing much larger troubles in the system.
This certainly does not guarantee the driver is bug-free, but I swear I 
tested all in/out pixel format combinations, and the driver compiles 
free of W=12 warnings :)
> 
>>
>> Adding a generic dewarping API to V4L2 is possible but this was not the
>> scope of this patchset, and anyway missing data on any existing public
>> dewarp hardware implementation supported by the kernel is somehow a
>> blocker for this.
> 
> I was asking to share about your research that made you opt-out any kind of non-
> vendor control for this feature. From your original submission, it would have
> been ill advised for me to assume anything. Note that programming interface for
> a V4L2 driver does not need to be based on other hardware vendor interface. I'm
> not in this industry, but there could have been an industry standard for
> expressing lense correction, produce through a a calibration process. The one
> thing I've been assuming is that you are in the industry and would be able to
> share a bit on that.
I have only been looking at dewarping for 3 months, but I can certainly 
share my understanding, for whatever it's worth.
The optical system can be mathematically described using a set of 
matrices and lens distortion parameters which are estimated during 
the calibration stage.

https://docs.opencv.org/4.5.5/dc/dbb/tutorial_py_calibration.html

Then it's a matter of solving a non-linear system (ouch) to get the 
remapping LUT correcting the distortion. OpenCV computes a 1:1 pixel 
(re)mapping.

Performing that computation in software in the kernel is obviously 
out of the question.
One could imagine that some hardware dewarpers might implement the 
mapping LUT computation in hardware, and if so, the control API could 
have been derived from those calibration parameters. I have no idea if 
such hardware exists.

An alternative is to consider the remapping LUT as an input, which 
seems more reasonable, applying divide-and-conquer principles.
I would rather go for that option if a generic V4L2 interface were to 
be designed, combined with a library. This would also likely create 
synergies with OpenGL implementations from the application standpoint.

The driver would have to expose its mapping capabilities (mainly 
vertex coordinate constraints (x:y mapping) and the coordinate 
precision, fixed- or floating-point).
But it might be worth waiting a bit to check the availability trend of 
such capable hardware.



> 
>>
>>>
>>>>>
>>>>>> diff --git a/include/uapi/linux/dw100.h b/include/uapi/linux/dw100.h
>>>>>> new file mode 100644
>>>>>> index 000000000000..0ef926c61cf0
>>>>>> --- /dev/null
>>>>>> +++ b/include/uapi/linux/dw100.h
>>>>>> @@ -0,0 +1,11 @@
>>>>>> +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
>>>>>> +/* Copyright 2022 NXP */
>>>>>> +
>>>>>> +#ifndef __UAPI_DW100_H__
>>>>>> +#define __UAPI_DW100_H__
>>>>>> +
>>>>>> +#include <linux/v4l2-controls.h>
>>>>>> +
>>>>>> +#define V4L2_CID_DW100_MAPPING		(V4L2_CID_USER_DW100_BASE + 1)
>>>>>> +
>>>>>> +#endif
>>>>>
>>>
>
Nicolas Dufresne March 10, 2022, 9:52 p.m. UTC | #8
Le jeudi 10 mars 2022 à 13:20 +0100, Xavier Roumegue (OSS) a écrit :
> 
> On 3/9/22 21:08, Nicolas Dufresne wrote:
> > Le mercredi 09 mars 2022 à 00:16 +0100, Xavier Roumegue (OSS) a écrit :
> > > 
> > > On 3/8/22 21:28, Nicolas Dufresne wrote:
> > > > Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
> > > > > Hello Nicolas,
> > > > > 
> > > > > On 3/8/22 20:15, Nicolas Dufresne wrote:
> > > > > > Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
> > > > > > > The DW100 driver gets the dewarping mapping as a binary blob from the
> > > > > > > userspace application through a custom control.
> > > > > > > The blob format is hardware specific so create a dedicated control for
> > > > > > > this purpose.
> > > > > > > 
> > > > > > > Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
> > > > > > > ---
> > > > > > >     Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
> > > > > > >     include/uapi/linux/dw100.h                          | 11 +++++++++++
> > > > > > >     2 files changed, 18 insertions(+)
> > > > > > >     create mode 100644 include/uapi/linux/dw100.h
> > > > > > > 
> > > > > > > diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > > > index 20aeae63a94f..3abad05849ad 100644
> > > > > > > --- a/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > > > +++ b/Documentation/userspace-api/media/drivers/dw100.rst
> > > > > > > @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
> > > > > > >     More details on the DW100 hardware operations can be found in
> > > > > > >     *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
> > > > > > >     
> > > > > > > +The Vivante DW100 m2m driver implements the following driver-specific control:
> > > > > > > +
> > > > > > > +``V4L2_CID_DW100_MAPPING (integer)``
> > > > > > > +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
> > > > > > > +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
> > > > > > > +    dynamic array.
> > > > > > > +
> > > > > > >     .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
> > > > > > 
> > > > > > This point to a document names "i.MX 8M Plus Applications Processor Datasheet
> > > > > > for Industrial Products" which does not contain that reference.
> > > > > My bad.. Wrong link. :)
> > > > > Will repost with correct link.
> > > > 
> > > > Thanks. What I wanted to check is if it actually made sense to expose the
> > > > synthetized HW LUT. But for this, one need to share the parameters / algo needed
> > > > to generate them.
> > > There is no special dewarping algorithm which strictly depends on the
> > > dw100 IP, or optimized for the IP capabilities.
> > > 
> > >    This way we can compare against other popular dewarp
> > > > algorithms / API and see if they have something in common.
> > > The dw100 hw lut description is rather close to a how you implement
> > > dewarping with openGL taking benefit of the shader pipeline stage.
> > > The main differences with OpenGL implementation are:
> > > - Fixed vertices coordinates (16x16) vs any
> > > - Limited resolution on input (texture) coordinates (UQ12.4) vs float
> > > 
> > > Standard routines from OpenCV such as initUndistortRectifyMap()
> > > https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
> > > can be used to generate the binary blob, with an additional decimation
> > > processing stage to satisfy the 16x16 macro block vertices grid and the
> > > fixed point format.
> > > 
> > > > 
> > > > The issue I see with this control is relate to the message it gives. When adding
> > > > controls for the prosperity, we want these control to actually be usable. This
> > > > is possible if the documentation makes its usage obvious, or if there is Open
> > > > Source userland to support that.
> > > So yes, most famous vision opensource project such OpenCV can be used to
> > > generate the blob.
> > > > 
> > > > None of this is met, so as a side effect, this looks like NXP sneaking in
> > > > private blob control into a publicly maintained Open Source project.
> > > I then disagree with this statement considering my previous comments.
> > > 
> > > I plan to release publicly some programming examples on how to generate
> > > the dewarping map only using openCV library routines and aligned with
> > > lenses calibration state of the art method.
> > > A dedicated openCV module taking benefit of the DW100 will be published
> > > as well.
> > > 
> > > A long term target is to add its support in libcamera, combined with all
> > > media components (CSI, ISP, ISI) pulled from upstream kernel tree.
> > > 
> > >    This isn't
> > > > truly aligned with how V4L2 controls are meant to be. Doing trivial lut
> > > > synthesis in the kernel could be fine though.
> > > I am not sure what you meant with this comment.
> > > 
> > > As part of this patch series, an identity map is generated in the driver
> > > which should be enough for anyone familiar with dewarping process.
> > > If you meant to generate the remapping table from the lens calibration
> > > data, I don't think this is a reasonable option considering the
> > > NP-completeness of the problem.
> > > 
> > > If this is the idea of binary blob (despite its public format
> > > description) which hurts you, the map can be exposed to the kernel in a
> > > more human readable format such Image_in(xin, yin) -> Image_out(xout,
> > > yout) in UQ1.31 format but will add extra processing at runtime for
> > > something which has to be done anyway offline, and memory overhead. But
> > > I don't think we can end with a generic v4l2 control considering the
> > > hardware restrictions (vertices position, limited fixed point
> > > resolution, etc..).
> > 
> > Please avoid implication that I would be *hurt* by your patchset. Your
> > imagination will make you read my comment as saying something it is not. My
> > comment are strictly scoped within the information you have provided with the
> > patchset to justify adding a vendor control in contrast to providing controls
> > that would be reused by another driver later. I'm not into lense or anything, I
> > strictly review the userland APIs that you submitted with care on documentation
> > and usability.
> > 
> > Try and ignore everything you know and the inner of this hardware design, and
> > perhaps about dewarping technique and you may endup with a different read of
> > your patchset. My impression while reading it is that I would not be able to use
> > it due to lack of example. And if NXP website would stop hosting the
> > documentation, this would make it just impossible. Time have showed that vendor
> > controls are rarely the solution and should only be added with great care and
> > good documentation. For a first driver supporting a technology like this one, it
> > could be acceptable, assuming it is documented in a future proof way.
> I fully understand uapi changes have to be handle with care, and that 
> was the reason I was initially willing to use a private custom control 
> (as few drivers are doing), without being aware of the current policy 
> with this regards.
> 
> I was willing to share the details of the hardware specification through 
> NXP website such as one could get all public details available on the 
> IP, and I was (wrongly) thinking the code was talking by itself to give 
> indication on its format (finally pretty simple). Again, I understand 
> one could be mistrustful with documentation hosted out of kernel tree 
> for the reasons you mentioned, even though the risk is pretty small as 
> NXP (as most of the vendors) has some long term maintenance legal 
> contracts to fulfill.
> 
> > 
> > All the information and the rationale you are adding in this reply can be added
> > in the next submission. What I think you should strictly address:
> > 
> > - The LUT format and meaning should be documented directly in the Linux Kernel
> > documentation. Having to register an account with NXP in order to download the
> > documentation is not acceptable and not future safe.
> Will do, and will provide a short script example to generate the LUT.
> > - You forgot to provide the output of v4l2-compliance, I didn't mention yet, but
> > that would have come of course.
> The v4l2-compliance report is actually in the cover letter of the patchset.
> > 
> > The rest are just nice to have, though generally wanted.
> > 
> > - The name of the control could be made more descriptive. The lut is mapping
> > what in one word ? And that word could be added to the name.
> I am running out of imagination for figuring out the good word to use.
> The LUT is mapping "input pixels coordinates" to "output pixels 
> coordinates".
> Using OpenGL semantic, this maps textures coordinates to vertices 
> coordinates. Any naming suggestions are welcome.

I just read the two paragraphs of documentation in the 7K-page TRM, and
indeed this is simple. The table is relocating/remapping vertices (tiles),
not pixels. Is my reading correct?

So it's basically an array of 32-bit X/Y coordinates. Each coordinate is a
16-bit fixed-point value, with 12 integer bits and 4 fractional bits
(convenient considering we have 16 x 16 vertices, as it gives a step of
1/16). And the size of the control will vary depending on the resolution
of the incoming stream: basically a rounded-up form of width/16 x
height/16 x 32 bits. The right-most and bottom-most tiles are just missing
pixels if the image size is not aligned, at least that was my reading of
the doc.

Each coordinate points to the middle of its tile/vertex, which is
relocated with interpolation toward the specified coordinate, basically
stretching the image in that direction.

Some naming ideas:

- DW100_DEWARPING_MAP

Just the doc wording, no detail.

- DW100_DEWARPING_GRID_MAP

Another wording used in the doc.

- DW100_DEWARPING_16x16_VERTEX_MAP

A little more detail, still using mostly doc wording.

- DW100_DEWARPING_16x16_TILE_MAP

Using tile? I simply used the term tile above because of my background, but
vertex might speak better to folks used to doing this in vertex shaders?

- DW100_DEWARPING_16x16_GRID_MAP

That basically avoids both tile and vertex; grid is also a wording used in
the doc.

Just some ideas. I kept the DW100 prefix since it's likely going to be
classified as a vendor control. I would not make it private though.

> 
> > - The patchset could mention userland code that uses it, which show that this is
> > actually tested*
> 
> Will do.
> Custom control was tested with a gst pipelone using a (hacky) 
> gstv4l2transform element and a opencv script using custom module which 
> will be shared publicly.
> 
> 
> 
> > - There is other feature you mention, unrelated to the dewarp feature. You
> > haven't said with what userland these have been tested. M2M scaling, csc and
> > crop are generic should just work with existing userland. You could use
> > GStreamer as an example.
> v4l2-ctl and gst pipeline using (vanilla) gstv4l2transform have been 
> used for testing.
> 
> Unfortunately, I had to apply oneliner patches on v4l2-ctl to get the 
> cropping working to prevent the use of read_write_padded_frame() for 
> FWHT cases which is applying a sw cropping/compose if I got it right, 
> which seems incorrect for generic m2m.
> 
> https://git.linuxtv.org/v4l-utils.git/tree/utils/v4l2-ctl/v4l2-ctl-streaming.cpp#n1112
> 
> https://git.linuxtv.org/v4l-utils.git/tree/utils/v4l2-ctl/v4l2-ctl-streaming.cpp#n1372
> 
> I will open a thread on v4l2-utils to discuss this.
> 
> 
> > 
> > * You'll find this funny, or perhaps even insulting at first, but you'd be
> > surprise how much code (specially from ARM SoC vendors) that get sent every year
> > that don't even compile or have never been tested after being up-ported from an
> > older tree. And that is only scratching the surface of the problem we have to
> > deal with. Notably drivers were only 1 pixel format out of let's say 10 have
> > been tested that comes with broken stride and memory buffer size calculation
> > causing much larger troubles in the system.
> This certainly does not ensure the driver to be bug-free, but I swear I 
> tested all in/out pixel format combinations and driver compilation is 
> W=12 warnings free :)
> > 
> > > 
> > > Adding a generic dewarping API to V4L2 is possible but this was not the
> > > scope of this patchset, and anyway missing data on any existing public
> > > dewarp hardware implementation supported by the kernel is somehow a
> > > blocker for this.
> > 
> > I was asking to share about your research that made you opt-out any kind of non-
> > vendor control for this feature. From your original submission, it would have
> > been ill advised for me to assume anything. Note that programming interface for
> > a V4L2 driver does not need to be based on other hardware vendor interface. I'm
> > not in this industry, but there could have been an industry standard for
> > expressing lense correction, produce through a a calibration process. The one
> > thing I've been assuming is that you are in the industry and would be able to
> > share a bit on that.
> I am looking on dewarp stuff for 3 months but I can share for sure my 
> undersanding whatever it worths.
> The optical system can be mathematically described using a set of 
> matrices and lenses distorsion parameters which are estimated during 
> your calibration stage.
> 
> https://docs.opencv.org/4.5.5/dc/dbb/tutorial_py_calibration.html
> 
> Then it's a matter of resolving a non linear system (ouch) to get the 
> remapping lut correcting the distorsion. OpenCV computes a 1:1 pixel 
> (re)mapping.
> 
> This is obviously impossible to perform those software computation in 
> the kernel.
> One could imagine that some hw dewarpers might have implemented mapping 
> lut computation in hardware, and if so, the control api could have been 
> inherited from those calibration parameters. I have no idea if such 
> hardware exists.
> 
> Another alternative is to consider the remapping LUT as an input which 
> seems more reasonable applying divide and conquer concepts.
> I would rather go for such option if a generic v4l2 interface should be 
> designed and combined with a library. And this would likely help to get 
> synergies with openGL implementation from the application standpoint.
> 
> The driver would have to expose its mapping capabilities (mainly 
> vertices coordinates constraints (x:y mapping) and float resolution).
> But this might worth waiting a bit to check the availability trend on 
> such capable hardware.
> 
> 
> 
> > 
> > > 
> > > > 
> > > > > > 
> > > > > > > diff --git a/include/uapi/linux/dw100.h b/include/uapi/linux/dw100.h
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..0ef926c61cf0
> > > > > > > --- /dev/null
> > > > > > > +++ b/include/uapi/linux/dw100.h
> > > > > > > @@ -0,0 +1,11 @@
> > > > > > > +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
> > > > > > > +/* Copyright 2022 NXP */
> > > > > > > +
> > > > > > > +#ifndef __UAPI_DW100_H__
> > > > > > > +#define __UAPI_DW100_H__
> > > > > > > +
> > > > > > > +#include <linux/v4l2-controls.h>
> > > > > > > +
> > > > > > > +#define V4L2_CID_DW100_MAPPING		(V4L2_CID_USER_DW100_BASE + 1)
> > > > > > > +
> > > > > > > +#endif
> > > > > > 
> > > > 
> >
Xavier Roumegue (OSS) March 10, 2022, 10:42 p.m. UTC | #9
On 3/10/22 22:52, Nicolas Dufresne wrote:
> Le jeudi 10 mars 2022 à 13:20 +0100, Xavier Roumegue (OSS) a écrit :
>>
>> On 3/9/22 21:08, Nicolas Dufresne wrote:
>>> Le mercredi 09 mars 2022 à 00:16 +0100, Xavier Roumegue (OSS) a écrit :
>>>>
>>>> On 3/8/22 21:28, Nicolas Dufresne wrote:
>>>>> Le mardi 08 mars 2022 à 20:42 +0100, Xavier Roumegue (OSS) a écrit :
>>>>>> Hello Nicolas,
>>>>>>
>>>>>> On 3/8/22 20:15, Nicolas Dufresne wrote:
>>>>>>> Le mardi 08 mars 2022 à 19:48 +0100, Xavier Roumegue a écrit :
>>>>>>>> The DW100 driver gets the dewarping mapping as a binary blob from the
>>>>>>>> userspace application through a custom control.
>>>>>>>> The blob format is hardware specific so create a dedicated control for
>>>>>>>> this purpose.
>>>>>>>>
>>>>>>>> Signed-off-by: Xavier Roumegue <xavier.roumegue@oss.nxp.com>
>>>>>>>> ---
>>>>>>>>      Documentation/userspace-api/media/drivers/dw100.rst |  7 +++++++
>>>>>>>>      include/uapi/linux/dw100.h                          | 11 +++++++++++
>>>>>>>>      2 files changed, 18 insertions(+)
>>>>>>>>      create mode 100644 include/uapi/linux/dw100.h
>>>>>>>>
>>>>>>>> diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
>>>>>>>> index 20aeae63a94f..3abad05849ad 100644
>>>>>>>> --- a/Documentation/userspace-api/media/drivers/dw100.rst
>>>>>>>> +++ b/Documentation/userspace-api/media/drivers/dw100.rst
>>>>>>>> @@ -20,4 +20,11 @@ match the expected size inherited from the destination image resolution.
>>>>>>>>      More details on the DW100 hardware operations can be found in
>>>>>>>>      *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
>>>>>>>>      
>>>>>>>> +The Vivante DW100 m2m driver implements the following driver-specific control:
>>>>>>>> +
>>>>>>>> +``V4L2_CID_DW100_MAPPING (integer)``
>>>>>>>> +    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
>>>>>>>> +    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
>>>>>>>> +    dynamic array.
>>>>>>>> +
>>>>>>>>      .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
>>>>>>>
>>>>>>> This point to a document names "i.MX 8M Plus Applications Processor Datasheet
>>>>>>> for Industrial Products" which does not contain that reference.
>>>>>> My bad.. Wrong link. :)
>>>>>> Will repost with correct link.
>>>>>
>>>>> Thanks. What I wanted to check is if it actually made sense to expose the
>>>>> synthesized HW LUT. But for this, one needs to share the parameters / algo needed
>>>>> to generate them.
>>>> There is no special dewarping algorithm which strictly depends on the
>>>> dw100 IP, or is optimized for the IP capabilities.
>>>>
>>>>     This way we can compare against other popular dewarp
>>>>> algorithms / API and see if they have something in common.
>>>> The dw100 hw lut description is rather close to how you implement
>>>> dewarping with OpenGL, taking advantage of the shader pipeline stage.
>>>> The main differences with OpenGL implementation are:
>>>> - Fixed vertices coordinates (16x16) vs any
>>>> - Limited resolution on input (texture) coordinates (UQ12.4) vs float
>>>>
>>>> Standard routines from OpenCV such as initUndistortRectifyMap()
>>>> https://docs.opencv.org/4.5.5/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
>>>> can be used to generate the binary blob, with an additional decimation
>>>> processing stage to satisfy the 16x16 macro-block vertex grid and the
>>>> fixed point format.
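The decimation stage described above can be sketched in Python (an illustration only: the dense per-pixel maps are assumed to come from OpenCV's initUndistortRectifyMap(), and both the vertex-grid sizing and the Y-high/X-low packing are assumptions drawn from this thread, not authoritative driver behavior):

```python
import numpy as np

def dense_map_to_dw100_lut(map_x, map_y):
    """Decimate dense per-pixel remap arrays (H x W, float source
    coordinates) down to one UQ12.4 (x, y) pair per 16x16-tile vertex."""
    h, w = map_x.shape
    gh = -(-h // 16) + 1          # ceil(h / 16) + 1 vertices per column
    gw = -(-w // 16) + 1          # ceil(w / 16) + 1 vertices per row
    lut = np.empty((gh, gw), dtype=np.uint32)
    for gy in range(gh):
        for gx in range(gw):
            # sample the dense map at each tile corner, clamped to the image
            py, px = min(gy * 16, h - 1), min(gx * 16, w - 1)
            fx = int(round(float(map_x[py, px]) * 16)) & 0xFFFF  # UQ12.4
            fy = int(round(float(map_y[py, px]) * 16)) & 0xFFFF
            lut[gy, gx] = (fy << 16) | fx   # assumed layout: Y high, X low
    return lut

# identity mapping for a 64x32 destination image (a stand-in for the
# two float maps that initUndistortRectifyMap() would return)
ys, xs = np.mgrid[0:32, 0:64].astype(np.float32)
lut = dense_map_to_dw100_lut(xs, ys)
```

A real pipeline would feed the two float maps from cv2.initUndistortRectifyMap() into such a helper before flattening the array and passing it through the control.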
>>>>
>>>>>
>>>>> The issue I see with this control is related to the message it gives. When adding
>>>>> controls for posterity, we want these controls to actually be usable. This
>>>>> is possible if the documentation makes its usage obvious, or if there is Open
>>>>> Source userland to support that.
>>>> So yes, the most famous open-source vision projects such as OpenCV can be used
>>>> to generate the blob.
>>>>>
>>>>> None of this is met, so as a side effect, this looks like NXP sneaking in
>>>>> private blob control into a publicly maintained Open Source project.
>>>> I then disagree with this statement considering my previous comments.
>>>>
>>>> I plan to release publicly some programming examples on how to generate
>>>> the dewarping map only using openCV library routines and aligned with
>>>> lenses calibration state of the art method.
>>>> A dedicated openCV module taking benefit of the DW100 will be published
>>>> as well.
>>>>
>>>> A long term target is to add its support in libcamera, combined with all
>>>> media components (CSI, ISP, ISI) pulled from upstream kernel tree.
>>>>
>>>>     This isn't
>>>>> truly aligned with how V4L2 controls are meant to be. Doing trivial lut
>>>>> synthesis in the kernel could be fine though.
>>>> I am not sure what you meant with this comment.
>>>>
>>>> As part of this patch series, an identity map is generated in the driver
>>>> which should be enough for anyone familiar with the dewarping process.
>>>> If you meant to generate the remapping table from the lens calibration
>>>> data, I don't think this is a reasonable option considering the
>>>> NP-completeness of the problem.
>>>>
>>>> If it is the idea of a binary blob (despite its public format
>>>> description) which hurts you, the map can be exposed to the kernel in a
>>>> more human-readable format such as Image_in(xin, yin) -> Image_out(xout,
>>>> yout) in UQ1.31 format, but this will add extra processing at runtime for
>>>> something which has to be done offline anyway, plus memory overhead. But
>>>> I don't think we can end up with a generic v4l2 control considering the
>>>> hardware restrictions (vertex positions, limited fixed point
>>>> resolution, etc.).
>>>
>>> Please avoid implication that I would be *hurt* by your patchset. Your
>>> imagination will make you read my comment as saying something it is not. My
>>> comment are strictly scoped within the information you have provided with the
>>> patchset to justify adding a vendor control in contrast to providing controls
>>> that would be reused by another driver later. I'm not into lens or anything, I
>>> strictly review the userland APIs that you submitted with care on documentation
>>> and usability.
>>>
>>> Try and ignore everything you know about the innards of this hardware design, and
>>> perhaps about dewarping techniques, and you may end up with a different read of
>>> your patchset. My impression while reading it is that I would not be able to use
>>> it due to lack of example. And if NXP website would stop hosting the
>>> documentation, this would make it just impossible. Time has shown that vendor
>>> controls are rarely the solution and should only be added with great care and
>>> good documentation. For a first driver supporting a technology like this one, it
>>> could be acceptable, assuming it is documented in a future proof way.
>> I fully understand uapi changes have to be handled with care, and that
>> was the reason I was initially willing to use a private custom control
>> (as a few drivers do), without being aware of the current policy
>> in this regard.
>>
>> I was willing to share the details of the hardware specification through
>> the NXP website so that one could get all the public details available on
>> the IP, and I was (wrongly) thinking the code spoke for itself, giving an
>> indication of its format (finally pretty simple). Again, I understand
>> one could be mistrustful of documentation hosted out of the kernel tree
>> for the reasons you mentioned, even though the risk is pretty small as
>> NXP (like most vendors) has some long term maintenance legal
>> contracts to fulfill.
>>
>>>
>>> All the information and the rationale you are adding in this reply can be added
>>> in the next submission. What I think you should strictly address:
>>>
>>> - The LUT format and meaning should be documented directly in the Linux Kernel
>>> documentation. Having to register an account with NXP in order to download the
>>> documentation is not acceptable and not future safe.
>> Will do, and will provide a short script example to generate the LUT.
>>> - You forgot to provide the output of v4l2-compliance, I didn't mention yet, but
>>> that would have come of course.
>> The v4l2-compliance report is actually in the cover letter of the patchset.
>>>
>>> The rest are just nice to have, though generally wanted.
>>>
>>> - The name of the control could be made more descriptive. The lut is mapping
>>> what in one word ? And that word could be added to the name.
>> I am running out of imagination for figuring out a good word to use.
>> The LUT maps "input pixel coordinates" to "output pixel
>> coordinates".
>> Using OpenGL semantic, this maps textures coordinates to vertices
>> coordinates. Any naming suggestions are welcome.
> 
> I just read the 2 paragraphs of doc in the 7K-page TRM, and indeed this is
> simple. The table is relocating/remapping vertices (tiles), not pixels. Is my
> reading correct?
Yes, you are correct. The table describes the remapping of quadrilateral
pixel areas to tiles.
> 
> So it's basically an array of 32bit X/Y coordinates. Each coordinate is 16 bit
> fixed point, with 12 bits for the integer part and 4 bits fractional (convenient
> considering we have 16 x 16 vertices, as it gives a step of 1/16). And the size
> of the control will vary depending on the resolution of the incoming stream:
> basically a rounded-up form of width/16 x height/16 * 32bit. The right- and
> bottom-most tiles are just missing pixels if the image size is not aligned, at
> least that was my reading of the doc.
Strictly speaking, the control size depends on the destination image
resolution; check dw100_create_mapping() in patch 8 for more details.

But you definitely got the idea.
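The sizing rule discussed above can be written down as a small sketch (my reading of the thread, hedged: dw100_create_mapping() in the driver remains authoritative, and the extra closing vertex per axis is an assumption):

```python
def dw100_lut_entries(dst_w, dst_h):
    """Number of 32-bit LUT entries for a destination image: one entry
    per vertex of the 16x16-pixel tile grid (tiles + 1 per axis)."""
    gw = (dst_w + 15) // 16 + 1   # ceil(dst_w / 16) + 1
    gh = (dst_h + 15) // 16 + 1   # ceil(dst_h / 16) + 1
    return gw * gh

def uq12_4_to_float(v):
    """Decode one 16-bit UQ12.4 coordinate: 12 integer bits, 4 fractional."""
    return (v & 0xFFFF) / 16.0

entries = dw100_lut_entries(1920, 1080)   # e.g. a 1080p destination
```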

> 
> The coordinate points to the middle of the tile/vertex, and relocates with
> interpolation toward the specified coordinate, basically stretching the image in
> that direction.
Yeap, as a GPU shader core does while dealing with a texture unit.
> 
> Some naming ideas:
> 
> - DW100_DEWARPING_MAP
> 
> Just the doc wording, no detail.
> 
> - DW100_DEWARPING_GRID_MAP
> 
> Another wording used in the doc.
> 
> - DW100_DEWARPING_16x16_VERTEX_MAP
> 
> A little more detail, still using mostly doc wording.
> 
> - DW100_DEWARPING_16x16_TILE_MAP
> 
> Using tile ? I simply used the term tile before because of my background, but
> vertex might speak better to folks used to doing this in vertex shaders ?
You are right to introduce the tile wording in the discussion, as the
vertex positions in the destination image are deduced from those 16x16
tiles. But it is likely more accurate to use vertex.
> 
> - DW100_DEWARPING_16x16_GRID_MAP
> 
> That basically avoid both tiles and vertex, grid is also a wording used in the
> doc.
> 
> Just some ideas. I kept the DW100 since it's likely going to be classified as
> vendor. I would not make it private though.
Many thanks for those suggestions, I might have a slight preference for 
DW100_DEWARPING_16x16_VERTEX_MAP.
> 
>>
>>> - The patchset could mention userland code that uses it, which show that this is
>>> actually tested*
>>
>> Will do.
>> The custom control was tested with a gst pipeline using a (hacky)
>> gstv4l2transform element and an opencv script using a custom module which
>> will be shared publicly.
>>
>>
>>
>>> - There is other feature you mention, unrelated to the dewarp feature. You
>>> haven't said with what userland these have been tested. M2M scaling, csc and
>>> crop are generic should just work with existing userland. You could use
>>> GStreamer as an example.
>> v4l2-ctl and gst pipeline using (vanilla) gstv4l2transform have been
>> used for testing.
>>
>> Unfortunately, I had to apply one-liner patches to v4l2-ctl to get
>> cropping working, preventing the use of read_write_padded_frame() in the
>> FWHT cases, which applies a sw cropping/compose if I got it right and
>> seems incorrect for generic m2m.
>>
>> https://git.linuxtv.org/v4l-utils.git/tree/utils/v4l2-ctl/v4l2-ctl-streaming.cpp#n1112
>>
>> https://git.linuxtv.org/v4l-utils.git/tree/utils/v4l2-ctl/v4l2-ctl-streaming.cpp#n1372
>>
>> I will open a thread on v4l2-utils to discuss this.
>>
>>
>>>
>>> * You'll find this funny, or perhaps even insulting at first, but you'd be
>>> surprised how much code (especially from ARM SoC vendors) gets sent every year
>>> that doesn't even compile or has never been tested after being up-ported from an
>>> older tree. And that is only scratching the surface of the problem we have to
>>> deal with. Notably drivers where only 1 pixel format out of let's say 10 has
>>> been tested, which come with broken stride and memory buffer size calculations
>>> causing much larger troubles in the system.
>> This certainly does not ensure the driver is bug-free, but I swear I
>> tested all in/out pixel format combinations and driver compilation is
>> W=12 warnings free :)
>>>
>>>>
>>>> Adding a generic dewarping API to V4L2 is possible, but that was not the
>>>> scope of this patchset, and anyway the missing data on any existing public
>>>> dewarp hardware implementation supported by the kernel is somewhat of a
>>>> blocker for this.
>>>
>>> I was asking you to share the research that made you opt out of any kind of non-
>>> vendor control for this feature. From your original submission, it would have
>>> been ill advised for me to assume anything. Note that the programming interface for
>>> a V4L2 driver does not need to be based on another hardware vendor's interface. I'm
>>> not in this industry, but there could have been an industry standard for
>>> expressing lens correction, produced through a calibration process. The one
>>> thing I've been assuming is that you are in the industry and would be able to
>>> share a bit on that.
>> I have only been looking at dewarp stuff for 3 months, but I can share my
>> understanding for whatever it's worth.
>> The optical system can be mathematically described using a set of
>> matrices and lens distortion parameters which are estimated during
>> the calibration stage.
>>
>> https://docs.opencv.org/4.5.5/dc/dbb/tutorial_py_calibration.html
>>
>> Then it's a matter of solving a non-linear system (ouch) to get the
>> remapping lut correcting the distortion. OpenCV computes a 1:1 pixel
>> (re)mapping.
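The "non-linear system" above can be illustrated with the radial part of the usual distortion model: the forward map (undistorted to distorted) is cheap to evaluate, but the remap LUT needs its inverse, typically solved iteratively. A minimal self-contained sketch with hypothetical coefficients (similar in spirit to what cv::undistortPoints does, not its actual code):

```python
def undistort_point(xd, yd, k1, k2, iters=20):
    """Invert the radial distortion r_d = r_u * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration on normalized coordinates."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# round-trip check: forward-distort a normalized point, then invert it back
k1, k2 = 0.1, 0.0                  # hypothetical calibration coefficients
xu, yu = 0.3, 0.2
s = 1.0 + k1 * (xu * xu + yu * yu)
xr, yr = undistort_point(xu * s, yu * s, k1, k2)
```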
>>
>> It is obviously impossible to perform those software computations in
>> the kernel.
>> One could imagine that some hw dewarpers might have implemented mapping
>> lut computation in hardware, and if so, the control api could have been
>> inherited from those calibration parameters. I have no idea if such
>> hardware exists.
>>
>> Another alternative is to consider the remapping LUT as an input, which
>> seems more reasonable when applying divide and conquer concepts.
>> I would rather go for such option if a generic v4l2 interface should be
>> designed and combined with a library. And this would likely help to get
>> synergies with openGL implementation from the application standpoint.
>>
>> The driver would have to expose its mapping capabilities (mainly
>> vertex coordinate constraints (x:y mapping) and float resolution).
>> But it might be worth waiting a bit to check the availability trend of
>> such capable hardware.
>>
>>
>>
>>>
>>>>
>>>>>
>>>>>>>
>>>>>>>> diff --git a/include/uapi/linux/dw100.h b/include/uapi/linux/dw100.h
>>>>>>>> new file mode 100644
>>>>>>>> index 000000000000..0ef926c61cf0
>>>>>>>> --- /dev/null
>>>>>>>> +++ b/include/uapi/linux/dw100.h
>>>>>>>> @@ -0,0 +1,11 @@
>>>>>>>> +/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
>>>>>>>> +/* Copyright 2022 NXP */
>>>>>>>> +
>>>>>>>> +#ifndef __UAPI_DW100_H__
>>>>>>>> +#define __UAPI_DW100_H__
>>>>>>>> +
>>>>>>>> +#include <linux/v4l2-controls.h>
>>>>>>>> +
>>>>>>>> +#define V4L2_CID_DW100_MAPPING		(V4L2_CID_USER_DW100_BASE + 1)
>>>>>>>> +
>>>>>>>> +#endif
>>>>>>>
>>>>>
>>>
>
diff mbox series

Patch

diff --git a/Documentation/userspace-api/media/drivers/dw100.rst b/Documentation/userspace-api/media/drivers/dw100.rst
index 20aeae63a94f..3abad05849ad 100644
--- a/Documentation/userspace-api/media/drivers/dw100.rst
+++ b/Documentation/userspace-api/media/drivers/dw100.rst
@@ -20,4 +20,11 @@  match the expected size inherited from the destination image resolution.
 More details on the DW100 hardware operations can be found in
 *chapter 13.15 DeWarp* of IMX8MP_ reference manuel.
 
+The Vivante DW100 m2m driver implements the following driver-specific control:
+
+``V4L2_CID_DW100_MAPPING (integer)``
+    Specifies to DW100 driver its dewarping map (aka LUT) blob as described in
+    *chapter 13.15.2.3 Dewarping Remap* of IMX8MP_ reference manual as an U32
+    dynamic array.
+
 .. _IMX8MP: https://www.nxp.com/webapp/Download?colCode=IMX8MPIEC
diff --git a/include/uapi/linux/dw100.h b/include/uapi/linux/dw100.h
new file mode 100644
index 000000000000..0ef926c61cf0
--- /dev/null
+++ b/include/uapi/linux/dw100.h
@@ -0,0 +1,11 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/* Copyright 2022 NXP */
+
+#ifndef __UAPI_DW100_H__
+#define __UAPI_DW100_H__
+
+#include <linux/v4l2-controls.h>
+
+#define V4L2_CID_DW100_MAPPING		(V4L2_CID_USER_DW100_BASE + 1)
+
+#endif