Message ID: 20210730204134.21769-2-harry.wentland@amd.com (mailing list archive)
State: New, archived
Series: A drm_plane API to support HDR planes
Hi, Thanks for having a stab at this, it's a massive complex topic to solve. Do you have the the HTML rendered somewhere for convenience? On Fri, Jul 30, 2021 at 04:41:29PM -0400, Harry Wentland wrote: > Use the new DRM RFC doc section to capture the RFC previously only > described in the cover letter at > https://patchwork.freedesktop.org/series/89506/ > > v3: > * Add sections on single-plane and multi-plane HDR > * Describe approach to define HW details vs approach to define SW intentions > * Link Jeremy Cline's excellent HDR summaries > * Outline intention behind overly verbose doc > * Describe FP16 use-case > * Clean up links > > v2: create this doc > > v1: n/a > > Signed-off-by: Harry Wentland <harry.wentland@amd.com> > --- > Documentation/gpu/rfc/color_intentions.drawio | 1 + > Documentation/gpu/rfc/color_intentions.svg | 3 + > Documentation/gpu/rfc/colorpipe | 1 + > Documentation/gpu/rfc/colorpipe.svg | 3 + > Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ > Documentation/gpu/rfc/index.rst | 1 + > 6 files changed, 589 insertions(+) > create mode 100644 Documentation/gpu/rfc/color_intentions.drawio > create mode 100644 Documentation/gpu/rfc/color_intentions.svg > create mode 100644 Documentation/gpu/rfc/colorpipe > create mode 100644 Documentation/gpu/rfc/colorpipe.svg > create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst > -- snip -- > + > +Mastering Luminances > +-------------------- > + > +Even though we are able to describe the absolute luminance of a pixel > +using the PQ 2084 EOTF we are presented with physical limitations of the > +display technologies on the market today. Here are a few examples of > +luminance ranges of displays. > + > +.. flat-table:: > + :header-rows: 1 > + > + * - Display > + - Luminance range in nits > + > + * - Typical PC display > + - 0.3 - 200 > + > + * - Excellent LCD HDTV > + - 0.3 - 400 > + > + * - HDR LCD w/ local dimming > + - 0.05 - 1,500 > + > +Since no display can currently show the full 0.0005 to 10,000 nits > +luminance range of PQ the display will need to tone-map the HDR content, > +i.e to fit the content within a display's capabilities. To assist > +with tone-mapping HDR content is usually accompanied by a metadata > +that describes (among other things) the minimum and maximum mastering > +luminance, i.e. the maximum and minimum luminance of the display that > +was used to master the HDR content. > + > +The HDR metadata is currently defined on the drm_connector via the > +hdr_output_metadata blob property. > + > +It might be useful to define per-plane hdr metadata, as different planes > +might have been mastered differently. I think this only applies to the approach where all the processing is decided in the kernel right? If we directly expose each pipeline stage, and userspace controls everything, there's no need for the kernel to know the mastering luminance of any of the input content. The kernel would only need to know the eventual *output* luminance range, which might not even match any of the input content! ... > + > +How are we solving the problem? > +=============================== > + > +Single-plane > +------------ > + > +If a single drm_plane is used no further work is required. The compositor > +will provide one HDR plane alongside a drm_connector's hdr_output_metadata > +and the display HW will output this plane without further processing if > +no CRTC LUTs are provided. 
> + > +If desired a compositor can use the CRTC LUTs for HDR content but without > +support for PWL or multi-segmented LUTs the quality of the operation is > +expected to be subpar for HDR content. > + > + > +Multi-plane > +----------- > + > +In multi-plane configurations we need to solve the problem of blending > +HDR and SDR content. This blending should be done in linear space and > +therefore requires framebuffer data that is presented in linear space > +or a way to convert non-linear data to linear space. Additionally > +we need a way to define the luminance of any SDR content in relation > +to the HDR content. > + Android doesn't blend in linear space, so any API shouldn't be built around an assumption of linear blending. > +In order to present framebuffer data in linear space without losing a > +lot of precision it needs to be presented using 16 bpc precision. > + > + > +Defining HW Details > +------------------- > + > +One way to take full advantage of modern HW's color pipelines is by > +defining a "generic" pipeline that matches all capable HW. Something > +like this, which I took `from Uma Shankar`_ and expanded on: > + > +.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/ > + > +.. kernel-figure:: colorpipe.svg I don't think this pipeline is expressive enough, in part because of Android's non-linear blending as I mentioned above, but also because the "tonemapping" block is a bit of a monolithic black-box. I'd be in favour of splitting what you've called "Tonemapping" to separate luminance adjustment (I've seen that called OOTF) and pre-blending OETF (GAMMA); with similar post-blending as well: Before blending: FB --> YUV-to-RGB --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> To blending After blending: From blending --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> RGB-to-YUV --> To cable This separates the logical pipeline stages a bit better to me. > + > +I intentionally put de-Gamma, and Gamma in parentheses in my graph > +as they describe the intention of the block but not necessarily a > +strict definition of how a userspace implementation is required to > +use them. > + > +De-Gamma and Gamma blocks are named LUT, but they could be non-programmable > +LUTs in some HW implementations with no programmable LUT available. See > +the definitions for AMD's `latest dGPU generation`_ as an example. > + > +.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c?h=v5.13#n2586 > + > +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" > +as we generally don't want to re-apply gamma before blending, or do > +de-gamma post blending. These blocks tend generally to be intended for > +tonemapping purposes. Sorry for repeating myself (again) - but I don't think this is true in Android. > + > +Tonemapping in this case could be a simple nits value or `EDR`_ to describe > +how to scale the :ref:`SDR luminance`. > + > +Tonemapping could also include the ability to use a 3D LUT which might be > +accompanied by a 1D shaper LUT. The shaper LUT is required in order to > +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates > +in perceptual (non-linear) space, so as to evenly spread the limited > +entries evenly across the perceived space. 
Some terminology care may be needed here - up until this point, I think you've been talking about "tonemapping" being luminance adjustment, whereas I'd expect 3D LUTs to be used for gamut adjustment. > + > +.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 > + > +Creating a model that is flexible enough to define color pipelines for > +a wide variety of HW is challenging, though not impossible. Implementing > +support for such a flexible definition in userspace, though, amounts > +to essentially writing color pipeline drivers for each HW. > + Without this, it seems like it would be hard/impossible for a general-purpose compositor use the display hardware. There will always be cases where compositing needs to fall back to a GPU pass instead of using HW. If userspace has no idea what the kernel/hardware is doing, it has no hope of matching the processing and there will be significant visual differences between the two paths. This is perhaps less relevant for post-blending stuff, which I expect would be applied by HW in both cases. > + > +Defining SW Intentions > +---------------------- > + > +An alternative to describing the HW color pipeline in enough detail to > +be useful for color management and HDR purposes is to instead define > +SW intentions. > + > +.. kernel-figure:: color_intentions.svg > + > +This greatly simplifies the API and lets the driver do what a driver > +does best: figure out how to program the HW to achieve the desired > +effect. > + > +The above diagram could include white point, primaries, and maximum > +peak and average white levels in order to facilitate tone mapping. > + > +At this point I suggest to keep tonemapping (other than an SDR luminance > +adjustment) out of the current DRM/KMS API. Most HDR displays are capable > +of tonemapping. If for some reason tonemapping is still desired on > +a plane, a shader might be a better way of doing that instead of relying > +on display HW. > + > +In some ways this mirrors how various userspace APIs treat HDR: > + * Gstreamer's `GstVideoTransferFunction`_ > + * EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension > + * Vulkan's `VkColorSpaceKHR`_ > + > +.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-language=c#GstVideoTransferFunction > +.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt2020_linear.txt > +.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VkColorSpaceKHR > + These (at least the Khronos ones) are application-facing APIs, rather than APIs that a compositor would use. They only communicate content hints to "the platform" so that the compositor can do-the-right-thing. I think that this enum approach makes sense for an app, but not for implementing a compositor, which would want direct, explicit control. > + > +A hybrid approach to the API > +---------------------------- > + > +Our current approach attempts a hybrid approach, defining API to specify > +input and output transfer functions, as well as an SDR boost, and a > +input color space definition. > + > +We would like to solicit feedback and encourage discussion around the > +merits and weaknesses of these approaches. This question is at the core > +of defining a good API and we'd like to get it right. 
> + > + > +Input and Output Transfer functions > +----------------------------------- > + > +We define an input transfer function on drm_plane to describe the > +transform from framebuffer to blending space. > + > +We define an output transfer function on drm_crtc to describe the > +transform from blending space to display space. > + > +The transfer function can be a pre-defined function, such as PQ EOTF, or > +a custom LUT. A driver will be able to specify support for specific > +transfer functions, including custom ones. > + > +Defining the transfer function in this way allows us to support in on HW > +that uses ROMs to support these transforms, as well as on HW that use > +LUT definitions that are complex and don't map easily onto a standard LUT > +definition. > + > +We will not define per-plane LUTs in this patchset as the scope of our > +current work only deals with pre-defined transfer functions. This API has > +the flexibility to add custom 1D or 3D LUTs at a later date. > + > +In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc > +we will include a "custom 1D" enum value to indicate that the custom gamma and > +de-gamma 1D LUTs should be used. > + > +Possible transfer functions: > + > +.. flat-table:: > + :header-rows: 1 > + > + * - Transfer Function > + - Description > + > + * - Gamma 2.2 > + - a simple 2.2 gamma function > + > + * - sRGB > + - 2.4 gamma with small initial linear section > + > + * - PQ 2084 > + - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support > + > + * - Linear > + - Linear relationship between pixel value and luminance value > + > + * - Custom 1D > + - Custom 1D de-gamma and gamma LUTs; one LUT per color > + > + * - Custom 3D > + - Custom 3D LUT (to be defined) > + > + > +Describing SDR Luminance > +------------------------------ > + > +Since many displays do no correctly advertise the HDR white level we > +propose to define the SDR white level in nits. > + > +We define a new drm_plane property to specify the white level of an SDR > +plane. > + > + > +Defining the color space > +------------------------ > + > +We propose to add a new color space property to drm_plane to define a > +plane's color space. What is this used/useful for? > + > +While some color space conversions can be performed with a simple color > +transformation matrix (CTM) others require a 3D LUT. > + > + > +Defining mastering color space and luminance > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > + > +ToDo I don't think this is necessary at all (in the kernel API) if we expose the full pipeline. Cheers, -Brian
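To make the property proposal discussed above a bit more concrete, here is a minimal userspace sketch using the libdrm atomic API. The property names and enum values mirror the RFC text and are not merged uAPI; property-ID discovery (via drmModeObjectGetProperties()) and error handling are omitted, so treat everything here as illustrative rather than a working recipe.

/* Sketch only: property names mirror this RFC and are not merged uAPI. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical values mirroring the RFC's transfer function table. */
enum rfc_tf { TF_GAMMA22, TF_SRGB, TF_PQ_2084, TF_LINEAR, TF_CUSTOM_1D };

int commit_hdr_sdr_planes(int fd, uint32_t hdr_plane, uint32_t sdr_plane,
			  uint32_t crtc,
			  uint32_t prop_input_tf, uint32_t prop_sdr_white,
			  uint32_t prop_output_tf)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	/* HDR10 video plane: framebuffer is PQ-encoded. */
	drmModeAtomicAddProperty(req, hdr_plane, prop_input_tf, TF_PQ_2084);

	/* SDR UI plane: sRGB-encoded, boosted to a 203 nit white level. */
	drmModeAtomicAddProperty(req, sdr_plane, prop_input_tf, TF_SRGB);
	drmModeAtomicAddProperty(req, sdr_plane, prop_sdr_white, 203);

	/* Encode the blended result back to PQ on its way to the cable. */
	drmModeAtomicAddProperty(req, crtc, prop_output_tf, TF_PQ_2084);

	ret = drmModeAtomicCommit(fd, req, 0, NULL);
	drmModeAtomicFree(req);
	return ret;
}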
Hello Brian, (+Uma in cc) Thanks for your comments, Let me try to fill-in for Harry to keep the design discussion going. Please find my comments inline. On 8/2/2021 10:00 PM, Brian Starkey wrote: > Hi, > > Thanks for having a stab at this, it's a massive complex topic to > solve. > > Do you have the the HTML rendered somewhere for convenience? > > On Fri, Jul 30, 2021 at 04:41:29PM -0400, Harry Wentland wrote: >> Use the new DRM RFC doc section to capture the RFC previously only >> described in the cover letter at >> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpatchwork.freedesktop.org%2Fseries%2F89506%2F&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=DAoEKo7fl83YPgqFvEGCHF2vyYfILfoLBCCu5Q2Lg88%3D&reserved=0 >> >> v3: >> * Add sections on single-plane and multi-plane HDR >> * Describe approach to define HW details vs approach to define SW intentions >> * Link Jeremy Cline's excellent HDR summaries >> * Outline intention behind overly verbose doc >> * Describe FP16 use-case >> * Clean up links >> >> v2: create this doc >> >> v1: n/a >> >> Signed-off-by: Harry Wentland <harry.wentland@amd.com> >> --- >> Documentation/gpu/rfc/color_intentions.drawio | 1 + >> Documentation/gpu/rfc/color_intentions.svg | 3 + >> Documentation/gpu/rfc/colorpipe | 1 + >> Documentation/gpu/rfc/colorpipe.svg | 3 + >> Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ >> Documentation/gpu/rfc/index.rst | 1 + >> 6 files changed, 589 insertions(+) >> create mode 100644 Documentation/gpu/rfc/color_intentions.drawio >> create mode 100644 Documentation/gpu/rfc/color_intentions.svg >> create mode 100644 Documentation/gpu/rfc/colorpipe >> create mode 100644 Documentation/gpu/rfc/colorpipe.svg >> create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst >> > > -- snip -- > >> + >> +Mastering Luminances >> +-------------------- >> + >> +Even though we are able to describe the absolute luminance of a pixel >> +using the PQ 2084 EOTF we are presented with physical limitations of the >> +display technologies on the market today. Here are a few examples of >> +luminance ranges of displays. >> + >> +.. flat-table:: >> + :header-rows: 1 >> + >> + * - Display >> + - Luminance range in nits >> + >> + * - Typical PC display >> + - 0.3 - 200 >> + >> + * - Excellent LCD HDTV >> + - 0.3 - 400 >> + >> + * - HDR LCD w/ local dimming >> + - 0.05 - 1,500 >> + >> +Since no display can currently show the full 0.0005 to 10,000 nits >> +luminance range of PQ the display will need to tone-map the HDR content, >> +i.e to fit the content within a display's capabilities. To assist >> +with tone-mapping HDR content is usually accompanied by a metadata >> +that describes (among other things) the minimum and maximum mastering >> +luminance, i.e. the maximum and minimum luminance of the display that >> +was used to master the HDR content. >> + >> +The HDR metadata is currently defined on the drm_connector via the >> +hdr_output_metadata blob property. >> + >> +It might be useful to define per-plane hdr metadata, as different planes >> +might have been mastered differently. > > I think this only applies to the approach where all the processing is > decided in the kernel right? 
> > If we directly expose each pipeline stage, and userspace controls > everything, there's no need for the kernel to know the mastering > luminance of any of the input content. The kernel would only need to > know the eventual *output* luminance range, which might not even match > any of the input content! > > Yes, you are right. In an approach where a compositor controls everything, we might not need this property, as the compositor can directly prepare the color correction pipeline with the required matrices and kernel can just follow it. The reason why we introduced this property here is that there may be a hardware which implements a fixed function degamma HW unit or tone mapping unit, and this property might make it easier for their drivers to implement. So the whole idea was to plan a seed for thoughts for those drivers, and see if it makes sense to have such a property. > ... > >> + >> +How are we solving the problem? >> +=============================== >> + >> +Single-plane >> +------------ >> + >> +If a single drm_plane is used no further work is required. The compositor >> +will provide one HDR plane alongside a drm_connector's hdr_output_metadata >> +and the display HW will output this plane without further processing if >> +no CRTC LUTs are provided. >> + >> +If desired a compositor can use the CRTC LUTs for HDR content but without >> +support for PWL or multi-segmented LUTs the quality of the operation is >> +expected to be subpar for HDR content. >> + >> + >> +Multi-plane >> +----------- >> + >> +In multi-plane configurations we need to solve the problem of blending >> +HDR and SDR content. This blending should be done in linear space and >> +therefore requires framebuffer data that is presented in linear space >> +or a way to convert non-linear data to linear space. Additionally >> +we need a way to define the luminance of any SDR content in relation >> +to the HDR content. >> + > > Android doesn't blend in linear space, so any API shouldn't be built > around an assumption of linear blending. > If I am not wrong, we still need linear buffers for accurate Gamut transformation (SRGB -> BT2020 or other way around) isn't it ? >> +In order to present framebuffer data in linear space without losing a >> +lot of precision it needs to be presented using 16 bpc precision. >> + >> + >> +Defining HW Details >> +------------------- >> + >> +One way to take full advantage of modern HW's color pipelines is by >> +defining a "generic" pipeline that matches all capable HW. Something >> +like this, which I took `from Uma Shankar`_ and expanded on: >> + >> +.. _from Uma Shankar: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fpatchwork.freedesktop.org%2Fseries%2F90826%2F&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=krxqbMPxwiArlEeG7yLaFy6CMP%2BQnNNDSn%2B%2FxDWcfhM%3D&reserved=0 >> + >> +.. kernel-figure:: colorpipe.svg > > I don't think this pipeline is expressive enough, in part because of > Android's non-linear blending as I mentioned above, but also because > the "tonemapping" block is a bit of a monolithic black-box. 
> > I'd be in favour of splitting what you've called "Tonemapping" to > separate luminance adjustment (I've seen that called OOTF) and > pre-blending OETF (GAMMA); with similar post-blending as well: > > Before blending: > > FB --> YUV-to-RGB --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> To blending > > After blending: > > From blending --> EOTF (DEGAMMA) --> CTM/CSC (and/or 3D LUT) --> OOTF --> OETF (GAMMA) --> RGB-to-YUV --> To cable > > This separates the logical pipeline stages a bit better to me. I agree, seems like a good logical separation, and also provides rooms for flexible color correction. > >> + >> +I intentionally put de-Gamma, and Gamma in parentheses in my graph >> +as they describe the intention of the block but not necessarily a >> +strict definition of how a userspace implementation is required to >> +use them. >> + >> +De-Gamma and Gamma blocks are named LUT, but they could be non-programmable >> +LUTs in some HW implementations with no programmable LUT available. See >> +the definitions for AMD's `latest dGPU generation`_ as an example. >> + >> +.. _latest dGPU generation: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgit.kernel.org%2Fpub%2Fscm%2Flinux%2Fkernel%2Fgit%2Fstable%2Flinux.git%2Ftree%2Fdrivers%2Fgpu%2Fdrm%2Famd%2Fdisplay%2Fdc%2Fdcn30%2Fdcn30_resource.c%3Fh%3Dv5.13%23n2586&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=%2BBbp593GEp0zNwUqjjLWQ1KbnVHyvkRtQy%2FugIID6DY%3D&reserved=0 >> + >> +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" >> +as we generally don't want to re-apply gamma before blending, or do >> +de-gamma post blending. These blocks tend generally to be intended for >> +tonemapping purposes. > > Sorry for repeating myself (again) - but I don't think this is true in > Android. > Same as above >> + >> +Tonemapping in this case could be a simple nits value or `EDR`_ to describe >> +how to scale the :ref:`SDR luminance`. >> + >> +Tonemapping could also include the ability to use a 3D LUT which might be >> +accompanied by a 1D shaper LUT. The shaper LUT is required in order to >> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates >> +in perceptual (non-linear) space, so as to evenly spread the limited >> +entries evenly across the perceived space. > > Some terminology care may be needed here - up until this point, I > think you've been talking about "tonemapping" being luminance > adjustment, whereas I'd expect 3D LUTs to be used for gamut > adjustment. > IMO, what harry wants to say here is that, which HW block gets picked and how tone mapping is achieved can be a very driver/HW specific thing, where one driver can use a 1D/Fixed function block, whereas another one can choose more complex HW like a 3D LUT for the same. DRM layer needs to define only the property to hook the API with core driver, and the driver can decide which HW to pick and configure for the activity. So when we have a tonemapping property, we might not have a separate 3D-LUT property, or the driver may fail the atomic_check() if both of them are programmed for different usages. >> + >> +.. 
_EDR: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.freedesktop.org%2Fswick%2Fwayland-protocols%2F-%2Fblob%2Fcolor%2Funstable%2Fcolor-management%2Fcolor.rst%23id8&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=LUKbk%2FJtJBj8D3BTi8K68lFWTDVuoUsoA4dDNkvt1o0%3D&reserved=0 >> + >> +Creating a model that is flexible enough to define color pipelines for >> +a wide variety of HW is challenging, though not impossible. Implementing >> +support for such a flexible definition in userspace, though, amounts >> +to essentially writing color pipeline drivers for each HW. >> + > > Without this, it seems like it would be hard/impossible for a > general-purpose compositor use the display hardware. > Agree > There will always be cases where compositing needs to fall back to a > GPU pass instead of using HW. If userspace has no idea what the > kernel/hardware is doing, it has no hope of matching the processing > and there will be significant visual differences between the two > paths. > Indeed, I find this another interesting and complex problem to solve. Need many more inputs from compositor developers as well (considering I am not an actual one :)). > This is perhaps less relevant for post-blending stuff, which I expect > would be applied by HW in both cases. > >> + >> +Defining SW Intentions >> +---------------------- >> + >> +An alternative to describing the HW color pipeline in enough detail to >> +be useful for color management and HDR purposes is to instead define >> +SW intentions. >> + >> +.. kernel-figure:: color_intentions.svg >> + >> +This greatly simplifies the API and lets the driver do what a driver >> +does best: figure out how to program the HW to achieve the desired >> +effect. >> + >> +The above diagram could include white point, primaries, and maximum >> +peak and average white levels in order to facilitate tone mapping. >> + >> +At this point I suggest to keep tonemapping (other than an SDR luminance >> +adjustment) out of the current DRM/KMS API. Most HDR displays are capable >> +of tonemapping. If for some reason tonemapping is still desired on >> +a plane, a shader might be a better way of doing that instead of relying >> +on display HW. >> + >> +In some ways this mirrors how various userspace APIs treat HDR: >> + * Gstreamer's `GstVideoTransferFunction`_ >> + * EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension >> + * Vulkan's `VkColorSpaceKHR`_ >> + >> +.. _GstVideoTransferFunction: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgstreamer.freedesktop.org%2Fdocumentation%2Fvideo%2Fvideo-color.html%3Fgi-language%3Dc%23GstVideoTransferFunction&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=JKdpEZ4Pn2gjH0ABNO4S2cTwelmkYfPF59c93qu8Iuo%3D&reserved=0 >> +.. 
_EGL_EXT_gl_colorspace_bt2020_pq: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.khronos.org%2Fregistry%2FEGL%2Fextensions%2FEXT%2FEGL_EXT_gl_colorspace_bt2020_linear.txt&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=9hRRvJHfihS3UwitXRCXEZgc60HG4MK%2FFeuSJSva9vc%3D&reserved=0 >> +.. _VkColorSpaceKHR: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.khronos.org%2Fregistry%2Fvulkan%2Fspecs%2F1.2-extensions%2Fhtml%2Fvkspec.html%23VkColorSpaceKHR&data=04%7C01%7CShashank.Sharma%40amd.com%7C42a8172c947b41c17a5c08d955d2e859%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637635186605487756%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=hH9izNfEy4OK2QcYvnvEko62%2Fk1cTYXQOe1LC1AzbMI%3D&reserved=0 >> + > > These (at least the Khronos ones) are application-facing APIs, rather > than APIs that a compositor would use. They only communicate content > hints to "the platform" so that the compositor can do-the-right-thing. > > I think that this enum approach makes sense for an app, but not for > implementing a compositor, which would want direct, explicit control. > Agree, we can fine tune this part and come back with something else. >> + >> +A hybrid approach to the API >> +---------------------------- >> + >> +Our current approach attempts a hybrid approach, defining API to specify >> +input and output transfer functions, as well as an SDR boost, and a >> +input color space definition. >> + >> +We would like to solicit feedback and encourage discussion around the >> +merits and weaknesses of these approaches. This question is at the core >> +of defining a good API and we'd like to get it right. >> + >> + >> +Input and Output Transfer functions >> +----------------------------------- >> + >> +We define an input transfer function on drm_plane to describe the >> +transform from framebuffer to blending space. >> + >> +We define an output transfer function on drm_crtc to describe the >> +transform from blending space to display space. >> + >> +The transfer function can be a pre-defined function, such as PQ EOTF, or >> +a custom LUT. A driver will be able to specify support for specific >> +transfer functions, including custom ones. >> + >> +Defining the transfer function in this way allows us to support in on HW >> +that uses ROMs to support these transforms, as well as on HW that use >> +LUT definitions that are complex and don't map easily onto a standard LUT >> +definition. >> + >> +We will not define per-plane LUTs in this patchset as the scope of our >> +current work only deals with pre-defined transfer functions. This API has >> +the flexibility to add custom 1D or 3D LUTs at a later date. >> + >> +In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc >> +we will include a "custom 1D" enum value to indicate that the custom gamma and >> +de-gamma 1D LUTs should be used. >> + >> +Possible transfer functions: >> + >> +.. 
flat-table:: >> + :header-rows: 1 >> + >> + * - Transfer Function >> + - Description >> + >> + * - Gamma 2.2 >> + - a simple 2.2 gamma function >> + >> + * - sRGB >> + - 2.4 gamma with small initial linear section >> + >> + * - PQ 2084 >> + - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support >> + >> + * - Linear >> + - Linear relationship between pixel value and luminance value >> + >> + * - Custom 1D >> + - Custom 1D de-gamma and gamma LUTs; one LUT per color >> + >> + * - Custom 3D >> + - Custom 3D LUT (to be defined) >> + >> + >> +Describing SDR Luminance >> +------------------------------ >> + >> +Since many displays do no correctly advertise the HDR white level we >> +propose to define the SDR white level in nits. >> + >> +We define a new drm_plane property to specify the white level of an SDR >> +plane. >> + >> + >> +Defining the color space >> +------------------------ >> + >> +We propose to add a new color space property to drm_plane to define a >> +plane's color space. > > What is this used/useful for? > >> + >> +While some color space conversions can be performed with a simple color >> +transformation matrix (CTM) others require a 3D LUT. >> + >> + >> +Defining mastering color space and luminance >> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> + >> +ToDo > > I don't think this is necessary at all (in the kernel API) if we > expose the full pipeline. As you can observe, both colorspace and mastering luminance properties get introduced as a part of the hybrid approach, where the compositor need not to set the whole color pipeline for HDR blending, but can just set the target/current color space of a plane being flipped, and the driver can internally prepare the pipeline for blending. This would be in order to reduce the complexity for compositor, and offload some work on driver. At the same time, I agree that it would be something difficult to design at first. - Shashank > > Cheers, > -Brian >
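As a purely illustrative sketch of the hybrid approach described above (the compositor only sets a per-plane color space and SDR white level, and the driver internally derives the pipeline programming), the driver-side translation might look roughly like this. Every name, enum and value below is invented for the example; this is not existing amdgpu or i915 code.

/* Hypothetical driver-internal mapping for the "hybrid" proposal. */
enum plane_color_space { CS_BT709_SRGB, CS_BT2020_PQ };

struct plane_color_state {
	enum plane_color_space color_space;
	unsigned int sdr_white_nits;	/* 0 means "not an SDR plane" */
};

struct hw_color_cfg {
	int degamma_rom;	/* fixed-function EOTF selection */
	int csc_matrix;		/* gamut CSC selection */
	unsigned int ootf_scale;/* luminance scaling into blending space */
};

/* Conceptually called from the driver's atomic_check/atomic_update path. */
static void build_pipe_cfg(const struct plane_color_state *state,
			   struct hw_color_cfg *cfg)
{
	switch (state->color_space) {
	case CS_BT709_SRGB:
		cfg->degamma_rom = 1;	/* sRGB EOTF ROM */
		cfg->csc_matrix = 1;	/* BT.709 -> blending gamut */
		/* Scale SDR content relative to the HDR target. */
		cfg->ootf_scale = state->sdr_white_nits ?: 203;
		break;
	case CS_BT2020_PQ:
		cfg->degamma_rom = 2;	/* PQ EOTF ROM */
		cfg->csc_matrix = 0;	/* already in blending gamut */
		cfg->ootf_scale = 1;
		break;
	}
}

The only point of the sketch is that the enum keeps the compositor-facing API small while leaving HW-block selection entirely to the driver, which is both the attraction and the criticism of this approach.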
On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote: > Hello Brian, > (+Uma in cc) > > Thanks for your comments, Let me try to fill-in for Harry to keep the design > discussion going. Please find my comments inline. > > On 8/2/2021 10:00 PM, Brian Starkey wrote: > > -- snip -- > > > > Android doesn't blend in linear space, so any API shouldn't be built > > around an assumption of linear blending. > > > > If I am not wrong, we still need linear buffers for accurate Gamut > transformation (SRGB -> BT2020 or other way around) isn't it ? Yeah, you need to transform the buffer to linear for color gamut conversions, but then back to non-linear (probably sRGB or gamma 2.2) for actual blending. This is why I'd like to have the per-plane "OETF/GAMMA" separate from tone-mapping, so that the composition transfer function is independent. > ... > > > + > > > +Tonemapping in this case could be a simple nits value or `EDR`_ to describe > > > +how to scale the :ref:`SDR luminance`. > > > + > > > +Tonemapping could also include the ability to use a 3D LUT which might be > > > +accompanied by a 1D shaper LUT. The shaper LUT is required in order to > > > +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates > > > +in perceptual (non-linear) space, so as to evenly spread the limited > > > +entries evenly across the perceived space. > > > > Some terminology care may be needed here - up until this point, I > > think you've been talking about "tonemapping" being luminance > > adjustment, whereas I'd expect 3D LUTs to be used for gamut > > adjustment. > > > > IMO, what harry wants to say here is that, which HW block gets picked and > how tone mapping is achieved can be a very driver/HW specific thing, where > one driver can use a 1D/Fixed function block, whereas another one can choose > more complex HW like a 3D LUT for the same. > > DRM layer needs to define only the property to hook the API with core > driver, and the driver can decide which HW to pick and configure for the > activity. So when we have a tonemapping property, we might not have a > separate 3D-LUT property, or the driver may fail the atomic_check() if both > of them are programmed for different usages. I still think that directly exposing the HW blocks and their capabilities is the right approach, rather than a "magic" tonemapping property. Yes, userspace would need to have a good understanding of how to use that hardware, but if the pipeline model is standardised that's the kind of thing a cross-vendor library could handle. It would definitely be good to get some compositor opinions here. Cheers, -Brian
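To illustrate the point about gamut mapping needing linear light even when the blend itself happens in a non-linear space, here is a minimal sketch: decode sRGB to linear, apply a 3x3 matrix to BT.2020 primaries, then re-encode before handing the result to the (non-linear) blender. The matrix coefficients are the commonly used linear-light BT.709-to-BT.2020 values from BT.2087; the function layout is just for illustration.

#include <math.h>

/* sRGB EOTF: electrical [0,1] -> linear light [0,1]. */
static double srgb_eotf(double e)
{
	return e <= 0.04045 ? e / 12.92 : pow((e + 0.055) / 1.055, 2.4);
}

/* Inverse of the above, used to return to the non-linear blending space. */
static double srgb_inv_eotf(double l)
{
	return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

/* Linear-light BT.709 -> BT.2020 primaries (coefficients per BT.2087). */
static const double m[3][3] = {
	{ 0.6274, 0.3293, 0.0433 },
	{ 0.0691, 0.9195, 0.0114 },
	{ 0.0164, 0.0880, 0.8956 },
};

void srgb_to_bt2020_nonlinear(const double in[3], double out[3])
{
	double lin[3], wide[3];
	int i;

	for (i = 0; i < 3; i++)
		lin[i] = srgb_eotf(in[i]);		/* linearize */
	for (i = 0; i < 3; i++)
		wide[i] = m[i][0] * lin[0] + m[i][1] * lin[1] +
			  m[i][2] * lin[2];		/* gamut map */
	for (i = 0; i < 3; i++)
		out[i] = srgb_inv_eotf(wide[i]);	/* re-encode */
}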
On 2021-08-16 7:10 a.m., Brian Starkey wrote: > On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote: >> Hello Brian, >> (+Uma in cc) >> >> Thanks for your comments, Let me try to fill-in for Harry to keep the design >> discussion going. Please find my comments inline. >> Thanks, Shashank. I'm back at work now. Had to cut my trip short due to rising Covid cases and concern for my kids. >> On 8/2/2021 10:00 PM, Brian Starkey wrote: >>> > > -- snip -- > >>> >>> Android doesn't blend in linear space, so any API shouldn't be built >>> around an assumption of linear blending. >>> This seems incorrect but I guess ultimately the OS is in control of this. If we want to allow blending in non-linear space with the new API we would either need to describe the blending space or the pre/post-blending gamma/de-gamma. Any idea if this blending behavior in Android might get changed in the future? >> >> If I am not wrong, we still need linear buffers for accurate Gamut >> transformation (SRGB -> BT2020 or other way around) isn't it ? > > Yeah, you need to transform the buffer to linear for color gamut > conversions, but then back to non-linear (probably sRGB or gamma 2.2) > for actual blending. > > This is why I'd like to have the per-plane "OETF/GAMMA" separate > from tone-mapping, so that the composition transfer function is > independent. > >> > > ... > >>>> + >>>> +Tonemapping in this case could be a simple nits value or `EDR`_ to describe >>>> +how to scale the :ref:`SDR luminance`. >>>> + >>>> +Tonemapping could also include the ability to use a 3D LUT which might be >>>> +accompanied by a 1D shaper LUT. The shaper LUT is required in order to >>>> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates >>>> +in perceptual (non-linear) space, so as to evenly spread the limited >>>> +entries evenly across the perceived space. >>> >>> Some terminology care may be needed here - up until this point, I >>> think you've been talking about "tonemapping" being luminance >>> adjustment, whereas I'd expect 3D LUTs to be used for gamut >>> adjustment. >>> >> >> IMO, what harry wants to say here is that, which HW block gets picked and >> how tone mapping is achieved can be a very driver/HW specific thing, where >> one driver can use a 1D/Fixed function block, whereas another one can choose >> more complex HW like a 3D LUT for the same. >> >> DRM layer needs to define only the property to hook the API with core >> driver, and the driver can decide which HW to pick and configure for the >> activity. So when we have a tonemapping property, we might not have a >> separate 3D-LUT property, or the driver may fail the atomic_check() if both >> of them are programmed for different usages. > > I still think that directly exposing the HW blocks and their > capabilities is the right approach, rather than a "magic" tonemapping > property. > > Yes, userspace would need to have a good understanding of how to use > that hardware, but if the pipeline model is standardised that's the > kind of thing a cross-vendor library could handle. > One problem with cross-vendor libraries is that they might struggle to really be cross-vendor when it comes to unique HW behavior. Or they might pick sub-optimal configurations as they're not aware of the power impact of a configuration. What's an optimal configuration might differ greatly between different HW. We're seeing this problem with "universal" planes as well. > It would definitely be good to get some compositor opinions here. 
For this we'll probably have to wait for Pekka's input when he's back from his vacation. > Cheers, > -Brian >
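For reference while the SDR-white question is open, this is roughly what the per-plane SDR boost amounts to when compositing into an absolute-luminance (PQ-style) space: sRGB is linearized, pinned to the plane's SDR white level in nits, then re-encoded with the inverse PQ EOTF. The 203 nit figure is only the BT.2408 graphics-white suggestion, not something the RFC mandates, and the code is a sketch rather than a proposed implementation.

#include <math.h>

/* sRGB EOTF: electrical [0,1] -> linear light [0,1]. */
static double srgb_eotf(double e)
{
	return e <= 0.04045 ? e / 12.92 : pow((e + 0.055) / 1.055, 2.4);
}

/* Inverse PQ EOTF (SMPTE ST 2084): absolute luminance in nits -> [0,1]. */
static double pq_inv_eotf(double nits)
{
	const double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
	const double c1 = 3424.0 / 4096.0;
	const double c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
	double y = pow(nits / 10000.0, m1);

	return pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}

/* Composite an sRGB-encoded SDR value into a PQ space: SDR "1.0" lands at
 * the plane's SDR white level. */
double sdr_to_pq(double srgb, double sdr_white_nits)
{
	return pq_inv_eotf(srgb_eotf(srgb) * sdr_white_nits);
}

Expressed as a ratio instead of nits (the EDR-style definition), the same boost is simply sdr_white_nits divided by whatever the SDR reference is taken to be, which is why the RFC notes the representations are trivially convertible when the display reports its range correctly.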
On 2021-08-16 14:40, Harry Wentland wrote: > On 2021-08-16 7:10 a.m., Brian Starkey wrote: >> On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote: >>> Hello Brian, >>> (+Uma in cc) >>> >>> Thanks for your comments, Let me try to fill-in for Harry to keep the >>> design >>> discussion going. Please find my comments inline. >>> > > Thanks, Shashank. I'm back at work now. Had to cut my trip short > due to rising Covid cases and concern for my kids. > >>> On 8/2/2021 10:00 PM, Brian Starkey wrote: >>>> >> >> -- snip -- >> >>>> >>>> Android doesn't blend in linear space, so any API shouldn't be built >>>> around an assumption of linear blending. >>>> > > This seems incorrect but I guess ultimately the OS is in control of > this. If we want to allow blending in non-linear space with the new > API we would either need to describe the blending space or the > pre/post-blending gamma/de-gamma. > > Any idea if this blending behavior in Android might get changed in > the future? There is lots of software which blends in sRGB space and designers adjusted to the incorrect blending in a way that the result looks right. Blending in linear space would result in incorrectly looking images. >>> >>> If I am not wrong, we still need linear buffers for accurate Gamut >>> transformation (SRGB -> BT2020 or other way around) isn't it ? >> >> Yeah, you need to transform the buffer to linear for color gamut >> conversions, but then back to non-linear (probably sRGB or gamma 2.2) >> for actual blending. >> >> This is why I'd like to have the per-plane "OETF/GAMMA" separate >> from tone-mapping, so that the composition transfer function is >> independent. >> >>> >> >> ... >> >>>>> + >>>>> +Tonemapping in this case could be a simple nits value or `EDR`_ to >>>>> describe >>>>> +how to scale the :ref:`SDR luminance`. >>>>> + >>>>> +Tonemapping could also include the ability to use a 3D LUT which >>>>> might be >>>>> +accompanied by a 1D shaper LUT. The shaper LUT is required in >>>>> order to >>>>> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) >>>>> operates >>>>> +in perceptual (non-linear) space, so as to evenly spread the >>>>> limited >>>>> +entries evenly across the perceived space. >>>> >>>> Some terminology care may be needed here - up until this point, I >>>> think you've been talking about "tonemapping" being luminance >>>> adjustment, whereas I'd expect 3D LUTs to be used for gamut >>>> adjustment. >>>> >>> >>> IMO, what harry wants to say here is that, which HW block gets picked >>> and >>> how tone mapping is achieved can be a very driver/HW specific thing, >>> where >>> one driver can use a 1D/Fixed function block, whereas another one can >>> choose >>> more complex HW like a 3D LUT for the same. >>> >>> DRM layer needs to define only the property to hook the API with core >>> driver, and the driver can decide which HW to pick and configure for >>> the >>> activity. So when we have a tonemapping property, we might not have a >>> separate 3D-LUT property, or the driver may fail the atomic_check() >>> if both >>> of them are programmed for different usages. >> >> I still think that directly exposing the HW blocks and their >> capabilities is the right approach, rather than a "magic" tonemapping >> property. >> >> Yes, userspace would need to have a good understanding of how to use >> that hardware, but if the pipeline model is standardised that's the >> kind of thing a cross-vendor library could handle. 
>> > > One problem with cross-vendor libraries is that they might struggle > to really be cross-vendor when it comes to unique HW behavior. Or > they might pick sub-optimal configurations as they're not aware of > the power impact of a configuration. What's an optimal configuration > might differ greatly between different HW. > > We're seeing this problem with "universal" planes as well. I'm repeating what has been said before but apparently it has to be said again: if a property can't be replicated exactly in a shader the property is useless. If your hardware is so unique that it can't give us the exact formula we expect you cannot expose the property. Maybe my view on power consumption is simplistic but I would expect enum < 1d lut < 3d lut < shader. Is there more to it? Either way if the fixed KMS pixel pipeline is not sufficient to expose the intricacies of real hardware the right move would be to make the KMS pixel pipeline more dynamic, expose more hardware specifics and create a hardware specific user space like mesa. Moving the whole compositing with all its policies and decision making into the kernel is exactly the wrong way to go. Laurent Pinchart put this very well: https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html >> It would definitely be good to get some compositor opinions here. >> > > For this we'll probably have to wait for Pekka's input when he's > back from his vacation. > >> Cheers, >> -Brian >>
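Since the shaper-LUT wording keeps being quoted, a small sketch of what the shaper + 3D LUT combination does may help: the 1D shaper redistributes values perceptually so the sparse 3D grid spends its entries where differences are visible. The sizes, the 1/2.4 stand-in curve and the nearest-neighbour lookup are all simplifications; real hardware interpolates (e.g. tetrahedrally) and uses a programmable shaper.

#include <math.h>

#define DIM 17	/* e.g. a 17x17x17 3D LUT */

struct lut3d {
	/* cube[r][g][b][channel], filled by userspace */
	float cube[DIM][DIM][DIM][3];
};

/* 1D shaper: spread linear input perceptually so the sparse 3D LUT grid
 * is sampled evenly in perceived, not linear, terms. */
static float shaper(float linear)
{
	return powf(linear, 1.0f / 2.4f);
}

static void lut3d_sample(const struct lut3d *lut, const float in[3],
			 float out[3])
{
	int idx[3], i;

	for (i = 0; i < 3; i++) {
		float shaped = shaper(in[i]);
		/* Nearest entry; hardware would interpolate instead. */
		idx[i] = (int)(shaped * (DIM - 1) + 0.5f);
	}
	for (i = 0; i < 3; i++)
		out[i] = lut->cube[idx[0]][idx[1]][idx[2]][i];
}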
> -----Original Message----- > From: sebastian@sebastianwick.net <sebastian@sebastianwick.net> > Sent: Monday, August 16, 2021 7:07 PM > To: Harry Wentland <harry.wentland@amd.com> > Cc: Brian Starkey <brian.starkey@arm.com>; Sharma, Shashank > <shashank.sharma@amd.com>; amd-gfx@lists.freedesktop.org; dri- > devel@lists.freedesktop.org; ppaalanen@gmail.com; mcasas@google.com; > jshargo@google.com; Deepak.Sharma@amd.com; Shirish.S@amd.com; > Vitaly.Prosyak@amd.com; aric.cyr@amd.com; Bhawanpreet.Lakha@amd.com; > Krunoslav.Kovac@amd.com; hersenxs.wu@amd.com; > Nicholas.Kazlauskas@amd.com; laurentiu.palcu@oss.nxp.com; > ville.syrjala@linux.intel.com; nd@arm.com; Shankar, Uma > <uma.shankar@intel.com> > Subject: Re: [RFC PATCH v3 1/6] drm/doc: Color Management and HDR10 RFC > > On 2021-08-16 14:40, Harry Wentland wrote: > > On 2021-08-16 7:10 a.m., Brian Starkey wrote: > >> On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote: > >>> Hello Brian, > >>> (+Uma in cc) > >>> Thanks Shashank for cc'ing me. Apologies for being late here. Now seems all stakeholders are back so we can resume the UAPI discussion on color. > >>> Thanks for your comments, Let me try to fill-in for Harry to keep > >>> the design discussion going. Please find my comments inline. > >>> > > > > Thanks, Shashank. I'm back at work now. Had to cut my trip short due > > to rising Covid cases and concern for my kids. > > > >>> On 8/2/2021 10:00 PM, Brian Starkey wrote: > >>>> > >> > >> -- snip -- > >> > >>>> > >>>> Android doesn't blend in linear space, so any API shouldn't be > >>>> built around an assumption of linear blending. > >>>> > > > > This seems incorrect but I guess ultimately the OS is in control of > > this. If we want to allow blending in non-linear space with the new > > API we would either need to describe the blending space or the > > pre/post-blending gamma/de-gamma. > > > > Any idea if this blending behavior in Android might get changed in the > > future? > > There is lots of software which blends in sRGB space and designers adjusted to the > incorrect blending in a way that the result looks right. > Blending in linear space would result in incorrectly looking images. > I feel we should just leave it to userspace to decide rather than forcing linear or non Linear blending in driver. > >>> > >>> If I am not wrong, we still need linear buffers for accurate Gamut > >>> transformation (SRGB -> BT2020 or other way around) isn't it ? > >> > >> Yeah, you need to transform the buffer to linear for color gamut > >> conversions, but then back to non-linear (probably sRGB or gamma 2.2) > >> for actual blending. > >> > >> This is why I'd like to have the per-plane "OETF/GAMMA" separate from > >> tone-mapping, so that the composition transfer function is > >> independent. > >> > >>> > >> > >> ... > >> > >>>>> + > >>>>> +Tonemapping in this case could be a simple nits value or `EDR`_ > >>>>> +to > >>>>> describe > >>>>> +how to scale the :ref:`SDR luminance`. > >>>>> + > >>>>> +Tonemapping could also include the ability to use a 3D LUT which > >>>>> might be > >>>>> +accompanied by a 1D shaper LUT. The shaper LUT is required in > >>>>> order to > >>>>> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) > >>>>> operates > >>>>> +in perceptual (non-linear) space, so as to evenly spread the > >>>>> limited > >>>>> +entries evenly across the perceived space. 
> >>>> > >>>> Some terminology care may be needed here - up until this point, I > >>>> think you've been talking about "tonemapping" being luminance > >>>> adjustment, whereas I'd expect 3D LUTs to be used for gamut > >>>> adjustment. > >>>> > >>> > >>> IMO, what harry wants to say here is that, which HW block gets > >>> picked and how tone mapping is achieved can be a very driver/HW > >>> specific thing, where one driver can use a 1D/Fixed function block, > >>> whereas another one can choose more complex HW like a 3D LUT for the > >>> same. > >>> > >>> DRM layer needs to define only the property to hook the API with > >>> core driver, and the driver can decide which HW to pick and > >>> configure for the activity. So when we have a tonemapping property, > >>> we might not have a separate 3D-LUT property, or the driver may fail > >>> the atomic_check() if both of them are programmed for different > >>> usages. > >> > >> I still think that directly exposing the HW blocks and their > >> capabilities is the right approach, rather than a "magic" tonemapping > >> property. > >> > >> Yes, userspace would need to have a good understanding of how to use > >> that hardware, but if the pipeline model is standardised that's the > >> kind of thing a cross-vendor library could handle. > >> > > > > One problem with cross-vendor libraries is that they might struggle to > > really be cross-vendor when it comes to unique HW behavior. Or they > > might pick sub-optimal configurations as they're not aware of the > > power impact of a configuration. What's an optimal configuration might > > differ greatly between different HW. > > > > We're seeing this problem with "universal" planes as well. > > I'm repeating what has been said before but apparently it has to be said > again: if a property can't be replicated exactly in a shader the property is useless. If > your hardware is so unique that it can't give us the exact formula we expect you > cannot expose the property. > > Maybe my view on power consumption is simplistic but I would expect enum < 1d lut > < 3d lut < shader. Is there more to it? > > Either way if the fixed KMS pixel pipeline is not sufficient to expose the intricacies of > real hardware the right move would be to make the KMS pixel pipeline more > dynamic, expose more hardware specifics and create a hardware specific user space > like mesa. Moving the whole compositing with all its policies and decision making > into the kernel is exactly the wrong way to go. > I agree here, we can give flexibility to userspace to decide how it wants to use the hardware blocks. So exposing the hardware capability to userspace and then servicing on its behalf would be the right way to go for driver I believe. Any compositor or userspace can define its own policy and drive the hardware. We already have done that with crtc level color properties. We can do the same for plane color. HDR will be just be an extension that way. > Laurent Pinchart put this very well: > https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html > > >> It would definitely be good to get some compositor opinions here. > >> > > > > For this we'll probably have to wait for Pekka's input when he's back > > from his vacation. > > Yeah, Pekka's input would be really useful here. We can work together Harry to come up with unified UAPI's which caters to general purpose color hardware pipeline. 
Just floated a RFC series with a UAPI proposal, link below: https://patchwork.freedesktop.org/series/90826/ Please check and share your feedback. Regards, Uma Shankar > >> Cheers, > >> -Brian > >>
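For completeness, wiring up an enum like the proposed per-plane transfer function would presumably go through the existing KMS property helpers; a sketch along those lines is below. The property name, enum strings, default value and placement are all illustrative, not merged uAPI and not taken verbatim from Uma's series.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <drm/drm_device.h>
#include <drm/drm_plane.h>
#include <drm/drm_property.h>

/* Enum values mirror the RFC's transfer function table; not merged uAPI. */
static const struct drm_prop_enum_list rfc_tf_list[] = {
	{ 0, "Gamma 2.2" },
	{ 1, "sRGB" },
	{ 2, "PQ 2084" },
	{ 3, "Linear" },
	{ 4, "Custom 1D" },
};

static int rfc_attach_input_tf_property(struct drm_plane *plane)
{
	struct drm_property *prop;

	prop = drm_property_create_enum(plane->dev, 0, "INPUT_TF",
					rfc_tf_list, ARRAY_SIZE(rfc_tf_list));
	if (!prop)
		return -ENOMEM;

	/* Default to sRGB, matching today's implicit assumption. */
	drm_object_attach_property(&plane->base, prop, 1);
	return 0;
}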
On Fri, 30 Jul 2021 16:41:29 -0400 Harry Wentland <harry.wentland@amd.com> wrote: > Use the new DRM RFC doc section to capture the RFC previously only > described in the cover letter at > https://patchwork.freedesktop.org/series/89506/ > > v3: > * Add sections on single-plane and multi-plane HDR > * Describe approach to define HW details vs approach to define SW intentions > * Link Jeremy Cline's excellent HDR summaries > * Outline intention behind overly verbose doc > * Describe FP16 use-case > * Clean up links > > v2: create this doc > > v1: n/a > > Signed-off-by: Harry Wentland <harry.wentland@amd.com> Hi Harry, I finally managed to go through this, comments below. Excellent to have pictures included. I wrote this reply over several days, sorry if it's not quite coherent. > --- > Documentation/gpu/rfc/color_intentions.drawio | 1 + > Documentation/gpu/rfc/color_intentions.svg | 3 + > Documentation/gpu/rfc/colorpipe | 1 + > Documentation/gpu/rfc/colorpipe.svg | 3 + > Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ > Documentation/gpu/rfc/index.rst | 1 + > 6 files changed, 589 insertions(+) > create mode 100644 Documentation/gpu/rfc/color_intentions.drawio > create mode 100644 Documentation/gpu/rfc/color_intentions.svg > create mode 100644 Documentation/gpu/rfc/colorpipe > create mode 100644 Documentation/gpu/rfc/colorpipe.svg > create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst ... > diff --git a/Documentation/gpu/rfc/hdr-wide-gamut.rst b/Documentation/gpu/rfc/hdr-wide-gamut.rst > new file mode 100644 > index 000000000000..e463670191ab > --- /dev/null > +++ b/Documentation/gpu/rfc/hdr-wide-gamut.rst > @@ -0,0 +1,580 @@ > +============================== > +HDR & Wide Color Gamut Support > +============================== > + > +.. role:: wy-text-strike > + > +ToDo > +==== > + > +* :wy-text-strike:`Reformat as RST kerneldoc` - done > +* :wy-text-strike:`Don't use color_encoding for color_space definitions` - done > +* :wy-text-strike:`Update SDR luminance description and reasoning` - done > +* :wy-text-strike:`Clarify 3D LUT required for some color space transformations` - done > +* :wy-text-strike:`Highlight need for named color space and EOTF definitions` - done > +* :wy-text-strike:`Define transfer function API` - done > +* :wy-text-strike:`Draft upstream plan` - done > +* :wy-text-strike:`Reference to wayland plan` - done > +* Reference to Chrome plans > +* Sketch view of HW pipeline for couple of HW implementations > + > + > +Upstream Plan > +============= > + > +* Reach consensus on DRM/KMS API > +* Implement support in amdgpu > +* Implement IGT tests > +* Add API support to Weston, ChromiumOS, or other canonical open-source project interested in HDR > +* Merge user-space > +* Merge kernel patches The order is: review acceptance of userspace but don't merge, merge kernel, merge userspace. > + > + > +History > +======= > + > +v3: > + > +* Add sections on single-plane and multi-plane HDR > +* Describe approach to define HW details vs approach to define SW intentions > +* Link Jeremy Cline's excellent HDR summaries > +* Outline intention behind overly verbose doc > +* Describe FP16 use-case > +* Clean up links > + > +v2: create this doc > + > +v1: n/a > + > + > +Introduction > +============ > + > +We are looking to enable HDR support for a couple of single-plane and > +multi-plane scenarios. To do this effectively we recommend new interfaces > +to drm_plane. Below I'll give a bit of background on HDR and why we > +propose these interfaces. 
> + > +As an RFC doc this document is more verbose than what we would want from > +an eventual uAPI doc. This is intentional in order to ensure interested > +parties are all on the same page and to facilitate discussion if there > +is disagreement on aspects of the intentions behind the proposed uAPI. I would recommend keeping the discussion parts of the document as well, but if you think they hurt the readability of the uAPI specification, then split things into normative and informative sections. > + > + > +Overview and background > +======================= > + > +I highly recommend you read `Jeremy Cline's HDR primer`_ > + > +Jeremy Cline did a much better job describing this. I highly recommend > +you read it at [1]: > + > +.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html That's a nice write-up I didn't know about, thanks. I just wish such write-ups would be somehow peer-reviewed for correctness and curated for proper referencing. Perhaps like we develop code: at least some initial peer review and then fixes when anyone notices something to improve. Like... what you are doing here! :-) The post is perhaps a bit too narrow with OETF/EOTF terms, accidentally implying that OETF = EOTF^-1 which is not generally true, but that all depends on which O-to-E or E-to-O functions one is talking about. Particularly there is a difference between functions used for signal compression which needs an exact matching inverse function, and functions containing tone-mapping and artistic effects that when concatenated result in the (non-identity) OOTF. Nothing in the post seems to disagree with my current understanding FWI'mW. > + > +Defining a pixel's luminance > +---------------------------- > + > +The luminance space of pixels in a framebuffer/plane presented to the > +display is not well defined in the DRM/KMS APIs. It is usually assumed to > +be in a 2.2 or 2.4 gamma space and has no mapping to an absolute luminance > +value; it is interpreted in relative terms. > + > +Luminance can be measured and described in absolute terms as candela > +per meter squared, or cd/m2, or nits. Even though a pixel value can be > +mapped to luminance in a linear fashion to do so without losing a lot of > +detail requires 16-bpc color depth. The reason for this is that human > +perception can distinguish roughly between a 0.5-1% luminance delta. A > +linear representation is suboptimal, wasting precision in the highlights > +and losing precision in the shadows. > + > +A gamma curve is a decent approximation to a human's perception of > +luminance, but the `PQ (perceptual quantizer) function`_ improves on > +it. It also defines the luminance values in absolute terms, with the > +highest value being 10,000 nits and the lowest 0.0005 nits. > + > +Using a content that's defined in PQ space we can approximate the real > +world in a much better way. Or HLG. It is said that HLG puts the OOTF in the display, while in a PQ system OOTF is baked into the transmission. However, a monitor that consumes PQ will likely do some tone-mapping to fit it to the display capabilities, so it is adding an OOTF of its own. In a HLG system I would think artistic adjustments are done before transmission baking them in, adding its own OOTF in addition to the sink OOTF. So both systems necessarily have some O-O mangling on both sides of transmission. 
There is a HLG presentation at https://www.w3.org/Graphics/Color/Workshop/talks.html#intro > + > +Here are some examples of real-life objects and their approximate > +luminance values: > + > + > +.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer > + > +.. flat-table:: > + :header-rows: 1 > + > + * - Object > + - Luminance in nits > + > + * - Fluorescent light > + - 10,000 > + > + * - Highlights > + - 1,000 - sunlight Did fluorescent and highlights get swapped here? > + > + * - White Objects > + - 250 - 1,000 > + > + * - Typical Objects > + - 1 - 250 > + > + * - Shadows > + - 0.01 - 1 > + > + * - Ultra Blacks > + - 0 - 0.0005 > + > + > +Transfer functions > +------------------ > + > +Traditionally we used the terms gamma and de-gamma to describe the > +encoding of a pixel's luminance value and the operation to transfer from > +a linear luminance space to the non-linear space used to encode the > +pixels. Since some newer encodings don't use a gamma curve I suggest > +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or > +simply as transfer function in general. Yeah, gamma could mean lots of things. If you have e.g. OETF gamma 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09. OETF, EOTF and OOTF are not unambiguous either, since there is always the question of whose function is it. Two different EOTFs are of interest in composition for display: - the display EOTF (since display signal is electrical) - the content EOTF (since content is stored in electrical encoding) > + > +The EOTF (Electro-Optical Transfer Function) describes how to transfer > +from an electrical signal to an optical signal. This was traditionally > +done by the de-gamma function. > + > +The OETF (Opto Electronic Transfer Function) describes how to transfer > +from an optical signal to an electronic signal. This was traditionally > +done by the gamma function. > + > +More generally we can name the transfer function describing the transform > +between scanout and blending space as the **input transfer function**, and "scanout space" makes me think of cable/signal values, not framebuffer values. Or, I'm not sure. I'd recommend replacing the term "scanout space" with something less ambiguous like framebuffer values. > +the transfer function describing the transform from blending space to the > +output space as **output transfer function**. You're talking about "spaces" here, but what you are actually talking about are value encodings, not (color) spaces. An EOTF or OETF is not meant to modify the color space. When talking about blending, what you're actually interested in is linear vs. non-linear color value encoding. This matches your talk about EOTF and OETF, although you need to be careful to specify which EOTF and OETF you mean. For blending, color values need to be linear in light intensity, and the inverse of the E-to-O mapping before blending is exactly the same as the O-to-E mapping after blending. Otherwise you would alter even opaque pixels. OETF is often associated with cameras, not displays. Maybe use EOTF^-1 instead? Btw. another terminology thing: color space vs. color model. RGB and YCbCr are color models. sRGB, BT.601 and BT.2020 are color spaces. These two are orthogonal concepts. > + > + > +.. 
_EOTF, and OETF: https://en.wikipedia.org/wiki/Transfer_functions_in_imaging > + > +Mastering Luminances > +-------------------- > + > +Even though we are able to describe the absolute luminance of a pixel > +using the PQ 2084 EOTF we are presented with physical limitations of the > +display technologies on the market today. Here are a few examples of > +luminance ranges of displays. > + > +.. flat-table:: > + :header-rows: 1 > + > + * - Display > + - Luminance range in nits > + > + * - Typical PC display > + - 0.3 - 200 > + > + * - Excellent LCD HDTV > + - 0.3 - 400 > + > + * - HDR LCD w/ local dimming > + - 0.05 - 1,500 > + > +Since no display can currently show the full 0.0005 to 10,000 nits > +luminance range of PQ the display will need to tone-map the HDR content, > +i.e to fit the content within a display's capabilities. To assist > +with tone-mapping HDR content is usually accompanied by a metadata > +that describes (among other things) the minimum and maximum mastering > +luminance, i.e. the maximum and minimum luminance of the display that > +was used to master the HDR content. > + > +The HDR metadata is currently defined on the drm_connector via the > +hdr_output_metadata blob property. HDR_OUTPUT_METADATA, all caps. > + > +It might be useful to define per-plane hdr metadata, as different planes > +might have been mastered differently. > + > +.. _SDR Luminance: > + > +SDR Luminance > +------------- > + > +Traditional SDR content's maximum white luminance is not well defined. > +Some like to define it at 80 nits, others at 200 nits. It also depends > +to a large extent on the environmental viewing conditions. In practice > +this means that we need to define the maximum SDR white luminance, either > +in nits, or as a ratio. > + > +`One Windows API`_ defines it as a ratio against 80 nits. > + > +`Another Windows API`_ defines it as a nits value. > + > +The `Wayland color management proposal`_ uses Apple's definition of EDR as a > +ratio of the HDR range vs SDR range. > + > +If a display's maximum HDR white level is correctly reported it is trivial > +to convert between all of the above representations of SDR white level. If > +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed > +nits value is preferred, assuming we are blending in linear space. > + > +It is our experience that many HDR displays do not report maximum white > +level correctly Which value do you refer to as "maximum white", and how did you measure it? You also need to define who is "us" since kernel docs tend to get lots of authors over time. > + > +.. _One Windows API: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/dispmprt/ns-dispmprt-_dxgkarg_settargetadjustedcolorimetry2 > +.. _Another Windows API: https://docs.microsoft.com/en-us/uwp/api/windows.graphics.display.advancedcolorinfo.sdrwhitelevelinnits?view=winrt-20348 > +.. _Wayland color management proposal: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 > + > +Let There Be Color > +------------------ > + > +So far we've only talked about luminance, ignoring colors altogether. Just > +like in the luminance space, traditionally the color space of display > +outputs has not been well defined. Similar to how an EOTF defines a > +mapping of pixel data to an absolute luminance value, the color space > +maps color information for each pixel onto the CIE 1931 chromaticity > +space. This can be thought of as a mapping to an absolute, real-life, > +color value. 
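To put numbers to the CIE 1931 mapping mentioned just above, here is a small reference sketch of the primary and white-point coordinates behind three of the color spaces discussed in this thread. The values are rounded from the respective specifications (BT.709, Display P3, BT.2020) and the struct layout is purely illustrative, not anything proposed for uAPI:

/* CIE 1931 xy chromaticity coordinates of common primary sets.
 * All three sets below use the D65 white point; theatrical DCI-P3
 * uses a different (greenish) white and is not shown here.
 */
struct chromaticity { double x, y; };

struct color_primaries {
	struct chromaticity red, green, blue, white;
};

static const struct color_primaries bt709 = {
	.red   = { 0.640, 0.330 },
	.green = { 0.300, 0.600 },
	.blue  = { 0.150, 0.060 },
	.white = { 0.3127, 0.3290 },	/* D65 */
};

static const struct color_primaries display_p3 = {
	.red   = { 0.680, 0.320 },
	.green = { 0.265, 0.690 },
	.blue  = { 0.150, 0.060 },
	.white = { 0.3127, 0.3290 },
};

static const struct color_primaries bt2020 = {
	.red   = { 0.708, 0.292 },
	.green = { 0.170, 0.797 },
	.blue  = { 0.131, 0.046 },
	.white = { 0.3127, 0.3290 },
};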
> + > +A color space is defined by its primaries and white point. The primaries > +and white point are expressed as coordinates in the CIE 1931 color > +space. Think of the red primary as the reddest red that can be displayed > +within the color space. Same for green and blue. > + > +Examples of color spaces are: > + > +.. flat-table:: > + :header-rows: 1 > + > + * - Color Space > + - Description > + > + * - BT 601 > + - similar to BT 709 > + > + * - BT 709 > + - used by sRGB content; ~53% of BT 2020 > + > + * - DCI-P3 > + - used by most HDR displays; ~72% of BT 2020 > + > + * - BT 2020 > + - standard for most HDR content > + > + > + > +Color Primaries and White Point > +------------------------------- > + > +Just like displays can currently not represent the entire 0.0005 - > +10,000 nits HDR range of the PQ 2084 EOTF, they are currently not capable "PQ" or "ST 2084". > +of representing the entire BT.2020 color Gamut. For this reason video > +content will often specify the color primaries and white point used to > +master the video, in order to allow displays to be able to map the image > +as best as possible onto the display's gamut. > + > + > +Displays and Tonemapping > +------------------------ > + > +External displays are able to do their own tone and color mapping, based > +on the mastering luminance, color primaries, and white space defined in > +the HDR metadata. HLG does things differently wrt. metadata and tone-mapping than PQ. > + > +Some internal panels might not include the complex HW to do tone and color > +mapping on their own and will require the display driver to perform > +appropriate mapping. > + > + > +How are we solving the problem? > +=============================== > + > +Single-plane > +------------ > + > +If a single drm_plane is used no further work is required. The compositor > +will provide one HDR plane alongside a drm_connector's hdr_output_metadata > +and the display HW will output this plane without further processing if > +no CRTC LUTs are provided. > + > +If desired a compositor can use the CRTC LUTs for HDR content but without > +support for PWL or multi-segmented LUTs the quality of the operation is > +expected to be subpar for HDR content. Explain/expand PWL. Do you have references to these subpar results? I'm interested in when and how they appear. I may want to use that information to avoid using KMS LUTs when they are inadequate. > + > + > +Multi-plane > +----------- > + > +In multi-plane configurations we need to solve the problem of blending > +HDR and SDR content. This blending should be done in linear space and > +therefore requires framebuffer data that is presented in linear space > +or a way to convert non-linear data to linear space. Additionally > +we need a way to define the luminance of any SDR content in relation > +to the HDR content. > + > +In order to present framebuffer data in linear space without losing a > +lot of precision it needs to be presented using 16 bpc precision. Integer or floating-point? > + > + > +Defining HW Details > +------------------- > + > +One way to take full advantage of modern HW's color pipelines is by > +defining a "generic" pipeline that matches all capable HW. Something > +like this, which I took `from Uma Shankar`_ and expanded on: > + > +.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/ > + > +.. kernel-figure:: colorpipe.svg Btw. there will be interesting issues with alpha-premult, filtering, and linearisation if your planes have alpha channels. That's before HDR is even considered. 
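To make the linear-blending point concrete, here is a minimal per-channel sketch. It assumes a plain gamma-2.2 power function standing in for whatever EOTF the plane actually uses; a real pipeline would use the framebuffer's real EOTF and its exact inverse after blending:

#include <math.h>

/* A gamma-2.2 power curve stands in for the real framebuffer EOTF. */
static double eotf(double v)     { return pow(v, 2.2); }
static double eotf_inv(double v) { return pow(v, 1.0 / 2.2); }

/* Source-over blend of one non-linear source pixel onto a non-linear
 * destination pixel, for a source with straight (non-premultiplied)
 * alpha.
 */
static double blend_over(double src, double src_alpha, double dst)
{
	/* 1. Linearize both operands. If the buffer stored alpha
	 *    premultiplied into non-linear values, a simple per-channel
	 *    EOTF like this is no longer strictly correct -- one of the
	 *    issues mentioned in the comment above.
	 */
	double src_lin = eotf(src);
	double dst_lin = eotf(dst);

	/* 2. Blend in linear light. */
	double out_lin = src_lin * src_alpha + dst_lin * (1.0 - src_alpha);

	/* 3. Re-encode with the exact inverse, so fully opaque pixels
	 *    survive the round trip unchanged.
	 */
	return eotf_inv(out_lin);
}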
> + > +I intentionally put de-Gamma, and Gamma in parentheses in my graph > +as they describe the intention of the block but not necessarily a > +strict definition of how a userspace implementation is required to > +use them. > + > +De-Gamma and Gamma blocks are named LUT, but they could be non-programmable > +LUTs in some HW implementations with no programmable LUT available. See > +the definitions for AMD's `latest dGPU generation`_ as an example. > + > +.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c?h=v5.13#n2586 > + > +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" > +as we generally don't want to re-apply gamma before blending, or do > +de-gamma post blending. These blocks tend generally to be intended for > +tonemapping purposes. Right. > + > +Tonemapping in this case could be a simple nits value or `EDR`_ to describe > +how to scale the :ref:`SDR luminance`. I do wonder how that will turn out in the end... but on Friday there will be HDR Compositing and Tone-mapping live Q&A session: https://www.w3.org/Graphics/Color/Workshop/talks.html#compos > + > +Tonemapping could also include the ability to use a 3D LUT which might be > +accompanied by a 1D shaper LUT. The shaper LUT is required in order to > +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates > +in perceptual (non-linear) space, so as to evenly spread the limited > +entries evenly across the perceived space. > + > +.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 > + > +Creating a model that is flexible enough to define color pipelines for > +a wide variety of HW is challenging, though not impossible. Implementing > +support for such a flexible definition in userspace, though, amounts > +to essentially writing color pipeline drivers for each HW. My thinking right now is that userspace has it's own pipeline model with the elements it must have. Then it attempts to map that pipeline to what elements the KMS pipeline happens to expose. If there is a mapping, good. If not, fall back to shaders on GPU. To help that succeed more often, I'm using the current KMS abstract pipeline as a guide in designing the Weston internal color pipeline. > + > + > +Defining SW Intentions > +---------------------- > + > +An alternative to describing the HW color pipeline in enough detail to > +be useful for color management and HDR purposes is to instead define > +SW intentions. > + > +.. kernel-figure:: color_intentions.svg > + > +This greatly simplifies the API and lets the driver do what a driver > +does best: figure out how to program the HW to achieve the desired > +effect. > + > +The above diagram could include white point, primaries, and maximum > +peak and average white levels in order to facilitate tone mapping. > + > +At this point I suggest to keep tonemapping (other than an SDR luminance > +adjustment) out of the current DRM/KMS API. Most HDR displays are capable > +of tonemapping. If for some reason tonemapping is still desired on > +a plane, a shader might be a better way of doing that instead of relying > +on display HW. "Non-programmable LUT" as you referred to them is an interesting departure from the earlier suggestion, where you intended to describe color spaces and encodings of content and display and let the hardware do whatever wild magic in between. 
Now it seems like you have shifted to programming transformations instead. They may be programmable or enumerated, but still transformations rather than source and destination descriptions. If the enumerated transformations follow standards, even better. I think this is a step in the right direction. However, you wrote in the heading "Intentions" which sounds like your old approach. Conversion from one additive linear color space to another is a matter of matrix multiplication. That is simple and easy to define, just load a matrix. The problem is gamut mapping: you may end up outside of the destination gamut, or maybe you want to use more of the destination gamut than what the color space definitions imply. There are many conflicting goals and ways to this, and I suspect the room for secret sauce is here (and in tone-mapping). There is also a difference between color space (signal) gamut and device gamut. A display may accept BT.2020 signal, but the gamut it can show is usually much less. > + > +In some ways this mirrors how various userspace APIs treat HDR: > + * Gstreamer's `GstVideoTransferFunction`_ > + * EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension > + * Vulkan's `VkColorSpaceKHR`_ > + > +.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-language=c#GstVideoTransferFunction > +.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt2020_linear.txt > +.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VkColorSpaceKHR > + > + > +A hybrid approach to the API > +---------------------------- > + > +Our current approach attempts a hybrid approach, defining API to specify > +input and output transfer functions, as well as an SDR boost, and a > +input color space definition. Using a color space definition in the KMS UAPI brings us back to the old problem. Using descriptions of content (color spaces) instead of prescribing transformations seems to be designed to allow vendors make use of their secret hardware sauce: how to best realise the intent. Since it is secret sauce, by definition it cannot be fully replicated in software or shaders. One might even get sued for succeeding. General purpose (read: desktop) compositors need to adapt to any scenegraph and they want to make the most of the hardware under all situations. This means that it is not possible to guarantee that a certain window is always going to be using a KMS plane. Maybe a small change in the scenegraph, a moving window or cursor, suddenly causes the KMS plane to become unsuitable for the window, or in the opposite case the KMS plane suddenly becomes available for the window. This means that a general purpose compositor will be doing frame-by-frame decisions on which window to put on which KMS plane, and which windows need to be composited with shaders. Not being able to replicate what the hardware does means that shaders cannot produce the same image on screen as the KMS plane would. When KMS plane assignments change, the window appearance would change as well. I imagine end users would be complaining of such glitches. However, there are other use cases where I can imagine this descriptive design working perfectly. Any non-general, non-desktop compositor, or a closed system, could probably guarantee that the scenegraph will always map in a specific way to the KMS planes. 
The window would always map to the KMS plane, meaning that it would never need to be composited with shaders, and therefore cannot change color unexpectedly from end user point of view. TVs, set-top-boxes, etc., maybe even phones. Some use cases have a hard requirement of putting a specific window on a specific KMS plane, or the system simply cannot display it (performance, protection...). Is it worth having two fundamentally different KMS UAPIs for HDR composition support, where one interface supports only a subset of use cases and the other (per-plane LUT, CTM, LUT, and more, freely programmable by userspace) supports all use cases? That's a genuine question. Are the benefits worth the kernel developers' efforts to design, implement, and forever maintain both mutually exclusive interfaces? Now, someone might say that the Wayland protocol design for HDR aims to be descriptive and not prescriptive, so why should KMS UAPI be different? The reason is explained above: *some* KMS clients may switch frame by frame between KMS and shaders, but Wayland clients pick one path and stick to it. Wayland clients have no reason that I can imagine to switch arbitrarily in flight. > + > +We would like to solicit feedback and encourage discussion around the > +merits and weaknesses of these approaches. This question is at the core > +of defining a good API and we'd like to get it right. > + > + > +Input and Output Transfer functions > +----------------------------------- > + > +We define an input transfer function on drm_plane to describe the > +transform from framebuffer to blending space. > + > +We define an output transfer function on drm_crtc to describe the > +transform from blending space to display space. > + Here is again the terminology problem between transfer function and (color) space. > +The transfer function can be a pre-defined function, such as PQ EOTF, or > +a custom LUT. A driver will be able to specify support for specific > +transfer functions, including custom ones. This sounds good. > + > +Defining the transfer function in this way allows us to support in on HW > +that uses ROMs to support these transforms, as well as on HW that use > +LUT definitions that are complex and don't map easily onto a standard LUT > +definition. > + > +We will not define per-plane LUTs in this patchset as the scope of our > +current work only deals with pre-defined transfer functions. This API has > +the flexibility to add custom 1D or 3D LUTs at a later date. Ok. > + > +In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc > +we will include a "custom 1D" enum value to indicate that the custom gamma and > +de-gamma 1D LUTs should be used. Sounds fine. > + > +Possible transfer functions: > + > +.. flat-table:: > + :header-rows: 1 > + > + * - Transfer Function > + - Description > + > + * - Gamma 2.2 > + - a simple 2.2 gamma function > + > + * - sRGB > + - 2.4 gamma with small initial linear section Maybe rephrase to: The piece-wise sRGB transfer function with the small initial linear section, approximately corresponding to 2.4 gamma function. I recall some debate, too, whether with a digital flat panel you should use a pure 2.4 gamma function or the sRGB function. (Which one do displays expect?) > + > + * - PQ 2084 > + - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support Perceptual Quantizer (PQ), or ST 2084. There is no PQ 2084. 
> + > + * - Linear > + - Linear relationship between pixel value and luminance value > + > + * - Custom 1D > + - Custom 1D de-gamma and gamma LUTs; one LUT per color > + > + * - Custom 3D > + - Custom 3D LUT (to be defined) Adding HLG transfer function to this set would be interesting, because it requires a parameter I believe. How would you handle parameterised transfer functions? It's worth to note that while PQ is absolute in luminance (providing cd/m² values), everything else here is relative for both SDR and HDR. You cannot blend content in PQ with content in something else together, until you practically define the absolute luminance for all non-PQ content or vice versa. A further complication is that you could have different relative-luminance transfer functions, meaning that the (absolute) luminance they are relative to varies. The obvious case is blending SDR content with HDR content when both have relative-luminance transfer function. Then you have HLG which is more like scene-referred than display-referred, but that might be solved with the parameter I mentioned, I'm not quite sure. PQ is said to be display-referred, but it's usually referred to someone else's display than yours, which means it needs the HDR metadata to be able to tone-map suitably to your display. This seems to be a similar problem as with signal gamut vs. device gamut. The traditional relative-luminance transfer functions, well, the content implied by them, is display-referred when it arrived at KMS or compositor level. There the question of "whose display" doesn't matter much because it's SDR and narrow gamut, and we probably don't even notice when we see an image wrong. With HDR the mismatch might be noticeable. > + > + > +Describing SDR Luminance > +------------------------------ > + > +Since many displays do no correctly advertise the HDR white level we > +propose to define the SDR white level in nits. This means that even if you had no content using PQ, you still need to define the absolute luminance for all the (HDR) relative-luminance transfer functions. There probably needs to be something to relate everything to a single, relative or absolute, luminance range. That is necessary for any composition (KMS and software) since the output is a single image. Is it better to go with relative or absolute metrics? Right now I would tend to say relative, because relative is unitless. Absolute values are numerically equivalent, but they might not have anything to do with actual physical measurements, making them actually relative. This happens when your monitor does not support PQ mode or does tone-mapping to your image, for instance. The concept we have played with in Wayland so far is EDR, but then you have the question of "what does zero mean", i.e. the luminance of darkest black could vary between contents as well, not just the luminance of extreme white. > + > +We define a new drm_plane property to specify the white level of an SDR > +plane. > + > + > +Defining the color space > +------------------------ > + > +We propose to add a new color space property to drm_plane to define a > +plane's color space. > + > +While some color space conversions can be performed with a simple color > +transformation matrix (CTM) others require a 3D LUT. 
> + > + > +Defining mastering color space and luminance > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > + > +ToDo > + > + > + > +Pixel Formats > +~~~~~~~~~~~~~ > + > +The pixel formats, such as ARGB8888, ARGB2101010, P010, or FP16 are > +unrelated to color space and EOTF definitions. HDR pixels can be formatted Yes! > +in different ways but in order to not lose precision HDR content requires > +at least 10 bpc precision. For this reason ARGB2101010, P010, and FP16 are > +the obvious candidates for HDR. ARGB2101010 and P010 have the advantage > +of requiring only half the bandwidth as FP16, while FP16 has the advantage > +of enough precision to operate in a linear space, i.e. without EOTF. This reminds me of something interesting said during the W3C WCG & HDR Q&A session yesterday. Unfortunately I forget his name, but I think transcriptions should become available at some point, someone said that pixel depth or bit precision should be thought of as setting the noise floor. When you quantize values, always do dithering. Then the precision only changes your noise floor level. Then something about how audio has realized this ages ago and we are just catching up. If you don't dither, you get banding artifacts in gradients. If you do dither, it's just noise. > + > + > +Use Cases > +========= > + > +RGB10 HDR plane - composited HDR video & desktop > +------------------------------------------------ > + > +A single, composited plane of HDR content. The use-case is a video player > +on a desktop with the compositor owning the composition of SDR and HDR > +content. The content shall be PQ BT.2020 formatted. The drm_connector's > +hdr_output_metadata shall be set. > + > + > +P010 HDR video plane + RGB8 SDR desktop plane > +--------------------------------------------- > +A normal 8bpc desktop plane, with a P010 HDR video plane underlayed. The > +HDR plane shall be PQ BT.2020 formatted. The desktop plane shall specify > +an SDR boost value. The drm_connector's hdr_output_metadata shall be set. > + > + > +One XRGB8888 SDR Plane - HDR output > +----------------------------------- > + > +In order to support a smooth transition we recommend an OS that supports > +HDR output to provide the hdr_output_metadata on the drm_connector to > +configure the output for HDR, even when the content is only SDR. This will > +allow for a smooth transition between SDR-only and HDR content. In this Agreed, but this also kind of contradicts the idea of pushing HDR metadata from video all the way to the display in the RGB10 HDR plane case - something you do not seem to suggest here at all, but I would have expected that to be a prime use case for you. A set-top-box might want to push the video HDR metadata all the way to the display when supported, and then adapt all the non-video graphics to that. Thanks, pq > +use-case the SDR max luminance value should be provided on the drm_plane. > + > +In DCN we will de-PQ or de-Gamma all input in order to blend in linear > +space. For SDR content we will also apply any desired boost before > +blending. After blending we will then re-apply the PQ EOTF and do RGB > +to YCbCr conversion if needed. > + > +FP16 HDR linear planes > +---------------------- > + > +These will require a transformation into the display's encoding (e.g. PQ) > +using the CRTC LUT. Current CRTC LUTs are lacking the precision in the > +dark areas to do the conversion without losing detail. 
> + > +One of the newly defined output transfer functions or a PWL or `multi-segmented > +LUT`_ can be used to facilitate the conversion to PQ, HLG, or another > +encoding supported by displays. > + > +.. _multi-segmented LUT: https://patchwork.freedesktop.org/series/90822/ > + > + > +User Space > +========== > + > +Gnome & GStreamer > +----------------- > + > +See Jeremy Cline's `HDR in Linux\: Part 2`_. > + > +.. _HDR in Linux\: Part 2: https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.html > + > + > +Wayland > +------- > + > +See `Wayland Color Management and HDR Design Goals`_. > + > +.. _Wayland Color Management and HDR Design Goals: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst > + > + > +ChromeOS Ozone > +-------------- > + > +ToDo > + > + > +HW support > +========== > + > +ToDo, describe pipeline on a couple different HW platforms > + > + > +Further Reading > +=============== > + > +* https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst > +* http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf > +* https://app.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf > +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html > +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.html > + > + > diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst > index 05670442ca1b..8d8430cfdde1 100644 > --- a/Documentation/gpu/rfc/index.rst > +++ b/Documentation/gpu/rfc/index.rst > @@ -19,3 +19,4 @@ host such documentation: > .. toctree:: > > i915_gem_lmem.rst > + hdr-wide-gamut.rst
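As a concrete illustration of how a compositor might exercise the hybrid per-plane/per-CRTC proposal quoted above, here is a rough atomic-commit sketch. Everything HDR-specific in it is hypothetical: the property names and enum values only mirror the RFC's proposal and do not exist in upstream uAPI, and lookup_prop_id() stands in for the usual drmModeObjectGetProperties()-based name lookup. Only the libdrm atomic calls themselves are real:

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical helper: resolve a property name on a KMS object to its id. */
extern uint32_t lookup_prop_id(int fd, uint32_t obj_id, const char *name);

static int set_sdr_plane_on_hdr_output(int fd, uint32_t plane_id,
					uint32_t crtc_id)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	/* Framebuffer -> blending space: undo the sRGB encoding. */
	drmModeAtomicAddProperty(req, plane_id,
				 lookup_prop_id(fd, plane_id, "INPUT_TF"),
				 0 /* hypothetical enum value: sRGB */);

	/* Proposed per-plane SDR white level, in nits. */
	drmModeAtomicAddProperty(req, plane_id,
				 lookup_prop_id(fd, plane_id, "SDR_WHITE_LEVEL"),
				 200);

	/* Blending space -> wire: re-encode the blended result as PQ. */
	drmModeAtomicAddProperty(req, crtc_id,
				 lookup_prop_id(fd, crtc_id, "OUTPUT_TF"),
				 1 /* hypothetical enum value: PQ */);

	/* Test-only commit, so the driver can reject unsupported combos. */
	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	drmModeAtomicFree(req);
	return ret;
}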
On Mon, 16 Aug 2021 15:37:23 +0200 sebastian@sebastianwick.net wrote: > On 2021-08-16 14:40, Harry Wentland wrote: > > On 2021-08-16 7:10 a.m., Brian Starkey wrote: > >> On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote: > >>> Hello Brian, > >>> (+Uma in cc) > >>> > >>> Thanks for your comments, Let me try to fill-in for Harry to keep the > >>> design > >>> discussion going. Please find my comments inline. > >>> > > > > Thanks, Shashank. I'm back at work now. Had to cut my trip short > > due to rising Covid cases and concern for my kids. > > > >>> On 8/2/2021 10:00 PM, Brian Starkey wrote: > >>>> > >> > >> -- snip -- > >> > >>>> > >>>> Android doesn't blend in linear space, so any API shouldn't be built > >>>> around an assumption of linear blending. > >>>> > > > > This seems incorrect but I guess ultimately the OS is in control of > > this. If we want to allow blending in non-linear space with the new > > API we would either need to describe the blending space or the > > pre/post-blending gamma/de-gamma. > > > > Any idea if this blending behavior in Android might get changed in > > the future? > > There is lots of software which blends in sRGB space and designers > adjusted to the incorrect blending in a way that the result looks right. > Blending in linear space would result in incorrectly looking images. Hi, yes, and I'm guilty of that too, at least by negligence. ;-) All Wayland compositors do it, since that's what everyone has always been doing, more or less. It's still physically wrong, but when all you have is sRGB and black window shadows and rounded corners as the only use case, you don't mind. When you start blending with colors other than black (gradients!), when you go to wide gamut, or especially with HDR, I believe the problems start to become painfully obvious. But as long as you're stuck with sRGB only, people expect the "wrong" result and deviating from that is a regression. Similarly, once Weston starts doing color management and people turn it on and install monitor profiles, I expect to get reports saying "all old apps look really dull now". That's how sRGB is defined to look like, they've been looking at something else for all that time. :-) Maybe we need a sRGB "gamut boost" similar to SDR luminance boost. ;-) > >> I still think that directly exposing the HW blocks and their > >> capabilities is the right approach, rather than a "magic" tonemapping > >> property. > >> > >> Yes, userspace would need to have a good understanding of how to use > >> that hardware, but if the pipeline model is standardised that's the > >> kind of thing a cross-vendor library could handle. > >> > > > > One problem with cross-vendor libraries is that they might struggle > > to really be cross-vendor when it comes to unique HW behavior. Or > > they might pick sub-optimal configurations as they're not aware of > > the power impact of a configuration. What's an optimal configuration > > might differ greatly between different HW. > > > > We're seeing this problem with "universal" planes as well. > > I'm repeating what has been said before but apparently it has to be said > again: if a property can't be replicated exactly in a shader the > property is useless. If your hardware is so unique that it can't give us > the exact formula we expect you cannot expose the property. From desktop perspective, yes, but I'm nowadays less adamant about it. 
If kernel developers are happy to maintain multiple alternative UAPIs, then I'm not going to try to NAK that - I'll just say when I can and cannot make use of them. Also everything is always up to some precision, and ultimately here it is a question of whether people can see the difference. Entertainment end user audience is also much more forgiving than professional color management audience. For the latter, I'd hesitate to use non-primary KMS planes at all. > Either way if the fixed KMS pixel pipeline is not sufficient to expose > the intricacies of real hardware the right move would be to make the KMS > pixel pipeline more dynamic, expose more hardware specifics and create a > hardware specific user space like mesa. Moving the whole compositing > with all its policies and decision making into the kernel is exactly the > wrong way to go. > > Laurent Pinchart put this very well: > https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html Thanks for digging that up, saved me the trouble. :-) Thanks, pq
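To put one worked number on the sRGB-blending point discussed above: mixing black (0.0) and white (1.0) fifty-fifty directly on sRGB-encoded values gives 0.5, which the sRGB EOTF maps to only about 21% of white's linear luminance; the same mix done in linear light gives 50% luminance, which encodes back to an sRGB value of roughly 0.735. Content and themes tuned against the first result will read the physically correct second result as a regression, which is the adjustment problem described in this exchange.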
On Wed, 2021-09-15 at 17:01 +0300, Pekka Paalanen wrote: > On Fri, 30 Jul 2021 16:41:29 -0400 > Harry Wentland <harry.wentland@amd.com> wrote: > > > Use the new DRM RFC doc section to capture the RFC previously only > > described in the cover letter at > > https://patchwork.freedesktop.org/series/89506/ > > > > v3: > > * Add sections on single-plane and multi-plane HDR > > * Describe approach to define HW details vs approach to define SW > > intentions > > * Link Jeremy Cline's excellent HDR summaries > > * Outline intention behind overly verbose doc > > * Describe FP16 use-case > > * Clean up links > > > > v2: create this doc > > > > v1: n/a > > > > Signed-off-by: Harry Wentland <harry.wentland@amd.com> > > Hi Harry, > > I finally managed to go through this, comments below. Excellent to > have > pictures included. I wrote this reply over several days, sorry if > it's > not quite coherent. > > > > <snip> > > + > > + > > +Overview and background > > +======================= > > + > > +I highly recommend you read `Jeremy Cline's HDR primer`_ > > + > > +Jeremy Cline did a much better job describing this. I highly > > recommend > > +you read it at [1]: > > + > > +.. _Jeremy Cline's HDR primer: > > https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html > > That's a nice write-up I didn't know about, thanks. > > I just wish such write-ups would be somehow peer-reviewed for > correctness and curated for proper referencing. Perhaps like we > develop > code: at least some initial peer review and then fixes when anyone > notices something to improve. Like... what you are doing here! :-) > > The post is perhaps a bit too narrow with OETF/EOTF terms, > accidentally > implying that OETF = EOTF^-1 which is not generally true, but that > all > depends on which O-to-E or E-to-O functions one is talking about. > Particularly there is a difference between functions used for signal > compression which needs an exact matching inverse function, and > functions containing tone-mapping and artistic effects that when > concatenated result in the (non-identity) OOTF. > > Nothing in the post seems to disagree with my current understanding > FWI'mW. I'm more than happy to update things that are incorrect or mis-leading since the last thing I want to do is muddy the waters. Personally, I would much prefer that any useful content from it be peer-reviewed and included directly in the documentation since, well, it's being hosted out of my laundry room and the cats have a habit of turning off the UPS... Do let me know if I can be of any assistance there; I'm no longer employed to do anything HDR-related, but I do like clear documentation so I could dedicate a bit of free time to it. - Jeremy
On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 > Harry Wentland <harry.wentland@amd.com> wrote: > >> Use the new DRM RFC doc section to capture the RFC previously only >> described in the cover letter at >> https://patchwork.freedesktop.org/series/89506/ >> >> v3: >> * Add sections on single-plane and multi-plane HDR >> * Describe approach to define HW details vs approach to define SW intentions >> * Link Jeremy Cline's excellent HDR summaries >> * Outline intention behind overly verbose doc >> * Describe FP16 use-case >> * Clean up links >> >> v2: create this doc >> >> v1: n/a >> >> Signed-off-by: Harry Wentland <harry.wentland@amd.com> > > Hi Harry, > > I finally managed to go through this, comments below. Excellent to have > pictures included. I wrote this reply over several days, sorry if it's > not quite coherent. > Hi Pekka, Thanks for taking the time to go through this. My reply is also a multi-day endeavor (due to other interruptions) so please bear with me as well if it looks a bit disjointed in places. > >> --- >> Documentation/gpu/rfc/color_intentions.drawio | 1 + >> Documentation/gpu/rfc/color_intentions.svg | 3 + >> Documentation/gpu/rfc/colorpipe | 1 + >> Documentation/gpu/rfc/colorpipe.svg | 3 + >> Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ >> Documentation/gpu/rfc/index.rst | 1 + >> 6 files changed, 589 insertions(+) >> create mode 100644 Documentation/gpu/rfc/color_intentions.drawio >> create mode 100644 Documentation/gpu/rfc/color_intentions.svg >> create mode 100644 Documentation/gpu/rfc/colorpipe >> create mode 100644 Documentation/gpu/rfc/colorpipe.svg >> create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst > > ... > >> diff --git a/Documentation/gpu/rfc/hdr-wide-gamut.rst b/Documentation/gpu/rfc/hdr-wide-gamut.rst >> new file mode 100644 >> index 000000000000..e463670191ab >> --- /dev/null >> +++ b/Documentation/gpu/rfc/hdr-wide-gamut.rst >> @@ -0,0 +1,580 @@ >> +============================== >> +HDR & Wide Color Gamut Support >> +============================== >> + >> +.. role:: wy-text-strike >> + >> +ToDo >> +==== >> + >> +* :wy-text-strike:`Reformat as RST kerneldoc` - done >> +* :wy-text-strike:`Don't use color_encoding for color_space definitions` - done >> +* :wy-text-strike:`Update SDR luminance description and reasoning` - done >> +* :wy-text-strike:`Clarify 3D LUT required for some color space transformations` - done >> +* :wy-text-strike:`Highlight need for named color space and EOTF definitions` - done >> +* :wy-text-strike:`Define transfer function API` - done >> +* :wy-text-strike:`Draft upstream plan` - done >> +* :wy-text-strike:`Reference to wayland plan` - done >> +* Reference to Chrome plans >> +* Sketch view of HW pipeline for couple of HW implementations >> + >> + >> +Upstream Plan >> +============= >> + >> +* Reach consensus on DRM/KMS API >> +* Implement support in amdgpu >> +* Implement IGT tests >> +* Add API support to Weston, ChromiumOS, or other canonical open-source project interested in HDR >> +* Merge user-space >> +* Merge kernel patches > > The order is: review acceptance of userspace but don't merge, merge > kernel, merge userspace. 
> Updated for v4 >> + >> + >> +History >> +======= >> + >> +v3: >> + >> +* Add sections on single-plane and multi-plane HDR >> +* Describe approach to define HW details vs approach to define SW intentions >> +* Link Jeremy Cline's excellent HDR summaries >> +* Outline intention behind overly verbose doc >> +* Describe FP16 use-case >> +* Clean up links >> + >> +v2: create this doc >> + >> +v1: n/a >> + >> + >> +Introduction >> +============ >> + >> +We are looking to enable HDR support for a couple of single-plane and >> +multi-plane scenarios. To do this effectively we recommend new interfaces >> +to drm_plane. Below I'll give a bit of background on HDR and why we >> +propose these interfaces. >> + >> +As an RFC doc this document is more verbose than what we would want from >> +an eventual uAPI doc. This is intentional in order to ensure interested >> +parties are all on the same page and to facilitate discussion if there >> +is disagreement on aspects of the intentions behind the proposed uAPI. > > I would recommend keeping the discussion parts of the document as well, > but if you think they hurt the readability of the uAPI specification, > then split things into normative and informative sections. > Good point. Let me think how to organize this in a way that preserves readability of the spec and also preserves (key) discussions for posterity. The history behind an API can often be more informative than the API doc itself. >> + >> + >> +Overview and background >> +======================= >> + >> +I highly recommend you read `Jeremy Cline's HDR primer`_ >> + >> +Jeremy Cline did a much better job describing this. I highly recommend >> +you read it at [1]: >> + >> +.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html > > That's a nice write-up I didn't know about, thanks. > > I just wish such write-ups would be somehow peer-reviewed for > correctness and curated for proper referencing. Perhaps like we develop > code: at least some initial peer review and then fixes when anyone > notices something to improve. Like... what you are doing here! :-) > > The post is perhaps a bit too narrow with OETF/EOTF terms, accidentally > implying that OETF = EOTF^-1 which is not generally true, but that all > depends on which O-to-E or E-to-O functions one is talking about. > Particularly there is a difference between functions used for signal > compression which needs an exact matching inverse function, and > functions containing tone-mapping and artistic effects that when > concatenated result in the (non-identity) OOTF. > > Nothing in the post seems to disagree with my current understanding > FWI'mW. > >> + >> +Defining a pixel's luminance >> +---------------------------- >> + >> +The luminance space of pixels in a framebuffer/plane presented to the >> +display is not well defined in the DRM/KMS APIs. It is usually assumed to >> +be in a 2.2 or 2.4 gamma space and has no mapping to an absolute luminance >> +value; it is interpreted in relative terms. >> + >> +Luminance can be measured and described in absolute terms as candela >> +per meter squared, or cd/m2, or nits. Even though a pixel value can be >> +mapped to luminance in a linear fashion to do so without losing a lot of >> +detail requires 16-bpc color depth. The reason for this is that human >> +perception can distinguish roughly between a 0.5-1% luminance delta. A >> +linear representation is suboptimal, wasting precision in the highlights >> +and losing precision in the shadows. 
>> + >> +A gamma curve is a decent approximation to a human's perception of >> +luminance, but the `PQ (perceptual quantizer) function`_ improves on >> +it. It also defines the luminance values in absolute terms, with the >> +highest value being 10,000 nits and the lowest 0.0005 nits. >> + >> +Using a content that's defined in PQ space we can approximate the real >> +world in a much better way. > > Or HLG. It is said that HLG puts the OOTF in the display, while in a PQ > system OOTF is baked into the transmission. However, a monitor that > consumes PQ will likely do some tone-mapping to fit it to the display > capabilities, so it is adding an OOTF of its own. In a HLG system I > would think artistic adjustments are done before transmission baking > them in, adding its own OOTF in addition to the sink OOTF. So both > systems necessarily have some O-O mangling on both sides of > transmission. > > There is a HLG presentation at > https://www.w3.org/Graphics/Color/Workshop/talks.html#intro > Thanks for sharing. I spent some time on Friday to watch them all and found them very informative, especially the HLG talk and the talk about linear vs composited HDR pipelines. >> + >> +Here are some examples of real-life objects and their approximate >> +luminance values: >> + >> + >> +.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer >> + >> +.. flat-table:: >> + :header-rows: 1 >> + >> + * - Object >> + - Luminance in nits >> + >> + * - Fluorescent light >> + - 10,000 >> + >> + * - Highlights >> + - 1,000 - sunlight > > Did fluorescent and highlights get swapped here? > No, though at first glance it can look like that. This is pulled from an internal doc I didn't write, but I think the intention is to show that fluorescent lights can be up to 10,000 nits and highlights are usually 1,000+ nits. I'll clarify this in v4. A quick google search seems to show that there are even fluorescent lights with 46,000 nits. I guess these numbers provide a ballpark view more than anything. >> + >> + * - White Objects >> + - 250 - 1,000 >> + >> + * - Typical Objects >> + - 1 - 250 >> + >> + * - Shadows >> + - 0.01 - 1 >> + >> + * - Ultra Blacks >> + - 0 - 0.0005 >> + >> + >> +Transfer functions >> +------------------ >> + >> +Traditionally we used the terms gamma and de-gamma to describe the >> +encoding of a pixel's luminance value and the operation to transfer from >> +a linear luminance space to the non-linear space used to encode the >> +pixels. Since some newer encodings don't use a gamma curve I suggest >> +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or >> +simply as transfer function in general. > > Yeah, gamma could mean lots of things. If you have e.g. OETF gamma > 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09. > > OETF, EOTF and OOTF are not unambiguous either, since there is always > the question of whose function is it. > Yeah, I think both gamma and EO/OE/OO/EETF are all somewhat problematic. I tend to think about these more in terms of input and output transfer functions but then you have the ambiguity about what your input and output mean. I see the input TF between framebuffer and blender, and the output TF between blender and display. You also have the challenge that input and output transfer functions fulfill multiple roles, e.g. 
an output transfer as defined above might do linear-to-PQ conversion but could also fill the role of tone mapping in the case where the input content spans a larger range than the display space. > Two different EOTFs are of interest in composition for display: > - the display EOTF (since display signal is electrical) > - the content EOTF (since content is stored in electrical encoding) > > >> + >> +The EOTF (Electro-Optical Transfer Function) describes how to transfer >> +from an electrical signal to an optical signal. This was traditionally >> +done by the de-gamma function. >> + >> +The OETF (Opto Electronic Transfer Function) describes how to transfer >> +from an optical signal to an electronic signal. This was traditionally >> +done by the gamma function. >> + >> +More generally we can name the transfer function describing the transform >> +between scanout and blending space as the **input transfer function**, and > > "scanout space" makes me think of cable/signal values, not framebuffer > values. Or, I'm not sure. I'd recommend replacing the term "scanout > space" with something less ambiguous like framebuffer values. > Framebuffer space/values is much better than scanout space. >> +the transfer function describing the transform from blending space to the >> +output space as **output transfer function**. > > You're talking about "spaces" here, but what you are actually talking > about are value encodings, not (color) spaces. An EOTF or OETF is not > meant to modify the color space. > > When talking about blending, what you're actually interested in is > linear vs. non-linear color value encoding. This matches your talk > about EOTF and OETF, although you need to be careful to specify which > EOTF and OETF you mean. For blending, color values need to be linear in > light intensity, and the inverse of the E-to-O mapping before blending > is exactly the same as the O-to-E mapping after blending. Otherwise you > would alter even opaque pixels. > I struggle a bit with finding the right term to talk about color value encoding in general. Concrete examples can be PQ-encoded, Gamma 2.2, or linearly encoded spaces but I was grasping for a more general term; something that could potentially include TFs that also tone-map. Interestingly, the Canvas API changes presented by Christopher Cameron also seem to use the new colorSpace property to deal with both color space, as well as EOTF. https://www.youtube.com/watch?v=fHbLbVacYw4 > OETF is often associated with cameras, not displays. Maybe use EOTF^-1 > instead? > Good point. Fixed for v4. > Btw. another terminology thing: color space vs. color model. RGB and > YCbCr are color models. sRGB, BT.601 and BT.2020 are color spaces. > These two are orthogonal concepts. > Thanks for clarifying. >> + >> + >> +.. _EOTF, and OETF: https://en.wikipedia.org/wiki/Transfer_functions_in_imaging >> + >> +Mastering Luminances >> +-------------------- >> + >> +Even though we are able to describe the absolute luminance of a pixel >> +using the PQ 2084 EOTF we are presented with physical limitations of the >> +display technologies on the market today. Here are a few examples of >> +luminance ranges of displays. >> + >> +.. 
flat-table:: >> + :header-rows: 1 >> + >> + * - Display >> + - Luminance range in nits >> + >> + * - Typical PC display >> + - 0.3 - 200 >> + >> + * - Excellent LCD HDTV >> + - 0.3 - 400 >> + >> + * - HDR LCD w/ local dimming >> + - 0.05 - 1,500 >> + >> +Since no display can currently show the full 0.0005 to 10,000 nits >> +luminance range of PQ the display will need to tone-map the HDR content, >> +i.e to fit the content within a display's capabilities. To assist >> +with tone-mapping HDR content is usually accompanied by a metadata >> +that describes (among other things) the minimum and maximum mastering >> +luminance, i.e. the maximum and minimum luminance of the display that >> +was used to master the HDR content. >> + >> +The HDR metadata is currently defined on the drm_connector via the >> +hdr_output_metadata blob property. > > HDR_OUTPUT_METADATA, all caps. > >> + >> +It might be useful to define per-plane hdr metadata, as different planes >> +might have been mastered differently. >> + >> +.. _SDR Luminance: >> + >> +SDR Luminance >> +------------- >> + >> +Traditional SDR content's maximum white luminance is not well defined. >> +Some like to define it at 80 nits, others at 200 nits. It also depends >> +to a large extent on the environmental viewing conditions. In practice >> +this means that we need to define the maximum SDR white luminance, either >> +in nits, or as a ratio. >> + >> +`One Windows API`_ defines it as a ratio against 80 nits. >> + >> +`Another Windows API`_ defines it as a nits value. >> + >> +The `Wayland color management proposal`_ uses Apple's definition of EDR as a >> +ratio of the HDR range vs SDR range. >> + >> +If a display's maximum HDR white level is correctly reported it is trivial >> +to convert between all of the above representations of SDR white level. If >> +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed >> +nits value is preferred, assuming we are blending in linear space. >> + >> +It is our experience that many HDR displays do not report maximum white >> +level correctly > > Which value do you refer to as "maximum white", and how did you measure > it? > Good question. I haven't played with those displays myself but I'll try to find out a bit more background behind this statement. > You also need to define who is "us" since kernel docs tend to get lots > of authors over time. > Good point. Changed in v4 > >> + >> +.. _One Windows API: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/dispmprt/ns-dispmprt-_dxgkarg_settargetadjustedcolorimetry2 >> +.. _Another Windows API: https://docs.microsoft.com/en-us/uwp/api/windows.graphics.display.advancedcolorinfo.sdrwhitelevelinnits?view=winrt-20348 >> +.. _Wayland color management proposal: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 >> + >> +Let There Be Color >> +------------------ >> + >> +So far we've only talked about luminance, ignoring colors altogether. Just >> +like in the luminance space, traditionally the color space of display >> +outputs has not been well defined. Similar to how an EOTF defines a >> +mapping of pixel data to an absolute luminance value, the color space >> +maps color information for each pixel onto the CIE 1931 chromaticity >> +space. This can be thought of as a mapping to an absolute, real-life, >> +color value. >> + >> +A color space is defined by its primaries and white point. The primaries >> +and white point are expressed as coordinates in the CIE 1931 color >> +space. 
Think of the red primary as the reddest red that can be displayed >> +within the color space. Same for green and blue. >> + >> +Examples of color spaces are: >> + >> +.. flat-table:: >> + :header-rows: 1 >> + >> + * - Color Space >> + - Description >> + >> + * - BT 601 >> + - similar to BT 709 >> + >> + * - BT 709 >> + - used by sRGB content; ~53% of BT 2020 >> + >> + * - DCI-P3 >> + - used by most HDR displays; ~72% of BT 2020 >> + >> + * - BT 2020 >> + - standard for most HDR content >> + >> + >> + >> +Color Primaries and White Point >> +------------------------------- >> + >> +Just like displays can currently not represent the entire 0.0005 - >> +10,000 nits HDR range of the PQ 2084 EOTF, they are currently not capable > > "PQ" or "ST 2084". > Fixed in v4 >> +of representing the entire BT.2020 color Gamut. For this reason video >> +content will often specify the color primaries and white point used to >> +master the video, in order to allow displays to be able to map the image >> +as best as possible onto the display's gamut. >> + >> + >> +Displays and Tonemapping >> +------------------------ >> + >> +External displays are able to do their own tone and color mapping, based >> +on the mastering luminance, color primaries, and white space defined in >> +the HDR metadata. > > HLG does things differently wrt. metadata and tone-mapping than PQ. > As mentioned above I had some time to watch the HLG presentation and that indeed has interesting implications. With HLG we also have relative luminance HDR content. One challenge is How to tone-map HLG content alongside SDR (sRGB) content and PQ content. I think ultimately this means that we can't rely on display tonemapping when we are dealing with mixed content on the screen. In that case we would probably want to output to the display in the EDID-referred space and tone-map all incoming buffers to the EDID-referred space. I think the doc needs a lot more pictures. I wonder if I can do that without polluting git with large files. >> + >> +Some internal panels might not include the complex HW to do tone and color >> +mapping on their own and will require the display driver to perform >> +appropriate mapping. >> + >> + >> +How are we solving the problem? >> +=============================== >> + >> +Single-plane >> +------------ >> + >> +If a single drm_plane is used no further work is required. The compositor >> +will provide one HDR plane alongside a drm_connector's hdr_output_metadata >> +and the display HW will output this plane without further processing if >> +no CRTC LUTs are provided. >> + >> +If desired a compositor can use the CRTC LUTs for HDR content but without >> +support for PWL or multi-segmented LUTs the quality of the operation is >> +expected to be subpar for HDR content. > > Explain/expand PWL. > Updated in v4. > Do you have references to these subpar results? I'm interested in when > and how they appear. I may want to use that information to avoid using > KMS LUTs when they are inadequate. > I don't have any actual results or data to back up this statement at this point. > >> + >> + >> +Multi-plane >> +----------- >> + >> +In multi-plane configurations we need to solve the problem of blending >> +HDR and SDR content. This blending should be done in linear space and >> +therefore requires framebuffer data that is presented in linear space >> +or a way to convert non-linear data to linear space. Additionally >> +we need a way to define the luminance of any SDR content in relation >> +to the HDR content. 
>> + >> +In order to present framebuffer data in linear space without losing a >> +lot of precision it needs to be presented using 16 bpc precision. > > Integer or floating-point? > Floating point. Fixed in v4. I doubt integer would work since we'd lose too much precision in the dark areas. Though, maybe 16-bit would let us map those well enough? I don't know for sure. Either way, I think anybody doing linear is using FP16. > >> + >> + >> +Defining HW Details >> +------------------- >> + >> +One way to take full advantage of modern HW's color pipelines is by >> +defining a "generic" pipeline that matches all capable HW. Something >> +like this, which I took `from Uma Shankar`_ and expanded on: >> + >> +.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/ >> + >> +.. kernel-figure:: colorpipe.svg > > Btw. there will be interesting issues with alpha-premult, filtering, > and linearisation if your planes have alpha channels. That's before > HDR is even considered. > Could you expand on this a bit? >> + >> +I intentionally put de-Gamma, and Gamma in parentheses in my graph >> +as they describe the intention of the block but not necessarily a >> +strict definition of how a userspace implementation is required to >> +use them. >> + >> +De-Gamma and Gamma blocks are named LUT, but they could be non-programmable >> +LUTs in some HW implementations with no programmable LUT available. See >> +the definitions for AMD's `latest dGPU generation`_ as an example. >> + >> +.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c?h=v5.13#n2586 >> + >> +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" >> +as we generally don't want to re-apply gamma before blending, or do >> +de-gamma post blending. These blocks tend generally to be intended for >> +tonemapping purposes. > > Right. > >> + >> +Tonemapping in this case could be a simple nits value or `EDR`_ to describe >> +how to scale the :ref:`SDR luminance`. > > I do wonder how that will turn out in the end... but on Friday there > will be HDR Compositing and Tone-mapping live Q&A session: > https://www.w3.org/Graphics/Color/Workshop/talks.html#compos > I didn't manage to join the compositing and tone-mapping live Q&A? Did anything interesting emerge from that? I've watched Timo Kunkel's talk and it's been very eye opening. He does a great job of highlighting the challenges of compositing HDR content. >> + >> +Tonemapping could also include the ability to use a 3D LUT which might be >> +accompanied by a 1D shaper LUT. The shaper LUT is required in order to >> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates >> +in perceptual (non-linear) space, so as to evenly spread the limited >> +entries evenly across the perceived space. >> + >> +.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 >> + >> +Creating a model that is flexible enough to define color pipelines for >> +a wide variety of HW is challenging, though not impossible. Implementing >> +support for such a flexible definition in userspace, though, amounts >> +to essentially writing color pipeline drivers for each HW. > > My thinking right now is that userspace has it's own pipeline model > with the elements it must have. Then it attempts to map that pipeline > to what elements the KMS pipeline happens to expose. If there is a > mapping, good. If not, fall back to shaders on GPU. 
> > To help that succeed more often, I'm using the current KMS abstract > pipeline as a guide in designing the Weston internal color pipeline. > I feel I should know, but is this pipeline documented? Is it merely, the plane > crtc > connector model, or does it go beyond that? >> + >> + >> +Defining SW Intentions >> +---------------------- >> + >> +An alternative to describing the HW color pipeline in enough detail to >> +be useful for color management and HDR purposes is to instead define >> +SW intentions. >> + >> +.. kernel-figure:: color_intentions.svg >> + >> +This greatly simplifies the API and lets the driver do what a driver >> +does best: figure out how to program the HW to achieve the desired >> +effect. >> + >> +The above diagram could include white point, primaries, and maximum >> +peak and average white levels in order to facilitate tone mapping. >> + >> +At this point I suggest to keep tonemapping (other than an SDR luminance >> +adjustment) out of the current DRM/KMS API. Most HDR displays are capable >> +of tonemapping. If for some reason tonemapping is still desired on >> +a plane, a shader might be a better way of doing that instead of relying >> +on display HW. > > "Non-programmable LUT" as you referred to them is an interesting > departure from the earlier suggestion, where you intended to describe > color spaces and encodings of content and display and let the hardware > do whatever wild magic in between. Now it seems like you have shifted > to programming transformations instead. They may be programmable or > enumerated, but still transformations rather than source and > destination descriptions. If the enumerated transformations follow > standards, even better. > > I think this is a step in the right direction. > > However, you wrote in the heading "Intentions" which sounds like your > old approach. > > Conversion from one additive linear color space to another is a matter > of matrix multiplication. That is simple and easy to define, just load a > matrix. The problem is gamut mapping: you may end up outside of the > destination gamut, or maybe you want to use more of the destination > gamut than what the color space definitions imply. There are many > conflicting goals and ways to this, and I suspect the room for secret > sauce is here (and in tone-mapping). > > There is also a difference between color space (signal) gamut and > device gamut. A display may accept BT.2020 signal, but the gamut it can > show is usually much less. > True. > >> + >> +In some ways this mirrors how various userspace APIs treat HDR: >> + * Gstreamer's `GstVideoTransferFunction`_ >> + * EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension >> + * Vulkan's `VkColorSpaceKHR`_ >> + >> +.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-language=c#GstVideoTransferFunction >> +.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt2020_linear.txt >> +.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VkColorSpaceKHR >> + >> + >> +A hybrid approach to the API >> +---------------------------- >> + >> +Our current approach attempts a hybrid approach, defining API to specify >> +input and output transfer functions, as well as an SDR boost, and a >> +input color space definition. > > Using a color space definition in the KMS UAPI brings us back to the > old problem. 
> > Using descriptions of content (color spaces) instead of prescribing > transformations seems to be designed to allow vendors make use of their > secret hardware sauce: how to best realise the intent. Since it is > secret sauce, by definition it cannot be fully replicated in software > or shaders. One might even get sued for succeeding. > > General purpose (read: desktop) compositors need to adapt to any > scenegraph and they want to make the most of the hardware under all > situations. This means that it is not possible to guarantee that a > certain window is always going to be using a KMS plane. Maybe a small > change in the scenegraph, a moving window or cursor, suddenly causes > the KMS plane to become unsuitable for the window, or in the opposite > case the KMS plane suddenly becomes available for the window. This > means that a general purpose compositor will be doing frame-by-frame > decisions on which window to put on which KMS plane, and which windows > need to be composited with shaders. > > Not being able to replicate what the hardware does means that shaders > cannot produce the same image on screen as the KMS plane would. When > KMS plane assignments change, the window appearance would change as > well. I imagine end users would be complaining of such glitches. > I see your point. > However, there are other use cases where I can imagine this descriptive > design working perfectly. Any non-general, non-desktop compositor, or a > closed system, could probably guarantee that the scenegraph will always > map in a specific way to the KMS planes. The window would always map to > the KMS plane, meaning that it would never need to be composited with > shaders, and therefore cannot change color unexpectedly from end user > point of view. TVs, set-top-boxes, etc., maybe even phones. Some use > cases have a hard requirement of putting a specific window on a > specific KMS plane, or the system simply cannot display it > (performance, protection...). > > Is it worth having two fundamentally different KMS UAPIs for HDR > composition support, where one interface supports only a subset of use > cases and the other (per-plane LUT, CTM, LUT, and more, freely > programmable by userspace) supports all use cases? > > That's a genuine question. Are the benefits worth the kernel > developers' efforts to design, implement, and forever maintain both > mutually exclusive interfaces? > Tbh, I'm personally less interested in use-cases where specific windows always map to a KMS plane. From an AMD HW point of view we can't really guarantee that a KMS plane is always available in most scenarios. So this would have to work for a general desktop compositor scenario where KMS plane usage could change frame to frame. > > Now, someone might say that the Wayland protocol design for HDR aims to > be descriptive and not prescriptive, so why should KMS UAPI be > different? The reason is explained above: *some* KMS clients may switch > frame by frame between KMS and shaders, but Wayland clients pick one > path and stick to it. Wayland clients have no reason that I can imagine > to switch arbitrarily in flight. > I'm a bit confused about this paragraph. Wouldn't the Wayland compositor decide whether to use a KMS plane or shader and not the client? >> + >> +We would like to solicit feedback and encourage discussion around the >> +merits and weaknesses of these approaches. This question is at the core >> +of defining a good API and we'd like to get it right. 
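On the earlier point that conversion between additive linear color spaces is "just a matrix": that part really is simple, and the coefficients below (the commonly quoted approximate BT.709-to-BT.2020 values, as in BT.2087) are a sketch of what a CTM would carry, not proposed UAPI. The hard part starts when the inverse conversion produces values outside [0,1]; deciding what to do with those is gamut mapping, and no 3x3 matrix can express that policy.

/*
 * Linear BT.709 RGB -> linear BT.2020 RGB, approximate coefficients.
 * Each row sums to 1.0, so white maps to white. Going the other way
 * (BT.2020 -> BT.709) with the inverse matrix can produce negative or
 * > 1.0 components for saturated colors; handling those is gamut
 * mapping, not a matrix multiply.
 */
static const double bt709_to_bt2020[3][3] = {
    { 0.6274, 0.3293, 0.0433 },
    { 0.0691, 0.9195, 0.0114 },
    { 0.0164, 0.0880, 0.8956 },
};

static void apply_ctm(const double m[3][3], const double in[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * in[0] + m[i][1] * in[1] + m[i][2] * in[2];
}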
>> + >> + >> +Input and Output Transfer functions >> +----------------------------------- >> + >> +We define an input transfer function on drm_plane to describe the >> +transform from framebuffer to blending space. >> + >> +We define an output transfer function on drm_crtc to describe the >> +transform from blending space to display space. >> + > > Here is again the terminology problem between transfer function and > (color) space. > Color value encoding? Or luminance space? Or maybe there's a different term altogether to describe this? >> +The transfer function can be a pre-defined function, such as PQ EOTF, or >> +a custom LUT. A driver will be able to specify support for specific >> +transfer functions, including custom ones. > > This sounds good. > >> + >> +Defining the transfer function in this way allows us to support in on HW >> +that uses ROMs to support these transforms, as well as on HW that use >> +LUT definitions that are complex and don't map easily onto a standard LUT >> +definition. >> + >> +We will not define per-plane LUTs in this patchset as the scope of our >> +current work only deals with pre-defined transfer functions. This API has >> +the flexibility to add custom 1D or 3D LUTs at a later date. > > Ok. > >> + >> +In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc >> +we will include a "custom 1D" enum value to indicate that the custom gamma and >> +de-gamma 1D LUTs should be used. > > Sounds fine. > >> + >> +Possible transfer functions: >> + >> +.. flat-table:: >> + :header-rows: 1 >> + >> + * - Transfer Function >> + - Description >> + >> + * - Gamma 2.2 >> + - a simple 2.2 gamma function >> + >> + * - sRGB >> + - 2.4 gamma with small initial linear section > > Maybe rephrase to: The piece-wise sRGB transfer function with the small > initial linear section, approximately corresponding to 2.4 gamma > function. > > I recall some debate, too, whether with a digital flat panel you should > use a pure 2.4 gamma function or the sRGB function. (Which one do > displays expect?) > Updated in v4. >> + >> + * - PQ 2084 >> + - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support > > Perceptual Quantizer (PQ), or ST 2084. There is no PQ 2084. > Fixed in v4 >> + >> + * - Linear >> + - Linear relationship between pixel value and luminance value >> + >> + * - Custom 1D >> + - Custom 1D de-gamma and gamma LUTs; one LUT per color >> + >> + * - Custom 3D >> + - Custom 3D LUT (to be defined) > > Adding HLG transfer function to this set would be interesting, because > it requires a parameter I believe. How would you handle parameterised > transfer functions? > Good question. I haven't really explored HLG so far but it looks like it's important to arrive at a sensible design. > It's worth to note that while PQ is absolute in luminance (providing > cd/m² values), everything else here is relative for both SDR and HDR. > You cannot blend content in PQ with content in something else together, > until you practically define the absolute luminance for all non-PQ > content or vice versa. > > A further complication is that you could have different > relative-luminance transfer functions, meaning that the (absolute) > luminance they are relative to varies. The obvious case is blending SDR > content with HDR content when both have relative-luminance transfer > function. > Good points. It sounds like we would need something akin to EDR (or max-SDR nits) for any relative-luminance TF, i.e. a way to arbitrarily scale the luminance of the respective plane. 
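For reference, since PQ keeps coming up: the ST 2084 EOTF and its inverse are closed-form, so a plane or CRTC property enumerating "PQ" is well defined even on HW that implements it as a fixed-function ROM. A sketch with the constants from the spec, not taken from any driver:

#include <math.h>

/* SMPTE ST 2084 (PQ) constants. */
static const double pq_m1 = 2610.0 / 16384.0;         /* 0.1593017578125 */
static const double pq_m2 = 2523.0 / 4096.0 * 128.0;  /* 78.84375 */
static const double pq_c1 = 3424.0 / 4096.0;          /* 0.8359375 */
static const double pq_c2 = 2413.0 / 4096.0 * 32.0;   /* 18.8515625 */
static const double pq_c3 = 2392.0 / 4096.0 * 32.0;   /* 18.6875 */

/* PQ EOTF: encoded [0,1] -> absolute luminance in cd/m² (nits). */
static double pq_eotf(double encoded)
{
    double p = pow(encoded, 1.0 / pq_m2);

    return 10000.0 * pow(fmax(p - pq_c1, 0.0) / (pq_c2 - pq_c3 * p),
                         1.0 / pq_m1);
}

/* PQ inverse EOTF: absolute luminance in nits -> encoded [0,1]. */
static double pq_inv_eotf(double nits)
{
    double y = pow(nits / 10000.0, pq_m1);

    return pow((pq_c1 + pq_c2 * y) / (1.0 + pq_c3 * y), pq_m2);
}

As a sanity check, 100 cd/m² encodes to roughly 0.51 and the commonly used 203 cd/m² HDR reference white to roughly 0.58 of the PQ signal range. This also makes the absolute-vs-relative point concrete: pq_eotf() returns nits, while the sRGB or gamma 2.2 EOTFs return unitless [0,1] values, so something like an SDR white level or EDR factor has to assign a luminance to the latter before the two can be blended.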
> Then you have HLG which is more like scene-referred than > display-referred, but that might be solved with the parameter I > mentioned, I'm not quite sure. > > PQ is said to be display-referred, but it's usually referred to > someone else's display than yours, which means it needs the HDR > metadata to be able to tone-map suitably to your display. This seems to > be a similar problem as with signal gamut vs. device gamut. > > The traditional relative-luminance transfer functions, well, the > content implied by them, is display-referred when it arrived at KMS or > compositor level. There the question of "whose display" doesn't matter > much because it's SDR and narrow gamut, and we probably don't even > notice when we see an image wrong. With HDR the mismatch might be > noticeable. > > >> + >> + >> +Describing SDR Luminance >> +------------------------------ >> + >> +Since many displays do no correctly advertise the HDR white level we >> +propose to define the SDR white level in nits. > > This means that even if you had no content using PQ, you still need to > define the absolute luminance for all the (HDR) relative-luminance > transfer functions. > > There probably needs to be something to relate everything to a single, > relative or absolute, luminance range. That is necessary for any > composition (KMS and software) since the output is a single image. > > Is it better to go with relative or absolute metrics? Right now I would > tend to say relative, because relative is unitless. Absolute values are > numerically equivalent, but they might not have anything to do with > actual physical measurements, making them actually relative. This > happens when your monitor does not support PQ mode or does tone-mapping > to your image, for instance. > It sounds like PQ is the outlier here in defining luminance in absolute units. Though it's also currently the most commonly used TF for HDR content. Wouldn't you use the absolute luminance definition for PQ if you relate everything to a relative range? Would it make sense to relate everything to a common output luminance range? If that output is PQ then an input PQ buffer is still output as PQ and relative-luminance buffers can be scaled. Would that scaling (EDR or similar) be different for SDR (sRGB) content vs other HDR relative-luminance content? > The concept we have played with in Wayland so far is EDR, but then you > have the question of "what does zero mean", i.e. the luminance of > darkest black could vary between contents as well, not just the > luminance of extreme white. > This is a good question. For AMD HW we have a way to scaled SDR content but I don't think that includes an ability to set the black point (unless you go and define a LUT for it). >> + >> +We define a new drm_plane property to specify the white level of an SDR >> +plane. >> + >> + >> +Defining the color space >> +------------------------ >> + >> +We propose to add a new color space property to drm_plane to define a >> +plane's color space. >> + >> +While some color space conversions can be performed with a simple color >> +transformation matrix (CTM) others require a 3D LUT. >> + >> + >> +Defining mastering color space and luminance >> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> + >> +ToDo >> + >> + >> + >> +Pixel Formats >> +~~~~~~~~~~~~~ >> + >> +The pixel formats, such as ARGB8888, ARGB2101010, P010, or FP16 are >> +unrelated to color space and EOTF definitions. HDR pixels can be formatted > > Yes! 
> >> +in different ways but in order to not lose precision HDR content requires >> +at least 10 bpc precision. For this reason ARGB2101010, P010, and FP16 are >> +the obvious candidates for HDR. ARGB2101010 and P010 have the advantage >> +of requiring only half the bandwidth as FP16, while FP16 has the advantage >> +of enough precision to operate in a linear space, i.e. without EOTF. > > This reminds me of something interesting said during the W3C WCG & HDR > Q&A session yesterday. Unfortunately I forget his name, but I think > transcriptions should become available at some point, someone said that > pixel depth or bit precision should be thought of as setting the noise > floor. When you quantize values, always do dithering. Then the > precision only changes your noise floor level. Then something about how > audio has realized this ages ago and we are just catching up. > > If you don't dither, you get banding artifacts in gradients. If you do > dither, it's just noise. > That's a great way to think about it. On AMD HW we basically always dither (if programmed correctly) and have done so for ages. >> + >> + >> +Use Cases >> +========= >> + >> +RGB10 HDR plane - composited HDR video & desktop >> +------------------------------------------------ >> + >> +A single, composited plane of HDR content. The use-case is a video player >> +on a desktop with the compositor owning the composition of SDR and HDR >> +content. The content shall be PQ BT.2020 formatted. The drm_connector's >> +hdr_output_metadata shall be set. >> + >> + >> +P010 HDR video plane + RGB8 SDR desktop plane >> +--------------------------------------------- >> +A normal 8bpc desktop plane, with a P010 HDR video plane underlayed. The >> +HDR plane shall be PQ BT.2020 formatted. The desktop plane shall specify >> +an SDR boost value. The drm_connector's hdr_output_metadata shall be set. >> + >> + >> +One XRGB8888 SDR Plane - HDR output >> +----------------------------------- >> + >> +In order to support a smooth transition we recommend an OS that supports >> +HDR output to provide the hdr_output_metadata on the drm_connector to >> +configure the output for HDR, even when the content is only SDR. This will >> +allow for a smooth transition between SDR-only and HDR content. In this > > Agreed, but this also kind of contradicts the idea of pushing HDR > metadata from video all the way to the display in the RGB10 HDR plane > case - something you do not seem to suggest here at all, but I would > have expected that to be a prime use case for you. > > A set-top-box might want to push the video HDR metadata all the way to > the display when supported, and then adapt all the non-video graphics > to that. > Initially I was hoping to find a quick way to allow pushing video straight from decoder through a KMS plane to the output. Increasingly I'm realizing that this is probably not going to work well for a general desktop compositor, hence the statement here to pretty much say the Wayland plan is the correct plan for this: single-plane HDR (with shader composition) first, then KMS offloading for power saving. On some level I'm still interested in the direct decoder-to-KMS-to-display path but am afraid we won't get the API right if we don't deal with the general desktop compositor use-case first. Apologies, again, if some of my response is a bit incoherent. I've been writing the responses over Friday and today. Harry > > Thanks, > pq > >> +use-case the SDR max luminance value should be provided on the drm_plane. 
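Coming back to the dithering point above, the principle is easy to sketch: add roughly one LSB of noise before truncating to the target bit depth, so the quantization error becomes noise rather than banding. This is a toy illustration only; real HW uses better-shaped dither patterns and this is not how any driver programs it.

#include <stdint.h>
#include <stdlib.h>

/*
 * Quantize a linear [0,1] value to 'bits' bits, optionally with a
 * uniform +/- 0.5 LSB dither. Without dither a smooth gradient snaps
 * to discrete codes (banding); with dither the same error shows up as
 * noise around the true value, i.e. the bit depth sets the noise floor.
 */
static uint32_t quantize(double v, unsigned int bits, int dither)
{
    double max = (double)((1u << bits) - 1);
    double noise = dither ? (double)rand() / RAND_MAX - 0.5 : 0.0;
    double q = v * max + noise;

    if (q < 0.0)
        q = 0.0;
    if (q > max)
        q = max;

    return (uint32_t)(q + 0.5);
}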
>> + >> +In DCN we will de-PQ or de-Gamma all input in order to blend in linear >> +space. For SDR content we will also apply any desired boost before >> +blending. After blending we will then re-apply the PQ EOTF and do RGB >> +to YCbCr conversion if needed. >> + >> +FP16 HDR linear planes >> +---------------------- >> + >> +These will require a transformation into the display's encoding (e.g. PQ) >> +using the CRTC LUT. Current CRTC LUTs are lacking the precision in the >> +dark areas to do the conversion without losing detail. >> + >> +One of the newly defined output transfer functions or a PWL or `multi-segmented >> +LUT`_ can be used to facilitate the conversion to PQ, HLG, or another >> +encoding supported by displays. >> + >> +.. _multi-segmented LUT: https://patchwork.freedesktop.org/series/90822/ >> + >> + >> +User Space >> +========== >> + >> +Gnome & GStreamer >> +----------------- >> + >> +See Jeremy Cline's `HDR in Linux\: Part 2`_. >> + >> +.. _HDR in Linux\: Part 2: https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.html >> + >> + >> +Wayland >> +------- >> + >> +See `Wayland Color Management and HDR Design Goals`_. >> + >> +.. _Wayland Color Management and HDR Design Goals: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst >> + >> + >> +ChromeOS Ozone >> +-------------- >> + >> +ToDo >> + >> + >> +HW support >> +========== >> + >> +ToDo, describe pipeline on a couple different HW platforms >> + >> + >> +Further Reading >> +=============== >> + >> +* https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst >> +* http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf >> +* https://app.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf >> +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html >> +* https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.html >> + >> + >> diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst >> index 05670442ca1b..8d8430cfdde1 100644 >> --- a/Documentation/gpu/rfc/index.rst >> +++ b/Documentation/gpu/rfc/index.rst >> @@ -19,3 +19,4 @@ host such documentation: >> .. toctree:: >> >> i915_gem_lmem.rst >> + hdr-wide-gamut.rst >
On 2021-09-15 10:36, Pekka Paalanen wrote: > On Mon, 16 Aug 2021 15:37:23 +0200 > sebastian@sebastianwick.net wrote: > >> On 2021-08-16 14:40, Harry Wentland wrote: >>> On 2021-08-16 7:10 a.m., Brian Starkey wrote: >>>> On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote: >>>>> Hello Brian, >>>>> (+Uma in cc) >>>>> >>>>> Thanks for your comments, Let me try to fill-in for Harry to keep the >>>>> design >>>>> discussion going. Please find my comments inline. >>>>> >>> >>> Thanks, Shashank. I'm back at work now. Had to cut my trip short >>> due to rising Covid cases and concern for my kids. >>> >>>>> On 8/2/2021 10:00 PM, Brian Starkey wrote: >>>>>> >>>> >>>> -- snip -- >>>> >>>>>> >>>>>> Android doesn't blend in linear space, so any API shouldn't be built >>>>>> around an assumption of linear blending. >>>>>> >>> >>> This seems incorrect but I guess ultimately the OS is in control of >>> this. If we want to allow blending in non-linear space with the new >>> API we would either need to describe the blending space or the >>> pre/post-blending gamma/de-gamma. >>> >>> Any idea if this blending behavior in Android might get changed in >>> the future? >> >> There is lots of software which blends in sRGB space and designers >> adjusted to the incorrect blending in a way that the result looks right. >> Blending in linear space would result in incorrectly looking images. > > Hi, > > yes, and I'm guilty of that too, at least by negligence. ;-) > > All Wayland compositors do it, since that's what everyone has always > been doing, more or less. It's still physically wrong, but when all you > have is sRGB and black window shadows and rounded corners as the only > use case, you don't mind. > > When you start blending with colors other than black (gradients!), when > you go to wide gamut, or especially with HDR, I believe the problems > start to become painfully obvious. > > But as long as you're stuck with sRGB only, people expect the "wrong" > result and deviating from that is a regression. > > Similarly, once Weston starts doing color management and people turn it > on and install monitor profiles, I expect to get reports saying "all > old apps look really dull now". That's how sRGB is defined to look > like, they've been looking at something else for all that time. > :-) > > Maybe we need a sRGB "gamut boost" similar to SDR luminance boost. ;-) > I wonder how other OSes deal with this change in expectations. I also have a Chromebook with a nice HDR OLED panel but an OS that doesn't really do HDR and seems to output to the full gamut (I could be wrong on this) and luminance range of the display. It makes content seem really vibrant but I'm equally worried how users will perceive it if there's ever proper color management. >>>> I still think that directly exposing the HW blocks and their >>>> capabilities is the right approach, rather than a "magic" tonemapping >>>> property. >>>> >>>> Yes, userspace would need to have a good understanding of how to use >>>> that hardware, but if the pipeline model is standardised that's the >>>> kind of thing a cross-vendor library could handle. >>>> >>> >>> One problem with cross-vendor libraries is that they might struggle >>> to really be cross-vendor when it comes to unique HW behavior. Or >>> they might pick sub-optimal configurations as they're not aware of >>> the power impact of a configuration. What's an optimal configuration >>> might differ greatly between different HW. >>> >>> We're seeing this problem with "universal" planes as well. 
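Coming back to the sRGB-space blending discussion above, the difference is easy to show with numbers; a quick sketch, plain sRGB in and out, nothing HDR about it:

#include <math.h>
#include <stdio.h>

static double srgb_decode(double v)
{
    return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

static double srgb_encode(double v)
{
    return v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
}

int main(void)
{
    double black = 0.0, white = 1.0, alpha = 0.5;

    /* "Wrong" blend, done directly on the sRGB-encoded values. */
    double encoded_blend = alpha * white + (1.0 - alpha) * black;

    /* "Right" blend, done on linear light, then re-encoded. */
    double linear_blend = alpha * srgb_decode(white) +
                          (1.0 - alpha) * srgb_decode(black);

    printf("sRGB-space blend: %.3f encoded = %.1f%% linear light\n",
           encoded_blend, 100.0 * srgb_decode(encoded_blend));
    printf("linear blend:     %.3f encoded = %.1f%% linear light\n",
           srgb_encode(linear_blend), 100.0 * linear_blend);
    return 0;
}

A 50/50 blend of black and white done on the encoded values ends up at only about 21% linear light instead of the physically correct 50%. Content and window decorations authored against the first behaviour are exactly why switching a compositor to linear blending reads as a regression.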
>> >> I'm repeating what has been said before but apparently it has to be said >> again: if a property can't be replicated exactly in a shader the >> property is useless. If your hardware is so unique that it can't give us >> the exact formula we expect you cannot expose the property. > > From desktop perspective, yes, but I'm nowadays less adamant about it. > If kernel developers are happy to maintain multiple alternative UAPIs, > then I'm not going to try to NAK that - I'll just say when I can and > cannot make use of them. Also everything is always up to some > precision, and ultimately here it is a question of whether people can > see the difference. > > Entertainment end user audience is also much more forgiving than > professional color management audience. For the latter, I'd hesitate to > use non-primary KMS planes at all. > >> Either way if the fixed KMS pixel pipeline is not sufficient to expose >> the intricacies of real hardware the right move would be to make the KMS >> pixel pipeline more dynamic, expose more hardware specifics and create a >> hardware specific user space like mesa. Moving the whole compositing >> with all its policies and decision making into the kernel is exactly the >> wrong way to go. >> >> Laurent Pinchart put this very well: >> https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html > > Thanks for digging that up, saved me the trouble. :-) > Really good summary. I can see the parallel to the camera subsystem. Maybe now is a good time for libdisplay, or a "mesa" for display HW. Btw, I fully agree on the need to have clear ground rules (like the newly formalized requirement for driver properties) to keep this from becoming an unmaintainable mess. Harry > > Thanks, > pq >
On Mon, 20 Sep 2021 20:14:50 -0400 Harry Wentland <harry.wentland@amd.com> wrote: > On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 > > Harry Wentland <harry.wentland@amd.com> wrote: > > > >> Use the new DRM RFC doc section to capture the RFC previously only > >> described in the cover letter at > >> https://patchwork.freedesktop.org/series/89506/ > >> > >> v3: > >> * Add sections on single-plane and multi-plane HDR > >> * Describe approach to define HW details vs approach to define SW intentions > >> * Link Jeremy Cline's excellent HDR summaries > >> * Outline intention behind overly verbose doc > >> * Describe FP16 use-case > >> * Clean up links > >> > >> v2: create this doc > >> > >> v1: n/a > >> > >> Signed-off-by: Harry Wentland <harry.wentland@amd.com> Hi Harry! ... > >> --- > >> Documentation/gpu/rfc/color_intentions.drawio | 1 + > >> Documentation/gpu/rfc/color_intentions.svg | 3 + > >> Documentation/gpu/rfc/colorpipe | 1 + > >> Documentation/gpu/rfc/colorpipe.svg | 3 + > >> Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ > >> Documentation/gpu/rfc/index.rst | 1 + > >> 6 files changed, 589 insertions(+) > >> create mode 100644 Documentation/gpu/rfc/color_intentions.drawio > >> create mode 100644 Documentation/gpu/rfc/color_intentions.svg > >> create mode 100644 Documentation/gpu/rfc/colorpipe > >> create mode 100644 Documentation/gpu/rfc/colorpipe.svg > >> create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst ... > >> + > >> +Here are some examples of real-life objects and their approximate > >> +luminance values: > >> + > >> + > >> +.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer > >> + > >> +.. flat-table:: > >> + :header-rows: 1 > >> + > >> + * - Object > >> + - Luminance in nits > >> + > >> + * - Fluorescent light > >> + - 10,000 > >> + > >> + * - Highlights > >> + - 1,000 - sunlight > > > > Did fluorescent and highlights get swapped here? > > > No, though at first glance it can look like that. This is pulled > from an internal doc I didn't write, but I think the intention is > to show that fluorescent lights can be up to 10,000 nits and > highlights are usually 1,000+ nits. > > I'll clarify this in v4. > > A quick google search seems to show that there are even fluorescent > lights with 46,000 nits. I guess these numbers provide a ballpark > view more than anything. Those seem quite extreme fluorescent lights, far beyond what one might find in offices I suppose? I mean, I can totally stare straight at my office fluorescent lights without any discomfort. Highlights OTOH of course depend on which highlights we're talking about, and your 1000 - sunlight range I can totally agree with. If you look at a sea or a lake on a sunny day, the reflections of Sun on the water surface are much much brighter than anything else in nature aside from Sun itself. I happened to see this myself when playing with a camera: the rest of the image can be black while the water highlights still shoot way beyond the captured dynamic range. 
> >> + > >> + * - White Objects > >> + - 250 - 1,000 > >> + > >> + * - Typical Objects > >> + - 1 - 250 > >> + > >> + * - Shadows > >> + - 0.01 - 1 > >> + > >> + * - Ultra Blacks > >> + - 0 - 0.0005 > >> + > >> + > >> +Transfer functions > >> +------------------ > >> + > >> +Traditionally we used the terms gamma and de-gamma to describe the > >> +encoding of a pixel's luminance value and the operation to transfer from > >> +a linear luminance space to the non-linear space used to encode the > >> +pixels. Since some newer encodings don't use a gamma curve I suggest > >> +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or > >> +simply as transfer function in general. > > > > Yeah, gamma could mean lots of things. If you have e.g. OETF gamma > > 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09. > > > > OETF, EOTF and OOTF are not unambiguous either, since there is always > > the question of whose function is it. > > > Yeah, I think both gamma and EO/OE/OO/EETF are all somewhat problematic. We can use them, but we have to explain which functions we are referring to. In particular, if you have a specific EOTF, then the inverse of it should be called EOTF^-1 and not OETF, to follow what I have understood of specs like BT.2100. Personally I'd take things further and talk about encoding and decoding functions when the intent is to translate between pixel values and light-linear color values rather than characterising a piece of equipment. > I tend to think about these more in terms of input and output transfer > functions but then you have the ambiguity about what your input and > output mean. I see the input TF between framebuffer and blender, > and the output TF between blender and display. Indeed, those are good explanations. > You also have the challenge that input and output transfer functions > fulfill multiple roles, e.g. an output transfer as defined above might do > linear-to-PQ conversion but could also fill the role of tone mapping > in the case where the input content spans a larger range than the > display space. I would like to avoid such conflation or use different terms. That is indeed the confusion often had I think. I would say that encoding/decoding function does not do any kind of tone-mapping. It's purely for numerical encoding to save bits on transmission or taps in a LUT. Although, for taps in a LUT optimization, it is called "shaper" instead. A shaper function (or 1D LUT) does not need to equal an encoding function. We're going to need glossary. > > Two different EOTFs are of interest in composition for display: > > - the display EOTF (since display signal is electrical) > > - the content EOTF (since content is stored in electrical encoding) > > > > > >> + > >> +The EOTF (Electro-Optical Transfer Function) describes how to transfer > >> +from an electrical signal to an optical signal. This was traditionally > >> +done by the de-gamma function. > >> + > >> +The OETF (Opto Electronic Transfer Function) describes how to transfer > >> +from an optical signal to an electronic signal. This was traditionally > >> +done by the gamma function. > >> + > >> +More generally we can name the transfer function describing the transform > >> +between scanout and blending space as the **input transfer function**, and > > > > "scanout space" makes me think of cable/signal values, not framebuffer > > values. Or, I'm not sure. I'd recommend replacing the term "scanout > > space" with something less ambiguous like framebuffer values. 
> > > Framebuffer space/values is much better than scanout space. I'd go with values. Does "space" include encoding or not? Depends on context. Thinking about: - light-linear RGB values in BT.709 color space - sRGB encoded RGB values in BT.709 color space - sRGB encoded YCbCr values in BT.709 color space Are these difference spaces, or the same space but with different encodings and color models? I have been gravitating towards "color space" being the same in all of the above: BT.709 color space. OTOH, saying "color space, encoding and model" gets awkward really fast, so sometimes it's just "color space". Framebuffer or pixel values could be, say, 10-bit integer, while (non-linear) color values would be that converted to the [0.0, 1.0] range for example. > >> +the transfer function describing the transform from blending space to the > >> +output space as **output transfer function**. > > > > You're talking about "spaces" here, but what you are actually talking > > about are value encodings, not (color) spaces. An EOTF or OETF is not > > meant to modify the color space. > > > > When talking about blending, what you're actually interested in is > > linear vs. non-linear color value encoding. This matches your talk > > about EOTF and OETF, although you need to be careful to specify which > > EOTF and OETF you mean. For blending, color values need to be linear in > > light intensity, and the inverse of the E-to-O mapping before blending > > is exactly the same as the O-to-E mapping after blending. Otherwise you > > would alter even opaque pixels. > > > I struggle a bit with finding the right term to talk about color value > encoding in general. Concrete examples can be PQ-encoded, Gamma 2.2, or > linearly encoded spaces but I was grasping for a more general term; > something that could potentially include TFs that also tone-map. I would very much prefer to keep tone-mapping as a separate conceptual object, but I think I see where you are coming from: the API has a single slot for the combined coding/tone-mapping function. Is "combined coding/tone-mapping function" too long to type? :-) > Interestingly, the Canvas API changes presented by Christopher Cameron > also seem to use the new colorSpace property to deal with both color > space, as well as EOTF. > > https://www.youtube.com/watch?v=fHbLbVacYw4 That may be practical from API point of view, but conceptually I find it confusing. I think it is easier to think through the theory with completely independent color space and encoding concepts, and then it will be easy to understand that in an API you just pick specific pairs of them since those are enough for most use cases. If you start from the API concepts, try to work towards the theory, and then you are presented a display whose EOTF is measured and does not match any of the standard ones present in the API, I think you would struggle to make that display work until you realise that color space and encoding can be decoupled. A bit like how YCbCr is not a color space but a color model you can apply to any RGB color space, and you can even pick the encoding function separately if you want to. Also mind that tone mapping is completely separate to all the above. The above describe what colors pixels represent on one device (or in an image). Tone mapping is an operation that adapts an image from one device to another device. Gamut mapping is as well. So describing a color space, color model, and encoding is one thing. 
Adapting (converting) an image from one such to another is a whole different thing. However, when you have hardware pixel pipeline, you tend to program the total transformation from source to destination, where all those different unrelated or orthogonal concepts have been combined and baked in, usually in such a way that you cannot separate them anymore. Our plans for Weston internals follow the same: you have descriptions of source and destination pixels, you have your rendering intent that affects how things like gamut mapping and tone mapping work, and then you compute the two transformations from all those: the transformation from source to blending space, and from blending space to output (monitor cable values). In the Weston design the renderer KMS framebuffer will hold either blending space values or cable values. Btw. another thing is color space conversion vs. gamut and tone mapping. These are also separate concepts. You can start with BT.2020 color space color values, and convert those to sRGB color values. A pure color space conversion can result in color values outside of the sRGB value range, because BT.2020 is a bigger color space. If you clip those out-of-range values into range, then you are doing gamut (and tone?) mapping in my opinion. ... > >> +Displays and Tonemapping > >> +------------------------ > >> + > >> +External displays are able to do their own tone and color mapping, based > >> +on the mastering luminance, color primaries, and white space defined in > >> +the HDR metadata. > > > > HLG does things differently wrt. metadata and tone-mapping than PQ. > > > As mentioned above I had some time to watch the HLG presentation and that > indeed has interesting implications. With HLG we also have relative luminance > HDR content. One challenge is How to tone-map HLG content alongside SDR (sRGB) > content and PQ content. > > I think ultimately this means that we can't rely on display tonemapping when > we are dealing with mixed content on the screen. In that case we would probably > want to output to the display in the EDID-referred space and tone-map all incoming > buffers to the EDID-referred space. That's exactly the plan with Weston. The display signal space has three options according to EDID/HDMI: - HDR with traditional gamma (which I suppose means the relative [0.0, 1.0] range with either sRGB or 2.2 gamma encoding and using the monitor's native gamut) - BT.2020 PQ - HLG (BT.2020?) These are what the monitor cable must carry, so these are what the CRTC must produce. I suppose one could pick the blending space to be something else, but in Weston the plan is to use cable signal as the blending space, just linearised for light and limited by the monitors gamut and dynamic range. That keeps the post-blend operations as simple as possible, meaning we are likely to be able to offload that to KMS and do not need another renderer pass for that. One thing I realised yesterday is that HLG displays are much better defined than PQ displays, because HLG defines what OOTF the display must implement. In a PQ system, the signal carries the full 10k nits range, and then the monitor must do vendor magic to display it. That's for tone mapping, not sure if HLG has an advantage in gamut mapping as well. For a PQ display, all we can do is hope that if we tell the monitor via HDR static metadata that our content will never exceed monitor capabilities then the monitor doesn't mangle our images too bad. > I think the doc needs a lot more pictures. 
I wonder if I can do that without > polluting git with large files. > ... > >> +Multi-plane > >> +----------- > >> + > >> +In multi-plane configurations we need to solve the problem of blending > >> +HDR and SDR content. This blending should be done in linear space and > >> +therefore requires framebuffer data that is presented in linear space > >> +or a way to convert non-linear data to linear space. Additionally > >> +we need a way to define the luminance of any SDR content in relation > >> +to the HDR content. > >> + > >> +In order to present framebuffer data in linear space without losing a > >> +lot of precision it needs to be presented using 16 bpc precision. > > > > Integer or floating-point? > > > Floating point. Fixed in v4. > > I doubt integer would work since we'd lose too much precision in the dark > areas. Though, maybe 16-bit would let us map those well enough? I don't know > for sure. Either way, I think anybody doing linear is using FP16. That's a safe assumption. Integer precision in the dark end also depends on how high the bright end goes. With floating point that seems like a non-issue. What I think is "common knowledge" by now is that 8 bits is not enough for a linear channel. However, 10 bits integer might be enough for a linear channel in SDR. > > > > >> + > >> + > >> +Defining HW Details > >> +------------------- > >> + > >> +One way to take full advantage of modern HW's color pipelines is by > >> +defining a "generic" pipeline that matches all capable HW. Something > >> +like this, which I took `from Uma Shankar`_ and expanded on: > >> + > >> +.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/ > >> + > >> +.. kernel-figure:: colorpipe.svg > > > > Btw. there will be interesting issues with alpha-premult, filtering, > > and linearisation if your planes have alpha channels. That's before > > HDR is even considered. > > > Could you expand on this a bit? First you might want to read http://ssp.impulsetrain.com/gamma-premult.html and then ask, which way does software and hardware do and expect alpha premultiplication. I don't actually know. I have always assumed the intuitive way for compositing in non-linear values before I understood what light-linear means, which means I have always assumed the *wrong* way of doing premult. The next topic is, when you do filtering to sample from a texture that has an alpha channel, what should the values be from which you compute the weighted average or convolution? If I remember right, the answer is that they must be light-linear *and* premultiplied. So there is exactly one way that is correct, and all other orders of operations are more or less incorrect. > >> + > >> +I intentionally put de-Gamma, and Gamma in parentheses in my graph > >> +as they describe the intention of the block but not necessarily a > >> +strict definition of how a userspace implementation is required to > >> +use them. > >> + > >> +De-Gamma and Gamma blocks are named LUT, but they could be non-programmable > >> +LUTs in some HW implementations with no programmable LUT available. See > >> +the definitions for AMD's `latest dGPU generation`_ as an example. > >> + > >> +.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c?h=v5.13#n2586 > >> + > >> +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" > >> +as we generally don't want to re-apply gamma before blending, or do > >> +de-gamma post blending. 
These blocks tend generally to be intended for > >> +tonemapping purposes. > > > > Right. > > > >> + > >> +Tonemapping in this case could be a simple nits value or `EDR`_ to describe > >> +how to scale the :ref:`SDR luminance`. > > > > I do wonder how that will turn out in the end... but on Friday there > > will be HDR Compositing and Tone-mapping live Q&A session: > > https://www.w3.org/Graphics/Color/Workshop/talks.html#compos > > > I didn't manage to join the compositing and tone-mapping live Q&A? Did > anything interesting emerge from that? I guess for me it wasn't mind blowing really, since I've been struggling to understand things for a good while now, and apparently I've actually learnt something. :-) It was good (or bad?) to hear that much of the compositing challenges were still unsolved, and we're definitely not alone trying to find answers. A much more interesting Q&A session was yesterday on Color creation and manipulation, where the topics were even more to our scope, perhaps surprisingly. I got a grasp of how mindbogglingly complex the ICCmax specification is. It is so complex, that just recently they have started publishing a series of specifications that tell which parts of ICCmax one should implement or support for specific common use cases. Hopefully the emergence of those "Interoperability Conformance Specifications" gives rise to at least partial FOSS implementations. If you want to do gamut reduction, OKLab color space seems like the best place to do it. It's not a specific gamut reduction algorithm, but it's a good space to work in, whatever you want to do. The Krita presentation opened up practical issues with HDR and interoperability, and there I was able to ask about PQ and HLG differences and learn that HLG displays are better defined. Even EDR was also talked about briefly. As for take-aways... sorry, my mind hasn't returned to me yet. We will have to wait for the Q&A session transcripts to be published. Yes, there are supposed to be transcripts! I didn't manage to ask how EDR is handling differences in black levels. EDR obviously caters for the peak whites, but I don't know about low blacks. They did give us a link: https://developer.apple.com/videos/play/wwdc2021/10161/ I haven't watched it yet. > I've watched Timo Kunkel's talk and it's been very eye opening. He does > a great job of highlighting the challenges of compositing HDR content. > > >> + > >> +Tonemapping could also include the ability to use a 3D LUT which might be > >> +accompanied by a 1D shaper LUT. The shaper LUT is required in order to > >> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates > >> +in perceptual (non-linear) space, so as to evenly spread the limited > >> +entries evenly across the perceived space. > >> + > >> +.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 > >> + > >> +Creating a model that is flexible enough to define color pipelines for > >> +a wide variety of HW is challenging, though not impossible. Implementing > >> +support for such a flexible definition in userspace, though, amounts > >> +to essentially writing color pipeline drivers for each HW. > > > > My thinking right now is that userspace has it's own pipeline model > > with the elements it must have. Then it attempts to map that pipeline > > to what elements the KMS pipeline happens to expose. If there is a > > mapping, good. If not, fall back to shaders on GPU. 
> > To help that succeed more often, I'm using the current KMS abstract > > pipeline as a guide in designing the Weston internal color pipeline. > > > I feel I should know, but is this pipeline documented? Is it merely, the > plane > crtc > connector model, or does it go beyond that? The KMS pixel pipeline model right now is just a bunch of properties in the CRTC. These properties allude to the degamma LUT -> CTM -> gamma LUT pipeline model, post-blending. In Weston, we take a very similar approach. A color transformation (which maps to a single rendering pass, or the CRTC KMS properties, or the future per-plane KMS properties) is: color model change -> pre-curve -> color mapping -> post-curve - Color model change is more or less for YCbCr->RGB conversion. - Pre- and post-curves are essentially per-channel 1D LUTs or enumerated functions. - Color mapping is a 3D LUT, a matrix, or whatever else is needed. You can see a similar structure to the KMS degamma->CTM->gamma, but with options to plug in other defined operations in the slots so that at least the GL-renderer can be flexible enough for everything, even if it doesn't match KMS capabilities. Each of the slots can also be identity (which even gets compile out of the GL shader). Weston has one color transformation per window to go from content to blending space, and another color transformation to go from blending to output (cable) space. It's not really documented, as half of that code, and more really, is still waiting for review or to be written. Oh, I did have some plans written down here: https://gitlab.freedesktop.org/wayland/weston/-/issues/467#note_864054 Pre-curve for instance could be a combination of decoding to linear light and a shaper for the 3D LUT coming next. That's why we don't call them gamma or EOTF, that would be too limiting. (Using a shaper may help to keep the 3D LUT size reasonable - I suppose very much like those multi-segmented LUTs.) ... > > Now, someone might say that the Wayland protocol design for HDR aims to > > be descriptive and not prescriptive, so why should KMS UAPI be > > different? The reason is explained above: *some* KMS clients may switch > > frame by frame between KMS and shaders, but Wayland clients pick one > > path and stick to it. Wayland clients have no reason that I can imagine > > to switch arbitrarily in flight. > > > I'm a bit confused about this paragraph. Wouldn't the Wayland compositor > decide whether to use a KMS plane or shader and not the client? What I meant is, Wayland clients will not randomly switch between doing color transformations themselves and letting the compositor do it. They should be able to just pick one path and stick to it as long as the window is up. > >> + > >> +We would like to solicit feedback and encourage discussion around the > >> +merits and weaknesses of these approaches. This question is at the core > >> +of defining a good API and we'd like to get it right. > >> + > >> + > >> +Input and Output Transfer functions > >> +----------------------------------- > >> + > >> +We define an input transfer function on drm_plane to describe the > >> +transform from framebuffer to blending space. > >> + > >> +We define an output transfer function on drm_crtc to describe the > >> +transform from blending space to display space. > >> + > > > > Here is again the terminology problem between transfer function and > > (color) space. > > > Color value encoding? Or luminance space? Or maybe there's a different term > altogether to describe this? 
The problem in the statement is that it implies a transfer function can do color space conversions or color space mapping. In Weston we call it "color transformation" in an attempt to include everything. The input function must include the possibility for color space mapping because you may have different planes with different content color spaces, and blending requires converting them all into one common color space. Depending on what you choose as your blending space, the output function could be just the display EOTF or something more complicated. ... > > It's worth to note that while PQ is absolute in luminance (providing > > cd/m² values), everything else here is relative for both SDR and HDR. > > You cannot blend content in PQ with content in something else together, > > until you practically define the absolute luminance for all non-PQ > > content or vice versa. > > > > A further complication is that you could have different > > relative-luminance transfer functions, meaning that the (absolute) > > luminance they are relative to varies. The obvious case is blending SDR > > content with HDR content when both have relative-luminance transfer > > function. > > > Good points. It sounds like we would need something akin to EDR (or > max-SDR nits) for any relative-luminance TF, i.e. a way to arbitrarily > scale the luminance of the respective plane. Right. However, in the past few days, I've heard statements that scaling luminance linearly will look not so good. What you need to do is to follow the human visual system (HVS) characteristic and use a gamma function. (This is not about non-linear encoding, just that the function happens to be similar - which is not totally a coincidence, since also non-linear encoding is meant to follow the HVS[*].) HLG OOTF does exactly this IIUC. Naturally, these statements came from Andrew Cotton as I recall. * Or actually, the non-linear encoding was meant to follow cathode-ray tube characteristic, which by pure coincidence happens to roughly agree with HVS. > > Then you have HLG which is more like scene-referred than > > display-referred, but that might be solved with the parameter I > > mentioned, I'm not quite sure. > > > > PQ is said to be display-referred, but it's usually referred to > > someone else's display than yours, which means it needs the HDR > > metadata to be able to tone-map suitably to your display. This seems to > > be a similar problem as with signal gamut vs. device gamut. > > > > The traditional relative-luminance transfer functions, well, the > > content implied by them, is display-referred when it arrived at KMS or > > compositor level. There the question of "whose display" doesn't matter > > much because it's SDR and narrow gamut, and we probably don't even > > notice when we see an image wrong. With HDR the mismatch might be > > noticeable. > > > > > >> + > >> + > >> +Describing SDR Luminance > >> +------------------------------ > >> + > >> +Since many displays do no correctly advertise the HDR white level we > >> +propose to define the SDR white level in nits. > > > > This means that even if you had no content using PQ, you still need to > > define the absolute luminance for all the (HDR) relative-luminance > > transfer functions. > > > > There probably needs to be something to relate everything to a single, > > relative or absolute, luminance range. That is necessary for any > > composition (KMS and software) since the output is a single image. > > > > Is it better to go with relative or absolute metrics? 
Right now I would > > tend to say relative, because relative is unitless. Absolute values are > > numerically equivalent, but they might not have anything to do with > > actual physical measurements, making them actually relative. This > > happens when your monitor does not support PQ mode or does tone-mapping > > to your image, for instance. > > > It sounds like PQ is the outlier here in defining luminance in absolute > units. Though it's also currently the most commonly used TF for HDR > content. Yes. "A completely new way", I recall reading somewhere advocating PQ. :-) You can't switch from PQ to HLG by only replacing the TF, mind. Or so they say... I suppose converting from one to the other requires making decisions on the way. At least you need to know what display dynamic range you are targeting I think. > Wouldn't you use the absolute luminance definition for PQ if you relate > everything to a relative range? > > Would it make sense to relate everything to a common output luminance > range? If that output is PQ then an input PQ buffer is still output > as PQ and relative-luminance buffers can be scaled. > > Would that scaling (EDR or similar) be different for SDR (sRGB) content > vs other HDR relative-luminance content? I think we need to know the target display, especially the dynamic range of it. Then we know what HLG OOTF it should use. From PQ we need at least the HDR static metadata to know the actual range, as assuming the full 10k nit range being meaningful could seriously lose highlights or something I guess. Everything is relative to the target display I believe, even PQ since displaying PQ as-is only works on the mastering display. Since PQ content comes with some metadata, we need PQ-to-PQ conversions for PQ display, assuming we don't just pass through the metadata to the display. Maybe the HLG OOTF could be used for the tone mapping of PQ-to-PQ... I think both PQ and HLG have different standards written for how to map SDR to them. I don't remember which ITU-R or SMPTE spec those might be, but I suppose BT.2100 could be a starting point searching for them. ... > Initially I was hoping to find a quick way to allow pushing video > straight from decoder through a KMS plane to the output. Increasingly > I'm realizing that this is probably not going to work well for a general > desktop compositor, hence the statement here to pretty much say the > Wayland plan is the correct plan for this: single-plane HDR (with shader > composition) first, then KMS offloading for power saving. > > On some level I'm still interested in the direct decoder-to-KMS-to-display > path but am afraid we won't get the API right if we don't deal with the general > desktop compositor use-case first. I am very happy to hear that. :-) > Apologies, again, if some of my response is a bit incoherent. I've been writing > the responses over Friday and today. It wasn't at all! Thanks, pq
On 2021-09-21 09:31, Pekka Paalanen wrote: > On Mon, 20 Sep 2021 20:14:50 -0400 > Harry Wentland <harry.wentland@amd.com> wrote: > >> On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 >>> Harry Wentland <harry.wentland@amd.com> wrote: >>> >>>> Use the new DRM RFC doc section to capture the RFC previously only >>>> described in the cover letter at >>>> https://patchwork.freedesktop.org/series/89506/ >>>> >>>> v3: >>>> * Add sections on single-plane and multi-plane HDR >>>> * Describe approach to define HW details vs approach to define SW intentions >>>> * Link Jeremy Cline's excellent HDR summaries >>>> * Outline intention behind overly verbose doc >>>> * Describe FP16 use-case >>>> * Clean up links >>>> >>>> v2: create this doc >>>> >>>> v1: n/a >>>> >>>> Signed-off-by: Harry Wentland <harry.wentland@amd.com> > > Hi Harry! > > ... > >>>> --- >>>> Documentation/gpu/rfc/color_intentions.drawio | 1 + >>>> Documentation/gpu/rfc/color_intentions.svg | 3 + >>>> Documentation/gpu/rfc/colorpipe | 1 + >>>> Documentation/gpu/rfc/colorpipe.svg | 3 + >>>> Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ >>>> Documentation/gpu/rfc/index.rst | 1 + >>>> 6 files changed, 589 insertions(+) >>>> create mode 100644 Documentation/gpu/rfc/color_intentions.drawio >>>> create mode 100644 Documentation/gpu/rfc/color_intentions.svg >>>> create mode 100644 Documentation/gpu/rfc/colorpipe >>>> create mode 100644 Documentation/gpu/rfc/colorpipe.svg >>>> create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst > > ... > >>>> + >>>> +Here are some examples of real-life objects and their approximate >>>> +luminance values: >>>> + >>>> + >>>> +.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer >>>> + >>>> +.. flat-table:: >>>> + :header-rows: 1 >>>> + >>>> + * - Object >>>> + - Luminance in nits >>>> + >>>> + * - Fluorescent light >>>> + - 10,000 >>>> + >>>> + * - Highlights >>>> + - 1,000 - sunlight >>> >>> Did fluorescent and highlights get swapped here? >>> >> No, though at first glance it can look like that. This is pulled >> from an internal doc I didn't write, but I think the intention is >> to show that fluorescent lights can be up to 10,000 nits and >> highlights are usually 1,000+ nits. >> >> I'll clarify this in v4. >> >> A quick google search seems to show that there are even fluorescent >> lights with 46,000 nits. I guess these numbers provide a ballpark >> view more than anything. > > Those seem quite extreme fluorescent lights, far beyond what one might > find in offices I suppose? > > I mean, I can totally stare straight at my office fluorescent lights > without any discomfort. > > Highlights OTOH of course depend on which highlights we're talking > about, and your 1000 - sunlight range I can totally agree with. > > If you look at a sea or a lake on a sunny day, the reflections of Sun > on the water surface are much much brighter than anything else in > nature aside from Sun itself. I happened to see this myself when > playing with a camera: the rest of the image can be black while the > water highlights still shoot way beyond the captured dynamic range. 
> >>>> + >>>> + * - White Objects >>>> + - 250 - 1,000 >>>> + >>>> + * - Typical Objects >>>> + - 1 - 250 >>>> + >>>> + * - Shadows >>>> + - 0.01 - 1 >>>> + >>>> + * - Ultra Blacks >>>> + - 0 - 0.0005 >>>> + >>>> + >>>> +Transfer functions >>>> +------------------ >>>> + >>>> +Traditionally we used the terms gamma and de-gamma to describe the >>>> +encoding of a pixel's luminance value and the operation to transfer from >>>> +a linear luminance space to the non-linear space used to encode the >>>> +pixels. Since some newer encodings don't use a gamma curve I suggest >>>> +we refer to non-linear encodings using the terms `EOTF, and OETF`_, or >>>> +simply as transfer function in general. >>> >>> Yeah, gamma could mean lots of things. If you have e.g. OETF gamma >>> 1/2.2 and EOTF gamma 2.4, the result is OOTF gamma 1.09. >>> >>> OETF, EOTF and OOTF are not unambiguous either, since there is always >>> the question of whose function is it. >>> >> Yeah, I think both gamma and EO/OE/OO/EETF are all somewhat problematic. > > We can use them, but we have to explain which functions we are > referring to. In particular, if you have a specific EOTF, then the > inverse of it should be called EOTF^-1 and not OETF, to follow what I > have understood of specs like BT.2100. > I should probably add a paragraph about OOTF. The apple talk you linked below uses OOTF to refer to tone-mapping. > Personally I'd take things further and talk about encoding and decoding > functions when the intent is to translate between pixel values and > light-linear color values rather than characterising a piece of > equipment. > >> I tend to think about these more in terms of input and output transfer >> functions but then you have the ambiguity about what your input and >> output mean. I see the input TF between framebuffer and blender, >> and the output TF between blender and display. > > Indeed, those are good explanations. > >> You also have the challenge that input and output transfer functions >> fulfill multiple roles, e.g. an output transfer as defined above might do >> linear-to-PQ conversion but could also fill the role of tone mapping >> in the case where the input content spans a larger range than the >> display space. > > I would like to avoid such conflation or use different terms. That is > indeed the confusion often had I think. > > I would say that encoding/decoding function does not do any kind of > tone-mapping. It's purely for numerical encoding to save bits on > transmission or taps in a LUT. Although, for taps in a LUT > optimization, it is called "shaper" instead. A shaper function (or 1D > LUT) does not need to equal an encoding function. > > We're going to need glossary. > Ack >>> Two different EOTFs are of interest in composition for display: >>> - the display EOTF (since display signal is electrical) >>> - the content EOTF (since content is stored in electrical encoding) >>> >>> >>>> + >>>> +The EOTF (Electro-Optical Transfer Function) describes how to transfer >>>> +from an electrical signal to an optical signal. This was traditionally >>>> +done by the de-gamma function. >>>> + >>>> +The OETF (Opto Electronic Transfer Function) describes how to transfer >>>> +from an optical signal to an electronic signal. This was traditionally >>>> +done by the gamma function. 
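As a concrete illustration of the EOTF/OETF pair described in the quoted text, here is a pure power-law 2.2 gamma decode/encode pair operating on normalized values (deliberately ignoring the piecewise sRGB curve); it is only meant to show the two directions of the transfer, not any particular display's behaviour:

#include <math.h>

/* "Traditional gamma": decode and its exact inverse, normalized [0.0, 1.0]. */
static double gamma22_decode(double ev)     /* electrical -> linear light */
{
	return pow(ev, 2.2);
}

static double gamma22_encode(double linear) /* linear light -> electrical */
{
	return pow(linear, 1.0 / 2.2);
}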
>>>> + >>>> +More generally we can name the transfer function describing the transform >>>> +between scanout and blending space as the **input transfer function**, and >>> >>> "scanout space" makes me think of cable/signal values, not framebuffer >>> values. Or, I'm not sure. I'd recommend replacing the term "scanout >>> space" with something less ambiguous like framebuffer values. >>> >> Framebuffer space/values is much better than scanout space. > > I'd go with values. Does "space" include encoding or not? Depends on > context. Thinking about: > > - light-linear RGB values in BT.709 color space > - sRGB encoded RGB values in BT.709 color space > - sRGB encoded YCbCr values in BT.709 color space > > Are these difference spaces, or the same space but with different > encodings and color models? > > I have been gravitating towards "color space" being the same in all of > the above: BT.709 color space. OTOH, saying "color space, encoding and > model" gets awkward really fast, so sometimes it's just "color space". > > Framebuffer or pixel values could be, say, 10-bit integer, while > (non-linear) color values would be that converted to the [0.0, 1.0] > range for example. > I think we need to talk about what 1.0 means. Apple's EDR defines 1.0 as "reference white" or in other words the max SDR white. That definition might change depending on the content type. >>>> +the transfer function describing the transform from blending space to the >>>> +output space as **output transfer function**. >>> >>> You're talking about "spaces" here, but what you are actually talking >>> about are value encodings, not (color) spaces. An EOTF or OETF is not >>> meant to modify the color space. >>> >>> When talking about blending, what you're actually interested in is >>> linear vs. non-linear color value encoding. This matches your talk >>> about EOTF and OETF, although you need to be careful to specify which >>> EOTF and OETF you mean. For blending, color values need to be linear in >>> light intensity, and the inverse of the E-to-O mapping before blending >>> is exactly the same as the O-to-E mapping after blending. Otherwise you >>> would alter even opaque pixels. >>> >> I struggle a bit with finding the right term to talk about color value >> encoding in general. Concrete examples can be PQ-encoded, Gamma 2.2, or >> linearly encoded spaces but I was grasping for a more general term; >> something that could potentially include TFs that also tone-map. > > I would very much prefer to keep tone-mapping as a separate conceptual > object, but I think I see where you are coming from: the API has a > single slot for the combined coding/tone-mapping function. > > Is "combined coding/tone-mapping function" too long to type? :-) > >> Interestingly, the Canvas API changes presented by Christopher Cameron >> also seem to use the new colorSpace property to deal with both color >> space, as well as EOTF. >> >> https://www.youtube.com/watch?v=fHbLbVacYw4 > > That may be practical from API point of view, but conceptually I find > it confusing. I think it is easier to think through the theory with > completely independent color space and encoding concepts, and then it > will be easy to understand that in an API you just pick specific pairs > of them since those are enough for most use cases. 
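A minimal sketch of an EDR-style representation as discussed above, where 1.0 means the SDR reference white. The sdr_white_nits and peak_nits parameters are assumptions for illustration only, not existing KMS properties; 203 cd/m^2 is the BT.2408 reference white, 80 cd/m^2 the nominal sRGB white:

/* Illustrative only; not a real KMS or Wayland interface. */
struct edr_params {
	double sdr_white_nits; /* e.g. 203.0 (BT.2408) or 80.0 (sRGB) */
	double peak_nits;      /* display peak luminance */
};

/* Absolute luminance -> EDR value; 1.0 == SDR reference white. */
static double nits_to_edr(const struct edr_params *p, double nits)
{
	return nits / p->sdr_white_nits;
}

/* Headroom above SDR white that the display can actually show. */
static double edr_headroom(const struct edr_params *p)
{
	return p->peak_nits / p->sdr_white_nits;
}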
> > If you start from the API concepts, try to work towards the theory, and > then you are presented a display whose EOTF is measured and does not > match any of the standard ones present in the API, I think you would > struggle to make that display work until you realise that color space > and encoding can be decoupled. > > A bit like how YCbCr is not a color space but a color model you can > apply to any RGB color space, and you can even pick the encoding > function separately if you want to. > > Also mind that tone mapping is completely separate to all the above. > The above describe what colors pixels represent on one device (or in an > image). Tone mapping is an operation that adapts an image from one > device to another device. Gamut mapping is as well. > > So describing a color space, color model, and encoding is one thing. > Adapting (converting) an image from one such to another is a whole > different thing. However, when you have hardware pixel pipeline, you > tend to program the total transformation from source to destination, > where all those different unrelated or orthogonal concepts have been > combined and baked in, usually in such a way that you cannot separate > them anymore. > > Our plans for Weston internals follow the same: you have descriptions > of source and destination pixels, you have your rendering intent that > affects how things like gamut mapping and tone mapping work, and then > you compute the two transformations from all those: the transformation > from source to blending space, and from blending space to output > (monitor cable values). In the Weston design the renderer KMS > framebuffer will hold either blending space values or cable values. > > Btw. another thing is color space conversion vs. gamut and tone > mapping. These are also separate concepts. You can start with BT.2020 > color space color values, and convert those to sRGB color values. A > pure color space conversion can result in color values outside of the > sRGB value range, because BT.2020 is a bigger color space. If you clip > those out-of-range values into range, then you are doing gamut (and > tone?) mapping in my opinion. > > > ... > >>>> +Displays and Tonemapping >>>> +------------------------ >>>> + >>>> +External displays are able to do their own tone and color mapping, based >>>> +on the mastering luminance, color primaries, and white space defined in >>>> +the HDR metadata. >>> >>> HLG does things differently wrt. metadata and tone-mapping than PQ. >>> >> As mentioned above I had some time to watch the HLG presentation and that >> indeed has interesting implications. With HLG we also have relative luminance >> HDR content. One challenge is How to tone-map HLG content alongside SDR (sRGB) >> content and PQ content. >> >> I think ultimately this means that we can't rely on display tonemapping when >> we are dealing with mixed content on the screen. In that case we would probably >> want to output to the display in the EDID-referred space and tone-map all incoming >> buffers to the EDID-referred space. > > That's exactly the plan with Weston. > > The display signal space has three options according to EDID/HDMI: > > - HDR with traditional gamma (which I suppose means the relative [0.0, > 1.0] range with either sRGB or 2.2 gamma encoding and using the > monitor's native gamut) > > - BT.2020 PQ > > - HLG (BT.2020?) > > These are what the monitor cable must carry, so these are what the CRTC > must produce. 
I suppose one could pick the blending space to be > something else, but in Weston the plan is to use cable signal as the > blending space, just linearised for light and limited by the monitors > gamut and dynamic range. That keeps the post-blend operations as simple > as possible, meaning we are likely to be able to offload that to KMS > and do not need another renderer pass for that. > > One thing I realised yesterday is that HLG displays are much better > defined than PQ displays, because HLG defines what OOTF the display > must implement. In a PQ system, the signal carries the full 10k nits > range, and then the monitor must do vendor magic to display it. That's > for tone mapping, not sure if HLG has an advantage in gamut mapping as > well. > Doesn't the metadata describe the max content white? So even if the signal carries the full 10k nits the actual max luminance of the content should be incoded as part of the metadata. > For a PQ display, all we can do is hope that if we tell the monitor via > HDR static metadata that our content will never exceed monitor > capabilities then the monitor doesn't mangle our images too bad. > >> I think the doc needs a lot more pictures. I wonder if I can do that without >> polluting git with large files. >> > > ... > >>>> +Multi-plane >>>> +----------- >>>> + >>>> +In multi-plane configurations we need to solve the problem of blending >>>> +HDR and SDR content. This blending should be done in linear space and >>>> +therefore requires framebuffer data that is presented in linear space >>>> +or a way to convert non-linear data to linear space. Additionally >>>> +we need a way to define the luminance of any SDR content in relation >>>> +to the HDR content. >>>> + >>>> +In order to present framebuffer data in linear space without losing a >>>> +lot of precision it needs to be presented using 16 bpc precision. >>> >>> Integer or floating-point? >>> >> Floating point. Fixed in v4. >> >> I doubt integer would work since we'd lose too much precision in the dark >> areas. Though, maybe 16-bit would let us map those well enough? I don't know >> for sure. Either way, I think anybody doing linear is using FP16. > > That's a safe assumption. Integer precision in the dark end also depends > on how high the bright end goes. With floating point that seems like a > non-issue. > > What I think is "common knowledge" by now is that 8 bits is not enough > for a linear channel. However, 10 bits integer might be enough for a > linear channel in SDR. > >> >>> >>>> + >>>> + >>>> +Defining HW Details >>>> +------------------- >>>> + >>>> +One way to take full advantage of modern HW's color pipelines is by >>>> +defining a "generic" pipeline that matches all capable HW. Something >>>> +like this, which I took `from Uma Shankar`_ and expanded on: >>>> + >>>> +.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/ >>>> + >>>> +.. kernel-figure:: colorpipe.svg >>> >>> Btw. there will be interesting issues with alpha-premult, filtering, >>> and linearisation if your planes have alpha channels. That's before >>> HDR is even considered. >>> >> Could you expand on this a bit? > > First you might want to read > http://ssp.impulsetrain.com/gamma-premult.html > and then ask, which way does software and hardware do and expect alpha > premultiplication. I don't actually know. I have always assumed the > intuitive way for compositing in non-linear values before I understood > what light-linear means, which means I have always assumed the *wrong* > way of doing premult. 
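A minimal sketch of the order of operations the linked article argues for: decode to linear light first, then do source-over blending with premultiplied alpha (single channel shown; the helper names are made up):

#include <math.h>

/* sRGB decode: electrical [0.0, 1.0] -> linear light [0.0, 1.0]. */
static double srgb_decode(double ev)
{
	return ev <= 0.04045 ? ev / 12.92 : pow((ev + 0.055) / 1.055, 2.4);
}

/* src_premult and dst are linear-light; src is already premultiplied. */
static double blend_over_premult(double src_premult, double src_alpha,
				 double dst)
{
	return src_premult + (1.0 - src_alpha) * dst;
}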
> > The next topic is, when you do filtering to sample from a texture that > has an alpha channel, what should the values be from which you compute > the weighted average or convolution? If I remember right, the answer is > that they must be light-linear *and* premultiplied. > > So there is exactly one way that is correct, and all other orders of > operations are more or less incorrect. > > >>>> + >>>> +I intentionally put de-Gamma, and Gamma in parentheses in my graph >>>> +as they describe the intention of the block but not necessarily a >>>> +strict definition of how a userspace implementation is required to >>>> +use them. >>>> + >>>> +De-Gamma and Gamma blocks are named LUT, but they could be non-programmable >>>> +LUTs in some HW implementations with no programmable LUT available. See >>>> +the definitions for AMD's `latest dGPU generation`_ as an example. >>>> + >>>> +.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c?h=v5.13#n2586 >>>> + >>>> +I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping" >>>> +as we generally don't want to re-apply gamma before blending, or do >>>> +de-gamma post blending. These blocks tend generally to be intended for >>>> +tonemapping purposes. >>> >>> Right. >>> >>>> + >>>> +Tonemapping in this case could be a simple nits value or `EDR`_ to describe >>>> +how to scale the :ref:`SDR luminance`. >>> >>> I do wonder how that will turn out in the end... but on Friday there >>> will be HDR Compositing and Tone-mapping live Q&A session: >>> https://www.w3.org/Graphics/Color/Workshop/talks.html#compos >>> >> I didn't manage to join the compositing and tone-mapping live Q&A? Did >> anything interesting emerge from that? > > I guess for me it wasn't mind blowing really, since I've been > struggling to understand things for a good while now, and apparently > I've actually learnt something. :-) > > It was good (or bad?) to hear that much of the compositing challenges > were still unsolved, and we're definitely not alone trying to find > answers. > > A much more interesting Q&A session was yesterday on Color creation and > manipulation, where the topics were even more to our scope, perhaps > surprisingly. > > I got a grasp of how mindbogglingly complex the ICCmax specification > is. It is so complex, that just recently they have started publishing a > series of specifications that tell which parts of ICCmax one should > implement or support for specific common use cases. Hopefully the > emergence of those "Interoperability Conformance Specifications" gives > rise to at least partial FOSS implementations. > > If you want to do gamut reduction, OKLab color space seems like the > best place to do it. It's not a specific gamut reduction algorithm, but > it's a good space to work in, whatever you want to do. > > The Krita presentation opened up practical issues with HDR and > interoperability, and there I was able to ask about PQ and HLG > differences and learn that HLG displays are better defined. > > Even EDR was also talked about briefly. > > As for take-aways... sorry, my mind hasn't returned to me yet. We will > have to wait for the Q&A session transcripts to be published. Yes, > there are supposed to be transcripts! > > I didn't manage to ask how EDR is handling differences in black levels. > EDR obviously caters for the peak whites, but I don't know about low > blacks. 
They did give us a link: > https://developer.apple.com/videos/play/wwdc2021/10161/ > > I haven't watched it yet. > I just went through it. It's a worthwile watch, though contains a bunch of corporate spin. It sounds like EDR describes not just the mapping of SDR content to HDR outputs but goes beyond that and is the term used to describe the whole technology that allows rendering of content with different color spaces and in different pixel value representations. It looks like Apple has the composition of temporally & spatially mixed media figured out. They don't seem to do proper tone-mapping in most cases, though. They talk about clipping highlights and seem to allude to the fact that tone-mapping (or soft-clipping) is an application's responsibility. Their color value representation represents SDR as values between 0.0 and 1.0. Any value above 1.0 is an "HDR" value and can get clipped. There is some good bits in the "best practices" section of the talk, like a mechanism of converting PQ content to EDR. >> I've watched Timo Kunkel's talk and it's been very eye opening. He does >> a great job of highlighting the challenges of compositing HDR content. >> >>>> + >>>> +Tonemapping could also include the ability to use a 3D LUT which might be >>>> +accompanied by a 1D shaper LUT. The shaper LUT is required in order to >>>> +ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates >>>> +in perceptual (non-linear) space, so as to evenly spread the limited >>>> +entries evenly across the perceived space. >>>> + >>>> +.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8 >>>> + >>>> +Creating a model that is flexible enough to define color pipelines for >>>> +a wide variety of HW is challenging, though not impossible. Implementing >>>> +support for such a flexible definition in userspace, though, amounts >>>> +to essentially writing color pipeline drivers for each HW. >>> >>> My thinking right now is that userspace has it's own pipeline model >>> with the elements it must have. Then it attempts to map that pipeline >>> to what elements the KMS pipeline happens to expose. If there is a >>> mapping, good. If not, fall back to shaders on GPU. >>> To help that succeed more often, I'm using the current KMS abstract >>> pipeline as a guide in designing the Weston internal color pipeline. >>> >> I feel I should know, but is this pipeline documented? Is it merely, the >> plane > crtc > connector model, or does it go beyond that? > > The KMS pixel pipeline model right now is just a bunch of properties in > the CRTC. These properties allude to the degamma LUT -> CTM -> gamma > LUT pipeline model, post-blending. > > In Weston, we take a very similar approach. A color transformation > (which maps to a single rendering pass, or the CRTC KMS properties, or > the future per-plane KMS properties) is: > > color model change -> pre-curve -> color mapping -> post-curve > > - Color model change is more or less for YCbCr->RGB conversion. > > - Pre- and post-curves are essentially per-channel 1D LUTs or > enumerated functions. > > - Color mapping is a 3D LUT, a matrix, or whatever else is needed. > > You can see a similar structure to the KMS degamma->CTM->gamma, but > with options to plug in other defined operations in the slots so > that at least the GL-renderer can be flexible enough for everything, > even if it doesn't match KMS capabilities. Each of the slots can also > be identity (which even gets compile out of the GL shader). 
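Restating the four slots described above as a data structure, purely for illustration; these are not Weston's actual types, just the "color model change -> pre-curve -> color mapping -> post-curve" shape written down:

#include <stdbool.h>
#include <stddef.h>

enum color_mapping_kind {
	MAPPING_IDENTITY,
	MAPPING_MATRIX,  /* e.g. a 3x3 CTM */
	MAPPING_3DLUT,
};

struct color_transform {
	bool ycbcr_to_rgb;            /* color model change */
	const double *pre_curve;      /* per-channel 1D LUT, or NULL = identity */
	size_t pre_curve_len;
	enum color_mapping_kind mapping;
	const double *post_curve;     /* per-channel 1D LUT, or NULL = identity */
	size_t post_curve_len;
};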
> > Weston has one color transformation per window to go from content to > blending space, and another color transformation to go from blending to > output (cable) space. > > It's not really documented, as half of that code, and more really, is > still waiting for review or to be written. Oh, I did have some plans > written down here: > https://gitlab.freedesktop.org/wayland/weston/-/issues/467#note_864054 > Right, I need to digest this again. Did anybody start any CM doc patches in Weston or Wayland yet? > Pre-curve for instance could be a combination of decoding to linear > light and a shaper for the 3D LUT coming next. That's why we don't call > them gamma or EOTF, that would be too limiting. > > (Using a shaper may help to keep the 3D LUT size reasonable - I suppose > very much like those multi-segmented LUTs.) > AFAIU a 3D LUTs will need a shaper as they don't have enough precision. But that's going deeper into color theory than I understand. Vitaly would know better all the details around 3D LUT usage. > > ... > >>> Now, someone might say that the Wayland protocol design for HDR aims to >>> be descriptive and not prescriptive, so why should KMS UAPI be >>> different? The reason is explained above: *some* KMS clients may switch >>> frame by frame between KMS and shaders, but Wayland clients pick one >>> path and stick to it. Wayland clients have no reason that I can imagine >>> to switch arbitrarily in flight. >>> >> I'm a bit confused about this paragraph. Wouldn't the Wayland compositor >> decide whether to use a KMS plane or shader and not the client? > > What I meant is, Wayland clients will not randomly switch between doing > color transformations themselves and letting the compositor do it. They > should be able to just pick one path and stick to it as long as the > window is up. > Makes sense. >>>> + >>>> +We would like to solicit feedback and encourage discussion around the >>>> +merits and weaknesses of these approaches. This question is at the core >>>> +of defining a good API and we'd like to get it right. >>>> + >>>> + >>>> +Input and Output Transfer functions >>>> +----------------------------------- >>>> + >>>> +We define an input transfer function on drm_plane to describe the >>>> +transform from framebuffer to blending space. >>>> + >>>> +We define an output transfer function on drm_crtc to describe the >>>> +transform from blending space to display space. >>>> + >>> >>> Here is again the terminology problem between transfer function and >>> (color) space. >>> >> Color value encoding? Or luminance space? Or maybe there's a different term >> altogether to describe this? > > The problem in the statement is that it implies a transfer function can > do color space conversions or color space mapping. > > In Weston we call it "color transformation" in an attempt to include > everything. > > The input function must include the possibility for color space mapping > because you may have different planes with different content color > spaces, and blending requires converting them all into one common color > space. > > Depending on what you choose as your blending space, the output > function could be just the display EOTF or something more complicated. > > > ... > >>> It's worth to note that while PQ is absolute in luminance (providing >>> cd/m² values), everything else here is relative for both SDR and HDR. >>> You cannot blend content in PQ with content in something else together, >>> until you practically define the absolute luminance for all non-PQ >>> content or vice versa. 
>>> >>> A further complication is that you could have different >>> relative-luminance transfer functions, meaning that the (absolute) >>> luminance they are relative to varies. The obvious case is blending SDR >>> content with HDR content when both have relative-luminance transfer >>> function. >>> >> Good points. It sounds like we would need something akin to EDR (or >> max-SDR nits) for any relative-luminance TF, i.e. a way to arbitrarily >> scale the luminance of the respective plane. > > Right. However, in the past few days, I've heard statements that > scaling luminance linearly will look not so good. What you need to do > is to follow the human visual system (HVS) characteristic and use a > gamma function. (This is not about non-linear encoding, just that the > function happens to be similar - which is not totally a coincidence, > since also non-linear encoding is meant to follow the HVS[*].) HLG OOTF > does exactly this IIUC. Naturally, these statements came from Andrew > Cotton as I recall. > Interesting comment about scaling luminance. > * Or actually, the non-linear encoding was meant to follow cathode-ray > tube characteristic, which by pure coincidence happens to roughly > agree with HVS. > >>> Then you have HLG which is more like scene-referred than >>> display-referred, but that might be solved with the parameter I >>> mentioned, I'm not quite sure. >>> >>> PQ is said to be display-referred, but it's usually referred to >>> someone else's display than yours, which means it needs the HDR >>> metadata to be able to tone-map suitably to your display. This seems to >>> be a similar problem as with signal gamut vs. device gamut. >>> >>> The traditional relative-luminance transfer functions, well, the >>> content implied by them, is display-referred when it arrived at KMS or >>> compositor level. There the question of "whose display" doesn't matter >>> much because it's SDR and narrow gamut, and we probably don't even >>> notice when we see an image wrong. With HDR the mismatch might be >>> noticeable. >>> >>> >>>> + >>>> + >>>> +Describing SDR Luminance >>>> +------------------------------ >>>> + >>>> +Since many displays do no correctly advertise the HDR white level we >>>> +propose to define the SDR white level in nits. >>> >>> This means that even if you had no content using PQ, you still need to >>> define the absolute luminance for all the (HDR) relative-luminance >>> transfer functions. >>> >>> There probably needs to be something to relate everything to a single, >>> relative or absolute, luminance range. That is necessary for any >>> composition (KMS and software) since the output is a single image. >>> >>> Is it better to go with relative or absolute metrics? Right now I would >>> tend to say relative, because relative is unitless. Absolute values are >>> numerically equivalent, but they might not have anything to do with >>> actual physical measurements, making them actually relative. This >>> happens when your monitor does not support PQ mode or does tone-mapping >>> to your image, for instance. >>> >> It sounds like PQ is the outlier here in defining luminance in absolute >> units. Though it's also currently the most commonly used TF for HDR >> content. > > Yes. "A completely new way", I recall reading somewhere advocating PQ. :-) > > You can't switch from PQ to HLG by only replacing the TF, mind. Or so > they say... I suppose converting from one to the other requires making > decisions on the way. 
At least you need to know what display dynamic > range you are targeting I think. > >> Wouldn't you use the absolute luminance definition for PQ if you relate >> everything to a relative range? >> >> Would it make sense to relate everything to a common output luminance >> range? If that output is PQ then an input PQ buffer is still output >> as PQ and relative-luminance buffers can be scaled. >> >> Would that scaling (EDR or similar) be different for SDR (sRGB) content >> vs other HDR relative-luminance content? > > I think we need to know the target display, especially the dynamic > range of it. Then we know what HLG OOTF it should use. From PQ we need > at least the HDR static metadata to know the actual range, as assuming > the full 10k nit range being meaningful could seriously lose highlights > or something I guess. > > Everything is relative to the target display I believe, even PQ since > displaying PQ as-is only works on the mastering display. > > Since PQ content comes with some metadata, we need PQ-to-PQ conversions > for PQ display, assuming we don't just pass through the metadata to the > display. Maybe the HLG OOTF could be used for the tone mapping of > PQ-to-PQ... > > I think both PQ and HLG have different standards written for how to map > SDR to them. I don't remember which ITU-R or SMPTE spec those might be, > but I suppose BT.2100 could be a starting point searching for them. > I wonder if an intermediate representation of color values, like the EDR representation, would help with the conversions. Thanks, Harry > > ... > >> Initially I was hoping to find a quick way to allow pushing video >> straight from decoder through a KMS plane to the output. Increasingly >> I'm realizing that this is probably not going to work well for a general >> desktop compositor, hence the statement here to pretty much say the >> Wayland plan is the correct plan for this: single-plane HDR (with shader >> composition) first, then KMS offloading for power saving. >> >> On some level I'm still interested in the direct decoder-to-KMS-to-display >> path but am afraid we won't get the API right if we don't deal with the general >> desktop compositor use-case first. > > I am very happy to hear that. :-) > >> Apologies, again, if some of my response is a bit incoherent. I've been writing >> the responses over Friday and today. > > It wasn't at all! > > > Thanks, > pq >
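For reference, a sketch of the BT.2100 HLG OOTF mentioned above. It assumes scene-linear RGB normalized to [0.0, 1.0] and a nominal peak luminance in the range typical of HLG displays (roughly 400-2000 cd/m^2, where the system gamma stays above 1); the black-level lift term is ignored:

#include <math.h>

/*
 * BT.2100 HLG OOTF: scene-linear RGB -> display-linear RGB, normalized
 * so that 1.0 corresponds to the display peak luminance lw (cd/m^2).
 */
static void hlg_ootf(double rgb[3], double lw)
{
	double gamma = 1.2 + 0.42 * log10(lw / 1000.0);
	double ys = 0.2627 * rgb[0] + 0.6780 * rgb[1] + 0.0593 * rgb[2];
	double scale = pow(ys, gamma - 1.0);
	int i;

	for (i = 0; i < 3; i++)
		rgb[i] *= scale;
}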
On Tue, 21 Sep 2021 14:05:05 -0400 Harry Wentland <harry.wentland@amd.com> wrote: > On 2021-09-21 09:31, Pekka Paalanen wrote: > > On Mon, 20 Sep 2021 20:14:50 -0400 > > Harry Wentland <harry.wentland@amd.com> wrote: > > > >> On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 > >>> Harry Wentland <harry.wentland@amd.com> wrote: > >>> > >>>> Use the new DRM RFC doc section to capture the RFC previously only > >>>> described in the cover letter at > >>>> https://patchwork.freedesktop.org/series/89506/ > >>>> > >>>> v3: > >>>> * Add sections on single-plane and multi-plane HDR > >>>> * Describe approach to define HW details vs approach to define SW intentions > >>>> * Link Jeremy Cline's excellent HDR summaries > >>>> * Outline intention behind overly verbose doc > >>>> * Describe FP16 use-case > >>>> * Clean up links > >>>> > >>>> v2: create this doc > >>>> > >>>> v1: n/a > >>>> > >>>> Signed-off-by: Harry Wentland <harry.wentland@amd.com> > > > > Hi Harry! > > > > ... > > > >>>> --- > >>>> Documentation/gpu/rfc/color_intentions.drawio | 1 + > >>>> Documentation/gpu/rfc/color_intentions.svg | 3 + > >>>> Documentation/gpu/rfc/colorpipe | 1 + > >>>> Documentation/gpu/rfc/colorpipe.svg | 3 + > >>>> Documentation/gpu/rfc/hdr-wide-gamut.rst | 580 ++++++++++++++++++ > >>>> Documentation/gpu/rfc/index.rst | 1 + > >>>> 6 files changed, 589 insertions(+) > >>>> create mode 100644 Documentation/gpu/rfc/color_intentions.drawio > >>>> create mode 100644 Documentation/gpu/rfc/color_intentions.svg > >>>> create mode 100644 Documentation/gpu/rfc/colorpipe > >>>> create mode 100644 Documentation/gpu/rfc/colorpipe.svg > >>>> create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst ... > I think we need to talk about what 1.0 means. Apple's EDR defines 1.0 > as "reference white" or in other words the max SDR white. > > That definition might change depending on the content type. Yes, the definition of 1.0 depends on the... *cough* encoding. Semantic encoding? Sometimes it just means max signal value (like everywhere until now), sometimes it maps to something else. It might be relative (other than PQ system) or absolute (PQ system) luminance, with a fixed scale after non-linear encoding. The definition of 0.0, or { 0.0, 0.0, 0.0 } more like, is pretty much always the darkest possible black - or is it? The darkest possible black is not usually 0 cd/m², but something above that depending on both the device and the viewing environment. A display necessarily reflects some light from the environment which sets the black level of the image, even if the display itself was capable of exactly 0 cd/m². Maybe VR goggles are an exception. As a side note: if the viewing environment sets the display black level, then the environment also sets the display black's white point, and that may be different from the display's own white point. Also HVS has rods for low light vision, while color management concentrates wholly on the cones that provide color vision. So dark shades might be in the rod range where color cannot be perceived. I digress though. Then there is the whole issue of HVS adaptation which basically sets the observable dynamic range bracket (and what one considers as white I think). Minimum observable color and luminance difference depends on that bracket and the color position inside the bracket. Trying to look at a monitor in bright daylight is a painful example of these. ;-) Btw. 
is was an awesome experience many years ago to spend 15-30 minutes in a room lit with a pale green light only, and then walking outside. I have never ever seen so vivid and saturated reds, yellows, violets, browns(!), etc. than just after coming out of that room. That was the real world, not a display. :-) ... > > One thing I realised yesterday is that HLG displays are much better > > defined than PQ displays, because HLG defines what OOTF the display > > must implement. In a PQ system, the signal carries the full 10k nits > > range, and then the monitor must do vendor magic to display it. That's > > for tone mapping, not sure if HLG has an advantage in gamut mapping as > > well. > > > > Doesn't the metadata describe the max content white? So even if the signal > carries the full 10k nits the actual max luminance of the content should > be incoded as part of the metadata. It is in the HDR static metadata, yes, if present. There is also dynamic metadata version. However, the static metadata describes the presentation on the (professional) mastering display, more or less. Almost certainly the display an end user has is not a mastering display capable device, so arbitrary magic still needs to happen to squeeze the signal down to what the display can do. Or, I suppose, if the signal (image) does not need squeezing for people who bought the average HDR display, then people who bought high-end HDR displays will be unimpressed by the image on their display. Thinking of buying a new fancy TV and then the image looks exactly the same as in the old one. Ironically, that is exactly what color management might do to SDR content. One could expand a narrow range to a wider range, and I'm sure displays do that too for more sales, but I guess you would have the usual problems of upscaling. It's hard to invent detail where there was none recorded. ... > Did anybody start any CM doc patches in Weston or Wayland yet? There is the https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst we started a long time ago, and have not really touched it for a while. Since we last touched it, at least my understanding has developed somewhat. It is linked from the overview in https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 and if you want to propose changes, the way to do it is file a MR in https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests against the 'color' branch. Patches very much welcome, that doc does not need to limit itself to Wayland. :-) We also have issues tracked at https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&utf8=%E2%9C%93&state=opened > > Pre-curve for instance could be a combination of decoding to linear > > light and a shaper for the 3D LUT coming next. That's why we don't call > > them gamma or EOTF, that would be too limiting. > > > > (Using a shaper may help to keep the 3D LUT size reasonable - I suppose > > very much like those multi-segmented LUTs.) > > > > AFAIU a 3D LUTs will need a shaper as they don't have enough precision. > But that's going deeper into color theory than I understand. Vitaly would > know better all the details around 3D LUT usage. There is a very practical problem: the sheer number of elements in a 3D LUT grows to the power of three. So you can't have very many taps per channel without storage requirements blowing up. Each element needs to be a 3-channel value, too. And then 8 bits is not enough. 
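A quick back-of-the-envelope helper for the cubic growth mentioned above, assuming 3 channels and 16 bits per channel: 17 taps per channel is about 29 KB, 33 taps about 216 KB, and 65 taps already about 1.6 MB.

#include <stdio.h>

/* Bytes needed for an n x n x n LUT with 3 channels of 'bits' each. */
static unsigned long lut3d_bytes(unsigned n, unsigned bits)
{
	return (unsigned long)n * n * n * 3 * ((bits + 7) / 8);
}

int main(void)
{
	printf("17: %lu bytes, 33: %lu bytes, 65: %lu bytes\n",
	       lut3d_bytes(17, 16), lut3d_bytes(33, 16), lut3d_bytes(65, 16));
	return 0;
}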
I'm really happy that Vitaly is working with us on Weston and Wayland. :-) He's a huge help, and I feel like I'm currently the one slowing things down by being backlogged in reviews. Thanks, pq
On 2021-09-20 20:14, Harry Wentland wrote:
> On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400
>> Harry Wentland <harry.wentland@amd.com> wrote:
>>
<snip>
>>> +If a display's maximum HDR white level is correctly reported it is trivial
>>> +to convert between all of the above representations of SDR white level. If
>>> +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed
>>> +nits value is preferred, assuming we are blending in linear space.
>>> +
>>> +It is our experience that many HDR displays do not report maximum white
>>> +level correctly
>>
>> Which value do you refer to as "maximum white", and how did you measure
>> it?
>>
> Good question. I haven't played with those displays myself but I'll try to
> find out a bit more background behind this statement.
>

Some TVs report the EOTF but not the luminance values.
For an example edid-decode capture of my eDP HDR panel:

  HDR Static Metadata Data Block:
    Electro optical transfer functions:
      Traditional gamma - SDR luminance range
      SMPTE ST2084
    Supported static metadata descriptors:
      Static metadata type 1
    Desired content max luminance: 115 (603.666 cd/m^2)
    Desired content max frame-average luminance: 109 (530.095 cd/m^2)
    Desired content min luminance: 7 (0.005 cd/m^2)

I suspect on those TVs it looks like this:

  HDR Static Metadata Data Block:
    Electro optical transfer functions:
      Traditional gamma - SDR luminance range
      SMPTE ST2084
    Supported static metadata descriptors:
      Static metadata type 1

Windows has some defaults in this case and our Windows driver also has
some defaults. Using defaults in the 1000-2000 nits range would yield much
better tone-mapping results than assuming the monitor can support a full
10k nits.

As an aside, recently we've come across displays where the max average
luminance is higher than the max peak luminance. This is not a mistake but
due to how the display's dimming zones work. Not sure what impact this might
have on tone-mapping, other than to keep in mind that we can assume that
max_avg < max_peak.

Harry
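For reference, the code-value-to-cd/m^2 mapping behind the numbers above, as I understand CTA-861.3 (and as tools like edid-decode appear to compute it); it reproduces 115 -> ~603.666 cd/m^2 and 7 -> ~0.005 cd/m^2 for the block quoted above:

#include <math.h>

/* Desired content max (and max frame-average) luminance, cv is 8-bit. */
static double cta_max_luminance(unsigned cv)
{
	return 50.0 * pow(2.0, cv / 32.0);
}

/* Desired content min luminance, relative to the decoded max. */
static double cta_min_luminance(unsigned cv, double max_l)
{
	return max_l * (cv / 255.0) * (cv / 255.0) / 100.0;
}

/* cta_max_luminance(115) ~= 603.666; cta_min_luminance(7, 603.666) ~= 0.005 */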
On 2021-09-22 04:31, Pekka Paalanen wrote: > On Tue, 21 Sep 2021 14:05:05 -0400 > Harry Wentland <harry.wentland@amd.com> wrote: > >> On 2021-09-21 09:31, Pekka Paalanen wrote: >>> On Mon, 20 Sep 2021 20:14:50 -0400 >>> Harry Wentland <harry.wentland@amd.com> wrote: >>> ... > >> Did anybody start any CM doc patches in Weston or Wayland yet? > > There is the > https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst > we started a long time ago, and have not really touched it for a while. > Since we last touched it, at least my understanding has developed > somewhat. > > It is linked from the overview in > https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 > and if you want to propose changes, the way to do it is file a MR in > https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests > against the 'color' branch. Patches very much welcome, that doc does > not need to limit itself to Wayland. :-) > Right, I've read all that a while back. It might be a good place to consolidate most of the Linux CM/HDR discussion, since gitlab is good with allowing discussions, we can track changes, and it's more formatting and diagram friendly than text-only email. > We also have issues tracked at > https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&utf8=%E2%9C%93&state=opened > >>> Pre-curve for instance could be a combination of decoding to linear >>> light and a shaper for the 3D LUT coming next. That's why we don't call >>> them gamma or EOTF, that would be too limiting. >>> >>> (Using a shaper may help to keep the 3D LUT size reasonable - I suppose >>> very much like those multi-segmented LUTs.) >>> >> >> AFAIU a 3D LUTs will need a shaper as they don't have enough precision. >> But that's going deeper into color theory than I understand. Vitaly would >> know better all the details around 3D LUT usage. > > There is a very practical problem: the sheer number of elements in a 3D > LUT grows to the power of three. So you can't have very many taps per > channel without storage requirements blowing up. Each element needs to > be a 3-channel value, too. And then 8 bits is not enough. > And those storage requirements would have a direct impact on silicon real estate and therefore the price and power usage of the HW. Harry > I'm really happy that Vitaly is working with us on Weston and Wayland. :-) > He's a huge help, and I feel like I'm currently the one slowing things > down by being backlogged in reviews. > > > Thanks, > pq >
On Wed, 22 Sep 2021 11:28:37 -0400 Harry Wentland <harry.wentland@amd.com> wrote: > On 2021-09-22 04:31, Pekka Paalanen wrote: > > On Tue, 21 Sep 2021 14:05:05 -0400 > > Harry Wentland <harry.wentland@amd.com> wrote: > > > >> On 2021-09-21 09:31, Pekka Paalanen wrote: > >>> On Mon, 20 Sep 2021 20:14:50 -0400 > >>> Harry Wentland <harry.wentland@amd.com> wrote: > >>> > > ... > > > > >> Did anybody start any CM doc patches in Weston or Wayland yet? > > > > There is the > > https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst > > we started a long time ago, and have not really touched it for a while. > > Since we last touched it, at least my understanding has developed > > somewhat. > > > > It is linked from the overview in > > https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 > > and if you want to propose changes, the way to do it is file a MR in > > https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests > > against the 'color' branch. Patches very much welcome, that doc does > > not need to limit itself to Wayland. :-) > > > > Right, I've read all that a while back. > > It might be a good place to consolidate most of the Linux CM/HDR discussion, > since gitlab is good with allowing discussions, we can track changes, and > it's more formatting and diagram friendly than text-only email. Fine by me, but the way things are right now, we'd be hijacking Sebastian's personal repository for these things. That's not ideal. We can't merge the protocol XML into wayland-protocols until it has the accepted implementations required by the governance rules, but I wonder if we could land color.rst ahead of time, then work on that in wayland-protocols upstream repo. It's hard to pick a good place for a cross-project document. Any other ideas? > > We also have issues tracked at > > https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&utf8=%E2%9C%93&state=opened Thanks, pq
On Wed, 22 Sep 2021 11:06:53 -0400 Harry Wentland <harry.wentland@amd.com> wrote: > On 2021-09-20 20:14, Harry Wentland wrote: > > On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 > >> Harry Wentland <harry.wentland@amd.com> wrote: > >> > > <snip> > > >>> +If a display's maximum HDR white level is correctly reported it is trivial > >>> +to convert between all of the above representations of SDR white level. If > >>> +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed > >>> +nits value is preferred, assuming we are blending in linear space. > >>> + > >>> +It is our experience that many HDR displays do not report maximum white > >>> +level correctly > >> > >> Which value do you refer to as "maximum white", and how did you measure > >> it? > >> > > Good question. I haven't played with those displays myself but I'll try to > > find out a bit more background behind this statement. > > > > > Some TVs report the EOTF but not the luminance values. > For an example edid-code capture of my eDP HDR panel: > > HDR Static Metadata Data Block: > Electro optical transfer functions: > Traditional gamma - SDR luminance range > SMPTE ST2084 > Supported static metadata descriptors: > Static metadata type 1 > Desired content max luminance: 115 (603.666 cd/m^2) > Desired content max frame-average luminance: 109 (530.095 cd/m^2) > Desired content min luminance: 7 (0.005 cd/m^2) > I forget where I heard (you, Vitaly, someone?) that integrated panels may not have the magic gamut and tone mapping hardware, which means that software (or display engine) must do the full correct thing. That's another reason to not rely on magic display functionality, which suits my plans perfectly. > I suspect on those TVs it looks like this: > > HDR Static Metadata Data Block: > Electro optical transfer functions: > Traditional gamma - SDR luminance range > SMPTE ST2084 > Supported static metadata descriptors: > Static metadata type 1 > > Windows has some defaults in this case and our Windows driver also has > some defaults. Oh, missing information. Yay. > Using defaults in the 1000-2000 nits range would yield much better > tone-mapping results than assuming the monitor can support a full > 10k nits. Obviously. > As an aside, recently we've come across displays where the max > average luminance is higher than the max peak luminance. This is > not a mistake but due to how the display's dimming zones work. IOW, the actual max peak luminance in absolute units depends on the current image average luminance. Wonderful, but what am I (the content producer, the display server) supposed to do with that information... > Not sure what impact this might have on tone-mapping, other than > to keep in mind that we can assume that max_avg < max_peak. *cannot Seems like it would lead to a very different tone mapping algorithm which needs to compute the image average luminance before it can account for max peak luminance (which I wouldn't know how to infer). So either a two-pass algorithm, or taking the average from the previous frame. I imagine that is going to be fun considering one needs to composite different types of input images together, and the final tone mapping might need to differ for each. Strictly thinking that might lead to an iterative optimisation algorithm which would be quite intractable in practise to complete for a single frame at a time. Thanks, pq
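A trivial sketch of the fallback idea discussed above: if the HDR Static Metadata block advertises ST 2084 but omits the luminance values, pick a conservative default rather than assuming the full 10k nits. The struct and field names here are hypothetical, not a real EDID or KMS interface:

#include <stdbool.h>

struct hdr_display_caps {
	bool has_max_luminance;
	double max_luminance; /* cd/m^2, valid only if has_max_luminance */
};

static double target_peak_nits(const struct hdr_display_caps *caps)
{
	if (caps->has_max_luminance)
		return caps->max_luminance;
	return 1000.0; /* assume a modest HDR panel, not the full 10k nits */
}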
On 2021-09-23 04:01, Pekka Paalanen wrote: > On Wed, 22 Sep 2021 11:06:53 -0400 > Harry Wentland <harry.wentland@amd.com> wrote: > >> On 2021-09-20 20:14, Harry Wentland wrote: >>> On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 >>>> Harry Wentland <harry.wentland@amd.com> wrote: >>>> >> >> <snip> >> >>>>> +If a display's maximum HDR white level is correctly reported it is trivial >>>>> +to convert between all of the above representations of SDR white level. If >>>>> +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed >>>>> +nits value is preferred, assuming we are blending in linear space. >>>>> + >>>>> +It is our experience that many HDR displays do not report maximum white >>>>> +level correctly >>>> >>>> Which value do you refer to as "maximum white", and how did you measure >>>> it? >>>> >>> Good question. I haven't played with those displays myself but I'll try to >>> find out a bit more background behind this statement. >>> >> >> >> Some TVs report the EOTF but not the luminance values. >> For an example edid-code capture of my eDP HDR panel: >> >> HDR Static Metadata Data Block: >> Electro optical transfer functions: >> Traditional gamma - SDR luminance range >> SMPTE ST2084 >> Supported static metadata descriptors: >> Static metadata type 1 >> Desired content max luminance: 115 (603.666 cd/m^2) >> Desired content max frame-average luminance: 109 (530.095 cd/m^2) >> Desired content min luminance: 7 (0.005 cd/m^2) >> > > I forget where I heard (you, Vitaly, someone?) that integrated panels > may not have the magic gamut and tone mapping hardware, which means > that software (or display engine) must do the full correct thing. > > That's another reason to not rely on magic display functionality, which > suits my plans perfectly. > I've mentioned it before but there aren't really a lot of integrated HDR panels yet. I think we've only seen one or two without tone-mapping ability. Either way we probably need at least the ability to tone-map the output on the transmitter side (SW, GPU, or display HW). >> I suspect on those TVs it looks like this: >> >> HDR Static Metadata Data Block: >> Electro optical transfer functions: >> Traditional gamma - SDR luminance range >> SMPTE ST2084 >> Supported static metadata descriptors: >> Static metadata type 1 >> >> Windows has some defaults in this case and our Windows driver also has >> some defaults. > > Oh, missing information. Yay. > >> Using defaults in the 1000-2000 nits range would yield much better >> tone-mapping results than assuming the monitor can support a full >> 10k nits. > > Obviously. > >> As an aside, recently we've come across displays where the max >> average luminance is higher than the max peak luminance. This is >> not a mistake but due to how the display's dimming zones work. > > IOW, the actual max peak luminance in absolute units depends on the > current image average luminance. Wonderful, but what am I (the content > producer, the display server) supposed to do with that information... > >> Not sure what impact this might have on tone-mapping, other than >> to keep in mind that we can assume that max_avg < max_peak. > > *cannot > Right > Seems like it would lead to a very different tone mapping algorithm > which needs to compute the image average luminance before it can > account for max peak luminance (which I wouldn't know how to infer). So > either a two-pass algorithm, or taking the average from the previous > frame. 
> > I imagine that is going to be fun considering one needs to composite > different types of input images together, and the final tone mapping > might need to differ for each. Strictly thinking that might lead to an > iterative optimisation algorithm which would be quite intractable in > practise to complete for a single frame at a time. > Maybe a good approach for this would be to just consider MaxAvg = MaxPeak in this case. At least until one would want to consider dynamic tone-mapping, i.e. tone-mapping that is changing frame-by-frame based on content. Dynamic tone-mapping might be challenging to do in SW but could be a possibility with specialized HW. Though I'm not sure exactly how that HW would look like. Maybe something like a histogram engine like Laurent mentions in https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html. Harry > > Thanks, > pq >
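A sketch of what the first pass of such a two-pass or histogram-based approach could look like in software, operating on linear-light luminance values; this is purely illustrative and not tied to any particular HW histogram engine:

#include <stddef.h>

#define HIST_BINS 256

struct lum_stats {
	unsigned bins[HIST_BINS];
	double avg;
};

/* luma[] holds per-pixel linear luminance in [0, peak]. */
static void lum_histogram(const double *luma, size_t count, double peak,
			  struct lum_stats *out)
{
	double sum = 0.0;
	size_t i;

	for (i = 0; i < HIST_BINS; i++)
		out->bins[i] = 0;

	for (i = 0; i < count; i++) {
		unsigned bin = (unsigned)(luma[i] / peak * (HIST_BINS - 1));

		if (bin >= HIST_BINS)
			bin = HIST_BINS - 1;
		out->bins[bin]++;
		sum += luma[i];
	}
	out->avg = count ? sum / count : 0.0;
}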
On 2021-09-23 9:40 a.m., Harry Wentland wrote: > > On 2021-09-23 04:01, Pekka Paalanen wrote: >> On Wed, 22 Sep 2021 11:06:53 -0400 >> Harry Wentland <harry.wentland@amd.com> wrote: >> >>> On 2021-09-20 20:14, Harry Wentland wrote: >>>> On 2021-09-15 10:01, Pekka Paalanen wrote:> On Fri, 30 Jul 2021 16:41:29 -0400 >>>>> Harry Wentland <harry.wentland@amd.com> wrote: >>>>> >>> <snip> >>> >>>>>> +If a display's maximum HDR white level is correctly reported it is trivial >>>>>> +to convert between all of the above representations of SDR white level. If >>>>>> +it is not, defining SDR luminance as a nits value, or a ratio vs a fixed >>>>>> +nits value is preferred, assuming we are blending in linear space. >>>>>> + >>>>>> +It is our experience that many HDR displays do not report maximum white >>>>>> +level correctly >>>>> Which value do you refer to as "maximum white", and how did you measure >>>>> it? >>>>> >>>> Good question. I haven't played with those displays myself but I'll try to >>>> find out a bit more background behind this statement. >>>> >>> >>> Some TVs report the EOTF but not the luminance values. >>> For an example edid-code capture of my eDP HDR panel: >>> >>> HDR Static Metadata Data Block: >>> Electro optical transfer functions: >>> Traditional gamma - SDR luminance range >>> SMPTE ST2084 >>> Supported static metadata descriptors: >>> Static metadata type 1 >>> Desired content max luminance: 115 (603.666 cd/m^2) >>> Desired content max frame-average luminance: 109 (530.095 cd/m^2) >>> Desired content min luminance: 7 (0.005 cd/m^2) >>> >> I forget where I heard (you, Vitaly, someone?) that integrated panels >> may not have the magic gamut and tone mapping hardware, which means >> that software (or display engine) must do the full correct thing. >> >> That's another reason to not rely on magic display functionality, which >> suits my plans perfectly. >> > I've mentioned it before but there aren't really a lot of integrated > HDR panels yet. I think we've only seen one or two without tone-mapping > ability. > > Either way we probably need at least the ability to tone-map the output > on the transmitter side (SW, GPU, or display HW). It is really interesting to see the quality of panel TM algorithm by specifying different metadata and validate how severe loss of details which could mean no TM at all or 1DLUTÂ is used to soften the clipping or 3DLUT( which has wider possibilities for TM) To facilitate this development we may use LCMS proofing capabilities to allow simulate the image view on high end(wide gamut display) how it may looks on low end (narrow gamut displays or integrated panels) >>> I suspect on those TVs it looks like this: >>> >>> HDR Static Metadata Data Block: >>> Electro optical transfer functions: >>> Traditional gamma - SDR luminance range >>> SMPTE ST2084 >>> Supported static metadata descriptors: >>> Static metadata type 1 >>> >>> Windows has some defaults in this case and our Windows driver also has >>> some defaults. >> Oh, missing information. Yay. >> >>> Using defaults in the 1000-2000 nits range would yield much better >>> tone-mapping results than assuming the monitor can support a full >>> 10k nits. >> Obviously. >> >>> As an aside, recently we've come across displays where the max >>> average luminance is higher than the max peak luminance. This is >>> not a mistake but due to how the display's dimming zones work. >> IOW, the actual max peak luminance in absolute units depends on the >> current image average luminance. 
Wonderful, but what am I (the content >> producer, the display server) supposed to do with that information... >> >>> Not sure what impact this might have on tone-mapping, other than >>> to keep in mind that we can assume that max_avg < max_peak. >> *cannot >> > Right > >> Seems like it would lead to a very different tone mapping algorithm >> which needs to compute the image average luminance before it can >> account for max peak luminance (which I wouldn't know how to infer). So >> either a two-pass algorithm, or taking the average from the previous >> frame. >> >> I imagine that is going to be fun considering one needs to composite >> different types of input images together, and the final tone mapping >> might need to differ for each. Strictly thinking that might lead to an >> iterative optimisation algorithm which would be quite intractable in >> practise to complete for a single frame at a time. >> > Maybe a good approach for this would be to just consider MaxAvg = MaxPeak > in this case. At least until one would want to consider dynamic tone-mapping, > i.e. tone-mapping that is changing frame-by-frame based on content. > Dynamic tone-mapping might be challenging to do in SW but could be a possibility > with specialized HW. Though I'm not sure exactly how that HW would look like. > Maybe something like a histogram engine like Laurent mentions in > https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html. > > Harry > >> Thanks, >> pq >>
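A sketch of the LittleCMS soft-proofing idea mentioned above: build a proofing transform that renders content for the wide-gamut display while simulating how it would look on the narrow-gamut panel. The ICC file names are placeholders, and error handling is omitted:

#include <lcms2.h>

static cmsHTRANSFORM make_proofing_transform(void)
{
	cmsHPROFILE src = cmsCreate_sRGBProfile();  /* content profile */
	cmsHPROFILE dst = cmsOpenProfileFromFile("wide-gamut-monitor.icc", "r");
	cmsHPROFILE proof = cmsOpenProfileFromFile("narrow-gamut-panel.icc", "r");

	/* Render src->dst, but simulate the proof device on the way. */
	return cmsCreateProofingTransform(src, TYPE_RGB_8,
					  dst, TYPE_RGB_8,
					  proof,
					  INTENT_PERCEPTUAL,
					  INTENT_RELATIVE_COLORIMETRIC,
					  cmsFLAGS_SOFTPROOFING |
					  cmsFLAGS_GAMUTCHECK);
}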
On Thu, 23 Sep 2021 10:43:54 +0300 Pekka Paalanen <ppaalanen@gmail.com> wrote: > On Wed, 22 Sep 2021 11:28:37 -0400 > Harry Wentland <harry.wentland@amd.com> wrote: > > > On 2021-09-22 04:31, Pekka Paalanen wrote: > > > On Tue, 21 Sep 2021 14:05:05 -0400 > > > Harry Wentland <harry.wentland@amd.com> wrote: > > > > > >> On 2021-09-21 09:31, Pekka Paalanen wrote: > > >>> On Mon, 20 Sep 2021 20:14:50 -0400 > > >>> Harry Wentland <harry.wentland@amd.com> wrote: > > >>> > > > > ... > > > > > > > >> Did anybody start any CM doc patches in Weston or Wayland yet? > > > > > > There is the > > > https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst > > > we started a long time ago, and have not really touched it for a while. > > > Since we last touched it, at least my understanding has developed > > > somewhat. > > > > > > It is linked from the overview in > > > https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14 > > > and if you want to propose changes, the way to do it is file a MR in > > > https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests > > > against the 'color' branch. Patches very much welcome, that doc does > > > not need to limit itself to Wayland. :-) > > > > > > > Right, I've read all that a while back. > > > > It might be a good place to consolidate most of the Linux CM/HDR discussion, > > since gitlab is good with allowing discussions, we can track changes, and > > it's more formatting and diagram friendly than text-only email. > > Fine by me, but the way things are right now, we'd be hijacking > Sebastian's personal repository for these things. That's not ideal. > > We can't merge the protocol XML into wayland-protocols until it has the > accepted implementations required by the governance rules, but I wonder > if we could land color.rst ahead of time, then work on that in > wayland-protocols upstream repo. > > It's hard to pick a good place for a cross-project document. Any other > ideas? > > > > We also have issues tracked at > > > https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&utf8=%E2%9C%93&state=opened Hi all, we discussed things in https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues/6 and we have a new home for the color related WIP documentation we can use across Wayland, Mesa, DRM, and even X11 if people want to: https://gitlab.freedesktop.org/pq/color-and-hdr Yes, it's still someone's personal repository, but we avoid entangling it with wayland-protocols which also means we can keep the full git history. If this gets enough traction, the repository can be moved from under my personal group to somewhere more communal, and if that is still inside gitlab.fd.o then all merge requests and issues will move with it. The README notes that we will deal out merge permissions as well. This is not meant to supersede the documentation of individual APIs, but to host additional documentation that would be too verbose, too big, or out of scope to host within respective API docs. Feel free to join the effort or just to discuss. Thanks, pq
diff --git a/Documentation/gpu/rfc/color_intentions.drawio b/Documentation/gpu/rfc/color_intentions.drawio
new file mode 100644
index 000000000000..d62f3b24e1ec
--- /dev/null
+++ b/Documentation/gpu/rfc/color_intentions.drawio
@@ -0,0 +1 @@
[draw.io source for the color_intentions diagram; single-line XML blob omitted]

diff --git a/Documentation/gpu/rfc/color_intentions.svg b/Documentation/gpu/rfc/color_intentions.svg
new file mode 100644
index 000000000000..2f6b5f5813a3
--- /dev/null
+++ b/Documentation/gpu/rfc/color_intentions.svg
@@ -0,0 +1,3 @@
[SVG markup omitted. The diagram shows two framebuffers, one RGB8 sRGB and one P010 PQ BT2020, feeding a Blending block whose result is the Display Output as RGB10 PQ BT2020.]

diff --git a/Documentation/gpu/rfc/colorpipe b/Documentation/gpu/rfc/colorpipe
new file mode 100644
index 000000000000..2d12490eddec
--- /dev/null
+++ b/Documentation/gpu/rfc/colorpipe
@@ -0,0 +1 @@
[draw.io source for the colorpipe diagram; single-line XML blob omitted]

diff --git a/Documentation/gpu/rfc/colorpipe.svg b/Documentation/gpu/rfc/colorpipe.svg
new file mode 100644
index 000000000000..f6b8ece2499d
--- /dev/null
+++ b/Documentation/gpu/rfc/colorpipe.svg
@@ -0,0 +1,3 @@
[SVG markup omitted. The diagram shows two per-plane pipelines, each Framebuffer -> de-YUV matrix -> (de-Gamma) LUT -> CTM / CSC -> Tonemapping, feeding a shared Blending block that is followed by a (Tonemapping) LUT -> CTM / CSC -> (Gamma) LUT post-blending chain.]
text-align: center; "><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: #000000; line-height: 1.2; pointer-events: all; white-space: normal; word-wrap: normal; ">(Gamma)<br />LUT</div></div></div></foreignObject><text x="118" y="627" fill="#000000" font-family="Helvetica" font-size="12px" text-anchor="middle">(Gamma)...</text></switch></g></g><switch><g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"/><a transform="translate(0,-5)" xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" target="_blank"><text text-anchor="middle" font-size="10px" x="50%" y="100%">Viewer does not support full SVG 1.1</text></a></switch></svg> \ No newline at end of file diff --git a/Documentation/gpu/rfc/hdr-wide-gamut.rst b/Documentation/gpu/rfc/hdr-wide-gamut.rst new file mode 100644 index 000000000000..e463670191ab --- /dev/null +++ b/Documentation/gpu/rfc/hdr-wide-gamut.rst @@ -0,0 +1,580 @@ +============================== +HDR & Wide Color Gamut Support +============================== + +.. role:: wy-text-strike + +ToDo +==== + +* :wy-text-strike:`Reformat as RST kerneldoc` - done +* :wy-text-strike:`Don't use color_encoding for color_space definitions` - done +* :wy-text-strike:`Update SDR luminance description and reasoning` - done +* :wy-text-strike:`Clarify 3D LUT required for some color space transformations` - done +* :wy-text-strike:`Highlight need for named color space and EOTF definitions` - done +* :wy-text-strike:`Define transfer function API` - done +* :wy-text-strike:`Draft upstream plan` - done +* :wy-text-strike:`Reference to wayland plan` - done +* Reference to Chrome plans +* Sketch view of HW pipeline for couple of HW implementations + + +Upstream Plan +============= + +* Reach consensus on DRM/KMS API +* Implement support in amdgpu +* Implement IGT tests +* Add API support to Weston, ChromiumOS, or other canonical open-source project interested in HDR +* Merge user-space +* Merge kernel patches + + +History +======= + +v3: + +* Add sections on single-plane and multi-plane HDR +* Describe approach to define HW details vs approach to define SW intentions +* Link Jeremy Cline's excellent HDR summaries +* Outline intention behind overly verbose doc +* Describe FP16 use-case +* Clean up links + +v2: create this doc + +v1: n/a + + +Introduction +============ + +We are looking to enable HDR support for a couple of single-plane and +multi-plane scenarios. To do this effectively we recommend new interfaces +to drm_plane. Below I'll give a bit of background on HDR and why we +propose these interfaces. + +As an RFC doc this document is more verbose than what we would want from +an eventual uAPI doc. This is intentional in order to ensure interested +parties are all on the same page and to facilitate discussion if there +is disagreement on aspects of the intentions behind the proposed uAPI. + + +Overview and background +======================= + +I highly recommend you read `Jeremy Cline's HDR primer`_ + +Jeremy Cline did a much better job describing this. I highly recommend +you read it at [1]: + +.. _Jeremy Cline's HDR primer: https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html + +Defining a pixel's luminance +---------------------------- + +The luminance space of pixels in a framebuffer/plane presented to the +display is not well defined in the DRM/KMS APIs. 
It is usually assumed to be in a 2.2 or 2.4 gamma space and has no mapping
to an absolute luminance value; it is interpreted in relative terms.

Luminance can be measured and described in absolute terms as candela
per meter squared, or cd/m2, or nits. Even though a pixel value can be
mapped to luminance in a linear fashion, doing so without losing a lot of
detail requires 16-bpc color depth. The reason for this is that human
perception can distinguish luminance deltas of roughly 0.5-1%. A
linear representation is suboptimal, wasting precision in the highlights
and losing precision in the shadows.

A gamma curve is a decent approximation to a human's perception of
luminance, but the `PQ (perceptual quantizer) function`_ improves on
it. It also defines the luminance values in absolute terms, with the
highest value being 10,000 nits and the lowest 0.0005 nits.

Using content that's defined in PQ space we can approximate the real
world much better.

Here are some examples of real-life objects and their approximate
luminance values:

.. _PQ (perceptual quantizer) function: https://en.wikipedia.org/wiki/High-dynamic-range_video#Perceptual_Quantizer

.. flat-table::
   :header-rows: 1

   * - Object
     - Luminance in nits

   * - Fluorescent light
     - 10,000

   * - Highlights
     - 1,000 - sunlight

   * - White Objects
     - 250 - 1,000

   * - Typical Objects
     - 1 - 250

   * - Shadows
     - 0.01 - 1

   * - Ultra Blacks
     - 0 - 0.0005


Transfer functions
------------------

Traditionally we used the terms gamma and de-gamma to describe the
encoding of a pixel's luminance value and the operation to transfer from
a linear luminance space to the non-linear space used to encode the
pixels. Since some newer encodings don't use a gamma curve I suggest
we refer to non-linear encodings using the terms `EOTF, and OETF`_, or
simply as transfer functions in general.

The EOTF (Electro-Optical Transfer Function) describes how to transfer
from an electrical signal to an optical signal. This was traditionally
done by the de-gamma function.

The OETF (Opto-Electronic Transfer Function) describes how to transfer
from an optical signal to an electronic signal. This was traditionally
done by the gamma function.

More generally we can name the transfer function describing the transform
between scanout and blending space as the **input transfer function**, and
the transfer function describing the transform from blending space to the
output space as the **output transfer function**.

.. _EOTF, and OETF: https://en.wikipedia.org/wiki/Transfer_functions_in_imaging
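As a concrete reference for the PQ curve mentioned above, here is a small
sketch of the ST 2084 EOTF in C. It is reference math only, not part of the
proposed uAPI; the constants are the ones published in SMPTE ST 2084.

.. code-block:: c

    #include <math.h>

    /*
     * SMPTE ST 2084 (PQ) EOTF: map a normalized non-linear code value in
     * [0.0, 1.0] to an absolute luminance in cd/m2 (nits).
     */
    static double pq_eotf_to_nits(double code)
    {
        const double m1 = 2610.0 / 16384.0;        /* 0.1593017578125 */
        const double m2 = 2523.0 / 4096.0 * 128.0; /* 78.84375 */
        const double c1 = 3424.0 / 4096.0;         /* 0.8359375 */
        const double c2 = 2413.0 / 4096.0 * 32.0;  /* 18.8515625 */
        const double c3 = 2392.0 / 4096.0 * 32.0;  /* 18.6875 */

        double e = pow(code, 1.0 / m2);
        double num = fmax(e - c1, 0.0);
        double den = c2 - c3 * e;

        return 10000.0 * pow(num / den, 1.0 / m1);
    }

A code value of 1.0 maps to 10,000 nits and 0.0 maps to 0 nits; roughly
half of the code values land below 100 nits, which is what gives PQ its
usable precision in the shadows.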
Mastering Luminances
--------------------

Even though we are able to describe the absolute luminance of a pixel
using the PQ 2084 EOTF we are presented with physical limitations of the
display technologies on the market today. Here are a few examples of
luminance ranges of displays.

.. flat-table::
   :header-rows: 1

   * - Display
     - Luminance range in nits

   * - Typical PC display
     - 0.3 - 200

   * - Excellent LCD HDTV
     - 0.3 - 400

   * - HDR LCD w/ local dimming
     - 0.05 - 1,500

Since no display can currently show the full 0.0005 to 10,000 nits
luminance range of PQ, the display will need to tone-map the HDR content,
i.e. to fit the content within a display's capabilities. To assist
with tone-mapping, HDR content is usually accompanied by metadata
that describes (among other things) the minimum and maximum mastering
luminance, i.e. the maximum and minimum luminance of the display that
was used to master the HDR content.

The HDR metadata is currently defined on the drm_connector via the
hdr_output_metadata blob property.

It might be useful to define per-plane HDR metadata, as different planes
might have been mastered differently.
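For reference, this is roughly how userspace already attaches static HDR
metadata to the connector today. The sketch below uses libdrm; the property
id lookup for the connector's "HDR_OUTPUT_METADATA" property is elided and
the mastering values are made-up examples, not a recommendation.

.. code-block:: c

    #include <stdint.h>
    #include <string.h>
    #include <drm_mode.h>     /* struct hdr_output_metadata */
    #include <xf86drmMode.h>

    /*
     * Fill and attach static HDR metadata (CTA-861-G Static Metadata Type 1)
     * to a connector as part of an atomic commit. prop_id is the id of the
     * connector's "HDR_OUTPUT_METADATA" property, looked up elsewhere.
     */
    static int attach_hdr_metadata(int fd, drmModeAtomicReq *req,
                                   uint32_t conn_id, uint32_t prop_id)
    {
        struct hdr_output_metadata meta;
        uint32_t blob_id;
        int ret;

        memset(&meta, 0, sizeof(meta));
        meta.metadata_type = 0;                    /* Static Metadata Type 1 */
        meta.hdmi_metadata_type1.metadata_type = 0;
        meta.hdmi_metadata_type1.eotf = 2;         /* SMPTE ST 2084 (PQ) */
        /* Example mastering display range: 0.005 - 1,000 nits. */
        meta.hdmi_metadata_type1.max_display_mastering_luminance = 1000;
        meta.hdmi_metadata_type1.min_display_mastering_luminance = 50; /* 0.0001 nit units */
        meta.hdmi_metadata_type1.max_cll = 1000;
        meta.hdmi_metadata_type1.max_fall = 400;
        /* Mastering primaries and white point omitted for brevity. */

        ret = drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id);
        if (ret)
            return ret;

        return drmModeAtomicAddProperty(req, conn_id, prop_id, blob_id) < 0 ? -1 : 0;
    }

Per-plane HDR metadata, if we decide it is useful, could reuse the same
blob layout on a new drm_plane property.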
.. _SDR Luminance:

SDR Luminance
-------------

Traditional SDR content's maximum white luminance is not well defined.
Some like to define it at 80 nits, others at 200 nits. It also depends
to a large extent on the environmental viewing conditions. In practice
this means that we need to define the maximum SDR white luminance, either
in nits, or as a ratio.

`One Windows API`_ defines it as a ratio against 80 nits.

`Another Windows API`_ defines it as a nits value.

The `Wayland color management proposal`_ uses Apple's definition of EDR as a
ratio of the HDR range vs SDR range.

If a display's maximum HDR white level is correctly reported it is trivial
to convert between all of the above representations of SDR white level. If
it is not, defining SDR luminance as a nits value, or a ratio vs a fixed
nits value, is preferred, assuming we are blending in linear space.

It is our experience that many HDR displays do not report maximum white
level correctly.

.. _One Windows API: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/dispmprt/ns-dispmprt-_dxgkarg_settargetadjustedcolorimetry2
.. _Another Windows API: https://docs.microsoft.com/en-us/uwp/api/windows.graphics.display.advancedcolorinfo.sdrwhitelevelinnits?view=winrt-20348
.. _Wayland color management proposal: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8
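Converting between these representations is straightforward as long as the
display's maximum luminance is known and trusted. A small sketch, where the
80 nit reference is the one used by the ratio-based Windows API:

.. code-block:: c

    /* SDR white level given as a ratio against 80 nits -> nits. */
    static double sdr_white_nits_from_ratio80(double ratio)
    {
        return ratio * 80.0;
    }

    /*
     * EDR value as used by the Wayland proposal: the ratio of the display's
     * maximum (HDR) luminance to the luminance chosen for SDR white.
     */
    static double edr_value(double max_hdr_nits, double sdr_white_nits)
    {
        return max_hdr_nits / sdr_white_nits;
    }

For example, placing SDR white at 200 nits on a display with an 800 nit
peak corresponds to an EDR value of 4.0, or a ratio of 2.5 against the 80
nit reference.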
Let There Be Color
------------------

So far we've only talked about luminance, ignoring colors altogether. Just
like in the luminance space, traditionally the color space of display
outputs has not been well defined. Similar to how an EOTF defines a
mapping of pixel data to an absolute luminance value, the color space
maps color information for each pixel onto the CIE 1931 chromaticity
space. This can be thought of as a mapping to an absolute, real-life
color value.

A color space is defined by its primaries and white point. The primaries
and white point are expressed as coordinates in the CIE 1931 color
space. Think of the red primary as the reddest red that can be displayed
within the color space. Same for green and blue.

Examples of color spaces are:

.. flat-table::
   :header-rows: 1

   * - Color Space
     - Description

   * - BT 601
     - similar to BT 709

   * - BT 709
     - used by sRGB content; ~53% of BT 2020

   * - DCI-P3
     - used by most HDR displays; ~72% of BT 2020

   * - BT 2020
     - standard for most HDR content


Color Primaries and White Point
-------------------------------

Just like displays currently cannot represent the entire 0.0005 -
10,000 nits HDR range of the PQ 2084 EOTF, they are currently not capable
of representing the entire BT.2020 color gamut. For this reason video
content will often specify the color primaries and white point used to
master the video, in order to allow displays to map the image as best as
possible onto the display's gamut.


Displays and Tonemapping
------------------------

External displays are able to do their own tone and color mapping, based
on the mastering luminance, color primaries, and white point defined in
the HDR metadata.

Some internal panels might not include the complex HW to do tone and color
mapping on their own and will require the display driver to perform
appropriate mapping.


How are we solving the problem?
===============================

Single-plane
------------

If a single drm_plane is used no further work is required. The compositor
will provide one HDR plane alongside a drm_connector's hdr_output_metadata
and the display HW will output this plane without further processing if
no CRTC LUTs are provided.

If desired a compositor can use the CRTC LUTs for HDR content but without
support for PWL or multi-segmented LUTs the quality of the operation is
expected to be subpar for HDR content.


Multi-plane
-----------

In multi-plane configurations we need to solve the problem of blending
HDR and SDR content. This blending should be done in linear space and
therefore requires framebuffer data that is presented in linear space
or a way to convert non-linear data to linear space. Additionally
we need a way to define the luminance of any SDR content in relation
to the HDR content.

In order to present framebuffer data in linear space without losing a
lot of precision it needs to be presented using 16 bpc precision.
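To illustrate what blending in linear space means, here is a minimal
single-channel sketch of compositing an sRGB-encoded SDR pixel over linear
HDR content. It is illustrative only; the sdr_white_nits parameter stands
in for the SDR white level discussed in the :ref:`SDR Luminance` section,
and a real pipeline would do this per channel in HW:

.. code-block:: c

    #include <math.h>

    /* sRGB EOTF: non-linear code value [0, 1] -> linear [0, 1]. */
    static double srgb_to_linear(double c)
    {
        return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
    }

    /*
     * Alpha-blend an sRGB-encoded SDR value over a linear HDR value (in nits).
     * The SDR value is first linearized, then scaled to the luminance chosen
     * for SDR white so it blends sensibly with absolute HDR luminance.
     */
    static double blend_sdr_over_hdr(double sdr_code, double alpha,
                                     double hdr_nits, double sdr_white_nits)
    {
        double sdr_nits = srgb_to_linear(sdr_code) * sdr_white_nits;

        return alpha * sdr_nits + (1.0 - alpha) * hdr_nits;
    }

The blended result is still linear and would be re-encoded with the output
transfer function (e.g. PQ) after blending. Doing the same math directly on
gamma-encoded 8 bpc values produces visible artifacts, which is why the
blending space and the 16 bpc requirement matter.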
Defining HW Details
-------------------

One way to take full advantage of modern HW's color pipelines is by
defining a "generic" pipeline that matches all capable HW. Something
like this, which I took `from Uma Shankar`_ and expanded on:

.. _from Uma Shankar: https://patchwork.freedesktop.org/series/90826/

.. kernel-figure:: colorpipe.svg

I intentionally put de-Gamma and Gamma in parentheses in my graph
as they describe the intention of the block but not necessarily a
strict definition of how a userspace implementation is required to
use them.

De-Gamma and Gamma blocks are named LUT, but they could be non-programmable
LUTs in some HW implementations with no programmable LUT available. See
the definitions for AMD's `latest dGPU generation`_ as an example.

.. _latest dGPU generation: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c?h=v5.13#n2586

I renamed the "Plane Gamma LUT" and "CRTC De-Gamma LUT" to "Tonemapping"
as we generally don't want to re-apply gamma before blending, or do
de-gamma post blending. These blocks are generally intended for
tonemapping purposes.

Tonemapping in this case could be a simple nits value or `EDR`_ to describe
how to scale the :ref:`SDR Luminance`.

Tonemapping could also include the ability to use a 3D LUT which might be
accompanied by a 1D shaper LUT. The shaper LUT is required in order to
ensure a 3D LUT with limited entries (e.g. 9x9x9, or 17x17x17) operates
in perceptual (non-linear) space, so as to spread the limited entries
evenly across the perceived space.

.. _EDR: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst#id8

Creating a model that is flexible enough to define color pipelines for
a wide variety of HW is challenging, though not impossible. Implementing
support for such a flexible definition in userspace, though, amounts
to essentially writing color pipeline drivers for each HW.


Defining SW Intentions
----------------------

An alternative to describing the HW color pipeline in enough detail to
be useful for color management and HDR purposes is to instead define
SW intentions.

.. kernel-figure:: color_intentions.svg

This greatly simplifies the API and lets the driver do what a driver
does best: figure out how to program the HW to achieve the desired
effect.

The above diagram could include white point, primaries, and maximum
peak and average white levels in order to facilitate tone mapping.

At this point I suggest keeping tonemapping (other than an SDR luminance
adjustment) out of the current DRM/KMS API. Most HDR displays are capable
of tonemapping. If for some reason tonemapping is still desired on
a plane, a shader might be a better way of doing that instead of relying
on display HW.

In some ways this mirrors how various userspace APIs treat HDR:

* GStreamer's `GstVideoTransferFunction`_
* EGL's `EGL_EXT_gl_colorspace_bt2020_pq`_ extension
* Vulkan's `VkColorSpaceKHR`_

.. _GstVideoTransferFunction: https://gstreamer.freedesktop.org/documentation/video/video-color.html?gi-language=c#GstVideoTransferFunction
.. _EGL_EXT_gl_colorspace_bt2020_pq: https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_gl_colorspace_bt2020_linear.txt
.. _VkColorSpaceKHR: https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VkColorSpaceKHR


A hybrid approach to the API
----------------------------

Our current proposal takes a hybrid approach, defining API to specify
input and output transfer functions, as well as an SDR boost and an
input color space definition.

We would like to solicit feedback and encourage discussion around the
merits and weaknesses of these approaches. This question is at the core
of defining a good API and we'd like to get it right.


Input and Output Transfer functions
-----------------------------------

We define an input transfer function on drm_plane to describe the
transform from framebuffer to blending space.

We define an output transfer function on drm_crtc to describe the
transform from blending space to display space.

The transfer function can be a pre-defined function, such as the PQ EOTF,
or a custom LUT. A driver will be able to specify support for specific
transfer functions, including custom ones.

Defining the transfer function in this way allows us to support it on HW
that uses ROMs for these transforms, as well as on HW that uses LUT
definitions that are complex and don't map easily onto a standard LUT
definition.

We will not define per-plane LUTs in this patchset as the scope of our
current work only deals with pre-defined transfer functions. This API has
the flexibility to add custom 1D or 3D LUTs at a later date.

In order to support the existing 1D de-gamma and gamma LUTs on the drm_crtc
we will include a "custom 1D" enum value to indicate that the custom gamma and
de-gamma 1D LUTs should be used.

Possible transfer functions:

.. flat-table::
   :header-rows: 1

   * - Transfer Function
     - Description

   * - Gamma 2.2
     - a simple 2.2 gamma function

   * - sRGB
     - 2.4 gamma with small initial linear section

   * - PQ 2084
     - SMPTE ST 2084; used for HDR video and allows for up to 10,000 nit support

   * - Linear
     - Linear relationship between pixel value and luminance value

   * - Custom 1D
     - Custom 1D de-gamma and gamma LUTs; one LUT per color

   * - Custom 3D
     - Custom 3D LUT (to be defined)
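Userspace would discover which of these pre-defined transfer functions a
plane or CRTC supports through the usual DRM enum-property introspection.
The sketch below uses libdrm; the property name "INPUT_TF" is purely
illustrative and not an existing property, the actual name and enum values
are exactly what this RFC is meant to settle:

.. code-block:: c

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <xf86drmMode.h>

    /*
     * Print the enum values of a named plane property, e.g. a hypothetical
     * "INPUT_TF" input transfer function property.
     */
    static void print_plane_enum_property(int fd, uint32_t plane_id,
                                          const char *name)
    {
        drmModeObjectProperties *props =
            drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);

        if (!props)
            return;

        for (uint32_t i = 0; i < props->count_props; i++) {
            drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

            if (prop && !strcmp(prop->name, name) &&
                (prop->flags & DRM_MODE_PROP_ENUM)) {
                for (int j = 0; j < prop->count_enums; j++)
                    printf("%s supports: %s\n", name, prop->enums[j].name);
            }
            drmModeFreeProperty(prop);
        }

        drmModeFreeObjectProperties(props);
    }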
Describing SDR Luminance
------------------------

Since many displays do not correctly advertise the HDR white level we
propose to define the SDR white level in nits.

We define a new drm_plane property to specify the white level of an SDR
plane.


Defining the color space
------------------------

We propose to add a new color space property to drm_plane to define a
plane's color space.

While some color space conversions can be performed with a simple color
transformation matrix (CTM) others require a 3D LUT.


Defining mastering color space and luminance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ToDo


Pixel Formats
~~~~~~~~~~~~~

The pixel formats, such as ARGB8888, ARGB2101010, P010, or FP16, are
unrelated to color space and EOTF definitions. HDR pixels can be formatted
in different ways but in order to not lose precision HDR content requires
at least 10 bpc precision. For this reason ARGB2101010, P010, and FP16 are
the obvious candidates for HDR. ARGB2101010 and P010 have the advantage
of requiring only half the bandwidth of FP16, while FP16 has the advantage
of enough precision to operate in a linear space, i.e. without EOTF.


Use Cases
=========

RGB10 HDR plane - composited HDR video & desktop
------------------------------------------------

A single, composited plane of HDR content. The use-case is a video player
on a desktop with the compositor owning the composition of SDR and HDR
content. The content shall be PQ BT.2020 formatted. The drm_connector's
hdr_output_metadata shall be set.


P010 HDR video plane + RGB8 SDR desktop plane
---------------------------------------------

A normal 8bpc desktop plane, with a P010 HDR video plane underlaid. The
HDR plane shall be PQ BT.2020 formatted. The desktop plane shall specify
an SDR boost value. The drm_connector's hdr_output_metadata shall be set.


One XRGB8888 SDR Plane - HDR output
-----------------------------------

In order to support a smooth transition we recommend an OS that supports
HDR output to provide the hdr_output_metadata on the drm_connector to
configure the output for HDR, even when the content is only SDR. This will
allow for a smooth transition between SDR-only and HDR content. In this
use-case the SDR max luminance value should be provided on the drm_plane.

In DCN we will de-PQ or de-Gamma all input in order to blend in linear
space. For SDR content we will also apply any desired boost before
blending. After blending we will then re-apply the PQ EOTF and do RGB
to YCbCr conversion if needed.


FP16 HDR linear planes
----------------------

These will require a transformation into the display's encoding (e.g. PQ)
using the CRTC LUT. Current CRTC LUTs are lacking the precision in the
dark areas to do the conversion without losing detail.

One of the newly defined output transfer functions or a PWL or `multi-segmented
LUT`_ can be used to facilitate the conversion to PQ, HLG, or another
encoding supported by displays.

.. _multi-segmented LUT: https://patchwork.freedesktop.org/series/90822/


User Space
==========

Gnome & GStreamer
-----------------

See Jeremy Cline's `HDR in Linux\: Part 2`_.

.. _HDR in Linux\: Part 2: https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.html


Wayland
-------

See `Wayland Color Management and HDR Design Goals`_.

.. _Wayland Color Management and HDR Design Goals: https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst


ChromeOS Ozone
--------------

ToDo


HW support
==========

ToDo: describe the pipeline on a couple of different HW platforms


Further Reading
===============

* https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst
* http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf
* https://app.spectracal.com/Documents/White%20Papers/HDR_Demystified.pdf
* https://www.jcline.org/blog/fedora/graphics/hdr/2021/05/07/hdr-in-linux-p1.html
* https://www.jcline.org/blog/fedora/graphics/hdr/2021/06/28/hdr-in-linux-p2.html


diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 05670442ca1b..8d8430cfdde1 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -19,3 +19,4 @@ host such documentation:
 .. toctree::

    i915_gem_lmem.rst
+   hdr-wide-gamut.rst