
rfc: breaking old userspace gamma for 10-bit support

Message ID 20110420123848.6a78ff68@jbarnes-desktop (mailing list archive)
State New, archived

Commit Message

Jesse Barnes April 20, 2011, 7:38 p.m. UTC
> > Andrew, do you have anything hacked together for this yet?
> 
> Nope.  I gave up because I couldn't even get the mode to set. :)

Ok well you should be able to now. :)  Using the patchset I posted
earlier along with the two attached patches, the testdisplay program in
intel-gpu-tools will set a 30bpp mode and draw some nice gradients
(though without a 10bpc gamma ramp loaded).

> One issue was that the RandR apis aren't really designed for cards
> that can accept more than one gamma ramp size.  Someone (I forget who)
> suggested adding a display property to control it.  It might be
> possible to kill two birds with one stone by adding a property with
> two settings:
> 
>  - Low depth: the logic you implemented: the bit depth is set to match
> the framebuffer when possible and the gamma ramp size is set according
> to the framebuffer depth.
>  - High depth: the bit depth is set to the maximum that the encoder,
> connector, and monitor support at the requested resolution and the
> gamma ramp size is set to whatever gives the highest precision in each
> entry.

Well, I haven't implemented anything, gamma-wise.  If you select say
30bpp when you start X *and* you have a kernel with my patches applied,
you'll get 10bpc all the way out to the display if it supports it.  If
the display does *not* support 10bpc, the pipe will dither it to 8bpc
before sending it to the encoder.  Current DDX on an old kernel will
fail to create an FB with 10bpc because the kernel will reject it.
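
For reference, the FB side of that is just the depth/bpp pair userspace
hands to the kernel at creation time.  A minimal libdrm sketch (fd,
width, height, pitch and bo_handle are assumed to exist already; this is
illustrative, not the actual DDX path):

#include <stdio.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int add_30bpp_fb(int fd, uint32_t width, uint32_t height,
                        uint32_t pitch, uint32_t bo_handle,
                        uint32_t *fb_id)
{
        /* depth 30 packed into 32 bits per pixel (x2r10g10b10);
         * an unpatched kernel rejects this combination, so callers
         * would have to fall back to depth 24 / bpp 32 */
        int ret = drmModeAddFB(fd, width, height, 30, 32,
                               pitch, bo_handle, fb_id);
        if (ret)
                fprintf(stderr, "30bpp framebuffer rejected: %d\n", ret);
        return ret;
}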

So I guess I don't understand your low vs high distinction; the bit
depth is ultimately tied to the framebuffer and its allocation.
Correction happens after the fact in hw when the plane feeds bits to the
pipe.

Of course that's all separate from the color correction that happens
before the bits get to the pipe.  For that we have to wait until the
DDX starts up and decides what to do.  In a 30bpp mode, I'd hope it
would default to using the 10 bit gamma ramp (1024 entries of 30 bits
each), which I think is what you had in mind?  For that, we'd just need
to check whether we can hand the kernel a 1024 entry table, then do it
if possible, otherwise fall back to the existing 256 entry code.
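
A sketch of what that check could look like from userspace, using the
gamma_size the kernel reports on the CRTC to pick the table size (fd and
crtc_id are assumed; the identity ramp is just a placeholder for the
table the DDX or Argyll would really load):

#include <stdlib.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int load_gamma(int fd, uint32_t crtc_id)
{
        drmModeCrtcPtr crtc = drmModeGetCrtc(fd, crtc_id);
        /* 1024 on a pipe running 10bpc, 256 on the legacy path */
        int size = crtc ? crtc->gamma_size : 256;
        uint16_t *r = calloc(size, sizeof(*r));
        uint16_t *g = calloc(size, sizeof(*g));
        uint16_t *b = calloc(size, sizeof(*b));
        int i, ret = -1;

        if (r && g && b) {
                /* identity ramp; entries are 16-bit regardless of size */
                for (i = 0; i < size; i++)
                        r[i] = g[i] = b[i] = i * 0xffff / (size - 1);

                ret = drmModeCrtcSetGamma(fd, crtc_id, size, r, g, b);
        }

        free(r);
        free(g);
        free(b);
        drmModeFreeCrtc(crtc);
        return ret;
}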

Comments

Andrew Lutomirski April 20, 2011, 7:45 p.m. UTC | #1
On Wed, Apr 20, 2011 at 3:38 PM, Jesse Barnes <jbarnes@virtuousgeek.org> wrote:
>> > Andrew, do you have anything hacked together for this yet?
>>
>> Nope.  I gave up because I couldn't even get the mode to set. :)
>
> Ok well you should be able to now. :)  Using the patchset I posted
> earlier along with the two attached patches, the testdisplay program in
> intel-gpu-tools will set a 30bpp mode and draw some nice gradients
> (though without a 10bpc gamma ramp loaded).

Will test at home (that's where my 10bpc display is).


>
>> One issue was that the RandR apis aren't really designed for cards
>> that can accept more than one gamma ramp size.  Someone (I forget who)
>> suggested adding a display property to control it.  It might be
>> possible to kill two birds with one stone by adding a property with
>> two settings:
>>
>>  - Low depth: the logic you implemented: the bit depth is set to match
>> the framebuffer when possible and the gamma ramp size is set according
>> to the framebuffer depth.
>>  - High depth: the bit depth is set to the maximum that the encoder,
>> connector, and monitor support at the requested resolution and the
>> gamma ramp size is set to whatever gives the highest precision in each
>> entry.
>
> Well, I haven't implemented anything, gamma-wise.  If you select say
> 30bpp when you start X *and* you have a kernel with my patches applied,
> you'll get 10bpc all the way out to the display if it supports it.  If
> the display does *not* support 10bpc, the pipe will dither it to 8bpc
> before sending it to the encoder.  Current DDX on an old kernel will
> fail to create an FB with 10bpc because the kernel will reject it.
>
> So I guess I don't understand your low vs high distinction; the bit
> depth is ultimately tied to the framebuffer and its allocation.
> Correction happens after the fact in hw when the plane feeds bits to the
> pipe.

I want to have a 24-bit display plane with a 10-bit precision (or
12-bit interpolated) gamma ramp driving a 10bpc pipe and 10bpc over
DisplayPort.  That way I can ask Argyll for a 10bpc gamma ramp and I
get a gamma-corrected display without any banding but also without
having to wait for all the userspace stuff (mesa, compiz, etc.) to be
able to draw to a 30-bit framebuffer.
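
Put in numbers, that's a 256-entry table (one entry per 8-bit plane
value) whose outputs still carry 10 bits of precision, which the 16-bit
entries drmModeCrtcSetGamma takes have room for.  A rough, hypothetical
sketch (the gamma exponent stands in for whatever curve Argyll actually
computes):

#include <math.h>
#include <stdint.h>

/* Build a 256-entry-in / 10-bit-out correction ramp for one channel,
 * expressed in the 16-bit entries the DRM gamma ioctl expects.
 * Quantizing the output to 1024 steps instead of 256 is what avoids
 * the banding. */
static void fill_ramp_8in_10out(uint16_t ramp[256], double gamma)
{
        int i;

        for (i = 0; i < 256; i++) {
                double v = pow(i / 255.0, gamma);
                uint16_t out10 = (uint16_t)(v * 1023.0 + 0.5);
                ramp[i] = out10 << 6;   /* spread over the 16-bit range */
        }
}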

--Andy

Patch

diff --git a/tests/testdisplay.c b/tests/testdisplay.c
index 5bf5183..eeff97e 100644
--- a/tests/testdisplay.c
+++ b/tests/testdisplay.c
@@ -369,6 +369,8 @@  allocate_surface(int fd, int width, int height, uint32_t depth, uint32_t bpp,
 		format = CAIRO_FORMAT_RGB24;
 		break;
 	case 30:
+		format = CAIRO_FORMAT_RGB30;
+		break;
 	case 32:
 		format = CAIRO_FORMAT_ARGB32;
 		break;
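
For context, CAIRO_FORMAT_RGB30 is cairo's packed x2r10g10b10 image
format (it needs a cairo new enough to provide it).  A minimal sketch of
drawing into such a surface, roughly what testdisplay's gradient path
ends up doing (width and height assumed):

#include <cairo.h>

static cairo_surface_t *make_30bpp_surface(int width, int height)
{
        cairo_surface_t *surface =
                cairo_image_surface_create(CAIRO_FORMAT_RGB30, width, height);
        cairo_t *cr = cairo_create(surface);

        /* anything drawn here keeps 10 bits per channel */
        cairo_set_source_rgb(cr, 0.5, 0.5, 0.5);
        cairo_paint(cr);

        cairo_destroy(cr);
        return surface;
}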