diff mbox series

[2/2] drm: bridge: ldb: Configure LDB clock in .mode_set

Message ID 20241008223846.337162-2-marex@denx.de (mailing list archive)
State Not Applicable, archived
Headers show
Series [1/2] clk: imx: clk-imx8mp: Allow LDB serializer clock reconfigure parent rate | expand

Commit Message

Marek Vasut Oct. 8, 2024, 10:38 p.m. UTC
The LDB serializer clock operates at either the x7 or x14 rate of the input
LCDIFv3 scanout engine clock. Make sure the serializer clock and its
upstream Video PLL are configured early, in .mode_set, to the x7 or x14
rate of the pixel clock, before LCDIFv3 .atomic_enable is called, which
would otherwise configure the Video PLL to an unusable low x1 rate.

With this patch in place, the clock tree is correctly configured. The
example below is for a 71.1 MHz pixel clock panel; the LDB serializer
clock is then 497.7 MHz:

video_pll1_ref_sel                      1 1 0  24000000 0 0 50000
   video_pll1                           1 1 0 497700000 0 0 50000
      video_pll1_bypass                 1 1 0 497700000 0 0 50000
         video_pll1_out                 2 2 0 497700000 0 0 50000
            media_ldb                   1 1 0 497700000 0 0 50000
               media_ldb_root_clk       1 1 0 497700000 0 0 50000
            media_disp2_pix             1 1 0  71100000 0 0 50000
               media_disp2_pix_root_clk 1 1 0  71100000 0 0 50000

Signed-off-by: Marek Vasut <marex@denx.de>
---
Cc: Abel Vesa <abelvesa@kernel.org>
Cc: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Fabio Estevam <festevam@gmail.com>
Cc: Isaac Scott <isaac.scott@ideasonboard.com>
Cc: Jernej Skrabec <jernej.skrabec@gmail.com>
Cc: Jonas Karlman <jonas@kwiboo.se>
Cc: Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Neil Armstrong <neil.armstrong@linaro.org>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
Cc: Robert Foss <rfoss@kernel.org>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Shawn Guo <shawnguo@kernel.org>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: dri-devel@lists.freedesktop.org
Cc: imx@lists.linux.dev
Cc: kernel@dh-electronics.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-clk@vger.kernel.org
---
 drivers/gpu/drm/bridge/fsl-ldb.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

Comments

Isaac Scott Oct. 9, 2024, 10:27 a.m. UTC | #1
On Wed, 2024-10-09 at 00:38 +0200, Marek Vasut wrote:
> The LDB serializer clock operate at either x7 or x14 rate of the
> input
> LCDIFv3 scanout engine clock. Make sure the serializer clock and
> their
> upstream Video PLL are configured early in .mode_set to the x7 or x14
> rate of pixel clock, before LCDIFv3 .atomic_enable is called which
> would
> configure the Video PLL to low x1 rate, which is unusable.
> 
> With this patch in place, the clock tree is correctly configured. The
> example below is for a 71.1 MHz pixel clock panel, the LDB serializer
> clock is then 497.7 MHz:

Awesome! Thank you for this, this seems to fix the regression and the
patches work as expected. I have tested both patches on v6.12-rc2 and
the display works well.

For both patches, 

Tested-by: Isaac Scott <isaac.scott@ideasonboard.com>
> 
> video_pll1_ref_sel                      1 1 0  24000000 0 0 50000
>    video_pll1                           1 1 0 497700000 0 0 50000
>       video_pll1_bypass                 1 1 0 497700000 0 0 50000
>          video_pll1_out                 2 2 0 497700000 0 0 50000
>             media_ldb                   1 1 0 497700000 0 0 50000
>                media_ldb_root_clk       1 1 0 497700000 0 0 50000
>             media_disp2_pix             1 1 0  71100000 0 0 50000
>                media_disp2_pix_root_clk 1 1 0  71100000 0 0 50000
> 
> Signed-off-by: Marek Vasut <marex@denx.de>
> ---
> Cc: Abel Vesa <abelvesa@kernel.org>
> Cc: Andrzej Hajda <andrzej.hajda@intel.com>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Fabio Estevam <festevam@gmail.com>
> Cc: Isaac Scott <isaac.scott@ideasonboard.com>
> Cc: Jernej Skrabec <jernej.skrabec@gmail.com>
> Cc: Jonas Karlman <jonas@kwiboo.se>
> Cc: Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Michael Turquette <mturquette@baylibre.com>
> Cc: Neil Armstrong <neil.armstrong@linaro.org>
> Cc: Peng Fan <peng.fan@nxp.com>
> Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
> Cc: Robert Foss <rfoss@kernel.org>
> Cc: Sascha Hauer <s.hauer@pengutronix.de>
> Cc: Shawn Guo <shawnguo@kernel.org>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Stephen Boyd <sboyd@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: dri-devel@lists.freedesktop.org
> Cc: imx@lists.linux.dev
> Cc: kernel@dh-electronics.com
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-clk@vger.kernel.org
> ---
>  drivers/gpu/drm/bridge/fsl-ldb.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c
> b/drivers/gpu/drm/bridge/fsl-ldb.c
> index 0e4bac7dd04ff..a3a31467fcc85 100644
> --- a/drivers/gpu/drm/bridge/fsl-ldb.c
> +++ b/drivers/gpu/drm/bridge/fsl-ldb.c
> @@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
>  	return MODE_OK;
>  }
>  
> +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
> +			       const struct drm_display_mode *mode,
> +			       const struct drm_display_mode *adj)
> +{
> +	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
> +	unsigned long requested_link_freq =
> fsl_ldb_link_frequency(fsl_ldb, mode->clock);
> +
> +	clk_set_rate(fsl_ldb->clk, requested_link_freq);
> +}
> +
>  static const struct drm_bridge_funcs funcs = {
>  	.attach = fsl_ldb_attach,
>  	.atomic_enable = fsl_ldb_atomic_enable,
> @@ -287,6 +297,7 @@ static const struct drm_bridge_funcs funcs = {
>  	.atomic_get_input_bus_fmts =
> fsl_ldb_atomic_get_input_bus_fmts,
>  	.atomic_reset = drm_atomic_helper_bridge_reset,
>  	.mode_valid = fsl_ldb_mode_valid,
> +	.mode_set = fsl_ldb_mode_set,
>  };
>  
>  static int fsl_ldb_probe(struct platform_device *pdev)
Marek Vasut Oct. 9, 2024, 3:41 p.m. UTC | #2
On 10/9/24 12:27 PM, Isaac Scott wrote:
> On Wed, 2024-10-09 at 00:38 +0200, Marek Vasut wrote:
>> The LDB serializer clock operate at either x7 or x14 rate of the
>> input
>> LCDIFv3 scanout engine clock. Make sure the serializer clock and
>> their
>> upstream Video PLL are configured early in .mode_set to the x7 or x14
>> rate of pixel clock, before LCDIFv3 .atomic_enable is called which
>> would
>> configure the Video PLL to low x1 rate, which is unusable.
>>
>> With this patch in place, the clock tree is correctly configured. The
>> example below is for a 71.1 MHz pixel clock panel, the LDB serializer
>> clock is then 497.7 MHz:
> 
> Awesome! Thank you for this, this seems to fix the regression and the
> patches work as expected. I have tested both patches on v6.12-rc2 and
> the display works well.
> 
> For both patches,
> 
> Tested-by: Isaac Scott <isaac.scott@ideasonboard.com>
Thank you for testing, but this patch feels too much like feature
development to me. Does the DT tweak I suggested also fix your issue? If
so, I would really like that DT tweak to be the fix for the current
release, and these two patches to be feature development for the 6.13
cycle. What do you think?
Liu Ying Oct. 10, 2024, 7:15 a.m. UTC | #3
On 10/09/2024, Marek Vasut wrote:
> The LDB serializer clock operate at either x7 or x14 rate of the input

Isn't it either x7 or 3.5x?

s/operate/operates/

> LCDIFv3 scanout engine clock. Make sure the serializer clock and their
> upstream Video PLL are configured early in .mode_set to the x7 or x14
> rate of pixel clock, before LCDIFv3 .atomic_enable is called which would
> configure the Video PLL to low x1 rate, which is unusable.
> 
> With this patch in place, the clock tree is correctly configured. The
> example below is for a 71.1 MHz pixel clock panel, the LDB serializer
> clock is then 497.7 MHz:
> 
> video_pll1_ref_sel                      1 1 0  24000000 0 0 50000
>    video_pll1                           1 1 0 497700000 0 0 50000
>       video_pll1_bypass                 1 1 0 497700000 0 0 50000
>          video_pll1_out                 2 2 0 497700000 0 0 50000
>             media_ldb                   1 1 0 497700000 0 0 50000
>                media_ldb_root_clk       1 1 0 497700000 0 0 50000
>             media_disp2_pix             1 1 0  71100000 0 0 50000
>                media_disp2_pix_root_clk 1 1 0  71100000 0 0 50000
> 
> Signed-off-by: Marek Vasut <marex@denx.de>
> ---
> Cc: Abel Vesa <abelvesa@kernel.org>
> Cc: Andrzej Hajda <andrzej.hajda@intel.com>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Fabio Estevam <festevam@gmail.com>
> Cc: Isaac Scott <isaac.scott@ideasonboard.com>
> Cc: Jernej Skrabec <jernej.skrabec@gmail.com>
> Cc: Jonas Karlman <jonas@kwiboo.se>
> Cc: Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Michael Turquette <mturquette@baylibre.com>
> Cc: Neil Armstrong <neil.armstrong@linaro.org>
> Cc: Peng Fan <peng.fan@nxp.com>
> Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
> Cc: Robert Foss <rfoss@kernel.org>
> Cc: Sascha Hauer <s.hauer@pengutronix.de>
> Cc: Shawn Guo <shawnguo@kernel.org>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: Stephen Boyd <sboyd@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: dri-devel@lists.freedesktop.org
> Cc: imx@lists.linux.dev
> Cc: kernel@dh-electronics.com
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-clk@vger.kernel.org
> ---
>  drivers/gpu/drm/bridge/fsl-ldb.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
> index 0e4bac7dd04ff..a3a31467fcc85 100644
> --- a/drivers/gpu/drm/bridge/fsl-ldb.c
> +++ b/drivers/gpu/drm/bridge/fsl-ldb.c
> @@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
>  	return MODE_OK;
>  }
>  
> +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
> +			       const struct drm_display_mode *mode,
> +			       const struct drm_display_mode *adj)
> +{
> +	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
> +	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
> +
> +	clk_set_rate(fsl_ldb->clk, requested_link_freq);

The mode_set callback won't be called when only crtc_state->active
is changed from false to true in an atomic commit, e.g., blanking
the emulated fbdev first and then unblanking it.  So, in this case,
the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
those from mxsfb_kms or lcdif_kms.

Also, it doesn't look neat to call clk_set_rate() from both mode_set
callback and atomic_enable callback.

The idea is to assign a reasonable PLL clock rate in DT to make
display drivers' life easier, especially for i.MX8MP where LDB,
Samsung MIPI DSI may use a single PLL at the same time.

> +}
> +
>  static const struct drm_bridge_funcs funcs = {
>  	.attach = fsl_ldb_attach,
>  	.atomic_enable = fsl_ldb_atomic_enable,
> @@ -287,6 +297,7 @@ static const struct drm_bridge_funcs funcs = {
>  	.atomic_get_input_bus_fmts = fsl_ldb_atomic_get_input_bus_fmts,
>  	.atomic_reset = drm_atomic_helper_bridge_reset,
>  	.mode_valid = fsl_ldb_mode_valid,
> +	.mode_set = fsl_ldb_mode_set,
>  };
>  
>  static int fsl_ldb_probe(struct platform_device *pdev)
Marek Vasut Oct. 11, 2024, 1:59 a.m. UTC | #4
On 10/10/24 9:15 AM, Liu Ying wrote:
> On 10/09/2024, Marek Vasut wrote:
>> The LDB serializer clock operate at either x7 or x14 rate of the input
> 
> Isn't it either x7 or 3.5x?

Is it 3.5x for dual-link LVDS?
I don't have such a panel right now to test.

[...]

>> diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
>> index 0e4bac7dd04ff..a3a31467fcc85 100644
>> --- a/drivers/gpu/drm/bridge/fsl-ldb.c
>> +++ b/drivers/gpu/drm/bridge/fsl-ldb.c
>> @@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
>>   	return MODE_OK;
>>   }
>>   
>> +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
>> +			       const struct drm_display_mode *mode,
>> +			       const struct drm_display_mode *adj)
>> +{
>> +	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
>> +	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
>> +
>> +	clk_set_rate(fsl_ldb->clk, requested_link_freq);
> 
> The mode_set callback won't be called when only crtc_state->active
> is changed from false to true in an atomic commit, e.g., blanking
> the emulated fbdev first and then unblanking it.  So, in this case,
> the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
> those from mxsfb_kms or lcdif_kms.
> 
> Also, it doesn't look neat to call clk_set_rate() from both mode_set
> callback and atomic_enable callback.

I agree the mode_set callback is not the best place for this.
Do you know of a better callback to do this in? I couldn't find one.

> The idea is to assign a reasonable PLL clock rate in DT to make
> display drivers' life easier, especially for i.MX8MP where LDB,
> Samsung MIPI DSI may use a single PLL at the same time.
I would really like to avoid setting an arbitrary clock rate in DT, esp.
if it can be avoided. And it surely can be avoided for this simple use case.
Liu Ying Oct. 11, 2024, 6:49 a.m. UTC | #5
On 10/11/2024, Marek Vasut wrote:
> On 10/10/24 9:15 AM, Liu Ying wrote:
>> On 10/09/2024, Marek Vasut wrote:
>>> The LDB serializer clock operate at either x7 or x14 rate of the input
>>
>> Isn't it either x7 or 3.5x?
> 
> Is it 3.5 for the dual-link LVDS ?

Yes.

static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
{
	if (fsl_ldb_is_dual(fsl_ldb))
		return clock * 3500;
	else
		return clock * 7000;
}

> I don't have such a panel right now to test.

You can add a panel DT node for test to see the relationship
between the clocks, without a real dual-link LVDS panel.

> 
> [...]
> 
>>> diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
>>> index 0e4bac7dd04ff..a3a31467fcc85 100644
>>> --- a/drivers/gpu/drm/bridge/fsl-ldb.c
>>> +++ b/drivers/gpu/drm/bridge/fsl-ldb.c
>>> @@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
>>>       return MODE_OK;
>>>   }
>>>   +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
>>> +                   const struct drm_display_mode *mode,
>>> +                   const struct drm_display_mode *adj)
>>> +{
>>> +    struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
>>> +    unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
>>> +
>>> +    clk_set_rate(fsl_ldb->clk, requested_link_freq);
>>
>> The mode_set callback won't be called when only crtc_state->active
>> is changed from false to true in an atomic commit, e.g., blanking
>> the emulated fbdev first and then unblanking it.  So, in this case,
>> the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
>> those from mxsfb_kms or lcdif_kms.
>>
>> Also, it doesn't look neat to call clk_set_rate() from both mode_set
>> callback and atomic_enable callback.
> 
> I agree the mode_set callback is not the best place for this.
> Do you know of a better callback where to do this ? I couldn't find one.

A wild idea is to change the order between the CRTC atomic_enable
callback and the bridge one by implementing your own
atomic_commit_tail... I don't think there is any place to do this
other than the atomic_enable callback.

Anyway, I don't think it is necessary to manage the clk_set_rate()
function calls between this driver and mxsfb_kms or lcdif_kms
because "video_pll1" clock rate is supposed to be assigned in DT...

> 
>> The idea is to assign a reasonable PLL clock rate in DT to make
>> display drivers' life easier, especially for i.MX8MP where LDB,
>> Samsung MIPI DSI may use a single PLL at the same time.
> I would really like to avoid setting arbitrary clock in DT, esp. if it can be avoided. And it surely can be avoided for this simple use case.

... just like I said in patch 1/2, "video_pll1" clock rate needs
to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
an assigned "video_pll1" clock rate in DT, this driver cannot
achieve that.  And, the i.MX8MP LDB + Samsung MIPI DSI case is
not simple considering using one single PLL and display modes
read from EDID.
Marek Vasut Oct. 12, 2024, 9:12 p.m. UTC | #6
On 10/11/24 8:49 AM, Liu Ying wrote:
> On 10/11/2024, Marek Vasut wrote:
>> On 10/10/24 9:15 AM, Liu Ying wrote:
>>> On 10/09/2024, Marek Vasut wrote:
>>>> The LDB serializer clock operate at either x7 or x14 rate of the input
>>>
>>> Isn't it either x7 or 3.5x?
>>
>> Is it 3.5 for the dual-link LVDS ?
> 
> Yes.
> 
> static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
> {
>          if (fsl_ldb_is_dual(fsl_ldb))
>                  return clock * 3500;
>          else
>                  return clock * 7000;
> }
> 
>> I don't have such a panel right now to test.
> 
> You can add a panel DT node for test to see the relationship
> between the clocks, without a real dual-link LVDS panel.

I'll take your word for this.

>> [...]
>>
>>>> diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
>>>> index 0e4bac7dd04ff..a3a31467fcc85 100644
>>>> --- a/drivers/gpu/drm/bridge/fsl-ldb.c
>>>> +++ b/drivers/gpu/drm/bridge/fsl-ldb.c
>>>> @@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
>>>>        return MODE_OK;
>>>>    }
>>>>    +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
>>>> +                   const struct drm_display_mode *mode,
>>>> +                   const struct drm_display_mode *adj)
>>>> +{
>>>> +    struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
>>>> +    unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
>>>> +
>>>> +    clk_set_rate(fsl_ldb->clk, requested_link_freq);
>>>
>>> The mode_set callback won't be called when only crtc_state->active
>>> is changed from false to true in an atomic commit, e.g., blanking
>>> the emulated fbdev first and then unblanking it.  So, in this case,
>>> the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
>>> those from mxsfb_kms or lcdif_kms.
>>>
>>> Also, it doesn't look neat to call clk_set_rate() from both mode_set
>>> callback and atomic_enable callback.
>>
>> I agree the mode_set callback is not the best place for this.
>> Do you know of a better callback where to do this ? I couldn't find one.
> 
> A wild idea is to change the order between the CRTC atomic_enable
> callback and the bridge one by implementing your own
> atomic_commit_tail...  I don't think there is any place to do this
> other than atomic_enable callback.

I will give that a try, thanks.

> Anyway, I don't think it is necessary to manage the clk_set_rate()
> function calls between this driver and mxsfb_kms or lcdif_kms
> because "video_pll1" clock rate is supposed to be assigned in DT...

I disagree with this part. I believe the assignment of the clock rate in
DT is only a temporary workaround which should be removed. The drivers
should be able to figure out and set the clock tree configuration.

>>> The idea is to assign a reasonable PLL clock rate in DT to make
>>> display drivers' life easier, especially for i.MX8MP where LDB,
>>> Samsung MIPI DSI may use a single PLL at the same time.
>> I would really like to avoid setting arbitrary clock in DT, esp. if it can be avoided. And it surely can be avoided for this simple use case.
> 
> ... just like I said in patch 1/2, "video_pll1" clock rate needs
> to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
> an assigned "video_pll1" clock rate in DT, this driver cannot
> achieve that.

This is something the LDB driver can infer from DT and configure the 
clock tree accordingly.

> And, the i.MX8MP LDB + Samsung MIPI DSI case is
> not simple considering using one single PLL and display modes
> read from EDID.
You could use separate PLLs for each LCDIF scanout engine in such a
deployment; I already ran into that, so I am aware of it. That is
probably the best way out of such a problem, esp. if accurate pixel
clocks are a requirement.

[...]
Liu Ying Oct. 22, 2024, 5:59 a.m. UTC | #7
On 10/13/2024, Marek Vasut wrote:
> On 10/11/24 8:49 AM, Liu Ying wrote:
>> On 10/11/2024, Marek Vasut wrote:
>>> On 10/10/24 9:15 AM, Liu Ying wrote:
>>>> On 10/09/2024, Marek Vasut wrote:

[...]

>> Anyway, I don't think it is necessary to manage the clk_set_rate()
>> function calls between this driver and mxsfb_kms or lcdif_kms
>> because "video_pll1" clock rate is supposed to be assigned in DT...
> 
> I disagree with this part. I believe the assignment of clock in DT is only a temporary workaround which should be removed. The drivers should be able to figure out and set the clock tree configuration.

I think the clock rate assignment in DT is still needed.
A good reason is that again we need to share one video PLL
between MIPI DSI and LDB display pipelines for i.MX8MP.

> 
>>>> The idea is to assign a reasonable PLL clock rate in DT to make
>>>> display drivers' life easier, especially for i.MX8MP where LDB,
>>>> Samsung MIPI DSI may use a single PLL at the same time.
>>> I would really like to avoid setting arbitrary clock in DT, esp. if it can be avoided. And it surely can be avoided for this simple use case.
>>
>> ... just like I said in patch 1/2, "video_pll1" clock rate needs
>> to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
>> an assigned "video_pll1" clock rate in DT, this driver cannot
>> achieve that.
> 
> This is something the LDB driver can infer from DT and configure the clock tree accordingly.

Well, the LDB driver only controls the "ldb" clock rate. It doesn't
magically set the parent "video_pll1" clock's rate to 2x its rate,
unless the driver gets the "video_pll1_out" clock by calling
clk_get_parent() and directly controls the PLL clock rate, which
doesn't look neat.

> 
>> And, the i.MX8MP LDB + Samsung MIPI DSI case is
>> not simple considering using one single PLL and display modes
>> read from EDID.
> You could use separate PLLs for each LCDIF scanout engine in such a deployment, I already ran into that, so I am aware of it. That is probably the best way out of such a problem, esp. if accurate pixel clock are the requirement.

I cannot use separate PLLs for the i.MX8MP LDB and Samsung MIPI
DSI display pipelines on i.MX8MP EVK, because the PLLs are limited
resources and we are running out of it.  Because LDB needs the pixel
clock and LVDS serial clock to be derived from a same PLL, the only
valid PLLs(see imx8mp_media_disp_pix_sels[] and
imx8mp_media_ldb_sels[]) are "video_pll1_out", "audio_pll2_out",
"sys_pll2_1000m" and "sys_pll1_800m".  All are used as either audio
clock or system clocks on i.MX8MP EVK, except "video_pll1_out".

You probably may use separate PLLs for a particular i.MX8MP platform
with limited features, but not for i.MX8MP EVK which is supposed to
evaluate all SoC features.

> 
> [...]
Marek Vasut Oct. 23, 2024, 12:55 a.m. UTC | #8
On 10/22/24 7:59 AM, Liu Ying wrote:

[...]

>>> Anyway, I don't think it is necessary to manage the clk_set_rate()
>>> function calls between this driver and mxsfb_kms or lcdif_kms
>>> because "video_pll1" clock rate is supposed to be assigned in DT...
>>
>> I disagree with this part. I believe the assignment of clock in DT is only a temporary workaround which should be removed. The drivers should be able to figure out and set the clock tree configuration.
> 
> I think the clock rate assignment in DT is still needed.
> A good reason is that again we need to share one video PLL
> between MIPI DSI and LDB display pipelines for i.MX8MP.

You don't really need to share the Video PLL; you can free up e.g. PLL3
and use it for one video output pipeline, use the Video PLL for the
other video pipeline, and then you get an accurate pixel clock in both
pipelines.

>>>>> The idea is to assign a reasonable PLL clock rate in DT to make
>>>>> display drivers' life easier, especially for i.MX8MP where LDB,
>>>>> Samsung MIPI DSI may use a single PLL at the same time.
>>>> I would really like to avoid setting arbitrary clock in DT, esp. if it can be avoided. And it surely can be avoided for this simple use case.
>>>
>>> ... just like I said in patch 1/2, "video_pll1" clock rate needs
>>> to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
>>> an assigned "video_pll1" clock rate in DT, this driver cannot
>>> achieve that.
>>
>> This is something the LDB driver can infer from DT and configure the clock tree accordingly.
> 
> Well, the LDB driver only controls the "ldb" clock rate. It doesn't
> magically set the parent "video_pll1" clock's rate to 2x it's rate,
> unless the driver gets "video_pll1_out" clock by calling
> clk_get_parent() and directly controls the PLL clock rate which
> doesn't look neat.

It isn't nice, but it actually may solve this problem, no?

>>> And, the i.MX8MP LDB + Samsung MIPI DSI case is
>>> not simple considering using one single PLL and display modes
>>> read from EDID.
>> You could use separate PLLs for each LCDIF scanout engine in such a deployment, I already ran into that, so I am aware of it. That is probably the best way out of such a problem, esp. if accurate pixel clock are the requirement.
> 
> I cannot use separate PLLs for the i.MX8MP LDB and Samsung MIPI
> DSI display pipelines on i.MX8MP EVK, because the PLLs are limited
> resources and we are running out of it.  Because LDB needs the pixel
> clock and LVDS serial clock to be derived from a same PLL, the only
> valid PLLs(see imx8mp_media_disp_pix_sels[] and
> imx8mp_media_ldb_sels[]) are "video_pll1_out", "audio_pll2_out",
> "sys_pll2_1000m" and "sys_pll1_800m".  All are used as either audio
> clock or system clocks on i.MX8MP EVK, except "video_pll1_out".

Could you use the Video PLL for LDB and PLL3 for DSI then?

I think this could still be configurable per board; it shouldn't be such
that one board which attempts to showcase everything prevents other
boards with specific requirements from achieving theirs.

> You probably may use separate PLLs for a particular i.MX8MP platform
> with limited features, but not for i.MX8MP EVK which is supposed to
> evaluate all SoC features.
Right, that, exactly.

[...]
Liu Ying Oct. 23, 2024, 5:03 a.m. UTC | #9
On 10/23/2024, Marek Vasut wrote:
> On 10/22/24 7:59 AM, Liu Ying wrote:
> 
> [...]
> 
>>>> Anyway, I don't think it is necessary to manage the clk_set_rate()
>>>> function calls between this driver and mxsfb_kms or lcdif_kms
>>>> because "video_pll1" clock rate is supposed to be assigned in DT...
>>>
>>> I disagree with this part. I believe the assignment of clock in DT is only a temporary workaround which should be removed. The drivers should be able to figure out and set the clock tree configuration.
>>
>> I think the clock rate assignment in DT is still needed.
>> A good reason is that again we need to share one video PLL
>> between MIPI DSI and LDB display pipelines for i.MX8MP.
> 
> You don't really need to share the Video PLL , you can free up e.g. PLL3 and use it for one video output pipeline, and use the Video PLL for the other video pipeline, and then you get accurate pixel clock in both pipelines.

I need to share the video PLL. PLL3 is used as the audio AXI clock's
parent in the NXP downstream kernel for i.MX8MP EVK (Nominal Drive Mode)
and derives an audio AXI clock running at 600MHz, which is the nominal
audio AXI clock rate mentioned in the i.MX8MP chip data sheet.

https://github.com/nxp-imx/linux-imx/blob/lf-6.6.y/arch/arm64/boot/dts/freescale/imx8mp-evk-ndm.dts#L19
https://github.com/nxp-imx/linux-imx/blob/lf-6.6.y/arch/arm64/boot/dts/freescale/imx8mp-ddr4-evk.dts#L25

The upstream kernel currently uses PLL1 800M as the audio AXI clock's
parent. Although the audio AXI clock is assigned to 600MHz, the actual
rate reported by clock summary is 400MHz (not the nominal rate). So,
the audio AXI clock's parent is supposed to be changed to PLL3.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/boot/dts/freescale/imx8mp.dtsi?h=v6.12-rc4#n815

    sys_pll1_ref_sel                 1       1        0        24000000    0          0     50000      Y      deviceless                      no_connection_id         
       sys_pll1                      1       1        0        800000000   0          0     50000      Y         deviceless                      no_connection_id         
          sys_pll1_bypass            1       1        0        800000000   0          0     50000      Y            deviceless                      no_connection_id         
             sys_pll1_out            4       4        0        800000000   0          0     50000      Y               deviceless                      no_connection_id         
                sys_pll1_800m        5       6        0        800000000   0          0     50000      Y                  deviceless                      no_connection_id             
                   audio_axi         1       1        0        400000000   0          0     50000      Y                     power-domain@5                  no_connection_id         
                                                                                                                             deviceless                      no_connection_id         
                      audio_axi_root 0       0        0        400000000   0          0     50000      Y                        deviceless                      no_connection_id            

All other clocks in imx8mp_media_disp_pix_sels[] are not appropriate
to be used by display pipelines, except "video_pll1_out", at least
for i.MX8MP EVK.

> 
>>>>>> The idea is to assign a reasonable PLL clock rate in DT to make
>>>>>> display drivers' life easier, especially for i.MX8MP where LDB,
>>>>>> Samsung MIPI DSI may use a single PLL at the same time.
>>>>> I would really like to avoid setting arbitrary clock in DT, esp. if it can be avoided. And it surely can be avoided for this simple use case.
>>>>
>>>> ... just like I said in patch 1/2, "video_pll1" clock rate needs
>>>> to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
>>>> an assigned "video_pll1" clock rate in DT, this driver cannot
>>>> achieve that.
>>>
>>> This is something the LDB driver can infer from DT and configure the clock tree accordingly.
>>
>> Well, the LDB driver only controls the "ldb" clock rate. It doesn't
>> magically set the parent "video_pll1" clock's rate to 2x it's rate,
>> unless the driver gets "video_pll1_out" clock by calling
>> clk_get_parent() and directly controls the PLL clock rate which
>> doesn't look neat.
> 
> It isn't nice, but it actually may solve this problem, no ?

Not nice, but the driver may actually call clk_set_rate() directly
for "video_pll1_out".

> 
>>>> And, the i.MX8MP LDB + Samsung MIPI DSI case is
>>>> not simple considering using one single PLL and display modes
>>>> read from EDID.
>>> You could use separate PLLs for each LCDIF scanout engine in such a deployment, I already ran into that, so I am aware of it. That is probably the best way out of such a problem, esp. if accurate pixel clock are the requirement.
>>
>> I cannot use separate PLLs for the i.MX8MP LDB and Samsung MIPI
>> DSI display pipelines on i.MX8MP EVK, because the PLLs are limited
>> resources and we are running out of it.  Because LDB needs the pixel
>> clock and LVDS serial clock to be derived from a same PLL, the only
>> valid PLLs(see imx8mp_media_disp_pix_sels[] and
>> imx8mp_media_ldb_sels[]) are "video_pll1_out", "audio_pll2_out",
>> "sys_pll2_1000m" and "sys_pll1_800m".  All are used as either audio
>> clock or system clocks on i.MX8MP EVK, except "video_pll1_out".
> 
> Could you use Video PLL for LDB and PLL3 for DSI then ?

No, I can't, as I explained above - PLL3 is supposed to be used as
audio AXI clock's parent to achieve the nominal 600MHz clock rate
for audio AXI clock.

> 
> I think this could still be configurable per board, it shouldn't be such that one board which attempts to showcase everything would prevent other boards with specific requirements from achieving those.

You probably may set the "ldb" clock rate in this driver and
additionally (un-nicely) set its parent clock rate (the video PLL rate
for i.MX8MP) for dual-link LVDS use cases. But, due to the shared PLL,
it doesn't look OK to set the CLK_SET_RATE_PARENT flag for "media_ldb"
as patch 1 does.

> 
>> You probably may use separate PLLs for a particular i.MX8MP platform
>> with limited features, but not for i.MX8MP EVK which is supposed to
>> evaluate all SoC features.
> Right, that, exactly.
> 
> [...]
>

Patch

diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
index 0e4bac7dd04ff..a3a31467fcc85 100644
--- a/drivers/gpu/drm/bridge/fsl-ldb.c
+++ b/drivers/gpu/drm/bridge/fsl-ldb.c
@@ -278,6 +278,16 @@  fsl_ldb_mode_valid(struct drm_bridge *bridge,
 	return MODE_OK;
 }
 
+static void fsl_ldb_mode_set(struct drm_bridge *bridge,
+			       const struct drm_display_mode *mode,
+			       const struct drm_display_mode *adj)
+{
+	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
+	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
+
+	clk_set_rate(fsl_ldb->clk, requested_link_freq);
+}
+
 static const struct drm_bridge_funcs funcs = {
 	.attach = fsl_ldb_attach,
 	.atomic_enable = fsl_ldb_atomic_enable,
@@ -287,6 +297,7 @@  static const struct drm_bridge_funcs funcs = {
 	.atomic_get_input_bus_fmts = fsl_ldb_atomic_get_input_bus_fmts,
 	.atomic_reset = drm_atomic_helper_bridge_reset,
 	.mode_valid = fsl_ldb_mode_valid,
+	.mode_set = fsl_ldb_mode_set,
 };
 
 static int fsl_ldb_probe(struct platform_device *pdev)