Message ID | 20241008223846.337162-2-marex@denx.de (mailing list archive)
---|---
State | New
Series | [1/2] clk: imx: clk-imx8mp: Allow LDB serializer clock reconfigure parent rate
On Wed, 2024-10-09 at 00:38 +0200, Marek Vasut wrote:
> The LDB serializer clock operate at either x7 or x14 rate of the input
> LCDIFv3 scanout engine clock. Make sure the serializer clock and their
> upstream Video PLL are configured early in .mode_set to the x7 or x14
> rate of pixel clock, before LCDIFv3 .atomic_enable is called which would
> configure the Video PLL to low x1 rate, which is unusable.
>
> With this patch in place, the clock tree is correctly configured. The
> example below is for a 71.1 MHz pixel clock panel, the LDB serializer
> clock is then 497.7 MHz:

Awesome! Thank you for this, this seems to fix the regression and the
patches work as expected. I have tested both patches on v6.12-rc2 and
the display works well.

For both patches,

Tested-by: Isaac Scott <isaac.scott@ideasonboard.com>

> [...]
On 10/9/24 12:27 PM, Isaac Scott wrote:
> On Wed, 2024-10-09 at 00:38 +0200, Marek Vasut wrote:
>> The LDB serializer clock operate at either x7 or x14 rate of the input
>> LCDIFv3 scanout engine clock. Make sure the serializer clock and their
>> upstream Video PLL are configured early in .mode_set to the x7 or x14
>> rate of pixel clock, before LCDIFv3 .atomic_enable is called which would
>> configure the Video PLL to low x1 rate, which is unusable.
>>
>> With this patch in place, the clock tree is correctly configured. The
>> example below is for a 71.1 MHz pixel clock panel, the LDB serializer
>> clock is then 497.7 MHz:
>
> Awesome! Thank you for this, this seems to fix the regression and the
> patches work as expected. I have tested both patches on v6.12-rc2 and
> the display works well.
>
> For both patches,
>
> Tested-by: Isaac Scott <isaac.scott@ideasonboard.com>

Thank you for testing, but this patch feels too much like a feature
development to me. Does the DT tweak I suggested also fix your issue?
If so, I would really like that DT tweak to be the fix for current
release and these two patches be feature development for 6.13 cycle.

What do you think ?
On 10/09/2024, Marek Vasut wrote:
> The LDB serializer clock operate at either x7 or x14 rate of the input

Isn't it either x7 or 3.5x?

s/operate/operates/

> LCDIFv3 scanout engine clock. Make sure the serializer clock and their
> upstream Video PLL are configured early in .mode_set to the x7 or x14
> rate of pixel clock, before LCDIFv3 .atomic_enable is called which would
> configure the Video PLL to low x1 rate, which is unusable.
>
> [...]
>
> +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
> +			     const struct drm_display_mode *mode,
> +			     const struct drm_display_mode *adj)
> +{
> +	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
> +	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
> +
> +	clk_set_rate(fsl_ldb->clk, requested_link_freq);

The mode_set callback won't be called when only crtc_state->active
is changed from false to true in an atomic commit, e.g., blanking
the emulated fbdev first and then unblanking it. So, in this case,
the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
those from mxsfb_kms or lcdif_kms.

Also, it doesn't look neat to call clk_set_rate() from both mode_set
callback and atomic_enable callback.

The idea is to assign a reasonable PLL clock rate in DT to make
display drivers' life easier, especially for i.MX8MP where LDB,
Samsung MIPI DSI may use a single PLL at the same time.

> +}
> +
> [...]
On 10/10/24 9:15 AM, Liu Ying wrote:
> On 10/09/2024, Marek Vasut wrote:
>> The LDB serializer clock operate at either x7 or x14 rate of the input
>
> Isn't it either x7 or 3.5x?

Is it 3.5 for the dual-link LVDS ? I don't have such a panel right now
to test.

[...]

>> +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
>> +			     const struct drm_display_mode *mode,
>> +			     const struct drm_display_mode *adj)
>> +{
>> +	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
>> +	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
>> +
>> +	clk_set_rate(fsl_ldb->clk, requested_link_freq);
>
> The mode_set callback won't be called when only crtc_state->active
> is changed from false to true in an atomic commit, e.g., blanking
> the emulated fbdev first and then unblanking it. So, in this case,
> the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
> those from mxsfb_kms or lcdif_kms.
>
> Also, it doesn't look neat to call clk_set_rate() from both mode_set
> callback and atomic_enable callback.

I agree the mode_set callback is not the best place for this.
Do you know of a better callback where to do this ? I couldn't find one.

> The idea is to assign a reasonable PLL clock rate in DT to make
> display drivers' life easier, especially for i.MX8MP where LDB,
> Samsung MIPI DSI may use a single PLL at the same time.

I would really like to avoid setting arbitrary clock in DT, esp. if it
can be avoided. And it surely can be avoided for this simple use case.
On 10/11/2024, Marek Vasut wrote:
> On 10/10/24 9:15 AM, Liu Ying wrote:
>> On 10/09/2024, Marek Vasut wrote:
>>> The LDB serializer clock operate at either x7 or x14 rate of the input
>>
>> Isn't it either x7 or 3.5x?
>
> Is it 3.5 for the dual-link LVDS ?

Yes.

static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
{
	if (fsl_ldb_is_dual(fsl_ldb))
		return clock * 3500;
	else
		return clock * 7000;
}

> I don't have such a panel right now to test.

You can add a panel DT node for test to see the relationship between
the clocks, without a real dual-link LVDS panel.

> [...]
>
>>> +static void fsl_ldb_mode_set(struct drm_bridge *bridge,
>>> +			     const struct drm_display_mode *mode,
>>> +			     const struct drm_display_mode *adj)
>>> +{
>>> +	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
>>> +	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
>>> +
>>> +	clk_set_rate(fsl_ldb->clk, requested_link_freq);
>>
>> The mode_set callback won't be called when only crtc_state->active
>> is changed from false to true in an atomic commit, e.g., blanking
>> the emulated fbdev first and then unblanking it. So, in this case,
>> the clk_set_rate() in fsl_ldb_atomic_enable() is still called after
>> those from mxsfb_kms or lcdif_kms.
>>
>> Also, it doesn't look neat to call clk_set_rate() from both mode_set
>> callback and atomic_enable callback.
>
> I agree the mode_set callback is not the best place for this.
> Do you know of a better callback where to do this ? I couldn't find one.

A wild idea is to change the order between the CRTC atomic_enable
callback and the bridge one by implementing your own
atomic_commit_tail... I don't think there is any place to do this
other than atomic_enable callback.

Anyway, I don't think it is necessary to manage the clk_set_rate()
function calls between this driver and mxsfb_kms or lcdif_kms because
"video_pll1" clock rate is supposed to be assigned in DT...

>> The idea is to assign a reasonable PLL clock rate in DT to make
>> display drivers' life easier, especially for i.MX8MP where LDB,
>> Samsung MIPI DSI may use a single PLL at the same time.
>
> I would really like to avoid setting arbitrary clock in DT, esp. if it
> can be avoided. And it surely can be avoided for this simple use case.

... just like I said in patch 1/2, "video_pll1" clock rate needs
to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
an assigned "video_pll1" clock rate in DT, this driver cannot
achieve that. And, the i.MX8MP LDB + Samsung MIPI DSI case is
not simple considering using one single PLL and display modes
read from EDID.
On 10/11/24 8:49 AM, Liu Ying wrote:
> On 10/11/2024, Marek Vasut wrote:
>> On 10/10/24 9:15 AM, Liu Ying wrote:
>>> On 10/09/2024, Marek Vasut wrote:
>>>> The LDB serializer clock operate at either x7 or x14 rate of the input
>>>
>>> Isn't it either x7 or 3.5x?
>>
>> Is it 3.5 for the dual-link LVDS ?
>
> Yes.
>
> static unsigned long fsl_ldb_link_frequency(struct fsl_ldb *fsl_ldb, int clock)
> {
> 	if (fsl_ldb_is_dual(fsl_ldb))
> 		return clock * 3500;
> 	else
> 		return clock * 7000;
> }
>
>> I don't have such a panel right now to test.
>
> You can add a panel DT node for test to see the relationship
> between the clocks, without a real dual-link LVDS panel.

I'll take your word for this.

[...]

> A wild idea is to change the order between the CRTC atomic_enable
> callback and the bridge one by implementing your own
> atomic_commit_tail... I don't think there is any place to do this
> other than atomic_enable callback.

I will give that a try, thanks.

> Anyway, I don't think it is necessary to manage the clk_set_rate()
> function calls between this driver and mxsfb_kms or lcdif_kms
> because "video_pll1" clock rate is supposed to be assigned in DT...

I disagree with this part. I believe the assignment of clock in DT is
only a temporary workaround which should be removed. The drivers
should be able to figure out and set the clock tree configuration.

>>> The idea is to assign a reasonable PLL clock rate in DT to make
>>> display drivers' life easier, especially for i.MX8MP where LDB,
>>> Samsung MIPI DSI may use a single PLL at the same time.
>>
>> I would really like to avoid setting arbitrary clock in DT, esp. if it
>> can be avoided. And it surely can be avoided for this simple use case.
>
> ... just like I said in patch 1/2, "video_pll1" clock rate needs
> to be x2 "media_ldb" clock rate for dual LVDS link mode. Without
> an assigned "video_pll1" clock rate in DT, this driver cannot
> achieve that.

This is something the LDB driver can infer from DT and configure the
clock tree accordingly.

> And, the i.MX8MP LDB + Samsung MIPI DSI case is
> not simple considering using one single PLL and display modes
> read from EDID.

You could use separate PLLs for each LCDIF scanout engine in such a
deployment, I already ran into that, so I am aware of it. That is
probably the best way out of such a problem, esp. if accurate pixel
clock are the requirement.

[...]
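[Editorial note: for readers unfamiliar with the DT approach being debated, it would look roughly like the fragment below. This is an illustrative sketch, not a patch from this thread; the clock macros follow the mainline i.MX8MP dt-bindings, but the chosen node and rates are assumptions tied to the 71.1 MHz example panel and would be board/panel specific in practice.]

```dts
/* Sketch of the "assign a PLL rate in DT" approach under discussion:
 * parent media_ldb on video_pll1_out and pin the PLL to a rate the
 * LDB can use directly. Rates here are illustrative assumptions:
 * x7 of a 71.1 MHz pixel clock = 497.7 MHz (also x2 of the x3.5
 * dual-link rate). */
&media_blk_ctrl {
	assigned-clocks = <&clk IMX8MP_CLK_MEDIA_LDB>,
			  <&clk IMX8MP_VIDEO_PLL1>;
	assigned-clock-parents = <&clk IMX8MP_VIDEO_PLL1_OUT>;
	assigned-clock-rates = <0>, <497700000>;
};
```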
diff --git a/drivers/gpu/drm/bridge/fsl-ldb.c b/drivers/gpu/drm/bridge/fsl-ldb.c
index 0e4bac7dd04ff..a3a31467fcc85 100644
--- a/drivers/gpu/drm/bridge/fsl-ldb.c
+++ b/drivers/gpu/drm/bridge/fsl-ldb.c
@@ -278,6 +278,16 @@ fsl_ldb_mode_valid(struct drm_bridge *bridge,
 	return MODE_OK;
 }
 
+static void fsl_ldb_mode_set(struct drm_bridge *bridge,
+			     const struct drm_display_mode *mode,
+			     const struct drm_display_mode *adj)
+{
+	struct fsl_ldb *fsl_ldb = to_fsl_ldb(bridge);
+	unsigned long requested_link_freq = fsl_ldb_link_frequency(fsl_ldb, mode->clock);
+
+	clk_set_rate(fsl_ldb->clk, requested_link_freq);
+}
+
 static const struct drm_bridge_funcs funcs = {
 	.attach = fsl_ldb_attach,
 	.atomic_enable = fsl_ldb_atomic_enable,
@@ -287,6 +297,7 @@ static const struct drm_bridge_funcs funcs = {
 	.atomic_get_input_bus_fmts = fsl_ldb_atomic_get_input_bus_fmts,
 	.atomic_reset = drm_atomic_helper_bridge_reset,
 	.mode_valid = fsl_ldb_mode_valid,
+	.mode_set = fsl_ldb_mode_set,
 };
 
 static int fsl_ldb_probe(struct platform_device *pdev)
The LDB serializer clock operate at either x7 or x14 rate of the input
LCDIFv3 scanout engine clock. Make sure the serializer clock and their
upstream Video PLL are configured early in .mode_set to the x7 or x14
rate of pixel clock, before LCDIFv3 .atomic_enable is called which would
configure the Video PLL to low x1 rate, which is unusable.

With this patch in place, the clock tree is correctly configured. The
example below is for a 71.1 MHz pixel clock panel, the LDB serializer
clock is then 497.7 MHz:

    video_pll1_ref_sel          1    1    0     24000000  0  0  50000
    video_pll1                  1    1    0    497700000  0  0  50000
    video_pll1_bypass           1    1    0    497700000  0  0  50000
    video_pll1_out              2    2    0    497700000  0  0  50000
    media_ldb                   1    1    0    497700000  0  0  50000
    media_ldb_root_clk          1    1    0    497700000  0  0  50000
    media_disp2_pix             1    1    0     71100000  0  0  50000
    media_disp2_pix_root_clk    1    1    0     71100000  0  0  50000

Signed-off-by: Marek Vasut <marex@denx.de>
---
Cc: Abel Vesa <abelvesa@kernel.org>
Cc: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: David Airlie <airlied@gmail.com>
Cc: Fabio Estevam <festevam@gmail.com>
Cc: Isaac Scott <isaac.scott@ideasonboard.com>
Cc: Jernej Skrabec <jernej.skrabec@gmail.com>
Cc: Jonas Karlman <jonas@kwiboo.se>
Cc: Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Michael Turquette <mturquette@baylibre.com>
Cc: Neil Armstrong <neil.armstrong@linaro.org>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
Cc: Robert Foss <rfoss@kernel.org>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Shawn Guo <shawnguo@kernel.org>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: dri-devel@lists.freedesktop.org
Cc: imx@lists.linux.dev
Cc: kernel@dh-electronics.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-clk@vger.kernel.org
---
 drivers/gpu/drm/bridge/fsl-ldb.c | 11 +++++++++++
 1 file changed, 11 insertions(+)