
[2/4] clk: qcom: Add clk_rcg2_gfx3d_ops for SDM845

Message ID 1528285308-25477-3-git-send-email-anischal@codeaurora.org (mailing list archive)
State Superseded, archived
Delegated to: Andy Gross

Commit Message

Amit Nischal June 6, 2018, 11:41 a.m. UTC
To turn on the gpu_gx_gdsc, there is a hardware requirement to
turn on the root clock (GFX3D RCG) first; together with the
SW_COLLAPSE write, the root acts as the turn-on signal for the
GDSC. The current implementation of clk_rcg2_shared_ops clears
the root_enable bit in the enable() and set_rate() clock ops.
But due to the above requirement for the GFX3D shared RCG, the
root_enable bit will already have been set by the GDSC driver,
and the shared ops must not clear the root until disable is called.

Support this by reusing the existing clk_rcg2_shared_ops to
derive "clk_rcg2_gfx3d_ops" clk_ops for the GFX3D clock, which
take care of the root set/clear requirement.

Signed-off-by: Amit Nischal <anischal@codeaurora.org>
---
 drivers/clk/qcom/clk-rcg.h  |  1 +
 drivers/clk/qcom/clk-rcg2.c | 78 +++++++++++++++++++++++++++++++++------------
 2 files changed, 58 insertions(+), 21 deletions(-)

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

--
To unsubscribe from this list: send the line "unsubscribe linux-arm-msm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Stephen Boyd July 9, 2018, 6:15 a.m. UTC | #1
Quoting Amit Nischal (2018-06-06 04:41:46)
> To turn on the gpu_gx_gdsc, there is a hardware requirement to
> turn on the root clock (GFX3D RCG) first which would be the turn
> on signal for the gdsc along with the SW_COLLAPSE. As per the
> current implementation of clk_rcg2_shared_ops, it clears the
> root_enable bit in the enable() and set_rate() clock ops. But due
> to the above said requirement for GFX3D shared RCG, root_enable bit
> would be already set by gdsc driver and rcg2_shared_ops should not clear
> the root unless the disable is called.
> 

It sounds like the GDSC enable is ANDed with the RCG's root enable
bit? Does the RCG need to be clocking for the GDSC to actually turn on?
Or is it purely that the enable bit is logically combined that way so
that if the RCG is parented to a PLL that's off the GDSC will still turn
on?

> Add support for the same by reusing the existing clk_rcg2_shared_ops
> and deriving "clk_rcg2_gfx3d_ops" clk_ops for GFX3D clock to
> take care of the root set/clear requirement.

Anyway, this patch will probably significantly change given that the RCG
is a glorified div-2 that muxes between a safe CXO speed and a
configurable PLL frequency. A lot of the logic can probably just be
hardcoded then.

> 
> Signed-off-by: Amit Nischal <anischal@codeaurora.org>

Patch looks sane.
Amit Nischal July 12, 2018, 12:30 p.m. UTC | #2
On 2018-07-09 11:45, Stephen Boyd wrote:
> Quoting Amit Nischal (2018-06-06 04:41:46)
>> To turn on the gpu_gx_gdsc, there is a hardware requirement to
>> turn on the root clock (GFX3D RCG) first which would be the turn
>> on signal for the gdsc along with the SW_COLLAPSE. As per the
>> current implementation of clk_rcg2_shared_ops, it clears the
>> root_enable bit in the enable() and set_rate() clock ops. But due
>> to the above said requirement for GFX3D shared RCG, root_enable bit
>> would be already set by gdsc driver and rcg2_shared_ops should not 
>> clear
>> the root unless the disable is called.
>> 
> 
> It sounds like the GDSC enable is ANDed with the RCG's root enable
> bit?

Here, the root clock (GFX3D RCG) needs to be enabled first,
before writing to the SW_COLLAPSE bit of the GDSC. The RCG's
CLK_ON signal powers up the GDSC.

> Does the RCG need to be clocking for the GDSC to actually turn on?
> Or is it purely that the enable bit is logically combined that way so
> that if the RCG is parented to a PLL that's off the GDSC will still 
> turn
> on?
> 

Yes, the RCG needs to be clocking for the GPU_GX GDSC to actually
turn on.
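The ordering described here (force the RCG root on first, only then release SW_COLLAPSE, because the RCG's CLK_ON is what powers the GDSC up) can be sketched as a standalone sequence. This is an illustrative model only; the register names, bit positions, and helper names below are hypothetical stand-ins, not the actual gdsc.c or clk-rcg2.c code.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative register model: CMD_ROOT_EN on the RCG's CMD register,
 * SW_COLLAPSE on the GDSC register. Bit positions are hypothetical. */
#define CMD_ROOT_EN  (1u << 1)
#define SW_COLLAPSE  (1u << 0)

static uint32_t rcg_cmd_reg;
static uint32_t gdsc_reg = SW_COLLAPSE; /* GDSC starts collapsed */
static bool gdsc_powered;

/* The GDSC only powers up if the RCG's root (CLK_ON) is asserted
 * at the moment SW_COLLAPSE is cleared. */
static void gdsc_hw_update(void)
{
	if (gdsc_reg & SW_COLLAPSE)
		gdsc_powered = false;
	else if (rcg_cmd_reg & CMD_ROOT_EN)
		gdsc_powered = true;
}

static int gpu_gx_gdsc_enable(void)
{
	/* 1. Force the GFX3D root on first (the role played by
	 *    clk_rcg2_set_force_enable() in the real driver). */
	rcg_cmd_reg |= CMD_ROOT_EN;

	/* 2. Only then clear SW_COLLAPSE so CLK_ON can power the GDSC. */
	gdsc_reg &= ~SW_COLLAPSE;
	gdsc_hw_update();

	return gdsc_powered ? 0 : -1;
}
```

Reversing the two steps in this model leaves the GDSC unpowered, which is the hardware requirement the patch is working around.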

>> Add support for the same by reusing the existing clk_rcg2_shared_ops
>> and deriving "clk_rcg2_gfx3d_ops" clk_ops for GFX3D clock to
>> take care of the root set/clear requirement.
> 
> Anyway, this patch will probably significantly change given that the 
> RCG
> is a glorified div-2 that muxes between a safe CXO speed and a
> configurable PLL frequency. A lot of the logic can probably just be
> hardcoded then.
> 

Thanks for the suggestion.
We are planning to introduce a "clk_rcg2_gfx3d_determine_rate" op
which will always return the best parent as 'GPU_CC_PLL0_OUT_EVEN'
and a best_parent rate equal to twice the requested rate. With this
approach a frequency table for the rcg2 structure would not be
required, as all the supported frequencies would be derived from
the GPU_CC_PLL0_OUT_EVEN source with a divider of 2.

Also, we will add support to check whether the requested rate falls
within the allowed set_rate range. This will make sure that the
source PLL does not go out of its supported range. The set_rate
range would be set by the GPUCC driver with min/max values by
calling the below API:

clk_hw_set_rate_range(&gpu_cc_gx_gfx3d_clk_src.clkr.hw, 180000000,
		      710000000);
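The proposed determine_rate behavior (always pick GPU_CC_PLL0_OUT_EVEN and request twice the rate from it, so no frequency table is needed) reduces to a small calculation. A standalone sketch of that calculation follows; the function name and the rejection of out-of-range rates are assumptions for illustration, not the eventual kernel op, which would operate on a struct clk_rate_request.

```c
#include <assert.h>

/* Limits from the clk_hw_set_rate_range() call discussed above. */
#define GFX3D_MIN_RATE 180000000UL
#define GFX3D_MAX_RATE 710000000UL
#define GFX3D_SRC_DIV  2UL  /* fixed div-2 from GPU_CC_PLL0_OUT_EVEN */

/* Hypothetical stand-in for clk_rcg2_gfx3d_determine_rate(): refuse
 * rates outside the set_rate range, otherwise report the best parent
 * (PLL) rate as twice the requested rate. Returns -1 on error. */
static long gfx3d_determine_parent_rate(unsigned long rate)
{
	if (rate < GFX3D_MIN_RATE || rate > GFX3D_MAX_RATE)
		return -1;

	return (long)(rate * GFX3D_SRC_DIV);
}
```

For example, a 300 MHz GFX3D request would ask GPU_CC_PLL0_OUT_EVEN for 600 MHz, while a 100 MHz request falls below the range and is refused.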

>> 
>> Signed-off-by: Amit Nischal <anischal@codeaurora.org>
> 
> Patch looks sane.
Stephen Boyd July 25, 2018, 6:58 a.m. UTC | #3
Quoting Amit Nischal (2018-07-12 05:30:21)
> On 2018-07-09 11:45, Stephen Boyd wrote:
> > Quoting Amit Nischal (2018-06-06 04:41:46)
> >> To turn on the gpu_gx_gdsc, there is a hardware requirement to
> >> turn on the root clock (GFX3D RCG) first which would be the turn
> >> on signal for the gdsc along with the SW_COLLAPSE. As per the
> >> current implementation of clk_rcg2_shared_ops, it clears the
> >> root_enable bit in the enable() and set_rate() clock ops. But due
> >> to the above said requirement for GFX3D shared RCG, root_enable bit
> >> would be already set by gdsc driver and rcg2_shared_ops should not 
> >> clear
> >> the root unless the disable is called.
> >> 
> > 
> > It sounds like the GDSC enable is ANDed with the RCG's root enable
> > bit?
> 
> Here, the root clock (GFX3D RCG) needs to be enabled first before
> writing to SW_COLLAPSE bit of the GDSC. RCG's CLK_ON signal would
> power up the GDSC.
> 
> > Does the RCG need to be clocking for the GDSC to actually turn on?
> > Or is it purely that the enable bit is logically combined that way so
> > that if the RCG is parented to a PLL that's off the GDSC will still 
> > turn
> > on?
> > 
> 
> Yes, the RCG needs to be clocking for the GPU_GX GDSC to actually turn 
> on.

Cool, please add these details to the commit text.

> 
> >> Add support for the same by reusing the existing clk_rcg2_shared_ops
> >> and deriving "clk_rcg2_gfx3d_ops" clk_ops for GFX3D clock to
> >> take care of the root set/clear requirement.
> > 
> > Anyway, this patch will probably significantly change given that the 
> > RCG
> > is a glorified div-2 that muxes between a safe CXO speed and a
> > configurable PLL frequency. A lot of the logic can probably just be
> > hardcoded then.
> > 
> 
> Thanks for the suggestion.
> We are planning to introduce the "clk_rcg2_gfx3d_determine_rate" op 
> which will
> always return the best parent as ‘GPU_CC_PLL0_OUT_EVEN’ and best_parent
> rate equal to twice of the requested rate. With this approach frequency 
> table
> for rcg2 structure would not be required as all the supported 
> frequencies would
> be derived from the GPU_CC_PLL0_OUT_EVEN src with a divider of 2.
> 
> Also, we will add support to check the requested rate if falls within 
> allowed
> set_rate range. This will make sure that the source PLL would not go out 
> of
> the supported range. set_rate_range would be set by the GPUCC driver 
> with min/max
> value by calling below API.
> 
> clk_hw_set_rate_range(&gpu_cc_gx_gfx3d_clk_src.clkr.hw, 180000000, 
> 710000000)

Ok. Sounds good! Is the rate range call really needed? It can't be
determined in the PLL code with some table or avoided by making sure GPU
uses OPP table with only approved frequencies?

Amit Nischal July 30, 2018, 11:28 a.m. UTC | #4
On 2018-07-25 12:28, Stephen Boyd wrote:
> Quoting Amit Nischal (2018-07-12 05:30:21)
>> On 2018-07-09 11:45, Stephen Boyd wrote:
>> > Quoting Amit Nischal (2018-06-06 04:41:46)
>> >> To turn on the gpu_gx_gdsc, there is a hardware requirement to
>> >> turn on the root clock (GFX3D RCG) first which would be the turn
>> >> on signal for the gdsc along with the SW_COLLAPSE. As per the
>> >> current implementation of clk_rcg2_shared_ops, it clears the
>> >> root_enable bit in the enable() and set_rate() clock ops. But due
>> >> to the above said requirement for GFX3D shared RCG, root_enable bit
>> >> would be already set by gdsc driver and rcg2_shared_ops should not
>> >> clear
>> >> the root unless the disable is called.
>> >>
>> >
>> > It sounds like the GDSC enable is ANDed with the RCG's root enable
>> > bit?
>> 
>> Here, the root clock (GFX3D RCG) needs to be enabled first before
>> writing to SW_COLLAPSE bit of the GDSC. RCG's CLK_ON signal would
>> power up the GDSC.
>> 
>> > Does the RCG need to be clocking for the GDSC to actually turn on?
>> > Or is it purely that the enable bit is logically combined that way so
>> > that if the RCG is parented to a PLL that's off the GDSC will still
>> > turn
>> > on?
>> >
>> 
>> Yes, the RCG needs to be clocking for the GPU_GX GDSC to actually turn
>> on.
> 
> Cool, please add these details to the commit text.

Thanks. I will add these details to the commit text in the next
patch series.

> 
>> 
>> >> Add support for the same by reusing the existing clk_rcg2_shared_ops
>> >> and deriving "clk_rcg2_gfx3d_ops" clk_ops for GFX3D clock to
>> >> take care of the root set/clear requirement.
>> >
>> > Anyway, this patch will probably significantly change given that the
>> > RCG
>> > is a glorified div-2 that muxes between a safe CXO speed and a
>> > configurable PLL frequency. A lot of the logic can probably just be
>> > hardcoded then.
>> >
>> 
>> Thanks for the suggestion.
>> We are planning to introduce the "clk_rcg2_gfx3d_determine_rate" op
>> which will
>> always return the best parent as ‘GPU_CC_PLL0_OUT_EVEN’ and 
>> best_parent
>> rate equal to twice of the requested rate. With this approach 
>> frequency
>> table
>> for rcg2 structure would not be required as all the supported
>> frequencies would
>> be derived from the GPU_CC_PLL0_OUT_EVEN src with a divider of 2.
>> 
>> Also, we will add support to check the requested rate if falls within
>> allowed
>> set_rate range. This will make sure that the source PLL would not go 
>> out
>> of
>> the supported range. set_rate_range would be set by the GPUCC driver
>> with min/max
>> value by calling below API.
>> 
>> clk_hw_set_rate_range(&gpu_cc_gx_gfx3d_clk_src.clkr.hw, 180000000,
>> 710000000)
> 
> Ok. Sounds good! Is the rate range call really needed? It can't be
> determined in the PLL code with some table or avoided by making sure 
> GPU
> uses OPP table with only approved frequencies?
> 

Currently the fabia PLL code does not have any table to check this,
and the intention was to avoid relying on the client to call
set_rate with only approved frequencies, so we have added the
set_rate_range() call in the GPUCC driver in order to set the
rate range.

Stephen Boyd Aug. 2, 2018, 10:44 p.m. UTC | #5
Quoting Amit Nischal (2018-07-30 04:28:56)
> On 2018-07-25 12:28, Stephen Boyd wrote:
> > 
> > Ok. Sounds good! Is the rate range call really needed? It can't be
> > determined in the PLL code with some table or avoided by making sure 
> > GPU
> > uses OPP table with only approved frequencies?
> > 
> 
> Currently fabia PLL code does not have any table to check this and 
> intention
> was to avoid relying on the client to call set_rate with only approved
> frequencies so we have added the set_rate_range() call in the GPUCC 
> driver
> in order to set the rate range.
> 

But GPU will use OPP so it doesn't seem like it really buys us anything
here. And it really doesn't matter when the clk driver implementation
doesn't use the min/max to clamp the values of the round_rate() call. Is
that being done here? I need to double check. I would be more convinced
if the implementation was looking at min/max to constrain the rate
requested.

Amit Nischal Aug. 6, 2018, 9:07 a.m. UTC | #6
On 2018-08-03 04:14, Stephen Boyd wrote:
> Quoting Amit Nischal (2018-07-30 04:28:56)
>> On 2018-07-25 12:28, Stephen Boyd wrote:
>> >
>> > Ok. Sounds good! Is the rate range call really needed? It can't be
>> > determined in the PLL code with some table or avoided by making sure
>> > GPU
>> > uses OPP table with only approved frequencies?
>> >
>> 
>> Currently fabia PLL code does not have any table to check this and
>> intention
>> was to avoid relying on the client to call set_rate with only approved
>> frequencies so we have added the set_rate_range() call in the GPUCC
>> driver
>> in order to set the rate range.
>> 
> 
> But GPU will use OPP so it doesn't seem like it really buys us anything
> here. And it really doesn't matter when the clk driver implementation
> doesn't use the min/max to clamp the values of the round_rate() call. 
> Is
> that being done here? I need to double check. I would be more convinced
> if the implementation was looking at min/max to constrain the rate
> requested.
> 

So our understanding is that the GPU (client) driver will always
call set_rate with approved frequencies only, and we can completely
rely on the client. Is our understanding correct?

Jordan Crouse Aug. 6, 2018, 3:04 p.m. UTC | #7
On Mon, Aug 06, 2018 at 02:37:18PM +0530, Amit Nischal wrote:
> On 2018-08-03 04:14, Stephen Boyd wrote:
> >Quoting Amit Nischal (2018-07-30 04:28:56)
> >>On 2018-07-25 12:28, Stephen Boyd wrote:
> >>>
> >>> Ok. Sounds good! Is the rate range call really needed? It can't be
> >>> determined in the PLL code with some table or avoided by making sure
> >>> GPU
> >>> uses OPP table with only approved frequencies?
> >>>
> >>
> >>Currently fabia PLL code does not have any table to check this and
> >>intention
> >>was to avoid relying on the client to call set_rate with only approved
> >>frequencies so we have added the set_rate_range() call in the GPUCC
> >>driver
> >>in order to set the rate range.
> >>
> >
> >But GPU will use OPP so it doesn't seem like it really buys us anything
> >here. And it really doesn't matter when the clk driver implementation
> >doesn't use the min/max to clamp the values of the round_rate()
> >call. Is
> >that being done here? I need to double check. I would be more convinced
> >if the implementation was looking at min/max to constrain the rate
> >requested.
> >
> 
> So our understanding is that GPU(client) driver will always call the
> set_rate with approved frequencies only and we can completely rely
> on the
> client. Is our understanding is correct?


First: on sdm845 the software doesn't set the GPU clocks - we rely on the GMU
firmware to do that on our behalf but for the GPU at least this is an academic
exercise.

But that said: traditionally we've expected that the clock driver correctly
clamp the requested rate to the correct values. In the past we have taken
advantage of this and we may in the future. I don't think it is reasonable
to require the leaf driver to only pass "approved" frequencies especially
since we depend on our own OPP table that may or may not be similar to the
one used by the clock driver.

Jordan
Stephen Boyd Aug. 8, 2018, 5:58 a.m. UTC | #8
Quoting Jordan Crouse (2018-08-06 08:04:37)
> On Mon, Aug 06, 2018 at 02:37:18PM +0530, Amit Nischal wrote:
> > On 2018-08-03 04:14, Stephen Boyd wrote:
> > >Quoting Amit Nischal (2018-07-30 04:28:56)
> > >>On 2018-07-25 12:28, Stephen Boyd wrote:
> > >>>
> > >>> Ok. Sounds good! Is the rate range call really needed? It can't be
> > >>> determined in the PLL code with some table or avoided by making sure
> > >>> GPU
> > >>> uses OPP table with only approved frequencies?
> > >>>
> > >>
> > >>Currently fabia PLL code does not have any table to check this and
> > >>intention
> > >>was to avoid relying on the client to call set_rate with only approved
> > >>frequencies so we have added the set_rate_range() call in the GPUCC
> > >>driver
> > >>in order to set the rate range.
> > >>
> > >
> > >But GPU will use OPP so it doesn't seem like it really buys us anything
> > >here. And it really doesn't matter when the clk driver implementation
> > >doesn't use the min/max to clamp the values of the round_rate()
> > >call. Is
> > >that being done here? I need to double check. I would be more convinced
> > >if the implementation was looking at min/max to constrain the rate
> > >requested.
> > >
> > 
> > So our understanding is that GPU(client) driver will always call the
> > set_rate with approved frequencies only and we can completely rely
> > on the
> > client. Is our understanding is correct?
> 
> 
> First: on sdm845 the software doesn't set the GPU clocks - we rely on the GMU
> firmware to do that on our behalf but for the GPU at least this is an academic
> exercise.

So what is this GPU clk driver for then?

> 
> But that said: traditionally we've expected that the clock driver correctly
> clamp the requested rate to the correct values. In the past we have taken
> advantage of this and we may in the future. I don't think it is reasonable
> to require the leaf driver to only pass "approved" frequencies especially
> since we depend on our own OPP table that may or may not be similar to the
> one used by the clock driver.
> 

Ok. Sounds like things can't be kept in sync between the clk driver and
the OPP tables. Why is that hard to do?

Either way, I'd be fine if the code actually used the frequency limits
to round the rate to something within range, but I don't recall seeing
that being done here. So if the min/max limits stay, the clk driver
should round to within that range.

Jordan Crouse Aug. 8, 2018, 2:51 p.m. UTC | #9
On Tue, Aug 07, 2018 at 10:58:19PM -0700, Stephen Boyd wrote:
> Quoting Jordan Crouse (2018-08-06 08:04:37)
> > On Mon, Aug 06, 2018 at 02:37:18PM +0530, Amit Nischal wrote:
> > > On 2018-08-03 04:14, Stephen Boyd wrote:
> > > >Quoting Amit Nischal (2018-07-30 04:28:56)
> > > >>On 2018-07-25 12:28, Stephen Boyd wrote:
> > > >>>
> > > >>> Ok. Sounds good! Is the rate range call really needed? It can't be
> > > >>> determined in the PLL code with some table or avoided by making sure
> > > >>> GPU
> > > >>> uses OPP table with only approved frequencies?
> > > >>>
> > > >>
> > > >>Currently fabia PLL code does not have any table to check this and
> > > >>intention
> > > >>was to avoid relying on the client to call set_rate with only approved
> > > >>frequencies so we have added the set_rate_range() call in the GPUCC
> > > >>driver
> > > >>in order to set the rate range.
> > > >>
> > > >
> > > >But GPU will use OPP so it doesn't seem like it really buys us anything
> > > >here. And it really doesn't matter when the clk driver implementation
> > > >doesn't use the min/max to clamp the values of the round_rate()
> > > >call. Is
> > > >that being done here? I need to double check. I would be more convinced
> > > >if the implementation was looking at min/max to constrain the rate
> > > >requested.
> > > >
> > > 
> > > So our understanding is that GPU(client) driver will always call the
> > > set_rate with approved frequencies only and we can completely rely
> > > on the
> > > client. Is our understanding is correct?
> > 
> > 
> > First: on sdm845 the software doesn't set the GPU clocks - we rely on the GMU
> > firmware to do that on our behalf but for the GPU at least this is an academic
> > exercise.
> 
> So what is this GPU clk driver for then?

That is a good question. There is a hodgepodge of clocks for the GMU, GPU and
SMMU and I'm not sure which ones are provided by the various clk drivers. This
isn't my area of expertise.

The GMU uses:

GPU_CC_CX_GMU_CLK
GPU_CC_CXO_CLK
GCC_DDRSS_GPU_AXI_CLK
GCC_GPU_MEMNOC_GFX_CLK

These are controlled by the GMU device.

The SMMU uses:

GCC_GPU_CFG_AHB_CLK
GCC_DDRSS_GPU_AXI_CLK
GCC_GPU_MEMNOC_GFX_CLK

These should be controlled by the SMMU device.

Downstream defines these clocks for the GPU but we don't
get/prepare/use them for the GPU device - I think they are there
in case the GMU isn't working or is disabled for some other reason.

GPU_CC_GX_GFX3D_CLK
GPU_CC_CXO_CLK
GCC_DDRSS_GPU_AXI_CLK
GCC_GPU_MEMNOC_GFX_CLK
GPU_CC_CX_GMU_CLK

> > 
> > But that said: traditionally we've expected that the clock driver correctly
> > clamp the requested rate to the correct values. In the past we have taken
> > advantage of this and we may in the future. I don't think it is reasonable
> > to require the leaf driver to only pass "approved" frequencies especially
> > since we depend on our own OPP table that may or may not be similar to the
> > one used by the clock driver.
> > 
> 
> Ok. Sounds like things can't be kept in sync between the clk driver and
> the OPP tables. Why is that hard to do?

Again, not my area of expertise. Traditionally the leaf driver is responsible
for setting its own OPP table. I'm not sure if the clock driver can or should
be in the role of switching up the table. We've always assumed that the clk
driver will sanity check whatever we ask it for.

Jordan
Amit Nischal Aug. 13, 2018, 6:30 a.m. UTC | #10
On 2018-08-08 11:28, Stephen Boyd wrote:
> Quoting Jordan Crouse (2018-08-06 08:04:37)
>> On Mon, Aug 06, 2018 at 02:37:18PM +0530, Amit Nischal wrote:
>> > On 2018-08-03 04:14, Stephen Boyd wrote:
>> > >Quoting Amit Nischal (2018-07-30 04:28:56)
>> > >>On 2018-07-25 12:28, Stephen Boyd wrote:
>> > >>>
>> > >>> Ok. Sounds good! Is the rate range call really needed? It can't be
>> > >>> determined in the PLL code with some table or avoided by making sure
>> > >>> GPU
>> > >>> uses OPP table with only approved frequencies?
>> > >>>
>> > >>
>> > >>Currently fabia PLL code does not have any table to check this and
>> > >>intention
>> > >>was to avoid relying on the client to call set_rate with only approved
>> > >>frequencies so we have added the set_rate_range() call in the GPUCC
>> > >>driver
>> > >>in order to set the rate range.
>> > >>
>> > >
>> > >But GPU will use OPP so it doesn't seem like it really buys us anything
>> > >here. And it really doesn't matter when the clk driver implementation
>> > >doesn't use the min/max to clamp the values of the round_rate()
>> > >call. Is
>> > >that being done here? I need to double check. I would be more convinced
>> > >if the implementation was looking at min/max to constrain the rate
>> > >requested.
>> > >
>> >
>> > So our understanding is that GPU(client) driver will always call the
>> > set_rate with approved frequencies only and we can completely rely
>> > on the
>> > client. Is our understanding is correct?
>> 
>> 
>> First: on sdm845 the software doesn't set the GPU clocks - we rely on 
>> the GMU
>> firmware to do that on our behalf but for the GPU at least this is an 
>> academic
>> exercise.
> 
> So what is this GPU clk driver for then?
> 
>> 
>> But that said: traditionally we've expected that the clock driver 
>> correctly
>> clamp the requested rate to the correct values. In the past we have 
>> taken
>> advantage of this and we may in the future. I don't think it is 
>> reasonable
>> to require the leaf driver to only pass "approved" frequencies 
>> especially
>> since we depend on our own OPP table that may or may not be similar to 
>> the
>> one used by the clock driver.
>> 
> 
> Ok. Sounds like things can't be kept in sync between the clk driver and
> the OPP tables. Why is that hard to do?
> 
> Either way, I'd be fine if the code actually used the frequency limits
> to round the rate to something within range, but I don't recall seeing
> that being done here. So if the min/max limits stay, the clk driver
> should round to within that range.
> 

Thanks Stephen for your suggestion. I have modified the existing
determine_rate() op to use the min/max limits and round the
requested rate so that it stays within the set_rate range. I will
submit the same in the next patch series.
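Rounding into the range, rather than rejecting out-of-range requests, is what Stephen asked for above. The clamping step can be sketched standalone like this (the function name is hypothetical; in the real op the min/max would come from the range set via clk_hw_set_rate_range() and the result would feed the parent-rate calculation):

```c
#include <assert.h>

/* Hypothetical clamp of a requested rate into the [min, max] range,
 * as a determine_rate op should do before computing the parent
 * (PLL) rate, so out-of-range requests are rounded, not refused. */
static unsigned long gfx3d_round_to_range(unsigned long rate,
					  unsigned long min,
					  unsigned long max)
{
	if (rate < min)
		return min;
	if (rate > max)
		return max;
	return rate;
}
```

With the 180 MHz to 710 MHz range from earlier in the thread, a 100 MHz request rounds up to 180 MHz and an 800 MHz request rounds down to 710 MHz.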



Patch

diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
index b209a2f..c8c9558 100644
--- a/drivers/clk/qcom/clk-rcg.h
+++ b/drivers/clk/qcom/clk-rcg.h
@@ -160,5 +160,6 @@  struct clk_rcg2 {
 extern const struct clk_ops clk_pixel_ops;
 extern const struct clk_ops clk_gfx3d_ops;
 extern const struct clk_ops clk_rcg2_shared_ops;
+extern const struct clk_ops clk_rcg2_gfx3d_ops;

 #endif
diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
index 52208d4..491e710 100644
--- a/drivers/clk/qcom/clk-rcg2.c
+++ b/drivers/clk/qcom/clk-rcg2.c
@@ -823,28 +823,12 @@  static int clk_rcg2_clear_force_enable(struct clk_hw *hw)
 					CMD_ROOT_EN, 0);
 }

-static int
-clk_rcg2_shared_force_enable_clear(struct clk_hw *hw, const struct freq_tbl *f)
-{
-	struct clk_rcg2 *rcg = to_clk_rcg2(hw);
-	int ret;
-
-	ret = clk_rcg2_set_force_enable(hw);
-	if (ret)
-		return ret;
-
-	ret = clk_rcg2_configure(rcg, f);
-	if (ret)
-		return ret;
-
-	return clk_rcg2_clear_force_enable(hw);
-}
-
-static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
+static int __clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
 				    unsigned long parent_rate)
 {
 	struct clk_rcg2 *rcg = to_clk_rcg2(hw);
 	const struct freq_tbl *f;
+	int ret;

 	f = qcom_find_freq(rcg->freq_tbl, rate);
 	if (!f)
@@ -857,7 +841,23 @@  static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
 	if (!__clk_is_enabled(hw->clk))
 		return __clk_rcg2_configure(rcg, f);

-	return clk_rcg2_shared_force_enable_clear(hw, f);
+	ret = clk_rcg2_set_force_enable(hw);
+	if (ret)
+		return ret;
+
+	return clk_rcg2_configure(rcg, f);
+}
+
+static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
+				    unsigned long parent_rate)
+{
+	int ret;
+
+	ret = __clk_rcg2_shared_set_rate(hw, rate, parent_rate);
+	if (ret)
+		return ret;
+
+	return clk_rcg2_clear_force_enable(hw);
 }

 static int clk_rcg2_shared_set_rate_and_parent(struct clk_hw *hw,
@@ -866,7 +866,7 @@  static int clk_rcg2_shared_set_rate_and_parent(struct clk_hw *hw,
 	return clk_rcg2_shared_set_rate(hw, rate, parent_rate);
 }

-static int clk_rcg2_shared_enable(struct clk_hw *hw)
+static int __clk_rcg2_shared_enable(struct clk_hw *hw)
 {
 	struct clk_rcg2 *rcg = to_clk_rcg2(hw);
 	int ret;
@@ -879,7 +879,14 @@  static int clk_rcg2_shared_enable(struct clk_hw *hw)
 	if (ret)
 		return ret;

-	ret = update_config(rcg);
+	return update_config(rcg);
+}
+
+static int clk_rcg2_shared_enable(struct clk_hw *hw)
+{
+	int ret;
+
+	ret = __clk_rcg2_shared_enable(hw);
 	if (ret)
 		return ret;

@@ -929,3 +936,32 @@  static void clk_rcg2_shared_disable(struct clk_hw *hw)
 	.set_rate_and_parent = clk_rcg2_shared_set_rate_and_parent,
 };
 EXPORT_SYMBOL_GPL(clk_rcg2_shared_ops);
+
+static int clk_rcg2_gfx3d_enable(struct clk_hw *hw)
+{
+	return __clk_rcg2_shared_enable(hw);
+}
+
+static int clk_rcg2_gfx3d_set_rate(struct clk_hw *hw, unsigned long rate,
+				    unsigned long parent_rate)
+{
+	return __clk_rcg2_shared_set_rate(hw, rate, parent_rate);
+}
+
+static int clk_rcg2_gfx3d_set_rate_and_parent(struct clk_hw *hw,
+		unsigned long rate, unsigned long parent_rate, u8 index)
+{
+	return clk_rcg2_gfx3d_set_rate(hw, rate, parent_rate);
+}
+
+const struct clk_ops clk_rcg2_gfx3d_ops = {
+	.enable = clk_rcg2_gfx3d_enable,
+	.disable = clk_rcg2_shared_disable,
+	.get_parent = clk_rcg2_get_parent,
+	.set_parent = clk_rcg2_set_parent,
+	.recalc_rate = clk_rcg2_recalc_rate,
+	.determine_rate = clk_rcg2_determine_rate,
+	.set_rate = clk_rcg2_gfx3d_set_rate,
+	.set_rate_and_parent = clk_rcg2_gfx3d_set_rate_and_parent,
+};
+EXPORT_SYMBOL_GPL(clk_rcg2_gfx3d_ops);