Message ID | 1deb7a85-0859-54f1-950a-33de3a08f9fd@ti.com (mailing list archive) |
---|---|
State | Accepted, archived |
Series | [GIT,PULL] clk: keystone: changes for 5.3 v2 |
Quoting Tero Kristo (2019-06-12 04:56:15)
> Hi Stephen, Mike, Santosh,
>
> Here's a 2nd take of the pull request for the clock changes for keystone
> SoC for 5.3. The only change compared to the v1 is to add the required
> drivers/firmware change in. This avoids the nasty dependency between the
> pull requests between the clock driver and firmware driver.
>
> -Tero
>
> ---

Thanks. Pulled into clk-next. I guess we should increase the size of the
number of parents that can exist to be more than a u8? We're close to
getting there with the new way of specifying clk parents, so maybe we
should expand it to an unsigned int, but then we may need to optimize
finding parents when searching through all the parents of a clk.

Also, there isn't any quantification of how much better it is to scan DT
for all the clks that are used and only register those ones. It would be
nice to understand how much better it is to do that sort of scan vs.
just populating all clks at boot time. It may be useful to make the code
generic because NXP folks also want to populate clks from DT so maybe we
should provide this from the core framework somehow to ask providers to
register a particular clk or not. I haven't thought about it at all, but
it may come up that we totally rewrite this code in the future to be
shared outside of the TI clk driver.

FYI: I had to apply this patch on top to make sparse happy.

diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
index b417cef35769..92b77d38dd1f 100644
--- a/drivers/firmware/ti_sci.c
+++ b/drivers/firmware/ti_sci.c
@@ -2402,12 +2402,13 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
 	if (!res)
 		return ERR_PTR(-ENOMEM);

-	res->sets = of_property_count_elems_of_size(dev_of_node(dev), of_prop,
+	ret = of_property_count_elems_of_size(dev_of_node(dev), of_prop,
 						    sizeof(u32));
-	if (res->sets < 0) {
+	if (ret < 0) {
 		dev_err(dev, "%s resource type ids not available\n", of_prop);
-		return ERR_PTR(res->sets);
+		return ERR_PTR(ret);
 	}
+	res->sets = ret;

 	res->desc = devm_kcalloc(dev, res->sets, sizeof(*res->desc),
 				 GFP_KERNEL);
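The sparse warning being fixed comes from storing the signed return value of of_property_count_elems_of_size() in what is presumably an unsigned field and then testing that field for a negative value, a check that can never be true if the field is unsigned. Below is a minimal sketch of the corrected pattern; the struct and function names are illustrative, not the real ti_sci ones.

```c
/*
 * Minimal sketch of the pattern the fix-up patch applies: keep the
 * possibly-negative return value of an int-returning helper in a local
 * int, check it, and only then store it in the unsigned field.
 * Names here are illustrative, not the real ti_sci ones.
 */
struct example_resource {
	unsigned int sets;	/* assumed unsigned, so "sets < 0" can never be true */
};

static int example_store_count(struct example_resource *res, int counted)
{
	int ret = counted;	/* stand-in for of_property_count_elems_of_size() */

	if (ret < 0)
		return ret;	/* propagate the errno while it is still signed */

	res->sets = ret;	/* only assign once it is known to be non-negative */
	return 0;
}
```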
On 25/06/2019 04:17, Stephen Boyd wrote:
> Quoting Tero Kristo (2019-06-12 04:56:15)
>> Hi Stephen, Mike, Santosh,
>>
>> Here's a 2nd take of the pull request for the clock changes for keystone
>> SoC for 5.3. The only change compared to the v1 is to add the required
>> drivers/firmware change in. This avoids the nasty dependency between the
>> pull requests between the clock driver and firmware driver.
>>
>> -Tero
>>
>> ---
>
> Thanks. Pulled into clk-next. I guess we should increase the size of the
> number of parents that can exist to be more than a u8? We're close to
> getting there with the new way of specifying clk parents, so maybe we
> should expand it to an unsigned int, but then we may need to optimize
> finding parents when searching through all the parents of a clk.

For now, this is not an issue with TI SoCs at least; I think we only
have at most around 64 parents for muxes.

> Also, there isn't any quantification of how much better it is to scan DT
> for all the clks that are used and only register those ones. It would be
> nice to understand how much better it is to do that sort of scan vs.
> just populating all clks at boot time.

I haven't done measurements lately, but it provides a pretty drastic
improvement. On am65x, for example, it cuts the scan time from a bit
more than 1 second to a couple of hundred milliseconds. I don't have
measurements for the new j721e SoC, but I would expect the improvement
to be even larger there.

> It may be useful to make the code
> generic because NXP folks also want to populate clks from DT so maybe we
> should provide this from the core framework somehow to ask providers to
> register a particular clk or not. I haven't thought about it at all, but
> it may come up that we totally rewrite this code in the future to be
> shared outside of the TI clk driver.

It might also be worth thinking about whether some sort of lazy clock
probe would be possible... now we register everything in one go, but
would it be possible to only register / reparent clocks once they are
actually requested by some driver?

>
> FYI: I had to apply this patch on top to make sparse happy.

Ok, looks good to me, thanks.

-Tero

>
> diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
> index b417cef35769..92b77d38dd1f 100644
> --- a/drivers/firmware/ti_sci.c
> +++ b/drivers/firmware/ti_sci.c
> @@ -2402,12 +2402,13 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
>  	if (!res)
>  		return ERR_PTR(-ENOMEM);
>
> -	res->sets = of_property_count_elems_of_size(dev_of_node(dev), of_prop,
> +	ret = of_property_count_elems_of_size(dev_of_node(dev), of_prop,
>  						    sizeof(u32));
> -	if (res->sets < 0) {
> +	if (ret < 0) {
>  		dev_err(dev, "%s resource type ids not available\n", of_prop);
> -		return ERR_PTR(res->sets);
> +		return ERR_PTR(ret);
>  	}
> +	res->sets = ret;
>
>  	res->desc = devm_kcalloc(dev, res->sets, sizeof(*res->desc),
>  				 GFP_KERNEL);
>

--
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
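To make the lazy-probe idea above concrete, here is a rough sketch, not code from the TI driver, of a DT clock provider callback that registers each clk_hw only the first time a consumer looks it up. The struct and helper names are made up for illustration, and locking for concurrent lookups is left out.

```c
#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/of.h>

/* Hypothetical per-provider state; nothing here comes from the TI driver. */
struct lazy_clk_provider {
	struct device *dev;
	struct clk_hw **hws;		/* entries stay NULL until first request */
	unsigned int num_clks;
	struct clk_hw *(*register_one)(struct device *dev, unsigned int idx);
};

static struct clk_hw *lazy_of_clk_get(struct of_phandle_args *clkspec,
				      void *data)
{
	struct lazy_clk_provider *prov = data;
	unsigned int idx = clkspec->args[0];	/* assumes #clock-cells = 1 */

	if (idx >= prov->num_clks)
		return ERR_PTR(-EINVAL);

	/* Register the clock the first time a consumer asks for it. */
	if (!prov->hws[idx])
		prov->hws[idx] = prov->register_one(prov->dev, idx);

	return prov->hws[idx];
}

/*
 * From probe, after filling in "prov":
 *	of_clk_add_hw_provider(dev->of_node, lazy_of_clk_get, prov);
 */
```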
Quoting Tero Kristo (2019-06-25 03:33:50)
> On 25/06/2019 04:17, Stephen Boyd wrote:
> > Quoting Tero Kristo (2019-06-12 04:56:15)
> >> Hi Stephen, Mike, Santosh,
> >>
> >> Here's a 2nd take of the pull request for the clock changes for keystone
> >> SoC for 5.3. The only change compared to the v1 is to add the required
> >> drivers/firmware change in. This avoids the nasty dependency between the
> >> pull requests between the clock driver and firmware driver.
> >>
> >> -Tero
> >
> > Thanks. Pulled into clk-next. I guess we should increase the size of the
> > number of parents that can exist to be more than a u8? We're close to
> > getting there with the new way of specifying clk parents, so maybe we
> > should expand it to an unsigned int, but then we may need to optimize
> > finding parents when searching through all the parents of a clk.
>
> For now, this is not an issue with TI SoCs at least; I think we only
> have at most around 64 parents for muxes.

Ok. It doesn't sound like a priority then.

> > Also, there isn't any quantification of how much better it is to scan DT
> > for all the clks that are used and only register those ones. It would be
> > nice to understand how much better it is to do that sort of scan vs.
> > just populating all clks at boot time.
>
> I haven't done measurements lately, but it provides a pretty drastic
> improvement. On am65x, for example, it cuts the scan time from a bit
> more than 1 second to a couple of hundred milliseconds. I don't have
> measurements for the new j721e SoC, but I would expect the improvement
> to be even larger there.

Cool. Thanks for the numbers.

> > It may be useful to make the code
> > generic because NXP folks also want to populate clks from DT so maybe we
> > should provide this from the core framework somehow to ask providers to
> > register a particular clk or not. I haven't thought about it at all, but
> > it may come up that we totally rewrite this code in the future to be
> > shared outside of the TI clk driver.
>
> It might also be worth thinking about whether some sort of lazy clock
> probe would be possible... now we register everything in one go, but
> would it be possible to only register / reparent clocks once they are
> actually requested by some driver?

Sure. Ideally the optimization isn't vendor driver specific.
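For context on the scan discussed in this thread, one generic way to find the clocks that are actually referenced is to walk every DT node with a "clocks" property, resolve its phandles, and record which indices point back at the provider, so that only those get registered. The sketch below shows that idea, not the TI implementation; the helper name is hypothetical and it assumes single-cell clock specifiers.

```c
#include <linux/bitops.h>
#include <linux/of.h>

/*
 * Hypothetical helper sketching the "register only used clocks" scan.
 * Assumes the provider uses single-cell clock specifiers (#clock-cells = 1)
 * and that "used" is a bitmap with at least max_clks bits.
 */
static void mark_used_clks(struct device_node *provider,
			   unsigned long *used, unsigned int max_clks)
{
	struct device_node *np;
	struct of_phandle_args clkspec;
	int i;

	for_each_node_with_property(np, "clocks") {
		for (i = 0; !of_parse_phandle_with_args(np, "clocks",
							"#clock-cells", i,
							&clkspec); i++) {
			/* Remember specifiers that point back at our provider. */
			if (clkspec.np == provider && clkspec.args[0] < max_clks)
				set_bit(clkspec.args[0], used);
			of_node_put(clkspec.np);
		}
	}
}
```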