[v5,2/4] dt-bindings: cpufreq: apple,soc-cpufreq: Add binding for Apple SoC cpufreq

Message ID 20221128142912.16022-3-marcan@marcan.st (mailing list archive)
State New, archived
Series Apple SoC cpufreq driver

Commit Message

Hector Martin Nov. 28, 2022, 2:29 p.m. UTC
This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
The hardware has an independent controller per CPU cluster, and we
represent them as unique nodes in order to accurately describe the
hardware. The driver is responsible for binding them as a single cpufreq
device (in the Linux cpufreq model).

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Hector Martin <marcan@marcan.st>
---
 .../cpufreq/apple,cluster-cpufreq.yaml        | 117 ++++++++++++++++++
 1 file changed, 117 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml

Comments

Krzysztof Kozlowski Nov. 28, 2022, 2:40 p.m. UTC | #1
On 28/11/2022 15:29, Hector Martin wrote:
> This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
> The hardware has an independent controller per CPU cluster, and we
> represent them as unique nodes in order to accurately describe the
> hardware. The driver is responsible for binding them as a single cpufreq
> device (in the Linux cpufreq model).
> 


Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>

Best regards,
Krzysztof
Ulf Hansson Nov. 29, 2022, 11:36 a.m. UTC | #2
On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
>
> This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
> The hardware has an independent controller per CPU cluster, and we
> represent them as unique nodes in order to accurately describe the
> hardware. The driver is responsible for binding them as a single cpufreq
> device (in the Linux cpufreq model).
>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Hector Martin <marcan@marcan.st>
> ---
>  .../cpufreq/apple,cluster-cpufreq.yaml        | 117 ++++++++++++++++++
>  1 file changed, 117 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
>
> diff --git a/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml b/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
> new file mode 100644
> index 000000000000..76cb9726660e
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
> @@ -0,0 +1,117 @@
> +# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/cpufreq/apple,cluster-cpufreq.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Apple SoC cluster cpufreq device
> +
> +maintainers:
> +  - Hector Martin <marcan@marcan.st>
> +
> +description: |
> +  Apple SoCs (e.g. M1) have a per-cpu-cluster DVFS controller that is part of
> +  the cluster management register block. This binding uses the standard
> +  operating-points-v2 table to define the CPU performance states, with the
> +  opp-level property specifying the hardware p-state index for that level.
> +
> +properties:
> +  compatible:
> +    oneOf:
> +      - items:
> +          - enum:
> +              - apple,t8103-cluster-cpufreq
> +              - apple,t8112-cluster-cpufreq
> +          - const: apple,cluster-cpufreq
> +      - items:
> +          - const: apple,t6000-cluster-cpufreq
> +          - const: apple,t8103-cluster-cpufreq
> +          - const: apple,cluster-cpufreq
> +
> +  reg:
> +    maxItems: 1
> +
> +  '#performance-domain-cells':
> +    const: 0
> +
> +required:
> +  - compatible
> +  - reg
> +  - '#performance-domain-cells'
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    // This example shows a single CPU per domain and 2 domains,
> +    // with two p-states per domain.
> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
> +    cpus {
> +      #address-cells = <2>;
> +      #size-cells = <0>;
> +
> +      cpu@0 {
> +        compatible = "apple,icestorm";
> +        device_type = "cpu";
> +        reg = <0x0 0x0>;
> +        operating-points-v2 = <&ecluster_opp>;

To me, it looks like the operating-points-v2 phandle better belongs in
the performance-domains provider node. I mean, aren't the OPPs really a
description of the performance-domain provider?

That said, I suggest we try to extend the generic performance-domain
binding [1] with an "operating-points-v2". In that way, we should
instead be able to reference it from this binding.

In fact, that would be very similar to what already exists for the
generic power-domain binding [2]. I think it would be rather nice to
follow a similar pattern for the performance-domain binding.
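
To make this concrete, here is a rough and untested sketch of what I have
in mind, reusing the nodes from your example (the generic binding would of
course need to be extended first, so take the exact property placement as
an assumption rather than a settled schema):

    cpufreq_e: performance-controller@210e20000 {
      compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
      reg = <0x2 0x10e20000 0 0x1000>;
      #performance-domain-cells = <0>;
      // The OPP table describes the provider, so it would move here...
      operating-points-v2 = <&ecluster_opp>;
    };

    cpu@0 {
      compatible = "apple,icestorm";
      device_type = "cpu";
      reg = <0x0 0x0>;
      // ...and the CPU node only points at its performance domain.
      performance-domains = <&cpufreq_e>;
    };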

> +        performance-domains = <&cpufreq_e>;
> +      };
> +
> +      cpu@10100 {
> +        compatible = "apple,firestorm";
> +        device_type = "cpu";
> +        reg = <0x0 0x10100>;
> +        operating-points-v2 = <&pcluster_opp>;
> +        performance-domains = <&cpufreq_p>;
> +      };
> +    };
> +
> +    ecluster_opp: opp-table-0 {
> +      compatible = "operating-points-v2";
> +      opp-shared;
> +
> +      opp01 {
> +        opp-hz = /bits/ 64 <600000000>;
> +        opp-level = <1>;
> +        clock-latency-ns = <7500>;
> +      };
> +      opp02 {
> +        opp-hz = /bits/ 64 <972000000>;
> +        opp-level = <2>;
> +        clock-latency-ns = <22000>;
> +      };
> +    };
> +
> +    pcluster_opp: opp-table-1 {
> +      compatible = "operating-points-v2";
> +      opp-shared;
> +
> +      opp01 {
> +        opp-hz = /bits/ 64 <600000000>;
> +        opp-level = <1>;
> +        clock-latency-ns = <8000>;
> +      };
> +      opp02 {
> +        opp-hz = /bits/ 64 <828000000>;
> +        opp-level = <2>;
> +        clock-latency-ns = <19000>;
> +      };
> +    };
> +
> +    soc {
> +      #address-cells = <2>;
> +      #size-cells = <2>;
> +
> +      cpufreq_e: performance-controller@210e20000 {
> +        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
> +        reg = <0x2 0x10e20000 0 0x1000>;
> +        #performance-domain-cells = <0>;
> +      };
> +
> +      cpufreq_p: performance-controller@211e20000 {
> +        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
> +        reg = <0x2 0x11e20000 0 0x1000>;
> +        #performance-domain-cells = <0>;
> +      };
> +    };

Kind regards
Uffe

[1]
Documentation/devicetree/bindings/dvfs/performance-domain.yaml
[2]
Documentation/devicetree/bindings/power/power-domain.yaml
Hector Martin Nov. 29, 2022, 2 p.m. UTC | #3
On 29/11/2022 20.36, Ulf Hansson wrote:
> On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
>> +examples:
>> +  - |
>> +    // This example shows a single CPU per domain and 2 domains,
>> +    // with two p-states per domain.
>> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
>> +    cpus {
>> +      #address-cells = <2>;
>> +      #size-cells = <0>;
>> +
>> +      cpu@0 {
>> +        compatible = "apple,icestorm";
>> +        device_type = "cpu";
>> +        reg = <0x0 0x0>;
>> +        operating-points-v2 = <&ecluster_opp>;
> 
> To me, it looks like the operating-points-v2 phandle better belongs in
> the performance-domains provider node. I mean, aren't the OPPs really a
> description of the performance-domain provider?
> 
> That said, I suggest we try to extend the generic performance-domain
> binding [1] with an "operating-points-v2". In that way, we should
> instead be able to reference it from this binding.
> 
> In fact, that would be very similar to what already exists for the
> generic power-domain binding [2]. I think it would be rather nice to
> follow a similar pattern for the performance-domain binding.

While I agree with the technical rationale and the proposed approach
being better in principle...

We're at v5 of bikeshedding this trivial driver's DT binding, and the
comment could've been made at v3. To quote IRC just now:

> this way the machines will be obsolete before things are fully upstreamed

I think it's long overdue for the kernel community to take a deep look
at itself and its development and review process, because it is quite
honestly insane how pathologically inefficient it is compared to,
basically, every other large and healthy open source project of similar
or even greater impact and scope.

Cc Linus, because this is for your Mac and I assume you care. We're at
v5 here for this silly driver. Meanwhile, rmk recently threw in the towel
on upstreaming macsmc for us. We're trying, and I'll keep trying because
I actually get paid (by very generous donors) to do this, but if I
weren't I'd have given up a long time ago. And while I won't give up, I
can't deny this situation affects my morale and willingness to keep
pushing on upstreaming on a regular basis.

Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
in official release images (and since Linux is the source of truth for
DT bindings, every time we re-bikeshed it we break their users because
they, quite reasonably, aren't interested in waiting for us Linux
slowpokes to figure it out first).

Please, let's introspect about this for a moment. Something is deeply
broken if people with 25+ years being an arch maintainer can't get a
700-line mfd driver upstreamed before giving up. I don't know how we
expect to ever get a Rust GPU driver merged if it takes 6+ versions to
upstream the world's easiest cpufreq hardware.

- Hector
Marc Zyngier Nov. 29, 2022, 2:02 p.m. UTC | #4
On 2022-11-29 11:36, Ulf Hansson wrote:
> On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
>> 
>> +examples:
>> +  - |
>> +    // This example shows a single CPU per domain and 2 domains,
>> +    // with two p-states per domain.
>> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
>> +    cpus {
>> +      #address-cells = <2>;
>> +      #size-cells = <0>;
>> +
>> +      cpu@0 {
>> +        compatible = "apple,icestorm";
>> +        device_type = "cpu";
>> +        reg = <0x0 0x0>;
>> +        operating-points-v2 = <&ecluster_opp>;
> 
> To me, it looks like the operating-points-v2 phandle better belongs in
> the performance-domains provider node. I mean, aren't the OPPs really a
> description of the performance-domain provider?
> 
> That said, I suggest we try to extend the generic performance-domain
> binding [1] with an "operating-points-v2". In that way, we should
> instead be able to reference it from this binding.
> 
> In fact, that would be very similar to what already exists for the
> generic power-domain binding [2]. I think it would be rather nice to
> follow a similar pattern for the performance-domain binding.

I'm not going to rabbit-hole into whether this is a good
or a bad binding. As far as I'm concerned, and as a user
of the HW it describes, it does the job in a satisfactory
manner.

What I'm concerned about is this constant, last-minute
bikeshedding that actively prevents upstreaming of support
for HW that people are actively using.

If anything, this is only causing people to stop trying to
contribute to the upstream kernel because of this constant
DT bikeshed. Honestly, writing code to support this HW is
a lot less effort than trying to get 3 different people to
agree on a binding.

It also affects other OSs that rely on the same bindings,
and will eventually only result in developers not submitting
bindings anymore. After all, nothing says that bindings and
device trees have to exist in the Linux kernel tree.

         M.
Krzysztof Kozlowski Nov. 29, 2022, 2:34 p.m. UTC | #5
On 29/11/2022 15:00, Hector Martin wrote:
> On 29/11/2022 20.36, Ulf Hansson wrote:
>> On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
>>> +examples:
>>> +  - |
>>> +    // This example shows a single CPU per domain and 2 domains,
>>> +    // with two p-states per domain.
>>> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
>>> +    cpus {
>>> +      #address-cells = <2>;
>>> +      #size-cells = <0>;
>>> +
>>> +      cpu@0 {
>>> +        compatible = "apple,icestorm";
>>> +        device_type = "cpu";
>>> +        reg = <0x0 0x0>;
>>> +        operating-points-v2 = <&ecluster_opp>;
>>
>> To me, it looks like the operating-points-v2 phandle better belongs in
>> the performance-domains provider node. I mean, aren't the OPPs really a
>> description of the performance-domain provider?
>>
>> That said, I suggest we try to extend the generic performance-domain
>> binding [1] with an "operating-points-v2". In that way, we should
>> instead be able to reference it from this binding.
>>
>> In fact, that would be very similar to what already exists for the
>> generic power-domain binding [2]. I think it would be rather nice to
>> follow a similar pattern for the performance-domain binding.
> 
> While I agree with the technical rationale and the proposed approach
> being better in principle...
> 
> We're at v5 of bikeshedding this trivial driver's DT binding, and the
> comment could've been made at v3. To quote IRC just now:
> 
>> this way the machines will be obsolete before things are fully upstreamed
> 
> I think it's long overdue for the kernel community to take a deep look
> at itself and its development and review process, because it is quite
> honestly insane how pathologically inefficient it is compared to,
> basically, every other large and healthy open source project of similar
> or even greater impact and scope.
> 
> Cc Linus, because this is for your Mac and I assume you care. We're at
> v5 here for this silly driver. Meanwhile, rmk recently threw in the towel
> on upstreaming macsmc for us. We're trying, and I'll keep trying because
> I actually get paid (by very generous donors) to do this, but if I
> weren't I'd have given up a long time ago. And while I won't give up, I
> can't deny this situation affects my morale and willingness to keep
> pushing on upstreaming on a regular basis.
> 
> Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
> in official release images (and since Linux is the source of truth for
> DT bindings, every time we re-bikeshed it we break their users because
> they, quite reasonably, aren't interested in waiting for us Linux
> slowpokes to figure it out first).
> 
> Please, let's introspect about this for a moment. Something is deeply
> broken if people with 25+ years being an arch maintainer can't get a

If an arch maintainer sends patches which do not build (make
dt_binding_check), then what exactly do you expect? Accept them just
because it is 25+ years of experience or a maintainer? So we have
different processes - for beginners, code should compile. For
experienced people, it does not have to build because otherwise they
will get discouraged?

> 700-line mfd driver upstreamed before giving up. I don't know how we
> expect to ever get a Rust GPU driver merged if it takes 6+ versions to
> upstream the world's easiest cpufreq hardware.

While I understand your point about bikeshedding, I think your
previous bindings got pretty nice and fast reviews, so using a
non-building case as an example is a poor choice.

Best regards,
Krzysztof
Hector Martin Nov. 29, 2022, 3:17 p.m. UTC | #6
On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
> On 29/11/2022 15:00, Hector Martin wrote:
>> On 29/11/2022 20.36, Ulf Hansson wrote:
>> Please, let's introspect about this for a moment. Something is deeply
>> broken if people with 25+ years being an arch maintainer can't get a
> 
> If an arch maintainer sends patches which do not build (make
> dt_binding_check), then what exactly do you expect? Accept them just
> because it is 25+ years of experience or a maintainer? So we have
> different processes - for beginners, code should compile. For
> experienced people, it does not have to build because otherwise they
> will get discouraged?

I expect the process to not be so confusing and frustrating that a
maintainer with 25+ years of experience gives up. That the bindings
didn't pass the checker is beside the point. People say the Linux
kernel community is hostile to newbies. This issue proves it's not just
newbies, the process is failing even experienced folks.

On that specific issue, any other functional open source project would
have the binding checks be a CI bot, with a friendly message telling you
what to do to fix it, and it would re-run when you push to the PR again,
which is a *much* lower friction action than sending a whole new patch
series out for review via email (if you don't agree with this, then
you're not the average contributor - the Linux kernel is by far the
scariest major open source project to contribute to, and I think most
people would agree with me on that).

I know Rob has a DT checker bot, but its error output is practically
line noise, and the error email doesn't even mention the
DT_SCHEMA_FILES= make option (which is the only way to make the check
not take *forever* to run). Absolutely nobody is going to look at those
emails without already knowing the intricacies of DT bindings and the
checker and not find them incredibly frustrating.
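
(For reference, the fast invocation that email should be pointing people
at is something along the lines of:

    make dt_binding_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml

i.e. restricted to the schema you actually touched, instead of the full
run over every binding.)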

But it's not just the DT checker. That came after an argument where the
MFD maintainer complained about the driver while offering no solution
nor proposed path forward. I had to have an IRC conversation with him to
work it out, after which he accepted one of the options I'd already
proposed over email. If you have to change communication mediums to
resolve an issue, that means your initial medium failed at its job.

Not to mention the random drive-by reviews, nitpicks, disagreements
between maintainers about how to do things, or just plain cases of
maintainers stubbornly being *wrong* and refusing to listen while
everyone around them is telling them they're wrong (until someone above
them in the maintainer tree tells them they're wrong - then they finally
get it. If it takes someone in a position of authority telling you
you're wrong for you to accept it, you're doing a poor job at your own
position.)

And then there's the coding style. The rest of the world has
standardized on formatting tools. Here, every subsystem maintainer has
their own pet style you have to learn. "Please put your variables in
reverse christmas tree order". I once got a review comment that
complained that I re-aligned an existing #define when I added another
one (to keep them visually aligned, as the new one increased the
required tab stop). Because "cleanup patches should be separate" (I
didn't submit a cleanup patch after that, I just left the defines
misaligned). So now I need to maintain a massive mental map of exactly
what style conventions and patch breakup conventions every subsystem
maintainer wants me to use.

I'm so glad `make rustfmt` is a thing now. Maybe 50 years down the line,
most of the kernel will have been rewritten in Rust and we'll finally
fix just one of these problems /s

Some of these things are, to some extent, a natural result of working
with other humans. But the kernel community has both a number of humans
harder to work with than is reasonable in their position, and an overall
process that multiplies the resulting pain by an order of magnitude for
everyone involved.

> While I understand your point about bikeshedding, I think your
> previous bindings got pretty nice and fast reviews, so using a
> non-building case as an example is a poor choice.

Yeah, after a while, because I learned how to do DT bindings the hard
way after having to submit a bunch and getting everything wrong (and
even then I still get the simplest things wrong, see: v4 here). Which is
why I'll pick up after rmk's attempt and try again with macsmc at some
point (probably after 6.1). But I'm not holding my breath that I won't
need another half dozen rounds of bikeshedding.

When it takes 10 times more man-hours to upstream a driver than to
reverse engineer and write it with zero documentation, never mind the
calendar time it takes, something is very wrong. I actively dread
submitting new drivers to new subsystems or some existing ones now. How
much pain will the next one be? Will I be asked to move files around 3
times? Spend 4 rounds bikeshedding the DT schema? Think it's finally
ready only for someone to show up and ask to change a major part of the
design at the last minute?

And this all when we're actually submitting code of decent quality (I
think I have enough experience not to submit crap most of the time). Now
imagine how much worse this all is for a newbie who submits a
well-intentioned but definitely not up to standard patch. There's a huge
chance they'll give up before ever getting the submission through.

Again, I'll keep pushing and sending out patches, because this is
important. But this is the reality. This is how bad it is. The process
is broken, and everyone outside the kernel community can see that and
has been saying that for ages. Just because some of us are willing to
put up with the pain anyway doesn't mean it's working as intended.

- Hector
Krzysztof Kozlowski Nov. 29, 2022, 3:46 p.m. UTC | #7
On 29/11/2022 16:17, Hector Martin wrote:
> On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
>> On 29/11/2022 15:00, Hector Martin wrote:
>>> On 29/11/2022 20.36, Ulf Hansson wrote:
>>> Please, let's introspect about this for a moment. Something is deeply
>>> broken if people with 25+ years being an arch maintainer can't get a
>>
>> If an arch maintainer sends patches which do not build (make
>> dt_binding_check), then what exactly do you expect? Accept them just
>> because it is 25+ years of experience or a maintainer? So we have
>> different processes - for beginners, code should compile. For
>> experienced people, it does not have to build because otherwise they
>> will get discouraged?
> 
> I expect the process to not be so confusing and frustrating that a
> maintainer with 25+ years of experience gives up. That the bindings
> didn't pass the checker is beside the point. People say the Linux
> kernel community is hostile to newbies. This issue proves it's not just
> newbies, the process is failing even experienced folks.
> 
> On that specific issue, any other functional open source project would
> have the binding checks be a CI bot, with a friendly message telling you
> what to do to fix it, and it would re-run when you push to the PR again,
> which is a *much* lower friction action than sending a whole new patch
> series out for review via email (if you don't agree with this, then
> you're not the average contributor - the Linux kernel is by far the
> scariest major open source project to contribute to, and I think most
> people would agree with me on that).

I agree with this entirely. Not only for DT checks, but also for driver
code with sparse/smatch/coccinelle/W=1/clang builds.

> I know Rob has a DT checker bot, but its error output is practically
> line noise, and the error email doesn't even mention the
> DT_SCHEMA_FILES= make option (which is the only way to make the check
> not take *forever* to run). Absolutely nobody is going to look at those
> emails without already knowing the intricacies of DT bindings and the
> checker and not find them incredibly frustrating.

Ack

> 
> But it's not just the DT checker. That came after an argument where the
> MFD maintainer complained about the driver while offering no solution
> nor proposed path forward. I had to have an IRC conversation with him to
> work it out, after which he accepted one of the options I'd already
> proposed over email. If you have to change communication mediums to
> resolve an issue, that means your initial medium failed at its job.

Ack

> 
> Not to mention the random drive-by reviews, nitpicks, disagreements
> between maintainers about how to do things, or just plain cases of
> maintainers stubbornly being *wrong* and refusing to listen while
> everyone around them is telling them they're wrong (until someone above
> them in the maintainer tree tells them they're wrong - then they finally
> get it. If it takes someone in a position of authority telling you
> you're wrong for you to accept it, you're doing a poor job at your own
> position.)

Ack

> 
> And then there's the coding style. The rest of the world has
> standardized on formatting tools. Here, every subsystem maintainer has
> their own pet style you have to learn. "Please put your variables in
> reverse christmas tree order".

Ack here as well, and by the above Acks I meant: I agree, you are right. I
don't think we should discuss people's differences - I mean, different
people can give you different reviews, sometimes less and sometimes more
thorough. However, at least policies/processes and coding style should be
heavily unified.

> I once got a review comment that
> complained that I re-aligned an existing #define when I added another
> one (to keep them visually aligned, as the new one increased the
> required tab stop). Because "cleanup patches should be separate" (I
> didn't submit a cleanup patch after that, I just left the defines
> misaligned). So now I need to maintain a massive mental map of exactly
> what style conventions and patch breakup conventions every subsystem
> maintainer wants me to use.

Yep. I just have a doc instead of a mental map. :)

> 
> I'm so glad `make rustfmt` is a thing now. Maybe 50 years down the line,
> most of the kernel will have been rewritten in Rust and we'll finally
> fix just one of these problems /s

+1

> 
> Some of these things are, to some extent, a natural result of working
> with other humans. But the kernel community has both a number of humans
> harder to work with than is reasonable in their position, and an overall
> process that multiplies the resulting pain by an order of magnitude for
> everyone involved.
> 
>> While I understand your point about bikeshedding, I think your
>> previous bindings got pretty nice and fast reviews, so using a
>> non-building case as an example is a poor choice.
> 
> Yeah, after a while, because I learned how to do DT bindings the hard
> way after having to submit a bunch and getting everything wrong (and
> even then I still get the simplest things wrong, see: v4 here). Which is
> why I'll pick up after rmk's attempt and try again with macsmc at some
> point (probably after 6.1). But I'm not holding my breath that I won't
> need another half dozen rounds of bikeshedding.
> 
> When it takes 10 times more man-hours to upstream a driver than to
> reverse engineer and write it with zero documentation, never mind the
> calendar time it takes, something is very wrong. 

Yes, that's a point. Yet we (the community) still have another goal here
beyond the pure goal of the submitter. The submitter wants their stuff
upstream, so for the submitter this upstreaming effort is a bit wasted (as
I agree that reverse engineering should be the biggest task).

However, the community (expressed maybe mostly through maintainers) wants
something which will be maintainable and usable also for others, not
only for patch submitters.

Indeed, maybe the 10x difference between writing code and upstreaming it
is something to improve, but please do not forget that the process is
there to keep this big codebase manageable.

> I actively dread
> submitting new drivers to new subsystems or some existing ones now. How
> much pain will the next one be? Will I be asked to move files around 3
> times? Spend 4 rounds bikeshedding the DT schema? Think it's finally
> ready only for someone to show up and ask to change a major part of the
> design at the last minute?
> 
> And this all when we're actually submitting code of decent quality (I
> think I have enough experience not to submit crap most of the time). Now
> imagine how much worse this all is for a newbie who submits a
> well-intentioned but definitely not up to standard patch. There's a huge
> chance they'll give up before ever getting the submission through.

You say this like any newbie should be able to send a patch and get it
accepted right away, regardless of what is actually in that patch (not
up to standard). It's not a newbie's right. No one said it's an easy and
fast process... If you want easy and fast, do JavaScript... If you want to
make it easier and faster in the kernel, then we need more reviewers and
maintainers. Somehow many, many people want to send patches, but not
that many want to review them.

> 
> Again, I'll keep pushing and sending out patches, because this is
> important. But this is the reality. This is how bad it is. The process
> is broken, and everyone outside the kernel community can see that and
> has been saying that for ages. Just because some of us are willing to
> put up with the pain anyway doesn't mean it's working as intended.
Best regards,
Krzysztof
Hector Martin Nov. 29, 2022, 4:05 p.m. UTC | #8
On 30/11/2022 00.46, Krzysztof Kozlowski wrote:
> On 29/11/2022 16:17, Hector Martin wrote:
>> When it takes 10 times more man-hours to upstream a driver than to
>> reverse engineer and write it with zero documentation, never mind the
>> calendar time it takes, something is very wrong. 
> 
> Yes, that's a point. Yet we (the community) still have another goal here
> beyond the pure goal of the submitter. The submitter wants their stuff
> upstream, so for the submitter this upstreaming effort is a bit wasted (as
> I agree that reverse engineering should be the biggest task).
> 
> However, the community (expressed maybe mostly through maintainers) wants
> something which will be maintainable and usable also for others, not
> only for patch submitters.
> 
> Indeed, maybe the 10x difference between writing code and upstreaming it
> is something to improve, but please do not forget that the process is
> there to keep this big codebase manageable.

You're misunderstanding me here. The problem isn't the goal of keeping
things maintainable. The problem is that the process involved makes the
entire experience take much, much longer than it should. A more
efficient process could maintain the same (actual) code quality with a
lot less time wasted for everyone, both in process issues and in
bikeshedding details that don't actually increase code quality.

>> I actively dread
>> submitting new drivers to new subsystems or some existing ones now. How
>> much pain will the next one be? Will I be asked to move files around 3
>> times? Spend 4 rounds bikeshedding the DT schema? Think it's finally
>> ready only for someone to show up and ask to change a major part of the
>> design at the last minute?
>>
>> And this all when we're actually submitting code of decent quality (I
>> think I have enough experience not to submit crap most of the time). Now
>> imagine how much worse this all is for a newbie who submits a
>> well-intentioned but definitely not up to standard patch. There's a huge
>> chance they'll give up before ever getting the submission through.
> 
> You say this like any newbie should be able to send a patch and get it
> accepted right away, regardless of what is actually in that patch (not
> up to standard). It's not a newbie's right. No one said it's an easy and
> fast process... If you want easy and fast, do JavaScript... If you want to
> make it easier and faster in the kernel, then we need more reviewers and
> maintainers. Somehow many, many people want to send patches, but not
> that many want to review them.

Again, I'm not saying the bad code should go in. I'm saying the
*process* is so frustrating that most newbies will give up before the
code has a chance to evolve from bad to good and go in. The solution
isn't to let more bad code in, it's to make the process less painful.

And yes, some of it does mean either relaxing style rules or just
adopting a formatter, because let's face it: complaints that a line is
81 characters instead of 80 don't *actually* increase code quality to
any measurable extent, they just waste everyone's time, and if that
level of style pickiness is desired, that's what formatters are for.

I have personal experience with this with the Asahi Linux bootloader
(m1n1). We use a formatter for C; the CI tells people when they have to
run `make format` to fix their style and that solves all our C
formatting issues (incidentally, we also have a CI bot to check for
S-o-b lines - very handy). We don't use a formatter for Python, and as a
result, I don't impose overly strict rules on things like exactly how
many lines of whitespace should exist between functions or exactly how
expression continuations should be indented or what the maximum line
length is (within reason), because code doesn't have to look like it was
written by a homogeneous robot to still be completely understandable and
readable to humans using modern editing tools, and trying to teach every
contributor my own personal style (which isn't even 100% consistent, I'm
not a robot either!) would be a waste of my and their time. Most
contributors are capable of defaulting to a readable coding style
similar to the rest of the code around it, and it's rare I have to tell
someone to fix the formatting because something is clearly wrong (like
blatantly inconsistent block indentation).

- Hector
Ulf Hansson Nov. 29, 2022, 4:13 p.m. UTC | #9
On Tue, 29 Nov 2022 at 15:00, Hector Martin <marcan@marcan.st> wrote:
>
> On 29/11/2022 20.36, Ulf Hansson wrote:
> > On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
> >> +examples:
> >> +  - |
> >> +    // This example shows a single CPU per domain and 2 domains,
> >> +    // with two p-states per domain.
> >> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
> >> +    cpus {
> >> +      #address-cells = <2>;
> >> +      #size-cells = <0>;
> >> +
> >> +      cpu@0 {
> >> +        compatible = "apple,icestorm";
> >> +        device_type = "cpu";
> >> +        reg = <0x0 0x0>;
> >> +        operating-points-v2 = <&ecluster_opp>;
> >
> > To me, it looks like the operating-points-v2 phandle better belongs in
> > the performance-domains provider node. I mean, aren't the OPPs really a
> > description of the performance-domain provider?
> >
> > That said, I suggest we try to extend the generic performance-domain
> > binding [1] with an "operating-points-v2". In that way, we should
> > instead be able to reference it from this binding.
> >
> > In fact, that would be very similar to what already exists for the
> > generic power-domain binding [2]. I think it would be rather nice to
> > follow a similar pattern for the performance-domain binding.
>
> While I agree with the technical rationale and the proposed approach
> being better in principle...
>
> We're at v5 of bikeshedding this trivial driver's DT binding, and the
> comment could've been made at v3. To quote IRC just now:

It could and I certainly apologize for that.

It's simply been a busy period for me, so I haven't been able to look
closer at the DT bindings until now.

>
> > this way the machines will be obsolete before things are fully upstreamed
>
> I think it's long overdue for the kernel community to take a deep look
> at itself and its development and review process, because it is quite
> honestly insane how pathologically inefficient it is compared to,
> basically, every other large and healthy open source project of similar
> or even greater impact and scope.
>
> Cc Linus, because this is for your Mac and I assume you care. We're at
> v5 here for this silly driver. Meanwhile, rmk recently threw in the towel
> on upstreaming macsmc for us. We're trying, and I'll keep trying because
> I actually get paid (by very generous donors) to do this, but if I
> weren't I'd have given up a long time ago. And while I won't give up, I
> can't deny this situation affects my morale and willingness to keep
> pushing on upstreaming on a regular basis.
>
> Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
> in official release images (and since Linux is the source of truth for
> DT bindings, every time we re-bikeshed it we break their users because
> they, quite reasonably, aren't interested in waiting for us Linux
> slowpokes to figure it out first).
>
> Please, let's introspect about this for a moment. Something is deeply
> broken if people with 25+ years being an arch maintainer can't get a
> 700-line mfd driver upstreamed before giving up. I don't know how we
> expect to ever get a Rust GPU driver merged if it takes 6+ versions to
> upstream the world's easiest cpufreq hardware.
>
> - Hector

I didn't intend to bikeshed this, and I do understand your valid
concerns from the above statements.

Instead, my intent was to help by reviewing, simply because I care
about this too.

If you think incorporating the changes I proposed is too big a deal at
this point, let me not stand in the way of applying this. In the end,
it's the DT maintainers' decision.

Kind regards
Uffe
Rob Herring Nov. 29, 2022, 11:28 p.m. UTC | #10
On Wed, Nov 30, 2022 at 12:17:08AM +0900, Hector Martin wrote:
> On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
> > On 29/11/2022 15:00, Hector Martin wrote:
> >> On 29/11/2022 20.36, Ulf Hansson wrote:
> >> Please, let's introspect about this for a moment. Something is deeply
> >> broken if people with 25+ years being an arch maintainer can't get a
> > 
> > If an arch maintainer sends patches which do not build (make
> > dt_binding_check), then what exactly do you expect? Accept them just
> > because it is 25+ years of experience or a maintainer? So we have
> > different processes - for beginners, code should compile. For
> > experienced people, it does not have to build because otherwise they
> > will get discouraged?
> 
> I expect the process to not be so confusing and frustrating that a
> maintainer with 25+ years of experience gives up. That the bindings
> didn't pass the checker is beside the point. People say the Linux
> kernel community is hostile to newbies. This issue proves it's not just
> newbies, the process is failing even experienced folks.

IME, a lack of response is a bigger issue and more frustrating.

> On that specific issue, any other functional open source project would
> have the binding checks be a CI bot, with a friendly message telling you
> what to do to fix it, and it would re-run when you push to the PR again,
> which is a *much* lower friction action than sending a whole new patch
> series out for review via email (if you don't agree with this, then
> you're not the average contributor - the Linux kernel is by far the
> scariest major open source project to contribute to, and I think most
> people would agree with me on that).

We could probably add a $ci_provider job description to do that. In
fact, I did try that once [1]. The challenge would be what to run if
there are multiple maintainers doing something. Otherwise, it's a
maintainer creating their own thing, of which we already have too much.

> I know Rob has a DT checker bot, but its error output is practically
> line noise,

I'm not sure what to do there beyond the 'hint' lines I've added. It's 
kind of how json-schema functions unfortunately. I think it stems from 
each schema keyword being evaluated independently.

>  and the error email doesn't even mention the
> DT_SCHEMA_FILES= make option (which is the only way to make the check
> not take *forever* to run).

That's easy enough to add and a specific suggestion I can act on.

However, note that the full thing still has to be run because any 
schema change can affect any other example (which is a large part of 
why it's slow).

>  Absolutely nobody is going to look at those
> emails without already knowing the intricacies of DT bindings and the
> checker and not find them incredibly frustrating.

I don't know how else to enable someone who doesn't understand DT bindings
or json-schema to write DT bindings. I get that json-schema is not
something kernel developers typically know already. Trust me, the 
alternatives proposed for a schema over the years would have been 
much worse.

Rob

[1] https://lore.kernel.org/all/20181003222715.28667-1-robh@kernel.org/
Rob Herring Nov. 29, 2022, 11:30 p.m. UTC | #11
On Mon, 28 Nov 2022 23:29:10 +0900, Hector Martin wrote:
> This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
> The hardware has an independent controller per CPU cluster, and we
> represent them as unique nodes in order to accurately describe the
> hardware. The driver is responsible for binding them as a single cpufreq
> device (in the Linux cpufreq model).
> 
> Acked-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Hector Martin <marcan@marcan.st>
> ---
>  .../cpufreq/apple,cluster-cpufreq.yaml        | 117 ++++++++++++++++++
>  1 file changed, 117 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml

In case there's any doubt:

Reviewed-by: Rob Herring <robh@kernel.org>
Rob Herring Nov. 30, 2022, 7:50 p.m. UTC | #12
On Tue, Nov 29, 2022 at 5:28 PM Rob Herring <robh@kernel.org> wrote:
>
> On Wed, Nov 30, 2022 at 12:17:08AM +0900, Hector Martin wrote:
> > On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
> > > On 29/11/2022 15:00, Hector Martin wrote:
> > >> On 29/11/2022 20.36, Ulf Hansson wrote:
> > >> Please, let's introspect about this for a moment. Something is deeply
> > >> broken if people with 25+ years being an arch maintainer can't get a
> > >
> > > If an arch maintainer sends patches which do not build (make
> > > dt_binding_check), then what exactly do you expect? Accept them just
> > > because it is 25+ years of experience or a maintainer? So we have
> > > different processes - for beginners, code should compile. For
> > > experienced people, it does not have to build because otherwise they
> > > will get discouraged?
> >
> > I expect the process to not be so confusing and frustrating that a
> > maintainer with 25+ years of experience gives up. That the bindings
> > didn't pass the checker is beside the point. People say the Linux
> > kernel community is hostile to newbies. This issue proves it's not just
> > newbies, the process is failing even experienced folks.
>
> IME, a lack of response is a bigger issue and more frustrating.
>
> > On that specific issue, any other functional open source project would
> > have the binding checks be a CI bot, with a friendly message telling you
> > what to do to fix it, and it would re-run when you push to the PR again,
> > which is a *much* lower friction action than sending a whole new patch
> > series out for review via email (if you don't agree with this, then
> > you're not the average contributor - the Linux kernel is by far the
> > scariest major open source project to contribute to, and I think most
> > people would agree with me on that).
>
> We could probably add a $ci_provider job description to do that. In
> fact, I did try that once [1]. The challenge would be what to run if
> there are multiple maintainers doing something. Otherwise, it's a
> maintainer creating their own thing, of which we already have too much.

Actually, it turns out this pretty much already exists with my CI. I just
had to turn on merge requests on the project. If anyone actually uses
it, I'll have to tweak it to not do 'make dtbs_check' because that is
really slow. And this all runs on my machines, so that is another
issue. It is already running it for patches on the list (which is
a different CI job).

Just create an MR here:

https://gitlab.com/robherring/linux-dt/-/merge_requests

Rob

Patch

diff --git a/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml b/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
new file mode 100644
index 000000000000..76cb9726660e
--- /dev/null
+++ b/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
@@ -0,0 +1,117 @@ 
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/cpufreq/apple,cluster-cpufreq.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Apple SoC cluster cpufreq device
+
+maintainers:
+  - Hector Martin <marcan@marcan.st>
+
+description: |
+  Apple SoCs (e.g. M1) have a per-cpu-cluster DVFS controller that is part of
+  the cluster management register block. This binding uses the standard
+  operating-points-v2 table to define the CPU performance states, with the
+  opp-level property specifying the hardware p-state index for that level.
+
+properties:
+  compatible:
+    oneOf:
+      - items:
+          - enum:
+              - apple,t8103-cluster-cpufreq
+              - apple,t8112-cluster-cpufreq
+          - const: apple,cluster-cpufreq
+      - items:
+          - const: apple,t6000-cluster-cpufreq
+          - const: apple,t8103-cluster-cpufreq
+          - const: apple,cluster-cpufreq
+
+  reg:
+    maxItems: 1
+
+  '#performance-domain-cells':
+    const: 0
+
+required:
+  - compatible
+  - reg
+  - '#performance-domain-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    // This example shows a single CPU per domain and 2 domains,
+    // with two p-states per domain.
+    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
+    cpus {
+      #address-cells = <2>;
+      #size-cells = <0>;
+
+      cpu@0 {
+        compatible = "apple,icestorm";
+        device_type = "cpu";
+        reg = <0x0 0x0>;
+        operating-points-v2 = <&ecluster_opp>;
+        performance-domains = <&cpufreq_e>;
+      };
+
+      cpu@10100 {
+        compatible = "apple,firestorm";
+        device_type = "cpu";
+        reg = <0x0 0x10100>;
+        operating-points-v2 = <&pcluster_opp>;
+        performance-domains = <&cpufreq_p>;
+      };
+    };
+
+    ecluster_opp: opp-table-0 {
+      compatible = "operating-points-v2";
+      opp-shared;
+
+      opp01 {
+        opp-hz = /bits/ 64 <600000000>;
+        opp-level = <1>;
+        clock-latency-ns = <7500>;
+      };
+      opp02 {
+        opp-hz = /bits/ 64 <972000000>;
+        opp-level = <2>;
+        clock-latency-ns = <22000>;
+      };
+    };
+
+    pcluster_opp: opp-table-1 {
+      compatible = "operating-points-v2";
+      opp-shared;
+
+      opp01 {
+        opp-hz = /bits/ 64 <600000000>;
+        opp-level = <1>;
+        clock-latency-ns = <8000>;
+      };
+      opp02 {
+        opp-hz = /bits/ 64 <828000000>;
+        opp-level = <2>;
+        clock-latency-ns = <19000>;
+      };
+    };
+
+    soc {
+      #address-cells = <2>;
+      #size-cells = <2>;
+
+      cpufreq_e: performance-controller@210e20000 {
+        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
+        reg = <0x2 0x10e20000 0 0x1000>;
+        #performance-domain-cells = <0>;
+      };
+
+      cpufreq_p: performance-controller@211e20000 {
+        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
+        reg = <0x2 0x11e20000 0 0x1000>;
+        #performance-domain-cells = <0>;
+      };
+    };