[v12,2/5] dt-bindings: msm/dp: add data-lanes and link-frequencies property

Message ID 1670967848-31475-3-git-send-email-quic_khsieh@quicinc.com (mailing list archive)
State New, archived
Series Add data-lanes and link-frequencies to dp_out endpoint

Commit Message

Kuogee Hsieh Dec. 13, 2022, 9:44 p.m. UTC
Add both data-lanes and link-frequencies properties into endpoint

Changes in v7:
-- split yaml out of dtsi patch
-- link-frequencies from link rate to symbol rate
-- deprecation of old data-lanes property

Changes in v8:
-- correct Bjorn mail address to kernel.org

Changes in v10:
-- add menu item to data-lanes and link-frequencies

Changes in v11:
-- add endpoint property at port@1

Changes in v12:
-- use enum for item at data-lanes and link-frequencies

Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
---
 .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

Comments

Dmitry Baryshkov Dec. 13, 2022, 10:02 p.m. UTC | #1
On 13 December 2022 23:44:05 EET, Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>Add both data-lanes and link-frequencies properties into endpoint
>
>Changes in v7:
>-- split yaml out of dtsi patch
>-- link-frequencies from link rate to symbol rate
>-- deprecation of old data-lanes property
>
>Changes in v8:
>-- correct Bjorn mail address to kernel.org
>
>Changes in v10:
>-- add menu item to data-lanes and link-frequencies
>
>Changes in v11:
>-- add endpoint property at port@1
>
>Changes in v12:
>-- use enum for item at data-lanes and link-frequencies

This is not a full list of changes

>
>Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
>---
> .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
>diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>index f2515af..8fb9fa5 100644
>--- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>+++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>@@ -81,6 +81,7 @@ properties:
> 
>   data-lanes:
>     $ref: /schemas/types.yaml#/definitions/uint32-array
>+    deprecated: true
>     minItems: 1
>     maxItems: 4
>     items:
>@@ -96,14 +97,37 @@ properties:
> 
>   ports:
>     $ref: /schemas/graph.yaml#/properties/ports
>+
>     properties:
>       port@0:
>-        $ref: /schemas/graph.yaml#/properties/port
>+        $ref: "/schemas/graph.yaml#/$defs/port-base"
>         description: Input endpoint of the controller
>+        properties:
>+          endpoint:
>+            $ref: /schemas/media/video-interfaces.yaml#


I'd keep it as is. There are no video properties on this side of the graph.

> 
>       port@1:
>-        $ref: /schemas/graph.yaml#/properties/port
>+        $ref: "/schemas/graph.yaml#/$defs/port-base"
>         description: Output endpoint of the controller
>+        properties:
>+          endpoint:
>+            $ref: /schemas/media/video-interfaces.yaml#
>+            properties:
>+              data-lanes:
>+                minItems: 1
>+                maxItems: 4
>+                items:
>+                  enum: [ 0, 1, 2, 3 ]
>+
>+              link-frequencies:
>+                minItems: 1
>+                maxItems: 4
>+                items:
>+                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
>+
>+    required:
>+      - port@0
>+      - port@1
> 
> required:
>   - compatible
>@@ -193,6 +217,8 @@ examples:
>                 reg = <1>;
>                 endpoint {
>                     remote-endpoint = <&typec>;
>+                    data-lanes = <0 1>;
>+                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>; 
>                 };
>             };
>         };
Stephen Boyd Dec. 13, 2022, 11:06 p.m. UTC | #2
Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> Add both data-lanes and link-frequencies properties into endpoint

Why do we care? Please tell us why it's important.

>
> Changes in v7:
> -- split yaml out of dtsi patch
> -- link-frequencies from link rate to symbol rate
> -- deprecation of old data-lanes property
>
> Changes in v8:
> -- correct Bjorn mail address to kernel.org
>
> Changes in v10:
> -- add menu item to data-lanes and link-frequencies
>
> Changes in v11:
> -- add endpoint property at port@1
>
> Changes in v12:
> -- use enum for item at data-lanes and link-frequencies
>
> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
                                                       ^
Stray ` here? -----------------------------------------/

> ---
>  .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
> index f2515af..8fb9fa5 100644
> --- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
> +++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
> @@ -96,14 +97,37 @@ properties:
>
>    ports:
>      $ref: /schemas/graph.yaml#/properties/ports
> +
>      properties:
>        port@0:
> -        $ref: /schemas/graph.yaml#/properties/port
> +        $ref: "/schemas/graph.yaml#/$defs/port-base"
>          description: Input endpoint of the controller
> +        properties:
> +          endpoint:
> +            $ref: /schemas/media/video-interfaces.yaml#
>
>        port@1:
> -        $ref: /schemas/graph.yaml#/properties/port
> +        $ref: "/schemas/graph.yaml#/$defs/port-base"

I thought the quotes weren't needed?

>          description: Output endpoint of the controller
> +        properties:
> +          endpoint:
> +            $ref: /schemas/media/video-interfaces.yaml#

Does this need 'unevaluatedProperties: false' here?

> +            properties:
> +              data-lanes:
> +                minItems: 1
> +                maxItems: 4
> +                items:
> +                  enum: [ 0, 1, 2, 3 ]
> +
> +              link-frequencies:
> +                minItems: 1
> +                maxItems: 4
> +                items:
> +                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
> +
> +    required:
> +      - port@0
> +      - port@1
>
>  required:
>    - compatible
> @@ -193,6 +217,8 @@ examples:
>                  reg = <1>;
>                  endpoint {
>                      remote-endpoint = <&typec>;
> +                    data-lanes = <0 1>;
> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>                  };

So far we haven't used the output port on the DP controller in DT.

I'm still not clear on what we should do in general for DP because
there's a PHY that actually controls a lane count and lane mapping. In
my mental model of the SoC, this DP controller's output port is
connected to the DP PHY, which then sends the DP lanes out of the SoC to
the next downstream device (i.e. a DP connector or type-c muxer). Having
a remote-endpoint property with a phandle to typec doesn't fit my mental
model. I'd expect it to be the typec PHY.

That brings up the question: when we have 2 lanes vs. 4 lanes will we
duplicate the data-lanes property in the PHY binding? I suspect we'll
have to. Hopefully that sort of duplication is OK?

Similarly, we may have a redriver that limits the link-frequencies
property further (e.g. only support <= 2.7GHz). Having multiple
link-frequencies along the graph is OK, right? And isn't the
link-frequencies property known here by fact that the DP controller
tells us which SoC this controller is for, and thus we already know the
supported link frequencies?

Finally, I wonder if we should put any of this in the DP controller's
output endpoint, or if we can put these sorts of properties in the DP
PHY binding directly? Can't we do that and then when the DP controller
tries to set 4 lanes, the PHY immediately fails the call and the link
training algorithm does its thing and tries fewer lanes? And similarly,
if link-frequencies were in the PHY's binding, the PHY could fail to set
those frequencies during link training, returning an error to the DP
controller, letting the training move on to a lower frequency. If we did
that this patch series would largely be about modifying the PHY binding,
updating the PHY driver to enforce constraints, and handling errors
during link training in the DP controller (which may already be done? I
didn't check).
Kuogee Hsieh Dec. 14, 2022, 10:56 p.m. UTC | #3
On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>> Add both data-lanes and link-frequencies properties into endpoint
> Why do we care? Please tell us why it's important.
>
>> Changes in v7:
>> -- split yaml out of dtsi patch
>> -- link-frequencies from link rate to symbol rate
>> -- deprecation of old data-lanes property
>>
>> Changes in v8:
>> -- correct Bjorn mail address to kernel.org
>>
>> Changes in v10:
>> -- add menu item to data-lanes and link-frequencies
>>
>> Changes in v11:
>> -- add endpoint property at port@1
>>
>> Changes in v12:
>> -- use enum for item at data-lanes and link-frequencies
>>
>> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>`
>                                                         ^
> Stray ` here? -----------------------------------------/
>
>> ---
>>   .../bindings/display/msm/dp-controller.yaml        | 30 ++++++++++++++++++++--
>>   1 file changed, 28 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>> index f2515af..8fb9fa5 100644
>> --- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>> +++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
>> @@ -96,14 +97,37 @@ properties:
>>
>>     ports:
>>       $ref: /schemas/graph.yaml#/properties/ports
>> +
>>       properties:
>>         port@0:
>> -        $ref: /schemas/graph.yaml#/properties/port
>> +        $ref: "/schemas/graph.yaml#/$defs/port-base"
>>           description: Input endpoint of the controller
>> +        properties:
>> +          endpoint:
>> +            $ref: /schemas/media/video-interfaces.yaml#
>>
>>         port@1:
>> -        $ref: /schemas/graph.yaml#/properties/port
>> +        $ref: "/schemas/graph.yaml#/$defs/port-base"
> I thought the quotes weren't needed?
>
>>           description: Output endpoint of the controller
>> +        properties:
>> +          endpoint:
>> +            $ref: /schemas/media/video-interfaces.yaml#
> Does this need 'unevaluatedProperties: false' here?
>
>> +            properties:
>> +              data-lanes:
>> +                minItems: 1
>> +                maxItems: 4
>> +                items:
>> +                  enum: [ 0, 1, 2, 3 ]
>> +
>> +              link-frequencies:
>> +                minItems: 1
>> +                maxItems: 4
>> +                items:
>> +                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
>> +
>> +    required:
>> +      - port@0
>> +      - port@1
>>
>>   required:
>>     - compatible
>> @@ -193,6 +217,8 @@ examples:
>>                   reg = <1>;
>>                   endpoint {
>>                       remote-endpoint = <&typec>;
>> +                    data-lanes = <0 1>;
>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>                   };
> So far we haven't used the output port on the DP controller in DT.
>
> I'm still not clear on what we should do in general for DP because
> there's a PHY that actually controls a lane count and lane mapping. In
> my mental model of the SoC, this DP controller's output port is
> connected to the DP PHY, which then sends the DP lanes out of the SoC to
> the next downstream device (i.e. a DP connector or type-c muxer). Having
> a remote-endpoint property with a phandle to typec doesn't fit my mental
> model. I'd expect it to be the typec PHY.
ack
>
> That brings up the question: when we have 2 lanes vs. 4 lanes will we
> duplicate the data-lanes property in the PHY binding? I suspect we'll
> have to. Hopefully that sort of duplication is OK?
Currently we have a limitation of reserving 2 data lanes for usb2; I am
not sure duplicating to 4 lanes will work automatically.
>
> Similarly, we may have a redriver that limits the link-frequencies
> property further (e.g. only support <= 2.7GHz). Having multiple
> link-frequencies along the graph is OK, right? And isn't the
> link-frequencies property known here by fact that the DP controller
> tells us which SoC this controller is for, and thus we already know the
> supported link frequencies?
>
> Finally, I wonder if we should put any of this in the DP controller's
> output endpoint, or if we can put these sorts of properties in the DP
> PHY binding directly? Can't we do that and then when the DP controller
> tries to set 4 lanes, the PHY immediately fails the call and the link
> training algorithm does its thing and tries fewer lanes? And similarly,
> if link-frequencies were in the PHY's binding, the PHY could fail to set
> those frequencies during link training, returning an error to the DP
> controller, letting the training move on to a lower frequency. If we did
> that this patch series would largely be about modifying the PHY binding,
> updating the PHY driver to enforce constraints, and handling errors
> during link training in the DP controller (which may already be done? I
> didn't check).


The PHY/PLL have different configurations based on the link lanes and rate.

They have to be set up before link training can start.

Once link training starts, there are no interactions between the
controller and the PHY during the link training session.

Link training only happens between the DP controller and the sink, since
the link status is reported by the sink (read back directly from the
sink's DPCD registers).

To achieve link symbol lock, link training starts by reducing the link
rate down to the lowest rate; if that still fails, it reduces the lane
count and starts training again at the highest rate.

It repeats the same process down to the lowest lane count (one lane); if
that still fails, it gives up and declares link training failed.

Therefore I think adding the data-lanes and link-frequencies properties
directly to the DP PHY binding will not help.
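
The fallback order described above amounts to a pair of nested loops; a
minimal sketch in illustrative C (not the actual msm/dp code; train_link()
is a hypothetical stand-in for the DPCD-based handshake with the sink):

#include <stdbool.h>

/* Hypothetical stand-in: runs the DPCD handshake, true when the link locks. */
static bool train_link(unsigned int lanes, unsigned int rate_mbps)
{
	(void)lanes;
	(void)rate_mbps;
	return false; /* placeholder */
}

static bool train_with_fallback(unsigned int max_lanes)
{
	/* HBR3, HBR2, HBR, RBR */
	static const unsigned int rates_mbps[] = { 8100, 5400, 2700, 1620 };
	unsigned int lanes, i;

	/* Try every rate at the current lane count, then drop the lane
	 * count and start over from the highest rate. */
	for (lanes = max_lanes; lanes >= 1; lanes /= 2)
		for (i = 0; i < sizeof(rates_mbps) / sizeof(rates_mbps[0]); i++)
			if (train_link(lanes, rates_mbps[i]))
				return true;

	return false; /* one lane at the lowest rate still failed: give up */
}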
Stephen Boyd Dec. 15, 2022, 12:38 a.m. UTC | #4
Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>
> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> >> Add both data-lanes and link-frequencies properties into endpoint
> > Why do we care? Please tell us why it's important.

Any response?

> >> @@ -193,6 +217,8 @@ examples:
> >>                   reg = <1>;
> >>                   endpoint {
> >>                       remote-endpoint = <&typec>;
> >> +                    data-lanes = <0 1>;
> >> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> >>                   };
> > So far we haven't used the output port on the DP controller in DT.
> >
> > I'm still not clear on what we should do in general for DP because
> > there's a PHY that actually controls a lane count and lane mapping. In
> > my mental model of the SoC, this DP controller's output port is
> > connected to the DP PHY, which then sends the DP lanes out of the SoC to
> > the next downstream device (i.e. a DP connector or type-c muxer). Having
> > a remote-endpoint property with a phandle to typec doesn't fit my mental
> > model. I'd expect it to be the typec PHY.
> ack
> >
> > That brings up the question: when we have 2 lanes vs. 4 lanes will we
> > duplicate the data-lanes property in the PHY binding? I suspect we'll
> > have to. Hopefully that sort of duplication is OK?
> Currently we have a limitation of reserving 2 data lanes for usb2; I am
> not sure duplicating to 4 lanes will work automatically.
> >
> > Similarly, we may have a redriver that limits the link-frequencies
> > property further (e.g. only support <= 2.7GHz). Having multiple
> > link-frequencies along the graph is OK, right? And isn't the
> > link-frequencies property known here by fact that the DP controller
> > tells us which SoC this controller is for, and thus we already know the
> > supported link frequencies?
> >
> > Finally, I wonder if we should put any of this in the DP controller's
> > output endpoint, or if we can put these sorts of properties in the DP
> > PHY binding directly? Can't we do that and then when the DP controller
> > tries to set 4 lanes, the PHY immediately fails the call and the link
> > training algorithm does its thing and tries fewer lanes? And similarly,
> > if link-frequencies were in the PHY's binding, the PHY could fail to set
> > those frequencies during link training, returning an error to the DP
> > controller, letting the training move on to a lower frequency. If we did
> > that this patch series would largely be about modifying the PHY binding,
> > updating the PHY driver to enforce constraints, and handling errors
> > during link training in the DP controller (which may already be done? I
> > didn't check).
>
>
> The PHY/PLL have different configurations based on the link lanes and rate.
>
> They have to be set up before link training can start.
>
> Once link training starts, there are no interactions between the
> controller and the PHY during the link training session.

What do you mean? The DP controller calls phy_configure() and changes
the link rate. The return value from phy_configure() should be checked
and link training should skip link rates that aren't supported and/or
number of lanes that aren't supported.

>
> Link training only happens between the DP controller and the sink, since
> the link status is reported by the sink (read back directly from the
> sink's DPCD registers).
>
> To achieve link symbol lock, link training starts by reducing the link
> rate down to the lowest rate; if that still fails, it reduces the lane
> count and starts training again at the highest rate.
>
> It repeats the same process down to the lowest lane count (one lane); if
> that still fails, it gives up and declares link training failed.

Yes, that describes the link training algorithm. I don't see why
phy_configure() return value can't be checked and either number of lanes
or link frequencies be checked. If only two lanes are supported, then
phy_configure() will fail for the 4 link rates and the algorithm will
reduce the number of lanes and go back to the highest rate. Then when
the highest rate isn't supported it will drop link rate until the link
rate is supported.

>
> Therefore I think adding the data-lanes and link-frequencies properties
> directly to the DP PHY binding will not help.
>

I didn't follow your logic. Sorry.
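
A sketch of the check being suggested, using the generic PHY API's
phy_configure(); dp_try_link_training() is a hypothetical stand-in and
the error codes are illustrative:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/phy/phy.h>

/* Hypothetical helper: runs the DPCD handshake, returns 0 once the link locks. */
static int dp_try_link_training(unsigned int lanes, unsigned int rate_mbps)
{
	return -EIO; /* placeholder */
}

static int dp_train_with_phy_check(struct phy *phy)
{
	static const unsigned int rates_mbps[] = { 8100, 5400, 2700, 1620 };
	union phy_configure_opts opts = { };
	unsigned int lanes, i;

	for (lanes = 4; lanes >= 1; lanes /= 2) {
		for (i = 0; i < ARRAY_SIZE(rates_mbps); i++) {
			opts.dp.lanes = lanes;
			opts.dp.set_lanes = 1;
			opts.dp.link_rate = rates_mbps[i];
			opts.dp.set_rate = 1;

			/* Skip lane/rate combinations the PHY rejects. */
			if (phy_configure(phy, &opts))
				continue;

			if (!dp_try_link_training(lanes, rates_mbps[i]))
				return 0;
		}
	}

	return -EIO; /* nothing the PHY accepts could be trained */
}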
Kuogee Hsieh Dec. 15, 2022, 5:08 p.m. UTC | #5
On 12/14/2022 4:38 PM, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>>>> Add both data-lanes and link-frequencies properties into endpoint
>>> Why do we care? Please tell us why it's important.
> Any response?
Yes, I did that in my local patch already.
>
>>>> @@ -193,6 +217,8 @@ examples:
>>>>                    reg = <1>;
>>>>                    endpoint {
>>>>                        remote-endpoint = <&typec>;
>>>> +                    data-lanes = <0 1>;
>>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>>>                    };
>>> So far we haven't used the output port on the DP controller in DT.
>>>
>>> I'm still not clear on what we should do in general for DP because
>>> there's a PHY that actually controls a lane count and lane mapping. In
>>> my mental model of the SoC, this DP controller's output port is
>>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
>>> the next downstream device (i.e. a DP connector or type-c muxer). Having
>>> a remote-endpoint property with a phandle to typec doesn't fit my mental
>>> model. I'd expect it to be the typec PHY.
>> ack
>>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
>>> duplicate the data-lanes property in the PHY binding? I suspect we'll
>>> have to. Hopefully that sort of duplication is OK?
>> Currently we have a limitation of reserving 2 data lanes for usb2; I am
>> not sure duplicating to 4 lanes will work automatically.
>>> Similarly, we may have a redriver that limits the link-frequencies
>>> property further (e.g. only support <= 2.7GHz). Having multiple
>>> link-frequencies along the graph is OK, right? And isn't the
>>> link-frequencies property known here by fact that the DP controller
>>> tells us which SoC this controller is for, and thus we already know the
>>> supported link frequencies?
>>>
>>> Finally, I wonder if we should put any of this in the DP controller's
>>> output endpoint, or if we can put these sorts of properties in the DP
>>> PHY binding directly? Can't we do that and then when the DP controller
>>> tries to set 4 lanes, the PHY immediately fails the call and the link
>>> training algorithm does its thing and tries fewer lanes? And similarly,
>>> if link-frequencies were in the PHY's binding, the PHY could fail to set
>>> those frequencies during link training, returning an error to the DP
>>> controller, letting the training move on to a lower frequency. If we did
>>> that this patch series would largely be about modifying the PHY binding,
>>> updating the PHY driver to enforce constraints, and handling errors
>>> during link training in the DP controller (which may already be done? I
>>> didn't check).
>>
>> The PHY/PLL have different configurations based on the link lanes and rate.
>>
>> They have to be set up before link training can start.
>>
>> Once link training starts, there are no interactions between the
>> controller and the PHY during the link training session.
> What do you mean? The DP controller calls phy_configure() and changes
> the link rate. The return value from phy_configure() should be checked
> and link training should skip link rates that aren't supported and/or
> number of lanes that aren't supported.
>
>> Link training only happens between the DP controller and the sink, since
>> the link status is reported by the sink (read back directly from the
>> sink's DPCD registers).
>>
>> To achieve link symbol lock, link training starts by reducing the link
>> rate down to the lowest rate; if that still fails, it reduces the lane
>> count and starts training again at the highest rate.
>>
>> It repeats the same process down to the lowest lane count (one lane); if
>> that still fails, it gives up and declares link training failed.
> Yes, that describes the link training algorithm. I don't see why
> phy_configure() return value can't be checked and either number of lanes
> or link frequencies be checked. If only two lanes are supported, then
> phy_configure() will fail for the 4 link rates and the algorithm will
> reduce the number of lanes and go back to the highest rate. Then when
> the highest rate isn't supported it will drop link rate until the link
> rate is supported.
>
>> Therefore I think adding the data-lanes and link-frequencies properties
>> directly to the DP PHY binding will not help.
>>
> I didn't follow your logic. Sorry.

Sorry, I probably did not understand your proposal clearly.

1) Move both the data-lanes and link-frequencies properties from the DP
controller endpoint to the PHY.

2) phy_configure() returns success if both the data-lanes and link
frequencies are supported; otherwise it returns failure.

Are the above two summary items correct?

Currently phy_configure() is part of the link training process and is
called when the link lanes or rate change.

However, since the current phy_configure() implementation always returns
0, the return value is not checked.

This proposal is new; can we discuss it in more detail at a meeting and
decide whether to implement it?

Meanwhile, can we merge the current implementation (both data-lanes and
link-frequencies at the DP controller endpoint) first?
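
For the discussion, item 2) could look roughly like this inside the PHY
driver's .configure() op; a sketch only, with the limit fields on the
private struct assumed (hypothetically) to be parsed from the PHY's DT
node:

#include <linux/errno.h>
#include <linux/phy/phy.h>

/* Hypothetical per-PHY limits, presumably parsed from the PHY's DT node. */
struct dp_phy_priv {
	unsigned int max_lanes;     /* e.g. 2 when two lanes are reserved for USB */
	unsigned int max_rate_mbps; /* e.g. 5400 on an HBR2-limited design */
};

static int dp_phy_validate_cfg(struct dp_phy_priv *priv,
			       const struct phy_configure_opts_dp *dp)
{
	if (dp->set_lanes && dp->lanes > priv->max_lanes)
		return -EINVAL; /* caller drops the lane count and retries */
	if (dp->set_rate && dp->link_rate > priv->max_rate_mbps)
		return -EINVAL; /* caller falls back to a lower link rate */
	return 0;
}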
Stephen Boyd Dec. 15, 2022, 8:12 p.m. UTC | #6
Quoting Kuogee Hsieh (2022-12-15 09:08:04)
>
> On 12/14/2022 4:38 PM, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> >>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> >
> >> Therefore I think adding the data-lanes and link-frequencies properties
> >> directly to the DP PHY binding will not help.
> >>
> > I didn't follow your logic. Sorry.
>
> Sorry, I probably did not understand your proposal clearly.
>
> 1) Move both the data-lanes and link-frequencies properties from the DP
> controller endpoint to the PHY.
>
> 2) phy_configure() returns success if both the data-lanes and link
> frequencies are supported; otherwise it returns failure.
>
> Are the above two summary items correct?

Yes.

>
> Currently phy_configure() is part of the link training process and is
> called when the link lanes or rate change.
>
> However, since the current phy_configure() implementation always returns
> 0, the return value is not checked.
>
> This proposal is new; can we discuss it in more detail at a meeting and
> decide whether to implement it?
>
> Meanwhile, can we merge the current implementation (both data-lanes and
> link-frequencies at the DP controller endpoint) first?
>

I don't think we can merge this patch because it depends on a DT binding
change. If the PHY approach works then I'd prefer we just go with that.
Dmitry Baryshkov Dec. 15, 2022, 9:12 p.m. UTC | #7
On 15/12/2022 02:38, Stephen Boyd wrote:
> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>
>> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
>>>> Add both data-lanes and link-frequencies properties into endpoint
>>> Why do we care? Please tell us why it's important.
> 
> Any response?
> 
>>>> @@ -193,6 +217,8 @@ examples:
>>>>                    reg = <1>;
>>>>                    endpoint {
>>>>                        remote-endpoint = <&typec>;
>>>> +                    data-lanes = <0 1>;
>>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
>>>>                    };
>>> So far we haven't used the output port on the DP controller in DT.
>>>
>>> I'm still not clear on what we should do in general for DP because
>>> there's a PHY that actually controls a lane count and lane mapping. In
>>> my mental model of the SoC, this DP controller's output port is
>>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
>>> the next downstream device (i.e. a DP connector or type-c muxer). Having
>>> a remote-endpoint property with a phandle to typec doesn't fit my mental
>>> model. I'd expect it to be the typec PHY.
>> ack
>>>
>>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
>>> duplicate the data-lanes property in the PHY binding? I suspect we'll
>>> have to. Hopefully that sort of duplication is OK?
>> Currently we have a limitation of reserving 2 data lanes for usb2; I am
>> not sure duplicating to 4 lanes will work automatically.
>>>
>>> Similarly, we may have a redriver that limits the link-frequencies
>>> property further (e.g. only support <= 2.7GHz). Having multiple
>>> link-frequencies along the graph is OK, right? And isn't the
>>> link-frequencies property known here by fact that the DP controller
>>> tells us which SoC this controller is for, and thus we already know the
>>> supported link frequencies?
>>>
>>> Finally, I wonder if we should put any of this in the DP controller's
>>> output endpoint, or if we can put these sorts of properties in the DP
>>> PHY binding directly? Can't we do that and then when the DP controller
>>> tries to set 4 lanes, the PHY immediately fails the call and the link
>>> training algorithm does its thing and tries fewer lanes? And similarly,
>>> if link-frequencies were in the PHY's binding, the PHY could fail to set
>>> those frequencies during link training, returning an error to the DP
>>> controller, letting the training move on to a lower frequency. If we did
>>> that this patch series would largely be about modifying the PHY binding,
>>> updating the PHY driver to enforce constraints, and handling errors
>>> during link training in the DP controller (which may already be done? I
>>> didn't check).
>>
>>
>> The PHY/PLL have different configurations based on the link lanes and rate.
>>
>> They have to be set up before link training can start.
>>
>> Once link training starts, there are no interactions between the
>> controller and the PHY during the link training session.
> 
> What do you mean? The DP controller calls phy_configure() and changes
> the link rate. The return value from phy_configure() should be checked
> and link training should skip link rates that aren't supported and/or
> number of lanes that aren't supported.

I'd toss another coin into the argument. We have previously discussed 
using the link-frequencies property in the context of limiting link 
speeds for the DSI. There we have both hardware (SoC) limitations and 
the board limitations as in some cases the DSI lanes can not sustain 
some high rate. I still hope for these patches to materialize at some point.

For the DP this is more or less the same story. We have the hardware 
(SoC, PHY, etc) limitations, but also we have the board/device 
limitations. For example some of the board might not be able to support 
HBR3 e.g. because of the PCB design. And while it might be logical to 
also add the 'max bit rate' support to the eDP & combo PHYs, it 
definitely makes sense to be able to limit the rate on the DP <-> 
`something' link.

Now, for all the practical purposes this `something' for the DP is the 
DP connector, the eDP panel or the USB-C mux (with the possible 
redrivers in the middle).

Thus I'd support Kuogee's proposal to have link-frequencies in the DP's 
outbound endpoint. This is the link which will be driven by the data 
stream from the Linux point of view. The PHY is linked through the 
'phys' property, but it doesn't participate in the USB-C (or in the 
connector/panel) graph.

Now let's discuss the data lanes. Currently we have them in the DP 
property itself. Please correct me if I'm wrong, but I think that we can 
drop it for all the practical purposes. Judging by the DP compat string 
the driver can determine if it uses 2 lanes (eDP) or 4 lanes 
(full-featured DP). In case of USB-C when the altmode dictates whether 
to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the 
mode and pin configuration, then inform the DP controller about the 
selected amount of lanes. Then DP informs the PHY about the selection 
(note, PHY doesn't have control at all in this scenario).

The only problematic case is the mixed mode ports, which if I understand 
correctly, can be configured either to eDP or DP modes. I'm not sure who 
specifies and limits the amount of lanes available to the DP controller.
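
A rough sketch of how a controller driver could consume the proposed
output-endpoint property using the standard OF graph helpers
(illustrative only, not the actual msm/dp parser):

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_graph.h>

static int dp_parse_wired_lanes(struct device_node *dp_node, u32 *map, int max)
{
	struct device_node *ep;
	int n;

	/* port@1: the controller's output endpoint */
	ep = of_graph_get_endpoint_by_regs(dp_node, 1, -1);
	if (!ep)
		return -ENODEV;

	/* The entry count gives the lane count, the values give the mapping. */
	n = of_property_count_u32_elems(ep, "data-lanes");
	if (n < 1 || n > max ||
	    of_property_read_u32_array(ep, "data-lanes", map, n))
		n = -EINVAL;

	of_node_put(ep);
	return n; /* 1, 2 or 4 wired lanes */
}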
Doug Anderson Dec. 15, 2022, 10:57 p.m. UTC | #8
Hi,

On Thu, Dec 15, 2022 at 1:12 PM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> On 15/12/2022 02:38, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >>
> >> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> >>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> >>>> Add both data-lanes and link-frequencies properties into endpoint
> >>> Why do we care? Please tell us why it's important.
> >
> > Any response?
> >
> >>>> @@ -193,6 +217,8 @@ examples:
> >>>>                    reg = <1>;
> >>>>                    endpoint {
> >>>>                        remote-endpoint = <&typec>;
> >>>> +                    data-lanes = <0 1>;
> >>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> >>>>                    };
> >>> So far we haven't used the output port on the DP controller in DT.
> >>>
> >>> I'm still not clear on what we should do in general for DP because
> >>> there's a PHY that actually controls a lane count and lane mapping. In
> >>> my mental model of the SoC, this DP controller's output port is
> >>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
> >>> the next downstream device (i.e. a DP connector or type-c muxer). Having
> >>> a remote-endpoint property with a phandle to typec doesn't fit my mental
> >>> model. I'd expect it to be the typec PHY.
> >> ack
> >>>
> >>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
> >>> duplicate the data-lanes property in the PHY binding? I suspect we'll
> >>> have to. Hopefully that sort of duplication is OK?
> >> Currently we have a limitation of reserving 2 data lanes for usb2; I am
> >> not sure duplicating to 4 lanes will work automatically.
> >>>
> >>> Similarly, we may have a redriver that limits the link-frequencies
> >>> property further (e.g. only support <= 2.7GHz). Having multiple
> >>> link-frequencies along the graph is OK, right? And isn't the
> >>> link-frequencies property known here by fact that the DP controller
> >>> tells us which SoC this controller is for, and thus we already know the
> >>> supported link frequencies?
> >>>
> >>> Finally, I wonder if we should put any of this in the DP controller's
> >>> output endpoint, or if we can put these sorts of properties in the DP
> >>> PHY binding directly? Can't we do that and then when the DP controller
> >>> tries to set 4 lanes, the PHY immediately fails the call and the link
> >>> training algorithm does its thing and tries fewer lanes? And similarly,
> >>> if link-frequencies were in the PHY's binding, the PHY could fail to set
> >>> those frequencies during link training, returning an error to the DP
> >>> controller, letting the training move on to a lower frequency. If we did
> >>> that this patch series would largely be about modifying the PHY binding,
> >>> updating the PHY driver to enforce constraints, and handling errors
> >>> during link training in the DP controller (which may already be done? I
> >>> didn't check).
> >>
> >>
> >> The PHY/PLL have different configurations based on the link lanes and rate.
> >>
> >> They have to be set up before link training can start.
> >>
> >> Once link training starts, there are no interactions between the
> >> controller and the PHY during the link training session.
> >
> > What do you mean? The DP controller calls phy_configure() and changes
> > the link rate. The return value from phy_configure() should be checked
> > and link training should skip link rates that aren't supported and/or
> > number of lanes that aren't supported.
>
> I'd toss another coin into the argument. We have previously discussed
> using the link-frequencies property in the context of limiting link
> speeds for the DSI. There we have both hardware (SoC) limitations and
> the board limitations as in some cases the DSI lanes can not sustain
> some high rate. I still hope for these patches to materialize at some point.
>
> For the DP this is more or less the same story. We have the hardware
> (SoC, PHY, etc) limitations, but also we have the board/device
> limitations. For example some of the board might not be able to support
> HBR3 e.g. because of the PCB design. And while it might be logical to
> also add the 'max bit rate' support to the eDP & combo PHYs, it
> definitely makes sense to be able to limit the rate on the DP <->
> `something' link.
>
> Now, for all the practical purposes this `something' for the DP is the
> DP connector, the eDP panel or the USB-C mux (with the possible
> redrivers in the middle).
>
> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> outbound endpoint. This is the link which will be driven by the data
> stream from the Linux point of view. The PHY is linked through the
> 'phys' property, but it doesn't participate in the USB-C (or in the
> connector/panel) graph.
>
> Now let's discuss the data lanes. Currently we have them in the DP
> property itself. Please correct me if I'm wrong, but I think that we can
> drop it for all the practical purposes. Judging by the DP compat string
> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> (full-featured DP). In case of USB-C when the altmode dictates whether
> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> mode and pin configuration, then inform the DP controller about the
> selected amount of lanes. Then DP informs the PHY about the selection
> (note, PHY doesn't have control at all in this scenario).
>
> The only problematic case is the mixed mode ports, which if I understand
> correctly, can be configured either to eDP or DP modes. I'm not sure who
> specifies and limits the amount of lanes available to the DP controller.

For the most part, I'll let others debate the best way to represent
this data, but I'll comment that the above statement isn't really
correct. Specifically it's wrong to say that eDP is 2 lanes and DP is
2/4 lanes. I will say:

* An eDP display could support 1, 2, or 4 lanes.
* An eDP controller could support 1, 2, or 4 lanes.
* A board may wire up 1, 2, or 4 lanes.

Thus if you have an eDP controller that should be capable of 4 lanes
and an eDP panel that says it's capable of 4 lanes, you still might
need to use a 2 lane configuration because a board only wired up 2 of
the lanes. IMO the number of lanes that are wired up should be in the
device tree somewhere because that's where this board limit should be
defined.

Similarly, you could have an eDP controller that supports 4 lanes, you
may wire 4 lanes off the board, but an eDP panel may only support 1 or
2 lanes. This is handled by querying the panel and asking how many
lanes it supports.

-Doug
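
In other words, the usable lane count is a minimum over three
independent limits; a trivial sketch, with the assumed source of each
value noted in the comments:

/* The usable lane count is the minimum of three independent limits. */
static unsigned int usable_lane_count(unsigned int controller_lanes, /* SoC capability */
				      unsigned int wired_lanes,       /* from DT (board wiring) */
				      unsigned int sink_lanes)        /* from the panel's DPCD */
{
	unsigned int n = controller_lanes;

	if (wired_lanes < n)
		n = wired_lanes;
	if (sink_lanes < n)
		n = sink_lanes;

	return n;
}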
Dmitry Baryshkov Dec. 15, 2022, 11:02 p.m. UTC | #9
On Fri, 16 Dec 2022 at 00:57, Doug Anderson <dianders@chromium.org> wrote:
>
> Hi,
>
> On Thu, Dec 15, 2022 at 1:12 PM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
> >
> > On 15/12/2022 02:38, Stephen Boyd wrote:
> > > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> > >>
> > >> On 12/13/2022 3:06 PM, Stephen Boyd wrote:
> > >>> Quoting Kuogee Hsieh (2022-12-13 13:44:05)
> > >>>> Add both data-lanes and link-frequencies properties into endpoint
> > >>> Why do we care? Please tell us why it's important.
> > >
> > > Any response?
> > >
> > >>>> @@ -193,6 +217,8 @@ examples:
> > >>>>                    reg = <1>;
> > >>>>                    endpoint {
> > >>>>                        remote-endpoint = <&typec>;
> > >>>> +                    data-lanes = <0 1>;
> > >>>> +                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
> > >>>>                    };
> > >>> So far we haven't used the output port on the DP controller in DT.
> > >>>
> > >>> I'm still not clear on what we should do in general for DP because
> > >>> there's a PHY that actually controls a lane count and lane mapping. In
> > >>> my mental model of the SoC, this DP controller's output port is
> > >>> connected to the DP PHY, which then sends the DP lanes out of the SoC to
> > >>> the next downstream device (i.e. a DP connector or type-c muxer). Having
> > >>> a remote-endpoint property with a phandle to typec doesn't fit my mental
> > >>> model. I'd expect it to be the typec PHY.
> > >> ack
> > >>>
> > >>> That brings up the question: when we have 2 lanes vs. 4 lanes will we
> > >>> duplicate the data-lanes property in the PHY binding? I suspect we'll
> > >>> have to. Hopefully that sort of duplication is OK?
> > >> Currently we have a limitation of reserving 2 data lanes for usb2; I am
> > >> not sure duplicating to 4 lanes will work automatically.
> > >>>
> > >>> Similarly, we may have a redriver that limits the link-frequencies
> > >>> property further (e.g. only support <= 2.7GHz). Having multiple
> > >>> link-frequencies along the graph is OK, right? And isn't the
> > >>> link-frequencies property known here by fact that the DP controller
> > >>> tells us which SoC this controller is for, and thus we already know the
> > >>> supported link frequencies?
> > >>>
> > >>> Finally, I wonder if we should put any of this in the DP controller's
> > >>> output endpoint, or if we can put these sorts of properties in the DP
> > >>> PHY binding directly? Can't we do that and then when the DP controller
> > >>> tries to set 4 lanes, the PHY immediately fails the call and the link
> > >>> training algorithm does its thing and tries fewer lanes? And similarly,
> > >>> if link-frequencies were in the PHY's binding, the PHY could fail to set
> > >>> those frequencies during link training, returning an error to the DP
> > >>> controller, letting the training move on to a lower frequency. If we did
> > >>> that this patch series would largely be about modifying the PHY binding,
> > >>> updating the PHY driver to enforce constraints, and handling errors
> > >>> during link training in the DP controller (which may already be done? I
> > >>> didn't check).
> > >>
> > >>
> > >> The PHY/PLL have different configurations based on the link lanes and rate.
> > >>
> > >> They have to be set up before link training can start.
> > >>
> > >> Once link training starts, there are no interactions between the
> > >> controller and the PHY during the link training session.
> > >
> > > What do you mean? The DP controller calls phy_configure() and changes
> > > the link rate. The return value from phy_configure() should be checked
> > > and link training should skip link rates that aren't supported and/or
> > > number of lanes that aren't supported.
> >
> > I'd toss another coin into the argument. We have previously discussed
> > using the link-frequencies property in the context of limiting link
> > speeds for the DSI. There we have both hardware (SoC) limitations and
> > the board limitations as in some cases the DSI lanes can not sustain
> > some high rate. I still hope for these patches to materialize at some point.
> >
> > For the DP this is more or less the same story. We have the hardware
> > (SoC, PHY, etc) limitations, but also we have the board/device
> > limitations. For example some of the board might not be able to support
> > HBR3 e.g. because of the PCB design. And while it might be logical to
> > also add the 'max bit rate' support to the eDP & combo PHYs, it
> > definitely makes sense to be able to limit the rate on the DP <->
> > `something' link.
> >
> > Now, for all the practical purposes this `something' for the DP is the
> > DP connector, the eDP panel or the USB-C mux (with the possible
> > redrivers in the middle).
> >
> > Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> > outbound endpoint. This is the link which will be driven by the data
> > stream from the Linux point of view. The PHY is linked through the
> > 'phys' property, but it doesn't participate in the USB-C (or in the
> > connector/panel) graph.
> >
> > Now let's discuss the data lanes. Currently we have them in the DP
> > property itself. Please correct me if I'm wrong, but I think that we can
> > drop it for all the practical purposes. Judging by the DP compat string
> > the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> > (full-featured DP). In case of USB-C when the altmode dictates whether
> > to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> > mode and pin configuration, then inform the DP controller about the
> > selected amount of lanes. Then DP informs the PHY about the selection
> > (note, PHY doesn't have control at all in this scenario).
> >
> > The only problematic case is the mixed mode ports, which if I understand
> > correctly, can be configured either to eDP or DP modes. I'm not sure who
> > specifies and limits the amount of lanes available to the DP controller.
>
> For the most part, I'll let others debate the best way to represent
> this data, but I'll comment that the above statement isn't really
> correct. Specifically it's wrong to say that eDP is 2 lanes and DP is
> 2/4 lanes. I will say:
>
> * An eDP display could support 1, 2, or 4 lanes.
> * An eDP controller could support 1, 2, or 4 lanes.
> * A board may wire up 1, 2, or 4 lanes.
>
> Thus if you have an eDP controller that should be capable of 4 lanes
> and an eDP panel that says it's capable of 4 lanes, you still might
> need to use a 2 lane configuration because a board only wired up 2 of
> the lanes. IMO the number of lanes that are wired up should be in the
> device tree somewhere because that's where this board limit should be
> defined.
>
> Similarly, you could have an eDP controller that supports 4 lanes, you
> may wire 4 lanes off the board, but an eDP panel may only support 1 or
> 2 lanes. This is handled by querying the panel and asking how many
> lanes it supports.

Thank you for the explanations. So the `data-lanes' should definitely
be a property of the link between the DP controller and the eDP panel.
This is the path that Kuogee has been using.
Stephen Boyd Dec. 16, 2022, 2:16 a.m. UTC | #10
Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
> On 15/12/2022 02:38, Stephen Boyd wrote:
> > Quoting Kuogee Hsieh (2022-12-14 14:56:23)
> >>
> >> Once link training starts, there are no interactions between the
> >> controller and the PHY during the link training session.
> >
> > What do you mean? The DP controller calls phy_configure() and changes
> > the link rate. The return value from phy_configure() should be checked
> > and link training should skip link rates that aren't supported and/or
> > number of lanes that aren't supported.
>
> I'd toss another coin into the argument. We have previously discussed
> using the link-frequencies property in the context of limiting link
> speeds for the DSI. There we have both hardware (SoC) limitations and
> the board limitations as in some cases the DSI lanes can not sustain
> some high rate. I still hope for these patches to materialize at some point.
>
> For the DP this is more or less the same story. We have the hardware
> (SoC, PHY, etc) limitations, but also we have the board/device
> limitations. For example some of the board might not be able to support
> HBR3 e.g. because of the PCB design. And while it might be logical to
> also add the 'max bit rate' support to the eDP & combo PHYs, it
> definitely makes sense to be able to limit the rate on the DP <->
> `something' link.

Honestly I don't think it even makes sense to put the link rate property
in the PHY. In the case of Trogdor, the DP controller and DP PHY both
support all DP link frequencies. The limiting factor is the TCPC
redriver that is only rated to support HBR2. We don't describe the TCPC
in DT because the EC controls it. This means we have to put the limit
*somewhere*, and putting it in the DP output node is the only place we
have right now. I would really prefer we put it wherever the limit is,
in this case either in the EC node or on the type-c ports.

Another nice to have feature would be to support different TCPCs connected
to the same DP port. We were considering doing this on Trogdor, where we
would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
then detect which TCPC was in use to adjust the supported link rates.
We didn't do this though, so the idea got back-burnered.

When the SoC is directly wired to a DP connector, I'd expect the
connector to have the link rate property. That's because the connector
or the traces outside of the SoC will be the part that's limiting the
supported frequencies, not the SoC. The graph would need to be walked to
find the link rate of course. The PHY could do this just as much as the
DP controller could.

>
> Now, for all the practical purposes this `something' for the DP is the
> DP connector, the eDP panel or the USB-C mux (with the possible
> redrivers in the middle).
>
> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
> outbound endpoint. This is the link which will be driven by the data
> stream from the Linux point of view. The PHY is linked through the
> 'phys' property, but it doesn't participate in the USB-C (or in the
> connector/panel) graph.

Why doesn't the PHY participate in the graph? The eDP panel could just
as easily be connected to the eDP PHY if the PHY participated in the
graph.

>
> Now let's discuss the data lanes. Currently we have them in the DP
> property itself. Please correct me if I'm wrong, but I think that we can
> drop it for all the practical purposes.

I vaguely recall that the driver was checking data-lanes to figure out
how many lanes are usable. This is another shortcut taken on Trogdor to
work around a lack of complete DP bindings. We only support two lanes of
DP on Trogdor.

> Judging by the DP compat string
> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
> (full-featured DP). In case of USB-C when the altmode dictates whether
> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
> mode and pin configuration, then inform the DP controller about the
> selected amount of lanes. Then DP informs the PHY about the selection
> (note, PHY doesn't have control at all in this scenario).
>
> The only problematic case is the mixed mode ports, which if I understand
> correctly, can be configured either to eDP or DP modes. I'm not sure who
> specifies and limits the amount of lanes available to the DP controller.
>

This would depend on where we send the type-c message in the kernel. It
really gets to the heart of the question too. Should the PHY be "dumb"
and do whatever the controller tells it to do? Or should the PHY be
aware of what's going on and take action itself? Note that the
data-lanes property is also used to remap lanes. On sc7180 the lane
remapping happens in the DP PHY, and then the type-c PHY can flip that
too, so if we don't involve the PHY(s) in the graph we'll have to
express this information in the DP controller graph and then pass it to
the PHY from the controller. Similarly, when we have more dynamic
configuration of the type-c PHY, where USB may or may not be used
because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
property will only indicate lane mappings and not the number of lanes
supported. We'll again have to express the number of lanes to the PHY by
parsing the type-c messages.

It looks simpler to me if the PHY APIs push errors up to the caller for
unsupported configurations. This will hopefully make it easier for the
DP controller when the DP lanes are muxed onto a type-c port so that the
controller doesn't have to parse type-c messages. Instead, the PHY will
get the type-c message, stash away supported number of lanes and link
rates and then notify the DP controller to retrain the link with the
link training algorithm. A few steps of the link training may be
skipped, but the type-c message parsing won't need to be part of the DP
controller code. Said another way, the DP controller can stay focused on
DP instead of navigating type-c in addition to DP.

From a binding perspective, data-lanes/link-frequencies are part of the
graph binding. Having a graph port without a remote-endpoint doesn't
really make any sense. Therefore we should decide to either connect the
PHY into the graph and constrain it via graph properties like
data-lanes, or leave it disconnected and have the controller drive the
PHY (or PHYs when we have type-c). The type-c framework will want the
orientation control (the type-c PHY) to be part of the graph from the
usb-c-connector. That way we can properly map the PHY pins to the
flipped or not-flipped state of the cable. Maybe we don't need to
connect the PHY to the DP graph? Instead there can be a type-c graph for
the PHY, TCPM, etc. and a display graph for the display chain. It feels
like that must not work somehow.

Either way, I don't see how or why these properties should be part of
the DP controller. The controller isn't the limiting part, it's the
redriver or the board/connector/panel that's the limiting factor. Walk
the graph to find the lowest common denominator of link-frequencies and
handle data-lanes either statically in the PHY or dynamically by parsing
type-c messages. How does the eDP panel indicate only two lanes are
supported when all four lanes are wired? I thought that link training
just fails but I don't know.
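
A sketch of the "lowest common denominator" idea across a single graph
link, assuming (hypothetically) that either endpoint may carry its own
link-frequencies limit:

#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/minmax.h>
#include <linux/of.h>
#include <linux/of_graph.h>

/* Highest rate an endpoint allows, or U64_MAX if it expresses no limit. */
static u64 ep_max_link_freq(struct device_node *ep)
{
	u64 freqs[4], best = 0;
	int i, n;

	if (!ep)
		return U64_MAX;

	n = of_property_count_u64_elems(ep, "link-frequencies");
	if (n < 1 || n > ARRAY_SIZE(freqs) ||
	    of_property_read_u64_array(ep, "link-frequencies", freqs, n))
		return U64_MAX;

	for (i = 0; i < n; i++)
		best = max(best, freqs[i]);

	return best;
}

/* Lowest common denominator across one graph link: min of both ends. */
static u64 dp_link_freq_limit(struct device_node *local_ep)
{
	struct device_node *remote = of_graph_get_remote_endpoint(local_ep);
	u64 limit = min(ep_max_link_freq(local_ep), ep_max_link_freq(remote));

	of_node_put(remote);
	return limit;
}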
Kuogee Hsieh Dec. 16, 2022, 7:03 p.m. UTC | #11
On 12/15/2022 6:16 PM, Stephen Boyd wrote:
> Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
>> On 15/12/2022 02:38, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>>> Once link training starts, there are no interactions between the
>>>> controller and the PHY during the link training session.
>>> What do you mean? The DP controller calls phy_configure() and changes
>>> the link rate. The return value from phy_configure() should be checked
>>> and link training should skip link rates that aren't supported and/or
>>> number of lanes that aren't supported.
>> I'd toss another coin into the argument. We have previously discussed
>> using the link-frequencies property in the context of limiting link
>> speeds for the DSI. There we have both hardware (SoC) limitations and
>> the board limitations as in some cases the DSI lanes can not sustain
>> some high rate. I still hope for these patches to materialize at some point.
>>
>> For the DP this is more or less the same story. We have the hardware
>> (SoC, PHY, etc) limitations, but also we have the board/device
>> limitations. For example some of the board might not be able to support
>> HBR3 e.g. because of the PCB design. And while it might be logical to
>> also add the 'max bit rate' support to the eDP & combo PHYs, it
>> definitely makes sense to be able to limit the rate on the DP <->
>> `something' link.
> Honestly I don't think it even makes sense to put the link rate property
> in the PHY. In the case of Trogdor, the DP controller and DP PHY both
> support all DP link frequencies. The limiting factor is the TCPC
> redriver that is only rated to support HBR2. We don't describe the TCPC
> in DT because the EC controls it. This means we have to put the limit
> *somewhere*, and putting it in the DP output node is the only place we
> have right now. I would really prefer we put it wherever the limit is,
> in this case either in the EC node or on the type-c ports.
>
> Another nice to have feature would be to support different TCPCs connected
> to the same DP port. We were considering doing this on Trogdor, where we
> would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
> then detect which TCPC was in use to adjust the supported link rates.
> We didn't do this though, so the idea got back-burnered.
>
> When the SoC is directly wired to a DP connector, I'd expect the
> connector to have the link rate property. That's because the connector
> or the traces outside of the SoC will be the part that's limiting the
> supported frequencies, not the SoC. The graph would need to be walked to
> find the link rate of course. The PHY could do this just as much as the
> DP controller could.
>
>> Now, for all the practical purposes this `something' for the DP is the
>> DP connector, the eDP panel or the USB-C mux (with the possible
>> redrivers in the middle).
>>
>> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
>> outbound endpoint. This is the link which will be driven by the data
>> stream from the Linux point of view. The PHY is linked through the
>> 'phys' property, but it doesn't participate in the USB-C (or in the
>> connector/panel) graph.
> Why doesn't the PHY participate in the graph? The eDP panel could just
> as easily be connected to the eDP PHY if the PHY participated in the
> graph.
>
>> Now let's discuss the data lanes. Currently we have them in the DP
>> property itself. Please correct me if I'm wrong, but I think that we can
>> drop it for all the practical purposes.
> I vaguely recall that the driver was checking data-lanes to figure out
> how many lanes are usable. This is another shortcut taken on Trogdor to
> work around a lack of complete DP bindings. We only support two lanes of
> DP on Trogdor.
>
>> Judging by the DP compat string
>> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
>> (full-featured DP). In case of USB-C when the altmode dictates whether
>> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
>> mode and pin configuration, then inform the DP controller about the
>> selected amount of lanes. Then DP informs the PHY about the selection
>> (note, PHY doesn't have control at all in this scenario).
>>
>> The only problematic case is the mixed mode ports, which if I understand
>> correctly, can be configured either to eDP or DP modes. I'm not sure who
>> specifies and limits the amount of lanes available to the DP controller.
>>
> This would depend on where we send the type-c message in the kernel. It
> really gets to the heart of the question too. Should the PHY be "dumb"
> and do whatever the controller tells it to do? Or should the PHY be
> aware of what's going on and take action itself? Note that the
> data-lanes property is also used to remap lanes. On sc7180 the lane
> remapping happens in the DP PHY, and then the type-c PHY can flip that
> too, so if we don't involve the PHY(s) in the graph we'll have to
> express this information in the DP controller graph and then pass it to
> the PHY from the controller. Similarly, when we have more dynamic
> configuration of the type-c PHY, where USB may or may not be used
> because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
> property will only indicate lane mappings and not the number of lanes
> supported. We'll again have to express the number of lanes to the PHY by
> parsing the type-c messages.
>
> It looks simpler to me if the PHY APIs push errors up to the caller for
> unsupported configurations. This will hopefully make it easier for the
> DP controller when the DP lanes are muxed onto a type-c port so that the
> controller doesn't have to parse type-c messages. Instead, the PHY will
> get the type-c message, stash away supported number of lanes and link
> rates and then notify the DP controller to retrain the link with the
> link training algorithm. A few steps of the link training may be
> skipped, but the type-c message parsing won't need to be part of the DP
> controller code. Said another way, the DP controller can stay focused on
> DP instead of navigating type-c in addition to DP.
>
>  From a binding perspective, data-lanes/link-frequencies are part of the
> graph binding. Having a graph port without a remote-endpoint doesn't
> really make any sense. Therefore we should decide to either connect the
> PHY into the graph and constrain it via graph properties like
> data-lanes, or leave it disconnected and have the controller drive the
> PHY (or PHYs when we have type-c). The type-c framework will want the
> orientation control (the type-c PHY) to be part of the graph from the
> usb-c-connector. That way we can properly map the PHY pins to the
> flipped or not-flipped state of the cable. Maybe we don't need to
> connect the PHY to the DP graph? Instead there can be a type-c graph for
> the PHY, TCPM, etc. and a display graph for the display chain. It feels
> like that must not work somehow.
>
> Either way, I don't see how or why these properties should be part of
> the DP controller. The controller isn't the limiting part, it's the
> redriver or the board/connector/panel that's the limiting factor. Walk
> the graph to find the lowest common denominator of link-frequencies and
> handle data-lanes either statically in the PHY or dynamically by parsing
> type-c messages. How does the eDP panel indicate only two lanes are
> supported when all four lanes are wired? I thought that link training
> just fails but I don't know.

According to the DP specification, both link policy and stream policy
happen at the link layer, which is the DP controller in our case.

DP CTS (compliance testing) also happens at the link layer.

I think the intelligence should be placed at the link layer.
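
E.g., something along these lines on the controller side (just a sketch
against the generic PHY API; the helper and the rate table are made up,
not actual msm/dp code):

#include <linux/kernel.h>
#include <linux/phy/phy.h>

/* DP link rates in Mb/s, highest first, as phy_configure_opts_dp expects */
static const unsigned int dp_rates_mbps[] = { 8100, 5400, 2700, 1620 };

static int dp_ctrl_pick_link_rate(struct phy *phy, unsigned int lanes)
{
    union phy_configure_opts opts = { };
    unsigned int i;

    opts.dp.lanes = lanes;
    opts.dp.set_lanes = 1;
    opts.dp.set_rate = 1;

    for (i = 0; i < ARRAY_SIZE(dp_rates_mbps); i++) {
        opts.dp.link_rate = dp_rates_mbps[i];
        /* the PHY rejects rates it (or the board) cannot sustain... */
        if (!phy_configure(phy, &opts))
            return dp_rates_mbps[i];
        /* ...while the link policy itself stays in the controller */
    }

    return -EINVAL;
}
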

I will submit a v14 patch and we can discuss more.
Dmitry Baryshkov Dec. 16, 2022, 7:43 p.m. UTC | #12
Krzysztof, there is a bunch of DTS code below. If you can comment on it 
(and on my understanding of the existing schemas) that would be great.

On 16/12/2022 04:16, Stephen Boyd wrote:
> Quoting Dmitry Baryshkov (2022-12-15 13:12:49)
>> On 15/12/2022 02:38, Stephen Boyd wrote:
>>> Quoting Kuogee Hsieh (2022-12-14 14:56:23)
>>>>
>>>> Once link training start, then there are no any interactions between
>>>> controller and phy during link training session.
>>>
>>> What do you mean? The DP controller calls phy_configure() and changes
>>> the link rate. The return value from phy_configure() should be checked
>>> and link training should skip link rates that aren't supported and/or
>>> number of lanes that aren't supported.
>>
>> I'd toss another coin into the argument. We have previously discussed
>> using the link-frequencies property in the context of limiting link
>> speeds for the DSI. There we have both hardware (SoC) limitations and
>> the board limitations as in some cases the DSI lanes can not sustain
>> some high rate. I still hope for these patches to materialize at some point.
>>
>> For the DP this is more or less the same story. We have the hardware
>> (SoC, PHY, etc) limitations, but also we have the board/device
>> limitations. For example, some of the boards might not be able to support
>> HBR3 e.g. because of the PCB design. And while it might be logical to
>> also add the 'max bit rate' support to the eDP & combo PHYs, it
>> definitely makes sense to be able to limit the rate on the DP <->
>> `something' link.


This discussion made me review some parts of the bindings (and note that 
I was wrong in some of the cases in my previous mail).

> Honestly I don't think it even makes sense to put the link rate
> property in the PHY. In the case of Trogdor, the DP controller and DP PHY both
> support all DP link frequencies. The limiting factor is the TCPC
> redriver that is only rated to support HBR2. We don't describe the TCPC
> in DT because the EC controls it. This means we have to put the limit
> *somewhere*, and putting it in the DP output node is the only place we
> have right now. I would really prefer we put it wherever the limit is,
> in this case either in the EC node or on the type-c ports.
Yeah, the redriver kind of complicates the puzzle. We have one on the
RB5 board, and I could not find a good way to express it in the DT.

> 
> Another nice to have feature would be to support different TCPCs connected
> to the same DP port. We were considering doing this on Trogdor, where we
> would have a TCPC rated for HBR2 and another TCPC rated for HBR3 and
> then detect which TCPC was in use to adjust the supported link rates.
> We didn't do this though, so the idea got back-burnered.
> 
> When the SoC is directly wired to a DP connector, I'd expect the
> connector to have the link rate property. That's because the connector
> or the traces outside of the SoC will be the part that's limiting the
> supported frequencies, not the SoC. The graph would need to be walked to
> find the link rate of course. The PHY could do this just as much as the
> DP controller could.

Yes, I also thought about putting the frequencies in the connector
first. But then, video-interfaces.yaml describes these properties as
properties of the link.

So the end result could look like this:

displayport_controller@ae900000 {
   phys = <&qmp_combo_phy 1>;


   ports {
     port@0 {
       endpoint {
         remote-endpoint = <&dpu_intf_out>;
       };
     };
     port@1 {
       dp_out: endpoint {
         remote-endpoint = <&dp_con_in>;
       };
     };
   };
};

dp-connector {
   compatible = "dp-connector";
   port {
     dp_con_in: endpoint {
       remote-endpoint = <&dp_out>;
     };
   };
};

In such a case the data-lanes and link-frequencies logically fall into
the dp_out node, in full accordance with video-interfaces.yaml.
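
For example, extending dp_out above with the values from the example in
this patch:

port@1 {
   dp_out: endpoint {
     remote-endpoint = <&dp_con_in>;
     data-lanes = <0 1>;
     link-frequencies = /bits/ 64 <1620000000 2700000000
                                   5400000000 8100000000>;
   };
};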

> 
>>
>> Now, for all the practical purposes this `something' for the DP is the
>> DP connector, the eDP panel or the USB-C mux (with the possible
>> redrivers in the middle).
>>
>> Thus I'd support Kuogee's proposal to have link-frequencies in the DP's
>> outbound endpoint. This is the link which will be driven by the data
>> stream from the Linux point of view. The PHY is linked through the
>> 'phys' property, but it doesn't participate in the USB-C (or in the
>> connector/panel) graph.
> 
> Why doesn't the PHY participate in the graph? The eDP panel could just
> as easily be connected to the eDP PHY if the PHY participated in the
> graph.

I think that for eDP we have a much simpler graph, which is more in tune 
with the dp-connector one:

displayport_controller@ae900000 {
   phys = <&qmp_edp_phy>;

   aux-bus {
     panel {
       port {
         panel_in: endpoint {
           remote-endpoint = <&dp_out>;
         };
       };
     };
   };

   ports {
     port@0 {
       endpoint {
         remote-endpoint = <&dpu_intf_out>;
       };
     };
     port@1 {
       dp_out: endpoint {
         remote-endpoint = <&panel_in>;
       };
     };
   };
};


> 
>>
>> Now let's discuss the data lanes. Currently we have them in the DP
>> property itself. Please correct me if I'm wrong, but I think that we can
>> drop it for all the practical purposes.
> 
> I vaguely recall that the driver was checking data-lanes to figure out
> how many lanes are usable.

Yes, I think so. However, this binding doesn't follow
video-interfaces.yaml, thus I suggested moving the property to its
proper place, the endpoint.

> This is another shortcut taken on Trogdor to
> work around a lack of complete DP bindings. We only support two lanes of
> DP on Trogdor.

How do you implement the rest of the USB-C functionality on Trogdor? Do
you support USB role switching? Does the EC use the DP_HPD GPIO to
notify the DP controller about all the events?

>> Judging by the DP compat string
>> the driver can determine if it uses 2 lanes (eDP) or 4 lanes
>> (full-featured DP). In case of USB-C when the altmode dictates whether
>> to use 2 or 4 lanes, the TCPM (Type-C Port Manager) will negotiate the
>> mode and pin configuration, then inform the DP controller about the
>> selected amount of lanes. Then DP informs the PHY about the selection
>> (note, PHY doesn't have control at all in this scenario).
>>
>> The only problematic case is the mixed mode ports, which, if I understand
>> correctly, can be configured either to eDP or DP modes. I'm not sure who
>> specifies and limits the amount of lanes available to the DP controller.
>>
> 
> This would depend on where we send the type-c message in the kernel. It
> really gets to the heart of the question too. Should the PHY be "dumb"
> and do whatever the controller tells it to do? Or should the PHY be
> aware of what's going on and take action itself? Note that the
> data-lanes property is also used to remap lanes. On sc7180 the lane
> remapping happens in the DP PHY, and then the type-c PHY can flip that
> too, so if we don't involve the PHY(s) in the graph we'll have to
> express this information in the DP controller graph and then pass it to
> the PHY from the controller. Similarly, when we have more dynamic
> configuration of the type-c PHY, where USB may or may not be used
> because the TCPM has decided to use 2 or 4 lanes of DP, the data-lanes
> property will only indicate lane mappings and not the number of lanes
> supported. We'll again have to express the number of lanes to the PHY by
> parsing the type-c messages.
> 
> It looks simpler to me if the PHY APIs push errors up to the caller for
> unsupported configurations. This will hopefully make it easier for the
> DP controller when the DP lanes are muxed onto a type-c port so that the
> controller doesn't have to parse type-c messages. Instead, the PHY will
> get the type-c message, stash away supported number of lanes and link
> rates and then notify the DP controller to retrain the link with the
> link training algorithm. A few steps of the link training may be
> skipped, but the type-c message parsing won't need to be part of the DP
> controller code. Said another way, the DP controller can stay focused on
> DP instead of navigating type-c in addition to DP.

Yes, unlike the downstream kernel. We have a separate TCPM instance, so
this should be part of the USB framework, not the DP.

> 
>  From a binding perspective, data-lanes/link-frequencies are part of the
> graph binding. Having a graph port without a remote-endpoint doesn't
> really make any sense. Therefore we should decide to either connect the
> PHY into the graph and constrain it via graph properties like
> data-lanes, or leave it disconnected and have the controller drive the
> PHY (or PHYs when we have type-c). The type-c framework will want the
> orientation control (the type-c PHY) to be part of the graph from the
> usb-c-connector. That way we can properly map the PHY pins to the
> flipped or not-flipped state of the cable. Maybe we don't need to
> connect the PHY to the DP graph? Instead there can be a type-c graph for
> the PHY, TCPM, etc. and a display graph for the display chain. It feels
> like that must not work somehow.



Just as a reminder:

tcpc: tcpc {
   connector {
     compatible = "usb-c-connector";
     ports {
       port@0 {
         con_hs_ep: endpoint {
           remote-endpoint = <&typec_hs_ep>;
         };
       };
      port@1 {
        con_ss_ep: endpoint {
           remote-endpoint = <&typec_ss_ep>;
         };
       };
       port@2 {
         con_sbu_ep: endpoint {
           remote-endpoint = <&typec_sbu_ep>;
         };
       };
     };
   };
};

Judging from the bindings and all the examples, the HS connection should
be implemented as:

usb@a6f8800 {
   compatible = "qcom,SoC-dwc3", "qcom,dwc3";
   usb@a600000 {
     // existing part
     compatible = "snps,dwc3";
     phys = <&qmp_hs_phy>, <&qmp_combo_phy 0>;

     // new properties per snps,dwc3.yaml
     usb-role-switch;
     port {
       usb_hs_ep: endpoint {
         remote-endpoint = <&con_hs_ep>;
       };
     };
   };
};

qmp_hs_phy: phy@88e3000 {
   #phy-cells = <1>;
};

qmp_combo_phy: phy@88e9000 {
   #phy-cells = <1>;
};

sbu-switch { // GPIO, FSA4480, CrOS EC, etc. See fcs,fsa4480.yaml
   orientation-switch;
   mode-switch;
   port {
     typec_sbu_ep: endpoint {
       remote-endpoint = <&con_sbu_ep>;
     };
   };
};

This is quite logical, as the SBU lines from the connector are
terminated at the switch.

Other devices land the SS lanes at the dwc3 controller (imx8mq-librem5, 
hi3660-hikey960), at the SS lanes switch (r8a774c0-cat874, 
beacon-renesom-baseboard) or at the TypeC PHY itself (rk3399-eaidk-610, 
rk3399-firefly, rk3399-orangepi, rk3399-pinebook-pro).

Thus I'd do the same as rk3399:

&qmp_combo_phy {
   port {
     typec_ss_ep: endpoint {
       remote-endpoint = <&con_ss_ep>;
     };
   };
};

Letting the combo PHY know the orientation would probably require 
additional setup, so we might end up with:

&qmp_combo_phy {
   orientation-switch;
   // maybe also mode-switch; ???
   ports {
     port@0 {
       typec_ss_ep: endpoint {
         remote-endpoint = <&con_ss_ep>;
       };
     };
     port@1 {
       phy_sbu_ep: endpoint {
         remote-endpoint = <&con_phy_sbu_ep>;
       };
     };
   };
};

&tcpc {
   connector {
     ports {
       port@2 {
         con_sbu_ep: endpoint@0 {
           remote-endpoint = <&typec_sbu_ep>;
         };
          con_phy_sbu_ep: endpoint@1 {
           remote-endpoint = <&phy_sbu_ep>;
         };
       };
     };
   };
};

OR:

&qmp_combo_phy {
   orientation-switch;
   // maybe also mode-switch; ???
   ports {
     port@0 {
       typec_ss_ep: endpoint {
         remote-endpoint = <&con_ss_ep>;
       };
     };
     port@1 {
       phy_sbu_ep: endpoint {
         remote-endpoint = <&switch_sbu_ep>;
       };
     };
   };
};

sbu-switch { // GPIO, FSA4480, CrOS EC, etc. See fcs,fsa4480.yaml
   orientation-switch;
   mode-switch;
   ports {
     port@0 {
       typec_sbu_ep: endpoint {
         remote-endpoint = <&con_sbu_ep>;
       };
     };
     port@1 {
       switch_sbu_ep: endpoint {
         remote-endpoint = <&phy_sbu_ep>;
       };
     };
   };
};

I can not tell which one suits better in this case; I have a slight
preference for the second implementation, as it follows the hardware
design.

The redriver can then either sit near the EP ports, or be implemented in
the following way, replacing the switch.

&qmp_combo_phy {
   orientation-switch;
   // maybe also mode-switch; ???
   ports {
     port@0 {
       phy_ss_ep: endpoint {
         remote-endpoint = <&re_ss_ep>;
       };
     };
     port@1 {
       phy_sbu_ep: endpoint {
         remote-endpoint = <&re_sbu_ep>;
       };
     };
   };
};

redriver {
   orientation-switch;
   mode-switch;
   ports {
     port@0 {
       typec_ss_ep: endpoint@0 {
         remote-endpoint = <&con_ss_ep>;
       };
       typec_sbu_ep: endpoint@1 {
         remote-endpoint = <&con_sbu_ep>;
       };
     };
     port@1 {
       re_ss_ep: endpoint@0 {
         remote-endpoint = <&phy_ss_ep>;
       };
       re_sbu_ep: endpoint@1 {
         remote-endpoint = <&phy_sbu_ep>;
       };
     };
   };
};

However, all these examples leave the dp_out endpoint unconnected; there
is no 'next DRM bridge' yet. The original Intel design suggested putting
a single link from the tcpc node to the displayport controller in the
corresponding altmode definition. Then the altmode controller would use
drm_connector_oob_hotplug_event() to notify the DP connector.

Since, unlike Intel, we use DRM bridges, Bjorn's suggestion was to
implement the drm_bridge inside pmic_glink (the Type-C port manager in
his case) and to link it to dp_out.
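
Roughly like this (the pmic_glink node layout here is just an assumption
for the sake of the example):

pmic_glink {
   port {
     glink_dp_in: endpoint {
       remote-endpoint = <&dp_out>;
     };
   };
};

&dp_out {
   remote-endpoint = <&glink_dp_in>;
};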

This again raises a question regarding the redrivers. I do not think
that we should add additional links from the redriver to the DP. Neither
should it implement a drm_bridge.

> Either way, I don't see how or why these properties should be part of
> the DP controller. The controller isn't the limiting part, it's the
> redriver or the board/connector/panel that's the limiting factor. Walk
> the graph to find the lowest common denominator of link-frequencies and
> handle data-lanes either statically in the PHY or dynamically by parsing
> type-c messages. How does the eDP panel indicate only two lanes are
> supported when all four lanes are wired? I thought that link training
> just fails but I don't know.

I think you would agree that data-lanes is a property of the link. It
might be the link between the DP and the panel, between the DP and the
redriver, or (in the theoretical case) a link between the redriver and
the connector. Compare this with the DSI case, where we put the
data-lanes property on the host-bridge or host-panel graph link.
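
E.g. a typical DSI host endpoint (sketch):

&dsi0 {
   ports {
     port@1 {
       dsi0_out: endpoint {
         remote-endpoint = <&panel_in>;
         data-lanes = <0 1 2 3>;
       };
     };
   };
};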

For the link rates it's not that obvious, but I also think that
redrivers can impose different limits on their links.

That said, I think we might end up implementing two different mechanisms 
for such limitations:

- The one coming from the dp_out endpoint (and thus a link to the next
DP bridge/panel/connector/etc.). The link specifies the number of data
lanes and the maximum link rate (see the parsing sketch below).

- Another one implemented through the QMP combo PHY, which knows the SoC
limitations and is able to parse the link to the redriver or the USB-C
connector and further query the link properties.
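
For the first mechanism, the parsing can be as simple as walking to the
dp_out endpoint (again just a sketch, the helper name is made up):

#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/of_graph.h>

/* Returns the highest rate listed on port@1/endpoint, 0 if none given */
static u64 dp_out_max_link_frequency(const struct device_node *dp)
{
    struct device_node *ep;
    u64 freqs[4], rate = 0;
    int i, n;

    ep = of_graph_get_endpoint_by_regs(dp, 1, -1);
    if (!ep)
        return 0;

    /* link-frequencies is a uint64 array per video-interfaces.yaml */
    n = of_property_read_variable_u64_array(ep, "link-frequencies",
                                            freqs, 1, ARRAY_SIZE(freqs));
    for (i = 0; i < n; i++)
        rate = max(rate, freqs[i]);

    of_node_put(ep);
    return rate;
}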

Patch

diff --git a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
index f2515af..8fb9fa5 100644
--- a/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
+++ b/Documentation/devicetree/bindings/display/msm/dp-controller.yaml
@@ -81,6 +81,7 @@  properties:
 
   data-lanes:
     $ref: /schemas/types.yaml#/definitions/uint32-array
+    deprecated: true
     minItems: 1
     maxItems: 4
     items:
@@ -96,14 +97,37 @@  properties:
 
   ports:
     $ref: /schemas/graph.yaml#/properties/ports
+
     properties:
       port@0:
-        $ref: /schemas/graph.yaml#/properties/port
+        $ref: "/schemas/graph.yaml#/$defs/port-base"
         description: Input endpoint of the controller
+        properties:
+          endpoint:
+            $ref: /schemas/media/video-interfaces.yaml#
 
       port@1:
-        $ref: /schemas/graph.yaml#/properties/port
+        $ref: "/schemas/graph.yaml#/$defs/port-base"
         description: Output endpoint of the controller
+        properties:
+          endpoint:
+            $ref: /schemas/media/video-interfaces.yaml#
+            properties:
+              data-lanes:
+                minItems: 1
+                maxItems: 4
+                items:
+                  enum: [ 0, 1, 2, 3 ]
+
+              link-frequencies:
+                minItems: 1
+                maxItems: 4
+                items:
+                  enum: [ 1620000000, 2700000000, 5400000000, 8100000000 ]
+
+    required:
+      - port@0
+      - port@1
 
 required:
   - compatible
@@ -193,6 +217,8 @@  examples:
                 reg = <1>;
                 endpoint {
                     remote-endpoint = <&typec>;
+                    data-lanes = <0 1>;
+                    link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>; 
                 };
             };
         };