
[2/3] drm/dp/mst: Calculate total link bandwidth instead of hardcoding it

Message ID 1479434628-2373-3-git-send-email-dhinakaran.pandiyan@intel.com (mailing list archive)
State New, archived

Commit Message

Dhinakaran Pandiyan Nov. 18, 2016, 2:03 a.m. UTC
The total (nominal) link bandwidth, which we store in terms of PBN, is a
function of the link rate and lane count. But currently we hardcode it to
2560 PBN, which results in an incorrect computation of the total slots.

E.g., 2-lane HBR2 configuration and a 4k@60Hz, 24bpp mode:
  nominal link bw = 1080 MBps = 1280 PBN = 64 slots
  required bw = 533.25 MHz pixel clock * 3 bytes/pixel = 1599.75 MBps or 1896 PBN
     with +0.6% margin = 1907.376 PBN = 96 slots
  This is greater than the maximum possible value of 64 slots, but we
  incorrectly compute the available slots as 2560 PBN = 128 slots and don't
  return an error.

So, let's fix this by calculating the total link bandwidth as
  link bw (PBN) = BW per time slot (PBN) * max. time slots,
where max. time slots is 64.

Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
---
 drivers/gpu/drm/drm_dp_mst_topology.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
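
The arithmetic in the example above can be double-checked with a small
standalone sketch (plain userspace C; this is not the kernel's PBN helper,
and the 1080 MBps, 64/54 and +0.6% figures are simply the ones assumed in
the commit message):

#include <stdio.h>

int main(void)
{
	/* 2-lane HBR2: 5.4 Gbps/lane * 2 lanes * 8b/10b (0.8) / 8 bits = 1080 MBps */
	double link_bw_mbps = 5400.0 * 2 * 0.8 / 8;
	double link_pbn = link_bw_mbps * 64 / 54;	/* 1 PBN = 54/64 MBps -> 1280 PBN */
	double pbn_per_slot = link_pbn / 64;		/* 20 PBN per time slot */

	/* 4k@60Hz, 24bpp: 533.25 MHz pixel clock * 3 bytes/pixel, +0.6% margin */
	double mode_bw_mbps = 533.25 * 3;		/* 1599.75 MBps */
	double mode_pbn = mode_bw_mbps * 64 / 54 * 1.006;	/* 1907.376 PBN */

	int slots = (int)(mode_pbn / pbn_per_slot);
	if (slots * pbn_per_slot < mode_pbn)
		slots++;				/* round up to whole slots */

	printf("link: %.0f PBN total, %.0f PBN/slot\n", link_pbn, pbn_per_slot);
	printf("mode: %.3f PBN -> %d slots (max 64)\n", mode_pbn, slots);
	/*
	 * 96 slots > 64, so the mode must be rejected.  With the old
	 * hardcoded 2560 PBN the budget looked like 128 slots and the
	 * over-subscription went unnoticed.
	 */
	return 0;
}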

Comments

Dhinakaran Pandiyan Nov. 19, 2016, 2:01 a.m. UTC | #1
This patch along with https://patchwork.freedesktop.org/series/15305/
will fix https://bugs.freedesktop.org/show_bug.cgi?id=98141. I'd like
this to be reviewed independently since the other two patches in this
series require rework for atomic support.

-DK

On Thu, 2016-11-17 at 18:03 -0800, Dhinakaran Pandiyan wrote:
> The total or the nominal link bandwidth, which we save in terms of PBN, is
> a factor of link rate and lane count. But, currently we hardcode it to
> 2560 PBN. This results in incorrect computation of total slots.
> 
> E.g, 2 lane HBR2 configuration and 4k@60Hz, 24bpp mode
>   nominal link bw = 1080 MBps = 1280PBN = 64 slots
>   required bw 533.25 MHz*3 = 1599.75 MBps or 1896 PBN
>      with +0.6% margin = 1907.376 PBN = 96 slots
>   This is greater than the max. possible value of 64 slots. But, we
>   incorrectly compute available slots as 2560 PBN = 128 slots and don't
>   return error.
> 
> So, let's fix this by calculating the total link bandwidth as
> link bw (PBN) = BW per time slot(PBN) * max. time slots , where max. time
> slots is 64
> 
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 04e4571..26dfd99 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -2038,9 +2038,8 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
>  			ret = -EINVAL;
>  			goto out_unlock;
>  		}
> -
> -		mgr->total_pbn = 2560;
> -		mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);
> +		mgr->total_pbn = 64 * mgr->pbn_div;
> +		mgr->total_slots = 64;
>  		mgr->avail_slots = mgr->total_slots;
>  
>  		/* add initial branch device at LCT 1 */
Ville Syrjälä Nov. 29, 2016, 8:58 p.m. UTC | #2
On Thu, Nov 17, 2016 at 06:03:47PM -0800, Dhinakaran Pandiyan wrote:
> The total or the nominal link bandwidth, which we save in terms of PBN, is
> a factor of link rate and lane count. But, currently we hardcode it to
> 2560 PBN. This results in incorrect computation of total slots.
> 
> E.g, 2 lane HBR2 configuration and 4k@60Hz, 24bpp mode
>   nominal link bw = 1080 MBps = 1280PBN = 64 slots
>   required bw 533.25 MHz*3 = 1599.75 MBps or 1896 PBN
>      with +0.6% margin = 1907.376 PBN = 96 slots
>   This is greater than the max. possible value of 64 slots. But, we
>   incorrectly compute available slots as 2560 PBN = 128 slots and don't
>   return error.
> 
> So, let's fix this by calculating the total link bandwidth as
> link bw (PBN) = BW per time slot(PBN) * max. time slots , where max. time
> slots is 64
> 
> Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> ---
>  drivers/gpu/drm/drm_dp_mst_topology.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> index 04e4571..26dfd99 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -2038,9 +2038,8 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
>  			ret = -EINVAL;
>  			goto out_unlock;
>  		}
> -
> -		mgr->total_pbn = 2560;
> -		mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);
> +		mgr->total_pbn = 64 * mgr->pbn_div;

Do we actually have a use in mind for total_pbn? It seems unused now.

> +		mgr->total_slots = 64;

Same for total_slots.

>  		mgr->avail_slots = mgr->total_slots;

So avail_slots is all that's used. And shouldn't it actually start
out at 63 instead of 64 on account of the MTP header always taking
up one slot?
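
For illustration only, the accounting being suggested here might look like
the sketch below, with one of the 64 MTP slots permanently reserved for the
MTP header; the structure and helper names are made up for the example and
are not the driver's:

#include <errno.h>

#define DP_MST_MAX_SLOTS	64
#define DP_MST_HEADER_SLOTS	1	/* MTP header always occupies one slot */

struct slot_budget {
	int avail_slots;
};

static void slot_budget_init(struct slot_budget *b)
{
	/* 63 usable slots, not 64, because of the MTP header */
	b->avail_slots = DP_MST_MAX_SLOTS - DP_MST_HEADER_SLOTS;
}

/* Reserve enough slots for a stream of 'pbn'; 0 on success, -ENOSPC if it doesn't fit. */
static int slot_budget_alloc(struct slot_budget *b, int pbn, int pbn_div)
{
	int slots = (pbn + pbn_div - 1) / pbn_div;	/* DIV_ROUND_UP */

	if (slots > b->avail_slots)
		return -ENOSPC;
	b->avail_slots -= slots;
	return 0;
}

With the 2-lane HBR2 / 4k@60Hz numbers from the commit message,
slot_budget_alloc(&b, 1907, 20) would need 96 slots and correctly fail with
-ENOSPC.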

>  
>  		/* add initial branch device at LCT 1 */
> -- 
> 2.7.4
Dhinakaran Pandiyan Nov. 29, 2016, 9:04 p.m. UTC | #3
On Tue, 2016-11-29 at 22:58 +0200, Ville Syrjälä wrote:
> On Thu, Nov 17, 2016 at 06:03:47PM -0800, Dhinakaran Pandiyan wrote:
> > The total or the nominal link bandwidth, which we save in terms of PBN, is
> > a factor of link rate and lane count. But, currently we hardcode it to
> > 2560 PBN. This results in incorrect computation of total slots.
> > 
> > E.g, 2 lane HBR2 configuration and 4k@60Hz, 24bpp mode
> >   nominal link bw = 1080 MBps = 1280PBN = 64 slots
> >   required bw 533.25 MHz*3 = 1599.75 MBps or 1896 PBN
> >      with +0.6% margin = 1907.376 PBN = 96 slots
> >   This is greater than the max. possible value of 64 slots. But, we
> >   incorrectly compute available slots as 2560 PBN = 128 slots and don't
> >   return error.
> > 
> > So, let's fix this by calculating the total link bandwidth as
> > link bw (PBN) = BW per time slot(PBN) * max. time slots , where max. time
> > slots is 64
> > 
> > Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
> > ---
> >  drivers/gpu/drm/drm_dp_mst_topology.c | 5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> > index 04e4571..26dfd99 100644
> > --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> > +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> > @@ -2038,9 +2038,8 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
> >  			ret = -EINVAL;
> >  			goto out_unlock;
> >  		}
> > -
> > -		mgr->total_pbn = 2560;
> > -		mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);
> > +		mgr->total_pbn = 64 * mgr->pbn_div;
> 
> Do we actually have a use in mind for total_pbn? It seems unused now.

Not really, I will remove it and submit this patch separately.

> > +		mgr->total_slots = 64;
> 
> Same for total_slots.
> 
> >  		mgr->avail_slots = mgr->total_slots;
> 
> So avail_slots is all that's used. And shouldn't it actually start
> out at 63 instead of 64 on account of the MTP header always taking
> up one slot?

Yeah, I had a check for < avail_slots in the patch that followed.

-DK

> >  
> >  		/* add initial branch device at LCT 1 */
> > -- 
> > 2.7.4
> 

Patch

diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 04e4571..26dfd99 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -2038,9 +2038,8 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
 			ret = -EINVAL;
 			goto out_unlock;
 		}
-
-		mgr->total_pbn = 2560;
-		mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);
+		mgr->total_pbn = 64 * mgr->pbn_div;
+		mgr->total_slots = 64;
 		mgr->avail_slots = mgr->total_slots;
 
 		/* add initial branch device at LCT 1 */
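
As a cross-check of the new total_pbn computation (illustrative sketch only;
the driver derives pbn_div from the DPCD link rate and lane count, and the
helper below is not its actual code): per lane, each 1/64th time slot carries
3 PBN at RBR, 5 at HBR and 10 at HBR2, i.e. the lane payload rate converted
to PBN and divided by 64 slots.

/* PBN carried by one time slot for a given link config (illustrative). */
static int pbn_per_slot(int link_rate_mbps, int lane_count)
{
	switch (link_rate_mbps) {
	case 1620:		/* RBR:  162 MBps/lane = 192 PBN -> 3 PBN/slot  */
		return 3 * lane_count;
	case 2700:		/* HBR:  270 MBps/lane = 320 PBN -> 5 PBN/slot  */
		return 5 * lane_count;
	case 5400:		/* HBR2: 540 MBps/lane = 640 PBN -> 10 PBN/slot */
		return 10 * lane_count;
	default:
		return 0;	/* unknown link rate */
	}
}

For the 2-lane HBR2 case, pbn_per_slot(5400, 2) = 20, so total_pbn becomes
64 * 20 = 1280 PBN, matching the 1080 MBps nominal bandwidth from the commit
message rather than the old hardcoded 2560 PBN.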