Message ID | 20180801170419.1085-2-chris@chris-wilson.co.uk (mailing list archive) |
---|---|
State | New, archived |
Series | [1/2] drm/i915: Drop stray clearing of rps->last_adj |
Quoting Chris Wilson (2018-08-01 18:04:19)
> Currently, we note congestion for the slow start ramping up of RPS only
> when we overshoot the target workload and have to reverse direction for
> our reclocking. That is, if we have a period where the current GPU
> frequency is enough to sustain the workload within our target
> utilisation, we should not trigger any RPS EI interrupts, and may then
> continue again with the previous last_adj after multiple periods,
> causing us to dramatically overreact. To prevent us from missing a
> period where the system is behaving correctly, we can schedule an extra
> interrupt that will not be associated with either an up or down event,
> causing us to reset last_adj back to zero and cancelling the slow start
> due to the congestion.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Mika Kuoppala <mika.kuoppala@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_irq.c | 13 +++++++++----
>  drivers/gpu/drm/i915/intel_pm.c | 15 +++++++++++----
>  2 files changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 90628a47ae17..e2ee1e13cec7 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -1297,6 +1297,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
>  		goto out;
>
>  	mutex_lock(&dev_priv->pcu_lock);
> +	dev_priv->pm_rps_events &= ~GEN6_PM_RP_DOWN_EI_EXPIRED;
>
>  	pm_iir |= vlv_wa_c0_ei(dev_priv, pm_iir);
>
> @@ -1310,10 +1311,12 @@ static void gen6_pm_rps_work(struct work_struct *work)
>  		new_delay = rps->boost_freq;
>  		adj = 0;
>  	} else if (pm_iir & GEN6_PM_RP_UP_THRESHOLD) {
> -		if (adj > 0)
> +		if (adj > 0) {
> +			dev_priv->pm_rps_events |= GEN6_PM_RP_DOWN_EI_EXPIRED;
>  			adj *= 2;

The original plan was to use UP/DOWN EI, as the danger is that the two
evaluation intervals are not aligned and so we may falsely detect
congestion in the middle of the ramp. The reason I didn't was that we
already use UP_EI_EXPIRED for the manual calcs for vlv.

Hmm, still it would be better not to mix the wrong EI.
-Chris
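[Editor's note] To illustrate the misalignment hazard described above, here is a toy userspace model (not kernel code; the interval lengths, the phase offset and all names are invented for illustration only). It shows how a free-running down EI, offset from the up EI, can expire in the middle of a healthy up ramp and be mistaken for congestion, wiping out last_adj:

/*
 * Toy model of the hazard: the up and down evaluation intervals (EIs)
 * are free-running and not aligned, so a DOWN-EI expiry can land
 * mid-ramp and look like a quiet interval, resetting the slow start.
 * All values below are illustrative assumptions, not hardware numbers.
 */
#include <stdio.h>

#define UP_EI_US	 10000	/* assumed up evaluation interval */
#define DOWN_EI_US	 25000	/* assumed down evaluation interval */
#define DOWN_PHASE_US	  7000	/* assumed misalignment between the two */

int main(void)
{
	int last_adj = 1;
	unsigned int t;

	for (t = 0; t <= 100000; t += 1000) {
		int up_expired = t && (t % UP_EI_US) == 0;
		int down_expired = ((t + DOWN_PHASE_US) % DOWN_EI_US) == 0;

		if (up_expired) {
			/* busy workload: every up EI reports "above threshold" */
			last_adj = last_adj > 0 ? last_adj * 2 : 1;
			printf("%6uus: UP threshold, last_adj -> %d\n",
			       t, last_adj);
		} else if (down_expired) {
			/*
			 * Stray EI with no up/down event: treated as
			 * congestion, so the slow start is reset even
			 * though the ramp was behaving correctly.
			 */
			last_adj = 0;
			printf("%6uus: stray DOWN EI, last_adj reset\n", t);
		}
	}
	return 0;
}

Running it shows the up ramp repeatedly being chopped back to zero by the misaligned down EI, which is exactly the false-congestion case that arming only the EI matching the ramp direction would avoid.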
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 90628a47ae17..e2ee1e13cec7 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1297,6 +1297,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
 		goto out;
 
 	mutex_lock(&dev_priv->pcu_lock);
+	dev_priv->pm_rps_events &= ~GEN6_PM_RP_DOWN_EI_EXPIRED;
 
 	pm_iir |= vlv_wa_c0_ei(dev_priv, pm_iir);
 
@@ -1310,10 +1311,12 @@ static void gen6_pm_rps_work(struct work_struct *work)
 		new_delay = rps->boost_freq;
 		adj = 0;
 	} else if (pm_iir & GEN6_PM_RP_UP_THRESHOLD) {
-		if (adj > 0)
+		if (adj > 0) {
+			dev_priv->pm_rps_events |= GEN6_PM_RP_DOWN_EI_EXPIRED;
 			adj *= 2;
-		else /* CHV needs even encode values */
+		} else { /* CHV needs even encode values */
 			adj = IS_CHERRYVIEW(dev_priv) ? 2 : 1;
+		}
 
 		if (new_delay >= rps->max_freq_softlimit)
 			adj = 0;
@@ -1326,10 +1329,12 @@ static void gen6_pm_rps_work(struct work_struct *work)
 		new_delay = rps->min_freq_softlimit;
 		adj = 0;
 	} else if (pm_iir & GEN6_PM_RP_DOWN_THRESHOLD) {
-		if (adj < 0)
+		if (adj < 0) {
+			dev_priv->pm_rps_events |= GEN6_PM_RP_DOWN_EI_EXPIRED;
 			adj *= 2;
-		else /* CHV needs even encode values */
+		} else { /* CHV needs even encode values */
 			adj = IS_CHERRYVIEW(dev_priv) ? -2 : -1;
+		}
 
 		if (new_delay <= rps->min_freq_softlimit)
 			adj = 0;
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index f90a3c7f1c40..321a0acd274a 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -6397,10 +6397,17 @@ static u32 gen6_rps_pm_mask(struct drm_i915_private *dev_priv, u8 val)
 	u32 mask = 0;
 
 	/* We use UP_EI_EXPIRED interupts for both up/down in manual mode */
-	if (val > rps->min_freq_softlimit)
-		mask |= GEN6_PM_RP_UP_EI_EXPIRED | GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
-	if (val < rps->max_freq_softlimit)
-		mask |= GEN6_PM_RP_UP_EI_EXPIRED | GEN6_PM_RP_UP_THRESHOLD;
+	if (val > rps->min_freq_softlimit) {
+		mask |= (GEN6_PM_RP_UP_EI_EXPIRED |
+			 GEN6_PM_RP_DOWN_EI_EXPIRED |
+			 GEN6_PM_RP_DOWN_THRESHOLD |
+			 GEN6_PM_RP_DOWN_TIMEOUT);
+	}
+	if (val < rps->max_freq_softlimit) {
+		mask |= (GEN6_PM_RP_UP_EI_EXPIRED |
+			 GEN6_PM_RP_DOWN_EI_EXPIRED |
+			 GEN6_PM_RP_UP_THRESHOLD);
+	}
 
 	mask &= dev_priv->pm_rps_events;
Currently, we note congestion for the slow start ramping up of RPS only
when we overshoot the target workload and have to reverse direction for
our reclocking. That is, if we have a period where the current GPU
frequency is enough to sustain the workload within our target
utilisation, we should not trigger any RPS EI interrupts, and may then
continue again with the previous last_adj after multiple periods,
causing us to dramatically overreact. To prevent us from missing a
period where the system is behaving correctly, we can schedule an extra
interrupt that will not be associated with either an up or down event,
causing us to reset last_adj back to zero and cancelling the slow start
due to the congestion.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 13 +++++++++----
 drivers/gpu/drm/i915/intel_pm.c | 15 +++++++++++----
 2 files changed, 20 insertions(+), 8 deletions(-)
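[Editor's note] To make the intended last_adj behaviour concrete, here is a minimal userspace model of the adjustment rule the commit message describes. It is a sketch under assumptions: the event names are invented for the model, and it deliberately ignores CHV's even encode values, the softlimit clamps, the boost path and the workqueue/interrupt plumbing in the real handler.

/*
 * Sketch of the slow-start rule: consecutive up or down threshold
 * events grow last_adj, while a stray EI-expired event with no
 * threshold bit set means "the system behaved for a whole interval"
 * and resets last_adj to zero.
 */
#include <stdio.h>

enum rps_event { EVT_UP, EVT_DOWN, EVT_EI_ONLY };	/* hypothetical names */

static int next_adj(int last_adj, enum rps_event evt)
{
	switch (evt) {
	case EVT_UP:
		return last_adj > 0 ? last_adj * 2 : 1;
	case EVT_DOWN:
		return last_adj < 0 ? last_adj * 2 : -1;
	case EVT_EI_ONLY:
	default:
		/* no up/down event in the interval: drop the slow start */
		return 0;
	}
}

int main(void)
{
	static const enum rps_event trace[] = {
		EVT_UP, EVT_UP, EVT_UP,		/* ramping: 1, 2, 4 */
		EVT_EI_ONLY,			/* quiet interval: reset to 0 */
		EVT_UP,				/* start over gently: 1 */
	};
	int adj = 0;
	unsigned int i;

	for (i = 0; i < sizeof(trace) / sizeof(trace[0]); i++) {
		adj = next_adj(adj, trace[i]);
		printf("event %u -> last_adj = %d\n", i, adj);
	}
	return 0;
}

The point of the extra interrupt is visible in the trace: without it, the quiet interval generates no event at all, so the next up event would continue from last_adj = 4 instead of restarting the ramp from 1.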