diff mbox

[18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

Message ID 20180517074055.14638-18-chris@chris-wilson.co.uk (mailing list archive)
State New, archived

Commit Message

Chris Wilson May 17, 2018, 7:40 a.m. UTC
Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

Comparing the impact on the maximum latency observed over a 120s interval,
repeated several times (using gem_syslatency, similar to RT's cyclictest)
while the system is fully laden with i915 nops, we see that direct
submission definitely worsens the response but not to the same outlandish
degree as before.

x Unladen baseline
+ Using tasklet
* Direct submission

+------------------------------------------------------------------------+
|xx x          ++    +++ +                           *  * *   ** *** *  *|
||A|              |__AM__|                               |_____A_M___|   |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x  10             5            18            10           9.3     3.6530049
+  10            72           120           108         102.9     15.758243
*  10           255           348           316         305.7      28.74814

And with a background load

+------------------------------------------------------------------------+
|x                          +           *              *                 |
|x                    +     + + + +  + +* * ** ++      * *   *          *|
|A                        |_______A_____|__|_______A___M______|          |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x  10             4            11             9           8.5     2.1730675
+  10           633          1388           972           993     243.33744
*  10          1152          2109          1608        1488.3     314.80719
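The probe itself is cyclictest-like: sleep until an absolute deadline and record how late the wakeup was, so preemption-hostile work (such as a long irq handler or tasklet) shows up directly in the max. A rough user-space sketch of that measurement loop (not gem_syslatency itself; names here are illustrative):

```c
/* Sketch of a cyclictest-style latency probe, not gem_syslatency itself:
 * arm an absolute 1ms timer, sleep, and record how late the wakeup was. */
#define _POSIX_C_SOURCE 200112L /* for clock_nanosleep() */
#include <stdint.h>
#include <time.h>

static int64_t ns_since(const struct timespec *later,
			const struct timespec *earlier)
{
	return (later->tv_sec - earlier->tv_sec) * 1000000000LL +
	       (later->tv_nsec - earlier->tv_nsec);
}

/* Returns the worst observed wakeup latency in nanoseconds. */
static int64_t measure_max_latency(int iterations)
{
	struct timespec deadline, now;
	int64_t max_ns = 0;

	clock_gettime(CLOCK_MONOTONIC, &deadline);
	while (iterations--) {
		deadline.tv_nsec += 1000000; /* next wakeup in +1ms */
		if (deadline.tv_nsec >= 1000000000L) {
			deadline.tv_nsec -= 1000000000L;
			deadline.tv_sec++;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
		clock_gettime(CLOCK_MONOTONIC, &now);

		int64_t late = ns_since(&now, &deadline);
		if (late > max_ns)
			max_ns = late;
	}
	return max_ns;
}
```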

References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c  | 11 ++------
 drivers/gpu/drm/i915/intel_lrc.c | 44 +++++++++++++++++---------------
 2 files changed, 26 insertions(+), 29 deletions(-)

Comments

Tvrtko Ursulin May 17, 2018, 1:13 p.m. UTC | #1
On 17/05/2018 08:40, Chris Wilson wrote:
> Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> bottom half"), we came to the conclusion that running our CSB processing
> and ELSP submission from inside the irq handler was a bad idea. A really
> bad idea as we could impose nearly 1s latency on other users of the
> system, on average! Deferring our work to a tasklet allowed us to do the
> processing with irqs enabled, reducing the impact to an average of about
> 50us.
> 
> We have since eradicated the use of forcewaked mmio from inside the CSB
> processing and ELSP submission, bringing the impact down to around 5us
> (on Kabylake); an order of magnitude better than our measurements 2
> years ago on Broadwell and only about 2x worse on average than the
> gem_syslatency on an unladen system.
> 
> Comparing the impact on the maximum latency observed over a 120s interval,
> repeated several times (using gem_syslatency, similar to RT's cyclictest)
> while the system is fully laden with i915 nops, we see that direct
> submission definitely worsens the response but not to the same outlandish
> degree as before.
> 
> x Unladen baseline
> + Using tasklet
> * Direct submission
> 
> +------------------------------------------------------------------------+
> |xx x          ++    +++ +                           *  * *   ** *** *  *|
> ||A|              |__AM__|                               |_____A_M___|   |
> +------------------------------------------------------------------------+

What are these headers? This one and below, I cannot decipher them at all.

>      N           Min           Max        Median           Avg        Stddev
> x  10             5            18            10           9.3     3.6530049
> +  10            72           120           108         102.9     15.758243
> *  10           255           348           316         305.7      28.74814

In micro-seconds? so tasklet is 108us median? Direct submission 316us 
median?

> 
> And with a background load

This is IO background load?

Regards,

Tvrtko

> 
> +------------------------------------------------------------------------+
> |x                          +           *              *                 |
> |x                    +     + + + +  + +* * ** ++      * *   *          *|
> |A                        |_______A_____|__|_______A___M______|          |
> +------------------------------------------------------------------------+
>      N           Min           Max        Median           Avg        Stddev
> x  10             4            11             9           8.5     2.1730675
> +  10           633          1388           972           993     243.33744
> *  10          1152          2109          1608        1488.3     314.80719
> 
> References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_irq.c  | 11 ++------
>   drivers/gpu/drm/i915/intel_lrc.c | 44 +++++++++++++++++---------------
>   2 files changed, 26 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 3f139ff64385..8b61ebf5cb4a 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -1462,22 +1462,15 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
>   static void
>   gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>   {
> -	bool tasklet = false;
> -
> -	if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
> +	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
>   		intel_engine_handle_execlists_irq(engine);
> -		tasklet = true;
> -	}
>   
>   	if (iir & GT_RENDER_USER_INTERRUPT) {
>   		if (intel_engine_uses_guc(engine))
> -			tasklet = true;
> +			tasklet_hi_schedule(&engine->execlists.tasklet);
>   
>   		notify_ring(engine);
>   	}
> -
> -	if (tasklet)
> -		tasklet_hi_schedule(&engine->execlists.tasklet);
>   }
>   
>   static void gen8_gt_irq_ack(struct drm_i915_private *i915,
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 954eb3a71051..37839d89e03a 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -575,7 +575,7 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
>   	execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
>   }
>   
> -static void __execlists_dequeue(struct intel_engine_cs *engine)
> +static void execlists_dequeue(struct intel_engine_cs *engine)
>   {
>   	struct intel_engine_execlists * const execlists = &engine->execlists;
>   	struct execlist_port *port = execlists->port;
> @@ -587,7 +587,11 @@ static void __execlists_dequeue(struct intel_engine_cs *engine)
>   
>   	lockdep_assert_held(&engine->timeline.lock);
>   
> -	/* Hardware submission is through 2 ports. Conceptually each port
> +	if (execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
> +		return;
> +
> +	/*
> +	 * Hardware submission is through 2 ports. Conceptually each port
>   	 * has a (RING_START, RING_HEAD, RING_TAIL) tuple. RING_START is
>   	 * static for a context, and unique to each, so we only execute
>   	 * requests belonging to a single context from each ring. RING_HEAD
> @@ -777,15 +781,6 @@ static void __execlists_dequeue(struct intel_engine_cs *engine)
>   		   !port_isset(engine->execlists.port));
>   }
>   
> -static void execlists_dequeue(struct intel_engine_cs *engine)
> -{
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&engine->timeline.lock, flags);
> -	__execlists_dequeue(engine);
> -	spin_unlock_irqrestore(&engine->timeline.lock, flags);
> -}
> -
>   void
>   execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
>   {
> @@ -1106,6 +1101,7 @@ void intel_engine_handle_execlists_irq(struct intel_engine_cs *engine)
>   	       execlists->csb_read);
>   	execlists->csb_head = head;
>   
> +	execlists_dequeue(engine);
>   	spin_unlock(&engine->timeline.lock);
>   }
>   
> @@ -1122,8 +1118,9 @@ static void execlists_submission_tasklet(unsigned long data)
>   		  engine->i915->gt.awake,
>   		  engine->execlists.active);
>   
> -	if (!execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
> -		execlists_dequeue(engine);
> +	spin_lock_irq(&engine->timeline.lock);
> +	execlists_dequeue(engine);
> +	spin_unlock_irq(&engine->timeline.lock);
>   }
>   
>   static void queue_request(struct intel_engine_cs *engine,
> @@ -1134,16 +1131,20 @@ static void queue_request(struct intel_engine_cs *engine,
>   		      &lookup_priolist(engine, prio)->requests);
>   }
>   
> -static void __submit_queue(struct intel_engine_cs *engine, int prio)
> +static void __update_queue(struct intel_engine_cs *engine, int prio)
>   {
>   	engine->execlists.queue_priority = prio;
> -	tasklet_hi_schedule(&engine->execlists.tasklet);
>   }
>   
>   static void submit_queue(struct intel_engine_cs *engine, int prio)
>   {
> -	if (prio > engine->execlists.queue_priority)
> -		__submit_queue(engine, prio);
> +	if (prio > engine->execlists.queue_priority) {
> +		__update_queue(engine, prio);
> +		if (!intel_engine_uses_guc(engine))
> +			execlists_dequeue(engine);
> +		else
> +			tasklet_hi_schedule(&engine->execlists.tasklet);
> +	}
>   }
>   
>   static void execlists_submit_request(struct i915_request *request)
> @@ -1155,11 +1156,12 @@ static void execlists_submit_request(struct i915_request *request)
>   	spin_lock_irqsave(&engine->timeline.lock, flags);
>   
>   	queue_request(engine, &request->sched, rq_prio(request));
> -	submit_queue(engine, rq_prio(request));
>   
>   	GEM_BUG_ON(!engine->execlists.first);
>   	GEM_BUG_ON(list_empty(&request->sched.link));
>   
> +	submit_queue(engine, rq_prio(request));
> +
>   	spin_unlock_irqrestore(&engine->timeline.lock, flags);
>   }
>   
> @@ -1286,8 +1288,10 @@ static void execlists_schedule(struct i915_request *request,
>   		}
>   
>   		if (prio > engine->execlists.queue_priority &&
> -		    i915_sw_fence_done(&sched_to_request(node)->submit))
> -			__submit_queue(engine, prio);
> +		    i915_sw_fence_done(&sched_to_request(node)->submit)) {
> +			__update_queue(engine, prio);
> +			tasklet_hi_schedule(&engine->execlists.tasklet);
> +		}
>   	}
>   
>   	spin_unlock_irq(&engine->timeline.lock);
>
Chris Wilson May 17, 2018, 5:07 p.m. UTC | #2
Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> 
> On 17/05/2018 08:40, Chris Wilson wrote:
> > [commit message snipped]
> >
> > x Unladen baseline
> > + Using tasklet
> > * Direct submission
> > 
> > +------------------------------------------------------------------------+
> > |xx x          ++    +++ +                           *  * *   ** *** *  *|
> > ||A|              |__AM__|                               |_____A_M___|   |
> > +------------------------------------------------------------------------+
> 
> What are these headers? This one and below, I cannot decipher them at all.

Ministat histogram. The headers are the labels for the charts; it's a
bit flat, so it's hard to tell it's a histogram.

> >      N           Min           Max        Median           Avg        Stddev
> > x  10             5            18            10           9.3     3.6530049
> > +  10            72           120           108         102.9     15.758243
> > *  10           255           348           316         305.7      28.74814
> 
> In micro-seconds? so tasklet is 108us median? Direct submission 316us 
> median?

Yup, more runs required so you have prettier graphs and units.

> > And with a background load
> 
> This is IO background load?

Yes, background writeout.
-Chris
Tvrtko Ursulin May 18, 2018, 8:06 a.m. UTC | #3
On 17/05/2018 18:07, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
>>
>> On 17/05/2018 08:40, Chris Wilson wrote:
>>> [commit message snipped]
>>>
>>> x Unladen baseline
>>> + Using tasklet
>>> * Direct submission
>>>
>>> +------------------------------------------------------------------------+
>>> |xx x          ++    +++ +                           *  * *   ** *** *  *|
>>> ||A|              |__AM__|                               |_____A_M___|   |
>>> +------------------------------------------------------------------------+
>>
>> What are these headers? This one and below, I cannot decipher them at all.
> 
> Ministat histogram. The headers are the labels for the charts; it's a
> bit flat, so it's hard to tell it's a histogram.
> 
>>>       N           Min           Max        Median           Avg        Stddev
>>> x  10             5            18            10           9.3     3.6530049
>>> +  10            72           120           108         102.9     15.758243
>>> *  10           255           348           316         305.7      28.74814
>>
>> In micro-seconds? so tasklet is 108us median? Direct submission 316us
>> median?
> 
> Yup, more runs required so you have prettier graphs and units.

Biggest problem for me is that in these tests it is only showing as 
significantly worse than the current tasklet. So it is kind of difficult 
to sell the series. :)

If it only solves issues when submitting from an RT task then it is not 
so interesting. RT tasks are rare, and ones which deal with 3d/media/ocl 
are probably even rarer. Or do you know of any?

Or perhaps you need to add data from gem_latency as well if that shows 
some wins purely on the i915 side?

Regards,

Tvrtko

>>> And with a background load
>>
>> This is IO background load?
> 
> Yes, background writeout.
> -Chris
>
Chris Wilson May 18, 2018, 8:18 a.m. UTC | #4
Quoting Tvrtko Ursulin (2018-05-18 09:06:03)
> 
> On 17/05/2018 18:07, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> >>
> >> On 17/05/2018 08:40, Chris Wilson wrote:
> >>> [commit message snipped]
> >>>
> >>> x Unladen baseline
> >>> + Using tasklet
> >>> * Direct submission
> >>>
> >>> +------------------------------------------------------------------------+
> >>> |xx x          ++    +++ +                           *  * *   ** *** *  *|
> >>> ||A|              |__AM__|                               |_____A_M___|   |
> >>> +------------------------------------------------------------------------+
> >>
> >> What are these headers? This one and below, I cannot decipher them at all.
> > 
> > Ministat histogram. The headers are the labels for the charts; it's a
> > bit flat, so it's hard to tell it's a histogram.
> > 
> >>>       N           Min           Max        Median           Avg        Stddev
> >>> x  10             5            18            10           9.3     3.6530049
> >>> +  10            72           120           108         102.9     15.758243
> >>> *  10           255           348           316         305.7      28.74814
> >>
> >> In micro-seconds? so tasklet is 108us median? Direct submission 316us
> >> median?
> > 
> > Yup, more runs required so you have prettier graphs and units.
> 
> Biggest problem for me is that in these tests it is only showing as 
> significantly worse than the current tasklet. So it is kind of difficult 
> to sell the series. :)

Ba! Compared to the previous best state 2 years ago, we're an order of
magnitude better. (Unbelievable!)

I think it's simply a lack of a reference point: we're about 100% faster
in some microbenchmark stress igts, about 10% faster in regular stress
igts, and will no longer suffer some of the 200ms timeout -> wedged cases!

Any latency is bad, but at what point do we worry about the trade-off
between our latency and the rest of the system?
 
> If it only solves issues when submitting from an RT task then it is not 
> so interesting. RT tasks are rare, and ones which deal with 3d/media/ocl 
> are probably even rarer. Or do you know of any?

No, it's not just an issue from RT. We see examples of the driver being
so starved we declare the system is wedged, to much more mundane
artifacts of our throughput being diminished due to the latency in
submitting work.
 
> Or perhaps you need to add data from gem_latency as well if that shows 
> some wins purely on the i915 side?

The patch before (process CSB from interrupt handler) is justifiable
just by the bugs it solves. This patch, I think is justifiable by our
perf gains and the impact it has for low latency requirements. Or we can
wait until the next patch, which tries to reduce the lock contention by
splitting the data structures, is mature.
-Chris
Chris Wilson May 18, 2018, 7:36 p.m. UTC | #5
Quoting Tvrtko Ursulin (2018-05-18 09:06:03)
> 
> On 17/05/2018 18:07, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> >>
> >> On 17/05/2018 08:40, Chris Wilson wrote:
> >>> [commit message snipped]
> >>>
> >>> x Unladen baseline
> >>> + Using tasklet
> >>> * Direct submission
> >>>
> >>> +------------------------------------------------------------------------+
> >>> |xx x          ++    +++ +                           *  * *   ** *** *  *|
> >>> ||A|              |__AM__|                               |_____A_M___|   |
> >>> +------------------------------------------------------------------------+
> >>
> >> What are these headers? This one and below, I cannot decipher them at all.
> > 
> > Ministat histogram. The headers are the labels for the charts; it's a
> > bit flat, so it's hard to tell it's a histogram.
> > 
> >>>       N           Min           Max        Median           Avg        Stddev
> >>> x  10             5            18            10           9.3     3.6530049
> >>> +  10            72           120           108         102.9     15.758243
> >>> *  10           255           348           316         305.7      28.74814
> >>
> >> In micro-seconds? so tasklet is 108us median? Direct submission 316us
> >> median?
> > 
> > Yup, more runs required so you have prettier graphs and units.
> 
> Biggest problem for me is that in these tests it is only showing as 
> significantly worse than the current tasklet. So it is kind of difficult 
> to sell the series. :)

As a reference point, on bdw from before we introduced tasklets:
gem_syslatency: cycles=20849458, latency mean=678.892us max=8308500us
gem_syslatency: cycles=22964600, latency mean=1583.312us max=5991894us
gem_syslatency: cycles=21404190, latency mean=1766.220us max=12423925us
gem_syslatency: cycles=22779405, latency mean=1698.282us max=9110117us
gem_syslatency: cycles=22021655, latency mean=1921.855us max=7193398us

Message from syslogd@broadwell at May 18 18:27:47 ...
 kernel:[ 8991.394488] NMI watchdog: BUG: soft lockup - CPU#2 stuck for
 22s! [gem_syslatency:1800]
 gem_syslatency: cycles=21218381, latency mean=1711.088us max=13932087us
 gem_syslatency: cycles=22189636, latency mean=1724.968us max=15483237us
 gem_syslatency: cycles=21941275, latency mean=1697.738us max=6065099us
 gem_syslatency: cycles=21896529, latency mean=1748.536us max=8007063us
 gem_syslatency: cycles=22053588, latency mean=1677.465us max=6604678us
 gem_syslatency: cycles=21657859, latency mean=1570.085us max=10346811us
 gem_syslatency: cycles=21627270, latency mean=1227.782us max=10489759us
 gem_syslatency: cycles=22441025, latency mean=1635.776us max=5932223us
 gem_syslatency: cycles=21890354, latency mean=1790.042us max=7956373us
 gem_syslatency: cycles=20976620, latency mean=989.215us max=11631632us
 gem_syslatency: cycles=22087242, latency mean=1723.138us max=12337920us
 gem_syslatency: cycles=22800746, latency mean=1749.050us max=7080445us

* at this point, the sata driver ate itself and took out the fs.
-Chris
Chris Wilson May 18, 2018, 9:21 p.m. UTC | #6
Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> 
> On 17/05/2018 08:40, Chris Wilson wrote:
> > [commit message snipped]
> >
> > x Unladen baseline
> > + Using tasklet
> > * Direct submission
> > 
> > +------------------------------------------------------------------------+
> > |xx x          ++    +++ +                           *  * *   ** *** *  *|
> > ||A|              |__AM__|                               |_____A_M___|   |
> > +------------------------------------------------------------------------+
> 
> What are these headers? This one and below, I cannot decipher them at all.
> 
> >      N           Min           Max        Median           Avg        Stddev
> > x  10             5            18            10           9.3     3.6530049
> > +  10            72           120           108         102.9     15.758243
> > *  10           255           348           316         305.7      28.74814
> 
> In micro-seconds? so tasklet is 108us median? Direct submission 316us 
> median?

Perspective, ivb ringbuf:

x syslatency-ringbuf-mean.txt
+ syslatency-ringbuf-max.txt
+------------------------------------------------------------------------+
|   xx                                                           +       |
|   xx                                                           +       |
|   xx                                                           +       |
|   xx                                                          ++       |
|  xxxx                                       +  +      +  ++ + ++       |
|x xxxx      +                             +  +  ++  +  ++ ++++ ++++    +|
|  |AM                                         |__________A__M_______|   |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x  30         5.395        17.893        13.626     13.135367     2.2143809
+  30            33           169           143     135.86667     25.609445

Using execlists+tasklet is on par with irqoff (i.e. max) latency of
ringbuf. (There should be very little irq disabling for ringbuf.) Normal
average latency of execlists+direct_submission is still better than ivb,
but that is almost entirely down to generational differences.
-Chris

Patch

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3f139ff64385..8b61ebf5cb4a 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1462,22 +1462,15 @@  static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
 static void
 gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 {
-	bool tasklet = false;
-
-	if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
+	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
 		intel_engine_handle_execlists_irq(engine);
-		tasklet = true;
-	}
 
 	if (iir & GT_RENDER_USER_INTERRUPT) {
 		if (intel_engine_uses_guc(engine))
-			tasklet = true;
+			tasklet_hi_schedule(&engine->execlists.tasklet);
 
 		notify_ring(engine);
 	}
-
-	if (tasklet)
-		tasklet_hi_schedule(&engine->execlists.tasklet);
 }
 
 static void gen8_gt_irq_ack(struct drm_i915_private *i915,
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 954eb3a71051..37839d89e03a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -575,7 +575,7 @@  static void complete_preempt_context(struct intel_engine_execlists *execlists)
 	execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
 }
 
-static void __execlists_dequeue(struct intel_engine_cs *engine)
+static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct execlist_port *port = execlists->port;
@@ -587,7 +587,11 @@  static void __execlists_dequeue(struct intel_engine_cs *engine)
 
 	lockdep_assert_held(&engine->timeline.lock);
 
-	/* Hardware submission is through 2 ports. Conceptually each port
+	if (execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
+		return;
+
+	/*
+	 * Hardware submission is through 2 ports. Conceptually each port
 	 * has a (RING_START, RING_HEAD, RING_TAIL) tuple. RING_START is
 	 * static for a context, and unique to each, so we only execute
 	 * requests belonging to a single context from each ring. RING_HEAD
@@ -777,15 +781,6 @@  static void __execlists_dequeue(struct intel_engine_cs *engine)
 		   !port_isset(engine->execlists.port));
 }
 
-static void execlists_dequeue(struct intel_engine_cs *engine)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&engine->timeline.lock, flags);
-	__execlists_dequeue(engine);
-	spin_unlock_irqrestore(&engine->timeline.lock, flags);
-}
-
 void
 execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
 {
@@ -1106,6 +1101,7 @@  void intel_engine_handle_execlists_irq(struct intel_engine_cs *engine)
 	       execlists->csb_read);
 	execlists->csb_head = head;
 
+	execlists_dequeue(engine);
 	spin_unlock(&engine->timeline.lock);
 }
 
@@ -1122,8 +1118,9 @@  static void execlists_submission_tasklet(unsigned long data)
 		  engine->i915->gt.awake,
 		  engine->execlists.active);
 
-	if (!execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
-		execlists_dequeue(engine);
+	spin_lock_irq(&engine->timeline.lock);
+	execlists_dequeue(engine);
+	spin_unlock_irq(&engine->timeline.lock);
 }
 
 static void queue_request(struct intel_engine_cs *engine,
@@ -1134,16 +1131,20 @@  static void queue_request(struct intel_engine_cs *engine,
 		      &lookup_priolist(engine, prio)->requests);
 }
 
-static void __submit_queue(struct intel_engine_cs *engine, int prio)
+static void __update_queue(struct intel_engine_cs *engine, int prio)
 {
 	engine->execlists.queue_priority = prio;
-	tasklet_hi_schedule(&engine->execlists.tasklet);
 }
 
 static void submit_queue(struct intel_engine_cs *engine, int prio)
 {
-	if (prio > engine->execlists.queue_priority)
-		__submit_queue(engine, prio);
+	if (prio > engine->execlists.queue_priority) {
+		__update_queue(engine, prio);
+		if (!intel_engine_uses_guc(engine))
+			execlists_dequeue(engine);
+		else
+			tasklet_hi_schedule(&engine->execlists.tasklet);
+	}
 }
 
 static void execlists_submit_request(struct i915_request *request)
@@ -1155,11 +1156,12 @@  static void execlists_submit_request(struct i915_request *request)
 	spin_lock_irqsave(&engine->timeline.lock, flags);
 
 	queue_request(engine, &request->sched, rq_prio(request));
-	submit_queue(engine, rq_prio(request));
 
 	GEM_BUG_ON(!engine->execlists.first);
 	GEM_BUG_ON(list_empty(&request->sched.link));
 
+	submit_queue(engine, rq_prio(request));
+
 	spin_unlock_irqrestore(&engine->timeline.lock, flags);
 }
 
@@ -1286,8 +1288,10 @@  static void execlists_schedule(struct i915_request *request,
 		}
 
 		if (prio > engine->execlists.queue_priority &&
-		    i915_sw_fence_done(&sched_to_request(node)->submit))
-			__submit_queue(engine, prio);
+		    i915_sw_fence_done(&sched_to_request(node)->submit)) {
+			__update_queue(engine, prio);
+			tasklet_hi_schedule(&engine->execlists.tasklet);
+		}
 	}
 
 	spin_unlock_irq(&engine->timeline.lock);
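
The control-flow change above can be modelled in user space roughly as follows (a toy sketch with stand-in names, not the kernel code): for execlists, raising queue_priority now dequeues directly under the lock, while GuC submission still defers to the tasklet.

```c
/* Toy user-space model of the submit_queue() change in this patch.
 * The struct and helpers here are hypothetical stand-ins for the
 * kernel's engine/execlists machinery. */
#include <stdbool.h>

struct toy_engine {
	int queue_priority;
	bool uses_guc;
	int direct_dequeues;    /* models execlists_dequeue() calls */
	int tasklets_scheduled; /* models tasklet_hi_schedule() calls */
};

static void toy_execlists_dequeue(struct toy_engine *e)
{
	e->direct_dequeues++;
}

static void toy_tasklet_hi_schedule(struct toy_engine *e)
{
	e->tasklets_scheduled++;
}

/* Mirrors submit_queue() after this patch; the caller is assumed to
 * hold the timeline lock. */
static void toy_submit_queue(struct toy_engine *e, int prio)
{
	if (prio > e->queue_priority) {
		e->queue_priority = prio; /* __update_queue() */
		if (!e->uses_guc)
			toy_execlists_dequeue(e);   /* direct submission */
		else
			toy_tasklet_hi_schedule(e); /* defer, as before */
	}
}
```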