[07/31] drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd)

Message ID 20180625094842.8499-7-chris@chris-wilson.co.uk (mailing list archive)
State New, archived

Commit Message

Chris Wilson June 25, 2018, 9:48 a.m. UTC
Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.
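
For context, the generic shape of that deferral is the standard hardirq-to-tasklet split sketched below. This is an illustrative example only (using the pre-5.9 tasklet API with an unsigned long cookie), not i915 code; my_bottom_half, my_tasklet and my_irq_handler are hypothetical names.

#include <linux/interrupt.h>

static void my_bottom_half(unsigned long data)
{
	/* The expensive processing runs here, with hardirqs enabled. */
}

DECLARE_TASKLET(my_tasklet, my_bottom_half, 0);

static irqreturn_t my_irq_handler(int irq, void *arg)
{
	/* Ack the hardware, then defer the real work out of hardirq context. */
	tasklet_hi_schedule(&my_tasklet);
	return IRQ_HANDLED;
}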

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

In this iteration of the tasklet-vs-direct submission debate, we seek a
compromise whereby we submit new requests immediately to the HW but
defer processing the CS interrupt onto a tasklet. We gain the advantage
of low-latency and ksoftirqd avoidance when waking up the HW, while
avoiding the system-wide starvation of our CS irq-storms.
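
In code terms the split looks roughly like the following (a condensed sketch of the diff below, not a verbatim excerpt): request submission runs the execlists logic inline under the engine timeline lock, while the CS interrupt still only schedules the tasklet.

static void __submit_queue(struct intel_engine_cs *engine)
{
	struct intel_engine_execlists * const execlists = &engine->execlists;

	if (reset_in_progress(execlists))
		return; /* defer until we restart the engine following reset */

	/* New requests: process the CSB and dequeue directly, avoiding ksoftirqd. */
	if (execlists->tasklet.func == execlists_submission_tasklet)
		__execlists_submission_tasklet(engine);
	else
		tasklet_hi_schedule(&execlists->tasklet);
}

static void gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
{
	/* CS interrupts: still deferred to the tasklet to avoid irq storms. */
	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
		tasklet_hi_schedule(&engine->execlists.tasklet);
}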

Comparing the impact on the maximum latency observed (that is the time
stolen from an RT process) over a 120s interval, repeated several times
(using gem_syslatency, similar to RT's cyclictest) while the system is
fully laden with i915 nops, we see that direct submission can actually
improve the worst case.

Maximum latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 2)
  x Always using tasklets (a couple of >1000us outliers removed)
  + Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
|          +                                                             |
|          +                                                             |
|          +                                                             |
|          +       +                                                     |
|          + +     +                                                     |
|       +  + +     +  x     x     x                                      |
|      +++ + +     +  x  x  x  x  x  x                                   |
|      +++ + ++  + +  *x x  x  x  x  x                                   |
|      +++ + ++  + *  *x x  *  x  x  x                                   |
|    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
|    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
|   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
|x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
|             |__________MA___________|                                  |
|      |______M__A________|                                              |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 118            91           186           124     125.28814     16.279137
+ 120            92           187           109     112.00833     13.458617
Difference at 95.0% confidence
	-13.2798 +/- 3.79219
	-10.5994% +/- 3.02677%
	(Student's t, pooled s = 14.9237)

However the mean latency is adversely affected:

Mean latency in microseconds of a third party RT thread
(gem_syslatency -t 120 -f 1)
  x Always using tasklets
  + Only using tasklets from CS irq, direct submission of requests
+------------------------------------------------------------------------+
|           xxxxxx                                        +   ++         |
|           xxxxxx                                        +   ++         |
|           xxxxxx                                      + +++ ++         |
|           xxxxxxx                                     +++++ ++         |
|           xxxxxxx                                     +++++ ++         |
|           xxxxxxx                                     +++++ +++        |
|           xxxxxxx                                   + ++++++++++       |
|           xxxxxxxx                                 ++ ++++++++++       |
|           xxxxxxxx                                 ++ ++++++++++       |
|          xxxxxxxxxx                                +++++++++++++++     |
|         xxxxxxxxxxx    x                           +++++++++++++++     |
|x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
|           |__A__|                                                      |
|                                                      |____A___|        |
+------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x 120         3.506         3.727         3.631     3.6321417    0.02773109
+ 120         3.834         4.149         4.039     4.0375167   0.041221676
Difference at 95.0% confidence
	0.405375 +/- 0.00888913
	11.1608% +/- 0.244735%
	(Student's t, pooled s = 0.03513)

However, since the mean latency corresponds to the amount of irqsoff
processing we have to do for a CS interrupt, we only need to speed that
up to benefit not just system latency but our own throughput.

v2: Remember to defer submissions when under reset.
v4: Only use direct submission for new requests
v5: Be aware that with mixing direct tasklet evaluation and deferred
tasklets, we may end up idling before running the deferred tasklet.

Testcase: igt/gem_exec_latency/*rthog*
References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.h         |   5 +
 drivers/gpu/drm/i915/i915_irq.c         |  11 +-
 drivers/gpu/drm/i915/intel_engine_cs.c  |   8 +-
 drivers/gpu/drm/i915/intel_lrc.c        | 147 ++++++++++++++----------
 drivers/gpu/drm/i915/intel_ringbuffer.h |   1 -
 5 files changed, 98 insertions(+), 74 deletions(-)

Comments

Tvrtko Ursulin June 27, 2018, 10:40 a.m. UTC | #1
On 25/06/2018 10:48, Chris Wilson wrote:
> Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> bottom half"), we came to the conclusion that running our CSB processing
> and ELSP submission from inside the irq handler was a bad idea. A really
> bad idea as we could impose nearly 1s latency on other users of the
> system, on average! Deferring our work to a tasklet allowed us to do the
> processing with irqs enabled, reducing the impact to an average of about
> 50us.
> 
> We have since eradicated the use of forcewaked mmio from inside the CSB
> processing and ELSP submission, bringing the impact down to around 5us
> (on Kabylake); an order of magnitude better than our measurements 2
> years ago on Broadwell and only about 2x worse on average than the
> gem_syslatency on an unladen system.
> 
> In this iteration of the tasklet-vs-direct submission debate, we seek a
> compromise whereby we submit new requests immediately to the HW but
> defer processing the CS interrupt onto a tasklet. We gain the advantage
> of low-latency and ksoftirqd avoidance when waking up the HW, while
> avoiding the system-wide starvation of our CS irq-storms.
> 
> Comparing the impact on the maximum latency observed (that is the time
> stolen from an RT process) over a 120s interval, repeated several times
> (using gem_syslatency, similar to RT's cyclictest) while the system is
> fully laden with i915 nops, we see that direct submission can actually
> improve the worst case.
> 
> Maximum latency in microseconds of a third party RT thread
> (gem_syslatency -t 120 -f 2)
>    x Always using tasklets (a couple of >1000us outliers removed)
>    + Only using tasklets from CS irq, direct submission of requests
> +------------------------------------------------------------------------+
> |          +                                                             |
> |          +                                                             |
> |          +                                                             |
> |          +       +                                                     |
> |          + +     +                                                     |
> |       +  + +     +  x     x     x                                      |
> |      +++ + +     +  x  x  x  x  x  x                                   |
> |      +++ + ++  + +  *x x  x  x  x  x                                   |
> |      +++ + ++  + *  *x x  *  x  x  x                                   |
> |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
> |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
> |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
> |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
> |             |__________MA___________|                                  |
> |      |______M__A________|                                              |
> +------------------------------------------------------------------------+
>      N           Min           Max        Median           Avg        Stddev
> x 118            91           186           124     125.28814     16.279137
> + 120            92           187           109     112.00833     13.458617
> Difference at 95.0% confidence
> 	-13.2798 +/- 3.79219
> 	-10.5994% +/- 3.02677%
> 	(Student's t, pooled s = 14.9237)
> 
> However the mean latency is adversely affected:
> 
> Mean latency in microseconds of a third party RT thread
> (gem_syslatency -t 120 -f 1)
>    x Always using tasklets
>    + Only using tasklets from CS irq, direct submission of requests
> +------------------------------------------------------------------------+
> |           xxxxxx                                        +   ++         |
> |           xxxxxx                                        +   ++         |
> |           xxxxxx                                      + +++ ++         |
> |           xxxxxxx                                     +++++ ++         |
> |           xxxxxxx                                     +++++ ++         |
> |           xxxxxxx                                     +++++ +++        |
> |           xxxxxxx                                   + ++++++++++       |
> |           xxxxxxxx                                 ++ ++++++++++       |
> |           xxxxxxxx                                 ++ ++++++++++       |
> |          xxxxxxxxxx                                +++++++++++++++     |
> |         xxxxxxxxxxx    x                           +++++++++++++++     |
> |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
> |           |__A__|                                                      |
> |                                                      |____A___|        |
> +------------------------------------------------------------------------+
>      N           Min           Max        Median           Avg        Stddev
> x 120         3.506         3.727         3.631     3.6321417    0.02773109
> + 120         3.834         4.149         4.039     4.0375167   0.041221676
> Difference at 95.0% confidence
> 	0.405375 +/- 0.00888913
> 	11.1608% +/- 0.244735%
> 	(Student's t, pooled s = 0.03513)
> 
> However, since the mean latency corresponds to the amount of irqsoff
> processing we have to do for a CS interrupt, we only need to speed that
> up to benefit not just system latency but our own throughput.
> 
> v2: Remember to defer submissions when under reset.
> v4: Only use direct submission for new requests
> v5: Be aware that with mixing direct tasklet evaluation and deferred
> tasklets, we may end up idling before running the deferred tasklet.
> 
> Testcase: igt/gem_exec_latency/*rthog*
> References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_gem.h         |   5 +
>   drivers/gpu/drm/i915/i915_irq.c         |  11 +-
>   drivers/gpu/drm/i915/intel_engine_cs.c  |   8 +-
>   drivers/gpu/drm/i915/intel_lrc.c        | 147 ++++++++++++++----------
>   drivers/gpu/drm/i915/intel_ringbuffer.h |   1 -
>   5 files changed, 98 insertions(+), 74 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
> index 261da577829a..7892ac773916 100644
> --- a/drivers/gpu/drm/i915/i915_gem.h
> +++ b/drivers/gpu/drm/i915/i915_gem.h
> @@ -88,4 +88,9 @@ static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
>   		tasklet_kill(t);
>   }
>   
> +static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
> +{
> +	return likely(!atomic_read(&t->count));
> +}
> +

For the unlikely-likely chain from 
__submit_queue->reset_in_progress->__tasklet_is_enabled I think it would 
be better to drop the likely/unlikely from the low-level helpers and put 
the one unlikely into __submit_queue.
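
Something like the following sketch of that suggestion (the reviewer's proposal, not code from the patch): keep the low-level helper free of branch hints and express the expectation once, where it is actually known.

static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
{
	return !atomic_read(&t->count);	/* no branch hint in the helper */
}

static inline bool
reset_in_progress(const struct intel_engine_execlists *execlists)
{
	/* The single unlikely() lives where the expectation is known. */
	return unlikely(!__tasklet_is_enabled(&execlists->tasklet));
}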

>   #endif /* __I915_GEM_H__ */
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 46aaef5c1851..316d0b08d40f 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -1469,14 +1469,10 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
>   static void
>   gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>   {
> -	struct intel_engine_execlists * const execlists = &engine->execlists;
>   	bool tasklet = false;
>   
> -	if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
> -		if (READ_ONCE(engine->execlists.active))

What is the thinking behind this change? It used to be that we scheduled 
the tasklet only when we knew we were expecting interrupts, and now we 
don't care any more for some reason?

> -			tasklet = !test_and_set_bit(ENGINE_IRQ_EXECLIST,
> -						    &engine->irq_posted);

And this is gone as well. Can you put a paragraph in the commit message 
explaining the change? It doesn't seem immediately connected with direct 
submission.

> -	}
> +	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
> +		tasklet = true;
>   
>   	if (iir & GT_RENDER_USER_INTERRUPT) {
>   		notify_ring(engine);
> @@ -1484,7 +1480,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>   	}
>   
>   	if (tasklet)
> -		tasklet_hi_schedule(&execlists->tasklet);
> +		tasklet_hi_schedule(&engine->execlists.tasklet);
>   }
>   
>   static void gen8_gt_irq_ack(struct drm_i915_private *i915,
> @@ -2216,7 +2212,6 @@ static irqreturn_t cherryview_irq_handler(int irq, void *arg)
>   
>   		I915_WRITE(VLV_IER, ier);
>   		I915_WRITE(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
> -		POSTING_READ(GEN8_MASTER_IRQ);

What is this?

>   
>   		gen8_gt_irq_handler(dev_priv, master_ctl, gt_iir);
>   
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index 7209c22798e6..ace93958689e 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -1353,12 +1353,10 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
>   		ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine));
>   		read = GEN8_CSB_READ_PTR(ptr);
>   		write = GEN8_CSB_WRITE_PTR(ptr);
> -		drm_printf(m, "\tExeclist CSB read %d [%d cached], write %d [%d from hws], interrupt posted? %s, tasklet queued? %s (%s)\n",
> +		drm_printf(m, "\tExeclist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n",
>   			   read, execlists->csb_head,
>   			   write,
>   			   intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)),
> -			   yesno(test_bit(ENGINE_IRQ_EXECLIST,
> -					  &engine->irq_posted)),
>   			   yesno(test_bit(TASKLET_STATE_SCHED,
>   					  &engine->execlists.tasklet.state)),
>   			   enableddisabled(!atomic_read(&engine->execlists.tasklet.count)));
> @@ -1570,11 +1568,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
>   	spin_unlock(&b->rb_lock);
>   	local_irq_restore(flags);
>   
> -	drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
> +	drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s)\n",
>   		   engine->irq_posted,
>   		   yesno(test_bit(ENGINE_IRQ_BREADCRUMB,
> -				  &engine->irq_posted)),
> -		   yesno(test_bit(ENGINE_IRQ_EXECLIST,
>   				  &engine->irq_posted)));
>   
>   	drm_printf(m, "HWSP:\n");
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 5a12b8fc9d8f..c82efa3ac105 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -562,13 +562,15 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
>   {
>   	GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
>   
> +	__unwind_incomplete_requests(container_of(execlists,
> +						  typeof(struct intel_engine_cs),
> +						  execlists));
>   	execlists_cancel_port_requests(execlists);
> -	execlists_unwind_incomplete_requests(execlists);

Is the ordering change significant and why?

>   
>   	execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
>   }
>   
> -static void __execlists_dequeue(struct intel_engine_cs *engine)
> +static void execlists_dequeue(struct intel_engine_cs *engine)
>   {
>   	struct intel_engine_execlists * const execlists = &engine->execlists;
>   	struct execlist_port *port = execlists->port;
> @@ -580,7 +582,11 @@ static void __execlists_dequeue(struct intel_engine_cs *engine)
>   
>   	lockdep_assert_held(&engine->timeline.lock);
>   
> -	/* Hardware submission is through 2 ports. Conceptually each port
> +	GEM_BUG_ON(execlists_is_active(&engine->execlists,
> +				       EXECLISTS_ACTIVE_PREEMPT));
> +
> +	/*
> +	 * Hardware submission is through 2 ports. Conceptually each port
>   	 * has a (RING_START, RING_HEAD, RING_TAIL) tuple. RING_START is
>   	 * static for a context, and unique to each, so we only execute
>   	 * requests belonging to a single context from each ring. RING_HEAD
> @@ -770,15 +776,6 @@ static void __execlists_dequeue(struct intel_engine_cs *engine)
>   		   !port_isset(engine->execlists.port));
>   }
>   
> -static void execlists_dequeue(struct intel_engine_cs *engine)
> -{
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&engine->timeline.lock, flags);
> -	__execlists_dequeue(engine);
> -	spin_unlock_irqrestore(&engine->timeline.lock, flags);
> -}
> -
>   void
>   execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
>   {
> @@ -874,14 +871,6 @@ static void reset_irq(struct intel_engine_cs *engine)
>   	smp_store_mb(engine->execlists.active, 0);
>   
>   	clear_gtiir(engine);
> -
> -	/*
> -	 * The port is checked prior to scheduling a tasklet, but
> -	 * just in case we have suspended the tasklet to do the
> -	 * wedging make sure that when it wakes, it decides there
> -	 * is no work to do by clearing the irq_posted bit.
> -	 */
> -	clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
>   }
>   
>   static void execlists_cancel_requests(struct intel_engine_cs *engine)
> @@ -956,6 +945,12 @@ static void execlists_cancel_requests(struct intel_engine_cs *engine)
>   	local_irq_restore(flags);
>   }
>   
> +static inline bool
> +reset_in_progress(const struct intel_engine_execlists *execlists)
> +{
> +	return unlikely(!__tasklet_is_enabled(&execlists->tasklet));
> +}
> +
>   static void process_csb(struct intel_engine_cs *engine)
>   {
>   	struct intel_engine_execlists * const execlists = &engine->execlists;
> @@ -963,10 +958,6 @@ static void process_csb(struct intel_engine_cs *engine)
>   	const u32 * const buf = execlists->csb_status;
>   	u8 head, tail;
>   
> -	/* Clear before reading to catch new interrupts */
> -	clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
> -	smp_mb__after_atomic();
> -
>   	/* Note that csb_write, csb_status may be either in HWSP or mmio */
>   	head = execlists->csb_head;
>   	tail = READ_ONCE(*execlists->csb_write);
> @@ -1096,19 +1087,9 @@ static void process_csb(struct intel_engine_cs *engine)
>   	execlists->csb_head = head;
>   }
>   
> -/*
> - * Check the unread Context Status Buffers and manage the submission of new
> - * contexts to the ELSP accordingly.
> - */
> -static void execlists_submission_tasklet(unsigned long data)
> +static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
>   {
> -	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
> -
> -	GEM_TRACE("%s awake?=%d, active=%x, irq-posted?=%d\n",
> -		  engine->name,
> -		  engine->i915->gt.awake,
> -		  engine->execlists.active,
> -		  test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted));
> +	lockdep_assert_held(&engine->timeline.lock);
>   
>   	/*
>   	 * We can skip acquiring intel_runtime_pm_get() here as it was taken
> @@ -1120,18 +1101,33 @@ static void execlists_submission_tasklet(unsigned long data)
>   	 */
>   	GEM_BUG_ON(!engine->i915->gt.awake);
>   
> -	/*
> -	 * Prefer doing test_and_clear_bit() as a two stage operation to avoid
> -	 * imposing the cost of a locked atomic transaction when submitting a
> -	 * new request (outside of the context-switch interrupt).
> -	 */
> -	if (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted))
> -		process_csb(engine);
> -
> +	process_csb(engine);
>   	if (!execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
>   		execlists_dequeue(engine);
>   }
>   
> +/*
> + * Check the unread Context Status Buffers and manage the submission of new
> + * contexts to the ELSP accordingly.
> + */
> +static void execlists_submission_tasklet(unsigned long data)
> +{
> +	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
> +	unsigned long flags;
> +
> +	GEM_TRACE("%s awake?=%d, active=%x\n",
> +		  engine->name,
> +		  engine->i915->gt.awake,
> +		  engine->execlists.active);
> +
> +	spin_lock_irqsave(&engine->timeline.lock, flags);
> +
> +	if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
> +		__execlists_submission_tasklet(engine);

Sounds quite bad! This means we fail to process pending CSB. And going 
idle syncs the tasklets, so what am I missing?

> +
> +	spin_unlock_irqrestore(&engine->timeline.lock, flags);
> +}
> +
>   static void queue_request(struct intel_engine_cs *engine,
>   			  struct i915_sched_node *node,
>   			  int prio)
> @@ -1140,16 +1136,30 @@ static void queue_request(struct intel_engine_cs *engine,
>   		      &lookup_priolist(engine, prio)->requests);
>   }
>   
> -static void __submit_queue(struct intel_engine_cs *engine, int prio)
> +static void __update_queue(struct intel_engine_cs *engine, int prio)
>   {
>   	engine->execlists.queue_priority = prio;
> -	tasklet_hi_schedule(&engine->execlists.tasklet);
> +}
> +
> +static void __submit_queue(struct intel_engine_cs *engine)
> +{
> +	struct intel_engine_execlists * const execlists = &engine->execlists;
> +
> +	if (reset_in_progress(execlists))
> +		return; /* defer until we restart the engine following reset */
> +
> +	if (execlists->tasklet.func == execlists_submission_tasklet)

What is this check determining?

> +		__execlists_submission_tasklet(engine);

Are we always calling it directly even if the ports are busy? Wouldn't 
it be better to schedule in that case?

> +	else
> +		tasklet_hi_schedule(&execlists->tasklet);
>   }
>   
>   static void submit_queue(struct intel_engine_cs *engine, int prio)
>   {
> -	if (prio > engine->execlists.queue_priority)
> -		__submit_queue(engine, prio);
> +	if (prio > engine->execlists.queue_priority) {
> +		__update_queue(engine, prio);
> +		__submit_queue(engine);
> +	}
>   }
>   
>   static void execlists_submit_request(struct i915_request *request)
> @@ -1161,11 +1171,12 @@ static void execlists_submit_request(struct i915_request *request)
>   	spin_lock_irqsave(&engine->timeline.lock, flags);
>   
>   	queue_request(engine, &request->sched, rq_prio(request));
> -	submit_queue(engine, rq_prio(request));
>   
>   	GEM_BUG_ON(!engine->execlists.first);
>   	GEM_BUG_ON(list_empty(&request->sched.link));
>   
> +	submit_queue(engine, rq_prio(request));
> +
>   	spin_unlock_irqrestore(&engine->timeline.lock, flags);
>   }
>   
> @@ -1292,8 +1303,11 @@ static void execlists_schedule(struct i915_request *request,
>   		}
>   
>   		if (prio > engine->execlists.queue_priority &&
> -		    i915_sw_fence_done(&sched_to_request(node)->submit))
> -			__submit_queue(engine, prio);
> +		    i915_sw_fence_done(&sched_to_request(node)->submit)) {
> +			/* defer submission until after all of our updates */
> +			__update_queue(engine, prio);
> +			tasklet_hi_schedule(&engine->execlists.tasklet);
> +		}
>   	}
>   
>   	spin_unlock_irq(&engine->timeline.lock);
> @@ -1874,6 +1888,7 @@ execlists_reset_prepare(struct intel_engine_cs *engine)
>   {
>   	struct intel_engine_execlists * const execlists = &engine->execlists;
>   	struct i915_request *request, *active;
> +	unsigned long flags;
>   
>   	GEM_TRACE("%s\n", engine->name);
>   
> @@ -1895,8 +1910,9 @@ execlists_reset_prepare(struct intel_engine_cs *engine)
>   	 * and avoid blaming an innocent request if the stall was due to the
>   	 * preemption itself.
>   	 */
> -	if (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted))
> -		process_csb(engine);
> +	spin_lock_irqsave(&engine->timeline.lock, flags);
> +
> +	process_csb(engine);
>   
>   	/*
>   	 * The last active request can then be no later than the last request
> @@ -1906,15 +1922,12 @@ execlists_reset_prepare(struct intel_engine_cs *engine)
>   	active = NULL;
>   	request = port_request(execlists->port);
>   	if (request) {
> -		unsigned long flags;
> -
>   		/*
>   		 * Prevent the breadcrumb from advancing before we decide
>   		 * which request is currently active.
>   		 */
>   		intel_engine_stop_cs(engine);
>   
> -		spin_lock_irqsave(&engine->timeline.lock, flags);
>   		list_for_each_entry_from_reverse(request,
>   						 &engine->timeline.requests,
>   						 link) {
> @@ -1924,12 +1937,28 @@ execlists_reset_prepare(struct intel_engine_cs *engine)
>   
>   			active = request;
>   		}
> -		spin_unlock_irqrestore(&engine->timeline.lock, flags);
>   	}
>   
> +	spin_unlock_irqrestore(&engine->timeline.lock, flags);
> +
>   	return active;
>   }
>   
> +static void reset_csb_pointers(struct intel_engine_execlists *execlists)
> +{
> +	/*
> +	 * After a reset, the HW starts writing into CSB entry [0]. We
> +	 * therefore have to set our HEAD pointer back one entry so that
> +	 * the *first* entry we check is entry 0. To complicate this further,
> +	 * as we don't wait for the first interrupt after reset, we have to
> +	 * fake the HW write to point back to the last entry so that our
> +	 * inline comparison of our cached head position against the last HW
> +	 * write works even before the first interrupt.
> +	 */
> +	execlists->csb_head = GEN8_CSB_ENTRIES - 1;
> +	WRITE_ONCE(*execlists->csb_write, (GEN8_CSB_ENTRIES - 1) | 0xff << 16);
> +}

Hmm this makes me think there should be another prep patch before direct 
submission. Need to build a clearer picture before I can suggest how.

> +
>   static void execlists_reset(struct intel_engine_cs *engine,
>   			    struct i915_request *request)
>   {
> @@ -1960,7 +1989,7 @@ static void execlists_reset(struct intel_engine_cs *engine,
>   	__unwind_incomplete_requests(engine);
>   
>   	/* Following the reset, we need to reload the CSB read/write pointers */
> -	engine->execlists.csb_head = GEN8_CSB_ENTRIES - 1;
> +	reset_csb_pointers(&engine->execlists);
>   
>   	spin_unlock_irqrestore(&engine->timeline.lock, flags);
>   
> @@ -2459,7 +2488,6 @@ static int logical_ring_init(struct intel_engine_cs *engine)
>   			upper_32_bits(ce->lrc_desc);
>   	}
>   
> -	execlists->csb_head = GEN8_CSB_ENTRIES - 1;
>   	execlists->csb_read =
>   		i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine));
>   	if (csb_force_mmio(i915)) {
> @@ -2474,6 +2502,7 @@ static int logical_ring_init(struct intel_engine_cs *engine)
>   		execlists->csb_write =
>   			&engine->status_page.page_addr[intel_hws_csb_write_index(i915)];
>   	}
> +	reset_csb_pointers(execlists);
>   
>   	return 0;
>   
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 5b92c5f03e1d..381c243bfc6f 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -359,7 +359,6 @@ struct intel_engine_cs {
>   	atomic_t irq_count;
>   	unsigned long irq_posted;
>   #define ENGINE_IRQ_BREADCRUMB 0
> -#define ENGINE_IRQ_EXECLIST 1
>   
>   	/* Rather than have every client wait upon all user interrupts,
>   	 * with the herd waking after every interrupt and each doing the
> 

Regards,

Tvrtko
Chris Wilson June 27, 2018, 10:58 a.m. UTC | #2
Quoting Tvrtko Ursulin (2018-06-27 11:40:32)
> 
> On 25/06/2018 10:48, Chris Wilson wrote:
> > Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> > bottom half"), we came to the conclusion that running our CSB processing
> > and ELSP submission from inside the irq handler was a bad idea. A really
> > bad idea as we could impose nearly 1s latency on other users of the
> > system, on average! Deferring our work to a tasklet allowed us to do the
> > processing with irqs enabled, reducing the impact to an average of about
> > 50us.
> > 
> > We have since eradicated the use of forcewaked mmio from inside the CSB
> > processing and ELSP submission, bringing the impact down to around 5us
> > (on Kabylake); an order of magnitude better than our measurements 2
> > years ago on Broadwell and only about 2x worse on average than the
> > gem_syslatency on an unladen system.
> > 
> > In this iteration of the tasklet-vs-direct submission debate, we seek a
> > compromise whereby we submit new requests immediately to the HW but
> > defer processing the CS interrupt onto a tasklet. We gain the advantage
> > of low-latency and ksoftirqd avoidance when waking up the HW, while
> > avoiding the system-wide starvation of our CS irq-storms.
> > 
> > Comparing the impact on the maximum latency observed (that is the time
> > stolen from an RT process) over a 120s interval, repeated several times
> > (using gem_syslatency, similar to RT's cyclictest) while the system is
> > fully laden with i915 nops, we see that direct submission can actually
> > improve the worst case.
> > 
> > Maximum latency in microseconds of a third party RT thread
> > (gem_syslatency -t 120 -f 2)
> >    x Always using tasklets (a couple of >1000us outliers removed)
> >    + Only using tasklets from CS irq, direct submission of requests
> > +------------------------------------------------------------------------+
> > |          +                                                             |
> > |          +                                                             |
> > |          +                                                             |
> > |          +       +                                                     |
> > |          + +     +                                                     |
> > |       +  + +     +  x     x     x                                      |
> > |      +++ + +     +  x  x  x  x  x  x                                   |
> > |      +++ + ++  + +  *x x  x  x  x  x                                   |
> > |      +++ + ++  + *  *x x  *  x  x  x                                   |
> > |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
> > |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
> > |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
> > |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
> > |             |__________MA___________|                                  |
> > |      |______M__A________|                                              |
> > +------------------------------------------------------------------------+
> >      N           Min           Max        Median           Avg        Stddev
> > x 118            91           186           124     125.28814     16.279137
> > + 120            92           187           109     112.00833     13.458617
> > Difference at 95.0% confidence
> >       -13.2798 +/- 3.79219
> >       -10.5994% +/- 3.02677%
> >       (Student's t, pooled s = 14.9237)
> > 
> > However the mean latency is adversely affected:
> > 
> > Mean latency in microseconds of a third party RT thread
> > (gem_syslatency -t 120 -f 1)
> >    x Always using tasklets
> >    + Only using tasklets from CS irq, direct submission of requests
> > +------------------------------------------------------------------------+
> > |           xxxxxx                                        +   ++         |
> > |           xxxxxx                                        +   ++         |
> > |           xxxxxx                                      + +++ ++         |
> > |           xxxxxxx                                     +++++ ++         |
> > |           xxxxxxx                                     +++++ ++         |
> > |           xxxxxxx                                     +++++ +++        |
> > |           xxxxxxx                                   + ++++++++++       |
> > |           xxxxxxxx                                 ++ ++++++++++       |
> > |           xxxxxxxx                                 ++ ++++++++++       |
> > |          xxxxxxxxxx                                +++++++++++++++     |
> > |         xxxxxxxxxxx    x                           +++++++++++++++     |
> > |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
> > |           |__A__|                                                      |
> > |                                                      |____A___|        |
> > +------------------------------------------------------------------------+
> >      N           Min           Max        Median           Avg        Stddev
> > x 120         3.506         3.727         3.631     3.6321417    0.02773109
> > + 120         3.834         4.149         4.039     4.0375167   0.041221676
> > Difference at 95.0% confidence
> >       0.405375 +/- 0.00888913
> >       11.1608% +/- 0.244735%
> >       (Student's t, pooled s = 0.03513)
> > 
> > However, since the mean latency corresponds to the amount of irqsoff
> > processing we have to do for a CS interrupt, we only need to speed that
> > up to benefit not just system latency but our own throughput.
> > 
> > v2: Remember to defer submissions when under reset.
> > v4: Only use direct submission for new requests
> > v5: Be aware that with mixing direct tasklet evaluation and deferred
> > tasklets, we may end up idling before running the deferred tasklet.
> > 
> > Testcase: igt/gem_exec_latency/*rthog*
> > References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
> > Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> >   drivers/gpu/drm/i915/i915_gem.h         |   5 +
> >   drivers/gpu/drm/i915/i915_irq.c         |  11 +-
> >   drivers/gpu/drm/i915/intel_engine_cs.c  |   8 +-
> >   drivers/gpu/drm/i915/intel_lrc.c        | 147 ++++++++++++++----------
> >   drivers/gpu/drm/i915/intel_ringbuffer.h |   1 -
> >   5 files changed, 98 insertions(+), 74 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
> > index 261da577829a..7892ac773916 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.h
> > +++ b/drivers/gpu/drm/i915/i915_gem.h
> > @@ -88,4 +88,9 @@ static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
> >               tasklet_kill(t);
> >   }
> >   
> > +static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
> > +{
> > +     return likely(!atomic_read(&t->count));
> > +}
> > +
> 
> For the unlikely-likely chain from 
> __submit_queue->reset_in_progress->__tasklet_is_enabled I think it would 
> be better to drop the likely/unlikely from low-level helpers and put the 
> one unlikely into the __submit_queue.

Tasklets are rarely disabled, I think that's quite important to stress.
Tasklets do not function very well (heavy spinning) while disabled.

> >   #endif /* __I915_GEM_H__ */
> > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > index 46aaef5c1851..316d0b08d40f 100644
> > --- a/drivers/gpu/drm/i915/i915_irq.c
> > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > @@ -1469,14 +1469,10 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
> >   static void
> >   gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
> >   {
> > -     struct intel_engine_execlists * const execlists = &engine->execlists;
> >       bool tasklet = false;
> >   
> > -     if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
> > -             if (READ_ONCE(engine->execlists.active))
> 
> What is the thinking behind this change? It used to be that we scheduled 
> the tasklet only when we knew we are expecting interrupts and now we 
> don't care any more for some reason?

The filtering is done inside process_csb(). We filtered on active
previously as some interrupts were seemingly going astray; now I am much
more confident that all are accounted for.
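
Roughly, the filtering in question is process_csb() comparing its cached head against the HW write pointer, so a spurious tasklet run with no new events simply falls through. A paraphrased sketch, not a verbatim excerpt of the patch:

static void process_csb(struct intel_engine_cs *engine)
{
	struct intel_engine_execlists * const execlists = &engine->execlists;
	u8 head, tail;

	/* Note that csb_write may live in either the HWSP or mmio. */
	head = execlists->csb_head;
	tail = READ_ONCE(*execlists->csb_write);

	/* If nothing has been written since last time, the loop does nothing. */
	while (head != tail) {
		head = (head + 1) % GEN8_CSB_ENTRIES;
		/* ... decode the context-switch event in csb_status[head] ... */
	}

	execlists->csb_head = head;
}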
 
> > -                     tasklet = !test_and_set_bit(ENGINE_IRQ_EXECLIST,
> > -                                                 &engine->irq_posted);
> 
> And this is gone as well. Can you put a paragraph in the commit message 
> explaining the change? It doesn't seem immediately connected with direct 
> submission.

Removing one heavyweight atomic operation in the latency sensitive
interrupt.

> > +     if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
> > +             tasklet = true;
> >   
> >       if (iir & GT_RENDER_USER_INTERRUPT) {
> >               notify_ring(engine);
> > @@ -1484,7 +1480,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
> >       }
> >   
> >       if (tasklet)
> > -             tasklet_hi_schedule(&execlists->tasklet);
> > +             tasklet_hi_schedule(&engine->execlists.tasklet);
> >   }
> >   
> >   static void gen8_gt_irq_ack(struct drm_i915_private *i915,
> > @@ -2216,7 +2212,6 @@ static irqreturn_t cherryview_irq_handler(int irq, void *arg)
> >   
> >               I915_WRITE(VLV_IER, ier);
> >               I915_WRITE(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
> > -             POSTING_READ(GEN8_MASTER_IRQ);
> 
> What is this?

Something that I haven't managed to kill yet.

> > diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> > index 7209c22798e6..ace93958689e 100644
> > --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> > +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> > @@ -1353,12 +1353,10 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
> >               ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine));
> >               read = GEN8_CSB_READ_PTR(ptr);
> >               write = GEN8_CSB_WRITE_PTR(ptr);
> > -             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], interrupt posted? %s, tasklet queued? %s (%s)\n",
> > +             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n",
> >                          read, execlists->csb_head,
> >                          write,
> >                          intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)),
> > -                        yesno(test_bit(ENGINE_IRQ_EXECLIST,
> > -                                       &engine->irq_posted)),
> >                          yesno(test_bit(TASKLET_STATE_SCHED,
> >                                         &engine->execlists.tasklet.state)),
> >                          enableddisabled(!atomic_read(&engine->execlists.tasklet.count)));
> > @@ -1570,11 +1568,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
> >       spin_unlock(&b->rb_lock);
> >       local_irq_restore(flags);
> >   
> > -     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
> > +     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s)\n",
> >                  engine->irq_posted,
> >                  yesno(test_bit(ENGINE_IRQ_BREADCRUMB,
> > -                               &engine->irq_posted)),
> > -                yesno(test_bit(ENGINE_IRQ_EXECLIST,
> >                                 &engine->irq_posted)));
> >   
> >       drm_printf(m, "HWSP:\n");
> > diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> > index 5a12b8fc9d8f..c82efa3ac105 100644
> > --- a/drivers/gpu/drm/i915/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > @@ -562,13 +562,15 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
> >   {
> >       GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
> >   
> > +     __unwind_incomplete_requests(container_of(execlists,
> > +                                               typeof(struct intel_engine_cs),
> > +                                               execlists));
> >       execlists_cancel_port_requests(execlists);
> > -     execlists_unwind_incomplete_requests(execlists);
> 
> Is the ordering change significant and why?

Mostly for consistency and reasoning about request reference lifetimes.
(Unwind => we retain the request reference, as it is moved back to the
protected execution lists.)

> > +static void execlists_submission_tasklet(unsigned long data)
> > +{
> > +     struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
> > +     unsigned long flags;
> > +
> > +     GEM_TRACE("%s awake?=%d, active=%x\n",
> > +               engine->name,
> > +               engine->i915->gt.awake,
> > +               engine->execlists.active);
> > +
> > +     spin_lock_irqsave(&engine->timeline.lock, flags);
> > +
> > +     if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
> > +             __execlists_submission_tasklet(engine);
> 
> Sounds quite bad! this means we fail to process pending CSB. And going 
> idle syncs the tasklets so what am I missing?

That tasklets get kicked randomly was, I think, the culprit.

> > +static void __submit_queue(struct intel_engine_cs *engine)
> > +{
> > +     struct intel_engine_execlists * const execlists = &engine->execlists;
> > +
> > +     if (reset_in_progress(execlists))
> > +             return; /* defer until we restart the engine following reset */
> > +
> > +     if (execlists->tasklet.func == execlists_submission_tasklet)
> 
> What is this check determining?

That we can call __execlists_submission_tasklet, i.e. it is not guc,
veng or anything weirder.

> Are we always calling it directly even if the ports are busy? Wouldn't 
> it be better to schedule in that case?

No, we are only calling it if we have a more important request than
either port (see queue_priority).

> > +static void reset_csb_pointers(struct intel_engine_execlists *execlists)
> > +{
> > +     /*
> > +      * After a reset, the HW starts writing into CSB entry [0]. We
> > +      * therefore have to set our HEAD pointer back one entry so that
> > +      * the *first* entry we check is entry 0. To complicate this further,
> > +      * as we don't wait for the first interrupt after reset, we have to
> > +      * fake the HW write to point back to the last entry so that our
> > +      * inline comparison of our cached head position against the last HW
> > +      * write works even before the first interrupt.
> > +      */
> > +     execlists->csb_head = GEN8_CSB_ENTRIES - 1;
> > +     WRITE_ONCE(*execlists->csb_write, (GEN8_CSB_ENTRIES - 1) | 0xff << 16);
> > +}
> 
> Hmm this makes me think there should be another prep patch before direct 
> submission. Need to build a clearer picture before I can suggest how.

For what? This was already in the prep patches (handling CSB on reset vs
resume paths, and all the fake fallout). I don't actually need it in a
function; it was just handy to do so as IIRC I wanted to use it
elsewhere, but fortunately killed off that caller.

So the prep patch is just making it into a function.
-Chris
Tvrtko Ursulin June 27, 2018, 1:15 p.m. UTC | #3
On 27/06/2018 11:58, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-06-27 11:40:32)
>>
>> On 25/06/2018 10:48, Chris Wilson wrote:
>>> Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
>>> bottom half"), we came to the conclusion that running our CSB processing
>>> and ELSP submission from inside the irq handler was a bad idea. A really
>>> bad idea as we could impose nearly 1s latency on other users of the
>>> system, on average! Deferring our work to a tasklet allowed us to do the
>>> processing with irqs enabled, reducing the impact to an average of about
>>> 50us.
>>>
>>> We have since eradicated the use of forcewaked mmio from inside the CSB
>>> processing and ELSP submission, bringing the impact down to around 5us
>>> (on Kabylake); an order of magnitude better than our measurements 2
>>> years ago on Broadwell and only about 2x worse on average than the
>>> gem_syslatency on an unladen system.
>>>
>>> In this iteration of the tasklet-vs-direct submission debate, we seek a
>>> compromise whereby we submit new requests immediately to the HW but
>>> defer processing the CS interrupt onto a tasklet. We gain the advantage
>>> of low-latency and ksoftirqd avoidance when waking up the HW, while
>>> avoiding the system-wide starvation of our CS irq-storms.
>>>
>>> Comparing the impact on the maximum latency observed (that is the time
>>> stolen from an RT process) over a 120s interval, repeated several times
>>> (using gem_syslatency, similar to RT's cyclictest) while the system is
>>> fully laden with i915 nops, we see that direct submission can actually
>>> improve the worst case.
>>>
>>> Maximum latency in microseconds of a third party RT thread
>>> (gem_syslatency -t 120 -f 2)
>>>     x Always using tasklets (a couple of >1000us outliers removed)
>>>     + Only using tasklets from CS irq, direct submission of requests
>>> +------------------------------------------------------------------------+
>>> |          +                                                             |
>>> |          +                                                             |
>>> |          +                                                             |
>>> |          +       +                                                     |
>>> |          + +     +                                                     |
>>> |       +  + +     +  x     x     x                                      |
>>> |      +++ + +     +  x  x  x  x  x  x                                   |
>>> |      +++ + ++  + +  *x x  x  x  x  x                                   |
>>> |      +++ + ++  + *  *x x  *  x  x  x                                   |
>>> |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
>>> |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
>>> |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
>>> |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
>>> |             |__________MA___________|                                  |
>>> |      |______M__A________|                                              |
>>> +------------------------------------------------------------------------+
>>>       N           Min           Max        Median           Avg        Stddev
>>> x 118            91           186           124     125.28814     16.279137
>>> + 120            92           187           109     112.00833     13.458617
>>> Difference at 95.0% confidence
>>>        -13.2798 +/- 3.79219
>>>        -10.5994% +/- 3.02677%
>>>        (Student's t, pooled s = 14.9237)
>>>
>>> However the mean latency is adversely affected:
>>>
>>> Mean latency in microseconds of a third party RT thread
>>> (gem_syslatency -t 120 -f 1)
>>>     x Always using tasklets
>>>     + Only using tasklets from CS irq, direct submission of requests
>>> +------------------------------------------------------------------------+
>>> |           xxxxxx                                        +   ++         |
>>> |           xxxxxx                                        +   ++         |
>>> |           xxxxxx                                      + +++ ++         |
>>> |           xxxxxxx                                     +++++ ++         |
>>> |           xxxxxxx                                     +++++ ++         |
>>> |           xxxxxxx                                     +++++ +++        |
>>> |           xxxxxxx                                   + ++++++++++       |
>>> |           xxxxxxxx                                 ++ ++++++++++       |
>>> |           xxxxxxxx                                 ++ ++++++++++       |
>>> |          xxxxxxxxxx                                +++++++++++++++     |
>>> |         xxxxxxxxxxx    x                           +++++++++++++++     |
>>> |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
>>> |           |__A__|                                                      |
>>> |                                                      |____A___|        |
>>> +------------------------------------------------------------------------+
>>>       N           Min           Max        Median           Avg        Stddev
>>> x 120         3.506         3.727         3.631     3.6321417    0.02773109
>>> + 120         3.834         4.149         4.039     4.0375167   0.041221676
>>> Difference at 95.0% confidence
>>>        0.405375 +/- 0.00888913
>>>        11.1608% +/- 0.244735%
>>>        (Student's t, pooled s = 0.03513)
>>>
>>> However, since the mean latency corresponds to the amount of irqsoff
>>> processing we have to do for a CS interrupt, we only need to speed that
>>> up to benefit not just system latency but our own throughput.
>>>
>>> v2: Remember to defer submissions when under reset.
>>> v4: Only use direct submission for new requests
>>> v5: Be aware that with mixing direct tasklet evaluation and deferred
>>> tasklets, we may end up idling before running the deferred tasklet.
>>>
>>> Testcase: igt/gem_exec_latency/*rthog*
>>> References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
>>> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> ---
>>>    drivers/gpu/drm/i915/i915_gem.h         |   5 +
>>>    drivers/gpu/drm/i915/i915_irq.c         |  11 +-
>>>    drivers/gpu/drm/i915/intel_engine_cs.c  |   8 +-
>>>    drivers/gpu/drm/i915/intel_lrc.c        | 147 ++++++++++++++----------
>>>    drivers/gpu/drm/i915/intel_ringbuffer.h |   1 -
>>>    5 files changed, 98 insertions(+), 74 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
>>> index 261da577829a..7892ac773916 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem.h
>>> +++ b/drivers/gpu/drm/i915/i915_gem.h
>>> @@ -88,4 +88,9 @@ static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
>>>                tasklet_kill(t);
>>>    }
>>>    
>>> +static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
>>> +{
>>> +     return likely(!atomic_read(&t->count));
>>> +}
>>> +
>>
>> For the unlikely-likely chain from
>> __submit_queue->reset_in_progress->__tasklet_is_enabled I think it would
>> be better to drop the likely/unlikely from low-level helpers and put the
>> one unlikely into the __submit_queue.
> 
> Tasklets are rarely disabled, I think that's quite important to stress.
> Tasklets do not function very well (heavy spinning) while disabled.

I think we shouldn't be concerned by that. The purpose of this is to wrap 
internal implementation details we shouldn't even be touching if we can 
help it, and I feel the correct thing is to express the branching hint 
higher up the stack. The caller wants to optimize certain scenarios, while 
the helper doesn't know who is calling it and why. On top of that we have 
this likely-unlikely chain which I mentioned. Even just one unlikely in 
reset_in_progress would probably be enough for what you wanted to ensure.

>>>    #endif /* __I915_GEM_H__ */
>>> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
>>> index 46aaef5c1851..316d0b08d40f 100644
>>> --- a/drivers/gpu/drm/i915/i915_irq.c
>>> +++ b/drivers/gpu/drm/i915/i915_irq.c
>>> @@ -1469,14 +1469,10 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
>>>    static void
>>>    gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>>>    {
>>> -     struct intel_engine_execlists * const execlists = &engine->execlists;
>>>        bool tasklet = false;
>>>    
>>> -     if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
>>> -             if (READ_ONCE(engine->execlists.active))
>>
>> What is the thinking behind this change? It used to be that we scheduled
>> the tasklet only when we knew we are expecting interrupts and now we
>> don't care any more for some reason?
> 
> The filtering is done inside process_csb(). We filtered on active
> previously as some interrupts were seemingly going astray, now I am much
> more confident that all are accounted for.

Hm, how? We can filter out extra interrupts, but we can't filter to get what's missing?

>   
>>> -                     tasklet = !test_and_set_bit(ENGINE_IRQ_EXECLIST,
>>> -                                                 &engine->irq_posted);
>>
>> And this is gone as well. Can you put a paragraph in the commit message
>> explaining the change? It doesn't seem immediately connected with direct
>> submission.
> 
> Removing one heavyweight atomic operation in the latency sensitive
> interrupt.

But on a higher level - why don't we need this any more?

> 
>>> +     if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
>>> +             tasklet = true;
>>>    
>>>        if (iir & GT_RENDER_USER_INTERRUPT) {
>>>                notify_ring(engine);
>>> @@ -1484,7 +1480,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>>>        }
>>>    
>>>        if (tasklet)
>>> -             tasklet_hi_schedule(&execlists->tasklet);
>>> +             tasklet_hi_schedule(&engine->execlists.tasklet);
>>>    }
>>>    
>>>    static void gen8_gt_irq_ack(struct drm_i915_private *i915,
>>> @@ -2216,7 +2212,6 @@ static irqreturn_t cherryview_irq_handler(int irq, void *arg)
>>>    
>>>                I915_WRITE(VLV_IER, ier);
>>>                I915_WRITE(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
>>> -             POSTING_READ(GEN8_MASTER_IRQ);
>>
>> What is this?
> 
> Something that I haven't managed to kill yet.

No sneaking it into this patch then either! :D

> 
>>> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
>>> index 7209c22798e6..ace93958689e 100644
>>> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
>>> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
>>> @@ -1353,12 +1353,10 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
>>>                ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine));
>>>                read = GEN8_CSB_READ_PTR(ptr);
>>>                write = GEN8_CSB_WRITE_PTR(ptr);
>>> -             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], interrupt posted? %s, tasklet queued? %s (%s)\n",
>>> +             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n",
>>>                           read, execlists->csb_head,
>>>                           write,
>>>                           intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)),
>>> -                        yesno(test_bit(ENGINE_IRQ_EXECLIST,
>>> -                                       &engine->irq_posted)),
>>>                           yesno(test_bit(TASKLET_STATE_SCHED,
>>>                                          &engine->execlists.tasklet.state)),
>>>                           enableddisabled(!atomic_read(&engine->execlists.tasklet.count)));
>>> @@ -1570,11 +1568,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
>>>        spin_unlock(&b->rb_lock);
>>>        local_irq_restore(flags);
>>>    
>>> -     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
>>> +     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s)\n",
>>>                   engine->irq_posted,
>>>                   yesno(test_bit(ENGINE_IRQ_BREADCRUMB,
>>> -                               &engine->irq_posted)),
>>> -                yesno(test_bit(ENGINE_IRQ_EXECLIST,
>>>                                  &engine->irq_posted)));
>>>    
>>>        drm_printf(m, "HWSP:\n");
>>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>>> index 5a12b8fc9d8f..c82efa3ac105 100644
>>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>>> @@ -562,13 +562,15 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
>>>    {
>>>        GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
>>>    
>>> +     __unwind_incomplete_requests(container_of(execlists,
>>> +                                               typeof(struct intel_engine_cs),
>>> +                                               execlists));
>>>        execlists_cancel_port_requests(execlists);
>>> -     execlists_unwind_incomplete_requests(execlists);
>>
>> Is the ordering change significant and why?
> 
> Mostly for consistency and reasoning about request reference lifetimes.
> (Unwind => we retain the request reference, as it is moved back to the
> protected execution lists.)

Remove from this patch then?
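
(To keep the reference-lifetime reasoning above easy to follow, this is the
ordering as it ends up in complete_preempt_context() in the patch at the
bottom of this page, with the rationale spelled out as comments; "engine"
stands in for the container_of() expression used in the real hunk:)

	__unwind_incomplete_requests(engine);	   /* requests move back onto the
						    * engine timeline first, so a
						    * reference is always retained */
	execlists_cancel_port_requests(execlists); /* only then do the ports drop
						    * their references */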

>>> +static void execlists_submission_tasklet(unsigned long data)
>>> +{
>>> +     struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
>>> +     unsigned long flags;
>>> +
>>> +     GEM_TRACE("%s awake?=%d, active=%x\n",
>>> +               engine->name,
>>> +               engine->i915->gt.awake,
>>> +               engine->execlists.active);
>>> +
>>> +     spin_lock_irqsave(&engine->timeline.lock, flags);
>>> +
>>> +     if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
>>> +             __execlists_submission_tasklet(engine);
>>
>> Sounds quite bad! this means we fail to process pending CSB. And going
>> idle syncs the tasklets so what am I missing?
> 
> That tasklets get kicked randomly, I think was the culprit.

What do you mean? I hope we have the busy-idle transitions quite well controlled and we know 
when we should and shouldn't expect a tasklet. If we synced them when 
transitioning to idle they cannot happen. Otherwise we had better be active! 
GEM_BUG_ON(!engine->i915->gt.awake) instead? Does that trigger?!

> 
>>> +static void __submit_queue(struct intel_engine_cs *engine)
>>> +{
>>> +     struct intel_engine_execlists * const execlists = &engine->execlists;
>>> +
>>> +     if (reset_in_progress(execlists))
>>> +             return; /* defer until we restart the engine following reset */
>>> +
>>> +     if (execlists->tasklet.func == execlists_submission_tasklet)
>>
>> What is this check determining?
> 
> That we can call __execlists_submission_tasklet, i.e. it is not guc,
> veng or anything weirder.

Ok.

>> Are we always calling it directly even if the ports are busy? Wouldn't
>> it be better to schedule in that that case?
> 
> No, we are only calling it if we have a more important request than
> either port (see queue_priority).

Ah yes, forgot about that.
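
(For reference, the gate Chris points at, as it appears in the patch at the
bottom of this page; a direct submission is only attempted when the new
request outranks what is already queued to the ports:)

	static void submit_queue(struct intel_engine_cs *engine, int prio)
	{
		if (prio > engine->execlists.queue_priority) {
			__update_queue(engine, prio);
			__submit_queue(engine);
		}
	}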

> 
>>> +static void reset_csb_pointers(struct intel_engine_execlists *execlists)
>>> +{
>>> +     /*
>>> +      * After a reset, the HW starts writing into CSB entry [0]. We
>>> +      * therefore have to set our HEAD pointer back one entry so that
>>> +      * the *first* entry we check is entry 0. To complicate this further,
>>> +      * as we don't wait for the first interrupt after reset, we have to
>>> +      * fake the HW write to point back to the last entry so that our
>>> +      * inline comparison of our cached head position against the last HW
>>> +      * write works even before the first interrupt.
>>> +      */
>>> +     execlists->csb_head = GEN8_CSB_ENTRIES - 1;
>>> +     WRITE_ONCE(*execlists->csb_write, (GEN8_CSB_ENTRIES - 1) | 0xff << 16);
>>> +}
>>
>> Hmm this makes me think there should be another prep patch before direct
>> submission. Need to build a clearer picture before I can suggest how.
> 
> For what? This was already in the prep patches (handing CSB on reset vs
> resume paths, and all the fake fallout), I don't actually need it in a
> function, it was just handy to do so as iirc I wanted to use it
> elsewhere, but fortunately killed off that caller.
> 
> So the prep patch is just making it into a function.

It is adding write of csb_write so I think that was a good extraction.
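
(To make the pointer arithmetic concrete, a simplified consumer-side sketch;
this is not the verbatim process_csb(), the real loop shape and error handling
are elided:)

	u8 head, tail;

	head = execlists->csb_head;		  /* last entry we consumed */
	tail = READ_ONCE(*execlists->csb_write); /* u8 assignment keeps only the
						   * low byte, the write pointer */
	while (head != tail) {
		if (++head == GEN8_CSB_ENTRIES)
			head = 0;	/* head primed to ENTRIES - 1 wraps here,
					 * so entry 0 is the first slot examined
					 * after a reset */
		/* ... handle buf[head] ... */
	}
	execlists->csb_head = head;

	/* With the faked write of GEN8_CSB_ENTRIES - 1, head == tail immediately,
	 * so nothing is consumed until the HW posts its first real event. */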

Regards,

Tvrtko
Chris Wilson June 27, 2018, 1:29 p.m. UTC | #4
Quoting Tvrtko Ursulin (2018-06-27 14:15:22)
> 
> On 27/06/2018 11:58, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-06-27 11:40:32)
> >>
> >> On 25/06/2018 10:48, Chris Wilson wrote:
> >>> Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> >>> bottom half"), we came to the conclusion that running our CSB processing
> >>> and ELSP submission from inside the irq handler was a bad idea. A really
> >>> bad idea as we could impose nearly 1s latency on other users of the
> >>> system, on average! Deferring our work to a tasklet allowed us to do the
> >>> processing with irqs enabled, reducing the impact to an average of about
> >>> 50us.
> >>>
> >>> We have since eradicated the use of forcewaked mmio from inside the CSB
> >>> processing and ELSP submission, bringing the impact down to around 5us
> >>> (on Kabylake); an order of magnitude better than our measurements 2
> >>> years ago on Broadwell and only about 2x worse on average than the
> >>> gem_syslatency on an unladen system.
> >>>
> >>> In this iteration of the tasklet-vs-direct submission debate, we seek a
> >>> compromise where by we submit new requests immediately to the HW but
> >>> defer processing the CS interrupt onto a tasklet. We gain the advantage
> >>> of low-latency and ksoftirqd avoidance when waking up the HW, while
> >>> avoiding the system-wide starvation of our CS irq-storms.
> >>>
> >>> Comparing the impact on the maximum latency observed (that is the time
> >>> stolen from an RT process) over a 120s interval, repeated several times
> >>> (using gem_syslatency, similar to RT's cyclictest) while the system is
> >>> fully laden with i915 nops, we see that direct submission an actually
> >>> improve the worse case.
> >>>
> >>> Maximum latency in microseconds of a third party RT thread
> >>> (gem_syslatency -t 120 -f 2)
> >>>     x Always using tasklets (a couple of >1000us outliers removed)
> >>>     + Only using tasklets from CS irq, direct submission of requests
> >>> +------------------------------------------------------------------------+
> >>> |          +                                                             |
> >>> |          +                                                             |
> >>> |          +                                                             |
> >>> |          +       +                                                     |
> >>> |          + +     +                                                     |
> >>> |       +  + +     +  x     x     x                                      |
> >>> |      +++ + +     +  x  x  x  x  x  x                                   |
> >>> |      +++ + ++  + +  *x x  x  x  x  x                                   |
> >>> |      +++ + ++  + *  *x x  *  x  x  x                                   |
> >>> |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
> >>> |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
> >>> |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
> >>> |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
> >>> |             |__________MA___________|                                  |
> >>> |      |______M__A________|                                              |
> >>> +------------------------------------------------------------------------+
> >>>       N           Min           Max        Median           Avg        Stddev
> >>> x 118            91           186           124     125.28814     16.279137
> >>> + 120            92           187           109     112.00833     13.458617
> >>> Difference at 95.0% confidence
> >>>        -13.2798 +/- 3.79219
> >>>        -10.5994% +/- 3.02677%
> >>>        (Student's t, pooled s = 14.9237)
> >>>
> >>> However the mean latency is adversely affected:
> >>>
> >>> Mean latency in microseconds of a third party RT thread
> >>> (gem_syslatency -t 120 -f 1)
> >>>     x Always using tasklets
> >>>     + Only using tasklets from CS irq, direct submission of requests
> >>> +------------------------------------------------------------------------+
> >>> |           xxxxxx                                        +   ++         |
> >>> |           xxxxxx                                        +   ++         |
> >>> |           xxxxxx                                      + +++ ++         |
> >>> |           xxxxxxx                                     +++++ ++         |
> >>> |           xxxxxxx                                     +++++ ++         |
> >>> |           xxxxxxx                                     +++++ +++        |
> >>> |           xxxxxxx                                   + ++++++++++       |
> >>> |           xxxxxxxx                                 ++ ++++++++++       |
> >>> |           xxxxxxxx                                 ++ ++++++++++       |
> >>> |          xxxxxxxxxx                                +++++++++++++++     |
> >>> |         xxxxxxxxxxx    x                           +++++++++++++++     |
> >>> |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
> >>> |           |__A__|                                                      |
> >>> |                                                      |____A___|        |
> >>> +------------------------------------------------------------------------+
> >>>       N           Min           Max        Median           Avg        Stddev
> >>> x 120         3.506         3.727         3.631     3.6321417    0.02773109
> >>> + 120         3.834         4.149         4.039     4.0375167   0.041221676
> >>> Difference at 95.0% confidence
> >>>        0.405375 +/- 0.00888913
> >>>        11.1608% +/- 0.244735%
> >>>        (Student's t, pooled s = 0.03513)
> >>>
> >>> However, since the mean latency corresponds to the amount of irqsoff
> >>> processing we have to do for a CS interrupt, we only need to speed that
> >>> up to benefit not just system latency but our own throughput.
> >>>
> >>> v2: Remember to defer submissions when under reset.
> >>> v4: Only use direct submission for new requests
> >>> v5: Be aware that with mixing direct tasklet evaluation and deferred
> >>> tasklets, we may end up idling before running the deferred tasklet.
> >>>
> >>> Testcase: igt/gem_exec_latency/*rthog*
> >>> References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
> >>> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >>> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>> ---
> >>>    drivers/gpu/drm/i915/i915_gem.h         |   5 +
> >>>    drivers/gpu/drm/i915/i915_irq.c         |  11 +-
> >>>    drivers/gpu/drm/i915/intel_engine_cs.c  |   8 +-
> >>>    drivers/gpu/drm/i915/intel_lrc.c        | 147 ++++++++++++++----------
> >>>    drivers/gpu/drm/i915/intel_ringbuffer.h |   1 -
> >>>    5 files changed, 98 insertions(+), 74 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
> >>> index 261da577829a..7892ac773916 100644
> >>> --- a/drivers/gpu/drm/i915/i915_gem.h
> >>> +++ b/drivers/gpu/drm/i915/i915_gem.h
> >>> @@ -88,4 +88,9 @@ static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
> >>>                tasklet_kill(t);
> >>>    }
> >>>    
> >>> +static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
> >>> +{
> >>> +     return likely(!atomic_read(&t->count));
> >>> +}
> >>> +
> >>
> >> For the unlikely-likely chain from
> >> __submit_queue->reset_in_progress->__tasklet_is_enabled I think it would
> >> be better to drop the likely/unlikely from low-level helpers and put the
> >> one unlikely into the __submit_queue.
> > 
> > Tasklets are rarely disabled, I think that's quite important to stress.
> > Tasklets do not function very well (heavy spinning) while disabled.
> 
> I think we shouldn't be concerned by that. Purpose of this is to wrap 
> internal implementation we even shouldn't be touching if we could help 
> it, and I feel correct thing is to express the branching hint higher up 
> the stack. Caller wants to optimize certain scenarios, while the helper 
> doesn't know who is calling it and why. On top we have this 
> likely-unlikely chain which I mentioned. Even just one unlikely in 
> reset_in_progress would probably be enough for what you wanted to ensure.

I already acquiesced and did that extra.
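
(To make that concrete, a sketch of the arrangement being asked for, with the
branch hint expressed once in the reset-aware wrapper rather than in the
low-level helper; the patch at the bottom of this page still keeps the
likely() inside __tasklet_is_enabled():)

	static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
	{
		return !atomic_read(&t->count);	/* no hint: the helper cannot know
						 * who is calling it or why */
	}

	static inline bool
	reset_in_progress(const struct intel_engine_execlists *execlists)
	{
		/* resets are rare, so carry the hint here (or in __submit_queue()) */
		return unlikely(!__tasklet_is_enabled(&execlists->tasklet));
	}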

> >>>    #endif /* __I915_GEM_H__ */
> >>> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> >>> index 46aaef5c1851..316d0b08d40f 100644
> >>> --- a/drivers/gpu/drm/i915/i915_irq.c
> >>> +++ b/drivers/gpu/drm/i915/i915_irq.c
> >>> @@ -1469,14 +1469,10 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
> >>>    static void
> >>>    gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
> >>>    {
> >>> -     struct intel_engine_execlists * const execlists = &engine->execlists;
> >>>        bool tasklet = false;
> >>>    
> >>> -     if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
> >>> -             if (READ_ONCE(engine->execlists.active))
> >>
> >> What is the thinking behind this change? It used to be that we scheduled
> >> the tasklet only when we knew we are expecting interrupts and now we
> >> don't care any more for some reason?
> > 
> > The filtering is done inside process_csb(). We filtered on active
> > previously as some interrupts were seemingly going astray, now I am much
> > more confident that all are accounted for.
> 
> Hm how? We filter extra interrupts, we can't filter to get what's missing?

Not quite, since we process the CSB more frequently than interrupts, we
may also get an interrupt after having already processed the CSB.

> >>> -                     tasklet = !test_and_set_bit(ENGINE_IRQ_EXECLIST,
> >>> -                                                 &engine->irq_posted);
> >>
> >> And this is gone as well. Can you put a paragraph in the commit message
> >> explaining the change? It doesn't seem immediately connected with direct
> >> submission.
> > 
> > Removing one heavyweight atomic operation in the latency sensitive
> > interrupt.
> 
> But on the higher level - why we don't need this any more.

Because we are using the CSB, ok I think that can be a separate step.
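
(Illustration of why the coarser interrupt handling is safe: a stray or late
CS interrupt only schedules a tasklet whose process_csb() immediately finds
nothing to do. Simplified sketch, not the verbatim code:)

	head = execlists->csb_head;		  /* last entry we consumed */
	tail = READ_ONCE(*execlists->csb_write); /* last entry the HW wrote */
	if (head == tail)
		return;	/* already drained, e.g. by a direct submission under the
			 * timeline lock, so no ENGINE_IRQ_EXECLIST bookkeeping is
			 * needed to filter the interrupt */

	/* ... otherwise consume the new entries as usual ... */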

> >>> +     if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
> >>> +             tasklet = true;
> >>>    
> >>>        if (iir & GT_RENDER_USER_INTERRUPT) {
> >>>                notify_ring(engine);
> >>> @@ -1484,7 +1480,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
> >>>        }
> >>>    
> >>>        if (tasklet)
> >>> -             tasklet_hi_schedule(&execlists->tasklet);
> >>> +             tasklet_hi_schedule(&engine->execlists.tasklet);
> >>>    }
> >>>    
> >>>    static void gen8_gt_irq_ack(struct drm_i915_private *i915,
> >>> @@ -2216,7 +2212,6 @@ static irqreturn_t cherryview_irq_handler(int irq, void *arg)
> >>>    
> >>>                I915_WRITE(VLV_IER, ier);
> >>>                I915_WRITE(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
> >>> -             POSTING_READ(GEN8_MASTER_IRQ);
> >>
> >> What is this?
> > 
> > Something that I haven't managed to kill yet.
> 
> No sneaking in this patch then either! :D

Just close your eyes. Nothing to see here.

> >>> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> >>> index 7209c22798e6..ace93958689e 100644
> >>> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> >>> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> >>> @@ -1353,12 +1353,10 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
> >>>                ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine));
> >>>                read = GEN8_CSB_READ_PTR(ptr);
> >>>                write = GEN8_CSB_WRITE_PTR(ptr);
> >>> -             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], interrupt posted? %s, tasklet queued? %s (%s)\n",
> >>> +             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n",
> >>>                           read, execlists->csb_head,
> >>>                           write,
> >>>                           intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)),
> >>> -                        yesno(test_bit(ENGINE_IRQ_EXECLIST,
> >>> -                                       &engine->irq_posted)),
> >>>                           yesno(test_bit(TASKLET_STATE_SCHED,
> >>>                                          &engine->execlists.tasklet.state)),
> >>>                           enableddisabled(!atomic_read(&engine->execlists.tasklet.count)));
> >>> @@ -1570,11 +1568,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
> >>>        spin_unlock(&b->rb_lock);
> >>>        local_irq_restore(flags);
> >>>    
> >>> -     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
> >>> +     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s)\n",
> >>>                   engine->irq_posted,
> >>>                   yesno(test_bit(ENGINE_IRQ_BREADCRUMB,
> >>> -                               &engine->irq_posted)),
> >>> -                yesno(test_bit(ENGINE_IRQ_EXECLIST,
> >>>                                  &engine->irq_posted)));
> >>>    
> >>>        drm_printf(m, "HWSP:\n");
> >>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> >>> index 5a12b8fc9d8f..c82efa3ac105 100644
> >>> --- a/drivers/gpu/drm/i915/intel_lrc.c
> >>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> >>> @@ -562,13 +562,15 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
> >>>    {
> >>>        GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
> >>>    
> >>> +     __unwind_incomplete_requests(container_of(execlists,
> >>> +                                               typeof(struct intel_engine_cs),
> >>> +                                               execlists));
> >>>        execlists_cancel_port_requests(execlists);
> >>> -     execlists_unwind_incomplete_requests(execlists);
> >>
> >> Is the ordering change significant and why?
> > 
> > Mostly for consistency and reasoning about request reference lifetimes.
> > (Unwind => we retain the request reference, as it is moved back to the
> > protected execution lists.)
> 
> Remove from this patch then?

Move to an earlier patch then.

> >>> +static void execlists_submission_tasklet(unsigned long data)
> >>> +{
> >>> +     struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
> >>> +     unsigned long flags;
> >>> +
> >>> +     GEM_TRACE("%s awake?=%d, active=%x\n",
> >>> +               engine->name,
> >>> +               engine->i915->gt.awake,
> >>> +               engine->execlists.active);
> >>> +
> >>> +     spin_lock_irqsave(&engine->timeline.lock, flags);
> >>> +
> >>> +     if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
> >>> +             __execlists_submission_tasklet(engine);
> >>
> >> Sounds quite bad! this means we fail to process pending CSB. And going
> >> idle syncs the tasklets so what am I missing?
> > 
> > That tasklets get kicked randomly, I think was the culprit.
> 
> What do you mean? I hope we have busy-idle quite controlled and we know 
> when we should and should expect a tasklet. If we synced them when 
> transitioning to idle they cannot happen. Otherwise we better be active! 
> GEM_BUG_ON(!engine->i915->gt.awake) instead? Does that trigger?!

tasklet_schedule() is called off the main path, without locking, so
unsynchronized to parking. Just because.
-Chris
Tvrtko Ursulin June 27, 2018, 3:21 p.m. UTC | #5
On 27/06/2018 14:29, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-06-27 14:15:22)
>>
>> On 27/06/2018 11:58, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2018-06-27 11:40:32)
>>>>
>>>> On 25/06/2018 10:48, Chris Wilson wrote:
> >>>>> [...]
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
>>>>> index 261da577829a..7892ac773916 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_gem.h
>>>>> +++ b/drivers/gpu/drm/i915/i915_gem.h
>>>>> @@ -88,4 +88,9 @@ static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
>>>>>                 tasklet_kill(t);
>>>>>     }
>>>>>     
>>>>> +static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
>>>>> +{
>>>>> +     return likely(!atomic_read(&t->count));
>>>>> +}
>>>>> +
>>>>
>>>> For the unlikely-likely chain from
>>>> __submit_queue->reset_in_progress->__tasklet_is_enabled I think it would
>>>> be better to drop the likely/unlikely from low-level helpers and put the
>>>> one unlikely into the __submit_queue.
>>>
>>> Tasklets are rarely disabled, I think that's quite important to stress.
>>> Tasklets do not function very well (heavy spinning) while disabled.
>>
>> I think we shouldn't be concerned by that. Purpose of this is to wrap
>> internal implementation we even shouldn't be touching if we could help
>> it, and I feel correct thing is to express the branching hint higher up
>> the stack. Caller wants to optimize certain scenarios, while the helper
>> doesn't know who is calling it and why. On top we have this
>> likely-unlikely chain which I mentioned. Even just one unlikely in 
>> reset_in_progress would probably be enough for what you wanted to ensure.
> 
> I already acquiesced and did extra that.

Okay.

> 
>>>>>     #endif /* __I915_GEM_H__ */
>>>>> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
>>>>> index 46aaef5c1851..316d0b08d40f 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_irq.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_irq.c
>>>>> @@ -1469,14 +1469,10 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
>>>>>     static void
>>>>>     gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>>>>>     {
>>>>> -     struct intel_engine_execlists * const execlists = &engine->execlists;
>>>>>         bool tasklet = false;
>>>>>     
>>>>> -     if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
>>>>> -             if (READ_ONCE(engine->execlists.active))
>>>>
>>>> What is the thinking behind this change? It used to be that we scheduled
>>>> the tasklet only when we knew we are expecting interrupts and now we
>>>> don't care any more for some reason?
>>>
>>> The filtering is done inside process_csb(). We filtered on active
>>> previously as some interrupts were seemingly going astray, now I am much
>>> more confident that all are accounted for.
>>
>> Hm how? We filter extra interrupts, we can't filter to get what's missing?
> 
> Not quite, since we process the CSB more frequently than interrupts, we
> may also get an interrupt after having already processed the CSB.

If it is all processed already then why schedule the tasklet?

Or from a different old angle - does this belong in this patch?

> 
>>>>> -                     tasklet = !test_and_set_bit(ENGINE_IRQ_EXECLIST,
>>>>> -                                                 &engine->irq_posted);
>>>>
>>>> And this is gone as well. Can you put a paragraph in the commit message
>>>> explaining the change? It doesn't seem immediately connected with direct
>>>> submission.
>>>
>>> Removing one heavyweight atomic operation in the latency sensitive
>>> interrupt.
>>
>> But on the higher level - why we don't need this any more.
> 
> Because we are using the CSB, ok I think that can be a separate step.

Cool.

> 
>>>>> +     if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
>>>>> +             tasklet = true;
>>>>>     
>>>>>         if (iir & GT_RENDER_USER_INTERRUPT) {
>>>>>                 notify_ring(engine);
>>>>> @@ -1484,7 +1480,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>>>>>         }
>>>>>     
>>>>>         if (tasklet)
>>>>> -             tasklet_hi_schedule(&execlists->tasklet);
>>>>> +             tasklet_hi_schedule(&engine->execlists.tasklet);
>>>>>     }
>>>>>     
>>>>>     static void gen8_gt_irq_ack(struct drm_i915_private *i915,
>>>>> @@ -2216,7 +2212,6 @@ static irqreturn_t cherryview_irq_handler(int irq, void *arg)
>>>>>     
>>>>>                 I915_WRITE(VLV_IER, ier);
>>>>>                 I915_WRITE(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
>>>>> -             POSTING_READ(GEN8_MASTER_IRQ);
>>>>
>>>> What is this?
>>>
>>> Something that I haven't managed to kill yet.
>>
>> No sneaking in this patch then either! :D
> 
> Just close your eyes. Nothing to see here.

You shall not pass! :D

> 
>>>>> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
>>>>> index 7209c22798e6..ace93958689e 100644
>>>>> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
>>>>> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
>>>>> @@ -1353,12 +1353,10 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
>>>>>                 ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine));
>>>>>                 read = GEN8_CSB_READ_PTR(ptr);
>>>>>                 write = GEN8_CSB_WRITE_PTR(ptr);
>>>>> -             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], interrupt posted? %s, tasklet queued? %s (%s)\n",
>>>>> +             drm_printf(m, "    Execlist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n",
>>>>>                            read, execlists->csb_head,
>>>>>                            write,
>>>>>                            intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)),
>>>>> -                        yesno(test_bit(ENGINE_IRQ_EXECLIST,
>>>>> -                                       &engine->irq_posted)),
>>>>>                            yesno(test_bit(TASKLET_STATE_SCHED,
>>>>>                                           &engine->execlists.tasklet.state)),
>>>>>                            enableddisabled(!atomic_read(&engine->execlists.tasklet.count)));
>>>>> @@ -1570,11 +1568,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
>>>>>         spin_unlock(&b->rb_lock);
>>>>>         local_irq_restore(flags);
>>>>>     
>>>>> -     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
>>>>> +     drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s)\n",
>>>>>                    engine->irq_posted,
>>>>>                    yesno(test_bit(ENGINE_IRQ_BREADCRUMB,
>>>>> -                               &engine->irq_posted)),
>>>>> -                yesno(test_bit(ENGINE_IRQ_EXECLIST,
>>>>>                                   &engine->irq_posted)));
>>>>>     
>>>>>         drm_printf(m, "HWSP:\n");
>>>>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>>>>> index 5a12b8fc9d8f..c82efa3ac105 100644
>>>>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>>>>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>>>>> @@ -562,13 +562,15 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
>>>>>     {
>>>>>         GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
>>>>>     
>>>>> +     __unwind_incomplete_requests(container_of(execlists,
>>>>> +                                               typeof(struct intel_engine_cs),
>>>>> +                                               execlists));
>>>>>         execlists_cancel_port_requests(execlists);
>>>>> -     execlists_unwind_incomplete_requests(execlists);
>>>>
>>>> Is the ordering change significant and why?
>>>
>>> Mostly for consistency and reasoning about request reference lifetimes.
>>> (Unwind => we retain the request reference, as it is moved back to the
>>> protected execution lists.)
>>
>> Remove from this patch then?
> 
> Move to an earlier patch then.

Cool.

> 
>>>>> +static void execlists_submission_tasklet(unsigned long data)
>>>>> +{
>>>>> +     struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
>>>>> +     unsigned long flags;
>>>>> +
>>>>> +     GEM_TRACE("%s awake?=%d, active=%x\n",
>>>>> +               engine->name,
>>>>> +               engine->i915->gt.awake,
>>>>> +               engine->execlists.active);
>>>>> +
>>>>> +     spin_lock_irqsave(&engine->timeline.lock, flags);
>>>>> +
>>>>> +     if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
>>>>> +             __execlists_submission_tasklet(engine);
>>>>
>>>> Sounds quite bad! this means we fail to process pending CSB. And going
>>>> idle syncs the tasklets so what am I missing?
>>>
>>> That tasklets get kicked randomly, I think was the culprit.
>>
>> What do you mean? I hope we have busy-idle quite controlled and we know
>> when we should and should expect a tasklet. If we synced them when
>> transitioning to idle they cannot happen. Otherwise we better be active!
>> GEM_BUG_ON(!engine->i915->gt.awake) instead? Does that trigger?!
> 
> tasklet_schedule() is called off the main path, without locking, so
> unsynchronized to parking. Just because.

I need to understand this - which main path? Submission - we will be 
mark_busy. After last request - we will idle the engines and sync the 
tasklet.

There is even GEM_BUG_ON(!engine->i915->gt.awake); in 
__execlists_submission_tasklet.

Regards,

Tvrtko
Chris Wilson June 27, 2018, 3:28 p.m. UTC | #6
Quoting Tvrtko Ursulin (2018-06-27 16:21:24)
> 
> On 27/06/2018 14:29, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-06-27 14:15:22)
> >>
> >> On 27/06/2018 11:58, Chris Wilson wrote:
> >>> That tasklets get kicked randomly, I think was the culprit.
> >>
> >> What do you mean? I hope we have busy-idle quite controlled and we know
> >> when we should and should expect a tasklet. If we synced them when
> >> transitioning to idle they cannot happen. Otherwise we better be active!
> >> GEM_BUG_ON(!engine->i915->gt.awake) instead? Does that trigger?!
> > 
> > tasklet_schedule() is called off the main path, without locking, so
> > unsynchronized to parking. Just because.
> 
> I need to understand this - which main path? Submission - we will be 
> mark_busy. After last request - we will idle the engines and sync the 
> tasklet.

There's a bonus kick in intel_engine_is_idle() (behind an unprotected
read of active, so still possible to race), and I've added an
unconditional kick to pmu_enable because we play games with
tasklet_disable there that may cause us to miss a direct submission.
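
Roughly the interleaving in question, illustrative only; the guard itself is
in the patch:

	/*
	 * CPU0 (parking)                     CPU1 (is_idle check / pmu enable)
	 *   last request retires               reads execlists state unlocked
	 *   gt.awake cleared, engine parked    tasklet_hi_schedule(&execlists->tasklet)
	 *                                      ... tasklet runs after we have parked
	 *
	 * Hence the tasklet body tolerates a stray kick:
	 */
	spin_lock_irqsave(&engine->timeline.lock, flags);
	if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
		__execlists_submission_tasklet(engine);
	spin_unlock_irqrestore(&engine->timeline.lock, flags);
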
-Chris
Tvrtko Ursulin June 28, 2018, 11:56 a.m. UTC | #7
On 27/06/2018 16:28, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-06-27 16:21:24)
>>
>> On 27/06/2018 14:29, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2018-06-27 14:15:22)
>>>>
>>>> On 27/06/2018 11:58, Chris Wilson wrote:
>>>>> That tasklets get kicked randomly, I think was the culprit.
>>>>
>>>> What do you mean? I hope we have busy-idle quite controlled and we know
>>>> when we should and should expect a tasklet. If we synced them when
>>>> transitioning to idle they cannot happen. Otherwise we better be active!
>>>> GEM_BUG_ON(!engine->i915->gt.awake) instead? Does that trigger?!
>>>
>>> tasklet_schedule() is called off the main path, without locking, so
>>> unsynchronized to parking. Just because.
>>
>> I need to understand this - which main path? Submission - we will be
>> mark_busy. After last request - we will idle the engines and sync the
>> tasklet.
> 
> There's a bonus kick in intel_engine_is_idle() (behind an unprotected
> read of active, so still possible to race), and I've added an
> unconditional kick to pmu_enable because we play games with
> tasklet_disable there that may cause us to miss a direct submission.

intel_engine_is_idle from the idle work handler is called before awake is 
cleared, so not that?

And tasklet kick from intel_enable_engine_stats, hm yep. But wouldn't 
taking the timeline lock around active state reconstruction solve that 
simpler?

Regards,

Tvrtko
Chris Wilson June 28, 2018, 12:07 p.m. UTC | #8
Quoting Tvrtko Ursulin (2018-06-28 12:56:56)
> 
> On 27/06/2018 16:28, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-06-27 16:21:24)
> >>
> >> On 27/06/2018 14:29, Chris Wilson wrote:
> >>> Quoting Tvrtko Ursulin (2018-06-27 14:15:22)
> >>>>
> >>>> On 27/06/2018 11:58, Chris Wilson wrote:
> >>>>> That tasklets get kicked randomly, I think was the culprit.
> >>>>
> >>>> What do you mean? I hope we have busy-idle quite controlled and we know
> >>>> when we should and should expect a tasklet. If we synced them when
> >>>> transitioning to idle they cannot happen. Otherwise we better be active!
> >>>> GEM_BUG_ON(!engine->i915->gt.awake) instead? Does that trigger?!
> >>>
> >>> tasklet_schedule() is called off the main path, without locking, so
> >>> unsynchronized to parking. Just because.
> >>
> >> I need to understand this - which main path? Submission - we will be
> >> mark_busy. After last request - we will idle the engines and sync the
> >> tasklet.
> > 
> > There's a bonus kick in intel_engine_is_idle() (behind an unprotected
> > read of active, so still possible to race), and I've added an
> > unconditional kick to pmu_enable because we play games with
> > tasklet_disable there that may cause us to miss a direct submission.
> 
> intel_engine_is_idle from the idle work handler is called before awake is 
> cleared, so not that?

intel_engine_is_idle() may be called at any time though, and will race.
 
> And tasklet kick from intel_enable_engine_stats, hm yep. But wouldn't 
> taking the timeline lock around active state reconstruction solve that 
> simpler?

Can you? We probably can. (That one was a very recent discovery and
quick fix.)

Since that was a very recent discovery, my fallible memory says the
GEM_BUG_ON() popped up from intel_engine_is_idle. Not that everything
has changed since then, ofc.
-Chris
Chris Wilson June 28, 2018, 12:11 p.m. UTC | #9
Quoting Chris Wilson (2018-06-28 13:07:51)
> Quoting Tvrtko Ursulin (2018-06-28 12:56:56)
> > And tasklet kick from intel_enable_engine_stats, hm yep. But wouldn't 
> > taking the timeline lock around active state reconstruction solve that 
> > simpler?
> 
> Can you? We probably can. (That one was a very recent discovery and
> quick fix.)

The biggest issue being whether or not the same locking applies equally
to all submission backends. That's not yet true, but then again we don't
use stats everywhere. So whether or not that's an issue, I don't know,
but it's enough to make me want to punt changing the locking inside
intel_enable_engine_stats to a separate patch.
-Chris
Tvrtko Ursulin June 28, 2018, 12:29 p.m. UTC | #10
On 28/06/2018 13:11, Chris Wilson wrote:
> Quoting Chris Wilson (2018-06-28 13:07:51)
>> Quoting Tvrtko Ursulin (2018-06-28 12:56:56)
>>> And tasklet kick from intel_enable_engine_stats, hm yep. But wouldn't
>>> taking the timeline lock around active state reconstruction solve that
>>> simpler?
>>
>> Can you? We probably can. (That one was a very recent discovery and
>> quick fix.)
> 
> The biggest issue being whether or not the same locking applies equally
> to all submission backends. That's not yet true, but then again we don't
> use stats everywhere. So whether or not that's an issue, I don't know,
> but it's enough to make me want to punt changing the locking inside
> intel_enable_engine_stats to a separate patch.

Big benefit is removing the extra tasklet schedule from engine stats 
which is in fact even racy. We need the state reconstruction to be 
atomic so I think it really needs to be under the engine lock.

tasklet_disable/enable can then also be dropped I think.

To which patch in this series that belongs is the question. Last one I 
think, when all is in place that port updates are protected by the 
timeline lock.

Regards,

Tvrtko
Chris Wilson June 28, 2018, 12:35 p.m. UTC | #11
Quoting Tvrtko Ursulin (2018-06-28 13:29:32)
> 
> On 28/06/2018 13:11, Chris Wilson wrote:
> > Quoting Chris Wilson (2018-06-28 13:07:51)
> >> Quoting Tvrtko Ursulin (2018-06-28 12:56:56)
> >>> And tasklet kick from intel_enable_engine_stats, hm yep. But wouldn't
> >>> taking the timeline lock around active state reconstruction solve that
> >>> simpler?
> >>
> >> Can you? We probably can. (That one was a very recent discovery and
> >> quick fix.)
> > 
> > The biggest issue being whether or not the same locking applies equally
> > to all submission backends. That's not yet true, but then again we don't
> > use stats everywhere. So whether or not that's an issue, I don't know,
> > but it's enough to make me want to punt changing the locking inside
> > intel_enable_engine_stats to a separate patch.
> 
> Big benefit is removing the extra tasklet schedule from engine stats 
> which is in fact even racy.

It's racy, but the tasklet being run more often than required is just
wasted effort. Unless you think we can get ourselves into a loop here?

> We need the state reconstruction to be 
> atomic so I think it really needs to be under the engine lock.
> 
> tasklet_disable/enable can then also be dropped I think.
> 
> To which patch in this series that belongs is the question. Last one I 
> think, when all is in place that port updates are protected by the 
> timeline lock.

I definitely support it being a new patch. I don't think the race is a
problem that requires preventative work.
-Chris
Tvrtko Ursulin June 28, 2018, 1:03 p.m. UTC | #12
On 28/06/2018 13:35, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-06-28 13:29:32)
>>
>> On 28/06/2018 13:11, Chris Wilson wrote:
>>> Quoting Chris Wilson (2018-06-28 13:07:51)
>>>> Quoting Tvrtko Ursulin (2018-06-28 12:56:56)
>>>>> And tasklet kick from intel_enable_engine_stats, hm yep. But wouldn't
>>>>> taking the timeline lock around active state reconstruction solve that
>>>>> simpler?
>>>>
>>>> Can you? We probably can. (That one was a very recent discovery and
>>>> quick fix.)
>>>
>>> The biggest issue being whether or not the same locking applies equally
>>> to all submission backends. That's not yet true, but then again we don't
>>> use stats everywhere. So whether or not that's an issue, I don't know,
>>> but it's enough to make me want to punt changing the locking inside
>>> intel_enable_engine_stats to a separate patch.
>>
>> Big benefit is removing the extra tasklet schedule from engine stats
>> which is in fact even racy.
> 
> It's racy, but the tasklet being run more often than required is just
> wasted effort. Unless you think we can get ourselves into a loop here?
> 
>> We need the state reconstruction to be
>> atomic so I think it really needs to be under the engine lock.
>>
>> tasklet_disable/enable can then also be dropped I think.
>>
>> To which patch in this series that belongs is the question. Last one I
>> think, when all is in place that port updates are protected by the
>> timeline lock.
> 
> I definitely support it being a new patch. I don't think the race is a
> problem that requires preventative work.

I think it has to be part of this series as the last, or before it, 
since otherwise the perf_pmu test could/would/should start failing.

Because it is this series which breaks the assumption in 
intel_enable_engine_stats that ports can be sampled safely while the 
tasklet is disabled.

I don't think the extra tasklet schedule fixes it - it cannot help 
correcting the state once the above races with direct submission.

So that extra/new tasklet schedule also goes away.
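
One possible shape of that, sketched only; helper and field names are taken
from this era's intel_enable_engine_stats() and should be treated as
illustrative, and the interaction with the stats seqlock is hand-waved:

	const struct execlist_port *port = execlists->port;
	unsigned int num_ports = execlists_num_ports(execlists);
	unsigned long flags;

	spin_lock_irqsave(&engine->timeline.lock, flags); /* the lock that direct
							   * submission now holds */
	while (num_ports-- && port_isset(port)) {
		engine->stats.active++;	/* reconstruct "busy" atomically wrt the ports */
		port++;
	}
	spin_unlock_irqrestore(&engine->timeline.lock, flags);

	/* ...with the tasklet_disable()/tasklet_enable() pair and the catch-up
	 * tasklet_hi_schedule() then dropped. */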

Regards,

Tvrtko
diff mbox

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
index 261da577829a..7892ac773916 100644
--- a/drivers/gpu/drm/i915/i915_gem.h
+++ b/drivers/gpu/drm/i915/i915_gem.h
@@ -88,4 +88,9 @@  static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
 		tasklet_kill(t);
 }
 
+static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
+{
+	return likely(!atomic_read(&t->count));
+}
+
 #endif /* __I915_GEM_H__ */
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 46aaef5c1851..316d0b08d40f 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1469,14 +1469,10 @@  static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
 static void
 gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 {
-	struct intel_engine_execlists * const execlists = &engine->execlists;
 	bool tasklet = false;
 
-	if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
-		if (READ_ONCE(engine->execlists.active))
-			tasklet = !test_and_set_bit(ENGINE_IRQ_EXECLIST,
-						    &engine->irq_posted);
-	}
+	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
+		tasklet = true;
 
 	if (iir & GT_RENDER_USER_INTERRUPT) {
 		notify_ring(engine);
@@ -1484,7 +1480,7 @@  gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 	}
 
 	if (tasklet)
-		tasklet_hi_schedule(&execlists->tasklet);
+		tasklet_hi_schedule(&engine->execlists.tasklet);
 }
 
 static void gen8_gt_irq_ack(struct drm_i915_private *i915,
@@ -2216,7 +2212,6 @@  static irqreturn_t cherryview_irq_handler(int irq, void *arg)
 
 		I915_WRITE(VLV_IER, ier);
 		I915_WRITE(GEN8_MASTER_IRQ, GEN8_MASTER_IRQ_CONTROL);
-		POSTING_READ(GEN8_MASTER_IRQ);
 
 		gen8_gt_irq_handler(dev_priv, master_ctl, gt_iir);
 
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 7209c22798e6..ace93958689e 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1353,12 +1353,10 @@  static void intel_engine_print_registers(const struct intel_engine_cs *engine,
 		ptr = I915_READ(RING_CONTEXT_STATUS_PTR(engine));
 		read = GEN8_CSB_READ_PTR(ptr);
 		write = GEN8_CSB_WRITE_PTR(ptr);
-		drm_printf(m, "\tExeclist CSB read %d [%d cached], write %d [%d from hws], interrupt posted? %s, tasklet queued? %s (%s)\n",
+		drm_printf(m, "\tExeclist CSB read %d [%d cached], write %d [%d from hws], tasklet queued? %s (%s)\n",
 			   read, execlists->csb_head,
 			   write,
 			   intel_read_status_page(engine, intel_hws_csb_write_index(engine->i915)),
-			   yesno(test_bit(ENGINE_IRQ_EXECLIST,
-					  &engine->irq_posted)),
 			   yesno(test_bit(TASKLET_STATE_SCHED,
 					  &engine->execlists.tasklet.state)),
 			   enableddisabled(!atomic_read(&engine->execlists.tasklet.count)));
@@ -1570,11 +1568,9 @@  void intel_engine_dump(struct intel_engine_cs *engine,
 	spin_unlock(&b->rb_lock);
 	local_irq_restore(flags);
 
-	drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s) (execlists? %s)\n",
+	drm_printf(m, "IRQ? 0x%lx (breadcrumbs? %s)\n",
 		   engine->irq_posted,
 		   yesno(test_bit(ENGINE_IRQ_BREADCRUMB,
-				  &engine->irq_posted)),
-		   yesno(test_bit(ENGINE_IRQ_EXECLIST,
 				  &engine->irq_posted)));
 
 	drm_printf(m, "HWSP:\n");
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 5a12b8fc9d8f..c82efa3ac105 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -562,13 +562,15 @@  static void complete_preempt_context(struct intel_engine_execlists *execlists)
 {
 	GEM_BUG_ON(!execlists_is_active(execlists, EXECLISTS_ACTIVE_PREEMPT));
 
+	__unwind_incomplete_requests(container_of(execlists,
+						  typeof(struct intel_engine_cs),
+						  execlists));
 	execlists_cancel_port_requests(execlists);
-	execlists_unwind_incomplete_requests(execlists);
 
 	execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
 }
 
-static void __execlists_dequeue(struct intel_engine_cs *engine)
+static void execlists_dequeue(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct execlist_port *port = execlists->port;
@@ -580,7 +582,11 @@  static void __execlists_dequeue(struct intel_engine_cs *engine)
 
 	lockdep_assert_held(&engine->timeline.lock);
 
-	/* Hardware submission is through 2 ports. Conceptually each port
+	GEM_BUG_ON(execlists_is_active(&engine->execlists,
+				       EXECLISTS_ACTIVE_PREEMPT));
+
+	/*
+	 * Hardware submission is through 2 ports. Conceptually each port
 	 * has a (RING_START, RING_HEAD, RING_TAIL) tuple. RING_START is
 	 * static for a context, and unique to each, so we only execute
 	 * requests belonging to a single context from each ring. RING_HEAD
@@ -770,15 +776,6 @@  static void __execlists_dequeue(struct intel_engine_cs *engine)
 		   !port_isset(engine->execlists.port));
 }
 
-static void execlists_dequeue(struct intel_engine_cs *engine)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&engine->timeline.lock, flags);
-	__execlists_dequeue(engine);
-	spin_unlock_irqrestore(&engine->timeline.lock, flags);
-}
-
 void
 execlists_cancel_port_requests(struct intel_engine_execlists * const execlists)
 {
@@ -874,14 +871,6 @@  static void reset_irq(struct intel_engine_cs *engine)
 	smp_store_mb(engine->execlists.active, 0);
 
 	clear_gtiir(engine);
-
-	/*
-	 * The port is checked prior to scheduling a tasklet, but
-	 * just in case we have suspended the tasklet to do the
-	 * wedging make sure that when it wakes, it decides there
-	 * is no work to do by clearing the irq_posted bit.
-	 */
-	clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
 }
 
 static void execlists_cancel_requests(struct intel_engine_cs *engine)
@@ -956,6 +945,12 @@  static void execlists_cancel_requests(struct intel_engine_cs *engine)
 	local_irq_restore(flags);
 }
 
+static inline bool
+reset_in_progress(const struct intel_engine_execlists *execlists)
+{
+	return unlikely(!__tasklet_is_enabled(&execlists->tasklet));
+}
+
 static void process_csb(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
@@ -963,10 +958,6 @@  static void process_csb(struct intel_engine_cs *engine)
 	const u32 * const buf = execlists->csb_status;
 	u8 head, tail;
 
-	/* Clear before reading to catch new interrupts */
-	clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
-	smp_mb__after_atomic();
-
 	/* Note that csb_write, csb_status may be either in HWSP or mmio */
 	head = execlists->csb_head;
 	tail = READ_ONCE(*execlists->csb_write);
@@ -1096,19 +1087,9 @@  static void process_csb(struct intel_engine_cs *engine)
 	execlists->csb_head = head;
 }
 
-/*
- * Check the unread Context Status Buffers and manage the submission of new
- * contexts to the ELSP accordingly.
- */
-static void execlists_submission_tasklet(unsigned long data)
+static void __execlists_submission_tasklet(struct intel_engine_cs *const engine)
 {
-	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
-
-	GEM_TRACE("%s awake?=%d, active=%x, irq-posted?=%d\n",
-		  engine->name,
-		  engine->i915->gt.awake,
-		  engine->execlists.active,
-		  test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted));
+	lockdep_assert_held(&engine->timeline.lock);
 
 	/*
 	 * We can skip acquiring intel_runtime_pm_get() here as it was taken
@@ -1120,18 +1101,33 @@  static void execlists_submission_tasklet(unsigned long data)
 	 */
 	GEM_BUG_ON(!engine->i915->gt.awake);
 
-	/*
-	 * Prefer doing test_and_clear_bit() as a two stage operation to avoid
-	 * imposing the cost of a locked atomic transaction when submitting a
-	 * new request (outside of the context-switch interrupt).
-	 */
-	if (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted))
-		process_csb(engine);
-
+	process_csb(engine);
 	if (!execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
 		execlists_dequeue(engine);
 }
 
+/*
+ * Check the unread Context Status Buffers and manage the submission of new
+ * contexts to the ELSP accordingly.
+ */
+static void execlists_submission_tasklet(unsigned long data)
+{
+	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
+	unsigned long flags;
+
+	GEM_TRACE("%s awake?=%d, active=%x\n",
+		  engine->name,
+		  engine->i915->gt.awake,
+		  engine->execlists.active);
+
+	spin_lock_irqsave(&engine->timeline.lock, flags);
+
+	if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
+		__execlists_submission_tasklet(engine);
+
+	spin_unlock_irqrestore(&engine->timeline.lock, flags);
+}
+
 static void queue_request(struct intel_engine_cs *engine,
 			  struct i915_sched_node *node,
 			  int prio)
@@ -1140,16 +1136,30 @@  static void queue_request(struct intel_engine_cs *engine,
 		      &lookup_priolist(engine, prio)->requests);
 }
 
-static void __submit_queue(struct intel_engine_cs *engine, int prio)
+static void __update_queue(struct intel_engine_cs *engine, int prio)
 {
 	engine->execlists.queue_priority = prio;
-	tasklet_hi_schedule(&engine->execlists.tasklet);
+}
+
+static void __submit_queue(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+
+	if (reset_in_progress(execlists))
+		return; /* defer until we restart the engine following reset */
+
+	if (execlists->tasklet.func == execlists_submission_tasklet)
+		__execlists_submission_tasklet(engine);
+	else
+		tasklet_hi_schedule(&execlists->tasklet);
 }
 
 static void submit_queue(struct intel_engine_cs *engine, int prio)
 {
-	if (prio > engine->execlists.queue_priority)
-		__submit_queue(engine, prio);
+	if (prio > engine->execlists.queue_priority) {
+		__update_queue(engine, prio);
+		__submit_queue(engine);
+	}
 }
 
 static void execlists_submit_request(struct i915_request *request)
@@ -1161,11 +1171,12 @@  static void execlists_submit_request(struct i915_request *request)
 	spin_lock_irqsave(&engine->timeline.lock, flags);
 
 	queue_request(engine, &request->sched, rq_prio(request));
-	submit_queue(engine, rq_prio(request));
 
 	GEM_BUG_ON(!engine->execlists.first);
 	GEM_BUG_ON(list_empty(&request->sched.link));
 
+	submit_queue(engine, rq_prio(request));
+
 	spin_unlock_irqrestore(&engine->timeline.lock, flags);
 }
 
@@ -1292,8 +1303,11 @@  static void execlists_schedule(struct i915_request *request,
 		}
 
 		if (prio > engine->execlists.queue_priority &&
-		    i915_sw_fence_done(&sched_to_request(node)->submit))
-			__submit_queue(engine, prio);
+		    i915_sw_fence_done(&sched_to_request(node)->submit)) {
+			/* defer submission until after all of our updates */
+			__update_queue(engine, prio);
+			tasklet_hi_schedule(&engine->execlists.tasklet);
+		}
 	}
 
 	spin_unlock_irq(&engine->timeline.lock);
@@ -1874,6 +1888,7 @@  execlists_reset_prepare(struct intel_engine_cs *engine)
 {
 	struct intel_engine_execlists * const execlists = &engine->execlists;
 	struct i915_request *request, *active;
+	unsigned long flags;
 
 	GEM_TRACE("%s\n", engine->name);
 
@@ -1895,8 +1910,9 @@  execlists_reset_prepare(struct intel_engine_cs *engine)
 	 * and avoid blaming an innocent request if the stall was due to the
 	 * preemption itself.
 	 */
-	if (test_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted))
-		process_csb(engine);
+	spin_lock_irqsave(&engine->timeline.lock, flags);
+
+	process_csb(engine);
 
 	/*
 	 * The last active request can then be no later than the last request
@@ -1906,15 +1922,12 @@  execlists_reset_prepare(struct intel_engine_cs *engine)
 	active = NULL;
 	request = port_request(execlists->port);
 	if (request) {
-		unsigned long flags;
-
 		/*
 		 * Prevent the breadcrumb from advancing before we decide
 		 * which request is currently active.
 		 */
 		intel_engine_stop_cs(engine);
 
-		spin_lock_irqsave(&engine->timeline.lock, flags);
 		list_for_each_entry_from_reverse(request,
 						 &engine->timeline.requests,
 						 link) {
@@ -1924,12 +1937,28 @@  execlists_reset_prepare(struct intel_engine_cs *engine)
 
 			active = request;
 		}
-		spin_unlock_irqrestore(&engine->timeline.lock, flags);
 	}
 
+	spin_unlock_irqrestore(&engine->timeline.lock, flags);
+
 	return active;
 }
 
+static void reset_csb_pointers(struct intel_engine_execlists *execlists)
+{
+	/*
+	 * After a reset, the HW starts writing into CSB entry [0]. We
+	 * therefore have to set our HEAD pointer back one entry so that
+	 * the *first* entry we check is entry 0. To complicate this further,
+	 * as we don't wait for the first interrupt after reset, we have to
+	 * fake the HW write to point back to the last entry so that our
+	 * inline comparison of our cached head position against the last HW
+	 * write works even before the first interrupt.
+	 */
+	execlists->csb_head = GEN8_CSB_ENTRIES - 1;
+	WRITE_ONCE(*execlists->csb_write, (GEN8_CSB_ENTRIES - 1) | 0xff << 16);
+}
+
 static void execlists_reset(struct intel_engine_cs *engine,
 			    struct i915_request *request)
 {
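To make the comment in reset_csb_pointers() concrete, here is a sketch of the head/tail comparison it is protecting, assuming process_csb() follows the usual pattern in this file; the exact loop is not part of this patch, and names such as GEN8_CSB_WRITE_PTR and csb_status are assumed from the existing driver:

	unsigned int head = execlists->csb_head;	/* 7 right after reset_csb_pointers() */
	unsigned int tail = GEN8_CSB_WRITE_PTR(READ_ONCE(*execlists->csb_write));

	while (head != tail) {	/* faked write == cached head: no stale entries consumed */
		if (++head == GEN8_CSB_ENTRIES)
			head = 0;	/* the first real event after reset lands in entry 0 */
		/* decode execlists->csb_status[2 * head] and [2 * head + 1] ... */
	}
	execlists->csb_head = head;

With csb_head faked to GEN8_CSB_ENTRIES - 1 and the write pointer faked to the same value, the loop is a no-op until the HW reports its first context switch into entry 0, at which point head wraps to 0 and that entry is the first one examined.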
@@ -1960,7 +1989,7 @@  static void execlists_reset(struct intel_engine_cs *engine,
 	__unwind_incomplete_requests(engine);
 
 	/* Following the reset, we need to reload the CSB read/write pointers */
-	engine->execlists.csb_head = GEN8_CSB_ENTRIES - 1;
+	reset_csb_pointers(&engine->execlists);
 
 	spin_unlock_irqrestore(&engine->timeline.lock, flags);
 
@@ -2459,7 +2488,6 @@  static int logical_ring_init(struct intel_engine_cs *engine)
 			upper_32_bits(ce->lrc_desc);
 	}
 
-	execlists->csb_head = GEN8_CSB_ENTRIES - 1;
 	execlists->csb_read =
 		i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine));
 	if (csb_force_mmio(i915)) {
@@ -2474,6 +2502,7 @@  static int logical_ring_init(struct intel_engine_cs *engine)
 		execlists->csb_write =
 			&engine->status_page.page_addr[intel_hws_csb_write_index(i915)];
 	}
+	reset_csb_pointers(execlists);
 
 	return 0;
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 5b92c5f03e1d..381c243bfc6f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -359,7 +359,6 @@  struct intel_engine_cs {
 	atomic_t irq_count;
 	unsigned long irq_posted;
 #define ENGINE_IRQ_BREADCRUMB 0
-#define ENGINE_IRQ_EXECLIST 1
 
 	/* Rather than have every client wait upon all user interrupts,
 	 * with the herd waking after every interrupt and each doing the