
[RFC,1/3] drm/i915: Watchdog timeout: IRQ handler for gen8+

Message ID 20170223194421.28463-1-michel.thierry@intel.com (mailing list archive)
State New, archived

Commit Message

Michel Thierry Feb. 23, 2017, 7:44 p.m. UTC
*** General ***

Watchdog timeout (or "media engine reset") is a feature that allows
userland applications to enable hang detection on individual batch buffers.
The detection mechanism itself is mostly bound to the hardware and the only
things the driver needs to do to support this form of hang detection are to
implement the interrupt handling and to emit the watchdog commands before
and after the batch buffer start instruction in the ring buffer.

The principle of the hang detection mechanism is as follows:

1. Once the decision has been made to enable watchdog timeout for a
particular batch buffer, the driver emits a watchdog timer start instruction
before, and a watchdog timer cancellation instruction after, the batch
buffer start instruction in the ring buffer (see the sketch below).

2. Once GPU execution reaches the watchdog timer start instruction, the
hardware starts the watchdog counter. The counter keeps counting until it
either reaches a previously configured threshold value or the timer
cancellation instruction is executed.

2a. If the counter reaches the threshold value, the hardware fires a
watchdog interrupt that is picked up by the watchdog interrupt handler.
This means that a hang has been detected and the driver needs to deal with
it the same way it would deal with an engine hang detected by the periodic
hang checker. The only difference between the two is that we have already
blamed the active request (to ensure an engine reset).

2b. If the batch buffer completes and execution reaches the watchdog
cancellation instruction before the watchdog counter reaches its threshold
value, the watchdog is cancelled and nothing more comes of it. No hang is
detected.

Note about future interaction with preemption: preemption could happen in
a command sequence before the watchdog counter has been disabled, resulting
in the watchdog being triggered after preemption. The driver will need to
explicitly disable the watchdog counter as part of the preemption sequence.
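
For illustration only, the emission described in step 1 might look roughly
like the sketch below. It reuses the register definitions added by this
patch, but the helper name and the emission style are placeholders; the
actual command emission is not part of this patch.

/*
 * Sketch: bracket the batch buffer start with watchdog start/cancel
 * commands, using MI_LOAD_REGISTER_IMM writes to the per-engine
 * watchdog registers.
 */
static u32 *emit_bb_start_with_watchdog(struct intel_engine_cs *engine,
					u32 *cs, u64 offset, u32 threshold)
{
	const u32 base = engine->mmio_base;
	const u32 disable = engine->id == RCS ? GEN8_RCS_WATCHDOG_DISABLE :
						GEN8_XCS_WATCHDOG_DISABLE;

	/* Program the threshold and start the counter */
	*cs++ = MI_LOAD_REGISTER_IMM(2);
	*cs++ = i915_mmio_reg_offset(RING_THRESH(base));
	*cs++ = threshold;
	*cs++ = i915_mmio_reg_offset(RING_CNTR(base));
	*cs++ = GEN8_WATCHDOG_ENABLE;

	/* The batch buffer being guarded */
	*cs++ = MI_BATCH_BUFFER_START_GEN8;
	*cs++ = lower_32_bits(offset);
	*cs++ = upper_32_bits(offset);

	/* Cancel the counter once the batch has completed */
	*cs++ = MI_LOAD_REGISTER_IMM(1);
	*cs++ = i915_mmio_reg_offset(RING_CNTR(base));
	*cs++ = disable;
	*cs++ = MI_NOOP; /* keep the emission an even number of dwords */

	return cs;
}

The same RING_CNTR disable write is what the preemption sequence mentioned
above would have to emit as well.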

*** This patch introduces: ***

1. IRQ handler code for watchdog timeout, allowing direct hang recovery
based on hardware-driven hang detection, which integrates directly with the
hang recovery path. This is independent of whether per-engine reset or only
full GPU reset is available.

2. Watchdog specific register information.

Currently the render engine and all available media engines support
watchdog timeout (VECS is only supported from Gen9 onwards). The
specifications allude to the BCS engine also being supported, but that is
not enabled by this commit.

Note that the value used to stop the counter differs between the render
and non-render engines.

Signed-off-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Ian Lister <ian.lister@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h        |  4 ++++
 drivers/gpu/drm/i915/i915_irq.c        | 31 ++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_reg.h        |  6 ++++++
 drivers/gpu/drm/i915/intel_hangcheck.c | 13 +++++++++----
 drivers/gpu/drm/i915/intel_lrc.c       | 16 ++++++++++++++++
 5 files changed, 65 insertions(+), 5 deletions(-)

Comments

Chris Wilson Feb. 23, 2017, 8:57 p.m. UTC | #1
On Thu, Feb 23, 2017 at 11:44:17AM -0800, Michel Thierry wrote:
> [...]
>
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index bc70e2c451b2..4ef73363bbe9 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -1352,6 +1352,28 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir, int test_shift)
>  		set_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
>  		tasklet_hi_schedule(&engine->irq_tasklet);
>  	}
> +
> +	if (iir & (GT_GEN8_WATCHDOG_INTERRUPT << test_shift)) {
> +		struct drm_i915_private *dev_priv = engine->i915;
> +		u32 watchdog_disable;
> +
> +		if (engine->id == RCS)
> +			watchdog_disable = GEN8_RCS_WATCHDOG_DISABLE;
> +		else
> +			watchdog_disable = GEN8_XCS_WATCHDOG_DISABLE;
> +
> +		/* Stop the counter to prevent further timeout interrupts */
> +		I915_WRITE_FW(RING_CNTR(engine->mmio_base), watchdog_disable);

There's no guarantee you hold forcewake, you need to use I915_WRITE.
Better yet would be to avoid having to wait for forcewake within the
hardirq handler.
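
For reference, the _FW variants assume the caller already holds the
relevant forcewake domain, while plain I915_WRITE takes care of forcewake
itself; the minimal change being suggested here would be along these lines
(sketch only, reusing the variables from the hunk above):

		/* I915_WRITE handles the forcewake dance for us */
		I915_WRITE(RING_CNTR(engine->mmio_base), watchdog_disable);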

> +
> +		/* Make sure the active request will be marked as guilty */
> +		engine->hangcheck.stalled = true;
> +		engine->hangcheck.seqno = intel_engine_get_seqno(engine);

Just set a flag saying the engine->hangcheck.watchdog = true. Don't
confuse us. engine->hangcheck.seqno does not give the guilty seqno!

Also there is no guarantee here that seqno is the guilty party. That's
a nasty bug. Servicing the interrupt will be running in parallel with
the GPU that may complete the request before we read the HWS.

Please tell me we can use a PID along with the watchdog timer...
-Chris
Michel Thierry Feb. 23, 2017, 9:21 p.m. UTC | #2
On 23/02/17 12:57, Chris Wilson wrote:
> On Thu, Feb 23, 2017 at 11:44:17AM -0800, Michel Thierry wrote:
>> [...]
>>
>> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
>> index bc70e2c451b2..4ef73363bbe9 100644
>> --- a/drivers/gpu/drm/i915/i915_irq.c
>> +++ b/drivers/gpu/drm/i915/i915_irq.c
>> @@ -1352,6 +1352,28 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir, int test_shift)
>>  		set_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
>>  		tasklet_hi_schedule(&engine->irq_tasklet);
>>  	}
>> +
>> +	if (iir & (GT_GEN8_WATCHDOG_INTERRUPT << test_shift)) {
>> +		struct drm_i915_private *dev_priv = engine->i915;
>> +		u32 watchdog_disable;
>> +
>> +		if (engine->id == RCS)
>> +			watchdog_disable = GEN8_RCS_WATCHDOG_DISABLE;
>> +		else
>> +			watchdog_disable = GEN8_XCS_WATCHDOG_DISABLE;
>> +
>> +		/* Stop the counter to prevent further timeout interrupts */
>> +		I915_WRITE_FW(RING_CNTR(engine->mmio_base), watchdog_disable);
>
> There's no guarantee you hold forcewake, you need to use I915_WRITE.
> Better yet would be to avoid having to wait for forcewake within the
> hardirq handler.
>
>> +
>> +		/* Make sure the active request will be marked as guilty */
>> +		engine->hangcheck.stalled = true;
>> +		engine->hangcheck.seqno = intel_engine_get_seqno(engine);
>
> Just set a flag saying the engine->hangcheck.watchdog = true. Don't
> confuse us. engine->hangcheck.seqno does not give the guilty seqno!
>
> Also there is no guarantee here that seqno is the guilty party. That's
> a nasty bug. Servicing the interrupt will be running in parallel with
> the GPU that may complete the request before we read the HWS.
>
> Please tell me we can use a PID along with the watchdog timer...

A 'watchdog' PID and 'running' PID in the HWSP would sound ok?

There's also the question if we want different thresholds per engine.
Chris Wilson Feb. 23, 2017, 9:49 p.m. UTC | #3
On Thu, Feb 23, 2017 at 01:21:03PM -0800, Michel Thierry wrote:
> 
> 
> On 23/02/17 12:57, Chris Wilson wrote:
> >On Thu, Feb 23, 2017 at 11:44:17AM -0800, Michel Thierry wrote:
> >> [...]
> >
> >Also there is no guarantee here that seqno is the guilty party. That's
> >a nasty bug. Servicing the interrupt will be running in parallel with
> >the GPU that may complete the request before we read the HWS.
> >
> >Please tell me we can use a PID along with the watchdog timer...
> 
> A 'watchdog' PID and 'running' PID in the HWSP would sound ok?

No, another STORE_DWORD_IMM has the same asynchronicity issue as just
reading seqno. I take it there is no WATCHDOG_PID that is set when the
watchdog expires? Or we can't program the CS to stop when the watchdog
goes off?

The issue is that we may blame the following context (a completely
unrelated process) for the hang - DoS ahoy.

Or we can do something like current hangcheck, program the watchdog to
fire twice before we declare a hang. And only reset if we see the same
seqno on both occasions.
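
A rough sketch of that idea (the watchdog_seqno field is hypothetical, and
how the counter gets re-armed for the second period is left out; the rest
uses the same helpers as the patch):

	u32 seqno = intel_engine_get_seqno(engine);

	if (engine->hangcheck.watchdog_seqno != seqno) {
		/*
		 * First expiry for this request: note which seqno was
		 * active and let the watchdog run for one more period
		 * before passing judgement.
		 */
		engine->hangcheck.watchdog_seqno = seqno;
		/* ... re-arm the counter here ... */
	} else {
		/* Same request still active after two periods: a hang */
		set_bit(I915_RESET_WATCHDOG, &dev_priv->gpu_error.flags);
		queue_delayed_work(system_long_wq,
				   &dev_priv->gpu_error.hangcheck_work, 0);
	}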
 
> There's also the question if we want different thresholds per engine.

I suspect we do. But that can be extended through the same
context_set_param just by passing an array (size > 0) instead of a
single value.
-Chris
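
For illustration, the array-based context_set_param extension Chris
describes might look like this from userspace, assuming fd is the open DRM
device and ctx_id an existing context. The watchdog parameter name is
hypothetical (it is not part of this patch); the struct and ioctl are the
existing context-param interface, and <stdint.h>, <sys/ioctl.h> and
drm/i915_drm.h are assumed to be included.

	uint32_t thresholds[4] = { 0 };	/* one entry per engine, 0 = off */
	struct drm_i915_gem_context_param p = {
		.ctx_id = ctx_id,
		.param  = I915_CONTEXT_PARAM_WATCHDOG,	/* hypothetical */
		.size   = sizeof(thresholds),		/* size > 0 => array */
		.value  = (uintptr_t)thresholds,
	};

	ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);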
Michel Thierry Feb. 23, 2017, 10:12 p.m. UTC | #4
On 23/02/17 13:49, Chris Wilson wrote:
> On Thu, Feb 23, 2017 at 01:21:03PM -0800, Michel Thierry wrote:
>>
>>
>> On 23/02/17 12:57, Chris Wilson wrote:
>>> On Thu, Feb 23, 2017 at 11:44:17AM -0800, Michel Thierry wrote:
>>>> [...]
>>>
>>> Also there is no guarantee here that seqno is the guilty party. That's
>>> a nasty bug. Servicing the interrupt will be running in parallel with
>>> the GPU that may complete the request before we read the HWS.
>>>
>>> Please tell me we can use a PID along with the watchdog timer...
>>
>> A 'watchdog' PID and 'running' PID in the HWSP would sound ok?
>
> No, another STORE_DWORD_IMM has the same asynchronicity issue as just
> reading seqno. I take it there is no WATCHDOG_PID that is set when the
> watchdog expires? Or we can't program the CS to stop when the watchdog
> goes off?
>

If you were asking about the HW, no, there's no such thing, just a 
register for the counter and one for on/off control.

The hardware is also not aware of what else is happening. For example, if
the context is pre-empted, we need to write to the register to disable the
timer, or it will expire while the new batch buffer is running.

> The issue is that we may blame the following context (a completely
> unrelated process) for the hang - DoS ahoy.
>
> Or we can do something like current hangcheck, program the watchdog to
> fire twice before we declare a hang. And only reset if we see the same
> seqno on both occasions.
>

I'll check how it works with firing twice before declaring a hang.

>> There's also the question if we want different thresholds per engine.
>
> I suspect we do. But that can be extended through the same
> context_set_param just by passing an array (size > 0) instead of a
> single value.
> -Chris
>
Chris Wilson Feb. 23, 2017, 11:38 p.m. UTC | #5
On Thu, Feb 23, 2017 at 08:57:54PM +0000, Chris Wilson wrote:
> On Thu, Feb 23, 2017 at 11:44:17AM -0800, Michel Thierry wrote:
> > +
> > +		/* Make sure the active request will be marked as guilty */
> > +		engine->hangcheck.stalled = true;
> > +		engine->hangcheck.seqno = intel_engine_get_seqno(engine);
> 
> Just set a flag saying the engine->hangcheck.watchdog = true. Don't
> confuse us. engine->hangcheck.seqno does not give the guilty seqno!

Hmm. So I was expecting a little more work on hangcheck. Once we kick
hangcheck from the watchdog, we need to confirm that the active seqno hasn't
advanced, and ideally we would stop the ring before it does. As it stands
hangcheck.seqno will at least detect when we fail to reset in time
(still has the smell of DoS as both the watchdog timeout and the request
duration are under user control), but I'm still uncomfortable with it
being set outside of the timer-based hangcheck - at least not unless we
split the different seqnos:

engine->hangcheck.watchdog_seqno
engine->hangcheck.timer_seqno
engine->hangcheck.stalled // now a seqno from either path

We also need to teach the standard hangcheck/i915_reset to only reset
selected engines.
-Chris
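
Putting the pieces of this discussion together, one possible shape of the
follow-up (illustrative only, not an actual revision of the patch): the
hard-irq handler only records the proposed watchdog_seqno plus a per-engine
watchdog flag and kicks hangcheck, while the RING_CNTR disable write and
the guilty-seqno check move into the hangcheck work, where taking forcewake
is fine.

	/* gen8_cs_irq_handler(): no MMIO, no blaming from hard-irq context */
	if (iir & (GT_GEN8_WATCHDOG_INTERRUPT << test_shift)) {
		engine->hangcheck.watchdog = true;	/* flag suggested above */
		engine->hangcheck.watchdog_seqno =
			intel_engine_get_seqno(engine);
		queue_delayed_work(system_long_wq,
				   &engine->i915->gpu_error.hangcheck_work, 0);
	}

	/* i915_hangcheck_elapsed(), inside the per-engine loop */
	if (engine->hangcheck.watchdog) {
		engine->hangcheck.watchdog = false;

		/* Stop the counter; safe to take forcewake here */
		I915_WRITE(RING_CNTR(engine->mmio_base),
			   engine->id == RCS ? GEN8_RCS_WATCHDOG_DISABLE :
					       GEN8_XCS_WATCHDOG_DISABLE);

		/* Only blame if the same request is still the active one */
		if (intel_engine_get_seqno(engine) ==
		    engine->hangcheck.watchdog_seqno)
			hung |= intel_engine_flag(engine);
	}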

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index eed9ead1b592..0e4f4cc3c6de 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1568,6 +1568,9 @@  struct i915_gpu_error {
 	 * recovery. All waiters on the reset_queue will be woken when
 	 * that happens.
 	 *
+	 * When hw detects a hang before us, we can use I915_RESET_WATCHDOG to
+	 * report the hang detection cause accurately.
+	 *
 	 * This counter is used by the wait_seqno code to notice that reset
 	 * event happened and it needs to restart the entire ioctl (since most
 	 * likely the seqno it waited for won't ever signal anytime soon).
@@ -1580,6 +1583,7 @@  struct i915_gpu_error {
 
 	unsigned long flags;
 #define I915_RESET_IN_PROGRESS	0
+#define I915_RESET_WATCHDOG	2 /* looking at the future */
 #define I915_WEDGED		(BITS_PER_LONG - 1)
 
 	/**
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index bc70e2c451b2..4ef73363bbe9 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1352,6 +1352,28 @@  gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir, int test_shift)
 		set_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
 		tasklet_hi_schedule(&engine->irq_tasklet);
 	}
+
+	if (iir & (GT_GEN8_WATCHDOG_INTERRUPT << test_shift)) {
+		struct drm_i915_private *dev_priv = engine->i915;
+		u32 watchdog_disable;
+
+		if (engine->id == RCS)
+			watchdog_disable = GEN8_RCS_WATCHDOG_DISABLE;
+		else
+			watchdog_disable = GEN8_XCS_WATCHDOG_DISABLE;
+
+		/* Stop the counter to prevent further timeout interrupts */
+		I915_WRITE_FW(RING_CNTR(engine->mmio_base), watchdog_disable);
+
+		/* Make sure the active request will be marked as guilty */
+		engine->hangcheck.stalled = true;
+		engine->hangcheck.seqno = intel_engine_get_seqno(engine);
+
+		/* And try to run the hangcheck_work as soon as possible */
+		set_bit(I915_RESET_WATCHDOG, &dev_priv->gpu_error.flags);
+		queue_delayed_work(system_long_wq,
+				   &dev_priv->gpu_error.hangcheck_work, 0);
+	}
 }
 
 static irqreturn_t gen8_gt_irq_ack(struct drm_i915_private *dev_priv,
@@ -3433,12 +3455,15 @@  static void gen8_gt_irq_postinstall(struct drm_i915_private *dev_priv)
 	uint32_t gt_interrupts[] = {
 		GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
+			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
 			GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT,
 		GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
+			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
 			GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
-			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
+			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
+			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
 		0,
 		GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT
@@ -3447,6 +3472,10 @@  static void gen8_gt_irq_postinstall(struct drm_i915_private *dev_priv)
 	if (HAS_L3_DPF(dev_priv))
 		gt_interrupts[0] |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
 
+	/* VECS watchdog is only available in skl+ */
+	if (INTEL_GEN(dev_priv) >= 9)
+		gt_interrupts[3] |= GT_GEN8_WATCHDOG_INTERRUPT;
+
 	dev_priv->pm_ier = 0x0;
 	dev_priv->pm_imr = ~dev_priv->pm_ier;
 	GEN8_IRQ_INIT_NDX(GT, 0, ~gt_interrupts[0], gt_interrupts[0]);
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 141a5c1e3895..b28cd6eee2dd 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -1896,6 +1896,11 @@  enum skl_disp_power_wells {
 #define RING_START(base)	_MMIO((base)+0x38)
 #define RING_CTL(base)		_MMIO((base)+0x3c)
 #define   RING_CTL_SIZE(size)	((size) - PAGE_SIZE) /* in bytes -> pages */
+#define RING_CNTR(base)        _MMIO((base) + 0x178)
+#define   GEN8_WATCHDOG_ENABLE		0
+#define   GEN8_RCS_WATCHDOG_DISABLE	1
+#define   GEN8_XCS_WATCHDOG_DISABLE	0xFFFFFFFF
+#define RING_THRESH(base)      _MMIO((base) + 0x17C)
 #define RING_SYNC_0(base)	_MMIO((base)+0x40)
 #define RING_SYNC_1(base)	_MMIO((base)+0x44)
 #define RING_SYNC_2(base)	_MMIO((base)+0x48)
@@ -2374,6 +2379,7 @@  enum skl_disp_power_wells {
 #define GT_BSD_USER_INTERRUPT			(1 << 12)
 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1	(1 << 11) /* hsw+; rsvd on snb, ivb, vlv */
 #define GT_CONTEXT_SWITCH_INTERRUPT		(1 <<  8)
+#define GT_GEN8_WATCHDOG_INTERRUPT		(1 <<  6) /* gen8+ */
 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT	(1 <<  5) /* !snb */
 #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT	(1 <<  4)
 #define GT_RENDER_CS_MASTER_ERROR_INTERRUPT	(1 <<  3)
diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
index dce742243ba6..0e9272c97096 100644
--- a/drivers/gpu/drm/i915/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/intel_hangcheck.c
@@ -388,7 +388,8 @@  static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
 
 static void hangcheck_declare_hang(struct drm_i915_private *i915,
 				   unsigned int hung,
-				   unsigned int stuck)
+				   unsigned int stuck,
+				   unsigned int watchdog)
 {
 	struct intel_engine_cs *engine;
 	char msg[80];
@@ -401,7 +402,8 @@  static void hangcheck_declare_hang(struct drm_i915_private *i915,
 	if (stuck != hung)
 		hung &= ~stuck;
 	len = scnprintf(msg, sizeof(msg),
-			"%s on ", stuck == hung ? "No progress" : "Hang");
+			"%s on ", watchdog ? "Watchdog timeout" :
+				  stuck == hung ? "No progress" : "Hang");
 	for_each_engine_masked(engine, i915, hung, tmp)
 		len += scnprintf(msg + len, sizeof(msg) - len,
 				 "%s, ", engine->name);
@@ -425,7 +427,7 @@  static void i915_hangcheck_elapsed(struct work_struct *work)
 			     gpu_error.hangcheck_work.work);
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
-	unsigned int hung = 0, stuck = 0;
+	unsigned int hung = 0, stuck = 0, watchdog = 0;
 	int busy_count = 0;
 
 	if (!i915.enable_hangcheck)
@@ -437,6 +439,9 @@  static void i915_hangcheck_elapsed(struct work_struct *work)
 	if (i915_terminally_wedged(&dev_priv->gpu_error))
 		return;
 
+	if (test_and_clear_bit(I915_RESET_WATCHDOG, &dev_priv->gpu_error.flags))
+		watchdog = 1;
+
 	/* As enabling the GPU requires fairly extensive mmio access,
 	 * periodically arm the mmio checker to see if we are triggering
 	 * any invalid access.
@@ -463,7 +468,7 @@  static void i915_hangcheck_elapsed(struct work_struct *work)
 	}
 
 	if (hung)
-		hangcheck_declare_hang(dev_priv, hung, stuck);
+		hangcheck_declare_hang(dev_priv, hung, stuck, watchdog);
 
 	/* Reset timer in case GPU hangs without another request being added */
 	if (busy_count)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 39329d40da46..8c9ebf0cebf7 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1629,6 +1629,22 @@  logical_ring_default_irqs(struct intel_engine_cs *engine)
 	unsigned shift = engine->irq_shift;
 	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
 	engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
+
+	switch (engine->id) {
+	default:
+		/* BCS engine does not support hw watchdog */
+		break;
+	case RCS:
+	case VCS:
+	case VCS2:
+		engine->irq_keep_mask |= (GT_GEN8_WATCHDOG_INTERRUPT << shift);
+		break;
+	case VECS:
+		if (INTEL_GEN(engine->i915) >= 9)
+			engine->irq_keep_mask |=
+				(GT_GEN8_WATCHDOG_INTERRUPT << shift);
+		break;
+	}
 }
 
 static int