
[v4,2/5] drm/i915: Watchdog timeout: IRQ handler for gen8+

Message ID 20190221025820.28447-3-carlos.santa@intel.com (mailing list archive)
State New, archived
Series GEN8+ GPU Watchdog Reset Support

Commit Message

Santa, Carlos Feb. 21, 2019, 2:58 a.m. UTC
From: Michel Thierry <michel.thierry@intel.com>

*** General ***

Watchdog timeout (or "media engine reset") is a feature that allows
userland applications to enable hang detection on individual batch buffers.
The detection mechanism itself is mostly bound to the hardware; the only
things the driver needs to do to support this form of hang detection are
to implement the interrupt handling and to emit the watchdog commands
before and after the batch buffer start instruction in the ring buffer.

The principle of the hang detection mechanism is as follows:

1. Once the decision has been made to enable watchdog timeout for a
particular batch buffer, the driver, while emitting the batch buffer
start instruction into the ring buffer, also emits a watchdog timer
start instruction before it and a watchdog timer cancellation
instruction after it.

2. Once GPU execution reaches the watchdog timer start instruction,
the hardware starts the watchdog counter. The counter keeps counting
until it either reaches a previously configured threshold value or the
timer cancellation instruction is executed.

2a. If the counter reaches the threshold value, the hardware fires a
watchdog interrupt that is picked up by the watchdog interrupt handler.
This means that a hang has been detected and the driver needs to deal with
it the same way it would deal with an engine hang detected by the periodic
hang checker. The only difference between the two is that we have already
blamed the active request (to ensure an engine reset).

2b. If the batch buffer completes and the execution reaches the watchdog
cancellation instruction before the watchdog counter reaches its
threshold value the watchdog is cancelled and nothing more comes of it.
No hang is detected.

Note about future interaction with preemption: preemption could happen
in a command sequence before the watchdog counter is disabled, resulting
in the watchdog being triggered after preemption (e.g. when the watchdog
had been enabled in the low-priority batch). The driver will need to
explicitly disable the watchdog counter as part of the preemption
sequence.

*** This patch introduces: ***

1. IRQ handler code for watchdog timeout that feeds hardware-driven hang
detection directly into the existing hang recovery path. This is
independent of having per-engine reset or only full GPU reset available.

2. Watchdog specific register information.

Currently the render engine and all available media engines support
watchdog timeout (VECS is only supported from GEN9 onwards). The
specifications allude to the BCS engine also being capable, but support
for it is not included in this commit.

Note that the value used to stop the counter differs between render and
non-render engines on GEN8; from GEN9 onwards it is the same.

v2: Move irq handler to tasklet, arm watchdog for a 2nd time to check
against false-positives.

v3: Don't use high priority tasklet, use engine_last_submit while
checking for false-positives. From GEN9 onwards, the stop counter bit is
the same for all engines.

v4: Remove unnecessary brackets, use current_seqno to mark the request
as guilty in the hangcheck/capture code.

v5: Rebased after RESET_ENGINEs flag.

v6: Don't capture error state in case of watchdog timeout. The capture
process is time consuming and this will align to what happens when we
use GuC to handle the watchdog timeout. (Chris)

v7: Rebase.

v8: Rebase, use HZ to reschedule.

v9: Rebase, get forcewake domains in function (no longer in execlists
struct).

v10: Rebase.

v11: Rebase,
     remove extra braces (Tvrtko),
     implement watchdog_to_clock_counts helper (Tvrtko),
     Move tasklet_kill(watchdog_tasklet) inside intel_engines (Tvrtko),
     Use a global heartbeat seqno instead of engine seqno (Chris)
     Make all engine checks class-based (Tvrtko)

Cc: Antonio Argenziano <antonio.argenziano@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Carlos Santa <carlos.santa@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h         |  8 +++
 drivers/gpu/drm/i915/i915_gpu_error.h   |  4 ++
 drivers/gpu/drm/i915/i915_irq.c         | 12 ++++-
 drivers/gpu/drm/i915/i915_reg.h         |  6 +++
 drivers/gpu/drm/i915/intel_engine_cs.c  |  1 +
 drivers/gpu/drm/i915/intel_hangcheck.c  | 17 +++++--
 drivers/gpu/drm/i915/intel_lrc.c        | 65 +++++++++++++++++++++++++
 drivers/gpu/drm/i915/intel_ringbuffer.h |  7 +++
 8 files changed, 114 insertions(+), 6 deletions(-)

Comments

Tvrtko Ursulin Feb. 28, 2019, 5:38 p.m. UTC | #1
On 21/02/2019 02:58, Carlos Santa wrote:
> From: Michel Thierry <michel.thierry@intel.com>
> 
> *** General ***
> 
> Watchdog timeout (or "media engine reset") is a feature that allows
> userland applications to enable hang detection on individual batch buffers.
> The detection mechanism itself is mostly bound to the hardware and the only
> thing that the driver needs to do to support this form of hang detection
> is to implement the interrupt handling support as well as watchdog command
> emission before and after the emitted batch buffer start instruction in the
> ring buffer.
> 
> The principle of the hang detection mechanism is as follows:
> 
> 1. Once the decision has been made to enable watchdog timeout for a
> particular batch buffer and the driver is in the process of emitting the
> batch buffer start instruction into the ring buffer it also emits a
> watchdog timer start instruction before and a watchdog timer cancellation
> instruction after the batch buffer start instruction in the ring buffer.
> 
> 2. Once the GPU execution reaches the watchdog timer start instruction
> the hardware watchdog counter is started by the hardware. The counter
> keeps counting until either reaching a previously configured threshold
> value or the timer cancellation instruction is executed.
> 
> 2a. If the counter reaches the threshold value the hardware fires a
> watchdog interrupt that is picked up by the watchdog interrupt handler.
> This means that a hang has been detected and the driver needs to deal with
> it the same way it would deal with an engine hang detected by the periodic
> hang checker. The only difference between the two is that we already blamed
> the active request (to ensure an engine reset).
> 
> 2b. If the batch buffer completes and the execution reaches the watchdog
> cancellation instruction before the watchdog counter reaches its
> threshold value the watchdog is cancelled and nothing more comes of it.
> No hang is detected.
> 
> Note about future interaction with preemption: Preemption could happen
> in a command sequence prior to watchdog counter getting disabled,
> resulting in watchdog being triggered following preemption (e.g. when
> watchdog had been enabled in the low priority batch). The driver will
> need to explicitly disable the watchdog counter as part of the
> preemption sequence.
> 
> *** This patch introduces: ***
> 
> 1. IRQ handler code for watchdog timeout allowing direct hang recovery
> based on hardware-driven hang detection, which then integrates directly
> with the hang recovery path. This is independent of having per-engine reset
> or just full gpu reset.
> 
> 2. Watchdog specific register information.
> 
> Currently the render engine and all available media engines support
> watchdog timeout (VECS is only supported in GEN9). The specifications allude
> to the BCS engine being supported but that is currently not supported by
> this commit.
> 
> Note that the value to stop the counter is different between render and
> non-render engines in GEN8; GEN9 onwards it's the same.
> 
> v2: Move irq handler to tasklet, arm watchdog for a 2nd time to check
> against false-positives.
> 
> v3: Don't use high priority tasklet, use engine_last_submit while
> checking for false-positives. From GEN9 onwards, the stop counter bit is
> the same for all engines.
> 
> v4: Remove unnecessary brackets, use current_seqno to mark the request
> as guilty in the hangcheck/capture code.
> 
> v5: Rebased after RESET_ENGINEs flag.
> 
> v6: Don't capture error state in case of watchdog timeout. The capture
> process is time consuming and this will align to what happens when we
> use GuC to handle the watchdog timeout. (Chris)
> 
> v7: Rebase.
> 
> v8: Rebase, use HZ to reschedule.
> 
> v9: Rebase, get forcewake domains in function (no longer in execlists
> struct).
> 
> v10: Rebase.
> 
> v11: Rebase,
>       remove extra braces (Tvrtko),
>       implement watchdog_to_clock_counts helper (Tvrtko),
>       Move tasklet_kill(watchdog_tasklet) inside intel_engines (Tvrtko),
>       Use a global heartbeat seqno instead of engine seqno (Chris)
>       Make all engine checks class-based (Tvrtko)
> 
> Cc: Antonio Argenziano <antonio.argenziano@intel.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> Signed-off-by: Michel Thierry <michel.thierry@intel.com>
> Signed-off-by: Carlos Santa <carlos.santa@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_drv.h         |  8 +++
>   drivers/gpu/drm/i915/i915_gpu_error.h   |  4 ++
>   drivers/gpu/drm/i915/i915_irq.c         | 12 ++++-
>   drivers/gpu/drm/i915/i915_reg.h         |  6 +++
>   drivers/gpu/drm/i915/intel_engine_cs.c  |  1 +
>   drivers/gpu/drm/i915/intel_hangcheck.c  | 17 +++++--
>   drivers/gpu/drm/i915/intel_lrc.c        | 65 +++++++++++++++++++++++++
>   drivers/gpu/drm/i915/intel_ringbuffer.h |  7 +++
>   8 files changed, 114 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 63a008aebfcd..0fcb2df869a2 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3120,6 +3120,14 @@ i915_gem_context_lookup(struct drm_i915_file_private *file_priv, u32 id)
>   	return ctx;
>   }
>   
> +static inline u32
> +watchdog_to_clock_counts(struct drm_i915_private *dev_priv, u64 value_in_us)
> +{
> +	u64 threshold = 0;
> +
> +	return threshold;
> +}
> +
>   int i915_perf_open_ioctl(struct drm_device *dev, void *data,
>   			 struct drm_file *file);
>   int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h
> index f408060e0667..bd1821c73ecd 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.h
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.h
> @@ -233,6 +233,9 @@ struct i915_gpu_error {
>   	 * i915_mutex_lock_interruptible()?). I915_RESET_BACKOFF serves a
>   	 * secondary role in preventing two concurrent global reset attempts.
>   	 *
> +	 * #I915_RESET_WATCHDOG - When hw detects a hang before us, we can use
> +	 * I915_RESET_WATCHDOG to report the hang detection cause accurately.
> +	 *
>   	 * #I915_RESET_ENGINE[num_engines] - Since the driver doesn't need to
>   	 * acquire the struct_mutex to reset an engine, we need an explicit
>   	 * flag to prevent two concurrent reset attempts in the same engine.
> @@ -248,6 +251,7 @@ struct i915_gpu_error {
>   #define I915_RESET_BACKOFF	0
>   #define I915_RESET_MODESET	1
>   #define I915_RESET_ENGINE	2
> +#define I915_RESET_WATCHDOG	3
>   #define I915_WEDGED		(BITS_PER_LONG - 1)
>   
>   	/** Number of times an engine has been reset */
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 4b23b2fd1fad..e2a1a07b0f2c 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -1456,6 +1456,9 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>   
>   	if (tasklet)
>   		tasklet_hi_schedule(&engine->execlists.tasklet);
> +
> +	if (iir & GT_GEN8_WATCHDOG_INTERRUPT)
> +		tasklet_schedule(&engine->execlists.watchdog_tasklet);
>   }
>   
>   static void gen8_gt_irq_ack(struct drm_i915_private *i915,
> @@ -3883,17 +3886,24 @@ static void gen8_gt_irq_postinstall(struct drm_i915_private *dev_priv)
>   	u32 gt_interrupts[] = {
>   		GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
>   			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
> +			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
>   			GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT |
>   			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT,
>   		GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
>   			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
> +			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
>   			GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
> -			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
> +			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
> +			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
>   		0,
>   		GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT |
>   			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT
>   		};
>   
> +	/* VECS watchdog is only available in skl+ */
> +	if (INTEL_GEN(dev_priv) >= 9)
> +		gt_interrupts[3] |= GT_GEN8_WATCHDOG_INTERRUPT;
> +
>   	dev_priv->pm_ier = 0x0;
>   	dev_priv->pm_imr = ~dev_priv->pm_ier;
>   	GEN8_IRQ_INIT_NDX(GT, 0, ~gt_interrupts[0], gt_interrupts[0]);
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 1eca166d95bb..a0e101bbcbce 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -2335,6 +2335,11 @@ enum i915_power_well_id {
>   #define RING_START(base)	_MMIO((base) + 0x38)
>   #define RING_CTL(base)		_MMIO((base) + 0x3c)
>   #define   RING_CTL_SIZE(size)	((size) - PAGE_SIZE) /* in bytes -> pages */
> +#define RING_CNTR(base)		_MMIO((base) + 0x178)
> +#define   GEN8_WATCHDOG_ENABLE		0
> +#define   GEN8_WATCHDOG_DISABLE		1
> +#define   GEN8_XCS_WATCHDOG_DISABLE	0xFFFFFFFF /* GEN8 & non-render only */
> +#define RING_THRESH(base)	_MMIO((base) + 0x17C)
>   #define RING_SYNC_0(base)	_MMIO((base) + 0x40)
>   #define RING_SYNC_1(base)	_MMIO((base) + 0x44)
>   #define RING_SYNC_2(base)	_MMIO((base) + 0x48)
> @@ -2894,6 +2899,7 @@ enum i915_power_well_id {
>   #define GT_BSD_USER_INTERRUPT			(1 << 12)
>   #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1	(1 << 11) /* hsw+; rsvd on snb, ivb, vlv */
>   #define GT_CONTEXT_SWITCH_INTERRUPT		(1 <<  8)
> +#define GT_GEN8_WATCHDOG_INTERRUPT		(1 <<  6) /* gen8+ */
>   #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT	(1 <<  5) /* !snb */
>   #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT	(1 <<  4)
>   #define GT_RENDER_CS_MASTER_ERROR_INTERRUPT	(1 <<  3)
> diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
> index 7ae753358a6d..74f563d23cc8 100644
> --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> @@ -1106,6 +1106,7 @@ void intel_engines_park(struct drm_i915_private *i915)
>   		/* Flush the residual irq tasklets first. */
>   		intel_engine_disarm_breadcrumbs(engine);
>   		tasklet_kill(&engine->execlists.tasklet);
> +		tasklet_kill(&engine->execlists.watchdog_tasklet);
>   
>   		/*
>   		 * We are committed now to parking the engines, make sure there
> diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
> index 58b6ff8453dc..bc10acb24d9a 100644
> --- a/drivers/gpu/drm/i915/intel_hangcheck.c
> +++ b/drivers/gpu/drm/i915/intel_hangcheck.c
> @@ -218,7 +218,8 @@ static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
>   
>   static void hangcheck_declare_hang(struct drm_i915_private *i915,
>   				   unsigned int hung,
> -				   unsigned int stuck)
> +				   unsigned int stuck,
> +				   unsigned int watchdog)
>   {
>   	struct intel_engine_cs *engine;
>   	char msg[80];
> @@ -231,13 +232,16 @@ static void hangcheck_declare_hang(struct drm_i915_private *i915,
>   	if (stuck != hung)
>   		hung &= ~stuck;
>   	len = scnprintf(msg, sizeof(msg),
> -			"%s on ", stuck == hung ? "no progress" : "hang");
> +			"%s on ", watchdog ? "watchdog timeout" :
> +				  stuck == hung ? "no progress" : "hang");
>   	for_each_engine_masked(engine, i915, hung, tmp)
>   		len += scnprintf(msg + len, sizeof(msg) - len,
>   				 "%s, ", engine->name);
>   	msg[len-2] = '\0';
>   
> -	return i915_handle_error(i915, hung, I915_ERROR_CAPTURE, "%s", msg);
> +	return i915_handle_error(i915, hung,
> +				 watchdog ? 0 : I915_ERROR_CAPTURE,
> +				 "%s", msg);
>   }
>   
>   /*
> @@ -255,7 +259,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>   			     gpu_error.hangcheck_work.work);
>   	struct intel_engine_cs *engine;
>   	enum intel_engine_id id;
> -	unsigned int hung = 0, stuck = 0, wedged = 0;
> +	unsigned int hung = 0, stuck = 0, wedged = 0, watchdog = 0;
>   
>   	if (!i915_modparams.enable_hangcheck)
>   		return;
> @@ -266,6 +270,9 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>   	if (i915_terminally_wedged(&dev_priv->gpu_error))
>   		return;
>   
> +	if (test_and_clear_bit(I915_RESET_WATCHDOG, &dev_priv->gpu_error.flags))
> +		watchdog = 1;
> +
>   	/* As enabling the GPU requires fairly extensive mmio access,
>   	 * periodically arm the mmio checker to see if we are triggering
>   	 * any invalid access.
> @@ -311,7 +318,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
>   	}
>   
>   	if (hung)
> -		hangcheck_declare_hang(dev_priv, hung, stuck);
> +		hangcheck_declare_hang(dev_priv, hung, stuck, watchdog);
>   
>   	/* Reset timer in case GPU hangs without another request being added */
>   	i915_queue_hangcheck(dev_priv);
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 9ca7dc7a6fa5..c38b239ab39e 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -2352,6 +2352,53 @@ static int gen8_emit_flush_render(struct i915_request *request,
>   	return 0;
>   }
>   
> +/* From GEN9 onwards, all engines use the same RING_CNTR format */
> +static inline u32 get_watchdog_disable(struct intel_engine_cs *engine)

I'd let the compiler decide on the inline or not.

> +{
> +	if (engine->id == RCS || INTEL_GEN(engine->i915) >= 9)
> +		return GEN8_WATCHDOG_DISABLE;
> +	else
> +		return GEN8_XCS_WATCHDOG_DISABLE;
> +}
> +
> +#define GEN8_WATCHDOG_1000US(dev_priv) watchdog_to_clock_counts(dev_priv, 1000)

Not sure macro is useful.

> +static void gen8_watchdog_irq_handler(unsigned long data)

gen8_watchdog_tasklet I guess.

> +{
> +	struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
> +	struct drm_i915_private *dev_priv = engine->i915;
> +	unsigned int hung = 0;
> +	u32 current_seqno=0;

Coding style.

> +	char msg[80];
> +	unsigned int tmp;
> +	int len;
> +
> +	/* Stop the counter to prevent further timeout interrupts */
> +	I915_WRITE_FW(RING_CNTR(engine->mmio_base), get_watchdog_disable(engine));

These registers do not need forcewake?

> +
> +	/* Read the heartbeat seqno once again to check if we are stuck? */
> +	current_seqno = intel_engine_get_hangcheck_seqno(engine);
> +
> +    if (current_seqno == engine->current_seqno) {
> +		hung |= engine->mask;
> +
> +		len = scnprintf(msg, sizeof(msg), "%s on ", "watchdog timeout");
> +		for_each_engine_masked(engine, dev_priv, hung, tmp)
> +			len += scnprintf(msg + len, sizeof(msg) - len,
> +					 "%s, ", engine->name);
> +		msg[len-2] = '\0';

Copy/paste from intel_hangcheck.c? Moving this to a common helper would be good.

> +
> +		i915_handle_error(dev_priv, hung, 0, "%s", msg);
> +
> +		/* Reset timer in case GPU hangs without another request being added */
> +		i915_queue_hangcheck(dev_priv);

Mis-indented block.

> +    }else{

Coding style.

> +		/* Re-start the counter, if really hung, it will expire again */
> +		I915_WRITE_FW(RING_THRESH(engine->mmio_base),
> +			      GEN8_WATCHDOG_1000US(dev_priv));
> +		I915_WRITE_FW(RING_CNTR(engine->mmio_base), GEN8_WATCHDOG_ENABLE);
> +    }
> +}
> +
>   /*
>    * Reserve space for 2 NOOPs at the end of each request to be
>    * used as a workaround for not being allowed to do lite
> @@ -2539,6 +2586,21 @@ logical_ring_default_irqs(struct intel_engine_cs *engine)
>   
>   	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
>   	engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
> +
> +	switch (engine->class) {
> +	default:
> +		/* BCS engine does not support hw watchdog */
> +		break;
> +	case RENDER_CLASS:
> +	case VIDEO_DECODE_CLASS:
> +		engine->irq_keep_mask |= GT_GEN8_WATCHDOG_INTERRUPT << shift;
> +		break;
> +	case VIDEO_ENHANCEMENT_CLASS:
> +		if (INTEL_GEN(engine->i915) >= 9)
> +			engine->irq_keep_mask |=
> +				GT_GEN8_WATCHDOG_INTERRUPT << shift;
> +		break;
> +	}
>   }
>   
>   static int
> @@ -2556,6 +2618,9 @@ logical_ring_setup(struct intel_engine_cs *engine)
>   	tasklet_init(&engine->execlists.tasklet,
>   		     execlists_submission_tasklet, (unsigned long)engine);
>   
> +	tasklet_init(&engine->execlists.watchdog_tasklet,
> +		     gen8_watchdog_irq_handler, (unsigned long)engine);
> +
>   	logical_ring_default_vfuncs(engine);
>   	logical_ring_default_irqs(engine);
>   
> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> index 465094e38d32..17250ba0246f 100644
> --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> @@ -122,6 +122,7 @@ struct intel_engine_hangcheck {
>   	u64 acthd;
>   	u32 last_seqno;
>   	u32 next_seqno;
> +	u32 watchdog;

Looks unused.

>   	unsigned long action_timestamp;
>   	struct intel_instdone instdone;
>   };
> @@ -222,6 +223,11 @@ struct intel_engine_execlists {
>   	 */
>   	struct tasklet_struct tasklet;
>   
> +	/**
> +	 * @watchdog_tasklet: stop counter and re-schedule hangcheck_work asap
> +	 */
> +	struct tasklet_struct watchdog_tasklet;
> +
>   	/**
>   	 * @default_priolist: priority list for I915_PRIORITY_NORMAL
>   	 */
> @@ -353,6 +359,7 @@ struct intel_engine_cs {
>   	unsigned int hw_id;
>   	unsigned int guc_id;
>   	unsigned long mask;
> +	u32 current_seqno;

I don't see where this is set in this patch?

And I'd recommend calling it watchdog_last_seqno or something along 
those lines so it is obvious it is not a fundamental part of the engine.

>   
>   	u8 uabi_class;
>   
> 

Regards,

Tvrtko
Santa, Carlos March 1, 2019, 1:51 a.m. UTC | #2
On Thu, 2019-02-28 at 17:38 +0000, Tvrtko Ursulin wrote:
> On 21/02/2019 02:58, Carlos Santa wrote:
> > From: Michel Thierry <michel.thierry@intel.com>
> > 
> > *** General ***
> > 
> > Watchdog timeout (or "media engine reset") is a feature that allows
> > userland applications to enable hang detection on individual batch
> > buffers.
> > The detection mechanism itself is mostly bound to the hardware and
> > the only
> > thing that the driver needs to do to support this form of hang
> > detection
> > is to implement the interrupt handling support as well as watchdog
> > command
> > emission before and after the emitted batch buffer start
> > instruction in the
> > ring buffer.
> > 
> > The principle of the hang detection mechanism is as follows:
> > 
> > 1. Once the decision has been made to enable watchdog timeout for a
> > particular batch buffer and the driver is in the process of
> > emitting the
> > batch buffer start instruction into the ring buffer it also emits a
> > watchdog timer start instruction before and a watchdog timer
> > cancellation
> > instruction after the batch buffer start instruction in the ring
> > buffer.
> > 
> > 2. Once the GPU execution reaches the watchdog timer start
> > instruction
> > the hardware watchdog counter is started by the hardware. The
> > counter
> > keeps counting until either reaching a previously configured
> > threshold
> > value or the timer cancellation instruction is executed.
> > 
> > 2a. If the counter reaches the threshold value the hardware fires a
> > watchdog interrupt that is picked up by the watchdog interrupt
> > handler.
> > This means that a hang has been detected and the driver needs to
> > deal with
> > it the same way it would deal with an engine hang detected by the
> > periodic
> > hang checker. The only difference between the two is that we
> > already blamed
> > the active request (to ensure an engine reset).
> > 
> > 2b. If the batch buffer completes and the execution reaches the
> > watchdog
> > cancellation instruction before the watchdog counter reaches its
> > threshold value the watchdog is cancelled and nothing more comes of
> > it.
> > No hang is detected.
> > 
> > Note about future interaction with preemption: Preemption could
> > happen
> > in a command sequence prior to watchdog counter getting disabled,
> > resulting in watchdog being triggered following preemption (e.g.
> > when
> > watchdog had been enabled in the low priority batch). The driver
> > will
> > need to explicitly disable the watchdog counter as part of the
> > preemption sequence.
> > 
> > *** This patch introduces: ***
> > 
> > 1. IRQ handler code for watchdog timeout allowing direct hang
> > recovery
> > based on hardware-driven hang detection, which then integrates
> > directly
> > with the hang recovery path. This is independent of having per-
> > engine reset
> > or just full gpu reset.
> > 
> > 2. Watchdog specific register information.
> > 
> > Currently the render engine and all available media engines support
> > watchdog timeout (VECS is only supported in GEN9). The
> > specifications allude
> > to the BCS engine being supported but that is currently not
> > supported by
> > this commit.
> > 
> > Note that the value to stop the counter is different between render
> > and
> > non-render engines in GEN8; GEN9 onwards it's the same.
> > 
> > v2: Move irq handler to tasklet, arm watchdog for a 2nd time to
> > check
> > against false-positives.
> > 
> > v3: Don't use high priority tasklet, use engine_last_submit while
> > checking for false-positives. From GEN9 onwards, the stop counter
> > bit is
> > the same for all engines.
> > 
> > v4: Remove unnecessary brackets, use current_seqno to mark the
> > request
> > as guilty in the hangcheck/capture code.
> > 
> > v5: Rebased after RESET_ENGINEs flag.
> > 
> > v6: Don't capture error state in case of watchdog timeout. The
> > capture
> > process is time consuming and this will align to what happens when
> > we
> > use GuC to handle the watchdog timeout. (Chris)
> > 
> > v7: Rebase.
> > 
> > v8: Rebase, use HZ to reschedule.
> > 
> > v9: Rebase, get forcewake domains in function (no longer in
> > execlists
> > struct).
> > 
> > v10: Rebase.
> > 
> > v11: Rebase,
> >       remove extra braces (Tvrtko),
> >       implement watchdog_to_clock_counts helper (Tvrtko),
> >       Move tasklet_kill(watchdog_tasklet) inside intel_engines
> > (Tvrtko),
> >       Use a global heartbeat seqno instead of engine seqno (Chris)
> >       Make all engine checks class-based (Tvrtko)
> > 
> > Cc: Antonio Argenziano <antonio.argenziano@intel.com>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
> > Signed-off-by: Michel Thierry <michel.thierry@intel.com>
> > Signed-off-by: Carlos Santa <carlos.santa@intel.com>
> > ---
> >   drivers/gpu/drm/i915/i915_drv.h         |  8 +++
> >   drivers/gpu/drm/i915/i915_gpu_error.h   |  4 ++
> >   drivers/gpu/drm/i915/i915_irq.c         | 12 ++++-
> >   drivers/gpu/drm/i915/i915_reg.h         |  6 +++
> >   drivers/gpu/drm/i915/intel_engine_cs.c  |  1 +
> >   drivers/gpu/drm/i915/intel_hangcheck.c  | 17 +++++--
> >   drivers/gpu/drm/i915/intel_lrc.c        | 65
> > +++++++++++++++++++++++++
> >   drivers/gpu/drm/i915/intel_ringbuffer.h |  7 +++
> >   8 files changed, 114 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h
> > b/drivers/gpu/drm/i915/i915_drv.h
> > index 63a008aebfcd..0fcb2df869a2 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -3120,6 +3120,14 @@ i915_gem_context_lookup(struct
> > drm_i915_file_private *file_priv, u32 id)
> >   	return ctx;
> >   }
> >   
> > +static inline u32
> > +watchdog_to_clock_counts(struct drm_i915_private *dev_priv, u64
> > value_in_us)
> > +{
> > +	u64 threshold = 0;
> > +
> > +	return threshold;
> > +}
> > +
> >   int i915_perf_open_ioctl(struct drm_device *dev, void *data,
> >   			 struct drm_file *file);
> >   int i915_perf_add_config_ioctl(struct drm_device *dev, void
> > *data,
> > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h
> > b/drivers/gpu/drm/i915/i915_gpu_error.h
> > index f408060e0667..bd1821c73ecd 100644
> > --- a/drivers/gpu/drm/i915/i915_gpu_error.h
> > +++ b/drivers/gpu/drm/i915/i915_gpu_error.h
> > @@ -233,6 +233,9 @@ struct i915_gpu_error {
> >   	 * i915_mutex_lock_interruptible()?). I915_RESET_BACKOFF serves
> > a
> >   	 * secondary role in preventing two concurrent global reset
> > attempts.
> >   	 *
> > +	 * #I915_RESET_WATCHDOG - When hw detects a hang before us, we
> > can use
> > +	 * I915_RESET_WATCHDOG to report the hang detection cause
> > accurately.
> > +	 *
> >   	 * #I915_RESET_ENGINE[num_engines] - Since the driver doesn't
> > need to
> >   	 * acquire the struct_mutex to reset an engine, we need an
> > explicit
> >   	 * flag to prevent two concurrent reset attempts in the same
> > engine.
> > @@ -248,6 +251,7 @@ struct i915_gpu_error {
> >   #define I915_RESET_BACKOFF	0
> >   #define I915_RESET_MODESET	1
> >   #define I915_RESET_ENGINE	2
> > +#define I915_RESET_WATCHDOG	3
> >   #define I915_WEDGED		(BITS_PER_LONG - 1)
> >   
> >   	/** Number of times an engine has been reset */
> > diff --git a/drivers/gpu/drm/i915/i915_irq.c
> > b/drivers/gpu/drm/i915/i915_irq.c
> > index 4b23b2fd1fad..e2a1a07b0f2c 100644
> > --- a/drivers/gpu/drm/i915/i915_irq.c
> > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > @@ -1456,6 +1456,9 @@ gen8_cs_irq_handler(struct intel_engine_cs
> > *engine, u32 iir)
> >   
> >   	if (tasklet)
> >   		tasklet_hi_schedule(&engine->execlists.tasklet);
> > +
> > +	if (iir & GT_GEN8_WATCHDOG_INTERRUPT)
> > +		tasklet_schedule(&engine->execlists.watchdog_tasklet);
> >   }
> >   
> >   static void gen8_gt_irq_ack(struct drm_i915_private *i915,
> > @@ -3883,17 +3886,24 @@ static void gen8_gt_irq_postinstall(struct
> > drm_i915_private *dev_priv)
> >   	u32 gt_interrupts[] = {
> >   		GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
> >   			GT_CONTEXT_SWITCH_INTERRUPT <<
> > GEN8_RCS_IRQ_SHIFT |
> > +			GT_GEN8_WATCHDOG_INTERRUPT <<
> > GEN8_RCS_IRQ_SHIFT |
> >   			GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT
> > |
> >   			GT_CONTEXT_SWITCH_INTERRUPT <<
> > GEN8_BCS_IRQ_SHIFT,
> >   		GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
> >   			GT_CONTEXT_SWITCH_INTERRUPT <<
> > GEN8_VCS1_IRQ_SHIFT |
> > +			GT_GEN8_WATCHDOG_INTERRUPT <<
> > GEN8_VCS1_IRQ_SHIFT |
> >   			GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT
> > |
> > -			GT_CONTEXT_SWITCH_INTERRUPT <<
> > GEN8_VCS2_IRQ_SHIFT,
> > +			GT_CONTEXT_SWITCH_INTERRUPT <<
> > GEN8_VCS2_IRQ_SHIFT |
> > +			GT_GEN8_WATCHDOG_INTERRUPT <<
> > GEN8_VCS2_IRQ_SHIFT,
> >   		0,
> >   		GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT |
> >   			GT_CONTEXT_SWITCH_INTERRUPT <<
> > GEN8_VECS_IRQ_SHIFT
> >   		};
> >   
> > +	/* VECS watchdog is only available in skl+ */
> > +	if (INTEL_GEN(dev_priv) >= 9)
> > +		gt_interrupts[3] |= GT_GEN8_WATCHDOG_INTERRUPT;
> > +
> >   	dev_priv->pm_ier = 0x0;
> >   	dev_priv->pm_imr = ~dev_priv->pm_ier;
> >   	GEN8_IRQ_INIT_NDX(GT, 0, ~gt_interrupts[0], gt_interrupts[0]);
> > diff --git a/drivers/gpu/drm/i915/i915_reg.h
> > b/drivers/gpu/drm/i915/i915_reg.h
> > index 1eca166d95bb..a0e101bbcbce 100644
> > --- a/drivers/gpu/drm/i915/i915_reg.h
> > +++ b/drivers/gpu/drm/i915/i915_reg.h
> > @@ -2335,6 +2335,11 @@ enum i915_power_well_id {
> >   #define RING_START(base)	_MMIO((base) + 0x38)
> >   #define RING_CTL(base)		_MMIO((base) + 0x3c)
> >   #define   RING_CTL_SIZE(size)	((size) - PAGE_SIZE) /* in
> > bytes -> pages */
> > +#define RING_CNTR(base)		_MMIO((base) + 0x178)
> > +#define   GEN8_WATCHDOG_ENABLE		0
> > +#define   GEN8_WATCHDOG_DISABLE		1
> > +#define   GEN8_XCS_WATCHDOG_DISABLE	0xFFFFFFFF /* GEN8 &
> > non-render only */
> > +#define RING_THRESH(base)	_MMIO((base) + 0x17C)
> >   #define RING_SYNC_0(base)	_MMIO((base) + 0x40)
> >   #define RING_SYNC_1(base)	_MMIO((base) + 0x44)
> >   #define RING_SYNC_2(base)	_MMIO((base) + 0x48)
> > @@ -2894,6 +2899,7 @@ enum i915_power_well_id {
> >   #define GT_BSD_USER_INTERRUPT			(1 << 12)
> >   #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1	(1 << 11) /*
> > hsw+; rsvd on snb, ivb, vlv */
> >   #define GT_CONTEXT_SWITCH_INTERRUPT		(1 <<  8)
> > +#define GT_GEN8_WATCHDOG_INTERRUPT		(1 <<  6) /* gen8+ */
> >   #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT	(1 <<  5) /*
> > !snb */
> >   #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT	(1 <<  4)
> >   #define GT_RENDER_CS_MASTER_ERROR_INTERRUPT	(1 <<  3)
> > diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c
> > b/drivers/gpu/drm/i915/intel_engine_cs.c
> > index 7ae753358a6d..74f563d23cc8 100644
> > --- a/drivers/gpu/drm/i915/intel_engine_cs.c
> > +++ b/drivers/gpu/drm/i915/intel_engine_cs.c
> > @@ -1106,6 +1106,7 @@ void intel_engines_park(struct
> > drm_i915_private *i915)
> >   		/* Flush the residual irq tasklets first. */
> >   		intel_engine_disarm_breadcrumbs(engine);
> >   		tasklet_kill(&engine->execlists.tasklet);
> > +		tasklet_kill(&engine->execlists.watchdog_tasklet);
> >   
> >   		/*
> >   		 * We are committed now to parking the engines, make
> > sure there
> > diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c
> > b/drivers/gpu/drm/i915/intel_hangcheck.c
> > index 58b6ff8453dc..bc10acb24d9a 100644
> > --- a/drivers/gpu/drm/i915/intel_hangcheck.c
> > +++ b/drivers/gpu/drm/i915/intel_hangcheck.c
> > @@ -218,7 +218,8 @@ static void hangcheck_accumulate_sample(struct
> > intel_engine_cs *engine,
> >   
> >   static void hangcheck_declare_hang(struct drm_i915_private *i915,
> >   				   unsigned int hung,
> > -				   unsigned int stuck)
> > +				   unsigned int stuck,
> > +				   unsigned int watchdog)
> >   {
> >   	struct intel_engine_cs *engine;
> >   	char msg[80];
> > @@ -231,13 +232,16 @@ static void hangcheck_declare_hang(struct
> > drm_i915_private *i915,
> >   	if (stuck != hung)
> >   		hung &= ~stuck;
> >   	len = scnprintf(msg, sizeof(msg),
> > -			"%s on ", stuck == hung ? "no progress" :
> > "hang");
> > +			"%s on ", watchdog ? "watchdog timeout" :
> > +				  stuck == hung ? "no progress" :
> > "hang");
> >   	for_each_engine_masked(engine, i915, hung, tmp)
> >   		len += scnprintf(msg + len, sizeof(msg) - len,
> >   				 "%s, ", engine->name);
> >   	msg[len-2] = '\0';
> >   
> > -	return i915_handle_error(i915, hung, I915_ERROR_CAPTURE, "%s",
> > msg);
> > +	return i915_handle_error(i915, hung,
> > +				 watchdog ? 0 : I915_ERROR_CAPTURE,
> > +				 "%s", msg);
> >   }
> >   
> >   /*
> > @@ -255,7 +259,7 @@ static void i915_hangcheck_elapsed(struct
> > work_struct *work)
> >   			     gpu_error.hangcheck_work.work);
> >   	struct intel_engine_cs *engine;
> >   	enum intel_engine_id id;
> > -	unsigned int hung = 0, stuck = 0, wedged = 0;
> > +	unsigned int hung = 0, stuck = 0, wedged = 0, watchdog = 0;
> >   
> >   	if (!i915_modparams.enable_hangcheck)
> >   		return;
> > @@ -266,6 +270,9 @@ static void i915_hangcheck_elapsed(struct
> > work_struct *work)
> >   	if (i915_terminally_wedged(&dev_priv->gpu_error))
> >   		return;
> >   
> > +	if (test_and_clear_bit(I915_RESET_WATCHDOG, &dev_priv-
> > >gpu_error.flags))
> > +		watchdog = 1;
> > +
> >   	/* As enabling the GPU requires fairly extensive mmio access,
> >   	 * periodically arm the mmio checker to see if we are
> > triggering
> >   	 * any invalid access.
> > @@ -311,7 +318,7 @@ static void i915_hangcheck_elapsed(struct
> > work_struct *work)
> >   	}
> >   
> >   	if (hung)
> > -		hangcheck_declare_hang(dev_priv, hung, stuck);
> > +		hangcheck_declare_hang(dev_priv, hung, stuck,
> > watchdog);
> >   
> >   	/* Reset timer in case GPU hangs without another request being
> > added */
> >   	i915_queue_hangcheck(dev_priv);
> > diff --git a/drivers/gpu/drm/i915/intel_lrc.c
> > b/drivers/gpu/drm/i915/intel_lrc.c
> > index 9ca7dc7a6fa5..c38b239ab39e 100644
> > --- a/drivers/gpu/drm/i915/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/intel_lrc.c
> > @@ -2352,6 +2352,53 @@ static int gen8_emit_flush_render(struct
> > i915_request *request,
> >   	return 0;
> >   }
> >   
> > +/* From GEN9 onwards, all engines use the same RING_CNTR format */
> > +static inline u32 get_watchdog_disable(struct intel_engine_cs
> > *engine)
> 
> I'd let the compiler decide on the inline or not.
> 
> > +{
> > +	if (engine->id == RCS || INTEL_GEN(engine->i915) >= 9)
> > +		return GEN8_WATCHDOG_DISABLE;
> > +	else
> > +		return GEN8_XCS_WATCHDOG_DISABLE;
> > +}
> > +
> > +#define GEN8_WATCHDOG_1000US(dev_priv)
> > watchdog_to_clock_counts(dev_priv, 1000)
> 
> Not sure macro is useful.
> 
> > +static void gen8_watchdog_irq_handler(unsigned long data)
> 
> gen8_watchdog_tasklet I guess.
> 
> > +{
> > +	struct intel_engine_cs *engine = (struct intel_engine_cs
> > *)data;
> > +	struct drm_i915_private *dev_priv = engine->i915;
> > +	unsigned int hung = 0;
> > +	u32 current_seqno=0;
> 
> Coding style.
> 
> > +	char msg[80];
> > +	unsigned int tmp;
> > +	int len;
> > +
> > +	/* Stop the counter to prevent further timeout interrupts */
> > +	I915_WRITE_FW(RING_CNTR(engine->mmio_base),
> > get_watchdog_disable(engine));
> 
> These registers do not need forcewake?
> 
> > +
> > +	/* Read the heartbeat seqno once again to check if we are
> > stuck? */
> > +	current_seqno = intel_engine_get_hangcheck_seqno(engine);
> > +
> > +    if (current_seqno == engine->current_seqno) {
> > +		hung |= engine->mask;
> > +
> > +		len = scnprintf(msg, sizeof(msg), "%s on ", "watchdog
> > timeout");
> > +		for_each_engine_masked(engine, dev_priv, hung, tmp)
> > +			len += scnprintf(msg + len, sizeof(msg) - len,
> > +					 "%s, ", engine->name);
> > +		msg[len-2] = '\0';
> 
> Copy/paste from intel_hangcheck.c ? Moving to common helper would be
> good.
> 
> > +
> > +		i915_handle_error(dev_priv, hung, 0, "%s", msg);
> > +
> > +		/* Reset timer in case GPU hangs without another
> > request being added */
> > +		i915_queue_hangcheck(dev_priv);
> 
> Mis-indented block.
> 
> > +    }else{
> 
> Coding style.
> 
> > +		/* Re-start the counter, if really hung, it will expire
> > again */
> > +		I915_WRITE_FW(RING_THRESH(engine->mmio_base),
> > +			      GEN8_WATCHDOG_1000US(dev_priv));
> > +		I915_WRITE_FW(RING_CNTR(engine->mmio_base),
> > GEN8_WATCHDOG_ENABLE);
> > +    }
> > +}
> > +
> >   /*
> >    * Reserve space for 2 NOOPs at the end of each request to be
> >    * used as a workaround for not being allowed to do lite
> > @@ -2539,6 +2586,21 @@ logical_ring_default_irqs(struct
> > intel_engine_cs *engine)
> >   
> >   	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
> >   	engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
> > +
> > +	switch (engine->class) {
> > +	default:
> > +		/* BCS engine does not support hw watchdog */
> > +		break;
> > +	case RENDER_CLASS:
> > +	case VIDEO_DECODE_CLASS:
> > +		engine->irq_keep_mask |= GT_GEN8_WATCHDOG_INTERRUPT <<
> > shift;
> > +		break;
> > +	case VIDEO_ENHANCEMENT_CLASS:
> > +		if (INTEL_GEN(engine->i915) >= 9)
> > +			engine->irq_keep_mask |=
> > +				GT_GEN8_WATCHDOG_INTERRUPT << shift;
> > +		break;
> > +	}
> >   }
> >   
> >   static int
> > @@ -2556,6 +2618,9 @@ logical_ring_setup(struct intel_engine_cs
> > *engine)
> >   	tasklet_init(&engine->execlists.tasklet,
> >   		     execlists_submission_tasklet, (unsigned
> > long)engine);
> >   
> > +	tasklet_init(&engine->execlists.watchdog_tasklet,
> > +		     gen8_watchdog_irq_handler, (unsigned long)engine);
> > +
> >   	logical_ring_default_vfuncs(engine);
> >   	logical_ring_default_irqs(engine);
> >   
> > diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h
> > b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > index 465094e38d32..17250ba0246f 100644
> > --- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> > +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> > @@ -122,6 +122,7 @@ struct intel_engine_hangcheck {
> >   	u64 acthd;
> >   	u32 last_seqno;
> >   	u32 next_seqno;
> > +	u32 watchdog;
> 
> Looks unused.
> 
> >   	unsigned long action_timestamp;
> >   	struct intel_instdone instdone;
> >   };
> > @@ -222,6 +223,11 @@ struct intel_engine_execlists {
> >   	 */
> >   	struct tasklet_struct tasklet;
> >   
> > +	/**
> > +	 * @watchdog_tasklet: stop counter and re-schedule
> > hangcheck_work asap
> > +	 */
> > +	struct tasklet_struct watchdog_tasklet;
> > +
> >   	/**
> >   	 * @default_priolist: priority list for I915_PRIORITY_NORMAL
> >   	 */
> > @@ -353,6 +359,7 @@ struct intel_engine_cs {
> >   	unsigned int hw_id;
> >   	unsigned int guc_id;
> >   	unsigned long mask;
> > +	u32 current_seqno;
> 
> I don't see where this is set in this patch?

It was declared here but assigned in patch #3 of the series, right
after the watchdog timer is started. The idea was to store the seqno we
are currently working on (before we hang) and then cross-check it once
again right before we reset inside the irq handler for the watchdog. 

/* Read the heartbeat seqno once again to check if we are stuck? */
current_seqno = intel_engine_get_hangcheck_seqno(engine);

if (current_seqno == engine->current_seqno) {

> 
> And I'd recommend calling it watchdog_last_seqno or something along 
> those lines so it is obvious it is not a fundamental part of the
> engine.
> 
> >   
> >   	u8 uabi_class;
> >   
> > 
> 
> Regards,
> 
> Tvrtko
Chris Wilson March 1, 2019, 9:36 a.m. UTC | #3
Quoting Carlos Santa (2019-02-21 02:58:16)
> +#define GEN8_WATCHDOG_1000US(dev_priv) watchdog_to_clock_counts(dev_priv, 1000)
> +static void gen8_watchdog_irq_handler(unsigned long data)
> +{
> +       struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
> +       struct drm_i915_private *dev_priv = engine->i915;
> +       unsigned int hung = 0;
> +       u32 current_seqno=0;
> +       char msg[80];
> +       unsigned int tmp;
> +       int len;
> +
> +       /* Stop the counter to prevent further timeout interrupts */
> +       I915_WRITE_FW(RING_CNTR(engine->mmio_base), get_watchdog_disable(engine));
> +
> +       /* Read the heartbeat seqno once again to check if we are stuck? */
> +       current_seqno = intel_engine_get_hangcheck_seqno(engine);

I have said this before, but this doesn't exist either, it's just a
temporary glitch in the matrix.

> +    if (current_seqno == engine->current_seqno) {
> +               hung |= engine->mask;
> +
> +               len = scnprintf(msg, sizeof(msg), "%s on ", "watchdog timeout");
> +               for_each_engine_masked(engine, dev_priv, hung, tmp)
> +                       len += scnprintf(msg + len, sizeof(msg) - len,
> +                                        "%s, ", engine->name);
> +               msg[len-2] = '\0';
> +
> +               i915_handle_error(dev_priv, hung, 0, "%s", msg);
> +
> +               /* Reset timer in case GPU hangs without another request being added */
> +               i915_queue_hangcheck(dev_priv);

You still haven't explained why we are not just resetting the engine
immediately. Have you looked at the preempt-timeout patches that need to
do the same thing from timer-irq context?

Resending the same old stuff over and over again is just exasperating.
-Chris
Santa, Carlos March 2, 2019, 2:08 a.m. UTC | #4
On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
> Quoting Carlos Santa (2019-02-21 02:58:16)
> > +#define GEN8_WATCHDOG_1000US(dev_priv)
> > watchdog_to_clock_counts(dev_priv, 1000)
> > +static void gen8_watchdog_irq_handler(unsigned long data)
> > +{
> > +       struct intel_engine_cs *engine = (struct intel_engine_cs
> > *)data;
> > +       struct drm_i915_private *dev_priv = engine->i915;
> > +       unsigned int hung = 0;
> > +       u32 current_seqno=0;
> > +       char msg[80];
> > +       unsigned int tmp;
> > +       int len;
> > +
> > +       /* Stop the counter to prevent further timeout interrupts
> > */
> > +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
> > get_watchdog_disable(engine));
> > +
> > +       /* Read the heartbeat seqno once again to check if we are
> > stuck? */
> > +       current_seqno = intel_engine_get_hangcheck_seqno(engine);
> 
> I have said this before, but this doesn't exist either, it's just a
> temporary glitch in the matrix.

That was my only way to check for the "guilty" seqno right before
resetting during smoke testing... Will reach out again before sending a
new rev to cross-check the new approach you mentioned today.

> 
> > +    if (current_seqno == engine->current_seqno) {
> > +               hung |= engine->mask;
> > +
> > +               len = scnprintf(msg, sizeof(msg), "%s on ",
> > "watchdog timeout");
> > +               for_each_engine_masked(engine, dev_priv, hung, tmp)
> > +                       len += scnprintf(msg + len, sizeof(msg) -
> > len,
> > +                                        "%s, ", engine->name);
> > +               msg[len-2] = '\0';
> > +
> > +               i915_handle_error(dev_priv, hung, 0, "%s", msg);
> > +
> > +               /* Reset timer in case GPU hangs without another
> > request being added */
> > +               i915_queue_hangcheck(dev_priv);
> 
> You still haven't explained why we are not just resetting the engine
> immediately. Have you looked at the preempt-timeout patches that need
> to
> do the same thing from timer-irq context?
> 
> Resending the same old stuff over and over again is just
> exasperating.
> -Chris

Oops, I had the wrong assumption, as I honestly thought removing the
workqueue from v3 would allow for an immediate reset. Thanks for the
feedback on the preempt-timeout series... will rework this. 

Carlos
Santa, Carlos March 8, 2019, 3:16 a.m. UTC | #5
On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
> > 
> Quoting Carlos Santa (2019-02-21 02:58:16)
> > +#define GEN8_WATCHDOG_1000US(dev_priv)
> > watchdog_to_clock_counts(dev_priv, 1000)
> > +static void gen8_watchdog_irq_handler(unsigned long data)
> > +{
> > +       struct intel_engine_cs *engine = (struct intel_engine_cs
> > *)data;
> > +       struct drm_i915_private *dev_priv = engine->i915;
> > +       unsigned int hung = 0;
> > +       u32 current_seqno=0;
> > +       char msg[80];
> > +       unsigned int tmp;
> > +       int len;
> > +
> > +       /* Stop the counter to prevent further timeout interrupts
> > */
> > +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
> > get_watchdog_disable(engine));
> > +
> > +       /* Read the heartbeat seqno once again to check if we are
> > stuck? */
> > +       current_seqno = intel_engine_get_hangcheck_seqno(engine);
> 
> I have said this before, but this doesn't exist either, it's just a
> temporary glitch in the matrix.
> 

Chris, Tvrtko, I need some guidance on how to find the guilty seqno
during a hang, can you please advise here what to do? 

Thanks,
Carlos
Tvrtko Ursulin March 11, 2019, 10:39 a.m. UTC | #6
On 08/03/2019 03:16, Carlos Santa wrote:
> On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
>>>
>> Quoting Carlos Santa (2019-02-21 02:58:16)
>>> +#define GEN8_WATCHDOG_1000US(dev_priv)
>>> watchdog_to_clock_counts(dev_priv, 1000)
>>> +static void gen8_watchdog_irq_handler(unsigned long data)
>>> +{
>>> +       struct intel_engine_cs *engine = (struct intel_engine_cs
>>> *)data;
>>> +       struct drm_i915_private *dev_priv = engine->i915;
>>> +       unsigned int hung = 0;
>>> +       u32 current_seqno=0;
>>> +       char msg[80];
>>> +       unsigned int tmp;
>>> +       int len;
>>> +
>>> +       /* Stop the counter to prevent further timeout interrupts
>>> */
>>> +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
>>> get_watchdog_disable(engine));
>>> +
>>> +       /* Read the heartbeat seqno once again to check if we are
>>> stuck? */
>>> +       current_seqno = intel_engine_get_hangcheck_seqno(engine);
>>
>> I have said this before, but this doesn't exist either, it's just a
>> temporary glitch in the matrix.
>>
> 
> Chris, Tvrtko, I need some guidance on how to find the guilty seqno
> during a hang, can you please advise here what to do?

When an interrupt fires you need to ascertain whether the same request 
which enabled the watchdog is running, correct?

So I think you would need this, with a disclaimer that I haven't thought 
about the details really:

1. Take a reference to timeline hwsp when setting up the watchdog for a 
request.

2. Store the initial seqno associated with this request.

3. Force enable user interrupts.

4. When timeout fires, inspect the HWSP seqno to see if the request 
completed or not.

5. Reset the engine if not completed.

6. Put the timeline/hwsp reference.

If the user interrupt fires with the request completed cancel the above 
operations.

There could be an inherent race between inspecting the seqno and 
deciding to reset. Not sure at the moment what to do. Maybe just call it 
bad luck?
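
Step 4 of this plan hinges on a wraparound-safe seqno comparison. A minimal standalone sketch of that check, in the style of the kernel's i915_seqno_passed() (the helper names here are illustrative, not the actual driver API):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Wraparound-safe "has seq1 passed seq2?" check. Casting the
 * difference to a signed type keeps the comparison correct when the
 * 32-bit counter wraps around.
 */
static bool seqno_passed(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) >= 0;
}

/*
 * Step 4 above: given the seqno sampled from the HWSP when the
 * timeout fired and the seqno the watchdog was armed with, decide
 * whether the request is still running and a reset is warranted.
 */
static bool watchdog_should_reset(uint32_t hwsp_seqno, uint32_t armed_seqno)
{
	return !seqno_passed(hwsp_seqno, armed_seqno);
}
```

The signed-difference trick is why a plain `hwsp_seqno >= armed_seqno` would be wrong near the 32-bit wrap point.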

I also think for the software implementation you need to force no 
request coalescing for contexts with timeout set. Because you want to 
have 100% defined borders for request in and out - since the timeout is 
defined per request.

In this case you don't need the user interrupt for the trailing edge 
signal but can use context complete. Maybe putting hooks into 
context_in/out in intel_lrc.c would work under these circumstances.

Also if preempted you need to cancel the timer setup and store elapsed 
execution time.
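
The bookkeeping described above — cancel the timer on preemption and bank the elapsed execution time — amounts to tracking a remaining budget per context. A userspace sketch of that accounting (hypothetical names and a flat microsecond budget, not the driver's actual state):

```c
#include <stdint.h>

/* Per-context watchdog budget tracking (illustrative only). */
struct watchdog_budget {
	uint64_t budget_us;   /* total timeout requested for the context */
	uint64_t consumed_us; /* execution time already spent on the engine */
};

/* On context switch in: how long to arm the counter for this slice. */
static uint64_t watchdog_arm_us(const struct watchdog_budget *wb)
{
	return wb->budget_us > wb->consumed_us ?
	       wb->budget_us - wb->consumed_us : 0;
}

/* On preemption (switch out): bank the time this slice consumed. */
static void watchdog_preempted(struct watchdog_budget *wb, uint64_t ran_us)
{
	wb->consumed_us += ran_us;
}
```

With this shape, each re-arm after a preemption uses only the remaining budget, so a context cannot stretch its timeout by being preempted repeatedly.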

Or it may make sense to just disable preemption for these contexts. 
Otherwise there is no point in trying to mandate the timeout?

But it is also kind of bad since non-privileged contexts can make 
themselves non-preemptable by setting the watchdog timeout.

Maybe as a compromise we need to automatically apply an elevated 
priority level, but not as high to be completely non-preemptable. Sounds 
like a hard question.

Regards,

Tvrtko
Santa, Carlos March 18, 2019, 12:15 a.m. UTC | #7
On Mon, 2019-03-11 at 10:39 +0000, Tvrtko Ursulin wrote:
> On 08/03/2019 03:16, Carlos Santa wrote:
> > On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
> > > > 
> > > 
> > > Quoting Carlos Santa (2019-02-21 02:58:16)
> > > > +#define GEN8_WATCHDOG_1000US(dev_priv)
> > > > watchdog_to_clock_counts(dev_priv, 1000)
> > > > +static void gen8_watchdog_irq_handler(unsigned long data)
> > > > +{
> > > > +       struct intel_engine_cs *engine = (struct
> > > > intel_engine_cs
> > > > *)data;
> > > > +       struct drm_i915_private *dev_priv = engine->i915;
> > > > +       unsigned int hung = 0;
> > > > +       u32 current_seqno=0;
> > > > +       char msg[80];
> > > > +       unsigned int tmp;
> > > > +       int len;
> > > > +
> > > > +       /* Stop the counter to prevent further timeout
> > > > interrupts
> > > > */
> > > > +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
> > > > get_watchdog_disable(engine));
> > > > +
> > > > +       /* Read the heartbeat seqno once again to check if we
> > > > are
> > > > stuck? */
> > > > +       current_seqno =
> > > > intel_engine_get_hangcheck_seqno(engine);
> > > 
> > > I have said this before, but this doesn't exist either, it's just
> > > a
> > > temporary glitch in the matrix.
> > > 
> > 
> > Chris, Tvrtko, I need some guidance on how to find the guilty seqno
> > during a hang, can you please advise here what to do?
> 
> When an interrupt fires you need to ascertain whether the same
> request 
> which enabled the watchdog is running, correct?
> 
> So I think you would need this, with a disclaimer that I haven't
> thought 
> about the details really:
> 
> 1. Take a reference to timeline hwsp when setting up the watchdog for
> a 
> request.
> 
> 2. Store the initial seqno associated with this request.
> 
> 3. Force enable user interrupts.
> 
> 4. When timeout fires, inspect the HWSP seqno to see if the request 
> completed or not.
> 
> 5. Reset the engine if not completed.
> 
> 6. Put the timeline/hwsp reference.


static int gen8_emit_bb_start(struct i915_request *rq,
			      u64 offset, u32 len,
			      const unsigned int flags)
{
	struct i915_timeline *tl;
	u32 seqno;

	if (enable_watchdog) {
		/* Start watchdog timer */
		cs = gen8_emit_start_watchdog(rq, cs);
		tl = ce->ring->timeline;
		i915_timeline_get_seqno(tl, rq, &seqno);
		/* Store initial hwsp seqno associated with this request */
		engine->watchdog_hwsp_seqno = tl->hwsp_seqno;
	}

}

static void gen8_watchdog_tasklet(unsigned long data)
{
	struct i915_request *rq;

	rq = intel_engine_find_active_request(engine);

	/* Inspect the watchdog seqno once again for completion? */
	if (!i915_seqno_passed(engine->watchdog_hwsp_seqno, rq->fence.seqno)) {
		/* Reset engine */
	}
}

Tvrtko, is the above acceptable to inspect whether the seqno has
completed?

I noticed there's a helper function i915_request_completed(struct
i915_request *rq) but it will require me to modify it in order to pass
2 different seqnos.

Regards,
Carlos
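
The decision the sketch above is trying to make — reset only if the armed request has not progressed — can be reduced to a self-contained helper mirroring the v4 tasklet's compare (hypothetical names; the real code would sample the HWSP and call the engine reset path):

```c
#include <stdint.h>

/* Possible outcomes of the watchdog tasklet (illustrative sketch). */
enum watchdog_action {
	WATCHDOG_RESET, /* seqno unchanged: request stuck, declare a hang */
	WATCHDOG_REARM, /* seqno advanced: restart the hardware counter */
};

/*
 * Compare the seqno sampled when the timeout fired against the seqno
 * sampled when the watchdog was armed. An unchanged seqno means no
 * forward progress was made within the timeout window.
 */
static enum watchdog_action
watchdog_decide(uint32_t seqno_at_arm, uint32_t seqno_now)
{
	return seqno_now == seqno_at_arm ? WATCHDOG_RESET : WATCHDOG_REARM;
}
```

Note this equality check is exactly what the review discussion pushes back on: it only says the counter did not move, not which request is at fault, and it still races with a completion that lands between the sample and the reset.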

> 
> If the user interrupt fires with the request completed cancel the
> above 
> operations.
> 
> There could be an inherent race between inspecting the seqno and 
> deciding to reset. Not sure at the moment what to do. Maybe just call
> it 
> bad luck?
> 
> I also think for the software implementation you need to force no 
> request coalescing for contexts with timeout set. Because you want
> to 
> have 100% defined borders for request in and out - since the timeout
> is 
> defined per request.
> 
> In this case you don't need the user interrupt for the trailing edge 
> signal but can use context complete. Maybe putting hooks into 
> context_in/out in intel_lrc.c would work under these circumstances.
> 
> Also if preempted you need to cancel the timer setup and store
> elapsed 
> execution time.
> 
> Or it may make sense to just disable preemption for these contexts. 
> Otherwise there is no point in trying to mandate the timeout?
> 
> But it is also kind of bad since non-privileged contexts can make 
> themselves non-preemptable by setting the watchdog timeout.
> 
> Maybe as a compromise we need to automatically apply an elevated 
> priority level, but not as high to be completely non-preemptable.
> Sounds 
> like a hard question.
> 
> Regards,
> 
> Tvrtko
Tvrtko Ursulin March 19, 2019, 12:39 p.m. UTC | #8
On 18/03/2019 00:15, Carlos Santa wrote:
> On Mon, 2019-03-11 at 10:39 +0000, Tvrtko Ursulin wrote:
>> On 08/03/2019 03:16, Carlos Santa wrote:
>>> On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
>>>>>
>>>>
>>>> Quoting Carlos Santa (2019-02-21 02:58:16)
>>>>> +#define GEN8_WATCHDOG_1000US(dev_priv)
>>>>> watchdog_to_clock_counts(dev_priv, 1000)
>>>>> +static void gen8_watchdog_irq_handler(unsigned long data)
>>>>> +{
>>>>> +       struct intel_engine_cs *engine = (struct
>>>>> intel_engine_cs
>>>>> *)data;
>>>>> +       struct drm_i915_private *dev_priv = engine->i915;
>>>>> +       unsigned int hung = 0;
>>>>> +       u32 current_seqno=0;
>>>>> +       char msg[80];
>>>>> +       unsigned int tmp;
>>>>> +       int len;
>>>>> +
>>>>> +       /* Stop the counter to prevent further timeout
>>>>> interrupts
>>>>> */
>>>>> +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
>>>>> get_watchdog_disable(engine));
>>>>> +
>>>>> +       /* Read the heartbeat seqno once again to check if we
>>>>> are
>>>>> stuck? */
>>>>> +       current_seqno =
>>>>> intel_engine_get_hangcheck_seqno(engine);
>>>>
>>>> I have said this before, but this doesn't exist either, it's just
>>>> a
>>>> temporary glitch in the matrix.
>>>>
>>>
>>> Chris, Tvrtko, I need some guidance on how to find the guilty seqno
>>> during a hang, can you please advise here what to do?
>>
>> When an interrupt fires you need to ascertain whether the same
>> request
>> which enabled the watchdog is running, correct?
>>
>> So I think you would need this, with a disclaimer that I haven't
>> thought
>> about the details really:
>>
>> 1. Take a reference to timeline hwsp when setting up the watchdog for
>> a
>> request.
>>
>> 2. Store the initial seqno associated with this request.
>>
>> 3. Force enable user interrupts.
>>
>> 4. When timeout fires, inspect the HWSP seqno to see if the request
>> completed or not.
>>
>> 5. Reset the engine if not completed.
>>
>> 6. Put the timeline/hwsp reference.
> 
> 
> static int gen8_emit_bb_start(struct i915_request *rq,
> 							u64 offset, u32
> len,
> 							const unsigned
> int flags)
> {
> 	struct i915_timeline *tl;
> 	u32 seqno;
> 
> 	if (enable_watchdog) {
> 		/* Start watchdog timer */
> 		cs = gen8_emit_start_watchdog(rq, cs);
> 		tl = ce->ring->timeline;
> 		i915_timeline_get_seqno(tl, rq, &seqno);
> 		/* Store initial hwsp seqno associated with this request */
> 		engine->watchdog_hwsp_seqno = tl->hwsp_seqno;

You should not need to allocate a new seqno, and having something 
stored per engine does not make it clear how you will solve out-of-order completion.

Maybe you just set up the timer, then lets see below..

Also, are you not trying to do the software implementation to start with?

> 	}
> 
> }
> 
> static void gen8_watchdog_tasklet(unsigned long data)
> {
> 		struct i915_request *rq;
> 
> 		rq = intel_engine_find_active_request(engine);
> 
> 		/* Inspect the watchdog seqno once again for
> completion? */
> 		if (!i915_seqno_passed(engine->watchdog_hwsp_seqno, rq-
>> fence.seqno)) {
> 			//Reset Engine
> 		}
> }

What happens if you simply reset without checking anything? You know hw 
timer wouldn't have fired if the context wasn't running, correct?

(Ignoring the race condition between interrupt raised -> hw interrupt 
delivered -> serviced -> tasklet scheduled -> tasklet running. Which may 
mean request has completed in the meantime and you reset the engine for 
nothing. But this is probably not 100% solvable.)

Regards,

Tvrtko

> Tvrtko, is the above acceptable to inspect whether the seqno has
> completed?
> 
> I noticed there's a helper function i915_request_completed(struct
> i915_request *rq) but it will require me to modify it in order to pass
> 2 different seqnos.
> 
> Regards,
> Carlos
> 
>>
>> If the user interrupt fires with the request completed cancel the
>> above
>> operations.
>>
>> There could be an inherent race between inspecting the seqno and
>> deciding to reset. Not sure at the moment what to do. Maybe just call
>> it
>> bad luck?
>>
>> I also think for the software implementation you need to force no
>> request coalescing for contexts with timeout set. Because you want
>> to
>> have 100% defined borders for request in and out - since the timeout
>> is
>> defined per request.
>>
>> In this case you don't need the user interrupt for the trailing edge
>> signal but can use context complete. Maybe putting hooks into
>> context_in/out in intel_lrc.c would work under these circumstances.
>>
>> Also if preempted you need to cancel the timer setup and store
>> elapsed
>> execution time.
>>
>> Or it may make sense to just disable preemption for these contexts.
>> Otherwise there is no point in trying to mandate the timeout?
>>
>> But it is also kind of bad since non-privileged contexts can make
>> themselves non-preemptable by setting the watchdog timeout.
>>
>> Maybe as a compromise we need to automatically apply an elevated
>> priority level, but not as high to be completely non-preemptable.
>> Sounds
>> like a hard question.
>>
>> Regards,
>>
>> Tvrtko
> 
>
Tvrtko Ursulin March 19, 2019, 12:46 p.m. UTC | #9
On 19/03/2019 12:39, Tvrtko Ursulin wrote:
> 
> On 18/03/2019 00:15, Carlos Santa wrote:
>> On Mon, 2019-03-11 at 10:39 +0000, Tvrtko Ursulin wrote:
>>> On 08/03/2019 03:16, Carlos Santa wrote:
>>>> On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
>>>>>>
>>>>>
>>>>> Quoting Carlos Santa (2019-02-21 02:58:16)
>>>>>> +#define GEN8_WATCHDOG_1000US(dev_priv)
>>>>>> watchdog_to_clock_counts(dev_priv, 1000)
>>>>>> +static void gen8_watchdog_irq_handler(unsigned long data)
>>>>>> +{
>>>>>> +       struct intel_engine_cs *engine = (struct
>>>>>> intel_engine_cs
>>>>>> *)data;
>>>>>> +       struct drm_i915_private *dev_priv = engine->i915;
>>>>>> +       unsigned int hung = 0;
>>>>>> +       u32 current_seqno=0;
>>>>>> +       char msg[80];
>>>>>> +       unsigned int tmp;
>>>>>> +       int len;
>>>>>> +
>>>>>> +       /* Stop the counter to prevent further timeout
>>>>>> interrupts
>>>>>> */
>>>>>> +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
>>>>>> get_watchdog_disable(engine));
>>>>>> +
>>>>>> +       /* Read the heartbeat seqno once again to check if we
>>>>>> are
>>>>>> stuck? */
>>>>>> +       current_seqno =
>>>>>> intel_engine_get_hangcheck_seqno(engine);
>>>>>
>>>>> I have said this before, but this doesn't exist either, it's just
>>>>> a
>>>>> temporary glitch in the matrix.
>>>>>
>>>>
>>>> Chris, Tvrtko, I need some guidance on how to find the guilty seqno
>>>> during a hang, can you please advise here what to do?
>>>
>>> When an interrupt fires you need to ascertain whether the same
>>> request
>>> which enabled the watchdog is running, correct?
>>>
>>> So I think you would need this, with a disclaimer that I haven't
>>> thought
>>> about the details really:
>>>
>>> 1. Take a reference to timeline hwsp when setting up the watchdog for
>>> a
>>> request.
>>>
>>> 2. Store the initial seqno associated with this request.
>>>
>>> 3. Force enable user interrupts.
>>>
>>> 4. When timeout fires, inspect the HWSP seqno to see if the request
>>> completed or not.
>>>
>>> 5. Reset the engine if not completed.
>>>
>>> 6. Put the timeline/hwsp reference.
>>
>>
>> static int gen8_emit_bb_start(struct i915_request *rq,
>>                             u64 offset, u32
>> len,
>>                             const unsigned
>> int flags)
>> {
>>     struct i915_timeline *tl;
>>     u32 seqno;
>>
>>     if (enable_watchdog) {
>>         /* Start watchdog timer */
>>         cs = gen8_emit_start_watchdog(rq, cs);
>>         tl = ce->ring->timeline;
>>         i915_timeline_get_seqno(tl, rq, &seqno);
>>         /* Store initial hwsp seqno associated with this request */
>>         engine->watchdog_hwsp_seqno = tl->hwsp_seqno;
> 
> You should not need to allocate a new seqno and also having something 
> stored per engine does not make clear how will you solve out of order.
> 
> Maybe you just set up the timer, then lets see below..
> 
> Also, are you not trying to do the software implementation to start with?
> 
>>     }
>>
>> }
>>
>> static void gen8_watchdog_tasklet(unsigned long data)
>> {
>>         struct i915_request *rq;
>>
>>         rq = intel_engine_find_active_request(engine);
>>
>>         /* Inspect the watchdog seqno once again for
>> completion? */
>>         if (!i915_seqno_passed(engine->watchdog_hwsp_seqno, rq-
>>> fence.seqno)) {
>>             //Reset Engine
>>         }
>> }
> 
> What happens if you simply reset without checking anything? You know hw 
> timer wouldn't have fired if the context wasn't running, correct?
> 
> (Ignoring the race condition between interrupt raised -> hw interrupt 
> delivered -> serviced -> tasklet scheduled -> tasklet running. Which may 
> mean request has completed in the meantime and you reset the engine for 
> nothing. But this is probably not 100% solvable.)

A good idea would be to write some tests to exercise normal and 
edge-case scenarios like coalesced requests, preemption, etc., checking 
which request got reset.

Regards,

Tvrtko

> Regards,
> 
> Tvrtko
> 
>> Tvrtko, is the above acceptable to inspect whether the seqno has
>> completed?
>>
>> I noticed there's a helper function i915_request_completed(struct
>> i915_request *rq) but it will require me to modify it in order to pass
>> 2 different seqnos.
>>
>> Regards,
>> Carlos
>>
>>>
>>> If the user interrupt fires with the request completed cancel the
>>> above
>>> operations.
>>>
>>> There could be an inherent race between inspecting the seqno and
>>> deciding to reset. Not sure at the moment what to do. Maybe just call
>>> it
>>> bad luck?
>>>
>>> I also think for the software implementation you need to force no
>>> request coalescing for contexts with timeout set. Because you want
>>> to
>>> have 100% defined borders for request in and out - since the timeout
>>> is
>>> defined per request.
>>>
>>> In this case you don't need the user interrupt for the trailing edge
>>> signal but can use context complete. Maybe putting hooks into
>>> context_in/out in intel_lrc.c would work under these circumstances.
>>>
>>> Also if preempted you need to cancel the timer setup and store
>>> elapsed
>>> execution time.
>>>
>>> Or it may make sense to just disable preemption for these contexts.
>>> Otherwise there is no point in trying to mandate the timeout?
>>>
>>> But it is also kind of bad since non-privileged contexts can make
>>> themselves non-preemptable by setting the watchdog timeout.
>>>
>>> Maybe as a compromise we need to automatically apply an elevated
>>> priority level, but not as high to be completely non-preemptable.
>>> Sounds
>>> like a hard question.
>>>
>>> Regards,
>>>
>>> Tvrtko
>>
>>
Santa, Carlos March 19, 2019, 5:52 p.m. UTC | #10
On Tue, 2019-03-19 at 12:46 +0000, Tvrtko Ursulin wrote:
> On 19/03/2019 12:39, Tvrtko Ursulin wrote:
> > 
> > On 18/03/2019 00:15, Carlos Santa wrote:
> > > On Mon, 2019-03-11 at 10:39 +0000, Tvrtko Ursulin wrote:
> > > > On 08/03/2019 03:16, Carlos Santa wrote:
> > > > > On Fri, 2019-03-01 at 09:36 +0000, Chris Wilson wrote:
> > > > > > > 
> > > > > > 
> > > > > > Quoting Carlos Santa (2019-02-21 02:58:16)
> > > > > > > +#define GEN8_WATCHDOG_1000US(dev_priv)
> > > > > > > watchdog_to_clock_counts(dev_priv, 1000)
> > > > > > > +static void gen8_watchdog_irq_handler(unsigned long
> > > > > > > data)
> > > > > > > +{
> > > > > > > +       struct intel_engine_cs *engine = (struct
> > > > > > > intel_engine_cs
> > > > > > > *)data;
> > > > > > > +       struct drm_i915_private *dev_priv = engine->i915;
> > > > > > > +       unsigned int hung = 0;
> > > > > > > +       u32 current_seqno=0;
> > > > > > > +       char msg[80];
> > > > > > > +       unsigned int tmp;
> > > > > > > +       int len;
> > > > > > > +
> > > > > > > +       /* Stop the counter to prevent further timeout
> > > > > > > interrupts
> > > > > > > */
> > > > > > > +       I915_WRITE_FW(RING_CNTR(engine->mmio_base),
> > > > > > > get_watchdog_disable(engine));
> > > > > > > +
> > > > > > > +       /* Read the heartbeat seqno once again to check
> > > > > > > if we
> > > > > > > are
> > > > > > > stuck? */
> > > > > > > +       current_seqno =
> > > > > > > intel_engine_get_hangcheck_seqno(engine);
> > > > > > 
> > > > > > I have said this before, but this doesn't exist either,
> > > > > > it's just
> > > > > > a
> > > > > > temporary glitch in the matrix.
> > > > > > 
> > > > > 
> > > > > Chris, Tvrtko, I need some guidance on how to find the quilty
> > > > > seqno
> > > > > during a hang, can you please advice here what to do?
> > > > 
> > > > When an interrupt fires you need to ascertain whether the same
> > > > request
> > > > which enabled the watchdog is running, correct?
> > > > 
> > > > So I think you would need this, with a disclaimer that I
> > > > haven't
> > > > thought
> > > > about the details really:
> > > > 
> > > > 1. Take a reference to timeline hwsp when setting up the
> > > > watchdog for
> > > > a
> > > > request.
> > > > 
> > > > 2. Store the initial seqno associated with this request.
> > > > 
> > > > 3. Force enable user interrupts.
> > > > 
> > > > 4. When timeout fires, inspect the HWSP seqno to see if the
> > > > request
> > > > completed or not.
> > > > 
> > > > 5. Reset the engine if not completed.
> > > > 
> > > > 6. Put the timeline/hwsp reference.
> > > 
> > > 
> > > static int gen8_emit_bb_start(struct i915_request *rq,
> > >                             u64 offset, u32
> > > len,
> > >                             const unsigned
> > > int flags)
> > > {
> > >     struct i915_timeline *tl;
> > >     u32 seqno;
> > > 
> > >     if (enable_watchdog) {
> > >         /* Start watchdog timer */
> > >         cs = gen8_emit_start_watchdog(rq, cs);
> > >         tl = ce->ring->timeline;
> > >         i915_timeline_get_seqno(tl, rq, &seqno);
> > >         /* Store initial hwsp seqno associated with this request */
> > >         engine->watchdog_hwsp_seqno = tl->hwsp_seqno;
> > 
> > You should not need to allocate a new seqno and also having
> > something 
> > stored per engine does not make clear how will you solve out of
> > order.

Understood, I missed that there's a convenience pointer available to us
per request (i.e., *hwsp_seqno). In step #1 above you said to take a
reference to the timeline, so I was trying to make a link between the
timeline and the seqno, but if the request already comes with a
convenience pointer then we may not need the timeline after all...

However, in v4 of the series I was using
intel_engine_get_hangcheck_seqno(engine) for this purpose, and even
though Chris was against it, I saw that it recently landed in the
tree...

> > 
> > Maybe you just set up the timer, then lets see below..

I think you're suggesting simply not to bother checking for the guilty
seqno in the tasklet and simply reset...

> > 
> > Also, are you not trying to do the software implementation to start
> > with?

Trying to keep it simple with just the h/w timers for now... adding a
front/back end to accommodate the s/w timers will just muddy the waters?
Will get to it once we agree on what to do here...

> > 
> > >     }
> > > 
> > > }
> > > 
> > > static void gen8_watchdog_tasklet(unsigned long data)
> > > {
> > >         struct i915_request *rq;
> > > 
> > >         rq = intel_engine_find_active_request(engine);
> > > 
> > >         /* Inspect the watchdog seqno once again for
> > > completion? */
> > >         if (!i915_seqno_passed(engine->watchdog_hwsp_seqno, rq->fence.seqno)) {
> > > 
> > >             //Reset Engine
> > >         }
> > > }
> > 
> > What happens if you simply reset without checking anything? You
> > know hw 
> > timer wouldn't have fired if the context wasn't running, correct?

Need to verify this by running some tests then...

> > 
> > (Ignoring the race condition between interrupt raised -> hw
> > interrupt 
> > delivered -> serviced -> tasklet scheduled -> tasklet running.
> > Which may 
> > mean request has completed in the meantime and you reset the engine
> > for 
> > nothing. But this is probably not 100% solvable.)
> 
> Good idea would be to write some tests to exercise some normal and
> more 
> edge case scenarios like coalesced requests, preemption etc.
> Checking 
> which request got reset etc.

Ok, need to try some test cases then.

Regards,
Carlos

> 
> Regards,
> 
> Tvrtko
> 
> > Regards,
> > 
> > Tvrtko
> > 
> > > Tvrtko, is the above acceptable to inspect whether the seqno has
> > > completed?
> > > 
> > > I noticed there's a helper function i915_request_completed(struct
> > > i915_request *rq) but it will require me to modify it in order to
> > > pass
> > > 2 different seqnos.
> > > 
> > > Regards,
> > > Carlos
> > > 
> > > > 
> > > > If the user interrupt fires with the request completed cancel
> > > > the
> > > > above
> > > > operations.
> > > > 
> > > > There could be an inherent race between inspecting the seqno
> > > > and
> > > > deciding to reset. Not sure at the moment what to do. Maybe
> > > > just call
> > > > it
> > > > bad luck?
> > > > 
> > > > I also think for the software implementation you need to force
> > > > no
> > > > request coalescing for contexts with timeout set. Because you
> > > > want
> > > > to
> > > > have 100% defined borders for request in and out - since the
> > > > timeout
> > > > is
> > > > defined per request.
> > > > 
> > > > In this case you don't need the user interrupt for the trailing
> > > > edge
> > > > signal but can use context complete. Maybe putting hooks into
> > > > context_in/out in intel_lrc.c would work under these
> > > > circumstances.
> > > > 
> > > > Also if preempted you need to cancel the timer setup and store
> > > > elapsed
> > > > execution time.
> > > > 
> > > > Or it may make sense to just disable preemption for these
> > > > contexts.
> > > > Otherwise there is no point in trying to mandate the timeout?
> > > > 
> > > > But it is also kind of bad since non-privileged contexts can
> > > > make
> > > > themselves non-preemptable by setting the watchdog timeout.
> > > > 
> > > > Maybe as a compromise we need to automatically apply an
> > > > elevated
> > > > priority level, but not as high to be completely non-
> > > > preemptable.
> > > > Sounds
> > > > like a hard question.
> > > > 
> > > > Regards,
> > > > 
> > > > Tvrtko
> > > 
> > >

Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 63a008aebfcd..0fcb2df869a2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3120,6 +3120,14 @@  i915_gem_context_lookup(struct drm_i915_file_private *file_priv, u32 id)
 	return ctx;
 }
 
+static inline u32
+watchdog_to_clock_counts(struct drm_i915_private *dev_priv, u64 value_in_us)
+{
+	u64 threshold = 0;
+
+	return threshold;
+}
+
 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file);
 int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.h b/drivers/gpu/drm/i915/i915_gpu_error.h
index f408060e0667..bd1821c73ecd 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.h
+++ b/drivers/gpu/drm/i915/i915_gpu_error.h
@@ -233,6 +233,9 @@  struct i915_gpu_error {
 	 * i915_mutex_lock_interruptible()?). I915_RESET_BACKOFF serves a
 	 * secondary role in preventing two concurrent global reset attempts.
 	 *
+	 * #I915_RESET_WATCHDOG - When hw detects a hang before us, we can use
+	 * I915_RESET_WATCHDOG to report the hang detection cause accurately.
+	 *
 	 * #I915_RESET_ENGINE[num_engines] - Since the driver doesn't need to
 	 * acquire the struct_mutex to reset an engine, we need an explicit
 	 * flag to prevent two concurrent reset attempts in the same engine.
@@ -248,6 +251,7 @@  struct i915_gpu_error {
 #define I915_RESET_BACKOFF	0
 #define I915_RESET_MODESET	1
 #define I915_RESET_ENGINE	2
+#define I915_RESET_WATCHDOG	3
 #define I915_WEDGED		(BITS_PER_LONG - 1)
 
 	/** Number of times an engine has been reset */
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 4b23b2fd1fad..e2a1a07b0f2c 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1456,6 +1456,9 @@  gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 
 	if (tasklet)
 		tasklet_hi_schedule(&engine->execlists.tasklet);
+
+	if (iir & GT_GEN8_WATCHDOG_INTERRUPT)
+		tasklet_schedule(&engine->execlists.watchdog_tasklet);
 }
 
 static void gen8_gt_irq_ack(struct drm_i915_private *i915,
@@ -3883,17 +3886,24 @@  static void gen8_gt_irq_postinstall(struct drm_i915_private *dev_priv)
 	u32 gt_interrupts[] = {
 		GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
+			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_RCS_IRQ_SHIFT |
 			GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT,
 		GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
+			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_VCS1_IRQ_SHIFT |
 			GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
-			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
+			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT |
+			GT_GEN8_WATCHDOG_INTERRUPT << GEN8_VCS2_IRQ_SHIFT,
 		0,
 		GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT |
 			GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT
 		};
 
+	/* VECS watchdog is only available in skl+ */
+	if (INTEL_GEN(dev_priv) >= 9)
+		gt_interrupts[3] |= GT_GEN8_WATCHDOG_INTERRUPT;
+
 	dev_priv->pm_ier = 0x0;
 	dev_priv->pm_imr = ~dev_priv->pm_ier;
 	GEN8_IRQ_INIT_NDX(GT, 0, ~gt_interrupts[0], gt_interrupts[0]);
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 1eca166d95bb..a0e101bbcbce 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -2335,6 +2335,11 @@  enum i915_power_well_id {
 #define RING_START(base)	_MMIO((base) + 0x38)
 #define RING_CTL(base)		_MMIO((base) + 0x3c)
 #define   RING_CTL_SIZE(size)	((size) - PAGE_SIZE) /* in bytes -> pages */
+#define RING_CNTR(base)		_MMIO((base) + 0x178)
+#define   GEN8_WATCHDOG_ENABLE		0
+#define   GEN8_WATCHDOG_DISABLE		1
+#define   GEN8_XCS_WATCHDOG_DISABLE	0xFFFFFFFF /* GEN8 & non-render only */
+#define RING_THRESH(base)	_MMIO((base) + 0x17C)
 #define RING_SYNC_0(base)	_MMIO((base) + 0x40)
 #define RING_SYNC_1(base)	_MMIO((base) + 0x44)
 #define RING_SYNC_2(base)	_MMIO((base) + 0x48)
@@ -2894,6 +2899,7 @@  enum i915_power_well_id {
 #define GT_BSD_USER_INTERRUPT			(1 << 12)
 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1	(1 << 11) /* hsw+; rsvd on snb, ivb, vlv */
 #define GT_CONTEXT_SWITCH_INTERRUPT		(1 <<  8)
+#define GT_GEN8_WATCHDOG_INTERRUPT		(1 <<  6) /* gen8+ */
 #define GT_RENDER_L3_PARITY_ERROR_INTERRUPT	(1 <<  5) /* !snb */
 #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT	(1 <<  4)
 #define GT_RENDER_CS_MASTER_ERROR_INTERRUPT	(1 <<  3)
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index 7ae753358a6d..74f563d23cc8 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1106,6 +1106,7 @@  void intel_engines_park(struct drm_i915_private *i915)
 		/* Flush the residual irq tasklets first. */
 		intel_engine_disarm_breadcrumbs(engine);
 		tasklet_kill(&engine->execlists.tasklet);
+		tasklet_kill(&engine->execlists.watchdog_tasklet);
 
 		/*
 		 * We are committed now to parking the engines, make sure there
diff --git a/drivers/gpu/drm/i915/intel_hangcheck.c b/drivers/gpu/drm/i915/intel_hangcheck.c
index 58b6ff8453dc..bc10acb24d9a 100644
--- a/drivers/gpu/drm/i915/intel_hangcheck.c
+++ b/drivers/gpu/drm/i915/intel_hangcheck.c
@@ -218,7 +218,8 @@  static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
 
 static void hangcheck_declare_hang(struct drm_i915_private *i915,
 				   unsigned int hung,
-				   unsigned int stuck)
+				   unsigned int stuck,
+				   unsigned int watchdog)
 {
 	struct intel_engine_cs *engine;
 	char msg[80];
@@ -231,13 +232,16 @@  static void hangcheck_declare_hang(struct drm_i915_private *i915,
 	if (stuck != hung)
 		hung &= ~stuck;
 	len = scnprintf(msg, sizeof(msg),
-			"%s on ", stuck == hung ? "no progress" : "hang");
+			"%s on ", watchdog ? "watchdog timeout" :
+				  stuck == hung ? "no progress" : "hang");
 	for_each_engine_masked(engine, i915, hung, tmp)
 		len += scnprintf(msg + len, sizeof(msg) - len,
 				 "%s, ", engine->name);
 	msg[len-2] = '\0';
 
-	return i915_handle_error(i915, hung, I915_ERROR_CAPTURE, "%s", msg);
+	return i915_handle_error(i915, hung,
+				 watchdog ? 0 : I915_ERROR_CAPTURE,
+				 "%s", msg);
 }
 
 /*
@@ -255,7 +259,7 @@  static void i915_hangcheck_elapsed(struct work_struct *work)
 			     gpu_error.hangcheck_work.work);
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
-	unsigned int hung = 0, stuck = 0, wedged = 0;
+	unsigned int hung = 0, stuck = 0, wedged = 0, watchdog = 0;
 
 	if (!i915_modparams.enable_hangcheck)
 		return;
@@ -266,6 +270,9 @@  static void i915_hangcheck_elapsed(struct work_struct *work)
 	if (i915_terminally_wedged(&dev_priv->gpu_error))
 		return;
 
+	if (test_and_clear_bit(I915_RESET_WATCHDOG, &dev_priv->gpu_error.flags))
+		watchdog = 1;
+
 	/* As enabling the GPU requires fairly extensive mmio access,
 	 * periodically arm the mmio checker to see if we are triggering
 	 * any invalid access.
@@ -311,7 +318,7 @@  static void i915_hangcheck_elapsed(struct work_struct *work)
 	}
 
 	if (hung)
-		hangcheck_declare_hang(dev_priv, hung, stuck);
+		hangcheck_declare_hang(dev_priv, hung, stuck, watchdog);
 
 	/* Reset timer in case GPU hangs without another request being added */
 	i915_queue_hangcheck(dev_priv);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9ca7dc7a6fa5..c38b239ab39e 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2352,6 +2352,53 @@  static int gen8_emit_flush_render(struct i915_request *request,
 	return 0;
 }
 
+/* From GEN9 onwards, all engines use the same RING_CNTR format */
+static inline u32 get_watchdog_disable(struct intel_engine_cs *engine)
+{
+	if (engine->id == RCS || INTEL_GEN(engine->i915) >= 9)
+		return GEN8_WATCHDOG_DISABLE;
+	else
+		return GEN8_XCS_WATCHDOG_DISABLE;
+}
+
+#define GEN8_WATCHDOG_1000US(dev_priv) watchdog_to_clock_counts(dev_priv, 1000)
+static void gen8_watchdog_irq_handler(unsigned long data)
+{
+	struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
+	struct drm_i915_private *dev_priv = engine->i915;
+	unsigned int hung = 0;
+	u32 current_seqno = 0;
+	char msg[80];
+	unsigned int tmp;
+	int len;
+
+	/* Stop the counter to prevent further timeout interrupts */
+	I915_WRITE_FW(RING_CNTR(engine->mmio_base), get_watchdog_disable(engine));
+
+	/* Read the heartbeat seqno once again to check if we are stuck? */
+	current_seqno = intel_engine_get_hangcheck_seqno(engine);
+
+	if (current_seqno == engine->current_seqno) {
+		hung |= engine->mask;
+
+		len = scnprintf(msg, sizeof(msg), "%s on ", "watchdog timeout");
+		for_each_engine_masked(engine, dev_priv, hung, tmp)
+			len += scnprintf(msg + len, sizeof(msg) - len,
+					 "%s, ", engine->name);
+		msg[len-2] = '\0';
+
+		i915_handle_error(dev_priv, hung, 0, "%s", msg);
+
+		/* Reset timer in case GPU hangs without another request being added */
+		i915_queue_hangcheck(dev_priv);
+	} else {
+		/* Re-start the counter, if really hung, it will expire again */
+		I915_WRITE_FW(RING_THRESH(engine->mmio_base),
+			      GEN8_WATCHDOG_1000US(dev_priv));
+		I915_WRITE_FW(RING_CNTR(engine->mmio_base), GEN8_WATCHDOG_ENABLE);
+	}
+}
+
 /*
  * Reserve space for 2 NOOPs at the end of each request to be
  * used as a workaround for not being allowed to do lite
@@ -2539,6 +2586,21 @@  logical_ring_default_irqs(struct intel_engine_cs *engine)
 
 	engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
 	engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
+
+	switch (engine->class) {
+	default:
+		/* BCS engine does not support hw watchdog */
+		break;
+	case RENDER_CLASS:
+	case VIDEO_DECODE_CLASS:
+		engine->irq_keep_mask |= GT_GEN8_WATCHDOG_INTERRUPT << shift;
+		break;
+	case VIDEO_ENHANCEMENT_CLASS:
+		if (INTEL_GEN(engine->i915) >= 9)
+			engine->irq_keep_mask |=
+				GT_GEN8_WATCHDOG_INTERRUPT << shift;
+		break;
+	}
 }
 
 static int
@@ -2556,6 +2618,9 @@  logical_ring_setup(struct intel_engine_cs *engine)
 	tasklet_init(&engine->execlists.tasklet,
 		     execlists_submission_tasklet, (unsigned long)engine);
 
+	tasklet_init(&engine->execlists.watchdog_tasklet,
+		     gen8_watchdog_irq_handler, (unsigned long)engine);
+
 	logical_ring_default_vfuncs(engine);
 	logical_ring_default_irqs(engine);
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 465094e38d32..17250ba0246f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -122,6 +122,7 @@  struct intel_engine_hangcheck {
 	u64 acthd;
 	u32 last_seqno;
 	u32 next_seqno;
+	u32 watchdog;
 	unsigned long action_timestamp;
 	struct intel_instdone instdone;
 };
@@ -222,6 +223,11 @@  struct intel_engine_execlists {
 	 */
 	struct tasklet_struct tasklet;
 
+	/**
+	 * @watchdog_tasklet: stop counter and re-schedule hangcheck_work asap
+	 */
+	struct tasklet_struct watchdog_tasklet;
+
 	/**
 	 * @default_priolist: priority list for I915_PRIORITY_NORMAL
 	 */
@@ -353,6 +359,7 @@  struct intel_engine_cs {
 	unsigned int hw_id;
 	unsigned int guc_id;
 	unsigned long mask;
+	u32 current_seqno;
 
 	u8 uabi_class;