drm/i915: Reset request handling for gen8+

Message ID 1434621100-9612-1-git-send-email-mika.kuoppala@intel.com
State New

Commit Message

Mika Kuoppala June 18, 2015, 9:51 a.m. UTC
In order for gen8+ hardware to guarantee that no context switch
takes place during engine reset and that current context is properly
saved, the driver needs to notify and query hw before commencing
with reset.

There are gpu hangs where the engine gets so stuck that it will never
report being ready for reset. We could proceed with the reset anyway,
but with some hangs on skl, the forced gpu reset results in a system
hang. Inspection suggests that unreadiness for reset correlates with
the probable system hang.

We will only proceed with reset if all engines report that they
are ready for reset. If the root cause of the system hang is found
and can be worked around by other means, we can reconsider
reinstating the full reset for the unready case.

v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
v3: updated commit msg
v4: timeout_ms, simpler error path (Chris)

References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
Testcase: igt/gem_concurrent_blit/prw-blt-overwrite-source-read-rcs-forked
Testcase: igt/gem_concurrent_blit/gtt-blt-overwrite-source-read-rcs-forked
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
---
 drivers/gpu/drm/i915/i915_reg.h     |  3 +++
 drivers/gpu/drm/i915/intel_uncore.c | 43 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 45 insertions(+), 1 deletion(-)
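In outline, the gen8+ flow this patch adds is: write the reset-request bit into each engine's RING_RESET_CTL, wait for the hardware to raise the ready bit on every engine, and only then fall through to the existing gen6 reset, backing the request bits out again if any engine never acks. A minimal user-space sketch of that control flow, with the MMIO access stubbed out (the register array, `write_masked`, and `gen8_reset_sketch` are illustrative stand-ins, not driver API):

```c
#include <assert.h>
#include <stdint.h>

#define RESET_CTL_REQUEST_RESET  (1u << 0)
#define RESET_CTL_READY_TO_RESET (1u << 1)
#define NUM_ENGINES 4

/* Stand-in register file: one RING_RESET_CTL per engine. */
static uint32_t reset_ctl[NUM_ENGINES];

/* Masked-bit write, in the spirit of _MASKED_BIT_ENABLE/_MASKED_BIT_DISABLE:
 * the high 16 bits select which low bits the write may change. */
#define MASKED_BIT_ENABLE(b)  (((b) << 16) | (b))
#define MASKED_BIT_DISABLE(b) ((b) << 16)

static void write_masked(uint32_t *reg, uint32_t masked_val)
{
	uint32_t mask = masked_val >> 16;

	*reg = (*reg & ~mask) | (masked_val & mask & 0xffffu);
}

/* Stand-in for wait_for_register(): here the "hardware" has already
 * responded (or not), so the poll degenerates to a single check. */
static int wait_ready(const uint32_t *reg, uint32_t mask, uint32_t value)
{
	return ((*reg & mask) == value) ? 0 : -1; /* -1 ~ timeout */
}

/* ~gen8_do_reset(): request on each engine, proceed only if all ack. */
static int gen8_reset_sketch(int stuck_engine)
{
	int i;

	for (i = 0; i < NUM_ENGINES; i++) {
		write_masked(&reset_ctl[i],
			     MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET));

		/* Model the hw: a healthy engine raises READY in response. */
		if (i != stuck_engine)
			reset_ctl[i] |= RESET_CTL_READY_TO_RESET;

		if (wait_ready(&reset_ctl[i], RESET_CTL_READY_TO_RESET,
			       RESET_CTL_READY_TO_RESET))
			goto not_ready;
	}

	return 0; /* the real code calls gen6_do_reset() here */

not_ready:
	/* Back the request out of every engine before giving up. */
	for (i = 0; i < NUM_ENGINES; i++)
		write_masked(&reset_ctl[i],
			     MASKED_BIT_DISABLE(RESET_CTL_REQUEST_RESET));
	return -5; /* ~ -EIO */
}
```

Note how the error path mirrors the patch: a single unready engine aborts the whole reset and clears the request bit everywhere, rather than forcing the reset through.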

Comments

Chris Wilson June 18, 2015, 10:03 a.m. UTC | #1
On Thu, Jun 18, 2015 at 12:51:40PM +0300, Mika Kuoppala wrote:
> In order for gen8+ hardware to guarantee that no context switch
> takes place during engine reset and that current context is properly
> saved, the driver needs to notify and query hw before commencing
> with reset.
> 
> There are gpu hangs where the engine gets so stuck that it never will
> report to be ready for reset. We could proceed with reset anyway, but
> with some hangs with skl, the forced gpu reset will result in a system
> hang. By inspecting the unreadiness for reset seems to correlate with
> the probable system hang.
> 
> We will only proceed with reset if all engines report that they
> are ready for reset. If root cause for system hang is found and
> can be worked around with another means, we can reconsider if
> we can reinstate full reset for unreadiness case.
> 
> v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
> v3: updated commit msg
> v4: timeout_ms, simpler error path (Chris)
> 
> References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
> References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
> Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
> Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked

Is this the new format for subtests?

I thought the form was
igt/gem_concurrent_blit/prw-blt-overwrite-source-read-rcs-forked

> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Tomas Elf <tomas.elf@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>

Lgtm,
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
Tomas Elf June 18, 2015, 10:11 a.m. UTC | #2
On 18/06/2015 10:51, Mika Kuoppala wrote:
> In order for gen8+ hardware to guarantee that no context switch
> takes place during engine reset and that current context is properly
> saved, the driver needs to notify and query hw before commencing
> with reset.
>
> There are gpu hangs where the engine gets so stuck that it never will
> report to be ready for reset. We could proceed with reset anyway, but
> with some hangs with skl, the forced gpu reset will result in a system
> hang. By inspecting the unreadiness for reset seems to correlate with
> the probable system hang.
>
> We will only proceed with reset if all engines report that they
> are ready for reset. If root cause for system hang is found and
> can be worked around with another means, we can reconsider if
> we can reinstate full reset for unreadiness case.
>
> v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
> v3: updated commit msg
> v4: timeout_ms, simpler error path (Chris)
>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
> References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
> Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
> Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Tomas Elf <tomas.elf@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_reg.h     |  3 +++
>   drivers/gpu/drm/i915/intel_uncore.c | 43 ++++++++++++++++++++++++++++++++++++-
>   2 files changed, 45 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 0b979ad..3684f92 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -1461,6 +1461,9 @@ enum skl_disp_power_wells {
>   #define RING_MAX_IDLE(base)	((base)+0x54)
>   #define RING_HWS_PGA(base)	((base)+0x80)
>   #define RING_HWS_PGA_GEN6(base)	((base)+0x2080)
> +#define RING_RESET_CTL(base)	((base)+0xd0)
> +#define   RESET_CTL_REQUEST_RESET  (1 << 0)
> +#define   RESET_CTL_READY_TO_RESET (1 << 1)
>
>   #define HSW_GTT_CACHE_EN	0x4024
>   #define   GTT_CACHE_EN_ALL	0xF0007FFF
> diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
> index 4a86cf0..160a47a 100644
> --- a/drivers/gpu/drm/i915/intel_uncore.c
> +++ b/drivers/gpu/drm/i915/intel_uncore.c
> @@ -1455,9 +1455,50 @@ static int gen6_do_reset(struct drm_device *dev)
>   	return ret;
>   }
>
> +static int wait_for_register(struct drm_i915_private *dev_priv,
> +			     const u32 reg,
> +			     const u32 mask,
> +			     const u32 value,
> +			     const unsigned long timeout_ms)
> +{
> +	return wait_for((I915_READ(reg) & mask) == value, timeout_ms);
> +}
> +
> +static int gen8_do_reset(struct drm_device *dev)
> +{
> +	struct drm_i915_private *dev_priv = dev->dev_private;
> +	struct intel_engine_cs *engine;
> +	int i;
> +
> +	for_each_ring(engine, dev_priv, i) {
> +		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
> +			   _MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET));
> +
> +		if (wait_for_register(dev_priv,
> +				      RING_RESET_CTL(engine->mmio_base),
> +				      RESET_CTL_READY_TO_RESET,
> +				      RESET_CTL_READY_TO_RESET,
> +				      700)) {
> +			DRM_ERROR("%s: reset request timeout\n", engine->name);
> +			goto not_ready;
> +		}

So just to be clear here: If one or more of the reset control registers 
decide that they are at a point where they will never again be ready for 
reset we will simply not do a full GPU reset until reboot? Is there 
perhaps a case where you would want to try reset request once or twice 
or like five times or whatever but then simply go ahead with the full 
GPU reset regardless of what the reset control register tells you? After 
all, it's our only way out if the hardware is truly stuck.

Thanks,
Tomas

> +	}
> +
> +	return gen6_do_reset(dev);
> +
> +not_ready:
> +	for_each_ring(engine, dev_priv, i)
> +		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
> +			   _MASKED_BIT_DISABLE(RESET_CTL_REQUEST_RESET));
> +
> +	return -EIO;
> +}
> +
>   static int (*intel_get_gpu_reset(struct drm_device *dev))(struct drm_device *)
>   {
> -	if (INTEL_INFO(dev)->gen >= 6)
> +	if (INTEL_INFO(dev)->gen >= 8)
> +		return gen8_do_reset;
> +	else if (INTEL_INFO(dev)->gen >= 6)
>   		return gen6_do_reset;
>   	else if (IS_GEN5(dev))
>   		return ironlake_do_reset;
>
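As an aside on the wait_for_register() helper in the patch above: it leans on i915's wait_for() macro, which polls a condition until a timeout expires and returns -ETIMEDOUT on failure. A rough user-space analogue of that poll-with-deadline idea (the function names and the bare busy-wait are illustrative, not the kernel macro):

```c
#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000u + (uint64_t)(ts.tv_nsec / 1000000);
}

/* Poll cond(arg) until it holds or timeout_ms elapses.
 * Returns 0 on success, -110 (~ -ETIMEDOUT) on timeout. */
static int wait_for_cond(int (*cond)(void *arg), void *arg,
			 unsigned long timeout_ms)
{
	uint64_t deadline = now_ms() + timeout_ms;

	do {
		if (cond(arg))
			return 0;
		/* a real loop would cpu_relax() or sleep between polls */
	} while (now_ms() <= deadline);

	return -110;
}
```

wait_for_register() is then just this loop with the condition fixed to `(I915_READ(reg) & mask) == value` and the patch's 700 ms budget.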
Mika Kuoppala June 18, 2015, 10:22 a.m. UTC | #3
Chris Wilson <chris@chris-wilson.co.uk> writes:

> On Thu, Jun 18, 2015 at 12:51:40PM +0300, Mika Kuoppala wrote:
>> In order for gen8+ hardware to guarantee that no context switch
>> takes place during engine reset and that current context is properly
>> saved, the driver needs to notify and query hw before commencing
>> with reset.
>> 
>> There are gpu hangs where the engine gets so stuck that it never will
>> report to be ready for reset. We could proceed with reset anyway, but
>> with some hangs with skl, the forced gpu reset will result in a system
>> hang. By inspecting the unreadiness for reset seems to correlate with
>> the probable system hang.
>> 
>> We will only proceed with reset if all engines report that they
>> are ready for reset. If root cause for system hang is found and
>> can be worked around with another means, we can reconsider if
>> we can reinstate full reset for unreadiness case.
>> 
>> v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
>> v3: updated commit msg
>> v4: timeout_ms, simpler error path (Chris)
>> 
>> References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
>> References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
>> Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
>> Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked
>
> Is this the new format for subtests?

No. It is me cutpasting from scripts. Daniel could you please
fix while merging.

Thanks,
-Mika

> I thought the form was
> igt/gem_concurrent_blit/prw-blt-overwrite-source-read-rcs-forked
>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Cc: Tomas Elf <tomas.elf@intel.com>
>> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
>
> Lgtm,
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> -Chris
>
> -- 
> Chris Wilson, Intel Open Source Technology Centre
Mika Kuoppala June 18, 2015, 10:31 a.m. UTC | #4
Tomas Elf <tomas.elf@intel.com> writes:

> On 18/06/2015 10:51, Mika Kuoppala wrote:
>> In order for gen8+ hardware to guarantee that no context switch
>> takes place during engine reset and that current context is properly
>> saved, the driver needs to notify and query hw before commencing
>> with reset.
>>
>> There are gpu hangs where the engine gets so stuck that it never will
>> report to be ready for reset. We could proceed with reset anyway, but
>> with some hangs with skl, the forced gpu reset will result in a system
>> hang. By inspecting the unreadiness for reset seems to correlate with
>> the probable system hang.
>>
>> We will only proceed with reset if all engines report that they
>> are ready for reset. If root cause for system hang is found and
>> can be worked around with another means, we can reconsider if
>> we can reinstate full reset for unreadiness case.
>>
>> v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
>> v3: updated commit msg
>> v4: timeout_ms, simpler error path (Chris)
>>
>> References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
>> References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
>> Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
>> Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Cc: Tomas Elf <tomas.elf@intel.com>
>> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
>> ---
>>   drivers/gpu/drm/i915/i915_reg.h     |  3 +++
>>   drivers/gpu/drm/i915/intel_uncore.c | 43 ++++++++++++++++++++++++++++++++++++-
>>   2 files changed, 45 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
>> index 0b979ad..3684f92 100644
>> --- a/drivers/gpu/drm/i915/i915_reg.h
>> +++ b/drivers/gpu/drm/i915/i915_reg.h
>> @@ -1461,6 +1461,9 @@ enum skl_disp_power_wells {
>>   #define RING_MAX_IDLE(base)	((base)+0x54)
>>   #define RING_HWS_PGA(base)	((base)+0x80)
>>   #define RING_HWS_PGA_GEN6(base)	((base)+0x2080)
>> +#define RING_RESET_CTL(base)	((base)+0xd0)
>> +#define   RESET_CTL_REQUEST_RESET  (1 << 0)
>> +#define   RESET_CTL_READY_TO_RESET (1 << 1)
>>
>>   #define HSW_GTT_CACHE_EN	0x4024
>>   #define   GTT_CACHE_EN_ALL	0xF0007FFF
>> diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
>> index 4a86cf0..160a47a 100644
>> --- a/drivers/gpu/drm/i915/intel_uncore.c
>> +++ b/drivers/gpu/drm/i915/intel_uncore.c
>> @@ -1455,9 +1455,50 @@ static int gen6_do_reset(struct drm_device *dev)
>>   	return ret;
>>   }
>>
>> +static int wait_for_register(struct drm_i915_private *dev_priv,
>> +			     const u32 reg,
>> +			     const u32 mask,
>> +			     const u32 value,
>> +			     const unsigned long timeout_ms)
>> +{
>> +	return wait_for((I915_READ(reg) & mask) == value, timeout_ms);
>> +}
>> +
>> +static int gen8_do_reset(struct drm_device *dev)
>> +{
>> +	struct drm_i915_private *dev_priv = dev->dev_private;
>> +	struct intel_engine_cs *engine;
>> +	int i;
>> +
>> +	for_each_ring(engine, dev_priv, i) {
>> +		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
>> +			   _MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET));
>> +
>> +		if (wait_for_register(dev_priv,
>> +				      RING_RESET_CTL(engine->mmio_base),
>> +				      RESET_CTL_READY_TO_RESET,
>> +				      RESET_CTL_READY_TO_RESET,
>> +				      700)) {
>> +			DRM_ERROR("%s: reset request timeout\n", engine->name);
>> +			goto not_ready;
>> +		}
>
> So just to be clear here: If one or more of the reset control registers 
> decide that they are at a point where they will never again be ready for 
> reset we will simply not do a full GPU reset until reboot? 

Correct. At least for now, until we find out what upsets the engine
so much that resetting it hangs the system. So for now it is just
a choice between a dead gpu or a dead system.

>Is there 
> perhaps a case where you would want to try reset request once or twice 
> or like five times or whatever but then simply go ahead with the full 
> GPU reset regardless of what the reset control register tells you? After 
> all, it's our only way out if the hardware is truly stuck.
>

That would be best if we could count on the reset only resetting
the GPU. Then we would risk just losing/messing up the context (and
only with per-ring resets).

But until we learn more of this situation, we risk hanging the
whole system by trying to revive the gpu. I tried to update
the commit message to reflect this.

-Mika

> Thanks,
> Tomas
>
>> +	}
>> +
>> +	return gen6_do_reset(dev);
>> +
>> +not_ready:
>> +	for_each_ring(engine, dev_priv, i)
>> +		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
>> +			   _MASKED_BIT_DISABLE(RESET_CTL_REQUEST_RESET));
>> +
>> +	return -EIO;
>> +}
>> +
>>   static int (*intel_get_gpu_reset(struct drm_device *dev))(struct drm_device *)
>>   {
>> -	if (INTEL_INFO(dev)->gen >= 6)
>> +	if (INTEL_INFO(dev)->gen >= 8)
>> +		return gen8_do_reset;
>> +	else if (INTEL_INFO(dev)->gen >= 6)
>>   		return gen6_do_reset;
>>   	else if (IS_GEN5(dev))
>>   		return ironlake_do_reset;
>>
Chris Wilson June 18, 2015, 10:36 a.m. UTC | #5
On Thu, Jun 18, 2015 at 11:11:55AM +0100, Tomas Elf wrote:
> On 18/06/2015 10:51, Mika Kuoppala wrote:
> >In order for gen8+ hardware to guarantee that no context switch
> >takes place during engine reset and that current context is properly
> >saved, the driver needs to notify and query hw before commencing
> >with reset.
> >
> >There are gpu hangs where the engine gets so stuck that it never will
> >report to be ready for reset. We could proceed with reset anyway, but
> >with some hangs with skl, the forced gpu reset will result in a system
> >hang. By inspecting the unreadiness for reset seems to correlate with
> >the probable system hang.
> >
> >We will only proceed with reset if all engines report that they
> >are ready for reset. If root cause for system hang is found and
> >can be worked around with another means, we can reconsider if
> >we can reinstate full reset for unreadiness case.
> >
> >v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
> >v3: updated commit msg
> >v4: timeout_ms, simpler error path (Chris)
> >
> >References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
> >References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
> >Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
> >Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked
> >Cc: Chris Wilson <chris@chris-wilson.co.uk>
> >Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> >Cc: Tomas Elf <tomas.elf@intel.com>
> >Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
> >---
> >  drivers/gpu/drm/i915/i915_reg.h     |  3 +++
> >  drivers/gpu/drm/i915/intel_uncore.c | 43 ++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 45 insertions(+), 1 deletion(-)
> >
> >diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> >index 0b979ad..3684f92 100644
> >--- a/drivers/gpu/drm/i915/i915_reg.h
> >+++ b/drivers/gpu/drm/i915/i915_reg.h
> >@@ -1461,6 +1461,9 @@ enum skl_disp_power_wells {
> >  #define RING_MAX_IDLE(base)	((base)+0x54)
> >  #define RING_HWS_PGA(base)	((base)+0x80)
> >  #define RING_HWS_PGA_GEN6(base)	((base)+0x2080)
> >+#define RING_RESET_CTL(base)	((base)+0xd0)
> >+#define   RESET_CTL_REQUEST_RESET  (1 << 0)
> >+#define   RESET_CTL_READY_TO_RESET (1 << 1)
> >
> >  #define HSW_GTT_CACHE_EN	0x4024
> >  #define   GTT_CACHE_EN_ALL	0xF0007FFF
> >diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
> >index 4a86cf0..160a47a 100644
> >--- a/drivers/gpu/drm/i915/intel_uncore.c
> >+++ b/drivers/gpu/drm/i915/intel_uncore.c
> >@@ -1455,9 +1455,50 @@ static int gen6_do_reset(struct drm_device *dev)
> >  	return ret;
> >  }
> >
> >+static int wait_for_register(struct drm_i915_private *dev_priv,
> >+			     const u32 reg,
> >+			     const u32 mask,
> >+			     const u32 value,
> >+			     const unsigned long timeout_ms)
> >+{
> >+	return wait_for((I915_READ(reg) & mask) == value, timeout_ms);
> >+}
> >+
> >+static int gen8_do_reset(struct drm_device *dev)
> >+{
> >+	struct drm_i915_private *dev_priv = dev->dev_private;
> >+	struct intel_engine_cs *engine;
> >+	int i;
> >+
> >+	for_each_ring(engine, dev_priv, i) {
> >+		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
> >+			   _MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET));
> >+
> >+		if (wait_for_register(dev_priv,
> >+				      RING_RESET_CTL(engine->mmio_base),
> >+				      RESET_CTL_READY_TO_RESET,
> >+				      RESET_CTL_READY_TO_RESET,
> >+				      700)) {
> >+			DRM_ERROR("%s: reset request timeout\n", engine->name);
> >+			goto not_ready;
> >+		}
> 
> So just to be clear here: If one or more of the reset control
> registers decide that they are at a point where they will never
> again be ready for reset we will simply not do a full GPU reset
> until reboot? Is there perhaps a case where you would want to try
> reset request once or twice or like five times or whatever but then
> simply go ahead with the full GPU reset regardless of what the reset
> control register tells you? After all, it's our only way out if the
> hardware is truly stuck.

What happens is that we skip the reset, report an error and that marks
the GPU as wedged. To get out of that state requires user intervention,
either by rebooting or through use of debugfs/i915_wedged.

We can try to repeat the reset from a workqueue, but we should first
tackle the interaction with TDR and get your per-engine reset
upstream, along with its various levels of backoff and recovery.
-Chris
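The "wedged" flow Chris describes can be modelled as a tiny state machine: a refused handshake aborts the reset and terminally wedges the GPU, later submissions fail with -EIO, and only explicit user intervention (reboot or debugfs/i915_wedged) triggers another attempt. This is a simplified illustration of that behaviour, not the driver's real code paths:

```c
#include <assert.h>

enum { EIO_ERR = 5 };

struct gpu {
	int wedged;
};

/* A reset attempt: refused handshake -> give up and mark wedged. */
static int try_reset(struct gpu *gpu, int handshake_ready)
{
	if (!handshake_ready) {
		gpu->wedged = 1;
		return -EIO_ERR;
	}
	gpu->wedged = 0;
	return 0;
}

/* Submissions are refused while terminally wedged. */
static int submit(struct gpu *gpu)
{
	return gpu->wedged ? -EIO_ERR : 0;
}

/* ~ echo 1 > debugfs/i915_wedged: user-forced retry of the reset. */
static int user_force_reset(struct gpu *gpu, int handshake_ready)
{
	return try_reset(gpu, handshake_ready);
}
```

The point of the design is visible here: nothing in the driver retries on its own, so a stuck handshake leaves a usable (if GPU-less) system rather than risking the full-reset system hang.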
Tomas Elf June 18, 2015, 11:18 a.m. UTC | #6
On 18/06/2015 11:36, Chris Wilson wrote:
 > On Thu, Jun 18, 2015 at 11:11:55AM +0100, Tomas Elf wrote:
 >> On 18/06/2015 10:51, Mika Kuoppala wrote:
 >>> In order for gen8+ hardware to guarantee that no context switch
 >>> takes place during engine reset and that current context is properly
 >>> saved, the driver needs to notify and query hw before commencing
 >>> with reset.
 >>>
 >>> There are gpu hangs where the engine gets so stuck that it never will
 >>> report to be ready for reset. We could proceed with reset anyway, but
 >>> with some hangs with skl, the forced gpu reset will result in a system
 >>> hang. By inspecting the unreadiness for reset seems to correlate with
 >>> the probable system hang.
 >>>
 >>> We will only proceed with reset if all engines report that they
 >>> are ready for reset. If root cause for system hang is found and
 >>> can be worked around with another means, we can reconsider if
 >>> we can reinstate full reset for unreadiness case.
 >>>
 >>> v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
 >>> v3: updated commit msg
 >>> v4: timeout_ms, simpler error path (Chris)
 >>>
 >>> References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
 >>> References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
 >>> Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
 >>> Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked
 >>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
 >>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
 >>> Cc: Tomas Elf <tomas.elf@intel.com>
 >>> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
 >>> ---
 >>>   drivers/gpu/drm/i915/i915_reg.h     |  3 +++
 >>>   drivers/gpu/drm/i915/intel_uncore.c | 43 ++++++++++++++++++++++++++++++++++++-
 >>>   2 files changed, 45 insertions(+), 1 deletion(-)
 >>>
 >>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
 >>> index 0b979ad..3684f92 100644
 >>> --- a/drivers/gpu/drm/i915/i915_reg.h
 >>> +++ b/drivers/gpu/drm/i915/i915_reg.h
 >>> @@ -1461,6 +1461,9 @@ enum skl_disp_power_wells {
 >>>   #define RING_MAX_IDLE(base)	((base)+0x54)
 >>>   #define RING_HWS_PGA(base)	((base)+0x80)
 >>>   #define RING_HWS_PGA_GEN6(base)	((base)+0x2080)
 >>> +#define RING_RESET_CTL(base)	((base)+0xd0)
 >>> +#define   RESET_CTL_REQUEST_RESET  (1 << 0)
 >>> +#define   RESET_CTL_READY_TO_RESET (1 << 1)
 >>>
 >>>   #define HSW_GTT_CACHE_EN	0x4024
 >>>   #define   GTT_CACHE_EN_ALL	0xF0007FFF
 >>> diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
 >>> index 4a86cf0..160a47a 100644
 >>> --- a/drivers/gpu/drm/i915/intel_uncore.c
 >>> +++ b/drivers/gpu/drm/i915/intel_uncore.c
 >>> @@ -1455,9 +1455,50 @@ static int gen6_do_reset(struct drm_device *dev)
 >>>   	return ret;
 >>>   }
 >>>
 >>> +static int wait_for_register(struct drm_i915_private *dev_priv,
 >>> +			     const u32 reg,
 >>> +			     const u32 mask,
 >>> +			     const u32 value,
 >>> +			     const unsigned long timeout_ms)
 >>> +{
 >>> +	return wait_for((I915_READ(reg) & mask) == value, timeout_ms);
 >>> +}
 >>> +
 >>> +static int gen8_do_reset(struct drm_device *dev)
 >>> +{
 >>> +	struct drm_i915_private *dev_priv = dev->dev_private;
 >>> +	struct intel_engine_cs *engine;
 >>> +	int i;
 >>> +
 >>> +	for_each_ring(engine, dev_priv, i) {
 >>> +		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
 >>> +			   _MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET));
 >>> +
 >>> +		if (wait_for_register(dev_priv,
 >>> +				      RING_RESET_CTL(engine->mmio_base),
 >>> +				      RESET_CTL_READY_TO_RESET,
 >>> +				      RESET_CTL_READY_TO_RESET,
 >>> +				      700)) {
 >>> +			DRM_ERROR("%s: reset request timeout\n", engine->name);
 >>> +			goto not_ready;
 >>> +		}
 >>
 >> So just to be clear here: If one or more of the reset control
 >> registers decide that they are at a point where they will never
 >> again be ready for reset we will simply not do a full GPU reset
 >> until reboot? Is there perhaps a case where you would want to try
 >> reset request once or twice or like five times or whatever but then
 >> simply go ahead with the full GPU reset regardless of what the reset
 >> control register tells you? After all, it's our only way out if the
 >> hardware is truly stuck.
 >
 > What happens is that we skip the reset, report an error and that marks
 > the GPU as wedged. To get out of that state requires user intervention,
 > either by rebooting or through use of debugfs/i915_wedged.

That's a fair point: we will mark the GPU as terminally wedged. That's 
always been there as a final state where we simply give up. I guess it 
might be better to actively mark the GPU as terminally wedged from the 
driver's point of view rather than plow ahead in a last ditch effort to 
reset the GPU, which may or may not succeed and which may irrecoverably 
hang the system in the worst case. I guess we at least protect the 
currently running context if we just mark the GPU as terminally wedged 
instead of putting it in a potentially undefined state.

 >
 > We can try to repeat the reset from a workqueue, but we should first
 > tackle interaction with TDR first and get your per-engine reset
 > upstream, along with it's various levels of backoff and recovery.
 > -Chris

My point was more along the lines of bailing out if the reset request 
fails, not returning an error, and simply keeping track of the number 
of times we've attempted the reset request. By not returning an error 
we would allow more subsequent hang detections to happen (since the 
hang is still there), which would end up in the same reset request in 
the future. If the reset request failed more times we would simply 
increment the counter, and at some point we would decide that we've 
had too many unsuccessful reset request attempts and go ahead with 
the reset anyway; if that reset failed we would return an error at 
that point, which would result in a terminally wedged state. But, 
yeah, I can see why we shouldn't do this.

We could certainly introduce per-engine reset support into this to add 
more levels of recovery and fall-back but in the end if we use reset 
handshaking for both per-engine reset and for full GPU reset and if 
reset handshaking fails in both cases then we're screwed no matter what 
(so we try engine reset request and fail, then fall back to full GPU 
reset request and fail there too - terminally wedged!). The reset 
request failure will block both per-engine reset and full GPU reset and 
result in a terminally wedged state no matter what.

The only thing we gain in this particular case by adding per-engine 
reset support is if the reset request failure is limited to the blitter 
engine (which Ben Widawsky seems to be questioning on IRC). In that 
case, supporting per-engine reset support would allow us to unblock 
other engines separately without touching full GPU reset and thereby not 
having to request blitter engine reset, avoiding the potential case of 
having the blitter engine reset request fail, which would thereby block 
any other hang recovery for all engines.

Anyway, if we prefer the terminally wedged state rather than a last 
ditch attempt at a full GPU reset then I can understand how this makes 
sense.

Thanks,
Tomas


 >
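For concreteness, the retry policy Tomas floats above, counting refused reset requests and only forcing the reset after too many failures, could look roughly like the sketch below. This is hypothetical; the merged patch deliberately does NOT do this, and the names, the "busy" return, and the threshold of 5 are all illustrative:

```c
#include <assert.h>

enum { EBUSY_ERR = 16, MAX_RESET_REQUEST_FAILURES = 5 };

/* Count of consecutive refused reset handshakes. */
static int reset_request_failures;

/* handshake_ready: would every engine ack RESET_CTL_READY_TO_RESET?
 * Returns 0 when a reset goes ahead, -EBUSY_ERR when we bail and let
 * the next hangcheck retry. */
static int reset_with_backoff(int handshake_ready)
{
	if (handshake_ready) {
		reset_request_failures = 0;
		return 0;		/* acked, normal reset */
	}

	if (++reset_request_failures < MAX_RESET_REQUEST_FAILURES)
		return -EBUSY_ERR;	/* bail; hangcheck fires again */

	/* Too many refusals: last-ditch forced reset, accepting the
	 * system-hang risk the commit message describes. */
	reset_request_failures = 0;
	return 0;
}
```

The trade-off the thread settles on is exactly the last branch: that forced reset is the step that can take the whole system down on skl, which is why the patch wedges instead.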
Chris Wilson June 18, 2015, 11:42 a.m. UTC | #7
On Thu, Jun 18, 2015 at 12:18:39PM +0100, Tomas Elf wrote:
> My point was more along the lines of bailing out if the reset
> request fails and not return an error message but simply keep track
> of the number of times we've attempted the reset request. By not
> returning an error we would allow more subsequent hang detections to
> happen (since the hang is still there), which would end up in the
> same reset request in the future. If the reset request would fail
> more times we would simply increment the counter and at one point we
> would decide that we've had too many unsuccessful reset request
> attempts and simply go ahead with the reset anyway and if the reset
> would fail we would return an error at that point in time, which
> would result in a terminally wedged state. But, yeah, I can see why
> we shouldn't do this.

Skipping to the middle!

I understand the merit in trying the reset a few times before giving up,
it would just need a bit of restructuring to try the reset before
clearing gem state (trivial) and requeueing the hangcheck. I am just
wary of feature creep before we get stuck into TDR, which promises to
change how we think about resets entirely.

I am trying not to block your work by doing "it would be nice if" tasks
first! :)
-Chris
Daniel Vetter June 18, 2015, 2:58 p.m. UTC | #8
On Thu, Jun 18, 2015 at 12:42:55PM +0100, Chris Wilson wrote:
> On Thu, Jun 18, 2015 at 12:18:39PM +0100, Tomas Elf wrote:
> > My point was more along the lines of bailing out if the reset
> > request fails and not return an error message but simply keep track
> > of the number of times we've attempted the reset request. By not
> > returning an error we would allow more subsequent hang detections to
> > happen (since the hang is still there), which would end up in the
> > same reset request in the future. If the reset request would fail
> > more times we would simply increment the counter and at one point we
> > would decide that we've had too many unsuccessful reset request
> > attempts and simply go ahead with the reset anyway and if the reset
> > would fail we would return an error at that point in time, which
> > would result in a terminally wedged state. But, yeah, I can see why
> > we shouldn't do this.
> 
> Skipping to the middle!
> 
> I understand the merit in trying the reset a few times before giving up,
> it would just need a bit of restructuring to try the reset before
> clearing gem state (trivial) and requeueing the hangcheck. I am just
> wary of feature creep before we get stuck into TDR, which promises to
> change how we think about resets entirely.

My maintainer concern here is always that we should err on the side of not
killing the machine. If the reset failed, or if the gpu reinit failed then
marking the gpu as wedged has historically been the safe option. The
system will still run, display mostly works and there's a reasonable
chance you can gather debug data.

We do have i915.reset to disable the reset for these cases, but it's
always a nuisance to have to resort to that.
-Daniel
Daniel Vetter June 18, 2015, 3 p.m. UTC | #9
On Thu, Jun 18, 2015 at 01:22:36PM +0300, Mika Kuoppala wrote:
> Chris Wilson <chris@chris-wilson.co.uk> writes:
> 
> > On Thu, Jun 18, 2015 at 12:51:40PM +0300, Mika Kuoppala wrote:
> >> In order for gen8+ hardware to guarantee that no context switch
> >> takes place during engine reset and that current context is properly
> >> saved, the driver needs to notify and query hw before commencing
> >> with reset.
> >> 
> >> There are gpu hangs where the engine gets so stuck that it never will
> >> report to be ready for reset. We could proceed with reset anyway, but
> >> with some hangs with skl, the forced gpu reset will result in a system
> >> hang. By inspecting the unreadiness for reset seems to correlate with
> >> the probable system hang.
> >> 
> >> We will only proceed with reset if all engines report that they
> >> are ready for reset. If root cause for system hang is found and
> >> can be worked around with another means, we can reconsider if
> >> we can reinstate full reset for unreadiness case.
> >> 
> >> v2: -EIO, Recovery, gen8 (Chris, Tomas, Daniel)
> >> v3: updated commit msg
> >> v4: timeout_ms, simpler error path (Chris)
> >> 
> >> References: https://bugs.freedesktop.org/show_bug.cgi?id=89959
> >> References: https://bugs.freedesktop.org/show_bug.cgi?id=90854
> >> Testcase: igt/gem_concurrent_blit --r prw-blt-overwrite-source-read-rcs-forked
> >> Testcase: igt/gem_concurrent_blit --r gtt-blt-overwrite-source-read-rcs-forked
> >
> > Is this the new format for subtests?
> 
> No. It is me cutpasting from scripts. Daniel could you please
> fix while merging.

Done and queued for -next, thanks for the patch.
-Daniel
Chris Wilson June 19, 2015, 4:30 p.m. UTC | #10
On Thu, Jun 18, 2015 at 04:58:06PM +0200, Daniel Vetter wrote:
> On Thu, Jun 18, 2015 at 12:42:55PM +0100, Chris Wilson wrote:
> > I understand the merit in trying the reset a few times before giving up,
> > it would just need a bit of restructuring to try the reset before
> > clearing gem state (trivial) and requeueing the hangcheck. I am just
> > wary of feature creep before we get stuck into TDR, which promises to
> > change how we think about resets entirely.
> 
> My maintainer concern here is always that we should err on the side of not
> killing the machine. If the reset failed, or if the gpu reinit failed then
> marking the gpu as wedged has historically been the safe option. The
> system will still run, display mostly works and there's a reasonable
> chance you can gather debug data.

One thing to bear in mind here is that with this particular
don't-reset-if-not-ready logic, repeating the attempt at reset after
another hangcheck is equivalent to just using a slower hangcheck (more
or less; the difference is a couple of writes to one register). So it
is no more likely to hang the machine than the original GPU hang.

We can differentiate the cases here, between say EBUSY, ENODEV, and EIO,
returned from the actual reset request, to determine which we want to
retry (i.e. EBUSY).
-Chris
Daniel Vetter June 22, 2015, 12:50 p.m. UTC | #11
On Fri, Jun 19, 2015 at 05:30:45PM +0100, Chris Wilson wrote:
> On Thu, Jun 18, 2015 at 04:58:06PM +0200, Daniel Vetter wrote:
> > On Thu, Jun 18, 2015 at 12:42:55PM +0100, Chris Wilson wrote:
> > > I understand the merit in trying the reset a few times before giving up,
> > > it would just need a bit of restructuring to try the reset before
> > > clearing gem state (trivial) and requeueing the hangcheck. I am just
> > > wary of feature creep before we get stuck into TDR, which promises to
> > > change how we think about resets entirely.
> > 
> > My maintainer concern here is always that we should err on the side of not
> > killing the machine. If the reset failed, or if the gpu reinit failed then
> > marking the gpu as wedged has historically been the safe option. The
> > system will still run, display mostly works and there's a reasonable
> > chance you can gather debug data.
> 
> One thing to bear in mind here is that with this particular
> don't-reset-if-not-ready logic, repeating the attempt at reset after
> another hangcheck is equivalent to just using a slower hangcheck (more
> or less; the difference is a couple of writes to one register). So it
> is no more likely to hang the machine than the original GPU hang.
> 
> We can differentiate the cases here, between say EBUSY, ENODEV, and EIO,
> returned from the actual reset request, to determine which we want to
> retry (i.e. EBUSY).

Tbh I don't want to make the reset code too clever with multiple fallback
paths - it's really tricky code and as-is already suffers from imo
insufficient test coverage and too many bugs. Once we have decided that
the gpu is dead and returned -EIO, this should be a terminal state.
Developers can always manually unwedge through debugfs, but for users
it's imo paramount that we don't automatically run some little-tested
path and take down their box in the process.
-Daniel

Patch
diff mbox

diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 0b979ad..3684f92 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -1461,6 +1461,9 @@  enum skl_disp_power_wells {
 #define RING_MAX_IDLE(base)	((base)+0x54)
 #define RING_HWS_PGA(base)	((base)+0x80)
 #define RING_HWS_PGA_GEN6(base)	((base)+0x2080)
+#define RING_RESET_CTL(base)	((base)+0xd0)
+#define   RESET_CTL_REQUEST_RESET  (1 << 0)
+#define   RESET_CTL_READY_TO_RESET (1 << 1)
 
 #define HSW_GTT_CACHE_EN	0x4024
 #define   GTT_CACHE_EN_ALL	0xF0007FFF
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index 4a86cf0..160a47a 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -1455,9 +1455,50 @@  static int gen6_do_reset(struct drm_device *dev)
 	return ret;
 }
 
+static int wait_for_register(struct drm_i915_private *dev_priv,
+			     const u32 reg,
+			     const u32 mask,
+			     const u32 value,
+			     const unsigned long timeout_ms)
+{
+	return wait_for((I915_READ(reg) & mask) == value, timeout_ms);
+}
+
+static int gen8_do_reset(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_engine_cs *engine;
+	int i;
+
+	for_each_ring(engine, dev_priv, i) {
+		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
+			   _MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET));
+
+		if (wait_for_register(dev_priv,
+				      RING_RESET_CTL(engine->mmio_base),
+				      RESET_CTL_READY_TO_RESET,
+				      RESET_CTL_READY_TO_RESET,
+				      700)) {
+			DRM_ERROR("%s: reset request timeout\n", engine->name);
+			goto not_ready;
+		}
+	}
+
+	return gen6_do_reset(dev);
+
+not_ready:
+	for_each_ring(engine, dev_priv, i)
+		I915_WRITE(RING_RESET_CTL(engine->mmio_base),
+			   _MASKED_BIT_DISABLE(RESET_CTL_REQUEST_RESET));
+
+	return -EIO;
+}
+
 static int (*intel_get_gpu_reset(struct drm_device *dev))(struct drm_device *)
 {
-	if (INTEL_INFO(dev)->gen >= 6)
+	if (INTEL_INFO(dev)->gen >= 8)
+		return gen8_do_reset;
+	else if (INTEL_INFO(dev)->gen >= 6)
 		return gen6_do_reset;
 	else if (IS_GEN5(dev))
 		return ironlake_do_reset;