
drm/i915: Clear local engine-needs-reset bit if in progress elsewhere

Message ID 20170828192530.7918-1-jeff.mcgee@intel.com (mailing list archive)
State New, archived

Commit Message

jeff.mcgee@intel.com Aug. 28, 2017, 7:25 p.m. UTC
From: Jeff McGee <jeff.mcgee@intel.com>

If someone else is resetting the engine we should clear our own bit as
part of skipping that engine. Otherwise we will later believe that it
has not been reset successfully and then trigger full gpu reset. If the
other guy's reset actually fails, he will trigger the full gpu reset.

Signed-off-by: Jeff McGee <jeff.mcgee@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
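
For reference, with the patch applied the engine-reset loop in i915_handle_error() reads as follows (reconstructed from the diff and its unchanged context quoted later in the thread; the block comment on the skip path is added here for illustration only):

	/*
	 * Try engine reset when available. We fall back to full reset if
	 * single reset fails.
	 */
	if (intel_has_reset_engine(dev_priv)) {
		for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
			BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
			if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
					     &dev_priv->gpu_error.flags)) {
				/*
				 * Another thread owns this engine's reset:
				 * drop the engine from our mask so we do not
				 * escalate to a full gpu reset on its behalf.
				 */
				engine_mask &= ~intel_engine_flag(engine);
				continue;
			}

			if (i915_reset_engine(engine, 0) == 0)
				engine_mask &= ~intel_engine_flag(engine);

			clear_bit(I915_RESET_ENGINE + engine->id,
				  &dev_priv->gpu_error.flags);
			wake_up_bit(&dev_priv->gpu_error.flags,
				    I915_RESET_ENGINE + engine->id);
		}
	}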

Comments

Michel Thierry Aug. 28, 2017, 7:41 p.m. UTC | #1
On 28/08/17 12:25, jeff.mcgee@intel.com wrote:
> From: Jeff McGee <jeff.mcgee@intel.com>
> 
> If someone else is resetting the engine we should clear our own bit as
> part of skipping that engine. Otherwise we will later believe that it
> has not been reset successfully and then trigger full gpu reset. If the
> other guy's reset actually fails, he will trigger the full gpu reset.
> 

Did you hit this by manually setting wedged to 'x' ring repeatedly?

> Signed-off-by: Jeff McGee <jeff.mcgee@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_irq.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 5d391e689070..575d618ccdbf 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -2711,8 +2711,10 @@ void i915_handle_error(struct drm_i915_private *dev_priv,
>   		for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
>   			BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
>   			if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
> -					     &dev_priv->gpu_error.flags))
> +					     &dev_priv->gpu_error.flags)) {
> +				engine_mask &= ~intel_engine_flag(engine);
>   				continue;
> +			}
>   
>   			if (i915_reset_engine(engine, 0) == 0)
>   				engine_mask &= ~intel_engine_flag(engine);
> 

Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Chris Wilson Aug. 28, 2017, 7:44 p.m. UTC | #2
Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> From: Jeff McGee <jeff.mcgee@intel.com>
> 
> If someone else is resetting the engine we should clear our own bit as
> part of skipping that engine. Otherwise we will later believe that it
> has not been reset successfully and then trigger full gpu reset. If the
> other guy's reset actually fails, he will trigger the full gpu reset.

The reason we did continue on to the global reset was to serialise
i915_handle_error() with the other thread. Not a huge issue, but a
reasonable property to keep -- and we definitely want to explain why
only one reset at a time is important.

bool intel_engine_lock_reset(struct intel_engine_cs *engine) {
	if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
			      &engine->i915->gpu_error.flags))
		return true;

	intel_engine_wait_for_reset(engine);
	return false; /* somebody else beat us to the reset */
}

void intel_engine_wait_for_reset(struct intel_engine_cs *engine) {
	while (test_and_set_bit(I915_RESET_ENGINE + engine->id,
		                &engine->i915->gpu_error.flags))
		wait_on_bit(&engine->i915->gpu_error.flags, I915_RESET_ENGINE + engine->id,
		            TASK_UNINTERRUPTIBLE);
}

It can also be used by selftests/intel_hangcheck.c, so let's refactor
before we have 3 copies.
-Chris
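
For illustration, the engine-reset loop in i915_handle_error() could then be rewritten in terms of these helpers, roughly as below. This is a sketch, not upstream code: intel_engine_unlock_reset() is a hypothetical name for the existing clear_bit()/wake_up_bit() pair.

	if (intel_has_reset_engine(dev_priv)) {
		for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
			if (!intel_engine_lock_reset(engine)) {
				/*
				 * Somebody else beat us to this engine and,
				 * per the helper's contract, has finished
				 * resetting it by the time we get here.
				 */
				engine_mask &= ~intel_engine_flag(engine);
				continue;
			}

			if (i915_reset_engine(engine, 0) == 0)
				engine_mask &= ~intel_engine_flag(engine);

			intel_engine_unlock_reset(engine); /* hypothetical */
		}
	}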
jeff.mcgee@intel.com Aug. 28, 2017, 7:46 p.m. UTC | #3
On Mon, Aug 28, 2017 at 12:41:58PM -0700, Michel Thierry wrote:
> On 28/08/17 12:25, jeff.mcgee@intel.com wrote:
> >From: Jeff McGee <jeff.mcgee@intel.com>
> >
> >If someone else is resetting the engine we should clear our own bit as
> >part of skipping that engine. Otherwise we will later believe that it
> >has not been reset successfully and then trigger full gpu reset. If the
> >other guy's reset actually fails, he will trigger the full gpu reset.
> >
> 
> Did you hit this by manually setting wedged to 'x' ring repeatedly?
> 
I haven't actually reproduced it. Have just been looking at the code a
lot to try to develop reset for preemption enforcement. The implementation
will call i915_handle_error from another work item that can run concurrently
with hangcheck.
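
A rough sketch of that shape, for concreteness -- a hypothetical work item (all names made up) feeding the same i915_handle_error() path that hangcheck uses:

static void engine_preempt_timeout(struct work_struct *work)
{
	/*
	 * Hypothetical: fires when a high-priority request has waited too
	 * long to preempt, so it runs concurrently with hangcheck.
	 */
	struct intel_engine_cs *engine =
		container_of(work, struct intel_engine_cs, preempt_timeout);

	i915_handle_error(engine->i915, intel_engine_flag(engine),
			  "preemption timed out on %s", engine->name);
}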

> >Signed-off-by: Jeff McGee <jeff.mcgee@intel.com>
> >---
> >  drivers/gpu/drm/i915/i915_irq.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> >diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> >index 5d391e689070..575d618ccdbf 100644
> >--- a/drivers/gpu/drm/i915/i915_irq.c
> >+++ b/drivers/gpu/drm/i915/i915_irq.c
> >@@ -2711,8 +2711,10 @@ void i915_handle_error(struct drm_i915_private *dev_priv,
> >  		for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
> >  			BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
> >  			if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
> >-					     &dev_priv->gpu_error.flags))
> >+					     &dev_priv->gpu_error.flags)) {
> >+				engine_mask &= ~intel_engine_flag(engine);
> >  				continue;
> >+			}
> >  			if (i915_reset_engine(engine, 0) == 0)
> >  				engine_mask &= ~intel_engine_flag(engine);
> >
> 
> Reviewed-by: Michel Thierry <michel.thierry@intel.com>
jeff.mcgee@intel.com Aug. 28, 2017, 8:18 p.m. UTC | #4
On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > From: Jeff McGee <jeff.mcgee@intel.com>
> > 
> > If someone else is resetting the engine we should clear our own bit as
> > part of skipping that engine. Otherwise we will later believe that it
> > has not been reset successfully and then trigger full gpu reset. If the
> > other guy's reset actually fails, he will trigger the full gpu reset.
> 
> The reason we did continue on to the global reset was to serialise
> i915_handle_error() with the other thread. Not a huge issue, but a
> reasonable property to keep -- and we definitely want to explain why
> only one reset at a time is important.
> 
> bool intel_engine_lock_reset() {
> 	if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> 			      &engine->i915->gpu_error.flags))
> 		return true;
> 
> 	intel_engine_wait_for_reset(engine);
The current code doesn't wait for the other thread to finish the reset, but
this would add that wait. Did you intend that as an additional change to
the current code? I don't think it is necessary. Each thread wants to
reset some subset of engines, so it seems the thread can safely exit as soon
as it knows each of those engines has been reset or is being reset as part
of another thread that got the lock first. If any of the threads fail to
reset an engine they "own", then full gpu reset is assured.
-Jeff

> 	return false; /* somebody else beat us to the reset */
> }
> 
> void intel_engine_wait_for_reset() {
> 	while (test_and_set_bit(I915_RESET_ENGINE + engine->id,
> 		                &engine->i915->gpu_error.flags))
> 		wait_on_bit(&engine->i915->gpu_error.flags, I915_RESET_ENGINE + engine->id,
> 		            TASK_UNINTERRUPTIBLE);
> }
> 
> It can also be used by selftests/intel_hangcheck.c, so let's refactor
> before we have 3 copies.
> -Chris
Chris Wilson Aug. 29, 2017, 9:07 a.m. UTC | #5
Quoting Jeff McGee (2017-08-28 21:18:44)
> On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> > Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > > From: Jeff McGee <jeff.mcgee@intel.com>
> > > 
> > > If someone else is resetting the engine we should clear our own bit as
> > > part of skipping that engine. Otherwise we will later believe that it
> > > has not been reset successfully and then trigger full gpu reset. If the
> > > other guy's reset actually fails, he will trigger the full gpu reset.
> > 
> > The reason we did continue on to the global reset was to serialise
> > i915_handle_error() with the other thread. Not a huge issue, but a
> > reasonable property to keep -- and we definitely want to explain why
> > only one reset at a time is important.
> > 
> > bool intel_engine_lock_reset() {
> >       if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> >                             &engine->i915->gpu_error.flags))
> >               return true;
> > 
> >       intel_engine_wait_for_reset(engine);
> The current code doesn't wait for the other thread to finish the reset, but
> this would add that wait. 

Pardon? If we can't reset the engine, we go to the full reset which is
serialised, both with individual engine resets and other globals.

> Did you intend that as an additional change to
> the current code? I don't think it is necessary. Each thread wants to
> reset some subset of engines, so it seems the thread can safely exit as soon
> as it knows each of those engines has been reset or is being reset as part
> of another thread that got the lock first. If any of the threads fail to
> reset an engine they "own", then full gpu reset is assured.

It's unexpected for this function to return before the reset.
-Chris
jeff.mcgee@intel.com Aug. 29, 2017, 3:04 p.m. UTC | #6
On Tue, Aug 29, 2017 at 10:07:18AM +0100, Chris Wilson wrote:
> Quoting Jeff McGee (2017-08-28 21:18:44)
> > On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> > > Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > > > From: Jeff McGee <jeff.mcgee@intel.com>
> > > > 
> > > > If someone else is resetting the engine we should clear our own bit as
> > > > part of skipping that engine. Otherwise we will later believe that it
> > > > has not been reset successfully and then trigger full gpu reset. If the
> > > > other guy's reset actually fails, he will trigger the full gpu reset.
> > > 
> > > The reason we did continue on to the global reset was to serialise
> > > i915_handle_error() with the other thread. Not a huge issue, but a
> > > reasonable property to keep -- and we definitely want to explain why
> > > only one reset at a time is important.
> > > 
> > > bool intel_engine_lock_reset() {
> > >       if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > >                             &engine->i915->gpu_error.flags))
> > >               return true;
> > > 
> > >       intel_engine_wait_for_reset(engine);
> > The current code doesn't wait for the other thread to finish the reset, but
> > this would add that wait. 
> 
> Pardon? If we can't reset the engine, we go to the full reset which is
> serialised, both with individual engine resets and other globals.
> 
> > Did you intend that as an additional change to
> > the current code? I don't think it is necessary. Each thread wants to
> > reset some subset of engines, so it seems the thread can safely exit as soon
> > as it knows each of those engines has been reset or is being reset as part
> > of another thread that got the lock first. If any of the threads fail to
> > reset an engine they "own", then full gpu reset is assured.
> 
> It's unexpected for this function to return before the reset.
> -Chris

I'm a bit confused, so let's go back to the original code that I was trying
to fix:


	/*
	 * Try engine reset when available. We fall back to full reset if
	 * single reset fails.
	 */
	if (intel_has_reset_engine(dev_priv)) {
		for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
			BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
			if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
					     &dev_priv->gpu_error.flags))
				continue;

			if (i915_reset_engine(engine, 0) == 0)
				engine_mask &= ~intel_engine_flag(engine);

			clear_bit(I915_RESET_ENGINE + engine->id,
				  &dev_priv->gpu_error.flags);
			wake_up_bit(&dev_priv->gpu_error.flags,
				    I915_RESET_ENGINE + engine->id);
		}
	}

	if (!engine_mask)
		goto out;

	/* Full reset needs the mutex, stop any other user trying to do so. */

Let's say that 2 threads are here intending to reset render. #1 gets the lock
and starts the render engine-only reset. #2 fails to get the lock which implies
that someone else is in the process of resetting the render engine (with single
engine reset or full gpu reset). #2 continues on without waiting but doesn't
clear the render bit in engine_mask. So #2 will proceed to initiate a full
gpu reset when it may not be necessary. That's the problem I was trying
to address with my initial patch. Do you agree that #2 must clear this bit
to avoid always triggering full gpu reset? If the engine-only reset done by
#1 fails, #1 will do the fallback to full gpu reset, so there is no risk that
we would miss the full gpu reset if it is really needed.

Then there is the question of whether #2 should wait around for the
render engine reset by #1 to complete. It doesn't in current code and I don't
see why it needs to. But that can be a separate discussion from the above.
-Jeff
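
Spelled out as an interleaving (a restatement of the scenario above, nothing new):

	/*
	 * Pre-patch race, two threads both wanting to reset render:
	 *
	 *   #1: test_and_set_bit(RCS) -> was clear  (wins the lock)
	 *   #2: test_and_set_bit(RCS) -> was set    -> continue
	 *   #1: i915_reset_engine() succeeds, clears RCS from #1's mask
	 *   #1: clear_bit(RCS) + wake_up_bit()
	 *   #2: loop done, but RCS is still set in #2's engine_mask
	 *   #2: engine_mask != 0 -> proceeds to full gpu reset
	 */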
Chris Wilson Aug. 29, 2017, 3:17 p.m. UTC | #7
Quoting Jeff McGee (2017-08-29 16:04:17)
> On Tue, Aug 29, 2017 at 10:07:18AM +0100, Chris Wilson wrote:
> > Quoting Jeff McGee (2017-08-28 21:18:44)
> > > On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> > > > Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > > > > From: Jeff McGee <jeff.mcgee@intel.com>
> > > > > 
> > > > > If someone else is resetting the engine we should clear our own bit as
> > > > > part of skipping that engine. Otherwise we will later believe that it
> > > > > has not been reset successfully and then trigger full gpu reset. If the
> > > > > other guy's reset actually fails, he will trigger the full gpu reset.
> > > > 
> > > > The reason we did continue on to the global reset was to serialise
> > > > i915_handle_error() with the other thread. Not a huge issue, but a
> > > > reasonable property to keep -- and we definitely want to explain why
> > > > only one reset at a time is important.
> > > > 
> > > > bool intel_engine_lock_reset() {
> > > >       if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > > >                             &engine->i915->gpu_error.flags))
> > > >               return true;
> > > > 
> > > >       intel_engine_wait_for_reset(engine);
> > > The current code doesn't wait for the other thread to finish the reset, but
> > > this would add that wait. 
> > 
> > Pardon? If we can't reset the engine, we go to the full reset which is
> > serialised, both with individual engine resets and other globals.
> > 
> > > Did you intend that as an additional change to
> > > the current code? I don't think it is necessary. Each thread wants to
> > > reset some subset of engines, so it seems the thread can safely exit as soon
> > > as it knows each of those engines has been reset or is being reset as part
> > > of another thread that got the lock first. If any of the threads fail to
> > > reset an engine they "own", then full gpu reset is assured.
> > 
> > It's unexpected for this function to return before the reset.
> > -Chris
> 
> I'm a bit confused, so let's go back to the original code that I was trying
> to fix:
> 
> 
>         /*
>          * Try engine reset when available. We fall back to full reset if
>          * single reset fails.
>          */
>         if (intel_has_reset_engine(dev_priv)) {
>                 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
>                         BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
>                         if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
>                                              &dev_priv->gpu_error.flags))
>                                 continue;
> 
>                         if (i915_reset_engine(engine, 0) == 0)
>                                 engine_mask &= ~intel_engine_flag(engine);
> 
>                         clear_bit(I915_RESET_ENGINE + engine->id,
>                                   &dev_priv->gpu_error.flags);
>                         wake_up_bit(&dev_priv->gpu_error.flags,
>                                     I915_RESET_ENGINE + engine->id);
>                 }
>         }
> 
>         if (!engine_mask)
>                 goto out;
> 
>         /* Full reset needs the mutex, stop any other user trying to do so. */
> 
> Let's say that 2 threads are here intending to reset render. #1 gets the lock
> and starts the render engine-only reset. #2 fails to get the lock which implies
> that someone else is in the process of resetting the render engine (with single
> engine reset or full gpu reset). #2 continues on without waiting but doesn't
> clear the render bit in engine_mask. So #2 will proceed to initiate a full
> gpu reset when it may not be necessary. That's the problem I was trying
> to address with my initial patch. Do you agree that #2 must clear this bit
> to avoid always triggering full gpu reset? If the engine-only reset done by
> #1 fails, #1 will do the fallback to full gpu reset, so there is no risk that
> we would miss the full gpu reset if it is really needed.
> 
> Then there is the question of whether #2 should wait around for the
> render engine reset by #1 to complete. It doesn't in current code and I don't
> see why it needs to. But that can be a separate discussion from the above.

It very much does in the current code. If we can not do the per-engine
reset, it falls back to the global reset. The global reset is serialised
with itself and the per-engine resets. Ergo it waits, and that was
intentional.
-Chris
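
For context, the serialisation Chris refers to comes from the code that follows the loop -- approximately the below, paraphrased from memory of i915_irq.c of the same era, so treat the details as indicative rather than exact:

	/* Full reset needs the mutex, stop any other user trying to do so. */
	if (test_and_set_bit(I915_RESET_BACKOFF, &dev_priv->gpu_error.flags)) {
		wait_event(dev_priv->gpu_error.reset_queue,
			   !test_bit(I915_RESET_BACKOFF,
				     &dev_priv->gpu_error.flags));
		goto out;
	}

	/*
	 * Prevent any other reset-engine attempt; this is where thread #2
	 * ends up waiting on thread #1's I915_RESET_ENGINE bit.
	 */
	for_each_engine(engine, dev_priv, tmp) {
		while (test_and_set_bit(I915_RESET_ENGINE + engine->id,
					&dev_priv->gpu_error.flags))
			wait_on_bit(&dev_priv->gpu_error.flags,
				    I915_RESET_ENGINE + engine->id,
				    TASK_UNINTERRUPTIBLE);
	}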
Chris Wilson Aug. 29, 2017, 3:22 p.m. UTC | #8
Quoting Jeff McGee (2017-08-28 20:46:00)
> On Mon, Aug 28, 2017 at 12:41:58PM -0700, Michel Thierry wrote:
> > On 28/08/17 12:25, jeff.mcgee@intel.com wrote:
> > >From: Jeff McGee <jeff.mcgee@intel.com>
> > >
> > >If someone else is resetting the engine we should clear our own bit as
> > >part of skipping that engine. Otherwise we will later believe that it
> > >has not been reset successfully and then trigger full gpu reset. If the
> > >other guy's reset actually fails, he will trigger the full gpu reset.
> > >
> > 
> > Did you hit this by manually setting wedged to 'x' ring repeatedly?
> > 
> I haven't actually reproduced it. Have just been looking at the code a
> lot to try to develop reset for preemption enforcement. The implementation
> will call i915_handle_error from another work item that can run concurrently
> with hangcheck.

Note that to hit it in practice is a nasty bug. The assumption is that between
a pair of resets there was sufficient time for the engine to recover,
and so if we reset too quickly we conclude that the reset/recovery
mechanism is broken.

And if you do start playing with fast resets, you very quickly find that
kthread_park is a livelock waiting to happen.
-Chris
jeff.mcgee@intel.com Aug. 29, 2017, 5:01 p.m. UTC | #9
On Tue, Aug 29, 2017 at 04:17:46PM +0100, Chris Wilson wrote:
> Quoting Jeff McGee (2017-08-29 16:04:17)
> > On Tue, Aug 29, 2017 at 10:07:18AM +0100, Chris Wilson wrote:
> > > Quoting Jeff McGee (2017-08-28 21:18:44)
> > > > On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> > > > > Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > > > > > From: Jeff McGee <jeff.mcgee@intel.com>
> > > > > > 
> > > > > > If someone else is resetting the engine we should clear our own bit as
> > > > > > part of skipping that engine. Otherwise we will later believe that it
> > > > > > has not been reset successfully and then trigger full gpu reset. If the
> > > > > > other guy's reset actually fails, he will trigger the full gpu reset.
> > > > > 
> > > > > The reason we did continue on to the global reset was to serialise
> > > > > i915_handle_error() with the other thread. Not a huge issue, but a
> > > > > reasonable property to keep -- and we definitely want to explain why
> > > > > only one reset at a time is important.
> > > > > 
> > > > > bool intel_engine_lock_reset() {
> > > > >       if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > > > >                             &engine->i915->gpu_error.flags))
> > > > >               return true;
> > > > > 
> > > > >       intel_engine_wait_for_reset(engine);
> > > > The current code doesn't wait for the other thread to finish the reset, but
> > > > this would add that wait. 
> > > 
> > > Pardon? If we can't reset the engine, we go to the full reset which is
> > > serialised, both with individual engine resets and other globals.
> > > 
> > > > Did you intend that as an additional change to
> > > > the current code? I don't think it is necessary. Each thread wants to
> > > > reset some subset of engines, so it seems the thread can safely exit as soon
> > > > as it knows each of those engines has been reset or is being reset as part
> > > > of another thread that got the lock first. If any of the threads fail to
> > > > reset an engine they "own", then full gpu reset is assured.
> > > 
> > > It's unexpected for this function to return before the reset.
> > > -Chris
> > 
> > I'm a bit confused, so let's go back to the original code that I was trying
> > to fix:
> > 
> > 
> >         /*
> >          * Try engine reset when available. We fall back to full reset if
> >          * single reset fails.
> >          */
> >         if (intel_has_reset_engine(dev_priv)) {
> >                 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
> >                         BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
> >                         if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
> >                                              &dev_priv->gpu_error.flags))
> >                                 continue;
> > 
> >                         if (i915_reset_engine(engine, 0) == 0)
> >                                 engine_mask &= ~intel_engine_flag(engine);
> > 
> >                         clear_bit(I915_RESET_ENGINE + engine->id,
> >                                   &dev_priv->gpu_error.flags);
> >                         wake_up_bit(&dev_priv->gpu_error.flags,
> >                                     I915_RESET_ENGINE + engine->id);
> >                 }
> >         }
> > 
> >         if (!engine_mask)
> >                 goto out;
> > 
> >         /* Full reset needs the mutex, stop any other user trying to do so. */
> > 
> > Let's say that 2 threads are here intending to reset render. #1 gets the lock
> > and starts the render engine-only reset. #2 fails to get the lock which implies
> > that someone else is in the process of resetting the render engine (with single
> > engine reset or full gpu reset). #2 continues on without waiting but doesn't
> > clear the render bit in engine_mask. So #2 will proceed to initiate a full
> > gpu reset when it may not be necessary. That's the problem I was trying
> > to address with my initial patch. Do you agree that #2 must clear this bit
> > to avoid always triggering full gpu reset? If the engine-only reset done by
> > #1 fails, #1 will do the fallback to full gpu reset, so there is no risk that
> > we would miss the full gpu reset if it is really needed.
> > 
> > Then there is the question of whether #2 should wait around for the
> > render engine reset by #1 to complete. It doesn't in current code and I don't
> > see why it needs to. But that can be a separate discussion from the above.
> 
> It very much does in the current code. If we can not do the per-engine
> reset, it falls back to the global reset.

So are you saying that it is by design in this scenario that #2 will resort
to full gpu reset just because it wasn't the thread that actually performed
the engine reset, even though it can clearly infer based on the engine lock
being held that #1 is performing that reset for him?

> The global reset is serialised
> with itself and the per-engine resets. Ergo it waits, and that was
> intentional.
> -Chris

Yes, the wait happens because #2 goes on to start full gpu reset which
requires all engine bits to be grabbed. My contention is that it should not
start full gpu reset.
-Jeff
Chris Wilson Sept. 6, 2017, 9:57 p.m. UTC | #10
Quoting Jeff McGee (2017-08-29 18:01:47)
> On Tue, Aug 29, 2017 at 04:17:46PM +0100, Chris Wilson wrote:
> > Quoting Jeff McGee (2017-08-29 16:04:17)
> > > On Tue, Aug 29, 2017 at 10:07:18AM +0100, Chris Wilson wrote:
> > > > Quoting Jeff McGee (2017-08-28 21:18:44)
> > > > > On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> > > > > > Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > > > > > > From: Jeff McGee <jeff.mcgee@intel.com>
> > > > > > > 
> > > > > > > If someone else is resetting the engine we should clear our own bit as
> > > > > > > part of skipping that engine. Otherwise we will later believe that it
> > > > > > > has not been reset successfully and then trigger full gpu reset. If the
> > > > > > > other guy's reset actually fails, he will trigger the full gpu reset.
> > > > > > 
> > > > > > The reason we did continue on to the global reset was to serialise
> > > > > > i915_handle_error() with the other thread. Not a huge issue, but a
> > > > > > reasonable property to keep -- and we definitely want to explain why
> > > > > > only one reset at a time is important.
> > > > > > 
> > > > > > bool intel_engine_lock_reset() {
> > > > > >       if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > > > > >                             &engine->i915->gpu_error.flags))
> > > > > >               return true;
> > > > > > 
> > > > > >       intel_engine_wait_for_reset(engine);
> > > > > The current code doesn't wait for the other thread to finish the reset, but
> > > > > this would add that wait. 
> > > > 
> > > > Pardon? If we can't reset the engine, we go to the full reset which is
> > > > serialised, both with individual engine resets and other globals.
> > > > 
> > > > > Did you intend that as an additional change to
> > > > > the current code? I don't think it is necessary. Each thread wants to
> > > > > reset some subset of engines, so it seems the thread can safely exit as soon
> > > > > as it knows each of those engines has been reset or is being reset as part
> > > > > of another thread that got the lock first. If any of the threads fail to
> > > > > reset an engine they "own", then full gpu reset is assured.
> > > > 
> > > > It's unexpected for this function to return before the reset.
> > > > -Chris
> > > 
> > > I'm a bit confused, so let's go back to the original code that I was trying
> > > to fix:
> > > 
> > > 
> > >         /*
> > >          * Try engine reset when available. We fall back to full reset if
> > >          * single reset fails.
> > >          */
> > >         if (intel_has_reset_engine(dev_priv)) {
> > >                 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
> > >                         BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
> > >                         if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > >                                              &dev_priv->gpu_error.flags))
> > >                                 continue;
> > > 
> > >                         if (i915_reset_engine(engine, 0) == 0)
> > >                                 engine_mask &= ~intel_engine_flag(engine);
> > > 
> > >                         clear_bit(I915_RESET_ENGINE + engine->id,
> > >                                   &dev_priv->gpu_error.flags);
> > >                         wake_up_bit(&dev_priv->gpu_error.flags,
> > >                                     I915_RESET_ENGINE + engine->id);
> > >                 }
> > >         }
> > > 
> > >         if (!engine_mask)
> > >                 goto out;
> > > 
> > >         /* Full reset needs the mutex, stop any other user trying to do so. */
> > > 
> > > Let's say that 2 threads are here intending to reset render. #1 gets the lock
> > > and starts the render engine-only reset. #2 fails to get the lock which implies
> > > that someone else is in the process of resetting the render engine (with single
> > > engine reset or full gpu reset). #2 continues on without waiting but doesn't
> > > clear the render bit in engine_mask. So #2 will proceed to initiate a full
> > > gpu reset when it may not be necessary. That's the problem I was trying
> > > to address with my initial patch. Do you agree that #2 must clear this bit
> > > to avoid always triggering full gpu reset? If the engine-only reset done by
> > > #1 fails, #1 will do the fallback to full gpu reset, so there is no risk that
> > > we would miss the full gpu reset if it is really needed.
> > > 
> > > Then there is the question of whether #2 should wait around for the
> > > render engine reset by #1 to complete. It doesn't in current code and I don't
> > > see why it needs to. But that can be a separate discussion from the above.
> > 
> > It very much does in the current code. If we can not do the per-engine
> > reset, it falls back to the global reset.
> 
> So are you saying that it is by design in this scenario that #2 will resort
> to full gpu reset just because it wasn't the thread that actually performed
> the engine reset, even though it can clearly infer based on the engine lock
> being held that #1 is performing that reset for him?

Yes, that wait was intentional.
 
> > The global reset is serialised
> > with itself and the per-engine resets. Ergo it waits, and that was
> > intentional.
> > -Chris
> 
> Yes, the wait happens because #2 goes on to start full gpu reset which
> requires all engine bits to be grabbed. My contention is that it should not
> start full gpu reset.

And that I am not disputing. Just that returning before the reset is
complete changes the current contract.
-Chris
jeff.mcgee@intel.com Sept. 13, 2017, 2:23 p.m. UTC | #11
On Wed, Sep 06, 2017 at 10:57:20PM +0100, Chris Wilson wrote:
> Quoting Jeff McGee (2017-08-29 18:01:47)
> > On Tue, Aug 29, 2017 at 04:17:46PM +0100, Chris Wilson wrote:
> > > Quoting Jeff McGee (2017-08-29 16:04:17)
> > > > On Tue, Aug 29, 2017 at 10:07:18AM +0100, Chris Wilson wrote:
> > > > > Quoting Jeff McGee (2017-08-28 21:18:44)
> > > > > > On Mon, Aug 28, 2017 at 08:44:48PM +0100, Chris Wilson wrote:
> > > > > > > Quoting jeff.mcgee@intel.com (2017-08-28 20:25:30)
> > > > > > > > From: Jeff McGee <jeff.mcgee@intel.com>
> > > > > > > > 
> > > > > > > > If someone else is resetting the engine we should clear our own bit as
> > > > > > > > part of skipping that engine. Otherwise we will later believe that it
> > > > > > > > has not been reset successfully and then trigger full gpu reset. If the
> > > > > > > > other guy's reset actually fails, he will trigger the full gpu reset.
> > > > > > > 
> > > > > > > The reason we did continue on to the global reset was to serialise
> > > > > > > i915_handle_error() with the other thread. Not a huge issue, but a
> > > > > > > reasonable property to keep -- and we definitely want to explain why
> > > > > > > only one reset at a time is important.
> > > > > > > 
> > > > > > > bool intel_engine_lock_reset() {
> > > > > > >       if (!test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > > > > > >                             &engine->i915->gpu_error.flags))
> > > > > > >               return true;
> > > > > > > 
> > > > > > >       intel_engine_wait_for_reset(engine);
> > > > > > The current code doesn't wait for the other thread to finish the reset, but
> > > > > > this would add that wait. 
> > > > > 
> > > > > Pardon? If we can't reset the engine, we go to the full reset which is
> > > > > serialised, both with individual engine resets and other globals.
> > > > > 
> > > > > > Did you intend that as an additional change to
> > > > > > the current code? I don't think it is necessary. Each thread wants to
> > > > > > reset some subset of engines, so it seems the thread can safely exit as soon
> > > > > > as it knows each of those engines has been reset or is being reset as part
> > > > > > of another thread that got the lock first. If any of the threads fail to
> > > > > > reset an engine they "own", then full gpu reset is assured.
> > > > > 
> > > > > It's unexpected for this function to return before the reset.
> > > > > -Chris
> > > > 
> > > > I'm a bit confused, so let's go back to the original code that I was trying
> > > > to fix:
> > > > 
> > > > 
> > > >         /*
> > > >          * Try engine reset when available. We fall back to full reset if
> > > >          * single reset fails.
> > > >          */
> > > >         if (intel_has_reset_engine(dev_priv)) {
> > > >                 for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
> > > >                         BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
> > > >                         if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
> > > >                                              &dev_priv->gpu_error.flags))
> > > >                                 continue;
> > > > 
> > > >                         if (i915_reset_engine(engine, 0) == 0)
> > > >                                 engine_mask &= ~intel_engine_flag(engine);
> > > > 
> > > >                         clear_bit(I915_RESET_ENGINE + engine->id,
> > > >                                   &dev_priv->gpu_error.flags);
> > > >                         wake_up_bit(&dev_priv->gpu_error.flags,
> > > >                                     I915_RESET_ENGINE + engine->id);
> > > >                 }
> > > >         }
> > > > 
> > > >         if (!engine_mask)
> > > >                 goto out;
> > > > 
> > > >         /* Full reset needs the mutex, stop any other user trying to do so. */
> > > > 
> > > > Let's say that 2 threads are here intending to reset render. #1 gets the lock
> > > > and starts the render engine-only reset. #2 fails to get the lock which implies
> > > > that someone else is in the process of resetting the render engine (with single
> > > > engine reset or full gpu reset). #2 continues on without waiting but doesn't
> > > > clear the render bit in engine_mask. So #2 will proceed to initiate a full
> > > > gpu reset when it may not be necessary. That's the problem I was trying
> > > > to address with my initial patch. Do you agree that #2 must clear this bit
> > > > to avoid always triggering full gpu reset? If the engine-only reset done by
> > > > #1 fails, #1 will do the fallback to full gpu reset, so there is no risk that
> > > > we would miss the full gpu reset if it is really needed.
> > > > 
> > > > Then there is the question of whether #2 should wait around for the
> > > > render engine reset by #1 to complete. It doesn't in current code and I don't
> > > > see why it needs to. But that can be a separate discussion from the above.
> > > 
> > > It very much does in the current code. If we can not do the per-engine
> > > reset, it falls back to the global reset.
> > 
> > So are you saying that it is by design in this scenario that #2 will resort
> > to full gpu reset just because it wasn't the thread that actually performed
> > the engine reset, even though it can clearly infer based on the engine lock
> > being held that #1 is performing that reset for him?
> 
> Yes, that wait was intentional.
>  

So couldn't we preserve the wait without resorting to full GPU reset to do it?
It is the unnecessary triggering of full GPU reset that concerns me the most,
not that thread #2 has to wait.

I admit it's not a major concern at the moment because the code runs
concurrently only if debugfs wedged is invoked along with hangcheck (if I read
it correctly). I've added another delayed work that can run the code to do
"forced" preemption to clear the way for high-priority requests, and unnecessary
fallback to slow full GPU reset is a bad thing.
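
Concretely, a minimal sketch of that middle ground -- keep the "one reset at a time" property by waiting for the current owner, but trust its fallback path instead of starting our own full gpu reset (illustration only):

			if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
					     &dev_priv->gpu_error.flags)) {
				/*
				 * Wait for the thread that owns this
				 * engine's reset, then drop the engine from
				 * our mask; if its reset fails, it does the
				 * full gpu reset fallback itself.
				 */
				wait_on_bit(&dev_priv->gpu_error.flags,
					    I915_RESET_ENGINE + engine->id,
					    TASK_UNINTERRUPTIBLE);
				engine_mask &= ~intel_engine_flag(engine);
				continue;
			}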

> > > The global reset is serialised
> > > with itself and the per-engine resets. Ergo it waits, and that was
> > > intentional.
> > > -Chris
> > 
> > Yes, the wait happens because #2 goes on to start full gpu reset which
> > requires all engine bits to be grabbed. My contention is that it should not
> > start full gpu reset.
> 
> And that I am not disputing. Just that returning before the reset is
> complete changes the current contract.
> -Chris

Patch

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 5d391e689070..575d618ccdbf 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2711,8 +2711,10 @@ void i915_handle_error(struct drm_i915_private *dev_priv,
 		for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
 			BUILD_BUG_ON(I915_RESET_MODESET >= I915_RESET_ENGINE);
 			if (test_and_set_bit(I915_RESET_ENGINE + engine->id,
-					     &dev_priv->gpu_error.flags))
+					     &dev_priv->gpu_error.flags)) {
+				engine_mask &= ~intel_engine_flag(engine);
 				continue;
+			}
 
 			if (i915_reset_engine(engine, 0) == 0)
 				engine_mask &= ~intel_engine_flag(engine);