
[4/4] drm/i915/guc: Refcount context during error capture

Message ID 20210914050956.30685-5-matthew.brost@intel.com (mailing list archive)
State New, archived
Series Do error capture async, flush G2H processing on reset

Commit Message

Matthew Brost Sept. 14, 2021, 5:09 a.m. UTC
From: John Harrison <John.C.Harrison@Intel.com>

When i915 receives a context reset notification from GuC, it triggers
an error capture before resetting any outstanding requests of that
context. Unfortunately, the error capture is not a time-bound
operation. In certain situations it can take a long time, particularly
when multiple large LMEM buffers must be read back and encoded. If
this delay is longer than other timeouts (heartbeat, test recovery,
etc.) then a full GT reset can be triggered in the middle.

That can result in the context being reset by GuC actually being
destroyed before the error capture completes and the GuC submission
code resumes. Thus, the GuC side can start dereferencing stale
pointers and Bad Things ensue.

So add a refcount get of the context during the entire reset
operation. That way, the context can't be destroyed part way through
no matter what other resets or user interactions occur.

v2:
 (Matthew Brost)
  - Update patch to work with async error capture

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 24 +++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

Comments

Daniel Vetter Sept. 14, 2021, 2:29 p.m. UTC | #1
On Mon, Sep 13, 2021 at 10:09:56PM -0700, Matthew Brost wrote:
> From: John Harrison <John.C.Harrison@Intel.com>
> 
> When i915 receives a context reset notification from GuC, it triggers
> an error capture before resetting any outstanding requests of that
> context. Unfortunately, the error capture is not a time-bound
> operation. In certain situations it can take a long time, particularly
> when multiple large LMEM buffers must be read back and encoded. If
> this delay is longer than other timeouts (heartbeat, test recovery,
> etc.) then a full GT reset can be triggered in the middle.
> 
> That can result in the context being reset by GuC actually being
> destroyed before the error capture completes and the GuC submission
> code resumes. Thus, the GuC side can start dereferencing stale
> pointers and Bad Things ensue.
> 
> So add a refcount get of the context during the entire reset
> operation. That way, the context can't be destroyed part way through
> no matter what other resets or user interactions occur.
> 
> v2:
>  (Matthew Brost)
>   - Update patch to work with async error capture
> 
> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

This sounds like a fundamental issue in our reset/scheduler design. If we
have multiple timeout-things working in parallel, then there's going to be
an endless whack-a-mole fireworks show.

Reset is not a perf critical path (aside from media timeout, which guc
handles internally anyway). Simplicity trumps everything else. The fix
here is to guarantee that anything related to reset cannot happen in
parallel with anything else related to reset/timeout. At least on a
per-engine (and really on a per-reset domain) basis.

The fix we've developed for drm/sched is that the driver can allocate a
single-thread work queue, pass it to each drm/sched instance, and all
timeout handling is run in there.

For i915 it's more of a mess since we have a ton of random things that
time out/reset potentially going on in parallel. But that's the design we
should head towards.

_not_ sprinkling random refcounts all over the place until most of the
oops/splats disappear. That's cargo-culting, not engineering.
-Daniel

> ---
>  .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 24 +++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 1986a57b52cc..02917fc4d4a8 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -2888,6 +2888,8 @@ static void capture_worker_func(struct work_struct *w)
>  	intel_engine_set_hung_context(engine, ce);
>  	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
>  		i915_capture_error_state(gt, ce->engine->mask);
> +
> +	intel_context_put(ce);
>  }
>  
>  static void capture_error_state(struct intel_guc *guc,
> @@ -2924,7 +2926,7 @@ static void guc_context_replay(struct intel_context *ce)
>  	tasklet_hi_schedule(&sched_engine->tasklet);
>  }
>  
> -static void guc_handle_context_reset(struct intel_guc *guc,
> +static bool guc_handle_context_reset(struct intel_guc *guc,
>  				     struct intel_context *ce)
>  {
>  	trace_intel_context_reset(ce);
> @@ -2937,7 +2939,11 @@ static void guc_handle_context_reset(struct intel_guc *guc,
>  		   !context_blocked(ce))) {
>  		capture_error_state(guc, ce);
>  		guc_context_replay(ce);
> +
> +		return false;
>  	}
> +
> +	return true;
>  }
>  
>  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> @@ -2945,6 +2951,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>  {
>  	struct intel_context *ce;
>  	int desc_idx;
> +	unsigned long flags;
>  
>  	if (unlikely(len != 1)) {
>  		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> @@ -2952,11 +2959,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>  	}
>  
>  	desc_idx = msg[0];
> +
> +	/*
> +	 * The context lookup uses the xarray but lookups only require an RCU lock
> +	 * not the full spinlock. So take the lock explicitly and keep it until the
> +	 * context has been reference count locked to ensure it can't be destroyed
> +	 * asynchronously until the reset is done.
> +	 */
> +	xa_lock_irqsave(&guc->context_lookup, flags);
>  	ce = g2h_context_lookup(guc, desc_idx);
> +	if (ce)
> +		intel_context_get(ce);
> +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> +
>  	if (unlikely(!ce))
>  		return -EPROTO;
>  
> -	guc_handle_context_reset(guc, ce);
> +	if (guc_handle_context_reset(guc, ce))
> +		intel_context_put(ce);
>  
>  	return 0;
>  }
> -- 
> 2.32.0
>
John Harrison Sept. 14, 2021, 11:23 p.m. UTC | #2
On 9/14/2021 07:29, Daniel Vetter wrote:
> On Mon, Sep 13, 2021 at 10:09:56PM -0700, Matthew Brost wrote:
>> From: John Harrison <John.C.Harrison@Intel.com>
>>
>> When i915 receives a context reset notification from GuC, it triggers
>> an error capture before resetting any outstanding requests of that
>> context. Unfortunately, the error capture is not a time-bound
>> operation. In certain situations it can take a long time, particularly
>> when multiple large LMEM buffers must be read back and encoded. If
>> this delay is longer than other timeouts (heartbeat, test recovery,
>> etc.) then a full GT reset can be triggered in the middle.
>>
>> That can result in the context being reset by GuC actually being
>> destroyed before the error capture completes and the GuC submission
>> code resumes. Thus, the GuC side can start dereferencing stale
>> pointers and Bad Things ensue.
>>
>> So add a refcount get of the context during the entire reset
>> operation. That way, the context can't be destroyed part way through
>> no matter what other resets or user interactions occur.
>>
>> v2:
>>   (Matthew Brost)
>>    - Update patch to work with async error capture
>>
>> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> This sounds like a fundamental issue in our reset/scheduler design. If we
> have multiple timeout-things working in parallel, then there's going to be
> an endless whack-a-mole fireworks show.
>
> Reset is not a perf critical path (aside from media timeout, which guc
> handles internally anyway). Simplicity trumps everything else. The fix
> here is to guarantee that anything related to reset cannot happen in
> parallel with anything else related to reset/timeout. At least on a
> per-engine (and really on a per-reset domain) basis.
>
> The fix we've developed for drm/sched is that the driver can allocate a
> single-thread work queue, pass it to each drm/sched instance, and all
> timeout handling is run in there.
>
> For i915 it's more of a mess since we have a ton of random things that
> time out/reset potentially going on in parallel. But that's the design we
> should head towards.
>
> _not_ sprinkling random refcounts all over the place until most of the
> oops/splats disappear. That's cargo-culting, not engineering.
> -Daniel
Not sure I follow this.

The code pulls an intel_context object out of a structure and proceeds 
to dereference it in what can be a slow piece of code that is running in 
a worker thread and is therefore already asynchronous to other activity. 
Acquiring a reference count on that object while holding its pointer is 
standard practice, I thought. That's the whole point of reference counting!

To be clear, this is not adding a brand new reference count object. It 
is merely taking the correct lock on an object while accessing that object.

It uses the xarray's lock while accessing the xarray and then the ce's 
lock while accessing the ce and makes sure to overlap the two to prevent 
any race conditions. To me, that seems like a) correct object access 
practice and b) it should have been there in the first place.

John.


>
>> ---
>>   .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 24 +++++++++++++++++--
>>   1 file changed, 22 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index 1986a57b52cc..02917fc4d4a8 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -2888,6 +2888,8 @@ static void capture_worker_func(struct work_struct *w)
>>   	intel_engine_set_hung_context(engine, ce);
>>   	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
>>   		i915_capture_error_state(gt, ce->engine->mask);
>> +
>> +	intel_context_put(ce);
>>   }
>>   
>>   static void capture_error_state(struct intel_guc *guc,
>> @@ -2924,7 +2926,7 @@ static void guc_context_replay(struct intel_context *ce)
>>   	tasklet_hi_schedule(&sched_engine->tasklet);
>>   }
>>   
>> -static void guc_handle_context_reset(struct intel_guc *guc,
>> +static bool guc_handle_context_reset(struct intel_guc *guc,
>>   				     struct intel_context *ce)
>>   {
>>   	trace_intel_context_reset(ce);
>> @@ -2937,7 +2939,11 @@ static void guc_handle_context_reset(struct intel_guc *guc,
>>   		   !context_blocked(ce))) {
>>   		capture_error_state(guc, ce);
>>   		guc_context_replay(ce);
>> +
>> +		return false;
>>   	}
>> +
>> +	return true;
>>   }
>>   
>>   int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>> @@ -2945,6 +2951,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>>   {
>>   	struct intel_context *ce;
>>   	int desc_idx;
>> +	unsigned long flags;
>>   
>>   	if (unlikely(len != 1)) {
>>   		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
>> @@ -2952,11 +2959,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>>   	}
>>   
>>   	desc_idx = msg[0];
>> +
>> +	/*
>> +	 * The context lookup uses the xarray but lookups only require an RCU lock
>> +	 * not the full spinlock. So take the lock explicitly and keep it until the
>> +	 * context has been reference count locked to ensure it can't be destroyed
>> +	 * asynchronously until the reset is done.
>> +	 */
>> +	xa_lock_irqsave(&guc->context_lookup, flags);
>>   	ce = g2h_context_lookup(guc, desc_idx);
>> +	if (ce)
>> +		intel_context_get(ce);
>> +	xa_unlock_irqrestore(&guc->context_lookup, flags);
>> +
>>   	if (unlikely(!ce))
>>   		return -EPROTO;
>>   
>> -	guc_handle_context_reset(guc, ce);
>> +	if (guc_handle_context_reset(guc, ce))
>> +		intel_context_put(ce);
>>   
>>   	return 0;
>>   }
>> -- 
>> 2.32.0
>>
Matthew Brost Sept. 14, 2021, 11:36 p.m. UTC | #3
On Tue, Sep 14, 2021 at 04:29:21PM +0200, Daniel Vetter wrote:
> On Mon, Sep 13, 2021 at 10:09:56PM -0700, Matthew Brost wrote:
> > From: John Harrison <John.C.Harrison@Intel.com>
> > 
> > When i915 receives a context reset notification from GuC, it triggers
> > an error capture before resetting any outstanding requests of that
> > context. Unfortunately, the error capture is not a time-bound
> > operation. In certain situations it can take a long time, particularly
> > when multiple large LMEM buffers must be read back and encoded. If
> > this delay is longer than other timeouts (heartbeat, test recovery,
> > etc.) then a full GT reset can be triggered in the middle.
> > 
> > That can result in the context being reset by GuC actually being
> > destroyed before the error capture completes and the GuC submission
> > code resumes. Thus, the GuC side can start dereferencing stale
> > pointers and Bad Things ensue.
> > 
> > So add a refcount get of the context during the entire reset
> > operation. That way, the context can't be destroyed part way through
> > no matter what other resets or user interactions occur.
> > 
> > v2:
> >  (Matthew Brost)
> >   - Update patch to work with async error capture
> > 
> > Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> 
> This sounds like a fundamental issue in our reset/scheduler design. If we
> have multiple timeout-things working in parallel, then there's going to be
> an endless whack-a-mole fireworks show.
> 

We have two different possible reset paths.

One is initiated from the GuC on a per-context basis. Each of these
resets is executed serially in the order in which it is received, and
each context's reset is protected by a lock.

The other is a full GT reset, typically triggered from the heartbeat or
a selftest. Only one of these can happen at a time as it is protected by
a reset mutex. The full GT reset should flush all the in-flight
per-context resets before proceeding with the whole GT reset (after
patch #3 in this series). I believe this patch was written before patch
#3; at that point there was a race where a per-context reset and a GT
reset could happen at the same time, but that is no longer the case. The
commit message should be reworded to reflect that. All that being said,
I still believe the patch is correct to reference count the context
until after the error capture completes. As John H said, this is just a
standard ref count here.

> Reset is not a perf critical path (aside from media timeout, which guc
> handles internally anyway). Simplicity trumps everything else. The fix
> here is to guarantee that anything related to reset cannot happen in
> parallel with anything else related to reset/timeout. At least on a
> per-engine (and really on a per-reset domain) basis.
> 
> The fix we've developed for drm/sched is that the driver can allocate a
> single-thread work queue, pass it to each drm/sched instance, and all
> timeout handling is run in there.
> 
> For i915 it's more of a mess since we have a ton of random things that
> time out/reset potentially going on in parallel. But that's the design we
> should head towards.
>

See above; the parallel reset issue is fixed by patch #3 in this series.

> _not_ sprinkling random refcounts all over the place until most of the
> oops/splats disappear. That's cargo-culting, not engineering.

See above, I believe the ref count is still correct.

Matt

> -Daniel
> 
> > ---
> >  .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 24 +++++++++++++++++--
> >  1 file changed, 22 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index 1986a57b52cc..02917fc4d4a8 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -2888,6 +2888,8 @@ static void capture_worker_func(struct work_struct *w)
> >  	intel_engine_set_hung_context(engine, ce);
> >  	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
> >  		i915_capture_error_state(gt, ce->engine->mask);
> > +
> > +	intel_context_put(ce);
> >  }
> >  
> >  static void capture_error_state(struct intel_guc *guc,
> > @@ -2924,7 +2926,7 @@ static void guc_context_replay(struct intel_context *ce)
> >  	tasklet_hi_schedule(&sched_engine->tasklet);
> >  }
> >  
> > -static void guc_handle_context_reset(struct intel_guc *guc,
> > +static bool guc_handle_context_reset(struct intel_guc *guc,
> >  				     struct intel_context *ce)
> >  {
> >  	trace_intel_context_reset(ce);
> > @@ -2937,7 +2939,11 @@ static void guc_handle_context_reset(struct intel_guc *guc,
> >  		   !context_blocked(ce))) {
> >  		capture_error_state(guc, ce);
> >  		guc_context_replay(ce);
> > +
> > +		return false;
> >  	}
> > +
> > +	return true;
> >  }
> >  
> >  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > @@ -2945,6 +2951,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> >  {
> >  	struct intel_context *ce;
> >  	int desc_idx;
> > +	unsigned long flags;
> >  
> >  	if (unlikely(len != 1)) {
> >  		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> > @@ -2952,11 +2959,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> >  	}
> >  
> >  	desc_idx = msg[0];
> > +
> > +	/*
> > +	 * The context lookup uses the xarray but lookups only require an RCU lock
> > +	 * not the full spinlock. So take the lock explicitly and keep it until the
> > +	 * context has been reference count locked to ensure it can't be destroyed
> > +	 * asynchronously until the reset is done.
> > +	 */
> > +	xa_lock_irqsave(&guc->context_lookup, flags);
> >  	ce = g2h_context_lookup(guc, desc_idx);
> > +	if (ce)
> > +		intel_context_get(ce);
> > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > +
> >  	if (unlikely(!ce))
> >  		return -EPROTO;
> >  
> > -	guc_handle_context_reset(guc, ce);
> > +	if (guc_handle_context_reset(guc, ce))
> > +		intel_context_put(ce);
> >  
> >  	return 0;
> >  }
> > -- 
> > 2.32.0
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
Daniel Vetter Sept. 17, 2021, 12:37 p.m. UTC | #4
On Tue, Sep 14, 2021 at 04:23:26PM -0700, John Harrison wrote:
> On 9/14/2021 07:29, Daniel Vetter wrote:
> > On Mon, Sep 13, 2021 at 10:09:56PM -0700, Matthew Brost wrote:
> > > From: John Harrison <John.C.Harrison@Intel.com>
> > > 
> > > When i915 receives a context reset notification from GuC, it triggers
> > > an error capture before resetting any outstanding requests of that
> > > context. Unfortunately, the error capture is not a time-bound
> > > operation. In certain situations it can take a long time, particularly
> > > when multiple large LMEM buffers must be read back and encoded. If
> > > this delay is longer than other timeouts (heartbeat, test recovery,
> > > etc.) then a full GT reset can be triggered in the middle.
> > > 
> > > That can result in the context being reset by GuC actually being
> > > destroyed before the error capture completes and the GuC submission
> > > code resumes. Thus, the GuC side can start dereferencing stale
> > > pointers and Bad Things ensue.
> > > 
> > > So add a refcount get of the context during the entire reset
> > > operation. That way, the context can't be destroyed part way through
> > > no matter what other resets or user interactions occur.
> > > 
> > > v2:
> > >   (Matthew Brost)
> > >    - Update patch to work with async error capture
> > > 
> > > Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > This sounds like a fundamental issue in our reset/scheduler design. If we
> > have multiple timeout-things working in parallel, then there's going to be
> > an endless whack-a-mole fireworks show.
> > 
> > Reset is not a perf critical path (aside from media timeout, which guc
> > handles internally anyway). Simplicity trumps everything else. The fix
> > here is to guarantee that anything related to reset cannot happen in
> > parallel with anything else related to reset/timeout. At least on a
> > per-engine (and really on a per-reset domain) basis.
> > 
> > The fix we've developed for drm/sched is that the driver can allocate a
> > single-thread work queue, pass it to each drm/sched instance, and all
> > timeout handling is run in there.
> > 
> > For i915 it's more of a mess since we have a ton of random things that
> > time out/reset potentially going on in parallel. But that's the design we
> > should head towards.
> > 
> > _not_ sprinkling random refcounts all over the place until most of the
> > oops/splats disappear. That's cargo-culting, not engineering.
> > -Daniel
> Not sure I follow this.
> 
> The code pulls an intel_context object out of a structure and proceeds to
> dereference it in what can be a slow piece of code that is running in a
> worker thread and is therefore already asynchronous to other activity.
> Acquiring a reference count on that object while holding its pointer is
> standard practice, I thought. That's the whole point of reference counting!
> 
> To be clear, this is not adding a brand new reference count object. It is
> merely taking the correct lock on an object while accessing that object.
> 
> It uses the xarray's lock while accessing the xarray and then the ce's lock
> while accessing the ce and makes sure to overlap the two to prevent any race
> conditions. To me, that seems like a) correct object access practice and b)
> it should have been there in the first place.

Sure we do reference count. And we reference count intel_context. But we
shouldn't just use a reference count because it's there and looks
convenient.

This is reset code. If the intel_context can go away while we process the
reset affecting it, there's a gigantic bug going on. Doing a bit more
reference counting locally just makes the race window small enough that we
no longer hit it easily. It doesn't fix a bug anywhere, and if it does,
then the locking looks really, really fragile.

The proper fix here is to break this back down into data structures,
figure out what exactly the invariants are (e.g. it shouldn't be possible
to try processing an intel_context when it's no longer in need of
processing), and then figure out the locking scheme you need.

For the intel_context refcount we currently have (if I got them all):
- gem_context -> intel_context refcount
- some temp reference during execbuf
- i915_request->context so that we don't tear down the context while it's
  still running stuff

The latter should be enough to also make sure the context doesn't
disappear while guc code is processing it. If that's not enough, then we
need to analyze this, figure out why/where, and rework this rules around
locking/refcounting so that things are clean, simple, understandable and
actually get the job done.

This patch otoh looks a lot like "if we whack this refcount the oops goes
away, therefore it must be the right fix". And that's not how locking
works, at least not maintainable locking.

Cheers, Daniel

> 
> John.
> 
> 
> > 
> > > ---
> > >   .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 24 +++++++++++++++++--
> > >   1 file changed, 22 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > index 1986a57b52cc..02917fc4d4a8 100644
> > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > @@ -2888,6 +2888,8 @@ static void capture_worker_func(struct work_struct *w)
> > >   	intel_engine_set_hung_context(engine, ce);
> > >   	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
> > >   		i915_capture_error_state(gt, ce->engine->mask);
> > > +
> > > +	intel_context_put(ce);
> > >   }
> > >   static void capture_error_state(struct intel_guc *guc,
> > > @@ -2924,7 +2926,7 @@ static void guc_context_replay(struct intel_context *ce)
> > >   	tasklet_hi_schedule(&sched_engine->tasklet);
> > >   }
> > > -static void guc_handle_context_reset(struct intel_guc *guc,
> > > +static bool guc_handle_context_reset(struct intel_guc *guc,
> > >   				     struct intel_context *ce)
> > >   {
> > >   	trace_intel_context_reset(ce);
> > > @@ -2937,7 +2939,11 @@ static void guc_handle_context_reset(struct intel_guc *guc,
> > >   		   !context_blocked(ce))) {
> > >   		capture_error_state(guc, ce);
> > >   		guc_context_replay(ce);
> > > +
> > > +		return false;
> > >   	}
> > > +
> > > +	return true;
> > >   }
> > >   int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > > @@ -2945,6 +2951,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > >   {
> > >   	struct intel_context *ce;
> > >   	int desc_idx;
> > > +	unsigned long flags;
> > >   	if (unlikely(len != 1)) {
> > >   		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> > > @@ -2952,11 +2959,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > >   	}
> > >   	desc_idx = msg[0];
> > > +
> > > +	/*
> > > +	 * The context lookup uses the xarray but lookups only require an RCU lock
> > > +	 * not the full spinlock. So take the lock explicitly and keep it until the
> > > +	 * context has been reference count locked to ensure it can't be destroyed
> > > +	 * asynchronously until the reset is done.
> > > +	 */
> > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > >   	ce = g2h_context_lookup(guc, desc_idx);
> > > +	if (ce)
> > > +		intel_context_get(ce);
> > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > +
> > >   	if (unlikely(!ce))
> > >   		return -EPROTO;
> > > -	guc_handle_context_reset(guc, ce);
> > > +	if (guc_handle_context_reset(guc, ce))
> > > +		intel_context_put(ce);
> > >   	return 0;
> > >   }
> > > -- 
> > > 2.32.0
> > > 
>
Daniel Vetter Sept. 17, 2021, 12:40 p.m. UTC | #5
On Tue, Sep 14, 2021 at 04:36:54PM -0700, Matthew Brost wrote:
> On Tue, Sep 14, 2021 at 04:29:21PM +0200, Daniel Vetter wrote:
> > On Mon, Sep 13, 2021 at 10:09:56PM -0700, Matthew Brost wrote:
> > > From: John Harrison <John.C.Harrison@Intel.com>
> > > 
> > > When i915 receives a context reset notification from GuC, it triggers
> > > an error capture before resetting any outstanding requests of that
> > > context. Unfortunately, the error capture is not a time-bound
> > > operation. In certain situations it can take a long time, particularly
> > > when multiple large LMEM buffers must be read back and encoded. If
> > > this delay is longer than other timeouts (heartbeat, test recovery,
> > > etc.) then a full GT reset can be triggered in the middle.
> > > 
> > > That can result in the context being reset by GuC actually being
> > > destroyed before the error capture completes and the GuC submission
> > > code resumes. Thus, the GuC side can start dereferencing stale
> > > pointers and Bad Things ensue.
> > > 
> > > So add a refcount get of the context during the entire reset
> > > operation. That way, the context can't be destroyed part way through
> > > no matter what other resets or user interactions occur.
> > > 
> > > v2:
> > >  (Matthew Brost)
> > >   - Update patch to work with async error capture
> > > 
> > > Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > 
> > This sounds like a fundamental issue in our reset/scheduler design. If we
> > have multiple timeout-things working in parallel, then there's going to be
> > an endless whack-a-mole fireworks show.
> > 
> 
> We have two different possible reset paths.
> 
> One is initiated from the GuC on a per-context basis. Each of these
> resets is executed serially in the order in which it is received, and
> each context's reset is protected by a lock.
> 
> The other is a full GT reset, typically triggered from the heartbeat or
> a selftest. Only one of these can happen at a time as it is protected
> by a reset mutex. The full GT reset should flush all the in-flight
> per-context resets before proceeding with the whole GT reset (after
> patch #3 in this series). I believe this patch was written before patch
> #3; at that point there was a race where a per-context reset and a GT
> reset could happen at the same time, but that is no longer the case.
> The commit message should be reworded to reflect that. All that being
> said, I still believe the patch is correct to reference count the
> context until after the error capture completes. As John H said, this
> is just a standard ref count here.

Yeah, the direction in drm/sched, and the one we should follow here, is
that resets can't happen in parallel. At least not when they touch the
same structs. So a per-engine reset can proceed as-is, but anything a
level higher (GuC reset) needs to block out everything else.

And yes heartbeat and timeout and all that should follow this pattern too.

If we can have multiple ongoing resets touching the same engine in
parallel, then shit will hit the fan.

I'm also involved in a discussion with amdgpu folks for similar reasons.
You can't fix this with some hacks locally.

Wrt "it's just standard refcounting", see my other reply.

> > Reset is not a perf critical path (aside from media timeout, which guc
> > handles internally anyway). Simplicity trumps everything else. The fix
> > here is to guarantee that anything related to reset cannot happen in
> > parallel with anything else related to reset/timeout. At least on a
> > per-engine (and really on a per-reset domain) basis.
> > 
> > The fix we've developed for drm/sched is that the driver can allocate a
> > single-thread work queue, pass it to each drm/sched instance, and all
> > timeout handling is run in there.
> > 
> > For i915 it's more of a mess since we have a ton of random things that
> > time out/reset potentially going on in parallel. But that's the design we
> > should head towards.
> >
> 
> See above; the parallel reset issue is fixed by patch #3 in this series.
> 
> > _not_ sprinkling random refcounts all over the place until most of the
> > oops/splats disappear. That's cargo-culting, not engineering.
> 
> See above, I believe the ref count is still correct.

So with patch #3 we don't need this patch anymore? If so, then we should
drop it. And document the exact rules we rely on to guarantee that the
context doesn't disappear prematurely (in the kerneldoc for the involved
structs).
-Daniel

> 
> Matt
> 
> > -Daniel
> > 
> > > ---
> > >  .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 24 +++++++++++++++++--
> > >  1 file changed, 22 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > index 1986a57b52cc..02917fc4d4a8 100644
> > > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > > @@ -2888,6 +2888,8 @@ static void capture_worker_func(struct work_struct *w)
> > >  	intel_engine_set_hung_context(engine, ce);
> > >  	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
> > >  		i915_capture_error_state(gt, ce->engine->mask);
> > > +
> > > +	intel_context_put(ce);
> > >  }
> > >  
> > >  static void capture_error_state(struct intel_guc *guc,
> > > @@ -2924,7 +2926,7 @@ static void guc_context_replay(struct intel_context *ce)
> > >  	tasklet_hi_schedule(&sched_engine->tasklet);
> > >  }
> > >  
> > > -static void guc_handle_context_reset(struct intel_guc *guc,
> > > +static bool guc_handle_context_reset(struct intel_guc *guc,
> > >  				     struct intel_context *ce)
> > >  {
> > >  	trace_intel_context_reset(ce);
> > > @@ -2937,7 +2939,11 @@ static void guc_handle_context_reset(struct intel_guc *guc,
> > >  		   !context_blocked(ce))) {
> > >  		capture_error_state(guc, ce);
> > >  		guc_context_replay(ce);
> > > +
> > > +		return false;
> > >  	}
> > > +
> > > +	return true;
> > >  }
> > >  
> > >  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > > @@ -2945,6 +2951,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > >  {
> > >  	struct intel_context *ce;
> > >  	int desc_idx;
> > > +	unsigned long flags;
> > >  
> > >  	if (unlikely(len != 1)) {
> > >  		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> > > @@ -2952,11 +2959,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> > >  	}
> > >  
> > >  	desc_idx = msg[0];
> > > +
> > > +	/*
> > > +	 * The context lookup uses the xarray but lookups only require an RCU lock
> > > +	 * not the full spinlock. So take the lock explicitly and keep it until the
> > > +	 * context has been reference count locked to ensure it can't be destroyed
> > > +	 * asynchronously until the reset is done.
> > > +	 */
> > > +	xa_lock_irqsave(&guc->context_lookup, flags);
> > >  	ce = g2h_context_lookup(guc, desc_idx);
> > > +	if (ce)
> > > +		intel_context_get(ce);
> > > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > > +
> > >  	if (unlikely(!ce))
> > >  		return -EPROTO;
> > >  
> > > -	guc_handle_context_reset(guc, ce);
> > > +	if (guc_handle_context_reset(guc, ce))
> > > +		intel_context_put(ce);
> > >  
> > >  	return 0;
> > >  }
> > > -- 
> > > 2.32.0
> > > 
> > 
> > -- 
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch

Patch

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 1986a57b52cc..02917fc4d4a8 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2888,6 +2888,8 @@  static void capture_worker_func(struct work_struct *w)
 	intel_engine_set_hung_context(engine, ce);
 	with_intel_runtime_pm(&i915->runtime_pm, wakeref)
 		i915_capture_error_state(gt, ce->engine->mask);
+
+	intel_context_put(ce);
 }
 
 static void capture_error_state(struct intel_guc *guc,
@@ -2924,7 +2926,7 @@  static void guc_context_replay(struct intel_context *ce)
 	tasklet_hi_schedule(&sched_engine->tasklet);
 }
 
-static void guc_handle_context_reset(struct intel_guc *guc,
+static bool guc_handle_context_reset(struct intel_guc *guc,
 				     struct intel_context *ce)
 {
 	trace_intel_context_reset(ce);
@@ -2937,7 +2939,11 @@  static void guc_handle_context_reset(struct intel_guc *guc,
 		   !context_blocked(ce))) {
 		capture_error_state(guc, ce);
 		guc_context_replay(ce);
+
+		return false;
 	}
+
+	return true;
 }
 
 int intel_guc_context_reset_process_msg(struct intel_guc *guc,
@@ -2945,6 +2951,7 @@  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
 {
 	struct intel_context *ce;
 	int desc_idx;
+	unsigned long flags;
 
 	if (unlikely(len != 1)) {
 		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
@@ -2952,11 +2959,24 @@  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
 	}
 
 	desc_idx = msg[0];
+
+	/*
+	 * The context lookup uses the xarray but lookups only require an RCU lock
+	 * not the full spinlock. So take the lock explicitly and keep it until the
+	 * context has been reference count locked to ensure it can't be destroyed
+	 * asynchronously until the reset is done.
+	 */
+	xa_lock_irqsave(&guc->context_lookup, flags);
 	ce = g2h_context_lookup(guc, desc_idx);
+	if (ce)
+		intel_context_get(ce);
+	xa_unlock_irqrestore(&guc->context_lookup, flags);
+
 	if (unlikely(!ce))
 		return -EPROTO;
 
-	guc_handle_context_reset(guc, ce);
+	if (guc_handle_context_reset(guc, ce))
+		intel_context_put(ce);
 
 	return 0;
 }