
drm/i915/guc: Refcount context during error capture

Message ID 20210913211016.2299138-1-John.C.Harrison@Intel.com (mailing list archive)
State New, archived
Series drm/i915/guc: Refcount context during error capture

Commit Message

John Harrison Sept. 13, 2021, 9:10 p.m. UTC
From: John Harrison <John.C.Harrison@Intel.com>

When i915 receives a context reset notification from GuC, it triggers
an error capture before resetting any outstanding requests of that
context. Unfortunately, the error capture is not a time-bound
operation. In certain situations it can take a long time, particularly
when multiple large LMEM buffers must be read back and encoded. If
this delay is longer than other timeouts (heartbeat, test recovery,
etc.) then a full GT reset can be triggered in the middle.

That can result in the context that GuC is resetting being destroyed
before the error capture completes and the GuC submission code
resumes. The GuC submission code can then start dereferencing stale
pointers and Bad Things ensue.

So take an extra reference on the context for the duration of the
reset operation. That way, the context can't be destroyed partway
through, no matter what other resets or user interactions occur.

Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
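
For illustration, the reason the reference has to be taken while the
xa_lock is still held (rather than after g2h_context_lookup() returns) is
that a final put from another path could otherwise free the context in the
window between lookup and get. A condensed view of the pattern used by the
patch below:

	xa_lock_irqsave(&guc->context_lookup, flags);
	ce = g2h_context_lookup(guc, desc_idx);
	if (ce)
		intel_context_get(ce);	/* pin before dropping the lock */
	xa_unlock_irqrestore(&guc->context_lookup, flags);

	if (unlikely(!ce))
		return -EPROTO;

	guc_handle_context_reset(guc, ce);
	intel_context_put(ce);		/* reset handling done, drop the pin */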

Comments

Matthew Brost Sept. 13, 2021, 9:17 p.m. UTC | #1
On Mon, Sep 13, 2021 at 02:10:16PM -0700, John.C.Harrison@Intel.com wrote:
> From: John Harrison <John.C.Harrison@Intel.com>
> 
> When i915 receives a context reset notification from GuC, it triggers
> an error capture before resetting any outstanding requests of that
> context. Unfortunately, the error capture is not a time-bound
> operation. In certain situations it can take a long time, particularly
> when multiple large LMEM buffers must be read back and encoded. If
> this delay is longer than other timeouts (heartbeat, test recovery,
> etc.) then a full GT reset can be triggered in the middle.
> 
> That can result in the context that GuC is resetting being destroyed
> before the error capture completes and the GuC submission code
> resumes. The GuC submission code can then start dereferencing stale
> pointers and Bad Things ensue.
> 
> So take an extra reference on the context for the duration of the
> reset operation. That way, the context can't be destroyed partway
> through, no matter what other resets or user interactions occur.
> 
> Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index c7a41802b448..7291fd8f68a6 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -2920,6 +2920,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>  {
>  	struct intel_context *ce;
>  	int desc_idx;
> +	unsigned long flags;
>  
>  	if (unlikely(len != 1)) {
>  		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> @@ -2927,11 +2928,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
>  	}
>  
>  	desc_idx = msg[0];
> +
> +	/*
> +	 * The context lookup uses the xarray but a lookup only requires the RCU
> +	 * read lock, not the full spinlock. So take the spinlock explicitly and
> +	 * hold it until a reference has been taken on the context, to ensure it
> +	 * can't be destroyed asynchronously before the reset handling is done.
> +	 */
> +	xa_lock_irqsave(&guc->context_lookup, flags);
>  	ce = g2h_context_lookup(guc, desc_idx);
> +	if (ce)
> +		intel_context_get(ce);
> +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> +
>  	if (unlikely(!ce))
>  		return -EPROTO;
>  
>  	guc_handle_context_reset(guc, ce);
> +	intel_context_put(ce);

So this is going to directly conflict with a patch that I'm about to
post, as I'm going to change the error capture to an async operation.
In that case, the intel_context_put() would need to be done once that
operation completes (a rough sketch of this follows below). I'll likely
pull this patch into that series. I'd expect it to be posted by the end
of the day.

Matt 

>  
>  	return 0;
>  }
> -- 
> 2.25.1
>
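
For illustration only, a minimal sketch of the kind of change Matt
describes: if the error capture is queued to a worker, the reference taken
in the G2H handler is handed to that worker and only dropped once the
capture has finished. The names here (guc_capture_work, capture_worker_fn)
are hypothetical and not taken from the actual series:

	/* Hypothetical sketch, assuming the capture moves to a workqueue. */
	struct guc_capture_work {
		struct work_struct base;
		struct intel_context *ce;	/* reference taken at lookup time */
	};

	static void capture_worker_fn(struct work_struct *w)
	{
		struct guc_capture_work *cw =
			container_of(w, struct guc_capture_work, base);

		/* ... perform the (potentially slow) error capture for cw->ce ... */

		intel_context_put(cw->ce);	/* release only once the capture is done */
		kfree(cw);
	}

The key point is that the intel_context_put() moves from the end of
intel_guc_context_reset_process_msg() to the end of whatever asynchronous
work actually consumes the context.
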
Daniel Vetter Sept. 14, 2021, 8:02 p.m. UTC | #2
On Mon, Sep 13, 2021 at 02:17:42PM -0700, Matthew Brost wrote:
> On Mon, Sep 13, 2021 at 02:10:16PM -0700, John.C.Harrison@Intel.com wrote:
> > From: John Harrison <John.C.Harrison@Intel.com>
> > 
> > When i915 receives a context reset notification from GuC, it triggers
> > an error capture before resetting any outstanding requests of that
> > context. Unfortunately, the error capture is not a time-bound
> > operation. In certain situations it can take a long time, particularly
> > when multiple large LMEM buffers must be read back and encoded. If
> > this delay is longer than other timeouts (heartbeat, test recovery,
> > etc.) then a full GT reset can be triggered in the middle.
> > 
> > That can result in the context that GuC is resetting being destroyed
> > before the error capture completes and the GuC submission code
> > resumes. The GuC submission code can then start dereferencing stale
> > pointers and Bad Things ensue.
> > 
> > So take an extra reference on the context for the duration of the
> > reset operation. That way, the context can't be destroyed partway
> > through, no matter what other resets or user interactions occur.
> > 
> > Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
> > ---
> >  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > index c7a41802b448..7291fd8f68a6 100644
> > --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> > @@ -2920,6 +2920,7 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> >  {
> >  	struct intel_context *ce;
> >  	int desc_idx;
> > +	unsigned long flags;
> >  
> >  	if (unlikely(len != 1)) {
> >  		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
> > @@ -2927,11 +2928,24 @@ int intel_guc_context_reset_process_msg(struct intel_guc *guc,
> >  	}
> >  
> >  	desc_idx = msg[0];
> > +
> > +	/*
> > +	 * The context lookup uses the xarray but a lookup only requires the RCU
> > +	 * read lock, not the full spinlock. So take the spinlock explicitly and
> > +	 * hold it until a reference has been taken on the context, to ensure it
> > +	 * can't be destroyed asynchronously before the reset handling is done.
> > +	 */
> > +	xa_lock_irqsave(&guc->context_lookup, flags);
> >  	ce = g2h_context_lookup(guc, desc_idx);
> > +	if (ce)
> > +		intel_context_get(ce);
> > +	xa_unlock_irqrestore(&guc->context_lookup, flags);
> > +
> >  	if (unlikely(!ce))
> >  		return -EPROTO;
> >  
> >  	guc_handle_context_reset(guc, ce);
> > +	intel_context_put(ce);
> 
> So this is going to directly conflict with a patch that I'm about to
> post as I'm going to change the error capture to async operation. In
> that case the intel_context_put would need to be done once that op
> completes. I'll likely pull this patch into that series. I'd expect it
> to be posted by the end of the day.

tbh this entire thing is looking very scary. Somehow we can race with
other processing while we try to handle a reset. That's fragile at best.

The proper fix is to exclude these kinds of problems by design, by either
guaranteeing that no concurrent dequeuing of guc2host messages can happen,
or by holding appropriate locks, or by keeping track of anything pending
in a more controlled way (something like expected g2h messages as separate
tracking structs, sketched after this mail, instead of the current
spaghetti layering violation chaos we have for processing g2h messages).

Maybe we should use a few of these as examples when we go through the
locking engineering training with the GuC team.
-Daniel

> 
> Matt 
> 
> >  
> >  	return 0;
> >  }
> > -- 
> > 2.25.1
> >
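
To make the "expected g2h messages as separate structs" idea concrete, a
rough, purely hypothetical sketch of what such per-message tracking could
look like (none of these names exist in the driver today):

	/* Hypothetical per-message G2H tracking; names are illustrative only. */
	struct guc_pending_g2h {
		struct list_head link;		/* on a guc-wide pending list */
		u32 action;			/* which G2H message is expected */
		struct intel_context *ce;	/* reference held while outstanding */
	};

	/*
	 * A handler would look up and unlink the pending entry under a single
	 * lock, then own both the entry and the context reference until all
	 * processing (including any error capture) completes. Reset and fini
	 * paths would walk the same list to cancel or flush entries, so nothing
	 * can be freed while a message is still in flight.
	 */

This keeps the lifetime rules in one place instead of being re-derived
inside each individual G2H handler.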

Patch

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index c7a41802b448..7291fd8f68a6 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2920,6 +2920,7 @@  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
 {
 	struct intel_context *ce;
 	int desc_idx;
+	unsigned long flags;
 
 	if (unlikely(len != 1)) {
 		drm_err(&guc_to_gt(guc)->i915->drm, "Invalid length %u", len);
@@ -2927,11 +2928,24 @@  int intel_guc_context_reset_process_msg(struct intel_guc *guc,
 	}
 
 	desc_idx = msg[0];
+
+	/*
+	 * The context lookup uses the xarray but a lookup only requires the RCU
+	 * read lock, not the full spinlock. So take the spinlock explicitly and
+	 * hold it until a reference has been taken on the context, to ensure it
+	 * can't be destroyed asynchronously before the reset handling is done.
+	 */
+	xa_lock_irqsave(&guc->context_lookup, flags);
 	ce = g2h_context_lookup(guc, desc_idx);
+	if (ce)
+		intel_context_get(ce);
+	xa_unlock_irqrestore(&guc->context_lookup, flags);
+
 	if (unlikely(!ce))
 		return -EPROTO;
 
 	guc_handle_context_reset(guc, ce);
+	intel_context_put(ce);
 
 	return 0;
 }