
[2/2] drm/i915/execlists: Skip nested spinlock for validating pending

Message ID: 20191203115339.2943374-2-chris@chris-wilson.co.uk (mailing list archive)
State: New, archived
Series: [1/2] drm/i915/execlists: Add a couple more validity checks to assert_pending()

Commit Message

Chris Wilson Dec. 3, 2019, 11:53 a.m. UTC
Only along the submission path can we guarantee that the locked request
is indeed from a foreign engine, and so the nesting of engine/rq is
permissible. On the submission tasklet (process_csb()), we may find
ourselves competing with the normal nesting of rq/engine, invalidating
our nesting. As we only use the spinlock for debug purposes, skip the
debug if we cannot acquire the spinlock for safe validation - catching
99% of the bugs is better than causing a hard lockup.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
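
For illustration, the pattern the patch moves to (best-effort, trylock-guarded debug validation) can be sketched as a standalone helper. The struct pending_stub type and check_one_pending() name below are invented for the example; they only stand in for the relevant parts of i915_request and assert_pending_valid():

#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical stand-in for the fields of i915_request used here. */
struct pending_stub {
	spinlock_t lock;
	bool completed;
};

static bool check_one_pending(struct pending_stub *rq)
{
	bool ok = true;

	/*
	 * Best-effort only: if the lock is contended (e.g. the normal
	 * rq/engine nesting is in progress elsewhere), skip the checks
	 * rather than risk a lock inversion or a hard lockup.
	 */
	if (!spin_trylock(&rq->lock))
		return true;	/* cannot validate safely; report ok */

	if (rq->completed)
		goto unlock;	/* completed requests need no further checks */

	/* ... the real code runs several per-request sanity checks here ... */

unlock:
	spin_unlock(&rq->lock);
	return ok;
}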

Comments

Tvrtko Ursulin Dec. 3, 2019, 3:38 p.m. UTC | #1
On 03/12/2019 11:53, Chris Wilson wrote:
> Only along the submission path can we guarantee that the locked request
> is indeed from a foreign engine, and so the nesting of engine/rq is
> permissible. On the submission tasklet (process_csb()), we may find
> ourselves competing with the normal nesting of rq/engine, invalidating
> our nesting. As we only use the spinlock for debug purposes, skip the
> debug if we cannot acquire the spinlock for safe validation - catching
> 99% of the bugs is better than causing a hard lockup.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>   drivers/gpu/drm/i915/gt/intel_lrc.c | 7 +++----
>   1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index 37ab9742abe7..b411e4ce6771 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -1300,7 +1300,6 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
>   	}
>   
>   	for (port = execlists->pending; (rq = *port); port++) {
> -		unsigned long flags;
>   		bool ok = true;
>   
>   		GEM_BUG_ON(!kref_read(&rq->fence.refcount));
> @@ -1315,8 +1314,8 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
>   		ce = rq->hw_context;
>   
>   		/* Hold tightly onto the lock to prevent concurrent retires! */
> -		spin_lock_irqsave_nested(&rq->lock, flags,
> -					 SINGLE_DEPTH_NESTING);
> +		if (!spin_trylock(&rq->lock))
> +			continue;
>   
>   		if (i915_request_completed(rq))
>   			goto unlock;
> @@ -1347,7 +1346,7 @@ assert_pending_valid(const struct intel_engine_execlists *execlists,
>   		}
>   
>   unlock:
> -		spin_unlock_irqrestore(&rq->lock, flags);
> +		spin_unlock(&rq->lock);
>   		if (!ok)
>   			return false;
>   	}
> 

With Fixes: and irqsave variant:

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
Chris Wilson Dec. 3, 2019, 3:40 p.m. UTC | #2
Quoting Tvrtko Ursulin (2019-12-03 15:38:20)
> 
> On 03/12/2019 11:53, Chris Wilson wrote:
> > [snip]
> 
> With Fixes: and irqsave variant:
> 
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Predictive patching: https://patchwork.freedesktop.org/patch/343495/?series=70375&rev=1

Ta,
-Chris
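
The follow-up Chris links to is expected to add a Fixes: tag and use the irqsave form of the trylock, as Tvrtko requests, presumably because rq->lock can also be taken from interrupt context and holding it with interrupts enabled could still lock up against a local IRQ. A minimal sketch of that form, reusing the hypothetical pending_stub type from the earlier sketch (the linked patch is the authoritative version):

/*
 * Sketch of the irqsave trylock variant requested in review; illustration
 * only, the actual change is the follow-up patch linked above.
 */
static bool check_one_pending_irqsave(struct pending_stub *rq)
{
	unsigned long flags;
	bool ok = true;

	/*
	 * Save and disable local interrupts while the lock is held: if
	 * rq->lock can also be taken from interrupt context, holding it
	 * with interrupts enabled could still lock up against a local IRQ.
	 */
	if (!spin_trylock_irqsave(&rq->lock, flags))
		return true;	/* contended; skip the debug checks */

	if (rq->completed)
		goto unlock;	/* completed requests need no further checks */

	/* ... per-request sanity checks ... */

unlock:
	spin_unlock_irqrestore(&rq->lock, flags);
	return ok;
}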

Patch

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 37ab9742abe7..b411e4ce6771 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -1300,7 +1300,6 @@  assert_pending_valid(const struct intel_engine_execlists *execlists,
 	}
 
 	for (port = execlists->pending; (rq = *port); port++) {
-		unsigned long flags;
 		bool ok = true;
 
 		GEM_BUG_ON(!kref_read(&rq->fence.refcount));
@@ -1315,8 +1314,8 @@  assert_pending_valid(const struct intel_engine_execlists *execlists,
 		ce = rq->hw_context;
 
 		/* Hold tightly onto the lock to prevent concurrent retires! */
-		spin_lock_irqsave_nested(&rq->lock, flags,
-					 SINGLE_DEPTH_NESTING);
+		if (!spin_trylock(&rq->lock))
+			continue;
 
 		if (i915_request_completed(rq))
 			goto unlock;
@@ -1347,7 +1346,7 @@  assert_pending_valid(const struct intel_engine_execlists *execlists,
 		}
 
 unlock:
-		spin_unlock_irqrestore(&rq->lock, flags);
+		spin_unlock(&rq->lock);
 		if (!ok)
 			return false;
 	}