drm/i915/execlists: Drop setting sibling priority hint on virtual engines

Message ID 20200325101358.12231-1-chris@chris-wilson.co.uk
State New
Series
  • drm/i915/execlists: Drop setting sibling priority hint on virtual engines

Commit Message

Chris Wilson March 25, 2020, 10:13 a.m. UTC
We set the priority hint on execlists to avoid executing the tasklet
when we know that there will be no change in execution order. However,
as we set it from the virtual engine for all siblings, but only one
physical engine may respond, we leave the hint set on the others,
stopping direct submission that could otherwise take place.

If we do not set the hint, we may attempt direct submission even if we
don't expect to submit. If we set the hint, we may not do any submission
until the tasklet is run (and sometimes we may park the engine before
that has had a chance). Ergo there's only a minor ill-effect on mixed
virtual/physical engine workloads, where we may try and fail to do direct
submission more often than required. (Pure virtual engine workloads
will have redundant tasklet execution suppressed as normal.)

Closes: https://gitlab.freedesktop.org/drm/intel/issues/1522
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/intel_lrc.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Comments

Tvrtko Ursulin March 25, 2020, 10:43 a.m. UTC | #1
On 25/03/2020 10:13, Chris Wilson wrote:
> We set the priority hint on execlists to avoid executing the tasklet for
> when we know that there will be no change in execution order. However,
> as we set it from the virtual engine for all siblings, but only one
> physical engine may respond, we leave the hint set on the others
> stopping direct submission that could take place.
> 
> If we do not set the hint, we may attempt direct submission even if we
> don't expect to submit. If we set the hint, we may not do any submission
> until the tasklet is run (and sometimes we may park the engine before
> that has had a chance). Ergo there's only a minor ill-effect on mixed
> virtual/physical engine workloads where we may try and fail to do direct
> submission more often than required. (Pure virtual / engine workloads
> will have redundant tasklet execution suppressed as normal.)
> 
> Closes: https://gitlab.freedesktop.org/drm/intel/issues/1522
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/gt/intel_lrc.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> index 210f60e14ef4..f88d3b95c4e1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> @@ -4985,10 +4985,8 @@ static void virtual_submission_tasklet(unsigned long data)
>   submit_engine:
>   		GEM_BUG_ON(RB_EMPTY_NODE(&node->rb));
>   		node->prio = prio;
> -		if (first && prio > sibling->execlists.queue_priority_hint) {
> -			sibling->execlists.queue_priority_hint = prio;
> +		if (first && prio > sibling->execlists.queue_priority_hint)
>   			tasklet_hi_schedule(&sibling->execlists.tasklet);
> -		}
>   
>   		spin_unlock(&sibling->active.lock);
>   	}
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

The queue_priority_hint scheme with the virtual engine design is a bit 
problematic, since we have no way to unwind. And it's spreading its 
tentacles all over the place. Oh well. Could we one day consider just 
peeking at the top of the tree(s)?

Regards,

Tvrtko
Chris Wilson March 25, 2020, 10:53 a.m. UTC | #2
Quoting Tvrtko Ursulin (2020-03-25 10:43:55)
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> The queue_priority_hint scheme with virtual engine design is a bit 
> problematic, since we have no way to unwind. And it's spreading its 
> tentacles all over the place. Oh well.

Hear, hear. 

> Could we one day consider just peeking at the top of the tree(s)

The problem is that we have a single attention bit (tasklet_schedule).
So if we add a new virtual engine below the top of the tree, and we race
with two engines pulling from the virtual trees, we need both engines to
claim a virtual request or else we waste the tasklet and do not have a
second wakeup queued.

I don't think we can drop rechecking the virtual rbtree if we lose the
race in the execlists tasklet.
-Chris

Patch

diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 210f60e14ef4..f88d3b95c4e1 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -4985,10 +4985,8 @@ static void virtual_submission_tasklet(unsigned long data)
 submit_engine:
 		GEM_BUG_ON(RB_EMPTY_NODE(&node->rb));
 		node->prio = prio;
-		if (first && prio > sibling->execlists.queue_priority_hint) {
-			sibling->execlists.queue_priority_hint = prio;
+		if (first && prio > sibling->execlists.queue_priority_hint)
 			tasklet_hi_schedule(&sibling->execlists.tasklet);
-		}
 
 		spin_unlock(&sibling->active.lock);
 	}