
[2/2] drm/i915/gt: Do not schedule normal requests immediately along virtual

Message ID 20200526090753.11329-2-chris@chris-wilson.co.uk (mailing list archive)
State New, archived
Series [1/2] drm/i915: Reorder await_execution before await_request

Commit Message

Chris Wilson May 26, 2020, 9:07 a.m. UTC
When we push a virtual request onto the HW, we update the rq->engine to
point to the physical engine. A request that is then submitted by the
user that waits upon the virtual engine, but along the physical engine
in use, will then see that it is due to be submitted to the same engine
and take a shortcut (and be queued without waiting for the completion
fence). However, the virtual request may be preempted (either by higher
priority users, or by timeslicing) and removed from the physical engine
to be migrated over to one of its siblings. The dependent normal request
however is oblivious to the removal of the virtual request and remains
queued to execute on HW, believing that once it reaches the head of its
queue all of its predecessors will have completed executing!

v2: Beware restriction of signal->execution_mask prior to submission.

Fixes: 6d06779e8672 ("drm/i915: Load balancing across a virtual engine")
Testcase: igt/gem_exec_balancer/sliced
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: <stable@vger.kernel.org> # v5.3+
---
 drivers/gpu/drm/i915/i915_request.c | 25 +++++++++++++++++++++----
 1 file changed, 21 insertions(+), 4 deletions(-)
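The failure mode described above can be sketched as a toy model in plain C (hypothetical names, not actual i915 code): the pre-fix shortcut treated "same engine at await time" as a safe proxy for "same engine at execution time", which a migrating virtual request violates.

```c
#include <stdbool.h>

struct toy_engine { int id; };

struct toy_request {
	const struct toy_engine *engine;	/* engine currently backing the request */
};

/*
 * Pre-fix logic: if both requests sit on the same engine queue right now,
 * skip the submit fence.  A virtual request, however, may later be
 * preempted and migrated to a sibling engine, invalidating this check.
 */
static bool buggy_same_engine_shortcut(const struct toy_request *to,
				       const struct toy_request *from)
{
	return to->engine == from->engine;
}
```

Under this sketch, the shortcut is taken while the virtual request sits on the shared engine; after a simulated migration the assumption no longer holds, yet the waiter would already be queued.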

Comments

Tvrtko Ursulin May 26, 2020, 4 p.m. UTC | #1
On 26/05/2020 10:07, Chris Wilson wrote:
> [snip commit message and tags]
> 
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 33bbad623e02..0b07ccc7e9bc 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
>   	return 0;
>   }
>   
> +static int
> +await_request_submit(struct i915_request *to, struct i915_request *from)
> +{
> +	/*
> +	 * If we are waiting on a virtual engine, then it may be
> +	 * constrained to execute on a single engine *prior* to submission.
> +	 * When it is submitted, it will be first submitted to the virtual
> +	 * engine and then passed to the physical engine. We cannot allow
> +	 * the waiter to be submitted immediately to the physical engine
> +	 * as it may then bypass the virtual request.
> +	 */
> +	if (to->engine == READ_ONCE(from->engine))
> +		return i915_sw_fence_await_sw_fence_gfp(&to->submit,
> +							&from->submit,
> +							I915_FENCE_GFP);
> +	else

When can the engines be different while the mask test below still brought us here?

Regards,

Tvrtko

> +		return __i915_request_await_execution(to, from, NULL);
> +}
> +
>   static int
>   i915_request_await_request(struct i915_request *to, struct i915_request *from)
>   {
> @@ -1258,10 +1277,8 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
>   			return ret;
>   	}
>   
> -	if (to->engine == READ_ONCE(from->engine))
> -		ret = i915_sw_fence_await_sw_fence_gfp(&to->submit,
> -						       &from->submit,
> -						       I915_FENCE_GFP);
> +	if (is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
> +		ret = await_request_submit(to, from);
>   	else
>   		ret = emit_semaphore_wait(to, from, I915_FENCE_GFP);
>   	if (ret < 0)
>
Chris Wilson May 26, 2020, 4:09 p.m. UTC | #2
Quoting Tvrtko Ursulin (2020-05-26 17:00:06)
> 
> On 26/05/2020 10:07, Chris Wilson wrote:
> > [snip commit message and tags]
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > index 33bbad623e02..0b07ccc7e9bc 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
> >       return 0;
> >   }
> >   
> > +static int
> > +await_request_submit(struct i915_request *to, struct i915_request *from)
> > +{
> > +     /*
> > +      * If we are waiting on a virtual engine, then it may be
> > +      * constrained to execute on a single engine *prior* to submission.
> > +      * When it is submitted, it will be first submitted to the virtual
> > +      * engine and then passed to the physical engine. We cannot allow
> > +      * the waiter to be submitted immediately to the physical engine
> > +      * as it may then bypass the virtual request.
> > +      */
> > +     if (to->engine == READ_ONCE(from->engine))
> > +             return i915_sw_fence_await_sw_fence_gfp(&to->submit,
> > +                                                     &from->submit,
> > +                                                     I915_FENCE_GFP);
> > +     else
> 
> When can engines be different and the mask test below brought us here?

We change the mask during evaluation of the bond, which is from the
signaler's signaler's execute_cb before the signaler is submitted. So
there will be a period where the from->execution_mask is constrained to
a single engine, but it is still waiting to be queued. Once it is
executed on HW, it will remain on that engine.
-Chris
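The mask test Chris refers to can be illustrated outside the kernel. Below, `is_power_of_2()` is a local stand-in for the kernel helper of the same name, and `may_need_submit_fence()` is a hypothetical name for the condition guarding `await_request_submit()`:

```c
#include <stdbool.h>

/* Local stand-in for the kernel's is_power_of_2(): exactly one bit set. */
static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/*
 * The v2 condition: the union of both execution masks is a single bit
 * only when both requests are confined to the same one physical engine,
 * including a virtual request temporarily narrowed by bond evaluation.
 */
static bool may_need_submit_fence(unsigned long to_mask, unsigned long from_mask)
{
	return is_power_of_2(to_mask | from_mask);
}
```

Note the test catches the bonding window: a `from` whose mask has been restricted to one engine takes the submit-fence path even though `from->engine` may still point at the virtual engine.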
Tvrtko Ursulin May 27, 2020, 6:51 a.m. UTC | #3
On 26/05/2020 10:07, Chris Wilson wrote:
> [snip commit message and tags]
> 
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 33bbad623e02..0b07ccc7e9bc 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
>   	return 0;
>   }
>   
> +static int
> +await_request_submit(struct i915_request *to, struct i915_request *from)
> +{
> +	/*
> +	 * If we are waiting on a virtual engine, then it may be
> +	 * constrained to execute on a single engine *prior* to submission.
> +	 * When it is submitted, it will be first submitted to the virtual
> +	 * engine and then passed to the physical engine. We cannot allow
> +	 * the waiter to be submitted immediately to the physical engine
> +	 * as it may then bypass the virtual request.
> +	 */
> +	if (to->engine == READ_ONCE(from->engine))
> +		return i915_sw_fence_await_sw_fence_gfp(&to->submit,
> +							&from->submit,
> +							I915_FENCE_GFP);
> +	else
> +		return __i915_request_await_execution(to, from, NULL);

If I am following correctly, this branch covers the virtual <-> physical 
or virtual <-> virtual dependency on the same physical engine. Why is 
await_execution sufficient in this case? Is there something preventing 
timeslicing between the two (not wanted, right?) once 'from' starts 
execution (not finishes)?

Regards,

Tvrtko

> +}
> +
>   static int
>   i915_request_await_request(struct i915_request *to, struct i915_request *from)
>   {
> @@ -1258,10 +1277,8 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
>   			return ret;
>   	}
>   
> -	if (to->engine == READ_ONCE(from->engine))
> -		ret = i915_sw_fence_await_sw_fence_gfp(&to->submit,
> -						       &from->submit,
> -						       I915_FENCE_GFP);
> +	if (is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
> +		ret = await_request_submit(to, from);
>   	else
>   		ret = emit_semaphore_wait(to, from, I915_FENCE_GFP);
>   	if (ret < 0)
>
Chris Wilson May 27, 2020, 7:03 a.m. UTC | #4
Quoting Tvrtko Ursulin (2020-05-27 07:51:44)
> 
> On 26/05/2020 10:07, Chris Wilson wrote:
> > [snip commit message and tags]
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> > index 33bbad623e02..0b07ccc7e9bc 100644
> > --- a/drivers/gpu/drm/i915/i915_request.c
> > +++ b/drivers/gpu/drm/i915/i915_request.c
> > @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
> >       return 0;
> >   }
> >   
> > +static int
> > +await_request_submit(struct i915_request *to, struct i915_request *from)
> > +{
> > +     /*
> > +      * If we are waiting on a virtual engine, then it may be
> > +      * constrained to execute on a single engine *prior* to submission.
> > +      * When it is submitted, it will be first submitted to the virtual
> > +      * engine and then passed to the physical engine. We cannot allow
> > +      * the waiter to be submitted immediately to the physical engine
> > +      * as it may then bypass the virtual request.
> > +      */
> > +     if (to->engine == READ_ONCE(from->engine))
> > +             return i915_sw_fence_await_sw_fence_gfp(&to->submit,
> > +                                                     &from->submit,
> > +                                                     I915_FENCE_GFP);
> > +     else
> > +             return __i915_request_await_execution(to, from, NULL);
> 
> If I am following correctly this branch will be the virtual <-> physical 
> or virtual <-> virtual dependency on the same physical engine. Why is 
> await_execution sufficient in this case? Is there something preventing 
> timeslicing between the two (not wanted right!) once from start 
> execution (not finishes).

Timeslicing is only between independent requests. When we expire a
request, we also expire all of its waiters along the same engine, so
that the execution order remains intact.

await_execution is a more restrictive form of the
await_sw_fence(submit), in that 'to' can only be submitted once 'from'
reaches HW and not simply once 'from' reaches its engine submission
queue.
-Chris
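The distinction Chris draws between the two fences can be sketched as a toy model (hypothetical names; the real i915 fences are i915_sw_fence callbacks, not plain booleans):

```c
#include <stdbool.h>

/*
 * Toy view of two stages in a request's life: 'submitted' corresponds to
 * reaching an engine's submission queue, 'executed' to actually reaching
 * the HW (after any virtual->physical migration has settled).
 */
static bool ready_after_submit_fence(bool from_submitted, bool from_executed)
{
	(void)from_executed;
	return from_submitted;	/* await_sw_fence(submit): queued is enough */
}

static bool ready_after_execution_fence(bool from_submitted, bool from_executed)
{
	(void)from_submitted;
	return from_executed;	/* await_execution: must have reached HW */
}
```

In the window where 'from' is queued but not yet on HW, only the weaker submit fence would release the waiter, which is exactly why the virtual-engine path uses the execution fence instead.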
Tvrtko Ursulin May 27, 2020, 7:32 a.m. UTC | #5
On 27/05/2020 08:03, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-05-27 07:51:44)
>>
>> On 26/05/2020 10:07, Chris Wilson wrote:
>>> [snip commit message and tags]
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>> index 33bbad623e02..0b07ccc7e9bc 100644
>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>> @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
>>>        return 0;
>>>    }
>>>    
>>> +static int
>>> +await_request_submit(struct i915_request *to, struct i915_request *from)
>>> +{
>>> +     /*
>>> +      * If we are waiting on a virtual engine, then it may be
>>> +      * constrained to execute on a single engine *prior* to submission.
>>> +      * When it is submitted, it will be first submitted to the virtual
>>> +      * engine and then passed to the physical engine. We cannot allow
>>> +      * the waiter to be submitted immediately to the physical engine
>>> +      * as it may then bypass the virtual request.
>>> +      */
>>> +     if (to->engine == READ_ONCE(from->engine))
>>> +             return i915_sw_fence_await_sw_fence_gfp(&to->submit,
>>> +                                                     &from->submit,
>>> +                                                     I915_FENCE_GFP);
>>> +     else
>>> +             return __i915_request_await_execution(to, from, NULL);
>>
>> If I am following correctly this branch will be the virtual <-> physical
>> or virtual <-> virtual dependency on the same physical engine. Why is
>> await_execution sufficient in this case? Is there something preventing
>> timeslicing between the two (not wanted right!) once from start
>> execution (not finishes).
> 
> Timeslicing is only between independent requests. When we expire a
> request, we also expire all of its waiters along the same engine, so
> that the execution order remains intact.

Via scheduler dependencies - are they enough?

Regards,

Tvrtko

> 
> await_execution is a more restrictive form of the
> await_sw_fence(submit), in that 'to' can only be submitted once 'from'
> reaches HW and not simply once 'from' reaches its engine submission
> queue.
Chris Wilson May 27, 2020, 7:47 a.m. UTC | #6
Quoting Tvrtko Ursulin (2020-05-27 08:32:05)
> 
> On 27/05/2020 08:03, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-05-27 07:51:44)
> >>
> >> On 26/05/2020 10:07, Chris Wilson wrote:
> >>> [snip commit message and tags]
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> >>> index 33bbad623e02..0b07ccc7e9bc 100644
> >>> --- a/drivers/gpu/drm/i915/i915_request.c
> >>> +++ b/drivers/gpu/drm/i915/i915_request.c
> >>> @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
> >>>        return 0;
> >>>    }
> >>>    
> >>> +static int
> >>> +await_request_submit(struct i915_request *to, struct i915_request *from)
> >>> +{
> >>> +     /*
> >>> +      * If we are waiting on a virtual engine, then it may be
> >>> +      * constrained to execute on a single engine *prior* to submission.
> >>> +      * When it is submitted, it will be first submitted to the virtual
> >>> +      * engine and then passed to the physical engine. We cannot allow
> >>> +      * the waiter to be submitted immediately to the physical engine
> >>> +      * as it may then bypass the virtual request.
> >>> +      */
> >>> +     if (to->engine == READ_ONCE(from->engine))
> >>> +             return i915_sw_fence_await_sw_fence_gfp(&to->submit,
> >>> +                                                     &from->submit,
> >>> +                                                     I915_FENCE_GFP);
> >>> +     else
> >>> +             return __i915_request_await_execution(to, from, NULL);
> >>
> >> If I am following correctly this branch will be the virtual <-> physical
> >> or virtual <-> virtual dependency on the same physical engine. Why is
> >> await_execution sufficient in this case? Is there something preventing
> >> timeslicing between the two (not wanted right!) once from start
> >> execution (not finishes).
> > 
> > Timeslicing is only between independent requests. When we expire a
> > request, we also expire all of its waiters along the same engine, so
> > that the execution order remains intact.
> 
> Via scheduler dependencies - they are enough?

Yes.
-Chris
Tvrtko Ursulin May 27, 2020, 7:50 a.m. UTC | #7
On 27/05/2020 08:47, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-05-27 08:32:05)
>>
>> On 27/05/2020 08:03, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2020-05-27 07:51:44)
>>>>
>>>> On 26/05/2020 10:07, Chris Wilson wrote:
>>>>> [snip commit message and tags]
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
>>>>> index 33bbad623e02..0b07ccc7e9bc 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_request.c
>>>>> +++ b/drivers/gpu/drm/i915/i915_request.c
>>>>> @@ -1237,6 +1237,25 @@ i915_request_await_execution(struct i915_request *rq,
>>>>>         return 0;
>>>>>     }
>>>>>     
>>>>> +static int
>>>>> +await_request_submit(struct i915_request *to, struct i915_request *from)
>>>>> +{
>>>>> +     /*
>>>>> +      * If we are waiting on a virtual engine, then it may be
>>>>> +      * constrained to execute on a single engine *prior* to submission.
>>>>> +      * When it is submitted, it will be first submitted to the virtual
>>>>> +      * engine and then passed to the physical engine. We cannot allow
>>>>> +      * the waiter to be submitted immediately to the physical engine
>>>>> +      * as it may then bypass the virtual request.
>>>>> +      */
>>>>> +     if (to->engine == READ_ONCE(from->engine))
>>>>> +             return i915_sw_fence_await_sw_fence_gfp(&to->submit,
>>>>> +                                                     &from->submit,
>>>>> +                                                     I915_FENCE_GFP);
>>>>> +     else
>>>>> +             return __i915_request_await_execution(to, from, NULL);
>>>>
>>>> If I am following correctly this branch will be the virtual <-> physical
>>>> or virtual <-> virtual dependency on the same physical engine. Why is
>>>> await_execution sufficient in this case? Is there something preventing
>>>> timeslicing between the two (not wanted right!) once from start
>>>> execution (not finishes).
>>>
>>> Timeslicing is only between independent requests. When we expire a
>>> request, we also expire all of its waiters along the same engine, so
>>> that the execution order remains intact.
>>
>> Via scheduler dependencies - they are enough?
> 
> Yes.

Okay, I really need to use all of this more often rather than just review it.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

Patch

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 33bbad623e02..0b07ccc7e9bc 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1237,6 +1237,25 @@  i915_request_await_execution(struct i915_request *rq,
 	return 0;
 }
 
+static int
+await_request_submit(struct i915_request *to, struct i915_request *from)
+{
+	/*
+	 * If we are waiting on a virtual engine, then it may be
+	 * constrained to execute on a single engine *prior* to submission.
+	 * When it is submitted, it will be first submitted to the virtual
+	 * engine and then passed to the physical engine. We cannot allow
+	 * the waiter to be submitted immediately to the physical engine
+	 * as it may then bypass the virtual request.
+	 */
+	if (to->engine == READ_ONCE(from->engine))
+		return i915_sw_fence_await_sw_fence_gfp(&to->submit,
+							&from->submit,
+							I915_FENCE_GFP);
+	else
+		return __i915_request_await_execution(to, from, NULL);
+}
+
 static int
 i915_request_await_request(struct i915_request *to, struct i915_request *from)
 {
@@ -1258,10 +1277,8 @@  i915_request_await_request(struct i915_request *to, struct i915_request *from)
 			return ret;
 	}
 
-	if (to->engine == READ_ONCE(from->engine))
-		ret = i915_sw_fence_await_sw_fence_gfp(&to->submit,
-						       &from->submit,
-						       I915_FENCE_GFP);
+	if (is_power_of_2(to->execution_mask | READ_ONCE(from->execution_mask)))
+		ret = await_request_submit(to, from);
 	else
 		ret = emit_semaphore_wait(to, from, I915_FENCE_GFP);
 	if (ret < 0)