
[7/9] drm/i915/execlists: Reduce lock context between schedule/submit_request

Message ID: 20170503113759.31145-7-chris@chris-wilson.co.uk
State: New, archived

Commit Message

Chris Wilson May 3, 2017, 11:37 a.m. UTC
If we do not need to perform priority bumping, and we haven't yet
submitted the request, we can update its priority in situ and skip
acquiring the engine locks -- thus avoiding any contention between us
and submit/execute.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/intel_lrc.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
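
To see the idea in isolation, here is a minimal, self-contained C model of
the fast path (hypothetical names throughout; an illustration of the pattern,
not the driver code). A sentinel priority marks a request the scheduler has
never touched, so, provided the caller guarantees no concurrent access before
first submission, the priority can be updated in situ without taking the lock
shared with submit/execute:

#include <limits.h>
#include <pthread.h>
#include <stdio.h>

/* Hypothetical stand-in for the request/priotree pair. */
struct fake_request {
	int priority;	/* INT_MIN until first scheduled */
};

static pthread_mutex_t engine_lock = PTHREAD_MUTEX_INITIALIZER;

static void schedule_request(struct fake_request *rq, int prio)
{
	/*
	 * Fast path: the request has never been prioritised and, by
	 * assumption, has not yet been submitted, so nothing else can
	 * be looking at it; update in situ and skip the lock.
	 */
	if (rq->priority == INT_MIN) {
		rq->priority = prio;
		return;
	}

	/* Slow path: contend on the lock shared with submit/execute. */
	pthread_mutex_lock(&engine_lock);
	if (prio > rq->priority)
		rq->priority = prio;
	pthread_mutex_unlock(&engine_lock);
}

int main(void)
{
	struct fake_request rq = { .priority = INT_MIN };

	schedule_request(&rq, 0);	/* lockless fast path */
	schedule_request(&rq, 1024);	/* bump under the lock */
	printf("final priority: %d\n", rq.priority);
	return 0;
}

Note the real patch has an extra condition the model leaves out: the fast
path is only taken when the dependency walk found nothing else to bump, i.e.
the DFS stack holds a single entry.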

Comments

Chris Wilson May 5, 2017, 12:13 p.m. UTC | #1
s/context/contention/ in subject

On Wed, May 03, 2017 at 12:37:57PM +0100, Chris Wilson wrote:
> If we do not need to perform priority bumping, and we haven't yet
> submitted the request, we can update its priority in situ and skip
> acquiring the engine locks -- thus avoiding any contention between us
> and submit/execute.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index fb0025627676..ca7f28795e2d 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -767,6 +767,17 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>  		list_safe_reset_next(dep, p, dfs_link);
>  	}
>  
> +	/* If we didn't need to bump any existing priorites, and we haven't
> +	 * yet submitted this request (i..e there is no porential race with
> +	 * execlists_submit_request()), we can set our own priority and skip
> +	 * acquiring the engine locks.
> +	 */
> +	if (request->priotree.priority == INT_MIN) {
> +		request->priotree.priority = prio;
> +		if (stack.dfs_link.next == stack.dfs_link.prev)
> +			return;
> +	}
> +
>  	engine = request->engine;
>  	spin_lock_irq(&engine->timeline->lock);
>  
> -- 
> 2.11.0
>
Tvrtko Ursulin May 5, 2017, 1:30 p.m. UTC | #2
On 03/05/2017 12:37, Chris Wilson wrote:
> If we do not need to perform priority bumping, and we haven't yet
> submitted the request, we can update its priority in situ and skip
> acquiring the engine locks -- thus avoiding any contention between us
> and submit/execute.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index fb0025627676..ca7f28795e2d 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -767,6 +767,17 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
>  		list_safe_reset_next(dep, p, dfs_link);
>  	}
>
> +	/* If we didn't need to bump any existing priorites, and we haven't

priorities

> +	 * yet submitted this request (i..e there is no porential race with

potential

> +	 * execlists_submit_request()), we can set our own priority and skip
> +	 * acquiring the engine locks.
> +	 */
> +	if (request->priotree.priority == INT_MIN) {
> +		request->priotree.priority = prio;
> +		if (stack.dfs_link.next == stack.dfs_link.prev)
> +			return;

Move the assignment of the priority under the if?

> +	}
> +
>  	engine = request->engine;
>  	spin_lock_irq(&engine->timeline->lock);
>
>

Regards,

Tvrtko
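
For clarity, the suggested change amounts to something like this sketch
(untested, for illustration only):

	if (request->priotree.priority == INT_MIN) {
		if (stack.dfs_link.next == stack.dfs_link.prev) {
			request->priotree.priority = prio;
			return;
		}
	}

i.e. the store happens only when the fast path is actually taken; otherwise
the locked DFS walk below assigns the priority via the stack entry anyway.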
Chris Wilson May 5, 2017, 1:38 p.m. UTC | #3
On Fri, May 05, 2017 at 02:30:08PM +0100, Tvrtko Ursulin wrote:
> 
> On 03/05/2017 12:37, Chris Wilson wrote:
> >If we do not need to perform priority bumping, and we haven't yet
> >submitted the request, we can update its priority in situ and skip
> >acquiring the engine locks -- thus avoiding any contention between us
> >and submit/execute.
> >
> >Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >---
> > drivers/gpu/drm/i915/intel_lrc.c | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> >diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> >index fb0025627676..ca7f28795e2d 100644
> >--- a/drivers/gpu/drm/i915/intel_lrc.c
> >+++ b/drivers/gpu/drm/i915/intel_lrc.c
> >@@ -767,6 +767,17 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
> > 		list_safe_reset_next(dep, p, dfs_link);
> > 	}
> >
> >+	/* If we didn't need to bump any existing priorites, and we haven't
> 
> priorities
> 
> >+	 * yet submitted this request (i..e there is no porential race with
> 
> potential
> 
> >+	 * execlists_submit_request()), we can set our own priority and skip
> >+	 * acquiring the engine locks.
> >+	 */
> >+	if (request->priotree.priority == INT_MIN) {
> >+		request->priotree.priority = prio;
> >+		if (stack.dfs_link.next == stack.dfs_link.prev)
> >+			return;
> 
> Move the assignment of the priority under the if?

The assignment always works. I just liked the look of this code more :)
Skipping the assignment is a minor benefit. For bonus points, we could do
a list_del_entry(&stack.dfs_link) after the return.

Sold.
-Chris
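
Folding in the typo fixes and the "bonus points" idea, the hunk might end up
looking something like this (a sketch only; the version that finally lands
may differ):

	/* If we didn't need to bump any existing priorities, and we haven't
	 * yet submitted this request (i.e. there is no potential race with
	 * execlists_submit_request()), we can set our own priority and skip
	 * acquiring the engine locks.
	 */
	if (request->priotree.priority == INT_MIN) {
		request->priotree.priority = prio;
		if (stack.dfs_link.next == stack.dfs_link.prev)
			return;
		__list_del_entry(&stack.dfs_link);
	}

Here __list_del_entry() (the closest list.h helper to the list_del_entry()
named above) unlinks the stack entry when we fall through to the locked path:
its priority has already been set, so the DFS walk need not revisit it.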

Patch

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fb0025627676..ca7f28795e2d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -767,6 +767,17 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
 		list_safe_reset_next(dep, p, dfs_link);
 	}
 
+	/* If we didn't need to bump any existing priorites, and we haven't
+	 * yet submitted this request (i..e there is no porential race with
+	 * execlists_submit_request()), we can set our own priority and skip
+	 * acquiring the engine locks.
+	 */
+	if (request->priotree.priority == INT_MIN) {
+		request->priotree.priority = prio;
+		if (stack.dfs_link.next == stack.dfs_link.prev)
+			return;
+	}
+
 	engine = request->engine;
 	spin_lock_irq(&engine->timeline->lock);