[2/2] drm/i915: Remove impossible checks from i915_gem_request_add_to_client

Message ID 1458746103-25849-2-git-send-email-tvrtko.ursulin@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Tvrtko Ursulin March 23, 2016, 3:15 p.m. UTC
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

There is only one caller of this function, which calls it under
strictly defined conditions. The error checking and return value
can therefore be removed.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
BTW it looks like the whole point of this request list is to implement
the throttle ioctl? It looks like a strange ioctl with its fixed 20ms
criterion; could it be re-implemented somehow without having to keep this list?

Does it have to be per-client or could it wait for any requests on any
engine emitted 20ms ago?
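
For reference, the throttle criterion under discussion can be sketched in
plain userspace C. The 20ms window and the "wait on the newest request older
than the window" semantics come from the ioctl; the function name and the
array-of-timestamps representation are purely illustrative:

```c
#include <stddef.h>

#define THROTTLE_MS 20

/* Hypothetical sketch: given one client's requests in submission order
 * (emit times in ms), return the index of the most recent request emitted
 * more than 20ms before now_ms -- the one the throttle ioctl would wait
 * on -- or -1 if every request is still inside the 20ms window. */
static int throttle_target(const long emit_ms[], int n, long now_ms)
{
	int target = -1;
	int i;

	for (i = 0; i < n; i++) {
		if (now_ms - emit_ms[i] > THROTTLE_MS)
			target = i;	/* still older than the window */
		else
			break;		/* ordered list: the rest are recent */
	}
	return target;
}
```

This mirrors why the implementation needs a per-client, submission-ordered
list: the scan stops at the first request young enough to be exempt.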

 drivers/gpu/drm/i915/i915_drv.h            |  4 ++--
 drivers/gpu/drm/i915/i915_gem.c            | 14 ++------------
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  4 +---
 3 files changed, 5 insertions(+), 17 deletions(-)

Comments

Chris Wilson March 23, 2016, 3:31 p.m. UTC | #1
On Wed, Mar 23, 2016 at 03:15:03PM +0000, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> There is only one caller of this function, which calls it under
> strictly defined conditions. The error checking and return value
> can therefore be removed.
> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
> BTW it looks like the whole point of this request list is to implement
> the throttle ioctl? It looks like a strange ioctl with its fixed 20ms
> criterion; could it be re-implemented somehow without having to keep this list?
> 
> Does it have to be per-client or could it wait for any requests on any
> engine emitted 20ms ago?

It is per-client, and I long wished the 20ms wasn't defined by the ABI.

What I proposed doing is this:

https://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=e33947505b54582964c8c202b22f88fc5705f484

Please note that this also fixes a race condition I've seen in the wild.
-Chris
Tvrtko Ursulin March 24, 2016, 9:49 a.m. UTC | #2
On 23/03/16 15:31, Chris Wilson wrote:
> On Wed, Mar 23, 2016 at 03:15:03PM +0000, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>
>> There is only one caller of this functions which calls it under
>> strictly defined conditions. Error checking and return value
>> can be removed.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>> ---
>> BTW it looks like the whole point of this request list is to implement
>> the throttle ioctl? It looks like a strange ioctl with its fixed 20ms
>> criterion; could it be re-implemented somehow without having to keep this list?
>>
>> Does it have to be per-client or could it wait for any requests on any
>> engine emitted 20ms ago?
>
> It is per-client, and I long wished the 20ms wasn't defined by the ABI.

Is it useful and/or widely used though?

With multiple clients submitting work I don't see how the semantics can 
be deterministic. I was wondering whether it could be replaced with a 
simpler implementation with a similar random effect, losing the list altogether?

I especially wonder how the scheduler will affect it.


> What I proposed doing is this:
>
> https://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=e33947505b54582964c8c202b22f88fc5705f484

How is that safe? list_add() modifies several pointers in multiple steps, 
so I didn't think it was safe against concurrent iteration.

The RCU-flavoured list API can do such things, but in this case switching 
to it would only enable a lockless throttle and would not gain anything on 
the add_to_client path.
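
To make the concern concrete, here is a minimal userspace sketch (the
singly-linked structure and function names are illustrative, not the kernel
list API): naive insertion can publish the node before its fields are
written, while RCU-style insertion initialises everything first and then
makes the node reachable with a single release store, which is the ordering
that list_add_rcu()/rcu_assign_pointer() provide in the kernel:

```c
#include <stddef.h>

struct node { int value; struct node *next; };

/* Unsafe ordering: the head is updated before the node is fully
 * initialised, so a concurrent lockless reader can reach a
 * half-constructed node. */
static void insert_unordered(struct node **head, struct node *n, int v)
{
	n->next = *head;
	*head = n;	/* published... */
	n->value = v;	/* ...before this field is written */
}

/* RCU-style publication: initialise the node completely, then make it
 * reachable with one ordered store. */
static void insert_published(struct node **head, struct node *n, int v)
{
	n->value = v;
	n->next = *head;
	__atomic_store_n(head, n, __ATOMIC_RELEASE);
}
```

Even single-threaded, the two functions produce the same list; the
difference only matters to a reader racing with the insert, which is exactly
the lockless-iteration case being debated.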

> Please note that this also fixes a race condition I've seen in the wild.

What is the race?

Regards,

Tvrtko
Chris Wilson March 24, 2016, 10:11 a.m. UTC | #3
On Thu, Mar 24, 2016 at 09:49:25AM +0000, Tvrtko Ursulin wrote:
> 
> On 23/03/16 15:31, Chris Wilson wrote:
> >On Wed, Mar 23, 2016 at 03:15:03PM +0000, Tvrtko Ursulin wrote:
> >>From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>
> >>There is only one caller of this functions which calls it under
> >>strictly defined conditions. Error checking and return value
> >>can be removed.
> >>
> >>Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >>---
> >>BTW it looks like the whole point of this request list is to implement
> >>the throttle ioctl? Looks like a strange ioctl with a fixed 20ms criteria,
> >>could it be re-implemented somehow without having to have this list?
> >>
> >>Does it have to be per-client or could it wait for any requests on any
> >>engine emitted 20ms ago?
> >
> >It is per-client, and I long wished the 20ms wasn't defined by the ABI.
> 
> Is it useful and/or widely used though?

X for throttling front buffer rendering.
 
> With multiple clients submitting work I don't see how the semantics can
> be deterministic. I was wondering whether it could be replaced with a
> simpler implementation with a similar random effect, losing the list
> altogether?

No, it wants per-client semantics.
 
> I especially wonder how the scheduler will affect it.

It won't because that would be breaking ABI.

> >What I proposed doing is this:
> >
> >https://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=breadcrumbs&id=e33947505b54582964c8c202b22f88fc5705f484
> 
> How is that safe? list_add() modifies several pointers in multiple steps,
> so I didn't think it was safe against concurrent iteration.

It is safe. list_add() is now properly ordered using the compiler
barrier (i.e. it gained the RCU semantics).
 
> The RCU-flavoured list API can do such things, but in this case switching
> to it would only enable a lockless throttle and would not gain anything on
> the add_to_client path.

And add_to_client is the hotter of the pair.
 
> >Please note that this also fixes a race condition I've seen in the wild.
> 
> What is the race?

file_priv not being checked after acquiring the spinlock under which the
list may be reset.
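
A hedged userspace analog of that fix (the struct and names are
hypothetical, standing in for file_priv and the per-client request list):
whatever state was observed before taking the lock must be re-validated once
the lock that protects teardown is actually held:

```c
#include <pthread.h>
#include <stddef.h>

/* The client can be torn down concurrently (its request list reset under
 * the same lock), so a writer must re-check the client state after taking
 * the lock instead of trusting anything it read beforehand. */
struct client {
	pthread_mutex_t lock;
	int closed;	/* set under lock when the file is closed */
	int nrequests;	/* stands in for the per-client request list */
};

static int add_to_client(struct client *c)
{
	pthread_mutex_lock(&c->lock);
	if (c->closed) {	/* the re-check that was missing */
		pthread_mutex_unlock(&c->lock);
		return -1;
	}
	c->nrequests++;
	pthread_mutex_unlock(&c->lock);
	return 0;
}
```

Without the check under the lock, an add racing with teardown could link a
request onto a list that has already been reset, which matches the race
described above.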
-Chris
Patch

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8727746cecd2..eaf015918b81 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2292,8 +2292,8 @@  i915_gem_request_alloc(struct intel_engine_cs *engine,
 		       struct intel_context *ctx);
 void i915_gem_request_cancel(struct drm_i915_gem_request *req);
 void i915_gem_request_free(struct kref *req_ref);
-int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
-				   struct drm_file *file);
+void i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
+				    struct drm_file *file);
 
 static inline uint32_t
 i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8588c83abb35..6b4f271602f3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1367,19 +1367,11 @@  out:
 	return ret;
 }
 
-int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
-				   struct drm_file *file)
+void i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
+				    struct drm_file *file)
 {
 	struct drm_i915_file_private *file_priv;
 
-	WARN_ON(!req || !file || req->file_priv);
-
-	if (!req || !file)
-		return -EINVAL;
-
-	if (req->file_priv)
-		return -EINVAL;
-
 	file_priv = file->driver_priv;
 
 	spin_lock(&file_priv->mm.lock);
@@ -1388,8 +1380,6 @@  int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
 	spin_unlock(&file_priv->mm.lock);
 
 	req->pid = get_pid(task_pid(current));
-
-	return 0;
 }
 
 static inline void
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 374a0cb7a092..9295168cce10 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1620,9 +1620,7 @@  i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		goto err_batch_unpin;
 	}
 
-	ret = i915_gem_request_add_to_client(req, file);
-	if (ret)
-		goto err_batch_unpin;
+	i915_gem_request_add_to_client(req, file);
 
 	/*
 	 * Save assorted stuff away to pass through to *_submission().