
[37/48] drm/i915: Defer request freeing

Message ID 1386367941-7131-37-git-send-email-benjamin.widawsky@intel.com (mailing list archive)
State New, archived

Commit Message

Ben Widawsky Dec. 6, 2013, 10:11 p.m. UTC
From: Ben Widawsky <ben@bwidawsk.net>

With context destruction, we always want to be able to tear down the
underlying address space. This is invoked on the last unreference to the
context, which can happen before we've moved all objects to the
inactive list. To enable a clean teardown of the address space, make
sure to process the request free last.

Without this change, we cannot guarantee that we don't still have active
objects in the VM.

As an example of a failing case:
CTX-A is created, count = 1
CTX-A is used during execbuf
	does a context switch, count = 2
	and add_request, count = 3
CTX-B runs, switches, CTX-A count = 2
CTX-A is destroyed, count = 1
retire requests is called
	free_request from CTX-A, count = 0 <--- free context with active object

As mentioned above, by freeing the request after processing the
active list, we avoid this case.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
---
 drivers/gpu/drm/i915/i915_gem.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

Comments

Chris Wilson Dec. 12, 2013, 11:08 a.m. UTC | #1
On Fri, Dec 06, 2013 at 02:11:22PM -0800, Ben Widawsky wrote:
> From: Ben Widawsky <ben@bwidawsk.net>
> 
> With context destruction, we always want to be able to tear down the
> underlying address space. This is invoked on the last unreference to the
> context, which can happen before we've moved all objects to the
> inactive list. To enable a clean teardown of the address space, make
> sure to process the request free last.
> 
> Without this change, we cannot guarantee that we don't still have
> active objects in the VM.
> 
> As an example of a failing case:
> CTX-A is created, count = 1
> CTX-A is used during execbuf
> 	does a context switch, count = 2
> 	and add_request, count = 3
> CTX-B runs, switches, CTX-A count = 2
> CTX-A is destroyed, count = 1
> retire requests is called
> 	free_request from CTX-A, count = 0 <--- free context with active object
> 
> As mentioned above, by freeing the request after processing the
> active list, we avoid this case.

So just process the active_list first; the requests and objects
associated with the ring should be fairly independent.
-Chris
Daniel Vetter Dec. 18, 2013, 2:39 p.m. UTC | #2
On Thu, Dec 12, 2013 at 11:08:47AM +0000, Chris Wilson wrote:
> On Fri, Dec 06, 2013 at 02:11:22PM -0800, Ben Widawsky wrote:
> > From: Ben Widawsky <ben@bwidawsk.net>
> > 
> > With context destruction, we always want to be able to tear down the
> > underlying address space. This is invoked on the last unreference to the
> > context, which can happen before we've moved all objects to the
> > inactive list. To enable a clean teardown of the address space, make
> > sure to process the request free last.
> > 
> > Without this change, we cannot guarantee that we don't still have
> > active objects in the VM.
> > 
> > As an example of a failing case:
> > CTX-A is created, count = 1
> > CTX-A is used during execbuf
> > 	does a context switch, count = 2
> > 	and add_request, count = 3
> > CTX-B runs, switches, CTX-A count = 2
> > CTX-A is destroyed, count = 1
> > retire requests is called
> > 	free_request from CTX-A, count = 0 <--- free context with active object
> > 
> > As mentioned above, by freeing the request after processing the
> > active list, we avoid this case.
> 
> So just process the active_list first; the requests and objects
> associated with the ring should be fairly independent.

Also, stuff like this needs a testcase, and I didn't spot such a thing in
the patches posted thus far.
-Daniel

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ed52108..e6d7b4c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2423,6 +2423,8 @@  void i915_gem_reset(struct drm_device *dev)
 void
 i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
 {
+	LIST_HEAD(deferred_request_free);
+	struct drm_i915_gem_request *request;
 	uint32_t seqno;
 
 	if (list_empty(&ring->request_list))
@@ -2433,8 +2435,6 @@  i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
 	seqno = ring->get_seqno(ring, true);
 
 	while (!list_empty(&ring->request_list)) {
-		struct drm_i915_gem_request *request;
-
 		request = list_first_entry(&ring->request_list,
 					   struct drm_i915_gem_request,
 					   list);
@@ -2450,7 +2450,7 @@  i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
 		 */
 		ring->last_retired_head = request->tail;
 
-		i915_gem_free_request(request);
+		list_move_tail(&request->list, &deferred_request_free);
 	}
 
 	/* Move any buffers on the active list that are no longer referenced
@@ -2475,6 +2475,13 @@  i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
 		ring->trace_irq_seqno = 0;
 	}
 
+	/* Finish processing active list before freeing request */
+	while (!list_empty(&deferred_request_free)) {
+		request = list_first_entry(&deferred_request_free,
+					   struct drm_i915_gem_request,
+					   list);
+		i915_gem_free_request(request);
+	}
 	WARN_ON(i915_verify_lists(ring->dev));
 }