Message ID | 569CFE81.9070006@linux.intel.com (mailing list archive)
---|---
State | New, archived
On Mon, Jan 18, 2016 at 03:02:25PM +0000, Tvrtko Ursulin wrote:
> -	while (!list_empty(&ring->request_list)) {
> -		struct drm_i915_gem_request *request;
> -
> -		request = list_first_entry(&ring->request_list,
> -					   struct drm_i915_gem_request,
> -					   list);
> -
> -		if (!i915_gem_request_completed(request, true))
> +	list_for_each_entry_safe(req, next, &ring->request_list, list) {
> +		if (!i915_gem_request_completed(req, true))
> 			break;
>
> -		i915_gem_request_retire(request);
> +		if (!i915.enable_execlists || !i915.enable_guc_submission) {
> +			i915_gem_request_retire(req);
> +		} else {
> +			prev_req = list_prev_entry(req, list);
> +			if (prev_req)
> +				i915_gem_request_retire(prev_req);
> +		}
> 	}
>
> To explain, this attempts to ensure that in GuC mode requests are only
> unreferenced if there is a *following* *completed* request.
>
> This way, regardless of whether they are using the same or different
> contexts, we can be sure that the GPU has either completed the
> context writing, or that the unreference will not cause the final
> unpin of the context.

This is the first bogus step. Contexts have to be unreferenced from
request retire, not request free. As it stands today, this forces us to
hold the struct_mutex for the free (causing many foul-ups along the
line). The only reason it is like that is that execlists does not
decouple its context pinning inside request cancel.
-Chris
On 18/01/16 16:53, Chris Wilson wrote:
> On Mon, Jan 18, 2016 at 03:02:25PM +0000, Tvrtko Ursulin wrote:
>> [patch snipped]
>>
>> To explain, this attempts to ensure that in GuC mode requests are only
>> unreferenced if there is a *following* *completed* request.
>>
>> This way, regardless of whether they are using the same or different
>> contexts, we can be sure that the GPU has either completed the
>> context writing, or that the unreference will not cause the final
>> unpin of the context.
>
> This is the first bogus step. contexts have to be unreferenced from
> request retire, not request free. As it stands today, this forces us to
> hold the struct_mutex for the free (causing many foul ups along the
> line). The only reason why it is like that is because of execlists not
> decoupling its context pinning inside request cancel.

What is the first bogus step? My idea of how to fix the GuC issue, or
the mention of the final unreference in relation to the GPU completing
the submission?

Also, I don't understand how you would decouple context and request
lifetimes.

Maybe we can ignore execlist mode for the moment and just consider the
GuC which, as much as I understand it, has a simpler and fully aligned
request/context/LRC lifetime of:

 * reference and pin at request creation
 * unpin and unreference at retire

where retire is decoupled from actual GPU activity, or maybe better
said, indirectly driven by it. Execlists bolt another parallel
reference and pin on top, with different lifetime rules, so maybe
ignore that for the GuC discussion.

Just to figure out what you have in mind.

Regards,
Tvrtko
On Mon, Jan 18, 2016 at 05:14:26PM +0000, Tvrtko Ursulin wrote:
> On 18/01/16 16:53, Chris Wilson wrote:
>> [snipped]
>>
>> This is the first bogus step. contexts have to be unreferenced from
>> request retire, not request free.
>
> What is the first bogus step? My idea of how to fix the GuC issue,
> or the mention of final unreference in relation to GPU completing
> the submission?

That we want to actually unreference the request. We want to unpin the
context at the appropriate juncture. At the moment, it looks like you
are conflating those two steps: "requests are only unreferenced".

Using the retirement mechanism would mean coupling the context unpinning
into a subsequent request, rather than deferring the retirement of a
completed request; for example, legacy uses active vma tracking to
accomplish the same thing. Aiui, the current claim is that we couldn't
do that since the GuC may reorder contexts - except that we currently
use a global seqno, so that would be bad on many levels.
-Chris
On 18/01/16 20:47, Chris Wilson wrote:
> On Mon, Jan 18, 2016 at 05:14:26PM +0000, Tvrtko Ursulin wrote:
>> [snipped]
>
> That we want to actually unreference the request. We want to
> unpin the context at the appropriate juncture. At the moment, it looks

What would be the appropriate juncture? With the GuC we don't have the
equivalent of the context complete interrupt.

> like you are conflating those two steps: "requests are only
> unreferenced". Using the retirement mechanism would mean coupling the
> context unpinning into a subsequent request rather than defer retiring a
> completed request, for example legacy uses active vma tracking to
> accomplish the same thing. Aiui, the current claim is that we couldn't
> do that since the guc may reorder contexts - except that we currently
> use a global seqno so that would be bad on many levels.

I don't know legacy. :( I can see that the request/context lifetime is
coupled there, spanning request creation to retirement.

Does it have the same problem of the seqno signalling completion before
the GPU is done writing out the context image, and how does it solve
that?

Regards,
Tvrtko
On 19/01/16 10:24, Tvrtko Ursulin wrote:
> [snipped]
>
> Does it have the same problem of seqno signaling completion before the
> GPU is done with writing out the context image and how does it solve that?

Ok, I think I am starting to see the legacy code paths.

The interesting areas are i915_switch_context + do_switch, which do the
ring->last_context tracking and make the ring/engine own one extra
reference on the context.

Then, code paths which want to make sure no user contexts are active on
the GPU call i915_gpu_idle and submit a dummy default-context request.
The latter even explicitly avoids execlist mode.

So unless I am missing something, we could just unify the behaviour
between the two: make ring/engine->last_context do tracking identical to
legacy context switching, and let i915_gpu_idle idle the GPU in execlist
mode as well?

Regards,
Tvrtko
On 19/01/16 17:18, Tvrtko Ursulin wrote:
> [snipped]
>
> So unless I am missing something, we could just unify the behaviour
> between the two. Make ring/engine->last_context do the identical
> tracking as legacy context switching and let i915_gpu_idle idle the GPU
> in execlist mode as well?

Although I am not sure the engine->last_context concept works with the
LRC and GuC because of the multiple submission ports. Need to give it
more thought.

Regards,
Tvrtko
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 2cfcf9401971..63bb251edffd 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2927,6 +2927,8 @@ void i915_gem_reset(struct drm_device *dev)
 void
 i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 {
+	struct drm_i915_gem_request *prev_req, *next, *req;
+
 	WARN_ON(i915_verify_lists(ring->dev));
 
 	/* Retire requests first as we use it above for the early return.
@@ -2934,17 +2936,17 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 	 * the requests lists without clearing the active list, leading to
 	 * confusion.
 	 */
-	while (!list_empty(&ring->request_list)) {
-		struct drm_i915_gem_request *request;
-
-		request = list_first_entry(&ring->request_list,
-					   struct drm_i915_gem_request,
-					   list);
-
-		if (!i915_gem_request_completed(request, true))
+	list_for_each_entry_safe(req, next, &ring->request_list, list) {
+		if (!i915_gem_request_completed(req, true))
 			break;
 
-		i915_gem_request_retire(request);
+		if (!i915.enable_execlists || !i915.enable_guc_submission) {
+			i915_gem_request_retire(req);
+		} else {
+			prev_req = list_prev_entry(req, list);
+			if (prev_req)
+				i915_gem_request_retire(prev_req);
+		}
 	}

To explain, this attempts to ensure that in GuC mode requests are only
unreferenced if there is a *following* *completed* request.