
drm/i915/guc: do not capture error state on exiting context

Message ID 20220926215410.2268295-1-andrzej.hajda@intel.com (mailing list archive)
State New, archived
Series drm/i915/guc: do not capture error state on exiting context

Commit Message

Andrzej Hajda Sept. 26, 2022, 9:54 p.m. UTC
Capturing error state is time-consuming (up to 350ms on DG2), so it should
be avoided if possible. A context reset triggered by context removal is a
good example of a case where the capture can be skipped.
With this patch multiple igt tests will no longer time out and should run
faster.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

Andi Shyti Sept. 26, 2022, 10:44 p.m. UTC | #1
Hi Andrzej,

On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
> Capturing error state is time consuming (up to 350ms on DG2), so it should
> be avoided if possible. Context reset triggered by context removal is a
> good example.
> With this patch multiple igt tests will not timeout and should run faster.
> 
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>

fine for me:

Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>

Just to be on the safe side, can we also have the ack from any of
the GuC folks? Daniele, John?

Andi


> ---
>  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> index 22ba66e48a9b01..cb58029208afe1 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct intel_guc *guc,
>  	trace_intel_context_reset(ce);
>  
>  	if (likely(!intel_context_is_banned(ce))) {
> -		capture_error_state(guc, ce);
> +		if (!intel_context_is_exiting(ce))
> +			capture_error_state(guc, ce);
>  		guc_context_replay(ce);
>  	} else {
>  		drm_info(&guc_to_gt(guc)->i915->drm,
> -- 
> 2.34.1
Daniele Ceraolo Spurio Sept. 26, 2022, 11:34 p.m. UTC | #2
On 9/26/2022 3:44 PM, Andi Shyti wrote:
> Hi Andrzej,
>
> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>> Capturing error state is time consuming (up to 350ms on DG2), so it should
>> be avoided if possible. Context reset triggered by context removal is a
>> good example.
>> With this patch multiple igt tests will not timeout and should run faster.
>>
>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> fine for me:
>
> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>
> Just to be on the safe side, can we also have the ack from any of
> the GuC folks? Daniele, John?
>
> Andi
>
>
>> ---
>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> index 22ba66e48a9b01..cb58029208afe1 100644
>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct intel_guc *guc,
>>   	trace_intel_context_reset(ce);
>>   
>>   	if (likely(!intel_context_is_banned(ce))) {
>> -		capture_error_state(guc, ce);
>> +		if (!intel_context_is_exiting(ce))
>> +			capture_error_state(guc, ce);
>>   		guc_context_replay(ce);

You definitely don't want to replay requests of a context that is going 
away.

This seems at least in part due to 
https://patchwork.freedesktop.org/patch/487531/, where we replaced the 
"context_ban" with "context_exiting". There are several places where we 
skipped operations if the context was banned (here included) which are 
now not covered anymore for exiting contexts. Maybe we need a new 
checker function to check both flags in places where we don't care why 
the context is being removed (ban vs exiting), just that it is?
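
(For illustration only: such a combined checker, built purely from the two
helpers already quoted in this thread, might look roughly like the sketch
below. The name matches the intel_context_is_schedulable() helper proposed
later in the thread; where it would live is left open.)

	/* Hypothetical combined check - folds the "banned" and "exiting"
	 * flags into a single "should this still run" test.
	 */
	static inline bool intel_context_is_schedulable(const struct intel_context *ce)
	{
		return !intel_context_is_exiting(ce) &&
		       !intel_context_is_banned(ce);
	}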

Daniele

>>   	} else {
>>   		drm_info(&guc_to_gt(guc)->i915->drm,
>> -- 
>> 2.34.1
Andrzej Hajda Sept. 27, 2022, 6:49 a.m. UTC | #3
On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>
>
> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>> Hi Andrzej,
>>
>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>> Capturing error state is time consuming (up to 350ms on DG2), so it 
>>> should
>>> be avoided if possible. Context reset triggered by context removal is a
>>> good example.
>>> With this patch multiple igt tests will not timeout and should run 
>>> faster.
>>>
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>> fine for me:
>>
>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>
>> Just to be on the safe side, can we also have the ack from any of
>> the GuC folks? Daniele, John?
>>
>> Andi
>>
>>
>>> ---
>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>> intel_guc *guc,
>>>       trace_intel_context_reset(ce);
>>>         if (likely(!intel_context_is_banned(ce))) {
>>> -        capture_error_state(guc, ce);
>>> +        if (!intel_context_is_exiting(ce))
>>> +            capture_error_state(guc, ce);
>>>           guc_context_replay(ce);
>
> You definitely don't want to replay requests of a context that is 
> going away.

My intention was just to avoid the error capture, but that's even better -
only the condition changes:
-        if (likely(!intel_context_is_banned(ce))) {
+       if (likely(intel_context_is_schedulable(ce)))  {
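
(A minimal sketch of how guc_handle_context_reset() would then read, built
only from the lines quoted in this thread; the drm_info() message in the
else branch is left elided exactly as in the quotes above:)

	trace_intel_context_reset(ce);

	if (likely(intel_context_is_schedulable(ce))) {
		capture_error_state(guc, ce);
		guc_context_replay(ce);
	} else {
		/* existing banned path: drm_info(...) message, unchanged */
	}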

>
> This seems at least in part due to 
> https://patchwork.freedesktop.org/patch/487531/, where we replaced the 
> "context_ban" with "context_exiting". There are several places where 
> we skipped operations if the context was banned (here included) which 
> are now not covered anymore for exiting contexts. Maybe we need a new 
> checker function to check both flags in places where we don't care why 
> the context is being removed (ban vs exiting), just that it is?
>
> Daniele
>
>>>       } else {
>>>           drm_info(&guc_to_gt(guc)->i915->drm,

And maybe downgrade the above to drm_dbg, to avoid spamming dmesg?

Regards
Andrzej


>>> -- 
>>> 2.34.1
>
Tvrtko Ursulin Sept. 27, 2022, 7:45 a.m. UTC | #4
On 27/09/2022 07:49, Andrzej Hajda wrote:
> 
> 
> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>
>>
>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>> Hi Andrzej,
>>>
>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>> Capturing error state is time consuming (up to 350ms on DG2), so it 
>>>> should
>>>> be avoided if possible. Context reset triggered by context removal is a
>>>> good example.
>>>> With this patch multiple igt tests will not timeout and should run 
>>>> faster.
>>>>
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>> fine for me:
>>>
>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>
>>> Just to be on the safe side, can we also have the ack from any of
>>> the GuC folks? Daniele, John?
>>>
>>> Andi
>>>
>>>
>>>> ---
>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>>> intel_guc *guc,
>>>>       trace_intel_context_reset(ce);
>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>> -        capture_error_state(guc, ce);
>>>> +        if (!intel_context_is_exiting(ce))
>>>> +            capture_error_state(guc, ce);

I am not sure here - if we have a persistent context which caused a GPU 
hang I'd expect we'd still want error capture.

What causes the reset in the affected IGTs? Always preemption timeout?

>>>>           guc_context_replay(ce);
>>
>> You definitely don't want to replay requests of a context that is 
>> going away.
> 
> My intention was to just avoid error capture, but that's even better, 
> only condition change:
> -        if (likely(!intel_context_is_banned(ce))) {
> +       if (likely(intel_context_is_schedulable(ce)))  {

Yes that helper was intended to be used for contexts which should not be 
scheduled post exit or ban.

Daniele - you say there are some misses in the GuC backend. Should most, 
or even all in intel_guc_submission.c be converted to use 
intel_context_is_schedulable? My idea indeed was that "ban" should be a 
level up from the backends. Backend should only distinguish between 
"should I run this or not", and not the reason.

Regards,

Tvrtko

> 
>>
>> This seems at least in part due to 
>> https://patchwork.freedesktop.org/patch/487531/, where we replaced the 
>> "context_ban" with "context_exiting". There are several places where 
>> we skipped operations if the context was banned (here included) which 
>> are now not covered anymore for exiting contexts. Maybe we need a new 
>> checker function to check both flags in places where we don't care why 
>> the context is being removed (ban vs exiting), just that it is?
>>
>> Daniele
>>
>>>>       } else {
>>>>           drm_info(&guc_to_gt(guc)->i915->drm,
> 
> And maybe degrade above to drm_dbg, to avoid spamming dmesg?
> 
> Regards
> Andrzej
> 
> 
>>>> -- 
>>>> 2.34.1
>>
>
Andrzej Hajda Sept. 27, 2022, 8:16 a.m. UTC | #5
On 27.09.2022 09:45, Tvrtko Ursulin wrote:
>
> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>
>>
>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>
>>>
>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>> Hi Andrzej,
>>>>
>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>> Capturing error state is time consuming (up to 350ms on DG2), so 
>>>>> it should
>>>>> be avoided if possible. Context reset triggered by context removal 
>>>>> is a
>>>>> good example.
>>>>> With this patch multiple igt tests will not timeout and should run 
>>>>> faster.
>>>>>
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>> fine for me:
>>>>
>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>
>>>> Just to be on the safe side, can we also have the ack from any of
>>>> the GuC folks? Daniele, John?
>>>>
>>>> Andi
>>>>
>>>>
>>>>> ---
>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>>>> intel_guc *guc,
>>>>>       trace_intel_context_reset(ce);
>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>> -        capture_error_state(guc, ce);
>>>>> +        if (!intel_context_is_exiting(ce))
>>>>> +            capture_error_state(guc, ce);
>
> I am not sure here - if we have a persistent context which caused a 
> GPU hang I'd expect we'd still want error capture.
>
> What causes the reset in the affected IGTs? Always preemption timeout?

The affected tests always destroy a context whose batch buffer was
submitted with IGT_SPIN_NO_PREEMPTION, and with "preempt_timeout_ms" set
to 50.
So I guess yes.

Regards
Andrzej


>
>>>>>           guc_context_replay(ce);
>>>
>>> You definitely don't want to replay requests of a context that is 
>>> going away.
>>
>> My intention was to just avoid error capture, but that's even better, 
>> only condition change:
>> -        if (likely(!intel_context_is_banned(ce))) {
>> +       if (likely(intel_context_is_schedulable(ce)))  {
>
> Yes that helper was intended to be used for contexts which should not 
> be scheduled post exit or ban.
>
> Daniele - you say there are some misses in the GuC backend. Should 
> most, or even all in intel_guc_submission.c be converted to use 
> intel_context_is_schedulable? My idea indeed was that "ban" should be 
> a level up from the backends. Backend should only distinguish between 
> "should I run this or not", and not the reason.
>
> Regards,
>
> Tvrtko
>
>>
>>>
>>> This seems at least in part due to 
>>> https://patchwork.freedesktop.org/patch/487531/, where we replaced 
>>> the "context_ban" with "context_exiting". There are several places 
>>> where we skipped operations if the context was banned (here 
>>> included) which are now not covered anymore for exiting contexts. 
>>> Maybe we need a new checker function to check both flags in places 
>>> where we don't care why the context is being removed (ban vs 
>>> exiting), just that it is?
>>>
>>> Daniele
>>>
>>>>>       } else {
>>>>>           drm_info(&guc_to_gt(guc)->i915->drm,
>>
>> And maybe degrade above to drm_dbg, to avoid spamming dmesg?
>>
>> Regards
>> Andrzej
>>
>>
>>>>> -- 
>>>>> 2.34.1
>>>
>>
Andrzej Hajda Sept. 27, 2022, 10:14 a.m. UTC | #6
On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
> 
> 
> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>> Hi Andrzej,
>>
>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>> Capturing error state is time consuming (up to 350ms on DG2), so it 
>>> should
>>> be avoided if possible. Context reset triggered by context removal is a
>>> good example.
>>> With this patch multiple igt tests will not timeout and should run 
>>> faster.
>>>
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>> fine for me:
>>
>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>
>> Just to be on the safe side, can we also have the ack from any of
>> the GuC folks? Daniele, John?
>>
>> Andi
>>
>>
>>> ---
>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>> intel_guc *guc,
>>>       trace_intel_context_reset(ce);
>>>       if (likely(!intel_context_is_banned(ce))) {
>>> -        capture_error_state(guc, ce);
>>> +        if (!intel_context_is_exiting(ce))
>>> +            capture_error_state(guc, ce);
>>>           guc_context_replay(ce);
> 
> You definitely don't want to replay requests of a context that is going 
> away.

Without guc_context_replay I see timeouts, probably because
guc_context_replay calls __guc_reset_context. I am not sure if there is a
need to dig deeper, stay with my initial proposition, or do something like:

	if (likely(!intel_context_is_banned(ce))) {
		if (!intel_context_is_exiting(ce)) {
			capture_error_state(guc, ce);
			guc_context_replay(ce);
		} else {
			__guc_reset_context(ce, ce->engine->mask);
		}
	} else {

The latter is also working.

Regards
Andrzej


> 
> This seems at least in part due to 
> https://patchwork.freedesktop.org/patch/487531/, where we replaced the 
> "context_ban" with "context_exiting". There are several places where we 
> skipped operations if the context was banned (here included) which are 
> now not covered anymore for exiting contexts. Maybe we need a new 
> checker function to check both flags in places where we don't care why 
> the context is being removed (ban vs exiting), just that it is?
> 
> Daniele
> 
>>>       } else {
>>>           drm_info(&guc_to_gt(guc)->i915->drm,
>>> -- 
>>> 2.34.1
>
Daniele Ceraolo Spurio Sept. 27, 2022, 9:33 p.m. UTC | #7
On 9/27/2022 3:14 AM, Andrzej Hajda wrote:
> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>
>>
>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>> Hi Andrzej,
>>>
>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>> Capturing error state is time consuming (up to 350ms on DG2), so it 
>>>> should
>>>> be avoided if possible. Context reset triggered by context removal 
>>>> is a
>>>> good example.
>>>> With this patch multiple igt tests will not timeout and should run 
>>>> faster.
>>>>
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>> fine for me:
>>>
>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>
>>> Just to be on the safe side, can we also have the ack from any of
>>> the GuC folks? Daniele, John?
>>>
>>> Andi
>>>
>>>
>>>> ---
>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>>> intel_guc *guc,
>>>>       trace_intel_context_reset(ce);
>>>>       if (likely(!intel_context_is_banned(ce))) {
>>>> -        capture_error_state(guc, ce);
>>>> +        if (!intel_context_is_exiting(ce))
>>>> +            capture_error_state(guc, ce);
>>>>           guc_context_replay(ce);
>>
>> You definitely don't want to replay requests of a context that is 
>> going away.
>
> Without guc_context_replay I see timeouts. Probably because 
> guc_context_replay calls __guc_reset_context. I am not sure if there 
> is need to dig deeper, stay with my initial proposition, or sth like:
>
>     if (likely(!intel_context_is_banned(ce))) {
>         if (!intel_context_is_exiting(ce)) {
>             capture_error_state(guc, ce);
>             guc_context_replay(ce);
>         } else {
>             __guc_reset_context(ce, ce->engine->mask);
>         }
>     } else {
>
> The latter is also working.

This seems to be an issue with the context close path when hangcheck is
disabled. In that case we don't call the revoke() helper, so we're not
clearing the context state in the GuC backend, and therefore we require
__guc_reset_context() in the reset handler to do so. I'd argue that the
proper solution would be to ban the context on close in the
hangcheck-disabled scenario and not just rely on the pulse, which, by the
way, I'm not sure works with GuC submission for a preemptible context,
because the GuC will just schedule the context back in unless we send an
H2G to explicitly disable it. I'm not sure why we're not banning right now
though, so I'd prefer if someone knowledgeable could chime in in case
there is a good reason for it.
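
(A very rough sketch of that suggestion; every helper name and signature
below is an assumption for illustration - the real close path lives in
i915_gem_context.c and differs in detail:)

	/* Hypothetical: on context close with hangcheck disabled, ban the
	 * context so the GuC backend state is torn down via the same revoke
	 * path, instead of relying on the heartbeat pulse.
	 */
	static void close_nonpersistent_context(struct intel_context *ce)
	{
		if (!ce->engine->i915->params.enable_hangcheck)
			intel_context_ban(ce, NULL);	/* assumed helper/signature */
		else
			intel_context_revoke(ce);	/* assumed helper */
	}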

Daniele

>
> Regards
> Andrzej
>
>
>>
>> This seems at least in part due to 
>> https://patchwork.freedesktop.org/patch/487531/, where we replaced 
>> the "context_ban" with "context_exiting". There are several places 
>> where we skipped operations if the context was banned (here included) 
>> which are now not covered anymore for exiting contexts. Maybe we need 
>> a new checker function to check both flags in places where we don't 
>> care why the context is being removed (ban vs exiting), just that it is?
>>
>> Daniele
>>
>>>>       } else {
>>>>           drm_info(&guc_to_gt(guc)->i915->drm,
>>>> -- 
>>>> 2.34.1
>>
>
Daniele Ceraolo Spurio Sept. 27, 2022, 9:36 p.m. UTC | #8
On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>
> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>
>>
>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>
>>>
>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>> Hi Andrzej,
>>>>
>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>> Capturing error state is time consuming (up to 350ms on DG2), so 
>>>>> it should
>>>>> be avoided if possible. Context reset triggered by context removal 
>>>>> is a
>>>>> good example.
>>>>> With this patch multiple igt tests will not timeout and should run 
>>>>> faster.
>>>>>
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>> fine for me:
>>>>
>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>
>>>> Just to be on the safe side, can we also have the ack from any of
>>>> the GuC folks? Daniele, John?
>>>>
>>>> Andi
>>>>
>>>>
>>>>> ---
>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>>>> intel_guc *guc,
>>>>>       trace_intel_context_reset(ce);
>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>> -        capture_error_state(guc, ce);
>>>>> +        if (!intel_context_is_exiting(ce))
>>>>> +            capture_error_state(guc, ce);
>
> I am not sure here - if we have a persistent context which caused a 
> GPU hang I'd expect we'd still want error capture.
>
> What causes the reset in the affected IGTs? Always preemption timeout?
>
>>>>>           guc_context_replay(ce);
>>>
>>> You definitely don't want to replay requests of a context that is 
>>> going away.
>>
>> My intention was to just avoid error capture, but that's even better, 
>> only condition change:
>> -        if (likely(!intel_context_is_banned(ce))) {
>> +       if (likely(intel_context_is_schedulable(ce)))  {
>
> Yes that helper was intended to be used for contexts which should not 
> be scheduled post exit or ban.
>
> Daniele - you say there are some misses in the GuC backend. Should 
> most, or even all in intel_guc_submission.c be converted to use 
> intel_context_is_schedulable? My idea indeed was that "ban" should be 
> a level up from the backends. Backend should only distinguish between 
> "should I run this or not", and not the reason.

I think that all of them should be updated, but I'd like Matt B to 
confirm as he's more familiar with the code than me.

Daniele

>
> Regards,
>
> Tvrtko
>
>>
>>>
>>> This seems at least in part due to 
>>> https://patchwork.freedesktop.org/patch/487531/, where we replaced 
>>> the "context_ban" with "context_exiting". There are several places 
>>> where we skipped operations if the context was banned (here 
>>> included) which are now not covered anymore for exiting contexts. 
>>> Maybe we need a new checker function to check both flags in places 
>>> where we don't care why the context is being removed (ban vs 
>>> exiting), just that it is?
>>>
>>> Daniele
>>>
>>>>>       } else {
>>>>>           drm_info(&guc_to_gt(guc)->i915->drm,
>>
>> And maybe degrade above to drm_dbg, to avoid spamming dmesg?
>>
>> Regards
>> Andrzej
>>
>>
>>>>> -- 
>>>>> 2.34.1
>>>
>>
Tvrtko Ursulin Sept. 28, 2022, 7:19 a.m. UTC | #9
On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
> 
> 
> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>
>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>
>>>
>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>
>>>>
>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>> Hi Andrzej,
>>>>>
>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>> Capturing error state is time consuming (up to 350ms on DG2), so 
>>>>>> it should
>>>>>> be avoided if possible. Context reset triggered by context removal 
>>>>>> is a
>>>>>> good example.
>>>>>> With this patch multiple igt tests will not timeout and should run 
>>>>>> faster.
>>>>>>
>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>> fine for me:
>>>>>
>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>
>>>>> Just to be on the safe side, can we also have the ack from any of
>>>>> the GuC folks? Daniele, John?
>>>>>
>>>>> Andi
>>>>>
>>>>>
>>>>>> ---
>>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>> @@ -4425,7 +4425,8 @@ static void guc_handle_context_reset(struct 
>>>>>> intel_guc *guc,
>>>>>>       trace_intel_context_reset(ce);
>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>> -        capture_error_state(guc, ce);
>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>> +            capture_error_state(guc, ce);
>>
>> I am not sure here - if we have a persistent context which caused a 
>> GPU hang I'd expect we'd still want error capture.
>>
>> What causes the reset in the affected IGTs? Always preemption timeout?
>>
>>>>>>           guc_context_replay(ce);
>>>>
>>>> You definitely don't want to replay requests of a context that is 
>>>> going away.
>>>
>>> My intention was to just avoid error capture, but that's even better, 
>>> only condition change:
>>> -        if (likely(!intel_context_is_banned(ce))) {
>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>
>> Yes that helper was intended to be used for contexts which should not 
>> be scheduled post exit or ban.
>>
>> Daniele - you say there are some misses in the GuC backend. Should 
>> most, or even all in intel_guc_submission.c be converted to use 
>> intel_context_is_schedulable? My idea indeed was that "ban" should be 
>> a level up from the backends. Backend should only distinguish between 
>> "should I run this or not", and not the reason.
> 
> I think that all of them should be updated, but I'd like Matt B to 
> confirm as he's more familiar with the code than me.

Right, that sounds plausible to me as well.

One thing I forgot to mention - the only place where backend can care 
between "schedulable" and "banned" is when it picks the preempt timeout 
for non-schedulable contexts. This is to only apply the strict 1ms to 
banned (so bad or naught contexts), while the ones which are exiting 
cleanly get the full preempt timeout as otherwise configured. This 
solves the ugly user experience quirk where GPU resets/errors were 
logged upon exit/Ctrl-C of a well behaving application (using 
non-persistent contexts). Hopefully GuC can match that behaviour so 
customers stay happy.
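
(Illustratively, the backend-side policy described above could be expressed
roughly as below; the helper name and the 1ms value are assumptions for the
sketch, not actual backend code:)

	/* Pick the preemption timeout used to kick a non-schedulable context
	 * off the engine: banned contexts get the strict short timeout,
	 * cleanly exiting ones keep whatever is configured for the engine.
	 */
	static unsigned long revoke_preempt_timeout_ms(const struct intel_context *ce,
						       unsigned long configured_ms)
	{
		if (intel_context_is_banned(ce))
			return 1;	/* bad context: remove almost immediately */

		return configured_ms;	/* clean exit: honour configured timeout */
	}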

Regards,

Tvrtko
John Harrison Sept. 28, 2022, 6:27 p.m. UTC | #10
On 9/28/2022 00:19, Tvrtko Ursulin wrote:
> On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
>> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>>> Hi Andrzej,
>>>>>>
>>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>>> Capturing error state is time consuming (up to 350ms on DG2), so 
>>>>>>> it should
>>>>>>> be avoided if possible. Context reset triggered by context 
>>>>>>> removal is a
>>>>>>> good example.
>>>>>>> With this patch multiple igt tests will not timeout and should 
>>>>>>> run faster.
>>>>>>>
>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>>> fine for me:
>>>>>>
>>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>>
>>>>>> Just to be on the safe side, can we also have the ack from any of
>>>>>> the GuC folks? Daniele, John?
>>>>>>
>>>>>> Andi
>>>>>>
>>>>>>
>>>>>>> ---
>>>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>> @@ -4425,7 +4425,8 @@ static void 
>>>>>>> guc_handle_context_reset(struct intel_guc *guc,
>>>>>>>       trace_intel_context_reset(ce);
>>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>>> -        capture_error_state(guc, ce);
>>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>>> +            capture_error_state(guc, ce);
>>>
>>> I am not sure here - if we have a persistent context which caused a 
>>> GPU hang I'd expect we'd still want error capture.
>>>
>>> What causes the reset in the affected IGTs? Always preemption timeout?
>>>
>>>>>>> guc_context_replay(ce);
>>>>>
>>>>> You definitely don't want to replay requests of a context that is 
>>>>> going away.
>>>>
>>>> My intention was to just avoid error capture, but that's even 
>>>> better, only condition change:
>>>> -        if (likely(!intel_context_is_banned(ce))) {
>>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>>
>>> Yes that helper was intended to be used for contexts which should 
>>> not be scheduled post exit or ban.
>>>
>>> Daniele - you say there are some misses in the GuC backend. Should 
>>> most, or even all in intel_guc_submission.c be converted to use 
>>> intel_context_is_schedulable? My idea indeed was that "ban" should 
>>> be a level up from the backends. Backend should only distinguish 
>>> between "should I run this or not", and not the reason.
>>
>> I think that all of them should be updated, but I'd like Matt B to 
>> confirm as he's more familiar with the code than me.
>
> Right, that sounds plausible to me as well.
>
> One thing I forgot to mention - the only place where backend can care 
> between "schedulable" and "banned" is when it picks the preempt 
> timeout for non-schedulable contexts. This is to only apply the strict 
> 1ms to banned (so bad or naught contexts), while the ones which are 
> exiting cleanly get the full preempt timeout as otherwise configured. 
> This solves the ugly user experience quirk where GPU resets/errors 
> were logged upon exit/Ctrl-C of a well behaving application (using 
> non-persistent contexts). Hopefully GuC can match that behaviour so 
> customers stay happy.
>
> Regards,
>
> Tvrtko

The whole revoke vs ban thing seems broken to me.

First of all, if the user hits Ctrl+C we need to kill the context off 
immediately. That is a fundamental customer requirement. Render and 
compute engines have a 7.5s pre-emption timeout. The user should not 
have to wait 7.5s for a context to be removed from the system when they 
have explicitly killed it themselves. Even the regular timeout of 640ms 
is borderline a long time to wait. And note that there is an ongoing 
request/requirement to increase that to 1900ms.

Under what circumstances would a user expect anything sensible to happen 
after a Ctrl+C in terms of things finishing their rendering and displaying 
nice pretty images? They killed the app. They want it dead. We should be 
getting it off the hardware as quickly as possible. If you are really 
concerned about resets causing collateral damage then maybe bump the 
termination timeout from 1ms up to 10ms, maybe at most 100ms. If an app 
is 'well behaved' then it should cleanly exit within 10ms. But if it is 
bad (which is almost certainly the case if the user is manually and 
explicitly killing it) then it needs to be killed because it is not 
going to gracefully exit.

Secondly, the whole persistence thing is a total mess, completely broken 
and intended to be massively simplified. See the internal task for it. 
In short, the plan is that all contexts will be immediately killed when 
the last DRM file handle is closed. Persistence is only valid between 
the time the per context file handle is closed and the time the master 
DRM handle is closed. Whereas, non-persistent contexts get killed as 
soon as the per context handle is closed. There is absolutely no 
connection to heartbeats or other irrelevant operations.

So in my view, the best option is to revert the ban vs revoke patch. It 
is creating bugs. It is making persistence more complex not simpler. It 
harms the user experience.

If the original problem was simply that error captures were being done 
on Ctrl+C then the fix is simple. Don't capture for a banned context. 
There is no need for all the rest of the revoke patch.

John.
Tvrtko Ursulin Sept. 29, 2022, 8:22 a.m. UTC | #11
On 28/09/2022 19:27, John Harrison wrote:
> On 9/28/2022 00:19, Tvrtko Ursulin wrote:
>> On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
>>> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>>>> Hi Andrzej,
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>>>> Capturing error state is time consuming (up to 350ms on DG2), so 
>>>>>>>> it should
>>>>>>>> be avoided if possible. Context reset triggered by context 
>>>>>>>> removal is a
>>>>>>>> good example.
>>>>>>>> With this patch multiple igt tests will not timeout and should 
>>>>>>>> run faster.
>>>>>>>>
>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>>>> fine for me:
>>>>>>>
>>>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>>>
>>>>>>> Just to be on the safe side, can we also have the ack from any of
>>>>>>> the GuC folks? Daniele, John?
>>>>>>>
>>>>>>> Andi
>>>>>>>
>>>>>>>
>>>>>>>> ---
>>>>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>> @@ -4425,7 +4425,8 @@ static void 
>>>>>>>> guc_handle_context_reset(struct intel_guc *guc,
>>>>>>>>       trace_intel_context_reset(ce);
>>>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>>>> -        capture_error_state(guc, ce);
>>>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>>>> +            capture_error_state(guc, ce);
>>>>
>>>> I am not sure here - if we have a persistent context which caused a 
>>>> GPU hang I'd expect we'd still want error capture.
>>>>
>>>> What causes the reset in the affected IGTs? Always preemption timeout?
>>>>
>>>>>>>> guc_context_replay(ce);
>>>>>>
>>>>>> You definitely don't want to replay requests of a context that is 
>>>>>> going away.
>>>>>
>>>>> My intention was to just avoid error capture, but that's even 
>>>>> better, only condition change:
>>>>> -        if (likely(!intel_context_is_banned(ce))) {
>>>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>>>
>>>> Yes that helper was intended to be used for contexts which should 
>>>> not be scheduled post exit or ban.
>>>>
>>>> Daniele - you say there are some misses in the GuC backend. Should 
>>>> most, or even all in intel_guc_submission.c be converted to use 
>>>> intel_context_is_schedulable? My idea indeed was that "ban" should 
>>>> be a level up from the backends. Backend should only distinguish 
>>>> between "should I run this or not", and not the reason.
>>>
>>> I think that all of them should be updated, but I'd like Matt B to 
>>> confirm as he's more familiar with the code than me.
>>
>> Right, that sounds plausible to me as well.
>>
>> One thing I forgot to mention - the only place where backend can care 
>> between "schedulable" and "banned" is when it picks the preempt 
>> timeout for non-schedulable contexts. This is to only apply the strict 
>> 1ms to banned (so bad or naught contexts), while the ones which are 
>> exiting cleanly get the full preempt timeout as otherwise configured. 
>> This solves the ugly user experience quirk where GPU resets/errors 
>> were logged upon exit/Ctrl-C of a well behaving application (using 
>> non-persistent contexts). Hopefully GuC can match that behaviour so 
>> customers stay happy.
>>
>> Regards,
>>
>> Tvrtko
> 
> The whole revoke vs ban thing seems broken to me.
> 
> First of all, if the user hits Ctrl+C we need to kill the context off 
> immediately. That is a fundamental customer requirement. Render and 
> compute engines have a 7.5s pre-emption timeout. The user should not 
> have to wait 7.5s for a context to be removed from the system when they 
> have explicitly killed it themselves. Even the regular timeout of 640ms 
> is borderline a long time to wait. And note that there is an ongoing 
> request/requirement to increase that to 1900ms.
> 
> Under what circumstances would a user expect anything sensible to happen 
> after a Ctrl+C in terms of things finishing their rendering and display 
> nice pretty images? They killed the app. They want it dead. We should be 
> getting it off the hardware as quickly as possible. If you are really 
> concerned about resets causing collateral damage then maybe bump the 
> termination timeout from 1ms up to 10ms, maybe at most 100ms. If an app 
> is 'well behaved' then it should cleanly exit within 10ms. But if it is 
> bad (which is almost certainly the case if the user is manually and 
> explicitly killing it) then it needs to be killed because it is not 
> going to gracefully exit.

Right.. I had it like that initially (lower timeout - I think 20ms or 
so, patch history on the mailing list would know for sure), but then 
simplified it after review feedback to avoid adding another timeout value.

So it's not at all about any expectation that something should actually 
finish to any sort of completion/success. It is primarily about not 
logging an error message when there is no error. Thing to keep in mind 
is that error messages are a big deal in some cultures. In addition to 
that, avoiding needless engine resets is a good thing as well.

Previously the execlists backend was over eager and only allowed for 1ms 
for such contexts to exit. If the context was banned sure - that means 
it was a bad context which was causing many hangs already. But if the 
context was a clean one I argue there is no point in doing an engine reset.

So if you want, I think it is okay to re-introduce a secondary timeout.

Or if you have an idea on how to avoid the error messages / GPU resets 
when "friendly" contexts exit in some other way, that is also something 
to discuss.

> Secondly, the whole persistence thing is a total mess, completely broken 
> and intended to be massively simplified. See the internal task for it. 
> In short, the plan is that all contexts will be immediately killed when 
> the last DRM file handle is closed. Persistence is only valid between 
> the time the per context file handle is closed and the time the master 
> DRM handle is closed. Whereas, non-persistent contexts get killed as 
> soon as the per context handle is closed. There is absolutely no 
> connection to heartbeats or other irrelevant operations.

The change we are discussing is not about persistence, and as for
persistence itself - I am not sure it is completely broken, nor if, or
when, the internal task will result in anything being attempted. In the
meantime we have had unhappy customers for more than a year. So do we
tell them "please wait a few more years until some internal task with no
clear timeline or anyone assigned maybe gets looked at"?

> So in my view, the best option is to revert the ban vs revoke patch. It 
> is creating bugs. It is making persistence more complex not simpler. It 
> harms the user experience.

I am not aware of the bugs, even less so that it is harming user 
experience!?

Are the bugs limited to the GuC backend, or are they general? My CI runs were clean 
so maybe test cases are lacking. Is it just a case of 
s/intel_context_is_banned/intel_context_is_schedulable/ in there to fix it?

Again, the change was not about persistence. It is the opposite - 
allowing non-persistent contexts to exit cleanly.

> If the original problem was simply that error captures were being done 
> on Ctrl+C then the fix is simple. Don't capture for a banned context. 
> There is no need for all the rest of the revoke patch.

Error capture was not part of the original story so it may be a 
completely orthogonal topic that we are discussing it in this thread.

Regards,

Tvrtko
Andrzej Hajda Sept. 29, 2022, 9:49 a.m. UTC | #12
On 29.09.2022 10:22, Tvrtko Ursulin wrote:
> 
> On 28/09/2022 19:27, John Harrison wrote:
>> On 9/28/2022 00:19, Tvrtko Ursulin wrote:
>>> On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
>>>> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>>>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>>>>> Hi Andrzej,
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>>>>> Capturing error state is time consuming (up to 350ms on DG2), 
>>>>>>>>> so it should
>>>>>>>>> be avoided if possible. Context reset triggered by context 
>>>>>>>>> removal is a
>>>>>>>>> good example.
>>>>>>>>> With this patch multiple igt tests will not timeout and should 
>>>>>>>>> run faster.
>>>>>>>>>
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>>>>> fine for me:
>>>>>>>>
>>>>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>>>>
>>>>>>>> Just to be on the safe side, can we also have the ack from any of
>>>>>>>> the GuC folks? Daniele, John?
>>>>>>>>
>>>>>>>> Andi
>>>>>>>>
>>>>>>>>
>>>>>>>>> ---
>>>>>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> @@ -4425,7 +4425,8 @@ static void 
>>>>>>>>> guc_handle_context_reset(struct intel_guc *guc,
>>>>>>>>>       trace_intel_context_reset(ce);
>>>>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>>>>> -        capture_error_state(guc, ce);
>>>>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>>>>> +            capture_error_state(guc, ce);
>>>>>
>>>>> I am not sure here - if we have a persistent context which caused a 
>>>>> GPU hang I'd expect we'd still want error capture.
>>>>>
>>>>> What causes the reset in the affected IGTs? Always preemption timeout?
>>>>>
>>>>>>>>> guc_context_replay(ce);
>>>>>>>
>>>>>>> You definitely don't want to replay requests of a context that is 
>>>>>>> going away.
>>>>>>
>>>>>> My intention was to just avoid error capture, but that's even 
>>>>>> better, only condition change:
>>>>>> -        if (likely(!intel_context_is_banned(ce))) {
>>>>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>>>>
>>>>> Yes that helper was intended to be used for contexts which should 
>>>>> not be scheduled post exit or ban.
>>>>>
>>>>> Daniele - you say there are some misses in the GuC backend. Should 
>>>>> most, or even all in intel_guc_submission.c be converted to use 
>>>>> intel_context_is_schedulable? My idea indeed was that "ban" should 
>>>>> be a level up from the backends. Backend should only distinguish 
>>>>> between "should I run this or not", and not the reason.
>>>>
>>>> I think that all of them should be updated, but I'd like Matt B to 
>>>> confirm as he's more familiar with the code than me.
>>>
>>> Right, that sounds plausible to me as well.
>>>
>>> One thing I forgot to mention - the only place where backend can care 
>>> between "schedulable" and "banned" is when it picks the preempt 
>>> timeout for non-schedulable contexts. This is to only apply the 
>>> strict 1ms to banned (so bad or naught contexts), while the ones 
>>> which are exiting cleanly get the full preempt timeout as otherwise 
>>> configured. This solves the ugly user experience quirk where GPU 
>>> resets/errors were logged upon exit/Ctrl-C of a well behaving 
>>> application (using non-persistent contexts). Hopefully GuC can match 
>>> that behaviour so customers stay happy.
>>>
>>> Regards,
>>>
>>> Tvrtko
>>
>> The whole revoke vs ban thing seems broken to me.
>>
>> First of all, if the user hits Ctrl+C we need to kill the context off 
>> immediately. That is a fundamental customer requirement. Render and 
>> compute engines have a 7.5s pre-emption timeout. The user should not 
>> have to wait 7.5s for a context to be removed from the system when 
>> they have explicitly killed it themselves. Even the regular timeout of 
>> 640ms is borderline a long time to wait. And note that there is an 
>> ongoing request/requirement to increase that to 1900ms.
>>
>> Under what circumstances would a user expect anything sensible to 
>> happen after a Ctrl+C in terms of things finishing their rendering and 
>> display nice pretty images? They killed the app. They want it dead. We 
>> should be getting it off the hardware as quickly as possible. If you 
>> are really concerned about resets causing collateral damage then maybe 
>> bump the termination timeout from 1ms up to 10ms, maybe at most 100ms. 
>> If an app is 'well behaved' then it should cleanly exit within 10ms. 
>> But if it is bad (which is almost certainly the case if the user is 
>> manually and explicitly killing it) then it needs to be killed because 
>> it is not going to gracefully exit.
> 
> Right.. I had it like that initially (lower timeout - I think 20ms or 
> so, patch history on the mailing list would know for sure), but then 
> simplified it after review feedback to avoid adding another timeout value.
> 
> So it's not at all about any expectation that something should actually 
> finish to any sort of completion/success. It is primarily about not 
> logging an error message when there is no error. Thing to keep in mind 
> is that error messages are a big deal in some cultures. In addition to 
> that, avoiding needless engine resets is a good thing as well.
> 
> Previously the execlists backend was over eager and only allowed for 1ms 
> for such contexts to exit. If the context was banned sure - that means 
> it was a bad context which was causing many hangs already. But if the 
> context was a clean one I argue there is no point in doing an engine reset.
> 
> So if you want, I think it is okay to re-introduce a secondary timeout.
> 
> Or if you have an idea on how to avoid the error messages / GPU resets 
> when "friendly" contexts exit in some other way, that is also something 
> to discuss.
> 
>> Secondly, the whole persistence thing is a total mess, completely 
>> broken and intended to be massively simplified. See the internal task 
>> for it. In short, the plan is that all contexts will be immediately 
>> killed when the last DRM file handle is closed. Persistence is only 
>> valid between the time the per context file handle is closed and the 
>> time the master DRM handle is closed. Whereas, non-persistent contexts 
>> get killed as soon as the per context handle is closed. There is 
>> absolutely no connection to heartbeats or other irrelevant operations.
> 
> The change we are discussing is not about persistence, but for the 
> persistence itself - I am not sure it is completely broken and if, or 
> when, the internal task will result with anything being attempted. In 
> the meantime we had unhappy customers for more than a year. So do we 
> tell them "please wait for a few years more until some internal task 
> with no clear timeline or anyone assigned maybe gets looked at"?
> 
>> So in my view, the best option is to revert the ban vs revoke patch. 
>> It is creating bugs. It is making persistence more complex not 
>> simpler. It harms the user experience.
> 
> I am not aware of the bugs, even less so that it is harming user 
> experience!?
> 
> Bugs are limited to the GuC backend or in general? My CI runs were clean 
> so maybe test cases are lacking. Is it just a case of 
> s/intel_context_is_banned/intel_context_is_schedulable/ in there to fix it?
> 
> Again, the change was not about persistence. It is the opposite - 
> allowing non-persistent contexts to exit cleanly.
> 
>> If the original problem was simply that error captures were being done 
>> on Ctrl+C then the fix is simple. Don't capture for a banned context. 
>> There is no need for all the rest of the revoke patch.
> 
> Error capture was not part of the original story so it may be a 
> completely orthogonal topic that we are discussing it in this thread.

Wouldn't it be good, then, to separate these two issues:
banned/exiting/schedulable handling, and error capture for exiting contexts?
This patch handles only the latter, and as I understand it there is no big
controversy that we do not need to capture errors for exiting contexts.
If so, can we ack/merge this patch to make CI happy, and continue the
discussion on the former?

Regards
Andrzej


> 
> Regards,
> 
> Tvrtko
Tvrtko Ursulin Sept. 29, 2022, 10:40 a.m. UTC | #13
On 29/09/2022 10:49, Andrzej Hajda wrote:
> On 29.09.2022 10:22, Tvrtko Ursulin wrote:
>> On 28/09/2022 19:27, John Harrison wrote:
>>> On 9/28/2022 00:19, Tvrtko Ursulin wrote:
>>>> On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
>>>>> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>>>>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>>>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>>>>>> Hi Andrzej,
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>>>>>> Capturing error state is time consuming (up to 350ms on DG2), 
>>>>>>>>>> so it should
>>>>>>>>>> be avoided if possible. Context reset triggered by context 
>>>>>>>>>> removal is a
>>>>>>>>>> good example.
>>>>>>>>>> With this patch multiple igt tests will not timeout and should 
>>>>>>>>>> run faster.
>>>>>>>>>>
>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>>>>>> fine for me:
>>>>>>>>>
>>>>>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>>>>>
>>>>>>>>> Just to be on the safe side, can we also have the ack from any of
>>>>>>>>> the GuC folks? Daniele, John?
>>>>>>>>>
>>>>>>>>> Andi
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> ---
>>>>>>>>>>   drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>> @@ -4425,7 +4425,8 @@ static void 
>>>>>>>>>> guc_handle_context_reset(struct intel_guc *guc,
>>>>>>>>>>       trace_intel_context_reset(ce);
>>>>>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>>>>>> -        capture_error_state(guc, ce);
>>>>>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>>>>>> +            capture_error_state(guc, ce);
>>>>>>
>>>>>> I am not sure here - if we have a persistent context which caused 
>>>>>> a GPU hang I'd expect we'd still want error capture.
>>>>>>
>>>>>> What causes the reset in the affected IGTs? Always preemption 
>>>>>> timeout?
>>>>>>
>>>>>>>>>> guc_context_replay(ce);
>>>>>>>>
>>>>>>>> You definitely don't want to replay requests of a context that 
>>>>>>>> is going away.
>>>>>>>
>>>>>>> My intention was to just avoid error capture, but that's even 
>>>>>>> better, only condition change:
>>>>>>> -        if (likely(!intel_context_is_banned(ce))) {
>>>>>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>>>>>
>>>>>> Yes that helper was intended to be used for contexts which should 
>>>>>> not be scheduled post exit or ban.
>>>>>>
>>>>>> Daniele - you say there are some misses in the GuC backend. Should 
>>>>>> most, or even all in intel_guc_submission.c be converted to use 
>>>>>> intel_context_is_schedulable? My idea indeed was that "ban" should 
>>>>>> be a level up from the backends. Backend should only distinguish 
>>>>>> between "should I run this or not", and not the reason.
>>>>>
>>>>> I think that all of them should be updated, but I'd like Matt B to 
>>>>> confirm as he's more familiar with the code than me.
>>>>
>>>> Right, that sounds plausible to me as well.
>>>>
>>>> One thing I forgot to mention - the only place where backend can 
>>>> care between "schedulable" and "banned" is when it picks the preempt 
>>>> timeout for non-schedulable contexts. This is to only apply the 
>>>> strict 1ms to banned (so bad or naught contexts), while the ones 
>>>> which are exiting cleanly get the full preempt timeout as otherwise 
>>>> configured. This solves the ugly user experience quirk where GPU 
>>>> resets/errors were logged upon exit/Ctrl-C of a well behaving 
>>>> application (using non-persistent contexts). Hopefully GuC can match 
>>>> that behaviour so customers stay happy.
>>>>
>>>> Regards,
>>>>
>>>> Tvrtko
>>>
>>> The whole revoke vs ban thing seems broken to me.
>>>
>>> First of all, if the user hits Ctrl+C we need to kill the context off 
>>> immediately. That is a fundamental customer requirement. Render and 
>>> compute engines have a 7.5s pre-emption timeout. The user should not 
>>> have to wait 7.5s for a context to be removed from the system when 
>>> they have explicitly killed it themselves. Even the regular timeout 
>>> of 640ms is borderline a long time to wait. And note that there is an 
>>> ongoing request/requirement to increase that to 1900ms.
>>>
>>> Under what circumstances would a user expect anything sensible to 
>>> happen after a Ctrl+C in terms of things finishing their rendering 
>>> and display nice pretty images? They killed the app. They want it 
>>> dead. We should be getting it off the hardware as quickly as 
>>> possible. If you are really concerned about resets causing collateral 
>>> damage then maybe bump the termination timeout from 1ms up to 10ms, 
>>> maybe at most 100ms. If an app is 'well behaved' then it should 
>>> cleanly exit within 10ms. But if it is bad (which is almost certainly 
>>> the case if the user is manually and explicitly killing it) then it 
>>> needs to be killed because it is not going to gracefully exit.
>>
>> Right.. I had it like that initially (lower timeout - I think 20ms or 
>> so, patch history on the mailing list would know for sure), but then 
>> simplified it after review feedback to avoid adding another timeout 
>> value.
>>
>> So it's not at all about any expectation that something should 
>> actually finish to any sort of completion/success. It is primarily 
>> about not logging an error message when there is no error. Thing to 
>> keep in mind is that error messages are a big deal in some cultures. 
>> In addition to that, avoiding needless engine resets is a good thing 
>> as well.
>>
>> Previously the execlists backend was over eager and only allowed for 
>> 1ms for such contexts to exit. If the context was banned sure - that 
>> means it was a bad context which was causing many hangs already. But 
>> if the context was a clean one I argue there is no point in doing an 
>> engine reset.
>>
>> So if you want, I think it is okay to re-introduce a secondary timeout.
>>
>> Or if you have an idea on how to avoid the error messages / GPU resets 
>> when "friendly" contexts exit in some other way, that is also 
>> something to discuss.
>>
>>> Secondly, the whole persistence thing is a total mess, completely 
>>> broken and intended to be massively simplified. See the internal task 
>>> for it. In short, the plan is that all contexts will be immediately 
>>> killed when the last DRM file handle is closed. Persistence is only 
>>> valid between the time the per context file handle is closed and the 
>>> time the master DRM handle is closed. Whereas, non-persistent 
>>> contexts get killed as soon as the per context handle is closed. 
>>> There is absolutely no connection to heartbeats or other irrelevant 
>>> operations.
>>
>> The change we are discussing is not about persistence, but for the 
>> persistence itself - I am not sure it is completely broken and if, or 
>> when, the internal task will result with anything being attempted. In 
>> the meantime we had unhappy customers for more than a year. So do we 
>> tell them "please wait for a few years more until some internal task 
>> with no clear timeline or anyone assigned maybe gets looked at"?
>>
>>> So in my view, the best option is to revert the ban vs revoke patch. 
>>> It is creating bugs. It is making persistence more complex not 
>>> simpler. It harms the user experience.
>>
>> I am not aware of the bugs, even less so that it is harming user 
>> experience!?
>>
>> Bugs are limited to the GuC backend or in general? My CI runs were 
>> clean so maybe test cases are lacking. Is it just a case of 
>> s/intel_context_is_banned/intel_context_is_schedulable/ in there to 
>> fix it?
>>
>> Again, the change was not about persistence. It is the opposite - 
>> allowing non-persistent contexts to exit cleanly.
>>
>>> If the original problem was simply that error captures were being 
>>> done on Ctrl+C then the fix is simple. Don't capture for a banned 
>>> context. There is no need for all the rest of the revoke patch.
>>
>> Error capture was not part of the original story so it may be a 
>> completely orthogonal topic that we are discussing it in this thread.
> 
> Wouldn't be good then to separate these two issues: 
> banned/exiting/schedulable handling and error capturing of exiting context.
> This patch handles only the latter, and as I understand there is no big 
> controversy that we de not need capture errors for exiting contexts.
> If yes, can we ack/merge this patch, to make CI happy and continue 
> discussion on the former.

Right, the question is whether the code in guc_handle_context_reset shouldn't be changed to:

  	if (likely(!intel_context_is_exiting(ce))) {
		capture_error_state(guc, ce);
  		guc_context_replay(ce);
  	} else {

And whether that should be part of a patch which changes a few more instances of that same check.
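
For illustration, a rough sketch of the intel_context_is_schedulable() variant suggested earlier in the thread (assuming that helper covers both the banned and the exiting case) would be:

	if (likely(intel_context_is_schedulable(ce))) {
		capture_error_state(guc, ce);
		guc_context_replay(ce);
	} else {
		drm_info(&guc_to_gt(guc)->i915->drm,
			 ...);
	}

Only a sketch of the shape of the change, not a tested diff.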

But you wrote that doesn't work? And then Daniele said he thinks it is because revoke is not called when hangcheck is disabled and the GuC backend gets confused? If I got the conversation right...

I wonder if that means an equivalent of the execlists check:

         if (unlikely(intel_context_is_closed(ce) &&
                      !intel_engine_has_heartbeat(engine)))
                intel_context_set_exiting(ce);

is needed somewhere in the GuC backend. With execlists this check skips over a context which is no longer schedulable.

But I don't understand why testing did not pick up that miss, or the miss with guc_context_replay on an exiting context. Or where exactly to put the extra handling in the GuC backend. Perhaps it isn't possible, in which case we could have an ugly solution where, for GuC, we do something special in kill_engines() if hangcheck is disabled. Maybe add and call a new helper like:

static bool intel_context_exit_nohangcheck(struct intel_context *ce)
{
	bool ret = intel_context_set_exiting(ce);

	if (!ret && intel_engine_uses_guc(ce->engine))
		intel_context_ban(ce, NULL);

	return ret;
}

Too ugly?
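
The call site would then be, purely hypothetically (exact spot and condition in kill_engines() would need checking), something like:

	/* hypothetical: in kill_engines(), when heartbeats/hangcheck are off */
	if (!intel_engine_has_heartbeat(engine))
		intel_context_exit_nohangcheck(ce);

in place of marking the context exiting directly.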

Regards,

Tvrtko
Daniele Ceraolo Spurio Sept. 29, 2022, 2:28 p.m. UTC | #14
On 9/29/2022 3:40 AM, Tvrtko Ursulin wrote:
>
> On 29/09/2022 10:49, Andrzej Hajda wrote:
>> On 29.09.2022 10:22, Tvrtko Ursulin wrote:
>>> On 28/09/2022 19:27, John Harrison wrote:
>>>> On 9/28/2022 00:19, Tvrtko Ursulin wrote:
>>>>> On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
>>>>>> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>>>>>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>>>>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>>>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>>>>>>> Hi Andrzej,
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>>>>>>> Capturing error state is time consuming (up to 350ms on 
>>>>>>>>>>> DG2), so it should
>>>>>>>>>>> be avoided if possible. Context reset triggered by context 
>>>>>>>>>>> removal is a
>>>>>>>>>>> good example.
>>>>>>>>>>> With this patch multiple igt tests will not timeout and 
>>>>>>>>>>> should run faster.
>>>>>>>>>>>
>>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>>>>>>> fine for me:
>>>>>>>>>>
>>>>>>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>>>>>>
>>>>>>>>>> Just to be on the safe side, can we also have the ack from 
>>>>>>>>>> any of
>>>>>>>>>> the GuC folks? Daniele, John?
>>>>>>>>>>
>>>>>>>>>> Andi
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> ---
>>>>>>>>>>> drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git 
>>>>>>>>>>> a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>>>> @@ -4425,7 +4425,8 @@ static void 
>>>>>>>>>>> guc_handle_context_reset(struct intel_guc *guc,
>>>>>>>>>>>       trace_intel_context_reset(ce);
>>>>>>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>>>>>>> -        capture_error_state(guc, ce);
>>>>>>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>>>>>>> +            capture_error_state(guc, ce);
>>>>>>>
>>>>>>> I am not sure here - if we have a persistent context which 
>>>>>>> caused a GPU hang I'd expect we'd still want error capture.
>>>>>>>
>>>>>>> What causes the reset in the affected IGTs? Always preemption 
>>>>>>> timeout?
>>>>>>>
>>>>>>>>>>> guc_context_replay(ce);
>>>>>>>>>
>>>>>>>>> You definitely don't want to replay requests of a context that 
>>>>>>>>> is going away.
>>>>>>>>
>>>>>>>> My intention was to just avoid error capture, but that's even 
>>>>>>>> better, only condition change:
>>>>>>>> -        if (likely(!intel_context_is_banned(ce))) {
>>>>>>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>>>>>>
>>>>>>> Yes that helper was intended to be used for contexts which 
>>>>>>> should not be scheduled post exit or ban.
>>>>>>>
>>>>>>> Daniele - you say there are some misses in the GuC backend. 
>>>>>>> Should most, or even all in intel_guc_submission.c be converted 
>>>>>>> to use intel_context_is_schedulable? My idea indeed was that 
>>>>>>> "ban" should be a level up from the backends. Backend should 
>>>>>>> only distinguish between "should I run this or not", and not the 
>>>>>>> reason.
>>>>>>
>>>>>> I think that all of them should be updated, but I'd like Matt B 
>>>>>> to confirm as he's more familiar with the code than me.
>>>>>
>>>>> Right, that sounds plausible to me as well.
>>>>>
>>>>> One thing I forgot to mention - the only place where backend can 
>>>>> care between "schedulable" and "banned" is when it picks the 
>>>>> preempt timeout for non-schedulable contexts. This is to only 
>>>>> apply the strict 1ms to banned (so bad or naught contexts), while 
>>>>> the ones which are exiting cleanly get the full preempt timeout as 
>>>>> otherwise configured. This solves the ugly user experience quirk 
>>>>> where GPU resets/errors were logged upon exit/Ctrl-C of a well 
>>>>> behaving application (using non-persistent contexts). Hopefully 
>>>>> GuC can match that behaviour so customers stay happy.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>
>>>> The whole revoke vs ban thing seems broken to me.
>>>>
>>>> First of all, if the user hits Ctrl+C we need to kill the context 
>>>> off immediately. That is a fundamental customer requirement. Render 
>>>> and compute engines have a 7.5s pre-emption timeout. The user 
>>>> should not have to wait 7.5s for a context to be removed from the 
>>>> system when they have explicitly killed it themselves. Even the 
>>>> regular timeout of 640ms is borderline a long time to wait. And 
>>>> note that there is an ongoing request/requirement to increase that 
>>>> to 1900ms.
>>>>
>>>> Under what circumstances would a user expect anything sensible to 
>>>> happen after a Ctrl+C in terms of things finishing their rendering 
>>>> and display nice pretty images? They killed the app. They want it 
>>>> dead. We should be getting it off the hardware as quickly as 
>>>> possible. If you are really concerned about resets causing 
>>>> collateral damage then maybe bump the termination timeout from 1ms 
>>>> up to 10ms, maybe at most 100ms. If an app is 'well behaved' then 
>>>> it should cleanly exit within 10ms. But if it is bad (which is 
>>>> almost certainly the case if the user is manually and explicitly 
>>>> killing it) then it needs to be killed because it is not going to 
>>>> gracefully exit.
>>>
>>> Right.. I had it like that initially (lower timeout - I think 20ms 
>>> or so, patch history on the mailing list would know for sure), but 
>>> then simplified it after review feedback to avoid adding another 
>>> timeout value.
>>>
>>> So it's not at all about any expectation that something should 
>>> actually finish to any sort of completion/success. It is primarily 
>>> about not logging an error message when there is no error. Thing to 
>>> keep in mind is that error messages are a big deal in some cultures. 
>>> In addition to that, avoiding needless engine resets is a good thing 
>>> as well.
>>>
>>> Previously the execlists backend was over eager and only allowed for 
>>> 1ms for such contexts to exit. If the context was banned sure - that 
>>> means it was a bad context which was causing many hangs already. But 
>>> if the context was a clean one I argue there is no point in doing an 
>>> engine reset.
>>>
>>> So if you want, I think it is okay to re-introduce a secondary timeout.
>>>
>>> Or if you have an idea on how to avoid the error messages / GPU 
>>> resets when "friendly" contexts exit in some other way, that is also 
>>> something to discuss.
>>>
>>>> Secondly, the whole persistence thing is a total mess, completely 
>>>> broken and intended to be massively simplified. See the internal 
>>>> task for it. In short, the plan is that all contexts will be 
>>>> immediately killed when the last DRM file handle is closed. 
>>>> Persistence is only valid between the time the per context file 
>>>> handle is closed and the time the master DRM handle is closed. 
>>>> Whereas, non-persistent contexts get killed as soon as the per 
>>>> context handle is closed. There is absolutely no connection to 
>>>> heartbeats or other irrelevant operations.
>>>
>>> The change we are discussing is not about persistence, but for the 
>>> persistence itself - I am not sure it is completely broken and if, 
>>> or when, the internal task will result with anything being 
>>> attempted. In the meantime we had unhappy customers for more than a 
>>> year. So do we tell them "please wait for a few years more until 
>>> some internal task with no clear timeline or anyone assigned maybe 
>>> gets looked at"?
>>>
>>>> So in my view, the best option is to revert the ban vs revoke 
>>>> patch. It is creating bugs. It is making persistence more complex 
>>>> not simpler. It harms the user experience.
>>>
>>> I am not aware of the bugs, even less so that it is harming user 
>>> experience!?
>>>
>>> Bugs are limited to the GuC backend or in general? My CI runs were 
>>> clean so maybe test cases are lacking. Is it just a case of 
>>> s/intel_context_is_banned/intel_context_is_schedulable/ in there to 
>>> fix it?
>>>
>>> Again, the change was not about persistence. It is the opposite - 
>>> allowing non-persistent contexts to exit cleanly.
>>>
>>>> If the original problem was simply that error captures were being 
>>>> done on Ctrl+C then the fix is simple. Don't capture for a banned 
>>>> context. There is no need for all the rest of the revoke patch.
>>>
>>> Error capture was not part of the original story so it may be a 
>>> completely orthogonal topic that we are discussing it in this thread.
>>
>> Wouldn't be good then to separate these two issues: 
>> banned/exiting/schedulable handling and error capturing of exiting 
>> context.
>> This patch handles only the latter, and as I understand there is no 
>> big controversy that we de not need capture errors for exiting contexts.
>> If yes, can we ack/merge this patch, to make CI happy and continue 
>> discussion on the former.
>
> Right, question is if the code in guc_handle_context_reset shouldn't 
> be changed to:
>
>      if (likely(!intel_context_is_exiting(ce))) {
>         capture_error_state(guc, ce);
>          guc_context_replay(ce);
>      } else {
>
> And if that should be part of patch which changes a few more instances 
> of that same check.
>
> But you wrote that doesn't work? And then Daniele said he thinks it is 
> because revoke is not called when hangcheck is disabled and GuC 
> backend gets confused? If I got the conversation right..
>
> I wonder if that means equivalent of execlists:
>
>         if (unlikely(intel_context_is_closed(ce) &&
>                      !intel_engine_has_heartbeat(engine)))
>                intel_context_set_exiting(ce);
>
> Is needed somewhere in the GuC backend. Which with execlists skips 
> over the context which is no longer schedulable.

There is nowhere we can put that in the GuC back-end if the context has 
already been handed over to the GuC, because at that point it is out of 
our hands. We need to tell the GuC if we want the context to be dropped.

>
> But I don't understand why testing did not pick up that miss, or the 
> miss with guc_context_replay on an exiting context. Or where exactly 
> to put the extra handling in the GuC backend. 

My worry here is that some of the bugs seem to pre-date your patch 
(which might be why they weren't flagged in the CI run), so there might 
be something else going on that we're missing.

> Perhaps it isn't possible in which case we could have an ugly solution 
> where for GuC we do something special in kill_engines() if hangcheck 
> is disabled. Maybe add and call a new helper like:
>
> intel_context_exit_nohangcheck()
> {
>     bool ret = intel_context_set_exiting(ce);
>
>     if (!ret && intel_engine_uses_guc(ce->engine))
>         intel_context_ban(ce, NULL);
>
>     return ret;
> }
>
> Too ugly?

This works for me if it fixes the issues. The no-hangcheck case is not 
common and the user should be careful of what they're running if they 
select it, so IMO we don't need a super pretty or super efficient 
solution, just something that works.

Daniele

>
> Regards,
>
> Tvrtko
John Harrison Sept. 29, 2022, 4:49 p.m. UTC | #15
On 9/29/2022 01:22, Tvrtko Ursulin wrote:
> On 28/09/2022 19:27, John Harrison wrote:
>> On 9/28/2022 00:19, Tvrtko Ursulin wrote:
>>> On 27/09/2022 22:36, Ceraolo Spurio, Daniele wrote:
>>>> On 9/27/2022 12:45 AM, Tvrtko Ursulin wrote:
>>>>> On 27/09/2022 07:49, Andrzej Hajda wrote:
>>>>>> On 27.09.2022 01:34, Ceraolo Spurio, Daniele wrote:
>>>>>>> On 9/26/2022 3:44 PM, Andi Shyti wrote:
>>>>>>>> Hi Andrzej,
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 11:54:09PM +0200, Andrzej Hajda wrote:
>>>>>>>>> Capturing error state is time consuming (up to 350ms on DG2), 
>>>>>>>>> so it should
>>>>>>>>> be avoided if possible. Context reset triggered by context 
>>>>>>>>> removal is a
>>>>>>>>> good example.
>>>>>>>>> With this patch multiple igt tests will not timeout and should 
>>>>>>>>> run faster.
>>>>>>>>>
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/1551
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/3952
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5891
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6268
>>>>>>>>> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/6281
>>>>>>>>> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
>>>>>>>> fine for me:
>>>>>>>>
>>>>>>>> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
>>>>>>>>
>>>>>>>> Just to be on the safe side, can we also have the ack from any of
>>>>>>>> the GuC folks? Daniele, John?
>>>>>>>>
>>>>>>>> Andi
>>>>>>>>
>>>>>>>>
>>>>>>>>> ---
>>>>>>>>> drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 3 ++-
>>>>>>>>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c 
>>>>>>>>> b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> index 22ba66e48a9b01..cb58029208afe1 100644
>>>>>>>>> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
>>>>>>>>> @@ -4425,7 +4425,8 @@ static void 
>>>>>>>>> guc_handle_context_reset(struct intel_guc *guc,
>>>>>>>>>       trace_intel_context_reset(ce);
>>>>>>>>>         if (likely(!intel_context_is_banned(ce))) {
>>>>>>>>> -        capture_error_state(guc, ce);
>>>>>>>>> +        if (!intel_context_is_exiting(ce))
>>>>>>>>> +            capture_error_state(guc, ce);
>>>>>
>>>>> I am not sure here - if we have a persistent context which caused 
>>>>> a GPU hang I'd expect we'd still want error capture.
>>>>>
>>>>> What causes the reset in the affected IGTs? Always preemption 
>>>>> timeout?
>>>>>
>>>>>>>>> guc_context_replay(ce);
>>>>>>>
>>>>>>> You definitely don't want to replay requests of a context that 
>>>>>>> is going away.
>>>>>>
>>>>>> My intention was to just avoid error capture, but that's even 
>>>>>> better, only condition change:
>>>>>> -        if (likely(!intel_context_is_banned(ce))) {
>>>>>> +       if (likely(intel_context_is_schedulable(ce)))  {
>>>>>
>>>>> Yes that helper was intended to be used for contexts which should 
>>>>> not be scheduled post exit or ban.
>>>>>
>>>>> Daniele - you say there are some misses in the GuC backend. Should 
>>>>> most, or even all in intel_guc_submission.c be converted to use 
>>>>> intel_context_is_schedulable? My idea indeed was that "ban" should 
>>>>> be a level up from the backends. Backend should only distinguish 
>>>>> between "should I run this or not", and not the reason.
>>>>
>>>> I think that all of them should be updated, but I'd like Matt B to 
>>>> confirm as he's more familiar with the code than me.
>>>
>>> Right, that sounds plausible to me as well.
>>>
>>> One thing I forgot to mention - the only place where backend can 
>>> care between "schedulable" and "banned" is when it picks the preempt 
>>> timeout for non-schedulable contexts. This is to only apply the 
>>> strict 1ms to banned (so bad or naught contexts), while the ones 
>>> which are exiting cleanly get the full preempt timeout as otherwise 
>>> configured. This solves the ugly user experience quirk where GPU 
>>> resets/errors were logged upon exit/Ctrl-C of a well behaving 
>>> application (using non-persistent contexts). Hopefully GuC can match 
>>> that behaviour so customers stay happy.
>>>
>>> Regards,
>>>
>>> Tvrtko
>>
>> The whole revoke vs ban thing seems broken to me.
>>
>> First of all, if the user hits Ctrl+C we need to kill the context off 
>> immediately. That is a fundamental customer requirement. Render and 
>> compute engines have a 7.5s pre-emption timeout. The user should not 
>> have to wait 7.5s for a context to be removed from the system when 
>> they have explicitly killed it themselves. Even the regular timeout 
>> of 640ms is borderline a long time to wait. And note that there is an 
>> ongoing request/requirement to increase that to 1900ms.
>>
>> Under what circumstances would a user expect anything sensible to 
>> happen after a Ctrl+C in terms of things finishing their rendering 
>> and display nice pretty images? They killed the app. They want it 
>> dead. We should be getting it off the hardware as quickly as 
>> possible. If you are really concerned about resets causing collateral 
>> damage then maybe bump the termination timeout from 1ms up to 10ms, 
>> maybe at most 100ms. If an app is 'well behaved' then it should 
>> cleanly exit within 10ms. But if it is bad (which is almost certainly 
>> the case if the user is manually and explicitly killing it) then it 
>> needs to be killed because it is not going to gracefully exit.
>
> Right.. I had it like that initially (lower timeout - I think 20ms or 
> so, patch history on the mailing list would know for sure), but then 
> simplified it after review feedback to avoid adding another timeout 
> value.
>
> So it's not at all about any expectation that something should 
> actually finish to any sort of completion/success. It is primarily 
> about not logging an error message when there is no error. Thing to 
> keep in mind is that error messages are a big deal in some cultures. 
> In addition to that, avoiding needless engine resets is a good thing 
> as well.
But not calling the error capture code on a banned context is a trivial 
change. I don't see why it is so complicated to just suppress that part 
of the clean up.

>
> Previously the execlists backend was over eager and only allowed for 
> 1ms for such contexts to exit. If the context was banned sure - that 
> means it was a bad context which was causing many hangs already. But 
> if the context was a clean one I argue there is no point in doing an 
> engine reset.
>
> So if you want, I think it is okay to re-introduce a secondary timeout.
>
> Or if you have an idea on how to avoid the error messages / GPU resets 
> when "friendly" contexts exit in some other way, that is also 
> something to discuss.
Well, yes. Just don't call the error capture code for a banned context. 
That's the only bit that prints out any GPU hang error messages. If you 
don't call that, the user won't know that anything has happened.
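
Conceptually that is just a guard in whatever path triggers the capture; where exactly it goes is an open question, but roughly:

	/* illustrative only - skip error capture for contexts we banned ourselves */
	if (intel_context_is_banned(ce))
		return;
	capture_error_state(guc, ce);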

>
>> Secondly, the whole persistence thing is a total mess, completely 
>> broken and intended to be massively simplified. See the internal task 
>> for it. In short, the plan is that all contexts will be immediately 
>> killed when the last DRM file handle is closed. Persistence is only 
>> valid between the time the per context file handle is closed and the 
>> time the master DRM handle is closed. Whereas, non-persistent 
>> contexts get killed as soon as the per context handle is closed. 
>> There is absolutely no connection to heartbeats or other irrelevant 
>> operations.
>
> The change we are discussing is not about persistence, but for the 
> persistence itself - I am not sure it is completely broken and if, or 
> when, the internal task will result with anything being attempted. In 
> the meantime we had unhappy customers for more than a year. So do we 
> tell them "please wait for a few years more until some internal task 
> with no clear timeline or anyone assigned maybe gets looked at"?
Persistence is totally broken for any post-execlist platform. It 
fundamentally relies upon code deep within the execlists backend that 
cannot be done with any other backend - GuC, DRM, anything that comes in 
the future, ... Pretty much any IGT with 'persistence' (or 
'no-hangcheck') in the name is failing for GuC because of this.

Daniel Vetter's view is that any connection to a submission backend, 
heartbeat, or indeed anything other than file handle closure is 
horrendous over-complication and must be removed.

The task is theoretically at the top of my todo list. But I keep getting 
large high priority interrupts and never manage to work on it :(. If you 
are feeling bored, then please pick it up. You would massively improve 
our DG2 pass rates...

>
>> So in my view, the best option is to revert the ban vs revoke patch. 
>> It is creating bugs. It is making persistence more complex not 
>> simpler. It harms the user experience.
>
> I am not aware of the bugs, even less so that it is harming user 
> experience!?
This whole thread is because there are bugs. E.g. the fact that the GuC 
backend did not get properly updated to cope with the new distinction of 
ban vs revoke. The fact that compute contexts now take 7.5s to kill via 
Ctrl+C. And if the user has disabled the pre-emption timeout completely 
then Ctrl+C just won't work at all.

>
> Bugs are limited to the GuC backend or in general? My CI runs were 
> clean so maybe test cases are lacking. Is it just a case of 
> s/intel_context_is_banned/intel_context_is_schedulable/ in there to 
> fix it?
>
> Again, the change was not about persistence. It is the opposite - 
> allowing non-persistent contexts to exit cleanly.
If the code being added says 'if(persistent) X; else Y;' then it is 
about persistence and it is making the whole persistence problem worse.

>
>> If the original problem was simply that error captures were being 
>> done on Ctrl+C then the fix is simple. Don't capture for a banned 
>> context. There is no need for all the rest of the revoke patch.
>
> Error capture was not part of the original story so it may be a 
> completely orthogonal topic that we are discussing it in this thread.
Then I'm lost. What was the purpose of the original change? According to 
the commit message, the whole point of introducing revoke was to 
suppress the error capture on a Ctrl+C, wasn't it? - "logging engine 
resets during normal operation not desirable".

John


>
> Regards,
>
> Tvrtko
diff mbox series

Patch

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 22ba66e48a9b01..cb58029208afe1 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -4425,7 +4425,8 @@  static void guc_handle_context_reset(struct intel_guc *guc,
 	trace_intel_context_reset(ce);
 
 	if (likely(!intel_context_is_banned(ce))) {
-		capture_error_state(guc, ce);
+		if (!intel_context_is_exiting(ce))
+			capture_error_state(guc, ce);
 		guc_context_replay(ce);
 	} else {
 		drm_info(&guc_to_gt(guc)->i915->drm,