
[RFC,v2,13/17] gpu: host1x: Reset max value when freeing a syncpoint

Message ID 20200905103420.3021852-14-mperttunen@nvidia.com (mailing list archive)
State New, archived
Series Host1x/TegraDRM UAPI

Commit Message

Mikko Perttunen Sept. 5, 2020, 10:34 a.m. UTC
With job recovery becoming optional, syncpoints may have a mismatch
between their value and max value when freed. As such, when freeing,
set the max value to the current value of the syncpoint so that it
is in a sane state for the next user.

Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
---
 drivers/gpu/host1x/syncpt.c | 1 +
 1 file changed, 1 insertion(+)
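
For illustration only (not part of the patch): the mismatch described in the
commit message arises because job submission advances the syncpoint's max
value ahead of the hardware counter. A minimal sketch, using the existing
host1x_syncpt_incr_max() helper from drivers/gpu/host1x/syncpt.c (the
submit_fragment() function itself is hypothetical):

static u32 submit_fragment(struct host1x_syncpt *sp, u32 incrs)
{
	/* Software side: promise 'incrs' increments, advancing max_val. */
	u32 fence = host1x_syncpt_incr_max(sp, incrs);

	/*
	 * If the job times out and recovery is disabled, nothing applies
	 * the promised increments: the hardware counter (min_val) stays
	 * below max_val, so the syncpoint would otherwise be freed in a
	 * dirty state. This patch absorbs that in syncpt_release().
	 */
	return fence;
}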

Comments

Dmitry Osipenko Sept. 16, 2020, 7:44 p.m. UTC | #1
On 05.09.2020 13:34, Mikko Perttunen wrote:
> With job recovery becoming optional, syncpoints may have a mismatch
> between their value and max value when freed. As such, when freeing,
> set the max value to the current value of the syncpoint so that it
> is in a sane state for the next user.
> 
> Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
> ---
>  drivers/gpu/host1x/syncpt.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
> index 2fad8b2a55cc..82ecb4ac387e 100644
> --- a/drivers/gpu/host1x/syncpt.c
> +++ b/drivers/gpu/host1x/syncpt.c
> @@ -385,6 +385,7 @@ static void syncpt_release(struct kref *ref)
>  {
>  	struct host1x_syncpt *sp = container_of(ref, struct host1x_syncpt, ref);
>  
> +	atomic_set(&sp->max_val, host1x_syncpt_read_min(sp));
>  	sp->locked = false;
>  
>  	mutex_lock(&sp->host->syncpt_mutex);
> 

Please note that the sync point state actually needs to be completely
reset at sync point request time, because both downstream fastboot and
upstream U-Boot [1] needlessly enable the display VBLANK interrupt,
which continuously increments sync point #26 during kernel boot until
the display controller is reset.

[1] https://github.com/u-boot/u-boot/blob/master/drivers/video/tegra.c#L155

Hence once sync point #26 is requested, it will have a dirty state. So
far this doesn't have any visible effect because sync points aren't used
much.
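
A minimal sketch of such a request-time reset, reusing the driver's
existing host1x_hw_syncpt_load() helper and the min_val/max_val shadow
counters (the function name below is hypothetical):

/* Hypothetical helper: resync the kernel's shadow state from the
 * hardware counter at request time, absorbing any increments performed
 * by the bootloader before the kernel took over.
 */
static void host1x_syncpt_reset_state(struct host1x_syncpt *sp)
{
	u32 val = host1x_hw_syncpt_load(sp->host, sp);

	atomic_set(&sp->min_val, val);
	atomic_set(&sp->max_val, val);
}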
Mikko Perttunen Sept. 16, 2020, 8:43 p.m. UTC | #2
On 9/16/20 10:44 PM, Dmitry Osipenko wrote:
> On 05.09.2020 13:34, Mikko Perttunen wrote:
>> With job recovery becoming optional, syncpoints may have a mismatch
>> between their value and max value when freed. As such, when freeing,
>> set the max value to the current value of the syncpoint so that it
>> is in a sane state for the next user.
>>
>> Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
>> ---
>>   drivers/gpu/host1x/syncpt.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
>> index 2fad8b2a55cc..82ecb4ac387e 100644
>> --- a/drivers/gpu/host1x/syncpt.c
>> +++ b/drivers/gpu/host1x/syncpt.c
>> @@ -385,6 +385,7 @@ static void syncpt_release(struct kref *ref)
>>   {
>>   	struct host1x_syncpt *sp = container_of(ref, struct host1x_syncpt, ref);
>>   
>> +	atomic_set(&sp->max_val, host1x_syncpt_read_min(sp));
>>   	sp->locked = false;
>>   
>>   	mutex_lock(&sp->host->syncpt_mutex);
>>
> 
> Please note that the sync point state actually needs to be completely
> reset at sync point request time, because both downstream fastboot and
> upstream U-Boot [1] needlessly enable the display VBLANK interrupt,
> which continuously increments sync point #26 during kernel boot until
> the display controller is reset.
> 
> [1] https://github.com/u-boot/u-boot/blob/master/drivers/video/tegra.c#L155
> 
> Hence once sync point #26 is requested, it will have a dirty state. So
> far this doesn't have any visible effect because sync points aren't used
> much.
> 

Maybe we can instead reserve syncpoints that might be used by the boot 
chain, and only allow allocating them once the display driver has acked 
that the syncpoint will no longer be incremented? That way if the 
display driver is disabled for some reason we'll still be fine.

Looking at the downstream driver, it (still, on new chips...) reserves
the following syncpoints:

- 10 (AVP)
- 22 (3D)
- 26 (VBLANK0)
- 27 (VBLANK1)

and says that this applies to T20, T30, T114 and T148.

I suppose if you haven't observed this happening to syncpoints other
than 26, then reserving 26 would probably be enough.

Mikko
Dmitry Osipenko Sept. 16, 2020, 9:37 p.m. UTC | #3
On 16.09.2020 23:43, Mikko Perttunen wrote:
...
>> Please note that the sync point state actually needs to be completely
>> reset at sync point request time, because both downstream fastboot and
>> upstream U-Boot [1] needlessly enable the display VBLANK interrupt,
>> which continuously increments sync point #26 during kernel boot until
>> the display controller is reset.
>>
>> [1]
>> https://github.com/u-boot/u-boot/blob/master/drivers/video/tegra.c#L155
>>
>> Hence once sync point #26 is requested, it will have a dirty state. So
>> far this doesn't have any visible effect because sync points aren't used
>> much.
>>
> 
> Maybe we can instead reserve syncpoints that might be used by the boot
> chain, and only allow allocating them once the display driver has acked
> that the syncpoint will no longer be incremented? That way if the
> display driver is disabled for some reason we'll still be fine.

sounds good

> Looking at the downstream driver, it (still, on new chips...) reserves
> the following syncpoints:
> 
> - 10 (AVP)
> - 22 (3D)
> - 26 (VBLANK0)
> - 27 (VBLANK1)
> 
> and says that this applies to T20, T30, T114 and T148.
> 
> I suppose if you haven't observed this happening to syncpoints other
> than 26, then reserving 26 would probably be enough.

I only saw SP 26 being used by the DC, but perhaps that may vary from
device to device, and SP 27 could actually be used in the wild as well.

I think the AVP SP should only relate to the AVP firmware that upstream
doesn't support, so we can ignore its reservation.

I've no idea what may use the 3D SP.
Mikko Perttunen Sept. 17, 2020, 7:25 a.m. UTC | #4
On 9/17/20 12:37 AM, Dmitry Osipenko wrote:
> On 16.09.2020 23:43, Mikko Perttunen wrote:
> ...
>>> Please note that the sync point state actually needs to be completely
>>> reset at sync point request time, because both downstream fastboot and
>>> upstream U-Boot [1] needlessly enable the display VBLANK interrupt,
>>> which continuously increments sync point #26 during kernel boot until
>>> the display controller is reset.
>>>
>>> [1]
>>> https://github.com/u-boot/u-boot/blob/master/drivers/video/tegra.c#L155
>>>
>>> Hence once sync point #26 is requested, it will have a dirty state. So
>>> far this doesn't have any visible effect because sync points aren't used
>>> much.
>>>
>>
>> Maybe we can instead reserve syncpoints that might be used by the boot
>> chain, and only allow allocating them once the display driver has acked
>> that the syncpoint will no longer be incremented? That way if the
>> display driver is disabled for some reason we'll still be fine.
> 
> sounds good
> 
>> Looking at the downstream driver, it (still, on new chips...) reserves
>> the following syncpoints:
>>
>> - 10 (AVP)
>> - 22 (3D)
>> - 26 (VBLANK0)
>> - 27 (VBLANK1)
>>
>> and says that this applies to T20, T30, T114 and T148.
>>
>> I suppose if you haven't observed this happening to syncpoints other
>> than 26, then reserving 26 would probably be enough.
> 
> I only saw SP 26 being used by the DC, but perhaps that may vary from
> device to device, and SP 27 could actually be used in the wild as well.
> 
> I think the AVP SP should only relate to the AVP firmware that upstream
> doesn't support, so we can ignore its reservation.
> 
> I've no idea what may use the 3D SP.
> 

My guess is that some very old code used fixed syncpoint numbers, so
these were added to the reservation list. Let's reserve 26 and 27; that
should be simple enough, since both would be "released" by the display
driver.
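
A rough sketch of that scheme, under the assumption of a new
per-syncpoint 'reserved' flag (all names below are hypothetical, not
existing driver API):

#define TEGRA_SYNCPT_VBLANK0	26
#define TEGRA_SYNCPT_VBLANK1	27

/* At probe: mark the boot-chain syncpoints as off-limits. */
void host1x_reserve_boot_syncpts(struct host1x *host)
{
	host->syncpt[TEGRA_SYNCPT_VBLANK0].reserved = true;
	host->syncpt[TEGRA_SYNCPT_VBLANK1].reserved = true;
}

/*
 * Called by the display driver once the controller has been reset and
 * can no longer increment the syncpoint behind the kernel's back;
 * allocation would skip any syncpoint still marked reserved.
 */
void host1x_release_boot_syncpt(struct host1x *host, unsigned int id)
{
	host->syncpt[id].reserved = false;
}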

Mikko

Patch

diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
index 2fad8b2a55cc..82ecb4ac387e 100644
--- a/drivers/gpu/host1x/syncpt.c
+++ b/drivers/gpu/host1x/syncpt.c
@@ -385,6 +385,7 @@ static void syncpt_release(struct kref *ref)
 {
 	struct host1x_syncpt *sp = container_of(ref, struct host1x_syncpt, ref);
 
+	atomic_set(&sp->max_val, host1x_syncpt_read_min(sp));
 	sp->locked = false;
 
 	mutex_lock(&sp->host->syncpt_mutex);