diff mbox series

drm/v3d: Fix calling drm_sched_resubmit_jobs for same sched.

Message ID 1552409822-17230-1-git-send-email-andrey.grodzovsky@amd.com (mailing list archive)
State New, archived
Series drm/v3d: Fix calling drm_sched_resubmit_jobs for same sched.

Commit Message

Andrey Grodzovsky March 12, 2019, 4:57 p.m. UTC
Also stop calling drm_sched_increase_karma multiple times.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/v3d/v3d_sched.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

Comments

Eric Anholt March 12, 2019, 5:33 p.m. UTC | #1
Andrey Grodzovsky <andrey.grodzovsky@amd.com> writes:

> Also stop calling drm_sched_increase_karma multiple times.

Each v3d->queue[q].sched was initialized with a separate
drm_sched_init().  I wouldn't have thought they were all the "same
sched".
Andrey Grodzovsky March 12, 2019, 5:48 p.m. UTC | #2
They are not the same, but the guilty job belongs to only one {entity, scheduler} pair and so we mark as guilty only for that particular entity in the context of that scheduler only once.

Andrey
Christian König March 13, 2019, 8:25 a.m. UTC | #3
Am 12.03.19 um 17:57 schrieb Andrey Grodzovsky:
> Also stop calling drm_sched_increase_karma multiple times.
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Acked-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/v3d/v3d_sched.c | 13 +++++--------
>   1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> index 4704b2d..ce7c737b 100644
> --- a/drivers/gpu/drm/v3d/v3d_sched.c
> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> @@ -231,20 +231,17 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
>   	mutex_lock(&v3d->reset_lock);
>   
>   	/* block scheduler */
> -	for (q = 0; q < V3D_MAX_QUEUES; q++) {
> -		struct drm_gpu_scheduler *sched = &v3d->queue[q].sched;
> -
> -		drm_sched_stop(sched);
> +	for (q = 0; q < V3D_MAX_QUEUES; q++)
> +		drm_sched_stop(&v3d->queue[q].sched);
>   
> -		if(sched_job)
> -			drm_sched_increase_karma(sched_job);
> -	}
> +	if(sched_job)
> +		drm_sched_increase_karma(sched_job);
>   
>   	/* get the GPU back into the init state */
>   	v3d_reset(v3d);
>   
>   	for (q = 0; q < V3D_MAX_QUEUES; q++)
> -		drm_sched_resubmit_jobs(sched_job->sched);
> +		drm_sched_resubmit_jobs(&v3d->queue[q].sched);
>   
>   	/* Unblock schedulers and restart their jobs. */
>   	for (q = 0; q < V3D_MAX_QUEUES; q++) {
Eric Anholt March 13, 2019, 4:13 p.m. UTC | #4
"Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:

> They are not the same, but the guilty job belongs to only one {entity,
> scheduler} pair and so we mark as guilty only for that particular
> entity in the context of that scheduler only once.

I get it now, sorry.  I'll merge this through drm-misc-next.
Andrey Grodzovsky March 13, 2019, 4:36 p.m. UTC | #5
On 3/13/19 12:13 PM, Eric Anholt wrote:
> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>
>> They are not the same, but the guilty job belongs to only one {entity,
>> scheduler} pair and so we mark as guilty only for that particular
>> entity in the context of that scheduler only once.
> I get it now, sorry.  I'll merge this through drm-misc-next.

np, i actually pushed it into our internal branch already so you can do 
that or wait for our next pull request.

Andrey
Eric Anholt March 13, 2019, 5:53 p.m. UTC | #6
"Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:

> On 3/13/19 12:13 PM, Eric Anholt wrote:
>> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>>
>>> They are not the same, but the guilty job belongs to only one {entity,
>>> scheduler} pair and so we mark as guilty only for that particular
>>> entity in the context of that scheduler only once.
>> I get it now, sorry.  I'll merge this through drm-misc-next.
>
> np, i actually pushed it into our internal branch already so you can do 
> that or wait for our next pull request.

I also fixed the whitespace in the moved code and added the missing
Fixes: line, so I'd like to get it merged through the proper tree for
maintaining v3d.
Andrey Grodzovsky March 13, 2019, 6:49 p.m. UTC | #7
np

Andrey

On 3/13/19 1:53 PM, Eric Anholt wrote:
> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>
>> On 3/13/19 12:13 PM, Eric Anholt wrote:
>>> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>>>
>>>> They are not the same, but the guilty job belongs to only one {entity,
>>>> scheduler} pair and so we mark as guilty only for that particular
>>>> entity in the context of that scheduler only once.
>>> I get it now, sorry.  I'll merge this through drm-misc-next.
>> np, i actually pushed it into our internal branch already so you can do
>> that or wait for our next pull request.
> I also fixed the whitespace in the moved code and added the missing
> Fixes: line, so I'd like to get it merged through the proper tree for
> maintaining v3d.

Patch

diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 4704b2d..ce7c737b 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -231,20 +231,17 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
 	mutex_lock(&v3d->reset_lock);
 
 	/* block scheduler */
-	for (q = 0; q < V3D_MAX_QUEUES; q++) {
-		struct drm_gpu_scheduler *sched = &v3d->queue[q].sched;
-
-		drm_sched_stop(sched);
+	for (q = 0; q < V3D_MAX_QUEUES; q++)
+		drm_sched_stop(&v3d->queue[q].sched);
 
-		if(sched_job)
-			drm_sched_increase_karma(sched_job);
-	}
+	if(sched_job)
+		drm_sched_increase_karma(sched_job);
 
 	/* get the GPU back into the init state */
 	v3d_reset(v3d);
 
 	for (q = 0; q < V3D_MAX_QUEUES; q++)
-		drm_sched_resubmit_jobs(sched_job->sched);
+		drm_sched_resubmit_jobs(&v3d->queue[q].sched);
 
 	/* Unblock schedulers and restart their jobs. */
 	for (q = 0; q < V3D_MAX_QUEUES; q++) {