Message ID: 1552409822-17230-1-git-send-email-andrey.grodzovsky@amd.com (mailing list archive)
State:      New, archived
Series:     drm/v3d: Fix calling drm_sched_resubmit_jobs for same sched.
Andrey Grodzovsky <andrey.grodzovsky@amd.com> writes:
> Also stop calling drm_sched_increase_karma multiple times.
Each v3d->queue[q].sched was initialized with a separate
drm_sched_init(). I wouldn't have thought they were all the "same
sched".
They are not the same, but the guilty job belongs to only one {entity,
scheduler} pair and so we mark as guilty only for that particular
entity in the context of that scheduler only once.

Andrey
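To make that concrete, here is a minimal sketch of the setup Eric refers to, assuming the drm_sched_init() signature of that kernel generation; the limit/timeout values and the v3d_sched_ops name are illustrative assumptions, not quotes from the driver. Each queue owns a distinct scheduler, yet a timed-out sched_job still belongs to exactly one of them, which is why marking it guilty is a one-shot operation.

/* Sketch only: per-queue scheduler setup (assumed values and ops name). */
int v3d_sched_init_sketch(struct v3d_dev *v3d)
{
	unsigned int hw_jobs_limit = 1;       /* assumed */
	unsigned int job_hang_limit = 0;      /* assumed */
	long timeout = msecs_to_jiffies(500); /* assumed */
	int ret;

	/* One independent drm_gpu_scheduler instance per hardware queue. */
	ret = drm_sched_init(&v3d->queue[V3D_BIN].sched, &v3d_sched_ops,
			     hw_jobs_limit, job_hang_limit, timeout,
			     "v3d_bin");
	if (ret)
		return ret;

	ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched, &v3d_sched_ops,
			     hw_jobs_limit, job_hang_limit, timeout,
			     "v3d_render");
	/* ...remaining queues follow the same pattern.  A hung job's
	 * sched_job->sched points at exactly one of these schedulers, so
	 * drm_sched_increase_karma(sched_job) only needs to run once. */
	return ret;
}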
On 12.03.19 at 17:57, Andrey Grodzovsky wrote:
> Also stop calling drm_sched_increase_karma multiple times.
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Acked-by: Christian König <christian.koenig@amd.com>

> ---
>  drivers/gpu/drm/v3d/v3d_sched.c | 13 +++++--------
>  1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> index 4704b2d..ce7c737b 100644
> --- a/drivers/gpu/drm/v3d/v3d_sched.c
> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> @@ -231,20 +231,17 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
>  	mutex_lock(&v3d->reset_lock);
>
>  	/* block scheduler */
> -	for (q = 0; q < V3D_MAX_QUEUES; q++) {
> -		struct drm_gpu_scheduler *sched = &v3d->queue[q].sched;
> -
> -		drm_sched_stop(sched);
> +	for (q = 0; q < V3D_MAX_QUEUES; q++)
> +		drm_sched_stop(&v3d->queue[q].sched);
>
> -		if(sched_job)
> -			drm_sched_increase_karma(sched_job);
> -	}
> +	if(sched_job)
> +		drm_sched_increase_karma(sched_job);
>
>  	/* get the GPU back into the init state */
>  	v3d_reset(v3d);
>
>  	for (q = 0; q < V3D_MAX_QUEUES; q++)
> -		drm_sched_resubmit_jobs(sched_job->sched);
> +		drm_sched_resubmit_jobs(&v3d->queue[q].sched);
>
>  	/* Unblock schedulers and restart their jobs. */
>  	for (q = 0; q < V3D_MAX_QUEUES; q++) {
"Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes: > They are not the same, but the guilty job belongs to only one {entity, > scheduler} pair and so we mark as guilty only for that particular > entity in the context of that scheduler only once. I get it now, sorry. I'll merge this through drm-misc-next.
On 3/13/19 12:13 PM, Eric Anholt wrote:
> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>
>> They are not the same, but the guilty job belongs to only one {entity,
>> scheduler} pair and so we mark as guilty only for that particular
>> entity in the context of that scheduler only once.
> I get it now, sorry. I'll merge this through drm-misc-next.

np, i actually pushed it into our internal branch already so you can do
that or wait for our next pull request.

Andrey
"Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes: > On 3/13/19 12:13 PM, Eric Anholt wrote: >> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes: >> >>> They are not the same, but the guilty job belongs to only one {entity, >>> scheduler} pair and so we mark as guilty only for that particular >>> entity in the context of that scheduler only once. >> I get it now, sorry. I'll merge this through drm-misc-next. > > np, i actually pushed it into our internal branch already so you can do > that or wait for our next pull request. I also fixed the whitespace in the moved code and added the missing Fixes: line, so I'd like to get it merged through the proper tree for maintaining v3d.
np

Andrey

On 3/13/19 1:53 PM, Eric Anholt wrote:
> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>
>> On 3/13/19 12:13 PM, Eric Anholt wrote:
>>> "Grodzovsky, Andrey" <Andrey.Grodzovsky@amd.com> writes:
>>>
>>>> They are not the same, but the guilty job belongs to only one {entity,
>>>> scheduler} pair and so we mark as guilty only for that particular
>>>> entity in the context of that scheduler only once.
>>> I get it now, sorry. I'll merge this through drm-misc-next.
>> np, i actually pushed it into our internal branch already so you can do
>> that or wait for our next pull request.
> I also fixed the whitespace in the moved code and added the missing
> Fixes: line, so I'd like to get it merged through the proper tree for
> maintaining v3d.
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 4704b2d..ce7c737b 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -231,20 +231,17 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
 	mutex_lock(&v3d->reset_lock);

 	/* block scheduler */
-	for (q = 0; q < V3D_MAX_QUEUES; q++) {
-		struct drm_gpu_scheduler *sched = &v3d->queue[q].sched;
-
-		drm_sched_stop(sched);
+	for (q = 0; q < V3D_MAX_QUEUES; q++)
+		drm_sched_stop(&v3d->queue[q].sched);

-		if(sched_job)
-			drm_sched_increase_karma(sched_job);
-	}
+	if(sched_job)
+		drm_sched_increase_karma(sched_job);

 	/* get the GPU back into the init state */
 	v3d_reset(v3d);

 	for (q = 0; q < V3D_MAX_QUEUES; q++)
-		drm_sched_resubmit_jobs(sched_job->sched);
+		drm_sched_resubmit_jobs(&v3d->queue[q].sched);

 	/* Unblock schedulers and restart their jobs. */
 	for (q = 0; q < V3D_MAX_QUEUES; q++) {
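Read as a whole, the timeout handler after this patch looks roughly like the sketch below. The hunk above ends before the restart loop, so the drm_sched_start() calls and the final mutex_unlock() are assumptions about the unchanged tail of the function rather than part of the patch.

static void
v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
{
	int q;

	mutex_lock(&v3d->reset_lock);

	/* Park every queue's scheduler so nothing new hits the hardware. */
	for (q = 0; q < V3D_MAX_QUEUES; q++)
		drm_sched_stop(&v3d->queue[q].sched);

	/* The guilty job belongs to one {entity, scheduler} pair: bump its
	 * karma exactly once instead of once per queue. */
	if (sched_job)
		drm_sched_increase_karma(sched_job);

	/* Get the GPU back into the init state. */
	v3d_reset(v3d);

	/* Re-queue pending jobs on each queue's own scheduler, not on
	 * sched_job->sched repeatedly. */
	for (q = 0; q < V3D_MAX_QUEUES; q++)
		drm_sched_resubmit_jobs(&v3d->queue[q].sched);

	/* Unblock the schedulers and restart their jobs (assumed tail). */
	for (q = 0; q < V3D_MAX_QUEUES; q++)
		drm_sched_start(&v3d->queue[q].sched, true);

	mutex_unlock(&v3d->reset_lock);
}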
Also stop calling drm_sched_increase_karma multiple times.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/v3d/v3d_sched.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)