Message ID | 20181009111938.6872-2-christian.koenig@amd.com
---|---
State | New, archived
Series | [1/2] drm/sched: add drm_sched_start_timeout helper
On Tue, Oct 9, 2018 at 8:20 PM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
>
> We need to make sure that we don't race between job completion and
> timeout.
>
> v2: put revert label after calling the handling manually
>
> Signed-off-by: Christian König <christian.koenig@amd.com>

Reviewed-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>

> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 30 +++++++++++++++++++++++++++++-
>  1 file changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index bd7d11c47202..44fe587aaef9 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -249,13 +249,41 @@ static void drm_sched_job_timedout(struct work_struct *work)
>  {
>  	struct drm_gpu_scheduler *sched;
>  	struct drm_sched_job *job;
> +	int r;
>
>  	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
> +
> +	spin_lock(&sched->job_list_lock);
> +	list_for_each_entry_reverse(job, &sched->ring_mirror_list, node) {
> +		struct drm_sched_fence *fence = job->s_fence;
> +
> +		if (!dma_fence_remove_callback(fence->parent, &fence->cb))
> +			goto already_signaled;
> +	}
> +
>  	job = list_first_entry_or_null(&sched->ring_mirror_list,
>  				       struct drm_sched_job, node);
> +	spin_unlock(&sched->job_list_lock);
>
>  	if (job)
> -		job->sched->ops->timedout_job(job);
> +		sched->ops->timedout_job(job);
> +
> +	spin_lock(&sched->job_list_lock);
> +	list_for_each_entry(job, &sched->ring_mirror_list, node) {
> +		struct drm_sched_fence *fence = job->s_fence;
> +
> +		if (!fence->parent || !list_empty(&fence->cb.node))
> +			continue;
> +
> +		r = dma_fence_add_callback(fence->parent, &fence->cb,
> +					   drm_sched_process_job);
> +		if (r)
> +			drm_sched_process_job(fence->parent, &fence->cb);
> +
> +already_signaled:
> +		;
> +	}
> +	spin_unlock(&sched->job_list_lock);
>  }
>
>  /**
> --
> 2.14.1
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel