Message ID | 1554739692-6999-2-git-send-email-andrey.grodzovsky@amd.com (mailing list archive)
---|---
State | New, archived
Series | [1/3] drm/scheduler: rework job destruction
On 08.04.19 at 18:08, Andrey Grodzovsky wrote:
> For later driver's reference to see if the fence is signaled.
>
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index c215cde..5bb4368 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -191,8 +191,6 @@ EXPORT_SYMBOL(drm_sched_dependency_optimized);
>   */
>  static void drm_sched_start_timeout(struct drm_gpu_scheduler *sched)
>  {
> -	unsigned long flags;
> -

I think this actually belongs into patch #1.

>  	if (sched->timeout != MAX_SCHEDULE_TIMEOUT &&
>  	    !list_empty(&sched->ring_mirror_list))
>  		schedule_delayed_work(&sched->work_tdr, sched->timeout);
> @@ -371,7 +369,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
>  			    dma_fence_remove_callback(s_job->s_fence->parent,
>  						      &s_job->cb)) {
>  				dma_fence_put(s_job->s_fence->parent);
> -				s_job->s_fence->parent = NULL;

How about also moving the dma_fence_put() into drm_sched_resubmit_jobs(), right before we re-assign s_job->s_fence->parent?

I think that would be cleaner, but I'm not sure whether it would have any ugly side effects.

Christian.

>  				atomic_dec(&sched->hw_rq_count);
>  			} else {
>  				/*
> @@ -398,6 +395,14 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
>  			sched->ops->free_job(s_job);
>  		}
>  	}
> +
> +	/*
> +	 * Stop pending timer in flight as we rearm it in drm_sched_start. This
> +	 * avoids the pending timeout work in progress to fire right away after
> +	 * this TDR finished and before the newly restarted jobs had a
> +	 * chance to complete.
> +	 */
> +	cancel_delayed_work(&sched->work_tdr);
>  }
>
>  EXPORT_SYMBOL(drm_sched_stop);