Message ID | 20180806121916.51155-1-christian.koenig@amd.com (mailing list archive)
---|---
State | New, archived
Series | [1/2] drm/scheduler: bind job earlier to scheduler
Hi Christian,

On Mon, Aug 6, 2018 at 5:49 PM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
> Update job earlier with the scheduler it is supposed to be scheduled on.
>
> Otherwise we could incorrectly optimize dependencies when moving an
> entity from one scheduler to another.

I don't think this change is really required; the old code should work fine. Read below for more comments.

> Signed-off-by: Christian König <christian.koenig@amd.com>
> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ++--
>  drivers/gpu/drm/scheduler/sched_fence.c   | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> index 65078dd3c82c..029863726c99 100644
> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> @@ -533,8 +533,6 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>  	if (!sched_job)
>  		return NULL;
>
> -	sched_job->sched = sched;
> -	sched_job->s_fence->sched = sched;
>  	while ((entity->dependency = sched->ops->dependency(sched_job, entity)))

The job should have the correct scheduler before the sched->ops->dependency() function is called for the first time. Hence I placed the assignment here.

It might also be that I am missing some case, so please let me know if such a case exists where this might lead to a wrong optimization.

Regards,
Nayan

>  		if (drm_sched_entity_add_dependency_cb(entity))
>  			return NULL;
> @@ -581,6 +579,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>  		spin_unlock(&entity->rq_lock);
>  	}
>
> +	sched_job->sched = entity->rq->sched;
> +	sched_job->s_fence->sched = entity->rq->sched;
>  	trace_drm_sched_job(sched_job, entity);
>  	atomic_inc(&entity->rq->sched->num_jobs);
>  	WRITE_ONCE(entity->last_user, current->group_leader);
> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
> index 4029312fdd81..6dab18d288d7 100644
> --- a/drivers/gpu/drm/scheduler/sched_fence.c
> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
> @@ -172,7 +172,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
>  		return NULL;
>
>  	fence->owner = owner;
> -	fence->sched = entity->rq->sched;
> +	fence->sched = NULL;
>  	spin_lock_init(&fence->lock);
>
>  	seq = atomic_inc_return(&entity->fence_seq);
> --
> 2.14.1
Am 06.08.2018 um 15:11 schrieb Nayan Deshmukh:
> Hi Christian,
>
> On Mon, Aug 6, 2018 at 5:49 PM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
>
>> Update job earlier with the scheduler it is supposed to be scheduled on.
>>
>> Otherwise we could incorrectly optimize dependencies when moving an
>> entity from one scheduler to another.
>
> I don't think this change is really required; the old code should work
> fine. Read below for more comments.
>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> ---
>>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ++--
>>  drivers/gpu/drm/scheduler/sched_fence.c   | 2 +-
>>  2 files changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> index 65078dd3c82c..029863726c99 100644
>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>> @@ -533,8 +533,6 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>>  	if (!sched_job)
>>  		return NULL;
>>
>> -	sched_job->sched = sched;
>> -	sched_job->s_fence->sched = sched;
>>  	while ((entity->dependency = sched->ops->dependency(sched_job, entity)))
>
> The job should have the correct scheduler before the
> sched->ops->dependency() function is called for the first time. Hence
> I placed the assignment here.
>
> It might also be that I am missing some case, so please let me know if
> such a case exists where this might lead to a wrong optimization.

The problem is that the scheduler fence of job A could be the dependency
of another job B.

Job B then makes an incorrect optimization because it sees the wrong
scheduler in the scheduler fence of job A.

Regards,
Christian.

> Regards,
> Nayan
>
>>  		if (drm_sched_entity_add_dependency_cb(entity))
>>  			return NULL;
>> @@ -581,6 +579,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>>  		spin_unlock(&entity->rq_lock);
>>  	}
>>
>> +	sched_job->sched = entity->rq->sched;
>> +	sched_job->s_fence->sched = entity->rq->sched;
>>  	trace_drm_sched_job(sched_job, entity);
>>  	atomic_inc(&entity->rq->sched->num_jobs);
>>  	WRITE_ONCE(entity->last_user, current->group_leader);
>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>> index 4029312fdd81..6dab18d288d7 100644
>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>> @@ -172,7 +172,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
>>  		return NULL;
>>
>>  	fence->owner = owner;
>> -	fence->sched = entity->rq->sched;
>> +	fence->sched = NULL;
>>  	spin_lock_init(&fence->lock);
>>
>>  	seq = atomic_inc_return(&entity->fence_seq);
>> --
>> 2.14.1
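[Editor's note] The scenario Christian describes can be replayed with a small user-space sketch. The structure and function names below are hypothetical, heavily simplified stand-ins for the kernel types; only the pointer comparison that drives the dependency optimization (the real check lives in drm_sched_entity_add_dependency_cb()) is modeled. With the old code, a fence created while its entity pointed at one scheduler keeps that stale pointer even after the entity moves, so a later job on the old scheduler wrongly concludes the dependency is a same-scheduler fence:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures; only
 * the fields relevant to the dependency optimization are kept. */
struct drm_gpu_scheduler {
	const char *name;
};

struct drm_sched_fence {
	struct drm_gpu_scheduler *sched;
};

/* Models the check the scheduler makes when resolving a dependency: a
 * fence that belongs to the job's own scheduler can be short-circuited
 * instead of being waited on as a foreign fence. */
static int can_optimize_dependency(const struct drm_sched_fence *dep,
				   const struct drm_gpu_scheduler *job_sched)
{
	return dep->sched == job_sched;
}

/* Replays the problematic case: job A's fence still records the
 * scheduler its entity used at fence-creation time (sched_a), although
 * the entity has since moved and job A will really run elsewhere.
 * Returns nonzero iff a job B submitted to sched_a would wrongly
 * optimize the dependency away. */
static int stale_fence_misleads_job_b(void)
{
	static struct drm_gpu_scheduler sched_a = { "ring-a" };
	struct drm_sched_fence stale_fence = { &sched_a }; /* stale binding */

	return can_optimize_dependency(&stale_fence, &sched_a);
}
```

Binding the fence's scheduler at push time, once the entity's runqueue is final, makes the comparison reflect where job A will actually run, and the wrong optimization disappears.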
On Mon, Aug 6, 2018 at 6:45 PM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
> Am 06.08.2018 um 15:11 schrieb Nayan Deshmukh:
>> Hi Christian,
>>
>> On Mon, Aug 6, 2018 at 5:49 PM Christian König
>> <ckoenig.leichtzumerken@gmail.com> wrote:
>>
>>> Update job earlier with the scheduler it is supposed to be scheduled on.
>>>
>>> Otherwise we could incorrectly optimize dependencies when moving an
>>> entity from one scheduler to another.
>>
>> I don't think this change is really required; the old code should work
>> fine. Read below for more comments.
>>
>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>> ---
>>>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ++--
>>>  drivers/gpu/drm/scheduler/sched_fence.c   | 2 +-
>>>  2 files changed, 3 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> index 65078dd3c82c..029863726c99 100644
>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> @@ -533,8 +533,6 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>>>  	if (!sched_job)
>>>  		return NULL;
>>>
>>> -	sched_job->sched = sched;
>>> -	sched_job->s_fence->sched = sched;
>>>  	while ((entity->dependency = sched->ops->dependency(sched_job, entity)))
>>
>> The job should have the correct scheduler before the
>> sched->ops->dependency() function is called for the first time. Hence I
>> placed the assignment here.
>>
>> It might also be that I am missing some case, so please let me know if
>> such a case exists where this might lead to a wrong optimization.
>
> The problem is that the scheduler fence of job A could be the dependency
> of another job B.
>
> Job B then makes an incorrect optimization because it sees the wrong
> scheduler in the scheduler fence of job A.

Ah, I missed this case. This makes sense now.

Reviewed-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>

> Regards,
> Christian.

>> Regards,
>> Nayan
>>
>>>  		if (drm_sched_entity_add_dependency_cb(entity))
>>>  			return NULL;
>>> @@ -581,6 +579,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>>>  		spin_unlock(&entity->rq_lock);
>>>  	}
>>>
>>> +	sched_job->sched = entity->rq->sched;
>>> +	sched_job->s_fence->sched = entity->rq->sched;
>>>  	trace_drm_sched_job(sched_job, entity);
>>>  	atomic_inc(&entity->rq->sched->num_jobs);
>>>  	WRITE_ONCE(entity->last_user, current->group_leader);
>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>> index 4029312fdd81..6dab18d288d7 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>> @@ -172,7 +172,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
>>>  		return NULL;
>>>
>>>  	fence->owner = owner;
>>> -	fence->sched = entity->rq->sched;
>>> +	fence->sched = NULL;
>>>  	spin_lock_init(&fence->lock);
>>>
>>>  	seq = atomic_inc_return(&entity->fence_seq);
>>> --
>>> 2.14.1
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 65078dd3c82c..029863726c99 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -533,8 +533,6 @@ drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 	if (!sched_job)
 		return NULL;

-	sched_job->sched = sched;
-	sched_job->s_fence->sched = sched;
 	while ((entity->dependency = sched->ops->dependency(sched_job, entity)))
 		if (drm_sched_entity_add_dependency_cb(entity))
 			return NULL;
@@ -581,6 +579,8 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 		spin_unlock(&entity->rq_lock);
 	}

+	sched_job->sched = entity->rq->sched;
+	sched_job->s_fence->sched = entity->rq->sched;
 	trace_drm_sched_job(sched_job, entity);
 	atomic_inc(&entity->rq->sched->num_jobs);
 	WRITE_ONCE(entity->last_user, current->group_leader);
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 4029312fdd81..6dab18d288d7 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -172,7 +172,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
 		return NULL;

 	fence->owner = owner;
-	fence->sched = entity->rq->sched;
+	fence->sched = NULL;
 	spin_lock_init(&fence->lock);

 	seq = atomic_inc_return(&entity->fence_seq);
Update job earlier with the scheduler it is supposed to be scheduled on.

Otherwise we could incorrectly optimize dependencies when moving an
entity from one scheduler to another.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 4 ++--
 drivers/gpu/drm/scheduler/sched_fence.c   | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
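[Editor's note] The ordering change the patch makes can be sketched in user space. The types and function names below are hypothetical, heavily simplified stand-ins for the kernel structures the diff touches; the point is only the lifecycle: after the patch, the fence is created unbound, and both job and fence are bound to a scheduler at push time, once the entity's runqueue is final.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct drm_gpu_scheduler { int id; };
struct drm_sched_rq { struct drm_gpu_scheduler *sched; };
struct drm_sched_fence { struct drm_gpu_scheduler *sched; };
struct drm_sched_entity { struct drm_sched_rq *rq; };
struct drm_sched_job {
	struct drm_gpu_scheduler *sched;
	struct drm_sched_fence *s_fence;
};

/* After the patch the fence starts out unbound (fence->sched = NULL
 * instead of entity->rq->sched at creation time)... */
static void sched_fence_create(struct drm_sched_fence *fence)
{
	fence->sched = NULL;
}

/* ...and both the job and its scheduler fence are bound when the job
 * is pushed, i.e. once the entity's runqueue (and therefore its
 * scheduler) can no longer change under the job. */
static void sched_entity_push_job(struct drm_sched_job *job,
				  struct drm_sched_entity *entity)
{
	job->sched = entity->rq->sched;
	job->s_fence->sched = entity->rq->sched;
}
```

Because the binding now happens at push time rather than at fence creation or job pop, any entity migration that occurs in between can no longer leave a stale scheduler pointer behind for dependency checks.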