Message ID: 20180801082002.20696-2-nayan26deshmukh@gmail.com (mailing list archive)
State: New, archived
Series: [1/4] drm/scheduler: add a list of run queues to the entity
Yeah, I've actually added one before pushing it to amd-staging-drm-next.

But thanks for the reminder, wanted to note that to Nayan as well :)

Christian.

Am 01.08.2018 um 15:15 schrieb Huang Rui:
> On Wed, Aug 01, 2018 at 01:50:00PM +0530, Nayan Deshmukh wrote:
>
> This needs a commit message.
>
> Thanks,
> Ray
>
> [quoted patch and mailing list footers trimmed]
On Wed, Aug 01, 2018 at 01:50:00PM +0530, Nayan Deshmukh wrote:

This needs a commit message.

Thanks,
Ray

> Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
> ---
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
>  include/drm/gpu_scheduler.h               | 2 ++
>  2 files changed, 5 insertions(+)
>
> [quoted patch and mailing list footers trimmed]
On Wed, Aug 01, 2018 at 09:06:29PM +0800, Christian König wrote:
> Yeah, I've actually added one before pushing it to amd-staging-drm-next.
>
> But thanks for the reminder, wanted to note that to Nayan as well :)

Yes, a soft reminder to Nayan. Thanks Nayan for the contribution. :-)

Thanks,
Ray

> Christian.
>
> [earlier discussion, quoted patch, and mailing list footers trimmed]
Thanks for the reminders. I felt that the commit header was sufficient, but I guess it didn't cover the motivation for the change.

Thanks Christian for adding the commit message.

Regards,
Nayan

On Thu, Aug 2, 2018 at 8:16 AM Huang Rui <ray.huang@amd.com> wrote:
> On Wed, Aug 01, 2018 at 09:06:29PM +0800, Christian König wrote:
> > Yeah, I've actually added one before pushing it to amd-staging-drm-next.
> >
> > But thanks for the reminder, wanted to note that to Nayan as well :)
>
> Yes, a soft reminder to Nayan. Thanks Nayan for the contribution. :-)
>
> Thanks,
> Ray
>
> [earlier discussion, quoted patch, and mailing list footers trimmed]
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index a3eacc35cf98..375f6f7f6a93 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -549,6 +549,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
 
 	trace_drm_sched_job(sched_job, entity);
 
+	atomic_inc(&entity->rq->sched->num_jobs);
 	first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
 
 	/* first job wakes up scheduler */
@@ -836,6 +837,7 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 
 	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
+	atomic_dec(&sched->num_jobs);
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
@@ -953,6 +955,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 	INIT_LIST_HEAD(&sched->ring_mirror_list);
 	spin_lock_init(&sched->job_list_lock);
 	atomic_set(&sched->hw_rq_count, 0);
+	atomic_set(&sched->num_jobs, 0);
 	atomic64_set(&sched->job_id_count, 0);
 
 	/* Each scheduler will run on a seperate kernel thread */
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index a60896222a3e..89881ce974a5 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -260,6 +260,7 @@ struct drm_sched_backend_ops {
  * @job_list_lock: lock to protect the ring_mirror_list.
  * @hang_limit: once the hangs by a job crosses this limit then it is marked
  *              guilty and it will be considered for scheduling further.
+ * @num_jobs: the number of jobs in queue in the scheduler
  *
  * One scheduler is implemented for each hardware ring.
  */
@@ -277,6 +278,7 @@ struct drm_gpu_scheduler {
 	struct list_head	ring_mirror_list;
 	spinlock_t		job_list_lock;
 	int			hang_limit;
+	atomic_t		num_jobs;
 };
 
 int drm_sched_init(struct drm_gpu_scheduler *sched,
-- 
2.14.3
Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 3 +++
 include/drm/gpu_scheduler.h               | 2 ++
 2 files changed, 5 insertions(+)