| Message ID | 20200512085944.222637-9-daniel.vetter@ffwll.ch (mailing list archive) |
|---|---|
| State | RFC |
| Series | dma-fence lockdep annotations |
On Tue, May 12, 2020 at 11:00 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>
> If the scheduler rt thread gets stuck on a mutex that we're holding
> while waiting for gpu workloads to complete, we have a problem.
>
> Add dma-fence annotations so that lockdep can check this for us.
>
> I've tried to quite carefully review this, and I think it's at the
> right spot. But obviously no expert on drm scheduler.
>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: linux-rdma@vger.kernel.org
> Cc: amd-gfx@lists.freedesktop.org
> Cc: intel-gfx@lists.freedesktop.org
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Christian König <christian.koenig@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

Adding a bunch more people from drivers using the drm/scheduler (that's the maintainers for etnaviv, lima, panfrost, and v3d, on top of the amdgpu folks already on cc). Any feedback or testing on this patch, and indeed on the entire series, is very much appreciated. There's also another patch in this series that annotates the TDR work, plus of course the prep work.
Thanks, Daniel

> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 2f319102ae9f..06a736e506ad 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -763,9 +763,12 @@ static int drm_sched_main(void *param)
>  	struct sched_param sparam = {.sched_priority = 1};
>  	struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param;
>  	int r;
> +	bool fence_cookie;
>
>  	sched_setscheduler(current, SCHED_FIFO, &sparam);
>
> +	fence_cookie = dma_fence_begin_signalling();
> +
>  	while (!kthread_should_stop()) {
>  		struct drm_sched_entity *entity = NULL;
>  		struct drm_sched_fence *s_fence;
> @@ -823,6 +826,9 @@ static int drm_sched_main(void *param)
>
>  		wake_up(&sched->job_scheduled);
>  	}
> +
> +	dma_fence_end_signalling(fence_cookie);
> +
>  	return 0;
>  }
>
> --
> 2.26.2
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 2f319102ae9f..06a736e506ad 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -763,9 +763,12 @@ static int drm_sched_main(void *param)
 	struct sched_param sparam = {.sched_priority = 1};
 	struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param;
 	int r;
+	bool fence_cookie;

 	sched_setscheduler(current, SCHED_FIFO, &sparam);

+	fence_cookie = dma_fence_begin_signalling();
+
 	while (!kthread_should_stop()) {
 		struct drm_sched_entity *entity = NULL;
 		struct drm_sched_fence *s_fence;
@@ -823,6 +826,9 @@ static int drm_sched_main(void *param)

 		wake_up(&sched->job_scheduled);
 	}
+
+	dma_fence_end_signalling(fence_cookie);
+
 	return 0;
 }
If the scheduler rt thread gets stuck on a mutex that we're holding
while waiting for gpu workloads to complete, we have a problem.

Add dma-fence annotations so that lockdep can check this for us.

I've tried to quite carefully review this, and I think it's at the
right spot. But obviously no expert on drm scheduler.

Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-rdma@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 6 ++++++
 1 file changed, 6 insertions(+)