
[02/11] drm/panfrost: Remove sched_lock

Message ID 20210521090959.1663703-2-daniel.vetter@ffwll.ch (mailing list archive)
State New, archived
Series [01/11] drm/amdgpu: Comply with implicit fencing rules

Commit Message

Daniel Vetter May 21, 2021, 9:09 a.m. UTC
Scheduler takes care of its own locking, don't worry. For everything else
there's reservation locking on each bo.

So it seems to be entirely redundant and just a bit in the way.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
---
 drivers/gpu/drm/panfrost/panfrost_device.c |  1 -
 drivers/gpu/drm/panfrost/panfrost_device.h |  2 --
 drivers/gpu/drm/panfrost/panfrost_job.c    | 13 ++-----------
 3 files changed, 2 insertions(+), 14 deletions(-)

Comments

Lucas Stach May 21, 2021, 9:32 a.m. UTC | #1
Am Freitag, dem 21.05.2021 um 11:09 +0200 schrieb Daniel Vetter:
> Scheduler takes care of its own locking, don't worry. For everything else
> there's reservation locking on each bo.
> 
> So it seems to be entirely redundant and just a bit in the way.

I haven't read all the surrounding code, but this looks wrong at a
glance. You must hold a lock across drm_sched_job_init ->
drm_sched_entity_push_job as the scheduler fences are initialized in
the job init, so if there's no exclusive section across those two
function calls you might end up with jobs being queued with their fence
seqnos not monotonically increasing, which breaks all kinds of other
stuff.

I don't see a reason to hold the lock across the reservation calls,
though.

Regards,
Lucas
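
The exclusive section Lucas describes, as a minimal sketch: a hypothetical
push_lock mutex held from drm_sched_job_init() to drm_sched_entity_push_job()
(illustrative only, not the actual panfrost fix; since the seqno counter
lives in the entity, per-entity serialization would also suffice):

#include <linux/mutex.h>

#include <drm/gpu_scheduler.h>

static DEFINE_MUTEX(push_lock);	/* hypothetical, stands in for pfdev->sched_lock */

/* Hold the lock across init -> push so the fence seqno assigned in
 * drm_sched_job_init() reaches the entity queue in allocation order,
 * keeping the entity's fence timeline monotonic. */
static int push_job_serialized(struct drm_sched_job *sched_job,
			       struct drm_sched_entity *entity)
{
	int ret;

	mutex_lock(&push_lock);

	ret = drm_sched_job_init(sched_job, entity, NULL);
	if (ret) {
		mutex_unlock(&push_lock);
		return ret;
	}

	drm_sched_entity_push_job(sched_job, entity);
	mutex_unlock(&push_lock);

	return 0;
}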

> 
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_device.c |  1 -
>  drivers/gpu/drm/panfrost/panfrost_device.h |  2 --
>  drivers/gpu/drm/panfrost/panfrost_job.c    | 13 ++-----------
>  3 files changed, 2 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
> index 125ed973feaa..23070c01c63f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> @@ -199,7 +199,6 @@ int panfrost_device_init(struct panfrost_device *pfdev)
>  	int err;
>  	struct resource *res;
>  
> -	mutex_init(&pfdev->sched_lock);
>  	INIT_LIST_HEAD(&pfdev->scheduled_jobs);
>  	INIT_LIST_HEAD(&pfdev->as_lru_list);
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 597cf1459b0a..7519903bb031 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -105,8 +105,6 @@ struct panfrost_device {
>  
>  	struct panfrost_perfcnt *perfcnt;
>  
> -	struct mutex sched_lock;
> -
>  	struct {
>  		struct work_struct work;
>  		atomic_t pending;
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 6003cfeb1322..f5d39ee14ab5 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -218,26 +218,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>  
>  int panfrost_job_push(struct panfrost_job *job)
>  {
> -	struct panfrost_device *pfdev = job->pfdev;
>  	int slot = panfrost_job_get_slot(job);
>  	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
>  	struct ww_acquire_ctx acquire_ctx;
>  	int ret = 0;
>  
> -	mutex_lock(&pfdev->sched_lock);
> -
>  	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
>  					    &acquire_ctx);
> -	if (ret) {
> -		mutex_unlock(&pfdev->sched_lock);
> +	if (ret)
>  		return ret;
> -	}
>  
>  	ret = drm_sched_job_init(&job->base, entity, NULL);
> -	if (ret) {
> -		mutex_unlock(&pfdev->sched_lock);
> +	if (ret)
>  		goto unlock;
> -	}
>  
>  	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
>  
> @@ -248,8 +241,6 @@ int panfrost_job_push(struct panfrost_job *job)
>  
>  	drm_sched_entity_push_job(&job->base, entity);
>  
> -	mutex_unlock(&pfdev->sched_lock);
> -
>  	panfrost_attach_object_fences(job->bos, job->bo_count,
>  				      job->render_done_fence);
>
Daniel Vetter May 21, 2021, 2:49 p.m. UTC | #2
On Fri, May 21, 2021 at 11:32:48AM +0200, Lucas Stach wrote:
> Am Freitag, dem 21.05.2021 um 11:09 +0200 schrieb Daniel Vetter:
> > Scheduler takes care of its own locking, don't worry. For everything else
> > there's reservation locking on each bo.
> > 
> > So it seems to be entirely redundant and just a bit in the way.
> 
> I haven't read all the surrounding code, but this looks wrong at a
> glance. You must hold a lock across drm_sched_job_init ->
> drm_sched_entity_push_job as the scheduler fences are initialized in
> the job init, so if there's no exclusive section across those two
> function calls you might end up with jobs being queued with their fence
> seqnos not monotonically increasing, which breaks all kinds of other
> stuff.

Uh indeed. That's a bit of a loaded gun since generally _init() shouldn't
have any such side effects.

> I don't see a reason to hold the lock across the reservation calls,
> though.

Ok I'll adjust the patch.
-Daniel
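
For reference, the side effect in question, condensed from the scheduler
code of that era (paraphrased call chain, not a verbatim excerpt):

/*
 * drm_sched_job_init(job, entity, owner)
 *   -> job->s_fence = drm_sched_fence_create(entity, owner)
 *        -> seq = atomic_inc_return(&entity->fence_seq)
 *        -> dma_fence_init(..., entity->fence_context, seq)
 *
 * The seqno is consumed at init time, so init -> push must stay
 * serialized per entity or jobs land in the queue out of seqno order.
 */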

> 
> Regards,
> Lucas
> 
> > 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Rob Herring <robh@kernel.org>
> > Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> > Cc: Steven Price <steven.price@arm.com>
> > Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> > ---
> >  drivers/gpu/drm/panfrost/panfrost_device.c |  1 -
> >  drivers/gpu/drm/panfrost/panfrost_device.h |  2 --
> >  drivers/gpu/drm/panfrost/panfrost_job.c    | 13 ++-----------
> >  3 files changed, 2 insertions(+), 14 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
> > index 125ed973feaa..23070c01c63f 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> > @@ -199,7 +199,6 @@ int panfrost_device_init(struct panfrost_device *pfdev)
> >  	int err;
> >  	struct resource *res;
> >  
> > -	mutex_init(&pfdev->sched_lock);
> >  	INIT_LIST_HEAD(&pfdev->scheduled_jobs);
> >  	INIT_LIST_HEAD(&pfdev->as_lru_list);
> >  
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
> > index 597cf1459b0a..7519903bb031 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> > +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> > @@ -105,8 +105,6 @@ struct panfrost_device {
> >  
> >  	struct panfrost_perfcnt *perfcnt;
> >  
> > -	struct mutex sched_lock;
> > -
> >  	struct {
> >  		struct work_struct work;
> >  		atomic_t pending;
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> > index 6003cfeb1322..f5d39ee14ab5 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > @@ -218,26 +218,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> >  
> >  int panfrost_job_push(struct panfrost_job *job)
> >  {
> > -	struct panfrost_device *pfdev = job->pfdev;
> >  	int slot = panfrost_job_get_slot(job);
> >  	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
> >  	struct ww_acquire_ctx acquire_ctx;
> >  	int ret = 0;
> >  
> > -	mutex_lock(&pfdev->sched_lock);
> > -
> >  	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
> >  					    &acquire_ctx);
> > -	if (ret) {
> > -		mutex_unlock(&pfdev->sched_lock);
> > +	if (ret)
> >  		return ret;
> > -	}
> >  
> >  	ret = drm_sched_job_init(&job->base, entity, NULL);
> > -	if (ret) {
> > -		mutex_unlock(&pfdev->sched_lock);
> > +	if (ret)
> >  		goto unlock;
> > -	}
> >  
> >  	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
> >  
> > @@ -248,8 +241,6 @@ int panfrost_job_push(struct panfrost_job *job)
> >  
> >  	drm_sched_entity_push_job(&job->base, entity);
> >  
> > -	mutex_unlock(&pfdev->sched_lock);
> > -
> >  	panfrost_attach_object_fences(job->bos, job->bo_count,
> >  				      job->render_done_fence);
> >  
> 
>

Patch

diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
index 125ed973feaa..23070c01c63f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.c
+++ b/drivers/gpu/drm/panfrost/panfrost_device.c
@@ -199,7 +199,6 @@  int panfrost_device_init(struct panfrost_device *pfdev)
 	int err;
 	struct resource *res;
 
-	mutex_init(&pfdev->sched_lock);
 	INIT_LIST_HEAD(&pfdev->scheduled_jobs);
 	INIT_LIST_HEAD(&pfdev->as_lru_list);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 597cf1459b0a..7519903bb031 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -105,8 +105,6 @@  struct panfrost_device {
 
 	struct panfrost_perfcnt *perfcnt;
 
-	struct mutex sched_lock;
-
 	struct {
 		struct work_struct work;
 		atomic_t pending;
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 6003cfeb1322..f5d39ee14ab5 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -218,26 +218,19 @@  static void panfrost_attach_object_fences(struct drm_gem_object **bos,
 
 int panfrost_job_push(struct panfrost_job *job)
 {
-	struct panfrost_device *pfdev = job->pfdev;
 	int slot = panfrost_job_get_slot(job);
 	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
 	struct ww_acquire_ctx acquire_ctx;
 	int ret = 0;
 
-	mutex_lock(&pfdev->sched_lock);
-
 	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
 					    &acquire_ctx);
-	if (ret) {
-		mutex_unlock(&pfdev->sched_lock);
+	if (ret)
 		return ret;
-	}
 
 	ret = drm_sched_job_init(&job->base, entity, NULL);
-	if (ret) {
-		mutex_unlock(&pfdev->sched_lock);
+	if (ret)
 		goto unlock;
-	}
 
 	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
 
@@ -248,8 +241,6 @@  int panfrost_job_push(struct panfrost_job *job)
 
 	drm_sched_entity_push_job(&job->base, entity);
 
-	mutex_unlock(&pfdev->sched_lock);
-
 	panfrost_attach_object_fences(job->bos, job->bo_count,
 				      job->render_done_fence);