
[08/35] drm: Protect dev->filelist with its own mutex

Message ID 1461691808-12414-9-git-send-email-daniel.vetter@ffwll.ch (mailing list archive)
State New, archived

Commit Message

Daniel Vetter April 26, 2016, 5:29 p.m. UTC
amdgpu gained dev->struct_mutex usage because it walks the
dev->filelist list. Protect that list with its own lock to take one
more step towards getting rid of struct_mutex usage in drivers once
and for all.

While doing the conversion I noticed that 2 debugfs files in i915
completely lacked appropriate locking. Fix that up too.

v2: don't forget to switch to drm_gem_object_unreference_unlocked.

Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 10 +++++-----
 drivers/gpu/drm/drm_drv.c               |  1 +
 drivers/gpu/drm/drm_fops.c              |  9 ++++++---
 drivers/gpu/drm/drm_info.c              |  4 ++--
 drivers/gpu/drm/i915/i915_debugfs.c     | 12 ++++++++++--
 include/drm/drmP.h                      |  1 +
 6 files changed, 25 insertions(+), 12 deletions(-)
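
For illustration, the locking pattern this patch establishes looks
roughly like the sketch below: any walk of dev->filelist takes the new
dev->filelist_mutex instead of dev->struct_mutex, and since
struct_mutex is no longer held, any object references dropped during
such a walk must use drm_gem_object_unreference_unlocked (hence the v2
note). This is a minimal sketch with a hypothetical debugfs-style
handler, not code from the patch itself:

/* Hypothetical handler sketching the new rule: dev->filelist is
 * walked under dev->filelist_mutex and nothing else.
 */
static int example_clients_info(struct seq_file *m, struct drm_device *dev)
{
	struct drm_file *file;
	int r;

	/* interruptible, matching the amdgpu debugfs conversion below */
	r = mutex_lock_interruptible(&dev->filelist_mutex);
	if (r)
		return r;

	list_for_each_entry(file, &dev->filelist, lhead) {
		/* per-file state such as file->object_idr keeps its own
		 * lock (file->table_lock) and is unaffected */
		seq_printf(m, "client pid %8d\n", pid_nr(file->pid));
	}

	mutex_unlock(&dev->filelist_mutex);
	return 0;
}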

Comments

Chris Wilson April 26, 2016, 8:52 p.m. UTC | #1
On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> amdgpu gained dev->struct_mutex usage because it walks the
> dev->filelist list. Protect that list with its own lock to take one
> more step towards getting rid of struct_mutex usage in drivers once
> and for all.
> 
> While doing the conversion I noticed that 2 debugfs files in i915
> completely lacked appropriate locking. Fix that up too.
> 
> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> 
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

Just wondering if this is worth converting over. Opening/closing isn't
going to be high contention, I hope, though we can certainly write
stress cases for it! The goal is for drivers to stop using the struct_mutex
as their BKL, which doesn't preclude keeping the struct_mutex around
where it makes sense to have a single mutex rather than a multitude.

I have some misgivings over this, but only because I think it's overkill.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
Alex Deucher April 26, 2016, 9:45 p.m. UTC | #2
On Tue, Apr 26, 2016 at 4:52 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
>> amdgpu gained dev->struct_mutex usage because it walks the
>> dev->filelist list. Protect that list with its own lock to take one
>> more step towards getting rid of struct_mutex usage in drivers once
>> and for all.
>>
>> While doing the conversion I noticed that 2 debugfs files in i915
>> completely lacked appropriate locking. Fix that up too.
>>
>> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
>>
>> Cc: Alex Deucher <alexander.deucher@amd.com>
>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>
> Just wondering if this is worth converting over. Opening/closing isn't
> going to be high contention, I hope, though we can certainly write
> stress cases for it! The goal is for drivers to stop using the struct_mutex
> as their BKL, which doesn't preclude keeping the struct_mutex around
> where it makes sense to have a single mutex rather than a multitude.
>
> I have some misgivings over this, but only because I think it's overkill.
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

I agree with Chris' sentiments.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre
Daniel Vetter April 27, 2016, 7:06 a.m. UTC | #3
On Tue, Apr 26, 2016 at 05:45:44PM -0400, Alex Deucher wrote:
> On Tue, Apr 26, 2016 at 4:52 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> >> amdgpu gained dev->struct_mutex usage because it walks the
> >> dev->filelist list. Protect that list with its own lock to take one
> >> more step towards getting rid of struct_mutex usage in drivers once
> >> and for all.
> >>
> >> While doing the conversion I noticed that 2 debugfs files in i915
> >> completely lacked appropriate locking. Fix that up too.
> >>
> >> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> >>
> >> Cc: Alex Deucher <alexander.deucher@amd.com>
> >> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> >
> > Just wondering if this is worth converting over. Opening/closing isn't
> > going to be high contention, I hope, though we can certainly write
> > stress cases for it! The goal is for drivers to stop using the struct_mutex
> > as their BKL, which doesn't preclude keeping the struct_mutex around
> > where it makes sense to have a single mutex rather than a multitude.
> >
> > I have some misgivings over this, but only because I think it's overkill.
> > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> 
> I agree with Chris' sentiments.

It's not to have more speed or less contention, but just to have fewer
things to worry about when reviewing locking. Hence orthogonal locks for
independent parts.

My goal is that in the end dev->struct_mutex is only used by some existing
drivers for their internals, plus all the legacy core stuff. And never
even used by modern drivers. New locks are pretty cheap, and not dragging
in the entire legacy horror show has benefits.

When/once I tackle the one thing left (master locking) I might move the
master handling under this lock too (since it's closely related to open
files). Not sure yet.
-Daniel

> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Daniel Vetter April 27, 2016, 7:08 a.m. UTC | #4
On Wed, Apr 27, 2016 at 09:06:09AM +0200, Daniel Vetter wrote:
> On Tue, Apr 26, 2016 at 05:45:44PM -0400, Alex Deucher wrote:
> > On Tue, Apr 26, 2016 at 4:52 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> > >> amdgpu gained dev->struct_mutex usage because it walks the
> > >> dev->filelist list. Protect that list with its own lock to take one
> > >> more step towards getting rid of struct_mutex usage in drivers once
> > >> and for all.
> > >>
> > >> While doing the conversion I noticed that 2 debugfs files in i915
> > >> completely lacked appropriate locking. Fix that up too.
> > >>
> > >> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> > >>
> > >> Cc: Alex Deucher <alexander.deucher@amd.com>
> > >> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > >
> > > Just wondering if this is worth converting over. Opening/closing isn't
> > > going to be high contention, I hope, though we can certainly write
> > > stress cases for it! The goal is for drivers to stop using the struct_mutex
> > > as their BKL, which doesn't preclude keeping the struct_mutex around
> > > where it makes sense to have a single mutex rather than a multitude.
> > >
> > > I have some misgivings over this, but only because I think it's overkill.
> > > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > 
> > I agree with Chris' sentiments.
> 
> It's not to have more speed or less contention, but just to have fewer
> things to worry about when reviewing locking. Hence orthogonal locks for
> independent parts.
> 
> My goal is that in the end dev->struct_mutex is only used by some existing
> drivers for their internals, plus all the legacy core stuff. And never
> even used by modern drivers. New locks are pretty cheap, and not dragging
> in the entire legacy horror show has benefits.
> 
> When/once I tackle the one thing left (master locking) I might move the
> master handling under this lock too (since it's closely related to open
> files). Not sure yet.

Oh and the main reason I did it here: Much easier to review using git grep
struct_mutex than auditing codepaths. With this, drivers have no reason to
ever grab struct_mutex (not even for debugfs or similar), since the last
remaining bit (master control) really shouldn't be a concern for drivers
ever.

That also makes reviewing new drivers simpler: Contains struct_mutex?
Reject it!

Cheers, Daniel
Alex Deucher April 27, 2016, 7:14 a.m. UTC | #5
> -----Original Message-----
> From: Daniel Vetter [mailto:daniel.vetter@ffwll.ch] On Behalf Of Daniel Vetter
> Sent: Wednesday, April 27, 2016 3:09 AM
> To: Alex Deucher
> Cc: Chris Wilson; Daniel Vetter; DRI Development; Deucher, Alexander; Intel Graphics Development; Daniel Vetter
> Subject: Re: [Intel-gfx] [PATCH 08/35] drm: Protect dev->filelist with its own mutex
> 
> On Wed, Apr 27, 2016 at 09:06:09AM +0200, Daniel Vetter wrote:
> > On Tue, Apr 26, 2016 at 05:45:44PM -0400, Alex Deucher wrote:
> > > On Tue, Apr 26, 2016 at 4:52 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > > On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> > > >> amdgpu gained dev->struct_mutex usage because it walks the
> > > >> dev->filelist list. Protect that list with its own lock to take one
> > > >> more step towards getting rid of struct_mutex usage in drivers once
> > > >> and for all.
> > > >>
> > > >> While doing the conversion I noticed that 2 debugfs files in i915
> > > >> completely lacked appropriate locking. Fix that up too.
> > > >>
> > > >> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> > > >>
> > > >> Cc: Alex Deucher <alexander.deucher@amd.com>
> > > >> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > >
> > > > Just wondering if this is worth converting over. Opening/closing isn't
> > > > going to be high contention, I hope, though we can certainly write
> > > > stress cases for it! The goal is for drivers to stop using the struct_mutex
> > > > as their BKL, which doesn't preclude keeping the struct_mutex around
> > > > where it makes sense to have a single mutex rather than a multitude.
> > > >
> > > > I have some misgivings over this, but only because I think it's overkill.
> > > > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > >
> > > I agree with Chris' sentiments.
> >
> > It's not to have more speed or less contention, but just to have fewer
> > things to worry about when reviewing locking. Hence orthogonal locks for
> > independent parts.
> >
> > My goal is that in the end dev->struct_mutex is only used by some existing
> > drivers for their internals, plus all the legacy core stuff. And never
> > even used by modern drivers. New locks are pretty cheap, and not dragging
> > in the entire legacy horror show has benefits.
> >
> > When/once I tackle the one thing left (master locking) I might move the
> > master handling under this lock too (since it's closely related to open
> > files). Not sure yet.
> 
> Oh and the main reason I did it here: Much easier to review using git grep
> struct_mutex than auditing codepaths. With this, drivers have no reason to
> ever grab struct_mutex (not even for debugfs or similar), since the last
> remaining bit (master control) really shouldn't be a concern for drivers
> ever.
> 
> That also makes reviewing new drivers simpler: Contains struct_mutex?
> Reject it!

I'm convinced!  Thanks for cleaning this up.

Alex

> 
> Cheers, Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
Chris Wilson April 27, 2016, 7:17 a.m. UTC | #6
On Wed, Apr 27, 2016 at 09:06:09AM +0200, Daniel Vetter wrote:
> On Tue, Apr 26, 2016 at 05:45:44PM -0400, Alex Deucher wrote:
> > On Tue, Apr 26, 2016 at 4:52 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> > >> amdgpu gained dev->struct_mutex usage because it walks the
> > >> dev->filelist list. Protect that list with its own lock to take one
> > >> more step towards getting rid of struct_mutex usage in drivers once
> > >> and for all.
> > >>
> > >> While doing the conversion I noticed that 2 debugfs files in i915
> > >> completely lacked appropriate locking. Fix that up too.
> > >>
> > >> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> > >>
> > >> Cc: Alex Deucher <alexander.deucher@amd.com>
> > >> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > >
> > > Just wondering if this is worth converting over. Opening/closing isn't
> > > going to be high contention, I hope, though we can certainly write
> > > stress cases for it! The goal is for drivers to stop using the struct_mutex
> > > as their BKL, which doesn't preclude keeping the struct_mutex around
> > > where it makes sense to have a single mutex rather than a multitude.
> > >
> > > I have some misgivings over this, but only because I think it's overkill.
> > > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > 
> > I agree with Chris' sentiments.
> 
> It's not to have more speed or less contention, but just to have fewer
> things to worry about when reviewing locking. Hence orthogonal locks for
> independent parts.
> 
> My goal is that in the end dev->struct_mutex is only used by some existing
> drivers for their internals, plus all the legacy core stuff. And never
> even used by modern drivers. New locks are pretty cheap, and not dragging
> in the entire legacy horror show has benefits.
> 
> When/once I tackle the one thing left (master locking) I might move the
> master handling under this lock too (since it's closely related to open
> files). Not sure yet.

drm: s/struct_mutex/legacy_mutex/
drm/i915: s/struct_mutex/bfg9000/
-Chris
Daniel Vetter April 27, 2016, 8:01 a.m. UTC | #7
On Wed, Apr 27, 2016 at 08:17:08AM +0100, Chris Wilson wrote:
> On Wed, Apr 27, 2016 at 09:06:09AM +0200, Daniel Vetter wrote:
> > On Tue, Apr 26, 2016 at 05:45:44PM -0400, Alex Deucher wrote:
> > > On Tue, Apr 26, 2016 at 4:52 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > > > On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> > > >> amdgpu gained dev->struct_mutex usage because it walks the
> > > >> dev->filelist list. Protect that list with its own lock to take one
> > > >> more step towards getting rid of struct_mutex usage in drivers once
> > > >> and for all.
> > > >>
> > > >> While doing the conversion I noticed that 2 debugfs files in i915
> > > >> completely lacked appropriate locking. Fix that up too.
> > > >>
> > > >> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> > > >>
> > > >> Cc: Alex Deucher <alexander.deucher@amd.com>
> > > >> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > > >
> > > > Just wondering if this is worth converting over. Opening/closing isn't
> > > > going to be high contention, I hope, though we can certainly write
> > > > stress cases for it! The goal is for drivers to stop using the struct_mutex
> > > > as their BKL, which doesn't preclude keeping the struct_mutex around
> > > > where it makes sense to have a single mutex rather than a multitude.
> > > >
> > > > I have some misgivings over this, but only because I think it's overkill.
> > > > Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > 
> > > I agree with Chris' sentiments.
> > 
> > It's not to have more speed or less contention, but just to have fewer
> > things to worry about when reviewing locking. Hence orthogonal locks for
> > independent parts.
> > 
> > My goal is that in the end dev->struct_mutex is only used by some existing
> > drivers for their internals, plus all the legacy core stuff. And never
> > even used by modern drivers. New locks are pretty cheap, and not dragging
> > in the entire legacy horror show has benefits.
> > 
> > When/once I tackle the one thing left (master locking) I might move the
> > master handling under this lock too (since it's closely related to open
> > files). Not sure yet.
> 
> drm: s/struct_mutex/legacy_mutex/

Yeah that's the eventual plan. Probably need to convert over some of the
current gem drivers first to make it less of a flag day.

> drm/i915: s/struct_mutex/bfg9000/

Can't do that yet because holding struct_mutex prevents objects from
disappearing; specifically, all our mm and lru lists only hold a weak
reference. We need to rework our shrinkers first and switch over to
gem_object_free_unlocked before we can disengage from struct_mutex in
i915.

But yeah, again that's the plan.
-Daniel
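
The constraint described above is the classic weak-reference pattern:
an object found on one of the mm/lru lists must gain a real reference
before struct_mutex can be dropped. A minimal sketch with hypothetical
names (example_obj, lru_link and example_grab_first are illustrative,
not actual i915 code):

struct example_obj {
	struct drm_gem_object base;
	struct list_head lru_link;	/* weak: valid only under struct_mutex */
};

/* Hypothetical helper: the list entry holds no reference of its own,
 * so struct_mutex must stay held until drm_gem_object_reference()
 * has pinned the object.
 */
static struct drm_gem_object *
example_grab_first(struct drm_device *dev, struct list_head *lru)
{
	struct drm_gem_object *gobj = NULL;

	mutex_lock(&dev->struct_mutex);
	if (!list_empty(lru)) {
		struct example_obj *obj =
			list_first_entry(lru, struct example_obj, lru_link);

		/* without this the object could be freed the moment
		 * struct_mutex is released */
		drm_gem_object_reference(&obj->base);
		gobj = &obj->base;
	}
	mutex_unlock(&dev->struct_mutex);

	/* caller drops with drm_gem_object_unreference_unlocked() */
	return gobj;
}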
Chris Wilson April 27, 2016, 8:04 a.m. UTC | #8
On Wed, Apr 27, 2016 at 10:01:16AM +0200, Daniel Vetter wrote:
> On Wed, Apr 27, 2016 at 08:17:08AM +0100, Chris Wilson wrote:
> > drm/i915: s/struct_mutex/bfg9000/
> 
> Can't do that yet because holding struct_mutex prevents objects from
> disappearing; specifically, all our mm and lru lists only hold a weak
> reference. We need to rework our shrinkers first and switch over to
> gem_object_free_unlocked before we can disengage from struct_mutex in
> i915.

Done. Just a few years of backlog still to go...
-Chris
Daniel Vetter April 27, 2016, 8:17 a.m. UTC | #9
On Tue, Apr 26, 2016 at 07:29:41PM +0200, Daniel Vetter wrote:
> amdgpu gained dev->struct_mutex usage because it walks the
> dev->filelist list. Protect that list with its own lock to take one
> more step towards getting rid of struct_mutex usage in drivers once
> and for all.
> 
> While doing the conversion I noticed that 2 debugfs files in i915
> completely lacked appropriate locking. Fix that up too.
> 
> v2: don't forget to switch to drm_gem_object_unreference_unlocked.
> 
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

Ok, merged up to this patch to drm-misc, thanks a lot for feedback and
review.
-Daniel


Patch

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index fa6a27bff298..a087b9638cde 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -93,7 +93,7 @@  void amdgpu_gem_force_release(struct amdgpu_device *adev)
 	struct drm_device *ddev = adev->ddev;
 	struct drm_file *file;
 
-	mutex_lock(&ddev->struct_mutex);
+	mutex_lock(&ddev->filelist_mutex);
 
 	list_for_each_entry(file, &ddev->filelist, lhead) {
 		struct drm_gem_object *gobj;
@@ -103,13 +103,13 @@  void amdgpu_gem_force_release(struct amdgpu_device *adev)
 		spin_lock(&file->table_lock);
 		idr_for_each_entry(&file->object_idr, gobj, handle) {
 			WARN_ONCE(1, "And also active allocations!\n");
-			drm_gem_object_unreference(gobj);
+			drm_gem_object_unreference_unlocked(gobj);
 		}
 		idr_destroy(&file->object_idr);
 		spin_unlock(&file->table_lock);
 	}
 
-	mutex_unlock(&ddev->struct_mutex);
+	mutex_unlock(&ddev->filelist_mutex);
 }
 
 /*
@@ -769,7 +769,7 @@  static int amdgpu_debugfs_gem_info(struct seq_file *m, void *data)
 	struct drm_file *file;
 	int r;
 
-	r = mutex_lock_interruptible(&dev->struct_mutex);
+	r = mutex_lock_interruptible(&dev->filelist_mutex);
 	if (r)
 		return r;
 
@@ -793,7 +793,7 @@  static int amdgpu_debugfs_gem_info(struct seq_file *m, void *data)
 		spin_unlock(&file->table_lock);
 	}
 
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&dev->filelist_mutex);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index e1fb52d4f72c..bff89226a344 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -590,6 +590,7 @@  struct drm_device *drm_dev_alloc(struct drm_driver *driver,
 	spin_lock_init(&dev->buf_lock);
 	spin_lock_init(&dev->event_lock);
 	mutex_init(&dev->struct_mutex);
+	mutex_init(&dev->filelist_mutex);
 	mutex_init(&dev->ctxlist_mutex);
 	mutex_init(&dev->master_mutex);
 
diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index c3d0aaac0669..7af7f8bcb355 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -297,9 +297,9 @@  static int drm_open_helper(struct file *filp, struct drm_minor *minor)
 	}
 	mutex_unlock(&dev->master_mutex);
 
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&dev->filelist_mutex);
 	list_add(&priv->lhead, &dev->filelist);
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&dev->filelist_mutex);
 
 #ifdef __alpha__
 	/*
@@ -447,8 +447,11 @@  int drm_release(struct inode *inode, struct file *filp)
 
 	DRM_DEBUG("open_count = %d\n", dev->open_count);
 
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&dev->filelist_mutex);
 	list_del(&file_priv->lhead);
+	mutex_unlock(&dev->filelist_mutex);
+
+	mutex_lock(&dev->struct_mutex);
 	if (file_priv->magic)
 		idr_remove(&file_priv->master->magic_map, file_priv->magic);
 	mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/drm_info.c b/drivers/gpu/drm/drm_info.c
index cbb4fc0fc969..5d469b2f26f4 100644
--- a/drivers/gpu/drm/drm_info.c
+++ b/drivers/gpu/drm/drm_info.c
@@ -174,7 +174,7 @@  int drm_clients_info(struct seq_file *m, void *data)
 	/* dev->filelist is sorted youngest first, but we want to present
 	 * oldest first (i.e. kernel, servers, clients), so walk backwardss.
 	 */
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&dev->filelist_mutex);
 	list_for_each_entry_reverse(priv, &dev->filelist, lhead) {
 		struct task_struct *task;
 
@@ -190,7 +190,7 @@  int drm_clients_info(struct seq_file *m, void *data)
 			   priv->magic);
 		rcu_read_unlock();
 	}
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&dev->filelist_mutex);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 4950d05d2948..8b8d6f07d7aa 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -528,6 +528,10 @@  static int i915_gem_object_info(struct seq_file *m, void* data)
 
 	seq_putc(m, '\n');
 	print_batch_pool_stats(m, dev_priv);
+
+	mutex_unlock(&dev->struct_mutex);
+
+	mutex_lock(&dev->filelist_mutex);
 	list_for_each_entry_reverse(file, &dev->filelist, lhead) {
 		struct file_stats stats;
 		struct task_struct *task;
@@ -548,8 +552,7 @@  static int i915_gem_object_info(struct seq_file *m, void* data)
 		print_file_stats(m, task ? task->comm : "<unknown>", stats);
 		rcu_read_unlock();
 	}
-
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&dev->filelist_mutex);
 
 	return 0;
 }
@@ -2354,6 +2357,7 @@  static int i915_ppgtt_info(struct seq_file *m, void *data)
 	else if (INTEL_INFO(dev)->gen >= 6)
 		gen6_ppgtt_info(m, dev);
 
+	mutex_lock(&dev->filelist_mutex);
 	list_for_each_entry_reverse(file, &dev->filelist, lhead) {
 		struct drm_i915_file_private *file_priv = file->driver_priv;
 		struct task_struct *task;
@@ -2368,6 +2372,7 @@  static int i915_ppgtt_info(struct seq_file *m, void *data)
 		idr_for_each(&file_priv->context_idr, per_file_ctx,
 			     (void *)(unsigned long)m);
 	}
+	mutex_unlock(&dev->filelist_mutex);
 
 out_put:
 	intel_runtime_pm_put(dev_priv);
@@ -2403,6 +2408,8 @@  static int i915_rps_boost_info(struct seq_file *m, void *data)
 		   intel_gpu_freq(dev_priv, dev_priv->rps.min_freq_softlimit),
 		   intel_gpu_freq(dev_priv, dev_priv->rps.max_freq_softlimit),
 		   intel_gpu_freq(dev_priv, dev_priv->rps.max_freq));
+
+	mutex_lock(&dev->filelist_mutex);
 	spin_lock(&dev_priv->rps.client_lock);
 	list_for_each_entry_reverse(file, &dev->filelist, lhead) {
 		struct drm_i915_file_private *file_priv = file->driver_priv;
@@ -2425,6 +2432,7 @@  static int i915_rps_boost_info(struct seq_file *m, void *data)
 		   list_empty(&dev_priv->rps.mmioflips.link) ? "" : ", active");
 	seq_printf(m, "Kernel boosts: %d\n", dev_priv->rps.boosts);
 	spin_unlock(&dev_priv->rps.client_lock);
+	mutex_unlock(&dev->filelist_mutex);
 
 	return 0;
 }
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index f82979dee647..c81dd2250fc6 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -769,6 +769,7 @@  struct drm_device {
 	atomic_t buf_alloc;		/**< Buffer allocation in progress */
 	/*@} */
 
+	struct mutex filelist_mutex;
 	struct list_head filelist;
 
 	/** \name Memory management */