Message ID: 20200116192047.22303-1-brian.welty@intel.com (mailing list archive)
State:      New, archived
Series:     drm/i915: Make use of drm_gem_object_release
Quoting Brian Welty (2020-01-16 19:20:47)
> As i915 is using drm_gem_private_object_init, it is best to
> use the inverse function for cleanup: drm_gem_object_release.
> This removes need for a shmem_release and phys_release.
>
> Signed-off-by: Brian Welty <brian.welty@intel.com>
> ---
> Chris, the cleanup sequence in drm_gem_object_release() vs the replaced
> i915 code is different, but should be okay? Light testing didn't find
> any issues.

commit 0c159ffef628fa94d0f4f9128e7f2b6f2b5e86ef
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Jul 3 19:06:01 2019 +0100

    drm/i915/gem: Defer obj->base.resv fini until RCU callback

    Since reservation_object_fini() does an immediate free, rather than
    kfree_rcu as normal, we have to delay the release until after the RCU
    grace period has elapsed (i.e. from the rcu cleanup callback) so that we
    can rely on the RCU protected access to the fences while the object is a
    zombie.

    i915_gem_busy_ioctl relies on having an RCU barrier to protect the
    reservation in order to avoid having to take a reference and strong
    memory barriers.

    v2: Order is important; only release after putting the pages!

    Fixes: c03467ba40f7 ("drm/i915/gem: Free pages before rcu-freeing the object")
    Testcase: igt/gem_busy/close-race
    Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
    Cc: Matthew Auld <matthew.auld@intel.com>
    Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
    Link: https://patchwork.freedesktop.org/patch/msgid/20190703180601.10950-1-chris@chris-wilson.co.uk
On 1/16/2020 11:30 AM, Chris Wilson wrote:
> Quoting Brian Welty (2020-01-16 19:20:47)
>> As i915 is using drm_gem_private_object_init, it is best to
>> use the inverse function for cleanup: drm_gem_object_release.
>> This removes need for a shmem_release and phys_release.
[snip]
> commit 0c159ffef628fa94d0f4f9128e7f2b6f2b5e86ef
> Author: Chris Wilson <chris@chris-wilson.co.uk>
> Date:   Wed Jul 3 19:06:01 2019 +0100
>
>     drm/i915/gem: Defer obj->base.resv fini until RCU callback
[snip]

Thanks, I hadn't checked the history to see that this was using
drm_gem_object_release in the past. But it looks to be using kfree_rcu
now for the free.
Are we okay now, as this patch has gone in since?

commit 96e95496b02dbf1b19a2d4ce238810572e149606
Author: Christian König <christian.koenig@amd.com>
Date:   Tue Aug 6 13:33:12 2019 +0200

    dma-buf: fix shared fence list handling in reservation_object_copy_fences
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 46bacc82ddc4..d51838d7d2ec 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -159,7 +159,6 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
-	dma_resv_fini(&obj->base._resv);
 	i915_gem_object_free(obj);
 
 	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));
@@ -222,8 +221,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		if (obj->base.import_attach)
 			drm_prime_gem_destroy(&obj->base, NULL);
 
-		drm_gem_free_mmap_offset(&obj->base);
-
+		drm_gem_object_release(&obj->base);
 		if (obj->ops->release)
 			obj->ops->release(obj);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index b1b7c1b3038a..7c19f92f256b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -134,16 +134,9 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 	drm_pci_free(obj->base.dev, obj->phys_handle);
 }
 
-static void phys_release(struct drm_i915_gem_object *obj)
-{
-	fput(obj->base.filp);
-}
-
 static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
 	.get_pages = i915_gem_object_get_pages_phys,
 	.put_pages = i915_gem_object_put_pages_phys,
-
-	.release = phys_release,
 };
 
 int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index a2a980d9d241..4004cfe1e28a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -418,13 +418,6 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
-static void shmem_release(struct drm_i915_gem_object *obj)
-{
-	i915_gem_object_release_memory_region(obj);
-
-	fput(obj->base.filp);
-}
-
 const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
 		 I915_GEM_OBJECT_IS_SHRINKABLE,
@@ -436,7 +429,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
 	.pwrite = shmem_pwrite,
 
-	.release = shmem_release,
+	.release = i915_gem_object_release_memory_region,
 };
 
 static int __create_shmem(struct drm_i915_private *i915,
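For reference, the DRM core helper the patch substitutes in does roughly the following around the v5.5 era (paraphrased from memory, not copied verbatim; check drivers/gpu/drm/drm_gem.c for the authoritative body):

```c
/* Approximate shape of drm_gem_object_release(); paraphrase, not
 * the exact upstream source. */
void drm_gem_object_release(struct drm_gem_object *obj)
{
	WARN_ON(obj->dma_buf);

	if (obj->filp)
		fput(obj->filp);        /* subsumes shmem_release()/phys_release() */

	dma_resv_fini(&obj->_resv);     /* immediate fini, not RCU-deferred */
	drm_gem_free_mmap_offset(obj);
}
```

Because the helper already does the fput() on obj->filp, the dedicated shmem_release()/phys_release() hooks that the hunks above delete become redundant; the open question in the thread is whether its immediate dma_resv_fini() is safe given commit 0c159ffef628's deferral of that fini to the RCU callback.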
As i915 is using drm_gem_private_object_init, it is best to
use the inverse function for cleanup: drm_gem_object_release.
This removes need for a shmem_release and phys_release.

Signed-off-by: Brian Welty <brian.welty@intel.com>
---
Chris, the cleanup sequence in drm_gem_object_release() vs the replaced
i915 code is different, but should be okay? Light testing didn't find
any issues.
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 4 +---
 drivers/gpu/drm/i915/gem/i915_gem_phys.c   | 7 -------
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c  | 9 +--------
 3 files changed, 2 insertions(+), 18 deletions(-)