Message ID: 1467331482-19811-1-git-send-email-james.xiong@intel.com (mailing list archive)
State: New, archived
On Thu, Jun 30, 2016 at 05:04:42PM -0700, James Xiong wrote:
> From: "Xiong, James" <james.xiong@intel.com>
>
> currently mmap of a tiled object that is larger than mappable
> aperture is rejected in fault handler, and causes sigbus error
> and application crash.

Please note that SIGBUS can be returned at any time. If your
application doesn't handle it, please fix that.

> This commit rejects it in mmap instead so that the client has
> chance to handle the failure.

Wrong. Please review the patches to fix this correctly.
-Chris
Thanks, James

-----Original Message-----
From: Chris Wilson [mailto:chris.ickle.wilson@gmail.com] On Behalf Of Chris Wilson
Sent: Friday, July 1, 2016 12:25 AM
To: Xiong, James <james.xiong@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 1/1] drm/i915: gracefully reject mmap of huge tiled objects

On Thu, Jun 30, 2016 at 05:04:42PM -0700, James Xiong wrote:
> From: "Xiong, James" <james.xiong@intel.com>
>
> currently mmap of a tiled object that is larger than mappable aperture
> is rejected in fault handler, and causes sigbus error and application
> crash.

Please note that SIGBUS can be returned at any time. If your application
doesn't handle it, please fix that.

[JX] I agree; presenting this as a bug was wrong, and it is fine for the
i915 fault handler to return SIGBUS. However, it is common practice for
an application to validate a pointer and then access it directly, and
when SIGBUS arrives there is not much the signal handler can do other
than clean up before the application aborts (please correct me if I am
wrong). I have seen people use sigaction/longjmp to handle SIGBUS, but
that approach has problems: the jump is only valid within the function
that set the jump point, the handler must cope with reentrancy, and so
on, which makes it impractical to apply to every access. And sometimes
the application wants to continue after SIGBUS; for example, when a test
case fails because of the buffer size, this change lets it either reduce
the buffer size and re-run the test, or continue with the next test. The
change helps with these cases. Another point: when mmap is called to map
a tiled, 250M+ object into user space, i915 already knows it does not
have enough space, so shouldn't ENOSPC be returned right there and then?

> This commit rejects it in mmap instead so that the client has chance
> to handle the failure.

Wrong. Please review the patches to fix this correctly.
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 0b9105cf3..c560406 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1808,7 +1808,7 @@ static const struct file_operations i915_driver_fops = {
 	.open = drm_open,
 	.release = drm_release,
 	.unlocked_ioctl = drm_ioctl,
-	.mmap = drm_gem_mmap,
+	.mmap = i915_gem_mmap,
 	.poll = drm_poll,
 	.read = drm_read,
 #ifdef CONFIG_COMPAT
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2e56e97..5867c3a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3062,6 +3062,7 @@ void *i915_gem_object_alloc(struct drm_device *dev);
 void i915_gem_object_free(struct drm_i915_gem_object *obj);
 void i915_gem_object_init(struct drm_i915_gem_object *obj,
 			  const struct drm_i915_gem_object_ops *ops);
+int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
 						  size_t size);
 struct drm_i915_gem_object *i915_gem_object_create_from_data(
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index aa4b63b..ce2e09f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5986,3 +5986,35 @@ fail:
 	drm_gem_object_unreference(&obj->base);
 	return ERR_PTR(ret);
 }
+
+int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct drm_file *priv = filp->private_data;
+	struct drm_device *dev = priv->minor->dev;
+	struct drm_gem_object *obj = NULL;
+	struct drm_vma_offset_node *node;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	drm_vma_offset_lock_lookup(dev->vma_offset_manager);
+	node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
+						  vma->vm_pgoff,
+						  vma_pages(vma));
+	if (likely(node)) {
+		obj = container_of(node, struct drm_gem_object, vma_node);
+		if (!kref_get_unless_zero(&obj->refcount))
+			obj = NULL;
+	}
+	drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
+
+	if (!obj)
+		return -EINVAL;
+
+	if (obj->size >= dev_priv->ggtt.mappable_end &&
+	    to_intel_bo(obj)->tiling_mode != I915_TILING_NONE) {
+		drm_gem_object_unreference_unlocked(obj);
+		return -ENOSPC;
+	}
+
+	drm_gem_object_unreference_unlocked(obj);
+	return drm_gem_mmap(filp, vma);
+}