Message ID | 20210122181514.541436-1-matthew.auld@intel.com
---|---
State | New, archived
Series | [1/2] drm/i915/dmabuf: don't trust the dma_buf->size
Quoting Matthew Auld (2021-01-22 18:15:13)
> At least for the time being, we need to limit our object sizes such that
> the number of pages can fit within a 32b signed int. It looks like we
> should also apply the same restriction to any imported dma-buf.
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>

From behind the grumbling that we really should have sorted this out by now,

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 04e9c04545ad..dc11497f830b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -244,6 +244,16 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 		}
 	}
 
+	/*
+	 * XXX: There is a prevalence of the assumption that we fit the
+	 * object's page count inside a 32bit _signed_ variable. Let's document
+	 * this and catch if we ever need to fix it. In the meantime, if you do
+	 * spot such a local variable, please consider fixing!
+	 */
+
+	if (dma_buf->size >> PAGE_SHIFT > INT_MAX)
+		return ERR_PTR(-E2BIG);
+
 	/* need to attach */
 	attach = dma_buf_attach(dma_buf, dev->dev);
 	if (IS_ERR(attach))
At least for the time being, we need to limit our object sizes such that
the number of pages can fit within a 32b signed int. It looks like we
should also apply the same restriction to any imported dma-buf.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
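[Editorial note, not part of the patch or thread: the sketch below is a minimal userspace illustration of the boundary the new check enforces. It assumes 4KiB pages (PAGE_SHIFT == 12) and uses a hypothetical import_ok() helper; the real kernel path instead returns ERR_PTR(-E2BIG) from i915_gem_prime_import().]

/*
 * Standalone userspace sketch (assumptions: 4KiB pages, hypothetical
 * import_ok() helper). It mirrors the shape of the added check: any
 * dma-buf whose size in pages exceeds INT_MAX would overflow a 32-bit
 * signed page count, so such an import is refused.
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12 /* assumption: 4KiB pages */

static int import_ok(uint64_t dmabuf_size)
{
	/* Same comparison as the hunk above, in plain C. */
	if (dmabuf_size >> PAGE_SHIFT > INT_MAX)
		return 0; /* kernel would return ERR_PTR(-E2BIG) here */
	return 1;
}

int main(void)
{
	uint64_t fits = (uint64_t)INT_MAX << PAGE_SHIFT;          /* exactly INT_MAX pages */
	uint64_t too_big = ((uint64_t)INT_MAX + 1) << PAGE_SHIFT; /* one page too many */

	printf("%llu bytes: %s\n", (unsigned long long)fits,
	       import_ok(fits) ? "accepted" : "rejected (-E2BIG)");
	printf("%llu bytes: %s\n", (unsigned long long)too_big,
	       import_ok(too_big) ? "accepted" : "rejected (-E2BIG)");
	return 0;
}

With 4KiB pages the cutoff works out to objects of roughly 8TiB, which is why the restriction is acceptable "at least for the time being" while any remaining 32-bit signed page-count assumptions are tracked down.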