
drm/i915/gem: Apply lmem size restriction to get_pages

Message ID 20191216122603.2598155-1-chris@chris-wilson.co.uk (mailing list archive)
State New, archived
Series drm/i915/gem: Apply lmem size restriction to get_pages

Commit Message

Chris Wilson Dec. 16, 2019, 12:26 p.m. UTC
When creating a handle, it is just that, an abstract handle. The fact
that we cannot currently support a handle larger than the size of the
backing storage is an artifact of our whole-object-at-a-time handling in
get_pages() and, being an implementation limitation, is best handled at
that point -- similar to shmem, where we only barf when asked to
populate the whole object and it is larger than RAM. (Pinning the whole
object at a time is a major hindrance that we are likely to have to
overcome in the near future.) In the case of the buddy allocator, the
late check is preferable as the request size may often be smaller than
the required size.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c   | 3 ---
 drivers/gpu/drm/i915/intel_memory_region.c | 3 +++
 2 files changed, 3 insertions(+), 3 deletions(-)
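
For illustration, a standalone userspace sketch (not the driver code) of the
pattern the commit message argues for: handle creation merely records the
requested size, and the capacity check only fires when the whole object has
to be populated. All struct and function names below are invented; only
max_order and chunk_size mirror the fields used in the patch, and the region
capacity is modelled simply as chunk_size << max_order.

	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>

	struct region {
		unsigned int max_order;	/* capacity modelled as chunk_size << max_order */
		uint64_t chunk_size;
	};

	struct object {
		const struct region *mem;
		uint64_t size;
	};

	/* Handle creation only records the requested size; no capacity check here. */
	static void object_create(struct object *obj, const struct region *mem,
				  uint64_t size)
	{
		obj->mem = mem;
		obj->size = size;
	}

	/* The check moves here, to the point where the whole object is populated. */
	static int object_get_pages(const struct object *obj)
	{
		uint64_t max = obj->mem->chunk_size << obj->mem->max_order;

		if (obj->size > max)
			return -E2BIG;

		/* ...carve out buddy blocks covering obj->size here... */
		return 0;
	}

	int main(void)
	{
		struct region lmem = { .max_order = 4, .chunk_size = 4096 };	/* 64 KiB */
		struct object obj;

		object_create(&obj, &lmem, 1 << 20);	/* creating a 1 MiB handle succeeds */
		printf("get_pages: %d\n", object_get_pages(&obj));	/* -E2BIG (-7 on Linux) */
		return 0;
	}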

Comments

Matthew Auld Dec. 16, 2019, 1:19 p.m. UTC | #1
On Mon, 16 Dec 2019 at 12:26, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>
> When creating a handle, it is just that, an abstract handle. The fact
> that we cannot currently support a handle larger than the size of the
> backing storage is an artifact of our whole-object-at-a-time handling in
> get_pages() and, being an implementation limitation, is best handled at
> that point -- similar to shmem, where we only barf when asked to
> populate the whole object and it is larger than RAM. (Pinning the whole
> object at a time is a major hindrance that we are likely to have to
> overcome in the near future.) In the case of the buddy allocator, the
> late check is preferable as the request size may often be smaller than
> the required size.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

I think we just need:

@@ -1420,7 +1420,7 @@ static int igt_ppgtt_smoke_huge(void *arg)

                err = i915_gem_object_pin_pages(obj);
                if (err) {
-                       if (err == -ENXIO) {
+                       if (err == -ENXIO || err == -E2BIG) {
                                i915_gem_object_put(obj);
                                size >>= 1;
                                goto try_again;

?

Or whatever takes your fancy,
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
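
The hunk above treats -E2BIG the same way the selftest already treats -ENXIO:
as "this region cannot back an object of this size, halve and try again". A
standalone sketch of that retry loop, with try_pin_pages() standing in for the
real object-create-plus-i915_gem_object_pin_pages() sequence (all names and
the size limit below are invented for illustration):

	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical backend: only sizes up to 'limit' can be pinned. */
	static int try_pin_pages(uint64_t size, uint64_t limit)
	{
		return size > limit ? -E2BIG : 0;
	}

	static uint64_t smoke_huge(uint64_t size, uint64_t limit)
	{
		for (;;) {
			int err = try_pin_pages(size, limit);

			if (!err)
				return size;		/* pinned at this size */

			if ((err == -ENXIO || err == -E2BIG) && size > 4096) {
				size >>= 1;		/* region too small: halve and retry */
				continue;
			}

			return 0;			/* unexpected failure, give up */
		}
	}

	int main(void)
	{
		/* A 1 GiB request against a 1 MiB region ends up pinning 1 MiB. */
		printf("pinned %llu bytes\n",
		       (unsigned long long)smoke_huge(1ull << 30, 1ull << 20));
		return 0;
	}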

Patch

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 0e2bf6b7e143..520cc9cac471 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -79,9 +79,6 @@  __i915_gem_lmem_object_create(struct intel_memory_region *mem,
 	struct drm_i915_private *i915 = mem->i915;
 	struct drm_i915_gem_object *obj;
 
-	if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
-		return ERR_PTR(-E2BIG);
-
 	obj = i915_gem_object_alloc();
 	if (!obj)
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index baaeaecc64af..e24c280e5930 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -73,6 +73,9 @@  __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
 		min_order = ilog2(size) - ilog2(mem->mm.chunk_size);
 	}
 
+	if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
+		return -E2BIG;
+
 	n_pages = size >> ilog2(mem->mm.chunk_size);
 
 	mutex_lock(&mem->mm_lock);