Message ID | 20170929161032.24394-9-matthew.auld@intel.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
On Fri, 2017-09-29 at 17:10 +0100, Matthew Auld wrote:
> For the 48b PPGTT try to align the vma start address to the required
> page size boundary to guarantee we use said page size in the gtt. If we
> are dealing with multiple page sizes, we can't guarantee anything and
> just align to the largest. For soft pinning and objects which need to be
> tightly packed into the lower 32bits we don't force any alignment.
>
> v2: various improvements suggested by Chris
>
> v3: use set_pages and better placement of page_sizes
>
> v4: prefer upper_32_bits()
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

<SNIP>

> @@ -238,6 +241,8 @@ static void clear_pages(struct i915_vma *vma)
> 		kfree(vma->pages);
> 	}
> 	vma->pages = NULL;
> +
> +	memset(&vma->page_sizes, 0, sizeof(struct i915_page_sizes));

sizeof(vma->page_sizes)

> @@ -2538,6 +2543,9 @@ static int ggtt_set_pages(struct i915_vma *vma)
> 	if (ret)
> 		return ret;
>
> +	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
> +	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;

Hmm, are we not able to assign vma->page_sizes = vma->obj->mm.page_sizes?

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c534b74eee32..c989e3d24e37 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -226,6 +226,9 @@ static int ppgtt_set_pages(struct i915_vma *vma)
 
 	vma->pages = vma->obj->mm.pages;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
@@ -238,6 +241,8 @@ static void clear_pages(struct i915_vma *vma)
 		kfree(vma->pages);
 	}
 	vma->pages = NULL;
+
+	memset(&vma->page_sizes, 0, sizeof(struct i915_page_sizes));
 }
 
 static gen8_pte_t gen8_pte_encode(dma_addr_t addr,
@@ -2538,6 +2543,9 @@ static int ggtt_set_pages(struct i915_vma *vma)
 	if (ret)
 		return ret;
 
+	vma->page_sizes.phys = vma->obj->mm.page_sizes.phys;
+	vma->page_sizes.sg = vma->obj->mm.page_sizes.sg;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 49bf49571e47..5067eab27829 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -493,6 +493,19 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		if (ret)
 			goto err_clear;
 	} else {
+		/*
+		 * We only support huge gtt pages through the 48b PPGTT,
+		 * however we also don't want to force any alignment for
+		 * objects which need to be tightly packed into the low 32bits.
+		 */
+		if (upper_32_bits(end) &&
+		    vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
+			u64 page_alignment =
+				rounddown_pow_of_two(vma->page_sizes.sg);
+
+			alignment = max(alignment, page_alignment);
+		}
+
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
 					  size, alignment, obj->cache_level,
 					  start, end, flags);
diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
index e811067c7724..c59ba76613a3 100644
--- a/drivers/gpu/drm/i915/i915_vma.h
+++ b/drivers/gpu/drm/i915/i915_vma.h
@@ -55,6 +55,7 @@ struct i915_vma {
 	void __iomem *iomap;
 	u64 size;
 	u64 display_alignment;
+	struct i915_page_sizes page_sizes;
 
 	u32 fence_size;
 	u32 fence_alignment;