
[v2] drm/i915: ensure segment offset never exceeds allowed max

Message ID agetzfnm7xpwocyvnznewkmfqin3hyha2qywyvhqnk77gyuvuy@hat5jnfpljty (mailing list archive)
State: New
Series: [v2] drm/i915: ensure segment offset never exceeds allowed max

Commit Message

Krzysztof Karas Oct. 23, 2024, 1:24 p.m. UTC
Commit 255fc1703e42 ("drm/i915/gem: Calculate object page offset for
partial memory mapping") introduced a new offset that affects the
r.sgt.curr value. This field is used in the remap_sg() function, in the
set_pte_at() call, and changing its value also affects the resulting
page table entry (see the set_ptes() description).
Example:
 1) upon entering remap_sg(), r.sgt.curr may already have been changed to
  a value equal to or greater than r.sgt.max,
 2) set_pte_at() uses r.sgt.curr to map a page entry from another segment
  to the current one,
 3) the r->sgt pointer is moved to the next entry returned from __sgt_iter()
  only once,
 4) the memory of the mismapped page might become unavailable (accessing
  some addresses causes -EFAULT).

This patch makes sure we never exceed the allowed segment maximum.

v2:
 - instead of moving the segment iterator, check whether offset plus the
current segment offset would exceed the allowed segment maximum before
setting the new current offset for that segment
 - changed the patch title from "move segment iterator to match current
offset" to "ensure segment offset never exceeds allowed max"

Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
---
 drivers/gpu/drm/i915/i915_mm.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

Patch

diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index f5c97a620962..ba022afc624c 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -143,13 +143,15 @@  int remap_io_sg(struct vm_area_struct *vma,
 	/* We rely on prevalidation of the io-mapping to skip track_pfn(). */
 	GEM_BUG_ON((vma->vm_flags & EXPECTED_FLAGS) != EXPECTED_FLAGS);
 
-	while (offset >= sg_dma_len(r.sgt.sgp) >> PAGE_SHIFT) {
-		offset -= sg_dma_len(r.sgt.sgp) >> PAGE_SHIFT;
-		r.sgt = __sgt_iter(__sg_next(r.sgt.sgp), use_dma(iobase));
-		if (!r.sgt.sgp)
-			return -EINVAL;
+	if (r.sgt.curr + (offset << PAGE_SHIFT) < r.sgt.max) {
+		while (offset >= sg_dma_len(r.sgt.sgp) >> PAGE_SHIFT) {
+			offset -= sg_dma_len(r.sgt.sgp) >> PAGE_SHIFT;
+			r.sgt = __sgt_iter(__sg_next(r.sgt.sgp), use_dma(iobase));
+			if (!r.sgt.sgp)
+				return -EINVAL;
+		}
+		r.sgt.curr = offset << PAGE_SHIFT;
 	}
-	r.sgt.curr = offset << PAGE_SHIFT;
 
 	if (!use_dma(iobase))
 		flush_cache_range(vma, addr, size);