drm/i915: Fix vmap_batch page iterator overrun

Message ID 1426252913-5181-1-git-send-email-mika.kuoppala@intel.com (mailing list archive)
State New, archived

Commit Message

Mika Kuoppala March 13, 2015, 1:21 p.m. UTC
vmap_batch() calculates the number of pages needed for the mapping
we are about to create, and passes this page count as an argument
to the for_each_sg_page() macro. But the macro takes the number of
sg list entries as an argument, not the page count. So we ended up
iterating through all the pages of the mapped object, corrupting
memory past the smaller pages[] array.

Fix this by bailing out when we have enough pages.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
---
 drivers/gpu/drm/i915/i915_cmd_parser.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Tvrtko Ursulin March 13, 2015, 2:01 p.m. UTC | #1
On 03/13/2015 01:21 PM, Mika Kuoppala wrote:
> vmap_batch() calculates the number of pages needed for the mapping
> we are about to create, and passes this page count as an argument
> to the for_each_sg_page() macro. But the macro takes the number of
> sg list entries as an argument, not the page count. So we ended up
> iterating through all the pages of the mapped object, corrupting
> memory past the smaller pages[] array.
>
> Fix this by bailing out when we have enough pages.
>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_cmd_parser.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
> index 9a6da35..61ae8ff 100644
> --- a/drivers/gpu/drm/i915/i915_cmd_parser.c
> +++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
> @@ -836,8 +836,11 @@ static u32 *vmap_batch(struct drm_i915_gem_object *obj,
>   	}
>
>   	i = 0;
> -	for_each_sg_page(obj->pages->sgl, &sg_iter, npages, first_page)
> +	for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) {
>   		pages[i++] = sg_page_iter_page(&sg_iter);
> +		if (i == npages)
> +			break;
> +	}

Are you sure this manual check is needed now that you fixed 
for_each_sg_page?

The pages array looks pessimistically big enough, so I don't see that 
memory was getting overwritten. It looks more like our sg table was not 
properly terminated, which made for_each_sg_page wander into random 
memory and return random page pointers.

Regards,

Tvrtko
Chris Wilson March 13, 2015, 2:05 p.m. UTC | #2
On Fri, Mar 13, 2015 at 03:21:53PM +0200, Mika Kuoppala wrote:
> vmap_batch() calculates the number of pages needed for the mapping
> we are about to create, and passes this page count as an argument
> to the for_each_sg_page() macro. But the macro takes the number of
> sg list entries as an argument, not the page count. So we ended up
> iterating through all the pages of the mapped object, corrupting
> memory past the smaller pages[] array.
> 
> Fix this by bailing out when we have enough pages.
> 
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Can I ask for a st_for_each_page(&obj->pages, &sg_iter, n)?

That would simplify all of our users, and stop me from making the same
mistake again.
-Chris
Daniel Vetter March 13, 2015, 5:36 p.m. UTC | #3
On Fri, Mar 13, 2015 at 02:05:46PM +0000, Chris Wilson wrote:
> On Fri, Mar 13, 2015 at 03:21:53PM +0200, Mika Kuoppala wrote:
> > vmap_batch() calculates the number of pages needed for the mapping
> > we are about to create, and passes this page count as an argument
> > to the for_each_sg_page() macro. But the macro takes the number of
> > sg list entries as an argument, not the page count. So we ended up
> > iterating through all the pages of the mapped object, corrupting
> > memory past the smaller pages[] array.
> > 
> > Fix this by bailing out when we have enough pages.

A reference to the commit which introduced this regression is missing;
I've added that. Also, for next time around, please cc everyone on that
patch, especially the reviewers.

> > 
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Queued for -next, thanks for the patch.
-Daniel
Shuang He March 14, 2015, 2:45 a.m. UTC | #4
Tested-By: PRC QA PRTS (Patch Regression Test System Contact: shuang.he@intel.com)
Task id: 5947
-------------------------------------Summary-------------------------------------
Platform          Delta          drm-intel-nightly          Series Applied
PNV                                  276/276              276/276
ILK                                  303/303              303/303
SNB                                  279/279              279/279
IVB                                  343/343              343/343
BYT                                  287/287              287/287
HSW                                  363/363              363/363
BDW                                  308/308              308/308
-------------------------------------Detailed-------------------------------------
Platform  Test                                drm-intel-nightly          Series Applied
Note: You need to pay more attention to lines starting with '*'

Patch

diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index 9a6da35..61ae8ff 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -836,8 +836,11 @@ static u32 *vmap_batch(struct drm_i915_gem_object *obj,
 	}
 
 	i = 0;
-	for_each_sg_page(obj->pages->sgl, &sg_iter, npages, first_page)
+	for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) {
 		pages[i++] = sg_page_iter_page(&sg_iter);
+		if (i == npages)
+			break;
+	}
 
 	addr = vmap(pages, i, 0, PAGE_KERNEL);
 	if (addr == NULL) {