
drm/i915/userptr: Replace get_user_pages()

Message ID 1458207434-13530-1-git-send-email-chris@chris-wilson.co.uk (mailing list archive)
State New, archived
Headers show

Commit Message

Chris Wilson March 17, 2016, 9:37 a.m. UTC
Although the long term future of get_user_pages_locked() itself is
doubtful, the kernel currently recommends:

/* get_user_pages should be phased out in favor of
 * get_user_pages_locked|unlocked or get_user_pages_fast. Nothing
 * should use get_user_pages because it cannot pass
 * FAULT_FLAG_ALLOW_RETRY to handle_mm_fault.
 */

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem_userptr.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

Comments

Tvrtko Ursulin March 17, 2016, 10:09 a.m. UTC | #1
On 17/03/16 09:37, Chris Wilson wrote:
> [snip patch]

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Is this just cleanup, or does it actually improve some real (as in, hit by
someone) scenario?

Regards,

Tvrtko
Chris Wilson March 17, 2016, 10:20 a.m. UTC | #2
On Thu, Mar 17, 2016 at 10:09:28AM +0000, Tvrtko Ursulin wrote:
> 
> On 17/03/16 09:37, Chris Wilson wrote:
> [snip patch]
> 
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> 
> Is this just cleanup, or does it actually improve some real (as in,
> hit by someone) scenario?

It's cleanup. The current open issue is

https://bugs.freedesktop.org/show_bug.cgi?id=93910
[ 5686.374255] WARNING: CPU: 0 PID: 13274 at mm/filemap.c:217 __delete_from_page_cache+0x274/0x280()

which may or may not be causing a concurrent GPF on the same page. That
concurrent GPF may or may not go away with the above patch. Such is the
problem with indeterminate races!
-Chris

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 54088a4d6498..fa7d8a8b66ef 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -500,20 +500,22 @@  __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 		pvec = drm_malloc_ab(npages, sizeof(struct page *));
 	if (pvec != NULL) {
 		struct mm_struct *mm = obj->userptr.mm->mm;
+		int locked = 1;
 
 		down_read(&mm->mmap_sem);
 		while (pinned < npages) {
-			ret = get_user_pages(work->task, mm,
-					     obj->userptr.ptr + pinned * PAGE_SIZE,
-					     npages - pinned,
-					     !obj->userptr.read_only, 0,
-					     pvec + pinned, NULL);
+			ret = get_user_pages_locked(work->task, mm,
+						    obj->userptr.ptr + pinned * PAGE_SIZE,
+						    npages - pinned,
+						    !obj->userptr.read_only, 0,
+						    pvec + pinned, &locked);
 			if (ret < 0)
 				break;
 
 			pinned += ret;
 		}
-		up_read(&mm->mmap_sem);
+		if (locked)
+			up_read(&mm->mmap_sem);
 	}
 
 	mutex_lock(&dev->struct_mutex);