
[hmm,5/8] mm/hmm: add missing call to hmm_range_need_fault() before returning EFAULT

Message ID 20200311183506.3997-6-jgg@ziepe.ca (mailing list archive)
State New, archived
Series Various error case bug fixes for hmm_range_fault()

Commit Message

Jason Gunthorpe March 11, 2020, 6:35 p.m. UTC
From: Jason Gunthorpe <jgg@mellanox.com>

All return paths that do EFAULT must call hmm_range_need_fault() to
determine if the user requires this page to be valid.

If the page can never be made valid, even should the user later require
it (due to the vma flags in this case), then the return should be
HMM_PFN_ERROR.

Fixes: a3e0d41c2b1f ("mm/hmm: improve driver API to work and wait over a range")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 mm/hmm.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

Comments

Ralph Campbell March 12, 2020, 1:34 a.m. UTC | #1
On 3/11/20 11:35 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> All return paths that do EFAULT must call hmm_range_need_fault() to
> determine if the user requires this page to be valid.
> 
> If the page can never be made valid, even should the user later require
> it (due to the vma flags in this case), then the return should be
> HMM_PFN_ERROR.
> 
> Fixes: a3e0d41c2b1f ("mm/hmm: improve driver API to work and wait over a range")
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>

> ---
>   mm/hmm.c | 19 ++++++++-----------
>   1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 5f5ccf13dd1e85..e10cd0adba7b37 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -582,18 +582,15 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
>   	struct vm_area_struct *vma = walk->vma;
>   
>   	/*
> -	 * Skip vma ranges that don't have struct page backing them or
> -	 * map I/O devices directly.
> -	 */
> -	if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
> -		return -EFAULT;
> -
> -	/*
> +	 * Skip vma ranges that don't have struct page backing them or map I/O
> +	 * devices directly.
> +	 *
>   	 * If the vma does not allow read access, then assume that it does not
> -	 * allow write access either. HMM does not support architectures
> -	 * that allow write without read.
> +	 * allow write access either. HMM does not support architectures that
> +	 * allow write without read.
>   	 */
> -	if (!(vma->vm_flags & VM_READ)) {
> +	if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) ||
> +	    !(vma->vm_flags & VM_READ)) {
>   		bool fault, write_fault;
>   
>   		/*
> @@ -607,7 +604,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
>   		if (fault || write_fault)
>   			return -EFAULT;
>   
> -		hmm_pfns_fill(start, end, range, HMM_PFN_NONE);
> +		hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
>   		hmm_vma_walk->last = end;
>   
>   		/* Skip this vma and continue processing the next vma. */
>
Christoph Hellwig March 16, 2020, 9:07 a.m. UTC | #2
On Wed, Mar 11, 2020 at 03:35:03PM -0300, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> All return paths that do EFAULT must call hmm_range_need_fault() to
> determine if the user requires this page to be valid.
> 
> If the page can never be made valid, even should the user later require
> it (due to the vma flags in this case), then the return should be
> HMM_PFN_ERROR.
> 
> Fixes: a3e0d41c2b1f ("mm/hmm: improve driver API to work and wait over a range")
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

Patch

diff --git a/mm/hmm.c b/mm/hmm.c
index 5f5ccf13dd1e85..e10cd0adba7b37 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -582,18 +582,15 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 	struct vm_area_struct *vma = walk->vma;
 
 	/*
-	 * Skip vma ranges that don't have struct page backing them or
-	 * map I/O devices directly.
-	 */
-	if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
-		return -EFAULT;
-
-	/*
+	 * Skip vma ranges that don't have struct page backing them or map I/O
+	 * devices directly.
+	 *
 	 * If the vma does not allow read access, then assume that it does not
-	 * allow write access either. HMM does not support architectures
-	 * that allow write without read.
+	 * allow write access either. HMM does not support architectures that
+	 * allow write without read.
 	 */
-	if (!(vma->vm_flags & VM_READ)) {
+	if ((vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP)) ||
+	    !(vma->vm_flags & VM_READ)) {
 		bool fault, write_fault;
 
 		/*
@@ -607,7 +604,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 		if (fault || write_fault)
 			return -EFAULT;
 
-		hmm_pfns_fill(start, end, range, HMM_PFN_NONE);
+		hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
 		hmm_vma_walk->last = end;
 
 		/* Skip this vma and continue processing the next vma. */