[v2,hmm,7/9] mm/hmm: do not unconditionally set pfns when returning EBUSY

Message ID 20200324011457.2817-8-jgg@ziepe.ca (mailing list archive)
State New, archived
Series [v2,hmm,1/9] mm/hmm: remove pgmap checking for devmap pages

Commit Message

Jason Gunthorpe March 24, 2020, 1:14 a.m. UTC
From: Jason Gunthorpe <jgg@mellanox.com>

In hmm_vma_handle_pte() and hmm_vma_walk_hugetlb_entry(), if a fault is
required then -EBUSY will be returned, but by that point the pfns input
flags will already have been destroyed.

For hmm_vma_handle_pte(), set HMM_PFN_NONE only on the success returns
that don't otherwise store to pfns.

For hmm_vma_walk_hugetlb_entry(), all exit paths already set pfns, so
remove the redundant store.

Fixes: 2aee09d8c116 ("mm/hmm: change hmm_vma_fault() to allow write fault on page basis")
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
 mm/hmm.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
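
To make the failure mode concrete, here is a condensed before/after
sketch of the pte_none() path, hand-simplified from the hunks below
(declarations and the rest of the function are elided; this is an
illustration of the change, not the literal kernel code):

	/* Before: *pfn is overwritten before we know whether the fault
	 * path will be taken, so a -EBUSY return from the fault path
	 * leaves the caller's input flags destroyed. */
	*pfn = range->values[HMM_PFN_NONE];
	if (pte_none(pte)) {
		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
		if (required_fault)
			goto fault;	/* can end in -EBUSY */
		return 0;
	}

	/* After: HMM_PFN_NONE is stored only on the success return that
	 * doesn't otherwise write *pfn, so -EBUSY leaves the input flags
	 * intact for the caller. */
	if (pte_none(pte)) {
		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
		if (required_fault)
			goto fault;
		*pfn = range->values[HMM_PFN_NONE];
		return 0;
	}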

Comments

Christoph Hellwig March 24, 2020, 7:37 a.m. UTC | #1
On Mon, Mar 23, 2020 at 10:14:55PM -0300, Jason Gunthorpe wrote:
>  	if (pte_none(pte)) {
>  		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
>  		if (required_fault)
>  			goto fault;
> +		*pfn = range->values[HMM_PFN_NONE];
>  		return 0;
>  	}
>  
> @@ -274,8 +274,10 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  		}
>  
>  		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
> -		if (!required_fault)
> +		if (!required_fault) {
> +			*pfn = range->values[HMM_PFN_NONE];
>  			return 0;
> +		}

Maybe throw in a "goto hole" to consolidate the set-PFN and return-0
cases?

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Jason Gunthorpe March 24, 2020, 3:47 p.m. UTC | #2
On Tue, Mar 24, 2020 at 08:37:46AM +0100, Christoph Hellwig wrote:
> On Mon, Mar 23, 2020 at 10:14:55PM -0300, Jason Gunthorpe wrote:
> >  	if (pte_none(pte)) {
> >  		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
> >  		if (required_fault)
> >  			goto fault;
> > +		*pfn = range->values[HMM_PFN_NONE];
> >  		return 0;
> >  	}
> >  
> > @@ -274,8 +274,10 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> >  		}
> >  
> >  		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
> > -		if (!required_fault)
> > +		if (!required_fault) {
> > +			*pfn = range->values[HMM_PFN_NONE];
> >  			return 0;
> > +		}
> 
> Maybe throw in a "goto hole" to consolidate the set-PFN and return-0
> cases?

Then we have goto fault and goto none both ending in returns. I
generally prefer the goto labels to have a single return.

The pte_unmap() before faulting makes this routine twisty, and I
haven't thought of a good way to untwist it.

Thanks,
Jason
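
To make the style point concrete, here is a generic sketch of the shape
being defended, with each goto label ending in exactly one return. All
names below are invented for illustration; this is only the control-flow
shape, not mm/hmm.c code:

	/* Stand-in state; in the real code this is the pte/range state. */
	struct demo_walk { int need_fault; int is_none; unsigned long pfn; };

	static int demo_handle_entry(struct demo_walk *d)
	{
		if (d->need_fault)
			goto fault;
		if (d->is_none)
			goto none;
		d->pfn = 42;	/* normal snapshot path */
		return 0;
	none:
		d->pfn = 0;	/* the consolidated "set PFN to none" store */
		return 0;
	fault:
		/* mirrors the pte_unmap() that must happen before faulting;
		 * the fault path is the one that may return -EBUSY */
		return -EBUSY;
	}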

Patch

diff --git a/mm/hmm.c b/mm/hmm.c
index e114110ad498a2..bf77b852f12d3a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -249,11 +249,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	pte_t pte = *ptep;
 	uint64_t orig_pfn = *pfn;
 
-	*pfn = range->values[HMM_PFN_NONE];
 	if (pte_none(pte)) {
 		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
 		if (required_fault)
 			goto fault;
+		*pfn = range->values[HMM_PFN_NONE];
 		return 0;
 	}
 
@@ -274,8 +274,10 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		}
 
 		required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0);
-		if (!required_fault)
+		if (!required_fault) {
+			*pfn = range->values[HMM_PFN_NONE];
 			return 0;
+		}
 
 		if (!non_swap_entry(entry))
 			goto fault;
@@ -493,7 +495,6 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 
 	i = (start - range->start) >> PAGE_SHIFT;
 	orig_pfn = range->pfns[i];
-	range->pfns[i] = range->values[HMM_PFN_NONE];
 	cpu_flags = pte_to_hmm_pfn_flags(range, entry);
 	required_fault = hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags);
 	if (required_fault) {