
[02/10] mm/hmm: do not erase snapshot when a range is invalidated

Message ID 20190129165428.3931-3-jglisse@redhat.com
State New, archived
Series HMM updates for 5.1

Commit Message

Jerome Glisse Jan. 29, 2019, 4:54 p.m. UTC
From: Jérôme Glisse <jglisse@redhat.com>

Users of HMM might be using the snapshot information to do
preparatory steps, like DMA mapping pages to a device, before
checking for invalidation through hmm_vma_range_done(), so do
not erase that information and assume users will do the right
thing.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
---
 mm/hmm.c | 6 ------
 1 file changed, 6 deletions(-)
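
For context, the driver-side pattern this change assumes looks roughly like
the sketch below. It is illustrative only: struct dev_mirror and every
helper prefixed with dev_ are hypothetical placeholders, and only
hmm_vma_range_done() comes from the HMM API named in the commit message;
its exact signature should be checked against mm/hmm.c in this series.

/*
 * Illustrative sketch only -- not part of the patch. Everything prefixed
 * with dev_ is a hypothetical driver helper; hmm_vma_range_done() is the
 * HMM call named in the commit message.
 */
static int dev_mirror_range(struct dev_mirror *mirror, struct hmm_range *range)
{
again:
	/* 1. Snapshot the CPU page table into range->pfns. */
	if (dev_hmm_snapshot(mirror, range))
		return -EFAULT;

	/* 2. Preparatory work that consumes the snapshot, e.g. DMA mapping. */
	if (dev_dma_map_pages(mirror, range))
		return -ENOMEM;

	/*
	 * 3. Under the driver lock that serializes against the mirror's
	 * invalidation callback, check whether an invalidation raced with
	 * steps 1-2. With this patch the snapshot is left intact, so on
	 * success the mappings built in step 2 can be used as-is.
	 */
	mutex_lock(&mirror->lock);
	if (!hmm_vma_range_done(range)) {
		mutex_unlock(&mirror->lock);
		dev_dma_unmap_pages(mirror, range);
		goto again;
	}
	dev_update_device_page_table(mirror, range);
	mutex_unlock(&mirror->lock);
	return 0;
}

The point of the patch is step 3: since the invalidation path no longer
zeroes range->pfns, the snapshot consumed for the DMA mapping in step 2
remains usable whenever hmm_vma_range_done() reports that no invalidation
raced with it.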

Comments

John Hubbard Feb. 20, 2019, 11:58 p.m. UTC | #1
On 1/29/19 8:54 AM, jglisse@redhat.com wrote:
> From: Jérôme Glisse <jglisse@redhat.com>
> 
> Users of HMM might be using the snapshot information to do
> preparatory steps, like DMA mapping pages to a device, before
> checking for invalidation through hmm_vma_range_done(), so do
> not erase that information and assume users will do the right
> thing.
> 
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> ---
>   mm/hmm.c | 6 ------
>   1 file changed, 6 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index b9f384ea15e9..74d69812d6be 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -170,16 +170,10 @@ static int hmm_invalidate_range(struct hmm *hmm, bool device,
>   
>   	spin_lock(&hmm->lock);
>   	list_for_each_entry(range, &hmm->ranges, list) {
> -		unsigned long addr, idx, npages;
> -
>   		if (update->end < range->start || update->start >= range->end)
>   			continue;
>   
>   		range->valid = false;
> -		addr = max(update->start, range->start);
> -		idx = (addr - range->start) >> PAGE_SHIFT;
> -		npages = (min(range->end, update->end) - addr) >> PAGE_SHIFT;
> -		memset(&range->pfns[idx], 0, sizeof(*range->pfns) * npages);
>   	}
>   	spin_unlock(&hmm->lock);
>   
> 

Seems harmless to me. I really cannot see how this could cause a problem,
so you can add:

	Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,

Patch

diff --git a/mm/hmm.c b/mm/hmm.c
index b9f384ea15e9..74d69812d6be 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -170,16 +170,10 @@  static int hmm_invalidate_range(struct hmm *hmm, bool device,
 
 	spin_lock(&hmm->lock);
 	list_for_each_entry(range, &hmm->ranges, list) {
-		unsigned long addr, idx, npages;
-
 		if (update->end < range->start || update->start >= range->end)
 			continue;
 
 		range->valid = false;
-		addr = max(update->start, range->start);
-		idx = (addr - range->start) >> PAGE_SHIFT;
-		npages = (min(range->end, update->end) - addr) >> PAGE_SHIFT;
-		memset(&range->pfns[idx], 0, sizeof(*range->pfns) * npages);
 	}
 	spin_unlock(&hmm->lock);