
[rc,v2] mm/gup: use unpin_user_pages() in __gup_longterm_locked()

Message ID 0-v2-3ae7d9d162e2+2a7-gup_cma_fix_jgg@nvidia.com (mailing list archive)
State New, archived
Series [rc,v2] mm/gup: use unpin_user_pages() in __gup_longterm_locked()

Commit Message

Jason Gunthorpe Nov. 2, 2020, 6:19 p.m. UTC
When FOLL_PIN is passed to __get_user_pages() the page list must be put
back using unpin_user_pages() otherwise the page pin reference persists in
a corrupted state.

There are two places in the unwind of __gup_longterm_locked() that put the
pages back without checking. Normally on error this function would return
the partial page list making this the caller's responsibility, but in
these two cases the caller is not allowed to see these pages at all.

Cc: <stable@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Fixes: 3faa52c03f44 ("mm/gup: track FOLL_PIN pages")
Reported-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 mm/gup.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

v2:
 - Catch the DAX related case as well (Ira)
v1: https://lore.kernel.org/r/0-v1-976effcd4468+d4-gup_cma_fix_jgg@nvidia.com

Andrew, this version with a modified commit message and extra hunk replaces:
  mm-gup-use-unpin_user_pages-in-check_and_migrate_cma_pages.patch

Thanks,
Jason

Comments

Ira Weiny Nov. 2, 2020, 7:04 p.m. UTC | #1
On Mon, Nov 02, 2020 at 02:19:59PM -0400, Jason Gunthorpe wrote:
> When FOLL_PIN is passed to __get_user_pages() the page list must be put
> back using unpin_user_pages() otherwise the page pin reference persists in
> a corrupted state.
> 
> There are two places in the unwind of __gup_longterm_locked() that put the
> pages back without checking. Normally on error this function would return
> the partial page list making this the caller's responsibility, but in
> these two cases the caller is not allowed to see these pages at all.
> 
> Cc: <stable@kernel.org>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Fixes: 3faa52c03f44 ("mm/gup: track FOLL_PIN pages")
> Reported-by: Ira Weiny <ira.weiny@intel.com>

Reviewed-by: Ira Weiny <ira.weiny@intel.com>

> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  mm/gup.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> v2:
>  - Catch the DAX related case as well (Ira)
> v1: https://lore.kernel.org/r/0-v1-976effcd4468+d4-gup_cma_fix_jgg@nvidia.com
> 
> Andrew, this version with a modified commit message and extra hunk replaces:
>   mm-gup-use-unpin_user_pages-in-check_and_migrate_cma_pages.patch
> 
> Thanks,
> Jason
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 102877ed77a4b4..98eb8e6d2609c3 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1647,8 +1647,11 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
>  		/*
>  		 * drop the above get_user_pages reference.
>  		 */
> -		for (i = 0; i < nr_pages; i++)
> -			put_page(pages[i]);
> +		if (gup_flags & FOLL_PIN)
> +			unpin_user_pages(pages, nr_pages);
> +		else
> +			for (i = 0; i < nr_pages; i++)
> +				put_page(pages[i]);
>  
>  		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
>  			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
> @@ -1728,8 +1731,11 @@ static long __gup_longterm_locked(struct mm_struct *mm,
>  			goto out;
>  
>  		if (check_dax_vmas(vmas_tmp, rc)) {
> -			for (i = 0; i < rc; i++)
> -				put_page(pages[i]);
> +			if (gup_flags & FOLL_PIN)
> +				unpin_user_pages(pages, rc);
> +			else
> +				for (i = 0; i < rc; i++)
> +					put_page(pages[i]);
>  			rc = -EOPNOTSUPP;
>  			goto out;
>  		}
> -- 
> 2.28.0
>
John Hubbard Nov. 2, 2020, 7:19 p.m. UTC | #2
On 11/2/20 10:19 AM, Jason Gunthorpe wrote:
> When FOLL_PIN is passed to __get_user_pages() the page list must be put
> back using unpin_user_pages() otherwise the page pin reference persists in
> a corrupted state.
> 
> There are two places in the unwind of __gup_longterm_locked() that put the
> pages back without checking. Normally on error this function would return
> the partial page list making this the caller's responsibility, but in
> these two cases the caller is not allowed to see these pages at all.
> 
> Cc: <stable@kernel.org>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> Fixes: 3faa52c03f44 ("mm/gup: track FOLL_PIN pages")
> Reported-by: Ira Weiny <ira.weiny@intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   mm/gup.c | 14 ++++++++++----
>   1 file changed, 10 insertions(+), 4 deletions(-)
> 

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,

Patch

diff --git a/mm/gup.c b/mm/gup.c
index 102877ed77a4b4..98eb8e6d2609c3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1647,8 +1647,11 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		/*
 		 * drop the above get_user_pages reference.
 		 */
-		for (i = 0; i < nr_pages; i++)
-			put_page(pages[i]);
+		if (gup_flags & FOLL_PIN)
+			unpin_user_pages(pages, nr_pages);
+		else
+			for (i = 0; i < nr_pages; i++)
+				put_page(pages[i]);
 
 		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
@@ -1728,8 +1731,11 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 			goto out;
 
 		if (check_dax_vmas(vmas_tmp, rc)) {
-			for (i = 0; i < rc; i++)
-				put_page(pages[i]);
+			if (gup_flags & FOLL_PIN)
+				unpin_user_pages(pages, rc);
+			else
+				for (i = 0; i < rc; i++)
+					put_page(pages[i]);
 			rc = -EOPNOTSUPP;
 			goto out;
 		}