Message ID | 1571671030-58029-1-git-send-email-zhongjiang@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm/gup: allow CMA migration to propagate errors back to caller |
On 10/21/19 5:17 PM, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change check_and_migrate_cma_pages() implementation so that it
> uses a signed variable instead, and propagates the results back
> to the caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/gup.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  	bool drain_allow = true;
>  	bool migrate_allow = true;
>  	LIST_HEAD(cma_page_list);
> +	long ret = nr_pages;
>
>  check_again:
>  	for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  		 * again migrating any new CMA pages which we failed to isolate
>  		 * earlier.
>  		 */
> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>  						   pages, vmas, NULL,
>  						   gup_flags);
>
> -		if ((nr_pages > 0) && migrate_allow) {
> +		if ((ret > 0) && migrate_allow) {
> +			nr_pages = ret;
>  			drain_allow = true;
>  			goto check_again;
>  		}
>  	}
>
> -	return nr_pages;
> +	return ret;
>  }
>  #else
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
On 10/21/19 8:17 AM, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change check_and_migrate_cma_pages() implementation so that it
> uses a signed variable instead, and propagates the results back
> to the caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>
> ---

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,

John Hubbard
NVIDIA

>  mm/gup.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  	bool drain_allow = true;
>  	bool migrate_allow = true;
>  	LIST_HEAD(cma_page_list);
> +	long ret = nr_pages;
>
>  check_again:
>  	for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  		 * again migrating any new CMA pages which we failed to isolate
>  		 * earlier.
>  		 */
> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>  						   pages, vmas, NULL,
>  						   gup_flags);
>
> -		if ((nr_pages > 0) && migrate_allow) {
> +		if ((ret > 0) && migrate_allow) {
> +			nr_pages = ret;
>  			drain_allow = true;
>  			goto check_again;
>  		}
>  	}
>
> -	return nr_pages;
> +	return ret;
>  }
>  #else
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>
On Mon, Oct 21, 2019 at 11:17:10PM +0800, zhong jiang wrote:
> check_and_migrate_cma_pages() was recording the result of
> __get_user_pages_locked() in an unsigned "nr_pages" variable. Because
> __get_user_pages_locked() returns a signed value that can include
> negative errno values, this had the effect of hiding errors.
>
> Change check_and_migrate_cma_pages() implementation so that it
> uses a signed variable instead, and propagates the results back
> to the caller just as other gup internal functions do.
>
> This was discovered with the help of unsigned_lesser_than_zero.cocci.
>
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: zhong jiang <zhongjiang@huawei.com>

Reviewed-by: Ira Weiny <ira.weiny@intel.com>

> ---
>  mm/gup.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8f236a3..c2b3e11 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  	bool drain_allow = true;
>  	bool migrate_allow = true;
>  	LIST_HEAD(cma_page_list);
> +	long ret = nr_pages;
>
>  check_again:
>  	for (i = 0; i < nr_pages;) {
> @@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  		 * again migrating any new CMA pages which we failed to isolate
>  		 * earlier.
>  		 */
> -		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
> +		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
>  						   pages, vmas, NULL,
>  						   gup_flags);
>
> -		if ((nr_pages > 0) && migrate_allow) {
> +		if ((ret > 0) && migrate_allow) {
> +			nr_pages = ret;
>  			drain_allow = true;
>  			goto check_again;
>  		}
>  	}
>
> -	return nr_pages;
> +	return ret;
>  }
>  #else
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
> --
> 1.7.12.4
>
>
diff --git a/mm/gup.c b/mm/gup.c
index 8f236a3..c2b3e11 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1443,6 +1443,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
+	long ret = nr_pages;

 check_again:
 	for (i = 0; i < nr_pages;) {
@@ -1504,17 +1505,18 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);

-		if ((nr_pages > 0) && migrate_allow) {
+		if ((ret > 0) && migrate_allow) {
+			nr_pages = ret;
 			drain_allow = true;
 			goto check_again;
 		}
 	}

-	return nr_pages;
+	return ret;
 }
 #else
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
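To make the failure mode this diff addresses concrete, here is a minimal userspace sketch. fake_gup() is a hypothetical stand-in for __get_user_pages_locked(), not kernel code; it only models the "pinned-page count or negative errno" return convention that the patch relies on.

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for __get_user_pages_locked(): returns either a
 * pinned-page count or a negative errno, mirroring its return convention. */
static long fake_gup(int fail)
{
	return fail ? -EFAULT : 32;
}

int main(void)
{
	/* Buggy pattern (before the patch): the signed result is stored in
	 * an unsigned variable, so -EFAULT wraps to a huge positive value
	 * and the "> 0" check no longer notices the error. */
	unsigned long nr_pages = fake_gup(1);
	if (nr_pages > 0)
		printf("error hidden: nr_pages looks like %lu\n", nr_pages);

	/* Fixed pattern (after the patch): keep the result signed, copy it
	 * into nr_pages only when it really is a page count, and return the
	 * signed value so the caller can see the errno. */
	long ret = fake_gup(1);
	if (ret > 0) {
		nr_pages = ret;
		printf("pinned %lu pages\n", nr_pages);
	} else {
		printf("error propagated: ret = %ld\n", ret);
	}
	return 0;
}
```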
check_and_migrate_cma_pages() was recording the result of
__get_user_pages_locked() in an unsigned "nr_pages" variable. Because
__get_user_pages_locked() returns a signed value that can include
negative errno values, this had the effect of hiding errors.

Change check_and_migrate_cma_pages() implementation so that it
uses a signed variable instead, and propagates the results back
to the caller just as other gup internal functions do.

This was discovered with the help of unsigned_lesser_than_zero.cocci.

Suggested-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 mm/gup.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
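The commit message credits unsigned_lesser_than_zero.cocci with finding the problem. The general class of bug that kind of Coccinelle checker targets can be shown in a small standalone C sketch (the variable name and value here are illustrative, not taken from the kernel): once an error value lands in an unsigned variable, a "< 0" test on it is always false, so the error path can never run.

```c
#include <errno.h>
#include <stdio.h>

int main(void)
{
	/* An errno crammed into an unsigned variable wraps to a huge count. */
	unsigned long nr_pages = -EFAULT;

	/*
	 * This comparison is always false for an unsigned type, which is the
	 * kind of dead error check that Coccinelle scripts such as
	 * unsigned_lesser_than_zero.cocci are meant to flag.
	 */
	if (nr_pages < 0)
		printf("never reached\n");
	else
		printf("error silently treated as %lu pages\n", nr_pages);

	return 0;
}
```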