Message ID | 20220204020010.68930-3-jhubbard@nvidia.com (mailing list archive)
---|---
State | New
Series | mm/gup: some cleanups
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Thu 03-02-22 18:00:07, John Hubbard wrote:
> Remove a quirky special case from follow_pfn_pte(), and adjust its
> callers to match. Caller changes include:
>
> __get_user_pages(): Regardless of any FOLL_* flags, get_user_pages() and
> its variants should handle PFN-only entries by stopping early, if the
> caller expected **pages to be filled in. This makes for a more reliable
> API, as compared to the previous approach of skipping over such entries
> (and thus leaving them silently unwritten).
>
> move_pages(): squash the -EEXIST error return from follow_page() into
> -EFAULT, because -EFAULT is listed in the man page, whereas -EEXIST is
> not.
>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/gup.c     | 13 ++++++++-----
>  mm/migrate.c |  7 +++++++
>  2 files changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 80229ecf0114..2df0d0103c43 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -464,10 +464,6 @@ static struct page *no_page_table(struct vm_area_struct *vma,
>  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
>  		pte_t *pte, unsigned int flags)
>  {
> -	/* No page to get reference */
> -	if (flags & (FOLL_GET | FOLL_PIN))
> -		return -EFAULT;
> -
>  	if (flags & FOLL_TOUCH) {
>  		pte_t entry = *pte;
>
> @@ -1205,8 +1201,15 @@ static long __get_user_pages(struct mm_struct *mm,
>  		} else if (PTR_ERR(page) == -EEXIST) {
>  			/*
>  			 * Proper page table entry exists, but no corresponding
> -			 * struct page.
> +			 * struct page. If the caller expects **pages to be
> +			 * filled in, bail out now, because that can't be done
> +			 * for this page.
>  			 */
> +			if (pages) {
> +				ret = PTR_ERR(page);
> +				goto out;
> +			}
> +
>  			goto next_page;
>  		} else if (IS_ERR(page)) {
>  			ret = PTR_ERR(page);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c7da064b4781..be0d5ae36dc1 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1761,6 +1761,13 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
>  			continue;
>  		}
>
> +		/*
> +		 * The move_pages() man page does not have an -EEXIST choice, so
> +		 * use -EFAULT instead.
> +		 */
> +		if (err == -EEXIST)
> +			err = -EFAULT;
> +
>  		/*
>  		 * If the page is already on the target node (!err), store the
>  		 * node, otherwise, store the err.
> --
> 2.35.1
>
diff --git a/mm/gup.c b/mm/gup.c
index 80229ecf0114..2df0d0103c43 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -464,10 +464,6 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
-	/* No page to get reference */
-	if (flags & (FOLL_GET | FOLL_PIN))
-		return -EFAULT;
-
 	if (flags & FOLL_TOUCH) {
 		pte_t entry = *pte;

@@ -1205,8 +1201,15 @@ static long __get_user_pages(struct mm_struct *mm,
 		} else if (PTR_ERR(page) == -EEXIST) {
 			/*
 			 * Proper page table entry exists, but no corresponding
-			 * struct page.
+			 * struct page. If the caller expects **pages to be
+			 * filled in, bail out now, because that can't be done
+			 * for this page.
 			 */
+			if (pages) {
+				ret = PTR_ERR(page);
+				goto out;
+			}
+
 			goto next_page;
 		} else if (IS_ERR(page)) {
 			ret = PTR_ERR(page);
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..be0d5ae36dc1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1761,6 +1761,13 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 			continue;
 		}

+		/*
+		 * The move_pages() man page does not have an -EEXIST choice, so
+		 * use -EFAULT instead.
+		 */
+		if (err == -EEXIST)
+			err = -EFAULT;
+
 		/*
 		 * If the page is already on the target node (!err), store the
 		 * node, otherwise, store the err.
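To illustrate where the mm/migrate.c change above becomes visible from
userspace, here is a minimal sketch using move_pages(2) from libnuma's
<numaif.h>. It assumes an mmap()ed PFN-only (VM_PFNMAP) region; the device
path "/dev/pfnmap-example" is made up and stands in for whatever device
mapping is at hand.

/*
 * Sketch only: shows where the squashed errno lands. Assumes libnuma
 * development headers are installed; the device path is hypothetical.
 */
#include <fcntl.h>
#include <numaif.h>             /* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/pfnmap-example", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        void *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        void *pages[] = { addr };
        int nodes[] = { 0 };    /* request a move to node 0 */
        int status[1];

        /*
         * Per-page errors are reported via status[]. With this patch,
         * a PFN-only page yields -EFAULT here, as documented in the
         * man page, rather than the undocumented -EEXIST.
         */
        if (move_pages(0 /* self */, 1, pages, nodes, status,
                       MPOL_MF_MOVE) >= 0)
                printf("status[0] = %d\n", status[0]);
        else
                perror("move_pages");

        munmap(addr, 4096);
        close(fd);
        return 0;
}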
Remove a quirky special case from follow_pfn_pte(), and adjust its
callers to match. Caller changes include:

__get_user_pages(): Regardless of any FOLL_* flags, get_user_pages() and
its variants should handle PFN-only entries by stopping early, if the
caller expected **pages to be filled in. This makes for a more reliable
API, as compared to the previous approach of skipping over such entries
(and thus leaving them silently unwritten).

move_pages(): squash the -EEXIST error return from follow_page() into
-EFAULT, because -EFAULT is listed in the man page, whereas -EEXIST is
not.

Cc: Peter Xu <peterx@redhat.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c     | 13 ++++++++-----
 mm/migrate.c |  7 +++++++
 2 files changed, 15 insertions(+), 5 deletions(-)
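On the gup side, the new contract for callers can be read as follows. This
is a kernel-style sketch, not part of the patch: the wrapper name is
invented, and the get_user_pages() signature used is the one current at the
time of this series (with the trailing vmas argument).

#include <linux/mm.h>

/* Hypothetical caller, for illustration; mmap_lock must be held. */
static long fill_user_pages(unsigned long start, unsigned long nr_pages,
                            struct page **pages)
{
        long got = get_user_pages(start, nr_pages, FOLL_WRITE, pages, NULL);

        /*
         * With this patch, hitting a PFN-only entry while "pages" is
         * non-NULL stops the walk: "got" is either the count of pages
         * actually filled in so far, or -EEXIST if none were. Nothing
         * in pages[0..got-1] is silently skipped, so a partial result
         * can be consumed, or released with put_page(), safely.
         */
        if (got < 0)
                return got;
        if (got < (long)nr_pages) {
                /* Unwind if all-or-nothing semantics are wanted. */
                while (got--)
                        put_page(pages[got]);
                return -EFAULT;
        }
        return got;
}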