Message ID | 20190212025632.28946-6-peterx@redhat.com
---|---
State | New, archived
Series | userfaultfd: write protection support
```
On Tue, Feb 12, 2019 at 10:56:11AM +0800, Peter Xu wrote:
> This is the gup counterpart of the change that allows the VM_FAULT_RETRY
> to happen for more than once.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

> ---
>  mm/gup.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index fa75a03204c1..ba387aec0d80 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -528,7 +528,10 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
>  	if (*flags & FOLL_NOWAIT)
>  		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
>  	if (*flags & FOLL_TRIED) {
> -		VM_WARN_ON_ONCE(fault_flags & FAULT_FLAG_ALLOW_RETRY);
> +		/*
> +		 * Note: FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_TRIED
> +		 * can co-exist
> +		 */
>  		fault_flags |= FAULT_FLAG_TRIED;
>  	}
> 
> @@ -943,17 +946,23 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
> 		/* VM_FAULT_RETRY triggered, so seek to the faulting offset */
> 		pages += ret;
> 		start += ret << PAGE_SHIFT;
> +		lock_dropped = true;
> 
> +retry:
> 		/*
> 		 * Repeat on the address that fired VM_FAULT_RETRY
> -		 * without FAULT_FLAG_ALLOW_RETRY but with
> +		 * with both FAULT_FLAG_ALLOW_RETRY and
> 		 * FAULT_FLAG_TRIED.
> 		 */
> 		*locked = 1;
> -		lock_dropped = true;
> 		down_read(&mm->mmap_sem);
> 		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
> -				       pages, NULL, NULL);
> +				       pages, NULL, locked);
> +		if (!*locked) {
> +			/* Continue to retry until we succeeded */
> +			BUG_ON(ret != 0);
> +			goto retry;
> +		}
> 		if (ret != 1) {
> 			BUG_ON(ret > 1);
> 			if (!pages_done)
> -- 
> 2.17.1
```
```
On Thu, Feb 21, 2019 at 11:06:55AM -0500, Jerome Glisse wrote:
> On Tue, Feb 12, 2019 at 10:56:11AM +0800, Peter Xu wrote:
> > This is the gup counterpart of the change that allows the VM_FAULT_RETRY
> > to happen for more than once.
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>
>
> Reviewed-by: Jérôme Glisse <jglisse@redhat.com>

Thanks for the r-b, Jerome!

I plan to change this patch a bit, though, because I just noticed that I
didn't touch up the hugetlbfs path for GUP.  That is not strictly needed
yet, since hugetlbfs is not supported so far, but I think I'd better do
it in this same patch to make follow-up work on hugetlb easier and to
keep the patch self-contained.  The new version will simply squash the
change below into the current patch:

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e3c738bde72e..a8eace2d5296 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4257,8 +4257,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
 					FAULT_FLAG_RETRY_NOWAIT;
 			if (flags & FOLL_TRIED) {
-				VM_WARN_ON_ONCE(fault_flags &
-						FAULT_FLAG_ALLOW_RETRY);
+				/*
+				 * Note: FAULT_FLAG_ALLOW_RETRY and
+				 * FAULT_FLAG_TRIED can co-exist
+				 */
 				fault_flags |= FAULT_FLAG_TRIED;
 			}
 			ret = hugetlb_fault(mm, vma, vaddr, fault_flags);

I'd say this change is straightforward (it's the same as the
faultin_page() change quoted below, just for hugetlbfs).  Please let me
know if you still want to offer the r-b with the above change squashed
(I'll be more than glad to take it!), or I'll just wait for your review
comments when I post the next version.

Thanks,

> > ---
> >  mm/gup.c | 17 +++++++++++++----
> >  1 file changed, 13 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index fa75a03204c1..ba387aec0d80 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -528,7 +528,10 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
> >  	if (*flags & FOLL_NOWAIT)
> >  		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
> >  	if (*flags & FOLL_TRIED) {
> > -		VM_WARN_ON_ONCE(fault_flags & FAULT_FLAG_ALLOW_RETRY);
> > +		/*
> > +		 * Note: FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_TRIED
> > +		 * can co-exist
> > +		 */
> >  		fault_flags |= FAULT_FLAG_TRIED;
> >  	}
> > 
> > @@ -943,17 +946,23 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
> > 		/* VM_FAULT_RETRY triggered, so seek to the faulting offset */
> > 		pages += ret;
> > 		start += ret << PAGE_SHIFT;
> > +		lock_dropped = true;
> > 
> > +retry:
> > 		/*
> > 		 * Repeat on the address that fired VM_FAULT_RETRY
> > -		 * without FAULT_FLAG_ALLOW_RETRY but with
> > +		 * with both FAULT_FLAG_ALLOW_RETRY and
> > 		 * FAULT_FLAG_TRIED.
> > 		 */
> > 		*locked = 1;
> > -		lock_dropped = true;
> > 		down_read(&mm->mmap_sem);
> > 		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
> > -				       pages, NULL, NULL);
> > +				       pages, NULL, locked);
> > +		if (!*locked) {
> > +			/* Continue to retry until we succeeded */
> > +			BUG_ON(ret != 0);
> > +			goto retry;
> > +		}
> > 		if (ret != 1) {
> > 			BUG_ON(ret > 1);
> > 			if (!pages_done)
> > -- 
> > 2.17.1
```
```
On Fri, Feb 22, 2019 at 12:41:05PM +0800, Peter Xu wrote:
> On Thu, Feb 21, 2019 at 11:06:55AM -0500, Jerome Glisse wrote:
> > On Tue, Feb 12, 2019 at 10:56:11AM +0800, Peter Xu wrote:
> > > This is the gup counterpart of the change that allows the VM_FAULT_RETRY
> > > to happen for more than once.
> > >
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> >
> > Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
>
> Thanks for the r-b, Jerome!
>
> I plan to change this patch a bit, though, because I just noticed that I
> didn't touch up the hugetlbfs path for GUP.  That is not strictly needed
> yet, since hugetlbfs is not supported so far, but I think I'd better do
> it in this same patch to make follow-up work on hugetlb easier and to
> keep the patch self-contained.  The new version will simply squash the
> change below into the current patch:
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e3c738bde72e..a8eace2d5296 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4257,8 +4257,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
>  					FAULT_FLAG_RETRY_NOWAIT;
>  			if (flags & FOLL_TRIED) {
> -				VM_WARN_ON_ONCE(fault_flags &
> -						FAULT_FLAG_ALLOW_RETRY);
> +				/*
> +				 * Note: FAULT_FLAG_ALLOW_RETRY and
> +				 * FAULT_FLAG_TRIED can co-exist
> +				 */
>  				fault_flags |= FAULT_FLAG_TRIED;
>  			}
>  			ret = hugetlb_fault(mm, vma, vaddr, fault_flags);
>
> I'd say this change is straightforward (it's the same as the
> faultin_page() change quoted below, just for hugetlbfs).  Please let me
> know if you still want to offer the r-b with the above change squashed
> (I'll be more than glad to take it!), or I'll just wait for your review
> comments when I post the next version.

Looks good, I should have thought of hugetlbfs. You can keep my r-b.

Cheers,
Jérôme
```
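Both call sites the thread discusses converge on the same rule. The sketch below is a paraphrase of the two hunks, folded into a hypothetical helper named `gup_fault_flags()` (that name and the standalone-function shape are illustrative, not code from the patch); it shows the flag setup that makes repeated retries possible:

```c
/*
 * Simplified sketch of the fault-flag setup shared by faultin_page()
 * and follow_hugetlb_page() after this series; not a drop-in
 * replacement for either call site.
 */
static unsigned int gup_fault_flags(unsigned int gup_flags)
{
	unsigned int fault_flags = 0;

	if (gup_flags & FOLL_WRITE)
		fault_flags |= FAULT_FLAG_WRITE;
	if (gup_flags & FOLL_NOWAIT)
		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
	if (gup_flags & FOLL_TRIED) {
		/*
		 * This used to VM_WARN_ON_ONCE() when ALLOW_RETRY was
		 * already set; with this series the two flags co-exist,
		 * which is what permits a second (third, ...) retry.
		 */
		fault_flags |= FAULT_FLAG_TRIED;
	}
	return fault_flags;
}
```

Dropping the VM_WARN_ON_ONCE() is the substance of the change: FAULT_FLAG_TRIED no longer implies that the single FAULT_FLAG_ALLOW_RETRY attempt has already been consumed.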
```diff
diff --git a/mm/gup.c b/mm/gup.c
index fa75a03204c1..ba387aec0d80 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -528,7 +528,10 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
 	if (*flags & FOLL_TRIED) {
-		VM_WARN_ON_ONCE(fault_flags & FAULT_FLAG_ALLOW_RETRY);
+		/*
+		 * Note: FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_TRIED
+		 * can co-exist
+		 */
 		fault_flags |= FAULT_FLAG_TRIED;
 	}
 
@@ -943,17 +946,23 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 		/* VM_FAULT_RETRY triggered, so seek to the faulting offset */
 		pages += ret;
 		start += ret << PAGE_SHIFT;
+		lock_dropped = true;
 
+retry:
 		/*
 		 * Repeat on the address that fired VM_FAULT_RETRY
-		 * without FAULT_FLAG_ALLOW_RETRY but with
+		 * with both FAULT_FLAG_ALLOW_RETRY and
 		 * FAULT_FLAG_TRIED.
 		 */
 		*locked = 1;
-		lock_dropped = true;
 		down_read(&mm->mmap_sem);
 		ret = __get_user_pages(tsk, mm, start, 1, flags | FOLL_TRIED,
-				       pages, NULL, NULL);
+				       pages, NULL, locked);
+		if (!*locked) {
+			/* Continue to retry until we succeeded */
+			BUG_ON(ret != 0);
+			goto retry;
+		}
 		if (ret != 1) {
 			BUG_ON(ret > 1);
 			if (!pages_done)
```
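For callers, the `locked` argument now threaded into the inner `__get_user_pages()` call follows the same "locked" protocol that external GUP users already rely on. A hypothetical caller sketch, assuming the `get_user_pages_locked()` interface of this kernel era (the helper name `pin_one_page()` and the FOLL_WRITE choice are illustrative, not from the patch):

```c
/*
 * Hypothetical caller sketch: the "locked" protocol that
 * __get_user_pages_locked() implements for its callers.  If GUP had
 * to drop mmap_sem (e.g. the fault handler returned VM_FAULT_RETRY),
 * it reports that via *locked, so we must not unlock a lock we no
 * longer hold.
 */
static int pin_one_page(unsigned long addr, struct page **page)
{
	int locked = 1;
	long ret;

	down_read(&current->mm->mmap_sem);
	ret = get_user_pages_locked(addr, 1, FOLL_WRITE, page, &locked);
	if (locked)
		up_read(&current->mm->mmap_sem);

	return ret == 1 ? 0 : -EFAULT;	/* caller put_page()s on success */
}
```

The invariant is that whoever observes `locked == 0` knows mmap_sem was dropped on their behalf; the patch extends the internal retry loop so it can re-take the lock and fault the same address again as many times as needed.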
```
This is the gup counterpart of the change that allows the VM_FAULT_RETRY
to happen for more than once.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)
```
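To see why "more than once" matters for this series: with userfaultfd write protection, a fault can stay unresolved across several VM_FAULT_RETRY rounds until the monitor process services it. A toy userspace C model of that unbounded-retry requirement (purely illustrative; no kernel APIs, and the round count is made up):

```c
#include <stdio.h>

enum fault_result { FAULT_DONE, FAULT_RETRY };

/*
 * Toy stand-in for a fault on a userfaultfd-armed range: it keeps
 * returning "retry" until the (simulated) monitor resolves it.
 */
static enum fault_result fake_fault(int attempt, int resolved_after)
{
	return attempt >= resolved_after ? FAULT_DONE : FAULT_RETRY;
}

int main(void)
{
	int attempt = 0;

	/*
	 * The old GUP path budgeted a single retry; the patched loop
	 * keeps going until the fault actually resolves, as here.
	 */
	while (fake_fault(attempt, 3) == FAULT_RETRY)
		attempt++;

	printf("fault resolved after %d retries\n", attempt);
	return 0;
}
```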