Message ID: 20181110085041.10071-2-jhubbard@nvidia.com (mailing list archive)
State: RFC
Series: RFC: gup+dma: tracking dma-pinned pages
On Sat, Nov 10, 2018 at 12:50:36AM -0800, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
>
> An upcoming patch wants to be able to operate on each page that
> get_user_pages has retrieved. In order to do that, it's best to
> have a common exit point from the routine. Most of this has been
> taken care of by commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while
> pinning pages"), but there was one case remaining.
>
> Also, there was still an unnecessary shadow declaration (with a
> different type) of the "ret" variable, which this commit removes.
>
> Cc: Keith Busch <keith.busch@intel.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  mm/gup.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index f76e77a2d34b..55a41dee0340 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -696,12 +696,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
>  		if (!vma || start >= vma->vm_end) {
>  			vma = find_extend_vma(mm, start);
>  			if (!vma && in_gate_area(mm, start)) {
> -				int ret;
>  				ret = get_gate_page(mm, start & PAGE_MASK,
>  						gup_flags, &vma,
>  						pages ? &pages[i] : NULL);
>  				if (ret)
> -					return i ? : ret;
> +					goto out;
>  				ctx.page_mask = 0;
>  				goto next_page;
>  			}

This also fixes a potentially leaked dev_pagemap reference count if a
failure occurs when an iteration crosses a vma boundary. I don't think
it's normal to have different vma's on a users mapped zone device memory,
but good to fix anyway.

Reviewed-by: Keith Busch <keith.busch@intel.com>
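Keith's point about the leaked reference count can be illustrated outside the kernel. The sketch below is a hypothetical stand-in for the mm/gup.c logic, not the real code (`pgmap_refs`, `gup_leaky`, and `gup_fixed` are all made-up names): a reference cached across loop iterations leaks when a failure path returns directly, while funneling every exit through a common `out:` label releases it exactly once.

```c
#include <assert.h>

/*
 * Hypothetical stand-ins for the mm/gup.c pattern: __get_user_pages()
 * caches a dev_pagemap reference across loop iterations and must drop
 * it on every exit path.  None of these names are real kernel symbols.
 */
static int pgmap_refs;	/* stands in for the cached dev_pagemap refcount */

static void get_pgmap(void) { pgmap_refs++; }
static void put_pgmap(void) { pgmap_refs--; }

/* Before the patch: a mid-loop failure returns directly, leaking the
 * reference taken on an earlier iteration. */
static long gup_leaky(int npages, int fail_at)
{
	int cached = 0;
	long i, ret = -14;	/* -14: stand-in for -EFAULT */

	for (i = 0; i < npages; i++) {
		if (!cached) {
			get_pgmap();
			cached = 1;
		}
		if (i == fail_at)
			return i ? i : ret;	/* early return: leak */
	}
	if (cached)
		put_pgmap();
	return i;
}

/* After the patch: every failure funnels through a common "out" label,
 * so the cached reference is dropped exactly once on all paths. */
static long gup_fixed(int npages, int fail_at)
{
	int cached = 0;
	long i, ret = -14;

	for (i = 0; i < npages; i++) {
		if (!cached) {
			get_pgmap();
			cached = 1;
		}
		if (i == fail_at) {
			ret = i ? i : ret;
			goto out;
		}
	}
	ret = i;
out:
	if (cached)
		put_pgmap();
	return ret;
}
```

Failing at iteration 3, `gup_leaky()` exits with `pgmap_refs` still elevated, while `gup_fixed()` leaves it balanced; that is the leak the `goto out;` conversion closes as a side effect.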
On Mon, Nov 12, 2018 at 7:45 AM Keith Busch <keith.busch@intel.com> wrote:
>
> On Sat, Nov 10, 2018 at 12:50:36AM -0800, john.hubbard@gmail.com wrote:
> > From: John Hubbard <jhubbard@nvidia.com>
> >
> > An upcoming patch wants to be able to operate on each page that
> > get_user_pages has retrieved. In order to do that, it's best to
> > have a common exit point from the routine. Most of this has been
> > taken care of by commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while
> > pinning pages"), but there was one case remaining.
> >
> > Also, there was still an unnecessary shadow declaration (with a
> > different type) of the "ret" variable, which this commit removes.
> >
> > Cc: Keith Busch <keith.busch@intel.com>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Cc: Dave Hansen <dave.hansen@intel.com>
> > Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> > ---
> >  mm/gup.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index f76e77a2d34b..55a41dee0340 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -696,12 +696,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
> >  		if (!vma || start >= vma->vm_end) {
> >  			vma = find_extend_vma(mm, start);
> >  			if (!vma && in_gate_area(mm, start)) {
> > -				int ret;
> >  				ret = get_gate_page(mm, start & PAGE_MASK,
> >  						gup_flags, &vma,
> >  						pages ? &pages[i] : NULL);
> >  				if (ret)
> > -					return i ? : ret;
> > +					goto out;
> >  				ctx.page_mask = 0;
> >  				goto next_page;
> >  			}
>
> This also fixes a potentially leaked dev_pagemap reference count if a
> failure occurs when an iteration crosses a vma boundary. I don't think
> it's normal to have different vma's on a users mapped zone device memory,
> but good to fix anyway.

Does not sound abnormal to me, we should promote this as a fix for the
current cycle with an updated changelog.
On 11/12/18 8:14 AM, Dan Williams wrote:
> On Mon, Nov 12, 2018 at 7:45 AM Keith Busch <keith.busch@intel.com> wrote:
>>
>> On Sat, Nov 10, 2018 at 12:50:36AM -0800, john.hubbard@gmail.com wrote:
>>> From: John Hubbard <jhubbard@nvidia.com>
>>>
>>> An upcoming patch wants to be able to operate on each page that
>>> get_user_pages has retrieved. In order to do that, it's best to
>>> have a common exit point from the routine. Most of this has been
>>> taken care of by commit df06b37ffe5a4 ("mm/gup: cache dev_pagemap while
>>> pinning pages"), but there was one case remaining.
>>>
>>> Also, there was still an unnecessary shadow declaration (with a
>>> different type) of the "ret" variable, which this commit removes.
>>>
>>> Cc: Keith Busch <keith.busch@intel.com>
>>> Cc: Dan Williams <dan.j.williams@intel.com>
>>> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>>> Cc: Dave Hansen <dave.hansen@intel.com>
>>> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
>>> ---
>>>  mm/gup.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index f76e77a2d34b..55a41dee0340 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -696,12 +696,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
>>>  		if (!vma || start >= vma->vm_end) {
>>>  			vma = find_extend_vma(mm, start);
>>>  			if (!vma && in_gate_area(mm, start)) {
>>> -				int ret;
>>>  				ret = get_gate_page(mm, start & PAGE_MASK,
>>>  						gup_flags, &vma,
>>>  						pages ? &pages[i] : NULL);
>>>  				if (ret)
>>> -					return i ? : ret;
>>> +					goto out;
>>>  				ctx.page_mask = 0;
>>>  				goto next_page;
>>>  			}
>>
>> This also fixes a potentially leaked dev_pagemap reference count if a
>> failure occurs when an iteration crosses a vma boundary. I don't think
>> it's normal to have different vma's on a users mapped zone device memory,
>> but good to fix anyway.
>
> Does not sound abnormal to me, we should promote this as a fix for the
> current cycle with an updated changelog.

Andrew, should I send this patch separately, or do you have what you need
already?

thanks,
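The "shadow declaration (with a different type)" that the changelog removes is a plain C scoping issue, and the reason it mattered for the `goto out;` conversion can be shown in isolation. This is a generic sketch (`shadow_demo` is a made-up name, not the kernel code):

```c
#include <assert.h>

/* An inner "int ret" hides the outer "long ret" for the rest of its
 * block, so assignments inside the block never reach the outer
 * variable -- which is why the duplicate declaration had to go once
 * the error path started relying on the outer ret at the out: label. */
static long shadow_demo(void)
{
	long ret = 0;		/* outer declaration */
	{
		int ret;	/* shadows the outer ret, different type */

		ret = -14;	/* only touches the inner ret */
		(void)ret;
	}
	return ret;		/* still 0: the inner write is invisible */
}
```

With the inner declaration gone, `get_gate_page()`'s result lands in the one `ret` that the common exit point actually returns.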
diff --git a/mm/gup.c b/mm/gup.c
index f76e77a2d34b..55a41dee0340 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -696,12 +696,11 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (!vma || start >= vma->vm_end) {
 			vma = find_extend_vma(mm, start);
 			if (!vma && in_gate_area(mm, start)) {
-				int ret;
 				ret = get_gate_page(mm, start & PAGE_MASK,
 						gup_flags, &vma,
 						pages ? &pages[i] : NULL);
 				if (ret)
-					return i ? : ret;
+					goto out;
 				ctx.page_mask = 0;
 				goto next_page;
 			}
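For readers unfamiliar with the `return i ? : ret;` line being removed: it uses the GNU C conditional with an omitted middle operand, where `a ? : b` evaluates `a` once and yields it when nonzero, otherwise `b`. Here that meant "return the number of pages already pinned if any, else the error code". A minimal illustration (the function name is made up; this requires GNU extensions, which gcc and clang enable by default):

```c
#include <assert.h>

/* Same semantics as the removed "return i ? : ret;": report how many
 * pages were already pinned if any, otherwise propagate the error. */
static long pages_or_error(long i, long ret)
{
	return i ? : ret;	/* GNU extension; equivalent to i ? i : ret */
}
```

The patch's `goto out;` preserves these semantics while routing the failure through the common cleanup at the `out:` label.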