Message ID | 1559170444-3304-1-git-send-email-kernelfans@gmail.com (mailing list archive)
---|---
State | New, archived
Series | mm/gup: fix omission of check on FOLL_LONGTERM in get_user_pages_fast()
On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
> As for FOLL_LONGTERM, it is checked in the slow path
> __gup_longterm_unlocked(). But it is not checked in the fast path, which
> means a CMA page can leak to a longterm pin through this crack.
>
> Place a check in the fast path.
>
> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> Cc: Ira Weiny <ira.weiny@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
> Cc: Keith Busch <keith.busch@intel.com>
> Cc: linux-kernel@vger.kernel.org
> ---
>  mm/gup.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index f173fcb..00feab3 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2235,6 +2235,18 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  		local_irq_enable();
>  		ret = nr;
>  	}
> +#if defined(CONFIG_CMA)
> +	if (unlikely(gup_flags & FOLL_LONGTERM)) {
> +		int i, j;
> +
> +		for (i = 0; i < nr; i++)
> +			if (is_migrate_cma_page(pages[i])) {
> +				for (j = i; j < nr; j++)
> +					put_page(pages[j]);

Should be put_user_page() now. For now that just calls put_page() but it
is slated to change soon.

I also wonder if this would be more efficient as a check done while we
are walking the page tables, so we could bail early.

Perhaps the code complexity is not worth it?

> +				nr = i;

Why not just break from the loop here?

Or better yet just use 'i' in the inner loop...

Ira

> +			}
> +	}
> +#endif
>
>  	if (nr < nr_pages) {
>  		/* Try to get the remaining pages with get_user_pages */
> --
> 2.7.5
>
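For concreteness, the bail-early variant Ira describes would put the CMA test inside the page-table walkers themselves, so a longterm pin stops accumulating pages the moment it meets a CMA page. A rough, untested sketch of what that could look like in the pte-level walker (the surrounding gup_pte_range() structure and its pte_unmap exit path are assumed from contemporary mm/gup.c; the same test would also have to be replicated in the pmd/pud huge-page walkers, which is the code-complexity cost being weighed):

	/* Sketch: inside the pte loop of gup_pte_range() */
	page = pte_page(pte);
	if (IS_ENABLED(CONFIG_CMA) &&
	    unlikely(flags & FOLL_LONGTERM) &&
	    is_migrate_cma_page(page))
		goto pte_unmap;	/* stop pinning; the slow path migrates it */

Bailing out this way leaves *nr at the count accumulated so far, and get_user_pages_fast() then falls through to the slow path for the remainder, exactly as it does for any other walker failure.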
On 5/30/19 2:47 PM, Ira Weiny wrote:
> On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
[...]
>> +				for (j = i; j < nr; j++)
>> +					put_page(pages[j]);
>
> Should be put_user_page() now. For now that just calls put_page() but it
> is slated to change soon.
>
> I also wonder if this would be more efficient as a check done while we
> are walking the page tables, so we could bail early.
>
> Perhaps the code complexity is not worth it?

Good point, it might be worth it: now we've got two loops to run after
the interrupts-off page walk, and it's starting to look like a potential
performance concern.

>
>> +				nr = i;
>
> Why not just break from the loop here?
>
> Or better yet just use 'i' in the inner loop...
>

...but if you do end up putting in the after-the-fact check, then we can
go one or two steps further in cleaning it up, by:

* hiding the visible #ifdef that was slicing up gup_fast,

* using put_user_pages() instead of either put_page() or put_user_page(),
  thus getting rid of j entirely, and

* renaming an ancient minor confusion: nr --> nr_pinned.

We could have this, which looks cleaner and still does the same thing:

diff --git a/mm/gup.c b/mm/gup.c
index f173fcbaf1b2..0c1f36be1863 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1486,6 +1486,33 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
 }
 #endif /* CONFIG_FS_DAX || CONFIG_CMA */
 
+#ifdef CONFIG_CMA
+/*
+ * Returns the number of pages that were *not* rejected. This makes it
+ * exactly compatible with its callers.
+ */
+static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
+			    struct page **pages)
+{
+	int i = 0;
+
+	if (unlikely(gup_flags & FOLL_LONGTERM)) {
+		for (i = 0; i < nr_pinned; i++)
+			if (is_migrate_cma_page(pages[i])) {
+				put_user_pages(&pages[i], nr_pinned - i);
+				break;
+			}
+	}
+	return i;
+}
+#else
+static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
+			    struct page **pages)
+{
+	return nr_pinned;
+}
+#endif
+
 /*
  * This is the same as get_user_pages_remote(), just with a
  * less-flexible calling convention where we assume that the task
@@ -2216,7 +2243,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
 {
 	unsigned long addr, len, end;
-	int nr = 0, ret = 0;
+	int nr_pinned = 0, ret = 0;
 
 	start &= PAGE_MASK;
 	addr = start;
@@ -2231,25 +2258,27 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
 	if (gup_fast_permitted(start, nr_pages)) {
 		local_irq_disable();
-		gup_pgd_range(addr, end, gup_flags, pages, &nr);
+		gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
 		local_irq_enable();
-		ret = nr;
+		ret = nr_pinned;
 	}
 
-	if (nr < nr_pages) {
+	nr_pinned = reject_cma_pages(nr_pinned, gup_flags, pages);
+
+	if (nr_pinned < nr_pages) {
 		/* Try to get the remaining pages with get_user_pages */
-		start += nr << PAGE_SHIFT;
-		pages += nr;
+		start += nr_pinned << PAGE_SHIFT;
+		pages += nr_pinned;
 
-		ret = __gup_longterm_unlocked(start, nr_pages - nr,
+		ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned,
 					      gup_flags, pages);
 
 		/* Have to be a bit careful with return values */
-		if (nr > 0) {
+		if (nr_pinned > 0) {
 			if (ret < 0)
-				ret = nr;
+				ret = nr_pinned;
 			else
-				ret += nr;
+				ret += nr_pinned;
 		}
 	}

Rather lightly tested... I've compile-tested with CONFIG_CMA and
!CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second set
of eyes on whether I've added any off-by-one errors, or worse. :)

thanks,
--
John Hubbard
NVIDIA
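One quick way to get that second set of eyes on the boundary behavior is to lift the FOLL_LONGTERM branch of reject_cma_pages() into a standalone userspace harness. The following is a hypothetical sketch, not kernel code: struct page, is_migrate_cma_page(), put_user_pages(), and the FOLL_LONGTERM value are all stand-ins, and it exercises the first/middle/none positions of the first CMA page:

#include <assert.h>

struct page { int is_cma; };		/* stand-in for the real struct page */

static int puts_seen;			/* counts pages handed back */

static int is_migrate_cma_page(struct page *p) { return p->is_cma; }
static void put_user_pages(struct page **pages, int n) { puts_seen += n; }

#define FOLL_LONGTERM 0x10000		/* stand-in flag value */

/* The FOLL_LONGTERM branch, lifted from the patch above. */
static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
			    struct page **pages)
{
	int i = 0;

	if (gup_flags & FOLL_LONGTERM) {
		for (i = 0; i < nr_pinned; i++)
			if (is_migrate_cma_page(pages[i])) {
				put_user_pages(&pages[i], nr_pinned - i);
				break;
			}
	}
	return i;
}

int main(void)
{
	struct page a = {0}, b = {0}, c = {1}, d = {0};
	struct page *mid[]   = { &a, &b, &c, &d };	/* first CMA at 2 */
	struct page *first[] = { &c, &a, &b, &d };	/* first CMA at 0 */
	struct page *none[]  = { &a, &b, &d, &a };	/* no CMA pages */

	/* CMA page in the middle: keep 2, put the remaining 2. */
	puts_seen = 0;
	assert(reject_cma_pages(4, FOLL_LONGTERM, mid) == 2 && puts_seen == 2);

	/* CMA page first: keep nothing, put all 4. */
	puts_seen = 0;
	assert(reject_cma_pages(4, FOLL_LONGTERM, first) == 0 && puts_seen == 4);

	/* No CMA pages: keep all 4, put nothing. */
	puts_seen = 0;
	assert(reject_cma_pages(4, FOLL_LONGTERM, none) == 4 && puts_seen == 0);

	return 0;
}

All three FOLL_LONGTERM boundary cases check out: the return value is always the count of leading non-CMA pages, and exactly the rejected tail is handed back.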
On Thu, May 30, 2019 at 04:21:19PM -0700, John Hubbard wrote:
> On 5/30/19 2:47 PM, Ira Weiny wrote:
> > On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
> [...]
> >> +				for (j = i; j < nr; j++)
> >> +					put_page(pages[j]);
> >
> > Should be put_user_page() now. For now that just calls put_page() but
> > it is slated to change soon.
> >
> > I also wonder if this would be more efficient as a check done while we
> > are walking the page tables, so we could bail early.
> >
> > Perhaps the code complexity is not worth it?
>
> Good point, it might be worth it: now we've got two loops to run after
> the interrupts-off page walk, and it's starting to look like a potential
> performance concern.

FWIW I don't see this being a huge issue at the moment. Perhaps those
more familiar with CMA can weigh in here. How was this issue found? If
it was found by running some test, perhaps that indicates a performance
preference?

>
> >> +				nr = i;
> >
> > Why not just break from the loop here?
> >
> > Or better yet just use 'i' in the inner loop...
> >
>
> ...but if you do end up putting in the after-the-fact check, then we can
> go one or two steps further in cleaning it up, by:
>
> * hiding the visible #ifdef that was slicing up gup_fast,
>
> * using put_user_pages() instead of either put_page() or put_user_page(),
>   thus getting rid of j entirely, and
>
> * renaming an ancient minor confusion: nr --> nr_pinned.
>
> We could have this, which looks cleaner and still does the same thing:
>
> diff --git a/mm/gup.c b/mm/gup.c
> index f173fcbaf1b2..0c1f36be1863 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1486,6 +1486,33 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
>  }
>  #endif /* CONFIG_FS_DAX || CONFIG_CMA */
>
> +#ifdef CONFIG_CMA
> +/*
> + * Returns the number of pages that were *not* rejected. This makes it
> + * exactly compatible with its callers.
> + */
> +static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
> +			    struct page **pages)
> +{
> +	int i = 0;
> +
> +	if (unlikely(gup_flags & FOLL_LONGTERM)) {
> +		for (i = 0; i < nr_pinned; i++)
> +			if (is_migrate_cma_page(pages[i])) {
> +				put_user_pages(&pages[i], nr_pinned - i);

Yes this is cleaner.

> +				break;
> +			}
> +	}
> +	return i;
> +}
> +#else
> +static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
> +			    struct page **pages)
> +{
> +	return nr_pinned;
> +}
> +#endif
> +
>  /*
>   * This is the same as get_user_pages_remote(), just with a
>   * less-flexible calling convention where we assume that the task
> @@ -2216,7 +2243,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  			unsigned int gup_flags, struct page **pages)
>  {
>  	unsigned long addr, len, end;
> -	int nr = 0, ret = 0;
> +	int nr_pinned = 0, ret = 0;

To be absolutely pedantic I would have split the nr_pinned change into a
separate patch.

Ira

>
>  	start &= PAGE_MASK;
>  	addr = start;
> @@ -2231,25 +2258,27 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>
>  	if (gup_fast_permitted(start, nr_pages)) {
>  		local_irq_disable();
> -		gup_pgd_range(addr, end, gup_flags, pages, &nr);
> +		gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
>  		local_irq_enable();
> -		ret = nr;
> +		ret = nr_pinned;
>  	}
>
> -	if (nr < nr_pages) {
> +	nr_pinned = reject_cma_pages(nr_pinned, gup_flags, pages);
> +
> +	if (nr_pinned < nr_pages) {
>  		/* Try to get the remaining pages with get_user_pages */
> -		start += nr << PAGE_SHIFT;
> -		pages += nr;
> +		start += nr_pinned << PAGE_SHIFT;
> +		pages += nr_pinned;
>
> -		ret = __gup_longterm_unlocked(start, nr_pages - nr,
> +		ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned,
>  					      gup_flags, pages);
>
>  		/* Have to be a bit careful with return values */
> -		if (nr > 0) {
> +		if (nr_pinned > 0) {
>  			if (ret < 0)
> -				ret = nr;
> +				ret = nr_pinned;
>  			else
> -				ret += nr;
> +				ret += nr_pinned;
>  		}
>  	}
>
> Rather lightly tested... I've compile-tested with CONFIG_CMA and
> !CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second set
> of eyes on whether I've added any off-by-one errors, or worse. :)
>
> thanks,
> --
> John Hubbard
> NVIDIA
On Fri, May 31, 2019 at 5:46 AM Ira Weiny <ira.weiny@intel.com> wrote:
>
> On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
> > As for FOLL_LONGTERM, it is checked in the slow path
> > __gup_longterm_unlocked(). But it is not checked in the fast path, which
> > means a CMA page can leak to a longterm pin through this crack.
> >
> > Place a check in the fast path.
> >
> > Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> > Cc: Ira Weiny <ira.weiny@intel.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Mike Rapoport <rppt@linux.ibm.com>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: Matthew Wilcox <willy@infradead.org>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
> > Cc: Keith Busch <keith.busch@intel.com>
> > Cc: linux-kernel@vger.kernel.org
> > ---
> >  mm/gup.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index f173fcb..00feab3 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -2235,6 +2235,18 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
> >  		local_irq_enable();
> >  		ret = nr;
> >  	}
> > +#if defined(CONFIG_CMA)
> > +	if (unlikely(gup_flags & FOLL_LONGTERM)) {
> > +		int i, j;
> > +
> > +		for (i = 0; i < nr; i++)
> > +			if (is_migrate_cma_page(pages[i])) {
> > +				for (j = i; j < nr; j++)
> > +					put_page(pages[j]);
>
> Should be put_user_page() now. For now that just calls put_page() but it
> is slated to change soon.
>
I was not aware of these changes, and I get your point now.

> I also wonder if this would be more efficient as a check done while we
> are walking the page tables, so we could bail early.
>
> Perhaps the code complexity is not worth it?
>
Yes. That would spread such logic across both the huge page and the
normal page paths.

> > +				nr = i;
>
> Why not just break from the loop here?
>
A mistake.

Thanks,
Pingfan

> Or better yet just use 'i' in the inner loop...
>
> Ira
>
> > +			}
> > +	}
> > +#endif
> >
> >  	if (nr < nr_pages) {
> >  		/* Try to get the remaining pages with get_user_pages */
> > --
> > 2.7.5
> >
On Fri, May 31, 2019 at 7:52 AM Ira Weiny <ira.weiny@intel.com> wrote:
>
> On Thu, May 30, 2019 at 04:21:19PM -0700, John Hubbard wrote:
> > On 5/30/19 2:47 PM, Ira Weiny wrote:
> > > On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
> > [...]
> > >> +				for (j = i; j < nr; j++)
> > >> +					put_page(pages[j]);
> > >
> > > Should be put_user_page() now. For now that just calls put_page() but
> > > it is slated to change soon.
> > >
> > > I also wonder if this would be more efficient as a check done while
> > > we are walking the page tables, so we could bail early.
> > >
> > > Perhaps the code complexity is not worth it?
> >
> > Good point, it might be worth it: now we've got two loops to run after
> > the interrupts-off page walk, and it's starting to look like a
> > potential performance concern.
>
> FWIW I don't see this being a huge issue at the moment. Perhaps those
> more familiar with CMA can weigh in here. How was this issue found? If
> it was found by running some test, perhaps that indicates a performance
> preference?
>
I found the bug by reading the code, and I do not see any performance
concern. Bailing out early would contribute little to performance, since
we fall back to the slow path immediately anyway.

Regards,
Pingfan

[...]
On Fri, May 31, 2019 at 7:21 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 5/30/19 2:47 PM, Ira Weiny wrote:
> > On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
> [...]
> >> +				for (j = i; j < nr; j++)
> >> +					put_page(pages[j]);
> >
> > Should be put_user_page() now. For now that just calls put_page() but
> > it is slated to change soon.
> >
> > I also wonder if this would be more efficient as a check done while we
> > are walking the page tables, so we could bail early.
> >
> > Perhaps the code complexity is not worth it?
>
> Good point, it might be worth it: now we've got two loops to run after
> the interrupts-off page walk, and it's starting to look like a potential
> performance concern.
>
> >
> >> +				nr = i;
> >
> > Why not just break from the loop here?
> >
> > Or better yet just use 'i' in the inner loop...
> >
>
> ...but if you do end up putting in the after-the-fact check, then we can
> go one or two steps further in cleaning it up, by:
>
> * hiding the visible #ifdef that was slicing up gup_fast,
>
> * using put_user_pages() instead of either put_page() or put_user_page(),
>   thus getting rid of j entirely, and
>
> * renaming an ancient minor confusion: nr --> nr_pinned.
>
> We could have this, which looks cleaner and still does the same thing:
>
> diff --git a/mm/gup.c b/mm/gup.c
> index f173fcbaf1b2..0c1f36be1863 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1486,6 +1486,33 @@ static __always_inline long __gup_longterm_locked(struct task_struct *tsk,
>  }
>  #endif /* CONFIG_FS_DAX || CONFIG_CMA */
>
> +#ifdef CONFIG_CMA
> +/*
> + * Returns the number of pages that were *not* rejected. This makes it
> + * exactly compatible with its callers.
> + */
> +static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
> +			    struct page **pages)
> +{
> +	int i = 0;
> +
> +	if (unlikely(gup_flags & FOLL_LONGTERM)) {
> +		for (i = 0; i < nr_pinned; i++)
> +			if (is_migrate_cma_page(pages[i])) {
> +				put_user_pages(&pages[i], nr_pinned - i);
> +				break;
> +			}
> +	}
> +	return i;
> +}
> +#else
> +static int reject_cma_pages(int nr_pinned, unsigned gup_flags,
> +			    struct page **pages)
> +{
> +	return nr_pinned;
> +}
> +#endif
> +
>  /*
>   * This is the same as get_user_pages_remote(), just with a
>   * less-flexible calling convention where we assume that the task
> @@ -2216,7 +2243,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  			unsigned int gup_flags, struct page **pages)
>  {
>  	unsigned long addr, len, end;
> -	int nr = 0, ret = 0;
> +	int nr_pinned = 0, ret = 0;
>
>  	start &= PAGE_MASK;
>  	addr = start;
> @@ -2231,25 +2258,27 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>
>  	if (gup_fast_permitted(start, nr_pages)) {
>  		local_irq_disable();
> -		gup_pgd_range(addr, end, gup_flags, pages, &nr);
> +		gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
>  		local_irq_enable();
> -		ret = nr;
> +		ret = nr_pinned;
>  	}
>
> -	if (nr < nr_pages) {
> +	nr_pinned = reject_cma_pages(nr_pinned, gup_flags, pages);
> +
> +	if (nr_pinned < nr_pages) {
>  		/* Try to get the remaining pages with get_user_pages */
> -		start += nr << PAGE_SHIFT;
> -		pages += nr;
> +		start += nr_pinned << PAGE_SHIFT;
> +		pages += nr_pinned;
>
> -		ret = __gup_longterm_unlocked(start, nr_pages - nr,
> +		ret = __gup_longterm_unlocked(start, nr_pages - nr_pinned,
>  					      gup_flags, pages);
>
>  		/* Have to be a bit careful with return values */
> -		if (nr > 0) {
> +		if (nr_pinned > 0) {
>  			if (ret < 0)
> -				ret = nr;
> +				ret = nr_pinned;
>  			else
> -				ret += nr;
> +				ret += nr_pinned;
>  		}
>  	}
>
> Rather lightly tested... I've compile-tested with CONFIG_CMA and
> !CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second set
> of eyes on whether I've added any off-by-one errors, or worse. :)
>
Do you mind if I send a v2 based on your patch above? Anyway, it is a
simple bug fix.

Thanks,
Pingfan
On 5/31/19 4:05 AM, Pingfan Liu wrote:
> On Fri, May 31, 2019 at 7:21 AM John Hubbard <jhubbard@nvidia.com> wrote:
>> On 5/30/19 2:47 PM, Ira Weiny wrote:
>>> On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
>> [...]
>> Rather lightly tested... I've compile-tested with CONFIG_CMA and
>> !CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second set
>> of eyes on whether I've added any off-by-one errors, or worse. :)
>>
> Do you mind if I send a v2 based on your patch above? Anyway, it is a
> simple bug fix.
>

Sure, that's why I sent it. :)  Note that Ira also recommended splitting
the "nr --> nr_pinned" renaming into a separate patch.

thanks,
On Fri, May 31, 2019 at 07:05:27PM +0800, Pingfan Liu wrote:
> On Fri, May 31, 2019 at 7:21 AM John Hubbard <jhubbard@nvidia.com> wrote:
> >
> > Rather lightly tested... I've compile-tested with CONFIG_CMA and
> > !CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second
> > set of eyes on whether I've added any off-by-one errors, or worse. :)
> >
> Do you mind if I send a v2 based on your patch above? Anyway, it is a
> simple bug fix.

FWIW, please split the nr_pinned change out into a separate patch.

Thanks,
Ira
On Sat, Jun 1, 2019 at 1:12 AM Ira Weiny <ira.weiny@intel.com> wrote:
>
> On Fri, May 31, 2019 at 07:05:27PM +0800, Pingfan Liu wrote:
> > On Fri, May 31, 2019 at 7:21 AM John Hubbard <jhubbard@nvidia.com> wrote:
> > >
> > > Rather lightly tested... I've compile-tested with CONFIG_CMA and
> > > !CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second
> > > set of eyes on whether I've added any off-by-one errors, or worse. :)
> > >
> > Do you mind if I send a v2 based on your patch above? Anyway, it is a
> > simple bug fix.
>
> FWIW, please split the nr_pinned change out into a separate patch.
>
OK.

Thanks,
Pingfan
On Sat, Jun 1, 2019 at 1:06 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> On 5/31/19 4:05 AM, Pingfan Liu wrote:
> > On Fri, May 31, 2019 at 7:21 AM John Hubbard <jhubbard@nvidia.com> wrote:
> >> On 5/30/19 2:47 PM, Ira Weiny wrote:
> >>> On Thu, May 30, 2019 at 06:54:04AM +0800, Pingfan Liu wrote:
> >> [...]
> >> Rather lightly tested... I've compile-tested with CONFIG_CMA and
> >> !CONFIG_CMA, and boot tested with CONFIG_CMA, but could use a second
> >> set of eyes on whether I've added any off-by-one errors, or worse. :)
> >>
> > Do you mind if I send a v2 based on your patch above? Anyway, it is a
> > simple bug fix.
> >
> Sure, that's why I sent it. :)  Note that Ira also recommended splitting
> the "nr --> nr_pinned" renaming into a separate patch.
>
Thanks for your kind help. I will split the nr_pinned rename out into a
separate patch.

Regards,
Pingfan
diff --git a/mm/gup.c b/mm/gup.c
index f173fcb..00feab3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2235,6 +2235,18 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 		local_irq_enable();
 		ret = nr;
 	}
+#if defined(CONFIG_CMA)
+	if (unlikely(gup_flags & FOLL_LONGTERM)) {
+		int i, j;
+
+		for (i = 0; i < nr; i++)
+			if (is_migrate_cma_page(pages[i])) {
+				for (j = i; j < nr; j++)
+					put_page(pages[j]);
+				nr = i;
+			}
+	}
+#endif
 
 	if (nr < nr_pages) {
 		/* Try to get the remaining pages with get_user_pages */
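For readers tracing the rejection loop above: after `nr = i` runs, the outer loop's `i < nr` test fails on its next evaluation, so the construct terminates without putting any page twice; the explicit `break` suggested in review would simply make that termination obvious. A standalone model of the control flow (plain C, with array indices standing in for struct page pointers and printf standing in for put_page):

#include <stdio.h>

int main(void)
{
	int is_cma[] = { 0, 0, 1, 0, 1 };	/* page 2 is the first CMA page */
	int nr = 5, i, j;

	for (i = 0; i < nr; i++)
		if (is_cma[i]) {
			for (j = i; j < nr; j++)
				printf("put page %d\n", j);	/* 2, 3, 4: each once */
			nr = i;	/* next outer test (i+1 < nr) fails, so the loop ends */
		}

	printf("pages kept pinned: %d\n", nr);	/* prints 2 */
	return 0;
}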
As for FOLL_LONGTERM, it is checked in the slow path
__gup_longterm_unlocked(). But it is not checked in the fast path, which
means a CMA page can leak to a longterm pin through this crack.

Place a check in the fast path.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: linux-kernel@vger.kernel.org
---
 mm/gup.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
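For context on why the crack matters: FOLL_LONGTERM is passed by callers whose pins outlive the syscall (RDMA-style buffer registrations, for example), and such pins must not land on CMA pages, since a permanently pinned page would block future contiguous allocations from that region. A hedged sketch of such a caller, using the get_user_pages_fast() signature and put_user_pages() helper visible in this thread (the helper name and error handling are hypothetical):

#include <linux/mm.h>

/* Hypothetical driver helper: pin a user buffer for long-lived DMA. */
static int example_longterm_pin(unsigned long uaddr, int nr_pages,
				struct page **pages)
{
	int pinned;

	/*
	 * FOLL_LONGTERM: the pin outlives the syscall, so the pages must
	 * not sit in CMA. Before this fix, only the slow path enforced
	 * that; the fast path could hand back CMA pages.
	 */
	pinned = get_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;
	if (pinned != nr_pages) {
		put_user_pages(pages, pinned);	/* drop the partial pin */
		return -EFAULT;
	}
	return 0;
}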