[mm-unstable,v1] mm/hugetlb_vmemmap: fix memory loads ordering

Message ID 20250107043505.351925-1-yuzhao@google.com (mailing list archive)
State New
Series [mm-unstable,v1] mm/hugetlb_vmemmap: fix memory loads ordering

Commit Message

Yu Zhao Jan. 7, 2025, 4:35 a.m. UTC
Using x86_64 as an example, for a 32KB struct page[] area describing a
2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
   by PTE 0, and at the same time change the permission from r/w to
   r/o;
3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
   to 4KB.
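
(For reference, assuming 4KB base pages and a 64-byte struct page:
2MB / 4KB = 512 struct pages, and 512 * 64B = 32KB of vmemmap, i.e.
8 base pages behind the 8 PTEs; keeping only the page behind PTE 0
leaves 4KB.)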

However, the following race can happen due to improper ordering of
memory loads:
  CPU 1 (HVO)                     CPU 2 (speculative PFN walker)

  page_ref_freeze()
  synchronize_rcu()
                                  rcu_read_lock()
                                  page_is_fake_head() is false
  vmemmap_remap_pte()
  XXX: struct page[] becomes r/o

  page_ref_unfreeze()
                                  page_ref_count() is not zero

                                  atomic_add_unless(&page->_refcount)
                                  XXX: try to modify r/o struct page[]

Specifically, page_is_fake_head() must be ordered after
page_ref_count() on CPU 2 so that it can only return true for this
case, to avoid the later attempt to modify r/o struct page[].

This patch adds the missing memory barrier and performs the tests on
page_is_fake_head() and page_ref_count() in the proper order.

Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
Reported-by: Will Deacon <will@kernel.org>
Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/page-flags.h | 2 +-
 include/linux/page_ref.h   | 8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

Comments

Muchun Song Jan. 7, 2025, 8:41 a.m. UTC | #1
> On Jan 7, 2025, at 12:35, Yu Zhao <yuzhao@google.com> wrote:
> 
> Using x86_64 as an example, for a 32KB struct page[] area describing a
> 2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
> 1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
> 2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
>   by PTE 0, and at the same time change the permission from r/w to
>   r/o;
> 3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
>   to 4KB.
> 
> However, the following race can happen due to improper ordering of
> memory loads:
>  CPU 1 (HVO)                     CPU 2 (speculative PFN walker)
> 
>  page_ref_freeze()
>  synchronize_rcu()
>                                  rcu_read_lock()
>                                  page_is_fake_head() is false
>  vmemmap_remap_pte()
>  XXX: struct page[] becomes r/o
> 
>  page_ref_unfreeze()
>                                  page_ref_count() is not zero
> 
>                                  atomic_add_unless(&page->_refcount)
>                                  XXX: try to modify r/o struct page[]
> 
> Specifically, page_is_fake_head() must be ordered after
> page_ref_count() on CPU 2 so that it can only return true for this
> case, to avoid the later attempt to modify r/o struct page[].
> 
> This patch adds the missing memory barrier and performs the tests on
> page_is_fake_head() and page_ref_count() in the proper order.
> 
> Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
> Reported-by: Will Deacon <will@kernel.org>
> Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
> include/linux/page-flags.h | 2 +-
> include/linux/page_ref.h   | 8 ++++++--
> 2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 691506bdf2c5..6b8ecf86f1b6 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
> * cold cacheline in some cases.
> */
> 	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> -	    test_bit(PG_head, &page->flags)) {
> +	    test_bit_acquire(PG_head, &page->flags)) {
> 		/*
> 		 * We can safely access the field of the @page[1] with PG_head
> 		 * because the @page is a compound page composed with at least
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 8c236c651d1d..5becea98bd79 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -233,8 +233,12 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
> 	bool ret = false;
> 
> 	rcu_read_lock();
> - 	/* avoid writing to the vmemmap area being remapped */
> - 	if (!page_is_fake_head(page) && page_ref_count(page) != u)
> + 	/*
> +	 * To avoid writing to the vmemmap area remapped into r/o in parallel,
> +	 * the page_ref_count() test must precede the page_is_fake_head() test
> +	 * so that test_bit_acquire() in the latter is ordered after the former.
> +	 */
> + 	if (page_ref_count(page) != u && !page_is_fake_head(page))

IIUC, we need to insert a memory barrier between page_ref_count() and page_is_fake_head().
Specifically, to order the accesses to page->_refcount and page->flags. So we should insert a
read memory barrier here, right? But I saw you added an acquire barrier in page_fixed_fake_head(),
I don't understand why an acquire barrier could stop the CPU from reordering the accesses
between them. What am I missing here?

Muchun,
Thanks.

> 		ret = atomic_add_unless(&page->_refcount, nr, u);
> 	rcu_read_unlock();
> 
> -- 
> 2.47.1.613.gc27f4b7a9f-goog
>
David Hildenbrand Jan. 7, 2025, 8:49 a.m. UTC | #2
On 07.01.25 05:35, Yu Zhao wrote:
> Using x86_64 as an example, for a 32KB struct page[] area describing a
> 2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
> 1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
> 2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
>     by PTE 0, and at the same time change the permission from r/w to
>     r/o;
> 3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
>     to 4KB.
> 
> However, the following race can happen due to improper ordering of
> memory loads:
>    CPU 1 (HVO)                     CPU 2 (speculative PFN walker)
> 
>    page_ref_freeze()
>    synchronize_rcu()
>                                    rcu_read_lock()
>                                    page_is_fake_head() is false
>    vmemmap_remap_pte()
>    XXX: struct page[] becomes r/o
> 
>    page_ref_unfreeze()
>                                    page_ref_count() is not zero
> 
>                                    atomic_add_unless(&page->_refcount)
>                                    XXX: try to modify r/o struct page[]
> 
> Specifically, page_is_fake_head() must be ordered after
> page_ref_count() on CPU 2 so that it can only return true for this
> case, to avoid the later attempt to modify r/o struct page[].

I *think* this is correct.

> 
> This patch adds the missing memory barrier and performs the tests on
> page_is_fake_head() and page_ref_count() in the proper order.
> 
> Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
> Reported-by: Will Deacon <will@kernel.org>
> Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>   include/linux/page-flags.h | 2 +-
>   include/linux/page_ref.h   | 8 ++++++--
>   2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 691506bdf2c5..6b8ecf86f1b6 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
>   	 * cold cacheline in some cases.
>   	 */
>   	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> -	    test_bit(PG_head, &page->flags)) {
> +	    test_bit_acquire(PG_head, &page->flags)) {

This change will affect all page_fixed_fake_head() users, like ordinary 
PageTail even on !hugetlb.

I assume you want an explicit memory barrier in the single problematic 
caller instead.
Matthew Wilcox Jan. 7, 2025, 4:35 p.m. UTC | #3
On Tue, Jan 07, 2025 at 09:49:18AM +0100, David Hildenbrand wrote:
> > +++ b/include/linux/page-flags.h
> > @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
> >   	 * cold cacheline in some cases.
> >   	 */
> >   	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> > -	    test_bit(PG_head, &page->flags)) {
> > +	    test_bit_acquire(PG_head, &page->flags)) {
> 
> This change will affect all page_fixed_fake_head() users, like ordinary
> PageTail even on !hugetlb.

I've been looking at the callers of PageTail() because it's going to
be a bit of a weird thing to be checking in the separate-page-and-folio
world.  Obviously we can implement it, but there's a bit of a "But why
would you want to ask that question" question.

Most current occurrences of PageTail() are in assertions of one form or
another.  Fair enough, not performance critical.

make_device_exclusive_range() is a little weird; looks like it's trying
to make sure that each folio is only made exclusive once, and ignore any
partial folios which overlap the start of the area.

damon_get_folio() wants to fail for tail pages.  Fair enough.

split_huge_pages_all() is debug code.

page_idle_get_folio() is like damon.

That's it.  We don't seem to have any PageTail() callers in critical
code any more.
David Hildenbrand Jan. 7, 2025, 5:02 p.m. UTC | #4
On 07.01.25 17:35, Matthew Wilcox wrote:
> On Tue, Jan 07, 2025 at 09:49:18AM +0100, David Hildenbrand wrote:
>>> +++ b/include/linux/page-flags.h
>>> @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
>>>    	 * cold cacheline in some cases.
>>>    	 */
>>>    	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
>>> -	    test_bit(PG_head, &page->flags)) {
>>> +	    test_bit_acquire(PG_head, &page->flags)) {
>>
>> This change will affect all page_fixed_fake_head() users, like ordinary
>> PageTail even on !hugetlb.
> 
> I've been looking at the callers of PageTail() because it's going to
> be a bit of a weird thing to be checking in the separate-page-and-folio
> world.  Obviously we can implement it, but there's a bit of a "But why
> would you want to ask that question" question.
> 
> Most current occurrences of PageTail() are in assertions of one form or
> another.  Fair enough, not performance critical.
> 
> make_device_exclusive_range() is a little weird; looks like it's trying
> to make sure that each folio is only made exclusive once, and ignore any
> partial folios which overlap the start of the area.

I could have sworn we only support small folios here, but looks like
we do support large folios.

IIUC, there is no way to reliably identify "this folio is device exclusive";
the only hint is "no mappings". The following might do:

diff --git a/mm/rmap.c b/mm/rmap.c
index c6c4d4ea29a7e..1424d0a351a86 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2543,7 +2543,13 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
  
         for (i = 0; i < npages; i++, start += PAGE_SIZE) {
                 struct folio *folio = page_folio(pages[i]);
-               if (PageTail(pages[i]) || !folio_trylock(folio)) {
+
+               /*
+                * If there are no mappings, either the folio is actually
+                * unmapped or only device-exclusive swap entries point at
+                * this folio.
+                */
+               if (!folio_mapped(folio) || !folio_trylock(folio)) {
                         folio_put(folio);
                         pages[i] = NULL;
                         continue;


> 
> damon_get_folio() wants to fail for tail pages.  Fair enough.
> 
> split_huge_pages_all() is debug code.
> 
> page_idle_get_folio() is like damon.
> 
> That's it.  We don't seem to have any PageTail() callers in critical
> code any more.

Ah, you're right. Interestingly, PageTransTail() is even unused?
Yu Zhao Jan. 8, 2025, 7:32 a.m. UTC | #5
On Tue, Jan 7, 2025 at 1:41 AM Muchun Song <muchun.song@linux.dev> wrote:
>
>
>
> > On Jan 7, 2025, at 12:35, Yu Zhao <yuzhao@google.com> wrote:
> >
> > Using x86_64 as an example, for a 32KB struct page[] area describing a
> > 2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
> > 1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
> > 2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
> >   by PTE 0, and at the same time change the permission from r/w to
> >   r/o;
> > 3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
> >   to 4KB.
> >
> > However, the following race can happen due to improper ordering of
> > memory loads:
> >  CPU 1 (HVO)                     CPU 2 (speculative PFN walker)
> >
> >  page_ref_freeze()
> >  synchronize_rcu()
> >                                  rcu_read_lock()
> >                                  page_is_fake_head() is false
> >  vmemmap_remap_pte()
> >  XXX: struct page[] becomes r/o
> >
> >  page_ref_unfreeze()
> >                                  page_ref_count() is not zero
> >
> >                                  atomic_add_unless(&page->_refcount)
> >                                  XXX: try to modify r/o struct page[]
> >
> > Specifically, page_is_fake_head() must be ordered after
> > page_ref_count() on CPU 2 so that it can only return true for this
> > case, to avoid the later attempt to modify r/o struct page[].
> >
> > This patch adds the missing memory barrier and performs the tests on
> > page_is_fake_head() and page_ref_count() in the proper order.
> >
> > Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
> > Reported-by: Will Deacon <will@kernel.org>
> > Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> > include/linux/page-flags.h | 2 +-
> > include/linux/page_ref.h   | 8 ++++++--
> > 2 files changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index 691506bdf2c5..6b8ecf86f1b6 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
> > * cold cacheline in some cases.
> > */
> >       if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> > -         test_bit(PG_head, &page->flags)) {
> > +         test_bit_acquire(PG_head, &page->flags)) {
> >               /*
> >                * We can safely access the field of the @page[1] with PG_head
> >                * because the @page is a compound page composed with at least
> > diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> > index 8c236c651d1d..5becea98bd79 100644
> > --- a/include/linux/page_ref.h
> > +++ b/include/linux/page_ref.h
> > @@ -233,8 +233,12 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
> >       bool ret = false;
> >
> >       rcu_read_lock();
> > -     /* avoid writing to the vmemmap area being remapped */
> > -     if (!page_is_fake_head(page) && page_ref_count(page) != u)
> > +     /*
> > +      * To avoid writing to the vmemmap area remapped into r/o in parallel,
> > +      * the page_ref_count() test must precede the page_is_fake_head() test
> > +      * so that test_bit_acquire() in the latter is ordered after the former.
> > +      */
> > +     if (page_ref_count(page) != u && !page_is_fake_head(page))
>
> IIUC, we need to insert a memory barrier between page_ref_count() and page_is_fake_head().
> Specifically, to order the accesses to page->_refcount and page->flags. So we should insert a
> read memory barrier here, right?

Correct, i.e., page_ref_count(page) != u; smp_rmb(); !page_is_fake_head(page).

> But I saw you added an acquire barrier in page_fixed_fake_head(),
> I don't understand why an acquire barrier could stop the CPU from reordering the accesses
> between them. What am I missing here?

A load-acquire on page->_refcount would be equivalent to the smp_rmb()
above. But apparently I used it on page->flags because I misremembered
whether a load-acquire inserts the equivalent smp_rmb() before or
after (it's after, not before). Will fix this in v2.
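
For illustration, here is a minimal sketch of the smp_rmb() ordering
described above, assuming page_ref_add_unless() as the context (a
sketch of what the fix could look like, not the posted v2):

        rcu_read_lock();
        if (page_ref_count(page) != u) {
                /* order the page->flags load in page_is_fake_head() after _refcount */
                smp_rmb();
                if (!page_is_fake_head(page))
                        ret = atomic_add_unless(&page->_refcount, nr, u);
        }
        rcu_read_unlock();

Equivalently, the smp_rmb() could be folded into a load-acquire on
page->_refcount, e.g. atomic_read_acquire(&page->_refcount), since an
acquire orders all later loads, including the one of page->flags, after
it.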
Yu Zhao Jan. 8, 2025, 7:34 a.m. UTC | #6
On Tue, Jan 7, 2025 at 1:49 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 07.01.25 05:35, Yu Zhao wrote:
> > Using x86_64 as an example, for a 32KB struct page[] area describing a
> > 2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
> > 1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
> > 2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
> >     by PTE 0, and at the same time change the permission from r/w to
> >     r/o;
> > 3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
> >     to 4KB.
> >
> > However, the following race can happen due to improper ordering of
> > memory loads:
> >    CPU 1 (HVO)                     CPU 2 (speculative PFN walker)
> >
> >    page_ref_freeze()
> >    synchronize_rcu()
> >                                    rcu_read_lock()
> >                                    page_is_fake_head() is false
> >    vmemmap_remap_pte()
> >    XXX: struct page[] becomes r/o
> >
> >    page_ref_unfreeze()
> >                                    page_ref_count() is not zero
> >
> >                                    atomic_add_unless(&page->_refcount)
> >                                    XXX: try to modify r/o struct page[]
> >
> > Specifically, page_is_fake_head() must be ordered after
> > page_ref_count() on CPU 2 so that it can only return true for this
> > case, to avoid the later attempt to modify r/o struct page[].
>
> I *think* this is correct.
>
> >
> > This patch adds the missing memory barrier and performs the tests on
> > page_is_fake_head() and page_ref_count() in the proper order.
> >
> > Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
> > Reported-by: Will Deacon <will@kernel.org>
> > Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >   include/linux/page-flags.h | 2 +-
> >   include/linux/page_ref.h   | 8 ++++++--
> >   2 files changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index 691506bdf2c5..6b8ecf86f1b6 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
> >        * cold cacheline in some cases.
> >        */
> >       if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> > -         test_bit(PG_head, &page->flags)) {
> > +         test_bit_acquire(PG_head, &page->flags)) {
>
> This change will affect all page_fixed_fake_head() users, like ordinary
> PageTail even on !hugetlb.
>
> I assume you want an explicit memory barrier in the single problematic
> caller instead.

Let me make it HVO specific in v2. It might look cleaner that way.
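
One possible shape of an "HVO specific" fix (a hypothetical sketch; the
helper name and the IS_ENABLED() guard are assumptions, not the posted
v2) would be to keep the plain test_bit() in page_fixed_fake_head() and
confine the ordering to the one caller that races with
vmemmap_remap_pte():

        /* hypothetical helper in include/linux/page_ref.h, not the posted v2 */
        static inline bool page_count_writable(const struct page *page)
        {
                /* without HVO, struct page[] is never remapped r/o */
                if (!IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP))
                        return true;

                /* order the page->flags load after the earlier page->_refcount load */
                smp_rmb();
                return !page_is_fake_head(page);
        }

page_ref_add_unless() would then test "page_ref_count(page) != u &&
page_count_writable(page)", keeping the ordering out of all other
page_fixed_fake_head() users.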

Patch

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 691506bdf2c5..6b8ecf86f1b6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -212,7 +212,7 @@  static __always_inline const struct page *page_fixed_fake_head(const struct page
 	 * cold cacheline in some cases.
 	 */
 	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
-	    test_bit(PG_head, &page->flags)) {
+	    test_bit_acquire(PG_head, &page->flags)) {
 		/*
 		 * We can safely access the field of the @page[1] with PG_head
 		 * because the @page is a compound page composed with at least
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c236c651d1d..5becea98bd79 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -233,8 +233,12 @@  static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 	bool ret = false;
 
 	rcu_read_lock();
-	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page) && page_ref_count(page) != u)
+	/*
+	 * To avoid writing to the vmemmap area remapped into r/o in parallel,
+	 * the page_ref_count() test must precede the page_is_fake_head() test
+	 * so that test_bit_acquire() in the latter is ordered after the former.
+	 */
+	if (page_ref_count(page) != u && !page_is_fake_head(page))
 		ret = atomic_add_unless(&page->_refcount, nr, u);
 	rcu_read_unlock();