diff mbox series

[v2,2/2] mm: fix missing cache flush for all tail pages of THP

Message ID 20220124051752.83281-2-songmuchun@bytedance.com (mailing list archive)
State New
Headers show
Series [v2,1/2] mm: thp: fix wrong cache flush in remove_migration_pmd() | expand

Commit Message

Muchun Song Jan. 24, 2022, 5:17 a.m. UTC
The D-cache maintenance inside move_to_new_page() only considers one page,
so there is still a D-cache maintenance issue for the tail pages of a THP.
Fix this without using flush_dcache_folio(), since it is not backportable.

Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
Changes in v2:
 - Use a for loop instead of the folio variant so the fix is backportable.

 mm/migrate.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Comments

Zi Yan Jan. 24, 2022, 4:07 p.m. UTC | #1
On 24 Jan 2022, at 0:17, Muchun Song wrote:

> The D-cache maintenance inside move_to_new_page() only considers one page,
> so there is still a D-cache maintenance issue for the tail pages of a THP.
> Fix this without using flush_dcache_folio(), since it is not backportable.
>
> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> Changes in v2:
>  - Use a for loop instead of the folio variant so the fix is backportable.
>
>  mm/migrate.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c9296d63878d..c418e8d92b9c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -933,9 +933,12 @@ static int move_to_new_page(struct page *newpage, struct page *page,
>  		if (!PageMappingFlags(page))
>  			page->mapping = NULL;
>
> -		if (likely(!is_zone_device_page(newpage)))
> -			flush_dcache_page(newpage);
> +		if (likely(!is_zone_device_page(newpage))) {
> +			int i, nr = compound_nr(newpage);
>
> +			for (i = 0; i < nr; i++)
> +				flush_dcache_page(newpage + i);
> +		}
>  	}
>  out:
>  	return rc;
> -- 
> 2.11.0

LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi
David Rientjes Jan. 24, 2022, 6:11 p.m. UTC | #2
On Mon, 24 Jan 2022, Muchun Song wrote:

> The D-cache maintenance inside move_to_new_page() only considers one page,
> so there is still a D-cache maintenance issue for the tail pages of a THP.
> Fix this without using flush_dcache_folio(), since it is not backportable.
> 

The mention of being backportable suggests that we should backport this, 
likely to 4.14+.  So should it be marked as stable?

That aside, should there be a follow-up patch that converts to using 
flush_dcache_folio()?
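
(For reference, a minimal sketch of what such a conversion could look like at
this call site, assuming page_folio() and flush_dcache_folio() as found in
current kernels; illustrative only, not a tested patch:)

  if (likely(!is_zone_device_page(newpage)))
          /*
           * The generic flush_dcache_folio() implementation loops over
           * every page of the folio, so the open-coded compound_nr()
           * loop in this patch would go away.
           */
          flush_dcache_folio(page_folio(newpage));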

> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> Changes in v2:
>  - Use a for loop instead of the folio variant so the fix is backportable.
> 
>  mm/migrate.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c9296d63878d..c418e8d92b9c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -933,9 +933,12 @@ static int move_to_new_page(struct page *newpage, struct page *page,
>  		if (!PageMappingFlags(page))
>  			page->mapping = NULL;
>  
> -		if (likely(!is_zone_device_page(newpage)))
> -			flush_dcache_page(newpage);
> +		if (likely(!is_zone_device_page(newpage))) {
> +			int i, nr = compound_nr(newpage);
>  
> +			for (i = 0; i < nr; i++)
> +				flush_dcache_page(newpage + i);
> +		}
>  	}
>  out:
>  	return rc;
> -- 
> 2.11.0
> 
> 
>
Zi Yan Jan. 24, 2022, 7:22 p.m. UTC | #3
On 24 Jan 2022, at 13:11, David Rientjes wrote:

> On Mon, 24 Jan 2022, Muchun Song wrote:
>
>> The D-cache maintenance inside move_to_new_page() only considers one page,
>> so there is still a D-cache maintenance issue for the tail pages of a THP.
>> Fix this without using flush_dcache_folio(), since it is not backportable.
>>
>
> The mention of being backportable suggests that we should backport this,
> likely to 4.14+.  So should it be marked as stable?

Hmm, after more digging, I am not sure if the bug exists. For THP migration,
flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.

To make code more consistent, I guess flush_cache_range() in remove_migration_pmd()
can be removed, since it is superseded by the flush_dcache_page() below.

The Fixes tag can be dropped. Let me know if I missed anything.

>
> That aside, should there be a follow-up patch that converts to using
> flush_dcache_folio()?

Are you suggesting converting just this code or the entire move_to_new_page()
to use folios? The latter might be more desirable, since the code will be
more consistent.


[1] https://lore.kernel.org/all/20190315083502.11849-1-larper@axis.com/T/#u

>
>> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> ---
>> Changes in v2:
>>  - Use a for loop instead of the folio variant so the fix is backportable.
>>
>>  mm/migrate.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index c9296d63878d..c418e8d92b9c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -933,9 +933,12 @@ static int move_to_new_page(struct page *newpage, struct page *page,
>>  		if (!PageMappingFlags(page))
>>  			page->mapping = NULL;
>>
>> -		if (likely(!is_zone_device_page(newpage)))
>> -			flush_dcache_page(newpage);
>> +		if (likely(!is_zone_device_page(newpage))) {
>> +			int i, nr = compound_nr(newpage);
>>
>> +			for (i = 0; i < nr; i++)
>> +				flush_dcache_page(newpage + i);
>> +		}
>>  	}
>>  out:
>>  	return rc;
>> -- 
>> 2.11.0
>>
>>
>>

--
Best Regards,
Yan, Zi
David Rientjes Jan. 25, 2022, 12:41 a.m. UTC | #4
On Mon, 24 Jan 2022, Zi Yan wrote:

> >> The D-cache maintenance inside move_to_new_page() only considers one page,
> >> so there is still a D-cache maintenance issue for the tail pages of a THP.
> >> Fix this without using flush_dcache_folio(), since it is not backportable.
> >>
> >
> > The mention of being backportable suggests that we should backport this,
> > likely to 4.14+.  So should it be marked as stable?
> 
> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
> 
> To make code more consistent, I guess flush_cache_range() in remove_migration_pmd()
> can be removed, since it is superseded by the flush_dcache_page() below.
> 
> The Fixes tag can be dropped. Let me know if I missed anything.
> 

Yeah, I don't think the Fixes needs to exist here because there doesn't 
appear to be an issue today.  We likely need to choose one of the two 
paths from above to handle the flush only in a single place.
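
(For context, a sketch of the two flush sites in question, reproduced from
memory and abridged rather than quoted verbatim from the tree:)

  /* Path 1: mm/huge_memory.c, remove_migration_pmd() */
  flush_cache_range(vma, haddr, haddr + HPAGE_PMD_SIZE);

  /* Path 2: mm/migrate.c, move_to_new_page(), with this patch applied */
  int i, nr = compound_nr(newpage);

  for (i = 0; i < nr; i++)
          flush_dcache_page(newpage + i);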
Muchun Song Jan. 25, 2022, 1:55 a.m. UTC | #5
On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
>
> On 24 Jan 2022, at 13:11, David Rientjes wrote:
>
> > On Mon, 24 Jan 2022, Muchun Song wrote:
> >
> >> The D-cache maintenance inside move_to_new_page() only considers one page,
> >> so there is still a D-cache maintenance issue for the tail pages of a THP.
> >> Fix this without using flush_dcache_folio(), since it is not backportable.
> >>
> >
> > The mention of being backportable suggests that we should backport this,
> > likely to 4.14+.  So should it be marked as stable?
>
> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.

I only mentioned the THP case. After some more thinking, I think HugeTLB
should also be considered, right? HugeTLB is enabled on arm, arm64,
mips, parisc, powerpc, riscv, s390 and sh.

>
> To make code more consistent, I guess flush_cache_range() in remove_migration_pmd()
> can be removed, since it is superseded by the flush_dcache_page() below.

From my point of view, the flush_cache_range() in remove_migration_pmd() is
wrong usage and cannot replace flush_dcache_page(). I think commit
c2cc499c5bcf ("mm compaction: fix of improper cache flush in migration code"),
which addressed a similar situation, can offer more info.
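
(An illustrative ordering of the migration path, not verbatim kernel code,
showing why the destination page needs flush_dcache_page() and why a user-VA
flush_cache_range() cannot substitute for it:)

  /* The copy writes the destination through its *kernel* mapping: */
  copy_highpage(newpage, page);  /* kernel-VA stores may linger in the D-cache */
  flush_dcache_page(newpage);    /* resolve the kernel/user D-cache alias...   */
  /*
   * ...before remove_migration_pte()/pmd() installs the user mapping.
   * flush_cache_range() operates on user virtual addresses, which are not
   * even mapped yet at this point, so it cannot replace the above.
   */
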

>
> The Fixes tag can be dropped. Let me know if I missed anything.
>
> >
> > That aside, should there be a follow-up patch that converts to using
> > flush_dcache_folio()?
>
> Are you suggesting converting just this code or the entire move_to_new_page()
> to use folios? The latter might be more desirable, since the code will be
> more consistent.
>
>
> [1] https://lore.kernel.org/all/20190315083502.11849-1-larper@axis.com/T/#u
>
Zi Yan Jan. 25, 2022, 2:42 a.m. UTC | #6
On 24 Jan 2022, at 20:55, Muchun Song wrote:

> On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
>>
>> On 24 Jan 2022, at 13:11, David Rientjes wrote:
>>
>>> On Mon, 24 Jan 2022, Muchun Song wrote:
>>>
>>>> The D-cache maintenance inside move_to_new_page() only considers one page,
>>>> so there is still a D-cache maintenance issue for the tail pages of a THP.
>>>> Fix this without using flush_dcache_folio(), since it is not backportable.
>>>>
>>>
>>> The mention of being backportable suggests that we should backport this,
>>> likely to 4.14+.  So should it be marked as stable?
>>
>> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
>> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
>> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
>> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
>
> I only mentioned the THP case. After some more thinking, I think HugeTLB
> should also be considered, right? HugeTLB is enabled on arm, arm64,
> mips, parisc, powerpc, riscv, s390 and sh.
>

+Mike for HugeTLB

If HugeTLB page migration also misses flush_dcache_page() on its tail pages,
you will need a different patch for the commit introducing hugetlb page migration.

>>
>> To make code more consistent, I guess flush_cache_range() in remove_migration_pmd()
>> can be removed, since it is superseded by the flush_dcache_page() below.
>
> From my point of view, the flush_cache_range() in remove_migration_pmd() is
> wrong usage and cannot replace flush_dcache_page(). I think commit
> c2cc499c5bcf ("mm compaction: fix of improper cache flush in migration code"),
> which addressed a similar situation, can offer more info.
>

Thanks for the information. That helps. But remove_migration_pmd() did not cause
any issue at the commit pointed to by the Fixes tag, but rather at the commit
that enabled THP migration on IBM and ARM64, whichever came first.

IIUC, there will be different versions of the fix targeting different stable
trees:

1. pre-4.14, where THP migration did not exist: you will need to fix the use
of flush_dcache_page() at that time for HugeTLB page migration, both by
flushing the dcache for all subpages and by moving flush_dcache_page() from
remove_migration_pte() to move_to_new_page(). 4.9 and 4.4 are affected,
but the EOL of 4.4 is next month, so you might skip it.

2. 4.14 up to before device public pages were removed: your current fix will
not apply directly, but the for loop works. flush_cache_range() in
remove_migration_pmd() should be removed, since it is dead code based on
the commit you mentioned. It might not be worth the effort to find when
IBM and ARM64 enabled THP migration.

3. after device public pages were removed: your current fix will apply
cleanly, and the removal of flush_cache_range() in remove_migration_pmd()
should be added.

Let me know if it makes sense.

>>
>> The Fixes tag can be dropped. Let me know if I missed anything.
>>
>>>
>>> That aside, should there be a follow-up patch that converts to using
>>> flush_dcache_folio()?
>>
>> Are you suggesting converting just this code or the entire move_to_new_page()
>> to use folios? The latter might be more desirable, since the code will be
>> more consistent.
>>
>>
>> [1] https://lore.kernel.org/all/20190315083502.11849-1-larper@axis.com/T/#u
>>

--
Best Regards,
Yan, Zi
Muchun Song Jan. 25, 2022, 6:01 a.m. UTC | #7
On Tue, Jan 25, 2022 at 10:42 AM Zi Yan <ziy@nvidia.com> wrote:
>
> On 24 Jan 2022, at 20:55, Muchun Song wrote:
>
> > On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
> >>
> >> On 24 Jan 2022, at 13:11, David Rientjes wrote:
> >>
> >>> On Mon, 24 Jan 2022, Muchun Song wrote:
> >>>
> >>>> The D-cache maintenance inside move_to_new_page() only considers one page,
> >>>> so there is still a D-cache maintenance issue for the tail pages of a THP.
> >>>> Fix this without using flush_dcache_folio(), since it is not backportable.
> >>>>
> >>>
> >>> The mention of being backportable suggests that we should backport this,
> >>> likely to 4.14+.  So should it be marked as stable?
> >>
> >> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
> >> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
> >> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
> >> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
> >
> > I only mentioned the THP case. After some more thinking, I think HugeTLB
> > should also be considered, right? HugeTLB is enabled on arm, arm64,
> > mips, parisc, powerpc, riscv, s390 and sh.
> >
>
> +Mike for HugeTLB
>
> If HugeTLB page migration also misses flush_dcache_page() on its tail pages,
> you will need a different patch for the commit introducing hugetlb page migration.

Agree. I think arm (see the following commit) has handled this issue, while
most other architectures have not.

  commit 0b19f93351dd ("ARM: mm: Add support for flushing HugeTLB pages.")

But I do not have any real devices to test if this issue exists on other archs.
In theory, it exists.

>
> >>
> >> To make code more consistent, I guess flush_cache_range() in remove_migration_pmd()
> >> can be removed, since it is superseded by the flush_dcache_page() below.
> >
> > From my point of view, the flush_cache_range() in remove_migration_pmd() is
> > wrong usage and cannot replace flush_dcache_page(). I think commit
> > c2cc499c5bcf ("mm compaction: fix of improper cache flush in migration code"),
> > which addressed a similar situation, can offer more info.
> >
>
> Thanks for the information. That helps. But remove_migration_pmd() did not cause
> any issue at the commit pointed to by the Fixes tag, but rather at the commit
> that enabled THP migration on IBM and ARM64, whichever came first.
>
> IIUC, there will be different versions of the fix targeting different stable
> trees:
>
> 1. pre-4.14, where THP migration did not exist: you will need to fix the use
> of flush_dcache_page() at that time for HugeTLB page migration, both by
> flushing the dcache for all subpages and by moving flush_dcache_page() from
> remove_migration_pte() to move_to_new_page(). 4.9 and 4.4 are affected,
> but the EOL of 4.4 is next month, so you might skip it.
>
> 2. 4.14 up to before device public pages were removed: your current fix will
> not apply directly, but the for loop works. flush_cache_range() in
> remove_migration_pmd() should be removed, since it is dead code based on
> the commit you mentioned. It might not be worth the effort to find when
> IBM and ARM64 enabled THP migration.
>
> 3. after device public pages were removed: your current fix will apply
> cleanly, and the removal of flush_cache_range() in remove_migration_pmd()
> should be added.
>
> Let me know if it makes sense.

Makes sense.

Thanks.
Mike Kravetz Jan. 25, 2022, 9:24 p.m. UTC | #8
On 1/24/22 22:01, Muchun Song wrote:
> On Tue, Jan 25, 2022 at 10:42 AM Zi Yan <ziy@nvidia.com> wrote:
>>
>> On 24 Jan 2022, at 20:55, Muchun Song wrote:
>>
>>> On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
>>>>
>>>> On 24 Jan 2022, at 13:11, David Rientjes wrote:
>>>>
>>>>> On Mon, 24 Jan 2022, Muchun Song wrote:
>>>>>
>>>>>> The D-cache maintenance inside move_to_new_page() only considers one page,
>>>>>> so there is still a D-cache maintenance issue for the tail pages of a THP.
>>>>>> Fix this without using flush_dcache_folio(), since it is not backportable.
>>>>>>
>>>>>
>>>>> The mention of being backportable suggests that we should backport this,
>>>>> likely to 4.14+.  So should it be marked as stable?
>>>>
>>>> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
>>>> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
>>>> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
>>>> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
>>>
>>> I only mentioned the THP case. After some more thinking, I think HugeTLB
>>> should also be considered, right? HugeTLB is enabled on arm, arm64,
>>> mips, parisc, powerpc, riscv, s390 and sh.
>>>
>>
>> +Mike for HugeTLB
>>
>> If HugeTLB page migration also misses flush_dcache_page() on its tail pages,
>> you will need a different patch for the commit introducing hugetlb page migration.
> 
> Agree. I think arm (see the following commit) has handled this issue, while
> most other architectures have not.
> 
>   commit 0b19f93351dd ("ARM: mm: Add support for flushing HugeTLB pages.")
> 
> But I do not have any real devices to test if this issue exists on other archs.
> In theory, it exists.
> 

Thanks for adding me to the discussion.

I agree that this issue exists at least in theory for hugetlb pages as well.
This made me look at other places with similar code for hugetlb, i.e.
allocating a new page, copying data to the new page and then establishing a
mapping (pte) to the new page.

- hugetlb_cow calls copy_user_huge_page() which ends up calling
  copy_user_highpage that includes dcache flushing of the target for some
  architectures, but not all.
- userfaultfd calls copy_huge_page_from_user which does not appear to do
  any dcache flushing for the target page.

Do you think these code paths have the same potential issue?
Muchun Song Jan. 26, 2022, 3:29 a.m. UTC | #9
On Wed, Jan 26, 2022 at 5:24 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 1/24/22 22:01, Muchun Song wrote:
> > On Tue, Jan 25, 2022 at 10:42 AM Zi Yan <ziy@nvidia.com> wrote:
> >>
> >> On 24 Jan 2022, at 20:55, Muchun Song wrote:
> >>
> >>> On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
> >>>>
> >>>> On 24 Jan 2022, at 13:11, David Rientjes wrote:
> >>>>
> >>>>> On Mon, 24 Jan 2022, Muchun Song wrote:
> >>>>>
> >>>>>> The D-cache maintenance inside move_to_new_page() only considers one page,
> >>>>>> so there is still a D-cache maintenance issue for the tail pages of a THP.
> >>>>>> Fix this without using flush_dcache_folio(), since it is not backportable.
> >>>>>>
> >>>>>
> >>>>> The mention of being backportable suggests that we should backport this,
> >>>>> likely to 4.14+.  So should it be marked as stable?
> >>>>
> >>>> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
> >>>> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
> >>>> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
> >>>> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
> >>>
> >>> I only mentioned the THP case. After some more thinking, I think HugeTLB
> >>> should also be considered, right? HugeTLB is enabled on arm, arm64,
> >>> mips, parisc, powerpc, riscv, s390 and sh.
> >>>
> >>
> >> +Mike for HugeTLB
> >>
> >> If HugeTLB page migration also misses flush_dcache_page() on its tail pages,
> >> you will need a different patch for the commit introducing hugetlb page migration.
> >
> > Agree. I think arm (see the following commit) has handled this issue, while
> > most other architectures have not.
> >
> >   commit 0b19f93351dd ("ARM: mm: Add support for flushing HugeTLB pages.")
> >
> > But I do not have any real devices to test if this issue exists on other archs.
> > In theory, it exists.
> >
>
> Thanks for adding me to the discussion.
>
> I agree that this issue exists at least in theory for hugetlb pages as well.
> This made me look at other places with similar code for hugetlb, i.e.
> allocating a new page, copying data to the new page and then establishing a
> mapping (pte) to the new page.

Hi Mike,

Thanks for looking at this.

>
> - hugetlb_cow calls copy_user_huge_page() which ends up calling
>   copy_user_highpage that includes dcache flushing of the target for some
>   architectures, but not all.

copy_user_page() inside copy_user_highpage() already takes care of
the cache maintenance on different architectures, which is documented
in Documentation/core-api/cachetlb.rst. So there is no problem in this
case.
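
(For reference, the generic fallback is roughly the following, from memory and
abridged; architectures override it by defining __HAVE_ARCH_COPY_USER_HIGHPAGE:)

  static inline void copy_user_highpage(struct page *to, struct page *from,
                                        unsigned long vaddr, struct vm_area_struct *vma)
  {
          char *vfrom = kmap_atomic(from);
          char *vto = kmap_atomic(to);

          /* copy_user_page() is where per-arch D-cache maintenance hooks in */
          copy_user_page(vto, vfrom, vaddr, to);
          kunmap_atomic(vto);
          kunmap_atomic(vfrom);
  }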

> - userfaultfd calls copy_huge_page_from_user which does not appear to do
>   any dcache flushing for the target page.

Right. The new page should be flushed before setting up the mapping
to user space.

> Do you think these code paths have the same potential issue?

The latter does have the issue; the former does not. The fixes may
look like the following:

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a1baa198519a..828240aee3f9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5819,6 +5819,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
                        goto out;
                }
                folio_copy(page_folio(page), page_folio(*pagep));
+               flush_dcache_folio(page_folio(page));
                put_page(*pagep);
                *pagep = NULL;
        }
diff --git a/mm/memory.c b/mm/memory.c
index e8ce066be5f2..ff6f48cdcc48 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5400,6 +5400,7 @@ long copy_huge_page_from_user(struct page *dst_page,
                        kunmap(subpage);
                else
                        kunmap_atomic(page_kaddr);
+               flush_dcache_page(subpage);

                ret_val -= (PAGE_SIZE - rc);
                if (rc)

Thanks.
Mike Kravetz Jan. 26, 2022, 11:26 p.m. UTC | #10
On 1/25/22 19:29, Muchun Song wrote:
> On Wed, Jan 26, 2022 at 5:24 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>>
>> On 1/24/22 22:01, Muchun Song wrote:
>>> On Tue, Jan 25, 2022 at 10:42 AM Zi Yan <ziy@nvidia.com> wrote:
>>>>
>>>> On 24 Jan 2022, at 20:55, Muchun Song wrote:
>>>>
>>>>> On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
>>>>>>
>>>>>> On 24 Jan 2022, at 13:11, David Rientjes wrote:
>>>>>>
>>>>>>> On Mon, 24 Jan 2022, Muchun Song wrote:
>>>>>>>
>>>>>>>> The D-cache maintenance inside move_to_new_page() only considers one page,
>>>>>>>> so there is still a D-cache maintenance issue for the tail pages of a THP.
>>>>>>>> Fix this without using flush_dcache_folio(), since it is not backportable.
>>>>>>>>
>>>>>>>
>>>>>>> The mention of being backportable suggests that we should backport this,
>>>>>>> likely to 4.14+.  So should it be marked as stable?
>>>>>>
>>>>>> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
>>>>>> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
>>>>>> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
>>>>>> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
>>>>>
>>>>> I only mentioned the THP case. After some more thinking, I think HugeTLB
>>>>> should also be considered, right? HugeTLB is enabled on arm, arm64,
>>>>> mips, parisc, powerpc, riscv, s390 and sh.
>>>>>
>>>>
>>>> +Mike for HugeTLB
>>>>
>>>> If HugeTLB page migration also misses flush_dcache_page() on its tail pages,
>>>> you will need a different patch for the commit introducing hugetlb page migration.
>>>
>>> Agree. I think arm (see the following commit) has handled this issue, while
>>> most other architectures have not.
>>>
>>>   commit 0b19f93351dd ("ARM: mm: Add support for flushing HugeTLB pages.")
>>>
>>> But I do not have any real devices to test if this issue exists on other archs.
>>> In theory, it exists.
>>>
>>
>> Thanks for adding me to the discussion.
>>
>> I agree that this issue exists at least in theory for hugetlb pages as well.
>> This made me look at other places with similar code for hugetlb, i.e.
>> allocating a new page, copying data to the new page and then establishing a
>> mapping (pte) to the new page.
> 
> Hi Mike,
> 
> Thanks for looking at this.
> 
>>
>> - hugetlb_cow calls copy_user_huge_page() which ends up calling
>>   copy_user_highpage that includes dcache flushing of the target for some
>>   architectures, but not all.
> 
> copy_user_page() inside copy_user_highpage() already takes care of
> the cache maintenance on different architectures, which is documented
> in Documentation/core-api/cachetlb.rst. So there is no problem in this
> case.
> 

Thanks!  That cleared up some of my confusion.


>> - userfaultfd calls copy_huge_page_from_user which does not appear to do
>>   any dcache flushing for the target page.
> 
> Right. The new page should be flushed before setting up the mapping
> to user space.
> 
>> Do you think these code paths have the same potential issue?
> 
> The latter does have the issue; the former does not. The fixes may
> look like the following:
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a1baa198519a..828240aee3f9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5819,6 +5819,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>                         goto out;
>                 }
>                 folio_copy(page_folio(page), page_folio(*pagep));
> +               flush_dcache_folio(page_folio(page));
>                 put_page(*pagep);
>                 *pagep = NULL;
>         }
> diff --git a/mm/memory.c b/mm/memory.c
> index e8ce066be5f2..ff6f48cdcc48 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5400,6 +5400,7 @@ long copy_huge_page_from_user(struct page *dst_page,
>                         kunmap(subpage);
>                 else
>                         kunmap_atomic(page_kaddr);
> +               flush_dcache_page(subpage);
> 
>                 ret_val -= (PAGE_SIZE - rc);
>                 if (rc)
> 

That looks good to me.  Do you plan to include this in the next version
of this series?
Muchun Song Jan. 27, 2022, 1:55 a.m. UTC | #11
On Thu, Jan 27, 2022 at 7:27 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 1/25/22 19:29, Muchun Song wrote:
> > On Wed, Jan 26, 2022 at 5:24 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> >>
> >> On 1/24/22 22:01, Muchun Song wrote:
> >>> On Tue, Jan 25, 2022 at 10:42 AM Zi Yan <ziy@nvidia.com> wrote:
> >>>>
> >>>> On 24 Jan 2022, at 20:55, Muchun Song wrote:
> >>>>
> >>>>> On Tue, Jan 25, 2022 at 3:22 AM Zi Yan <ziy@nvidia.com> wrote:
> >>>>>>
> >>>>>> On 24 Jan 2022, at 13:11, David Rientjes wrote:
> >>>>>>
> >>>>>>> On Mon, 24 Jan 2022, Muchun Song wrote:
> >>>>>>>
> >>>>>>>> The D-cache maintenance inside move_to_new_page() only considers one page,
> >>>>>>>> so there is still a D-cache maintenance issue for the tail pages of a THP.
> >>>>>>>> Fix this without using flush_dcache_folio(), since it is not backportable.
> >>>>>>>>
> >>>>>>>
> >>>>>>> The mention of being backportable suggests that we should backport this,
> >>>>>>> likely to 4.14+.  So should it be marked as stable?
> >>>>>>
> >>>>>> Hmm, after more digging, I am not sure if the bug exists. For THP migration,
> >>>>>> flush_cache_range() is used in remove_migration_pmd(). The flush_dcache_page()
> >>>>>> was added by Lars Persson (cc’d) to solve the data corruption on MIPS[1],
> >>>>>> but THP migration is only enabled on x86_64, PPC_BOOK3S_64, and ARM64.
> >>>>>
> >>>>> I only mentioned the THP case. After some more thinking, I think HugeTLB
> >>>>> should also be considered, right? HugeTLB is enabled on arm, arm64,
> >>>>> mips, parisc, powerpc, riscv, s390 and sh.
> >>>>>
> >>>>
> >>>> +Mike for HugeTLB
> >>>>
> >>>> If HugeTLB page migration also misses flush_dcache_page() on its tail pages,
> >>>> you will need a different patch for the commit introducing hugetlb page migration.
> >>>
> >>> Agree. I think arm (see the following commit) has handled this issue, while
> >>> most other architectures have not.
> >>>
> >>>   commit 0b19f93351dd ("ARM: mm: Add support for flushing HugeTLB pages.")
> >>>
> >>> But I do not have any real devices to test if this issue exists on other archs.
> >>> In theory, it exists.
> >>>
> >>
> >> Thanks for adding me to the discussion.
> >>
> >> I agree that this issue exists at least in theory for hugetlb pages as well.
> >> This made me look at other places with similar code for hugetlb, i.e.
> >> allocating a new page, copying data to the new page and then establishing a
> >> mapping (pte) to the new page.
> >
> > Hi Mike,
> >
> > Thanks for looking at this.
> >
> >>
> >> - hugetlb_cow calls copy_user_huge_page() which ends up calling
> >>   copy_user_highpage that includes dcache flushing of the target for some
> >>   architectures, but not all.
> >
> > copy_user_page() inside copy_user_highpage() already takes care of
> > the cache maintenance on different architectures, which is documented
> > in Documentation/core-api/cachetlb.rst. So there is no problem in this
> > case.
> >
>
> Thanks!  That cleared up some of my confusion.
>
>
> >> - userfaultfd calls copy_huge_page_from_user which does not appear to do
> >>   any dcache flushing for the target page.
> >
> > Right. The new page should be flushed before setting up the mapping
> > to user space.
> >
> >> Do you think these code paths have the same potential issue?
> >
> > The latter does have the issue; the former does not. The fixes may
> > look like the following:
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index a1baa198519a..828240aee3f9 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5819,6 +5819,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
> >                         goto out;
> >                 }
> >                 folio_copy(page_folio(page), page_folio(*pagep));
> > +               flush_dcache_folio(page_folio(page));
> >                 put_page(*pagep);
> >                 *pagep = NULL;
> >         }
> > diff --git a/mm/memory.c b/mm/memory.c
> > index e8ce066be5f2..ff6f48cdcc48 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5400,6 +5400,7 @@ long copy_huge_page_from_user(struct page *dst_page,
> >                         kunmap(subpage);
> >                 else
> >                         kunmap_atomic(page_kaddr);
> > +               flush_dcache_page(subpage);
> >
> >                 ret_val -= (PAGE_SIZE - rc);
> >                 if (rc)
> >
>
> That looks good to me.  Do you plan to include this in the next version
> of this series?

Yes, will do.

Thanks.
diff mbox series

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index c9296d63878d..c418e8d92b9c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -933,9 +933,12 @@  static int move_to_new_page(struct page *newpage, struct page *page,
 		if (!PageMappingFlags(page))
 			page->mapping = NULL;
 
-		if (likely(!is_zone_device_page(newpage)))
-			flush_dcache_page(newpage);
+		if (likely(!is_zone_device_page(newpage))) {
+			int i, nr = compound_nr(newpage);
 
+			for (i = 0; i < nr; i++)
+				flush_dcache_page(newpage + i);
+		}
 	}
 out:
 	return rc;