
[v5,5/7] mm: shmem: fix missing cache flush in shmem_mfill_atomic_pte()

Message ID 20220210123058.79206-6-songmuchun@bytedance.com (mailing list archive)
State New
Series: Fix some cache flush bugs

Commit Message

Muchun Song Feb. 10, 2022, 12:30 p.m. UTC
The userfaultfd path calls shmem_mfill_atomic_pte(), which does not do
any cache flushing for the target page.  The target page is then mapped
into user space at a different (user) address, which may alias the
kernel address that was used to copy the data into it.  Insert
flush_dcache_page() in the non-zero-page case, and replace
clear_highpage() with clear_user_highpage(), which already handles the
cache maintenance.

Fixes: 8d1039634206 ("userfaultfd: shmem: add shmem_mfill_zeropage_pte for userfaultfd support")
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/shmem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Mike Kravetz Feb. 15, 2022, 7:11 p.m. UTC | #1
On 2/10/22 04:30, Muchun Song wrote:
> The userfaultfd path calls shmem_mfill_atomic_pte(), which does not do
> any cache flushing for the target page.  The target page is then mapped
> into user space at a different (user) address, which may alias the
> kernel address that was used to copy the data into it.  Insert
> flush_dcache_page() in the non-zero-page case, and replace
> clear_highpage() with clear_user_highpage(), which already handles the
> cache maintenance.
> 
> Fixes: 8d1039634206 ("userfaultfd: shmem: add shmem_mfill_zeropage_pte for userfaultfd support")
> Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/shmem.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)

Thanks,

It might have been better to combine this and the next patch.  When looking
at this, I noted the 'fallback to copy_from_user outside mmap_lock' case needs
to be addressed as well.  It is in the next patch.  No need to change.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Muchun Song Feb. 16, 2022, 6:16 a.m. UTC | #2
On Wed, Feb 16, 2022 at 3:12 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 2/10/22 04:30, Muchun Song wrote:
> > The userfaultfd path calls shmem_mfill_atomic_pte(), which does not do
> > any cache flushing for the target page.  The target page is then mapped
> > into user space at a different (user) address, which may alias the
> > kernel address that was used to copy the data into it.  Insert
> > flush_dcache_page() in the non-zero-page case, and replace
> > clear_highpage() with clear_user_highpage(), which already handles the
> > cache maintenance.
> >
> > Fixes: 8d1039634206 ("userfaultfd: shmem: add shmem_mfill_zeropage_pte for userfaultfd support")
> > Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/shmem.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
>
> Thanks,
>
> It might have been better to combine this and the next patch.  When looking
> at this, I noted the 'fallback to copy_from_user outside mmap_lock' case needs
> to be addressed as well.  It is in the next patch.  No need to change.

I separated those changes into two patches since the commits they fix
are different.  This patch fixes Linux 4.13 and later, while the next
patch fixes Linux 4.2 and later.  Combining the two could make
backporting harder.

>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Thanks Mike.

Patch

diff --git a/mm/shmem.c b/mm/shmem.c
index eb0fd9001130..2e17ec9231a2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2371,8 +2371,10 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 				/* don't free the page */
 				goto out_unacct_blocks;
 			}
+
+			flush_dcache_page(page);
 		} else {		/* ZEROPAGE */
-			clear_highpage(page);
+			clear_user_highpage(page, dst_addr);
 		}
 	} else {
 		page = *pagep;