
[1/5] mm: rmap: fix cache flush on THP pages

Message ID 20220121075515.79311-1-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series [1/5] mm: rmap: fix cache flush on THP pages

Commit Message

Muchun Song Jan. 21, 2022, 7:55 a.m. UTC
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, it does not cover all of the pages in a THP, only the head page.
Replace it with flush_cache_range() to fix this issue. At least, no
problems were found due to this, perhaps because few architectures have
virtually indexed caches.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
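
For context, a PMD-mapped THP spans HPAGE_PMD_NR base pages, so flushing
only the head page leaves the tail pages' cache lines stale on virtually
indexed caches. The flush_cache_range() call used by the fix covers the
same region as this hypothetical per-page loop (a sketch, not part of the
patch):

	/*
	 * Hypothetical helper, for illustration only (kernel context:
	 * linux/mm.h and linux/huge_mm.h for the types and constants).
	 */
	static inline void flush_cache_thp(struct vm_area_struct *vma,
					   unsigned long address,
					   struct page *page)
	{
		unsigned long pfn = page_to_pfn(page);
		int i;

		/* Flush every subpage, not just the head page. */
		for (i = 0; i < HPAGE_PMD_NR; i++)
			flush_cache_page(vma, address + i * PAGE_SIZE,
					 pfn + i);
	}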

Comments

Yang Shi Jan. 21, 2022, 6:05 p.m. UTC | #1
On Thu, Jan 20, 2022 at 11:56 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
> However, it does not cover all of the pages in a THP, only the head page.
> Replace it with flush_cache_range() to fix this issue. At least, no
> problems were found due to this, perhaps because few architectures have
> virtually indexed caches.

Yeah, actually flush_cache_page()/flush_cache_range() are no-ops on
most architectures that support THP, e.g. x86, aarch64, powerpc, etc.

And currently just tmpfs and read-only files support PMD-mapped THP,
and neither has to do writeback. It seems DAX doesn't have writeback
either, since it uses __set_page_dirty_no_writeback() for
set_page_dirty. So this code should never be called, IIUC.
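
For reference, the fallbacks those architectures pick up are empty
stubs, roughly of this shape (sketched from memory of
include/asm-generic/cacheflush.h; check the tree for the exact form):

	#ifndef flush_cache_page
	static inline void flush_cache_page(struct vm_area_struct *vma,
					    unsigned long vmaddr,
					    unsigned long pfn)
	{
	}
	#endif

	#ifndef flush_cache_range
	static inline void flush_cache_range(struct vm_area_struct *vma,
					     unsigned long start,
					     unsigned long end)
	{
	}
	#endif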

But anyway your fix looks correct to me.

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
> Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/rmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..65670cb805d6 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>                         if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
>                                 continue;
>
> -                       flush_cache_page(vma, address, page_to_pfn(page));
> +                       flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>                         entry = pmdp_invalidate(vma, address, pmd);
>                         entry = pmd_wrprotect(entry);
>                         entry = pmd_mkclean(entry);
> --
> 2.11.0
>
Christoph Hellwig Jan. 24, 2022, 7:34 a.m. UTC | #2
On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..65670cb805d6 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>  			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
>  				continue;
>  
> -			flush_cache_page(vma, address, page_to_pfn(page));
> +			flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

Do we need a flush_cache_folio here given that we must be dealing with
what effectively is a folio here?

Also please avoid the overly long line.
Muchun Song Jan. 24, 2022, 8:51 a.m. UTC | #3
On Mon, Jan 24, 2022 at 3:34 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index b0fd9dc19eba..65670cb805d6 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> >                       if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> >                               continue;
> >
> > -                     flush_cache_page(vma, address, page_to_pfn(page));
> > +                     flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>
> Do we need a flush_cache_folio here given that we must be dealing with
> what effectively is a folio here?

I think that is a future improvement. Keeping this patch minimal
should make it easy for anyone who wants to backport it. If we do
not care about that, I think it is better to introduce
flush_cache_folio() in this patch. What do you think?
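
A possible shape for such a helper, as a sketch only (assuming a
folio-based API with folio_size(); not a committed interface):

	/* Hypothetical folio-aware flush, for discussion only. */
	static inline void flush_cache_folio(struct vm_area_struct *vma,
					     unsigned long address,
					     struct folio *folio)
	{
		flush_cache_range(vma, address,
				  address + folio_size(folio));
	}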

>
> Also please avoid the overly long line.
>

OK.
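
The wrapped call would presumably look like this (formatting sketch
only):

			flush_cache_range(vma, address,
					  address + HPAGE_PMD_SIZE);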

Thanks.

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..65670cb805d6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, page_to_pfn(page));
+			flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);