[kvm-unit-tests,2/2] arm/arm64: mmu: add missing TLB flushes

Message ID 20171123161300.GA11746@flask (mailing list archive)
State New, archived

Commit Message

Radim Krčmář Nov. 23, 2017, 4:13 p.m. UTC
2017-11-21 17:49+0100, Andrew Jones:
> Since 031755db "arm: enable vmalloc" the virtual addresses returned
> from malloc and friends are no longer identical to the physical
> addresses they map to. On some hardware the change exposes missing
> TLB flushes. Let's get them added.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  lib/arm/mmu.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
> index 21bcf3a363af..2e5c993f1e7f 100644
> --- a/lib/arm/mmu.c
> +++ b/lib/arm/mmu.c
> @@ -86,6 +86,7 @@ static pteval_t *install_pte(pgd_t *pgtable, uintptr_t vaddr, pteval_t pte)
>  {
>  	pteval_t *p_pte = get_pte(pgtable, vaddr);
>  	*p_pte = pte;
> +	flush_tlb_page(vaddr);
>  	return p_pte;
>  }
>  
> @@ -136,9 +137,9 @@ void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
>  		pgd_val(*pgd) |= PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S;
>  		pgd_val(*pgd) |= pgprot_val(prot);
>  	}
> +	flush_tlb_all();
>  }

Applied, thanks.

Out of curiosity, when does it become better to use flush_tlb_all() than
flush_tlb_page()? i.e.
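
For context, here is a rough sketch of what these two primitives typically look like on arm64 in kvm-unit-tests (assumed, based on the era's lib/arm64/asm/mmu.h; the dsb()/isb() barrier macros come from asm/barrier.h, and the exact upstream definitions may differ):

static inline void flush_tlb_all(void)
{
	dsb(ishst);			/* make prior PTE writes visible before the invalidate */
	asm("tlbi vmalle1is");		/* drop all stage 1 EL1&0 entries, inner shareable domain */
	dsb(ish);			/* wait for the invalidate to complete */
	isb();				/* resynchronize the instruction stream */
}

static inline void flush_tlb_page(unsigned long vaddr)
{
	unsigned long page = vaddr >> 12;	/* TLBI operand is the page number, VA[55:12] */

	dsb(ishst);
	asm("tlbi vaae1is, %0" :: "r" (page));	/* drop entries for this VA, any ASID */
	dsb(ish);
	isb();
}

The trade-off is that flush_tlb_page() only discards the cached translations for one virtual address, while flush_tlb_all() throws away every entry: per-page invalidation costs one TLBI per mapping touched, a global flush costs a full TLB refill afterwards.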

Comments

Andrew Jones Nov. 23, 2017, 4:23 p.m. UTC | #1
On Thu, Nov 23, 2017 at 05:13:01PM +0100, Radim Krčmář wrote:
> 2017-11-21 17:49+0100, Andrew Jones:
> > Since 031755db "arm: enable vmalloc" the virtual addresses returned
> > from malloc and friends are no longer identical to the physical
> > addresses they map to. On some hardware the change exposes missing
> > TLB flushes. Let's get them added.
> > 
> > Signed-off-by: Andrew Jones <drjones@redhat.com>
> > ---
> >  lib/arm/mmu.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
> > index 21bcf3a363af..2e5c993f1e7f 100644
> > --- a/lib/arm/mmu.c
> > +++ b/lib/arm/mmu.c
> > @@ -86,6 +86,7 @@ static pteval_t *install_pte(pgd_t *pgtable, uintptr_t vaddr, pteval_t pte)
> >  {
> >  	pteval_t *p_pte = get_pte(pgtable, vaddr);
> >  	*p_pte = pte;
> > +	flush_tlb_page(vaddr);
> >  	return p_pte;
> >  }
> >  
> > @@ -136,9 +137,9 @@ void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
> >  		pgd_val(*pgd) |= PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S;
> >  		pgd_val(*pgd) |= pgprot_val(prot);
> >  	}
> > +	flush_tlb_all();
> >  }
> 
> Applied, thanks.
> 
> Out of curiosity, when does it become better to use flush_tlb_all() than
> flush_tlb_page()? i.e.
>

I was just being lazy with the mmu_set_range_sect() function: while it's
currently part of the MMU API, it's actually only used in one place,
setup_mmu(), which calls mmu_enable() right after it anyway.

So, in short, the mmu_set_range_sect() hunk is correct, but not optimal,
and it doesn't really matter :-)

drew
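
To illustrate the ordering drew describes, here is a hypothetical, heavily simplified sketch of that flow (not the verbatim setup_mmu(); mmu_idmap, PHYS_OFFSET and the pgprot flags are only indicative):

void *setup_mmu(phys_addr_t phys_end)
{
	/* ... allocate the identity-map page table (mmu_idmap) ... */

	/*
	 * Section-map all of RAM.  This is the only caller of
	 * mmu_set_range_sect(), so whether that function flushes per
	 * page or flushes everything makes little practical difference.
	 */
	mmu_set_range_sect(mmu_idmap, PHYS_OFFSET, PHYS_OFFSET, phys_end,
			   __pgprot(PTE_WBWA | PTE_USER));

	/* The new mappings only take effect once the MMU is enabled. */
	mmu_enable(mmu_idmap);

	return mmu_idmap;
}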

Patch

diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 2e5c993f1e7f..030c44412c2a 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -136,8 +136,8 @@ void mmu_set_range_sect(pgd_t *pgtable, uintptr_t virt_offset,
 		pgd_val(*pgd) = paddr;
 		pgd_val(*pgd) |= PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S;
 		pgd_val(*pgd) |= pgprot_val(prot);
+		flush_tlb_page(vaddr);
 	}
-	flush_tlb_all();
 }
 
 void *setup_mmu(phys_addr_t phys_end)