| Message ID | 1566999511-24916-10-git-send-email-alexandru.elisei@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: Run at EL2 |
On Wed, Aug 28, 2019 at 02:38:24PM +0100, Alexandru Elisei wrote:
> Let's invalidate the TLB before enabling the MMU, not after, so we don't
> accidentally use a stale TLB mapping. For arm64, we already do that in
> asm_mmu_enable, and we issue an extra invalidation after the function
> returns. Invalidate the TLB in asm_mmu_enable for arm also, and remove the
> redundant call to flush_tlb_all.
>
> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> ---
>  lib/arm/mmu.c | 1 -
>  arm/cstart.S | 4 ++++
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
> index 161f7a8e607c..66a05d789386 100644
> --- a/lib/arm/mmu.c
> +++ b/lib/arm/mmu.c
> @@ -57,7 +57,6 @@ void mmu_enable(pgd_t *pgtable)
>  	struct thread_info *info = current_thread_info();
>
>  	asm_mmu_enable(__pa(pgtable));
> -	flush_tlb_all();
>
>  	info->pgtable = pgtable;
>  	mmu_mark_enabled(info->cpu);
> diff --git a/arm/cstart.S b/arm/cstart.S
> index 5d4fe4b1570b..316672545551 100644
> --- a/arm/cstart.S
> +++ b/arm/cstart.S
> @@ -160,6 +160,10 @@ halt:
>  .equ	NMRR,	0xff000004	@ MAIR1 (from Linux kernel)
>  .globl	asm_mmu_enable
>  asm_mmu_enable:
> +	/* TLBIALL */
> +	mcr	p15, 0, r2, c8, c7, 0
> +	dsb	ish
> +
>  	/* TTBCR */
>  	mrc	p15, 0, r2, c2, c0, 2
>  	orr	r2, #(1 << 31)	@ TTB_EAE
> --
> 2.7.4

Reviewed-by: Andrew Jones <drjones@redhat.com>
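The instruction the patch adds is TLBIALL, the CP15 "invalidate entire unified TLB" operation. For reference, a minimal C sketch of the same maintenance sequence via GCC inline assembly might look as follows; the helper name local_tlbiall is hypothetical, not something defined by kvm-unit-tests:

```c
/* Hypothetical helper: invalidate the entire unified TLB of the
 * current ARMv7 CPU, roughly what the removed flush_tlb_all() call
 * did from the C side. */
static inline void local_tlbiall(void)
{
	unsigned long ignored = 0;

	/* TLBIALL (CP15 c8, c7, opc2 0); the source register value
	 * is ignored by the operation. */
	asm volatile("mcr p15, 0, %0, c8, c7, 0" : : "r" (ignored) : "memory");
	/* DSB: wait for the invalidation to complete before any
	 * later memory access or translation table walk. */
	asm volatile("dsb" : : : "memory");
	/* ISB: refetch subsequent instructions with the new
	 * translation state. */
	asm volatile("isb" : : : "memory");
}
```

The patch itself uses `dsb ish`, which scopes the barrier to the inner shareable domain; the plain `dsb` above is the more conservative full-system form.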
```diff
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 161f7a8e607c..66a05d789386 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -57,7 +57,6 @@ void mmu_enable(pgd_t *pgtable)
 	struct thread_info *info = current_thread_info();
 
 	asm_mmu_enable(__pa(pgtable));
-	flush_tlb_all();
 
 	info->pgtable = pgtable;
 	mmu_mark_enabled(info->cpu);
diff --git a/arm/cstart.S b/arm/cstart.S
index 5d4fe4b1570b..316672545551 100644
--- a/arm/cstart.S
+++ b/arm/cstart.S
@@ -160,6 +160,10 @@ halt:
 .equ	NMRR,	0xff000004	@ MAIR1 (from Linux kernel)
 .globl	asm_mmu_enable
 asm_mmu_enable:
+	/* TLBIALL */
+	mcr	p15, 0, r2, c8, c7, 0
+	dsb	ish
+
 	/* TTBCR */
 	mrc	p15, 0, r2, c2, c0, 2
 	orr	r2, #(1 << 31)	@ TTB_EAE
```
Let's invalidate the TLB before enabling the MMU, not after, so we don't
accidentally use a stale TLB mapping. For arm64, we already do that in
asm_mmu_enable, and we issue an extra invalidation after the function
returns. Invalidate the TLB in asm_mmu_enable for arm also, and remove the
redundant call to flush_tlb_all.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 lib/arm/mmu.c | 1 -
 arm/cstart.S | 4 ++++
 2 files changed, 4 insertions(+), 1 deletion(-)
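With the flush moved into the assembly, the C side of mmu_enable reduces to the following; this is a condensed view of lib/arm/mmu.c after the patch, reconstructed from the hunk above rather than quoted from the tree:

```c
/* lib/arm/mmu.c after this patch (condensed): asm_mmu_enable now
 * performs the TLB invalidation itself, before it sets the MMU
 * enable bit, so no flush is needed once it returns. */
void mmu_enable(pgd_t *pgtable)
{
	struct thread_info *info = current_thread_info();

	asm_mmu_enable(__pa(pgtable));

	info->pgtable = pgtable;
	mmu_mark_enabled(info->cpu);
}
```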