Message ID | 20191128180418.6938-17-alexandru.elisei@arm.com
---|---
State | New, archived |
Series | arm/arm64: Various fixes
diff --git a/arm/cstart64.S b/arm/cstart64.S
index f41ffa3bc6c2..87bf873795a1 100644
--- a/arm/cstart64.S
+++ b/arm/cstart64.S
@@ -167,8 +167,8 @@ halt:
 .globl asm_mmu_enable
 asm_mmu_enable:
 	ic	iallu			// I+BTB cache invalidate
-	tlbi	vmalle1is		// invalidate I + D TLBs
-	dsb	ish
+	tlbi	vmalle1			// invalidate I + D TLBs
+	dsb	nsh
 
 	/* TCR */
 	ldr	x1, =TCR_TxSZ(VA_BITS) |		\
There's really no need to invalidate the TLB entries for all CPUs when enabling the MMU for the current CPU, so use the non-shareable version of the TLBI operation (and downgrade the DSB accordingly).

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arm/cstart64.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
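
For reference, the post-patch maintenance sequence looks like the sketch below (comments added here to spell out the shareability reasoning; the instructions themselves are from the patch context):

```asm
	ic	iallu		// invalidate I-cache + branch predictor on this PE
	tlbi	vmalle1		// invalidate stage-1 EL1 TLB entries on the current
				// PE only; the old vmalle1is broadcast the
				// invalidation to every PE in the Inner Shareable
				// domain, which is unnecessary here
	dsb	nsh		// a Non-shareable DSB is sufficient to complete a
				// local-only TLBI (an Inner Shareable DSB is only
				// needed to wait for broadcast maintenance)
	isb			// synchronize context before the MMU is enabled
```

Since no other CPU can hold stale TLB entries for translations this CPU is about to create, restricting both the TLBI and the completing DSB to the local PE is architecturally sufficient and avoids needless cross-CPU traffic.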