Message ID: 20191128180418.6938-3-alexandru.elisei@arm.com (mailing list archive)
State: New, archived
Series: arm/arm64: Various fixes
diff --git a/lib/arm64/asm/mmu.h b/lib/arm64/asm/mmu.h
index 72d75eafc882..5d6d49036a06 100644
--- a/lib/arm64/asm/mmu.h
+++ b/lib/arm64/asm/mmu.h
@@ -12,7 +12,6 @@

 static inline void flush_tlb_all(void)
 {
-	dsb(ishst);
 	asm("tlbi vmalle1is");
 	dsb(ish);
 	isb();
@@ -21,7 +20,6 @@ static inline void flush_tlb_all(void)
 static inline void flush_tlb_page(unsigned long vaddr)
 {
 	unsigned long page = vaddr >> 12;
-	dsb(ishst);
 	asm("tlbi vaae1is, %0" :: "r" (page));
 	dsb(ish);
 	isb();
When changing a translation table entry, we already use all the necessary barriers. Remove the redundant barriers from the flush_tlb_{page,all} functions. We don't touch the arm versions of the TLB operations because they had no barriers before the TLBIs to begin with.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 lib/arm64/asm/mmu.h | 2 --
 1 file changed, 2 deletions(-)
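The ordering argument in the commit message can be sketched as follows. This is an illustrative arm64-only sketch, not the actual kvm-unit-tests code: the helper name `set_pte` and the `dsb`/`isb` macro definitions are assumptions introduced here to show where the store barrier already lives.

```c
/* Hedged sketch of the ordering argument; names and macros are
 * assumptions, not the real kvm-unit-tests implementation. arm64 only. */
#define dsb(opt) asm volatile("dsb " #opt ::: "memory")
#define isb()    asm volatile("isb" ::: "memory")

/* Hypothetical page-table update helper. */
static inline void set_pte(unsigned long *ptep, unsigned long pte)
{
	*ptep = pte;
	/* The barrier that makes the new entry visible to the table
	 * walker is issued here, at the point of the write... */
	dsb(ishst);
}

static inline void flush_tlb_page(unsigned long vaddr)
{
	unsigned long page = vaddr >> 12;
	/* ...so no dsb(ishst) is needed again before the TLBI. */
	asm("tlbi vaae1is, %0" :: "r" (page));
	dsb(ish);	/* wait for the invalidation to complete on all PEs */
	isb();		/* resynchronize the local instruction stream */
}
```

In other words, as long as every path that modifies a translation table entry ends with a `dsb(ishst)`, repeating that barrier at the start of the TLB flush helpers is redundant.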