Message ID | 20241004144307.66199-9-steven.price@arm.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | arm64: Support for running as a guest in Arm CCA | expand |
On 10/5/24 12:43 AM, Steven Price wrote:
> When __change_memory_common() is purely setting the valid bit on a PTE
> (e.g. via the set_memory_valid() call) there is no need for a TLBI as
> either the entry isn't changing (the valid bit was already set) or the
> entry was invalid and so should not have been cached in the TLB.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> v4: New patch
> ---
>  arch/arm64/mm/pageattr.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)

Reviewed-by: Gavin Shan <gshan@redhat.com>
On 04/10/2024 15:43, Steven Price wrote:
> When __change_memory_common() is purely setting the valid bit on a PTE
> (e.g. via the set_memory_valid() call) there is no need for a TLBI as
> either the entry isn't changing (the valid bit was already set) or the
> entry was invalid and so should not have been cached in the TLB.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

> ---
> v4: New patch
> ---
>  arch/arm64/mm/pageattr.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 0e270a1c51e6..547a9e0b46c2 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -60,7 +60,13 @@ static int __change_memory_common(unsigned long start, unsigned long size,
>  	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
>  				  &data);
>
> -	flush_tlb_kernel_range(start, start + size);
> +	/*
> +	 * If the memory is being made valid without changing any other bits
> +	 * then a TLBI isn't required as a non-valid entry cannot be cached in
> +	 * the TLB.
> +	 */
> +	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
> +		flush_tlb_kernel_range(start, start + size);
>  	return ret;
>  }
>
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 0e270a1c51e6..547a9e0b46c2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -60,7 +60,13 @@ static int __change_memory_common(unsigned long start, unsigned long size,
 	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
 				  &data);

-	flush_tlb_kernel_range(start, start + size);
+	/*
+	 * If the memory is being made valid without changing any other bits
+	 * then a TLBI isn't required as a non-valid entry cannot be cached in
+	 * the TLB.
+	 */
+	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
+		flush_tlb_kernel_range(start, start + size);
 	return ret;
 }