diff mbox series

arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD

Message ID 20201001093920.18260-1-will@kernel.org (mailing list archive)
State New, archived

Commit Message

Will Deacon Oct. 1, 2020, 9:39 a.m. UTC
TCR_EL1.HD is permitted to be cached in a TLB, so invalidate the local
TLB after setting the bit when support for the feature is detected.
Although this isn't strictly necessary, since we can happily operate
with the bit effectively clear, the current code uses an ISB in a
half-hearted attempt to make the change effective, so let's just fix
that up.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Catalin Marinas Oct. 1, 2020, 9:52 a.m. UTC | #1
On Thu, Oct 01, 2020 at 10:39:20AM +0100, Will Deacon wrote:
> [...]
> @@ -1442,7 +1442,7 @@ static inline void __cpu_enable_hw_dbm(void)
>  	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;
>  
>  	write_sysreg(tcr, tcr_el1);
> -	isb();
> +	local_flush_tlb_all();

Is the TLBI ordered with the TCR_EL1 write, or do we need the ISB as well?
Will Deacon Oct. 1, 2020, 10:32 a.m. UTC | #2
On Thu, Oct 01, 2020 at 10:52:52AM +0100, Catalin Marinas wrote:
> On Thu, Oct 01, 2020 at 10:39:20AM +0100, Will Deacon wrote:
> > [...]
> > @@ -1442,7 +1442,7 @@ static inline void __cpu_enable_hw_dbm(void)
> >  	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;
> >  
> >  	write_sysreg(tcr, tcr_el1);
> > -	isb();
> > +	local_flush_tlb_all();
> 
> Is the TLBI ordered with the TCR_EL1 write, or do we need the ISB as well?

Good catch! I think this is direct-write -> indirect-read ordering, which
needs the ISB, so I'll keep it and add the TLB flush immediately afterwards.

Will
Mark Rutland Oct. 1, 2020, 10:33 a.m. UTC | #3
On Thu, Oct 01, 2020 at 10:52:52AM +0100, Catalin Marinas wrote:
> On Thu, Oct 01, 2020 at 10:39:20AM +0100, Will Deacon wrote:
> > [...]
> > @@ -1442,7 +1442,7 @@ static inline void __cpu_enable_hw_dbm(void)
> >  	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;
> >  
> >  	write_sysreg(tcr, tcr_el1);
> > -	isb();
> > +	local_flush_tlb_all();
> 
> Is the TLBI ordered with the TCR_EL1 write, or do we need the ISB as well?

I believe we still need the ISB prior to the TLBI since MSR and TLBI are
not usually ordered w.r.t. one another, so after the TLBI it might still
be possible to allocate the stale value into TLBs.

Mark.
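
[Editorial note: taken together, the thread's conclusion is to keep the
ISB and add the invalidation after it. A sketch of what the respin might
look like (not the actual v2 hunk; illustrative only):]

 static inline void __cpu_enable_hw_dbm(void)
 {
 	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;
 
 	write_sysreg(tcr, tcr_el1);
 	/*
 	 * MSR and TLBI are not ordered w.r.t. one another, so synchronise
 	 * the TCR_EL1 write first; otherwise the stale TCR_EL1.HD value
 	 * could be allocated back into the TLB after the invalidation.
 	 */
 	isb();
 	local_flush_tlb_all();
 }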
Patch

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6424584be01e..29de76046977 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1442,7 +1442,7 @@ static inline void __cpu_enable_hw_dbm(void)
 	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;
 
 	write_sysreg(tcr, tcr_el1);
-	isb();
+	local_flush_tlb_all();
 }
 
 static bool cpu_has_broken_dbm(void)