@@ -1246,7 +1246,13 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
 	} else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
 		switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) {
 		case 8:
-			kvm_setup_bhb_slot(__spectre_bhb_loop_k8_start);
+			/*
+			 * A57/A72-r0 will already have selected the
+			 * spectre-indirect vector, which is sufficient
+			 * for BHB too.
+			 */
+			if (!__this_cpu_read(bp_hardening_data.fn))
+				kvm_setup_bhb_slot(__spectre_bhb_loop_k8_start);
 			break;
 		case 24:
 			kvm_setup_bhb_slot(__spectre_bhb_loop_k24_start);
Both the Spectre-v2 and Spectre-BHB mitigations involve running a sequence
immediately after exiting a guest, before any branches. In the stable kernels
these sequences are built by copying templates into an empty vector slot.

For Spectre-BHB, Cortex-A57 and A72 require the branchy loop with k=8.
If Spectre-v2 needs mitigating at the same time, a firmware call to EL3 is
needed. The work EL3 does at this point is also enough to mitigate
Spectre-BHB.

When enabling the Spectre-BHB mitigation, spectre_bhb_enable_mitigation()
should check if a slot has already been allocated for Spectre-v2, meaning
no work is needed for Spectre-BHB. This check was missed in the earlier
backport; add it.

Fixes: c20d55174479 ("arm64: Mitigate spectre style branch history side channels")
Signed-off-by: James Morse <james.morse@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
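
For readers without the rest of cpu_errata.c to hand, below is a minimal
stand-alone sketch of the decision the hunk adds, written as ordinary
userspace C rather than kernel code. Only bp_hardening_data.fn,
spectre_bhb_enable_mitigation() and kvm_setup_bhb_slot() mentioned above are
real kernel symbols; every name in the sketch (bp_hardening_fn,
el3_firmware_call, bhb_loop_k8, and so on) is a hypothetical stand-in, and
the real code installs template sequences into per-CPU KVM vector slots
rather than assigning plain function pointers.

	#include <stdio.h>

	/* Simplified stand-in for the per-CPU bp_hardening_data.fn pointer. */
	static void (*bp_hardening_fn)(void);

	/* Hypothetical: models the Spectre-v2 firmware call to EL3. */
	static void el3_firmware_call(void)
	{
		puts("EL3 firmware mitigation invoked (also covers BHB)");
	}

	/* Hypothetical: models the Spectre-BHB branchy loop with k=8. */
	static void bhb_loop_k8(void)
	{
		puts("BHB loop k=8 sequence installed");
	}

	/* Spectre-v2 path: A57/A72-r0 pick the firmware-call vector. */
	static void enable_spectre_v2(void)
	{
		bp_hardening_fn = el3_firmware_call;
	}

	/* Spectre-BHB path: skip the k=8 template if v2 already claimed the slot. */
	static void enable_spectre_bhb_k8(void)
	{
		if (!bp_hardening_fn)
			bp_hardening_fn = bhb_loop_k8;
	}

	int main(void)
	{
		enable_spectre_v2();	/* comment out to see the k=8 loop win instead */
		enable_spectre_bhb_k8();

		if (bp_hardening_fn)
			bp_hardening_fn();
		return 0;
	}

With enable_spectre_v2() called first, the BHB path leaves the slot alone,
mirroring the "check if a slot has already been allocated" fix; with it
commented out, the k=8 loop is installed, mirroring the unpatched case 8.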