Message ID | 20240909103828.16699-2-abelova@astralinux.ru (mailing list archive)
---|---
State | New, archived
Series | arm64: KVM: prevent overflow in inject_abt64
[dropping Christoffer and Matt from the list, as these addresses are
pretty obsolete and I doubt they actually care]

Anastasia,

On Mon, 09 Sep 2024 11:38:27 +0100,
Anastasia Belova <abelova@astralinux.ru> wrote:
>
> ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT = 0x20 << 26.
> ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT = 0x21 << 26.
> These operations' results are ints with bit 31 set.
> When these values are cast to u64 (esr is u64), sign
> extension fills the 32 highest bits.
>
> Add an explicit cast to prevent this.
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE.
>
> Fixes: aa8eff9bfbd5 ("arm64: KVM: fault injection into a guest")
> Signed-off-by: Anastasia Belova <abelova@astralinux.ru>
> ---
>  arch/arm64/kvm/inject_fault.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> index a640e839848e..b6b2cfff6629 100644
> --- a/arch/arm64/kvm/inject_fault.c
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -74,9 +74,9 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
>  	 * an AArch32 fault, it means we managed to trap an EL0 fault.
>  	 */
>  	if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
> -		esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
> +		esr |= ((u64)ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
>  	else
> -		esr |= (ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
> +		esr |= ((u64)ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
>
>  	if (!is_iabt)
>  		esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;

Thanks for this. Although correct, this is a point fix that leaves
plenty of opportunities for problems. I'd rather you update all the
ESR_ELx_EC_* constants to be defined as UL(0x..) instead of (0x..),
which will allow us to forget this forever.

Something like this (based on my working tree and won't apply on
upstream, but you will hopefully get the idea):

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index cc1b0ec7a51be..47799a7f95cf8 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -10,64 +10,64 @@
 #include <asm/memory.h>
 #include <asm/sysreg.h>
 
-#define ESR_ELx_EC_UNKNOWN	(0x00)
-#define ESR_ELx_EC_WFx		(0x01)
+#define ESR_ELx_EC_UNKNOWN	UL(0x00)
+#define ESR_ELx_EC_WFx		UL(0x01)
 /* Unallocated EC: 0x02 */
-#define ESR_ELx_EC_CP15_32	(0x03)
-#define ESR_ELx_EC_CP15_64	(0x04)
-#define ESR_ELx_EC_CP14_MR	(0x05)
-#define ESR_ELx_EC_CP14_LS	(0x06)
-#define ESR_ELx_EC_FP_ASIMD	(0x07)
-#define ESR_ELx_EC_CP10_ID	(0x08)	/* EL2 only */
-#define ESR_ELx_EC_PAC		(0x09)	/* EL2 and above */
-#define ESR_ELx_EC_LS64B	(0x0A)
+#define ESR_ELx_EC_CP15_32	UL(0x03)
+#define ESR_ELx_EC_CP15_64	UL(0x04)
+#define ESR_ELx_EC_CP14_MR	UL(0x05)
+#define ESR_ELx_EC_CP14_LS	UL(0x06)
+#define ESR_ELx_EC_FP_ASIMD	UL(0x07)
+#define ESR_ELx_EC_CP10_ID	UL(0x08)	/* EL2 only */
+#define ESR_ELx_EC_PAC		UL(0x09)	/* EL2 and above */
+#define ESR_ELx_EC_LS64B	UL(0x0A)
 /* Unallocated EC: 0x0B */
-#define ESR_ELx_EC_CP14_64	(0x0C)
-#define ESR_ELx_EC_BTI		(0x0D)
-#define ESR_ELx_EC_ILL		(0x0E)
+#define ESR_ELx_EC_CP14_64	UL(0x0C)
+#define ESR_ELx_EC_BTI		UL(0x0D)
+#define ESR_ELx_EC_ILL		UL(0x0E)
 /* Unallocated EC: 0x0F - 0x10 */
-#define ESR_ELx_EC_SVC32	(0x11)
-#define ESR_ELx_EC_HVC32	(0x12)	/* EL2 only */
-#define ESR_ELx_EC_SMC32	(0x13)	/* EL2 and above */
+#define ESR_ELx_EC_SVC32	UL(0x11)
+#define ESR_ELx_EC_HVC32	UL(0x12)	/* EL2 only */
+#define ESR_ELx_EC_SMC32	UL(0x13)	/* EL2 and above */
 /* Unallocated EC: 0x14 */
-#define ESR_ELx_EC_SVC64	(0x15)
-#define ESR_ELx_EC_HVC64	(0x16)	/* EL2 and above */
-#define ESR_ELx_EC_SMC64	(0x17)	/* EL2 and above */
-#define ESR_ELx_EC_SYS64	(0x18)
-#define ESR_ELx_EC_SVE		(0x19)
-#define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
+#define ESR_ELx_EC_SVC64	UL(0x15)
+#define ESR_ELx_EC_HVC64	UL(0x16)	/* EL2 and above */
+#define ESR_ELx_EC_SMC64	UL(0x17)	/* EL2 and above */
+#define ESR_ELx_EC_SYS64	UL(0x18)
+#define ESR_ELx_EC_SVE		UL(0x19)
+#define ESR_ELx_EC_ERET		UL(0x1a)	/* EL2 only */
 /* Unallocated EC: 0x1B */
-#define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
-#define ESR_ELx_EC_SME		(0x1D)
+#define ESR_ELx_EC_FPAC		UL(0x1C)	/* EL1 and above */
+#define ESR_ELx_EC_SME		UL(0x1D)
 /* Unallocated EC: 0x1E */
-#define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
-#define ESR_ELx_EC_IABT_LOW	(0x20)
-#define ESR_ELx_EC_IABT_CUR	(0x21)
-#define ESR_ELx_EC_PC_ALIGN	(0x22)
+#define ESR_ELx_EC_IMP_DEF	UL(0x1f)	/* EL3 only */
+#define ESR_ELx_EC_IABT_LOW	UL(0x20)
+#define ESR_ELx_EC_IABT_CUR	UL(0x21)
+#define ESR_ELx_EC_PC_ALIGN	UL(0x22)
 /* Unallocated EC: 0x23 */
-#define ESR_ELx_EC_DABT_LOW	(0x24)
-#define ESR_ELx_EC_DABT_CUR	(0x25)
-#define ESR_ELx_EC_SP_ALIGN	(0x26)
-#define ESR_ELx_EC_MOPS		(0x27)
-#define ESR_ELx_EC_FP_EXC32	(0x28)
+#define ESR_ELx_EC_DABT_LOW	UL(0x24)
+#define ESR_ELx_EC_DABT_CUR	UL(0x25)
+#define ESR_ELx_EC_SP_ALIGN	UL(0x26)
+#define ESR_ELx_EC_MOPS		UL(0x27)
+#define ESR_ELx_EC_FP_EXC32	UL(0x28)
 /* Unallocated EC: 0x29 - 0x2B */
-#define ESR_ELx_EC_FP_EXC64	(0x2C)
+#define ESR_ELx_EC_FP_EXC64	UL(0x2C)
 /* Unallocated EC: 0x2D - 0x2E */
-#define ESR_ELx_EC_SERROR	(0x2F)
-#define ESR_ELx_EC_BREAKPT_LOW	(0x30)
-#define ESR_ELx_EC_BREAKPT_CUR	(0x31)
-#define ESR_ELx_EC_SOFTSTP_LOW	(0x32)
-#define ESR_ELx_EC_SOFTSTP_CUR	(0x33)
-#define ESR_ELx_EC_WATCHPT_LOW	(0x34)
-#define ESR_ELx_EC_WATCHPT_CUR	(0x35)
+#define ESR_ELx_EC_SERROR	UL(0x2F)
+#define ESR_ELx_EC_BREAKPT_LOW	UL(0x30)
+#define ESR_ELx_EC_BREAKPT_CUR	UL(0x31)
+#define ESR_ELx_EC_SOFTSTP_LOW	UL(0x32)
+#define ESR_ELx_EC_SOFTSTP_CUR	UL(0x33)
+#define ESR_ELx_EC_WATCHPT_LOW	UL(0x34)
+#define ESR_ELx_EC_WATCHPT_CUR	UL(0x35)
 /* Unallocated EC: 0x36 - 0x37 */
-#define ESR_ELx_EC_BKPT32	(0x38)
+#define ESR_ELx_EC_BKPT32	UL(0x38)
 /* Unallocated EC: 0x39 */
-#define ESR_ELx_EC_VECTOR32	(0x3A)	/* EL2 only */
+#define ESR_ELx_EC_VECTOR32	UL(0x3A)	/* EL2 only */
 /* Unallocated EC: 0x3B */
-#define ESR_ELx_EC_BRK64	(0x3C)
+#define ESR_ELx_EC_BRK64	UL(0x3C)
 /* Unallocated EC: 0x3D - 0x3F */
-#define ESR_ELx_EC_MAX		(0x3F)
+#define ESR_ELx_EC_MAX		UL(0x3F)
 
 #define ESR_ELx_EC_SHIFT	(26)
 #define ESR_ELx_EC_WIDTH	(6)

Could you please respin this quickly so that it can be taken in v6.12?

Thanks,

	M.
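[Editorial note: a minimal standalone sketch of the UL() approach suggested above. The kernel's actual macro lives in include/vdso/const.h, built on the _AC()/__AC() token-pasting helpers from include/uapi/linux/const.h; the definitions below are simplified stand-ins (the real versions also handle __ASSEMBLY__, omitted here) and only illustrate the mechanics: pasting the UL suffix onto the literal makes the constant unsigned long, so the shift is done in an unsigned type and nothing can sign-extend.]

#include <stdint.h>
#include <stdio.h>

#define __AC(X, Y)	(X##Y)		/* paste the suffix onto the literal */
#define _AC(X, Y)	__AC(X, Y)	/* extra level so X and Y expand first */
#define UL(x)		(_AC(x, UL))	/* UL(0x20) becomes (0x20UL) */

#define ESR_ELx_EC_IABT_LOW	UL(0x20)
#define ESR_ELx_EC_SHIFT	(26)

int main(void)
{
	uint64_t esr = 0;

	/* The shift now happens in unsigned long (64-bit on arm64),
	 * so bit 31 is an ordinary value bit and no sign extension
	 * occurs when the result is OR'ed into esr. */
	esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
	printf("esr = 0x%016llx\n", (unsigned long long)esr);
	/* prints: esr = 0x0000000080000000 */

	return 0;
}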
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index a640e839848e..b6b2cfff6629 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -74,9 +74,9 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	 * an AArch32 fault, it means we managed to trap an EL0 fault.
 	 */
 	if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
-		esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
+		esr |= ((u64)ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
 	else
-		esr |= (ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
+		esr |= ((u64)ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
 
 	if (!is_iabt)
 		esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT = 0x20 << 26.
ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT = 0x21 << 26.
These operations' results are ints with bit 31 set.
When these values are cast to u64 (esr is u64), sign
extension fills the 32 highest bits.

Add an explicit cast to prevent this.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Fixes: aa8eff9bfbd5 ("arm64: KVM: fault injection into a guest")
Signed-off-by: Anastasia Belova <abelova@astralinux.ru>
---
 arch/arm64/kvm/inject_fault.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
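[Editorial note: a minimal userspace reproduction of the arithmetic described in the commit message, a sketch rather than kernel code. uint64_t stands in for the kernel's u64, and the constants are copied from the old esr.h. Note that 0x20 << 26 overflows signed int, which is formally undefined behaviour in C; GCC and Clang wrap in practice, which is exactly how the result ends up with its sign bit set.]

#include <stdint.h>
#include <stdio.h>

#define ESR_ELx_EC_IABT_LOW	(0x20)	/* plain int constant, as before the fix */
#define ESR_ELx_EC_SHIFT	(26)

int main(void)
{
	uint64_t esr = 0;

	/* 0x20 << 26 is evaluated as a 32-bit int and yields 0x80000000,
	 * i.e. INT_MIN (the overflow is formally UB; GCC/Clang wrap).
	 * Converting that negative int to uint64_t sign-extends it. */
	esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
	printf("without cast: 0x%016llx\n", (unsigned long long)esr);
	/* prints: without cast: 0xffffffff80000000 */

	esr = 0;

	/* Casting first makes the shift happen in 64-bit unsigned
	 * arithmetic, so only bit 31 (the intended EC value) is set. */
	esr |= ((uint64_t)ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
	printf("with cast:    0x%016llx\n", (unsigned long long)esr);
	/* prints: with cast:    0x0000000080000000 */

	return 0;
}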