Message ID: 20201026133450.73304-2-maz@kernel.org (mailing list archive)
State: New, archived
Series: KVM: arm64: Move PC/ELR/SPSR/PSTATE updates to EL2
On Mon, Oct 26, 2020 at 01:34:40PM +0000, Marc Zyngier wrote:
> On SMC trap, the prefered return address is set to that of the SMC
> instruction itself. It is thus wrong to tyr and roll it back when

Typo: s/tyr/try/

> an SError occurs while trapping on SMC. It is still necessary on
> HVC though, as HVC doesn't cause a trap, and sets ELR to returning
> *after* the HVC.
>
> It also became apparent that the is 16bit encoding for an AArch32

I guess s/that the is/that there is no/ ?

> HVC instruction, meaning that the displacement is always 4 bytes,
> no matter what the ISA is. Take this opportunity to simplify it.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Assuming that there is no 16-bit HVC:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kvm/handle_exit.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 5d690d60ccad..79a720657c47 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -245,15 +245,15 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
>  	u8 esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
>  
>  	/*
> -	 * HVC/SMC already have an adjusted PC, which we need
> -	 * to correct in order to return to after having
> -	 * injected the SError.
> +	 * HVC already have an adjusted PC, which we need to
> +	 * correct in order to return to after having injected
> +	 * the SError.
> +	 *
> +	 * SMC, on the other hand, is *trapped*, meaning its
> +	 * preferred return address is the SMC itself.
>  	 */
> -	if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64 ||
> -	    esr_ec == ESR_ELx_EC_SMC32 || esr_ec == ESR_ELx_EC_SMC64) {
> -		u32 adj = kvm_vcpu_trap_il_is32bit(vcpu) ? 4 : 2;
> -		*vcpu_pc(vcpu) -= adj;
> -	}
> +	if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
> +		*vcpu_pc(vcpu) -= 4;
>  
>  	return 1;
>  }
> --
> 2.28.0
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
On 2020-10-26 13:53, Mark Rutland wrote:
> On Mon, Oct 26, 2020 at 01:34:40PM +0000, Marc Zyngier wrote:
>> On SMC trap, the prefered return address is set to that of the SMC
>> instruction itself. It is thus wrong to tyr and roll it back when
>
> Typo: s/tyr/try/
>
>> an SError occurs while trapping on SMC. It is still necessary on
>> HVC though, as HVC doesn't cause a trap, and sets ELR to returning
>> *after* the HVC.
>>
>> It also became apparent that the is 16bit encoding for an AArch32
>
> I guess s/that the is/that there is no/ ?

Something along these lines, yes! ;-)

>
>> HVC instruction, meaning that the displacement is always 4 bytes,
>> no matter what the ISA is. Take this opportunity to simplify it.
>>
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>
> Assuming that there is no 16-bit HVC:

It is actually impossible to have a 16bit encoding for HVC, as
it always conveys a 16bit immediate, and you need some space to
encode the instruction itself!

>
> Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,

M.
On Mon, Oct 26, 2020 at 02:08:35PM +0000, Marc Zyngier wrote:
> On 2020-10-26 13:53, Mark Rutland wrote:
> > Assuming that there is no 16-bit HVC:
>
> It is actually impossible to have a 16bit encoding for HVC, as
> it always conveys a 16bit immediate, and you need some space
> to encode the instruction itself!

Ah, of course!

Mark.
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 5d690d60ccad..79a720657c47 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -245,15 +245,15 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 	u8 esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
 
 	/*
-	 * HVC/SMC already have an adjusted PC, which we need
-	 * to correct in order to return to after having
-	 * injected the SError.
+	 * HVC already have an adjusted PC, which we need to
+	 * correct in order to return to after having injected
+	 * the SError.
+	 *
+	 * SMC, on the other hand, is *trapped*, meaning its
+	 * preferred return address is the SMC itself.
 	 */
-	if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64 ||
-	    esr_ec == ESR_ELx_EC_SMC32 || esr_ec == ESR_ELx_EC_SMC64) {
-		u32 adj = kvm_vcpu_trap_il_is32bit(vcpu) ? 4 : 2;
-		*vcpu_pc(vcpu) -= adj;
-	}
+	if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
+		*vcpu_pc(vcpu) -= 4;
 
 	return 1;
 }
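The asymmetry the patch relies on can be sketched as a toy model (all names here are illustrative, not KVM code): on HVC the CPU sets ELR past the instruction, while a trapped SMC's preferred return address is the SMC itself, so only the HVC case needs a 4-byte rollback before injecting the SError.

```c
/*
 * Toy model of the ELR/PC asymmetry described above -- not KVM code;
 * the enum and function names are assumptions for illustration.
 */
enum exit_class { EC_HVC, EC_SMC_TRAP };

/* ELR as set by the CPU when the guest exits. */
static unsigned long elr_on_exit(unsigned long insn_pc, enum exit_class ec)
{
	/* HVC behaves like a call: ELR points past the instruction.
	 * A trapped SMC's preferred return address is the SMC itself. */
	return ec == EC_HVC ? insn_pc + 4 : insn_pc;
}

/* PC to use when injecting an SError, so the guest re-tries the
 * instruction: only the HVC case needs rolling back. */
static unsigned long serror_fixup_pc(unsigned long elr, enum exit_class ec)
{
	return ec == EC_HVC ? elr - 4 : elr;
}
```

In both cases the guest resumes at the trapping instruction; the difference is only in where the hardware left ELR, which is exactly why the SMC32/SMC64 cases drop out of the `if` above.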
On SMC trap, the preferred return address is set to that of the SMC
instruction itself. It is thus wrong to try and roll it back when
an SError occurs while trapping on SMC. It is still necessary on
HVC though, as HVC doesn't cause a trap, and sets ELR to returning
*after* the HVC.

It also became apparent that there is no 16bit encoding for an AArch32
HVC instruction, meaning that the displacement is always 4 bytes,
no matter what the ISA is. Take this opportunity to simplify it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/handle_exit.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)