[04/11] KVM: arm64: Move PC rollback on SError to HYP

Message ID 20201026133450.73304-5-maz@kernel.org (mailing list archive)
State New, archived
Series KVM: arm64: Move PC/ELR/SPSR/PSTATE updates to EL2

Commit Message

Marc Zyngier Oct. 26, 2020, 1:34 p.m. UTC
Instead of handling the "PC rollback on SError during HVC" at EL1 (which
requires disclosing PC to a potentially untrusted kernel), let's move
this fixup to ... fixup_guest_exit(), which is where we do all fixups.

Isn't that neat?

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/handle_exit.c            | 17 -----------------
 arch/arm64/kvm/hyp/include/hyp/switch.h | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 17 deletions(-)

Comments

Mark Rutland Oct. 26, 2020, 2:06 p.m. UTC | #1
On Mon, Oct 26, 2020 at 01:34:43PM +0000, Marc Zyngier wrote:
> Instead of handling the "PC rollback on SError during HVC" at EL1 (which
> requires disclosing PC to a potentially untrusted kernel), let's move
> this fixup to ... fixup_guest_exit(), which is where we do all fixups.
> 
> Isn't that neat?
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

James Morse Oct. 27, 2020, 2:56 p.m. UTC | #2
Hi Marc,

On 26/10/2020 13:34, Marc Zyngier wrote:
> Instead of handling the "PC rollback on SError during HVC" at EL1 (which
> requires disclosing PC to a potentially untrusted kernel), let's move
> this fixup to ... fixup_guest_exit(), which is where we do all fixups.

> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index d687e574cde5..668f02c7b0b3 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -411,6 +411,21 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
>  		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
>  
> +	if (ARM_SERROR_PENDING(*exit_code)) {
> +		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
> +
> +		/*
> +		 * HVC already have an adjusted PC, which we need to
> +		 * correct in order to return to after having injected
> +		 * the SError.
> +		 *
> +		 * SMC, on the other hand, is *trapped*, meaning its
> +		 * preferred return address is the SMC itself.
> +		 */
> +		if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
> +			*vcpu_pc(vcpu) -= 4;

Isn't *vcpu_pc(vcpu) the PC of the previous entry for this vcpu? It's not the
PC of the exit until __sysreg_save_el2_return_state() saves it, which happens
just after fixup_guest_exit().
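
For reference, my rough picture of the nVHE run loop (from memory, so take
the exact names with a pinch of salt) is:

	do {
		exit_code = __guest_enter(...);	/* arguments elided */
	} while (fixup_guest_exit(vcpu, &exit_code));

	__sysreg_save_state_nvhe(guest_ctxt);	/* -> __sysreg_save_el2_return_state() */

i.e. ELR_EL2 still holds the guest's exit PC when fixup_guest_exit() runs,
but the vcpu context doesn't.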

Mess with ELR_EL2 directly?


Thanks,

James

> +	}
> +
>  	/*
>  	 * We're using the raw exception code in order to only process
>  	 * the trap if no SError is pending. We will come back to the
>
Marc Zyngier Oct. 27, 2020, 2:59 p.m. UTC | #3
On 2020-10-27 14:56, James Morse wrote:
> Hi Marc,
> 
> On 26/10/2020 13:34, Marc Zyngier wrote:
>> Instead of handling the "PC rollback on SError during HVC" at EL1 (which
>> requires disclosing PC to a potentially untrusted kernel), let's move
>> this fixup to ... fixup_guest_exit(), which is where we do all fixups.
> 
>> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
>> index d687e574cde5..668f02c7b0b3 100644
>> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
>> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
>> @@ -411,6 +411,21 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>>  	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
>>  		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
>> 
>> +	if (ARM_SERROR_PENDING(*exit_code)) {
>> +		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
>> +
>> +		/*
>> +		 * HVC already have an adjusted PC, which we need to
>> +		 * correct in order to return to after having injected
>> +		 * the SError.
>> +		 *
>> +		 * SMC, on the other hand, is *trapped*, meaning its
>> +		 * preferred return address is the SMC itself.
>> +		 */
>> +		if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
>> +			*vcpu_pc(vcpu) -= 4;
> 
> Isn't *vcpu_pc(vcpu) the PC of the previous entry for this vcpu? It's
> not the PC of the exit until __sysreg_save_el2_return_state() saves it,
> which happens just after fixup_guest_exit().

Hmmm. Good point. The move was obviously done in haste, thank you for
pointing out this blatant bug.

> Mess with ELR_EL2 directly?

Yes, that's the best course of action. We never run this code anyway.
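
Something along the lines of the below (untested sketch, assuming the
write_sysreg_el2() counterpart of the read_sysreg_el2() accessor already
used in this file) should do the trick:

	if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
		write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);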

Thanks,

         M.

Patch

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index d4e00a864ee6..f79137ee4274 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -241,23 +241,6 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 {
 	struct kvm_run *run = vcpu->run;
 
-	if (ARM_SERROR_PENDING(exception_index)) {
-		u8 esr_ec = ESR_ELx_EC(kvm_vcpu_get_esr(vcpu));
-
-		/*
-		 * HVC already have an adjusted PC, which we need to
-		 * correct in order to return to after having injected
-		 * the SError.
-		 *
-		 * SMC, on the other hand, is *trapped*, meaning its
-		 * preferred return address is the SMC itself.
-		 */
-		if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
-			*vcpu_pc(vcpu) -= 4;
-
-		return 1;
-	}
-
 	exception_index = ARM_EXCEPTION_CODE(exception_index);
 
 	switch (exception_index) {
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d687e574cde5..668f02c7b0b3 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -411,6 +411,21 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
+	if (ARM_SERROR_PENDING(*exit_code)) {
+		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
+
+		/*
+		 * HVC already have an adjusted PC, which we need to
+		 * correct in order to return to after having injected
+		 * the SError.
+		 *
+		 * SMC, on the other hand, is *trapped*, meaning its
+		 * preferred return address is the SMC itself.
+		 */
+		if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
+			*vcpu_pc(vcpu) -= 4;
+	}
+
 	/*
 	 * We're using the raw exception code in order to only process
 	 * the trap if no SError is pending. We will come back to the