
[v1] KVM: arm64: Tidying up PAuth code in KVM

Message ID 20240722123740.674846-1-tabba@google.com (mailing list archive)
State New, archived

Commit Message

Fuad Tabba July 22, 2024, 12:37 p.m. UTC
Tidy up some of the PAuth trapping code to clear up some comments
and avoid clang/checkpatch warnings. Also, do not bother setting
the PAuth HCR_EL2 bits for protected VMs in pKVM, since that is
handled by the hypervisor.

Fixes: 814ad8f96e92 ("KVM: arm64: Drop trapping of PAuth instructions/keys")
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_ptrauth.h    | 2 +-
 arch/arm64/kvm/arm.c                    | 7 ++++---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 1 -
 arch/arm64/kvm/hyp/nvhe/switch.c        | 5 ++---
 4 files changed, 7 insertions(+), 8 deletions(-)


base-commit: 0c3836482481200ead7b416ca80c68a29cfdaabd

Comments

Marc Zyngier July 22, 2024, 3:08 p.m. UTC | #1
Hi Fuad,

On Mon, 22 Jul 2024 13:37:40 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Tidy up some of the PAuth trapping code to clear up some comments
> and avoid clang/checkpatch warnings. Also, do not bother setting
> the PAuth HCR_EL2 bits for protected VMs in pKVM, since that is
> handled by the hypervisor.
>
> Fixes: 814ad8f96e92 ("KVM: arm64: Drop trapping of PAuth instructions/keys")

nit: AFAICT, this doesn't really fix anything. It has no material
impact on the guest or the hypervisor.

> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
>  arch/arm64/include/asm/kvm_ptrauth.h    | 2 +-
>  arch/arm64/kvm/arm.c                    | 7 ++++---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 1 -
>  arch/arm64/kvm/hyp/nvhe/switch.c        | 5 ++---
>  4 files changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_ptrauth.h b/arch/arm64/include/asm/kvm_ptrauth.h
> index d81bac256abc..6199c9f7ec6e 100644
> --- a/arch/arm64/include/asm/kvm_ptrauth.h
> +++ b/arch/arm64/include/asm/kvm_ptrauth.h
> @@ -104,7 +104,7 @@ alternative_else_nop_endif
>  
>  #define __ptrauth_save_key(ctxt, key)					\
>  	do {								\
> -		u64 __val;                                              \
> +		u64 __val;						\
>  		__val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
>  		ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val;		\
>  		__val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 59716789fe0f..6516348024ba 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -510,10 +510,10 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
>  
>  static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
>  {
> -	if (vcpu_has_ptrauth(vcpu)) {
> +	if (vcpu_has_ptrauth(vcpu) && !vcpu_is_protected(vcpu)) {

I think this isn't quite correct.

Non-protected VMs in protected mode are still subjected to pKVM's own
handling of the HCR_EL2 configuration, and the whole thing should be
skipped altogether in that case, irrespective of the pauth status of
the vcpu.

What pKVM should evaluate is that status and decide for itself whether
it must enable it or not. You can then hoist the check for protected
mode early and skip the whole function unconditionally, irrespective
of the protected status of the vcpu.

Thanks,

	M.
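
For illustration only: the restructuring Marc suggests would look roughly like the sketch below. This is not the posted patch (or any later revision), just the idea of hoisting the protected-mode check; it assumes the existing is_protected_kvm_enabled() helper is the right predicate for "protected mode".

static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
{
	/*
	 * In protected mode, pKVM owns the guest's HCR_EL2 configuration,
	 * so skip the whole function, regardless of whether this
	 * particular vCPU is protected or has ptrauth enabled.
	 */
	if (is_protected_kvm_enabled())
		return;

	if (vcpu_has_ptrauth(vcpu)) {
		/* ... existing API/APK handling and key saving, unchanged ... */
	}
}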
Fuad Tabba July 22, 2024, 3:22 p.m. UTC | #2
Hi Marc,

On Mon, 22 Jul 2024 at 16:09, Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Fuad,
>
> On Mon, 22 Jul 2024 13:37:40 +0100,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Tidy up some of the PAuth trapping code to clear up some comments
> > and avoid clang/checkpatch warnings. Also, do not bother setting
> > the PAuth HCR_EL2 bits for protected VMs in pKVM, since that is
> > handled by the hypervisor.
> >
> > Fixes: 814ad8f96e92 ("KVM: arm64: Drop trapping of PAuth instructions/keys")
>
> nit: AFAICT, this doesn't really fix anything. It has no material
> impact on the guest or the hypervisor.

Got it.

> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> >  arch/arm64/include/asm/kvm_ptrauth.h    | 2 +-
> >  arch/arm64/kvm/arm.c                    | 7 ++++---
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 1 -
> >  arch/arm64/kvm/hyp/nvhe/switch.c        | 5 ++---
> >  4 files changed, 7 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_ptrauth.h b/arch/arm64/include/asm/kvm_ptrauth.h
> > index d81bac256abc..6199c9f7ec6e 100644
> > --- a/arch/arm64/include/asm/kvm_ptrauth.h
> > +++ b/arch/arm64/include/asm/kvm_ptrauth.h
> > @@ -104,7 +104,7 @@ alternative_else_nop_endif
> >
> >  #define __ptrauth_save_key(ctxt, key)                                        \
> >       do {                                                            \
> > -             u64 __val;                                              \
> > +             u64 __val;                                              \
> >               __val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);        \
> >               ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val;           \
> >               __val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);        \
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 59716789fe0f..6516348024ba 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -510,10 +510,10 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
> >
> >  static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
> >  {
> > -     if (vcpu_has_ptrauth(vcpu)) {
> > +     if (vcpu_has_ptrauth(vcpu) && !vcpu_is_protected(vcpu)) {
>
> I think this isn't quite correct.
>
> Non-protected VMs in protected mode are still subjected to pKVM's own
> handling of the HCR_EL2 configuration, and the whole thing should be
> skipped altogether in that case, irrespective of the pauth status of
> the vcpu.
>
> What pKVM should evaluate is that status and decide for itself whether
> it must enable it or not. You can then hoist the check for protected
> mode early and skip the whole function unconditionally, irrespective
> of the protected status of the vcpu.

What pKVM does right now is use the host value of hcr_el2 as a base,
and ensure that the HCR_GUEST_FLAGS are also set. That said, it
doesn't do that for MDCR_EL2 nor for CPTR_EL2. Thinking about it, I
think it makes more sense for HCR_EL2 in pKVM to be initialized
completely at EL2, and hoist the check, like you're suggesting.

I'll fix that (in this patch and in pKVM), and resend.

Thanks!
/fuad

> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
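
For context, the host-value-as-base flow Fuad describes above amounts to something like the following on the hypervisor side when it picks up a vCPU from the host. This is a rough sketch with approximate names, not verbatim pKVM code:

	/* Start from the host-provided value... */
	hyp_vcpu->vcpu.arch.hcr_el2 = READ_ONCE(host_vcpu->arch.hcr_el2);
	/* ...and make sure the mandatory guest bits are always set. */
	hyp_vcpu->vcpu.arch.hcr_el2 |= HCR_GUEST_FLAGS;

Initialising HCR_EL2 entirely at EL2 instead, as discussed, would mean the hypervisor computes the full value from the vCPU's own feature set rather than inheriting trap bits from the host.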

Patch

diff --git a/arch/arm64/include/asm/kvm_ptrauth.h b/arch/arm64/include/asm/kvm_ptrauth.h
index d81bac256abc..6199c9f7ec6e 100644
--- a/arch/arm64/include/asm/kvm_ptrauth.h
+++ b/arch/arm64/include/asm/kvm_ptrauth.h
@@ -104,7 +104,7 @@  alternative_else_nop_endif
 
 #define __ptrauth_save_key(ctxt, key)					\
 	do {								\
-		u64 __val;                                              \
+		u64 __val;						\
 		__val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
 		ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val;		\
 		__val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 59716789fe0f..6516348024ba 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -510,10 +510,10 @@  void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 
 static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
 {
-	if (vcpu_has_ptrauth(vcpu)) {
+	if (vcpu_has_ptrauth(vcpu) && !vcpu_is_protected(vcpu)) {
 		/*
-		 * Either we're running running an L2 guest, and the API/APK
-		 * bits come from L1's HCR_EL2, or API/APK are both set.
+		 * Either we're running an L2 guest, and the API/APK bits come
+		 * from L1's HCR_EL2, or API/APK are both set.
 		 */
 		if (unlikely(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) {
 			u64 val;
@@ -540,6 +540,7 @@  static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
 
 		if (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)) {
 			struct kvm_cpu_context *ctxt;
+
 			ctxt = this_cpu_ptr_hyp_sym(kvm_hyp_ctxt);
 			ptrauth_save_keys(ctxt);
 		}
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 0c4de44534b7..9eb68c0dd727 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -27,7 +27,6 @@ 
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_nested.h>
-#include <asm/kvm_ptrauth.h>
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6af179c6356d..8f5c56d5b1cd 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -173,9 +173,8 @@  static void __pmu_switch_to_host(struct kvm_vcpu *vcpu)
 static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	/*
-	 * Make sure we handle the exit for workarounds and ptrauth
-	 * before the pKVM handling, as the latter could decide to
-	 * UNDEF.
+	 * Make sure we handle the exit for workarounds before the pKVM
+	 * handling, as the latter could decide to UNDEF.
 	 */
 	return (kvm_hyp_handle_sysreg(vcpu, exit_code) ||
 		kvm_handle_pvm_sysreg(vcpu, exit_code));