[v4,2/6] arm64/kvm: context-switch ptrauth registers

Message ID 1545119810-12182-3-git-send-email-amit.kachhap@arm.com
State New
Series
  • Add ARMv8.3 pointer authentication for kvm guest

Commit Message

Amit Kachhap Dec. 18, 2018, 7:56 a.m. UTC
When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

The pointer authentication feature is only enabled when VHE is built
into the kernel and present in the CPU implementation, so only the VHE
code paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h     |  1 +
 arch/arm64/include/asm/cpufeature.h |  6 +++
 arch/arm64/include/asm/kvm_host.h   | 24 ++++++++++++
 arch/arm64/include/asm/kvm_hyp.h    |  7 ++++
 arch/arm64/kernel/traps.c           |  1 +
 arch/arm64/kvm/handle_exit.c        | 24 +++++++-----
 arch/arm64/kvm/hyp/Makefile         |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c     | 73 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/switch.c         |  4 ++
 arch/arm64/kvm/sys_regs.c           | 40 ++++++++++++++++----
 virt/kvm/arm/arm.c                  |  2 +
 11 files changed, 166 insertions(+), 17 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

Comments

James Morse Jan. 4, 2019, 5:57 p.m. UTC | #1
Hi Amit,

On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
> 
> Pointer authentication feature is only enabled when VHE is built
> into the kernel and present into CPU implementation so only VHE code
> paths are modified.
> 
> When we schedule a vcpu, we disable guest usage of pointer
> authentication instructions and accesses to the keys. While these are
> disabled, we avoid context-switching the keys. When we trap the guest
> trying to use pointer authentication functionality, we change to eagerly
> context-switching the keys, and enable the feature. The next time the
> vcpu is scheduled out/in, we start again.

Why start again?
Taking the trap at all suggests the guest knows about pointer-auth; if it uses
it, it's probably in some context-switch code. It would be good to avoid trapping
that.


> Pointer authentication consists of address authentication and generic
> authentication, and CPUs in a system might have varied support for
> either. Where support for either feature is not uniform, it is hidden
> from guests via ID register emulation, as a result of the cpufeature
> framework in the host.
> 
> Unfortunately, address authentication and generic authentication cannot
> be trapped separately, as the architecture provides a single EL2 trap
> covering both. If we wish to expose one without the other, we cannot
> prevent a (badly-written) guest from intermittently using a feature
> which is not uniformly supported (when scheduled on a physical CPU which
> supports the relevant feature).

Yuck!


> When the guest is scheduled on a
> physical CPU lacking the feature, these attemts will result in an UNDEF

(attempts)

> being taken by the guest.

Can we only expose the feature to a guest if both address and generic
authentication are supported? (and hide it from the id register otherwise).

This avoids having to know if this cpu supports address/generic when the system
doesn't. We would need to scrub the values to avoid guest-values being left
loaded when something else is running.


> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1c8393f..ac7d496 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
>  		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
>  }
>  
> +static inline bool system_supports_ptrauth(void)
> +{
> +	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> +		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
> +}

(mainline has a system_supports_address_auth(), I guess this will be folded
around a bit during the rebase).

kvm_supports_ptrauth() that checks the two architected algorithm caps?


> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index ab35929..5c47a8f47 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  }
>  
>  /*
> + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> + * ptrauth register. This trap should not occur as we enable ptrauth during
> + * vcpu schedule itself but is anyway kept here for any unfortunate scenario.

... so we don't need this? Or if it ever runs it's indicative of a bug?


> + */
> +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> +{
> +	if (system_supports_ptrauth())
> +		kvm_arm_vcpu_ptrauth_enable(vcpu);
> +	else
> +		kvm_inject_undefined(vcpu);
> +}
> +
> +/*
>   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> - * a NOP).
> + * a NOP), or guest EL1 access to a ptrauth register.
>   */
>  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	/*
> -	 * We don't currently support ptrauth in a guest, and we mask the ID
> -	 * registers to prevent well-behaved guests from trying to make use of
> -	 * it.
> -	 *
> -	 * Inject an UNDEF, as if the feature really isn't present.
> -	 */
> -	kvm_inject_undefined(vcpu);
> +	kvm_arm_vcpu_ptrauth_trap(vcpu);
>  	return 1;
>  }

> diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> new file mode 100644
> index 0000000..1bfaf74
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> @@ -0,0 +1,73 @@

> +{
> +	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> +}

(If you include the IS_ENABLED() in here too, the compiler can work out
pointer-auth was compiled out and prune all the unused static functions.)


> +#define __ptrauth_save_key(regs, key)						\
> +({										\
> +	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
> +	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
> +})

(shame we can't re-use the arch code's macros for these)


> +static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> +{
> +	__ptrauth_save_key(ctxt->sys_regs, APIA);
> +	__ptrauth_save_key(ctxt->sys_regs, APIB);
> +	__ptrauth_save_key(ctxt->sys_regs, APDA);
> +	__ptrauth_save_key(ctxt->sys_regs, APDB);

> +	__ptrauth_save_key(ctxt->sys_regs, APGA);

I thought the generic authentication bits had a separate ID-register-field/cap?
But you only checked address-auth above. Is it possible the CPU doesn't have
this register?

(but I think we should only support this if both generic&address are supported)


> +}

> +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> +					  struct kvm_cpu_context *host_ctxt,
> +					  struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(host_ctxt);
> +	__ptrauth_restore_state(guest_ctxt);
> +}
> +
> +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> +					 struct kvm_cpu_context *host_ctxt,
> +					 struct kvm_cpu_context *guest_ctxt)
> +{
> +	if (!__ptrauth_is_enabled(vcpu))
> +		return;
> +
> +	__ptrauth_save_state(guest_ctxt);
> +	__ptrauth_restore_state(host_ctxt);
> +}


> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 85a2a5c..6c57552 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  	sysreg_restore_guest_state_vhe(guest_ctxt);
>  	__debug_switch_to_guest(vcpu);
>  
> +	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> +
>  	__set_guest_arch_workaround_state(vcpu);
>  
>  	do {
> @@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>  
>  	__set_host_arch_workaround_state(vcpu);
>  
> +	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> +
>  	sysreg_save_guest_state_vhe(guest_ctxt);
>  
>  	__deactivate_traps(vcpu);

This is deep in the world-switch code, meaning we save/restore 10 sysregs for
every entry/exit. As the kernel isn't using pointer-auth, wouldn't it be better
to defer the save/restore to the preempt notifier using kvm_arch_vcpu_load()?

I've seen the discussion that the kernel may start using this 'soon':
lore.kernel.org/r/20181112223212.5o4ipc5kt5ziuupt@localhost

(does anyone know when soon is?)

... but it doesn't today, and save/restoring these in C becomes impossible when
that happens, which the arch code is doing too, so we aren't inventing a new
problem.


probable-tangent: {
If the kernel starts using ptrauth, it will need to switch these registers in
the entry asm, and in __cpu_switch_to()... there will be new asm macros to do this.

Can we make it so that the same series that does that, can easily do KVM too?

If we use the preempt-notifier stuff the GET/SET_ONE_REG code knows that today
the ptrauth values are always loaded on the cpu, it doesn't need to look in the
sys_reg_descs[] array.
This means the preempt notifier could save/restore them to a struct ptrauth_keys
in struct kvm_vcpu_arch. This lets us use the arch codes __ptrauth_key_install()
and ptrauth_keys_switch() helpers today, instead of duplicating them because the
layout is slightly different.

Once these become asm macros, the same structure is already in use, so we can
(cough) probably feed it a different base pointer for guest-entry/ret_to_user.
guest-exit would almost be the same as kernel_enter; it's only the
save-guest-values that would be specific to kvm.

(oh, and the GET/SET_ONE_REG code would need to know to look elsewhere for the
guest values, but it looks like your patch 5 is already doing some of this).

}


> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 1ca592d..6af6c7d 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
>  	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
>  
> +
> +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> +}
> +
> +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> +}
> +

> @@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  			kvm_debug("SVE unsupported for guests, suppressing\n");
>  
>  		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> -	} else if (id == SYS_ID_AA64ISAR1_EL1) {
> -		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> -					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> -					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> -					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);

Don't we still want to remove the imp-def bits? and the APA/GPA bits if we
decide there is incomplete support.


> -		if (val & ptrauth_mask)
> -			kvm_debug("ptrauth unsupported for guests, suppressing\n");
> -		val &= ~ptrauth_mask;
>  	} else if (id == SYS_ID_AA64MMFR1_EL1) {
>  		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
>  			kvm_debug("LORegions unsupported for guests, suppressing\n");


> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index feda456..1f3b6ed 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		vcpu_clear_wfe_traps(vcpu);
>  	else
>  		vcpu_set_wfe_traps(vcpu);
> +
> +	kvm_arm_vcpu_ptrauth_config(vcpu);

(doesn't this unconfigure ptrauth?!)

>  }

Thanks,

James
Amit Kachhap Jan. 9, 2019, 10:01 a.m. UTC | #2
Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse <james.morse@arm.com> wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
> > a semi-lazy context switch of the pointer auth state.
> >
> > Pointer authentication feature is only enabled when VHE is built
> > into the kernel and present into CPU implementation so only VHE code
> > paths are modified.
> >
> > When we schedule a vcpu, we disable guest usage of pointer
> > authentication instructions and accesses to the keys. While these are
> > disabled, we avoid context-switching the keys. When we trap the guest
> > trying to use pointer authentication functionality, we change to eagerly
> > context-switching the keys, and enable the feature. The next time the
> > vcpu is scheduled out/in, we start again.
>
> Why start again?
> Taking the trap at all suggests the guest knows about pointer-auth, if it uses
> it, its probably in some context switch code. It would be good to avoid trapping
> that.
Here is a pointer to the earlier discussion:
https://lore.kernel.org/lkml/20180409125818.GE10904@cbox/.
It seems some agreement was reached there to flip the ptrauth status on
every vcpu load/unload, as this caters to both ptrauth-enabled and
ptrauth-disabled applications.
>
>
> > Pointer authentication consists of address authentication and generic
> > authentication, and CPUs in a system might have varied support for
> > either. Where support for either feature is not uniform, it is hidden
> > from guests via ID register emulation, as a result of the cpufeature
> > framework in the host.
> >
> > Unfortunately, address authentication and generic authentication cannot
> > be trapped separately, as the architecture provides a single EL2 trap
> > covering both. If we wish to expose one without the other, we cannot
> > prevent a (badly-written) guest from intermittently using a feature
> > which is not uniformly supported (when scheduled on a physical CPU which
> > supports the relevant feature).
>
> Yuck!
:)
>
>
> > When the guest is scheduled on a
> > physical CPU lacking the feature, these attemts will result in an UNDEF
>
> (attempts)
ok.
>
> > being taken by the guest.
>
> Can we only expose the feature to a guest if both address and generic
> authentication are supported? (and hide it from the id register otherwise).
>
> This avoids having to know if this cpu supports address/generic when the system
> doesn't. We would need to scrub the values to avoid guest-values being left
> loaded when something else is running.
Yes, it can be done. Currently it is done indirectly via a userspace
option. I agree with you on this.
>
>
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index 1c8393f..ac7d496 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
> >               cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
> >  }
> >
> > +static inline bool system_supports_ptrauth(void)
> > +{
> > +     return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> > +             cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
> > +}
>
> (mainline has a system_supports_address_auth(), I guess this will be folded
> around a bit during the rebase).
>
> system_supports_ptrauth() that checks the two architected algorithm caps?
Yes, I will add that.
>
>
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index ab35929..5c47a8f47 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  }
> >
> >  /*
> > + * Handle the guest trying to use a ptrauth instruction, or trying to access a
> > + * ptrauth register. This trap should not occur as we enable ptrauth during
> > + * vcpu schedule itself but is anyway kept here for any unfortunate scenario.
>
> ... so we don't need this? Or if it ever runs its indicative of a bug?
Sorry, this comment is confusing and is a leftover from the previous v3 version.
>
>
> > + */
> > +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> > +{
> > +     if (system_supports_ptrauth())
> > +             kvm_arm_vcpu_ptrauth_enable(vcpu);
> > +     else
> > +             kvm_inject_undefined(vcpu);
> > +}
> > +
> > +/*
> >   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
> > - * a NOP).
> > + * a NOP), or guest EL1 access to a ptrauth register.
> >   */
> >  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  {
> > -     /*
> > -      * We don't currently support ptrauth in a guest, and we mask the ID
> > -      * registers to prevent well-behaved guests from trying to make use of
> > -      * it.
> > -      *
> > -      * Inject an UNDEF, as if the feature really isn't present.
> > -      */
> > -     kvm_inject_undefined(vcpu);
> > +     kvm_arm_vcpu_ptrauth_trap(vcpu);
> >       return 1;
> >  }
>
> > diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > new file mode 100644
> > index 0000000..1bfaf74
> > --- /dev/null
> > +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > @@ -0,0 +1,73 @@
>
> > +{
> > +     return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
> > +}
>
> (If you include the the IS_ENABLED() in here too, the compiler can work out
> pointer-auth was compiled out and prune all the unused static functions.)
ok.
>
>
> > +#define __ptrauth_save_key(regs, key)                                                \
> > +({                                                                           \
> > +     regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);       \
> > +     regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);       \
> > +})
>
> (shame we can't re-use the arch code's macros for these)
>
>
> > +static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
> > +{
> > +     __ptrauth_save_key(ctxt->sys_regs, APIA);
> > +     __ptrauth_save_key(ctxt->sys_regs, APIB);
> > +     __ptrauth_save_key(ctxt->sys_regs, APDA);
> > +     __ptrauth_save_key(ctxt->sys_regs, APDB);
>
> > +     __ptrauth_save_key(ctxt->sys_regs, APGA);
>
> I thought the generic authentication bits had a separate ID-register-field/cap?
> But you only checked address-auth above. Is it possible the CPU doesn't have
> this register?
Yes, that may be possible; Kristina's generic ptrauth userspace patches
have a separate check for the generic key. I will add a similar check
here.
>
> (but I think we should only support this if both generic&address are supported)
>
>
> > +}
>
> > +void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
> > +                                       struct kvm_cpu_context *host_ctxt,
> > +                                       struct kvm_cpu_context *guest_ctxt)
> > +{
> > +     if (!__ptrauth_is_enabled(vcpu))
> > +             return;
> > +
> > +     __ptrauth_save_state(host_ctxt);
> > +     __ptrauth_restore_state(guest_ctxt);
> > +}
> > +
> > +void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
> > +                                      struct kvm_cpu_context *host_ctxt,
> > +                                      struct kvm_cpu_context *guest_ctxt)
> > +{
> > +     if (!__ptrauth_is_enabled(vcpu))
> > +             return;
> > +
> > +     __ptrauth_save_state(guest_ctxt);
> > +     __ptrauth_restore_state(host_ctxt);
> > +}
>
>
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index 85a2a5c..6c57552 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -508,6 +508,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
> >       sysreg_restore_guest_state_vhe(guest_ctxt);
> >       __debug_switch_to_guest(vcpu);
> >
> > +     __ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
> > +
> >       __set_guest_arch_workaround_state(vcpu);
> >
> >       do {
> > @@ -519,6 +521,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
> >
> >       __set_host_arch_workaround_state(vcpu);
> >
> > +     __ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
> > +
> >       sysreg_save_guest_state_vhe(guest_ctxt);
> >
> >       __deactivate_traps(vcpu);
>
> This is deep in the world-switch code, meaning we save/restore 10 sysregs for
> every entry/exit. As the kernel isn't using pointer-auth, wouldn't it be better
> to defer the save/restore to the preempt notifier using kvm_arch_vcpu_load()?
>
> I've seen the discussion that the kernel may start using this 'soon':
> lore.kernel.org/r/20181112223212.5o4ipc5kt5ziuupt@localhost
>
> (does anyone know when soon is?)
One version is already posted and the next version is being worked on.
>
> ... but it doesn't today, and save/restoring these in C becomes impossible when
> that happens, which the arch code is doing too, so we aren't inventing a new
> problem.
Save and restore of the keys is possible in a C function with a GCC
function attribute like sign-return-address=none. I have left the switch
function as it is for now; only this change is needed to enable it for
kernel ptrauth.
>
>
> probable-tangent: {
> If the kernel starts using ptrauth, it will need to switch these registers in
> the entry asm, and in __cpu_switch_to()... there will be new asm macro's to do this.
>
> Can we make it so that the same series that does that, can easily do KVM too?
Yes it should be possible.
>
> If we use the preempt-notifier stuff the GET/SET_ONE_REG code knows that today
> the ptrauth values are always loaded on the cpu, it doesn't need to look in the
> sys_reg_descs[] array.
> This means the preempt notifier could save/restore them to a struct ptrauth_keys
> in struct kvm_vcpu_arch. This lets us use the arch codes __ptrauth_key_install()
> and ptrauth_keys_switch() helpers today, instead of duplicating them because the
> layout is slightly different.
Since a vcpu is a thread, the keys are allocated by the current userspace
ptrauth patches, and I can see during guest switch that the keys are
present in the guest context. Do you see an advantage in allocating the
keys in the above way?
>
> Once these become asm macros, the same structure is already in use, so we can
> (cough) probably feed it a different base pointer for guest-entry/ret_to_user.
> guest-exit would almost be the same as kernel_enter, its only the
> save-guest-values that would be specific to kvm.
Yes, reusing will be better.
>
> (oh, and the GET/SET_ONE_REG code would need to know to look elsewhere for the
> guest values, but it looks like your patch 5 is already doing some of this).
ok.
>
> }
>
>
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 1ca592d..6af6c7d 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -986,6 +986,32 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >       { SYS_DESC(SYS_PMEVTYPERn_EL0(n)),                                      \
> >         access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
> >
> > +
> > +void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> > +{
> > +     vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> > +}
> > +
> > +void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> > +{
> > +     vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> > +}
> > +
>
> > @@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> >                       kvm_debug("SVE unsupported for guests, suppressing\n");
> >
> >               val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > -     } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > -             const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > -                                      (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > -                                      (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > -                                      (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
>
> Don't we still want to remove the imp-def bits? and the APA/GPA bits if we
> decide there is incomplete support.
I think it was decided to enable all features even when they are not
supported. I cannot find the link, though.
>
>
> > -             if (val & ptrauth_mask)
> > -                     kvm_debug("ptrauth unsupported for guests, suppressing\n");
> > -             val &= ~ptrauth_mask;
> >       } else if (id == SYS_ID_AA64MMFR1_EL1) {
> >               if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
> >                       kvm_debug("LORegions unsupported for guests, suppressing\n");
>
>
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index feda456..1f3b6ed 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -388,6 +388,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >               vcpu_clear_wfe_traps(vcpu);
> >       else
> >               vcpu_set_wfe_traps(vcpu);
> > +
> > +     kvm_arm_vcpu_ptrauth_config(vcpu);
>
> (doesn't this unconfigure ptrauth?!)
OK, I will change the name to something like kvm_arm_vcpu_ptrauth_reset.
>
> >  }
>
> Thanks,
>
> James

//Amit

Patch

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 0f012c8..02d9bfc 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -351,6 +351,7 @@  static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1c8393f..ac7d496 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -526,6 +526,12 @@  static inline bool system_supports_generic_auth(void)
 		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
+static inline bool system_supports_ptrauth(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+}
+
 #define ARM64_SSBD_UNKNOWN		-1
 #define ARM64_SSBD_FORCE_DISABLE	0
 #define ARM64_SSBD_KERNEL		1
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 1b9eed9..629712d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@  enum vcpu_sysreg {
 	PMSWINC_EL0,	/* Software Increment Register */
 	PMUSERENR_EL0,	/* User Enable Register */
 
+	/* Pointer Authentication Registers */
+	APIAKEYLO_EL1,
+	APIAKEYHI_EL1,
+	APIBKEYLO_EL1,
+	APIBKEYHI_EL1,
+	APDAKEYLO_EL1,
+	APDAKEYHI_EL1,
+	APDBKEYLO_EL1,
+	APDBKEYHI_EL1,
+	APGAKEYLO_EL1,
+	APGAKEYHI_EL1,
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -439,6 +451,18 @@  static inline bool kvm_arch_check_sve_has_vhe(void)
 		return true;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+
+static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu)
+{
+	/* Disable ptrauth and use it in a lazy context via traps */
+	if (has_vhe() && system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 23aca66..ebffd44 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -152,6 +152,13 @@  bool __fpsimd_enabled(void);
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(void);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+			       struct kvm_cpu_context *host_ctxt,
+			       struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+			      struct kvm_cpu_context *host_ctxt,
+			      struct kvm_cpu_context *guest_ctxt);
+
 u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
 void __noreturn __hyp_do_panic(unsigned long, ...);
 
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 5f4d9ac..90e18de 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -748,6 +748,7 @@  static const char *esr_class_str[] = {
 	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
 	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
 	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
+	[ESR_ELx_EC_PAC]		= "Pointer authentication trap",
 	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
 	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
 	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index ab35929..5c47a8f47 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -174,19 +174,25 @@  static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register. This trap should not occur as we enable ptrauth during
+ * vcpu schedule itself but is anyway kept here for any unfortunate scenario.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+	if (system_supports_ptrauth())
+		kvm_arm_vcpu_ptrauth_enable(vcpu);
+	else
+		kvm_inject_undefined(vcpu);
+}
+
+/*
  * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP).
+ * a NOP), or guest EL1 access to a ptrauth register.
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	/*
-	 * We don't currently support ptrauth in a guest, and we mask the ID
-	 * registers to prevent well-behaved guests from trying to make use of
-	 * it.
-	 *
-	 * Inject an UNDEF, as if the feature really isn't present.
-	 */
-	kvm_inject_undefined(vcpu);
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
 	return 1;
 }
 
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 82d1904..17cec99 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -19,6 +19,7 @@  obj-$(CONFIG_KVM_ARM_HOST) += switch.o
 obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 0000000..1bfaf74
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,73 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
+ *
+ * Copyright 2018 Arm Limited
+ * Author: Mark Rutland <mark.rutland@arm.com>
+ *         Amit Daniel Kachhap <amit.kachhap@arm.com>
+ */
+#include <linux/compiler.h>
+#include <linux/kvm_host.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+#include <asm/pointer_auth.h>
+
+static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)						\
+({										\
+	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_save_key(ctxt->sys_regs, APIA);
+	__ptrauth_save_key(ctxt->sys_regs, APIB);
+	__ptrauth_save_key(ctxt->sys_regs, APDA);
+	__ptrauth_save_key(ctxt->sys_regs, APDB);
+	__ptrauth_save_key(ctxt->sys_regs, APGA);
+}
+
+#define __ptrauth_restore_key(regs, key)					\
+({										\
+	write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+	write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __hyp_text __ptrauth_restore_state(struct kvm_cpu_context *ctxt)
+{
+	__ptrauth_restore_key(ctxt->sys_regs, APIA);
+	__ptrauth_restore_key(ctxt->sys_regs, APIB);
+	__ptrauth_restore_key(ctxt->sys_regs, APDA);
+	__ptrauth_restore_key(ctxt->sys_regs, APDB);
+	__ptrauth_restore_key(ctxt->sys_regs, APGA);
+}
+
+void __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+					  struct kvm_cpu_context *host_ctxt,
+					  struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(host_ctxt);
+	__ptrauth_restore_state(guest_ctxt);
+}
+
+void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+					 struct kvm_cpu_context *host_ctxt,
+					 struct kvm_cpu_context *guest_ctxt)
+{
+	if (!__ptrauth_is_enabled(vcpu))
+		return;
+
+	__ptrauth_save_state(guest_ctxt);
+	__ptrauth_restore_state(host_ctxt);
+}
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 85a2a5c..6c57552 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -508,6 +508,8 @@  int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);
+
 	__set_guest_arch_workaround_state(vcpu);
 
 	do {
@@ -519,6 +521,8 @@  int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__set_host_arch_workaround_state(vcpu);
 
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);
+
 	sysreg_save_guest_state_vhe(guest_ctxt);
 
 	__deactivate_traps(vcpu);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1ca592d..6af6c7d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -986,6 +986,32 @@  static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ SYS_DESC(SYS_PMEVTYPERn_EL0(n)),					\
 	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
 
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
+}
+
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+}
+
+static bool trap_ptrauth(struct kvm_vcpu *vcpu,
+			 struct sys_reg_params *p,
+			 const struct sys_reg_desc *rd)
+{
+	kvm_arm_vcpu_ptrauth_trap(vcpu);
+	return false;
+}
+
+#define __PTRAUTH_KEY(k)						\
+	{ SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+
+#define PTRAUTH_KEY(k)							\
+	__PTRAUTH_KEY(k ## KEYLO_EL1),					\
+	__PTRAUTH_KEY(k ## KEYHI_EL1)
+
 static bool access_cntp_tval(struct kvm_vcpu *vcpu,
 		struct sys_reg_params *p,
 		const struct sys_reg_desc *r)
@@ -1040,14 +1066,6 @@  static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 			kvm_debug("SVE unsupported for guests, suppressing\n");
 
 		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
-	} else if (id == SYS_ID_AA64ISAR1_EL1) {
-		const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
-					 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-		if (val & ptrauth_mask)
-			kvm_debug("ptrauth unsupported for guests, suppressing\n");
-		val &= ~ptrauth_mask;
 	} else if (id == SYS_ID_AA64MMFR1_EL1) {
 		if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
 			kvm_debug("LORegions unsupported for guests, suppressing\n");
@@ -1316,6 +1334,12 @@  static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
 	{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
+	PTRAUTH_KEY(APIA),
+	PTRAUTH_KEY(APIB),
+	PTRAUTH_KEY(APDA),
+	PTRAUTH_KEY(APDB),
+	PTRAUTH_KEY(APGA),
+
 	{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index feda456..1f3b6ed 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -388,6 +388,8 @@  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		vcpu_clear_wfe_traps(vcpu);
 	else
 		vcpu_set_wfe_traps(vcpu);
+
+	kvm_arm_vcpu_ptrauth_config(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)