KVM: arm64: Prevent vcpu_has_ptrauth from generating OOL functions

Message ID 20200722162231.3689767-1-maz@kernel.org (mailing list archive)
State Mainlined
Commit bf4086b1a1efa3d3a2c17582e00bbd2176dfe177
Series KVM: arm64: Prevent vcpu_has_ptrauth from generating OOL functions

Commit Message

Marc Zyngier July 22, 2020, 4:22 p.m. UTC
So far, vcpu_has_ptrauth() is implemented in terms of system_supports_*_auth()
calls, which are declared "inline". In some specific conditions (clang
and SCS), the "inline" very much turns into an "out of line", which
leads to fireworks when this predicate is evaluated on a non-VHE
system (right at the beginning of __hyp_handle_ptrauth).

Instead, make sure vcpu_has_ptrauth gets expanded inline by directly
using the cpus_have_final_cap() helpers, which are __always_inline,
generate much better code, and are the only thing that makes sense when
running at EL2 on an nVHE system.

Fixes: 29eb5a3c57f7 ("KVM: arm64: Handle PtrAuth traps early")
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
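
For reference, the distinction this relies on is between the plain "inline"
wrappers and the __always_inline capability helpers. The sketch below is a
simplified rendering of the two (not the exact kernel source; the bodies
vary between versions):

/* Plain "inline": the compiler is still free to emit an out-of-line copy. */
static inline bool system_supports_address_auth(void)
{
	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
}

/*
 * __always_inline: guaranteed to be expanded at the call site, so the
 * resulting check can safely be evaluated from the EL2/hyp text on nVHE.
 */
static __always_inline bool cpus_have_final_cap(int num)
{
	if (system_capabilities_finalized())
		return __cpus_have_const_cap(num);
	else
		BUG();
}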

Comments

Nathan Chancellor July 23, 2020, 2:51 a.m. UTC | #1
On Wed, Jul 22, 2020 at 05:22:31PM +0100, Marc Zyngier wrote:
> So far, vcpu_has_ptrauth() is implemented in terms of system_supports_*_auth()
> calls, which are declared "inline". In some specific conditions (clang
> and SCS), the "inline" very much turns into an "out of line", which
> leads to fireworks when this predicate is evaluated on a non-VHE
> system (right at the beginning of __hyp_handle_ptrauth).
> 
> Instead, make sure vcpu_has_ptrauth gets expanded inline by directly
> using the cpus_have_final_cap() helpers, which are __always_inline,
> generate much better code, and are the only thing that makes sense when
> running at EL2 on an nVHE system.
> 
> Fixes: 29eb5a3c57f7 ("KVM: arm64: Handle PtrAuth traps early")
> Reported-by: Nathan Chancellor <natechancellor@gmail.com>
> Reported-by: Nick Desaulniers <ndesaulniers@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Thank you for the quick fix! I have booted a mainline kernel with this
patch with Shadow Call Stack enabled and verified that using KVM no
longer causes a panic.

Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>

For the future, is there an easy way to tell which type of system I am
using (nVHE or VHE)? I am new to the arm64 KVM world but it is something
that I am going to continue to test with various clang technologies now
that I have actual hardware capable of it that can run a mainline
kernel.

Cheers,
Nathan

> ---
>  arch/arm64/include/asm/kvm_host.h | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 147064314abf..a8278f6873e6 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -391,9 +391,14 @@ struct kvm_vcpu_arch {
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>  			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
>  
> -#define vcpu_has_ptrauth(vcpu)	((system_supports_address_auth() || \
> -				  system_supports_generic_auth()) && \
> -				 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +#define vcpu_has_ptrauth(vcpu)						\
> +	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||		\
> +	  cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&		\
> +	 (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
> +#else
> +#define vcpu_has_ptrauth(vcpu)		false
> +#endif
>  
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
>  
> -- 
> 2.28.0.rc0.142.g3c755180ce-goog
>
Zenghui Yu July 23, 2020, 3:30 a.m. UTC | #2
Hi Nathan,

On 2020/7/23 10:51, Nathan Chancellor wrote:
> For the future, is there an easy way to tell which type of system I am
> using (nVHE or VHE)?

AFAICT the easiest way is to look at the kernel log, where you will find
something like "{VHE,Hyp} mode initialized successfully". I get the
following message on my *VHE* box:

  # cat /var/log/dmesg | grep kvm
[    4.896295] kvm [1]: IPA Size Limit: 48bits
[    4.896339] [...]
[    4.899407] kvm [1]: VHE mode initialized successfully
                         ^^^

Have a look at kvm_arch_init(). With VHE, the host kernel is running at
EL2 (aka Hyp mode).
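
For illustration, this is roughly where that message comes from; a trimmed
sketch of kvm_arch_init() in arch/arm64/kvm/arm.c (simplified, the details
vary between kernel versions):

int kvm_arch_init(void *opaque)
{
	bool in_hyp_mode = is_kernel_in_hyp_mode();	/* true on a VHE system */

	/* ... capability checks and (for nVHE) init_hyp_mode() elided ... */

	if (in_hyp_mode)
		kvm_info("VHE mode initialized successfully\n");
	else
		kvm_info("Hyp mode initialized successfully\n");

	return 0;
}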


Thanks,
Zenghui
Marc Zyngier July 23, 2020, 8:17 a.m. UTC | #3
Hi Nathan,

On 2020-07-23 03:51, Nathan Chancellor wrote:
> On Wed, Jul 22, 2020 at 05:22:31PM +0100, Marc Zyngier wrote:
>> So far, vcpu_has_ptrauth() is implemented in terms of 
>> system_supports_*_auth()
>> calls, which are declared "inline". In some specific conditions (clang
>> and SCS), the "inline" very much turns into an "out of line", which
>> leads to fireworks when this predicate is evaluated on a non-VHE
>> system (right at the beginning of __hyp_handle_ptrauth).
>> 
>> Instead, make sure vcpu_has_ptrauth gets expanded inline by directly
>> using the cpus_have_final_cap() helpers, which are __always_inline,
>> generate much better code, and are the only thing that makes sense when
>> running at EL2 on an nVHE system.
>> 
>> Fixes: 29eb5a3c57f7 ("KVM: arm64: Handle PtrAuth traps early")
>> Reported-by: Nathan Chancellor <natechancellor@gmail.com>
>> Reported-by: Nick Desaulniers <ndesaulniers@google.com>
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> Thank you for the quick fix! I have booted a mainline kernel with this
> patch with Shadow Call Stack enabled and verified that using KVM no
> longer causes a panic.

Great! I'll try and ferry this to mainline as quickly as possible.

> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
> Tested-by: Nathan Chancellor <natechancellor@gmail.com>
> 
> For the future, is there an easy way to tell which type of system I am
> using (nVHE or VHE)? I am new to the arm64 KVM world but it is 
> something
> that I am going to continue to test with various clang technologies now
> that I have actual hardware capable of it that can run a mainline
> kernel.

ARMv8.0 CPUs are only capable of running non-VHE. So if you have
something based on older ARM CPUs (such as A57, A72, A53, A73, A35...),
or licensee CPUs (ThunderX, XGene, EMag...), this will only run
non-VHE (the host kernel runs at EL1, while the hypervisor runs at
EL2).

From ARMv8.1 onward, VHE is normally present, and the host kernel
can run at EL2 directly. ARM CPUs include A55, A65, A75, A76, A77,
N1, while licensee CPUs include TX2, Kunpeng 920, and probably some
more.

As pointed out by Zenghui in another email, KVM shows which mode
it is using. Even without KVM, the kernel prints very early on:

[    0.000000] CPU features: detected: Virtualization Host Extensions

Note that this is only a performance difference, and that most
features that are supported by the CPU can be used by KVM in either
mode.
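
If you want to check from code, the kernel keys this off the
ARM64_HAS_VIRT_HOST_EXTN capability; a simplified sketch of the helpers
involved (see arch/arm64/include/asm/virt.h for the real bodies, which
differ slightly between versions):

static inline bool is_kernel_in_hyp_mode(void)
{
	/* VHE: the host kernel itself is running at EL2 */
	return read_sysreg(CurrentEL) == CurrentEL_EL2;
}

static __always_inline bool has_vhe(void)
{
	/*
	 * ARM64_HAS_VIRT_HOST_EXTN is the capability behind the
	 * "Virtualization Host Extensions" line in the boot log.
	 */
	return cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN);
}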

Thanks again,

         M.
Nathan Chancellor July 23, 2020, 3:59 p.m. UTC | #4
On Thu, Jul 23, 2020 at 09:17:15AM +0100, Marc Zyngier wrote:
> Hi Nathan,
> 
> On 2020-07-23 03:51, Nathan Chancellor wrote:
> > On Wed, Jul 22, 2020 at 05:22:31PM +0100, Marc Zyngier wrote:
> > > So far, vcpu_has_ptrauth() is implemented in terms of
> > > system_supports_*_auth()
> > > calls, which are declared "inline". In some specific conditions (clang
> > > and SCS), the "inline" very much turns into an "out of line", which
> > > leads to fireworks when this predicate is evaluated on a non-VHE
> > > system (right at the beginning of __hyp_handle_ptrauth).
> > > 
> > > Instead, make sure vcpu_has_ptrauth gets expanded inline by directly
> > > using the cpus_have_final_cap() helpers, which are __always_inline,
> > > generate much better code, and are the only thing that makes sense when
> > > running at EL2 on an nVHE system.
> > > 
> > > Fixes: 29eb5a3c57f7 ("KVM: arm64: Handle PtrAuth traps early")
> > > Reported-by: Nathan Chancellor <natechancellor@gmail.com>
> > > Reported-by: Nick Desaulniers <ndesaulniers@google.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > 
> > Thank you for the quick fix! I have booted a mainline kernel with this
> > patch with Shadow Call Stack enabled and verified that using KVM no
> > longer causes a panic.
> 
> Great! I'll try and ferry this to mainline as quickly as possible.

Awesome, I will keep an eye out.

> > Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
> > Tested-by: Nathan Chancellor <natechancellor@gmail.com>
> > 
> > For the future, is there an easy way to tell which type of system I am
> > using (nVHE or VHE)? I am new to the arm64 KVM world but it is something
> > that I am going to continue to test with various clang technologies now
> > that I have actual hardware capable of it that can run a mainline
> > kernel.
> 
> ARMv8.0 CPUs are only capable of running non-VHE. So if you have
> something based on older ARM CPUs (such as A57, A72, A53, A73, A35...),
> or licensee CPUs (ThunderX, XGene, EMag...), this will only run
> non-VHE (the host kernel runs at EL1, while the hypervisor runs at
> EL2).
> 
> From ARMv8.1 onward, VHE is normally present, and the host kernel
> can run at EL2 directly. ARM CPUs include A55, A65, A75, A76, A77,
> N1, while licensee CPUs include TX2, Kunpeng 920, and probably some
> more.
> 
> As pointed out by Zenghui in another email, KVM shows which mode
> it is using. Even without KVM, the kernel prints very early on:
> 
> [    0.000000] CPU features: detected: Virtualization Host Extensions
> 
> Note that this is only a performance difference, and that most
> features that are supported by the CPU can be used by KVM in either
> mode.
> 
> Thanks again,
> 
>         M.
> -- 
> Jazz is not dead. It just smells funny...

Excellent, thank you both for the in-depth explanation. Hopefully my
test farm continues to grow so I can stay on top of testing this stuff.

Cheers,
Nathan

Patch

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 147064314abf..a8278f6873e6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -391,9 +391,14 @@ struct kvm_vcpu_arch {
 #define vcpu_has_sve(vcpu) (system_supports_sve() && \
 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
-#define vcpu_has_ptrauth(vcpu)	((system_supports_address_auth() || \
-				  system_supports_generic_auth()) && \
-				 ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH))
+#ifdef CONFIG_ARM64_PTR_AUTH
+#define vcpu_has_ptrauth(vcpu)						\
+	((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) ||		\
+	  cpus_have_final_cap(ARM64_HAS_GENERIC_AUTH)) &&		\
+	 (vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_PTRAUTH)
+#else
+#define vcpu_has_ptrauth(vcpu)		false
+#endif
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
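
For context, the predicate changed above is evaluated right at the start of
the EL2 fixup path mentioned in the commit message; a trimmed sketch of
__hyp_handle_ptrauth() (simplified from the hyp switch code of that era):

static bool __hyp_handle_ptrauth(struct kvm_vcpu *vcpu)
{
	/*
	 * On nVHE this runs in the EL2/hyp text, so vcpu_has_ptrauth()
	 * must fold down to inline capability checks; an out-of-line
	 * helper emitted in the normal kernel image cannot be called
	 * from here.
	 */
	if (!vcpu_has_ptrauth(vcpu))
		return false;

	/* ... save/restore of the guest and host ptrauth keys elided ... */
	return true;
}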