
[1/4] KVM: arm64: Enable Pointer Authentication at EL2 if available

Message ID 20200615081954.6233-2-maz@kernel.org (mailing list archive)
State New, archived
Series KVM/arm64: Enable PtrAuth on non-VHE KVM

Commit Message

Marc Zyngier June 15, 2020, 8:19 a.m. UTC
While initializing EL2, switch Pointer Authentication if detected
from EL1. We use the EL1-provided keys though.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp-init.S | 11 +++++++++++
 1 file changed, 11 insertions(+)

Comments

Andrew Scull June 15, 2020, 8:48 a.m. UTC | #1
On Mon, Jun 15, 2020 at 09:19:51AM +0100, Marc Zyngier wrote:
> While initializing EL2, switch Pointer Authentication if detected

                                ^ nit: on?

> from EL1. We use the EL1-provided keys though.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp-init.S | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
> index 6e6ed5581eed..81732177507d 100644
> --- a/arch/arm64/kvm/hyp-init.S
> +++ b/arch/arm64/kvm/hyp-init.S
> @@ -104,6 +104,17 @@ alternative_else_nop_endif
>  	 */
>  	mov_q	x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
>  CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
> +alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH
> +	b	1f
> +alternative_else_nop_endif
> +alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF
> +	b	2f
> +alternative_else_nop_endif
> +1:
> +	orr	x4, x4, #(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB)
> +	orr	x4, x4, #SCTLR_ELx_ENDA
> +	orr	x4, x4, #SCTLR_ELx_ENDB

mm/proc.S builds the mask with ldr and ORs it in one go; it would be nice
to use the same pattern.
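
Something like this (untested, assuming x5 is free as a scratch register
at this point):

	ldr	x5, =(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
		      SCTLR_ELx_ENDA | SCTLR_ELx_ENDB)
	orr	x4, x4, x5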

> +2:
>  	msr	sctlr_el2, x4
>  	isb

Acked-by: Andrew Scull <ascull@google.com>
Mark Rutland June 15, 2020, 10:03 a.m. UTC | #2
On Mon, Jun 15, 2020 at 09:19:51AM +0100, Marc Zyngier wrote:
> While initializing EL2, switch Pointer Authentication if detected
> from EL1. We use the EL1-provided keys though.

Perhaps "enable address authentication", to avoid confusion with
context-switch, and since generic authentication cannot be disabled
locally at EL2.

> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp-init.S | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
> index 6e6ed5581eed..81732177507d 100644
> --- a/arch/arm64/kvm/hyp-init.S
> +++ b/arch/arm64/kvm/hyp-init.S
> @@ -104,6 +104,17 @@ alternative_else_nop_endif
>  	 */
>  	mov_q	x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
>  CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
> +alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH
> +	b	1f
> +alternative_else_nop_endif
> +alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF
> +	b	2f
> +alternative_else_nop_endif

I see this is the same pattern we use in the kvm context switch, but I
think we can use the ARM64_HAS_ADDRESS_AUTH cap instead (likewise in the
existing code).

AFAICT that won't permit mismatch given both ARM64_HAS_ADDRESS_AUTH_ARCH
and ARM64_HAS_ADDRESS_AUTH_IMP_DEF are dealt with as
ARM64_CPUCAP_BOOT_CPU_FEATURE.
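
E.g. something like the following (untested sketch) would collapse the
two back-to-back alternatives into one:

| alternative_if_not ARM64_HAS_ADDRESS_AUTH
| 	b	2f
| alternative_else_nop_endif

... with the orr sequence falling through to the 2: label as before, and
the 1: label going away entirely.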

> +1:
> +	orr	x4, x4, #(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB)
> +	orr	x4, x4, #SCTLR_ELx_ENDA
> +	orr	x4, x4, #SCTLR_ELx_ENDB

Assuming we have a spare register, it would be nice if we could follow the same
pattern as in proc.S, where we do:

| ldr     x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
|              SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
| orr     x0, x0, x2

... though we could/should use mov_q rather than a load literal, here and in
proc.S.

... otherwise this looks sound to me.

Thanks,
Mark.

> +2:
>  	msr	sctlr_el2, x4
>  	isb
>  
> -- 
> 2.27.0
> 
Marc Zyngier June 15, 2020, 10:45 a.m. UTC | #3
Hi Andrew,

On 2020-06-15 09:48, Andrew Scull wrote:
> On Mon, Jun 15, 2020 at 09:19:51AM +0100, Marc Zyngier wrote:
>> While initializing EL2, switch Pointer Authentication if detected
> 
>                                 ^ nit: on?

Yes.

> 
>> from EL1. We use the EL1-provided keys though.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>  arch/arm64/kvm/hyp-init.S | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>> 
>> diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
>> index 6e6ed5581eed..81732177507d 100644
>> --- a/arch/arm64/kvm/hyp-init.S
>> +++ b/arch/arm64/kvm/hyp-init.S
>> @@ -104,6 +104,17 @@ alternative_else_nop_endif
>>  	 */
>>  	mov_q	x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
>>  CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
>> +alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH
>> +	b	1f
>> +alternative_else_nop_endif
>> +alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF
>> +	b	2f
>> +alternative_else_nop_endif
>> +1:
>> +	orr	x4, x4, #(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB)
>> +	orr	x4, x4, #SCTLR_ELx_ENDA
>> +	orr	x4, x4, #SCTLR_ELx_ENDB
> 
> mm/proc.S builds the mask with ldr and ORs it in one go; it would be
> nice to use the same pattern.

Do you actually mean kernel/head.S, or even __ptrauth_keys_init_cpu in 
asm/asm_pointer_auth.h?

If so, I agree that it'd be good to make it look similar by using the 
mov_q macro, at the expense of a spare register (which we definitely can 
afford here).
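
i.e. something like this (untested, with x5 as the scratch register):

	mov_q	x5, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB)
	orr	x4, x4, x5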

Thanks,

         M.
Marc Zyngier June 15, 2020, 10:55 a.m. UTC | #4
On 2020-06-15 11:03, Mark Rutland wrote:
> On Mon, Jun 15, 2020 at 09:19:51AM +0100, Marc Zyngier wrote:
>> While initializing EL2, switch Pointer Authentication if detected
>> from EL1. We use the EL1-provided keys though.
> 
> Perhaps "enable address authentication", to avoid confusion with
> context-switch, and since generic authentication cannot be disabled
> locally at EL2.

Ah, fair enough.

>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>  arch/arm64/kvm/hyp-init.S | 11 +++++++++++
>>  1 file changed, 11 insertions(+)
>> 
>> diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
>> index 6e6ed5581eed..81732177507d 100644
>> --- a/arch/arm64/kvm/hyp-init.S
>> +++ b/arch/arm64/kvm/hyp-init.S
>> @@ -104,6 +104,17 @@ alternative_else_nop_endif
>>  	 */
>>  	mov_q	x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
>>  CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
>> +alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH
>> +	b	1f
>> +alternative_else_nop_endif
>> +alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF
>> +	b	2f
>> +alternative_else_nop_endif
> 
> I see this is the same pattern we use in the kvm context switch, but I
> think we can use the ARM64_HAS_ADDRESS_AUTH cap instead (likewise in 
> the
> existing code).
> 
> AFAICT that won't permit mismatch given both 
> ARM64_HAS_ADDRESS_AUTH_ARCH
> and ARM64_HAS_ADDRESS_AUTH_IMP_DEF are dealt with as
> ARM64_CPUCAP_BOOT_CPU_FEATURE.

That'd be a nice cleanup, as the two back-to-back alternatives are a bit
hard to read.

> 
>> +1:
>> +	orr	x4, x4, #(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB)
>> +	orr	x4, x4, #SCTLR_ELx_ENDA
>> +	orr	x4, x4, #SCTLR_ELx_ENDB
> 
> Assuming we have a spare register, it would be nice if we could follow 
> the same
> pattern as in proc.S, where we do:
> 
> | ldr     x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
> |              SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
> | orr     x0, x0, x2
> 
> ... though we could/should use mov_q rather than a load literal, here 
> and in
> proc.S.

Looks like this code isn't in -rc1 anymore, replaced with a mov_q in 
__ptrauth_keys_init_cpu.

I'll switch to that in v2.

Thanks,

         M.

Patch

diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 6e6ed5581eed..81732177507d 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -104,6 +104,17 @@ alternative_else_nop_endif
 	 */
 	mov_q	x4, (SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
 CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
+alternative_if ARM64_HAS_ADDRESS_AUTH_ARCH
+	b	1f
+alternative_else_nop_endif
+alternative_if_not ARM64_HAS_ADDRESS_AUTH_IMP_DEF
+	b	2f
+alternative_else_nop_endif
+1:
+	orr	x4, x4, #(SCTLR_ELx_ENIA | SCTLR_ELx_ENIB)
+	orr	x4, x4, #SCTLR_ELx_ENDA
+	orr	x4, x4, #SCTLR_ELx_ENDB
+2:
 	msr	sctlr_el2, x4
 	isb