[v6,00/18] arm64: return address signing

Message ID: 1583476525-13505-1-git-send-email-amit.kachhap@arm.com

Message

Amit Daniel Kachhap March 6, 2020, 6:35 a.m. UTC
Hi,

This series improves function return address protection for the arm64 kernel, by
compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
to as ptrauth hereafter). This should help protect the kernel against attacks
using return-oriented programming.
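
For readers less familiar with ptrauth, here is a rough sketch of what the
compiler emits for a non-leaf function when return address signing is enabled
(the exact sequence depends on the compiler and the options used):

  func:
          paciasp                        // sign LR (x30) with the A key, SP as modifier
          stp     x29, x30, [sp, #-16]!  // the signed return address is what gets spilled
          ...
          ldp     x29, x30, [sp], #16
          autiasp                        // authenticate LR; a corrupted return address faults
          ret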

Changes since v5 [1]:
 - Added a new patch (arm64: cpufeature: Move cpu capability..) to move the cpu
   capability type helpers into cpufeature.c. This makes adding new cpu
   capabilities easier.
 - Moved the kernel key restore into __cpu_setup() (proc.S), as suggested by Catalin.
 - Added more comments for the 'as-option' Kconfig option, addressing concerns
   raised by Masahiro.
 - Clarified the comments on the -march=armv8.3-a option for non-integrated assemblers.

Changes since v4 [2]:
 - Rebased the patch series to v5.6-rc2.
 - Patch "arm64: cpufeature: Fix meta-capability" updated as per Suzuki's
   review comments.

Some additional work, not included in this series, will be taken up separately:
 - The kdump tools may need some rework to work with ptrauth: they may need the
   ptrauth information to strip the PAC bits. This will be sent as a separate patch.
 - A few more generic ptrauth lkdtm tests, as requested by Kees Cook.
 - Generating compile-time warnings when a requested Kconfig feature is not
   supported by the compiler.

This series is based on Linux v5.6-rc4. The complete series can be found at
git://linux-arm.org/linux-ak.git (branch PAC_mainline_v6) for reference.

Feedback welcome!

Thanks,
Amit Daniel

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-February/711699.html 
[2]: http://lists.infradead.org/pipermail/linux-arm-kernel/2020-January/707567.html

Amit Daniel Kachhap (9):
  arm64: cpufeature: Fix meta-capability cpufeature check
  arm64: create macro to park cpu in an infinite loop
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: initialize ptrauth keys for kernel booting task
  arm64: mask PAC bits of __builtin_return_address
  arm64: __show_regs: strip PAC from lr in printk
  arm64: suspend: restore the kernel ptrauth keys
  lkdtm: arm64: test kernel pointer authentication

Kristina Martsenko (7):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: rename ptrauth key structures to be user-specific
  arm64: install user ptrauth keys at kernel exit time
  arm64: cpufeature: handle conflicts based on capability
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Mark Rutland (1):
  arm64: unwind: strip PAC from kernel addresses

Vincenzo Frascino (1):
  kconfig: Add support for 'as-option'

 arch/arm64/Kconfig                        | 27 +++++++++-
 arch/arm64/Makefile                       | 11 ++++
 arch/arm64/include/asm/asm_pointer_auth.h | 65 +++++++++++++++++++++++
 arch/arm64/include/asm/compiler.h         | 20 +++++++
 arch/arm64/include/asm/cpucaps.h          |  4 +-
 arch/arm64/include/asm/cpufeature.h       | 39 +++++++-------
 arch/arm64/include/asm/pointer_auth.h     | 54 +++++++++----------
 arch/arm64/include/asm/processor.h        |  3 +-
 arch/arm64/include/asm/smp.h              | 10 ++++
 arch/arm64/include/asm/stackprotector.h   |  5 ++
 arch/arm64/kernel/asm-offsets.c           | 16 ++++++
 arch/arm64/kernel/cpufeature.c            | 87 +++++++++++++++++++++++--------
 arch/arm64/kernel/entry.S                 |  6 +++
 arch/arm64/kernel/head.S                  | 27 +++++-----
 arch/arm64/kernel/pointer_auth.c          |  7 +--
 arch/arm64/kernel/process.c               |  5 +-
 arch/arm64/kernel/ptrace.c                | 16 +++---
 arch/arm64/kernel/sleep.S                 |  2 +
 arch/arm64/kernel/smp.c                   | 10 ++++
 arch/arm64/kernel/stacktrace.c            |  3 ++
 arch/arm64/mm/proc.S                      | 71 +++++++++++++++++++++----
 drivers/misc/lkdtm/bugs.c                 | 36 +++++++++++++
 drivers/misc/lkdtm/core.c                 |  1 +
 drivers/misc/lkdtm/lkdtm.h                |  1 +
 include/linux/stackprotector.h            |  2 +-
 scripts/Kconfig.include                   |  6 +++
 26 files changed, 424 insertions(+), 110 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
 create mode 100644 arch/arm64/include/asm/compiler.h

Comments

Rémi Denis-Courmont March 10, 2020, 3:59 p.m. UTC | #1
Hello,

On Friday, 6 March 2020 at 8:35:07 EET, Amit Daniel Kachhap wrote:
> This series improves function return address protection for the arm64
> kernel, by compiling the kernel with ARMv8.3 Pointer Authentication
> instructions (referred ptrauth hereafter). This should help protect the
> kernel against attacks using return-oriented programming.

Is there any notion of what the runtime overhead will be?

We tried to estimate the overhead of PAuth-based return address signing, and of
an attempt of ours to improve it (arXiv:1912.04145, to be presented at DAC 2020),
but we lacked proper hardware.
James Morse March 11, 2020, 9:28 a.m. UTC | #2
Hi Amit,

(CC: +Marc)

On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
> This series improves function return address protection for the arm64 kernel, by
> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
> ptrauth hereafter). This should help protect the kernel against attacks using
> return-oriented programming.

(as it looks like there may be another version of this:)

Am I right in thinking that after your patch 10 changing
cpu_switch_to(), only the A key is live during kernel execution?

KVM is still saving/restoring 4 extra keys around guest entry/exit. As you
restore all the keys on return to user-space, is this still necessary?

(insert cross-tree headache here)


Thanks,

James
Amit Daniel Kachhap March 12, 2020, 6:53 a.m. UTC | #3
Hi James,

On 3/11/20 2:58 PM, James Morse wrote:
> Hi Amit,
> 
> (CC: +Marc)
> 
> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>> This series improves function return address protection for the arm64 kernel, by
>> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
>> ptrauth hereafter). This should help protect the kernel against attacks using
>> return-oriented programming.
> 
> (as it looks like there may be another version of this:)
> 
> Am I right in thinking that after your patch 10 changing
> cpu_switch_to(), only the A key is live during kernel execution?

Yes

> 
> KVM is still save/restoring 4 extra keys around guest-entry/exit. As you
> restore all the keys on return to user-space, is this still necessary?

Yes. It's a good optimization to skip the 4 non-A keys. I was wondering
whether to do it in this series or send it separately.

Cheers,
Amit Daniel

> 
> (insert cross-tree headache here)
> 
> 
> Thanks,
> 
> James
>
Amit Daniel Kachhap March 12, 2020, 8:06 a.m. UTC | #4
Hi James,

On 3/12/20 12:23 PM, Amit Kachhap wrote:
> Hi James,
> 
> On 3/11/20 2:58 PM, James Morse wrote:
>> Hi Amit,
>>
>> (CC: +Marc)
>>
>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>> This series improves function return address protection for the arm64 
>>> kernel, by
>>> compiling the kernel with ARMv8.3 Pointer Authentication instructions 
>>> (referred
>>> ptrauth hereafter). This should help protect the kernel against 
>>> attacks using
>>> return-oriented programming.
>>
>> (as it looks like there may be another version of this:)
>>
>> Am I right in thinking that after your patch 10 changing
>> cpu_switch_to(), only the A key is live during kernel execution?
> 
> Yes
> 
>>
>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As you
>> restore all the keys on return to user-space, is this still necessary?
> 
> Yes Its a good optimization to skip 4 non-A keys. I was wondering 
> whether to do it in this series or send it separately.

I suppose we can only skip the non-A key save/restore for the host context. If
we skip the non-A keys for the guest context then a guest with an old
implementation will break. Let me know your opinion.

//Amit
> 
> Cheers,
> Amit Daniel
> 
>>
>> (insert cross-tree headache here)
>>
>>
>> Thanks,
>>
>> James
>>
Marc Zyngier March 12, 2020, 12:47 p.m. UTC | #5
Hi Amit,

On 2020-03-12 08:06, Amit Kachhap wrote:
> Hi James,
> 
> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>> Hi James,
>> 
>> On 3/11/20 2:58 PM, James Morse wrote:
>>> Hi Amit,
>>> 
>>> (CC: +Marc)
>>> 
>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>> This series improves function return address protection for the 
>>>> arm64 kernel, by
>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>> instructions (referred
>>>> ptrauth hereafter). This should help protect the kernel against 
>>>> attacks using
>>>> return-oriented programming.
>>> 
>>> (as it looks like there may be another version of this:)
>>> 
>>> Am I right in thinking that after your patch 10 changing
>>> cpu_switch_to(), only the A key is live during kernel execution?
>> 
>> Yes
>> 
>>> 
>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As 
>>> you
>>> restore all the keys on return to user-space, is this still 
>>> necessary?
>> 
>> Yes Its a good optimization to skip 4 non-A keys. I was wondering 
>> whether to do it in this series or send it separately.
> 
> I suppose we can only skip non-A keys save/restore for host context. If
> we skip non-A keys for guest context then guest with old implementation
> will break. Let me know your opinion.

I don't think you can skip anything as far as the guest is concerned.
But being able to skip the B keys (which is what I expect you call the
non-A keys) on the host would certainly be useful.

I assume you have a way to hide them from userspace, though.

Thanks,

         M.
Amit Daniel Kachhap March 12, 2020, 1:21 p.m. UTC | #6
Hi Marc,

On 3/12/20 6:17 PM, Marc Zyngier wrote:
> Hi Amit,
> 
> On 2020-03-12 08:06, Amit Kachhap wrote:
>> Hi James,
>>
>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>> Hi James,
>>>
>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>> Hi Amit,
>>>>
>>>> (CC: +Marc)
>>>>
>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>> This series improves function return address protection for the 
>>>>> arm64 kernel, by
>>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>>> instructions (referred
>>>>> ptrauth hereafter). This should help protect the kernel against 
>>>>> attacks using
>>>>> return-oriented programming.
>>>>
>>>> (as it looks like there may be another version of this:)
>>>>
>>>> Am I right in thinking that after your patch 10 changing
>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>
>>> Yes
>>>
>>>>
>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As 
>>>> you
>>>> restore all the keys on return to user-space, is this still necessary?
>>>
>>> Yes Its a good optimization to skip 4 non-A keys. I was wondering 
>>> whether to do it in this series or send it separately.
>>
>> I suppose we can only skip non-A keys save/restore for host context. If
>> we skip non-A keys for guest context then guest with old implementation
>> will break. Let me know your opinion.
> 
> I don't think you can skip anything as far as the guest is concerned.
> But being able to skip the B keys (which is what I expect you call the
> non-A keys) on the host would certainly be useful.

Thanks for the clarification.

> 
> I assume you have a way to hide them from userspace, though.

You mean hide all the keys from userspace, like below?

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3e909b1..29cc74f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1023,7 +1023,7 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
  static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
                         const struct sys_reg_desc *rd)
  {
-       return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
+       return vcpu_has_ptrauth(vcpu) ? REG_HIDDEN_USER : REG_HIDDEN_USER | REG_HIDDEN_GUEST;
  }

  #define __PTRAUTH_KEY(k)

I don't remember why it was not done this way last time.

Cheers,
Amit

> 
> Thanks,
> 
>          M.
Marc Zyngier March 12, 2020, 3:05 p.m. UTC | #7
[Somehow I managed to butcher the subject line. No idea how...]

On 2020-03-12 13:21, Amit Kachhap wrote:
> Hi Marc,
> 
> On 3/12/20 6:17 PM, Marc Zyngier wrote:
>> Hi Amit,
>> 
>> On 2020-03-12 08:06, Amit Kachhap wrote:
>>> Hi James,
>>> 
>>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>>> Hi James,
>>>> 
>>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>>> Hi Amit,
>>>>> 
>>>>> (CC: +Marc)
>>>>> 
>>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>>> This series improves function return address protection for the 
>>>>>> arm64 kernel, by
>>>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>>>> instructions (referred
>>>>>> ptrauth hereafter). This should help protect the kernel against 
>>>>>> attacks using
>>>>>> return-oriented programming.
>>>>> 
>>>>> (as it looks like there may be another version of this:)
>>>>> 
>>>>> Am I right in thinking that after your patch 10 changing
>>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>> 
>>>> Yes
>>>> 
>>>>> 
>>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. 
>>>>> As you
>>>>> restore all the keys on return to user-space, is this still 
>>>>> necessary?
>>>> 
>>>> Yes Its a good optimization to skip 4 non-A keys. I was wondering 
>>>> whether to do it in this series or send it separately.
>>> 
>>> I suppose we can only skip non-A keys save/restore for host context. 
>>> If
>>> we skip non-A keys for guest context then guest with old 
>>> implementation
>>> will break. Let me know your opinion.
>> 
>> I don't think you can skip anything as far as the guest is concerned.
>> But being able to skip the B keys (which is what I expect you call the
>> non-A keys) on the host would certainly be useful.
> 
> Thanks for the clarification.
> 
>> 
>> I assume you have a way to hide them from userspace, though.
> 
> You mean hide all the keys from userspace like below,
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3e909b1..29cc74f 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1023,7 +1023,7 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
>  static unsigned int ptrauth_visibility(const struct kvm_vcpu *vcpu,
>                         const struct sys_reg_desc *rd)
>  {
> -       return vcpu_has_ptrauth(vcpu) ? 0 : REG_HIDDEN_USER | 
> REG_HIDDEN_GUEST;
> +       return vcpu_has_ptrauth(vcpu) ? REG_HIDDEN_USER :
> REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>  }
> 
>  #define __PTRAUTH_KEY(k)
> 
> I don't remember why it was not done this way last time.

No, that's not what I meant. What you're describing is preventing keys
from being exposed to the VMM controlling the guest, and that'd be
pretty bad (you need to be able to save/restore them for migration).

But if KVM doesn't save/restore the host's B-keys in the world switch,
then you must make sure that no host userspace can make use of them,
as they would be the guest's keys.

         M.
James Morse March 12, 2020, 5:26 p.m. UTC | #8
Hi Amit, Marc,

On 12/03/2020 15:05, Marc Zyngier wrote:
> On 2020-03-12 13:21, Amit Kachhap wrote:
>> On 3/12/20 6:17 PM, Marc Zyngier wrote:
>>> On 2020-03-12 08:06, Amit Kachhap wrote:
>>>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>>>> This series improves function return address protection for the arm64 kernel, by
>>>>>>> compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred
>>>>>>> ptrauth hereafter). This should help protect the kernel against attacks using
>>>>>>> return-oriented programming.
>>>>>>
>>>>>> (as it looks like there may be another version of this:)
>>>>>>
>>>>>> Am I right in thinking that after your patch 10 changing
>>>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>>>
>>>>> Yes

>>>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. As you
>>>>>> restore all the keys on return to user-space, is this still necessary?
>>>>>
>>>>> Yes Its a good optimization to skip 4 non-A keys. I was wondering whether to do it in
>>>>> this series or send it separately.
>>>>
>>>> I suppose we can only skip non-A keys save/restore for host context. If
>>>> we skip non-A keys for guest context then guest with old implementation
>>>> will break. Let me know your opinion.
>>>
>>> I don't think you can skip anything as far as the guest is concerned.
>>> But being able to skip the B keys (which is what I expect you call the
>>> non-A keys) on the host would certainly be useful.

> But if KVM doesn't save/restore the host's B-keys in the world switch,
> then you must make sure that no host userspace can make use of them,
> as they would be the guest's keys.

Yes, the arch code entry.S changes cover this with ptrauth_keys_install_user. It restores
4 keys, and the generic key.


We always need to save/restore all the guest keys (as we do today).
But when ptrauth_restore_state restores the host keys, it only needs to restore the one
the kernel uses. (possibly using the same macros so it stays up to date?!)
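
(For illustration only, a minimal sketch of what restoring just the kernel A key
could look like; the offset and sysreg names below are assumed from this series'
asm-offsets.c and sysreg.h, and the real macros live in asm_pointer_auth.h:)

	/* x0 = pointer to the task's kernel ptrauth keys (names assumed) */
	ldp	x2, x3, [x0, #PTRAUTH_KERNEL_KEY_APIA]	// load the APIA key pair
	msr_s	SYS_APIAKEYLO_EL1, x2
	msr_s	SYS_APIAKEYHI_EL1, x3
	isb						// new key visible before any pac/aut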

If we return to user-space, the arch code's entry code does the right thing.
KVM's user-space peeking at the keys will see the saved values.


My original question was more around: do we need to do this now, or can we clean it up in
a later kernel version?

(and a sanity check that it doesn't lead to a correctness problem)



Thanks,

James
Marc Zyngier March 12, 2020, 5:31 p.m. UTC | #9
Hi James,

On 2020-03-12 17:26, James Morse wrote:
> Hi Amit, Marc,
> 
> On 12/03/2020 15:05, Marc Zyngier wrote:
>> On 2020-03-12 13:21, Amit Kachhap wrote:
>>> On 3/12/20 6:17 PM, Marc Zyngier wrote:
>>>> On 2020-03-12 08:06, Amit Kachhap wrote:
>>>>> On 3/12/20 12:23 PM, Amit Kachhap wrote:
>>>>>> On 3/11/20 2:58 PM, James Morse wrote:
>>>>>>> On 3/6/20 6:35 AM, Amit Daniel Kachhap wrote:
>>>>>>>> This series improves function return address protection for the 
>>>>>>>> arm64 kernel, by
>>>>>>>> compiling the kernel with ARMv8.3 Pointer Authentication 
>>>>>>>> instructions (referred
>>>>>>>> ptrauth hereafter). This should help protect the kernel against 
>>>>>>>> attacks using
>>>>>>>> return-oriented programming.
>>>>>>> 
>>>>>>> (as it looks like there may be another version of this:)
>>>>>>> 
>>>>>>> Am I right in thinking that after your patch 10 changing
>>>>>>> cpu_switch_to(), only the A key is live during kernel execution?
>>>>>> 
>>>>>> Yes
> 
>>>>>>> KVM is still save/restoring 4 extra keys around guest-entry/exit. 
>>>>>>> As you
>>>>>>> restore all the keys on return to user-space, is this still 
>>>>>>> necessary?
>>>>>> 
>>>>>> Yes Its a good optimization to skip 4 non-A keys. I was wondering 
>>>>>> whether to do it in
>>>>>> this series or send it separately.
>>>>> 
>>>>> I suppose we can only skip non-A keys save/restore for host 
>>>>> context. If
>>>>> we skip non-A keys for guest context then guest with old 
>>>>> implementation
>>>>> will break. Let me know your opinion.
>>>> 
>>>> I don't think you can skip anything as far as the guest is 
>>>> concerned.
>>>> But being able to skip the B keys (which is what I expect you call 
>>>> the
>>>> non-A keys) on the host would certainly be useful.
> 
>> But if KVM doesn't save/restore the host's B-keys in the world switch,
>> then you must make sure that no host userspace can make use of them,
>> as they would be the guest's keys.
> 
> Yes, the arch code entry.S changes cover this with
> ptrauth_keys_install_user. It restores
> 4 keys, and the generic key.
> 
> 
> We always need to save/restore all the guest keys (as we do today).
> But when ptrauth_restore_state restores the host keys, it only needs
> to restore the one
> the kernel uses. (possibly using the same macro's so it stays up to 
> date?!)
> 
> If we return to user-space, the arch code's entry code does the right 
> thing.
> KVM's user-space peeking at the keys will see the saved values.
> 
> 
> My original question was more around: do we need to do this now, or
> can we clean it up in
> a later kernel version?
> 
> (and a sanity check that it doesn't lead to a correctness problem)

I think what we have now is sane, and doesn't seem to lead to any
issue (at least that I can see). We can always optimize this at a
later point.

Thanks,

        M.