Message ID | 1583476525-13505-13-git-send-email-amit.kachhap@arm.com
State      | New, archived
Series     | arm64: return address signing
Hi Amit,

On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
> This redefines __builtin_return_address to mask pac bits
> when Pointer Authentication is enabled. As __builtin_return_address
> is mostly used to refer to the caller function symbol address,
> masking the runtime-generated pac bits will help to find the match.

I'm not entirely sure what problem you are trying to solve from this paragraph.

> This patch adds a new file (asm/compiler.h) and is transitively
> included (via include/compiler_types.h) on the compiler command line
> so it is guaranteed to be loaded and the users of this macro will
> not find a wrong version.
>
> A helper macro ptrauth_kernel_pac_mask is created for this purpose
> and added in this file. A similar macro ptrauth_user_pac_mask exists
> in pointer_auth.h and is now moved here for the sake of consistency.

> This change fixes utilities like cat /proc/vmallocinfo to show
> correct symbol names.

This is to avoid things like:
| 0x(____ptrval____)-0x(____ptrval____)   20480 0xb8b8000100f75fc pages=4 vmalloc N0=4
| 0x(____ptrval____)-0x(____ptrval____)   20480 0xc0f28000100f75fc pages=4 vmalloc N0=4

Where those 64bit values should be the same symbol name, not different LR values.

Could you phrase the commit message to describe the problem, then how you fix it?
Something like:
| Functions like vmap() record how much memory has been allocated by their callers,
| callers are identified using __builtin_return_address(). Once the kernel is using
| pointer-auth the return address will be signed. This means it will not match any kernel
| symbol, and will vary between threads even for the same caller.
|
| Use the pre-processor to add logic to strip the PAC to __builtin_return_address()
| callers.

> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> new file mode 100644
> index 0000000..085e7cd0
> --- /dev/null
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_COMPILER_H
> +#define __ASM_COMPILER_H
> +
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +
> +/*
> + * The EL0/EL1 pointer bits used by a pointer authentication code.
> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
> + */
> +#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
> +#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
> +
> +#define __builtin_return_address(val)				\
> +	(void *)((unsigned long)__builtin_return_address(val) |	\
> +	ptrauth_kernel_pac_mask())

This is pretty manky; ideally we would have some __arch_return_address() hook for this,
but this works, and lets us postpone any tree-wide noise until it's absolutely necessary.
(I assume if this does ever break, it will be a build error.)

You add ptrauth_strip_insn_pac() in this patch, but don't use it until the next one.
However... could you use it here?

This would go wrong if __builtin_return_address() legitimately returned a value that was
mapped by TTBR0: we would force the top bits to be set instead of clearing the PAC bits.
This would corrupt the value instead of stripping the PAC.

I don't think there is anywhere this could happen today, but passing callbacks into UEFI
runtime services or making kernel calls from an idmap may both do this.

With that:
Reviewed-by: James Morse <james.morse@arm.com>

Thanks!

James

> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -68,16 +68,13 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>
>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>
> -/*
> - * The EL0 pointer bits used by a pointer authentication code.
> - * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
> - */
> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
> -
> -/* Only valid for EL0 TTBR0 instruction pointers */
> +/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  {
> -	return ptr & ~ptrauth_user_pac_mask();
> +	if (ptr & BIT_ULL(55))
> +		return ptr | ptrauth_kernel_pac_mask();
> +	else
> +		return ptr & ~ptrauth_user_pac_mask();
> }
>
>  #define ptrauth_thread_init_user(tsk)	\
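To make James' point concrete, here is a minimal, standalone demonstration of the two-sided strip logic (an editorial sketch, not kernel code: vabits_actual is a runtime variable in the kernel but fixed at 48 here, and GENMASK_ULL/BIT_ULL are re-derived locally):

/* Standalone sketch of the PAC-strip logic discussed above. */
#include <stdio.h>

#define VABITS_ACTUAL	48	/* assumption: a 48-bit VA kernel */
#define GENMASK_ULL(h, l)	((~0ULL << (l)) & (~0ULL >> (63 - (h))))
#define BIT_ULL(n)		(1ULL << (n))

#define ptrauth_user_pac_mask()		GENMASK_ULL(54, VABITS_ACTUAL)
#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, VABITS_ACTUAL)

static unsigned long long strip_insn_pac(unsigned long long ptr)
{
	/* Bit 55 selects the TTBR, and so the sign of the extension. */
	if (ptr & BIT_ULL(55))
		return ptr | ptrauth_kernel_pac_mask();	/* kernel: set the top bits */
	else
		return ptr & ~ptrauth_user_pac_mask();	/* user: clear them */
}

int main(void)
{
	/* The two PACed LR values from the vmallocinfo output above. */
	unsigned long long a = 0x0b8b8000100f75fcULL;
	unsigned long long b = 0xc0f28000100f75fcULL;

	/* Both print ffff8000100f75fc: the same caller once the PAC is gone. */
	printf("%llx\n%llx\n", strip_insn_pac(a), strip_insn_pac(b));
	return 0;
}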
Hi James,

On 3/7/20 12:37 AM, James Morse wrote:
> Hi Amit,
>
> On 06/03/2020 06:35, Amit Daniel Kachhap wrote:
>> This redefines __builtin_return_address to mask pac bits
>> when Pointer Authentication is enabled. As __builtin_return_address
>> is mostly used to refer to the caller function symbol address,
>> masking the runtime-generated pac bits will help to find the match.
>
> I'm not entirely sure what problem you are trying to solve from this paragraph.
>
>> This patch adds a new file (asm/compiler.h) and is transitively
>> included (via include/compiler_types.h) on the compiler command line
>> so it is guaranteed to be loaded and the users of this macro will
>> not find a wrong version.
>>
>> A helper macro ptrauth_kernel_pac_mask is created for this purpose
>> and added in this file. A similar macro ptrauth_user_pac_mask exists
>> in pointer_auth.h and is now moved here for the sake of consistency.
>
>> This change fixes utilities like cat /proc/vmallocinfo to show
>> correct symbol names.
>
> This is to avoid things like:
> | 0x(____ptrval____)-0x(____ptrval____)   20480 0xb8b8000100f75fc pages=4 vmalloc N0=4
> | 0x(____ptrval____)-0x(____ptrval____)   20480 0xc0f28000100f75fc pages=4 vmalloc N0=4
>
> Where those 64bit values should be the same symbol name, not different LR values.
>
> Could you phrase the commit message to describe the problem, then how you fix it?
> Something like:
> | Functions like vmap() record how much memory has been allocated by their callers,
> | callers are identified using __builtin_return_address(). Once the kernel is using
> | pointer-auth the return address will be signed. This means it will not match any kernel
> | symbol, and will vary between threads even for the same caller.
> |
> | Use the pre-processor to add logic to strip the PAC to __builtin_return_address()
> | callers.

Thanks for the detailed description. I will update my commit message.

>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>> new file mode 100644
>> index 0000000..085e7cd0
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/compiler.h
>> @@ -0,0 +1,20 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_COMPILER_H
>> +#define __ASM_COMPILER_H
>> +
>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>> +
>> +/*
>> + * The EL0/EL1 pointer bits used by a pointer authentication code.
>> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
>> + */
>> +#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
>> +#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
>> +
>> +#define __builtin_return_address(val)				\
>> +	(void *)((unsigned long)__builtin_return_address(val) |	\
>> +	ptrauth_kernel_pac_mask())
>
> This is pretty manky; ideally we would have some __arch_return_address() hook for this,
> but this works, and lets us postpone any tree-wide noise until it's absolutely necessary.
> (I assume if this does ever break, it will be a build error.)

Ok.

> You add ptrauth_strip_insn_pac() in this patch, but don't use it until the next one.
> However... could you use it here?

Yes, ptrauth_strip_insn_pac can be used here, not as a C function but as a
macro. I will post it in my next version.

> This would go wrong if __builtin_return_address() legitimately returned a value that was
> mapped by TTBR0: we would force the top bits to be set instead of clearing the PAC bits.
> This would corrupt the value instead of stripping the PAC.
>
> I don't think there is anywhere this could happen today, but passing callbacks into UEFI
> runtime services or making kernel calls from an idmap may both do this.

I hadn't thought about this scenario, so I did it this way.

> With that:
> Reviewed-by: James Morse <james.morse@arm.com>

Thank you.

Cheers,
Amit

> Thanks!
>
> James
>
>> --- a/arch/arm64/include/asm/pointer_auth.h
>> +++ b/arch/arm64/include/asm/pointer_auth.h
>> @@ -68,16 +68,13 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
>>
>>  extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
>>
>> -/*
>> - * The EL0 pointer bits used by a pointer authentication code.
>> - * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
>> - */
>> -#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
>> -
>> -/* Only valid for EL0 TTBR0 instruction pointers */
>> +/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
>>  static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>>  {
>> -	return ptr & ~ptrauth_user_pac_mask();
>> +	if (ptr & BIT_ULL(55))
>> +		return ptr | ptrauth_kernel_pac_mask();
>> +	else
>> +		return ptr & ~ptrauth_user_pac_mask();
>> }
>>
>>  #define ptrauth_thread_init_user(tsk)	\
>>
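For reference, a macro form of ptrauth_strip_insn_pac() along the lines Amit describes could look like the sketch below (an illustration of the suggestion, not the actual follow-up posting). A macro is needed because asm/compiler.h is pulled in ahead of most kernel headers, where a static inline depending on those headers would be awkward; note also that the preprocessor does not recursively re-expand __builtin_return_address inside its own definition, which is what makes this override safe:

/* Sketch only: the strip logic moved into asm/compiler.h as a macro. */
#define ptrauth_strip_insn_pac(ptr)					\
	(((ptr) & BIT_ULL(55)) ?					\
		((ptr) | ptrauth_kernel_pac_mask()) :			\
		((ptr) & ~ptrauth_user_pac_mask()))

#define __builtin_return_address(val)					\
	(void *)(ptrauth_strip_insn_pac(				\
			(unsigned long)__builtin_return_address(val)))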
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 87e2cbb..115ceea 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -118,6 +118,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_COMPILER_H
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
new file mode 100644
index 0000000..085e7cd0
--- /dev/null
+++ b/arch/arm64/include/asm/compiler.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_COMPILER_H
+#define __ASM_COMPILER_H
+
+#if defined(CONFIG_ARM64_PTR_AUTH)
+
+/*
+ * The EL0/EL1 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
+#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
+
+#define __builtin_return_address(val)				\
+	(void *)((unsigned long)__builtin_return_address(val) |	\
+	ptrauth_kernel_pac_mask())
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 833d3f9..5340dbb 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -68,16 +68,13 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
 
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
-/*
- * The EL0 pointer bits used by a pointer authentication code.
- * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
- */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
-
-/* Only valid for EL0 TTBR0 instruction pointers */
+/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }
 
 #define ptrauth_thread_init_user(tsk)	\
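The HAVE_ARCH_COMPILER_H select is what makes the override global: kbuild passes include/linux/compiler_types.h to every compilation unit via -include in c_flags, and that header ends by conditionally pulling in the arch's compiler.h. From memory of the tree around this series (worth verifying against the exact version):

/* tail of include/linux/compiler_types.h */
#ifdef CONFIG_HAVE_ARCH_COMPILER_H
#include <asm/compiler.h>
#endif

Because every C file sees this before any of its own includes, no caller of __builtin_return_address() can pick up the unmasked builtin by mistake.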
This redefines __builtin_return_address to mask pac bits
when Pointer Authentication is enabled. As __builtin_return_address
is mostly used to refer to the caller function symbol address,
masking the runtime-generated pac bits will help to find the match.

This patch adds a new file (asm/compiler.h) and is transitively
included (via include/compiler_types.h) on the compiler command line
so it is guaranteed to be loaded and the users of this macro will
not find a wrong version.

A helper macro ptrauth_kernel_pac_mask is created for this purpose
and added in this file. A similar macro ptrauth_user_pac_mask exists
in pointer_auth.h and is now moved here for the sake of consistency.

This change fixes utilities like cat /proc/vmallocinfo to show
correct symbol names.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
 arch/arm64/Kconfig                    |  1 +
 arch/arm64/include/asm/compiler.h     | 20 ++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h | 13 +++++--------
 3 files changed, 26 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/compiler.h
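The /proc/vmallocinfo fix works because vmalloc-style allocators record __builtin_return_address(0) and later print it with %pS. A hypothetical sketch of that pattern, for illustration only (vmalloc_like() and record_caller() are made-up names, not the real mm/vmalloc.c API):

/* Hypothetical caller-recording pattern, not kernel code. */
#include <stdlib.h>

static const void *last_caller;	/* stands in for vm_struct::caller */

static void *record_caller(unsigned long size, const void *caller)
{
	last_caller = caller;	/* what /proc/vmallocinfo later prints via %pS */
	return malloc(size);
}

void *vmalloc_like(unsigned long size)
{
	/*
	 * With the asm/compiler.h macro in effect, the preprocessor rewrites
	 * this call so the recorded address has its PAC bits forced to ones,
	 * letting %pS resolve it to a symbol name again.
	 */
	return record_caller(size, __builtin_return_address(0));
}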