| Message ID | 1576486038-9899-10-git-send-email-amit.kachhap@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: return address signing |
On Mon, Dec 16, 2019 at 02:17:11PM +0530, Amit Daniel Kachhap wrote:
> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> new file mode 100644
> index 0000000..3cb06f9
> --- /dev/null
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_COMPILER_H
> +#define __ASM_COMPILER_H
> +
> +#if defined(CONFIG_ARM64_PTR_AUTH)
> +
> +/*
> + * The EL0/EL1 pointer bits used by a pointer authentication code.
> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
> + */
> +#define ptrauth_user_pac_mask()	GENMASK_ULL(54, vabits_actual)

That's the current behaviour but I guess we could extend the mask to 63
here without breaking anything since we don't expect instruction
addresses to be tagged. I also think we should turn TCR_EL1.TBID0 on
when we have PAC present (in a separate patch for both the mask change
and the TCR_EL1 bit as this may be slightly more controversial, a
theoretical ABI change).

> +#define ptrauth_kernel_pac_mask()	(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))

I think the kernel mask should be GENMASK_ULL(63, vabits_actual), no
need to skip bit 55 since it's 1 already.

With regards to VA_BITS (a constant), I'm not sure that's correct.
ARMv8.2-LVA (52-bit VA) is an optional feature and I don't think PAC in
8.3 mandates it.
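To make the bit ranges under discussion concrete, here is a standalone user-space sketch (not kernel code): GENMASK_ULL is re-defined locally and vabits_actual = 48 is assumed. It prints the current user data mask next to the suggested 63:vabits_actual mask.

/* Standalone illustration, assuming vabits_actual == 48. */
#include <stdio.h>

/* User-space stand-in for the kernel's GENMASK_ULL(h, l). */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	unsigned long long vabits_actual = 48;

	/* Current user mask: bits 54 down to the top of the VA range (TBI keeps 63:56 free). */
	unsigned long long user_mask = GENMASK_ULL(54, vabits_actual);
	/* Mask suggested in the review for kernel/instruction pointers: bits 63 down to vabits_actual. */
	unsigned long long wide_mask = GENMASK_ULL(63, vabits_actual);

	printf("user PAC mask:       0x%016llx\n", user_mask); /* 0x007f000000000000 */
	printf("63:vabits PAC mask:  0x%016llx\n", wide_mask); /* 0xffff000000000000 */
	return 0;
}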
On 1/17/20 10:14 AM, Catalin Marinas wrote:
> On Mon, Dec 16, 2019 at 02:17:11PM +0530, Amit Daniel Kachhap wrote:
>> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
>> new file mode 100644
>> index 0000000..3cb06f9
>> --- /dev/null
>> +++ b/arch/arm64/include/asm/compiler.h
>> @@ -0,0 +1,20 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_COMPILER_H
>> +#define __ASM_COMPILER_H
>> +
>> +#if defined(CONFIG_ARM64_PTR_AUTH)
>> +
>> +/*
>> + * The EL0/EL1 pointer bits used by a pointer authentication code.
>> + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
>> + */
>> +#define ptrauth_user_pac_mask()	GENMASK_ULL(54, vabits_actual)
>
> That's the current behaviour but I guess we could extend the mask to 63
> here without breaking anything since we don't expect instruction
> addresses to be tagged. I also think we should turn TCR_EL1.TBID0 on
> when we have PAC present (in a separate patch for both the mask change
> and the TCR_EL1 bit as this may be slightly more controversial, a
> theoretical ABI change).

ok. For this there have to be two masks then, as ptrace passes both
masks to user.

#define ptrauth_user_ins_pac_mask()	GENMASK_ULL(63, vabits_actual)
#define ptrauth_user_data_pac_mask()	GENMASK_ULL(54, vabits_actual)

>
>> +#define ptrauth_kernel_pac_mask()	(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))
>
> I think the kernel mask should be GENMASK_ULL(63, vabits_actual), no
> need to skip bit 55 since it's 1 already.
>
> With regards to VA_BITS (a constant), I'm not sure that's correct.
> ARMv8.2-LVA (52-bit VA) is an optional feature and I don't think PAC in
> 8.3 mandates it.

yes. Thanks for the correction.
On Mon, Jan 20, 2020 at 02:20:17PM +0000, Amit Kachhap wrote:
> On 1/17/20 10:14 AM, Catalin Marinas wrote:
> > On Mon, Dec 16, 2019 at 02:17:11PM +0530, Amit Daniel Kachhap wrote:
> > > diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> > > new file mode 100644
> > > index 0000000..3cb06f9
> > > --- /dev/null
> > > +++ b/arch/arm64/include/asm/compiler.h
> > > @@ -0,0 +1,20 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +#ifndef __ASM_COMPILER_H
> > > +#define __ASM_COMPILER_H
> > > +
> > > +#if defined(CONFIG_ARM64_PTR_AUTH)
> > > +
> > > +/*
> > > + * The EL0/EL1 pointer bits used by a pointer authentication code.
> > > + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
> > > + */
> > > +#define ptrauth_user_pac_mask()	GENMASK_ULL(54, vabits_actual)
> >
> > That's the current behaviour but I guess we could extend the mask to 63
> > here without breaking anything since we don't expect instruction
> > addresses to be tagged. I also think we should turn TCR_EL1.TBID0 on
> > when we have PAC present (in a separate patch for both the mask change
> > and the TCR_EL1 bit as this may be slightly more controversial, a
> > theoretical ABI change).
>
> ok. For this there have to be two masks then, as ptrace passes both
> masks to user.
>
> #define ptrauth_user_ins_pac_mask()	GENMASK_ULL(63, vabits_actual)
> #define ptrauth_user_data_pac_mask()	GENMASK_ULL(54, vabits_actual)

Yes. But as I said, keep this patch as is and change the above in a
separate patch, possibly even a separate series.
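For context on the ptrace side of this discussion: the kernel already exports both masks to debuggers through the NT_ARM_PAC_MASK regset as a struct user_pac_mask with data_mask and insn_mask fields. Below is a rough user-space sketch of how a tracer reads them; the read_pac_masks helper name is illustrative and the tracee is assumed to be already attached and stopped.

/* User-space sketch: read the data and instruction PAC masks that the
 * arm64 kernel exposes via ptrace. Helper name is illustrative. */
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <linux/elf.h>   /* NT_ARM_PAC_MASK */
#include <asm/ptrace.h>  /* struct user_pac_mask (arm64 uapi) */

static long read_pac_masks(pid_t pid, struct user_pac_mask *masks)
{
	struct iovec iov = {
		.iov_base = masks,
		.iov_len  = sizeof(*masks),
	};

	/* The tracee must already be stopped under ptrace. */
	return ptrace(PTRACE_GETREGSET, pid, (void *)NT_ARM_PAC_MASK, &iov);
}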
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5aabe8a..06b5025 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -116,6 +116,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_COMPILER_H
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
new file mode 100644
index 0000000..3cb06f9
--- /dev/null
+++ b/arch/arm64/include/asm/compiler.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_COMPILER_H
+#define __ASM_COMPILER_H
+
+#if defined(CONFIG_ARM64_PTR_AUTH)
+
+/*
+ * The EL0/EL1 pointer bits used by a pointer authentication code.
+ * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
+ */
+#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
+#define ptrauth_kernel_pac_mask()	(GENMASK_ULL(63, 56) | GENMASK_ULL(54, VA_BITS))
+
+#define __builtin_return_address(val)					\
+	(void *)((unsigned long)__builtin_return_address(val) |	\
+	ptrauth_kernel_pac_mask())
+
+#endif /* CONFIG_ARM64_PTR_AUTH */
+
+#endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 0f89f59..ec33af3 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -68,16 +68,13 @@ static inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys)
 
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
-/*
- * The EL0 pointer bits used by a pointer authentication code.
- * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
- */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
-
-/* Only valid for EL0 TTBR0 instruction pointers */
+/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	if (ptr & BIT_ULL(55))
+		return ptr | ptrauth_kernel_pac_mask();
+	else
+		return ptr & ~ptrauth_user_pac_mask();
 }
 
 #define ptrauth_thread_init_user(tsk)					\
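For illustration only (not part of the patch), a user-space sketch of the stripping logic above, using fabricated pointers, a locally re-defined GENMASK_ULL/BIT_ULL, and assuming VA_BITS == vabits_actual == 48. Bit 55 selects TTBR1 (kernel) versus TTBR0 (user) addresses, so kernel pointers get the PAC field restored to all ones and user pointers to all zeroes.

/* Standalone illustration of the bit-55 check in ptrauth_strip_insn_pac(). */
#include <stdio.h>
#include <stdint.h>

#define GENMASK_ULL(h, l)  (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define BIT_ULL(n)         (1ULL << (n))

#define VABITS_ACTUAL      48ULL  /* assumed */
#define USER_PAC_MASK      GENMASK_ULL(54, VABITS_ACTUAL)
#define KERNEL_PAC_MASK    (GENMASK_ULL(63, 56) | GENMASK_ULL(54, VABITS_ACTUAL))

static uint64_t strip_insn_pac(uint64_t ptr)
{
	/* Bit 55 distinguishes kernel (TTBR1) from user (TTBR0) pointers. */
	if (ptr & BIT_ULL(55))
		return ptr | KERNEL_PAC_MASK;  /* restore all-ones PAC field */
	return ptr & ~USER_PAC_MASK;           /* restore all-zeroes PAC field */
}

int main(void)
{
	/* Kernel text pointer with fabricated PAC bits in 63:56 and 54:48. */
	uint64_t kptr = 0x2493800010001234ULL; /* strips to 0xffff800010001234 */
	/* User pointer with fabricated PAC bits in 54:48 (TBI keeps 63:56 clear). */
	uint64_t uptr = 0x002a000000401234ULL; /* strips to 0x0000000000401234 */

	printf("kernel: %016llx -> %016llx\n",
	       (unsigned long long)kptr, (unsigned long long)strip_insn_pac(kptr));
	printf("user:   %016llx -> %016llx\n",
	       (unsigned long long)uptr, (unsigned long long)strip_insn_pac(uptr));
	return 0;
}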
This redefines __builtin_return_address to mask the PAC bits when Pointer
Authentication is enabled. __builtin_return_address is mostly used to refer
to the caller function's symbol address, so masking the runtime-generated
PAC bits lets the symbol lookup find a match.

This patch adds a new file (asm/compiler.h), which is transitively included
(via include/compiler_types.h) on the compiler command line, so it is
guaranteed to be loaded and users of this macro will not find a wrong
version.

A helper macro ptrauth_kernel_pac_mask is created for this purpose and added
in this file. A similar macro ptrauth_user_pac_mask exists in pointer_auth.h
and is now moved here for the sake of consistency.

This change fixes utilities like cat /proc/vmallocinfo so that they show the
correct symbol names.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
---
Changes since last version:
* Moved the PAC mask macros from pointer_auth.h to asm/compiler.h as
  suggested by Ard.
* Richard suggested using the xpaclri/xpac instructions to clear the PAC,
  but this needs the -march=armv8.3-a GCC option, which also generates
  non-NOP instructions like retaa and hence may break backward
  compatibility. So it is left as before.

 arch/arm64/Kconfig                    |  1 +
 arch/arm64/include/asm/compiler.h     | 20 ++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h | 13 +++++--------
 3 files changed, 26 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/include/asm/compiler.h
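As a usage illustration (a hypothetical "pacdemo" module, not part of this series), this is the kind of caller that benefits: %pS can only resolve the return address to a symbol once the PAC bits have been restored to the canonical all-ones kernel pattern by the redefined __builtin_return_address.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical module: print the caller of a function via %pS. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static noinline void report_caller(void)
{
	/* On arm64 with CONFIG_ARM64_PTR_AUTH, asm/compiler.h ORs the kernel
	 * PAC mask back into the return address, so this resolves to e.g.
	 * "pacdemo_init+0x..." instead of failing the symbol lookup. */
	pr_info("pacdemo: called from %pS\n", __builtin_return_address(0));
}

static int __init pacdemo_init(void)
{
	report_caller();
	return 0;
}

static void __exit pacdemo_exit(void)
{
}

module_init(pacdemo_init);
module_exit(pacdemo_exit);
MODULE_LICENSE("GPL");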