Message ID | 20220505161011.1801596-3-ardb@kernel.org (mailing list archive) |
---|---
State | New, archived |
Series | arm64: dynamic shadow call stack support |
On Thu, May 5, 2022 at 9:10 AM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> In order to allow arches to use code patching to conditionally emit the
> shadow stack pushes and pops, rather than always taking the performance
> hit even on CPUs that implement alternatives such as stack pointer
> authentication on arm64, add a Kconfig symbol that can be set by the
> arch to omit the SCS codegen itself, without otherwise affecting how
> support code for SCS and compiler options (for register reservation, for
> instance) are emitted.
>
> Also, add a static key and some plumbing to omit the allocation of
> shadow call stack for dynamic SCS configurations if SCS is disabled at
> runtime.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Thanks for the patch!
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>

> ---
>  Makefile            |  2 ++
>  arch/Kconfig        |  7 +++++++
>  include/linux/scs.h | 10 ++++++++++
>  kernel/scs.c        | 14 ++++++++++++--
>  4 files changed, 31 insertions(+), 2 deletions(-)
>
> diff --git a/Makefile b/Makefile
> index fa5112a0ec1b..a578fffc0337 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -882,8 +882,10 @@ LDFLAGS_vmlinux += --gc-sections
>  endif
>
>  ifdef CONFIG_SHADOW_CALL_STACK
> +ifndef CONFIG_DYNAMIC_SCS
>  CC_FLAGS_SCS := -fsanitize=shadow-call-stack
>  KBUILD_CFLAGS += $(CC_FLAGS_SCS)
> +endif
>  export CC_FLAGS_SCS
>  endif
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 29b0167c088b..126caa75969a 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -627,6 +627,13 @@ config SHADOW_CALL_STACK
> 	  reading and writing arbitrary memory may be able to locate them
> 	  and hijack control flow by modifying the stacks.
>
> +config DYNAMIC_SCS
> +	bool
> +	help
> +	  Set by the arch code if it relies on code patching to insert the
> +	  shadow call stack push and pop instructions rather than on the
> +	  compiler.
> +
>  config LTO
>  	bool
>  	help
> diff --git a/include/linux/scs.h b/include/linux/scs.h
> index 18122d9e17ff..4cc01f21b17a 100644
> --- a/include/linux/scs.h
> +++ b/include/linux/scs.h
> @@ -53,6 +53,15 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk)
> 	return sz >= SCS_SIZE - 1 || READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
>  }
>
> +DECLARE_STATIC_KEY_TRUE(dynamic_scs_enabled);
> +
> +static inline bool scs_is_enabled(void)
> +{
> +	if (!IS_ENABLED(CONFIG_DYNAMIC_SCS))
> +		return true;
> +	return static_branch_likely(&dynamic_scs_enabled);
> +}
> +
>  #else /* CONFIG_SHADOW_CALL_STACK */
>
>  static inline void *scs_alloc(int node) { return NULL; }
> @@ -62,6 +71,7 @@ static inline void scs_task_reset(struct task_struct *tsk) {}
>  static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
>  static inline void scs_release(struct task_struct *tsk) {}
>  static inline bool task_scs_end_corrupted(struct task_struct *tsk) { return false; }
> +static inline bool scs_is_enabled(void) { return false; }
>
>  #endif /* CONFIG_SHADOW_CALL_STACK */
>
> diff --git a/kernel/scs.c b/kernel/scs.c
> index b7e1b096d906..8826794d2645 100644
> --- a/kernel/scs.c
> +++ b/kernel/scs.c
> @@ -12,6 +12,10 @@
>  #include <linux/vmalloc.h>
>  #include <linux/vmstat.h>
>
> +#ifdef CONFIG_DYNAMIC_SCS
> +DEFINE_STATIC_KEY_TRUE(dynamic_scs_enabled);
> +#endif
> +
>  static void __scs_account(void *s, int account)
>  {
> 	struct page *scs_page = vmalloc_to_page(s);
> @@ -101,14 +105,20 @@ static int scs_cleanup(unsigned int cpu)
>
>  void __init scs_init(void)
>  {
> +	if (!scs_is_enabled())
> +		return;
> 	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
> 			  scs_cleanup);
>  }
>
>  int scs_prepare(struct task_struct *tsk, int node)
>  {
> -	void *s = scs_alloc(node);
> +	void *s;
>
> +	if (!scs_is_enabled())
> +		return 0;
> +
> +	s = scs_alloc(node);
> 	if (!s)
> 		return -ENOMEM;
>
> @@ -148,7 +158,7 @@ void scs_release(struct task_struct *tsk)
>  {
> 	void *s = task_scs(tsk);
>
> -	if (!s)
> +	if (!scs_is_enabled() || !s)
> 		return;
>
> 	WARN(task_scs_end_corrupted(tsk),
> --
> 2.30.2

It is less obvious that the two other functions with extern linkage
defined in this TU don't need scs_is_enabled checks because of guards in
the callers.
On Thu, May 05, 2022 at 06:10:10PM +0200, Ard Biesheuvel wrote:
> In order to allow arches to use code patching to conditionally emit the
> shadow stack pushes and pops, rather than always taking the performance
> hit even on CPUs that implement alternatives such as stack pointer
> authentication on arm64, add a Kconfig symbol that can be set by the
> arch to omit the SCS codegen itself, without otherwise affecting how
> support code for SCS and compiler options (for register reservation, for
> instance) are emitted.
>
> Also, add a static key and some plumbing to omit the allocation of
> shadow call stack for dynamic SCS configurations if SCS is disabled at
> runtime.
>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Reviewed-by: Kees Cook <keescook@chromium.org>
diff --git a/Makefile b/Makefile
index fa5112a0ec1b..a578fffc0337 100644
--- a/Makefile
+++ b/Makefile
@@ -882,8 +882,10 @@ LDFLAGS_vmlinux += --gc-sections
 endif
 
 ifdef CONFIG_SHADOW_CALL_STACK
+ifndef CONFIG_DYNAMIC_SCS
 CC_FLAGS_SCS := -fsanitize=shadow-call-stack
 KBUILD_CFLAGS += $(CC_FLAGS_SCS)
+endif
 export CC_FLAGS_SCS
 endif
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 29b0167c088b..126caa75969a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -627,6 +627,13 @@ config SHADOW_CALL_STACK
	  reading and writing arbitrary memory may be able to locate them
	  and hijack control flow by modifying the stacks.
 
+config DYNAMIC_SCS
+	bool
+	help
+	  Set by the arch code if it relies on code patching to insert the
+	  shadow call stack push and pop instructions rather than on the
+	  compiler.
+
 config LTO
 	bool
 	help
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 18122d9e17ff..4cc01f21b17a 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -53,6 +53,15 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk)
	return sz >= SCS_SIZE - 1 || READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
 }
 
+DECLARE_STATIC_KEY_TRUE(dynamic_scs_enabled);
+
+static inline bool scs_is_enabled(void)
+{
+	if (!IS_ENABLED(CONFIG_DYNAMIC_SCS))
+		return true;
+	return static_branch_likely(&dynamic_scs_enabled);
+}
+
 #else /* CONFIG_SHADOW_CALL_STACK */
 
 static inline void *scs_alloc(int node) { return NULL; }
@@ -62,6 +71,7 @@ static inline void scs_task_reset(struct task_struct *tsk) {}
 static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
 static inline void scs_release(struct task_struct *tsk) {}
 static inline bool task_scs_end_corrupted(struct task_struct *tsk) { return false; }
+static inline bool scs_is_enabled(void) { return false; }
 
 #endif /* CONFIG_SHADOW_CALL_STACK */
 
diff --git a/kernel/scs.c b/kernel/scs.c
index b7e1b096d906..8826794d2645 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -12,6 +12,10 @@
 #include <linux/vmalloc.h>
 #include <linux/vmstat.h>
 
+#ifdef CONFIG_DYNAMIC_SCS
+DEFINE_STATIC_KEY_TRUE(dynamic_scs_enabled);
+#endif
+
 static void __scs_account(void *s, int account)
 {
	struct page *scs_page = vmalloc_to_page(s);
@@ -101,14 +105,20 @@ static int scs_cleanup(unsigned int cpu)
 
 void __init scs_init(void)
 {
+	if (!scs_is_enabled())
+		return;
	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
			  scs_cleanup);
 }
 
 int scs_prepare(struct task_struct *tsk, int node)
 {
-	void *s = scs_alloc(node);
+	void *s;
 
+	if (!scs_is_enabled())
+		return 0;
+
+	s = scs_alloc(node);
	if (!s)
		return -ENOMEM;
 
@@ -148,7 +158,7 @@ void scs_release(struct task_struct *tsk)
 {
	void *s = task_scs(tsk);
 
-	if (!s)
+	if (!scs_is_enabled() || !s)
		return;
 
	WARN(task_scs_end_corrupted(tsk),
In order to allow arches to use code patching to conditionally emit the
shadow stack pushes and pops, rather than always taking the performance
hit even on CPUs that implement alternatives such as stack pointer
authentication on arm64, add a Kconfig symbol that can be set by the
arch to omit the SCS codegen itself, without otherwise affecting how
support code for SCS and compiler options (for register reservation, for
instance) are emitted.

Also, add a static key and some plumbing to omit the allocation of
shadow call stack for dynamic SCS configurations if SCS is disabled at
runtime.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 Makefile            |  2 ++
 arch/Kconfig        |  7 +++++++
 include/linux/scs.h | 10 ++++++++++
 kernel/scs.c        | 14 ++++++++++++--
 4 files changed, 31 insertions(+), 2 deletions(-)
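The gating pattern the patch introduces can be modeled in plain C. The sketch below is hypothetical userspace code, not kernel code: a plain bool and a trivial IS_ENABLED() macro stand in for the kernel's jump-label machinery and Kconfig, and malloc() stands in for scs_alloc(). It only illustrates the shape of the scs_is_enabled() / scs_prepare() interaction.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Model: IS_ENABLED() collapses at compile time, so builds without
 * CONFIG_DYNAMIC_SCS pay no runtime cost for the check; at runtime a
 * plain bool stands in for the kernel's patched static key. */
#define CONFIG_DYNAMIC_SCS 1
#define IS_ENABLED(option) (option)

static bool dynamic_scs_enabled = true; /* DEFINE_STATIC_KEY_TRUE analogue */

static bool scs_is_enabled(void)
{
	if (!IS_ENABLED(CONFIG_DYNAMIC_SCS))
		return true; /* static SCS builds: unconditionally on */
	return dynamic_scs_enabled; /* static_branch_likely() in the kernel */
}

/* scs_prepare() analogue: skip the allocation entirely when SCS is off;
 * the kernel likewise returns 0 (success) without allocating a stack. */
static void *scs_prepare_model(void)
{
	if (!scs_is_enabled())
		return NULL;
	return malloc(1024); /* stands in for scs_alloc(node) */
}
```

The key detail the sketch preserves is that the check defaults to true (the key is defined with DEFINE_STATIC_KEY_TRUE), so SCS stays enabled unless the arch flips it off during early boot.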