From patchwork Tue Feb 25 17:39:22 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404335
Date: Tue, 25 Feb 2020 09:39:22 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-2-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 01/12] add support for Clang's Shadow Call Stack (SCS)
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen
This change adds generic support for Clang's Shadow Call Stack, which
uses a shadow stack to protect return addresses from being overwritten
by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones
documented for user space. The kernel must store addresses of shadow
stacks used by other tasks and interrupt handlers in memory, which
means an attacker capable of reading and writing arbitrary memory may
be able to locate them and hijack control flow by modifying shadow
stacks that are not currently in use.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Reviewed-by: Miguel Ojeda
---
 Makefile                       |   6 ++
 arch/Kconfig                   |  34 ++++++
 include/linux/compiler-clang.h |   6 ++
 include/linux/compiler_types.h |   4 +
 include/linux/scs.h            |  57 ++++++++++
 init/init_task.c               |   8 ++
 kernel/Makefile                |   1 +
 kernel/fork.c                  |   9 ++
 kernel/sched/core.c            |   2 +
 kernel/scs.c                   | 187 +++++++++++++++++++++++++++++++++
 10 files changed, 314 insertions(+)
 create mode 100644 include/linux/scs.h
 create mode 100644 kernel/scs.c

diff --git a/Makefile b/Makefile
index 0914049d2929..ea465905b399 100644
--- a/Makefile
+++ b/Makefile
@@ -845,6 +845,12 @@ ifdef CONFIG_LIVEPATCH
 KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
 endif

+ifdef CONFIG_SHADOW_CALL_STACK
+CC_FLAGS_SCS := -fsanitize=shadow-call-stack
+KBUILD_CFLAGS += $(CC_FLAGS_SCS)
+export CC_FLAGS_SCS
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)

diff --git a/arch/Kconfig b/arch/Kconfig
index 98de654b79b3..a67fa78c92e7 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -526,6 +526,40 @@ config STACKPROTECTOR_STRONG
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

+config ARCH_SUPPORTS_SHADOW_CALL_STACK
+	bool
+	help
+	  An architecture should select this if it supports Clang's Shadow
+	  Call Stack, has asm/scs.h, and implements runtime support for shadow
+	  stack switching.
+
+config SHADOW_CALL_STACK
+	bool "Clang Shadow Call Stack"
+	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
+	help
+	  This option enables Clang's Shadow Call Stack, which uses a
+	  shadow stack to protect function return addresses from being
+	  overwritten by an attacker. More information can be found in
+	  Clang's documentation:
+
+	    https://clang.llvm.org/docs/ShadowCallStack.html
+
+	  Note that security guarantees in the kernel differ from the ones
+	  documented for user space. The kernel must store addresses of shadow
+	  stacks used by other tasks and interrupt handlers in memory, which
+	  means an attacker capable of reading and writing arbitrary memory
+	  may be able to locate them and hijack control flow by modifying
+	  shadow stacks that are not currently in use.
+
+config SHADOW_CALL_STACK_VMAP
+	bool "Use virtually mapped shadow call stacks"
+	depends on SHADOW_CALL_STACK
+	help
+	  Use virtually mapped shadow call stacks. Selecting this option
+	  provides better stack exhaustion protection, but increases per-thread
+	  memory consumption as a full page is allocated for each shadow stack.
+
 config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 333a6695a918..18fc4d29ef27 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -42,3 +42,9 @@
  * compilers, like ICC.
  */
 #define barrier() __asm__ __volatile__("" : : : "memory")
+
+#if __has_feature(shadow_call_stack)
+# define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
+#else
+# define __noscs
+#endif

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 72393a8c1a6c..be5d5be4b1ae 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -202,6 +202,10 @@ struct ftrace_likely_data {
 # define randomized_struct_fields_end
 #endif

+#ifndef __noscs
+# define __noscs
+#endif
+
 #ifndef asm_volatile_goto
 #define asm_volatile_goto(x...) asm goto(x)
 #endif

diff --git a/include/linux/scs.h b/include/linux/scs.h
new file mode 100644
index 000000000000..c5572fd770b0
--- /dev/null
+++ b/include/linux/scs.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#ifndef _LINUX_SCS_H
+#define _LINUX_SCS_H
+
+#include
+#include
+#include
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+/*
+ * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
+ * architecture) provided ~40% safety margin on stack usage while keeping
+ * memory allocation overhead reasonable.
+ */
+#define SCS_SIZE	1024UL
+#define GFP_SCS		(GFP_KERNEL | __GFP_ZERO)
+
+/*
+ * A random number outside the kernel's virtual address space to mark the
+ * end of the shadow stack.
+ */
+#define SCS_END_MAGIC	0xaf0194819b1635f6UL
+
+#define task_scs(tsk)	(task_thread_info(tsk)->shadow_call_stack)
+
+static inline void task_set_scs(struct task_struct *tsk, void *s)
+{
+	task_scs(tsk) = s;
+}
+
+extern void scs_init(void);
+extern void scs_task_reset(struct task_struct *tsk);
+extern int scs_prepare(struct task_struct *tsk, int node);
+extern bool scs_corrupted(struct task_struct *tsk);
+extern void scs_release(struct task_struct *tsk);
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+#define task_scs(tsk)	NULL
+
+static inline void task_set_scs(struct task_struct *tsk, void *s) {}
+static inline void scs_init(void) {}
+static inline void scs_task_reset(struct task_struct *tsk) {}
+static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
+static inline bool scs_corrupted(struct task_struct *tsk) { return false; }
+static inline void scs_release(struct task_struct *tsk) {}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* _LINUX_SCS_H */

diff --git a/init/init_task.c b/init/init_task.c
index 9e5cbe5eab7b..cbd40460e903 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -184,6 +185,13 @@ struct task_struct init_task
 };
 EXPORT_SYMBOL(init_task);

+#ifdef CONFIG_SHADOW_CALL_STACK
+unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data
+	__aligned(SCS_SIZE) = {
+	[(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC
+};
+#endif
+
 /*
  * Initial thread structure. Alignment of this is handled by a special
  * linker map entry.
diff --git a/kernel/Makefile b/kernel/Makefile
index 4cb4130ced32..c332eb9d4841 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -103,6 +103,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/
 obj-$(CONFIG_IRQ_WORK) += irq_work.o
 obj-$(CONFIG_CPU_PM) += cpu_pm.o
 obj-$(CONFIG_BPF) += bpf/
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o

 obj-$(CONFIG_PERF_EVENTS) += events/

diff --git a/kernel/fork.c b/kernel/fork.c
index 60a1295f4384..2bc73d654593 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -454,6 +455,8 @@ void put_task_stack(struct task_struct *tsk)

 void free_task(struct task_struct *tsk)
 {
+	scs_release(tsk);
+
 #ifndef CONFIG_THREAD_INFO_IN_TASK
	/*
	 * The task is finally done with both the stack and thread_info,
@@ -837,6 +840,8 @@ void __init fork_init(void)
			  NULL, free_vm_stack_cache);
 #endif

+	scs_init();
+
	lockdep_init_task(&init_task);
	uprobes_init();
 }
@@ -896,6 +901,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
	if (err)
		goto free_stack;

+	err = scs_prepare(tsk, node);
+	if (err)
+		goto free_stack;
+
 #ifdef CONFIG_SECCOMP
	/*
	 * We must handle setting up seccomp filters once we're under

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1a9983da4408..7473cd685560 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11,6 +11,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -6036,6 +6037,7 @@ void init_idle(struct task_struct *idle, int cpu)
	idle->se.exec_start = sched_clock();
	idle->flags |= PF_IDLE;

+	scs_task_reset(idle);
	kasan_unpoison_task_stack(idle);
 #ifdef CONFIG_SMP

diff --git a/kernel/scs.c b/kernel/scs.c
new file mode 100644
index 000000000000..28abed21950c
--- /dev/null
+++ b/kernel/scs.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static inline void *__scs_base(struct task_struct *tsk)
+{
+	/*
+	 * To minimize the risk of exposure, architectures may clear a
+	 * task's thread_info::shadow_call_stack while that task is
+	 * running, and only save/restore the active shadow call stack
+	 * pointer when the usual register may be clobbered (e.g. across
+	 * context switches).
+	 *
+	 * The shadow call stack is aligned to SCS_SIZE, and grows
+	 * upwards, so we can mask out the low bits to extract the base
+	 * when the task is not running.
+	 */
+	return (void *)((unsigned long)task_scs(tsk) & ~(SCS_SIZE - 1));
+}
+
+static inline unsigned long *scs_magic(void *s)
+{
+	return (unsigned long *)(s + SCS_SIZE) - 1;
+}
+
+static inline void scs_set_magic(void *s)
+{
+	*scs_magic(s) = SCS_END_MAGIC;
+}
+
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
+static void *scs_alloc(int node)
+{
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			memset(s, 0, SCS_SIZE);
+			goto out;
+		}
+	}
+
+	/*
+	 * We allocate a full page for the shadow stack, which should be
+	 * more than we need. Check the assumption nevertheless.
+	 */
+	BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE);
+
+	s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL, 0,
+				 node, __builtin_return_address(0));
+
+out:
+	if (s)
+		scs_set_magic(s);
+	/* TODO: poison for KASAN, unpoison in scs_free */
+
+	return s;
+}
+
+static void scs_free(void *s)
+{
+	int i;
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
+}
+
+void __init scs_init(void)
+{
+	WARN_ON(cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+				  scs_cleanup) < 0);
+}
+
+#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */
+
+static struct kmem_cache *scs_cache;
+
+static inline void *scs_alloc(int node)
+{
+	void *s;
+
+	s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	if (s) {
+		scs_set_magic(s);
+		/*
+		 * Poison the allocation to catch unintentional accesses to
+		 * the shadow stack when KASAN is enabled.
+		 */
+		kasan_poison_object_data(scs_cache, s);
+	}
+
+	return s;
+}
+
+static inline void scs_free(void *s)
+{
+	kasan_unpoison_object_data(scs_cache, s);
+	kmem_cache_free(scs_cache, s);
+}
+
+void __init scs_init(void)
+{
+	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
+				      0, NULL);
+	WARN_ON(!scs_cache);
+}
+
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+void scs_task_reset(struct task_struct *tsk)
+{
+	/*
+	 * Reset the shadow stack to the base address in case the task
+	 * is reused.
+	 */
+	task_set_scs(tsk, __scs_base(tsk));
+}
+
+int scs_prepare(struct task_struct *tsk, int node)
+{
+	void *s;
+
+	s = scs_alloc(node);
+	if (!s)
+		return -ENOMEM;
+
+	task_set_scs(tsk, s);
+	return 0;
+}
+
+bool scs_corrupted(struct task_struct *tsk)
+{
+	unsigned long *magic = scs_magic(__scs_base(tsk));
+
+	return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
+}
+
+void scs_release(struct task_struct *tsk)
+{
+	void *s;
+
+	s = __scs_base(tsk);
+	if (!s)
+		return;
+
+	WARN_ON(scs_corrupted(tsk));
+
+	task_set_scs(tsk, NULL);
+	scs_free(s);
+}
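The patch above wires the shadow stack into the task lifecycle, but the actual
pushes and pops come from the compiler. Below is a minimal user-space C model
of that idea, added for illustration only: it is not Clang's generated code,
and every name in it is made up. On AArch64 the instrumented prologue stores
the return address through x18 (roughly "str x30, [x18], #8") and the epilogue
reloads it (roughly "ldr x30, [x18, #-8]!"), which is what the model mimics.

```c
#include <stdio.h>

/* Stand-in for the x18-based shadow stack; sized like SCS_SIZE above. */
static unsigned long shadow_stack[1024 / sizeof(unsigned long)];
static unsigned long *scs_sp = shadow_stack;	/* plays the role of x18 */

/* What an instrumented prologue effectively does: save the return address. */
static void scs_push(unsigned long ret_addr)
{
	*scs_sp++ = ret_addr;
}

/* What an instrumented epilogue effectively does: reload it for the return. */
static unsigned long scs_pop(void)
{
	return *--scs_sp;
}

int main(void)
{
	scs_push(0x400abcUL);	/* pretend a function was just entered */
	/* An overwrite of the regular stack cannot change the value below. */
	printf("return address taken from shadow stack: %#lx\n", scs_pop());
	return 0;
}
```

Because the shadow stack only ever holds return addresses, it stays tiny
(1 KiB here) compared with the regular kernel stack.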
From patchwork Tue Feb 25 17:39:23 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404339
Date: Tue, 25 Feb 2020 09:39:23 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-3-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 02/12] scs: add accounting
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

This change adds accounting for the memory allocated for shadow stacks.
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 drivers/base/node.c    |  6 ++++++
 fs/proc/meminfo.c      |  4 ++++
 include/linux/mmzone.h |  3 +++
 kernel/scs.c           | 20 ++++++++++++++++++++
 mm/page_alloc.c        |  6 ++++++
 mm/vmstat.c            |  3 +++
 6 files changed, 42 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 98a31bafc8a2..874a8b428438 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -415,6 +415,9 @@ static ssize_t node_read_meminfo(struct device *dev,
		       "Node %d AnonPages:      %8lu kB\n"
		       "Node %d Shmem:          %8lu kB\n"
		       "Node %d KernelStack:    %8lu kB\n"
+#ifdef CONFIG_SHADOW_CALL_STACK
+		       "Node %d ShadowCallStack:%8lu kB\n"
+#endif
		       "Node %d PageTables:     %8lu kB\n"
		       "Node %d NFS_Unstable:   %8lu kB\n"
		       "Node %d Bounce:         %8lu kB\n"
@@ -438,6 +441,9 @@ static ssize_t node_read_meminfo(struct device *dev,
		       nid, K(node_page_state(pgdat, NR_ANON_MAPPED)),
		       nid, K(i.sharedram),
		       nid, sum_zone_node_page_state(nid, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+		       nid, sum_zone_node_page_state(nid, NR_KERNEL_SCS_BYTES) / 1024,
+#endif
		       nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)),
		       nid, K(node_page_state(pgdat, NR_UNSTABLE_NFS)),
		       nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)),

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 8c1f1bb1a5ce..49768005a79e 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -103,6 +103,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
	show_val_kb(m, "SUnreclaim:     ", sunreclaim);
	seq_printf(m, "KernelStack:    %8lu kB\n",
		   global_zone_page_state(NR_KERNEL_STACK_KB));
+#ifdef CONFIG_SHADOW_CALL_STACK
+	seq_printf(m, "ShadowCallStack:%8lu kB\n",
+		   global_zone_page_state(NR_KERNEL_SCS_BYTES) / 1024);
+#endif
	show_val_kb(m, "PageTables:     ",
		    global_zone_page_state(NR_PAGETABLE));

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 462f6873905a..0a6f395abc68 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -200,6 +200,9 @@ enum zone_stat_item {
	NR_MLOCK,		/* mlock()ed pages found and moved off LRU */
	NR_PAGETABLE,		/* used for pagetables */
	NR_KERNEL_STACK_KB,	/* measured in KiB */
+#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK)
+	NR_KERNEL_SCS_BYTES,	/* measured in bytes */
+#endif
	/* Second 128 byte cacheline */
	NR_BOUNCE,
 #if IS_ENABLED(CONFIG_ZSMALLOC)

diff --git a/kernel/scs.c b/kernel/scs.c
index 28abed21950c..5245e992c692 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include

 static inline void *__scs_base(struct task_struct *tsk)
@@ -89,6 +90,11 @@ static void scs_free(void *s)
	vfree_atomic(s);
 }

+static struct page *__scs_page(struct task_struct *tsk)
+{
+	return vmalloc_to_page(__scs_base(tsk));
+}
+
 static int scs_cleanup(unsigned int cpu)
 {
	int i;
@@ -135,6 +141,11 @@ static inline void scs_free(void *s)
	kmem_cache_free(scs_cache, s);
 }

+static struct page *__scs_page(struct task_struct *tsk)
+{
+	return virt_to_page(__scs_base(tsk));
+}
+
 void __init scs_init(void)
 {
	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
@@ -153,6 +164,12 @@ void scs_task_reset(struct task_struct *tsk)
	task_set_scs(tsk, __scs_base(tsk));
 }

+static void scs_account(struct task_struct *tsk, int account)
+{
+	mod_zone_page_state(page_zone(__scs_page(tsk)), NR_KERNEL_SCS_BYTES,
+			    account * SCS_SIZE);
+}
+
 int scs_prepare(struct task_struct *tsk, int node)
 {
	void *s;
@@ -162,6 +179,8 @@ int scs_prepare(struct task_struct *tsk, int node)
		return -ENOMEM;

	task_set_scs(tsk, s);
+	scs_account(tsk, 1);
+
	return 0;
 }

@@ -182,6 +201,7 @@ void scs_release(struct task_struct *tsk)
	WARN_ON(scs_corrupted(tsk));

+	scs_account(tsk, -1);
	task_set_scs(tsk, NULL);
	scs_free(s);
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c4eb750a199..1381b9d84e4c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5340,6 +5340,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
			" managed:%lukB"
			" mlocked:%lukB"
			" kernel_stack:%lukB"
+#ifdef CONFIG_SHADOW_CALL_STACK
+			" shadow_call_stack:%lukB"
+#endif
			" pagetables:%lukB"
			" bounce:%lukB"
			" free_pcp:%lukB"
@@ -5362,6 +5365,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
			K(zone_managed_pages(zone)),
			K(zone_page_state(zone, NR_MLOCK)),
			zone_page_state(zone, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+			zone_page_state(zone, NR_KERNEL_SCS_BYTES) / 1024,
+#endif
			K(zone_page_state(zone, NR_PAGETABLE)),
			K(zone_page_state(zone, NR_BOUNCE)),
			K(free_pcp),

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 78d53378db99..d0650391c8c1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1119,6 +1119,9 @@ const char * const vmstat_text[] = {
	"nr_mlock",
	"nr_page_table_pages",
	"nr_kernel_stack",
+#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK)
+	"nr_shadow_call_stack_bytes",
+#endif
	"nr_bounce",
 #if IS_ENABLED(CONFIG_ZSMALLOC)
	"nr_zspages",
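As a quick way to see the new counter, here is a hedged user-space sketch
(not part of the patch; the program name and structure are my own) that prints
the ShadowCallStack line this change adds to /proc/meminfo. It only produces
output on a kernel built with CONFIG_SHADOW_CALL_STACK.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* Matches the "ShadowCallStack:%8lu kB" format added above. */
		if (!strncmp(line, "ShadowCallStack:", 16))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}
```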
From patchwork Tue Feb 25 17:39:24 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404341
Date: Tue, 25 Feb 2020 09:39:24 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-4-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 03/12] scs: add support for stack usage debugging
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled,
also prints out the highest shadow stack usage per process.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 kernel/scs.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/kernel/scs.c b/kernel/scs.c
index 5245e992c692..ad74d13f2c0f 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -184,6 +184,44 @@ int scs_prepare(struct task_struct *tsk, int node)
	return 0;
 }

+#ifdef CONFIG_DEBUG_STACK_USAGE
+static inline unsigned long scs_used(struct task_struct *tsk)
+{
+	unsigned long *p = __scs_base(tsk);
+	unsigned long *end = scs_magic(p);
+	unsigned long s = (unsigned long)p;
+
+	while (p < end && READ_ONCE_NOCHECK(*p))
+		p++;
+
+	return (unsigned long)p - s;
+}
+
+static void scs_check_usage(struct task_struct *tsk)
+{
+	static DEFINE_SPINLOCK(lock);
+	static unsigned long highest;
+	unsigned long used = scs_used(tsk);
+
+	if (used <= highest)
+		return;
+
+	spin_lock(&lock);
+
+	if (used > highest) {
+		pr_info("%s (%d): highest shadow stack usage: %lu bytes\n",
+			tsk->comm, task_pid_nr(tsk), used);
+		highest = used;
+	}
+
+	spin_unlock(&lock);
+}
+#else
+static inline void scs_check_usage(struct task_struct *tsk)
+{
+}
+#endif
+
 bool scs_corrupted(struct task_struct *tsk)
 {
	unsigned long *magic = scs_magic(__scs_base(tsk));
@@ -200,6 +238,7 @@ void scs_release(struct task_struct *tsk)
		return;

	WARN_ON(scs_corrupted(tsk));
+	scs_check_usage(tsk);

	scs_account(tsk, -1);
	task_set_scs(tsk, NULL);
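The usage check above works because shadow stacks are allocated zeroed
(GFP_ZERO/memset in patch 01), so counting simply stops at the first zero slot
below the end magic. A small user-space model of scs_used() under that
assumption (illustrative only; names and values are invented):

```c
#include <stdio.h>

#define SCS_SIZE	1024UL
#define SLOTS		(SCS_SIZE / sizeof(unsigned long))

static unsigned long shadow[SLOTS];	/* zero-initialized, like a fresh stack */

static unsigned long model_scs_used(void)
{
	unsigned long *p = shadow;
	unsigned long *end = &shadow[SLOTS - 1];	/* last slot holds SCS_END_MAGIC */

	while (p < end && *p)
		p++;

	return (unsigned long)p - (unsigned long)shadow;
}

int main(void)
{
	shadow[SLOTS - 1] = 0xaf0194819b1635f6UL;	/* SCS_END_MAGIC */
	shadow[0] = 0xffff000010001234UL;		/* two pushed return addresses */
	shadow[1] = 0xffff000010005678UL;

	printf("model usage: %lu bytes\n", model_scs_used());	/* prints 16 */
	return 0;
}
```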
From patchwork Tue Feb 25 17:39:25 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404343
Date: Tue, 25 Feb 2020 09:39:25 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-5-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 04/12] scs: disable when function graph tracing is enabled
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

The graph tracer hooks returns by modifying frame records on the
(regular) stack, but with SCS the return address is taken from the
shadow stack, and the value in the frame record has no effect. The
mcount based graph tracer is therefore not compatible with SCS. The
patchable-function-entry graph tracer used for
DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved to the
shadow stack, and is compatible. Modifying the mcount based graph
tracer to work with SCS would require a mechanism to determine the
corresponding slot on the shadow stack (and to pass this through the
ftrace infrastructure), and we expect that everyone will eventually
move to the patchable-function-entry based graph tracer anyway, so for
now let's disable SCS when the mcount-based graph tracer is enabled.

SCS and patchable-function-entry are both supported from LLVM 10.x.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Reviewed-by: Mark Rutland
---
 arch/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index a67fa78c92e7..d53ade0950a5 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -535,6 +535,7 @@ config ARCH_SUPPORTS_SHADOW_CALL_STACK

 config SHADOW_CALL_STACK
	bool "Clang Shadow Call Stack"
+	depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
	help
	  This option enables Clang's Shadow Call Stack, which uses a
From patchwork Tue Feb 25 17:39:26 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404347
Date: Tue, 25 Feb 2020 09:39:26 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-6-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 05/12] arm64: reserve x18 from general allocation with SCS
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

Reserve the x18 register from general allocation when SCS is enabled,
because the compiler uses the register to store the current task's
shadow stack pointer. Note that all external kernel modules must also
be compiled with -ffixed-x18 if the kernel has SCS enabled.
Signed-off-by: Sami Tolvanen
Reviewed-by: Nick Desaulniers
Reviewed-by: Kees Cook
Acked-by: Will Deacon
---
 arch/arm64/Makefile | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index dca1a97751ab..ab26b448faa9 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -65,6 +65,10 @@ stack_protector_prepare: prepare0
					include/generated/asm-offsets.h))
 endif

+ifeq ($(CONFIG_SHADOW_CALL_STACK), y)
+KBUILD_CFLAGS	+= -ffixed-x18
+endif
+
 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
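To make the reservation concrete: with -ffixed-x18 the compiler never
allocates x18 for temporaries, which is what lets SCS use it as the
shadow-stack pointer and why out-of-tree modules must be built with the same
flag. The sketch below is an assumption-laden illustration from C (AArch64
only, and the variable is invented; the kernel itself never declares it,
since Clang emits the x18 accesses internally):

```c
#if defined(__aarch64__)
/*
 * A global register variable pinned to x18. Code like this is only
 * well-defined when every translation unit is built with -ffixed-x18,
 * which is exactly why the Makefile change above applies the flag to the
 * whole kernel build (and why external modules must match it).
 */
register unsigned long *scs_sp __asm__("x18");

unsigned long *current_shadow_stack(void)
{
	return scs_sp;
}
#endif
```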
From patchwork Tue Feb 25 17:39:27 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404351
Date: Tue, 25 Feb 2020 09:39:27 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-7-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 06/12] arm64: preserve x18 when CPU is suspended
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

Don't lose the current task's shadow stack when the CPU is suspended.

Signed-off-by: Sami Tolvanen
Reviewed-by: Nick Desaulniers
Reviewed-by: Kees Cook
Reviewed-by: Mark Rutland
Acked-by: Will Deacon
---
 arch/arm64/include/asm/suspend.h |  2 +-
 arch/arm64/mm/proc.S             | 14 ++++++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
index 8939c87c4dce..0cde2f473971 100644
--- a/arch/arm64/include/asm/suspend.h
+++ b/arch/arm64/include/asm/suspend.h
@@ -2,7 +2,7 @@
 #ifndef __ASM_SUSPEND_H
 #define __ASM_SUSPEND_H

-#define NR_CTX_REGS 12
+#define NR_CTX_REGS 13
 #define NR_CALLEE_SAVED_REGS 12

 /*

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index aafed6902411..7d37e3c70ff5 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -56,6 +56,8 @@
 * cpu_do_suspend - save CPU registers context
 *
 * x0: virtual address of context pointer
+ *
+ * This must be kept in sync with struct cpu_suspend_ctx in .
 */
 SYM_FUNC_START(cpu_do_suspend)
	mrs	x2, tpidr_el0
@@ -80,6 +82,11 @@ alternative_endif
	stp	x8, x9, [x0, #48]
	stp	x10, x11, [x0, #64]
	stp	x12, x13, [x0, #80]
+	/*
+	 * Save x18 as it may be used as a platform register, e.g. by shadow
+	 * call stack.
+	 */
+	str	x18, [x0, #96]
	ret
 SYM_FUNC_END(cpu_do_suspend)

@@ -96,6 +103,13 @@ SYM_FUNC_START(cpu_do_resume)
	ldp	x9, x10, [x0, #48]
	ldp	x11, x12, [x0, #64]
	ldp	x13, x14, [x0, #80]
+	/*
+	 * Restore x18, as it may be used as a platform register, and clear
+	 * the buffer to minimize the risk of exposure when used for shadow
+	 * call stack.
+	 */
+	ldr	x18, [x0, #96]
+	str	xzr, [x0, #96]
	msr	tpidr_el0, x2
	msr	tpidrro_el0, x3
	msr	contextidr_el1, x4
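The offsets in the hunks above line up with the context buffer growing by one
slot: #96 is the 13th 8-byte entry, which is why NR_CTX_REGS moves from 12 to
13. A tiny C sketch of that arithmetic (the struct shown is a simplified
assumption for illustration, not the kernel's full cpu_suspend_ctx):

```c
#include <stdio.h>

#define NR_CTX_REGS 13	/* was 12 before this patch */

struct cpu_suspend_ctx_model {
	unsigned long ctx_regs[NR_CTX_REGS];	/* tpidr_el0 ... and now x18 */
};

int main(void)
{
	/* str x18, [x0, #96] stores into the last, newly added slot. */
	printf("x18 slot offset: %zu bytes\n",
	       (NR_CTX_REGS - 1) * sizeof(unsigned long));	/* 96 */
	return 0;
}
```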
From patchwork Tue Feb 25 17:39:28 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404353
Date: Tue, 25 Feb 2020 09:39:28 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-8-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 07/12] arm64: efi: restore x18 if it was corrupted
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

If we detect a corrupted x18, restore the register before jumping back
to potentially SCS-instrumented code. This is safe, because the wrapper
is called with preemption disabled and a separate shadow stack is used
for interrupt handling.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Acked-by: Will Deacon
---
 arch/arm64/kernel/efi-rt-wrapper.S | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S
index 3fc71106cb2b..6ca6c0dc11a1 100644
--- a/arch/arm64/kernel/efi-rt-wrapper.S
+++ b/arch/arm64/kernel/efi-rt-wrapper.S
@@ -34,5 +34,14 @@ ENTRY(__efi_rt_asm_wrapper)
	ldp	x29, x30, [sp], #32
	b.ne	0f
	ret
-0:	b	efi_handle_corrupted_x18	// tail call
+0:
+	/*
+	 * With CONFIG_SHADOW_CALL_STACK, the kernel uses x18 to store a
+	 * shadow stack pointer, which we need to restore before returning to
+	 * potentially instrumented code. This is safe because the wrapper is
+	 * called with preemption disabled and a separate shadow stack is used
+	 * for interrupts.
+	 */
+	mov	x18, x2
+	b	efi_handle_corrupted_x18	// tail call
 ENDPROC(__efi_rt_asm_wrapper)
From patchwork Tue Feb 25 17:39:29 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404355
Date: Tue, 25 Feb 2020 09:39:29 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-9-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 08/12] arm64: vdso: disable Shadow Call Stack
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

Shadow stacks are only available in the kernel, so disable SCS
instrumentation for the vDSO.

Signed-off-by: Sami Tolvanen
Reviewed-by: Nick Desaulniers
Reviewed-by: Kees Cook
Reviewed-by: Mark Rutland
Acked-by: Will Deacon
---
 arch/arm64/kernel/vdso/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index dd2514bb1511..a87a4f11724e 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -25,7 +25,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING

 VDSO_LDFLAGS := -Bsymbolic

-CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
+CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS)
 KBUILD_CFLAGS	+= $(DISABLE_LTO)
 KASAN_SANITIZE	:= n
 UBSAN_SANITIZE	:= n
From patchwork Tue Feb 25 17:39:30 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404361
Date: Tue, 25 Feb 2020 09:39:30 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-10-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 09/12] arm64: disable SCS for hypervisor code
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
 Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers,
 Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com,
 kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, Sami Tolvanen

Disable SCS for code that runs at a different exception level by
adding __noscs to __hyp_text.
Suggested-by: James Morse
Signed-off-by: Sami Tolvanen
Acked-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_hyp.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index a3a6a2ba9a63..0f0603f55ea0 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -13,7 +13,7 @@
 #include
 #include

-#define __hyp_text __section(.hyp.text) notrace
+#define __hyp_text __section(.hyp.text) notrace __noscs

 #define read_sysreg_elx(r,nvh,vh) \
 	({ \
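For reference, __noscs comes from the generic shadow call stack support added
earlier in the series and is not defined in this patch. A minimal sketch of
what the annotation is assumed to boil down to with Clang (the attribute
spelling below is an assumption for illustration, not quoted from the series):

    /* Assumed sketch: opt a function out of -fsanitize=shadow-call-stack
     * instrumentation when SCS is enabled, compile to nothing otherwise. */
    #ifdef CONFIG_SHADOW_CALL_STACK
    # define __noscs __attribute__((no_sanitize("shadow-call-stack")))
    #else
    # define __noscs
    #endif

Functions marked __noscs keep their normal return path but get no x18-based
shadow stack pushes or pops, which is what makes the annotation suitable for
code that runs at a different exception level where no shadow stack has been
set up.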
From patchwork Tue Feb 25 17:39:31 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404365
Date: Tue, 25 Feb 2020 09:39:31 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-11-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 10/12] arm64: implement Shadow Call Stack
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change implements shadow stack switching, initial SCS set-up, and
interrupt shadow stacks for arm64.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Reviewed-by: James Morse
---
 arch/arm64/Kconfig                   |  5 ++++
 arch/arm64/include/asm/scs.h         | 37 +++++++++++++++++++++++++
 arch/arm64/include/asm/thread_info.h |  3 +++
 arch/arm64/kernel/Makefile           |  1 +
 arch/arm64/kernel/asm-offsets.c      |  3 +++
 arch/arm64/kernel/entry.S            | 32 ++++++++++++++++++++--
 arch/arm64/kernel/head.S             |  9 +++++++
 arch/arm64/kernel/irq.c              |  2 ++
 arch/arm64/kernel/process.c          |  2 ++
 arch/arm64/kernel/scs.c              | 40 ++++++++++++++++++++++++++++
 arch/arm64/kernel/smp.c              |  4 +++
 11 files changed, 136 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/scs.h
 create mode 100644 arch/arm64/kernel/scs.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0b30e884e088..eae76686be77 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -65,6 +65,7 @@ config ARM64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_SUPPORTS_MEMORY_FAILURE
+	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
 	select ARCH_SUPPORTS_NUMA_BALANCING
@@ -1022,6 +1023,10 @@ config ARCH_HAS_CACHE_LINE_SIZE
 config ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	def_bool y if PGTABLE_LEVELS > 2

+# Supported by clang >= 7.0
+config CC_HAVE_SHADOW_CALL_STACK
+	def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
+
 config SECCOMP
 	bool "Enable seccomp to safely compute untrusted bytecode"
 	---help---
diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
new file mode 100644
index 000000000000..c50d2b0c6c5f
--- /dev/null
+++ b/arch/arm64/include/asm/scs.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_SCS_H
+#define _ASM_SCS_H
+
+#ifndef __ASSEMBLY__
+
+#include
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+extern void scs_init_irq(void);
+
+static __always_inline void scs_save(struct task_struct *tsk)
+{
+	void *s;
+
+	asm volatile("mov %0, x18" : "=r" (s));
+	task_set_scs(tsk, s);
+}
+
+static inline void scs_overflow_check(struct task_struct *tsk)
+{
+	if (unlikely(scs_corrupted(tsk)))
+		panic("corrupted shadow stack detected inside scheduler\n");
+}
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+static inline void scs_init_irq(void) {}
+static inline void scs_save(struct task_struct *tsk) {}
+static inline void scs_overflow_check(struct task_struct *tsk) {}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_SCS_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index f0cec4160136..8c73764b9ed2 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -41,6 +41,9 @@ struct thread_info {
 #endif
 		} preempt;
 	};
+#ifdef CONFIG_SHADOW_CALL_STACK
+	void *shadow_call_stack;
+#endif
 };

 #define thread_saved_pc(tsk) \
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index fc6488660f64..08fafc4da2cf 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
 obj-$(CONFIG_ARM64_SSBD) += ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o

 obj-y += vdso/ probes/
 obj-$(CONFIG_COMPAT_VDSO) += vdso32/
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index a5bdce8af65b..d485dc5cd196 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -33,6 +33,9 @@ int main(void)
   DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
   DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0));
+#endif
+#ifdef CONFIG_SHADOW_CALL_STACK
+  DEFINE(TSK_TI_SCS, offsetof(struct task_struct, thread_info.shadow_call_stack));
 #endif
   DEFINE(TSK_STACK, offsetof(struct task_struct, stack));
 #ifdef CONFIG_STACKPROTECTOR
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 9461d812ae27..4b18c3bbdea5 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -177,6 +177,10 @@ alternative_cb_end

 	apply_ssbd 1, x22, x23

+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr	x18, [tsk, #TSK_TI_SCS]		// Restore shadow call stack
+	str	xzr, [tsk, #TSK_TI_SCS]		// Limit visibility of saved SCS
+#endif
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -278,6 +282,12 @@ alternative_else_nop_endif
 	ct_user_enter
 	.endif

+#ifdef CONFIG_SHADOW_CALL_STACK
+	.if	\el == 0
+	str	x18, [tsk, #TSK_TI_SCS]		// Save shadow call stack
+	.endif
+#endif
+
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR
@@ -383,6 +393,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0

 	.macro	irq_stack_entry
 	mov	x19, sp			// preserve the original sp
+#ifdef CONFIG_SHADOW_CALL_STACK
+	mov	x24, x18		// preserve the original shadow stack
+#endif

 	/*
 	 * Compare sp with the base of the task stack.
@@ -400,15 +413,25 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0

 	/* switch to the irq stack */
 	mov	sp, x26
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* also switch to the irq shadow stack */
+	ldr_this_cpu x18, irq_shadow_call_stack_ptr, x26
+#endif
+
 9998:
 	.endm

 	/*
-	 * x19 should be preserved between irq_stack_entry and
-	 * irq_stack_exit.
+	 * The callee-saved regs (x19-x29) should be preserved between
+	 * irq_stack_entry and irq_stack_exit, but note that kernel_entry
+	 * uses x20-x23 to store data for later use.
 	 */
 	.macro	irq_stack_exit
 	mov	sp, x19
+#ifdef CONFIG_SHADOW_CALL_STACK
+	mov	x18, x24
+#endif
 	.endm

 /* GPRs used by entry code */
@@ -895,6 +918,11 @@ ENTRY(cpu_switch_to)
 	ldr	lr, [x8]
 	mov	sp, x9
 	msr	sp_el0, x1
+#ifdef CONFIG_SHADOW_CALL_STACK
+	str	x18, [x0, #TSK_TI_SCS]
+	ldr	x18, [x1, #TSK_TI_SCS]
+	str	xzr, [x1, #TSK_TI_SCS]		// limit visibility of saved SCS
+#endif
 	ret
 ENDPROC(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 989b1944cb71..ca561de903d4 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -424,6 +425,10 @@ __primary_switched:
 	stp	xzr, x30, [sp, #-16]!
 	mov	x29, sp

+#ifdef CONFIG_SHADOW_CALL_STACK
+	adr_l	x18, init_shadow_call_stack	// Set shadow call stack
+#endif
+
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer

 	ldr_l	x4, kimage_vaddr		// Save the offset between
@@ -731,6 +736,10 @@ __secondary_switched:
 	ldr	x2, [x0, #CPU_BOOT_TASK]
 	cbz	x2, __secondary_too_slow
 	msr	sp_el0, x2
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr	x18, [x2, #TSK_TI_SCS]		// set shadow call stack
+	str	xzr, [x2, #TSK_TI_SCS]		// limit visibility of saved SCS
+#endif
 	mov	x29, #0
 	mov	x30, #0
 	b	secondary_start_kernel
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 04a327ccf84d..fe0ca522ff60 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include

 unsigned long irq_err_count;
@@ -63,6 +64,7 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	scs_init_irq();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 00626057a384..9151616c354c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include
 #include

 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
@@ -514,6 +515,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	uao_thread_switch(next);
 	ptrauth_thread_switch(next);
 	ssbs_thread_switch(next);
+	scs_overflow_check(next);

 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
new file mode 100644
index 000000000000..eaadf5430baa
--- /dev/null
+++ b/arch/arm64/kernel/scs.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include
+#include
+#include
+#include
+
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
+DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
+	__aligned(SCS_SIZE);
+#endif
+
+void scs_init_irq(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+		unsigned long *p;
+
+		p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+					 VMALLOC_START, VMALLOC_END,
+					 GFP_SCS, PAGE_KERNEL,
+					 0, cpu_to_node(cpu),
+					 __builtin_return_address(0));
+
+		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
+#else
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			per_cpu(irq_shadow_call_stack, cpu);
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	}
+}
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index d4ed9a19d8fe..f2cb344f998c 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -358,6 +359,9 @@ void cpu_die(void)
 {
 	unsigned int cpu = smp_processor_id();

+	/* Save the shadow stack pointer before exiting the idle task */
+	scs_save(current);
+
 	idle_task_exit();

 	local_daif_mask();
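The scs_save() and scs_overflow_check() helpers in the patch above rely on
task_set_scs() and scs_corrupted() from the generic shadow call stack code
earlier in the series, which is not part of this excerpt. As a rough sketch of
the kind of check scs_corrupted() is assumed to perform (the names and the
magic value below are illustrative assumptions, not code from the series), the
generic code reserves the last slot of each SCS_SIZE-byte shadow stack for a
sentinel and treats a clobbered sentinel as an overflow:

    /* Illustrative sketch only: sentinel-based overflow detection. */
    #define EXAMPLE_SCS_END_MAGIC	0x5f6UL		/* placeholder value */

    static inline unsigned long *example_scs_magic(void *scs_base)
    {
    	/* last unsigned long slot of the shadow stack region */
    	return (unsigned long *)(scs_base + SCS_SIZE) - 1;
    }

    static inline bool example_scs_corrupted(void *scs_base)
    {
    	/* a push past the end of the region overwrites the sentinel */
    	return *example_scs_magic(scs_base) != EXAMPLE_SCS_END_MAGIC;
    }

A check of this shape lets scs_overflow_check() catch an overflowed shadow
stack on the next context switch instead of letting the corruption propagate.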
From patchwork Tue Feb 25 17:39:32 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404369
Date: Tue, 25 Feb 2020 09:39:32 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-12-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 11/12] arm64: scs: add shadow stacks for SDEI
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.

Signed-off-by: Sami Tolvanen
Reviewed-by: James Morse
Tested-by: James Morse
---
 arch/arm64/include/asm/scs.h |   2 +
 arch/arm64/kernel/entry.S    |  14 ++++-
 arch/arm64/kernel/scs.c      | 106 +++++++++++++++++++++++++++++------
 arch/arm64/kernel/sdei.c     |   7 +++
 4 files changed, 112 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index c50d2b0c6c5f..8e327e14bc15 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_SHADOW_CALL_STACK

 extern void scs_init_irq(void);
+extern int scs_init_sdei(void);

 static __always_inline void scs_save(struct task_struct *tsk)
 {
@@ -27,6 +28,7 @@ static inline void scs_overflow_check(struct task_struct *tsk)
 #else /* CONFIG_SHADOW_CALL_STACK */

 static inline void scs_init_irq(void) {}
+static inline int scs_init_sdei(void) { return 0; }
 static inline void scs_save(struct task_struct *tsk) {}
 static inline void scs_overflow_check(struct task_struct *tsk) {}
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 4b18c3bbdea5..2e2ce1b9ebf5 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1050,13 +1050,16 @@ ENTRY(__sdei_asm_handler)

 	mov	x19, x1

+#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
+	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 	/*
 	 * entry.S may have been using sp as a scratch register, find whether
 	 * this is a normal or critical event and switch to the appropriate
 	 * stack for this CPU.
 	 */
-	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
 	cbnz	w4, 1f
 	ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
 	b	2f
@@ -1066,6 +1069,15 @@ ENTRY(__sdei_asm_handler)
 	mov	sp, x5
 #endif

+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* Use a separate shadow call stack for normal and critical events */
+	cbnz	w4, 3f
+	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
+	b	4f
+3:	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
+4:
+#endif
+
 	/*
 	 * We may have interrupted userspace, or a guest, or exit-from or
 	 * return-to either of these. We can't trust sp_el0, restore it.
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
index eaadf5430baa..dddb7c56518b 100644
--- a/arch/arm64/kernel/scs.c
+++ b/arch/arm64/kernel/scs.c
@@ -10,31 +10,105 @@
 #include
 #include

-DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#define DECLARE_SCS(name) \
+	DECLARE_PER_CPU(unsigned long *, name ## _ptr); \
+	DECLARE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)

-#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
-DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
-	__aligned(SCS_SIZE);
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+#define DEFINE_SCS(name) \
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr)
+#else
+/* Allocate a static per-CPU shadow stack */
+#define DEFINE_SCS(name) \
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr); \
+	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name) \
+		__aligned(SCS_SIZE)
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+DECLARE_SCS(irq_shadow_call_stack);
+DECLARE_SCS(sdei_shadow_call_stack_normal);
+DECLARE_SCS(sdei_shadow_call_stack_critical);
+
+DEFINE_SCS(irq_shadow_call_stack);
+#ifdef CONFIG_ARM_SDE_INTERFACE
+DEFINE_SCS(sdei_shadow_call_stack_normal);
+DEFINE_SCS(sdei_shadow_call_stack_critical);
 #endif

+static int scs_alloc_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p;
+
+	p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL,
+				 0, cpu_to_node(cpu),
+				 __builtin_return_address(0));
+
+	if (!p)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = p;
+
+	return 0;
+}
+
+static void scs_free_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p = per_cpu(*ptr, cpu);
+
+	if (p) {
+		per_cpu(*ptr, cpu) = NULL;
+		vfree(p);
+	}
+}
+
+static void scs_free_sdei(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		scs_free_percpu(&sdei_shadow_call_stack_normal_ptr, cpu);
+		scs_free_percpu(&sdei_shadow_call_stack_critical_ptr, cpu);
+	}
+}
+
 void scs_init_irq(void)
 {
 	int cpu;

 	for_each_possible_cpu(cpu) {
-#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
-		unsigned long *p;
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP))
+			WARN_ON(scs_alloc_percpu(&irq_shadow_call_stack_ptr,
+						 cpu));
+		else
+			per_cpu(irq_shadow_call_stack_ptr, cpu) =
+				per_cpu(irq_shadow_call_stack, cpu);
+	}
+}

-		p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
-					 VMALLOC_START, VMALLOC_END,
-					 GFP_SCS, PAGE_KERNEL,
-					 0, cpu_to_node(cpu),
-					 __builtin_return_address(0));
+int scs_init_sdei(void)
+{
+	int cpu;

-		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
-#else
-		per_cpu(irq_shadow_call_stack_ptr, cpu) =
-			per_cpu(irq_shadow_call_stack, cpu);
-#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+		return 0;
+
+	for_each_possible_cpu(cpu) {
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP)) {
+			if (scs_alloc_percpu(
+				&sdei_shadow_call_stack_normal_ptr, cpu) ||
+			    scs_alloc_percpu(
+				&sdei_shadow_call_stack_critical_ptr, cpu)) {
+				scs_free_sdei();
+				return -ENOMEM;
+			}
+		} else {
+			per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_normal, cpu);
+			per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_critical, cpu);
+		}
 	}
+
+	return 0;
 }
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index d6259dac62b6..2854b9f7760a 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -162,6 +163,12 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 			return 0;
 	}

+	if (scs_init_sdei()) {
+		if (IS_ENABLED(CONFIG_VMAP_STACK))
+			free_sdei_stacks();
+		return 0;
+	}
+
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;

 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
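For reference, the DEFINE_SCS() helper introduced in the patch above expands
differently depending on CONFIG_SHADOW_CALL_STACK_VMAP. In the non-VMAP case,
DEFINE_SCS(irq_shadow_call_stack) becomes roughly:

    /* Expansion sketch of DEFINE_SCS(irq_shadow_call_stack) without
     * CONFIG_SHADOW_CALL_STACK_VMAP: a per-CPU pointer plus a statically
     * allocated, SCS_SIZE-aligned per-CPU backing array. */
    DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
    DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
    	__aligned(SCS_SIZE);

so scs_init_irq() and scs_init_sdei() only have to point the *_ptr variable at
either this static array or a freshly vmalloc'd region.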
From patchwork Tue Feb 25 17:39:33 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11404373
Date: Tue, 25 Feb 2020 09:39:33 -0800
In-Reply-To: <20200225173933.74818-1-samitolvanen@google.com>
Message-Id: <20200225173933.74818-13-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200225173933.74818-1-samitolvanen@google.com>
Subject: [PATCH v9 12/12] efi/libstub: disable SCS
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

Shadow stacks are not available in the EFI stub, so filter out SCS
flags.

Suggested-by: James Morse
Signed-off-by: Sami Tolvanen
---
 drivers/firmware/efi/libstub/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 98a81576213d..ee5c37c401c9 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -30,6 +30,9 @@ KBUILD_CFLAGS := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
 				   $(call cc-option,-fno-stack-protector) \
 				   -D__DISABLE_EXPORTS

+# remove SCS flags from all objects in this directory
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+
 GCOV_PROFILE := n
 KASAN_SANITIZE := n
 UBSAN_SANITIZE := n