From patchwork Mon Apr 6 16:41:10 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11475823
Date: Mon, 6 Apr 2020 09:41:10 -0700
In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com>
Message-Id: <20200406164121.154322-2-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog
Subject: [PATCH v10 01/12] add support for Clang's Shadow Call Stack (SCS)
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro
Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen This change adds generic support for Clang's Shadow Call Stack, which uses a shadow stack to protect return addresses from being overwritten by an attacker. Details are available here: https://clang.llvm.org/docs/ShadowCallStack.html Note that security guarantees in the kernel differ from the ones documented for user space. The kernel must store addresses of shadow stacks used by other tasks and interrupt handlers in memory, which means an attacker capable reading and writing arbitrary memory may be able to locate them and hijack control flow by modifying shadow stacks that are not currently in use. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Reviewed-by: Miguel Ojeda --- Makefile | 6 ++ arch/Kconfig | 34 ++++++ include/linux/compiler-clang.h | 6 ++ include/linux/compiler_types.h | 4 + include/linux/scs.h | 57 ++++++++++ init/init_task.c | 8 ++ kernel/Makefile | 1 + kernel/fork.c | 9 ++ kernel/sched/core.c | 2 + kernel/scs.c | 187 +++++++++++++++++++++++++++++++++ 10 files changed, 314 insertions(+) create mode 100644 include/linux/scs.h create mode 100644 kernel/scs.c diff --git a/Makefile b/Makefile index c91342953d9e..cb2ed7443d57 100644 --- a/Makefile +++ b/Makefile @@ -851,6 +851,12 @@ ifdef CONFIG_LIVEPATCH KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone) endif +ifdef CONFIG_SHADOW_CALL_STACK +CC_FLAGS_SCS := -fsanitize=shadow-call-stack +KBUILD_CFLAGS += $(CC_FLAGS_SCS) +export CC_FLAGS_SCS +endif + # arch Makefile may override CC so keep this after arch Makefile is included NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include) diff --git a/arch/Kconfig b/arch/Kconfig index 786a85d4ad40..691a552c2cc3 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -533,6 +533,40 @@ config STACKPROTECTOR_STRONG about 20% of all kernel functions, which increases the kernel code size by about 2%. +config ARCH_SUPPORTS_SHADOW_CALL_STACK + bool + help + An architecture should select this if it supports Clang's Shadow + Call Stack, has asm/scs.h, and implements runtime support for shadow + stack switching. + +config SHADOW_CALL_STACK + bool "Clang Shadow Call Stack" + depends on ARCH_SUPPORTS_SHADOW_CALL_STACK + help + This option enables Clang's Shadow Call Stack, which uses a + shadow stack to protect function return addresses from being + overwritten by an attacker. More information can be found in + Clang's documentation: + + https://clang.llvm.org/docs/ShadowCallStack.html + + Note that security guarantees in the kernel differ from the ones + documented for user space. The kernel must store addresses of shadow + stacks used by other tasks and interrupt handlers in memory, which + means an attacker capable of reading and writing arbitrary memory + may be able to locate them and hijack control flow by modifying + shadow stacks that are not currently in use. + +config SHADOW_CALL_STACK_VMAP + bool "Use virtually mapped shadow call stacks" + depends on SHADOW_CALL_STACK + help + Use virtually mapped shadow call stacks. Selecting this option + provides better stack exhaustion protection, but increases per-thread + memory consumption as a full page is allocated for each shadow stack. 
+ + config HAVE_ARCH_WITHIN_STACK_FRAMES bool help diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index 333a6695a918..18fc4d29ef27 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -42,3 +42,9 @@ * compilers, like ICC. */ #define barrier() __asm__ __volatile__("" : : : "memory") + +#if __has_feature(shadow_call_stack) +# define __noscs __attribute__((__no_sanitize__("shadow-call-stack"))) +#else +# define __noscs +#endif diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index 72393a8c1a6c..be5d5be4b1ae 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -202,6 +202,10 @@ struct ftrace_likely_data { # define randomized_struct_fields_end #endif +#ifndef __noscs +# define __noscs +#endif + #ifndef asm_volatile_goto #define asm_volatile_goto(x...) asm goto(x) #endif diff --git a/include/linux/scs.h b/include/linux/scs.h new file mode 100644 index 000000000000..c5572fd770b0 --- /dev/null +++ b/include/linux/scs.h @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Shadow Call Stack support. + * + * Copyright (C) 2019 Google LLC + */ + +#ifndef _LINUX_SCS_H +#define _LINUX_SCS_H + +#include +#include +#include + +#ifdef CONFIG_SHADOW_CALL_STACK + +/* + * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit + * architecture) provided ~40% safety margin on stack usage while keeping + * memory allocation overhead reasonable. + */ +#define SCS_SIZE 1024UL +#define GFP_SCS (GFP_KERNEL | __GFP_ZERO) + +/* + * A random number outside the kernel's virtual address space to mark the + * end of the shadow stack. + */ +#define SCS_END_MAGIC 0xaf0194819b1635f6UL + +#define task_scs(tsk) (task_thread_info(tsk)->shadow_call_stack) + +static inline void task_set_scs(struct task_struct *tsk, void *s) +{ + task_scs(tsk) = s; +} + +extern void scs_init(void); +extern void scs_task_reset(struct task_struct *tsk); +extern int scs_prepare(struct task_struct *tsk, int node); +extern bool scs_corrupted(struct task_struct *tsk); +extern void scs_release(struct task_struct *tsk); + +#else /* CONFIG_SHADOW_CALL_STACK */ + +#define task_scs(tsk) NULL + +static inline void task_set_scs(struct task_struct *tsk, void *s) {} +static inline void scs_init(void) {} +static inline void scs_task_reset(struct task_struct *tsk) {} +static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; } +static inline bool scs_corrupted(struct task_struct *tsk) { return false; } +static inline void scs_release(struct task_struct *tsk) {} + +#endif /* CONFIG_SHADOW_CALL_STACK */ + +#endif /* _LINUX_SCS_H */ diff --git a/init/init_task.c b/init/init_task.c index bd403ed3e418..aaa71366d162 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -185,6 +186,13 @@ struct task_struct init_task }; EXPORT_SYMBOL(init_task); +#ifdef CONFIG_SHADOW_CALL_STACK +unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data + __aligned(SCS_SIZE) = { + [(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC +}; +#endif + /* * Initial thread structure. Alignment of this is handled by a special * linker map entry. 
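[Aside, not part of the patch: the interplay between the SCS_SIZE alignment, the upward-growing stack, and the SCS_END_MAGIC canary defined above is easiest to see in isolation. The following stand-alone user-space C sketch reuses the patch's constants but invents its own variable names; it only illustrates how an aligned allocation lets the base be recovered by masking any pointer into the stack, and how the magic word in the last slot detects overflow.]

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SCS_SIZE      1024UL
#define SCS_END_MAGIC 0xaf0194819b1635f6UL

int main(void)
{
	/* Allocate an SCS_SIZE-aligned block, as the kernel does. */
	unsigned long *base = aligned_alloc(SCS_SIZE, SCS_SIZE);
	unsigned long *magic = (unsigned long *)((char *)base + SCS_SIZE) - 1;
	unsigned long *sp = base;
	unsigned long *recovered;

	*magic = SCS_END_MAGIC;

	/* Simulate a few pushes: the shadow stack grows upwards from base. */
	*sp++ = 0xdeadbeef;	/* a saved return address */
	*sp++ = 0xcafebabe;

	/*
	 * Because the allocation is SCS_SIZE-aligned, the base can be
	 * recovered from any pointer into the stack by masking the low bits.
	 */
	recovered = (unsigned long *)((uintptr_t)sp & ~(SCS_SIZE - 1));

	printf("base recovered: %s\n", recovered == base ? "yes" : "no");
	printf("corrupted: %s\n", *magic != SCS_END_MAGIC ? "yes" : "no");

	free(base);
	return 0;
}

[__scs_base() in kernel/scs.c below relies on exactly this masking, which is why both the static init_shadow_call_stack and the dynamically allocated stacks are SCS_SIZE-aligned.]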
diff --git a/kernel/Makefile b/kernel/Makefile index 4cb4130ced32..c332eb9d4841 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -103,6 +103,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/ obj-$(CONFIG_IRQ_WORK) += irq_work.o obj-$(CONFIG_CPU_PM) += cpu_pm.o obj-$(CONFIG_BPF) += bpf/ +obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o obj-$(CONFIG_PERF_EVENTS) += events/ diff --git a/kernel/fork.c b/kernel/fork.c index d2a967bf85d5..3f54070a7a53 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -94,6 +94,7 @@ #include #include #include +#include #include #include @@ -455,6 +456,8 @@ void put_task_stack(struct task_struct *tsk) void free_task(struct task_struct *tsk) { + scs_release(tsk); + #ifndef CONFIG_THREAD_INFO_IN_TASK /* * The task is finally done with both the stack and thread_info, @@ -838,6 +841,8 @@ void __init fork_init(void) NULL, free_vm_stack_cache); #endif + scs_init(); + lockdep_init_task(&init_task); uprobes_init(); } @@ -897,6 +902,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) if (err) goto free_stack; + err = scs_prepare(tsk, node); + if (err) + goto free_stack; + #ifdef CONFIG_SECCOMP /* * We must handle setting up seccomp filters once we're under diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a2694ba82874..9bb593f7974f 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -11,6 +11,7 @@ #include #include +#include #include #include @@ -6050,6 +6051,7 @@ void init_idle(struct task_struct *idle, int cpu) idle->se.exec_start = sched_clock(); idle->flags |= PF_IDLE; + scs_task_reset(idle); kasan_unpoison_task_stack(idle); #ifdef CONFIG_SMP diff --git a/kernel/scs.c b/kernel/scs.c new file mode 100644 index 000000000000..28abed21950c --- /dev/null +++ b/kernel/scs.c @@ -0,0 +1,187 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Shadow Call Stack support. + * + * Copyright (C) 2019 Google LLC + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +static inline void *__scs_base(struct task_struct *tsk) +{ + /* + * To minimize risk the of exposure, architectures may clear a + * task's thread_info::shadow_call_stack while that task is + * running, and only save/restore the active shadow call stack + * pointer when the usual register may be clobbered (e.g. across + * context switches). + * + * The shadow call stack is aligned to SCS_SIZE, and grows + * upwards, so we can mask out the low bits to extract the base + * when the task is not running. + */ + return (void *)((unsigned long)task_scs(tsk) & ~(SCS_SIZE - 1)); +} + +static inline unsigned long *scs_magic(void *s) +{ + return (unsigned long *)(s + SCS_SIZE) - 1; +} + +static inline void scs_set_magic(void *s) +{ + *scs_magic(s) = SCS_END_MAGIC; +} + +#ifdef CONFIG_SHADOW_CALL_STACK_VMAP + +/* Matches NR_CACHED_STACKS for VMAP_STACK */ +#define NR_CACHED_SCS 2 +static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]); + +static void *scs_alloc(int node) +{ + int i; + void *s; + + for (i = 0; i < NR_CACHED_SCS; i++) { + s = this_cpu_xchg(scs_cache[i], NULL); + if (s) { + memset(s, 0, SCS_SIZE); + goto out; + } + } + + /* + * We allocate a full page for the shadow stack, which should be + * more than we need. Check the assumption nevertheless. 
+ */ + BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE); + + s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE, + VMALLOC_START, VMALLOC_END, + GFP_SCS, PAGE_KERNEL, 0, + node, __builtin_return_address(0)); + +out: + if (s) + scs_set_magic(s); + /* TODO: poison for KASAN, unpoison in scs_free */ + + return s; +} + +static void scs_free(void *s) +{ + int i; + + for (i = 0; i < NR_CACHED_SCS; i++) + if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL) + return; + + vfree_atomic(s); +} + +static int scs_cleanup(unsigned int cpu) +{ + int i; + void **cache = per_cpu_ptr(scs_cache, cpu); + + for (i = 0; i < NR_CACHED_SCS; i++) { + vfree(cache[i]); + cache[i] = NULL; + } + + return 0; +} + +void __init scs_init(void) +{ + WARN_ON(cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL, + scs_cleanup) < 0); +} + +#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */ + +static struct kmem_cache *scs_cache; + +static inline void *scs_alloc(int node) +{ + void *s; + + s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node); + if (s) { + scs_set_magic(s); + /* + * Poison the allocation to catch unintentional accesses to + * the shadow stack when KASAN is enabled. + */ + kasan_poison_object_data(scs_cache, s); + } + + return s; +} + +static inline void scs_free(void *s) +{ + kasan_unpoison_object_data(scs_cache, s); + kmem_cache_free(scs_cache, s); +} + +void __init scs_init(void) +{ + scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE, + 0, NULL); + WARN_ON(!scs_cache); +} + +#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */ + +void scs_task_reset(struct task_struct *tsk) +{ + /* + * Reset the shadow stack to the base address in case the task + * is reused. + */ + task_set_scs(tsk, __scs_base(tsk)); +} + +int scs_prepare(struct task_struct *tsk, int node) +{ + void *s; + + s = scs_alloc(node); + if (!s) + return -ENOMEM; + + task_set_scs(tsk, s); + return 0; +} + +bool scs_corrupted(struct task_struct *tsk) +{ + unsigned long *magic = scs_magic(__scs_base(tsk)); + + return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC; +} + +void scs_release(struct task_struct *tsk) +{ + void *s; + + s = __scs_base(tsk); + if (!s) + return; + + WARN_ON(scs_corrupted(tsk)); + + task_set_scs(tsk, NULL); + scs_free(s); +} From patchwork Mon Apr 6 16:41:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475827 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6BE80913 for ; Mon, 6 Apr 2020 16:42:01 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 9A1B924975 for ; Mon, 6 Apr 2020 16:42:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ga0tMwo4" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9A1B924975 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18432-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 23714 invoked by uid 550); 6 Apr 2020 16:41:47 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 23611 invoked 
from network); 6 Apr 2020 16:41:46 -0000
Date: Mon, 6 Apr 2020 09:41:11 -0700
In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com>
Message-Id: <20200406164121.154322-3-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog
Subject: [PATCH v10 02/12] scs: add accounting
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change adds accounting for the memory allocated for shadow stacks.
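[For reference, not part of the patch text: once applied, the counter surfaces as a "ShadowCallStack:" line in /proc/meminfo and in the per-node meminfo and show_free_areas output, as the diff below shows. A minimal user-space C sketch for reading it, assuming a kernel built with CONFIG_SHADOW_CALL_STACK:]

#include <stdio.h>
#include <string.h>

/*
 * Print the ShadowCallStack line this patch adds to /proc/meminfo.
 * Purely illustrative; the field only exists on kernels built with
 * CONFIG_SHADOW_CALL_STACK.
 */
int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;

	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "ShadowCallStack:", 16))
			fputs(line, stdout);

	fclose(f);
	return 0;
}

[Internally the accounting is a single mod_zone_page_state() adjustment of plus or minus SCS_SIZE bytes when a shadow stack is allocated or released, so the counter reflects allocated stacks rather than bytes actually used.]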
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- drivers/base/node.c | 6 ++++++ fs/proc/meminfo.c | 4 ++++ include/linux/mmzone.h | 3 +++ kernel/scs.c | 20 ++++++++++++++++++++ mm/page_alloc.c | 6 ++++++ mm/vmstat.c | 3 +++ 6 files changed, 42 insertions(+) diff --git a/drivers/base/node.c b/drivers/base/node.c index 10d7e818e118..502ab5447c8d 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -415,6 +415,9 @@ static ssize_t node_read_meminfo(struct device *dev, "Node %d AnonPages: %8lu kB\n" "Node %d Shmem: %8lu kB\n" "Node %d KernelStack: %8lu kB\n" +#ifdef CONFIG_SHADOW_CALL_STACK + "Node %d ShadowCallStack:%8lu kB\n" +#endif "Node %d PageTables: %8lu kB\n" "Node %d NFS_Unstable: %8lu kB\n" "Node %d Bounce: %8lu kB\n" @@ -438,6 +441,9 @@ static ssize_t node_read_meminfo(struct device *dev, nid, K(node_page_state(pgdat, NR_ANON_MAPPED)), nid, K(i.sharedram), nid, sum_zone_node_page_state(nid, NR_KERNEL_STACK_KB), +#ifdef CONFIG_SHADOW_CALL_STACK + nid, sum_zone_node_page_state(nid, NR_KERNEL_SCS_BYTES) / 1024, +#endif nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)), nid, K(node_page_state(pgdat, NR_UNSTABLE_NFS)), nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)), diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c index 8c1f1bb1a5ce..49768005a79e 100644 --- a/fs/proc/meminfo.c +++ b/fs/proc/meminfo.c @@ -103,6 +103,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v) show_val_kb(m, "SUnreclaim: ", sunreclaim); seq_printf(m, "KernelStack: %8lu kB\n", global_zone_page_state(NR_KERNEL_STACK_KB)); +#ifdef CONFIG_SHADOW_CALL_STACK + seq_printf(m, "ShadowCallStack:%8lu kB\n", + global_zone_page_state(NR_KERNEL_SCS_BYTES) / 1024); +#endif show_val_kb(m, "PageTables: ", global_zone_page_state(NR_PAGETABLE)); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index e84d448988b6..a6c60e6efa68 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -200,6 +200,9 @@ enum zone_stat_item { NR_MLOCK, /* mlock()ed pages found and moved off LRU */ NR_PAGETABLE, /* used for pagetables */ NR_KERNEL_STACK_KB, /* measured in KiB */ +#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) + NR_KERNEL_SCS_BYTES, /* measured in bytes */ +#endif /* Second 128 byte cacheline */ NR_BOUNCE, #if IS_ENABLED(CONFIG_ZSMALLOC) diff --git a/kernel/scs.c b/kernel/scs.c index 28abed21950c..5245e992c692 100644 --- a/kernel/scs.c +++ b/kernel/scs.c @@ -12,6 +12,7 @@ #include #include #include +#include #include static inline void *__scs_base(struct task_struct *tsk) @@ -89,6 +90,11 @@ static void scs_free(void *s) vfree_atomic(s); } +static struct page *__scs_page(struct task_struct *tsk) +{ + return vmalloc_to_page(__scs_base(tsk)); +} + static int scs_cleanup(unsigned int cpu) { int i; @@ -135,6 +141,11 @@ static inline void scs_free(void *s) kmem_cache_free(scs_cache, s); } +static struct page *__scs_page(struct task_struct *tsk) +{ + return virt_to_page(__scs_base(tsk)); +} + void __init scs_init(void) { scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE, @@ -153,6 +164,12 @@ void scs_task_reset(struct task_struct *tsk) task_set_scs(tsk, __scs_base(tsk)); } +static void scs_account(struct task_struct *tsk, int account) +{ + mod_zone_page_state(page_zone(__scs_page(tsk)), NR_KERNEL_SCS_BYTES, + account * SCS_SIZE); +} + int scs_prepare(struct task_struct *tsk, int node) { void *s; @@ -162,6 +179,8 @@ int scs_prepare(struct task_struct *tsk, int node) return -ENOMEM; task_set_scs(tsk, s); + scs_account(tsk, 1); + return 0; } @@ -182,6 +201,7 @@ void scs_release(struct 
task_struct *tsk) WARN_ON(scs_corrupted(tsk)); + scs_account(tsk, -1); task_set_scs(tsk, NULL); scs_free(s); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e5f76da8cd4e..79f07ccac63e 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5338,6 +5338,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) " managed:%lukB" " mlocked:%lukB" " kernel_stack:%lukB" +#ifdef CONFIG_SHADOW_CALL_STACK + " shadow_call_stack:%lukB" +#endif " pagetables:%lukB" " bounce:%lukB" " free_pcp:%lukB" @@ -5360,6 +5363,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) K(zone_managed_pages(zone)), K(zone_page_state(zone, NR_MLOCK)), zone_page_state(zone, NR_KERNEL_STACK_KB), +#ifdef CONFIG_SHADOW_CALL_STACK + zone_page_state(zone, NR_KERNEL_SCS_BYTES) / 1024, +#endif K(zone_page_state(zone, NR_PAGETABLE)), K(zone_page_state(zone, NR_BOUNCE)), K(free_pcp), diff --git a/mm/vmstat.c b/mm/vmstat.c index c9c0d71f917f..287a95987b7b 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1119,6 +1119,9 @@ const char * const vmstat_text[] = { "nr_mlock", "nr_page_table_pages", "nr_kernel_stack", +#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) + "nr_shadow_call_stack_bytes", +#endif "nr_bounce", #if IS_ENABLED(CONFIG_ZSMALLOC) "nr_zspages", From patchwork Mon Apr 6 16:41:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475829 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 37307912 for ; Mon, 6 Apr 2020 16:42:11 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 90D092496D for ; Mon, 6 Apr 2020 16:42:10 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="IPPdkNAe" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 90D092496D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18433-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 23966 invoked by uid 550); 6 Apr 2020 16:41:50 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 23904 invoked from network); 6 Apr 2020 16:41:49 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Ummeb//L4UGzSQ9YmLkDAY3kIJvHpvVlDZif3GsciVU=; b=IPPdkNAeCJXNHbsXe/mGa5z+wBWOy0VNpPvPMMCuc5EwpOHxDZY3dzEwp7Z9q8+A+6 mhsuX7BenNoOfFBKR2vXW+Llg8rWZeo4jTI1ptL1eGZE/asnMxQpJUAUfYi3BJXd7MG4 h0qQK+DY9jclc+sAdLOTRMBATVdbAqN5NRIHqlyT0D1+Fo0Gd445yIwHfWRAfFnusS77 HUjqk5/F1uJieIRQySJq4jsWcZe28h1QPbCvmrKMLo14OqtT0N2cLzM8APA7TdFwMXtN WPwc7ygpjmZ2rzLIeLKsG3/w6/yH4WdpnZJ/ARDydbD7hwXi5d+x4Y6UIpOR8Ph3Q2J4 kjlg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Ummeb//L4UGzSQ9YmLkDAY3kIJvHpvVlDZif3GsciVU=; b=goi2p3pDDrafhM+Suy83VqFXtmdSGi90lLoCJ3kk7UsgdbumpMZQ0XZipph6VCY1Di 
yaSXOFMJ7JEnVcUov8llj0DY94bzwK6U2jc2j7ngyUQeEG8h2FqQuVFWUSI4fwEIKRhZ kaE7ieuVomk0p9cUVfe/hkm0Mgji0Z5Uug3c9diiRQrECgleMoVCAatV2AikH1iSXaCA FpDdno43Bb8ZQzqfkVsq/mOBUoztOhGoRND5Qp62Kk/2f1BxEuNcE0qLUtcnnBU/oP9c BdCamD8+yAgRYwFAuozwcEk4US6D0YSwFdDwxzIa3pz/+PM9Ca/hKigce2IEcG6VdX/z J2nQ== X-Gm-Message-State: AGi0PuZua7a1qJeM8O288zvqBuZqZ4a4XJuOEbEe/ukfyf/153d0ZtvR FMP4hedB7imTClRGGS05hHHKDO34lNwTALttM2c= X-Google-Smtp-Source: APiQypLfMllChVStp0nVVJRc46ooIEmPC5lxhHr5UbQWQtwRSJW5jYCIR/c33rb1nlAjMKVaAFGW/fG0jdBYpJoZ1PU= X-Received: by 2002:a17:90a:20f0:: with SMTP id f103mr196700pjg.88.1586191297993; Mon, 06 Apr 2020 09:41:37 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:12 -0700 In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-4-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 03/12] scs: add support for stack usage debugging From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled, also prints out the highest shadow stack usage per process. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- kernel/scs.c | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/kernel/scs.c b/kernel/scs.c index 5245e992c692..ad74d13f2c0f 100644 --- a/kernel/scs.c +++ b/kernel/scs.c @@ -184,6 +184,44 @@ int scs_prepare(struct task_struct *tsk, int node) return 0; } +#ifdef CONFIG_DEBUG_STACK_USAGE +static inline unsigned long scs_used(struct task_struct *tsk) +{ + unsigned long *p = __scs_base(tsk); + unsigned long *end = scs_magic(p); + unsigned long s = (unsigned long)p; + + while (p < end && READ_ONCE_NOCHECK(*p)) + p++; + + return (unsigned long)p - s; +} + +static void scs_check_usage(struct task_struct *tsk) +{ + static DEFINE_SPINLOCK(lock); + static unsigned long highest; + unsigned long used = scs_used(tsk); + + if (used <= highest) + return; + + spin_lock(&lock); + + if (used > highest) { + pr_info("%s (%d): highest shadow stack usage: %lu bytes\n", + tsk->comm, task_pid_nr(tsk), used); + highest = used; + } + + spin_unlock(&lock); +} +#else +static inline void scs_check_usage(struct task_struct *tsk) +{ +} +#endif + bool scs_corrupted(struct task_struct *tsk) { unsigned long *magic = scs_magic(__scs_base(tsk)); @@ -200,6 +238,7 @@ void scs_release(struct task_struct *tsk) return; WARN_ON(scs_corrupted(tsk)); + scs_check_usage(tsk); scs_account(tsk, -1); task_set_scs(tsk, NULL); From patchwork Mon Apr 6 16:41:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475833 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0EA09912 for ; Mon, 6 Apr 2020 16:42:21 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 6C19024974 for ; Mon, 6 Apr 2020 
16:42:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Et7gqyUD" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6C19024974 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18434-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 24280 invoked by uid 550); 6 Apr 2020 16:41:54 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 24186 invoked from network); 6 Apr 2020 16:41:53 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9QG25IWAvZkbAzL22nhTW74bT1ay/PnpyS4BHMHWGZk=; b=Et7gqyUD+/vBdaTxuXGtUp/T61DuoxfIWoFaBtN3Pan+9c6mJeAvmtTBqFUxV+t/h4 LN0HQT9+KjSaqn7Mixerv+7k7S7BD4mkDsLs/o8wZMIjVa09HJA7KFSGB/TtuXU7FvlL 6uPHdyBl7z3cK9g1xF6YKJ4HL90gk6SiklDTGe+TeqWBho/vNNZIxuxtDhoiNjjP56EE VSD6bCJZy8gm1psKtCCT8FCp/6jDu2WpsQ5TdwxuxinTh/gByzaJiThF8xLLt1HhDfgN opEo0Vvbom6U7ovVNJ2IYajd7Ohq17yrtIacfoTmqzq2KcnP242dGSb7nqvNQr4W2sOL fQPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=9QG25IWAvZkbAzL22nhTW74bT1ay/PnpyS4BHMHWGZk=; b=MVn4+36ZaoOOY4VAueOcJ6ZHTjy0RpQrqATWdw/Z6fpgMCYTRVMSwbOLoWd7Pe/A1t IH1UJhD+q/JMVzU5fw3gejCV5HqfJAbiEHlqgGvVO712FOO2v+oc/TQYvKj1511IIrye EQXlTLc2eGYtYJwrKMF7l9BqbDRLj9SZ1jNdfbWUPXYtW6W9aiffvJJrkhEDqUlsFUHg HPoSsFFoDDfS1aGTXGYQWIgi5q7SP/U2DKzMc6FA7OEBzA6ShwMh3AOEJqm9SQPmFcVP mDGZCBGDhD+B+12s+o1c5Kb9uq88v2oEtOqABeeJdoARL+XMUYKAmQHzBftl0irndOM+ oMtw== X-Gm-Message-State: AGi0Pub5atvLngpeIql8PcW3lmiJDqPbCGBVKFdD4DOTPit3VwaW/vrv y36NtNSZYN0Hq2yzYbqBdZ4ecy4ZD8Q0toQEyh4= X-Google-Smtp-Source: APiQypKOUCwdutCrbx5jsB7oXvfPBtLmcNStpAuGEoVRC2rwC/xHSN2ipZVV7C3tJFoLyJ5tV7qmBNnr17/luexyr1s= X-Received: by 2002:a17:90a:1b4f:: with SMTP id q73mr169352pjq.188.1586191301528; Mon, 06 Apr 2020 09:41:41 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:13 -0700 In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-5-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 04/12] scs: disable when function graph tracing is enabled From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen The graph tracer hooks returns by modifying frame records on the (regular) stack, but with SCS the return address is taken from the shadow stack, and the value in the frame record has no effect. 
As we don't currently have a mechanism to determine the corresponding slot on the shadow stack (and to pass this through the ftrace infrastructure), for now let's disable SCS when the graph tracer is enabled. With SCS the return address is taken from the shadow stack and the value in the frame record has no effect. The mcount based graph tracer hooks returns by modifying frame records on the (regular) stack, and thus is not compatible. The patchable-function-entry graph tracer used for DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved to the shadow stack, and is compatible. Modifying the mcount based graph tracer to work with SCS would require a mechanism to determine the corresponding slot on the shadow stack (and to pass this through the ftrace infrastructure), and we expect that everyone will eventually move to the patchable-function-entry based graph tracer anyway, so for now let's disable SCS when the mcount-based graph tracer is enabled. SCS and patchable-function-entry are both supported from LLVM 10.x. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Reviewed-by: Mark Rutland --- arch/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/Kconfig b/arch/Kconfig index 691a552c2cc3..c53cb9025ad2 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -542,6 +542,7 @@ config ARCH_SUPPORTS_SHADOW_CALL_STACK config SHADOW_CALL_STACK bool "Clang Shadow Call Stack" + depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER depends on ARCH_SUPPORTS_SHADOW_CALL_STACK help This option enables Clang's Shadow Call Stack, which uses a From patchwork Mon Apr 6 16:41:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475835 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8E98D912 for ; Mon, 6 Apr 2020 16:42:32 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id E8F7624981 for ; Mon, 6 Apr 2020 16:42:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="HrWDz0fZ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E8F7624981 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18435-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 25602 invoked by uid 550); 6 Apr 2020 16:41:57 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 24488 invoked from network); 6 Apr 2020 16:41:57 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9B6Bbg3gx9O6jKpvJ6qFzEYX2A3WHmV1/UUBgqymv0A=; b=HrWDz0fZO75NK2vCDmhD/+I1qivG9oQ4qAVeDGVwVMfnHG2UMPDsy9o7I7DFuuos8b 1wMXq0iZh8/+0SOrfAKklcnAPfzQcpOOxTjvsBz0Vi0obIwCilmN41649T4n71L4/o4U LAdhGzV5hQoCG+zdFTTKQJpy0xQJRahkRKsMczLj6t4/jv9w74++Q58CRZVqcxunEXmb e/F51UD4+HJXaVRvm6gJ4a3RWNd/HZviv2qgqNNwJ3gi/p4aI5O/iHpykcrqnt5Z3sZ4 klV/kXyI+4SCreOsjka0bJe/27FDR4qvujrtwmGtRqMmKebKPVVef3jTwgHg8nxIVBQP CMtg== X-Google-DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=9B6Bbg3gx9O6jKpvJ6qFzEYX2A3WHmV1/UUBgqymv0A=; b=T4IQe8jOTg7EVb1W3yA0qD8WDl3rp6aQU77a8dmqoiD6gssk2otq+JABVcgnIbnVIQ qlp6EupigOR/F64oR/WnbzWN7VZRu0hemGaSYA5kCFbjX5XdG2gvIO8/RLMvEfwiVrWG OuXACko2WFDG8Z3ytEuhNXS15hJ6gn3w/qo5+bKYILlof37fyIKMbv3rPdGtaAbg/FcD d8ArI5kuRS6nkfrNxNsE8HF8dLZlYp6okcCZtwGztcwE1P+0hROxA6gzeDvXFrwGWVuJ lkWUH1th5dXloc+7QQw9cY5DjQzaAJgq4yQoYKZP33rqG2xKB9l9/o/ZsNo3cnXeTunZ ELWQ== X-Gm-Message-State: AGi0PubLPLmj3DV3JLpeNL8npCWSTxC7YM5GvYBfTJrLUC12m/jPQPpe 6JCIuhEGxpaAfMrF5nxO6tZrYOTEpBDHeaAtuv4= X-Google-Smtp-Source: APiQypIh9gY+yUNhuam6c+bzLbAj5vxnuqxB7ZWcGUFaI6ujh55+UwXryWUkEBM6Rr0lP42Us55g5qxLwuz/E8Mta2o= X-Received: by 2002:a63:1662:: with SMTP id 34mr22378296pgw.117.1586191304990; Mon, 06 Apr 2020 09:41:44 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:14 -0700 In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-6-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 05/12] arm64: reserve x18 from general allocation with SCS From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Reserve the x18 register from general allocation when SCS is enabled, because the compiler uses the register to store the current task's shadow stack pointer. Note that all external kernel modules must also be compiled with -ffixed-x18 if the kernel has SCS enabled. 
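[Illustration, not part of the patch: with SCS enabled the compiler keeps the current task's shadow call stack pointer in x18 across calls, so any code built without -ffixed-x18 that treats x18 as a scratch register would corrupt the shadow stack; that is also why the note above extends to external modules. Reading the register is a single move, as the kernel's scs_save() does later in this series. A stand-alone sketch with an invented helper name, only meaningful on arm64 and stubbed out elsewhere:]

#include <stdio.h>

/* Read the platform register that SCS reserves for the shadow stack pointer. */
static void *read_x18(void)
{
#ifdef __aarch64__
	void *val;

	asm volatile("mov %0, x18" : "=r" (val));
	return val;
#else
	return NULL;	/* not arm64: nothing meaningful to read */
#endif
}

int main(void)
{
	printf("x18 = %p\n", read_x18());
	return 0;
}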
Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Acked-by: Will Deacon --- arch/arm64/Makefile | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index f15f92ba53e6..34277c60cdf9 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -76,6 +76,10 @@ branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a KBUILD_CFLAGS += $(branch-prot-flags-y) endif +ifeq ($(CONFIG_SHADOW_CALL_STACK), y) +KBUILD_CFLAGS += -ffixed-x18 +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__ From patchwork Mon Apr 6 16:41:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475839 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 281BE913 for ; Mon, 6 Apr 2020 16:42:44 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 844A024971 for ; Mon, 6 Apr 2020 16:42:43 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="sZgUXhLM" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 844A024971 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18436-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 25896 invoked by uid 550); 6 Apr 2020 16:42:01 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 25826 invoked from network); 6 Apr 2020 16:42:01 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=FvIJZaGomfz5bQTqJN2U0Xo4/dMI8mtkhTC7egpm8RA=; b=sZgUXhLMxuUZHoqYFcqJQyMtRrVeB4vbIJgZfI8vPE3ap9A/9BT16xYBgEZfJZwzHb p/Xb5EGHwP5/CHALYTdDgmSLbcPrGZxAna83nSg6gIywsrUh7VegvbRlWKSidmTZaduL JkIi3FLIB3RLFCMs8/R2AIvOjODoY4FM+pu6UiLLtsoUZ4FIpMSRZZd/sl1DrxuOHLRs qsTLwGT7TZOTmsB4zwdErv1k7eKdBWOhQ0UgfSaLWeBzdHI/14RLXcoxTbmTmzGS4PJz S1BSe2jfOXJpexnv2HjTJdH928LuWNUeCfCevn3GUQ3kWhph4qWJWBu706wq7G4VUNsy Vq+Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=FvIJZaGomfz5bQTqJN2U0Xo4/dMI8mtkhTC7egpm8RA=; b=Ugc1VW9mSBYHW2dpA02pSZ9NVohpPFm66iLLNcUnPs0Pv5dQl81OYI8cjJv7JicdnH Dt11rHRMI5ruJsq8NI9U2nrx34sKyUcQGcIO0cmC6jT1mqE/+UooQcW5gwOGPSoJKHmB UDEl1Qmobclpbn6hFwl8/fKurRINUHIUU1W/YNEA0tK7wGaOvumGk1jmFYX0k5CRu3rR jBMlVbBwz/S1VYyUYDG54OE5RzSyydCz7uxxnEDf1m+Xr+VW8a4Ub/t5Qzv0aA/Raiug lT0vUGlhbHUDQUPDtCfKF7iIB0JojIx4ICcvSTSVxl+qLFsn3ShczJFIfDcWttHaesWF t1IA== X-Gm-Message-State: AGi0PubulWemkvLZcb0bOox2K9jeO5ruuIss2ZOCX04d4pq3nF+UePEJ 06U1uPgUOootmMNtt88OUAdu6o3bP70AHaqOHPE= X-Google-Smtp-Source: APiQypIFwmNrjKh1lQZCso9eQPW5DK0TW2vYCZpfEPndLBGmeeUElj0RyNkfI+/Jw0+mhfmcbtmfQfN+X3LYM4MOsRg= X-Received: by 2002:a63:9550:: with SMTP id t16mr19437932pgn.300.1586191308872; Mon, 06 Apr 2020 09:41:48 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:15 -0700 In-Reply-To: 
<20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-7-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 06/12] arm64: preserve x18 when CPU is suspended From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Don't lose the current task's shadow stack when the CPU is suspended. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Reviewed-by: Mark Rutland Acked-by: Will Deacon --- arch/arm64/include/asm/suspend.h | 2 +- arch/arm64/mm/proc.S | 14 ++++++++++++++ 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h index 8939c87c4dce..0cde2f473971 100644 --- a/arch/arm64/include/asm/suspend.h +++ b/arch/arm64/include/asm/suspend.h @@ -2,7 +2,7 @@ #ifndef __ASM_SUSPEND_H #define __ASM_SUSPEND_H -#define NR_CTX_REGS 12 +#define NR_CTX_REGS 13 #define NR_CALLEE_SAVED_REGS 12 /* diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 197a9ba2d5ea..ed15be0f8103 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -58,6 +58,8 @@ * cpu_do_suspend - save CPU registers context * * x0: virtual address of context pointer + * + * This must be kept in sync with struct cpu_suspend_ctx in . */ SYM_FUNC_START(cpu_do_suspend) mrs x2, tpidr_el0 @@ -82,6 +84,11 @@ alternative_endif stp x8, x9, [x0, #48] stp x10, x11, [x0, #64] stp x12, x13, [x0, #80] + /* + * Save x18 as it may be used as a platform register, e.g. by shadow + * call stack. + */ + str x18, [x0, #96] ret SYM_FUNC_END(cpu_do_suspend) @@ -98,6 +105,13 @@ SYM_FUNC_START(cpu_do_resume) ldp x9, x10, [x0, #48] ldp x11, x12, [x0, #64] ldp x13, x14, [x0, #80] + /* + * Restore x18, as it may be used as a platform register, and clear + * the buffer to minimize the risk of exposure when used for shadow + * call stack. 
+ */ + ldr x18, [x0, #96] + str xzr, [x0, #96] msr tpidr_el0, x2 msr tpidrro_el0, x3 msr contextidr_el1, x4 From patchwork Mon Apr 6 16:41:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475843 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A2EA3912 for ; Mon, 6 Apr 2020 16:42:55 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 0BE3524982 for ; Mon, 6 Apr 2020 16:42:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="I6aOOb4F" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0BE3524982 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18437-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 26235 invoked by uid 550); 6 Apr 2020 16:42:06 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 26168 invoked from network); 6 Apr 2020 16:42:05 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=P7tRJ6/bRwi2I74AEfEJ7NL6sovvq6Ksezb3luMLZ/I=; b=I6aOOb4F+m585U3+RTjXCgdA35wkhP1CN3Uhmh+0KXvkwJcQQrkknIVKYFc21EGa9T gXREpmuKmKvxZJQyHbkni+ojnnrEUvsfKtX9CGOGZlTJmkKexDQDF2et3kSeqToQRiKo Wt+3R+N5GtS2ABYhkNEB3tNvZvXgtf9rP6ZYtHP7qsABnpKM8zORokraX7ZOWpN6zase 7NeHFG/CLonjoUheDOHlw7LzXo3Z6KjdaB9CNgPtQJRi3+pdaP4ZaWGXm8Emnc+boObD Up/eCuQFeygzhdH7a2SHIC/Eb6fhV//t/0cYReB1Mak0oksza68jY0Bqyf5hZJG0QQMh gQVA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=P7tRJ6/bRwi2I74AEfEJ7NL6sovvq6Ksezb3luMLZ/I=; b=quWbj46bGQmqFfqEJaB56v0O6Shgg9HTREWETtu8zzuWUVqrS2QBHabVYZ8saYW17v ZkmzJB1Mz2XAuHwbecIEcYp7NrJ+1OJfkylZdIVNaSi2c3ZMJqhuMC4HHG2zMnZ4AmmJ UXVNsD2XfjNIRdi9aNG4+79XMsl/fDtkaJRxLLSuDI1VROLun5gtjQhW21jX+2qseUer u24YzG80jCdN7KPBb2i6ADzvtJSgW1kd7MKBsFDEPM4bDWb0Pa68FUpnGWQrMyrcfmBy HP6I89TkRYVFj6aLjRinasisFatPRXYcFNUKwv9u9mMZvN59bN55+RNlq7p+VfAhnBP8 WfeQ== X-Gm-Message-State: AGi0PuZISaHI9HKwL3j65shTSrhYNlaCX2X5lam2XWR6zNnCsqn6bdLJ Mwxf6Z+TOLP1j8iceKTviEL+0mMKAMpASSnZ4T4= X-Google-Smtp-Source: APiQypJJ2vW2BCKCiSZ+xIRCi6v70yfbRRHeakQcdO++WZV928etab2mQBvVqnKPYnI22hBzW364z/buE5ZaeQqgJWo= X-Received: by 2002:a67:fb0f:: with SMTP id d15mr471360vsr.88.1586191313260; Mon, 06 Apr 2020 09:41:53 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:16 -0700 In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-8-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 07/12] arm64: efi: restore x18 if it was corrupted From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin 
, Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen If we detect a corrupted x18, restore the register before jumping back to potentially SCS instrumented code. This is safe, because the wrapper is called with preemption disabled and a separate shadow stack is used for interrupt handling. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Acked-by: Will Deacon --- arch/arm64/kernel/efi-rt-wrapper.S | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S index 3fc71106cb2b..6ca6c0dc11a1 100644 --- a/arch/arm64/kernel/efi-rt-wrapper.S +++ b/arch/arm64/kernel/efi-rt-wrapper.S @@ -34,5 +34,14 @@ ENTRY(__efi_rt_asm_wrapper) ldp x29, x30, [sp], #32 b.ne 0f ret -0: b efi_handle_corrupted_x18 // tail call +0: + /* + * With CONFIG_SHADOW_CALL_STACK, the kernel uses x18 to store a + * shadow stack pointer, which we need to restore before returning to + * potentially instrumented code. This is safe because the wrapper is + * called with preemption disabled and a separate shadow stack is used + * for interrupts. + */ + mov x18, x2 + b efi_handle_corrupted_x18 // tail call ENDPROC(__efi_rt_asm_wrapper) From patchwork Mon Apr 6 16:41:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475853 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 17C2B92C for ; Mon, 6 Apr 2020 16:43:07 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 71CEB2496D for ; Mon, 6 Apr 2020 16:43:06 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="H7r9/K1P" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 71CEB2496D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18438-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 26461 invoked by uid 550); 6 Apr 2020 16:42:09 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 26392 invoked from network); 6 Apr 2020 16:42:08 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=4apKHH+0KXGOaGE6jTeN52LWOcENawq8hrV28HQBrcA=; b=H7r9/K1Pb2jZ/QTf4eps8FXgNOwBjIplJ83GvJcB7p53k1kwhM1xPQxZLkgvAaK02n beraESPrDeb3MtB6jvklmi+/vUcZS4mHBB1OCRD//Yx8vijzVcAIERr2dp1g0QDMqGEH /HT1yD1mZDPSkJulcCDMMrG5W512OIw+DI+B5TIlQrfESZv1YUtu4bFOI7koPSLqre4V 1GvXHs5lxJf7sNgG/FH6hL3WB2A8MbOLzhIcXcIuTXJy/+sRweQg+5SqJEi4k7ntmLnb VpUd6/DJobY+aRGS+iUjHZ3if6GvDWbplbrX5is0cinNOSMDE2y/gYC1eTqNbsHLdzRj P8oQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; 
bh=4apKHH+0KXGOaGE6jTeN52LWOcENawq8hrV28HQBrcA=; b=SySP/e/gvOd2n9UbOV0bZ2MhKlPle3XSXqwBMamb1Ck1imHOSLazlM2NokYPagTZBA rc//20oXQsMMzmek4X4ys63GJ/Q/G75AXiWGCofOTAGMFEIYc5/FK2EQVoV7JopPdaoI /RkiJ6lMKwk/uSHeZ1lkVeUKyUeMJxhZmI+kRLx0FfGnVrHEBjnwRQ5u0RqybfbdLtvw 8uGVgVzM5jkC2BU/26PETJDccr8+aw+lFJrzwVjTtpLrBT4Uha5j889AT5ArEX9J61NK 5k0uCPCa0jIB2LDzYUF5+Af1bAxhjH4bKdYOcqTrKVVIfSeDiuIygioFRJBYxPyk53QW UpeA== X-Gm-Message-State: AGi0PuZzDWdor9dNzRTBz656YJk22/6oIcBA2Io1yKgKUBki2Ta2B1As bXdgofcVXILV9lzACR22sD+jU+/rOCfP4iZrESw= X-Google-Smtp-Source: APiQypK5eLQJG4otrid1fZuEbwfLIfGLtaeCQl5ykKUwyTbJ+T2+3uqzOPYi+Sh8FFR75wCHX7LWfP4FZsjp9J54Zb4= X-Received: by 2002:a63:9043:: with SMTP id a64mr22525247pge.308.1586191316557; Mon, 06 Apr 2020 09:41:56 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:17 -0700 In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-9-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 08/12] arm64: vdso: disable Shadow Call Stack From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Shadow stacks are only available in the kernel, so disable SCS instrumentation for the vDSO. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Reviewed-by: Mark Rutland Acked-by: Will Deacon --- arch/arm64/kernel/vdso/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile index dd2514bb1511..a87a4f11724e 100644 --- a/arch/arm64/kernel/vdso/Makefile +++ b/arch/arm64/kernel/vdso/Makefile @@ -25,7 +25,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING VDSO_LDFLAGS := -Bsymbolic -CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os +CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) KBUILD_CFLAGS += $(DISABLE_LTO) KASAN_SANITIZE := n UBSAN_SANITIZE := n From patchwork Mon Apr 6 16:41:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11475861 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 14A10912 for ; Mon, 6 Apr 2020 16:43:19 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 6CBD424987 for ; Mon, 6 Apr 2020 16:43:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="sFTilaim" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6CBD424987 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18439-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 27844 invoked by uid 550); 6 Apr 2020 16:42:13 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm 
Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 27771 invoked from network); 6 Apr 2020 16:42:13 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=og4BfgkxEdyP0L7K3K2SXksO/AChbKUXMouXqYPSA00=; b=sFTilaimS0olRUWr52P6NjZp8iOs10uO0Z0JxM5Uuc4SjrkX4akRPhGMMko6yn6Vjl ws3NQWWVJtJ3ueU7u0ytNYHOplXfphbPsUfJrc6hmnfKIl9LG70ErBmGNeEAaScsBIsa 8pzQNL6ueTZCTceiNxzsBsZ4KCIrIv1gusx7ObuDw8OyHYxmtcP4Mhcku3xGZumdZOoc 822uP7JCGsr7Bd/cARY+mTanl8Xs4pkuavmztPyJ4QiMT2w19+ROUOHp2IOOfM+O1hNX 7nYMkWpEpPknWEovRK6lAkDgfAOYFuYN4lNw2My6u1IqRuL++rFvl2FIx6OjAvMIwUcN E4oA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=og4BfgkxEdyP0L7K3K2SXksO/AChbKUXMouXqYPSA00=; b=aiXo+7hqAgEBhkuQrvUNzPFyMcjVBAY1RMnLACJsgR7nf901wsMCLBOjRyMjW7jAae 5A77IpM90HxF2tjdIg6bpTBxxuTPPxTh1OgEOalJNztPQgmzQtq6/sVYfHeNMkQcWgL8 uVfDjJSBmkuc5q/w2Jyu0v5wFInhkSscMAX7wN9Af79dLftz6Ui3pTitnP8cAVxDQfjr ARsRVA3g2aZwgOi4s3vPnjkPAzQM+IQ0bPGvkTZhmiKBStnATSPSh8wgpj4YHVzHj1vX +OS6xmPDGJU4VONpeup2PTSDaEcgh2d24RqvuE87ognYq6haTeLmPa/8/bgCfZ8UDLxH 7cuQ== X-Gm-Message-State: AGi0PuZOD6kcBi6ECByExWgrc5sPPyIsM/nDRJHW6Vbpwccb6ITBJ4/q TG9Drff9WvCjzj7Hf4Hqh+XQOLNu8moD21xoOkA= X-Google-Smtp-Source: APiQypJDVItbvdsaAcfKFSSNsMelQNBO6U0MvHJemAJCSTnqHm4EIsriO2lzZR7x/iXYclFFASObx3hqbCBuu2Gzn3g= X-Received: by 2002:a63:c504:: with SMTP id f4mr1623499pgd.292.1586191321112; Mon, 06 Apr 2020 09:42:01 -0700 (PDT) Date: Mon, 6 Apr 2020 09:41:18 -0700 In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com> Message-Id: <20200406164121.154322-10-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.0.292.g33ef6b2f38-goog Subject: [PATCH v10 09/12] arm64: disable SCS for hypervisor code From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel , Mark Rutland Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Disable SCS for code that runs at a different exception level by adding __noscs to __hyp_text. 
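[Context, not part of the patch: __noscs comes from patch 01's compiler-clang.h and expands to Clang's no_sanitize("shadow-call-stack") attribute, so anything marked with it is compiled without shadow stack pushes and pops. A stand-alone sketch of the pattern, with a hypothetical function standing in for the hyp text; the defined() guard is only there so the sketch also builds with non-Clang compilers:]

#if defined(__has_feature)
# if __has_feature(shadow_call_stack)
#  define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
# endif
#endif
#ifndef __noscs
# define __noscs
#endif

/* Built without SCS instrumentation even when the rest of the unit has it. */
static int __noscs hyp_like_helper(int x)
{
	return x * 2;
}

int main(void)
{
	return hyp_like_helper(21) == 42 ? 0 : 1;
}

[The hyp text runs at a different exception level, where the kernel's x18-based shadow stack setup does not apply, so opting it out wholesale is the simplest safe choice.]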
Suggested-by: James Morse
Signed-off-by: Sami Tolvanen
Acked-by: Marc Zyngier
Reviewed-by: Kees Cook
---
 arch/arm64/include/asm/kvm_hyp.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index fe57f60f06a8..875b106c5d98 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -13,7 +13,7 @@
 #include
 #include

-#define __hyp_text __section(.hyp.text) notrace
+#define __hyp_text __section(.hyp.text) notrace __noscs

 #define read_sysreg_elx(r,nvh,vh) \
 ({ \
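A rough sketch of the mechanism: __noscs is introduced earlier in this series and, under Clang, is assumed to expand to the no_sanitize("shadow-call-stack") attribute, so adding it to __hyp_text builds every hyp function without SCS pushes and pops. The definitions below are simplified stand-ins for illustration, not a copy of the kernel's headers.

#if defined(__clang__)
#define __noscs __attribute__((no_sanitize("shadow-call-stack")))
#else
#define __noscs
#endif

/* stand-ins for the real section/notrace annotations */
#define __section_hyp_text      /* __section(.hyp.text) */
#define notrace                 /* no ftrace instrumentation */

#define __hyp_text __section_hyp_text notrace __noscs

/* EL2 code runs with its own stacks and never sees the EL1 shadow stack
 * in x18, so functions carrying __hyp_text must not be instrumented. */
static int __hyp_text example_hyp_helper(int x)
{
        return x + 1;           /* compiled without shadow-call-stack code */
}

int main(void)
{
        return example_hyp_helper(41) == 42 ? 0 : 1;
}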
From patchwork Mon Apr 6 16:41:19 2020
X-Patchwork-Id: 11475865
Date: Mon, 6 Apr 2020 09:41:19 -0700
In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com>
Message-Id: <20200406164121.154322-11-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com>
Subject: [PATCH v10 10/12] arm64: implement Shadow Call Stack
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change implements shadow stack switching, initial SCS set-up,
and interrupt shadow stacks for arm64.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 arch/arm64/Kconfig                   |  5 ++++
 arch/arm64/include/asm/scs.h         | 37 +++++++++++++++++++++++++
 arch/arm64/include/asm/thread_info.h |  3 +++
 arch/arm64/kernel/Makefile           |  1 +
 arch/arm64/kernel/asm-offsets.c      |  3 +++
 arch/arm64/kernel/entry.S            | 33 +++++++++++++++++++++--
 arch/arm64/kernel/head.S             |  8 ++++++
 arch/arm64/kernel/irq.c              |  2 ++
 arch/arm64/kernel/process.c          |  2 ++
 arch/arm64/kernel/scs.c              | 40 ++++++++++++++++++++++++++++
 arch/arm64/kernel/smp.c              |  4 +++
 11 files changed, 136 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/scs.h
 create mode 100644 arch/arm64/kernel/scs.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6e41c4b62607..b47c254ce1dd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -64,6 +64,7 @@ config ARM64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_SUPPORTS_MEMORY_FAILURE
+	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
 	select ARCH_SUPPORTS_NUMA_BALANCING
@@ -1025,6 +1026,10 @@ config ARCH_HAS_CACHE_LINE_SIZE
 config ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	def_bool y if PGTABLE_LEVELS > 2

+# Supported by clang >= 7.0
+config CC_HAVE_SHADOW_CALL_STACK
+	def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
+
 config SECCOMP
 	bool "Enable seccomp to safely compute untrusted bytecode"
 	---help---
diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
new file mode 100644
index 000000000000..c50d2b0c6c5f
--- /dev/null
+++ b/arch/arm64/include/asm/scs.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_SCS_H
+#define _ASM_SCS_H
+
+#ifndef __ASSEMBLY__
+
+#include
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+extern void scs_init_irq(void);
+
+static __always_inline void scs_save(struct task_struct *tsk)
+{
+	void *s;
+
+	asm volatile("mov %0, x18" : "=r" (s));
+	task_set_scs(tsk, s);
+}
+
+static inline void scs_overflow_check(struct task_struct *tsk)
+{
+	if (unlikely(scs_corrupted(tsk)))
+		panic("corrupted shadow stack detected inside scheduler\n");
+}
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+static inline void scs_init_irq(void) {}
+static inline void scs_save(struct task_struct *tsk) {}
+static inline void scs_overflow_check(struct task_struct *tsk) {}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_SCS_H */
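The helpers above leave the actual canary check to the generic SCS code added earlier in the series; the userspace sketch below only models the assumed shape of that check (a magic value in the last slot of each shadow stack) so the panic path in scs_overflow_check() is easier to follow. Sizes, names, and the magic constant here are illustrative, not the kernel's values.

#include <stdio.h>
#include <stdlib.h>

#define SCS_SIZE        1024
#define SCS_END_MAGIC   0xDEADBEEFUL    /* arbitrary value for this sketch */

struct fake_task {
        unsigned long *shadow_call_stack;       /* mirrors the thread_info field */
};

static int scs_corrupted(const struct fake_task *tsk)
{
        /* last slot doubles as a canary, like the assumed scs_magic() check */
        return tsk->shadow_call_stack[SCS_SIZE / sizeof(long) - 1] != SCS_END_MAGIC;
}

static void scs_overflow_check(const struct fake_task *tsk)
{
        if (scs_corrupted(tsk)) {
                fprintf(stderr, "corrupted shadow stack detected\n");
                exit(1);
        }
}

int main(void)
{
        struct fake_task t;

        t.shadow_call_stack = calloc(SCS_SIZE / sizeof(long), sizeof(long));
        t.shadow_call_stack[SCS_SIZE / sizeof(long) - 1] = SCS_END_MAGIC;

        scs_overflow_check(&t);                                 /* fine */
        t.shadow_call_stack[SCS_SIZE / sizeof(long) - 1] = 0;   /* simulate overflow */
        scs_overflow_check(&t);                                 /* takes the error path */
        return 0;
}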
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 512174a8e789..1fb651f73da3 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -41,6 +41,9 @@ struct thread_info {
 #endif
 		} preempt;
 	};
+#ifdef CONFIG_SHADOW_CALL_STACK
+	void *shadow_call_stack;
+#endif
 };

 #define thread_saved_pc(tsk)	\
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 4e5b8ee31442..151f28521f1e 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_SSBD)		+= ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
+obj-$(CONFIG_SHADOW_CALL_STACK)		+= scs.o

 obj-y					+= vdso/ probes/
 obj-$(CONFIG_COMPAT_VDSO)		+= vdso32/
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 9981a0a5a87f..777a662888ec 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -33,6 +33,9 @@ int main(void)
   DEFINE(TSK_TI_ADDR_LIMIT,	offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
   DEFINE(TSK_TI_TTBR0,		offsetof(struct task_struct, thread_info.ttbr0));
+#endif
+#ifdef CONFIG_SHADOW_CALL_STACK
+  DEFINE(TSK_TI_SCS,		offsetof(struct task_struct, thread_info.shadow_call_stack));
 #endif
   DEFINE(TSK_STACK,		offsetof(struct task_struct, stack));
 #ifdef CONFIG_STACKPROTECTOR
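Before the assembly that follows, here is a C-level model of what the cpu_switch_to hunk in the entry.S diff below does with the new TSK_TI_SCS offset: save the outgoing task's x18, load the incoming task's pointer, and clear the saved slot so the live shadow stack address is not left sitting in memory. The names are invented; the authoritative version is the assembly itself.

#include <stdio.h>

struct fake_thread_info {
        void *shadow_call_stack;                /* the field added above */
};

static void *x18;                               /* stand-in for the reserved register */

static void switch_scs(struct fake_thread_info *prev,
                       struct fake_thread_info *next)
{
        prev->shadow_call_stack = x18;          /* str x18, [x0, #TSK_TI_SCS] */
        x18 = next->shadow_call_stack;          /* ldr x18, [x1, #TSK_TI_SCS] */
        next->shadow_call_stack = NULL;         /* str xzr, [x1, #TSK_TI_SCS]:
                                                 * limit visibility of the pointer */
}

int main(void)
{
        unsigned long stack_a[64], stack_b[64];
        struct fake_thread_info a = { stack_a }, b = { stack_b };

        x18 = a.shadow_call_stack;              /* "a" is currently running */
        switch_scs(&a, &b);
        printf("now on b's shadow stack: %d\n", x18 == (void *)stack_b);
        return 0;
}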
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ddcde093c433..c33264ce7258 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -179,6 +179,11 @@ alternative_cb_end
 	apply_ssbd 1, x22, x23

 	ptrauth_keys_install_kernel tsk, 1, x20, x22, x23
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr	x18, [tsk, #TSK_TI_SCS]	// Restore shadow call stack
+	str	xzr, [tsk, #TSK_TI_SCS]	// Limit visibility of saved SCS
+#endif
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -280,6 +285,12 @@ alternative_else_nop_endif
 	ct_user_enter
 	.endif

+#ifdef CONFIG_SHADOW_CALL_STACK
+	.if	\el == 0
+	str	x18, [tsk, #TSK_TI_SCS]	// Save shadow call stack
+	.endif
+#endif
+
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR
@@ -388,6 +399,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 	.macro	irq_stack_entry
 	mov	x19, sp			// preserve the original sp
+#ifdef CONFIG_SHADOW_CALL_STACK
+	mov	x24, x18		// preserve the original shadow stack
+#endif

 	/*
 	 * Compare sp with the base of the task stack.
@@ -405,15 +419,25 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 	/* switch to the irq stack */
 	mov	sp, x26
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* also switch to the irq shadow stack */
+	ldr_this_cpu x18, irq_shadow_call_stack_ptr, x26
+#endif
+
 9998:
 	.endm

 	/*
-	 * x19 should be preserved between irq_stack_entry and
-	 * irq_stack_exit.
+	 * The callee-saved regs (x19-x29) should be preserved between
+	 * irq_stack_entry and irq_stack_exit, but note that kernel_entry
+	 * uses x20-x23 to store data for later use.
 	 */
 	.macro	irq_stack_exit
 	mov	sp, x19
+#ifdef CONFIG_SHADOW_CALL_STACK
+	mov	x18, x24
+#endif
 	.endm

 /* GPRs used by entry code */
@@ -901,6 +925,11 @@ SYM_FUNC_START(cpu_switch_to)
 	mov	sp, x9
 	msr	sp_el0, x1
 	ptrauth_keys_install_kernel x1, 1, x8, x9, x10
+#ifdef CONFIG_SHADOW_CALL_STACK
+	str	x18, [x0, #TSK_TI_SCS]
+	ldr	x18, [x1, #TSK_TI_SCS]
+	str	xzr, [x1, #TSK_TI_SCS]	// limit visibility of saved SCS
+#endif
 	ret
 SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 57a91032b4c2..1514445bbccb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -424,6 +424,10 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	stp	xzr, x30, [sp, #-16]!
 	mov	x29, sp

+#ifdef CONFIG_SHADOW_CALL_STACK
+	adr_l	x18, init_shadow_call_stack	// Set shadow call stack
+#endif
+
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer

 	ldr_l	x4, kimage_vaddr		// Save the offset between
@@ -737,6 +741,10 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
 	ldr	x2, [x0, #CPU_BOOT_TASK]
 	cbz	x2, __secondary_too_slow
 	msr	sp_el0, x2
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr	x18, [x2, #TSK_TI_SCS]	// set shadow call stack
+	str	xzr, [x2, #TSK_TI_SCS]	// limit visibility of saved SCS
+#endif
 	mov	x29, #0
 	mov	x30, #0
 	b	secondary_start_kernel
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 04a327ccf84d..fe0ca522ff60 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include

 unsigned long irq_err_count;

@@ -63,6 +64,7 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	scs_init_irq();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 56be4cbf771f..a35d3318492c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include
 #include

 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
@@ -515,6 +516,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	entry_task_switch(next);
 	uao_thread_switch(next);
 	ssbs_thread_switch(next);
+	scs_overflow_check(next);

 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
new file mode 100644
index 000000000000..eaadf5430baa
--- /dev/null
+++ b/arch/arm64/kernel/scs.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include
+#include
+#include
+#include
+
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
+DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
+	__aligned(SCS_SIZE);
+#endif
+
+void scs_init_irq(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+		unsigned long *p;
+
+		p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+					 VMALLOC_START, VMALLOC_END,
+					 GFP_SCS, PAGE_KERNEL,
+					 0, cpu_to_node(cpu),
+					 __builtin_return_address(0));
+
+		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
+#else
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			per_cpu(irq_shadow_call_stack, cpu);
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	}
+}
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 061f60fe452f..1d112e34a636 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -370,6 +371,9 @@ void cpu_die(void)
 	unsigned int cpu = smp_processor_id();
 	const struct cpu_operations *ops = get_cpu_ops(cpu);

+	/* Save the shadow stack pointer before exiting the idle task */
+	scs_save(current);
+
 	idle_task_exit();

 	local_daif_mask();
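To summarize the interrupt handling introduced above, here is a small C model of irq_stack_entry/irq_stack_exit together with the !CONFIG_SHADOW_CALL_STACK_VMAP branch of scs_init_irq(): x18 is swapped to a per-CPU IRQ shadow stack for the duration of the handler and restored afterwards. The model uses made-up sizes and is illustrative only.

#include <stdio.h>

#define NR_CPUS         4
#define SCS_WORDS       (1024 / sizeof(long))

static unsigned long irq_shadow_call_stack[NR_CPUS][SCS_WORDS];
static unsigned long *irq_shadow_call_stack_ptr[NR_CPUS];

static unsigned long *x18;      /* stand-in for the reserved register */

static void scs_init_irq(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++)         /* for_each_possible_cpu() */
                irq_shadow_call_stack_ptr[cpu] = irq_shadow_call_stack[cpu];
}

static void handle_irq(int cpu)
{
        unsigned long *saved_x18 = x18;                 /* mov x24, x18 */

        x18 = irq_shadow_call_stack_ptr[cpu];           /* ldr_this_cpu x18, ... */
        /* ... handlers run here, pushing return addresses via x18 ... */
        x18 = saved_x18;                                /* mov x18, x24 */
}

int main(void)
{
        unsigned long task_scs[SCS_WORDS];

        scs_init_irq();
        x18 = task_scs;
        handle_irq(0);
        printf("back on the task shadow stack: %d\n", x18 == task_scs);
        return 0;
}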
From patchwork Mon Apr 6 16:41:20 2020
X-Patchwork-Id: 11475869
Date: Mon, 6 Apr 2020 09:41:20 -0700
In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com>
Message-Id: <20200406164121.154322-12-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com>
Subject: [PATCH v10 11/12] arm64: scs: add shadow stacks for SDEI
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.

Signed-off-by: Sami Tolvanen
Reviewed-by: James Morse
Tested-by: James Morse
---
 arch/arm64/include/asm/scs.h |   2 +
 arch/arm64/kernel/entry.S    |  14 ++++-
 arch/arm64/kernel/scs.c      | 106 +++++++++++++++++++++++++++++------
 arch/arm64/kernel/sdei.c     |   7 +++
 4 files changed, 112 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index c50d2b0c6c5f..8e327e14bc15 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_SHADOW_CALL_STACK

 extern void scs_init_irq(void);
+extern int scs_init_sdei(void);

 static __always_inline void scs_save(struct task_struct *tsk)
 {
@@ -27,6 +28,7 @@ static inline void scs_overflow_check(struct task_struct *tsk)
 #else /* CONFIG_SHADOW_CALL_STACK */

 static inline void scs_init_irq(void) {}
+static inline int scs_init_sdei(void) { return 0; }
 static inline void scs_save(struct task_struct *tsk) {}
 static inline void scs_overflow_check(struct task_struct *tsk) {}
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index c33264ce7258..768cd7abd32c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1058,13 +1058,16 @@ SYM_CODE_START(__sdei_asm_handler)

 	mov	x19, x1

+#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
+	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 	/*
 	 * entry.S may have been using sp as a scratch register, find whether
 	 * this is a normal or critical event and switch to the appropriate
 	 * stack for this CPU.
 	 */
-	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
 	cbnz	w4, 1f
 	ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
 	b	2f
@@ -1074,6 +1077,15 @@ SYM_CODE_START(__sdei_asm_handler)
 	mov	sp, x5
 #endif

+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* Use a separate shadow call stack for normal and critical events */
+	cbnz	w4, 3f
+	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
+	b	4f
+3:	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
+4:
+#endif
+
 	/*
 	 * We may have interrupted userspace, or a guest, or exit-from or
 	 * return-to either of these. We can't trust sp_el0, restore it.
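The branch added to __sdei_asm_handler above reduces to a simple priority check; the C rendering below is only meant to make the control flow explicit and is not kernel code.

static unsigned long *sdei_shadow_call_stack_normal_ptr;
static unsigned long *sdei_shadow_call_stack_critical_ptr;

static unsigned long *sdei_pick_scs(unsigned char event_priority)
{
        /* cbnz w4, 3f : a non-zero priority byte means a critical event */
        if (event_priority)
                return sdei_shadow_call_stack_critical_ptr;
        return sdei_shadow_call_stack_normal_ptr;
}

int main(void)
{
        static unsigned long normal[128], critical[128];

        sdei_shadow_call_stack_normal_ptr = normal;
        sdei_shadow_call_stack_critical_ptr = critical;

        return sdei_pick_scs(1) == critical ? 0 : 1;
}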
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
index eaadf5430baa..dddb7c56518b 100644
--- a/arch/arm64/kernel/scs.c
+++ b/arch/arm64/kernel/scs.c
@@ -10,31 +10,105 @@
 #include
 #include

-DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#define DECLARE_SCS(name)						\
+	DECLARE_PER_CPU(unsigned long *, name ## _ptr);			\
+	DECLARE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)

-#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
-DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
-	__aligned(SCS_SIZE);
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+#define DEFINE_SCS(name)						\
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr)
+#else
+/* Allocate a static per-CPU shadow stack */
+#define DEFINE_SCS(name)						\
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr);			\
+	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
+		__aligned(SCS_SIZE)
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+DECLARE_SCS(irq_shadow_call_stack);
+DECLARE_SCS(sdei_shadow_call_stack_normal);
+DECLARE_SCS(sdei_shadow_call_stack_critical);
+
+DEFINE_SCS(irq_shadow_call_stack);
+#ifdef CONFIG_ARM_SDE_INTERFACE
+DEFINE_SCS(sdei_shadow_call_stack_normal);
+DEFINE_SCS(sdei_shadow_call_stack_critical);
 #endif

+static int scs_alloc_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p;
+
+	p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL,
+				 0, cpu_to_node(cpu),
+				 __builtin_return_address(0));
+
+	if (!p)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = p;
+
+	return 0;
+}
+
+static void scs_free_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p = per_cpu(*ptr, cpu);
+
+	if (p) {
+		per_cpu(*ptr, cpu) = NULL;
+		vfree(p);
+	}
+}
+
+static void scs_free_sdei(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		scs_free_percpu(&sdei_shadow_call_stack_normal_ptr, cpu);
+		scs_free_percpu(&sdei_shadow_call_stack_critical_ptr, cpu);
+	}
+}
+
 void scs_init_irq(void)
 {
 	int cpu;

 	for_each_possible_cpu(cpu) {
-#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
-		unsigned long *p;
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP))
+			WARN_ON(scs_alloc_percpu(&irq_shadow_call_stack_ptr,
+						 cpu));
+		else
+			per_cpu(irq_shadow_call_stack_ptr, cpu) =
+				per_cpu(irq_shadow_call_stack, cpu);
+	}
+}

-		p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
-					 VMALLOC_START, VMALLOC_END,
-					 GFP_SCS, PAGE_KERNEL,
-					 0, cpu_to_node(cpu),
-					 __builtin_return_address(0));
+int scs_init_sdei(void)
+{
+	int cpu;

-		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
-#else
-		per_cpu(irq_shadow_call_stack_ptr, cpu) =
-			per_cpu(irq_shadow_call_stack, cpu);
-#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+		return 0;
+
+	for_each_possible_cpu(cpu) {
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP)) {
+			if (scs_alloc_percpu(
+				&sdei_shadow_call_stack_normal_ptr, cpu) ||
+			    scs_alloc_percpu(
+				&sdei_shadow_call_stack_critical_ptr, cpu)) {
+				scs_free_sdei();
+				return -ENOMEM;
+			}
+		} else {
+			per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_normal, cpu);
+			per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_critical, cpu);
+		}
 	}
+
+	return 0;
 }
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index d6259dac62b6..2854b9f7760a 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -162,6 +163,12 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 		return 0;
 	}

+	if (scs_init_sdei()) {
+		if (IS_ENABLED(CONFIG_VMAP_STACK))
+			free_sdei_stacks();
+		return 0;
+	}
+
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC :
 							  SDEI_EXIT_SMC;

 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
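The allocation in scs_init_sdei() above is deliberately all-or-nothing: if either per-CPU stack cannot be allocated, everything already allocated is freed and sdei_arch_get_entry_point() backs out. The sketch below reproduces just that error-handling shape with plain calloc/free in place of __vmalloc_node_range() and per-CPU variables; sizes and names are illustrative.

#include <stdlib.h>

#define NR_CPUS 4

static void *sdei_scs_normal[NR_CPUS];
static void *sdei_scs_critical[NR_CPUS];

static void scs_free_sdei(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                free(sdei_scs_normal[cpu]);
                free(sdei_scs_critical[cpu]);
                sdei_scs_normal[cpu] = NULL;
                sdei_scs_critical[cpu] = NULL;
        }
}

static int scs_init_sdei(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                sdei_scs_normal[cpu] = calloc(1, 1024);
                sdei_scs_critical[cpu] = calloc(1, 1024);
                if (!sdei_scs_normal[cpu] || !sdei_scs_critical[cpu]) {
                        scs_free_sdei();        /* undo partial allocations */
                        return -1;              /* -ENOMEM in the kernel */
                }
        }
        return 0;
}

int main(void)
{
        return scs_init_sdei() ? 1 : 0;
}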
From patchwork Mon Apr 6 16:41:21 2020
X-Patchwork-Id: 11475873
Date: Mon, 6 Apr 2020 09:41:21 -0700
In-Reply-To: <20200406164121.154322-1-samitolvanen@google.com>
Message-Id: <20200406164121.154322-13-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200406164121.154322-1-samitolvanen@google.com>
Subject: [PATCH v10 12/12] efi/libstub: disable SCS
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel, Mark Rutland
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

Shadow stacks are not available in the EFI stub, so filter out the SCS
flags.

Suggested-by: James Morse
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Acked-by: Ard Biesheuvel
---
 drivers/firmware/efi/libstub/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 094eabdecfe6..fa0bb64f93d6 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -32,6 +32,9 @@ KBUILD_CFLAGS := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
 				   $(call cc-option,-fno-stack-protector) \
 				   -D__DISABLE_EXPORTS

+# remove SCS flags from all objects in this directory
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+
 GCOV_PROFILE := n
 KASAN_SANITIZE := n
 UBSAN_SANITIZE := n