From patchwork Thu Apr 16 16:12:34 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11493373
Date: Thu, 16 Apr 2020 09:12:34 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id: <20200416161245.148813-2-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com>
Subject: [PATCH v11 01/12] add support for Clang's Shadow Call Stack (SCS)
From: Sami Tolvanen
To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot
Cc: Dave Martin , Kees Cook ,
Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen This change adds generic support for Clang's Shadow Call Stack, which uses a shadow stack to protect return addresses from being overwritten by an attacker. Details are available here: https://clang.llvm.org/docs/ShadowCallStack.html Note that security guarantees in the kernel differ from the ones documented for user space. The kernel must store addresses of shadow stacks used by other tasks and interrupt handlers in memory, which means an attacker capable reading and writing arbitrary memory may be able to locate them and hijack control flow by modifying shadow stacks that are not currently in use. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Reviewed-by: Miguel Ojeda --- Makefile | 6 ++ arch/Kconfig | 34 ++++++ include/linux/compiler-clang.h | 6 ++ include/linux/compiler_types.h | 4 + include/linux/scs.h | 57 ++++++++++ init/init_task.c | 8 ++ kernel/Makefile | 1 + kernel/fork.c | 9 ++ kernel/sched/core.c | 2 + kernel/scs.c | 187 +++++++++++++++++++++++++++++++++ 10 files changed, 314 insertions(+) create mode 100644 include/linux/scs.h create mode 100644 kernel/scs.c diff --git a/Makefile b/Makefile index 70def4907036..baea6024b409 100644 --- a/Makefile +++ b/Makefile @@ -866,6 +866,12 @@ ifdef CONFIG_LIVEPATCH KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone) endif +ifdef CONFIG_SHADOW_CALL_STACK +CC_FLAGS_SCS := -fsanitize=shadow-call-stack +KBUILD_CFLAGS += $(CC_FLAGS_SCS) +export CC_FLAGS_SCS +endif + # arch Makefile may override CC so keep this after arch Makefile is included NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include) diff --git a/arch/Kconfig b/arch/Kconfig index 786a85d4ad40..691a552c2cc3 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -533,6 +533,40 @@ config STACKPROTECTOR_STRONG about 20% of all kernel functions, which increases the kernel code size by about 2%. +config ARCH_SUPPORTS_SHADOW_CALL_STACK + bool + help + An architecture should select this if it supports Clang's Shadow + Call Stack, has asm/scs.h, and implements runtime support for shadow + stack switching. + +config SHADOW_CALL_STACK + bool "Clang Shadow Call Stack" + depends on ARCH_SUPPORTS_SHADOW_CALL_STACK + help + This option enables Clang's Shadow Call Stack, which uses a + shadow stack to protect function return addresses from being + overwritten by an attacker. More information can be found in + Clang's documentation: + + https://clang.llvm.org/docs/ShadowCallStack.html + + Note that security guarantees in the kernel differ from the ones + documented for user space. The kernel must store addresses of shadow + stacks used by other tasks and interrupt handlers in memory, which + means an attacker capable of reading and writing arbitrary memory + may be able to locate them and hijack control flow by modifying + shadow stacks that are not currently in use. + +config SHADOW_CALL_STACK_VMAP + bool "Use virtually mapped shadow call stacks" + depends on SHADOW_CALL_STACK + help + Use virtually mapped shadow call stacks. Selecting this option + provides better stack exhaustion protection, but increases per-thread + memory consumption as a full page is allocated for each shadow stack. 
+ + config HAVE_ARCH_WITHIN_STACK_FRAMES bool help diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index 333a6695a918..18fc4d29ef27 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -42,3 +42,9 @@ * compilers, like ICC. */ #define barrier() __asm__ __volatile__("" : : : "memory") + +#if __has_feature(shadow_call_stack) +# define __noscs __attribute__((__no_sanitize__("shadow-call-stack"))) +#else +# define __noscs +#endif diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index e970f97a7fcb..97b62f47a80d 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -193,6 +193,10 @@ struct ftrace_likely_data { # define randomized_struct_fields_end #endif +#ifndef __noscs +# define __noscs +#endif + #ifndef asm_volatile_goto #define asm_volatile_goto(x...) asm goto(x) #endif diff --git a/include/linux/scs.h b/include/linux/scs.h new file mode 100644 index 000000000000..c5572fd770b0 --- /dev/null +++ b/include/linux/scs.h @@ -0,0 +1,57 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Shadow Call Stack support. + * + * Copyright (C) 2019 Google LLC + */ + +#ifndef _LINUX_SCS_H +#define _LINUX_SCS_H + +#include +#include +#include + +#ifdef CONFIG_SHADOW_CALL_STACK + +/* + * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit + * architecture) provided ~40% safety margin on stack usage while keeping + * memory allocation overhead reasonable. + */ +#define SCS_SIZE 1024UL +#define GFP_SCS (GFP_KERNEL | __GFP_ZERO) + +/* + * A random number outside the kernel's virtual address space to mark the + * end of the shadow stack. + */ +#define SCS_END_MAGIC 0xaf0194819b1635f6UL + +#define task_scs(tsk) (task_thread_info(tsk)->shadow_call_stack) + +static inline void task_set_scs(struct task_struct *tsk, void *s) +{ + task_scs(tsk) = s; +} + +extern void scs_init(void); +extern void scs_task_reset(struct task_struct *tsk); +extern int scs_prepare(struct task_struct *tsk, int node); +extern bool scs_corrupted(struct task_struct *tsk); +extern void scs_release(struct task_struct *tsk); + +#else /* CONFIG_SHADOW_CALL_STACK */ + +#define task_scs(tsk) NULL + +static inline void task_set_scs(struct task_struct *tsk, void *s) {} +static inline void scs_init(void) {} +static inline void scs_task_reset(struct task_struct *tsk) {} +static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; } +static inline bool scs_corrupted(struct task_struct *tsk) { return false; } +static inline void scs_release(struct task_struct *tsk) {} + +#endif /* CONFIG_SHADOW_CALL_STACK */ + +#endif /* _LINUX_SCS_H */ diff --git a/init/init_task.c b/init/init_task.c index bd403ed3e418..aaa71366d162 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -185,6 +186,13 @@ struct task_struct init_task }; EXPORT_SYMBOL(init_task); +#ifdef CONFIG_SHADOW_CALL_STACK +unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data + __aligned(SCS_SIZE) = { + [(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC +}; +#endif + /* * Initial thread structure. Alignment of this is handled by a special * linker map entry. 
diff --git a/kernel/Makefile b/kernel/Makefile index 4cb4130ced32..c332eb9d4841 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -103,6 +103,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/ obj-$(CONFIG_IRQ_WORK) += irq_work.o obj-$(CONFIG_CPU_PM) += cpu_pm.o obj-$(CONFIG_BPF) += bpf/ +obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o obj-$(CONFIG_PERF_EVENTS) += events/ diff --git a/kernel/fork.c b/kernel/fork.c index 4385f3d639f2..c4c984d29573 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -94,6 +94,7 @@ #include #include #include +#include #include #include @@ -456,6 +457,8 @@ void put_task_stack(struct task_struct *tsk) void free_task(struct task_struct *tsk) { + scs_release(tsk); + #ifndef CONFIG_THREAD_INFO_IN_TASK /* * The task is finally done with both the stack and thread_info, @@ -840,6 +843,8 @@ void __init fork_init(void) NULL, free_vm_stack_cache); #endif + scs_init(); + lockdep_init_task(&init_task); uprobes_init(); } @@ -899,6 +904,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) if (err) goto free_stack; + err = scs_prepare(tsk, node); + if (err) + goto free_stack; + #ifdef CONFIG_SECCOMP /* * We must handle setting up seccomp filters once we're under diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3a61a3b8eaa9..c99620c1ec20 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -11,6 +11,7 @@ #include #include +#include #include #include @@ -6045,6 +6046,7 @@ void init_idle(struct task_struct *idle, int cpu) idle->se.exec_start = sched_clock(); idle->flags |= PF_IDLE; + scs_task_reset(idle); kasan_unpoison_task_stack(idle); #ifdef CONFIG_SMP diff --git a/kernel/scs.c b/kernel/scs.c new file mode 100644 index 000000000000..28abed21950c --- /dev/null +++ b/kernel/scs.c @@ -0,0 +1,187 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Shadow Call Stack support. + * + * Copyright (C) 2019 Google LLC + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +static inline void *__scs_base(struct task_struct *tsk) +{ + /* + * To minimize risk the of exposure, architectures may clear a + * task's thread_info::shadow_call_stack while that task is + * running, and only save/restore the active shadow call stack + * pointer when the usual register may be clobbered (e.g. across + * context switches). + * + * The shadow call stack is aligned to SCS_SIZE, and grows + * upwards, so we can mask out the low bits to extract the base + * when the task is not running. + */ + return (void *)((unsigned long)task_scs(tsk) & ~(SCS_SIZE - 1)); +} + +static inline unsigned long *scs_magic(void *s) +{ + return (unsigned long *)(s + SCS_SIZE) - 1; +} + +static inline void scs_set_magic(void *s) +{ + *scs_magic(s) = SCS_END_MAGIC; +} + +#ifdef CONFIG_SHADOW_CALL_STACK_VMAP + +/* Matches NR_CACHED_STACKS for VMAP_STACK */ +#define NR_CACHED_SCS 2 +static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]); + +static void *scs_alloc(int node) +{ + int i; + void *s; + + for (i = 0; i < NR_CACHED_SCS; i++) { + s = this_cpu_xchg(scs_cache[i], NULL); + if (s) { + memset(s, 0, SCS_SIZE); + goto out; + } + } + + /* + * We allocate a full page for the shadow stack, which should be + * more than we need. Check the assumption nevertheless. 
+ */ + BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE); + + s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE, + VMALLOC_START, VMALLOC_END, + GFP_SCS, PAGE_KERNEL, 0, + node, __builtin_return_address(0)); + +out: + if (s) + scs_set_magic(s); + /* TODO: poison for KASAN, unpoison in scs_free */ + + return s; +} + +static void scs_free(void *s) +{ + int i; + + for (i = 0; i < NR_CACHED_SCS; i++) + if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL) + return; + + vfree_atomic(s); +} + +static int scs_cleanup(unsigned int cpu) +{ + int i; + void **cache = per_cpu_ptr(scs_cache, cpu); + + for (i = 0; i < NR_CACHED_SCS; i++) { + vfree(cache[i]); + cache[i] = NULL; + } + + return 0; +} + +void __init scs_init(void) +{ + WARN_ON(cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL, + scs_cleanup) < 0); +} + +#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */ + +static struct kmem_cache *scs_cache; + +static inline void *scs_alloc(int node) +{ + void *s; + + s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node); + if (s) { + scs_set_magic(s); + /* + * Poison the allocation to catch unintentional accesses to + * the shadow stack when KASAN is enabled. + */ + kasan_poison_object_data(scs_cache, s); + } + + return s; +} + +static inline void scs_free(void *s) +{ + kasan_unpoison_object_data(scs_cache, s); + kmem_cache_free(scs_cache, s); +} + +void __init scs_init(void) +{ + scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE, + 0, NULL); + WARN_ON(!scs_cache); +} + +#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */ + +void scs_task_reset(struct task_struct *tsk) +{ + /* + * Reset the shadow stack to the base address in case the task + * is reused. + */ + task_set_scs(tsk, __scs_base(tsk)); +} + +int scs_prepare(struct task_struct *tsk, int node) +{ + void *s; + + s = scs_alloc(node); + if (!s) + return -ENOMEM; + + task_set_scs(tsk, s); + return 0; +} + +bool scs_corrupted(struct task_struct *tsk) +{ + unsigned long *magic = scs_magic(__scs_base(tsk)); + + return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC; +} + +void scs_release(struct task_struct *tsk) +{ + void *s; + + s = __scs_base(tsk); + if (!s) + return; + + WARN_ON(scs_corrupted(tsk)); + + task_set_scs(tsk, NULL); + scs_free(s); +} From patchwork Thu Apr 16 16:12:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493377 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E7C37112C for ; Thu, 16 Apr 2020 16:13:22 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 26C1920857 for ; Thu, 16 Apr 2020 16:13:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="FFzNqynx" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 26C1920857 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18530-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 32477 invoked by uid 550); 16 Apr 2020 16:13:08 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 32356 
invoked from network); 16 Apr 2020 16:13:06 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=A4YlZ0JelJa47y4ITdTVRwEiR7bDyxC3LQiUD87DYQg=; b=FFzNqynxBWRyOECiXO08TW9/nzM2LX+rId0MEdvMFA6lVhlyCX/GJDLa5wojSpu2es LBJsyTuE35dNsfxQf8lTHg/Vkccbulv9Ls1aIrucOT7Dhe0gwJre4UuJHVa8/LwaNRMm icnvYCkuzbA5xVK0GqQIVovu1d4IbnfRArIZvvfEE2tQHaN0viTXVSpk8Wx2wLu0KZ9t dBfcnsKmVQMRxsTp0y5WbsCqXF1OoOHFa2tFFaVKTqI2/w/9x+o/P2Kqga+VDdJhaDsm dy26FuCh88AA2xhRtYk2/TpuYY9GTzhm81GolQYr70AIV3fNyJKnpOkAYE3ebigotOuD WzQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=A4YlZ0JelJa47y4ITdTVRwEiR7bDyxC3LQiUD87DYQg=; b=SSjCEzsA58mLJU9/Vk4AVxFrh+JooE71+oLTgK4ZikjBVDjkAHEXuKDC7phraOZCBt pFXRcE1fSsYs6TSu8bUw7H1yXHzXlFwiSeinmtswx/ZWI0MOEKt3Pvo6zM7gKuI/GX92 R5Xug6yoEYO+E6/d+vCDs7K+slVDcX2CfOGO9vbkI0TOPlH9NXlzLt0mBKQLfOhD8vn5 S3vPBGqzTYIKYyYeCR89dqA/oSfbry6terVekQlrNYeAzhqKUEBx37qzQK9wAFTZJ5Ku cmEVBS4Vtd/tynii1wjpJHBM6/aDXYlqocYgAs8tf67G9o2/DS24mYhuPfd2sA39jIbh 2eNA== X-Gm-Message-State: AGi0PuZJCI+NrvRBuySeq/kvX5llOudvyNEQhnFCh5FgfrJJWkjy1t/b y3uMhvg7mBdodsR1BSGzXhsI5H+Bn6EqCKiSMKE= X-Google-Smtp-Source: APiQypJmv3+o6V05+VLuG3hxDl4yVwtUp8XzaHB/dFrpzqD6MaQqKxQnnDJBQdwbiwE0Zo3YVJvfdaPIUOR+54s6WqE= X-Received: by 2002:a17:90a:d703:: with SMTP id y3mr6337624pju.75.1587053574923; Thu, 16 Apr 2020 09:12:54 -0700 (PDT) Date: Thu, 16 Apr 2020 09:12:35 -0700 In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com> Message-Id: <20200416161245.148813-3-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 02/12] scs: add accounting From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen This change adds accounting for the memory allocated for shadow stacks. 
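Once applied, the new counter is visible in /proc/meminfo on kernels built with CONFIG_SHADOW_CALL_STACK=y. As a quick illustration (not part of the patch itself), a minimal user-space sketch that prints just the ShadowCallStack line added below:

#include <stdio.h>
#include <string.h>

/*
 * Illustration only: print the "ShadowCallStack:" line that this patch
 * adds to /proc/meminfo. The field is only present on kernels built
 * with CONFIG_SHADOW_CALL_STACK=y.
 */
int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "ShadowCallStack:", 16))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

The same counter is also reported per node in /sys/devices/system/node/node*/meminfo and in show_free_areas(), as the hunks below add.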
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- drivers/base/node.c | 6 ++++++ fs/proc/meminfo.c | 4 ++++ include/linux/mmzone.h | 3 +++ kernel/scs.c | 20 ++++++++++++++++++++ mm/page_alloc.c | 6 ++++++ mm/vmstat.c | 3 +++ 6 files changed, 42 insertions(+) diff --git a/drivers/base/node.c b/drivers/base/node.c index 10d7e818e118..502ab5447c8d 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -415,6 +415,9 @@ static ssize_t node_read_meminfo(struct device *dev, "Node %d AnonPages: %8lu kB\n" "Node %d Shmem: %8lu kB\n" "Node %d KernelStack: %8lu kB\n" +#ifdef CONFIG_SHADOW_CALL_STACK + "Node %d ShadowCallStack:%8lu kB\n" +#endif "Node %d PageTables: %8lu kB\n" "Node %d NFS_Unstable: %8lu kB\n" "Node %d Bounce: %8lu kB\n" @@ -438,6 +441,9 @@ static ssize_t node_read_meminfo(struct device *dev, nid, K(node_page_state(pgdat, NR_ANON_MAPPED)), nid, K(i.sharedram), nid, sum_zone_node_page_state(nid, NR_KERNEL_STACK_KB), +#ifdef CONFIG_SHADOW_CALL_STACK + nid, sum_zone_node_page_state(nid, NR_KERNEL_SCS_BYTES) / 1024, +#endif nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)), nid, K(node_page_state(pgdat, NR_UNSTABLE_NFS)), nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)), diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c index 8c1f1bb1a5ce..49768005a79e 100644 --- a/fs/proc/meminfo.c +++ b/fs/proc/meminfo.c @@ -103,6 +103,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v) show_val_kb(m, "SUnreclaim: ", sunreclaim); seq_printf(m, "KernelStack: %8lu kB\n", global_zone_page_state(NR_KERNEL_STACK_KB)); +#ifdef CONFIG_SHADOW_CALL_STACK + seq_printf(m, "ShadowCallStack:%8lu kB\n", + global_zone_page_state(NR_KERNEL_SCS_BYTES) / 1024); +#endif show_val_kb(m, "PageTables: ", global_zone_page_state(NR_PAGETABLE)); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 1b9de7d220fb..89aa96797743 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -156,6 +156,9 @@ enum zone_stat_item { NR_MLOCK, /* mlock()ed pages found and moved off LRU */ NR_PAGETABLE, /* used for pagetables */ NR_KERNEL_STACK_KB, /* measured in KiB */ +#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) + NR_KERNEL_SCS_BYTES, /* measured in bytes */ +#endif /* Second 128 byte cacheline */ NR_BOUNCE, #if IS_ENABLED(CONFIG_ZSMALLOC) diff --git a/kernel/scs.c b/kernel/scs.c index 28abed21950c..5245e992c692 100644 --- a/kernel/scs.c +++ b/kernel/scs.c @@ -12,6 +12,7 @@ #include #include #include +#include #include static inline void *__scs_base(struct task_struct *tsk) @@ -89,6 +90,11 @@ static void scs_free(void *s) vfree_atomic(s); } +static struct page *__scs_page(struct task_struct *tsk) +{ + return vmalloc_to_page(__scs_base(tsk)); +} + static int scs_cleanup(unsigned int cpu) { int i; @@ -135,6 +141,11 @@ static inline void scs_free(void *s) kmem_cache_free(scs_cache, s); } +static struct page *__scs_page(struct task_struct *tsk) +{ + return virt_to_page(__scs_base(tsk)); +} + void __init scs_init(void) { scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE, @@ -153,6 +164,12 @@ void scs_task_reset(struct task_struct *tsk) task_set_scs(tsk, __scs_base(tsk)); } +static void scs_account(struct task_struct *tsk, int account) +{ + mod_zone_page_state(page_zone(__scs_page(tsk)), NR_KERNEL_SCS_BYTES, + account * SCS_SIZE); +} + int scs_prepare(struct task_struct *tsk, int node) { void *s; @@ -162,6 +179,8 @@ int scs_prepare(struct task_struct *tsk, int node) return -ENOMEM; task_set_scs(tsk, s); + scs_account(tsk, 1); + return 0; } @@ -182,6 +201,7 @@ void scs_release(struct 
task_struct *tsk) WARN_ON(scs_corrupted(tsk)); + scs_account(tsk, -1); task_set_scs(tsk, NULL); scs_free(s); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 69827d4fa052..721879d56bbd 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5411,6 +5411,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) " managed:%lukB" " mlocked:%lukB" " kernel_stack:%lukB" +#ifdef CONFIG_SHADOW_CALL_STACK + " shadow_call_stack:%lukB" +#endif " pagetables:%lukB" " bounce:%lukB" " free_pcp:%lukB" @@ -5433,6 +5436,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) K(zone_managed_pages(zone)), K(zone_page_state(zone, NR_MLOCK)), zone_page_state(zone, NR_KERNEL_STACK_KB), +#ifdef CONFIG_SHADOW_CALL_STACK + zone_page_state(zone, NR_KERNEL_SCS_BYTES) / 1024, +#endif K(zone_page_state(zone, NR_PAGETABLE)), K(zone_page_state(zone, NR_BOUNCE)), K(free_pcp), diff --git a/mm/vmstat.c b/mm/vmstat.c index 96d21a792b57..089602efa477 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1119,6 +1119,9 @@ const char * const vmstat_text[] = { "nr_mlock", "nr_page_table_pages", "nr_kernel_stack", +#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) + "nr_shadow_call_stack_bytes", +#endif "nr_bounce", #if IS_ENABLED(CONFIG_ZSMALLOC) "nr_zspages", From patchwork Thu Apr 16 16:12:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493379 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DB2426CA for ; Thu, 16 Apr 2020 16:13:32 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 4206C20771 for ; Thu, 16 Apr 2020 16:13:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="oaXHw4Oo" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4206C20771 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18531-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 32642 invoked by uid 550); 16 Apr 2020 16:13:09 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 32559 invoked from network); 16 Apr 2020 16:13:08 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=0zoCAxwf9k6lfoo3cxV3l2Ts8Ighj8ILmpKvjJbqwvo=; b=oaXHw4OoNjSODYtqP+D86dNRDfH9pFvBIgL5AAghUBFYKnrbVWQbiCIYvMoV5G/92q eAijVWC8ga1IgT8f2InqqkXDfvnJOHAfEoJR4Tv3ygcBIFNB+3MRslx3FassUAy0Uatd KTgMwbBl3RKE4Te0j7hl6uSM+q2hHmXvTAE68O2Qqmw80ejmwBmIa/uQIk0Ea6awb4D/ kiUfpIC642BpdAmbbEnAoxCTXQxQ62KaUPmVGWnjnutLyFbIJjdlfDx/bJhA/U7yXFFD en3fX1+ixzUBoNEWIbC1MibOBSiJdNjGY0/wQFmuEOqMPTQHYahJnCQZvu4lXlhPO+Ut 3Uiw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=0zoCAxwf9k6lfoo3cxV3l2Ts8Ighj8ILmpKvjJbqwvo=; b=bfFNTd2LKmx5kZa+RhzmbwLa7kKguwnvRi83IyWPABuK/RoATl7CSwJwQEfALmSs9q 
pdJjNwTrfmdpcQsabjt9w21spVQqmn2tITXOjafLCIfjVtasqRoH+BvF01NWcoQhMXUL GTKOQCdS3j7f++Pw2j7mSiW7I+GF67kwW8ogMKeMMUKC7cEE8vctCB3YT5irwfS36ltw 8NVLPy3erw7tCO9TrMpFg7gI5aMAk2CInhYDawnyiIMqFFuM3Tjjqku/3s8HeUZ/M21C DsJyUWCrom6Z7mJw9zRqmooebkLdythhyq3RXty257xEsBS9o82+lmJIjqmKRXIRXr5Q OiQQ== X-Gm-Message-State: AGi0PuZsGsTD//gMRgi9Xcpadt8NY974m3p+mD7FLGInU9q7Is+j2Qas BjLPgH24uz85VxW+Kc4jkcm3pMyYxpc16E7R8lQ= X-Google-Smtp-Source: APiQypKIig2VehAviH0oXuxl4EJ3M3D8ILlWZ/6oWMjPFzuhTA+6Vu4jAmcNZSFI+UI1yW8xrivvKTZYZblKhvZz34o= X-Received: by 2002:a63:d049:: with SMTP id s9mr30834027pgi.384.1587053577150; Thu, 16 Apr 2020 09:12:57 -0700 (PDT) Date: Thu, 16 Apr 2020 09:12:36 -0700 In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com> Message-Id: <20200416161245.148813-4-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 03/12] scs: add support for stack usage debugging From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled, also prints out the highest shadow stack usage per process. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- kernel/scs.c | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/kernel/scs.c b/kernel/scs.c index 5245e992c692..ad74d13f2c0f 100644 --- a/kernel/scs.c +++ b/kernel/scs.c @@ -184,6 +184,44 @@ int scs_prepare(struct task_struct *tsk, int node) return 0; } +#ifdef CONFIG_DEBUG_STACK_USAGE +static inline unsigned long scs_used(struct task_struct *tsk) +{ + unsigned long *p = __scs_base(tsk); + unsigned long *end = scs_magic(p); + unsigned long s = (unsigned long)p; + + while (p < end && READ_ONCE_NOCHECK(*p)) + p++; + + return (unsigned long)p - s; +} + +static void scs_check_usage(struct task_struct *tsk) +{ + static DEFINE_SPINLOCK(lock); + static unsigned long highest; + unsigned long used = scs_used(tsk); + + if (used <= highest) + return; + + spin_lock(&lock); + + if (used > highest) { + pr_info("%s (%d): highest shadow stack usage: %lu bytes\n", + tsk->comm, task_pid_nr(tsk), used); + highest = used; + } + + spin_unlock(&lock); +} +#else +static inline void scs_check_usage(struct task_struct *tsk) +{ +} +#endif + bool scs_corrupted(struct task_struct *tsk) { unsigned long *magic = scs_magic(__scs_base(tsk)); @@ -200,6 +238,7 @@ void scs_release(struct task_struct *tsk) return; WARN_ON(scs_corrupted(tsk)); + scs_check_usage(tsk); scs_account(tsk, -1); task_set_scs(tsk, NULL); From patchwork Thu Apr 16 16:12:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493381 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 41C8E186E for ; Thu, 16 Apr 2020 16:13:43 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by 
mail.kernel.org (Postfix) with SMTP id 9DD0D22252 for ; Thu, 16 Apr 2020 16:13:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="LLB6uGgi" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9DD0D22252 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18532-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1099 invoked by uid 550); 16 Apr 2020 16:13:12 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 32764 invoked from network); 16 Apr 2020 16:13:11 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=v+FpjDALSMEE3YToTyyoSh5dhgN0urRtPwHsB1+RBqo=; b=LLB6uGgi081g4dKc+3D/4hphc5jFZuDn5l3K/Cxtfcs3/rhIPXgTzCDJ4HjatU2B34 JY4fNmY4l5JYswvESy2IHRhqnwuSzzW+BBMx2qv9Y+i4mCamy8HON5fMlGYrckrZs0zi zqYykbeZ1nK8/8teogXlHD/G+fEHWxaLjwONYJdsxpHLsGV/uV6CPYJuDeVcg/JjeiqB SSqMbSaUnpkdFmw7gIR6KIznWHRwY5meJ2DEzQesEEI4NungE9nWx22c6EzLC3ly+4jk MftpYNrj0up2ZWOvkRRAr9cuASUwgpZg2ADv9LJBOB+WmvPbpOHT0hNO3fPOr3IaSvnm 7NdA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=v+FpjDALSMEE3YToTyyoSh5dhgN0urRtPwHsB1+RBqo=; b=JNjHDB0l1JucuoD57TgTOgaB6CzhzX1TBhIVOCiREMSFV1k8FpjaVk0Xbhfe4Rlpg3 A3AfxOZ2vsfcrD6BqjjrOm0rxtIax2NZllonl1hOBp5iPoqCOGe5Og9otgGH2ev/h88P f/yIsqaI9EajxHtl5nBS3OJVB4A1GQB8xKEmyVLRxYWrRq9//wlJN8Ooci3MYFBUYTAI 3tb3Z6/xN+gRDPHMS8uKtCirhnYSemGNmKPNY5azFbu8+Xjk3qcYkeEngh6d7JT4w8OF ORyf8na9i3mH5aBJ6bllKOafjnVFZJEJ1BAtmoXsaX1+BhIndE/bytDWgPyD5FxM6RFl I7dg== X-Gm-Message-State: AGi0PubvigWzDoj/4Q4rJ1z3OTGItejsC94MEutYG0eAMh5v6PowLzO5 k4oSejr5p0XXGJ76ZPVvXMbanNltNQhAA4Pqr5o= X-Google-Smtp-Source: APiQypJFs2LuRTbNsCDtN/g/2Dzd+U0S8ccpzr5nO2MdW7DPIurrW0+QxGuzgOdLZ6sO3fXFhdLkcQx/1PSGVXyeSjg= X-Received: by 2002:a17:90a:8989:: with SMTP id v9mr6124346pjn.119.1587053579583; Thu, 16 Apr 2020 09:12:59 -0700 (PDT) Date: Thu, 16 Apr 2020 09:12:37 -0700 In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com> Message-Id: <20200416161245.148813-5-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 04/12] scs: disable when function graph tracing is enabled From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen The graph tracer hooks returns by modifying frame records on the (regular) stack, but with SCS the return address is taken from the shadow stack, and the value in the frame record has no effect. 
As we don't currently have a mechanism to determine the corresponding slot on the shadow stack (and to pass this through the ftrace infrastructure), for now let's disable SCS when the graph tracer is enabled. With SCS the return address is taken from the shadow stack and the value in the frame record has no effect. The mcount based graph tracer hooks returns by modifying frame records on the (regular) stack, and thus is not compatible. The patchable-function-entry graph tracer used for DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved to the shadow stack, and is compatible. Modifying the mcount based graph tracer to work with SCS would require a mechanism to determine the corresponding slot on the shadow stack (and to pass this through the ftrace infrastructure), and we expect that everyone will eventually move to the patchable-function-entry based graph tracer anyway, so for now let's disable SCS when the mcount-based graph tracer is enabled. SCS and patchable-function-entry are both supported from LLVM 10.x. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Reviewed-by: Mark Rutland --- arch/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/Kconfig b/arch/Kconfig index 691a552c2cc3..c53cb9025ad2 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -542,6 +542,7 @@ config ARCH_SUPPORTS_SHADOW_CALL_STACK config SHADOW_CALL_STACK bool "Clang Shadow Call Stack" + depends on DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER depends on ARCH_SUPPORTS_SHADOW_CALL_STACK help This option enables Clang's Shadow Call Stack, which uses a From patchwork Thu Apr 16 16:12:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493385 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 58E53112C for ; Thu, 16 Apr 2020 16:13:56 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id B21FC20771 for ; Thu, 16 Apr 2020 16:13:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="aGvQz7rp" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B21FC20771 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18533-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1367 invoked by uid 550); 16 Apr 2020 16:13:15 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 1292 invoked from network); 16 Apr 2020 16:13:14 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=jhbZSy8N7pmIKGLCoqfuWjSb1n91/ddT1S0YkqMO/sE=; b=aGvQz7rpj1gmlNot9DGOyVfFExMSMhcz9FOTIfJbSo6J2eaS4Q9EvaTX97Nd1b7Is2 Bi/uzVXKD8nyy0gdnni7g4rNQaOUONSHCDbXi1JxVmjuSDUSR9C3hyS6ONRIgdqe5A7o KRFqaiwPkP7F+S6duXdhRB2xtAeRc18dKkptU+TW+VkoS72brWS3gLP1uLJyWKMa1apq 6GOIGu4r7BbAhSSQiEldxlhfcpZpwS7Ujq6MBf2bUhjBlcqdRF1nhMsPi7t9S8NV1SxC 8eUuHSdrmpT1WtCHfSF0mYe/M0/QauT7xIFnKzHxZWJuc08m/q50qVVerwI3cu0X+K2t 149A== X-Google-DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=jhbZSy8N7pmIKGLCoqfuWjSb1n91/ddT1S0YkqMO/sE=; b=Tzjz+uz0kRwAIKSltlOrXoYy/PLmQemSY9PITXWk18e1ERy4xXpA2VrqDA85Ige2cx Pzu8xmlVNIsL4F9mKESBCBwnPOjZOAwFrT9LQtxDWiOCh9u92p8InTlLs6hpJoMDCDk+ gY2e27FSr5KqThIpA9hlDWBX7I8a4HQeDLePq7/GaMl5uFSXbJiHHO6UA+GzpXf09UjO sG5fhjYL3jrYPXmZi32udNWHuGyb7ftNuOUVDWbzYH5hIVvvQkqqYqCE9C6otImT9IoV TcD6NiXif0bstbA2YQtYruhu+lUNUUC56LzX7mO4uZRdVANVjvnqxBXSXQ6jiItp9mKR 1h7w== X-Gm-Message-State: AGi0PuZScacwHK3UsgploBVFwg4IbBDyZNq2G2R72q3Ohsn8DGIMFSqp VV4GK6/L/vCJmWRaDobcDrUEtjwvBfneAEpnaOs= X-Google-Smtp-Source: APiQypL8a8EcwgkAdr2i3smmJxOyKHY3RiUFRcgoCxLBedFlxJV/kyUEcY4a7Jl+ITX16xXqRHQjeNwR/jbT4qE4WrI= X-Received: by 2002:a65:68c7:: with SMTP id k7mr33098346pgt.248.1587053582058; Thu, 16 Apr 2020 09:13:02 -0700 (PDT) Date: Thu, 16 Apr 2020 09:12:38 -0700 In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com> Message-Id: <20200416161245.148813-6-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 05/12] arm64: reserve x18 from general allocation with SCS From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Reserve the x18 register from general allocation when SCS is enabled, because the compiler uses the register to store the current task's shadow stack pointer. Note that all external kernel modules must also be compiled with -ffixed-x18 if the kernel has SCS enabled. 
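For background on why the register must be reserved everywhere (illustrative only, not part of this patch): with -fsanitize=shadow-call-stack Clang keeps the shadow stack pointer in x18 and spills/reloads the return address around each non-leaf function. A rough sketch of what that looks like, assuming the code generation described in the Clang documentation referenced in patch 01/12 (details can vary between compiler versions):

/*
 * Sketch only: build with
 *   clang --target=aarch64-linux-gnu -O2 -ffixed-x18 \
 *         -fsanitize=shadow-call-stack -S scs-example.c
 * and caller() below is roughly bracketed with
 *
 *   str x30, [x18], #8     // push the return address to the shadow stack
 *   ...                    // body, including the call to callee()
 *   ldr x30, [x18, #-8]!   // pop it back right before ret
 *
 * Because x18 holds the shadow stack pointer across the whole kernel,
 * nothing else (including out-of-tree modules) may allocate or clobber
 * it, hence -ffixed-x18.
 */
static __attribute__((noinline)) int callee(int x)
{
	return x * 2;
}

int caller(int x)
{
	return callee(x) + 1;
}

(scs-example.c is just a made-up file name for the sketch.)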
Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Acked-by: Will Deacon --- arch/arm64/Makefile | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 85e4149cc5d5..409a6c1be8cc 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -81,6 +81,10 @@ endif KBUILD_CFLAGS += $(branch-prot-flags-y) +ifeq ($(CONFIG_SHADOW_CALL_STACK), y) +KBUILD_CFLAGS += -ffixed-x18 +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__
From patchwork Thu Apr 16 16:12:39 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11493389
Date: Thu, 16 Apr 2020 09:12:39 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id:
<20200416161245.148813-7-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 06/12] arm64: preserve x18 when CPU is suspended From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Don't lose the current task's shadow stack when the CPU is suspended. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Reviewed-by: Mark Rutland Acked-by: Will Deacon --- arch/arm64/include/asm/suspend.h | 2 +- arch/arm64/mm/proc.S | 14 ++++++++++++++ 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h index 8939c87c4dce..0cde2f473971 100644 --- a/arch/arm64/include/asm/suspend.h +++ b/arch/arm64/include/asm/suspend.h @@ -2,7 +2,7 @@ #ifndef __ASM_SUSPEND_H #define __ASM_SUSPEND_H -#define NR_CTX_REGS 12 +#define NR_CTX_REGS 13 #define NR_CALLEE_SAVED_REGS 12 /* diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 197a9ba2d5ea..ed15be0f8103 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -58,6 +58,8 @@ * cpu_do_suspend - save CPU registers context * * x0: virtual address of context pointer + * + * This must be kept in sync with struct cpu_suspend_ctx in . */ SYM_FUNC_START(cpu_do_suspend) mrs x2, tpidr_el0 @@ -82,6 +84,11 @@ alternative_endif stp x8, x9, [x0, #48] stp x10, x11, [x0, #64] stp x12, x13, [x0, #80] + /* + * Save x18 as it may be used as a platform register, e.g. by shadow + * call stack. + */ + str x18, [x0, #96] ret SYM_FUNC_END(cpu_do_suspend) @@ -98,6 +105,13 @@ SYM_FUNC_START(cpu_do_resume) ldp x9, x10, [x0, #48] ldp x11, x12, [x0, #64] ldp x13, x14, [x0, #80] + /* + * Restore x18, as it may be used as a platform register, and clear + * the buffer to minimize the risk of exposure when used for shadow + * call stack. 
+ */ + ldr x18, [x0, #96] + str xzr, [x0, #96] msr tpidr_el0, x2 msr tpidrro_el0, x3 msr contextidr_el1, x4 From patchwork Thu Apr 16 16:12:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493401 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D0447112C for ; Thu, 16 Apr 2020 16:14:26 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 37C452078B for ; Thu, 16 Apr 2020 16:14:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="PfLQtJvA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 37C452078B Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18535-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1697 invoked by uid 550); 16 Apr 2020 16:13:19 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 1597 invoked from network); 16 Apr 2020 16:13:18 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ir6AMcU/LNJfVmH7TAbfuSTmLjnaQbzCZFrLbu9sicg=; b=PfLQtJvAzz5yI91olwSgmzdmwgdtJxdSpUwDBUqAfM/Qj4X8PTnr8pe6M1E9q+SxMQ 9DuGyRyHutXrki59bdmwIERRsNZo12qtXtT7YTGaQUvErlBRD5mI0A9o3BWgwYogesFw pJQazMuF7VSg8nXXNbbuwnalk1A8GI2siOE+gzAVLDi5SEfj4qBjBhPGSFAFigx0XT27 2J6qu4mUgNZPuqq+yi7V5b3R7FRrHRFEL+uzZRZfJJCkuZ3Y9YcoQ6IkCgna/0CQ22Jg qGjnRtJiwEZAIC0v5v+CcPVf8e5pwqpO5XgzgToKbUlUPQ54YIcNjAxSzrtDUK8hBIAD 0GIQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ir6AMcU/LNJfVmH7TAbfuSTmLjnaQbzCZFrLbu9sicg=; b=hXHIQAE36k3p+xnyS2bs5cQXUD3HopZCyoH/FobvRGcSTaX1wYhiZ3RTqX+0vQiyn4 5yNs8a2VuAsKXrR1zoyLP8jqa1mL9BmgnUHanZaPqb3t4ob3KMb36XS0wCBHX7I9Ezc5 hN4ISzdZbEJdZOWLp3czQTFS+fudDkApgwqyYqYgpER/ejJTMjo+kXeUM6Uzi+3OznZ5 OvwCL21uQPcqDxd94XQJ8j1HK8M2ZCqF7sv3GCECcO6EChcakLdyKjY/k3n5pI032hBq x2t9feptM1OB1RbcpresrSQrq9KcMGcO6l/+4zuZZCNBvaVcxVIg7rUDp4B6MSzJDEu6 T18A== X-Gm-Message-State: AGi0PuZk9knng60Sd7oIMd0GS9MkdkoHHHA8Ym+6OmER/tsz/fqDQPBQ HUVfrUrGCXyHgDXuh3kcBqcPtkYmYRtz/u9P7Ns= X-Google-Smtp-Source: APiQypLtd8ftrhqgq/6RnJ5qnFSnfzNaIUqdmQcwCsxZfxvN3FzC3GYwf1kFiAS1IaIXQEFXlhjHe+bl82oElgHqTYg= X-Received: by 2002:a17:90a:cc10:: with SMTP id b16mr5944791pju.29.1587053586616; Thu, 16 Apr 2020 09:13:06 -0700 (PDT) Date: Thu, 16 Apr 2020 09:12:40 -0700 In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com> Message-Id: <20200416161245.148813-8-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 07/12] arm64: efi: restore x18 if it was corrupted From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , 
Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen If we detect a corrupted x18, restore the register before jumping back to potentially SCS instrumented code. This is safe, because the wrapper is called with preemption disabled and a separate shadow stack is used for interrupt handling. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Acked-by: Will Deacon --- arch/arm64/kernel/efi-rt-wrapper.S | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S index 3fc71106cb2b..6ca6c0dc11a1 100644 --- a/arch/arm64/kernel/efi-rt-wrapper.S +++ b/arch/arm64/kernel/efi-rt-wrapper.S @@ -34,5 +34,14 @@ ENTRY(__efi_rt_asm_wrapper) ldp x29, x30, [sp], #32 b.ne 0f ret -0: b efi_handle_corrupted_x18 // tail call +0: + /* + * With CONFIG_SHADOW_CALL_STACK, the kernel uses x18 to store a + * shadow stack pointer, which we need to restore before returning to + * potentially instrumented code. This is safe because the wrapper is + * called with preemption disabled and a separate shadow stack is used + * for interrupts. + */ + mov x18, x2 + b efi_handle_corrupted_x18 // tail call ENDPROC(__efi_rt_asm_wrapper) From patchwork Thu Apr 16 16:12:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493407 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2DA746CA for ; Thu, 16 Apr 2020 16:14:42 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 8B62920771 for ; Thu, 16 Apr 2020 16:14:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="CpB297mh" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8B62920771 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18536-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1872 invoked by uid 550); 16 Apr 2020 16:13:21 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 1806 invoked from network); 16 Apr 2020 16:13:20 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=MhRn5bu5/kvUh5QzOzRXssMosuq4vthGJuay/ttgTR4=; b=CpB297mheaErr6yv8xzZk9C8e3OlyZuurSZ4iPlqnxBL1rdN7DVreXtQ6hHd7Wghk3 A5W9OQc4/k6ybv0BMxLVLn8OvbS40EACfZw8xiC3B9iqMWTZ/G+4Y/YX3/JNpQMeDGS3 ay0rPEcWXkE7+/beHpj+T7HGAeju4SLB85FtaqnWCnPyeZYfmL7ROzgbfHdDeuyse6Mw X10A0ECz0Imqt1Wo4UmMjqK65aL12fZ5Eglv5O0xEJh6LVssqIx/QWQa3QbH0ppYnbu+ 0maH/v5qWpKrIiGAPfF4cw/U7QIU4ulRW3FrsdGzWeLWUmoK0Amlfi7T20CvdzF4vk8N JCtQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=MhRn5bu5/kvUh5QzOzRXssMosuq4vthGJuay/ttgTR4=; b=uCfZ+vD634nq0BY9ho0sQC+yXWCYpfSyfegV2i++5ORhy5olWpyftNwB3A5Umh+wKY kwkUIfoHpY5guZFgCfG+WiZf8y5g9OxTx4j4TXzN1OwnB9x76NQvjPHt7ve35/VT874d AeqLRLAdgnt28+EFyP/9JvC5eHZCBj2330eRco+sarXq4bpQUGpULBgH6NbRMhaj/vOQ bGc5003kLQKzS3raJ9VcxZEX7H5q37r66g4eI3s0PUV+CmZCIR1/afEVm343eraXzeto uq+cf2l4FWo64uyRSToG+ED0ybG55ZmZPY/P3SuKE0cnfqM0Pm5CgfjPWy8do/m288Eu yMrA== X-Gm-Message-State: AGi0PuZjusz3sTMTNXPkoiDaPotsPVBdLGHB04y6D8lVlRLxEQGPAx3m s/n7noRkN9f2S8vgUkGYYbs8nQf9lW6uqtxrF/c= X-Google-Smtp-Source: APiQypL0T1GFOmyxncLIjxwWs5XI51QswoFv2KQDHvzuxs+kU6I+8aUYgz8VLRqT+kH3Do9DXOfw/WbQdUM/RpiZ80k= X-Received: by 2002:a17:90a:f407:: with SMTP id ch7mr6146496pjb.72.1587053588956; Thu, 16 Apr 2020 09:13:08 -0700 (PDT) Date: Thu, 16 Apr 2020 09:12:41 -0700 In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com> Message-Id: <20200416161245.148813-9-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20200416161245.148813-1-samitolvanen@google.com> X-Mailer: git-send-email 2.26.1.301.g55bc3eb7cb9-goog Subject: [PATCH v11 08/12] arm64: vdso: disable Shadow Call Stack From: Sami Tolvanen To: Will Deacon , Catalin Marinas , James Morse , Steven Rostedt , Ard Biesheuvel , Mark Rutland , Masahiro Yamada , Michal Marek , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot Cc: Dave Martin , Kees Cook , Laura Abbott , Marc Zyngier , Masami Hiramatsu , Nick Desaulniers , Jann Horn , Miguel Ojeda , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Shadow stacks are only available in the kernel, so disable SCS instrumentation for the vDSO. 
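Alongside removing $(CC_FLAGS_SCS) from whole objects as done here, the series also allows opting out individual functions via the __noscs attribute added in patch 01/12 (and used for hypervisor code later in the series). A minimal, hypothetical sketch of that form (the function name is made up, and the attribute comes from the kernel's compiler headers):

#include <linux/compiler.h>

/*
 * Hypothetical example only: __noscs (patch 01/12) expands to
 * __attribute__((__no_sanitize__("shadow-call-stack"))) under Clang and
 * to nothing otherwise, so this function is compiled without shadow
 * call stack pushes and pops.
 */
void __noscs example_no_scs_helper(void)
{
	/* must not rely on, or touch, the kernel's shadow call stack */
}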
Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Reviewed-by: Mark Rutland Acked-by: Will Deacon --- arch/arm64/kernel/vdso/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile index dd2514bb1511..a87a4f11724e 100644 --- a/arch/arm64/kernel/vdso/Makefile +++ b/arch/arm64/kernel/vdso/Makefile @@ -25,7 +25,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING VDSO_LDFLAGS := -Bsymbolic -CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os +CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) KBUILD_CFLAGS += $(DISABLE_LTO) KASAN_SANITIZE := n UBSAN_SANITIZE := n From patchwork Thu Apr 16 16:12:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11493411 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B767F6CA for ; Thu, 16 Apr 2020 16:14:55 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 1ADEA20771 for ; Thu, 16 Apr 2020 16:14:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="sLxFQroY" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1ADEA20771 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-18537-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 3134 invoked by uid 550); 16 Apr 2020 16:13:24 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 2028 invoked from network); 16 Apr 2020 16:13:23 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=SV1DVkULqNa+FuGbiBRMBHT4OnEsBDpzGAv6cWsD5ng=; b=sLxFQroYETeBwkC7GL2sYyxH2hNaeokGKsdeLtXjhCO6cOtPj2JOlGsmapVVPWTSK1 uZovYKEMjxeKiEqbJmm1N7h1ZuFNEWeqj4HZX6p71THvzQF8G9M+4JNKvHa1/tmuIfCm 6EVESi0OEnWDzNouoomBEUVi/Rq0z+KDY6RjRyGUB3tsfFtJuEFP+col35MQzcyIEv98 JSCbKVeCMC9JhoiaW1ka8ABAA/JedzZ9Vi0vS2OTLgQwsGeOmODa1mbGdE1L5+3CEz9R c0a2TlVnEXnb/0S7j/hvoXDbtLEn1ydd7L+LiJbescqWyWwpPatKM4QMw31dYxQHQmR6 pJZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=SV1DVkULqNa+FuGbiBRMBHT4OnEsBDpzGAv6cWsD5ng=; b=StXhxjdqiTrqxSzHV6/Rucl2hBTqwSmuIMv/mO8dv9n6FP/KpE2JNKz4CjIAY8zndl dWXxIMfW+xh61IL9PV6xNIJ7x7yRKA5k0Ch/Z/93nWVt+QLlIm+D51nzaOirKro7J5Ac jKBjGUjce5DZuOtXx+bdr7V2fOWWwDl2bBhSZqIFvTHo7mf0aVIS8ommdI/RMpgP57oo hdACrVO7s9bBzRJhQfSAEc8k60rIameb660ix88i07KGBwNH44QdRHyjnZcciALpXWmu jDFuJESUXB5gKPdTzgKVpa+jbQh7uyeRzFoT2fIDp1cTQW8vzgbSK+6h0lGV9JkMCZH1 DVJQ== X-Gm-Message-State: AGi0Pua07R+nFf1BfrF/UlJkyiaSEk0hSVx8ar274bXLR55SA4/vdd0u 4tKHoPHiLype9u2tSNPHg34Ucg7LA17JHj/Tu3w= X-Google-Smtp-Source: APiQypJNkpQBGKyIWPV2BtmwX/E/gHMUX9jAQmns+9eOAcaQtZAXVp0Bz90VtPtI2vifzdaZP44LMcdHl+FTfzmDINE= X-Received: by 2002:a63:ca41:: with SMTP id o1mr33386721pgi.419.1587053591236; Thu, 16 

From patchwork Thu Apr 16 16:12:42 2020
X-Patchwork-Id: 11493411
Date: Thu, 16 Apr 2020 09:12:42 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id: <20200416161245.148813-10-samitolvanen@google.com>
Subject: [PATCH v11 09/12] arm64: disable SCS for hypervisor code
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
    Ard Biesheuvel, Mark Rutland, Masahiro Yamada, Michal Marek,
    Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Masami Hiramatsu,
    Nick Desaulniers, Jann Horn, Miguel Ojeda,
    clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sami Tolvanen

Disable SCS for code that runs at a different exception level by
adding __noscs to __hyp_text.

Suggested-by: James Morse
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Acked-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_hyp.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index fe57f60f06a8..875b106c5d98 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -13,7 +13,7 @@
 #include
 #include
 
-#define __hyp_text __section(.hyp.text) notrace
+#define __hyp_text __section(.hyp.text) notrace __noscs
 
 #define read_sysreg_elx(r,nvh,vh)				\
 ({								\
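
[Editor's note] __noscs itself is not defined by this patch; it comes from the
generic SCS support added at the start of the series (the compiler header
changes). The sketch below shows the expected shape of that definition and how
it opts a single function out of SCS instrumentation. The macro body, the
feature guard, and the example function are assumptions for illustration, not
the series' verbatim code.

/* Hedged sketch -- not the series' verbatim definition. */
#ifndef __has_feature
#define __has_feature(x) 0	/* keep non-Clang preprocessors happy */
#endif

#if __has_feature(shadow_call_stack)
#define __noscs	__attribute__((__no_sanitize__("shadow-call-stack")))
#else
#define __noscs
#endif

/* EL2 code runs with its own x18, so hyp helpers are built without SCS: */
static void __noscs example_hyp_helper(void)
{
	/* no shadow-stack push/pop is emitted for this function */
}
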

From patchwork Thu Apr 16 16:12:43 2020
X-Patchwork-Id: 11493415
Date: Thu, 16 Apr 2020 09:12:43 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id: <20200416161245.148813-11-samitolvanen@google.com>
Subject: [PATCH v11 10/12] arm64: implement Shadow Call Stack
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
    Ard Biesheuvel, Mark Rutland, Masahiro Yamada, Michal Marek,
    Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Masami Hiramatsu,
    Nick Desaulniers, Jann Horn, Miguel Ojeda,
    clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sami Tolvanen

This change implements shadow stack switching, initial SCS set-up, and
interrupt shadow stacks for arm64.
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 arch/arm64/Kconfig                   |  5 ++++
 arch/arm64/include/asm/scs.h         | 37 +++++++++++++++++++++++++
 arch/arm64/include/asm/thread_info.h |  3 +++
 arch/arm64/kernel/Makefile           |  1 +
 arch/arm64/kernel/asm-offsets.c      |  3 +++
 arch/arm64/kernel/entry.S            | 33 +++++++++++++++++++++--
 arch/arm64/kernel/head.S             |  8 ++++++
 arch/arm64/kernel/irq.c              |  2 ++
 arch/arm64/kernel/process.c          |  2 ++
 arch/arm64/kernel/scs.c              | 40 ++++++++++++++++++++++++++++
 arch/arm64/kernel/smp.c              |  4 +++
 11 files changed, 136 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/scs.h
 create mode 100644 arch/arm64/kernel/scs.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 40fb05d96c60..c380a16533f6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -64,6 +64,7 @@ config ARM64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_SUPPORTS_MEMORY_FAILURE
+	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
 	select ARCH_SUPPORTS_NUMA_BALANCING
@@ -1025,6 +1026,10 @@ config ARCH_HAS_CACHE_LINE_SIZE
 config ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	def_bool y if PGTABLE_LEVELS > 2
 
+# Supported by clang >= 7.0
+config CC_HAVE_SHADOW_CALL_STACK
+	def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
+
 config SECCOMP
 	bool "Enable seccomp to safely compute untrusted bytecode"
 	---help---

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
new file mode 100644
index 000000000000..c50d2b0c6c5f
--- /dev/null
+++ b/arch/arm64/include/asm/scs.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_SCS_H
+#define _ASM_SCS_H
+
+#ifndef __ASSEMBLY__
+
+#include
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+extern void scs_init_irq(void);
+
+static __always_inline void scs_save(struct task_struct *tsk)
+{
+	void *s;
+
+	asm volatile("mov %0, x18" : "=r" (s));
+	task_set_scs(tsk, s);
+}
+
+static inline void scs_overflow_check(struct task_struct *tsk)
+{
+	if (unlikely(scs_corrupted(tsk)))
+		panic("corrupted shadow stack detected inside scheduler\n");
+}
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+static inline void scs_init_irq(void) {}
+static inline void scs_save(struct task_struct *tsk) {}
+static inline void scs_overflow_check(struct task_struct *tsk) {}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_SCS_H */
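
[Editor's note] scs_corrupted() and task_set_scs() used above come from the
generic include/linux/scs.h introduced earlier in the series and are not shown
in this hunk. As a rough illustration of what an overflow check like
scs_overflow_check() can rely on, the sketch below assumes the generic code
plants a magic word in the last slot of every shadow stack; the constant and
helper names here are made up and are not the series' actual implementation.

/* Illustrative only: a plausible end-of-stack canary check. */
#define SCS_END_MAGIC_EXAMPLE	0x5f6c73635f646e65UL	/* made-up value */

static inline unsigned long *scs_magic_example(void *scs_base)
{
	/* last slot of an SCS_SIZE-byte shadow stack */
	return (unsigned long *)(scs_base + SCS_SIZE) - 1;
}

static inline bool scs_corrupted_example(void *scs_base)
{
	/* a call-depth overflow overwrites the canary before anything else */
	return READ_ONCE(*scs_magic_example(scs_base)) != SCS_END_MAGIC_EXAMPLE;
}
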
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 512174a8e789..1fb651f73da3 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -41,6 +41,9 @@ struct thread_info {
 #endif
 		} preempt;
 	};
+#ifdef CONFIG_SHADOW_CALL_STACK
+	void *shadow_call_stack;
+#endif
 };
 
 #define thread_saved_pc(tsk)	\

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 4e5b8ee31442..151f28521f1e 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
 obj-$(CONFIG_ARM64_SSBD) += ssbd.o
 obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
 
 obj-y += vdso/ probes/
 obj-$(CONFIG_COMPAT_VDSO) += vdso32/

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 9981a0a5a87f..777a662888ec 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -33,6 +33,9 @@ int main(void)
   DEFINE(TSK_TI_ADDR_LIMIT,	offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
   DEFINE(TSK_TI_TTBR0,		offsetof(struct task_struct, thread_info.ttbr0));
+#endif
+#ifdef CONFIG_SHADOW_CALL_STACK
+  DEFINE(TSK_TI_SCS,		offsetof(struct task_struct, thread_info.shadow_call_stack));
 #endif
   DEFINE(TSK_STACK,		offsetof(struct task_struct, stack));
 #ifdef CONFIG_STACKPROTECTOR

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ddcde093c433..c33264ce7258 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -179,6 +179,11 @@ alternative_cb_end
 	apply_ssbd 1, x22, x23
 
 	ptrauth_keys_install_kernel tsk, 1, x20, x22, x23
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr	x18, [tsk, #TSK_TI_SCS]		// Restore shadow call stack
+	str	xzr, [tsk, #TSK_TI_SCS]		// Limit visibility of saved SCS
+#endif
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_current_task tsk
@@ -280,6 +285,12 @@ alternative_else_nop_endif
 	ct_user_enter
 	.endif
 
+#ifdef CONFIG_SHADOW_CALL_STACK
+	.if	\el == 0
+	str	x18, [tsk, #TSK_TI_SCS]		// Save shadow call stack
+	.endif
+#endif
+
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR
@@ -388,6 +399,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 	.macro	irq_stack_entry
 	mov	x19, sp			// preserve the original sp
+#ifdef CONFIG_SHADOW_CALL_STACK
+	mov	x24, x18		// preserve the original shadow stack
+#endif
 
 	/*
 	 * Compare sp with the base of the task stack.
@@ -405,15 +419,25 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 	/* switch to the irq stack */
 	mov	sp, x26
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* also switch to the irq shadow stack */
+	ldr_this_cpu x18, irq_shadow_call_stack_ptr, x26
+#endif
+
 9998:
 	.endm
 
 	/*
-	 * x19 should be preserved between irq_stack_entry and
-	 * irq_stack_exit.
+	 * The callee-saved regs (x19-x29) should be preserved between
+	 * irq_stack_entry and irq_stack_exit, but note that kernel_entry
+	 * uses x20-x23 to store data for later use.
 	 */
 	.macro	irq_stack_exit
 	mov	sp, x19
+#ifdef CONFIG_SHADOW_CALL_STACK
+	mov	x18, x24
+#endif
 	.endm
 
 /* GPRs used by entry code */
@@ -901,6 +925,11 @@ SYM_FUNC_START(cpu_switch_to)
 	mov	sp, x9
 	msr	sp_el0, x1
 	ptrauth_keys_install_kernel x1, 1, x8, x9, x10
+#ifdef CONFIG_SHADOW_CALL_STACK
+	str	x18, [x0, #TSK_TI_SCS]
+	ldr	x18, [x1, #TSK_TI_SCS]
+	str	xzr, [x1, #TSK_TI_SCS]		// limit visibility of saved SCS
+#endif
 	ret
 SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
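
[Editor's note] The cpu_switch_to() hunk above is the core of the shadow stack
switch. A C-level sketch of the same hand-off is shown below purely for
readability; it is not part of the patch, and task_scs() is assumed here to be
the thread_info.shadow_call_stack accessor from the generic code earlier in
the series.

/* Sketch of the SCS hand-off performed in assembly by cpu_switch_to(). */
static void scs_switch_sketch(struct task_struct *prev,
			      struct task_struct *next)
{
	void *scs;

	/* save the outgoing task's shadow stack pointer (currently in x18) */
	asm volatile("mov %0, x18" : "=r" (scs));
	task_set_scs(prev, scs);

	/* install the incoming task's pointer and clear the saved slot so the
	 * address of the now-active shadow stack is not left in memory */
	scs = task_scs(next);	/* assumption: returns the saved pointer */
	task_set_scs(next, NULL);
	asm volatile("mov x18, %0" : : "r" (scs));
}
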
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 57a91032b4c2..1514445bbccb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -424,6 +424,10 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 	stp	xzr, x30, [sp, #-16]!
 	mov	x29, sp
 
+#ifdef CONFIG_SHADOW_CALL_STACK
+	adr_l	x18, init_shadow_call_stack	// Set shadow call stack
+#endif
+
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer
 
 	ldr_l	x4, kimage_vaddr		// Save the offset between
@@ -737,6 +741,10 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
 	ldr	x2, [x0, #CPU_BOOT_TASK]
 	cbz	x2, __secondary_too_slow
 	msr	sp_el0, x2
+#ifdef CONFIG_SHADOW_CALL_STACK
+	ldr	x18, [x2, #TSK_TI_SCS]		// set shadow call stack
+	str	xzr, [x2, #TSK_TI_SCS]		// limit visibility of saved SCS
+#endif
 	mov	x29, #0
 	mov	x30, #0
 	b	secondary_start_kernel

diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 04a327ccf84d..fe0ca522ff60 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 
 unsigned long irq_err_count;
 
@@ -63,6 +64,7 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	scs_init_irq();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 56be4cbf771f..a35d3318492c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
@@ -515,6 +516,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	entry_task_switch(next);
 	uao_thread_switch(next);
 	ssbs_thread_switch(next);
+	scs_overflow_check(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
new file mode 100644
index 000000000000..eaadf5430baa
--- /dev/null
+++ b/arch/arm64/kernel/scs.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include
+#include
+#include
+#include
+
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
+DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
+	__aligned(SCS_SIZE);
+#endif
+
+void scs_init_irq(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+		unsigned long *p;
+
+		p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+					 VMALLOC_START, VMALLOC_END,
+					 GFP_SCS, PAGE_KERNEL,
+					 0, cpu_to_node(cpu),
+					 __builtin_return_address(0));
+
+		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
+#else
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			per_cpu(irq_shadow_call_stack, cpu);
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	}
+}

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 061f60fe452f..1d112e34a636 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -370,6 +371,9 @@ void cpu_die(void)
 	unsigned int cpu = smp_processor_id();
 	const struct cpu_operations *ops = get_cpu_ops(cpu);
 
+	/* Save the shadow stack pointer before exiting the idle task */
+	scs_save(current);
+
 	idle_task_exit();
 
 	local_daif_mask();
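
[Editor's note] The ldr_this_cpu in irq_stack_entry earlier in this patch is
the assembly counterpart of an ordinary per-CPU read of the
irq_shadow_call_stack_ptr variable defined in scs.c above; the entry code has
to do it in assembly because it runs before any C on the interrupt path. For
illustration only, the C equivalent would look roughly like this:

/* Illustration only: what irq_stack_entry's ldr_this_cpu amounts to. */
static inline unsigned long *this_cpu_irq_scs_example(void)
{
	return __this_cpu_read(irq_shadow_call_stack_ptr);
}
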

From patchwork Thu Apr 16 16:12:44 2020
X-Patchwork-Id: 11493425
Date: Thu, 16 Apr 2020 09:12:44 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id: <20200416161245.148813-12-samitolvanen@google.com>
Subject: [PATCH v11 11/12] arm64: scs: add shadow stacks for SDEI
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
    Ard Biesheuvel, Mark Rutland, Masahiro Yamada, Michal Marek,
    Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Masami Hiramatsu,
    Nick Desaulniers, Jann Horn, Miguel Ojeda,
    clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sami Tolvanen

This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.

Signed-off-by: Sami Tolvanen
Reviewed-by: James Morse
Tested-by: James Morse
---
 arch/arm64/include/asm/scs.h |   2 +
 arch/arm64/kernel/entry.S    |  14 ++++-
 arch/arm64/kernel/scs.c      | 106 +++++++++++++++++++++++++++++------
 arch/arm64/kernel/sdei.c     |   7 +++
 4 files changed, 112 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index c50d2b0c6c5f..8e327e14bc15 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_SHADOW_CALL_STACK
 
 extern void scs_init_irq(void);
+extern int scs_init_sdei(void);
 
 static __always_inline void scs_save(struct task_struct *tsk)
 {
@@ -27,6 +28,7 @@ static inline void scs_overflow_check(struct task_struct *tsk)
 #else /* CONFIG_SHADOW_CALL_STACK */
 
 static inline void scs_init_irq(void) {}
+static inline int scs_init_sdei(void) { return 0; }
 static inline void scs_save(struct task_struct *tsk) {}
 static inline void scs_overflow_check(struct task_struct *tsk) {}
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index c33264ce7258..768cd7abd32c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1058,13 +1058,16 @@ SYM_CODE_START(__sdei_asm_handler)
 
 	mov	x19, x1
 
+#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
+	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 	/*
 	 * entry.S may have been using sp as a scratch register, find whether
 	 * this is a normal or critical event and switch to the appropriate
 	 * stack for this CPU.
 	 */
-	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
 	cbnz	w4, 1f
 	ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
 	b	2f
@@ -1074,6 +1077,15 @@ SYM_CODE_START(__sdei_asm_handler)
 	mov	sp, x5
 #endif
 
+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* Use a separate shadow call stack for normal and critical events */
+	cbnz	w4, 3f
+	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
+	b	4f
+3:	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
+4:
+#endif
+
 	/*
 	 * We may have interrupted userspace, or a guest, or exit-from or
 	 * return-to either of these. We can't trust sp_el0, restore it.

diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
index eaadf5430baa..dddb7c56518b 100644
--- a/arch/arm64/kernel/scs.c
+++ b/arch/arm64/kernel/scs.c
@@ -10,31 +10,105 @@
 #include
 #include
 
-DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#define DECLARE_SCS(name)						\
+	DECLARE_PER_CPU(unsigned long *, name ## _ptr);			\
+	DECLARE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)
 
-#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
-DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
-	__aligned(SCS_SIZE);
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+#define DEFINE_SCS(name)						\
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr)
+#else
+/* Allocate a static per-CPU shadow stack */
+#define DEFINE_SCS(name)						\
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr);			\
+	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
+		__aligned(SCS_SIZE)
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+DECLARE_SCS(irq_shadow_call_stack);
+DECLARE_SCS(sdei_shadow_call_stack_normal);
+DECLARE_SCS(sdei_shadow_call_stack_critical);
+
+DEFINE_SCS(irq_shadow_call_stack);
+#ifdef CONFIG_ARM_SDE_INTERFACE
+DEFINE_SCS(sdei_shadow_call_stack_normal);
+DEFINE_SCS(sdei_shadow_call_stack_critical);
 #endif
 
+static int scs_alloc_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p;
+
+	p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL,
+				 0, cpu_to_node(cpu),
+				 __builtin_return_address(0));
+
+	if (!p)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = p;
+
+	return 0;
+}
+
+static void scs_free_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p = per_cpu(*ptr, cpu);
+
+	if (p) {
+		per_cpu(*ptr, cpu) = NULL;
+		vfree(p);
+	}
+}
+
+static void scs_free_sdei(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		scs_free_percpu(&sdei_shadow_call_stack_normal_ptr, cpu);
+		scs_free_percpu(&sdei_shadow_call_stack_critical_ptr, cpu);
+	}
+}
+
 void scs_init_irq(void)
 {
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
-		unsigned long *p;
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP))
+			WARN_ON(scs_alloc_percpu(&irq_shadow_call_stack_ptr,
+						 cpu));
+		else
+			per_cpu(irq_shadow_call_stack_ptr, cpu) =
+				per_cpu(irq_shadow_call_stack, cpu);
+	}
+}
 
-		p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
-					 VMALLOC_START, VMALLOC_END,
-					 GFP_SCS, PAGE_KERNEL,
-					 0, cpu_to_node(cpu),
-					 __builtin_return_address(0));
+int scs_init_sdei(void)
+{
+	int cpu;
 
-		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
-#else
-		per_cpu(irq_shadow_call_stack_ptr, cpu) =
-			per_cpu(irq_shadow_call_stack, cpu);
-#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+		return 0;
+
+	for_each_possible_cpu(cpu) {
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP)) {
+			if (scs_alloc_percpu(
+				&sdei_shadow_call_stack_normal_ptr, cpu) ||
+			    scs_alloc_percpu(
+				&sdei_shadow_call_stack_critical_ptr, cpu)) {
+				scs_free_sdei();
+				return -ENOMEM;
+			}
+		} else {
+			per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_normal, cpu);
+			per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_critical, cpu);
+		}
 	}
+
+	return 0;
 }
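
[Editor's note] The DECLARE_SCS()/DEFINE_SCS() pair above hides the
vmap-vs-static choice behind one macro per stack. For illustration, a
hypothetical additional per-CPU shadow stack (the "demo" name below is made up
and is not something the series adds) would be declared and defined like this,
and would then only need an scs_alloc_percpu() call when
CONFIG_SHADOW_CALL_STACK_VMAP is enabled:

/* Hypothetical example -- "demo" is not a stack the series adds. */
DECLARE_SCS(demo_shadow_call_stack);
DEFINE_SCS(demo_shadow_call_stack);

/* With CONFIG_SHADOW_CALL_STACK_VMAP=n this expands (roughly) to:
 *   DEFINE_PER_CPU(unsigned long *, demo_shadow_call_stack_ptr);
 *   DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)],
 *                  demo_shadow_call_stack) __aligned(SCS_SIZE);
 */
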
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index d6259dac62b6..2854b9f7760a 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -162,6 +163,12 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 			return 0;
 	}
 
+	if (scs_init_sdei()) {
+		if (IS_ENABLED(CONFIG_VMAP_STACK))
+			free_sdei_stacks();
+		return 0;
+	}
+
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0

From patchwork Thu Apr 16 16:12:45 2020
X-Patchwork-Id: 11493435
Date: Thu, 16 Apr 2020 09:12:45 -0700
In-Reply-To: <20200416161245.148813-1-samitolvanen@google.com>
Message-Id: <20200416161245.148813-13-samitolvanen@google.com>
Subject: [PATCH v11 12/12] efi/libstub: disable SCS
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, James Morse, Steven Rostedt,
    Ard Biesheuvel, Mark Rutland, Masahiro Yamada, Michal Marek,
    Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier, Masami Hiramatsu,
    Nick Desaulniers, Jann Horn, Miguel Ojeda,
    clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sami Tolvanen, Ard Biesheuvel

Shadow stacks are not available in the EFI stub, so filter out SCS flags.

Suggested-by: James Morse
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Acked-by: Ard Biesheuvel
---
 drivers/firmware/efi/libstub/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 094eabdecfe6..b52ae8c29560 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -32,6 +32,9 @@ KBUILD_CFLAGS := $(cflags-y) -DDISABLE_BRANCH_PROFILING \
 				   $(call cc-option,-fno-stack-protector) \
 				   -D__DISABLE_EXPORTS
 
+# remove SCS flags from all objects in this directory
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+
 GCOV_PROFILE			:= n
 KASAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n