From patchwork Thu Oct 24 22:51:20 2019
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11211009
Date: Thu, 24 Oct 2019 15:51:20 -0700
In-Reply-To: <20191024225132.13410-1-samitolvanen@google.com>
Message-Id: <20191024225132.13410-6-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20191024225132.13410-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog
Subject: [PATCH v2 05/17] add support for Clang's Shadow Call Stack (SCS)
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu,
 Ard Biesheuvel
Cc: Mark Rutland, Kees Cook, Jann Horn, Masahiro Yamada,
 kernel-hardening@lists.openwall.com, Nick Desaulniers,
 linux-kernel@vger.kernel.org, Miguel Ojeda,
 clang-built-linux@googlegroups.com, Sami Tolvanen, Laura Abbott,
 Dave Martin, linux-arm-kernel@lists.infradead.org

This change adds generic support for Clang's Shadow Call Stack, which
uses a shadow stack to protect return addresses from being overwritten
by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that the security guarantees in the kernel differ from the ones
documented for user space. The kernel must store the addresses of
shadow stacks used by other tasks and interrupt handlers in memory,
which means an attacker capable of reading and writing arbitrary
memory may be able to locate them and hijack control flow by modifying
shadow stacks that are not currently in use.
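
To make the mechanism concrete before the diffs: with
-fsanitize=shadow-call-stack, the compiler instruments each non-leaf
function so the return address is also saved on, and restored from, a
separate shadow stack. On arm64 the shadow stack pointer lives in the
reserved register x18. A rough C sketch of the emitted logic
(illustrative only; `scs_sp` is a stand-in name for the register, not
an identifier from this patch):

static unsigned long *scs_sp;	/* stand-in for the reserved x18 register */

void instrumented_function(void)
{
	/* prologue: also push the return address onto the shadow stack */
	*scs_sp++ = (unsigned long)__builtin_return_address(0);

	/* ... body runs; locals and spills stay on the regular stack ... */

	/* epilogue: pop the shadow copy and return through it, so an
	 * overwritten return address on the regular stack is ignored */
	(void)*--scs_sp;
}

The regular stack layout is unchanged, which keeps the instrumentation
cheap: only the return address slot is duplicated.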
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 Makefile                       |   6 ++
 arch/Kconfig                   |  33 +++++++
 include/linux/compiler-clang.h |   6 ++
 include/linux/compiler_types.h |   4 +
 include/linux/scs.h            |  78 +++++++++++++++++
 init/init_task.c               |   8 ++
 kernel/Makefile                |   1 +
 kernel/fork.c                  |   9 ++
 kernel/sched/core.c            |   2 +
 kernel/sched/sched.h           |   1 +
 kernel/scs.c                   | 155 +++++++++++++++++++++++++++++++++
 11 files changed, 303 insertions(+)
 create mode 100644 include/linux/scs.h
 create mode 100644 kernel/scs.c

diff --git a/Makefile b/Makefile
index 5475cdb6d57d..2b5c59fb18f2 100644
--- a/Makefile
+++ b/Makefile
@@ -846,6 +846,12 @@ ifdef CONFIG_LIVEPATCH
 KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
 endif
 
+ifdef CONFIG_SHADOW_CALL_STACK
+CC_FLAGS_SCS := -fsanitize=shadow-call-stack
+KBUILD_CFLAGS += $(CC_FLAGS_SCS)
+export CC_FLAGS_SCS
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 5f8a5d84dbbe..5e34cbcd8d6a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -521,6 +521,39 @@ config STACKPROTECTOR_STRONG
 	  about 20% of all kernel functions, which increases the kernel code
 	  size by about 2%.
 
+config ARCH_SUPPORTS_SHADOW_CALL_STACK
+	bool
+	help
+	  An architecture should select this if it supports Clang's Shadow
+	  Call Stack, has asm/scs.h, and implements runtime support for shadow
+	  stack switching.
+
+config SHADOW_CALL_STACK_VMAP
+	bool
+	depends on SHADOW_CALL_STACK
+	help
+	  Use virtually mapped shadow call stacks. Selecting this option
+	  provides better stack exhaustion protection, but increases per-thread
+	  memory consumption as a full page is allocated for each shadow stack.
+
+config SHADOW_CALL_STACK
+	bool "Clang Shadow Call Stack"
+	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
+	help
+	  This option enables Clang's Shadow Call Stack, which uses a
+	  shadow stack to protect function return addresses from being
+	  overwritten by an attacker. More information can be found in
+	  Clang's documentation:
+
+	    https://clang.llvm.org/docs/ShadowCallStack.html
+
+	  Note that security guarantees in the kernel differ from the ones
+	  documented for user space. The kernel must store addresses of shadow
+	  stacks used by other tasks and interrupt handlers in memory, which
+	  means an attacker capable of reading and writing arbitrary memory
+	  may be able to locate them and hijack control flow by modifying
+	  shadow stacks that are not currently in use.
+
 config HAVE_ARCH_WITHIN_STACK_FRAMES
 	bool
 	help

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 333a6695a918..afe5e24088b2 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -42,3 +42,9 @@
  * compilers, like ICC.
  */
 #define barrier() __asm__ __volatile__("" : : : "memory")
+
+#if __has_feature(shadow_call_stack)
+# define __noscs	__attribute__((no_sanitize("shadow-call-stack")))
+#else
+# define __noscs
+#endif

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 72393a8c1a6c..be5d5be4b1ae 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -202,6 +202,10 @@ struct ftrace_likely_data {
 # define randomized_struct_fields_end
 #endif
 
+#ifndef __noscs
+# define __noscs
+#endif
+
 #ifndef asm_volatile_goto
 #define asm_volatile_goto(x...) asm goto(x)
 #endif
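
A note on the new attribute: functions that run before the shadow
stack is set up, or that switch stacks themselves, must not be
instrumented, and that is what __noscs is for. A hypothetical use (the
function name below is made up for illustration):

/* Hypothetical example: opt out code that runs before a shadow stack
 * exists, e.g. early boot, so the compiler emits no shadow stack
 * pushes or pops for it. */
static void __noscs early_boot_helper(void)
{
	/* ... */
}

On non-Clang builds, compiler_types.h defines __noscs away to nothing,
so annotated code still compiles everywhere.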
diff --git a/include/linux/scs.h b/include/linux/scs.h
new file mode 100644
index 000000000000..c8b0ccfdd803
--- /dev/null
+++ b/include/linux/scs.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2018 Google LLC
+ */
+
+#ifndef _LINUX_SCS_H
+#define _LINUX_SCS_H
+
+#include <linux/gfp.h>
+#include <linux/sched.h>
+#include <asm/scs.h>
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+#define SCS_SIZE	1024
+#define SCS_END_MAGIC	0xaf0194819b1635f6UL
+
+#define GFP_SCS		(GFP_KERNEL | __GFP_ZERO)
+
+static inline void *task_scs(struct task_struct *tsk)
+{
+	return task_thread_info(tsk)->shadow_call_stack;
+}
+
+static inline void task_set_scs(struct task_struct *tsk, void *s)
+{
+	task_thread_info(tsk)->shadow_call_stack = s;
+}
+
+extern void scs_init(void);
+extern void scs_task_init(struct task_struct *tsk);
+extern void scs_task_reset(struct task_struct *tsk);
+extern int scs_prepare(struct task_struct *tsk, int node);
+extern bool scs_corrupted(struct task_struct *tsk);
+extern void scs_release(struct task_struct *tsk);
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+static inline void *task_scs(struct task_struct *tsk)
+{
+	return NULL;
+}
+
+static inline void task_set_scs(struct task_struct *tsk, void *s)
+{
+}
+
+static inline void scs_init(void)
+{
+}
+
+static inline void scs_task_init(struct task_struct *tsk)
+{
+}
+
+static inline void scs_task_reset(struct task_struct *tsk)
+{
+}
+
+static inline int scs_prepare(struct task_struct *tsk, int node)
+{
+	return 0;
+}
+
+static inline bool scs_corrupted(struct task_struct *tsk)
+{
+	return false;
+}
+
+static inline void scs_release(struct task_struct *tsk)
+{
+}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* _LINUX_SCS_H */

diff --git a/init/init_task.c b/init/init_task.c
index 9e5cbe5eab7b..cbd40460e903 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -11,6 +11,7 @@
 #include <linux/mm.h>
 #include <linux/audit.h>
 #include <linux/numa.h>
+#include <linux/scs.h>
 
 #include <asm/pgtable.h>
 #include <asm/uaccess.h>
@@ -184,6 +185,13 @@ struct task_struct init_task
 };
 EXPORT_SYMBOL(init_task);
 
+#ifdef CONFIG_SHADOW_CALL_STACK
+unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data
+	__aligned(SCS_SIZE) = {
+	[(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC
+};
+#endif
+
 /*
  * Initial thread structure. Alignment of this is handled by a special
  * linker map entry.
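
The init task gets a statically allocated shadow stack above, with its
last long pre-set to SCS_END_MAGIC. That magic slot is how overflows
are caught: every stack is SCS_SIZE-aligned and the final long is
reserved, so overrunning the 1016 usable bytes clobbers the magic. A
minimal sketch of the check (the same logic scs_corrupted() implements
later in this patch; the helper name is illustrative):

static bool scs_magic_intact(char *base)
{
	/* the last long of the SCS_SIZE-aligned region holds the magic */
	unsigned long *magic = (unsigned long *)(base + SCS_SIZE) - 1;

	return *magic == SCS_END_MAGIC;
}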
diff --git a/kernel/Makefile b/kernel/Makefile
index daad787fb795..313dbd44d576 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -102,6 +102,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/
 obj-$(CONFIG_IRQ_WORK) += irq_work.o
 obj-$(CONFIG_CPU_PM) += cpu_pm.o
 obj-$(CONFIG_BPF) += bpf/
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
 
 obj-$(CONFIG_PERF_EVENTS) += events/
 
diff --git a/kernel/fork.c b/kernel/fork.c
index bcdf53125210..ae7ebe9f0586 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include <linux/livepatch.h>
 #include <linux/thread_info.h>
 #include <linux/stackleak.h>
+#include <linux/scs.h>
 
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
@@ -451,6 +452,8 @@ void put_task_stack(struct task_struct *tsk)
 
 void free_task(struct task_struct *tsk)
 {
+	scs_release(tsk);
+
 #ifndef CONFIG_THREAD_INFO_IN_TASK
 	/*
 	 * The task is finally done with both the stack and thread_info,
@@ -834,6 +837,8 @@ void __init fork_init(void)
 			  NULL, free_vm_stack_cache);
 #endif
 
+	scs_init();
+
 	lockdep_init_task(&init_task);
 	uprobes_init();
 }
@@ -907,6 +912,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	clear_user_return_notifier(tsk);
 	clear_tsk_need_resched(tsk);
 	set_task_stack_end_magic(tsk);
+	scs_task_init(tsk);
 
 #ifdef CONFIG_STACKPROTECTOR
 	tsk->stack_canary = get_random_canary();
@@ -2022,6 +2028,9 @@ static __latent_entropy struct task_struct *copy_process(
 				 args->tls);
 	if (retval)
 		goto bad_fork_cleanup_io;
+	retval = scs_prepare(p, node);
+	if (retval)
+		goto bad_fork_cleanup_thread;
 
 	stackleak_task_init(p);
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dd05a378631a..e7faeb383008 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6013,6 +6013,8 @@ void init_idle(struct task_struct *idle, int cpu)
 	raw_spin_lock_irqsave(&idle->pi_lock, flags);
 	raw_spin_lock(&rq->lock);
 
+	scs_task_reset(idle);
+
 	__sched_fork(0, idle);
 	idle->state = TASK_RUNNING;
 	idle->se.exec_start = sched_clock();
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0db2c1b3361e..c153003a011c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -58,6 +58,7 @@
 #include <linux/profile.h>
 #include <linux/psi.h>
 #include <linux/rcupdate_wait.h>
+#include <linux/scs.h>
 #include <linux/security.h>
 #include <linux/stop_machine.h>
 #include <linux/suspend.h>
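
Before the new file itself, a condensed sketch of how the hooks added
above fit together over a task's lifetime (sketch only, error paths
and call sites abbreviated; the wrapper function is illustrative):

/* The calls below are made from dup_task_struct(), copy_process(),
 * and free_task() respectively. */
static int scs_lifecycle_sketch(struct task_struct *p, int node)
{
	scs_task_init(p);		/* clear the inherited pointer */

	if (scs_prepare(p, node))	/* allocate stack, set end magic */
		return -ENOMEM;

	/* ... task runs; the compiler pushes/pops return addresses ... */

	scs_release(p);			/* WARN if magic clobbered, free */
	return 0;
}

scs_task_reset() is the odd one out: init_idle() can be called again
on a CPU's existing idle task, so its shadow stack pointer is rewound
to the base rather than reallocated.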
diff --git a/kernel/scs.c b/kernel/scs.c
new file mode 100644
index 000000000000..383d29e8c199
--- /dev/null
+++ b/kernel/scs.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include <linux/cpuhotplug.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/scs.h>
+#include <linux/vmalloc.h>
+#include <asm/scs.h>
+
+static inline void *__scs_base(struct task_struct *tsk)
+{
+	return (void *)((uintptr_t)task_scs(tsk) & ~(SCS_SIZE - 1));
+}
+
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+
+/* Keep a cache of shadow stacks */
+#define SCS_CACHE_SIZE 2
+static DEFINE_PER_CPU(void *, scs_cache[SCS_CACHE_SIZE]);
+
+static void *scs_alloc(int node)
+{
+	int i;
+
+	for (i = 0; i < SCS_CACHE_SIZE; i++) {
+		void *s;
+
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			memset(s, 0, SCS_SIZE);
+			return s;
+		}
+	}
+
+	BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE);
+
+	return __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				    VMALLOC_START, VMALLOC_END,
+				    GFP_SCS, PAGE_KERNEL, 0,
+				    node, __builtin_return_address(0));
+}
+
+static void scs_free(void *s)
+{
+	int i;
+
+	for (i = 0; i < SCS_CACHE_SIZE; i++) {
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) != 0)
+			continue;
+
+		return;
+	}
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < SCS_CACHE_SIZE; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
+}
+
+void __init scs_init(void)
+{
+	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+			  scs_cleanup);
+}
+
+#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */
+
+static struct kmem_cache *scs_cache;
+
+static inline void *scs_alloc(int node)
+{
+	return kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+}
+
+static inline void scs_free(void *s)
+{
+	kmem_cache_free(scs_cache, s);
+}
+
+void __init scs_init(void)
+{
+	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
+				      0, NULL);
+	WARN_ON(!scs_cache);
+}
+
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+static inline unsigned long *scs_magic(struct task_struct *tsk)
+{
+	return (unsigned long *)(__scs_base(tsk) + SCS_SIZE - sizeof(long));
+}
+
+static inline void scs_set_magic(struct task_struct *tsk)
+{
+	*scs_magic(tsk) = SCS_END_MAGIC;
+}
+
+void scs_task_init(struct task_struct *tsk)
+{
+	task_set_scs(tsk, NULL);
+}
+
+void scs_task_reset(struct task_struct *tsk)
+{
+	task_set_scs(tsk, __scs_base(tsk));
+}
+
+int scs_prepare(struct task_struct *tsk, int node)
+{
+	void *s;
+
+	s = scs_alloc(node);
+	if (!s)
+		return -ENOMEM;
+
+	task_set_scs(tsk, s);
+	scs_set_magic(tsk);
+
+	return 0;
+}
+
+bool scs_corrupted(struct task_struct *tsk)
+{
+	return *scs_magic(tsk) != SCS_END_MAGIC;
+}
+
+void scs_release(struct task_struct *tsk)
+{
+	void *s;
+
+	s = __scs_base(tsk);
+	if (!s)
+		return;
+
+	WARN_ON(scs_corrupted(tsk));
+
+	scs_task_init(tsk);
+	scs_free(s);
+}
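
A final implementation note: only the task's current shadow stack
pointer is saved in thread_info, so the allocation base must be
recoverable from it. That is why every stack is SCS_SIZE-aligned and
why __scs_base() can simply mask off the low bits. The same trick in a
self-contained form (standalone sketch; the names are illustrative):

#include <stdint.h>

#define SCS_SIZE 1024UL	/* must stay a power of two for the mask */

/* Recover the base of an SCS_SIZE-aligned region from any pointer
 * into it, exactly as __scs_base() does with task_scs(tsk). */
static inline void *scs_base_of(void *ptr)
{
	return (void *)((uintptr_t)ptr & ~(SCS_SIZE - 1));
}

Masking also conveniently maps a NULL pointer to NULL, which is what
lets scs_release() use __scs_base() as its "no stack allocated" check.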