From patchwork Thu Aug 10 17:26:11 2017
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 9894379
From: Thomas Garnier
To: Herbert Xu, David S. Miller, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Arnd Bergmann,
	Thomas Garnier, Matthias Kaehlcke, Boris Ostrovsky, Juergen Gross,
	Paolo Bonzini, Radim Krčmář, Joerg Roedel, Tom Lendacky,
	Andy Lutomirski, Borislav Petkov, Brian Gerst, Kirill A. Shutemov,
	Rafael J. Wysocki, Len Brown, Pavel Machek, Tejun Heo,
	Christoph Lameter, Paul Gortmaker, Chris Metcalf, Andrew Morton,
	Paul E. McKenney, Nicolas Pitre, Christopher Li, Rafael J.
Wysocki" , Lukas Wunner , Mika Westerberg , Dou Liyang , Daniel Borkmann , Alexei Starovoitov , Masahiro Yamada , Markus Trippelsdorf , Steven Rostedt , Kees Cook , Rik van Riel , David Howells , Waiman Long , Kyle Huey , Peter Foley , Tim Chen , Catalin Marinas , Ard Biesheuvel , Michal Hocko , Matthew Wilcox , "H . J . Lu" , Paul Bolle , Rob Landley , Baoquan He , Daniel Micay Cc: x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, linux-pm@vger.kernel.org, linux-arch@vger.kernel.org, linux-sparse@vger.kernel.org, kernel-hardening@lists.openwall.com Date: Thu, 10 Aug 2017 10:26:11 -0700 Message-Id: <20170810172615.51965-20-thgarnie@google.com> X-Mailer: git-send-email 2.14.0.434.g98096fd7a8-goog In-Reply-To: <20170810172615.51965-1-thgarnie@google.com> References: <20170810172615.51965-1-thgarnie@google.com> Subject: [kernel-hardening] [RFC v2 19/23] x86: Support global stack cookie X-Virus-Scanned: ClamAV using ClamSMTP Add an off-by-default configuration option to use a global stack cookie instead of the default TLS. This configuration option will only be used with PIE binaries. For kernel stack cookie, the compiler uses the mcmodel=kernel to switch between the fs segment to gs segment. A PIE binary does not use mcmodel=kernel because it can be relocated anywhere, therefore the compiler will default to the fs segment register. This is going to be fixed with a compiler change allowing to pick the segment register as done on PowerPC. In the meantime, this configuration can be used to support older compilers. Signed-off-by: Thomas Garnier --- arch/x86/Kconfig | 4 ++++ arch/x86/Makefile | 9 +++++++++ arch/x86/entry/entry_32.S | 3 ++- arch/x86/entry/entry_64.S | 3 ++- arch/x86/include/asm/processor.h | 3 ++- arch/x86/include/asm/stackprotector.h | 19 ++++++++++++++----- arch/x86/kernel/asm-offsets.c | 3 ++- arch/x86/kernel/asm-offsets_32.c | 3 ++- arch/x86/kernel/asm-offsets_64.c | 3 ++- arch/x86/kernel/cpu/common.c | 3 ++- arch/x86/kernel/head_32.S | 3 ++- arch/x86/kernel/process.c | 5 +++++ 12 files changed, 48 insertions(+), 13 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index da37ba375e63..2632fa8e8945 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2128,6 +2128,10 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING If unsure, leave at the default value. 
 arch/x86/Kconfig                      |  4 ++++
 arch/x86/Makefile                     |  9 +++++++++
 arch/x86/entry/entry_32.S             |  3 ++-
 arch/x86/entry/entry_64.S             |  3 ++-
 arch/x86/include/asm/processor.h      |  3 ++-
 arch/x86/include/asm/stackprotector.h | 19 ++++++++++++++-----
 arch/x86/kernel/asm-offsets.c         |  3 ++-
 arch/x86/kernel/asm-offsets_32.c      |  3 ++-
 arch/x86/kernel/asm-offsets_64.c      |  3 ++-
 arch/x86/kernel/cpu/common.c          |  3 ++-
 arch/x86/kernel/head_32.S             |  3 ++-
 arch/x86/kernel/process.c             |  5 +++++
 12 files changed, 48 insertions(+), 13 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index da37ba375e63..2632fa8e8945 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2128,6 +2128,10 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
 
 	  If unsure, leave at the default value.
 
+config X86_GLOBAL_STACKPROTECTOR
+	bool
+	depends on CC_STACKPROTECTOR
+
 config HOTPLUG_CPU
 	bool "Support for hot-pluggable CPUs"
 	depends on SMP
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 1e902f926be3..66af2704f096 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -58,6 +58,15 @@ endif
 KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow
 KBUILD_CFLAGS += $(call cc-option,-mno-avx,)
 
+ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+        ifeq ($(call cc-option, -mstack-protector-guard=global),)
+                $(error Cannot use CONFIG_X86_GLOBAL_STACKPROTECTOR: \
+                        -mstack-protector-guard=global not supported \
+                        by compiler)
+        endif
+        KBUILD_CFLAGS += -mstack-protector-guard=global
+endif
+
 ifeq ($(CONFIG_X86_32),y)
         BITS := 32
         UTS_MACHINE := i386
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 48ef7bb32c42..91b4f2c4f837 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -237,7 +237,8 @@ ENTRY(__switch_to_asm)
 	movl	%esp, TASK_threadsp(%eax)
 	movl	TASK_threadsp(%edx), %esp
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	movl	TASK_stack_canary(%edx), %ebx
 	movl	%ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset
 #endif
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index c1f9b29d4c24..566380112a4f 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -395,7 +395,8 @@ ENTRY(__switch_to_asm)
 	movq	%rsp, TASK_threadsp(%rdi)
 	movq	TASK_threadsp(%rsi), %rsp
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	movq	TASK_stack_canary(%rsi), %rbx
 	movq	%rbx, PER_CPU_VAR(irq_stack_union + stack_canary_offset)
 #endif
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 14fc21e2df08..842b3b5d5e36 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -394,7 +394,8 @@ DECLARE_PER_CPU(char *, irq_stack_ptr);
 DECLARE_PER_CPU(unsigned int, irq_count);
 extern asmlinkage void ignore_sysret(void);
 #else	/* X86_64 */
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 /*
  * Make sure stack canary segment base is cached-aligned:
  *   "For Intel Atom processors, avoid non zero segment base address
diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
index 8abedf1d650e..66462d778dc5 100644
--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -51,6 +51,10 @@
 #define GDT_STACK_CANARY_INIT						\
 	[GDT_ENTRY_STACK_CANARY] = GDT_ENTRY_INIT(0x4090, 0, 0x18),
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+extern unsigned long __stack_chk_guard;
+#endif
+
 /*
  * Initialize the stackprotector canary value.
  *
@@ -62,7 +66,7 @@ static __always_inline void boot_init_stack_canary(void)
 	u64 canary;
 	u64 tsc;
 
-#ifdef CONFIG_X86_64
+#if defined(CONFIG_X86_64) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	BUILD_BUG_ON(offsetof(union irq_stack_union, stack_canary) != 40);
 #endif
 	/*
@@ -76,17 +80,22 @@ static __always_inline void boot_init_stack_canary(void)
 	canary += tsc + (tsc << 32UL);
 	canary &= CANARY_MASK;
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+	if (__stack_chk_guard == 0)
+		__stack_chk_guard = canary ?: 1;
+#else /* !CONFIG_X86_GLOBAL_STACKPROTECTOR */
 	current->stack_canary = canary;
 #ifdef CONFIG_X86_64
 	this_cpu_write(irq_stack_union.stack_canary, canary);
-#else
+#else /* CONFIG_X86_32 */
 	this_cpu_write(stack_canary.canary, canary);
 #endif
+#endif
 }
 
 static inline void setup_stack_canary_segment(int cpu)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	unsigned long canary = (unsigned long)&per_cpu(stack_canary, cpu);
 	struct desc_struct *gdt_table = get_cpu_gdt_rw(cpu);
 	struct desc_struct desc;
@@ -99,7 +108,7 @@ static inline void setup_stack_canary_segment(int cpu)
 
 static inline void load_stack_canary_segment(void)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	asm("mov %0, %%gs" : : "r" (__KERNEL_STACK_CANARY) : "memory");
 #endif
 }
@@ -115,7 +124,7 @@ static inline void setup_stack_canary_segment(int cpu)
 
 static inline void load_stack_canary_segment(void)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	asm volatile ("mov %0, %%gs" : : "r" (0));
 #endif
 }
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index de827d6ac8c2..b30a12cd021e 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -30,7 +30,8 @@ void common(void) {
 	BLANK();
 	OFFSET(TASK_threadsp, task_struct, thread.sp);
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	OFFSET(TASK_stack_canary, task_struct, stack_canary);
 #endif
 
diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
index 880aa093268d..9deb60866eab 100644
--- a/arch/x86/kernel/asm-offsets_32.c
+++ b/arch/x86/kernel/asm-offsets_32.c
@@ -57,7 +57,8 @@ void foo(void)
 	/* Size of SYSENTER_stack */
 	DEFINE(SIZEOF_SYSENTER_stack, sizeof(((struct tss_struct *)0)->SYSENTER_stack));
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	BLANK();
 	OFFSET(stack_canary_offset, stack_canary, canary);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 99332f550c48..297bcdfc520a 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -65,7 +65,8 @@ int main(void)
 	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
 	BLANK();
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	DEFINE(stack_canary_offset, offsetof(union irq_stack_union, stack_canary));
 	BLANK();
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 31300767ec0f..9e8608fc10a4 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1454,7 +1454,8 @@ DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
 	(unsigned long)&init_thread_union + THREAD_SIZE;
 EXPORT_PER_CPU_SYMBOL(cpu_current_top_of_stack);
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
&& \ + !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR) DEFINE_PER_CPU_ALIGNED(struct stack_canary, stack_canary); #endif diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S index 0332664eb158..6989bfb0a628 100644 --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -411,7 +411,8 @@ setup_once: addl $8,%edi loop 2b -#ifdef CONFIG_CC_STACKPROTECTOR +#if defined(CONFIG_CC_STACKPROTECTOR) && \ + !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR) /* * Configure the stack canary. The linker can't handle this by * relocation. Manually set base address in stack canary diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index bd6b85fac666..66ea1a35413e 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -73,6 +73,11 @@ EXPORT_PER_CPU_SYMBOL(cpu_tss); DEFINE_PER_CPU(bool, __tss_limit_invalid); EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid); +#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR +unsigned long __stack_chk_guard __read_mostly; +EXPORT_SYMBOL(__stack_chk_guard); +#endif + /* * this gets called so that we can store lazy state into memory and copy the * current task into the new thread.