From patchwork Wed Oct 11 20:30:19 2017
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 10000605
From: Thomas Garnier
To: Herbert Xu, David S. Miller, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Arnd Bergmann,
	Thomas Garnier, Kees Cook, Andrey Ryabinin, Matthias Kaehlcke,
	Tom Lendacky, Andy Lutomirski, Kirill A. Shutemov, Borislav Petkov,
	Rafael J. Wysocki, Len Brown, Pavel Machek, Juergen Gross,
	Chris Wright, Alok Kataria, Rusty Russell, Tejun Heo,
	Christoph Lameter, Boris Ostrovsky, Paul Gortmaker, Andrew Morton,
	Alexey Dobriyan, Paul E. McKenney, Nicolas Pitre, Borislav Petkov,
	Luis R. Rodriguez,
	Greg Kroah-Hartman, Christopher Li, Steven Rostedt, Jason Baron,
	Mika Westerberg, Dou Liyang, Rafael J. Wysocki, Lukas Wunner,
	Masahiro Yamada, Alexei Starovoitov, Daniel Borkmann,
	Markus Trippelsdorf, Paolo Bonzini, Radim Krčmář, Joerg Roedel,
	Rik van Riel, David Howells, Ard Biesheuvel, Waiman Long, Kyle Huey,
	Jonathan Corbet, Michal Hocko, Peter Foley, Paul Bolle, Jiri Kosina,
	H. J. Lu, Rob Landley, Baoquan He, Jan H. Schönherr, Daniel Micay
Cc: x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-pm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, linux-arch@vger.kernel.org,
	linux-sparse@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, kernel-hardening@lists.openwall.com
Date: Wed, 11 Oct 2017 13:30:19 -0700
Message-Id: <20171011203027.11248-20-thgarnie@google.com>
In-Reply-To: <20171011203027.11248-1-thgarnie@google.com>
References: <20171011203027.11248-1-thgarnie@google.com>
Subject: [kernel-hardening] [PATCH v1 19/27] x86: Support global stack cookie

Add an off-by-default configuration option to use a global stack cookie
instead of the default TLS-based cookie. This configuration option will
only be used with PIE binaries.

For the kernel stack cookie, the compiler uses mcmodel=kernel to switch
from the fs segment to the gs segment. A PIE binary does not use
mcmodel=kernel because it can be relocated anywhere, so the compiler
defaults to the fs segment register. This will eventually be fixed by a
compiler change that allows the segment register to be picked, as is
already done on PowerPC. In the meantime, this configuration option can
be used to support older compilers.

Signed-off-by: Thomas Garnier
---
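The snippet below (not part of the patch) is a rough, hand-written C
approximation of what the stack-protector instrumentation boils down to
when the guard is the global __stack_chk_guard introduced here, rather
than a %gs/%fs-relative per-task slot. The function and buffer names are
purely illustrative; in real builds the compiler emits the equivalent
load and compare itself.

	#include <string.h>

	extern unsigned long __stack_chk_guard;	/* defined in process.c by this patch */
	extern void __stack_chk_fail(void);	/* called on canary mismatch */

	void copy_name(char *dst, const char *src)
	{
		unsigned long canary = __stack_chk_guard;	/* prologue: stash the guard */
		char buf[32];

		strncpy(buf, src, sizeof(buf) - 1);
		buf[sizeof(buf) - 1] = '\0';
		memcpy(dst, buf, sizeof(buf));

		if (canary != __stack_chk_guard)	/* epilogue: check before returning */
			__stack_chk_fail();
	}

With CONFIG_X86_GLOBAL_STACKPROTECTOR every task and CPU checks against
this single shared value, which is why boot_init_stack_canary() below
only seeds __stack_chk_guard once instead of refreshing it per task.
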
 arch/x86/Kconfig                      | 11 +++++++++++
 arch/x86/Makefile                     |  9 +++++++++
 arch/x86/entry/entry_32.S             |  3 ++-
 arch/x86/entry/entry_64.S             |  3 ++-
 arch/x86/include/asm/processor.h      |  3 ++-
 arch/x86/include/asm/stackprotector.h | 19 ++++++++++++++-----
 arch/x86/kernel/asm-offsets.c         |  3 ++-
 arch/x86/kernel/asm-offsets_32.c      |  3 ++-
 arch/x86/kernel/asm-offsets_64.c      |  3 ++-
 arch/x86/kernel/cpu/common.c          |  3 ++-
 arch/x86/kernel/head_32.S             |  3 ++-
 arch/x86/kernel/process.c             |  5 +++++
 12 files changed, 55 insertions(+), 13 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 063f1e0d51aa..772ff3e0f623 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2133,6 +2133,17 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
 
 	  If unsure, leave at the default value.
 
+config X86_GLOBAL_STACKPROTECTOR
+	bool "Stack cookie using a global variable"
+	select CC_STACKPROTECTOR
+	---help---
+	  This option turns on the "stack-protector" GCC feature using a global
+	  variable instead of a segment register. It is useful when the
+	  compiler does not support custom segment registers when building a
+	  position independent (PIE) binary.
+
+	  If unsure, say N
+
 config HOTPLUG_CPU
 	bool "Support for hot-pluggable CPUs"
 	depends on SMP
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 6276572259c8..de228200ef2a 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -141,6 +141,15 @@ else
         KBUILD_CFLAGS += $(call cc-option,-funit-at-a-time)
 endif
 
+ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+        ifeq ($(call cc-option, -mstack-protector-guard=global),)
+                $(error Cannot use CONFIG_X86_GLOBAL_STACKPROTECTOR: \
+                        -mstack-protector-guard=global not supported \
+                        by compiler)
+        endif
+        KBUILD_CFLAGS += -mstack-protector-guard=global
+endif
+
 ifdef CONFIG_X86_X32
 	x32_ld_ok := $(call try-run,\
 	            /bin/echo -e '1: .quad 1b' | \
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 8a13d468635a..ab3e5056722f 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -237,7 +237,8 @@ ENTRY(__switch_to_asm)
 	movl	%esp, TASK_threadsp(%eax)
 	movl	TASK_threadsp(%edx), %esp
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	movl	TASK_stack_canary(%edx), %ebx
 	movl	%ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset
 #endif
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d3a52d2342af..01be62c1b436 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -390,7 +390,8 @@ ENTRY(__switch_to_asm)
 	movq	%rsp, TASK_threadsp(%rdi)
 	movq	TASK_threadsp(%rsi), %rsp
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	movq	TASK_stack_canary(%rsi), %rbx
 	movq	%rbx, PER_CPU_VAR(irq_stack_union + stack_canary_offset)
 #endif
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b09bd50b06c7..e3a7ef8d5fb8 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -394,7 +394,8 @@ DECLARE_PER_CPU(char *, irq_stack_ptr);
 DECLARE_PER_CPU(unsigned int, irq_count);
 extern asmlinkage void ignore_sysret(void);
 #else	/* X86_64 */
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 /*
  * Make sure stack canary segment base is cached-aligned:
  * "For Intel Atom processors, avoid non zero segment base address
diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
index 8abedf1d650e..66462d778dc5 100644
--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -51,6 +51,10 @@
 #define GDT_STACK_CANARY_INIT						\
 	[GDT_ENTRY_STACK_CANARY] = GDT_ENTRY_INIT(0x4090, 0, 0x18),
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+extern unsigned long __stack_chk_guard;
+#endif
+
 /*
  * Initialize the stackprotector canary value.
 *
@@ -62,7 +66,7 @@ static __always_inline void boot_init_stack_canary(void)
 	u64 canary;
 	u64 tsc;
 
-#ifdef CONFIG_X86_64
+#if defined(CONFIG_X86_64) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	BUILD_BUG_ON(offsetof(union irq_stack_union, stack_canary) != 40);
 #endif
 	/*
@@ -76,17 +80,22 @@ static __always_inline void boot_init_stack_canary(void)
 	canary += tsc + (tsc << 32UL);
 	canary &= CANARY_MASK;
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+	if (__stack_chk_guard == 0)
+		__stack_chk_guard = canary ?: 1;
+#else	/* !CONFIG_X86_GLOBAL_STACKPROTECTOR */
 	current->stack_canary = canary;
 #ifdef CONFIG_X86_64
 	this_cpu_write(irq_stack_union.stack_canary, canary);
-#else
+#else	/* CONFIG_X86_32 */
 	this_cpu_write(stack_canary.canary, canary);
 #endif
+#endif
 }
 
 static inline void setup_stack_canary_segment(int cpu)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	unsigned long canary = (unsigned long)&per_cpu(stack_canary, cpu);
 	struct desc_struct *gdt_table = get_cpu_gdt_rw(cpu);
 	struct desc_struct desc;
@@ -99,7 +108,7 @@ static inline void setup_stack_canary_segment(int cpu)
 
 static inline void load_stack_canary_segment(void)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	asm("mov %0, %%gs" : : "r" (__KERNEL_STACK_CANARY) : "memory");
 #endif
 }
@@ -115,7 +124,7 @@ static inline void setup_stack_canary_segment(int cpu)
 
 static inline void load_stack_canary_segment(void)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	asm volatile ("mov %0, %%gs" : : "r" (0));
 #endif
 }
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index de827d6ac8c2..b30a12cd021e 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -30,7 +30,8 @@ void common(void) {
 	BLANK();
 	OFFSET(TASK_threadsp, task_struct, thread.sp);
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	OFFSET(TASK_stack_canary, task_struct, stack_canary);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
index 710edab9e644..33584e7e486b 100644
--- a/arch/x86/kernel/asm-offsets_32.c
+++ b/arch/x86/kernel/asm-offsets_32.c
@@ -54,7 +54,8 @@ void foo(void)
 	/* Size of SYSENTER_stack */
 	DEFINE(SIZEOF_SYSENTER_stack, sizeof(((struct tss_struct *)0)->SYSENTER_stack));
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	BLANK();
 	OFFSET(stack_canary_offset, stack_canary, canary);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index cf42206926af..06feb31a09f5 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -64,7 +64,8 @@ int main(void)
 	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
 	BLANK();
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	DEFINE(stack_canary_offset, offsetof(union irq_stack_union, stack_canary));
 	BLANK();
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index fac71a3ee0b5..99c8af974874 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1431,7 +1431,8 @@ DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
 	(unsigned long)&init_thread_union + THREAD_SIZE;
 EXPORT_PER_CPU_SYMBOL(cpu_current_top_of_stack);
 
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 DEFINE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
 #endif
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 9ed3074d0d27..a55a67b33934 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -377,7 +377,8 @@ ENDPROC(startup_32_smp)
  */
 __INIT
 setup_once:
-#ifdef CONFIG_CC_STACKPROTECTOR
+#if defined(CONFIG_CC_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	/*
 	 * Configure the stack canary. The linker can't handle this by
 	 * relocation.  Manually set base address in stack canary
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index bd6b85fac666..66ea1a35413e 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -73,6 +73,11 @@ EXPORT_PER_CPU_SYMBOL(cpu_tss);
 DEFINE_PER_CPU(bool, __tss_limit_invalid);
 EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid);
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+unsigned long __stack_chk_guard __read_mostly;
+EXPORT_SYMBOL(__stack_chk_guard);
+#endif
+
 /*
  * this gets called so that we can store lazy state into memory and copy the
  * current task into the new thread.
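
As a closing note, the CONFIG_X86_GLOBAL_STACKPROTECTOR branch added to
boot_init_stack_canary() above can be restated in plain C as the sketch
below. Only the once-only write and the "never store 0" fallback mirror
the patch; the helper name and the entropy stand-in are hypothetical.

	extern unsigned long __stack_chk_guard;	/* exported from process.c above */

	/* Hypothetical stand-in for the kernel's random + TSC mix. */
	extern unsigned long pick_canary_value(void);

	void global_guard_init_once(void)
	{
		unsigned long canary = pick_canary_value();

		/*
		 * boot_init_stack_canary() can run more than once, but only
		 * the first write sticks: 0 is reserved to mean "not seeded
		 * yet", and the GNU ?: fallback keeps 0 from ever being
		 * stored as the guard value.
		 */
		if (__stack_chk_guard == 0)
			__stack_chk_guard = canary ?: 1;
	}

Because the guard is a single shared global rather than a per-task value,
__switch_to_asm no longer has to reload a per-cpu canary on context
switch, which is why the entry_32.S/entry_64.S hunks above compile that
reload out when the option is enabled.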