From patchwork Mon Jun 25 22:39:08 2018
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 10488305
Date: Mon, 25 Jun 2018 15:39:08 -0700
In-Reply-To: <20180625224014.134829-1-thgarnie@google.com>
Message-Id: <20180625224014.134829-21-thgarnie@google.com>
References: <20180625224014.134829-1-thgarnie@google.com>
Subject: [PATCH v5 20/27] x86: Support global stack cookie
From: Thomas Garnier
To: kernel-hardening@lists.openwall.com
Cc: Thomas Garnier, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
    x86@kernel.org, Andy Lutomirski, Borislav Petkov, Andrew Morton,
    Greg Kroah-Hartman, Masahiro Yamada, Rik van Riel,
    Philippe Ombredanne, Peter Zijlstra, Dave Hansen, Juergen Gross,
    Boris Ostrovsky, Konrad Rzeszutek Wilk, Tom Lendacky,
    linux-kernel@vger.kernel.org

Add an off-by-default configuration option to use a global stack
cookie instead of the default TLS-based one. This option is only
meant to be used with PIE binaries.

For the kernel stack cookie, the compiler relies on mcmodel=kernel
to switch from the fs segment register to the gs segment register.
A PIE binary cannot use mcmodel=kernel because it can be relocated
anywhere, so the compiler defaults to the fs segment register. This
is fixed in the latest version of gcc: when the option to select the
segment register is available, it is added automatically. If the
automatic stack-protector configuration was selected and the compiler
lacks that option, a warning is printed and the global stack cookie
is used instead. If a specific stack-protector mode was selected
(regular or strong) and the compiler does not support selecting the
segment register, an error is emitted.

Signed-off-by: Thomas Garnier
---
 arch/x86/Kconfig                      | 12 ++++++++++++
 arch/x86/Makefile                     |  9 +++++++++
 arch/x86/entry/entry_32.S             |  3 ++-
 arch/x86/entry/entry_64.S             |  3 ++-
 arch/x86/include/asm/processor.h      |  3 ++-
 arch/x86/include/asm/stackprotector.h | 19 ++++++++++++++-----
 arch/x86/kernel/asm-offsets.c         |  3 ++-
 arch/x86/kernel/asm-offsets_32.c      |  3 ++-
 arch/x86/kernel/asm-offsets_64.c      |  3 ++-
 arch/x86/kernel/cpu/common.c          |  3 ++-
 arch/x86/kernel/head_32.S             |  3 ++-
 arch/x86/kernel/process.c             |  5 +++++
 12 files changed, 56 insertions(+), 13 deletions(-)
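Note for reviewers (an illustration only, not part of the diff below):
the difference between the two guard modes can be observed in user
space with a gcc that accepts -mstack-protector-guard for x86, which
is exactly what the Makefile hunk below probes with cc-option. The
file name, the demo guard value, and the assembly quoted in the
comments are assumptions sketching typical code generation:

    /*
     * guard_demo.c - illustrative sketch only, not part of this patch.
     *
     * Build both variants and diff the generated assembly:
     *   gcc -O2 -fstack-protector-all -mstack-protector-guard=tls    -S -o tls.s    guard_demo.c
     *   gcc -O2 -fstack-protector-all -mstack-protector-guard=global -S -o global.s guard_demo.c
     */
    #include <string.h>

    /*
     * In global mode every instrumented function compares its stack
     * cookie against this symbol; the process.c hunk below defines and
     * exports it for the kernel. In TLS mode the symbol is unused and
     * the cookie lives at %fs:40 (x86-64) or %gs:20 (i386).
     */
    unsigned long __stack_chk_guard = 0xff0d855fe100UL; /* arbitrary demo value */

    void copy_name(const char *src)
    {
            char buf[16];

            /*
             * Typical x86-64 prologue (quoted as an assumption; check
             * the generated .s files):
             *   TLS mode:    movq %fs:40, %rax
             *   global mode: movq __stack_chk_guard(%rip), %rax
             * The epilogue repeats the load and calls __stack_chk_fail()
             * on a mismatch.
             */
            strcpy(buf, src);
    }

    int main(int argc, char **argv)
    {
            if (argc > 1)
                    copy_name(argv[1]);
            return 0;
    }

This single shared guard is also why the entry_32.S/entry_64.S hunks
below stop copying the per-task canary into the per-cpu area when the
global mode is enabled: there is only one cookie for the whole kernel.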
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index c4d64b19acff..f49725df7109 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2212,6 +2212,18 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
 
 	  If unsure, leave at the default value.
 
+config X86_GLOBAL_STACKPROTECTOR
+	bool "Stack cookie using a global variable"
+	depends on CC_STACKPROTECTOR_AUTO
+	default n
+	---help---
+	  This option turns on the "stack-protector" GCC feature using a global
+	  variable instead of a segment register. It is useful when the
+	  compiler does not support custom segment registers when building a
+	  position independent (PIE) binary.
+
+	  If unsure, say N
+
 config HOTPLUG_CPU
 	bool "Support for hot-pluggable CPUs"
 	depends on SMP
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index a08e82856563..c2c221fd20d7 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -141,6 +141,15 @@ else
         KBUILD_CFLAGS += $(call cc-option,-funit-at-a-time)
 endif
 
+ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+        ifeq ($(call cc-option, -mstack-protector-guard=global),)
+                $(error Cannot use CONFIG_X86_GLOBAL_STACKPROTECTOR: \
+                        -mstack-protector-guard=global not supported \
+                        by compiler)
+        endif
+        KBUILD_CFLAGS += -mstack-protector-guard=global
+endif
+
 ifdef CONFIG_X86_X32
 	x32_ld_ok := $(call try-run,\
 	            /bin/echo -e '1: .quad 1b' | \
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 2582881d19ce..4298307c4275 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -239,7 +239,8 @@ ENTRY(__switch_to_asm)
 	movl	%esp, TASK_threadsp(%eax)
 	movl	TASK_threadsp(%edx), %esp
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	movl	TASK_stack_canary(%edx), %ebx
 	movl	%ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset
 #endif
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 2afd2e2a86db..a603a0505706 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -357,7 +357,8 @@ ENTRY(__switch_to_asm)
 	movq	%rsp, TASK_threadsp(%rdi)
 	movq	TASK_threadsp(%rsi), %rsp
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	movq	TASK_stack_canary(%rsi), %rbx
 	movq	%rbx, PER_CPU_VAR(irq_stack_union + stack_canary_offset)
 #endif
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 6ee253d279d9..a1979e15621f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -414,7 +414,8 @@ extern asmlinkage void ignore_sysret(void);
 void save_fsgs_for_kvm(void);
 #endif
 #else	/* X86_64 */
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 /*
  * Make sure stack canary segment base is cached-aligned:
  *   "For Intel Atom processors, avoid non zero segment base address
diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
index 8ec97a62c245..4e120cf36782 100644
--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -52,6 +52,10 @@
 #define GDT_STACK_CANARY_INIT						\
 	[GDT_ENTRY_STACK_CANARY] = GDT_ENTRY_INIT(0x4090, 0, 0x18),
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+extern unsigned long __stack_chk_guard;
+#endif
+
 /*
  * Initialize the stackprotector canary value.
  *
@@ -63,7 +67,7 @@ static __always_inline void boot_init_stack_canary(void)
 	u64 canary;
 	u64 tsc;
 
-#ifdef CONFIG_X86_64
+#if defined(CONFIG_X86_64) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	BUILD_BUG_ON(offsetof(union irq_stack_union, stack_canary) != 40);
 #endif
 	/*
@@ -77,17 +81,22 @@ static __always_inline void boot_init_stack_canary(void)
 	canary += tsc + (tsc << 32UL);
 	canary &= CANARY_MASK;
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+	if (__stack_chk_guard == 0)
+		__stack_chk_guard = canary ?: 1;
+#else /* !CONFIG_X86_GLOBAL_STACKPROTECTOR */
 	current->stack_canary = canary;
 #ifdef CONFIG_X86_64
 	this_cpu_write(irq_stack_union.stack_canary, canary);
-#else
+#else /* CONFIG_X86_32 */
 	this_cpu_write(stack_canary.canary, canary);
 #endif
+#endif
 }
 
 static inline void setup_stack_canary_segment(int cpu)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	unsigned long canary = (unsigned long)&per_cpu(stack_canary, cpu);
 	struct desc_struct *gdt_table = get_cpu_gdt_rw(cpu);
 	struct desc_struct desc;
@@ -100,7 +109,7 @@ static inline void setup_stack_canary_segment(int cpu)
 
 static inline void load_stack_canary_segment(void)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	asm("mov %0, %%gs" : : "r" (__KERNEL_STACK_CANARY) : "memory");
 #endif
 }
@@ -116,7 +125,7 @@ static inline void setup_stack_canary_segment(int cpu)
 
 static inline void load_stack_canary_segment(void)
 {
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	asm volatile ("mov %0, %%gs" : : "r" (0));
 #endif
 }
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index dcb008c320fe..b303dc639d14 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -32,7 +32,8 @@ void common(void) {
 	BLANK();
 	OFFSET(TASK_threadsp, task_struct, thread.sp);
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	OFFSET(TASK_stack_canary, task_struct, stack_canary);
 #endif
 
diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
index a4a3be399f4b..efb92f261c63 100644
--- a/arch/x86/kernel/asm-offsets_32.c
+++ b/arch/x86/kernel/asm-offsets_32.c
@@ -50,7 +50,8 @@ void foo(void)
 	DEFINE(TSS_sysenter_sp0, offsetof(struct cpu_entry_area, tss.x86_tss.sp0) -
 	       offsetofend(struct cpu_entry_area, entry_stack_page.stack));
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	BLANK();
 	OFFSET(stack_canary_offset, stack_canary, canary);
 #endif
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index b2dcd161f514..7809c50d9afd 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -69,7 +69,8 @@ int main(void)
 	OFFSET(TSS_sp1, tss_struct, x86_tss.sp1);
 	BLANK();
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	DEFINE(stack_canary_offset, offsetof(union irq_stack_union, stack_canary));
 	BLANK();
 #endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index eb4cb3efd20e..3d6b99845231 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1602,7 +1602,8 @@ DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
 	(unsigned long)&init_thread_union + THREAD_SIZE;
 EXPORT_PER_CPU_SYMBOL(cpu_current_top_of_stack);
 
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 DEFINE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
 #endif
 
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index abe6df15a8fb..37c1d2f2d858 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -375,7 +375,8 @@ ENDPROC(startup_32_smp)
  */
 __INIT
 setup_once:
-#ifdef CONFIG_STACKPROTECTOR
+#if defined(CONFIG_STACKPROTECTOR) && \
+	!defined(CONFIG_X86_GLOBAL_STACKPROTECTOR)
 	/*
 	 * Configure the stack canary. The linker can't handle this by
 	 * relocation. Manually set base address in stack canary
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 30ca2d1a9231..ace68dbfedf5 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -87,6 +87,11 @@ EXPORT_PER_CPU_SYMBOL(cpu_tss_rw);
 DEFINE_PER_CPU(bool, __tss_limit_invalid);
 EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid);
 
+#ifdef CONFIG_X86_GLOBAL_STACKPROTECTOR
+unsigned long __stack_chk_guard __read_mostly;
+EXPORT_SYMBOL(__stack_chk_guard);
+#endif
+
 /*
  * this gets called so that we can store lazy state into memory and copy the
  * current task into the new thread.
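For completeness, a minimal user-space model of the global-mode path
added to boot_init_stack_canary() above. Assumptions: getrandom(2) and
__builtin_ia32_rdtsc() stand in for the kernel's get_random_bytes()
and rdtsc(), and DEMO_CANARY_MASK is a placeholder for the kernel's
CANARY_MASK, whose value differs:

    /*
     * canary_init_demo.c - user-space model of the global-mode guard
     * initialization; not kernel code. x86-only (uses rdtsc).
     */
    #include <stdio.h>
    #include <sys/random.h>

    #define DEMO_CANARY_MASK 0xffffffffffffffffUL /* stand-in for CANARY_MASK */

    unsigned long __stack_chk_guard; /* one guard shared by all CPUs */

    static void boot_init_stack_canary_model(void)
    {
            unsigned long canary = 0;
            unsigned long tsc = __builtin_ia32_rdtsc();

            /* Mix entropy with the timestamp counter, as the kernel does. */
            if (getrandom(&canary, sizeof(canary), 0) != sizeof(canary))
                    canary = 0; /* fall back to tsc-only mixing */
            canary += tsc + (tsc << 32UL);
            canary &= DEMO_CANARY_MASK;

            /*
             * Unlike the per-task TLS cookie, the global guard is written
             * only once: later callers keep the boot CPU's value, and a
             * zero result is bumped to 1 so the guard is never left zero.
             */
            if (__stack_chk_guard == 0)
                    __stack_chk_guard = canary ?: 1;
    }

    int main(void)
    {
            boot_init_stack_canary_model();
            unsigned long first = __stack_chk_guard;

            boot_init_stack_canary_model(); /* models a secondary CPU: no-op */
            printf("guard %#lx (unchanged: %s)\n", __stack_chk_guard,
                   __stack_chk_guard == first ? "yes" : "no");
            return 0;
    }

The write-once check matters because boot_init_stack_canary() runs on
every CPU brought online; with a single global variable, re-randomizing
it would immediately break the cookie checks of functions already on
the stack of other CPUs.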