From patchwork Thu Mar 23 17:25:12 2017
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 9641677
From: Thomas Garnier
To: Martin Schwidefsky, Heiko Carstens, David Howells, Arnd Bergmann,
    Dave Hansen, Al Viro, Thomas Gleixner, René Nyffenegger,
    Thomas Garnier, Andrew Morton, Paul E. McKenney, Ingo Molnar,
    Oleg Nesterov, Pavel Tikhomirov, Stephen Smalley, Ingo Molnar,
    H. Peter Anvin, Andy Lutomirski, Paolo Bonzini, Rik van Riel,
    Kees Cook, Josh Poimboeuf, Borislav Petkov, Brian Gerst,
    Kirill A. Shutemov, Christian Borntraeger, Russell King,
    Will Deacon, Catalin Marinas, Mark Rutland, James Morse
Cc: linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-api@vger.kernel.org, x86@kernel.org,
    linux-arm-kernel@lists.infradead.org,
    kernel-hardening@lists.openwall.com
Date: Thu, 23 Mar 2017 10:25:12 -0700
Message-Id: <20170323172515.27950-1-thgarnie@google.com>
X-Mailer: git-send-email 2.12.1.500.gab5fba24ee-goog
Subject: [kernel-hardening] [PATCH v5 1/4] syscalls: Restore address limit after a syscall

Ensure that a syscall does not return to user-mode with a kernel address
limit. If that happened, a process could corrupt kernel-mode memory and
elevate its privileges.

For example, it would mitigate this bug:

 - https://bugs.chromium.org/p/project-zero/issues/detail?id=990

The CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE option is also added
so each architecture can optimize how this check is performed.

Signed-off-by: Thomas Garnier
Tested-by: Kees Cook
---
Based on next-20170322
---
 arch/s390/Kconfig        |  1 +
 include/linux/syscalls.h | 26 +++++++++++++++++++++++++-
 init/Kconfig             |  7 +++++++
 kernel/sys.c             |  7 +++++++
 4 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a2dcef0aacc7..b73f5b87bc99 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -103,6 +103,7 @@ config S390
 	select ARCH_INLINE_WRITE_UNLOCK_BH
 	select ARCH_INLINE_WRITE_UNLOCK_IRQ
 	select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
+	select ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
 	select ARCH_SAVE_PAGE_KEYS if HIBERNATION
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_NUMA_BALANCING
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 980c3c9b06f8..f9ff80fa92ff 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -191,6 +191,27 @@ extern struct trace_event_functions exit_syscall_print_funcs;
 	SYSCALL_METADATA(sname, x, __VA_ARGS__)			\
 	__SYSCALL_DEFINEx(x, sname, __VA_ARGS__)
 
+
+/*
+ * Called before coming back to user-mode. Returning to user-mode with an
+ * address limit different than USER_DS can allow to overwrite kernel memory.
+ */
+static inline void verify_pre_usermode_state(void) {
+	BUG_ON(!segment_eq(get_fs(), USER_DS));
+}
+
+#ifndef CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
+#define __CHECK_USER_CALLER() \
+	bool user_caller = segment_eq(get_fs(), USER_DS)
+#define __VERIFY_PRE_USERMODE_STATE() \
+	if (user_caller) verify_pre_usermode_state()
+#else
+#define __CHECK_USER_CALLER()
+#define __VERIFY_PRE_USERMODE_STATE()
+asmlinkage void asm_verify_pre_usermode_state(void);
+#endif
+
+
 #define __PROTECT(...) asmlinkage_protect(__VA_ARGS__)
 #define __SYSCALL_DEFINEx(x, name, ...)				\
 	asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))	\
@@ -199,7 +220,10 @@ extern struct trace_event_functions exit_syscall_print_funcs;
 	asmlinkage long SyS##name(__MAP(x,__SC_LONG,__VA_ARGS__));	\
 	asmlinkage long SyS##name(__MAP(x,__SC_LONG,__VA_ARGS__))	\
 	{								\
-		long ret = SYSC##name(__MAP(x,__SC_CAST,__VA_ARGS__));	\
+		long ret;						\
+		__CHECK_USER_CALLER();					\
+		ret = SYSC##name(__MAP(x,__SC_CAST,__VA_ARGS__));	\
+		__VERIFY_PRE_USERMODE_STATE();				\
 		__MAP(x,__SC_TEST,__VA_ARGS__);				\
 		__PROTECT(x, ret,__MAP(x,__SC_ARGS,__VA_ARGS__));	\
 		return ret;						\
diff --git a/init/Kconfig b/init/Kconfig
index c859c993c26f..c4efc3a95e4a 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1929,6 +1929,13 @@ config PROFILING
 config TRACEPOINTS
 	bool
 
+#
+# Set by each architecture that wants to optimize how verify_pre_usermode_state
+# is called.
+#
+config ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
+	bool
+
 source "arch/Kconfig"
 
 endmenu		# General setup
diff --git a/kernel/sys.c b/kernel/sys.c
index 196c7134bee6..4ae278fcc290 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2459,3 +2459,10 @@ COMPAT_SYSCALL_DEFINE1(sysinfo, struct compat_sysinfo __user *, info)
 	return 0;
 }
 #endif /* CONFIG_COMPAT */
+
+#ifdef CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
+asmlinkage void asm_verify_pre_usermode_state(void)
+{
+	verify_pre_usermode_state();
+}
+#endif
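
To make the bug class concrete, the check traps the pattern sketched below,
where a kernel path raises the address limit with set_fs(KERNEL_DS) and an
error path forgets to restore it. This is purely illustrative and not code
from this series (buggy_ioctl() is a made-up name); with this patch applied,
the BUG_ON() in verify_pre_usermode_state() fires before the offending
syscall returns to user-mode, instead of leaving the task with a kernel-wide
address limit.

    #include <linux/fs.h>
    #include <linux/uaccess.h>

    /* Illustration only: a leaked KERNEL_DS limit, the bug class this patch traps. */
    static long buggy_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
    {
            mm_segment_t old_fs = get_fs();

            set_fs(KERNEL_DS);      /* temporarily widen access_ok() to kernel space */

            if (cmd == 0)           /* error path forgets to restore the limit ...   */
                    return -EINVAL;

            set_fs(old_fs);         /* ... only the success path restores USER_DS    */
            return 0;
    }

    /*
     * After the buggy return, every later syscall from the same task passes
     * access_ok() on kernel pointers, so e.g. read(fd, <kernel address>, len)
     * writes file contents into kernel memory.  With this patch, the syscall
     * that leaked the limit hits BUG_ON() instead of returning to user-mode.
     */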
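For reference, this is roughly what the SyS_* stub generated by
__SYSCALL_DEFINEx() looks like after this patch when
CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE is not selected. The
expansion is hand-simplified for a hypothetical SYSCALL_DEFINE1(example,
unsigned int, arg): the __MAP()/__SC_* argument plumbing, metadata and the
sys_* alias are elided, so treat it as a sketch rather than literal
preprocessor output.

    asmlinkage long SyS_example(long arg)
    {
            long ret;
            /* __CHECK_USER_CALLER(): remember whether we entered with USER_DS */
            bool user_caller = segment_eq(get_fs(), USER_DS);

            ret = SYSC_example((unsigned int)arg);  /* the real syscall body */

            /* __VERIFY_PRE_USERMODE_STATE(): user callers must be back at USER_DS */
            if (user_caller)
                    verify_pre_usermode_state();    /* BUG_ON(get_fs() != USER_DS) */

            return ret;
    }

Capturing user_caller before the body runs keeps in-kernel invocations of
syscalls, which may legitimately run with KERNEL_DS, from tripping the check.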
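When an architecture selects ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE (as
s390 does above), the per-syscall inline check compiles away and the
architecture is instead expected to call the out-of-line
asm_verify_pre_usermode_state() helper once on its return-to-user path; the
later patches in this series do the per-architecture wiring. The fragment
below is only a hedged sketch of that contract, and
arch_check_pre_usermode_state() is an invented name, not an interface defined
by this series.

    #ifdef CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
    /*
     * Sketch: called once from the architecture's syscall-return path,
     * e.g. just before restoring user registers, instead of inlining the
     * check into every SyS_* stub.
     */
    static inline void arch_check_pre_usermode_state(void)
    {
            asm_verify_pre_usermode_state();        /* defined in kernel/sys.c above */
    }
    #endif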