From patchwork Wed Mar 22 20:38:31 2017
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 9640017
From: Thomas Garnier
To: Martin Schwidefsky, Heiko Carstens, Dave Hansen, David Howells,
    Al Viro, Arnd Bergmann, Thomas Garnier, René Nyffenegger,
    Andrew Morton, Paul E. McKenney, Ingo Molnar, Thomas Gleixner,
    Oleg Nesterov, Pavel Tikhomirov, Stephen Smalley, Ingo Molnar,
    H. Peter Anvin, Andy Lutomirski, Paolo Bonzini, Rik van Riel,
    Kees Cook, Josh Poimboeuf, Borislav Petkov, Brian Gerst,
Shutemov" , Christian Borntraeger , Russell King , Vladimir Murzin , Will Deacon , Catalin Marinas , Mark Rutland , James Morse Cc: linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, x86@kernel.org, linux-arm-kernel@lists.infradead.org, kernel-hardening@lists.openwall.com Date: Wed, 22 Mar 2017 13:38:31 -0700 Message-Id: <20170322203834.67556-1-thgarnie@google.com> X-Mailer: git-send-email 2.12.1.500.gab5fba24ee-goog Subject: [kernel-hardening] [PATCH v4 1/4] syscalls: Restore address limit after a syscall X-Virus-Scanned: ClamAV using ClamSMTP This patch ensures a syscall does not return to user-mode with a kernel address limit. If that happened, a process can corrupt kernel-mode memory and elevate privileges. For example, it would mitigation this bug: - https://bugs.chromium.org/p/project-zero/issues/detail?id=990 If the CONFIG_BUG_ON_DATA_CORRUPTION option is enabled, an incorrect state will result in a BUG_ON. The CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE option is also added so each architecture can optimize this change. Signed-off-by: Thomas Garnier --- Based on next-20170322 --- arch/s390/Kconfig | 1 + include/linux/syscalls.h | 18 +++++++++++++++++- init/Kconfig | 7 +++++++ kernel/sys.c | 8 ++++++++ 4 files changed, 33 insertions(+), 1 deletion(-) diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index a2dcef0aacc7..b73f5b87bc99 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -103,6 +103,7 @@ config S390 select ARCH_INLINE_WRITE_UNLOCK_BH select ARCH_INLINE_WRITE_UNLOCK_IRQ select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE + select ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE select ARCH_SAVE_PAGE_KEYS if HIBERNATION select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_NUMA_BALANCING diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 980c3c9b06f8..e659076adf6c 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -191,6 +191,19 @@ extern struct trace_event_functions exit_syscall_print_funcs; SYSCALL_METADATA(sname, x, __VA_ARGS__) \ __SYSCALL_DEFINEx(x, sname, __VA_ARGS__) +asmlinkage void verify_pre_usermode_state(void); + +#ifndef CONFIG_ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE +#define __CHECK_USER_CALLER() \ + bool user_caller = segment_eq(get_fs(), USER_DS) +#define __VERIFY_PRE_USERMODE_STATE() \ + if (user_caller) verify_pre_usermode_state() +#else +#define __CHECK_USER_CALLER() +#define __VERIFY_PRE_USERMODE_STATE() +#endif + + #define __PROTECT(...) asmlinkage_protect(__VA_ARGS__) #define __SYSCALL_DEFINEx(x, name, ...) \ asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ @@ -199,7 +212,10 @@ extern struct trace_event_functions exit_syscall_print_funcs; asmlinkage long SyS##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \ asmlinkage long SyS##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \ { \ - long ret = SYSC##name(__MAP(x,__SC_CAST,__VA_ARGS__)); \ + long ret; \ + __CHECK_USER_CALLER(); \ + ret = SYSC##name(__MAP(x,__SC_CAST,__VA_ARGS__)); \ + __VERIFY_PRE_USERMODE_STATE(); \ __MAP(x,__SC_TEST,__VA_ARGS__); \ __PROTECT(x, ret,__MAP(x,__SC_ARGS,__VA_ARGS__)); \ return ret; \ diff --git a/init/Kconfig b/init/Kconfig index c859c993c26f..c4efc3a95e4a 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1929,6 +1929,13 @@ config PROFILING config TRACEPOINTS bool +# +# Set by each architecture that want to optimize how verify_pre_usermode_state +# is called. 
+#
+config ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
+	bool
+
 source "arch/Kconfig"
 
 endmenu		# General setup
diff --git a/kernel/sys.c b/kernel/sys.c
index 196c7134bee6..411163ac9dc3 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2459,3 +2459,11 @@ COMPAT_SYSCALL_DEFINE1(sysinfo, struct compat_sysinfo __user *, info)
 	return 0;
 }
 #endif /* CONFIG_COMPAT */
+
+/* Called before coming back to user-mode */
+asmlinkage void verify_pre_usermode_state(void)
+{
+	if (CHECK_DATA_CORRUPTION(!segment_eq(get_fs(), USER_DS),
+				  "incorrect get_fs() on user-mode return"))
+		set_fs(USER_DS);
+}
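
For context, and not part of the patch itself: below is a minimal,
hypothetical sketch of the bug class this check mitigates. The names
(example_ioctl, do_internal_read) are invented for illustration; the
pattern is a syscall-reachable path that raises the address limit with
set_fs(KERNEL_DS) and misses the restore on an error path, leaving the
rest of the syscall willing to accept kernel addresses in
copy_*_user().

#include <linux/fs.h>
#include <linux/uaccess.h>

/* Made-up helper, standing in for an internal read done via a __user API. */
static long do_internal_read(struct file *filp, unsigned long arg);

/* Hypothetical example only -- not from this patch or any real driver. */
static long example_ioctl(struct file *filp, unsigned int cmd,
			  unsigned long arg)
{
	mm_segment_t old_fs = get_fs();
	long ret;

	set_fs(KERNEL_DS);		/* address limit now covers kernel memory */
	ret = do_internal_read(filp, arg);
	if (ret)
		return ret;		/* bug: returns with KERNEL_DS still set */

	set_fs(old_fs);			/* only the success path restores it */
	return 0;
}

With this patch applied, the generic syscall wrapper notices the stale
KERNEL_DS when the syscall returns, resets the limit to USER_DS, and
reports it (a BUG with CONFIG_BUG_ON_DATA_CORRUPTION, otherwise a
warning via CHECK_DATA_CORRUPTION).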
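
To make the wrapper change concrete, here is a rough hand-expansion of
what the modified __SYSCALL_DEFINEx would generate for a one-argument
syscall such as SYSCALL_DEFINE1(close, unsigned int, fd), on an
architecture that does not select
ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE. This is an approximation for
readability (the __SC_TEST compile-time argument checks are omitted),
not literal preprocessor output:

asmlinkage long SyS_close(long fd)
{
	long ret;
	/* __CHECK_USER_CALLER(): remember whether the caller entered with
	 * USER_DS, so kernel-internal invocations that legitimately run
	 * with KERNEL_DS do not trip the check. */
	bool user_caller = segment_eq(get_fs(), USER_DS);

	ret = SYSC_close((unsigned int)fd);

	/* __VERIFY_PRE_USERMODE_STATE(): restore USER_DS and warn/BUG if
	 * the syscall body left a kernel address limit behind. */
	if (user_caller)
		verify_pre_usermode_state();

	/* __PROTECT(): keep the arguments live for the asmlinkage convention. */
	asmlinkage_protect(1, ret, fd);
	return ret;
}

Architectures that select ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
compile both helper macros away; per the Kconfig comment, they are
expected to perform the verify_pre_usermode_state() check more
efficiently in their own syscall return path instead.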