From patchwork Wed Mar 15 17:43:10 2017
X-Patchwork-Submitter: Thomas Garnier
X-Patchwork-Id: 9626291
From: Thomas Garnier
Date: Wed, 15 Mar 2017 10:43:10 -0700
In-Reply-To: <8fa1a789-231f-dc2c-4a43-6406194259f9@zytor.com>
References: <20170311000501.46607-1-thgarnie@google.com>
 <20170311000501.46607-2-thgarnie@google.com>
 <20170311094200.GA27700@gmail.com>
 <733ed189-6c01-2975-a81a-6fbfe4b7b593@zytor.com>
 <2d9aad2a-a677-40d2-c179-379fb6e9f194@zytor.com>
 <7389c6e7-87dc-ea0d-5b2a-7925b8c8d33e@zytor.com>
 <8fa1a789-231f-dc2c-4a43-6406194259f9@zytor.com>
To: "H. Peter Anvin"
Cc: Andy Lutomirski, Ingo Molnar, Martin Schwidefsky, Heiko Carstens,
 David Howells, Arnd Bergmann, Al Viro, Dave Hansen, René Nyffenegger,
 Andrew Morton, Kees Cook, "Paul E. McKenney", Andy Lutomirski,
 Ard Biesheuvel, Nicolas Pitre, Petr Mladek, Sebastian Andrzej Siewior,
 Sergey Senozhatsky, Helge Deller, Rik van Riel, John Stultz,
 Thomas Gleixner, Oleg Nesterov, Stephen Smalley, Pavel Tikhomirov,
 Frederic Weisbecker, Stanislav Kinsburskiy, Ingo Molnar, Paolo Bonzini,
 Dmitry Safonov, Borislav Petkov, Josh Poimboeuf, Brian Gerst,
 Jan Beulich, Christian Borntraeger, Fenghua Yu, He Chen, Russell King,
 Vladimir Murzin, Will Deacon, Catalin Marinas, Mark Rutland,
 James Morse, "David A. Long", Pratyush Anand, Laura Abbott,
 Andre Przywara, Chris Metcalf, linux-s390, LKML, Linux API,
 "the arch/x86 maintainers", "linux-arm-kernel@lists.infradead.org",
 Kernel Hardening
Subject: [kernel-hardening] Re: [PATCH v3 2/4] x86/syscalls: Specific usage of verify_pre_usermode_state

Thanks for the feedback. I will look into inlining by default (comparing
code size on the different architectures); in the meantime, here is the
updated patch for x86 (a rough sketch of the generic check it relies on
follows the diff):

===========
Implement specific usage of verify_pre_usermode_state for user-mode
returns for x86.
---
Based on next-20170308
---
 arch/x86/Kconfig                        |  1 +
 arch/x86/entry/common.c                 |  3 +++
 arch/x86/entry/entry_64.S               |  8 ++++++++
 arch/x86/include/asm/pgtable_64_types.h | 11 +++++++++++
 arch/x86/include/asm/processor.h        | 11 -----------
 5 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 005df7c825f5..6d48e18e6f09 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -63,6 +63,7 @@ config X86
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
+	select ARCH_NO_SYSCALL_VERIFY_PRE_USERMODE_STATE
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 370c42c7f046..525edbb77f03 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -180,6 +181,8 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 cached_flags;
 
+	verify_pre_usermode_state();
+
 	if (IS_ENABLED(CONFIG_PROVE_LOCKING) && WARN_ON(!irqs_disabled()))
 		local_irq_disable();
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d2b2a2948ffe..c079b010205c 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -218,6 +218,14 @@ entry_SYSCALL_64_fastpath:
 	testl	$_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
 	jnz	1f
 
+	/*
+	 * If address limit is not based on user-mode, jump to slow path for
+	 * additional security checks.
+	 */
+	movq	$TASK_SIZE_MAX, %rcx
+	cmp	%rcx, TASK_addr_limit(%r11)
+	jnz	1f
+
 	LOCKDEP_SYS_EXIT
 	TRACE_IRQS_ON		/* user mode is traced as IRQs on */
 	movq	RIP(%rsp), %rcx
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 3a264200c62f..0fbbb79d058c 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -76,4 +76,15 @@ typedef struct { pteval_t pte; } pte_t;
 
 #define EARLY_DYNAMIC_PAGE_TABLES	64
 
+/*
+ * User space process size. 47bits minus one guard page.  The guard
+ * page is necessary on Intel CPUs: if a SYSCALL instruction is at
+ * the highest possible canonical userspace address, then that
+ * syscall will enter the kernel with a non-canonical return
+ * address, and SYSRET will explode dangerously.  We avoid this
+ * particular problem by preventing anything from being mapped
+ * at the maximum canonical address.
+ */
+#define TASK_SIZE_MAX	((_AC(1, UL) << 47) - PAGE_SIZE)
+
 #endif /* _ASM_X86_PGTABLE_64_DEFS_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index f385eca5407a..9bc99d37133e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -829,17 +829,6 @@ static inline void spin_lock_prefetch(const void *x)
 #define KSTK_ESP(task)		(task_pt_regs(task)->sp)
 
 #else
-/*
- * User space process size. 47bits minus one guard page. The guard
- * page is necessary on Intel CPUs: if a SYSCALL instruction is at
- * the highest possible canonical userspace address, then that
- * syscall will enter the kernel with a non-canonical return
- * address, and SYSRET will explode dangerously. We avoid this
- * particular problem by preventing anything from being mapped
- * at the maximum canonical address.
- */
-#define TASK_SIZE_MAX	((1UL << 47) - PAGE_SIZE)
-
 /* This decides where the kernel will search for a free chunk of vm
  * space during mmap's.
  */
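
For reference, since patch 1/4 of the series (which adds the generic
helper) is not quoted here, below is a minimal sketch of what
verify_pre_usermode_state() is assumed to boil down to. The WARN_ONCE()
reporting and the inline definition are illustrative only; whether the
helper should be inlined by default is exactly the open question above.
segment_eq(), get_fs() and set_fs() are the usual address-limit
accessors.

#include <linux/bug.h>
#include <linux/uaccess.h>

/*
 * Illustrative sketch only (not the actual patch 1/4 code): catch a
 * syscall that is about to return to user mode while the address limit
 * is still KERNEL_DS, typically because a set_fs(KERNEL_DS) was not
 * paired with a matching set_fs(USER_DS).
 */
static inline void verify_pre_usermode_state(void)
{
	if (WARN_ONCE(!segment_eq(get_fs(), USER_DS),
		      "incorrect address limit on user-mode return"))
		set_fs(USER_DS);	/* restore a safe limit before returning */
}

On the x86_64 fast path, the assembly added above only compares
TASK_addr_limit against TASK_SIZE_MAX and branches to label 1 on a
mismatch, so the slow path ends up in prepare_exit_to_usermode(), where
the C-level check performs the "additional security checks" mentioned
in the comment.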