From patchwork Mon Jan 22 18:04:29 2018
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 10178999
From: Andy Lutomirski
To: x86@kernel.org, LKML
Cc: Linus Torvalds, Greg Kroah-Hartman, Alan Cox, Jann Horn, Samuel Neves,
    Dan Williams, Kernel Hardening, Borislav Petkov, Andy Lutomirski
Date: Mon, 22 Jan 2018 10:04:29 -0800
Message-Id: <503224b776b9513885453756e44bab235221124e.1516644136.git.luto@kernel.org>
Subject: [kernel-hardening] [PATCH] x86/retpoline/entry: Disable the entire
 SYSCALL64 fast path with retpolines on

The existing retpoline code carefully and awkwardly retpolinifies the
SYSCALL64 slow path.  This stops the fast path from being particularly
fast, and it's IMO rather messy.

Instead, just bypass the fast path entirely if retpolines are on.  This
seems to be a speedup on a "minimal" retpoline kernel, mainly because
do_syscall_64() ends up calling the syscall handler without using a
slow retpoline thunk.

As an added benefit, we won't need to apply further Spectre mitigations
to the fast path.  The current fast path Spectre mitigations may have a
hole: if the syscall nr is out of bounds, it is plausible that the CPU
would mispredict the bounds check, load a bogus function pointer, and
speculatively execute it right through the retpoline.  If this is
indeed a problem, we need to fix it in the slow paths anyway, but with
this patch applied, we can at least leave the fast path alone.
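(For context, not part of the patch: with this change, every syscall takes
the C slow path in arch/x86/entry/common.c.  Below is a simplified sketch
of that dispatch, loosely modeled on the 4.15-era do_syscall_64(); it is
illustrative, not verbatim kernel source.)

/*
 * Simplified sketch (not verbatim kernel source) of the C slow path
 * that all syscalls take once the asm fast path is bypassed.
 */
__visible void do_syscall_64(struct pt_regs *regs)
{
	unsigned long nr = regs->orig_ax;

	if (likely(nr < NR_syscalls)) {
		/*
		 * A retpoline-aware compiler turns this indirect call into
		 * a retpoline thunk on its own; on a "minimal" retpoline
		 * kernel (no compiler support) it stays a plain indirect
		 * call, which is why this path can beat the old
		 * hand-retpolined asm fast path.
		 */
		regs->ax = sys_call_table[nr](regs->di, regs->si, regs->dx,
					      regs->r10, regs->r8, regs->r9);
	}

	syscall_return_slowpath(regs);
}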
Cleans-up: 2641f08bb7fc ("x86/retpoline/entry: Convert entry assembler indirect jumps")
Signed-off-by: Andy Lutomirski
---
 arch/x86/entry/entry_64.S | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4f8e1d35a97c..b915bad58754 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -245,6 +245,9 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
 	 * If we need to do entry work or if we guess we'll need to do
 	 * exit work, go straight to the slow path.
 	 */
+#ifdef CONFIG_RETPOLINE
+	ALTERNATIVE "", "jmp entry_SYSCALL64_slow_path", X86_FEATURE_RETPOLINE
+#endif
 	movq	PER_CPU_VAR(current_task), %r11
 	testl	$_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
 	jnz	entry_SYSCALL64_slow_path
@@ -270,13 +273,11 @@ entry_SYSCALL_64_fastpath:
 	 * This call instruction is handled specially in stub_ptregs_64.
 	 * It might end up jumping to the slow path.  If it jumps, RAX
 	 * and all argument registers are clobbered.
+	 *
+	 * NB: no retpoline needed -- we don't execute this code with
+	 * retpolines enabled.
 	 */
-#ifdef CONFIG_RETPOLINE
-	movq	sys_call_table(, %rax, 8), %rax
-	call	__x86_indirect_thunk_rax
-#else
 	call	*sys_call_table(, %rax, 8)
-#endif
 .Lentry_SYSCALL_64_after_fastpath_call:
 
 	movq	%rax, RAX(%rsp)
@@ -431,6 +432,9 @@ ENTRY(stub_ptregs_64)
 	 * which we achieve by trying again on the slow path.  If we are on
 	 * the slow path, the extra regs are already saved.
 	 *
+	 * This code is unreachable (even via mispredicted conditional branches)
+	 * if we're using retpolines.
+	 *
 	 * RAX stores a pointer to the C function implementing the syscall.
 	 * IRQs are on.
 	 */
@@ -448,12 +452,19 @@ ENTRY(stub_ptregs_64)
 	jmp	entry_SYSCALL64_slow_path
 
 1:
-	JMP_NOSPEC %rax			/* Called from C */
+	jmp	*%rax			/* Called from C */
 END(stub_ptregs_64)
 
 .macro ptregs_stub func
 ENTRY(ptregs_\func)
 	UNWIND_HINT_FUNC
+#ifdef CONFIG_RETPOLINE
+	/*
+	 * If retpolines are enabled, we don't use the syscall fast path,
+	 * so just jump straight to the syscall body.
+	 */
+	ALTERNATIVE "", __stringify(jmp \func), X86_FEATURE_RETPOLINE
+#endif
 	leaq	\func(%rip), %rax
 	jmp	stub_ptregs_64
 END(ptregs_\func)
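(A note on the patching mechanism used above, for reviewers unfamiliar with
it: ALTERNATIVE "", "jmp ...", X86_FEATURE_RETPOLINE emits padding NOPs at
build time plus a record in the alternatives table; at boot, the alternatives
code overwrites the NOPs with the jmp only if the CPU feature bit is set.
Below is a rough C model of that decision; alt_entry, cpu_has_feature, and
apply_one_alternative are illustrative names, not the kernel's real
identifiers.)

#include <string.h>

/*
 * Rough model of boot-time alternative patching.  The struct and
 * function names here are illustrative, not the kernel's real ones.
 */
struct alt_entry {
	unsigned char *site;       /* ALTERNATIVE site (NOPs at build time) */
	const unsigned char *repl; /* replacement bytes, e.g. a 5-byte jmp  */
	unsigned int len;          /* site is padded with NOPs to this size */
	unsigned int feature;      /* e.g. X86_FEATURE_RETPOLINE            */
};

extern int cpu_has_feature(unsigned int feature);

static void apply_one_alternative(struct alt_entry *a)
{
	if (cpu_has_feature(a->feature))
		memcpy(a->site, a->repl, a->len); /* patch in the jmp */
	/* Otherwise the padding NOPs remain and the fast path runs
	 * exactly as it did before this patch. */
}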