From patchwork Wed Sep 20 00:44:34 2017
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 9960547
From: Eric Biggers
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com, Andy Lutomirski, Dave Hansen, Dmitry Vyukov, Fenghua Yu, Ingo Molnar, Kevin Hao, Oleg Nesterov, Wanpeng Li, Yu-cheng Yu, Michael Halcrow, Eric Biggers
Date: Tue, 19 Sep 2017 17:44:34 -0700
Message-Id: <20170920004434.35308-4-ebiggers3@gmail.com>
In-Reply-To: <20170920004434.35308-1-ebiggers3@gmail.com>
References: <20170920004434.35308-1-ebiggers3@gmail.com>
Subject: [kernel-hardening] [PATCH v2 3/3] x86/fpu: reinitialize FPU registers if restoring FPU state fails

From: Eric Biggers

Userspace can change the FPU state of a task using the ptrace() or
rt_sigreturn() system calls.
Because reserved bits in the FPU state can cause the XRSTOR instruction to
fail, the kernel has to carefully validate that no reserved bits or other
invalid values are being set.  Unfortunately, there have been bugs in this
validation code.  For example, we were not checking that the 'xcomp_bv' field
in the xstate_header was 0.

As-is, such bugs are exploitable to read the FPU registers of other processes
on the system.  To do so, an attacker can create a task, assign to it an
invalid FPU state, then spin in a loop and monitor the values of the FPU
registers.  Because the task's FPU registers are not being restored, sometimes
the FPU registers will have the values from another process.

This is likely to continue to be a problem in the future because the
validation done by CPU instructions like XRSTOR is not immediately visible to
kernel developers.  Nor will invalid FPU states ever be encountered during
ordinary use --- they will only be seen during fuzzing or exploits.  There can
even be reserved bits outside the xstate_header which are easy to forget
about.  For example, the MXCSR register contains reserved bits, which were not
validated by the KVM_SET_XSAVE ioctl until commit a575813bfe4b ("KVM: x86: Fix
load damaged SSEx MXCSR register").

Therefore, mitigate this class of vulnerability by restoring the FPU registers
from init_fpstate if restoring from the task's state fails.  We actually used
to do this, but it was (perhaps unwisely) removed by commit 9ccc27a5d297
("x86/fpu: Remove error return values from copy_kernel_to_*regs() functions").
This new patch is also a bit different in that it only clears the registers,
not the bad in-memory state as well; this is simpler and makes it easier to
make the mitigation cover all callers of __copy_kernel_to_fpregs().
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Dmitry Vyukov
Cc: Fenghua Yu
Cc: Ingo Molnar
Cc: Kevin Hao
Cc: Oleg Nesterov
Cc: Wanpeng Li
Cc: Yu-cheng Yu
Signed-off-by: Eric Biggers
---
 arch/x86/include/asm/fpu/internal.h | 29 ++++++++++++++++++++---------
 arch/x86/kernel/fpu/core.c          | 16 ++++++++++++++++
 2 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 554cdb205d17..4efd483d2407 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -104,6 +104,8 @@ static inline void fpstate_init_fxstate(struct fxregs_state *fx)
 }
 
 extern void fpstate_sanitize_xstate(struct fpu *fpu);
 
+extern void __handle_bad_fpstate(union fpregs_state *fpstate, u64 mask);
+
 #define user_insn(insn, output, input...)				\
 ({									\
 	int err;							\
@@ -151,7 +153,7 @@ static inline int copy_fxregs_to_user(struct fxregs_state __user *fx)
 	return user_insn(rex64/fxsave (%[fx]), "=m" (*fx), [fx] "R" (fx));
 }
 
-static inline void copy_kernel_to_fxregs(struct fxregs_state *fx)
+static inline int copy_kernel_to_fxregs(struct fxregs_state *fx)
 {
 	int err;
 
@@ -165,8 +167,8 @@ static inline void copy_kernel_to_fxregs(struct fxregs_state *fx)
 		err = check_insn(rex64/fxrstor (%[fx]), "=m" (*fx), [fx] "R" (fx),
 				 "m" (*fx));
 		}
 	}
-	/* Copying from a kernel buffer to FPU registers should never fail: */
 	WARN_ON_FPU(err);
+	return err;
 }
 
 static inline int copy_user_to_fxregs(struct fxregs_state __user *fx)
@@ -181,11 +183,12 @@ static inline int copy_user_to_fxregs(struct fxregs_state __user *fx)
 			   "m" (*fx));
 }
 
-static inline void copy_kernel_to_fregs(struct fregs_state *fx)
+static inline int copy_kernel_to_fregs(struct fregs_state *fx)
 {
 	int err = check_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
 
 	WARN_ON_FPU(err);
+	return err;
 }
 
 static inline int copy_user_to_fregs(struct fregs_state __user *fx)
@@ -361,7 +364,7 @@ static inline void copy_xregs_to_kernel(struct xregs_state *xstate)
 /*
  * Restore processor xstate from xsave area.
  */
-static inline void copy_kernel_to_xregs(struct xregs_state *xstate, u64 mask)
+static inline int copy_kernel_to_xregs(struct xregs_state *xstate, u64 mask)
 {
 	u32 lmask = mask;
 	u32 hmask = mask >> 32;
@@ -369,8 +372,8 @@ static inline void copy_kernel_to_xregs(struct xregs_state *xstate, u64 mask)
 
 	XSTATE_XRESTORE(xstate, lmask, hmask, err);
 
-	/* We should never fault when copying from a kernel buffer: */
 	WARN_ON_FPU(err);
+	return err;
 }
 
 /*
@@ -450,18 +453,26 @@ static inline int copy_fpregs_to_fpstate(struct fpu *fpu)
 	return 0;
 }
 
-static inline void __copy_kernel_to_fpregs(union fpregs_state *fpstate, u64 mask)
+static inline int ____copy_kernel_to_fpregs(union fpregs_state *fpstate, u64 mask)
 {
 	if (use_xsave()) {
-		copy_kernel_to_xregs(&fpstate->xsave, mask);
+		return copy_kernel_to_xregs(&fpstate->xsave, mask);
 	} else {
 		if (use_fxsr())
-			copy_kernel_to_fxregs(&fpstate->fxsave);
+			return copy_kernel_to_fxregs(&fpstate->fxsave);
 		else
-			copy_kernel_to_fregs(&fpstate->fsave);
+			return copy_kernel_to_fregs(&fpstate->fsave);
 	}
 }
 
+static inline void __copy_kernel_to_fpregs(union fpregs_state *fpstate, u64 mask)
+{
+	int err = ____copy_kernel_to_fpregs(fpstate, mask);
+
+	if (unlikely(err))
+		__handle_bad_fpstate(fpstate, mask);
+}
+
 static inline void copy_kernel_to_fpregs(union fpregs_state *fpstate)
 {
 	/*
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index e1114f070c2d..85567db482e2 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -536,3 +536,19 @@ int fpu__exception_code(struct fpu *fpu, int trap_nr)
 	 */
 	return 0;
 }
+
+/*
+ * We should never get here because the fpregs_state stored in 'struct fpu'
+ * should always be readable and contain a valid FPU state.  However, past bugs
+ * have allowed userspace to set reserved bits in the XSAVE area using
+ * PTRACE_SETREGSET or sys_rt_sigreturn().  These caused XRSTOR to fail when
+ * switching to the task, leaking the FPU registers of the task previously
+ * executing on the CPU.  Mitigate this class of vulnerability by restoring
+ * from the initial state (essentially, zeroing out all the FPU registers)
+ * if we can't restore from the task's FPU state.
+ */
+void __handle_bad_fpstate(union fpregs_state *fpstate, u64 mask)
+{
+	WARN_ONCE(1, "Bad FPU state detected, reinitializing FPU registers");
+	____copy_kernel_to_fpregs(&init_fpstate, mask);
+}