From patchwork Wed Sep 20 00:44:33 2017
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 9960545
From: Eric Biggers
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com,
 Andy Lutomirski, Dave Hansen, Dmitry Vyukov, Fenghua Yu, Ingo Molnar,
 Kevin Hao, Oleg Nesterov, Wanpeng Li, Yu-cheng Yu, Michael Halcrow,
 Eric Biggers
Date: Tue, 19 Sep 2017 17:44:33 -0700
Message-Id: <20170920004434.35308-3-ebiggers3@gmail.com>
X-Mailer: git-send-email 2.14.1.690.gbb1197296e-goog
In-Reply-To: <20170920004434.35308-1-ebiggers3@gmail.com>
References: <20170920004434.35308-1-ebiggers3@gmail.com>
Subject: [kernel-hardening] [PATCH v2 2/3] x86/fpu: tighten validation of
 user-supplied xstate_header

From: Eric Biggers

Move validation of the user-supplied xstate_header into a helper function
and call it from both the ptrace and sigreturn syscall paths.

The new function also considers it to be an error if *any* reserved bits
are set, whereas before we were just clearing most of them.

This should reduce the chance of bugs that fail to correctly validate
user-supplied XSAVE areas.  It will also expose any broken userspace
programs that set the other reserved bits; this is desirable because such
programs will lose compatibility with future CPUs and kernels if those
bits are ever used for anything.  (There shouldn't be any such programs,
and in fact in the case where the compacted format is in use we were
already validating xfeatures.  But you never know...)

Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Dmitry Vyukov
Cc: Fenghua Yu
Cc: Ingo Molnar
Cc: Kevin Hao
Cc: Oleg Nesterov
Cc: Wanpeng Li
Cc: Yu-cheng Yu
Signed-off-by: Eric Biggers
---
An illustrative userspace sketch of the new rejection behaviour is
appended after the diff.

 arch/x86/include/asm/fpu/xstate.h | 19 +++++++++++++++++++
 arch/x86/kernel/fpu/regset.c      | 21 +++++++--------------
 arch/x86/kernel/fpu/signal.c      | 19 +++++++++++--------
 arch/x86/kernel/fpu/xstate.c      | 27 ++++++++++-----------------
 4 files changed, 47 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 1b2799e0699a..7fae180a8716 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -52,4 +52,23 @@ int copyout_from_xsaves(unsigned int pos, unsigned int count, void *kbuf,
 			void __user *ubuf, struct xregs_state *xsave);
 int copyin_to_xsaves(const void *kbuf, const void __user *ubuf,
 		     struct xregs_state *xsave);
+
+/* Validate an xstate header supplied by userspace (ptrace or sigreturn) */
+static inline int validate_xstate_header(const struct xstate_header *hdr)
+{
+	/* No unknown or supervisor features may be set */
+	if (hdr->xfeatures & (~xfeatures_mask | XFEATURE_MASK_SUPERVISOR))
+		return -EINVAL;
+
+	/* Userspace must use the uncompacted format */
+	if (hdr->xcomp_bv)
+		return -EINVAL;
+
+	/* No reserved bits may be set */
+	if (memchr_inv(hdr->reserved, 0, sizeof(hdr->reserved)))
+		return -EINVAL;
+
+	return 0;
+}
+
 #endif
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
index 8ab1a1f4d1c1..3fd3a2f4f591 100644
--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -131,31 +131,24 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
 
 	fpu__activate_fpstate_write(fpu);
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
+	if (using_compacted_format()) {
 		ret = copyin_to_xsaves(kbuf, ubuf, xsave);
 	} else {
 		ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, xsave, 0, -1);
-
-		/* xcomp_bv must be 0 when using uncompacted format */
-		if (!ret && xsave->header.xcomp_bv)
-			ret = -EINVAL;
+		if (!ret)
+			ret = validate_xstate_header(&xsave->header);
 	}
 
-	/*
-	 * In case of failure, mark all states as init:
-	 */
-	if (ret)
-		fpstate_init(&fpu->state);
-
 	/*
 	 * mxcsr reserved bits must be masked to zero for security reasons.
 	 */
 	xsave->i387.mxcsr &= mxcsr_feature_mask;
-	xsave->header.xfeatures &= xfeatures_mask;
+
 	/*
-	 * These bits must be zero.
+	 * In case of failure, mark all states as init:
 	 */
-	memset(&xsave->header.reserved, 0, 48);
+	if (ret)
+		fpstate_init(&fpu->state);
 
 	return ret;
 }
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index 169d6e001d32..a325f0b25456 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -213,8 +213,11 @@ sanitize_restored_xstate(struct task_struct *tsk,
 	struct xstate_header *header = &xsave->header;
 
 	if (use_xsave()) {
-		/* These bits must be zero. */
-		memset(header->reserved, 0, 48);
+		/*
+		 * Note: we don't need to zero the reserved bits in the
+		 * xstate_header here because we either didn't copy them at all,
+		 * or we checked earlier that they aren't set.
+		 */
 
 		/*
 		 * Init the state that is not present in the memory
@@ -223,7 +226,7 @@ sanitize_restored_xstate(struct task_struct *tsk,
 		if (fx_only)
 			header->xfeatures = XFEATURE_MASK_FPSSE;
 		else
-			header->xfeatures &= (xfeatures_mask & xfeatures);
+			header->xfeatures &= xfeatures;
 	}
 
 	if (use_fxsr()) {
@@ -307,7 +310,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	/*
 	 * For 32-bit frames with fxstate, copy the user state to the
 	 * thread's fpu state, reconstruct fxstate from the fsave
-	 * header. Sanitize the copied state etc.
+	 * header. Validate and sanitize the copied state.
 	 */
 	struct fpu *fpu = &tsk->thread.fpu;
 	struct user_i387_ia32_struct env;
@@ -329,10 +332,10 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	} else {
 		err = __copy_from_user(&fpu->state.xsave,
 				       buf_fx, state_size);
-
-		/* xcomp_bv must be 0 when using uncompacted format */
-		if (!err && fpu->state.xsave.header.xcomp_bv)
-			err = -EINVAL;
+		if (!err) {
+			err = validate_xstate_header(
+				&fpu->state.xsave.header);
+		}
 	}
 
 	if (err || __copy_from_user(&env, buf, sizeof(env))) {
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index c24ac1efb12d..52018be79e27 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1018,41 +1018,34 @@ int copyout_from_xsaves(unsigned int pos, unsigned int count, void *kbuf,
 }
 
 /*
- * Convert from a ptrace standard-format buffer to kernel XSAVES format
- * and copy to the target thread. This is called from xstateregs_set() and
- * there we check the CPU has XSAVES and a whole standard-sized buffer
- * exists.
+ * Convert from a ptrace or sigreturn standard-format buffer to kernel XSAVES
+ * format and copy to the target thread. This is called from xstateregs_set(),
+ * as well as potentially from the sigreturn() and rt_sigreturn() system calls.
  */
 int copyin_to_xsaves(const void *kbuf, const void __user *ubuf,
 		     struct xregs_state *xsave)
 {
 	unsigned int offset, size;
 	int i;
-	u64 xfeatures;
-	u64 allowed_features;
+	struct xstate_header hdr;
 
 	offset = offsetof(struct xregs_state, header);
-	size = sizeof(xfeatures);
+	size = sizeof(hdr);
 
 	if (kbuf) {
-		memcpy(&xfeatures, kbuf + offset, size);
+		memcpy(&hdr, kbuf + offset, size);
 	} else {
-		if (__copy_from_user(&xfeatures, ubuf + offset, size))
+		if (__copy_from_user(&hdr, ubuf + offset, size))
 			return -EFAULT;
 	}
 
-	/*
-	 * Reject if the user sets any disabled or supervisor features:
-	 */
-	allowed_features = xfeatures_mask & ~XFEATURE_MASK_SUPERVISOR;
-
-	if (xfeatures & ~allowed_features)
+	if (validate_xstate_header(&hdr) != 0)
 		return -EINVAL;
 
 	for (i = 0; i < XFEATURE_MAX; i++) {
 		u64 mask = ((u64)1 << i);
 
-		if (xfeatures & mask) {
+		if (hdr.xfeatures & mask) {
 			void *dst = __raw_xsave_addr(xsave, 1 << i);
 
 			offset = xstate_offsets[i];
@@ -1076,7 +1069,7 @@ int copyin_to_xsaves(const void *kbuf, const void __user *ubuf,
 	/*
 	 * Add back in the features that came in from userspace:
 	 */
-	xsave->header.xfeatures |= xfeatures;
+	xsave->header.xfeatures |= hdr.xfeatures;
 
 	return 0;
 }
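
Illustrative sketch (editor's addition, not part of the submitted patch): a
minimal userspace program that exercises the stricter ptrace path.  It assumes
an x86 host with XSAVE, that a 4096-byte buffer is large enough for the
uncompacted XSAVE area, and that setting a byte in the xstate_header's reserved
area (which starts at offset 528, after xfeatures at 512 and xcomp_bv at 520)
is representative of the newly rejected input.  Only standard ptrace/uapi
interfaces are used; it is a sketch of the expected behaviour, not a test that
was posted with the patch.

/*
 * Poke a nonzero value into the xstate_header reserved bytes of a traced
 * child via PTRACE_SETREGSET.  With this patch applied the kernel refuses
 * the write with EINVAL; previously the reserved bytes were silently zeroed.
 */
#include <elf.h>		/* NT_X86_XSTATE */
#include <errno.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* Assumed large enough for this CPU's uncompacted XSAVE area. */
	static uint8_t xsave[4096];
	struct iovec iov = { .iov_base = xsave, .iov_len = sizeof(xsave) };
	pid_t child = fork();

	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		_exit(0);
	}
	waitpid(child, NULL, 0);

	/* Start from the child's real state so everything else stays valid. */
	if (ptrace(PTRACE_GETREGSET, child, (void *)NT_X86_XSTATE, &iov) != 0) {
		perror("PTRACE_GETREGSET");
		return 1;
	}

	/* Set one bit in the xstate_header reserved area (offset 528). */
	xsave[528] |= 1;

	errno = 0;
	if (ptrace(PTRACE_SETREGSET, child, (void *)NT_X86_XSTATE, &iov) != 0 &&
	    errno == EINVAL)
		printf("good: kernel rejected the reserved bit (EINVAL)\n");
	else
		printf("unexpected: write was accepted (errno=%d)\n", errno);

	kill(child, SIGKILL);
	return 0;
}

On a kernel with this patch the program should print the "good" message; on
older kernels the bogus reserved bit is silently cleared and the write is
accepted.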