From patchwork Fri Sep 22 17:41:55 2017
From: Eric Biggers
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com,
    Andy Lutomirski, Dave Hansen, Dmitry Vyukov, Fenghua Yu, Ingo Molnar,
    Kevin Hao, Oleg Nesterov, Wanpeng Li, Yu-cheng Yu, Michael Halcrow,
    Eric Biggers
Date: Fri, 22 Sep 2017 10:41:55 -0700
Message-Id: <20170922174156.16780-3-ebiggers3@gmail.com>
In-Reply-To: <20170922174156.16780-1-ebiggers3@gmail.com>
References: <20170922174156.16780-1-ebiggers3@gmail.com>
Subject: [kernel-hardening] [PATCH v4 2/3] x86/fpu: tighten validation of user-supplied xstate_header

From: Eric Biggers

Move validation of user-supplied xstate_headers into a helper function
and call it from both the ptrace and sigreturn syscall paths.  The new
function also considers it to be an error if *any* reserved bits are
set, whereas before we were just clearing most of them.

This should reduce the chance of bugs that fail to correctly validate
user-supplied XSAVE areas.  It also will expose any broken userspace
programs that set the other reserved bits; this is desirable because
such programs will lose compatibility with future CPUs and kernels if
those bits are ever used for anything.  (There shouldn't be any such
programs, and in fact in the case where the compacted format is in use
we were already validating xfeatures.  But you never know...)

Reviewed-by: Kees Cook
Reviewed-by: Rik van Riel
Acked-by: Dave Hansen
Cc: Andy Lutomirski
Cc: Dmitry Vyukov
Cc: Fenghua Yu
Cc: Ingo Molnar
Cc: Kevin Hao
Cc: Oleg Nesterov
Cc: Wanpeng Li
Cc: Yu-cheng Yu
Signed-off-by: Eric Biggers
---
 arch/x86/include/asm/fpu/xstate.h | 25 +++++++++++++++++++
 arch/x86/kernel/fpu/regset.c      | 21 ++++++----------
 arch/x86/kernel/fpu/signal.c      | 17 +++++++------
 arch/x86/kernel/fpu/xstate.c      | 52 ++++++++++++++-------------------------
 4 files changed, 60 insertions(+), 55 deletions(-)

diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 579ac2358e63..3d79d0ee4d30 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -52,4 +52,29 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int of
 int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned int offset, unsigned int size);
 int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf);
 int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf);
+
+/* Validate an xstate header supplied by userspace (ptrace or sigreturn) */
+static inline int validate_xstate_header(const struct xstate_header *hdr)
+{
+        /* No unknown or supervisor features may be set */
+        if (hdr->xfeatures & (~xfeatures_mask | XFEATURE_MASK_SUPERVISOR))
+                return -EINVAL;
+
+        /* Userspace must use the uncompacted format */
+        if (hdr->xcomp_bv)
+                return -EINVAL;
+
+        /*
+         * If 'reserved' is shrunken to add a new field, make sure to validate
+         * that new field here!
+         */
+        BUILD_BUG_ON(sizeof(hdr->reserved) != 48);
+
+        /* No reserved bits may be set */
+        if (memchr_inv(hdr->reserved, 0, sizeof(hdr->reserved)))
+                return -EINVAL;
+
+        return 0;
+}
+
 #endif
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
index c764f7405322..0467e536b0a2 100644
--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -134,34 +134,27 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
 
         fpu__activate_fpstate_write(fpu);
 
-        if (boot_cpu_has(X86_FEATURE_XSAVES)) {
+        if (using_compacted_format()) {
                 if (kbuf)
                         ret = copy_kernel_to_xstate(xsave, kbuf);
                 else
                         ret = copy_user_to_xstate(xsave, ubuf);
         } else {
                 ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, xsave, 0, -1);
-
-                /* xcomp_bv must be 0 when using uncompacted format */
-                if (!ret && xsave->header.xcomp_bv)
-                        ret = -EINVAL;
+                if (!ret)
+                        ret = validate_xstate_header(&xsave->header);
         }
 
-        /*
-         * In case of failure, mark all states as init:
-         */
-        if (ret)
-                fpstate_init(&fpu->state);
-
         /*
         * mxcsr reserved bits must be masked to zero for security reasons.
         */
         xsave->i387.mxcsr &= mxcsr_feature_mask;
-        xsave->header.xfeatures &= xfeatures_mask;
+
         /*
-         * These bits must be zero.
+         * In case of failure, mark all states as init:
         */
-        memset(&xsave->header.reserved, 0, 48);
+        if (ret)
+                fpstate_init(&fpu->state);
 
         return ret;
 }
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index d34349934702..5b5d75e3b2a4 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -214,8 +214,11 @@ sanitize_restored_xstate(struct task_struct *tsk,
         struct xstate_header *header = &xsave->header;
 
         if (use_xsave()) {
-                /* These bits must be zero. */
-                memset(header->reserved, 0, 48);
+                /*
+                 * Note: we don't need to zero the reserved bits in the
+                 * xstate_header here because we either didn't copy them at all,
+                 * or we checked earlier that they aren't set.
+                 */
 
                 /*
                 * Init the state that is not present in the memory
@@ -224,7 +227,7 @@ sanitize_restored_xstate(struct task_struct *tsk,
                 if (fx_only)
                         header->xfeatures = XFEATURE_MASK_FPSSE;
                 else
-                        header->xfeatures &= (xfeatures_mask & xfeatures);
+                        header->xfeatures &= xfeatures;
         }
 
         if (use_fxsr()) {
@@ -308,7 +311,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
         /*
         * For 32-bit frames with fxstate, copy the user state to the
         * thread's fpu state, reconstruct fxstate from the fsave
-        * header. Sanitize the copied state etc.
+        * header. Validate and sanitize the copied state.
         */
         struct fpu *fpu = &tsk->thread.fpu;
         struct user_i387_ia32_struct env;
@@ -328,10 +331,8 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
                         err = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
                 } else {
                         err = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
-
-                        /* xcomp_bv must be 0 when using uncompacted format */
-                        if (!err && fpu->state.xsave.header.xcomp_bv)
-                                err = -EINVAL;
+                        if (!err)
+                                err = validate_xstate_header(&fpu->state.xsave.header);
                 }
 
                 if (err || __copy_from_user(&env, buf, sizeof(env))) {
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index fda1109cc355..d5150163a0df 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1114,34 +1114,26 @@ int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned i
 
 /*
  * Convert from a ptrace standard-format kernel buffer to kernel XSAVES format
- * and copy to the target thread. This is called from xstateregs_set() and
- * there we check the CPU has XSAVES and a whole standard-sized buffer
- * exists.
+ * and copy to the target thread. This is called from xstateregs_set().
  */
 int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
 {
         unsigned int offset, size;
         int i;
-        u64 xfeatures;
-        u64 allowed_features;
+        struct xstate_header hdr;
 
         offset = offsetof(struct xregs_state, header);
-        size = sizeof(xfeatures);
-
-        memcpy(&xfeatures, kbuf + offset, size);
+        size = sizeof(hdr);
 
-        /*
-         * Reject if the user sets any disabled or supervisor features:
-         */
-        allowed_features = xfeatures_mask & ~XFEATURE_MASK_SUPERVISOR;
+        memcpy(&hdr, kbuf + offset, size);
 
-        if (xfeatures & ~allowed_features)
+        if (validate_xstate_header(&hdr) != 0)
                 return -EINVAL;
 
         for (i = 0; i < XFEATURE_MAX; i++) {
                 u64 mask = ((u64)1 << i);
 
-                if (xfeatures & mask) {
+                if (hdr.xfeatures & mask) {
                         void *dst = __raw_xsave_addr(xsave, 1 << i);
 
                         offset = xstate_offsets[i];
@@ -1151,7 +1143,7 @@ int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
                 }
         }
 
-        if (xfeatures_mxcsr_quirk(xfeatures)) {
+        if (xfeatures_mxcsr_quirk(hdr.xfeatures)) {
                 offset = offsetof(struct fxregs_state, mxcsr);
                 size = MXCSR_AND_FLAGS_SIZE;
                 memcpy(&xsave->i387.mxcsr, kbuf + offset, size);
@@ -1166,42 +1158,36 @@ int copy_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf)
         /*
         * Add back in the features that came in from userspace:
         */
-        xsave->header.xfeatures |= xfeatures;
+        xsave->header.xfeatures |= hdr.xfeatures;
 
         return 0;
 }
 
 /*
- * Convert from a ptrace standard-format user-space buffer to kernel XSAVES format
- * and copy to the target thread. This is called from xstateregs_set() and
- * there we check the CPU has XSAVES and a whole standard-sized buffer
- * exists.
+ * Convert from a ptrace or sigreturn standard-format user-space buffer to
+ * kernel XSAVES format and copy to the target thread. This is called from
+ * xstateregs_set(), as well as potentially from the sigreturn() and
+ * rt_sigreturn() system calls.
  */
 int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
 {
         unsigned int offset, size;
         int i;
-        u64 xfeatures;
-        u64 allowed_features;
+        struct xstate_header hdr;
 
         offset = offsetof(struct xregs_state, header);
-        size = sizeof(xfeatures);
+        size = sizeof(hdr);
 
-        if (__copy_from_user(&xfeatures, ubuf + offset, size))
+        if (__copy_from_user(&hdr, ubuf + offset, size))
                 return -EFAULT;
 
-        /*
-         * Reject if the user sets any disabled or supervisor features:
-         */
-        allowed_features = xfeatures_mask & ~XFEATURE_MASK_SUPERVISOR;
-
-        if (xfeatures & ~allowed_features)
+        if (validate_xstate_header(&hdr) != 0)
                 return -EINVAL;
 
         for (i = 0; i < XFEATURE_MAX; i++) {
                 u64 mask = ((u64)1 << i);
 
-                if (xfeatures & mask) {
+                if (hdr.xfeatures & mask) {
                         void *dst = __raw_xsave_addr(xsave, 1 << i);
 
                         offset = xstate_offsets[i];
@@ -1212,7 +1198,7 @@ int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
                 }
         }
 
-        if (xfeatures_mxcsr_quirk(xfeatures)) {
+        if (xfeatures_mxcsr_quirk(hdr.xfeatures)) {
                 offset = offsetof(struct fxregs_state, mxcsr);
                 size = MXCSR_AND_FLAGS_SIZE;
                 if (__copy_from_user(&xsave->i387.mxcsr, ubuf + offset, size))
@@ -1228,7 +1214,7 @@ int copy_user_to_xstate(struct xregs_state *xsave, const void __user *ubuf)
         /*
         * Add back in the features that came in from userspace:
         */
-        xsave->header.xfeatures |= xfeatures;
+        xsave->header.xfeatures |= hdr.xfeatures;
 
         return 0;
 }
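
Not part of the patch, for illustration only: a minimal userspace sketch of
the kind of input the new validate_xstate_header() check rejects on the
ptrace path.  The program, the XSTATE_BUF_SIZE/XSTATE_HDR_OFFSET names, and
the assumption that a 4096-byte buffer is large enough for the running CPU
are all mine, not anything defined by the kernel or libc; the xstate header
is assumed to start at byte 512 of the XSAVE area, with its reserved bytes
at offsets 528..575.  With this patch applied, the PTRACE_SETREGSET call
below is expected to fail with EINVAL rather than have the reserved bits
silently cleared.

/* test_xstate_reserved.c - hypothetical demo, not part of this patch */
#include <elf.h>                /* NT_X86_XSTATE */
#include <errno.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

#define XSTATE_BUF_SIZE   4096  /* assumed large enough for this CPU */
#define XSTATE_HDR_OFFSET 512   /* xstate header follows the legacy FXSAVE area */

int main(void)
{
        static uint8_t xstate[XSTATE_BUF_SIZE] __attribute__((aligned(64)));
        struct iovec iov = { .iov_base = xstate, .iov_len = sizeof(xstate) };
        pid_t pid = fork();

        if (pid == 0) {
                /* Child: let the parent trace us, then stop. */
                ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                raise(SIGSTOP);
                _exit(0);
        }
        waitpid(pid, NULL, 0);

        /* Read the tracee's xstate; the kernel updates iov.iov_len to the real size. */
        if (ptrace(PTRACE_GETREGSET, pid, (void *)(uintptr_t)NT_X86_XSTATE, &iov) != 0) {
                perror("PTRACE_GETREGSET");
                return 1;
        }

        /* Set a bit in the reserved[] part of the xstate header (bytes 528..575). */
        xstate[XSTATE_HDR_OFFSET + 16] |= 1;

        errno = 0;
        if (ptrace(PTRACE_SETREGSET, pid, (void *)(uintptr_t)NT_X86_XSTATE, &iov) != 0 &&
            errno == EINVAL)
                printf("reserved header bits rejected (new behavior)\n");
        else
                printf("reserved header bits accepted/cleared (old behavior)\n");

        kill(pid, SIGKILL);
        return 0;
}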