From patchwork Wed Feb 20 00:54:49 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10820939
Date: Tue, 19 Feb 2019 16:54:49 -0800
From: Kees Cook
To: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com, x86@kernel.org
Subject: [PATCH] x86/asm: Pin sensitive CR4 bits
Message-ID: <20190220005449.GA25243@beast>

Several recent exploits have used direct calls to the native_write_cr4()
function to disable SMEP and SMAP before then continuing their exploits
using userspace memory access. This pins bits of cr4 so that they cannot
be changed through a common function.
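
As a rough standalone illustration of the pin-and-recheck idea (a
userspace sketch, not the kernel code; write_fake_cr4(), fake_cr4, and
the bit values are made-up stand-ins for the real %cr4 register and
X86_CR4_* masks):

   #include <stdio.h>

   static volatile unsigned long pin;   /* __ro_after_init in the kernel */
   static unsigned long fake_cr4;       /* stands in for the %cr4 register */

   static void write_fake_cr4(unsigned long val)
   {
   again:
           val |= pin;                  /* force the pinned bits on */
           fake_cr4 = val;              /* the "mov %0,%%cr4" step */
           /* pin is volatile, so it is re-read here, after the write */
           if (pin && (val & pin) == 0)
                   goto again;          /* entered past the OR: redo it */
   }

   int main(void)
   {
           pin = (1UL << 20) | (1UL << 21);  /* SMEP and SMAP bit positions */
           write_fake_cr4(0);                /* caller tries to clear them */
           printf("%#lx\n", fake_cr4);       /* prints 0x300000 anyway */
           return 0;
   }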
This is not intended to be general ROP protection (which would require
CFI to defend against properly), but rather a way to avoid trivial
direct function calling (or CFI bypassing via a matching function
prototype) as seen in:
https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html
(https://github.com/xairy/kernel-exploits/tree/master/CVE-2017-7308)

The goals of this change:
- pin specific bits (SMEP, SMAP, and UMIP) when writing cr4.
- avoid setting the bits too early (they must become pinned only after
  first being used).
- pinning mask needs to be read-only during normal runtime.
- pinning needs to be rechecked after set to avoid jumps into the middle
  of the function.

Using __ro_after_init on the mask is done so it can't be first disabled
with a malicious write. And since it becomes read-only, we must avoid
writing to it later (hence the check for bits already having been set
instead of unconditionally writing to the mask; a standalone sketch of
this pattern follows the diff below).

The use of volatile is done to force the compiler to perform a full
reload of the mask after setting cr4 (to protect against just jumping
into the function past where the masking happens; we must check that
the mask was applied after we do the set). Due to how this function can
be built by the compiler (especially due to the removal of frame
pointers), jumping into the middle of the function frequently doesn't
require stack manipulation to construct a stack frame (there may be only
a retq without pops, which is sufficient for use with exploits like the
timer overwrites mentioned above).

For example, without the recheck, the function may appear as:

   native_write_cr4:
      mov [pin], %rbx
      or  %rbx, %rdi
   1: mov %rdi, %cr4
      retq

The masking "or" could be trivially bypassed by just calling to label "1"
instead of "native_write_cr4". (CFI will force calls to only be able to
call into native_write_cr4, but CFI and CET are uncommon currently.)

Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 12 ++++++++++++
 arch/x86/kernel/cpu/common.c         | 12 +++++++++++-
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 43c029cdc3fe..bb08b731a33b 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -72,9 +72,21 @@ static inline unsigned long native_read_cr4(void)
 	return val;
 }
 
+extern volatile unsigned long cr4_pin;
+
 static inline void native_write_cr4(unsigned long val)
 {
+again:
+	val |= cr4_pin;
 	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the cr4_pin bits for sure.
+	 */
+	if (WARN_ONCE(cr4_pin && (val & cr4_pin) == 0,
+		      "cr4 pin bypass attempt?!\n"))
+		goto again;
 }
 
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index cb28e98a0659..7e0ea4470f8e 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -312,10 +312,16 @@ static __init int setup_disable_smep(char *arg)
 }
 __setup("nosmep", setup_disable_smep);
 
+volatile unsigned long cr4_pin __ro_after_init;
+EXPORT_SYMBOL_GPL(cr4_pin);
+
 static __always_inline void setup_smep(struct cpuinfo_x86 *c)
 {
-	if (cpu_has(c, X86_FEATURE_SMEP))
+	if (cpu_has(c, X86_FEATURE_SMEP)) {
+		if (!(cr4_pin & X86_CR4_SMEP))
+			cr4_pin |= X86_CR4_SMEP;
 		cr4_set_bits(X86_CR4_SMEP);
+	}
 }
 
 static __init int setup_disable_smap(char *arg)
@@ -334,6 +340,8 @@ static __always_inline void setup_smap(struct cpuinfo_x86 *c)
 
 	if (cpu_has(c, X86_FEATURE_SMAP)) {
 #ifdef CONFIG_X86_SMAP
+		if (!(cr4_pin & X86_CR4_SMAP))
+			cr4_pin |= X86_CR4_SMAP;
 		cr4_set_bits(X86_CR4_SMAP);
 #else
 		cr4_clear_bits(X86_CR4_SMAP);
@@ -351,6 +359,8 @@ static __always_inline void setup_umip(struct cpuinfo_x86 *c)
 	if (!cpu_has(c, X86_FEATURE_UMIP))
 		goto out;
 
+	if (!(cr4_pin & X86_CR4_UMIP))
+		cr4_pin |= X86_CR4_UMIP;
 	cr4_set_bits(X86_CR4_UMIP);
 
 	pr_info_once("x86/cpu: User Mode Instruction Prevention (UMIP) activated\n");
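
As mentioned in the commit message, once the mask is marked
__ro_after_init a later unconditional store to it would fault, which is
why the setup_*() hunks above only write a bit that is not yet set. A
minimal standalone sketch of that write-once pattern (pin_bit() and
cr4_pin_mask are made-up names for illustration, not the kernel
symbols):

   static volatile unsigned long cr4_pin_mask;  /* __ro_after_init in the kernel */

   static void pin_bit(unsigned long bit)
   {
           /*
            * Skip the store when the bit is already set, so running this
            * again later (after the mask has become read-only) does not
            * touch the read-only page.
            */
           if (!(cr4_pin_mask & bit))
                   cr4_pin_mask |= bit;
   }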