From patchwork Tue Jun 18 04:55:01 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 11000895
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Linus Torvalds, x86@kernel.org, Peter Zijlstra, Dave Hansen, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v3 1/3] lkdtm: Check for SMEP clearing protections
Date: Mon, 17 Jun 2019 21:55:01 -0700
Message-Id: <20190618045503.39105-2-keescook@chromium.org>
In-Reply-To: <20190618045503.39105-1-keescook@chromium.org>
References: <20190618045503.39105-1-keescook@chromium.org>

This adds an x86-specific test for pinned cr4 bits. A successful test
will validate pinning and check the ROP-style call-middle-of-function
defense, if needed. For example, in the case of native_write_cr4()
looking like this:

ffffffff8171bce0 <native_write_cr4>:
ffffffff8171bce0:       48 8b 35 79 46 f2 00    mov    0xf24679(%rip),%rsi
ffffffff8171bce7:       48 09 f7                or     %rsi,%rdi
ffffffff8171bcea:       0f 22 e7                mov    %rdi,%cr4
...
ffffffff8171bd5a:       c3                      retq

The UNSET_SMEP test will jump to ffffffff8171bcea (the mov to cr4)
instead of ffffffff8171bce0 (native_write_cr4() entry) to simulate a
direct-call bypass attempt.
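The opcode scan the test performs can be sketched in plain user-space C (the buffer contents and helper name below are illustrative, not real kernel text):

```c
#include <stddef.h>

#define MOV_CR4_DEPTH 64

/*
 * User-space sketch of the scan used by the UNSET_SMEP test: walk the
 * first MOV_CR4_DEPTH bytes of a function looking for the encoding of
 * "mov %rdi,%cr4" (0f 22 e7). Returns the gadget offset, or -1 if no
 * such instruction was found within the scan depth.
 */
static int find_mov_cr4(const unsigned char *insn, size_t len)
{
	size_t i;

	for (i = 0; i + 2 < len && i < MOV_CR4_DEPTH; i++) {
		if (insn[i] == 0x0f && insn[i + 1] == 0x22 && insn[i + 2] == 0xe7)
			return (int)i;
	}
	return -1;
}
```

Against the byte sequence in the disassembly above (48 8b 35 79 46 f2 00, 48 09 f7, 0f 22 e7), the scan returns offset 10, which is ffffffff8171bcea relative to the function entry.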
Expected successful results:

  # echo UNSET_SMEP > /sys/kernel/debug/provoke-crash/DIRECT
  # dmesg
  [   79.594433] lkdtm: Performing direct entry UNSET_SMEP
  [   79.596459] lkdtm: trying to clear SMEP normally
  [   79.601810] ------------[ cut here ]------------
  [   79.603421] Attempt to unpin cr4 bits: 100000; bypass attack?!
  ...
  [   79.650170] ---[ end trace 2452ca0f6126242e ]---
  [   79.652281] lkdtm: ok: SMEP did not get cleared
  [   79.654344] lkdtm: trying to clear SMEP with call gadget
  [   79.655937] lkdtm: ok: SMEP removal was reverted

Signed-off-by: Kees Cook
---
 drivers/misc/lkdtm/bugs.c  | 66 ++++++++++++++++++++++++++++++++++++++
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 68 insertions(+)

diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index 7eebbdfbcacd..3edf4464b9fc 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -255,3 +255,69 @@ void lkdtm_STACK_GUARD_PAGE_TRAILING(void)
 	pr_err("FAIL: accessed page after stack!\n");
 }
+
+void lkdtm_UNSET_SMEP(void)
+{
+#ifdef CONFIG_X86_64
+#define MOV_CR4_DEPTH	64
+	void (*direct_write_cr4)(unsigned long val);
+	unsigned char *insn;
+	unsigned long cr4;
+	int i;
+
+	cr4 = native_read_cr4();
+
+	if ((cr4 & X86_CR4_SMEP) != X86_CR4_SMEP) {
+		pr_err("FAIL: SMEP not in use\n");
+		return;
+	}
+	cr4 &= ~(X86_CR4_SMEP);
+
+	pr_info("trying to clear SMEP normally\n");
+	native_write_cr4(cr4);
+	if (cr4 == native_read_cr4()) {
+		pr_err("FAIL: pinning SMEP failed!\n");
+		cr4 |= X86_CR4_SMEP;
+		pr_info("restoring SMEP\n");
+		native_write_cr4(cr4);
+		return;
+	}
+	pr_info("ok: SMEP did not get cleared\n");
+
+	/*
+	 * To test the post-write pinning verification we need to call
+	 * directly into the middle of native_write_cr4() where the
+	 * cr4 write happens, skipping the pinning. This searches for
+	 * the cr4 writing instruction.
+	 */
+	insn = (unsigned char *)native_write_cr4;
+	for (i = 0; i < MOV_CR4_DEPTH; i++) {
+		/* mov %rdi, %cr4 */
+		if (insn[i] == 0x0f && insn[i+1] == 0x22 && insn[i+2] == 0xe7)
+			break;
+		/* mov %rdi,%rax; mov %rax, %cr4 */
+		if (insn[i]   == 0x48 && insn[i+1] == 0x89 &&
+		    insn[i+2] == 0xf8 && insn[i+3] == 0x0f &&
+		    insn[i+4] == 0x22 && insn[i+5] == 0xe0)
+			break;
+	}
+	if (i >= MOV_CR4_DEPTH) {
+		pr_info("ok: cannot locate cr4 writing call gadget\n");
+		return;
+	}
+	direct_write_cr4 = (void *)(insn + i);
+
+	pr_info("trying to clear SMEP with call gadget\n");
+	direct_write_cr4(cr4);
+	if (native_read_cr4() & X86_CR4_SMEP) {
+		pr_info("ok: SMEP removal was reverted\n");
+	} else {
+		pr_err("FAIL: cleared SMEP not detected!\n");
+		cr4 |= X86_CR4_SMEP;
+		pr_info("restoring SMEP\n");
+		native_write_cr4(cr4);
+	}
+#else
+	pr_err("FAIL: this test is x86_64-only\n");
+#endif
+}

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index b51cf182b031..58cfd713f8dc 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -123,6 +123,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(CORRUPT_LIST_ADD),
 	CRASHTYPE(CORRUPT_LIST_DEL),
 	CRASHTYPE(CORRUPT_USER_DS),
+	CRASHTYPE(UNSET_SMEP),
 	CRASHTYPE(CORRUPT_STACK),
 	CRASHTYPE(CORRUPT_STACK_STRONG),
 	CRASHTYPE(STACK_GUARD_PAGE_LEADING),
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index b69ee004a3f7..d7eb5a8f1da4 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -26,6 +26,7 @@ void lkdtm_CORRUPT_LIST_DEL(void);
 void lkdtm_CORRUPT_USER_DS(void);
 void lkdtm_STACK_GUARD_PAGE_LEADING(void);
 void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
+void lkdtm_UNSET_SMEP(void);

 /* lkdtm_heap.c */
 void lkdtm_OVERWRITE_ALLOCATION(void);

From patchwork Tue Jun 18 04:55:02 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 11000893
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Linus Torvalds, x86@kernel.org, Peter Zijlstra, Dave Hansen, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v3 2/3] x86/asm: Pin sensitive CR4 bits
Date: Mon, 17 Jun 2019 21:55:02 -0700
Message-Id: <20190618045503.39105-3-keescook@chromium.org>
In-Reply-To: <20190618045503.39105-1-keescook@chromium.org>
References: <20190618045503.39105-1-keescook@chromium.org>

Several recent exploits have used direct calls to the native_write_cr4()
function to disable SMEP and SMAP before continuing their exploits using
userspace memory access. This pins bits of CR4 so that they cannot be
changed through a common function. This is not intended to be general
ROP protection (which would require CFI to defend against properly), but
rather a way to avoid trivial direct function calling (or CFI bypasses
via a matching function prototype) as seen in:

https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html
(https://github.com/xairy/kernel-exploits/tree/master/CVE-2017-7308)

The goals of this change:

- pin specific bits (SMEP, SMAP, and UMIP) when writing CR4.
- avoid setting the bits too early (they must become pinned only after
  CPU feature detection and selection has finished).
- pinning mask needs to be read-only during normal runtime.
- pinning needs to be checked after write to validate the cr4 state.

Using __ro_after_init on the mask is done so it can't be first disabled
with a malicious write.

Since these bits are global state (once established by the boot CPU and
kernel boot parameters), they are safe to write to secondary CPUs before
those CPUs have finished feature detection. As such, the bits are set at
the first cr4 write, so that cr4 write bugs can be detected (instead of
silently papered over).

This uses a few bytes less storage of a location we don't have:
read-only per-CPU data.

A check is performed after the register write because an attack could
just skip directly to the register write. Such a direct jump is possible
because of how this function may be built by the compiler (especially
due to the removal of frame pointers): it may not add a stack frame at
all (function exit may be only a retq without pops), which is sufficient
for trivial exploitation like in the timer overwrites mentioned above.

The asm argument constraints gain the "+" modifier to convince the
compiler that it shouldn't make ordering assumptions about the arguments
or memory, and should treat them as changed.

Signed-off-by: Kees Cook
---
v3:
- added missing EXPORT_SYMBOL()s
- remove always-OR, instead doing an early OR in secondary startup (tglx)
v2:
- move setup until after CPU feature detection and selection.
- refactor to use static branches to have atomic enabling.
- only perform the "or" after a failed check.
---
 arch/x86/include/asm/special_insns.h | 22 +++++++++++++++++++++-
 arch/x86/kernel/cpu/common.c         | 20 ++++++++++++++++++++
 arch/x86/kernel/smpboot.c            |  8 +++++++-
 3 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 0a3c4cab39db..c8c8143ab27b 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -6,6 +6,8 @@
 #ifdef __KERNEL__

 #include
+#include
+#include

 /*
  * Volatile isn't enough to prevent the compiler from reordering the
@@ -16,6 +18,10 @@
  */
 extern unsigned long __force_order;

+/* Starts false and gets enabled once CPU feature detection is done. */
+DECLARE_STATIC_KEY_FALSE(cr_pinning);
+extern unsigned long cr4_pinned_bits;
+
 static inline unsigned long native_read_cr0(void)
 {
 	unsigned long val;
@@ -74,7 +80,21 @@ static inline unsigned long native_read_cr4(void)

 static inline void native_write_cr4(unsigned long val)
 {
-	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
+	unsigned long bits_missing = 0;
+
+set_register:
+	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
+
+	if (static_branch_likely(&cr_pinning)) {
+		if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
+			bits_missing = ~val & cr4_pinned_bits;
+			val |= bits_missing;
+			goto set_register;
+		}
+		/* Warn after we've set the missing bits.
+		 */
+		WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
+			  bits_missing);
+	}
 }

 #ifdef CONFIG_X86_64
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2c57fffebf9b..c578addfcf8a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -366,6 +366,25 @@ static __always_inline void setup_umip(struct cpuinfo_x86 *c)
 	cr4_clear_bits(X86_CR4_UMIP);
 }

+DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
+EXPORT_SYMBOL(cr_pinning);
+unsigned long cr4_pinned_bits __ro_after_init;
+EXPORT_SYMBOL(cr4_pinned_bits);
+
+/*
+ * Once CPU feature detection is finished (and boot params have been
+ * parsed), record any of the sensitive CR bits that are set, and
+ * enable CR pinning.
+ */
+static void __init setup_cr_pinning(void)
+{
+	unsigned long mask;
+
+	mask = (X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP);
+	cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
+	static_key_enable(&cr_pinning.key);
+}
+
 /*
  * Protection Keys are not available in 32-bit mode.
  */
@@ -1464,6 +1483,7 @@ void __init identify_boot_cpu(void)
 	enable_sep_cpu();
 #endif
 	cpu_detect_tlb(&boot_cpu_data);
+	setup_cr_pinning();
 }

 void identify_secondary_cpu(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 362dd8953f48..1af7a2d89419 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -205,13 +205,19 @@ static int enable_start_cpu0;
  */
 static void notrace start_secondary(void *unused)
 {
+	unsigned long cr4 = __read_cr4();
+
 	/*
 	 * Don't put *anything* except direct CPU state initialization
 	 * before cpu_init(), SMP booting is too fragile that we want to
 	 * limit the things done here to the most necessary things.
 	 */
 	if (boot_cpu_has(X86_FEATURE_PCID))
-		__write_cr4(__read_cr4() | X86_CR4_PCIDE);
+		cr4 |= X86_CR4_PCIDE;
+	if (static_branch_likely(&cr_pinning))
+		cr4 |= cr4_pinned_bits;
+
+	__write_cr4(cr4);

 #ifdef CONFIG_X86_32
 	/* switch away from the initial page table */

From patchwork Tue Jun 18 04:55:03 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 11000889
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Linus Torvalds, x86@kernel.org, Peter Zijlstra, Dave Hansen, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v3 3/3] x86/asm: Pin sensitive CR0 bits
Date: Mon, 17 Jun 2019 21:55:03 -0700
Message-Id: <20190618045503.39105-4-keescook@chromium.org>
In-Reply-To: <20190618045503.39105-1-keescook@chromium.org>
References: <20190618045503.39105-1-keescook@chromium.org>

With sensitive CR4 bits pinned now, it's possible that the WP bit for
CR0 might become a target as well. Following the same reasoning as for
the CR4 pinning, this pins CR0's WP bit (but this pin can be a static
value).
Suggested-by: Peter Zijlstra
Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index c8c8143ab27b..b2e84d113f2a 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -31,7 +31,20 @@ static inline unsigned long native_read_cr0(void)

 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	unsigned long bits_missing = 0;
+
+set_register:
+	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
+
+	if (static_branch_likely(&cr_pinning)) {
+		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
+			bits_missing = X86_CR0_WP;
+			val |= bits_missing;
+			goto set_register;
+		}
+		/* Warn after we've set the missing bits. */
+		WARN_ONCE(bits_missing, "CR0 WP bit went missing!?\n");
+	}
 }

 static inline unsigned long native_read_cr2(void)
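Both write paths in this series follow the same check-after-write pattern. A minimal user-space sketch of that pattern (with a plain variable standing in for the privileged register, and illustrative names/values, since the real code writes CR0/CR4 via inline asm):

```c
#include <stdio.h>

#define PINNED_BITS 0x100000UL	/* e.g. X86_CR4_SMEP; illustrative value */

static unsigned long fake_cr;	/* stands in for the control register */

/* Sketch of the check-after-write pinning pattern from the series. */
static void write_cr_pinned(unsigned long val)
{
	unsigned long bits_missing = 0;

set_register:
	fake_cr = val;			/* the "mov %0,%%crN" step */

	if ((val & PINNED_BITS) != PINNED_BITS) {
		bits_missing = ~val & PINNED_BITS;
		val |= bits_missing;
		goto set_register;	/* redo the write with bits restored */
	}
	if (bits_missing)		/* a write dropped pinned bits */
		fprintf(stderr, "CR bits went missing: %lx!?\n", bits_missing);
}
```

Because the check runs after the register write, even an attacker who jumps straight to the mov instruction still has the pinned bits restored (and warned about) on the next legitimate write path through the function.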