From patchwork Tue Feb 26 23:36:45 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10830929
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Jann Horn, Sean Christopherson,
    Dominik Brodowski, Kernel Hardening, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] x86/asm: Pin sensitive CR0 bits
Date: Tue, 26 Feb 2019 15:36:45 -0800
Message-Id: <20190226233647.28547-2-keescook@chromium.org>
In-Reply-To: <20190226233647.28547-1-keescook@chromium.org>
References: <20190226233647.28547-1-keescook@chromium.org>

With sensitive CR4 bits pinned now, it's possible that the WP bit for
CR0 might become a target as well. Following the same reasoning as for
the CR4 pinning, this pins CR0's WP bit (but this can be done with a
static value).
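For background, CR0.WP is what makes read-only kernel mappings actually
read-only for supervisor-mode writes, so a single register write that
clears it is an attractive target for an attacker who already controls a
call or ROP gadget. A minimal sketch of the primitive this pinning is
meant to block (illustrative only; assumes kernel context and uses the
existing native_read_cr0()/native_write_cr0() helpers and X86_CR0_WP
from the kernel headers, not code from this series):

	unsigned long cr0 = native_read_cr0();

	/*
	 * With WP cleared, ring-0 writes ignore page-level write
	 * protection, so read-only data (e.g. kernel text) can be
	 * patched in place.
	 */
	native_write_cr0(cr0 & ~X86_CR0_WP);	/* would-be attacker gadget */
	/* ... attacker-controlled write to a read-only mapping ... */
	native_write_cr0(cr0);			/* restore WP */

With the patch below, the first call has WP ORed back in before the
value reaches CR0, so the bit never actually clears.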
As before, to convince the compiler not to optimize away the check for
the WP bit after the set, this marks "val" as an output from the asm()
block. This protects against just jumping into the function past where
the masking happens; we must check that the mask was applied after we
do the set. Due to how this function can be built by the compiler
(especially due to the removal of frame pointers), jumping into the
middle of the function frequently doesn't require stack manipulation to
construct a stack frame (there may be only a retq without pops, which
is sufficient for use with exploits like timer overwrites).

Additionally, this avoids WARN()ing before resetting the bit, to
minimize any race window where the bit is left unset.

Suggested-by: Peter Zijlstra
Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index fabda1400137..8416d6b31084 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -25,7 +25,28 @@ static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	bool warn = false;
+
+again:
+	val |= X86_CR0_WP;
+	/*
+	 * In order to have the compiler not optimize away the check
+	 * in the WARN_ONCE(), mark "val" as being also an output ("+r")
+	 * by this asm() block so it will perform an explicit check, as
+	 * if it were "volatile".
+	 */
+	asm volatile("mov %0,%%cr0": "+r" (val) : "m" (__force_order) : );
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the WP bit for sure. And do it
+	 * without first taking the exception for a WARN().
+	 */
+	if ((val & X86_CR0_WP) != X86_CR0_WP) {
+		warn = true;
+		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
 }
 
 static inline unsigned long native_read_cr2(void)
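The output-operand reasoning above can be checked outside the kernel. A
minimal userspace sketch (not part of this series; PIN_BIT and
fake_write() are made-up names for illustration) showing why the "+r"
constraint keeps the post-write check alive:

	#include <stdio.h>

	#define PIN_BIT 0x00010000UL	/* stand-in for X86_CR0_WP */

	static unsigned long fake_write(unsigned long val)
	{
		val |= PIN_BIT;
		/*
		 * "+r" tells the compiler the asm statement may have
		 * modified "val", so the check below must actually be
		 * performed at run time.  With a plain input constraint,
		 * the compiler could assume the bit it just ORed in is
		 * still set and delete the branch entirely.
		 */
		asm volatile("" : "+r" (val));
		if ((val & PIN_BIT) != PIN_BIT)
			puts("pin bit missing, would retry");
		return val;
	}

	int main(void)
	{
		printf("%#lx\n", fake_write(0));
		return 0;
	}

With optimization enabled, comparing the generated code with and without
the "+r" (e.g. via objdump -d) typically shows the test and branch
surviving only in the "+r" version.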
From patchwork Tue Feb 26 23:36:46 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10830931
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Jann Horn, Sean Christopherson,
    Dominik Brodowski, Kernel Hardening, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] x86/asm: Avoid taking an exception before cr4 restore
Date: Tue, 26 Feb 2019 15:36:46 -0800
Message-Id: <20190226233647.28547-3-keescook@chromium.org>
In-Reply-To: <20190226233647.28547-1-keescook@chromium.org>
References: <20190226233647.28547-1-keescook@chromium.org>
Instead of taking a full WARN() exception before restoring a
potentially missed CR4 bit, this retains the missing bit for later
reporting. This matches the logic done for the CR0 pinning.

Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 8416d6b31084..6f649eaecc73 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -97,6 +97,8 @@ extern volatile unsigned long cr4_pin;
 
 static inline void native_write_cr4(unsigned long val)
 {
+	unsigned long warn = 0;
+
 again:
 	val |= cr4_pin;
 	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
@@ -105,10 +107,12 @@ static inline void native_write_cr4(unsigned long val)
 	 * notice the lack of pinned bits in "val" and start the function
 	 * from the beginning to gain the cr4_pin bits for sure.
 	 */
-	if (WARN_ONCE((val & cr4_pin) != cr4_pin,
-		      "Attempt to unpin cr4 bits: %lx, cr4 bypass attack?!",
-		      ~val & cr4_pin))
+	if ((val & cr4_pin) != cr4_pin) {
+		warn = ~val & cr4_pin;
 		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin cr4 bits: %lx; bypass attack?!\n",
+		  warn);
 }
 
 #ifdef CONFIG_X86_64
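For readability, the resulting native_write_cr4() after this patch looks
roughly like the following. This is only a reconstruction assembled from
the hunks above (the block comment is abbreviated to the context lines
the hunk shows), not a new implementation:

	static inline void native_write_cr4(unsigned long val)
	{
		unsigned long warn = 0;

	again:
		val |= cr4_pin;
		asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
		/*
		 * ... notice the lack of pinned bits in "val" and start the
		 * function from the beginning to gain the cr4_pin bits for
		 * sure.
		 */
		if ((val & cr4_pin) != cr4_pin) {
			warn = ~val & cr4_pin;
			goto again;
		}
		WARN_ONCE(warn, "Attempt to unpin cr4 bits: %lx; bypass attack?!\n",
			  warn);
	}

The corrective write happens before the WARN_ONCE(), so the window in
which an unpinned CR4 value is live stays as small as possible.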
From patchwork Tue Feb 26 23:36:47 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10830935
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Jann Horn, Sean Christopherson,
    Dominik Brodowski, Kernel Hardening, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] lkdtm: Check for SMEP clearing protections
Date: Tue, 26 Feb 2019 15:36:47 -0800
Message-Id: <20190226233647.28547-4-keescook@chromium.org>
In-Reply-To: <20190226233647.28547-1-keescook@chromium.org>
References: <20190226233647.28547-1-keescook@chromium.org>

This adds an x86-specific test for pinned cr4 bits. A successful test
will validate pinning and check the ROP-style call-middle-of-function
defense, if needed. For example, in the case of native_write_cr4()
looking like this:

ffffffff8171bce0 <native_write_cr4>:
ffffffff8171bce0:       48 8b 35 79 46 f2 00    mov    0xf24679(%rip),%rsi
ffffffff8171bce7:       48 09 f7                or     %rsi,%rdi
ffffffff8171bcea:       0f 22 e7                mov    %rdi,%cr4
...
ffffffff8171bd5a:       c3                      retq

The UNSET_SMEP test will jump to ffffffff8171bcea (the mov to cr4)
instead of ffffffff8171bce0 (native_write_cr4() entry) to simulate a
direct-call bypass attempt.

Expected successful results:

# echo UNSET_SMEP > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[   79.594433] lkdtm: Performing direct entry UNSET_SMEP
[   79.596459] lkdtm: trying to clear SMEP normally
[   79.598406] lkdtm: ok: SMEP did not get cleared
[   79.599981] lkdtm: trying to clear SMEP with call gadget
[   79.601810] ------------[ cut here ]------------
[   79.603421] Attempt to unpin cr4 bits: 100000; bypass attack?!
...
[   79.650170] ---[ end trace 2452ca0f6126242e ]---
[   79.650937] lkdtm: ok: SMEP removal was reverted

Signed-off-by: Kees Cook
---
 drivers/misc/lkdtm/bugs.c  | 60 ++++++++++++++++++++++++++++++++++++++
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 62 insertions(+)

diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index 7eebbdfbcacd..c79b43a5ba34 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -255,3 +255,63 @@ void lkdtm_STACK_GUARD_PAGE_TRAILING(void)
 
 	pr_err("FAIL: accessed page after stack!\n");
 }
+
+void lkdtm_UNSET_SMEP(void)
+{
+#ifdef CONFIG_X86_64
+	void (*direct_write_cr4)(unsigned long val);
+	unsigned char *insn;
+	unsigned long cr4;
+	int i;
+
+	cr4 = native_read_cr4();
+
+	if ((cr4 & X86_CR4_SMEP) != X86_CR4_SMEP) {
+		pr_err("FAIL: SMEP not in use\n");
+		return;
+	}
+	cr4 &= ~(X86_CR4_SMEP);
+
+	pr_info("trying to clear SMEP normally\n");
+	native_write_cr4(cr4);
+	if (cr4 == native_read_cr4()) {
+		pr_err("FAIL: pinning SMEP failed!\n");
+		cr4 |= X86_CR4_SMEP;
+		pr_info("restoring SMEP\n");
+		native_write_cr4(cr4);
+		return;
+	}
+	pr_info("ok: SMEP did not get cleared\n");
+
+	/*
+	 * To test the post-write pinning verification we need to call
+	 * directly into the middle of native_write_cr4() where the
+	 * cr4 write happens, skipping the pinning. This searches for
+	 * the cr4 writing instruction.
+	 */
+	insn = (unsigned char *)native_write_cr4;
+	for (i = 0; i < 64; i++) {
+		/* mov %rdi, %cr4 */
+		if (insn[i] == 0x0f && insn[i+1] == 0x22 && insn[i+2] == 0xe7)
+			break;
+	}
+	if (i >= 64) {
+		pr_info("ok: cannot locate cr4 writing call gadget\n");
+		return;
+	}
+	direct_write_cr4 = (void *)(insn + i);
+
+	pr_info("trying to clear SMEP with call gadget\n");
+	direct_write_cr4(cr4);
+	if (native_read_cr4() & X86_CR4_SMEP) {
+		pr_info("ok: SMEP removal was reverted\n");
+	} else {
+		pr_err("FAIL: cleared SMEP not detected!\n");
+		cr4 |= X86_CR4_SMEP;
+		pr_info("restoring SMEP\n");
+		native_write_cr4(cr4);
+	}
+#else
+	pr_err("FAIL: this test is x86_64-only\n");
+#endif
+}
diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index 2837dc77478e..fd668776414b 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -132,6 +132,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(CORRUPT_LIST_ADD),
 	CRASHTYPE(CORRUPT_LIST_DEL),
 	CRASHTYPE(CORRUPT_USER_DS),
+	CRASHTYPE(UNSET_SMEP),
 	CRASHTYPE(CORRUPT_STACK),
 	CRASHTYPE(CORRUPT_STACK_STRONG),
 	CRASHTYPE(STACK_GUARD_PAGE_LEADING),
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 3c6fd327e166..9c78d7e21c13 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -26,6 +26,7 @@ void lkdtm_CORRUPT_LIST_DEL(void);
 void lkdtm_CORRUPT_USER_DS(void);
 void lkdtm_STACK_GUARD_PAGE_LEADING(void);
 void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
+void lkdtm_UNSET_SMEP(void);
 
 /* lkdtm_heap.c */
 void lkdtm_OVERWRITE_ALLOCATION(void);