From patchwork Wed Feb 27 20:01:30 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10832285
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Solar Designer, Greg KH, Jann Horn,
 Sean Christopherson, Dominik Brodowski, linux-kernel@vger.kernel.org,
 Kernel Hardening
Subject: [PATCH v2 1/3] x86/asm: Pin sensitive CR0 bits
Date: Wed, 27 Feb 2019 12:01:30 -0800
Message-Id: <20190227200132.24707-2-keescook@chromium.org>
In-Reply-To: <20190227200132.24707-1-keescook@chromium.org>
References: <20190227200132.24707-1-keescook@chromium.org>

With sensitive CR4 bits now pinned, CR0's WP bit may become an attack
target as well. Following the same reasoning as the CR4 pinning, this
pins CR0's WP bit (though here the pin can be a static value rather
than one computed at boot).
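(For reference, the bit being pinned is CR0 bit 16, as defined in
arch/x86/include/uapi/asm/processor-flags.h:

	#define X86_CR0_WP_BIT	16 /* Write Protect */
	#define X86_CR0_WP	_BITUL(X86_CR0_WP_BIT)

With WP clear, supervisor-mode writes ignore page-table write
protection, which is why the bit is an attractive target.)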
As before, to convince the compiler not to optimize away the check of
the WP bit after the write, this marks "val" as an output of the asm()
block. This protects against jumping into the function past the point
where the masking happens: the mask must be verified to have been
applied after the write is performed. Due to how the compiler may build
this function (especially with frame pointers removed), jumping into
the middle of it frequently requires no stack manipulation to construct
a stack frame (there may be only a retq without pops, which is
sufficient for use with exploits like timer overwrites). Additionally,
the WARN() is issued only after the bit has been restored, to minimize
any race window in which the bit is left unset.

Suggested-by: Peter Zijlstra
Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index fabda1400137..1f01dc3f6c64 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -25,7 +25,28 @@ static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	bool warn = false;
+
+again:
+	val |= X86_CR0_WP;
+	/*
+	 * In order to have the compiler not optimize away the check
+	 * after the cr0 write, mark "val" as being also an output ("+r")
+	 * of this asm() block so it will perform an explicit check, as
+	 * if it were "volatile".
+	 */
+	asm volatile("mov %0,%%cr0" : "+r" (val) : "m" (__force_order) : );
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the WP bit for sure. And do it
+	 * without first taking the exception for a WARN().
+	 */
+	if ((val & X86_CR0_WP) != X86_CR0_WP) {
+		warn = true;
+		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
 }
 
 static inline unsigned long native_read_cr2(void)
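For anyone studying the constraint trick outside the kernel, here is a
minimal user-space sketch (illustrative only, not part of the patch;
"shadow", "PIN_BIT", and "write_reg" are made-up names). With a plain
input constraint the compiler could assume "val" still has the pinned
bit set after the asm() and fold the re-check away; listing "val" as an
output ("+r") forces the check to actually be emitted:

	#include <stdio.h>

	#define PIN_BIT 0x1UL		/* stands in for X86_CR0_WP */

	static unsigned long shadow;	/* stands in for %cr0 */

	static void write_reg(unsigned long val)
	{
	again:
		val |= PIN_BIT;
		/*
		 * "+r" tells the compiler the asm may have modified
		 * "val", so the bit test below cannot be optimized out.
		 */
		asm volatile("mov %0, %1" : "+r" (val), "=m" (shadow));
		if ((val & PIN_BIT) != PIN_BIT)
			goto again;	/* re-pin, as the patch does */
	}

	int main(void)
	{
		write_reg(0x10);
		printf("shadow = %#lx\n", shadow);	/* 0x11 */
		return 0;
	}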