From patchwork Thu Sep 3 20:30:27 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11755061
Date: Thu, 3 Sep 2020 13:30:27 -0700
Subject: [PATCH v2 02/28] x86/asm: Replace __force_order with memory clobber
From: Sami Tolvanen
To: Masahiro Yamada, Will Deacon
Message-Id: <20200903203053.3411268-3-samitolvanen@google.com>
In-Reply-To: <20200903203053.3411268-1-samitolvanen@google.com>
References: <20200624203200.78870-1-samitolvanen@google.com>
 <20200903203053.3411268-1-samitolvanen@google.com>
Cc: linux-arch@vger.kernel.org, x86@kernel.org, Kees Cook,
 "Paul E. McKenney", kernel-hardening@lists.openwall.com, Peter Zijlstra,
 Greg Kroah-Hartman, linux-kbuild@vger.kernel.org, Nick Desaulniers,
 linux-kernel@vger.kernel.org, Steven Rostedt,
 clang-built-linux@googlegroups.com, Arvind Sankar, linux-pci@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

From: Arvind Sankar

The CRn accessor functions use __force_order as a dummy operand to
prevent the compiler from reordering CRn reads/writes with respect to
each other. The fact that the asm is volatile should be enough to
prevent this: volatile asm statements should be executed in program
order. However, GCC 4.9.x and 5.x have a bug that might result in
reordering. This was fixed in GCC 8.1, 7.3 and 6.5; versions prior to
those releases may reorder volatile asm statements with respect to
each other.

There are some issues with __force_order as implemented:

- It is used only as an input operand for the write functions, and
  hence doesn't do anything additional to prevent reordering writes.

- It allows memory accesses to be cached/reordered across write
  functions, but CRn writes affect the semantics of memory accesses, so
  this could be dangerous.

- __force_order is not actually defined in the kernel proper, but the
  LLVM toolchain can in some cases require a definition: LLVM (as well
  as GCC 4.9) requires it for PIE code, which is why the compressed
  kernel has a definition, but also the clang integrated assembler may
  consider the address of __force_order to be significant, resulting in
  a reference that requires a definition.

Fix this by:

- Using a memory clobber for the write functions to additionally
  prevent caching/reordering memory accesses across CRn writes.

- Using a dummy input operand with an arbitrary constant address for
  the read functions, instead of a global variable. This will prevent
  reads from being reordered across writes, while allowing memory loads
  to be cached/reordered across CRn reads, which should be safe. (A
  standalone sketch of the resulting pattern appears below, after the
  tag block.)

Signed-off-by: Arvind Sankar
Tested-by: Nathan Chancellor
Tested-by: Sedat Dilek
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/
Reviewed-by: Kees Cook
---
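For reference, a minimal sketch of the two constraint idioms the diff
below introduces (illustrative only: the __FORCE_ORDER definition is
taken verbatim from the patch, the *_example function names are
hypothetical, and CR0 moves require ring 0 to actually execute):

  #define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)

  static inline unsigned long read_cr0_example(void)
  {
          unsigned long val;

          /*
           * Dummy "m" input: names an arbitrary address (0x1000) that
           * is never dereferenced. It orders this read against any asm
           * statement carrying a "memory" clobber, without forcing
           * unrelated loads to be flushed and reloaded around the read.
           */
          asm volatile("mov %%cr0,%0" : "=r" (val) : __FORCE_ORDER);
          return val;
  }

  static inline void write_cr0_example(unsigned long val)
  {
          /*
           * "memory" clobber: the compiler must assume this statement
           * reads and writes memory, so no memory access can be cached
           * or moved across the CR0 write.
           */
          asm volatile("mov %0,%%cr0" : : "r" (val) : "memory");
  }

Using a constant address instead of a global variable means no symbol
needs to be defined, which is what made __force_order awkward for
LLVM/PIE builds in the first place.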
 arch/x86/boot/compressed/pgtable_64.c |  9 ---------
 arch/x86/include/asm/special_insns.h  | 28 ++++++++++++++-------------
 arch/x86/kernel/cpu/common.c          |  4 ++--
 3 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index c8862696a47b..7d0394f4ebf9 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -5,15 +5,6 @@
 #include "pgtable.h"
 #include "../string.h"
 
-/*
- * __force_order is used by special_insns.h asm code to force instruction
- * serialization.
- *
- * It is not referenced from the code, but GCC < 5 with -fPIE would fail
- * due to an undefined symbol. Define it to make these ancient GCCs work.
- */
-unsigned long __force_order;
-
 #define BIOS_START_MIN		0x20000U	/* 128K, less than this is insane */
 #define BIOS_START_MAX		0x9f000U	/* 640K, absolute maximum */
 
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 59a3e13204c3..d6e3bb9363d2 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -11,45 +11,47 @@
 #include <linux/jump_label.h>
 
 /*
- * Volatile isn't enough to prevent the compiler from reordering the
- * read/write functions for the control registers and messing everything up.
- * A memory clobber would solve the problem, but would prevent reordering of
- * all loads stores around it, which can hurt performance. Solution is to
- * use a variable and mimic reads and writes to it to enforce serialization
+ * The compiler should not reorder volatile asm statements with respect to each
+ * other: they should execute in program order. However GCC 4.9.x and 5.x have
+ * a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder
+ * volatile asm. The write functions are not affected since they have memory
+ * clobbers preventing reordering. To prevent reads from being reordered with
+ * respect to writes, use a dummy memory operand.
  */
-extern unsigned long __force_order;
+
+#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
 
 void native_write_cr0(unsigned long val);
 
 static inline unsigned long native_read_cr0(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static __always_inline unsigned long native_read_cr2(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static __always_inline void native_write_cr2(unsigned long val)
 {
-	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+	asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
 }
 
 static inline unsigned long __native_read_cr3(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static inline void native_write_cr3(unsigned long val)
 {
-	asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+	asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
 }
 
 static inline unsigned long native_read_cr4(void)
@@ -64,10 +66,10 @@ static inline unsigned long native_read_cr4(void)
 	asm volatile("1: mov %%cr4, %0\n"
 		     "2:\n"
 		     _ASM_EXTABLE(1b, 2b)
-		     : "=r" (val), "=m" (__force_order) : "0" (0));
+		     : "=r" (val) : "0" (0), __FORCE_ORDER);
 #else
 	/* CR4 always exists on x86_64. */
-	asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 #endif
 	return val;
 }
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c5d6f17d9b9d..178499f90366 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -359,7 +359,7 @@ void native_write_cr0(unsigned long val)
 	unsigned long bits_missing = 0;
 
 set_register:
-	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
+	asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
 
 	if (static_branch_likely(&cr_pinning)) {
 		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
@@ -378,7 +378,7 @@ void native_write_cr4(unsigned long val)
 	unsigned long bits_changed = 0;
 
 set_register:
-	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
+	asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
 
 	if (static_branch_likely(&cr_pinning)) {
 		if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
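As a closing illustration of why the write-side "memory" clobber
matters (a hypothetical caller, not from the patch; the name
switch_mm_example and the parameter p are made up): a CR3 write
switches the address space, so memory values must not stay cached
across it.

  static void switch_mm_example(unsigned long new_cr3, int *p)
  {
          int before, after;

          before = *p;               /* load; may stay in a register */

          /*
           * native_write_cr3() now clobbers "memory", so the compiler
           * must discard any cached memory values across it...
           */
          native_write_cr3(new_cr3);

          after = *p;                /* ...and must re-read *p here
                                      * rather than reusing 'before' */
          (void)before;
          (void)after;
  }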