From patchwork Fri Jan 12 00:46:45 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10160247
From: Dan Williams
To: linux-kernel@vger.kernel.org
Cc: Mark Rutland, Tom Lendacky, linux-arch@vger.kernel.org, Greg KH,
 Peter Zijlstra, Alan Cox, x86@kernel.org, Ingo Molnar, "H. Peter Anvin",
 kernel-hardening@lists.openwall.com, tglx@linutronix.de,
 torvalds@linux-foundation.org, akpm@linux-foundation.org,
 Elena Reshetova, alan@linux.intel.com
Date: Thu, 11 Jan 2018 16:46:45 -0800
Message-ID: <151571800589.27429.13615996439124092232.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <151571798296.27429.7166552848688034184.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <151571798296.27429.7166552848688034184.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
Subject: [kernel-hardening] [PATCH v2 04/19] x86: implement ifence()

The new barrier, 'ifence', ensures that no instructions past the
boundary are speculatively executed. Previously the kernel only needed
this fence in 'rdtsc_ordered', but it can also serve as a mitigation
against Spectre variant 1 attacks, which speculatively access memory
past an array bounds check.

'ifence', via 'ifence_array_ptr', is an opt-in fallback to the default
mitigation provided by '__array_ptr'. It is also proposed for blocking
speculation in the 'get_user' path that would otherwise bypass
'access_ok' checks.

For now, just provide the common definition for later patches to build
upon.

Suggested-by: Peter Zijlstra
Suggested-by: Alan Cox
Cc: Tom Lendacky
Cc: Mark Rutland
Cc: Greg KH
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Signed-off-by: Elena Reshetova
Signed-off-by: Dan Williams
---
 arch/x86/include/asm/barrier.h |    4 ++++
 arch/x86/include/asm/msr.h     |    3 +--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 7fb336210e1b..b04f572d6d97 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -24,6 +24,10 @@
 #define wmb()	asm volatile("sfence" ::: "memory")
 #endif
 
+/* prevent speculative execution past this barrier */
+#define ifence() alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC, \
+				   "lfence", X86_FEATURE_LFENCE_RDTSC)
+
 #ifdef CONFIG_X86_PPRO_FENCE
 #define dma_rmb()	rmb()
 #else
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 07962f5f6fba..e426d2a33ff3 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -214,8 +214,7 @@ static __always_inline unsigned long long rdtsc_ordered(void)
 	 * that some other imaginary CPU is updating continuously with a
 	 * time stamp.
 	 */
-	alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC,
-		      "lfence", X86_FEATURE_LFENCE_RDTSC);
+	ifence();
 
 	return rdtsc();
 }