From patchwork Sat Sep 30 11:56:04 2017
X-Patchwork-Submitter: Borislav Petkov
X-Patchwork-Id: 9979461
Date: Sat, 30 Sep 2017 13:56:04 +0200
From: Borislav Petkov
To: Brijesh Singh, Tom Lendacky
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Paolo Bonzini,
    Radim Krčmář, kvm@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH] x86/CPU/AMD, mm: Extend with mem_encrypt=sme option
Message-ID: <20170930115604.w4g3oplmyo5oyjeq@pd.tnic>
In-Reply-To: <20170929230652.37821-1-brijesh.singh@amd.com>
References: <20170928090242.ber7gynwaldinafa@pd.tnic>
 <20170929230652.37821-1-brijesh.singh@amd.com>
X-Mailing-List: kvm@vger.kernel.org

On Fri, Sep 29, 2017 at 06:06:52PM -0500, Brijesh Singh wrote:
> mem_encrypt=on activates both SME and SEV. Add a new argument to
> disable SEV and allow SME only. The argument can be useful when SEV
> has issues and we want to disable it.
>
> early_detect_mem_encrypt() [cpu/amd.c] will need to know the state of
> the mem_encrypt= argument. Since early_detect_mem_encrypt() is not
> defined as __init, we are not able to use the 'boot_command_line'
> variable to parse the cmdline argument. Introduce a new function,
> me_cmdline_state(), to get the cmdline state from mem_encrypt.c.
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Paolo Bonzini
> Cc: "Radim Krčmář"
> Cc: Borislav Petkov
> Cc: kvm@vger.kernel.org
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: Tom Lendacky
> Signed-off-by: Brijesh Singh
> ---

Ok, I went and simplified this whole code path a bit because it was
needlessly complex. Below is the result, only compile-tested.

Brijesh, Tom, guys, please check my logic, I might've missed a case.

Thanks.

---
From: Borislav Petkov
Date: Sat, 30 Sep 2017 13:33:26 +0200
Subject: [PATCH] x86/CPU/AMD, mm: Extend with mem_encrypt=sme option

Extend the mem_encrypt= cmdline option with the "sme" argument so that
one can enable SME only (i.e., this serves as a SEV chicken bit).
While at it, streamline and document the flow logic here:

 1. Check whether the SME CPUID leaf is present

 2. Check whether the HW has enabled SME/SEV

 3. Only *then* look at any potential command line params because doing
    so before is pointless:

    3.1 mem_encrypt=on  - enable both SME/SEV
    3.2 mem_encrypt=sme - enable only SME
    3.3 mem_encrypt=off - disable both

In addition, CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT enables both if
the kernel is built with it enabled.

While at it, shorten variable names and simplify code flow.

This is based on a patch by Brijesh Singh.

Signed-off-by: Borislav Petkov
Cc: Brijesh Singh
Cc: Tom Lendacky
Cc: kvm@vger.kernel.org
Cc: x86@kernel.org
---
 arch/x86/include/asm/mem_encrypt.h |  2 +
 arch/x86/kernel/cpu/amd.c          |  6 +++
 arch/x86/mm/mem_encrypt.c          | 82 +++++++++++++++++++-------------------
 3 files changed, 49 insertions(+), 41 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 3ba68c92be1b..175310f00202 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -19,6 +19,8 @@
 
 #include
 
+extern bool sev_enabled;
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern u64 sme_me_mask;
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c1234aa0550c..d0669f3966a6 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_X86_64
 # include
@@ -32,6 +33,8 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);
  */
 static u32 nodes_per_socket = 1;
 
+bool sev_enabled __section(.data) = false;
+
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
 	u32 gprs[8] = { 0 };
@@ -588,6 +591,9 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
 	if (IS_ENABLED(CONFIG_X86_32))
 		goto clear_all;
 
+	if (!sev_enabled)
+		goto clear_sev;
+
 	rdmsrl(MSR_K7_HWCR, msr);
 	if (!(msr & MSR_K7_HWCR_SMMLOCK))
 		goto clear_sev;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 057417a3d9b4..9b83bc1be7c0 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -27,12 +27,14 @@
 #include
 #include
 #include
+#include
 
 #include "mm_internal.h"
 
-static char sme_cmdline_arg[] __initdata = "mem_encrypt";
-static char sme_cmdline_on[] __initdata = "on";
-static char sme_cmdline_off[] __initdata = "off";
+static char sme_cmd[] __initdata = "mem_encrypt";
+static char sme_cmd_on[] __initdata = "on";
+static char sme_cmd_off[] __initdata = "off";
+static char sme_cmd_sme[] __initdata = "sme";
 
 /*
  * Since SME related variables are set early in the boot process they must
@@ -44,8 +46,6 @@
 EXPORT_SYMBOL_GPL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(__sev);
 EXPORT_SYMBOL_GPL(__sev);
 
-static bool sev_enabled __section(.data) = false;
-
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
 
@@ -768,13 +768,13 @@ void __init sme_encrypt_kernel(void)
 
 void __init __nostackprotector sme_enable(struct boot_params *bp)
 {
-	const char *cmdline_ptr, *cmdline_arg, *cmdline_on, *cmdline_off;
+	const char *cmdline_ptr, *cmd, *cmd_on, *cmd_off, *cmd_sme;
 	unsigned int eax, ebx, ecx, edx;
 	unsigned long feature_mask;
-	bool active_by_default;
-	unsigned long me_mask;
+	u64 me_mask, msr;
 	char buffer[16];
-	u64 msr;
+	bool sme_only;
+	int ret;
 
 	/* Check for the SME/SEV support leaf */
 	eax = 0x80000000;
@@ -808,55 +808,55 @@ void __init __nostackprotector sme_enable(struct boot_params *bp)
 	if (!(eax & feature_mask))
 		return;
 
-	me_mask = 1UL << (ebx & 0x3f);
-
-	/* Check if memory encryption is enabled */
+	/* For SME, check the SYSCFG MSR */
 	if (feature_mask == AMD_SME_BIT) {
-		/* For SME, check the SYSCFG MSR */
 		msr = __rdmsr(MSR_K8_SYSCFG);
 		if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
 			return;
-	} else {
-		/* For SEV, check the SEV MSR */
+	}
+
+	/* For SEV, check the SEV MSR */
+	if (feature_mask == AMD_SEV_BIT) {
 		msr = __rdmsr(MSR_AMD64_SEV);
 		if (!(msr & MSR_AMD64_SEV_ENABLED))
 			return;
-
-		/* SEV state cannot be controlled by a command line option */
-		sme_me_mask = me_mask;
-		sev_enabled = true;
-		return;
 	}
 
+	me_mask = BIT_ULL(ebx & 0x3f);
+
 	/*
 	 * Fixups have not been applied to phys_base yet and we're running
 	 * identity mapped, so we must obtain the address to the SME command
 	 * line argument data using rip-relative addressing.
	 */
-	asm ("lea sme_cmdline_arg(%%rip), %0"
-	     : "=r" (cmdline_arg)
-	     : "p" (sme_cmdline_arg));
-	asm ("lea sme_cmdline_on(%%rip), %0"
-	     : "=r" (cmdline_on)
-	     : "p" (sme_cmdline_on));
-	asm ("lea sme_cmdline_off(%%rip), %0"
-	     : "=r" (cmdline_off)
-	     : "p" (sme_cmdline_off));
-
-	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
-		active_by_default = true;
-	else
-		active_by_default = false;
+	asm ("lea sme_cmd(%%rip), %0" : "=r" (cmd) : "p" (sme_cmd));
+	asm ("lea sme_cmd_on(%%rip), %0" : "=r" (cmd_on) : "p" (sme_cmd_on));
+	asm ("lea sme_cmd_off(%%rip), %0" : "=r" (cmd_off) : "p" (sme_cmd_off));
+	asm ("lea sme_cmd_sme(%%rip), %0" : "=r" (cmd_sme) : "p" (sme_cmd_sme));
 
 	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
-				     ((u64)bp->ext_cmd_line_ptr << 32));
+				     ((u64)bp->ext_cmd_line_ptr << 32));
 
-	cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer));
+	ret = cmdline_find_option(cmdline_ptr, cmd, buffer, sizeof(buffer));
+	if (ret < 0)
+		return;
 
-	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
-		sme_me_mask = me_mask;
-	else if (!strncmp(buffer, cmdline_off, sizeof(buffer)))
+	if (!strncmp(buffer, cmd_off, sizeof(buffer))) {
 		sme_me_mask = 0;
-	else
-		sme_me_mask = active_by_default ? me_mask : 0;
+		return;
+	} else if (!strncmp(buffer, cmd_on, sizeof(buffer))) {
+		sme_me_mask = me_mask;
+	} else if (!strncmp(buffer, cmd_sme, sizeof(buffer))) {
+		sme_only = true;
+	}
+
+	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
+		sme_me_mask = me_mask;
+
+	if (sme_only)
+		return;
+
+	/* For SEV, check the SEV MSR */
+	if (feature_mask == AMD_SEV_BIT)
+		sev_enabled = true;
 }