From patchwork Tue Jul 25 10:26:57 2017
X-Patchwork-Submitter: Borislav Petkov
X-Patchwork-Id: 9861683
Date: Tue, 25 Jul 2017 12:26:57 +0200
From: Borislav Petkov
To: Brijesh Singh, Tom Lendacky
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, linux-efi@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, Thomas Gleixner,
    Ingo Molnar, "H. Peter Anvin", Andy Lutomirski, Tony Luck, Piotr Luc,
    Fenghua Yu, Lu Baolu, Reza Arbab, David Howells, Matt Fleming,
    "Kirill A. Shutemov", Laura Abbott, Ard Biesheuvel, Andrew Morton,
    Eric Biederman, Benjamin Herrenschmidt, Paul Mackerras,
    Konrad Rzeszutek Wilk, Jonathan Corbet, Dave Airlie, Kees Cook,
    Paolo Bonzini, Radim Krčmář, Arnd Bergmann, Tejun Heo, Christoph Lameter
Subject: Re: [RFC Part1 PATCH v3 02/17] x86/CPU/AMD: Add the Secure Encrypted Virtualization CPU feature
Message-ID: <20170725102657.GD21822@nazgul.tnic>
References: <20170724190757.11278-1-brijesh.singh@amd.com>
 <20170724190757.11278-3-brijesh.singh@amd.com>
In-Reply-To: <20170724190757.11278-3-brijesh.singh@amd.com>
User-Agent: Mutt/1.6.0 (2016-04-01)
List-ID: kvm@vger.kernel.org

On Mon, Jul 24, 2017 at 02:07:42PM -0500, Brijesh Singh wrote:
> From: Tom Lendacky
>
> Update the CPU features to include identifying and reporting on the
> Secure Encrypted Virtualization (SEV) feature. SEV is identified by
> CPUID 0x8000001f, but requires BIOS support to enable it (set bit 23 of
> MSR_K8_SYSCFG and set bit 0 of MSR_K7_HWCR). Only show the SEV feature
> as available if reported by CPUID and enabled by BIOS.
>
> Signed-off-by: Tom Lendacky
> Signed-off-by: Brijesh Singh
> ---
>  arch/x86/include/asm/cpufeatures.h |  1 +
>  arch/x86/include/asm/msr-index.h   |  2 ++
>  arch/x86/kernel/cpu/amd.c          | 30 +++++++++++++++++++++++++-----
>  arch/x86/kernel/cpu/scattered.c    |  1 +
>  4 files changed, 29 insertions(+), 5 deletions(-)

...
> @@ -637,6 +642,21 @@ static void early_init_amd(struct cpuinfo_x86 *c)
>  			clear_cpu_cap(c, X86_FEATURE_SME);
>  		}
>  	}
> +
> +	if (cpu_has(c, X86_FEATURE_SEV)) {
> +		if (IS_ENABLED(CONFIG_X86_32)) {
> +			clear_cpu_cap(c, X86_FEATURE_SEV);
> +		} else {
> +			u64 syscfg, hwcr;
> +
> +			/* Check if SEV is enabled */
> +			rdmsrl(MSR_K8_SYSCFG, syscfg);
> +			rdmsrl(MSR_K7_HWCR, hwcr);
> +			if (!(syscfg & MSR_K8_SYSCFG_MEM_ENCRYPT) ||
> +			    !(hwcr & MSR_K7_HWCR_SMMLOCK))
> +				clear_cpu_cap(c, X86_FEATURE_SEV);
> +		}
> +	}

Let's simplify this and read the MSRs only once. Here is a diff on top.
Please check whether I'm missing a case:

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c413f04bdd41..79af07731ab1 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -546,6 +546,48 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 	}
 }
 
+static void early_detect_mem_enc(struct cpuinfo_x86 *c)
+{
+	u64 syscfg, hwcr;
+
+	/*
+	 * BIOS support is required for SME and SEV.
+	 *   For SME: If BIOS has enabled SME then adjust x86_phys_bits by
+	 *	      the SME physical address space reduction value.
+	 *	      If BIOS has not enabled SME then don't advertise the
+	 *	      SME feature (set in scattered.c).
+	 *   For SEV: If BIOS has not enabled SEV then don't advertise the
+	 *	      SEV feature (set in scattered.c).
+	 *
+	 * In all cases, since support for SME and SEV requires long mode,
+	 * don't advertise the feature under CONFIG_X86_32.
+	 */
+	if (cpu_has(c, X86_FEATURE_SME) ||
+	    cpu_has(c, X86_FEATURE_SEV)) {
+
+		if (IS_ENABLED(CONFIG_X86_32))
+			goto clear;
+
+		/* Check if SME is enabled */
+		rdmsrl(MSR_K8_SYSCFG, syscfg);
+		if (!(syscfg & MSR_K8_SYSCFG_MEM_ENCRYPT))
+			goto clear;
+
+		c->x86_phys_bits -= (cpuid_ebx(0x8000001f) >> 6) & 0x3f;
+
+		/* Check if SEV is enabled */
+		rdmsrl(MSR_K7_HWCR, hwcr);
+		if (!(hwcr & MSR_K7_HWCR_SMMLOCK))
+			goto clear_sev;
+
+		return;
+clear:
+		clear_cpu_cap(c, X86_FEATURE_SME);
+clear_sev:
+		clear_cpu_cap(c, X86_FEATURE_SEV);
+	}
+}
+
 static void early_init_amd(struct cpuinfo_x86 *c)
 {
 	u32 dummy;
@@ -617,46 +659,8 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 	if (cpu_has_amd_erratum(c, amd_erratum_400))
 		set_cpu_bug(c, X86_BUG_AMD_E400);
 
-	/*
-	 * BIOS support is required for SME and SEV.
-	 *   For SME: If BIOS has enabled SME then adjust x86_phys_bits by
-	 *	      the SME physical address space reduction value.
-	 *	      If BIOS has not enabled SME then don't advertise the
-	 *	      SME feature (set in scattered.c).
-	 *   For SEV: If BIOS has not enabled SEV then don't advertise the
-	 *	      SEV feature (set in scattered.c).
-	 *
-	 * In all cases, since support for SME and SEV requires long mode,
-	 * don't advertise the feature under CONFIG_X86_32.
-	 */
-	if (cpu_has(c, X86_FEATURE_SME)) {
-		u64 msr;
-
-		/* Check if SME is enabled */
-		rdmsrl(MSR_K8_SYSCFG, msr);
-		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
-			c->x86_phys_bits -= (cpuid_ebx(0x8000001f) >> 6) & 0x3f;
-			if (IS_ENABLED(CONFIG_X86_32))
-				clear_cpu_cap(c, X86_FEATURE_SME);
-		} else {
-			clear_cpu_cap(c, X86_FEATURE_SME);
-		}
-	}
+	early_detect_mem_enc(c);
 
-	if (cpu_has(c, X86_FEATURE_SEV)) {
-		if (IS_ENABLED(CONFIG_X86_32)) {
-			clear_cpu_cap(c, X86_FEATURE_SEV);
-		} else {
-			u64 syscfg, hwcr;
-
-			/* Check if SEV is enabled */
-			rdmsrl(MSR_K8_SYSCFG, syscfg);
-			rdmsrl(MSR_K7_HWCR, hwcr);
-			if (!(syscfg & MSR_K8_SYSCFG_MEM_ENCRYPT) ||
-			    !(hwcr & MSR_K7_HWCR_SMMLOCK))
-				clear_cpu_cap(c, X86_FEATURE_SEV);
-		}
-	}
 }
 
 static void init_amd_k8(struct cpuinfo_x86 *c)