From patchwork Tue Mar 3 23:35:54 2020
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 11418907
From: Jarkko Sakkinen
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-sgx@vger.kernel.org
Cc: akpm@linux-foundation.org, dave.hansen@intel.com,
    sean.j.christopherson@intel.com, nhorman@redhat.com,
    npmccallum@redhat.com, haitao.huang@intel.com,
    andriy.shevchenko@linux.intel.com, tglx@linutronix.de,
    kai.svahn@intel.com, bp@alien8.de, josh@joshtriplett.org,
    luto@kernel.org, kai.huang@intel.com, rientjes@google.com,
    cedric.xing@intel.com, puiterwijk@redhat.com, Jarkko Sakkinen
Subject: [PATCH v28 07/22] x86/cpu/intel: Detect SGX support
Date: Wed, 4 Mar 2020 01:35:54 +0200
Message-Id: <20200303233609.713348-8-jarkko.sakkinen@linux.intel.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20200303233609.713348-1-jarkko.sakkinen@linux.intel.com>
References: <20200303233609.713348-1-jarkko.sakkinen@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

From: Sean Christopherson

Configure SGX as part of feature control MSR initialization and update
the associated X86_FEATURE flags accordingly. Because the kernel will
require the LE hash MSRs to be writable when running native enclaves,
disable X86_FEATURE_SGX (and all derivatives) if SGX Launch Control is
not (or cannot be) fully enabled via the feature control MSR.

The check is done for every CPU, not just the BSP, in order to verify
that MSR_IA32_FEATURE_CONTROL is correctly configured on all CPUs.
Other parts of the kernel, like the enclave driver, expect the same
configuration on all CPUs.

Note, unlike VMX, clear the X86_FEATURE_SGX* flags for all CPUs if any
CPU lacks SGX support, as the kernel expects SGX to be available on all
CPUs. X86_FEATURE_VMX is intentionally cleared only for the current CPU
so that KVM can provide additional information if KVM fails to load,
e.g. print which CPU doesn't support VMX. KVM/VMX requires additional
per-CPU enabling, e.g. to set CR4.VMXE and do VMXON, and so already has
the necessary infrastructure to do per-CPU checks. SGX, on the other
hand, doesn't require additional enabling, so clearing the feature
flags on all CPUs means the SGX subsystem doesn't need to manually do
support checks on a per-CPU basis.
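As an illustration of that last point, a consumer of these flags only
ever needs a single boot-CPU check. The function below is a hypothetical
sketch rather than part of this patch; it assumes only the existing
boot_cpu_has() interface and the X86_FEATURE_SGX* bits that
clear_sgx_caps() operates on:

/*
 * Hypothetical sketch: because clear_sgx_caps() uses
 * setup_clear_cpu_cap(), the SGX flags are cleared for every CPU, so a
 * consumer can gate itself on the boot CPU's capabilities alone instead
 * of iterating over all CPUs.
 */
static int __init sgx_consumer_init(void)
{
        if (!boot_cpu_has(X86_FEATURE_SGX) || !boot_cpu_has(X86_FEATURE_SGX_LC))
                return -ENODEV; /* no usable SGX or LE hash MSRs not writable */

        /* ... proceed with EPC and driver setup ... */
        return 0;
}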
Signed-off-by: Sean Christopherson
Co-developed-by: Jarkko Sakkinen
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/feat_ctl.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/feat_ctl.c b/arch/x86/kernel/cpu/feat_ctl.c
index 0268185bef94..b16b71a6da74 100644
--- a/arch/x86/kernel/cpu/feat_ctl.c
+++ b/arch/x86/kernel/cpu/feat_ctl.c
@@ -92,6 +92,14 @@ static void init_vmx_capabilities(struct cpuinfo_x86 *c)
 }
 #endif /* CONFIG_X86_VMX_FEATURE_NAMES */
 
+static void clear_sgx_caps(void)
+{
+        setup_clear_cpu_cap(X86_FEATURE_SGX);
+        setup_clear_cpu_cap(X86_FEATURE_SGX_LC);
+        setup_clear_cpu_cap(X86_FEATURE_SGX1);
+        setup_clear_cpu_cap(X86_FEATURE_SGX2);
+}
+
 void init_ia32_feat_ctl(struct cpuinfo_x86 *c)
 {
         bool tboot = tboot_enabled();
@@ -99,6 +107,7 @@ void init_ia32_feat_ctl(struct cpuinfo_x86 *c)
 
         if (rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr)) {
                 clear_cpu_cap(c, X86_FEATURE_VMX);
+                clear_sgx_caps();
                 return;
         }
 
@@ -123,13 +132,21 @@ void init_ia32_feat_ctl(struct cpuinfo_x86 *c)
                         msr |= FEAT_CTL_VMX_ENABLED_INSIDE_SMX;
         }
 
+        /*
+         * Enable SGX if and only if the kernel supports SGX and Launch Control
+         * is supported, i.e. disable SGX if the LE hash MSRs can't be written.
+         */
+        if (cpu_has(c, X86_FEATURE_SGX) && cpu_has(c, X86_FEATURE_SGX_LC) &&
+            IS_ENABLED(CONFIG_INTEL_SGX))
+                msr |= FEAT_CTL_SGX_ENABLED | FEAT_CTL_SGX_LC_ENABLED;
+
         wrmsrl(MSR_IA32_FEAT_CTL, msr);
 
 update_caps:
         set_cpu_cap(c, X86_FEATURE_MSR_IA32_FEAT_CTL);
 
         if (!cpu_has(c, X86_FEATURE_VMX))
-                return;
+                goto update_sgx;
 
         if ( (tboot && !(msr & FEAT_CTL_VMX_ENABLED_INSIDE_SMX)) ||
             (!tboot && !(msr & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX))) {
@@ -142,4 +159,14 @@ void init_ia32_feat_ctl(struct cpuinfo_x86 *c)
                 init_vmx_capabilities(c);
 #endif
         }
+
+update_sgx:
+        if (!cpu_has(c, X86_FEATURE_SGX) || !cpu_has(c, X86_FEATURE_SGX_LC)) {
+                clear_sgx_caps();
+        } else if (!(msr & FEAT_CTL_SGX_ENABLED) ||
+                   !(msr & FEAT_CTL_SGX_LC_ENABLED)) {
+                if (IS_ENABLED(CONFIG_INTEL_SGX))
+                        pr_err_once("SGX disabled by BIOS\n");
+                clear_sgx_caps();
+        }
 }
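The net effect of the SGX hunks can be distilled into a single predicate
over MSR_IA32_FEAT_CTL (IA32_FEATURE_CONTROL, MSR 0x3a). The helper
below is an illustrative sketch only, not code from this patch: the
locally defined masks mirror the kernel's FEAT_CTL_LOCKED,
FEAT_CTL_SGX_LC_ENABLED and FEAT_CTL_SGX_ENABLED bits, the boolean
parameters stand in for the cpu_has()/IS_ENABLED() checks, and the
rdmsrl_safe() failure path (which also clears the flags) is omitted.

#include <stdbool.h>
#include <stdint.h>

/* Relevant bits of MSR_IA32_FEAT_CTL (IA32_FEATURE_CONTROL, 0x3a). */
#define FEAT_CTL_LOCKED                (1ULL << 0)
#define FEAT_CTL_SGX_LC_ENABLED        (1ULL << 17)
#define FEAT_CTL_SGX_ENABLED           (1ULL << 18)

/* Returns true if the X86_FEATURE_SGX* flags survive init_ia32_feat_ctl(). */
static bool sgx_caps_kept(uint64_t msr, bool cpu_sgx, bool cpu_sgx_lc,
                          bool kernel_sgx_enabled)
{
        /* The CPU must report both SGX and SGX Launch Control. */
        if (!cpu_sgx || !cpu_sgx_lc)
                return false;

        /*
         * If the MSR is not locked yet, the kernel sets and locks the SGX
         * enable bits itself, but only when built with CONFIG_INTEL_SGX.
         */
        if (!(msr & FEAT_CTL_LOCKED))
                return kernel_sgx_enabled;

        /*
         * Locked by firmware: the flags survive only if the BIOS already
         * set both enable bits, otherwise "SGX disabled by BIOS".
         */
        return (msr & FEAT_CTL_SGX_ENABLED) && (msr & FEAT_CTL_SGX_LC_ENABLED);
}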