From patchwork Tue Apr 14 06:31:26 2020
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 11486853
From: Xiaoyao Li
To: Paolo Bonzini, kvm@vger.kernel.org, Sean Christopherson, Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ingo Molnar,
 Borislav Petkov, Andy Lutomirski, Peter Zijlstra, Arvind Sankar,
 Xiaoyao Li
Subject: [PATCH v8 1/4] kvm: x86: Emulate MSR IA32_CORE_CAPABILITIES
Date: Tue, 14 Apr 2020 14:31:26 +0800
Message-Id: <20200414063129.133630-2-xiaoyao.li@intel.com>
In-Reply-To: <20200414063129.133630-1-xiaoyao.li@intel.com>
References: <20200414063129.133630-1-xiaoyao.li@intel.com>

Emulate MSR_IA32_CORE_CAPABILITIES in software and unconditionally
advertise its support to userspace. Like MSR_IA32_ARCH_CAPABILITIES, it
is a feature-enumerating MSR and can be fully emulated regardless of
hardware support.

Note, support for individual features enumerated via CORE_CAPABILITIES,
e.g. split lock detection, will be added in future patches.

Signed-off-by: Xiaoyao Li
---
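As a note for reviewers: once this lands, userspace can discover the new
MSR through KVM_GET_MSR_FEATURE_INDEX_LIST and read KVM's default value
with the system-scoped KVM_GET_MSRS ioctl, before any VM exists. A
minimal userspace sketch, assuming the SDM's 0xcf index for
MSR_IA32_CORE_CAPS and omitting error handling; it is illustrative, not
part of the series:

/* Illustrative only: query KVM's emulated MSR_IA32_CORE_CAPABILITIES. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_IA32_CORE_CAPS 0xcf	/* assumed index, per SDM */

int main(void)
{
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} req;
	int kvm = open("/dev/kvm", O_RDWR);

	memset(&req, 0, sizeof(req));
	req.hdr.nmsrs = 1;
	req.entry.index = MSR_IA32_CORE_CAPS;

	/* On the system fd, KVM_GET_MSRS reads feature-MSR values. */
	if (kvm >= 0 && ioctl(kvm, KVM_GET_MSRS, &req) == 1)
		printf("CORE_CAPABILITIES = %#llx\n",
		       (unsigned long long)req.entry.data);
	return 0;
}

With this patch alone the printed value is 0, since
kvm_get_core_capabilities() reports no features yet.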
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/cpuid.c            |  3 ++-
 arch/x86/kvm/x86.c              | 22 ++++++++++++++++++++++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 42a2d0d3984a..30aee4dd3760 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -597,6 +597,7 @@ struct kvm_vcpu_arch {
 	u64 ia32_xss;
 	u64 microcode_version;
 	u64 arch_capabilities;
+	u64 core_capabilities;
 
 	/*
 	 * Paging state of the vcpu
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 901cd1fdecd9..3f9c09a34ed4 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -341,9 +341,10 @@ void kvm_set_cpu_caps(void)
 		F(MD_CLEAR) | F(AVX512_VP2INTERSECT) | F(FSRM)
 	);
 
-	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
+	/* Unconditionally advertise features that are emulated in software. */
 	kvm_cpu_cap_set(X86_FEATURE_TSC_ADJUST);
 	kvm_cpu_cap_set(X86_FEATURE_ARCH_CAPABILITIES);
+	kvm_cpu_cap_set(X86_FEATURE_CORE_CAPABILITIES);
 
 	if (boot_cpu_has(X86_FEATURE_IBPB) && boot_cpu_has(X86_FEATURE_IBRS))
 		kvm_cpu_cap_set(X86_FEATURE_SPEC_CTRL);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3bf2ecafd027..adfd4d74ea53 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1248,6 +1248,7 @@ static const u32 emulated_msrs_all[] = {
 	MSR_IA32_TSC_ADJUST,
 	MSR_IA32_TSCDEADLINE,
 	MSR_IA32_ARCH_CAPABILITIES,
+	MSR_IA32_CORE_CAPS,
 	MSR_IA32_MISC_ENABLE,
 	MSR_IA32_MCG_STATUS,
 	MSR_IA32_MCG_CTL,
@@ -1314,6 +1315,7 @@ static const u32 msr_based_features_all[] = {
 	MSR_F10H_DECFG,
 	MSR_IA32_UCODE_REV,
 	MSR_IA32_ARCH_CAPABILITIES,
+	MSR_IA32_CORE_CAPS,
 };
 
 static u32 msr_based_features[ARRAY_SIZE(msr_based_features_all)];
@@ -1367,12 +1369,20 @@ static u64 kvm_get_arch_capabilities(void)
 	return data;
 }
 
+static u64 kvm_get_core_capabilities(void)
+{
+	return 0;
+}
+
 static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
 {
 	switch (msr->index) {
 	case MSR_IA32_ARCH_CAPABILITIES:
 		msr->data = kvm_get_arch_capabilities();
 		break;
+	case MSR_IA32_CORE_CAPS:
+		msr->data = kvm_get_core_capabilities();
+		break;
 	case MSR_IA32_UCODE_REV:
 		rdmsrl_safe(msr->index, &msr->data);
 		break;
@@ -2753,6 +2763,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		vcpu->arch.arch_capabilities = data;
 		break;
+	case MSR_IA32_CORE_CAPS:
+		if (!msr_info->host_initiated)
+			return 1;
+		vcpu->arch.core_capabilities = data;
+		break;
 	case MSR_EFER:
 		return set_efer(vcpu, msr_info);
 	case MSR_K7_HWCR:
@@ -3080,6 +3095,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		msr_info->data = vcpu->arch.arch_capabilities;
 		break;
+	case MSR_IA32_CORE_CAPS:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_CORE_CAPABILITIES))
+			return 1;
+		msr_info->data = vcpu->arch.core_capabilities;
+		break;
 	case MSR_IA32_POWER_CTL:
 		msr_info->data = vcpu->arch.msr_ia32_power_ctl;
 		break;
@@ -9378,6 +9399,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 		goto free_guest_fpu;
 
 	vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+	vcpu->arch.core_capabilities = kvm_get_core_capabilities();
 	vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
 	kvm_vcpu_mtrr_init(vcpu);
 	vcpu_load(vcpu);

From patchwork Tue Apr 14 06:31:27 2020
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 11486857
From: Xiaoyao Li
To: Paolo Bonzini, kvm@vger.kernel.org, Sean Christopherson, Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ingo Molnar,
 Borislav Petkov, Andy Lutomirski, Peter Zijlstra, Arvind Sankar,
 Xiaoyao Li
Subject: [PATCH v8 2/4] kvm: vmx: Enable MSR TEST_CTRL for guest
Date: Tue, 14 Apr 2020 14:31:27 +0800
Message-Id: <20200414063129.133630-3-xiaoyao.li@intel.com>
In-Reply-To: <20200414063129.133630-1-xiaoyao.li@intel.com>
References: <20200414063129.133630-1-xiaoyao.li@intel.com>

Unconditionally allow the guest to read and zero-write MSR TEST_CTRL.

This matches the fact that most Intel CPUs support MSR TEST_CTRL, and
it also reduces the effort of handling wrmsr/rdmsr when split lock
detection is exposed to the guest in a future patch.

Signed-off-by: Xiaoyao Li
---
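From inside a guest, the new behavior can be probed directly: reads
return zero, and any non-zero write is refused by vmx_set_msr(), which
the guest observes as a #GP. A guest-kernel sketch, assuming the SDM's
0x33 index for TEST_CTRL and SLD bit 29; illustrative only:

/* Illustrative only: probe the zero-read/zero-write MSR from a guest. */
#include <linux/printk.h>
#include <asm/msr.h>

#define MSR_TEST_CTRL			0x33	/* assumed, per SDM */
#define MSR_TEST_CTRL_SPLIT_LOCK_DETECT	(1ULL << 29)

static void probe_test_ctrl(void)
{
	u64 val;

	if (!rdmsrl_safe(MSR_TEST_CTRL, &val))
		pr_info("TEST_CTRL = %#llx\n", val);	/* prints 0 */

	/* A zero write succeeds; any set bit is rejected with a #GP. */
	if (wrmsrl_safe(MSR_TEST_CTRL, MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
		pr_info("non-zero write faulted, as expected\n");
}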
 arch/x86/kvm/vmx/vmx.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 83050977490c..ae394ed174cd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1789,6 +1789,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 index;
 
 	switch (msr_info->index) {
+	case MSR_TEST_CTRL:
+		msr_info->data = 0;
+		break;
 #ifdef CONFIG_X86_64
 	case MSR_FS_BASE:
 		msr_info->data = vmcs_readl(GUEST_FS_BASE);
@@ -1942,6 +1945,11 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 index;
 
 	switch (msr_index) {
+	case MSR_TEST_CTRL:
+		if (data)
+			return 1;
+
+		break;
 	case MSR_EFER:
 		ret = kvm_set_msr_common(vcpu, msr_info);
 		break;

From patchwork Tue Apr 14 06:31:28 2020
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 11486859
From: Xiaoyao Li
To: Paolo Bonzini, kvm@vger.kernel.org, Sean Christopherson, Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ingo Molnar,
 Borislav Petkov, Andy Lutomirski, Peter Zijlstra, Arvind Sankar,
 Xiaoyao Li
Subject: [PATCH v8 3/4] x86/split_lock: Export sld_update_msr() and sld_state
Date: Tue, 14 Apr 2020 14:31:28 +0800
Message-Id: <20200414063129.133630-4-xiaoyao.li@intel.com>
In-Reply-To: <20200414063129.133630-1-xiaoyao.li@intel.com>
References: <20200414063129.133630-1-xiaoyao.li@intel.com>

sld_update_msr() and sld_state will be used by KVM in a future patch to
add virtualization support for split lock detection.

Signed-off-by: Xiaoyao Li
---
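One side effect worth noting: with the enum moved out of the #ifdef and
the sld_state stub in place, callers can test the state unconditionally,
and on CONFIG_CPU_SUP_INTEL=n builds the comparison folds to a
compile-time constant. A hypothetical caller, purely for illustration:

#include <asm/cpu.h>

static bool sld_virt_supported(void)
{
	/* Folds to "false" when CONFIG_CPU_SUP_INTEL is not set. */
	return sld_state != sld_off;
}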
 arch/x86/include/asm/cpu.h  | 12 ++++++++++++
 arch/x86/kernel/cpu/intel.c | 13 +++++--------
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index dd17c2da1af5..6c6528b3153e 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -40,12 +40,23 @@ int mwait_usable(const struct cpuinfo_x86 *);
 unsigned int x86_family(unsigned int sig);
 unsigned int x86_model(unsigned int sig);
 unsigned int x86_stepping(unsigned int sig);
+enum split_lock_detect_state {
+	sld_off = 0,
+	sld_warn,
+	sld_fatal,
+};
+
 #ifdef CONFIG_CPU_SUP_INTEL
+extern enum split_lock_detect_state sld_state __ro_after_init;
+
 extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
 extern void switch_to_sld(unsigned long tifn);
 extern bool handle_user_split_lock(struct pt_regs *regs, long error_code);
 extern bool handle_guest_split_lock(unsigned long ip);
+extern void sld_update_msr(bool on);
 #else
+#define sld_state sld_off
+
 static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
 static inline void switch_to_sld(unsigned long tifn) {}
 static inline bool handle_user_split_lock(struct pt_regs *regs, long error_code)
@@ -57,5 +68,6 @@ static inline bool handle_guest_split_lock(unsigned long ip)
 {
 	return false;
 }
+static inline void sld_update_msr(bool on) {}
 #endif
 #endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index bf08d4508ecb..80d1c0c93c08 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -34,18 +34,14 @@
 #include <asm/apic.h>
 #endif
 
-enum split_lock_detect_state {
-	sld_off = 0,
-	sld_warn,
-	sld_fatal,
-};
-
 /*
  * Default to sld_off because most systems do not support split lock detection
  * split_lock_setup() will switch this to sld_warn on systems that support
  * split lock detect, unless there is a command line override.
  */
-static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+enum split_lock_detect_state sld_state __ro_after_init = sld_off;
+EXPORT_SYMBOL_GPL(sld_state);
+
 static u64 msr_test_ctrl_cache __ro_after_init;
 
 /*
@@ -1052,7 +1048,7 @@ static void __init split_lock_setup(void)
  * is not implemented as one thread could undo the setting of the other
  * thread immediately after dropping the lock anyway.
  */
-static void sld_update_msr(bool on)
+void sld_update_msr(bool on)
 {
 	u64 test_ctrl_val = msr_test_ctrl_cache;
 
@@ -1061,6 +1057,7 @@ static void sld_update_msr(bool on)
 
 	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
 }
+EXPORT_SYMBOL_GPL(sld_update_msr);
 
 static void split_lock_init(void)
 {

From patchwork Tue Apr 14 06:31:29 2020
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 11486855
From: Xiaoyao Li
To: Paolo Bonzini, kvm@vger.kernel.org, Sean Christopherson, Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Ingo Molnar,
 Borislav Petkov, Andy Lutomirski, Peter Zijlstra, Arvind Sankar,
 Xiaoyao Li
Subject: [PATCH v8 4/4] kvm: vmx: virtualize split lock detection
Date: Tue, 14 Apr 2020 14:31:29 +0800
Message-Id: <20200414063129.133630-5-xiaoyao.li@intel.com>
In-Reply-To: <20200414063129.133630-1-xiaoyao.li@intel.com>
References: <20200414063129.133630-1-xiaoyao.li@intel.com>

The TEST_CTRL MSR has per-core scope, i.e., sibling threads in the same
physical CPU core share the same MSR. For simplicity, only advertise
split lock detection to the guest when SMT is disabled or unsupported.

1) When the host sld_state is sld_off, split lock detection is
   unsupported/disabled and cannot be exposed to the guest.

2) When the host sld_state is sld_warn, split lock detection can be
   exposed to the guest if SMT is off. Further, to avoid the potential
   overhead of toggling MSR_TEST_CTRL.SLD on every VM-enter/-exit, the
   guest's SLD setting is loaded only while running in KVM context and
   restored when switching back to the host.

3) When the host sld_state is sld_fatal, split lock detection can
   likewise be exposed to the guest if SMT is off, but the feature is
   forced on for the guest, i.e., the hardware MSR_TEST_CTRL.SLD bit
   stays set even if the guest clears its SLD bit.

Signed-off-by: Xiaoyao Li
---
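On the userspace side, a VMM opts a vCPU into split lock detection with
a host-initiated write of the CORE_CAPABILITIES SLD bit, which KVM
accepts only if kvm_get_core_capabilities() advertised it. A minimal
sketch, assuming the SDM's 0xcf index and bit 5 and omitting error
handling; illustrative only:

/* Illustrative only: expose SLD to a guest via KVM_SET_MSRS. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_IA32_CORE_CAPS			0xcf	/* assumed, per SDM */
#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT	(1ULL << 5)

static void enable_guest_sld(int vcpu_fd)
{
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} req;

	memset(&req, 0, sizeof(req));
	req.hdr.nmsrs = 1;
	req.entry.index = MSR_IA32_CORE_CAPS;
	/* Rejected unless KVM reported the bit in its feature MSRs. */
	req.entry.data = MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;

	ioctl(vcpu_fd, KVM_SET_MSRS, &req);
}

The guest additionally needs the matching CPUID bit
(X86_FEATURE_CORE_CAPABILITIES) set via KVM_SET_CPUID2 before it can
read the MSR itself.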
 arch/x86/kvm/vmx/vmx.c | 79 +++++++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/vmx/vmx.h |  2 ++
 arch/x86/kvm/x86.c     | 17 +++++++--
 3 files changed, 86 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae394ed174cd..2077abe4edf9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1120,6 +1120,35 @@ void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel,
 	}
 }
 
+/*
+ * Note: for the guest, split lock detection can only be enumerated through
+ * the MSR_IA32_CORE_CAPABILITIES bit. The FMS enumeration is unsupported.
+ */
+static inline bool guest_cpu_has_feature_sld(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.core_capabilities &
+	       MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
+}
+
+static inline bool guest_cpu_sld_on(struct vcpu_vmx *vmx)
+{
+	return vmx->msr_test_ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+}
+
+static inline void vmx_update_sld(struct kvm_vcpu *vcpu, bool on)
+{
+	/*
+	 * Toggle SLD if the guest wants it enabled but it's been disabled for
+	 * the userspace VMM, and vice versa. Note, TIF_SLD is true if SLD has
+	 * been turned off. Yes, it's a terrible name.
+	 */
+	if (sld_state == sld_warn && guest_cpu_has_feature_sld(vcpu) &&
+	    on == test_thread_flag(TIF_SLD)) {
+		sld_update_msr(on);
+		update_thread_flag(TIF_SLD, !on);
+	}
+}
+
 void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -1188,6 +1217,10 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 #endif
 
 	vmx_set_host_fs_gs(host_state, fs_sel, gs_sel, fs_base, gs_base);
+
+	vmx->host_sld_on = !test_thread_flag(TIF_SLD);
+	vmx_update_sld(vcpu, guest_cpu_sld_on(vmx));
+
 	vmx->guest_state_loaded = true;
 }
 
@@ -1226,6 +1259,9 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
 	wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
 #endif
 	load_fixmap_gdt(raw_smp_processor_id());
+
+	vmx_update_sld(&vmx->vcpu, vmx->host_sld_on);
+
 	vmx->guest_state_loaded = false;
 	vmx->guest_msrs_ready = false;
 }
@@ -1777,6 +1813,16 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
 	}
 }
 
+static inline u64 vmx_msr_test_ctrl_valid_bits(struct kvm_vcpu *vcpu)
+{
+	u64 valid_bits = 0;
+
+	if (guest_cpu_has_feature_sld(vcpu))
+		valid_bits |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+
+	return valid_bits;
+}
+
 /*
  * Reads an msr value (of 'msr_index') into 'pdata'.
  * Returns 0 on success, non-0 otherwise.
@@ -1790,7 +1836,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr_info->index) {
 	case MSR_TEST_CTRL:
-		msr_info->data = 0;
+		msr_info->data = vmx->msr_test_ctrl;
 		break;
 #ifdef CONFIG_X86_64
 	case MSR_FS_BASE:
@@ -1946,9 +1992,15 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 	switch (msr_index) {
 	case MSR_TEST_CTRL:
-		if (data)
+		if (data & ~vmx_msr_test_ctrl_valid_bits(vcpu))
 			return 1;
 
+		vmx->msr_test_ctrl = data;
+
+		preempt_disable();
+		if (vmx->guest_state_loaded)
+			vmx_update_sld(vcpu, guest_cpu_sld_on(vmx));
+		preempt_enable();
 		break;
 	case MSR_EFER:
 		ret = kvm_set_msr_common(vcpu, msr_info);
@@ -4266,7 +4318,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 
 	vmx->rmode.vm86_active = 0;
 	vmx->spec_ctrl = 0;
-
+	vmx->msr_test_ctrl = 0;
 	vmx->msr_ia32_umwait_control = 0;
 
 	vmx->vcpu.arch.regs[VCPU_REGS_RDX] = get_rdx_init_val();
@@ -4596,24 +4648,33 @@ static int handle_machine_check(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static inline bool guest_cpu_alignment_check_enabled(struct kvm_vcpu *vcpu)
+{
+	return vmx_get_cpl(vcpu) == 3 && kvm_read_cr0_bits(vcpu, X86_CR0_AM) &&
+	       (kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
+}
+
 /*
  * If the host has split lock detection disabled, then #AC is
  * unconditionally injected into the guest, which is the pre split lock
  * detection behaviour.
  *
  * If the host has split lock detection enabled then #AC is
- * only injected into the guest when:
- *   - Guest CPL == 3 (user mode)
- *   - Guest has #AC detection enabled in CR0
- *   - Guest EFLAGS has AC bit set
+ * injected into the guest when:
+ * 1) guest has alignment check enabled;
+ * or 2) guest has split lock detection enabled;
  */
 static inline bool guest_inject_ac(struct kvm_vcpu *vcpu)
 {
 	if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
 		return true;
 
-	return vmx_get_cpl(vcpu) == 3 && kvm_read_cr0_bits(vcpu, X86_CR0_AM) &&
-	       (kvm_get_rflags(vcpu) & X86_EFLAGS_AC);
+	/*
+	 * A split lock access must be an unaligned access, so we should check
+	 * guest_cpu_alignment_check_enabled() first.
+	 */
+	return guest_cpu_alignment_check_enabled(vcpu) ||
+	       guest_cpu_sld_on(to_vmx(vcpu));
 }
 
 static int handle_exception_nmi(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index aab9df55336e..b3c5be90b023 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -216,12 +216,14 @@ struct vcpu_vmx {
 	int nmsrs;
 	int save_nmsrs;
 	bool guest_msrs_ready;
+	bool host_sld_on;
 #ifdef CONFIG_X86_64
 	u64 msr_host_kernel_gs_base;
 	u64 msr_guest_kernel_gs_base;
 #endif
 
 	u64 spec_ctrl;
+	u64 msr_test_ctrl;
 	u32 msr_ia32_umwait_control;
 
 	u32 secondary_exec_control;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index adfd4d74ea53..8c8f5ccfd98b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1189,7 +1189,7 @@ static const u32 msrs_to_save_all[] = {
 #endif
 	MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
 	MSR_IA32_FEAT_CTL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
-	MSR_IA32_SPEC_CTRL,
+	MSR_IA32_SPEC_CTRL, MSR_TEST_CTRL,
 	MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_STATUS, MSR_IA32_RTIT_CR3_MATCH,
 	MSR_IA32_RTIT_OUTPUT_BASE, MSR_IA32_RTIT_OUTPUT_MASK,
 	MSR_IA32_RTIT_ADDR0_A, MSR_IA32_RTIT_ADDR0_B,
@@ -1371,7 +1371,12 @@ static u64 kvm_get_arch_capabilities(void)
 
 static u64 kvm_get_core_capabilities(void)
 {
-	return 0;
+	u64 data = 0;
+
+	if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) && !cpu_smt_possible())
+		data |= MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
+
+	return data;
 }
 
 static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
@@ -2764,7 +2769,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vcpu->arch.arch_capabilities = data;
 		break;
 	case MSR_IA32_CORE_CAPS:
-		if (!msr_info->host_initiated)
+		if (!msr_info->host_initiated ||
+		    data & ~kvm_get_core_capabilities())
 			return 1;
 		vcpu->arch.core_capabilities = data;
 		break;
@@ -5243,6 +5249,11 @@ static void kvm_init_msr_list(void)
 	 * to the guests in some cases.
 	 */
 	switch (msrs_to_save_all[i]) {
+	case MSR_TEST_CTRL:
+		if (!(kvm_get_core_capabilities() &
+		      MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT))
+			continue;
+		break;
 	case MSR_IA32_BNDCFGS:
 		if (!kvm_mpx_supported())
 			continue;