From patchwork Fri Nov 6 01:16:35 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 11885629
From: Yang Weijiang <weijiang.yang@intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com,
    sean.j.christopherson@intel.com, jmattson@google.com
Cc: yu.c.zhang@linux.intel.com, Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH v14 11/13] KVM: VMX: Pass through CET MSRs to the guest when supported
Date: Fri, 6 Nov 2020 09:16:35 +0800
Message-Id: <20201106011637.14289-12-weijiang.yang@intel.com>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20201106011637.14289-1-weijiang.yang@intel.com>
References: <20201106011637.14289-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Pass through all CET MSRs when the associated CET component (kernel vs.
user) is enabled to improve guest performance.  All CET MSRs are context
switched, either via dedicated VMCS fields or XSAVES.
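
Not part of this patch, but to illustrate what the pass-through buys: once
KVM clears the read/write intercepts, the guest kernel reaches its CET
state with plain RDMSR/WRMSR and no VM-exit on the fast path.  A minimal
guest-side sketch follows; the helper name and the pr_info() are made up
for illustration, and the MSR indices are the architectural values from
the SDM, guarded in case <asm/msr-index.h> already defines them:

  #include <linux/types.h>
  #include <linux/printk.h>
  #include <asm/msr.h>

  #ifndef MSR_IA32_U_CET
  #define MSR_IA32_U_CET		0x6a0	/* user-mode CET controls */
  #endif
  #ifndef MSR_IA32_PL3_SSP
  #define MSR_IA32_PL3_SSP	0x6a7	/* ring-3 shadow stack pointer */
  #endif

  /* Dump the user-mode CET state of the current CPU. */
  static void dump_user_cet_state(void)
  {
  	u64 u_cet, pl3_ssp;

  	/* Plain RDMSRs; no VM-exit once the intercepts are cleared. */
  	rdmsrl(MSR_IA32_U_CET, u_cet);
  	rdmsrl(MSR_IA32_PL3_SSP, pl3_ssp);

  	pr_info("U_CET=0x%llx PL3_SSP=0x%llx\n", u_cet, pl3_ssp);
  }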
Co-developed-by: Zhang Yi Z
Signed-off-by: Zhang Yi Z
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Yang Weijiang
---
 arch/x86/kvm/vmx/vmx.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c88a6e1721b1..6ba2027a3d44 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7366,6 +7366,32 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
+static bool is_cet_state_supported(struct kvm_vcpu *vcpu, u32 xss_state)
+{
+	return (vcpu->arch.guest_supported_xss & xss_state) &&
+	       (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
+		guest_cpuid_has(vcpu, X86_FEATURE_IBT));
+}
+
+static void vmx_update_intercept_for_cet_msr(struct kvm_vcpu *vcpu)
+{
+	bool incpt = !is_cet_state_supported(vcpu, XFEATURE_MASK_CET_USER);
+
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_U_CET, MSR_TYPE_RW, incpt);
+
+	incpt |= !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP, MSR_TYPE_RW, incpt);
+
+	incpt = !is_cet_state_supported(vcpu, XFEATURE_MASK_CET_KERNEL);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_S_CET, MSR_TYPE_RW, incpt);
+
+	incpt |= !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_INT_SSP_TAB, MSR_TYPE_RW, incpt);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL0_SSP, MSR_TYPE_RW, incpt);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL1_SSP, MSR_TYPE_RW, incpt);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL2_SSP, MSR_TYPE_RW, incpt);
+}
+
 static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7409,6 +7435,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	/* Refresh #PF interception to account for MAXPHYADDR changes. */
 	update_exception_bitmap(vcpu);
+
+	if (kvm_cet_supported())
+		vmx_update_intercept_for_cet_msr(vcpu);
 }
 
 static __init void vmx_set_cpu_caps(void)
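
Not KVM code, but for readers unfamiliar with the mechanism the patch
relies on: vmx_set_intercept_for_msr() ultimately sets or clears one bit
per MSR in the 4-KiB MSR bitmap referenced by the VMCS (SDM Vol. 3,
"MSR-Bitmap Address").  Below is a standalone sketch of that architectural
layout; set_msr_intercept() and its parameters are illustrative, not the
kernel's helper:

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * One 4-KiB page, four 1-KiB regions: read bitmap for the low MSR
   * range (0x0-0x1fff) at offset 0, read bitmap for the high range
   * (0xc0000000-0xc0001fff) at offset 1024, then the matching write
   * bitmaps at offsets 2048 and 3072.  A set bit means "VM-exit on
   * this access".
   */
  static void set_msr_intercept(uint8_t bitmap[4096], uint32_t msr,
  			      bool read, bool intercept)
  {
  	uint32_t base, idx;

  	if (msr <= 0x1fffu) {
  		base = read ? 0u : 2048u;
  		idx  = msr;
  	} else if (msr >= 0xc0000000u && msr <= 0xc0001fffu) {
  		base = read ? 1024u : 3072u;
  		idx  = msr - 0xc0000000u;
  	} else {
  		return;	/* accesses outside both ranges always VM-exit */
  	}

  	if (intercept)
  		bitmap[base + idx / 8] |=  (1u << (idx % 8));
  	else
  		bitmap[base + idx / 8] &= ~(1u << (idx % 8));
  }

All CET MSRs (0x6a0-0x6a8) sit in the low range, so passing MSR_IA32_U_CET
(0x6a0) through amounts to clearing bit 0x6a0 in both the read region
(offset 0) and the write region (offset 2048), which is what MSR_TYPE_RW
requests in the patch above.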