From patchwork Thu Jun 16 08:46:41 2022
X-Patchwork-Submitter: Yang Weijiang
X-Patchwork-Id: 12883584
From: Yang Weijiang
To: pbonzini@redhat.com, seanjc@google.com, x86@kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, rick.p.edgecombe@intel.com
Cc: weijiang.yang@intel.com, Zhang Yi Z, Sean Christopherson
Subject: [PATCH 17/19] KVM: VMX: Pass through CET MSRs to the guest when supported
Date: Thu, 16 Jun 2022 04:46:41 -0400
Message-Id: <20220616084643.19564-18-weijiang.yang@intel.com>
In-Reply-To: <20220616084643.19564-1-weijiang.yang@intel.com>
References: <20220616084643.19564-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Pass through the CET user-mode MSRs when the associated CET component is
enabled, to improve guest performance. All CET MSRs are context switched,
either via dedicated VMCS fields or via XSAVES.

Co-developed-by: Zhang Yi Z
Signed-off-by: Zhang Yi Z
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Yang Weijiang
---
 arch/x86/kvm/vmx/vmx.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4bdede87669a..9aebd67ff03e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -583,6 +583,9 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
 		return true;
+	case MSR_IA32_U_CET:
+	case MSR_IA32_PL3_SSP:
+		return true;
 	}
 
 	r = possible_passthrough_msr_slot(msr) != -ENOENT;
@@ -7595,6 +7598,23 @@ static void update_intel_pt_cfg(struct kvm_vcpu *vcpu)
 		vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));
 }
 
+static bool is_cet_state_supported(struct kvm_vcpu *vcpu, u32 xss_state)
+{
+	return (kvm_caps.supported_xss & xss_state) &&
+	       (guest_cpuid_has(vcpu, X86_FEATURE_SHSTK) ||
+		guest_cpuid_has(vcpu, X86_FEATURE_IBT));
+}
+
+static void vmx_update_intercept_for_cet_msr(struct kvm_vcpu *vcpu)
+{
+	bool incpt = !is_cet_state_supported(vcpu, XFEATURE_MASK_CET_USER);
+
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_U_CET, MSR_TYPE_RW, incpt);
+
+	incpt |= !guest_cpuid_has(vcpu, X86_FEATURE_SHSTK);
+	vmx_set_intercept_for_msr(vcpu, MSR_IA32_PL3_SSP, MSR_TYPE_RW, incpt);
+}
+
 static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7657,6 +7677,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 
 	/* Refresh #PF interception to account for MAXPHYADDR changes. */
 	vmx_update_exception_bitmap(vcpu);
+
+	if (kvm_cet_user_supported())
+		vmx_update_intercept_for_cet_msr(vcpu);
 }
 
 static __init void vmx_set_cpu_caps(void)
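
For readers skimming the intercept logic above, below is a minimal, standalone
sketch (not part of the patch) of the same passthrough decision. The model()
helper and its boolean inputs are illustrative stand-ins for
kvm_caps.supported_xss and guest_cpuid_has(); they are not kernel APIs.

#include <stdbool.h>
#include <stdio.h>

/*
 * Model of the decision in vmx_update_intercept_for_cet_msr():
 * MSR_IA32_U_CET is passed through when user-mode CET state is
 * XSAVES-supported and the guest has SHSTK or IBT; MSR_IA32_PL3_SSP
 * additionally requires SHSTK.
 */
static void model(bool xss_cet_user, bool shstk, bool ibt)
{
	bool incpt = !(xss_cet_user && (shstk || ibt));

	printf("U_CET:   %s\n", incpt ? "intercepted" : "passed through");

	incpt |= !shstk;
	printf("PL3_SSP: %s\n\n", incpt ? "intercepted" : "passed through");
}

int main(void)
{
	model(true, true, true);	/* full CET: both passed through */
	model(true, false, true);	/* IBT only: only U_CET passed through */
	model(false, true, true);	/* no XSS support: both intercepted */
	return 0;
}

The point the sketch makes explicit is that MSR_IA32_PL3_SSP stays intercepted
on an IBT-only guest, since the user shadow-stack pointer is meaningless
without SHSTK.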