From patchwork Wed Feb 3 11:34:10 2021
X-Patchwork-Submitter: "Yang, Weijiang"
X-Patchwork-Id: 12064043
From: Yang Weijiang
To: pbonzini@redhat.com, seanjc@google.com, jmattson@google.com,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: yu.c.zhang@linux.intel.com, Sean Christopherson, Yang Weijiang
Subject: [PATCH v15 03/14] KVM: x86: Load guest fpu state when accessing MSRs managed by XSAVES
Date: Wed, 3 Feb 2021 19:34:10 +0800
Message-Id: <20210203113421.5759-4-weijiang.yang@intel.com>
In-Reply-To: <20210203113421.5759-1-weijiang.yang@intel.com>
References: <20210203113421.5759-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Sean Christopherson

A handful of CET MSRs are not context switched through "traditional"
methods, e.g. VMCS or manual switching, but rather are passed through
to the guest and are saved and restored by XSAVES/XRSTORS, i.e. in the
guest's FPU state.

Load the guest's FPU state if userspace is accessing MSRs whose values
are managed by XSAVES so that the MSR helper, e.g.
vmx_{get,set}_xsave_msr(), can simply do {RD,WR}MSR to access the
guest's value.

Because __msr_io() is also used for the KVM_GET_MSRS device ioctl(),
explicitly check that @vcpu is non-null before attempting to load guest
state. The XSS-supporting MSRs cannot be retrieved via the device
ioctl() without loading guest FPU state (which doesn't exist).

Note that guest_cpuid_has() is not queried as host userspace is allowed
to access MSRs that have not been exposed to the guest, e.g. it might do
KVM_SET_MSRS prior to KVM_SET_CPUID2.
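For illustration, a minimal sketch of what such a helper can reduce to
once the guest's FPU state is resident; the actual helpers are
introduced later in this series, so treat the bodies below as an
assumption rather than the final definitions:

static void vmx_get_xsave_msr(struct msr_data *msr_info)
{
	/* Guest FPU state is loaded, so RDMSR returns the guest's value. */
	rdmsrl(msr_info->index, msr_info->data);
}

static void vmx_set_xsave_msr(struct msr_data *msr_info)
{
	/* Guest FPU state is loaded, so WRMSR updates the guest's value. */
	wrmsrl(msr_info->index, msr_info->data);
}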
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Signed-off-by: Yang Weijiang
---
 arch/x86/kvm/x86.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 30a07caf077c..99f787152d12 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -110,6 +110,8 @@ static void enter_smm(struct kvm_vcpu *vcpu);
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 static void store_regs(struct kvm_vcpu *vcpu);
 static int sync_regs(struct kvm_vcpu *vcpu);
+static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
+static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
 
 struct kvm_x86_ops kvm_x86_ops __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_x86_ops);
@@ -3618,6 +3620,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+static bool is_xsaves_msr(u32 index)
+{
+	return index == MSR_IA32_U_CET ||
+	       (index >= MSR_IA32_PL0_SSP && index <= MSR_IA32_PL3_SSP);
+}
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
  *
@@ -3628,11 +3636,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
 		    int (*do_msr)(struct kvm_vcpu *vcpu,
 				  unsigned index, u64 *data))
 {
+	bool fpu_loaded = false;
 	int i;
 
-	for (i = 0; i < msrs->nmsrs; ++i)
+	for (i = 0; i < msrs->nmsrs; ++i) {
+		if (vcpu && !fpu_loaded && supported_xss &&
+		    is_xsaves_msr(entries[i].index)) {
+			kvm_load_guest_fpu(vcpu);
+			fpu_loaded = true;
+		}
 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
 			break;
+	}
+	if (fpu_loaded)
+		kvm_put_guest_fpu(vcpu);
 
 	return i;
 }
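Not part of the patch; a hedged userspace-side sketch of the flow the
commit message describes, i.e. host userspace setting an XSAVES-managed
CET MSR via KVM_SET_MSRS before KVM_SET_CPUID2 (vCPU fd setup elided;
the index 0x6a0 for MSR_IA32_U_CET is per the SDM, since older uapi
headers may not define the name):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int set_u_cet(int vcpu_fd, __u64 value)
{
	/* kvm_msrs ends in a flexible array; allocate room for one entry. */
	struct kvm_msrs *msrs =
		calloc(1, sizeof(*msrs) + sizeof(struct kvm_msr_entry));
	int ret;

	msrs->nmsrs = 1;
	msrs->entries[0].index = 0x6a0;	/* MSR_IA32_U_CET */
	msrs->entries[0].data = value;

	/* KVM_SET_MSRS returns the number of MSRs successfully set. */
	ret = ioctl(vcpu_fd, KVM_SET_MSRS, msrs);
	free(msrs);
	return ret == 1 ? 0 : -1;
}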