From patchwork Fri May 6 03:33:01 2022
X-Patchwork-Submitter: "Yang, Weijiang" <weijiang.yang@intel.com>
X-Patchwork-Id: 12840525
From: Yang Weijiang <weijiang.yang@intel.com>
To: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com,
    kan.liang@linux.intel.com, like.xu.linux@gmail.com, vkuznets@redhat.com,
    wei.w.wang@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Yang Weijiang <weijiang.yang@intel.com>
Subject: [PATCH v11 12/16] KVM: nVMX: Add necessary Arch LBR settings for nested VM
Date: Thu, 5 May 2022 23:33:01 -0400
Message-Id: <20220506033305.5135-13-weijiang.yang@intel.com>
In-Reply-To: <20220506033305.5135-1-weijiang.yang@intel.com>
References: <20220506033305.5135-1-weijiang.yang@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Arch LBR is not yet supported in nested VMs. Add the VMCS fields and
VM-entry/VM-exit control bits needed for the vmcs12 configuration to pass
the host (L0) KVM consistency checks when an L2 VM is launched, and stop
treating the Arch LBR MSRs as valid LBR MSRs while in guest mode so that
spurious warnings are no longer reported from L1.
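As background for the nested.c hunks below: L0 tells L1 which VM-exit and
VM-entry controls it may use via the VMX capability MSRs, whose high 32 bits
are the allowed-1 settings. A minimal user-space sketch of that gating
follows; the bit position of VM_EXIT_CLEAR_IA32_LBR_CTL matches the Intel
SDM, but everything else (function and variable names) is hypothetical
illustration, not KVM code.

#include <stdint.h>
#include <stdio.h>

/* VM-exit control bit 26, "clear IA32_LBR_CTL" (Intel SDM). */
#define VM_EXIT_CLEAR_IA32_LBR_CTL	(1u << 26)

/*
 * A VMX capability MSR encodes allowed-0 settings in its low 32 bits and
 * allowed-1 settings in its high 32 bits. L1 may set a control bit only
 * if that bit is set in the allowed-1 half; otherwise VM-entry fails.
 */
static int l1_may_set(uint64_t cap_msr, uint32_t wanted)
{
	uint32_t allowed1 = (uint32_t)(cap_msr >> 32);

	return (wanted & ~allowed1) == 0;
}

int main(void)
{
	/* Pretend L0 advertises the new bit, as this patch makes it do. */
	uint64_t exit_ctls = (uint64_t)VM_EXIT_CLEAR_IA32_LBR_CTL << 32;

	printf("allowed: %d\n",
	       l1_may_set(exit_ctls, VM_EXIT_CLEAR_IA32_LBR_CTL));
	return 0;
}

Without the exit_ctls_high/entry_ctls_high additions below, the same check
in L0 would reject an L1 that tries to set these LBR_CTL bits.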
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/vmx/nested.c    | 7 +++++--
 arch/x86/kvm/vmx/pmu_intel.c | 2 ++
 arch/x86/kvm/vmx/vmcs12.c    | 1 +
 arch/x86/kvm/vmx/vmcs12.h    | 3 ++-
 4 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 856c87563883..2434d8bd5e30 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6563,7 +6563,9 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
 		VM_EXIT_HOST_ADDR_SPACE_SIZE |
 #endif
 		VM_EXIT_LOAD_IA32_PAT | VM_EXIT_SAVE_IA32_PAT |
-		VM_EXIT_CLEAR_BNDCFGS | VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+		VM_EXIT_CLEAR_BNDCFGS | VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |
+		VM_EXIT_CLEAR_IA32_LBR_CTL;
+
 	msrs->exit_ctls_high |=
 		VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR |
 		VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
@@ -6583,7 +6585,8 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
 		VM_ENTRY_IA32E_MODE |
 #endif
 		VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_BNDCFGS |
-		VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+		VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL | VM_ENTRY_LOAD_IA32_LBR_CTL;
+
 	msrs->entry_ctls_high |=
 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR |
 		 VM_ENTRY_LOAD_IA32_EFER);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index bd4ddf63ba8f..3adc8f28d142 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -198,6 +198,8 @@ static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 		return ret;
 
 	if (index == MSR_ARCH_LBR_DEPTH || index == MSR_ARCH_LBR_CTL) {
+		if (is_guest_mode(vcpu))
+			return ret;
 		if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
 			ret = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
 		return ret;
diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
index 2251b60920f8..bcda664e4d26 100644
--- a/arch/x86/kvm/vmx/vmcs12.c
+++ b/arch/x86/kvm/vmx/vmcs12.c
@@ -65,6 +65,7 @@ const unsigned short vmcs12_field_offsets[] = {
 	FIELD64(HOST_IA32_PAT, host_ia32_pat),
 	FIELD64(HOST_IA32_EFER, host_ia32_efer),
 	FIELD64(HOST_IA32_PERF_GLOBAL_CTRL, host_ia32_perf_global_ctrl),
+	FIELD64(GUEST_IA32_LBR_CTL, guest_lbr_ctl),
 	FIELD(PIN_BASED_VM_EXEC_CONTROL, pin_based_vm_exec_control),
 	FIELD(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control),
 	FIELD(EXCEPTION_BITMAP, exception_bitmap),
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 746129ddd5ae..bf50227fe401 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -71,7 +71,7 @@ struct __packed vmcs12 {
 	u64 pml_address;
 	u64 encls_exiting_bitmap;
 	u64 tsc_multiplier;
-	u64 padding64[1]; /* room for future expansion */
+	u64 guest_lbr_ctl;
 	/*
 	 * To allow migration of L1 (complete with its L2 guests) between
 	 * machines of different natural widths (32 or 64 bit), we cannot have
@@ -254,6 +254,7 @@ static inline void vmx_check_vmcs12_offsets(void)
 	CHECK_OFFSET(pml_address, 312);
 	CHECK_OFFSET(encls_exiting_bitmap, 320);
 	CHECK_OFFSET(tsc_multiplier, 328);
+	CHECK_OFFSET(guest_lbr_ctl, 336);
 	CHECK_OFFSET(cr0_guest_host_mask, 344);
 	CHECK_OFFSET(cr4_guest_host_mask, 352);
 	CHECK_OFFSET(cr0_read_shadow, 360);
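Note on the CHECK_OFFSET() additions: vmcs12 offsets are pinned so the
structure's layout stays stable across kernel versions for live migration,
which is why guest_lbr_ctl reuses the old padding64[1] slot at offset 336
instead of shifting later fields. A standalone sketch of the same technique
using C11 static_assert/offsetof follows; the struct and field names are
illustrative only, not the real vmcs12 (KVM itself implements this with
BUILD_BUG_ON-style checks).

#include <assert.h>	/* static_assert (C11) */
#include <stddef.h>	/* offsetof */
#include <stdint.h>

/* Hypothetical packed struct mirroring the tail of vmcs12 touched here. */
struct __attribute__((packed)) demo_vmcs {
	uint64_t tsc_multiplier;	/* offset 0 */
	uint64_t guest_lbr_ctl;		/* offset 8, reuses old padding */
	uint64_t cr0_guest_host_mask;	/* offset 16 */
};

/* Like CHECK_OFFSET(): break the build if a field ever moves, which
 * would silently corrupt vmcs12 images migrated between kernels. */
static_assert(offsetof(struct demo_vmcs, tsc_multiplier) == 0, "moved");
static_assert(offsetof(struct demo_vmcs, guest_lbr_ctl) == 8, "moved");
static_assert(offsetof(struct demo_vmcs, cr0_guest_host_mask) == 16, "moved");

int main(void)
{
	return 0;
}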