From patchwork Tue Aug 17 09:31:09 2021
From: Robert Hoo
To: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: kvm@vger.kernel.org, yu.c.zhang@linux.intel.com, Robert Hoo
Subject: [PATCH v1 1/5] KVM: x86: nVMX: Add vmcs12 field existence bitmap in nested_vmx
Date: Tue, 17 Aug 2021 17:31:09 +0800
Message-Id: <1629192673-9911-2-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>
References: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>

Bitmap layout: the bitmap's bits correspond to vmcs12 fields. To simplify the logic, the VMCS header is not skipped. Because vmcs12 field widths differ, their common divisor u16 is used as the unit, so the bitmap is slightly sparse.

Life cycle: vmcs12_field_existence_bitmap shares the vCPU's life cycle: it is allocated and initialized when the vCPU is created, and destroyed when the vCPU is about to be freed. It cannot be allocated/freed the way cached_vmcs12 is, because it is needed before the guest executes VMXON.

Initialize/destroy: by nature, the fields fall into two categories: fixed and dynamic. Fixed fields have no dependency on any VMX feature, while dynamic ones do. Initialization is therefore split in two: vmcs12_field_fixed_init() and vmcs12_field_dynamic_init(). vmcs12_field_dynamic_init() is a wrapper around the vmcs12_field_update_by_xxx() functions; these update functions are reused in a later patch when the VMX MSRs are set.
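For illustration, a minimal standalone sketch of the indexing scheme described above. All names here are hypothetical stand-ins, not the kernel code: F_POS() mirrors the arithmetic of the patch's f_pos() macro, and the double set for the 64-bit field mirrors FIELD64_BIT_SET().

	/* Illustration only: u16-granularity existence bitmap indexing.
	 * toy_vmcs and its fields are hypothetical stand-ins for struct vmcs12.
	 */
	#include <stdio.h>
	#include <stddef.h>
	#include <stdint.h>

	struct toy_vmcs {
		uint32_t revision_id;		/* VMCS header, deliberately not skipped */
		uint32_t abort;
		uint64_t io_bitmap_a;		/* a 64-bit field */
		uint16_t guest_es_selector;	/* a 16-bit field */
	};

	#define F_POS(x) (offsetof(struct toy_vmcs, x) / sizeof(uint16_t))

	int main(void)
	{
		unsigned long bitmap = 0;

		/* 16/32-bit fields mark one bit ... */
		bitmap |= 1UL << F_POS(guest_es_selector);
		/* ... 64-bit fields mark two: the full/low access at f_pos, plus the
		 * high-32-bit access, whose offset is 4 bytes (2 u16 units) further.
		 */
		bitmap |= 1UL << F_POS(io_bitmap_a);
		bitmap |= 1UL << (F_POS(io_bitmap_a) + sizeof(uint32_t) / sizeof(uint16_t));

		printf("bitmap = %#lx\n", bitmap);	/* bits 4, 6 and 8 set -> 0x150 */
		return 0;
	}

The second bit per 64-bit field exists because VMCS 64-bit fields also have a "high" access encoding, which the offset table maps 4 bytes into the field.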
Signed-off-by: Robert Hoo
Signed-off-by: Yu Zhang
---
 arch/x86/kvm/vmx/nested.c |   2 +
 arch/x86/kvm/vmx/vmcs12.c | 363 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmcs12.h |  26 +++
 arch/x86/kvm/vmx/vmx.c    |  12 +-
 arch/x86/kvm/vmx/vmx.h    |   3 +
 5 files changed, 405 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 0d0dd6580cfd..125b94dc3cf1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -327,6 +327,8 @@ void nested_vmx_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	vcpu_load(vcpu);
 	vmx_leave_nested(vcpu);
+	kfree(to_vmx(vcpu)->nested.vmcs12_field_existence_bitmap);
+	to_vmx(vcpu)->nested.vmcs12_field_existence_bitmap = NULL;
 	vcpu_put(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx/vmcs12.c b/arch/x86/kvm/vmx/vmcs12.c
index d9f5d7c56ae3..22fd5b21e136 100644
--- a/arch/x86/kvm/vmx/vmcs12.c
+++ b/arch/x86/kvm/vmx/vmcs12.c
@@ -153,3 +153,366 @@ const unsigned short vmcs_field_to_offset_table[] = {
 	FIELD(HOST_RIP, host_rip),
 };
 const unsigned int nr_vmcs12_fields = ARRAY_SIZE(vmcs_field_to_offset_table);
+
+#define FIELD_BIT_SET(name, bitmap) set_bit(f_pos(name), bitmap)
+#define FIELD64_BIT_SET(name, bitmap) \
+	do {set_bit(f_pos(name), bitmap); \
+	    set_bit(f_pos(name) + (sizeof(u32) / sizeof(u16)), bitmap);\
+	} while (0)
+
+#define FIELD_BIT_CLEAR(name, bitmap) clear_bit(f_pos(name), bitmap)
+#define FIELD64_BIT_CLEAR(name, bitmap) \
+	do {clear_bit(f_pos(name), bitmap); \
+	    clear_bit(f_pos(name) + (sizeof(u32) / sizeof(u16)), bitmap);\
+	} while (0)
+
+#define FIELD_BIT_CHANGE(name, bitmap) change_bit(f_pos(name), bitmap)
+#define FIELD64_BIT_CHANGE(name, bitmap) \
+	do {change_bit(f_pos(name), bitmap); \
+	    change_bit(f_pos(name) + (sizeof(u32) / sizeof(u16)), bitmap);\
+	} while (0)
+
+/*
+ * Set non-dependent fields to exist
+ */
+void vmcs12_field_fixed_init(unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+	FIELD_BIT_SET(guest_es_selector, bitmap);
+	FIELD_BIT_SET(guest_cs_selector, bitmap);
+	FIELD_BIT_SET(guest_ss_selector, bitmap);
+	FIELD_BIT_SET(guest_ds_selector, bitmap);
+	FIELD_BIT_SET(guest_fs_selector, bitmap);
+	FIELD_BIT_SET(guest_gs_selector, bitmap);
+	FIELD_BIT_SET(guest_ldtr_selector, bitmap);
+	FIELD_BIT_SET(guest_tr_selector, bitmap);
+	FIELD_BIT_SET(host_es_selector, bitmap);
+	FIELD_BIT_SET(host_cs_selector, bitmap);
+	FIELD_BIT_SET(host_ss_selector, bitmap);
+	FIELD_BIT_SET(host_ds_selector, bitmap);
+	FIELD_BIT_SET(host_fs_selector, bitmap);
+	FIELD_BIT_SET(host_gs_selector, bitmap);
+	FIELD_BIT_SET(host_tr_selector, bitmap);
+	FIELD64_BIT_SET(io_bitmap_a, bitmap);
+	FIELD64_BIT_SET(io_bitmap_b, bitmap);
+	FIELD64_BIT_SET(vm_exit_msr_store_addr, bitmap);
+	FIELD64_BIT_SET(vm_exit_msr_load_addr, bitmap);
+	FIELD64_BIT_SET(vm_entry_msr_load_addr, bitmap);
+	FIELD64_BIT_SET(tsc_offset, bitmap);
+	FIELD64_BIT_SET(vmcs_link_pointer, bitmap);
+	FIELD64_BIT_SET(guest_ia32_debugctl, bitmap);
+	FIELD_BIT_SET(pin_based_vm_exec_control, bitmap);
+	FIELD_BIT_SET(cpu_based_vm_exec_control, bitmap);
+	FIELD_BIT_SET(exception_bitmap, bitmap);
+	FIELD_BIT_SET(page_fault_error_code_mask, bitmap);
+	FIELD_BIT_SET(page_fault_error_code_match, bitmap);
+	FIELD_BIT_SET(cr3_target_count, bitmap);
+	FIELD_BIT_SET(vm_exit_controls, bitmap);
+	FIELD_BIT_SET(vm_exit_msr_store_count, bitmap);
+	FIELD_BIT_SET(vm_exit_msr_load_count, bitmap);
+	FIELD_BIT_SET(vm_entry_controls, bitmap);
+	FIELD_BIT_SET(vm_entry_msr_load_count, bitmap);
+	FIELD_BIT_SET(vm_entry_intr_info_field, bitmap);
+	FIELD_BIT_SET(vm_entry_exception_error_code, bitmap);
+	FIELD_BIT_SET(vm_entry_instruction_len, bitmap);
+	FIELD_BIT_SET(vm_instruction_error, bitmap);
+	FIELD_BIT_SET(vm_exit_reason, bitmap);
+	FIELD_BIT_SET(vm_exit_intr_info, bitmap);
+	FIELD_BIT_SET(vm_exit_intr_error_code, bitmap);
+	FIELD_BIT_SET(idt_vectoring_info_field, bitmap);
+	FIELD_BIT_SET(idt_vectoring_error_code, bitmap);
+	FIELD_BIT_SET(vm_exit_instruction_len, bitmap);
+	FIELD_BIT_SET(vmx_instruction_info, bitmap);
+	FIELD_BIT_SET(guest_es_limit, bitmap);
+	FIELD_BIT_SET(guest_cs_limit, bitmap);
+	FIELD_BIT_SET(guest_ss_limit, bitmap);
+	FIELD_BIT_SET(guest_ds_limit, bitmap);
+	FIELD_BIT_SET(guest_fs_limit, bitmap);
+	FIELD_BIT_SET(guest_gs_limit, bitmap);
+	FIELD_BIT_SET(guest_ldtr_limit, bitmap);
+	FIELD_BIT_SET(guest_tr_limit, bitmap);
+	FIELD_BIT_SET(guest_gdtr_limit, bitmap);
+	FIELD_BIT_SET(guest_idtr_limit, bitmap);
+	FIELD_BIT_SET(guest_es_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_cs_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_ss_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_ds_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_fs_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_gs_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_ldtr_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_tr_ar_bytes, bitmap);
+	FIELD_BIT_SET(guest_interruptibility_info, bitmap);
+	FIELD_BIT_SET(guest_activity_state, bitmap);
+	FIELD_BIT_SET(guest_sysenter_cs, bitmap);
+	FIELD_BIT_SET(host_ia32_sysenter_cs, bitmap);
+	FIELD_BIT_SET(cr0_guest_host_mask, bitmap);
+	FIELD_BIT_SET(cr4_guest_host_mask, bitmap);
+	FIELD_BIT_SET(cr0_read_shadow, bitmap);
+	FIELD_BIT_SET(cr4_read_shadow, bitmap);
+	FIELD_BIT_SET(exit_qualification, bitmap);
+	FIELD_BIT_SET(guest_linear_address, bitmap);
+	FIELD_BIT_SET(guest_cr0, bitmap);
+	FIELD_BIT_SET(guest_cr3, bitmap);
+	FIELD_BIT_SET(guest_cr4, bitmap);
+	FIELD_BIT_SET(guest_es_base, bitmap);
+	FIELD_BIT_SET(guest_cs_base, bitmap);
+	FIELD_BIT_SET(guest_ss_base, bitmap);
+	FIELD_BIT_SET(guest_ds_base, bitmap);
+	FIELD_BIT_SET(guest_fs_base, bitmap);
+	FIELD_BIT_SET(guest_gs_base, bitmap);
+	FIELD_BIT_SET(guest_ldtr_base, bitmap);
+	FIELD_BIT_SET(guest_tr_base, bitmap);
+	FIELD_BIT_SET(guest_gdtr_base, bitmap);
+	FIELD_BIT_SET(guest_idtr_base, bitmap);
+	FIELD_BIT_SET(guest_dr7, bitmap);
+	FIELD_BIT_SET(guest_rsp, bitmap);
+	FIELD_BIT_SET(guest_rip, bitmap);
+	FIELD_BIT_SET(guest_rflags, bitmap);
+	FIELD_BIT_SET(guest_pending_dbg_exceptions, bitmap);
+	FIELD_BIT_SET(guest_sysenter_esp, bitmap);
+	FIELD_BIT_SET(guest_sysenter_eip, bitmap);
+	FIELD_BIT_SET(host_cr0, bitmap);
+	FIELD_BIT_SET(host_cr3, bitmap);
+	FIELD_BIT_SET(host_cr4, bitmap);
+	FIELD_BIT_SET(host_fs_base, bitmap);
+	FIELD_BIT_SET(host_gs_base, bitmap);
+	FIELD_BIT_SET(host_tr_base, bitmap);
+	FIELD_BIT_SET(host_gdtr_base, bitmap);
+	FIELD_BIT_SET(host_idtr_base, bitmap);
+	FIELD_BIT_SET(host_ia32_sysenter_esp, bitmap);
+	FIELD_BIT_SET(host_ia32_sysenter_eip, bitmap);
+	FIELD_BIT_SET(host_rsp, bitmap);
+	FIELD_BIT_SET(host_rip, bitmap);
+}
+
+void vmcs12_field_dynamic_init(struct nested_vmx_msrs *vmx_msrs,
+			       unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+	vmcs12_field_update_by_pinbased_ctrl(0, vmx_msrs->pinbased_ctls_high,
+					     bitmap);
+
+	vmcs12_field_update_by_procbased_ctrl(0, vmx_msrs->procbased_ctls_high,
+					      bitmap);
+
+	vmcs12_field_update_by_procbased_ctrl2(0, vmx_msrs->secondary_ctls_high,
+					       bitmap);
+
+	vmcs12_field_update_by_vmentry_ctrl(vmx_msrs->exit_ctls_high, 0,
+					    vmx_msrs->entry_ctls_high,
+					    bitmap);
+
+	vmcs12_field_update_by_vmexit_ctrl(vmx_msrs->entry_ctls_high, 0,
+					   vmx_msrs->exit_ctls_high,
+					   bitmap);
+
+	vmcs12_field_update_by_vm_func(0, vmx_msrs->vmfunc_controls, bitmap);
+}
+
+void vmcs12_field_update_by_pinbased_ctrl(u32 old_val, u32 new_val,
+					  unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+
+	if (!(old_val ^ new_val))
+		return;
+	if ((old_val ^ new_val) & PIN_BASED_POSTED_INTR) {
+		FIELD_BIT_CHANGE(posted_intr_nv, bitmap);
+		FIELD64_BIT_CHANGE(posted_intr_desc_addr, bitmap);
+	}
+
+	if ((old_val ^ new_val) & PIN_BASED_VMX_PREEMPTION_TIMER)
+		FIELD_BIT_CHANGE(vmx_preemption_timer_value, bitmap);
+}
+
+void vmcs12_field_update_by_procbased_ctrl(u32 old_val, u32 new_val,
+					   unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+	if (!(old_val ^ new_val))
+		return;
+
+	if ((old_val ^ new_val) & CPU_BASED_USE_MSR_BITMAPS)
+		FIELD64_BIT_CHANGE(msr_bitmap, bitmap);
+
+	if ((old_val ^ new_val) & CPU_BASED_TPR_SHADOW) {
+		FIELD64_BIT_CHANGE(virtual_apic_page_addr, bitmap);
+		FIELD_BIT_CHANGE(tpr_threshold, bitmap);
+	}
+
+	if ((old_val ^ new_val) &
+	    CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) {
+		FIELD_BIT_CHANGE(secondary_vm_exec_control, bitmap);
+	}
+}
+
+void vmcs12_field_update_by_procbased_ctrl2(u32 old_val, u32 new_val,
+					    unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+	if (!(old_val ^ new_val))
+		return;
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_ENABLE_VPID)
+		FIELD_BIT_CHANGE(virtual_processor_id, bitmap);
+
+	if ((old_val ^ new_val) &
+	    SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) {
+		FIELD_BIT_CHANGE(guest_intr_status, bitmap);
+		FIELD64_BIT_CHANGE(eoi_exit_bitmap0, bitmap);
+		FIELD64_BIT_CHANGE(eoi_exit_bitmap1, bitmap);
+		FIELD64_BIT_CHANGE(eoi_exit_bitmap2, bitmap);
+		FIELD64_BIT_CHANGE(eoi_exit_bitmap3, bitmap);
+	}
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_ENABLE_PML) {
+		FIELD_BIT_CHANGE(guest_pml_index, bitmap);
+		FIELD64_BIT_CHANGE(pml_address, bitmap);
+	}
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)
+		FIELD64_BIT_CHANGE(apic_access_addr, bitmap);
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_ENABLE_VMFUNC)
+		FIELD64_BIT_CHANGE(vm_function_control, bitmap);
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_ENABLE_EPT) {
+		FIELD64_BIT_CHANGE(ept_pointer, bitmap);
+		FIELD64_BIT_CHANGE(guest_physical_address, bitmap);
+		FIELD64_BIT_CHANGE(guest_pdptr0, bitmap);
+		FIELD64_BIT_CHANGE(guest_pdptr1, bitmap);
+		FIELD64_BIT_CHANGE(guest_pdptr2, bitmap);
+		FIELD64_BIT_CHANGE(guest_pdptr3, bitmap);
+	}
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_SHADOW_VMCS) {
+		FIELD64_BIT_CHANGE(vmread_bitmap, bitmap);
+		FIELD64_BIT_CHANGE(vmwrite_bitmap, bitmap);
+	}
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_XSAVES)
+		FIELD64_BIT_CHANGE(xss_exit_bitmap, bitmap);
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_ENCLS_EXITING)
+		FIELD64_BIT_CHANGE(encls_exiting_bitmap, bitmap);
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_TSC_SCALING)
+		FIELD64_BIT_CHANGE(tsc_multiplier, bitmap);
+
+	if ((old_val ^ new_val) & SECONDARY_EXEC_PAUSE_LOOP_EXITING) {
+		FIELD64_BIT_CHANGE(vmread_bitmap, bitmap);
+		FIELD64_BIT_CHANGE(vmwrite_bitmap, bitmap);
+	}
+}
+
+void vmcs12_field_update_by_vmentry_ctrl(u32 vm_exit_ctrl, u32 old_val,
+					 u32 new_val, unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+	if (!(old_val ^ new_val))
+		return;
+
+	if ((old_val ^ new_val) & VM_ENTRY_LOAD_IA32_PAT) {
+		if ((new_val & VM_ENTRY_LOAD_IA32_PAT) ||
+		    (vm_exit_ctrl & VM_EXIT_SAVE_IA32_PAT))
+			FIELD64_BIT_SET(guest_ia32_pat, bitmap);
+		else
+			FIELD64_BIT_CLEAR(guest_ia32_pat, bitmap);
+	}
+
+	if ((old_val ^ new_val) & VM_ENTRY_LOAD_IA32_EFER) {
+		if ((new_val & VM_ENTRY_LOAD_IA32_EFER) ||
+		    (vm_exit_ctrl & VM_EXIT_SAVE_IA32_EFER))
+			FIELD64_BIT_SET(guest_ia32_efer, bitmap);
+		else
+			FIELD64_BIT_CLEAR(guest_ia32_efer, bitmap);
+	}
+
+	if ((old_val ^ new_val) & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
+		FIELD64_BIT_CHANGE(guest_ia32_perf_global_ctrl, bitmap);
+
+	if ((old_val ^ new_val) & VM_ENTRY_LOAD_BNDCFGS) {
+		if ((new_val & VM_ENTRY_LOAD_BNDCFGS) ||
+		    (vm_exit_ctrl & VM_EXIT_CLEAR_BNDCFGS))
+			FIELD64_BIT_SET(guest_bndcfgs, bitmap);
+		else
+			FIELD64_BIT_CLEAR(guest_bndcfgs, bitmap);
+	}
+}
+
+void vmcs12_field_update_by_vmexit_ctrl(u32 vm_entry_ctrl, u32 old_val,
+					u32 new_val, unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+	if (!(old_val ^ new_val))
+		return;
+
+	if ((old_val ^ new_val) & VM_EXIT_LOAD_IA32_PAT)
+		FIELD64_BIT_CHANGE(host_ia32_pat, bitmap);
+
+	if ((old_val ^ new_val) & VM_EXIT_LOAD_IA32_EFER)
+		FIELD64_BIT_CHANGE(host_ia32_efer, bitmap);
+
+	if ((old_val ^ new_val) & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
+		FIELD64_BIT_CHANGE(host_ia32_perf_global_ctrl, bitmap);
+
+	if ((old_val ^ new_val) & VM_EXIT_SAVE_IA32_PAT) {
+		if ((new_val & VM_EXIT_SAVE_IA32_PAT) ||
+		    (vm_entry_ctrl & VM_ENTRY_LOAD_IA32_PAT))
+			FIELD64_BIT_SET(guest_ia32_pat, bitmap);
+		else
+			FIELD64_BIT_CLEAR(guest_ia32_pat, bitmap);
+	}
+
+	if ((old_val ^ new_val) & VM_EXIT_SAVE_IA32_EFER) {
+		if ((new_val & VM_EXIT_SAVE_IA32_EFER) ||
+		    (vm_entry_ctrl & VM_ENTRY_LOAD_IA32_EFER))
+			FIELD64_BIT_SET(guest_ia32_efer, bitmap);
+		else
+			FIELD64_BIT_CLEAR(guest_ia32_efer, bitmap);
+	}
+
+	if ((old_val ^ new_val) & VM_EXIT_CLEAR_BNDCFGS) {
+		if ((new_val & VM_EXIT_CLEAR_BNDCFGS) ||
+		    (vm_entry_ctrl & VM_ENTRY_LOAD_BNDCFGS))
+			FIELD64_BIT_SET(guest_bndcfgs, bitmap);
+		else
+			FIELD64_BIT_CLEAR(guest_bndcfgs, bitmap);
+	}
+}
+
+void vmcs12_field_update_by_vm_func(u64 old_val, u64 new_val,
+				    unsigned long *bitmap)
+{
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return;
+	}
+
+	if (!(old_val ^ new_val))
+		return;
+
+	if ((old_val ^ new_val) & VMFUNC_CONTROL_BIT(EPTP_SWITCHING))
+		FIELD64_BIT_CHANGE(eptp_list_address, bitmap);
+}
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 5e0e1b39f495..5c39370dff3c 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -187,6 +187,32 @@ struct __packed vmcs12 {
 	u16 guest_pml_index;
 };
 
+/*
+ * In unit of u16, each vmcs12 field's offset.
+ * Used to index each's position in bitmap
+ */
+#define f_pos(x) (offsetof(struct vmcs12, x) / sizeof(u16))
+#define VMCS12_FIELD_BITMAP_SIZE \
+	(sizeof(struct vmcs12) / sizeof(u16) / BITS_PER_BYTE)
+void vmcs12_field_fixed_init(unsigned long *bitmap);
+void vmcs12_field_dynamic_init(struct nested_vmx_msrs *vmx_msrs,
+			       unsigned long *bitmap);
+void vmcs12_field_update_by_pinbased_ctrl(u32 old_val, u32 new_val,
+					  unsigned long *bitmap);
+void vmcs12_field_update_by_procbased_ctrl(u32 old_val, u32 new_val,
+					   unsigned long *bitmap);
+void vmcs12_field_update_by_procbased_ctrl2(u32 old_val, u32 new_val,
+					    unsigned long *bitmap);
+void vmcs12_field_update_by_vmentry_ctrl(u32 vm_exit_ctrl, u32 old_val,
+					 u32 new_val,
+					 unsigned long *bitmap);
+void vmcs12_field_update_by_vmexit_ctrl(u32 vm_entry_ctrl, u32 old_val,
+					u32 new_val,
+					unsigned long *bitmap);
+void vmcs12_field_update_by_vm_func(u64 old_val, u64 new_val,
+				    unsigned long *bitmap);
+
+
 /*
  * VMCS12_REVISION is an arbitrary id that should be changed if the content or
  * layout of struct vmcs12 is changed. MSR_IA32_VMX_BASIC returns this id, and
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae8e62df16dd..6ab37e1d04c9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6844,8 +6844,17 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 		goto free_vmcs;
 	}
 
-	if (nested)
+	if (nested) {
 		memcpy(&vmx->nested.msrs, &vmcs_config.nested,
 		       sizeof(vmx->nested.msrs));
+
+		vmx->nested.vmcs12_field_existence_bitmap = (unsigned long *)
+			kzalloc(VMCS12_FIELD_BITMAP_SIZE, GFP_KERNEL_ACCOUNT);
+		if (!vmx->nested.vmcs12_field_existence_bitmap)
+			goto free_vmcs;
+		vmcs12_field_fixed_init(vmx->nested.vmcs12_field_existence_bitmap);
+		vmcs12_field_dynamic_init(&vmx->nested.msrs,
+					  vmx->nested.vmcs12_field_existence_bitmap);
+	}
 	else
 		memset(&vmx->nested.msrs, 0, sizeof(vmx->nested.msrs));
 
@@ -6867,6 +6876,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 
 	return 0;
 
+	kfree(vmx->nested.cached_shadow_vmcs12);
 free_vmcs:
 	free_loaded_vmcs(vmx->loaded_vmcs);
 free_pml:
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 0ecc41189292..c34f1310aa36 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -141,6 +141,9 @@ struct nested_vmx {
 	 */
 	struct vmcs12 *cached_shadow_vmcs12;
 
+	/* VMCS12 field existence bitmap */
+	unsigned long *vmcs12_field_existence_bitmap;
+
 	/*
 	 * Indicates if the shadow vmcs or enlightened vmcs must be updated
 	 * with the data held by struct vmcs12.
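All of the vmcs12_field_update_by_*() helpers above share one pattern: XOR the old and new control values, and for every feature bit that toggled, flip the existence bits of the fields that feature governs, so rewriting the same value is a no-op. A standalone sketch of that pattern, with hypothetical feature and field positions:

	/* Toggle-propagation pattern: only bits that actually changed
	 * (old ^ new) cause the corresponding existence bits to flip.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define TOY_FEATURE_POSTED_INTR (1u << 7)   /* hypothetical control bit */
	#define TOY_POS_POSTED_INTR_NV  3           /* hypothetical field bit position */

	static void toy_update(uint32_t old_val, uint32_t new_val, uint32_t *bitmap)
	{
		if (!(old_val ^ new_val))
			return;                                    /* nothing toggled */

		if ((old_val ^ new_val) & TOY_FEATURE_POSTED_INTR)
			*bitmap ^= 1u << TOY_POS_POSTED_INTR_NV;   /* change_bit() analogue */
	}

	int main(void)
	{
		uint32_t bitmap = 0;

		toy_update(0, TOY_FEATURE_POSTED_INTR, &bitmap);   /* enabled: bit set */
		printf("after enable:  %#x\n", bitmap);
		toy_update(TOY_FEATURE_POSTED_INTR,
			   TOY_FEATURE_POSTED_INTR, &bitmap);      /* rewrite: no-op */
		printf("after rewrite: %#x\n", bitmap);
		toy_update(TOY_FEATURE_POSTED_INTR, 0, &bitmap);   /* disabled: bit cleared */
		printf("after disable: %#x\n", bitmap);
		return 0;
	}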
From patchwork Tue Aug 17 09:31:10 2021
From: Robert Hoo
To: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: kvm@vger.kernel.org, yu.c.zhang@linux.intel.com, Robert Hoo
Subject: [PATCH v1 2/5] KVM: x86: nVMX: Update VMCS12 fields existence when nVMX MSRs are set
Date: Tue, 17 Aug 2021 17:31:10 +0800
Message-Id: <1629192673-9911-3-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>
References: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>

Signed-off-by: Robert Hoo
Signed-off-by: Yu Zhang
---
 arch/x86/kvm/vmx/nested.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 125b94dc3cf1..b8121f8f6d96 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1262,6 +1262,34 @@ vmx_restore_control_msr(struct vcpu_vmx *vmx, u32 msr_index, u64 data)
 
 	*lowp = data;
 	*highp = data >> 32;
+
+	switch (msr_index) {
+	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
+		vmcs12_field_update_by_pinbased_ctrl(*highp,
+				data >> 32,
+				vmx->nested.vmcs12_field_existence_bitmap);
+		break;
+	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
+		vmcs12_field_update_by_procbased_ctrl(*highp,
+				data >> 32,
+				vmx->nested.vmcs12_field_existence_bitmap);
+		break;
+	case MSR_IA32_VMX_PROCBASED_CTLS2:
+		vmcs12_field_update_by_procbased_ctrl2(*highp,
+				data >> 32,
+				vmx->nested.vmcs12_field_existence_bitmap);
+		break;
+	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
+		vmcs12_field_update_by_vmexit_ctrl(vmx->nested.msrs.entry_ctls_high,
+				*highp, data >> 32,
+				vmx->nested.vmcs12_field_existence_bitmap);
+		break;
+	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
+		vmcs12_field_update_by_vmentry_ctrl(vmx->nested.msrs.exit_ctls_high,
+				*highp, data >> 32,
+				vmx->nested.vmcs12_field_existence_bitmap);
+		break;
+	}
 
 	return 0;
 }
 
@@ -1403,6 +1431,8 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
 	case MSR_IA32_VMX_VMFUNC:
 		if (data & ~vmx->nested.msrs.vmfunc_controls)
 			return -EINVAL;
+		vmcs12_field_update_by_vm_func(vmx->nested.msrs.vmfunc_controls,
+				data, vmx->nested.vmcs12_field_existence_bitmap);
 		vmx->nested.msrs.vmfunc_controls = data;
 		return 0;
 	default:

From patchwork Tue Aug 17 09:31:11 2021
From: Robert Hoo
To: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: kvm@vger.kernel.org, yu.c.zhang@linux.intel.com, Robert Hoo
Subject: [PATCH v1 3/5] KVM: x86: nVMX: VMCS12 field's read/write respects field existence bitmap
Date: Tue, 17 Aug 2021 17:31:11 +0800
Message-Id: <1629192673-9911-4-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>
References: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>

In vmcs12_{read,write}_any(), check whether the field exists; if it does not, return failure. Their function prototypes change slightly as a result. In handle_vm{read,write}(), the callers of the above, check the return value; on failure, emulate a nested VMX failure with the instruction error VMXERR_UNSUPPORTED_VMCS_COMPONENT.
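Reduced to a standalone sketch, the read side of this change looks roughly as follows. The struct layout and helper name here are hypothetical; the -EINVAL/-ENOENT convention and the offset / sizeof(u16) bit index match the patch below:

	/* Existence-gated field read: the bit for offset/sizeof(u16) must be set
	 * in the bitmap, otherwise the read fails and the caller can emulate
	 * VMfailValid(VMXERR_UNSUPPORTED_VMCS_COMPONENT).
	 */
	#include <errno.h>
	#include <stdint.h>
	#include <string.h>

	static int toy_read_field(const void *vmcs, uint16_t offset, int width_bytes,
				  const unsigned long *bitmap, uint64_t *value)
	{
		unsigned long pos = offset / sizeof(uint16_t);

		if (!(bitmap[pos / (8 * sizeof(long))] &
		      (1UL << (pos % (8 * sizeof(long))))))
			return -ENOENT;		/* field doesn't exist for this config */

		*value = 0;
		/* zero-extend; assumes little-endian, as on x86 */
		memcpy(value, (const char *)vmcs + offset, width_bytes);
		return 0;
	}

	int main(void)
	{
		struct { uint16_t sel; uint64_t pad; } vmcs = { 0xabcd, 0 };
		unsigned long bitmap[1] = { 1UL << 0 };	/* only offset 0 exists */
		uint64_t v;

		/* offset 0 exists and reads 0xabcd; offset 8 yields -ENOENT */
		return (toy_read_field(&vmcs, 0, sizeof(uint16_t), bitmap, &v) == 0 &&
			toy_read_field(&vmcs, 8, sizeof(uint64_t), bitmap, &v) == -ENOENT)
			? 0 : 1;
	}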
Signed-off-by: Robert Hoo
Signed-off-by: Yu Zhang
---
 arch/x86/kvm/vmx/nested.c | 20 ++++++++++++------
 arch/x86/kvm/vmx/vmcs12.h | 43 ++++++++++++++++++++++++++++++---------
 2 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b8121f8f6d96..9a35953ede22 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1547,7 +1547,8 @@ static void copy_shadow_to_vmcs12(struct vcpu_vmx *vmx)
 	for (i = 0; i < max_shadow_read_write_fields; i++) {
 		field = shadow_read_write_fields[i];
 		val = __vmcs_readl(field.encoding);
-		vmcs12_write_any(vmcs12, field.encoding, field.offset, val);
+		vmcs12_write_any(vmcs12, field.encoding, field.offset, val,
+				 vmx->nested.vmcs12_field_existence_bitmap);
 	}
 
 	vmcs_clear(shadow_vmcs);
@@ -1580,8 +1581,9 @@ static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
 	for (q = 0; q < ARRAY_SIZE(fields); q++) {
 		for (i = 0; i < max_fields[q]; i++) {
 			field = fields[q][i];
-			val = vmcs12_read_any(vmcs12, field.encoding,
-					      field.offset);
+			vmcs12_read_any(vmcs12, field.encoding,
+					field.offset, &val,
+					vmx->nested.vmcs12_field_existence_bitmap);
 			__vmcs_writel(field.encoding, val);
 		}
 	}
@@ -5070,7 +5072,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct x86_exception e;
 	unsigned long field;
-	u64 value;
+	unsigned long value;
 	gva_t gva = 0;
 	short offset;
 	int len, r;
@@ -5098,7 +5100,10 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 		copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
 	/* Read the field, zero-extended to a u64 value */
-	value = vmcs12_read_any(vmcs12, field, offset);
+	r = vmcs12_read_any(vmcs12, field, offset, &value,
+			    vmx->nested.vmcs12_field_existence_bitmap);
+	if (r < 0)
+		return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
 
 	/*
 	 * Now copy part of this value to register or memory, as requested.
@@ -5223,7 +5228,10 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	if (field >= GUEST_ES_AR_BYTES && field <= GUEST_TR_AR_BYTES)
 		value &= 0x1f0ff;
 
-	vmcs12_write_any(vmcs12, field, offset, value);
+	r = vmcs12_write_any(vmcs12, field, offset, value,
+			     vmx->nested.vmcs12_field_existence_bitmap);
+	if (r < 0)
+		return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
 
 	/*
 	 * Do not track vmcs12 dirty-state if in guest-mode as we actually
diff --git a/arch/x86/kvm/vmx/vmcs12.h b/arch/x86/kvm/vmx/vmcs12.h
index 5c39370dff3c..9ac3d6ac1b6b 100644
--- a/arch/x86/kvm/vmx/vmcs12.h
+++ b/arch/x86/kvm/vmx/vmcs12.h
@@ -413,31 +413,51 @@ static inline short vmcs_field_to_offset(unsigned long field)
 
 #undef ROL16
 
-static inline u64 vmcs12_read_any(struct vmcs12 *vmcs12, unsigned long field,
-				  u16 offset)
+static inline int vmcs12_read_any(struct vmcs12 *vmcs12, unsigned long field,
+				  u16 offset, unsigned long *value,
+				  unsigned long *bitmap)
 {
 	char *p = (char *)vmcs12 + offset;
 
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("vmcs12 read: NULL bitmap");
+		return -EINVAL;
+	}
+	if (!test_bit(offset / sizeof(u16), bitmap))
+		return -ENOENT;
+
 	switch (vmcs_field_width(field)) {
 	case VMCS_FIELD_WIDTH_NATURAL_WIDTH:
-		return *((natural_width *)p);
+		*value = *((natural_width *)p);
+		break;
 	case VMCS_FIELD_WIDTH_U16:
-		return *((u16 *)p);
+		*value = *((u16 *)p);
+		break;
 	case VMCS_FIELD_WIDTH_U32:
-		return *((u32 *)p);
+		*value = *((u32 *)p);
+		break;
 	case VMCS_FIELD_WIDTH_U64:
-		return *((u64 *)p);
+		*value = *((u64 *)p);
+		break;
 	default:
 		WARN_ON_ONCE(1);
-		return -1;
+		return -ENOENT;
 	}
+
+	return 0;
 }
 
-static inline void vmcs12_write_any(struct vmcs12 *vmcs12, unsigned long field,
-				    u16 offset, u64 field_value)
+static inline int vmcs12_write_any(struct vmcs12 *vmcs12, unsigned long field,
+				   u16 offset, u64 field_value,
+				   unsigned long *bitmap)
 {
 	char *p = (char *)vmcs12 + offset;
 
+	if (unlikely(bitmap == NULL)) {
+		pr_err_once("%s: NULL bitmap", __func__);
+		return -EINVAL;
+	}
+	if (!test_bit(offset / sizeof(u16), bitmap))
+		return -ENOENT;
+
 	switch (vmcs_field_width(field)) {
 	case VMCS_FIELD_WIDTH_U16:
 		*(u16 *)p = field_value;
@@ -453,8 +473,11 @@ static inline void vmcs12_write_any(struct vmcs12 *vmcs12, unsigned long field,
 		break;
 	default:
 		WARN_ON_ONCE(1);
-		break;
+		return -ENOENT;
 	}
+
+	return 0;
 }
+
 #endif /* __KVM_X86_VMX_VMCS12_H */
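The caller-side contract, sketched under the same assumptions (toy_read_field() is the hypothetical accessor from the sketch after patch 3's commit message, and the error number is the SDM's VM-instruction error 12): a negative return is translated into a VMX instruction error instead of completing the access, mirroring the handle_vmread()/handle_vmwrite() hunks above.

	#include <stdint.h>

	/* from the earlier sketch */
	int toy_read_field(const void *vmcs, uint16_t offset, int width_bytes,
			   const unsigned long *bitmap, uint64_t *value);

	enum { TOY_VMXERR_UNSUPPORTED_VMCS_COMPONENT = 12 };

	static int toy_handle_vmread(const void *vmcs, uint16_t offset, int width,
				     const unsigned long *bitmap, uint64_t *out)
	{
		if (toy_read_field(vmcs, offset, width, bitmap, out) < 0)
			return TOY_VMXERR_UNSUPPORTED_VMCS_COMPONENT; /* VMfailValid path */
		return 0;                                             /* VMsucceed path */
	}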
From patchwork Tue Aug 17 09:31:12 2021
From: Robert Hoo
To: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: kvm@vger.kernel.org, yu.c.zhang@linux.intel.com, Robert Hoo
Subject: [PATCH v1 4/5] KVM: x86: nVMX: Respect vmcs12 field existence when calc vmx_vmcs_enum_msr
Date: Tue, 17 Aug 2021 17:31:12 +0800
Message-Id: <1629192673-9911-5-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>
References: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>

Check each field's existence when calculating vmx_vmcs_enum_msr. Note that during the initial nested VMX control MSR setup, an early stage before the VM is created, we have no idea which VMX features user space will set, so the raw physical MSR value is used for user space's reference. After the vCPU's features are settled, the dynamic fields' existence is updated.

Signed-off-by: Robert Hoo
Signed-off-by: Yu Zhang
---
 arch/x86/kvm/vmx/nested.c | 15 ++++++++++++---
 arch/x86/kvm/vmx/nested.h |  1 +
 arch/x86/kvm/vmx/vmx.c    |  5 ++++-
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9a35953ede22..9a733c703662 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6421,8 +6421,7 @@ void nested_vmx_set_vmcs_shadowing_bitmap(void)
  * that madness to get the encoding for comparison.
 */
#define VMCS12_IDX_TO_ENC(idx) ((u16)(((u16)(idx) >> 6) | ((u16)(idx) << 10)))
 
-static u64 nested_vmx_calc_vmcs_enum_msr(void)
+u64 nested_vmx_calc_vmcs_enum_msr(struct nested_vmx *nvmx)
 {
 	/*
 	 * Note these are the so called "index" of the VMCS field encoding, not
@@ -6442,6 +6441,15 @@ static u64 nested_vmx_calc_vmcs_enum_msr(void)
 		if (!vmcs_field_to_offset_table[i])
 			continue;
 
+		if (unlikely(!nvmx->vmcs12_field_existence_bitmap)) {
+			WARN_ON(1);
+			break;
+		}
+
+		if (!test_bit(vmcs_field_to_offset_table[i] / sizeof(u16),
+			      nvmx->vmcs12_field_existence_bitmap))
+			continue;
+
 		idx = vmcs_field_index(VMCS12_IDX_TO_ENC(i));
 		if (idx > max_idx)
 			max_idx = idx;
@@ -6695,7 +6703,8 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
 	rdmsrl(MSR_IA32_VMX_CR0_FIXED1, msrs->cr0_fixed1);
 	rdmsrl(MSR_IA32_VMX_CR4_FIXED1, msrs->cr4_fixed1);
 
-	msrs->vmcs_enum = nested_vmx_calc_vmcs_enum_msr();
+	/* In initial setup, simply read HW value for reference */
+	rdmsrl(MSR_IA32_VMX_VMCS_ENUM, msrs->vmcs_enum);
 }
 
 void nested_vmx_hardware_unsetup(void)
diff --git a/arch/x86/kvm/vmx/nested.h b/arch/x86/kvm/vmx/nested.h
index b69a80f43b37..34235d276aad 100644
--- a/arch/x86/kvm/vmx/nested.h
+++ b/arch/x86/kvm/vmx/nested.h
@@ -36,6 +36,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu);
 void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu);
 bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port,
 				 int size);
+u64 nested_vmx_calc_vmcs_enum_msr(struct nested_vmx *nvmx);
 
 static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6ab37e1d04c9..f44a4971cc8d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7156,10 +7156,13 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		vmcs_set_secondary_exec_control(vmx);
 	}
 
-	if (nested_vmx_allowed(vcpu))
+	if (nested_vmx_allowed(vcpu)) {
 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |=
 			FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
 			FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
+		to_vmx(vcpu)->nested.msrs.vmcs_enum =
+			nested_vmx_calc_vmcs_enum_msr(&to_vmx(vcpu)->nested);
+	}
 	else
 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &=
 			~(FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
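A quick standalone check of the rotation round trip behind VMCS12_IDX_TO_ENC() above: the offset table is indexed by ROL16(encoding, 6) (see vmcs12.h), and the macro rotates such an index back into the field encoding. The sample value 0x4002 is the encoding of CPU_BASED_VM_EXEC_CONTROL; ROL16 here is copied to keep the sketch self-contained.

	#include <assert.h>
	#include <stdint.h>

	#define ROL16(val, n)        ((uint16_t)(((val) << (n)) | ((val) >> (16 - (n)))))
	#define VMCS12_IDX_TO_ENC(i) ((uint16_t)(((uint16_t)(i) >> 6) | ((uint16_t)(i) << 10)))

	int main(void)
	{
		uint16_t enc = 0x4002;	/* CPU_BASED_VM_EXEC_CONTROL */

		/* table index -> encoding is the inverse 16-bit rotation */
		assert(VMCS12_IDX_TO_ENC(ROL16(enc, 6)) == enc);
		return 0;
	}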
From patchwork Tue Aug 17 09:31:13 2021
From: Robert Hoo
To: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org
Cc: kvm@vger.kernel.org, yu.c.zhang@linux.intel.com, Robert Hoo
Subject: [PATCH v1 5/5] KVM: x86: nVMX: Ignore user space set value to MSR_IA32_VMX_VMCS_ENUM
Date: Tue, 17 Aug 2021 17:31:13 +0800
Message-Id: <1629192673-9911-6-git-send-email-robert.hu@linux.intel.com>
In-Reply-To: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>
References: <1629192673-9911-1-git-send-email-robert.hu@linux.intel.com>

This is effectively a read-only MSR: its value is determined by vmcs12 field existence.

Signed-off-by: Robert Hoo
Signed-off-by: Yu Zhang
---
 arch/x86/kvm/vmx/nested.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9a733c703662..4d13e19d7677 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1426,7 +1426,6 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
 	case MSR_IA32_VMX_EPT_VPID_CAP:
 		return vmx_restore_vmx_ept_vpid_cap(vmx, data);
 	case MSR_IA32_VMX_VMCS_ENUM:
-		vmx->nested.msrs.vmcs_enum = data;
 		return 0;
 	case MSR_IA32_VMX_VMFUNC:
 		if (data & ~vmx->nested.msrs.vmfunc_controls