From patchwork Mon Jun 22 22:42:35 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/15] KVM: x86: Rename "shared_msrs" to "user_return_msrs"
Date: Mon, 22 Jun 2020 15:42:35 -0700
Message-Id: <20200622224249.29562-2-sean.j.christopherson@intel.com>

Rename the "shared_msrs" mechanism, which is used to defer restoring
MSRs that are only consumed when running in userspace, to a more banal
but less likely to be confusing "user_return_msrs".

The "shared" nomenclature is confusing as it's not obvious who is
sharing what, e.g. reasonable interpretations are that the guest value
is shared by vCPUs in a VM, or that the MSR value is shared/common to
guest and host, both of which are wrong.

"shared" is also misleading as the MSR value (in hardware) is not
guaranteed to be shared/reused between VMs (if that's indeed the correct
interpretation of the name), as the ability to share values between VMs
is simply a side effect (albeit a very nice side effect) of deferring
restoration of the host value until returning from userspace.

"user_return" avoids the above confusion by describing the mechanism
itself instead of its effects.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |   4 +-
 arch/x86/kvm/vmx/vmx.c          |  11 ++--
 arch/x86/kvm/x86.c              | 101 +++++++++++++++++---------------
 3 files changed, 60 insertions(+), 56 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8998e97457f..65a2c442bad7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1656,8 +1656,8 @@ int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
 		    unsigned long ipi_bitmap_high, u32 min,
 		    unsigned long icr, int op_64_bit);

-void kvm_define_shared_msr(unsigned index, u32 msr);
-int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
+void kvm_define_user_return_msr(unsigned index, u32 msr);
+int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask);

 u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc);
 u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 08e26a9518c2..ea79a02d905c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -653,8 +653,7 @@ static int vmx_set_guest_msr(struct vcpu_vmx *vmx, struct shared_msr_entry *msr,
 	msr->data = data;
 	if (msr - vmx->guest_msrs < vmx->save_nmsrs) {
 		preempt_disable();
-		ret = kvm_set_shared_msr(msr->index, msr->data,
-					 msr->mask);
+		ret = kvm_set_user_return_msr(msr->index, msr->data, msr->mask);
 		preempt_enable();
 		if (ret)
 			msr->data = old_msr_data;
@@ -1146,9 +1145,9 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	if (!vmx->guest_msrs_ready) {
 		vmx->guest_msrs_ready = true;
 		for (i = 0; i < vmx->save_nmsrs; ++i)
-			kvm_set_shared_msr(vmx->guest_msrs[i].index,
-					   vmx->guest_msrs[i].data,
-					   vmx->guest_msrs[i].mask);
+			kvm_set_user_return_msr(vmx->guest_msrs[i].index,
+						vmx->guest_msrs[i].data,
+						vmx->guest_msrs[i].mask);
 	}

@@ -8002,7 +8001,7 @@ static __init int hardware_setup(void)
 	host_idt_base = dt.address;

 	for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i)
-		kvm_define_shared_msr(i, vmx_msr_index[i]);
+		kvm_define_user_return_msr(i, vmx_msr_index[i]);

 	if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0)
 		return -EIO;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 00c88c2f34e4..098b10ab2993 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -160,24 +160,29 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
 int __read_mostly pi_inject_timer = -1;
 module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);

-#define KVM_NR_SHARED_MSRS 16
+/*
+ * Restoring the host value for MSRs that are only consumed when running in
+ * usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
+ * returns to userspace, i.e. the kernel can run with the guest's value.
+ */
+#define KVM_MAX_NR_USER_RETURN_MSRS 16

-struct kvm_shared_msrs_global {
+struct kvm_user_return_msrs_global {
 	int nr;
-	u32 msrs[KVM_NR_SHARED_MSRS];
+	u32 msrs[KVM_MAX_NR_USER_RETURN_MSRS];
 };

-struct kvm_shared_msrs {
+struct kvm_user_return_msrs {
 	struct user_return_notifier urn;
 	bool registered;
-	struct kvm_shared_msr_values {
+	struct kvm_user_return_msr_values {
 		u64 host;
 		u64 curr;
-	} values[KVM_NR_SHARED_MSRS];
+	} values[KVM_MAX_NR_USER_RETURN_MSRS];
 };

-static struct kvm_shared_msrs_global __read_mostly shared_msrs_global;
-static struct kvm_shared_msrs __percpu *shared_msrs;
+static struct kvm_user_return_msrs_global __read_mostly user_return_msrs_global;
+static struct kvm_user_return_msrs __percpu *user_return_msrs;

 #define KVM_SUPPORTED_XCR0	(XFEATURE_MASK_FP | XFEATURE_MASK_SSE \
 				| XFEATURE_MASK_YMM | XFEATURE_MASK_BNDREGS \
@@ -266,9 +271,9 @@ static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu)
 static void kvm_on_user_return(struct user_return_notifier *urn)
 {
 	unsigned slot;
-	struct kvm_shared_msrs *locals
-		= container_of(urn, struct kvm_shared_msrs, urn);
-	struct kvm_shared_msr_values *values;
+	struct kvm_user_return_msrs *msrs
+		= container_of(urn, struct kvm_user_return_msrs, urn);
+	struct kvm_user_return_msr_values *values;
 	unsigned long flags;

 	/*
@@ -276,73 +281,73 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
 	 * interrupted and executed through kvm_arch_hardware_disable()
 	 */
 	local_irq_save(flags);
-	if (locals->registered) {
-		locals->registered = false;
+	if (msrs->registered) {
+		msrs->registered = false;
 		user_return_notifier_unregister(urn);
 	}
 	local_irq_restore(flags);
-	for (slot = 0; slot < shared_msrs_global.nr; ++slot) {
-		values = &locals->values[slot];
+	for (slot = 0; slot < user_return_msrs_global.nr; ++slot) {
+		values = &msrs->values[slot];
 		if (values->host != values->curr) {
-			wrmsrl(shared_msrs_global.msrs[slot], values->host);
+			wrmsrl(user_return_msrs_global.msrs[slot], values->host);
 			values->curr = values->host;
 		}
 	}
 }

-void kvm_define_shared_msr(unsigned slot, u32 msr)
+void kvm_define_user_return_msr(unsigned slot, u32 msr)
 {
-	BUG_ON(slot >= KVM_NR_SHARED_MSRS);
-	shared_msrs_global.msrs[slot] = msr;
-	if (slot >= shared_msrs_global.nr)
-		shared_msrs_global.nr = slot + 1;
+	BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS);
+	user_return_msrs_global.msrs[slot] = msr;
+	if (slot >= user_return_msrs_global.nr)
+		user_return_msrs_global.nr = slot + 1;
 }
-EXPORT_SYMBOL_GPL(kvm_define_shared_msr);
+EXPORT_SYMBOL_GPL(kvm_define_user_return_msr);

-static void kvm_shared_msr_cpu_online(void)
+static void kvm_user_return_msr_cpu_online(void)
 {
 	unsigned int cpu = smp_processor_id();
-	struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);
 	u64 value;
 	int i;

-	for (i = 0; i < shared_msrs_global.nr; ++i) {
-		rdmsrl_safe(shared_msrs_global.msrs[i], &value);
-		smsr->values[i].host = value;
-		smsr->values[i].curr = value;
+	for (i = 0; i < user_return_msrs_global.nr; ++i) {
+		rdmsrl_safe(user_return_msrs_global.msrs[i], &value);
+		msrs->values[i].host = value;
+		msrs->values[i].curr = value;
 	}
 }

-int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
+int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
 {
 	unsigned int cpu = smp_processor_id();
-	struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);
 	int err;

-	value = (value & mask) |
-		(smsr->values[slot].host & ~mask);
-	if (value == smsr->values[slot].curr)
+	value = (value & mask) | (msrs->values[slot].host & ~mask);
+	if (value == msrs->values[slot].curr)
 		return 0;
-	err = wrmsrl_safe(shared_msrs_global.msrs[slot], value);
+	err = wrmsrl_safe(user_return_msrs_global.msrs[slot], value);
 	if (err)
 		return 1;

-	smsr->values[slot].curr = value;
-	if (!smsr->registered) {
-		smsr->urn.on_user_return = kvm_on_user_return;
-		user_return_notifier_register(&smsr->urn);
-		smsr->registered = true;
+	msrs->values[slot].curr = value;
+	if (!msrs->registered) {
+		msrs->urn.on_user_return = kvm_on_user_return;
+		user_return_notifier_register(&msrs->urn);
+		msrs->registered = true;
 	}
 	return 0;
 }
-EXPORT_SYMBOL_GPL(kvm_set_shared_msr);
+EXPORT_SYMBOL_GPL(kvm_set_user_return_msr);

 static void drop_user_return_notifiers(void)
 {
 	unsigned int cpu = smp_processor_id();
-	struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);

-	if (smsr->registered)
-		kvm_on_user_return(&smsr->urn);
+	if (msrs->registered)
+		kvm_on_user_return(&msrs->urn);
 }

 u64 kvm_get_apic_base(struct kvm_vcpu *vcpu)
@@ -7465,9 +7470,9 @@ int kvm_arch_init(void *opaque)
 		goto out_free_x86_fpu_cache;
 	}

-	shared_msrs = alloc_percpu(struct kvm_shared_msrs);
-	if (!shared_msrs) {
-		printk(KERN_ERR "kvm: failed to allocate percpu kvm_shared_msrs\n");
+	user_return_msrs = alloc_percpu(struct kvm_user_return_msrs);
+	if (!user_return_msrs) {
+		printk(KERN_ERR "kvm: failed to allocate percpu kvm_user_return_msrs\n");
 		goto out_free_x86_emulator_cache;
 	}

@@ -7500,7 +7505,7 @@ int kvm_arch_init(void *opaque)
 	return 0;

 out_free_percpu:
-	free_percpu(shared_msrs);
+	free_percpu(user_return_msrs);
 out_free_x86_emulator_cache:
 	kmem_cache_destroy(x86_emulator_cache);
 out_free_x86_fpu_cache:
@@ -7527,7 +7532,7 @@ void kvm_arch_exit(void)
 #endif
 	kvm_x86_ops.hardware_enable = NULL;
 	kvm_mmu_module_exit();
-	free_percpu(shared_msrs);
+	free_percpu(user_return_msrs);
 	kmem_cache_destroy(x86_fpu_cache);
 }

@@ -9664,7 +9669,7 @@ int kvm_arch_hardware_enable(void)
 	u64 max_tsc = 0;
 	bool stable, backwards_tsc = false;

-	kvm_shared_msr_cpu_online();
+	kvm_user_return_msr_cpu_online();
 	ret = kvm_x86_ops.hardware_enable();
 	if (ret != 0)
 		return ret;
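The deferral pattern above is self-contained enough to sketch outside of KVM. Below is a minimal, illustrative condensation of the mechanism for a single MSR slot; the demo_* names are invented for the sketch, and the real code supports multiple slots with per-CPU state, as in the diff above.

#include <linux/kernel.h>
#include <linux/user-return-notifier.h>
#include <asm/msr.h>

/* Hypothetical single-slot condensation of the user-return mechanism. */
struct demo_uret_msr {
	struct user_return_notifier urn;
	bool registered;
	u32 msr;
	u64 host, curr;
};

/* Fires when the CPU is about to return to userspace. */
static void demo_on_user_return(struct user_return_notifier *urn)
{
	struct demo_uret_msr *m = container_of(urn, struct demo_uret_msr, urn);

	m->registered = false;
	user_return_notifier_unregister(urn);
	if (m->curr != m->host) {
		/* The host value is restored only now, not on VM-exit. */
		wrmsrl(m->msr, m->host);
		m->curr = m->host;
	}
}

static void demo_load_guest_value(struct demo_uret_msr *m, u64 guest_val)
{
	if (m->curr != guest_val) {
		/* The kernel now runs with the guest's value. */
		wrmsrl(m->msr, guest_val);
		m->curr = guest_val;
	}
	if (!m->registered) {
		m->urn.on_user_return = demo_on_user_return;
		user_return_notifier_register(&m->urn);
		m->registered = true;
	}
}

The property motivating the rename is visible in demo_on_user_return(): nothing is "shared", the host value is simply restored lazily on the return-to-userspace path.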
From patchwork Mon Jun 22 22:42:36 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/15] KVM: VMX: Prepend "MAX_" to MSR array size defines
Date: Mon, 22 Jun 2020 15:42:36 -0700
Message-Id: <20200622224249.29562-3-sean.j.christopherson@intel.com>

Add "MAX" to the LOADSTORE and so-called SHARED MSR defines to make it
clearer that the define controls the array size, as opposed to the
actual number of valid entries that are in the array.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/vmx/vmx.c    |  6 +++---
 arch/x86/kvm/vmx/vmx.h    | 10 +++++-----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index adb11b504d5c..df33878ff612 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1035,7 +1035,7 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
 	in_vmcs12_store_list = nested_msr_store_list_has_msr(vcpu, msr_index);

 	if (in_vmcs12_store_list && !in_autostore_list) {
-		if (autostore->nr == NR_LOADSTORE_MSRS) {
+		if (autostore->nr == MAX_NR_LOADSTORE_MSRS) {
 			/*
 			 * Emulated VMEntry does not fail here.  Instead a less
 			 * accurate value will be returned by
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ea79a02d905c..19e6f697affb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -931,8 +931,8 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
 	if (!entry_only)
 		j = vmx_find_msr_index(&m->host, msr);

-	if ((i < 0 && m->guest.nr == NR_LOADSTORE_MSRS) ||
-	    (j < 0 && m->host.nr == NR_LOADSTORE_MSRS)) {
+	if ((i < 0 && m->guest.nr == MAX_NR_LOADSTORE_MSRS) ||
+	    (j < 0 && m->host.nr == MAX_NR_LOADSTORE_MSRS)) {
 		printk_once(KERN_WARNING "Not enough msr switch entries. "
 				"Can't add msr %x\n", msr);
 		return;
@@ -6891,7 +6891,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 		goto free_vpid;
 	}

-	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) != NR_SHARED_MSRS);
+	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) != MAX_NR_SHARED_MSRS);

 	for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i) {
 		u32 index = vmx_msr_index[i];
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 8a83b5edc820..4c053d204bea 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -21,16 +21,16 @@ extern const u32 vmx_msr_index[];
 #define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))

 #ifdef CONFIG_X86_64
-#define NR_SHARED_MSRS	7
+#define MAX_NR_SHARED_MSRS	7
 #else
-#define NR_SHARED_MSRS	4
+#define MAX_NR_SHARED_MSRS	4
 #endif

-#define NR_LOADSTORE_MSRS	8
+#define MAX_NR_LOADSTORE_MSRS	8

 struct vmx_msrs {
 	unsigned int		nr;
-	struct vmx_msr_entry	val[NR_LOADSTORE_MSRS];
+	struct vmx_msr_entry	val[MAX_NR_LOADSTORE_MSRS];
 };

 struct shared_msr_entry {
@@ -217,7 +217,7 @@ struct vcpu_vmx {
 	u32		      idt_vectoring_info;
 	ulong		      rflags;

-	struct shared_msr_entry guest_msrs[NR_SHARED_MSRS];
+	struct shared_msr_entry guest_msrs[MAX_NR_SHARED_MSRS];
 	int		      nmsrs;
 	int		      save_nmsrs;
 	bool		      guest_msrs_ready;
From patchwork Mon Jun 22 22:42:37 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/15] KVM: VMX: Rename "vmx_find_msr_index" to "vmx_find_loadstore_msr_slot"
Date: Mon, 22 Jun 2020 15:42:37 -0700
Message-Id: <20200622224249.29562-4-sean.j.christopherson@intel.com>

Add "loadstore" to vmx_find_msr_index() to differentiate it from the
so-called shared MSRs helpers (which will soon be renamed), and replace
"index" with "slot" to better convey that the helper returns the slot in
the array, not the MSR index (the value that gets stuffed into ECX).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c | 16 ++++++++--------
 arch/x86/kvm/vmx/vmx.c    | 10 +++++-----
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index df33878ff612..afc8e7e9ef24 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -933,11 +933,11 @@ static bool nested_vmx_get_vmexit_msr_value(struct kvm_vcpu *vcpu,
 	 * VM-exit in L0, use the more accurate value.
	 */
 	if (msr_index == MSR_IA32_TSC) {
-		int index = vmx_find_msr_index(&vmx->msr_autostore.guest,
-					       MSR_IA32_TSC);
+		int i = vmx_find_loadstore_msr_slot(&vmx->msr_autostore.guest,
+						    MSR_IA32_TSC);

-		if (index >= 0) {
-			u64 val = vmx->msr_autostore.guest.val[index].value;
+		if (i >= 0) {
+			u64 val = vmx->msr_autostore.guest.val[i].value;

 			*data = kvm_read_l1_tsc(vcpu, val);
 			return true;
@@ -1026,12 +1026,12 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmx_msrs *autostore = &vmx->msr_autostore.guest;
 	bool in_vmcs12_store_list;
-	int msr_autostore_index;
+	int msr_autostore_slot;
 	bool in_autostore_list;
 	int last;

-	msr_autostore_index = vmx_find_msr_index(autostore, msr_index);
-	in_autostore_list = msr_autostore_index >= 0;
+	msr_autostore_slot = vmx_find_loadstore_msr_slot(autostore, msr_index);
+	in_autostore_list = msr_autostore_slot >= 0;
 	in_vmcs12_store_list = nested_msr_store_list_has_msr(vcpu, msr_index);

 	if (in_vmcs12_store_list && !in_autostore_list) {
@@ -1052,7 +1052,7 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
 		autostore->val[last].index = msr_index;
 	} else if (!in_vmcs12_store_list && in_autostore_list) {
 		last = --autostore->nr;
-		autostore->val[msr_autostore_index] = autostore->val[last];
+		autostore->val[msr_autostore_slot] = autostore->val[last];
 	}
 }

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 19e6f697affb..ba9af25b34e4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -826,7 +826,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
 	vm_exit_controls_clearbit(vmx, exit);
 }

-int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
+int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr)
 {
 	unsigned int i;

@@ -860,7 +860,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
 		}
 		break;
 	}
-	i = vmx_find_msr_index(&m->guest, msr);
+	i = vmx_find_loadstore_msr_slot(&m->guest, msr);
 	if (i < 0)
 		goto skip_guest;
 	--m->guest.nr;
@@ -868,7 +868,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);

 skip_guest:
-	i = vmx_find_msr_index(&m->host, msr);
+	i = vmx_find_loadstore_msr_slot(&m->host, msr);
 	if (i < 0)
 		return;

@@ -927,9 +927,9 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
 		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
 	}

-	i = vmx_find_msr_index(&m->guest, msr);
+	i = vmx_find_loadstore_msr_slot(&m->guest, msr);
 	if (!entry_only)
-		j = vmx_find_msr_index(&m->host, msr);
+		j = vmx_find_loadstore_msr_slot(&m->host, msr);

 	if ((i < 0 && m->guest.nr == MAX_NR_LOADSTORE_MSRS) ||
 	    (j < 0 && m->host.nr == MAX_NR_LOADSTORE_MSRS)) {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 4c053d204bea..12a845209088 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -354,7 +354,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
 struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
-int vmx_find_msr_index(struct vmx_msrs *m, u32 msr);
+int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
 int vmx_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
 			      struct x86_exception *e);
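The index/slot distinction is easy to conflate because both are small integers: the MSR index is the architectural identifier stuffed into ECX for RDMSR/WRMSR, while the slot is merely a position in one of KVM's arrays. A standalone sketch of a find-slot helper in the same shape as vmx_find_loadstore_msr_slot() above, with invented demo_* types:

#include <linux/types.h>

struct demo_msr_entry {
	u32 index;	/* architectural MSR index, i.e. the ECX value */
	u64 value;
};

struct demo_msrs {
	unsigned int nr;
	struct demo_msr_entry val[8];
};

/* Returns the array slot that holds @msr, or -1; never the MSR index. */
static int demo_find_msr_slot(struct demo_msrs *m, u32 msr)
{
	unsigned int i;

	for (i = 0; i < m->nr; i++)
		if (m->val[i].index == msr)
			return i;
	return -1;
}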
From patchwork Mon Jun 22 22:42:38 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/15] KVM: VMX: Rename the "shared_msr_entry" struct to "vmx_uret_msr"
Date: Mon, 22 Jun 2020 15:42:38 -0700
Message-Id: <20200622224249.29562-5-sean.j.christopherson@intel.com>

Rename struct "shared_msr_entry" to "vmx_uret_msr" to align with x86's
rename of "shared_msrs" to "user_return_msrs", and to call out that the
struct is specific to VMX, i.e. not part of the generic "shared_msrs"
framework.  Abbreviate "user_return" as "uret" to keep line lengths
marginally sane and code more or less readable.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/vmx/vmx.c    | 58 +++++++++++++++++++--------------------
 arch/x86/kvm/vmx/vmx.h    | 10 +++----
 3 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index afc8e7e9ef24..52de3e03fcdc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4209,7 +4209,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
 {
-	struct shared_msr_entry *efer_msr;
+	struct vmx_uret_msr *efer_msr;
 	unsigned int i;

 	if (vm_entry_controls_get(vmx) & VM_ENTRY_LOAD_IA32_EFER)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ba9af25b34e4..9cd40a7e9b47 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -630,28 +630,28 @@ static inline int __find_msr_index(struct vcpu_vmx *vmx, u32 msr)
 	int i;

 	for (i = 0; i < vmx->nmsrs; ++i)
-		if (vmx_msr_index[vmx->guest_msrs[i].index] == msr)
+		if (vmx_msr_index[vmx->guest_uret_msrs[i].index] == msr)
 			return i;
 	return -1;
 }

-struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
+struct vmx_uret_msr *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
 {
 	int i;

 	i = __find_msr_index(vmx, msr);
 	if (i >= 0)
-		return &vmx->guest_msrs[i];
+		return &vmx->guest_uret_msrs[i];
 	return NULL;
 }

-static int vmx_set_guest_msr(struct vcpu_vmx *vmx, struct shared_msr_entry *msr, u64 data)
+static int vmx_set_guest_msr(struct vcpu_vmx *vmx, struct vmx_uret_msr *msr, u64 data)
 {
 	int ret = 0;

 	u64 old_msr_data = msr->data;
 	msr->data = data;
-	if (msr - vmx->guest_msrs < vmx->save_nmsrs) {
+	if (msr - vmx->guest_uret_msrs < vmx->save_nmsrs) {
 		preempt_disable();
 		ret = kvm_set_user_return_msr(msr->index, msr->data, msr->mask);
 		preempt_enable();
@@ -996,8 +996,8 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
 		guest_efer &= ~ignore_bits;
 		guest_efer |= host_efer & ignore_bits;

-		vmx->guest_msrs[efer_offset].data = guest_efer;
-		vmx->guest_msrs[efer_offset].mask = ~ignore_bits;
+		vmx->guest_uret_msrs[efer_offset].data = guest_efer;
+		vmx->guest_uret_msrs[efer_offset].mask = ~ignore_bits;

 		return true;
 	}
@@ -1145,9 +1145,9 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	if (!vmx->guest_msrs_ready) {
 		vmx->guest_msrs_ready = true;
 		for (i = 0; i < vmx->save_nmsrs; ++i)
-			kvm_set_user_return_msr(vmx->guest_msrs[i].index,
-						vmx->guest_msrs[i].data,
-						vmx->guest_msrs[i].mask);
+			kvm_set_user_return_msr(vmx->guest_uret_msrs[i].index,
+						vmx->guest_uret_msrs[i].data,
+						vmx->guest_uret_msrs[i].mask);
 	}

@@ -1714,11 +1714,11 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
  */
 static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
 {
-	struct shared_msr_entry tmp;
+	struct vmx_uret_msr tmp;

-	tmp = vmx->guest_msrs[to];
-	vmx->guest_msrs[to] = vmx->guest_msrs[from];
-	vmx->guest_msrs[from] = tmp;
+	tmp = vmx->guest_uret_msrs[to];
+	vmx->guest_uret_msrs[to] = vmx->guest_uret_msrs[from];
+	vmx->guest_uret_msrs[from] = tmp;
 }

 /*
@@ -1829,7 +1829,7 @@ static int vmx_get_msr_feature(struct kvm_msr_entry *msr)
 static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	struct shared_msr_entry *msr;
+	struct vmx_uret_msr *msr;
 	u32 index;

 	switch (msr_info->index) {
@@ -1850,7 +1850,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated &&
 		    !(vcpu->arch.arch_capabilities & ARCH_CAP_TSX_CTRL_MSR))
 			return 1;
-		goto find_shared_msr;
+		goto find_uret_msr;
 	case MSR_IA32_UMWAIT_CONTROL:
 		if (!msr_info->host_initiated && !vmx_has_waitpkg(vmx))
 			return 1;
@@ -1957,9 +1957,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
 			return 1;
-		goto find_shared_msr;
+		goto find_uret_msr;
 	default:
-	find_shared_msr:
+	find_uret_msr:
 		msr = find_msr_entry(vmx, msr_info->index);
 		if (msr) {
 			msr_info->data = msr->data;
@@ -1989,7 +1989,7 @@ static u64 nested_vmx_truncate_sysenter_addr(struct kvm_vcpu *vcpu,
 static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	struct shared_msr_entry *msr;
+	struct vmx_uret_msr *msr;
 	int ret = 0;
 	u32 msr_index = msr_info->index;
 	u64 data = msr_info->data;
@@ -2093,7 +2093,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		if (data & ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR))
 			return 1;
-		goto find_shared_msr;
+		goto find_uret_msr;
 	case MSR_IA32_PRED_CMD:
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
@@ -2230,10 +2230,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		/* Check reserved bit, higher 32 bits should be zero */
 		if ((data >> 32) != 0)
 			return 1;
-		goto find_shared_msr;
+		goto find_uret_msr;

 	default:
-	find_shared_msr:
+	find_uret_msr:
 		msr = find_msr_entry(vmx, msr_index);
 		if (msr)
 			ret = vmx_set_guest_msr(vmx, msr, data);
@@ -2866,7 +2866,7 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	struct shared_msr_entry *msr = find_msr_entry(vmx, MSR_EFER);
+	struct vmx_uret_msr *msr = find_msr_entry(vmx, MSR_EFER);

 	if (!msr)
 		return;
@@ -6891,7 +6891,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 		goto free_vpid;
 	}

-	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) != MAX_NR_SHARED_MSRS);
+	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) != MAX_NR_USER_RETURN_MSRS);

 	for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i) {
 		u32 index = vmx_msr_index[i];
@@ -6903,8 +6903,8 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 		if (wrmsr_safe(index, data_low, data_high) < 0)
 			continue;

-		vmx->guest_msrs[j].index = i;
-		vmx->guest_msrs[j].data = 0;
+		vmx->guest_uret_msrs[j].index = i;
+		vmx->guest_uret_msrs[j].data = 0;
 		switch (index) {
 		case MSR_IA32_TSX_CTRL:
 			/*
@@ -6912,10 +6912,10 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 			 * let's avoid changing CPUID bits under the host
 			 * kernel's feet.
			 */
-			vmx->guest_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
+			vmx->guest_uret_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
 			break;
 		default:
-			vmx->guest_msrs[j].mask = -1ull;
+			vmx->guest_uret_msrs[j].mask = -1ull;
 			break;
 		}
 		++vmx->nmsrs;
@@ -7282,7 +7282,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 		update_intel_pt_cfg(vcpu);

 	if (boot_cpu_has(X86_FEATURE_RTM)) {
-		struct shared_msr_entry *msr;
+		struct vmx_uret_msr *msr;
 		msr = find_msr_entry(vmx, MSR_IA32_TSX_CTRL);
 		if (msr) {
 			bool enabled = guest_cpuid_has(vcpu, X86_FEATURE_RTM);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 12a845209088..256e3e4776f8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -21,9 +21,9 @@ extern const u32 vmx_msr_index[];
 #define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))

 #ifdef CONFIG_X86_64
-#define MAX_NR_SHARED_MSRS	7
+#define MAX_NR_USER_RETURN_MSRS	7
 #else
-#define MAX_NR_SHARED_MSRS	4
+#define MAX_NR_USER_RETURN_MSRS	4
 #endif

 #define MAX_NR_LOADSTORE_MSRS	8
@@ -33,7 +33,7 @@ struct vmx_msrs {
 	struct vmx_msr_entry	val[MAX_NR_LOADSTORE_MSRS];
 };

-struct shared_msr_entry {
+struct vmx_uret_msr {
 	unsigned index;
 	u64 data;
 	u64 mask;
@@ -217,7 +217,7 @@ struct vcpu_vmx {
 	u32		      idt_vectoring_info;
 	ulong		      rflags;

-	struct shared_msr_entry guest_msrs[MAX_NR_SHARED_MSRS];
+	struct vmx_uret_msr   guest_uret_msrs[MAX_NR_USER_RETURN_MSRS];
 	int		      nmsrs;
 	int		      save_nmsrs;
 	bool		      guest_msrs_ready;
@@ -351,7 +351,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
 bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
 void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
-struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
+struct vmx_uret_msr *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
 int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
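The mask field initialized above pairs with the merge performed by kvm_set_user_return_msr() from patch 01, value = (value & mask) | (host & ~mask): bits clear in the mask are pinned to the host's value. A worked example with made-up register values for the TSX_CTRL case, where the mask is ~TSX_CTRL_CPUID_CLEAR (bit 1):

#include <linux/types.h>

static u64 demo_merge_tsx_ctrl(void)
{
	u64 host  = 0x2;	/* assumed: host has CPUID_CLEAR (bit 1) set */
	u64 guest = 0x1;	/* assumed: guest sets only RTM_DISABLE (bit 0) */
	u64 mask  = ~(u64)0x2;	/* ~TSX_CTRL_CPUID_CLEAR, as in the diff above */

	/* Guest bits pass through; CPUID_CLEAR stays under host control. */
	return (guest & mask) | (host & ~mask);		/* == 0x3 */
}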
d="scan'208";a="264634914" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.152]) by fmsmga008.fm.intel.com with ESMTP; 22 Jun 2020 15:42:52 -0700 From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 05/15] KVM: VMX: Rename vcpu_vmx's "nmsrs" to "nr_uret_msrs" Date: Mon, 22 Jun 2020 15:42:39 -0700 Message-Id: <20200622224249.29562-6-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com> References: <20200622224249.29562-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Rename vcpu_vmx.nsmrs to vcpu_vmx.nr_uret_msrs to explicitly associate it with the guest_uret_msrs array. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 6 +++--- arch/x86/kvm/vmx/vmx.h | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 9cd40a7e9b47..d957f9d2e351 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -629,7 +629,7 @@ static inline int __find_msr_index(struct vcpu_vmx *vmx, u32 msr) { int i; - for (i = 0; i < vmx->nmsrs; ++i) + for (i = 0; i < vmx->nr_uret_msrs; ++i) if (vmx_msr_index[vmx->guest_uret_msrs[i].index] == msr) return i; return -1; @@ -6896,7 +6896,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu) for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i) { u32 index = vmx_msr_index[i]; u32 data_low, data_high; - int j = vmx->nmsrs; + int j = vmx->nr_uret_msrs; if (rdmsr_safe(index, &data_low, &data_high) < 0) continue; @@ -6918,7 +6918,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu) vmx->guest_uret_msrs[j].mask = -1ull; break; } - ++vmx->nmsrs; + ++vmx->nr_uret_msrs; } err = alloc_loaded_vmcs(&vmx->vmcs01); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 256e3e4776f8..16450f85ddcb 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -218,7 +218,7 @@ struct vcpu_vmx { ulong rflags; struct vmx_uret_msr guest_uret_msrs[MAX_NR_USER_RETURN_MSRS]; - int nmsrs; + int nr_uret_msrs; int save_nmsrs; bool guest_msrs_ready; #ifdef CONFIG_X86_64 From patchwork Mon Jun 22 22:42:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11619309 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CE0B892A for ; Mon, 22 Jun 2020 22:44:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C031E2073E for ; Mon, 22 Jun 2020 22:44:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731227AbgFVWm7 (ORCPT ); Mon, 22 Jun 2020 18:42:59 -0400 Received: from mga18.intel.com ([134.134.136.126]:27426 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731183AbgFVWm5 (ORCPT ); Mon, 22 Jun 2020 18:42:57 -0400 IronPort-SDR: EFiwWeqnrDAYsrX0ZXcuXBv4IIW+t+1g+0nfqUow2U/eKdtd49gK4g2dXnz9dpYreePZLmvPx+ OWVrMcZcp5Yw== X-IronPort-AV: E=McAfee;i="6000,8403,9660"; a="131303569" X-IronPort-AV: E=Sophos;i="5.75,268,1589266800"; d="scan'208";a="131303569" X-Amp-Result: SKIPPED(no attachment in message) 
From patchwork Mon Jun 22 22:42:40 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/15] KVM: VMX: Rename vcpu_vmx's "save_nmsrs" to "nr_active_uret_msrs"
Date: Mon, 22 Jun 2020 15:42:40 -0700
Message-Id: <20200622224249.29562-7-sean.j.christopherson@intel.com>

Add "uret" into the name of "save_nmsrs" to explicitly associate it with
the guest_uret_msrs array, and replace "save" with "active" (for lack of
a better word) to better describe what is being tracked.  While "save"
is more or less accurate when viewed as a literal description of the
field, e.g. it holds the number of MSRs that were saved into the array
the last time setup_msrs() was invoked, it can easily be misinterpreted
by the reader, e.g. as meaning the number of MSRs that were saved from
hardware at some point in the past, or as the number of MSRs that need
to be saved at some point in the future, both of which are wrong.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 22 +++++++++++-----------
 arch/x86/kvm/vmx/vmx.h |  2 +-
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d957f9d2e351..baf425fa7089 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -651,7 +651,7 @@ static int vmx_set_guest_msr(struct vcpu_vmx *vmx, struct vmx_uret_msr *msr, u64
 	u64 old_msr_data = msr->data;
 	msr->data = data;
-	if (msr - vmx->guest_uret_msrs < vmx->save_nmsrs) {
+	if (msr - vmx->guest_uret_msrs < vmx->nr_active_uret_msrs) {
 		preempt_disable();
 		ret = kvm_set_user_return_msr(msr->index, msr->data, msr->mask);
 		preempt_enable();
@@ -1144,7 +1144,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
	 */
 	if (!vmx->guest_msrs_ready) {
 		vmx->guest_msrs_ready = true;
-		for (i = 0; i < vmx->save_nmsrs; ++i)
+		for (i = 0; i < vmx->nr_active_uret_msrs; ++i)
 			kvm_set_user_return_msr(vmx->guest_uret_msrs[i].index,
 						vmx->guest_uret_msrs[i].data,
 						vmx->guest_uret_msrs[i].mask);
@@ -1728,9 +1728,9 @@ static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
  */
 static void setup_msrs(struct vcpu_vmx *vmx)
 {
-	int save_nmsrs, index;
+	int nr_active_uret_msrs, index;

-	save_nmsrs = 0;
+	nr_active_uret_msrs = 0;
 #ifdef CONFIG_X86_64
 	/*
	 * The SYSCALL MSRs are only needed on long mode guests, and only
@@ -1739,26 +1739,26 @@ static void setup_msrs(struct vcpu_vmx *vmx)
 	if (is_long_mode(&vmx->vcpu) && (vmx->vcpu.arch.efer & EFER_SCE)) {
 		index = __find_msr_index(vmx, MSR_STAR);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			move_msr_up(vmx, index, nr_active_uret_msrs++);
 		index = __find_msr_index(vmx, MSR_LSTAR);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			move_msr_up(vmx, index, nr_active_uret_msrs++);
 		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			move_msr_up(vmx, index, nr_active_uret_msrs++);
 	}
 #endif
 	index = __find_msr_index(vmx, MSR_EFER);
 	if (index >= 0 && update_transition_efer(vmx, index))
-		move_msr_up(vmx, index, save_nmsrs++);
+		move_msr_up(vmx, index, nr_active_uret_msrs++);
 	index = __find_msr_index(vmx, MSR_TSC_AUX);
 	if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
-		move_msr_up(vmx, index, save_nmsrs++);
+		move_msr_up(vmx, index, nr_active_uret_msrs++);
 	index = __find_msr_index(vmx, MSR_IA32_TSX_CTRL);
 	if (index >= 0)
-		move_msr_up(vmx, index, save_nmsrs++);
+		move_msr_up(vmx, index, nr_active_uret_msrs++);

-	vmx->save_nmsrs = save_nmsrs;
+	vmx->nr_active_uret_msrs = nr_active_uret_msrs;
 	vmx->guest_msrs_ready = false;

 	if (cpu_has_vmx_msr_bitmap())
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 16450f85ddcb..55257195cb27 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -219,7 +219,7 @@ struct vcpu_vmx {
 	struct vmx_uret_msr   guest_uret_msrs[MAX_NR_USER_RETURN_MSRS];
 	int		      nr_uret_msrs;
-	int		      save_nmsrs;
+	int		      nr_active_uret_msrs;
 	bool		      guest_msrs_ready;
 #ifdef CONFIG_X86_64
 	u64		      msr_host_kernel_gs_base;
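setup_msrs() leans on a compaction idiom: each MSR that should actually be loaded is swapped to the front of the array, and nr_active_uret_msrs counts that prefix. A generic, self-contained sketch of the pattern with invented demo_* names:

#include <linux/types.h>

struct demo_entry {
	u32 index;
	u64 data;
};

struct demo_ctx {
	struct demo_entry msrs[8];
	int nr_msrs;	/* entries populated in the array */
	int nr_active;	/* prefix of entries that will actually be loaded */
};

static void demo_move_up(struct demo_ctx *c, int from, int to)
{
	struct demo_entry tmp = c->msrs[to];

	c->msrs[to] = c->msrs[from];
	c->msrs[from] = tmp;
}

/* Rebuild the active prefix; only msrs[0..nr_active) are ever loaded. */
static void demo_setup(struct demo_ctx *c, bool (*want)(u32 index))
{
	int i, nr_active = 0;

	for (i = 0; i < c->nr_msrs; i++)
		if (want(c->msrs[i].index))
			demo_move_up(c, i, nr_active++);
	c->nr_active = nr_active;
}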
From patchwork Mon Jun 22 22:42:41 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/15] KVM: VMX: Rename vcpu_vmx's "guest_msrs_ready" to "guest_uret_msrs_loaded"
Date: Mon, 22 Jun 2020 15:42:41 -0700
Message-Id: <20200622224249.29562-8-sean.j.christopherson@intel.com>

Add "uret" to "guest_msrs_ready" to explicitly associate it with the
"guest_uret_msrs" array, and replace "ready" with "loaded" to more
precisely reflect what it tracks, e.g. "ready" could be interpreted as
meaning ready for processing (setup_msrs() has run), which is wrong.
"loaded" also aligns with the similar "guest_state_loaded" field.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 8 ++++----
 arch/x86/kvm/vmx/vmx.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index baf425fa7089..cebd68ea50ba 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1142,8 +1142,8 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	 * when guest state is loaded. This happens when guest transitions
 	 * to/from long-mode by setting MSR_EFER.LMA.
	 */
-	if (!vmx->guest_msrs_ready) {
-		vmx->guest_msrs_ready = true;
+	if (!vmx->guest_uret_msrs_loaded) {
+		vmx->guest_uret_msrs_loaded = true;
 		for (i = 0; i < vmx->nr_active_uret_msrs; ++i)
 			kvm_set_user_return_msr(vmx->guest_uret_msrs[i].index,
 						vmx->guest_uret_msrs[i].data,
@@ -1231,7 +1231,7 @@ static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx)
 #endif
 	load_fixmap_gdt(raw_smp_processor_id());
 	vmx->guest_state_loaded = false;
-	vmx->guest_msrs_ready = false;
+	vmx->guest_uret_msrs_loaded = false;
 }

 #ifdef CONFIG_X86_64
@@ -1759,7 +1759,7 @@ static void setup_msrs(struct vcpu_vmx *vmx)
 		move_msr_up(vmx, index, nr_active_uret_msrs++);

 	vmx->nr_active_uret_msrs = nr_active_uret_msrs;
-	vmx->guest_msrs_ready = false;
+	vmx->guest_uret_msrs_loaded = false;

 	if (cpu_has_vmx_msr_bitmap())
 		vmx_update_msr_bitmap(&vmx->vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 55257195cb27..a0237ff6c4e0 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -220,7 +220,7 @@ struct vcpu_vmx {
 	struct vmx_uret_msr   guest_uret_msrs[MAX_NR_USER_RETURN_MSRS];
 	int		      nr_uret_msrs;
 	int		      nr_active_uret_msrs;
-	bool		      guest_msrs_ready;
+	bool		      guest_uret_msrs_loaded;
 #ifdef CONFIG_X86_64
 	u64		      msr_host_kernel_gs_base;
 	u64		      msr_guest_kernel_gs_base;
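The renamed flag implements a classic load-on-demand scheme: invalidate cheaply anywhere the active set may have changed, and perform the expensive MSR writes at most once on the next switch to the guest. A condensed sketch with invented demo_* names:

#include <linux/types.h>

struct demo_vcpu {
	bool guest_uret_msrs_loaded;
	/* ... active uret MSR values elided ... */
};

/* Cheap: called wherever the active MSR set may have changed. */
static void demo_invalidate(struct demo_vcpu *v)
{
	v->guest_uret_msrs_loaded = false;
}

/* Entry path: does the expensive writes at most once per invalidation. */
static void demo_prepare_switch_to_guest(struct demo_vcpu *v)
{
	if (v->guest_uret_msrs_loaded)
		return;
	v->guest_uret_msrs_loaded = true;
	/* ... kvm_set_user_return_msr() for each active uret MSR ... */
}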
From patchwork Mon Jun 22 22:42:42 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/15] KVM: VMX: Rename "__find_msr_index" to "__vmx_find_uret_msr"
Date: Mon, 22 Jun 2020 15:42:42 -0700
Message-Id: <20200622224249.29562-9-sean.j.christopherson@intel.com>

Rename "__find_msr_index" to scope it to VMX, associate it with
guest_uret_msrs, and to avoid conflating "MSR's ECX index" with "MSR's
array index".  Similarly, don't use "slot" in the name so as to avoid
colliding with the common x86 code's half of "user_return_msrs" (the
slot in kvm_user_return_msrs is not the same slot in guest_uret_msrs).

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cebd68ea50ba..a0f4049d956f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -625,7 +625,7 @@ static inline bool report_flexpriority(void)
 	return flexpriority_enabled;
 }

-static inline int __find_msr_index(struct vcpu_vmx *vmx, u32 msr)
+static inline int __vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
 {
 	int i;

@@ -639,7 +639,7 @@ struct vmx_uret_msr *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
 {
 	int i;

-	i = __find_msr_index(vmx, msr);
+	i = __vmx_find_uret_msr(vmx, msr);
 	if (i >= 0)
 		return &vmx->guest_uret_msrs[i];
 	return NULL;
@@ -1737,24 +1737,24 @@ static void setup_msrs(struct vcpu_vmx *vmx)
 	 * when EFER.SCE is set.
	 */
 	if (is_long_mode(&vmx->vcpu) && (vmx->vcpu.arch.efer & EFER_SCE)) {
-		index = __find_msr_index(vmx, MSR_STAR);
+		index = __vmx_find_uret_msr(vmx, MSR_STAR);
 		if (index >= 0)
 			move_msr_up(vmx, index, nr_active_uret_msrs++);
-		index = __find_msr_index(vmx, MSR_LSTAR);
+		index = __vmx_find_uret_msr(vmx, MSR_LSTAR);
 		if (index >= 0)
 			move_msr_up(vmx, index, nr_active_uret_msrs++);
-		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
+		index = __vmx_find_uret_msr(vmx, MSR_SYSCALL_MASK);
 		if (index >= 0)
 			move_msr_up(vmx, index, nr_active_uret_msrs++);
 	}
 #endif
-	index = __find_msr_index(vmx, MSR_EFER);
+	index = __vmx_find_uret_msr(vmx, MSR_EFER);
 	if (index >= 0 && update_transition_efer(vmx, index))
 		move_msr_up(vmx, index, nr_active_uret_msrs++);
-	index = __find_msr_index(vmx, MSR_TSC_AUX);
+	index = __vmx_find_uret_msr(vmx, MSR_TSC_AUX);
 	if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
 		move_msr_up(vmx, index, nr_active_uret_msrs++);
-	index = __find_msr_index(vmx, MSR_IA32_TSX_CTRL);
+	index = __vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
 	if (index >= 0)
 		move_msr_up(vmx, index, nr_active_uret_msrs++);
d="scan'208";a="131303574" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Jun 2020 15:42:54 -0700 IronPort-SDR: FAzpuBBf11NBdp3BnF5sXwz5dNqTfJt2k/bHeEQM4oziqWdZgdT6N8zu6h9Cn0p5afhdaaPytx pL1960fCdI1w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,268,1589266800"; d="scan'208";a="264634927" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.152]) by fmsmga008.fm.intel.com with ESMTP; 22 Jun 2020 15:42:54 -0700 From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 09/15] KVM: VMX: Check guest support for RDTSCP before processing MSR_TSC_AUX Date: Mon, 22 Jun 2020 15:42:43 -0700 Message-Id: <20200622224249.29562-10-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com> References: <20200622224249.29562-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Check for RDTSCP support prior to checking if MSR_TSC_AUX is in the uret MSRs array so that the array lookup and manipulation are back-to-back. This paves the way toward adding a helper to wrap the lookup and manipulation. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index a0f4049d956f..954b9aa950f2 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -1751,9 +1751,11 @@ static void setup_msrs(struct vcpu_vmx *vmx) index = __vmx_find_uret_msr(vmx, MSR_EFER); if (index >= 0 && update_transition_efer(vmx, index)) move_msr_up(vmx, index, nr_active_uret_msrs++); - index = __vmx_find_uret_msr(vmx, MSR_TSC_AUX); - if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP)) - move_msr_up(vmx, index, nr_active_uret_msrs++); + if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP)) { + index = __vmx_find_uret_msr(vmx, MSR_TSC_AUX); + if (index >= 0) + move_msr_up(vmx, index, nr_active_uret_msrs++); + } index = __vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL); if (index >= 0) move_msr_up(vmx, index, nr_active_uret_msrs++); From patchwork Mon Jun 22 22:42:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11619319 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8612660D for ; Mon, 22 Jun 2020 22:44:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7732B20767 for ; Mon, 22 Jun 2020 22:44:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731539AbgFVWow (ORCPT ); Mon, 22 Jun 2020 18:44:52 -0400 Received: from mga18.intel.com ([134.134.136.126]:27425 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731202AbgFVWm5 (ORCPT ); Mon, 22 Jun 2020 18:42:57 -0400 IronPort-SDR: fl99ngNz9N+Bh2ZAITxTEmhPGF5t8EibazaOFrmohFs8ux4DFq6wlZEMfN52S4IG6LEfvqx9jV u3eYeE8flzRw== X-IronPort-AV: E=McAfee;i="6000,8403,9660"; a="131303575" X-IronPort-AV: 
E=Sophos;i="5.75,268,1589266800"; d="scan'208";a="131303575" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Jun 2020 15:42:54 -0700 IronPort-SDR: gsjx3G/SzUdGeQYFuDLMKThH/xydo9m+BNi0W+4o3RqkZttYlTgp5kpL8BQiP/Y02gB55KQmXs OFbhXg7Bw0gQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,268,1589266800"; d="scan'208";a="264634930" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.152]) by fmsmga008.fm.intel.com with ESMTP; 22 Jun 2020 15:42:54 -0700 From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 10/15] KVM: VMX: Move uret MSR lookup into update_transition_efer() Date: Mon, 22 Jun 2020 15:42:44 -0700 Message-Id: <20200622224249.29562-11-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com> References: <20200622224249.29562-1-sean.j.christopherson@intel.com> MIME-Version: 1.0 Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move checking for the existence of MSR_EFER in the uret MSR array into update_transition_efer() so that the lookup and manipulation of the array in setup_msrs() occur back-to-back. This paves the way toward adding a helper to wrap the lookup and manipulation. To avoid unnecessary overhead, defer the lookup until the uret array would actually be modified in update_transition_efer(). EFER obviously exists on CPUs that support the dedicated VMCS fields for switching EFER, and EFER must exist for the guest and host EFER.NX value to diverge, i.e. there is no danger of attempting to read/write EFER when it doesn't exist. Signed-off-by: Sean Christopherson --- arch/x86/kvm/vmx/vmx.c | 35 +++++++++++++++++++++-------------- 1 file changed, 21 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 954b9aa950f2..8731ca8ca2b0 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -955,10 +955,11 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr, m->host.val[j].value = host_val; } -static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset) +static bool update_transition_efer(struct vcpu_vmx *vmx) { u64 guest_efer = vmx->vcpu.arch.efer; u64 ignore_bits = 0; + int i; /* Shadow paging assumes NX to be available. 
*/ if (!enable_ept) @@ -990,17 +991,21 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset) else clear_atomic_switch_msr(vmx, MSR_EFER); return false; - } else { - clear_atomic_switch_msr(vmx, MSR_EFER); - - guest_efer &= ~ignore_bits; - guest_efer |= host_efer & ignore_bits; - - vmx->guest_uret_msrs[efer_offset].data = guest_efer; - vmx->guest_uret_msrs[efer_offset].mask = ~ignore_bits; - - return true; } + + i = __vmx_find_uret_msr(vmx, MSR_EFER); + if (i < 0) + return false; + + clear_atomic_switch_msr(vmx, MSR_EFER); + + guest_efer &= ~ignore_bits; + guest_efer |= host_efer & ignore_bits; + + vmx->guest_uret_msrs[i].data = guest_efer; + vmx->guest_uret_msrs[i].mask = ~ignore_bits; + + return true; } #ifdef CONFIG_X86_32 @@ -1748,9 +1753,11 @@ static void setup_msrs(struct vcpu_vmx *vmx) move_msr_up(vmx, index, nr_active_uret_msrs++); } #endif - index = __vmx_find_uret_msr(vmx, MSR_EFER); - if (index >= 0 && update_transition_efer(vmx, index)) - move_msr_up(vmx, index, nr_active_uret_msrs++); + if (update_transition_efer(vmx)) { + index = __vmx_find_uret_msr(vmx, MSR_EFER); + if (index >= 0) + move_msr_up(vmx, index, nr_active_uret_msrs++); + } if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP)) { index = __vmx_find_uret_msr(vmx, MSR_TSC_AUX); if (index >= 0) From patchwork Mon Jun 22 22:42:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 11619317 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CF42892A for ; Mon, 22 Jun 2020 22:44:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BFF2520767 for ; Mon, 22 Jun 2020 22:44:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731505AbgFVWoq (ORCPT ); Mon, 22 Jun 2020 18:44:46 -0400 Received: from mga18.intel.com ([134.134.136.126]:27426 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731198AbgFVWm6 (ORCPT ); Mon, 22 Jun 2020 18:42:58 -0400 IronPort-SDR: /aa+26LgBxu85NVKIQQPdMbK0FgKzX3Hnp9ik4ONQl5GsYg0U/c/DT03dyk6WOofNuyOIGmlx7 PrvKyChbxGYQ== X-IronPort-AV: E=McAfee;i="6000,8403,9660"; a="131303576" X-IronPort-AV: E=Sophos;i="5.75,268,1589266800"; d="scan'208";a="131303576" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Jun 2020 15:42:55 -0700 IronPort-SDR: tKrppNG3mwzVZRukBF6u1Hs7VizfIz28P7GOyMnJcUAPBG8CoLhTDbDdyU8JdOjWyOX5hZZZsU L2u1BBRSY6Bw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,268,1589266800"; d="scan'208";a="264634933" Received: from sjchrist-coffee.jf.intel.com ([10.54.74.152]) by fmsmga008.fm.intel.com with ESMTP; 22 Jun 2020 15:42:54 -0700 From: Sean Christopherson To: Paolo Bonzini Cc: Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , kvm@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 11/15] KVM: VMX: Add vmx_setup_uret_msr() to handle lookup and swap Date: Mon, 22 Jun 2020 15:42:45 -0700 Message-Id: <20200622224249.29562-12-sean.j.christopherson@intel.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com> References: <20200622224249.29562-1-sean.j.christopherson@intel.com> 
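[A standalone, illustrative sketch of the refactor's shape; not from the patch, and all names (arr, find, update_efer) are hypothetical. The predicate now performs its own lookup immediately before it would touch the array, and simply reports whether the entry was updated.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct entry { uint32_t msr; uint64_t data; };

static struct entry arr[] = { { 0xc0000080u /* EFER's ECX index */, 0 } };
static int nr_entries = 1;

static int find(uint32_t msr)
{
	for (int i = 0; i < nr_entries; i++)
		if (arr[i].msr == msr)
			return i;
	return -1;
}

/* Returns true only if the entry exists and was updated. */
static bool update_efer(uint64_t guest_efer)
{
	int i = find(0xc0000080u);

	if (i < 0)
		return false;

	arr[i].data = guest_efer;  /* lookup and update are back-to-back */
	return true;
}

int main(void)
{
	if (update_efer(0x500))
		printf("EFER entry updated\n");
	return 0;
}

[The caller no longer needs to find the entry before it can even ask the predicate; deferring the lookup also avoids the search entirely on the early-return paths.]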
From patchwork Mon Jun 22 22:42:45 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 11/15] KVM: VMX: Add vmx_setup_uret_msr() to handle lookup and swap
Date: Mon, 22 Jun 2020 15:42:45 -0700
Message-Id: <20200622224249.29562-12-sean.j.christopherson@intel.com>
In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com>
References: <20200622224249.29562-1-sean.j.christopherson@intel.com>

Add vmx_setup_uret_msr() to wrap the lookup and manipulation of the uret
MSRs array during setup_msrs(). In addition to consolidating code, this
eliminates move_msr_up(), which, while a very literal description of the
function, isn't exactly helpful in understanding the net effect of the
code.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 49 ++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8731ca8ca2b0..f3cd1de7b0ff 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1714,12 +1714,15 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	vmx_clear_hlt(vcpu);
 }
 
-/*
- * Swap MSR entry in host/guest MSR entry array.
- */
-static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
+static void vmx_setup_uret_msr(struct vcpu_vmx *vmx, unsigned int msr)
 {
 	struct vmx_uret_msr tmp;
+	int from, to;
+
+	from = __vmx_find_uret_msr(vmx, msr);
+	if (from < 0)
+		return;
+
+	to = vmx->nr_active_uret_msrs++;
 
 	tmp = vmx->guest_uret_msrs[to];
 	vmx->guest_uret_msrs[to] = vmx->guest_uret_msrs[from];
@@ -1733,42 +1736,26 @@ static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
  */
 static void setup_msrs(struct vcpu_vmx *vmx)
 {
-	int nr_active_uret_msrs, index;
-
-	nr_active_uret_msrs = 0;
+	vmx->guest_uret_msrs_loaded = false;
+	vmx->nr_active_uret_msrs = 0;
 #ifdef CONFIG_X86_64
 	/*
 	 * The SYSCALL MSRs are only needed on long mode guests, and only
 	 * when EFER.SCE is set.
 	 */
 	if (is_long_mode(&vmx->vcpu) && (vmx->vcpu.arch.efer & EFER_SCE)) {
-		index = __vmx_find_uret_msr(vmx, MSR_STAR);
-		if (index >= 0)
-			move_msr_up(vmx, index, nr_active_uret_msrs++);
-		index = __vmx_find_uret_msr(vmx, MSR_LSTAR);
-		if (index >= 0)
-			move_msr_up(vmx, index, nr_active_uret_msrs++);
-		index = __vmx_find_uret_msr(vmx, MSR_SYSCALL_MASK);
-		if (index >= 0)
-			move_msr_up(vmx, index, nr_active_uret_msrs++);
+		vmx_setup_uret_msr(vmx, MSR_STAR);
+		vmx_setup_uret_msr(vmx, MSR_LSTAR);
+		vmx_setup_uret_msr(vmx, MSR_SYSCALL_MASK);
 	}
 #endif
-	if (update_transition_efer(vmx)) {
-		index = __vmx_find_uret_msr(vmx, MSR_EFER);
-		if (index >= 0)
-			move_msr_up(vmx, index, nr_active_uret_msrs++);
-	}
-	if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP)) {
-		index = __vmx_find_uret_msr(vmx, MSR_TSC_AUX);
-		if (index >= 0)
-			move_msr_up(vmx, index, nr_active_uret_msrs++);
-	}
-	index = __vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
-	if (index >= 0)
-		move_msr_up(vmx, index, nr_active_uret_msrs++);
+	if (update_transition_efer(vmx))
+		vmx_setup_uret_msr(vmx, MSR_EFER);
 
-	vmx->nr_active_uret_msrs = nr_active_uret_msrs;
-	vmx->guest_uret_msrs_loaded = false;
+	if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
+		vmx_setup_uret_msr(vmx, MSR_TSC_AUX);
+
+	vmx_setup_uret_msr(vmx, MSR_IA32_TSX_CTRL);
 
 	if (cpu_has_vmx_msr_bitmap())
 		vmx_update_msr_bitmap(&vmx->vcpu);
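[The find-and-swap idiom that vmx_setup_uret_msr() consolidates can be shown in a standalone sketch; this is illustrative only, not from the patch, and all names (arr, find, setup_msr) are hypothetical. Active entries are kept packed at the front of the array, so activating an MSR means swapping its entry into the next active slot.]

#include <stdint.h>
#include <stdio.h>

struct entry { uint32_t msr; uint64_t data; };

static struct entry arr[4] = {
	{ 0x10, 0 }, { 0x20, 0 }, { 0x30, 0 }, { 0x40, 0 },
};
static int nr_active;

static int find(uint32_t msr)
{
	for (int i = 0; i < 4; i++)
		if (arr[i].msr == msr)
			return i;
	return -1;
}

/* One helper instead of "lookup, check, swap" open-coded at every call site. */
static void setup_msr(uint32_t msr)
{
	int from = find(msr);

	if (from < 0)
		return;

	int to = nr_active++;  /* next free slot in the packed prefix */
	struct entry tmp = arr[to];

	arr[to] = arr[from];
	arr[from] = tmp;
}

int main(void)
{
	setup_msr(0x30);
	setup_msr(0x10);
	printf("active: %d, front msr: 0x%x\n", nr_active, (unsigned int)arr[0].msr);
	return 0;
}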
From patchwork Mon Jun 22 22:42:46 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 12/15] KVM: VMX: Rename "find_msr_entry" to "vmx_find_uret_msr"
Date: Mon, 22 Jun 2020 15:42:46 -0700
Message-Id: <20200622224249.29562-13-sean.j.christopherson@intel.com>
In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com>
References: <20200622224249.29562-1-sean.j.christopherson@intel.com>

Rename "find_msr_entry" to scope it to VMX and to associate it with
guest_uret_msrs. Drop the "entry" so that the function name pairs with
the existing __vmx_find_uret_msr(), which intentionally uses a double
underscore prefix instead of appending "index" or "slot", as those names
are already claimed by other pieces of the user return MSR stack.

No functional change intended.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/nested.c |  2 +-
 arch/x86/kvm/vmx/vmx.c    | 10 +++++-----
 arch/x86/kvm/vmx/vmx.h    |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 52de3e03fcdc..39a65df619e6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4223,7 +4223,7 @@ static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
 			return vmx->msr_autoload.guest.val[i].value;
 	}
 
-	efer_msr = find_msr_entry(vmx, MSR_EFER);
+	efer_msr = vmx_find_uret_msr(vmx, MSR_EFER);
 	if (efer_msr)
 		return efer_msr->data;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f3cd1de7b0ff..6662c1aab9b2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -635,7 +635,7 @@ static inline int __vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
 	return -1;
 }
 
-struct vmx_uret_msr *find_msr_entry(struct vcpu_vmx *vmx, u32 msr)
+struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
 {
 	int i;
 
@@ -1956,7 +1956,7 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		goto find_uret_msr;
 	default:
 	find_uret_msr:
-		msr = find_msr_entry(vmx, msr_info->index);
+		msr = vmx_find_uret_msr(vmx, msr_info->index);
 		if (msr) {
 			msr_info->data = msr->data;
 			break;
@@ -2230,7 +2230,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	default:
 	find_uret_msr:
-		msr = find_msr_entry(vmx, msr_index);
+		msr = vmx_find_uret_msr(vmx, msr_index);
 		if (msr)
 			ret = vmx_set_guest_msr(vmx, msr, data);
 		else
@@ -2862,7 +2862,7 @@ static void enter_rmode(struct kvm_vcpu *vcpu)
 void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	struct vmx_uret_msr *msr = find_msr_entry(vmx, MSR_EFER);
+	struct vmx_uret_msr *msr = vmx_find_uret_msr(vmx, MSR_EFER);
 
 	if (!msr)
 		return;
@@ -7279,7 +7279,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 	if (boot_cpu_has(X86_FEATURE_RTM)) {
 		struct vmx_uret_msr *msr;
 
-		msr = find_msr_entry(vmx, MSR_IA32_TSX_CTRL);
+		msr = vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
 		if (msr) {
 			bool enabled = guest_cpuid_has(vcpu, X86_FEATURE_RTM);
 			vmx_set_guest_msr(vmx, msr, enabled ? 0 : TSX_CTRL_RTM_DISABLE);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a0237ff6c4e0..338469fcd8cf 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -351,7 +351,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu);
 bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
 void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
 void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
-struct vmx_uret_msr *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
+struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
 int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
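[The pairing of names this patch establishes is a common C idiom: a "__"-prefixed primitive that returns an index, wrapped by a pointer-returning helper that most callers actually want. A standalone sketch, illustrative only with hypothetical names:]

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct entry { uint32_t msr; uint64_t data; };

static struct entry arr[] = { { 1, 100 }, { 2, 200 } };

static int __find(uint32_t msr)  /* the primitive: array index, or -1 */
{
	for (int i = 0; i < 2; i++)
		if (arr[i].msr == msr)
			return i;
	return -1;
}

static struct entry *find(uint32_t msr)  /* the wrapper: entry, or NULL */
{
	int i = __find(msr);

	return i >= 0 ? &arr[i] : NULL;
}

int main(void)
{
	struct entry *e = find(2);

	if (e)
		printf("msr 2 -> data %llu\n", (unsigned long long)e->data);
	return 0;
}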
From patchwork Mon Jun 22 22:42:47 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 13/15] KVM: VMX: Rename "vmx_set_guest_msr" to "vmx_set_guest_uret_msr"
Date: Mon, 22 Jun 2020 15:42:47 -0700
Message-Id: <20200622224249.29562-14-sean.j.christopherson@intel.com>
In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com>
References: <20200622224249.29562-1-sean.j.christopherson@intel.com>

Add "uret" to vmx_set_guest_msr() to explicitly associate it with the
guest_uret_msrs array, and to differentiate it from vmx_set_msr() as well
as VMX's load/store MSRs.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6662c1aab9b2..178315b2758b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -645,7 +645,8 @@ struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
 	return NULL;
 }
 
-static int vmx_set_guest_msr(struct vcpu_vmx *vmx, struct vmx_uret_msr *msr, u64 data)
+static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
+				  struct vmx_uret_msr *msr, u64 data)
 {
 	int ret = 0;
 
@@ -2232,7 +2233,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	find_uret_msr:
 		msr = vmx_find_uret_msr(vmx, msr_index);
 		if (msr)
-			ret = vmx_set_guest_msr(vmx, msr, data);
+			ret = vmx_set_guest_uret_msr(vmx, msr, data);
 		else
 			ret = kvm_set_msr_common(vcpu, msr_info);
 	}
@@ -7282,7 +7283,7 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
 		msr = vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
 		if (msr) {
 			bool enabled = guest_cpuid_has(vcpu, X86_FEATURE_RTM);
-			vmx_set_guest_msr(vmx, msr, enabled ? 0 : TSX_CTRL_RTM_DISABLE);
+			vmx_set_guest_uret_msr(vmx, msr, enabled ? 0 : TSX_CTRL_RTM_DISABLE);
 		}
 	}
 }
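[What makes the uret setter distinct from a generic "set MSR" path is visible in the diff above: it always updates the cached guest value, and only propagates to hardware when the entry is currently active. A standalone sketch of that shape, illustrative only with hypothetical names and a simulated MSR in place of a real wrmsr:]

#include <stdint.h>
#include <stdio.h>

struct entry { uint64_t data; };

static struct entry arr[2];
static int nr_active = 1;        /* entries [0, nr_active) are live */
static uint64_t fake_hw_msr[2];  /* stand-in for the hardware MSR */

static void set_guest_uret_msr(int i, uint64_t data)
{
	arr[i].data = data;             /* always update the cache */
	if (i < nr_active)
		fake_hw_msr[i] = data;  /* propagate only if loaded */
}

int main(void)
{
	set_guest_uret_msr(0, 0xaa);  /* active entry: hits "hardware" */
	set_guest_uret_msr(1, 0xbb);  /* inactive entry: cache only */
	printf("hw[0]=%#llx hw[1]=%#llx\n",
	       (unsigned long long)fake_hw_msr[0],
	       (unsigned long long)fake_hw_msr[1]);
	return 0;
}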
From patchwork Mon Jun 22 22:42:48 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 14/15] KVM: VMX: Rename "vmx_msr_index" to "vmx_uret_msrs_list"
Date: Mon, 22 Jun 2020 15:42:48 -0700
Message-Id: <20200622224249.29562-15-sean.j.christopherson@intel.com>
In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com>
References: <20200622224249.29562-1-sean.j.christopherson@intel.com>

Rename "vmx_msr_index" to "vmx_uret_msrs_list" to associate it with the
uret MSRs array, and to avoid conflating "MSR's ECX index" with "MSR's
index into an array". Similarly, don't use "slot" in the name, as that
terminology is claimed by the common x86 "user_return_msrs" mechanism.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 178315b2758b..e958c911dcf8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -449,9 +449,9 @@ static unsigned long host_idt_base;
  * will emulate SYSCALL in legacy mode if the vendor string in guest
  * CPUID.0:{EBX,ECX,EDX} is "AuthenticAMD" or "AMDisbetter!" To
  * support this emulation, IA32_STAR must always be included in
- * vmx_msr_index[], even in i386 builds.
+ * vmx_uret_msrs_list[], even in i386 builds.
  */
-const u32 vmx_msr_index[] = {
+const u32 vmx_uret_msrs_list[] = {
 #ifdef CONFIG_X86_64
 	MSR_SYSCALL_MASK, MSR_LSTAR, MSR_CSTAR,
 #endif
@@ -630,7 +630,7 @@ static inline int __vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
 	int i;
 
 	for (i = 0; i < vmx->nr_uret_msrs; ++i)
-		if (vmx_msr_index[vmx->guest_uret_msrs[i].index] == msr)
+		if (vmx_uret_msrs_list[vmx->guest_uret_msrs[i].index] == msr)
 			return i;
 	return -1;
 }
@@ -6888,10 +6888,10 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 		goto free_vpid;
 	}
 
-	BUILD_BUG_ON(ARRAY_SIZE(vmx_msr_index) != MAX_NR_USER_RETURN_MSRS);
+	BUILD_BUG_ON(ARRAY_SIZE(vmx_uret_msrs_list) != MAX_NR_USER_RETURN_MSRS);
 
-	for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i) {
-		u32 index = vmx_msr_index[i];
+	for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) {
+		u32 index = vmx_uret_msrs_list[i];
 		u32 data_low, data_high;
 		int j = vmx->nr_uret_msrs;
 
@@ -7997,8 +7997,8 @@ static __init int hardware_setup(void)
 	store_idt(&dt);
 	host_idt_base = dt.address;
 
-	for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i)
-		kvm_define_user_return_msr(i, vmx_msr_index[i]);
+	for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i)
+		kvm_define_user_return_msr(i, vmx_uret_msrs_list[i]);
 
 	if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0)
 		return -EIO;
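[The renamed list plays the role of a slot-to-MSR table: position i in the list is also the slot under which the MSR is registered with the common layer. A standalone sketch, illustrative only; the names and the stand-in registration function are hypothetical:]

#include <stdint.h>
#include <stdio.h>

static const uint32_t uret_msrs_list[] = {
	0xc0000084u, /* MSR_SYSCALL_MASK */
	0xc0000082u, /* MSR_LSTAR */
	0xc0000081u, /* MSR_STAR */
};
#define NR_URET_MSRS (sizeof(uret_msrs_list) / sizeof(uret_msrs_list[0]))

/* Stand-in for registering slot -> MSR with a common layer. */
static void define_user_return_msr(unsigned int slot, uint32_t msr)
{
	printf("slot %u -> MSR %#x\n", slot, (unsigned int)msr);
}

int main(void)
{
	for (unsigned int i = 0; i < NR_URET_MSRS; i++)
		define_user_return_msr(i, uret_msrs_list[i]);
	return 0;
}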
From patchwork Mon Jun 22 22:42:49 2020
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 15/15] KVM: VMX: Rename vmx_uret_msr's "index" to "slot"
Date: Mon, 22 Jun 2020 15:42:49 -0700
Message-Id: <20200622224249.29562-16-sean.j.christopherson@intel.com>
In-Reply-To: <20200622224249.29562-1-sean.j.christopherson@intel.com>
References: <20200622224249.29562-1-sean.j.christopherson@intel.com>

Rename "index" to "slot" in struct vmx_uret_msr to align with the
terminology used by common x86's kvm_user_return_msrs, and to avoid
conflating "MSR's ECX index" with "MSR's index into an array".

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 8 ++++----
 arch/x86/kvm/vmx/vmx.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e958c911dcf8..b335296ca02d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -630,7 +630,7 @@ static inline int __vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr)
 	int i;
 
 	for (i = 0; i < vmx->nr_uret_msrs; ++i)
-		if (vmx_uret_msrs_list[vmx->guest_uret_msrs[i].index] == msr)
+		if (vmx_uret_msrs_list[vmx->guest_uret_msrs[i].slot] == msr)
 			return i;
 	return -1;
 }
@@ -654,7 +654,7 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
 	msr->data = data;
 	if (msr - vmx->guest_uret_msrs < vmx->nr_active_uret_msrs) {
 		preempt_disable();
-		ret = kvm_set_user_return_msr(msr->index, msr->data, msr->mask);
+		ret = kvm_set_user_return_msr(msr->slot, msr->data, msr->mask);
 		preempt_enable();
 		if (ret)
 			msr->data = old_msr_data;
@@ -1151,7 +1151,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	if (!vmx->guest_uret_msrs_loaded) {
 		vmx->guest_uret_msrs_loaded = true;
 		for (i = 0; i < vmx->nr_active_uret_msrs; ++i)
-			kvm_set_user_return_msr(vmx->guest_uret_msrs[i].index,
+			kvm_set_user_return_msr(vmx->guest_uret_msrs[i].slot,
 						vmx->guest_uret_msrs[i].data,
 						vmx->guest_uret_msrs[i].mask);
 
@@ -6900,7 +6900,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 		if (wrmsr_safe(index, data_low, data_high) < 0)
 			continue;
 
-		vmx->guest_uret_msrs[j].index = i;
+		vmx->guest_uret_msrs[j].slot = i;
 		vmx->guest_uret_msrs[j].data = 0;
 		switch (index) {
 		case MSR_IA32_TSX_CTRL:
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 338469fcd8cf..57027072e21f 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -34,7 +34,7 @@ struct vmx_msrs {
 };
 
 struct vmx_uret_msr {
-	unsigned index;
+	unsigned int slot;	/* The MSR's slot in kvm_user_return_msrs. */
 	u64 data;
 	u64 mask;
 };
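[To close out the series, the "slot" terminology can be shown in isolation; this is a standalone, illustrative sketch, not kernel code, and all names are hypothetical. After the rename, each entry records its slot in the common user-return registry, and that slot, not the MSR's ECX index and not the entry's own position in the VMX-side array, is what gets handed back to the common layer.]

#include <stdint.h>
#include <stdio.h>

struct uret_msr {
	unsigned int slot;  /* slot in the common registry */
	uint64_t data;
};

/* Stand-in for handing (slot, value) back to a common registry. */
static void set_user_return_msr(unsigned int slot, uint64_t val)
{
	printf("registry slot %u <- %#llx\n", slot, (unsigned long long)val);
}

int main(void)
{
	/* VMX-side array position 0, but common-registry slot 2. */
	struct uret_msr msr = { .slot = 2, .data = 0x1234 };

	set_user_return_msr(msr.slot, msr.data);
	return 0;
}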