| Message ID | 1469843813-30810-1-git-send-email-jmattson@google.com (mailing list archive) |
|---|---|
| State | New, archived |
2016-07-29 18:56-0700, Jim Mattson:
> Kexec needs to know the addresses of all VMCSs that are active on
> each CPU, so that it can flush them from the VMCS caches. It is
> safe to record superfluous addresses that are not associated with
> an active VMCS, but it is not safe to omit an address associated
> with an active VMCS.
>
> After a call to vmcs_load, the VMCS that was loaded is active on
> the CPU. The VMCS should be added to the CPU's list of active
> VMCSs before it is loaded.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---

Applied to kvm/queue, thanks. I have tentatively kept the patch without
"Cc: stable@..." as VMX might not write to the in-memory VMCS unless the
cached VMCS has been dirtied.

>  arch/x86/kvm/vmx.c | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 7758680..f3d9995 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2121,22 +2121,14 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
> +	bool already_loaded = vmx->loaded_vmcs->cpu == cpu;
>
>  	if (!vmm_exclusive)
>  		kvm_cpu_vmxon(phys_addr);
> -	else if (vmx->loaded_vmcs->cpu != cpu)
> +	else if (!already_loaded)
>  		loaded_vmcs_clear(vmx->loaded_vmcs);
>
> -	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
> -		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
> -		vmcs_load(vmx->loaded_vmcs->vmcs);
> -	}
> -
> -	if (vmx->loaded_vmcs->cpu != cpu) {
> -		struct desc_ptr *gdt = this_cpu_ptr(&host_gdt);
> -		unsigned long sysenter_esp;
> -
> -		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
> +	if (!already_loaded) {
>  		local_irq_disable();
>  		crash_disable_local_vmclear(cpu);
>
> @@ -2151,6 +2143,18 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  			 &per_cpu(loaded_vmcss_on_cpu, cpu));
>  		crash_enable_local_vmclear(cpu);
>  		local_irq_enable();
> +	}
> +
> +	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
> +		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
> +		vmcs_load(vmx->loaded_vmcs->vmcs);
> +	}
> +
> +	if (!already_loaded) {
> +		struct desc_ptr *gdt = this_cpu_ptr(&host_gdt);
> +		unsigned long sysenter_esp;
> +
> +		kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
>
>  	/*
>  	 * Linux uses per-cpu TSS and GDT, so set these when switching
> --
> 2.8.0.rc3.226.g39d4020

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html