Message ID: 1408437040-49181-1-git-send-email-wanpeng.li@linux.intel.com
State: New, archived
On 19/08/2014 10:30, Wanpeng Li wrote:
> +		if (vmx->nested.virtual_apic_page)
> +			nested_release_page(vmx->nested.virtual_apic_page);
> +		vmx->nested.virtual_apic_page =
> +			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
> +		if (!vmx->nested.virtual_apic_page)
> +			exec_control &=
> +				~CPU_BASED_TPR_SHADOW;
> +		else
> +			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR,
> +				page_to_phys(vmx->nested.virtual_apic_page));
> +
> +		/*
> +		 * If CR8 load exits are enabled, CR8 store exits are enabled,
> +		 * and virtualize APIC access is disabled, the processor would
> +		 * never notice. Doing it unconditionally is not correct, but
> +		 * it is the simplest thing.
> +		 */
> +		if (!(exec_control & CPU_BASED_TPR_SHADOW) &&
> +			!((exec_control & CPU_BASED_CR8_LOAD_EXITING) &&
> +			(exec_control & CPU_BASED_CR8_STORE_EXITING)))
> +			nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
> +

You aren't checking "virtualize APIC access" here, but the comment mentions it.

As the comment says, failing the entry unconditionally could be the simplest thing, which means moving the nested_vmx_failValid call inside the "if (!vmx->nested.virtual_apic_page)".

If you want to check all of CR8_LOAD/CR8_STORE/VIRTUALIZE_APIC_ACCESS, please mention in the comment that failing the vm entry is _not_ what the processor does, but that it's basically the only possibility we have. In that case, I would also place the "if" within the "if (!vmx->nested.virtual_apic_page)": it also simplifies the condition, because you don't have to check CPU_BASED_TPR_SHADOW anymore.

You can send v5 with these changes, and I'll apply it for 3.18. Thanks!

Paolo
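[For reference, the second variant Paolo describes might look roughly like the fragment below, placed inside the existing "if (exec_control & CPU_BASED_TPR_SHADOW)" block of prepare_vmcs02. This is only a sketch built from the constants and helpers already used in the patch, not code from the thread; as the follow-up messages show, the failure report eventually moves out to nested_vmx_run instead.]

	if (!vmx->nested.virtual_apic_page) {
		/*
		 * Failing the vm entry is _not_ what the processor does,
		 * but it's basically the only possibility we have.  Only
		 * the CR8-load/store-exiting configuration with APIC-access
		 * virtualization disabled could run without the TPR shadow.
		 */
		if (!(exec_control & CPU_BASED_CR8_LOAD_EXITING) ||
		    !(exec_control & CPU_BASED_CR8_STORE_EXITING) ||
		    nested_cpu_has2(vmcs12,
				    SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
			nested_vmx_failValid(vcpu,
					VMXERR_ENTRY_INVALID_CONTROL_FIELD);
	} else
		vmcs_write64(VIRTUAL_APIC_PAGE_ADDR,
			     page_to_phys(vmx->nested.virtual_apic_page));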
Hi Paolo,

On Tue, Aug 19, 2014 at 10:34:20AM +0200, Paolo Bonzini wrote:
>On 19/08/2014 10:30, Wanpeng Li wrote:
>> +		if (vmx->nested.virtual_apic_page)
>> +			nested_release_page(vmx->nested.virtual_apic_page);
>> +		vmx->nested.virtual_apic_page =
>> +			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
>> +		if (!vmx->nested.virtual_apic_page)
>> +			exec_control &=
>> +				~CPU_BASED_TPR_SHADOW;
>> +		else
>> +			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR,
>> +				page_to_phys(vmx->nested.virtual_apic_page));
>> +
>> +		/*
>> +		 * If CR8 load exits are enabled, CR8 store exits are enabled,
>> +		 * and virtualize APIC access is disabled, the processor would
>> +		 * never notice. Doing it unconditionally is not correct, but
>> +		 * it is the simplest thing.
>> +		 */
>> +		if (!(exec_control & CPU_BASED_TPR_SHADOW) &&
>> +			!((exec_control & CPU_BASED_CR8_LOAD_EXITING) &&
>> +			(exec_control & CPU_BASED_CR8_STORE_EXITING)))
>> +			nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
>> +
>
>You aren't checking "virtualize APIC access" here, but the comment
>mentions it.
>
>As the comment says, failing the entry unconditionally could be the
>simplest thing, which means moving the nested_vmx_failValid call inside
>the "if (!vmx->nested.virtual_apic_page)".
>
>If you want to check all of CR8_LOAD/CR8_STORE/VIRTUALIZE_APIC_ACCESS,
>please mention in the comment that failing the vm entry is _not_ what
>the processor does but it's basically the only possibility we have. In
>that case, I would also place the "if" within the "if
>(!vmx->nested.virtual_apic_page)": it also simplifies the condition
>because you don't have to check CPU_BASED_TPR_SHADOW anymore.
>
>You can send v5 with these changes, and I'll apply it for 3.18. Thanks!
>

Do you mean this?

+	/*
+	 * Failing the vm entry is _not_ what the processor does
+	 * but it's basically the only possibility we have.
+	 */
+	if (!vmx->nested.virtual_apic_page)
+		nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);

Regards,
Wanpeng Li

>Paolo
On 20/08/2014 08:59, Wanpeng Li wrote:
>
> +	/*
> +	 * Failing the vm entry is _not_ what the processor does
> +	 * but it's basically the only possibility we have.

	 * We could still enter the guest if CR8 load exits are
	 * enabled, CR8 store exits are enabled, and virtualize APIC
	 * access is disabled; in this case the processor would never
	 * use the TPR shadow and we could simply clear the bit from
	 * the execution control.  But such a configuration is useless,
	 * so let's keep the code simple.

> +	 */
> +	if (!vmx->nested.virtual_apic_page)
> +		nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);

I thought so, but I'm afraid it's too late to do nested_vmx_failValid here. Without a test case, I'd be more confident if you moved the nested_release_page/nested_get_page calls to a separate function that nested_vmx_run calls before enter_guest_mode. The same function can map apic_access_page too, for cleanliness. Something like this:

	if (cpu_has_secondary_exec_ctrls() &&
	    nested_cpu_has(vmcs12, CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
	    (vmcs12->secondary_vm_exec_control &
	     SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
		if (vmx->nested.apic_access_page) /* shouldn't happen */
			nested_release_page(vmx->nested.apic_access_page);
		vmx->nested.apic_access_page =
			nested_get_page(vcpu, vmcs12->apic_access_addr);
	}

	if (...) {
		/* do the same for virtual_apic_page if
		 * CPU_BASED_TPR_SHADOW is set... */

		/*
		 * Failing the vm entry is _not_ what the processor does
		 * but it's basically the only possibility we have.
		 * We could still enter the guest if CR8 load exits are
		 * enabled, CR8 store exits are enabled, and virtualize APIC
		 * access is disabled; in this case the processor would never
		 * use the TPR shadow and we could simply clear the bit from
		 * the execution control.  But such a configuration is useless,
		 * so let's keep the code simple.
		 */
		if (!vmx->nested.virtual_apic_page)
			return -EFAULT;
	}

	return 0;

...

Then nested_vmx_run can do the nested_vmx_failValid if the function returns an error.

Paolo
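[For reference, the caller side Paolo mentions in the last sentence might look roughly like the fragment below. The helper name nested_get_vmcs12_pages() is only a placeholder for the separate function he describes, not a name used in the thread; the error code mirrors the one in the v4 patch.]

	/* Hypothetical call site inside nested_vmx_run(), before entering L2. */
	if (nested_get_vmcs12_pages(vcpu, vmcs12)) {
		/* Mapping a guest page failed: report an invalid control field to L1. */
		nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
		return 1;
	}

	enter_guest_mode(vcpu);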
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bfe11cf..c8d8e9a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -379,6 +379,7 @@ struct nested_vmx {
 	 * we must keep them pinned while L2 runs.
 	 */
 	struct page *apic_access_page;
+	struct page *virtual_apic_page;
 	u64 msr_ia32_feature_control;
 
 	struct hrtimer preemption_timer;
@@ -533,6 +534,7 @@ static int max_shadow_read_only_fields =
 	ARRAY_SIZE(shadow_read_only_fields);
 
 static unsigned long shadow_read_write_fields[] = {
+	TPR_THRESHOLD,
 	GUEST_RIP,
 	GUEST_RSP,
 	GUEST_CR0,
@@ -2330,7 +2332,7 @@ static __init void nested_vmx_setup_ctls_msrs(void)
 		CPU_BASED_MOV_DR_EXITING | CPU_BASED_UNCOND_IO_EXITING |
 		CPU_BASED_USE_IO_BITMAPS | CPU_BASED_MONITOR_EXITING |
 		CPU_BASED_RDPMC_EXITING | CPU_BASED_RDTSC_EXITING |
-		CPU_BASED_PAUSE_EXITING |
+		CPU_BASED_PAUSE_EXITING | CPU_BASED_TPR_SHADOW |
 		CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
 	/*
 	 * We can allow some features even when not supported by the
@@ -6148,6 +6150,10 @@ static void free_nested(struct vcpu_vmx *vmx)
 		nested_release_page(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = 0;
 	}
+	if (vmx->nested.virtual_apic_page) {
+		nested_release_page(vmx->nested.virtual_apic_page);
+		vmx->nested.virtual_apic_page = 0;
+	}
 
 	nested_free_all_saved_vmcss(vmx);
 }
@@ -6936,7 +6942,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
 	case EXIT_REASON_MCE_DURING_VMENTRY:
 		return 0;
 	case EXIT_REASON_TPR_BELOW_THRESHOLD:
-		return 1;
+		return nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW);
 	case EXIT_REASON_APIC_ACCESS:
 		return nested_cpu_has2(vmcs12,
 			SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
@@ -7057,6 +7063,12 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
 
 static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+
+	if (is_guest_mode(vcpu) &&
+		nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW))
+		return;
+
 	if (irr == -1 || tpr < irr) {
 		vmcs_write32(TPR_THRESHOLD, 0);
 		return;
@@ -8024,6 +8036,35 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
 	exec_control &= ~CPU_BASED_TPR_SHADOW;
 	exec_control |= vmcs12->cpu_based_vm_exec_control;
+
+	if (exec_control & CPU_BASED_TPR_SHADOW) {
+		if (vmx->nested.virtual_apic_page)
+			nested_release_page(vmx->nested.virtual_apic_page);
+		vmx->nested.virtual_apic_page =
+			nested_get_page(vcpu, vmcs12->virtual_apic_page_addr);
+		if (!vmx->nested.virtual_apic_page)
+			exec_control &=
+				~CPU_BASED_TPR_SHADOW;
+		else
+			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR,
+				page_to_phys(vmx->nested.virtual_apic_page));
+
+		/*
+		 * If CR8 load exits are enabled, CR8 store exits are enabled,
+		 * and virtualize APIC access is disabled, the processor would
+		 * never notice. Doing it unconditionally is not correct, but
+		 * it is the simplest thing.
+		 */
+		if (!(exec_control & CPU_BASED_TPR_SHADOW) &&
+			!((exec_control & CPU_BASED_CR8_LOAD_EXITING) &&
+			(exec_control & CPU_BASED_CR8_STORE_EXITING)))
+			nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
+
+		vmcs_write32(TPR_THRESHOLD, vmcs12->tpr_threshold);
+	} else if (vm_need_tpr_shadow(vmx->vcpu.kvm))
+		vmcs_write64(VIRTUAL_APIC_PAGE_ADDR,
+			__pa(vmx->vcpu.arch.apic->regs));
+
 	/*
 	 * Merging of IO and MSR bitmaps not currently supported.
 	 * Rather, exit every time.
@@ -8792,6 +8833,10 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		nested_release_page(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = 0;
 	}
+	if (vmx->nested.virtual_apic_page) {
+		nested_release_page(vmx->nested.virtual_apic_page);
+		vmx->nested.virtual_apic_page = 0;
+	}
 
 	/*
 	 * Exiting from L2 to L1, we're now back to L1 which thinks it just