Message ID | 20191120223147.63358-1-liran.alon@oracle.com |
---|---|
State | New, archived |
Series | KVM: nVMX: Do not mark vmcs02->apic_access_page as dirty when unpinning |
On Wed, Nov 20, 2019 at 2:32 PM Liran Alon <liran.alon@oracle.com> wrote:
>
> vmcs->apic_access_page is simply a token that the hypervisor puts into
> the PFN of a 4KB EPTE (or PTE if using shadow-paging) that triggers an
> APIC-access VMExit or APIC virtualization logic whenever a CPU running
> in VMX non-root mode reads from or writes to this PFN.
>
> As every write either triggers an APIC-access VMExit or is performed
> on vmcs->virtual_apic_page, the PFN pointed to by
> vmcs->apic_access_page should never actually be touched by the CPU.
>
> Therefore, there is no need to mark vmcs02->apic_access_page as dirty
> after unpinning it on L2->L1 emulated VMExit or when L1 exits VMX
> operation.
>
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>

Reviewed-by: Jim Mattson <jmattson@google.com>
On 20/11/19 23:31, Liran Alon wrote:
> vmcs->apic_access_page is simply a token that the hypervisor puts into
> the PFN of a 4KB EPTE (or PTE if using shadow-paging) that triggers an
> APIC-access VMExit or APIC virtualization logic whenever a CPU running
> in VMX non-root mode reads from or writes to this PFN.
>
> As every write either triggers an APIC-access VMExit or is performed
> on vmcs->virtual_apic_page, the PFN pointed to by
> vmcs->apic_access_page should never actually be touched by the CPU.
>
> Therefore, there is no need to mark vmcs02->apic_access_page as dirty
> after unpinning it on L2->L1 emulated VMExit or when L1 exits VMX
> operation.
>
> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 20692e442d13..2506f431d51e 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -257,7 +257,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
>  	vmx->nested.cached_shadow_vmcs12 = NULL;
>  	/* Unpin physical memory we referred to in the vmcs02 */
>  	if (vmx->nested.apic_access_page) {
> -		kvm_release_page_dirty(vmx->nested.apic_access_page);
> +		kvm_release_page_clean(vmx->nested.apic_access_page);
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
> @@ -2933,7 +2933,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
>  	 * to it so we can release it later.
>  	 */
>  	if (vmx->nested.apic_access_page) { /* shouldn't happen */
> -		kvm_release_page_dirty(vmx->nested.apic_access_page);
> +		kvm_release_page_clean(vmx->nested.apic_access_page);
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
> @@ -4126,7 +4126,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
>
>  	/* Unpin physical memory we referred to in vmcs02 */
>  	if (vmx->nested.apic_access_page) {
> -		kvm_release_page_dirty(vmx->nested.apic_access_page);
> +		kvm_release_page_clean(vmx->nested.apic_access_page);
>  		vmx->nested.apic_access_page = NULL;
>  	}
>  	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);

Queued, thanks.

Paolo
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 20692e442d13..2506f431d51e 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -257,7 +257,7 @@ static void free_nested(struct kvm_vcpu *vcpu)
 	vmx->nested.cached_shadow_vmcs12 = NULL;
 	/* Unpin physical memory we referred to in the vmcs02 */
 	if (vmx->nested.apic_access_page) {
-		kvm_release_page_dirty(vmx->nested.apic_access_page);
+		kvm_release_page_clean(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
@@ -2933,7 +2933,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	 * to it so we can release it later.
 	 */
 	if (vmx->nested.apic_access_page) { /* shouldn't happen */
-		kvm_release_page_dirty(vmx->nested.apic_access_page);
+		kvm_release_page_clean(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
 	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->apic_access_addr);
@@ -4126,7 +4126,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 
 	/* Unpin physical memory we referred to in vmcs02 */
 	if (vmx->nested.apic_access_page) {
-		kvm_release_page_dirty(vmx->nested.apic_access_page);
+		kvm_release_page_clean(vmx->nested.apic_access_page);
 		vmx->nested.apic_access_page = NULL;
 	}
 	kvm_vcpu_unmap(vcpu, &vmx->nested.virtual_apic_map, true);
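All three hunks change the same unpin sequence. As a minimal sketch only (the helper below, nested_release_apic_access_page(), is not part of the posted series; it merely restates the shared pattern), the change amounts to:

/*
 * Hypothetical helper, not in the posted patch: it restates the pattern
 * the three hunks above share. The page was pinned via
 * kvm_vcpu_gpa_to_page() when vmcs02 was set up; since the guest never
 * actually writes through this PFN (every access either hits the
 * virtual-APIC page or causes an APIC-access VMExit), releasing it
 * clean, rather than dirty, is sufficient.
 */
static void nested_release_apic_access_page(struct vcpu_vmx *vmx)
{
	if (vmx->nested.apic_access_page) {
		kvm_release_page_clean(vmx->nested.apic_access_page);
		vmx->nested.apic_access_page = NULL;
	}
}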