Message ID | 1543829467-18025-4-git-send-email-karahmed@amazon.de (mailing list archive)
---|---
State | New, archived
Series | KVM/X86: Introduce a new guest mapping interface
On 03.12.18 10:30, KarimAllah Ahmed wrote:
> Update the PML table without mapping and unmapping the page. This also
> avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a "struct
> page" for guest memory.
>
> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
> ---
> v1 -> v2:
> - Use kvm_write_guest_page instead of kvm_write_guest (pbonzini)
> - Do not use pointer arithmetic for pml_address (pbonzini)
> ---
>  arch/x86/kvm/vmx.c | 14 +++++---------
>  1 file changed, 5 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 75817cb..6d6dfa9 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -14427,9 +14427,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
>  {
>  	struct vmcs12 *vmcs12;
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> -	gpa_t gpa;
> -	struct page *page = NULL;
> -	u64 *pml_address;
> +	gpa_t gpa, dst;
>
>  	if (is_guest_mode(vcpu)) {
>  		WARN_ON_ONCE(vmx->nested.pml_full);
> @@ -14449,15 +14447,13 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
>  	}
>
>  	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;
> +	dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index;
>
> -	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address);
> -	if (is_error_page(page))
> +	if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa,
> +				 offset_in_page(dst), sizeof(gpa)))
>  		return 0;
>
> -	pml_address = kmap(page);
> -	pml_address[vmcs12->guest_pml_index--] = gpa;
> -	kunmap(page);
> -	kvm_release_page_clean(page);

So we've written to the page but released it as clean ... shouldn't that
have been kvm_release_page_dirty?

... also, shouldn't there have been a mark_page_dirty() ? (to mark it
dirty for migration?)

Your patch certainly fixes both conditions (if it was in fact broken).
In that case, we should maybe add that to the cover letter.

Reviewed-by: David Hildenbrand <david@redhat.com>

> +	vmcs12->guest_pml_index--;
>  	}
>
>  	return 0;
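To illustrate the dirty-tracking point raised in the review, here is a minimal userspace sketch, not kernel code: `write_guest_page`, the memslot layout, and the bitmap below are illustrative stand-ins, modeling only the property that a `kvm_write_guest_page()`-style helper marks the written gfn dirty as a side effect, which the old `kmap()` + `kvm_release_page_clean()` path never did.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of guest memory with a per-gfn dirty bitmap. In real KVM,
 * kvm_write_guest_page() ends up calling mark_page_dirty(); the old
 * kmap()-based path wrote through a raw mapping and then released the
 * page as *clean*, so migration could miss the update. */

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define NPAGES     16

static uint8_t  guest_mem[NPAGES * PAGE_SIZE];
static uint64_t dirty_bitmap;   /* one bit per gfn */

/* Illustrative analogue of kvm_write_guest_page():
 * copy data into the page and mark its gfn dirty. */
static int write_guest_page(uint64_t gfn, const void *data,
                            unsigned int offset, unsigned int len)
{
    if (gfn >= NPAGES || offset + len > PAGE_SIZE)
        return -1;
    memcpy(&guest_mem[gfn * PAGE_SIZE + offset], data, len);
    dirty_bitmap |= 1ULL << gfn;   /* the mark_page_dirty() step */
    return 0;
}
```

The model makes the review's concern concrete: routing the PML entry write through the generic write helper gets the dirty-bitmap update for free, whereas the removed code would have needed an explicit `mark_page_dirty()` (and `kvm_release_page_dirty()`) to be correct.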
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 75817cb..6d6dfa9 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -14427,9 +14427,7 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 {
 	struct vmcs12 *vmcs12;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	gpa_t gpa;
-	struct page *page = NULL;
-	u64 *pml_address;
+	gpa_t gpa, dst;

 	if (is_guest_mode(vcpu)) {
 		WARN_ON_ONCE(vmx->nested.pml_full);
@@ -14449,15 +14447,13 @@ static int vmx_write_pml_buffer(struct kvm_vcpu *vcpu)
 	}

 	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;
+	dst = vmcs12->pml_address + sizeof(u64) * vmcs12->guest_pml_index;

-	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->pml_address);
-	if (is_error_page(page))
+	if (kvm_write_guest_page(vcpu->kvm, gpa_to_gfn(dst), &gpa,
+				 offset_in_page(dst), sizeof(gpa)))
 		return 0;

-	pml_address = kmap(page);
-	pml_address[vmcs12->guest_pml_index--] = gpa;
-	kunmap(page);
-	kvm_release_page_clean(page);
+	vmcs12->guest_pml_index--;
 	}

 	return 0;
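The address arithmetic the patch introduces can be checked in isolation. The sketch below is a userspace re-derivation, not kernel code: `gpa_to_gfn` and `offset_in_page` are reimplemented here with their usual 4 KiB-page definitions, and `pml_entry_gpa` is a hypothetical helper standing in for the `dst` computation in the diff.

```c
#include <assert.h>
#include <stdint.h>

/* Recompute dst = pml_address + sizeof(u64) * guest_pml_index and split
 * it into the (gfn, offset) pair passed to kvm_write_guest_page(). */

typedef uint64_t gpa_t;

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1UL << PAGE_SHIFT) - 1)

static gpa_t gpa_to_gfn(gpa_t gpa)            { return gpa >> PAGE_SHIFT; }
static unsigned int offset_in_page(gpa_t gpa) { return gpa & PAGE_MASK; }

/* GPA of PML entry guest_pml_index inside the vmcs12 PML page. */
static gpa_t pml_entry_gpa(gpa_t pml_address, uint16_t guest_pml_index)
{
    return pml_address + sizeof(uint64_t) * guest_pml_index;
}
```

One consequence worth noting: with a page-aligned `pml_address` and an index of at most 511, the 8-byte entry never crosses a page boundary, so a single `kvm_write_guest_page()` call (rather than `kvm_write_guest()`, per the v1 -> v2 note) is sufficient.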
Update the PML table without mapping and unmapping the page. This also
avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a "struct
page" for guest memory.

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
---
v1 -> v2:
- Use kvm_write_guest_page instead of kvm_write_guest (pbonzini)
- Do not use pointer arithmetic for pml_address (pbonzini)
---
 arch/x86/kvm/vmx.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)
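For context on the `guest_pml_index--` bookkeeping the patch preserves: the VMX PML buffer holds 512 u64 GPAs (`PML_ENTITY_NUM` in vmx.c) and is filled from the highest index downward. The following is a toy userspace model of that ring discipline, not kernel code; `struct pml_state` and `pml_log` are illustrative names.

```c
#include <assert.h>
#include <stdint.h>

#define PML_ENTITY_NUM 512   /* entries in the VMX PML buffer */

struct pml_state {
    uint16_t guest_pml_index;   /* next free slot, counts down */
    int      pml_full;
};

/* Log one GPA into the buffer; returns 0 on success, -1 once full.
 * Filling runs from index 511 down to 0; when the index underflows
 * past 0 (wrapping the u16), the buffer is considered full. */
static int pml_log(struct pml_state *s, uint64_t *buf, uint64_t gpa)
{
    if (s->pml_full || s->guest_pml_index >= PML_ENTITY_NUM) {
        s->pml_full = 1;
        return -1;
    }
    buf[s->guest_pml_index--] = gpa;   /* write entry, then decrement */
    return 0;
}
```

This mirrors why `vmx_write_pml_buffer()` computes the write destination from `guest_pml_index` before decrementing it, and why the decrement only happens after `kvm_write_guest_page()` succeeds.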