| Message ID | 20170302204148.12015-1-jmattson@google.com (mailing list archive) |
|---|---|
| State | New, archived |
2017-03-02 12:41-0800, Jim Mattson:
> VMCLEAR should silently ignore a failure to clear the launch state of
> the VMCS referenced by the operand.
>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>  arch/x86/kvm/vmx.c | 22 ++++------------------
>  1 file changed, 4 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index ef4ba71dbb66..bca497a92541 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -7259,9 +7259,8 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
>  static int handle_vmclear(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	u32 zero = 0;
>  	gpa_t vmptr;
> -	struct vmcs12 *vmcs12;
> -	struct page *page;
>
>  	if (!nested_vmx_check_permission(vcpu))
>  		return 1;
> @@ -7272,22 +7271,9 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
>  	if (vmptr == vmx->nested.current_vmptr)
>  		nested_release_vmcs12(vmx);
>
> -	page = nested_get_page(vcpu, vmptr);
> -	if (page == NULL) {
> -		/*
> -		 * For accurate processor emulation, VMCLEAR beyond available
> -		 * physical memory should do nothing at all. However, it is
> -		 * possible that a nested vmx bug, not a guest hypervisor bug,
> -		 * resulted in this case, so let's shut down before doing any
> -		 * more damage:
> -		 */

(Consciously introducing bugs in order to somewhat mitigate possible
bugs is not desirable.)

> -		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
> -		return 1;
> -	}
> -	vmcs12 = kmap(page);
> -	vmcs12->launch_state = 0;
> -	kunmap(page);
> -	nested_release_page(page);
> +	kvm_write_guest(vcpu->kvm,

I have changed this line to

	kvm_vcpu_write_guest(vcpu

and applied to kvm/queue (planning to merge in 4.11-rc2).

The change makes a difference if the guest is using VMX inside SMM and I
don't see a reason to avoid the SMM address space in that case.

Let me know what you think,

thanks.

> +			vmptr + offsetof(struct vmcs12, launch_state),
> +			&zero, sizeof(zero));

(Could have also been kvm_vcpu_write_guest_page(), but that is only a
performance gain.)

>
>  	nested_free_vmcs02(vmx, vmptr);
>
> --
> 2.12.0.rc1.440.g5b76565f74-goog
>
That sounds good to me. Thanks!

--jim

On Fri, Mar 3, 2017 at 8:12 AM, Radim Krčmář <rkrcmar@redhat.com> wrote:
> I have changed this line to
>
> 	kvm_vcpu_write_guest(vcpu
>
> and applied to kvm/queue (planning to merge in 4.11-rc2).
> [snip]
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ef4ba71dbb66..bca497a92541 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -7259,9 +7259,8 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
 static int handle_vmclear(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	u32 zero = 0;
 	gpa_t vmptr;
-	struct vmcs12 *vmcs12;
-	struct page *page;

 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
@@ -7272,22 +7271,9 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 	if (vmptr == vmx->nested.current_vmptr)
 		nested_release_vmcs12(vmx);

-	page = nested_get_page(vcpu, vmptr);
-	if (page == NULL) {
-		/*
-		 * For accurate processor emulation, VMCLEAR beyond available
-		 * physical memory should do nothing at all. However, it is
-		 * possible that a nested vmx bug, not a guest hypervisor bug,
-		 * resulted in this case, so let's shut down before doing any
-		 * more damage:
-		 */
-		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
-		return 1;
-	}
-	vmcs12 = kmap(page);
-	vmcs12->launch_state = 0;
-	kunmap(page);
-	nested_release_page(page);
+	kvm_write_guest(vcpu->kvm,
+			vmptr + offsetof(struct vmcs12, launch_state),
+			&zero, sizeof(zero));

 	nested_free_vmcs02(vmx, vmptr);
VMCLEAR should silently ignore a failure to clear the launch state of
the VMCS referenced by the operand.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/vmx.c | 22 ++++------------------
 1 file changed, 4 insertions(+), 18 deletions(-)