| Message ID | 20190123223925.7558-3-sean.j.christopherson@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: nVMX: Fix address calculations for VMX instrs |
On 23/01/19 23:39, Sean Christopherson wrote:
> } else if (is_protmode(vcpu)) {
Since we are at it, this "if (is_protmode(vcpu))" should probably be
removed, shouldn't it?
Paolo
On Wed, Jan 30, 2019 at 05:24:41PM +0100, Paolo Bonzini wrote:
> On 23/01/19 23:39, Sean Christopherson wrote:
> > 	} else if (is_protmode(vcpu)) {
>
> Since we are at it, this "if (is_protmode(vcpu))" should probably be
> removed, shouldn't it?

Yep.  It could be converted to a WARN if we wanted to be paranoid, but
that seems a bit gratuitous.
On Wed, Jan 23, 2019 at 2:39 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> The address size of an instruction affects the effective address, not
> the virtual/linear address.  The final address may still be truncated,
> e.g. to 32-bits outside of long mode, but that happens irrespective of
> the address size, e.g. a 32-bit address size can yield a 64-bit virtual
> address when using FS/GS with a non-zero base.
>
> Fixes: 064aea774768 ("KVM: nVMX: Decoding memory operands of VMX instructions")
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Jim Mattson <jmattson@google.com>
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 40b6e331dce9..bc8e3fc6724d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4039,20 +4039,41 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 	if (index_is_valid)
 		off += kvm_register_read(vcpu, index_reg)<<scaling;
 	vmx_get_segment(vcpu, &s, seg_reg);
-	*ret = s.base + off;
+
+	/*
+	 * The effective address, i.e. @off, of a memory operand is truncated
+	 * based on the address size of the instruction.  Note that this is
+	 * the *effective address*, i.e. the address prior to accounting for
+	 * the segment's base.
+	 */
 	if (addr_size == 1) /* 32 bit */
-		*ret &= 0xffffffff;
+		off &= 0xffffffff;
+	else if (addr_size == 0) /* 16 bit */
+		off &= 0xffff;
 
 	/* Checks for #GP/#SS exceptions. */
 	exn = false;
 	if (is_long_mode(vcpu)) {
+		/*
+		 * The virtual/linear address is never truncated in 64-bit
+		 * mode, e.g. a 32-bit address size can yield a 64-bit virtual
+		 * address when using FS/GS with a non-zero base.
+		 */
+		*ret = s.base + off;
+
 		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
 		 * non-canonical form.  This is the only check on the memory
 		 * destination for long mode!
 		 */
 		exn = is_noncanonical_address(*ret, vcpu);
 	} else if (is_protmode(vcpu)) {
+		/*
+		 * When not in long mode, the virtual/linear address is
+		 * unconditionally truncated to 32 bits regardless of the
+		 * address size.
+		 */
+		*ret = (s.base + off) & 0xffffffff;
+
 		/* Protected mode: apply checks for segment validity in the
 		 * following order:
 		 *  - segment type check (#GP(0) may be thrown)
The address size of an instruction affects the effective address, not
the virtual/linear address.  The final address may still be truncated,
e.g. to 32-bits outside of long mode, but that happens irrespective of
the address size, e.g. a 32-bit address size can yield a 64-bit virtual
address when using FS/GS with a non-zero base.

Fixes: 064aea774768 ("KVM: nVMX: Decoding memory operands of VMX instructions")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)