[3/3] KVM: nVMX: Ignore limit checks on VMX instructions using flat segments

Message ID 20190123223925.7558-4-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: nVMX: Fix address calculations for VMX instrs

Commit Message

Sean Christopherson Jan. 23, 2019, 10:39 p.m. UTC
Regarding segments with a limit==0xffffffff, the SDM officially states:

    When the effective limit is FFFFFFFFH (4 GBytes), these accesses may
    or may not cause the indicated exceptions.  Behavior is
    implementation-specific and may vary from one execution to another.

In practice, all CPUs that support VMX ignore limit checks for "flat
segments", i.e. an expand-up data or code segment with base=0 and
limit=0xffffffff.  This is subtly different from wrapping the effective
address calculation based on the address size, as the flat segment
behavior also applies to accesses that would wrap the 4G boundary, e.g.
a 4-byte access starting at 0xffffffff will access linear addresses
0xffffffff, 0x0, 0x1 and 0x2.

Fixes: f9eb4af67c9d ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
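
For reference, the flat-segment test the patch adds can be restated as a
standalone predicate along the lines of the sketch below (illustrative
only; the helper name is invented, and the fields mirror struct
kvm_segment).  In the segment type field, bit 3 (0x8) set means a code
segment, and for data segments bit 2 (0x4) is the expand-down (E) flag,
so "expand-up data or code" is bit 3 set or bit 2 clear:

static bool is_flat_segment(const struct kvm_segment *s)
{
        /* Code segment (type bit 3) or expand-up data (type bit 2 clear). */
        bool code_or_expand_up = (s->type & 8) || !(s->type & 4);

        /*
         * Flat means base==0 with a 4 GByte limit; CPUs that support
         * VMX skip limit checks for such segments.
         */
        return s->base == 0 && s->limit == 0xffffffff && code_or_expand_up;
}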

Comments

Jim Mattson Feb. 12, 2019, 6:31 p.m. UTC | #1
On Wed, Jan 23, 2019 at 2:39 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> [snip]
>
Reviewed-by: Jim Mattson <jmattson@google.com>

Jim Mattson March 18, 2021, 11:24 p.m. UTC | #2
On Wed, Jan 23, 2019 at 2:39 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> [snip]
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index bc8e3fc6724d..537c4899cf20 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4097,10 +4097,16 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
>                 /* Protected mode: #GP(0)/#SS(0) if the segment is unusable.
>                  */
>                 exn = (s.unusable != 0);
> -               /* Protected mode: #GP(0)/#SS(0) if the memory
> -                * operand is outside the segment limit.
> +
> +               /*
> +                * Protected mode: #GP(0)/#SS(0) if the memory operand is
> +                * outside the segment limit.  All CPUs that support VMX ignore
> +                * limit checks for flat segments, i.e. segments with base==0,
> +                * limit==0xffffffff and of type expand-up data or code.
>                  */
> -               exn = exn || (off + sizeof(u64) > s.limit);
> +               if (!(s.base == 0 && s.limit == 0xffffffff &&
> +                    ((s.type & 8) || !(s.type & 4))))
> +                       exn = exn || (off + sizeof(u64) > s.limit);

I know I signed off on this, but looking at it again, I don't think
this is correct for expand-down segments.

From the SDM:

> For expand-down segments, the segment limit has the reverse function;
> the offset can range from the segment limit plus 1 to FFFFFFFFH or
> FFFFH, depending on the setting of the B flag. Offsets less than or
> equal to the segment limit generate general-protection exceptions or
> stack-fault exceptions.
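
Concretely, a limit check honoring the expand-down semantics quoted
above might look like the sketch below (illustrative only, not the fix
that eventually landed upstream; the helper name is invented and the
fields mirror struct kvm_segment):

static bool operand_exceeds_limit(const struct kvm_segment *s,
                                  u64 off, u64 len)
{
        /* The upper bound for expand-down offsets depends on the B/DB flag. */
        u64 upper = s->db ? 0xffffffffull : 0xffffull;

        if ((s->type & 8) || !(s->type & 4)) {
                /* Code or expand-up data: flat segments skip the check. */
                if (s->base == 0 && s->limit == 0xffffffff)
                        return false;
                return off + len - 1 > s->limit;
        }

        /*
         * Expand-down data: valid offsets run from limit + 1 through
         * 0xffffffff (B set) or 0xffff (B clear), so an access at or
         * below the limit, or extending past the upper bound, faults.
         */
        return off <= s->limit || off + len - 1 > upper;
}

The key point is that the expand-down case inverts the comparison
against the limit rather than skipping it.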

Patch

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index bc8e3fc6724d..537c4899cf20 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4097,10 +4097,16 @@  int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 		/* Protected mode: #GP(0)/#SS(0) if the segment is unusable.
 		 */
 		exn = (s.unusable != 0);
-		/* Protected mode: #GP(0)/#SS(0) if the memory
-		 * operand is outside the segment limit.
+
+		/*
+		 * Protected mode: #GP(0)/#SS(0) if the memory operand is
+		 * outside the segment limit.  All CPUs that support VMX ignore
+		 * limit checks for flat segments, i.e. segments with base==0,
+		 * limit==0xffffffff and of type expand-up data or code.
 		 */
-		exn = exn || (off + sizeof(u64) > s.limit);
+		if (!(s.base == 0 && s.limit == 0xffffffff &&
+		     ((s.type & 8) || !(s.type & 4))))
+			exn = exn || (off + sizeof(u64) > s.limit);
 	}
 	if (exn) {
 		kvm_queue_exception_e(vcpu,