[v2,1/2] kvm: vmx: fix limit checking in get_vmx_mem_address()

Message ID 20190604220221.GA23558@dnote (mailing list archive)
State New, archived

Commit Message

Eugene Korenevsky June 4, 2019, 10:02 p.m. UTC
Intel SDM vol. 3, 5.3:
The processor causes a
general-protection exception (or, if the segment is SS, a stack-fault
exception) any time an attempt is made to access the following addresses
in a segment:
- A byte at an offset greater than the effective limit
- A word at an offset greater than the (effective-limit – 1)
- A doubleword at an offset greater than the (effective-limit – 3)
- A quadword at an offset greater than the (effective-limit – 7)

Therefore, the generic limit checking error condition must be

exn = off > limit + 1 - operand_len

but not

exn = off + operand_len > limit

as it is now.
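
For illustration, a minimal standalone sketch of the boundary case (made-up
values, not kernel code): an 8-byte access at offset 0 into a segment with
limit 7 touches bytes 0..7 and is legal, yet the current check rejects it.

	unsigned int limit = 7;	/* bytes 0..7 are within the segment */
	unsigned int off = 0;	/* 8-byte access covers bytes 0..7 */

	/* current check: 0 + 8 > 7 is true, a false exception */
	int exn_old = off + 8 > limit;

	/* fixed check: 0 > 7 + 1 - 8 == 0 is false, access allowed */
	int exn_new = off > limit + 1 - 8;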

Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
---
 arch/x86/kvm/vmx/nested.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Sean Christopherson June 5, 2019, 2:28 p.m. UTC | #1
On Wed, Jun 05, 2019 at 01:02:21AM +0300, Eugene Korenevsky wrote:
> Intel SDM vol. 3, 5.3:
> The processor causes a
> general-protection exception (or, if the segment is SS, a stack-fault
> exception) any time an attempt is made to access the following addresses
> in a segment:
> - A byte at an offset greater than the effective limit
> - A word at an offset greater than the (effective-limit – 1)
> - A doubleword at an offset greater than the (effective-limit – 3)
> - A quadword at an offset greater than the (effective-limit – 7)
> 
> Therefore, the generic limit checking error condition must be
> 
> exn = off > limit + 1 - operand_len
> 
> but not
> 
> exn = off + operand_len > limit
> 
> as it is now.

Probably worth adding a note in the changelog about the access size
being hardcoded to quadword.  It's difficult to correlate the code with
the changelog without the context of the following patch to add 'len'.
 
> Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index f1a69117ac0f..fef3d7031715 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4115,7 +4115,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
>  		 */
>  		if (!(s.base == 0 && s.limit == 0xffffffff &&
>  		     ((s.type & 8) || !(s.type & 4))))
> -			exn = exn || (off + sizeof(u64) > s.limit);
> +			exn = exn || (off > s.limit + 1 - sizeof(u64));

Adjusting the limit will wrap a small limit, e.g. s.limit=3 will check
@off against 0xfffffffc.  And IMO, "off + sizeof(u64) - 1 > s.limit" is
more intuitive anyways, e.g. it conveys that we're calculating the
address of the last byte being accessed and checking to see if that would
cause a limit violation.
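
To make the wrap concrete, a small standalone sketch (illustrative values,
not kernel code):

	unsigned int limit = 3;		/* tiny segment limit */
	unsigned long off = 4;		/* any 8-byte access here must fault */

	/*
	 * limit + 1 - sizeof(u64) is evaluated in unsigned arithmetic:
	 * 3 + 1 - 8 wraps to 0xfffffffc (or its 64-bit equivalent), so
	 * the patched check never fires.
	 */
	int exn_wrapped = off > limit + 1 - sizeof(long long);	/* 0: bug */

	/* last-byte form: 4 + 8 - 1 = 11 > 3, the access correctly faults */
	int exn_last_byte = off + sizeof(long long) - 1 > limit;	/* 1 */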

On a related note, there's a pre-existing wrap bug for 32-bit KVM since
@off is a 32-bit value (gva_t is unsigned long), but that's easily fixed
by casting @off to a u64.
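
With both suggestions applied, the check might look something like this
(a sketch against the quoted hunk, not a tested patch):

	/*
	 * Widen @off before the addition: on 32-bit KVM, off is a 32-bit
	 * unsigned long, so e.g. off = 0xfffffffc would wrap back to 3
	 * and slip past the limit check.
	 */
	exn = exn || ((u64)off + sizeof(u64) - 1 > s.limit);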

>  	}
>  	if (exn) {
>  		kvm_queue_exception_e(vcpu,
> -- 
> 2.21.0
>
Patch

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index f1a69117ac0f..fef3d7031715 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4115,7 +4115,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 		 */
 		if (!(s.base == 0 && s.limit == 0xffffffff &&
 		     ((s.type & 8) || !(s.type & 4))))
-			exn = exn || (off + sizeof(u64) > s.limit);
+			exn = exn || (off > s.limit + 1 - sizeof(u64));
 	}
 	if (exn) {
 		kvm_queue_exception_e(vcpu,