
kvm: vmx: Don't use INVVPID when EPT is enabled

Message ID 20170315145611.16880-1-jmattson@google.com (mailing list archive)
State New, archived

Commit Message

Jim Mattson March 15, 2017, 2:56 p.m. UTC
According to the Intel SDM, volume 3, section 28.3.2: Creating and
Using Cached Translation Information, "No linear mappings are used
while EPT is in use." INVEPT will invalidate both the guest-physical
mappings and the combined mappings in the TLBs and paging-structure
caches, so an INVVPID is superfluous.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/vmx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

David Hildenbrand March 15, 2017, 3:06 p.m. UTC | #1
On 15.03.2017 15:56, Jim Mattson wrote:
> According to the Intel SDM, volume 3, section 28.3.2: Creating and
> Using Cached Translation Information, "No linear mappings are used
> while EPT is in use." INVEPT will invalidate both the guest-physical
> mappings and the combined mappings in the TLBs and paging-structure
> caches, so an INVVPID is superfluous.
> 
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>  arch/x86/kvm/vmx.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 2c21181c8a70..114f358b959c 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4011,11 +4011,12 @@ static void exit_lmode(struct kvm_vcpu *vcpu)
>  
>  static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
>  {
> -	vpid_sync_context(vpid);
>  	if (enable_ept) {
>  		if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
>  			return;
>  		ept_sync_context(construct_eptp(vcpu->arch.mmu.root_hpa));
> +	} else {
> +		vpid_sync_context(vpid);
>  	}
>  }

I assume this will also work fine if our nested guest is not using EPT.

I was already wondering about the

kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu)

in handle_invept(), because it would use a wrong vpid when flushing.
(not the nested vpid but the "ordinary" one).
Jim Mattson March 15, 2017, 3:53 p.m. UTC | #2
Assuming the CPU supports INVEPT by context...

Things are actually simpler if the nested guest does not use EPT,
because then L1 and L2 share an EP4TA, and the INVEPT invoked by
ept_sync_context will invalidate any cached mappings that either L1 or
L2 might use.

If the nested guest *does* use EPT, then ept_sync_context will only
invalidate the cached mappings associated with L1 or L2, but not both.
If there is ever a mismatch between vcpu->arch.mmu.root_hpa and
vmx->vpid (i.e. one belongs to L1 and the other belongs to L2), then
the original version of this code would have invalidated the combined
mappings of both L1 and L2, but it still would have invalidated only
the guest-physical mappings associated with the current
vcpu->arch.mmu.root_hpa.

On Wed, Mar 15, 2017 at 8:06 AM, David Hildenbrand <david@redhat.com> wrote:
> On 15.03.2017 15:56, Jim Mattson wrote:
>> According to the Intel SDM, volume 3, section 28.3.2: Creating and
>> Using Cached Translation Information, "No linear mappings are used
>> while EPT is in use." INVEPT will invalidate both the guest-physical
>> mappings and the combined mappings in the TLBs and paging-structure
>> caches, so an INVVPID is superfluous.
>>
>> Signed-off-by: Jim Mattson <jmattson@google.com>
>> ---
>>  arch/x86/kvm/vmx.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 2c21181c8a70..114f358b959c 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -4011,11 +4011,12 @@ static void exit_lmode(struct kvm_vcpu *vcpu)
>>
>>  static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
>>  {
>> -     vpid_sync_context(vpid);
>>       if (enable_ept) {
>>               if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
>>                       return;
>>               ept_sync_context(construct_eptp(vcpu->arch.mmu.root_hpa));
>> +     } else {
>> +             vpid_sync_context(vpid);
>>       }
>>  }
>
> I assume this will also work fine if our nested guest is not using EPT.
>
> I was already wondering about the
>
> kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu)
>
> in handle_invept(), because it would use a wrong vpid when flushing.
> (not the nested vpid but the "ordinary" one).
>
> --
>
> Thanks,
>
> David
David Hildenbrand March 16, 2017, 9:25 a.m. UTC | #3
On 15.03.2017 16:53, Jim Mattson wrote:
> Assuming the CPU supports INVEPT by context...
> 
> Things are actually simpler if the nested guest does not use EPT,
> because then L1 and L2 share an EP4TA, and the INVEPT invoked by
> ept_sync_context will invalidate any cached mappings that either L1 or
> L2 might use.
> 
> If the nested guest *does* use EPT, then ept_sync_context will only
> invalidate the cached mappings associated with L1 or L2, but not both.
> If there is ever a mismatch between vcpu->arch.mmu.root_hpa and
> vmx->vpid (i.e. one belongs to L1 and the other belongs to L2), then
> the original version of this code would have invalidated the combined
> mappings of both L1 and L2, but it still would have invalidated only
> the guest-physical mappings associated with the current
> vcpu->arch.mmu.root_hpa.
> 

Thanks for the explanation! Complicated stuff, so highly appreciated!

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2c21181c8a70..114f358b959c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4011,11 +4011,12 @@  static void exit_lmode(struct kvm_vcpu *vcpu)
 
 static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
 {
-	vpid_sync_context(vpid);
 	if (enable_ept) {
 		if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 			return;
 		ept_sync_context(construct_eptp(vcpu->arch.mmu.root_hpa));
+	} else {
+		vpid_sync_context(vpid);
 	}
 }