
Use rsvd_bits_mask in load_pdptrs for cleanup and considering the EXB bit

Message ID 9832F13BD22FB94A829F798DA4A8280501A3C029EB@pdsmsx503.ccr.corp.intel.com (mailing list archive)
State Accepted

Commit Message

Dong, Eddie March 31, 2009, 7:32 a.m. UTC
Neiger, Gil wrote:
> PDPTEs are used only if CR0.PG=CR4.PAE=1.
> 
> In that situation, their format depends on the value of IA32_EFER.LMA.
> 
> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE
> that is marked present.  The execute-disable setting of a page is
> determined only by the PDE and PTE.  
> 
> If IA32_EFER.LMA=1, bit 63 is used as the execute-disable bit in PML4
> entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
> 
> 					- Gil

Rebased.
Thanks, eddie
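
To make Gil's rules concrete, here is a minimal standalone sketch of a
PDPTE reserved-bits mask that follows them.  The rsvd_bits() helper mirrors
KVM's; pdpte_rsvd_mask and its lma/nxe parameters are illustrative names
only, not the KVM fields (the patch below encodes the PAE case in
reset_rsvds_bits_mask(), where rsvd_bits(maxphyaddr, 63) reserves bit 63
unconditionally for PT32E PDPTEs):

#include <stdbool.h>
#include <stdint.h>

/* Mirrors KVM's rsvd_bits() helper: mask with bits s..e (inclusive) set. */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

/* Reserved bits in a present PDPTE, per the rules above; the low
 * reserved bits of PAE PDPTEs (1-2 and 7-8, see the patch) are omitted. */
static uint64_t pdpte_rsvd_mask(int maxphyaddr, bool lma, bool nxe)
{
	/* Bits from MAXPHYADDR up to 62 are always reserved. */
	uint64_t mask = rsvd_bits(maxphyaddr, 62);

	/*
	 * Bit 63: reserved when LMA=0; when LMA=1 it is the XD bit,
	 * which is itself reserved unless NXE=1.
	 */
	if (!lma || !nxe)
		mask |= 1ULL << 63;

	return mask;
}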


commit 032caed3da123950eeb3e192baf444d4eae80c85
Author: root <root@eddie-wb.localdomain>
Date:   Tue Mar 31 16:22:49 2009 +0800

    Use rsvd_bits_mask in load_pdptrs and remove bits 5-6 from rsvd_bits_mask per the latest SDM.
    
    Signed-off-by: Eddie Dong <Eddie.Dong@intel.com>

Comments

Avi Kivity March 31, 2009, 8:55 a.m. UTC | #1
Dong, Eddie wrote:
> Neiger, Gil wrote:
>   
>> PDPTEs are used only if CR0.PG=CR4.PAE=1.
>>
>> In that situation, their format depends on the value of IA32_EFER.LMA.
>>
>> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE
>> that is marked present.  The execute-disable setting of a page is
>> determined only by the PDE and PTE.  
>>
>> If IA32_EFER.LMA=1, bit 63 is used as the execute-disable bit in PML4
>> entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
>>
>> 					- Gil
>>     
>
> Rebased.
> Thanks, eddie
>
>
>   

Looks good, but doesn't apply; please check if you are working against 
the latest version.

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2eab758..1bed3aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@  static int is_nx(struct kvm_vcpu *vcpu)
 	return vcpu->arch.shadow_efer & EFER_NX;
 }
 
-static int is_present_pte(unsigned long pte)
-{
-	return pte & PT_PRESENT_MASK;
-}
-
 static int is_shadow_present_pte(u64 pte)
 {
 	return pte != shadow_trap_nonpresent_pte
@@ -2199,6 +2194,9 @@  void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
 		context->rsvd_bits_mask[1][0] = 0;
 		break;
 	case PT32E_ROOT_LEVEL:
+		context->rsvd_bits_mask[0][2] =
+			rsvd_bits(maxphyaddr, 63) |
+			rsvd_bits(7, 8) | rsvd_bits(1, 2);	/* PDPTE */
 		context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
 			rsvd_bits(maxphyaddr, 62);		/* PDE */
 		context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 258e5d5..2a6eb50 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@  static inline int is_paging(struct kvm_vcpu *vcpu)
 	return vcpu->arch.cr0 & X86_CR0_PG;
 }
 
+static inline int is_present_pte(unsigned long pte)
+{
+	return pte & PT_PRESENT_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..b449ff0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@  int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
 		goto out;
 	}
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
-		if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
+		if (is_present_pte(pdpte[i]) &&
+		    (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
 			ret = 0;
 			goto out;
 		}
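
As a sanity check on the new mask, the following standalone snippet (not
KVM code) compares it against the old hard-coded literal.  With
maxphyaddr = 36 (the value baked into the old constant's high bits), the
two differ only in bits 5-6, which the commit message drops per the
latest SDM:

#include <stdint.h>
#include <stdio.h>

/* Mirrors KVM's rsvd_bits() helper: mask with bits s..e (inclusive) set. */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	int maxphyaddr = 36;
	uint64_t old_mask = 0xfffffff0000001e6ull;
	uint64_t new_mask = rsvd_bits(maxphyaddr, 63) |
			    rsvd_bits(7, 8) | rsvd_bits(1, 2);

	/* Prints 0x60, i.e. only bits 5 and 6 differ. */
	printf("old ^ new = %#llx\n",
	       (unsigned long long)(old_mask ^ new_mask));
	return 0;
}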