
Fix bug when shadowing EPT page tables

Message ID 20181015205548.GF573482@lisa.in-ulm.de (mailing list archive)

Commit Message

Christian Ehrhardt Oct. 15, 2018, 8:55 p.m. UTC
Hi,

on x86, shadow paging uses is_pae() to determine whether the
shadowed page table contains 4-byte or 8-byte PTEs. However,
when shadowing EPT tables set up by an L1 guest for a nested
L2 guest, this may be wrong, as EPT page tables always use
8-byte PTEs regardless of the value of %cr4.PAE.
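
To make the width difference concrete, here is an illustrative
helper (hypothetical, not part of the patch) showing the entry
size of the shadowed table as encoded in the page role; a 4K
table page thus holds 1024 entries for non-PAE 32-bit tables,
but only 512 for PAE/long-mode and EPT tables:

	/* Hypothetical helper, for illustration only. */
	static unsigned int shadowed_pte_size(struct kvm_mmu_page *sp)
	{
		/* EPT entries are 64-bit, so cr4_pae must be set there too. */
		return sp->role.cr4_pae ? 8 : 4;
	}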

Similar logic applies to the cr4_pae field of a shadow page's
page role: arguably, the flag should be set when shadowing an
EPT page table, but currently it is not.

As a result, all 8-byte PTE writes to write-tracked pages are
treated as unaligned accesses to a page with 4-byte PTEs
(detect_write_misaligned() wrongly returns true). For non-PAE
L2 guests I think we might zap the wrong PTEs.
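
For reference, here is an abridged excerpt of
detect_write_misaligned(), paraphrased from this era's
arch/x86/kvm/mmu.c (details may differ by kernel version):
with role.cr4_pae clear, pte_size is 4, so even an aligned
8-byte write straddles two presumed entries and is flagged as
misaligned.

	static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
					    int bytes)
	{
		unsigned offset, pte_size, misaligned;

		offset = offset_in_page(gpa);
		/* Wrongly 4 for EPT shadow pages when cr4_pae is not set. */
		pte_size = sp->role.cr4_pae ? 8 : 4;

		/* A 1-byte write at an entry boundary only updates flag bits. */
		if (!(offset & (pte_size - 1)) && bytes == 1)
			return false;

		/* Does [offset, offset + bytes) cross a pte_size boundary? */
		misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
		misaligned |= bytes < 4;

		return misaligned;
	}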

The patch below fixes this for the VMX case with the following
approach:
- Always set cr4_pae in the base_role when shadowing EPT tables.
- Replace calls to is_pae() with base_role.cr4_pae.
Someone who properly understands the SVM case should probably
have a look at this and determine whether something similar is
required.

    regards    Christian


From 7e1b585e80c93ecf890c173bef43a4015af40630 Mon Sep 17 00:00:00 2001
From: Christian Ehrhardt <lk@--e.de>
Date: Mon, 15 Oct 2018 20:06:04 +0200
Subject: [PATCH 2/2] KVM/MMU: Always set cr4_pae when shadowing EPT pages

When shadowing EPT page tables set up by L1 for a nested L2
guest, the value of %cr4.PAE is irrelevant. However, in the
page role of a shadow page, cr4_pae effectively means that the
shadowed page table uses 64-bit page table entries. When
shadowing EPT page tables this is always the case, so set
cr4_pae in this case.

Similarly, calls to is_pae(vcpu) do not return useful
information when shadowing EPT tables. With the change above,
we can check the cr4_pae bit in the current MMU's base_role
instead. In most cases this is the same as is_pae() anyway;
however, when shadowing EPT tables, using is_pae() is wrong.
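
For reference, is_pae() just reads the PAE bit of the current
vCPU's %cr4 (excerpt from arch/x86/kvm/x86.h), which while L2
runs reflects L2's %cr4 and says nothing about the EPT entry
format:

	static inline int is_pae(struct kvm_vcpu *vcpu)
	{
		return kvm_read_cr4_bits(vcpu, X86_CR4_PAE);
	}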

Signed-off-by: Christian Ehrhardt <lk@c--e.de>
---
 arch/x86/kvm/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 51b953ad9d4e..01857e4cafee 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2180,7 +2180,7 @@  static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			    struct list_head *invalid_list)
 {
-	if (sp->role.cr4_pae != !!is_pae(vcpu)
+	if (sp->role.cr4_pae != vcpu->arch.mmu.base_role.cr4_pae
 	    || vcpu->arch.mmu.sync_page(vcpu, sp) == 0) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return false;
@@ -4838,6 +4838,7 @@  kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
 	role.direct = false;
 	role.ad_disabled = !accessed_dirty;
 	role.guest_mode = true;
+	role.cr4_pae = true;
 	role.access = ACC_ALL;
 
 	return role;
@@ -5023,7 +5024,7 @@  static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
 	 * as the current vcpu paging mode since we update the sptes only
 	 * when they have the same mode.
 	 */
-	if (is_pae(vcpu) && *bytes == 4) {
+	if (vcpu->arch.mmu.base_role.cr4_pae && *bytes == 4) {
 		/* Handle a 32-bit guest writing two halves of a 64-bit gpte */
 		*gpa &= ~(gpa_t)7;
 		*bytes = 8;