[v3,8/8] KVM: x86/mmu: explicitly check nx_hugepage in disallowed_hugepage_adjust()

Message ID 20220805230513.148869-9-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: x86: Apply NX mitigation more precisely

Commit Message

Sean Christopherson Aug. 5, 2022, 11:05 p.m. UTC
From: Mingwei Zhang <mizhang@google.com>

Explicitly check if an NX huge page is disallowed when determining if a
page fault needs to be forced to use a smaller sized page.  KVM currently
assumes that the NX huge page mitigation is the only scenario where KVM
will force a shadow page instead of a huge page, and so unnecessarily
keeps an existing shadow page instead of replacing it with a huge page.

Any scenario that causes KVM to zap leaf SPTEs may result in having an SP
that can be made huge without violating the NX huge page mitigation.
E.g. prior to commit 5ba7c4c6d1c7 ("KVM: x86/MMU: Zap non-leaf SPTEs when
disabling dirty logging"), KVM would keep shadow pages after disabling
dirty logging due to a live migration being canceled, resulting in
degraded performance due to running with 4KiB pages instead of huge pages.

Although the dirty logging case is "fixed", that fix is coincidental,
i.e. it is an implementation detail, and there are other scenarios where KVM
will zap leaf SPTEs.  E.g. zapping leaf SPTEs in response to a host page
migration (mmu_notifier invalidation) to create a huge page would yield a
similar result; KVM would see the shadow-present non-leaf SPTE and assume
a huge page is disallowed.

Fixes: b8e8c8303ff2 ("kvm: mmu: ITLB_MULTIHIT mitigation")
Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
[sean: add barrier comments, use spte_to_child_sp(), massage changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c     | 17 +++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.c |  3 ++-
 2 files changed, 17 insertions(+), 3 deletions(-)
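
For context, the tdp_mmu.c hunk in the patch below only updates a comment;
the write side it refers to follows the usual publish/consume pairing: the
vCPU installing the shadow page sets the flag, issues smp_wmb(), and only
then makes the SPTE present, while a faulting vCPU that observes a present
SPTE issues smp_rmb() before trusting the flag.  A condensed sketch of both
sides (reconstructed for illustration, not a verbatim copy of the upstream
functions):

	/* Writer side, in kvm_tdp_mmu_map(), reconstructed: */
	sp->nx_huge_page_disallowed = fault->huge_page_disallowed;

	/*
	 * Ensure nx_huge_page_disallowed is visible before the SP is marked
	 * present, as mmu_lock is held for read.  Pairs with the smp_rmb()
	 * in tdp_mmu_unlink_sp() and in disallowed_hugepage_adjust().
	 */
	smp_wmb();

	/* ... then install the SPTE pointing at sp, making it present. */

	/* Reader side, in disallowed_hugepage_adjust(), as added below: */
	if (is_shadow_present_pte(spte) && !is_large_pte(spte)) {
		/* Only read the flag after the SPTE was seen as present. */
		smp_rmb();
		if (!spte_to_child_sp(spte)->nx_huge_page_disallowed)
			return;	/* the SP can be replaced by a huge page */
		/* otherwise force the fault down to a smaller page size */
	}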

Comments

Paolo Bonzini Aug. 9, 2022, 12:57 p.m. UTC | #1
On 8/6/22 01:05, Sean Christopherson wrote:
>   	    !is_large_pte(spte)) {
> +		u64 page_mask;
> +
> +		/*
> +		 * Ensure nx_huge_page_disallowed is read after checking for a
> +		 * present shadow page.  A different vCPU may be concurrently
> +		 * installing the shadow page if mmu_lock is held for read.
> +		 * Pairs with the smp_wmb() in kvm_tdp_mmu_map().
> +		 */
> +		smp_rmb();
> +
> +		if (!spte_to_child_sp(spte)->nx_huge_page_disallowed)
> +			return;
> +

I wonder if the barrier shouldn't be simply in to_shadow_page(), i.e. 
always assume in the TDP MMU code that sp->xyz is read after the SPTE 
that points to that struct kvm_mmu_page.

Paolo
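
As a rough illustration, Paolo's suggestion amounts to folding the barrier
into the SPTE-to-shadow-page lookup itself, so that every TDP MMU path gets
the ordering for free.  The helper body below is a simplified approximation
of KVM's to_shadow_page() with the barrier added where the suggestion
implies; it is a sketch, not the proposed patch:

	static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
	{
		struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);

		/*
		 * Ensure fields of the shadow page, e.g.
		 * nx_huge_page_disallowed, are read after the SPTE pointing
		 * at it was observed as shadow-present.  Pairs with the
		 * smp_wmb() in kvm_tdp_mmu_map().
		 */
		smp_rmb();

		return (struct kvm_mmu_page *)page_private(page);
	}
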
Sean Christopherson Aug. 9, 2022, 2:49 p.m. UTC | #2
On Tue, Aug 09, 2022, Paolo Bonzini wrote:
> On 8/6/22 01:05, Sean Christopherson wrote:
> >   	    !is_large_pte(spte)) {
> > +		u64 page_mask;
> > +
> > +		/*
> > +		 * Ensure nx_huge_page_disallowed is read after checking for a
> > +		 * present shadow page.  A different vCPU may be concurrently
> > +		 * installing the shadow page if mmu_lock is held for read.
> > +		 * Pairs with the smp_wmb() in kvm_tdp_mmu_map().
> > +		 */
> > +		smp_rmb();
> > +
> > +		if (!spte_to_child_sp(spte)->nx_huge_page_disallowed)
> > +			return;
> > +
> 
> I wonder if the barrier shouldn't be simply in to_shadow_page(), i.e. always
> assume in the TDP MMU code that sp->xyz is read after the SPTE that points
> to that struct kvm_mmu_page.

If we can get away with it, I'd prefer to rely on the READ_ONCE() in
kvm_tdp_mmu_read_spte() and the required ordering of:

	READ_ONCE() => PRESENT => spte_to_child_sp()
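
In other words, the reasoning appears to be that the child shadow page
pointer is derived from the SPTE value that was just read, so the load of
sp->nx_huge_page_disallowed carries an address dependency on the SPTE read
and, as long as that dependency is preserved, cannot be satisfied ahead of
it.  A minimal sketch of that chain (nx_huge_page_disallowed_for_spte() is
an invented name for illustration; the real logic lives in
disallowed_hugepage_adjust()):

	static bool nx_huge_page_disallowed_for_spte(u64 *sptep)
	{
		/* As in kvm_tdp_mmu_read_spte(): read the SPTE exactly once. */
		u64 spte = READ_ONCE(*sptep);

		if (!is_shadow_present_pte(spte) || is_large_pte(spte))
			return false;

		/*
		 * The sp address is computed from 'spte', so this load
		 * depends on the READ_ONCE() above.
		 */
		return spte_to_child_sp(spte)->nx_huge_page_disallowed;
	}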

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1442129c85e0..3ddfc82868fd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3090,6 +3090,19 @@  void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 	    cur_level == fault->goal_level &&
 	    is_shadow_present_pte(spte) &&
 	    !is_large_pte(spte)) {
+		u64 page_mask;
+
+		/*
+		 * Ensure nx_huge_page_disallowed is read after checking for a
+		 * present shadow page.  A different vCPU may be concurrently
+		 * installing the shadow page if mmu_lock is held for read.
+		 * Pairs with the smp_wmb() in kvm_tdp_mmu_map().
+		 */
+		smp_rmb();
+
+		if (!spte_to_child_sp(spte)->nx_huge_page_disallowed)
+			return;
+
 		/*
 		 * A small SPTE exists for this pfn, but FNAME(fetch)
 		 * and __direct_map would like to create a large PTE
@@ -3097,8 +3110,8 @@  void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_
 		 * patching back for them into pfn the next 9 bits of
 		 * the address.
 		 */
-		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
-				KVM_PAGES_PER_HPAGE(cur_level - 1);
+		page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
+			    KVM_PAGES_PER_HPAGE(cur_level - 1);
 		fault->pfn |= fault->gfn & page_mask;
 		fault->goal_level--;
 	}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 526d38704e5c..c5314ca95e08 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1202,7 +1202,8 @@  int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			/*
 			 * Ensure nx_huge_page_disallowed is visible before the
 			 * SP is marked present, as mmu_lock is held for read.
-			 * Pairs with the smp_rmb() in tdp_mmu_unlink_sp().
+			 * Pairs with the smp_rmb() in tdp_mmu_unlink_sp() and
+			 * in disallowed_hugepage_adjust().
 			 */
 			smp_wmb();