[v2] kvm/x86: allocate the write-tracking metadata on-demand

Message ID 20240206153405.489531-1-avagin@google.com (mailing list archive)
State New, archived
Series [v2] kvm/x86: allocate the write-tracking metadata on-demand

Commit Message

Andrei Vagin Feb. 6, 2024, 3:34 p.m. UTC
The write-track mechanism is used externally only by the gpu/drm/i915 driver.
Currently, it is always enabled if a kernel has been compiled with this
driver.

Enabling the write-track mechanism adds a two-byte overhead per page across
all memory slots. It isn't significant for regular VMs. However, in gVisor,
where the entire process virtual address space is mapped into the VM, even
with a 39-bit address space, the overhead amounts to 256MB.
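
For reference, the arithmetic behind that figure: a 39-bit address space
covers 2^39 bytes, i.e. 2^39 / 4096 = 2^27 4KiB pages, and two bytes of
write-tracking metadata per page comes to 2^27 * 2 = 2^28 bytes = 256MB.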

This change reworks the write-tracking mechanism to enable it on-demand
in kvm_page_track_register_notifier.

Here is Sean's comment about the locking scheme:

The only potential hiccup would be if taking slots_arch_lock would
deadlock, but it should be impossible for slots_arch_lock to be taken in
any other path that involves VFIO and/or KVMGT *and* can be coincident.
Except for kvm_arch_destroy_vm() (which deletes KVM's internal
memslots), slots_arch_lock is taken only through KVM ioctls(), and the
caller of kvm_page_track_register_notifier() *must* hold a reference to
the VM.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Andrei Vagin <avagin@google.com>
---
v1: https://lore.kernel.org/lkml/ZcErs9rPqT09nNge@google.com/T/
v2: allocate the write-tracking metadata on-demand

 arch/x86/include/asm/kvm_host.h |  2 +
 arch/x86/kvm/mmu/mmu.c          | 24 +++++------
 arch/x86/kvm/mmu/page_track.c   | 74 ++++++++++++++++++++++++++++-----
 arch/x86/kvm/mmu/page_track.h   |  3 +-
 4 files changed, 78 insertions(+), 25 deletions(-)

Comments

Sean Christopherson Feb. 13, 2024, 5:13 p.m. UTC | #1
On Tue, Feb 06, 2024, Andrei Vagin wrote:
> The write-track mechanism is used externally only by the gpu/drm/i915 driver.
> Currently, it is always enabled if a kernel has been compiled with this
> driver.
> 
> Enabling the write-track mechanism adds a two-byte overhead per page across
> all memory slots. It isn't significant for regular VMs. However, in gVisor,
> where the entire process virtual address space is mapped into the VM, even
> with a 39-bit address space, the overhead amounts to 256MB.
> 
> This change reworks the write-tracking mechanism to enable it on-demand
> in kvm_page_track_register_notifier.

Don't use "this change", "this patch", or any other variant of "this blah" that
you come up with.  :-)  Simply phrase the changelog as a command.

> Here is Sean's comment about the locking scheme:
> 
> The only potential hiccup would be if taking slots_arch_lock would
> deadlock, but it should be impossible for slots_arch_lock to be taken in
> any other path that involves VFIO and/or KVMGT *and* can be coincident.
> Except for kvm_arch_destroy_vm() (which deletes KVM's internal
> memslots), slots_arch_lock is taken only through KVM ioctls(), and the
> caller of kvm_page_track_register_notifier() *must* hold a reference to
> the VM.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Andrei Vagin <avagin@google.com>
> ---
> v1: https://lore.kernel.org/lkml/ZcErs9rPqT09nNge@google.com/T/
> v2: allocate the write-tracking metadata on-demand
> 
>  arch/x86/include/asm/kvm_host.h |  2 +
>  arch/x86/kvm/mmu/mmu.c          | 24 +++++------
>  arch/x86/kvm/mmu/page_track.c   | 74 ++++++++++++++++++++++++++++-----
>  arch/x86/kvm/mmu/page_track.h   |  3 +-
>  4 files changed, 78 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d271ba20a0b2..c35641add93c 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1503,6 +1503,8 @@ struct kvm_arch {
>  	 */
>  #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
>  	struct kvm_mmu_memory_cache split_desc_cache;
> +
> +	bool page_write_tracking_enabled;

Rather than a generic page_write_tracking_enabled, I think it makes sense to
explicitly track if there are *external* write tracking users.  One could argue
it makes the total tracking *too* fine grained, but I think it would be helpful
for readers to know when KVM itself is using write tracking (shadow paging) versus
when KVM has write tracking enabled, but has not allocated rmaps (external write
tracking user).

That way, kernels with CONFIG_KVM_EXTERNAL_WRITE_TRACKING=n don't need to check
the bool (though they'll still check kvm_shadow_root_allocated()).  And as a
bonus, the diff is quite a bit smaller.

> diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
> index c87da11f3a04..a4790b0a6f50 100644
> --- a/arch/x86/kvm/mmu/page_track.c
> +++ b/arch/x86/kvm/mmu/page_track.c
> @@ -20,10 +20,14 @@
>  #include "mmu_internal.h"
>  #include "page_track.h"
>  
> -bool kvm_page_track_write_tracking_enabled(struct kvm *kvm)
> +static bool kvm_page_track_write_tracking_enabled(struct kvm *kvm)
>  {
> -	return IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING) ||
> -	       !tdp_enabled || kvm_shadow_root_allocated(kvm);
> +	/*
> +	 * Read page_write_tracking_enabled before related pointers. Pairs with
> +	 * smp_store_release in kvm_page_track_write_tracking_enable.
> +	 */
> +	return smp_load_acquire(&kvm->arch.page_write_tracking_enabled) |

Needs to be a logical ||, not a bitwise |.
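
(Editorial aside, not part of the thread: a minimal userspace C sketch of the
difference. Logical || short-circuits, while bitwise | always evaluates both
operands; probe() below is only a stand-in for an operand with a side effect.)

#include <stdbool.h>
#include <stdio.h>

static bool probe(const char *name)
{
	/* Stands in for an operand whose evaluation is observable. */
	printf("%s evaluated\n", name);
	return true;
}

int main(void)
{
	bool enabled = true;

	/* || short-circuits: probe() is never called here. */
	if (enabled || probe("logical rhs"))
		puts("logical OR taken");

	/* | always evaluates both operands: probe() runs even though
	 * 'enabled' is already true. */
	if (enabled | probe("bitwise rhs"))
		puts("bitwise OR taken");

	return 0;
}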

> @@ -161,12 +204,21 @@ int kvm_page_track_register_notifier(struct kvm *kvm,
>  				     struct kvm_page_track_notifier_node *n)
>  {
>  	struct kvm_page_track_notifier_head *head;
> +	int r;
>  
>  	if (!kvm || kvm->mm != current->mm)
>  		return -ESRCH;
>  
>  	kvm_get_kvm(kvm);
>  
> +	mutex_lock(&kvm->slots_arch_lock);

This can and should check if write tracking is enabled without taking the mutex.
I *highly* doubt it will matter in practice, especially since KVM-GT is the only
user of the external tracking, and attaching a vGPU is a one-time thing.  But
it's a cheap and easy optimization that also makes the code look more like the
shadow_root_allocated flow, i.e. makes it easier to grok that the two flows are doing
very similar things.
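
(Editorial aside, not part of the thread: the general shape of the pattern Sean
is describing, i.e. a lockless fast-path check of a release-published flag,
with the real check and the one-time allocation done under a lock. struct foo,
allocate_metadata(), and the field names are placeholders, not KVM symbols.)

struct foo {
	struct mutex lock;
	bool enabled;
	/* ... metadata that must be visible before 'enabled' is set ... */
};

static int allocate_metadata(struct foo *f);	/* placeholder */

static bool feature_enabled(struct foo *f)
{
	/* Pairs with the smp_store_release() in enable_feature(). */
	return smp_load_acquire(&f->enabled);
}

static int enable_feature(struct foo *f)
{
	int r = 0;

	mutex_lock(&f->lock);
	/* Re-check under the lock in case another caller won the race. */
	if (!feature_enabled(f)) {
		r = allocate_metadata(f);
		if (!r) {
			/* Publish 'enabled' only after the metadata is in place. */
			smp_store_release(&f->enabled, true);
		}
	}
	mutex_unlock(&f->lock);
	return r;
}

static int maybe_enable_feature(struct foo *f)
{
	/* Fast path: skip the mutex entirely once the feature is on. */
	if (feature_enabled(f))
		return 0;
	return enable_feature(f);
}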

> +	r = kvm_page_track_write_tracking_enable(kvm);

I'd prefer to call this helper kvm_enable_external_write_tracking().  As is, I
had a hard time seeing which flows were calling enable() versus enabled().

> +	mutex_unlock(&kvm->slots_arch_lock);
> +	if (r) {
> +		kvm_put_kvm(kvm);

Allocate write tracking before kvm_get_kvm(), then there's no need to have an
error handling path.

All in all, this?  Compile tested only.

---
 arch/x86/include/asm/kvm_host.h |  9 +++++
 arch/x86/kvm/mmu/page_track.c   | 68 ++++++++++++++++++++++++++++++++-
 2 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ad5319a503f0..af857a899f85 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1467,6 +1467,15 @@ struct kvm_arch {
 	 */
 	bool shadow_root_allocated;
 
+#ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
+	/*
+	 * If set, the VM has (or had) an external write tracking user, and
+	 * thus all write tracking metadata has been allocated, even if KVM
+	 * itself isn't using write tracking.
+	 */
+	bool external_write_tracking_enabled;
+#endif
+
 #if IS_ENABLED(CONFIG_HYPERV)
 	hpa_t	hv_root_tdp;
 	spinlock_t hv_root_tdp_lock;
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index c87da11f3a04..6fb61b33675f 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -20,10 +20,23 @@
 #include "mmu_internal.h"
 #include "page_track.h"
 
+static bool kvm_external_write_tracking_enabled(struct kvm *kvm)
+{
+#ifdef CONFIG_KVM_EXTERNAL_WRITE_TRACKING
+	/*
+	 * Read external_write_tracking_enabled before related pointers.  Pairs
+	 * with the smp_store_release in kvm_page_track_write_tracking_enable().
+	 */
+	return smp_load_acquire(&kvm->arch.external_write_tracking_enabled);
+#else
+	return false;
+#endif
+}
+
 bool kvm_page_track_write_tracking_enabled(struct kvm *kvm)
 {
-	return IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING) ||
-	       !tdp_enabled || kvm_shadow_root_allocated(kvm);
+	return kvm_external_write_tracking_enabled(kvm) ||
+	       kvm_shadow_root_allocated(kvm) || !tdp_enabled;
 }
 
 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot)
@@ -153,6 +166,50 @@ int kvm_page_track_init(struct kvm *kvm)
 	return init_srcu_struct(&head->track_srcu);
 }
 
+static int kvm_enable_external_write_tracking(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int r = 0, i, bkt;
+
+	mutex_lock(&kvm->slots_arch_lock);
+
+	/*
+	 * Check for *any* write tracking user (not just external users) under
+	 * lock.  This avoids unnecessary work, e.g. if KVM itself is using
+	 * write tracking, or if two external users raced when registering.
+	 */
+	if (kvm_page_track_write_tracking_enabled(kvm))
+		goto out_success;
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(slot, bkt, slots) {
+			/*
+			 * Intentionally do NOT free allocations on failure to
+			 * avoid having to track which allocations were made
+			 * now versus when the memslot was created.  The
+			 * metadata is guaranteed to be freed when the slot is
+			 * freed, and will be kept/used if userspace retries
+			 * the failed ioctl() instead of killing the VM.
+			 */
+			r = kvm_page_track_write_tracking_alloc(slot);
+			if (r)
+				goto out_unlock;
+		}
+	}
+
+	/*
+	 * Ensure that external_write_tracking_enabled becomes true strictly
+	 * after all the related pointers are set.
+	 */
+out_success:
+	smp_store_release(&kvm->arch.external_write_tracking_enabled, true);
+out_unlock:
+	mutex_unlock(&kvm->slots_arch_lock);
+	return r;
+}
+
 /*
  * register the notifier so that event interception for the tracked guest
  * pages can be received.
@@ -161,10 +218,17 @@ int kvm_page_track_register_notifier(struct kvm *kvm,
 				     struct kvm_page_track_notifier_node *n)
 {
 	struct kvm_page_track_notifier_head *head;
+	int r;
 
 	if (!kvm || kvm->mm != current->mm)
 		return -ESRCH;
 
+	if (!kvm_external_write_tracking_enabled(kvm)) {
+		r = kvm_enable_external_write_tracking(kvm);
+		if (r)
+			return r;
+	}
+
 	kvm_get_kvm(kvm);
 
 	head = &kvm->arch.track_notifier_head;

base-commit: 7455665a3521aa7b56245c0a2810f748adc5fdd4
--
Andrei Vagin Feb. 13, 2024, 7:32 p.m. UTC | #2
On Tue, Feb 13, 2024 at 9:13 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Feb 06, 2024, Andrei Vagin wrote:
> > The write-track mechanism is used externally only by the gpu/drm/i915 driver.
> > Currently, it is always enabled if a kernel has been compiled with this
> > driver.
> >
> > Enabling the write-track mechanism adds a two-byte overhead per page across
> > all memory slots. It isn't significant for regular VMs. However, in gVisor,
> > where the entire process virtual address space is mapped into the VM, even
> > with a 39-bit address space, the overhead amounts to 256MB.
> >
> > This change reworks the write-tracking mechanism to enable it on-demand
> > in kvm_page_track_register_notifier.
>
> Don't use "this change", "this patch", or any other variant of "this blah" that
> you come up with.  :-)  Simply phrase the changelog as a command.

ok:)

>
> > Here is Sean's comment about the locking scheme:
> >
> > The only potential hiccup would be if taking slots_arch_lock would
> > deadlock, but it should be impossible for slots_arch_lock to be taken in
> > any other path that involves VFIO and/or KVMGT *and* can be coincident.
> > Except for kvm_arch_destroy_vm() (which deletes KVM's internal
> > memslots), slots_arch_lock is taken only through KVM ioctls(), and the
> > caller of kvm_page_track_register_notifier() *must* hold a reference to
> > the VM.
> >
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Sean Christopherson <seanjc@google.com>
> > Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> > Suggested-by: Sean Christopherson <seanjc@google.com>
> > Signed-off-by: Andrei Vagin <avagin@google.com>
> > ---
> > v1: https://lore.kernel.org/lkml/ZcErs9rPqT09nNge@google.com/T/
> > v2: allocate the write-tracking metadata on-demand
> >
> >  arch/x86/include/asm/kvm_host.h |  2 +
> >  arch/x86/kvm/mmu/mmu.c          | 24 +++++------
> >  arch/x86/kvm/mmu/page_track.c   | 74 ++++++++++++++++++++++++++++-----
> >  arch/x86/kvm/mmu/page_track.h   |  3 +-
> >  4 files changed, 78 insertions(+), 25 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index d271ba20a0b2..c35641add93c 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1503,6 +1503,8 @@ struct kvm_arch {
> >        */
> >  #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
> >       struct kvm_mmu_memory_cache split_desc_cache;
> > +
> > +     bool page_write_tracking_enabled;
>
> Rather than a generic page_write_tracking_enabled, I think it makes sense to
> explicitly track if there are *external* write tracking users.  One could argue
> it makes the total tracking *too* fine grained, but I think it would be helpful
> for readers to know when KVM itself is using write tracking (shadow paging) versus
> when KVM has write tracking enabled, but has not allocated rmaps (external write
> tracking user).
>
> That way, kernels with CONFIG_KVM_EXTERNAL_WRITE_TRACKING=n don't need to check
> the bool (though they'll still check kvm_shadow_root_allocated()).  And as a
> bonus, the diff is quite a bit smaller.
>

Your patch looks good to me. I ran kvm and gvisor tests and didn't
find any issues. I sent it as v3:
https://lkml.org/lkml/2024/2/13/1181

I didn't make any changes, so feel free to change the author.

Thanks for the help.

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d271ba20a0b2..c35641add93c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1503,6 +1503,8 @@ struct kvm_arch {
 	 */
 #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
 	struct kvm_mmu_memory_cache split_desc_cache;
+
+	bool page_write_tracking_enabled;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2d6cdeab1f8a..e45fca3156de 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3755,29 +3755,29 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm)
 	 * Check if anything actually needs to be allocated, e.g. all metadata
 	 * will be allocated upfront if TDP is disabled.
 	 */
-	if (kvm_memslots_have_rmaps(kvm) &&
-	    kvm_page_track_write_tracking_enabled(kvm))
+	r = kvm_page_track_write_tracking_enable(kvm);
+	if (r)
+		goto out_unlock;
+
+	if (kvm_memslots_have_rmaps(kvm))
 		goto out_success;
 
 	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 		kvm_for_each_memslot(slot, bkt, slots) {
 			/*
-			 * Both of these functions are no-ops if the target is
-			 * already allocated, so unconditionally calling both
-			 * is safe.  Intentionally do NOT free allocations on
-			 * failure to avoid having to track which allocations
-			 * were made now versus when the memslot was created.
-			 * The metadata is guaranteed to be freed when the slot
-			 * is freed, and will be kept/used if userspace retries
+			 * This function is a no-op if the target is already
+			 * allocated, so unconditionally calling it is safe.
+			 * Intentionally do NOT free allocations on failure to
+			 * avoid having to track which allocations were made
+			 * now versus when the memslot was created.  The
+			 * metadata is guaranteed to be freed when the slot is
+			 * freed, and will be kept/used if userspace retries
 			 * KVM_RUN instead of killing the VM.
 			 */
 			r = memslot_rmap_alloc(slot, slot->npages);
 			if (r)
 				goto out_unlock;
-			r = kvm_page_track_write_tracking_alloc(slot);
-			if (r)
-				goto out_unlock;
 		}
 	}
 
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index c87da11f3a04..a4790b0a6f50 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -20,10 +20,14 @@ 
 #include "mmu_internal.h"
 #include "page_track.h"
 
-bool kvm_page_track_write_tracking_enabled(struct kvm *kvm)
+static bool kvm_page_track_write_tracking_enabled(struct kvm *kvm)
 {
-	return IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING) ||
-	       !tdp_enabled || kvm_shadow_root_allocated(kvm);
+	/*
+	 * Read page_write_tracking_enabled before related pointers. Pairs with
+	 * smp_store_release in kvm_page_track_write_tracking_enable.
+	 */
+	return smp_load_acquire(&kvm->arch.page_write_tracking_enabled) |
+	       !tdp_enabled;
 }
 
 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot)
@@ -32,8 +36,8 @@ void kvm_page_track_free_memslot(struct kvm_memory_slot *slot)
 	slot->arch.gfn_write_track = NULL;
 }
 
-static int __kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot,
-						 unsigned long npages)
+static int __kvm_write_tracking_alloc(struct kvm_memory_slot *slot,
+				      unsigned long npages)
 {
 	const size_t size = sizeof(*slot->arch.gfn_write_track);
 
@@ -51,12 +55,7 @@ int kvm_page_track_create_memslot(struct kvm *kvm,
 	if (!kvm_page_track_write_tracking_enabled(kvm))
 		return 0;
 
-	return __kvm_page_track_write_tracking_alloc(slot, npages);
-}
-
-int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot)
-{
-	return __kvm_page_track_write_tracking_alloc(slot, slot->npages);
+	return __kvm_write_tracking_alloc(slot, npages);
 }
 
 static void update_gfn_write_track(struct kvm_memory_slot *slot, gfn_t gfn,
@@ -153,6 +152,50 @@ int kvm_page_track_init(struct kvm *kvm)
 	return init_srcu_struct(&head->track_srcu);
 }
 
+/*
+ * kvm_page_track_write_tracking_enable enables the write tracking mechanism.
+ * If it has already been enabled, this function is a no-op.
+ *
+ * The caller must hold kvm->slots_arch_lock.
+ */
+int kvm_page_track_write_tracking_enable(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int r = 0, i, bkt;
+
+	lockdep_assert_held(&kvm->slots_arch_lock);
+
+	if (kvm_page_track_write_tracking_enabled(kvm))
+		return 0;
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(slot, bkt, slots) {
+			/*
+			 * This function is a no-op if the target is already
+			 * allocated, so unconditionally calling it is safe.
+			 * Intentionally do NOT free allocations on failure to
+			 * avoid having to track which allocations were made
+			 * now versus when the memslot was created.  The
+			 * metadata is guaranteed to be freed when the slot is
+			 * freed, and will be kept/used if userspace retries
+			 * KVM_RUN instead of killing the VM.
+			 */
+			r = __kvm_write_tracking_alloc(slot, slot->npages);
+			if (r)
+				goto err;
+		}
+	}
+	/*
+	 * Ensure that page_write_tracking_enabled becomes true strictly after
+	 * all the related pointers are set.
+	 */
+	smp_store_release(&kvm->arch.page_write_tracking_enabled, true);
+err:
+	return r;
+}
+
 /*
  * register the notifier so that event interception for the tracked guest
  * pages can be received.
@@ -161,12 +204,21 @@ int kvm_page_track_register_notifier(struct kvm *kvm,
 				     struct kvm_page_track_notifier_node *n)
 {
 	struct kvm_page_track_notifier_head *head;
+	int r;
 
 	if (!kvm || kvm->mm != current->mm)
 		return -ESRCH;
 
 	kvm_get_kvm(kvm);
 
+	mutex_lock(&kvm->slots_arch_lock);
+	r = kvm_page_track_write_tracking_enable(kvm);
+	mutex_unlock(&kvm->slots_arch_lock);
+	if (r) {
+		kvm_put_kvm(kvm);
+		return r;
+	}
+
 	head = &kvm->arch.track_notifier_head;
 
 	write_lock(&kvm->mmu_lock);
diff --git a/arch/x86/kvm/mmu/page_track.h b/arch/x86/kvm/mmu/page_track.h
index d4d72ed999b1..f8984d163b2c 100644
--- a/arch/x86/kvm/mmu/page_track.h
+++ b/arch/x86/kvm/mmu/page_track.h
@@ -7,8 +7,7 @@ 
 #include <asm/kvm_page_track.h>
 
 
-bool kvm_page_track_write_tracking_enabled(struct kvm *kvm);
-int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot);
+int kvm_page_track_write_tracking_enable(struct kvm *kvm);
 
 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
 int kvm_page_track_create_memslot(struct kvm *kvm,