[2/6] KVM: Avoid VM-wide lru_slot lookup in kvm_vcpu_gfn_to_memslot

Message ID 20210730223707.4083785-3-dmatlack@google.com (mailing list archive)
State New, archived
Series Improve gfn-to-memslot performance during page faults

Commit Message

David Matlack July 30, 2021, 10:37 p.m. UTC
Now that vCPUs keep track of their own LRU slot, there's no good reason
to have them check and update the VM-wide LRU slot. There's no
performance data to motivate this change; however, there are two
rationales:

1. Now that vCPUs have their own LRU slot, there's a potential for a
   double miss (miss the vCPU LRU slot and then miss the VM-wide LRU slot).
   By avoiding the VM-wide LRU slot check we keep the worst case to a
   single miss (see the sketch after this list).

2. Large VMs are likely to have multiple memslots and vCPUs accessing
   different slots. Intuitively, vCPUs will end up thrashing the VM-wide
   LRU slot, decreasing the LRU hit rate for VM-wide operations such as
   mmu notifiers and VM ioctls.
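
As a rough sketch of the first point (hypothetical code, not the actual
KVM implementation; the helper name and the per-vCPU cache field below
are assumptions based on the earlier patches in this series):

	/*
	 * Hypothetical sketch only. Before this patch the worst case on a
	 * vCPU lookup was a double miss:
	 *
	 *   vCPU LRU slot miss -> VM-wide lru_slot miss -> binary search
	 *
	 * After this patch the worst case is a single miss:
	 *
	 *   vCPU LRU slot miss -> binary search
	 */
	static struct kvm_memory_slot *
	sketch_vcpu_lookup(struct kvm_vcpu *vcpu, gfn_t gfn)
	{
		struct kvm_memslots *slots = kvm_vcpu_memslots(vcpu);
		int slot = vcpu->lru_slot;	/* per-vCPU cache (assumed field name) */

		if (slot_contains_gfn(slots, slot, gfn))
			return get_slot(slots, slot);	/* fast path: per-vCPU hit */

		slot = __search_memslots(slots, gfn);	/* one miss: straight to binary search */
		if (slot < 0)
			return NULL;

		vcpu->lru_slot = slot;	/* refill the vCPU cache, not the VM-wide one */
		return get_slot(slots, slot);
	}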

Signed-off-by: David Matlack <dmatlack@google.com>
---
 include/linux/kvm_host.h | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)
Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 320090d5a124..870e1e6fb771 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1220,17 +1220,13 @@  static inline bool slot_contains_gfn(struct kvm_memslots *slots, int slot_index,
 static inline int __search_memslots(struct kvm_memslots *slots, gfn_t gfn)
 {
 	int start = 0, end = slots->used_slots;
-	int slot = atomic_read(&slots->lru_slot);
 	struct kvm_memory_slot *memslots = slots->memslots;
 
 	if (unlikely(!slots->used_slots))
 		return -1;
 
-	if (slot_contains_gfn(slots, slot, gfn))
-		return slot;
-
 	while (start < end) {
-		slot = start + (end - start) / 2;
+		int slot = start + (end - start) / 2;
 
 		if (gfn >= memslots[slot].base_gfn)
 			end = slot;
@@ -1238,10 +1234,8 @@  static inline int __search_memslots(struct kvm_memslots *slots, gfn_t gfn)
 			start = slot + 1;
 	}
 
-	if (slot_contains_gfn(slots, start, gfn)) {
-		atomic_set(&slots->lru_slot, start);
+	if (slot_contains_gfn(slots, start, gfn))
 		return start;
-	}
 
 	return -1;
 }
@@ -1255,8 +1249,16 @@  static inline int __search_memslots(struct kvm_memslots *slots, gfn_t gfn)
 static inline struct kvm_memory_slot *
 search_memslots(struct kvm_memslots *slots, gfn_t gfn)
 {
-	int slot_index = __search_memslots(slots, gfn);
+	int slot_index = atomic_read(&slots->lru_slot);
+
+	if (slot_contains_gfn(slots, slot_index, gfn))
+		return get_slot(slots, slot_index);
+
+	slot_index = __search_memslots(slots, gfn);
+	if (slot_index < 0)
+		return NULL;
 
+	atomic_set(&slots->lru_slot, slot_index);
 	return get_slot(slots, slot_index);
 }
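
For contrast, VM-wide paths keep going through search_memslots() and so
still populate and benefit from slots->lru_slot, now without vCPU faults
thrashing it. A minimal illustrative caller (hypothetical function name;
the real users are gfn_to_memslot()-style paths such as mmu notifiers
and VM ioctls):

	/* Illustrative only: VM-wide lookups still use the VM-wide LRU slot. */
	static struct kvm_memory_slot *
	sketch_vm_wide_lookup(struct kvm *kvm, gfn_t gfn)
	{
		struct kvm_memslots *slots = kvm_memslots(kvm);

		/* Checks slots->lru_slot first, then binary search + LRU update. */
		return search_memslots(slots, gfn);
	}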