
[0/6] Improve gfn-to-memslot performance during page faults

Message ID: 20210730223707.4083785-1-dmatlack@google.com

David Matlack July 30, 2021, 10:37 p.m. UTC
This series improves the performance of gfn-to-memslot lookups during
page faults. Ben Gardon originally identified this performance gap and
addressed it in Google's kernel by looking up the memslot once at the
beginning of the page fault and passing the pointer around.

This series takes an alternative approach by introducing a per-vCPU
cache of the least recently used memslot index. This avoids having to
binary search the memslots multiple times during a page fault. Unlike
passing the pointer around, the LRU cache has the additional benefit of
speeding up gfn-to-memslot lookups *across* faults and during spte
prefetching, where the gfn changes.
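
For illustration, here is a minimal sketch of the cached lookup,
assuming the vcpu->lru_slot_index field this series adds and the
existing __gfn_to_memslot() helper. This is just the shape of the idea,
not the exact code from the series:

#include <linux/kvm_host.h>

/*
 * Try the slot index cached in vcpu->lru_slot_index first, and only
 * fall back to the binary search on a miss.
 */
static struct kvm_memory_slot *gfn_to_memslot_cached(struct kvm_vcpu *vcpu,
						     gfn_t gfn)
{
	struct kvm_memslots *slots = kvm_vcpu_memslots(vcpu);
	int idx = vcpu->lru_slot_index;	/* field added by this series */
	struct kvm_memory_slot *slot;

	if (idx >= 0 && idx < slots->used_slots) {
		slot = &slots->memslots[idx];
		if (gfn >= slot->base_gfn &&
		    gfn < slot->base_gfn + slot->npages)
			return slot;	/* cache hit: no binary search */
	}

	/* Cache miss: do the usual O(log n) lookup and refresh the cache. */
	slot = __gfn_to_memslot(slots, gfn);
	if (slot)
		vcpu->lru_slot_index = slot - slots->memslots;

	return slot;
}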

This difference can be seen clearly when looking at the performance of
fast_page_fault when multiple slots are in play:

Metric                        | Baseline     | Pass*    | LRU**
----------------------------- | ------------ | -------- | ----------
Iteration 2 dirty memory time | 2.8s         | 1.6s     | 0.30s

* Pass: Lookup the memslot once per fault and pass it around.
** LRU: Cache the LRU slot per vCPU (i.e. this series).

(Collected via ./dirty_log_perf_test -v64 -x64)

I also plan to send a follow-up series with a version of Ben's patches
that pass the memslot pointer through the page fault handling code
rather than looking it up multiple times. Even when applied on top of
the LRU series, it provides some performance improvement by avoiding a
few extra memory accesses (mainly kvm->memslots[as_id] and
slots->used_slots). But whether that is worth the code churn and
complexity will be a judgment call.

Here is a breakdown of this series:

Patches 1-2 introduce a per-vCPU cache of the least recently used
memslot index.

Patches 3-5 convert existing gfn-to-memslot lookups to use
kvm_vcpu_gfn_to_memslot so that they can leverage the new LRU cache.
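
As a hypothetical example of what those conversions look like (not a
literal hunk from the series):

#include <linux/kvm_host.h>

/* Hypothetical call site showing the shape of the patch 3-5 conversions. */
static void lookup_example(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_memory_slot *slot;

	/* Before: VM-wide lookup that always pays for the binary search. */
	slot = gfn_to_memslot(vcpu->kvm, gfn);

	/* After: vCPU-scoped lookup that can hit the per-vCPU cached index. */
	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);

	(void)slot;
}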

Patch 6 adds support for multiple slots to dirty_log_perf_test, which is
used to generate the performance data in this series.

David Matlack (6):
  KVM: Cache the least recently used slot index per vCPU
  KVM: Avoid VM-wide lru_slot lookup in kvm_vcpu_gfn_to_memslot
  KVM: x86/mmu: Speed up dirty logging in
    tdp_mmu_map_handle_target_level
  KVM: x86/mmu: Leverage vcpu->lru_slot_index for rmap_add and
    rmap_recycle
  KVM: x86/mmu: Rename __gfn_to_rmap to gfn_to_rmap
  KVM: selftests: Support multiple slots in dirty_log_perf_test

 arch/x86/kvm/mmu/mmu.c                        | 54 +++++++------
 arch/x86/kvm/mmu/tdp_mmu.c                    | 15 +++-
 include/linux/kvm_host.h                      | 73 +++++++++++++-----
 .../selftests/kvm/access_tracking_perf_test.c |  2 +-
 .../selftests/kvm/demand_paging_test.c        |  2 +-
 .../selftests/kvm/dirty_log_perf_test.c       | 76 ++++++++++++++++---
 .../selftests/kvm/include/perf_test_util.h    |  2 +-
 .../selftests/kvm/lib/perf_test_util.c        | 20 +++--
 .../kvm/memslot_modification_stress_test.c    |  2 +-
 virt/kvm/kvm_main.c                           | 21 ++++-
 10 files changed, 198 insertions(+), 69 deletions(-)