[00/15] KVM: arm64: Improvements to GICv3 LPI injection

Message ID 20240124204909.105952-1-oliver.upton@linux.dev

Message

Oliver Upton Jan. 24, 2024, 8:48 p.m. UTC
The unfortunate reality is that there are increasingly large systems
shipping today without support for GICv4 vLPI injection. Serialization
in KVM's LPI routing/injection code has been a significant bottleneck
for VMs on these machines when under a high load of LPIs (e.g. a
multi-queue NIC).

Even though the long-term solution is quite clearly direct
injection, we really ought to do something about the LPI scaling
issues within KVM.

This series aims to improve the performance of LPI routing/injection in
KVM by moving readers of LPI configuration data away from the
lpi_list_lock in favor of using RCU.

Patches 1-5 change out the representation of LPIs in KVM from a
linked-list to an xarray. While not strictly necessary for making the
locking improvements, this seems to be an opportune time to switch to a
data structure that can actually be indexed.
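
To give a rough feel for the direction, here is a minimal, illustrative
sketch (the type, field, and helper names below are made up for the
example and are not the actual diff):

#include <linux/kref.h>
#include <linux/rcupdate.h>
#include <linux/xarray.h>

/*
 * Illustrative stand-in for struct vgic_irq; the refcount and rcu head
 * only become relevant in the later patches of the series.
 */
struct demo_lpi {
	u32		intid;
	struct kref	refcount;
	struct rcu_head	rcu;
};

static DEFINE_XARRAY(demo_lpi_xa);	/* INTID -> struct demo_lpi * */

static struct demo_lpi *demo_lpi_lookup(u32 intid)
{
	/* Indexed lookup instead of walking a linked-list */
	return xa_load(&demo_lpi_xa, intid);
}

static int demo_lpi_insert(struct demo_lpi *lpi)
{
	/* xa_store() returns the previous entry or an xa_err() pointer */
	return xa_err(xa_store(&demo_lpi_xa, lpi->intid, lpi,
			       GFP_KERNEL_ACCOUNT));
}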

Patches 6-10 transition vgic_get_lpi() and vgic_put_lpi() away from
taking the lpi_list_lock in favor of using RCU for protection. Note that
this requires some rework to the way references are taken on LPIs and
how reclaim works to be RCU safe.
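
Concretely, the get/put path ends up looking something like the sketch
below (continuing the illustrative demo_lpi type from above; the real
vgic helpers differ in detail, e.g. the entry must also be unpublished
from the xarray before the final free):

static struct demo_lpi *demo_get_lpi(u32 intid)
{
	struct demo_lpi *lpi;

	rcu_read_lock();
	lpi = demo_lpi_lookup(intid);
	if (lpi && !kref_get_unless_zero(&lpi->refcount))
		lpi = NULL;	/* lost the race against the final put */
	rcu_read_unlock();

	return lpi;
}

static void demo_lpi_release(struct kref *ref)
{
	struct demo_lpi *lpi = container_of(ref, struct demo_lpi, refcount);

	kfree_rcu(lpi, rcu);	/* wait out RCU readers before freeing */
}

static void demo_put_lpi(struct demo_lpi *lpi)
{
	kref_put(&lpi->refcount, demo_lpi_release);
}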

Lastly, patches 11-15 rework the LRU policy on the LPI translation cache
to not require moving elements in the linked-list and take advantage of
this to make it an rculist readable outside of the lpi_list_lock.
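
Again as an illustrative sketch (reusing demo_lpi from above; structure
and field names are made up), the cache lookup and victim selection end
up shaped roughly like this:

#include <linux/atomic.h>
#include <linux/rculist.h>

struct demo_translation_entry {
	struct list_head	entry;
	u32			devid, eventid;
	struct demo_lpi __rcu	*irq;
	atomic64_t		usage_count;
};

static struct demo_lpi *demo_cache_lookup(struct list_head *cache,
					  u32 devid, u32 eventid)
{
	struct demo_translation_entry *cte;
	struct demo_lpi *irq = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(cte, cache, entry) {
		if (cte->devid != devid || cte->eventid != eventid)
			continue;

		/* A hit bumps a counter; no list_move() under a lock */
		atomic64_inc(&cte->usage_count);
		irq = rcu_dereference(cte->irq);
		if (irq && !kref_get_unless_zero(&irq->refcount))
			irq = NULL;
		break;
	}
	rcu_read_unlock();

	return irq;
}

/*
 * Eviction, done under the cache's write-side lock: pick the entry with
 * the lowest usage count rather than the tail of an LRU list.
 */
static struct demo_translation_entry *demo_cache_victim(struct list_head *cache)
{
	struct demo_translation_entry *cte, *victim = NULL;

	list_for_each_entry(cte, cache, entry) {
		if (!victim || atomic64_read(&cte->usage_count) <
			       atomic64_read(&victim->usage_count))
			victim = cte;
	}

	return victim;
}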

All of this was tested on top of v6.8-rc1. Apologies if any of the
changelogs are a bit too light, I'm happy to rework those further in
subsequent revisions.

I would've liked to have benchmark data showing the improvement on top
of upstream with this series, but I'm currently having issues with our
internal infrastructure and upstream kernels. However, this series has
been found to yield a near-2x performance improvement in redis-memtier [*]
benchmarks on our kernel tree.

[*] https://github.com/RedisLabs/memtier_benchmark

Oliver Upton (15):
  KVM: arm64: vgic: Store LPIs in an xarray
  KVM: arm64: vgic: Use xarray to find LPI in vgic_get_lpi()
  KVM: arm64: vgic-v3: Iterate the xarray to find pending LPIs
  KVM: arm64: vgic-its: Walk the LPI xarray in vgic_copy_lpi_list()
  KVM: arm64: vgic: Get rid of the LPI linked-list
  KVM: arm64: vgic: Use atomics to count LPIs
  KVM: arm64: vgic: Free LPI vgic_irq structs in an RCU-safe manner
  KVM: arm64: vgic: Rely on RCU protection in vgic_get_lpi()
  KVM: arm64: vgic: Ensure the irq refcount is nonzero when taking a ref
  KVM: arm64: vgic: Don't acquire the lpi_list_lock in vgic_put_irq()
  KVM: arm64: vgic-its: Lazily allocate LPI translation cache
  KVM: arm64: vgic-its: Pick cache victim based on usage count
  KVM: arm64: vgic-its: Protect cached vgic_irq pointers with RCU
  KVM: arm64: vgic-its: Treat the LPI translation cache as an rculist
  KVM: arm64: vgic-its: Rely on RCU to protect translation cache reads

 arch/arm64/kvm/vgic/vgic-debug.c |   2 +-
 arch/arm64/kvm/vgic/vgic-init.c  |   7 +-
 arch/arm64/kvm/vgic/vgic-its.c   | 190 ++++++++++++++++++-------------
 arch/arm64/kvm/vgic/vgic-v3.c    |   3 +-
 arch/arm64/kvm/vgic/vgic.c       |  56 +++------
 arch/arm64/kvm/vgic/vgic.h       |  12 +-
 include/kvm/arm_vgic.h           |   9 +-
 7 files changed, 146 insertions(+), 133 deletions(-)


base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d

Comments

Marc Zyngier Jan. 25, 2024, 11:02 a.m. UTC | #1
Hi Oliver,

On Wed, 24 Jan 2024 20:48:54 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
> 
> The unfortunate reality is that there are increasingly large systems
> shipping today without support for GICv4 vLPI injection. Serialization
> in KVM's LPI routing/injection code has been a significant bottleneck
> for VMs on these machines when under a high load of LPIs (e.g. a
> multi-queue NIC).
> 
> Even though the long-term solution is quite clearly direct
> injection, we really ought to do something about the LPI scaling
> issues within KVM.
> 
> This series aims to improve the performance of LPI routing/injection in
> KVM by moving readers of LPI configuration data away from the
> lpi_list_lock in favor of using RCU.
> 
> Patches 1-5 change out the representation of LPIs in KVM from a
> linked-list to an xarray. While not strictly necessary for making the
> locking improvements, this seems to be an opportune time to switch to a
> data structure that can actually be indexed.
> 
> Patches 6-10 transition vgic_get_lpi() and vgic_put_lpi() away from
> taking the lpi_list_lock in favor of using RCU for protection. Note that
> this requires some rework to the way references are taken on LPIs and
> how reclaim works to be RCU safe.
> 
> Lastly, patches 11-15 rework the LRU policy on the LPI translation cache
> to not require moving elements in the linked-list and take advantage of
> this to make it an rculist readable outside of the lpi_list_lock.

I quite like the overall direction. I've left a few comments here and
there, and will probably get back to it after I try to run some tests
on a big-ish box.

> All of this was tested on top of v6.8-rc1. Apologies if any of the
> changelogs are a bit too light, I'm happy to rework those further in
> subsequent revisions.
> 
> I would've liked to have benchmark data showing the improvement on top
> of upstream with this series, but I'm currently having issues with our
> internal infrastructure and upstream kernels. However, this series has
> been found to yield a near-2x performance improvement in redis-memtier [*]
> benchmarks on our kernel tree.

It'd be really good to have upstream-based numbers, with details of
the actual setup (device assignment? virtio?) so that we can compare
things and even track regressions in the future.

Thanks,

	M.
Oliver Upton Jan. 25, 2024, 3:47 p.m. UTC | #2
On Thu, Jan 25, 2024 at 11:02:01AM +0000, Marc Zyngier wrote:

[...]

> > I would've liked to have benchmark data showing the improvement on top
> > of upstream with this series, but I'm currently having issues with our
> > internal infrastructure and upstream kernels. However, this series has
> > been found to yield a near-2x performance improvement in redis-memtier [*]
> > benchmarks on our kernel tree.
> 
> It'd be really good to have upstream-based numbers, with details of
> the actual setup (device assignment? virtio?) so that we can compare
> things and even track regressions in the future.

Yeah, that sort of thing isn't optional IMO; I just figured that getting
reviews on this would be a bit more productive while I try to recreate
the test correctly on top of upstream.

The test setup I based my "2x" statement on is four 16-vCPU client VMs
talking to one 16-vCPU server VM over NIC VFs assigned to the
respective VMs, with 16 TX + 16 RX queues per NIC. As I'm sure you're
aware, I know damn near nothing about the Redis setup itself, and I'll
need to do a bit of work to translate the thing I was using into a
script.