KVM: arm64: vgic-v3: Limit IPI-ing when accessing GICR_{C,S}ACTIVER0

Message ID 20230112154840.1808595-1-maz@kernel.org (mailing list archive)
State New, archived
Series KVM: arm64: vgic-v3: Limit IPI-ing when accessing GICR_{C,S}ACTIVER0

Commit Message

Marc Zyngier Jan. 12, 2023, 3:48 p.m. UTC
When a vcpu is accessing *its own* redistributor's SGIs/PPIs, there
is no point in doing a stop-the-world operation. Instead, we can
just let the access occur as we do with GICv2.

This is a very minor optimisation for a non-nesting guest, but
a potentially major one for a nesting L1 hypervisor which is
likely to access the emulated registers pretty often (on each
vcpu switch, at the very least).

Reported-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-mmio.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

Comments

Oliver Upton Jan. 23, 2023, 8:29 p.m. UTC | #1
On Thu, 12 Jan 2023 15:48:40 +0000, Marc Zyngier wrote:
> When a vcpu is accessing *its own* redistributor's SGIs/PPIs, there
> is no point in doing a stop-the-world operation. Instead, we can
> just let the access occur as we do with GICv2.
> 
> This is a very minor optimisation for a non-nesting guest, but
> a potentially major one for a nesting L1 hypervisor which is
> likely to access the emulated registers pretty often (on each
> vcpu switch, at the very least).
> 
> [...]

Applied to kvmarm/next, thanks!

[1/1] KVM: arm64: vgic-v3: Limit IPI-ing when accessing GICR_{C,S}ACTIVER0
      https://git.kernel.org/kvmarm/kvmarm/c/fd2b165ce2cc

--
Best,
Oliver

Patch

diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
index b32d434c1d4a..e67b3b2c8044 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio.c
@@ -473,9 +473,10 @@  int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
  * active state can be overwritten when the VCPU's state is synced coming back
  * from the guest.
  *
- * For shared interrupts as well as GICv3 private interrupts, we have to
- * stop all the VCPUs because interrupts can be migrated while we don't hold
- * the IRQ locks and we don't want to be chasing moving targets.
+ * For shared interrupts as well as GICv3 private interrupts accessed from the
+ * non-owning CPU, we have to stop all the VCPUs because interrupts can be
+ * migrated while we don't hold the IRQ locks and we don't want to be chasing
+ * moving targets.
  *
  * For GICv2 private interrupts we don't have to do anything because
  * userspace accesses to the VGIC state already require all VCPUs to be
@@ -484,7 +485,8 @@  int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
  */
 static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+	if ((vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
+	     vcpu != kvm_get_running_vcpu()) ||
 	    intid >= VGIC_NR_PRIVATE_IRQS)
 		kvm_arm_halt_guest(vcpu->kvm);
 }
@@ -492,7 +494,8 @@  static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
 /* See vgic_access_active_prepare */
 static void vgic_access_active_finish(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+	if ((vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 &&
+	     vcpu != kvm_get_running_vcpu()) ||
 	    intid >= VGIC_NR_PRIVATE_IRQS)
 		kvm_arm_resume_guest(vcpu->kvm);
 }
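
For reference, here is a condensed restatement of the check that vgic_access_active_prepare()/vgic_access_active_finish() end up performing with this patch applied. The identifiers (kvm_get_running_vcpu(), VGIC_NR_PRIVATE_IRQS, KVM_DEV_TYPE_ARM_VGIC_V3) are the real ones used in the diff above, but the helper itself, vgic_access_needs_halt(), is hypothetical and not part of the patch:

/*
 * Sketch only, not part of the patch: the combined condition the two
 * functions above now test before halting/resuming the guest.
 */
static bool vgic_access_needs_halt(struct kvm_vcpu *vcpu, u32 intid)
{
	/* Shared interrupts (SPIs) can be migrated behind our back:
	 * always stop the world. */
	if (intid >= VGIC_NR_PRIVATE_IRQS)
		return true;

	/* GICv2 private interrupts: userspace accesses already run
	 * with all vcpus halted, nothing to do here. */
	if (vcpu->kvm->arch.vgic.vgic_model != KVM_DEV_TYPE_ARM_VGIC_V3)
		return false;

	/* GICv3 private interrupts: only halt when the access does
	 * not come from the vcpu that owns the redistributor. */
	return vcpu != kvm_get_running_vcpu();
}

kvm_get_running_vcpu() returns the vcpu currently loaded on this physical CPU (or NULL from a userspace context), so a trap taken by a vcpu on its own GICR_{C,S}ACTIVER0 is the only case that skips the stop-the-world path.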