Message ID | 20230518100914.2837292-2-jean-philippe@linaro.org |
---|---|
State | New, archived |
Series | KVM: arm64: vgic: Locking fixes |
On Thu, May 18, 2023 at 11:09:15AM +0100, Jean-Philippe Brucker wrote:
> Lockdep reports a circular lock dependency between the srcu and the
> config_lock:
>
> [...]
>
> Fixes: f00327731131 ("KVM: arm64: Use config_lock to protect vgic state")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>

Marc, can you squash this in when you apply the series? I get a compiler
warning since 'ret' is now unused.

diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
index ce3d17463c6b..ff558c05e990 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio.c
@@ -1096,7 +1096,6 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
 			     enum vgic_type type)
 {
 	struct vgic_io_device *io_device = &kvm->arch.vgic.dist_iodev;
-	int ret = 0;
 	unsigned int len;
 
 	switch (type) {
Nathan,

First and foremost, thanks for testing this.

On Tue, Jun 06, 2023 at 03:15:25PM -0700, Nathan Chancellor wrote:
> My apologies if this has been addressed or reported somewhere, I did a
> search of lore.kernel.org and browsed the kvmarm archives and did not
> see anything.

This is news to me, but even if it had already been reported there's
nothing wrong with bumping the issue. Makes it hard for us to bury our
heads in the sand :)

> After this change landed in 6.4-rc5 as commit 59112e9c390b
> ("KVM: arm64: vgic: Fix a circular locking issue"), my QEMU Fedora VM on
> my SolidRun Honeycomb fails to get to GRUB.

[...]

> I built a kernel with CONFIG_PROVE_LOCKING=y but I do not see any splats
> while this is occurring. Additionally, neither my Raspberry Pi 4 or my
> Ampere Altra system have any issues, so it is possible this could be a
> platform specific problem. I am more than happy to provide any
> additional information and test kernels and patches to help get to the
> bottom of this. My kernel configuration is attached.

I was unable to reproduce the issues you're seeing on 6.4-rc5, but I
don't have any different machines from you available atm. Based on
your description your VM was able to do _something_, since it sounds
like a few escape codes got out over serial... I'm wondering if you're
getting wedged somewhere on a VGIC MMIO access.

We don't have a precise tracepoint for VGIC accesses, but kvm:kvm_mmio
should do the trick. So, given that you're the lucky winner at
reproducing this bug right now, do you mind collecting a dump from that
tracepoint and sharing the access that happens before your VM gets
wedged?

Curious if Marc has any additional insight, since (unsurprisingly) he
has a lot more experience in dealing with the GIC than I. In the
meantime I'll stare at the locking flows and see if anything stands
out.
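The collection Oliver asks for can be scripted roughly as below. This is a hedged sketch, not a tested recipe: the tracefs mount point is assumed to be /sys/kernel/tracing (older distros use /sys/kernel/debug/tracing), and the step where the VM is started and allowed to wedge is left as a comment.

```shell
#!/bin/sh
# Enable the kvm:kvm_mmio tracepoint, reproduce the hang, then dump the
# tail of the trace buffer. Skips cleanly when tracefs is unavailable or
# we lack the privileges to write to it.
TRACEFS=/sys/kernel/tracing
EVENT="$TRACEFS/events/kvm/kvm_mmio/enable"
if [ ! -w "$EVENT" ]; then
	echo "kvm:kvm_mmio tracepoint not writable, skipping" >&2
	exit 0
fi
echo 1 > "$EVENT"
# ... start the VM here and wait for it to wedge ...
tail -n 50 "$TRACEFS/trace"	# the last MMIO accesses before the hang
echo 0 > "$EVENT"
```

The last few lines of the dump would show the guest-physical address and size of the access that precedes the hang.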
On Wed, 07 Jun 2023 06:23:24 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Nathan,
>
> First and foremost, thanks for testing this.
>
> On Tue, Jun 06, 2023 at 03:15:25PM -0700, Nathan Chancellor wrote:
> > My apologies if this has been addressed or reported somewhere, I did a
> > search of lore.kernel.org and browsed the kvmarm archives and did not
> > see anything.
>
> This is news to me, but even if it had already been reported there's
> nothing wrong with bumping the issue. Makes it hard for us to bury our
> heads in the sand :)

AFAICT, this is the very first report of this problem.

> [...]
>
> Curious if Marc has any additional insight, since (unsurprisingly) he
> has a lot more experience in dealing with the GIC than I. In the
> meantime I'll stare at the locking flows and see if anything stands
> out.

RPI4 is GICv2 nVHE, the NXP machine is GICv3 nVHE, and the Altra is
GICv3 VHE. Not sure this is relevant here, but that's one data point.

Having been able to start the guest means that we should have fully
initialised the GIC. So a lockup is likely to be an interaction with
the GIC emulation itself, either because we failed to release a lock
during initialisation, or due to some logic error in the GIC emulation
(which is not necessarily MMIO...).

I've just given 6.4-rc5 a go on my Synquacer, which is the closest
thing I have to Nathan's NXP box, and I can't spot anything odd.

It would also help to get access to the EDK2 build. It wouldn't be the
first time that a change in KVM breaks some EDK2 behaviour.

Finally, on top of the traces that Oliver asked above, looking at
where the QEMU vcpu threads are would be interesting (I assume they'd
be sleeping in the kernel).

Thanks,

M.
On Wed, Jun 07, 2023 at 09:37:08AM +0100, Marc Zyngier wrote:
> [...]
>
> I've just given 6.4-rc5 a go on my Synquacer, which is the closest
> thing I have to Nathan's NXP box, and I can't spot anything odd.
>
> It would also help to get access to the EDK2 build. It wouldn't be the
> first time that a change in KVM breaks some EDK2 behaviour.

I found a build here:
https://koji.fedoraproject.org/koji/buildinfo?buildID=2204660
edk2-aarch64-20230301gitf80f052277c8-31.fc39.noarch.rpm
usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw

Haven't managed to reproduce the issue yet, but I can only test with QEMU
emulating the cortex-a72 and GICv3 at the moment, and I still need to
reproduce the VMM command-line exactly. I think it would be helpful to get
the exact grub image as well; right now I'm using
Fedora-Server-KVM-38-1.6.aarch64.qcow2

Thanks,
Jean

> Finally, on top of the traces that Oliver asked above, looking at
> where the QEMU vcpu threads are would be interesting (I assume they'd
> be sleeping in the kernel).
>
> Thanks,
>
> M.
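A hypothetical reproducer along these lines is sketched below. Running a GICv2 guest on a GICv3 host (`gic-version=2` on QEMU's `virt` machine) exercises the GICv2-on-GICv3 compat path that turns out to matter later in the thread. The firmware and disk image names are just the ones mentioned in these messages; the exact Fedora VMM command line is not known here.

```shell
#!/bin/sh
# Boot a GICv2 guest under KVM on the host's in-kernel irqchip.
# Skips when KVM or the images are unavailable.
BIOS=QEMU_EFI-silent-pflash.raw
DISK=Fedora-Server-KVM-38-1.6.aarch64.qcow2
if ! command -v qemu-system-aarch64 >/dev/null 2>&1 || \
   [ ! -r /dev/kvm ] || [ ! -r "$BIOS" ] || [ ! -r "$DISK" ]; then
	echo "KVM, firmware or disk image unavailable, skipping" >&2
	exit 0
fi
exec qemu-system-aarch64 \
	-M virt,accel=kvm,gic-version=2 \
	-cpu host -m 2G -nographic \
	-bios "$BIOS" \
	-drive if=virtio,file="$DISK"
```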
On Wed, 07 Jun 2023 14:28:19 +0100,
Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:
> [...]
>
> I found a build here:
> https://koji.fedoraproject.org/koji/buildinfo?buildID=2204660
> edk2-aarch64-20230301gitf80f052277c8-31.fc39.noarch.rpm
> usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw
>
> [...]

I think I managed to trigger the sucker by using the GICv2-on-GICv3
feature, which Nathan's HW supports. The vcpu is blocked in WFI, and
the timer interrupt is never made pending.

Interestingly, the vgic state reads:

 TYP   ID TGT_ID PLAEHCG     HWID   TARGET SRC PRI VCPU_ID
 [...]
 PPI   27      0 0100110       27        1   0   0      -1

meaning that we don't see the timer interrupt being enabled (PLAEHCG
reads as Not-Pending, Line-Level-high, Not-Active, Not-Enabled,
HW-deactivation, Level, Group0), despite the timer having raised the
interrupt line (the input level is high).

So I'm changing tack altogether. This isn't a locking issue, but a
distributor issue! It feels like we're registering a GICv3 distributor
instead of a GICv2, so the register map is fscked-up from a guest
perspective.

As it turns out, this is indeed exactly what this patch does, by
always using the host's GIC type instead of what was requested for the
guest. This works just fine on RPI4 (GICv2) and Altra (GICv3 without
compat), but totally fails on SQ (GICv3 with GICv2-compat). I expect
that this is the issue Nathan is facing.

I came up with the following patch, which fixes it for me on my
SynQuacer. Nathan, could you please try it with your config?

Thanks,

M.

From f42d872b2796a2a3e719fdc51cc206aa274bf0ed Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Wed, 7 Jun 2023 15:38:44 +0100
Subject: [PATCH] KVM: arm64: Restore GICv2-on-GICv3 functionality

When reworking the vgic locking, the vgic distributor registration
got simplified, which was a very good cleanup. But just a tad too
radical, as we now register the *native* vgic only, ignoring the
GICv2-on-GICv3 that allows pre-historic VMs (or so I thought)
to run.

As it turns out, QEMU still defaults to GICv2 in some cases, and
this breaks Nathan's setup!

Fix it by propagating the *requested* vgic type rather than the
host's version.

Fixes: 59112e9c390b ("KVM: arm64: vgic: Fix a circular locking issue")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-init.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index 6eafc2c45cfc..c8c3cb812783 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -446,6 +446,7 @@ int vgic_lazy_init(struct kvm *kvm)
 int kvm_vgic_map_resources(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
+	enum vgic_type type;
 	gpa_t dist_base;
 	int ret = 0;
 
@@ -460,10 +461,13 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 	if (!irqchip_in_kernel(kvm))
 		goto out;
 
-	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2)
+	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) {
 		ret = vgic_v2_map_resources(kvm);
-	else
+		type = VGIC_V2;
+	} else {
 		ret = vgic_v3_map_resources(kvm);
+		type = VGIC_V3;
+	}
 
 	if (ret) {
 		__kvm_vgic_destroy(kvm);
@@ -473,8 +477,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 	dist_base = dist->vgic_dist_base;
 	mutex_unlock(&kvm->arch.config_lock);
 
-	ret = vgic_register_dist_iodev(kvm, dist_base,
-				       kvm_vgic_global_state.type);
+	ret = vgic_register_dist_iodev(kvm, dist_base, type);
 	if (ret) {
 		kvm_err("Unable to register VGIC dist MMIO regions\n");
 		kvm_vgic_destroy(kvm);
On Wed, Jun 07, 2023 at 04:04:53PM +0100, Marc Zyngier wrote:
> [...]
>
> I came up with the following patch, which fixes it for me on my
> SynQuacer. Nathan, could you please try it with your config?

Works like a charm :)

Tested-by: Nathan Chancellor <nathan@kernel.org>

Thanks a lot for quickly getting to the bottom of this!
On Wed, 07 Jun 2023 16:29:25 +0100,
Nathan Chancellor <nathan@kernel.org> wrote:
>
> On Wed, Jun 07, 2023 at 04:04:53PM +0100, Marc Zyngier wrote:
> > I came up with the following patch, which fixes it for me on my
> > SynQuacer. Nathan, could you please try it with your config?
>
> Works like a charm :)
>
> Tested-by: Nathan Chancellor <nathan@kernel.org>
>
> Thanks a lot for quickly getting to the bottom of this!

Many thanks for the report and the testing! I've pushed this out to
fixes.

M.
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index 9d42c7cb2b588..c199ba2f192ef 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -235,9 +235,9 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
 	 * KVM io device for the redistributor that belongs to this VCPU.
 	 */
 	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-		mutex_lock(&vcpu->kvm->arch.config_lock);
+		mutex_lock(&vcpu->kvm->slots_lock);
 		ret = vgic_register_redist_iodev(vcpu);
-		mutex_unlock(&vcpu->kvm->arch.config_lock);
+		mutex_unlock(&vcpu->kvm->slots_lock);
 	}
 	return ret;
 }
@@ -446,11 +446,13 @@ int vgic_lazy_init(struct kvm *kvm)
 int kvm_vgic_map_resources(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
+	gpa_t dist_base;
 	int ret = 0;
 
 	if (likely(vgic_ready(kvm)))
 		return 0;
 
+	mutex_lock(&kvm->slots_lock);
 	mutex_lock(&kvm->arch.config_lock);
 	if (vgic_ready(kvm))
 		goto out;
@@ -463,13 +465,26 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 	else
 		ret = vgic_v3_map_resources(kvm);
 
-	if (ret)
+	if (ret) {
 		__kvm_vgic_destroy(kvm);
-	else
-		dist->ready = true;
+		goto out;
+	}
+	dist->ready = true;
+	dist_base = dist->vgic_dist_base;
+	mutex_unlock(&kvm->arch.config_lock);
+
+	ret = vgic_register_dist_iodev(kvm, dist_base,
+				       kvm_vgic_global_state.type);
+	if (ret) {
+		kvm_err("Unable to register VGIC dist MMIO regions\n");
+		kvm_vgic_destroy(kvm);
+	}
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
 out:
 	mutex_unlock(&kvm->arch.config_lock);
+	mutex_unlock(&kvm->slots_lock);
 	return ret;
 }
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 35cfa268fd5de..212b73a715c1c 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -102,7 +102,11 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
 	if (get_user(addr, uaddr))
 		return -EFAULT;
 
-	mutex_lock(&kvm->arch.config_lock);
+	/*
+	 * Since we can't hold config_lock while registering the redistributor
+	 * iodevs, take the slots_lock immediately.
+	 */
+	mutex_lock(&kvm->slots_lock);
 	switch (attr->attr) {
 	case KVM_VGIC_V2_ADDR_TYPE_DIST:
 		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
@@ -182,6 +186,7 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
 	if (r)
 		goto out;
 
+	mutex_lock(&kvm->arch.config_lock);
 	if (write) {
 		r = vgic_check_iorange(kvm, *addr_ptr, addr, alignment, size);
 		if (!r)
@@ -189,9 +194,10 @@ static int kvm_vgic_addr(struct kvm *kvm, struct kvm_device_attr *attr, bool wri
 	} else {
 		addr = *addr_ptr;
 	}
+	mutex_unlock(&kvm->arch.config_lock);
 
 out:
-	mutex_unlock(&kvm->arch.config_lock);
+	mutex_unlock(&kvm->slots_lock);
 
 	if (!r && !write)
 		r = put_user(addr, uaddr);
diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
index 472b18ac92a24..188d2187eede9 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
@@ -769,10 +769,13 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
 	struct vgic_io_device *rd_dev = &vcpu->arch.vgic_cpu.rd_iodev;
 	struct vgic_redist_region *rdreg;
 	gpa_t rd_base;
-	int ret;
+	int ret = 0;
+
+	lockdep_assert_held(&kvm->slots_lock);
+	mutex_lock(&kvm->arch.config_lock);
 
 	if (!IS_VGIC_ADDR_UNDEF(vgic_cpu->rd_iodev.base_addr))
-		return 0;
+		goto out_unlock;
 
 	/*
 	 * We may be creating VCPUs before having set the base address for the
@@ -782,10 +785,12 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
 	 */
 	rdreg = vgic_v3_rdist_free_slot(&vgic->rd_regions);
 	if (!rdreg)
-		return 0;
+		goto out_unlock;
 
-	if (!vgic_v3_check_base(kvm))
-		return -EINVAL;
+	if (!vgic_v3_check_base(kvm)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 
 	vgic_cpu->rdreg = rdreg;
 	vgic_cpu->rdreg_index = rdreg->free_index;
@@ -799,16 +804,20 @@ int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
 	rd_dev->nr_regions = ARRAY_SIZE(vgic_v3_rd_registers);
 	rd_dev->redist_vcpu = vcpu;
 
-	mutex_lock(&kvm->slots_lock);
+	mutex_unlock(&kvm->arch.config_lock);
+
 	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, rd_base,
 				      2 * SZ_64K, &rd_dev->dev);
-	mutex_unlock(&kvm->slots_lock);
-
 	if (ret)
 		return ret;
 
+	/* Protected by slots_lock */
 	rdreg->free_index++;
 	return 0;
+
+out_unlock:
+	mutex_unlock(&kvm->arch.config_lock);
+	return ret;
 }
 
 static void vgic_unregister_redist_iodev(struct kvm_vcpu *vcpu)
@@ -834,12 +843,10 @@ static int vgic_register_all_redist_iodevs(struct kvm *kvm)
 		/* The current c failed, so iterate over the previous ones. */
 		int i;
 
-		mutex_lock(&kvm->slots_lock);
 		for (i = 0; i < c; i++) {
 			vcpu = kvm_get_vcpu(kvm, i);
 			vgic_unregister_redist_iodev(vcpu);
 		}
-		mutex_unlock(&kvm->slots_lock);
 	}
 
 	return ret;
@@ -938,7 +945,9 @@ int vgic_v3_set_redist_base(struct kvm *kvm, u32 index, u64 addr, u32 count)
 {
 	int ret;
 
+	mutex_lock(&kvm->arch.config_lock);
 	ret = vgic_v3_alloc_redist_region(kvm, index, addr, count);
+	mutex_unlock(&kvm->arch.config_lock);
 	if (ret)
 		return ret;
 
@@ -950,8 +959,10 @@ int vgic_v3_set_redist_base(struct kvm *kvm, u32 index, u64 addr, u32 count)
 	if (ret) {
 		struct vgic_redist_region *rdreg;
 
+		mutex_lock(&kvm->arch.config_lock);
 		rdreg = vgic_v3_rdist_region_from_index(kvm, index);
 		vgic_v3_free_redist_region(rdreg);
+		mutex_unlock(&kvm->arch.config_lock);
 		return ret;
 	}
diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c
index 1939c94e0b248..ce3d17463c6bc 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio.c
@@ -1114,10 +1114,6 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
 	io_device->iodev_type = IODEV_DIST;
 	io_device->redist_vcpu = NULL;
 
-	mutex_lock(&kvm->slots_lock);
-	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist_base_address,
-				      len, &io_device->dev);
-	mutex_unlock(&kvm->slots_lock);
-
-	return ret;
+	return kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist_base_address,
+				       len, &io_device->dev);
 }
diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
index 645648349c99b..7e9cdb78f7ce8 100644
--- a/arch/arm64/kvm/vgic/vgic-v2.c
+++ b/arch/arm64/kvm/vgic/vgic-v2.c
@@ -312,12 +312,6 @@ int vgic_v2_map_resources(struct kvm *kvm)
 		return ret;
 	}
 
-	ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V2);
-	if (ret) {
-		kvm_err("Unable to register VGIC MMIO regions\n");
-		return ret;
-	}
-
 	if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
 		ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
 					    kvm_vgic_global_state.vcpu_base,
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 469d816f356f3..76af07e66d731 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -539,7 +539,6 @@ int vgic_v3_map_resources(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 	struct kvm_vcpu *vcpu;
-	int ret = 0;
 	unsigned long c;
 
 	kvm_for_each_vcpu(c, vcpu, kvm) {
@@ -569,12 +568,6 @@ int vgic_v3_map_resources(struct kvm *kvm)
 		return -EBUSY;
 	}
 
-	ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V3);
-	if (ret) {
-		kvm_err("Unable to register VGICv3 dist MMIO regions\n");
-		return ret;
-	}
-
 	if (kvm_vgic_global_state.has_gicv4_1)
 		vgic_v4_configure_vsgis(kvm);
Lockdep reports a circular lock dependency between the srcu and the
config_lock:

[  262.179917] -> #1 (&kvm->srcu){.+.+}-{0:0}:
[  262.182010]        __synchronize_srcu+0xb0/0x224
[  262.183422]        synchronize_srcu_expedited+0x24/0x34
[  262.184554]        kvm_io_bus_register_dev+0x324/0x50c
[  262.185650]        vgic_register_redist_iodev+0x254/0x398
[  262.186740]        vgic_v3_set_redist_base+0x3b0/0x724
[  262.188087]        kvm_vgic_addr+0x364/0x600
[  262.189189]        vgic_set_common_attr+0x90/0x544
[  262.190278]        vgic_v3_set_attr+0x74/0x9c
[  262.191432]        kvm_device_ioctl+0x2a0/0x4e4
[  262.192515]        __arm64_sys_ioctl+0x7ac/0x1ba8
[  262.193612]        invoke_syscall.constprop.0+0x70/0x1e0
[  262.195006]        do_el0_svc+0xe4/0x2d4
[  262.195929]        el0_svc+0x44/0x8c
[  262.196917]        el0t_64_sync_handler+0xf4/0x120
[  262.198238]        el0t_64_sync+0x190/0x194
[  262.199224]
[  262.199224] -> #0 (&kvm->arch.config_lock){+.+.}-{3:3}:
[  262.201094]        __lock_acquire+0x2b70/0x626c
[  262.202245]        lock_acquire+0x454/0x778
[  262.203132]        __mutex_lock+0x190/0x8b4
[  262.204023]        mutex_lock_nested+0x24/0x30
[  262.205100]        vgic_mmio_write_v3_misc+0x5c/0x2a0
[  262.206178]        dispatch_mmio_write+0xd8/0x258
[  262.207498]        __kvm_io_bus_write+0x1e0/0x350
[  262.208582]        kvm_io_bus_write+0xe0/0x1cc
[  262.209653]        io_mem_abort+0x2ac/0x6d8
[  262.210569]        kvm_handle_guest_abort+0x9b8/0x1f88
[  262.211937]        handle_exit+0xc4/0x39c
[  262.212971]        kvm_arch_vcpu_ioctl_run+0x90c/0x1c04
[  262.214154]        kvm_vcpu_ioctl+0x450/0x12f8
[  262.215233]        __arm64_sys_ioctl+0x7ac/0x1ba8
[  262.216402]        invoke_syscall.constprop.0+0x70/0x1e0
[  262.217774]        do_el0_svc+0xe4/0x2d4
[  262.218758]        el0_svc+0x44/0x8c
[  262.219941]        el0t_64_sync_handler+0xf4/0x120
[  262.221110]        el0t_64_sync+0x190/0x194

Note that the current report, which can be triggered by the vgic_irq
kselftest, is a triple chain that includes slots_lock, but after
inverting the slots_lock/config_lock dependency, the actual problem
reported above remains.

In several places, the vgic code calls kvm_io_bus_register_dev(), which
synchronizes the srcu, while holding config_lock (#1). And the MMIO
handler takes the config_lock while holding the srcu read lock (#0).

Break dependency #1, by registering the distributor and redistributors
without holding config_lock. The ITS also uses kvm_io_bus_register_dev()
but already relies on slots_lock to serialize calls.

The distributor iodev is created on the first KVM_RUN call. Multiple
threads will race for vgic initialization, and only the first one will
see !vgic_ready() under the lock. To serialize those threads, rely on
slots_lock rather than config_lock.

Redistributors are created earlier, through KVM_DEV_ARM_VGIC_GRP_ADDR
ioctls and vCPU creation. Similarly, serialize the iodev creation with
slots_lock, and the rest with config_lock.

Fixes: f00327731131 ("KVM: arm64: Use config_lock to protect vgic state")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 arch/arm64/kvm/vgic/vgic-init.c       | 25 ++++++++++++++++-----
 arch/arm64/kvm/vgic/vgic-kvm-device.c | 10 +++++++--
 arch/arm64/kvm/vgic/vgic-mmio-v3.c    | 31 ++++++++++++++++++---------
 arch/arm64/kvm/vgic/vgic-mmio.c       |  8 ++-----
 arch/arm64/kvm/vgic/vgic-v2.c         |  6 ------
 arch/arm64/kvm/vgic/vgic-v3.c         |  7 -------
 6 files changed, 51 insertions(+), 36 deletions(-)