Message ID | 20220217083601.24829-2-likexu@tencent.com (mailing list archive)
---|---
State | New, archived
Series | [1/2] KVM: x86/pmu: Extract check_pmu_event_filter() from the same semantics
On 2/17/22 09:36, Like Xu wrote:
> From: Like Xu <likexu@tencent.com>
>
> Fix the following warning:
>
> =============================
> WARNING: suspicious RCU usage
> arch/x86/kvm/pmu.c:190 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
> rcu_scheduler_active = 2, debug_locks = 1
> 1 lock held by CPU 28/KVM/370841:
>  #0: ff11004089f280b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x87/0x730 [kvm]
>
> Call Trace:
>  <TASK>
>  dump_stack_lvl+0x59/0x73
>  reprogram_fixed_counter+0x15d/0x1a0 [kvm]
>  kvm_pmu_trigger_event+0x1a3/0x260 [kvm]
>  ? free_moved_vector+0x1b4/0x1e0
>  complete_fast_pio_in+0x8a/0xd0 [kvm]
>  [...]

I think the right fix is to add SRCU protection to complete_userspace_io
in kvm_arch_vcpu_ioctl_run.  Most calls of complete_userspace_io can
execute similar code to vmexits.

> Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")

It actually fixes 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring
instructions", 2022-01-07); that is when the PMU event filter check was
added to kvm_skip_emulated_instruction() (called by kvm_fast_pio_in()).

Thanks,

Paolo

> It's possible to call the KVM_SET_PMU_EVENT_FILTER ioctl with the vCPU
> running.  Similar to "kvm->arch.msr_filter", KVM should guarantee that
> vCPUs will see either the previous filter or the new filter, so that
> guest PMU events with identical settings in both the old and new filter
> have deterministic behavior.
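To make the suggestion concrete, here is a minimal sketch of what taking kvm->srcu around the completion callback in kvm_arch_vcpu_ioctl_run() could look like. The surrounding context and the cui/idx names are illustrative assumptions, not a verbatim upstream patch:

	if (unlikely(vcpu->arch.complete_userspace_io)) {
		int (*cui)(struct kvm_vcpu *vcpu) = vcpu->arch.complete_userspace_io;
		int idx;

		vcpu->arch.complete_userspace_io = NULL;

		/*
		 * Hold kvm->srcu across the completion callback so that any
		 * code it shares with the vmexit paths (emulator, PMU event
		 * filter lookup, memslot accesses) runs inside an SRCU
		 * read-side critical section, as it would after a vmexit.
		 */
		idx = srcu_read_lock(&vcpu->kvm->srcu);
		r = cui(vcpu);
		srcu_read_unlock(&vcpu->kvm->srcu, idx);

		if (r <= 0)
			goto out;
	}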
On Fri, Feb 18, 2022, Paolo Bonzini wrote:
> On 2/17/22 09:36, Like Xu wrote:
> > From: Like Xu <likexu@tencent.com>
> >
> > Fix the following warning:
> >
> > =============================
> > WARNING: suspicious RCU usage
> > arch/x86/kvm/pmu.c:190 suspicious rcu_dereference_check() usage!
> >
> > other info that might help us debug this:
> >
> > rcu_scheduler_active = 2, debug_locks = 1
> > 1 lock held by CPU 28/KVM/370841:
> >  #0: ff11004089f280b8 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x87/0x730 [kvm]
> >
> > Call Trace:
> >  <TASK>
> >  dump_stack_lvl+0x59/0x73
> >  reprogram_fixed_counter+0x15d/0x1a0 [kvm]
> >  kvm_pmu_trigger_event+0x1a3/0x260 [kvm]
> >  ? free_moved_vector+0x1b4/0x1e0
> >  complete_fast_pio_in+0x8a/0xd0 [kvm]
> >  [...]
>
> I think the right fix is to add SRCU protection to complete_userspace_io
> in kvm_arch_vcpu_ioctl_run.  Most calls of complete_userspace_io can
> execute similar code to vmexits.

Agreed, I bet similar warnings can be triggered on SVM with nrips=false
due to svm_skip_emulated_instruction() dropping into the emulator, e.g. for
Hyper-V and Xen usage where next_rip doesn't appear to be filled in all
paths.
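For reference, a simplified, paraphrased sketch of the SVM skip path being referred to (not verbatim upstream code; variable names are taken loosely from svm.c): without a usable next_rip the skip goes through the emulator, which does kvm->srcu-protected work of its own (e.g. memslot lookups for the instruction fetch), so the same lockdep complaint can fire if it is reached from a completion callback without kvm->srcu held.

	/* Paraphrased sketch of svm_skip_emulated_instruction(). */
	if (nrips && svm->vmcb->control.next_rip != 0) {
		/* Hardware supplied the next RIP; just write it. */
		kvm_rip_write(vcpu, svm->vmcb->control.next_rip);
	} else {
		/*
		 * No next_rip (nrips=false, or a path such as the Hyper-V/Xen
		 * usage mentioned above that didn't fill it in): fall back to
		 * the emulator to decode and skip the instruction, which
		 * performs srcu_dereference()-style accesses internally.
		 */
		if (!kvm_emulate_instruction(vcpu, EMULTYPE_SKIP))
			return 0;
	}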
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index af2a3dd22dd9..94319f627f64 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -185,8 +185,9 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	struct kvm *kvm = pmc->vcpu->kvm;
 	bool allow_event = true;
 	__u64 key;
-	int idx;
+	int idx, srcu_idx;
 
+	srcu_idx = srcu_read_lock(&kvm->srcu);
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
 		goto out;
@@ -209,6 +210,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	}
 
 out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
 	return allow_event;
 }
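For context on why the reader side must sit inside an SRCU read-side critical section at all: the KVM_SET_PMU_EVENT_FILTER path publishes a replacement filter and then waits for kvm->srcu readers before freeing the old one, along these lines (a simplified sketch; the exact locking in kvm_vm_ioctl_set_pmu_event_filter() may differ):

	mutex_lock(&kvm->lock);
	old = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
				  mutex_is_locked(&kvm->lock));
	mutex_unlock(&kvm->lock);

	/*
	 * Wait for every kvm->srcu read-side critical section to finish so
	 * that no vCPU can still be dereferencing the old filter, then free
	 * it.  A reader that is not under srcu_read_lock() has no such
	 * guarantee, which is what the lockdep splat above is pointing at.
	 */
	synchronize_srcu_expedited(&kvm->srcu);
	kfree(old);

Whether that read-side section is opened locally, as in this patch, or once around complete_userspace_io() as Paolo suggests, is the point under discussion in the thread.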