Message ID | 20220203014813.2130559-2-jmattson@google.com
---|---
State | New, archived
Series | [1/2] KVM: x86/pmu: Don't truncate the PerfEvtSeln MSR when creating a perf event
Reviewed-by: David Dunn <daviddunn@google.com>

On Wed, Feb 2, 2022 at 5:52 PM Jim Mattson <jmattson@google.com> wrote:
>
> AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
> a PerfEvtSeln MSR. Don't mask off the high nybble when configuring a
> RAW perf event.
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 80f7e5bb6867..06715a4f08ec 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -221,7 +221,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	}
 
 	if (type == PERF_TYPE_RAW)
-		config = eventsel & X86_RAW_EVENT_MASK;
+		config = eventsel & AMD64_RAW_EVENT_MASK;
 
 	if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
 		return;
AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
a PerfEvtSeln MSR. Don't mask off the high nybble when configuring a
RAW perf event.

Fixes: ca724305a2b0 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/pmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)