| Message ID | 20220203014813.2130559-1-jmattson@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/2] KVM: x86/pmu: Don't truncate the PerfEvtSeln MSR when creating a perf event |
Reviewed-by: David Dunn <daviddunn@google.com>

On Wed, Feb 2, 2022 at 5:52 PM Jim Mattson <jmattson@google.com> wrote:
>
> AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
> a PerfEvtSeln MSR. Don't drop the high nybble when setting up the
> config field of a perf_event_attr structure for a call to
> perf_event_create_kernel_counter().
On 2/3/22 02:48, Jim Mattson wrote:
> AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
> a PerfEvtSeln MSR. Don't drop the high nybble when setting up the
> config field of a perf_event_attr structure for a call to
> perf_event_create_kernel_counter().
>
> Fixes: ca724305a2b0 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
> Reported-by: Stephane Eranian <eranian@google.com>
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>  arch/x86/kvm/pmu.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 2c98f3ee8df4..80f7e5bb6867 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -95,7 +95,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
>  }
>
>  static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
> -				  unsigned config, bool exclude_user,
> +				  u64 config, bool exclude_user,
>  				  bool exclude_kernel, bool intr,
>  				  bool in_tx, bool in_tx_cp)
>  {
> @@ -181,7 +181,8 @@ static int cmp_u64(const void *a, const void *b)
>
>  void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
>  {
> -	unsigned config, type = PERF_TYPE_RAW;
> +	u64 config;
> +	u32 type = PERF_TYPE_RAW;
>  	struct kvm *kvm = pmc->vcpu->kvm;
>  	struct kvm_pmu_event_filter *filter;
>  	bool allow_event = true;

Queued both, thanks.

Paolo
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 2c98f3ee8df4..80f7e5bb6867 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -95,7 +95,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
 }
 
 static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
-				  unsigned config, bool exclude_user,
+				  u64 config, bool exclude_user,
 				  bool exclude_kernel, bool intr,
 				  bool in_tx, bool in_tx_cp)
 {
@@ -181,7 +181,8 @@ static int cmp_u64(const void *a, const void *b)
 
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
-	unsigned config, type = PERF_TYPE_RAW;
+	u64 config;
+	u32 type = PERF_TYPE_RAW;
 	struct kvm *kvm = pmc->vcpu->kvm;
 	struct kvm_pmu_event_filter *filter;
 	bool allow_event = true;
AMD's event select is 3 nybbles, with the high nybble in bits 35:32 of
a PerfEvtSeln MSR. Don't drop the high nybble when setting up the
config field of a perf_event_attr structure for a call to
perf_event_create_kernel_counter().

Fixes: ca724305a2b0 ("KVM: x86/vPMU: Implement AMD vPMU code for KVM")
Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/pmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)