Message ID | 20220919091008.60695-1-likexu@tencent.com |
---|---|
State | New, archived |
Series | [v3,1/3] KVM: x86/pmu: Stop adding speculative Intel GP PMCs that don't exist yet |
On Mon, Sep 19, 2022, Like Xu wrote:
> From: Like Xu <likexu@tencent.com>
>
> The Intel April 2022 SDM - Table 2-2. IA-32 Architectural MSRs adds
> a new architectural IA32_OVERCLOCKING_STATUS msr (0x195), plus the
> presence of IA32_CORE_CAPABILITIES (0xCF), the theoretical effective
> maximum value of the Intel GP PMCs is 14 (0xCF - 0xC1) instead of 18.
>
> But the conclusion of this speculation "14" is very fragile and can
> easily be overturned once Intel declares another meaningful arch msr

s/msr/MSR for consistency

> in the above reserved range, and even worse, just conjecture, Intel
> probably put PMCs 8-15 in a completely different range of MSR indices.
>
> A conservative proposal would be to stop at the maximum number of Intel
> GP PMCs supported today. Also subsequent changes would limit both AMD
> and Intel on the number of GP counter supported by KVM.
>
> There are some boxes like Intel P4 (non Architectural PMU) may indeed
> have 18 counters , but those counters are in a completely different msr

Unnecessary whitespace before the comma.  And s/msr/MSR again.

> address range and is not supported by KVM.
>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Fixes: cf05a67b68b8 ("KVM: x86: omit "impossible" pmu MSRs from MSR list")

Does this need Cc: stable@vger.kernel.org?  Or is this benign enough that we
don't care?

No need for a v4, the above nits can be handled when applying.

> Suggested-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Like Xu <likexu@tencent.com>
> Reviewed-by: Jim Mattson <jmattson@google.com>
> ---

In the future, please provide a cover letter even for trivial series, it helps
(me at least) mentally organize patches.

Thanks!
On 8/10/2022 3:38 am, Sean Christopherson wrote:
> Does this need Cc: stable@vger.kernel.org?  Or is this benign enough that we don't
> care?

Considering stable kernel may access IA32_OVERCLOCKING_STATUS as well, cc
stable list helps to remove the illusion of pmu msr scope for stable tree
maintainers.

> No need for a v4, the above nits can be handled when applying.

Thanks, so kind of you.

>> Suggested-by: Jim Mattson <jmattson@google.com>
>> Signed-off-by: Like Xu <likexu@tencent.com>
>> Reviewed-by: Jim Mattson <jmattson@google.com>
>> ---
>
> In the future, please provide a cover letter even for trivial series, it helps
> (me at least) mentally organize patches.

Roger that.

> Thanks!
On Fri, Oct 14, 2022, Like Xu wrote:
> On 8/10/2022 3:38 am, Sean Christopherson wrote:
> > Does this need Cc: stable@vger.kernel.org?  Or is this benign enough that we don't
> > care?
>
> Considering stable kernel may access IA32_OVERCLOCKING_STATUS as well,
> cc stable list helps to remove the illusion of pmu msr scope for stable tree
> maintainers.

Is that a "yes, this should be Cc'd stable" or "no, don't bother"?
On 15/10/2022 12:18 am, Sean Christopherson wrote:
> On Fri, Oct 14, 2022, Like Xu wrote:
>> On 8/10/2022 3:38 am, Sean Christopherson wrote:
>>> Does this need Cc: stable@vger.kernel.org?  Or is this benign enough that we don't
>>> care?
>>
>> Considering stable kernel may access IA32_OVERCLOCKING_STATUS as well,
>> cc stable list helps to remove the illusion of pmu msr scope for stable tree
>> maintainers.
>
> Is that a "yes, this should be Cc'd stable" or "no, don't bother"?

Oops, I missed this one. "Yes, this should be Cc'd" as I have received a few
complaints on this.

Please let me know if I need to post a new version of this minor patch set,
considering the previous comment "No need for a v4, the above nits can be
handled when applying."

Thanks,
Like Xu
On 11/7/22 08:26, Like Xu wrote:
> On 15/10/2022 12:18 am, Sean Christopherson wrote:
>> On Fri, Oct 14, 2022, Like Xu wrote:
>>> On 8/10/2022 3:38 am, Sean Christopherson wrote:
>>>> Does this need Cc: stable@vger.kernel.org?  Or is this benign enough
>>>> that we don't
>>>> care?
>>>
>>> Considering stable kernel may access IA32_OVERCLOCKING_STATUS as well,
>>> cc stable list helps to remove the illusion of pmu msr scope for
>>> stable tree
>>> maintainers.
>>
>> Is that a "yes, this should be Cc'd stable" or "no, don't bother"?
>
> Oops, I missed this one. "Yes, this should be Cc'd" as I have received a
> few complaints on this.
>
> Please let me know if I need to post a new version of this minor patch
> set, considering the previous
> comment "No need for a v4, the above nits can be handled when applying."

Queued, thanks.

Paolo
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d7374d768296..9f74c3924377 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1428,20 +1428,10 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_PERFCTR0 + 2, MSR_ARCH_PERFMON_PERFCTR0 + 3,
 	MSR_ARCH_PERFMON_PERFCTR0 + 4, MSR_ARCH_PERFMON_PERFCTR0 + 5,
 	MSR_ARCH_PERFMON_PERFCTR0 + 6, MSR_ARCH_PERFMON_PERFCTR0 + 7,
-	MSR_ARCH_PERFMON_PERFCTR0 + 8, MSR_ARCH_PERFMON_PERFCTR0 + 9,
-	MSR_ARCH_PERFMON_PERFCTR0 + 10, MSR_ARCH_PERFMON_PERFCTR0 + 11,
-	MSR_ARCH_PERFMON_PERFCTR0 + 12, MSR_ARCH_PERFMON_PERFCTR0 + 13,
-	MSR_ARCH_PERFMON_PERFCTR0 + 14, MSR_ARCH_PERFMON_PERFCTR0 + 15,
-	MSR_ARCH_PERFMON_PERFCTR0 + 16, MSR_ARCH_PERFMON_PERFCTR0 + 17,
 	MSR_ARCH_PERFMON_EVENTSEL0, MSR_ARCH_PERFMON_EVENTSEL1,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 2, MSR_ARCH_PERFMON_EVENTSEL0 + 3,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 4, MSR_ARCH_PERFMON_EVENTSEL0 + 5,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 6, MSR_ARCH_PERFMON_EVENTSEL0 + 7,
-	MSR_ARCH_PERFMON_EVENTSEL0 + 8, MSR_ARCH_PERFMON_EVENTSEL0 + 9,
-	MSR_ARCH_PERFMON_EVENTSEL0 + 10, MSR_ARCH_PERFMON_EVENTSEL0 + 11,
-	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
-	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
-	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
 	MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
 	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
@@ -6926,12 +6916,12 @@ static void kvm_init_msr_list(void)
 			    intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
 				continue;
 			break;
-		case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 17:
+		case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 7:
 			if (msrs_to_save_all[i] - MSR_ARCH_PERFMON_PERFCTR0 >=
 			    min(INTEL_PMC_MAX_GENERIC, kvm_pmu_cap.num_counters_gp))
 				continue;
 			break;
-		case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 17:
+		case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 7:
 			if (msrs_to_save_all[i] - MSR_ARCH_PERFMON_EVENTSEL0 >=
 			    min(INTEL_PMC_MAX_GENERIC, kvm_pmu_cap.num_counters_gp))
 				continue;