Message ID | 20240624055907.7720-9-Dhananjay.Ugwekar@amd.com (mailing list archive)
---|---
State | Superseded
Series | Add per-core RAPL energy counter support for AMD CPUs
On Mon, Jun 24, 2024 at 05:59:05AM +0000, Dhananjay Ugwekar wrote:
> Prep for addition of power_per_core PMU to handle core scope energy
> consumption for AMD CPUs.
>
> Replace the generic names with *_pkg*, to differentiate between the
> scopes of the two different PMUs and their variables.

But then remember patch 2 and recall that intel seems to have everything
at die level, not pkg.

Does this proposed naming make sense? How?
On Mon, 2024-07-01 at 15:08 +0200, Peter Zijlstra wrote:
> On Mon, Jun 24, 2024 at 05:59:05AM +0000, Dhananjay Ugwekar wrote:
> > Prep for addition of power_per_core PMU to handle core scope energy
> > consumption for AMD CPUs.
> >
> > Replace the generic names with *_pkg*, to differentiate between the
> > scopes of the two different PMUs and their variables.
>
> But then remember patch 2 and recall that intel seems to have everything
> at die level, not pkg.
>
> Does this proposed naming make sense? How?

For Intel products, we have
1. Cascadelake-AP, which has multiple dies per package and has per-die
   RAPL MSRs
2. all other platforms, which have a single die per package, so their
   RAPL MSRs can be considered as either package-scope or die-scope
This applies to Thermal MSRs as well.

So for these MSRs, we can treat them as
1. always die-scope for all existing platforms
or
2. package-scope, with the exception of Cascadelake-AP
And the current kernel code follows rule 1.

I propose we switch to rule 2 for this code because rule 1 can be broken
on future multi-die systems (this is already true for Thermal MSRs).

In this sense, I think it is okay to call it pkg level RAPL for both
Intel and AMD.

thanks,
rui
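To make the two rules concrete, here is a minimal sketch (not the in-tree code) of how each rule would pick the PMU instance index for a CPU. topology_logical_die_id() and topology_logical_package_id() are existing kernel helpers; rapl_msrs_are_die_scope() is a hypothetical predicate standing in for whatever check the driver ends up using:

/* Rule 1: always index by die; on single-die parts, die id == package id. */
static inline unsigned int rapl_pmu_idx_rule1(unsigned int cpu)
{
	return topology_logical_die_id(cpu);
}

/* Rule 2: index by package, special-casing only parts with per-die RAPL MSRs. */
static inline unsigned int rapl_pmu_idx_rule2(unsigned int cpu)
{
	if (rapl_msrs_are_die_scope())		/* hypothetical quirk check */
		return topology_logical_die_id(cpu);
	return topology_logical_package_id(cpu);
}

Under rule 2 the PMU instance is package-scoped everywhere except on the quirked parts, which is what makes the *_pkg naming consistent for both vendors.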
Hello Rui,

On 7/2/2024 7:55 AM, Zhang, Rui wrote:
> On Mon, 2024-07-01 at 15:08 +0200, Peter Zijlstra wrote:
>> On Mon, Jun 24, 2024 at 05:59:05AM +0000, Dhananjay Ugwekar wrote:
>>> Prep for addition of power_per_core PMU to handle core scope energy
>>> consumption for AMD CPUs.
>>>
>>> Replace the generic names with *_pkg*, to differentiate between the
>>> scopes of the two different PMUs and their variables.
>>
>> But then remember patch 2 and recall that intel seems to have everything
>> at die level, not pkg.
>>
>> Does this proposed naming make sense? How?
>
> For Intel products, we have
> 1. Cascadelake-AP, which has multiple dies per package and has per-die
>    RAPL MSRs
> 2. all other platforms, which have a single die per package, so their
>    RAPL MSRs can be considered as either package-scope or die-scope
> This applies to Thermal MSRs as well.
>
> So for these MSRs, we can treat them as
> 1. always die-scope for all existing platforms
> or
> 2. package-scope, with the exception of Cascadelake-AP
> And the current kernel code follows rule 1.
>
> I propose we switch to rule 2 for this code because rule 1 can be broken
> on future multi-die systems (this is already true for Thermal MSRs).

I have a doubt about this, won't future Intel multi-die systems have
die-scope for the RAPL PMU like Cascadelake-AP?

If yes, then rule 1 above seems better.

Regards,
Dhananjay

> In this sense, I think it is okay to call it pkg level RAPL for both
> Intel and AMD.
>
> thanks,
> rui
On Tue, 2024-07-02 at 15:50 +0530, Dhananjay Ugwekar wrote:
>
> > For Intel products, we have
> > 1. Cascadelake-AP, which has multiple dies per package and has per-die
> >    RAPL MSRs
> > 2. all other platforms, which have a single die per package, so their
> >    RAPL MSRs can be considered as either package-scope or die-scope
> > This applies to Thermal MSRs as well.
> >
> > So for these MSRs, we can treat them as
> > 1. always die-scope for all existing platforms
> > or
> > 2. package-scope, with the exception of Cascadelake-AP
> > And the current kernel code follows rule 1.
> >
> > I propose we switch to rule 2 for this code because rule 1 can be
> > broken on future multi-die systems (this is already true for Thermal
> > MSRs).
>
> I have a doubt about this, won't future Intel multi-die systems have
> die-scope for the RAPL PMU like Cascadelake-AP?

For future multi-die systems that I know of, RAPL is still package
scope, but it is just lucky that RAPL control is not exposed via the
MSRs, so rule 1 is not actually broken for the RAPL PMU (while it is
indeed broken for other drivers like thermal).

In short, if we stick with rule 1, the RAPL PMU still works. Switching
to rule 2 to be consistent with the other drivers is also a choice IMV.

thanks,
rui

> If yes, then rule 1 above seems better.
>
> Regards,
> Dhananjay
>
> > In this sense, I think it is okay to call it pkg level RAPL for both
> > Intel and AMD.
> >
> > thanks,
> > rui
Hi Rui,

On 7/3/2024 9:37 AM, Zhang, Rui wrote:
> On Tue, 2024-07-02 at 15:50 +0530, Dhananjay Ugwekar wrote:
>>
>>> For Intel products, we have
>>> 1. Cascadelake-AP, which has multiple dies per package and has per-die
>>>    RAPL MSRs
>>> 2. all other platforms, which have a single die per package, so their
>>>    RAPL MSRs can be considered as either package-scope or die-scope
>>> This applies to Thermal MSRs as well.
>>>
>>> So for these MSRs, we can treat them as
>>> 1. always die-scope for all existing platforms
>>> or
>>> 2. package-scope, with the exception of Cascadelake-AP
>>> And the current kernel code follows rule 1.
>>>
>>> I propose we switch to rule 2 for this code because rule 1 can be
>>> broken on future multi-die systems (this is already true for Thermal
>>> MSRs).
>>
>> I have a doubt about this, won't future Intel multi-die systems have
>> die-scope for the RAPL PMU like Cascadelake-AP?
>
> For future multi-die systems that I know of, RAPL is still package
> scope

I think in that case we can go with rule 2, it would be future-proof for
Intel systems. If you agree, I can make the change in the next version.

Something like the below?

-#define rapl_pmu_is_pkg_scope()				\
-	(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||	\
-	 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
+#define rapl_pmu_is_die_scope()				\
+	(boot_cpu_data.x86_model_id == CASCADELAKE)

Regards,
Dhananjay

> but it is just lucky that RAPL control is not exposed via the
> MSRs, so rule 1 is not actually broken for the RAPL PMU (while it is
> indeed broken for other drivers like thermal).
>
> In short, if we stick with rule 1, the RAPL PMU still works. Switching
> to rule 2 to be consistent with the other drivers is also a choice IMV.
>
> thanks,
> rui
>>
>> If yes, then rule 1 above seems better.
>>
>> Regards,
>> Dhananjay
>>
>>> In this sense, I think it is okay to call it pkg level RAPL for both
>>> Intel and AMD.
>>>
>>> thanks,
>>> rui
On Wed, 2024-07-03 at 12:01 +0530, Dhananjay Ugwekar wrote:
> Hi Rui,
>
> On 7/3/2024 9:37 AM, Zhang, Rui wrote:
> > On Tue, 2024-07-02 at 15:50 +0530, Dhananjay Ugwekar wrote:
> > >
> > > > For Intel products, we have
> > > > 1. Cascadelake-AP, which has multiple dies per package and has
> > > >    per-die RAPL MSRs
> > > > 2. all other platforms, which have a single die per package, so
> > > >    their RAPL MSRs can be considered as either package-scope or
> > > >    die-scope
> > > > This applies to Thermal MSRs as well.
> > > >
> > > > So for these MSRs, we can treat them as
> > > > 1. always die-scope for all existing platforms
> > > > or
> > > > 2. package-scope, with the exception of Cascadelake-AP
> > > > And the current kernel code follows rule 1.
> > > >
> > > > I propose we switch to rule 2 for this code because rule 1 can be
> > > > broken on future multi-die systems (this is already true for
> > > > Thermal MSRs).
> > >
> > > I have a doubt about this, won't future Intel multi-die systems have
> > > die-scope for the RAPL PMU like Cascadelake-AP?
> >
> > For future multi-die systems that I know of, RAPL is still package
> > scope
>
> I think in that case we can go with rule 2, it would be future-proof for
> Intel systems. If you agree, I can make the change in the next version.
>
> Something like the below?
>
> -#define rapl_pmu_is_pkg_scope()				\
> -	(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||	\
> -	 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
> +#define rapl_pmu_is_die_scope()				\
> +	(boot_cpu_data.x86_model_id == CASCADELAKE)

Sounds good to me. Just a reminder that using boot_cpu_data.vfm is a
better choice here.

And it would be great to get Peter's view on this.

thanks,
rui

> Regards,
> Dhananjay
>
> > but it is just lucky that RAPL control is not exposed via the
> > MSRs, so rule 1 is not actually broken for the RAPL PMU (while it is
> > indeed broken for other drivers like thermal).
> >
> > In short, if we stick with rule 1, the RAPL PMU still works. Switching
> > to rule 2 to be consistent with the other drivers is also a choice IMV.
> >
> > thanks,
> > rui
> > >
> > > If yes, then rule 1 above seems better.
> > >
> > > Regards,
> > > Dhananjay
> > >
> > > > In this sense, I think it is okay to call it pkg level RAPL for
> > > > both Intel and AMD.
> > > >
> > > > thanks,
> > > > rui
On Fri, Jul 05, 2024 at 02:18:30AM +0000, Zhang, Rui wrote:
> > > > I have a doubt about this, won't future Intel multi-die systems
> > > > have die-scope for the RAPL PMU like Cascadelake-AP?
> > >
> > > For future multi-die systems that I know of, RAPL is still package
> > > scope
> >
> > I think in that case we can go with rule 2, it would be future-proof
> > for Intel systems. If you agree, I can make the change in the next
> > version.
> >
> > Something like the below?
> >
> > -#define rapl_pmu_is_pkg_scope()				\
> > -	(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||	\
> > -	 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
> > +#define rapl_pmu_is_die_scope()				\
> > +	(boot_cpu_data.x86_model_id == CASCADELAKE)
>
> sounds good to me. Just a reminder that using boot_cpu_data.vfm is a
> better choice here.
>
> And it would be great to get Peter's view on this.

Peter is confused :-) So you're saying that:

 - old Intel is pkg wide (it has no DIE enumeration)
 - Cascadelake (part of the skylake refresh) is per-DIE
 - modern Intel is pkg wide (they have no DIE enumeration)
 - future Intel will be pkg wide

And this works because for everything that does not enumerate a specific
DIE topology, it ends up being the same as the PKG topology.

But what about future products that have DIE but are not CASCADE (what
about COOPERLAKE)?

If this really is a one-off for CASCADE, then yes, I think we should be
doing what Dhananjay suggests, and then the PKG naming is fine.
Hi, Peter,

On Fri, 2024-07-05 at 11:24 +0200, Peter Zijlstra wrote:
> On Fri, Jul 05, 2024 at 02:18:30AM +0000, Zhang, Rui wrote:
> > > > > I have a doubt about this, won't future Intel multi-die systems
> > > > > have die-scope for the RAPL PMU like Cascadelake-AP?
> > > >
> > > > For future multi-die systems that I know of, RAPL is still
> > > > package scope
> > >
> > > I think in that case we can go with rule 2, it would be future-proof
> > > for Intel systems. If you agree, I can make the change in the next
> > > version.
> > >
> > > Something like the below?
> > >
> > > -#define rapl_pmu_is_pkg_scope()				\
> > > -	(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||	\
> > > -	 boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
> > > +#define rapl_pmu_is_die_scope()				\
> > > +	(boot_cpu_data.x86_model_id == CASCADELAKE)
> >
> > sounds good to me. Just a reminder that using boot_cpu_data.vfm is a
> > better choice here.
> >
> > And it would be great to get Peter's view on this.
>
> Peter is confused :-) So you're saying that:
>
>  - old Intel is pkg wide (it has no DIE enumeration)

right.

>  - Cascadelake (part of the skylake refresh) is per-DIE

right. It enumerates DIE and its RAPL control (RAPL MSR scope) is also
per-DIE.

>  - modern Intel is pkg wide (they have no DIE enumeration)

right.

>  - future Intel will be pkg wide

see detailed comment below.

> And this works because for everything that does not enumerate a specific
> DIE topology, it ends up being the same as the PKG topology.

Right.

> But what about future products that have DIE but are not CASCADE (what
> about COOPERLAKE)?

For the one that I know of, it has DIE enumeration but its RAPL control
is still pkg wide. However, the RAPL control for it (and other future
Xeon platforms) is not done via the MSR interface, so it does not break
this RAPL PMU code.

Future Intel client platforms still rely on MSRs to do RAPL control, but
their RAPL control scope (and whether they will enumerate DIE) is not
clear.

But still, IMO, this suggests that the RAPL control scope is not
architectural, and a quirk list (probably ending up with Cascadelake-AP
only) makes more sense here.

thanks,
rui

> If this really is a one-off for CASCADE, then yes, I think we should be
> doing what Dhananjay suggests, and then the PKG naming is fine.
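The "quirk list" idea could look roughly like the sketch below. This is not the actual patch, and the match entry is only a placeholder: Cascadelake-AP shares its family/model with Skylake-X, so a real check may need more than a VFM match. x86_match_cpu() and the X86_MATCH_VFM() helper are existing kernel interfaces, as Rui's boot_cpu_data.vfm hint suggests:

/* Hypothetical rule-2 sketch: default to package scope, quirk only per-die parts. */
static const struct x86_cpu_id rapl_die_scope_ids[] __initconst = {
	X86_MATCH_VFM(INTEL_SKYLAKE_X, NULL),	/* placeholder for Cascadelake-AP */
	{}
};

static bool __init rapl_pmu_is_die_scope(void)
{
	return x86_match_cpu(rapl_die_scope_ids) != NULL;
}

Everything not on the list would then be treated as package scope, which keeps the *_pkg naming accurate for AMD, Hygon and non-quirked Intel parts alike.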
diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
index f815c60ec551..b03b044a390f 100644
--- a/arch/x86/events/rapl.c
+++ b/arch/x86/events/rapl.c
@@ -70,18 +70,18 @@ MODULE_LICENSE("GPL");
 /*
  * RAPL energy status counters
  */
-enum perf_rapl_events {
+enum perf_rapl_pkg_events {
	PERF_RAPL_PP0 = 0,		/* all cores */
	PERF_RAPL_PKG,			/* entire package */
	PERF_RAPL_RAM,			/* DRAM */
	PERF_RAPL_PP1,			/* gpu */
	PERF_RAPL_PSYS,			/* psys */
-	PERF_RAPL_MAX,
-	NR_RAPL_DOMAINS = PERF_RAPL_MAX,
+	PERF_RAPL_PKG_EVENTS_MAX,
+	NR_RAPL_PKG_DOMAINS = PERF_RAPL_PKG_EVENTS_MAX,
 };
 
-static const char *const rapl_domain_names[NR_RAPL_DOMAINS] __initconst = {
+static const char *const rapl_pkg_domain_names[NR_RAPL_PKG_DOMAINS] __initconst = {
	"pp0-core",
	"package",
	"dram",
@@ -132,15 +132,15 @@ enum rapl_unit_quirk {
 
 struct rapl_model {
	struct perf_msr *rapl_msrs;
-	unsigned long	events;
+	unsigned long	pkg_events;
	unsigned int	msr_power_unit;
	enum rapl_unit_quirk	unit_quirk;
 };
 
 /* 1/2^hw_unit Joule */
-static int rapl_hw_unit[NR_RAPL_DOMAINS] __read_mostly;
-static struct rapl_pmus *rapl_pmus;
-static unsigned int rapl_cntr_mask;
+static int rapl_hw_unit[NR_RAPL_PKG_DOMAINS] __read_mostly;
+static struct rapl_pmus *rapl_pmus_pkg;
+static unsigned int rapl_pkg_cntr_mask;
 static u64 rapl_timer_ms;
 static struct perf_msr *rapl_msrs;
 static struct rapl_model *rapl_model;
@@ -165,7 +165,8 @@ static inline struct rapl_pmu *cpu_to_rapl_pmu(unsigned int cpu)
	 * The unsigned check also catches the '-1' return value for non
	 * existent mappings in the topology map.
	 */
-	return rapl_pmu_idx < rapl_pmus->nr_rapl_pmu ? rapl_pmus->rapl_pmu[rapl_pmu_idx] : NULL;
+	return rapl_pmu_idx < rapl_pmus_pkg->nr_rapl_pmu ?
+	       rapl_pmus_pkg->rapl_pmu[rapl_pmu_idx] : NULL;
 }
 
 static inline u64 rapl_read_counter(struct perf_event *event)
@@ -177,7 +178,7 @@ static inline u64 rapl_read_counter(struct perf_event *event)
 
 static inline u64 rapl_scale(u64 v, int cfg)
 {
-	if (cfg > NR_RAPL_DOMAINS) {
+	if (cfg > NR_RAPL_PKG_DOMAINS) {
		pr_warn("Invalid domain %d, failed to scale data\n", cfg);
		return v;
	}
@@ -347,7 +348,7 @@ static int rapl_pmu_event_init(struct perf_event *event)
	struct rapl_pmu *rapl_pmu;
 
	/* only look at RAPL events */
-	if (event->attr.type != rapl_pmus->pmu.type)
+	if (event->attr.type != rapl_pmus_pkg->pmu.type)
		return -ENOENT;
 
	/* check only supported bits are set */
@@ -359,14 +360,14 @@ static int rapl_pmu_event_init(struct perf_event *event)
 
	event->event_caps |= PERF_EV_CAP_READ_ACTIVE_PKG;
 
-	if (!cfg || cfg >= NR_RAPL_DOMAINS + 1)
+	if (!cfg || cfg >= NR_RAPL_PKG_DOMAINS + 1)
		return -EINVAL;
 
-	cfg = array_index_nospec((long)cfg, NR_RAPL_DOMAINS + 1);
+	cfg = array_index_nospec((long)cfg, NR_RAPL_PKG_DOMAINS + 1);
	bit = cfg - 1;
 
	/* check event supported */
-	if (!(rapl_cntr_mask & (1 << bit)))
+	if (!(rapl_pkg_cntr_mask & (1 << bit)))
		return -EINVAL;
 
	/* unsupported modes and filters */
@@ -394,7 +395,7 @@ static void rapl_pmu_event_read(struct perf_event *event)
 static ssize_t rapl_get_attr_cpumask(struct device *dev,
				     struct device_attribute *attr, char *buf)
 {
-	return cpumap_print_to_pagebuf(true, buf, &rapl_pmus->cpumask);
+	return cpumap_print_to_pagebuf(true, buf, &rapl_pmus_pkg->cpumask);
 }
 
 static DEVICE_ATTR(cpumask, S_IRUGO, rapl_get_attr_cpumask, NULL);
@@ -546,11 +547,11 @@ static struct perf_msr intel_rapl_spr_msrs[] = {
 };
 
 /*
- * Force to PERF_RAPL_MAX size due to:
- *   - perf_msr_probe(PERF_RAPL_MAX)
+ * Force to PERF_RAPL_PKG_EVENTS_MAX size due to:
+ *   - perf_msr_probe(PERF_RAPL_PKG_EVENTS_MAX)
  *   - want to use same event codes across both architectures
  */
-static struct perf_msr amd_rapl_msrs[] = {
+static struct perf_msr amd_rapl_pkg_msrs[] = {
	[PERF_RAPL_PP0]  = { 0, &rapl_events_cores_group, NULL, false, 0 },
	[PERF_RAPL_PKG]  = { MSR_AMD_PKG_ENERGY_STATUS, &rapl_events_pkg_group, test_msr, false, RAPL_MSR_MASK },
	[PERF_RAPL_RAM]  = { 0, &rapl_events_ram_group, NULL, false, 0 },
@@ -583,7 +584,7 @@ static int __rapl_cpu_offline(struct rapl_pmus *rapl_pmus, unsigned int rapl_pmu
 
 static int rapl_cpu_offline(unsigned int cpu)
 {
-	return __rapl_cpu_offline(rapl_pmus, get_rapl_pmu_idx(cpu),
+	return __rapl_cpu_offline(rapl_pmus_pkg, get_rapl_pmu_idx(cpu),
				  get_rapl_pmu_cpumask(cpu), cpu);
 }
 
@@ -622,7 +623,7 @@ static int __rapl_cpu_online(struct rapl_pmus *rapl_pmus, unsigned int rapl_pmu_
 
 static int rapl_cpu_online(unsigned int cpu)
 {
-	return __rapl_cpu_online(rapl_pmus, get_rapl_pmu_idx(cpu),
+	return __rapl_cpu_online(rapl_pmus_pkg, get_rapl_pmu_idx(cpu),
				 get_rapl_pmu_cpumask(cpu), cpu);
 }
 
@@ -635,7 +636,7 @@ static int rapl_check_hw_unit(void)
	/* protect rdmsrl() to handle virtualization */
	if (rdmsrl_safe(rapl_model->msr_power_unit, &msr_rapl_power_unit_bits))
		return -1;
-	for (i = 0; i < NR_RAPL_DOMAINS; i++)
+	for (i = 0; i < NR_RAPL_PKG_DOMAINS; i++)
		rapl_hw_unit[i] = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
 
	switch (rapl_model->unit_quirk) {
@@ -677,12 +678,12 @@ static void __init rapl_advertise(void)
	int i;
 
	pr_info("API unit is 2^-32 Joules, %d fixed counters, %llu ms ovfl timer\n",
-		hweight32(rapl_cntr_mask), rapl_timer_ms);
+		hweight32(rapl_pkg_cntr_mask), rapl_timer_ms);
 
-	for (i = 0; i < NR_RAPL_DOMAINS; i++) {
-		if (rapl_cntr_mask & (1 << i)) {
+	for (i = 0; i < NR_RAPL_PKG_DOMAINS; i++) {
+		if (rapl_pkg_cntr_mask & (1 << i)) {
			pr_info("hw unit of domain %s 2^-%d Joules\n",
-				rapl_domain_names[i], rapl_hw_unit[i]);
+				rapl_pkg_domain_names[i], rapl_hw_unit[i]);
		}
	}
 }
@@ -737,7 +738,7 @@ static int __init init_rapl_pmus(struct rapl_pmus **rapl_pmus_ptr)
 }
 
 static struct rapl_model model_snb = {
-	.events		= BIT(PERF_RAPL_PP0) |
+	.pkg_events	= BIT(PERF_RAPL_PP0) |
			  BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_PP1),
	.msr_power_unit = MSR_RAPL_POWER_UNIT,
@@ -745,7 +746,7 @@ static struct rapl_model model_snb = {
 };
 
 static struct rapl_model model_snbep = {
-	.events		= BIT(PERF_RAPL_PP0) |
+	.pkg_events	= BIT(PERF_RAPL_PP0) |
			  BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_RAM),
	.msr_power_unit = MSR_RAPL_POWER_UNIT,
@@ -753,7 +754,7 @@ static struct rapl_model model_snbep = {
 };
 
 static struct rapl_model model_hsw = {
-	.events		= BIT(PERF_RAPL_PP0) |
+	.pkg_events	= BIT(PERF_RAPL_PP0) |
			  BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_RAM) |
			  BIT(PERF_RAPL_PP1),
@@ -762,7 +763,7 @@ static struct rapl_model model_hsw = {
 };
 
 static struct rapl_model model_hsx = {
-	.events		= BIT(PERF_RAPL_PP0) |
+	.pkg_events	= BIT(PERF_RAPL_PP0) |
			  BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_RAM),
	.unit_quirk	= RAPL_UNIT_QUIRK_INTEL_HSW,
@@ -771,7 +772,7 @@ static struct rapl_model model_hsx = {
 };
 
 static struct rapl_model model_knl = {
-	.events		= BIT(PERF_RAPL_PKG) |
+	.pkg_events	= BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_RAM),
	.unit_quirk	= RAPL_UNIT_QUIRK_INTEL_HSW,
	.msr_power_unit = MSR_RAPL_POWER_UNIT,
@@ -779,7 +780,7 @@ static struct rapl_model model_knl = {
 };
 
 static struct rapl_model model_skl = {
-	.events		= BIT(PERF_RAPL_PP0) |
+	.pkg_events	= BIT(PERF_RAPL_PP0) |
			  BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_RAM) |
			  BIT(PERF_RAPL_PP1) |
@@ -789,7 +790,7 @@ static struct rapl_model model_skl = {
 };
 
 static struct rapl_model model_spr = {
-	.events		= BIT(PERF_RAPL_PP0) |
+	.pkg_events	= BIT(PERF_RAPL_PP0) |
			  BIT(PERF_RAPL_PKG) |
			  BIT(PERF_RAPL_RAM) |
			  BIT(PERF_RAPL_PSYS),
@@ -799,9 +800,9 @@ static struct rapl_model model_spr = {
 };
 
 static struct rapl_model model_amd_hygon = {
-	.events		= BIT(PERF_RAPL_PKG),
+	.pkg_events	= BIT(PERF_RAPL_PKG),
	.msr_power_unit = MSR_AMD_RAPL_POWER_UNIT,
-	.rapl_msrs	= amd_rapl_msrs,
+	.rapl_msrs	= amd_rapl_pkg_msrs,
 };
 
 static const struct x86_cpu_id rapl_model_match[] __initconst = {
@@ -867,14 +868,14 @@ static int __init rapl_pmu_init(void)
 
	rapl_msrs = rapl_model->rapl_msrs;
 
-	rapl_cntr_mask = perf_msr_probe(rapl_msrs, PERF_RAPL_MAX,
-					false, (void *) &rapl_model->events);
+	rapl_pkg_cntr_mask = perf_msr_probe(rapl_msrs, PERF_RAPL_PKG_EVENTS_MAX,
+					    false, (void *) &rapl_model->pkg_events);
 
	ret = rapl_check_hw_unit();
	if (ret)
		return ret;
 
-	ret = init_rapl_pmus(&rapl_pmus);
+	ret = init_rapl_pmus(&rapl_pmus_pkg);
	if (ret)
		return ret;
 
@@ -887,7 +888,7 @@ static int __init rapl_pmu_init(void)
	if (ret)
		goto out;
 
-	ret = perf_pmu_register(&rapl_pmus->pmu, "power", -1);
+	ret = perf_pmu_register(&rapl_pmus_pkg->pmu, "power", -1);
	if (ret)
		goto out1;
 
@@ -898,7 +899,7 @@ static int __init rapl_pmu_init(void)
	cpuhp_remove_state(CPUHP_AP_PERF_X86_RAPL_ONLINE);
 out:
	pr_warn("Initialization failed (%d), disabled\n", ret);
-	cleanup_rapl_pmus(rapl_pmus);
+	cleanup_rapl_pmus(rapl_pmus_pkg);
	return ret;
 }
 module_init(rapl_pmu_init);
@@ -906,7 +907,7 @@ module_init(rapl_pmu_init);
 static void __exit intel_rapl_exit(void)
 {
	cpuhp_remove_state_nocalls(CPUHP_AP_PERF_X86_RAPL_ONLINE);
-	perf_pmu_unregister(&rapl_pmus->pmu);
-	cleanup_rapl_pmus(rapl_pmus);
+	perf_pmu_unregister(&rapl_pmus_pkg->pmu);
+	cleanup_rapl_pmus(rapl_pmus_pkg);
 }
 module_exit(intel_rapl_exit);
Prep for addition of power_per_core PMU to handle core scope energy
consumption for AMD CPUs.

Replace the generic names with *_pkg*, to differentiate between the
scopes of the two different PMUs and their variables.

No functional change.

Signed-off-by: Dhananjay Ugwekar <Dhananjay.Ugwekar@amd.com>
---
 arch/x86/events/rapl.c | 83 +++++++++++++++++++++---------------------
 1 file changed, 42 insertions(+), 41 deletions(-)
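For context on where the *_pkg naming is headed, here is a hedged sketch of the follow-up this patch prepares for: a second PMU instance registered under its own name next to the existing package-scope "power" PMU. The rapl_pmus_core variable and the reuse of init_rapl_pmus() are assumptions for illustration, not the later patch itself; only the "power_per_core" name comes from the series description:

static struct rapl_pmus *rapl_pmus_core;	/* assumed per-core counterpart to rapl_pmus_pkg */

static int __init rapl_register_core_pmu(void)
{
	int ret;

	/* Allocate one rapl_pmu per core scope, mirroring the pkg-scope setup. */
	ret = init_rapl_pmus(&rapl_pmus_core);
	if (ret)
		return ret;

	/* Registered alongside the package-scope "power" PMU. */
	return perf_pmu_register(&rapl_pmus_core->pmu, "power_per_core", -1);
}

With two parallel sets of state like this, the rename in this patch is what keeps the pkg-scope and core-scope variables distinguishable.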