
[1/5] KVM: arm64: Divorce the perf code from oprofile helpers

Message ID 20210414134409.1266357-2-maz@kernel.org (mailing list archive)
State New, archived
Series perf: oprofile spring cleanup

Commit Message

Marc Zyngier April 14, 2021, 1:44 p.m. UTC
KVM/arm64 is the sole user of perf_num_counters(), and really
could do without it. Stop using the obsolete API by relying on
the existing probing code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/perf.c     | 7 +------
 arch/arm64/kvm/pmu-emul.c | 2 +-
 include/kvm/arm_pmu.h     | 4 ++++
 3 files changed, 6 insertions(+), 7 deletions(-)

Comments

zhukeqian April 15, 2021, 6:59 a.m. UTC | #1
Hi Marc,

On 2021/4/14 21:44, Marc Zyngier wrote:
> KVM/arm64 is the sole user of perf_num_counters(), and really
> could do without it. Stop using the obsolete API by relying on
> the existing probing code.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/perf.c     | 7 +------
>  arch/arm64/kvm/pmu-emul.c | 2 +-
>  include/kvm/arm_pmu.h     | 4 ++++
>  3 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
> index 739164324afe..b8b398670ef2 100644
> --- a/arch/arm64/kvm/perf.c
> +++ b/arch/arm64/kvm/perf.c
> @@ -50,12 +50,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
>  
>  int kvm_perf_init(void)
>  {
> -	/*
> -	 * Check if HW_PERF_EVENTS are supported by checking the number of
> -	 * hardware performance counters. This could ensure the presence of
> -	 * a physical PMU and CONFIG_PERF_EVENT is selected.
> -	 */
> -	if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
> +	if (kvm_pmu_probe_pmuver() != 0xf)
The probe() function may be called many times (kvm_arm_pmu_v3_set_attr also calls it).
I don't know whether probing once is enough. If so, can we cache the result in a static
variable, so that subsequent calls can return it right away?

Thanks,
Keqian
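
For illustration, the caching Keqian is suggesting would look roughly like the sketch
below. This is only an outline of the idea, not something the series implements;
__kvm_pmu_probe_pmuver() is a made-up name standing in for the existing probing logic.

int kvm_pmu_probe_pmuver(void)
{
	static int cached_pmuver;	/* 0 means "not probed yet" */

	if (!cached_pmuver)
		cached_pmuver = __kvm_pmu_probe_pmuver();	/* existing probe via a temporary perf event */

	return cached_pmuver;
}

As Marc explains in the follow-up, caching like this pins the result to whichever CPU
happened to run the first probe, which is exactly what you don't want on big.LITTLE
systems whose clusters may implement different PMU versions.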
Marc Zyngier April 15, 2021, 10:42 a.m. UTC | #2
On Thu, 15 Apr 2021 07:59:26 +0100,
Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
> Hi Marc,
> 
> On 2021/4/14 21:44, Marc Zyngier wrote:
> > KVM/arm64 is the sole user of perf_num_counters(), and really
> > could do without it. Stop using the obsolete API by relying on
> > the existing probing code.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/perf.c     | 7 +------
> >  arch/arm64/kvm/pmu-emul.c | 2 +-
> >  include/kvm/arm_pmu.h     | 4 ++++
> >  3 files changed, 6 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
> > index 739164324afe..b8b398670ef2 100644
> > --- a/arch/arm64/kvm/perf.c
> > +++ b/arch/arm64/kvm/perf.c
> > @@ -50,12 +50,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
> >  
> >  int kvm_perf_init(void)
> >  {
> > -	/*
> > -	 * Check if HW_PERF_EVENTS are supported by checking the number of
> > -	 * hardware performance counters. This could ensure the presence of
> > -	 * a physical PMU and CONFIG_PERF_EVENT is selected.
> > -	 */
> > -	if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
> > +	if (kvm_pmu_probe_pmuver() != 0xf)
> The probe() function may be called many times
> (kvm_arm_pmu_v3_set_attr also calls it).  I don't know whether the
> first calling is enough. If so, can we use a static variable in it,
> so the following calling can return the result right away?

No, because that wouldn't help with crappy big-little implementations
that could have PMUs with different versions. We want to find the
version at the point where the virtual PMU is created, which is why we
call the probe function once per vcpu.

This of course is broken in other ways (BL+KVM is a total disaster
when it comes to PMU), but making this static would just make it
worse.

	M.
zhukeqian April 15, 2021, 11:34 a.m. UTC | #3
Hi Marc,

On 2021/4/15 18:42, Marc Zyngier wrote:
> On Thu, 15 Apr 2021 07:59:26 +0100,
> Keqian Zhu <zhukeqian1@huawei.com> wrote:
>>
>> Hi Marc,
>>
>> On 2021/4/14 21:44, Marc Zyngier wrote:
>>> KVM/arm64 is the sole user of perf_num_counters(), and really
>>> could do without it. Stop using the obsolete API by relying on
>>> the existing probing code.
>>>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>> ---
>>>  arch/arm64/kvm/perf.c     | 7 +------
>>>  arch/arm64/kvm/pmu-emul.c | 2 +-
>>>  include/kvm/arm_pmu.h     | 4 ++++
>>>  3 files changed, 6 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
>>> index 739164324afe..b8b398670ef2 100644
>>> --- a/arch/arm64/kvm/perf.c
>>> +++ b/arch/arm64/kvm/perf.c
>>> @@ -50,12 +50,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
>>>  
>>>  int kvm_perf_init(void)
>>>  {
>>> -	/*
>>> -	 * Check if HW_PERF_EVENTS are supported by checking the number of
>>> -	 * hardware performance counters. This could ensure the presence of
>>> -	 * a physical PMU and CONFIG_PERF_EVENT is selected.
>>> -	 */
>>> -	if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
>>> +	if (kvm_pmu_probe_pmuver() != 0xf)
>> The probe() function may be called many times
>> (kvm_arm_pmu_v3_set_attr also calls it).  I don't know whether the
>> first calling is enough. If so, can we use a static variable in it,
>> so the following calling can return the result right away?
> 
> No, because that wouldn't help with crappy big-little implementations
> that could have PMUs with different versions. We want to find the
> version at the point where the virtual PMU is created, which is why we
> call the probe function once per vcpu.
I see.

But AFAICS the pmuver is placed in kvm->arch, and the probe function is called
once per VM. Maybe I'm missing something.

> 
> This of course is broken in other ways (BL+KVM is a total disaster
> when it comes to PMU), but making this static would just make it
> worse.
OK.

Thanks,
Keqian
Marc Zyngier April 15, 2021, 1:49 p.m. UTC | #4
On Thu, 15 Apr 2021 12:34:40 +0100,
Keqian Zhu <zhukeqian1@huawei.com> wrote:
> 
> Hi Marc,
> 
> On 2021/4/15 18:42, Marc Zyngier wrote:
> > On Thu, 15 Apr 2021 07:59:26 +0100,
> > Keqian Zhu <zhukeqian1@huawei.com> wrote:
> >>
> >> Hi Marc,
> >>
> >> On 2021/4/14 21:44, Marc Zyngier wrote:
> >>> KVM/arm64 is the sole user of perf_num_counters(), and really
> >>> could do without it. Stop using the obsolete API by relying on
> >>> the existing probing code.
> >>>
> >>> Signed-off-by: Marc Zyngier <maz@kernel.org>
> >>> ---
> >>>  arch/arm64/kvm/perf.c     | 7 +------
> >>>  arch/arm64/kvm/pmu-emul.c | 2 +-
> >>>  include/kvm/arm_pmu.h     | 4 ++++
> >>>  3 files changed, 6 insertions(+), 7 deletions(-)
> >>>
> >>> diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
> >>> index 739164324afe..b8b398670ef2 100644
> >>> --- a/arch/arm64/kvm/perf.c
> >>> +++ b/arch/arm64/kvm/perf.c
> >>> @@ -50,12 +50,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
> >>>  
> >>>  int kvm_perf_init(void)
> >>>  {
> >>> -	/*
> >>> -	 * Check if HW_PERF_EVENTS are supported by checking the number of
> >>> -	 * hardware performance counters. This could ensure the presence of
> >>> -	 * a physical PMU and CONFIG_PERF_EVENT is selected.
> >>> -	 */
> >>> -	if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
> >>> +	if (kvm_pmu_probe_pmuver() != 0xf)
> >> The probe() function may be called many times
> >> (kvm_arm_pmu_v3_set_attr also calls it).  I don't know whether the
> >> first calling is enough. If so, can we use a static variable in it,
> >> so the following calling can return the result right away?
> > 
> > No, because that wouldn't help with crappy big-little implementations
> > that could have PMUs with different versions. We want to find the
> > version at the point where the virtual PMU is created, which is why we
> > call the probe function once per vcpu.
> I see.
> 
> But AFAICS the pmuver is placed in kvm->arch, and the probe function is called
> once per VM. Maybe I miss something.

You're right, I mis-remembered. This doesn't change much though.

Thanks,

	M.
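
For reference, the per-VM arrangement the thread converges on — the probed version
cached in kvm->arch so the probe runs once per VM rather than once per vcpu — would
look roughly like the sketch below. The pmuver field name and the exact call site are
assumptions for illustration, not taken verbatim from this series.

int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
{
	/* Probe lazily, the first time any vcpu of this VM configures its PMU. */
	if (!vcpu->kvm->arch.pmuver)
		vcpu->kvm->arch.pmuver = kvm_pmu_probe_pmuver();

	/* ... handle the attribute using vcpu->kvm->arch.pmuver ... */
	return 0;
}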
Will Deacon April 22, 2021, 10:43 a.m. UTC | #5
On Wed, Apr 14, 2021 at 02:44:05PM +0100, Marc Zyngier wrote:
> KVM/arm64 is the sole user of perf_num_counters(), and really
> could do without it. Stop using the obsolete API by relying on
> the existing probing code.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/perf.c     | 7 +------
>  arch/arm64/kvm/pmu-emul.c | 2 +-
>  include/kvm/arm_pmu.h     | 4 ++++
>  3 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
> index 739164324afe..b8b398670ef2 100644
> --- a/arch/arm64/kvm/perf.c
> +++ b/arch/arm64/kvm/perf.c
> @@ -50,12 +50,7 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
>  
>  int kvm_perf_init(void)
>  {
> -	/*
> -	 * Check if HW_PERF_EVENTS are supported by checking the number of
> -	 * hardware performance counters. This could ensure the presence of
> -	 * a physical PMU and CONFIG_PERF_EVENT is selected.
> -	 */
> -	if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
> +	if (kvm_pmu_probe_pmuver() != 0xf)

Took me a while to figure out that this returns 0xf if the hardware has a
PMUVer of 0x0, so it's all good:

Acked-by: Will Deacon <will@kernel.org>

Will
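
To make Will's observation concrete: the probe starts from a default of 0xf (the
ID_AA64DFR0_EL1.PMUVer encoding for an IMPLEMENTATION DEFINED, i.e. unusable, PMU) and
only replaces it when the temporary perf event it creates is backed by an arm_pmu that
reports a non-zero version. A hardware PMUVer of 0x0 therefore also comes back as 0xf,
which is the value kvm_perf_init() checks against. A rough sketch of that shape follows;
create_dummy_cycle_counter() is a stand-in for the real perf_event_attr setup and
perf_event_create_kernel_counter() call, not an actual helper.

int kvm_pmu_probe_pmuver(void)
{
	struct perf_event *event;
	int pmuver = 0xf;	/* assume "no usable PMUv3" until proven otherwise */

	event = create_dummy_cycle_counter();
	if (IS_ERR(event))
		return pmuver;

	if (event->pmu) {
		struct arm_pmu *pmu = to_arm_pmu(event->pmu);

		if (pmu->pmuver)	/* PMUVer 0x0 leaves the 0xf default in place */
			pmuver = pmu->pmuver;
	}

	perf_event_release_kernel(event);
	return pmuver;
}

The static inline stub added to arm_pmu.h for configurations without PMU support returns
0xf as well, so kvm_arm_pmu_available is only ever enabled when a real PMUv3 was found.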

Patch

diff --git a/arch/arm64/kvm/perf.c b/arch/arm64/kvm/perf.c
index 739164324afe..b8b398670ef2 100644
--- a/arch/arm64/kvm/perf.c
+++ b/arch/arm64/kvm/perf.c
@@ -50,12 +50,7 @@  static struct perf_guest_info_callbacks kvm_guest_cbs = {
 
 int kvm_perf_init(void)
 {
-	/*
-	 * Check if HW_PERF_EVENTS are supported by checking the number of
-	 * hardware performance counters. This could ensure the presence of
-	 * a physical PMU and CONFIG_PERF_EVENT is selected.
-	 */
-	if (IS_ENABLED(CONFIG_ARM_PMU) && perf_num_counters() > 0)
+	if (kvm_pmu_probe_pmuver() != 0xf)
 		static_branch_enable(&kvm_arm_pmu_available);
 
 	return perf_register_guest_info_callbacks(&kvm_guest_cbs);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index e32c6e139a09..fd167d4f4215 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -739,7 +739,7 @@  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(vcpu, select_idx);
 }
 
-static int kvm_pmu_probe_pmuver(void)
+int kvm_pmu_probe_pmuver(void)
 {
 	struct perf_event_attr attr = { };
 	struct perf_event *event;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 6fd3cda608e4..864b9997efb2 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -61,6 +61,7 @@  int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
 int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
 			    struct kvm_device_attr *attr);
 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
+int kvm_pmu_probe_pmuver(void);
 #else
 struct kvm_pmu {
 };
@@ -116,6 +117,9 @@  static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
 {
 	return 0;
 }
+
+static inline int kvm_pmu_probe_pmuver(void) { return 0xf; }
+
 #endif
 
 #endif