Message ID | 20190612190450.7085-5-andrew.murray@arm.com |
---|---|
State | New, archived |
Series | KVM: arm/arm64: add support for chained counters |
Hi Andrew,

On 12/06/2019 20:04, Andrew Murray wrote:
> We currently use pmc->bitmask to determine the width of the pmc - however
> it's superfluous as the pmc index already describes if the pmc is a cycle
> counter or event counter. The architecture clearly describes the widths of
> these counters.
> 
> Let's remove the bitmask to simplify the code.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  include/kvm/arm_pmu.h |  1 -
>  virt/kvm/arm/pmu.c    | 19 +++++++++----------
>  2 files changed, 9 insertions(+), 11 deletions(-)
> 
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index b73f31baca52..2f0e28dc5a9e 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -28,7 +28,6 @@
>  struct kvm_pmc {
>  	u8 idx;	/* index into the pmu->pmc array */
>  	struct perf_event *perf_event;
> -	u64 bitmask;
>  };
>  
>  struct kvm_pmu {
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index ae1e886d4a1a..88ce24ae0b45 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  		counter += perf_event_read_value(pmc->perf_event, &enabled,
>  						 &running);
>  
> -	return counter & pmc->bitmask;
> +	if (select_idx != ARMV8_PMU_CYCLE_IDX)
> +		counter = lower_32_bits(counter);

Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
want the lower 32bits of the cycle counter.

Cheers,

-- 
Julien Thierry
On Thu, Jun 13, 2019 at 08:30:51AM +0100, Julien Thierry wrote:
> Hi Andrew,
> 
> On 12/06/2019 20:04, Andrew Murray wrote:
> > We currently use pmc->bitmask to determine the width of the pmc - however
> > it's superfluous as the pmc index already describes if the pmc is a cycle
> > counter or event counter. The architecture clearly describes the widths of
> > these counters.
> > 
> > Let's remove the bitmask to simplify the code.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  include/kvm/arm_pmu.h |  1 -
> >  virt/kvm/arm/pmu.c    | 19 +++++++++----------
> >  2 files changed, 9 insertions(+), 11 deletions(-)
> > 
> > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > index b73f31baca52..2f0e28dc5a9e 100644
> > --- a/include/kvm/arm_pmu.h
> > +++ b/include/kvm/arm_pmu.h
> > @@ -28,7 +28,6 @@
> >  struct kvm_pmc {
> >  	u8 idx;	/* index into the pmu->pmc array */
> >  	struct perf_event *perf_event;
> > -	u64 bitmask;
> >  };
> >  
> >  struct kvm_pmu {
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index ae1e886d4a1a..88ce24ae0b45 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >  		counter += perf_event_read_value(pmc->perf_event, &enabled,
> >  						 &running);
> >  
> > -	return counter & pmc->bitmask;
> > +	if (select_idx != ARMV8_PMU_CYCLE_IDX)
> > +		counter = lower_32_bits(counter);
> 
> Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
> want the lower 32bits of the cycle counter.

Yes that's correct. The hunk should look like this:

-	return counter & pmc->bitmask;
+	if (!(select_idx == ARMV8_PMU_CYCLE_IDX &&
+	      __vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC))
+		counter = lower_32_bits(counter);
+
+	return counter;

Thanks for the review.

Andrew Murray

> 
> Cheers,
> 
> -- 
> Julien Thierry
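To make the width rule in the corrected hunk easy to check outside the kernel, here is a minimal standalone sketch of it: event counters are always read as 32 bits, and the cycle counter is only read as 64 bits when the guest has set PMCR_EL0.LC. The constant values are mirrored from the kernel's PMU definitions and the guest_visible() helper is made up for this example; neither is part of the patch.

```c
#include <stdint.h>
#include <stdio.h>

/* Values mirrored from the kernel's PMU definitions, for illustration only. */
#define ARMV8_PMU_CYCLE_IDX	31		/* index of the cycle counter */
#define ARMV8_PMU_PMCR_LC	(1U << 6)	/* PMCR_EL0.LC: 64-bit cycle counter */

/* Width of the value a guest read should observe, per the corrected hunk. */
static uint64_t guest_visible(uint64_t counter, unsigned int idx, uint32_t pmcr)
{
	if (!(idx == ARMV8_PMU_CYCLE_IDX && (pmcr & ARMV8_PMU_PMCR_LC)))
		counter = (uint32_t)counter;	/* equivalent of lower_32_bits() */
	return counter;
}

int main(void)
{
	uint64_t raw = 0x123456789abcULL;

	/* Event counter: always truncated to 32 bits, regardless of LC. */
	printf("%#llx\n", (unsigned long long)guest_visible(raw, 0, ARMV8_PMU_PMCR_LC));
	/* Cycle counter with PMCR.LC clear: also truncated to 32 bits. */
	printf("%#llx\n", (unsigned long long)guest_visible(raw, ARMV8_PMU_CYCLE_IDX, 0));
	/* Cycle counter with PMCR.LC set: full 64-bit value. */
	printf("%#llx\n", (unsigned long long)guest_visible(raw, ARMV8_PMU_CYCLE_IDX, ARMV8_PMU_PMCR_LC));
	return 0;
}
```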
On 13/06/2019 10:39, Andrew Murray wrote:
> On Thu, Jun 13, 2019 at 08:30:51AM +0100, Julien Thierry wrote:
>> Hi Andrew,
>>
>> On 12/06/2019 20:04, Andrew Murray wrote:
>>> We currently use pmc->bitmask to determine the width of the pmc - however
>>> it's superfluous as the pmc index already describes if the pmc is a cycle
>>> counter or event counter. The architecture clearly describes the widths of
>>> these counters.
>>>
>>> Let's remove the bitmask to simplify the code.
>>>
>>> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
>>> ---
>>>  include/kvm/arm_pmu.h |  1 -
>>>  virt/kvm/arm/pmu.c    | 19 +++++++++----------
>>>  2 files changed, 9 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>> index b73f31baca52..2f0e28dc5a9e 100644
>>> --- a/include/kvm/arm_pmu.h
>>> +++ b/include/kvm/arm_pmu.h
>>> @@ -28,7 +28,6 @@
>>>  struct kvm_pmc {
>>>  	u8 idx;	/* index into the pmu->pmc array */
>>>  	struct perf_event *perf_event;
>>> -	u64 bitmask;
>>>  };
>>>  
>>>  struct kvm_pmu {
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index ae1e886d4a1a..88ce24ae0b45 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>>>  		counter += perf_event_read_value(pmc->perf_event, &enabled,
>>>  						 &running);
>>>  
>>> -	return counter & pmc->bitmask;
>>> +	if (select_idx != ARMV8_PMU_CYCLE_IDX)
>>> +		counter = lower_32_bits(counter);
>>
>> Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
>> want the lower 32bits of the cycle counter.
> 
> Yes that's correct. The hunk should look like this:
> 
> -	return counter & pmc->bitmask;
> +	if (!(select_idx == ARMV8_PMU_CYCLE_IDX &&
> +	      __vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC))
> +		counter = lower_32_bits(counter);
> +
> +	return counter;

May be you could add a macro :

#define vcpu_pmu_counter_is_64bit(vcpu, idx) ?

Cheers
Suzuki
On Thu, Jun 13, 2019 at 05:50:43PM +0100, Suzuki K Poulose wrote:
> 
> 
> On 13/06/2019 10:39, Andrew Murray wrote:
> > On Thu, Jun 13, 2019 at 08:30:51AM +0100, Julien Thierry wrote:
> > > Hi Andrew,
> > > 
> > > On 12/06/2019 20:04, Andrew Murray wrote:
> > > > We currently use pmc->bitmask to determine the width of the pmc - however
> > > > it's superfluous as the pmc index already describes if the pmc is a cycle
> > > > counter or event counter. The architecture clearly describes the widths of
> > > > these counters.
> > > > 
> > > > Let's remove the bitmask to simplify the code.
> > > > 
> > > > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > > > ---
> > > >  include/kvm/arm_pmu.h |  1 -
> > > >  virt/kvm/arm/pmu.c    | 19 +++++++++----------
> > > >  2 files changed, 9 insertions(+), 11 deletions(-)
> > > > 
> > > > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > > > index b73f31baca52..2f0e28dc5a9e 100644
> > > > --- a/include/kvm/arm_pmu.h
> > > > +++ b/include/kvm/arm_pmu.h
> > > > @@ -28,7 +28,6 @@
> > > >  struct kvm_pmc {
> > > >  	u8 idx;	/* index into the pmu->pmc array */
> > > >  	struct perf_event *perf_event;
> > > > -	u64 bitmask;
> > > >  };
> > > >  
> > > >  struct kvm_pmu {
> > > > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > > > index ae1e886d4a1a..88ce24ae0b45 100644
> > > > --- a/virt/kvm/arm/pmu.c
> > > > +++ b/virt/kvm/arm/pmu.c
> > > > @@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> > > >  		counter += perf_event_read_value(pmc->perf_event, &enabled,
> > > >  						 &running);
> > > >  
> > > > -	return counter & pmc->bitmask;
> > > > +	if (select_idx != ARMV8_PMU_CYCLE_IDX)
> > > > +		counter = lower_32_bits(counter);
> > > 
> > > Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
> > > want the lower 32bits of the cycle counter.
> > 
> > Yes that's correct. The hunk should look like this:
> > 
> > -	return counter & pmc->bitmask;
> > +	if (!(select_idx == ARMV8_PMU_CYCLE_IDX &&
> > +	      __vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC))
> > +		counter = lower_32_bits(counter);
> > +
> > +	return counter;
> 
> May be you could add a macro :
> 
> #define vcpu_pmu_counter_is_64bit(vcpu, idx) ?

Yes I think a helper would be useful - though I'd prefer the name
'kvm_pmu_idx_is_long_cycle_counter'. This seems a little clearer as
you could otherwise argue that a chained counter is also 64 bits.

Thanks,

Andrew Murray

> 
> Cheers
> Suzuki
On 17/06/2019 16:43, Andrew Murray wrote:
> On Thu, Jun 13, 2019 at 05:50:43PM +0100, Suzuki K Poulose wrote:
>>
>>
>> On 13/06/2019 10:39, Andrew Murray wrote:
>>> On Thu, Jun 13, 2019 at 08:30:51AM +0100, Julien Thierry wrote:
>>>>> index ae1e886d4a1a..88ce24ae0b45 100644
>>>>> --- a/virt/kvm/arm/pmu.c
>>>>> +++ b/virt/kvm/arm/pmu.c
>>>>> @@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>>>>>  		counter += perf_event_read_value(pmc->perf_event, &enabled,
>>>>>  						 &running);
>>>>>  
>>>>> -	return counter & pmc->bitmask;
>>>>> +	if (select_idx != ARMV8_PMU_CYCLE_IDX)
>>>>> +		counter = lower_32_bits(counter);
>>>>
>>>> Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
>>>> want the lower 32bits of the cycle counter.
>>>
>>> Yes that's correct. The hunk should look like this:
>>>
>>> -	return counter & pmc->bitmask;
>>> +	if (!(select_idx == ARMV8_PMU_CYCLE_IDX &&
>>> +	      __vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC))
>>> +		counter = lower_32_bits(counter);
>>> +
>>> +	return counter;
>>
>> May be you could add a macro :
>>
>> #define vcpu_pmu_counter_is_64bit(vcpu, idx) ?
> 
> Yes I think a helper would be useful - though I'd prefer the name
> 'kvm_pmu_idx_is_long_cycle_counter'. This seems a little clearer as
> you could otherwise argue that a chained counter is also 64 bits.

When you get to add 64bit PMU counter (v8.5), this would be handy. So
having it de-coupled from the cycle counter may be a good idea. Anyways,
I leave that to you.

Cheers
Suzuki
On Mon, Jun 17, 2019 at 05:33:40PM +0100, Suzuki K Poulose wrote:
> 
> 
> On 17/06/2019 16:43, Andrew Murray wrote:
> > On Thu, Jun 13, 2019 at 05:50:43PM +0100, Suzuki K Poulose wrote:
> > > 
> > > 
> > > On 13/06/2019 10:39, Andrew Murray wrote:
> > > > On Thu, Jun 13, 2019 at 08:30:51AM +0100, Julien Thierry wrote:
> > > > > > index ae1e886d4a1a..88ce24ae0b45 100644
> > > > > > --- a/virt/kvm/arm/pmu.c
> > > > > > +++ b/virt/kvm/arm/pmu.c
> > > > > > @@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> > > > > >  		counter += perf_event_read_value(pmc->perf_event, &enabled,
> > > > > >  						 &running);
> > > > > >  
> > > > > > -	return counter & pmc->bitmask;
> > > > > > +	if (select_idx != ARMV8_PMU_CYCLE_IDX)
> > > > > > +		counter = lower_32_bits(counter);
> > > > > 
> > > > > Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
> > > > > want the lower 32bits of the cycle counter.
> > > > 
> > > > Yes that's correct. The hunk should look like this:
> > > > 
> > > > -	return counter & pmc->bitmask;
> > > > +	if (!(select_idx == ARMV8_PMU_CYCLE_IDX &&
> > > > +	      __vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC))
> > > > +		counter = lower_32_bits(counter);
> > > > +
> > > > +	return counter;
> > > 
> > > May be you could add a macro :
> > > 
> > > #define vcpu_pmu_counter_is_64bit(vcpu, idx) ?
> > 
> > Yes I think a helper would be useful - though I'd prefer the name
> > 'kvm_pmu_idx_is_long_cycle_counter'. This seems a little clearer as
> > you could otherwise argue that a chained counter is also 64 bits.
> 
> When you get to add 64bit PMU counter (v8.5), this would be handy. So
> having it de-coupled from the cycle counter may be a good idea. Anyways,
> I leave that to you.

Yes that thought crossed my mind. I'll take your suggestion afterall.

Thanks,

Andrew Murray

> 
> Cheers
> Suzuki
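The helper itself was never posted in this thread, so the following is only a rough sketch of what a kvm_pmu_idx_is_long_cycle_counter() along the lines discussed above might look like in virt/kvm/arm/pmu.c. The name follows Andrew's suggestion and the body merely factors out the condition from the corrected hunk; the version eventually merged may differ.

```c
/*
 * Sketch only -- not the code that was merged. True when reads of this
 * counter should be treated as 64-bit: the cycle counter with the guest's
 * PMCR_EL0.LC bit set. Everything else is truncated to 32 bits.
 */
static bool kvm_pmu_idx_is_long_cycle_counter(struct kvm_vcpu *vcpu,
					      u64 select_idx)
{
	return (select_idx == ARMV8_PMU_CYCLE_IDX) &&
	       (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC);
}
```

With such a helper, the read path in kvm_pmu_get_counter_value() reduces to `if (!kvm_pmu_idx_is_long_cycle_counter(vcpu, select_idx)) counter = lower_32_bits(counter);`, and the same test can choose between the 32-bit and 64-bit masks when computing attr.sample_period.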
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index b73f31baca52..2f0e28dc5a9e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -28,7 +28,6 @@
 struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
 	struct perf_event *perf_event;
-	u64 bitmask;
 };
 
 struct kvm_pmu {
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ae1e886d4a1a..88ce24ae0b45 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -47,7 +47,10 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 		counter += perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
 
-	return counter & pmc->bitmask;
+	if (select_idx != ARMV8_PMU_CYCLE_IDX)
+		counter = lower_32_bits(counter);
+
+	return counter;
 }
 
 /**
@@ -113,7 +116,6 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
 		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
 		pmu->pmc[i].idx = i;
-		pmu->pmc[i].bitmask = 0xffffffffUL;
 	}
 }
 
@@ -348,8 +350,6 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
  */
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 	u64 mask;
 	int i;
 
@@ -368,11 +368,6 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 		for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++)
 			kvm_pmu_set_counter_value(vcpu, i, 0);
 	}
-
-	if (val & ARMV8_PMU_PMCR_LC) {
-		pmc = &pmu->pmc[ARMV8_PMU_CYCLE_IDX];
-		pmc->bitmask = 0xffffffffffffffffUL;
-	}
 }
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
@@ -420,7 +415,11 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 
 	/* The initial sample period (overflow count) of an event. */
-	attr.sample_period = (-counter) & pmc->bitmask;
+	if (pmc->idx == ARMV8_PMU_CYCLE_IDX &&
+	    __vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_LC)
+		attr.sample_period = (-counter) & GENMASK(63, 0);
+	else
+		attr.sample_period = (-counter) & GENMASK(31, 0);
 
 	event = perf_event_create_kernel_counter(&attr, -1, current,
 						 kvm_pmu_perf_overflow, pmc);
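The last hunk also changes how the initial perf sample period is derived. perf fires the overflow callback after sample_period increments, so for an N-bit counter that the guest has programmed to `counter` the period is 2^N - counter, which is exactly (-counter) masked to N bits. Below is a minimal userspace model of that arithmetic; GENMASK64() is a stand-in written for this example, not the kernel macro.

```c
#include <stdint.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's GENMASK(), for illustration only. */
#define GENMASK64(h, l)	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

int main(void)
{
	/* Guest programs the counter close to the 32-bit wrap point. */
	uint64_t counter = 0xfffffff0ULL;

	/* 32-bit counter: the overflow fires after 0x10 more increments. */
	printf("32-bit period: %#llx\n",
	       (unsigned long long)((-counter) & GENMASK64(31, 0)));

	/* 64-bit cycle counter (PMCR.LC set): period is 2^64 - counter. */
	printf("64-bit period: %#llx\n",
	       (unsigned long long)((-counter) & GENMASK64(63, 0)));

	return 0;
}
```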
We currently use pmc->bitmask to determine the width of the pmc - however
it's superfluous as the pmc index already describes if the pmc is a cycle
counter or event counter. The architecture clearly describes the widths of
these counters.

Let's remove the bitmask to simplify the code.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 include/kvm/arm_pmu.h |  1 -
 virt/kvm/arm/pmu.c    | 19 +++++++++----------
 2 files changed, 9 insertions(+), 11 deletions(-)