
[v2,2/2] arm64: perf: Prevent wraparound during overflow

Message ID 1416587067-3220-3-git-send-email-daniel.thompson@linaro.org (mailing list archive)
State New, archived

Commit Message

Daniel Thompson Nov. 21, 2014, 4:24 p.m. UTC
If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow,
causing only events that occur *after* the overflow to be recorded.
Specifically, the problem occurs when the value of the performance
counter overtakes its original programmed value due to wraparound.
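
To make the failure mode concrete, here is a minimal userspace sketch of
the arithmetic (hypothetical and simplified; the kernel's actual update
path computes the delta modulo max_period in armpmu_event_update(), and
the names and numbers below are illustrative only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t left = 0xfffffff0;        /* period near the 0xffffffff limit */
	uint32_t prev = (uint32_t)-left;   /* counter programmed to 0x10 */

	/*
	 * The counter counts 'left' events, wraps past 0xffffffff
	 * (raising the overflow interrupt), then counts 0x20 more before
	 * the handler reads it -- overtaking 'prev'.
	 */
	uint32_t new = prev + left + 0x20; /* 32-bit wrap: new == 0x20 */

	/* The usual update: delta = (new - prev) mod 2^32. */
	uint32_t delta = new - prev;

	/* Prints delta = 0x10 rather than the true count of
	 * 0x100000010: the events before the overflow are lost. */
	printf("delta = 0x%x\n", delta);
	return 0;
}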

Typical solutions to this problem are either to avoid programming in
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter.

It's somewhat fiddly to refactor the code to correctly handle the 33rd
bit during irqsave sections (context switches, for example), so instead
we take the simpler approach of avoiding values likely to be overtaken.
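
For comparison, the 33rd-bit approach would look something like the
sketch below. This is illustrative only: read_pmu_counter() and
pmu_overflow_pending() are hypothetical helpers, and the fiddly part --
reading the counter and the overflow flag atomically with respect to
irqsave sections -- is precisely what the sketch glosses over.

/* Fold the overflow flag in as bit 32 of a logical 33-bit count. */
static uint64_t read_counter_33bit(void)
{
	uint64_t value = read_pmu_counter();    /* hypothetical 32-bit read */

	if (pmu_overflow_pending())             /* hypothetical flag check */
		value |= 1ULL << 32;

	return value;
}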

We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the
interrupt rate for large threshold values; however, even with a very
fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.
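
As a back-of-the-envelope check (assuming a 32-bit counter, so
max_period is 2^32 and the clamped period is 2^31):

	2^31 events / 4e9 events/s ~= 0.54s per interrupt, i.e. ~1-2Hz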

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
---
 arch/arm64/kernel/perf_event.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

Comments

Will Deacon Dec. 4, 2014, 10:27 a.m. UTC | #1
On Fri, Nov 21, 2014 at 04:24:27PM +0000, Daniel Thompson wrote:
> If the overflow threshold for a counter is set above or near the
> 0xffffffff boundary then the kernel may lose track of the overflow,
> causing only events that occur *after* the overflow to be recorded.
> Specifically, the problem occurs when the value of the performance
> counter overtakes its original programmed value due to wraparound.
> 
> Typical solutions to this problem are either to avoid programming in
> values likely to be overtaken or to treat the overflow bit as the 33rd
> bit of the counter.
> 
> It's somewhat fiddly to refactor the code to correctly handle the 33rd
> bit during irqsave sections (context switches, for example), so instead
> we take the simpler approach of avoiding values likely to be overtaken.
> 
> We set the limit to half of max_period because this matches the limit
> imposed in __hw_perf_event_init(). This causes a doubling of the
> interrupt rate for large threshold values; however, even with a very
> fast counter ticking at 4GHz the interrupt rate would only be ~1Hz.
> 
> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
> ---
>  arch/arm64/kernel/perf_event.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)

Thanks, applied.

Will


Patch

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index aa29ecb4f800..25a5308744b1 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -169,8 +169,14 @@ armpmu_event_set_period(struct perf_event *event,
 		ret = 1;
 	}
 
-	if (left > (s64)armpmu->max_period)
-		left = armpmu->max_period;
+	/*
+	 * Limit the maximum period to prevent the counter value
+	 * from overtaking the one we are about to program. In
+	 * effect we are reducing max_period to account for
+	 * interrupt latency (and we are being very conservative).
+	 */
+	if (left > (armpmu->max_period >> 1))
+		left = armpmu->max_period >> 1;
 
 	local64_set(&hwc->prev_count, (u64)-left);