[v3,04/12] arm: vdso: enforce monotonic and realtime as inline

Message ID 20171027222531.57223-1-salyzyn@android.com (mailing list archive)
State New, archived

Commit Message

Mark Salyzyn Oct. 27, 2017, 10:25 p.m. UTC
Ensure monotonic and realtime are inlined; a small price to pay for
a high-volume, common request.

Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Cc: James Morse <james.morse@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Andy Gross <andy.gross@linaro.org>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Andrew Pinski <apinski@cavium.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org

v2:
- split first CL into 4 of 7 pieces

v3:
- rebase (unchanged)

---
 arch/arm/vdso/vgettimeofday.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Comments

Mark Rutland Oct. 30, 2017, 2:10 p.m. UTC | #1
On Fri, Oct 27, 2017 at 03:25:28PM -0700, Mark Salyzyn wrote:
> Ensure monotonic and realtime are inlined; a small price to pay for
> a high-volume, common request.

Does this make a noticeable difference on any workload?

What does this do to the binary size?

Thanks,
Mark.

> 
> Signed-off-by: Mark Salyzyn <salyzyn@android.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Andy Lutomirski <luto@amacapital.net>
> Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
> Cc: John Stultz <john.stultz@linaro.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Andy Gross <andy.gross@linaro.org>
> Cc: Kevin Brodsky <kevin.brodsky@arm.com>
> Cc: Andrew Pinski <apinski@cavium.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-arm-kernel@lists.infradead.org
> 
> v2:
> - split first CL into 4 of 7 pieces
> 
> v3:
> - rebase (unchanged)
> 
> ---
>  arch/arm/vdso/vgettimeofday.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
> index 5f596911bd53..71003a1997c4 100644
> --- a/arch/arm/vdso/vgettimeofday.c
> +++ b/arch/arm/vdso/vgettimeofday.c
> @@ -99,7 +99,7 @@ static notrace int do_monotonic_coarse(const struct vdso_data *vd,
>  
>  #ifdef CONFIG_ARM_ARCH_TIMER
>  
> -static notrace u64 get_ns(const struct vdso_data *vd)
> +static __always_inline notrace u64 get_ns(const struct vdso_data *vd)
>  {
>  	u64 cycle_delta;
>  	u64 cycle_now;
> @@ -115,7 +115,9 @@ static notrace u64 get_ns(const struct vdso_data *vd)
>  	return nsec;
>  }
>  
> -static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
> +/* Code size doesn't matter (vdso is 4k/16k/64k anyway) and this is faster. */
> +static __always_inline notrace int do_realtime(const struct vdso_data *vd,
> +					       struct timespec *ts)
>  {
>  	u64 nsecs;
>  	u32 seq;
> @@ -137,7 +139,8 @@ static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
>  	return 0;
>  }
>  
> -static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
> +static __always_inline notrace int do_monotonic(const struct vdso_data *vd,
> +						struct timespec *ts)
>  {
>  	struct timespec tomono;
>  	u64 nsecs;
> -- 
> 2.15.0.rc2.357.g7e34df9404-goog
>
Russell King (Oracle) Oct. 30, 2017, 3:59 p.m. UTC | #2
On Fri, Oct 27, 2017 at 03:25:28PM -0700, Mark Salyzyn wrote:
> Ensure monotonic and realtime are inlined; a small price to pay for
> a high-volume, common request.

Is this just based on a hunch, or is it based on proper measurement?
If proper measurement, where's the data?  What CPU was it measured
with?  How does this change affect other CPUs?
Mark Salyzyn Oct. 31, 2017, 3:28 p.m. UTC | #3
On 10/30/2017 08:59 AM, Russell King - ARM Linux wrote:
> On Fri, Oct 27, 2017 at 03:25:28PM -0700, Mark Salyzyn wrote:
>> Ensure monotonic and realtime are inlined; a small price to pay for
>> a high-volume, common request.
> Is this just based on a hunch, or is it based on proper measurement?
> If proper measurement, where's the data?  What CPU was it measured
> with?  How does this change affect other CPUs?
>
It tested faster in the past. The story today is less conclusive, and the
change is not worth it.

[TL;DR]

Code size in all cases is about half of a 4K page, and the change in size
is not significant with the patch either in or out.

It was originally coded to match the arm64 assembler. I tested it when I
was first formulating the series and found a 2-4% improvement on arm
(Nexus 6, backport to 3.10) and arm64 (Nexus 6P, backport to 3.18). But
that was (a technological) eon ago.

However, retested as-is today, side by side with and without the patch:
clock_gettime for CLOCK_MONOTONIC, CLOCK_BOOTTIME and CLOCK_REALTIME;
cores locked; affinity set to the littles (0-3); 50M iterations; device
cooled down for 15 minutes between (vdso64+vdso32) runs; 16 runs each,
averaged, on a Hikey960 with a 4.9 kernel, GCC 4.9 -O2, and the complete
private patch stack that has vdso32. I get a slightly different story:

vdso64:
  realtime:  -4.8% (worse)
  monotonic: +1.9% (better)
  boottime:  +3.2%

vdso32:
  realtime:  +4.7% (better)
  monotonic: +3.2%
  boottime:  +3.7%
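
For reference, a user-space loop along these lines reproduces this kind of
measurement. It is a minimal sketch, not the exact harness behind the
numbers above; the iteration count, clock IDs and CPU affinity are simply
lifted from the description in this mail, everything else is illustrative:

/*
 * Minimal sketch of a clock_gettime() measurement loop -- NOT the
 * exact harness behind the quoted numbers.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 50000000UL

static void bench(clockid_t clk, const char *name)
{
	struct timespec start, end, ts;
	unsigned long i;
	double elapsed;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++)
		clock_gettime(clk, &ts);
	clock_gettime(CLOCK_MONOTONIC, &end);

	elapsed = (end.tv_sec - start.tv_sec) +
		  (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%-16s %.1f ns/call\n", name, elapsed * 1e9 / ITERATIONS);
}

int main(void)
{
	cpu_set_t set;
	int cpu;

	/* Pin to the little cores (0-3), matching the affinity above. */
	CPU_ZERO(&set);
	for (cpu = 0; cpu < 4; cpu++)
		CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	bench(CLOCK_REALTIME, "CLOCK_REALTIME");
	bench(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
	bench(CLOCK_BOOTTIME, "CLOCK_BOOTTIME");
	return 0;
}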

The maximum deviation on the sample runs was on the order of +/-1%. I 
cannot explain the (highly repeatable) anomaly of why vdso64 realtime is 
slower, yet vdso32 is equally faster. realtime is unique in the set in 
that a common routine serves both __vdso_clock_gettime and 
__vdso_gettimeofday, and that is where I expected the gains (the hunch).
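
To illustrate that point, the structure in question looks roughly like the
sketch below (paraphrased from arch/arm/vdso/vgettimeofday.c, not an exact
copy of the upstream code): do_realtime() is the one helper reached from
both user-visible entry points, so forcing it inline duplicates the fast
path into each caller instead of sharing a single out-of-line copy.

/* Paraphrased structural sketch -- not the exact upstream code. */
notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
{
	const struct vdso_data *vd = __get_datapage();

	switch (clkid) {
	case CLOCK_REALTIME:
		return do_realtime(vd, ts);		/* caller 1 */
	case CLOCK_MONOTONIC:
		return do_monotonic(vd, ts);
	default:
		/* coarse clocks and the syscall fallback elided */
		return -1;
	}
}

notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
	const struct vdso_data *vd = __get_datapage();
	struct timespec ts;

	if (tv) {
		if (do_realtime(vd, &ts))		/* caller 2 */
			return -1;			/* fallback elided */
		tv->tv_sec = ts.tv_sec;
		tv->tv_usec = ts.tv_nsec / 1000;
	}
	/* timezone handling elided */
	return 0;
}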

I have tried other combinations of forced inlining to try to recover the 
clock_gettime(CLOCK_REALTIME) speed, and it turned into a slippery tuning 
exercise. As such, I have come to the conclusion that, given the (small?) 
gains, it is better to trust the C compiler (especially if this is used by 
a wider set of architectures) and drop this patch (and its side effect for 
boottime) from the series.

It should be noted that, on the same test bench, the new C-coded vdso64 is 
+2.9% and +11% faster for realtime and monotonic respectively over the 
hand-coded assembler it replaces. Additional props to the C compiler for 
doing the "right thing".

-- Mark

Patch

diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
index 5f596911bd53..71003a1997c4 100644
--- a/arch/arm/vdso/vgettimeofday.c
+++ b/arch/arm/vdso/vgettimeofday.c
@@ -99,7 +99,7 @@  static notrace int do_monotonic_coarse(const struct vdso_data *vd,
 
 #ifdef CONFIG_ARM_ARCH_TIMER
 
-static notrace u64 get_ns(const struct vdso_data *vd)
+static __always_inline notrace u64 get_ns(const struct vdso_data *vd)
 {
 	u64 cycle_delta;
 	u64 cycle_now;
@@ -115,7 +115,9 @@  static notrace u64 get_ns(const struct vdso_data *vd)
 	return nsec;
 }
 
-static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
+/* Code size doesn't matter (vdso is 4k/16k/64k anyway) and this is faster. */
+static __always_inline notrace int do_realtime(const struct vdso_data *vd,
+					       struct timespec *ts)
 {
 	u64 nsecs;
 	u32 seq;
@@ -137,7 +139,8 @@  static notrace int do_realtime(const struct vdso_data *vd, struct timespec *ts)
 	return 0;
 }
 
-static notrace int do_monotonic(const struct vdso_data *vd, struct timespec *ts)
+static __always_inline notrace int do_monotonic(const struct vdso_data *vd,
+						struct timespec *ts)
 {
 	struct timespec tomono;
 	u64 nsecs;