
x86/clear_page: Update clear_page_sse2() after dropping 32bit Xen

Message ID 1560800999-11592-1-git-send-email-andrew.cooper3@citrix.com (mailing list archive)
State New, archived

Commit Message

Andrew Cooper June 17, 2019, 7:49 p.m. UTC
This code was never updated when the 32bit build of Xen was dropped.

 * Expand the now-redundant ptr_reg macro.
 * The number of iterations in the loop can be halved by using 64bit writes,
   without consuming any extra execution resource in the pipeline.  Adjust all
   numbers/offsets appropriately.
 * Replace dec with sub to avoid an eflags stall, and position it to be
   macro-fused with the related jnz.
 * With no need to preserve eflags across the body of the loop, replace lea
   with add, which has 1/3rd the latency on basically all 64bit hardware.

A quick userspace perf test on my Haswell dev box indicates that the old
version takes ~1385 cycles on average (ignoring outliers), and the new version
takes ~1060 cycles, or about 77% of the time.

Reported-by: Edwin Török <edvin.torok@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Edwin Török <edvin.torok@citrix.com>

There is almost certainly room for further improvement, especially now that we
have alternatives, but this is a substantial improvement which is very safe
for backport.
---
 xen/arch/x86/clear_page.S | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

Comments

Andrew Cooper June 17, 2019, 8:04 p.m. UTC | #1
On 17/06/2019 20:49, Andrew Cooper wrote:
> This code was never updated when the 32bit build of Xen was dropped.
>
>  * Expand the now-redundant ptr_reg macro.
>  * The number of iterations in the loop can be halved by using 64bit writes,
>    without consuming any extra execution resource in the pipeline.  Adjust all
>    numbers/offsets appropriately.
>  * Replace dec with sub to avoid an eflags stall, and position it to be
>    macro-fused with the related jnz.
>  * With no need to preserve eflags across the body of the loop, replace lea
>    with add, which has 1/3rd the latency on basically all 64bit hardware.
>
> A quick userspace perf test on my Haswell dev box indicates that the old
> version takes ~1385 cycles on average (ignoring outliers), and the new version
> takes ~1060 cycles, or about 77% of the time.

And just for giggles, a rep stosq loop on this hardware is ~180 cycles,
which is more than 5 times better than the result of this patch (16% of
the time).

In some copious free time, clear/copy page ought to become runtime-dependent
on FastString being enabled, but I don't have time to organise this right now.

~Andrew
Jan Beulich June 18, 2019, 10:33 a.m. UTC | #2
>>> On 17.06.19 at 21:49, <andrew.cooper3@citrix.com> wrote:
> This code was never updated when the 32bit build of Xen was dropped.
> 
>  * Expand the now-redundant ptr_reg macro.
>  * The number of iterations in the loop can be halved by using 64bit writes,
>    without consuming any extra execution resource in the pipeline.  Adjust all
>    numbers/offsets appropriately.
>  * Replace dec with sub to avoid an eflags stall, and position it to be
>    macro-fused with the related jnz.
>  * With no need to preserve eflags across the body of the loop, replace lea
>    with add, which has 1/3rd the latency on basically all 64bit hardware.
> 
> A quick userspace perf test on my Haswell dev box indicates that the old
> version takes ~1385 cycles on average (ignoring outliers), and the new version
> takes ~1060 cycles, or about 77% of the time.
> 
> Reported-by: Edwin Török <edvin.torok@citrix.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Assuming you want this to go in despite your REP STOSQ remark
later on,
Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> --- a/xen/arch/x86/clear_page.S
> +++ b/xen/arch/x86/clear_page.S
> @@ -2,18 +2,16 @@
>  
>  #include <asm/page.h>
>  
> -#define ptr_reg %rdi
> -
>  ENTRY(clear_page_sse2)
> -        mov     $PAGE_SIZE/16, %ecx
> +        mov     $PAGE_SIZE/32, %ecx
>          xor     %eax,%eax
>  
> -0:      dec     %ecx
> -        movnti  %eax, (ptr_reg)
> -        movnti  %eax, 4(ptr_reg)
> -        movnti  %eax, 8(ptr_reg)
> -        movnti  %eax, 12(ptr_reg)
> -        lea     16(ptr_reg), ptr_reg
> +0:      movnti  %rax,  0(%rdi)

Could I talk you into leaving out this 0? Rather old gas actually emits
an 8-bit displacement when it's spelled like this.

Jan
Andrew Cooper June 18, 2019, 10:35 a.m. UTC | #3
On 18/06/2019 11:33, Jan Beulich wrote:
>>>> On 17.06.19 at 21:49, <andrew.cooper3@citrix.com> wrote:
>> This code was never updated when the 32bit build of Xen was dropped.
>>
>>  * Expand the now-redundant ptr_reg macro.
>>  * The number of iterations in the loop can be halved by using 64bit writes,
>>    without consuming any extra execution resource in the pipeline.  Adjust all
>>    numbers/offsets appropriately.
>>  * Replace dec with sub to avoid an eflags stall, and position it to be
>>    macro-fused with the related jnz.
>>  * With no need to preserve eflags across the body of the loop, replace lea
>>    with add, which has 1/3rd the latency on basically all 64bit hardware.
>>
>> A quick userspace perf test on my Haswell dev box indicates that the old
>> version takes ~1385 cycles on average (ignoring outliers), and the new version
>> takes ~1060 cycles, or about 77% of the time.
>>
>> Reported-by: Edwin Török <edvin.torok@citrix.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Assuming you want this to go in despite your REP STOSQ remark
> later on,
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one remark:
>
>> --- a/xen/arch/x86/clear_page.S
>> +++ b/xen/arch/x86/clear_page.S
>> @@ -2,18 +2,16 @@
>>  
>>  #include <asm/page.h>
>>  
>> -#define ptr_reg %rdi
>> -
>>  ENTRY(clear_page_sse2)
>> -        mov     $PAGE_SIZE/16, %ecx
>> +        mov     $PAGE_SIZE/32, %ecx
>>          xor     %eax,%eax
>>  
>> -0:      dec     %ecx
>> -        movnti  %eax, (ptr_reg)
>> -        movnti  %eax, 4(ptr_reg)
>> -        movnti  %eax, 8(ptr_reg)
>> -        movnti  %eax, 12(ptr_reg)
>> -        lea     16(ptr_reg), ptr_reg
>> +0:      movnti  %rax,  0(%rdi)
> Could I talk you into leaving out this 0? Rather old gas actually emits
> an 8-bit displacement when it's spelled like this.

Oh ok.  I'll still align the (%rdi) though to make the column easier to
read.

I'll put this in now, and see if I can find some time before 4.13 ships
to make some alternatives-based better options.

~Andrew

Patch

diff --git a/xen/arch/x86/clear_page.S b/xen/arch/x86/clear_page.S
index 243a767..0817610 100644
--- a/xen/arch/x86/clear_page.S
+++ b/xen/arch/x86/clear_page.S
@@ -2,18 +2,16 @@ 
 
 #include <asm/page.h>
 
-#define ptr_reg %rdi
-
 ENTRY(clear_page_sse2)
-        mov     $PAGE_SIZE/16, %ecx
+        mov     $PAGE_SIZE/32, %ecx
         xor     %eax,%eax
 
-0:      dec     %ecx
-        movnti  %eax, (ptr_reg)
-        movnti  %eax, 4(ptr_reg)
-        movnti  %eax, 8(ptr_reg)
-        movnti  %eax, 12(ptr_reg)
-        lea     16(ptr_reg), ptr_reg
+0:      movnti  %rax,  0(%rdi)
+        movnti  %rax,  8(%rdi)
+        movnti  %rax, 16(%rdi)
+        movnti  %rax, 24(%rdi)
+        add     $32, %rdi
+        sub     $1, %ecx
         jnz     0b
 
         sfence