Message ID | 4637f0f2-2da9-1056-37bf-17c0861b6bff@gmail.com (mailing list archive)
---|---
State | New, archived
Series | riscv: improving uaccess with logs from network bench
From: Akira Tsukamoto
> Sent: 19 June 2021 12:43
>
> In the lucky situation that both the source and destination addresses are
> on an aligned boundary, perform loads and stores of register size to copy
> the data.
>
> Without the unrolling, the speed will be reduced, since the next store
> instruction, which uses the same register as the preceding load, will
> stall the pipeline.
...
> diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
> index e2e57551fc76..bceb0629e440 100644
> --- a/arch/riscv/lib/uaccess.S
> +++ b/arch/riscv/lib/uaccess.S
> @@ -67,6 +67,39 @@ ENTRY(__asm_copy_from_user)
> 	bnez	a3, .Lshift_copy
>
>  .Lword_copy:
> +	/*
> +	 * Both src and dst are aligned, unrolled word copy
> +	 *
> +	 * a0 - start of aligned dst
> +	 * a1 - start of aligned src
> +	 * a3 - a1 & mask:(SZREG-1)
> +	 * t0 - end of aligned dst
> +	 */
> +	addi	t0, t0, -(8*SZREG-1) /* not to over run */
> +2:
> +	fixup REG_L   a4,        0(a1), 10f
> +	fixup REG_L   a5,    SZREG(a1), 10f
> +	fixup REG_L   a6,  2*SZREG(a1), 10f
> +	fixup REG_L   a7,  3*SZREG(a1), 10f
> +	fixup REG_L   t1,  4*SZREG(a1), 10f
> +	fixup REG_L   t2,  5*SZREG(a1), 10f
> +	fixup REG_L   t3,  6*SZREG(a1), 10f
> +	fixup REG_L   t4,  7*SZREG(a1), 10f
> +	fixup REG_S   a4,        0(a0), 10f
> +	fixup REG_S   a5,    SZREG(a0), 10f
> +	fixup REG_S   a6,  2*SZREG(a0), 10f
> +	fixup REG_S   a7,  3*SZREG(a0), 10f
> +	fixup REG_S   t1,  4*SZREG(a0), 10f
> +	fixup REG_S   t2,  5*SZREG(a0), 10f
> +	fixup REG_S   t3,  6*SZREG(a0), 10f
> +	fixup REG_S   t4,  7*SZREG(a0), 10f
> +	addi	a0, a0, 8*SZREG
> +	addi	a1, a1, 8*SZREG
> +	bltu	a0, t0, 2b
> +
> +	addi	t0, t0, 8*SZREG-1 /* revert to original value */
> +	j	.Lbyte_copy_tail
> +

Are there any riscv chips that can do a memory read and a
memory write in the same cycle but don't have significant
'out of order' execution?

Such chips will execute that code very badly.
Or, rather, there are loops that allow concurrent read+write
that will be a lot faster.

Also on a cpu that can execute a memory read/write
at the same time as an add (probably anything superscalar)
you want to move the two 'addi' further up so they get
executed 'for free'.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
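To make that point concrete, here is a minimal standalone RV64 sketch of the kind of loop David is describing, where each iteration overlaps the next load with the previous store. The routine name and register assignments (a0 = dst, a1 = src, a2 = doubleword count >= 1, both pointers 8-byte aligned) are assumed purely for illustration; this is not the code in the patch and has no user-access fault handling.

copy_pipelined:
	ld	t1, 0(a1)	/* prime the pipeline with the first word */
	addi	a2, a2, -1
	beqz	a2, 2f
1:
	ld	t2, 8(a1)	/* issue the next load ...                 */
	sd	t1, 0(a0)	/* ... while the previous word is stored   */
	addi	a1, a1, 8	/* pointer updates can pair with the       */
	addi	a0, a0, 8	/* memory accesses on a superscalar core   */
	addi	a2, a2, -1
	mv	t1, t2
	bnez	a2, 1b
2:
	sd	t1, 0(a0)	/* drain the last word */
	ret

Each iteration issues one load and one store to independent addresses, so a core with separate load and store paths can keep both busy even without out-of-order execution; the cost is an extra register move per word and a much shallower unroll.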
On 6/21/2021 8:55 PM, David Laight wrote:
> From: Akira Tsukamoto
>> Sent: 19 June 2021 12:43
>>
>> In the lucky situation that both the source and destination addresses are
>> on an aligned boundary, perform loads and stores of register size to copy
>> the data.
>>
>> Without the unrolling, the speed will be reduced, since the next store
>> instruction, which uses the same register as the preceding load, will
>> stall the pipeline.
> ...
>> diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
>> index e2e57551fc76..bceb0629e440 100644
>> --- a/arch/riscv/lib/uaccess.S
>> +++ b/arch/riscv/lib/uaccess.S
>> @@ -67,6 +67,39 @@ ENTRY(__asm_copy_from_user)
>> 	bnez	a3, .Lshift_copy
>>
>>  .Lword_copy:
>> +	/*
>> +	 * Both src and dst are aligned, unrolled word copy
>> +	 *
>> +	 * a0 - start of aligned dst
>> +	 * a1 - start of aligned src
>> +	 * a3 - a1 & mask:(SZREG-1)
>> +	 * t0 - end of aligned dst
>> +	 */
>> +	addi	t0, t0, -(8*SZREG-1) /* not to over run */
>> +2:
>> +	fixup REG_L   a4,        0(a1), 10f
>> +	fixup REG_L   a5,    SZREG(a1), 10f
>> +	fixup REG_L   a6,  2*SZREG(a1), 10f
>> +	fixup REG_L   a7,  3*SZREG(a1), 10f
>> +	fixup REG_L   t1,  4*SZREG(a1), 10f
>> +	fixup REG_L   t2,  5*SZREG(a1), 10f
>> +	fixup REG_L   t3,  6*SZREG(a1), 10f
>> +	fixup REG_L   t4,  7*SZREG(a1), 10f
>> +	fixup REG_S   a4,        0(a0), 10f
>> +	fixup REG_S   a5,    SZREG(a0), 10f
>> +	fixup REG_S   a6,  2*SZREG(a0), 10f
>> +	fixup REG_S   a7,  3*SZREG(a0), 10f
>> +	fixup REG_S   t1,  4*SZREG(a0), 10f
>> +	fixup REG_S   t2,  5*SZREG(a0), 10f
>> +	fixup REG_S   t3,  6*SZREG(a0), 10f
>> +	fixup REG_S   t4,  7*SZREG(a0), 10f
>> +	addi	a0, a0, 8*SZREG
>> +	addi	a1, a1, 8*SZREG
>> +	bltu	a0, t0, 2b
>> +
>> +	addi	t0, t0, 8*SZREG-1 /* revert to original value */
>> +	j	.Lbyte_copy_tail
>> +
>
> Are there any riscv chips that can do a memory read and a
> memory write in the same cycle but don't have significant
> 'out of order' execution?
>
> Such chips will execute that code very badly.
> Or, rather, there are loops that allow concurrent read+write
> that will be a lot faster.

For the above two paragraphs, the BOOM core will probably be one of them,
and perhaps the U8, but I have not had a chance to try them.

I have run the benchmarks with both the unrolled and non-unrolled
load/store loops, and the unrolled version was always faster on current
cores.

We could discuss ways to optimize further when out-of-order cores come to
market, comparing benchmark results on real hardware.

I do understand your comments about concurrent read+write, which you have
also mentioned in the other thread. I just would like to make the current
risc-v better as soon as possible, since the difference is significant.

> Also on a cpu that can execute a memory read/write
> at the same time as an add (probably anything superscalar)
> you want to move the two 'addi' further up so they get
> executed 'for free'.

The original assembler version of memcpy does move the `addi` up a few
lines. You really know the internals. I am balancing between keeping the
code easy to understand, so the patches can go upstream, and optimizing
it further. If you would like, I will move the `addi` up when squashing
the patches, in a way that does not break bisecting.

Akira
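For reference, one way the hoisted `addi` could look in the loop from this patch, reusing the same registers and the `fixup`/`SZREG` macros. This is only a sketch of David's suggestion, not code that has been posted or benchmarked; the offset of the final store is adjusted because a0 is updated before it.

2:
	fixup REG_L   a4,        0(a1), 10f
	fixup REG_L   a5,    SZREG(a1), 10f
	fixup REG_L   a6,  2*SZREG(a1), 10f
	fixup REG_L   a7,  3*SZREG(a1), 10f
	fixup REG_L   t1,  4*SZREG(a1), 10f
	fixup REG_L   t2,  5*SZREG(a1), 10f
	fixup REG_L   t3,  6*SZREG(a1), 10f
	fixup REG_L   t4,  7*SZREG(a1), 10f
	addi	a1, a1, 8*SZREG		/* src update hoisted above the stores */
	fixup REG_S   a4,        0(a0), 10f
	fixup REG_S   a5,    SZREG(a0), 10f
	fixup REG_S   a6,  2*SZREG(a0), 10f
	fixup REG_S   a7,  3*SZREG(a0), 10f
	fixup REG_S   t1,  4*SZREG(a0), 10f
	fixup REG_S   t2,  5*SZREG(a0), 10f
	fixup REG_S   t3,  6*SZREG(a0), 10f
	addi	a0, a0, 8*SZREG		/* dst update hoisted before the last store */
	fixup REG_S   t4,   -SZREG(a0), 10f	/* offset adjusted for the new a0 */
	bltu	a0, t0, 2b

Whether this is worthwhile depends on how many instructions the core can issue per cycle and on how it handles the address dependency of the final store; the exact placement is a tuning decision best settled by benchmarking on real hardware.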
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index e2e57551fc76..bceb0629e440 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -67,6 +67,39 @@ ENTRY(__asm_copy_from_user)
 	bnez	a3, .Lshift_copy
 
 .Lword_copy:
+	/*
+	 * Both src and dst are aligned, unrolled word copy
+	 *
+	 * a0 - start of aligned dst
+	 * a1 - start of aligned src
+	 * a3 - a1 & mask:(SZREG-1)
+	 * t0 - end of aligned dst
+	 */
+	addi	t0, t0, -(8*SZREG-1) /* not to over run */
+2:
+	fixup REG_L   a4,        0(a1), 10f
+	fixup REG_L   a5,    SZREG(a1), 10f
+	fixup REG_L   a6,  2*SZREG(a1), 10f
+	fixup REG_L   a7,  3*SZREG(a1), 10f
+	fixup REG_L   t1,  4*SZREG(a1), 10f
+	fixup REG_L   t2,  5*SZREG(a1), 10f
+	fixup REG_L   t3,  6*SZREG(a1), 10f
+	fixup REG_L   t4,  7*SZREG(a1), 10f
+	fixup REG_S   a4,        0(a0), 10f
+	fixup REG_S   a5,    SZREG(a0), 10f
+	fixup REG_S   a6,  2*SZREG(a0), 10f
+	fixup REG_S   a7,  3*SZREG(a0), 10f
+	fixup REG_S   t1,  4*SZREG(a0), 10f
+	fixup REG_S   t2,  5*SZREG(a0), 10f
+	fixup REG_S   t3,  6*SZREG(a0), 10f
+	fixup REG_S   t4,  7*SZREG(a0), 10f
+	addi	a0, a0, 8*SZREG
+	addi	a1, a1, 8*SZREG
+	bltu	a0, t0, 2b
+
+	addi	t0, t0, 8*SZREG-1 /* revert to original value */
+	j	.Lbyte_copy_tail
+
 .Lshift_copy:
 
 	/*
In the lucky situation that both the source and destination addresses are
on an aligned boundary, perform loads and stores of register size to copy
the data.

Without the unrolling, the speed will be reduced, since the next store
instruction, which uses the same register as the preceding load, will
stall the pipeline.

Signed-off-by: Akira Tsukamoto <akira.tsukamoto@gmail.com>
---
 arch/riscv/lib/uaccess.S | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
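For contrast with the unrolled loop above, this is roughly what the non-unrolled word copy the commit message alludes to would look like (RV64, illustrative registers, no fault handling; not code from this series):

1:
	ld	a4, 0(a1)	/* load one word ...                           */
	sd	a4, 0(a0)	/* ... the store must wait for the load data   */
	addi	a1, a1, 8
	addi	a0, a0, 8
	bltu	a0, t0, 1b	/* t0 = end of dst, as in the patch */

Every store consumes the value produced by the load immediately before it, so a simple in-order pipeline stalls on each iteration; loading eight words into eight different registers before storing any of them, as the patch does, breaks that dependency chain.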