| Message ID | 1626747709-34013-4-git-send-email-linyunsheng@huawei.com (mailing list archive) |
|---|---|
| State | Not Applicable |
| Series | refactor the ringtest testing for ptr_ring |

| Context | Check | Description |
|---|---|---|
| netdev/tree_selection | success | Not a local patch |
From: Yunsheng Lin
> Sent: 20 July 2021 03:22
>
> As x86 and arm64 are the two systems I can build and test the
> cpu_relax() implementation on, only add cpu_relax() implementations
> for x86 and arm64; other arches can be added easily when needed.
>
> ...
> +#if defined(__i386__) || defined(__x86_64__)
> +/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
> +static __always_inline void rep_nop(void)
> +{
> +	asm volatile("rep; nop" ::: "memory");
> +}

Beware, Intel increased the stall for 'rep nop' on some recent
CPUs to, IIRC, about 200 cycles.

They even document that this might have a detrimental effect.
It is basically far too long for the sort of thing it makes
sense to busy-wait for.

	David
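As an illustration of the trade-off David describes, here is a minimal,
hypothetical sketch (not part of the series; cpu_relax() is assumed to
come from the patch above, and SPIN_LIMIT and the helper name are made
up) of bounding the spin and yielding once the wait turns out not to be
short:

/* Hypothetical sketch, not from the series: if each cpu_relax() can
 * stall for ~100-200 cycles, the loop's reaction time is quantized to
 * that stall, so one common pattern is to cap the spin and then yield.
 * SPIN_LIMIT is an arbitrary illustrative bound. */
#include <sched.h>

#define SPIN_LIMIT 64

static void wait_for(volatile int *flag)
{
	unsigned int spins = 0;

	while (!*flag) {
		if (spins++ < SPIN_LIMIT)
			cpu_relax();	/* short wait: stay on the CPU */
		else
			sched_yield();	/* long wait: let others run */
	}
}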
On 2021/7/22 4:53, David Laight wrote:
> From: Yunsheng Lin
>> Sent: 20 July 2021 03:22
>>
>> As x86 and arm64 are the two systems I can build and test the
>> cpu_relax() implementation on, only add cpu_relax() implementations
>> for x86 and arm64; other arches can be added easily when needed.
>>
> ...
>> +#if defined(__i386__) || defined(__x86_64__)
>> +/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
>> +static __always_inline void rep_nop(void)
>> +{
>> +	asm volatile("rep; nop" ::: "memory");
>> +}
>
> Beware, Intel increased the stall for 'rep nop' on some recent
> CPUs to, IIRC, about 200 cycles.
>
> They even document that this might have a detrimental effect.
> It is basically far too long for the sort of thing it makes
> sense to busy-wait for.

Thanks for the info :)
I will bear that in mind when playing with 'rep nop' on newer
x86 CPUs.
> > Beware, Intel increased the stall for 'rep nop' on some recent
> > CPUs to, IIRC, about 200 cycles.
> >
> > They even document that this might have a detrimental effect.
> > It is basically far too long for the sort of thing it makes
> > sense to busy-wait for.
>
> Thanks for the info :)
> I will bear that in mind when playing with 'rep nop' on newer
> x86 CPUs.

See "8.4.7 Pause Latency in Skylake Microarchitecture" in the
Intel® 64 and IA-32 Architectures Optimization Reference Manual:

	The latency of the PAUSE instruction in prior generation
	microarchitectures is about 10 cycles, whereas on Skylake
	microarchitecture it has been extended to as many as 140 cycles.

An earlier section does explain why you need PAUSE, though.
One of its effects is to stop the cpu speculatively executing
multiple iterations of the wait loop - each with its own pending
read of the memory location that is being looked at.
Unwinding that isn't free - and was particularly expensive on
P4 Netburst - what a surprise, they ran everything except
benchmark loops very slowly.

	David
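To make that concrete, here is a minimal standalone sketch of the kind
of wait loop being discussed (illustrative only, not code from the
series): without the PAUSE hint the CPU may speculatively run several
iterations at once, each with its own pending read of the flag.

#include <stdatomic.h>

static inline void cpu_relax(void)
{
	/* PAUSE on x86: tells the CPU this is a spin-wait loop, so it
	 * stops speculating further iterations and saves some power. */
	asm volatile("rep; nop" ::: "memory");
}

static void spin_until_set(atomic_int *flag)
{
	while (!atomic_load_explicit(flag, memory_order_acquire))
		cpu_relax();
}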
diff --git a/tools/include/asm/processor.h b/tools/include/asm/processor.h
new file mode 100644
index 0000000..f9b3902
--- /dev/null
+++ b/tools/include/asm/processor.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __TOOLS_LINUX_ASM_PROCESSOR_H
+#define __TOOLS_LINUX_ASM_PROCESSOR_H
+
+#if defined(__i386__) || defined(__x86_64__)
+/* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
+static __always_inline void rep_nop(void)
+{
+	asm volatile("rep; nop" ::: "memory");
+}
+
+static __always_inline void cpu_relax(void)
+{
+	rep_nop();
+}
+#elif defined(__aarch64__)
+static inline void cpu_relax(void)
+{
+	asm volatile("yield" ::: "memory");
+}
+#else
+#error "Architecture not supported"
+#endif
+
+#endif
As x86 and arm64 are the two systems I can build and test the
cpu_relax() implementation on, only add cpu_relax() implementations
for x86 and arm64; other arches can be added easily when needed.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 tools/include/asm/processor.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)
 create mode 100644 tools/include/asm/processor.h
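A hedged usage sketch of the new header (the ring_full() condition
below is hypothetical, standing in for whatever condition the ringtest
code actually polls):

#include <asm/processor.h>	/* resolves to tools/include/asm/processor.h */

extern int ring_full(void);	/* illustrative condition only */

static void wait_for_space(void)
{
	while (ring_full())
		cpu_relax();	/* PAUSE on x86, YIELD on arm64 */
}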