| Message ID | 1502882530-31700-5-git-send-email-andrew.cooper3@citrix.com (mailing list archive) |
|---|---|
| State | New, archived |
On Wed, 2017-08-16 at 12:22 +0100, Andrew Cooper wrote:
> Barriers are a complicated topic, a source of confusion, and their
> incorrect
> use is a common cause of bugs. It *really* doesn't help when Xen's
> API is the
> same as Linux, but its ABI different.
>
> Bring the two back in line, so programmers stand a chance of actually
> getting
> their usage correct.
>
> Drop the links in the comment, both of which are now stale. Instead,
> refer to
> the vendor system manuals.
>
Does it perhaps make sense to link this:

https://www.kernel.org/doc/Documentation/memory-barriers.txt

> No functional change.
>
IAC, FWIW:

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>

Regards,
Dario
>>> On 16.08.17 at 13:22, <andrew.cooper3@citrix.com> wrote:
> Barriers are a complicated topic, a source of confusion, and their incorrect
> use is a common cause of bugs. It *really* doesn't help when Xen's API is the
> same as Linux, but its ABI different.
>
> Bring the two back in line, so programmers stand a chance of actually getting
> their usage correct.
>
> Drop the links in the comment, both of which are now stale. Instead, refer to
> the vendor system manuals.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> --- a/xen/include/asm-x86/system.h
> +++ b/xen/include/asm-x86/system.h
> @@ -164,23 +164,27 @@ static always_inline unsigned long __xadd(
>      ((typeof(*(ptr)))__xadd(ptr, (typeof(*(ptr)))(v), sizeof(*(ptr))))
>
>  /*
> + * Mandatory barriers, for enforced ordering of reads and writes, e.g. for use
> + * with MMIO devices mapped with reduced cacheability.
> + */
> +#define mb() asm volatile ("mfence" ::: "memory")
> +#define rmb() asm volatile ("lfence" ::: "memory")
> +#define wmb() asm volatile ("sfence" ::: "memory")
> +
> +/*
> + * SMP barriers, for ordering of reads and writes between CPUs, most commonly
> + * used with shared memory.
> + *
>   * Both Intel and AMD agree that, from a programmer's viewpoint:
>   *  Loads cannot be reordered relative to other loads.
>   *  Stores cannot be reordered relative to other stores.
> - *
> - * Intel64 Architecture Memory Ordering White Paper
> - * <http://developer.intel.com/products/processor/manuals/318147.pdf>
> - *
> - * AMD64 Architecture Programmer's Manual, Volume 2: System Programming
> - * <http://www.amd.com/us-en/assets/content_type/\
> - *   white_papers_and_tech_docs/24593.pdf>
> + * Loads may be reordered ahead of an unaliasing store.

For consistency with the other two sentences, perhaps use plural at the end
of the sentence too?

Jan
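The comment line Jan remarks on captures the one reordering x86 does permit: a later load may complete ahead of an earlier store to a different location. The sketch below (not part of the patch; the flag variables and function names are invented for illustration) shows the classic store-buffering pattern where that reordering matters, and why smp_mb(), which the patch leaves as a full mb()/mfence, cannot be relaxed to a plain compiler barrier the way smp_rmb() and smp_wmb() are:

```c
/* Assumes the barrier API from the patch: smp_mb() expands to mb(),
 * i.e. an mfence, on x86. */
#define smp_mb()  asm volatile ("mfence" ::: "memory")

/* Shared flags, one per CPU (hypothetical example, not Xen code). */
static volatile int flag0, flag1;

/* Run on CPU 0: announce intent, then check whether CPU 1 has too. */
static int cpu0_try_enter(void)
{
    flag0 = 1;
    smp_mb();           /* keep the read of flag1 after the store to flag0 */
    return flag1 == 0;  /* non-zero: safe to enter the critical section */
}

/* Run on CPU 1: the mirror image of cpu0_try_enter(). */
static int cpu1_try_enter(void)
{
    flag1 = 1;
    smp_mb();           /* keep the read of flag0 after the store to flag1 */
    return flag0 == 0;
}
```

Without the mfence, each CPU's load of the other flag may be satisfied before its own store becomes globally visible, so both functions can return non-zero and both CPUs enter the critical section at once.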
diff --git a/xen/include/asm-x86/system.h b/xen/include/asm-x86/system.h
index 9cb6fd7..3d21291 100644
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -164,23 +164,27 @@ static always_inline unsigned long __xadd(
     ((typeof(*(ptr)))__xadd(ptr, (typeof(*(ptr)))(v), sizeof(*(ptr))))

 /*
+ * Mandatory barriers, for enforced ordering of reads and writes, e.g. for use
+ * with MMIO devices mapped with reduced cacheability.
+ */
+#define mb() asm volatile ("mfence" ::: "memory")
+#define rmb() asm volatile ("lfence" ::: "memory")
+#define wmb() asm volatile ("sfence" ::: "memory")
+
+/*
+ * SMP barriers, for ordering of reads and writes between CPUs, most commonly
+ * used with shared memory.
+ *
  * Both Intel and AMD agree that, from a programmer's viewpoint:
  *  Loads cannot be reordered relative to other loads.
  *  Stores cannot be reordered relative to other stores.
- *
- * Intel64 Architecture Memory Ordering White Paper
- * <http://developer.intel.com/products/processor/manuals/318147.pdf>
- *
- * AMD64 Architecture Programmer's Manual, Volume 2: System Programming
- * <http://www.amd.com/us-en/assets/content_type/\
- *   white_papers_and_tech_docs/24593.pdf>
+ * Loads may be reordered ahead of an unaliasing store.
+ *
+ * Refer to the vendor system programming manuals for further details.
  */
-#define rmb() barrier()
-#define wmb() barrier()
-
 #define smp_mb() mb()
-#define smp_rmb() rmb()
-#define smp_wmb() wmb()
+#define smp_rmb() barrier()
+#define smp_wmb() barrier()

 #define set_mb(var, value) do { xchg(&var, value); } while (0)
 #define set_wmb(var, value) do { var = value; smp_wmb(); } while (0)
diff --git a/xen/include/asm-x86/x86_64/system.h b/xen/include/asm-x86/x86_64/system.h
index 88beae1..6b56761 100644
--- a/xen/include/asm-x86/x86_64/system.h
+++ b/xen/include/asm-x86/x86_64/system.h
@@ -80,7 +80,4 @@ static always_inline __uint128_t __cmpxchg16b(
     _rc; \
 })

-#define mb() \
-    asm volatile ( "mfence" : : : "memory" )
-
 #endif /* __X86_64_SYSTEM_H__ */
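For context, here is a minimal sketch of the shared-memory pattern the SMP barriers above are intended for. It is not Xen code; the ring structure, its size, and the function names are invented for illustration, and only the barrier definitions mirror the patch:

```c
/* Assumes the API from the patch: on x86, the SMP read/write barriers
 * are compiler-only barriers. */
#define barrier()   asm volatile ("" ::: "memory")
#define smp_wmb()   barrier()   /* x86: stores already ordered vs stores */
#define smp_rmb()   barrier()   /* x86: loads already ordered vs loads */

#define RING_SLOTS 16

/* Hypothetical single-producer/single-consumer ring in shared memory. */
struct demo_ring {
    unsigned int prod;              /* written by the producer only */
    unsigned int data[RING_SLOTS];
};

/* Producer: fill the slot, then publish it by advancing 'prod'. */
static void demo_ring_put(struct demo_ring *r, unsigned int val)
{
    r->data[r->prod % RING_SLOTS] = val;
    smp_wmb();      /* the datum must be visible before the new index */
    r->prod++;
}

/* Consumer: sample the index first, then read only slots it covers. */
static int demo_ring_get(struct demo_ring *r, unsigned int cons,
                         unsigned int *val)
{
    unsigned int prod = r->prod;

    if ( cons == prod )
        return 0;                   /* ring empty */

    smp_rmb();      /* read the index before reading the datum */
    *val = r->data[cons % RING_SLOTS];
    return 1;
}
```

On x86 the compiler barrier is sufficient here because, as the comment in the hunk notes, stores are not reordered with other stores and loads are not reordered with other loads; a real implementation would also need single-copy-atomic accesses (e.g. through volatile), which this sketch glosses over.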
Barriers are a complicated topic, a source of confusion, and their incorrect
use is a common cause of bugs. It *really* doesn't help when Xen's API is the
same as Linux, but its ABI different.

Bring the two back in line, so programmers stand a chance of actually getting
their usage correct.

Drop the links in the comment, both of which are now stale. Instead, refer to
the vendor system manuals.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>

v2:
 * Keep mandatory barrier definitions
 * Drop stale documentation links
---
 xen/include/asm-x86/system.h        | 28 ++++++++++++++++------------
 xen/include/asm-x86/x86_64/system.h |  3 ---
 2 files changed, 16 insertions(+), 15 deletions(-)
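To make the API-versus-ABI point of the commit message concrete, the sketch below shows the kind of MMIO sequence, written against the Linux meaning of wmb(), that silently lost its ordering guarantee under Xen's previous definitions. The device, its write-combining mapping, and all names are hypothetical; only the barrier definition is taken from the patch:

```c
#include <stdint.h>

/* Assumes the mandatory-barrier ABI from the patch: wmb() is an sfence. */
#define wmb()   asm volatile ("sfence" ::: "memory")

/*
 * Hypothetical device: a frame buffer mapped write-combining plus an
 * uncacheable doorbell register.  Illustration only, not Xen or driver code.
 */
static void send_frame(volatile uint32_t *wc_frame,
                       volatile uint32_t *doorbell,
                       const uint32_t *payload, unsigned int words)
{
    unsigned int i;

    for ( i = 0; i < words; i++ )
        wc_frame[i] = payload[i];

    /*
     * Drain the write-combining buffers before ringing the doorbell, so
     * the device never observes the doorbell ahead of the frame data.
     * This relies on wmb() being a real sfence; under Xen's pre-patch
     * ABI it was only a compiler barrier, so code like this could break.
     */
    wmb();

    *doorbell = 1;
}
```

Keeping the mandatory barriers as real fence instructions (the v2 change noted above) is what lets barrier usage like this be shared or ported between the Linux and Xen code bases without re-auditing every call site.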