From patchwork Thu Aug 8 15:29:44 2019
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11084405
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné
Date: Thu, 8 Aug 2019 16:29:44 +0100
Message-ID: <20190808152944.16287-1-andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v3] x86/atomic: Improvements and simplifications to assembly constraints

 * Constraints in the form "=r" (x) : "0" (x) can be folded to just "+r" (x)

 * Switch to using named parameters (mostly for legibility), which in
   particular helps with...

 * __xchg(), __cmpxchg() and __cmpxchg_user() modify their memory operand, so
   must list it as an output operand.  This only works because they each have
   a memory clobber to give the construct full compiler-barrier properties.

 * Every memory operand has an explicit known size.  Letting the compiler see
   the real size rather than obscuring it with __xg() allows for the removal
   of the instruction size suffixes without introducing ambiguity.

 * Misc style changes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau Monné

v3:
 * Retain "q"

v2:
 * Correct comment WRT 32bit builds and "=q" constraints
 * Drop semicolons after the lock prefix
 * Further reduce the use of positional parameters

The reason the volatile cast in __cmpxchg_user() can't be dropped is that,
without it, the compiler uses a stack copy rather than the in-memory copy,
which ends up tripping:

    /* Allowed to change in Accessed/Dirty flags only. */
    BUG_ON((t ^ old) & ~(intpte_t)(_PAGE_ACCESSED|_PAGE_DIRTY));
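As an aside (not part of the patch): the first two bullets amount to the
transformation sketched below, using a 32-bit xadd as a standalone example.
xadd_old(), xadd_new() and the demo main() are hypothetical helpers written
purely for illustration; they are not code from the Xen tree.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Old style: the register is described twice.  "=r" (x) names an output,
     * and the matching constraint "0" (x) says "this input lives in the same
     * register as output 0".  A single "+r" (x) expresses the same
     * read-modify-write relationship directly.
     */
    static inline uint32_t xadd_old(volatile uint32_t *ptr, uint32_t x)
    {
        asm volatile ( "lock; xaddl %0, %1"
                       : "=r" (x), "+m" (*ptr)
                       : "0" (x)
                       : "memory" );
        return x;
    }

    /*
     * New style: "+r"/"+m" read-write operands with symbolic [names], and no
     * "l" suffix because the compiler already knows the operand is 32 bits.
     */
    static inline uint32_t xadd_new(volatile uint32_t *ptr, uint32_t x)
    {
        asm volatile ( "lock xadd %[x], %[ptr]"
                       : [x] "+r" (x), [ptr] "+m" (*ptr)
                       :: "memory" );
        return x;
    }

    int main(void)
    {
        volatile uint32_t counter = 40;

        printf("returned %u\n", xadd_old(&counter, 1)); /* returned 40 */
        printf("returned %u\n", xadd_new(&counter, 1)); /* returned 41 */
        printf("counter  %u\n", counter);               /* counter  42 */
        return 0;
    }

Both forms assemble to the same instruction; the difference is purely in how
the operands are described to the compiler.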
---
 xen/include/asm-x86/system.h        | 99 +++++++++++++++++--------------------
 xen/include/asm-x86/x86_64/system.h | 38 +++++++-------
 2 files changed, 65 insertions(+), 72 deletions(-)

diff --git a/xen/include/asm-x86/system.h b/xen/include/asm-x86/system.h
index 3246797bec..6de90e46b2 100644
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -24,9 +24,6 @@ static inline void clflush(const void *p)
 #define xchg(ptr,v) \
     ((__typeof__(*(ptr)))__xchg((unsigned long)(v),(ptr),sizeof(*(ptr))))
 
-struct __xchg_dummy { unsigned long a[100]; };
-#define __xg(x) ((volatile struct __xchg_dummy *)(x))
-
 #include <asm/x86_64/system.h>
 
 /*
@@ -40,28 +37,24 @@ static always_inline unsigned long __xchg(
     switch ( size )
     {
     case 1:
-        asm volatile ( "xchgb %b0,%1"
-                       : "=q" (x)
-                       : "m" (*__xg(ptr)), "0" (x)
-                       : "memory" );
+        asm volatile ( "xchg %b[x], %[ptr]"
+                       : [x] "+q" (x), [ptr] "+m" (*(uint8_t *)ptr)
+                       :: "memory" );
         break;
     case 2:
-        asm volatile ( "xchgw %w0,%1"
-                       : "=r" (x)
-                       : "m" (*__xg(ptr)), "0" (x)
-                       : "memory" );
+        asm volatile ( "xchg %w[x], %[ptr]"
+                       : [x] "+r" (x), [ptr] "+m" (*(uint16_t *)ptr)
+                       :: "memory" );
         break;
     case 4:
-        asm volatile ( "xchgl %k0,%1"
-                       : "=r" (x)
-                       : "m" (*__xg(ptr)), "0" (x)
-                       : "memory" );
+        asm volatile ( "xchg %k[x], %[ptr]"
+                       : [x] "+r" (x), [ptr] "+m" (*(uint32_t *)ptr)
+                       :: "memory" );
         break;
     case 8:
-        asm volatile ( "xchgq %0,%1"
-                       : "=r" (x)
-                       : "m" (*__xg(ptr)), "0" (x)
-                       : "memory" );
+        asm volatile ( "xchg %q[x], %[ptr]"
+                       : [x] "+r" (x), [ptr] "+m" (*(uint64_t *)ptr)
+                       :: "memory" );
         break;
     }
     return x;
@@ -80,31 +73,27 @@ static always_inline unsigned long __cmpxchg(
     switch ( size )
     {
     case 1:
-        asm volatile ( "lock; cmpxchgb %b1,%2"
-                       : "=a" (prev)
-                       : "q" (new), "m" (*__xg(ptr)),
-                         "0" (old)
+        asm volatile ( "lock cmpxchg %b[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint8_t *)ptr)
+                       : [new] "q" (new), "a" (old)
                        : "memory" );
         return prev;
     case 2:
-        asm volatile ( "lock; cmpxchgw %w1,%2"
-                       : "=a" (prev)
-                       : "r" (new), "m" (*__xg(ptr)),
-                         "0" (old)
+        asm volatile ( "lock cmpxchg %w[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint16_t *)ptr)
+                       : [new] "r" (new), "a" (old)
                        : "memory" );
         return prev;
     case 4:
-        asm volatile ( "lock; cmpxchgl %k1,%2"
-                       : "=a" (prev)
-                       : "r" (new), "m" (*__xg(ptr)),
-                         "0" (old)
+        asm volatile ( "lock cmpxchg %k[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint32_t *)ptr)
+                       : [new] "r" (new), "a" (old)
                        : "memory" );
         return prev;
     case 8:
-        asm volatile ( "lock; cmpxchgq %1,%2"
-                       : "=a" (prev)
-                       : "r" (new), "m" (*__xg(ptr)),
-                         "0" (old)
+        asm volatile ( "lock cmpxchg %q[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint64_t *)ptr)
+                       : [new] "r" (new), "a" (old)
                        : "memory" );
         return prev;
     }
@@ -119,24 +108,24 @@ static always_inline unsigned long cmpxchg_local_(
     switch ( size )
     {
     case 1:
-        asm volatile ( "cmpxchgb %b2, %1"
-                       : "=a" (prev), "+m" (*(uint8_t *)ptr)
-                       : "q" (new), "0" (old) );
+        asm volatile ( "cmpxchg %b[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint8_t *)ptr)
+                       : [new] "q" (new), "a" (old) );
         break;
     case 2:
-        asm volatile ( "cmpxchgw %w2, %1"
-                       : "=a" (prev), "+m" (*(uint16_t *)ptr)
-                       : "r" (new), "0" (old) );
+        asm volatile ( "cmpxchg %w[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint16_t *)ptr)
+                       : [new] "r" (new), "a" (old) );
         break;
     case 4:
-        asm volatile ( "cmpxchgl %k2, %1"
-                       : "=a" (prev), "+m" (*(uint32_t *)ptr)
-                       : "r" (new), "0" (old) );
+        asm volatile ( "cmpxchg %k[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint32_t *)ptr)
+                       : [new] "r" (new), "a" (old) );
         break;
     case 8:
-        asm volatile ( "cmpxchgq %2, %1"
-                       : "=a" (prev), "+m" (*(uint64_t *)ptr)
-                       : "r" (new), "0" (old) );
+        asm volatile ( "cmpxchg %q[new], %[ptr]"
+                       : "=a" (prev), [ptr] "+m" (*(uint64_t *)ptr)
+                       : [new] "r" (new), "a" (old) );
         break;
     }
 
@@ -162,23 +151,23 @@ static always_inline unsigned long __xadd(
     switch ( size )
     {
     case 1:
-        asm volatile ( "lock; xaddb %b0,%1"
-                       : "+r" (v), "+m" (*__xg(ptr))
+        asm volatile ( "lock xadd %b[v], %[ptr]"
+                       : [v] "+q" (v), [ptr] "+m" (*(uint8_t *)ptr)
                        :: "memory");
         return v;
     case 2:
-        asm volatile ( "lock; xaddw %w0,%1"
-                       : "+r" (v), "+m" (*__xg(ptr))
+        asm volatile ( "lock xadd %w[v], %[ptr]"
+                       : [v] "+r" (v), [ptr] "+m" (*(uint16_t *)ptr)
                        :: "memory");
         return v;
     case 4:
-        asm volatile ( "lock; xaddl %k0,%1"
-                       : "+r" (v), "+m" (*__xg(ptr))
+        asm volatile ( "lock xadd %k[v], %[ptr]"
+                       : [v] "+r" (v), [ptr] "+m" (*(uint32_t *)ptr)
                        :: "memory");
        return v;
     case 8:
-        asm volatile ( "lock; xaddq %q0,%1"
-                       : "+r" (v), "+m" (*__xg(ptr))
+        asm volatile ( "lock xadd %q[v], %[ptr]"
+                       : [v] "+r" (v), [ptr] "+m" (*(uint64_t *)ptr)
                        :: "memory");
         return v;
 
diff --git a/xen/include/asm-x86/x86_64/system.h b/xen/include/asm-x86/x86_64/system.h
index fae57bace8..87e899fca9 100644
--- a/xen/include/asm-x86/x86_64/system.h
+++ b/xen/include/asm-x86/x86_64/system.h
@@ -24,9 +24,10 @@ static always_inline __uint128_t __cmpxchg16b(
     ASSERT(cpu_has_cx16);
 
     /* Don't use "=A" here - clang can't deal with that. */
-    asm volatile ( "lock; cmpxchg16b %2"
-                   : "=d" (prev.hi), "=a" (prev.lo), "+m" (*__xg(ptr))
-                   : "c" (new.hi), "b" (new.lo), "0" (old.hi), "1" (old.lo) );
+    asm volatile ( "lock cmpxchg16b %[ptr]"
+                   : "=d" (prev.hi), "=a" (prev.lo),
+                     [ptr] "+m" (*(__uint128_t *)ptr)
+                   : "c" (new.hi), "b" (new.lo), "d" (old.hi), "a" (old.lo) );
 
     return prev.raw;
 }
@@ -42,9 +43,10 @@ static always_inline __uint128_t cmpxchg16b_local_(
     ASSERT(cpu_has_cx16);
 
     /* Don't use "=A" here - clang can't deal with that. */
-    asm volatile ( "cmpxchg16b %2"
-                   : "=d" (prev.hi), "=a" (prev.lo), "+m" (*(__uint128_t *)ptr)
-                   : "c" (new.hi), "b" (new.lo), "0" (old.hi), "1" (old.lo) );
+    asm volatile ( "cmpxchg16b %[ptr]"
+                   : "=d" (prev.hi), "=a" (prev.lo),
+                     [ptr] "+m" (*(__uint128_t *)ptr)
+                   : "c" (new.hi), "b" (new.lo), "d" (old.hi), "a" (old.lo) );
 
     return prev.raw;
 }
@@ -63,36 +65,38 @@ static always_inline __uint128_t cmpxchg16b_local_(
  * If no fault occurs then _o is updated to the value we saw at _p. If this
  * is the same as the initial value of _o then _n is written to location _p.
  */
-#define __cmpxchg_user(_p,_o,_n,_isuff,_oppre,_regtype)                 \
+#define __cmpxchg_user(_p, _o, _n, _oppre, _regtype)                    \
     stac();                                                             \
     asm volatile (                                                      \
-        "1: lock; cmpxchg"_isuff" %"_oppre"2,%3\n"                      \
+        "1: lock cmpxchg %"_oppre"[new], %[ptr]\n"                      \
         "2:\n"                                                          \
         ".section .fixup,\"ax\"\n"                                      \
-        "3: movl $1,%1\n"                                               \
+        "3: movl $1, %[rc]\n"                                           \
         " jmp 2b\n"                                                     \
         ".previous\n"                                                   \
         _ASM_EXTABLE(1b, 3b)                                            \
-        : "=a" (_o), "=r" (_rc)                                         \
-        : _regtype (_n), "m" (*__xg((volatile void *)_p)), "0" (_o), "1" (0) \
+        : "+a" (_o), [rc] "=r" (_rc),                                   \
+          [ptr] "+m" (*(volatile typeof(*(_p)) *)(_p))                  \
+        : [new] _regtype (_n), "[rc]" (0)                               \
         : "memory");                                                    \
     clac()
 
-#define cmpxchg_user(_p,_o,_n)                                          \
+#define cmpxchg_user(_p, _o, _n)                                        \
 ({                                                                      \
     int _rc;                                                            \
-    switch ( sizeof(*(_p)) ) {                                          \
+    switch ( sizeof(*(_p)) )                                            \
+    {                                                                   \
     case 1:                                                             \
-        __cmpxchg_user(_p,_o,_n,"b","b","q");                           \
+        __cmpxchg_user(_p, _o, _n, "b", "q");                           \
         break;                                                          \
     case 2:                                                             \
-        __cmpxchg_user(_p,_o,_n,"w","w","r");                           \
+        __cmpxchg_user(_p, _o, _n, "w", "r");                           \
        break;                                                          \
     case 4:                                                             \
-        __cmpxchg_user(_p,_o,_n,"l","k","r");                           \
+        __cmpxchg_user(_p, _o, _n, "k", "r");                           \
         break;                                                          \
     case 8:                                                             \
-        __cmpxchg_user(_p,_o,_n,"q","","r");                            \
+        __cmpxchg_user(_p, _o, _n, "q", "r");                           \
         break;                                                          \
     }                                                                   \
     _rc;                                                                \
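As a further aside (again not part of the patch): for readers unfamiliar with
the named-operand form the diff converts to, the minimal standalone sketch
below mirrors the 32-bit case of __cmpxchg() outside of Xen.  cmpxchg32() and
the demo main() are hypothetical and exist purely for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical userspace equivalent of the 32-bit case of __cmpxchg(). */
    static inline uint32_t cmpxchg32(volatile uint32_t *ptr, uint32_t old,
                                     uint32_t new)
    {
        uint32_t prev;

        /*
         * Compare *ptr with 'old' (loaded into %eax); if they match, store
         * 'new' into *ptr.  Either way, %eax (i.e. prev) ends up holding the
         * value that was seen in memory.
         */
        asm volatile ( "lock cmpxchg %[new], %[ptr]"
                       : "=a" (prev), [ptr] "+m" (*ptr)
                       : [new] "r" (new), "a" (old)
                       : "memory" );

        return prev;
    }

    int main(void)
    {
        volatile uint32_t val = 1;

        printf("%u\n", cmpxchg32(&val, 1, 2)); /* 1 - exchange happened */
        printf("%u\n", cmpxchg32(&val, 1, 3)); /* 2 - value changed, no store */
        printf("%u\n", val);                   /* 2 */
        return 0;
    }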