From patchwork Fri Jul 1 15:01:33 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9210013
Message-Id: <5776A1ED02000078000FA768@prv-mh.provo.novell.com>
Date: Fri, 01 Jul 2016 09:01:33 -0600
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper, Kevin Tian, Jun Nakajima
Subject: [Xen-devel] [PATCH] x86: use gcc6'es flags asm() output support
List-Id: Xen developer discussion

..., rendering affected code more efficient and smaller.
Note that in atomic.h this at once does away with the redundant output
and input specifications of the memory location touched.

Signed-off-by: Jan Beulich
Acked-by: Kevin Tian for the VMX part.
---
TBD: Do we want to abstract the pattern

    asm ( "...; set<cc> %<n>" : "=<cons>" (var) ... )

matching

    asm ( "..." : "=@cc<cc>" (var) ... )

via some macro? While this would eliminate many (all?) of the
conditionals added here, it would result in the <cc> no longer being
visible in the actual source, making the asm()s look somewhat odd.

Otherwise, to limit code duplication, it may be preferable to put the
#ifdef-s inside the asm()s instead of around them.
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -610,7 +610,12 @@ do {
  */
 static bool_t even_parity(uint8_t v)
 {
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm ( "test %1,%1" : "=@ccp" (v) : "q" (v) );
+#else
     asm ( "test %1,%1; setp %0" : "=qm" (v) : "q" (v) );
+#endif
+
     return v;
 }
 
@@ -832,8 +837,14 @@ static int read_ulong(
 static bool_t mul_dbl(unsigned long m[2])
 {
     bool_t rc;
+
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm ( "mul %1" : "+a" (m[0]), "+d" (m[1]), "=@cco" (rc) );
+#else
     asm ( "mul %1; seto %2" : "+a" (m[0]), "+d" (m[1]), "=qm" (rc) );
+#endif
+
     return rc;
 }
 
@@ -845,8 +856,14 @@ static bool_t mul_dbl(unsigned long m[2]
 static bool_t imul_dbl(unsigned long m[2])
 {
     bool_t rc;
+
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm ( "imul %1" : "+a" (m[0]), "+d" (m[1]), "=@cco" (rc) );
+#else
     asm ( "imul %1; seto %2" : "+a" (m[0]), "+d" (m[1]), "=qm" (rc) );
+#endif
+
     return rc;
 }
 
@@ -4651,9 +4668,15 @@ x86_emulate(
     case 0xbc: /* bsf or tzcnt */ {
         bool_t zf;
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+        asm ( "bsf %2,%0"
+              : "=r" (dst.val), "=@ccz" (zf)
+              : "rm" (src.val) );
+#else
         asm ( "bsf %2,%0; setz %1"
               : "=r" (dst.val), "=qm" (zf)
               : "rm" (src.val) );
+#endif
         _regs.eflags &= ~EFLG_ZF;
         if ( (vex.pfx == vex_f3) && vcpu_has_bmi1() )
         {
@@ -4677,9 +4700,15 @@ x86_emulate(
     case 0xbd: /* bsr or lzcnt */ {
         bool_t zf;
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+        asm ( "bsr %2,%0"
+              : "=r" (dst.val), "=@ccz" (zf)
+              : "rm" (src.val) );
+#else
         asm ( "bsr %2,%0; setz %1"
               : "=r" (dst.val), "=qm" (zf)
               : "rm" (src.val) );
+#endif
         _regs.eflags &= ~EFLG_ZF;
         if ( (vex.pfx == vex_f3) && vcpu_has_lzcnt() )
         {
--- a/xen/include/asm-x86/atomic.h
+++ b/xen/include/asm-x86/atomic.h
@@ -193,12 +193,18 @@ static inline void atomic_sub(int i, ato
  */
 static inline int atomic_sub_and_test(int i, atomic_t *v)
 {
-    unsigned char c;
+    bool_t c;
+
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; subl %2,%0"
+                   : "+m" (*(volatile int *)&v->counter), "=@ccz" (c)
+                   : "ir" (i) : "memory" );
+#else
+    asm volatile ( "lock; subl %2,%0; setz %1"
+                   : "+m" (*(volatile int *)&v->counter), "=qm" (c)
+                   : "ir" (i) : "memory" );
+#endif
-    asm volatile (
-        "lock; subl %2,%0; sete %1"
-        : "=m" (*(volatile int *)&v->counter), "=qm" (c)
-        : "ir" (i), "m" (*(volatile int *)&v->counter) : "memory" );
     return c;
 }
 
@@ -240,13 +246,19 @@ static inline void atomic_dec(atomic_t *
  */
 static inline int atomic_dec_and_test(atomic_t *v)
 {
-    unsigned char c;
+    bool_t c;
 
-    asm volatile (
-        "lock; decl %0; sete %1"
-        : "=m" (*(volatile int *)&v->counter), "=qm" (c)
-        : "m" (*(volatile int *)&v->counter) : "memory" );
-    return c != 0;
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; decl %0"
+                   : "+m" (*(volatile int *)&v->counter), "=@ccz" (c)
+                   :: "memory" );
+#else
+    asm volatile ( "lock; decl %0; setz %1"
+                   : "+m" (*(volatile int *)&v->counter), "=qm" (c)
+                   :: "memory" );
+#endif
+
+    return c;
 }
 
 /**
@@ -259,13 +271,19 @@ static inline int atomic_dec_and_test(at
  */
 static inline int atomic_inc_and_test(atomic_t *v)
 {
-    unsigned char c;
+    bool_t c;
 
-    asm volatile (
-        "lock; incl %0; sete %1"
-        : "=m" (*(volatile int *)&v->counter), "=qm" (c)
-        : "m" (*(volatile int *)&v->counter) : "memory" );
-    return c != 0;
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; incl %0"
+                   : "+m" (*(volatile int *)&v->counter), "=@ccz" (c)
+                   :: "memory" );
+#else
+    asm volatile ( "lock; incl %0; setz %1"
+                   : "+m" (*(volatile int *)&v->counter), "=qm" (c)
+                   :: "memory" );
+#endif
+
+    return c;
 }
 
 /**
@@ -279,12 +297,18 @@ static inline int atomic_inc_and_test(at
  */
 static inline int atomic_add_negative(int i, atomic_t *v)
 {
-    unsigned char c;
+    bool_t c;
+
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; addl %2,%0"
+                   : "+m" (*(volatile int *)&v->counter), "=@ccs" (c)
+                   : "ir" (i) : "memory" );
+#else
+    asm volatile ( "lock; addl %2,%0; sets %1"
+                   : "+m" (*(volatile int *)&v->counter), "=qm" (c)
+                   : "ir" (i) : "memory" );
+#endif
-    asm volatile (
-        "lock; addl %2,%0; sets %1"
-        : "=m" (*(volatile int *)&v->counter), "=qm" (c)
-        : "ir" (i), "m" (*(volatile int *)&v->counter) : "memory" );
     return c;
 }
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -145,8 +145,14 @@ static inline int test_and_set_bit(int n
 {
     int oldbit;
 
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; btsl %2,%1"
+                   : "=@ccc" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory" );
+#else
     asm volatile ( "lock; btsl %2,%1\n\tsbbl %0,%0"
-                   : "=r" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory");
+                   : "=r" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory" );
+#endif
+
     return oldbit;
 }
 #define test_and_set_bit(nr, addr) ({                   \
@@ -167,10 +173,16 @@ static inline int __test_and_set_bit(int
 {
     int oldbit;
 
-    asm volatile (
-        "btsl %2,%1\n\tsbbl %0,%0"
-        : "=r" (oldbit), "+m" (*(int *)addr)
-        : "Ir" (nr) : "memory" );
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "btsl %2,%1"
+                   : "=@ccc" (oldbit), "+m" (*(int *)addr)
+                   : "Ir" (nr) : "memory" );
+#else
+    asm volatile ( "btsl %2,%1\n\tsbbl %0,%0"
+                   : "=r" (oldbit), "+m" (*(int *)addr)
+                   : "Ir" (nr) : "memory" );
+#endif
+
     return oldbit;
 }
 #define __test_and_set_bit(nr, addr) ({                 \
@@ -190,8 +202,14 @@ static inline int test_and_clear_bit(int
 {
     int oldbit;
 
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; btrl %2,%1"
+                   : "=@ccc" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory" );
+#else
     asm volatile ( "lock; btrl %2,%1\n\tsbbl %0,%0"
-                   : "=r" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory");
+                   : "=r" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory" );
+#endif
+
     return oldbit;
 }
 #define test_and_clear_bit(nr, addr) ({                 \
@@ -212,10 +230,16 @@ static inline int __test_and_clear_bit(i
 {
     int oldbit;
 
-    asm volatile (
-        "btrl %2,%1\n\tsbbl %0,%0"
-        : "=r" (oldbit), "+m" (*(int *)addr)
-        : "Ir" (nr) : "memory" );
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "btrl %2,%1"
+                   : "=@ccc" (oldbit), "+m" (*(int *)addr)
+                   : "Ir" (nr) : "memory" );
+#else
+    asm volatile ( "btrl %2,%1\n\tsbbl %0,%0"
+                   : "=r" (oldbit), "+m" (*(int *)addr)
+                   : "Ir" (nr) : "memory" );
+#endif
+
     return oldbit;
 }
 #define __test_and_clear_bit(nr, addr) ({               \
@@ -228,10 +252,16 @@ static inline int __test_and_change_bit(
 {
     int oldbit;
 
-    asm volatile (
-        "btcl %2,%1\n\tsbbl %0,%0"
-        : "=r" (oldbit), "+m" (*(int *)addr)
-        : "Ir" (nr) : "memory" );
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "btcl %2,%1"
+                   : "=@ccc" (oldbit), "+m" (*(int *)addr)
+                   : "Ir" (nr) : "memory" );
+#else
+    asm volatile ( "btcl %2,%1\n\tsbbl %0,%0"
+                   : "=r" (oldbit), "+m" (*(int *)addr)
+                   : "Ir" (nr) : "memory" );
+#endif
+
     return oldbit;
 }
 #define __test_and_change_bit(nr, addr) ({              \
@@ -251,8 +281,14 @@ static inline int test_and_change_bit(in
 {
     int oldbit;
 
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "lock; btcl %2,%1"
+                   : "=@ccc" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory" );
+#else
     asm volatile ( "lock; btcl %2,%1\n\tsbbl %0,%0"
-                   : "=r" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory");
+                   : "=r" (oldbit), "+m" (ADDR) : "Ir" (nr) : "memory" );
+#endif
+
     return oldbit;
 }
 #define test_and_change_bit(nr, addr) ({                \
@@ -270,10 +306,16 @@ static inline int variable_test_bit(int
 {
     int oldbit;
 
-    asm volatile (
-        "btl %2,%1\n\tsbbl %0,%0"
-        : "=r" (oldbit)
-        : "m" (CONST_ADDR), "Ir" (nr) : "memory" );
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+    asm volatile ( "btl %2,%1"
+                   : "=@ccc" (oldbit)
+                   : "m" (CONST_ADDR), "Ir" (nr) : "memory" );
+#else
+    asm volatile ( "btl %2,%1\n\tsbbl %0,%0"
+                   : "=r" (oldbit)
+                   : "m" (CONST_ADDR), "Ir" (nr) : "memory" );
+#endif
+
    return oldbit;
 }
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -406,12 +406,17 @@ static inline bool_t __vmread_safe(unsig
                    VMREAD_OPCODE MODRM_EAX_ECX
 #endif
                    /* CF==1 or ZF==1 --> rc = 0 */
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+                   : "=@ccnbe" (okay),
+#else
                    "setnbe %0"
+                   : "=qm" (okay),
+#endif
 #ifdef HAVE_GAS_VMX
-                   : "=qm" (okay), "=rm" (*value)
+                     "=rm" (*value)
                    : "r" (field));
 #else
-                   : "=qm" (okay), "=c" (*value)
+                     "=c" (*value)
                    : "a" (field));
 #endif