From patchwork Wed Jun 10 15:37:55 2009
X-Patchwork-Submitter: Aoi Shinkai
X-Patchwork-Id: 29315
Message-ID: <4A2FD353.5080201@gmail.com>
Date: Thu, 11 Jun 2009 00:37:55 +0900
From: Aoi Shinkai
To: linux-sh@vger.kernel.org
CC: Paul Mundt
Subject: [PATCH] sh: Fix sh4a llsc operation

This patch fixes the sh4a llsc atomic operations. Most of it is taken
from the arm and mips implementations.

Signed-off-by: Aoi Shinkai
---

diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index 4b00b78..18cca1f 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -104,4 +104,29 @@ static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
 		: "t");
 }
 
+#define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
+
+/**
+ * atomic_add_unless - add unless the number is a given value
+ * @v: pointer of type atomic_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @v, so long as it was not @u.
+ * Returns non-zero if @v was not @u, and zero otherwise.
+ */
+static inline int atomic_add_unless(atomic_t *v, int a, int u)
+{
+	int c, old;
+	c = atomic_read(v);
+	for (;;) {
+		if (unlikely(c == (u)))
+			break;
+		old = atomic_cmpxchg((v), c, c + (a));
+		if (likely(old == c))
+			break;
+		c = old;
+	}
+	return c != (u);
+}
 #endif /* __ASM_SH_ATOMIC_LLSC_H */
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 6327ffb..978b58e 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -45,7 +45,7 @@
 #define atomic_inc(v) atomic_add(1,(v))
 #define atomic_dec(v) atomic_sub(1,(v))
 
-#ifndef CONFIG_GUSA_RB
+#if !defined(CONFIG_GUSA_RB) && !defined(CONFIG_CPU_SH4A)
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
@@ -73,7 +73,7 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 
 	return ret != u;
 }
-#endif
+#endif /* !CONFIG_GUSA_RB && !CONFIG_CPU_SH4A */
 
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 #define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)
diff --git a/arch/sh/include/asm/cmpxchg-llsc.h b/arch/sh/include/asm/cmpxchg-llsc.h
index 0fac3da..4713666 100644
--- a/arch/sh/include/asm/cmpxchg-llsc.h
+++ b/arch/sh/include/asm/cmpxchg-llsc.h
@@ -55,7 +55,7 @@ __cmpxchg_u32(volatile int *m, unsigned long old, unsigned long new)
 		"mov		%0, %1				\n\t"
 		"cmp/eq		%1, %3				\n\t"
 		"bf		2f				\n\t"
-		"mov		%3, %0				\n\t"
+		"mov		%4, %0				\n\t"
 		"2:						\n\t"
 		"movco.l	%0, @%2				\n\t"
 		"bf		1b				\n\t"
diff --git a/arch/sh/include/asm/spinlock.h b/arch/sh/include/asm/spinlock.h
index 6028356..69f4dc7 100644
--- a/arch/sh/include/asm/spinlock.h
+++ b/arch/sh/include/asm/spinlock.h
@@ -26,7 +26,7 @@
 #define __raw_spin_is_locked(x)		((x)->lock <= 0)
 #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
 #define __raw_spin_unlock_wait(x) \
-	do { cpu_relax(); } while ((x)->lock)
+	do { while (__raw_spin_is_locked(x)) cpu_relax(); } while (0)
 
 /*
  * Simple spin lock operations.  There are two variants, one clears IRQ's
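
A note on the cmpxchg-llsc.h hunk, for readers following the diff: in
__cmpxchg_u32() the asm operands are %0 = retval, %1 = tmp, %2 = m,
%3 = old and %4 = new (see the "cmp/eq %1, %3" and "movco.l %0, @%2"
context lines). With the old "mov %3, %0", a successful comparison put
the old value back into %0, so movco.l stored the current value again
and the new value was never written, even though the caller saw
retval == old and assumed success. The plain-C sketch below shows the
intended semantics; it is an illustration only (cmpxchg_u32_sketch is a
made-up name, and plain C cannot reproduce the atomicity that the
movli.l/movco.l LL/SC pair provides):

/*
 * Illustrative only: the semantics __cmpxchg_u32() is meant to have.
 * The real code keeps the load/compare/store atomic via movli.l/movco.l.
 */
static inline unsigned long cmpxchg_u32_sketch(volatile int *m,
					       unsigned long old,
					       unsigned long new)
{
	unsigned long retval = *m;	/* movli.l  @%2, %0            */

	if (retval == old)		/* cmp/eq   %1, %3             */
		*m = new;		/* mov %4, %0; movco.l %0, @%2 */

	return retval;			/* caller checks retval == old */
}

With the operand corrected, the atomic_cmpxchg()-based
atomic_add_unless() loop added in atomic-llsc.h behaves as expected: a
successful movco.l now publishes c + a, whereas before the fix the loop
reported success while the addition was silently lost.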