From patchwork Fri May 30 15:43:54 2014
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 4272061
From: Waiman Long
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Peter Zijlstra
Cc: linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, Paolo Bonzini, Konrad Rzeszutek Wilk,
	Boris Ostrovsky, "Paul E. McKenney", Rik van Riel, Linus Torvalds,
	Raghavendra K T, David Vrabel, Oleg Nesterov, Gleb Natapov,
	Scott J Norton, Chegu Vinod, Waiman Long
Subject: [PATCH v11 08/16] qspinlock: Prepare for unfair lock support
Date: Fri, 30 May 2014 11:43:54 -0400
Message-Id: <1401464642-33890-9-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1401464642-33890-1-git-send-email-Waiman.Long@hp.com>
References: <1401464642-33890-1-git-send-email-Waiman.Long@hp.com>
X-Mailing-List: kvm@vger.kernel.org

If unfair locking is supported, the lock acquisition loop at the end of
the queue_spin_lock_slowpath() function may need to detect that the lock
can be stolen. Code is added for stolen-lock detection.

Signed-off-by: Waiman Long
---
 kernel/locking/qspinlock.c |   26 ++++++++++++++++++--------
 1 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2c7abe7..ae1b19d 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -94,7 +94,7 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
  * can allow better optimization of the lock acquisition for the pending
  * bit holder.
  *
- * This internal structure is also used by the set_locked function which
+ * This internal structure is also used by the try_set_locked function which
  * is not restricted to _Q_PENDING_BITS == 8.
  */
 struct __qspinlock {
@@ -206,19 +206,21 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 #endif /* _Q_PENDING_BITS == 8 */
 
 /**
- * set_locked - Set the lock bit and own the lock
- * @lock: Pointer to queue spinlock structure
+ * try_set_locked - Try to set the lock bit and own the lock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 otherwise
  *
  * This routine should only be called when the caller is the only one
  * entitled to acquire the lock.
  */
-static __always_inline void set_locked(struct qspinlock *lock)
+static __always_inline int try_set_locked(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;
 
 	barrier();
 	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
 	barrier();
+	return 1;
 }
 
 /**
@@ -357,11 +359,12 @@ queue:
 	/*
 	 * we're at the head of the waitqueue, wait for the owner & pending to
 	 * go away.
-	 * Load-acquired is used here because the set_locked()
+	 * Load-acquired is used here because the try_set_locked()
 	 * function below may not be a full memory barrier.
 	 *
 	 * *,x,y -> *,0,0
 	 */
+retry_queue_wait:
 	while ((val = smp_load_acquire(&lock->val.counter))
 	       & _Q_LOCKED_PENDING_MASK)
 		arch_mutex_cpu_relax();
@@ -378,13 +381,20 @@ queue:
 	 */
 	for (;;) {
 		if (val != tail) {
-			set_locked(lock);
-			break;
+			/*
+			 * The try_set_locked function will only fail if the
+			 * lock was stolen.
+			 */
+			if (try_set_locked(lock))
+				break;
+			else
+				goto retry_queue_wait;
 		}
 		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
 		if (old == val)
 			goto release;	/* No contention */
-
+		else if (old & _Q_LOCKED_MASK)
+			goto retry_queue_wait;
 		val = old;
 	}
 
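
The non-void return value only starts to matter once the unfair-lock code is
in place; in this patch try_set_locked() still succeeds unconditionally. As a
rough illustration only (not part of this patch; the _QUNFAIR guard below is a
made-up placeholder, not a symbol defined by this series), an unfair-aware
variant could replace the unconditional byte store with a cmpxchg on the
locked byte, so that a stolen lock is reported to the caller, which then jumps
back to retry_queue_wait:

static __always_inline int try_set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

#ifdef _QUNFAIR	/* hypothetical placeholder for the unfair-lock config */
	/*
	 * With lock stealing possible, the queue head is no longer the
	 * only claimant; fail if another CPU has set the locked byte.
	 */
	if (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) != 0)
		return 0;
#else
	barrier();
	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;	/* lock cannot be stolen */
	barrier();
#endif
	return 1;
}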