From patchwork Fri Oct 6 13:34:40 2017
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 9989481
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, boqun.feng@gmail.com, Will Deacon,
	Jeremy.Linton@arm.com, mingo@redhat.com, longman@redhat.com,
	paulmck@linux.vnet.ibm.com, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 3/5] kernel/locking: Use atomic_cond_read_acquire when spinning in qrwlock
Date: Fri, 6 Oct 2017 14:34:40 +0100
Message-Id: <1507296882-18721-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1507296882-18721-1-git-send-email-will.deacon@arm.com>
References: <1507296882-18721-1-git-send-email-will.deacon@arm.com>

The qrwlock slowpaths involve spinning when either a prospective reader is
waiting for a concurrent writer to drain, or a prospective writer is waiting
for concurrent readers to drain. In both of these situations,
atomic_cond_read_acquire() can be used to avoid busy-waiting and make use of
any backoff functionality provided by the architecture.

This patch replaces the open-coded loops and the rspin_until_writer_unlock()
implementation with atomic_cond_read_acquire(). The write-mode transition
from zero to _QW_WAITING is left alone, since (a) it doesn't need acquire
semantics and (b) it should be fast.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Signed-off-by: Will Deacon
---
 kernel/locking/qrwlock.c | 47 +++++++++++------------------------------------
 1 file changed, 11 insertions(+), 36 deletions(-)

diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index 1af791e37348..b7ea4647c74d 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -24,23 +24,6 @@
 #include <asm/qrwlock.h>
 
 /**
- * rspin_until_writer_unlock - inc reader count & spin until writer is gone
- * @lock : Pointer to queue rwlock structure
- * @writer: Current queue rwlock writer status byte
- *
- * In interrupt context or at the head of the queue, the reader will just
- * increment the reader count & wait until the writer releases the lock.
- */
-static __always_inline void
-rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
-{
-	while ((cnts & _QW_WMASK) == _QW_LOCKED) {
-		cpu_relax();
-		cnts = atomic_read_acquire(&lock->cnts);
-	}
-}
-
-/**
  * queued_read_lock_slowpath - acquire read lock of a queue rwlock
  * @lock: Pointer to queue rwlock structure
  * @cnts: Current qrwlock lock value
@@ -53,13 +36,12 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
 	if (unlikely(in_interrupt())) {
 		/*
 		 * Readers in interrupt context will get the lock immediately
-		 * if the writer is just waiting (not holding the lock yet).
-		 * The rspin_until_writer_unlock() function returns immediately
-		 * in this case. Otherwise, they will spin (with ACQUIRE
-		 * semantics) until the lock is available without waiting in
-		 * the queue.
+		 * if the writer is just waiting (not holding the lock yet),
+		 * so spin with ACQUIRE semantics until the lock is available
+		 * without waiting in the queue.
 		 */
-		rspin_until_writer_unlock(lock, cnts);
+		atomic_cond_read_acquire(&lock->cnts, (VAL & _QW_WMASK)
+					 != _QW_LOCKED);
 		return;
 	}
 	atomic_sub(_QR_BIAS, &lock->cnts);
@@ -68,14 +50,14 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
 	 * Put the reader into the wait queue
 	 */
 	arch_spin_lock(&lock->wait_lock);
+	atomic_add(_QR_BIAS, &lock->cnts);
 
 	/*
 	 * The ACQUIRE semantics of the following spinning code ensure
 	 * that accesses can't leak upwards out of our subsequent critical
 	 * section in the case that the lock is currently held for write.
 	 */
-	cnts = atomic_fetch_add_acquire(_QR_BIAS, &lock->cnts);
-	rspin_until_writer_unlock(lock, cnts);
+	atomic_cond_read_acquire(&lock->cnts, (VAL & _QW_WMASK) != _QW_LOCKED);
 
 	/*
 	 * Signal the next one in queue to become queue head
@@ -90,8 +72,6 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
  */
 void queued_write_lock_slowpath(struct qrwlock *lock)
 {
-	u32 cnts;
-
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
 
@@ -113,15 +93,10 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 	}
 
 	/* When no more readers, set the locked flag */
-	for (;;) {
-		cnts = atomic_read(&lock->cnts);
-		if ((cnts == _QW_WAITING) &&
-		    (atomic_cmpxchg_acquire(&lock->cnts, _QW_WAITING,
-					    _QW_LOCKED) == _QW_WAITING))
-			break;
-
-		cpu_relax();
-	}
+	do {
+		atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
+	} while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
+					_QW_LOCKED) != _QW_WAITING);
 unlock:
 	arch_spin_unlock(&lock->wait_lock);
 }
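
For readers who have not met the primitive before: atomic_cond_read_acquire(v, cond)
repeatedly performs an acquire-ordered read of v and returns once cond, evaluated
with the freshly loaded value bound to VAL, becomes true. The self-contained C11
userspace sketch below illustrates the two wait patterns the patch introduces. It
is an analogue only, not the kernel implementation; cond_read_acquire(),
writer_gone(), is_waiting_only(), write_lock_when_waiting() and the QW_* values
are made-up stand-ins for their kernel counterparts.

/* Illustrative sketch only -- not kernel code. */
#include <stdatomic.h>
#include <stdio.h>

#define QW_WAITING	0x001U	/* stand-in: a writer is waiting */
#define QW_LOCKED	0x0ffU	/* stand-in: a writer holds the lock */
#define QW_WMASK	0x0ffU	/* stand-in: writer state mask */

/* Spin on acquire loads of *p until cond(value) holds; return that value. */
static unsigned int cond_read_acquire(_Atomic unsigned int *p,
				      int (*cond)(unsigned int))
{
	unsigned int val;

	for (;;) {
		val = atomic_load_explicit(p, memory_order_acquire);
		if (cond(val))
			return val;
		/* the kernel would cpu_relax() or use an arch-specific wait here */
	}
}

/* Reader-side condition: no writer currently holds the lock. */
static int writer_gone(unsigned int val)
{
	return (val & QW_WMASK) != QW_LOCKED;
}

/* Writer-side condition: only the waiting bit is set, i.e. no readers left. */
static int is_waiting_only(unsigned int val)
{
	return val == QW_WAITING;
}

/*
 * Writer-side pattern from the patch: wait until only QW_WAITING remains,
 * then try to swap it for QW_LOCKED, retrying the wait if a reader sneaked
 * in between (mirrors the do/while added to queued_write_lock_slowpath()).
 */
static void write_lock_when_waiting(_Atomic unsigned int *p)
{
	unsigned int expected;

	do {
		cond_read_acquire(p, is_waiting_only);
		expected = QW_WAITING;
	} while (!atomic_compare_exchange_strong_explicit(p, &expected,
							  QW_LOCKED,
							  memory_order_relaxed,
							  memory_order_relaxed));
}

int main(void)
{
	_Atomic unsigned int cnts = QW_WAITING;	/* no readers, one waiting writer */

	/* The waiting writer can take the lock straight away in this toy setup. */
	write_lock_when_waiting(&cnts);
	printf("writer holds lock, cnts=%#x\n",
	       atomic_load_explicit(&cnts, memory_order_relaxed));

	/* A reader instead waits for the writer bits to clear. */
	atomic_store_explicit(&cnts, 1U << 8, memory_order_release); /* unlocked, one reader */
	printf("reader sees cnts=%#x with writer gone\n",
	       cond_read_acquire(&cnts, writer_gone));
	return 0;
}

The real primitive is defined in terms of smp_cond_load_acquire(), which an
architecture may override with something cheaper than a pure busy loop (arm64,
for instance, can wait for an event rather than spin); that is the backoff
functionality the commit message refers to.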