From patchwork Tue Oct 3 20:53:25 2017
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 9983539
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 1/1] arm64: spinlocks: Fix write starvation with rwlock
Date: Tue, 3 Oct 2017 15:53:25 -0500
Message-Id: <20171003205325.28703-2-jeremy.linton@arm.com>
In-Reply-To: <20171003205325.28703-1-jeremy.linton@arm.com>
References: <20171003205325.28703-1-jeremy.linton@arm.com>
Cc: mark.rutland@arm.com, peterz@infradead.org, catalin.marinas@arm.com,
    will.deacon@arm.com, Jeremy Linton, mingo@redhat.com, robin.murphy@arm.com

The ARM64 rwlock is unfair in that readers can perpetually block the
writer. This patch changes the rwlock behavior so that the writer
unconditionally flags the lock structure (given that it is not already
flagged by another writer). This blocks further readers that aren't in
interrupt context from acquiring the lock. Once all the readers have
drained, the writer that successfully flagged the lock can proceed.

With this change, the lock still has a fairness issue caused by an open
race for ownership following a write unlock. If certain cores/clusters
are favored to win these races, a small set of writers could starve
other users (including other writers). This should not be a common
problem, given that rwlock users should be read-heavy with only the
occasional writer. Further, the queued rwlock should also help to
alleviate this problem.

Signed-off-by: Jeremy Linton
---
 arch/arm64/include/asm/spinlock.h | 67 ++++++++++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 95ad7102b63c..e4f6a1b545ca 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -143,7 +143,9 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
  * Write lock implementation.
  *
  * Write locks set bit 31. Unlocking, is done by writing 0 since the lock is
- * exclusively held.
+ * exclusively held. Setting the write bit (31) is used as a flag to drain the
+ * readers. The lock is considered taken for the writer only once all the
+ * readers have exited.
  *
  * The memory barriers are implicit with the load-acquire and store-release
  * instructions.
@@ -151,29 +153,41 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 
 static inline void arch_write_lock(arch_rwlock_t *rw)
 {
-	unsigned int tmp;
+	unsigned int tmp, tmp2, status;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
 	/* LL/SC */
 	"	sevl\n"
 	"1:	wfe\n"
-	"2:	ldaxr	%w0, %1\n"
-	"	cbnz	%w0, 1b\n"
-	"	stxr	%w0, %w2, %1\n"
-	"	cbnz	%w0, 2b\n"
-	__nops(1),
+	"2:	ldaxr	%w0, %3\n"
+	"	tbnz	%w0, #31, 1b\n"	/* must be another writer */
+	"	orr	%w1, %w0, %w4\n"
+	"	stxr	%w2, %w1, %3\n"
+	"	cbnz	%w2, 2b\n"	/* failed to store, try again */
+	"	cbz	%w0, 5f\n"	/* if there aren't any readers we're done */
+	"	sevl\n"
+	"3:	wfe\n"	/* spin waiting for the readers to exit */
+	"4:	ldaxr	%w0, %3\n"
+	"	cmp	%w0, %w4\n"
+	"	b.ne	3b\n"
+	"5:",
 	/* LSE atomics */
-	"1:	mov	%w0, wzr\n"
-	"2:	casa	%w0, %w2, %1\n"
-	"	cbz	%w0, 3f\n"
-	"	ldxr	%w0, %1\n"
-	"	cbz	%w0, 2b\n"
+	"1:	ldseta	%w4, %w0, %3\n"
+	"	cbz	%w0, 5f\n"	/* lock was clear, we are done */
+	"	tbz	%w0, #31, 4f\n"	/* we own the lock, wait for readers */
+	"2:	ldxr	%w0, %3\n"	/* spin waiting for writer to exit */
+	"	tbz	%w0, #31, 1b\n"
 	"	wfe\n"
-	"	b	1b\n"
-	"3:")
-	: "=&r" (tmp), "+Q" (rw->lock)
+	"	b	2b\n"
+	__nops(2)
+	"3:	wfe\n"	/* spin waiting for the readers to exit */
+	"4:	ldaxr	%w0, %3\n"
+	"	cmp	%w0, %w4\n"
+	"	b.ne	3b\n"
+	"5:")
+	: "=&r" (tmp), "=&r" (tmp2), "=&r" (status), "+Q" (rw->lock)
 	: "r" (0x80000000)
-	: "memory");
+	: "cc", "memory");
 }
 
 static inline int arch_write_trylock(arch_rwlock_t *rw)
@@ -214,7 +228,8 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
  *
  * It exclusively loads the lock value, increments it and stores the new value
  * back if positive and the CPU still exclusively owns the location. If the
- * value is negative, the lock is already held.
+ * value is negative, a writer is pending. Since the rwlock is reentrant in
+ * interrupt context, ignore the write block in that case.
  *
  * During unlocking there may be multiple active read locks but no write lock.
  *
@@ -228,6 +243,7 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
 static inline void arch_read_lock(arch_rwlock_t *rw)
 {
 	unsigned int tmp, tmp2;
+	int allow_write_bypass = in_interrupt();
 
 	asm volatile(
 	"	sevl\n"
@@ -235,21 +251,28 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
 	/* LL/SC */
 	"1:	wfe\n"
 	"2:	ldaxr	%w0, %2\n"
-	"	add	%w0, %w0, #1\n"
+	"	cmp	%w0, %w4\n"
+	"	b.eq	1b\n"	/* writer active */
+	"	add	%w0, %w0, #1\n"
+	"	cbnz	%w3, 3f\n"	/* in interrupt, skip writer check */
 	"	tbnz	%w0, #31, 1b\n"
-	"	stxr	%w1, %w0, %2\n"
+	"3:	stxr	%w1, %w0, %2\n"
 	"	cbnz	%w1, 2b\n"
 	__nops(1),
 	/* LSE atomics */
 	"1:	wfe\n"
 	"2:	ldxr	%w0, %2\n"
+	"	cmp	%w0, %w4\n"
+	"	b.eq	1b\n"	/* writer active, go wait */
 	"	adds	%w1, %w0, #1\n"
+	"	cbnz	%w3, 3f\n"	/* in interrupt, skip writer check */
 	"	tbnz	%w1, #31, 1b\n"
-	"	casa	%w0, %w1, %2\n"
+	"3:	casa	%w0, %w1, %2\n"
 	"	sbc	%w0, %w1, %w0\n"
-	"	cbnz	%w0, 2b")
+	"	cbnz	%w0, 2b"
+	)
 	: "=&r" (tmp), "=&r" (tmp2), "+Q" (rw->lock)
-	:
+	: "r" (allow_write_bypass), "r" (0x80000000)
 	: "cc", "memory");
 }
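For anyone who wants the locking policy without tracing the assembler, here is a
minimal C11-atomics sketch of the behaviour described in the commit message. It
is an illustration only, not the kernel implementation: the sketch_* names and
the sketch_in_interrupt() stub are hypothetical stand-ins for the kernel's
arch_*lock()/in_interrupt(), and the real code is the LL/SC and LSE assembly in
the diff, which additionally uses sevl/wfe so waiters sleep on an event rather
than busy-spin.

/*
 * Illustrative sketch only (not the kernel code): bit 31 of the lock word
 * is the writer flag, bits 0..30 count the readers.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define WRITER_BIT 0x80000000u

typedef struct {
	_Atomic uint32_t lock;
} sketch_rwlock_t;

/* Stand-in for the kernel's in_interrupt(); hard-wired for the sketch. */
static bool sketch_in_interrupt(void) { return false; }

static void sketch_write_lock(sketch_rwlock_t *rw)
{
	uint32_t old;

	/* Step 1: set the writer flag, waiting out any writer that beat us. */
	for (;;) {
		old = atomic_load_explicit(&rw->lock, memory_order_relaxed);
		if (old & WRITER_BIT)
			continue;	/* another writer already flagged the lock */
		if (atomic_compare_exchange_weak_explicit(&rw->lock, &old,
							  old | WRITER_BIT,
							  memory_order_acquire,
							  memory_order_relaxed))
			break;
	}

	/*
	 * Step 2: new readers outside interrupt context now back off; wait
	 * for the existing readers to drain.  The lock is ours once the
	 * value is exactly the writer flag.
	 */
	while (atomic_load_explicit(&rw->lock, memory_order_acquire) != WRITER_BIT)
		;	/* spin (the real code sleeps in wfe here) */
}

static void sketch_write_unlock(sketch_rwlock_t *rw)
{
	atomic_store_explicit(&rw->lock, 0, memory_order_release);
}

static void sketch_read_lock(sketch_rwlock_t *rw)
{
	bool bypass = sketch_in_interrupt();
	uint32_t old;

	for (;;) {
		old = atomic_load_explicit(&rw->lock, memory_order_relaxed);
		if (old == WRITER_BIT)
			continue;	/* writer holds the lock exclusively */
		if (!bypass && (old & WRITER_BIT))
			continue;	/* writer is draining readers; don't join */
		if (atomic_compare_exchange_weak_explicit(&rw->lock, &old,
							  old + 1,
							  memory_order_acquire,
							  memory_order_relaxed))
			break;
	}
}

static void sketch_read_unlock(sketch_rwlock_t *rw)
{
	atomic_fetch_sub_explicit(&rw->lock, 1, memory_order_release);
}

The asymmetry in sketch_read_lock() mirrors the allow_write_bypass operand in
the patch: a reader running in interrupt context may still join while a writer
is draining the existing readers (so a recursive read lock taken from an
interrupt handler cannot deadlock against a waiting writer), but once the lock
value is exactly 0x80000000 the writer owns the lock and every reader waits for
the write unlock.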