From patchwork Thu Feb 7 19:07:18 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 10802021
From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Thomas Gleixner
Subject: [PATCH-tip 14/22] locking/rwsem: Add more rwsem owner access helpers
Date: Thu, 7 Feb 2019 14:07:18 -0500
Message-Id: <1549566446-27967-15-git-send-email-longman@redhat.com>
In-Reply-To: <1549566446-27967-1-git-send-email-longman@redhat.com>
References: <1549566446-27967-1-git-send-email-longman@redhat.com>
Cc: linux-arch@vger.kernel.org, linux-xtensa@linux-xtensa.org,
	Davidlohr Bueso, linux-ia64@vger.kernel.org, Tim Chen,
	Arnd Bergmann, linux-sh@vger.kernel.org,
	linux-hexagon@vger.kernel.org, x86@kernel.org, "H. Peter Anvin",
	linux-kernel@vger.kernel.org, Linus Torvalds, Borislav Petkov,
	linux-alpha@vger.kernel.org, sparclinux@vger.kernel.org,
	Waiman Long, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org

Before combining owner and count, we are adding two new helpers for
accessing the owner value in the rwsem:

 1) struct task_struct *rwsem_get_owner(struct rw_semaphore *sem)
 2) bool is_rwsem_reader_owned(struct rw_semaphore *sem)

Signed-off-by: Waiman Long
---
 kernel/locking/rwsem-xadd.c | 11 ++++++-----
 kernel/locking/rwsem-xadd.h | 32 ++++++++++++++++++++++++++------
 kernel/locking/rwsem.c      |  3 +--
 3 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 5f74bae..719d390 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -277,7 +277,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 		return false;
 
 	rcu_read_lock();
-	owner = READ_ONCE(sem->owner);
+	owner = rwsem_get_owner(sem);
 	if (owner) {
 		ret = is_rwsem_owner_spinnable(owner) &&
 		      owner_on_cpu(owner);
@@ -291,13 +291,13 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
  */
 static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
 {
-	struct task_struct *owner = READ_ONCE(sem->owner);
+	struct task_struct *owner = rwsem_get_owner(sem);
 
 	if (!is_rwsem_owner_spinnable(owner))
 		return false;
 
 	rcu_read_lock();
-	while (owner && (READ_ONCE(sem->owner) == owner)) {
+	while (owner && (rwsem_get_owner(sem) == owner)) {
 		/*
 		 * Ensure we emit the owner->on_cpu, dereference _after_
 		 * checking sem->owner still matches owner, if that fails,
@@ -323,7 +323,7 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
 	 * If there is a new owner or the owner is not set, we continue
 	 * spinning.
 	 */
-	return is_rwsem_owner_spinnable(READ_ONCE(sem->owner));
+	return is_rwsem_owner_spinnable(rwsem_get_owner(sem));
 }
 
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
@@ -361,7 +361,8 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 		 * we're an RT task that will live-lock because we won't let
 		 * the owner complete.
 		 */
-		if (!sem->owner && (need_resched() || rt_task(current)))
+		if (!rwsem_get_owner(sem) &&
+		    (need_resched() || rt_task(current)))
 			break;
 
 		/*
diff --git a/kernel/locking/rwsem-xadd.h b/kernel/locking/rwsem-xadd.h
index 6d4890d..277a134 100644
--- a/kernel/locking/rwsem-xadd.h
+++ b/kernel/locking/rwsem-xadd.h
@@ -83,6 +83,11 @@ static inline void rwsem_clear_owner(struct rw_semaphore *sem)
 	WRITE_ONCE(sem->owner, NULL);
 }
 
+static inline struct task_struct *rwsem_get_owner(struct rw_semaphore *sem)
+{
+	return READ_ONCE(sem->owner);
+}
+
 /*
  * The task_struct pointer of the last owning reader will be left in
  * the owner field.
@@ -116,6 +121,23 @@ static inline bool is_rwsem_owner_spinnable(struct task_struct *owner)
 }
 
 /*
+ * Return true if the rwsem is owned by a reader.
+ */
+static inline bool is_rwsem_reader_owned(struct rw_semaphore *sem)
+{
+#ifdef CONFIG_DEBUG_RWSEMS
+	/*
+	 * Check the count to see if it is write-locked.
+	 */
+	long count = atomic_long_read(&sem->count);
+
+	if (count & RWSEM_WRITER_MASK)
+		return false;
+#endif
+	return (unsigned long)sem->owner & RWSEM_READER_OWNED;
+}
+
+/*
  * Return true if rwsem is owned by an anonymous writer or readers.
  */
 static inline bool rwsem_has_anonymous_owner(struct task_struct *owner)
@@ -135,6 +157,7 @@ static inline void rwsem_clear_reader_owned(struct rw_semaphore *sem)
 {
 	unsigned long val = (unsigned long)current | RWSEM_READER_OWNED
 						   | RWSEM_ANONYMOUSLY_OWNED;
+
 	if (READ_ONCE(sem->owner) == (struct task_struct *)val)
 		cmpxchg_relaxed((unsigned long *)&sem->owner, val,
 				RWSEM_READER_OWNED | RWSEM_ANONYMOUSLY_OWNED);
@@ -181,8 +204,7 @@ static inline void __down_read(struct rw_semaphore *sem)
 	if (unlikely(atomic_long_fetch_add_acquire(RWSEM_READER_BIAS,
 			&sem->count) & RWSEM_READ_FAILED_MASK)) {
 		rwsem_down_read_failed(sem);
-		DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner &
-					RWSEM_READER_OWNED), sem);
+		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
 	} else {
 		rwsem_set_reader_owned(sem);
 	}
@@ -194,8 +216,7 @@ static inline int __down_read_killable(struct rw_semaphore *sem)
 			&sem->count) & RWSEM_READ_FAILED_MASK)) {
 		if (IS_ERR(rwsem_down_read_failed_killable(sem)))
 			return -EINTR;
-		DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner &
-					RWSEM_READER_OWNED), sem);
+		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
 	} else {
 		rwsem_set_reader_owned(sem);
 	}
@@ -254,8 +275,7 @@ static inline void __up_read(struct rw_semaphore *sem)
 {
 	long tmp;
 
-	DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner & RWSEM_READER_OWNED),
-			     sem);
+	DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
 	rwsem_clear_reader_owned(sem);
 	tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
 	if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS))
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index bdfca7c..79fa6e4 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -203,8 +203,7 @@ int __sched down_write_killable_nested(struct rw_semaphore *sem, int subclass)
 
 void up_read_non_owner(struct rw_semaphore *sem)
 {
-	DEBUG_RWSEMS_WARN_ON(!((unsigned long)sem->owner & RWSEM_READER_OWNED),
-			     sem);
+	DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
 	__up_read(sem);
 }
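
As an aside for readers less familiar with the owner-field encoding these
helpers wrap, below is a minimal userspace sketch of the idea. It is
illustration only, not the kernel code: struct task, struct rwsem and the
demo in main() are invented stand-ins, there is no READ_ONCE()/atomic
handling, and the real is_rwsem_reader_owned() additionally cross-checks
sem->count when CONFIG_DEBUG_RWSEMS is set. The bit definitions mirror
RWSEM_READER_OWNED and RWSEM_ANONYMOUSLY_OWNED as used by this series.

/*
 * Minimal userspace sketch of the rwsem owner-word encoding.
 * Illustration only, not kernel code.  Build: cc -o owner_sketch owner_sketch.c
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define RWSEM_READER_OWNED	(1UL << 0)	/* last owner was a reader   */
#define RWSEM_ANONYMOUSLY_OWNED	(1UL << 1)	/* owner may no longer exist */

struct task { const char *comm; };	/* stand-in for task_struct   */
struct rwsem { struct task *owner; };	/* stand-in for rw_semaphore  */

/* Counterpart of rwsem_get_owner(): fetch the raw owner word. */
static struct task *rwsem_get_owner(struct rwsem *sem)
{
	return sem->owner;	/* the kernel helper uses READ_ONCE() */
}

/* Counterpart of is_rwsem_reader_owned(): test the reader-owned bit. */
static bool is_rwsem_reader_owned(struct rwsem *sem)
{
	return (uintptr_t)sem->owner & RWSEM_READER_OWNED;
}

int main(void)
{
	static struct task reader = { "reader" };
	struct rwsem sem = {
		/* a reader tags its task pointer with the low bit like this */
		.owner = (struct task *)((uintptr_t)&reader | RWSEM_READER_OWNED),
	};
	struct task *t;

	printf("reader owned: %d\n", is_rwsem_reader_owned(&sem));

	/* mask off the low bits to recover the task pointer itself */
	t = (struct task *)((uintptr_t)rwsem_get_owner(&sem) &
			    ~(RWSEM_READER_OWNED | RWSEM_ANONYMOUSLY_OWNED));
	printf("owner task  : %s\n", t->comm);
	return 0;
}

Funnelling every owner access through rwsem_get_owner() like this is what
lets a later patch fold the owner value into the same word as the count
without having to touch each call site again.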