From patchwork Thu Sep 7 17:47:01 2023
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org
Subject: [PATCH 1/5] locking: Add rwsem_is_write_locked()
Date: Thu, 7 Sep 2023 18:47:01 +0100
Message-Id: <20230907174705.2976191-2-willy@infradead.org>
In-Reply-To: <20230907174705.2976191-1-willy@infradead.org>
References: <20230907174705.2976191-1-willy@infradead.org>

Several places want to know whether the lock is held by a writer, instead
of just whether it's held.  We can implement this for both normal and
rt rwsems.  RWSEM_WRITER_LOCKED is declared in rwsem.c and exposing it
outside that file might tempt other people to use it, so just use a
comment to note that's what the 1 means, and help anybody find it if
they're looking to change the implementation.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rwbase_rt.h |  5 +++++
 include/linux/rwsem.h     | 10 ++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/rwbase_rt.h b/include/linux/rwbase_rt.h
index 1d264dd08625..3c25b14edc05 100644
--- a/include/linux/rwbase_rt.h
+++ b/include/linux/rwbase_rt.h
@@ -31,6 +31,11 @@ static __always_inline bool rw_base_is_locked(struct rwbase_rt *rwb)
 	return atomic_read(&rwb->readers) != READER_BIAS;
 }
 
+static __always_inline bool rw_base_is_write_locked(struct rwbase_rt *rwb)
+{
+	return atomic_read(&rwb->readers) == WRITER_BIAS;
+}
+
 static __always_inline bool rw_base_is_contended(struct rwbase_rt *rwb)
 {
 	return atomic_read(&rwb->readers) > 0;
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 1dd530ce8b45..0f78b8d2e653 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -72,6 +72,11 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
 	return atomic_long_read(&sem->count) != 0;
 }
 
+static inline int rwsem_is_write_locked(struct rw_semaphore *sem)
+{
+	return atomic_long_read(&sem->count) & 1 /* RWSEM_WRITER_LOCKED */;
+}
+
 #define RWSEM_UNLOCKED_VALUE		0L
 #define __RWSEM_COUNT_INIT(name)	.count = ATOMIC_LONG_INIT(RWSEM_UNLOCKED_VALUE)
 
@@ -157,6 +162,11 @@ static __always_inline int rwsem_is_locked(struct rw_semaphore *sem)
 	return rw_base_is_locked(&sem->rwbase);
 }
 
+static __always_inline int rwsem_is_write_locked(struct rw_semaphore *sem)
+{
+	return rw_base_is_write_locked(&sem->rwbase);
+}
+
 static __always_inline int rwsem_is_contended(struct rw_semaphore *sem)
 {
 	return rw_base_is_contended(&sem->rwbase);
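
For illustration only (not part of the patch): a minimal sketch of how a
caller might use the new helper to strengthen a debug assertion when
lockdep is unavailable.  The structure and function names here are
hypothetical.

#include <linux/rwsem.h>
#include <linux/bug.h>

/* Hypothetical structure protected by an rwsem. */
struct foo_dev {
	struct rw_semaphore	cfg_lock;
	unsigned int		cfg_gen;
};

/* Must be called with cfg_lock held for write. */
static void foo_bump_generation(struct foo_dev *dev)
{
	/*
	 * rwsem_is_locked() would also pass if the lock were held only
	 * for read; rwsem_is_write_locked() catches that misuse too.
	 */
	WARN_ON_ONCE(!rwsem_is_write_locked(&dev->cfg_lock));
	dev->cfg_gen++;
}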
From patchwork Thu Sep 7 17:47:02 2023
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org
Subject: [PATCH 2/5] mm: Use rwsem_is_write_locked in mmap_assert_write_locked
Date: Thu, 7 Sep 2023 18:47:02 +0100
Message-Id: <20230907174705.2976191-3-willy@infradead.org>
In-Reply-To: <20230907174705.2976191-1-willy@infradead.org>
References: <20230907174705.2976191-1-willy@infradead.org>

This slightly strengthens our checks when lockdep is disabled.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mmap_lock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 8d38dcb6d044..0258b06668e8 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -69,7 +69,7 @@ static inline void mmap_assert_locked(struct mm_struct *mm)
 static inline void mmap_assert_write_locked(struct mm_struct *mm)
 {
 	lockdep_assert_held_write(&mm->mmap_lock);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
+	VM_BUG_ON_MM(!rwsem_is_write_locked(&mm->mmap_lock), mm);
 }
 
 #ifdef CONFIG_PER_VMA_LOCK
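
For illustration only (not part of the patch): a sketch of the kind of
caller this assertion protects.  frob_vma_flags() is a hypothetical
helper; mmap_assert_write_locked() and vm_flags_set() are existing
kernel APIs.

#include <linux/mm.h>
#include <linux/mmap_lock.h>

/* Hypothetical helper: the caller must hold mm->mmap_lock for write. */
static void frob_vma_flags(struct mm_struct *mm, struct vm_area_struct *vma,
			   unsigned long set)
{
	/*
	 * With this patch, a !lockdep build also catches callers that
	 * only hold mmap_lock for read, not just those holding nothing.
	 */
	mmap_assert_write_locked(mm);
	vm_flags_set(vma, set);
}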
From patchwork Thu Sep 7 17:47:03 2023
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org
Subject: [PATCH 3/5] xfs: Use rwsem_is_write_locked()
Date: Thu, 7 Sep 2023 18:47:03 +0100
Message-Id: <20230907174705.2976191-4-willy@infradead.org>
In-Reply-To: <20230907174705.2976191-1-willy@infradead.org>
References: <20230907174705.2976191-1-willy@infradead.org>

This avoids using the mr_writer field to check the XFS ILOCK is held
for write.  It also improves the checking we do when lockdep is
disabled.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/xfs/xfs_inode.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 360fe83a334f..e58d84d23f49 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -339,8 +339,11 @@ __xfs_rwsem_islocked(
 	struct rw_semaphore	*rwsem,
 	bool			shared)
 {
-	if (!debug_locks)
+	if (!debug_locks) {
+		if (!shared)
+			return rwsem_is_write_locked(rwsem);
 		return rwsem_is_locked(rwsem);
+	}
 
 	if (!shared)
 		return lockdep_is_held_type(rwsem, 0);
@@ -359,12 +362,10 @@ xfs_isilocked(
 	struct xfs_inode	*ip,
 	uint			lock_flags)
 {
-	if (lock_flags & (XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)) {
-		if (!(lock_flags & XFS_ILOCK_SHARED))
-			return !!ip->i_lock.mr_writer;
+	if (lock_flags & XFS_ILOCK_SHARED)
 		return rwsem_is_locked(&ip->i_lock.mr_lock);
-	}
-
+	if (lock_flags & XFS_ILOCK_EXCL)
+		return rwsem_is_write_locked(&ip->i_lock.mr_lock);
 	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
 		return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock,
 				(lock_flags & XFS_MMAPLOCK_SHARED));
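
For illustration only (not from this series): XFS debug builds consume
xfs_isilocked() in ASSERT()s at the top of functions that require the
ILOCK.  The function below is a made-up example of that pattern; the
includes are a plausible subset of what a real fs/xfs file pulls in.

#include "xfs.h"
#include "xfs_inode.h"

/* Hypothetical example of the assertion pattern used throughout fs/xfs. */
static void example_set_disk_size(struct xfs_inode *ip, xfs_fsize_t new_size)
{
	/* With this patch, the exclusive case no longer relies on mr_writer. */
	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
	ip->i_disk_size = new_size;
}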
From patchwork Thu Sep 7 17:47:04 2023
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org
Subject: [PATCH 4/5] xfs: Remove mrlock wrapper
Date: Thu, 7 Sep 2023 18:47:04 +0100
Message-Id: <20230907174705.2976191-5-willy@infradead.org>
In-Reply-To: <20230907174705.2976191-1-willy@infradead.org>
References: <20230907174705.2976191-1-willy@infradead.org>

mrlock was an rwsem wrapper that also recorded whether the lock was
held for read or write.  Now that we can ask the generic code whether
the lock is held for read or write, we can remove this wrapper and use
an rwsem directly.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/xfs/mrlock.h    | 78 ----------------------------------------------
 fs/xfs/xfs_inode.c | 18 +++++------
 fs/xfs/xfs_inode.h |  2 +-
 fs/xfs/xfs_iops.c  |  4 +--
 fs/xfs/xfs_linux.h |  2 +-
 fs/xfs/xfs_super.c |  4 +--
 6 files changed, 14 insertions(+), 94 deletions(-)
 delete mode 100644 fs/xfs/mrlock.h

diff --git a/fs/xfs/mrlock.h b/fs/xfs/mrlock.h
deleted file mode 100644
index 79155eec341b..000000000000
--- a/fs/xfs/mrlock.h
+++ /dev/null
@@ -1,78 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2000-2006 Silicon Graphics, Inc.
- * All Rights Reserved.
- */
-#ifndef __XFS_SUPPORT_MRLOCK_H__
-#define __XFS_SUPPORT_MRLOCK_H__
-
-#include
-
-typedef struct {
-	struct rw_semaphore	mr_lock;
-#if defined(DEBUG) || defined(XFS_WARN)
-	int			mr_writer;
-#endif
-} mrlock_t;
-
-#if defined(DEBUG) || defined(XFS_WARN)
-#define mrinit(mrp, name)	\
-	do { (mrp)->mr_writer = 0; init_rwsem(&(mrp)->mr_lock); } while (0)
-#else
-#define mrinit(mrp, name)	\
-	do { init_rwsem(&(mrp)->mr_lock); } while (0)
-#endif
-
-#define mrlock_init(mrp, t,n,s)	mrinit(mrp, n)
-#define mrfree(mrp)		do { } while (0)
-
-static inline void mraccess_nested(mrlock_t *mrp, int subclass)
-{
-	down_read_nested(&mrp->mr_lock, subclass);
-}
-
-static inline void mrupdate_nested(mrlock_t *mrp, int subclass)
-{
-	down_write_nested(&mrp->mr_lock, subclass);
-#if defined(DEBUG) || defined(XFS_WARN)
-	mrp->mr_writer = 1;
-#endif
-}
-
-static inline int mrtryaccess(mrlock_t *mrp)
-{
-	return down_read_trylock(&mrp->mr_lock);
-}
-
-static inline int mrtryupdate(mrlock_t *mrp)
-{
-	if (!down_write_trylock(&mrp->mr_lock))
-		return 0;
-#if defined(DEBUG) || defined(XFS_WARN)
-	mrp->mr_writer = 1;
-#endif
-	return 1;
-}
-
-static inline void mrunlock_excl(mrlock_t *mrp)
-{
-#if defined(DEBUG) || defined(XFS_WARN)
-	mrp->mr_writer = 0;
-#endif
-	up_write(&mrp->mr_lock);
-}
-
-static inline void mrunlock_shared(mrlock_t *mrp)
-{
-	up_read(&mrp->mr_lock);
-}
-
-static inline void mrdemote(mrlock_t *mrp)
-{
-#if defined(DEBUG) || defined(XFS_WARN)
-	mrp->mr_writer = 0;
-#endif
-	downgrade_write(&mrp->mr_lock);
-}
-
-#endif /* __XFS_SUPPORT_MRLOCK_H__ */
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index e58d84d23f49..c3cd73c29868 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -208,9 +208,9 @@ xfs_ilock(
 	}
 
 	if (lock_flags & XFS_ILOCK_EXCL)
-		mrupdate_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
+		down_write_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
 	else if (lock_flags & XFS_ILOCK_SHARED)
-		mraccess_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
+		down_read_nested(&ip->i_lock, XFS_ILOCK_DEP(lock_flags));
 }
 
 /*
@@ -251,10 +251,10 @@ xfs_ilock_nowait(
 	}
 
 	if (lock_flags & XFS_ILOCK_EXCL) {
-		if (!mrtryupdate(&ip->i_lock))
+		if (!down_write_trylock(&ip->i_lock))
 			goto out_undo_mmaplock;
 	} else if (lock_flags & XFS_ILOCK_SHARED) {
-		if (!mrtryaccess(&ip->i_lock))
+		if (!down_read_trylock(&ip->i_lock))
 			goto out_undo_mmaplock;
 	}
 	return 1;
@@ -303,9 +303,9 @@ xfs_iunlock(
 		up_read(&VFS_I(ip)->i_mapping->invalidate_lock);
 
 	if (lock_flags & XFS_ILOCK_EXCL)
-		mrunlock_excl(&ip->i_lock);
+		up_write(&ip->i_lock);
 	else if (lock_flags & XFS_ILOCK_SHARED)
-		mrunlock_shared(&ip->i_lock);
+		up_read(&ip->i_lock);
 
 	trace_xfs_iunlock(ip, lock_flags, _RET_IP_);
 }
@@ -324,7 +324,7 @@ xfs_ilock_demote(
 			~(XFS_IOLOCK_EXCL|XFS_MMAPLOCK_EXCL|XFS_ILOCK_EXCL)) == 0);
 
 	if (lock_flags & XFS_ILOCK_EXCL)
-		mrdemote(&ip->i_lock);
+		downgrade_write(&ip->i_lock);
 	if (lock_flags & XFS_MMAPLOCK_EXCL)
 		downgrade_write(&VFS_I(ip)->i_mapping->invalidate_lock);
 	if (lock_flags & XFS_IOLOCK_EXCL)
@@ -363,9 +363,9 @@ xfs_isilocked(
 	uint			lock_flags)
 {
 	if (lock_flags & XFS_ILOCK_SHARED)
-		return rwsem_is_locked(&ip->i_lock.mr_lock);
+		return rwsem_is_locked(&ip->i_lock);
 	if (lock_flags & XFS_ILOCK_EXCL)
-		return rwsem_is_write_locked(&ip->i_lock.mr_lock);
+		return rwsem_is_write_locked(&ip->i_lock);
 	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
 		return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock,
 				(lock_flags & XFS_MMAPLOCK_SHARED));
diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
index 7547caf2f2ab..18941cd21b81 100644
--- a/fs/xfs/xfs_inode.h
+++ b/fs/xfs/xfs_inode.h
@@ -39,7 +39,7 @@ typedef struct xfs_inode {
 
 	/* Transaction and locking information. */
 	struct xfs_inode_log_item *i_itemp;	/* logging information */
-	mrlock_t		i_lock;		/* inode lock */
+	struct rw_semaphore	i_lock;		/* inode lock */
 	atomic_t		i_pincount;	/* inode pin count */
 	struct llist_node	i_gclist;	/* deferred inactivation list */
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index 2ededd3f6b8c..ba64c9c5d3ab 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -1280,9 +1280,9 @@ xfs_setup_inode(
 		 */
 		lockdep_set_class(&inode->i_rwsem,
 				  &inode->i_sb->s_type->i_mutex_dir_key);
-		lockdep_set_class(&ip->i_lock.mr_lock, &xfs_dir_ilock_class);
+		lockdep_set_class(&ip->i_lock, &xfs_dir_ilock_class);
 	} else {
-		lockdep_set_class(&ip->i_lock.mr_lock, &xfs_nondir_ilock_class);
+		lockdep_set_class(&ip->i_lock, &xfs_nondir_ilock_class);
 	}
 
 	/*
diff --git a/fs/xfs/xfs_linux.h b/fs/xfs/xfs_linux.h
index e9d317a3dafe..15fdaef578fe 100644
--- a/fs/xfs/xfs_linux.h
+++ b/fs/xfs/xfs_linux.h
@@ -22,7 +22,6 @@ typedef __u32 xfs_nlink_t;
 
 #include "xfs_types.h"
 #include "kmem.h"
-#include "mrlock.h"
 
 #include
 #include
@@ -51,6 +50,7 @@ typedef __u32 xfs_nlink_t;
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 1f77014c6e1a..d190afbcfe04 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -739,9 +739,7 @@ xfs_fs_inode_init_once(
 	/* xfs inode */
 	atomic_set(&ip->i_pincount, 0);
 	spin_lock_init(&ip->i_flags_lock);
-
-	mrlock_init(&ip->i_lock, MRLOCK_ALLOW_EQUAL_PRI|MRLOCK_BARRIER,
-		     "xfsino", ip->i_ino);
+	init_rwsem(&ip->i_lock);
 }
 
 /*
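
For illustration only (not part of the patch): a sketch of an exclusive
section over a lock that used to be an mrlock.  The function is
hypothetical; real XFS code goes through xfs_ilock()/xfs_iunlock(),
which this patch converts to the calls shown here.

#include <linux/rwsem.h>

/*
 * Hypothetical write-locked section.  The point is that no mr_writer
 * bookkeeping is needed around the plain rwsem calls any more.
 */
static void example_excl_section(struct rw_semaphore *i_lock)
{
	down_write(i_lock);	/* was mrupdate_nested(): also set mr_writer = 1 */
	/* ... modify the structure protected by i_lock ... */
	up_write(i_lock);	/* was mrunlock_excl(): also cleared mr_writer */
}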
From patchwork Thu Sep 7 17:47:05 2023
From: "Matthew Wilcox (Oracle)"
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Chandan Babu R, "Darrick J. Wong", linux-xfs@vger.kernel.org
Subject: [PATCH 5/5] xfs: Stop using lockdep to assert that locks are held
Date: Thu, 7 Sep 2023 18:47:05 +0100
Message-Id: <20230907174705.2976191-6-willy@infradead.org>
In-Reply-To: <20230907174705.2976191-1-willy@infradead.org>
References: <20230907174705.2976191-1-willy@infradead.org>

Lockdep does not know that the worker thread has inherited the lock
from its caller.  Rather than dance around moving the ownership from
the caller to the thread and back again, just remove the lockdep
assertions and rely on the rwsem itself to tell us whether _somebody_
is holding the lock at the moment.  __xfs_rwsem_islocked() simplifies
into a trivial function, which is easy to inline into xfs_isilocked().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/xfs/xfs_inode.c | 40 ++++++++--------------------------------
 1 file changed, 8 insertions(+), 32 deletions(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index c3cd73c29868..81ee6bf8c662 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -334,29 +334,6 @@ xfs_ilock_demote(
 }
 
 #if defined(DEBUG) || defined(XFS_WARN)
-static inline bool
-__xfs_rwsem_islocked(
-	struct rw_semaphore	*rwsem,
-	bool			shared)
-{
-	if (!debug_locks) {
-		if (!shared)
-			return rwsem_is_write_locked(rwsem);
-		return rwsem_is_locked(rwsem);
-	}
-
-	if (!shared)
-		return lockdep_is_held_type(rwsem, 0);
-
-	/*
-	 * We are checking that the lock is held at least in shared
-	 * mode but don't care that it might be held exclusively
-	 * (i.e. shared | excl). Hence we check if the lock is held
-	 * in any mode rather than an explicit shared mode.
-	 */
-	return lockdep_is_held_type(rwsem, -1);
-}
-
 bool
 xfs_isilocked(
 	struct xfs_inode	*ip,
@@ -366,15 +343,14 @@ xfs_isilocked(
 		return rwsem_is_locked(&ip->i_lock);
 	if (lock_flags & XFS_ILOCK_EXCL)
 		return rwsem_is_write_locked(&ip->i_lock);
-	if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) {
-		return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock,
-				(lock_flags & XFS_MMAPLOCK_SHARED));
-	}
-
-	if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
-		return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem,
-				(lock_flags & XFS_IOLOCK_SHARED));
-	}
+	if (lock_flags & XFS_MMAPLOCK_SHARED)
+		return rwsem_is_locked(&VFS_I(ip)->i_mapping->invalidate_lock);
+	if (lock_flags & XFS_MMAPLOCK_EXCL)
+		return rwsem_is_write_locked(&VFS_I(ip)->i_mapping->invalidate_lock);
+	if (lock_flags & XFS_IOLOCK_SHARED)
+		return rwsem_is_locked(&VFS_I(ip)->i_rwsem);
+	if (lock_flags & XFS_IOLOCK_EXCL)
+		return rwsem_is_write_locked(&VFS_I(ip)->i_rwsem);
 
 	ASSERT(0);
 	return false;
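
For illustration only (not part of the patch): a schematic of the
hand-off situation the commit message describes.  All names here are
hypothetical; the point is that the rwsem's own state still reports
"write locked" in the worker, where a lockdep ownership assertion would
fire.

#include <linux/workqueue.h>
#include <linux/rwsem.h>
#include <linux/bug.h>

struct handoff_work {
	struct work_struct	work;
	struct rw_semaphore	*lock;	/* taken for write by the submitter */
};

static void handoff_worker(struct work_struct *work)
{
	struct handoff_work *hw = container_of(work, struct handoff_work, work);

	/*
	 * lockdep_assert_held_write(hw->lock) would trigger here because
	 * the submitting task, not this worker, is the lockdep owner.
	 * The rwsem state itself still says "write locked", which is the
	 * check the reworked xfs_isilocked() relies on.
	 */
	WARN_ON_ONCE(!rwsem_is_write_locked(hw->lock));

	/* ... do the work under the inherited lock ... */
}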