From patchwork Wed Oct 30 10:56:56 2019
From: David Sterba
To: linux-btrfs@vger.kernel.org
Cc: David Sterba
Subject: [PATCH v2 1/5] btrfs: merge blocking_writers branches in btrfs_tree_read_lock
Date: Wed, 30 Oct 2019 11:56:56 +0100

There are two ifs that use eb::blocking_writers. As this is a variable
modified inside and outside of locks, we should minimize the number of
accesses to avoid problems with getting different results at different
times.

The access here is locked so this can only race with btrfs_tree_unlock
that sets blocking_writers to 0 without a lock and unsets the lock
owner.

The first branch is taken only if the same thread already holds the
lock, the second if checks for blocking writers. Here we'd either
unlock and wait, or proceed. Both are valid states of the locking
protocol.
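To make the new shape easier to see, here is a condensed sketch of the
control flow that results from this change, paraphrased from the hunk
below (tracing, timing and the debug assertions are omitted):

void btrfs_tree_read_lock(struct extent_buffer *eb)
{
again:
	read_lock(&eb->lock);
	if (eb->blocking_writers) {
		if (current->pid == eb->lock_owner) {
			/* Nested read lock inside our own write lock. */
			eb->lock_nested = true;
			read_unlock(&eb->lock);
			return;
		}
		/* A writer went blocking: drop the lock and wait for it. */
		read_unlock(&eb->lock);
		wait_event(eb->write_lock_wq, eb->blocking_writers == 0);
		goto again;
	}
	/* Read lock acquired, bookkeeping follows in the real function. */
}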
Signed-off-by: David Sterba
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/locking.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 93146b495276..c84c650e56c7 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -128,20 +128,21 @@ void btrfs_tree_read_lock(struct extent_buffer *eb)
 	read_lock(&eb->lock);
 	BUG_ON(eb->blocking_writers == 0 &&
 	       current->pid == eb->lock_owner);
-	if (eb->blocking_writers && current->pid == eb->lock_owner) {
-		/*
-		 * This extent is already write-locked by our thread. We allow
-		 * an additional read lock to be added because it's for the same
-		 * thread. btrfs_find_all_roots() depends on this as it may be
-		 * called on a partly (write-)locked tree.
-		 */
-		BUG_ON(eb->lock_nested);
-		eb->lock_nested = true;
-		read_unlock(&eb->lock);
-		trace_btrfs_tree_read_lock(eb, start_ns);
-		return;
-	}
 	if (eb->blocking_writers) {
+		if (current->pid == eb->lock_owner) {
+			/*
+			 * This extent is already write-locked by our thread.
+			 * We allow an additional read lock to be added because
+			 * it's for the same thread. btrfs_find_all_roots()
+			 * depends on this as it may be called on a partly
+			 * (write-)locked tree.
+			 */
+			BUG_ON(eb->lock_nested);
+			eb->lock_nested = true;
+			read_unlock(&eb->lock);
+			trace_btrfs_tree_read_lock(eb, start_ns);
+			return;
+		}
 		read_unlock(&eb->lock);
 		wait_event(eb->write_lock_wq,
 			   eb->blocking_writers == 0);

From patchwork Wed Oct 30 10:56:59 2019
From: David Sterba
To: linux-btrfs@vger.kernel.org
Cc: David Sterba
Subject: [PATCH v2 2/5] btrfs: set blocking_writers directly, no increment or decrement
Date: Wed, 30 Oct 2019 11:56:59 +0100
Message-Id: <56aee34d4d33bde65a9bea6e6d51fc272bee6bbf.1572432768.git.dsterba@suse.com>

The increment and decrement were inherited from the previous version
that used atomics, switched in commit 06297d8cefca ("btrfs: switch
extent_buffer blocking_writers from atomic to int"). The only possible
values are 0 and 1 so we can set them directly.
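To make the reasoning concrete, here is a minimal sketch of the two
affected sites after this change; only the relevant fragments are shown
and the surrounding assertions are elided (see the diff below for the
exact context):

	/* btrfs_set_lock_blocking_write(): only reached while the value is 0 */
	if (eb->blocking_writers == 0) {
		/* ... debugging assertions ... */
		eb->blocking_writers = 1;	/* was: eb->blocking_writers++ */
		write_unlock(&eb->lock);
	}

	/* btrfs_tree_unlock(): blockers was read earlier and can only be 0 or 1 */
	if (blockers) {
		/* ... debugging assertions ... */
		eb->blocking_writers = 0;	/* was: eb->blocking_writers-- */
		/* ordering and wakeup follow, unchanged by this patch */
	}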
The generated assembly (gcc 9.x) did the direct value assignment in
btrfs_set_lock_blocking_write (asm diff after change in 06297d8cefca):

     5d:   test   %eax,%eax
     5f:   je     62
     61:   retq
  -  62:   lock incl 0x44(%rdi)
  -  66:   add    $0x50,%rdi
  -  6a:   jmpq   6f
  +  62:   movl   $0x1,0x44(%rdi)
  +  69:   add    $0x50,%rdi
  +  6d:   jmpq   72

The part in btrfs_tree_unlock still did a decrement, probably because
BUG_ON(blockers > 1) is not a strong enough hint for the compiler, but
otherwise the output looks safe:

  -  lock decl 0x44(%rdi)
  +  sub    $0x1,%eax
  +  mov    %eax,0x44(%rdi)

Signed-off-by: David Sterba
---
 fs/btrfs/locking.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index c84c650e56c7..00edf91c3d1c 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -109,7 +109,7 @@ void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 	if (eb->blocking_writers == 0) {
 		btrfs_assert_spinning_writers_put(eb);
 		btrfs_assert_tree_locked(eb);
-		eb->blocking_writers++;
+		eb->blocking_writers = 1;
 		write_unlock(&eb->lock);
 	}
 }
@@ -305,7 +305,7 @@ void btrfs_tree_unlock(struct extent_buffer *eb)
 
 	if (blockers) {
 		btrfs_assert_no_spinning_writers(eb);
-		eb->blocking_writers--;
+		eb->blocking_writers = 0;
 		/*
 		 * We need to order modifying blocking_writers above with
 		 * actually waking up the sleepers to ensure they see the

From patchwork Wed Oct 30 10:57:01 2019
From: David Sterba
To: linux-btrfs@vger.kernel.org
Cc: David Sterba
Subject: [PATCH v2 3/5] btrfs: access eb::blocking_writers according to ACCESS_ONCE policies
Date: Wed, 30 Oct 2019 11:57:01 +0100
Message-Id: <6a267daa889d9c5d0a8fc86e23fe25f845e2547d.1572432768.git.dsterba@suse.com>

A nice writeup of the LKMM (Linux Kernel Memory Model) rules for access
once policies can be found here
https://lwn.net/Articles/799218/#Access-Marking%20Policies .

The locked and unlocked access to eb::blocking_writers should be
annotated accordingly, following this:

Writes:

- locked write must use ONCE, may use plain read
- unlocked write must use ONCE

Reads:

- unlocked read must use ONCE
- locked read may use plain read iff not mixed with unlocked read
- unlocked read then locked must use ONCE

There's one difference on the assembly level, where
btrfs_tree_read_lock_atomic and btrfs_try_tree_read_lock used the
cached value and did not reevaluate it after taking the lock. This
could have missed some opportunities to take the lock in case blocking
writers changed between the calls, but the window is just a few
instructions long. As this is in try-lock, the callers handle that.
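The "unlocked read then locked" rule is the interesting one here; a
minimal sketch of the resulting pattern, condensed from the try-lock
helper in the diff below (debug counters and trace point omitted):

int btrfs_try_tree_read_lock(struct extent_buffer *eb)
{
	/* Unlocked read: must be marked, it can race with the unlocked write. */
	if (READ_ONCE(eb->blocking_writers))
		return 0;

	if (!read_trylock(&eb->lock))
		return 0;

	/* Refetch after taking the lock, do not reuse the cached value. */
	if (READ_ONCE(eb->blocking_writers)) {
		read_unlock(&eb->lock);
		return 0;
	}
	/* ... bookkeeping in the real function ... */
	return 1;
}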
Signed-off-by: David Sterba
---
 fs/btrfs/locking.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 00edf91c3d1c..4cd593a2f58c 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -109,7 +109,7 @@ void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 	if (eb->blocking_writers == 0) {
 		btrfs_assert_spinning_writers_put(eb);
 		btrfs_assert_tree_locked(eb);
-		eb->blocking_writers = 1;
+		WRITE_ONCE(eb->blocking_writers, 1);
 		write_unlock(&eb->lock);
 	}
 }
@@ -145,7 +145,7 @@ void btrfs_tree_read_lock(struct extent_buffer *eb)
 		}
 		read_unlock(&eb->lock);
 		wait_event(eb->write_lock_wq,
-			   eb->blocking_writers == 0);
+			   READ_ONCE(eb->blocking_writers) == 0);
 		goto again;
 	}
 	btrfs_assert_tree_read_locks_get(eb);
@@ -160,11 +160,12 @@ void btrfs_tree_read_lock(struct extent_buffer *eb)
  */
 int btrfs_tree_read_lock_atomic(struct extent_buffer *eb)
 {
-	if (eb->blocking_writers)
+	if (READ_ONCE(eb->blocking_writers))
 		return 0;
 
 	read_lock(&eb->lock);
-	if (eb->blocking_writers) {
+	/* Refetch value after lock */
+	if (READ_ONCE(eb->blocking_writers)) {
 		read_unlock(&eb->lock);
 		return 0;
 	}
@@ -180,13 +181,14 @@ int btrfs_tree_read_lock_atomic(struct extent_buffer *eb)
  */
 int btrfs_try_tree_read_lock(struct extent_buffer *eb)
 {
-	if (eb->blocking_writers)
+	if (READ_ONCE(eb->blocking_writers))
 		return 0;
 
 	if (!read_trylock(&eb->lock))
 		return 0;
 
-	if (eb->blocking_writers) {
+	/* Refetch value after lock */
+	if (READ_ONCE(eb->blocking_writers)) {
 		read_unlock(&eb->lock);
 		return 0;
 	}
@@ -202,11 +204,12 @@ int btrfs_try_tree_read_lock(struct extent_buffer *eb)
  */
 int btrfs_try_tree_write_lock(struct extent_buffer *eb)
 {
-	if (eb->blocking_writers || atomic_read(&eb->blocking_readers))
+	if (READ_ONCE(eb->blocking_writers) || atomic_read(&eb->blocking_readers))
 		return 0;
 
 	write_lock(&eb->lock);
-	if (eb->blocking_writers || atomic_read(&eb->blocking_readers)) {
+	/* Refetch value after lock */
+	if (READ_ONCE(eb->blocking_writers) || atomic_read(&eb->blocking_readers)) {
 		write_unlock(&eb->lock);
 		return 0;
 	}
@@ -277,9 +280,11 @@ void btrfs_tree_lock(struct extent_buffer *eb)
 	WARN_ON(eb->lock_owner == current->pid);
 again:
 	wait_event(eb->read_lock_wq, atomic_read(&eb->blocking_readers) == 0);
-	wait_event(eb->write_lock_wq, eb->blocking_writers == 0);
+	wait_event(eb->write_lock_wq, READ_ONCE(eb->blocking_writers) == 0);
 	write_lock(&eb->lock);
-	if (atomic_read(&eb->blocking_readers) || eb->blocking_writers) {
+	/* Refetch value after lock */
+	if (atomic_read(&eb->blocking_readers) ||
+	    READ_ONCE(eb->blocking_writers)) {
 		write_unlock(&eb->lock);
 		goto again;
 	}
@@ -294,6 +299,10 @@ void btrfs_tree_lock(struct extent_buffer *eb)
  */
 void btrfs_tree_unlock(struct extent_buffer *eb)
 {
+	/*
+	 * This is read both
+	 * locked and unlocked but always by the same thread
+	 * that already owns the lock so we don't need to use READ_ONCE
+	 */
 	int blockers = eb->blocking_writers;
 
 	BUG_ON(blockers > 1);
@@ -305,7 +314,8 @@ void btrfs_tree_unlock(struct extent_buffer *eb)
 
 	if (blockers) {
 		btrfs_assert_no_spinning_writers(eb);
-		eb->blocking_writers = 0;
+		/* Unlocked write */
+		WRITE_ONCE(eb->blocking_writers, 0);
 		/*
 		 * We need to order modifying blocking_writers above with
 		 * actually waking up the sleepers to ensure they see the

From patchwork Wed Oct 30 10:57:03 2019
From: David Sterba
To: linux-btrfs@vger.kernel.org
Cc: David Sterba
Subject: [PATCH v2 4/5] btrfs: document extent buffer locking
Date: Wed, 30 Oct 2019 11:57:03 +0100

Signed-off-by: David Sterba
---
 fs/btrfs/locking.c | 172 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 158 insertions(+), 14 deletions(-)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 4cd593a2f58c..571c4826c428 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -13,6 +13,110 @@
 #include "extent_io.h"
 #include "locking.h"
 
+/*
+ * Extent buffer locking
+ * =====================
+ *
+ * The locks use a custom scheme that allows doing more operations than are
+ * available from the current locking primitives. The building blocks are
+ * still rwlock and wait queues.
+ *
+ * Required semantics:
+ *
+ * - reader/writer exclusion
+ * - writer/writer exclusion
+ * - reader/reader sharing
+ * - spinning lock semantics
+ * - blocking lock semantics
+ * - try-lock semantics for readers and writers
+ * - one level nesting, allowing read lock to be taken by the same thread that
+ *   already has write lock
+ *
+ * The extent buffer locks (also called tree locks) manage access to eb data
+ * related to the storage in the b-tree (keys, items, but not the individual
+ * members of eb).
+ * We want concurrency of many readers and safe updates. The underlying locking
+ * is done by read-write spinlock and the blocking part is implemented using
+ * counters and wait queues.
+ *
+ * spinning semantics - the low-level rwlock is held so all other threads that
+ *                      want to take it are spinning on it.
+ *
+ * blocking semantics - the low-level rwlock is not held but the counter
+ *                      denotes how many times the blocking lock was held;
+ *                      sleeping is possible
+ *
+ * Write lock always allows only one thread to access the data.
+ *
+ *
+ * Debugging
+ * ---------
+ *
+ * There are additional state counters that are asserted in various contexts,
+ * removed from non-debug build to reduce extent_buffer size and for
+ * performance reasons.
+ *
+ *
+ * Lock nesting
+ * ------------
+ *
+ * A write operation on a tree might indirectly start a lookup on the same
+ * tree. This can happen when btrfs_cow_block locks the tree and needs to
+ * lookup free extents.
+ *
+ * btrfs_cow_block
+ *   ..
+ *   alloc_tree_block_no_bg_flush
+ *     btrfs_alloc_tree_block
+ *       btrfs_reserve_extent
+ *         ..
+ *         load_free_space_cache
+ *           ..
+ *           btrfs_lookup_file_extent
+ *             btrfs_search_slot
+ *
+ *
+ * Locking pattern - spinning
+ * --------------------------
+ *
+ * The simple locking scenario, the +--+ denotes the spinning section.
+ *
+ * +- btrfs_tree_lock
+ * | - extent_buffer::rwlock is held
+ * | - no heavy operations should happen, eg. IO, memory allocations, large
+ * |   structure traversals
+ * +- btrfs_tree_unlock
+ *
+ *
+ * Locking pattern - blocking
+ * --------------------------
+ *
+ * The blocking write uses the following scheme. The +--+ denotes the spinning
+ * section.
+ *
+ * +- btrfs_tree_lock
+ * |
+ * +- btrfs_set_lock_blocking_write
+ *
+ *   - allowed: IO, memory allocations, etc.
+ *
+ * -- btrfs_tree_unlock - note, no explicit unblocking necessary
+ *
+ *
+ * Blocking read is similar.
+ *
+ * +- btrfs_tree_read_lock
+ * |
+ * +- btrfs_set_lock_blocking_read
+ *
+ *  - heavy operations allowed
+ *
+ * +- btrfs_tree_read_unlock_blocking
+ * |
+ * +- btrfs_tree_read_unlock
+ *
+ */
+
 #ifdef CONFIG_BTRFS_DEBUG
 static inline void btrfs_assert_spinning_writers_get(struct extent_buffer *eb)
 {
@@ -80,6 +184,15 @@ static void btrfs_assert_tree_write_locks_get(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb) { }
 #endif
 
+/*
+ * Mark already held read lock as blocking. Can be nested in write lock by the
+ * same thread.
+ *
+ * Use when there are potentially long operations ahead so other thread waiting
+ * on the lock will not actively spin but sleep instead.
+ *
+ * The rwlock is released and blocking reader counter is increased.
+ */
 void btrfs_set_lock_blocking_read(struct extent_buffer *eb)
 {
 	trace_btrfs_set_lock_blocking_read(eb);
@@ -96,6 +209,14 @@ void btrfs_set_lock_blocking_read(struct extent_buffer *eb)
 	read_unlock(&eb->lock);
 }
 
+/*
+ * Mark already held write lock as blocking.
+ *
+ * Use when there are potentially long operations ahead so other threads
+ * waiting on the lock will not actively spin but sleep instead.
+ *
+ * The rwlock is released and blocking writers is set.
+ */
 void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 {
 	trace_btrfs_set_lock_blocking_write(eb);
@@ -115,8 +236,13 @@ void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 }
 
 /*
- * take a spinning read lock. This will wait for any blocking
- * writers
+ * Lock the extent buffer for read. Wait for any writers (spinning or blocking).
+ * Can be nested in write lock by the same thread.
+ *
+ * Use when the locked section does only lightweight actions and busy waiting
+ * would be cheaper than making other threads do the wait/wake loop.
+ *
+ * The rwlock is held upon exit.
+ */
 void btrfs_tree_read_lock(struct extent_buffer *eb)
 {
@@ -154,9 +280,10 @@ void btrfs_tree_read_lock(struct extent_buffer *eb)
 }
 
 /*
- * take a spinning read lock.
- * returns 1 if we get the read lock and 0 if we don't
- * this won't wait for blocking writers
+ * Lock extent buffer for read, optimistically expecting that there are no
+ * contending blocking writers. If there are, don't wait.
+ *
+ * Return 1 if the rwlock has been taken, 0 otherwise
  */
 int btrfs_tree_read_lock_atomic(struct extent_buffer *eb)
 {
@@ -176,8 +303,9 @@ int btrfs_tree_read_lock_atomic(struct extent_buffer *eb)
 }
 
 /*
- * returns 1 if we get the read lock and 0 if we don't
- * this won't wait for blocking writers
+ * Try-lock for read. Don't block or wait for contending writers.
+ *
+ * Return 1 if the rwlock has been taken, 0 otherwise
  */
 int btrfs_try_tree_read_lock(struct extent_buffer *eb)
 {
@@ -199,8 +327,10 @@ int btrfs_try_tree_read_lock(struct extent_buffer *eb)
 }
 
 /*
- * returns 1 if we get the read lock and 0 if we don't
- * this won't wait for blocking writers or readers
+ * Try-lock for write. May block until the lock is uncontended, but does not
+ * wait until it is free.
+ *
+ * Return 1 if the rwlock has been taken, 0 otherwise
  */
 int btrfs_try_tree_write_lock(struct extent_buffer *eb)
 {
@@ -221,7 +351,10 @@ int btrfs_try_tree_write_lock(struct extent_buffer *eb)
 }
 
 /*
- * drop a spinning read lock
+ * Release read lock. Must be used only if the lock is in spinning mode. If
+ * the read lock is nested, must pair with read lock before the write unlock.
+ *
+ * The rwlock is not held upon exit.
  */
 void btrfs_tree_read_unlock(struct extent_buffer *eb)
 {
@@ -243,7 +376,11 @@ void btrfs_tree_read_unlock(struct extent_buffer *eb)
 }
 
 /*
- * drop a blocking read lock
+ * Release read lock, previously set to blocking by a pairing call to
+ * btrfs_set_lock_blocking_read(). Can be nested in write lock by the same
+ * thread.
+ *
+ * State of rwlock is unchanged, last reader wakes waiting threads.
  */
 void btrfs_tree_read_unlock_blocking(struct extent_buffer *eb)
 {
@@ -267,8 +404,10 @@ void btrfs_tree_read_unlock_blocking(struct extent_buffer *eb)
 }
 
 /*
- * take a spinning write lock. This will wait for both
- * blocking readers or writers
+ * Lock for write. Wait for all blocking and spinning readers and writers. This
+ * starts context where reader lock could be nested by the same thread.
+ *
+ * The rwlock is held for write upon exit.
  */
 void btrfs_tree_lock(struct extent_buffer *eb)
 {
@@ -295,7 +434,12 @@ void btrfs_tree_lock(struct extent_buffer *eb)
 }
 
 /*
- * drop a spinning or a blocking write lock.
+ * Release the write lock, either blocking or spinning (ie. there's no need
+ * for an explicit blocking unlock, like btrfs_tree_read_unlock_blocking).
+ * This also ends the context for nesting, the read lock must have been
+ * released already.
+ *
+ * Tasks blocked and waiting are woken, rwlock is not held upon exit.
+ */
 void btrfs_tree_unlock(struct extent_buffer *eb)
 {
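To make the documented write-side pattern concrete, here is a
hypothetical caller following the blocking scheme described in the
comment added above; modify_eb() and the heavy-work placeholder are
illustrative names, not taken from the btrfs code:

/* Keep the spinning section short, then switch the lock to blocking. */
static void modify_eb(struct extent_buffer *eb)
{
	btrfs_tree_lock(eb);			/* rwlock held, other threads spin */
	btrfs_set_lock_blocking_write(eb);	/* rwlock released, writer marked blocking */

	/* Heavy operations are allowed here: IO, memory allocations, etc. */

	btrfs_tree_unlock(eb);			/* no explicit unblocking necessary */
}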
From patchwork Wed Oct 30 10:57:06 2019
From: David Sterba
To: linux-btrfs@vger.kernel.org
Cc: David Sterba
Subject: [PATCH v2 5/5] btrfs: locking: add lock assertions
Date: Wed, 30 Oct 2019 11:57:06 +0100
Message-Id: <86d6b97f9a9f09b8ac2d1773295fbb1176e5a73a.1572432768.git.dsterba@suse.com>

Add assertions to the locking functions where we expect the lock to be
held. This must also respect the nesting, so the write lock helpers
check that the lock is held for write, while the read lock helpers only
check that the lock is held.

Signed-off-by: David Sterba
---
 fs/btrfs/locking.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 571c4826c428..147bf5d41962 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -195,6 +195,7 @@ static void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb) { }
  */
 void btrfs_set_lock_blocking_read(struct extent_buffer *eb)
 {
+	lockdep_assert_held(&eb->lock);
 	trace_btrfs_set_lock_blocking_read(eb);
 	/*
 	 * No lock is required. The lock owner may change if we have a read
@@ -219,6 +220,7 @@ void btrfs_set_lock_blocking_read(struct extent_buffer *eb)
  */
 void btrfs_set_lock_blocking_write(struct extent_buffer *eb)
 {
+	lockdep_assert_held_write(&eb->lock);
 	trace_btrfs_set_lock_blocking_write(eb);
 	/*
 	 * No lock is required. The lock owner may change if we have a read
@@ -358,6 +360,7 @@ int btrfs_try_tree_write_lock(struct extent_buffer *eb)
  */
 void btrfs_tree_read_unlock(struct extent_buffer *eb)
 {
+	lockdep_assert_held(&eb->lock);
 	trace_btrfs_tree_read_unlock(eb);
 	/*
 	 * if we're nested, we have the write lock. No new locking
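As a usage note, the difference between the two assertion flavors used
in this patch is the usual lockdep distinction; a minimal illustration
on a generic rwlock (example_lock, reader_path and writer_path are
made-up names, not btrfs code):

static DEFINE_RWLOCK(example_lock);

static void reader_path(void)
{
	read_lock(&example_lock);
	/* Satisfied by either a read or a write acquisition. */
	lockdep_assert_held(&example_lock);
	read_unlock(&example_lock);
}

static void writer_path(void)
{
	write_lock(&example_lock);
	/* Satisfied only while the lock is held for write. */
	lockdep_assert_held_write(&example_lock);
	write_unlock(&example_lock);
}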