[0/4] btrfs: reduce block group cache writeout times during commit

Message ID 553A4B36.2050901@fb.com
State New, archived

Commit Message

Chris Mason April 24, 2015, 1:55 p.m. UTC
On 04/24/2015 09:43 AM, Filipe David Manana wrote:
> On Fri, Apr 24, 2015 at 2:00 PM, Chris Mason <clm@fb.com> wrote:

>> Can you please bang on this and get a more reliable reproduction? I'll
>> take a look.
> 
> Not really that easy to get a more reliable reproducer - just run
> fsstress with multiple processes - it already happened twice again
> after I sent the previous mail.
> From the quick look I had at this, this seems to be the change causing
> the problem:
> 
> http://git.kernel.org/cgit/linux/kernel/git/mason/linux-btrfs.git/commit/?h=for-linus-4.1&id=1bbc621ef28462456131c035eaeb5567a1a2a2fe
> 
> Early in btrfs_commit_transaction(), btrfs_start_dirty_block_groups()
> is called which ends up calling __btrfs_write_out_cache() for each
> dirty block group, which collects all the bitmap entries from the bg's
> space cache into a local list while holding the cache's ctl->tree_lock
> (to serialize with concurrent allocation requests).
> 
> Then we unlock ctl->tree_lock, do other stuff and later acquire
> ctl->tree_lock again and call write_bitmap_entries() to write the
> bitmap entries we previously collected. However, while we were doing
> the other stuff without holding that lock, allocation requests might
> have happened right? - since when we call
> btrfs_start_dirty_block_groups() in btrfs_commit_transaction() the
> transaction state wasn't yet changed, allowing other tasks to join the
> current transaction. If such other task allocates all the remaining
> space from a bitmap entry we collected before (because it's still in
> the space cache's rbtree), it ends up deleting it and freeing its
> ->bitmap member, which results in an invalid memory access (and the
> warning on the list corruption) when we later call
> write_bitmap_entries() in __btrfs_write_out_cache() - which is what
> the second part of the trace I sent says:

It's easy to hold the ctl->tree_lock from collection through writeout, but
everyone deleting items is using list_del_init, so it should be fine to
take the lock again and run through any items that are left.

Here's a replacement incremental that'll cover both cases:


diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index d773f22..657a8ec 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -1119,18 +1119,21 @@ static int flush_dirty_cache(struct inode *inode)
 }
 
 static void noinline_for_stack
-cleanup_write_cache_enospc(struct inode *inode,
+cleanup_write_cache_enospc(struct btrfs_free_space_ctl *ctl,
+			   struct inode *inode,
 			   struct btrfs_io_ctl *io_ctl,
 			   struct extent_state **cached_state,
 			   struct list_head *bitmap_list)
 {
 	struct list_head *pos, *n;
 
+	spin_lock(&ctl->tree_lock);
 	list_for_each_safe(pos, n, bitmap_list) {
 		struct btrfs_free_space *entry =
 			list_entry(pos, struct btrfs_free_space, list);
 		list_del_init(&entry->list);
 	}
+	spin_unlock(&ctl->tree_lock);
 	io_ctl_drop_pages(io_ctl);
 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
 			     i_size_read(inode) - 1, cached_state,
@@ -1266,8 +1269,8 @@ static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
 	ret = write_cache_extent_entries(io_ctl, ctl,
 					 block_group, &entries, &bitmaps,
 					 &bitmap_list);
-	spin_unlock(&ctl->tree_lock);
 	if (ret) {
+		spin_unlock(&ctl->tree_lock);
 		mutex_unlock(&ctl->cache_writeout_mutex);
 		goto out_nospc;
 	}
@@ -1282,6 +1285,7 @@ static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
 	 */
 	ret = write_pinned_extent_entries(root, block_group, io_ctl, &entries);
 	if (ret) {
+		spin_unlock(&ctl->tree_lock);
 		mutex_unlock(&ctl->cache_writeout_mutex);
 		goto out_nospc;
 	}
@@ -1291,7 +1295,6 @@ static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
 	 * locked while doing it because a concurrent trim can be manipulating
 	 * or freeing the bitmap.
 	 */
-	spin_lock(&ctl->tree_lock);
 	ret = write_bitmap_entries(io_ctl, &bitmap_list);
 	spin_unlock(&ctl->tree_lock);
 	mutex_unlock(&ctl->cache_writeout_mutex);
@@ -1345,7 +1348,8 @@ out:
 	return ret;

 out_nospc:
-	cleanup_write_cache_enospc(inode, io_ctl, &cached_state, &bitmap_list);
+	cleanup_write_cache_enospc(ctl, inode, io_ctl,
+				   &cached_state, &bitmap_list);

 	if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA))
 		up_write(&block_group->data_rwsem);
--

Comments

Filipe Manana April 24, 2015, 3:05 p.m. UTC | #1
On Fri, Apr 24, 2015 at 2:55 PM, Chris Mason <clm@fb.com> wrote:
> On 04/24/2015 09:43 AM, Filipe David Manana wrote:
>> On Fri, Apr 24, 2015 at 2:00 PM, Chris Mason <clm@fb.com> wrote:
>
>>> Can you please bang on this and get a more reliable reproduction? I'll
>>> take a look.
>>
>> Not really that easy to get a more reliable reproducer - just run
>> fsstress with multiple processes - it already happened twice again
>> after I sent the previous mail.
>> From the quick look I had at this, this seems to be the change causing
>> the problem:
>>
>> http://git.kernel.org/cgit/linux/kernel/git/mason/linux-btrfs.git/commit/?h=for-linus-4.1&id=1bbc621ef28462456131c035eaeb5567a1a2a2fe
>>
>> Early in btrfs_commit_transaction(), btrfs_start_dirty_block_groups()
>> is called which ends up calling __btrfs_write_out_cache() for each
>> dirty block group, which collects all the bitmap entries from the bg's
>> space cache into a local list while holding the cache's ctl->tree_lock
>> (to serialize with concurrent allocation requests).
>>
>> Then we unlock ctl->tree_lock, do other stuff and later acquire
>> ctl->tree_lock again and call write_bitmap_entries() to write the
>> bitmap entries we previously collected. However, while we were doing
>> the other stuff without holding that lock, allocation requests might
>> have happened right? - since when we call
>> btrfs_start_dirty_block_groups() in btrfs_commit_transaction() the
>> transaction state wasn't yet changed, allowing other tasks to join the
>> current transaction. If such other task allocates all the remaining
>> space from a bitmap entry we collected before (because it's still in
>> the space cache's rbtree), it ends up deleting it and freeing its
>> ->bitmap member, which results in an invalid memory access (and the
>> warning on the list corruption) when we later call
>> write_bitmap_entries() in __btrfs_write_out_cache() - which is what
>> the second part of the trace I sent says:
>
> It's easy to hold the ctl->tree_lock from collection write out, but
> everyone deleting items is using list_del_init, so it should be fine to
> take the lock again and run through any items that are left.

Right, but free_bitmap() / unlink_free_space() free the bitmap entry
without deleting it from the list (nor do their callers), which should
be enough to cause such a list corruption report.

I'll try the patch and see if I can get at least one successful entire
run of xfstests.
Thanks Chris.

>
> Here's a replacement incremental that'll cover both cases:
>
>
> diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
> index d773f22..657a8ec 100644
> --- a/fs/btrfs/free-space-cache.c
> +++ b/fs/btrfs/free-space-cache.c
> @@ -1119,18 +1119,21 @@ static int flush_dirty_cache(struct inode *inode)
>  }
>
>  static void noinline_for_stack
> -cleanup_write_cache_enospc(struct inode *inode,
> +cleanup_write_cache_enospc(struct btrfs_free_space_ctl *ctl,
> +                          struct inode *inode,
>                            struct btrfs_io_ctl *io_ctl,
>                            struct extent_state **cached_state,
>                            struct list_head *bitmap_list)
>  {
>         struct list_head *pos, *n;
>
> +       spin_lock(&ctl->tree_lock);
>         list_for_each_safe(pos, n, bitmap_list) {
>                 struct btrfs_free_space *entry =
>                         list_entry(pos, struct btrfs_free_space, list);
>                 list_del_init(&entry->list);
>         }
> +       spin_unlock(&ctl->tree_lock);
>         io_ctl_drop_pages(io_ctl);
>         unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
>                              i_size_read(inode) - 1, cached_state,
> @@ -1266,8 +1269,8 @@ static int __btrfs_write_out_cache(struct
> btrfs_root *root, struct inode *inode,
>         ret = write_cache_extent_entries(io_ctl, ctl,
>                                          block_group, &entries, &bitmaps,
>                                          &bitmap_list);
> -       spin_unlock(&ctl->tree_lock);
>         if (ret) {
> +               spin_unlock(&ctl->tree_lock);
>                 mutex_unlock(&ctl->cache_writeout_mutex);
>                 goto out_nospc;
>         }
> @@ -1282,6 +1285,7 @@ static int __btrfs_write_out_cache(struct
> btrfs_root *root, struct inode *inode,
>          */
>         ret = write_pinned_extent_entries(root, block_group, io_ctl, &entries);
>         if (ret) {
> +               spin_unlock(&ctl->tree_lock);
>                 mutex_unlock(&ctl->cache_writeout_mutex);
>                 goto out_nospc;
>         }
> @@ -1291,7 +1295,6 @@ static int __btrfs_write_out_cache(struct
> btrfs_root *root, struct inode *inode,
>          * locked while doing it because a concurrent trim can be manipulating
>          * or freeing the bitmap.
>          */
> -       spin_lock(&ctl->tree_lock);
>         ret = write_bitmap_entries(io_ctl, &bitmap_list);
>         spin_unlock(&ctl->tree_lock);
>         mutex_unlock(&ctl->cache_writeout_mutex);
> @@ -1345,7 +1348,8 @@ out:
>         return ret;
>
>  out_nospc:
> -       cleanup_write_cache_enospc(inode, io_ctl, &cached_state, &bitmap_list);
> +       cleanup_write_cache_enospc(ctl, inode, io_ctl,
> +                                  &cached_state, &bitmap_list);
>
>         if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA))
>                 up_write(&block_group->data_rwsem);
Filipe Manana April 25, 2015, 5:33 p.m. UTC | #2
On Fri, Apr 24, 2015 at 4:05 PM, Filipe David Manana <fdmanana@gmail.com> wrote:
> On Fri, Apr 24, 2015 at 2:55 PM, Chris Mason <clm@fb.com> wrote:
>> On 04/24/2015 09:43 AM, Filipe David Manana wrote:
>>> On Fri, Apr 24, 2015 at 2:00 PM, Chris Mason <clm@fb.com> wrote:
>>
>>>> Can you please bang on this and get a more reliable reproduction? I'll
>>>> take a look.
>>>
>>> Not really that easy to get a more reliable reproducer - just run
>>> fsstress with multiple processes - it already happened twice again
>>> after I sent the previous mail.
>>> From the quick look I had at this, this seems to be the change causing
>>> the problem:
>>>
>>> http://git.kernel.org/cgit/linux/kernel/git/mason/linux-btrfs.git/commit/?h=for-linus-4.1&id=1bbc621ef28462456131c035eaeb5567a1a2a2fe
>>>
>>> Early in btrfs_commit_transaction(), btrfs_start_dirty_block_groups()
>>> is called which ends up calling __btrfs_write_out_cache() for each
>>> dirty block group, which collects all the bitmap entries from the bg's
>>> space cache into a local list while holding the cache's ctl->tree_lock
>>> (to serialize with concurrent allocation requests).
>>>
>>> Then we unlock ctl->tree_lock, do other stuff and later acquire
>>> ctl->tree_lock again and call write_bitmap_entries() to write the
>>> bitmap entries we previously collected. However, while we were doing
>>> the other stuff without holding that lock, allocation requests might
>>> have happened right? - since when we call
>>> btrfs_start_dirty_block_groups() in btrfs_commit_transaction() the
>>> transaction state wasn't yet changed, allowing other tasks to join the
>>> current transaction. If such other task allocates all the remaining
>>> space from a bitmap entry we collected before (because it's still in
>>> the space cache's rbtree), it ends up deleting it and freeing its
>>> ->bitmap member, which results in an invalid memory access (and the
>>> warning on the list corruption) when we later call
>>> write_bitmap_entries() in __btrfs_write_out_cache() - which is what
>>> the second part of the trace I sent says:
>>
>> It's easy to hold the ctl->tree_lock from collection write out, but
>> everyone deleting items is using list_del_init, so it should be fine to
>> take the lock again and run through any items that are left.
>
> Right, but free_bitmap() / unlink_free_space() free the bitmap entry
> without deleting it from the list (nor its callers do it), which
> should be enough to cause such list corruption report.
>
> I'll try the patch and see if I can get at least one successful entire
> run of xfstests.
> Thanks Chris.

So with the updated version of that last patch, found at [1], this
problem no longer happens, as expected.
However, I found 2 other ones for which I've just sent fixes, plus
another one with invalid on-disk caches, which I'm not sure is
related to the (not new) race you mentioned before when writing
block group caches. It happened once with generic/299 and fsck's
report was:

checking extents
checking free space cache
Wanted bytes 2883584, found 1310720 for off 4929859584
Wanted bytes 468209664, found 1310720 for off 4929859584
cache appears valid but isnt 4324327424
Wanted bytes 262144, found 131072 for off 5403049984
Wanted bytes 1068761088, found 131072 for off 5403049984
cache appears valid but isnt 5398069248
Wanted bytes 262144, found 131072 for off 6472204288
Wanted bytes 1073348608, found 131072 for off 6472204288
cache appears valid but isnt 6471811072
Wanted bytes 786432, found 655360 for off 7545552896
Wanted bytes 1073741824, found 655360 for off 7545552896
cache appears valid but isnt 7545552896
Wanted bytes 1048576, found 131072 for off 8621916160
Wanted bytes 1071120384, found 131072 for off 8621916160
cache appears valid but isnt 8619294720
There is no free space entry for 9693298688-9695395840
There is no free space entry for 9693298688-10766778368
cache appears valid but isnt 9693036544
There is no free space entry for 10769268736-10769661952
There is no free space entry for 10769268736-11840520192
cache appears valid but isnt 10766778368
Checking filesystem on /dev/sdc
UUID: abf30a2f-f784-4829-9131-86e20f13a8cf
found 788994098 bytes used err is -22
total csum bytes: 2586348
total tree bytes: 14954496
total fs tree bytes: 4702208
total extent tree bytes: 3817472
btree space waste bytes: 5422437
file data blocks allocated: 2651303936
 referenced 2651303936

[1] http://git.kernel.org/cgit/linux/kernel/git/mason/linux-btrfs.git/commit/?h=for-linus-4.1&id=a3bdccc4e683f0ac69230707ed3fa20e7cf73a79

>
>>
>> Here's a replacement incremental that'll cover both cases:
>>
>>
>> diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
>> index d773f22..657a8ec 100644
>> --- a/fs/btrfs/free-space-cache.c
>> +++ b/fs/btrfs/free-space-cache.c
>> @@ -1119,18 +1119,21 @@ static int flush_dirty_cache(struct inode *inode)
>>  }
>>
>>  static void noinline_for_stack
>> -cleanup_write_cache_enospc(struct inode *inode,
>> +cleanup_write_cache_enospc(struct btrfs_free_space_ctl *ctl,
>> +                          struct inode *inode,
>>                            struct btrfs_io_ctl *io_ctl,
>>                            struct extent_state **cached_state,
>>                            struct list_head *bitmap_list)
>>  {
>>         struct list_head *pos, *n;
>>
>> +       spin_lock(&ctl->tree_lock);
>>         list_for_each_safe(pos, n, bitmap_list) {
>>                 struct btrfs_free_space *entry =
>>                         list_entry(pos, struct btrfs_free_space, list);
>>                 list_del_init(&entry->list);
>>         }
>> +       spin_unlock(&ctl->tree_lock);
>>         io_ctl_drop_pages(io_ctl);
>>         unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
>>                              i_size_read(inode) - 1, cached_state,
>> @@ -1266,8 +1269,8 @@ static int __btrfs_write_out_cache(struct
>> btrfs_root *root, struct inode *inode,
>>         ret = write_cache_extent_entries(io_ctl, ctl,
>>                                          block_group, &entries, &bitmaps,
>>                                          &bitmap_list);
>> -       spin_unlock(&ctl->tree_lock);
>>         if (ret) {
>> +               spin_unlock(&ctl->tree_lock);
>>                 mutex_unlock(&ctl->cache_writeout_mutex);
>>                 goto out_nospc;
>>         }
>> @@ -1282,6 +1285,7 @@ static int __btrfs_write_out_cache(struct
>> btrfs_root *root, struct inode *inode,
>>          */
>>         ret = write_pinned_extent_entries(root, block_group, io_ctl, &entries);
>>         if (ret) {
>> +               spin_unlock(&ctl->tree_lock);
>>                 mutex_unlock(&ctl->cache_writeout_mutex);
>>                 goto out_nospc;
>>         }
>> @@ -1291,7 +1295,6 @@ static int __btrfs_write_out_cache(struct
>> btrfs_root *root, struct inode *inode,
>>          * locked while doing it because a concurrent trim can be manipulating
>>          * or freeing the bitmap.
>>          */
>> -       spin_lock(&ctl->tree_lock);
>>         ret = write_bitmap_entries(io_ctl, &bitmap_list);
>>         spin_unlock(&ctl->tree_lock);
>>         mutex_unlock(&ctl->cache_writeout_mutex);
>> @@ -1345,7 +1348,8 @@ out:
>>         return ret;
>>
>>  out_nospc:
>> -       cleanup_write_cache_enospc(inode, io_ctl, &cached_state, &bitmap_list);
>> +       cleanup_write_cache_enospc(ctl, inode, io_ctl,
>> +                                  &cached_state, &bitmap_list);
>>
>>         if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA))
>>                 up_write(&block_group->data_rwsem);
>
>
>
> --
> Filipe David Manana,
>
> "Reasonable men adapt themselves to the world.
>  Unreasonable men adapt the world to themselves.
>  That's why all progress depends on unreasonable men."

Patch

diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index d773f22..657a8ec 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -1119,18 +1119,21 @@  static int flush_dirty_cache(struct inode *inode)
 }

 static void noinline_for_stack
-cleanup_write_cache_enospc(struct inode *inode,
+cleanup_write_cache_enospc(struct btrfs_free_space_ctl *ctl,
+			   struct inode *inode,
 			   struct btrfs_io_ctl *io_ctl,
 			   struct extent_state **cached_state,
 			   struct list_head *bitmap_list)
 {
 	struct list_head *pos, *n;

+	spin_lock(&ctl->tree_lock);
 	list_for_each_safe(pos, n, bitmap_list) {
 		struct btrfs_free_space *entry =
 			list_entry(pos, struct btrfs_free_space, list);
 		list_del_init(&entry->list);
 	}
+	spin_unlock(&ctl->tree_lock);
 	io_ctl_drop_pages(io_ctl);
 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
 			     i_size_read(inode) - 1, cached_state,
@@ -1266,8 +1269,8 @@  static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
 	ret = write_cache_extent_entries(io_ctl, ctl,
 					 block_group, &entries, &bitmaps,
 					 &bitmap_list);
-	spin_unlock(&ctl->tree_lock);
 	if (ret) {
+		spin_unlock(&ctl->tree_lock);
 		mutex_unlock(&ctl->cache_writeout_mutex);
 		goto out_nospc;