
[v2,2/2] btrfs: Enhance btrfs chunk allocation algorithm to reduce ENOSPC caused by unbalanced data/metadata allocation.

Message ID 1419386114-21703-2-git-send-email-quwenruo@cn.fujitsu.com (mailing list archive)
State New, archived

Commit Message

Qu Wenruo Dec. 24, 2014, 1:55 a.m. UTC
When btrfs allocates a chunk, it tries to allocate up to 1G for data and
256M for metadata, or 10% of all the writeable space, if there is enough
space for the stripes on the devices.
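
For illustration only, here is a minimal userspace C sketch of that sizing
rule; the constants and the helper name (chunk_size_cap) are made up for
this example and are not the actual btrfs identifiers:

#include <stdint.h>

#define SZ_256M (256ULL * 1024 * 1024)
#define SZ_1G   (1024ULL * 1024 * 1024)

/*
 * Cap a chunk at the per-type limit (1G for data, 256M for metadata)
 * or at 10% of the writeable space, whichever is smaller.
 */
uint64_t chunk_size_cap(uint64_t writeable_bytes, int is_data)
{
	uint64_t type_limit = is_data ? SZ_1G : SZ_256M;
	uint64_t ten_percent = writeable_bytes / 10;

	return type_limit < ten_percent ? type_limit : ten_percent;
}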

However, when we run out of space, this allocation may cause unbalanced
chunk allocation.
For example, if there is only 1G of unallocated space left and a DATA
chunk allocation request is sent, all of that space will be allocated as
a data chunk, leaving a later metadata chunk allocation request unable
to be satisfied, which causes ENOSPC.
This is one of the most common complaints from end users about ENOSPC
happening while there is still available space.

This patch tries not to allocate a chunk larger than half of the
remaining unallocated space, keeping the last space more balanced at the
small cost of more fragmented chunks in the last 1G.
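
A rough standalone sketch of the new rule (the real change is in the diff
at the end of this mail; clamp_to_half_unallocated and its parameters are
hypothetical names used only for illustration):

#include <stdint.h>

/*
 * Never hand out a chunk whose physical size exceeds half of the
 * still-unallocated space, so that a later allocation of the other
 * block group type can still succeed.
 */
uint64_t clamp_to_half_unallocated(uint64_t wanted_physical,
				   uint64_t total_unallocated)
{
	if (wanted_physical > total_unallocated / 2)
		return total_unallocated / 2;
	return wanted_physical;
}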

A simple example:
Preallocate 17.5G on an empty 20G btrfs filesystem:
[Before]
 # btrfs fi show /mnt/test
Label: none  uuid: da8741b1-5d47-4245-9e94-bfccea34e91e
	Total devices 1 FS bytes used 17.50GiB
	devid    1 size 20.00GiB used 20.00GiB path /dev/sdb
All space is allocated. No space for later metadata allocation.

[After]
 # btrfs fi show /mnt/test
Label: none  uuid: e6935aeb-a232-4140-84f9-80aab1f23d56
	Total devices 1 FS bytes used 17.50GiB
	devid    1 size 20.00GiB used 19.77GiB path /dev/sdb
About 230M is still available for later metadata allocation.

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
Changelog:
v2:
   Remove the false dead zone check since it cannot happen
---
 fs/btrfs/volumes.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

Comments

David Sterba Dec. 29, 2014, 2:56 p.m. UTC | #1
On Wed, Dec 24, 2014 at 09:55:14AM +0800, Qu Wenruo wrote:
> When btrfs allocates a chunk, it tries to allocate up to 1G for data and
> 256M for metadata, or 10% of all the writeable space, if there is enough
> space for the stripes on the devices.
> 
> However, when we run out of space, this allocation may cause unbalanced
> chunk allocation.
> For example, if there is only 1G of unallocated space left and a DATA
> chunk allocation request is sent, all of that space will be allocated as
> a data chunk, leaving a later metadata chunk allocation request unable
> to be satisfied, which causes ENOSPC.

The question is why the metadata is full although there's 1G free, as
the metadata chunks are being preallocated according to the metadata
ratio.

> This is one of the most common complaints from end users about ENOSPC
> happening while there is still available space.
> 
> This patch tries not to allocate a chunk larger than half of the
> remaining unallocated space, keeping the last space more balanced at the
> small cost of more fragmented chunks in the last 1G.

I'm really worried about the small chunks and the fragmentation at that
level wrt balancing. The small chunks will be relocated into bigger free
chunks (e.g. 256MB) and make that space unusable for further rebalancing
of the 256MB chunks. Newly allocated chunks will have to be reduced in
size to fit into the remaining space and will cause further fragmentation
of the chunk space.

The drawbacks of small chunks are obvious:

* more chunks mean more processing
* smaller chance of getting big contiguous space for extents, leading to
  file fragmentation that cannot be much improved by
  defragmentation

IMO the chunk allocation should be more predictable and should give some
clue about how the layout happens, otherwise this will become another dark
corner that makes debugging harder and can negatively and unpredictably
affect performance after some time.

The problems you're trying to address are real, no doubt here, but I'd
rather try to address them in a different way.
Qu Wenruo Dec. 30, 2014, 12:40 a.m. UTC | #2
-------- Original Message --------
Subject: Re: [PATCH v2 2/2] btrfs: Enhance btrfs chunk allocation 
algorithm to reduce ENOSPC caused by unbalanced data/metadata allocation.
From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <quwenruo@cn.fujitsu.com>
Date: 2014-12-29 22:56
> On Wed, Dec 24, 2014 at 09:55:14AM +0800, Qu Wenruo wrote:
>> When btrfs allocates a chunk, it tries to allocate up to 1G for data and
>> 256M for metadata, or 10% of all the writeable space, if there is enough
>> space for the stripes on the devices.
>>
>> However, when we run out of space, this allocation may cause unbalanced
>> chunk allocation.
>> For example, if there is only 1G of unallocated space left and a DATA
>> chunk allocation request is sent, all of that space will be allocated as
>> a data chunk, leaving a later metadata chunk allocation request unable
>> to be satisfied, which causes ENOSPC.
> The question is why the metadata is full although there's 1G free, as
> the metadata chunks are being preallocated according to the metadata
> ratio.
This can still happen if, after the data chunk is allocated, only a
heavy metadata workload follows.
>
>> This is one of the most common complaints from end users about ENOSPC
>> happening while there is still available space.
>>
>> This patch tries not to allocate a chunk larger than half of the
>> remaining unallocated space, keeping the last space more balanced at the
>> small cost of more fragmented chunks in the last 1G.
> I'm really worried about the small chunks and the fragmentation at that
> level wrt balancing. The small chunks will be relocated into bigger free
> chunks (e.g. 256MB) and make that space unusable for further rebalancing
> of the 256MB chunks. Newly allocated chunks will have to be reduced in
> size to fit into the remaining space and will cause further fragmentation
> of the chunk space.
>
> The drawbacks of small chunks are obvious:
>
> * more chunks mean more processing
> * smaller chance of getting big contiguous space for extents, leading to
>    file fragmentation that cannot be much improved by
>    defragmentation
You're right, such a half-half method will mess up relocation; that's
what I forgot.
>
> IMO the chunk allocation should be more predictable and should give some
> clue about how the layout happens, otherwise this will become another
> dark corner that makes debugging harder and can negatively and
> unpredictably affect performance after some time.
Some other methods also come to mind, like predicting the data:metadata
ratio from the current or recently allocated data:metadata ratio, but that
does not seem to help for the last 1GB case.

Or, when it comes to the last 1GB, allocate it as mixed (data+metadata)?
That seems to need a new incompat flag and some tweaks to relocation.

Thanks,
Qu
>
> The problems you're trying to address are real, no doubt here, but I'd
> rather try to address them in a different way.


Patch

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 8e74b34..20b3eea 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -4237,6 +4237,7 @@  static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 	u64 max_stripe_size;
 	u64 max_logical_size;	/* Up limit on chunk's logical size */
 	u64 max_physical_size;	/* Up limit on a chunk's on-disk size */
+	u64 total_physical_avail = 0;
 	u64 stripe_size;
 	u64 num_bytes;
 	u64 raid_stripe_len = BTRFS_STRIPE_LEN;
@@ -4349,6 +4350,7 @@  static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 		devices_info[ndevs].max_avail = max_avail;
 		devices_info[ndevs].total_avail = total_avail;
 		devices_info[ndevs].dev = device;
+		total_physical_avail += total_avail;
 		++ndevs;
 	}
 
@@ -4398,6 +4400,23 @@  static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 		do_div(stripe_size, num_stripes);
 		need_bump = 1;
 	}
+
+	/*
+	 * Don't allocate a chunk whose physical size is larger than
+	 * half of the remaining physical space.
+	 * This reduces the possibility of ENOSPC when we come to the
+	 * last unallocated space.
+	 *
+	 * For the last 16~32M (e.g. 20M), the first allocation will be
+	 * 16M (bumped to 16M) and the next one will be the remaining
+	 * size (bumped to 16M and then reduced to 4M).
+	 * So there is no dead zone.
+	 */
+	if (stripe_size * num_stripes > total_physical_avail / 2) {
+		stripe_size = total_physical_avail / 2;
+		need_bump = 1;
+
+	}
 	/* restrict logical chunk size  */
 	if (stripe_size * data_stripes > max_logical_size) {
 		stripe_size = max_logical_size;