[RFC,2/2] btrfs: defrag: prepare defrag for larger data folio size

Message ID 5708df27430cdeaf472266b5c13dc8c4315f539c.1706068026.git.wqu@suse.com
State New, archived
Series btrfs: defrag: further preparation for multi-page sector size

Commit Message

Qu Wenruo Jan. 24, 2024, 3:59 a.m. UTC
Although we have migrated defrag to use the folio interface, we can
still further enhance it for the future larger data folio size.

This patch does the following:

- Rename page related variables to their folio equivalents

- Change "pgoff_t index" to "u64 folio_start" for defrag_prepare_one_folio()
  With the future multi-page sectorsize support, each data folio would be
  sector sized (except for subpage cases).
  Thus we should avoid a raw index, as there would be two different
  shifts in play:

  * PAGE_SHIFT based index
    Used by the filemap related interfaces

  * Folio shift based index
    Used for the remaining cases

  So here we use "u64 folio_start" (a byte offset) to represent one folio

- Use fs_info->folio_shift to replace PAGE_SHIFT
  Since the data folios would no longer always be page sized, use the
  cached fs_info->folio_shift to handle both the multi-page and subpage
  cases (see the sketch below).
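
  For illustration only (this sketch is not part of the diff below, and
  it assumes the fs_info->folio_size/folio_shift fields introduced by the
  first patch of this series; "pos" is an arbitrary file offset):

	/* Everything outside the filemap counts in folio sized units... */
	u64 folio_start = round_down(pos, fs_info->folio_size);
	u64 folio_end = folio_start + fs_info->folio_size - 1;

	/* ...while filemap interfaces keep a PAGE_SHIFT based index. */
	folio = __filemap_get_folio(mapping, folio_start >> PAGE_SHIFT,
				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);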

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/defrag.c | 69 +++++++++++++++++++++++++----------------------
 1 file changed, 37 insertions(+), 32 deletions(-)

Comments

Matthew Wilcox Feb. 15, 2024, 8:23 p.m. UTC | #1
On Wed, Jan 24, 2024 at 02:29:08PM +1030, Qu Wenruo wrote:
> Although we have migrated defrag to use the folio interface, we can
> still further enhance it for the future larger data folio size.

This patch is wrong.  Please drop it.

>  {
>  	struct btrfs_fs_info *fs_info = inode->root->fs_info;
>  	struct extent_changeset *data_reserved = NULL;
>  	const u64 start = target->start;
>  	const u64 len = target->len;
> -	unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
> -	unsigned long start_index = start >> PAGE_SHIFT;
> +	unsigned long last_index = (start + len - 1) >> fs_info->folio_shift;
> +	unsigned long start_index = start >> fs_info->folio_shift;

indices are always in multiples of PAGE_SIZE.

>  	unsigned long first_index = folios[0]->index;

... so if you've shifted a file position by some "folio_shift" and then
subtracted it from folio->index, you have garbage.
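
To make the unit mismatch concrete, here is a hypothetical example
plugged into defrag_one_locked_target() from the patch below: 4K pages
(PAGE_SHIFT == 12), a 16K sectorsize (so folio_shift == 14 with this
series), and a cluster and target range both starting at file offset 64K:

	start_index = SZ_64K >> fs_info->folio_shift; /* 4, in folio units */
	first_index = folios[0]->index;               /* 16, in page units */
	i = start_index - first_index;                /* wraps around: a
							 garbage offset
							 into folios[]     */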
Qu Wenruo Feb. 15, 2024, 11:07 p.m. UTC | #2
On 2024/2/16 06:53, Matthew Wilcox wrote:
> On Wed, Jan 24, 2024 at 02:29:08PM +1030, Qu Wenruo wrote:
>> Although we have migrated defrag to use the folio interface, we can
>> still further enhance it for the future larger data folio size.
>
> This patch is wrong.  Please drop it.
>
>>   {
>>   	struct btrfs_fs_info *fs_info = inode->root->fs_info;
>>   	struct extent_changeset *data_reserved = NULL;
>>   	const u64 start = target->start;
>>   	const u64 len = target->len;
>> -	unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
>> -	unsigned long start_index = start >> PAGE_SHIFT;
>> +	unsigned long last_index = (start + len - 1) >> fs_info->folio_shift;
>> +	unsigned long start_index = start >> fs_info->folio_shift;
>
> indices are always in multiples of PAGE_SIZE.

So is the fs_info->folio_shift. It would always be >= PAGE_SHIFT.

That folio_shift is assigned inside the first patch (although the first
patch has a bug there: it should assign folio_shift, not sectorsize_bits
again):

+	fs_info->folio_size = PAGE_SIZE;
+	fs_info->folio_shift = PAGE_SHIFT;
+	if (sectorsize > PAGE_SIZE) {
+		/* For future multi-page sectorsize support */
+		fs_info->folio_size = sectorsize;
+		fs_info->folio_shift  = fs_info->sectorsize_bits;
+	} else {
+		fs_info->folio_size = PAGE_SIZE;
+		fs_info->folio_shift = PAGE_SHIFT;
+	}

>
>>   	unsigned long first_index = folios[0]->index;
>
> ... so if you've shifted a file position by some "folio_shift" and then
> subtracted it from folio->index, you have garbage.

For the future larger folio support, all folios would be the size of the
sectorsize.
Would you mind explaining why folio->index would be garbage?

Thanks,
Qu

>
>

Matthew Wilcox Feb. 27, 2024, 9:32 p.m. UTC | #3
On Fri, Feb 16, 2024 at 09:37:01AM +1030, Qu Wenruo wrote:
> On 2024/2/16 06:53, Matthew Wilcox wrote:
> > On Wed, Jan 24, 2024 at 02:29:08PM +1030, Qu Wenruo wrote:
> > > Although we have migrated defrag to use the folio interface, we can
> > > still further enhance it for the future larger data folio size.
> > 
> > This patch is wrong.  Please drop it.
> > 
> > >   {
> > >   	struct btrfs_fs_info *fs_info = inode->root->fs_info;
> > >   	struct extent_changeset *data_reserved = NULL;
> > >   	const u64 start = target->start;
> > >   	const u64 len = target->len;
> > > -	unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
> > > -	unsigned long start_index = start >> PAGE_SHIFT;
> > > +	unsigned long last_index = (start + len - 1) >> fs_info->folio_shift;
> > > +	unsigned long start_index = start >> fs_info->folio_shift;
> > 
> > indices are always in multiples of PAGE_SIZE.
> 
> So is the fs_info->folio_shift. It would always be >= PAGE_SHIFT.

No, you don't understand.  folio->index * PAGE_SIZE == byte offset of folio
in the file.  What you've done here breaks that.

> > >   	unsigned long first_index = folios[0]->index;
> > 
> > ... so if you've shifted a file position by some "folio_shift" and then
> > subtracted it from folio->index, you have garbage.
> 
> For the future larger folio support, all folios would be the size of the
> sectorsize.

Yes, folios are always an integer multiple of sector size.  But their
_index_ is expressed as a multiple of the page size.
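
In other words (a minimal sketch of the invariant; folio_pos() and
folio_next_index() are the existing helpers from linux/pagemap.h):

	/* Holds for a folio of any order: */
	loff_t pos = folio_pos(folio);  /* == (loff_t)folio->index << PAGE_SHIFT */

	/*
	 * With 4K pages, consecutive 16K folios of the same file have
	 * indices 16, 20, 24, ... -- stepping by folio_nr_pages(folio),
	 * which is 4 here -- never 4, 5, 6, ...
	 */
	pgoff_t next = folio_next_index(folio); /* index + folio_nr_pages(folio) */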
Qu Wenruo Feb. 27, 2024, 9:42 p.m. UTC | #4
On 2024/2/28 08:02, Matthew Wilcox wrote:
> On Fri, Feb 16, 2024 at 09:37:01AM +1030, Qu Wenruo wrote:
>> On 2024/2/16 06:53, Matthew Wilcox wrote:
>>> On Wed, Jan 24, 2024 at 02:29:08PM +1030, Qu Wenruo wrote:
>>>> Although we have migrated defrag to use the folio interface, we can
>>>> still further enhance it for the future larger data folio size.
>>>
>>> This patch is wrong.  Please drop it.
>>>
>>>>    {
>>>>    	struct btrfs_fs_info *fs_info = inode->root->fs_info;
>>>>    	struct extent_changeset *data_reserved = NULL;
>>>>    	const u64 start = target->start;
>>>>    	const u64 len = target->len;
>>>> -	unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
>>>> -	unsigned long start_index = start >> PAGE_SHIFT;
>>>> +	unsigned long last_index = (start + len - 1) >> fs_info->folio_shift;
>>>> +	unsigned long start_index = start >> fs_info->folio_shift;
>>>
>>> indices are always in multiples of PAGE_SIZE.
>>
>> So is the fs_info->folio_shift. It would always be >= PAGE_SHIFT.
> 
> No, you don't understand.  folio->index * PAGE_SIZE == byte offset of folio
> in the file.  What you've done here breaks that.

Oh, I see it now.

Then it's all my bad. We should still use PAGE_SHIFT/PAGE_SIZE for the
filemap indices.

In that case, the series should be dropped.

Thanks for pointing this out,
Qu
> 
>>>>    	unsigned long first_index = folios[0]->index;
>>>
>>> ... so if you've shifted a file position by some "folio_shift" and then
>>> subtracted it from folio->index, you have garbage.
>>
>> For the future larger folio support, all folios would be the size of the
>> sectorsize.
> 
> Yes, folios are always an integer multiple of sector size.  But their
> _index_ is expressed as a multiple of the page size.
>
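
For reference, the conclusion above ("still use PAGE_SHIFT/PAGE_SIZE for
the filemap indices") would look roughly like the sketch below: keep
every index page based and let the folio size only determine the step
width. The loop shape is illustrative, not code from this series:

	pgoff_t index = start >> PAGE_SHIFT;
	pgoff_t last_index = (start + len - 1) >> PAGE_SHIFT;

	while (index <= last_index) {
		struct folio *folio = __filemap_get_folio(mapping, index,
				FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);

		if (IS_ERR(folio))
			break;

		/* ... work on the folio ... */

		/* Step by the folio's actual size, still in page units. */
		index = folio_next_index(folio);
		folio_unlock(folio);
		folio_put(folio);
	}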

Patch

diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index dd1b5a060366..07df0081ac57 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -861,18 +861,19 @@  static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
  * NOTE: Caller should also wait for page writeback after the cluster is
  * prepared, here we don't do writeback wait for each page.
  */
-static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t index)
+static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode,
+					      u64 folio_start)
 {
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct address_space *mapping = inode->vfs_inode.i_mapping;
 	gfp_t mask = btrfs_alloc_write_mask(mapping);
-	u64 page_start = (u64)index << PAGE_SHIFT;
-	u64 page_end = page_start + PAGE_SIZE - 1;
+	u64 folio_end = folio_start + fs_info->folio_size - 1;
 	struct extent_state *cached_state = NULL;
 	struct folio *folio;
 	int ret;
 
 again:
-	folio = __filemap_get_folio(mapping, index,
+	folio = __filemap_get_folio(mapping, folio_start >> PAGE_SHIFT,
 				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT, mask);
 	if (IS_ERR(folio))
 		return folio;
@@ -902,9 +903,10 @@  static struct folio *defrag_prepare_one_folio(struct btrfs_inode *inode, pgoff_t
 	while (1) {
 		struct btrfs_ordered_extent *ordered;
 
-		lock_extent(&inode->io_tree, page_start, page_end, &cached_state);
-		ordered = btrfs_lookup_ordered_range(inode, page_start, PAGE_SIZE);
-		unlock_extent(&inode->io_tree, page_start, page_end,
+		lock_extent(&inode->io_tree, folio_start, folio_end, &cached_state);
+		ordered = btrfs_lookup_ordered_range(inode, folio_start,
+						     fs_info->folio_size);
+		unlock_extent(&inode->io_tree, folio_start, folio_end,
 			      &cached_state);
 		if (!ordered)
 			break;
@@ -1163,20 +1165,20 @@  static_assert(PAGE_ALIGNED(CLUSTER_SIZE));
  */
 static int defrag_one_locked_target(struct btrfs_inode *inode,
 				    struct defrag_target_range *target,
-				    struct folio **folios, int nr_pages,
+				    struct folio **folios, int nr_folios,
 				    struct extent_state **cached_state)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct extent_changeset *data_reserved = NULL;
 	const u64 start = target->start;
 	const u64 len = target->len;
-	unsigned long last_index = (start + len - 1) >> PAGE_SHIFT;
-	unsigned long start_index = start >> PAGE_SHIFT;
+	unsigned long last_index = (start + len - 1) >> fs_info->folio_shift;
+	unsigned long start_index = start >> fs_info->folio_shift;
 	unsigned long first_index = folios[0]->index;
 	int ret = 0;
 	int i;
 
-	ASSERT(last_index - first_index + 1 <= nr_pages);
+	ASSERT(last_index - first_index + 1 <= nr_folios);
 
 	ret = btrfs_delalloc_reserve_space(inode, &data_reserved, start, len);
 	if (ret < 0)
@@ -1187,7 +1189,7 @@  static int defrag_one_locked_target(struct btrfs_inode *inode,
 	set_extent_bit(&inode->io_tree, start, start + len - 1,
 		       EXTENT_DELALLOC | EXTENT_DEFRAG, cached_state);
 
-	/* Update the page status */
+	/* Update the folio status */
 	for (i = start_index - first_index; i <= last_index - first_index; i++) {
 		folio_clear_checked(folios[i]);
 		btrfs_folio_clamp_set_dirty(fs_info, folios[i], start, len);
@@ -1202,40 +1204,42 @@  static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 			    u32 extent_thresh, u64 newer_than, bool do_compress,
 			    u64 *last_scanned_ret)
 {
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct extent_state *cached_state = NULL;
 	struct defrag_target_range *entry;
 	struct defrag_target_range *tmp;
 	LIST_HEAD(target_list);
 	struct folio **folios;
-	const u32 sectorsize = inode->root->fs_info->sectorsize;
-	u64 last_index = (start + len - 1) >> PAGE_SHIFT;
-	u64 start_index = start >> PAGE_SHIFT;
-	unsigned int nr_pages = last_index - start_index + 1;
+	const u32 sectorsize = fs_info->sectorsize;
+	u64 last_index = (start + len - 1) >> fs_info->folio_shift;
+	u64 start_index = start >> fs_info->folio_shift;
+	unsigned int nr_folios = last_index - start_index + 1;
 	int ret = 0;
 	int i;
 
-	ASSERT(nr_pages <= CLUSTER_SIZE / PAGE_SIZE);
+	ASSERT(nr_folios <= (CLUSTER_SIZE >> fs_info->folio_shift));
 	ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(len, sectorsize));
 
-	folios = kcalloc(nr_pages, sizeof(struct folio *), GFP_NOFS);
+	folios = kcalloc(nr_folios, sizeof(struct folio *), GFP_NOFS);
 	if (!folios)
 		return -ENOMEM;
 
 	/* Prepare all pages */
-	for (i = 0; i < nr_pages; i++) {
-		folios[i] = defrag_prepare_one_folio(inode, start_index + i);
+	for (i = 0; i < nr_folios; i++) {
+		folios[i] = defrag_prepare_one_folio(inode,
+				(start_index + i) << fs_info->folio_shift);
 		if (IS_ERR(folios[i])) {
 			ret = PTR_ERR(folios[i]);
-			nr_pages = i;
+			nr_folios = i;
 			goto free_folios;
 		}
 	}
-	for (i = 0; i < nr_pages; i++)
+	for (i = 0; i < nr_folios; i++)
 		folio_wait_writeback(folios[i]);
 
 	/* Lock the pages range */
-	lock_extent(&inode->io_tree, start_index << PAGE_SHIFT,
-		    (last_index << PAGE_SHIFT) + PAGE_SIZE - 1,
+	lock_extent(&inode->io_tree, start_index << fs_info->folio_shift,
+		    (last_index << fs_info->folio_shift) + fs_info->folio_size - 1,
 		    &cached_state);
 	/*
 	 * Now we have a consistent view about the extent map, re-check
@@ -1251,7 +1255,7 @@  static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 		goto unlock_extent;
 
 	list_for_each_entry(entry, &target_list, list) {
-		ret = defrag_one_locked_target(inode, entry, folios, nr_pages,
+		ret = defrag_one_locked_target(inode, entry, folios, nr_folios,
 					       &cached_state);
 		if (ret < 0)
 			break;
@@ -1262,11 +1266,11 @@  static int defrag_one_range(struct btrfs_inode *inode, u64 start, u32 len,
 		kfree(entry);
 	}
 unlock_extent:
-	unlock_extent(&inode->io_tree, start_index << PAGE_SHIFT,
-		      (last_index << PAGE_SHIFT) + PAGE_SIZE - 1,
+	unlock_extent(&inode->io_tree, start_index << fs_info->folio_shift,
+		      (last_index << fs_info->folio_shift) + fs_info->folio_size - 1,
 		      &cached_state);
 free_folios:
-	for (i = 0; i < nr_pages; i++) {
+	for (i = 0; i < nr_folios; i++) {
 		folio_unlock(folios[i]);
 		folio_put(folios[i]);
 	}
@@ -1282,7 +1286,8 @@  static int defrag_one_cluster(struct btrfs_inode *inode,
 			      unsigned long max_sectors,
 			      u64 *last_scanned_ret)
 {
-	const u32 sectorsize = inode->root->fs_info->sectorsize;
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+	const u32 sectorsize = fs_info->sectorsize;
 	struct defrag_target_range *entry;
 	struct defrag_target_range *tmp;
 	LIST_HEAD(target_list);
@@ -1421,7 +1426,7 @@  int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
 	 * Make writeback start from the beginning of the range, so that the
 	 * defrag range can be written sequentially.
 	 */
-	start_index = cur >> PAGE_SHIFT;
+	start_index = cur >> fs_info->folio_shift;
 	if (start_index < inode->i_mapping->writeback_index)
 		inode->i_mapping->writeback_index = start_index;
 
@@ -1436,8 +1441,8 @@  int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
 		}
 
 		/* We want the cluster end at page boundary when possible */
-		cluster_end = (((cur >> PAGE_SHIFT) +
-			       (SZ_256K >> PAGE_SHIFT)) << PAGE_SHIFT) - 1;
+		cluster_end = (((cur >> fs_info->folio_shift) +
+			(SZ_256K >> fs_info->folio_shift)) << fs_info->folio_shift) - 1;
 		cluster_end = min(cluster_end, last_byte);
 
 		btrfs_inode_lock(BTRFS_I(inode), 0);