[v5,0/3] Btrfs: relocation: Refactor build_backref_tree() using btrfs_backref_iterator infrastructure

Message ID 20200218090129.134450-1-wqu@suse.com (mailing list archive)

Message

Qu Wenruo Feb. 18, 2020, 9:01 a.m. UTC
This is part 1 of the incoming refactor patches for build_backref_tree()

[THE PLAN]
The overall plan of refactoring build_backref_tree() is:
- Refactor how we iterate through backref items
  This patchset, the smallest part I guess.

- Make build_backref_tree() easier to read.
  In short, that function does a breadth-first search to build a map
  from one tree block to all the root nodes referring to it.

  It involves the backref iteration part, plus a lot of work that only
  matters to the backref cache.
  At least I hope to make this function less bulky and more structured.

- Make build_backref_tree() independent from relocation
  The hardest I guess.

  Currently it even accepts reloc_control as its first parameter.
  I don't have a clear plan yet, but I guess at least I should make
  build_backref_tree() cover more cases, instead of taking certain
  relocation-dependent shortcuts.

[THIS PATCHSET]
For the patchset itself, the main purpose is to change how we iterate
through all backref items of one tree block.

The old way:

  path->search_commit_root = 1;
  path->skip_locking = 1;
  ret = btrfs_search_slot(NULL, extent_root, path, &key, 0, 0);
  ptr = btrfs_item_offset_nr();
  end = btrfs_item_end_nr();
  /* Inline item loop */
  while (ptr < end) {
	/* Handle each inline item here */
  }
  /* Keyed item loop */
  while (1) {
	ret = btrfs_next_item();
	btrfs_item_key_to_cpu();
	if (key.objectid != bytenr ||
	    !(key.type == XXX || key.type == YYY))
		break;
	/* Handle each keyed item here */
  }
  
The new way:

  iter = btrfs_backref_iter_alloc();
  for (ret = btrfs_backref_iter_start(iter, bytenr);
       ret == 0; ret = btrfs_backref_iter_next(iter)) {
	/*
	 * Handle both keyed and inlined items here.
	 *
	 * We can use iter->key to determine whether the current item
	 * is inlined or keyed.
	 * Even an inlined item can easily be converted to a keyed
	 * item, just like we did in build_backref_tree().
	 */
  }

Currently, only build_backref_tree() can utilize this infrastructure.

backref.c has more requirements, as it needs to handle iteration for
both data and metadata, and for both the commit root and the current
root.
More importantly, backref.c uses depth-first search, so it is not a
perfect match for btrfs_backref_iter.

Extra naming suggestions are welcome.
The current naming, btrfs_backref_iter_*, already looks pretty long
to me.
Shorter names would be much better.

Changelog:
v2:
- Fix a compilation bug in btrfs_backref_iterator_next()
  It should call btrfs_extent_inline_ref_type().

v3:
- Comment and commit message update
- Move helper definitions to where they get first used
- Use helpers to replace some internal open code

v4:
- Fix a bug in the end_ptr calculation
  The old btrfs_item_end_nr() doesn't take LEAF_DATA_OFFSET into
  account, which caused a failure in btrfs/003.

- Add extra check for keyed only backrefs
  btrfs_backref_iter_start() doesn't handle keyed only backrefs well.
  Add extra check to ensure callers get the correct cur_ptr set.

- Shorten the name: iterator -> iter

v5:
- Add the missing assignment of iter->bytenr
  Without it, keyed backrefs were not checked, causing a random
  endless loop in btrfs/187.

- Add comment for btrfs_backref_iter_next()
  Mostly for the return value.


Qu Wenruo (3):
  btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  btrfs: backref: Implement btrfs_backref_iter_next()
  btrfs: relocation: Use btrfs_backref_iter infrastructure

 fs/btrfs/backref.c    | 145 +++++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |  94 ++++++++++++++++++++
 fs/btrfs/relocation.c | 193 ++++++++++++++----------------------------
 3 files changed, 301 insertions(+), 131 deletions(-)

Comments

Nikolay Borisov Feb. 19, 2020, 9:01 a.m. UTC | #1
On 18.02.20 11:01, Qu Wenruo wrote:
> [ full cover letter quoted above, trimmed ]

I tested the patchset on btrfs/balance group of tests and didn't observe
any of the regressions present in the previous version so:

Tested-by: Nikolay Borisov <nborisov@suse.com>