[v3] Btrfs: optimize key searches in btrfs_search_slot

Message ID 1377784766-7552-1-git-send-email-fdmanana@gmail.com (mailing list archive)
State Superseded, archived
Headers show

Commit Message

Filipe Manana Aug. 29, 2013, 1:59 p.m. UTC
When the binary search returns 0 (exact match), the target key
will necessarily be at slot 0 of all nodes below the current one,
so in this case the binary search is not needed because it will
always return 0, and we waste time doing it, holding node locks
for longer than necessary, etc.
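The idea can be modelled outside the kernel. Below is a minimal user-space sketch (hypothetical names and plain int arrays standing in for tree levels — not the patch itself): once one level's binary search returns 0, every level below can take slot 0 directly.

```c
#include <assert.h>
#include <stddef.h>

/* Each int array stands in for the keys of one tree level.
 * bin_search() returns 0 on an exact match and stores the slot,
 * otherwise returns 1 and stores the first slot >= key. */
static int bin_search(const int *keys, size_t n, int key, int *slot)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (keys[mid] < key) {
			lo = mid + 1;
		} else if (keys[mid] > key) {
			hi = mid;
		} else {
			*slot = (int)mid;
			return 0;	/* exact match */
		}
	}
	*slot = (int)lo;
	return 1;
}

/* Models the patch's key_search(): once some level matched exactly
 * (*prev_cmp == 0), the key must be the first one at every level
 * below, so the search is skipped and slot 0 is taken directly. */
static int key_search(const int *keys, size_t n, int key,
		      int *prev_cmp, int *slot)
{
	if (*prev_cmp != 0) {
		*prev_cmp = bin_search(keys, n, key, slot);
		return *prev_cmp;
	}
	*slot = 0;
	return 0;
}
```

A caller resets prev_cmp to -1 before each descent, exactly as btrfs_search_slot() does with its local prev_cmp.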

Below follow histograms with the times spent on the current approach of
doing a binary search when the previous binary search returned 0, and
times for the new approach, which directly picks the first item/child
node in the leaf/node.

Current approach:

Count: 5013
Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
  25.000 -   33.930:   211 ######
  33.930 -   45.927:   277 ########
  45.927 -   62.045:  1834 #####################################################
  62.045 -   83.699:  1203 ###################################
  83.699 -  112.789:   609 ##################
 112.789 -  151.872:   450 #############
 151.872 -  204.377:   246 #######
 204.377 -  274.917:   124 ####
 274.917 -  369.684:    48 #
 369.684 -  497.000:    11 |

Approach proposed by this patch:

Count: 5013
Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
 10.000 -   20.339:  3160 #####################################################
 20.339 -   40.397:  1131 ###################
 40.397 -   79.308:   507 #########
 79.308 -  154.794:   199 ###
154.794 -  301.232:    14 |
301.232 -  585.313:     1 |
585.313 - 8303.000:     1 |

These samples were captured during a run of btrfs tests 001, 002 and
004 from xfstests, with a leaf/node size of 4KB.

Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
---

V2: Simplified and removed unnecessary code.
V3: Replaced BUG_ON() with the new ASSERT() from Josef.

 fs/btrfs/ctree.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

Comments

Zach Brown Aug. 29, 2013, 6:08 p.m. UTC | #1
On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
> When the binary search returns 0 (exact match), the target key
> will necessarily be at slot 0 of all nodes below the current one,
> so in this case the binary search is not needed because it will
> always return 0, and we waste time doing it, holding node locks
> for longer than necessary, etc.
> 
> Below follow histograms with the times spent on the current approach of
> doing a binary search when the previous binary search returned 0, and
> times for the new approach, which directly picks the first item/child
> node in the leaf/node.
> 
> Count: 5013
> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000

> Count: 5013
> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000

Where'd the giant increase in the range max come from?  Just jittery
measurement?  Maybe get a lot more data points to smooth that out?

> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> +		      int level, int *prev_cmp, int *slot)
> +{
> +	char *kaddr = NULL;
> +	unsigned long map_start = 0;
> +	unsigned long map_len = 0;
> +	unsigned long offset;
> +	struct btrfs_disk_key *k = NULL;
> +	struct btrfs_disk_key unaligned;
> +	int err;
> +
> +	if (*prev_cmp != 0) {
> +		*prev_cmp = bin_search(b, key, level, slot);
> +		return *prev_cmp;
> +	}


> +	*slot = 0;
> +
> +	return 0;

So this is the actual work done by the function.

> +
> +	if (level == 0)
> +		offset = offsetof(struct btrfs_leaf, items);
> +	else
> +		offset = offsetof(struct btrfs_node, ptrs);

(+10 fragility points for assuming that the key starts each struct
instead of using [0].key)

> +
> +	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
> +					&kaddr, &map_start, &map_len);
> +	if (!err) {
> +		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
> +	} else {
> +		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
> +		k = &unaligned;
> +	}
> +
> +	ASSERT(comp_keys(k, key) == 0);

All of the rest of the function, including most of the local variables,
is overhead for that assertion.  We don't actually care about the
relative sorted key position of the two keys so we don't need smart
field-aware comparisons.  We can use a dumb memcmp.

We can replace all that stuff with two easy memcmp_extent_buffer() calls,
which vanish if ASSERT is a nop.

	if (level)
		ASSERT(!memcmp_extent_buffer(b, key,
			offsetof(struct btrfs_node, ptrs[0].key),
			sizeof(*key)));
	else
		ASSERT(!memcmp_extent_buffer(b, key,
			offsetof(struct btrfs_leaf, items[0].key),
			sizeof(*key)));

Right?

- z
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Josef Bacik Aug. 29, 2013, 6:35 p.m. UTC | #2
On Thu, Aug 29, 2013 at 11:08:16AM -0700, Zach Brown wrote:
> On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
> > When the binary search returns 0 (exact match), the target key
> > will necessarily be at slot 0 of all nodes below the current one,
> > so in this case the binary search is not needed because it will
> > always return 0, and we waste time doing it, holding node locks
> > for longer than necessary, etc.
> > 
> > Below follow histograms with the times spent on the current approach of
> > doing a binary search when the previous binary search returned 0, and
> > times for the new approach, which directly picks the first item/child
> > node in the leaf/node.
> > 
> > Count: 5013
> > Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
> > Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
> 
> > Count: 5013
> > Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
> > Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
> 
> Where'd the giant increase in the range max come from?  Just jittery
> measurement?  Maybe get a lot more data points to smooth that out?
> 
> > +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
> > +		      int level, int *prev_cmp, int *slot)
> > +{
> > +	char *kaddr = NULL;
> > +	unsigned long map_start = 0;
> > +	unsigned long map_len = 0;
> > +	unsigned long offset;
> > +	struct btrfs_disk_key *k = NULL;
> > +	struct btrfs_disk_key unaligned;
> > +	int err;
> > +
> > +	if (*prev_cmp != 0) {
> > +		*prev_cmp = bin_search(b, key, level, slot);
> > +		return *prev_cmp;
> > +	}
> 
> 
> > +	*slot = 0;
> > +
> > +	return 0;
> 
> So this is the actual work done by the function.
> 
> > +
> > +	if (level == 0)
> > +		offset = offsetof(struct btrfs_leaf, items);
> > +	else
> > +		offset = offsetof(struct btrfs_node, ptrs);
> 
> (+10 fragility points for assuming that the key starts each struct
> instead of using [0].key)
> 
> > +
> > +	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
> > +					&kaddr, &map_start, &map_len);
> > +	if (!err) {
> > +		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
> > +	} else {
> > +		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
> > +		k = &unaligned;
> > +	}
> > +
> > +	ASSERT(comp_keys(k, key) == 0);
> 
> All of the rest of the function, including most of the local variables,
> is overhead for that assertion.  We don't actually care about the
> relative sorted key position of the two keys so we don't need smart
> field-aware comparisons.  We can use a dumb memcmp.
> 
> We can replace all that stuff with two easy memcmp_extent_buffer() calls,
> which vanish if ASSERT is a nop.
> 

Actually we can't since we have a cpu key and the keys in the eb are disk keys.
So maybe keep what we have here and wrap it completely in CONFIG_BTRFS_ASSERT?

Josef
Filipe Manana Aug. 29, 2013, 6:41 p.m. UTC | #3
On Thu, Aug 29, 2013 at 7:08 PM, Zach Brown <zab@redhat.com> wrote:
> On Thu, Aug 29, 2013 at 02:59:26PM +0100, Filipe David Borba Manana wrote:
>> When the binary search returns 0 (exact match), the target key
>> will necessarily be at slot 0 of all nodes below the current one,
>> so in this case the binary search is not needed because it will
>> always return 0, and we waste time doing it, holding node locks
>> for longer than necessary, etc.
>>
>> Below follow histograms with the times spent on the current approach of
>> doing a binary search when the previous binary search returned 0, and
>> times for the new approach, which directly picks the first item/child
>> node in the leaf/node.
>>
>> Count: 5013
>> Range: 25.000 - 497.000; Mean: 82.767; Median: 64.000; Stddev: 49.972
>> Percentiles:  90th: 141.000; 95th: 182.000; 99th: 287.000
>
>> Count: 5013
>> Range: 10.000 - 8303.000; Mean: 28.505; Median: 18.000; Stddev: 119.147
>> Percentiles:  90th: 49.000; 95th: 74.000; 99th: 115.000
>
> Where'd the giant increase in the range max come from?  Just jittery
> measurement?  Maybe get a lot more data points to smooth that out?

Correct, just jittery.

>
>> +static int key_search(struct extent_buffer *b, struct btrfs_key *key,
>> +                   int level, int *prev_cmp, int *slot)
>> +{
>> +     char *kaddr = NULL;
>> +     unsigned long map_start = 0;
>> +     unsigned long map_len = 0;
>> +     unsigned long offset;
>> +     struct btrfs_disk_key *k = NULL;
>> +     struct btrfs_disk_key unaligned;
>> +     int err;
>> +
>> +     if (*prev_cmp != 0) {
>> +             *prev_cmp = bin_search(b, key, level, slot);
>> +             return *prev_cmp;
>> +     }
>
>
>> +     *slot = 0;
>> +
>> +     return 0;
>
> So this is the actual work done by the function.

Correct. That and the very first if statement in the function.

>
>> +
>> +     if (level == 0)
>> +             offset = offsetof(struct btrfs_leaf, items);
>> +     else
>> +             offset = offsetof(struct btrfs_node, ptrs);
>
> (+10 fragility points for assuming that the key starts each struct
> instead of using [0].key)

Ok. I just copied that from ctree.c:bin_search(). I guess that gives
another +10 fragility points.
Thanks for pointing that out.

>
>> +
>> +     err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
>> +                                     &kaddr, &map_start, &map_len);
>> +     if (!err) {
>> +             k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
>> +     } else {
>> +             read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
>> +             k = &unaligned;
>> +     }
>> +
>> +     ASSERT(comp_keys(k, key) == 0);
>
> All of the rest of the function, including most of the local variables,
> is overhead for that assertion.  We don't actually care about the
> relative sorted key position of the two keys so we don't need smart
> field-aware comparisons.  We can use a dumb memcmp.
>
> We can replace all that stuff with two easy memcmp_extent_buffer() calls,
> which vanish if ASSERT is a nop.
>
>         if (level)
>                 ASSERT(!memcmp_extent_buffer(b, key,
>                         offsetof(struct btrfs_node, ptrs[0].key),
>                         sizeof(*key)));
>         else
>                 ASSERT(!memcmp_extent_buffer(b, key,
>                         offsetof(struct btrfs_leaf, items[0].key),
>                         sizeof(*key)));
>
> Right?

No, and as Josef just pointed out, that way we compare a btrfs_key with a
btrfs_disk_key, which is wrong due to endianness differences.

So I'll go for Josef's suggestion in the following mail about wrapping
stuff with a CONFIG_BTRFS_ASSERT #ifdef macro.

>
> - z
Zach Brown Aug. 29, 2013, 7 p.m. UTC | #4
> > We can replace all that stuff with two easy memcmp_extent_buffer() calls,
> > which vanish if ASSERT is a nop.
> 
> Actually we can't since we have a cpu key and the keys in the eb are disk keys.
> So maybe keep what we have here and wrap it completely in CONFIG_BTRFS_ASSERT?

I could have sworn that I checked that the input was a disk key.

In that case, then, I'd put all this off in a helper function that's
called in the asserts that swabs to a disk key and then does the memcmp.
All this fiddly assert junk (which just compares keys!) doesn't belong
implemented by hand in this trivial helper.

- z
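In user space, the helper Zach describes looks roughly like the sketch below. The structs and names are simplified stand-ins, not the real btrfs definitions: a CPU-order key is first encoded in the fixed little-endian on-disk layout, and only then compared with a dumb memcmp, which sidesteps the endianness problem Josef raised.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for btrfs_key (native endianness) and
 * btrfs_disk_key (packed little-endian on disk); hypothetical
 * layout, not the real btrfs definitions. */
struct cpu_key {
	uint64_t objectid;
	uint8_t type;
	uint64_t offset;
};

struct disk_key {
	uint8_t bytes[17];	/* 8 + 1 + 8, little-endian */
};

static void put_le64(uint8_t *p, uint64_t v)
{
	int i;

	for (i = 0; i < 8; i++)
		p[i] = (uint8_t)(v >> (8 * i));
}

/* The "swab to a disk key" step: encode the CPU key in the fixed
 * on-disk byte order, independent of host endianness. */
static void cpu_key_to_disk(struct disk_key *d, const struct cpu_key *c)
{
	put_le64(d->bytes, c->objectid);
	d->bytes[8] = c->type;
	put_le64(d->bytes + 9, c->offset);
}

/* The helper an ASSERT() would call: convert, then a plain memcmp
 * against the stored on-disk bytes.  A raw memcmp of the cpu_key
 * struct itself would be wrong on big-endian hosts. */
static int cpu_key_matches_disk(const struct cpu_key *c,
				const struct disk_key *on_disk)
{
	struct disk_key tmp;

	cpu_key_to_disk(&tmp, c);
	return memcmp(tmp.bytes, on_disk->bytes, sizeof(tmp.bytes)) == 0;
}
```

Because the whole check lives in one helper called only from ASSERT(), it compiles away entirely when assertions are disabled, which is the property both suggestions in this thread are after.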
Zach Brown Aug. 29, 2013, 7:02 p.m. UTC | #5
> >> +     if (level == 0)
> >> +             offset = offsetof(struct btrfs_leaf, items);
> >> +     else
> >> +             offset = offsetof(struct btrfs_node, ptrs);
> >
> > (+10 fragility points for assuming that the key starts each struct
> > instead of using [0].key)
> 
> Ok. I just copied that from ctree.c:bin_search(). I guess that gives
> another +10 fragility points.
> Thanks for pointing out.

Yeah.  Don't worry, you have quite a way to go before building up
personal fragility points that come anywhere near the wealth of
fragility points that btrfs has in the bank :).

- z
Patch

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 5fa521b..b69dd46 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2426,6 +2426,42 @@  done:
 	return ret;
 }
 
+static int key_search(struct extent_buffer *b, struct btrfs_key *key,
+		      int level, int *prev_cmp, int *slot)
+{
+	char *kaddr = NULL;
+	unsigned long map_start = 0;
+	unsigned long map_len = 0;
+	unsigned long offset;
+	struct btrfs_disk_key *k = NULL;
+	struct btrfs_disk_key unaligned;
+	int err;
+
+	if (*prev_cmp != 0) {
+		*prev_cmp = bin_search(b, key, level, slot);
+		return *prev_cmp;
+	}
+
+	if (level == 0)
+		offset = offsetof(struct btrfs_leaf, items);
+	else
+		offset = offsetof(struct btrfs_node, ptrs);
+
+	err = map_private_extent_buffer(b, offset, sizeof(struct btrfs_disk_key),
+					&kaddr, &map_start, &map_len);
+	if (!err) {
+		k = (struct btrfs_disk_key *)(kaddr + offset - map_start);
+	} else {
+		read_extent_buffer(b, &unaligned, offset, sizeof(unaligned));
+		k = &unaligned;
+	}
+
+	ASSERT(comp_keys(k, key) == 0);
+	*slot = 0;
+
+	return 0;
+}
+
 /*
  * look for key in the tree.  path is filled in with nodes along the way
  * if key is found, we return zero and you can find the item in the leaf
@@ -2454,6 +2490,7 @@  int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	int write_lock_level = 0;
 	u8 lowest_level = 0;
 	int min_write_lock_level;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(lowest_level && ins_len > 0);
@@ -2484,6 +2521,7 @@  int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root
 	min_write_lock_level = write_lock_level;
 
 again:
+	prev_cmp = -1;
 	/*
 	 * we try very hard to do read locks on the root
 	 */
@@ -2584,7 +2622,7 @@  cow_done:
 		if (!cow)
 			btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;
@@ -2719,6 +2757,7 @@  int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
+	int prev_cmp;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2729,6 +2768,7 @@  int btrfs_search_old_slot(struct btrfs_root *root, struct btrfs_key *key,
 	}
 
 again:
+	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2746,7 +2786,7 @@  again:
 		 */
 		btrfs_unlock_up_safe(p, level + 1);
 
-		ret = bin_search(b, key, level, &slot);
+		ret = key_search(b, key, level, &prev_cmp, &slot);
 
 		if (level != 0) {
 			int dec = 0;