
Btrfs: replace raid56 stripe bubble sort with insert sort

Message ID 20171228152804.4645-1-nefelim4ag@gmail.com (mailing list archive)
State New, archived
Headers show

Commit Message

Timofey Titovets Dec. 28, 2017, 3:28 p.m. UTC
Insertion sort generally performs better than bubble sort,
as it needs fewer iterations on average.
This version also moves each element directly to its final
position instead of doing repeated swaps.

I'm not sure how many stripes per bio btrfs tries
to store (and sort) for raid56.

So this is also a bit shorter, in the name of great justice.

Signed-off-by: Timofey Titovets <nefelim4ag@gmail.com>
---
 fs/btrfs/volumes.c | 29 ++++++++++++-----------------
 1 file changed, 12 insertions(+), 17 deletions(-)

Comments

Filipe Manana Jan. 3, 2018, 11:40 a.m. UTC | #1
On Thu, Dec 28, 2017 at 3:28 PM, Timofey Titovets <nefelim4ag@gmail.com> wrote:
> Insertion sort generally performs better than bubble sort,
> as it needs fewer iterations on average.
> This version also moves each element directly to its final
> position instead of doing repeated swaps.
>
> I'm not sure how many stripes per bio btrfs tries
> to store (and sort) for raid56.

If you don't know it, besides being unlikely to be doing the best
possible thing here, you might actually make things worse or offer no
benefit at all. IOW, you should know it for sure before submitting such
changes.

You should know if the number of elements to sort is big enough such
that an insertion sort is faster than a bubble sort, and more
importantly, measure it and mention it in the changelog.
As it is, you are showing a lack of understanding of the code and
component you are touching, and leaving many open questions, such as
how much faster this is, why insertion sort and not a
quick/merge/heap/whatever sort, etc.

>
> So this is also a bit shorter, in the name of great justice.
>
> Signed-off-by: Timofey Titovets <nefelim4ag@gmail.com>
> ---
>  fs/btrfs/volumes.c | 29 ++++++++++++-----------------
>  1 file changed, 12 insertions(+), 17 deletions(-)
>
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 98bc2433a920..7195fc8c49b1 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -5317,29 +5317,24 @@ static inline int parity_smaller(u64 a, u64 b)
>         return a > b;
>  }
>
> -/* Bubble-sort the stripe set to put the parity/syndrome stripes last */
> +/* Insertion-sort the stripe set to put the parity/syndrome stripes last */
>  static void sort_parity_stripes(struct btrfs_bio *bbio, int num_stripes)
>  {
>         struct btrfs_bio_stripe s;
> -       int i;
> +       int i, j;
>         u64 l;
> -       int again = 1;
>
> -       while (again) {
> -               again = 0;
> -               for (i = 0; i < num_stripes - 1; i++) {
> -                       if (parity_smaller(bbio->raid_map[i],
> -                                          bbio->raid_map[i+1])) {
> -                               s = bbio->stripes[i];
> -                               l = bbio->raid_map[i];
> -                               bbio->stripes[i] = bbio->stripes[i+1];
> -                               bbio->raid_map[i] = bbio->raid_map[i+1];
> -                               bbio->stripes[i+1] = s;
> -                               bbio->raid_map[i+1] = l;
> -
> -                               again = 1;
> -                       }
> +       for (i = 1; i < num_stripes; i++) {
> +               s = bbio->stripes[i];
> +               l = bbio->raid_map[i];
> +               for (j = i - 1; j >= 0; j--) {
> +                       if (!parity_smaller(bbio->raid_map[j], l))
> +                               break;
> +                       bbio->stripes[j+1]  = bbio->stripes[j];
> +                       bbio->raid_map[j+1] = bbio->raid_map[j];
>                 }
> +               bbio->stripes[j+1]  = s;
> +               bbio->raid_map[j+1] = l;
>         }
>  }
>
> --
> 2.15.1
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
Timofey Titovets Jan. 3, 2018, 3:39 p.m. UTC | #2
2018-01-03 14:40 GMT+03:00 Filipe Manana <fdmanana@gmail.com>:
> On Thu, Dec 28, 2017 at 3:28 PM, Timofey Titovets <nefelim4ag@gmail.com> wrote:
>> Insertion sort generally performs better than bubble sort,
>> as it needs fewer iterations on average.
>> This version also moves each element directly to its final
>> position instead of doing repeated swaps.
>>
>> I'm not sure how many stripes per bio btrfs tries
>> to store (and sort) for raid56.
>
> If you don't know it, besides being unlikely to be doing the best
> possible thing here, you might actually make things worse or offer no
> benefit at all. IOW, you should know it for sure before submitting such
> changes.
>
> You should know if the number of elements to sort is big enough such
> that an insertion sort is faster than a bubble sort, and more
> importantly, measure it and mention it in the changelog.
> As it is, you are showing a lack of understanding of the code and
> component you are touching, and leaving many open questions, such as
> how much faster this is, why insertion sort and not a
> quick/merge/heap/whatever sort, etc.
> --
> Filipe David Manana,
>
> “Whether you think you can, or you think you can't — you're right.”

Sorry, you are right:
I should have done some tests and investigation before sending the patch.
(I was just trusting some magic math.)

The input size depends on the number of devices,
so on small arrays, like 3-5 elements, there is no meaningful difference.

Example: raid6 (with 4 disks) produces many stripe lines with addresses like:
1. 4641783808 4641849344 4641914880 18446744073709551614
2. 4641652736 4641718272 18446744073709551614 4641587200
3. 18446744073709551614 4636475392 4636540928 4636606464
4. 4641521664 18446744073709551614 4641390592 4641456128

For that number of elements any sorting algorithm works fast enough.

Let's treat these addresses as random, non-repeating numbers.

We can use a tool like Sound of Sorting (SoS) to make some
easy-to-interpret measurements of the algorithms.

(Sorry, there is no script to reproduce this, as SoS does not provide
a CLI; the numbers were gathered by hand, running SoS with different
parameters.)

Table (also attached with the source data points):
Sort_algo |Metric         |3   |4    |6    |8    |10    |12    |14    |AVG
Bubble    |Comparisons    |3   |6    |15   |28   |45    |66    |91    |36.3
Bubble    |Array accesses |7.8 |18.2 |45.8 |81.8 |133.4 |192   |268.6 |106.8
Insertion |Comparisons    |2.8 |5    |11.6 |17   |28.6  |39.4  |55.2  |22.8
Insertion |Array accesses |8.4 |13.6 |31   |48.8 |80.4  |109.6 |155.8 |63.9
(the numeric columns are the number of disks)

i.e. at sizes like 3-4 there is not much difference, but insertion
sort works faster on bigger arrays (up to 1.7x for a 14-disk array).

Does that make sense?
I think yes: in any case that is several dozen machine instructions
saved, which can be used elsewhere.

P.S. As for heap sort, which is also available in the kernel via sort():
that adds too much overhead for such a small number of devices;
i.e. heap sort only shows a profit over insertion sort at 16+ elements.

/* Snob mode on */
P.P.S.
Heap sort and other algorithms like it need additional memory,
so they are useless to compare in our case,
but they would work faster, of course.
/* Snob mode off */

Thanks.
Filipe Manana Jan. 5, 2018, 1:14 p.m. UTC | #3
On Wed, Jan 3, 2018 at 3:39 PM, Timofey Titovets <nefelim4ag@gmail.com> wrote:
> 2018-01-03 14:40 GMT+03:00 Filipe Manana <fdmanana@gmail.com>:
>> On Thu, Dec 28, 2017 at 3:28 PM, Timofey Titovets <nefelim4ag@gmail.com> wrote:
>>> Insertion sort generally performs better than bubble sort,
>>> as it needs fewer iterations on average.
>>> This version also moves each element directly to its final
>>> position instead of doing repeated swaps.
>>>
>>> I'm not sure how many stripes per bio btrfs tries
>>> to store (and sort) for raid56.
>>
>> If you don't know it, besides being unlikely to be doing the best
>> possible thing here, you might actually make things worse or offer no
>> benefit at all. IOW, you should know it for sure before submitting such
>> changes.
>>
>> You should know if the number of elements to sort is big enough such
>> that an insertion sort is faster than a bubble sort, and more
>> importantly, measure it and mention it in the changelog.
>> As it is, you are showing a lack of understanding of the code and
>> component you are touching, and leaving many open questions, such as
>> how much faster this is, why insertion sort and not a
>> quick/merge/heap/whatever sort, etc.
>> --
>> Filipe David Manana,
>>
>> “Whether you think you can, or you think you can't — you're right.”
>
> Sorry, you are right:
> I should have done some tests and investigation before sending the patch.
> (I was just trusting some magic math.)
>
> The input size depends on the number of devices,
> so on small arrays, like 3-5 elements, there is no meaningful difference.
>
> Example: raid6 (with 4 disks) produces many stripe lines with addresses like:
> 1. 4641783808 4641849344 4641914880 18446744073709551614
> 2. 4641652736 4641718272 18446744073709551614 4641587200
> 3. 18446744073709551614 4636475392 4636540928 4636606464
> 4. 4641521664 18446744073709551614 4641390592 4641456128
>
> For that number of elements any sorting algorithm works fast enough.
>
> Let's treat these addresses as random, non-repeating numbers.
>
> We can use a tool like Sound of Sorting (SoS) to make some
> easy-to-interpret measurements of the algorithms.

Nack.
My point was about testing in the btrfs code and not somewhere else.
We can all get estimations from CS books, websites, etc for multiple
algorithms for different input sizes. And these are typically
comparing the average case, and while some algorithms perform better
than others in the average case, things can get reversed in the worst
case (heap sort vs quick sort iirc, better in worst case but usually
worse in the average case).
What matters is the btrfs context - that is where things have to be measured.


>
> (Sorry, there is no script to reproduce this, as SoS does not provide
> a CLI; the numbers were gathered by hand, running SoS with different
> parameters.)
>
> Table (also attached with the source data points):
> Sort_algo |Metric         |3   |4    |6    |8    |10    |12    |14    |AVG
> Bubble    |Comparisons    |3   |6    |15   |28   |45    |66    |91    |36.3
> Bubble    |Array accesses |7.8 |18.2 |45.8 |81.8 |133.4 |192   |268.6 |106.8
> Insertion |Comparisons    |2.8 |5    |11.6 |17   |28.6  |39.4  |55.2  |22.8
> Insertion |Array accesses |8.4 |13.6 |31   |48.8 |80.4  |109.6 |155.8 |63.9
> (the numeric columns are the number of disks)
>
> i.e. at sizes like 3-4 there is not much difference, but insertion
> sort works faster on bigger arrays (up to 1.7x for a 14-disk array).
>
> Does that make sense?
> I think yes: in any case that is several dozen machine instructions
> saved, which can be used elsewhere.
>
> P.S. As for heap sort, which is also available in the kernel via sort():
> that adds too much overhead for such a small number of devices;
> i.e. heap sort only shows a profit over insertion sort at 16+ elements.
>
> /* Snob mode on */
> P.P.S.
> Heap sort and other algorithms like it need additional memory,

Yes... My point in listing the heap sort and other algorithms was not
meant to propose using any of them but rather for you to explain why
insertion sort and not something else.
And I think you are confusing heap sort with merge sort. Merge sort is
the one that requires extra memory.

> so they are useless to compare in our case,
> but they would work faster, of course.
> /* Snob mode off */
>
> Thanks.
> --
> Have a nice day,
> Timofey.

Patch

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 98bc2433a920..7195fc8c49b1 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5317,29 +5317,24 @@  static inline int parity_smaller(u64 a, u64 b)
 	return a > b;
 }
 
-/* Bubble-sort the stripe set to put the parity/syndrome stripes last */
+/* Insertion-sort the stripe set to put the parity/syndrome stripes last */
 static void sort_parity_stripes(struct btrfs_bio *bbio, int num_stripes)
 {
 	struct btrfs_bio_stripe s;
-	int i;
+	int i, j;
 	u64 l;
-	int again = 1;
 
-	while (again) {
-		again = 0;
-		for (i = 0; i < num_stripes - 1; i++) {
-			if (parity_smaller(bbio->raid_map[i],
-					   bbio->raid_map[i+1])) {
-				s = bbio->stripes[i];
-				l = bbio->raid_map[i];
-				bbio->stripes[i] = bbio->stripes[i+1];
-				bbio->raid_map[i] = bbio->raid_map[i+1];
-				bbio->stripes[i+1] = s;
-				bbio->raid_map[i+1] = l;
-
-				again = 1;
-			}
+	for (i = 1; i < num_stripes; i++) {
+		s = bbio->stripes[i];
+		l = bbio->raid_map[i];
+		for (j = i - 1; j >= 0; j--) {
+			if (!parity_smaller(bbio->raid_map[j], l))
+				break;
+			bbio->stripes[j+1]  = bbio->stripes[j];
+			bbio->raid_map[j+1] = bbio->raid_map[j];
 		}
+		bbio->stripes[j+1]  = s;
+		bbio->raid_map[j+1] = l;
 	}
 }