
[4/9] sbitmap: test bit before calling test_and_set_bit()

Message ID 20211013165416.985696-5-axboe@kernel.dk (mailing list archive)
State New, archived
Series Batched completions

Commit Message

Jens Axboe Oct. 13, 2021, 4:54 p.m. UTC
If we come across bits that are already set, then it's quicker to test
that first and gate the test_and_set_bit() operation on the result of
the bit test.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 lib/sbitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Hannes Reinecke Oct. 14, 2021, 7:20 a.m. UTC | #1
On 10/13/21 6:54 PM, Jens Axboe wrote:
> If we come across bits that are already set, then it's quicker to test
> that first and gate the test_and_set_bit() operation on the result of
> the bit test.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   lib/sbitmap.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index c6e2f1f2c4d2..11b244a8d00f 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -166,7 +166,7 @@ static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
>   			return -1;
>   		}
>   
> -		if (!test_and_set_bit_lock(nr, word))
> +		if (!test_bit(nr, word) && !test_and_set_bit_lock(nr, word))
>   			break;
>   
>   		hint = nr + 1;
> 
Hah. Finally something to latch on.

I've seen this coding pattern quite a lot in the block layer, and, of 
course, mostly in the hot path.
(Kinda the point, I guess :-)

However, my question is this:

While 'test_and_set_bit()' is atomic, the combination of
'test_bit && !test_and_set_bit()' is not.

I.e., this change turns an atomic operation into a non-atomic sequence.
So how can we be sure that this patch doesn't introduce a race condition?
And if it doesn't, should we add a comment above the code explaining why
this is safe here?

Cheers,

Hannes
Jens Axboe Oct. 14, 2021, 1:01 p.m. UTC | #2
On 10/14/21 1:20 AM, Hannes Reinecke wrote:
> On 10/13/21 6:54 PM, Jens Axboe wrote:
>> If we come across bits that are already set, then it's quicker to test
>> that first and gate the test_and_set_bit() operation on the result of
>> the bit test.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>>   lib/sbitmap.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
>> index c6e2f1f2c4d2..11b244a8d00f 100644
>> --- a/lib/sbitmap.c
>> +++ b/lib/sbitmap.c
>> @@ -166,7 +166,7 @@ static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
>>   			return -1;
>>   		}
>>   
>> -		if (!test_and_set_bit_lock(nr, word))
>> +		if (!test_bit(nr, word) && !test_and_set_bit_lock(nr, word))
>>   			break;
>>   
>>   		hint = nr + 1;
>>
> Hah. Finally something to latch on.
> 
> I've seen this coding pattern quite a lot in the block layer, and, of 
> course, mostly in the hot path.
> (Kinda the point, I guess :-)
> 
> However, my question is this:
> 
> While 'test_and_set_bit()' is atomic, the combination of
> 'test_bit && !test_and_set_bit()' is not.
> 
> I.e., this change turns an atomic operation into a non-atomic sequence.
> So how can we be sure that this patch doesn't introduce a race condition?
> And if it doesn't, should we add a comment above the code explaining why
> this is safe here?

If test_bit() returns true, we don't bother with the atomic
test_and_set. That avoids re-dirtying a cacheline when the operation
would be pointless. There's no race in this case; we just skip this bit.

The sequence of !test_bit && !test_and_set_bit is very much still
atomic; it's just gated on the fact that our optimistic first check said
the bit is currently clear. It obviously has to be.
Bart Van Assche Oct. 14, 2021, 6:41 p.m. UTC | #3
On 10/13/21 9:54 AM, Jens Axboe wrote:
> If we come across bits that are already set, then it's quicker to test
> that first and gate the test_and_set_bit() operation on the result of
> the bit test.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   lib/sbitmap.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index c6e2f1f2c4d2..11b244a8d00f 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -166,7 +166,7 @@ static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
>   			return -1;
>   		}
>   
> -		if (!test_and_set_bit_lock(nr, word))
> +		if (!test_bit(nr, word) && !test_and_set_bit_lock(nr, word))
>   			break;
>   
>   		hint = nr + 1;

Hi Jens,

Are any numbers available showing how much this change improves
performance? I'm asking since my understanding is that this patch only
helps if bit 'nr' is set after find_next_zero_bit() finishes and before
test_and_set_bit() is called. Isn't that a very short time window? Am I
perhaps missing something?

Thanks,

Bart.

Patch

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index c6e2f1f2c4d2..11b244a8d00f 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -166,7 +166,7 @@ static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
 			return -1;
 		}
 
-		if (!test_and_set_bit_lock(nr, word))
+		if (!test_bit(nr, word) && !test_and_set_bit_lock(nr, word))
 			break;
 
 		hint = nr + 1;