mm/swap: take folio refcount after testing the LRU flag

Message ID: 1723270558-31674-1-git-send-email-yangge1116@126.com

Commit Message

Ge Yang Aug. 10, 2024, 6:15 a.m. UTC
From: yangge <yangge1116@126.com>

Whoever passes a folio to __folio_batch_add_and_move() must hold
a reference; otherwise something else would already be messed up.
Since the folio is referenced, it will not be freed elsewhere, so
we can safely clear the folio's LRU flag. As discussed with David
in [1], we should take the reference after testing the LRU flag,
not before.

Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
Signed-off-by: yangge <yangge1116@126.com>
---
 mm/swap.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

Comments

David Hildenbrand Aug. 12, 2024, 9:10 a.m. UTC | #1
On 10.08.24 08:15, yangge1116@126.com wrote:
> From: yangge <yangge1116@126.com>
> 
> Whoever passes a folio to __folio_batch_add_and_move() must hold
> a reference; otherwise something else would already be messed up.
> Since the folio is referenced, it will not be freed elsewhere, so
> we can safely clear the folio's LRU flag. As discussed with David
> in [1], we should take the reference after testing the LRU flag,
> not before.
> 
> Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
> Signed-off-by: yangge <yangge1116@126.com>
> ---
>   mm/swap.c | 8 +++-----
>   1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index 67a2467..6b83898 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -226,12 +226,10 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>   {
>   	unsigned long flags;
>   
> -	folio_get(folio);
> -
> -	if (on_lru && !folio_test_clear_lru(folio)) {
> -		folio_put(folio);
> +	if (on_lru && !folio_test_clear_lru(folio))
>   		return;
> -	}
> +
> +	folio_get(folio);
>   
>   	if (disable_irq)
>   		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);

Ah, "mm/swap: remove boilerplate" reduced them to a single instance, good.

You can throw in a

VM_WARN_ON_ONCE(!folio_ref_count(folio));

before the folio_get(folio)

so we would realize when something very unexpected happens.

Acked-by: David Hildenbrand <david@redhat.com>
Ge Yang Aug. 12, 2024, 11:18 a.m. UTC | #2
On 2024/8/12 17:10, David Hildenbrand wrote:
> On 10.08.24 08:15, yangge1116@126.com wrote:
>> [...]
> 
> Ah, "mm/swap: remove boilerplate" reduced them to a single instance, good.
> 
> You can throw in a
> 
> VM_WARN_ON_ONCE(!folio_ref_count(folio));
> 
> before the folio_get(folio)
> 
> so we would realize when something very unexpected happens.

Ok, thanks.

> 
> Acked-by: David Hildenbrand <david@redhat.com>
>
Patch

diff --git a/mm/swap.c b/mm/swap.c
index 67a2467..6b83898 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -226,12 +226,10 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 {
 	unsigned long flags;
 
-	folio_get(folio);
-
-	if (on_lru && !folio_test_clear_lru(folio)) {
-		folio_put(folio);
+	if (on_lru && !folio_test_clear_lru(folio))
 		return;
-	}
+
+	folio_get(folio);
 
 	if (disable_irq)
 		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);