[V2] mm/swap: take folio refcount after testing the LRU flag

Message ID 1723461718-5503-1-git-send-email-yangge1116@126.com (mailing list archive)
State New
Series [V2] mm/swap: take folio refcount after testing the LRU flag

Commit Message

Ge Yang Aug. 12, 2024, 11:21 a.m. UTC
From: yangge <yangge1116@126.com>

Whoever passes a folio to __folio_batch_add_and_move() must hold
a reference, otherwise something else is already broken. Since the
folio is referenced, it cannot be freed elsewhere, so we can safely
clear the folio's LRU flag. As discussed with David in [1], we should
take the reference after testing the LRU flag, not before.

Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
Signed-off-by: yangge <yangge1116@126.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/swap.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

V2:
   Add sanity check suggested by David
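
The invariant described in the commit message can be seen in the callers:
they reach __folio_batch_add_and_move() through the folio_batch_add_and_move()
wrapper while already holding a folio reference. A simplified sketch of one
such caller (paraphrased and abridged, not verbatim upstream code):

	void folio_add_lru(struct folio *folio)
	{
		/*
		 * The caller of folio_add_lru() already holds a reference
		 * (e.g. from the allocation or a prior folio_get()), so the
		 * folio cannot be freed underneath us here.
		 */
		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);

		/* expands to __folio_batch_add_and_move(&cpu_fbatches.lru_add, ...) */
		folio_batch_add_and_move(folio, lru_add, false);
	}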

Comments

Yu Zhao Aug. 12, 2024, 7:06 p.m. UTC | #1
On Mon, Aug 12, 2024 at 5:22 AM <yangge1116@126.com> wrote:
>
> From: yangge <yangge1116@126.com>
>
> Whoever passes a folio to __folio_batch_add_and_move() must hold
> a reference, otherwise something else would already be messed up.
> If the folio is referenced, it will not be freed elsewhere, so we
> can safely clear the folio's lru flag. As discussed with David
> in [1], we should take the reference after testing the LRU flag,
> not before.
>
> Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
> Signed-off-by: yangge <yangge1116@126.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/swap.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> V2:
>    Add sanity check suggested by David
>
> diff --git a/mm/swap.c b/mm/swap.c
> index 67a2467..c048659 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -226,12 +226,11 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>  {
>         unsigned long flags;
>
> -       folio_get(folio);
> -
> -       if (on_lru && !folio_test_clear_lru(folio)) {
> -               folio_put(folio);
> +       if (on_lru && !folio_test_clear_lru(folio))
>                 return;
> -       }
> +
> +       VM_WARN_ON_ONCE(!folio_ref_count(folio));
> +       folio_get(folio);

No need to check folio_ref_count() here, because folio_get() already
does it with a better check folio_ref_zero_or_close_to_overflow().
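
For reference, folio_get() in include/linux/mm.h looks roughly like this
(paraphrased from the kernel source around this time):

	static inline void folio_get(struct folio *folio)
	{
		/* Catches a zero or nearly-overflowing refcount before incrementing. */
		VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
		folio_ref_inc(folio);
	}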
David Hildenbrand Aug. 12, 2024, 8:12 p.m. UTC | #2
On 12.08.24 21:06, Yu Zhao wrote:
> On Mon, Aug 12, 2024 at 5:22 AM <yangge1116@126.com> wrote:
>>
>> From: yangge <yangge1116@126.com>
>>
>> Whoever passes a folio to __folio_batch_add_and_move() must hold
>> a reference, otherwise something else would already be messed up.
>> If the folio is referenced, it will not be freed elsewhere, so we
>> can safely clear the folio's lru flag. As discussed with David
>> in [1], we should take the reference after testing the LRU flag,
>> not before.
>>
>> Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
>> Signed-off-by: yangge <yangge1116@126.com>
>> Acked-by: David Hildenbrand <david@redhat.com>
>> ---
>>   mm/swap.c | 9 ++++-----
>>   1 file changed, 4 insertions(+), 5 deletions(-)
>>
>> V2:
>>     Add sanity check suggested by David
>>
>> diff --git a/mm/swap.c b/mm/swap.c
>> index 67a2467..c048659 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -226,12 +226,11 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>>   {
>>          unsigned long flags;
>>
>> -       folio_get(folio);
>> -
>> -       if (on_lru && !folio_test_clear_lru(folio)) {
>> -               folio_put(folio);
>> +       if (on_lru && !folio_test_clear_lru(folio))
>>                  return;
>> -       }
>> +
>> +       VM_WARN_ON_ONCE(!folio_ref_count(folio));
>> +       folio_get(folio);
> 
> No need to check folio_ref_count() here, because folio_get() already
> does it with a better check folio_ref_zero_or_close_to_overflow().
> 

Ah, good point, thanks!
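
With the redundant check dropped as suggested, the prologue of
__folio_batch_add_and_move() would presumably reduce to the following
(a sketch of what the feedback implies, not necessarily the version
ultimately applied):

	if (on_lru && !folio_test_clear_lru(folio))
		return;

	/*
	 * The caller holds a reference; folio_get() itself already warns
	 * via folio_ref_zero_or_close_to_overflow() if the refcount is bogus.
	 */
	folio_get(folio);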
Patch

diff --git a/mm/swap.c b/mm/swap.c
index 67a2467..c048659 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -226,12 +226,11 @@  static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 {
 	unsigned long flags;
 
-	folio_get(folio);
-
-	if (on_lru && !folio_test_clear_lru(folio)) {
-		folio_put(folio);
+	if (on_lru && !folio_test_clear_lru(folio))
 		return;
-	}
+
+	VM_WARN_ON_ONCE(!folio_ref_count(folio));
+	folio_get(folio);
 
 	if (disable_irq)
 		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);