Message ID: 1723270558-31674-1-git-send-email-yangge1116@126.com (mailing list archive)
State:      New
Series:     mm/swap: take folio refcount after testing the LRU flag
On 10.08.24 08:15, yangge1116@126.com wrote:
> From: yangge <yangge1116@126.com>
>
> Whoever passes a folio to __folio_batch_add_and_move() must hold
> a reference, otherwise something else would already be messed up.
> If the folio is referenced, it will not be freed elsewhere, so we
> can safely clear the folio's lru flag. As discussed with David
> in [1], we should take the reference after testing the LRU flag,
> not before.
>
> Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
> Signed-off-by: yangge <yangge1116@126.com>
> ---
>  mm/swap.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/mm/swap.c b/mm/swap.c
> index 67a2467..6b83898 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -226,12 +226,10 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>  {
>  	unsigned long flags;
>
> -	folio_get(folio);
> -
> -	if (on_lru && !folio_test_clear_lru(folio)) {
> -		folio_put(folio);
> +	if (on_lru && !folio_test_clear_lru(folio))
>  		return;
> -	}
> +
> +	folio_get(folio);
>
>  	if (disable_irq)
>  		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);

Ah, "mm/swap: remove boilerplate" reduced them to a single instance, good.

You can throw in a

	VM_WARN_ON_ONCE(!folio_ref_count(folio));

before the folio_get(folio)

so we would realize when something very unexpected happens.

Acked-by: David Hildenbrand <david@redhat.com>
On 2024/8/12 17:10, David Hildenbrand wrote:
> On 10.08.24 08:15, yangge1116@126.com wrote:
>> From: yangge <yangge1116@126.com>
>>
>> Whoever passes a folio to __folio_batch_add_and_move() must hold
>> a reference, otherwise something else would already be messed up.
>> If the folio is referenced, it will not be freed elsewhere, so we
>> can safely clear the folio's lru flag. As discussed with David
>> in [1], we should take the reference after testing the LRU flag,
>> not before.
>>
>> Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@redhat.com/ [1]
>> Signed-off-by: yangge <yangge1116@126.com>
>> ---
>>  mm/swap.c | 8 +++-----
>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/swap.c b/mm/swap.c
>> index 67a2467..6b83898 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -226,12 +226,10 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
>>  {
>>  	unsigned long flags;
>>
>> -	folio_get(folio);
>> -
>> -	if (on_lru && !folio_test_clear_lru(folio)) {
>> -		folio_put(folio);
>> +	if (on_lru && !folio_test_clear_lru(folio))
>>  		return;
>> -	}
>> +
>> +	folio_get(folio);
>>
>>  	if (disable_irq)
>>  		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);
>
> Ah, "mm/swap: remove boilerplate" reduced them to a single instance, good.
>
> You can throw in a
>
> 	VM_WARN_ON_ONCE(!folio_ref_count(folio));
>
> before the folio_get(folio)
>
> so we would realize when something very unexpected happens.

Ok, thanks.

> Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/swap.c b/mm/swap.c
index 67a2467..6b83898 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -226,12 +226,10 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
 {
 	unsigned long flags;
 
-	folio_get(folio);
-
-	if (on_lru && !folio_test_clear_lru(folio)) {
-		folio_put(folio);
+	if (on_lru && !folio_test_clear_lru(folio))
 		return;
-	}
+
+	folio_get(folio);
 
 	if (disable_irq)
 		local_lock_irqsave(&cpu_fbatches.lock_irq, flags);