
[v2] mm: vmscan.c: fix OOM on swap stress test

Message ID 20240905-lru-flag-v2-1-8a2d9046c594@kernel.org (mailing list archive)
State New
Series [v2] mm: vmscan.c: fix OOM on swap stress test

Commit Message

Chris Li Sept. 5, 2024, 8:08 a.m. UTC
I found a regression on mm-unstable during my swap stress test,
using tmpfs to compile Linux. The test OOMs very soon after make
spawns many cc processes.

It bisects down to commit 33dfe9204f29b415bbc0abb1a50642d1ba94f5e9
("mm/gup: clear the LRU flag of a page before adding to LRU batch").

Yu Zhao proposed the fix: "I think this is one of the potential side
effects -- Hugh mentioned earlier about isolate_lru_folios():"

I tested it and the swap stress test no longer OOMs.
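
For illustration only, here is a minimal stand-alone model of what the
one-line change does (folio_model and ineligible() are made-up names for
this sketch, not kernel API): a folio whose LRU flag has been cleared,
e.g. because it sits in a per-CPU LRU add batch after 33dfe9204f29, is
now treated as ineligible by sort_folio() and rotated instead of scanned.

    #include <stdbool.h>
    #include <stdio.h>

    struct folio_model {
            bool lru;       /* models folio_test_lru() */
            int zone;       /* models folio_zonenum() */
    };

    /* models the "ineligible" check in sort_folio() after this patch */
    static bool ineligible(const struct folio_model *folio, int reclaim_idx)
    {
            return !folio->lru || folio->zone > reclaim_idx;
    }

    int main(void)
    {
            struct folio_model in_batch = { .lru = false, .zone = 0 };
            struct folio_model on_lru   = { .lru = true,  .zone = 0 };

            /* the in-batch folio (LRU flag cleared) is reported ineligible */
            printf("in_batch ineligible: %d\n", ineligible(&in_batch, 1));
            printf("on_lru   ineligible: %d\n", ineligible(&on_lru, 1));
            return 0;
    }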

Link: https://lore.kernel.org/r/CAOUHufYi9h0kz5uW3LHHS3ZrVwEq-kKp8S6N-MZUmErNAXoXmw@mail.gmail.com/
Fixes: 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding to LRU batch")
Suggested-by: Yu Zhao <yuzhao@google.com>
Suggested-by: Hugh Dickins <hughd@google.com>
Tested-by: Chris Li <chrisl@kernel.org>
Closes: https://lore.kernel.org/all/CAF8kJuNP5iTj2p07QgHSGOJsiUfYpJ2f4R1Q5-3BN9JiD9W_KA@mail.gmail.com/
Signed-off-by: Chris Li <chrisl@kernel.org>
---
Changes in v2:
- Add Closes tag suggested by Yu and Thorsten.
- Link to v1: https://lore.kernel.org/r/20240904-lru-flag-v1-1-36638d6a524c@kernel.org
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


---
base-commit: 756ca36d643324d028b325a170e73e392b9590cd
change-id: 20240904-lru-flag-2af2f955740e

Best regards,

Comments

Chris Li Sept. 24, 2024, 9:23 p.m. UTC | #1
I forgot to CC stable on this fix.

Chris

On Thu, Sep 5, 2024 at 1:08 AM Chris Li <chrisl@kernel.org> wrote:
>
> I found a regression on mm-unstable during my swap stress test,
> using tmpfs to compile Linux. The test OOMs very soon after make
> spawns many cc processes.
>
> It bisects down to commit 33dfe9204f29b415bbc0abb1a50642d1ba94f5e9
> ("mm/gup: clear the LRU flag of a page before adding to LRU batch").
>
> Yu Zhao proposed the fix: "I think this is one of the potential side
> effects -- Hugh mentioned earlier about isolate_lru_folios():"
>
> I tested it and the swap stress test no longer OOMs.
>
> Link: https://lore.kernel.org/r/CAOUHufYi9h0kz5uW3LHHS3ZrVwEq-kKp8S6N-MZUmErNAXoXmw@mail.gmail.com/
> Fixes: 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding to LRU batch")
> Suggested-by: Yu Zhao <yuzhao@google.com>
> Suggested-by: Hugh Dickins <hughd@google.com>
> Tested-by: Chris Li <chrisl@kernel.org>
> Closes: https://lore.kernel.org/all/CAF8kJuNP5iTj2p07QgHSGOJsiUfYpJ2f4R1Q5-3BN9JiD9W_KA@mail.gmail.com/
> Signed-off-by: Chris Li <chrisl@kernel.org>
> ---
> Changes in v2:
> - Add Closes tag suggested by Yu and Thorsten.
> - Link to v1: https://lore.kernel.org/r/20240904-lru-flag-v1-1-36638d6a524c@kernel.org
> ---
>  mm/vmscan.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a9b6a8196f95..96abf4a52382 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4323,7 +4323,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>         }
>
>         /* ineligible */
> -       if (zone > sc->reclaim_idx) {
> +       if (!folio_test_lru(folio) || zone > sc->reclaim_idx) {
>                 gen = folio_inc_gen(lruvec, folio, false);
>                 list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
>                 return true;
>
> ---
> base-commit: 756ca36d643324d028b325a170e73e392b9590cd
> change-id: 20240904-lru-flag-2af2f955740e
>
> Best regards,
> --
> Chris Li <chrisl@kernel.org>
>
Greg KH Sept. 25, 2024, 6:43 a.m. UTC | #2
On Tue, Sep 24, 2024 at 02:23:51PM -0700, Chris Li wrote:
> I forgot to CC stable on this fix.

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a9b6a8196f95..96abf4a52382 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4323,7 +4323,7 @@  static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* ineligible */
-	if (zone > sc->reclaim_idx) {
+	if (!folio_test_lru(folio) || zone > sc->reclaim_idx) {
 		gen = folio_inc_gen(lruvec, folio, false);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;