
[unstable] mm: rmap: abstract updating per-node and per-memcg stats fix

Message ID 49914517-dfc7-e784-fde0-0e08fafbecc2@google.com (mailing list archive)
State New
Series [unstable] mm: rmap: abstract updating per-node and per-memcg stats fix

Commit Message

Hugh Dickins June 12, 2024, 5:10 a.m. UTC
/proc/meminfo is showing ridiculously large numbers on some lines:
__folio_remove_rmap()'s __folio_mod_stat() should be subtracting!

Signed-off-by: Hugh Dickins <hughd@google.com>
---
A fix for folding into mm-unstable, not needed for 6.10-rc.

 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Yosry Ahmed June 12, 2024, 6:52 a.m. UTC | #1
On Tue, Jun 11, 2024 at 10:10 PM Hugh Dickins <hughd@google.com> wrote:
>
> /proc/meminfo is showing ridiculously large numbers on some lines:
> __folio_remove_rmap()'s __folio_mod_stat() should be subtracting!
>
> Signed-off-by: Hugh Dickins <hughd@google.com>

Reviewed-by: Yosry Ahmed <yosryahmed@google.com>

Thanks a lot for fixing this! I was just looking at a test failure
reported by the kernel test robot and caused by this [1].

Just to document my own stupidity here:

1. In [2], I sent a fix to use __mod_node_page_state() instead of
__lruvec_stat_mod_folio() in __folio_remove_rmap(). I made the same
mistake of replacing subtraction with addition.

2. In [3], I sent a v2 of that fix that kept the subtraction in
__folio_remove_rmap() correctly.

3. In [4], I sent a cleanup on top of the fix, and that cleanup
replaced the subtraction in __folio_remove_rmap() with an addition,
again.

Apparently, I just suck at subtraction :)

[1]https://lore.kernel.org/linux-mm/202406121026.579593f2-oliver.sang@intel.com/
[2]https://lore.kernel.org/lkml/20240506170024.202111-1-yosryahmed@google.com/
[3]https://lore.kernel.org/lkml/20240506192924.271999-1-yosryahmed@google.com/
[4]https://lore.kernel.org/lkml/20240506211333.346605-1-yosryahmed@google.com/

> ---
> A fix for folding into mm-unstable, not needed for 6.10-rc.
>
>  mm/rmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1567,7 +1567,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>                     list_empty(&folio->_deferred_list))
>                         deferred_split_folio(folio);
>         }
> -       __folio_mod_stat(folio, nr, nr_pmdmapped);
> +       __folio_mod_stat(folio, -nr, -nr_pmdmapped);
>
>         /*
>          * It would be tidy to reset folio_test_anon mapping when fully
> --
> 2.35.3
>
David Hildenbrand June 12, 2024, 7:23 a.m. UTC | #2
On 12.06.24 07:10, Hugh Dickins wrote:
> /proc/meminfo is showing ridiculously large numbers on some lines:
> __folio_remove_rmap()'s __folio_mod_stat() should be subtracting!
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
> A fix for folding into mm-unstable, not needed for 6.10-rc.
> 
>   mm/rmap.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1567,7 +1567,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>   		    list_empty(&folio->_deferred_list))
>   			deferred_split_folio(folio);
>   	}
> -	__folio_mod_stat(folio, nr, nr_pmdmapped);
> +	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
>   
>   	/*
>   	 * It would be tidy to reset folio_test_anon mapping when fully

Missed that detail, thanks!

Acked-by: David Hildenbrand <david@redhat.com>

Patch

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1567,7 +1567,7 @@  static __always_inline void __folio_remove_rmap(struct folio *folio,
 		    list_empty(&folio->_deferred_list))
 			deferred_split_folio(folio);
 	}
-	__folio_mod_stat(folio, nr, nr_pmdmapped);
+	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
 
 	/*
 	 * It would be tidy to reset folio_test_anon mapping when fully