
[v7,01/13] vmstat: allow_direct_reclaim should use zone_page_state_snapshot

Message ID 20230320180745.556821285@redhat.com (mailing list archive)
State New
Series fold per-CPU vmstats remotely

Commit Message

Marcelo Tosatti March 20, 2023, 6:03 p.m. UTC
A customer provided evidence indicating that a process
was stalled in direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   wait_event_killable() was called to wait for the condition
   allow_direct_reclaim(pgdat) to become true for the current node.
   allow_direct_reclaim(pgdat) examined the number of free pages
   on the node via zone_page_state(), which just returns the value
   in zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat values was caused by the per-CPU
   vmstat counters on nohz_full CPUs. Every vmstat increment/decrement
   is performed on a per-CPU counter first, and the pooled diffs are
   then folded into the zone's vmstat counter in a timely manner.
   However, on nohz_full CPUs (48 of the 52 CPUs on this customer's
   system) these pooled diffs were not folded once a CPU had no more
   events and went to sleep indefinitely.
   Inspecting the per-CPU vmstat counters showed a total of 69 counts
   that had not yet been folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number
   of free pages on the node via zone_page_state_snapshot(), which
   does take the pending per-CPU counts into account.
   kswapd therefore correctly saw the 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not do any work,
   because 69 was above the high watermark of 32.

Change allow_direct_reclaim() to use zone_page_state_snapshot(), which
gives a more precise view of the vmstat counters.

allow_direct_reclaim() is only called from try_to_free_pages(),
which is not a hot path.
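
For reference, the difference between the two accessors is roughly the
following (simplified from include/linux/vmstat.h; exact field names
vary across kernel versions). zone_page_state() reads only the global
counter, while zone_page_state_snapshot() also folds in the per-CPU
diffs that have not been flushed yet:

static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;
#endif
	return x;
}

static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	int cpu;

	/* Add the per-CPU deltas that have not been folded back yet. */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	if (x < 0)
		x = 0;
#endif
	return x;
}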

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Comments

Michal Hocko March 20, 2023, 6:21 p.m. UTC | #1
On Mon 20-03-23 15:03:33, Marcelo Tosatti wrote:
> A customer provided evidence indicating that a process
> was stalled in direct reclaim:
> 
>  - The process was trapped in throttle_direct_reclaim().
>    wait_event_killable() was called to wait for the condition
>    allow_direct_reclaim(pgdat) to become true for the current node.
>    allow_direct_reclaim(pgdat) examined the number of free pages
>    on the node via zone_page_state(), which just returns the value
>    in zone->vm_stat[NR_FREE_PAGES].
>
>  - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
>    However, the freelist on this node was not empty.
>
>  - This inconsistency in the vmstat values was caused by the per-CPU
>    vmstat counters on nohz_full CPUs. Every vmstat increment/decrement
>    is performed on a per-CPU counter first, and the pooled diffs are
>    then folded into the zone's vmstat counter in a timely manner.
>    However, on nohz_full CPUs (48 of the 52 CPUs on this customer's
>    system) these pooled diffs were not folded once a CPU had no more
>    events and went to sleep indefinitely.
>    Inspecting the per-CPU vmstat counters showed a total of 69 counts
>    that had not yet been folded into the zone's vmstat counter.
>
>  - In this situation, kswapd did not help the trapped process.
>    In pgdat_balanced(), zone_watermark_ok_safe() examined the number
>    of free pages on the node via zone_page_state_snapshot(), which
>    does take the pending per-CPU counts into account.
>    kswapd therefore correctly saw the 69 free pages.
>    Since zone->_watermark = {8, 20, 32}, kswapd did not do any work,
>    because 69 was above the high watermark of 32.
> 
> Change allow_direct_reclaim() to use zone_page_state_snapshot(), which
> gives a more precise view of the vmstat counters.
>
> allow_direct_reclaim() is only called from try_to_free_pages(),
> which is not a hot path.

Have you managed to test this patch to confirm it addresses the above
issue? It should but better double check that.

> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

The patch makes sense regardless but a note about testing should be
added.

Acked-by: Michal Hocko <mhocko@suse.com>

> 
> ---
> 
> Index: linux-vmstat-remote/mm/vmscan.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmscan.c
> +++ linux-vmstat-remote/mm/vmscan.c
> @@ -6861,7 +6861,7 @@ static bool allow_direct_reclaim(pg_data
>  			continue;
>  
>  		pfmemalloc_reserve += min_wmark_pages(zone);
> -		free_pages += zone_page_state(zone, NR_FREE_PAGES);
> +		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
>  	}
>  
>  	/* If there are no reserves (unexpected config) then do not throttle */
>
Marcelo Tosatti March 20, 2023, 6:32 p.m. UTC | #2
On Mon, Mar 20, 2023 at 07:21:04PM +0100, Michal Hocko wrote:
> On Mon 20-03-23 15:03:33, Marcelo Tosatti wrote:
> > A customer provided evidence indicating that a process
> > was stalled in direct reclaim:
> > 
> >  - The process was trapped in throttle_direct_reclaim().
> >    wait_event_killable() was called to wait for the condition
> >    allow_direct_reclaim(pgdat) to become true for the current node.
> >    allow_direct_reclaim(pgdat) examined the number of free pages
> >    on the node via zone_page_state(), which just returns the value
> >    in zone->vm_stat[NR_FREE_PAGES].
> >
> >  - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
> >    However, the freelist on this node was not empty.
> >
> >  - This inconsistency in the vmstat values was caused by the per-CPU
> >    vmstat counters on nohz_full CPUs. Every vmstat increment/decrement
> >    is performed on a per-CPU counter first, and the pooled diffs are
> >    then folded into the zone's vmstat counter in a timely manner.
> >    However, on nohz_full CPUs (48 of the 52 CPUs on this customer's
> >    system) these pooled diffs were not folded once a CPU had no more
> >    events and went to sleep indefinitely.
> >    Inspecting the per-CPU vmstat counters showed a total of 69 counts
> >    that had not yet been folded into the zone's vmstat counter.
> >
> >  - In this situation, kswapd did not help the trapped process.
> >    In pgdat_balanced(), zone_watermark_ok_safe() examined the number
> >    of free pages on the node via zone_page_state_snapshot(), which
> >    does take the pending per-CPU counts into account.
> >    kswapd therefore correctly saw the 69 free pages.
> >    Since zone->_watermark = {8, 20, 32}, kswapd did not do any work,
> >    because 69 was above the high watermark of 32.
> > 
> > Change allow_direct_reclaim() to use zone_page_state_snapshot(), which
> > gives a more precise view of the vmstat counters.
> >
> > allow_direct_reclaim() is only called from try_to_free_pages(),
> > which is not a hot path.
> 
> Have you managed to test this patch to confirm it addresses the above
> issue? It should but better double check that.
> 
> > Suggested-by: Michal Hocko <mhocko@suse.com>
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> The patch makes sense regardless but a note about testing should be
> added.
> 
> Acked-by: Michal Hocko <mhocko@suse.com>

Michal,

The patch has not been tested in the original setup where the problem
was found; however, I don't think it's easy to do that validation
(checking with the reporter anyway).

Perhaps one could find a synthetic reproducer.

It is pretty easy to see that, on an isolated nohz_full CPU, the
deferrable timer queued on it (the timer that should queue vmstat_update
on that CPU) does not fire for long periods.
This leaves the global stats stale, since the pending per-CPU counts
are not folded in for as long as the CPU has tick processing stopped.
That matches the data available.
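
For context, the fold is driven by a per-CPU deferrable delayed work,
roughly as below (simplified from mm/vmstat.c; details differ between
kernel versions). A deferrable timer does not wake a CPU whose tick is
stopped, which is why the diffs can stay pending on nohz_full CPUs:

/* Each CPU's vmstat diffs are folded into the global counters by
 * vmstat_update(), which re-queues itself while there is activity. */
static void vmstat_update(struct work_struct *w)
{
	if (refresh_cpu_vm_stats(true)) {
		/* Counters were updated; keep the worker running. */
		queue_delayed_work_on(smp_processor_id(), mm_percpu_wq,
				this_cpu_ptr(&vmstat_work),
				round_jiffies_relative(sysctl_stat_interval));
	}
}

/* ...and it is set up per CPU as *deferrable* work, e.g. in
 * start_shepherd_timer(): */
INIT_DEFERRABLE_WORK(per_cpu_ptr(&vmstat_work, cpu), vmstat_update);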

Thanks!
Michal Hocko March 22, 2023, 10:03 a.m. UTC | #3
On Mon 20-03-23 15:32:15, Marcelo Tosatti wrote:
> On Mon, Mar 20, 2023 at 07:21:04PM +0100, Michal Hocko wrote:
> > On Mon 20-03-23 15:03:33, Marcelo Tosatti wrote:
> > > A customer provided evidence indicating that a process
> > > was stalled in direct reclaim:
> > > 
> > >  - The process was trapped in throttle_direct_reclaim().
> > >    wait_event_killable() was called to wait for the condition
> > >    allow_direct_reclaim(pgdat) to become true for the current node.
> > >    allow_direct_reclaim(pgdat) examined the number of free pages
> > >    on the node via zone_page_state(), which just returns the value
> > >    in zone->vm_stat[NR_FREE_PAGES].
> > >
> > >  - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
> > >    However, the freelist on this node was not empty.
> > >
> > >  - This inconsistency in the vmstat values was caused by the per-CPU
> > >    vmstat counters on nohz_full CPUs. Every vmstat increment/decrement
> > >    is performed on a per-CPU counter first, and the pooled diffs are
> > >    then folded into the zone's vmstat counter in a timely manner.
> > >    However, on nohz_full CPUs (48 of the 52 CPUs on this customer's
> > >    system) these pooled diffs were not folded once a CPU had no more
> > >    events and went to sleep indefinitely.
> > >    Inspecting the per-CPU vmstat counters showed a total of 69 counts
> > >    that had not yet been folded into the zone's vmstat counter.
> > >
> > >  - In this situation, kswapd did not help the trapped process.
> > >    In pgdat_balanced(), zone_watermark_ok_safe() examined the number
> > >    of free pages on the node via zone_page_state_snapshot(), which
> > >    does take the pending per-CPU counts into account.
> > >    kswapd therefore correctly saw the 69 free pages.
> > >    Since zone->_watermark = {8, 20, 32}, kswapd did not do any work,
> > >    because 69 was above the high watermark of 32.
> > > 
> > > Change allow_direct_reclaim() to use zone_page_state_snapshot(), which
> > > gives a more precise view of the vmstat counters.
> > >
> > > allow_direct_reclaim() is only called from try_to_free_pages(),
> > > which is not a hot path.
> > 
> > Have you managed to test this patch to confirm it addresses the above
> > issue? It should but better double check that.
> > 
> > > Suggested-by: Michal Hocko <mhocko@suse.com>
> > > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> > 
> > The patch makes sense regardless but a note about testing should be
> > added.
> > 
> > Acked-by: Michal Hocko <mhocko@suse.com>
> 
> Michal,
> 
> The patch has not been tested in the original setup where the problem
> was found; however, I don't think it's easy to do that validation
> (checking with the reporter anyway).

This is a fair point and I would just add it to the changelog for
future reference.

Patch

Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6861,7 +6861,7 @@  static bool allow_direct_reclaim(pg_data
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	}
 
 	/* If there are no reserves (unexpected config) then do not throttle */