[v2] mm/vmstat: Defer the refresh_zone_stat_thresholds after all CPUs bringup

Message ID 1723443220-20623-1-git-send-email-ssengar@linux.microsoft.com (mailing list archive)
State: New
Series: [v2] mm/vmstat: Defer the refresh_zone_stat_thresholds after all CPUs bringup

Commit Message

Saurabh Singh Sengar Aug. 12, 2024, 6:13 a.m. UTC
The refresh_zone_stat_thresholds() function has two loops, which are
expensive for large numbers of CPUs and NUMA nodes.

Below is a rough estimate of the total iterations performed by these
loops, based on the number of NUMA nodes and CPUs.

Total number of iterations: nCPU * 2 * Numa * mCPU
Where:
 nCPU = total number of CPUs
 Numa = total number of NUMA nodes
 mCPU = mean value of total CPUs (e.g., 512 for 1024 total CPUs)

For the system under test with 16 NUMA nodes and 1024 CPUs, this
results in a substantial increase in the number of loop iterations
during boot-up when NUMA is enabled:

No NUMA = 1024*2*1*512  =   1,048,576 : Here refresh_zone_stat_thresholds
takes around 224 ms total for all the CPUs in the system under test.
16 NUMA = 1024*2*16*512 =  16,777,216 : Here refresh_zone_stat_thresholds
takes around 4.5 seconds total for all the CPUs in the system under test.
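
For reference, the per-call cost comes from the shape of
refresh_zone_stat_thresholds() itself, which makes two passes over
roughly (NUMA nodes x online CPUs) on every CPU-online event. A
simplified sketch of the two loops, abridged from mm/vmstat.c (details
trimmed; not the verbatim code):

	static void refresh_zone_stat_thresholds(void)
	{
		struct pglist_data *pgdat;
		struct zone *zone;
		int cpu;

		/* Pass 1: clear per-node thresholds: nodes * online CPUs */
		for_each_online_pgdat(pgdat)
			for_each_online_cpu(cpu)
				per_cpu_ptr(pgdat->per_cpu_nodestats,
					    cpu)->stat_threshold = 0;

		/* Pass 2: set per-zone thresholds: zones * online CPUs */
		for_each_populated_zone(zone) {
			int threshold = calculate_normal_threshold(zone);

			for_each_online_cpu(cpu)
				per_cpu_ptr(zone->per_cpu_zonestats,
					    cpu)->stat_threshold = threshold;
		}
	}

Since this runs once per onlined CPU while the count of online CPUs
grows during boot, the total work sums to roughly the
nCPU * 2 * Numa * mCPU figure above.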

Calling this for each CPU is expensive when there is a large number of
CPUs combined with multiple NUMA nodes. Fix this by deferring
refresh_zone_stat_thresholds() so that it runs only once, after all the
secondary CPUs are up. Also, register the DYN hooks to keep the
existing hotplug functionality intact (see the sketch below).
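
For reference, the DYN hooks referred to here are the dynamic CPU
hotplug callbacks that mm/vmstat.c already registers at boot; abridged
from init_mm_internals() in mm/vmstat.c (the exact code may differ by
kernel version):

	ret = cpuhp_setup_state_nocalls(CPUHP_MM_VMSTAT_DEAD, "mm/vmstat:dead",
					NULL, vmstat_cpu_dead);
	if (ret < 0)
		pr_err("vmstat: failed to register 'dead' hotplug state\n");

	ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "mm/vmstat:online",
					vmstat_cpu_online, vmstat_cpu_down_prep);
	if (ret < 0)
		pr_err("vmstat: failed to register 'online' hotplug state\n");

With the deferral in place, vmstat_cpu_online() still runs for every
CPU hotplugged after boot, so post-boot hotplug keeps refreshing the
thresholds as before.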

Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
---
[V2]
	- Move vmstat_late_init_done under CONFIG_SMP to fix
	  variable 'defined but not used' warning.

 mm/vmstat.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

Comments

Saurabh Singh Sengar Aug. 23, 2024, 9:30 a.m. UTC | #1
> -----Original Message-----
> From: Saurabh Sengar <ssengar@linux.microsoft.com>
> Sent: 12 August 2024 11:44
> To: akpm@linux-foundation.org; linux-mm@kvack.org; linux-
> kernel@vger.kernel.org
> Cc: Saurabh Singh Sengar <ssengar@microsoft.com>; wei.liu@kernel.org;
> srivatsa@csail.mit.edu
> Subject: [PATCH v2] mm/vmstat: Defer the refresh_zone_stat_thresholds after
> all CPUs bringup
> 
> [...]

CC: Mel Gorman and Christoph Lameter
Saurabh Singh Sengar Aug. 23, 2024, 9:32 a.m. UTC | #2
> -----Original Message-----
> From: Saurabh Singh Sengar <ssengar@microsoft.com>
> Sent: 23 August 2024 15:00
> To: Saurabh Sengar <ssengar@linux.microsoft.com>; akpm@linux-
> foundation.org; linux-mm@kvack.org; linux-kernel@vger.kernel.org
> Cc: wei.liu@kernel.org; srivatsa@csail.mit.edu; clameter@sgi.com;
> mgorman@techsingularity.net
> Subject: [EXTERNAL] RE: [PATCH v2] mm/vmstat: Defer the
> refresh_zone_stat_thresholds after all CPUs bringup
> 
> 
> 
> > [...]
> 
> CC: Mel Gorman and Christoph Lameter


Adding cl@linux.com instead of clameter@sgi.com for Christoph Lameter

- Saurabh
Saurabh Singh Sengar Sept. 19, 2024, 7:52 p.m. UTC | #3
> > > [...]
> >
> > CC: Mel Gorman and Christoph Lameter
> 
> 
> Adding cl@linux.com instead of clameter@sgi.com for Christoph Lameter
> 
> - Saurabh

Hi Andrew,

Can we get this merged in for the next kernel release?
Please let me know if there are any concerns with this patch.

Regards,
Saurabh
Anshuman Khandual Sept. 20, 2024, 6:58 a.m. UTC | #4
On 8/12/24 11:43, Saurabh Sengar wrote:
> [...]
> 
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4e2dc067a654..fa235c65c756 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1908,6 +1908,7 @@ static const struct seq_operations vmstat_op = {
>  #ifdef CONFIG_SMP
>  static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
>  int sysctl_stat_interval __read_mostly = HZ;
> +static int vmstat_late_init_done;
>  
>  #ifdef CONFIG_PROC_FS
>  static void refresh_vm_stats(struct work_struct *work)
> @@ -2110,7 +2111,8 @@ static void __init init_cpu_node_state(void)
>  
>  static int vmstat_cpu_online(unsigned int cpu)
>  {
> -	refresh_zone_stat_thresholds();
> +	if (vmstat_late_init_done)
> +		refresh_zone_stat_thresholds();
>  
>  	if (!node_state(cpu_to_node(cpu), N_CPU)) {
>  		node_set_state(cpu_to_node(cpu), N_CPU);
> @@ -2142,6 +2144,14 @@ static int vmstat_cpu_dead(unsigned int cpu)
>  	return 0;
>  }
>  
> +static int __init vmstat_late_init(void)
> +{
> +	refresh_zone_stat_thresholds();
> +	vmstat_late_init_done = 1;
> +
> +	return 0;
> +}
> +late_initcall(vmstat_late_init);
>  #endif
>  
>  struct workqueue_struct *mm_percpu_wq;

Is the late_initcall()-triggered vmstat_late_init() guaranteed to be
called before the last call into vmstat_cpu_online() during a normal
boot? Otherwise, refresh_zone_stat_thresholds() will never be called
unless there is a CPU online event later.
Andrew Morton Sept. 20, 2024, 8:16 a.m. UTC | #5
On Thu, 19 Sep 2024 19:52:45 +0000 Saurabh Singh Sengar <ssengar@microsoft.com> wrote:

> > > >
> > 
> > Adding cl@linux.com instead of clameter@sgi.com for Christoph Lameter
> > 
> > - Saurabh
> 
> Hi Andrew,
> 
> Can we get this merged in for the next kernel release?
> Please let me know if there are any concerns with this patch.
> 

Anshuman's review comment remains unaddressed:
https://lkml.kernel.org/r/b1dc2aa1-cd38-4f1f-89e9-6d009a619541@arm.com

Also, Christoph's observations from the v1 patch review haven't really
been addressed.

So it sounds to me that an alternative implementation should be
investigated?
Srivatsa S. Bhat Sept. 20, 2024, 9:14 a.m. UTC | #6
Hey Anshuman,

Long time... :-) Hope you are doing great!

On Fri, Sep 20, 2024 at 12:28:44PM +0530, Anshuman Khandual wrote:
[...] 
> > @@ -1908,6 +1908,7 @@ static const struct seq_operations vmstat_op = {
> >  #ifdef CONFIG_SMP
> >  static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
> >  int sysctl_stat_interval __read_mostly = HZ;
> > +static int vmstat_late_init_done;
> >  
> >  #ifdef CONFIG_PROC_FS
> >  static void refresh_vm_stats(struct work_struct *work)
> > @@ -2110,7 +2111,8 @@ static void __init init_cpu_node_state(void)
> >  
> >  static int vmstat_cpu_online(unsigned int cpu)
> >  {
> > -	refresh_zone_stat_thresholds();
> > +	if (vmstat_late_init_done)
> > +		refresh_zone_stat_thresholds();
> >  
> >  	if (!node_state(cpu_to_node(cpu), N_CPU)) {
> >  		node_set_state(cpu_to_node(cpu), N_CPU);
> > @@ -2142,6 +2144,14 @@ static int vmstat_cpu_dead(unsigned int cpu)
> >  	return 0;
> >  }
> >  
> > +static int __init vmstat_late_init(void)
> > +{
> > +	refresh_zone_stat_thresholds();
> > +	vmstat_late_init_done = 1;
> > +
> > +	return 0;
> > +}
> > +late_initcall(vmstat_late_init);
> >  #endif
> >  
> >  struct workqueue_struct *mm_percpu_wq;
> 
> Is the late_initcall()-triggered vmstat_late_init() guaranteed to be
> called before the last call into vmstat_cpu_online() during a normal
> boot? Otherwise, refresh_zone_stat_thresholds() will never be called
> unless there is a CPU online event later.

The vmstat_late_init() function itself calls
refresh_zone_stat_thresholds(). So, we don't need another CPU online
event to happen later just to invoke refresh_zone_stat_thresholds().
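
The boot ordering also works out: secondary CPUs are brought up from
kernel_init_freeable() before the initcalls run. Roughly (simplified
from init/main.c; the exact flow varies by kernel version):

	/*
	 * Simplified boot flow: every boot-time vmstat_cpu_online() call
	 * (which now skips the refresh) happens in smp_init(), which runs
	 * before do_initcalls(), where vmstat_late_init() performs the
	 * single refresh for all CPUs at late_initcall time.
	 */
	smp_init();		/* online secondary CPUs -> vmstat_cpu_online() */
	...
	do_basic_setup();	/* -> do_initcalls() -> vmstat_late_init() */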

Does that address your concern?

Regards,
Srivatsa
Microsoft Linux Systems Group
Srivatsa S. Bhat Sept. 20, 2024, 9:25 a.m. UTC | #7
On Fri, Sep 20, 2024 at 01:16:18AM -0700, Andrew Morton wrote:
> On Thu, 19 Sep 2024 19:52:45 +0000 Saurabh Singh Sengar <ssengar@microsoft.com> wrote:
> 
> > > > >
> > > 
> > > Adding cl@linux.com instead of clameter@sgi.com for Christoph Lameter
> > > 
> > > - Saurabh
> > 
> > Hi Andrew,
> > 
> > Can we get this merged in for the next kernel release?
> > Please let me know if there are any concerns with this patch.
> > 
> 
> Anshuman's review comment remains unaddressed:
> https://lkml.kernel.org/r/b1dc2aa1-cd38-4f1f-89e9-6d009a619541@arm.com
> 
> Also, Christoph's observations from the v1 patch review haven't really
> been addressed.
> 
> So it sounds to me that an alternative implementation should be
> investigated?

I believe Saurabh had a follow-up discussion in person with Christoph
regarding this patch, following our talk on this topic at LPC:
https://lpc.events/event/18/contributions/1817/

@Christoph, would you mind giving your Ack if this patch v2 looks good
to you, or kindly point out if there are any lingering concerns?

Thanks a lot!

Regards,
Srivatsa
Microsoft Linux Systems Group
Christoph Lameter (Ampere) Sept. 23, 2024, 8:17 p.m. UTC | #8
On Fri, 20 Sep 2024, Srivatsa S. Bhat wrote:

> @Christoph, would you mind giving your Ack if this patch v2 looks good
> to you, or kindly point out if there are any lingering concerns?

V2 looks good to me (uninitialized pcp values result in slow operation
but no other negative effects), and the late_initcall() is always
executed.

Acked-by: Christoph Lameter <cl@linux.com>
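
Christoph's point about uninitialized pcp values being slow but safe
can be seen in the stat-update path itself; abridged from
__mod_zone_page_state() in mm/vmstat.c (the exact code may differ by
kernel version):

	long x = delta + __this_cpu_read(*p);
	long t = __this_cpu_read(pcp->stat_threshold);

	/*
	 * Until refresh_zone_stat_thresholds() runs, stat_threshold is
	 * still 0, so every nonzero delta overflows the per-CPU diff and
	 * is folded straight into the global counter: slower, but never
	 * incorrect.
	 */
	if (unlikely(abs(x) > t)) {
		zone_page_state_add(x, zone, item);
		x = 0;
	}
	__this_cpu_write(*p, x);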
Srivatsa S. Bhat Sept. 24, 2024, 2:56 a.m. UTC | #9
On 24-09-2024 01:47, Christoph Lameter (Ampere) wrote:
> On Fri, 20 Sep 2024, Srivatsa S. Bhat wrote:
> 
>> @Christoph, would you mind giving your Ack if this patch v2 looks good
>> to you, or kindly point out if there are any lingering concerns?
> 
> V2 looks good to me (uninitialized pcp values result in slow operation
> but no other negative effects), and the late_initcall() is always
> executed.
> 
> Acked-by: Christoph Lameter <cl@linux.com>

Thanks a lot Christoph!

Andrew, could you please consider picking up the patch for the next release,
now that all the review comments have been addressed? Thank you very much!

Also, I'd like to add to this patch v2:

Reviewed-by: Srivatsa S. Bhat (Microsoft) <srivatsa@csail.mit.edu>
 
Regards,
Srivatsa
Saurabh Singh Sengar Sept. 24, 2024, 7:39 a.m. UTC | #10
On Mon, Sep 23, 2024 at 01:17:16PM -0700, Christoph Lameter (Ampere) wrote:
> On Fri, 20 Sep 2024, Srivatsa S. Bhat wrote:
> 
> > @Christoph, would you mind giving your Ack if this patch v2 looks good
> > to you, or kindly point out if there are any lingering concerns?
> 
> V2 looks good to me (uninitialized pcp values result in slow operation
> but no other negative effects), and the late_initcall() is always
> executed.
> 
> Acked-by: Christoph Lameter <cl@linux.com>

Thank you, Christoph. I truly appreciate your review and insights on this matter.
Looking forward to collaborating more in this space.

- Saurabh
Saurabh Singh Sengar Sept. 24, 2024, 7:40 a.m. UTC | #11
On Tue, Sep 24, 2024 at 08:26:12AM +0530, Srivatsa S. Bhat wrote:
> On 24-09-2024 01:47, Christoph Lameter (Ampere) wrote:
> > On Fri, 20 Sep 2024, Srivatsa S. Bhat wrote:
> > 
> >> @Christoph, would you mind giving your Ack if this patch v2 looks good
> >> to you, or kindly point out if there are any lingering concerns?
> > 
> > V2 looks good to me (uninitialized pcp values result in slow operation
> > but no other negative effects), and the late_initcall() is always
> > executed.
> > 
> > Acked-by: Christoph Lameter <cl@linux.com>
> 
> Thanks a lot Christoph!
> 
> Andrew, could you please consider picking up the patch for the next release,
> now that all the review comments have been addressed? Thank you very much!
> 
> Also, I'd like to add to this patch v2:
> 
> Reviewed-by: Srivatsa S. Bhat (Microsoft) <srivatsa@csail.mit.edu>

Thanks, Srivatsa!

- Saurabh

Patch

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4e2dc067a654..fa235c65c756 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1908,6 +1908,7 @@ static const struct seq_operations vmstat_op = {
 #ifdef CONFIG_SMP
 static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
 int sysctl_stat_interval __read_mostly = HZ;
+static int vmstat_late_init_done;
 
 #ifdef CONFIG_PROC_FS
 static void refresh_vm_stats(struct work_struct *work)
@@ -2110,7 +2111,8 @@ static void __init init_cpu_node_state(void)
 
 static int vmstat_cpu_online(unsigned int cpu)
 {
-	refresh_zone_stat_thresholds();
+	if (vmstat_late_init_done)
+		refresh_zone_stat_thresholds();
 
 	if (!node_state(cpu_to_node(cpu), N_CPU)) {
 		node_set_state(cpu_to_node(cpu), N_CPU);
@@ -2142,6 +2144,14 @@ static int vmstat_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
+static int __init vmstat_late_init(void)
+{
+	refresh_zone_stat_thresholds();
+	vmstat_late_init_done = 1;
+
+	return 0;
+}
+late_initcall(vmstat_late_init);
 #endif
 
 struct workqueue_struct *mm_percpu_wq;