
[v2,2/3] locking/csd_lock: Provide an indication of ongoing CSD-lock stall

Message ID 20240722133735.667161-2-neeraj.upadhyay@kernel.org (mailing list archive)
State Accepted
Commit e37e73641fac8e733e8800a6d2a53e35df200af1
Series CSD-lock diagnostics enhancements

Commit Message

Neeraj Upadhyay July 22, 2024, 1:37 p.m. UTC
From: "Paul E. McKenney" <paulmck@kernel.org>

If a CSD-lock stall goes on long enough, it will cause an RCU CPU
stall warning.  This additional warning provides much additional
console-log traffic and little additional information.  Therefore,
provide a new csd_lock_is_stuck() function that returns true if there
is an ongoing CSD-lock stall.  This function will be used by the RCU
CPU stall warnings to provide a one-line indication of the stall when
this function returns true.

[ neeraj.upadhyay: Apply Rik van Riel feedback. ]

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Imran Khan <imran.f.khan@oracle.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Leonardo Bras <leobras@redhat.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
---
 include/linux/smp.h |  6 ++++++
 kernel/smp.c        | 16 ++++++++++++++++
 2 files changed, 22 insertions(+)

Comments

Leonardo Bras July 31, 2024, 9:35 p.m. UTC | #1
On Mon, Jul 22, 2024 at 07:07:34PM +0530, neeraj.upadhyay@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> If a CSD-lock stall goes on long enough, it will cause an RCU CPU
> stall warning.  This additional warning provides much additional
> console-log traffic and little additional information.  Therefore,
> provide a new csd_lock_is_stuck() function that returns true if there
> is an ongoing CSD-lock stall.  This function will be used by the RCU
> CPU stall warnings to provide a one-line indication of the stall when
> this function returns true.

I think it would be nice to also add the RCU usage in this patch, since as 
it stands the function is declared but not used.

> 
> [ neeraj.upadhyay: Apply Rik van Riel feedback. ]
> 
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> Cc: Imran Khan <imran.f.khan@oracle.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Leonardo Bras <leobras@redhat.com>
> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
> Cc: Rik van Riel <riel@surriel.com>
> Signed-off-by: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
> ---
>  include/linux/smp.h |  6 ++++++
>  kernel/smp.c        | 16 ++++++++++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/include/linux/smp.h b/include/linux/smp.h
> index fcd61dfe2af3..3871bd32018f 100644
> --- a/include/linux/smp.h
> +++ b/include/linux/smp.h
> @@ -294,4 +294,10 @@ int smpcfd_prepare_cpu(unsigned int cpu);
>  int smpcfd_dead_cpu(unsigned int cpu);
>  int smpcfd_dying_cpu(unsigned int cpu);
>  
> +#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
> +bool csd_lock_is_stuck(void);
> +#else
> +static inline bool csd_lock_is_stuck(void) { return false; }
> +#endif
> +
>  #endif /* __LINUX_SMP_H */
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 81f7083a53e2..9385cc05de53 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -207,6 +207,19 @@ static int csd_lock_wait_getcpu(call_single_data_t *csd)
>  	return -1;
>  }
>  
> +static atomic_t n_csd_lock_stuck;
> +
> +/**
> + * csd_lock_is_stuck - Has a CSD-lock acquisition been stuck too long?
> + *
> + * Returns @true if a CSD-lock acquisition is stuck and has been stuck
> + * long enough for a "non-responsive CSD lock" message to be printed.
> + */
> +bool csd_lock_is_stuck(void)
> +{
> +	return !!atomic_read(&n_csd_lock_stuck);
> +}
> +
>  /*
>   * Complain if too much time spent waiting.  Note that only
>   * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
> @@ -228,6 +241,7 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
>  		cpu = csd_lock_wait_getcpu(csd);
>  		pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
>  			 *bug_id, raw_smp_processor_id(), cpu);
> +		atomic_dec(&n_csd_lock_stuck);
>  		return true;
>  	}
>  
> @@ -251,6 +265,8 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
>  	pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %lld ns for CPU#%02d %pS(%ps).\n",
>  		 firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), (s64)ts_delta,
>  		 cpu, csd->func, csd->info);
> +	if (firsttime)
> +		atomic_inc(&n_csd_lock_stuck);
>  	/*
>  	 * If the CSD lock is still stuck after 5 minutes, it is unlikely
>  	 * to become unstuck. Use a signed comparison to avoid triggering
> -- 
> 2.40.1
> 

IIUC we have a single atomic counter for the whole system, which is 
modified in csd_lock_wait_toolong() and read by the RCU stall warning code.

I don't think cache bouncing should be an issue, because in the worst 
case each CPU performs two modifications per csd_lock_timeout period 
(5 seconds by default).

Thanks!
Leo
Paul E. McKenney July 31, 2024, 10:08 p.m. UTC | #2
On Wed, Jul 31, 2024 at 06:35:35PM -0300, Leonardo Bras wrote:
> On Mon, Jul 22, 2024 at 07:07:34PM +0530, neeraj.upadhyay@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > If a CSD-lock stall goes on long enough, it will cause an RCU CPU
> > stall warning.  This additional warning provides much additional
> > console-log traffic and little additional information.  Therefore,
> > provide a new csd_lock_is_stuck() function that returns true if there
> > is an ongoing CSD-lock stall.  This function will be used by the RCU
> > CPU stall warnings to provide a one-line indication of the stall when
> > this function returns true.
> 
> I think it would be nice to also add the RCU usage in this patch, since as 
> it stands the function is declared but not used.

These are external functions, and the commit that uses this one is just a
few commits farther along in the stack.  Or do we now have some tool that
complains when an external function is not used anywhere?

> > [ . . . ]
> 
> IIUC we have a single atomic counter for the whole system, which is 
> modified in csd_lock_wait_toolong() and read by the RCU stall warning code.
> 
> I don't think cache bouncing should be an issue, because in the worst 
> case each CPU performs two modifications per csd_lock_timeout period 
> (5 seconds by default).

If it does become a problem, there are ways of taking care of it.
Just a little added complexity.  ;-)

> Thanks!

And thank you for looking this over!

							Thanx, Paul
Leonardo Bras Aug. 5, 2024, 9:42 p.m. UTC | #3
On Wed, Jul 31, 2024 at 03:08:29PM -0700, Paul E. McKenney wrote:
> On Wed, Jul 31, 2024 at 06:35:35PM -0300, Leonardo Bras wrote:
> > On Mon, Jul 22, 2024 at 07:07:34PM +0530, neeraj.upadhyay@kernel.org wrote:
> > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > 
> > > If a CSD-lock stall goes on long enough, it will cause an RCU CPU
> > > stall warning.  This additional warning provides much additional
> > > console-log traffic and little additional information.  Therefore,
> > > provide a new csd_lock_is_stuck() function that returns true if there
> > > is an ongoing CSD-lock stall.  This function will be used by the RCU
> > > CPU stall warnings to provide a one-line indication of the stall when
> > > this function returns true.
> > 
> > I think it would be nice to also add the RCU usage in this patch, since as 
> > it stands the function is declared but not used.
> 

Hi Paul,

> These are external functions, and the commit that uses this one is just a 
> few commits farther along in the stack.

Oh, I see. I may have received just part of this patchset.

I found it odd for a series of 3 to have a 4th patch, and did not think 
there could be more, so I did not check the mailing list. :)

>  Or do we now have some tool that complains
> if an external function is not used anywhere?

Not really; I was just interested in the patchset, but it made no sense to 
me to add a function and not use it. On top of that, it did not occur to 
me that the function was being used in a different patchset.

Thanks!
Leo



Patch

diff --git a/include/linux/smp.h b/include/linux/smp.h
index fcd61dfe2af3..3871bd32018f 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -294,4 +294,10 @@  int smpcfd_prepare_cpu(unsigned int cpu);
 int smpcfd_dead_cpu(unsigned int cpu);
 int smpcfd_dying_cpu(unsigned int cpu);
 
+#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
+bool csd_lock_is_stuck(void);
+#else
+static inline bool csd_lock_is_stuck(void) { return false; }
+#endif
+
 #endif /* __LINUX_SMP_H */
diff --git a/kernel/smp.c b/kernel/smp.c
index 81f7083a53e2..9385cc05de53 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -207,6 +207,19 @@  static int csd_lock_wait_getcpu(call_single_data_t *csd)
 	return -1;
 }
 
+static atomic_t n_csd_lock_stuck;
+
+/**
+ * csd_lock_is_stuck - Has a CSD-lock acquisition been stuck too long?
+ *
+ * Returns @true if a CSD-lock acquisition is stuck and has been stuck
+ * long enough for a "non-responsive CSD lock" message to be printed.
+ */
+bool csd_lock_is_stuck(void)
+{
+	return !!atomic_read(&n_csd_lock_stuck);
+}
+
 /*
  * Complain if too much time spent waiting.  Note that only
  * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
@@ -228,6 +241,7 @@  static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
 		cpu = csd_lock_wait_getcpu(csd);
 		pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
 			 *bug_id, raw_smp_processor_id(), cpu);
+		atomic_dec(&n_csd_lock_stuck);
 		return true;
 	}
 
@@ -251,6 +265,8 @@  static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
 	pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %lld ns for CPU#%02d %pS(%ps).\n",
 		 firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), (s64)ts_delta,
 		 cpu, csd->func, csd->info);
+	if (firsttime)
+		atomic_inc(&n_csd_lock_stuck);
 	/*
 	 * If the CSD lock is still stuck after 5 minutes, it is unlikely
 	 * to become unstuck. Use a signed comparison to avoid triggering