
[PATCHv2,2/4] kernel/watchdog_hld: clarify the condition in hardlockup_detector_event_create()

Message ID 20210923140951.35902-3-kernelfans@gmail.com (mailing list archive)
State New, archived
Series watchdog_hld cleanup and async model for arm64

Commit Message

Pingfan Liu Sept. 23, 2021, 2:09 p.m. UTC
As for the context, there are two arguments for changing
debug_smp_processor_id() to is_percpu_thread():

  -1. watchdog_ev is percpu, and migration would defeat any attempt to
bind a watchdog_ev to a cpu by wrapping this function in a
preempt_disable()/preempt_enable() pair.

  -2. hardlockup_detector_event_create() indirectly calls
kmem_cache_alloc_node(), which may block.

So spell out the really intended context with "is_percpu_thread()".
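
For reference, the check being introduced, is_percpu_thread()
(include/linux/sched.h), tests that the current task is pinned to a single
CPU and that its affinity cannot be changed from userspace; its definition
is roughly:

	static inline bool is_percpu_thread(void)
	{
	#ifdef CONFIG_SMP
		/* bound to one CPU and userspace may not change the affinity */
		return (current->flags & PF_NO_SETAFFINITY) &&
			(current->nr_cpus_allowed == 1);
	#else
		return true;
	#endif
	}

That is exactly the condition this patch wants to assert: the caller must be
a kthread permanently bound to one CPU, so raw_smp_processor_id() stays
stable even though the function may sleep while allocating memory.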

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Wang Qing <wangqing@vivo.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Santosh Sivaraj <santosh@fossix.org>
Cc: linux-arm-kernel@lists.infradead.org
To: linux-kernel@vger.kernel.org
---
 kernel/watchdog_hld.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Petr Mladek Oct. 4, 2021, 12:32 p.m. UTC | #1
On Thu 2021-09-23 22:09:49, Pingfan Liu wrote:
> As for the context, there are two arguments to change
> debug_smp_processor_id() to is_percpu_thread().
> 
>   -1. watchdog_ev is percpu, and migration will frustrate the attempt
> which try to bind a watchdog_ev to a cpu by protecting this func inside
> the pair of preempt_disable()/preempt_enable().
> 
>   -2. hardlockup_detector_event_create() indirectly calls
> kmem_cache_alloc_node(), which is blockable.
> 
> So here, spelling out the really planned context "is_percpu_thread()".

The description is pretty hard to understand. I would suggest
something like:

Subject: kernel/watchdog_hld: Ensure CPU-bound context when creating
hardlockup detector event

hardlockup_detector_event_create() should create perf_event on the
current CPU. Preemption could not get disabled because
perf_event_create_kernel_counter() allocates memory. Instead,
the CPU locality is achieved by processing the code in a per-CPU
bound kthread.

Add a check to prevent mistakes when calling the code in another
code path.

> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> Cc: Petr Mladek <pmladek@suse.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Wang Qing <wangqing@vivo.com>
> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
> Cc: Santosh Sivaraj <santosh@fossix.org>
> Cc: linux-arm-kernel@lists.infradead.org
> To: linux-kernel@vger.kernel.org
> ---
>  kernel/watchdog_hld.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
> index 247bf0b1582c..df010df76576 100644
> --- a/kernel/watchdog_hld.c
> +++ b/kernel/watchdog_hld.c
> @@ -165,10 +165,13 @@ static void watchdog_overflow_callback(struct perf_event *event,
>  
>  static int hardlockup_detector_event_create(void)
>  {
> -	unsigned int cpu = smp_processor_id();
> +	unsigned int cpu;
>  	struct perf_event_attr *wd_attr;
>  	struct perf_event *evt;
>  
> +	/* This function plans to execute in cpu bound kthread */

This does not explain why it is needed. I suggest something like:

	/*
	 * Preemption is not disabled because memory will be allocated.
	 * Ensure CPU-locality by calling this in per-CPU kthread.
	 */


> +	WARN_ON(!is_percpu_thread());
> +	cpu = raw_smp_processor_id();
>  	wd_attr = &wd_hw_attr;
>  	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
>  

Otherwise the change looks good to me.

Best Regards,
Petr
Pingfan Liu Oct. 8, 2021, 4:11 a.m. UTC | #2
On Mon, Oct 04, 2021 at 02:32:47PM +0200, Petr Mladek wrote:
> On Thu 2021-09-23 22:09:49, Pingfan Liu wrote:
> > As for the context, there are two arguments to change
> > debug_smp_processor_id() to is_percpu_thread().
> > 
> >   -1. watchdog_ev is percpu, and migration will frustrate the attempt
> > which try to bind a watchdog_ev to a cpu by protecting this func inside
> > the pair of preempt_disable()/preempt_enable().
> > 
> >   -2. hardlockup_detector_event_create() indirectly calls
> > kmem_cache_alloc_node(), which is blockable.
> > 
> > So here, spelling out the really planned context "is_percpu_thread()".
> 
> The description is pretty hard to understand. I would suggest
> something like:
> 
> Subject: kernel/watchdog_hld: Ensure CPU-bound context when creating
> hardlockup detector event
> 
> hardlockup_detector_event_create() should create perf_event on the
> current CPU. Preemption could not get disabled because
> perf_event_create_kernel_counter() allocates memory. Instead,
> the CPU locality is achieved by processing the code in a per-CPU
> bound kthread.
> 
> Add a check to prevent mistakes when calling the code in another
> code path.
> 
Thanks for that. I will use it.

> > Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> > Cc: Petr Mladek <pmladek@suse.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Wang Qing <wangqing@vivo.com>
> > Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
> > Cc: Santosh Sivaraj <santosh@fossix.org>
> > Cc: linux-arm-kernel@lists.infradead.org
> > To: linux-kernel@vger.kernel.org
> > ---
> >  kernel/watchdog_hld.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
> > index 247bf0b1582c..df010df76576 100644
> > --- a/kernel/watchdog_hld.c
> > +++ b/kernel/watchdog_hld.c
> > @@ -165,10 +165,13 @@ static void watchdog_overflow_callback(struct perf_event *event,
> >  
> >  static int hardlockup_detector_event_create(void)
> >  {
> > -	unsigned int cpu = smp_processor_id();
> > +	unsigned int cpu;
> >  	struct perf_event_attr *wd_attr;
> >  	struct perf_event *evt;
> >  
> > +	/* This function plans to execute in cpu bound kthread */
> 
> This does not explain why it is needed. I suggest something like:
> 
> 	/*
> 	 * Preemption is not disabled because memory will be allocated.
> 	 * Ensure CPU-locality by calling this in per-CPU kthread.
> 	 */
> 
It sounds good. I will use it.

> 
> > +	WARN_ON(!is_percpu_thread());
> > +	cpu = raw_smp_processor_id();
> >  	wd_attr = &wd_hw_attr;
> >  	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
> >  
> 
> Otherwise the change looks good to me.
> 
Thanks for your help.

Regards,

	Pingfan

Patch

diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 247bf0b1582c..df010df76576 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -165,10 +165,13 @@ static void watchdog_overflow_callback(struct perf_event *event,
 
 static int hardlockup_detector_event_create(void)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu;
 	struct perf_event_attr *wd_attr;
 	struct perf_event *evt;
 
+	/* This function plans to execute in cpu bound kthread */
+	WARN_ON(!is_percpu_thread());
+	cpu = raw_smp_processor_id();
 	wd_attr = &wd_hw_attr;
 	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
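
For reference, with the comment wording suggested in the review folded in,
the prologue of hardlockup_detector_event_create() would read roughly as
follows. This is a sketch of the expected follow-up, combining the posted
hunk with the reviewer's suggested comment, not code taken from the posted
series:

	static int hardlockup_detector_event_create(void)
	{
		unsigned int cpu;
		struct perf_event_attr *wd_attr;
		struct perf_event *evt;

		/*
		 * Preemption is not disabled because memory will be allocated.
		 * Ensure CPU-locality by calling this in a per-CPU kthread.
		 */
		WARN_ON(!is_percpu_thread());
		cpu = raw_smp_processor_id();
		wd_attr = &wd_hw_attr;
		wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
		...
	}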