Message ID | 20250216084109.3109837-1-leonylgao@gmail.com (mailing list archive)
---|---
State | New
Series | [v3] rcu/cpu_stall_cputime: fix the hardirq count for x86 architecture
Hi Yongliang,

On Sun, Feb 16, 2025 at 04:41:09PM +0800, Yongliang Gao wrote:
> From: Yongliang Gao <leonylgao@tencent.com>
>
> When counting the number of hardirqs in the x86 architecture,
> it is essential to add arch_irq_stat_cpu to ensure accuracy.
>
> For example, a CPU loop within the rcu_read_lock function.
>
> Before:
> [   70.910184] rcu: INFO: rcu_preempt self-detected stall on CPU
> [   70.910436] rcu:     3-....: (4999 ticks this GP) idle=***
> [   70.910711] rcu:              hardirqs   softirqs   csw/system
> [   70.910870] rcu:     number:        0        657            0
> [   70.911024] rcu:    cputime:        0          0         2498   ==> 2498(ms)
> [   70.911278] rcu:     (t=5001 jiffies g=3677 q=29 ncpus=8)
>
> After:
> [   68.046132] rcu: INFO: rcu_preempt self-detected stall on CPU
> [   68.046354] rcu:     2-....: (4999 ticks this GP) idle=***
> [   68.046628] rcu:              hardirqs   softirqs   csw/system
> [   68.046793] rcu:     number:     2498        663            0
> [   68.046951] rcu:    cputime:        0          0         2496   ==> 2496(ms)
> [   68.047244] rcu:     (t=5000 jiffies g=3825 q=4 ncpus=8)
>
> Fixes: be42f00b73a0 ("rcu: Add RCU stall diagnosis information")
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202501090842.SfI6QPGS-lkp@intel.com/
> Signed-off-by: Yongliang Gao <leonylgao@tencent.com>
> Reviewed-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
>

Thank you for the patch, I have one minor question to x86 maintainers
(Cced): does it make sense to fold the sum of arch_irq_stat_cpu() into
kstat_cpu_irqs_sum()?

Regards,
Boqun

> ---
> Changes from v2:
> - Add Reviewed-by 'Neeraj Upadhyay' information
> - Link to v2: https://lore.kernel.org/all/20250109024652.1342595-1-leonylgao@gmail.com
> Changes from v1:
> - Fix compilation error when using arm64-allnoconfig/riscv-randconfig. [kernel test robot]
> - Link to v1: https://lore.kernel.org/r/20250108065716.2888148-1-leonylgao%40gmail.com
> ---
> ---
>  kernel/rcu/tree.c       | 10 +++++++---
>  kernel/rcu/tree.h       |  2 +-
>  kernel/rcu/tree_stall.h |  4 ++--
>  3 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 475f31deed14..a0dab5923d03 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -781,6 +781,10 @@ static int rcu_watching_snap_save(struct rcu_data *rdp)
>  	return 0;
>  }
>
> +#ifndef arch_irq_stat_cpu
> +#define arch_irq_stat_cpu(cpu) 0
> +#endif
> +
>  /*
>   * Returns positive if the specified CPU has passed through a quiescent state
>   * by virtue of being in or having passed through an dynticks idle state since
> @@ -916,9 +920,9 @@ static int rcu_watching_snap_recheck(struct rcu_data *rdp)
>  	rsrp->cputime_irq     = kcpustat_field(kcsp, CPUTIME_IRQ, cpu);
>  	rsrp->cputime_softirq = kcpustat_field(kcsp, CPUTIME_SOFTIRQ, cpu);
>  	rsrp->cputime_system  = kcpustat_field(kcsp, CPUTIME_SYSTEM, cpu);
> -	rsrp->nr_hardirqs = kstat_cpu_irqs_sum(rdp->cpu);
> -	rsrp->nr_softirqs = kstat_cpu_softirqs_sum(rdp->cpu);
> -	rsrp->nr_csw = nr_context_switches_cpu(rdp->cpu);
> +	rsrp->nr_hardirqs = kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu);
> +	rsrp->nr_softirqs = kstat_cpu_softirqs_sum(cpu);
> +	rsrp->nr_csw = nr_context_switches_cpu(cpu);
>  	rsrp->jiffies = jiffies;
>  	rsrp->gp_seq = rdp->gp_seq;
>  }
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index a9a811d9d7a3..1bba2225e744 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -168,7 +168,7 @@ struct rcu_snap_record {
>  	u64		cputime_irq;	/* Accumulated cputime of hard irqs */
>  	u64		cputime_softirq;/* Accumulated cputime of soft irqs */
>  	u64		cputime_system; /* Accumulated cputime of kernel tasks */
> -	unsigned long	nr_hardirqs;	/* Accumulated number of hard irqs */
> +	u64		nr_hardirqs;	/* Accumulated number of hard irqs */
>  	unsigned int	nr_softirqs;	/* Accumulated number of soft irqs */
>  	unsigned long long nr_csw;	/* Accumulated number of task switches */
>  	unsigned long	jiffies;	/* Track jiffies value */
> diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
> index 925fcdad5dea..56b21219442b 100644
> --- a/kernel/rcu/tree_stall.h
> +++ b/kernel/rcu/tree_stall.h
> @@ -435,8 +435,8 @@ static void print_cpu_stat_info(int cpu)
>  	rsr.cputime_system  = kcpustat_field(kcsp, CPUTIME_SYSTEM, cpu);
>
>  	pr_err("\t         hardirqs   softirqs   csw/system\n");
> -	pr_err("\t number: %8ld %10d %12lld\n",
> -		kstat_cpu_irqs_sum(cpu) - rsrp->nr_hardirqs,
> +	pr_err("\t number: %8lld %10d %12lld\n",
> +		kstat_cpu_irqs_sum(cpu) + arch_irq_stat_cpu(cpu) - rsrp->nr_hardirqs,
>  		kstat_cpu_softirqs_sum(cpu) - rsrp->nr_softirqs,
>  		nr_context_switches_cpu(cpu) - rsrp->nr_csw);
>  	pr_err("\tcputime: %8lld %10lld %12lld ==> %d(ms)\n",
> --
> 2.39.3
>
>
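[Editorial note] For readers following Boqun's question above, here is a minimal,
purely illustrative sketch of what folding the architecture-specific count into
kstat_cpu_irqs_sum() could look like. It is not part of the patch and not current
kernel code: the placement in include/linux/kernel_stat.h, the widened u64 return
type, and the generic fallback define are assumptions made for illustration only.

```c
/*
 * Illustrative sketch only -- NOT the patch above and NOT current kernel
 * code.  It shows the idea behind the question: fold the arch-private
 * interrupt counters into the generic helper, so callers such as the RCU
 * stall code no longer need to add arch_irq_stat_cpu() themselves.
 */

/* Assumed generic fallback for architectures without arch_irq_stat_cpu(). */
#ifndef arch_irq_stat_cpu
#define arch_irq_stat_cpu(cpu)	0
#endif

/* Hypothetical variant of the helper in include/linux/kernel_stat.h. */
static inline u64 kstat_cpu_irqs_sum(unsigned int cpu)
{
	/* Generic per-IRQ counters plus arch-private ones (e.g. x86 APIC timer, IPIs). */
	return kstat_cpu(cpu).irqs_sum + arch_irq_stat_cpu(cpu);
}
```

With a change along these lines, the rsrp->nr_hardirqs snapshot and
print_cpu_stat_info() could keep calling kstat_cpu_irqs_sum() alone; whether the
extra arch sum is acceptable for the helper's other callers is exactly the
question raised in the reply above.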