
percpu_counter: add percpu_counter_sum_all interface

Message ID 20221105014013.930636-1-shakeelb@google.com (mailing list archive)
State New
Series percpu_counter: add percpu_counter_sum_all interface

Commit Message

Shakeel Butt Nov. 5, 2022, 1:40 a.m. UTC
The percpu_counter is used for scenarios where performance is more
important than accuracy. For percpu_counter users who want more
accurate information in their slowpath, percpu_counter_sum() is
provided, which traverses all online CPUs to accumulate the data. The
reason it only needs to traverse online CPUs is that percpu_counter
implements a CPU offline callback which syncs the local data of the
offlined CPU.

However, there is a small race window between the online-CPU traversal
of percpu_counter_sum() and the CPU offline callback. The offline
callback has to traverse all the percpu_counters on the system to flush
the CPU-local data, which can be a lot of work. During that time, the
CPU which is going offline has already been published as offline to all
readers. So, while the offline callback is running, percpu_counter_sum()
can be called for a counter which still has state on the CPU going
offline. Since percpu_counter_sum() only traverses online CPUs, it will
skip that specific CPU, and the offline callback might not yet have
flushed the state of that specific percpu_counter on the offlined CPU.

Normally this is not an issue because percpu_counter users can tolerate
some inaccuracy for a small time window. However, a new user, i.e.
mm_struct on the cleanup path, wants to check the exact state of the
percpu_counter through check_mm(). For such users, this patch
introduces percpu_counter_sum_all(), which traverses all possible CPUs.

Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
 include/linux/percpu_counter.h |  6 ++++++
 kernel/fork.c                  |  2 +-
 lib/percpu_counter.c           | 29 +++++++++++++++++++++++------
 3 files changed, 30 insertions(+), 7 deletions(-)
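
For context, the offline callback the commit message refers to is the
CPU hotplug "dead" callback in lib/percpu_counter.c. A simplified
sketch of its shape (paraphrased; details may differ across kernel
versions):

static int percpu_counter_cpu_dead(unsigned int cpu)
{
	struct percpu_counter *fbc;

	spin_lock_irq(&percpu_counters_lock);
	/*
	 * One iteration per counter in the system: this loop is the
	 * window during which the CPU is already published as offline
	 * but its local deltas are not yet folded into fbc->count.
	 */
	list_for_each_entry(fbc, &percpu_counters, list) {
		s32 *pcount;

		raw_spin_lock(&fbc->lock);
		pcount = per_cpu_ptr(fbc->counters, cpu);
		fbc->count += *pcount;
		*pcount = 0;
		raw_spin_unlock(&fbc->lock);
	}
	spin_unlock_irq(&percpu_counters_lock);
	return 0;
}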

Comments

Andrew Morton Nov. 7, 2022, 9:05 p.m. UTC | #1
On Sat,  5 Nov 2022 01:40:13 +0000 Shakeel Butt <shakeelb@google.com> wrote:

> The percpu_counter is used for scenarios where performance is more
> important than accuracy. For percpu_counter users who want more
> accurate information in their slowpath, percpu_counter_sum() is
> provided, which traverses all online CPUs to accumulate the data. The
> reason it only needs to traverse online CPUs is that percpu_counter
> implements a CPU offline callback which syncs the local data of the
> offlined CPU.
> 
> However, there is a small race window between the online-CPU traversal
> of percpu_counter_sum() and the CPU offline callback. The offline
> callback has to traverse all the percpu_counters on the system to
> flush the CPU-local data, which can be a lot of work. During that
> time, the CPU which is going offline has already been published as
> offline to all readers. So, while the offline callback is running,
> percpu_counter_sum() can be called for a counter which still has state
> on the CPU going offline. Since percpu_counter_sum() only traverses
> online CPUs, it will skip that specific CPU, and the offline callback
> might not yet have flushed the state of that specific percpu_counter
> on the offlined CPU.

OK, got it, thanks.

> Normally this is not an issue because percpu_counter users can
> tolerate some inaccuracy for a small time window. However, a new user,
> i.e. mm_struct on the cleanup path, wants to check the exact state of
> the percpu_counter through check_mm(). For such users, this patch
> introduces percpu_counter_sum_all(), which traverses all possible
> CPUs.

And uses it in fork.c:check_mm()!

> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -756,7 +756,7 @@ static void check_mm(struct mm_struct *mm)
>  			 "Please make sure 'struct resident_page_types[]' is updated as well");
>  
>  	for (i = 0; i < NR_MM_COUNTERS; i++) {
> -		long x = percpu_counter_sum(&mm->rss_stat[i]);
> +		long x = percpu_counter_sum_all(&mm->rss_stat[i]);

check_mm() just became more expensive in some cases.  nr_possible_cpus
* 4.  I wonder if this is enough for people to start caring about.

check_mm() is presently non-optional and I'd be reluctant to change
this, given how commonly we see the "BUG: Bad rss-counter state"
getting reported (22 million hits in a google search!).

We could save a ton of that cost by running percpu_counter_sum() first,
then trying percpu_counter_sum_all() if percpu_counter_sum() indicated
an error.  This is only worth bothering about if the new check_mm()
cost is a concern.
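
A minimal sketch of that fallback in check_mm() (an illustration of the
suggestion, untested, not code from this series):

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		/* Cheap pass: online CPUs only. */
		long x = percpu_counter_sum(&mm->rss_stat[i]);

		/*
		 * A nonzero result may be a CPU-hotplug race rather
		 * than a real leak; re-check over all possible CPUs
		 * before reporting.
		 */
		if (unlikely(x))
			x = percpu_counter_sum_all(&mm->rss_stat[i]);

		if (unlikely(x))
			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld\n",
				 mm, resident_page_types[i], x);
	}

This keeps the common, counters-are-zero case as cheap as before and
pays the all-possible-CPUs cost only when a discrepancy is seen.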
Shakeel Butt Nov. 7, 2022, 9:19 p.m. UTC | #2
On Mon, Nov 7, 2022 at 1:05 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> [...]
>
> > > -             long x = percpu_counter_sum(&mm->rss_stat[i]);
> > > +             long x = percpu_counter_sum_all(&mm->rss_stat[i]);
>
> check_mm() just became more expensive in some cases.  nr_possible_cpus
> * 4.  I wonder if this is enough for people to start caring about.
>
> check_mm() is presently non-optional and I'd be reluctant to change
> this, given how commonly we see the "BUG: Bad rss-counter state"
> getting reported (22 million hits in a google search!).
>
> We could save a ton of that cost by running percpu_counter_sum() first,
> then trying percpu_counter_sum_all() if percpu_counter_sum() indicated
> an error.  This is only worth bothering about if the new check_mm()
> cost is a concern.
>

Yes, this makes much more sense. I had run hackbench on the original
patch and didn't see any significant difference. I will update this
and run some more perf benchmarks to make sure there is no regression
due to this change.

thanks,
Shakeel

Patch

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index bde6c4c1f405..a3aae8d57a42 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -45,6 +45,7 @@  void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
 void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
 			      s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
+s64 percpu_counter_sum_all(struct percpu_counter *fbc);
 int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
 void percpu_counter_sync(struct percpu_counter *fbc);
 
@@ -193,6 +194,11 @@  static inline s64 percpu_counter_sum(struct percpu_counter *fbc)
 	return percpu_counter_read(fbc);
 }
 
+static inline s64 percpu_counter_sum_all(struct percpu_counter *fbc)
+{
+	return percpu_counter_read(fbc);
+}
+
 static inline bool percpu_counter_initialized(struct percpu_counter *fbc)
 {
 	return true;
diff --git a/kernel/fork.c b/kernel/fork.c
index 9c32f593ef11..7d6f510cf397 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -756,7 +756,7 @@  static void check_mm(struct mm_struct *mm)
 			 "Please make sure 'struct resident_page_types[]' is updated as well");
 
 	for (i = 0; i < NR_MM_COUNTERS; i++) {
-		long x = percpu_counter_sum(&mm->rss_stat[i]);
+		long x = percpu_counter_sum_all(&mm->rss_stat[i]);
 
 		if (unlikely(x))
 			pr_alert("BUG: Bad rss-counter state mm:%p type:%s val:%ld\n",
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index ed610b75dc32..42f729c8e56c 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -117,11 +117,8 @@  void percpu_counter_sync(struct percpu_counter *fbc)
 }
 EXPORT_SYMBOL(percpu_counter_sync);
 
-/*
- * Add up all the per-cpu counts, return the result.  This is a more accurate
- * but much slower version of percpu_counter_read_positive()
- */
-s64 __percpu_counter_sum(struct percpu_counter *fbc)
+static s64 __percpu_counter_sum_mask(struct percpu_counter *fbc,
+			      const struct cpumask *cpu_mask)
 {
 	s64 ret;
 	int cpu;
@@ -129,15 +126,35 @@  s64 __percpu_counter_sum(struct percpu_counter *fbc)
 
 	raw_spin_lock_irqsave(&fbc->lock, flags);
 	ret = fbc->count;
-	for_each_online_cpu(cpu) {
+	for_each_cpu(cpu, cpu_mask) {
 		s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
 		ret += *pcount;
 	}
 	raw_spin_unlock_irqrestore(&fbc->lock, flags);
 	return ret;
 }
+
+/*
+ * Add up all the per-cpu counts, return the result.  This is a more accurate
+ * but much slower version of percpu_counter_read_positive()
+ */
+s64 __percpu_counter_sum(struct percpu_counter *fbc)
+{
+	return __percpu_counter_sum_mask(fbc, cpu_online_mask);
+}
 EXPORT_SYMBOL(__percpu_counter_sum);
 
+/*
+ * This is a slower version of percpu_counter_sum() as it traverses all
+ * possible cpus. Use this only in the cases where accurate data is needed
+ * in the presence of CPUs getting offlined.
+ */
+s64 percpu_counter_sum_all(struct percpu_counter *fbc)
+{
+	return __percpu_counter_sum_mask(fbc, cpu_possible_mask);
+}
+EXPORT_SYMBOL(percpu_counter_sum_all);
+
 int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,
 			  struct lock_class_key *key)
 {