[v5,4/9] percpu-refcount: Introduce percpu_ref_is_in_use()

Message ID 20180807225133.27221-5-bart.vanassche@wdc.com (mailing list archive)
State New, archived
Series blk-mq: Implement runtime power management

Commit Message

Bart Van Assche Aug. 7, 2018, 10:51 p.m. UTC
Introduce a function that makes it possible to determine whether a per-cpu
refcount is in use. This function will be used in a later patch to determine
whether or not any block layer requests are being executed.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Alan Stern <stern@rowland.harvard.edu>
---
 include/linux/percpu-refcount.h |  2 ++
 lib/percpu-refcount.c           | 40 +++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

Comments

Tejun Heo Aug. 8, 2018, 3:23 p.m. UTC | #1
Hello,

On Tue, Aug 07, 2018 at 03:51:28PM -0700, Bart Van Assche wrote:
> Introduce a function that makes it possible to determine whether a per-cpu
> refcount is in use. This function will be used in a later patch to determine
> whether or not any block layer requests are being executed.

I thought about it a bit and am having some difficulty convincing
myself this is necessary.  Switching a percpu_ref to atomic mode isn't
expensive - it's one spinlock cycle, an RCU wait and one sweep of the
percpu counters.  The most expensive part - the percpu sweep - needs
to be there even with the proposed optimization, and the wait doesn't
really matter as all it'll do is slightly delay the timer-based PM
operation, which can be overlapped with the propagation of
set_pm_only() anyway.

So, how about just doing the simple thing?  Switch it to atomic mode,
check the counter, and switch back to percpu mode afterwards.  If we
see any issues with that, we can try to optimize it later, but that
seems unlikely to me.

Thanks.
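
A minimal sketch of the approach suggested above, using the existing
percpu_ref API. The helper name and the ">1 means in use" convention
(the initial reference taken by percpu_ref_init() accounts for one
count) are illustrative assumptions, not part of the thread:

static bool percpu_ref_in_use_slow(struct percpu_ref *ref)
{
	bool in_use;

	/* Fold the percpu counters into ->count; includes the RCU wait. */
	percpu_ref_switch_to_atomic_sync(ref);

	/* The initial reference from percpu_ref_init() accounts for one. */
	in_use = atomic_long_read(&ref->count) > 1;

	/* Return to cheap percpu operation. */
	percpu_ref_switch_to_percpu(ref);

	return in_use;
}

As the reply below points out, callers of such a helper would still need
serialization against percpu_ref_kill() / percpu_ref_reinit().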
Bart Van Assche Aug. 9, 2018, 2:32 p.m. UTC | #2
On Wed, 2018-08-08 at 08:23 -0700, Tejun Heo wrote:
> On Tue, Aug 07, 2018 at 03:51:28PM -0700, Bart Van Assche wrote:
> > Introduce a function that makes it possible to determine whether a per-cpu
> > refcount is in use. This function will be used in a later patch to determine
> > whether or not any block layer requests are being executed.
> 
> I thought about it a bit and am having some difficulty convincing
> myself this is necessary.  Switching a percpu_ref to atomic mode isn't
> expensive - it's one spinlock cycle, an RCU wait and one sweep of the
> percpu counters.  The most expensive part - the percpu sweep - needs
> to be there even with the proposed optimization, and the wait doesn't
> really matter as all it'll do is slightly delay the timer-based PM
> operation, which can be overlapped with the propagation of
> set_pm_only() anyway.
> 
> So, how about just doing the simple thing?  Switch it to atomic mode,
> check the counter, and switch back to percpu mode afterwards.  If we
> see any issues with that, we can try to optimize it later, but that
> seems unlikely to me.

Hello Tejun,

Switching to atomic mode would require me to add a serialization mechanism
against blk_freeze_queue_start() and blk_mq_unfreeze_queue() since these
functions call percpu_ref_kill() and percpu_ref_reinit(). That is easy but
requires additional code. I will see whether I can implement an alternative
approach using blk_mq_queue_tag_busy_iter().

Thanks,

Bart.
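
A rough sketch of that alternative: count in-flight requests by
iterating over the allocated tags instead of inspecting q_usage_counter.
The callback signature matches busy_iter_fn as of this series; the
function names are illustrative assumptions:

static void blk_count_in_flight(struct blk_mq_hw_ctx *hctx,
				struct request *rq, void *priv,
				bool reserved)
{
	unsigned long *count = priv;

	(*count)++;
}

static bool blk_requests_in_flight(struct request_queue *q)
{
	unsigned long count = 0;

	/* Invokes the callback once for each request that holds a tag. */
	blk_mq_queue_tag_busy_iter(q, blk_count_in_flight, &count);

	return count > 0;
}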

Patch

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 009cdf3d65b6..dd247756d634 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -331,4 +331,6 @@  static inline bool percpu_ref_is_zero(struct percpu_ref *ref)
 	return !atomic_long_read(&ref->count);
 }
 
+bool percpu_ref_is_in_use(struct percpu_ref *ref);
+
 #endif
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9f96fa7bc000..1dcb47e2c561 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -369,3 +369,43 @@  void percpu_ref_reinit(struct percpu_ref *ref)
 	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
 }
 EXPORT_SYMBOL_GPL(percpu_ref_reinit);
+
+/**
+ * percpu_ref_is_in_use - verify whether or not a percpu refcount is in use
+ * @ref: percpu_ref to test
+ *
+ * For a percpu refcount that is in percpu mode, verify whether the reference
+ * count is above one. For a percpu refcount that is in atomic mode, verify
+ * whether the reference count is above zero. This function makes it possible
+ * to verify whether any references are held on a percpu refcount that is
+ * switched between atomic and percpu mode with percpu_ref_reinit() /
+ * percpu_ref_kill().
+ *
+ * This function is safe to call as long as @ref is between init and exit. It
+ * is the responsibility of the caller to handle percpu_ref_get() and
+ * percpu_ref_put() calls that occur concurrently with this function.
+ */
+bool percpu_ref_is_in_use(struct percpu_ref *ref)
+{
+	unsigned long __percpu *percpu_count;
+	unsigned long sum = 0;
+	int cpu;
+	unsigned long flags;
+
+	/* Obtain percpu_ref_switch_lock to serialize against mode switches. */
+	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
+	rcu_read_lock_sched();
+	if (__ref_is_percpu(ref, &percpu_count)) {
+		for_each_possible_cpu(cpu)
+			sum += *per_cpu_ptr(percpu_count, cpu);
+	}
+	rcu_read_unlock_sched();
+	sum += atomic_long_read(&ref->count);
+	sum &= ~PERCPU_COUNT_BIAS;
+	sum += (ref->percpu_count_ptr & __PERCPU_REF_DEAD);
+	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
+
+	WARN_ON_ONCE(sum == 0);
+	return sum > 1;
+}
+EXPORT_SYMBOL_GPL(percpu_ref_is_in_use);
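
Per the commit message, the intended user is the blk-mq runtime PM code
in a later patch of this series. A hypothetical call site, for
illustration only (the function name and the -EBUSY policy are
assumptions, not part of this patch):

static int blk_pre_runtime_suspend_sketch(struct request_queue *q)
{
	/* Refuse to suspend while any request holds q_usage_counter. */
	if (percpu_ref_is_in_use(&q->q_usage_counter))
		return -EBUSY;

	return 0;
}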