Message ID | 20220704150514.48816-11-elver@google.com
State      | New, archived
Series     | perf/hw_breakpoint: Optimize for thousands of tasks
On Mon, Jul 4, 2022 at 8:07 AM Marco Elver <elver@google.com> wrote:
>
> Implement simple accessors to probe percpu-rwsem's locked state:
> percpu_is_write_locked(), percpu_is_read_locked().
>
> Signed-off-by: Marco Elver <elver@google.com>
> Reviewed-by: Dmitry Vyukov <dvyukov@google.com>

Acked-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
> v2:
> * New patch.
> ---
>  include/linux/percpu-rwsem.h  | 6 ++++++
>  kernel/locking/percpu-rwsem.c | 6 ++++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
> index 5fda40f97fe9..36b942b67b7d 100644
> --- a/include/linux/percpu-rwsem.h
> +++ b/include/linux/percpu-rwsem.h
> @@ -121,9 +121,15 @@ static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
>  	preempt_enable();
>  }
>
> +extern bool percpu_is_read_locked(struct percpu_rw_semaphore *);
>  extern void percpu_down_write(struct percpu_rw_semaphore *);
>  extern void percpu_up_write(struct percpu_rw_semaphore *);
>
> +static inline bool percpu_is_write_locked(struct percpu_rw_semaphore *sem)
> +{
> +	return atomic_read(&sem->block);
> +}
> +
>  extern int __percpu_init_rwsem(struct percpu_rw_semaphore *,
> 				const char *, struct lock_class_key *);
>
> diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
> index 5fe4c5495ba3..213d114fb025 100644
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -192,6 +192,12 @@ EXPORT_SYMBOL_GPL(__percpu_down_read);
>  	__sum;							\
>  })
>
> +bool percpu_is_read_locked(struct percpu_rw_semaphore *sem)
> +{
> +	return per_cpu_sum(*sem->read_count) != 0;
> +}
> +EXPORT_SYMBOL_GPL(percpu_is_read_locked);
> +
>  /*
>   * Return true if the modular sum of the sem->read_count per-CPU variable is
>   * zero. If this sum is zero, then it is stable due to the fact that if any
> --
> 2.37.0.rc0.161.g10f37bed90-goog
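For a sense of how these probes are meant to be used — checking lock
state without acquiring the lock, from a context that must not block —
a minimal illustrative sketch; the helper below is not part of the
patch:

	/*
	 * Illustrative only: returns true when neither readers nor a
	 * writer hold the semaphore, so a caller that must not block
	 * can skip its work rather than wait.
	 */
	static bool debug_sem_is_free(struct percpu_rw_semaphore *sem)
	{
		return !percpu_is_write_locked(sem) &&
		       !percpu_is_read_locked(sem);
	}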
On Mon, Jul 04, 2022 at 05:05:10PM +0200, Marco Elver wrote:
> +bool percpu_is_read_locked(struct percpu_rw_semaphore *sem)
> +{
> +	return per_cpu_sum(*sem->read_count) != 0;
> +}
> +EXPORT_SYMBOL_GPL(percpu_is_read_locked);

I don't think this is correct; read_count can have spurious increments.

If we look at __percpu_down_read_trylock(), it does roughly something
like this:

	this_cpu_inc(*sem->read_count);
	smp_mb();
	if (!sem->block)
		return true;
	this_cpu_dec(*sem->read_count);
	return false;

So percpu_is_read_locked() needs to ensure the read_count is non-zero
*and* that block is not set.

That said; I really dislike the whole _is_locked family with a passion.
Let me try and figure out what you need this for.
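Concretely, a fix satisfying both of Peter's conditions would look
roughly like the sketch below; this illustrates the stated requirement
and is not the actual v4 patch (which is not part of this thread):

	bool percpu_is_read_locked(struct percpu_rw_semaphore *sem)
	{
		/*
		 * A spurious read_count increment from a failing
		 * __percpu_down_read_trylock() can only be observed
		 * while sem->block is set, so only report "read locked"
		 * when readers are counted _and_ no writer holds block.
		 */
		return per_cpu_sum(*sem->read_count) != 0 &&
		       !atomic_read(&sem->block);
	}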
On Wed, 17 Aug 2022 at 14:48, Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, Jul 04, 2022 at 05:05:10PM +0200, Marco Elver wrote:
> > +bool percpu_is_read_locked(struct percpu_rw_semaphore *sem)
> > +{
> > +	return per_cpu_sum(*sem->read_count) != 0;
> > +}
> > +EXPORT_SYMBOL_GPL(percpu_is_read_locked);
>
> I don't think this is correct; read_count can have spurious increments.
>
> If we look at __percpu_down_read_trylock(), it does roughly something
> like this:
>
> 	this_cpu_inc(*sem->read_count);
> 	smp_mb();
> 	if (!sem->block)
> 		return true;
> 	this_cpu_dec(*sem->read_count);
> 	return false;
>
> So percpu_is_read_locked() needs to ensure the read_count is non-zero
> *and* that block is not set.

I shall go and fix. v4 incoming (if more comments before that, please
shout).

> That said; I really dislike the whole _is_locked family with a passion.
> Let me try and figure out what you need this for.

As in the other email, it's for the dbg_*() functions for kgdb's
benefit (avoiding deadlock if kgdb wants a breakpoint, while we're in
the process of handing out a breakpoint elsewhere and have the locks
taken).

Thanks,
-- Marco
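To make the deadlock-avoidance pattern Marco describes concrete, a
sketch of how a dbg_*() path could use these accessors; the function
and semaphore names below are illustrative assumptions, not code from
the series:

	/*
	 * Hypothetical sketch. Called from kgdb, which may have stopped
	 * the kernel while another CPU holds the breakpoint constraints
	 * locks; probe the lock state and bail out instead of blocking,
	 * since waiting here would deadlock the debugger.
	 */
	int dbg_reserve_bp_slot(struct perf_event *bp)
	{
		if (percpu_is_write_locked(&bp_cpuinfo_sem) ||
		    percpu_is_read_locked(&bp_cpuinfo_sem))
			return -1;	/* locks held elsewhere; give up */

		return __reserve_bp_slot(bp, bp->attr.bp_type);
	}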
diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 5fda40f97fe9..36b942b67b7d 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -121,9 +121,15 @@ static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
 	preempt_enable();
 }
 
+extern bool percpu_is_read_locked(struct percpu_rw_semaphore *);
 extern void percpu_down_write(struct percpu_rw_semaphore *);
 extern void percpu_up_write(struct percpu_rw_semaphore *);
 
+static inline bool percpu_is_write_locked(struct percpu_rw_semaphore *sem)
+{
+	return atomic_read(&sem->block);
+}
+
 extern int __percpu_init_rwsem(struct percpu_rw_semaphore *,
 				const char *, struct lock_class_key *);
 
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 5fe4c5495ba3..213d114fb025 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -192,6 +192,12 @@ EXPORT_SYMBOL_GPL(__percpu_down_read);
 	__sum;							\
 })
 
+bool percpu_is_read_locked(struct percpu_rw_semaphore *sem)
+{
+	return per_cpu_sum(*sem->read_count) != 0;
+}
+EXPORT_SYMBOL_GPL(percpu_is_read_locked);
+
 /*
  * Return true if the modular sum of the sem->read_count per-CPU variable is
  * zero. If this sum is zero, then it is stable due to the fact that if any
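For reference, the per_cpu_sum() helper whose tail is visible in the
second hunk's context sums a per-CPU counter over all possible CPUs;
its definition is along these lines (reproduced from memory, so treat
it as a sketch rather than the exact in-tree macro):

	#define per_cpu_sum(var)					\
	({								\
		typeof(var) __sum = 0;					\
		int cpu;						\
		compiletime_assert_atomic_type(__sum);			\
		/* Sum the per-CPU slots; no locking, so the result	\
		 * is only stable if concurrent updates are excluded	\
		 * by other means (e.g. sem->block being set).		\
		 */							\
		for_each_possible_cpu(cpu)				\
			__sum += per_cpu(var, cpu);			\
		__sum;							\
	})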