Message ID | 20221122185534.308643-2-dima@arista.com
---|---
State | Superseded
Series | net/tcp: Dynamically disable TCP-MD5 static key
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Guessing tree name failed - patch did not apply, async |
On Tue, Nov 22, 2022 at 06:55:30PM +0000, Dmitry Safonov wrote:

> +/***
> + * static_key_fast_inc_not_negative - adds a user for a static key
> + * @key: static key that must be already enabled
> + *
> + * The caller must make sure that the static key can't get disabled while
> + * in this function. It doesn't patch jump labels, only adds a user to
> + * an already enabled static key.
> + *
> + * Returns true if the increment was done.
> + */

I don't normally do kerneldoc style comments, and this is the first in
the whole file. The moment I get a docs person complaining about some
markup issue I just take the ** off.

> +static bool static_key_fast_inc_not_negative(struct static_key *key)
> {
> +	int v;
> +
> 	STATIC_KEY_CHECK_USE(key);
> +	/*
> +	 * Negative key->enabled has a special meaning: it sends
> +	 * static_key_slow_inc() down the slow path, and it is non-zero
> +	 * so it counts as "enabled" in jump_label_update(). Note that
> +	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
> +	 */
> +	v = atomic_read(&key->enabled);
> +	do {
> +		if (v <= 0 || (v + 1) < 0)
> +			return false;
> +	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)));
> +
> +	return true;
> +}

( vexing how this function and the JUMP_LABEL=n static_key_slow_inc() are
  only a single character different )

So while strictly accurate, I dislike this name (and I see I was not
quick enough responding to your earlier suggestion :/). The whole
negative thing is an implementation detail that should not spread
outside of jump_label.c.

Since you did not like the canonical _inc_not_zero(), how about
inc_not_disabled() ?

Also, perhaps expose this function in this patch, instead of hiding that
in patch 3?

Otherwise, things look good.

Thanks!
On Tue, Nov 22, 2022 at 06:55:30PM +0000, Dmitry Safonov wrote:

> key->enabled is indeed a ref-counter as it's documented in multiple
> places: top comment in jump_label.h, Documentation/staging/static-keys.rst,
> etc.

> +/***
> + * static_key_fast_inc_not_negative - adds a user for a static key
> + * @key: static key that must be already enabled
> + *
> + * The caller must make sure that the static key can't get disabled while
> + * in this function. It doesn't patch jump labels, only adds a user to
> + * an already enabled static key.
> + *
> + * Returns true if the increment was done.
> + */

One more thing; it might be useful to point out that unlike refcount_t
this thing does not saturate but will fail to increment on overflow.
On 11/23/22 09:55, Peter Zijlstra wrote:
> On Tue, Nov 22, 2022 at 06:55:30PM +0000, Dmitry Safonov wrote:
>
>> +/***
>> + * static_key_fast_inc_not_negative - adds a user for a static key
>> + * @key: static key that must be already enabled
>> + *
>> + * The caller must make sure that the static key can't get disabled while
>> + * in this function. It doesn't patch jump labels, only adds a user to
>> + * an already enabled static key.
>> + *
>> + * Returns true if the increment was done.
>> + */
>
> I don't normally do kerneldoc style comments, and this is the first in
> the whole file. The moment I get a docs person complaining about some
> markup issue I just take the ** off.

The only reason I used kerneldoc style is that otherwise usually someone
would come and complain. I'll convert it to a regular comment.

> One more thing; it might be useful to point out that unlike refcount_t
> this thing does not saturate but will fail to increment on overflow.

Will add it as well.

>> +static bool static_key_fast_inc_not_negative(struct static_key *key)
>> {
>> +	int v;
>> +
>> 	STATIC_KEY_CHECK_USE(key);
>> +	/*
>> +	 * Negative key->enabled has a special meaning: it sends
>> +	 * static_key_slow_inc() down the slow path, and it is non-zero
>> +	 * so it counts as "enabled" in jump_label_update(). Note that
>> +	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
>> +	 */
>> +	v = atomic_read(&key->enabled);
>> +	do {
>> +		if (v <= 0 || (v + 1) < 0)
>> +			return false;
>> +	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)));
>> +
>> +	return true;
>> +}
>
> ( vexing how this function and the JUMP_LABEL=n static_key_slow_inc() are
>   only a single character different )

Yeah, another reason for that is that with JUMP_LABEL=y, jump_label.h
doesn't include <linux/atomic.h> and <linux/bug.h> because of the
inclusion hell, see commit 1f69bf9c6137 ("jump_label: remove bug.h,
atomic.h dependencies for HAVE_JUMP_LABEL"), and I can't move the
JUMP_LABEL=n version of static_key_slow_inc() into jump_label.c as that
file is not built without the config set. So, as a result, I looked into
a macro define shared by both cases, but that adds quite some ugliness
and loses type checks, just to reuse 10 lines where 1 differs...

> So while strictly accurate, I dislike this name (and I see I was not
> quick enough responding to your earlier suggestion :/). The whole
> negative thing is an implementation detail that should not spread
> outside of jump_label.c.
>
> Since you did not like the canonical _inc_not_zero(), how about
> inc_not_disabled() ?

Ok, that sounds good, I'll rename in v6.

> Also, perhaps expose this function in this patch, instead of hiding that
> in patch 3?

Will do.

> Otherwise, things look good.
>
> Thanks!

Thanks again for the review,
Dmitry
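For the curious, the macro route Dmitry describes and rejects would look roughly like the sketch below — a hypothetical reconstruction, not code from the series, written against C11 atomics and a `mock_key` stand-in so it compiles in userspace. One statement-expression body is parameterized on the lowest acceptable counter value, which is exactly where the type checking and readability get lost:

```c
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace stand-in so the sketch compiles outside the kernel. */
struct mock_key {
	atomic_int enabled;
};

/*
 * Hypothetical shared body for the two nearly identical loops:
 * lowest_ok = 1 mirrors the JUMP_LABEL=y fast path (key must already
 * be enabled), lowest_ok = 0 mirrors the JUMP_LABEL=n inline version
 * (incrementing from zero is fine). Being a GNU statement expression,
 * any pointer whose type has an "enabled" member slips through
 * without the compiler checking it is actually a static key.
 */
#define KEY_TRY_INC(key, lowest_ok)					\
({									\
	int __v = atomic_load(&(key)->enabled);				\
	bool __ok;							\
	do {								\
		__ok = __v >= (lowest_ok) && __v != INT_MAX;		\
		if (!__ok)						\
			break;						\
	} while (!atomic_compare_exchange_weak(&(key)->enabled,		\
					       &__v, __v + 1));		\
	__ok;								\
})
```

With a key at 0, `KEY_TRY_INC(&k, 1)` fails while `KEY_TRY_INC(&k, 0)` takes it to 1 — roughly ten shared lines, one differing comparison, and noticeably more ugliness than the two duplicated functions the series keeps.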
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 570831ca9951..c0a02d4c2ea2 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -224,9 +224,9 @@ extern bool arch_jump_label_transform_queue(struct jump_entry *entry,
 					    enum jump_label_type type);
 extern void arch_jump_label_transform_apply(void);
 extern int jump_label_text_reserved(void *start, void *end);
-extern void static_key_slow_inc(struct static_key *key);
+extern bool static_key_slow_inc(struct static_key *key);
 extern void static_key_slow_dec(struct static_key *key);
-extern void static_key_slow_inc_cpuslocked(struct static_key *key);
+extern bool static_key_slow_inc_cpuslocked(struct static_key *key);
 extern void static_key_slow_dec_cpuslocked(struct static_key *key);
 extern int static_key_count(struct static_key *key);
 extern void static_key_enable(struct static_key *key);
@@ -278,10 +278,21 @@ static __always_inline bool static_key_true(struct static_key *key)
 	return false;
 }
 
-static inline void static_key_slow_inc(struct static_key *key)
+static inline bool static_key_slow_inc(struct static_key *key)
 {
+	int v;
+
 	STATIC_KEY_CHECK_USE(key);
-	atomic_inc(&key->enabled);
+	/*
+	 * Prevent key->enabled getting negative to follow the same semantics
+	 * as for CONFIG_JUMP_LABEL=y, see kernel/jump_label.c comment.
+	 */
+	v = atomic_read(&key->enabled);
+	do {
+		if (v < 0 || (v + 1) < 0)
+			return false;
+	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)));
+	return true;
 }
 
 static inline void static_key_slow_dec(struct static_key *key)
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 4d6c6f5f60db..677a6674c130 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -113,9 +113,38 @@ int static_key_count(struct static_key *key)
 }
 EXPORT_SYMBOL_GPL(static_key_count);
 
-void static_key_slow_inc_cpuslocked(struct static_key *key)
+/***
+ * static_key_fast_inc_not_negative - adds a user for a static key
+ * @key: static key that must be already enabled
+ *
+ * The caller must make sure that the static key can't get disabled while
+ * in this function. It doesn't patch jump labels, only adds a user to
+ * an already enabled static key.
+ *
+ * Returns true if the increment was done.
+ */
+static bool static_key_fast_inc_not_negative(struct static_key *key)
 {
+	int v;
+
 	STATIC_KEY_CHECK_USE(key);
+	/*
+	 * Negative key->enabled has a special meaning: it sends
+	 * static_key_slow_inc() down the slow path, and it is non-zero
+	 * so it counts as "enabled" in jump_label_update(). Note that
+	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
+	 */
+	v = atomic_read(&key->enabled);
+	do {
+		if (v <= 0 || (v + 1) < 0)
+			return false;
+	} while (!likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)));
+
+	return true;
+}
+
+bool static_key_slow_inc_cpuslocked(struct static_key *key)
+{
 	lockdep_assert_cpus_held();
 
 	/*
@@ -124,15 +153,9 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
 	 * jump_label_update() process. At the same time, however,
 	 * the jump_label_update() call below wants to see
 	 * static_key_enabled(&key) for jumps to be updated properly.
-	 *
-	 * So give a special meaning to negative key->enabled: it sends
-	 * static_key_slow_inc() down the slow path, and it is non-zero
-	 * so it counts as "enabled" in jump_label_update(). Note that
-	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
 	 */
-	for (int v = atomic_read(&key->enabled); v > 0; )
-		if (likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)))
-			return;
+	if (static_key_fast_inc_not_negative(key))
+		return true;
 
 	jump_label_lock();
 	if (atomic_read(&key->enabled) == 0) {
@@ -144,16 +167,23 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
 		 */
 		atomic_set_release(&key->enabled, 1);
 	} else {
-		atomic_inc(&key->enabled);
+		if (WARN_ON_ONCE(!static_key_fast_inc_not_negative(key))) {
+			jump_label_unlock();
+			return false;
+		}
 	}
 	jump_label_unlock();
+	return true;
 }
 
-void static_key_slow_inc(struct static_key *key)
+bool static_key_slow_inc(struct static_key *key)
 {
+	bool ret;
+
 	cpus_read_lock();
-	static_key_slow_inc_cpuslocked(key);
+	ret = static_key_slow_inc_cpuslocked(key);
 	cpus_read_unlock();
+	return ret;
 }
 EXPORT_SYMBOL_GPL(static_key_slow_inc);