Message ID | 20231223025554.2316836-17-aleksander.lobakin@intel.com (mailing list archive) |
---|---|
State | RFC |
Delegated to: | Netdev Maintainers |
Series | Christmas 3-serie XDP for idpf (+generic stuff) |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Clearly marked for net-next, async |
netdev/apply | fail | Patch does not apply to net-next |
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index d9c822bbffb8..f0375372b484 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -177,6 +177,7 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key)
 	jump_label_unlock();
 	return true;
 }
+EXPORT_SYMBOL_GPL(static_key_slow_inc_cpuslocked);
 
 bool static_key_slow_inc(struct static_key *key)
 {
@@ -304,6 +305,7 @@ void static_key_slow_dec_cpuslocked(struct static_key *key)
 	STATIC_KEY_CHECK_USE(key);
 	__static_key_slow_dec_cpuslocked(key);
 }
+EXPORT_SYMBOL_GPL(static_key_slow_dec_cpuslocked);
 
 void __static_key_slow_dec_deferred(struct static_key *key,
 				    struct delayed_work *work,
Sometimes, there's a need to modify a lot of static keys, or to modify the
same key multiple times in a loop. In that case, it is more efficient to take
cpus_read_lock() once and then call the _cpuslocked() variants.
The enable/disable functions are already exported; the refcounted
counterparts, however, are not. Fix that to allow modules to save some
cycles.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 kernel/jump_label.c | 2 ++
 1 file changed, 2 insertions(+)
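
For illustration only (not part of the patch): a minimal sketch of the module
usage pattern the commit message describes. The keys foo_feature_a/foo_feature_b
and the foo_enable_features()/foo_disable_features() helpers are hypothetical;
the static_branch_{inc,dec}_cpuslocked() wrappers from <linux/jump_label.h>
expand to the static_key_slow_{inc,dec}_cpuslocked() functions exported here,
so such module code would not link without these exports.

/* Hypothetical module code: take cpus_read_lock() once and flip several
 * static keys under it, instead of letting every static_branch_inc()/_dec()
 * take and drop the lock on its own.
 */
#include <linux/cpu.h>
#include <linux/jump_label.h>

/* Hypothetical module-owned keys, used only to illustrate the pattern. */
static DEFINE_STATIC_KEY_FALSE(foo_feature_a);
static DEFINE_STATIC_KEY_FALSE(foo_feature_b);

static void foo_enable_features(void)
{
	cpus_read_lock();

	/* Both resolve to the now-exported static_key_slow_inc_cpuslocked(). */
	static_branch_inc_cpuslocked(&foo_feature_a);
	static_branch_inc_cpuslocked(&foo_feature_b);

	cpus_read_unlock();
}

static void foo_disable_features(void)
{
	cpus_read_lock();

	/* Both resolve to the now-exported static_key_slow_dec_cpuslocked(). */
	static_branch_dec_cpuslocked(&foo_feature_a);
	static_branch_dec_cpuslocked(&foo_feature_b);

	cpus_read_unlock();
}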