Message ID | 20241128-scx_lockdep-v1-1-2315b813b36b@debian.org (mailing list archive)
---|---
State | New
Series | rhashtable: Fix potential deadlock by moving schedule_work outside lock
```diff
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 6c902639728b767cc3ee42c61256d2e9618e6ce7..5a27ccd72db9a25d92d1ed2f8d519afcfc672afe 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -585,9 +585,6 @@ static struct bucket_table *rhashtable_insert_one(
 	rht_assign_locked(bkt, obj);
 	atomic_inc(&ht->nelems);
 
-	if (rht_grow_above_75(ht, tbl))
-		schedule_work(&ht->run_work);
-
 	return NULL;
 }
 
@@ -624,6 +621,9 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
 				data = ERR_CAST(new_tbl);
 
 			rht_unlock(tbl, bkt, flags);
+			if (rht_grow_above_75(ht, tbl))
+				schedule_work(&ht->run_work);
+
 		}
 	} while (!IS_ERR_OR_NULL(new_tbl));
 
```
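To make the locking pattern above concrete, here is a standalone, kernel-flavored sketch of the same idea (hypothetical toy code; `toy_table`, `toy_insert()` and `toy_grow_above_75()` are invented stand-ins, not kernel APIs): the fullness check and the `schedule_work()` call run only after the spinlock guarding the insert path has been dropped.

```c
#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* Toy stand-in for struct rhashtable: a counter guarded by a lock,
 * plus a work item (assumed initialized elsewhere with INIT_WORK())
 * that grows the table asynchronously. */
struct toy_table {
	spinlock_t lock;		/* plays the role of the bucket lock */
	atomic_t nelems;
	unsigned int size;
	struct work_struct run_work;	/* plays the role of ht->run_work */
};

/* Roughly mirrors rht_grow_above_75(): more than 75% full? */
static bool toy_grow_above_75(struct toy_table *t)
{
	return atomic_read(&t->nelems) > t->size / 4 * 3;
}

static void toy_insert(struct toy_table *t)
{
	spin_lock(&t->lock);
	atomic_inc(&t->nelems);
	/*
	 * Calling schedule_work() here, under t->lock, would make the
	 * workqueue's internal locking (e.g. the rq lock taken when
	 * waking a worker) nest inside t->lock.  If some other path
	 * acquires those locks in the opposite order, lockdep reports
	 * a circular dependency -- the scenario this patch breaks up.
	 */
	spin_unlock(&t->lock);

	/* Safe: by the time we kick the worker, t->lock is released. */
	if (toy_grow_above_75(t))
		schedule_work(&t->run_work);
}
```

The cost, as the commit message below concedes, is that the predicate is now evaluated without the lock, so a concurrent insert can race with it; for a grow-at-75% watermark that presumably delays the resize by at most an insert or two.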
Move the hash table growth check and work scheduling outside the rht
lock to prevent a possible circular locking dependency.

The original implementation could trigger a lockdep warning due to a
potential deadlock scenario involving nested locks between the
rhashtable bucket lock, the rq lock, and the dsq lock. By relocating
the growth check and work scheduling after releasing the rht lock, we
break this potential deadlock chain.

This change expands the flexibility of rhashtable by removing
restrictive locking that previously limited its use in scheduler and
workqueue contexts.

It is important to note that rht_grow_above_75() is now called without
holding the lock, so it reads from struct rhashtable unlocked. If this
turns out to be a problem, we can instead do the check under the lock
and schedule the workqueue after the lock is released.

Fixes: f0e1a0643a59 ("sched_ext: Implement BPF extensible scheduler class")
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Breno Leitao <leitao@debian.org>
---
 lib/rhashtable.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
---
base-commit: 0a31ca318eea4da46e5f495c79ccc4442c92f4dc
change-id: 20241128-scx_lockdep-3fa87553609d

Best regards,
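For reference, the fallback mentioned in the message above could look roughly like the fragment below (a hypothetical sketch, not part of the posted patch, assuming the locals of rhashtable_try_insert() -- ht, tbl, bkt, flags -- are in scope): sample the growth predicate while the bucket lock is still held, and defer only the schedule_work() call.

```c
/*
 * Hypothetical alternative: evaluate rht_grow_above_75() while the
 * bucket lock is still held, so the read of struct rhashtable cannot
 * race with concurrent inserts, and defer only the schedule_work()
 * call until after the unlock.
 */
bool need_grow;

flags = rht_lock(tbl, bkt);
/* ... lookup and insert as in rhashtable_try_insert() ... */
need_grow = rht_grow_above_75(ht, tbl);
rht_unlock(tbl, bkt, flags);

if (need_grow)
	schedule_work(&ht->run_work);
```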