Message ID | 20221102043417.279409-1-xiyou.wangcong@gmail.com (mailing list archive)
---|---
State | Accepted |
Commit | 8bbabb3fddcd0f858be69ed5abc9b470a239d6f2 |
Delegated to | BPF
Series | [bpf,v2] sock_map: move cancel_work_sync() out of sock lock
Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
>
> Stanislav reported a lockdep warning, which is caused by
> cancel_work_sync() being called inside sock_map_close(), as analyzed
> below by Jakub:
>
> psock->work.func = sk_psock_backlog()
>   ACQUIRE psock->work_mutex
>     sk_psock_handle_skb()
>       skb_send_sock()
>         __skb_send_sock()
>           sendpage_unlocked()
>             kernel_sendpage()
>               sock->ops->sendpage = inet_sendpage()
>                 sk->sk_prot->sendpage = tcp_sendpage()
>                   ACQUIRE sk->sk_lock
>                     tcp_sendpage_locked()
>                   RELEASE sk->sk_lock
>   RELEASE psock->work_mutex
>
> sock_map_close()
>   ACQUIRE sk->sk_lock
>   sk_psock_stop()
>     sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED)
>     cancel_work_sync()
>       __cancel_work_timer()
>         __flush_work()
>           // wait for psock->work to finish
>   RELEASE sk->sk_lock
>
> We can move cancel_work_sync() out of the sock lock protection,
> but it still has to be called before saved_close().
>
> Fixes: 799aa7f98d53 ("skmsg: Avoid lock_sock() in sk_psock_backlog()")
> Reported-by: Stanislav Fomichev <sdf@google.com>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Jakub Sitnicki <jakub@cloudflare.com>
> Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> ---

LGTM. Thanks.

Acked-by: John Fastabend <john.fastabend@gmail.com>
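To make the lock inversion concrete outside the kernel, here is a minimal userspace analogy in C (a hypothetical sketch using pthreads; sk_lock and backlog_worker are illustrative names, not kernel APIs). One thread holds a lock and then waits synchronously for a worker that needs the same lock, which is exactly the shape of sock_map_close() holding sk->sk_lock across cancel_work_sync() while sk_psock_backlog() takes sk->sk_lock on its transmit path:

/* Hypothetical userspace analogy of the reported deadlock; not kernel code. */
#include <pthread.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER; /* plays sk->sk_lock */

static void *backlog_worker(void *arg)
{
	(void)arg;
	/* Plays sk_psock_backlog(): the work item eventually acquires
	 * sk_lock on its transmit path. */
	pthread_mutex_lock(&sk_lock);
	pthread_mutex_unlock(&sk_lock);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_mutex_lock(&sk_lock);	/* plays lock_sock() in sock_map_close() */
	pthread_create(&worker, NULL, backlog_worker, NULL);

	/* Plays cancel_work_sync(): blocks until the worker finishes,
	 * but the worker is blocked on sk_lock, which we still hold. */
	pthread_join(worker, NULL);

	pthread_mutex_unlock(&sk_lock);	/* never reached */
	return 0;
}

Built with "gcc -pthread", main() never returns from pthread_join(): the worker is parked on the mutex that main() still owns, the same circular wait lockdep flags above.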
On Tue, Nov 01, 2022 at 09:34 PM -07, Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
>
> Stanislav reported a lockdep warning, which is caused by
> cancel_work_sync() being called inside sock_map_close(), as analyzed
> below by Jakub:
>
> psock->work.func = sk_psock_backlog()
>   ACQUIRE psock->work_mutex
>     sk_psock_handle_skb()
>       skb_send_sock()
>         __skb_send_sock()
>           sendpage_unlocked()
>             kernel_sendpage()
>               sock->ops->sendpage = inet_sendpage()
>                 sk->sk_prot->sendpage = tcp_sendpage()
>                   ACQUIRE sk->sk_lock
>                     tcp_sendpage_locked()
>                   RELEASE sk->sk_lock
>   RELEASE psock->work_mutex
>
> sock_map_close()
>   ACQUIRE sk->sk_lock
>   sk_psock_stop()
>     sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED)
>     cancel_work_sync()
>       __cancel_work_timer()
>         __flush_work()
>           // wait for psock->work to finish
>   RELEASE sk->sk_lock
>
> We can move cancel_work_sync() out of the sock lock protection,
> but it still has to be called before saved_close().
>
> Fixes: 799aa7f98d53 ("skmsg: Avoid lock_sock() in sk_psock_backlog()")
> Reported-by: Stanislav Fomichev <sdf@google.com>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Jakub Sitnicki <jakub@cloudflare.com>
> Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> ---

[...]

Thanks!

Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Tested-by: Jakub Sitnicki <jakub@cloudflare.com>
Hello:

This patch was applied to bpf/bpf.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Tue, 1 Nov 2022 21:34:17 -0700 you wrote:
> From: Cong Wang <cong.wang@bytedance.com>
>
> Stanislav reported a lockdep warning, which is caused by
> cancel_work_sync() being called inside sock_map_close(), as analyzed
> below by Jakub:
>
> psock->work.func = sk_psock_backlog()
>   ACQUIRE psock->work_mutex
>     sk_psock_handle_skb()
>       skb_send_sock()
>         __skb_send_sock()
>           sendpage_unlocked()
>             kernel_sendpage()
>               sock->ops->sendpage = inet_sendpage()
>                 sk->sk_prot->sendpage = tcp_sendpage()
>                   ACQUIRE sk->sk_lock
>                     tcp_sendpage_locked()
>                   RELEASE sk->sk_lock
>   RELEASE psock->work_mutex
>
> [...]

Here is the summary with links:
  - [bpf,v2] sock_map: move cancel_work_sync() out of sock lock
    https://git.kernel.org/bpf/bpf/c/8bbabb3fddcd

You are awesome, thank you!
diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 48f4b645193b..70d6cb94e580 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -376,7 +376,7 @@ static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 }
 
 struct sk_psock *sk_psock_init(struct sock *sk, int node);
-void sk_psock_stop(struct sk_psock *psock, bool wait);
+void sk_psock_stop(struct sk_psock *psock);
 
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
 int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 1efdc47a999b..e6b9ced3eda8 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -803,16 +803,13 @@ static void sk_psock_link_destroy(struct sk_psock *psock)
 	}
 }
 
-void sk_psock_stop(struct sk_psock *psock, bool wait)
+void sk_psock_stop(struct sk_psock *psock)
 {
 	spin_lock_bh(&psock->ingress_lock);
 	sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
 	sk_psock_cork_free(psock);
 	__sk_psock_zap_ingress(psock);
 	spin_unlock_bh(&psock->ingress_lock);
-
-	if (wait)
-		cancel_work_sync(&psock->work);
 }
 
 static void sk_psock_done_strp(struct sk_psock *psock);
@@ -850,7 +847,7 @@ void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
 	sk_psock_stop_verdict(sk, psock);
 	write_unlock_bh(&sk->sk_callback_lock);
 
-	sk_psock_stop(psock, false);
+	sk_psock_stop(psock);
 
 	INIT_RCU_WORK(&psock->rwork, sk_psock_destroy);
 	queue_rcu_work(system_wq, &psock->rwork);
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index a660baedd9e7..81beb16ab1eb 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1596,7 +1596,7 @@ void sock_map_destroy(struct sock *sk)
 	saved_destroy = psock->saved_destroy;
 	sock_map_remove_links(sk, psock);
 	rcu_read_unlock();
-	sk_psock_stop(psock, false);
+	sk_psock_stop(psock);
 	sk_psock_put(sk, psock);
 	saved_destroy(sk);
 }
@@ -1619,9 +1619,10 @@ void sock_map_close(struct sock *sk, long timeout)
 	saved_close = psock->saved_close;
 	sock_map_remove_links(sk, psock);
 	rcu_read_unlock();
-	sk_psock_stop(psock, true);
-	sk_psock_put(sk, psock);
+	sk_psock_stop(psock);
 	release_sock(sk);
+	cancel_work_sync(&psock->work);
+	sk_psock_put(sk, psock);
 	saved_close(sk, timeout);
 }
 EXPORT_SYMBOL_GPL(sock_map_close);
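Two ordering details in the sock_map_close() hunk are worth noting. First, cancel_work_sync() now runs only after release_sock(), which removes sk->sk_lock from the wait and breaks the inversion. Second, sk_psock_put() moves to after cancel_work_sync(), so the final reference (and with it psock->work, which is embedded in struct sk_psock) cannot be dropped while the flush is still touching it. In terms of the earlier pthread analogy, the fix corresponds to unlocking before joining (again a hypothetical userspace sketch, not kernel code):

/* Hypothetical analogy of the fixed ordering; not kernel code. */
#include <pthread.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;

static void *backlog_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sk_lock);	/* the worker still needs the lock ... */
	pthread_mutex_unlock(&sk_lock);	/* ... but nobody holds it while joining */
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, backlog_worker, NULL);

	pthread_mutex_lock(&sk_lock);	/* lock_sock(): tear down under the lock */
	pthread_mutex_unlock(&sk_lock);	/* release_sock() comes first ... */

	pthread_join(worker, NULL);	/* ... then the cancel_work_sync() analog */
	return 0;
}

With the lock released before the join, neither side can wait on the other, and the program exits cleanly, matching the lock order the patch establishes.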