
[net-next] mptcp: fix deadlock in mptcp_close()

Message ID YnOu5xlGgE2Ln7lj@kili (mailing list archive)
State Rejected, archived
Series [net-next] mptcp: fix deadlock in mptcp_close()

Checks

Context Check Description
matttbe/build success Build and static analysis OK
matttbe/checkpatch success total: 0 errors, 0 warnings, 0 checks, 9 lines checked
matttbe/KVM_Validation__normal warning Unstable: 1 failed test(s): selftest_mptcp_join
matttbe/KVM_Validation__debug warning Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join

Commit Message

Dan Carpenter May 5, 2022, 11:03 a.m. UTC
The mptcp_data_lock/unlock(sk) functions are taking the same spin lock
as the lock_sock()/release_sock() functions.  So we're already holding
the lock at this point and taking it again will lead to a deadlock.

Fixes: 4293248c6704 ("mptcp: add data lock for sk timers")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
From static analysis.  Not tested.

 net/mptcp/protocol.c | 2 --
 1 file changed, 2 deletions(-)
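
For reference, the helpers named in the commit message are thin wrappers around the socket spinlock. A condensed sketch (paraphrased from net/mptcp/protocol.h at the time, not a verbatim copy):

	/* both helpers operate on the same sk->sk_lock.slock spinlock */
	#define mptcp_data_lock(sk)	spin_lock_bh(&(sk)->sk_lock.slock)
	#define mptcp_data_unlock(sk)	spin_unlock_bh(&(sk)->sk_lock.slock)

That shared lock is what led the static analysis to flag a double acquisition under lock_sock().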

Comments

Paolo Abeni May 5, 2022, 11:50 a.m. UTC | #1
On Thu, 2022-05-05 at 14:03 +0300, Dan Carpenter wrote:
> The mptcp_data_lock/unlock(sk) functions are taking the same spin lock
> as the lock_sock()/release_sock() functions.  So we're already holding
> the lock at this point and taking it again will lead to a deadlock.

Note that lock_sock() (and release_sock()) releases the relevant
spinlock before completion. AFAICs the above deadlock is not possible.
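
Roughly, lock_sock_nested() does the following (condensed from net/core/sock.c, not a verbatim copy):

	void lock_sock_nested(struct sock *sk, int subclass)
	{
		might_sleep();
		spin_lock_bh(&sk->sk_lock.slock);
		if (sk->sk_lock.owned)
			__lock_sock(sk);	/* waits for the owner, dropping slock while sleeping */
		sk->sk_lock.owned = 1;
		spin_unlock_bh(&sk->sk_lock.slock);	/* slock is released before returning */
	}

So by the time mptcp_close() runs with the socket lock owned, sk_lock.slock is free again and mptcp_data_lock() can take it without deadlocking.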

Still, I think we can revert commit 4293248c6704; I don't see why we
need the additional spin lock in the first place.

Thanks!

Paolo
Dan Carpenter May 5, 2022, 12:23 p.m. UTC | #2
On Thu, May 05, 2022 at 01:50:01PM +0200, Paolo Abeni wrote:
> On Thu, 2022-05-05 at 14:03 +0300, Dan Carpenter wrote:
> > The mptcp_data_lock/unlock(sk) functions are taking the same spin lock
> > as the lock_sock()/release_sock() functions.  So we're already holding
> > the lock at this point and taking it again will lead to a deadlock.
> 
> Note that lock_sock() (and release_sock()) releases the relevant
> spinlock before completion. AFAICs the above deadlock is not possible.
> 

Oh.  Yeah.  You're right.  I had hardcoded into my local copy of Smatch
that lock_sock() takes that lock.

	{"lock_sock_nested", LOCK,   spin_lock, 0, "&$->sk_lock.slock"},
	{"release_sock",     UNLOCK, spin_lock, 0, "&$->sk_lock.slock"},

But that's wrong...  It drops the lock as you say.

regards,
dan carpenter
MPTCP CI May 5, 2022, 12:34 p.m. UTC | #3
Hi Dan,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Unstable: 1 failed test(s): selftest_mptcp_join 

Patch

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index a5d466e6b538..eef4673dae3a 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2872,9 +2872,7 @@ static void mptcp_close(struct sock *sk, long timeout)
 		__mptcp_destroy_sock(sk);
 		do_cancel_work = true;
 	} else {
-		mptcp_data_lock(sk);
 		sk_reset_timer(sk, &sk->sk_timer, jiffies + TCP_TIMEWAIT_LEN);
-		mptcp_data_unlock(sk);
 	}
 	release_sock(sk);
 	if (do_cancel_work)