From patchwork Wed Sep 28 15:25:37 2022
X-Patchwork-Submitter: Tetsuo Handa
X-Patchwork-Id: 12992422
Message-ID: <3de97b2d-1c15-5dda-4fe2-78311a91d861@I-love.SAKURA.ne.jp>
Date: Thu, 29 Sep 2022 00:25:37 +0900
Subject: [PATCH] net: rds: don't hold sock lock when cancelling work from rds_tcp_reset_callbacks()
To: Santosh Shilimkar, "David S. Miller", Sowmini Varadhan, Hillf Danton
References: <0000000000004fe8f805e6b9af20@google.com>
Cc: syzkaller-bugs@googlegroups.com, syzbot, Network Development, OFED mailing list
From: Tetsuo Handa
In-Reply-To: <0000000000004fe8f805e6b9af20@google.com>
X-Mailing-List: linux-rdma@vger.kernel.org

syzbot is reporting a lockdep warning at rds_tcp_reset_callbacks() [1],
because commit ac3615e7f3cffe2a ("RDS: TCP: Reduce code duplication in
rds_tcp_reset_callbacks()") added cancel_delayed_work_sync() to a
section protected by lock_sock(), without realizing that
rds_send_xmit() might call lock_sock(). We don't need to protect
cancel_delayed_work_sync() with lock_sock(): even if
rds_{send,recv}_worker() re-queues this work while __flush_work() from
cancel_delayed_work_sync() is waiting for the work to complete, the
retried rds_{send,recv}_worker() is a no-op because the RDS_CONN_UP
bit is no longer set.

Link: https://syzkaller.appspot.com/bug?extid=78c55c7bc6f66e53dce2 [1]
Reported-by: syzbot
Co-developed-by: Hillf Danton
Signed-off-by: Hillf Danton
Signed-off-by: Tetsuo Handa
Tested-by: syzbot
Fixes: ac3615e7f3cffe2a ("RDS: TCP: Reduce code duplication in rds_tcp_reset_callbacks()")
---
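To make the lock cycle easier to see, below is a minimal userspace
sketch of the inversion. The pthread code, the send_worker() helper,
and the sock_lock mutex are illustrative stand-ins I made up, not
kernel APIs: one thread plays the work item, which takes "the sock
lock" the way rds_send_xmit() takes lock_sock(); the canceller holds
the lock and then waits for the work, the way the pre-patch code held
lock_sock(osock->sk) across cancel_delayed_work_sync().

/*
 * Userspace analogue of the reported deadlock (all names illustrative).
 * Build with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;

/* Plays the role of rds_send_worker() -> rds_send_xmit(): the work
 * item itself needs the sock lock to make progress. */
static void *send_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sock_lock);   /* rds_send_xmit() -> lock_sock() */
	/* ... transmit ... */
	pthread_mutex_unlock(&sock_lock);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, send_worker, NULL);

	pthread_mutex_lock(&sock_lock);   /* lock_sock(osock->sk) */
	/*
	 * cancel_delayed_work_sync() may have to wait (__flush_work())
	 * for a running instance of the work to finish; the join models
	 * that wait.  If the worker is already blocked on sock_lock,
	 * it never finishes, the join never returns, and the two
	 * threads deadlock.  (Whether this run actually hangs depends
	 * on which thread grabs the lock first -- which is exactly why
	 * lockdep flags the ordering rather than an observed hang.)
	 */
	pthread_join(worker, NULL);       /* cancel_delayed_work_sync() */
	pthread_mutex_unlock(&sock_lock);

	printf("no deadlock this run; the ordering is still unsafe\n");
	return 0;
}

Moving the wait before taking sock_lock, as the patch below does for
the two cancel_delayed_work_sync() calls, removes one edge of the
cycle.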
Hillf, why don't you propose this as a formal patch, now that syzbot
has tested your fix? Explaining it as a formal patch would help us
understand and review what you were thinking and how you arrived at
your patch. I feel sorry for stealing the result of your trial and
error...

 net/rds/tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/rds/tcp.c b/net/rds/tcp.c
index 73ee2771093d..d0ff413f697c 100644
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -166,10 +166,10 @@ void rds_tcp_reset_callbacks(struct socket *sock,
 	 */
 	atomic_set(&cp->cp_state, RDS_CONN_RESETTING);
 	wait_event(cp->cp_waitq, !test_bit(RDS_IN_XMIT, &cp->cp_flags));
-	lock_sock(osock->sk);
 	/* reset receive side state for rds_tcp_data_recv() for osock */
 	cancel_delayed_work_sync(&cp->cp_send_w);
 	cancel_delayed_work_sync(&cp->cp_recv_w);
+	lock_sock(osock->sk);
 	if (tc->t_tinc) {
 		rds_inc_put(&tc->t_tinc->ti_inc);
 		tc->t_tinc = NULL;
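For reviewers, a minimal sketch of the safety argument from the
changelog. It assumes only what the changelog states: a re-queued
rds_{send,recv}_worker() returns without touching the socket unless
the connection path is up, and rds_tcp_reset_callbacks() has already
set the state to RDS_CONN_RESETTING before cancelling. The types and
names below are illustrative stand-ins, not the kernel's.

/* Sketch of the "retried worker is a no-op" argument (illustrative). */
#include <stdatomic.h>
#include <stdio.h>

enum conn_state { CONN_RESETTING, CONN_UP };

struct conn_path {
	_Atomic enum conn_state state;	/* stands in for cp->cp_state */
};

/* Stand-in for a re-queued rds_send_worker()/rds_recv_worker(). */
static void send_worker(struct conn_path *cp)
{
	if (atomic_load(&cp->state) != CONN_UP)
		return;	/* no-op: nothing left to race with the cancel */
	/* ... would enter the xmit/recv path here ... */
}

int main(void)
{
	/* rds_tcp_reset_callbacks() has already moved the state off UP. */
	struct conn_path cp = { CONN_RESETTING };

	/* Even if the work is re-queued while __flush_work() is waiting,
	 * the retry bails out immediately, so cancelling outside the
	 * sock lock loses nothing. */
	send_worker(&cp);
	puts("worker was a no-op while resetting");
	return 0;
}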