From patchwork Fri Aug 5 00:21:25 2022
X-Patchwork-Submitter: Mat Martineau
X-Patchwork-Id: 12936806
X-Patchwork-Delegate: kuba@kernel.org
From: Mat Martineau
To: netdev@vger.kernel.org
Cc: Paolo Abeni, davem@davemloft.net, kuba@kernel.org, edumazet@google.com,
    matthieu.baerts@tessares.net, fw@strlen.de, dcaratti@redhat.com,
    mptcp@lists.linux.dev, Nguyen Dinh Phi, Mat Martineau
Subject: [PATCH net 1/3] mptcp: move subflow cleanup in mptcp_destroy_common()
Date: Thu, 4 Aug 2022 17:21:25 -0700
Message-Id: <20220805002127.88430-2-mathew.j.martineau@linux.intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220805002127.88430-1-mathew.j.martineau@linux.intel.com>
References: <20220805002127.88430-1-mathew.j.martineau@linux.intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Paolo Abeni

If MPTCP socket creation fails due to a CGROUP_INET_SOCK_CREATE eBPF
program, the MPTCP protocol ends up leaking all the subflows: the
related cleanup happens in __mptcp_destroy_sock(), which is not invoked
on that code path.

Address the issue by moving the subflow socket cleanup into the
mptcp_destroy_common() helper, which is invoked on every msk cleanup
path. Additionally, get rid of the intermediate list_splice_init step,
which is an unneeded relic from the past.
The issue has been present since before the reported root-cause commit,
but backporting the fix to anything earlier than that hash would require
a complete rewrite.

Fixes: e16163b6e2 ("mptcp: refactor shutdown and close")
Reported-by: Nguyen Dinh Phi
Reviewed-by: Mat Martineau
Co-developed-by: Nguyen Dinh Phi
Signed-off-by: Nguyen Dinh Phi
Signed-off-by: Paolo Abeni
Signed-off-by: Mat Martineau
---
 net/mptcp/protocol.c | 39 +++++++++++++++------------------------
 net/mptcp/protocol.h |  2 +-
 net/mptcp/subflow.c  |  3 ++-
 3 files changed, 18 insertions(+), 26 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index a3f1c1461874..07fcc86e1fc9 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2769,30 +2769,16 @@ static void __mptcp_wr_shutdown(struct sock *sk)
 
 static void __mptcp_destroy_sock(struct sock *sk)
 {
-	struct mptcp_subflow_context *subflow, *tmp;
 	struct mptcp_sock *msk = mptcp_sk(sk);
-	LIST_HEAD(conn_list);
 
 	pr_debug("msk=%p", msk);
 
 	might_sleep();
 
-	/* join list will be eventually flushed (with rst) at sock lock release time*/
-	list_splice_init(&msk->conn_list, &conn_list);
-
 	mptcp_stop_timer(sk);
 	sk_stop_timer(sk, &sk->sk_timer);
 
 	msk->pm.status = 0;
 
-	/* clears msk->subflow, allowing the following loop to close
-	 * even the initial subflow
-	 */
-	mptcp_dispose_initial_subflow(msk);
-	list_for_each_entry_safe(subflow, tmp, &conn_list, node) {
-		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-		__mptcp_close_ssk(sk, ssk, subflow, 0);
-	}
-
 	sk->sk_prot->destroy(sk);
 
 	WARN_ON_ONCE(msk->rmem_fwd_alloc);
@@ -2884,24 +2870,20 @@ static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
 
 static int mptcp_disconnect(struct sock *sk, int flags)
 {
-	struct mptcp_subflow_context *subflow, *tmp;
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
 	inet_sk_state_store(sk, TCP_CLOSE);
 
-	list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) {
-		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-
-		__mptcp_close_ssk(sk, ssk, subflow, MPTCP_CF_FASTCLOSE);
-	}
-
 	mptcp_stop_timer(sk);
 	sk_stop_timer(sk, &sk->sk_timer);
 
 	if (mptcp_sk(sk)->token)
 		mptcp_event(MPTCP_EVENT_CLOSED, mptcp_sk(sk), NULL, GFP_KERNEL);
 
-	mptcp_destroy_common(msk);
+	/* msk->subflow is still intact, the following will not free the first
+	 * subflow
+	 */
+	mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
 	msk->last_snd = NULL;
 	WRITE_ONCE(msk->flags, 0);
 	msk->cb_flags = 0;
@@ -3051,12 +3033,17 @@ static struct sock *mptcp_accept(struct sock *sk, int flags, int *err,
 	return newsk;
 }
 
-void mptcp_destroy_common(struct mptcp_sock *msk)
+void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
 {
+	struct mptcp_subflow_context *subflow, *tmp;
 	struct sock *sk = (struct sock *)msk;
 
 	__mptcp_clear_xmit(sk);
 
+	/* join list will be eventually flushed (with rst) at sock lock release time */
+	list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node)
+		__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags);
+
 	/* move to sk_receive_queue, sk_stream_kill_queues will purge it */
 	mptcp_data_lock(sk);
 	skb_queue_splice_tail_init(&msk->receive_queue, &sk->sk_receive_queue);
@@ -3078,7 +3065,11 @@ static void mptcp_destroy(struct sock *sk)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
-	mptcp_destroy_common(msk);
+	/* clears msk->subflow, allowing the following to close
+	 * even the initial subflow
+	 */
+	mptcp_dispose_initial_subflow(msk);
+	mptcp_destroy_common(msk, 0);
 	sk_sockets_allocated_dec(sk);
 }
 
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 5d6043c16b09..40881a7df5d5 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -717,7 +717,7 @@ static inline void mptcp_write_space(struct sock *sk)
 	}
 }
 
-void mptcp_destroy_common(struct mptcp_sock *msk);
+void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags);
 
 #define MPTCP_TOKEN_MAX_RETRIES	4
 
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 901c763dcdbb..c7d49fb6e7bd 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -621,7 +621,8 @@ static void mptcp_sock_destruct(struct sock *sk)
 		sock_orphan(sk);
 	}
 
-	mptcp_destroy_common(mptcp_sk(sk));
+	/* We don't need to clear msk->subflow, as it's still NULL at this point */
+	mptcp_destroy_common(mptcp_sk(sk), 0);
 	inet_sock_destruct(sk);
 }
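
To make the failure path described in patch 1 concrete: a program attached at
the CGROUP_INET_SOCK_CREATE hook can veto socket creation by returning 0, in
which case socket() fails (typically with EPERM) and, before this patch, the
subflows already attached to the msk were never released. The sketch below is
a hypothetical, minimal illustration and not part of this series; the program
and section names and the reliance on libbpf's bpf_helpers.h are assumptions
made for the example only.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch: veto MPTCP socket creation from a cgroup hook.
 * Build with clang -target bpf and attach as BPF_CGROUP_INET_SOCK_CREATE.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

SEC("cgroup/sock_create")
int deny_mptcp_create(struct bpf_sock *ctx)
{
	/* for cgroup/sock_create programs, returning 0 rejects the socket */
	if (ctx->protocol == IPPROTO_MPTCP)
		return 0;
	return 1;
}

char _license[] SEC("license") = "GPL";

With such a program attached to the test cgroup, every rejected MPTCP socket()
call used to leak the just-created subflow sockets; after this patch they are
torn down by mptcp_destroy_common().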
From patchwork Fri Aug 5 00:21:26 2022
X-Patchwork-Submitter: Mat Martineau
X-Patchwork-Id: 12936808
X-Patchwork-Delegate: kuba@kernel.org
From: Mat Martineau
To: netdev@vger.kernel.org
Cc: Paolo Abeni, davem@davemloft.net, kuba@kernel.org, edumazet@google.com,
    matthieu.baerts@tessares.net, fw@strlen.de, dcaratti@redhat.com,
    mptcp@lists.linux.dev, Dipanjan Das, Mat Martineau
Subject: [PATCH net 2/3] mptcp: do not queue data on closed subflows
Date: Thu, 4 Aug 2022 17:21:26 -0700
Message-Id: <20220805002127.88430-3-mathew.j.martineau@linux.intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220805002127.88430-1-mathew.j.martineau@linux.intel.com>
References: <20220805002127.88430-1-mathew.j.martineau@linux.intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Paolo Abeni

Dipanjan reported a syzbot splat at close time:

WARNING: CPU: 1 PID: 10818 at net/ipv4/af_inet.c:153 inet_sock_destruct+0x6d0/0x8e0 net/ipv4/af_inet.c:153
Modules linked in: uio_ivshmem(OE) uio(E)
CPU: 1 PID: 10818 Comm: kworker/1:16 Tainted: G OE 5.19.0-rc6-g2eae0556bb9d #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Workqueue: events mptcp_worker
RIP: 0010:inet_sock_destruct+0x6d0/0x8e0 net/ipv4/af_inet.c:153
Code: 21 02 00 00 41 8b 9c 24 28 02 00 00 e9 07 ff ff ff e8 34 4d 91 f9 89 ee 4c 89 e7 e8 4a 47 60 ff e9 a6 fc ff ff e8 20 4d 91 f9 <0f> 0b e9 84 fe ff ff e8 14 4d 91 f9 0f 0b e9 d4 fd ff ff e8 08 4d
RSP: 0018:ffffc9001b35fa78 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00000000002879d0 RCX: ffff8881326f3b00
RDX: 0000000000000000 RSI: ffff8881326f3b00 RDI: 0000000000000002
RBP: ffff888179662674 R08: ffffffff87e983a0 R09: 0000000000000000
R10: 0000000000000005 R11: 00000000000004ea R12: ffff888179662400
R13: ffff888179662428 R14: 0000000000000001 R15: ffff88817e38e258
FS:  0000000000000000(0000) GS:ffff8881f5f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020007bc0 CR3: 0000000179592000 CR4: 0000000000150ee0
Call Trace:
 __sk_destruct+0x4f/0x8e0 net/core/sock.c:2067
 sk_destruct+0xbd/0xe0 net/core/sock.c:2112
 __sk_free+0xef/0x3d0 net/core/sock.c:2123
 sk_free+0x78/0xa0 net/core/sock.c:2134
 sock_put include/net/sock.h:1927 [inline]
 __mptcp_close_ssk+0x50f/0x780 net/mptcp/protocol.c:2351
 __mptcp_destroy_sock+0x332/0x760 net/mptcp/protocol.c:2828
 mptcp_worker+0x5d2/0xc90 net/mptcp/protocol.c:2586
 process_one_work+0x9cc/0x1650 kernel/workqueue.c:2289
 worker_thread+0x623/0x1070 kernel/workqueue.c:2436
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302

The root cause of the problem is that an MPTCP-level (re)transmit can
race with mptcp_close(): the packet scheduler checks the subflow state
before acquiring the socket lock, so we can end up (re)transmitting on
an already closed ssk.

Fix the issue by checking the subflow socket status again under the
subflow socket lock. Additionally, add the missing check for the
fallback-to-TCP case.

Fixes: d5f49190def6 ("mptcp: allow picking different xmit subflows")
Reported-by: Dipanjan Das
Reviewed-by: Mat Martineau
Signed-off-by: Paolo Abeni
Signed-off-by: Mat Martineau
---
 net/mptcp/protocol.c |  8 +++++++-
 net/mptcp/protocol.h | 11 +++++++----
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 07fcc86e1fc9..da4257504fad 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1240,6 +1240,9 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
 			 info->limit > dfrag->data_len))
 		return 0;
 
+	if (unlikely(!__tcp_can_send(ssk)))
+		return -EAGAIN;
+
 	/* compute send limit */
 	info->mss_now = tcp_send_mss(ssk, &info->size_goal, info->flags);
 	copy = info->size_goal;
@@ -1413,7 +1416,8 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 	if (__mptcp_check_fallback(msk)) {
 		if (!msk->first)
 			return NULL;
-		return sk_stream_memory_free(msk->first) ? msk->first : NULL;
+		return __tcp_can_send(msk->first) &&
+		       sk_stream_memory_free(msk->first) ?
+		       msk->first : NULL;
 	}
 
 	/* re-use last subflow, if the burst allow that */
@@ -1564,6 +1568,8 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
 
 			ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
 			if (ret <= 0) {
+				if (ret == -EAGAIN)
+					continue;
 				mptcp_push_release(ssk, &info);
 				goto out;
 			}
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 40881a7df5d5..132d50833df1 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -624,16 +624,19 @@ void mptcp_info2sockaddr(const struct mptcp_addr_info *info,
 			 struct sockaddr_storage *addr,
 			 unsigned short family);
 
-static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
+static inline bool __tcp_can_send(const struct sock *ssk)
 {
-	struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+	/* only send if our side has not closed yet */
+	return ((1 << inet_sk_state_load(ssk)) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
+}
 
+static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
+{
 	/* can't send if JOIN hasn't completed yet (i.e. is usable for mptcp) */
 	if (subflow->request_join && !subflow->fully_established)
 		return false;
 
-	/* only send if our side has not closed yet */
-	return ((1 << ssk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
+	return __tcp_can_send(mptcp_subflow_tcp_sock(subflow));
 }
 
 void mptcp_subflow_set_active(struct mptcp_subflow_context *subflow);
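
A side note on the __tcp_can_send() helper added above: the TCPF_* constants
are simply (1 << TCP_*), so a single bitmask test covers several acceptable
states at once, and inet_sk_state_load() gives an annotated lockless read of
sk_state. A hypothetical standalone sketch of the same bitmask idiom, with
state values mirroring include/net/tcp_states.h:

#include <stdbool.h>
#include <stdio.h>

/* values mirror include/net/tcp_states.h */
enum { TCP_ESTABLISHED = 1, TCP_FIN_WAIT1 = 4, TCP_CLOSE_WAIT = 8 };
#define TCPF_ESTABLISHED	(1 << TCP_ESTABLISHED)
#define TCPF_CLOSE_WAIT		(1 << TCP_CLOSE_WAIT)

/* same shape as __tcp_can_send(): one mask test instead of chained comparisons */
static bool can_send(int state)
{
	return (1 << state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT);
}

int main(void)
{
	printf("ESTABLISHED -> %d\n", can_send(TCP_ESTABLISHED));	/* 1 */
	printf("FIN_WAIT1   -> %d\n", can_send(TCP_FIN_WAIT1));	/* 0 */
	printf("CLOSE_WAIT  -> %d\n", can_send(TCP_CLOSE_WAIT));	/* 1 */
	return 0;
}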
From patchwork Fri Aug 5 00:21:27 2022
X-Patchwork-Submitter: Mat Martineau
X-Patchwork-Id: 12936809
X-Patchwork-Delegate: kuba@kernel.org
From: Mat Martineau
To: netdev@vger.kernel.org
Cc: Florian Westphal, davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
    edumazet@google.com, matthieu.baerts@tessares.net, dcaratti@redhat.com,
    mptcp@lists.linux.dev, Xiumei Mu, Mat Martineau
Subject: [PATCH net 3/3] selftests: mptcp: make sendfile selftest work
Date: Thu, 4 Aug 2022 17:21:27 -0700
Message-Id: <20220805002127.88430-4-mathew.j.martineau@linux.intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220805002127.88430-1-mathew.j.martineau@linux.intel.com>
References: <20220805002127.88430-1-mathew.j.martineau@linux.intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Florian Westphal

When the selftest was added, sendfile() on MPTCP sockets returned
-EOPNOTSUPP, so running 'mptcp_connect.sh -m sendfile' failed
immediately. That is no longer the case, but the script still fails,
now due to a timeout.

Let the receiver know once the sender has sent all data, just like in
'-m mmap' mode.

v2: also respect cfg_wait, as pm_userspace.sh relied on '-m sendfile'
to keep the connection open (Mat Martineau)

Fixes: 048d19d444be ("mptcp: add basic kselftest for mptcp")
Reported-by: Xiumei Mu
Reviewed-by: Mat Martineau
Signed-off-by: Florian Westphal
Signed-off-by: Mat Martineau
---
 .../selftests/net/mptcp/mptcp_connect.c       | 26 ++++++++++++-------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
index e2ea6c126c99..24d4e9cb617e 100644
--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
@@ -553,6 +553,18 @@ static void set_nonblock(int fd, bool nonblock)
 		fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
 }
 
+static void shut_wr(int fd)
+{
+	/* Close our write side, ev. give some time
+	 * for address notification and/or checking
+	 * the current status
+	 */
+	if (cfg_wait)
+		usleep(cfg_wait);
+
+	shutdown(fd, SHUT_WR);
+}
+
 static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after_out)
 {
 	struct pollfd fds = {
@@ -630,14 +642,7 @@ static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after
 					/* ... and peer also closed already */
 					break;
 
-				/* ... but we still receive.
-				 * Close our write side, ev. give some time
-				 * for address notification and/or checking
-				 * the current status
-				 */
-				if (cfg_wait)
-					usleep(cfg_wait);
-				shutdown(peerfd, SHUT_WR);
+				shut_wr(peerfd);
 			} else {
 				if (errno == EINTR)
 					continue;
@@ -767,7 +772,7 @@ static int copyfd_io_mmap(int infd, int peerfd, int outfd,
 		if (err)
 			return err;
 
-		shutdown(peerfd, SHUT_WR);
+		shut_wr(peerfd);
 
 		err = do_recvfile(peerfd, outfd);
 		*in_closed_after_out = true;
@@ -791,6 +796,9 @@ static int copyfd_io_sendfile(int infd, int peerfd, int outfd,
 		err = do_sendfile(infd, peerfd, size);
 		if (err)
 			return err;
+
+		shut_wr(peerfd);
+
 		err = do_recvfile(peerfd, outfd);
 		*in_closed_after_out = true;
 	}
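
For context on what the fixed '-m sendfile' mode now does: once the payload
has been pushed, the sender half-closes the connection so the receiver sees
EOF instead of running into the timeout, then keeps reading whatever the peer
still sends. A hypothetical standalone sketch of that pattern (not the
selftest code itself; the helper name and the trimmed error handling are
illustrative):

#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <unistd.h>

/* hypothetical helper: send a whole file, half-close, then drain the peer */
static int send_file_then_drain(int infd, int peerfd)
{
	struct stat st;
	char buf[4096];
	ssize_t n;

	if (fstat(infd, &st) < 0)
		return -1;

	/* a real implementation would loop until all bytes are pushed */
	if (sendfile(peerfd, infd, NULL, st.st_size) < 0)
		return -1;

	/* half-close: the receiver's read() returns 0 instead of timing out */
	shutdown(peerfd, SHUT_WR);

	/* keep reading until the peer closes its side as well */
	while ((n = read(peerfd, buf, sizeof(buf))) > 0)
		;

	return n == 0 ? 0 : -1;
}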