From patchwork Mon Oct 23 20:44:39 2023
X-Patchwork-Submitter: Mat Martineau
X-Patchwork-Id: 13433541
X-Patchwork-Delegate: mat@martineau.name
From: Mat Martineau
Date: Mon, 23 Oct 2023 13:44:39 -0700
Subject: [PATCH net-next 6/9] mptcp: use copy_from_iter helpers on transmit
X-Mailing-List: mptcp@lists.linux.dev
Message-Id: <20231023-send-net-next-20231023-2-v1-6-9dc60939d371@kernel.org>
References: <20231023-send-net-next-20231023-2-v1-0-9dc60939d371@kernel.org>
In-Reply-To: <20231023-send-net-next-20231023-2-v1-0-9dc60939d371@kernel.org>
To: Matthieu Baerts, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: netdev@vger.kernel.org, mptcp@lists.linux.dev, Mat Martineau
X-Mailer: b4 0.12.4

From: Paolo Abeni

The perf traces show a high cost for the MPTCP transmit path memcpy.

It turns out that the helper currently in use carries quite a bit of
unneeded overhead, e.g. to map/unmap the memory pages.

Moving to the 'copy_from_iter' variant removes such overhead and
additionally gains no-cache support.

Reviewed-by: Mat Martineau
Signed-off-by: Paolo Abeni
Signed-off-by: Mat Martineau
---
 net/mptcp/protocol.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 7036e30c449f..5489f024dd7e 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1760,6 +1760,18 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
 	return ret;
 }
 
+static int do_copy_data_nocache(struct sock *sk, int copy,
+				struct iov_iter *from, char *to)
+{
+	if (sk->sk_route_caps & NETIF_F_NOCACHE_COPY) {
+		if (!copy_from_iter_full_nocache(to, copy, from))
+			return -EFAULT;
+	} else if (!copy_from_iter_full(to, copy, from)) {
+		return -EFAULT;
+	}
+	return 0;
+}
+
 static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
@@ -1833,11 +1845,10 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		if (!sk_wmem_schedule(sk, total_ts))
 			goto wait_for_memory;
 
-		if (copy_page_from_iter(dfrag->page, offset, psize,
-					&msg->msg_iter) != psize) {
-			ret = -EFAULT;
+		ret = do_copy_data_nocache(sk, psize, &msg->msg_iter,
+					   page_address(dfrag->page) + offset);
+		if (ret)
 			goto do_error;
-		}
 
 		/* data successfully copied into the write queue */
 		sk_forward_alloc_add(sk, -total_ts);