From patchwork Thu Mar 16 15:26:00 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13177810
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Linus Torvalds, netdev@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Willem de Bruijn
Subject: [RFC PATCH 10/28] ip, udp: Support MSG_SPLICE_PAGES
Date: Thu, 16 Mar 2023 15:26:00 +0000
Message-Id: <20230316152618.711970-11-dhowells@redhat.com>
In-Reply-To: <20230316152618.711970-1-dhowells@redhat.com>
References: <20230316152618.711970-1-dhowells@redhat.com>
MIME-Version: 1.0

Make IP/UDP sendmsg() support MSG_SPLICE_PAGES.  This causes pages to be
spliced from the source iterator if possible (the iterator must be
ITER_BVEC and the pages must be spliceable).

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.
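For context only (this is not part of the patch): a minimal sketch of how an
in-kernel caller might drive the new flag, assuming a connected UDP socket
"sock" and a page/len pair prepared by the caller.  MSG_SPLICE_PAGES requires
the message iterator to be ITER_BVEC:

	struct bio_vec bv;
	struct msghdr msg = {
		.msg_flags = MSG_SPLICE_PAGES | MSG_DONTWAIT,
	};
	int ret;

	/* Describe the page to hand over and wrap it in a bvec iterator. */
	bvec_set_page(&bv, page, len, 0);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bv, 1, len);

	/* With MSG_SPLICE_PAGES the page is attached to the skb, not copied. */
	ret = sock_sendmsg(sock, &msg);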
Signed-off-by: David Howells
cc: Willem de Bruijn
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---
 net/ipv4/ip_output.c | 89 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 86 insertions(+), 3 deletions(-)

diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 4e4e308c3230..721d7e4343ed 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -977,7 +977,7 @@ static int __ip_append_data(struct sock *sk,
 	int err;
 	int offset = 0;
 	bool zc = false;
-	unsigned int maxfraglen, fragheaderlen, maxnonfragsize;
+	unsigned int maxfraglen, fragheaderlen, maxnonfragsize, xlength;
 	int csummode = CHECKSUM_NONE;
 	struct rtable *rt = (struct rtable *)cork->dst;
 	unsigned int wmem_alloc_delta = 0;
@@ -1017,6 +1017,7 @@ static int __ip_append_data(struct sock *sk,
 	    (!exthdrlen || (rt->dst.dev->features & NETIF_F_HW_ESP_TX_CSUM)))
 		csummode = CHECKSUM_PARTIAL;
 
+	xlength = length;
 	if ((flags & MSG_ZEROCOPY) && length) {
 		struct msghdr *msg = from;
 
@@ -1047,6 +1048,16 @@ static int __ip_append_data(struct sock *sk,
 				skb_zcopy_set(skb, uarg, &extra_uref);
 			}
 		}
+	} else if ((flags & MSG_SPLICE_PAGES) && length) {
+		struct msghdr *msg = from;
+
+		if (!iov_iter_is_bvec(&msg->msg_iter))
+			return -EINVAL;
+		if (inet->hdrincl)
+			return -EPERM;
+		if (!(rt->dst.dev->features & NETIF_F_SG))
+			return -EOPNOTSUPP;
+		xlength = transhdrlen; /* We need an empty buffer to attach stuff to */
 	}
 
 	cork->length += length;
@@ -1074,6 +1085,50 @@ static int __ip_append_data(struct sock *sk,
 			unsigned int alloclen, alloc_extra;
 			unsigned int pagedlen;
 			struct sk_buff *skb_prev;
+
+			if (unlikely(flags & MSG_SPLICE_PAGES)) {
+				skb_prev = skb;
+				fraggap = skb_prev->len - maxfraglen;
+
+				alloclen = fragheaderlen + hh_len + fraggap + 15;
+				skb = sock_wmalloc(sk, alloclen, 1, sk->sk_allocation);
+				if (unlikely(!skb)) {
+					err = -ENOBUFS;
+					goto error;
+				}
+
+				/*
+				 * Fill in the control structures
+				 */
+				skb->ip_summed = CHECKSUM_NONE;
+				skb->csum = 0;
+				skb_reserve(skb, hh_len);
+
+				/*
+				 * Find where to start putting bytes.
+				 */
+				skb_put(skb, fragheaderlen + fraggap);
+				skb_reset_network_header(skb);
+				skb->transport_header = (skb->network_header +
+							 fragheaderlen);
+				if (fraggap) {
+					skb->csum = skb_copy_and_csum_bits(
+						skb_prev, maxfraglen,
+						skb_transport_header(skb),
+						fraggap);
+					skb_prev->csum = csum_sub(skb_prev->csum,
+								  skb->csum);
+					pskb_trim_unique(skb_prev, maxfraglen);
+				}
+
+				/*
+				 * Put the packet on the pending queue.
+				 */
+				__skb_queue_tail(&sk->sk_write_queue, skb);
+				continue;
+			}
+			xlength = length;
+
 alloc_new_skb:
 			skb_prev = skb;
 			if (skb_prev)
@@ -1085,7 +1140,7 @@ static int __ip_append_data(struct sock *sk,
 			 * If remaining data exceeds the mtu,
 			 * we know we need more fragment(s).
 			 */
-			datalen = length + fraggap;
+			datalen = xlength + fraggap;
 			if (datalen > mtu - fragheaderlen)
 				datalen = maxfraglen - fragheaderlen;
 			fraglen = datalen + fragheaderlen;
@@ -1099,7 +1154,7 @@ static int __ip_append_data(struct sock *sk,
 			 * because we have no idea what fragment will be
 			 * the last.
			 */
-			if (datalen == length + fraggap)
+			if (datalen == xlength + fraggap)
 				alloc_extra += rt->dst.trailer_len;
 
 			if ((flags & MSG_MORE) &&
@@ -1206,6 +1261,34 @@ static int __ip_append_data(struct sock *sk,
 				err = -EFAULT;
 				goto error;
 			}
+		} else if (flags & MSG_SPLICE_PAGES) {
+			struct msghdr *msg = from;
+			struct iov_iter *iter = &msg->msg_iter;
+			const struct bio_vec *bv = iter->bvec;
+
+			if (iov_iter_count(iter) <= 0) {
+				err = -EIO;
+				goto error;
+			}
+
+			copy = iov_iter_single_seg_count(&msg->msg_iter);
+
+			err = skb_append_pagefrags(skb, bv->bv_page,
+						   bv->bv_offset + iter->iov_offset,
+						   copy);
+			if (err < 0)
+				goto error;
+
+			if (skb->ip_summed == CHECKSUM_NONE) {
+				__wsum csum;
+				csum = csum_page(bv->bv_page,
+						 bv->bv_offset + iter->iov_offset, copy);
+				skb->csum = csum_block_add(skb->csum, csum, skb->len);
+			}
+
+			iov_iter_advance(iter, copy);
+			skb_len_add(skb, copy);
+			refcount_add(copy, &sk->sk_wmem_alloc);
 		} else if (!zc) {
 			int i = skb_shinfo(skb)->nr_frags;