From patchwork Wed Mar 29 14:13:47 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192649
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
	Christian Brauner, Chuck Lever III, Linus Torvalds,
	netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Trond Myklebust, Anna Schumaker, linux-nfs@vger.kernel.org
Subject: [RFC PATCH v2 41/48] sunrpc: Rely on TCP sendmsg + MSG_SPLICE_PAGES to copy unspliceable data
Date: Wed, 29 Mar 2023 15:13:47 +0100
Message-Id: <20230329141354.516864-42-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>
X-Mailing-List: linux-nfs@vger.kernel.org

Rather than copying data in svc_tcp_sendmsg() into page fragments, just hand
in ITER_KVEC iterators as part of the ITER_ITERLIST and rely on TCP to copy
them if the pages they're residing on belong to the slab or have a zero
refcount.

Signed-off-by: David Howells
cc: Trond Myklebust
cc: Anna Schumaker
cc: Chuck Lever
cc: Jeff Layton
cc: "David S. Miller"
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: linux-nfs@vger.kernel.org cc: netdev@vger.kernel.org --- net/sunrpc/svcsock.c | 44 ++++++++++++-------------------------------- 1 file changed, 12 insertions(+), 32 deletions(-) diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index f1cc53aad6e0..c1421f6fe57a 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1071,47 +1071,27 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp) static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, rpc_fraghdr marker, unsigned int *sentp) { - const struct kvec *head = xdr->head; - const struct kvec *tail = xdr->tail; - struct iov_iter iters[3]; - struct bio_vec head_bv, tail_bv; - struct msghdr msg = { - .msg_flags = MSG_SPLICE_PAGES, - }; - void *m, *t; - int ret, n = 2, size; + struct iov_iter iters[4]; + struct kvec marker_kv; + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES, }; + int ret, n = 0, size; *sentp = 0; ret = xdr_alloc_bvec(xdr, GFP_KERNEL); if (ret < 0) return ret; - m = page_frag_alloc(NULL, sizeof(marker) + head->iov_len + tail->iov_len, - GFP_KERNEL); - if (!m) - return -ENOMEM; - - memcpy(m, &marker, sizeof(marker)); - if (head->iov_len) - memcpy(m + sizeof(marker), head->iov_base, head->iov_len); - bvec_set_virt(&head_bv, m, sizeof(marker) + head->iov_len); - iov_iter_bvec(&iters[0], ITER_SOURCE, &head_bv, 1, - sizeof(marker) + head->iov_len); - - iov_iter_bvec(&iters[1], ITER_SOURCE, xdr->bvec, + marker_kv.iov_base = ▮ + marker_kv.iov_len = sizeof(marker); + iov_iter_kvec(&iters[n++], ITER_SOURCE, &marker_kv, 1, sizeof(marker)); + iov_iter_kvec(&iters[n++], ITER_SOURCE, xdr->head, 1, xdr->head->iov_len); + iov_iter_bvec(&iters[n++], ITER_SOURCE, xdr->bvec, xdr_buf_pagecount(xdr), xdr->page_len); - if (tail->iov_len) { - t = page_frag_alloc(NULL, tail->iov_len, GFP_KERNEL); - if (!t) - return -ENOMEM; - memcpy(t, tail->iov_base, tail->iov_len); - bvec_set_virt(&tail_bv, t, tail->iov_len); - iov_iter_bvec(&iters[2], ITER_SOURCE, &tail_bv, 1, tail->iov_len); - n++; - } + if (xdr->tail->iov_len) + iov_iter_kvec(&iters[n++], ITER_SOURCE, xdr->tail, 1, xdr->tail->iov_len); - size = sizeof(marker) + head->iov_len + xdr->page_len + tail->iov_len; + size = sizeof(marker) + xdr->head->iov_len + xdr->page_len + xdr->tail->iov_len; iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, n, size); ret = sock_sendmsg(sock, &msg);