From patchwork Wed Mar 29 14:13:38 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192647
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Chuck Lever III, Linus Torvalds,
    netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    "Martin K. Petersen", linux-scsi@vger.kernel.org,
    target-devel@vger.kernel.org
Subject: [RFC PATCH v2 32/48] iscsi: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage
Date: Wed, 29 Mar 2023 15:13:38 +0100
Message-Id: <20230329141354.516864-33-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>
X-Mailing-List: target-devel@vger.kernel.org

Use sendmsg() with MSG_SPLICE_PAGES rather than sendpage.  This allows
multiple pages and multipage folios to be passed through.
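
[Illustrative sketch, not part of the patch] The conversion amounts to
describing the page fragment with a bio_vec, pointing msg_iter at it and
calling sendmsg() instead of ->sendpage().  The helper below is hypothetical
and only shows the shape of the replacement, using the same calls the patch
itself uses (bvec_set_page(), iov_iter_bvec(), sock_sendmsg()):

#include <linux/bvec.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>

static int splice_one_page(struct socket *sock, struct page *page,
			   unsigned int offset, unsigned int len, bool more)
{
	struct bio_vec bv;
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

	if (more)
		msg.msg_flags |= MSG_MORE;

	/* Describe the page fragment and aim the iterator at it. */
	bvec_set_page(&bv, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bv, 1, len);

	/* TCP may splice the page in by reference rather than copying it. */
	return sock_sendmsg(sock, &msg);
}
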
TODO: iscsit_fe_sendpage_sg() should perhaps set up a bio_vec array for the
entire set of pages it's going to transfer, plus two more for the header and
trailer (using page fragments to hold them), and then call sendmsg() once for
the entire message.

Signed-off-by: David Howells
cc: "Martin K. Petersen"
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-scsi@vger.kernel.org
cc: target-devel@vger.kernel.org
cc: netdev@vger.kernel.org
---
 drivers/scsi/iscsi_tcp.c                 | 31 ++++++++++++------------
 drivers/scsi/iscsi_tcp.h                 |  2 +-
 drivers/target/iscsi/iscsi_target_util.c | 14 ++++++-----
 3 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index c76f82fb8b63..cf3eb55d2a76 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -301,35 +301,37 @@ static int iscsi_sw_tcp_xmit_segment(struct iscsi_tcp_conn *tcp_conn,
 
 	while (!iscsi_tcp_segment_done(tcp_conn, segment, 0, r)) {
 		struct scatterlist *sg;
+		struct msghdr msg = {};
+		union {
+			struct kvec kv;
+			struct bio_vec bv;
+		} vec;
 		unsigned int offset, copy;
-		int flags = 0;
 
 		r = 0;
 		offset = segment->copied;
 		copy = segment->size - offset;
 
 		if (segment->total_copied + segment->size < segment->total_size)
-			flags |= MSG_MORE | MSG_SENDPAGE_NOTLAST;
+			msg.msg_flags |= MSG_MORE | MSG_SENDPAGE_NOTLAST;
 
 		if (tcp_sw_conn->queue_recv)
-			flags |= MSG_DONTWAIT;
+			msg.msg_flags |= MSG_DONTWAIT;
 
-		/* Use sendpage if we can; else fall back to sendmsg */
 		if (!segment->data) {
+			if (tcp_conn->iscsi_conn->datadgst_en)
+				msg.msg_flags |= MSG_SPLICE_PAGES;
 			sg = segment->sg;
 			offset += segment->sg_offset + sg->offset;
-			r = tcp_sw_conn->sendpage(sk, sg_page(sg), offset,
-						  copy, flags);
+			bvec_set_page(&vec.bv, sg_page(sg), copy, offset);
+			iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &vec.bv, 1, copy);
 		} else {
-			struct msghdr msg = { .msg_flags = flags };
-			struct kvec iov = {
-				.iov_base = segment->data + offset,
-				.iov_len = copy
-			};
-
-			r = kernel_sendmsg(sk, &msg, &iov, 1, copy);
+			vec.kv.iov_base = segment->data + offset;
+			vec.kv.iov_len = copy;
+			iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &vec.kv, 1, copy);
 		}
 
+		r = sock_sendmsg(sk, &msg);
 		if (r < 0) {
 			iscsi_tcp_segment_unmap(segment);
 			return r;
@@ -746,7 +748,6 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
 	sock_no_linger(sk);
 
 	iscsi_sw_tcp_conn_set_callbacks(conn);
-	tcp_sw_conn->sendpage = tcp_sw_conn->sock->ops->sendpage;
 	/*
 	 * set receive state machine into initial state
 	 */
@@ -778,8 +779,6 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
 			mutex_unlock(&tcp_sw_conn->sock_lock);
 			return -ENOTCONN;
 		}
-		tcp_sw_conn->sendpage = conn->datadgst_en ?
-			sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage;
 		mutex_unlock(&tcp_sw_conn->sock_lock);
 		break;
 	case ISCSI_PARAM_MAX_R2T:
diff --git a/drivers/scsi/iscsi_tcp.h b/drivers/scsi/iscsi_tcp.h
index 68e14a344904..d6ec08d7eb63 100644
--- a/drivers/scsi/iscsi_tcp.h
+++ b/drivers/scsi/iscsi_tcp.h
@@ -48,7 +48,7 @@ struct iscsi_sw_tcp_conn {
 	uint32_t		sendpage_failures_cnt;
 	uint32_t		discontiguous_hdr_cnt;
 
-	ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
+	bool			can_splice_to_tcp;
 };
 
 struct iscsi_sw_tcp_host {
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index 26dc8ed3045b..c7d58e41ac3b 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -1078,6 +1078,8 @@ int iscsit_fe_sendpage_sg(
 	struct iscsit_conn *conn)
 {
 	struct scatterlist *sg = cmd->first_data_sg;
+	struct bio_vec bvec;
+	struct msghdr msghdr = { .msg_flags = MSG_SPLICE_PAGES, };
 	struct kvec iov;
 	u32 tx_hdr_size, data_len;
 	u32 offset = cmd->first_data_sg_off;
@@ -1121,17 +1123,17 @@ int iscsit_fe_sendpage_sg(
 		u32 space = (sg->length - offset);
 		u32 sub_len = min_t(u32, data_len, space);
 send_pg:
-		tx_sent = conn->sock->ops->sendpage(conn->sock,
-			sg_page(sg), sg->offset + offset, sub_len, 0);
+		bvec_set_page(&bvec, sg_page(sg), sub_len, sg->offset + offset);
+		iov_iter_bvec(&msghdr.msg_iter, ITER_SOURCE, &bvec, 1, sub_len);
+
+		tx_sent = conn->sock->ops->sendmsg(conn->sock, &msghdr, sub_len);
 		if (tx_sent != sub_len) {
 			if (tx_sent == -EAGAIN) {
-				pr_err("tcp_sendpage() returned"
-				       " -EAGAIN\n");
+				pr_err("sendmsg/splice returned -EAGAIN\n");
 				goto send_pg;
 			}
 
-			pr_err("tcp_sendpage() failure: %d\n",
-			       tx_sent);
+			pr_err("sendmsg/splice failure: %d\n", tx_sent);
 			return -1;
 		}
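
[Illustrative sketch, not part of the patch] The TODO in the message above
suggests that iscsit_fe_sendpage_sg() could build one bio_vec array covering
the header, every page of the scatterlist and the trailer, and then transmit
the whole PDU with a single sendmsg(MSG_SPLICE_PAGES).  The function and
parameter names below (iscsit_xmit_pdu_spliced, hdr_page/trl_page etc.) are
hypothetical:

#include <linux/bvec.h>
#include <linux/net.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/socket.h>
#include <linux/uio.h>

static int iscsit_xmit_pdu_spliced(struct socket *sock,
				   struct page *hdr_page, unsigned int hdr_off,
				   unsigned int hdr_len,
				   struct scatterlist *sg, unsigned int nr_sg,
				   struct page *trl_page, unsigned int trl_off,
				   unsigned int trl_len)
{
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };
	struct bio_vec *bv;
	size_t total = hdr_len + trl_len;
	unsigned int i, n = 0;
	int ret;

	/* One bio_vec per payload page plus two for header and trailer. */
	bv = kcalloc(nr_sg + 2, sizeof(*bv), GFP_KERNEL);
	if (!bv)
		return -ENOMEM;

	bvec_set_page(&bv[n++], hdr_page, hdr_len, hdr_off);
	for (i = 0; i < nr_sg; i++, sg = sg_next(sg)) {
		bvec_set_page(&bv[n++], sg_page(sg), sg->length, sg->offset);
		total += sg->length;
	}
	bvec_set_page(&bv[n++], trl_page, trl_len, trl_off);

	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bv, n, total);

	/* One call submits the entire message; a short send would still
	 * have to be handled by the caller. */
	ret = sock_sendmsg(sock, &msg);
	kfree(bv);
	return ret;
}
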
From patchwork Wed Mar 29 14:13:39 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192646
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Chuck Lever III, Linus Torvalds,
    netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    "Martin K. Petersen", linux-scsi@vger.kernel.org,
    target-devel@vger.kernel.org
Subject: [RFC PATCH v2 33/48] iscsi: Assume "sendpage" is okay in iscsi_tcp_segment_map()
Date: Wed, 29 Mar 2023 15:13:39 +0100
Message-Id: <20230329141354.516864-34-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>
X-Mailing-List: target-devel@vger.kernel.org

As iscsi is now using sendmsg() with MSG_SPLICE_PAGES rather than sendpage,
assume that sendpage_ok() will return true in iscsi_tcp_segment_map() and
leave it to TCP to copy the data if not.

Signed-off-by: David Howells
cc: "Martin K. Petersen"
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-scsi@vger.kernel.org
cc: target-devel@vger.kernel.org
cc: netdev@vger.kernel.org
---
 drivers/scsi/libiscsi_tcp.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index c182aa83f2c9..07ba0d864820 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -128,18 +128,11 @@ static void iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
 	 * coalescing neighboring slab objects into a single frag which
 	 * triggers one of hardened usercopy checks.
 	 */
-	if (!recv && sendpage_ok(sg_page(sg)))
+	if (!recv)
 		return;
 
-	if (recv) {
-		segment->atomic_mapped = true;
-		segment->sg_mapped = kmap_atomic(sg_page(sg));
-	} else {
-		segment->atomic_mapped = false;
-		/* the xmit path can sleep with the page mapped so use kmap */
-		segment->sg_mapped = kmap(sg_page(sg));
-	}
-
+	segment->atomic_mapped = true;
+	segment->sg_mapped = kmap_atomic(sg_page(sg));
 	segment->data = segment->sg_mapped + sg->offset + segment->sg_offset;
 }
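
[Reference note, not part of the patch] sendpage_ok() is the check the
transmit path no longer needs to make itself: with MSG_SPLICE_PAGES the
transport performs an equivalent test and falls back to copying the data
when a page cannot be spliced by reference.  A rough sketch of what the
helper in include/linux/net.h tests (paraphrased from memory, not an
authoritative copy):

/* A page may be handed to the transport by reference only if it is not a
 * slab page and still holds at least one reference. */
static inline bool sendpage_ok(struct page *page)
{
	return !PageSlab(page) && page_count(page) >= 1;
}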