From patchwork Mon Feb 6 06:57:16 2023
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 13129375
From: Arseniy Krasnov
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Arseniy Krasnov, "Krasnov Arseniy"
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, kernel
Subject: [RFC PATCH v1 04/12] vhost/vsock: non-linear skb handling support
Date: Mon, 6 Feb 2023 06:57:16 +0000
In-Reply-To: <0e7c6fc4-b4a6-a27b-36e9-359597bba2b5@sberdevices.ru>
X-Mailing-List: kvm@vger.kernel.org

This adds copying to the guest's virtio buffers from non-linear skbs. Such
skbs are created by the protocol layer when the MSG_ZEROCOPY flag is used.
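For context, a minimal userspace sketch (not part of this patch) of how a sender
could opt in to zerocopy transmission and end up producing such non-linear skbs,
assuming the rest of this series wires SO_ZEROCOPY/MSG_ZEROCOPY up for AF_VSOCK.
The function name, CID and port are arbitrary examples.

#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000
#endif

static int send_zerocopy_example(const void *buf, size_t len)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,	/* example peer CID */
		.svm_port = 1234,		/* example port */
	};
	int one = 1;
	int fd;

	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	/* Opt in to zerocopy transmission on this socket. */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) ||
	    connect(fd, (struct sockaddr *)&addr, sizeof(addr))) {
		close(fd);
		return -1;
	}

	/*
	 * With MSG_ZEROCOPY the protocol layer pins 'buf' and builds a
	 * non-linear skb whose payload lives in page fragments; completion
	 * notifications are read later from the socket error queue
	 * (recvmsg() with MSG_ERRQUEUE).
	 */
	if (send(fd, buf, len, MSG_ZEROCOPY) < 0) {
		close(fd);
		return -1;
	}

	close(fd);
	return 0;
}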
Signed-off-by: Arseniy Krasnov
---
 drivers/vhost/vsock.c        | 56 ++++++++++++++++++++++++++++++++----
 include/linux/virtio_vsock.h | 12 ++++++++
 2 files changed, 63 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 1f3b89c885cc..60b9cafa3e31 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -86,6 +86,44 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
 	return NULL;
 }
 
+static int vhost_transport_copy_nonlinear_skb(struct sk_buff *skb,
+					      struct iov_iter *iov_iter,
+					      size_t len)
+{
+	size_t rest_len = len;
+
+	while (rest_len && virtio_vsock_skb_has_frags(skb)) {
+		struct bio_vec *curr_vec;
+		size_t curr_vec_end;
+		size_t to_copy;
+		int curr_frag;
+		int curr_offs;
+
+		curr_frag = VIRTIO_VSOCK_SKB_CB(skb)->curr_frag;
+		curr_offs = VIRTIO_VSOCK_SKB_CB(skb)->frag_off;
+		curr_vec = &skb_shinfo(skb)->frags[curr_frag];
+
+		curr_vec_end = curr_vec->bv_offset + curr_vec->bv_len;
+		to_copy = min(rest_len, (size_t)(curr_vec_end - curr_offs));
+
+		if (copy_page_to_iter(curr_vec->bv_page, curr_offs,
+				      to_copy, iov_iter) != to_copy)
+			return -1;
+
+		rest_len -= to_copy;
+		VIRTIO_VSOCK_SKB_CB(skb)->frag_off += to_copy;
+
+		if (VIRTIO_VSOCK_SKB_CB(skb)->frag_off == (curr_vec_end)) {
+			VIRTIO_VSOCK_SKB_CB(skb)->curr_frag++;
+			VIRTIO_VSOCK_SKB_CB(skb)->frag_off = 0;
+		}
+	}
+
+	skb->data_len -= len;
+
+	return 0;
+}
+
 static void
 vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 			    struct vhost_virtqueue *vq)
@@ -197,11 +235,19 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 			break;
 		}
 
-		nbytes = copy_to_iter(skb->data, payload_len, &iov_iter);
-		if (nbytes != payload_len) {
-			kfree_skb(skb);
-			vq_err(vq, "Faulted on copying pkt buf\n");
-			break;
+		if (skb_is_nonlinear(skb)) {
+			if (vhost_transport_copy_nonlinear_skb(skb, &iov_iter,
+							       payload_len)) {
+				vq_err(vq, "Faulted on copying pkt buf from page\n");
+				break;
+			}
+		} else {
+			nbytes = copy_to_iter(skb->data, payload_len, &iov_iter);
+			if (nbytes != payload_len) {
+				kfree_skb(skb);
+				vq_err(vq, "Faulted on copying pkt buf\n");
+				break;
+			}
 		}
 
 		/* Deliver to monitoring devices all packets that we
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 3f9c16611306..e7efdb78ce6e 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -12,6 +12,10 @@
 struct virtio_vsock_skb_cb {
 	bool reply;
 	bool tap_delivered;
+	/* Current fragment in 'frags' of skb. */
+	u32 curr_frag;
+	/* Offset from 0 in current fragment. */
+	u32 frag_off;
 };
 
 #define VIRTIO_VSOCK_SKB_CB(skb) ((struct virtio_vsock_skb_cb *)((skb)->cb))
@@ -46,6 +50,14 @@ static inline void virtio_vsock_skb_clear_tap_delivered(struct sk_buff *skb)
 	VIRTIO_VSOCK_SKB_CB(skb)->tap_delivered = false;
 }
 
+static inline bool virtio_vsock_skb_has_frags(struct sk_buff *skb)
+{
+	if (!skb_is_nonlinear(skb))
+		return false;
+
+	return VIRTIO_VSOCK_SKB_CB(skb)->curr_frag != skb_shinfo(skb)->nr_frags;
+}
+
 static inline void virtio_vsock_skb_rx_put(struct sk_buff *skb)
 {
 	u32 len;
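Not part of the patch, only an illustration for reviewers: a small self-contained
userspace model of the fragment cursor introduced above (the 'curr_frag'/'frag_off'
pair kept in skb->cb), showing how one non-linear payload is drained into several
smaller "guest buffers" across repeated calls. All names and sizes here are made up.

#include <stdio.h>
#include <stddef.h>
#include <string.h>

struct frag {
	const char *data;
	size_t len;
};

struct cursor {
	size_t curr_frag;	/* index of the fragment being copied */
	size_t frag_off;	/* bytes already consumed in that fragment */
};

/* Copy at most 'len' bytes from the fragments into 'dst', resuming at the cursor. */
static size_t copy_frags(struct cursor *c, const struct frag *frags,
			 size_t nr_frags, char *dst, size_t len)
{
	size_t copied = 0;

	while (copied < len && c->curr_frag < nr_frags) {
		const struct frag *f = &frags[c->curr_frag];
		size_t to_copy = f->len - c->frag_off;

		if (to_copy > len - copied)
			to_copy = len - copied;

		memcpy(dst + copied, f->data + c->frag_off, to_copy);
		copied += to_copy;
		c->frag_off += to_copy;

		/* Fragment fully consumed: advance to the next one. */
		if (c->frag_off == f->len) {
			c->curr_frag++;
			c->frag_off = 0;
		}
	}

	return copied;
}

int main(void)
{
	const struct frag frags[] = {
		{ "hello ", 6 }, { "zero", 4 }, { "copy world", 10 },
	};
	struct cursor c = { 0, 0 };
	char buf[8];
	size_t n;

	/* Drain the 20-byte payload into 8-byte "guest buffers". */
	while ((n = copy_frags(&c, frags, 3, buf, sizeof(buf))) > 0)
		printf("copied %zu bytes: %.*s\n", n, (int)n, buf);

	return 0;
}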