From patchwork Fri Jun 3 05:31:00 2022
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 12868534
From: Arseniy Krasnov
To: Stefano Garzarella, Stefan Hajnoczi, "Michael S. Tsirkin", "David S. Miller", Jason Wang, Jakub Kicinski, Paolo Abeni
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, kernel, Krasnov Arseniy
Subject: [RFC PATCH v2 1/8] virtio/vsock: rework packet allocation logic
Date: Fri, 3 Jun 2022 05:31:00 +0000
Message-ID: <78157286-3663-202f-da94-1a17e4ffe819@sberdevices.ru>
X-Mailing-List: kvm@vger.kernel.org

To support zerocopy receive, packet buffer allocation is reworked: buffers that may be mapped into a user's VMA can no longer be allocated with 'kmalloc()', because the kernel does not allow slab pages to be mapped into user space, so such buffers are now taken directly from the buddy allocator. TX packet buffers (which are never mapped to user space) keep using 'kmalloc()', and a new flag in the packet structure records which allocator was used, so the free path can distinguish 'kmalloc()' buffers from raw page buffers.
Signed-off-by: Arseniy Krasnov
---
 include/linux/virtio_vsock.h            | 1 +
 net/vmw_vsock/virtio_transport.c        | 8 ++++++--
 net/vmw_vsock/virtio_transport_common.c | 9 ++++++++-
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 35d7eedb5e8e..d02cb7aa922f 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -50,6 +50,7 @@ struct virtio_vsock_pkt {
 	u32 off;
 	bool reply;
 	bool tap_delivered;
+	bool slab_buf;
 };
 
 struct virtio_vsock_pkt_info {
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index ad64f403536a..19909c1e9ba3 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -255,16 +255,20 @@ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
 	vq = vsock->vqs[VSOCK_VQ_RX];
 
 	do {
+		struct page *buf_page;
+
 		pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
 		if (!pkt)
 			break;
 
-		pkt->buf = kmalloc(buf_len, GFP_KERNEL);
-		if (!pkt->buf) {
+		buf_page = alloc_page(GFP_KERNEL);
+
+		if (!buf_page) {
 			virtio_transport_free_pkt(pkt);
 			break;
 		}
 
+		pkt->buf = page_to_virt(buf_page);
 		pkt->buf_len = buf_len;
 		pkt->len = buf_len;
 
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index ec2c2afbf0d0..278567f748f2 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -69,6 +69,7 @@ virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
 		if (!pkt->buf)
 			goto out_pkt;
 
+		pkt->slab_buf = true;
 		pkt->buf_len = len;
 
 		err = memcpy_from_msg(pkt->buf, info->msg, len);
@@ -1342,7 +1343,13 @@ EXPORT_SYMBOL_GPL(virtio_transport_recv_pkt);
 
 void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
 {
-	kfree(pkt->buf);
+	if (pkt->buf_len) {
+		if (pkt->slab_buf)
+			kfree(pkt->buf);
+		else
+			free_pages((unsigned long)pkt->buf, get_order(pkt->buf_len));
+	}
+
 	kfree(pkt);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_free_pkt);
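
Not part of the patch, just an illustrative note: the sketch below condenses the allocation/free split described in the commit message into one self-contained example. The 'example_pkt' struct and 'example_*' helpers are hypothetical names invented for illustration; only kmalloc()/kfree(), alloc_pages()/page_to_virt() and free_pages()/get_order() are real kernel APIs, and error handling is reduced to the minimum.

/* Hypothetical, self-contained sketch of the buffer strategy in this patch:
 * RX buffers (candidates for mapping into a user VMA) come straight from
 * the buddy allocator, TX buffers keep using kmalloc(), and a flag records
 * which allocator was used so the free path can pick kfree() vs free_pages().
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_pkt {
	void *buf;
	u32 buf_len;
	bool slab_buf;	/* true: kmalloc()'ed buffer, false: raw pages */
};

/* RX path: buffer may later be mapped to user space, so avoid slab pages. */
static int example_alloc_rx_buf(struct example_pkt *pkt, u32 len)
{
	struct page *page = alloc_pages(GFP_KERNEL, get_order(len));

	if (!page)
		return -ENOMEM;

	pkt->buf = page_to_virt(page);
	pkt->buf_len = len;
	pkt->slab_buf = false;
	return 0;
}

/* TX path: buffer is never mapped to user space, so slab is fine. */
static int example_alloc_tx_buf(struct example_pkt *pkt, u32 len)
{
	pkt->buf = kmalloc(len, GFP_KERNEL);
	if (!pkt->buf)
		return -ENOMEM;

	pkt->buf_len = len;
	pkt->slab_buf = true;
	return 0;
}

/* Free path: the flag decides which allocator gets the buffer back. */
static void example_free_buf(struct example_pkt *pkt)
{
	if (!pkt->buf_len)
		return;

	if (pkt->slab_buf)
		kfree(pkt->buf);
	else
		free_pages((unsigned long)pkt->buf, get_order(pkt->buf_len));
}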