From patchwork Wed Aug 10 14:47:16 2016
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 9273115
Date: Wed, 10 Aug 2016 17:47:16 +0300
From: "Michael S.
Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Message-ID: <1470840390-29235-1-git-send-email-mst@redhat.com>
Subject: [Qemu-devel] [PATCH v2] virtio-net: allow increasing rx queue size
Cc: Cornelia Huck, Patrik Hermansson, Jason Wang

This allows increasing the rx queue size up to 1024: unlike with tx,
guests don't put huge S/G lists into the RX ring, so the risk of
running into the 1024 maximum due to some off-by-one seems small.

It's helpful for users like OVS-DPDK which don't do any buffering on
the host: 1K roughly matches 500 entries in tun + 256 in the current
rx queue, which seems to work reasonably well. We could probably make
do with ~750 entries, but the virtio spec limits us to powers of two.
It might be a good idea to specify an s/g size limit in a future
version.

It also might be possible to make the queue size smaller down the
road; 64 seems like the minimal value which will still work (as guests
seem to assume a queue full of 1.5K buffers is enough to process the
largest incoming packet, which is ~64K). No one actually asked for
this, and with virtio 1, guests can reduce the ring size without any
host configuration, so don't bother with it for now.

Cc: Cornelia Huck
Cc: Jason Wang
Suggested-by: Patrik Hermansson
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Cornelia Huck
---
changes from v1: add macros as suggested by Cornelia

 include/hw/virtio/virtio-net.h |  1 +
 hw/net/virtio-net.c            | 26 +++++++++++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 91ed97c..0ced975 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -35,6 +35,7 @@ typedef struct virtio_net_conf
     uint32_t txtimer;
     int32_t txburst;
     char *tx;
+    uint16_t rx_queue_size;
 } virtio_net_conf;
 
 /* Maximum packet size we can receive from tap device: header + 64k */
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 01f1351..6b8ae2c 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -31,6 +31,11 @@
 #define MAC_TABLE_ENTRIES    64
 #define MAX_VLAN    (1 << 12)   /* Per 802.1Q definition */
 
+/* previously fixed value */
+#define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
+/* for now, only allow larger queues; with virtio-1, guest can downsize */
+#define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
+
 /*
  * Calculate the number of bytes up to and including the given 'field' of
  * 'container'.
@@ -1412,7 +1417,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
 
-    n->vqs[index].rx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_rx);
+    n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
+                                           virtio_net_handle_rx);
     if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
@@ -1720,6 +1726,22 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     virtio_net_set_config_size(n, n->host_features);
     virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);
 
+    /*
+     * We set a lower limit on RX queue size to what it always was.
+     * Guests that want a smaller ring can always resize it without
+     * help from us (using virtio 1 and up).
+     */
+    if (n->net_conf.rx_queue_size < VIRTIO_NET_RX_QUEUE_MIN_SIZE ||
+        n->net_conf.rx_queue_size > VIRTQUEUE_MAX_SIZE ||
+        (n->net_conf.rx_queue_size & (n->net_conf.rx_queue_size - 1))) {
+        error_setg(errp, "Invalid rx_queue_size (= %" PRIu16 "), "
+                   "must be a power of 2 between %d and %d.",
+                   n->net_conf.rx_queue_size, VIRTIO_NET_RX_QUEUE_MIN_SIZE,
+                   VIRTQUEUE_MAX_SIZE);
+        virtio_cleanup(vdev);
+        return;
+    }
+
     n->max_queues = MAX(n->nic_conf.peers.queues, 1);
     if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
         error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
@@ -1880,6 +1902,8 @@ static Property virtio_net_properties[] = {
                        TX_TIMER_INTERVAL),
     DEFINE_PROP_INT32("x-txburst", VirtIONet, net_conf.txburst, TX_BURST),
     DEFINE_PROP_STRING("tx", VirtIONet, net_conf.tx),
+    DEFINE_PROP_UINT16("rx_queue_size", VirtIONet, net_conf.rx_queue_size,
+                       VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE),
     DEFINE_PROP_END_OF_LIST(),
 };
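
A note on the realize-time check above: it relies on the usual bit
trick that, for x > 0, x & (x - 1) clears the lowest set bit, so the
result is zero exactly when x is a power of two. A minimal standalone
sketch of the same validation, with the limits hard-coded to the
values the patch uses (the helper and test harness are mine, not part
of the patch):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the check in virtio_net_device_realize(): size must be
     * a power of 2 between VIRTIO_NET_RX_QUEUE_MIN_SIZE (256) and
     * VIRTQUEUE_MAX_SIZE (1024). */
    static bool rx_queue_size_valid(uint16_t size)
    {
        return size >= 256 && size <= 1024 &&
               (size & (size - 1)) == 0;
    }

    int main(void)
    {
        uint16_t sizes[] = { 128, 256, 512, 768, 1024 };
        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            /* prints: 128:0 256:1 512:1 768:0 1024:1 */
            printf("%d:%d ", sizes[i], rx_queue_size_valid(sizes[i]));
        }
        printf("\n");
        return 0;
    }

This is also why ~750 entries, mentioned in the commit message, can't
be configured directly: 768 fails the power-of-two test, so the next
usable size is 1024.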
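For reference, a hypothetical usage example (not part of the patch):
with this applied, the new property is set like any other virtio-net
device property on the QEMU command line, e.g.

    -netdev tap,id=net0 -device virtio-net-pci,netdev=net0,rx_queue_size=1024

(the netdev id "net0" is a placeholder). Omitting the property keeps
the previous behavior via VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE (256), and
any value that is not a power of 2 in [256, 1024] fails realize with
the "Invalid rx_queue_size" error added above.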