From patchwork Wed Aug 3 23:16:14 2016
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 9262313
Date: Thu, 4 Aug 2016 02:16:14 +0300
From: "Michael S. Tsirkin"
Tsirkin" To: qemu-devel@nongnu.org Message-ID: <1470266160-17896-1-git-send-email-mst@redhat.com> MIME-Version: 1.0 Content-Disposition: inline X-Mutt-Fcc: =sent X-Scanned-By: MIMEDefang 2.68 on 10.5.11.27 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.25]); Wed, 03 Aug 2016 23:16:17 +0000 (UTC) X-detected-operating-system: by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic] X-Received-From: 209.132.183.28 Subject: [Qemu-devel] [PATCH] virtio-net: allow increasing rx queue size X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Jason Wang Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" X-Virus-Scanned: ClamAV using ClamSMTP This allows increasing the rx queue size up to 1024: unlike with tx, guests don't put in huge S/G lists into RX so the risk of running into the max 1024 limitation due to some off-by-one seems small. It's helpful for users like OVS-DPDK which don't do any buffering on the host - 1K roughly matches 500 entries in tun + 256 in the current rx queue, which seems to work reasonably well. We could probably make do with ~750 entries but virtio spec limits us to powers of two. It might be a good idea to specify an s/g size limit in a future version. It also might be possible to make the queue size smaller down the road, 64 seems like the minimal value which will still work (as guests seem to assume a queue full of 1.5K buffers is enough to process the largest incoming packet, which is ~64K). No one actually asked for this, and with virtio 1 guests can reduce ring size without need for host configuration, so don't bother with this for now. Signed-off-by: Michael S. Tsirkin Reviewed-by: Jason Wang --- include/hw/virtio/virtio-net.h | 1 + hw/net/virtio-net.c | 22 +++++++++++++++++++++- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h index 91ed97c..0ced975 100644 --- a/include/hw/virtio/virtio-net.h +++ b/include/hw/virtio/virtio-net.h @@ -35,6 +35,7 @@ typedef struct virtio_net_conf uint32_t txtimer; int32_t txburst; char *tx; + uint16_t rx_queue_size; } virtio_net_conf; /* Maximum packet size we can receive from tap device: header + 64k */ diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c index 01f1351..4e595e9 100644 --- a/hw/net/virtio-net.c +++ b/hw/net/virtio-net.c @@ -1412,7 +1412,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index) { VirtIODevice *vdev = VIRTIO_DEVICE(n); - n->vqs[index].rx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_rx); + n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size, + virtio_net_handle_rx); if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) { n->vqs[index].tx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer); @@ -1716,10 +1717,28 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp) VirtIONet *n = VIRTIO_NET(dev); NetClientState *nc; int i; + int min_rx_queue_size; virtio_net_set_config_size(n, n->host_features); virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size); + /* + * We set a lower limit on RX queue size to what it always was. + * Guests that want a smaller ring can always resize it without + * help from us (using virtio 1 and up). 
 include/hw/virtio/virtio-net.h |  1 +
 hw/net/virtio-net.c            | 22 +++++++++++++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 91ed97c..0ced975 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -35,6 +35,7 @@ typedef struct virtio_net_conf
     uint32_t txtimer;
     int32_t txburst;
     char *tx;
+    uint16_t rx_queue_size;
 } virtio_net_conf;
 
 /* Maximum packet size we can receive from tap device: header + 64k */
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 01f1351..4e595e9 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1412,7 +1412,8 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
 
-    n->vqs[index].rx_vq = virtio_add_queue(vdev, 256, virtio_net_handle_rx);
+    n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
+                                           virtio_net_handle_rx);
     if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
         n->vqs[index].tx_vq =
             virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
@@ -1716,10 +1717,28 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     VirtIONet *n = VIRTIO_NET(dev);
     NetClientState *nc;
     int i;
+    int min_rx_queue_size;
 
     virtio_net_set_config_size(n, n->host_features);
     virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);
 
+    /*
+     * We set a lower limit on RX queue size to what it always was.
+     * Guests that want a smaller ring can always resize it without
+     * help from us (using virtio 1 and up).
+     */
+    min_rx_queue_size = 256;
+    if (n->net_conf.rx_queue_size < min_rx_queue_size ||
+        n->net_conf.rx_queue_size > VIRTQUEUE_MAX_SIZE ||
+        (n->net_conf.rx_queue_size & (n->net_conf.rx_queue_size - 1))) {
+        error_setg(errp, "Invalid rx_queue_size (= %" PRIu16 "), "
+                   "must be a power of 2 between %d and %d.",
+                   n->net_conf.rx_queue_size, min_rx_queue_size,
+                   VIRTQUEUE_MAX_SIZE);
+        virtio_cleanup(vdev);
+        return;
+    }
+
     n->max_queues = MAX(n->nic_conf.peers.queues, 1);
     if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
         error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
@@ -1880,6 +1899,7 @@ static Property virtio_net_properties[] = {
                        TX_TIMER_INTERVAL),
     DEFINE_PROP_INT32("x-txburst", VirtIONet, net_conf.txburst, TX_BURST),
     DEFINE_PROP_STRING("tx", VirtIONet, net_conf.tx),
+    DEFINE_PROP_UINT16("rx_queue_size", VirtIONet, net_conf.rx_queue_size, 256),
     DEFINE_PROP_END_OF_LIST(),
 };
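
For reference, the "(x & (x - 1))" expression in the realize check is
the usual power-of-two test: for x > 0 it is zero exactly when a single
bit is set.  A standalone sketch of the same validation, assuming
VIRTQUEUE_MAX_SIZE is 1024 as described in the commit message; the
function name is illustrative and not taken from the patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Accept only what the realize-time check accepts:
     * powers of two between 256 and 1024 inclusive. */
    static bool rx_queue_size_is_valid(uint16_t size)
    {
        return size >= 256 && size <= 1024 &&
               (size & (size - 1)) == 0;
    }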