From: Wei Wang <wei.w.wang@intel.com>
To: mst@redhat.com, jasowang@redhat.com, eblake@redhat.com,
    virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org
Cc: stefanha@gmail.com, Wei Wang <wei.w.wang@intel.com>,
    marcandre.lureau@gmail.com, armbru@redhat.com, pbonzini@redhat.com
Date: Wed, 28 Jun 2017 10:37:59 +0800
Message-Id: <1498617479-5809-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v4] virtio-net: enable configurable tx queue size

This patch makes the virtio-net tx queue size configurable by the user,
between 256 (the default queue size) and 1024, when the vhost-user
backend is used.

Currently, the maximum tx queue size for the other backends is 512 due
to the following limitations:

- QEMU backend: the QEMU backend implementation may in some cases pass
  1024+1 iovs to writev.

- Vhost_net backend: the guest may send a vring_desc whose memory
  crosses a MemoryRegion boundary, which produces more than 1024 iovs
  after translation from guest-physical addresses in the backend.
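With a vhost-user backend, the larger size is selected through the new
"tx_queue_size" device property; a minimal sketch of such an invocation
(the socket path and id names are placeholders, and other options that
vhost-user needs, such as a shared memory backend, are omitted):

  -chardev socket,id=char0,path=/tmp/vhost-user.sock \
  -netdev type=vhost-user,id=net0,chardev=char0 \
  -device virtio-net-pci,netdev=net0,tx_queue_size=1024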
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 hw/net/virtio-net.c            | 32 ++++++++++++++++++++++++++++++--
 include/hw/virtio/virtio-net.h |  1 +
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 91eddaf..a1fc0db 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -34,8 +34,11 @@
 
 /* previously fixed value */
 #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
+#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
+
 /* for now, only allow larger queues; with virtio-1, guest can downsize */
 #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
+#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
 
 /*
  * Calculate the number of bytes up to and including the given 'field' of
@@ -1508,15 +1511,18 @@ static void virtio_net_add_queue(VirtIONet *n, int index)
 
     n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
                                            virtio_net_handle_rx);
+
     if (n->net_conf.tx && !strcmp(n->net_conf.tx, "timer")) {
         n->vqs[index].tx_vq =
-            virtio_add_queue(vdev, 256, virtio_net_handle_tx_timer);
+            virtio_add_queue(vdev, n->net_conf.tx_queue_size,
+                             virtio_net_handle_tx_timer);
         n->vqs[index].tx_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
                                               virtio_net_tx_timer,
                                               &n->vqs[index]);
     } else {
         n->vqs[index].tx_vq =
-            virtio_add_queue(vdev, 256, virtio_net_handle_tx_bh);
+            virtio_add_queue(vdev, n->net_conf.tx_queue_size,
+                             virtio_net_handle_tx_bh);
         n->vqs[index].tx_bh = qemu_bh_new(virtio_net_tx_bh, &n->vqs[index]);
     }
 
@@ -1927,6 +1933,17 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (n->net_conf.tx_queue_size < VIRTIO_NET_TX_QUEUE_MIN_SIZE ||
+        n->net_conf.tx_queue_size > VIRTQUEUE_MAX_SIZE ||
+        !is_power_of_2(n->net_conf.tx_queue_size)) {
+        error_setg(errp, "Invalid tx_queue_size (= %" PRIu16 "), "
+                   "must be a power of 2 between %d and %d",
+                   n->net_conf.tx_queue_size, VIRTIO_NET_TX_QUEUE_MIN_SIZE,
+                   VIRTQUEUE_MAX_SIZE);
+        virtio_cleanup(vdev);
+        return;
+    }
+
     n->max_queues = MAX(n->nic_conf.peers.queues, 1);
     if (n->max_queues * 2 + 1 > VIRTIO_QUEUE_MAX) {
         error_setg(errp, "Invalid number of queues (= %" PRIu32 "), "
@@ -1947,6 +1964,15 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
         error_report("Defaulting to \"bh\"");
     }
 
+    /*
+     * Currently, backends other than vhost-user don't support 1024 queue
+     * size.
+     */
+    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
+        n->nic_conf.peers.ncs[0]->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
+        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
+    }
+
     for (i = 0; i < n->max_queues; i++) {
         virtio_net_add_queue(n, i);
     }
@@ -2106,6 +2132,8 @@ static Property virtio_net_properties[] = {
     DEFINE_PROP_STRING("tx", VirtIONet, net_conf.tx),
     DEFINE_PROP_UINT16("rx_queue_size", VirtIONet, net_conf.rx_queue_size,
                        VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE),
+    DEFINE_PROP_UINT16("tx_queue_size", VirtIONet, net_conf.tx_queue_size,
+                       VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE),
     DEFINE_PROP_UINT16("host_mtu", VirtIONet, net_conf.mtu, 0),
     DEFINE_PROP_BOOL("x-mtu-bypass-backend", VirtIONet, mtu_bypass_backend,
                      true),
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index 602b486..b81b6a4 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -36,6 +36,7 @@ typedef struct virtio_net_conf
     int32_t txburst;
     char *tx;
    uint16_t rx_queue_size;
+    uint16_t tx_queue_size;
     uint16_t mtu;
 } virtio_net_conf;
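To illustrate the validation added in virtio_net_device_realize(): the
property is accepted only if it is a power of 2 in the range from
VIRTIO_NET_TX_QUEUE_MIN_SIZE (256) to VIRTQUEUE_MAX_SIZE (1024). Below
is a minimal standalone sketch of the equivalent check;
TX_QUEUE_MIN_SIZE, TX_QUEUE_MAX_SIZE and tx_queue_size_is_valid() are
illustrative names standing in for the QEMU identifiers, not code from
the tree.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for VIRTIO_NET_TX_QUEUE_MIN_SIZE and
 * VIRTQUEUE_MAX_SIZE. */
#define TX_QUEUE_MIN_SIZE 256
#define TX_QUEUE_MAX_SIZE 1024

/* Same condition as the check in virtio_net_device_realize(): the value
 * must lie in [256, 1024] and be a power of 2. */
static bool tx_queue_size_is_valid(uint16_t size)
{
    if (size < TX_QUEUE_MIN_SIZE || size > TX_QUEUE_MAX_SIZE) {
        return false;
    }
    return (size & (size - 1)) == 0;    /* power-of-2 test */
}

int main(void)
{
    const uint16_t sizes[] = { 128, 256, 300, 512, 1024 };

    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        printf("tx_queue_size=%u -> %s\n", (unsigned)sizes[i],
               tx_queue_size_is_valid(sizes[i]) ? "accepted" : "rejected");
    }
    return 0;
}

With these inputs, 256, 512 and 1024 are accepted while 128 and 300 are
rejected, matching the error reported by the patch.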