From patchwork Tue Apr 23 00:35:36 2013
X-Patchwork-Submitter: Sasha Levin <sasha.levin@oracle.com>
X-Patchwork-Id: 2475071
From: Sasha Levin <sasha.levin@oracle.com>
To: rusty@rustcorp.com.au, mst@redhat.com
Cc: penberg@kernel.org, will.deacon@arm.com, marc.zyngier@arm.com,
    kvm@vger.kernel.org, asias@redhat.com, jasowang@redhat.com,
    Sasha Levin <sasha.levin@oracle.com>
Subject: [PATCH] virtio-net: fill only rx queues which are being used
Date: Mon, 22 Apr 2013 20:35:36 -0400
Message-Id: <1366677336-2278-1-git-send-email-sasha.levin@oracle.com>
X-Mailer: git-send-email 1.8.2.1
X-Mailing-List: kvm@vger.kernel.org

Due to MQ support we may allocate a whole bunch of rx queues but
never use them. With this patch we'll save the space used by the
receive buffers until they are actually in use:

sh-4.2# free -h
             total       used       free     shared    buffers     cached
Mem:          490M        35M       455M         0B         0B       4.1M
-/+ buffers/cache:         31M       459M
Swap:           0B         0B         0B
sh-4.2# ethtool -L eth0 combined 8
sh-4.2# free -h
             total       used       free     shared    buffers     cached
Mem:          490M       162M       327M         0B         0B       4.1M
-/+ buffers/cache:        158M       331M
Swap:           0B         0B         0B

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 drivers/net/virtio_net.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 6bfc511..4d82d17 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -581,7 +581,7 @@ static void refill_work(struct work_struct *work)
         bool still_empty;
         int i;
 
-        for (i = 0; i < vi->max_queue_pairs; i++) {
+        for (i = 0; i < vi->curr_queue_pairs; i++) {
                 struct receive_queue *rq = &vi->rq[i];
 
                 napi_disable(&rq->napi);
@@ -636,7 +636,7 @@ static int virtnet_open(struct net_device *dev)
         struct virtnet_info *vi = netdev_priv(dev);
         int i;
 
-        for (i = 0; i < vi->max_queue_pairs; i++) {
+        for (i = 0; i < vi->curr_queue_pairs; i++) {
                 /* Make sure we have some buffers: if oom use wq. */
                 if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
                         schedule_delayed_work(&vi->refill, 0);
@@ -900,6 +900,7 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
         struct scatterlist sg;
         struct virtio_net_ctrl_mq s;
         struct net_device *dev = vi->dev;
+        int i;
 
         if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ))
                 return 0;
@@ -912,8 +913,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
                 dev_warn(&dev->dev,
                          "Fail to set num of queue pairs to %d\n", queue_pairs);
                 return -EINVAL;
-        } else
+        } else {
+                if (queue_pairs > vi->curr_queue_pairs)
+                        for (i = 0; i < queue_pairs; i++)
+                                if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
+                                        schedule_delayed_work(&vi->refill, 0);
                 vi->curr_queue_pairs = queue_pairs;
+        }
 
         return 0;
 }
@@ -1568,7 +1574,7 @@ static int virtnet_probe(struct virtio_device *vdev)
         }
 
         /* Last of all, set up some receive buffers. */
-        for (i = 0; i < vi->max_queue_pairs; i++) {
+        for (i = 0; i < vi->curr_queue_pairs; i++) {
                 try_fill_recv(&vi->rq[i], GFP_KERNEL);
 
                 /* If we didn't even get one input buffer, we're useless. */
@@ -1692,7 +1698,7 @@ static int virtnet_restore(struct virtio_device *vdev)
 
         netif_device_attach(vi->dev);
 
-        for (i = 0; i < vi->max_queue_pairs; i++)
+        for (i = 0; i < vi->curr_queue_pairs; i++)
                 if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
                         schedule_delayed_work(&vi->refill, 0);
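
For reference, the `ethtool -L eth0 combined 8` step shown above is just the
ETHTOOL_SCHANNELS ioctl: ethtool fills in a struct ethtool_channels, the core
routes it through the driver's .set_channels hook, and for virtio-net that
ends up in virtnet_set_queues(). Below is a minimal user-space sketch of that
call, assuming a standard Linux uapi environment; the "eth0" default and the
count of 8 simply mirror the example above and are not part of the patch.

/*
 * Sketch: raise the number of combined channels (rx/tx queue pairs)
 * on a NIC, equivalent to `ethtool -L <ifname> combined <n>`.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
        const char *ifname = argc > 1 ? argv[1] : "eth0";
        unsigned int want = argc > 2 ? atoi(argv[2]) : 8;
        struct ethtool_channels ch;
        struct ifreq ifr;
        int fd;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

        /* Read the current and maximum channel counts. */
        memset(&ch, 0, sizeof(ch));
        ch.cmd = ETHTOOL_GCHANNELS;
        ifr.ifr_data = (void *)&ch;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ETHTOOL_GCHANNELS");
                return 1;
        }

        if (want > ch.max_combined) {
                fprintf(stderr, "%u exceeds max_combined (%u)\n",
                        want, ch.max_combined);
                return 1;
        }

        /* Write back with only combined_count changed. */
        ch.cmd = ETHTOOL_SCHANNELS;
        ch.combined_count = want;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ETHTOOL_SCHANNELS");
                return 1;
        }

        printf("%s: combined channels set to %u\n", ifname, want);
        close(fd);
        return 0;
}

Run against a virtio-net interface, this should reproduce the used-memory
jump in the second free -h snippet, because the newly enabled rx queues are
only now filled by the try_fill_recv() loop this patch adds to
virtnet_set_queues().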