From patchwork Thu Jun 3 19:38:31 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bruce Rogers
X-Patchwork-Id: 104162
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by demeter.kernel.org (8.14.3/8.14.3) with ESMTP id o53JcaR6017095
	for ; Thu, 3 Jun 2010 19:38:37 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754151Ab0FCTie (ORCPT );
	Thu, 3 Jun 2010 15:38:34 -0400
Received: from sinclair.provo.novell.com ([137.65.248.137]:16691 "EHLO
	sinclair.provo.novell.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753331Ab0FCTid convert rfc822-to-8bit (ORCPT );
	Thu, 3 Jun 2010 15:38:33 -0400
Received: from INET-PRV-MTA by sinclair.provo.novell.com
	with Novell_GroupWise; Thu, 03 Jun 2010 13:38:35 -0600
Message-Id: <4C07B057020000480009749D@sinclair.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 8.0.1
Date: Thu, 03 Jun 2010 13:38:31 -0600
From: "Bruce Rogers"
To: 
Cc: ,
Subject: [PATCH 3/3][STABLE] KVM: add schedule check to napi_enable call
Mime-Version: 1.0
Content-Disposition: inline
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: kvm@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by
	milter-greylist-4.2.3 (demeter.kernel.org [140.211.167.41]);
	Thu, 03 Jun 2010 19:40:10 +0000 (UTC)

--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -388,6 +388,20 @@ static void skb_recv_done(struct virtque
 	}
 }
 
+static void virtnet_napi_enable(struct virtnet_info *vi)
+{
+	napi_enable(&vi->napi);
+
+	/* If all buffers were filled by other side before we napi_enabled, we
+	 * won't get another interrupt, so process any outstanding packets
+	 * now. virtnet_poll wants re-enable the queue, so we disable here.
+	 * We synchronize against interrupts via NAPI_STATE_SCHED */
+	if (napi_schedule_prep(&vi->napi)) {
+		vi->rvq->vq_ops->disable_cb(vi->rvq);
+		__napi_schedule(&vi->napi);
+	}
+}
+
 static void refill_work(struct work_struct *work)
 {
 	struct virtnet_info *vi;
@@ -397,7 +411,7 @@ static void refill_work(struct work_stru
 	napi_disable(&vi->napi);
 	try_fill_recv(vi, GFP_KERNEL);
 	still_empty = (vi->num == 0);
-	napi_enable(&vi->napi);
+	virtnet_napi_enable(vi);
 
 	/* In theory, this can happen: if we don't get any buffers in
 	 * we will *never* try to fill again. */
@@ -589,16 +603,7 @@ static int virtnet_open(struct net_devic
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
-	napi_enable(&vi->napi);
-
-	/* If all buffers were filled by other side before we napi_enabled, we
-	 * won't get another interrupt, so process any outstanding packets
-	 * now. virtnet_poll wants re-enable the queue, so we disable here.
-	 * We synchronize against interrupts via NAPI_STATE_SCHED */
-	if (napi_schedule_prep(&vi->napi)) {
-		vi->rvq->vq_ops->disable_cb(vi->rvq);
-		__napi_schedule(&vi->napi);
-	}
+	virtnet_napi_enable(vi);
 
 	return 0;
 }
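
The patch factors the "enable, then re-check" pattern out of virtnet_open() into virtnet_napi_enable() so that refill_work() gets the same race handling. For readers unfamiliar with the scheduling bit the comment refers to, below is a minimal user-space sketch (not kernel code, and not part of the patch) of the idea behind napi_schedule_prep()/__napi_schedule(): after re-enabling polling, atomically claim a "scheduled" flag so that buffers which arrived while polling was disabled, and which will therefore raise no further interrupt, still get processed exactly once. All names here (poller_t, poller_enable, poller_schedule_prep, poller_poll) are invented purely for illustration.

/* Illustrative user-space analogy only; NOT the kernel implementation.
 * Models the pattern virtnet_napi_enable() relies on: a single atomic
 * "scheduled" bit (analogous to NAPI_STATE_SCHED) decides who polls. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
	atomic_bool scheduled;	/* analogous to NAPI_STATE_SCHED */
	int pending;		/* buffers queued while polling was off */
} poller_t;

/* Analogous to napi_schedule_prep(): claim the right to poll, or fail
 * if another path (e.g. the interrupt handler) already claimed it. */
static bool poller_schedule_prep(poller_t *p)
{
	bool expected = false;
	return atomic_compare_exchange_strong(&p->scheduled, &expected, true);
}

static void poller_poll(poller_t *p)
{
	printf("processing %d pending buffers\n", p->pending);
	p->pending = 0;
	atomic_store(&p->scheduled, false);	/* roughly napi_complete() */
}

/* Analogous to virtnet_napi_enable(): after enabling, re-check for work
 * that arrived while we were disabled, since no new "interrupt" will
 * announce it. */
static void poller_enable(poller_t *p)
{
	if (poller_schedule_prep(p))
		poller_poll(p);	/* the kernel defers this via __napi_schedule() */
}

int main(void)
{
	poller_t p = { .pending = 3 };
	atomic_init(&p.scheduled, false);
	poller_enable(&p);	/* drains the 3 buffers queued before enable */
	return 0;
}

The property mirrored in the patch is that the check happens after the enable: any fill that completed during the disabled window is either picked up by this check or will generate its own notification afterwards, so nothing is left stranded without an interrupt to announce it.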