From patchwork Fri Jul 15 11:37:48 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 977952
From: Sasha Levin
To: kvm@vger.kernel.org
Cc: Sasha Levin, Avi Kivity, Ingo Molnar, Marcelo Tosatti, Pekka Enberg
Subject: [PATCH v2 1/2] KVM: MMIO: Lock coalesced device when checking for available entry
Date: Fri, 15 Jul 2011 14:37:48 +0300
Message-Id: <1310729869-1451-1-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.6
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: kvm@vger.kernel.org

Move the check for available ring entries to within the spinlock. This
allows working with a larger number of VCPUs and reduces premature
exits.

Cc: Avi Kivity
Cc: Ingo Molnar
Cc: Marcelo Tosatti
Cc: Pekka Enberg
Signed-off-by: Sasha Levin
---
 virt/kvm/coalesced_mmio.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index fc84875..34188db 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -37,7 +37,7 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 	 */
 	ring = dev->kvm->coalesced_mmio_ring;
 	avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
-	if (avail < KVM_MAX_VCPUS) {
+	if (avail == 0) {
 		/* full */
 		return 0;
 	}
@@ -63,11 +63,14 @@ static int coalesced_mmio_write(struct kvm_io_device *this,
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
-	if (!coalesced_mmio_in_range(dev, addr, len))
-		return -EOPNOTSUPP;
 	spin_lock(&dev->lock);
 
+	if (!coalesced_mmio_in_range(dev, addr, len)) {
+		spin_unlock(&dev->lock);
+		return -EOPNOTSUPP;
+	}
+
 	/* copy data in first free entry of the ring */
 	ring->coalesced_mmio[ring->last].phys_addr = addr;