From patchwork Mon Jul 18 14:17:14 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 986782
From: Sasha Levin
To: kvm@vger.kernel.org
Cc: Sasha Levin, Avi Kivity, Ingo Molnar, Marcelo Tosatti, Pekka Enberg
Subject: [PATCH v3 1/2] KVM: MMIO: Lock coalesced device when checking for available entry
Date: Mon, 18 Jul 2011 17:17:14 +0300
Message-Id: <1310998635-31608-1-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.6
X-Mailing-List: kvm@vger.kernel.org

Move the check for whether there are available entries to within the
spinlock. Once the check runs under dev->lock, the ring no longer needs
to keep KVM_MAX_VCPUS entries of headroom against racing lockless
writers, so a larger number of VCPUs can share the ring and premature
exits to userspace are reduced.

Cc: Avi Kivity
Cc: Ingo Molnar
Cc: Marcelo Tosatti
Cc: Pekka Enberg
Signed-off-by: Sasha Levin
---
 virt/kvm/coalesced_mmio.c |   42 +++++++++++++++++++++++++++---------------
 1 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index fc84875..ae075dc 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -25,23 +25,8 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 				   gpa_t addr, int len)
 {
 	struct kvm_coalesced_mmio_zone *zone;
-	struct kvm_coalesced_mmio_ring *ring;
-	unsigned avail;
 	int i;
 
-	/* Are we able to batch it ? */
-
-	/* last is the first free entry
-	 * check if we don't meet the first used entry
-	 * there is always one unused entry in the buffer
-	 */
-	ring = dev->kvm->coalesced_mmio_ring;
-	avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
-	if (avail < KVM_MAX_VCPUS) {
-		/* full */
-		return 0;
-	}
-
 	/* is it in a batchable area ? */
 	for (i = 0; i < dev->nb_zones; i++) {
@@ -58,16 +43,43 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 	return 0;
 }
 
+static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
+{
+	struct kvm_coalesced_mmio_ring *ring;
+	unsigned avail;
+
+	/* Are we able to batch it ? */
+
+	/* last is the first free entry
+	 * check if we don't meet the first used entry
+	 * there is always one unused entry in the buffer
+	 */
+	ring = dev->kvm->coalesced_mmio_ring;
+	avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
+	if (avail == 0) {
+		/* full */
+		return 0;
+	}
+
+	return 1;
+}
+
 static int coalesced_mmio_write(struct kvm_io_device *this,
 				gpa_t addr, int len, const void *val)
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+
 	if (!coalesced_mmio_in_range(dev, addr, len))
 		return -EOPNOTSUPP;
 
 	spin_lock(&dev->lock);
+
+	if (!coalesced_mmio_has_room(dev)) {
+		spin_unlock(&dev->lock);
+		return -EOPNOTSUPP;
+	}
+
 	/* copy data in first free entry of the ring */
 	ring->coalesced_mmio[ring->last].phys_addr = addr;
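
To see why the room check has to run under dev->lock, the following
stand-alone user-space sketch mirrors the same ring discipline. The
names here (struct ring, RING_MAX, ring_has_room(), ring_write()) are
hypothetical stand-ins, not the KVM structures, and pthread spinlocks
take the place of the kernel's spin_lock(). Build with:
cc -O2 sketch.c -lpthread

/*
 * Illustration only (not the KVM code): the "is there room?" test and
 * the ring update must happen under one lock, otherwise two writers
 * can both observe the same free slot and overwrite each other.
 */
#include <pthread.h>
#include <stdio.h>

/* Power of two, so the unsigned-wraparound modulo below is exact. */
#define RING_MAX 8

struct ring {
	unsigned first;			/* oldest used entry */
	unsigned last;			/* first free entry */
	unsigned long data[RING_MAX];
	pthread_spinlock_t lock;
};

/*
 * Mirrors coalesced_mmio_has_room(): one slot is always left unused,
 * so that first == last unambiguously means "empty".
 */
static int ring_has_room(struct ring *r)
{
	return ((r->first - r->last - 1) % RING_MAX) != 0;
}

static int ring_write(struct ring *r, unsigned long val)
{
	pthread_spin_lock(&r->lock);
	if (!ring_has_room(r)) {	/* checked under the lock */
		pthread_spin_unlock(&r->lock);
		return -1;		/* caller takes the slow path */
	}
	r->data[r->last] = val;
	r->last = (r->last + 1) % RING_MAX;
	pthread_spin_unlock(&r->lock);
	return 0;
}

int main(void)
{
	struct ring r = { .first = 0, .last = 0 };
	int i;

	pthread_spin_init(&r.lock, PTHREAD_PROCESS_PRIVATE);
	/* The 8th and 9th writes fail: one slot stays reserved. */
	for (i = 0; i < RING_MAX + 1; i++)
		printf("write %d -> %d\n", i, ring_write(&r, i));
	return 0;
}

If ring_has_room() were instead called before taking the lock, its
answer would only be a hint: another writer could consume the last free
slot between the check and the update. The pre-patch code compensated
by demanding KVM_MAX_VCPUS free entries up front, which is exactly the
premature-full behaviour this patch removes.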