From patchwork Mon May 30 17:27:58 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 830712
From: Sasha Levin
To: penberg@kernel.org
Cc: kvm@vger.kernel.org, mingo@elte.hu, asias.hejun@gmail.com, gorcunov@gmail.com, prasadjoshi124@gmail.com, Sasha Levin
Subject: [PATCH v3 8/8] kvm tools: Use brlock in MMIO and IOPORT
Date: Mon, 30 May 2011 20:27:58 +0300
Message-Id: <1306776478-29613-9-git-send-email-levinsasha928@gmail.com>
In-Reply-To: <1306776478-29613-1-git-send-email-levinsasha928@gmail.com>
References: <1306776478-29613-1-git-send-email-levinsasha928@gmail.com>

Use brlock to protect the mmio and ioport modules and make them safe
against concurrent updates.
Signed-off-by: Sasha Levin
---
 tools/kvm/ioport.c |   10 +++++++++-
 tools/kvm/mmio.c   |   21 ++++++++++++++++++---
 2 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/tools/kvm/ioport.c b/tools/kvm/ioport.c
index d0a1aa8..e00fb59 100644
--- a/tools/kvm/ioport.c
+++ b/tools/kvm/ioport.c
@@ -2,7 +2,7 @@
 #include "kvm/kvm.h"
 #include "kvm/util.h"
-
+#include "kvm/brlock.h"
 #include "kvm/rbtree-interval.h"
 #include "kvm/mutex.h"
@@ -84,6 +84,7 @@ u16 ioport__register(u16 port, struct ioport_operations *ops, int count, void *p
 {
 	struct ioport *entry;

+	br_write_lock();
 	if (port == IOPORT_EMPTY)
 		port = ioport__find_free_port();
@@ -105,6 +106,8 @@ u16 ioport__register(u16 port, struct ioport_operations *ops, int count, void *p

 	ioport_insert(&ioport_tree, entry);

+	br_write_unlock();
+
 	return port;
 }
@@ -127,6 +130,7 @@ bool kvm__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int s
 	bool ret = false;
 	struct ioport *entry;

+	br_read_lock();
 	entry = ioport_search(&ioport_tree, port);
 	if (!entry)
 		goto error;
@@ -141,11 +145,15 @@ bool kvm__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int s
 		ret = ops->io_out(entry, kvm, port, data, size, count);
 	}

+	br_read_unlock();
+
 	if (!ret)
 		goto error;

 	return true;

 error:
+	br_read_unlock();
+
 	if (ioport_debug)
 		ioport_error(port, data, direction, size, count);
diff --git a/tools/kvm/mmio.c b/tools/kvm/mmio.c
index ef986bf..acd091e 100644
--- a/tools/kvm/mmio.c
+++ b/tools/kvm/mmio.c
@@ -1,5 +1,6 @@
 #include "kvm/kvm.h"
 #include "kvm/rbtree-interval.h"
+#include "kvm/brlock.h"

 #include
 #include
@@ -55,6 +56,7 @@ static const char *to_direction(u8 is_write)
 bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write))
 {
 	struct mmio_mapping *mmio;
+	int ret;

 	mmio = malloc(sizeof(*mmio));
 	if (mmio == NULL)
@@ -65,31 +67,44 @@ bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callba
 		.kvm_mmio_callback_fn	= kvm_mmio_callback_fn,
 	};

-	return mmio_insert(&mmio_tree, mmio);
+	br_write_lock();
+	ret = mmio_insert(&mmio_tree, mmio);
+	br_write_unlock();
+
+	return ret;
 }

 bool kvm__deregister_mmio(u64 phys_addr)
 {
 	struct mmio_mapping *mmio;

+	br_write_lock();
 	mmio = mmio_search_single(&mmio_tree, phys_addr);
-	if (mmio == NULL)
+	if (mmio == NULL) {
+		br_write_unlock();
 		return false;
+	}

 	rb_int_erase(&mmio_tree, &mmio->node);
+	br_write_unlock();
+
 	free(mmio);

 	return true;
 }

 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write)
 {
-	struct mmio_mapping *mmio = mmio_search(&mmio_tree, phys_addr, len);
+	struct mmio_mapping *mmio;
+
+	br_read_lock();
+	mmio = mmio_search(&mmio_tree, phys_addr, len);

 	if (mmio)
 		mmio->kvm_mmio_callback_fn(phys_addr, data, len, is_write);
 	else
 		fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
			to_direction(is_write), phys_addr, len);
+	br_read_unlock();

 	return true;
 }
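
Note: kvm/brlock.h itself is not part of this patch, so only the
br_read_lock()/br_write_lock() call sites are visible above.  As a rough,
hypothetical sketch of the semantics the patch relies on (cheap shared
locking around ioport/mmio tree lookups, exclusive locking around tree
updates), the interface could be modelled on a plain reader-writer lock.
This is a stand-in for illustration only, not the actual kvm tools
implementation:

/*
 * Hypothetical stand-in for the brlock interface used by this patch
 * (kvm/brlock.h is not included in the diff).  Readers take a shared
 * lock around lookups in the ioport/mmio trees; writers take an
 * exclusive lock around inserts and erases.  The real implementation
 * may achieve the same effect differently, e.g. by keeping the read
 * side cheaper and stopping all readers on the write side.
 */
#include <pthread.h>

static pthread_rwlock_t brlock = PTHREAD_RWLOCK_INITIALIZER;

static inline void br_read_lock(void)    { pthread_rwlock_rdlock(&brlock); }
static inline void br_read_unlock(void)  { pthread_rwlock_unlock(&brlock); }
static inline void br_write_lock(void)   { pthread_rwlock_wrlock(&brlock); }
static inline void br_write_unlock(void) { pthread_rwlock_unlock(&brlock); }

With something along those lines in place, ioport__register() and
kvm__register_mmio()/kvm__deregister_mmio() can run concurrently with
kvm__emulate_io() and kvm__emulate_mmio() without readers observing a
half-updated tree.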