From patchwork Tue May 17 06:58:35 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 790642
From: Sasha Levin
To: penberg@kernel.org
Cc: mingo@elte.hu, asias.hejun@gmail.com, prasadjoshi124@gmail.com, gorcunov@gmail.com, kvm@vger.kernel.org, john@jfloren.net,
	Sasha Levin
Subject: [PATCH 2/2] kvm tools: Add MMIO address mapper
Date: Tue, 17 May 2011 09:58:35 +0300
Message-Id: <1305615515-13913-2-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.5.rc3
In-Reply-To: <1305615515-13913-1-git-send-email-levinsasha928@gmail.com>
References: <1305615515-13913-1-git-send-email-levinsasha928@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

On an MMIO exit, we need to find which device has registered to handle
the accessed MMIO range. The mapper maps ranges of guest physical
addresses to callback functions. The implementation is based on an
interval red-black tree.

Signed-off-by: Sasha Levin
Acked-by: Ingo Molnar
---
 tools/kvm/include/kvm/kvm.h |    2 +
 tools/kvm/mmio.c            |   79 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
index b310d50..d9943bf 100644
--- a/tools/kvm/include/kvm/kvm.h
+++ b/tools/kvm/include/kvm/kvm.h
@@ -44,6 +44,8 @@ void kvm__stop_timer(struct kvm *kvm);
 void kvm__irq_line(struct kvm *kvm, int irq, int level);
 bool kvm__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int size, u32 count);
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write);
+bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write));
+bool kvm__deregister_mmio(u64 phys_addr);
 
 /*
  * Debugging
diff --git a/tools/kvm/mmio.c b/tools/kvm/mmio.c
index 848267d..fab6489 100644
--- a/tools/kvm/mmio.c
+++ b/tools/kvm/mmio.c
@@ -1,7 +1,48 @@
 #include "kvm/kvm.h"
+#include "kvm/interval-rbtree.h"
 
 #include <stdio.h>
+#include <stdlib.h>
 
 #include <linux/types.h>
+#include <linux/rbtree.h>
 
+#define MMIO_NODE(n) container_of(n, struct mmio_mapping, node)
+
+struct mmio_mapping {
+	struct rb_int_node	node;
+	void			(*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write);
+};
+
+static struct rb_root mmio_tree = RB_ROOT;
+
+static struct mmio_mapping *mmio_search(struct rb_root *root, u64 addr, u64 len)
+{
+	struct rb_int_node *node;
+
+	node = rb_int_search_range(root, addr, addr + len);
+	if (node == NULL)
+		return NULL;
+
+	return MMIO_NODE(node);
+}
+
+/* Find the lowest match, check for overlap */
+static struct mmio_mapping *mmio_search_single(struct rb_root *root, u64 addr)
+{
+	struct rb_int_node *node;
+
+	node = rb_int_search_single(root, addr);
+	if (node == NULL)
+		return NULL;
+
+	return MMIO_NODE(node);
+}
+
+static int mmio_insert(struct rb_root *root, struct mmio_mapping *data)
+{
+	return rb_int_insert(root, &data->node);
+}
 
 static const char *to_direction(u8 is_write)
 {
@@ -11,10 +52,44 @@ static const char *to_direction(u8 is_write)
 	return "read";
 }
 
+bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write))
+{
+	struct mmio_mapping *mmio;
+
+	mmio = malloc(sizeof(*mmio));
+	if (mmio == NULL)
+		return false;
+
+	*mmio = (struct mmio_mapping) {
+		.node			= RB_INT_INIT(phys_addr, phys_addr + phys_addr_len),
+		.kvm_mmio_callback_fn	= kvm_mmio_callback_fn,
+	};
+
+	return mmio_insert(&mmio_tree, mmio);
+}
+
+bool kvm__deregister_mmio(u64 phys_addr)
+{
+	struct mmio_mapping *mmio;
+
+	mmio = mmio_search_single(&mmio_tree, phys_addr);
+	if (mmio == NULL)
+		return false;
+
+	rb_int_erase(&mmio_tree, &mmio->node);
+	free(mmio);
+
+	return true;
+}
+
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write)
 {
-	fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
-			to_direction(is_write), phys_addr, len);
+	struct mmio_mapping *mmio = mmio_search(&mmio_tree, phys_addr, len);
+
+	if (mmio)
+		mmio->kvm_mmio_callback_fn(phys_addr, data, len, is_write);
+	else
+		fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
+			to_direction(is_write), phys_addr, len);
+
 	return true;
 }