From patchwork Tue May 17 10:28:43 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sasha Levin <levinsasha928@gmail.com>
X-Patchwork-Id: 791152
From: Sasha Levin <levinsasha928@gmail.com>
To: penberg@kernel.org
Cc: mingo@elte.hu, asias.hejun@gmail.com, prasadjoshi124@gmail.com,
    gorcunov@gmail.com, kvm@vger.kernel.org, john@jfloren.net,
    Sasha Levin <levinsasha928@gmail.com>
Subject: [PATCH 2/2 V2] kvm tools: Add MMIO address mapper
Date: Tue, 17 May 2011 13:28:43 +0300
Message-Id: <1305628123-18440-2-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.5.rc3
In-Reply-To: <1305628123-18440-1-git-send-email-levinsasha928@gmail.com>
References: <1305628123-18440-1-git-send-email-levinsasha928@gmail.com>
List-ID: <kvm.vger.kernel.org>
X-Mailing-List: kvm@vger.kernel.org

When we get an MMIO exit, we need to find which device has registered to
use the accessed MMIO space. The mapper maps ranges of guest physical
addresses to callback functions; the implementation is based on an
interval red-black tree.
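For illustration, this is how a device model could hook the new
interface (a minimal sketch, not part of the patch: the device, the
0xd0000000 base address, and the 16-byte length are hypothetical, and
die() is assumed to be kvm tools' usual fatal-error helper):

	#include "kvm/kvm.h"
	#include "kvm/util.h"

	#include <string.h>

	/* Hypothetical device backing 16 bytes of MMIO at 0xd0000000. */
	static void dummy_mmio(u64 addr, u8 *data, u32 len, u8 is_write)
	{
		if (!is_write)
			memset(data, 0, len);	/* reads return zeroes */
		/* guest writes are silently dropped */
	}

	static void dummy_device__init(void)
	{
		if (!kvm__register_mmio(0xd0000000, 16, dummy_mmio))
			die("failed to register MMIO region");
	}

Because ranges live in an interval red-black tree, a single callback
can cover an arbitrarily large range, and lookups stay logarithmic in
the number of registered regions.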
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
---
 tools/kvm/include/kvm/kvm.h |    2 +
 tools/kvm/mmio.c            |   79 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
index b310d50..d9943bf 100644
--- a/tools/kvm/include/kvm/kvm.h
+++ b/tools/kvm/include/kvm/kvm.h
@@ -44,6 +44,8 @@ void kvm__stop_timer(struct kvm *kvm);
 void kvm__irq_line(struct kvm *kvm, int irq, int level);
 bool kvm__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int size, u32 count);
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write);
+bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write));
+bool kvm__deregister_mmio(u64 phys_addr);
 
 /*
  * Debugging
diff --git a/tools/kvm/mmio.c b/tools/kvm/mmio.c
index 848267d..ef986bf 100644
--- a/tools/kvm/mmio.c
+++ b/tools/kvm/mmio.c
@@ -1,7 +1,48 @@
 #include "kvm/kvm.h"
+#include "kvm/rbtree-interval.h"
 
 #include <stdio.h>
+#include <stdlib.h>
+
 #include <linux/types.h>
+#include <linux/rbtree.h>
+
+#define mmio_node(n) rb_entry(n, struct mmio_mapping, node)
+
+struct mmio_mapping {
+	struct rb_int_node	node;
+	void			(*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write);
+};
+
+static struct rb_root mmio_tree = RB_ROOT;
+
+static struct mmio_mapping *mmio_search(struct rb_root *root, u64 addr, u64 len)
+{
+	struct rb_int_node *node;
+
+	node = rb_int_search_range(root, addr, addr + len);
+	if (node == NULL)
+		return NULL;
+
+	return mmio_node(node);
+}
+
+/* Find lowest match, check for overlap */
+static struct mmio_mapping *mmio_search_single(struct rb_root *root, u64 addr)
+{
+	struct rb_int_node *node;
+
+	node = rb_int_search_single(root, addr);
+	if (node == NULL)
+		return NULL;
+
+	return mmio_node(node);
+}
+
+static int mmio_insert(struct rb_root *root, struct mmio_mapping *data)
+{
+	return rb_int_insert(root, &data->node);
+}
 
 static const char *to_direction(u8 is_write)
 {
@@ -11,10 +52,44 @@ static const char *to_direction(u8 is_write)
 	return "read";
 }
 
+bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write))
+{
+	struct mmio_mapping *mmio;
+
+	mmio = malloc(sizeof(*mmio));
+	if (mmio == NULL)
+		return false;
+
+	*mmio = (struct mmio_mapping) {
+		.node			= RB_INT_INIT(phys_addr, phys_addr + phys_addr_len),
+		.kvm_mmio_callback_fn	= kvm_mmio_callback_fn,
+	};
+
+	return mmio_insert(&mmio_tree, mmio);
+}
+
+bool kvm__deregister_mmio(u64 phys_addr)
+{
+	struct mmio_mapping *mmio;
+
+	mmio = mmio_search_single(&mmio_tree, phys_addr);
+	if (mmio == NULL)
+		return false;
+
+	rb_int_erase(&mmio_tree, &mmio->node);
+	free(mmio);
+	return true;
+}
+
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write)
 {
-	fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
-		to_direction(is_write), phys_addr, len);
+	struct mmio_mapping *mmio = mmio_search(&mmio_tree, phys_addr, len);
+
+	if (mmio)
+		mmio->kvm_mmio_callback_fn(phys_addr, data, len, is_write);
+	else
+		fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
+			to_direction(is_write), phys_addr, len);
 
 	return true;
 }
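For context, the caller side of kvm__emulate_mmio() is the
KVM_EXIT_MMIO case of the vcpu run loop. Roughly (a sketch assuming
the struct kvm_run layout from <linux/kvm.h>; the actual run-loop code
in kvm tools may differ):

	case KVM_EXIT_MMIO:
		/*
		 * kvm_run->mmio carries the faulting guest physical
		 * address, an inline data buffer, the access length and
		 * the direction; kvm__emulate_mmio() routes the access
		 * to the registered callback, or warns when no
		 * registered range overlaps it.
		 */
		kvm__emulate_mmio(kvm,
				  kvm_run->mmio.phys_addr,
				  kvm_run->mmio.data,
				  kvm_run->mmio.len,
				  kvm_run->mmio.is_write);
		break;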