From patchwork Tue May 17 12:08:00 2011
X-Patchwork-Submitter: Sasha Levin <levinsasha928@gmail.com>
X-Patchwork-Id: 791312
From: Sasha Levin <levinsasha928@gmail.com>
To: penberg@kernel.org
Cc: mingo@elte.hu, asias.hejun@gmail.com, prasadjoshi124@gmail.com,
	gorcunov@gmail.com, kvm@vger.kernel.org, john@jfloren.net,
	Sasha Levin <levinsasha928@gmail.com>
Subject: [PATCH 2/2 V3] kvm tools: Add MMIO address mapper
Date: Tue, 17 May 2011 15:08:00 +0300
Message-Id: <1305634080-24789-2-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.5.rc3
In-Reply-To: <1305634080-24789-1-git-send-email-levinsasha928@gmail.com>
References: <1305634080-24789-1-git-send-email-levinsasha928@gmail.com>
List-ID: <kvm.vger.kernel.org>

When we have an MMIO exit, we need to find which device has registered to
use the accessed MMIO space. The mapper maps ranges of guest physical
addresses to callback functions. The implementation is based on an
interval red-black tree.
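
As an illustration (a sketch only, not part of this patch: the device,
callback name, and addresses are made up), a device emulation would use
the new API roughly like this:

	#include <stdio.h>
	#include <string.h>

	#include "kvm/kvm.h"

	/* Hypothetical 2MB MMIO window at a made-up guest-physical base */
	#define EXAMPLE_MMIO_ADDR	0xd0000000ULL
	#define EXAMPLE_MMIO_SIZE	(2ULL << 20)

	static void example_mmio_callback(u64 addr, u8 *data, u32 len, u8 is_write)
	{
		/* Guest writes arrive in data[]; for reads we must fill data[] */
		if (!is_write)
			memset(data, 0, len);
	}

	static void example_device_init(void)
	{
		if (!kvm__register_mmio(EXAMPLE_MMIO_ADDR, EXAMPLE_MMIO_SIZE,
					example_mmio_callback))
			fprintf(stderr, "example: MMIO registration failed\n");
	}

The matching teardown is kvm__deregister_mmio(EXAMPLE_MMIO_ADDR), which
erases and frees the mapping covering the given base address.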
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 tools/kvm/include/kvm/kvm.h |    2 +
 tools/kvm/mmio.c            |   79 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
index b310d50..d9943bf 100644
--- a/tools/kvm/include/kvm/kvm.h
+++ b/tools/kvm/include/kvm/kvm.h
@@ -44,6 +44,8 @@ void kvm__stop_timer(struct kvm *kvm);
 void kvm__irq_line(struct kvm *kvm, int irq, int level);
 bool kvm__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int size, u32 count);
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write);
+bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write));
+bool kvm__deregister_mmio(u64 phys_addr);
 
 /*
  * Debugging
diff --git a/tools/kvm/mmio.c b/tools/kvm/mmio.c
index 848267d..ef986bf 100644
--- a/tools/kvm/mmio.c
+++ b/tools/kvm/mmio.c
@@ -1,7 +1,48 @@
 #include "kvm/kvm.h"
+#include "kvm/rbtree-interval.h"
 
 #include <stdio.h>
+#include <stdlib.h>
+
 #include <sys/ioctl.h>
+#include <linux/kvm.h>
+
+#define mmio_node(n) rb_entry(n, struct mmio_mapping, node)
+
+struct mmio_mapping {
+	struct rb_int_node	node;
+	void			(*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write);
+};
+
+static struct rb_root mmio_tree = RB_ROOT;
+
+static struct mmio_mapping *mmio_search(struct rb_root *root, u64 addr, u64 len)
+{
+	struct rb_int_node *node;
+
+	node = rb_int_search_range(root, addr, addr + len);
+	if (node == NULL)
+		return NULL;
+
+	return mmio_node(node);
+}
+
+/* Find lowest match, check for overlap */
+static struct mmio_mapping *mmio_search_single(struct rb_root *root, u64 addr)
+{
+	struct rb_int_node *node;
+
+	node = rb_int_search_single(root, addr);
+	if (node == NULL)
+		return NULL;
+
+	return mmio_node(node);
+}
+
+static int mmio_insert(struct rb_root *root, struct mmio_mapping *data)
+{
+	return rb_int_insert(root, &data->node);
+}
 
 static const char *to_direction(u8 is_write)
 {
@@ -11,10 +52,44 @@ static const char *to_direction(u8 is_write)
 	return "read";
 }
 
+bool kvm__register_mmio(u64 phys_addr, u64 phys_addr_len, void (*kvm_mmio_callback_fn)(u64 addr, u8 *data, u32 len, u8 is_write))
+{
+	struct mmio_mapping *mmio;
+
+	mmio = malloc(sizeof(*mmio));
+	if (mmio == NULL)
+		return false;
+
+	*mmio = (struct mmio_mapping) {
+		.node			= RB_INT_INIT(phys_addr, phys_addr + phys_addr_len),
+		.kvm_mmio_callback_fn	= kvm_mmio_callback_fn,
+	};
+
+	return mmio_insert(&mmio_tree, mmio);
+}
+
+bool kvm__deregister_mmio(u64 phys_addr)
+{
+	struct mmio_mapping *mmio;
+
+	mmio = mmio_search_single(&mmio_tree, phys_addr);
+	if (mmio == NULL)
+		return false;
+
+	rb_int_erase(&mmio_tree, &mmio->node);
+	free(mmio);
+	return true;
+}
+
 bool kvm__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write)
 {
-	fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
-		to_direction(is_write), phys_addr, len);
+	struct mmio_mapping *mmio = mmio_search(&mmio_tree, phys_addr, len);
+
+	if (mmio)
+		mmio->kvm_mmio_callback_fn(phys_addr, data, len, is_write);
+	else
+		fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
+			to_direction(is_write), phys_addr, len);
 
 	return true;
 }
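
For context, the consumer of this mapper is the KVM_EXIT_MMIO path in the
VCPU run loop. A sketch of the call site (not part of this patch; the mmio
fields come from struct kvm_run in <linux/kvm.h>, the kvm_run pointer name
is assumed):

	case KVM_EXIT_MMIO:
		kvm__emulate_mmio(kvm,
				  kvm_run->mmio.phys_addr,
				  kvm_run->mmio.data,
				  kvm_run->mmio.len,
				  kvm_run->mmio.is_write);
		break;

A lookup miss stays non-fatal: kvm__emulate_mmio() still returns true and
only prints the "Ignoring MMIO" warning.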