| Message ID | 1352721450-11340-14-git-send-email-will.deacon@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
On 11/12/2012 06:57 AM, Will Deacon wrote:
> When registering memory banks for a guest, it is useful to keep the
> range information around for translating between guest and host address
> spaces.
>
> This patch adds a list of kvm_mem_bank structures to struct kvm, which
> is updated when a new bank is registered.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  tools/kvm/include/kvm/kvm.h |  8 ++++++++
>  tools/kvm/kvm.c             | 23 ++++++++++++++++++++++-
>  2 files changed, 30 insertions(+), 1 deletions(-)
>
> diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
> index cf959ea..96dd81b 100644
> --- a/tools/kvm/include/kvm/kvm.h
> +++ b/tools/kvm/include/kvm/kvm.h
> @@ -35,6 +35,13 @@ struct kvm_ext {
>  	int code;
>  };
>
> +struct kvm_mem_bank {
> +	struct list_head list;
> +	unsigned long guest_phys_addr;
> +	void *host_addr;
> +	unsigned long size;
> +};

Can we just reuse struct kvm_userspace_memory_region here? We're also
using different data types for guest_phys_addr and size than what's in
kvm_userspace_memory_region - that can't be good.

> struct kvm {
> 	struct kvm_arch arch;
> 	struct kvm_config cfg;
> @@ -49,6 +56,7 @@ struct kvm {
> 	u64 ram_size;
> 	void *ram_start;
> 	u64 ram_pagesize;
> +	struct list_head mem_banks;

These memory banks actually look like a perfect example to use our
augmented interval rb-tree; can we switch them to use it, or is it a
list on purpose?

Thanks,
Sasha

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
On Tue, Nov 13, 2012 at 04:37:38AM +0000, Sasha Levin wrote:
> On 11/12/2012 06:57 AM, Will Deacon wrote:
> > +struct kvm_mem_bank {
> > +	struct list_head list;
> > +	unsigned long guest_phys_addr;
> > +	void *host_addr;
> > +	unsigned long size;
> > +};
>
> Can we just reuse struct kvm_userspace_memory_region here? We're also
> using different data types for guest_phys_addr and size than what's in
> kvm_userspace_memory_region - that can't be good.

I looked briefly at doing that when I wrote the multi-bank stuff, but I hit
a couple of issues:

  - kvmtool itself tends to use void * for host addresses, rather than
    the __u64 userspace_addr in kvm_userspace_memory_region

  - kvm_userspace_memory_region is a superset of what we need (not the
    end of the world I guess)

so you end up casting address types a fair amount. Still, I'll revisit it
and see if I can come up with something cleaner.

> > struct kvm {
> > 	struct kvm_arch arch;
> > 	struct kvm_config cfg;
> > @@ -49,6 +56,7 @@ struct kvm {
> > 	u64 ram_size;
> > 	void *ram_start;
> > 	u64 ram_pagesize;
> > +	struct list_head mem_banks;
>
> These memory banks actually look like a perfect example to use our
> augmented interval rb-tree; can we switch them to use it, or is it a
> list on purpose?

Well, the usual case is one memory bank, but that doesn't swing the
argument either way. I'll update to use the fancy new tree.

Thanks for the comments,

Will
On 11/13/2012 07:16 AM, Will Deacon wrote:
> On Tue, Nov 13, 2012 at 04:37:38AM +0000, Sasha Levin wrote:
> > On 11/12/2012 06:57 AM, Will Deacon wrote:
> > > +struct kvm_mem_bank {
> > > +	struct list_head list;
> > > +	unsigned long guest_phys_addr;
> > > +	void *host_addr;
> > > +	unsigned long size;
> > > +};
> >
> > Can we just reuse struct kvm_userspace_memory_region here? We're also
> > using different data types for guest_phys_addr and size than what's in
> > kvm_userspace_memory_region - that can't be good.
>
> I looked briefly at doing that when I wrote the multi-bank stuff, but I hit
> a couple of issues:
>
>   - kvmtool itself tends to use void * for host addresses, rather than
>     the __u64 userspace_addr in kvm_userspace_memory_region
>
>   - kvm_userspace_memory_region is a superset of what we need (not the
>     end of the world I guess)
>
> so you end up casting address types a fair amount. Still, I'll revisit it
> and see if I can come up with something cleaner.

That's a good point. We used void * while the kernel side is using u64,
which looks odd.

In that case, let's get everything moved to u64 (obviously not in the
scope of this patch series).

Thanks,
Sasha
On Tue, Nov 13, 2012 at 04:09:05PM +0000, Sasha Levin wrote:
> On 11/13/2012 07:16 AM, Will Deacon wrote:
> > On Tue, Nov 13, 2012 at 04:37:38AM +0000, Sasha Levin wrote:
> > > On 11/12/2012 06:57 AM, Will Deacon wrote:
> > > > +struct kvm_mem_bank {
> > > > +	struct list_head list;
> > > > +	unsigned long guest_phys_addr;
> > > > +	void *host_addr;
> > > > +	unsigned long size;
> > > > +};
> > >
> > > Can we just reuse struct kvm_userspace_memory_region here? We're also
> > > using different data types for guest_phys_addr and size than what's in
> > > kvm_userspace_memory_region - that can't be good.
> >
> > I looked briefly at doing that when I wrote the multi-bank stuff, but I hit
> > a couple of issues:
> >
> >   - kvmtool itself tends to use void * for host addresses, rather than
> >     the __u64 userspace_addr in kvm_userspace_memory_region
> >
> >   - kvm_userspace_memory_region is a superset of what we need (not the
> >     end of the world I guess)
> >
> > so you end up casting address types a fair amount. Still, I'll revisit it
> > and see if I can come up with something cleaner.
>
> That's a good point. We used void * while the kernel side is using u64,
> which looks odd.
>
> In that case, let's get everything moved to u64 (obviously not in the
> scope of this patch series).

Ok, I'll update the size field to match in this patch series, then we can
tackle the address discrepancy separately.

Cheers,

Will
Hi Sasha,

On Tue, Nov 13, 2012 at 04:37:38AM +0000, Sasha Levin wrote:
> On 11/12/2012 06:57 AM, Will Deacon wrote:
> > struct kvm {
> > 	struct kvm_arch arch;
> > 	struct kvm_config cfg;
> > @@ -49,6 +56,7 @@ struct kvm {
> > 	u64 ram_size;
> > 	void *ram_start;
> > 	u64 ram_pagesize;
> > +	struct list_head mem_banks;
>
> These memory banks actually look like a perfect example to use our
> augmented interval rb-tree; can we switch them to use it, or is it a
> list on purpose?

I found some time to look at this today, but unfortunately they're not as
ideally suited to the interval tree as they look: the problem being that we
need to search for banks by both host virtual address *and* guest physical
address, depending on the translation that we're doing.

We could have two separate trees, but that seems like overkill given the
likely number of banks.

Will
On 11/20/2012 12:15 PM, Will Deacon wrote:
> Hi Sasha,
>
> On Tue, Nov 13, 2012 at 04:37:38AM +0000, Sasha Levin wrote:
> > On 11/12/2012 06:57 AM, Will Deacon wrote:
> > > struct kvm {
> > > 	struct kvm_arch arch;
> > > 	struct kvm_config cfg;
> > > @@ -49,6 +56,7 @@ struct kvm {
> > > 	u64 ram_size;
> > > 	void *ram_start;
> > > 	u64 ram_pagesize;
> > > +	struct list_head mem_banks;
> >
> > These memory banks actually look like a perfect example to use our
> > augmented interval rb-tree; can we switch them to use it, or is it a
> > list on purpose?
>
> I found some time to look at this today, but unfortunately they're not as
> ideally suited to the interval tree as they look: the problem being that we
> need to search for banks by both host virtual address *and* guest physical
> address, depending on the translation that we're doing.
>
> We could have two separate trees, but that seems like overkill given the
> likely number of banks.

Makes sense. We can convert it later if we need to as well.

Thanks,
Sasha
diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
index cf959ea..96dd81b 100644
--- a/tools/kvm/include/kvm/kvm.h
+++ b/tools/kvm/include/kvm/kvm.h
@@ -35,6 +35,13 @@ struct kvm_ext {
 	int code;
 };
 
+struct kvm_mem_bank {
+	struct list_head list;
+	unsigned long guest_phys_addr;
+	void *host_addr;
+	unsigned long size;
+};
+
 struct kvm {
 	struct kvm_arch arch;
 	struct kvm_config cfg;
@@ -49,6 +56,7 @@ struct kvm {
 	u64 ram_size;
 	void *ram_start;
 	u64 ram_pagesize;
+	struct list_head mem_banks;
 
 	bool nmi_disabled;
 
diff --git a/tools/kvm/kvm.c b/tools/kvm/kvm.c
index b283171..1a10ec0 100644
--- a/tools/kvm/kvm.c
+++ b/tools/kvm/kvm.c
@@ -6,7 +6,9 @@
 #include "kvm/kvm-cpu.h"
 #include "kvm/kvm-ipc.h"
 
+#include <linux/kernel.h>
 #include <linux/kvm.h>
+#include <linux/list.h>
 #include <linux/err.h>
 
 #include <sys/un.h>
@@ -133,9 +135,16 @@ struct kvm *kvm__new(void)
 
 int kvm__exit(struct kvm *kvm)
 {
+	struct kvm_mem_bank *bank, *tmp;
+
 	kvm__arch_delete_ram(kvm);
-	free(kvm);
 
+	list_for_each_entry_safe(bank, tmp, &kvm->mem_banks, list) {
+		list_del(&bank->list);
+		free(bank);
+	}
+
+	free(kvm);
 	return 0;
 }
 core_exit(kvm__exit);
@@ -148,8 +157,18 @@ core_exit(kvm__exit);
 int kvm__register_mem(struct kvm *kvm, u64 guest_phys, u64 size, void *userspace_addr)
 {
 	struct kvm_userspace_memory_region mem;
+	struct kvm_mem_bank *bank;
 	int ret;
 
+	bank = malloc(sizeof(*bank));
+	if (!bank)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&bank->list);
+	bank->guest_phys_addr = guest_phys;
+	bank->host_addr = userspace_addr;
+	bank->size = size;
+
 	mem = (struct kvm_userspace_memory_region) {
 		.slot			= kvm->mem_slots++,
 		.guest_phys_addr	= guest_phys,
@@ -161,6 +180,7 @@ int kvm__register_mem(struct kvm *kvm, u64 guest_phys, u64 size, void *userspace
 	if (ret < 0)
 		return -errno;
 
+	list_add(&bank->list, &kvm->mem_banks);
 	return 0;
 }
 
@@ -245,6 +265,7 @@ int kvm__init(struct kvm *kvm)
 
 	kvm__arch_init(kvm, kvm->cfg.hugetlbfs_path, kvm->cfg.ram_size);
 
+	INIT_LIST_HEAD(&kvm->mem_banks);
 	kvm__init_ram(kvm);
 
 	if (!kvm->cfg.firmware_filename) {
When registering memory banks for a guest, it is useful to keep the
range information around for translating between guest and host address
spaces.

This patch adds a list of kvm_mem_bank structures to struct kvm, which
is updated when a new bank is registered.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 tools/kvm/include/kvm/kvm.h |  8 ++++++++
 tools/kvm/kvm.c             | 23 ++++++++++++++++++++++-
 2 files changed, 30 insertions(+), 1 deletions(-)