[00/13] kvm: selftests: add aarch64 framework and dirty log test

Message ID 20180918175436.19742-1-drjones@redhat.com

Message

Andrew Jones Sept. 18, 2018, 5:54 p.m. UTC
This series provides KVM selftests that test dirty log tracking on
AArch64 for both 4K and 64K guest page sizes. Additionally the
framework provides an easy way to test dirty log tracking with the
recently posted dynamic IPA and 52bit IPA series[1].

The series breaks down into parts as follows:

 01-02: generalize guest code to host userspace exit support by
        introducing "ucalls" - hypercalls to userspace
 03-05: prepare common code for a new architecture
 06-07: add virtual memory setup support for AArch64
    08: add vcpu setup support for AArch64
    09: port the dirty log test to AArch64
 10-11: add 64K guest page size support for the dirty log test
 12-13: prepare the dirty log test to also test > 40-bit guest
        physical address setups by allowing the test memory
        region to be placed at the top of physical memory

[1] https://www.spinics.net/lists/arm-kernel/msg676819.html

Thanks,
drew


Andrew Jones (13):
  kvm: selftests: vcpu_setup: set cr4.osfxsr
  kvm: selftests: introduce ucall
  kvm: selftests: move arch-specific files to arch-specific locations
  kvm: selftests: add cscope make target
  kvm: selftests: tidy up kvm_util
  kvm: selftests: add vm_phy_pages_alloc
  kvm: selftests: add virt mem support for aarch64
  kvm: selftests: add vcpu support for aarch64
  kvm: selftests: port dirty_log_test to aarch64
  kvm: selftests: introduce new VM mode for 64K pages
  kvm: selftests: dirty_log_test: also test 64K pages on aarch64
  kvm: selftests: stop lying to aarch64 tests about PA-bits
  kvm: selftests: support high GPAs in dirty_log_test

 tools/testing/selftests/kvm/.gitignore        |  11 +-
 tools/testing/selftests/kvm/Makefile          |  36 +-
 tools/testing/selftests/kvm/dirty_log_test.c  | 374 +++++++++----
 .../selftests/kvm/include/aarch64/processor.h |  55 ++
 .../testing/selftests/kvm/include/kvm_util.h  | 166 +++---
 .../testing/selftests/kvm/include/sparsebit.h |   6 +-
 .../testing/selftests/kvm/include/test_util.h |   6 +-
 .../kvm/include/{x86.h => x86_64/processor.h} |  24 +-
 .../selftests/kvm/include/{ => x86_64}/vmx.h  |   6 +-
 .../selftests/kvm/lib/aarch64/processor.c     | 311 +++++++++++
 tools/testing/selftests/kvm/lib/assert.c      |   2 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 499 +++++++-----------
 .../selftests/kvm/lib/kvm_util_internal.h     |  33 +-
 tools/testing/selftests/kvm/lib/ucall.c       | 144 +++++
 .../kvm/lib/{x86.c => x86_64/processor.c}     | 197 ++++++-
 .../selftests/kvm/lib/{ => x86_64}/vmx.c      |   4 +-
 .../kvm/{ => x86_64}/cr4_cpuid_sync_test.c    |  14 +-
 .../kvm/{ => x86_64}/set_sregs_test.c         |   2 +-
 .../selftests/kvm/{ => x86_64}/state_test.c   |  25 +-
 .../kvm/{ => x86_64}/sync_regs_test.c         |   2 +-
 .../kvm/{ => x86_64}/vmx_tsc_adjust_test.c    |  23 +-
 21 files changed, 1329 insertions(+), 611 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/aarch64/processor.h
 rename tools/testing/selftests/kvm/include/{x86.h => x86_64/processor.h} (98%)
 rename tools/testing/selftests/kvm/include/{ => x86_64}/vmx.h (99%)
 create mode 100644 tools/testing/selftests/kvm/lib/aarch64/processor.c
 create mode 100644 tools/testing/selftests/kvm/lib/ucall.c
 rename tools/testing/selftests/kvm/lib/{x86.c => x86_64/processor.c} (85%)
 rename tools/testing/selftests/kvm/lib/{ => x86_64}/vmx.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/cr4_cpuid_sync_test.c (91%)
 rename tools/testing/selftests/kvm/{ => x86_64}/set_sregs_test.c (98%)
 rename tools/testing/selftests/kvm/{ => x86_64}/state_test.c (90%)
 rename tools/testing/selftests/kvm/{ => x86_64}/sync_regs_test.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/vmx_tsc_adjust_test.c (91%)

Comments

Andrew Jones Sept. 19, 2018, 12:37 p.m. UTC | #1
On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> This series provides KVM selftests that test dirty log tracking on
> AArch64 for both 4K and 64K guest page sizes. Additionally the
> framework provides an easy way to test dirty log tracking with the
> recently posted dynamic IPA and 52bit IPA series[1].
> 
> The series breaks down into parts as follows:
> 
>  01-02: generalize guest code to host userspace exit support by
>         introducing "ucalls" - hypercalls to userspace
>  03-05: prepare common code for a new architecture
>  06-07: add virtual memory setup support for AArch64
>     08: add vcpu setup support for AArch64
>     09: port the dirty log test to AArch64
>  10-11: add 64K guest page size support for the dirty log test
>  12-13: prepare the dirty log test to also test > 40-bit guest
>         physical address setups by allowing the test memory
>         region to be placed at the top of physical memory
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg676819.html
> 

Hi Suzuki,

Here's an [untested] add-on patch that should provide the means to
test dirty logging with a 52-bit guest physical address space.
Hopefully it'll be as easy as compiling and then running with

 $ ./dirty_log_test -t
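
FWIW, stripped of the framework, the patch below just plumbs a type
value through to KVM_CREATE_VM. A minimal sketch of the raw sequence
(untested; KVM_VM_TYPE_ARM_IPA_SIZE and KVM_CAP_ARM_VM_IPA_SIZE come
from the dynamic IPA series [1], so it only builds with those headers):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int create_vm_fd(void)
	{
		int kvm_fd = open("/dev/kvm", O_RDWR);
		/* the cap returns the max IPA size, or 0 on older hosts */
		int ipa = ioctl(kvm_fd, KVM_CHECK_EXTENSION,
				KVM_CAP_ARM_VM_IPA_SIZE);
		unsigned long type = ipa == 52 ?
				     KVM_VM_TYPE_ARM_IPA_SIZE(52) : 0;

		return ioctl(kvm_fd, KVM_CREATE_VM, type);
	}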

Thanks,
drew

From 09ea1d724551a95ef9962d357049ef21ea8c77e8 Mon Sep 17 00:00:00 2001
From: Andrew Jones <drjones@redhat.com>
Date: Wed, 19 Sep 2018 14:23:30 +0200
Subject: [PATCH] kvm: selftests: aarch64: dirty_log_test: test with 52 PA-bits

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c  | 24 ++++++++++++++++---
 .../testing/selftests/kvm/include/kvm_util.h  |  2 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 17 +++++++++----
 .../selftests/kvm/lib/kvm_util_internal.h     |  1 +
 4 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index d59820cc2d39..c11c76e09766 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -82,6 +82,7 @@ static void guest_code(void)
 static bool host_quit;
 
 /* Points to the test VM memory region on which we track dirty logs */
+static uint8_t host_ipa_limit;
 static void *host_test_mem;
 static uint64_t host_num_pages;
 
@@ -209,12 +210,14 @@ static void vm_dirty_log_verify(unsigned long *bmap)
 }
 
 static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-				uint64_t extra_mem_pages, void *guest_code)
+				uint64_t extra_mem_pages, void *guest_code,
+				unsigned long type)
 {
 	struct kvm_vm *vm;
 	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
 
-	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
+			O_RDWR, type);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
@@ -231,15 +234,22 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	struct kvm_vm *vm;
 	uint64_t max_gfn;
 	unsigned long *bmap;
+	unsigned long type = 0;
 
 	switch (mode) {
 	case VM_MODE_P52V48_4K:
 		guest_pa_bits = 52;
 		guest_page_shift = 12;
+#ifdef __aarch64__
+		type = KVM_VM_TYPE_ARM_IPA_SIZE(52);
+#endif
 		break;
 	case VM_MODE_P52V48_64K:
 		guest_pa_bits = 52;
 		guest_page_shift = 16;
+#ifdef __aarch64__
+		type = KVM_VM_TYPE_ARM_IPA_SIZE(52);
+#endif
 		break;
 	case VM_MODE_P40V48_4K:
 		guest_pa_bits = 40;
@@ -273,7 +283,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	bmap = bitmap_alloc(host_num_pages);
 	host_bmap_track = bitmap_alloc(host_num_pages);
 
-	vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code);
+	vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code, type);
 
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
@@ -392,6 +402,14 @@ int main(int argc, char *argv[])
 	unsigned int mode;
 	int opt, i;
 
+#ifdef __aarch64__
+	host_ipa_limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
+	if (host_ipa_limit == 52) {
+		vm_guest_modes[VM_MODE_P52V48_4K].supported = 1;
+		vm_guest_modes[VM_MODE_P52V48_64K].supported = 1;
+	}
+#endif
+
 	while ((opt = getopt(argc, argv, "hi:I:o:tm:")) != -1) {
 		switch (opt) {
 		case 'i':
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index d76431322a30..5202fce337e3 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -53,6 +53,8 @@ enum vm_mem_backing_src_type {
 int kvm_check_cap(long cap);
 
 struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
+struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
+			  int perm, unsigned long type);
 void kvm_vm_free(struct kvm_vm *vmp);
 void kvm_vm_restart(struct kvm_vm *vmp, int perm);
 void kvm_vm_release(struct kvm_vm *vmp);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 542336db7b4f..e1805c9e0f39 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -62,13 +62,13 @@ int kvm_check_cap(long cap)
 	return ret;
 }
 
-static void vm_open(struct kvm_vm *vm, int perm)
+static void vm_open(struct kvm_vm *vm, int perm, unsigned long type)
 {
 	vm->kvm_fd = open(KVM_DEV_PATH, perm);
 	if (vm->kvm_fd < 0)
 		exit(KSFT_SKIP);
 
-	vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, NULL);
+	vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, type);
 	TEST_ASSERT(vm->fd >= 0, "KVM_CREATE_VM ioctl failed, "
 		"rc: %i errno: %i", vm->fd, errno);
 }
@@ -101,7 +101,8 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
  * descriptor to control the created VM is created with the permissions
  * given by perm (e.g. O_RDWR).
  */
-struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
+struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
+			  int perm, unsigned long type)
 {
 	struct kvm_vm *vm;
 	int kvm_fd;
@@ -110,7 +111,8 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	TEST_ASSERT(vm != NULL, "Insufficent Memory");
 
 	vm->mode = mode;
-	vm_open(vm, perm);
+	vm->type = type;
+	vm_open(vm, perm, type);
 
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
@@ -166,6 +168,11 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	return vm;
 }
 
+struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
+{
+	return _vm_create(mode, phy_pages, perm, 0);
+}
+
 /*
  * VM Restart
  *
@@ -183,7 +190,7 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
 {
 	struct userspace_mem_region *region;
 
-	vm_open(vmp, perm);
+	vm_open(vmp, perm, vmp->type);
 	if (vmp->has_irqchip)
 		vm_create_irqchip(vmp);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index 5e05fb98dc62..51a56102a5c9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -44,6 +44,7 @@ struct vcpu {
 
 struct kvm_vm {
 	int mode;
+	unsigned long type;
 	int kvm_fd;
 	int fd;
 	unsigned int pgtable_levels;
Suzuki K Poulose Sept. 19, 2018, 1:13 p.m. UTC | #2
Hi Drew,

On 09/19/2018 01:37 PM, Andrew Jones wrote:
> On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
>> [...]
> 
> Hi Suzuki,
> 
> Here's an [untested] add-on patch that should provide the means to
> test dirty logging with a 52-bit guest physical address space.
> Hopefully it'll be as easy as compiling and then running with
> 
>   $ ./dirty_log_test -t

I will give this series a spin with 52 IPA and let you know.

Thanks
Suzuki

Christoffer Dall Oct. 29, 2018, 5:40 p.m. UTC | #3
Hi Drew,

On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> This series provides KVM selftests that test dirty log tracking on
> AArch64 for both 4K and 64K guest page sizes. Additionally the
> framework provides an easy way to test dirty log tracking with the
> recently posted dynamic IPA and 52bit IPA series[1].

I was trying to parse the commit text of patch 2, and I realized that I
don't understand the 'hypercall to userspace' thing at all, which
probably means I have no idea how the selftests work overall.

I then spent a while reading various bits of documentation in the kernel
tree, LWN, etc., only to realize that I don't understand how this test
framework actually works.

Are the selftests modules, userspace programs, or code that is compiled
with the kernel, and (somehow?) run from userspace.  I thought the
latter, partially based on your explanation at ELC, but then I don't
understand how the "compile and run" make target works.

Can you help me paint the overall picture, or point me to the piece of
documentation/presentation that explains the high-level picture, which I
must have obviously missed somehow?


Thanks!

    Christoffer

Andrew Jones Oct. 30, 2018, 5:38 p.m. UTC | #4
Hi Christoffer,

Thanks for your interest in these tests. There isn't any documentation
that I know of, but it's a good idea to have some. I'll write something
up soon. I'll also try to answer your questions now.

On Mon, Oct 29, 2018 at 06:40:02PM +0100, Christoffer Dall wrote:
> Hi Drew,
> 
> On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> > This series provides KVM selftests that test dirty log tracking on
> > AArch64 for both 4K and 64K guest page sizes. Additionally the
> > framework provides an easy way to test dirty log tracking with the
> > recently posted dynamic IPA and 52bit IPA series[1].
> 
> I was trying to parse the commit text of patch 2, and I realized that I
> don't understand the 'hypercall to userspace' thing at all, which
> probably means I have no idea how the selftests work overall.

There are three parts to a kvm selftest: 1) the test code which runs in
host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
thread code which executes KVM_RUN for the guest code and possibly also
some host userspace test code, and 3) the guest code, which is naturally
run in the vcpu thread, but in guest mode.

The need for a "ucall" arises for 2's "possibly also some host userspace
test code". In that case the guest code needs to invoke an exit from guest
mode, not just to kvm, but all the way to kvm userspace. For AArch64, as
you know, this can be done with an MMIO access. The reason patch 2
generalizes the concept is because for x86 this can and is done with a
PIO access.
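
To make that concrete, here's a rough sketch of the mechanism (the
names and the address are illustrative, not the actual lib/ucall.c
code): the guest writes the address of an argument structure to a
guest-physical address that no memslot backs, KVM can't handle the
access, and KVM_RUN returns to the test code with KVM_EXIT_MMIO:

	#include <stdint.h>

	struct ucall {
		uint64_t cmd;
		uint64_t args[8];
	};

	/* assumed to be left without a backing memslot */
	#define UCALL_MMIO_GPA	0x1000000000UL

	/* guest side */
	static void ucall(uint64_t cmd, uint64_t arg0)
	{
		struct ucall uc = { .cmd = cmd, .args = { arg0 } };

		/* faults all the way to userspace; the host finds &uc in
		 * the exit's mmio data and reads the struct through its
		 * own mapping of that memory */
		*(volatile uint64_t *)UCALL_MMIO_GPA = (uint64_t)&uc;
	}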

> 
> I then spent a while reading various bits of documentation in the kernel
> tree, LWN, etc., only to realize that I don't understand how this test
> framework actually works.
> 
> Are the selftests modules, userspace programs, or code that is compiled
> with the kernel, and (somehow?) run from userspace.  I thought the
> latter, partially based on your explanation at ELC, but then I don't
> understand how the "compile and run" make target works.

The tests are standalone userspace programs which are compiled separately,
but have dependencies on kernel headers. As stated above, for kvm, each
selftest is a kvm userspace (including its vcpu thread code) and guest
code combined. While there's a lot of complexity in the framework,
particularly for memory management, and a bit for vcpu setup, most of that
can be shared among tests using the kvm_util.h and test_util.h APIs,
allowing a given test to only have a relatively simple main(), vcpu thread
"vcpu_worker()" function, and "guest_code()" function. Guest mode code can
easily share code with the kvm userspace test code (assuming the guest
page tables are set up in the default way) and even data can be shared as
long as the accesses are done with the appropriate mappings (gva vs. hva).
There's a small API to help with that as well.
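
A bare-bones, single-vcpu test following that shape might look like
this (just a sketch: vm_create_default(), vcpu_run(), addr_gva2hva(),
and GUEST_DONE() are the kvm_util.h/ucall APIs as described above; the
rest is made up for illustration):

	#define VCPU_ID 0

	static uint64_t guest_data;	/* one symbol, guest and host mappings */

	static void guest_code(void)	/* runs in guest mode */
	{
		guest_data = 0x42;
		GUEST_DONE();		/* ucall back to the test code */
	}

	int main(void)			/* the test _is_ the kvm userspace */
	{
		struct kvm_vm *vm = vm_create_default(VCPU_ID, 0, guest_code);

		vcpu_run(vm, VCPU_ID);	/* KVM_RUN until the guest ucalls */

		/* gva -> hva translation for the shared data */
		TEST_ASSERT(*(uint64_t *)addr_gva2hva(vm,
			    (vm_vaddr_t)&guest_data) == 0x42,
			    "guest write not visible to host");

		kvm_vm_free(vm);
		return 0;
	}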

> 
> Can you help me paint the overall picture, or point me to the piece of
> documentation/presentation that explains the high-level picture, which I
> must have obviously missed somehow?

We definitely need the documentation and, in hindsight, it looks like it
would have been a good BoF topic last week too.

I think this framework has a lot of potential for KVM API testing and
even for quick & dirty guest code instruction sequence tests (although
instruction sequences would also fit kvm-unit-tests). I hope I can help
get you and anyone else interested started.

Thanks,
drew

Paolo Bonzini Oct. 30, 2018, 5:50 p.m. UTC | #5
On 30/10/2018 18:38, Andrew Jones wrote:
> There are three parts to a kvm selftest: 1) the test code which runs in
> host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
> thread code which executes KVM_RUN for the guest code and possibly also
> some host userspace test code, and 3) the guest code, which is naturally
> run in the vcpu thread, but in guest mode.

Note that the separate vcpu thread is specific to this test. Usually it's
not needed and 1+2 are the same.

Paolo
Christoffer Dall Nov. 1, 2018, 9:08 a.m. UTC | #6
On Tue, Oct 30, 2018 at 06:38:20PM +0100, Andrew Jones wrote:
> 
> Hi Christoffer,
> 
> Thanks for your interest in these tests. There isn't any documentation
> that I know of, but it's a good idea to have some. I'll write something
> up soon. I'll also try to answer your questions now.
> 

That sounds great, thanks!

> On Mon, Oct 29, 2018 at 06:40:02PM +0100, Christoffer Dall wrote:
> > Hi Drew,
> > 
> > On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> > > This series provides KVM selftests that test dirty log tracking on
> > > AArch64 for both 4K and 64K guest page sizes. Additionally the
> > > framework provides an easy way to test dirty log tracking with the
> > > recently posted dynamic IPA and 52bit IPA series[1].
> > 
> > I was trying to parse the commit text of patch 2, and I realized that I
> > don't understand the 'hypercall to userspace' thing at all, which
> > probably means I have no idea how the selftests work overall.
> 
> There are three parts to a kvm selftest: 1) the test code which runs in
> host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
> thread code which executes KVM_RUN for the guest code and possibly also
> some host userspace test code, and 3) the guest code, which is naturally
> run in the vcpu thread, but in guest mode.
> 
> The need for a "ucall" arises for 2's "possibly also some host userspace
> test code". In that case the guest code needs to invoke an exit from guest
> mode, not just to kvm, but all the way to kvm userspace. For AArch64, as
> you know, this can be done with an MMIO access. The reason patch 2
> generalizes the concept is because for x86 this can and is done with a
> PIO access.
> 

So in the world of normal KVM userspace, (2) would be a thread in the
same process as (1), sharing its mm.  Is this a different setup somehow,
why?

> > 
> > I then spent a while reading various bits of documentation in the kernel
> > tree, LWN, etc., only to realize that I don't understand how this test
> > framework actually works.
> > 
> > Are the selftests modules, userspace programs, or code that is compiled
> > with the kernel, and (somehow?) run from userspace.  I thought the
> > latter, partially based on your explanation at ELC, but then I don't
> > understand how the "compile and run" make target works.
> 
> The tests are standalone userspace programs which are compiled separately,
> but have dependencies on kernel headers. As stated above, for kvm, each
> selftest is a kvm userspace (including its vcpu thread code) and guest
> code combined. While there's a lot of complexity in the framework,
> particularly for memory management, and a bit for vcpu setup, most of that
> can be shared among tests using the kvm_util.h and test_util.h APIs,
> allowing a given test to only have a relatively simple main(), vcpu thread
> "vcpu_worker()" function, and "guest_code()" function. Guest mode code can
> easily share code with the kvm userspace test code (assuming the guest
> page tables are set up in the default way) and even data can be shared as
> long the accesses are done with the appropriate mappings (gva vs. hva).
> There's a small API to help with that as well.
> 

Sounds cool.  Beware of the attributes of the mappings such that both
the guest and host have mapped the memory cacheable etc., but I'm sure
you've thought of that already.

> > 
> > Can you help me paint the overall picture, or point me to the piece of
> > documentation/presentation that explains the high-level picture, which I
> > must have obviously missed somehow?
> 
> We definitely need the documentation and, in hindsight, it looks like it
> would have been a good BoF topic last week too.

An overview of the different testing approaches would be a good KVM
Forum talk for next year, IMHO.  When should you use kvm-unit-tests, and
when should you use kselftests, some examples, etc.  Just saying ;)

> 
> I think this framework has a lot of potential for KVM API testing and
> even for quick & dirty guest code instruction sequence tests (although
> instruction sequences would also fit kvm-unit-tests). I hope I can help
> get you and anyone else interested started.
> 

I'll have a look at this series and glance at the code some more, it
would be interesting to consider if using some of this for nested virt
tests makes sense.


Thanks,

    Christoffer
Andrew Jones Nov. 1, 2018, 9:31 a.m. UTC | #7
On Thu, Nov 01, 2018 at 10:08:25AM +0100, Christoffer Dall wrote:
> On Tue, Oct 30, 2018 at 06:38:20PM +0100, Andrew Jones wrote:
> > 
> > Hi Christoffer,
> > 
> > Thanks for your interest in these tests. There isn't any documentation
> > that I know of, but it's a good idea to have some. I'll write something
> > up soon. I'll also try to answer your questions now.
> > 
> 
> That sounds great, thanks!
> 
> > On Mon, Oct 29, 2018 at 06:40:02PM +0100, Christoffer Dall wrote:
> > > Hi Drew,
> > > 
> > > On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> > > > This series provides KVM selftests that test dirty log tracking on
> > > > AArch64 for both 4K and 64K guest page sizes. Additionally the
> > > > framework provides an easy way to test dirty log tracking with the
> > > > recently posted dynamic IPA and 52bit IPA series[1].
> > > 
> > > I was trying to parse the commit text of patch 2, and I realized that I
> > > don't understand the 'hypercall to userspace' thing at all, which
> > > probably means I have no idea how the selftests work overall.
> > 
> > There are three parts to a kvm selftest: 1) the test code which runs in
> > host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
> > thread code which executes KVM_RUN for the guest code and possibly also
> > some host userspace test code, and 3) the guest code, which is naturally
> > run in the vcpu thread, but in guest mode.
> > 
> > The need for a "ucall" arises for 2's "possibly also some host userspace
> > test code". In that case the guest code needs to invoke an exit from guest
> > mode, not just to kvm, but all the way to kvm userspace. For AArch64, as
> > you know, this can be done with an MMIO access. The reason patch 2
> > generalizes the concept is because for x86 this can and is done with a
> > PIO access.
> > 
> 
> So in the world of normal KVM userspace, (2) would be a thread in the
> same process as (1), sharing its mm.  Is this a different setup somehow,
> why?

It's the same setup. Actually, the only difference is what Paolo pointed
out in his reply: there's no need to spawn a separate vcpu thread when
the test only needs one vcpu and no independent main thread, i.e. the
main test / kvm userspace code can call KVM_RUN itself.
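
In code that just means the KVM_RUN loop lives directly in main()
(sketch; vcpu_state() is the kvm_util accessor for the mmap'ed
struct kvm_run):

	struct kvm_run *run = vcpu_state(vm, VCPU_ID);

	for (;;) {
		vcpu_run(vm, VCPU_ID);	/* ioctl(vcpu_fd, KVM_RUN, 0) */
		if (run->exit_reason == KVM_EXIT_MMIO)
			break;		/* the guest ucall'ed out */
		/* otherwise assert on/handle the exit and re-enter */
	}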

> 
> > > 
> > > I then spent a while reading various bits of documentation in the kernel
> > > tree, LWN, etc., only to realize that I don't understand how this test
> > > framework actually works.
> > > 
> > > Are the selftests modules, userspace programs, or code that is compiled
> > > with the kernel, and (somehow?) run from userspace.  I thought the
> > > latter, partially based on your explanation at ELC, but then I don't
> > > understand how the "compile and run" make target works.
> > 
> > The tests are standalone userspace programs which are compiled separately,
> > but have dependencies on kernel headers. As stated above, for kvm, each
> > selftest is a kvm userspace (including its vcpu thread code) and guest
> > code combined. While there's a lot of complexity in the framework,
> > particularly for memory management, and a bit for vcpu setup, most of that
> > can be shared among tests using the kvm_util.h and test_util.h APIs,
> > allowing a given test to only have a relatively simple main(), vcpu thread
> > "vcpu_worker()" function, and "guest_code()" function. Guest mode code can
> > easily share code with the kvm userspace test code (assuming the guest
> > page tables are set up in the default way) and even data can be shared as
> > long as the accesses are done with the appropriate mappings (gva vs. hva).
> > There's a small API to help with that as well.
> > 
> 
> Sounds cool.  Beware of the attributes of the mappings such that both
> the guest and host have mapped the memory cacheable etc., but I'm sure
> you've thought of that already.

Right. If you look at virt_pg_map(), then you'll see that I have the
default set to NORMAL memory. It can be overridden by calling
_virt_pg_map() directly - which might be nice to do to specifically
test stage1/stage2 mapping combinations.
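
Roughly like this (the attribute index values here are my reading of
the series' DEFAULT_MAIR_EL1 layout - double-check the real encoding
in lib/aarch64/processor.c):

	/* default: stage1 maps the page as normal, cacheable memory */
	virt_pg_map(vm, gva, gpa, 0 /* pgd memslot */);

	/* explicit attribute index, e.g. a device stage1 mapping to
	 * exercise a particular stage1/stage2 combination */
	#define ATTRIDX_DEVICE_nGnRnE	0	/* assumed MAIR index */
	_virt_pg_map(vm, gva, gpa, 0, ATTRIDX_DEVICE_nGnRnE);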

> 
> > > 
> > > Can you help me paint the overall picture, or point me to the piece of
> > > documentation/presentation that explains the high-level picture, which I
> > > must have obviously missed somehow?
> > 
> > We definitely need the documentation and, in hindsight, it looks like it
> > would have been a good BoF topic last week too.
> 
> An overview of the different testing approaches would be a good KVM
> Forum talk for next year, IMHO.  When should you use kvm-unit-tests, and
> when should you use kselftests, some examples, etc.  Just saying ;)

:-)

> 
> > 
> > I think this framework has a lot of potential for KVM API testing and
> > even for quick & dirty guest code instruction sequence tests (although
> > instruction sequences would also fit kvm-unit-tests). I hope I can help
> > get you and anyone else interested started.
> > 
> 
> I'll have a look at this series and glance at the code some more, it
> would be interesting to consider if using some of this for nested virt
> tests makes sense.

Yes. x86 has many nested tests. I think the framework was originally
created with that in mind.

Thanks,
drew