[v6,07/13] KVM: selftests: Change ____vm_create() to take struct kvm_vm_mem_params

Message ID 20220906180930.230218-8-ricarkol@google.com (mailing list archive)
State New, archived
Series KVM: selftests: Add aarch64/page_fault_test

Commit Message

Ricardo Koller Sept. 6, 2022, 6:09 p.m. UTC
The vm_create() helpers are hardcoded to place most page types (code,
page tables, stacks, etc.) in the same memslot #0, always backed by
anonymous 4K pages.  There are a couple of issues with that.  First, tests
that want to differ even slightly, e.g., by placing page tables in a
different backing source type, must replicate much of what the vm_create()
functions already do.  Second, the hardcoded assumption that memslot #0
holds most things is spread everywhere, which makes it very hard to change.

Fix the above issues by having selftests specify how they want memory to be
laid out: define the memory regions to use for code, pt (page-tables), and
data. Introduce a new structure, struct kvm_vm_mem_params, that defines: guest
mode, a list of memory region descriptions, and some fields specifying what
regions to use for code, pt, and data.

There is no functional change intended. This commit adds a default
struct kvm_vm_mem_params that lays out memory exactly as before. The next
commit will change the allocators to get the region they should be using,
e.g., the page table allocators will use the pt memslot.
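
For illustration, a test could place its page tables in a separate memslot
roughly as follows. This is a hypothetical sketch: the slot number, sizes,
guest_paddr, and the hugetlb backing are made up and not part of this series.

	struct kvm_vm_mem_params mem_params = kvm_vm_mem_default;
	struct kvm_vm *vm;

	/* Slot 0 keeps holding code and data, as before. */
	mem_params.region[0].npages = 512;

	/* Made-up example: page tables in their own hugetlb-backed slot. */
	mem_params.region[1].src_type = VM_MEM_SRC_ANONYMOUS_HUGETLB;
	mem_params.region[1].guest_paddr = 0x200000;
	mem_params.region[1].slot = 1;
	mem_params.region[1].npages = 16;
	mem_params.region[1].enabled = true;
	mem_params.region_idx[MEM_REGION_PT] = 1;

	vm = ____vm_create(&mem_params);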

Cc: Sean Christopherson <seanjc@google.com>
Cc: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     | 51 +++++++++++++++-
 .../selftests/kvm/lib/aarch64/processor.c     |  3 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 58 ++++++++++++++++---
 3 files changed, 102 insertions(+), 10 deletions(-)

Comments

Andrew Jones Sept. 19, 2022, 6:42 a.m. UTC | #1
On Tue, Sep 06, 2022 at 06:09:24PM +0000, Ricardo Koller wrote:
> The vm_create() helpers are hardcoded to place most page types (code,
> page tables, stacks, etc.) in the same memslot #0, always backed by
> anonymous 4K pages.  There are a couple of issues with that.  First, tests
> that want to differ even slightly, e.g., by placing page tables in a
> different backing source type, must replicate much of what the vm_create()
> functions already do.  Second, the hardcoded assumption that memslot #0
> holds most things is spread everywhere, which makes it very hard to change.
> 
> Fix the above issues by having selftests specify how they want memory to be
> laid out: define the memory regions to use for code, pt (page-tables), and
> data. Introduce a new structure, struct kvm_vm_mem_params, that defines: guest
> mode, a list of memory region descriptions, and some fields specifying what
> regions to use for code, pt, and data.
> 
> There is no functional change intended. This commit adds a default
> struct kvm_vm_mem_params that lays out memory exactly as before. The next
> commit will change the allocators to get the region they should be using,
> e.g., the page table allocators will use the pt memslot.
> 
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: Andrew Jones <andrew.jones@linux.dev>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> ---
>  .../selftests/kvm/include/kvm_util_base.h     | 51 +++++++++++++++-
>  .../selftests/kvm/lib/aarch64/processor.c     |  3 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 58 ++++++++++++++++---
>  3 files changed, 102 insertions(+), 10 deletions(-)
>

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Sean Christopherson Sept. 19, 2022, 4:28 p.m. UTC | #2
On Tue, Sep 06, 2022, Ricardo Koller wrote:
> @@ -637,19 +658,45 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
>  			      vm_paddr_t paddr_min, uint32_t memslot);
>  vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
>  
> +struct kvm_vm_mem_params {
> +	enum vm_guest_mode mode;
> +
> +	struct {
> +		enum vm_mem_backing_src_type src_type;
> +		uint64_t guest_paddr;
> +		/*
> +		 * KVM region slot (same meaning as in struct
> +		 * kvm_userspace_memory_region).
> +		 */
> +		uint32_t slot;
> +		uint64_t npages;
> +		uint32_t flags;
> +		bool enabled;

"enabled" is unnecessary, just have ____vm_create() skip over regions with npages=0.
Likely ends up being a moot point though.

> +	} region[NR_MEM_REGIONS];
> +
> +	/* Each region type points to a region in the above array. */
> +	uint16_t region_idx[NR_MEM_REGIONS];

Eww.  This is going to be super confusing and it's one more thing for tests to
screw up.  And open coding the indices for region[] is beyond gross.

> +};
> +
> +extern struct kvm_vm_mem_params kvm_vm_mem_default;
> +
>  /*
>   * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
>   * loads the test binary into guest memory and creates an IRQ chip (x86 only).
>   * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
>   * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
>   */
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
> +struct kvm_vm *____vm_create(struct kvm_vm_mem_params *mem_params);
>  struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
>  			   uint64_t nr_extra_pages);
>  
>  static inline struct kvm_vm *vm_create_barebones(void)
>  {
> -	return ____vm_create(VM_MODE_DEFAULT, 0);
> +	struct kvm_vm_mem_params params_wo_memslots = {
> +		.mode = kvm_vm_mem_default.mode,
> +	};
> +
> +	return ____vm_create(&params_wo_memslots);

Very related to the above complaints, this is rather ugly.  I liked the idea of
passing a struct to __vm_create(), but passing it to ____vm_create() feels extremely
forced.

In an ideal world, my preference would still be to modify __vm_create() to take the
struct so that a test that wants to utilize different memslots doesn't need to
manually duplicate all the other stuff in __vm_create(), but that might end up
being too forced as well.  For now, I'm ok punting on that so the page_fault_test
can get merged.

Looking at this with fresh eyes, there's simply no reason ____vm_create() should be
creating memslots.  If this series first moves the memslot creation into __vm_create()
where it belongs (patch below), then there's no need to force ____vm_create() to take
a struct.  And if we punt on refactoring __vm_create(), then there's no need to
add kvm_vm_mem_default and no real need to add struct kvm_vm_mem_params either.

If/when there's a second test that wants fine-grained control over memslots then
we can figure out a proper API to share between page_fault_test and whatever the
new test is, but for now if page_fault_test is going to call ____vm_create()
directly, then I think it's easier to forego the common API and just have page_fault_test
and __vm_create() open code setting vm->memslots.

Alternatively, if we really want a common API right away, then we can add a helper
to populate the memory region + vm->memslots.

Option A (open code):

struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
			   uint64_t nr_extra_pages)
{
	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
						 nr_extra_pages);
	struct kvm_vm *vm;
	int i;

	vm = ____vm_create(mode);

	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);

	for (i = 0; i < NR_MEM_REGIONS; i++)
		vm->memslots[i] = 0;

	kvm_vm_elf_load(vm, program_invocation_name);

#ifdef __x86_64__
	vm_create_irqchip(vm);
#endif
	return vm;
}

...

enum pf_test_memslots {
	CODE_MEMSLOT,
	PAGE_TABLE_MEMSLOT,
	DATA_MEMSLOT,
};

/* Create a code memslot at pfn=0, and data and PT ones at max_gfn. */
static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
{
	uint64_t backing_src_pagesz = get_backing_src_pagesz(p->src_type);
	uint64_t guest_page_size = vm_guest_mode_params[vm->mode].page_size;
	uint64_t max_gfn = vm->max_gfn;
	/* Enough for 2M of code when using 4K guest pages. */
	uint64_t code_npages = 512;
	uint64_t pt_size, data_size, data_gpa;

	/*
	 * This test requires 1 pgd, 2 pud, 4 pmd, and 6 pte pages when using
	 * VM_MODE_P48V48_4K. Note that the .text takes ~1.6MBs.  That's 13
	 * pages. VM_MODE_P48V48_4K is the mode with most PT pages; let's use
	 * twice that just in case.
	 */
	pt_size = 26 * guest_page_size;

	/* memslot sizes and gpa's must be aligned to the backing page size */
	pt_size = align_up(pt_size, backing_src_pagesz);
	data_size = align_up(guest_page_size, backing_src_pagesz);
	data_gpa = (max_gfn * guest_page_size) - data_size;
	data_gpa = align_down(data_gpa, backing_src_pagesz);

	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, CODE_MEMSLOT,
				    code_npages, 0);
	vm->memslots[MEM_REGION_CODE] = CODE_MEMSLOT;

	vm_userspace_mem_region_add(vm, p->src_type, data_gpa - pt_size,
				    PAGE_TABLE_MEMSLOT, pt_size / guest_page_size,
				    p->test_desc->pt_memslot_flags);
	vm->memslots[MEM_REGION_PT] = PAGE_TABLE_MEMSLOT;

	vm_userspace_mem_region_add(vm, p->src_type, data_gpa, DATA_MEMSLOT,
				    data_size / guest_page_size,
				    p->test_desc->data_memslot_flags);
	vm->memslots[MEM_REGION_DATA] = DATA_MEMSLOT;
}


static void run_test(enum vm_guest_mode mode, void *arg)
{
	struct test_params *p = (struct test_params *)arg;
	struct test_desc *test = p->test_desc;
	struct kvm_vm *vm;
	struct kvm_vcpu *vcpu;
	struct uffd_desc *pt_uffd, *data_uffd;

	print_test_banner(mode, p);

	vm = ____vm_create(mode);
	setup_memslots(vm, p);
	kvm_vm_elf_load(vm, program_invocation_name);
	vcpu = vm_vcpu_add(vm, 0, guest_code);

	...
}

Option B (helper):

enum kvm_mem_region_mask {
	MEM_REGION_CODE_MASK	= BIT(MEM_REGION_CODE),
	MEM_REGION_PT_MASK	= BIT(MEM_REGION_PT),
	MEM_REGION_DATA_MASK	= BIT(MEM_REGION_DATA),

	MEM_REGION_ALL_MASK	= MEM_REGION_CODE_MASK |
				  MEM_REGION_PT_MASK |
				  MEM_REGION_DATA_MASK,
};

void kvm_vm_add_mem_region(struct kvm_vm *vm, enum kvm_mem_region_mask type_mask,
			   enum vm_mem_backing_src_type src_type, uint32_t slot,
			   uint64_t guest_paddr, uint64_t nr_pages, uint32_t flags)
{
	int i;

	vm_userspace_mem_region_add(vm, src_type, guest_paddr, slot, nr_pages, flags);

	for (i = 0; i < NR_MEM_REGIONS; i++) {
		if (BIT(i) & type_mask)
			vm->memslots[i] = slot;
	}
}

struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
			   uint64_t nr_extra_pages)
{
	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
						 nr_extra_pages);
	struct kvm_vm *vm;

	vm = ____vm_create(mode);

	kvm_vm_add_mem_region(vm, MEM_REGION_ALL_MASK, VM_MEM_SRC_ANONYMOUS, 0,
			      0, nr_pages, 0);

	kvm_vm_elf_load(vm, program_invocation_name);

#ifdef __x86_64__
	vm_create_irqchip(vm);
#endif
	return vm;
}

static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
{
	...

	kvm_vm_add_mem_region(vm, MEM_REGION_CODE_MASK, VM_MEM_SRC_ANONYMOUS,
			      CODE_MEMSLOT, 0, code_npages, 0);

	kvm_vm_add_mem_region(vm, MEM_REGION_PT_MASK, p->src_type,
			      PAGE_TABLE_MEMSLOT, data_gpa - pt_size,
			      pt_size / guest_page_size,
			      p->test_desc->pt_memslot_flags);

	kvm_vm_add_mem_region(vm, MEM_REGION_DATA_MASK, p->src_type,
			      DATA_MEMSLOT, data_gpa,
			      data_size / guest_page_size,
			      p->test_desc->data_memslot_flags);
}

---
 .../testing/selftests/kvm/include/kvm_util_base.h |  4 ++--
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 +++++++--------
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 24fde97f6121..107cb87908f8 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -642,13 +642,13 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
  * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
  * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
  */
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
+struct kvm_vm *____vm_create(enum vm_guest_mode mode);
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages);
 
 static inline struct kvm_vm *vm_create_barebones(void)
 {
-	return ____vm_create(VM_MODE_DEFAULT, 0);
+	return ____vm_create(VM_MODE_DEFAULT);
 }
 
 static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..c761422faa17 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -143,13 +143,10 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
+struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
 
-	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
-		 vm_guest_mode_string(mode), nr_pages);
-
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
@@ -245,9 +242,6 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 
 	/* Allocate and setup memory for guest. */
 	vm->vpages_mapped = sparsebit_alloc();
-	if (nr_pages != 0)
-		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-					    0, 0, nr_pages, 0);
 
 	return vm;
 }
@@ -294,7 +288,12 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 						 nr_extra_pages);
 	struct kvm_vm *vm;
 
-	vm = ____vm_create(mode, nr_pages);
+	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
+		 vm_guest_mode_string(mode), nr_pages);
+
+	vm = ____vm_create(mode);
+
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);
 
 	kvm_vm_elf_load(vm, program_invocation_name);
 

base-commit: 372d07084593dc7a399bf9bee815711b1fb1bcf2
--
Ricardo Koller Sept. 19, 2022, 7:21 p.m. UTC | #3
On Mon, Sep 19, 2022 at 04:28:26PM +0000, Sean Christopherson wrote:
> On Tue, Sep 06, 2022, Ricardo Koller wrote:
> > @@ -637,19 +658,45 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> >  			      vm_paddr_t paddr_min, uint32_t memslot);
> >  vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
> >  
> > +struct kvm_vm_mem_params {
> > +	enum vm_guest_mode mode;
> > +
> > +	struct {
> > +		enum vm_mem_backing_src_type src_type;
> > +		uint64_t guest_paddr;
> > +		/*
> > +		 * KVM region slot (same meaning as in struct
> > +		 * kvm_userspace_memory_region).
> > +		 */
> > +		uint32_t slot;
> > +		uint64_t npages;
> > +		uint32_t flags;
> > +		bool enabled;
> 
> "enabled" is unnecessary, just have ____vm_create() skip over regions with npages=0.
> Likely ends up being a moot point though.
> 
> > +	} region[NR_MEM_REGIONS];
> > +
> > +	/* Each region type points to a region in the above array. */
> > +	uint16_t region_idx[NR_MEM_REGIONS];
> 
> Eww.  This is going to be super confusing and it's one more thing for tests to
> screw up.  And open coding the indices for region[] is beyond gross.
> 
> > +};
> > +
> > +extern struct kvm_vm_mem_params kvm_vm_mem_default;
> > +
> >  /*
> >   * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
> >   * loads the test binary into guest memory and creates an IRQ chip (x86 only).
> >   * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
> >   * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
> >   */
> > -struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
> > +struct kvm_vm *____vm_create(struct kvm_vm_mem_params *mem_params);
> >  struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> >  			   uint64_t nr_extra_pages);
> >  
> >  static inline struct kvm_vm *vm_create_barebones(void)
> >  {
> > -	return ____vm_create(VM_MODE_DEFAULT, 0);
> > +	struct kvm_vm_mem_params params_wo_memslots = {
> > +		.mode = kvm_vm_mem_default.mode,
> > +	};
> > +
> > +	return ____vm_create(&params_wo_memslots);
> 
> Very related to the above complaints, this is rather ugly.  I liked the idea of
> passing a struct to __vm_create(), but passing it to ____vm_create() feels extremely
> forced.
> 
> In an ideal world, my preference would still be to modify __vm_create() to take the
> struct so that a test that wants to utilize different memslots doesn't need to
> manually duplicate all the other stuff in __vm_create(), but that might end up
> being too forced as well.  For now, I'm ok punting on that so the page_fault_test
> can get merged.
> 
> Looking at this with fresh eyes, there's simply no reason ____vm_create() should be
> creating memslots.  If this series first moves the memslot creation into __vm_create()
> where it belongs (patch below), then there's no need to force ____vm_create() to take
> a struct.  And if we punt on refactoring __vm_create(), then there's no need to
> add kvm_vm_mem_default and no real need to add struct kvm_vm_mem_params either.

I think I prefer option A below, and I'll take you up on the offer of
punting the __vm_create() refactor until after page_fault_test.

Having a struct would still be nice eventually, as it would let tests do
things like running with many combinations of (backing_src, regions, ...).

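E.g., something along these lines (a rough sketch using the struct from this
patch; run_one_test and the exact set of backing source types are made up):

	static const enum vm_mem_backing_src_type src_types[] = {
		VM_MEM_SRC_ANONYMOUS,
		VM_MEM_SRC_ANONYMOUS_THP,
		VM_MEM_SRC_ANONYMOUS_HUGETLB,
	};
	int i;

	/* Sweep the same test body across several backing source types. */
	for (i = 0; i < ARRAY_SIZE(src_types); i++) {
		struct kvm_vm_mem_params params = kvm_vm_mem_default;

		params.region[0].npages = 512;
		params.region[0].src_type = src_types[i];

		run_one_test(____vm_create(&params));	/* made-up entry point */
	}
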
Thanks for the review,
Ricardo

> 
> If/when there's a second test that wants fine-grained control over memslots then
> we can figure out a proper API to share between page_fault_test and whatever the
> new test is, but for now if page_fault_test is going to call ____vm_create()
> directly, then I think it's easier to forego the common API and just have page_fault_test
> and __vm_create() open code setting vm->memslots.
> 
> Alternatively, if we really want a common API right away, then we can add a helper
> to populate the memory region + vm->memslots.
> 
> Option A (open code):
> 
> struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> 			   uint64_t nr_extra_pages)
> {
> 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
> 						 nr_extra_pages);
> 	struct kvm_vm *vm;
> 	int i;
> 
> 	vm = ____vm_create(mode);
> 
> 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);
> 
> 	for (i = 0; i < NR_MEM_REGIONS; i++)
> 		vm->memslots[i] = 0;
> 
> 	kvm_vm_elf_load(vm, program_invocation_name);
> 
> #ifdef __x86_64__
> 	vm_create_irqchip(vm);
> #endif
> 	return vm;
> }
> 
> ...
> 
> enum pf_test_memslots {
> 	CODE_MEMSLOT,
> 	PAGE_TABLE_MEMSLOT,
> 	DATA_MEMSLOT,
> };
> 
> /* Create a code memslot at pfn=0, and data and PT ones at max_gfn. */
> static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> {
> 	uint64_t backing_src_pagesz = get_backing_src_pagesz(p->src_type);
> 	uint64_t guest_page_size = vm_guest_mode_params[vm->mode].page_size;
> 	uint64_t max_gfn = vm->max_gfn;
> 	/* Enough for 2M of code when using 4K guest pages. */
> 	uint64_t code_npages = 512;
> 	uint64_t pt_size, data_size, data_gpa;
> 
> 	/*
> 	 * This test requires 1 pgd, 2 pud, 4 pmd, and 6 pte pages when using
> 	 * VM_MODE_P48V48_4K. Note that the .text takes ~1.6MBs.  That's 13
> 	 * pages. VM_MODE_P48V48_4K is the mode with most PT pages; let's use
> 	 * twice that just in case.
> 	 */
> 	pt_size = 26 * guest_page_size;
> 
> 	/* memslot sizes and gpa's must be aligned to the backing page size */
> 	pt_size = align_up(pt_size, backing_src_pagesz);
> 	data_size = align_up(guest_page_size, backing_src_pagesz);
> 	data_gpa = (max_gfn * guest_page_size) - data_size;
> 	data_gpa = align_down(data_gpa, backing_src_pagesz);
> 
> 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, CODE_MEMSLOT,
> 				    code_npages, 0);
> 	vm->memslots[MEM_REGION_CODE] = CODE_MEMSLOT;
> 
> 	vm_userspace_mem_region_add(vm, p->src_type, data_gpa - pt_size,
> 				    PAGE_TABLE_MEMSLOT, pt_size / guest_page_size,
> 				    p->test_desc->pt_memslot_flags);
> 	vm->memslots[MEM_REGION_PT] = PAGE_TABLE_MEMSLOT;
> 
> 	vm_userspace_mem_region_add(vm, p->src_type, data_gpa, DATA_MEMSLOT,
> 				    data_size / guest_page_size,
> 				    p->test_desc->data_memslot_flags);
> 	vm->memslots[MEM_REGION_DATA] = DATA_MEMSLOT;
> }
> 
> 
> static void run_test(enum vm_guest_mode mode, void *arg)
> {
> 	struct test_params *p = (struct test_params *)arg;
> 	struct test_desc *test = p->test_desc;
> 	struct kvm_vm *vm;
> 	struct kvm_vcpu *vcpu;
> 	struct uffd_desc *pt_uffd, *data_uffd;
> 
> 	print_test_banner(mode, p);
> 
> 	vm = ____vm_create(mode);
> 	setup_memslots(vm, p);
> 	kvm_vm_elf_load(vm, program_invocation_name);
> 	vcpu = vm_vcpu_add(vm, 0, guest_code);
> 
> 	...
> }
> 
> Option B (helper):
> 
> enum kvm_mem_region_mask {
> 	MEM_REGION_CODE_MASK	= BIT(MEM_REGION_CODE),
> 	MEM_REGION_PT_MASK	= BIT(MEM_REGION_PT),
> 	MEM_REGION_DATA_MASK	= BIT(MEM_REGION_DATA),
> 
> 	MEM_REGION_ALL_MASK	= MEM_REGION_CODE_MASK |
> 				  MEM_REGION_PT_MASK |
> 				  MEM_REGION_DATA_MASK,
> };
> 
> void kvm_vm_add_mem_region(struct kvm_vm *vm, enum kvm_mem_region_mask type_mask,
> 			   enum vm_mem_backing_src_type src_type, uint32_t slot,
> 			   uint64_t guest_paddr, uint64_t nr_pages, uint32_t flags)
> {
> 	int i;
> 
> 	vm_userspace_mem_region_add(vm, src_type, guest_paddr, slot, nr_pages, flags);
> 
> 	for (i = 0; i < NR_MEM_REGIONS; i++) {
> 		if (BIT(i) & type_mask)
> 			vm->memslots[i] = slot;
> 	}
> }
> 
> struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
> 			   uint64_t nr_extra_pages)
> {
> 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
> 						 nr_extra_pages);
> 	struct kvm_vm *vm;
> 
> 	vm = ____vm_create(mode);
> 
> 	kvm_vm_add_mem_region(vm, MEM_REGION_ALL_MASK, VM_MEM_SRC_ANONYMOUS, 0,
> 			      0, nr_pages, 0);
> 
> 	kvm_vm_elf_load(vm, program_invocation_name);
> 
> #ifdef __x86_64__
> 	vm_create_irqchip(vm);
> #endif
> 	return vm;
> }
> 
> static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> {
> 	...
> 
> 	kvm_vm_add_mem_region(vm, MEM_REGION_CODE_MASK, VM_MEM_SRC_ANONYMOUS,
> 			      CODE_MEMSLOT, 0, code_npages, 0);
> 
> 	kvm_vm_add_mem_region(vm, MEM_REGION_PT_MASK, p->src_type,
> 			      PAGE_TABLE_MEMSLOT, data_gpa - pt_size,
> 			      pt_size / guest_page_size,
> 			      p->test_desc->pt_memslot_flags);
> 
> 	kvm_vm_add_mem_region(vm, MEM_REGION_DATA_MASK, p->src_type,
> 			      DATA_MEMSLOT, data_gpa,
> 			      data_size / guest_page_size,
> 			      p->test_desc->data_memslot_flags);
> }
> 
> ---
>  .../testing/selftests/kvm/include/kvm_util_base.h |  4 ++--
>  tools/testing/selftests/kvm/lib/kvm_util.c        | 15 +++++++--------
>  2 files changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 24fde97f6121..107cb87908f8 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -642,13 +642,13 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
>   * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
>   * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
>   */
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
> +struct kvm_vm *____vm_create(enum vm_guest_mode mode);
>  struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
>  			   uint64_t nr_extra_pages);
>  
>  static inline struct kvm_vm *vm_create_barebones(void)
>  {
> -	return ____vm_create(VM_MODE_DEFAULT, 0);
> +	return ____vm_create(VM_MODE_DEFAULT);
>  }
>  
>  static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 9889fe0d8919..c761422faa17 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -143,13 +143,10 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
>  _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
>  	       "Missing new mode params?");
>  
> -struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
> +struct kvm_vm *____vm_create(enum vm_guest_mode mode)
>  {
>  	struct kvm_vm *vm;
>  
> -	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
> -		 vm_guest_mode_string(mode), nr_pages);
> -
>  	vm = calloc(1, sizeof(*vm));
>  	TEST_ASSERT(vm != NULL, "Insufficient Memory");
>  
> @@ -245,9 +242,6 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
>  
>  	/* Allocate and setup memory for guest. */
>  	vm->vpages_mapped = sparsebit_alloc();
> -	if (nr_pages != 0)
> -		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> -					    0, 0, nr_pages, 0);
>  
>  	return vm;
>  }
> @@ -294,7 +288,12 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
>  						 nr_extra_pages);
>  	struct kvm_vm *vm;
>  
> -	vm = ____vm_create(mode, nr_pages);
> +	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
> +		 vm_guest_mode_string(mode), nr_pages);
> +
> +	vm = ____vm_create(mode);
> +
> +	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0);
>  
>  	kvm_vm_elf_load(vm, program_invocation_name);
>  
> 
> base-commit: 372d07084593dc7a399bf9bee815711b1fb1bcf2
> -- 
>
Patch

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b2dbe253d4d0..5dbca38a512b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -65,6 +65,13 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+enum kvm_mem_region_type {
+	MEM_REGION_CODE,
+	MEM_REGION_PT,
+	MEM_REGION_DATA,
+	NR_MEM_REGIONS,
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -93,6 +100,13 @@ struct kvm_vm {
 	int stats_fd;
 	struct kvm_stats_header stats_header;
 	struct kvm_stats_desc *stats_desc;
+
+	/*
+	 * KVM region slots. These are the default memslots used by page
+	 * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
+	 * memslot.
+	 */
+	uint32_t memslots[NR_MEM_REGIONS];
 };
 
 
@@ -105,6 +119,13 @@ struct kvm_vm {
 struct userspace_mem_region *
 memslot2region(struct kvm_vm *vm, uint32_t memslot);
 
+inline struct userspace_mem_region *
+vm_get_mem_region(struct kvm_vm *vm, enum kvm_mem_region_type mrt)
+{
+	assert(mrt < NR_MEM_REGIONS);
+	return memslot2region(vm, vm->memslots[mrt]);
+}
+
 /* Minimum allocated guest virtual and physical addresses */
 #define KVM_UTIL_MIN_VADDR		0x2000
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR	0x180000
@@ -637,19 +658,45 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
+struct kvm_vm_mem_params {
+	enum vm_guest_mode mode;
+
+	struct {
+		enum vm_mem_backing_src_type src_type;
+		uint64_t guest_paddr;
+		/*
+		 * KVM region slot (same meaning as in struct
+		 * kvm_userspace_memory_region).
+		 */
+		uint32_t slot;
+		uint64_t npages;
+		uint32_t flags;
+		bool enabled;
+	} region[NR_MEM_REGIONS];
+
+	/* Each region type points to a region in the above array. */
+	uint16_t region_idx[NR_MEM_REGIONS];
+};
+
+extern struct kvm_vm_mem_params kvm_vm_mem_default;
+
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
  * loads the test binary into guest memory and creates an IRQ chip (x86 only).
  * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
  * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
  */
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
+struct kvm_vm *____vm_create(struct kvm_vm_mem_params *mem_params);
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages);
 
 static inline struct kvm_vm *vm_create_barebones(void)
 {
-	return ____vm_create(VM_MODE_DEFAULT, 0);
+	struct kvm_vm_mem_params params_wo_memslots = {
+		.mode = kvm_vm_mem_default.mode,
+	};
+
+	return ____vm_create(&params_wo_memslots);
 }
 
 static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 26f0eccff6fe..5a31dc85d054 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -508,7 +508,8 @@ void aarch64_get_supported_page_sizes(uint32_t ipa,
  */
 void __attribute__((constructor)) init_guest_modes(void)
 {
-       guest_modes_append_default();
+	guest_modes_append_default();
+	kvm_vm_mem_default.mode = VM_MODE_DEFAULT;
 }
 
 void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 5a9f080ff888..02532bc528da 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -143,12 +143,37 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
+/*
+ * A single memslot #0 for code, data, and page tables.
+ *
+ * .region[0].npages should be set by the user.
+ */
+struct kvm_vm_mem_params kvm_vm_mem_default = {
+#ifndef __aarch64__
+	/* arm64 kvm_vm_mem_default.mode set in init_guest_modes() */
+	.mode = VM_MODE_DEFAULT,
+#endif
+	.region[0] = {
+		.src_type = VM_MEM_SRC_ANONYMOUS,
+		.guest_paddr = 0,
+		.slot = 0,
+		.npages = 0,
+		.flags = 0,
+		.enabled = true,
+	},
+	.region_idx[MEM_REGION_CODE] = 0,
+	.region_idx[MEM_REGION_PT] = 0,
+	.region_idx[MEM_REGION_DATA] = 0,
+};
+
+struct kvm_vm *____vm_create(struct kvm_vm_mem_params *mem_params)
 {
+	enum vm_guest_mode mode = mem_params->mode;
 	struct kvm_vm *vm;
+	enum kvm_mem_region_type mrt;
+	int idx;
 
-	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
-		 vm_guest_mode_string(mode), nr_pages);
+	pr_debug("%s: mode='%s'\n", __func__, vm_guest_mode_string(mode));
 
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
@@ -245,9 +270,25 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 
 	/* Allocate and setup memory for guest. */
 	vm->vpages_mapped = sparsebit_alloc();
-	if (nr_pages != 0)
-		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-					    0, 0, nr_pages, 0);
+
+	/* Create all mem regions according to mem_params specifications. */
+	for (idx = 0; idx < NR_MEM_REGIONS; idx++) {
+		if (!mem_params->region[idx].enabled)
+			continue;
+
+		vm_userspace_mem_region_add(vm,
+			mem_params->region[idx].src_type,
+			mem_params->region[idx].guest_paddr,
+			mem_params->region[idx].slot,
+			mem_params->region[idx].npages,
+			mem_params->region[idx].flags);
+	}
+
+	/* Set all memslot types for the VM, also according to the spec. */
+	for (mrt = 0; mrt < NR_MEM_REGIONS; mrt++) {
+		idx = mem_params->region_idx[mrt];
+		vm->memslots[mrt] = mem_params->region[idx].slot;
+	}
 
 	return vm;
 }
@@ -292,9 +333,12 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 {
 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
 						 nr_extra_pages);
+	struct kvm_vm_mem_params mem_params = kvm_vm_mem_default;
 	struct kvm_vm *vm;
 
-	vm = ____vm_create(mode, nr_pages);
+	mem_params.region[0].npages = nr_pages;
+	mem_params.mode = mode;
+	vm = ____vm_create(&mem_params);
 
 	kvm_vm_elf_load(vm, program_invocation_name);