Message ID | 20220429183935.1094599-2-dmatlack@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: selftests: Add nested support to dirty_log_perf_test |
On Fri, Apr 29, 2022 at 06:39:27PM +0000, David Matlack wrote:
> x86_page_size is an enum used to communicate the desired page size with
> which to map a range of memory. Under the hood they just encode the
> desired level at which to map the page. This ends up being clunky in a
> few ways:
>
> - The name suggests it encodes the size of the page rather than the
>   level.
> - In other places in x86_64/processor.c we just use a raw int to encode
>   the level.
>
> Simplify this by just admitting that x86_page_size is just the level and
> using an int and some more obviously named macros (e.g. PG_LEVEL_1G).
>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  .../selftests/kvm/include/x86_64/processor.h | 14 +++++-----
>  .../selftests/kvm/lib/x86_64/processor.c     | 27 +++++++++----------
>  .../selftests/kvm/max_guest_memory_test.c    |  2 +-
>  .../selftests/kvm/x86_64/mmu_role_test.c     |  2 +-
>  4 files changed, 22 insertions(+), 23 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index 37db341d4cc5..b512f9f508ae 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -465,13 +465,13 @@ void vcpu_set_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
>  struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
>  void vm_xsave_req_perm(int bit);
>
> -enum x86_page_size {
> -	X86_PAGE_SIZE_4K = 0,
> -	X86_PAGE_SIZE_2M,
> -	X86_PAGE_SIZE_1G,
> -};
> -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> -		   enum x86_page_size page_size);
> +#define PG_LEVEL_4K 0
> +#define PG_LEVEL_2M 1
> +#define PG_LEVEL_1G 2

A nitpick is: we could have named those as PG_LEVEL_[PTE|PMD|PUD|PGD..]
rather than 4K|2M|..., then...

> +
> +#define PG_LEVEL_SIZE(_level) (1ull << (((_level) * 9) + 12))
> +
> +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
>
>  /*
>   * Basic CPU control in CR0
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 9f000dfb5594..1a7de69e2495 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -199,15 +199,15 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
>  						    uint64_t pt_pfn,
>  						    uint64_t vaddr,
>  						    uint64_t paddr,
> -						    int level,
> -						    enum x86_page_size page_size)
> +						    int current_level,
> +						    int target_level)
>  {
> -	struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level);
> +	struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);
>
>  	if (!pte->present) {
>  		pte->writable = true;
>  		pte->present = true;
> -		pte->page_size = (level == page_size);
> +		pte->page_size = (current_level == target_level);
>  		if (pte->page_size)
>  			pte->pfn = paddr >> vm->page_shift;
>  		else
> @@ -218,20 +218,19 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
>  		 * a hugepage at this level, and that there isn't a hugepage at
>  		 * this level.
>  		 */
> -		TEST_ASSERT(level != page_size,
> +		TEST_ASSERT(current_level != target_level,
>  			    "Cannot create hugepage at level: %u, vaddr: 0x%lx\n",
> -			    page_size, vaddr);
> +			    current_level, vaddr);
>  		TEST_ASSERT(!pte->page_size,
>  			    "Cannot create page table at level: %u, vaddr: 0x%lx\n",
> -			    level, vaddr);
> +			    current_level, vaddr);
>  	}
>  	return pte;
>  }
>
> -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> -		   enum x86_page_size page_size)
> +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
>  {
> -	const uint64_t pg_size = 1ull << ((page_size * 9) + 12);
> +	const uint64_t pg_size = PG_LEVEL_SIZE(level);
>  	struct pageUpperEntry *pml4e, *pdpe, *pde;
>  	struct pageTableEntry *pte;
>
> @@ -256,15 +255,15 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
>  	 * early if a hugepage was created.
>  	 */
>  	pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift,
> -				      vaddr, paddr, 3, page_size);
> +				      vaddr, paddr, 3, level);
>  	if (pml4e->page_size)
>  		return;
>
> -	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size);
> +	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, level);
>  	if (pdpe->page_size)
>  		return;
>
> -	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size);
> +	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, level);

... here we could also potentially replace the 3/2/1s with the new macro
(or with existing naming number 3 will be missing a macro)?

>  	if (pde->page_size)
>  		return;
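For reference, the PG_LEVEL_[PTE|PMD|PUD|PGD] naming Peter alludes to would mirror the
kernel's page-table terminology instead of page sizes. A rough sketch only; these names
(and the extra level-3 define) are illustrative assumptions, not something posted in this
series:

#define PG_LEVEL_PTE 0	/* leaf at this level maps a 4K page */
#define PG_LEVEL_PMD 1	/* leaf at this level maps a 2M hugepage */
#define PG_LEVEL_PUD 2	/* leaf at this level maps a 1G hugepage */
#define PG_LEVEL_PGD 3	/* top-level table; never a hugepage leaf */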
On Fri, May 13, 2022 at 1:04 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Apr 29, 2022 at 06:39:27PM +0000, David Matlack wrote:
> > x86_page_size is an enum used to communicate the desired page size with
> > which to map a range of memory. Under the hood they just encode the
> > desired level at which to map the page. This ends up being clunky in a
> > few ways:
> >
> > - The name suggests it encodes the size of the page rather than the
> >   level.
> > - In other places in x86_64/processor.c we just use a raw int to encode
> >   the level.
> >
> > Simplify this by just admitting that x86_page_size is just the level and
> > using an int and some more obviously named macros (e.g. PG_LEVEL_1G).
> >
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> >  .../selftests/kvm/include/x86_64/processor.h | 14 +++++-----
> >  .../selftests/kvm/lib/x86_64/processor.c     | 27 +++++++++----------
> >  .../selftests/kvm/max_guest_memory_test.c    |  2 +-
> >  .../selftests/kvm/x86_64/mmu_role_test.c     |  2 +-
> >  4 files changed, 22 insertions(+), 23 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> > index 37db341d4cc5..b512f9f508ae 100644
> > --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> > +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> > @@ -465,13 +465,13 @@ void vcpu_set_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
> >  struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
> >  void vm_xsave_req_perm(int bit);
> >
> > -enum x86_page_size {
> > -	X86_PAGE_SIZE_4K = 0,
> > -	X86_PAGE_SIZE_2M,
> > -	X86_PAGE_SIZE_1G,
> > -};
> > -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> > -		   enum x86_page_size page_size);
> > +#define PG_LEVEL_4K 0
> > +#define PG_LEVEL_2M 1
> > +#define PG_LEVEL_1G 2
>
> A nitpick is: we could have named those as PG_LEVEL_[PTE|PMD|PUD|PGD..]
> rather than 4K|2M|..., then...

I went with these names to match the KVM code (although the level
numbers themselves are off by 1).

>
> > +
> > +#define PG_LEVEL_SIZE(_level) (1ull << (((_level) * 9) + 12))
> > +
> > +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
> >
> >  /*
> >   * Basic CPU control in CR0
> > diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > index 9f000dfb5594..1a7de69e2495 100644
> > --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > @@ -199,15 +199,15 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
> >  						    uint64_t pt_pfn,
> >  						    uint64_t vaddr,
> >  						    uint64_t paddr,
> > -						    int level,
> > -						    enum x86_page_size page_size)
> > +						    int current_level,
> > +						    int target_level)
> >  {
> > -	struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level);
> > +	struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);
> >
> >  	if (!pte->present) {
> >  		pte->writable = true;
> >  		pte->present = true;
> > -		pte->page_size = (level == page_size);
> > +		pte->page_size = (current_level == target_level);
> >  		if (pte->page_size)
> >  			pte->pfn = paddr >> vm->page_shift;
> >  		else
> > @@ -218,20 +218,19 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
> >  		 * a hugepage at this level, and that there isn't a hugepage at
> >  		 * this level.
> >  		 */
> > -		TEST_ASSERT(level != page_size,
> > +		TEST_ASSERT(current_level != target_level,
> >  			    "Cannot create hugepage at level: %u, vaddr: 0x%lx\n",
> > -			    page_size, vaddr);
> > +			    current_level, vaddr);
> >  		TEST_ASSERT(!pte->page_size,
> >  			    "Cannot create page table at level: %u, vaddr: 0x%lx\n",
> > -			    level, vaddr);
> > +			    current_level, vaddr);
> >  	}
> >  	return pte;
> >  }
> >
> > -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> > -		   enum x86_page_size page_size)
> > +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
> >  {
> > -	const uint64_t pg_size = 1ull << ((page_size * 9) + 12);
> > +	const uint64_t pg_size = PG_LEVEL_SIZE(level);
> >  	struct pageUpperEntry *pml4e, *pdpe, *pde;
> >  	struct pageTableEntry *pte;
> >
> > @@ -256,15 +255,15 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> >  	 * early if a hugepage was created.
> >  	 */
> >  	pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift,
> > -				      vaddr, paddr, 3, page_size);
> > +				      vaddr, paddr, 3, level);
> >  	if (pml4e->page_size)
> >  		return;
> >
> > -	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size);
> > +	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, level);
> >  	if (pdpe->page_size)
> >  		return;
> >
> > -	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size);
> > +	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, level);
>
> ... here we could also potentially replace the 3/2/1s with the new macro
> (or with existing naming number 3 will be missing a macro)?

Good point. Will do.

>
> >  	if (pde->page_size)
> >  		return;
>
> --
> Peter Xu
>
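For readers following along, the follow-up David agrees to here (dropping the raw 3/2/1
levels in __virt_pg_map() in favor of named macros) could look roughly like the fragment
below. PG_LEVEL_512G for level 3 is an assumed name, since the posted patch only defines
macros up to PG_LEVEL_1G:

/* Hypothetical follow-up; PG_LEVEL_512G (level 3) is assumed, not part of this patch. */
#define PG_LEVEL_512G 3

	pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift,
				      vaddr, paddr, PG_LEVEL_512G, level);
	if (pml4e->page_size)
		return;

	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, PG_LEVEL_1G, level);
	if (pdpe->page_size)
		return;

	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, PG_LEVEL_2M, level);
	if (pde->page_size)
		return;

The mapping of macros to positions works out because virt_create_upper_pte() installs a
hugepage when current_level == target_level: a PDPE (level 2) leaf is a 1G mapping and a
PDE (level 1) leaf is a 2M mapping.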
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 37db341d4cc5..b512f9f508ae 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -465,13 +465,13 @@ void vcpu_set_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
 struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid);
 void vm_xsave_req_perm(int bit);

-enum x86_page_size {
-	X86_PAGE_SIZE_4K = 0,
-	X86_PAGE_SIZE_2M,
-	X86_PAGE_SIZE_1G,
-};
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-		   enum x86_page_size page_size);
+#define PG_LEVEL_4K 0
+#define PG_LEVEL_2M 1
+#define PG_LEVEL_1G 2
+
+#define PG_LEVEL_SIZE(_level) (1ull << (((_level) * 9) + 12))
+
+void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);

 /*
  * Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..1a7de69e2495 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -199,15 +199,15 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
 						    uint64_t pt_pfn,
 						    uint64_t vaddr,
 						    uint64_t paddr,
-						    int level,
-						    enum x86_page_size page_size)
+						    int current_level,
+						    int target_level)
 {
-	struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level);
+	struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);

 	if (!pte->present) {
 		pte->writable = true;
 		pte->present = true;
-		pte->page_size = (level == page_size);
+		pte->page_size = (current_level == target_level);
 		if (pte->page_size)
 			pte->pfn = paddr >> vm->page_shift;
 		else
@@ -218,20 +218,19 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
 		 * a hugepage at this level, and that there isn't a hugepage at
 		 * this level.
 		 */
-		TEST_ASSERT(level != page_size,
+		TEST_ASSERT(current_level != target_level,
 			    "Cannot create hugepage at level: %u, vaddr: 0x%lx\n",
-			    page_size, vaddr);
+			    current_level, vaddr);
 		TEST_ASSERT(!pte->page_size,
 			    "Cannot create page table at level: %u, vaddr: 0x%lx\n",
-			    level, vaddr);
+			    current_level, vaddr);
 	}
 	return pte;
 }

-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-		   enum x86_page_size page_size)
+void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 {
-	const uint64_t pg_size = 1ull << ((page_size * 9) + 12);
+	const uint64_t pg_size = PG_LEVEL_SIZE(level);
 	struct pageUpperEntry *pml4e, *pdpe, *pde;
 	struct pageTableEntry *pte;

@@ -256,15 +255,15 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	 * early if a hugepage was created.
 	 */
 	pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift,
-				      vaddr, paddr, 3, page_size);
+				      vaddr, paddr, 3, level);
 	if (pml4e->page_size)
 		return;

-	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size);
+	pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, level);
 	if (pdpe->page_size)
 		return;

-	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size);
+	pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, level);
 	if (pde->page_size)
 		return;

@@ -279,7 +278,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,

 void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
-	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
+	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }

 static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
index 3875c4b23a04..15f046e19cb2 100644
--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
+++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
@@ -244,7 +244,7 @@ int main(int argc, char *argv[])
 #ifdef __x86_64__
 	/* Identity map memory in the guest using 1gb pages. */
 	for (i = 0; i < slot_size; i += size_1gb)
-		__virt_pg_map(vm, gpa + i, gpa + i, X86_PAGE_SIZE_1G);
+		__virt_pg_map(vm, gpa + i, gpa + i, PG_LEVEL_1G);
 #else
 	for (i = 0; i < slot_size; i += vm_get_page_size(vm))
 		virt_pg_map(vm, gpa + i, gpa + i);
diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
index da2325fcad87..bdecd532f935 100644
--- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
+++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
@@ -35,7 +35,7 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val)
 	run = vcpu_state(vm, VCPU_ID);

 	/* Map 1gb page without a backing memlot. */
-	__virt_pg_map(vm, MMIO_GPA, MMIO_GPA, X86_PAGE_SIZE_1G);
+	__virt_pg_map(vm, MMIO_GPA, MMIO_GPA, PG_LEVEL_1G);

 	r = _vcpu_run(vm, VCPU_ID);
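As a quick sanity check on the PG_LEVEL_SIZE() encoding introduced above (each level adds
9 address bits on top of the 4 KiB base), here is a tiny standalone sketch; the macro
definitions are copied from the patch, the main() harness is only for illustration:

#include <assert.h>

#define PG_LEVEL_4K 0
#define PG_LEVEL_2M 1
#define PG_LEVEL_1G 2

#define PG_LEVEL_SIZE(_level) (1ull << (((_level) * 9) + 12))

int main(void)
{
	assert(PG_LEVEL_SIZE(PG_LEVEL_4K) == (1ull << 12));	/* 4 KiB */
	assert(PG_LEVEL_SIZE(PG_LEVEL_2M) == (1ull << 21));	/* 2 MiB */
	assert(PG_LEVEL_SIZE(PG_LEVEL_1G) == (1ull << 30));	/* 1 GiB */
	return 0;
}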
x86_page_size is an enum used to communicate the desired page size with
which to map a range of memory. Under the hood they just encode the
desired level at which to map the page. This ends up being clunky in a
few ways:

- The name suggests it encodes the size of the page rather than the
  level.
- In other places in x86_64/processor.c we just use a raw int to encode
  the level.

Simplify this by just admitting that x86_page_size is just the level and
using an int and some more obviously named macros (e.g. PG_LEVEL_1G).

Signed-off-by: David Matlack <dmatlack@google.com>
---
 .../selftests/kvm/include/x86_64/processor.h | 14 +++++-----
 .../selftests/kvm/lib/x86_64/processor.c     | 27 +++++++++----------
 .../selftests/kvm/max_guest_memory_test.c    |  2 +-
 .../selftests/kvm/x86_64/mmu_role_test.c     |  2 +-
 4 files changed, 22 insertions(+), 23 deletions(-)

base-commit: 84e5ffd045f33e4fa32370135436d987478d0bf7
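To illustrate the resulting API from a caller's point of view, a minimal usage sketch of
the renamed interface; `vm` and `gpa` are assumed to come from the usual selftest setup
(vm_create() and memslot configuration not shown), and the addresses are illustrative:

	/* 1G identity mapping, as max_guest_memory_test.c now requests. */
	__virt_pg_map(vm, gpa, gpa, PG_LEVEL_1G);

	/* 2M mapping at an illustrative offset past the 1G region. */
	__virt_pg_map(vm, gpa + PG_LEVEL_SIZE(PG_LEVEL_1G),
		      gpa + PG_LEVEL_SIZE(PG_LEVEL_1G), PG_LEVEL_2M);

	/* Plain 4K mapping via the unchanged wrapper, which now passes PG_LEVEL_4K. */
	virt_pg_map(vm, gpa + 2 * PG_LEVEL_SIZE(PG_LEVEL_1G),
		    gpa + 2 * PG_LEVEL_SIZE(PG_LEVEL_1G));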