Message ID | 20211111000310.1435032-5-dmatlack@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: selftests: Hugepage fixes and cleanups |
On Wed, Nov 10, 2021 at 4:03 PM David Matlack <dmatlack@google.com> wrote:
>
> From: Sean Christopherson <seanjc@google.com>
>
> Assert that the GPA for a memslot backed by a hugepage is aligned to
> the hugepage size and fix perf_test_util accordingly. Lack of GPA
> alignment prevents KVM from backing the guest with hugepages, e.g. x86's
> write-protection of hugepages when dirty logging is activated is
> otherwise not exercised.
>
> Add a comment explaining that guest_page_size is for non-huge pages to
> try and avoid confusion about what it actually tracks.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> [Used get_backing_src_pagesz() to determine alignment dynamically.]
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c       | 2 ++
>  tools/testing/selftests/kvm/lib/perf_test_util.c | 7 ++++++-
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 07f37456bba0..1f6a01c33dce 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -875,6 +875,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
>  		alignment = max(backing_src_pagesz, alignment);
>
> +	ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz));
> +
>  	/* Add enough memory to align up if necessary */
>  	if (alignment > 1)
>  		region->mmap_size += alignment;
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 6b8d5020dc54..a015f267d945 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -55,11 +55,16 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>  {
>  	struct kvm_vm *vm;
>  	uint64_t guest_num_pages;
> +	uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
>  	int i;
>
>  	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
>  	perf_test_args.host_page_size = getpagesize();
> +	/*
> +	 * Snapshot the non-huge page size. This is used by the guest code to
> +	 * access/dirty pages at the logging granularity.
> +	 */
>  	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;

Is this comment correct? I wouldn't expect the guest page size to
determine the host dirty logging granularity.

>
>  	guest_num_pages = vm_adjust_num_guest_pages(mode,
> @@ -92,7 +97,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>
>  	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>  			      perf_test_args.guest_page_size;
> -	guest_test_phys_mem = align_down(guest_test_phys_mem, perf_test_args.host_page_size);
> +	guest_test_phys_mem = align_down(guest_test_phys_mem, backing_src_pagesz);
>  #ifdef __s390x__
>  	/* Align to 1M (segment size) */
>  	guest_test_phys_mem = align_down(guest_test_phys_mem, 1 << 20);
> --
> 2.34.0.rc1.387.gb447b232ab-goog
>
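As background for the alignment requirement the commit message describes: KVM can only back a range of guest memory with a hugepage if the guest-physical address and the host virtual address of its backing memory land on the same offset within a hugepage. The sketch below is purely illustrative; the function name and example values are made up and this is not code from the patch or from KVM itself.

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative check: a hugepage mapping is possible only if the GPA and
 * the HVA backing it are congruent modulo the hugepage size, i.e. they
 * share the same offset within a hugepage.
 */
static bool gpa_hva_hugepage_compatible(uint64_t gpa, uint64_t hva,
					uint64_t huge_page_size)
{
	return ((gpa ^ hva) & (huge_page_size - 1)) == 0;
}

A hugetlb backing store comes back from mmap() hugepage-aligned, so a memslot GPA that is only host-page aligned fails this condition everywhere and KVM quietly falls back to small mappings; the new ASSERT_EQ turns that silent degradation into a test failure.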
On Thu, Nov 11, 2021, Ben Gardon wrote:
> On Wed, Nov 10, 2021 at 4:03 PM David Matlack <dmatlack@google.com> wrote:
> >

[...]

> >  	perf_test_args.host_page_size = getpagesize();
> > +	/*
> > +	 * Snapshot the non-huge page size. This is used by the guest code to
> > +	 * access/dirty pages at the logging granularity.
> > +	 */
> >  	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
>
> Is this comment correct? I wouldn't expect the guest page size to
> determine the host dirty logging granularity.

"guest page size" is a bit of a misnomer. It's not the page size of the
guest's page tables, rather it's the non-huge page size of the PTEs that
KVM uses to map guest memory. That info is exposed to the guest so that
the guest and host agree on the stride.
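To make the "stride" point concrete, here is a minimal sketch of a guest-side loop that dirties memory using the snapshotted page size; the struct and function names are invented for illustration and this is not the actual perf_test_util guest code.

#include <stdint.h>

struct demo_vcpu_args {
	uint64_t gva;		/* base GVA of this vCPU's slice of test memory */
	uint64_t pages;		/* number of (non-huge) pages in the slice */
	uint64_t page_size;	/* snapshot of perf_test_args.guest_page_size */
};

/* Touch each page once so every access dirties a distinct page. */
static void demo_guest_dirty_memory(struct demo_vcpu_args *args)
{
	uint64_t i;

	for (i = 0; i < args->pages; i++) {
		uint64_t addr = args->gva + i * args->page_size;

		*(volatile uint64_t *)addr = i;
	}
}

Because the host tallies dirty pages at the same granularity, guest and host have to agree on this value, which is what the snapshot in perf_test_args provides.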
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 07f37456bba0..1f6a01c33dce 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -875,6 +875,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
 		alignment = max(backing_src_pagesz, alignment);
 
+	ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz));
+
 	/* Add enough memory to align up if necessary */
 	if (alignment > 1)
 		region->mmap_size += alignment;
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 6b8d5020dc54..a015f267d945 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -55,11 +55,16 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 {
 	struct kvm_vm *vm;
 	uint64_t guest_num_pages;
+	uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
 	int i;
 
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
 	perf_test_args.host_page_size = getpagesize();
+	/*
+	 * Snapshot the non-huge page size. This is used by the guest code to
+	 * access/dirty pages at the logging granularity.
+	 */
 	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
 
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
@@ -92,7 +97,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      perf_test_args.guest_page_size;
-	guest_test_phys_mem = align_down(guest_test_phys_mem, perf_test_args.host_page_size);
+	guest_test_phys_mem = align_down(guest_test_phys_mem, backing_src_pagesz);
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem = align_down(guest_test_phys_mem, 1 << 20);
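As a quick illustration of what the new ASSERT_EQ and the align_down() change do, here is a self-contained sketch. demo_align_up() and demo_align_down() are stand-ins that assume the usual power-of-two semantics of the selftest helpers, and the 2MiB hugepage size and the addresses are made-up example values.

#include <assert.h>
#include <stdint.h>

static uint64_t demo_align_down(uint64_t x, uint64_t align)
{
	return x & ~(align - 1);		/* align must be a power of two */
}

static uint64_t demo_align_up(uint64_t x, uint64_t align)
{
	return demo_align_down(x + align - 1, align);
}

int main(void)
{
	uint64_t huge_pagesz = 2ULL << 20;		/* 2MiB backing page size */
	uint64_t gpa = (3ULL << 30) + 0x1000;		/* host-page aligned only */

	/* A GPA that is only 4KiB-aligned trips the new ASSERT_EQ... */
	assert(demo_align_up(gpa, huge_pagesz) != gpa);

	/* ...so perf_test_create_vm() now rounds the base GPA down first. */
	gpa = demo_align_down(gpa, huge_pagesz);
	assert(demo_align_up(gpa, huge_pagesz) == gpa);
	return 0;
}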