Message ID | 20210521173828.1180619-1-dmatlack@google.com
---|---
State | New, archived
Series | [v2] KVM: selftests: Fix 32-bit truncation of vm_get_max_gfn()
On Fri, May 21, 2021 at 05:38:28PM +0000, David Matlack wrote:
> vm_get_max_gfn() casts vm->max_gfn from a uint64_t to an unsigned int,
> which causes the upper 32-bits of the max_gfn to get truncated.
>
> Nobody noticed until now likely because vm_get_max_gfn() is only used
> as a mechanism to create a memslot in an unused region of the guest
> physical address space (the top), and the top of the 32-bit physical
> address space was always good enough.
>
> This fix reveals a bug in memslot_modification_stress_test which was
> trying to create a dummy memslot past the end of guest physical memory.
> Fix that by moving the dummy memslot lower.
>
> Fixes: 52200d0d944e ("KVM: selftests: Remove duplicate guest mode handling")
> Reviewed-by: Venkatesh Srinivas <venkateshs@chromium.org>
> Signed-off-by: David Matlack <dmatlack@google.com>

Reviewed-by: Peter Xu <peterx@redhat.com>
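[Editorial note: a minimal, self-contained sketch of the truncation the commit message describes. The PA-bit and page-size values below are illustrative assumptions, not taken from the patch; it also shows why the patch switches the format string to PRIx64.]

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Illustrative values: 52 physical-address bits and 4 KiB pages
	 * give max_gfn = 2^40 - 1. */
	uint64_t max_gfn = (1ULL << (52 - 12)) - 1;	/* 0xffffffffff */

	/* What the old "unsigned int" return type did to that value. */
	unsigned int truncated = (unsigned int)max_gfn;	/* 0xffffffff */

	/* %x (or %lx) is not portable for uint64_t; PRIx64 is, which is why
	 * the patch changes the TEST_ASSERT format string. */
	printf("max_gfn   = 0x%" PRIx64 "\n", max_gfn);
	printf("truncated = 0x%x\n", truncated);
	return 0;
}
```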
On Fri, May 21, 2021 at 05:38:28PM +0000, David Matlack wrote:
> vm_get_max_gfn() casts vm->max_gfn from a uint64_t to an unsigned int,
> which causes the upper 32-bits of the max_gfn to get truncated.
>
> Nobody noticed until now likely because vm_get_max_gfn() is only used
> as a mechanism to create a memslot in an unused region of the guest
> physical address space (the top), and the top of the 32-bit physical
> address space was always good enough.
>
> This fix reveals a bug in memslot_modification_stress_test which was
> trying to create a dummy memslot past the end of guest physical memory.
> Fix that by moving the dummy memslot lower.
>
> Fixes: 52200d0d944e ("KVM: selftests: Remove duplicate guest mode handling")
> Reviewed-by: Venkatesh Srinivas <venkateshs@chromium.org>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>
> v1 -> v2:
> - Added Venkatesh's R-b line.
> - Used PRIx64 to print uint64_t instead of %lx.
>
>  tools/testing/selftests/kvm/include/kvm_util.h | 2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
>  .../testing/selftests/kvm/lib/perf_test_util.c | 4 +++-
>  .../kvm/memslot_modification_stress_test.c | 18 +++++++++++-------
>  4 files changed, 16 insertions(+), 10 deletions(-)
>

Reviewed-by: Andrew Jones <drjones@redhat.com>

Thanks,
drew
On 21/05/21 19:38, David Matlack wrote:
> vm_get_max_gfn() casts vm->max_gfn from a uint64_t to an unsigned int,
> which causes the upper 32-bits of the max_gfn to get truncated.
>
> Nobody noticed until now likely because vm_get_max_gfn() is only used
> as a mechanism to create a memslot in an unused region of the guest
> physical address space (the top), and the top of the 32-bit physical
> address space was always good enough.
>
> This fix reveals a bug in memslot_modification_stress_test which was
> trying to create a dummy memslot past the end of guest physical memory.
> Fix that by moving the dummy memslot lower.
>
> Fixes: 52200d0d944e ("KVM: selftests: Remove duplicate guest mode handling")
> Reviewed-by: Venkatesh Srinivas <venkateshs@chromium.org>
> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>
> v1 -> v2:
> - Added Venkatesh's R-b line.
> - Used PRIx64 to print uint64_t instead of %lx.
>
>  tools/testing/selftests/kvm/include/kvm_util.h | 2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
>  .../testing/selftests/kvm/lib/perf_test_util.c | 4 +++-
>  .../kvm/memslot_modification_stress_test.c | 18 +++++++++++-------
>  4 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 84982eb02b29..5d9b35d09251 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -303,7 +303,7 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm);
>
>  unsigned int vm_get_page_size(struct kvm_vm *vm);
>  unsigned int vm_get_page_shift(struct kvm_vm *vm);
> -unsigned int vm_get_max_gfn(struct kvm_vm *vm);
> +uint64_t vm_get_max_gfn(struct kvm_vm *vm);
>  int vm_get_fd(struct kvm_vm *vm);
>
>  unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 1af1009254c4..aeffbb1e7c7d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -2058,7 +2058,7 @@ unsigned int vm_get_page_shift(struct kvm_vm *vm)
>  	return vm->page_shift;
>  }
>
> -unsigned int vm_get_max_gfn(struct kvm_vm *vm)
> +uint64_t vm_get_max_gfn(struct kvm_vm *vm)
>  {
>  	return vm->max_gfn;
>  }
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 81490b9b4e32..abf381800a59 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -2,6 +2,7 @@
>  /*
>   * Copyright (C) 2020, Google LLC.
>   */
> +#include <inttypes.h>
>
>  #include "kvm_util.h"
>  #include "perf_test_util.h"
> @@ -80,7 +81,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>  	 */
>  	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
>  		    "Requested more guest memory than address space allows.\n"
> -		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
> +		    "    guest pages: %" PRIx64 " max gfn: %" PRIx64
> +		    " vcpus: %d wss: %" PRIx64 "]\n",
>  		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
>  		    vcpu_memory_bytes);
>
> diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> index 6096bf0a5b34..98351ba0933c 100644
> --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> @@ -71,14 +71,22 @@ struct memslot_antagonist_args {
>  };
>
>  static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
> -			       uint64_t nr_modifications, uint64_t gpa)
> +			       uint64_t nr_modifications)
>  {
> +	const uint64_t pages = 1;
> +	uint64_t gpa;
>  	int i;
>
> +	/*
> +	 * Add the dummy memslot just below the perf_test_util memslot, which is
> +	 * at the top of the guest physical address space.
> +	 */
> +	gpa = guest_test_phys_mem - pages * vm_get_page_size(vm);
> +
>  	for (i = 0; i < nr_modifications; i++) {
>  		usleep(delay);
>  		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa,
> -					    DUMMY_MEMSLOT_INDEX, 1, 0);
> +					    DUMMY_MEMSLOT_INDEX, pages, 0);
>
>  		vm_mem_region_delete(vm, DUMMY_MEMSLOT_INDEX);
>  	}
> @@ -120,11 +128,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>  	pr_info("Started all vCPUs\n");
>
>  	add_remove_memslot(vm, p->memslot_modification_delay,
> -			   p->nr_memslot_modifications,
> -			   guest_test_phys_mem +
> -			   (guest_percpu_mem_size * nr_vcpus) +
> -			   perf_test_args.host_page_size +
> -			   perf_test_args.guest_page_size);
> +			   p->nr_memslot_modifications);
>
>  	run_vcpus = false;
>

Queued, thanks.

Paolo
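[Editorial note: a rough sketch of why the old dummy-slot GPA could land past the end of guest memory while the new one cannot. Every number below is an assumption for illustration, not the test's real configuration.]

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t page_size = 4096;			/* assumed 4 KiB pages */
	uint64_t end_of_memory = 1ULL << 52;		/* assumed 52-bit PA space */
	uint64_t percpu_mem = 1ULL << 30;		/* assumed 1 GiB per vCPU */
	uint64_t nr_vcpus = 4;

	/* perf_test_util places its memslot at the very top of guest memory. */
	uint64_t guest_test_phys_mem = end_of_memory - nr_vcpus * percpu_mem;

	/* Old scheme: an offset *above* that top-of-memory slot. */
	uint64_t old_gpa = guest_test_phys_mem + nr_vcpus * percpu_mem +
			   2 * page_size;		/* host page + guest page */

	/* New scheme: one page *below* the top-of-memory slot. */
	uint64_t new_gpa = guest_test_phys_mem - 1 * page_size;

	printf("old gpa 0x%" PRIx64 " %s the end of memory\n", old_gpa,
	       old_gpa >= end_of_memory ? "is past" : "is below");
	printf("new gpa 0x%" PRIx64 " is below the end of memory\n", new_gpa);
	return 0;
}
```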
On 21.05.21 19:38, David Matlack wrote:
> vm_get_max_gfn() casts vm->max_gfn from a uint64_t to an unsigned int,
> which causes the upper 32-bits of the max_gfn to get truncated.
>
> Nobody noticed until now likely because vm_get_max_gfn() is only used
> as a mechanism to create a memslot in an unused region of the guest
> physical address space (the top), and the top of the 32-bit physical
> address space was always good enough.
>
> This fix reveals a bug in memslot_modification_stress_test which was
> trying to create a dummy memslot past the end of guest physical memory.
> Fix that by moving the dummy memslot lower.
>
> Fixes: 52200d0d944e ("KVM: selftests: Remove duplicate guest mode handling")
> Reviewed-by: Venkatesh Srinivas <venkateshs@chromium.org>
> Signed-off-by: David Matlack <dmatlack@google.com>

As a heads up:
I have not yet looked into this, but this broke demand_paging_test and
kvm_page_table_test on s390:

not ok 4 selftests: kvm: demand_paging_test # exit=254
# selftests: kvm: dirty_log_test
# ==== Test Assertion Failure ====
#   lib/kvm_util.c:900: ret == 0
#   pid=245410 tid=245410 errno=22 - Invalid argument
#      1  0x0000000001005457: vm_userspace_mem_region_add at kvm_util.c:900
#      2  0x0000000001002cbf: run_test at dirty_log_test.c:757
#      3   (inlined by) run_test at dirty_log_test.c:702
#      4  0x000000000100c055: for_each_guest_mode at guest_modes.c:37
#      5  0x00000000010022b5: main at dirty_log_test.c:929 (discriminator 3)
#      6  0x000003ff96fabdb3: ?? ??:0
#      7  0x000000000100241d: .annobin_lto.hot at crt1.o:?
#   KVM_SET_USER_MEMORY_REGION IOCTL failed,
#   rc: -1 errno: 22
#   slot: 1 flags: 0x1
#   guest_phys_addr: 0xfffffbfe00000 size: 0x40100000
# Test iterations: 32, interval: 10 (ms)
# Testing Log Mode 'dirty-log'
# Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
# guest physical test memory offset: 0xfffffbfe00000
not ok 5 selftests: kvm: dirty_log_test # exit=254
# selftests: kvm: kvm_create_max_vcpus
# KVM_CAP_MAX_VCPU_ID: 248
# KVM_CAP_MAX_VCPUS: 248
# Testing creating 248 vCPUs, with IDs 0...247.
ok 6 selftests: kvm: kvm_create_max_vcpus
# selftests: kvm: kvm_page_table_test
# ==== Test Assertion Failure ====
#   lib/kvm_util.c:900: ret == 0
#   pid=245454 tid=245454 errno=22 - Invalid argument
#      1  0x0000000001003e47: vm_userspace_mem_region_add at kvm_util.c:900
#      2  0x000000000100257d: pre_init_before_test at kvm_page_table_test.c:302
#      3   (inlined by) run_test at kvm_page_table_test.c:374
#      4  0x000000000100aa45: for_each_guest_mode at guest_modes.c:37
#      5  0x0000000001001dd9: main at kvm_page_table_test.c:503
#      6  0x000003ff827abdb3: ?? ??:0
#      7  0x0000000001001e8d: .annobin_lto.hot at crt1.o:?
#   KVM_SET_USER_MEMORY_REGION IOCTL failed,
#   rc: -1 errno: 22
#   slot: 1 flags: 0x0
#   guest_phys_addr: 0xfffffbff00000 size: 0x40000000
not ok 7 selftests: kvm: kvm_page_table_test # exit=254

> ---
>
> v1 -> v2:
> - Added Venkatesh's R-b line.
> - Used PRIx64 to print uint64_t instead of %lx.
>
>  tools/testing/selftests/kvm/include/kvm_util.h | 2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
>  .../testing/selftests/kvm/lib/perf_test_util.c | 4 +++-
>  .../kvm/memslot_modification_stress_test.c | 18 +++++++++++-------
>  4 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 84982eb02b29..5d9b35d09251 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -303,7 +303,7 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm);
>
>  unsigned int vm_get_page_size(struct kvm_vm *vm);
>  unsigned int vm_get_page_shift(struct kvm_vm *vm);
> -unsigned int vm_get_max_gfn(struct kvm_vm *vm);
> +uint64_t vm_get_max_gfn(struct kvm_vm *vm);
>  int vm_get_fd(struct kvm_vm *vm);
>
>  unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 1af1009254c4..aeffbb1e7c7d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -2058,7 +2058,7 @@ unsigned int vm_get_page_shift(struct kvm_vm *vm)
>  	return vm->page_shift;
>  }
>
> -unsigned int vm_get_max_gfn(struct kvm_vm *vm)
> +uint64_t vm_get_max_gfn(struct kvm_vm *vm)
>  {
>  	return vm->max_gfn;
>  }
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 81490b9b4e32..abf381800a59 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -2,6 +2,7 @@
>  /*
>   * Copyright (C) 2020, Google LLC.
>   */
> +#include <inttypes.h>
>
>  #include "kvm_util.h"
>  #include "perf_test_util.h"
> @@ -80,7 +81,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>  	 */
>  	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
>  		    "Requested more guest memory than address space allows.\n"
> -		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
> +		    "    guest pages: %" PRIx64 " max gfn: %" PRIx64
> +		    " vcpus: %d wss: %" PRIx64 "]\n",
>  		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
>  		    vcpu_memory_bytes);
>
> diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> index 6096bf0a5b34..98351ba0933c 100644
> --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> @@ -71,14 +71,22 @@ struct memslot_antagonist_args {
>  };
>
>  static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
> -			       uint64_t nr_modifications, uint64_t gpa)
> +			       uint64_t nr_modifications)
>  {
> +	const uint64_t pages = 1;
> +	uint64_t gpa;
>  	int i;
>
> +	/*
> +	 * Add the dummy memslot just below the perf_test_util memslot, which is
> +	 * at the top of the guest physical address space.
> +	 */
> +	gpa = guest_test_phys_mem - pages * vm_get_page_size(vm);
> +
>  	for (i = 0; i < nr_modifications; i++) {
>  		usleep(delay);
>  		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa,
> -					    DUMMY_MEMSLOT_INDEX, 1, 0);
> +					    DUMMY_MEMSLOT_INDEX, pages, 0);
>
>  		vm_mem_region_delete(vm, DUMMY_MEMSLOT_INDEX);
>  	}
> @@ -120,11 +128,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>  	pr_info("Started all vCPUs\n");
>
>  	add_remove_memslot(vm, p->memslot_modification_delay,
> -			   p->nr_memslot_modifications,
> -			   guest_test_phys_mem +
> -			   (guest_percpu_mem_size * nr_vcpus) +
> -			   perf_test_args.host_page_size +
> -			   perf_test_args.guest_page_size);
> +			   p->nr_memslot_modifications);
>
>  	run_vcpus = false;
>
On 08.06.21 10:39, Christian Borntraeger wrote:
>
>
> On 21.05.21 19:38, David Matlack wrote:
>> vm_get_max_gfn() casts vm->max_gfn from a uint64_t to an unsigned int,
>> which causes the upper 32-bits of the max_gfn to get truncated.
>>
>> Nobody noticed until now likely because vm_get_max_gfn() is only used
>> as a mechanism to create a memslot in an unused region of the guest
>> physical address space (the top), and the top of the 32-bit physical
>> address space was always good enough.
>>
>> This fix reveals a bug in memslot_modification_stress_test which was
>> trying to create a dummy memslot past the end of guest physical memory.
>> Fix that by moving the dummy memslot lower.
>>
>> Fixes: 52200d0d944e ("KVM: selftests: Remove duplicate guest mode handling")
>> Reviewed-by: Venkatesh Srinivas <venkateshs@chromium.org>
>> Signed-off-by: David Matlack <dmatlack@google.com>
>
> As a heads up:
> I have not yet looked into this, but this broke demand_paging_test and
> kvm_page_table_test on s390:
>
> not ok 4 selftests: kvm: demand_paging_test # exit=254
> # selftests: kvm: dirty_log_test
> # ==== Test Assertion Failure ====
> #   lib/kvm_util.c:900: ret == 0
> #   pid=245410 tid=245410 errno=22 - Invalid argument
> #      1  0x0000000001005457: vm_userspace_mem_region_add at kvm_util.c:900
> #      2  0x0000000001002cbf: run_test at dirty_log_test.c:757
> #      3   (inlined by) run_test at dirty_log_test.c:702
> #      4  0x000000000100c055: for_each_guest_mode at guest_modes.c:37
> #      5  0x00000000010022b5: main at dirty_log_test.c:929 (discriminator 3)
> #      6  0x000003ff96fabdb3: ?? ??:0
> #      7  0x000000000100241d: .annobin_lto.hot at crt1.o:?
> #   KVM_SET_USER_MEMORY_REGION IOCTL failed,
> #   rc: -1 errno: 22
> #   slot: 1 flags: 0x1
> #   guest_phys_addr: 0xfffffbfe00000 size: 0x40100000

Ah. We do have a limit of 128TB for guest physical memory. The patch now
made this apparent as we no longer cut the upper bits off.
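[Editorial note: a back-of-the-envelope check of the explanation above. It assumes the "128TB" guest-memory limit means 128 TiB, i.e. 2^47 bytes; the GPA is taken from the failing s390 log.]

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* guest_phys_addr reported by the failing s390 dirty_log_test run. */
	uint64_t gpa = 0xfffffbfe00000ULL;

	/* Assumption: "128TB" == 128 TiB == 2^47 bytes. */
	uint64_t limit = 128ULL << 40;

	printf("requested gpa: %" PRIu64 " TiB\n", gpa >> 40);	/* ~4095 TiB */
	printf("s390 limit:    %" PRIu64 " TiB\n", limit >> 40);	/* 128 TiB */
	printf("the gpa %s the limit\n", gpa > limit ? "exceeds" : "is within");
	return 0;
}
```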
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 84982eb02b29..5d9b35d09251 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -303,7 +303,7 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm);
 
 unsigned int vm_get_page_size(struct kvm_vm *vm);
 unsigned int vm_get_page_shift(struct kvm_vm *vm);
-unsigned int vm_get_max_gfn(struct kvm_vm *vm);
+uint64_t vm_get_max_gfn(struct kvm_vm *vm);
 int vm_get_fd(struct kvm_vm *vm);
 
 unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1af1009254c4..aeffbb1e7c7d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2058,7 +2058,7 @@ unsigned int vm_get_page_shift(struct kvm_vm *vm)
 	return vm->page_shift;
 }
 
-unsigned int vm_get_max_gfn(struct kvm_vm *vm)
+uint64_t vm_get_max_gfn(struct kvm_vm *vm)
 {
 	return vm->max_gfn;
 }
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 81490b9b4e32..abf381800a59 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -2,6 +2,7 @@
 /*
  * Copyright (C) 2020, Google LLC.
  */
+#include <inttypes.h>
 
 #include "kvm_util.h"
 #include "perf_test_util.h"
@@ -80,7 +81,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	 */
 	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
 		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
+		    "    guest pages: %" PRIx64 " max gfn: %" PRIx64
+		    " vcpus: %d wss: %" PRIx64 "]\n",
 		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
 		    vcpu_memory_bytes);
 
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 6096bf0a5b34..98351ba0933c 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -71,14 +71,22 @@ struct memslot_antagonist_args {
 };
 
 static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
-			       uint64_t nr_modifications, uint64_t gpa)
+			       uint64_t nr_modifications)
 {
+	const uint64_t pages = 1;
+	uint64_t gpa;
 	int i;
 
+	/*
+	 * Add the dummy memslot just below the perf_test_util memslot, which is
+	 * at the top of the guest physical address space.
+	 */
+	gpa = guest_test_phys_mem - pages * vm_get_page_size(vm);
+
 	for (i = 0; i < nr_modifications; i++) {
 		usleep(delay);
 		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa,
-					    DUMMY_MEMSLOT_INDEX, 1, 0);
+					    DUMMY_MEMSLOT_INDEX, pages, 0);
 
 		vm_mem_region_delete(vm, DUMMY_MEMSLOT_INDEX);
 	}
@@ -120,11 +128,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Started all vCPUs\n");
 
 	add_remove_memslot(vm, p->memslot_modification_delay,
-			   p->nr_memslot_modifications,
-			   guest_test_phys_mem +
-			   (guest_percpu_mem_size * nr_vcpus) +
-			   perf_test_args.host_page_size +
-			   perf_test_args.guest_page_size);
+			   p->nr_memslot_modifications);
 
 	run_vcpus = false;
 
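[Editorial note: a small self-contained sketch of why the widened return type matters to callers that place memory at the top of the guest physical address space. The accessor functions and values below are simplified stand-ins, not the selftests API itself.]

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins with illustrative values (52 PA bits, 4 KiB pages);
 * the real selftests accessors take a struct kvm_vm *. */
static uint64_t max_gfn(void)   { return (1ULL << 40) - 1; }
static uint64_t page_size(void) { return 4096; }

int main(void)
{
	uint64_t guest_num_pages = 1ULL << 20;	/* 4 GiB of test memory, assumed */

	/* Top-of-memory placement, modeled loosely on perf_test_util. */
	uint64_t base = (max_gfn() - guest_num_pages) * page_size();

	/* The same computation with max_gfn first squeezed through the old
	 * unsigned int return type lands in a completely different place. */
	uint64_t truncated_base =
		((unsigned int)max_gfn() - guest_num_pages) * page_size();

	printf("base with 64-bit max_gfn:    0x%" PRIx64 "\n", base);
	printf("base with truncated max_gfn: 0x%" PRIx64 "\n", truncated_base);
	return 0;
}
```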