Message ID | 20240315143507.102629-1-mlevitsk@redhat.com
---|---
State | New, archived
Series | KVM: selftests: fix max_guest_memory_test with more that 256 vCPUs
On Fri, 2024-03-15 at 10:35 -0400, Maxim Levitsky wrote:
> max_guest_memory_test uses ucalls to sync with the host, but
> it also resets the guest RIP back to its initial value in between
> tests stages.
>
> This makes the guest never reach the code which frees the ucall struct
> and since a fixed pool of 512 ucall structs is used, the test starts
> to fail when more that 256 vCPUs are used.
>
> Fix that by replacing the manual register reset with a loop in
> the guest code.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  .../testing/selftests/kvm/max_guest_memory_test.c | 15 ++++++---------
>  1 file changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
> index 6628dc4dda89f3c..1a6da7389bf1f5b 100644
> --- a/tools/testing/selftests/kvm/max_guest_memory_test.c
> +++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
> @@ -22,10 +22,11 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
>  {
>  	uint64_t gpa;
>
> -	for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
> -		*((volatile uint64_t *)gpa) = gpa;
> -
> -	GUEST_DONE();
> +	for (;;) {
> +		for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
> +			*((volatile uint64_t *)gpa) = gpa;
> +		GUEST_SYNC(0);
> +	}
>  }
>
>  struct vcpu_info {
> @@ -55,7 +56,7 @@ static void rendezvous_with_boss(void)
>  static void run_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	vcpu_run(vcpu);
> -	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_DONE);
> +	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);
>  }
>
>  static void *vcpu_worker(void *data)
> @@ -64,17 +65,13 @@ static void *vcpu_worker(void *data)
>  	struct kvm_vcpu *vcpu = info->vcpu;
>  	struct kvm_vm *vm = vcpu->vm;
>  	struct kvm_sregs sregs;
> -	struct kvm_regs regs;
>
>  	vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
>
> -	/* Snapshot regs before the first run. */
> -	vcpu_regs_get(vcpu, &regs);
>  	rendezvous_with_boss();
>
>  	run_vcpu(vcpu);
>  	rendezvous_with_boss();
> -	vcpu_regs_set(vcpu, &regs);
>  	vcpu_sregs_get(vcpu, &sregs);
>  #ifdef __x86_64__
>  	/* Toggle CR0.WP to trigger a MMU context reset. */

Kind ping on this patch.

Best regards,
	Maxim Levitsky
On Tue, Apr 02, 2024, Maxim Levitsky wrote:
> Kind ping on this patch.
It (and patches from other folks that are getting pinged) is on my list of
things to grab; I'm still digging myself out of my mailbox and the
time-sensitive things that cropped up while I was offline. I expect to start
applying stuff this week, especially for fixes like this.
On Fri, 15 Mar 2024 10:35:07 -0400, Maxim Levitsky wrote:
> max_guest_memory_test uses ucalls to sync with the host, but
> it also resets the guest RIP back to its initial value in between
> tests stages.
>
> This makes the guest never reach the code which frees the ucall struct
> and since a fixed pool of 512 ucall structs is used, the test starts
> to fail when more that 256 vCPUs are used.
>
> [...]

Applied to kvm-x86 fixes, thanks!

[1/1] KVM: selftests: fix max_guest_memory_test with more that 256 vCPUs
      https://github.com/kvm-x86/linux/commit/0ef2dd1f4144

--
https://github.com/kvm-x86/linux/tree/next
diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
index 6628dc4dda89f3c..1a6da7389bf1f5b 100644
--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
+++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
@@ -22,10 +22,11 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 {
 	uint64_t gpa;

-	for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
-		*((volatile uint64_t *)gpa) = gpa;
-
-	GUEST_DONE();
+	for (;;) {
+		for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+			*((volatile uint64_t *)gpa) = gpa;
+		GUEST_SYNC(0);
+	}
 }

 struct vcpu_info {
@@ -55,7 +56,7 @@ static void rendezvous_with_boss(void)
 static void run_vcpu(struct kvm_vcpu *vcpu)
 {
 	vcpu_run(vcpu);
-	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_DONE);
+	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);
 }

 static void *vcpu_worker(void *data)
@@ -64,17 +65,13 @@ static void *vcpu_worker(void *data)
 	struct kvm_vcpu *vcpu = info->vcpu;
 	struct kvm_vm *vm = vcpu->vm;
 	struct kvm_sregs sregs;
-	struct kvm_regs regs;

 	vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);

-	/* Snapshot regs before the first run. */
-	vcpu_regs_get(vcpu, &regs);
 	rendezvous_with_boss();

 	run_vcpu(vcpu);
 	rendezvous_with_boss();
-	vcpu_regs_set(vcpu, &regs);
 	vcpu_sregs_get(vcpu, &sregs);
 #ifdef __x86_64__
 	/* Toggle CR0.WP to trigger a MMU context reset. */
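For context on why the new flow needs no register fiddling: after the fix,
every test stage is just another vcpu_run() that is expected to exit with
GUEST_SYNC's UCALL_SYNC, after which the guest resumes inside its for (;;)
loop. A simplified sketch of that per-stage pattern (run_one_stage() is a
hypothetical name; the test's own run_vcpu()/vcpu_worker() do the
equivalent):

/*
 * Simplified per-stage host flow after the fix.  run_one_stage() is a
 * hypothetical helper; the test's run_vcpu() performs the same steps.
 */
static void run_one_stage(struct kvm_vcpu *vcpu)
{
	/* Guest writes every GPA in its range, then does GUEST_SYNC(0). */
	vcpu_run(vcpu);
	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);

	/*
	 * The guest is now parked in its for (;;) loop; the next
	 * vcpu_run() resumes it in place, so the ucall free path always
	 * runs and no register snapshot/rewind is needed between stages.
	 */
}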
max_guest_memory_test uses ucalls to sync with the host, but it also
resets the guest RIP back to its initial value in between test stages.

This makes the guest never reach the code which frees the ucall struct,
and since a fixed pool of 512 ucall structs is used, the test starts to
fail when more than 256 vCPUs are used.

Fix that by replacing the manual register reset with a loop in the
guest code.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 .../testing/selftests/kvm/max_guest_memory_test.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)
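To make the leak concrete: each guest ucall claims a slot from a fixed pool
and releases it only after the guest resumes past the ucall, so rewinding RIP
to before the ucall skips the release. Since the test fails beyond 256 vCPUs
against a 512-slot pool, each vCPU evidently claims two slots across the
test's stages. Below is a minimal sketch of such a pool; all names, the slot
layout, and the exit stub are illustrative assumptions, not the actual
selftests ucall code.

/*
 * Minimal sketch of a fixed-size ucall pool, illustrating the leak the
 * commit message describes.  All identifiers are assumptions made for
 * the illustration, not the selftests implementation.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define UCALL_POOL_SIZE 512	/* fixed pool size, per the commit message */

struct ucall_slot {
	atomic_bool in_use;
	uint64_t cmd;
};

static struct ucall_slot ucall_pool[UCALL_POOL_SIZE];

/* Claim a free slot; returns NULL once the pool is exhausted. */
static struct ucall_slot *ucall_alloc(void)
{
	for (size_t i = 0; i < UCALL_POOL_SIZE; i++) {
		bool expected = false;

		if (atomic_compare_exchange_strong(&ucall_pool[i].in_use,
						   &expected, true))
			return &ucall_pool[i];
	}
	return NULL;
}

static void ucall_free(struct ucall_slot *uc)
{
	atomic_store(&uc->in_use, false);
}

/* Stand-in for the real trap to the host (hypercall/MMIO exit). */
static void ucall_exit_to_host(struct ucall_slot *uc)
{
	(void)uc;
}

/* Hypothetical guest-side ucall showing where the leak happens. */
static void ucall(uint64_t cmd)
{
	struct ucall_slot *uc = ucall_alloc();

	uc->cmd = cmd;
	ucall_exit_to_host(uc);	/* host reads *uc, then resumes the guest */
	ucall_free(uc);		/* skipped if the host rewinds RIP to before
				 * ucall(): the slot is leaked */
}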