Message ID: 20201116121942.55031-1-drjones@redhat.com (mailing list archive)
Series: KVM: selftests: Cleanups, take 2
On 16/11/20 13:19, Andrew Jones wrote:
> This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> and dirty_log_test by factoring out common code, creating some new API
> along the way. It also splits include/perf_test_util.h into a more
> conventional header and source pair.
>
> I've tested on x86 and AArch64 (one config each), but not s390x.
>
> v3:
>  - Rebased remaining four patches from v2 onto kvm/queue
>  - Picked up r-b's from Peter and Ben
>
> v2: https://www.spinics.net/lists/kvm/msg228711.html

Unfortunately patch 2 is still broken:

$ ./dirty_log_test -M dirty-ring
Setting log mode to: 'dirty-ring'
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
==== Test Assertion Failure ====
  lib/kvm_util.c:85: ret == 0
  pid=2010122 tid=2010122 - Invalid argument
     1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
     2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
     3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
     4	 (inlined by) run_test at dirty_log_test.c:683
     5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
     6	0x00000000004019c2: main at dirty_log_test.c:864
     7	0x00007fe3f48207b2: ?? ??:0
     8	0x0000000000401aad: _start at ??:?
  KVM_ENABLE_CAP IOCTL failed,
  rc: -1 errno: 22

(Also fails without -M).
Paolo

> Thanks,
> drew
>
> Andrew Jones (4):
>   KVM: selftests: Factor out guest mode code
>   KVM: selftests: dirty_log_test: Remove create_vm
>   KVM: selftests: Use vm_create_with_vcpus in create_vm
>   KVM: selftests: Implement perf_test_util more conventionally
>
>  tools/testing/selftests/kvm/Makefile          |   2 +-
>  .../selftests/kvm/demand_paging_test.c        | 118 ++++--------
>  .../selftests/kvm/dirty_log_perf_test.c       | 145 +++++---------
>  tools/testing/selftests/kvm/dirty_log_test.c  | 179 +++++-------------
>  .../selftests/kvm/include/guest_modes.h       |  21 ++
>  .../testing/selftests/kvm/include/kvm_util.h  |   9 +
>  .../selftests/kvm/include/perf_test_util.h    | 167 ++--------------
>  tools/testing/selftests/kvm/lib/guest_modes.c |  70 +++++++
>  tools/testing/selftests/kvm/lib/kvm_util.c    |   9 +-
>  .../selftests/kvm/lib/perf_test_util.c        | 134 +++++++++++++
>  10 files changed, 378 insertions(+), 476 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
>  create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c
>  create mode 100644 tools/testing/selftests/kvm/lib/perf_test_util.c
On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> Unfortunately patch 2 is still broken:
>
> $ ./dirty_log_test -M dirty-ring
> Setting log mode to: 'dirty-ring'
> [...]
> ==== Test Assertion Failure ====
>   lib/kvm_util.c:85: ret == 0
> [...]
>   KVM_ENABLE_CAP IOCTL failed,
>   rc: -1 errno: 22
>
> (Also fails without -M).

It should be because of the ordering of creating vcpu and enabling dirty
rings, since currently for simplicity when enabling dirty ring we must have
not created any vcpus:

+	if (kvm->created_vcpus) {
+		/* We don't allow to change this value after vcpu created */
+		r = -EINVAL;
+	} else {
+		kvm->dirty_ring_size = size;
+		r = 0;
+	}

We may need to call log_mode_create_vm_done() before creating any vcpus
somehow. Sorry to not have noticed that when reviewing it.
On Mon, Nov 16, 2020 at 01:40:11PM -0500, Peter Xu wrote:
> On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> > Unfortunately patch 2 is still broken:
> > [...]
> > (Also fails without -M).
>
> It should be because of the ordering of creating vcpu and enabling dirty
> rings, since currently for simplicity when enabling dirty ring we must have
> not created any vcpus:
> [...]
> We may need to call log_mode_create_vm_done() before creating any vcpus
> somehow. Sorry to not have noticed that when reviewing it.

And sorry for having not tested with '-M dirty-ring'. I thought we were
trying to ensure each unique test type had its own test file (even if we
have to do the weird inclusion of C files). Doing that, the command line
options are then only used to change stuff like verbosity or to experiment
with tweaked configurations.

If we're not doing that, then I think we should. We don't want to try and
explain to all the CI people how each test should be run. It's much easier
to say "run all the binaries, no parameters necessary". Each binary with
no parameters should run the test(s) using a good default or by executing
all possible configurations.

Thanks,
drew
On Wed, Nov 18, 2020 at 09:38:31AM +0100, Andrew Jones wrote:
> On Mon, Nov 16, 2020 at 01:40:11PM -0500, Peter Xu wrote:
> > [...]
> > We may need to call log_mode_create_vm_done() before creating any vcpus
> > somehow. Sorry to not have noticed that when reviewing it.
>
> [...]
> If we're not doing that, then I think we should. We don't want to try and
> explain to all the CI people how each test should be run. It's much easier
> to say "run all the binaries, no parameters necessary". Each binary with
> no parameters should run the test(s) using a good default or by executing
> all possible configurations.

I just double checked and we are running all modes by default. This is
the output I get

  Test iterations: 32, interval: 10 (ms)
  Testing Log Mode 'dirty-log'
  Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
  guest physical test memory offset: 0x7fbfffc000
  Dirtied 1024 pages
  Total bits checked: dirty (209373), clear (7917184), track_next (38548)
  Testing Log Mode 'clear-log'
  Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
  guest physical test memory offset: 0x7fbfffc000
  Dirtied 1024 pages
  Total bits checked: dirty (464547), clear (7662010), track_next (37553)
  Testing Log Mode 'dirty-ring'
  Log mode 'dirty-ring' not supported, skipping test

which matches the output before this patch, except for minor differences
in the numbers.

I'm not sure how this is failing in your environment and not mine. Peter?

Thanks,
drew
On Wed, Nov 18, 2020 at 10:05:59AM +0100, Andrew Jones wrote:
> On Wed, Nov 18, 2020 at 09:38:31AM +0100, Andrew Jones wrote:
> > [...]
>
> I just double checked and we are running all modes by default. This is
> the output I get
> [...]
>   Testing Log Mode 'dirty-ring'
>   Log mode 'dirty-ring' not supported, skipping test
>
> which matches the output before this patch, except for minor differences
> in the numbers.
>
> I'm not sure how this is failing in your environment and not mine.

Doh, sorry. I didn't actually read the output. I was only looking for an
assert and the string 'dirty-ring'. It looks like the test is skipping
for me. That would explain why it "works" for me...

drew
On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> Unfortunately patch 2 is still broken:
>
> $ ./dirty_log_test -M dirty-ring
> [...]
> KVM_ENABLE_CAP IOCTL failed,
> rc: -1 errno: 22

So I finally looked closely enough at the dirty-ring stuff to see that
patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
that says "Switch to dirty ring mode after VM creation but before any of
the vcpu creation". I'd argue that that comment would be better served at
the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
here. Maybe someday we can add a patch that adds that comment and also
tries to use common code for the number of pages calculation for the VM,
but not today.

Regarding this series, if the other three patches look good, then we
can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.

Thanks,
drew
On 20/11/20 09:05, Andrew Jones wrote:
> So I finally looked closely enough at the dirty-ring stuff to see that
> patch 2 was always a dumb idea. [...]
>
> Regarding this series, if the other three patches look good, then we
> can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.

Yes, the rest is good.

Paolo
On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
> On 20/11/20 09:05, Andrew Jones wrote:
> > Regarding this series, if the other three patches look good, then we
> > can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.
>
> Yes, the rest is good.

Ping?

Thanks,
drew
On Wed, Dec 16, 2020 at 4:47 AM Andrew Jones <drjones@redhat.com> wrote:
> On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
> > [...]
> > Yes, the rest is good.
>
> Ping?
>
> Thanks,
> drew

FWIW, patches 1, 3, and 4 look good to me too. Once these are merged, I've
got another new test ready to go which builds on these.
On 16/12/20 13:46, Andrew Jones wrote:
> On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
>> [...]
>> Yes, the rest is good.
>
> Ping?

Sorry, I was waiting for a resend.

Paolo
On Fri, Dec 18, 2020 at 11:32:02AM +0100, Paolo Bonzini wrote:
> On 16/12/20 13:46, Andrew Jones wrote:
> > Ping?
>
> Sorry, I was waiting for a resend.

Oops, I understood that we'd just drop 2/4 while applying. Should I resend
now?

Thanks,
drew
On 18/12/20 12:13, Andrew Jones wrote:
> On Fri, Dec 18, 2020 at 11:32:02AM +0100, Paolo Bonzini wrote:
>> Sorry, I was waiting for a resend.
>
> Oops, I understood that we'd just drop 2/4 while applying. Should I resend
> now?

Yes, please. Maybe there were conflicts, I don't remember why I dropped the
whole series on the floor.

Paolo