[kvm-unit-tests,0/4] x86: svm: bare-metal fixes

Message ID 20200710183320.27266-1-namit@vmware.com

Message

Nadav Amit July 10, 2020, 6:33 p.m. UTC
These patches are intended to allow the SVM tests to run on bare metal.
The second patch indicates there is a bug in KVM.
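
(For background on the second patch, if I understand the series
correctly: a nested page fault delivers a page-fault-style error code
in exit_info_1, and on bare metal the present bit is set when the
fault is a permission or reserved-bit violation on a present nested
PTE. A minimal sketch of the kind of check involved, not the actual
test code, assuming the vmcb pointer and SVM_EXIT_NPF from x86/svm.h:

    /* Hedged sketch: for an NPF caused by a permission or
     * reserved-bit violation, the error code in exit_info_1
     * should have the present bit (bit 0) set, since the
     * nested PTE itself was present. */
    if (vmcb->control.exit_code == SVM_EXIT_NPF)
            report(vmcb->control.exit_info_1 & 1,
                   "P bit set in nested page-fault error code");

KVM apparently fails this expectation, hence the "bug in KVM" note.)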

Unfortunately, two tests still fail on bare metal for reasons that I
could not figure out, given my somewhat limited SVM knowledge.

The first failure is "direct NMI while running guest". For some reason
the NMI is not delivered. Note that "direct NMI + hlt" and others pass.
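
(To spell out what that test exercises -- a rough sketch from memory,
not the code in x86/svm_tests.c, assuming the apic_icr_write(),
handle_exception() and on_cpu_async() helpers from lib/x86: with the
NMI intercept clear, an NMI arriving while the CPU is in guest mode
should be delivered directly to the guest rather than cause a #VMEXIT:

    static volatile bool nmi_seen;

    static void nmi_handler(struct ex_regs *regs)
    {
            nmi_seen = true;
    }

    static void send_nmi(void *data)
    {
            /* Assert an NMI aimed at the CPU running the guest. */
            apic_icr_write(APIC_INT_ASSERT | APIC_DM_NMI, 0 /* APIC ID */);
    }

    static void direct_nmi_guest_main(struct svm_test *test)
    {
            handle_exception(2 /* NMI */, nmi_handler);
            on_cpu_async(1, send_nmi, NULL);
            /* The NMI should land here, in the guest. */
            while (!nmi_seen)
                    pause();
    }

On bare metal the guest spins forever because nmi_seen never flips.)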

The second is npt_rw_pfwalk_check. Even after the relevant fixes,
exit_info_2 has a mismatch: the expected value (the faulting guest
physical address) is 0x641000, but the actual value is 0x641208. It
might be related to the fact that the physical server has more memory,
but I could not reproduce it on a VM with more physical memory.
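
(For concreteness, the failing check is essentially of this shape --
paraphrased, not the exact svm_tests.c code; vmcb, control.exit_info_2
and SVM_EXIT_NPF come from x86/svm.h:

    u64 expected_gpa = 0x641000;  /* page-aligned GPA the test expects */

    report(vmcb->control.exit_code == SVM_EXIT_NPF &&
           vmcb->control.exit_info_2 == expected_gpa,
           "faulting guest physical address reported in exit_info_2");

It is the second operand of that comparison that comes back as
0x641208 on bare metal.)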

Nadav Amit (4):
  x86: svm: clear CR4.DE on DR intercept test
  x86: svm: present bit is set on nested page-faults
  x86: remove blind writes from setup_mmu()
  x86: Allow to limit maximum RAM address

 lib/x86/fwcfg.c | 4 ++++
 lib/x86/fwcfg.h | 1 +
 lib/x86/setup.c | 7 +++++++
 lib/x86/vm.c    | 3 ---
 x86/svm_tests.c | 5 +++--
 5 files changed, 15 insertions(+), 5 deletions(-)
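
As background on patch 1: when CR4.DE is set, MOV to/from DR4/DR5
raises #UD instead of aliasing DR6/DR7, and firmware may leave CR4.DE
set on bare metal, so the DR intercept test needs it cleared first. A
minimal sketch, assuming the read_cr4()/write_cr4() helpers from
lib/x86/processor.h:

    /* CR4.DE is bit 3; clear it so DR4/DR5 accesses alias DR6/DR7
     * (and hit the DR intercepts) instead of raising #UD. */
    write_cr4(read_cr4() & ~(1UL << 3));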

Comments

Paolo Bonzini July 10, 2020, 8:47 p.m. UTC | #1
On 10/07/20 20:33, Nadav Amit wrote:
> These patches are intended to allow the SVM tests to run on bare metal.
> The second patch indicates there is a bug in KVM.
> 
> Unfortunately, two tests still fail on bare metal for reasons that I
> could not figure out, given my somewhat limited SVM knowledge.
> 
> The first failure is "direct NMI while running guest". For some reason
> the NMI is not delivered. Note that "direct NMI + hlt" and others pass.
> 
> The second is npt_rw_pfwalk_check. Even after the relevant fixes,
> exit_info_2 has a mismatch: the expected value (the faulting guest
> physical address) is 0x641000, but the actual value is 0x641208. It
> might be related to the fact that the physical server has more memory,
> but I could not reproduce it on a VM with more physical memory.

Could be much worse, and they could be bugs in KVM too, though we're
definitely faring better than six months ago.

Thanks, queued patches 2-4 and sent a replacement for patch 1.

Paolo