
Nested paging in nested SVM setup

Message ID 53F5D709.3060207@redhat.com (mailing list archive)
State New, archived

Commit Message

Paolo Bonzini Aug. 21, 2014, 11:24 a.m. UTC
Il 21/08/2014 10:48, Valentine Sinitsyn ha scritto:
> 
> No kvm_apic: after the NPTs are set up, there are no page faults caused by
> the register read (error_code: d) that would let KVM trap and emulate the
> APIC access.

It seems to work for VMX (see the testcase I just sent).  For SVM, can you
check if this test works for you, so that we can work on a simple testcase?

The patch applies to git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
and you can run the test like this (64-bit host):

   ./configure
   make
   ./x86-run x86/svm.flat -cpu host

Paolo



Comments

Valentine Sinitsyn Aug. 21, 2014, 12:28 p.m. UTC | #1
On 21.08.2014 17:24, Paolo Bonzini wrote:
> It seems to work for VMX (see the testcase I just sent).  For SVM, can you
> check if this test works for you, so that we can work on a simple testcase?
It passes for SVM, too.

However, npt_rsvd seems to be broken - maybe that is the reason?

Also, I tried using different register values in npt_l1mmio_test() and 
npt_l1mmio_check() (like 0xfee00030 and 0xfee00400), but the test still 
passed. Could it be a false positive then?

> qemu-system-x86_64 -enable-kvm -device pc-testdev -device isa-debug-exit,iobase=0xf4,iosize=0x4 -display none -serial stdio -device pci-testdev -kernel x86/svm.flat -cpu host
> enabling apic
> paging enabled
> cr0 = 80010011
> cr3 = 7fff000
> cr4 = 20
> NPT detected - running all tests with NPT enabled
> null: PASS
> vmrun: PASS
> ioio: PASS
> vmrun intercept check: PASS
> cr3 read intercept: PASS
> cr3 read nointercept: PASS
> next_rip: PASS
> mode_switch: PASS
> asid_zero: PASS
> sel_cr0_bug: PASS
> npt_nx: PASS
> npt_us: PASS
> npt_rsvd: FAIL
> npt_rw: PASS
> npt_pfwalk: PASS
> npt_l1mmio: PASS
>     Latency VMRUN : max: 93973 min: 22447 avg: 22766
>     Latency VMEXIT: max: 428760 min: 23039 avg: 23832
> latency_run_exit: PASS
>     Latency VMLOAD: max: 35697 min: 3828 avg: 3937
>     Latency VMSAVE: max: 42953 min: 3889 avg: 4012
>     Latency STGI:   max: 42961 min: 3517 avg: 3595
>     Latency CLGI:   max: 41177 min: 2859 avg: 2924
> latency_svm_insn: PASS
>
> SUMMARY: 18 TESTS, 1 FAILURES
> Return value from qemu: 3

Valentine
Valentine Sinitsyn Aug. 21, 2014, 12:38 p.m. UTC | #2
On 21.08.2014 18:28, Valentine Sinitsyn wrote:
> Also, I tried using different register values in npt_l1mmio_test() and
> npt_l1mmio_check() (like 0xfee00030 and 0xfee00400), but the test still passed
Just a small clarification: I made npt_l1mmio_test() read 0xfee00030 
and npt_l1mmio_check() compare against 0xfee00020 or 0xfee00400. No 
particular reason, just arbitrary values to check whether anything compares 
non-equal in the check.
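
In code, the modification was roughly the following (an illustrative sketch
rather than the exact diff I ran; it only changes the address compared in the
check, everything else is as in your patch):

static void npt_l1mmio_test(struct test *test)
{
    /* read the L1 APIC version register through the nested page tables */
    u64 *data = (void*)(0xfee00030UL);

    nested_apic_version = *data;
}

static bool npt_l1mmio_check(struct test *test)
{
    /* deliberately compare against a *different* APIC register */
    u64 *data = (void*)(0xfee00020UL);

    return (nested_apic_version == *data);
}

If the test really read L1 MMIO, I would expect this comparison to fail
rather than pass.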

Valentine
Valentine Sinitsyn Aug. 21, 2014, 1:40 p.m. UTC | #3
Sorry for the barrage of emails.

On 21.08.2014 18:28, Valentine Sinitsyn wrote:
> It passes for SVM, too.
I also looked at the SVM tests more closely and found out that the NPT maps 
the whole memory range as cached memory. This could also be a reason for a 
false positive in the test (if there is one). I will look into it later today; 
a rough sketch of what I have in mind is below.
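
Conceptually, something like this is what I want to try (just a sketch, not
tested; npt_get_pte() is assumed here to return the nested page table entry
covering the given address, and whether the APIC page ends up in a 4K PTE or
a 2M PDE still needs checking):

static void npt_l1mmio_uc_prepare(struct test *test)
{
    /* hypothetical: mark the APIC page uncacheable in the NPT (PCD, bit 4) */
    u64 *pte = npt_get_pte(0xfee00000UL);

    vmcb_ident(test->vmcb);
    *pte |= (1ULL << 4);
}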

Valentine
Paolo Bonzini Sept. 1, 2014, 5:41 p.m. UTC | #4
Il 21/08/2014 14:28, Valentine Sinitsyn ha scritto:
>> It seems to work for VMX (see the testcase I just sent).  For SVM, can
>> you check if this test works for you, so that we can work on a simple
>> testcase?
> 
> However, npt_rsvd seems to be broken - maybe that is the reason?

BTW npt_rsvd does *not* fail on the machine I've been testing on today.

Can you retry running the tests with the latest kvm-unit-tests (branch
"master"), gather a trace of kvm and kvmmmu events, and send the
compressed trace.dat my way?
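
Something along these lines should do, assuming trace-cmd is available
(adjust the invocation to taste):

   trace-cmd record -e kvm -e kvmmmu ./x86-run x86/svm.flat -cpu host

and then compress the resulting trace.dat.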

Thanks,

Paolo
Valentine Sinitsyn Sept. 1, 2014, 7:21 p.m. UTC | #5
Hi Paolo,

On 01.09.2014 23:41, Paolo Bonzini wrote:
> Il 21/08/2014 14:28, Valentine Sinitsyn ha scritto:
> BTW npt_rsvd does *not* fail on the machine I've been testing on today.
I can confirm the l1mmio test doesn't fail in kvm-unit-tests' master 
anymore. npt_rsvd still does. I also needed to disable the ioio test, or it 
would hang for a long time (this doesn't happen if I use Jan's patched 
KVM with the IOPM bugs fixed). However, the l1mmio test passes regardless of 
whether I use stock KVM 3.16.1 or a patched version.

> Can you retry running the tests with the latest kvm-unit-tests (branch
> "master"), gather a trace of kvm and kvmmmu events, and send the
> compressed trace.dat my way?
You mean the trace from when the problem reveals itself (not from running 
the tests), I assume? It's around 2G uncompressed (probably I'm enabling 
tracing too early or doing something else wrong). I will look into it 
tomorrow; hopefully I can reduce the size (e.g. by switching to 
uniprocessor mode). Below is a trace snippet similar to the one I've 
sent earlier.

----------------------------------------------------------------------
qemu-system-x86-2728  [002]  1726.426225: kvm_exit:             reason npf rip 0xffffffff8104e876 info 10000000f fee000b0
qemu-system-x86-2728  [002]  1726.426226: kvm_nested_vmexit:    rip: 0xffffffff8104e876 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 0x00000000fee000b0 ext_int: 0x00000000 ext_int_err: 0x00000000
qemu-system-x86-2728  [002]  1726.426227: kvm_page_fault: address fee000b0 error_code f
qemu-system-x86-2725  [000]  1726.426227: kvm_exit:             reason npf rip 0xffffffff8104e876 info 10000000f fee000b0
qemu-system-x86-2725  [000]  1726.426228: kvm_nested_vmexit:    rip: 0xffffffff8104e876 reason: npf ext_inf1: 0x000000010000000f ext_inf2: 0x00000000fee000b0 ext_int: 0x00000000 ext_int_err: 0x00000000
qemu-system-x86-2725  [000]  1726.426229: kvm_page_fault: address fee000b0 error_code f
qemu-system-x86-2728  [002]  1726.426229: kvm_emulate_insn: 0:ffffffff8104e876:89 b7 00 b0 5f ff (prot64)
qemu-system-x86-2725  [000]  1726.426230: kvm_emulate_insn: 0:ffffffff8104e876:89 b7 00 b0 5f ff (prot64)
qemu-system-x86-2728  [002]  1726.426231: kvm_mmu_pagetable_walk: addr ffffffffff5fb0b0 pferr 2 W
qemu-system-x86-2725  [000]  1726.426231: kvm_mmu_pagetable_walk: addr ffffffffff5fb0b0 pferr 2 W
qemu-system-x86-2728  [002]  1726.426231: kvm_mmu_pagetable_walk: addr 1811000 pferr 6 W|U
qemu-system-x86-2725  [000]  1726.426232: kvm_mmu_pagetable_walk: addr 36c49000 pferr 6 W|U
qemu-system-x86-2728  [002]  1726.426232: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2725  [000]  1726.426232: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2728  [002]  1726.426232: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2725  [000]  1726.426233: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2728  [002]  1726.426233: kvm_mmu_paging_element: pte 18000e7 level 2
qemu-system-x86-2725  [000]  1726.426233: kvm_mmu_paging_element: pte 36c000e7 level 2
qemu-system-x86-2728  [002]  1726.426233: kvm_mmu_paging_element: pte 1814067 level 4
qemu-system-x86-2725  [000]  1726.426233: kvm_mmu_paging_element: pte 1814067 level 4
qemu-system-x86-2728  [002]  1726.426233: kvm_mmu_pagetable_walk: addr 1814000 pferr 6 W|U
qemu-system-x86-2725  [000]  1726.426234: kvm_mmu_pagetable_walk: addr 1814000 pferr 6 W|U
qemu-system-x86-2728  [002]  1726.426234: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2725  [000]  1726.426234: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2728  [002]  1726.426234: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2725  [000]  1726.426235: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2728  [002]  1726.426235: kvm_mmu_paging_element: pte 18000e7 level 2
qemu-system-x86-2725  [000]  1726.426235: kvm_mmu_paging_element: pte 18000e7 level 2
qemu-system-x86-2728  [002]  1726.426235: kvm_mmu_paging_element: pte 1816067 level 3
qemu-system-x86-2725  [000]  1726.426235: kvm_mmu_paging_element: pte 1816067 level 3
qemu-system-x86-2728  [002]  1726.426235: kvm_mmu_pagetable_walk: addr 1816000 pferr 6 W|U
qemu-system-x86-2725  [000]  1726.426236: kvm_mmu_pagetable_walk: addr 1816000 pferr 6 W|U
qemu-system-x86-2728  [002]  1726.426236: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2725  [000]  1726.426236: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2728  [002]  1726.426236: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2725  [000]  1726.426236: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2728  [002]  1726.426236: kvm_mmu_paging_element: pte 18000e7 level 2
qemu-system-x86-2725  [000]  1726.426237: kvm_mmu_paging_element: pte 18000e7 level 2
qemu-system-x86-2728  [002]  1726.426237: kvm_mmu_paging_element: pte 1a06067 level 2
qemu-system-x86-2725  [000]  1726.426237: kvm_mmu_paging_element: pte 1a06067 level 2
qemu-system-x86-2725  [000]  1726.426238: kvm_mmu_pagetable_walk: addr 1a06000 pferr 6 W|U
qemu-system-x86-2728  [002]  1726.426238: kvm_mmu_pagetable_walk: addr 1a06000 pferr 6 W|U
qemu-system-x86-2725  [000]  1726.426238: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2728  [002]  1726.426238: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2725  [000]  1726.426238: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2725  [000]  1726.426239: kvm_mmu_paging_element: pte 1a000e7 level 2
qemu-system-x86-2728  [002]  1726.426239: kvm_mmu_paging_element: pte 3c03d027 level 3
qemu-system-x86-2725  [000]  1726.426239: kvm_mmu_paging_element: pte 80000000fee0017b level 1
qemu-system-x86-2728  [002]  1726.426239: kvm_mmu_paging_element: pte 1a000e7 level 2
qemu-system-x86-2725  [000]  1726.426239: kvm_mmu_pagetable_walk: addr fee00000 pferr 6 W|U
qemu-system-x86-2728  [002]  1726.426239: kvm_mmu_paging_element: pte 80000000fee0017b level 1
qemu-system-x86-2725  [000]  1726.426240: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2728  [002]  1726.426240: kvm_mmu_pagetable_walk: addr fee00000 pferr 6 W|U
qemu-system-x86-2725  [000]  1726.426240: kvm_mmu_paging_element: pte 3c03b027 level 3
qemu-system-x86-2728  [002]  1726.426240: kvm_mmu_paging_element: pte 3c03a027 level 4
qemu-system-x86-2725  [000]  1726.426240: kvm_mmu_paging_element: pte 3c03c027 level 2
qemu-system-x86-2728  [002]  1726.426241: kvm_mmu_paging_element: pte 3c03b027 level 3
qemu-system-x86-2725  [000]  1726.426241: kvm_mmu_paging_element: pte fee0003d level 1
qemu-system-x86-2728  [002]  1726.426241: kvm_mmu_paging_element: pte 3c03c027 level 2
qemu-system-x86-2725  [000]  1726.426241: kvm_mmu_walker_error: pferr 7 P|W|U
qemu-system-x86-2728  [002]  1726.426241: kvm_mmu_paging_element: pte fee0003d level 1
qemu-system-x86-2725  [000]  1726.426241: kvm_mmu_walker_error: pferr 2 W
qemu-system-x86-2728  [002]  1726.426242: kvm_mmu_walker_error: pferr 7 P|W|U
qemu-system-x86-2728  [002]  1726.426242: kvm_mmu_walker_error: pferr 2 W
qemu-system-x86-2725  [000]  1726.426243: kvm_inj_exception:    e (0x2)
qemu-system-x86-2728  [002]  1726.426243: kvm_inj_exception:    e (0x2)
qemu-system-x86-2725  [000]  1726.426244: kvm_entry:            vcpu 0

Thanks,
Valentine

Paolo Bonzini Sept. 2, 2014, 8:25 a.m. UTC | #6
Il 01/09/2014 21:21, Valentine Sinitsyn ha scritto:
> 
>> Can you retry running the tests with the latest kvm-unit-tests (branch
>> "master"), gather a trace of kvm and kvmmmu events, and send the
>> compressed trace.dat my way?
> You mean the trace from when the problem reveals itself (not from running
> the tests), I assume? It's around 2G uncompressed (probably I'm enabling
> tracing too early or doing something else wrong). I will look into it
> tomorrow; hopefully I can reduce the size (e.g. by switching to
> uniprocessor mode). Below is a trace snippet similar to the one I've
> sent earlier.

I actually meant kvm-unit-tests in order to understand the npt_rsvd
failure.  (I had sent a separate message for Jailhouse).

For kvm-unit-tests, you can comment out tests that do not fail to reduce
the trace size.
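
E.g. just comment out the entries you don't need in the tests[] array at the
end of x86/svm.c, along these lines (sketch):

/*  { "npt_rw", npt_supported, npt_rw_prepare, npt_rw_test,
	    default_finished, npt_rw_check },  */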

Paolo
Valentine Sinitsyn Sept. 2, 2014, 9:16 a.m. UTC | #7
On 02.09.2014 14:25, Paolo Bonzini wrote:
> I actually meant kvm-unit-tests in order to understand the npt_rsvd
> failure.  (I had sent a separate message for Jailhouse).
Oops, sorry for the misunderstanding. I uploaded it here:
https://www.dropbox.com/s/jp6ohb0ul3d6v4u/npt_rsvd.txt.bz2?dl=0

The environment is QEMU 2.1.0 + Linux 3.16.1 with paging_tmpl.h patch, 
and the only test enabled was npt_rsvd (others do pass now).

> For kvm-unit-tests, you can comment out tests that do not fail to reduce
> the trace size.
Yes, I've sent that trace earlier today.

Valentine
Paolo Bonzini Sept. 2, 2014, 11:21 a.m. UTC | #8
Il 02/09/2014 11:16, Valentine Sinitsyn ha scritto:
> On 02.09.2014 14:25, Paolo Bonzini wrote:
>> I actually meant kvm-unit-tests in order to understand the npt_rsvd
>> failure.  (I had sent a separate message for Jailhouse).
> Oops, sorry for the misunderstanding. I uploaded it here:
> https://www.dropbox.com/s/jp6ohb0ul3d6v4u/npt_rsvd.txt.bz2?dl=0

Ugh, there are many bugs, and the test itself is even wrong: the actual
error code should be 0x200000006, i.e. the ordinary error code 6 with bit 33
set, marking a nested fault that occurred while visiting the guest's page
tables.

Paolo

> The environment is QEMU 2.1.0 + Linux 3.16.1 with paging_tmpl.h patch,
> and the only test enabled was npt_rsvd (others do pass now).
> 
>> For kvm-unit-tests, you can comment out tests that do not fail to reduce
>> the trace size.
> Yes, I've sent that trace earlier today.
> 
> Valentine
Valentine Sinitsyn Sept. 2, 2014, 11:26 a.m. UTC | #9
On 02.09.2014 17:21, Paolo Bonzini wrote:
> Ugh, there are many bugs and the test is even wrong because the actual
> error code should be 0x200000006 (error while visiting page tables).
Well, it's good they were spotted. :-) I haven't actually looked at the test 
code, I just saw that it fails for some reason.

Valentine

Patch

diff --git a/x86/svm.c b/x86/svm.c
index a9b29b1..aff00da 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -797,6 +797,27 @@  static bool npt_pfwalk_check(struct test *test)
 	   && (test->vmcb->control.exit_info_2 == read_cr3());
 }
 
+static void npt_l1mmio_prepare(struct test *test)
+{
+    vmcb_ident(test->vmcb);
+}
+
+u32 nested_apic_version;
+
+static void npt_l1mmio_test(struct test *test)
+{
+    u64 *data = (void*)(0xfee00030UL);
+
+    nested_apic_version = *data;
+}
+
+static bool npt_l1mmio_check(struct test *test)
+{
+    u64 *data = (void*)(0xfee00030);
+
+    return (nested_apic_version == *data);
+}
+
 static void latency_prepare(struct test *test)
 {
     default_prepare(test);
@@ -962,6 +983,8 @@  static struct test tests[] = {
 	    default_finished, npt_rw_check },
     { "npt_pfwalk", npt_supported, npt_pfwalk_prepare, null_test,
 	    default_finished, npt_pfwalk_check },
+    { "npt_l1mmio", npt_supported, npt_l1mmio_prepare, npt_l1mmio_test,
+	    default_finished, npt_l1mmio_check },
     { "latency_run_exit", default_supported, latency_prepare, latency_test,
       latency_finished, latency_check },
     { "latency_svm_insn", default_supported, lat_svm_insn_prepare, null_test,