
[v4,10/10] KVM: selftests: Move memslot 0 above KVM internal memslots

Message ID: 20200123180436.99487-11-bgardon@google.com
State: New
Series: Create a userfaultfd demand paging test

Commit Message

Ben Gardon Jan. 23, 2020, 6:04 p.m. UTC
KVM creates internal memslots between 3 and 4 GiB paddrs on the first
vCPU creation. If memslot 0 is large enough, it collides with these
memslots and causes vCPU creation to fail. Instead of creating memslot 0
at paddr 0, start it 4G into the guest physical address space.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

Comments

Paolo Bonzini Jan. 24, 2020, 9:01 a.m. UTC | #1
On 23/01/20 19:04, Ben Gardon wrote:
> KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> vCPU creation. If memslot 0 is large enough, it collides with these
> memslots and causes vCPU creation to fail. Instead of creating memslot 0
> at paddr 0, start it 4G into the guest physical address space.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)

This breaks all tests for me:

   $ ./state_test
   Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
   Guest physical address width detected: 46
   ==== Test Assertion Failure ====
  lib/x86_64/processor.c:580: false
  pid=4873 tid=4873 - Success
     1	0x0000000000409996: addr_gva2gpa at processor.c:579
     2	0x0000000000406a38: addr_gva2hva at kvm_util.c:1636
     3	0x000000000041036c: kvm_vm_elf_load at elf.c:192
     4	0x0000000000409ea9: vm_create_default at processor.c:829
     5	0x0000000000400f6f: main at state_test.c:132
     6	0x00007f21bdf90494: ?? ??:0
     7	0x0000000000401287: _start at ??:?
  No mapping for vm virtual address, gva: 0x400000

Memslot 0 should not be too large, so this patch should not be needed.

Paolo
Ben Gardon Jan. 24, 2020, 6:53 p.m. UTC | #2
On Fri, Jan 24, 2020 at 1:01 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/01/20 19:04, Ben Gardon wrote:
> > KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> > vCPU creation. If memslot 0 is large enough, it collides with these
> > memslots and causes vCPU creation to fail. Instead of creating memslot 0
> > at paddr 0, start it 4G into the guest physical address space.
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  tools/testing/selftests/kvm/lib/kvm_util.c | 11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
>
> This breaks all tests for me:
>
>    $ ./state_test
>    Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
>    Guest physical address width detected: 46
>    ==== Test Assertion Failure ====
>   lib/x86_64/processor.c:580: false
>   pid=4873 tid=4873 - Success
>      1  0x0000000000409996: addr_gva2gpa at processor.c:579
>      2  0x0000000000406a38: addr_gva2hva at kvm_util.c:1636
>      3  0x000000000041036c: kvm_vm_elf_load at elf.c:192
>      4  0x0000000000409ea9: vm_create_default at processor.c:829
>      5  0x0000000000400f6f: main at state_test.c:132
>      6  0x00007f21bdf90494: ?? ??:0
>      7  0x0000000000401287: _start at ??:?
>   No mapping for vm virtual address, gva: 0x400000

Uh oh, I obviously did not test this patch adequately. My apologies.
I'll send another version of this patch after I've had time to test it
better. The memslots between 3G and 4G are also somewhat x86-specific,
so maybe this code should live elsewhere.

>
> Memslot 0 should not be too large, so this patch should not be needed.

I found that 3GB was not sufficient for memslot zero in my testing
because it needs to contain both the stack for every vCPU and the page
tables for the VM. When I ran with 416 vCPUs and 1.6TB of total RAM,
memslot zero needed to be substantially larger than 3G. Just the 4K
guest PTEs required to map 4G per vCPU for 416 vCPUs take up
(((416 * (4<<30)) / 4096) * 8) / (1<<30) = 3.25GB of memory; the
arithmetic is written out in the sketch below.
I suppose another slot could be used for the page tables, but that
would substantially complicate the implementation of any tests that
want to run large VMs.
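
For reference, that arithmetic as a standalone C program (these are
just the example numbers from above, not selftest code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t nr_vcpus = 416;			/* example vCPU count */
	const uint64_t mapped_per_vcpu = 4ULL << 30;	/* 4G mapped per vCPU */
	const uint64_t page_size = 4096;		/* 4K guest pages */
	const uint64_t pte_size = 8;			/* bytes per 64-bit PTE */

	/* Last-level 4K PTEs needed to map 4G for each of the vCPUs. */
	uint64_t nr_ptes = (nr_vcpus * mapped_per_vcpu) / page_size;

	/* Memory those PTEs alone occupy, in GiB; prints 3.25. */
	printf("PTE memory: %.2f GiB\n",
	       (double)(nr_ptes * pte_size) / (1ULL << 30));
	return 0;
}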

>
> Paolo
>
Thomas Huth Jan. 27, 2020, 9:42 a.m. UTC | #3
On 23/01/2020 19.04, Ben Gardon wrote:
> KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> vCPU creation. If memslot 0 is large enough, it collides with these
> memslots and causes vCPU creation to fail. Instead of creating memslot 0
> at paddr 0, start it 4G into the guest physical address space.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 5b971c04f1643..427c88d32e988 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -130,9 +130,11 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
>   *
>   * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
>   * When phy_pages is non-zero, a memory region of phy_pages physical pages
> - * is created and mapped starting at guest physical address 0.  The file
> - * descriptor to control the created VM is created with the permissions
> - * given by perm (e.g. O_RDWR).
> + * is created, starting at 4G into the guest physical address space to avoid
> + * KVM internal memslots which map the region between 3G and 4G. If tests need
> + * to use the physical region between 0 and 3G, they can allocate another
> + * memslot for that region. The file descriptor to control the created VM is
> + * created with the permissions given by perm (e.g. O_RDWR).
>   */
>  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  {
> @@ -231,7 +233,8 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  	vm->vpages_mapped = sparsebit_alloc();
>  	if (phy_pages != 0)
>  		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> -					    0, 0, phy_pages, 0);
> +					    KVM_INTERNAL_MEMSLOTS_END_PADDR,
> +					    0, phy_pages, 0);
>  
>  	return vm;
>  }

This patch causes *all* tests on s390x to fail like this:

# selftests: kvm: sync_regs_test
# Testing guest mode: PA-bits:52,  VA-bits:48,  4K pages
# ==== Test Assertion Failure ====
#   lib/kvm_util.c:1059: false
#   pid=248244 tid=248244 - Success
#      1	0x0000000001002f3d: addr_gpa2hva at kvm_util.c:1059
#      2	 (inlined by) addr_gpa2hva at kvm_util.c:1047
#      3	0x0000000001006edf: addr_gva2gpa at processor.c:144
#      4	0x0000000001004345: addr_gva2hva at kvm_util.c:1636
#      5	0x00000000010077c1: kvm_vm_elf_load at elf.c:192
#      6	0x00000000010070c3: vm_create_default at processor.c:228
#      7	0x0000000001001347: main at sync_regs_test.c:87
#      8	0x000003ffba7a3461: ?? ??:0
#      9	0x0000000001001965: .annobin_init.c.hot at crt1.o:?
#     10	0xffffffffffffffff: ?? ??:0
#   No vm physical memory at 0x0
not ok 2 selftests: kvm: sync_regs_test # exit=254

AFAIK the ELF binaries on s390x are linked to addresses below 4G, so
removing the memslot at guest physical address 0 seems like a generally
bad idea on s390x. A quick way to check where a binary is linked is
sketched below.
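
A standalone sketch (not part of the selftests) that reads the ELF
entry point straight from the header to see whether a binary is linked
below 4G:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	Elf64_Ehdr ehdr;
	int fd = open(argc > 1 ? argv[1] : "/proc/self/exe", O_RDONLY);

	if (fd < 0 || read(fd, &ehdr, sizeof(ehdr)) != sizeof(ehdr))
		return 1;

	/* e_entry is the link-time entry point; a value below 4G means
	 * the guest needs physical memory at low addresses to load it. */
	printf("entry point: 0x%llx (%s 4G)\n",
	       (unsigned long long)ehdr.e_entry,
	       ehdr.e_entry < (1ULL << 32) ? "below" : "at or above");
	close(fd);
	return 0;
}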

 Thomas

Patch

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 5b971c04f1643..427c88d32e988 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -130,9 +130,11 @@  _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
  *
  * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
  * When phy_pages is non-zero, a memory region of phy_pages physical pages
- * is created and mapped starting at guest physical address 0.  The file
- * descriptor to control the created VM is created with the permissions
- * given by perm (e.g. O_RDWR).
+ * is created, starting at 4G into the guest physical address space to avoid
+ * KVM internal memslots which map the region between 3G and 4G. If tests need
+ * to use the physical region between 0 and 3G, they can allocate another
+ * memslot for that region. The file descriptor to control the created VM is
+ * created with the permissions given by perm (e.g. O_RDWR).
  */
 struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 {
@@ -231,7 +233,8 @@  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	vm->vpages_mapped = sparsebit_alloc();
 	if (phy_pages != 0)
 		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-					    0, 0, phy_pages, 0);
+					    KVM_INTERNAL_MEMSLOTS_END_PADDR,
+					    0, phy_pages, 0);
 
 	return vm;
 }
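
For tests that do need guest physical memory below 3G once memslot 0
starts at 4G, the updated comment suggests adding a second memslot. A
minimal sketch of what that could look like; the slot number, page
count, and helper name are hypothetical example values, not part of
this patch:

#include "kvm_util.h"

#define LOW_MEMSLOT		1	/* example: any otherwise unused slot */
#define LOW_MEMSLOT_PAGES	512	/* example: 2M worth of 4K pages */

/*
 * Back guest physical addresses [0, 2M) with an extra memslot so that
 * code or data that must live below 3G still has memory behind it.
 */
static void add_low_memslot(struct kvm_vm *vm)
{
	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
				    0, LOW_MEMSLOT, LOW_MEMSLOT_PAGES, 0);
}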