From patchwork Fri May 20 21:57:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FA04C433EF for ; Fri, 20 May 2022 21:57:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353781AbiETV5b (ORCPT ); Fri, 20 May 2022 17:57:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56748 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245252AbiETV53 (ORCPT ); Fri, 20 May 2022 17:57:29 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E581518C059 for ; Fri, 20 May 2022 14:57:28 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id c4-20020a170902c2c400b0015f16fb4a54so4635904pla.22 for ; Fri, 20 May 2022 14:57:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=W8jgKI50ePyABDO/nK5JyQIAmfeyEG33bBQNVNWWMLo=; b=EVC2Tw6hx0pB9+hI6J2bqVFQKDQoSGY2N3j6GE270SRsLPPqI0BCr7fjHnrwO/7sOw JgiuqMTP9nBq0VlX4yaKaI5XspIpEKS+Yy/m9iGcXPw55Mx+StZIdikaPM6lbmFShNh1 Aek0OdJKFkwZt+JjHp4sby5LARG0olFca7Tvr7Kt7jagnXcaN/SVAj1/nBlLuPFZ3fJ/ XKflfN8YO4093ZP6hq3fMZEqLUKJVz3wwMtoCIN3kNL9gek96ne7r39n1sPTPiZGduwi JNO6+jxxPqbfGV/GaBkdnAs7A6d8pPZt6RkftP9WDMmFJYU2KQx3vl03Q59m/uK3lkXo F6Eg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=W8jgKI50ePyABDO/nK5JyQIAmfeyEG33bBQNVNWWMLo=; b=c2hZy6Y2t8tEZ3RnY+secw6JiR0QU7R/Us8klBpvD3qY3r8QjzYbCYnsfWIeD0MpXu 9GROTuNSA7C6s3MKfy9EfIkCqPnQn1rf5Gv0pgnwHE9bH5hCXCPHyRvedH2ECsTpAezA pQxarIhf5A7fzytZPdOPYoiBlXRAxXV22wsFcHyj/ersm4USc83GZct25+7pCJD7tXcH +cfcs+AT3jYof+ADg63aLFSGZuOZXTqY1yF3BDsg46hwvCXo+IGRl2VqZiExccr0CFrc nVBDBW3f0tS7qMq3lDpx9UBwcWsZT1n92VedLSLK6GcNlP1ZWSEt57w59oIe7y202J8R biSA== X-Gm-Message-State: AOAM5323GFZgQibPMoGVDwiKXIzI1XRCMrBX3kMMPkiDTfNWib6UN2/N NV5VFGkiMup9TjPULDa3UP4vc0htNOdEWQ== X-Google-Smtp-Source: ABdhPJwnZnJbPPG4z8dPZ5fHdEejCqMoUkVbpW2rjaLMv/9xYxzPSqtqIOwHMn/cWeh119XL8VfcIL5p6Yruwg== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:903:248:b0:155:e8c6:8770 with SMTP id j8-20020a170903024800b00155e8c68770mr11265080plh.129.1653083848320; Fri, 20 May 2022 14:57:28 -0700 (PDT) Date: Fri, 20 May 2022 21:57:14 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-2-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 01/10] KVM: selftests: Replace x86_page_size with PG_LEVEL_XX From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org x86_page_size is an enum used to communicate the desired page size with which to map a range of memory. 
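For orientation, here is roughly how those enum values line up with x86-64 page-table levels and the mapping sizes they imply (an illustrative sketch, not part of the patch; the real definitions appear in the diff below):

/* Illustration only: each x86_page_size value really names a page-table level. */
enum x86_page_size {
	X86_PAGE_SIZE_4K = 0,	/* PTE  level: 1ull << 12 = 4 KiB */
	X86_PAGE_SIZE_2M,	/* PDE  level: 1ull << 21 = 2 MiB */
	X86_PAGE_SIZE_1G,	/* PDPE level: 1ull << 30 = 1 GiB */
};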
Under the hood they just encode the desired level at which to map the page. This ends up being clunky in a few ways: - The name suggests it encodes the size of the page rather than the level. - In other places in x86_64/processor.c we just use a raw int to encode the level. Simplify this by adopting the kernel style of PG_LEVEL_XX enums and pass around raw ints when referring to the level. This makes the code easier to understand since these macros are very common in KVM MMU code. Signed-off-by: David Matlack --- .../selftests/kvm/include/x86_64/processor.h | 18 ++++++---- .../selftests/kvm/lib/x86_64/processor.c | 33 ++++++++++--------- .../selftests/kvm/max_guest_memory_test.c | 2 +- .../selftests/kvm/x86_64/mmu_role_test.c | 2 +- 4 files changed, 31 insertions(+), 24 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 37db341d4cc5..434a4f60f4d9 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -465,13 +465,19 @@ void vcpu_set_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); void vm_xsave_req_perm(int bit); -enum x86_page_size { - X86_PAGE_SIZE_4K = 0, - X86_PAGE_SIZE_2M, - X86_PAGE_SIZE_1G, +enum pg_level { + PG_LEVEL_NONE, + PG_LEVEL_4K, + PG_LEVEL_2M, + PG_LEVEL_1G, + PG_LEVEL_512G, + PG_LEVEL_NUM }; -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - enum x86_page_size page_size); + +#define PG_LEVEL_SHIFT(_level) ((_level - 1) * 9 + 12) +#define PG_LEVEL_SIZE(_level) (1ull << PG_LEVEL_SHIFT(_level)) + +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level); /* * Basic CPU control in CR0 diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 9f000dfb5594..f733c5b02da5 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -190,7 +190,7 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t pt_pfn, uint64_t vaddr, int level) { uint64_t *page_table = addr_gpa2hva(vm, pt_pfn << vm->page_shift); - int index = vaddr >> (vm->page_shift + level * 9) & 0x1ffu; + int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu; return &page_table[index]; } @@ -199,15 +199,15 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm, uint64_t pt_pfn, uint64_t vaddr, uint64_t paddr, - int level, - enum x86_page_size page_size) + int current_level, + int target_level) { - struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level); + struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level); if (!pte->present) { pte->writable = true; pte->present = true; - pte->page_size = (level == page_size); + pte->page_size = (current_level == target_level); if (pte->page_size) pte->pfn = paddr >> vm->page_shift; else @@ -218,20 +218,19 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm, * a hugepage at this level, and that there isn't a hugepage at * this level. 
*/ - TEST_ASSERT(level != page_size, + TEST_ASSERT(current_level != target_level, "Cannot create hugepage at level: %u, vaddr: 0x%lx\n", - page_size, vaddr); + current_level, vaddr); TEST_ASSERT(!pte->page_size, "Cannot create page table at level: %u, vaddr: 0x%lx\n", - level, vaddr); + current_level, vaddr); } return pte; } -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - enum x86_page_size page_size) +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) { - const uint64_t pg_size = 1ull << ((page_size * 9) + 12); + const uint64_t pg_size = PG_LEVEL_SIZE(level); struct pageUpperEntry *pml4e, *pdpe, *pde; struct pageTableEntry *pte; @@ -256,20 +255,22 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, * early if a hugepage was created. */ pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift, - vaddr, paddr, 3, page_size); + vaddr, paddr, PG_LEVEL_512G, level); if (pml4e->page_size) return; - pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size); + pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, PG_LEVEL_1G, + level); if (pdpe->page_size) return; - pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size); + pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, PG_LEVEL_2M, + level); if (pde->page_size) return; /* Fill in page table entry. */ - pte = virt_get_pte(vm, pde->pfn, vaddr, 0); + pte = virt_get_pte(vm, pde->pfn, vaddr, PG_LEVEL_4K); TEST_ASSERT(!pte->present, "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr); pte->pfn = paddr >> vm->page_shift; @@ -279,7 +280,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { - __virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K); + __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); } static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c index 3875c4b23a04..15f046e19cb2 100644 --- a/tools/testing/selftests/kvm/max_guest_memory_test.c +++ b/tools/testing/selftests/kvm/max_guest_memory_test.c @@ -244,7 +244,7 @@ int main(int argc, char *argv[]) #ifdef __x86_64__ /* Identity map memory in the guest using 1gb pages. */ for (i = 0; i < slot_size; i += size_1gb) - __virt_pg_map(vm, gpa + i, gpa + i, X86_PAGE_SIZE_1G); + __virt_pg_map(vm, gpa + i, gpa + i, PG_LEVEL_1G); #else for (i = 0; i < slot_size; i += vm_get_page_size(vm)) virt_pg_map(vm, gpa + i, gpa + i); diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c index da2325fcad87..bdecd532f935 100644 --- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c +++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c @@ -35,7 +35,7 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val) run = vcpu_state(vm, VCPU_ID); /* Map 1gb page without a backing memlot. 
*/ - __virt_pg_map(vm, MMIO_GPA, MMIO_GPA, X86_PAGE_SIZE_1G); + __virt_pg_map(vm, MMIO_GPA, MMIO_GPA, PG_LEVEL_1G); r = _vcpu_run(vm, VCPU_ID); From patchwork Fri May 20 21:57:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857507 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D185C433F5 for ; Fri, 20 May 2022 21:57:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353783AbiETV5c (ORCPT ); Fri, 20 May 2022 17:57:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56768 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353780AbiETV5b (ORCPT ); Fri, 20 May 2022 17:57:31 -0400 Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com [IPv6:2607:f8b0:4864:20::549]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9C8A018C059 for ; Fri, 20 May 2022 14:57:30 -0700 (PDT) Received: by mail-pg1-x549.google.com with SMTP id r190-20020a632bc7000000b003c6222b2192so4704717pgr.11 for ; Fri, 20 May 2022 14:57:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=3YnW/PaLyJgDTu61KobGsZ8UH0DNCazauPQKjEmJn0Q=; b=pCx8QtmtJT7XEuyhRunfTmahsciLus34Qpl2T9RmkB8d3FbcbVU49B+5PKg6cvNFcq iycJ52cDi7h98AsNBWPomiAz+wT5gmvOXu4XOxMeEwh/pHHGGqYe0HK1HRx6WjHUr6Pk dU+28BdQhVm6Lho64counnVLgegQkZbBr2Pr6lSTtcTUt7cJ2sDRqMjrBMxNZB4TBDbF FBr+7jUTzNlBFA0TOiT/FHzSMP/3JroZikJNbV4iehC3aXbeEbZGIAPpe4WiU5G//unr s31eVY3nQZ0rOg+pzl6EvCcO/i/Sd37q0MSzP17hijaa0IxlJBGlp8IPFvlvUQVBfaln G+xA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=3YnW/PaLyJgDTu61KobGsZ8UH0DNCazauPQKjEmJn0Q=; b=U8zG906RFLqpwHtzM2SC0dzbONBYM23itvy/s0jl4TYrFReK64zZYo4LFdyujBGTPl G6tRyUrVD3C9rHRDCN5w4OADUP3inpWIyds8hqHwK+yxadTHicezEQ0N83XR2AGJD/KH vOSvn36up8xXx0oAJJV14FA4rkB8Ahd5CnC0lSsUa5W41POGiyhG2MFmCMzIjLNLxP8Y myFjgtSfEEGcDMs6pqJwox0Z9nrH/Neu7g949YgFHdDrAVqN26tyHpaDQL1imV3+n1w7 QHejyICBxdah9DQAbiBGLRViu3cQwM/b2+Nd60zVQqKPt+B2QzRdOShWgsqUyG6kq+ue IVhQ== X-Gm-Message-State: AOAM5329zjlpdAL7PBWx0yA0QU0MnAXnFnJotC2D2Q9/In0q737Giqa+ 5o2I/9g3Rr6jBfd0MNQOZfY+hRDuWW5bEg== X-Google-Smtp-Source: ABdhPJxx6I9X+rfrVGLHdxIBoF8KZsxOw+uBT7/yMBSUfj9cmLhaV7WcpMMuJ8UBdmYM7N6CsAvjJTNwlprPeQ== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a63:1c61:0:b0:3d8:204b:9fa8 with SMTP id c33-20020a631c61000000b003d8204b9fa8mr10475150pgm.589.1653083849930; Fri, 20 May 2022 14:57:29 -0700 (PDT) Date: Fri, 20 May 2022 21:57:15 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-3-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 02/10] KVM: selftests: Add option to create 2M and 1G EPT mappings From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: 
X-Mailing-List: kvm@vger.kernel.org The current EPT mapping code in the selftests only supports mapping 4K pages. This commit extends that support with an option to map at 2M or 1G. This will be used in a future commit to create large page mappings to test eager page splitting. No functional change intended. Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 110 ++++++++++--------- 1 file changed, 60 insertions(+), 50 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index d089d8b850b5..fdc1e6deb922 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -392,80 +392,90 @@ void nested_vmx_check_supported(void) } } -void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, - uint64_t nested_paddr, uint64_t paddr) +static void nested_create_pte(struct kvm_vm *vm, + struct eptPageTableEntry *pte, + uint64_t nested_paddr, + uint64_t paddr, + int current_level, + int target_level) +{ + if (!pte->readable) { + pte->writable = true; + pte->readable = true; + pte->executable = true; + pte->page_size = (current_level == target_level); + if (pte->page_size) + pte->address = paddr >> vm->page_shift; + else + pte->address = vm_alloc_page_table(vm) >> vm->page_shift; + } else { + /* + * Entry already present. Assert that the caller doesn't want + * a hugepage at this level, and that there isn't a hugepage at + * this level. + */ + TEST_ASSERT(current_level != target_level, + "Cannot create hugepage at level: %u, nested_paddr: 0x%lx\n", + current_level, nested_paddr); + TEST_ASSERT(!pte->page_size, + "Cannot create page table at level: %u, nested_paddr: 0x%lx\n", + current_level, nested_paddr); + } +} + + +void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, int target_level) { - uint16_t index[4]; - struct eptPageTableEntry *pml4e; + const uint64_t page_size = PG_LEVEL_SIZE(target_level); + struct eptPageTableEntry *pt = vmx->eptp_hva, *pte; + uint16_t index; TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use " "unknown or unsupported guest mode, mode: 0x%x", vm->mode); - TEST_ASSERT((nested_paddr % vm->page_size) == 0, + TEST_ASSERT((nested_paddr % page_size) == 0, "Nested physical address not on page boundary,\n" - " nested_paddr: 0x%lx vm->page_size: 0x%x", - nested_paddr, vm->page_size); + " nested_paddr: 0x%lx page_size: 0x%lx", + nested_paddr, page_size); TEST_ASSERT((nested_paddr >> vm->page_shift) <= vm->max_gfn, "Physical address beyond beyond maximum supported,\n" " nested_paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x", paddr, vm->max_gfn, vm->page_size); - TEST_ASSERT((paddr % vm->page_size) == 0, + TEST_ASSERT((paddr % page_size) == 0, "Physical address not on page boundary,\n" - " paddr: 0x%lx vm->page_size: 0x%x", - paddr, vm->page_size); + " paddr: 0x%lx page_size: 0x%lx", + paddr, page_size); TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn, "Physical address beyond beyond maximum supported,\n" " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x", paddr, vm->max_gfn, vm->page_size); - index[0] = (nested_paddr >> 12) & 0x1ffu; - index[1] = (nested_paddr >> 21) & 0x1ffu; - index[2] = (nested_paddr >> 30) & 0x1ffu; - index[3] = (nested_paddr >> 39) & 0x1ffu; - - /* Allocate page directory pointer table if not present. 
*/ - pml4e = vmx->eptp_hva; - if (!pml4e[index[3]].readable) { - pml4e[index[3]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pml4e[index[3]].writable = true; - pml4e[index[3]].readable = true; - pml4e[index[3]].executable = true; - } + for (int level = PG_LEVEL_512G; level >= PG_LEVEL_4K; level--) { + index = (nested_paddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu; + pte = &pt[index]; - /* Allocate page directory table if not present. */ - struct eptPageTableEntry *pdpe; - pdpe = addr_gpa2hva(vm, pml4e[index[3]].address * vm->page_size); - if (!pdpe[index[2]].readable) { - pdpe[index[2]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pdpe[index[2]].writable = true; - pdpe[index[2]].readable = true; - pdpe[index[2]].executable = true; - } + nested_create_pte(vm, pte, nested_paddr, paddr, level, target_level); - /* Allocate page table if not present. */ - struct eptPageTableEntry *pde; - pde = addr_gpa2hva(vm, pdpe[index[2]].address * vm->page_size); - if (!pde[index[1]].readable) { - pde[index[1]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pde[index[1]].writable = true; - pde[index[1]].readable = true; - pde[index[1]].executable = true; - } + if (pte->page_size) + break; - /* Fill in page table entry. */ - struct eptPageTableEntry *pte; - pte = addr_gpa2hva(vm, pde[index[1]].address * vm->page_size); - pte[index[0]].address = paddr >> vm->page_shift; - pte[index[0]].writable = true; - pte[index[0]].readable = true; - pte[index[0]].executable = true; + pt = addr_gpa2hva(vm, pte->address * vm->page_size); + } /* * For now mark these as accessed and dirty because the only * testcase we have needs that. Can be reconsidered later. */ - pte[index[0]].accessed = true; - pte[index[0]].dirty = true; + pte->accessed = true; + pte->dirty = true; + +} + +void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr) +{ + __nested_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K); } /* From patchwork Fri May 20 21:57:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42F85C433FE for ; Fri, 20 May 2022 21:57:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353789AbiETV5e (ORCPT ); Fri, 20 May 2022 17:57:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353784AbiETV5d (ORCPT ); Fri, 20 May 2022 17:57:33 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5AF5718C059 for ; Fri, 20 May 2022 14:57:32 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id h31-20020a63575f000000b003f5eb841a0eso4697319pgm.8 for ; Fri, 20 May 2022 14:57:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=xju6GI4UHmGb+5whaNbhMW72VwBo+kXPtVjs0cWh6sY=; b=ffcBXpe7K5ooQbIfpQcMiz4klnhGmvBzTjvWeiwOHSC1YvvJCoRoY5/bQ97H5zgVIS zmULc3b2143PvpSjlvmIlkrAdceiic0oj0Ti6+iLN2yDFaMy1MJvCuT0j+fcr1B1ETBt 
zoxY37+jXEl395d7EPo/f6sKL2832+kHiZvjM3kmIgYKhL+7oAXMLTnwCzlP4UAYgguY /aPjhU8RRCfXM+oXfae+aWucnjaGAh4REQEabIZn+Hy9Xt+6iUSOWQ8CSoIPL9IYB7sZ L91/ichkJj6EbBBtjcb8L1x9X6ZQY1i9QxYoaxZULkuN6kvklBwLMmHWXGwjbCCeqQAL emSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=xju6GI4UHmGb+5whaNbhMW72VwBo+kXPtVjs0cWh6sY=; b=NF8MMLaSjGfKr+U2P3DvdW0qOl+KhRyc3+5WzdWaxj75/uNtyX9kF1Vj15xHQImH0k +dJK8RX3QwV408d3Aum9eL8Bt/+i+dbhW6fumzjFkNVYf25zQcEXapflfWhDn08C+JRe ihd6Li+FmZiq5j6sV6NoogBgQWKlmGmimHy7m5KQFO/ojE1GqaWAv7WWxlOHUaiE8xG4 B9xVoZAB/qkhqozwmT87MTgA6DNa6jKeOXYDS9jn1bvheG+KPBp/D36tF9OjmSEcHT0B U083DcZk6D4bb+NTHTejFGDhTGFlqSx2CT5ktVjiVaX+JvBUATye48R2F9XnJRgYjfIb MWWg== X-Gm-Message-State: AOAM530PCPpgIf9xGtkXx4u7TdwiJHR+p6POuCuY20V3g0/4I588Sqqv n/DMq17nqblx3AxN81ag6cPBs6VCiEcLug== X-Google-Smtp-Source: ABdhPJxcJsx4srU5/bYWG9TYS7AnIYDeXYJ0DpMuDup3NdAUPEgZtlai4SJeyuzkHbMYxq3SMpge0KkLB5JJLQ== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:ead5:b0:161:7a01:be10 with SMTP id p21-20020a170902ead500b001617a01be10mr11790245pld.4.1653083851802; Fri, 20 May 2022 14:57:31 -0700 (PDT) Date: Fri, 20 May 2022 21:57:16 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-4-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 03/10] KVM: selftests: Drop stale function parameter comment for nested_map() From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org nested_map() does not take a parameter named eptp_memslot. Drop the comment referring to it. 
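For reference, nested_map()'s actual parameter list (as seen later in this series) has no such argument:

/* Current prototype, for reference -- no eptp_memslot parameter: */
void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
		uint64_t nested_paddr, uint64_t paddr, uint64_t size);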
Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 1 - 1 file changed, 1 deletion(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index fdc1e6deb922..baeaa35de113 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -486,7 +486,6 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, * nested_paddr - Nested guest physical address to map * paddr - VM Physical Address * size - The size of the range to map - * eptp_memslot - Memory region slot for new virtual translation tables * * Output Args: None * From patchwork Fri May 20 21:57:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 733E6C433EF for ; Fri, 20 May 2022 21:57:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353792AbiETV5g (ORCPT ); Fri, 20 May 2022 17:57:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56806 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353790AbiETV5e (ORCPT ); Fri, 20 May 2022 17:57:34 -0400 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1524A18C073 for ; Fri, 20 May 2022 14:57:34 -0700 (PDT) Received: by mail-pf1-x44a.google.com with SMTP id cd6-20020a056a00420600b00510a99055e2so4737771pfb.17 for ; Fri, 20 May 2022 14:57:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=/3btooTUxYy4rjHmRouQUpf5jOli5cRswOzHLZ9mznY=; b=k7Rtx8bLLCQghDV/l5jEO7aW1i5WYkytI786Fl27Jks39z16UXQdFxKaA3MjBaT693 4xS7S96vZ90Cfn2RGzFblgLgKnswqp8QiwNvr890xCkZBykxtDMArilakdowyAEPQzZx lIKBW3ZhLH2kui5fEPeSbz5obFo0V/vNZzX3EEYHoWM2jn45sYety4ocWVzRk7u4iS7C t7rMHVDpqDk5EMTUZg3LxztwzFLUjbcN88AwtGygrgbefHOeKW6N2GZqCl9k7ssXeCsP w/j5Q9TVrQPA/NCxzTW7FqUK7w+8jbeIMUFAWOp8DMAjfBWV6GWSNzFCzeZwH5c5zQxG 1eFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=/3btooTUxYy4rjHmRouQUpf5jOli5cRswOzHLZ9mznY=; b=5oFnMBT56uIph/2/gaW4jxeYoAdLAe+eyWVWRIzFzCUG2z1e0Yz2CKueTW1kRO/8sU cMr9WzyNgaclhkeA1xU4M6xhtFBHs41aKxCYVs5ycYLCh4n4RXxGDU1fZ9IO4DEN3IeZ IJYm0MNHn5Hv0zr6JF0KciOruMoR46OUuaWZYF7uD+FymGWv67FtqLz4iJgObMZo/ebe 4xNnPlgz4NDvjsbE7QRGQqjczdcwo+mChG6bbplsDlV3QFyC0o05y3X2fTog1KrllBtE kOMFv5NSw3WqgJwo35RoHOWESjj9oz5OCZl1t3vnErEm7IdCIO9pJkmlSxcr/l7mDSGi J6ng== X-Gm-Message-State: AOAM5335P5bgSZjeAZYSJANgqKQZriC3vZlRNF0KrNs9VOFNhHz2XzT8 vy+umt1Ow7zD0SxaPge737PU55+PcpzDuQ== X-Google-Smtp-Source: ABdhPJxvdD7xN8mPGQkL95kn3dLL455bc/VWf+69K42ROi78thKp4oHWGw/F57O4PFXLMY/iWYrARiQAMoDZFw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:903:1cd:b0:161:f93d:7bd9 with SMTP id e13-20020a17090301cd00b00161f93d7bd9mr3516556plh.76.1653083853532; Fri, 20 May 2022 14:57:33 -0700 (PDT) Date: Fri, 20 May 2022 21:57:17 +0000 In-Reply-To: 
<20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-5-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 04/10] KVM: selftests: Refactor nested_map() to specify target level From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Refactor nested_map() to specify that it explicitly wants 4K mappings (the existing behavior) and push the implementation down into __nested_map(), which can be used in subsequent commits to create huge page mappings. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index baeaa35de113..b8cfe4914a3a 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -486,6 +486,7 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, * nested_paddr - Nested guest physical address to map * paddr - VM Physical Address * size - The size of the range to map + * level - The level at which to map the range * * Output Args: None * @@ -494,22 +495,29 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, * Within the VM given by vm, creates a nested guest translation for the * page range starting at nested_paddr to the page range starting at paddr. */ -void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, - uint64_t nested_paddr, uint64_t paddr, uint64_t size) +void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, uint64_t size, + int level) { - size_t page_size = vm->page_size; + size_t page_size = PG_LEVEL_SIZE(level); size_t npages = size / page_size; TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow"); TEST_ASSERT(paddr + size > paddr, "Paddr overflow"); while (npages--) { - nested_pg_map(vmx, vm, nested_paddr, paddr); + __nested_pg_map(vmx, vm, nested_paddr, paddr, level); nested_paddr += page_size; paddr += page_size; } } +void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, uint64_t size) +{ + __nested_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K); +} + /* Prepare an identity extended page table that maps all the * physical pages in VM.
*/ From patchwork Fri May 20 21:57:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBD6DC433F5 for ; Fri, 20 May 2022 21:57:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353790AbiETV5h (ORCPT ); Fri, 20 May 2022 17:57:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353796AbiETV5g (ORCPT ); Fri, 20 May 2022 17:57:36 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC5A01A04BA for ; Fri, 20 May 2022 14:57:35 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id c4-20020a170902c2c400b0015f16fb4a54so4635904pla.22 for ; Fri, 20 May 2022 14:57:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Z6u374eRRq3ZxgbCbMl6Db4qdzVQrIIZZTwWKsQlAh0=; b=AH5h//rnKEWJfL9N5Ss62V7I6gwUFM1ui3Ku+hjiAVH40L06Hh6/2stcx26oHWHFJg zolFFkb3/3b2eiTHBDTUVD8/oAkUD+iCNVg+4Wu650U7IPaAoMSzIhaYsUfmFaIkxPSy F2x+m9Afyb++ykaotv6HoGEpw6gBU0JuVgI7CP+wTNUDVd1mGUg8FNMKmvjqVBsq8z/K V7OC2dUwGr9iY02lQU7jj7t2aqvvlTTRE37sYhN4v/Ja1qc+1fWTWqzVTtUm24hRs7O7 iFZUvDSgXSF7id75QGow974OycvFOnNHLsySzdkjADLr7MndusJ90acGaYWCIIwSbdVp Lkmg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Z6u374eRRq3ZxgbCbMl6Db4qdzVQrIIZZTwWKsQlAh0=; b=5CrRvzMuvWexzEIwHon8ZtuMgr7iSBSstVa75sBX3gT2nXsZSLIZr29/oPeiKG+Rzx r5kgxskleAYC+ND6hyKW6Tb1mg8r4c5s6XIafvyetl8PgTxtvMSnzehwdXd2MsboU9uf okjOmNN3EV9L0lqYmYHRhEWhLG51EPG8jQdz7MekALrlEoqLEFCnalIGiZBKiyS60NgC oQxemSvlnbO/8EuUVU7k/axw7YiBtuvTG7/W1JQ2qduxLnPPn1Za7rx1FbNPRzUk7ZTi gj1jbo8vlD3OHj4ZF5pLIcglgpfP8iCssxYcw/Yr9ZkZ5S+a2gMh7cubymYr4xxTPQ5F 3qVQ== X-Gm-Message-State: AOAM532TfLd/nVGJtvq2ML+ItfN18DnDokGZF/g8qMMIpgqOCbQly07a xr7xchS+SE51yLlydl5giETAkQeKZxP0bw== X-Google-Smtp-Source: ABdhPJxQxP7pYHQ6TpYpB4cct2qG9BGZTYmLGpAeGvc9Ahjl8dNodSPhFEACKJn/oh1FAh8tkT4MunjcBNO/Tg== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90a:a385:b0:1cb:bfa8:ae01 with SMTP id x5-20020a17090aa38500b001cbbfa8ae01mr12973233pjp.116.1653083855177; Fri, 20 May 2022 14:57:35 -0700 (PDT) Date: Fri, 20 May 2022 21:57:18 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-6-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 05/10] KVM: selftests: Move VMX_EPT_VPID_CAP_AD_BITS to vmx.h From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This is a VMX-related macro so move it to vmx.h. 
While here, open code the mask like the rest of the VMX bitmask macros. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/include/x86_64/processor.h | 3 --- tools/testing/selftests/kvm/include/x86_64/vmx.h | 2 ++ 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 434a4f60f4d9..04f1d540bcb2 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -494,9 +494,6 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) #define X86_CR0_CD (1UL<<30) /* Cache Disable */ #define X86_CR0_PG (1UL<<31) /* Paging */ -/* VMX_EPT_VPID_CAP bits */ -#define VMX_EPT_VPID_CAP_AD_BITS (1ULL << 21) - #define XSTATE_XTILE_CFG_BIT 17 #define XSTATE_XTILE_DATA_BIT 18 diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 583ceb0d1457..3b1794baa97c 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -96,6 +96,8 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 +#define VMX_EPT_VPID_CAP_AD_BITS 0x00200000 + #define EXIT_REASON_FAILED_VMENTRY 0x80000000 #define EXIT_REASON_EXCEPTION_NMI 0 #define EXIT_REASON_EXTERNAL_INTERRUPT 1 From patchwork Fri May 20 21:57:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 464CBC433F5 for ; Fri, 20 May 2022 21:57:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353796AbiETV5k (ORCPT ); Fri, 20 May 2022 17:57:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56868 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353803AbiETV5i (ORCPT ); Fri, 20 May 2022 17:57:38 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D0CCC18FF04 for ; Fri, 20 May 2022 14:57:37 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id g5-20020a17090a4b0500b001df2807132fso7579259pjh.7 for ; Fri, 20 May 2022 14:57:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=zIzRUypz+W9E6mtg9tuZCtp0bD/d3NxgK2XH6wHm+BQ=; b=lzkGbxkf2ehhKYfK6S3OwjfDWJjLf7d6QhQboP/E/laGMeRIqVbDGr6t3Nq5MtKcbf VkKezaVqxCx38eOIC67UkssNKWJFAsovrUVFMTaCzJHsRjmQu6TyA8C23zzP0Urr1PVw sTYfhkogK9mhgiX67QNvjQWi9oHGSJVjSvhyz4ATHzsTs7w67mVEjnTK17lPKhUaWVUA Xojw4RKfFuYTWwhhBnafs25YyyR6k/jarM4jtArvZY/5f8pZWVkH1X0BIsKNHTlkEABm 9pSNJaSwU363zD9WV2WScf8gf11Q/bEcTcc2XKEdtEb2dJuwR6KFwlZ89HDqgAIignNd YUYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=zIzRUypz+W9E6mtg9tuZCtp0bD/d3NxgK2XH6wHm+BQ=; b=SgMitj6Fkqc6VI7kdGEBQbB9AM7TqCTKykWS2/Mk+6AoKEGCCH8a3E0WF3ruMEzhcq 
vmEaJJWf4anIEj6clhz42cCnOyCPRfboX8HsrAISXvyv0hj5xdORyzAyIdbaUDUtmjU9 wveBqYdo71BoVZ9IwAxO1rtrsVPS7Ns8fIM2u7JRqVbi+02VpH1A0ubMZ2O2Pt+EBkwr DP6pNAZo2yET/EZxDK1fDFaosrfU9R8XdCgrKH2uMvcKXg+md4vnUjfbKZ6kF4HBYpOF fJ6B4rVLjV4vYBycD/f8LUnM/5WrIdhFIfL0BXknOAuMayCPVooxQ7SSu9WYar6Kmquf EJKQ== X-Gm-Message-State: AOAM531D2VnJdHv2SsmzoKj7ME+DIoWbQK1ktq67sFp/HCn3oyqTejWh 1o1NRHAdKNH/ZW346g3/Vb/FyBwLOSU8zA== X-Google-Smtp-Source: ABdhPJxaxMBdZrcVn1fPFu7IgRd1JMlKjV9MGZHsI5h5RID8RyDbDhMRRUkKBSP5NgIr0oNodApYpXKy77jo5g== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a05:6a00:228d:b0:510:7594:a73c with SMTP id f13-20020a056a00228d00b005107594a73cmr12032983pfe.17.1653083857218; Fri, 20 May 2022 14:57:37 -0700 (PDT) Date: Fri, 20 May 2022 21:57:19 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-7-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 06/10] KVM: selftests: Add a helper to check EPT/VPID capabilities From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Create a small helper function to check if a given EPT/VPID capability is supported. This will be re-used in a follow-up commit to check for 1G page support. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index b8cfe4914a3a..5bf169179455 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -198,6 +198,11 @@ bool load_vmcs(struct vmx_pages *vmx) return true; } +static bool ept_vpid_cap_supported(uint64_t mask) +{ + return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask; +} + /* * Initialize the control fields to the most basic settings possible. 
*/ @@ -215,7 +220,7 @@ static inline void init_vmcs_control_fields(struct vmx_pages *vmx) struct eptPageTablePointer eptp = { .memory_type = VMX_BASIC_MEM_TYPE_WB, .page_walk_length = 3, /* + 1 */ - .ad_enabled = !!(rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & VMX_EPT_VPID_CAP_AD_BITS), + .ad_enabled = ept_vpid_cap_supported(VMX_EPT_VPID_CAP_AD_BITS), .address = vmx->eptp_gpa >> PAGE_SHIFT_4K, }; From patchwork Fri May 20 21:57:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C852AC433F5 for ; Fri, 20 May 2022 21:57:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353799AbiETV5q (ORCPT ); Fri, 20 May 2022 17:57:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353800AbiETV5l (ORCPT ); Fri, 20 May 2022 17:57:41 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B9081A04BA for ; Fri, 20 May 2022 14:57:39 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id q64-20020a17090a1b4600b001dfc02fe731so4158871pjq.0 for ; Fri, 20 May 2022 14:57:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=+O7sOCqFij4Cg9LTfpyCAY9gXqKy94OnTf8FSIBC0lo=; b=eAiIdTonfqHcoBQFnow40OlKYZnyLqG04Q5S9/X1WFzHqlPN1DKleCLTeD706ZR4CK tW4PFm2KPcuDcOGRaYKS5HUQlo/jxR1o3Zj9D4W5l5vkbsda8AhitXkwgnsrXBVl9SMA ClImdActs2F+BNKCDGuWfFrAAvBSRM9CPJxjy/+97BNFQ8OG3IHCpzQr6lPLGlJxl7OO rF4Fcmq7CvKIKrWdJZ4/jj7vnF2Y6mg+GwB9DNlSWomxC/T6HRBdt3JX2tYYTV5PglZh eVUXjInF7JQcBLTBHmJSfNMy0ju4o6M88Q3Hht1W3gZpEDRG2JkS64BhNxmlE9xePUSS VvZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=+O7sOCqFij4Cg9LTfpyCAY9gXqKy94OnTf8FSIBC0lo=; b=75zb4YAhjb4uAkaN4opamGxitwW4C65AXQb9kc7ZRsSdx91c2ulG1gig4iq/vzdPTH ucNJS8PoIN1RrYie60Jd03Vu2y3Hymte2aw1PFh7Qsh9ewCyN7Bexu8PRH4501/dbwyJ o/SIPxgHc+6Iu1XR7fjyJ9R24OhR+B05qdFquafPoRKX+6gEBStfzCzdYra1HCm2VAHL AG9yBgsRu9edyC2XhuXFyy8tClBkUgL0nq+5pw65EFV88w0D6VzY3AcbgoOUhYX3WhhF sLyKC2MQAM1a6RD3X7L8SJzGqsd2f6MnRR4e37KictJ+Bl+ecWQAjTKa8+8sFTf/SKOh uCRw== X-Gm-Message-State: AOAM5322uHt2aS3KtccqsOyaUVToCg4d+kKConiwW72rYHqyFl1wBFFa 0Xd1uCRSUk1wqEA83/E5WQrauqW16PsyMQ== X-Google-Smtp-Source: ABdhPJycDNMXBzHsvuBRbyFuFvEcxIXJntsN9HxI2lZSvck6iqMCOTq+KbSOI47nT6uUlsb15qalh2yeHjEKJA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a62:6dc3:0:b0:505:895a:d38b with SMTP id i186-20020a626dc3000000b00505895ad38bmr12211524pfc.7.1653083858964; Fri, 20 May 2022 14:57:38 -0700 (PDT) Date: Fri, 20 May 2022 21:57:20 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-8-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 07/10] KVM: selftests: Drop unnecessary 
rule for STATIC_LIBS From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Drop the "all: $(STATIC_LIBS)" rule. The KVM selftests already depend on $(STATIC_LIBS), so there is no reason to have an extra "all" rule. Suggested-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 1 - 1 file changed, 1 deletion(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 8c3db2f75315..ae49abe682a7 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -192,7 +192,6 @@ $(OUTPUT)/libkvm.a: $(LIBKVM_OBJS) $(AR) crs $@ $^ x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) -all: $(STATIC_LIBS) $(TEST_GEN_PROGS): $(STATIC_LIBS) cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib .. From patchwork Fri May 20 21:57:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 013CCC433FE for ; Fri, 20 May 2022 21:57:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353805AbiETV5s (ORCPT ); Fri, 20 May 2022 17:57:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57210 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353811AbiETV5p (ORCPT ); Fri, 20 May 2022 17:57:45 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 311781A074E for ; Fri, 20 May 2022 14:57:41 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id x23-20020a170902b41700b0015ea144789fso4635448plr.13 for ; Fri, 20 May 2022 14:57:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uIcCRTk30Rd3Z1grC6z/5SkL7Y9z8d0k84iCVKmVWzQ=; b=EYtQDvWp5wBfanyb1D6exMeBbT7nrNc+auo5+cFdRJ2DE20oLu9Gi0DyNzf/T6zb+Y XdEVmYsRuE1BD9Wiube3BwIH8VKrjCTWyo7Z4UCsbvX89EU3K2T8lsVBrY/8H+4cpCKZ O5vzbA+4ImpFttuaJXcFwThXM79fiarmSwlnmQJCPtICSClsUabS5+X5JLpLQEBn/8Mt RZox5TlGHcwbXsjekM1wFngCwVQfT1zphIbavrL8EBRPkENmUXgZlNQSYNNXjxZELnbM 9VQUi7wgJ6LwPzDAcLZcVBwObeGOYj4ZbGQOFAsonBRgorpos+3fPae3WHAcyF4ZQmfe btrw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uIcCRTk30Rd3Z1grC6z/5SkL7Y9z8d0k84iCVKmVWzQ=; b=OaizRMlhkhGK0gbec3kpWMgQZJrZD04TNWju5DRlWxnfXhCFEn18s2Q0MK2ZH6ZMAv yNCdSVkaKNuAsKqtZVDrw54zpTyW8imAT6ZTh5WTsEe5Y2rQQD0u6km/mw6Zft5egBu3 5bzD5yuldhU0m5uTYkb1Yzq5G8j+XMEnA3BMW/gpBzgVYxRjKASxL3iPBXsHIK4D/G8A tQerdk4jnHST7x7fNFLVG88aq0U69OflIC8vLLoRNgmA4djZYV0D0tT29XzE73ARFhAP Cyzk1WPqIDG1FzRBK5wscC+BhLwLYI0unxN2KDPMuzapCdf2VJc+in/MSS78Pdy+2qEG 2pgw== X-Gm-Message-State: AOAM530JLxxv0jxAK0y/2EpoCOKW4MkUn9F3vhmiaHlfpbVovgU9CCI0 4LRYArpbJ/TgTOhRru0gCx0W4DqFppDu6g== X-Google-Smtp-Source: 
ABdhPJwo5l12LfoisYsB2jcK5fLSiEC52KVseoy5cSbiqNBDIIua+fn3ipBal+FIf6V52RiuGbg3W3q8Xvqo5A== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:6b83:b0:15d:1ea2:4f80 with SMTP id p3-20020a1709026b8300b0015d1ea24f80mr11527203plk.41.1653083860694; Fri, 20 May 2022 14:57:40 -0700 (PDT) Date: Fri, 20 May 2022 21:57:21 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-9-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 08/10] KVM: selftests: Link selftests directly with lib object files From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The linker does obey strong/weak symbols when linking static libraries; it simply resolves an undefined symbol to the first-encountered symbol. This means that defining __weak arch-generic functions and then defining arch-specific strong functions to override them in libkvm will not always work. More specifically, if we have: lib/generic.c: void __weak foo(void) { pr_info("weak\n"); } void bar(void) { foo(); } lib/x86_64/arch.c: void foo(void) { pr_info("strong\n"); } And a selftest that calls bar() will print "weak". Now if you make generic.o explicitly depend on arch.o (e.g. add a function to arch.c that is called directly from generic.c) it will print "strong". In other words, it seems that the linker is free to throw out arch.o when linking because generic.o does not explicitly depend on it, which causes the linker to lose the strong symbol. One solution is to link libkvm.a with --whole-archive so that the linker doesn't throw away object files it thinks are unnecessary. However that is a bit difficult to plumb since we are using the common selftests makefile rules. An easier solution is to drop libkvm.a and just link selftests with all the .o files that were originally in libkvm.a.
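The same behavior can be reproduced outside the selftests; a minimal, self-contained sketch of the scenario described above (illustration only, not part of the patch; main.c and the build commands are hypothetical):

/* generic.c */
#include <stdio.h>
void __attribute__((weak)) foo(void) { printf("weak\n"); }
void bar(void) { foo(); }

/* arch.c */
#include <stdio.h>
void foo(void) { printf("strong\n"); }

/* main.c */
void bar(void);
int main(void) { bar(); return 0; }

/*
 * cc -c generic.c arch.c main.c
 * ar crs lib.a generic.o arch.o
 * cc -o t1 main.o lib.a             && ./t1  -> "weak"   (arch.o never pulled in)
 * cc -o t2 main.o generic.o arch.o  && ./t2  -> "strong" (strong definition wins)
 *
 * Linking the .o files directly, as this patch does for the selftests,
 * is what lets an arch-specific strong definition override a generic
 * weak one.
 */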
Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index ae49abe682a7..0889fc17baa5 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -173,12 +173,13 @@ LDFLAGS += -pthread $(no-pie-option) $(pgste-option) # $(TEST_GEN_PROGS) starts with $(OUTPUT)/ include ../lib.mk -STATIC_LIBS := $(OUTPUT)/libkvm.a LIBKVM_C := $(filter %.c,$(LIBKVM)) LIBKVM_S := $(filter %.S,$(LIBKVM)) LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C)) LIBKVM_S_OBJ := $(patsubst %.S, $(OUTPUT)/%.o, $(LIBKVM_S)) -EXTRA_CLEAN += $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(STATIC_LIBS) cscope.* +LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) + +EXTRA_CLEAN += $(LIBKVM_OBJS) cscope.* x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)))) $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c @@ -187,12 +188,8 @@ $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ -LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) -$(OUTPUT)/libkvm.a: $(LIBKVM_OBJS) - $(AR) crs $@ $^ - x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) -$(TEST_GEN_PROGS): $(STATIC_LIBS) +$(TEST_GEN_PROGS): $(LIBKVM_OBJS) cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib .. cscope: From patchwork Fri May 20 21:57:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 432B2C433EF for ; Fri, 20 May 2022 21:57:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353807AbiETV5t (ORCPT ); Fri, 20 May 2022 17:57:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353809AbiETV5q (ORCPT ); Fri, 20 May 2022 17:57:46 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63C1E1A0ADC for ; Fri, 20 May 2022 14:57:43 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id h128-20020a636c86000000b003c574b3422aso4693573pgc.12 for ; Fri, 20 May 2022 14:57:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=iQaFMbmFoi2VGc8icwKYFgNyKRkuTYMPuUBun/zPSQs=; b=HSVY+mAllOBZya3h0V5nQ1b9YLbmIIHsNIDA4j4q+Coc5Ph4jC8n9q6CTN2UsSRCvA RhgeWTANX/dTR2/6G3QqVAKsPxBzSiS+AIRjbfV+sbH3qT5W69Bg2wsC0kbtwmzTQ+W2 adyWn0IQSVw7EJZUKjE3djAo9loDgeuu+juT+69R2BuLZ0QR2ATgxCH53e/fGaFn5mpo q30yONoKWGrWv08uE8Se7ZV7POjEAqxDAfxYTVxizT02kn1TX5elFXt0rbqul8SomPbX MKvqoZP5vLmY8IXh2G3WduPTjI/hicULvbqQ1deovc7wWSujY9A5t/C00pAzbqmnm4d7 F8hw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=iQaFMbmFoi2VGc8icwKYFgNyKRkuTYMPuUBun/zPSQs=; b=p1R4rdHdTBlUUkIlmNun8jRng45bKVRT4+akqSdxL581IEKMRCdQNg2h5kSLfBibZJ 
Eg80TO0wDywNgEoto5QdjbI45YpBsRXUxVrw4BVJbXZpHC3dNCXzEqT73QBnnc2UMrDh +Mw2dZLZ36xLnNzKvkBtYeGqXAArgqTdx1fTZy3WAIM/VCf7477RjG2raLpNAHUNNFWv ZYnivUzxqrStelox5vLD6HLp5YR+bbBQsLxkJ1RfQjbU5HRDCPsvDt1Up/ZtmAfuDQpN zNoAeBdCyd5w2X+DRbDndnzA/mrIdhP7nR7L/8BCIPQ9RWr/I4NnPzNiOY8btRFQpjEx wrCA== X-Gm-Message-State: AOAM532LIYhJ5QfAmaFg08RgSwbDaEmjwavmb/9epcQ+YLzBaVZ68Js7 8wju33rzcfA1AXtn/oIyDm+cx/GtpN/w9g== X-Google-Smtp-Source: ABdhPJxnXie0TPyJPxH209jH4UJBFHy4lGoLluujVymHpmkzQR2AlpDIZHUGR9tyymWUKsKmIfUMChFug9xYZg== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90a:2a8a:b0:1df:26ba:6333 with SMTP id j10-20020a17090a2a8a00b001df26ba6333mr474198pjd.0.1653083862365; Fri, 20 May 2022 14:57:42 -0700 (PDT) Date: Fri, 20 May 2022 21:57:22 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-10-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 09/10] KVM: selftests: Clean up LIBKVM files in Makefile From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Break up the long lines for LIBKVM and alphabetize each architecture. This makes reading the Makefile easier, and will make reading diffs to LIBKVM easier. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 36 ++++++++++++++++++++++++---- 1 file changed, 31 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 0889fc17baa5..83b9ffa456ea 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -37,11 +37,37 @@ ifeq ($(ARCH),riscv) UNAME_M := riscv endif -LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c -LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S -LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c -LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c -LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c +LIBKVM += lib/assert.c +LIBKVM += lib/elf.c +LIBKVM += lib/guest_modes.c +LIBKVM += lib/io.c +LIBKVM += lib/kvm_util.c +LIBKVM += lib/perf_test_util.c +LIBKVM += lib/rbtree.c +LIBKVM += lib/sparsebit.c +LIBKVM += lib/test_util.c + +LIBKVM_x86_64 += lib/x86_64/apic.c +LIBKVM_x86_64 += lib/x86_64/handlers.S +LIBKVM_x86_64 += lib/x86_64/processor.c +LIBKVM_x86_64 += lib/x86_64/svm.c +LIBKVM_x86_64 += lib/x86_64/ucall.c +LIBKVM_x86_64 += lib/x86_64/vmx.c + +LIBKVM_aarch64 += lib/aarch64/gic.c +LIBKVM_aarch64 += lib/aarch64/gic_v3.c +LIBKVM_aarch64 += lib/aarch64/handlers.S +LIBKVM_aarch64 += lib/aarch64/processor.c +LIBKVM_aarch64 += lib/aarch64/spinlock.c +LIBKVM_aarch64 += lib/aarch64/ucall.c +LIBKVM_aarch64 += lib/aarch64/vgic.c + +LIBKVM_s390x += lib/s390x/diag318_test_handler.c +LIBKVM_s390x += lib/s390x/processor.c +LIBKVM_s390x += lib/s390x/ucall.c + +LIBKVM_riscv += lib/riscv/processor.c +LIBKVM_riscv += 
lib/riscv/ucall.c TEST_GEN_PROGS_x86_64 = x86_64/cpuid_test TEST_GEN_PROGS_x86_64 += x86_64/cr4_cpuid_sync_test From patchwork Fri May 20 21:57:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12857515 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58CC3C433EF for ; Fri, 20 May 2022 21:57:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353814AbiETV5x (ORCPT ); Fri, 20 May 2022 17:57:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57218 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353801AbiETV5t (ORCPT ); Fri, 20 May 2022 17:57:49 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F1CBC1A04BA for ; Fri, 20 May 2022 14:57:44 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id d12-20020a17090a628c00b001dcd2efca39so5280035pjj.2 for ; Fri, 20 May 2022 14:57:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=MemxomWyn3xFxsnJwUeubmFMBZbsEx+Q3Mj2FuHfdHM=; b=jIg4wQejIfsFzDii3julo8lQtz8/QfW2kBREynDCm0BR4iBLizMo5nn+goraWdAnTY NejYrb2NgjTC/K4fmkVsn9yTrSt8uFPNc+i3PbrRIx4ysDCpSKON40SKZk9SEUYApBIj wfam421KboPFvlXkDSYkUizjznNkoUO3e2/yJJlCTU/RUGWkd5cT+Lqu1PnHYXvOxOvC rHmgnCw6Bu1AGJO3BuGyYRlyVIkqS1wJFgTlTwXRF9k6MSLiuhXpfT439E3OYVd1BtuI wK6GlqPsNtb4r+4ic+DQv+/s5wX/E0PW32XhxkClG3ekBomxHkPtWJcejVo5hUMxfjie s4Fw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=MemxomWyn3xFxsnJwUeubmFMBZbsEx+Q3Mj2FuHfdHM=; b=I9BbViirds24IFbvmSXRY4OE94Wj6gF+VWP6bA/UoHOv6O9JwzfXZ52jbQqcr1FD01 7z2e5GH9Bw7ZAoYK48chwXOYolN+N55VCelCMrCQyM3wxOFu9a9n3+XbtjHXVt0Ac/mM gaTU67mKTMbDl/8fhkgN0dOBT3ljmay3Q1mp2h3aV4DPg/o87GTkepi8e0ZcUJA5t3w3 9EbQzVZ9Zy9TWwUMmYJ387nQgTso5cZMK2CQXFMPcxMVt/goNWrJj6nldl2IkBmHx2OY egHE5UtnsK+6bpAFq2EyeEZ8xMOCcz1ww8Hhx7/uEdsaL7/XREh1H2cSjs6FPHLndCud JGFg== X-Gm-Message-State: AOAM533YZ1qIA6x6/Z3opckYGAPSr+KR7HTDKgnceMIb1fW2mSAt90M8 vfJKnKdiBZeHjI4P6A/AQzFPNQKTYSQihw== X-Google-Smtp-Source: ABdhPJzOQzBi5jNUe4g4dzg8hffGCwHOodxSEdbGTNWeLXZKGJYvqasmJnMSeGuhZ1BV+xKcz0jWwctK4Lfjew== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:f605:b0:14d:9e11:c864 with SMTP id n5-20020a170902f60500b0014d9e11c864mr11705635plg.54.1653083864355; Fri, 20 May 2022 14:57:44 -0700 (PDT) Date: Fri, 20 May 2022 21:57:23 +0000 In-Reply-To: <20220520215723.3270205-1-dmatlack@google.com> Message-Id: <20220520215723.3270205-11-dmatlack@google.com> Mime-Version: 1.0 References: <20220520215723.3270205-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog Subject: [PATCH v3 10/10] KVM: selftests: Add option to run dirty_log_perf_test vCPUs in L2 From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: 
kvm@vger.kernel.org Add an option to dirty_log_perf_test that configures the vCPUs to run in L2 instead of L1. This makes it possible to benchmark the dirty logging performance of nested virtualization, which is particularly interesting because KVM must shadow L1's EPT/NPT tables. For now this support only works on x86_64 CPUs with VMX. Otherwise passing -n results in the test being skipped. Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/dirty_log_perf_test.c | 10 +- .../selftests/kvm/include/perf_test_util.h | 9 ++ .../selftests/kvm/include/x86_64/processor.h | 4 + .../selftests/kvm/include/x86_64/vmx.h | 4 + .../selftests/kvm/lib/perf_test_util.c | 35 +++++- .../selftests/kvm/lib/x86_64/perf_test_util.c | 112 ++++++++++++++++++ tools/testing/selftests/kvm/lib/x86_64/vmx.c | 15 +++ 8 files changed, 182 insertions(+), 8 deletions(-) create mode 100644 tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 83b9ffa456ea..42cb904f6e54 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -49,6 +49,7 @@ LIBKVM += lib/test_util.c LIBKVM_x86_64 += lib/x86_64/apic.c LIBKVM_x86_64 += lib/x86_64/handlers.S +LIBKVM_x86_64 += lib/x86_64/perf_test_util.c LIBKVM_x86_64 += lib/x86_64/processor.c LIBKVM_x86_64 += lib/x86_64/svm.c LIBKVM_x86_64 += lib/x86_64/ucall.c diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 7b47ae4f952e..d60a34cdfaee 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -336,8 +336,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) static void help(char *name) { puts(""); - printf("usage: %s [-h] [-i iterations] [-p offset] [-g]" - "[-m mode] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]" + printf("usage: %s [-h] [-i iterations] [-p offset] [-g] " + "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]" "[-x memslots]\n", name); puts(""); printf(" -i: specify iteration counts (default: %"PRIu64")\n", @@ -351,6 +351,7 @@ static void help(char *name) printf(" -p: specify guest physical test memory offset\n" " Warning: a low offset can conflict with the loaded test code.\n"); guest_modes_help(); + printf(" -n: Run the vCPUs in nested mode (L2)\n"); printf(" -b: specify the size of the memory region which should be\n" " dirtied by each vCPU. e.g. 10M or 3G.\n" " (default: 1G)\n"); @@ -387,7 +388,7 @@ int main(int argc, char *argv[]) guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ghi:p:m:b:f:v:os:x:")) != -1) { + while ((opt = getopt(argc, argv, "ghi:p:m:nb:f:v:os:x:")) != -1) { switch (opt) { case 'g': dirty_log_manual_caps = 0; @@ -401,6 +402,9 @@ int main(int argc, char *argv[]) case 'm': guest_modes_cmdline(optarg); break; + case 'n': + perf_test_args.nested = true; + break; case 'b': guest_percpu_mem_size = parse_size(optarg); break; diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h index a86f953d8d36..d822cb670f1c 100644 --- a/tools/testing/selftests/kvm/include/perf_test_util.h +++ b/tools/testing/selftests/kvm/include/perf_test_util.h @@ -30,10 +30,15 @@ struct perf_test_vcpu_args { struct perf_test_args { struct kvm_vm *vm; + /* The starting address and size of the guest test region. 
*/ uint64_t gpa; + uint64_t size; uint64_t guest_page_size; int wr_fract; + /* Run vCPUs in L2 instead of L1, if the architecture supports it. */ + bool nested; + struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS]; }; @@ -49,5 +54,9 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract); void perf_test_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct perf_test_vcpu_args *)); void perf_test_join_vcpu_threads(int vcpus); +void perf_test_guest_code(uint32_t vcpu_id); + +uint64_t perf_test_nested_pages(int nr_vcpus); +void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus); #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */ diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 04f1d540bcb2..2aaea757432a 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -477,6 +477,10 @@ enum pg_level { #define PG_LEVEL_SHIFT(_level) ((_level - 1) * 9 + 12) #define PG_LEVEL_SIZE(_level) (1ull << PG_LEVEL_SHIFT(_level)) +#define PG_SIZE_4K PG_LEVEL_SIZE(PG_LEVEL_4K) +#define PG_SIZE_2M PG_LEVEL_SIZE(PG_LEVEL_2M) +#define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G) + void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level); /* diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 3b1794baa97c..cc3604f8f1d3 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -96,6 +96,7 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 +#define VMX_EPT_VPID_CAP_1G_PAGES 0x00020000 #define VMX_EPT_VPID_CAP_AD_BITS 0x00200000 #define EXIT_REASON_FAILED_VMENTRY 0x80000000 @@ -608,6 +609,7 @@ bool load_vmcs(struct vmx_pages *vmx); bool nested_vmx_supported(void); void nested_vmx_check_supported(void); +bool ept_1g_pages_supported(void); void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr); @@ -615,6 +617,8 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size); void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t memslot); +void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t addr, uint64_t size); void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t eptp_memslot); void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm); diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c index 722df3a28791..b2ff2cee2e51 100644 --- a/tools/testing/selftests/kvm/lib/perf_test_util.c +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c @@ -40,7 +40,7 @@ static bool all_vcpu_threads_running; * Continuously write to the first 8 bytes of each page in the * specified region. 
*/ -static void guest_code(uint32_t vcpu_id) +void perf_test_guest_code(uint32_t vcpu_id) { struct perf_test_args *pta = &perf_test_args; struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_id]; @@ -108,7 +108,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, { struct perf_test_args *pta = &perf_test_args; struct kvm_vm *vm; - uint64_t guest_num_pages; + uint64_t guest_num_pages, slot0_pages = DEFAULT_GUEST_PHY_PAGES; uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src); int i; @@ -134,13 +134,20 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, "Guest memory cannot be evenly divided into %d slots.", slots); + /* + * If using nested, allocate extra pages for the nested page tables and + * in-memory data structures. + */ + if (pta->nested) + slot0_pages += perf_test_nested_pages(vcpus); + /* * Pass guest_num_pages to populate the page tables for test memory. * The memory is also added to memslot 0, but that's a benign side * effect as KVM allows aliasing HVAs in meslots. */ - vm = vm_create_with_vcpus(mode, vcpus, DEFAULT_GUEST_PHY_PAGES, - guest_num_pages, 0, guest_code, NULL); + vm = vm_create_with_vcpus(mode, vcpus, slot0_pages, guest_num_pages, 0, + perf_test_guest_code, NULL); pta->vm = vm; @@ -161,7 +168,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, /* Align to 1M (segment size) */ pta->gpa = align_down(pta->gpa, 1 << 20); #endif - pr_info("guest physical test memory offset: 0x%lx\n", pta->gpa); + pta->size = guest_num_pages * pta->guest_page_size; + pr_info("guest physical test memory: [0x%lx, 0x%lx)\n", + pta->gpa, pta->gpa + pta->size); /* Add extra memory slots for testing */ for (i = 0; i < slots; i++) { @@ -178,6 +187,11 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, perf_test_setup_vcpus(vm, vcpus, vcpu_memory_bytes, partition_vcpu_memory_access); + if (pta->nested) { + pr_info("Configuring vCPUs to run in L2 (nested).\n"); + perf_test_setup_nested(vm, vcpus); + } + ucall_init(vm, NULL); /* Export the shared variables to the guest. */ @@ -198,6 +212,17 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract) sync_global_to_guest(vm, perf_test_args); } +uint64_t __weak perf_test_nested_pages(int nr_vcpus) +{ + return 0; +} + +void __weak perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus) +{ + pr_info("%s() not support on this architecture, skipping.\n", __func__); + exit(KSFT_SKIP); +} + static void *vcpu_thread_main(void *data) { struct vcpu_thread *vcpu = data; diff --git a/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c new file mode 100644 index 000000000000..e258524435a0 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c @@ -0,0 +1,112 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * x86_64-specific extensions to perf_test_util.c. + * + * Copyright (C) 2022, Google, Inc. 
+ */ +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "perf_test_util.h" +#include "../kvm_util_internal.h" +#include "processor.h" +#include "vmx.h" + +void perf_test_l2_guest_code(uint64_t vcpu_id) +{ + perf_test_guest_code(vcpu_id); + vmcall(); +} + +extern char perf_test_l2_guest_entry[]; +__asm__( +"perf_test_l2_guest_entry:" +" mov (%rsp), %rdi;" +" call perf_test_l2_guest_code;" +" ud2;" +); + +static void perf_test_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id) +{ +#define L2_GUEST_STACK_SIZE 64 + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; + unsigned long *rsp; + + GUEST_ASSERT(vmx->vmcs_gpa); + GUEST_ASSERT(prepare_for_vmx_operation(vmx)); + GUEST_ASSERT(load_vmcs(vmx)); + GUEST_ASSERT(ept_1g_pages_supported()); + + rsp = &l2_guest_stack[L2_GUEST_STACK_SIZE - 1]; + *rsp = vcpu_id; + prepare_vmcs(vmx, perf_test_l2_guest_entry, rsp); + + GUEST_ASSERT(!vmlaunch()); + GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL); + GUEST_DONE(); +} + +uint64_t perf_test_nested_pages(int nr_vcpus) +{ + /* + * 513 page tables is enough to identity-map 256 TiB of L2 with 1G + * pages and 4-level paging, plus a few pages per-vCPU for data + * structures such as the VMCS. + */ + return 513 + 10 * nr_vcpus; +} + +void perf_test_setup_ept(struct vmx_pages *vmx, struct kvm_vm *vm) +{ + uint64_t start, end; + + prepare_eptp(vmx, vm, 0); + + /* + * Identity map the first 4G and the test region with 1G pages so that + * KVM can shadow the EPT12 with the maximum huge page size supported + * by the backing source. + */ + nested_identity_map_1g(vmx, vm, 0, 0x100000000ULL); + + start = align_down(perf_test_args.gpa, PG_SIZE_1G); + end = align_up(perf_test_args.gpa + perf_test_args.size, PG_SIZE_1G); + nested_identity_map_1g(vmx, vm, start, end - start); +} + +void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus) +{ + struct vmx_pages *vmx, *vmx0 = NULL; + struct kvm_regs regs; + vm_vaddr_t vmx_gva; + int vcpu_id; + + nested_vmx_check_supported(); + + for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) { + vmx = vcpu_alloc_vmx(vm, &vmx_gva); + + if (vcpu_id == 0) { + perf_test_setup_ept(vmx, vm); + vmx0 = vmx; + } else { + /* Share the same EPT table across all vCPUs. */ + vmx->eptp = vmx0->eptp; + vmx->eptp_hva = vmx0->eptp_hva; + vmx->eptp_gpa = vmx0->eptp_gpa; + } + + /* + * Override the vCPU to run perf_test_l1_guest_code() which will + * bounce it into L2 before calling perf_test_guest_code(). + */ + vcpu_regs_get(vm, vcpu_id, ®s); + regs.rip = (unsigned long) perf_test_l1_guest_code; + vcpu_regs_set(vm, vcpu_id, ®s); + vcpu_args_set(vm, vcpu_id, 2, vmx_gva, vcpu_id); + } +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index 5bf169179455..b77a01d0a271 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -203,6 +203,11 @@ static bool ept_vpid_cap_supported(uint64_t mask) return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask; } +bool ept_1g_pages_supported(void) +{ + return ept_vpid_cap_supported(VMX_EPT_VPID_CAP_1G_PAGES); +} + /* * Initialize the control fields to the most basic settings possible. 
*/ @@ -439,6 +444,9 @@ void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use " "unknown or unsupported guest mode, mode: 0x%x", vm->mode); + TEST_ASSERT((nested_paddr >> 48) == 0, + "Nested physical address 0x%lx requires 5-level paging", + nested_paddr); TEST_ASSERT((nested_paddr % page_size) == 0, "Nested physical address not on page boundary,\n" " nested_paddr: 0x%lx page_size: 0x%lx", @@ -547,6 +555,13 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, } } +/* Identity map a region with 1GiB Pages. */ +void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t addr, uint64_t size) +{ + __nested_map(vmx, vm, addr, addr, size, PG_LEVEL_1G); +} + void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t eptp_memslot) {