From patchwork Tue May 17 19:05:15 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852896
Date: Tue, 17 May 2022 19:05:15 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-2-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 01/10] KVM: selftests: Replace x86_page_size with PG_LEVEL_XX
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

x86_page_size is an enum used to communicate the desired page size with which to map a range of memory.
Under the hood they just encode the desired level at which to map the page. This ends up being clunky in a few ways: - The name suggests it encodes the size of the page rather than the level. - In other places in x86_64/processor.c we just use a raw int to encode the level. Simplify this by adopting the kernel style of PG_LEVEL_XX enums and pass around raw ints when referring to the level. This makes the code easier to understand since these macros are very common in KVM MMU code. Signed-off-by: David Matlack Reviewed-by: Peter Xu --- .../selftests/kvm/include/x86_64/processor.h | 18 ++++++---- .../selftests/kvm/lib/x86_64/processor.c | 33 ++++++++++--------- .../selftests/kvm/max_guest_memory_test.c | 2 +- .../selftests/kvm/x86_64/mmu_role_test.c | 2 +- 4 files changed, 31 insertions(+), 24 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 37db341d4cc5..434a4f60f4d9 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -465,13 +465,19 @@ void vcpu_set_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpuid); void vm_xsave_req_perm(int bit); -enum x86_page_size { - X86_PAGE_SIZE_4K = 0, - X86_PAGE_SIZE_2M, - X86_PAGE_SIZE_1G, +enum pg_level { + PG_LEVEL_NONE, + PG_LEVEL_4K, + PG_LEVEL_2M, + PG_LEVEL_1G, + PG_LEVEL_512G, + PG_LEVEL_NUM }; -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - enum x86_page_size page_size); + +#define PG_LEVEL_SHIFT(_level) ((_level - 1) * 9 + 12) +#define PG_LEVEL_SIZE(_level) (1ull << PG_LEVEL_SHIFT(_level)) + +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level); /* * Basic CPU control in CR0 diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 9f000dfb5594..f733c5b02da5 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -190,7 +190,7 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t pt_pfn, uint64_t vaddr, int level) { uint64_t *page_table = addr_gpa2hva(vm, pt_pfn << vm->page_shift); - int index = vaddr >> (vm->page_shift + level * 9) & 0x1ffu; + int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu; return &page_table[index]; } @@ -199,15 +199,15 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm, uint64_t pt_pfn, uint64_t vaddr, uint64_t paddr, - int level, - enum x86_page_size page_size) + int current_level, + int target_level) { - struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level); + struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level); if (!pte->present) { pte->writable = true; pte->present = true; - pte->page_size = (level == page_size); + pte->page_size = (current_level == target_level); if (pte->page_size) pte->pfn = paddr >> vm->page_shift; else @@ -218,20 +218,19 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm, * a hugepage at this level, and that there isn't a hugepage at * this level. 
*/ - TEST_ASSERT(level != page_size, + TEST_ASSERT(current_level != target_level, "Cannot create hugepage at level: %u, vaddr: 0x%lx\n", - page_size, vaddr); + current_level, vaddr); TEST_ASSERT(!pte->page_size, "Cannot create page table at level: %u, vaddr: 0x%lx\n", - level, vaddr); + current_level, vaddr); } return pte; } -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - enum x86_page_size page_size) +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) { - const uint64_t pg_size = 1ull << ((page_size * 9) + 12); + const uint64_t pg_size = PG_LEVEL_SIZE(level); struct pageUpperEntry *pml4e, *pdpe, *pde; struct pageTableEntry *pte; @@ -256,20 +255,22 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, * early if a hugepage was created. */ pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift, - vaddr, paddr, 3, page_size); + vaddr, paddr, PG_LEVEL_512G, level); if (pml4e->page_size) return; - pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size); + pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, PG_LEVEL_1G, + level); if (pdpe->page_size) return; - pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size); + pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, PG_LEVEL_2M, + level); if (pde->page_size) return; /* Fill in page table entry. */ - pte = virt_get_pte(vm, pde->pfn, vaddr, 0); + pte = virt_get_pte(vm, pde->pfn, vaddr, PG_LEVEL_4K); TEST_ASSERT(!pte->present, "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr); pte->pfn = paddr >> vm->page_shift; @@ -279,7 +280,7 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { - __virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K); + __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); } static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c index 3875c4b23a04..15f046e19cb2 100644 --- a/tools/testing/selftests/kvm/max_guest_memory_test.c +++ b/tools/testing/selftests/kvm/max_guest_memory_test.c @@ -244,7 +244,7 @@ int main(int argc, char *argv[]) #ifdef __x86_64__ /* Identity map memory in the guest using 1gb pages. */ for (i = 0; i < slot_size; i += size_1gb) - __virt_pg_map(vm, gpa + i, gpa + i, X86_PAGE_SIZE_1G); + __virt_pg_map(vm, gpa + i, gpa + i, PG_LEVEL_1G); #else for (i = 0; i < slot_size; i += vm_get_page_size(vm)) virt_pg_map(vm, gpa + i, gpa + i); diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c index da2325fcad87..bdecd532f935 100644 --- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c +++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c @@ -35,7 +35,7 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val) run = vcpu_state(vm, VCPU_ID); /* Map 1gb page without a backing memlot. 
*/ - __virt_pg_map(vm, MMIO_GPA, MMIO_GPA, X86_PAGE_SIZE_1G); + __virt_pg_map(vm, MMIO_GPA, MMIO_GPA, PG_LEVEL_1G); r = _vcpu_run(vm, VCPU_ID);

From patchwork Tue May 17 19:05:16 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852897
Date: Tue, 17 May 2022 19:05:16 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-3-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 02/10] KVM: selftests: Add option to create 2M and 1G EPT mappings
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org The current EPT mapping code in the selftests only supports mapping 4K pages. This commit extends that support with an option to map at 2M or 1G. This will be used in a future commit to create large page mappings to test eager page splitting. No functional change intended. Signed-off-by: David Matlack Reviewed-by: Peter Xu --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 110 ++++++++++--------- 1 file changed, 60 insertions(+), 50 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index d089d8b850b5..fdc1e6deb922 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -392,80 +392,90 @@ void nested_vmx_check_supported(void) } } -void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, - uint64_t nested_paddr, uint64_t paddr) +static void nested_create_pte(struct kvm_vm *vm, + struct eptPageTableEntry *pte, + uint64_t nested_paddr, + uint64_t paddr, + int current_level, + int target_level) +{ + if (!pte->readable) { + pte->writable = true; + pte->readable = true; + pte->executable = true; + pte->page_size = (current_level == target_level); + if (pte->page_size) + pte->address = paddr >> vm->page_shift; + else + pte->address = vm_alloc_page_table(vm) >> vm->page_shift; + } else { + /* + * Entry already present. Assert that the caller doesn't want + * a hugepage at this level, and that there isn't a hugepage at + * this level. + */ + TEST_ASSERT(current_level != target_level, + "Cannot create hugepage at level: %u, nested_paddr: 0x%lx\n", + current_level, nested_paddr); + TEST_ASSERT(!pte->page_size, + "Cannot create page table at level: %u, nested_paddr: 0x%lx\n", + current_level, nested_paddr); + } +} + + +void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, int target_level) { - uint16_t index[4]; - struct eptPageTableEntry *pml4e; + const uint64_t page_size = PG_LEVEL_SIZE(target_level); + struct eptPageTableEntry *pt = vmx->eptp_hva, *pte; + uint16_t index; TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use " "unknown or unsupported guest mode, mode: 0x%x", vm->mode); - TEST_ASSERT((nested_paddr % vm->page_size) == 0, + TEST_ASSERT((nested_paddr % page_size) == 0, "Nested physical address not on page boundary,\n" - " nested_paddr: 0x%lx vm->page_size: 0x%x", - nested_paddr, vm->page_size); + " nested_paddr: 0x%lx page_size: 0x%lx", + nested_paddr, page_size); TEST_ASSERT((nested_paddr >> vm->page_shift) <= vm->max_gfn, "Physical address beyond beyond maximum supported,\n" " nested_paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x", paddr, vm->max_gfn, vm->page_size); - TEST_ASSERT((paddr % vm->page_size) == 0, + TEST_ASSERT((paddr % page_size) == 0, "Physical address not on page boundary,\n" - " paddr: 0x%lx vm->page_size: 0x%x", - paddr, vm->page_size); + " paddr: 0x%lx page_size: 0x%lx", + paddr, page_size); TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn, "Physical address beyond beyond maximum supported,\n" " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x", paddr, vm->max_gfn, vm->page_size); - index[0] = (nested_paddr >> 12) & 0x1ffu; - index[1] = (nested_paddr >> 21) & 0x1ffu; - index[2] = (nested_paddr >> 30) & 0x1ffu; - index[3] = (nested_paddr >> 39) & 0x1ffu; - - /* Allocate page directory pointer table if not present. 
*/ - pml4e = vmx->eptp_hva; - if (!pml4e[index[3]].readable) { - pml4e[index[3]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pml4e[index[3]].writable = true; - pml4e[index[3]].readable = true; - pml4e[index[3]].executable = true; - } + for (int level = PG_LEVEL_512G; level >= PG_LEVEL_4K; level--) { + index = (nested_paddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu; + pte = &pt[index]; - /* Allocate page directory table if not present. */ - struct eptPageTableEntry *pdpe; - pdpe = addr_gpa2hva(vm, pml4e[index[3]].address * vm->page_size); - if (!pdpe[index[2]].readable) { - pdpe[index[2]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pdpe[index[2]].writable = true; - pdpe[index[2]].readable = true; - pdpe[index[2]].executable = true; - } + nested_create_pte(vm, pte, nested_paddr, paddr, level, target_level); - /* Allocate page table if not present. */ - struct eptPageTableEntry *pde; - pde = addr_gpa2hva(vm, pdpe[index[2]].address * vm->page_size); - if (!pde[index[1]].readable) { - pde[index[1]].address = vm_alloc_page_table(vm) >> vm->page_shift; - pde[index[1]].writable = true; - pde[index[1]].readable = true; - pde[index[1]].executable = true; - } + if (pte->page_size) + break; - /* Fill in page table entry. */ - struct eptPageTableEntry *pte; - pte = addr_gpa2hva(vm, pde[index[1]].address * vm->page_size); - pte[index[0]].address = paddr >> vm->page_shift; - pte[index[0]].writable = true; - pte[index[0]].readable = true; - pte[index[0]].executable = true; + pt = addr_gpa2hva(vm, pte->address * vm->page_size); + } /* * For now mark these as accessed and dirty because the only * testcase we have needs that. Can be reconsidered later. */ - pte[index[0]].accessed = true; - pte[index[0]].dirty = true; + pte->accessed = true; + pte->dirty = true; + +} + +void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr) +{ + __nested_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K); } /* From patchwork Tue May 17 19:05:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12852898 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 37CBBC433FE for ; Tue, 17 May 2022 19:05:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1352489AbiEQTFg (ORCPT ); Tue, 17 May 2022 15:05:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1352462AbiEQTFb (ORCPT ); Tue, 17 May 2022 15:05:31 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64DB93F30C for ; Tue, 17 May 2022 12:05:31 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id b21-20020a170902d41500b0015906c1ea31so3191735ple.20 for ; Tue, 17 May 2022 12:05:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Ecf/OxwjsoOUdz0RU+HgOiOcKGsVEGU/eMrvKCCxZII=; b=KU0xRRVFNiKlqWOa2Z9OuzHQAxZ+8ougvy9otCMxFKCQgs2+ihsYR1zLJpR7Lu9XjH 5xkvGRcszeJanksk/tMrzehQ5tU8Xijzo7GLSMi9Jb/fszBRKGuSzZBJunRC7hOw0Bll 
Date: Tue, 17 May 2022 19:05:17 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-4-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 03/10] KVM: selftests: Drop stale function parameter comment for nested_map()
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

nested_map() does not take a parameter named eptp_memslot. Drop the comment referring to it.
Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 1 - 1 file changed, 1 deletion(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index fdc1e6deb922..baeaa35de113 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -486,7 +486,6 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, * nested_paddr - Nested guest physical address to map * paddr - VM Physical Address * size - The size of the range to map - * eptp_memslot - Memory region slot for new virtual translation tables * * Output Args: None *

From patchwork Tue May 17 19:05:18 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852899
Date: Tue, 17 May 2022 19:05:18 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-5-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 04/10] KVM: selftests: Refactor nested_map() to specify target level
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Refactor nested_map() to specify that it explicitly wants 4K mappings (the existing behavior) and push the implementation down into __nested_map(), which can be used in subsequent commits to create huge page mappings. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index baeaa35de113..b8cfe4914a3a 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -486,6 +486,7 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, * nested_paddr - Nested guest physical address to map * paddr - VM Physical Address * size - The size of the range to map + * level - The level at which to map the range * * Output Args: None * @@ -494,22 +495,29 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, * Within the VM given by vm, creates a nested guest translation for the * page range starting at nested_paddr to the page range starting at paddr. */ -void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, - uint64_t nested_paddr, uint64_t paddr, uint64_t size) +void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, uint64_t size, + int level) { - size_t page_size = vm->page_size; + size_t page_size = PG_LEVEL_SIZE(level); size_t npages = size / page_size; TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow"); TEST_ASSERT(paddr + size > paddr, "Paddr overflow"); while (npages--) { - nested_pg_map(vmx, vm, nested_paddr, paddr); + __nested_pg_map(vmx, vm, nested_paddr, paddr, level); nested_paddr += page_size; paddr += page_size; } } +void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, + uint64_t nested_paddr, uint64_t paddr, uint64_t size) +{ + __nested_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K); +} + /* Prepare an identity extended page table that maps all the * physical pages in VM.
*/

From patchwork Tue May 17 19:05:19 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852900
Date: Tue, 17 May 2022 19:05:19 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-6-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 05/10] KVM: selftests: Move VMX_EPT_VPID_CAP_AD_BITS to vmx.h
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

This is a VMX-related macro so move it to vmx.h.
While here, open code the mask like the rest of the VMX bitmask macros. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/include/x86_64/processor.h | 3 --- tools/testing/selftests/kvm/include/x86_64/vmx.h | 2 ++ 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 434a4f60f4d9..04f1d540bcb2 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -494,9 +494,6 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) #define X86_CR0_CD (1UL<<30) /* Cache Disable */ #define X86_CR0_PG (1UL<<31) /* Paging */ -/* VMX_EPT_VPID_CAP bits */ -#define VMX_EPT_VPID_CAP_AD_BITS (1ULL << 21) - #define XSTATE_XTILE_CFG_BIT 17 #define XSTATE_XTILE_DATA_BIT 18 diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 583ceb0d1457..3b1794baa97c 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -96,6 +96,8 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 +#define VMX_EPT_VPID_CAP_AD_BITS 0x00200000 + #define EXIT_REASON_FAILED_VMENTRY 0x80000000 #define EXIT_REASON_EXCEPTION_NMI 0 #define EXIT_REASON_EXTERNAL_INTERRUPT 1

From patchwork Tue May 17 19:05:20 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852901
Date: Tue, 17 May 2022 19:05:20 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-7-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 06/10] KVM: selftests: Add a helper to check EPT/VPID capabilities
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Create a small helper function to check if a given EPT/VPID capability is supported. This will be re-used in a follow-up commit to check for 1G page support. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/lib/x86_64/vmx.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index b8cfe4914a3a..5bf169179455 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -198,6 +198,11 @@ bool load_vmcs(struct vmx_pages *vmx) return true; } +static bool ept_vpid_cap_supported(uint64_t mask) +{ + return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask; +} + /* * Initialize the control fields to the most basic settings possible.
*/ @@ -215,7 +220,7 @@ static inline void init_vmcs_control_fields(struct vmx_pages *vmx) struct eptPageTablePointer eptp = { .memory_type = VMX_BASIC_MEM_TYPE_WB, .page_walk_length = 3, /* + 1 */ - .ad_enabled = !!(rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & VMX_EPT_VPID_CAP_AD_BITS), + .ad_enabled = ept_vpid_cap_supported(VMX_EPT_VPID_CAP_AD_BITS), .address = vmx->eptp_gpa >> PAGE_SHIFT_4K, };

From patchwork Tue May 17 19:05:21 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852902
Date: Tue, 17 May 2022 19:05:21 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-8-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 07/10] KVM: selftests: Link selftests
directly with lib object files From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The linker does obey strong/weak symbols when linking static libraries, it simply resolves an undefined symbol to the first-encountered symbol. This means that defining __weak arch-generic functions and then defining arch-specific strong functions to override them in libkvm will not always work. More specifically, if we have: lib/generic.c: void __weak foo(void) { pr_info("weak\n"); } void bar(void) { foo(); } lib/x86_64/arch.c: void foo(void) { pr_info("strong\n"); } And a selftest that calls bar(), it will print "weak". Now if you make generic.o explicitly depend on arch.o (e.g. add function to arch.c that is called directly from generic.c) it will print "strong". In other words, it seems that the linker is free to throw out arch.o when linking because generic.o does not explicitly depend on it, which causes the linker to lose the strong symbol. One solution is to link libkvm.a with --whole-archive so that the linker doesn't throw away object files it thinks are unnecessary. However that is a bit difficult to plumb since we are using the common selftests makefile rules. An easier solution is to drop libkvm.a just link selftests with all the .o files that were originally in libkvm.a. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 8c3db2f75315..cd7a9df4ad6d 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -173,12 +173,13 @@ LDFLAGS += -pthread $(no-pie-option) $(pgste-option) # $(TEST_GEN_PROGS) starts with $(OUTPUT)/ include ../lib.mk -STATIC_LIBS := $(OUTPUT)/libkvm.a LIBKVM_C := $(filter %.c,$(LIBKVM)) LIBKVM_S := $(filter %.S,$(LIBKVM)) LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C)) LIBKVM_S_OBJ := $(patsubst %.S, $(OUTPUT)/%.o, $(LIBKVM_S)) -EXTRA_CLEAN += $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(STATIC_LIBS) cscope.* +LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) + +EXTRA_CLEAN += $(LIBKVM_OBJS) cscope.* x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)))) $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c @@ -187,13 +188,9 @@ $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ -LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) -$(OUTPUT)/libkvm.a: $(LIBKVM_OBJS) - $(AR) crs $@ $^ - x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) -all: $(STATIC_LIBS) -$(TEST_GEN_PROGS): $(STATIC_LIBS) +all: $(LIBKVM_OBJS) +$(TEST_GEN_PROGS): $(LIBKVM_OBJS) cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib .. 
cscope:

From patchwork Tue May 17 19:05:22 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852903
Date: Tue, 17 May 2022 19:05:22 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-9-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 08/10] KVM: selftests: Drop unnecessary rule for $(LIBKVM_OBJS)
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Drop the "all: $(LIBKVM_OBJS)" rule.
The KVM selftests already depend on $(LIBKVM_OBJS), so there is no reason to have this rule. Suggested-by: Peter Xu Signed-off-by: David Matlack Reviewed-by: Peter Xu --- tools/testing/selftests/kvm/Makefile | 1 - 1 file changed, 1 deletion(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index cd7a9df4ad6d..0889fc17baa5 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -189,7 +189,6 @@ $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) -all: $(LIBKVM_OBJS) $(TEST_GEN_PROGS): $(LIBKVM_OBJS) cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib ..

From patchwork Tue May 17 19:05:23 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12852904
12:05:40 -0700 (PDT) Date: Tue, 17 May 2022 19:05:23 +0000 In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com> Message-Id: <20220517190524.2202762-10-dmatlack@google.com> Mime-Version: 1.0 References: <20220517190524.2202762-1-dmatlack@google.com> X-Mailer: git-send-email 2.36.0.550.gb090851708-goog Subject: [PATCH v2 09/10] KVM: selftests: Clean up LIBKVM files in Makefile From: David Matlack To: Paolo Bonzini Cc: Ben Gardon , Sean Christopherson , Oliver Upton , Peter Xu , Vitaly Kuznetsov , Andrew Jones , "open list:KERNEL VIRTUAL MACHINE (KVM)" , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Break up the long lines for LIBKVM and alphabetize each architecture. This makes reading the Makefile easier, and will make reading diffs to LIBKVM easier. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 36 ++++++++++++++++++++++++---- 1 file changed, 31 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 0889fc17baa5..83b9ffa456ea 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -37,11 +37,37 @@ ifeq ($(ARCH),riscv) UNAME_M := riscv endif -LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c -LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S -LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c -LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c -LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c +LIBKVM += lib/assert.c +LIBKVM += lib/elf.c +LIBKVM += lib/guest_modes.c +LIBKVM += lib/io.c +LIBKVM += lib/kvm_util.c +LIBKVM += lib/perf_test_util.c +LIBKVM += lib/rbtree.c +LIBKVM += lib/sparsebit.c +LIBKVM += lib/test_util.c + +LIBKVM_x86_64 += lib/x86_64/apic.c +LIBKVM_x86_64 += lib/x86_64/handlers.S +LIBKVM_x86_64 += lib/x86_64/processor.c +LIBKVM_x86_64 += lib/x86_64/svm.c +LIBKVM_x86_64 += lib/x86_64/ucall.c +LIBKVM_x86_64 += lib/x86_64/vmx.c + +LIBKVM_aarch64 += lib/aarch64/gic.c +LIBKVM_aarch64 += lib/aarch64/gic_v3.c +LIBKVM_aarch64 += lib/aarch64/handlers.S +LIBKVM_aarch64 += lib/aarch64/processor.c +LIBKVM_aarch64 += lib/aarch64/spinlock.c +LIBKVM_aarch64 += lib/aarch64/ucall.c +LIBKVM_aarch64 += lib/aarch64/vgic.c + +LIBKVM_s390x += lib/s390x/diag318_test_handler.c +LIBKVM_s390x += lib/s390x/processor.c +LIBKVM_s390x += lib/s390x/ucall.c + +LIBKVM_riscv += lib/riscv/processor.c +LIBKVM_riscv += lib/riscv/ucall.c TEST_GEN_PROGS_x86_64 = x86_64/cpuid_test TEST_GEN_PROGS_x86_64 += x86_64/cr4_cpuid_sync_test From patchwork Tue May 17 19:05:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12852905 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D38EBC433FE for ; Tue, 17 May 2022 19:05:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1351442AbiEQTFr (ORCPT ); Tue, 17 May 2022 15:05:47 -0400 Received: from 
Date: Tue, 17 May 2022 19:05:24 +0000
In-Reply-To: <20220517190524.2202762-1-dmatlack@google.com>
Message-Id: <20220517190524.2202762-11-dmatlack@google.com>
References: <20220517190524.2202762-1-dmatlack@google.com>
Subject: [PATCH v2 10/10] KVM: selftests: Add option to run dirty_log_perf_test vCPUs in L2
From: David Matlack
To: Paolo Bonzini
Cc: Ben Gardon, Sean Christopherson, Oliver Upton, Peter Xu, Vitaly Kuznetsov, Andrew Jones, "open list:KERNEL VIRTUAL MACHINE (KVM)", David Matlack
X-Mailing-List: kvm@vger.kernel.org

Add an option to dirty_log_perf_test that configures the vCPUs to run in L2 instead of L1. This makes it possible to benchmark the dirty logging performance of nested virtualization, which is particularly interesting because KVM must shadow L1's EPT/NPT tables. For now this support only works on x86_64 CPUs with VMX. Otherwise passing -n results in the test being skipped.
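As a usage sketch (a hypothetical invocation; -v and -b are existing dirty_log_perf_test flags shown in the help text in the diff below, and 8 vCPUs with 1G per vCPU are arbitrary example values):

  ./dirty_log_perf_test -v 8 -b 1G -n

This runs the usual dirty logging benchmark, except that each vCPU executes its guest code from L2, so its memory accesses are translated through the nested EPT set up by the test.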
Signed-off-by: David Matlack --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/dirty_log_perf_test.c | 10 +- .../selftests/kvm/include/perf_test_util.h | 7 ++ .../selftests/kvm/include/x86_64/vmx.h | 3 + .../selftests/kvm/lib/perf_test_util.c | 29 +++++- .../selftests/kvm/lib/x86_64/perf_test_util.c | 98 +++++++++++++++++++ tools/testing/selftests/kvm/lib/x86_64/vmx.c | 13 +++ 7 files changed, 154 insertions(+), 7 deletions(-) create mode 100644 tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 83b9ffa456ea..42cb904f6e54 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -49,6 +49,7 @@ LIBKVM += lib/test_util.c LIBKVM_x86_64 += lib/x86_64/apic.c LIBKVM_x86_64 += lib/x86_64/handlers.S +LIBKVM_x86_64 += lib/x86_64/perf_test_util.c LIBKVM_x86_64 += lib/x86_64/processor.c LIBKVM_x86_64 += lib/x86_64/svm.c LIBKVM_x86_64 += lib/x86_64/ucall.c diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c index 7b47ae4f952e..d60a34cdfaee 100644 --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c @@ -336,8 +336,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) static void help(char *name) { puts(""); - printf("usage: %s [-h] [-i iterations] [-p offset] [-g]" - "[-m mode] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]" + printf("usage: %s [-h] [-i iterations] [-p offset] [-g] " + "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-s mem type]" "[-x memslots]\n", name); puts(""); printf(" -i: specify iteration counts (default: %"PRIu64")\n", @@ -351,6 +351,7 @@ static void help(char *name) printf(" -p: specify guest physical test memory offset\n" " Warning: a low offset can conflict with the loaded test code.\n"); guest_modes_help(); + printf(" -n: Run the vCPUs in nested mode (L2)\n"); printf(" -b: specify the size of the memory region which should be\n" " dirtied by each vCPU. e.g. 10M or 3G.\n" " (default: 1G)\n"); @@ -387,7 +388,7 @@ int main(int argc, char *argv[]) guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ghi:p:m:b:f:v:os:x:")) != -1) { + while ((opt = getopt(argc, argv, "ghi:p:m:nb:f:v:os:x:")) != -1) { switch (opt) { case 'g': dirty_log_manual_caps = 0; @@ -401,6 +402,9 @@ int main(int argc, char *argv[]) case 'm': guest_modes_cmdline(optarg); break; + case 'n': + perf_test_args.nested = true; + break; case 'b': guest_percpu_mem_size = parse_size(optarg); break; diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h index a86f953d8d36..b6c1770ab831 100644 --- a/tools/testing/selftests/kvm/include/perf_test_util.h +++ b/tools/testing/selftests/kvm/include/perf_test_util.h @@ -34,6 +34,9 @@ struct perf_test_args { uint64_t guest_page_size; int wr_fract; + /* Run vCPUs in L2 instead of L1, if the architecture supports it. 
*/ + bool nested; + struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS]; }; @@ -49,5 +52,9 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract); void perf_test_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct perf_test_vcpu_args *)); void perf_test_join_vcpu_threads(int vcpus); +void perf_test_guest_code(uint32_t vcpu_id); + +uint64_t perf_test_nested_pages(int nr_vcpus); +void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus); #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */ diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h index 3b1794baa97c..17d712503a36 100644 --- a/tools/testing/selftests/kvm/include/x86_64/vmx.h +++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h @@ -96,6 +96,7 @@ #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK 0x0000001f #define VMX_MISC_SAVE_EFER_LMA 0x00000020 +#define VMX_EPT_VPID_CAP_1G_PAGES 0x00020000 #define VMX_EPT_VPID_CAP_AD_BITS 0x00200000 #define EXIT_REASON_FAILED_VMENTRY 0x80000000 @@ -608,6 +609,7 @@ bool load_vmcs(struct vmx_pages *vmx); bool nested_vmx_supported(void); void nested_vmx_check_supported(void); +bool ept_1g_pages_supported(void); void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr); @@ -615,6 +617,7 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm, uint64_t nested_paddr, uint64_t paddr, uint64_t size); void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t memslot); +void nested_map_all_1g(struct vmx_pages *vmx, struct kvm_vm *vm); void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t eptp_memslot); void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm); diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c index 722df3a28791..530be01706d5 100644 --- a/tools/testing/selftests/kvm/lib/perf_test_util.c +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c @@ -40,7 +40,7 @@ static bool all_vcpu_threads_running; * Continuously write to the first 8 bytes of each page in the * specified region. */ -static void guest_code(uint32_t vcpu_id) +void perf_test_guest_code(uint32_t vcpu_id) { struct perf_test_args *pta = &perf_test_args; struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_id]; @@ -108,7 +108,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, { struct perf_test_args *pta = &perf_test_args; struct kvm_vm *vm; - uint64_t guest_num_pages; + uint64_t guest_num_pages, slot0_pages = DEFAULT_GUEST_PHY_PAGES; uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src); int i; @@ -134,13 +134,20 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, "Guest memory cannot be evenly divided into %d slots.", slots); + /* + * If using nested, allocate extra pages for the nested page tables and + * in-memory data structures. + */ + if (pta->nested) + slot0_pages += perf_test_nested_pages(vcpus); + /* * Pass guest_num_pages to populate the page tables for test memory. * The memory is also added to memslot 0, but that's a benign side * effect as KVM allows aliasing HVAs in meslots. 
*/ - vm = vm_create_with_vcpus(mode, vcpus, DEFAULT_GUEST_PHY_PAGES, - guest_num_pages, 0, guest_code, NULL); + vm = vm_create_with_vcpus(mode, vcpus, slot0_pages, guest_num_pages, 0, + perf_test_guest_code, NULL); pta->vm = vm; @@ -178,6 +185,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus, perf_test_setup_vcpus(vm, vcpus, vcpu_memory_bytes, partition_vcpu_memory_access); + if (pta->nested) + perf_test_setup_nested(vm, vcpus); + ucall_init(vm, NULL); /* Export the shared variables to the guest. */ @@ -198,6 +208,17 @@ void perf_test_set_wr_fract(struct kvm_vm *vm, int wr_fract) sync_global_to_guest(vm, perf_test_args); } +uint64_t __weak perf_test_nested_pages(int nr_vcpus) +{ + return 0; +} + +void __weak perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus) +{ + pr_info("%s() not support on this architecture, skipping.\n", __func__); + exit(KSFT_SKIP); +} + static void *vcpu_thread_main(void *data) { struct vcpu_thread *vcpu = data; diff --git a/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c new file mode 100644 index 000000000000..472e7d5a182b --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/perf_test_util.c @@ -0,0 +1,98 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * x86_64-specific extensions to perf_test_util.c. + * + * Copyright (C) 2022, Google, Inc. + */ +#include +#include +#include +#include + +#include "test_util.h" +#include "kvm_util.h" +#include "perf_test_util.h" +#include "../kvm_util_internal.h" +#include "processor.h" +#include "vmx.h" + +void perf_test_l2_guest_code(uint64_t vcpu_id) +{ + perf_test_guest_code(vcpu_id); + vmcall(); +} + +extern char perf_test_l2_guest_entry[]; +__asm__( +"perf_test_l2_guest_entry:" +" mov (%rsp), %rdi;" +" call perf_test_l2_guest_code;" +" ud2;" +); + +static void perf_test_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id) +{ +#define L2_GUEST_STACK_SIZE 64 + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; + unsigned long *rsp; + + GUEST_ASSERT(vmx->vmcs_gpa); + GUEST_ASSERT(prepare_for_vmx_operation(vmx)); + GUEST_ASSERT(load_vmcs(vmx)); + GUEST_ASSERT(ept_1g_pages_supported()); + + rsp = &l2_guest_stack[L2_GUEST_STACK_SIZE - 1]; + *rsp = vcpu_id; + prepare_vmcs(vmx, perf_test_l2_guest_entry, rsp); + + GUEST_ASSERT(!vmlaunch()); + GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL); + GUEST_DONE(); +} + +uint64_t perf_test_nested_pages(int nr_vcpus) +{ + /* + * 513 page tables to identity-map the L2 with 1G pages, plus a few + * pages per-vCPU for data structures such as the VMCS. + */ + return 513 + 10 * nr_vcpus; +} + +void perf_test_setup_nested(struct kvm_vm *vm, int nr_vcpus) +{ + struct vmx_pages *vmx, *vmx0 = NULL; + struct kvm_regs regs; + vm_vaddr_t vmx_gva; + int vcpu_id; + + nested_vmx_check_supported(); + + for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) { + vmx = vcpu_alloc_vmx(vm, &vmx_gva); + + if (vcpu_id == 0) { + prepare_eptp(vmx, vm, 0); + /* + * Identity map L2 with 1G pages so that KVM can shadow + * the EPT12 with huge pages. + */ + nested_map_all_1g(vmx, vm); + vmx0 = vmx; + } else { + /* Share the same EPT table across all vCPUs. */ + vmx->eptp = vmx0->eptp; + vmx->eptp_hva = vmx0->eptp_hva; + vmx->eptp_gpa = vmx0->eptp_gpa; + } + + /* + * Override the vCPU to run perf_test_l1_guest_code() which will + * bounce it into L2 before calling perf_test_guest_code(). 
+ */ + vcpu_regs_get(vm, vcpu_id, &regs); + regs.rip = (unsigned long) perf_test_l1_guest_code; + vcpu_regs_set(vm, vcpu_id, &regs); + vcpu_args_set(vm, vcpu_id, 2, vmx_gva, vcpu_id); + } +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c index 5bf169179455..9858e56370cb 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c @@ -203,6 +203,11 @@ static bool ept_vpid_cap_supported(uint64_t mask) return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask; } +bool ept_1g_pages_supported(void) +{ + return ept_vpid_cap_supported(VMX_EPT_VPID_CAP_1G_PAGES); +} + /* * Initialize the control fields to the most basic settings possible. */ @@ -547,6 +552,14 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm, } } +/* Identity map the entire guest physical address space with 1GiB Pages. */ +void nested_map_all_1g(struct vmx_pages *vmx, struct kvm_vm *vm) +{ + uint64_t gpa_size = (vm->max_gfn + 1) << vm->page_shift; + + __nested_map(vmx, vm, 0, 0, gpa_size, PG_LEVEL_1G); +} + void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm, uint32_t eptp_memslot) {
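/*
 * Reader's note, not part of the patch: the "513 + 10 * nr_vcpus"
 * returned by perf_test_nested_pages() above follows from identity
 * mapping the guest with 1GiB EPT entries: one EPT PML4 page whose
 * 512 entries each point to a page-directory-pointer page holding
 * 512 1GiB leaf mappings, i.e. 1 + 512 = 513 page-table pages, plus
 * a per-vCPU allowance (the 10) for the VMCS and the other in-memory
 * data structures mentioned in the patch's own comment. This
 * breakdown is an editorial sketch of the arithmetic, not text from
 * the series.
 */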