From patchwork Tue May 24 20:56:44 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12860535
Date: Tue, 24 May 2022 20:56:44 +0000
In-Reply-To: <20220524205646.1798325-1-vannapurve@google.com>
Message-Id: <20220524205646.1798325-2-vannapurve@google.com>
References: <20220524205646.1798325-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 1/3] selftests: kvm: x86_64: Add support for pagetable tracking
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com, bgardon@google.com, Vishal Annapurve

Add support for mapping guest pagetable pages into a contiguous guest
virtual address range and for sharing the physical-to-virtual mappings
with the guest in a pre-defined format. This functionality allows
guests to modify their own page table entries. One such use case for
CC VMs is toggling the encryption bit in their PTEs to switch memory
between encrypted and shared, and vice versa.

Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/kvm_util_base.h     | 98 +++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 81 ++++++++++++++-
 .../selftests/kvm/lib/kvm_util_internal.h     |  9 ++
 .../selftests/kvm/lib/x86_64/processor.c      | 36 +++++++
 4 files changed, 223 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 7516eb032cbb..68f4bdc88a0f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -88,6 +88,23 @@ struct vm_guest_mode_params {
 	unsigned int page_size;
 	unsigned int page_shift;
 };
+
+/*
+ * Structure shared with the guest containing information about:
+ * - the starting virtual address for the num_pgt_pages physical pagetable
+ *   page addresses tracked via the paddrs array
+ * - the page size of the guest
+ *
+ * The guest can walk its pagetables using this information to
+ * read/modify pagetable attributes.
+ */
+struct guest_pgt_info {
+	uint64_t num_pgt_pages;
+	uint64_t pgt_vaddr_start;
+	uint64_t page_size;
+	uint64_t paddrs[];
+};
+
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
@@ -156,6 +173,50 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
+/*
+ * Function called by guest code to translate the physical address of a
+ * pagetable page to a guest virtual address.
+ *
+ * input args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing info
+ *               about guest virtual address mappings for guest physical
+ *               addresses of pagetable pages.
+ *   pgt_pa - physical address of the guest pagetable page to be translated
+ *            to a virtual address.
+ *
+ * output args: none
+ *
+ * return:
+ *   Pointer to the pagetable page; NULL in case the physical address
+ *   is not tracked by the given guest_pgt_info structure.
+ */ +void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, + uint64_t pgt_pa); + +/* + * Allocate and setup a page to be shared with guest containing + * guest_pgt_info structure. + * + * Note: + * 1) vm_set_pgt_alloc_tracking function should be used to + * start tracking of physical page table page allocation. + * 2) This function should be invoked after needed pagetable + * pages are mapped to the VM using virt_pg_map. + * + * input args: + * vm - virtual machine + * vaddr_min - Minimum guest virtual address to start mapping + * the guest_pgt_info structure page(s). + * + * output args: none + * + * return: + * virtual address mapping guest_pgt_info structure. + */ +vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min); + vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm); @@ -291,10 +352,47 @@ void virt_pgd_alloc(struct kvm_vm *vm); */ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr); +#ifdef __x86_64__ +/* + * Guest called function to get a pointer to pte corresponding to a given + * guest virtual address and pointer to the guest_pgt_info structure. + * + * input args: + * gpgt_info - pointer to guest_pgt_info structure containing + * information about guest virtual addresses mapped to pagetable + * physical addresses. + * vaddr - guest virtual address + * + * output args: none + * + * return: + * pointer to the pte corresponding to guest virtual address, + * Null if pte is not found + */ +uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, + uint64_t vaddr); +#endif + vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, uint32_t memslot); vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min, uint32_t memslot); + +/* + * Enable tracking of physical guest pagetable pages for the given vm. + * This function should be called right after vm creation before any pages are + * mapped into the VM using vm_alloc_* / vm_vaddr_alloc* functions. + * + * input args: + * vm - virtual machine + * + * output args: none + * + * return: + * None + */ +void vm_set_pgt_alloc_tracking(struct kvm_vm *vm); + vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm); /* diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 5a6d52d77cc6..7781c8a0efe9 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -264,6 +264,7 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) TEST_ASSERT(vm != NULL, "Insufficient Memory"); INIT_LIST_HEAD(&vm->vcpus); + INIT_LIST_HEAD(&vm->pgt_pages); vm->regions.gpa_tree = RB_ROOT; vm->regions.hva_tree = RB_ROOT; hash_init(vm->regions.slot_hash); @@ -700,6 +701,7 @@ void kvm_vm_free(struct kvm_vm *vmp) { int ctr; struct hlist_node *node; + struct pgt_page *entry, *nentry; struct userspace_mem_region *region; if (vmp == NULL) @@ -709,6 +711,9 @@ void kvm_vm_free(struct kvm_vm *vmp) hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node) __vm_mem_region_delete(vmp, region, false); + list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list) + free(entry); + /* Free sparsebit arrays. 
*/ sparsebit_free(&vmp->vpages_valid); sparsebit_free(&vmp->vpages_mapped); @@ -1325,9 +1330,25 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, /* Arbitrary minimum physical address used for virtual translation tables. */ #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000 +void vm_set_pgt_alloc_tracking(struct kvm_vm *vm) +{ + vm->track_pgt_pages = true; +} + vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm) { - return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + struct pgt_page *pgt; + vm_paddr_t paddr = vm_phy_page_alloc(vm, + KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + + if (vm->track_pgt_pages) { + pgt = calloc(1, sizeof(*pgt)); + TEST_ASSERT(pgt != NULL, "Insufficient memory"); + pgt->paddr = addr_gpa2raw(vm, paddr); + list_add(&pgt->list, &vm->pgt_pages); + vm->num_pgt_pages++; + } + return paddr; } /* @@ -1416,6 +1437,27 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, return pgidx_start * vm->page_size; } +void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min) +{ + struct pgt_page *pgt_page_entry; + vm_vaddr_t vaddr; + + /* Stop tracking further pgt pages, mapping pagetable may itself need + * new pages. + */ + vm->track_pgt_pages = false; + vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, + vm->num_pgt_pages * vm->page_size, vaddr_min); + vaddr = vaddr_start; + list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) { + /* Map the virtual page. */ + virt_pg_map(vm, vaddr, addr_raw2gpa(vm, pgt_page_entry->paddr)); + sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift); + vaddr += vm->page_size; + } + vm->pgt_vaddr_start = vaddr_start; +} + /* * VM Virtual Address Allocate Shared/Encrypted * @@ -1475,6 +1517,43 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_ return _vm_vaddr_alloc(vm, sz, vaddr_min, false); } +void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, + uint64_t pgt_pa) +{ + uint64_t num_pgt_pages = gpgt_info->num_pgt_pages; + uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start; + uint64_t page_size = gpgt_info->page_size; + + for (uint32_t i = 0; i < num_pgt_pages; i++) { + if (gpgt_info->paddrs[i] == pgt_pa) + return (void *)(pgt_vaddr_start + i * page_size); + } + return NULL; +} + +vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min) +{ + struct pgt_page *pgt_page_entry; + struct guest_pgt_info *gpgt_info; + uint64_t info_size = sizeof(*gpgt_info) + + (sizeof(uint64_t) * vm->num_pgt_pages); + uint64_t num_pages = align_up(info_size, vm->page_size); + vm_vaddr_t buf_start = vm_vaddr_alloc_shared(vm, num_pages, vaddr_min); + uint32_t i = 0; + + gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start); + gpgt_info->num_pgt_pages = vm->num_pgt_pages; + gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start; + gpgt_info->page_size = vm->page_size; + list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) { + gpgt_info->paddrs[i] = pgt_page_entry->paddr; + i++; + } + TEST_ASSERT((i == vm->num_pgt_pages), + "pgt entries mismatch with the counter"); + return buf_start; +} + /* * VM Virtual Address Allocate Pages * diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h index 99ccab86115c..91792a5272e0 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h +++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h @@ -53,6 +53,11 @@ struct vm_memcrypt { int8_t enc_bit; }; +struct pgt_page { + vm_paddr_t paddr; + struct list_head list; +}; + struct kvm_vm { int 
mode;
 	unsigned long type;
@@ -77,6 +82,10 @@ struct kvm_vm {
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 	struct vm_memcrypt memcrypt;
+	struct list_head pgt_pages;
+	bool track_pgt_pages;
+	uint32_t num_pgt_pages;
+	vm_vaddr_t pgt_vaddr_start;
 };
 
 struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index c71061361abb..ff054be31eed 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -284,6 +284,42 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
 }
 
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info,
+		uint64_t vaddr)
+{
+	uint16_t index[4];
+	struct pageUpperEntry *pml4e, *pdpe, *pde;
+	struct pageTableEntry *pte;
+	uint64_t pgt_paddr = get_cr3();
+	uint64_t page_size = gpgt_info->page_size;
+
+	index[0] = (vaddr >> 12) & 0x1ffu;
+	index[1] = (vaddr >> 21) & 0x1ffu;
+	index[2] = (vaddr >> 30) & 0x1ffu;
+	index[3] = (vaddr >> 39) & 0x1ffu;
+
+	pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	if (!pml4e || !pml4e[index[3]].present)
+		return NULL;
+
+	pgt_paddr = (pml4e[index[3]].pfn * page_size);
+	pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	if (!pdpe || !pdpe[index[2]].present || pdpe[index[2]].page_size)
+		return NULL;
+
+	pgt_paddr = (pdpe[index[2]].pfn * page_size);
+	pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	if (!pde || !pde[index[1]].present || pde[index[1]].page_size)
+		return NULL;
+
+	pgt_paddr = (pde[index[1]].pfn * page_size);
+	pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	if (!pte || !pte[index[0]].present)
+		return NULL;
+
+	return (uint64_t *)&pte[index[0]];
+}
+
 static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr)
 {
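Taken together, the new API is meant to be used in the following order (a minimal sketch based on the header comments above; example_setup_pgt_tracking itself is illustrative and not part of the patch, and the TEST_MEM_GPA/GUEST_PGT_MIN_VADDR/TOTAL_PAGES constants are borrowed from the test in patch 3/3):

static void example_setup_pgt_tracking(void)
{
	/* Tracking must be enabled right after VM creation, before any
	 * pagetable pages are allocated.
	 */
	struct kvm_vm *vm = vm_create(VM_MODE_DEFAULT, TOTAL_PAGES, O_RDWR);

	vm_set_pgt_alloc_tracking(vm);

	/* Map guest memory; the pagetable pages backing these mappings
	 * are now recorded on vm->pgt_pages.
	 */
	virt_pg_map(vm, TEST_MEM_GPA, TEST_MEM_GPA);

	/* Map the tracked pagetable pages at a contiguous guest virtual
	 * address range, then share the phys-to-virt table (the returned
	 * guest_pgt_info address is passed to the guest) with the guest.
	 */
	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);
	vm_setup_pgt_info_buf(vm, GUEST_PGT_MIN_VADDR);

	/* Guest code can now resolve and edit its own PTEs via
	 * guest_code_get_pte()/guest_code_get_pgt_vaddr().
	 */
}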
From patchwork Tue May 24 20:56:45 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12860536
Date: Tue, 24 May 2022 20:56:45 +0000
In-Reply-To: <20220524205646.1798325-1-vannapurve@google.com>
Message-Id: <20220524205646.1798325-3-vannapurve@google.com>
References: <20220524205646.1798325-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 2/3] selftests: kvm: sev: Handle hypercall exit
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com, bgardon@google.com, Vishal Annapurve

Add support for forwarding hypercalls from the #VC handler to the VMM.
Under SEV-ES, a vmmcall instruction in the guest raises a #VC exception
with error code SVM_EXIT_VMMCALL; forward such exits to the hypervisor
via the GHCB protocol so that KVM hypercalls also work from SEV-ES
guests.
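For context, the guest-side caller that this handler services is the vmmcall helper added in patch 3/3; under SEV-ES the instruction traps to the #VC handler instead of exiting directly:

static inline uint64_t guest_hypercall(uint64_t nr, uint64_t a0, uint64_t a1,
		uint64_t a2, uint64_t a3)
{
	uint64_t r;

	/*
	 * On SEV-ES this raises #VC (error code SVM_EXIT_VMMCALL) and is
	 * forwarded to the hypervisor by handle_vc_vmmcall() below; on
	 * plain SEV it causes a direct VMEXIT.
	 */
	asm volatile("vmmcall"
		     : "=a"(r)
		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
	return r;
}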
Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/lib/x86_64/sev_exitlib.c | 39 ++++++++++++++++---
 1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev_exitlib.c b/tools/testing/selftests/kvm/lib/x86_64/sev_exitlib.c
index b3f7b0297e5b..749b9264f90d 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev_exitlib.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev_exitlib.c
@@ -199,6 +199,31 @@ static int handle_vc_cpuid(struct ghcb *ghcb, u64 ghcb_gpa, struct ex_regs *regs
 	return 0;
 }
 
+static int handle_vc_vmmcall(struct ghcb *ghcb, u64 ghcb_gpa, struct ex_regs *regs)
+{
+	int ret;
+
+	ghcb_set_rax(ghcb, regs->rax);
+	ghcb_set_rbx(ghcb, regs->rbx);
+	ghcb_set_rcx(ghcb, regs->rcx);
+	ghcb_set_rdx(ghcb, regs->rdx);
+	ghcb_set_rsi(ghcb, regs->rsi);
+	ghcb_set_cpl(ghcb, 0);
+
+	ret = sev_es_ghcb_hv_call(ghcb, ghcb_gpa, SVM_EXIT_VMMCALL);
+	if (ret)
+		return ret;
+
+	if (!ghcb_rax_is_valid(ghcb))
+		return 1;
+
+	regs->rax = ghcb->save.rax;
+
+	regs->rip += 3;
+
+	return 0;
+}
+
 static int handle_msr_vc_cpuid(struct ex_regs *regs)
 {
 	uint32_t fn = regs->rax & 0xFFFFFFFF;
@@ -239,11 +264,15 @@ static int handle_msr_vc_cpuid(struct ex_regs *regs)
 
 int sev_es_handle_vc(void *ghcb, u64 ghcb_gpa, struct ex_regs *regs)
 {
-	if (regs->error_code != SVM_EXIT_CPUID)
-		return 1;
+	if (regs->error_code == SVM_EXIT_CPUID) {
+		if (!ghcb)
+			return handle_msr_vc_cpuid(regs);
+
+		return handle_vc_cpuid(ghcb, ghcb_gpa, regs);
+	}
 
-	if (!ghcb)
-		return handle_msr_vc_cpuid(regs);
+	if (regs->error_code == SVM_EXIT_VMMCALL)
+		return handle_vc_vmmcall(ghcb, ghcb_gpa, regs);
 
-	return handle_vc_cpuid(ghcb, ghcb_gpa, regs);
+	return 1;
 }
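On the host side, the forwarded hypercall surfaces as a KVM_EXIT_HYPERCALL userspace exit. A minimal sketch of the dispatch, modeled on handle_vm_exit_hypercall() in patch 3/3 (example_handle_hypercall_exit and its pr_info are illustrative only):

static void example_handle_hypercall_exit(struct kvm_run *run)
{
	uint64_t gpa, npages, attrs;

	TEST_ASSERT(run->hypercall.nr == KVM_HC_MAP_GPA_RANGE,
		    "Unhandled hypercall");

	gpa = run->hypercall.args[0];
	npages = run->hypercall.args[1];
	attrs = run->hypercall.args[2];

	/*
	 * Convert [gpa, gpa + (npages << MIN_PAGE_SHIFT)) between private
	 * and shared backing based on attrs; patch 3/3 does this by
	 * fallocate()-ing or hole-punching the private memfd.
	 */
	pr_info("MAP_GPA_RANGE gpa 0x%lx npages %ld attrs 0x%lx\n",
		gpa, npages, attrs);
}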
From patchwork Tue May 24 20:56:46 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12860537
Date: Tue, 24 May 2022 20:56:46 +0000
In-Reply-To: <20220524205646.1798325-1-vannapurve@google.com>
Message-Id: <20220524205646.1798325-4-vannapurve@google.com>
References: <20220524205646.1798325-1-vannapurve@google.com>
Subject: [RFC V1 PATCH 3/3] selftests: kvm: sev: Port UPM selftests onto SEV/SEV-ES VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com, ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com, bgardon@google.com, Vishal Annapurve

Port the UPM tests from the RFC v2 series posted at
https://lore.kernel.org/lkml/20220511000811.384766-1-vannapurve@google.com/T/
so that they execute with SEV/SEV-ES VMs.

Major changes from the original series:
1) SEV/SEV-ES VM creation logic is hooked into setup_and_execute_test.
2) A shared ucall struct is used for communication between the VMM and
   the guest.
3) C-bit toggle logic is added wherever memory conversion is needed
   (see the condensed sequence below).
4) Memory size is passed via vCPU registers instead of MMIO.
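The C-bit toggling in change 3 reduces to the sequence below, repeated by each guest test body in this patch; example_make_range_private is only a condensed illustration of the pattern, not a helper added by the patch:

static void example_make_range_private(struct ucall *uc,
		struct guest_pgt_info *gpgt_info, uint8_t enc_bit_shift,
		uint64_t mem_size)
{
	int ret;

	/* Set the C-bit in every guest PTE covering the range (flushes
	 * the TLB per page via invlpg).
	 */
	guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size,
			true, enc_bit_shift);

	/* Tell KVM that the range will from now on be accessed privately. */
	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
			mem_size >> MIN_PAGE_SHIFT,
			KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
}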
Example invocation on an AMD SEV/SEV-ES capable machine:

./sev_priv_memfd_test    -> Runs UPM selftests with SEV VMs
./sev_priv_memfd_test -e -> Runs UPM selftests with SEV-ES VMs

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/.gitignore        |    1 +
 tools/testing/selftests/kvm/Makefile          |    1 +
 .../kvm/x86_64/sev_priv_memfd_test.c          | 1511 +++++++++++++++++
 3 files changed, 1513 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_priv_memfd_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 8d4fda1ace8f..3cd4a6678663 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -56,6 +56,7 @@
 /x86_64/xss_msr_test
 /x86_64/vmx_pmu_msrs_test
 /x86_64/sev_all_boot_test
+/x86_64/sev_priv_memfd_test
 /access_tracking_perf_test
 /demand_paging_test
 /dirty_log_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 6d269c3159bf..1840f6a4c6f5 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -94,6 +94,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_priv_memfd_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_priv_memfd_test.c b/tools/testing/selftests/kvm/x86_64/sev_priv_memfd_test.c
new file mode 100644
index 000000000000..9255a6a3ce41
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_priv_memfd_test.c
@@ -0,0 +1,1511 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#define BYTE_MASK 0xFF
+
+// flags for mmap
+#define MAP_HUGE_2MB (21 << MAP_HUGE_SHIFT)
+#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)
+
+// page sizes
+#define PAGE_SIZE_4KB ((size_t)0x1000)
+#define PAGE_SIZE_2MB (PAGE_SIZE_4KB * (size_t)512)
+#define PAGE_SIZE_1GB ((PAGE_SIZE_4KB * 256) * 1024)
+
+#define TEST_MEM_GPA 0xb0000000
+#define TEST_MEM_DATA_PAT1 0x6666666666666666
+#define TEST_MEM_DATA_PAT2 0x9999999999999999
+#define TEST_MEM_DATA_PAT3 0x3333333333333333
+#define TEST_MEM_DATA_PAT4 0xaaaaaaaaaaaaaaaa
+
+#define TOTAL_PAGES (1024)
+#define GUEST_PGT_MIN_VADDR 0x10000
+
+enum mem_op {
+	SET_PAT,
+	VERIFY_PAT
+};
+
+#define TEST_MEM_SLOT 10
+
+#define VCPU_ID 0
+
+#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
+
+// Global storing the current memory allocation size for the running test
+static size_t test_mem_size;
+
+typedef bool (*vm_stage_handler_fn)(struct kvm_vm *, struct ucall *, void *,
+		uint64_t);
+
+static uint64_t g_ghcb_gpa;
+static void *g_ghcb_gva;
+
+/* A guest code function accepts six arguments:
+ * - pointer to the shared ucall struct
+ * - encryption bit shift
+ * - pointer to the guest_pgt_info struct
+ * - size of the test memory buffer
+ * - guest physical address of the ghcb page
+ * - guest virtual address of the ghcb page
+ */
+typedef void (*guest_code_fn)(struct ucall *, uint8_t,
+		struct guest_pgt_info *, uint64_t, uint64_t, void *);
+
+struct test_run_helper {
+	char *test_desc;
+	vm_stage_handler_fn vmst_handler;
guest_code_fn guest_fn; + void *shared_mem; + int priv_memfd; + bool disallow_boot_shared_access; + bool toggle_shared_mem_state; +}; + +enum page_size { + PAGE_4KB, + PAGE_2MB, + PAGE_1GB +}; + +struct page_combo { + enum page_size shared; + enum page_size private; +}; + +static char *page_size_to_str(enum page_size x) +{ + switch (x) { + case PAGE_4KB: + return "PAGE_4KB"; + case PAGE_2MB: + return "PAGE_2MB"; + case PAGE_1GB: + return "PAGE_1GB"; + default: + return "UNKNOWN"; + } +} + +static uint64_t test_mem_end(const uint64_t start, const uint64_t size) +{ + return start + size; +} + +/* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load + * which doesn't handle global offset table updates. Calling standard libc + * functions would normally result in referring to the global offset table. + * Adding O1 here seems to prohibit compiler from replacing the memory + * operations with standard libc functions such as memset. + */ +static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op, void *mem, + uint64_t pat, uint32_t size) +{ + uint64_t *buf = (uint64_t *)mem; + uint32_t chunk_size = sizeof(pat); + uint64_t mem_addr = (uint64_t)mem; + + if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0)) + return false; + + for (uint32_t i = 0; i < (size / chunk_size); i++) { + if (op == SET_PAT) + buf[i] = pat; + if (op == VERIFY_PAT) { + if (buf[i] != pat) + return false; + } + } + + return true; +} + +static inline uint64_t guest_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, + uint64_t a2, uint64_t a3) +{ + uint64_t r; + + asm volatile("vmmcall" + : "=a"(r) + : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3)); + return r; +} + +static void guest_set_clr_pte_bit(struct ucall *uc, + struct guest_pgt_info *gpgt_info, uint64_t vaddr_start, + uint64_t mem_size, bool set, uint32_t bit) +{ + uint64_t vaddr = vaddr_start; + uint32_t guest_page_size = gpgt_info->page_size; + uint32_t num_pages; + + GUEST_SHARED_ASSERT(uc, !(mem_size % guest_page_size)); + num_pages = mem_size / guest_page_size; + for (uint32_t i = 0; i < num_pages; i++) { + uint64_t *pte = guest_code_get_pte(gpgt_info, vaddr); + + GUEST_SHARED_ASSERT(uc, pte); + if (set) + *pte |= (1ULL << bit); + else + *pte &= ~(1ULL << bit); + asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory"); + vaddr += guest_page_size; + } +} + +static void guest_verify_sev_vm_boot(struct ucall *uc, bool sev_es) +{ + uint32_t eax, ebx, ecx, edx; + uint64_t sev_status; + + /* Check CPUID values via GHCB MSR protocol. */ + eax = 0x8000001f; + ecx = 0; + cpuid(&eax, &ebx, &ecx, &edx); + + /* Check SEV bit. */ + GUEST_SHARED_ASSERT(uc, eax & (1 << 1)); + + /* Check SEV-ES bit. */ + if (sev_es) + GUEST_SHARED_ASSERT(uc, eax & (1 << 3)); + + /* Check SEV and SEV-ES enabled bits (bits 0 and 1, respectively). */ + sev_status = rdmsr(MSR_AMD64_SEV); + GUEST_SHARED_ASSERT(uc, (sev_status & 0x1) == 1); + + if (sev_es) + GUEST_SHARED_ASSERT(uc, (sev_status & 0x2) == 2); +} + +/* Test to verify guest private accesses on private memory with following steps: + * 1) Upon entry, guest signals VMM that it has started. + * 2) VMM populates the shared memory with known pattern and continues guest + * execution. + * 3) Guest writes a different pattern on the private memory and signals VMM + * that it has updated private memory. + * 4) VMM verifies its shared memory contents to be same as the data populated + * in step 2 and continues guest execution. 
+ * 5) Guest verifies its private memory contents to be same as the data + * populated in step 3 and marks the end of the guest execution. + */ +#define PMPAT_ID 0 +#define PMPAT_DESC "PrivateMemoryPrivateAccessTest" + +/* Guest code execution stages for private mem access test */ +#define PMPAT_GUEST_STARTED 0ULL +#define PMPAT_GUEST_PRIV_MEM_UPDATED 1ULL + +static bool pmpat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc, + void *test_info, uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + + switch (stage) { + case PMPAT_GUEST_STARTED: { + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory update failure"); + VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED); + break; + } + case PMPAT_GUEST_PRIV_MEM_UPDATED: { + /* verify host updated data is still intact */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED); + break; + } + default: + TEST_FAIL("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void pmpat_guest_code(struct ucall *uc, uint8_t enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *priv_mem = (void *)TEST_MEM_GPA; + int ret; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, PMPAT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true, + enc_bit_shift); + /* Mark the GPA range to be treated as always accessed privately */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2, + mem_size)); + GUEST_SHARED_SYNC(uc, PMPAT_GUEST_PRIV_MEM_UPDATED); + + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, priv_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + GUEST_SHARED_DONE(uc); +} + +/* Test to verify guest shared accesses on private memory with following steps: + * 1) Upon entry, guest signals VMM that it has started. + * 2) VMM populates the shared memory with known pattern and continues guest + * execution. + * 3) Guest reads private gpa range in a shared fashion and verifies that it + * reads what VMM has written in step2. + * 3) Guest writes a different pattern on the shared memory and signals VMM + * that it has updated the shared memory. + * 4) VMM verifies shared memory contents to be same as the data populated + * in step 3 and continues guest execution. 
+ */ +#define PMSAT_ID 1 +#define PMSAT_DESC "PrivateMemorySharedAccessTest" + +/* Guest code execution stages for private mem access test */ +#define PMSAT_GUEST_STARTED 0ULL +#define PMSAT_GUEST_TEST_MEM_UPDATED 1ULL + +static bool pmsat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc, + void *test_info, uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + + switch (stage) { + case PMSAT_GUEST_STARTED: { + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory update failed"); + VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED); + break; + } + case PMSAT_GUEST_TEST_MEM_UPDATED: { + /* verify data to be same as what guest wrote */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED); + break; + } + default: + TEST_FAIL("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void pmsat_guest_code(struct ucall *uc, uint8_t enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *shared_mem = (void *)TEST_MEM_GPA; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, PMSAT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SHARED_SYNC(uc, PMSAT_GUEST_TEST_MEM_UPDATED); + + GUEST_SHARED_DONE(uc); +} + +/* Test to verify guest shared accesses on shared memory with following steps: + * 1) Upon entry, guest signals VMM that it has started. + * 2) VMM deallocates the backing private memory and populates the shared memory + * with known pattern and continues guest execution. + * 3) Guest reads shared gpa range in a shared fashion and verifies that it + * reads what VMM has written in step2. + * 3) Guest writes a different pattern on the shared memory and signals VMM + * that it has updated the shared memory. + * 4) VMM verifies shared memory contents to be same as the data populated + * in step 3 and continues guest execution. 
+ */ +#define SMSAT_ID 2 +#define SMSAT_DESC "SharedMemorySharedAccessTest" + +#define SMSAT_GUEST_STARTED 0ULL +#define SMSAT_GUEST_TEST_MEM_UPDATED 1ULL + +static bool smsat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc, + void *test_info, uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd; + + switch (stage) { + case SMSAT_GUEST_STARTED: { + /* Remove the backing private memory storage */ + int ret = fallocate(priv_memfd, + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, test_mem_size); + TEST_ASSERT(ret != -1, "fallocate failed in smsat handling"); + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory updated failed"); + VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED); + break; + } + case SMSAT_GUEST_TEST_MEM_UPDATED: { + /* verify data to be same as what guest wrote */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED); + break; + } + default: + TEST_FAIL("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void smsat_guest_code(struct ucall *uc, uint8_t enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *shared_mem = (void *)TEST_MEM_GPA; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, SMSAT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SHARED_SYNC(uc, SMSAT_GUEST_TEST_MEM_UPDATED); + + GUEST_SHARED_DONE(uc); +} + +/* Test to verify guest private accesses on shared memory with following steps: + * 1) Upon entry, guest signals VMM that it has started. + * 2) VMM deallocates the backing private memory and populates the shared memory + * with known pattern and continues guest execution. + * 3) Guest writes gpa range via private access and signals VMM. + * 4) VMM verifies shared memory contents to be same as the data populated + * in step 2 and continues guest execution. + * 5) Guest reads gpa range via private access and verifies that the contents + * are same as written in step 3. 
+ */ +#define SMPAT_ID 3 +#define SMPAT_DESC "SharedMemoryPrivateAccessTest" + +#define SMPAT_GUEST_STARTED 0ULL +#define SMPAT_GUEST_TEST_MEM_UPDATED 1ULL + +static bool smpat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc, + void *test_info, uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd; + + switch (stage) { + case SMPAT_GUEST_STARTED: { + /* Remove the backing private memory storage */ + int ret = fallocate(priv_memfd, + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, test_mem_size); + TEST_ASSERT(ret != -1, "fallocate failed in smpat handling"); + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, TEST_MEM_DATA_PAT1, + test_mem_size), "Shared memory updated failed"); + VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED); + break; + } + case SMPAT_GUEST_TEST_MEM_UPDATED: { + /* verify data to be same as what vmm wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED); + break; + } + default: + TEST_FAIL("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void smpat_guest_code(struct ucall *uc, uint8_t enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *shared_mem = (void *)TEST_MEM_GPA; + int ret; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, SMPAT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, + mem_size, true, enc_bit_shift); + /* Mark the GPA range to be treated as always accessed privately */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SHARED_SYNC(uc, SMPAT_GUEST_TEST_MEM_UPDATED); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + GUEST_SHARED_DONE(uc); +} + +/* Test to verify guest shared and private accesses on memory with following + * steps: + * 1) Upon entry, guest signals VMM that it has started. + * 2) VMM populates the shared memory with known pattern and continues guest + * execution. + * 3) Guest writes shared gpa range in a private fashion and signals VMM + * 4) VMM verifies that shared memory still contains the pattern written in + * step 2 and continues guest execution. + * 5) Guest verifies private memory contents to be same as the data populated + * in step 3 and signals VMM. + * 6) VMM removes the private memory backing which should also clear out the + * second stage mappings for the VM + * 6) Guest does shared write access on shared memory and signals vmm + * 7) VMM reads the shared memory and verifies that the data is same as what + * guest wrote in step 6 and continues guest execution. + * 8) Guest reads the private memory and verifies that the data is same as + * written in step 6. 
+ */ +#define PSAT_ID 4 +#define PSAT_DESC "PrivateSharedAccessTest" + +#define PSAT_GUEST_STARTED 0ULL +#define PSAT_GUEST_PRIVATE_MEM_UPDATED 1ULL +#define PSAT_GUEST_PRIVATE_MEM_VERIFIED 2ULL +#define PSAT_GUEST_SHARED_MEM_UPDATED 3ULL + +static bool psat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc, + void *test_info, uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd; + + switch (stage) { + case PSAT_GUEST_STARTED: { + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory update failed"); + VM_STAGE_PROCESSED(PSAT_GUEST_STARTED); + break; + } + case PSAT_GUEST_PRIVATE_MEM_UPDATED: { + /* verify data to be same as what vmm wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED); + break; + } + case PSAT_GUEST_PRIVATE_MEM_VERIFIED: { + /* Remove the backing private memory storage so that + * subsequent accesses from guest cause a second stage + * page fault + */ + int ret = fallocate(priv_memfd, + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, test_mem_size); + TEST_ASSERT(ret != -1, + "fallocate failed in smpat handling"); + VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED); + break; + } + case PSAT_GUEST_SHARED_MEM_UPDATED: { + /* verify data to be same as what guest wrote */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED); + break; + } + default: + TEST_FAIL("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void psat_guest_code(struct ucall *uc, uint8_t enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *shared_mem = (void *)TEST_MEM_GPA; + int ret; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, PSAT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true, + enc_bit_shift); + /* Mark the GPA range to be treated as always accessed privately */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SHARED_SYNC(uc, PSAT_GUEST_PRIVATE_MEM_UPDATED); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + GUEST_SHARED_SYNC(uc, PSAT_GUEST_PRIVATE_MEM_VERIFIED); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, false, + enc_bit_shift); + /* Mark no GPA range to be treated as accessed privately */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SHARED_SYNC(uc, PSAT_GUEST_SHARED_MEM_UPDATED); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + GUEST_SHARED_DONE(uc); +} + +/* Test to verify guest shared and private accesses on memory with following + * steps: + * 1) Upon entry, guest signals VMM that it has started. 
+ * 2) VMM removes the private memory backing and populates the shared memory + * with known pattern and continues guest execution. + * 3) Guest reads shared gpa range in a shared fashion and verifies that it + * reads what VMM has written in step2. + * 4) Guest writes a different pattern on the shared memory and signals VMM + * that it has updated the shared memory. + * 5) VMM verifies shared memory contents to be same as the data populated + * in step 4 and installs private memory backing again to allow guest + * to do private access and invalidate second stage mappings. + * 6) Guest does private write access on shared memory and signals vmm + * 7) VMM reads the shared memory and verified that the data is still same + * as in step 4 and continues guest execution. + * 8) Guest reads the private memory and verifies that the data is same as + * written in step 6. + */ +#define SPAT_ID 5 +#define SPAT_DESC "SharedPrivateAccessTest" + +#define SPAT_GUEST_STARTED 0ULL +#define SPAT_GUEST_SHARED_MEM_UPDATED 1ULL +#define SPAT_GUEST_PRIVATE_MEM_UPDATED 2ULL + +static bool spat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc, + void *test_info, uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd; + int ret; + + switch (stage) { + case SPAT_GUEST_STARTED: { + /* Remove the backing private memory storage so that + * subsequent accesses from guest cause a second stage + * page fault + */ + ret = fallocate(priv_memfd, + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, test_mem_size); + TEST_ASSERT(ret != -1, + "fallocate failed in spat handling"); + + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared memory updated failed"); + VM_STAGE_PROCESSED(SPAT_GUEST_STARTED); + break; + } + case SPAT_GUEST_SHARED_MEM_UPDATED: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + /* Allocate memory for private backing store */ + ret = fallocate(priv_memfd, 0, 0, test_mem_size); + TEST_ASSERT(ret != -1, "fallocate failed in spat handling"); + VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED); + break; + } + case SPAT_GUEST_PRIVATE_MEM_UPDATED: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED); + break; + } + default: + TEST_FAIL("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void spat_guest_code(struct ucall *uc, uint8_t enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *shared_mem = (void *)TEST_MEM_GPA; + int ret; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, SPAT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, mem_size)); + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + GUEST_SHARED_SYNC(uc, SPAT_GUEST_SHARED_MEM_UPDATED); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true, + enc_bit_shift); + /* Mark the GPA range to be treated as always accessed privately */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, 
TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, shared_mem,
+		TEST_MEM_DATA_PAT1, mem_size));
+	GUEST_SHARED_SYNC(uc, SPAT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, shared_mem,
+		TEST_MEM_DATA_PAT1, mem_size));
+	GUEST_SHARED_DONE(uc);
+}
+
+/* Test to verify guest private, shared, private accesses on memory with the
+ * following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM initializes the shared memory with a known pattern and continues
+ *    guest execution.
+ * 3) Guest writes the private memory privately via a known pattern and
+ *    signals VMM.
+ * 4) VMM reads the shared memory and verifies that it is the same as what
+ *    was written in step 2 and continues guest execution.
+ * 5) Guest reads the private memory privately and verifies that the contents
+ *    are same as written in step 3.
+ * 6) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared
+ *    and marks the range to be accessed via shared access.
+ * 7) Guest does a shared access to shared memory and verifies that the
+ *    contents are same as written in step 2.
+ * 8) Guest writes a known pattern to test memory and signals VMM.
+ * 9) VMM verifies the memory contents to be same as written by guest in
+ *    step 8.
+ * 10) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as private
+ *    and marks the range to be accessed via private access.
+ * 11) Guest writes a known pattern to the test memory and signals VMM.
+ * 12) VMM verifies the shared memory contents to be still the same as
+ *    written by the guest in step 8 and continues guest execution.
+ * 13) Guest verifies the memory pattern to be same as written in step 11.
+ */
+#define PSPAHCT_ID 6
+#define PSPAHCT_DESC "PrivateSharedPrivateAccessHyperCallTest"
+
+#define PSPAHCT_GUEST_STARTED 0ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED 1ULL
+#define PSPAHCT_GUEST_SHARED_MEM_UPDATED 2ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2 3ULL
+
+static bool pspahct_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc,
+		void *test_info, uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PSPAHCT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, test_mem_size),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED);
+		break;
+	}
+	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, test_mem_size),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	case PSPAHCT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, test_mem_size),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, test_mem_size),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
+		break;
+	}
+	default:
+		TEST_FAIL("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pspahct_guest_code(struct ucall *uc, uint8_t
enc_bit_shift, + struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa, + void *ghcb_gva) +{ + void *test_mem = (void *)TEST_MEM_GPA; + int ret; + + g_ghcb_gva = ghcb_gva; + g_ghcb_gpa = ghcb_gpa; + GUEST_SHARED_SYNC(uc, PSPAHCT_GUEST_STARTED); + guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true, + enc_bit_shift); + /* Mark the GPA range to be treated as always accessed privately */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem, TEST_MEM_DATA_PAT2, + mem_size)); + + GUEST_SHARED_SYNC(uc, PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + /* Map the GPA range to be treated as shared */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, false, + enc_bit_shift); + /* Mark the GPA range to be treated as always accessed via shared + * access + */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SHARED_SYNC(uc, PSPAHCT_GUEST_SHARED_MEM_UPDATED); + + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true, + enc_bit_shift); + /* Map the GPA range to be treated as private */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via private + * access + */ + ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_SHARED_ASSERT_1(uc, ret == 0, ret); + + GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + GUEST_SHARED_SYNC(uc, PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); + GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + GUEST_SHARED_DONE(uc); +} + +/* Test to verify guest accesses without double allocation: + * Guest starts with shared memory access disallowed by default. + * 1) Guest writes the private memory privately via a known pattern + * 3) Guest reads the private memory privately and verifies that the contents + * are same as written. + * 4) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared + * and marks the range to be accessed via shared access. + * 5) Guest writes shared memory with another pattern and signals VMM + * 6) VMM verifies the memory contents to be same as written by guest in step + * 5 and updates the memory with a different pattern + * 7) Guest verifies the memory contents to be same as written in step 6. + * 8) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as private + * and marks the range to be accessed via private access. 
+/* Test to verify guest accesses without double allocation:
+ * Guest starts with shared memory access disallowed by default.
+ * 1) Guest writes the private memory privately via a known pattern.
+ * 2) Guest reads the private memory privately and verifies that the contents
+ * are the same as written.
+ * 3) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared
+ * and marks the range to be accessed via shared access.
+ * 4) Guest writes shared memory with another pattern and signals VMM.
+ * 5) VMM verifies the memory contents to be the same as written by the
+ * guest in step 4 and updates the memory with a different pattern.
+ * 6) Guest verifies the memory contents to be the same as written in step 5.
+ * 7) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as private
+ * and marks the range to be accessed via private access.
+ * 8) Guest writes a known pattern to the test memory and verifies the
+ * contents to be the same as written.
+ * 9) Guest invokes KVM_HC_MAP_GPA_RANGE to map the hpa range as shared
+ * and marks the range to be accessed via shared access.
+ * 10) Guest writes shared memory with another pattern and signals VMM.
+ * 11) VMM verifies the memory contents to be the same as written by the
+ * guest in step 10 and updates the memory with a different pattern.
+ * 12) Guest verifies the memory contents to be the same as written in
+ * step 11.
+ */
+#define PSAWDAT_ID 7
+#define PSAWDAT_DESC "PrivateSharedAccessWithoutDoubleAllocationTest"
+
+#define PSAWDAT_GUEST_SHARED_MEM_UPDATED1 1ULL
+#define PSAWDAT_GUEST_SHARED_MEM_UPDATED2 2ULL
+
+static bool psawdat_handle_vm_stage(struct kvm_vm *vm, struct ucall *uc,
+		void *test_info, uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PSAWDAT_GUEST_SHARED_MEM_UPDATED1: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, test_mem_size),
+			"Shared memory view mismatch");
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, test_mem_size),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED1);
+		break;
+	}
+	case PSAWDAT_GUEST_SHARED_MEM_UPDATED2: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT3, test_mem_size),
+			"Shared memory view mismatch");
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT4, test_mem_size),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED2);
+		break;
+	}
+	default:
+		TEST_FAIL("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
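+/* do_mem_op() is defined earlier in this file; its contract as used here
+ * is: SET_PAT fills the buffer with a repeating pattern and VERIFY_PAT
+ * returns true only if every chunk still matches it. A minimal sketch of
+ * that contract (illustrative only, not the actual definition):
+ *
+ *	static bool do_mem_op(enum mem_op op, void *mem, uint64_t pat,
+ *		uint32_t size)
+ *	{
+ *		uint64_t *buf = (uint64_t *)mem;
+ *		uint32_t chunk_size = sizeof(pat);
+ *
+ *		if (op == SET_PAT) {
+ *			for (uint32_t i = 0; i < size / chunk_size; i++)
+ *				buf[i] = pat;
+ *			return true;
+ *		}
+ *		for (uint32_t i = 0; i < size / chunk_size; i++)
+ *			if (buf[i] != pat)
+ *				return false;
+ *		return true;
+ *	}
+ */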
+static void psawdat_guest_code(struct ucall *uc, uint8_t enc_bit_shift,
+	struct guest_pgt_info *gpgt_info, uint64_t mem_size, uint64_t ghcb_gpa,
+	void *ghcb_gva)
+{
+	void *test_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	g_ghcb_gva = ghcb_gva;
+	g_ghcb_gpa = ghcb_gpa;
+	guest_verify_sev_vm_boot(uc, g_ghcb_gva != NULL);
+	guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true,
+		enc_bit_shift);
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+	GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem, TEST_MEM_DATA_PAT1,
+		mem_size));
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem,
+		TEST_MEM_DATA_PAT1, mem_size));
+
+	/* Map the GPA range to be treated as shared */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, false,
+		enc_bit_shift);
+	/* Mark the GPA range to be treated as always accessed via shared
+	 * access
+	 */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem,
+		TEST_MEM_DATA_PAT2, mem_size));
+	GUEST_SHARED_SYNC(uc, PSAWDAT_GUEST_SHARED_MEM_UPDATED1);
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem,
+		TEST_MEM_DATA_PAT1, mem_size));
+
+	/* Map the GPA range to be treated as private */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, true,
+		enc_bit_shift);
+	/* Mark the GPA range to be treated as always accessed via private
+	 * access
+	 */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem,
+		TEST_MEM_DATA_PAT2, mem_size));
+	GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem,
+		TEST_MEM_DATA_PAT2, mem_size));
+
+	/* Map the GPA range to be treated as shared */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	guest_set_clr_pte_bit(uc, gpgt_info, TEST_MEM_GPA, mem_size, false,
+		enc_bit_shift);
+	/* Mark the GPA range to be treated as always accessed via shared
+	 * access
+	 */
+	ret = guest_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_SHARED_ASSERT_1(uc, ret == 0, ret);
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(SET_PAT, test_mem,
+		TEST_MEM_DATA_PAT3, mem_size));
+	GUEST_SHARED_SYNC(uc, PSAWDAT_GUEST_SHARED_MEM_UPDATED2);
+
+	GUEST_SHARED_ASSERT(uc, do_mem_op(VERIFY_PAT, test_mem,
+		TEST_MEM_DATA_PAT4, mem_size));
+
+	GUEST_SHARED_DONE(uc);
+}
+
+static struct test_run_helper priv_memfd_testsuite[] = {
+	[PMPAT_ID] = {
+		.test_desc = PMPAT_DESC,
+		.vmst_handler = pmpat_handle_vm_stage,
+		.guest_fn = pmpat_guest_code,
+	},
+	[PMSAT_ID] = {
+		.test_desc = PMSAT_DESC,
+		.vmst_handler = pmsat_handle_vm_stage,
+		.guest_fn = pmsat_guest_code,
+	},
+	[SMSAT_ID] = {
+		.test_desc = SMSAT_DESC,
+		.vmst_handler = smsat_handle_vm_stage,
+		.guest_fn = smsat_guest_code,
+	},
+	[SMPAT_ID] = {
+		.test_desc = SMPAT_DESC,
+		.vmst_handler = smpat_handle_vm_stage,
+		.guest_fn = smpat_guest_code,
+	},
+	[PSAT_ID] = {
+		.test_desc = PSAT_DESC,
+		.vmst_handler = psat_handle_vm_stage,
+		.guest_fn = psat_guest_code,
+	},
+	[SPAT_ID] = {
+		.test_desc = SPAT_DESC,
+		.vmst_handler = spat_handle_vm_stage,
+		.guest_fn = spat_guest_code,
+	},
+	[PSPAHCT_ID] = {
+		.test_desc = PSPAHCT_DESC,
+		.vmst_handler = pspahct_handle_vm_stage,
+		.guest_fn = pspahct_guest_code,
+	},
+	[PSAWDAT_ID] = {
+		.test_desc = PSAWDAT_DESC,
+		.vmst_handler = psawdat_handle_vm_stage,
+		.guest_fn = psawdat_guest_code,
+		.toggle_shared_mem_state = true,
+		.disallow_boot_shared_access = true,
+	},
+};
+
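+/* Extending the suite is table-driven: a new test needs an ID/DESC pair,
+ * a stage handler, a guest function and one entry in the array above.
+ * For example (all names hypothetical):
+ *
+ *	#define NEWT_ID 8
+ *	#define NEWT_DESC "NewTest"
+ *
+ *	[NEWT_ID] = {
+ *		.test_desc = NEWT_DESC,
+ *		.vmst_handler = newt_handle_vm_stage,
+ *		.guest_fn = newt_guest_code,
+ *	},
+ */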
+static void handle_vm_exit_hypercall(struct kvm_run *run,
+		uint32_t test_id)
+{
+	uint64_t gpa, npages, attrs, mem_end;
+	int priv_memfd = priv_memfd_testsuite[test_id].priv_memfd;
+	int ret;
+	int fallocate_mode;
+	void *shared_mem = priv_memfd_testsuite[test_id].shared_mem;
+	bool toggle_shared_mem_state =
+		priv_memfd_testsuite[test_id].toggle_shared_mem_state;
+	int mprotect_mode;
+
+	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE)
+		TEST_FAIL("Unhandled Hypercall %lld\n", run->hypercall.nr);
+
+	gpa = run->hypercall.args[0];
+	npages = run->hypercall.args[1];
+	attrs = run->hypercall.args[2];
+	mem_end = test_mem_end(gpa, test_mem_size);
+
+	if ((gpa < TEST_MEM_GPA) ||
+		((gpa + (npages << MIN_PAGE_SHIFT)) > mem_end))
+		TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n", gpa, npages);
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) {
+		fallocate_mode = 0;
+		mprotect_mode = PROT_NONE;
+	} else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE);
+		mprotect_mode = PROT_READ | PROT_WRITE;
+	}
+	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), npages, fallocate_mode ?
+		"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode, (gpa - TEST_MEM_GPA),
+		npages << MIN_PAGE_SHIFT);
+	TEST_ASSERT(ret != -1, "fallocate failed in hc handling");
+	if (toggle_shared_mem_state) {
+		if (fallocate_mode) {
+			ret = madvise(shared_mem, test_mem_size, MADV_DONTNEED);
+			TEST_ASSERT(ret != -1, "madvise failed in hc handling");
+		}
+		ret = mprotect(shared_mem, test_mem_size, mprotect_mode);
+		TEST_ASSERT(ret != -1, "mprotect failed in hc handling");
+	}
+	run->hypercall.ret = 0;
+}
+
+static void handle_vm_exit_memory_error(struct kvm_run *run,
+		uint32_t test_id)
+{
+	uint64_t gpa, size, flags, mem_end;
+	int ret;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	void *shared_mem = priv_memfd_testsuite[test_id].shared_mem;
+	bool toggle_shared_mem_state =
+		priv_memfd_testsuite[test_id].toggle_shared_mem_state;
+	int fallocate_mode;
+	int mprotect_mode;
+
+	gpa = run->memory.gpa;
+	size = run->memory.size;
+	flags = run->memory.flags;
+	mem_end = test_mem_end(gpa, test_mem_size);
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa + size) > mem_end))
+		TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n", gpa, size);
+
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) {
+		fallocate_mode = 0;
+		mprotect_mode = PROT_NONE;
+	} else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE);
+		mprotect_mode = PROT_READ | PROT_WRITE;
+	}
+	pr_info("Converting off 0x%lx size 0x%lx to %s\n", (gpa - TEST_MEM_GPA),
+		size, fallocate_mode ? "shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode, (gpa - TEST_MEM_GPA), size);
+	TEST_ASSERT(ret != -1, "fallocate failed in memory error handling");
+
+	if (toggle_shared_mem_state) {
+		if (fallocate_mode) {
+			ret = madvise(shared_mem, test_mem_size, MADV_DONTNEED);
+			TEST_ASSERT(ret != -1,
+				"madvise failed in memory error handling");
+		}
+		ret = mprotect(shared_mem, test_mem_size, mprotect_mode);
+		TEST_ASSERT(ret != -1,
+			"mprotect failed in memory error handling");
+	}
+}
+
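+/* Conversion bookkeeping in both handlers above relies on standard
+ * memfd/mmap semantics: fallocate(fd, 0, off, len) on the private memfd
+ * allocates backing when a range turns private, while
+ * fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, off, len)
+ * drops it again when the range goes shared, so the two backings of a
+ * page never stay allocated at the same time. For tests that set
+ * toggle_shared_mem_state, the shared mapping is additionally fenced
+ * with mprotect(PROT_NONE) while the range is private, so a stray shared
+ * access faults instead of silently reading stale data, and
+ * madvise(MADV_DONTNEED) discards the stale shared pages before the
+ * range is reopened with PROT_READ | PROT_WRITE.
+ */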
+static void vcpu_work(struct kvm_vm *vm, struct ucall *uc, uint32_t test_id)
+{
+	struct kvm_run *run;
+	uint64_t cmd;
+
+	/*
+	 * Loop until the guest is done.
+	 */
+	run = vcpu_state(vm, VCPU_ID);
+
+	while (true) {
+		vcpu_run(vm, VCPU_ID);
+
+		if (run->exit_reason == KVM_EXIT_HLT) {
+			cmd = get_ucall_shared(vm, VCPU_ID, uc);
+			if (cmd != UCALL_SYNC)
+				break;
+
+			if (!priv_memfd_testsuite[test_id].vmst_handler(
+				vm, uc, &priv_memfd_testsuite[test_id],
+				uc->args[1]))
+				break;
+
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
+			handle_vm_exit_hypercall(run, test_id);
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
+			handle_vm_exit_memory_error(run, test_id);
+			continue;
+		}
+
+		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
+		break;
+	}
+
+	if (run->exit_reason == KVM_EXIT_HLT && cmd == UCALL_ABORT)
+		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc->args[0],
+			__FILE__, uc->args[1], uc->args[2]);
+}
+
+static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
+		uint32_t size, uint64_t guest_addr,
+		uint32_t priv_fd, uint64_t priv_offset)
+{
+	struct kvm_userspace_memory_region_ext region_ext;
+	int ret;
+
+	region_ext.region.slot = slot;
+	region_ext.region.flags = KVM_MEM_PRIVATE;
+	region_ext.region.guest_phys_addr = guest_addr;
+	region_ext.region.memory_size = size;
+	region_ext.region.userspace_addr = (uintptr_t) mem;
+	region_ext.private_fd = priv_fd;
+	region_ext.private_offset = priv_offset;
+	ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, &region_ext);
+	TEST_ASSERT(ret == 0, "Failed to register user region for gpa 0x%lx\n",
+		guest_addr);
+}
+
+static void vc_handler(struct ex_regs *regs)
+{
+	sev_es_handle_vc(g_ghcb_gva, g_ghcb_gpa, regs);
+}
+
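+/* priv_memory_region_add() binds both backings of a gpa range into one
+ * memslot: region.userspace_addr carries the shared (mmap'ed) view while
+ * private_fd/private_offset carry the MFD_INACCESSIBLE view, with
+ * KVM_MEM_PRIVATE telling KVM the slot can be backed either way. Which
+ * view serves a given access at runtime is then driven entirely by the
+ * conversions performed in the exit handlers above.
+ */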
+static void setup_and_execute_test(uint32_t test_id,
+		const enum page_size shared,
+		const enum page_size private,
+		uint32_t policy)
+{
+	struct kvm_vm *vm;
+	vm_vaddr_t uc_vaddr;
+	struct sev_vm *sev;
+	int priv_memfd;
+	int ret;
+	void *shared_mem;
+	struct kvm_enable_cap cap;
+	bool disallow_boot_shared_access =
+		priv_memfd_testsuite[test_id].disallow_boot_shared_access;
+	int prot_flags = PROT_READ | PROT_WRITE;
+	uint8_t measurement[512];
+	uint32_t vm_page_size, num_test_pages;
+	vm_vaddr_t ghcb_vaddr = 0;
+	uint8_t enc_bit;
+
+	sev = sev_vm_create(policy, TOTAL_PAGES);
+	TEST_ASSERT(sev, "Failed to create SEV VM");
+	vm = sev_get_vm(sev);
+
+	vm_set_pgt_alloc_tracking(vm);
+
+	/* Set up VCPU and initial guest kernel. */
+	vm_vcpu_add_default(vm, VCPU_ID,
+		priv_memfd_testsuite[test_id].guest_fn);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	// use 2 pages by default
+	size_t mem_size = PAGE_SIZE_4KB * 2;
+	bool using_hugepages = false;
+
+	int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
+
+	switch (shared) {
+	case PAGE_4KB:
+		// no additional flags are needed
+		break;
+	case PAGE_2MB:
+		mmap_flags |= MAP_HUGETLB | MAP_HUGE_2MB | MAP_POPULATE;
+		mem_size = max(mem_size, PAGE_SIZE_2MB);
+		using_hugepages = true;
+		break;
+	case PAGE_1GB:
+		mmap_flags |= MAP_HUGETLB | MAP_HUGE_1GB | MAP_POPULATE;
+		mem_size = max(mem_size, PAGE_SIZE_1GB);
+		using_hugepages = true;
+		break;
+	default:
+		TEST_FAIL("unknown page size for shared memory\n");
+	}
+
+	unsigned int memfd_flags = MFD_INACCESSIBLE;
+
+	switch (private) {
+	case PAGE_4KB:
+		// no additional flags are needed
+		break;
+	case PAGE_2MB:
+		memfd_flags |= MFD_HUGETLB | MFD_HUGE_2MB;
+		mem_size = PAGE_SIZE_2MB;
+		using_hugepages = true;
+		break;
+	case PAGE_1GB:
+		memfd_flags |= MFD_HUGETLB | MFD_HUGE_1GB;
+		mem_size = PAGE_SIZE_1GB;
+		using_hugepages = true;
+		break;
+	default:
+		TEST_FAIL("unknown page size for private memory\n");
+	}
+
+	// set global for mem size to use later
+	test_mem_size = mem_size;
+
+	if (disallow_boot_shared_access)
+		prot_flags = PROT_NONE;
+
+	/* Allocate shared memory */
+	shared_mem = mmap(NULL, mem_size, prot_flags, mmap_flags, -1, 0);
+	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
+
+	if (using_hugepages) {
+		ret = madvise(shared_mem, mem_size, MADV_WILLNEED);
+		TEST_ASSERT(ret == 0, "madvise failed");
+	}
+
+	/* Allocate private memory */
+	priv_memfd = memfd_create("vm_private_mem", memfd_flags);
+	TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
+	ret = fallocate(priv_memfd, 0, 0, mem_size);
+	TEST_ASSERT(ret != -1, "fallocate failed");
+
+	priv_memory_region_add(vm, shared_mem, TEST_MEM_SLOT, mem_size,
+		TEST_MEM_GPA, priv_memfd, 0);
+
+	vm_page_size = vm_get_page_size(vm);
+	num_test_pages = mem_size / vm_page_size;
+	TEST_ASSERT(!(mem_size % vm_page_size),
+		"mem_size unaligned with vm page size");
+	pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
+		num_test_pages, vm_page_size);
+	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA, num_test_pages);
+
+	/* Set up shared ucall buffer. */
+	uc_vaddr = ucall_shared_alloc(vm, 1);
+
+	/* Enable exit on KVM_HC_MAP_GPA_RANGE */
+	pr_info("Enabling exit on map_gpa_range hypercall\n");
+	ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
+	TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
+		"VM exit on MAP_GPA_RANGE HC not supported");
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	ret = ioctl(vm_get_fd(vm), KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret == 0,
+		"Failed to enable exit on MAP_GPA_RANGE hypercall\n");
+
+	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);
+	vm_vaddr_t pgt_info_vaddr = vm_setup_pgt_info_buf(vm,
+		GUEST_PGT_MIN_VADDR);
+
+	if (policy & SEV_POLICY_ES) {
+		ghcb_vaddr = vm_vaddr_alloc_shared(vm, vm_page_size,
+			vm_page_size);
+		/* Set up VC handler. */
+		vm_init_descriptor_tables(vm);
+		vm_install_exception_handler(vm, 29, vc_handler);
+		vcpu_init_descriptor_tables(vm, VCPU_ID);
+	}
+
+	/* Set up guest params. */
+	enc_bit = sev_get_enc_bit(sev);
+	vcpu_args_set(vm, VCPU_ID, 6, uc_vaddr, enc_bit, pgt_info_vaddr,
+		test_mem_size, ghcb_vaddr ? addr_gva2gpa(vm, ghcb_vaddr) : 0,
+		ghcb_vaddr);
+	struct ucall *uc = (struct ucall *)addr_gva2hva(vm, uc_vaddr);
+
+	priv_memfd_testsuite[test_id].shared_mem = shared_mem;
+	priv_memfd_testsuite[test_id].priv_memfd = priv_memfd;
+
+	/* Allocations/setup done. Encrypt initial guest payload. */
+	sev_vm_launch(sev);
+	sev_vm_launch_measure(sev, measurement);
+	pr_info("guest measurement: ");
+	for (uint32_t i = 0; i < 32; ++i)
+		pr_info("%02x", measurement[i]);
+	pr_info("\n");
+
+	sev_vm_launch_finish(sev);
+
+	vcpu_work(vm, uc, test_id);
+
+	munmap(shared_mem, mem_size);
+	priv_memfd_testsuite[test_id].shared_mem = NULL;
+	close(priv_memfd);
+	priv_memfd_testsuite[test_id].priv_memfd = -1;
+	sev_vm_free(sev);
+}
+
+static void hugepage_requirements_text(const struct page_combo matrix)
+{
+	int pages_needed_2mb = 0;
+	int pages_needed_1gb = 0;
+	enum page_size sizes[] = { matrix.shared, matrix.private };
+
+	for (int i = 0; i < ARRAY_SIZE(sizes); ++i) {
+		if (sizes[i] == PAGE_2MB)
+			++pages_needed_2mb;
+		if (sizes[i] == PAGE_1GB)
+			++pages_needed_1gb;
+	}
+	if (pages_needed_2mb != 0 && pages_needed_1gb != 0) {
+		pr_info("This test requires %d 2MB page(s) and %d 1GB page(s)\n",
+			pages_needed_2mb, pages_needed_1gb);
+	} else if (pages_needed_2mb != 0) {
+		pr_info("This test requires %d 2MB page(s)\n", pages_needed_2mb);
+	} else if (pages_needed_1gb != 0) {
+		pr_info("This test requires %d 1GB page(s)\n", pages_needed_1gb);
+	}
+}
+
+static bool should_skip_test(const struct page_combo matrix,
+		const bool use_2mb_pages,
+		const bool use_1gb_pages)
+{
+	if ((matrix.shared == PAGE_2MB || matrix.private == PAGE_2MB)
+		&& !use_2mb_pages)
+		return true;
+	if ((matrix.shared == PAGE_1GB || matrix.private == PAGE_1GB)
+		&& !use_1gb_pages)
+		return true;
+	return false;
+}
+
+static void print_help(const char *const name)
+{
+	puts("");
+	printf("usage %s [-h] [-m] [-g] [-e]\n", name);
+	puts("");
+	printf(" -h: Display this help message\n");
+	printf(" -m: include test runs using 2MB page permutations\n");
+	printf(" -g: include test runs using 1GB page permutations\n");
+	printf(" -e: Run test with SEV-ES VM\n");
+	exit(0);
+}
+
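+/* Example invocations (the binary name comes from the Makefile hunk of
+ * this series and is not shown in this hunk, so a placeholder is used):
+ *
+ *	./<test_binary>		# 4K-only permutations, plain SEV policy
+ *	./<test_binary> -m	# also cover 2MB shared-page permutations
+ *	./<test_binary> -m -e	# same, but on an SEV-ES VM
+ */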
+int main(int argc, char *argv[])
+{
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	// arg parsing
+	int opt;
+	bool use_2mb_pages = false;
+	bool use_1gb_pages = false;
+	uint32_t policy = 0;
+
+	while ((opt = getopt(argc, argv, "emgh")) != -1) {
+		switch (opt) {
+		case 'm':
+			use_2mb_pages = true;
+			break;
+		case 'g':
+			use_1gb_pages = true;
+			break;
+		case 'e':
+			policy |= SEV_POLICY_ES;
+			break;
+		case 'h':
+		default:
+			print_help(argv[0]);
+		}
+	}
+
+	struct page_combo page_size_matrix[] = {
+		{ .shared = PAGE_4KB, .private = PAGE_4KB },
+		{ .shared = PAGE_2MB, .private = PAGE_4KB },
+	};
+
+	for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) {
+		for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) {
+			const struct page_combo current_page_matrix =
+				page_size_matrix[j];
+
+			if (should_skip_test(current_page_matrix,
+				use_2mb_pages, use_1gb_pages))
+				break;
+			pr_info("=== Starting test %s... ===\n",
+				priv_memfd_testsuite[i].test_desc);
+			pr_info("using page sizes shared: %s private: %s\n",
+				page_size_to_str(current_page_matrix.shared),
+				page_size_to_str(current_page_matrix.private));
+			hugepage_requirements_text(current_page_matrix);
+			setup_and_execute_test(i, current_page_matrix.shared,
+				current_page_matrix.private, policy);
+			pr_info("--- completed test %s ---\n\n",
+				priv_memfd_testsuite[i].test_desc);
+		}
+	}
+
+	return 0;
+}