From patchwork Fri Dec 23 00:13:45 2022
From: Vishal Annapurve <vannapurve@google.com>
Date: Fri, 23 Dec 2022 00:13:45 +0000
Message-ID: <20221223001352.3873203-2-vannapurve@google.com>
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
Subject: [V3 PATCH 1/8] KVM: selftests: private_mem: Use native hypercall
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
CVMs need to execute hypercalls using the instruction native to their CPU type,
without relying on KVM emulation. Issue the hypercall from the guest using the
native instruction.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/lib/x86_64/private_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index 2b97fc34ec4a..5a8fd8c3bc04 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -24,7 +24,7 @@ static inline uint64_t __kvm_hypercall_map_gpa_range(uint64_t gpa,
 		uint64_t size, uint64_t flags)
 {
-	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
+	return kvm_native_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
 }
 
 static inline void kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
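For reference, below is a minimal sketch of what a vendor-aware "native" hypercall helper could look like; it is not the selftest's actual kvm_native_hypercall() implementation (not shown in this excerpt). It follows the KVM hypercall ABI (number in RAX, arguments in RBX/RCX/RDX/RSI, result in RAX) and picks VMCALL on Intel or VMMCALL on AMD instead of relying on KVM to emulate the non-native instruction:

```c
/*
 * Illustrative sketch only: issue a KVM hypercall with the instruction
 * native to the vCPU vendor, so no instruction emulation is needed.
 */
static inline uint64_t native_hypercall(uint64_t nr, uint64_t a0, uint64_t a1,
					uint64_t a2, uint64_t a3, bool is_amd)
{
	uint64_t ret;

	if (is_amd)
		asm volatile("vmmcall"
			     : "=a"(ret)
			     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3)
			     : "memory");
	else
		asm volatile("vmcall"
			     : "=a"(ret)
			     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3)
			     : "memory");
	return ret;
}
```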
From patchwork Fri Dec 23 00:13:46 2022
From: Vishal Annapurve <vannapurve@google.com>
Date: Fri, 23 Dec 2022 00:13:46 +0000
Message-ID: <20221223001352.3873203-3-vannapurve@google.com>
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
Subject: [V3 PATCH 2/8] KVM: selftests: Support mapping pagetables to guest virtual memory
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add support for mapping guest pagetables into guest VM regions so that guests
can modify their page table entries at runtime. Add the following APIs:

1) vm_set_pgt_alloc_tracking: track guest page table page allocations so that
   such pages can be mapped back into the guest later.
2) vm_map_page_table: map page table pages into the guest and pass the guest
   the physical-to-virtual mappings of those page table pages.

This framework is useful for guests whose pagetables can't be manipulated from
userspace, e.g. confidential VM selftests. Confidential VMs need to change the
memory access type (private vs. shared) by modifying gpa values in their page
table entries.
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h  | 88 +++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c | 88 ++++++++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c   | 41 +++++++++
 3 files changed, 216 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index a736d6d18fa5..6c286686ec7c 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -78,6 +78,11 @@ struct protected_vm {
 	int8_t protected_bit;
 };
 
+struct pgt_page {
+	vm_paddr_t paddr;
+	struct list_head list;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -108,6 +113,10 @@ struct kvm_vm {
 	/* VM protection enabled: SEV, etc*/
 	bool protected;
 
+	struct list_head pgt_pages;
+	bool track_pgt_pages;
+	uint32_t num_pgt_pages;
+	vm_vaddr_t pgt_vaddr_start;
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -196,6 +205,25 @@ struct vm_guest_mode_params {
 	unsigned int page_size;
 	unsigned int page_shift;
 };
+
+/*
+ * Structure shared with the guest containing information about:
+ * - Starting virtual address for num_pgt_pages physical pagetable
+ *   page addresses tracked via paddrs array
+ * - page size of the guest
+ *
+ * Guest can walk through its pagetables using this information to
+ * read/modify pagetable attributes.
+ */
+struct guest_pgt_info {
+	uint64_t num_pgt_pages;
+	uint64_t pgt_vaddr_start;
+	uint64_t page_size;
+	uint64_t enc_mask;
+	uint64_t shared_mask;
+	uint64_t paddrs[];
+};
+
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
@@ -411,6 +439,48 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+
+/*
+ * function called by guest code to translate physical address of a pagetable
+ * page to guest virtual address.
+ *
+ * input args:
+ *   gpgt_info - pointer to the guest_pgt_info structure containing info
+ *     about guest virtual address mappings for guest physical
+ *     addresses of page table pages.
+ *   pgt_pa - physical address of guest page table page to be translated
+ *     to a virtual address.
+ *
+ * output args: none
+ *
+ * return:
+ *   pointer to the pagetable page, null in case physical address is not
+ *   tracked via given guest_pgt_info structure.
+ */
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, uint64_t pgt_pa);
+
+/*
+ * 1) Map page table physical pages to the guest virtual address range
+ * 2) Allocate and setup a page to be shared with guest containing guest_pgt_info
+ *    structure.
+ *
+ * Note:
+ * 1) vm_set_pgt_alloc_tracking function should be used to start tracking
+ *    of physical page table page allocation.
+ * 2) This function should be invoked after needed pagetable pages are
+ *    mapped to the VM using virt_pg_map.
+ *
+ * input args:
+ *   vm - virtual machine
+ *   vaddr_min - Minimum guest virtual address to start mapping the
+ *     guest pagetable pages and guest_pgt_info structure page(s).
+ *
+ * output args: none
+ *
+ * return: none
+ */
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
 vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
@@ -673,10 +743,28 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
 
 const char *exit_reason_str(unsigned int exit_reason);
 
+void sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info);
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
 vm_paddr_t _vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min,
 			       uint32_t memslot, bool protected);
+
+/*
+ * Enable tracking of physical guest pagetable pages for the given vm.
+ * This function should be called right after vm creation before any pages are
+ * mapped into the VM using vm_alloc_* / vm_vaddr_alloc* functions.
+ *
+ * input args:
+ *   vm - virtual machine
+ *
+ * output args: none
+ *
+ * return:
+ *   None
+ */
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm);
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
 static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 37f342a17350..b56be997216a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -202,6 +202,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
 	INIT_LIST_HEAD(&vm->vcpus);
+	INIT_LIST_HEAD(&vm->pgt_pages);
 	vm->regions.gpa_tree = RB_ROOT;
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
@@ -695,6 +696,7 @@ void kvm_vm_free(struct kvm_vm *vmp)
 {
 	int ctr;
 	struct hlist_node *node;
+	struct pgt_page *entry, *nentry;
 	struct userspace_mem_region *region;
 
 	if (vmp == NULL)
@@ -710,6 +712,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node)
 		__vm_mem_region_delete(vmp, region, false);
 
+	list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list)
+		free(entry);
+
 	/* Free sparsebit arrays. */
 	sparsebit_free(&vmp->vpages_valid);
 	sparsebit_free(&vmp->vpages_mapped);
@@ -1330,6 +1335,72 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
+void __weak sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info)
+{
+}
+
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info,
+			       uint64_t pgt_pa)
+{
+	uint64_t num_pgt_pages = gpgt_info->num_pgt_pages;
+	uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start;
+	uint64_t page_size = gpgt_info->page_size;
+
+	for (uint32_t i = 0; i < num_pgt_pages; i++) {
+		if (gpgt_info->paddrs[i] == pgt_pa)
+			return (void *)(pgt_vaddr_start + i * page_size);
+	}
+	return NULL;
+}
+
+static void vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	struct guest_pgt_info *gpgt_info;
+	uint64_t info_size = sizeof(*gpgt_info) + (sizeof(uint64_t) * vm->num_pgt_pages);
+	uint64_t num_pages = align_up(info_size, vm->page_size);
+	vm_vaddr_t buf_start = vm_vaddr_alloc(vm, num_pages, vaddr_min);
+	uint32_t i = 0;
+
+	gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start);
+	gpgt_info->num_pgt_pages = vm->num_pgt_pages;
+	gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start;
+	gpgt_info->page_size = vm->page_size;
+	if (vm->protected) {
+		gpgt_info->enc_mask = vm->arch.c_bit;
+		gpgt_info->shared_mask = vm->arch.s_bit;
+	}
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		gpgt_info->paddrs[i] = pgt_page_entry->paddr;
+		i++;
+	}
+	TEST_ASSERT((i == vm->num_pgt_pages), "pgt entries mismatch with the counter");
+	sync_vm_gpgt_info(vm, buf_start);
+}
+
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	vm_vaddr_t vaddr;
+
+	/* Stop tracking further pgt pages, mapping pagetable may itself need
+	 * new pages.
+	 */
+	vm->track_pgt_pages = false;
+	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm,
+		vm->num_pgt_pages * vm->page_size, vaddr_min);
+	vaddr = vaddr_start;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		/* Map the virtual page. */
+		virt_pg_map(vm, vaddr, pgt_page_entry->paddr & ~vm->arch.c_bit);
+		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+		vaddr += vm->page_size;
+	}
+	vm->pgt_vaddr_start = vaddr_start;
+
+	vm_setup_pgt_info_buf(vm, vaddr_min);
+}
+
 /*
  * VM Virtual Address Allocate Shared/Encrypted
  *
@@ -1981,9 +2052,24 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm)
+{
+	vm->track_pgt_pages = true;
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	struct pgt_page *pgt;
+	vm_paddr_t paddr = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+
+	if (vm->track_pgt_pages) {
+		pgt = calloc(1, sizeof(*pgt));
+		TEST_ASSERT(pgt != NULL, "Insufficient memory");
+		pgt->paddr = (paddr | vm->arch.c_bit);
+		list_add(&pgt->list, &vm->pgt_pages);
+		vm->num_pgt_pages++;
+	}
+	return paddr;
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 429e55f2609f..ab7d4cc4b848 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -19,6 +19,7 @@
 #define MAX_NR_CPUID_ENTRIES 100
 
 vm_vaddr_t exception_handlers;
+static struct guest_pgt_info *gpgt_info;
 static bool is_cpu_vendor_intel;
 static bool is_cpu_vendor_amd;
 
@@ -241,6 +242,46 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+static uint64_t *guest_code_get_pte(uint64_t vaddr)
+{
+	uint16_t index[4];
+	uint64_t *pml4e, *pdpe, *pde, *pte;
+	uint64_t pgt_paddr = get_cr3();
+
+	GUEST_ASSERT(gpgt_info != NULL);
+	uint64_t page_size = gpgt_info->page_size;
+
+	index[0] = (vaddr >> 12) & 0x1ffu;
+	index[1] = (vaddr >> 21) & 0x1ffu;
+	index[2] = (vaddr >> 30) & 0x1ffu;
+	index[3] = (vaddr >> 39) & 0x1ffu;
+
+	pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pml4e && (pml4e[index[3]] & PTE_PRESENT_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pml4e[index[3]]) * page_size);
+	pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pdpe && (pdpe[index[2]] & PTE_PRESENT_MASK) &&
+		!(pdpe[index[2]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pdpe[index[2]]) * page_size);
+	pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pde && (pde[index[1]] & PTE_PRESENT_MASK) &&
+		!(pde[index[1]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pde[index[1]]) * page_size);
+	pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pte && (pte[index[0]] & PTE_PRESENT_MASK));
+
+	return (uint64_t *)&pte[index[0]];
+}
+
+void sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info)
+{
+	gpgt_info = (struct guest_pgt_info *)pgt_info;
+	sync_global_to_guest(vm, gpgt_info);
+}
+
 void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		    uint64_t nr_bytes, int level)
 {
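To illustrate how the two new APIs are meant to be combined, here is a hypothetical caller-side sketch (not code from the patch): tracking must be enabled before any guest allocations so every pagetable page is recorded, and vm_map_page_table() runs only after all guest mappings exist. The mode, page count and minimum vaddr below are illustrative placeholders:

```c
/* Hypothetical usage sketch of vm_set_pgt_alloc_tracking()/vm_map_page_table(). */
static struct kvm_vm *create_vm_with_guest_visible_pgt(void *guest_code,
							struct kvm_vcpu **vcpu)
{
	struct kvm_vm *vm = ____vm_create(VM_MODE_DEFAULT, 512);

	/* Must run before any guest memory/pagetable pages are allocated. */
	vm_set_pgt_alloc_tracking(vm);

	*vcpu = vm_vcpu_add(vm, 0, guest_code);
	kvm_vm_elf_load(vm, program_invocation_name);

	/*
	 * All guest mappings now exist; expose the tracked pagetable pages
	 * and the guest_pgt_info descriptor to the guest itself.
	 */
	vm_map_page_table(vm, 0x10000);

	return vm;
}
```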
From patchwork Fri Dec 23 00:13:47 2022
From: Vishal Annapurve <vannapurve@google.com>
Date: Fri, 23 Dec 2022 00:13:47 +0000
Message-ID: <20221223001352.3873203-4-vannapurve@google.com>
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
Subject: [V3 PATCH 3/8] KVM: selftests: x86: Support changing gpa encryption masks
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add guest-side support for modifying the encryption/shared masks of page table
entries, so that a guest can access GPA ranges as either private or shared.
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../selftests/kvm/include/x86_64/processor.h |  4 ++
 .../selftests/kvm/lib/x86_64/processor.c     | 39 +++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 3617f83bb2e5..c8c55f54c14f 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -945,6 +945,10 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
 void vm_install_exception_handler(struct kvm_vm *vm, int vector,
 			void (*handler)(struct ex_regs *));
 
+void guest_set_region_shared(void *vaddr, uint64_t size);
+
+void guest_set_region_private(void *vaddr, uint64_t size);
+
 /* If a toddler were to say "abracadabra". */
 #define KVM_EXCEPTION_MAGIC 0xabacadabaULL
 
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index ab7d4cc4b848..42d1e4074f32 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -276,6 +276,45 @@ static uint64_t *guest_code_get_pte(uint64_t vaddr)
 	return (uint64_t *)&pte[index[0]];
 }
 
+static void guest_code_change_region_prot(void *vaddr_start, uint64_t mem_size,
+	bool private)
+{
+	uint64_t vaddr = (uint64_t)vaddr_start;
+	uint32_t num_pages;
+
+	GUEST_ASSERT(gpgt_info != NULL);
+	uint32_t guest_page_size = gpgt_info->page_size;
+
+	GUEST_ASSERT(!(mem_size % guest_page_size) && !(vaddr % guest_page_size));
+	GUEST_ASSERT(gpgt_info->enc_mask | gpgt_info->shared_mask);
+
+	num_pages = mem_size / guest_page_size;
+	for (uint32_t i = 0; i < num_pages; i++) {
+		uint64_t *pte = guest_code_get_pte(vaddr);
+
+		GUEST_ASSERT(pte);
+		if (private) {
+			*pte &= ~(gpgt_info->shared_mask);
+			*pte |= gpgt_info->enc_mask;
+		} else {
+			*pte &= ~(gpgt_info->enc_mask);
+			*pte |= gpgt_info->shared_mask;
+		}
+		asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
+		vaddr += guest_page_size;
+	}
+}
+
+void guest_set_region_shared(void *vaddr, uint64_t size)
+{
+	guest_code_change_region_prot(vaddr, size, /* shared */ false);
+}
+
+void guest_set_region_private(void *vaddr, uint64_t size)
+{
+	guest_code_change_region_prot(vaddr, size, /* private */ true);
+}
+
 void sync_vm_gpgt_info(struct kvm_vm *vm, vm_vaddr_t pgt_info)
 {
 	gpgt_info = (struct guest_pgt_info *)pgt_info;
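Put together with the hypercall from patch 1, the intended guest-side conversion flow is: flip the encryption/shared bits in the relevant PTEs, flush the stale TLB entry (done inside guest_code_change_region_prot() above), and only then tell the host about the new intent via KVM_HC_MAP_GPA_RANGE. The sketch below is an illustrative composition of APIs introduced in this series (the same ordering the SEV test uses in a later patch), assuming the buffer is identity-mapped so its virtual address equals the GPA passed to the hypercall:

```c
/*
 * Illustrative guest-side flow for converting an identity-mapped buffer
 * to shared and back to private.
 */
static void guest_toggle_buffer_sharing(void *buf, uint64_t size)
{
	/* 1) Clear the encryption bit / set the shared bit in the guest's PTEs. */
	guest_set_region_shared(buf, size);
	/* 2) Ask the host to treat the GPA range as shared memory. */
	kvm_hypercall_map_shared((uint64_t)buf, size);

	/* ... access the buffer as shared memory here ... */

	/* Convert back: restore the encryption bit, then notify the host. */
	guest_set_region_private(buf, size);
	kvm_hypercall_map_private((uint64_t)buf, size);
}
```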
From patchwork Fri Dec 23 00:13:48 2022
From: Vishal Annapurve <vannapurve@google.com>
Date: Fri, 23 Dec 2022 00:13:48 +0000
Message-ID: <20221223001352.3873203-5-vannapurve@google.com>
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
Subject: [V3 PATCH 4/8] KVM: selftests: Split SEV VM creation logic
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Split the SEV VM creation logic into separate init and finalize steps so that
callers can make additional modifications to the SEV VM configuration, e.g.
adding memslots, before the VM is launched.
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../selftests/kvm/include/x86_64/sev.h       |  4 ++++
 tools/testing/selftests/kvm/lib/x86_64/sev.c | 20 ++++++++++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index 1148db928d0b..6bf2015fff7a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -19,4 +19,8 @@ bool is_kvm_sev_supported(void);
 struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
 					   struct kvm_vcpu **cpu);
 
+struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
+					 struct kvm_vcpu **cpu);
+
+void sev_vm_finalize(struct kvm_vm *vm, uint32_t policy);
 #endif /* SELFTEST_KVM_SEV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 49c62f25363e..96d3dbc2ba74 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -215,7 +215,7 @@ static void sev_vm_measure(struct kvm_vm *vm)
 	pr_debug("\n");
 }
 
-struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 					   struct kvm_vcpu **cpu)
 {
 	enum vm_guest_mode mode = VM_MODE_PXXV48_4K;
@@ -231,14 +231,28 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
 	*cpu = vm_vcpu_add(vm, 0, guest_code);
 	kvm_vm_elf_load(vm, program_invocation_name);
 
+	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n", policy,
+		nr_pages * vm->page_size / 1024);
+	return vm;
+}
+
+void sev_vm_finalize(struct kvm_vm *vm, uint32_t policy)
+{
 	sev_vm_launch(vm, policy);
 
 	sev_vm_measure(vm);
 
 	sev_vm_launch_finish(vm);
+}
 
-	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n", policy,
-		nr_pages * vm->page_size / 1024);
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu)
+{
+	struct kvm_vm *vm;
+
+	vm = sev_vm_init_with_one_vcpu(policy, guest_code, cpu);
+
+	sev_vm_finalize(vm, policy);
 
 	return vm;
 }
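A hypothetical caller-side sketch of why the split matters: with init and finalize separated, a test can insert extra configuration, for example an additional private memslot, between vCPU setup and the SEV launch/measure sequence. The slot, GPA and page-count constants below are placeholders, not values from the patch:

```c
/* Placeholder values, for illustration only. */
#define DEMO_SLOT	10
#define DEMO_GPA	0xc0000000
#define DEMO_NPAGES	512

static struct kvm_vm *sev_vm_with_extra_memslot(uint32_t policy, void *guest_code,
						 struct kvm_vcpu **vcpu)
{
	struct kvm_vm *vm = sev_vm_init_with_one_vcpu(policy, guest_code, vcpu);

	/* Extra configuration that must happen before the guest is launched. */
	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, DEMO_GPA,
				    DEMO_SLOT, DEMO_NPAGES, KVM_MEM_PRIVATE);

	/* Launch, measure and finish the SEV guest. */
	sev_vm_finalize(vm, policy);
	return vm;
}
```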
From patchwork Fri Dec 23 00:13:49 2022
From: Vishal Annapurve <vannapurve@google.com>
Date: Fri, 23 Dec 2022 00:13:49 +0000
Message-ID: <20221223001352.3873203-6-vannapurve@google.com>
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
Subject: [V3 PATCH 5/8] KVM: selftests: Enable pagetable mapping for SEV VMs
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Enable pagetable tracking and mapping for SEV VMs so that guest code can use
the guest_set_region_shared/private APIs.
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/lib/x86_64/sev.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 96d3dbc2ba74..0dfffdc224d6 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -215,6 +215,8 @@ static void sev_vm_measure(struct kvm_vm *vm)
 	pr_debug("\n");
 }
 
+#define GUEST_PGT_MIN_VADDR 0x10000
+
 struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 					 struct kvm_vcpu **cpu)
 {
@@ -224,6 +226,7 @@ struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 
 	vm = ____vm_create(mode, nr_pages);
 
+	vm_set_pgt_alloc_tracking(vm);
 	kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
 
 	configure_sev_pte_masks(vm);
@@ -238,6 +241,8 @@ struct kvm_vm *sev_vm_init_with_one_vcpu(uint32_t policy, void *guest_code,
 
 void sev_vm_finalize(struct kvm_vm *vm, uint32_t policy)
 {
+	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);
+
 	sev_vm_launch(vm, policy);
 
 	sev_vm_measure(vm);
From patchwork Fri Dec 23 00:13:50 2022
From: Vishal Annapurve <vannapurve@google.com>
Date: Fri, 23 Dec 2022 00:13:50 +0000
Message-ID: <20221223001352.3873203-7-vannapurve@google.com>
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
Subject: [V3 PATCH 6/8] KVM: selftests: Refactor private_mem_test
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Move most of the logic from private_mem_test into a library so that the
private_mem_test logic can be shared between non-confidential and confidential
VM selftests.
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/Makefile           |   1 +
 .../include/x86_64/private_mem_test_helper.h   |  15 ++
 .../kvm/lib/x86_64/private_mem_test_helper.c   | 197 ++++++++++++++++++
 .../selftests/kvm/x86_64/private_mem_test.c    | 187 +----------------
 4 files changed, 214 insertions(+), 186 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ee8c3aebee80..83c649c9de23 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/hyperv.c
 LIBKVM_x86_64 += lib/x86_64/memstress.c
 LIBKVM_x86_64 += lib/x86_64/private_mem.c
+LIBKVM_x86_64 += lib/x86_64/private_mem_test_helper.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
new file mode 100644
index 000000000000..4d32c025876c
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+#define SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+
+#include
+#include
+
+void execute_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src);
+
+#endif /* SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
new file mode 100644
index 000000000000..600bd21d1bb8
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#define TEST_AREA_SLOT 10
+#define TEST_AREA_GPA 0xC0000000
+#define TEST_AREA_SIZE (2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE (10 * 4096)
+
+#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PATTERN1 0x66
+#define TEST_MEM_DATA_PATTERN2 0x99
+#define TEST_MEM_DATA_PATTERN3 0x33
+#define TEST_MEM_DATA_PATTERN4 0xaa
+#define TEST_MEM_DATA_PATTERN5 0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pattern)
+			return false;
+	}
+
+	return true;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pattern)
+{
+	memset(test_area_base, pattern, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
+{
+	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
+	uint64_t guest_pattern)
+{
+	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
+	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
+	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
+		GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
+		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
+}
+
+#define GUEST_STARTED 0
+#define GUEST_PRIVATE_MEM_POPULATED 1
+#define GUEST_SHARED_MEM_POPULATED 2
+#define GUEST_PRIVATE_MEM_POPULATED2 3
+
+/*
+ * Run memory conversion tests with explicit conversion:
+ * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
+ * to back/unback private memory. Subsequent accesses by guest to the gpa range
+ * will not cause exit to userspace.
+ *
+ * Test memory conversion scenarios with following steps:
+ * 1) Access private memory using private access and verify that memory contents
+ *    are not visible to userspace.
+ * 2) Convert memory to shared using explicit conversions and ensure that
+ *    userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit conversions and ensure that
+ *    userspace is again not able to access converted private regions.
+ */
+static void guest_conv_test_fn(void)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+		TEST_MEM_DATA_PATTERN1));
+
+	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+		TEST_MEM_DATA_PATTERN5));
+
+	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+		TEST_MEM_DATA_PATTERN3));
+	GUEST_DONE();
+}
+
+#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage)				\
+	{								\
+		struct ucall uc;					\
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);		\
+		ASSERT_EQ(uc.args[1], stage);				\
+	}
+
+#define ASSERT_GUEST_DONE(vcpu)						\
+	{								\
+		struct ucall uc;					\
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE);		\
+	}
+
+static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
+	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
+	VM_STAGE_PROCESSED(GUEST_STARTED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+		TEST_MEM_DATA_PATTERN4), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+		TEST_MEM_DATA_PATTERN2), "failed");
+	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
+	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+		TEST_MEM_DATA_PATTERN5), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_GUEST_DONE(vcpu);
+}
+
+void execute_vm_with_private_test_mem(
+	enum vm_mem_backing_src_type test_mem_src)
+{
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
+		TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size, KVM_MEM_PRIVATE);
+	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
+
+	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
+
+	host_conv_test_fn(vm, vcpu);
+
+	kvm_vm_free(vm);
+}
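The helper relies on vcpu_run_and_handle_mapgpa(), which is introduced elsewhere in this series and is not shown in this excerpt. Under the assumption that it runs the vCPU and services the KVM_HC_MAP_GPA_RANGE userspace exits enabled via KVM_CAP_EXIT_HYPERCALL, a rough sketch of such a loop could look like the following (illustrative only; the actual shared/private conversion work is left as a comment):

```c
/* Illustrative sketch only; not the series' actual implementation. */
static void run_vcpu_until_non_mapgpa_exit(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
	for (;;) {
		vcpu_run(vcpu);

		if (vcpu->run->exit_reason != KVM_EXIT_HYPERCALL ||
		    vcpu->run->hypercall.nr != KVM_HC_MAP_GPA_RANGE)
			break;

		/*
		 * args[0] = gpa, args[1] = npages, args[2] = attributes.
		 * Userspace would convert the backing memory between shared
		 * and private here before resuming the guest.
		 */
		vcpu->run->hypercall.ret = 0;
	}
}
```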
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
index 015ada2e3d54..72c2f913ee92 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -3,197 +3,12 @@
  * Copyright (C) 2022, Google LLC.
  */
 #define _GNU_SOURCE /* for program_invocation_short_name */
-#include
-#include
-#include
-#include
 #include
 #include
 #include
-#include
-#include
-#include
-#include
-#include
-
-#include
 #include
-#include
-#include
-
-#define TEST_AREA_SLOT 10
-#define TEST_AREA_GPA 0xC0000000
-#define TEST_AREA_SIZE (2 * 1024 * 1024)
-#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024)
-#define GUEST_TEST_MEM_SIZE (10 * 4096)
-
-#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x)
-
-#define TEST_MEM_DATA_PATTERN1 0x66
-#define TEST_MEM_DATA_PATTERN2 0x99
-#define TEST_MEM_DATA_PATTERN3 0x33
-#define TEST_MEM_DATA_PATTERN4 0xaa
-#define TEST_MEM_DATA_PATTERN5 0x12
-
-static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
-{
-	uint8_t *buf = (uint8_t *)mem;
-
-	for (uint32_t i = 0; i < size; i++) {
-		if (buf[i] != pattern)
-			return false;
-	}
-
-	return true;
-}
-
-static void populate_test_area(void *test_area_base, uint64_t pattern)
-{
-	memset(test_area_base, pattern, TEST_AREA_SIZE);
-}
-
-static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
-{
-	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);
-}
-
-static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
-	uint64_t guest_pattern)
-{
-	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
-	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
-	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
-		GUEST_TEST_MEM_SIZE));
-
-	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
-		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
-		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
-}
-
-#define GUEST_STARTED 0
-#define GUEST_PRIVATE_MEM_POPULATED 1
-#define GUEST_SHARED_MEM_POPULATED 2
-#define GUEST_PRIVATE_MEM_POPULATED2 3
-
-/*
- * Run memory conversion tests with explicit conversion:
- * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
- * to back/unback private memory. Subsequent accesses by guest to the gpa range
- * will not cause exit to userspace.
- *
- * Test memory conversion scenarios with following steps:
- * 1) Access private memory using private access and verify that memory contents
- *    are not visible to userspace.
- * 2) Convert memory to shared using explicit conversions and ensure that
- *    userspace is able to access the shared regions.
- * 3) Convert memory back to private using explicit conversions and ensure that
- *    userspace is again not able to access converted private regions.
- */
-static void guest_conv_test_fn(void)
-{
-	void *test_area_base = (void *)TEST_AREA_GPA;
-	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-
-	GUEST_SYNC(GUEST_STARTED);
-
-	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
-		TEST_MEM_DATA_PATTERN1));
-
-	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
-
-	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
-		TEST_MEM_DATA_PATTERN5));
-
-	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
-
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
-		TEST_MEM_DATA_PATTERN3));
-	GUEST_DONE();
-}
-
-#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage)				\
-	{								\
-		struct ucall uc;					\
-		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
-		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);		\
-		ASSERT_EQ(uc.args[1], stage);				\
-	}
-
-#define ASSERT_GUEST_DONE(vcpu)						\
-	{								\
-		struct ucall uc;					\
-		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
-		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE);		\
-	}
-
-static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
-{
-	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
-	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
-	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
-	VM_STAGE_PROCESSED(GUEST_STARTED);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
-	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
-		TEST_MEM_DATA_PATTERN4), "failed");
-	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
-	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
-		TEST_MEM_DATA_PATTERN2), "failed");
-	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
-	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
-	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
-		TEST_MEM_DATA_PATTERN5), "failed");
-	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
-
-	vcpu_run_and_handle_mapgpa(vm, vcpu);
-	ASSERT_GUEST_DONE(vcpu);
-}
-
-static void execute_vm_with_private_test_mem(
-	enum vm_mem_backing_src_type test_mem_src)
-{
-	struct kvm_vm *vm;
-	struct kvm_enable_cap cap;
-	struct kvm_vcpu *vcpu;
-
-	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
-
-	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
-	cap.cap = KVM_CAP_EXIT_HYPERCALL;
-	cap.flags = 0;
-	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
-	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
-
-	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
-		TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size, KVM_MEM_PRIVATE);
-	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
-
-	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
-
-	host_conv_test_fn(vm, vcpu);
-
-	kvm_vm_free(vm);
-}
+#include
 
 int main(int argc, char *argv[])
 {
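After this refactor, private_mem_test.c is reduced to a thin wrapper around the new library. A caller might look roughly like the sketch below; the backing-source choice is an illustrative placeholder and the original main() body is not shown in this excerpt:

```c
/* Illustrative wrapper; the actual main() of private_mem_test.c is elided above. */
int main(int argc, char *argv[])
{
	/* Run the conversion test against anonymous-memory backing. */
	execute_vm_with_private_test_mem(VM_MEM_SRC_ANONYMOUS);

	return 0;
}
```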
From patchwork Fri Dec 23 00:13:51 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 13080467
Date: Fri, 23 Dec 2022 00:13:51 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
Message-ID: <20221223001352.3873203-8-vannapurve@google.com>
Subject: [V3 PATCH 7/8] KVM: selftests: private_mem_test: Add support for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve

Add support for executing the private mem test with SEV VMs: allow creating
SEV VMs and have the guest code update its page tables when executing in the
context of an SEV VM.

Signed-off-by: Vishal Annapurve
---
 .../include/x86_64/private_mem_test_helper.h | 3 ++
 .../kvm/lib/x86_64/private_mem_test_helper.c | 37 +++++++++++++++++--
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
index 4d32c025876c..e54870b72369 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -12,4 +12,7 @@
 void execute_vm_with_private_test_mem(
 		enum vm_mem_backing_src_type test_mem_src);
 
+void execute_sev_vm_with_private_test_mem(
+		enum vm_mem_backing_src_type test_mem_src);
+
 #endif /* SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
index 600bd21d1bb8..36a8b1ab1c74 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
@@ -22,6 +22,9 @@
 #include
 #include
 #include
+#include
+
+static bool is_guest_sev_vm;
 
 #define TEST_AREA_SLOT		10
 #define TEST_AREA_GPA		0xC0000000
@@ -104,6 +107,8 @@
 	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
 		TEST_MEM_DATA_PATTERN1));
 
+	if (is_guest_sev_vm)
+		guest_set_region_shared(guest_test_mem, guest_test_size);
 	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
 
 	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
@@ -112,6 +117,9 @@
 	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
 		TEST_MEM_DATA_PATTERN5));
 
+	if (is_guest_sev_vm)
+		guest_set_region_private(guest_test_mem, guest_test_size);
+
 	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
 
 	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
@@ -170,14 +178,19 @@ static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
 	ASSERT_GUEST_DONE(vcpu);
 }
 
-void execute_vm_with_private_test_mem(
-		enum vm_mem_backing_src_type test_mem_src)
+static void execute_private_mem_test(enum vm_mem_backing_src_type test_mem_src,
+		bool is_sev_vm)
 {
 	struct kvm_vm *vm;
 	struct kvm_enable_cap cap;
 	struct kvm_vcpu *vcpu;
 
-	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+	if (is_sev_vm)
+		vm = sev_vm_init_with_one_vcpu(SEV_POLICY_NO_DBG,
+				guest_conv_test_fn, &vcpu);
+	else
+		vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+	TEST_ASSERT(vm, "VM creation failed\n");
 
 	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
 	cap.cap = KVM_CAP_EXIT_HYPERCALL;
@@ -191,7 +204,25 @@ void execute_vm_with_private_test_mem(
 
 	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
 
+	if (is_sev_vm) {
+		is_guest_sev_vm = true;
+		sync_global_to_guest(vm, is_guest_sev_vm);
+		sev_vm_finalize(vm, SEV_POLICY_NO_DBG);
+	}
+
 	host_conv_test_fn(vm, vcpu);
 
 	kvm_vm_free(vm);
 }
+
+void execute_vm_with_private_test_mem(
+		enum vm_mem_backing_src_type test_mem_src)
+{
+	execute_private_mem_test(test_mem_src, false);
+}
+
+void execute_sev_vm_with_private_test_mem(
+		enum vm_mem_backing_src_type test_mem_src)
+{
+	execute_private_mem_test(test_mem_src, true);
+}

From patchwork Fri Dec 23 00:13:52 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 13080468
Date: Fri, 23 Dec 2022 00:13:52 +0000
In-Reply-To: <20221223001352.3873203-1-vannapurve@google.com>
References: <20221223001352.3873203-1-vannapurve@google.com>
Message-ID: <20221223001352.3873203-9-vannapurve@google.com>
Subject: [V3 PATCH 8/8] KVM: selftests: Add private mem test for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
 aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
 corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, ackerleytng@google.com, Vishal Annapurve

Add an SEV-VM-specific private mem test that invokes selftest logic similar
to the logic executed for non-confidential VMs.
Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/.gitignore | 1 +
 tools/testing/selftests/kvm/Makefile | 1 +
 .../kvm/x86_64/sev_private_mem_test.c | 26 +++++++++++++++++++
 3 files changed, 28 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index f73639dcbebb..e5c82a1cd733 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -40,6 +40,7 @@
 /x86_64/set_sregs_test
 /x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
+/x86_64/sev_private_mem_test
 /x86_64/smaller_maxphyaddr_emulation_test
 /x86_64/smm_test
 /x86_64/state_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 83c649c9de23..a8ee7c473644 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -104,6 +104,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
new file mode 100644
index 000000000000..943fdfbe41d9
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+
+#include
+
+int main(int argc, char *argv[])
+{
+	execute_sev_vm_with_private_test_mem(
+		VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD);
+
+	/* Needs 2MB Hugepages */
+	if (get_free_huge_2mb_pages() >= 1) {
+		printf("Running SEV VM private mem test with 2M pages\n");
+		execute_sev_vm_with_private_test_mem(
+			VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD);
+	} else
+		printf("Skipping SEV VM private mem test with 2M pages\n");
+
+	return 0;
+}
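[Editor's note] The get_free_huge_2mb_pages() helper called in main() above is introduced elsewhere in this series and its implementation is not shown here. As a hedged, illustrative stand-in only (assumed behaviour, not the series' code), a helper of that kind could derive the count from HugePages_Free in /proc/meminfo, assuming the system's default hugepage size is 2MB:

/*
 * Illustrative stand-in only (not the series' implementation): report the
 * number of free default-size hugepages, counting them only when
 * Hugepagesize reports 2048 kB.
 */
#include <stdio.h>

static long count_free_2mb_hugepages(void)
{
	char line[128];
	long free_pages = 0, hpage_kb = 0;
	FILE *fp = fopen("/proc/meminfo", "r");

	if (!fp)
		return 0;

	while (fgets(line, sizeof(line), fp)) {
		/* Each sscanf() only updates its variable on a matching line. */
		sscanf(line, "HugePages_Free: %ld", &free_pages);
		sscanf(line, "Hugepagesize: %ld kB", &hpage_kb);
	}
	fclose(fp);

	return hpage_kb == 2048 ? free_pages : 0;
}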