From patchwork Tue Aug 30 22:42:52 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12960045
Date: Tue, 30 Aug 2022 22:42:52 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-2-vannapurve@google.com>
Subject: [RFC V2 PATCH 1/8] selftests: kvm: x86_64: Add support for pagetable tracking
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, akpm@linux-foundation.org, chao.p.peng@linux.intel.com, yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz, marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com, bgardon@google.com, Vishal Annapurve
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Add support for mapping guest pagetable pages to a contiguous guest virtual address range and for sharing the physical-to-virtual mappings with the guest in a pre-defined format. This functionality allows guests to modify their own page table entries. One such use case for confidential computing (CC) VMs is toggling the encryption bit in their PTEs to convert memory from encrypted to shared and vice versa.

Signed-off-by: Vishal Annapurve
--- .../selftests/kvm/include/kvm_util_base.h | 105 ++++++++++++++++++ tools/testing/selftests/kvm/lib/kvm_util.c | 78 ++++++++++++- .../selftests/kvm/lib/x86_64/processor.c | 32 ++++++ 3 files changed, 214 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index dfe454f228e7..f57ced56da1b 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -74,6 +74,11 @@ struct vm_memcrypt { int8_t enc_bit; }; +struct pgt_page { + vm_paddr_t paddr; + struct list_head list; +}; + struct kvm_vm { int mode; unsigned long type; @@ -98,6 +103,10 @@ struct kvm_vm { vm_vaddr_t handlers; uint32_t dirty_ring_size; struct vm_memcrypt memcrypt; + struct list_head pgt_pages; + bool track_pgt_pages; + uint32_t num_pgt_pages; + vm_vaddr_t pgt_vaddr_start; /* Cache of information for binary stats interface */ int stats_fd; @@ -184,6 +193,23 @@ struct vm_guest_mode_params { unsigned int page_size; unsigned int page_shift; }; + +/* + * Structure shared with the guest containing information about: + * - Starting virtual address for
num_pgt_pages physical pagetable + * page addresses tracked via paddrs array + * - page size of the guest + * + * Guest can walk through its pagetables using this information to + * read/modify pagetable attributes. + */ +struct guest_pgt_info { + uint64_t num_pgt_pages; + uint64_t pgt_vaddr_start; + uint64_t page_size; + uint64_t paddrs[]; +}; + extern const struct vm_guest_mode_params vm_guest_mode_params[]; int open_path_or_exit(const char *path, int flags); @@ -394,6 +420,49 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); +void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min); + +/* + * function called by guest code to translate physical address of a pagetable + * page to guest virtual address. + * + * input args: + * gpgt_info - pointer to the guest_pgt_info structure containing info + * about guest virtual address mappings for guest physical + * addresses of page table pages. + * pgt_pa - physical address of guest page table page to be translated + * to a virtual address. + * + * output args: none + * + * return: + * pointer to the pagetable page, null in case physical address is not + * tracked via given guest_pgt_info structure. + */ +void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, uint64_t pgt_pa); + +/* + * Allocate and setup a page to be shared with guest containing guest_pgt_info + * structure. + * + * Note: + * 1) vm_set_pgt_alloc_tracking function should be used to start tracking + * of physical page table page allocation. + * 2) This function should be invoked after needed pagetable pages are + * mapped to the VM using virt_pg_map. + * + * input args: + * vm - virtual machine + * vaddr_min - Minimum guest virtual address to start mapping the + * guest_pgt_info structure page(s). 
+ * + * output args: none + * + * return: + * virtual address mapping guest_pgt_info structure. + */ +vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min); + vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm); @@ -647,10 +716,46 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing); const char *exit_reason_str(unsigned int exit_reason); +#ifdef __x86_64__ +/* + * Guest called function to get a pointer to pte corresponding to a given + * guest virtual address and pointer to the guest_pgt_info structure. + * + * input args: + * gpgt_info - pointer to guest_pgt_info structure containing information + * about guest virtual addresses mapped to pagetable physical + * addresses. + * vaddr - guest virtual address + * + * output args: none + * + * return: + * pointer to the pte corresponding to guest virtual address, + * Null if pte is not found + */ +uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr); +#endif + vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, uint32_t memslot); vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min, uint32_t memslot); + +/* + * Enable tracking of physical guest pagetable pages for the given vm. + * This function should be called right after vm creation before any pages are + * mapped into the VM using vm_alloc_* / vm_vaddr_alloc* functions. 
+ * + * input args: + * vm - virtual machine + * + * output args: none + * + * return: + * None + */ +void vm_set_pgt_alloc_tracking(struct kvm_vm *vm); + vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm); /* diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index f153c71d6988..243d04a3d4b6 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -155,6 +155,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages) TEST_ASSERT(vm != NULL, "Insufficient Memory"); INIT_LIST_HEAD(&vm->vcpus); + INIT_LIST_HEAD(&vm->pgt_pages); vm->regions.gpa_tree = RB_ROOT; vm->regions.hva_tree = RB_ROOT; hash_init(vm->regions.slot_hash); @@ -573,6 +574,7 @@ void kvm_vm_free(struct kvm_vm *vmp) { int ctr; struct hlist_node *node; + struct pgt_page *entry, *nentry; struct userspace_mem_region *region; if (vmp == NULL) @@ -588,6 +590,9 @@ void kvm_vm_free(struct kvm_vm *vmp) hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node) __vm_mem_region_delete(vmp, region, false); + list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list) + free(entry); + /* Free sparsebit arrays. */ sparsebit_free(&vmp->vpages_valid); sparsebit_free(&vmp->vpages_mapped); @@ -1195,9 +1200,24 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, /* Arbitrary minimum physical address used for virtual translation tables. 
*/ #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000 +void vm_set_pgt_alloc_tracking(struct kvm_vm *vm) +{ + vm->track_pgt_pages = true; +} + vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm) { - return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + struct pgt_page *pgt; + vm_paddr_t paddr = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + + if (vm->track_pgt_pages) { + pgt = calloc(1, sizeof(*pgt)); + TEST_ASSERT(pgt != NULL, "Insufficient memory"); + pgt->paddr = addr_gpa2raw(vm, paddr); + list_add(&pgt->list, &vm->pgt_pages); + vm->num_pgt_pages++; + } + return paddr; } /* @@ -1286,6 +1306,27 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, return pgidx_start * vm->page_size; } +void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min) +{ + struct pgt_page *pgt_page_entry; + vm_vaddr_t vaddr; + + /* Stop tracking further pgt pages, mapping pagetable may itself need + * new pages. + */ + vm->track_pgt_pages = false; + vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, + vm->num_pgt_pages * vm->page_size, vaddr_min); + vaddr = vaddr_start; + list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) { + /* Map the virtual page. 
*/ + virt_pg_map(vm, vaddr, addr_raw2gpa(vm, pgt_page_entry->paddr)); + sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift); + vaddr += vm->page_size; + } + vm->pgt_vaddr_start = vaddr_start; +} + /* * VM Virtual Address Allocate Shared/Encrypted * @@ -1345,6 +1386,41 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_ return _vm_vaddr_alloc(vm, sz, vaddr_min, false); } +void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, + uint64_t pgt_pa) +{ + uint64_t num_pgt_pages = gpgt_info->num_pgt_pages; + uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start; + uint64_t page_size = gpgt_info->page_size; + + for (uint32_t i = 0; i < num_pgt_pages; i++) { + if (gpgt_info->paddrs[i] == pgt_pa) + return (void *)(pgt_vaddr_start + i * page_size); + } + return NULL; +} + +vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min) +{ + struct pgt_page *pgt_page_entry; + struct guest_pgt_info *gpgt_info; + uint64_t info_size = sizeof(*gpgt_info) + (sizeof(uint64_t) * vm->num_pgt_pages); + uint64_t num_pages = align_up(info_size, vm->page_size); + vm_vaddr_t buf_start = vm_vaddr_alloc(vm, num_pages, vaddr_min); + uint32_t i = 0; + + gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start); + gpgt_info->num_pgt_pages = vm->num_pgt_pages; + gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start; + gpgt_info->page_size = vm->page_size; + list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) { + gpgt_info->paddrs[i] = pgt_page_entry->paddr; + i++; + } + TEST_ASSERT((i == vm->num_pgt_pages), "pgt entries mismatch with the counter"); + return buf_start; +} + /* * VM Virtual Address Allocate Pages * diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 09d757a0b148..02252cabf9ec 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -217,6 +217,38 @@ void virt_arch_pg_map(struct 
kvm_vm *vm, uint64_t vaddr, uint64_t paddr) __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); } +uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr) +{ + uint16_t index[4]; + uint64_t *pml4e, *pdpe, *pde, *pte; + uint64_t pgt_paddr = get_cr3(); + uint64_t page_size = gpgt_info->page_size; + + index[0] = (vaddr >> 12) & 0x1ffu; + index[1] = (vaddr >> 21) & 0x1ffu; + index[2] = (vaddr >> 30) & 0x1ffu; + index[3] = (vaddr >> 39) & 0x1ffu; + + pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr); + GUEST_ASSERT(pml4e && (pml4e[index[3]] & PTE_PRESENT_MASK)); + + pgt_paddr = (PTE_GET_PFN(pml4e[index[3]]) * page_size); + pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr); + GUEST_ASSERT(pdpe && (pdpe[index[2]] & PTE_PRESENT_MASK) && + !(pdpe[index[2]] & PTE_LARGE_MASK)); + + pgt_paddr = (PTE_GET_PFN(pdpe[index[2]]) * page_size); + pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr); + GUEST_ASSERT(pde && (pde[index[1]] & PTE_PRESENT_MASK) && + !(pde[index[1]] & PTE_LARGE_MASK)); + + pgt_paddr = (PTE_GET_PFN(pde[index[1]]) * page_size); + pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr); + GUEST_ASSERT(pte && (pte[index[0]] & PTE_PRESENT_MASK)); + + return (uint64_t *)&pte[index[0]]; +} + static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, struct kvm_vcpu *vcpu, uint64_t vaddr) From patchwork Tue Aug 30 22:42:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Annapurve X-Patchwork-Id: 12960046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52A6FECAAD5 for ; Tue, 30 Aug 2022 22:43:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232138AbiH3Wn1 (ORCPT ); Tue, 30 Aug 2022 18:43:27 -0400 Received: from lindbergh.monkeyblade.net 
Date: Tue, 30 Aug 2022 22:42:53 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-3-vannapurve@google.com>
Subject: [RFC V2 PATCH 2/8] kvm: Add HVA range operator
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

Introduce an HVA range operator so that other KVM subsystems can operate on a given HVA range.
Signed-off-by: Vishal Annapurve --- include/linux/kvm_host.h | 6 +++++ virt/kvm/kvm_main.c | 48 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4508fa0e8fb6..c860e6d6408d 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1398,6 +1398,12 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void kvm_mmu_updating_begin(struct kvm *kvm, gfn_t start, gfn_t end); void kvm_mmu_updating_end(struct kvm *kvm, gfn_t start, gfn_t end); +typedef int (*kvm_hva_range_op_t)(struct kvm *kvm, + struct kvm_gfn_range *range, void *data); + +int kvm_vm_do_hva_range_op(struct kvm *kvm, unsigned long hva_start, + unsigned long hva_end, kvm_hva_range_op_t handler, void *data); + long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg); long kvm_arch_vcpu_ioctl(struct file *filp, diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 7597949fe031..16cb9ab59143 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -647,6 +647,54 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm, return (int)ret; } +int kvm_vm_do_hva_range_op(struct kvm *kvm, unsigned long hva_start, + unsigned long hva_end, kvm_hva_range_op_t handler, void *data) +{ + int ret = 0; + struct kvm_gfn_range gfn_range; + struct kvm_memory_slot *slot; + struct kvm_memslots *slots; + int i, idx; + + if (WARN_ON_ONCE(hva_end <= hva_start)) + return -EINVAL; + + idx = srcu_read_lock(&kvm->srcu); + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + struct interval_tree_node *node; + + slots = __kvm_memslots(kvm, i); + kvm_for_each_memslot_in_hva_range(node, slots, + hva_start, hva_end - 1) { + unsigned long start, end; + + slot = container_of(node, struct kvm_memory_slot, + hva_node[slots->node_idx]); + start = max(hva_start, slot->userspace_addr); + end = min(hva_end, slot->userspace_addr + + (slot->npages << PAGE_SHIFT)); + + /* + * 
{gfn(page) | page intersects with [hva_start, hva_end)} = * {gfn_start, gfn_start+1, ..., gfn_end-1}. */ + gfn_range.start = hva_to_gfn_memslot(start, slot); + gfn_range.end = hva_to_gfn_memslot(end + PAGE_SIZE - 1, slot); + gfn_range.slot = slot; + + ret = handler(kvm, &gfn_range, data); + if (ret) + goto e_ret; + } + } + +e_ret: + srcu_read_unlock(&kvm->srcu, idx); + + return ret; +} + static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn, unsigned long start, unsigned long end,
From patchwork Tue Aug 30 22:42:54 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12960047
Date: Tue, 30 Aug 2022 22:42:54 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-4-vannapurve@google.com>
Subject: [RFC V2 PATCH 3/8] arch: x86: sev: Populate private memory fd during LAUNCH_UPDATE_DATA
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

This change adds handling of HVA ranges to copy their contents into private memory while doing SEV LAUNCH_UPDATE_DATA. The mem_attr array is updated during LAUNCH_UPDATE_DATA to ensure that encrypted memory is marked as private.
Signed-off-by: Vishal Annapurve --- arch/x86/kvm/svm/sev.c | 99 ++++++++++++++++++++++++++++++++++++---- include/linux/kvm_host.h | 2 + virt/kvm/kvm_main.c | 39 ++++++++++------ 3 files changed, 116 insertions(+), 24 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 309bcdb2f929..673dca318cd4 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -492,23 +492,22 @@ static unsigned long get_num_contig_pages(unsigned long idx, return pages; } -static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp) +int sev_launch_update_shared_gfn_handler(struct kvm *kvm, + struct kvm_gfn_range *range, struct kvm_sev_cmd *argp) { unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i; struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; - struct kvm_sev_launch_update_data params; struct sev_data_launch_update_data data; struct page **inpages; int ret; - if (!sev_guest(kvm)) - return -ENOTTY; - - if (copy_from_user(¶ms, (void __user *)(uintptr_t)argp->data, sizeof(params))) - return -EFAULT; + vaddr = gfn_to_hva_memslot(range->slot, range->start); + if (kvm_is_error_hva(vaddr)) { + pr_err("vaddr is erroneous 0x%lx\n", vaddr); + return -EINVAL; + } - vaddr = params.uaddr; - size = params.len; + size = (range->end - range->start) << PAGE_SHIFT; vaddr_end = vaddr + size; /* Lock the user memory. 
*/ @@ -560,6 +559,88 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp) return ret; } +int sev_launch_update_priv_gfn_handler(struct kvm *kvm, + struct kvm_gfn_range *range, struct kvm_sev_cmd *argp) +{ + struct sev_data_launch_update_data data; + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; + gfn_t gfn; + kvm_pfn_t pfn; + struct kvm_memory_slot *memslot = range->slot; + int ret = 0; + + data.reserved = 0; + data.handle = sev->handle; + + for (gfn = range->start; gfn < range->end; gfn++) { + int order; + void *kvaddr; + + ret = kvm_private_mem_get_pfn(memslot, + gfn, &pfn, &order); + if (ret) + return ret; + + kvaddr = pfn_to_kaddr(pfn); + if (!virt_addr_valid(kvaddr)) { + pr_err("Invalid kvaddr 0x%lx\n", (uint64_t)kvaddr); + ret = -EINVAL; + goto e_ret; + } + + ret = kvm_read_guest_page(kvm, gfn, kvaddr, 0, PAGE_SIZE); + if (ret) { + pr_err("guest read failed 0x%lx\n", ret); + goto e_ret; + } + + if (!this_cpu_has(X86_FEATURE_SME_COHERENT)) + clflush_cache_range(kvaddr, PAGE_SIZE); + + data.len = PAGE_SIZE; + data.address = __sme_set(pfn << PAGE_SHIFT); + ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_DATA, &data, &argp->error); + if (ret) + goto e_ret; + + kvm_private_mem_put_pfn(memslot, pfn); + } + kvm_vm_set_region_attr(kvm, range->start, range->end, + true /* priv_attr */); + + return ret; + +e_ret: + kvm_private_mem_put_pfn(memslot, pfn); + return ret; +} + +int sev_launch_update_gfn_handler(struct kvm *kvm, + struct kvm_gfn_range *range, void *data) +{ + struct kvm_sev_cmd *argp = (struct kvm_sev_cmd *)data; + + if (kvm_slot_can_be_private(range->slot)) + return sev_launch_update_priv_gfn_handler(kvm, range, argp); + + return sev_launch_update_shared_gfn_handler(kvm, range, argp); +} + +static int sev_launch_update_data(struct kvm *kvm, + struct kvm_sev_cmd *argp) +{ + struct kvm_sev_launch_update_data params; + + if (!sev_guest(kvm)) + return -ENOTTY; + + if (copy_from_user(¶ms, (void __user *)(uintptr_t)argp->data, 
sizeof(params))) + return -EFAULT; + + return kvm_vm_do_hva_range_op(kvm, params.uaddr, params.uaddr + params.len, + sev_launch_update_gfn_handler, argp); +} + static int sev_es_sync_vmsa(struct vcpu_svm *svm) { struct sev_es_save_area *save = svm->sev_es.vmsa; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c860e6d6408d..5d0054e957b4 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -980,6 +980,8 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, void kvm_exit(void); void kvm_get_kvm(struct kvm *kvm); +int kvm_vm_set_region_attr(struct kvm *kvm, unsigned long gfn_start, + unsigned long gfn_end, bool priv_attr); bool kvm_get_kvm_safe(struct kvm *kvm); void kvm_put_kvm(struct kvm *kvm); bool file_is_kvm(struct file *file); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 16cb9ab59143..9463737c2172 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -981,7 +981,7 @@ static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start, } mutex_lock(&kvm->slots_lock); - for (gfn = gfn_start; gfn <= gfn_end; gfn++) { + for (gfn = gfn_start; gfn < gfn_end; gfn++) { int order; void *kvaddr; @@ -1012,12 +1012,29 @@ static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start, } #endif +int kvm_vm_set_region_attr(struct kvm *kvm, unsigned long gfn_start, + unsigned long gfn_end, bool priv_attr) +{ + int r; + void *entry; + unsigned long index; + + entry = priv_attr ? 
xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL; + + for (index = gfn_start; index < gfn_end; index++) { + r = xa_err(xa_store(&kvm->mem_attr_array, index, entry, + GFP_KERNEL_ACCOUNT)); + if (r) + break; + } + + return r; +} + static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl, struct kvm_enc_region *region) { unsigned long start, end; - unsigned long index; - void *entry; int r; if (region->size == 0 || region->addr + region->size < region->addr) @@ -1026,22 +1043,14 @@ static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl return -EINVAL; start = region->addr >> PAGE_SHIFT; - end = (region->addr + region->size - 1) >> PAGE_SHIFT; - - entry = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION ? - xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL; - - for (index = start; index <= end; index++) { - r = xa_err(xa_store(&kvm->mem_attr_array, index, entry, - GFP_KERNEL_ACCOUNT)); - if (r) - break; - } + end = (region->addr + region->size) >> PAGE_SHIFT; + r = kvm_vm_set_region_attr(kvm, start, end, + (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION)); kvm_zap_gfn_range(kvm, start, end + 1); #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING - if (!kvm->vm_entry_attempted && (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION)) + if (!r && !kvm->vm_entry_attempted && (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION)) + r = kvm_vm_populate_private_mem(kvm, start, end); #endif

From patchwork Tue Aug 30 22:42:52 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12960048
Date: Tue, 30 Aug 2022 22:42:55 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-5-vannapurve@google.com>
Subject: [RFC V2 PATCH 4/8] selftests: kvm: sev: Support memslots with private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
List-ID: kvm@vger.kernel.org

Introduce an additional helper API to create a SEV VM with private memory memslots.
Signed-off-by: Vishal Annapurve --- tools/testing/selftests/kvm/include/x86_64/sev.h | 2 ++ tools/testing/selftests/kvm/lib/x86_64/sev.c | 15 ++++++++++++--- 2 files changed, 14 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h index b6552ea1c716..628801707917 100644 --- a/tools/testing/selftests/kvm/include/x86_64/sev.h +++ b/tools/testing/selftests/kvm/include/x86_64/sev.h @@ -38,6 +38,8 @@ void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data); struct kvm_vm *sev_get_vm(struct sev_vm *sev); uint8_t sev_get_enc_bit(struct sev_vm *sev); +struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages, + uint32_t memslot_flags); struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages); void sev_vm_free(struct sev_vm *sev); void sev_vm_launch(struct sev_vm *sev); diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c index 44b5ce5cd8db..6a329ea17f9f 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/sev.c +++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c @@ -171,7 +171,8 @@ void sev_vm_free(struct sev_vm *sev) free(sev); } -struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages) +struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages, + uint32_t memslot_flags) { struct sev_vm *sev; struct kvm_vm *vm; @@ -188,9 +189,12 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages) vm->vpages_mapped = sparsebit_alloc(); vm_set_memory_encryption(vm, true, true, sev->enc_bit); pr_info("SEV cbit: %d\n", sev->enc_bit); - vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0); - sev_register_user_region(sev, addr_gpa2hva(vm, 0), + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, + memslot_flags); + if (!(memslot_flags & KVM_MEM_PRIVATE)) { + sev_register_user_region(sev, addr_gpa2hva(vm, 0), npages * vm->page_size); + } pr_info("SEV guest created, 
policy: 0x%x, size: %lu KB\n", sev->sev_policy, npages * vm->page_size / 1024); @@ -198,6 +202,11 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages) return sev; } +struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages) +{ + return sev_vm_create_with_flags(policy, npages, 0); +} + void sev_vm_launch(struct sev_vm *sev) { struct kvm_sev_launch_start ksev_launch_start = {0};

From patchwork Tue Aug 30 22:42:56 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12960049
Date: Tue, 30 Aug 2022 22:42:56 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-6-vannapurve@google.com>
Subject: [RFC V2 PATCH 5/8] selftests: kvm: Update usage of private mem lib for SEV VMs
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
List-ID: kvm@vger.kernel.org

Add/update APIs to allow reusing private mem lib for SEV VMs. Memory conversion for SEV VMs includes updating guest pagetables based on virtual addresses to toggle C-bit.

Signed-off-by: Vishal Annapurve --- .../kvm/include/x86_64/private_mem.h | 9 +- .../selftests/kvm/lib/x86_64/private_mem.c | 103 +++++++++++++----- 2 files changed, 83 insertions(+), 29 deletions(-) diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h index 645bf3f61d1e..183b53b8c486 100644 --- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h +++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h @@ -14,10 +14,10 @@ enum mem_conversion_type { TO_SHARED }; -void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa, - uint64_t size); -void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa, - uint64_t size); +void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva, + uint64_t gpa, uint64_t size); +void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva, + uint64_t gpa, uint64_t size); void guest_map_ucall_page_shared(void); @@ -45,6 +45,7 @@ struct vm_setup_info { struct test_setup_info
test_info; guest_code_fn guest_fn; io_exit_handler ioexit_cb; + uint32_t policy; /* Used for Sev VMs */ }; void execute_vm_with_private_mem(struct vm_setup_info *info); diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c index f6dcfa4d353f..28d93754e1f2 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c +++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c @@ -22,12 +22,45 @@ #include #include #include +#include + +#define GUEST_PGT_MIN_VADDR 0x10000 + +/* Variables populated by userspace logic and consumed by guest code */ +static bool is_sev_vm; +static struct guest_pgt_info *sev_gpgt_info; +static uint8_t sev_enc_bit; + +static void sev_guest_set_clr_pte_bit(uint64_t vaddr_start, uint64_t mem_size, + bool set) +{ + uint64_t vaddr = vaddr_start; + uint32_t guest_page_size = sev_gpgt_info->page_size; + uint32_t num_pages; + + GUEST_ASSERT(!(mem_size % guest_page_size) && !(vaddr_start % + guest_page_size)); + + num_pages = mem_size / guest_page_size; + for (uint32_t i = 0; i < num_pages; i++) { + uint64_t *pte = guest_code_get_pte(sev_gpgt_info, vaddr); + + GUEST_ASSERT(pte); + if (set) + *pte |= (1ULL << sev_enc_bit); + else + *pte &= ~(1ULL << sev_enc_bit); + asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory"); + vaddr += guest_page_size; + } +} /* * Execute KVM hypercall to change memory access type for a given gpa range. * * Input Args: * type - memory conversion type TO_SHARED/TO_PRIVATE + * gva - starting gva address * gpa - starting gpa address * size - size of the range starting from gpa for which memory access needs * to be changed @@ -40,9 +73,12 @@ * for a given gpa range. This API is useful in exercising implicit conversion * path. 
*/ -void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa, - uint64_t size) +void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva, + uint64_t gpa, uint64_t size) { + if (is_sev_vm) + sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false); + int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT, type == TO_PRIVATE ? KVM_MARK_GPA_RANGE_ENC_ACCESS : KVM_CLR_GPA_RANGE_ENC_ACCESS, 0); @@ -54,6 +90,7 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa, * * Input Args: * type - memory conversion type TO_SHARED/TO_PRIVATE + * gva - starting gva address * gpa - starting gpa address * size - size of the range starting from gpa for which memory type needs * to be changed @@ -65,9 +102,12 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa, * Function called by guest logic in selftests to update the memory type for a * given gpa range. This API is useful in exercising explicit conversion path. */ -void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa, - uint64_t size) +void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva, + uint64_t gpa, uint64_t size) { + if (is_sev_vm) + sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false); + int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT, type == TO_PRIVATE ? KVM_MAP_GPA_RANGE_ENCRYPTED : KVM_MAP_GPA_RANGE_DECRYPTED, 0); @@ -90,30 +130,15 @@ void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa, void guest_map_ucall_page_shared(void) { vm_paddr_t ucall_paddr = get_ucall_pool_paddr(); + GUEST_ASSERT(ucall_paddr); - guest_update_mem_access(TO_SHARED, ucall_paddr, 1 << MIN_PAGE_SHIFT); + int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, ucall_paddr, 1, + KVM_MAP_GPA_RANGE_DECRYPTED, 0); + GUEST_ASSERT_1(!ret, ret); } -/* - * Execute KVM ioctl to back/unback private memory for given gpa range. 
- * - * Input Args: - * vm - kvm_vm handle - * gpa - starting gpa address - * size - size of the gpa range - * op - mem_op indicating whether private memory needs to be allocated or - * unbacked - * - * Output Args: None - * - * Return: None - * - * Function called by host userspace logic in selftests to back/unback private - * memory for gpa ranges. This function is useful to setup initial boot private - * memory and then convert memory during runtime. - */ -void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size, - enum mem_op op) +static void vm_update_private_mem_internal(struct kvm_vm *vm, uint64_t gpa, + uint64_t size, enum mem_op op, bool encrypt) { int priv_memfd; uint64_t priv_offset, guest_phys_base, fd_offset; @@ -142,6 +167,10 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size, TEST_ASSERT(ret == 0, "fallocate failed\n"); enc_region.addr = gpa; enc_region.size = size; + + if (!encrypt) + return; + if (op == ALLOCATE_MEM) { printf("doing encryption for gpa 0x%lx size 0x%lx\n", gpa, size); vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &enc_region); @@ -151,6 +180,30 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size, } } +/* + * Execute KVM ioctl to back/unback private memory for given gpa range. + * + * Input Args: + * vm - kvm_vm handle + * gpa - starting gpa address + * size - size of the gpa range + * op - mem_op indicating whether private memory needs to be allocated or + * unbacked + * + * Output Args: None + * + * Return: None + * + * Function called by host userspace logic in selftests to back/unback private + * memory for gpa ranges. This function is useful to setup initial boot private + * memory and then convert memory during runtime. 
+ */ +void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size, + enum mem_op op) +{ + vm_update_private_mem_internal(vm, gpa, size, op, true /* encrypt */); +} + static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm, volatile struct kvm_run *run) {

From patchwork Tue Aug 30 22:42:57 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12960050
Date: Tue, 30 Aug 2022 22:42:57 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-7-vannapurve@google.com>
Subject: [RFC V2 PATCH 6/8] selftests: kvm: Support executing SEV VMs with private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
List-ID: kvm@vger.kernel.org

Add support of executing SEV VMs for testing private memory conversion scenarios.

Signed-off-by: Vishal Annapurve --- .../kvm/include/x86_64/private_mem.h | 1 + .../selftests/kvm/lib/x86_64/private_mem.c | 86 +++++++++++++++++++ 2 files changed, 87 insertions(+) diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h index 183b53b8c486..d3ef88da837c 100644 --- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h +++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h @@ -49,5 +49,6 @@ struct vm_setup_info { }; void execute_vm_with_private_mem(struct vm_setup_info *info); +void execute_sev_vm_with_private_mem(struct vm_setup_info *info); #endif /* SELFTEST_KVM_PRIVATE_MEM_H */ diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c index 28d93754e1f2..0eb8f92d19e8 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c +++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c @@ -348,3 +348,89 @@ void execute_vm_with_private_mem(struct vm_setup_info *info) ucall_uninit(vm); kvm_vm_free(vm); } + +/* + * Execute Sev vm with private memory memslots.
+ * + * Input Args: + * info - pointer to a structure containing information about setting up a SEV + * VM with private memslots + * + * Output Args: None + * + * Return: None + * + * Function called by host userspace logic in selftests to execute SEV vm + * logic. It will install two memslots: + * 1) memslot 0 : containing all the boot code/stack pages + * 2) test_mem_slot : containing the region of memory that would be used to test + * private/shared memory accesses to a memory backed by private memslots + */ +void execute_sev_vm_with_private_mem(struct vm_setup_info *info) +{ + uint8_t measurement[512]; + struct sev_vm *sev; + struct kvm_vm *vm; + struct kvm_enable_cap cap; + struct kvm_vcpu *vcpu; + uint32_t memslot0_pages = info->memslot0_pages; + uint64_t test_area_gpa, test_area_size; + struct test_setup_info *test_info = &info->test_info; + + sev = sev_vm_create_with_flags(info->policy, memslot0_pages, KVM_MEM_PRIVATE); + TEST_ASSERT(sev, "Sev VM creation failed"); + vm = sev_get_vm(sev); + vm->use_ucall_pool = true; + vm_set_pgt_alloc_tracking(vm); + vm_create_irqchip(vm); + + TEST_ASSERT(info->guest_fn, "guest_fn not present"); + vcpu = vm_vcpu_add(vm, 0, info->guest_fn); + kvm_vm_elf_load(vm, program_invocation_name); + + vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL); + cap.cap = KVM_CAP_EXIT_HYPERCALL; + cap.flags = 0; + cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE); + vm_ioctl(vm, KVM_ENABLE_CAP, &cap); + + TEST_ASSERT(test_info->test_area_size, "Test mem size not present"); + + test_area_size = test_info->test_area_size; + test_area_gpa = test_info->test_area_gpa; + vm_userspace_mem_region_add(vm, test_info->test_area_mem_src, test_area_gpa, + test_info->test_area_slot, test_area_size / vm->page_size, + KVM_MEM_PRIVATE); + vm_update_private_mem(vm, test_area_gpa, test_area_size, ALLOCATE_MEM); + + virt_map(vm, test_area_gpa, test_area_gpa, test_area_size/vm->page_size); + + vm_map_page_table(vm, GUEST_PGT_MIN_VADDR); + sev_gpgt_info = (struct guest_pgt_info 
*)vm_setup_pgt_info_buf(vm, + GUEST_PGT_MIN_VADDR); + sev_enc_bit = sev_get_enc_bit(sev); + is_sev_vm = true; + sync_global_to_guest(vm, sev_enc_bit); + sync_global_to_guest(vm, sev_gpgt_info); + sync_global_to_guest(vm, is_sev_vm); + + vm_update_private_mem_internal(vm, 0, (memslot0_pages << MIN_PAGE_SHIFT), + ALLOCATE_MEM, false); + + /* Allocations/setup done. Encrypt initial guest payload. */ + sev_vm_launch(sev); + + /* Dump the initial measurement. A test to actually verify it would be nice. */ + sev_vm_launch_measure(sev, measurement); + pr_info("guest measurement: "); + for (uint32_t i = 0; i < 32; ++i) + pr_info("%02x", measurement[i]); + pr_info("\n"); + + sev_vm_launch_finish(sev); + + vcpu_work(vm, vcpu, info); + + sev_vm_free(sev); + is_sev_vm = false; +}

From patchwork Tue Aug 30 22:42:58 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12960051
Date: Tue, 30 Aug 2022 22:42:58 +0000
In-Reply-To: <20220830224259.412342-1-vannapurve@google.com>
References: <20220830224259.412342-1-vannapurve@google.com>
Message-ID: <20220830224259.412342-8-vannapurve@google.com>
Subject: [RFC V2 PATCH 7/8] selftests: kvm: Refactor testing logic for private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
List-ID: kvm@vger.kernel.org

Move all of the logic to execute memory conversion tests into library to allow sharing the logic between normal non-confidential VMs and SEV VMs.
Signed-off-by: Vishal Annapurve --- tools/testing/selftests/kvm/Makefile | 1 + .../include/x86_64/private_mem_test_helper.h | 13 + .../kvm/lib/x86_64/private_mem_test_helper.c | 273 ++++++++++++++++++ .../selftests/kvm/x86_64/private_mem_test.c | 246 +--------------- 4 files changed, 289 insertions(+), 244 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index c5fc8ea2c843..36874fedff4a 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -52,6 +52,7 @@ LIBKVM_x86_64 += lib/x86_64/apic.c LIBKVM_x86_64 += lib/x86_64/handlers.S LIBKVM_x86_64 += lib/x86_64/perf_test_util.c LIBKVM_x86_64 += lib/x86_64/private_mem.c +LIBKVM_x86_64 += lib/x86_64/private_mem_test_helper.c LIBKVM_x86_64 += lib/x86_64/processor.c LIBKVM_x86_64 += lib/x86_64/svm.c LIBKVM_x86_64 += lib/x86_64/ucall.c diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h new file mode 100644 index 000000000000..31bc559cd813 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2022, Google LLC. 
+ */ + +#ifndef SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H +#define SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H + +void execute_memory_conversion_tests(void); + +void execute_sev_memory_conversion_tests(void); + +#endif // SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c new file mode 100644 index 000000000000..ce53bef7896e --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c @@ -0,0 +1,273 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2022, Google LLC. + */ +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#define VM_MEMSLOT0_PAGES (512 * 10) + +#define TEST_AREA_SLOT 10 +#define TEST_AREA_GPA 0xC0000000 +#define TEST_AREA_SIZE (2 * 1024 * 1024) +#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024) +#define GUEST_TEST_MEM_SIZE (10 * 4096) + +#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) + +#define TEST_MEM_DATA_PAT1 0x66 +#define TEST_MEM_DATA_PAT2 0x99 +#define TEST_MEM_DATA_PAT3 0x33 +#define TEST_MEM_DATA_PAT4 0xaa +#define TEST_MEM_DATA_PAT5 0x12 + +static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat) +{ + uint8_t *buf = (uint8_t *)mem; + + for (uint32_t i = 0; i < size; i++) { + if (buf[i] != pat) + return false; + } + + return true; +} + +/* + * Add custom implementation for memset to avoid using standard/builtin memset + * which may use features like SSE/GOT that don't work with guest vm execution + * within selftests. 
+ */ +void *memset(void *mem, int byte, size_t size) +{ + uint8_t *buf = (uint8_t *)mem; + + for (uint32_t i = 0; i < size; i++) + buf[i] = byte; + + return buf; +} + +static void populate_test_area(void *test_area_base, uint64_t pat) +{ + memset(test_area_base, pat, TEST_AREA_SIZE); +} + +static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat) +{ + memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE); +} + +static bool verify_test_area(void *test_area_base, uint64_t area_pat, + uint64_t guest_pat) +{ + void *test_area1_base = test_area_base; + uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET; + void *guest_test_mem = test_area_base + test_area1_size; + uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; + void *test_area2_base = guest_test_mem + guest_test_size; + uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET + + GUEST_TEST_MEM_SIZE)); + + return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) && + verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) && + verify_mem_contents(test_area2_base, test_area2_size, area_pat)); +} + +#define GUEST_STARTED 0 +#define GUEST_PRIVATE_MEM_POPULATED 1 +#define GUEST_SHARED_MEM_POPULATED 2 +#define GUEST_PRIVATE_MEM_POPULATED2 3 +#define GUEST_IMPLICIT_MEM_CONV1 4 +#define GUEST_IMPLICIT_MEM_CONV2 5 + +/* + * Run memory conversion tests supporting two types of conversion: + * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause + * userspace exit to back/unback private memory. Subsequent accesses by guest + * to the gpa range will not cause exit to userspace. + * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as + * private/shared without exiting to userspace. Subsequent accesses by guest + * to the gpa range will result in KVM EPT/NPT faults and then exit to + * userspace for each page. 
+ * + * Test memory conversion scenarios with following steps: + * 1) Access private memory using private access and verify that memory contents + * are not visible to userspace. + * 2) Convert memory to shared using explicit/implicit conversions and ensure + * that userspace is able to access the shared regions. + * 3) Convert memory back to private using explicit/implicit conversions and + * ensure that userspace is again not able to access converted private + * regions. + */ +static void guest_conv_test_fn(bool test_explicit_conv) +{ + void *test_area_base = (void *)TEST_AREA_GPA; + void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); + uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; + + guest_map_ucall_page_shared(); + GUEST_SYNC(GUEST_STARTED); + + populate_test_area(test_area_base, TEST_MEM_DATA_PAT1); + GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED); + GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, + TEST_MEM_DATA_PAT1)); + + if (test_explicit_conv) + guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + else { + guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1); + } + + populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2); + + GUEST_SYNC(GUEST_SHARED_MEM_POPULATED); + GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, + TEST_MEM_DATA_PAT5)); + + if (test_explicit_conv) + guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + else { + guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem, + (uint64_t)guest_test_mem, guest_test_size); + GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2); + } + + populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3); + GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2); + + GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, + TEST_MEM_DATA_PAT3)); + GUEST_DONE(); +} + 
+static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1) +{ + void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA); + void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET); + uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); + uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; + + switch (uc_arg1) { + case GUEST_STARTED: + populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4); + VM_STAGE_PROCESSED(GUEST_STARTED); + break; + case GUEST_PRIVATE_MEM_POPULATED: + TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, + TEST_MEM_DATA_PAT4), "failed"); + VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED); + break; + case GUEST_SHARED_MEM_POPULATED: + TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, + TEST_MEM_DATA_PAT2), "failed"); + populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5); + VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED); + break; + case GUEST_PRIVATE_MEM_POPULATED2: + TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, + TEST_MEM_DATA_PAT5), "failed"); + VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2); + break; + case GUEST_IMPLICIT_MEM_CONV1: + /* + * For first implicit conversion, memory is already private so + * mark it private again just to zap the pte entries for the gpa + * range, so that subsequent accesses from the guest will + * generate ept/npt fault and memory conversion path will be + * exercised by KVM. + */ + vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, + ALLOCATE_MEM); + VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1); + break; + case GUEST_IMPLICIT_MEM_CONV2: + /* + * For second implicit conversion, memory is already shared so + * mark it shared again just to zap the pte entries for the gpa + * range, so that subsequent accesses from the guest will + * generate ept/npt fault and memory conversion path will be + * exercised by KVM. 
+ */ + vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, + UNBACK_MEM); + VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2); + break; + default: + TEST_FAIL("Unknown stage %d\n", uc_arg1); + break; + } +} + +static void guest_explicit_conv_test_fn(void) +{ + guest_conv_test_fn(true); +} + +static void guest_implicit_conv_test_fn(void) +{ + guest_conv_test_fn(false); +} + +/* + * Execute implicit and explicit memory conversion tests with non-confidential + * VMs using memslots with private memory. + */ +void execute_memory_conversion_tests(void) +{ + struct vm_setup_info info; + struct test_setup_info *test_info = &info.test_info; + + info.vm_mem_src = VM_MEM_SRC_ANONYMOUS; + info.memslot0_pages = VM_MEMSLOT0_PAGES; + test_info->test_area_gpa = TEST_AREA_GPA; + test_info->test_area_size = TEST_AREA_SIZE; + test_info->test_area_slot = TEST_AREA_SLOT; + test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS; + info.ioexit_cb = conv_test_ioexit_fn; + + info.guest_fn = guest_explicit_conv_test_fn; + execute_vm_with_private_mem(&info); + + info.guest_fn = guest_implicit_conv_test_fn; + execute_vm_with_private_mem(&info); +} + +/* + * Execute implicit and explicit memory conversion tests with SEV VMs using + * memslots with private memory. 
+ */ +void execute_sev_memory_conversion_tests(void) +{ + struct vm_setup_info info; + struct test_setup_info *test_info = &info.test_info; + + info.vm_mem_src = VM_MEM_SRC_ANONYMOUS; + info.memslot0_pages = VM_MEMSLOT0_PAGES; + test_info->test_area_gpa = TEST_AREA_GPA; + test_info->test_area_size = TEST_AREA_SIZE; + test_info->test_area_slot = TEST_AREA_SLOT; + test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS; + info.ioexit_cb = conv_test_ioexit_fn; + + info.policy = SEV_POLICY_NO_DBG; + info.guest_fn = guest_explicit_conv_test_fn; + execute_sev_vm_with_private_mem(&info); + + info.guest_fn = guest_implicit_conv_test_fn; + execute_sev_vm_with_private_mem(&info); +} diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c index 52430b97bd0b..49da626e5807 100644 --- a/tools/testing/selftests/kvm/x86_64/private_mem_test.c +++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c @@ -1,263 +1,21 @@ // SPDX-License-Identifier: GPL-2.0 /* - * tools/testing/selftests/kvm/lib/kvm_util.c - * * Copyright (C) 2022, Google LLC. 
*/ #define _GNU_SOURCE /* for program_invocation_short_name */ -#include -#include -#include -#include #include #include #include -#include - -#include -#include -#include -#include #include #include -#include -#include - -#define VM_MEMSLOT0_PAGES (512 * 10) - -#define TEST_AREA_SLOT 10 -#define TEST_AREA_GPA 0xC0000000 -#define TEST_AREA_SIZE (2 * 1024 * 1024) -#define GUEST_TEST_MEM_OFFSET (1 * 1024 * 1024) -#define GUEST_TEST_MEM_SIZE (10 * 4096) - -#define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) - -#define TEST_MEM_DATA_PAT1 0x66 -#define TEST_MEM_DATA_PAT2 0x99 -#define TEST_MEM_DATA_PAT3 0x33 -#define TEST_MEM_DATA_PAT4 0xaa -#define TEST_MEM_DATA_PAT5 0x12 - -static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat) -{ - uint8_t *buf = (uint8_t *)mem; - - for (uint32_t i = 0; i < size; i++) { - if (buf[i] != pat) - return false; - } - - return true; -} - -/* - * Add custom implementation for memset to avoid using standard/builtin memset - * which may use features like SSE/GOT that don't work with guest vm execution - * within selftests. 
- */ -void *memset(void *mem, int byte, size_t size) -{ - uint8_t *buf = (uint8_t *)mem; - - for (uint32_t i = 0; i < size; i++) - buf[i] = byte; - - return buf; -} - -static void populate_test_area(void *test_area_base, uint64_t pat) -{ - memset(test_area_base, pat, TEST_AREA_SIZE); -} - -static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat) -{ - memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE); -} - -static bool verify_test_area(void *test_area_base, uint64_t area_pat, - uint64_t guest_pat) -{ - void *test_area1_base = test_area_base; - uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET; - void *guest_test_mem = test_area_base + test_area1_size; - uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; - void *test_area2_base = guest_test_mem + guest_test_size; - uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET + - GUEST_TEST_MEM_SIZE)); - - return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) && - verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) && - verify_mem_contents(test_area2_base, test_area2_size, area_pat)); -} - -#define GUEST_STARTED 0 -#define GUEST_PRIVATE_MEM_POPULATED 1 -#define GUEST_SHARED_MEM_POPULATED 2 -#define GUEST_PRIVATE_MEM_POPULATED2 3 -#define GUEST_IMPLICIT_MEM_CONV1 4 -#define GUEST_IMPLICIT_MEM_CONV2 5 - -/* - * Run memory conversion tests supporting two types of conversion: - * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause - * userspace exit to back/unback private memory. Subsequent accesses by guest - * to the gpa range will not cause exit to userspace. - * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as - * private/shared without exiting to userspace. Subsequent accesses by guest - * to the gpa range will result in KVM EPT/NPT faults and then exit to - * userspace for each page. 
- * - * Test memory conversion scenarios with following steps: - * 1) Access private memory using private access and verify that memory contents - * are not visible to userspace. - * 2) Convert memory to shared using explicit/implicit conversions and ensure - * that userspace is able to access the shared regions. - * 3) Convert memory back to private using explicit/implicit conversions and - * ensure that userspace is again not able to access converted private - * regions. - */ -static void guest_conv_test_fn(bool test_explicit_conv) -{ - void *test_area_base = (void *)TEST_AREA_GPA; - void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); - uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; - - guest_map_ucall_page_shared(); - GUEST_SYNC(GUEST_STARTED); - - populate_test_area(test_area_base, TEST_MEM_DATA_PAT1); - GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED); - GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, - TEST_MEM_DATA_PAT1)); - - if (test_explicit_conv) - guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem, - guest_test_size); - else { - guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem, - guest_test_size); - GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1); - } - - populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2); - - GUEST_SYNC(GUEST_SHARED_MEM_POPULATED); - GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, - TEST_MEM_DATA_PAT5)); - - if (test_explicit_conv) - guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem, - guest_test_size); - else { - guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem, - guest_test_size); - GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2); - } - - populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3); - GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2); - - GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1, - TEST_MEM_DATA_PAT3)); - GUEST_DONE(); -} - -static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1) -{ - void *test_area_hva = 
addr_gpa2hva(vm, TEST_AREA_GPA); - void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET); - uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET); - uint64_t guest_test_size = GUEST_TEST_MEM_SIZE; - - switch (uc_arg1) { - case GUEST_STARTED: - populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4); - VM_STAGE_PROCESSED(GUEST_STARTED); - break; - case GUEST_PRIVATE_MEM_POPULATED: - TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, - TEST_MEM_DATA_PAT4), "failed"); - VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED); - break; - case GUEST_SHARED_MEM_POPULATED: - TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, - TEST_MEM_DATA_PAT2), "failed"); - populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5); - VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED); - break; - case GUEST_PRIVATE_MEM_POPULATED2: - TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4, - TEST_MEM_DATA_PAT5), "failed"); - VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2); - break; - case GUEST_IMPLICIT_MEM_CONV1: - /* - * For first implicit conversion, memory is already private so - * mark it private again just to zap the pte entries for the gpa - * range, so that subsequent accesses from the guest will - * generate ept/npt fault and memory conversion path will be - * exercised by KVM. - */ - vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, - ALLOCATE_MEM); - VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1); - break; - case GUEST_IMPLICIT_MEM_CONV2: - /* - * For second implicit conversion, memory is already shared so - * mark it shared again just to zap the pte entries for the gpa - * range, so that subsequent accesses from the guest will - * generate ept/npt fault and memory conversion path will be - * exercised by KVM. 
- */ - vm_update_private_mem(vm, guest_mem_gpa, guest_test_size, - UNBACK_MEM); - VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2); - break; - default: - TEST_FAIL("Unknown stage %d\n", uc_arg1); - break; - } -} - -static void guest_explicit_conv_test_fn(void) -{ - guest_conv_test_fn(true); -} - -static void guest_implicit_conv_test_fn(void) -{ - guest_conv_test_fn(false); -} - -static void execute_memory_conversion_test(void) -{ - struct vm_setup_info info; - struct test_setup_info *test_info = &info.test_info; - - info.vm_mem_src = VM_MEM_SRC_ANONYMOUS; - info.memslot0_pages = VM_MEMSLOT0_PAGES; - test_info->test_area_gpa = TEST_AREA_GPA; - test_info->test_area_size = TEST_AREA_SIZE; - test_info->test_area_slot = TEST_AREA_SLOT; - test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS; - info.ioexit_cb = conv_test_ioexit_fn; - - info.guest_fn = guest_explicit_conv_test_fn; - execute_vm_with_private_mem(&info); - - info.guest_fn = guest_implicit_conv_test_fn; - execute_vm_with_private_mem(&info); -} +#include int main(int argc, char *argv[]) { /* Tell stdout not to buffer its content */ setbuf(stdout, NULL); - execute_memory_conversion_test(); + execute_memory_conversion_tests(); return 0; } From patchwork Tue Aug 30 22:42:59 2022
Date: Tue, 30 Aug 2022 22:42:59 +0000 In-Reply-To: <20220830224259.412342-1-vannapurve@google.com> References: <20220830224259.412342-1-vannapurve@google.com> Message-ID: <20220830224259.412342-9-vannapurve@google.com> Subject: [RFC V2 PATCH 8/8] selftests: kvm: Add private memory test for SEV VMs From: Vishal Annapurve To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org Add a selftest placeholder for executing private memory conversion tests with SEV VMs.
Signed-off-by: Vishal Annapurve --- tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 1 + .../kvm/x86_64/sev_private_mem_test.c | 21 +++++++++++++++++++ 3 files changed, 23 insertions(+) create mode 100644 tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index 095b67dc632e..757d4cac19b4 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -37,6 +37,7 @@ /x86_64/set_sregs_test /x86_64/sev_all_boot_test /x86_64/sev_migrate_tests +/x86_64/sev_private_mem_test /x86_64/smm_test /x86_64/state_test /x86_64/svm_vmcall_test diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 36874fedff4a..3f8030c46b72 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -98,6 +98,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test +TEST_GEN_PROGS_x86_64 += x86_64/sev_private_mem_test TEST_GEN_PROGS_x86_64 += x86_64/smm_test TEST_GEN_PROGS_x86_64 += x86_64/state_test TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test diff --git a/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c new file mode 100644 index 000000000000..2c8edbaef627 --- /dev/null +++ b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2022, Google LLC. + */ +#define _GNU_SOURCE /* for program_invocation_short_name */ +#include +#include +#include + +#include +#include +#include + +int main(int argc, char *argv[]) +{ + /* Tell stdout not to buffer its content */ + setbuf(stdout, NULL); + + execute_sev_memory_conversion_tests(); + return 0; +}