From patchwork Fri Aug 19 17:46:54 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12949037
Date: Fri, 19 Aug 2022 17:46:54 +0000
Message-Id: <20220819174659.2427983-2-vannapurve@google.com>
In-Reply-To: <20220819174659.2427983-1-vannapurve@google.com>
References: <20220819174659.2427983-1-vannapurve@google.com>
Subject: [RFC V3 PATCH 1/6] kvm: x86: Add support for testing private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
    jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
    bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
    yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com,
    aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com,
    corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
    akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
    yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
    michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
    ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
    marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
    seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
    axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
    bgardon@google.com, Vishal Annapurve
List-ID: linux-kselftest@vger.kernel.org

Introduce the HAVE_KVM_PRIVATE_MEM_TESTING config to allow testing the
fd-based approach to private memory with non-confidential selftest VMs.
To support this testing, a few important aspects need to be considered
from the perspective of selftests:

* A non-confidential VM should be able to boot from private memory:
  KVM_MEMORY_ENCRYPT_REG_REGION is modified to support populating the
  initial private memory contents.
* A non-confidential VM needs to convey its access type to KVM:
  Introduce HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING to allow the guest to
  convey the gpa ranges that should be treated as accessed via
  private/shared access. Memslots are modified to carry a shared bitmap
  that is toggled when the guest issues the KVM_HC_MAP_GPA_RANGE
  hypercall with the newly introduced attributes.
* arch_private_mem_supported and kvm_is_fault_private are added/updated
  to allow the private memory logic to work with non-confidential VM
  selftests.

Signed-off-by: Vishal Annapurve
---
 arch/x86/include/uapi/asm/kvm_para.h |  2 +
 arch/x86/kvm/Kconfig                 |  1 +
 arch/x86/kvm/mmu/mmu.c               | 19 ++++++
 arch/x86/kvm/mmu/mmu_internal.h      |  2 +-
 arch/x86/kvm/x86.c                   | 67 ++++++++++++++++++++-
 include/linux/kvm_host.h             | 12 ++++
 virt/kvm/Kconfig                     |  9 +++
 virt/kvm/kvm_main.c                  | 90 +++++++++++++++++++++++++++-
 8 files changed, 198 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 6e64b27b2c1e..541b7cbb3167 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -102,6 +102,8 @@ struct kvm_clock_pairing {
 #define KVM_MAP_GPA_RANGE_PAGE_SZ_2M	(1 << 0)
 #define KVM_MAP_GPA_RANGE_PAGE_SZ_1G	(1 << 1)
 #define KVM_MAP_GPA_RANGE_ENC_STAT(n)	(n << 4)
+#define KVM_MARK_GPA_RANGE_ENC_ACCESS	(1 << 8)
+#define KVM_CLR_GPA_RANGE_ENC_ACCESS	(1 << 9)
 #define KVM_MAP_GPA_RANGE_ENCRYPTED	KVM_MAP_GPA_RANGE_ENC_STAT(1)
 #define KVM_MAP_GPA_RANGE_DECRYPTED	KVM_MAP_GPA_RANGE_ENC_STAT(0)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 05861b9656a4..49b3d527e793 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -49,6 +49,7 @@ config KVM
 	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select HAVE_KVM_PRIVATE_MEM if X86_64
+	select HAVE_KVM_PRIVATE_MEM_TESTING if HAVE_KVM_PRIVATE_MEM
 	select MEMFILE_NOTIFIER if HAVE_KVM_PRIVATE_MEM
 	select XARRAY_MULTI if HAVE_KVM_PRIVATE_MEM
 	help

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 27dbdd4fe8d1..aee6f07e7a8d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4236,6 +4236,24 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 		mmu_updating_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn);
 }

+static bool kvm_is_fault_private(struct kvm *kvm,
+				 struct kvm_page_fault *fault)
+{
+	struct kvm_memory_slot *slot = fault->slot;
+	bool is_private;
+
+	if (!kvm_slot_can_be_private(slot))
+		return false;
+
+	mutex_lock(&kvm->slots_lock);
+	if (slot->shared_bitmap)
+		is_private = !test_bit(fault->gfn - slot->base_gfn, slot->shared_bitmap);
+	else
+		is_private = false;
+	mutex_unlock(&kvm->slots_lock);
+
+	return is_private;
+}
+
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
@@ -4245,6 +4263,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault

 	fault->gfn = fault->addr >> PAGE_SHIFT;
 	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
+	fault->is_private = kvm_is_fault_private(vcpu->kvm, fault);

 	if (page_fault_handle_page_track(vcpu, fault))
 		return RET_PF_EMULATE;

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index fb9c298abcf0..c198fe90f421 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -188,7 +188,7 @@ struct kvm_page_fault {

 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
-	const bool is_private;
+	bool is_private;
 	const bool nx_huge_page_workaround_enabled;

 	/*

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 77d16b90045c..08a5e5b63119 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9436,6 +9436,46 @@ static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
 	return kvm_skip_emulated_instruction(vcpu);
 }

+#ifdef CONFIG_HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING
+static int kvm_update_guest_access_tracking(struct kvm *kvm, u64 attrs,
+					    gfn_t gfn, u64 npages)
+{
+	int ret = 0;
+	struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn);
+
+	if (!memslot) {
+		pr_err("memslot doesn't exist for 0x%lx\n", gfn);
+		return -KVM_EINVAL;
+	}
+
+	mutex_lock(&kvm->slots_lock);
+	if (!kvm_slot_can_be_private(memslot)) {
+		pr_err("memslot not private for 0x%lx\n", gfn);
+		ret = -KVM_EINVAL;
+		goto err;
+	}
+
+	if (memslot->npages - (gfn - memslot->base_gfn) < npages) {
+		pr_err("memslot length insufficient for gfn 0x%lx pages 0x%lx\n",
+			gfn, npages);
+		ret = -KVM_EINVAL;
+		goto err;
+	}
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) {
+		bitmap_clear(memslot->shared_bitmap, (gfn - memslot->base_gfn),
+			npages);
+	} else {
+		bitmap_set(memslot->shared_bitmap, (gfn - memslot->base_gfn),
+			npages);
+	}
+
+err:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+#endif
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -9503,17 +9543,36 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		break;
 	case KVM_HC_MAP_GPA_RANGE: {
 		u64 gpa = a0, npages = a1, attrs = a2;
+		bool exit_to_userspace = true;
+		gfn_t gfn;

 		ret = -KVM_ENOSYS;
 		if (!(vcpu->kvm->arch.hypercall_exit_enabled & (1 << KVM_HC_MAP_GPA_RANGE)))
 			break;

-		if (!PAGE_ALIGNED(gpa) || !npages ||
-		    gpa_to_gfn(gpa) + npages <= gpa_to_gfn(gpa)) {
+		gfn = gpa_to_gfn(gpa);
+		if (!PAGE_ALIGNED(gpa) || (gfn + npages < gfn) || !npages) {
 			ret = -KVM_EINVAL;
 			break;
 		}

+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
+		if (attrs & (KVM_MARK_GPA_RANGE_ENC_ACCESS | KVM_CLR_GPA_RANGE_ENC_ACCESS)) {
+			exit_to_userspace = false;
+			if (attrs & KVM_MARK_GPA_RANGE_ENC_ACCESS)
+				attrs |= KVM_MAP_GPA_RANGE_ENCRYPTED;
+		}
+#endif
+
+#ifdef CONFIG_HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING
+		ret = kvm_update_guest_access_tracking(vcpu->kvm, attrs,
+				gfn, npages);
+		if (ret)
+			break;
+#endif
+		if (!exit_to_userspace)
+			break;
+
 		vcpu->run->exit_reason        = KVM_EXIT_HYPERCALL;
 		vcpu->run->hypercall.nr       = KVM_HC_MAP_GPA_RANGE;
 		vcpu->run->hypercall.args[0]  = gpa;
@@ -10388,6 +10447,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)

 	preempt_disable();

+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
+	vcpu->kvm->vm_entry_attempted = true;
+#endif
+
 	static_call(kvm_x86_prepare_switch_to_guest)(vcpu);

 	/*

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4e5a0db68799..4508fa0e8fb6 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -328,6 +328,9 @@ struct kvm_vcpu {
 	u64 requests;
 	unsigned long guest_debug;

+	uint64_t priv_gfn;
+	uint64_t priv_pages;
+
 	struct mutex mutex;
 	struct kvm_run *run;

@@ -576,6 +579,7 @@ struct kvm_memory_slot {
 	gfn_t base_gfn;
 	unsigned long npages;
 	unsigned long *dirty_bitmap;
+	unsigned long *shared_bitmap;
 	struct kvm_arch_memory_slot arch;
 	unsigned long userspace_addr;
 	u32 flags;
@@ -602,6 +606,11 @@ static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
 	return ALIGN(memslot->npages, BITS_PER_LONG) / 8;
 }

+static inline unsigned long kvm_shared_bitmap_bytes(struct kvm_memory_slot *memslot)
+{
+	return ALIGN(memslot->npages, BITS_PER_LONG) / 8;
+}
+
 static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *memslot)
 {
 	unsigned long len = kvm_dirty_bitmap_bytes(memslot);
@@ -799,6 +808,9 @@ struct kvm {
 	u32 dirty_ring_size;
 	bool vm_bugged;
 	bool vm_dead;
+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
+	bool vm_entry_attempted;
+#endif

 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 	struct notifier_block pm_notifier;

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index ccaff13cc5b8..f28353c237ac 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -75,3 +75,12 @@ config HAVE_KVM_PM_NOTIFIER

 config HAVE_KVM_PRIVATE_MEM
 	bool
+
+config HAVE_KVM_PRIVATE_MEM_TESTING
+	bool
+	select HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING
+	depends on HAVE_KVM_PRIVATE_MEM
+
+config HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING
+	bool
+	depends on HAVE_KVM_PRIVATE_MEM

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 00cdfda6915f..7597949fe031 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -916,6 +916,54 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)

 #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
 #define KVM_MEM_ATTR_PRIVATE	0x0001
+
+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
+static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start,
+		unsigned long gfn_end)
+{
+	gfn_t gfn;
+	int ret = 0;
+	kvm_pfn_t pfn;
+	struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn_start);
+
+	if (!memslot || !kvm_slot_can_be_private(memslot)) {
+		pr_err("Private memslot not registered for gfn 0x%lx\n",
+			gfn_start);
+		return -EINVAL;
+	}
+
+	mutex_lock(&kvm->slots_lock);
+	for (gfn = gfn_start; gfn <= gfn_end; gfn++) {
+		int order;
+		void *kvaddr;
+
+		ret = kvm_private_mem_get_pfn(memslot, gfn, &pfn, &order);
+		if (ret) {
+			pr_err("pfn not found for 0x%lx\n", gfn);
+			goto err_ret;
+		}
+
+		kvaddr = pfn_to_kaddr(pfn);
+		if (!virt_addr_valid(kvaddr)) {
+			pr_err("Invalid kvaddr 0x%lx\n", (uint64_t)kvaddr);
+			ret = -EINVAL;
+			goto err_ret;
+		}
+
+		ret = kvm_read_guest_page(kvm, gfn, kvaddr, 0, PAGE_SIZE);
+		if (ret) {
+			pr_err("guest read failed 0x%lx\n", ret);
+			goto err_ret;
+		}
+
+		kvm_private_mem_put_pfn(memslot, pfn);
+	}
+err_ret:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+#endif
+
 static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl,
 					     struct kvm_enc_region *region)
 {
@@ -944,6 +992,11 @@ static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl

 	kvm_zap_gfn_range(kvm, start, end + 1);

+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
+	if (!kvm->vm_entry_attempted && (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION))
+		r = kvm_vm_populate_private_mem(kvm, start, end);
+#endif
+
 	return r;
 }

@@ -1046,12 +1099,22 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 	memslot->dirty_bitmap = NULL;
 }

+static void kvm_destroy_shared_bitmap(struct kvm_memory_slot *memslot)
+{
+	if (!memslot->shared_bitmap)
+		return;
+
+	kvfree(memslot->shared_bitmap);
+	memslot->shared_bitmap = NULL;
+}
+
 /* This does not remove the slot from struct kvm_memslots data structures */
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
 	if (slot->flags & KVM_MEM_PRIVATE) {
 		kvm_private_mem_unregister(slot);
 		fput(slot->private_file);
+		kvm_destroy_shared_bitmap(slot);
 	}

 	kvm_destroy_dirty_bitmap(slot);
@@ -1477,6 +1540,19 @@ static int kvm_alloc_dirty_bitmap(struct kvm_memory_slot *memslot)
 	return 0;
 }

+#ifdef CONFIG_HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING
+static int kvm_alloc_shared_bitmap(struct kvm_memory_slot *memslot)
+{
+	unsigned long bitmap_bytes = kvm_shared_bitmap_bytes(memslot);
+
+	memslot->shared_bitmap = __vcalloc(2, bitmap_bytes, GFP_KERNEL_ACCOUNT);
+	if (!memslot->shared_bitmap)
+		return -ENOMEM;
+
+	return 0;
+}
+#endif
+
 static struct kvm_memslots *kvm_get_inactive_memslots(struct kvm *kvm, int as_id)
 {
 	struct kvm_memslots *active = __kvm_memslots(kvm, as_id);
@@ -1612,7 +1688,11 @@ static void kvm_replace_memslot(struct kvm *kvm,

 bool __weak kvm_arch_private_mem_supported(struct kvm *kvm)
 {
+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
+	return true;
+#else
 	return false;
+#endif
 }

 static int check_memory_region_flags(struct kvm *kvm,
@@ -1701,6 +1781,12 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	int r;

 	if (change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE) {
+#ifdef CONFIG_HAVE_KVM_GUEST_PRIVATE_ACCESS_TRACKING
+		r = kvm_alloc_shared_bitmap(new);
+		if (r)
+			return r;
+#endif
+
 		r = kvm_private_mem_register(new);
 		if (r)
 			return r;
@@ -1734,8 +1820,10 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
 		kvm_destroy_dirty_bitmap(new);

-	if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+	if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE) {
 		kvm_private_mem_unregister(new);
+		kvm_destroy_shared_bitmap(new);
+	}

 	return r;
 }
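The new attribute bits let a non-confidential guest tell KVM how a gpa range
will be accessed without forcing an exit to userspace. A minimal guest-side
sketch of how a selftest might use them, assuming the kvm_hypercall() wrapper
from the KVM selftests (see patch 4 in this series); the helper names here are
illustrative, not part of this patch:

    /* Attribute bits added by this patch to KVM_HC_MAP_GPA_RANGE. */
    #define KVM_MARK_GPA_RANGE_ENC_ACCESS	(1 << 8)
    #define KVM_CLR_GPA_RANGE_ENC_ACCESS	(1 << 9)

    /*
     * Mark a gpa range as accessed privately. KVM clears the memslot's
     * shared_bitmap bits for the range and, because only the new
     * access-tracking bits are set, completes the hypercall in the kernel
     * without exiting to userspace.
     */
    static inline uint64_t guest_mark_private_access(uint64_t gpa,
    						     uint64_t npages)
    {
    	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, npages,
    			     KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
    }

    /* Mark a gpa range as accessed via shared (decrypted) mappings. */
    static inline uint64_t guest_clear_private_access(uint64_t gpa,
    						      uint64_t npages)
    {
    	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, npages,
    			     KVM_CLR_GPA_RANGE_ENC_ACCESS, 0);
    }

Subsequent guest faults on the range then consult the shared bitmap via
kvm_is_fault_private() to classify the access.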
From patchwork Fri Aug 19 17:46:55 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12949033
Date: Fri, 19 Aug 2022 17:46:55 +0000
Message-Id: <20220819174659.2427983-3-vannapurve@google.com>
In-Reply-To: <20220819174659.2427983-1-vannapurve@google.com>
Subject: [RFC V3 PATCH 2/6] selftests: kvm: Add support for private memory
From: Vishal Annapurve

Add support for registering private memory with KVM using the
KVM_SET_USER_MEMORY_REGION ioctl. A helper function to query the
extended userspace memory region is introduced to allow memory
conversion.

Signed-off-by: Vishal Annapurve
---
 .../selftests/kvm/include/kvm_util_base.h  | 12 +++++-
 tools/testing/selftests/kvm/lib/kvm_util.c | 40 ++++++++++++++++++-
 2 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index ff2533a32af9..dfe454f228e7 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -30,7 +30,10 @@ typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
 typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */

 struct userspace_mem_region {
-	struct kvm_userspace_memory_region region;
+	union {
+		struct kvm_userspace_memory_region region;
+		struct kvm_userspace_memory_region_ext region_ext;
+	};
 	struct sparsebit *unused_phy_pages;
 	struct sparsebit *encrypted_phy_pages;
 	int fd;
@@ -200,7 +203,7 @@ static inline bool kvm_has_cap(long cap)

 #define kvm_do_ioctl(fd, cmd, arg)						\
 ({										\
-	static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd), "");	\
+	static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) >= _IOC_SIZE(cmd), "");	\
 	ioctl(fd, cmd, arg);							\
 })

@@ -388,6 +391,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
+
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
@@ -717,6 +721,10 @@ struct kvm_userspace_memory_region *
 kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
 				 uint64_t end);

+struct kvm_userspace_memory_region_ext *
+kvm_userspace_memory_region_ext_find(struct kvm_vm *vm, uint64_t start,
+				     uint64_t end);
+
 #define sync_global_to_guest(vm, g) ({				\
 	typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g));	\
 	memcpy(_p, &(g), sizeof(g));				\

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 171bc20a097a..f153c71d6988 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -17,6 +17,7 @@
 #include
 #include

+#define MFD_INACCESSIBLE 0x0008U
 #define KVM_UTIL_MIN_PFN	2

 static int vcpu_mmap_sz(void);
@@ -466,6 +467,35 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
 	return &region->region;
 }

+/*
+ * KVM Userspace Memory Region Ext Find
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   start - Starting VM physical address
+ *   end - Ending VM physical address, inclusive.
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Pointer to overlapping ext region, NULL if no such region.
+ *
+ * Public interface to userspace_mem_region_find. Allows tests to look up
+ * the memslot datastructure for a given range of guest physical memory.
+ */
+struct kvm_userspace_memory_region_ext *
+kvm_userspace_memory_region_ext_find(struct kvm_vm *vm, uint64_t start,
+				     uint64_t end)
+{
+	struct userspace_mem_region *region;
+
+	region = userspace_mem_region_find(vm, start, end);
+	if (!region)
+		return NULL;
+
+	return &region->region_ext;
+}
+
 /*
  * VM VCPU Remove
  *
@@ -764,6 +794,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	struct userspace_mem_region *region;
 	size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
 	size_t alignment;
+	int priv_memfd = -1;

 	TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
 		"Number of guest pages is not compatible with the host. "
@@ -869,6 +900,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			    vm_mem_backing_src_alias(src_type)->name);
 	}

+	if (flags & KVM_MEM_PRIVATE) {
+		priv_memfd = memfd_create("vm_private_mem_", MFD_INACCESSIBLE);
+		TEST_ASSERT(priv_memfd != -1, "Failed to create private memfd");
+	}
+
 	region->unused_phy_pages = sparsebit_alloc();
 	region->encrypted_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
@@ -878,7 +914,9 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	region->region.guest_phys_addr = guest_paddr;
 	region->region.memory_size = npages * vm->page_size;
 	region->region.userspace_addr = (uintptr_t) region->host_mem;
-	ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region->region);
+	region->region_ext.private_fd = priv_memfd;
+	region->region_ext.private_offset = 0;
+	ret = ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region->region_ext);
 	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n"
 		"  rc: %i errno: %i\n"
 		"  slot: %u flags: 0x%x\n"
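With this change, a test can request a private memslot simply by passing
KVM_MEM_PRIVATE; the library creates the MFD_INACCESSIBLE memfd and issues
the extended ioctl internally. A short host-side usage sketch, assuming the
selftest library above; the gpa, slot number, and page count are illustrative
values, not defaults:

    /* Add a 2 MiB private memslot at an illustrative gpa/slot. */
    vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
    			    0xc0000000 /* guest_paddr */, 10 /* slot */,
    			    512 /* npages */, KVM_MEM_PRIVATE);

    /* Later, look up the extended region to reach the private fd. */
    struct kvm_userspace_memory_region_ext *ext =
    	kvm_userspace_memory_region_ext_find(vm, 0xc0000000,
    					     0xc0000000 + 512 * 4096);
    TEST_ASSERT(ext && ext->private_fd >= 0, "private memslot not found");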
From patchwork Fri Aug 19 17:46:56 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12949032
Date: Fri, 19 Aug 2022 17:46:56 +0000
Message-Id: <20220819174659.2427983-4-vannapurve@google.com>
In-Reply-To: <20220819174659.2427983-1-vannapurve@google.com>
Subject: [RFC V3 PATCH 3/6] selftests: kvm: ucall: Allow querying ucall pool gpa
From: Vishal Annapurve

Add a helper to query the guest physical address of the ucall pool so
that the guest can mark the page as accessed shared or private.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/include/ucall_common.h |  2 ++
 tools/testing/selftests/kvm/lib/ucall_common.c     | 12 ++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index 279bbab011c7..2c6e5c4df012 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -31,6 +31,8 @@ void ucall_arch_uninit(struct kvm_vm *vm);
 void ucall_arch_do_ucall(vm_vaddr_t uc);
 void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);

+vm_paddr_t get_ucall_pool_paddr(void);
+
 void ucall(uint64_t cmd, int nargs, ...);
 uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 5a15fa39cd51..4d2abef8ee77 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -11,6 +11,7 @@ struct ucall_header {

 static bool use_ucall_pool;
 static struct ucall_header *ucall_pool;
+static vm_paddr_t ucall_page_paddr;

 void ucall_init(struct kvm_vm *vm, void *arg)
 {
@@ -35,7 +36,10 @@ void ucall_init(struct kvm_vm *vm, void *arg)
 	}

 	ucall_pool = (struct ucall_header *)vaddr;
+	ucall_page_paddr = addr_gva2gpa(vm, vaddr);
 	sync_global_to_guest(vm, ucall_pool);
+	sync_global_to_guest(vm, ucall_page_paddr);
+	printf("ucall_page_paddr 0x%lx\n", ucall_page_paddr);

 out:
 	ucall_arch_init(vm, arg);
@@ -54,6 +58,14 @@ void ucall_uninit(struct kvm_vm *vm)
 	ucall_arch_uninit(vm);
 }

+vm_paddr_t get_ucall_pool_paddr(void)
+{
+	if (!use_ucall_pool)
+		return 0;
+
+	return ucall_page_paddr;
+}
+
 static struct ucall *ucall_alloc(void)
 {
 	struct ucall *uc = NULL;
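Because accesses to a private memslot default to private, the guest must mark
the ucall page as shared before using it to communicate with the host. A
minimal guest-side sketch combining this helper with the patch-1 attribute
bits; patch 5 in this series wraps the same logic as
guest_map_ucall_page_shared(), and the one-page size here is an assumption:

    static void guest_setup_ucall_as_shared(void)
    {
    	vm_paddr_t ucall_paddr = get_ucall_pool_paddr();

    	/* Mark one page of the ucall pool as accessed via shared mappings. */
    	uint64_t ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, ucall_paddr,
    				     1 /* npages */,
    				     KVM_CLR_GPA_RANGE_ENC_ACCESS, 0);
    	GUEST_ASSERT_1(!ret, ret);
    }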
From patchwork Fri Aug 19 17:46:57 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12949031
Date: Fri, 19 Aug 2022 17:46:57 +0000
Message-Id: <20220819174659.2427983-5-vannapurve@google.com>
In-Reply-To: <20220819174659.2427983-1-vannapurve@google.com>
Subject: [RFC V3 PATCH 4/6] selftests: kvm: x86: Execute hypercall as per the cpu
From: Vishal Annapurve

Add support for executing the vmmcall/vmcall instruction on AMD/Intel
CPUs. In general, KVM patches the instruction according to the CPU
implementation at runtime, but while executing selftest VMs from private
memory, KVM will not be able to update the guest's private memory.

Hypercall parameters are fixed to explicitly populate the hypercall
number in eax. Otherwise, inlined calls to kvm_hypercall would execute
the vmmcall/vmcall instruction without updating eax with the hypercall
number.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 53b115876417..09d757a0b148 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1254,10 +1254,21 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
 		       uint64_t a3)
 {
 	uint64_t r;
+	static bool is_cpu_checked;
+	static bool is_cpu_amd;

-	asm volatile("vmcall"
+	if (!is_cpu_checked)
+		is_cpu_amd = is_amd_cpu();
+
+	if (is_cpu_amd) {
+		asm volatile("vmmcall"
+			     : "=a"(r)
+			     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+	} else {
+		asm volatile("vmcall"
 		     : "=a"(r)
-		     : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+			     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+	}
 	return r;
 }
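The key change is the "a"(nr) input constraint: it guarantees the hypercall
number lands in rax even when kvm_hypercall() is inlined, so KVM does not
need to patch the instruction bytes inside (unwritable) private guest memory.
An illustrative guest-side call under that assumption; the gpa value is
made up for the example:

    /* Request conversion of one page to shared via the fixed hypercall. */
    uint64_t ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE,
    			     0x100000 /* gpa, illustrative */,
    			     1 /* npages */,
    			     KVM_MAP_GPA_RANGE_DECRYPTED, 0);
    GUEST_ASSERT_1(!ret, ret);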
From patchwork Fri Aug 19 17:46:58 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12949035
Date: Fri, 19 Aug 2022 17:46:58 +0000
Message-Id: <20220819174659.2427983-6-vannapurve@google.com>
In-Reply-To: <20220819174659.2427983-1-vannapurve@google.com>
Subject: [RFC V3 PATCH 5/6] selftests: kvm: x86: Execute VMs with private memory
From: Vishal Annapurve

Introduce a set of APIs to execute a VM with private memslots.

Host userspace APIs for:
1) Setting up and executing a VM that has private memslots
2) Backing/unbacking guest private memory

Guest APIs for:
1) Changing memory access type and memory type
2) Setting the ucall page access type as shared

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/Makefile           |   1 +
 .../kvm/include/x86_64/private_mem.h           |  51 +++
 .../selftests/kvm/lib/x86_64/private_mem.c     | 297 ++++++++++++++++++
 3 files changed, 349 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 41fd1171fbd8..8fe72a60aef0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -51,6 +51,7 @@ LIBKVM += lib/ucall_common.c
 LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/perf_test_util.c
+LIBKVM_x86_64 += lib/x86_64/private_mem.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
new file mode 100644
index 000000000000..00cab7b84f2c
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PRIVATE_MEM_H
+#define SELFTEST_KVM_PRIVATE_MEM_H
+
+#include
+#include
+
+enum mem_conversion_type {
+	TO_PRIVATE,
+	TO_SHARED
+};
+
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
+	uint64_t size);
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
+	uint64_t size);
+
+void guest_map_ucall_page_shared(void);
+
+enum mem_op {
+	ALLOCATE_MEM,
+	UNBACK_MEM
+};
+
+void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+	enum mem_op op);
+
+typedef void (*guest_code_fn)(void);
+typedef void (*io_exit_handler)(struct kvm_vm *vm, uint32_t uc_arg1);
+
+struct test_setup_info {
+	uint64_t test_area_gpa;
+	uint64_t test_area_size;
+	uint32_t test_area_slot;
+};
+
+struct vm_setup_info {
+	enum vm_mem_backing_src_type vm_mem_src;
+	uint32_t memslot0_pages;
+	struct test_setup_info test_info;
+	guest_code_fn guest_fn;
+	io_exit_handler ioexit_cb;
+};
+
+void execute_vm_with_private_mem(struct vm_setup_info *info);
+
+#endif /* SELFTEST_KVM_PRIVATE_MEM_H */

diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
new file mode 100644
index 000000000000..000584219045
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+/*
+ * Execute KVM hypercall to change memory access type for a given gpa range.
+ *
+ * Input Args:
+ *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gpa - starting gpa address
+ *   size - size of the range starting from gpa for which memory access needs
+ *     to be changed
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by guest logic in selftests to update the memory access type
+ * for a given gpa range. This API is useful in exercising implicit conversion
+ * path.
+ */
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
+	uint64_t size)
+{
+	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
+		type == TO_PRIVATE ? KVM_MARK_GPA_RANGE_ENC_ACCESS :
+			KVM_CLR_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(!ret, ret);
+}
+
+/*
+ * Execute KVM hypercall to change memory type for a given gpa range.
+ *
+ * Input Args:
+ *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gpa - starting gpa address
+ *   size - size of the range starting from gpa for which memory type needs
+ *     to be changed
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by guest logic in selftests to update the memory type for a
+ * given gpa range. This API is useful in exercising explicit conversion path.
+ */
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
+	uint64_t size)
+{
+	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
+		type == TO_PRIVATE ? KVM_MAP_GPA_RANGE_ENCRYPTED :
+			KVM_MAP_GPA_RANGE_DECRYPTED, 0);
+	GUEST_ASSERT_1(!ret, ret);
+}
+
+/*
+ * Execute KVM hypercall to change memory access type for ucall page.
+ *
+ * Input Args: None
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by guest logic in selftests to update the memory access type
+ * for ucall page since by default all the accesses from guest to private
+ * memslot are treated as private accesses.
+ */
+void guest_map_ucall_page_shared(void)
+{
+	vm_paddr_t ucall_paddr = get_ucall_pool_paddr();
+
+	guest_update_mem_access(TO_SHARED, ucall_paddr, 1 << MIN_PAGE_SHIFT);
+}
+
+/*
+ * Execute KVM ioctl to back/unback private memory for given gpa range.
+ *
+ * Input Args:
+ *   vm - kvm_vm handle
+ *   gpa - starting gpa address
+ *   size - size of the gpa range
+ *   op - mem_op indicating whether private memory needs to be allocated or
+ *     unbacked
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to back/unback private
+ * memory for gpa ranges. This function is useful to setup initial boot private
+ * memory and then convert memory during runtime.
+ */
+void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+	enum mem_op op)
+{
+	int priv_memfd;
+	uint64_t priv_offset, guest_phys_base, fd_offset;
+	struct kvm_enc_region enc_region;
+	struct kvm_userspace_memory_region_ext *region_ext;
+	struct kvm_userspace_memory_region *region;
+	int fallocate_mode = 0;
+	int ret;
+
+	region_ext = kvm_userspace_memory_region_ext_find(vm, gpa, gpa + size);
+	TEST_ASSERT(region_ext != NULL, "Region not found");
+	region = &region_ext->region;
+	TEST_ASSERT(region->flags & KVM_MEM_PRIVATE,
+		"Can not update private memfd for non-private memslot\n");
+	priv_memfd = region_ext->private_fd;
+	priv_offset = region_ext->private_offset;
+	guest_phys_base = region->guest_phys_addr;
+	fd_offset = priv_offset + (gpa - guest_phys_base);
+
+	if (op == UNBACK_MEM)
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE);
+
+	printf("priv_memfd %d fallocate_mode 0x%x for offset 0x%lx size 0x%lx\n",
+		priv_memfd, fallocate_mode, fd_offset, size);
+	ret = fallocate(priv_memfd, fallocate_mode, fd_offset, size);
+	TEST_ASSERT(ret == 0, "fallocate failed\n");
+	enc_region.addr = gpa;
+	enc_region.size = size;
+	if (op == ALLOCATE_MEM) {
+		printf("doing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
+		vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &enc_region);
+	} else {
+		printf("undoing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
+		vm_ioctl(vm, KVM_MEMORY_ENCRYPT_UNREG_REGION, &enc_region);
+	}
+}
+
+static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
+	volatile struct kvm_run *run)
+{
+	uint64_t gpa, npages, attrs, size;
+
+	TEST_ASSERT(run->hypercall.nr == KVM_HC_MAP_GPA_RANGE,
+		"Unhandled Hypercall %lld\n", run->hypercall.nr);
+	gpa = run->hypercall.args[0];
+	npages = run->hypercall.args[1];
+	size = npages << MIN_PAGE_SHIFT;
+	attrs = run->hypercall.args[2];
+	pr_info("Explicit conversion off 0x%lx size 0x%lx to %s\n", gpa, size,
+		(attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) ? "private" : "shared");
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+		vm_update_private_mem(vm, gpa, size, ALLOCATE_MEM);
+	else
+		vm_update_private_mem(vm, gpa, size, UNBACK_MEM);
+
+	run->hypercall.ret = 0;
+}
+
+static void handle_vm_exit_memory_error(struct kvm_vm *vm, volatile struct kvm_run *run)
+{
+	uint64_t gpa, size, flags;
+
+	gpa = run->memory.gpa;
+	size = run->memory.size;
+	flags = run->memory.flags;
+	pr_info("Implicit conversion off 0x%lx size 0x%lx to %s\n", gpa, size,
+		(flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) ? "private" : "shared");
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
+		vm_update_private_mem(vm, gpa, size, ALLOCATE_MEM);
+	else
+		vm_update_private_mem(vm, gpa, size, UNBACK_MEM);
+}
+
+static void vcpu_work(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
+	struct vm_setup_info *info)
+{
+	volatile struct kvm_run *run;
+	struct ucall uc;
+	uint64_t cmd;
+
+	/*
+	 * Loop until the guest is done.
+	 */
+	run = vcpu->run;
+
+	while (true) {
+		vcpu_run(vcpu);
+
+		if (run->exit_reason == KVM_EXIT_IO) {
+			cmd = get_ucall(vcpu, &uc);
+			if (cmd != UCALL_SYNC)
+				break;
+
+			TEST_ASSERT(info->ioexit_cb, "ioexit cb not present");
+			info->ioexit_cb(vm, uc.args[1]);
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
+			handle_vm_exit_map_gpa_hypercall(vm, run);
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
+			handle_vm_exit_memory_error(vm, run);
+			continue;
+		}
+
+		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
+		break;
+	}
+
+	if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
+		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
+			__FILE__, uc.args[1], uc.args[2]);
+}
+
+/*
+ * Execute guest vm with private memory memslots.
+ *
+ * Input Args:
+ *   info - pointer to a structure containing information about setting up a VM
+ *     with private memslots
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to execute guest vm
+ * logic. It will install two memslots:
+ * 1) memslot 0 : containing all the boot code/stack pages
+ * 2) test_mem_slot : containing the region of memory that would be used to test
+ *    private/shared memory accesses to a memory backed by private memslots
+ */
+void execute_vm_with_private_mem(struct vm_setup_info *info)
+{
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+	uint32_t memslot0_pages = info->memslot0_pages;
+	uint64_t test_area_gpa, test_area_size;
+	struct test_setup_info *test_info = &info->test_info;
+
+	vm = vm_create_barebones();
+	vm_set_memory_encryption(vm, true, false, 0);
+	vm->use_ucall_pool = true;
+	vm_userspace_mem_region_add(vm, info->vm_mem_src, 0, 0,
+		memslot0_pages, KVM_MEM_PRIVATE);
+	kvm_vm_elf_load(vm, program_invocation_name);
+	vm_create_irqchip(vm);
+	TEST_ASSERT(info->guest_fn, "guest_fn not present");
+	vcpu = vm_vcpu_add(vm, 0, info->guest_fn);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	TEST_ASSERT(test_info->test_area_size, "Test mem size not present");
+
+	test_area_size = test_info->test_area_size;
+	test_area_gpa = test_info->test_area_gpa;
+	vm_userspace_mem_region_add(vm, info->vm_mem_src, test_area_gpa,
+		test_info->test_area_slot, test_area_size / vm->page_size,
+		KVM_MEM_PRIVATE);
+	vm_update_private_mem(vm, test_area_gpa, test_area_size, ALLOCATE_MEM);
+
+	pr_info("Mapping test memory pages 0x%zx page_size 0x%x\n",
+		test_area_size/vm->page_size, vm->page_size);
+	virt_map(vm, test_area_gpa, test_area_gpa, test_area_size/vm->page_size);
+
+	ucall_init(vm, NULL);
+	vm_update_private_mem(vm, 0, (memslot0_pages << MIN_PAGE_SHIFT), ALLOCATE_MEM);
+
+	vcpu_work(vm, vcpu, info);
+
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 479C7C32773 for ; Fri, 19 Aug 2022 18:01:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350289AbiHSSBG (ORCPT ); Fri, 19 Aug 2022 14:01:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52520 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1351535AbiHSSAp (ORCPT ); Fri, 19 Aug 2022 14:00:45 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 03B1610AE10 for ; Fri, 19 Aug 2022 10:47:51 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id q6-20020a17090311c600b0017266460b8fso3105593plh.4 for ; Fri, 19 Aug 2022 10:47:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:from:to:cc; bh=PxVtU6+p3kWLeEs1td/aqyfvaiMfUwekByaLOvsmjdI=; b=GxcY96k0edncpxD8CK+MfHYucHXbsAzy04n7ijQ8wdUVduEybdkgYmfnbO+bQKAcfl 15prnpY+ZH3MZA6XtCPnYZPe13bnZu+tYJMbJehrU7vFM1hyQpEO92FJ4AGj1UaubffN ZHnCoWh0Y06a1rj9Cbf8NEy2qrAtUeyRrdLGTn4qOK2iu7WN/Zel9xhksgFk5J9gJE+B ztygQInAJYZBLl2zbHnTYzW2cZtdTuyslAAz2yD0Pxkxo3SsuBkmSDeIceAKZARgNIde HNJ3dxT4MgsAdiVu96A6DaGxHxSqATa7KMkugo78Xl0FDxRDwI/OCiXWEJIvia4Y5LHI 7kYw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:x-gm-message-state:from:to:cc; bh=PxVtU6+p3kWLeEs1td/aqyfvaiMfUwekByaLOvsmjdI=; b=0c+nRxSX5JLp1ixH39czBXI2gN6bCxFFBLibYznldBMJ7cPfFDKTIBqXfYKmM18ijv Erefw9QKuC1HhfESyrgxtRbIlzUmRBiPqcIaZQF90YkFRJHPjnGRJOTq7uU0qQ+JccjP ydvIV0MNC5/vQKrhWP4Uou3ed2ym8jAFpvG7GpxedIaGH7CKBjBSUUI0gbIGUlh572ny SY41xjE/O1/B+F+CUTXlL0t300aP88J3QQjGatcF9u4/kXnnAxltDaP6s9gRMU0q92WM 8sECzt2UzDeikUq1TM1VDvRNxMoGC2srNs7BSx6kQ0PrvxKLH/FGXNnAoWct8tXnlPPM jFBw== X-Gm-Message-State: ACgBeo3mpbW4bgbhcAl6DT+i3B6mx6kajhJHGfG2lar3NNLC/5PTLA7t D5zowaijmmFbcREHqz1ejF/pUiidZd5tHSlf X-Google-Smtp-Source: AA6agR4HGS4N/yFbtWqVnVgo8ZXDiZjUD94j+FDzDmfN+hdPI0Lqgg3Jg+CN+fENgQXPBR9kyG4d1DjWSAuEJRME X-Received: from vannapurve2.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:41f8]) (user=vannapurve job=sendgmr) by 2002:a05:6a00:1aca:b0:52f:55f8:c3ec with SMTP id f10-20020a056a001aca00b0052f55f8c3ecmr8885306pfv.25.1660931271154; Fri, 19 Aug 2022 10:47:51 -0700 (PDT) Date: Fri, 19 Aug 2022 17:46:59 +0000 In-Reply-To: <20220819174659.2427983-1-vannapurve@google.com> Message-Id: <20220819174659.2427983-7-vannapurve@google.com> Mime-Version: 1.0 References: <20220819174659.2427983-1-vannapurve@google.com> X-Mailer: git-send-email 2.37.1.595.g718a3a8f04-goog Subject: [RFC V3 PATCH 6/6] sefltests: kvm: x86: Add selftest for private memory From: Vishal Annapurve To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, yang.zhong@intel.com, drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com, wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net, hughd@google.com, jlayton@kernel.org, bfields@fieldses.org, 
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
 seanjc@google.com, diviness@google.com, maz@kernel.org, dmatlack@google.com,
 axelrasmussen@google.com, maciej.szmigiero@oracle.com, mizhang@google.com,
 bgardon@google.com, Vishal Annapurve

Add a selftest to exercise implicit/explicit conversion functionality within
KVM and verify:
1) Shared memory is visible to host userspace after conversion
2) Private memory is not visible to host userspace before/after conversion
3) Host userspace and guest can communicate over shared memory

Signed-off-by: Vishal Annapurve
---
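guest_update_mem_map() and guest_update_mem_access() used in the test below
come from the private_mem selftest library added earlier in this series. For
context, an explicit to-shared conversion boils down to the MAP_GPA_RANGE
hypercall; a rough sketch follows (helper name hypothetical, kvm_hypercall()
is the standard x86 selftest wrapper):

/* Hypothetical sketch of an explicit guest-to-shared conversion request. */
static inline void sketch_guest_map_shared(uint64_t gpa, uint64_t size)
{
	/* KVM_HC_MAP_GPA_RANGE args: gpa, number of pages, attributes. */
	kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
		      KVM_MAP_GPA_RANGE_DECRYPTED, 0);
}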
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/private_mem_test.c   | 262 ++++++++++++++++++
 3 files changed, 264 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/private_mem_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index ec084a61a819..095b67dc632e 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -31,6 +31,7 @@
 /x86_64/monitor_mwait_test
 /x86_64/nx_huge_pages_test
 /x86_64/platform_info_test
+/x86_64/private_mem_test
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 8fe72a60aef0..c5fc8ea2c843 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -94,6 +94,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/mmu_role_test
 TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
+TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
new file mode 100644
index 000000000000..9f491e2a16bb
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * tools/testing/selftests/kvm/x86_64/private_mem_test.c
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/kvm_para.h>
+#include <linux/memfd.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <private_mem.h>
+#include <processor.h>
+
+#define VM_MEMSLOT0_PAGES	(512 * 10)
+
+#define TEST_AREA_SLOT		10
+#define TEST_AREA_GPA		0xC0000000
+#define TEST_AREA_SIZE		(2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET	(1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE	(10 * 4096)
+
+#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PAT1	0x66
+#define TEST_MEM_DATA_PAT2	0x99
+#define TEST_MEM_DATA_PAT3	0x33
+#define TEST_MEM_DATA_PAT4	0xaa
+#define TEST_MEM_DATA_PAT5	0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pat)
+			return false;
+	}
+
+	return true;
+}
+
+/*
+ * Add custom implementation for memset to avoid using standard/builtin memset
+ * which may use features like SSE/GOT that don't work with guest vm execution
+ * within selftests.
+ */
+void *memset(void *mem, int byte, size_t size)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++)
+		buf[i] = byte;
+
+	return buf;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pat)
+{
+	memset(test_area_base, pat, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat)
+{
+	memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pat,
+	uint64_t guest_pat)
+{
+	void *test_area1_base = test_area_base;
+	uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET;
+	void *guest_test_mem = test_area_base + test_area1_size;
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+	void *test_area2_base = guest_test_mem + guest_test_size;
+	uint64_t test_area2_size = (TEST_AREA_SIZE -
+		(GUEST_TEST_MEM_OFFSET + GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) &&
+		verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pat));
+}
+
+#define GUEST_STARTED			0
+#define GUEST_PRIVATE_MEM_POPULATED	1
+#define GUEST_SHARED_MEM_POPULATED	2
+#define GUEST_PRIVATE_MEM_POPULATED2	3
+#define GUEST_IMPLICIT_MEM_CONV1	4
+#define GUEST_IMPLICIT_MEM_CONV2	5
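+
+/*
+ * Layout of the test area implied by the constants above: the ranges
+ * [0, GUEST_TEST_MEM_OFFSET) and
+ * [GUEST_TEST_MEM_OFFSET + GUEST_TEST_MEM_SIZE, TEST_AREA_SIZE) hold a
+ * background pattern, while the GUEST_TEST_MEM_SIZE bytes in between are the
+ * region that is actually converted back and forth; verify_test_area()
+ * checks the three ranges piecewise.
+ */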
+
+/*
+ * Run memory conversion tests supporting two types of conversion:
+ * 1) Explicit: Execute KVM hypercall to map/unmap gpa range, which causes a
+ *    userspace exit to back/unback private memory. Subsequent accesses by the
+ *    guest to the gpa range will not cause an exit to userspace.
+ * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range
+ *    as private/shared without exiting to userspace. Subsequent accesses by
+ *    the guest to the gpa range will result in KVM EPT/NPT faults and then an
+ *    exit to userspace for each page.
+ *
+ * Test memory conversion scenarios with the following steps:
+ * 1) Access private memory using private access and verify that memory
+ *    contents are not visible to userspace.
+ * 2) Convert memory to shared using explicit/implicit conversions and ensure
+ *    that userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit/implicit conversions and
+ *    ensure that userspace is again not able to access converted private
+ *    regions.
+ */
+static void guest_conv_test_fn(bool test_explicit_conv)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	guest_map_ucall_page_shared();
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PAT1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT1));
+
+	if (test_explicit_conv)
+		guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem,
+			guest_test_size);
+	else {
+		guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem,
+			guest_test_size);
+		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1);
+	}
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT5));
+
+	if (test_explicit_conv)
+		guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem,
+			guest_test_size);
+	else {
+		guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem,
+			guest_test_size);
+		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2);
+	}
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT3));
+	GUEST_DONE();
+}
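+
+/*
+ * Host-side counterpart of guest_conv_test_fn(): each GUEST_SYNC(stage)
+ * issued by the guest lands here via the UCALL_SYNC io-exit callback
+ * registered as info->ioexit_cb, so the cases below mirror the guest's
+ * stage ordering.
+ */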
+static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	switch (uc_arg1) {
+	case GUEST_STARTED:
+		populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4);
+		VM_STAGE_PROCESSED(GUEST_STARTED);
+		break;
+	case GUEST_PRIVATE_MEM_POPULATED:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+			TEST_MEM_DATA_PAT4), "failed");
+		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+		break;
+	case GUEST_SHARED_MEM_POPULATED:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+			TEST_MEM_DATA_PAT2), "failed");
+		populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5);
+		VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+		break;
+	case GUEST_PRIVATE_MEM_POPULATED2:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+			TEST_MEM_DATA_PAT5), "failed");
+		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+		break;
+	case GUEST_IMPLICIT_MEM_CONV1:
+		/*
+		 * For the first implicit conversion, memory is already private,
+		 * so mark it private again just to zap the pte entries for the
+		 * gpa range, so that subsequent accesses from the guest will
+		 * generate an ept/npt fault and the memory conversion path
+		 * will be exercised by KVM.
+		 */
+		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
+			ALLOCATE_MEM);
+		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1);
+		break;
+	case GUEST_IMPLICIT_MEM_CONV2:
+		/*
+		 * For the second implicit conversion, memory is already shared,
+		 * so mark it shared again just to zap the pte entries for the
+		 * gpa range, so that subsequent accesses from the guest will
+		 * generate an ept/npt fault and the memory conversion path
+		 * will be exercised by KVM.
+		 */
+		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
+			UNBACK_MEM);
+		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2);
+		break;
+	default:
+		TEST_FAIL("Unknown stage %d\n", uc_arg1);
+		break;
+	}
+}
+
+static void guest_explicit_conv_test_fn(void)
+{
+	guest_conv_test_fn(true);
+}
+
+static void guest_implicit_conv_test_fn(void)
+{
+	guest_conv_test_fn(false);
+}
+
+static void execute_memory_conversion_test(void)
+{
+	struct vm_setup_info info;
+	struct test_setup_info *test_info = &info.test_info;
+
+	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.memslot0_pages = VM_MEMSLOT0_PAGES;
+	test_info->test_area_gpa = TEST_AREA_GPA;
+	test_info->test_area_size = TEST_AREA_SIZE;
+	test_info->test_area_slot = TEST_AREA_SLOT;
+	info.ioexit_cb = conv_test_ioexit_fn;
+
+	info.guest_fn = guest_explicit_conv_test_fn;
+	execute_vm_with_private_mem(&info);
+
+	info.guest_fn = guest_implicit_conv_test_fn;
+	execute_vm_with_private_mem(&info);
+}
+
+int main(int argc, char *argv[])
+{
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	execute_memory_conversion_test();
+	return 0;
+}
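For completeness: the test takes no arguments and, with the series applied and
the HAVE_KVM_PRIVATE_MEM_TESTING config from patch 1/6 enabled in the host
kernel, it should build and run like the other KVM selftests, e.g.
make -C tools/testing/selftests/kvm, then run x86_64/private_mem_test.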