From patchwork Mon Dec 5 23:23:36 2022
Subject: [V2 PATCH 1/6] KVM: x86: Add support for testing private memory
From: Vishal Annapurve <vannapurve@google.com>
Date: Mon, 5 Dec 2022 23:23:36 +0000
Message-ID: <20221205232341.4131240-2-vannapurve@google.com>
In-Reply-To: <20221205232341.4131240-1-vannapurve@google.com>
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
 jmattson@google.com, joro@8bytes.org, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org,
 yang.zhong@intel.com, ricarkol@google.com, aaronlewis@google.com,
 wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net,
 hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
 akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
 yu.c.zhang@linux.intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
 michael.roth@amd.com, qperret@google.com, steven.price@arm.com,
 ak@linux.intel.com, david@redhat.com, luto@kernel.org, vbabka@suse.cz,
 marcorr@google.com, erdemaktas@google.com, pgonda@google.com,
 nikunj@amd.com, seanjc@google.com, diviness@google.com, maz@kernel.org,
 dmatlack@google.com, axelrasmussen@google.com, maciej.szmigiero@oracle.com,
 mizhang@google.com, bgardon@google.com, ackerleytng@google.com,
 Vishal Annapurve <vannapurve@google.com>

Introduce the HAVE_KVM_PRIVATE_MEM_TESTING config option so that the
fd-based approach to private memory can be exercised with
non-confidential selftest VMs. Two aspects matter for such testing:

* KVM needs to know whether a guest access is private or shared.
  Confidential VMs (SNP/TDX) carry a dedicated bit in the gpa that lets
  KVM deduce the nature of the access. Non-confidential VMs have no
  mechanism to convey this information to KVM, so KVM relies solely on
  the attributes set by the userspace VMM, keeping the userspace VMM in
  the TCB for testing purposes.

* kvm_arch_has_private_mem() is updated so that the private memory
  logic works with non-confidential VM selftests.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 arch/x86/kvm/mmu/mmu_internal.h | 6 +++++-
 virt/kvm/Kconfig                | 4 ++++
 virt/kvm/kvm_main.c             | 3 ++-
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 5ccf08183b00..e2f508db0b6e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -263,6 +263,8 @@ enum {
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
					u32 err, bool prefetch)
 {
+	bool is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault);
+
	struct kvm_page_fault fault = {
		.addr = cr2_or_gpa,
		.error_code = err,
@@ -272,13 +274,15 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
		.rsvd = err & PFERR_RSVD_MASK,
		.user = err & PFERR_USER_MASK,
		.prefetch = prefetch,
-		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
+		.is_tdp = is_tdp,
		.nx_huge_page_workaround_enabled =
			is_nx_huge_page_enabled(vcpu->kvm),

		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
		.req_level = PG_LEVEL_4K,
		.goal_level = PG_LEVEL_4K,
+		.is_private = IS_ENABLED(CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING) && is_tdp &&
+			      kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
	};
	int r;

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index d605545d6dd1..1e326367a21c 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -92,3 +92,7 @@ config HAVE_KVM_PM_NOTIFIER

 config HAVE_KVM_RESTRICTED_MEM
	bool
+
+config HAVE_KVM_PRIVATE_MEM_TESTING
+	bool "Private memory selftests"
+	depends on HAVE_KVM_MEMORY_ATTRIBUTES && HAVE_KVM_RESTRICTED_MEM
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ac835fc77273..d2d829d23442 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1262,7 +1262,8 @@ int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm)

 bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
 {
-	return false;
+	return IS_ENABLED(CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING);
 }

 static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
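As a minimal sketch of the resulting userspace flow (illustration only,
not part of this patch; struct kvm_memory_attributes,
KVM_SET_MEMORY_ATTRIBUTES and KVM_MEMORY_ATTRIBUTE_PRIVATE come from the
prerequisite memory-attributes/restricted-mem series, not from upstream
headers): with CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING, KVM treats a gfn of
a non-confidential VM as private iff the userspace VMM has marked it
private, so the VMM drives conversions roughly like this:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Mark [gpa, gpa + size) private; later guest faults on this range are
 * then resolved as private faults by kvm_mmu_do_page_fault(). */
static int set_range_private(int vm_fd, uint64_t gpa, uint64_t size)
{
	struct kvm_memory_attributes attr = {
		.address	= gpa,
		.size		= size,
		.attributes	= KVM_MEMORY_ATTRIBUTE_PRIVATE,
		.flags		= 0,
	};

	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
}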
From patchwork Mon Dec 5 23:23:37 2022
Subject: [V2 PATCH 2/6] KVM: Selftests: Add support for private memory
From: Vishal Annapurve <vannapurve@google.com>
Date: Mon, 5 Dec 2022 23:23:37 +0000
Message-ID: <20221205232341.4131240-3-vannapurve@google.com>
In-Reply-To: <20221205232341.4131240-1-vannapurve@google.com>
Add support for registering private memory with KVM via the
KVM_SET_USER_MEMORY_REGION ioctl. Introduce a helper to look up the
extended userspace memory region so that tests can perform memory
conversion, and relax the kvm_do_ioctl() size check so the extended
region struct can be passed to KVM_SET_USER_MEMORY_REGION.

Extend the vm_mem_backing_src types with additional guest memory source
types to cover the cases where guest memory is backed by both anonymous
memory and a restricted memfd.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     | 12 +++-
 .../testing/selftests/kvm/include/test_util.h |  4 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 58 +++++++++++++++++--
 tools/testing/selftests/kvm/lib/test_util.c   | 11 ++++
 4 files changed, 78 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index c7685c7038ff..4ad99f295f2a 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -31,7 +31,10 @@ typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
 typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */

 struct userspace_mem_region {
-	struct kvm_userspace_memory_region region;
+	union {
+		struct kvm_userspace_memory_region region;
+		struct kvm_userspace_memory_region_ext region_ext;
+	};
	struct sparsebit *unused_phy_pages;
	int fd;
	off_t offset;
@@ -196,7 +199,7 @@ static inline bool kvm_has_cap(long cap)

 #define kvm_do_ioctl(fd, cmd, arg)						\
 ({										\
-	static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd), "");	\
+	static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) >= _IOC_SIZE(cmd), "");	\
	ioctl(fd, cmd, arg);							\
 })
@@ -384,6 +387,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
+
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
@@ -715,6 +719,10 @@ struct kvm_userspace_memory_region *
 kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
				 uint64_t end);

+struct kvm_userspace_memory_region_ext *
+kvm_userspace_memory_region_ext_find(struct kvm_vm *vm, uint64_t start,
+				     uint64_t end);
+
 #define sync_global_to_guest(vm, g) ({				\
	typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g));	\
	memcpy(_p, &(g), sizeof(g));				\
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 80d6416f3012..aea80071f2b8 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -103,6 +103,8 @@ enum vm_mem_backing_src_type {
	VM_MEM_SRC_ANONYMOUS_HUGETLB_16GB,
	VM_MEM_SRC_SHMEM,
	VM_MEM_SRC_SHARED_HUGETLB,
+	VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD,
+	VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD,
	NUM_SRC_TYPES,
 };

@@ -110,7 +112,9 @@ enum vm_mem_backing_src_type {

 struct vm_mem_backing_src_alias {
	const char *name;
+	/* Flags applicable for normal host accessible guest memory */
	uint32_t flag;
+	uint32_t need_restricted_memfd;
 };

 #define MIN_RUN_DELAY_NS	200000UL
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1d26a2160178..dba693d6446a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -32,6 +32,11 @@ int open_path_or_exit(const char *path, int flags)
	return fd;
 }

+static int memfd_restricted(unsigned int flags)
+{
+	return syscall(__NR_memfd_restricted, flags);
+}
+
 /*
  * Open KVM_DEV_PATH if available, otherwise exit the entire program.
  *
@@ -582,6 +587,35 @@ __weak void vcpu_arch_free(struct kvm_vcpu *vcpu)

 }

+/*
+ * KVM Userspace Memory Region Ext Find
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   start - Starting VM physical address
+ *   end - Ending VM physical address, inclusive.
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Pointer to overlapping ext region, NULL if no such region.
+ *
+ * Public interface to userspace_mem_region_find. Allows tests to look up
+ * the memslot datastructure for a given range of guest physical memory.
+ */
+struct kvm_userspace_memory_region_ext *
+kvm_userspace_memory_region_ext_find(struct kvm_vm *vm, uint64_t start,
+				     uint64_t end)
+{
+	struct userspace_mem_region *region;
+
+	region = userspace_mem_region_find(vm, start, end);
+	if (!region)
+		return NULL;
+
+	return &region->region_ext;
+}
+
 /*
  * VM VCPU Remove
  *
@@ -881,6 +915,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
	struct userspace_mem_region *region;
	size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
	size_t alignment;
+	int restricted_memfd = -1;

	TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
		"Number of guest pages is not compatible with the host. "
@@ -978,14 +1013,24 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,

	/* As needed perform madvise */
	if ((src_type == VM_MEM_SRC_ANONYMOUS ||
-	     src_type == VM_MEM_SRC_ANONYMOUS_THP) && thp_configured()) {
+	     src_type == VM_MEM_SRC_ANONYMOUS_THP ||
+	     src_type == VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD) &&
+	     thp_configured()) {
		ret = madvise(region->host_mem, npages * vm->page_size,
-			      src_type == VM_MEM_SRC_ANONYMOUS ? MADV_NOHUGEPAGE : MADV_HUGEPAGE);
+			      (src_type == VM_MEM_SRC_ANONYMOUS_THP) ?
+			      MADV_HUGEPAGE : MADV_NOHUGEPAGE);
		TEST_ASSERT(ret == 0, "madvise failed, addr: %p length: 0x%lx src_type: %s",
			    region->host_mem, npages * vm->page_size,
			    vm_mem_backing_src_alias(src_type)->name);
	}

+	if (vm_mem_backing_src_alias(src_type)->need_restricted_memfd) {
+		restricted_memfd = memfd_restricted(0);
+		TEST_ASSERT(restricted_memfd != -1,
+			    "Failed to create restricted memfd");
+		flags |= KVM_MEM_PRIVATE;
+	}
+
	region->unused_phy_pages = sparsebit_alloc();
	sparsebit_set_num(region->unused_phy_pages,
		guest_paddr >> vm->page_shift, npages);
@@ -994,13 +1039,16 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
	region->region.guest_phys_addr = guest_paddr;
	region->region.memory_size = npages * vm->page_size;
	region->region.userspace_addr = (uintptr_t) region->host_mem;
-	ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region->region);
+	region->region_ext.restricted_fd = restricted_memfd;
+	region->region_ext.restricted_offset = 0;
+	ret = ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region->region_ext);
	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n"
		"  rc: %i errno: %i\n"
		"  slot: %u flags: 0x%x\n"
-		"  guest_phys_addr: 0x%lx size: 0x%lx",
+		"  guest_phys_addr: 0x%lx size: 0x%lx restricted fd: %d\n",
		ret, errno, slot, flags,
-		guest_paddr, (uint64_t) region->region.memory_size);
+		guest_paddr, (uint64_t) region->region.memory_size,
+		restricted_memfd);

	/* Add to quick lookup data structures */
	vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region);
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 5c22fa4c2825..d33b98bfe8a3 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -271,6 +271,16 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i)
			 */
			.flag = MAP_SHARED,
		},
+		[VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD] = {
+			.name = "anonymous_and_restricted_memfd",
+			.flag = ANON_FLAGS,
+			.need_restricted_memfd = 1,
+		},
+		[VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD] = {
+			.name = "anonymous_hugetlb_2mb_and_restricted_memfd",
+			.flag = ANON_HUGE_FLAGS | MAP_HUGE_2MB,
+			.need_restricted_memfd = 1,
+		},
	};
	_Static_assert(ARRAY_SIZE(aliases) == NUM_SRC_TYPES,
		       "Missing new backing src types?");
@@ -289,6 +299,7 @@ size_t get_backing_src_pagesz(uint32_t i)
	switch (i) {
	case VM_MEM_SRC_ANONYMOUS:
	case VM_MEM_SRC_SHMEM:
+	case VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD:
		return getpagesize();
	case VM_MEM_SRC_ANONYMOUS_THP:
		return get_trans_hugepagesz();
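A sketch of the net effect of the new *_RESTRICTED_MEMFD backing types
(illustration only, not part of the patch): shared accesses hit the
mmap'ed anonymous memory, private accesses are backed by the restricted
memfd. memfd_restricted(), KVM_MEM_PRIVATE and the _ext region struct
all come from the prerequisite restricted-mem series:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/kvm.h>

static int add_private_memslot(int vm_fd, uint32_t slot, uint64_t gpa,
			       uint64_t size, void *host_mem)
{
	struct kvm_userspace_memory_region_ext region_ext = {
		.region = {
			.slot = slot,
			.flags = KVM_MEM_PRIVATE,
			.guest_phys_addr = gpa,
			.memory_size = size,
			/* Shared view: normal host-accessible memory. */
			.userspace_addr = (uintptr_t)host_mem,
		},
		/* Private view: not mappable by host userspace. */
		.restricted_fd = syscall(__NR_memfd_restricted, 0),
		.restricted_offset = 0,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region_ext);
}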
From patchwork Mon Dec 5 23:23:38 2022
Subject: [V2 PATCH 3/6] KVM: selftests: x86: Add IS_ALIGNED/IS_PAGE_ALIGNED helpers
From: Vishal Annapurve <vannapurve@google.com>
Date: Mon, 5 Dec 2022 23:23:38 +0000
Message-ID: <20221205232341.4131240-4-vannapurve@google.com>
In-Reply-To: <20221205232341.4131240-1-vannapurve@google.com>

Add IS_ALIGNED/IS_PAGE_ALIGNED helpers for selftests.
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/include/kvm_util_base.h    | 3 +++
 tools/testing/selftests/kvm/include/x86_64/processor.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 4ad99f295f2a..7ba32471df50 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -170,6 +170,9 @@ extern enum vm_guest_mode vm_mode_default;
 #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
 #define PTES_PER_MIN_PAGE	ptes_per_page(MIN_PAGE_SIZE)

+/* @a is a power of 2 value */
+#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)
+
 struct vm_guest_mode_params {
	unsigned int pa_bits;
	unsigned int va_bits;
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 5d310abe6c3f..4d5dd9a467e1 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -279,6 +279,7 @@ static inline unsigned int x86_model(unsigned int eax)
 #define PAGE_SHIFT		12
 #define PAGE_SIZE		(1ULL << PAGE_SHIFT)
 #define PAGE_MASK		(~(PAGE_SIZE-1) & PHYSICAL_PAGE_MASK)
+#define IS_PAGE_ALIGNED(x)	IS_ALIGNED(x, PAGE_SIZE)

 #define HUGEPAGE_SHIFT(x)	(PAGE_SHIFT + (((x) - 1) * 9))
 #define HUGEPAGE_SIZE(x)	(1UL << HUGEPAGE_SHIFT(x))
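A quick standalone illustration of the bit trick (not part of the
patch): for a power-of-two a, x & (a - 1) is x mod a, so the macro is a
branch-free alignment check; typeof() makes the mask match x's width:

#include <stdio.h>
#include <stdint.h>

#define IS_ALIGNED(x, a)	(((x) & ((typeof(x))(a) - 1)) == 0)

int main(void)
{
	uint64_t gpa = 0xC0000000;

	/* 0xC0000000 is 4K-aligned; 0xC0000800 is not. Prints "1 0". */
	printf("%d %d\n", IS_ALIGNED(gpa, 0x1000),
	       IS_ALIGNED(gpa + 0x800, 0x1000));
	return 0;
}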
From patchwork Mon Dec 5 23:23:39 2022
Subject: [V2 PATCH 4/6] KVM: selftests: x86: Add helpers to execute VMs with private memory
From: Vishal Annapurve <vannapurve@google.com>
Date: Mon, 5 Dec 2022 23:23:39 +0000
Message-ID: <20221205232341.4131240-5-vannapurve@google.com>
In-Reply-To: <20221205232341.4131240-1-vannapurve@google.com>

Introduce a set of APIs to execute a VM with private memslots.
Host userspace APIs for:
1) Executing a vcpu run loop that handles the MAPGPA hypercall
2) Backing/unbacking guest private memory

Guest APIs for:
1) Changing the memory mapping type

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/include/x86_64/private_mem.h          |  24 +++
 .../selftests/kvm/lib/x86_64/private_mem.c    | 139 ++++++++++++++++++
 3 files changed, 164 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 2275ba861e0e..97f7d52c553b 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -55,6 +55,7 @@ LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/hyperv.c
 LIBKVM_x86_64 += lib/x86_64/memstress.c
+LIBKVM_x86_64 += lib/x86_64/private_mem.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
new file mode 100644
index 000000000000..3aa6b4d11b28
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PRIVATE_MEM_H
+#define SELFTEST_KVM_PRIVATE_MEM_H
+
+#include <stdint.h>
+#include <kvm_util.h>
+
+void kvm_hypercall_map_shared(uint64_t gpa, uint64_t size);
+void kvm_hypercall_map_private(uint64_t gpa, uint64_t size);
+
+void vm_unback_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size);
+
+void vm_allocate_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size);
+
+void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm, uint64_t gpa,
+				      uint64_t npages, uint64_t attrs);
+
+void vcpu_run_and_handle_mapgpa(struct kvm_vm *vm, struct kvm_vcpu *vcpu);
+
+#endif /* SELFTEST_KVM_PRIVATE_MEM_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
new file mode 100644
index 000000000000..2b97fc34ec4a
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_name */
+#include <fcntl.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/kvm_para.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <private_mem.h>
+#include <processor.h>
+
+static inline uint64_t __kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
+						     uint64_t flags)
+{
+	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
+}
+
+static inline void kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
+					       uint64_t flags)
+{
+	uint64_t ret;
+
+	GUEST_ASSERT_2(IS_PAGE_ALIGNED(gpa) && IS_PAGE_ALIGNED(size), gpa, size);
+
+	ret = __kvm_hypercall_map_gpa_range(gpa, size, flags);
+	GUEST_ASSERT_1(!ret, ret);
+}
+
+void kvm_hypercall_map_shared(uint64_t gpa, uint64_t size)
+{
+	kvm_hypercall_map_gpa_range(gpa, size, KVM_MAP_GPA_RANGE_DECRYPTED);
+}
+
+void kvm_hypercall_map_private(uint64_t gpa, uint64_t size)
+{
+	kvm_hypercall_map_gpa_range(gpa, size, KVM_MAP_GPA_RANGE_ENCRYPTED);
+}
+
+static void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+				  bool unback_mem)
+{
+	int restricted_fd;
+	uint64_t restricted_fd_offset, guest_phys_base, fd_offset;
+	struct kvm_memory_attributes attr;
+	struct kvm_userspace_memory_region_ext *region_ext;
+	struct kvm_userspace_memory_region *region;
+	int fallocate_mode = 0;
+	int ret;
+
+	region_ext = kvm_userspace_memory_region_ext_find(vm, gpa, gpa + size);
+	TEST_ASSERT(region_ext != NULL, "Region not found");
+	region = &region_ext->region;
+	TEST_ASSERT(region->flags & KVM_MEM_PRIVATE,
+		    "Can not update private memfd for non-private memslot\n");
+	restricted_fd = region_ext->restricted_fd;
+	restricted_fd_offset = region_ext->restricted_offset;
+	guest_phys_base = region->guest_phys_addr;
+	fd_offset = restricted_fd_offset + (gpa - guest_phys_base);
+
+	if (unback_mem)
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE);
+
+	printf("restricted_fd %d fallocate_mode 0x%x for offset 0x%lx size 0x%lx\n",
+	       restricted_fd, fallocate_mode, fd_offset, size);
+	ret = fallocate(restricted_fd, fallocate_mode, fd_offset, size);
+	TEST_ASSERT(ret == 0, "fallocate failed\n");
+	attr.attributes = unback_mem ? 0 : KVM_MEMORY_ATTRIBUTE_PRIVATE;
+	attr.address = gpa;
+	attr.size = size;
+	attr.flags = 0;
+	if (unback_mem)
+		printf("undoing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
+	else
+		printf("doing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
+
+	vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
+}
+
+void vm_unback_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size)
+{
+	vm_update_private_mem(vm, gpa, size, true);
+}
+
+void vm_allocate_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size)
+{
+	vm_update_private_mem(vm, gpa, size, false);
+}
+
+void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm, uint64_t gpa,
+				      uint64_t npages, uint64_t attrs)
+{
+	uint64_t size;
+
+	size = npages << MIN_PAGE_SHIFT;
+	pr_info("Explicit conversion off 0x%lx size 0x%lx to %s\n", gpa, size,
+		(attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) ? "private" : "shared");
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+		vm_allocate_private_mem(vm, gpa, size);
+	else
+		vm_unback_private_mem(vm, gpa, size);
+}
+
+void vcpu_run_and_handle_mapgpa(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Loop until the guest exits for any reason other than a
+	 * KVM_HC_MAP_GPA_RANGE hypercall.
+	 */
+	while (true) {
+		vcpu_run(vcpu);
+
+		if ((vcpu->run->exit_reason == KVM_EXIT_HYPERCALL) &&
+		    (vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE)) {
+			uint64_t gpa = vcpu->run->hypercall.args[0];
+			uint64_t npages = vcpu->run->hypercall.args[1];
+			uint64_t attrs = vcpu->run->hypercall.args[2];
+
+			handle_vm_exit_map_gpa_hypercall(vm, gpa, npages, attrs);
+			vcpu->run->hypercall.ret = 0;
+			continue;
+		}

+		return;
+	}
+}
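As a usage sketch of these APIs (illustration only; the selftest in
patch 6 below follows this same pattern, and CONV_GPA/CONV_SIZE here
are made-up names), a test pairs the guest-side conversion hypercalls
with the host-side run loop:

#include <stdint.h>

#include <kvm_util.h>
#include <private_mem.h>
#include <processor.h>

#define CONV_GPA	0x80000000ul	/* arbitrary page in a private memslot */
#define CONV_SIZE	0x1000ul

/* Guest: flip one page to shared and back; each hypercall triggers a
 * KVM_EXIT_HYPERCALL that the host run loop below services. */
static void guest_code(void)
{
	kvm_hypercall_map_shared(CONV_GPA, CONV_SIZE);
	/* ... access the page as shared memory here ... */
	kvm_hypercall_map_private(CONV_GPA, CONV_SIZE);
	GUEST_DONE();
}

/* Host: services KVM_HC_MAP_GPA_RANGE exits (fallocate on the
 * restricted fd plus a memory-attribute update) and returns on any
 * other exit, e.g. the final ucall. */
static void run_guest(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
	vcpu_run_and_handle_mapgpa(vm, vcpu);
}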
From patchwork Mon Dec 5 23:23:40 2022
Subject: [V2 PATCH 5/6] KVM: selftests: Add get_free_huge_2m_pages
From: Vishal Annapurve <vannapurve@google.com>
Date: Mon, 5 Dec 2022 23:23:40 +0000
Message-ID: <20221205232341.4131240-6-vannapurve@google.com>
In-Reply-To: <20221205232341.4131240-1-vannapurve@google.com>

Add an API to query the number of free 2MB hugepages in the system.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/include/test_util.h |  1 +
 tools/testing/selftests/kvm/lib/test_util.c     | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index aea80071f2b8..3d1cc215940a 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -122,6 +122,7 @@ struct vm_mem_backing_src_alias {
 bool thp_configured(void);
 size_t get_trans_hugepagesz(void);
 size_t get_def_hugetlb_pagesz(void);
+size_t get_free_huge_2mb_pages(void);
 const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
 size_t get_backing_src_pagesz(uint32_t i);
 bool is_backing_src_hugetlb(uint32_t i);
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index d33b98bfe8a3..745573023b57 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -162,6 +162,24 @@ size_t get_trans_hugepagesz(void)
	return size;
 }

+size_t get_free_huge_2mb_pages(void)
+{
+	size_t free_pages;
+	FILE *f;
+	int ret;
+
+	f = fopen("/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages", "r");
+	TEST_ASSERT(f != NULL, "Error in opening hugepages-2048kB/free_hugepages");
+
+	do {
+		ret = fscanf(f, "%zu", &free_pages);
+	} while (errno == EINTR);
+	TEST_ASSERT(ret == 1, "Error reading hugepages-2048kB/free_hugepages");
+	fclose(f);
+
+	return free_pages;
+}
+
 size_t get_def_hugetlb_pagesz(void)
 {
	char buf[64];
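For reference, a self-contained sketch of the same probe outside the
selftest harness (illustration only): the helper reads the standard
hugetlb sysfs counter, which is non-zero only after hugepages have been
reserved, e.g. by writing to the sibling nr_hugepages file.

#include <stdio.h>
#include <stdlib.h>

/* Same sysfs file get_free_huge_2mb_pages() reads; reserve pages via
 * /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages first. */
int main(void)
{
	size_t free_pages = 0;
	FILE *f = fopen("/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages", "r");

	if (f && fscanf(f, "%zu", &free_pages) == 1)
		printf("free 2MB hugepages: %zu\n", free_pages);
	if (f)
		fclose(f);
	return free_pages >= 1 ? EXIT_SUCCESS : EXIT_FAILURE;
}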
From patchwork Mon Dec 5 23:23:41 2022
Subject: [V2 PATCH 6/6] KVM: selftests: x86: Add selftest for private memory
From: Vishal Annapurve <vannapurve@google.com>
Date: Mon, 5 Dec 2022 23:23:41 +0000
Message-ID: <20221205232341.4131240-7-vannapurve@google.com>
In-Reply-To: <20221205232341.4131240-1-vannapurve@google.com>
Add a selftest to exercise implicit/explicit conversion functionality
within KVM and verify:
1) Shared memory is visible to host userspace after conversion
2) Private memory is not visible to host userspace before/after conversion
3) Host userspace and guest can communicate over shared memory

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/private_mem_test.c   | 212 ++++++++++++++++++
 3 files changed, 214 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/private_mem_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 082855d94c72..19cdcde2ed08 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -34,6 +34,7 @@
 /x86_64/nested_exceptions_test
 /x86_64/nx_huge_pages_test
 /x86_64/platform_info_test
+/x86_64/private_mem_test
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 97f7d52c553b..beb793dd3e1c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -99,6 +99,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
 TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
+TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
new file mode 100644
index 000000000000..015ada2e3d54
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/kvm_para.h>
+#include <linux/memfd.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <private_mem.h>
+#include <processor.h>
+
+#define TEST_AREA_SLOT		10
+#define TEST_AREA_GPA		0xC0000000
+#define TEST_AREA_SIZE		(2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET	(1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE	(10 * 4096)
+
+#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PATTERN1	0x66
+#define TEST_MEM_DATA_PATTERN2	0x99
+#define TEST_MEM_DATA_PATTERN3	0x33
+#define TEST_MEM_DATA_PATTERN4	0xaa
+#define TEST_MEM_DATA_PATTERN5	0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pattern)
+			return false;
+	}
+
+	return true;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pattern)
+{
+	memset(test_area_base, pattern, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
+{
+	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
+			     uint64_t guest_pattern)
+{
+	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
+	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
+	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
+				    GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
+		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
+}
+
+#define GUEST_STARTED			0
+#define GUEST_PRIVATE_MEM_POPULATED	1
+#define GUEST_SHARED_MEM_POPULATED	2
+#define GUEST_PRIVATE_MEM_POPULATED2	3
+
+/*
+ * Run memory conversion tests with explicit conversion:
+ * Execute the KVM hypercall to map/unmap a gpa range, which causes a
+ * userspace exit to back/unback private memory. Subsequent accesses by
+ * the guest to the gpa range do not exit to userspace.
+ *
+ * Test memory conversion scenarios with the following steps:
+ * 1) Access private memory using private access and verify that memory
+ *    contents are not visible to userspace.
+ * 2) Convert memory to shared using explicit conversions and ensure that
+ *    userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit conversions and ensure
+ *    that userspace is again not able to access converted private regions.
+ */
+static void guest_conv_test_fn(void)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+				      TEST_MEM_DATA_PATTERN1));
+
+	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+				      TEST_MEM_DATA_PATTERN5));
+
+	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+				      TEST_MEM_DATA_PATTERN3));
+	GUEST_DONE();
+}
+
+#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage)				\
+	{								\
+		struct ucall uc;					\
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);		\
+		ASSERT_EQ(uc.args[1], stage);				\
+	}
+
+#define ASSERT_GUEST_DONE(vcpu)						\
+	{								\
+		struct ucall uc;					\
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);		\
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE);		\
+	}
+
+static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
+	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
+	VM_STAGE_PROCESSED(GUEST_STARTED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+				     TEST_MEM_DATA_PATTERN4), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+				     TEST_MEM_DATA_PATTERN2), "failed");
+	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
+	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+				     TEST_MEM_DATA_PATTERN5), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_GUEST_DONE(vcpu);
+}
+
+static void execute_vm_with_private_test_mem(
+		enum vm_mem_backing_src_type test_mem_src)
+{
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
+				    TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size,
+				    KVM_MEM_PRIVATE);
+	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
+
+	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE / vm->page_size);
+
+	host_conv_test_fn(vm, vcpu);
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	execute_vm_with_private_test_mem(
+			VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD);
+
+	/* Needs 2MB Hugepages */
+	if (get_free_huge_2mb_pages() >= 1) {
+		printf("Running private mem test with 2M pages\n");
+		execute_vm_with_private_test_mem(
+				VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD);
+	} else {
+		printf("Skipping private mem test with 2M pages\n");
+	}
+
+	return 0;
+}