From patchwork Wed May 11 00:08:03 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12845630
Date: Wed, 11 May 2022 00:08:03 +0000
Message-Id: <20220511000811.384766-2-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, vkuznets@redhat.com, wanpengli@tencent.com,
    jmattson@google.com, joro@8bytes.org, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
    hpa@zytor.com, shauh@kernel.org, yang.zhong@intel.com,
    drjones@redhat.com, ricarkol@google.com, aaronlewis@google.com,
    wei.w.wang@intel.com, kirill.shutemov@linux.intel.com, corbet@lwn.net,
    hughd@google.com, jlayton@kernel.org, bfields@fieldses.org,
    akpm@linux-foundation.org, chao.p.peng@linux.intel.com,
    yu.c.zhang@linux.intel.com, jun.nakajima@intel.com,
    dave.hansen@intel.com, michael.roth@amd.com, qperret@google.com,
    steven.price@arm.com, ak@linux.intel.com, david@redhat.com,
    luto@kernel.org, vbabka@suse.cz, marcorr@google.com,
    erdemaktas@google.com, pgonda@google.com, nikunj@amd.com,
    seanjc@google.com, diviness@google.com, Vishal Annapurve

Fix the hypercall inline assembly to explicitly load the hypercall number
into eax, so that the implementation keeps working even when the compiler
inlines the function.

Signed-off-by: Vishal Annapurve
Reviewed-by: Shuah Khan
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..4d88e1a553bf 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1461,7 +1461,7 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
     uint64_t r;
 
     asm volatile("vmcall"
              : "=a"(r)
-             : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+             : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
     return r;
 }
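A note on the constraint semantics (an illustration, not part of the patch):
"=a"(r) only names rax as an output, so before this fix nothing obliged the
compiler to have nr in rax when the vmcall executed; per the commit message,
the old code only happened to work when the function was not inlined. With
"a"(nr) added as an input, the compiler must load rax before the instruction.
A minimal sketch of the resulting helper, assuming it matches the
processor.c definition:

    static inline uint64_t kvm_hypercall_sketch(uint64_t nr, uint64_t a0,
                                                uint64_t a1, uint64_t a2,
                                                uint64_t a3)
    {
        uint64_t r;

        asm volatile("vmcall"
                     : "=a"(r)            /* result comes back in rax */
                     : "a"(nr),           /* hypercall number goes in rax */
                       "b"(a0), "c"(a1),  /* arguments in rbx, rcx */
                       "d"(a2), "S"(a3)); /* arguments in rdx, rsi */
        return r;
    }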
From patchwork Wed May 11 00:08:04 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12845636
Date: Wed, 11 May 2022 00:08:04 +0000
Message-Id: <20220511000811.384766-3-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test
 private memory
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add a KVM selftest that accesses private memory privately from the guest,
to verify that memory updates from the guest and from the userspace VMM do
not affect each other.
Signed-off-by: Vishal Annapurve
Reviewed-by: Shuah Khan
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 tools/testing/selftests/kvm/priv_memfd_test.c | 283 ++++++++++++++++++
 2 files changed, 284 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 21c2dbd21a81..f2f9a8546c66 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -97,6 +97,7 @@ TEST_GEN_PROGS_x86_64 += max_guest_memory_test
 TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86_64 += memslot_perf_test
 TEST_GEN_PROGS_x86_64 += rseq_test
+TEST_GEN_PROGS_x86_64 += priv_memfd_test
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
new file mode 100644
index 000000000000..bbb58c62e186
--- /dev/null
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -0,0 +1,283 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#define TEST_MEM_GPA        0xb0000000
+#define TEST_MEM_SIZE       0x2000
+#define TEST_MEM_END        (TEST_MEM_GPA + TEST_MEM_SIZE)
+#define TEST_MEM_DATA_PAT1  0x6666666666666666
+#define TEST_MEM_DATA_PAT2  0x9999999999999999
+#define TEST_MEM_DATA_PAT3  0x3333333333333333
+#define TEST_MEM_DATA_PAT4  0xaaaaaaaaaaaaaaaa
+
+enum mem_op {
+    SET_PAT,
+    VERIFY_PAT
+};
+
+#define TEST_MEM_SLOT       10
+
+#define VCPU_ID             0
+
+#define VM_STAGE_PROCESSED(x)   pr_info("Processed stage %s\n", #x)
+
+typedef bool (*vm_stage_handler_fn)(struct kvm_vm *,
+                void *, uint64_t);
+typedef void (*guest_code_fn)(void);
+struct test_run_helper {
+    char *test_desc;
+    vm_stage_handler_fn vmst_handler;
+    guest_code_fn guest_fn;
+    void *shared_mem;
+    int priv_memfd;
+};
+
+/* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load
+ * which doesn't handle global offset table updates. Calling standard libc
+ * functions would normally result in referring to the global offset table.
+ * Adding O1 here seems to prevent the compiler from replacing the memory
+ * operations with standard libc functions such as memset.
+ */
+static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
+        void *mem, uint64_t pat, uint32_t size)
+{
+    uint64_t *buf = (uint64_t *)mem;
+    uint32_t chunk_size = sizeof(pat);
+    uint64_t mem_addr = (uint64_t)mem;
+
+    if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))
+        return false;
+
+    for (uint32_t i = 0; i < (size / chunk_size); i++) {
+        if (op == SET_PAT)
+            buf[i] = pat;
+        if (op == VERIFY_PAT) {
+            if (buf[i] != pat)
+                return false;
+        }
+    }
+
+    return true;
+}
+
+/* Test to verify guest private accesses on private memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest writes a different pattern on the private memory and signals VMM
+ *    that it has updated private memory.
+ * 4) VMM verifies its shared memory contents to be same as the data populated
+ *    in step 2 and continues guest execution.
+ * 5) Guest verifies its private memory contents to be same as the data
+ *    populated in step 3 and marks the end of the guest execution.
+ */
+#define PMPAT_ID                0
+#define PMPAT_DESC              "PrivateMemoryPrivateAccessTest"
+
+/* Guest code execution stages for private mem access test */
+#define PMPAT_GUEST_STARTED             0ULL
+#define PMPAT_GUEST_PRIV_MEM_UPDATED    1ULL
+
+static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
+            void *test_info,
+            uint64_t stage)
+{
+    void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+    switch (stage) {
+    case PMPAT_GUEST_STARTED: {
+        /* Initialize the contents of shared memory */
+        TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory update failure");
+        VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
+        break;
+    }
+    case PMPAT_GUEST_PRIV_MEM_UPDATED: {
+        /* verify host updated data is still intact */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
+        break;
+    }
+    default:
+        printf("Unhandled VM stage %ld\n", stage);
+        return false;
+    }
+
+    return true;
+}
+
+static void pmpat_guest_code(void)
+{
+    void *priv_mem = (void *)TEST_MEM_GPA;
+    int ret;
+
+    GUEST_SYNC(PMPAT_GUEST_STARTED);
+
+    /* Mark the GPA range to be treated as always accessed privately */
+    ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+        TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+        KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+    GUEST_ASSERT_1(ret == 0, ret);
+
+    GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
+            TEST_MEM_SIZE));
+    GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
+
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+    GUEST_DONE();
+}
+
+static struct test_run_helper priv_memfd_testsuite[] = {
+    [PMPAT_ID] = {
+        .test_desc = PMPAT_DESC,
+        .vmst_handler = pmpat_handle_vm_stage,
+        .guest_fn = pmpat_guest_code,
+    },
+};
+
+static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
+{
+    struct kvm_run *run;
+    struct ucall uc;
+    uint64_t cmd;
+
+    /*
+     * Loop until the guest is done.
+     */
+    run = vcpu_state(vm, VCPU_ID);
+
+    while (true) {
+        vcpu_run(vm, VCPU_ID);
+
+        if (run->exit_reason == KVM_EXIT_IO) {
+            cmd = get_ucall(vm, VCPU_ID, &uc);
+            if (cmd != UCALL_SYNC)
+                break;
+
+            if (!priv_memfd_testsuite[test_id].vmst_handler(
+                vm, &priv_memfd_testsuite[test_id], uc.args[1]))
+                break;
+
+            continue;
+        }
+
+        TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
+        break;
+    }
+
+    if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
+        TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
+              __FILE__, uc.args[1], uc.args[2]);
+}
+
+static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
+                uint32_t size, uint64_t guest_addr,
+                uint32_t priv_fd, uint64_t priv_offset)
+{
+    struct kvm_userspace_memory_region_ext region_ext;
+    int ret;
+
+    region_ext.region.slot = slot;
+    region_ext.region.flags = KVM_MEM_PRIVATE;
+    region_ext.region.guest_phys_addr = guest_addr;
+    region_ext.region.memory_size = size;
+    region_ext.region.userspace_addr = (uintptr_t) mem;
+    region_ext.private_fd = priv_fd;
+    region_ext.private_offset = priv_offset;
+    ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, &region_ext);
+    TEST_ASSERT(ret == 0, "Failed to register user region for gpa 0x%lx\n",
+        guest_addr);
+}
+
+/* Do private access to the guest's private memory */
+static void setup_and_execute_test(uint32_t test_id)
+{
+    struct kvm_vm *vm;
+    int priv_memfd;
+    int ret;
+    void *shared_mem;
+    struct kvm_enable_cap cap;
+
+    vm = vm_create_default(VCPU_ID, 0,
+            priv_memfd_testsuite[test_id].guest_fn);
+
+    /* Allocate shared memory */
+    shared_mem = mmap(NULL, TEST_MEM_SIZE,
+            PROT_READ | PROT_WRITE,
+            MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+    TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
+
+    /* Allocate private memory */
+    priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
+    TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
+    ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
+    TEST_ASSERT(ret != -1, "fallocate failed");
+
+    priv_memory_region_add(vm, shared_mem,
+                TEST_MEM_SLOT, TEST_MEM_SIZE,
+                TEST_MEM_GPA, priv_memfd, 0);
+
+    pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
+            TEST_MEM_SIZE/vm_get_page_size(vm),
+            vm_get_page_size(vm));
+    virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
+            (TEST_MEM_SIZE/vm_get_page_size(vm)));
+
+    /* Enable exit on KVM_HC_MAP_GPA_RANGE */
+    pr_info("Enabling exit on map_gpa_range hypercall\n");
+    ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
+    TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
+            "VM exit on MAP_GPA_RANGE HC not supported");
+    cap.cap = KVM_CAP_EXIT_HYPERCALL;
+    cap.flags = 0;
+    cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+    ret = ioctl(vm_get_fd(vm), KVM_ENABLE_CAP, &cap);
+    TEST_ASSERT(ret == 0,
+        "Failed to enable exit on MAP_GPA_RANGE hypercall\n");
+
+    priv_memfd_testsuite[test_id].shared_mem = shared_mem;
+    priv_memfd_testsuite[test_id].priv_memfd = priv_memfd;
+    vcpu_work(vm, test_id);
+
+    munmap(shared_mem, TEST_MEM_SIZE);
+    priv_memfd_testsuite[test_id].shared_mem = NULL;
+    close(priv_memfd);
+    priv_memfd_testsuite[test_id].priv_memfd = -1;
+    kvm_vm_free(vm);
+}
===\n", + priv_memfd_testsuite[i].test_desc); + setup_and_execute_test(i); + pr_info("--- completed test %s ---\n\n", + priv_memfd_testsuite[i].test_desc); + } + + return 0; +} From patchwork Wed May 11 00:08:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Annapurve X-Patchwork-Id: 12845631 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0909CC433F5 for ; Wed, 11 May 2022 00:08:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239470AbiEKAIk (ORCPT ); Tue, 10 May 2022 20:08:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34968 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229593AbiEKAIh (ORCPT ); Tue, 10 May 2022 20:08:37 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9E6F93D497 for ; Tue, 10 May 2022 17:08:28 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id z19-20020a62d113000000b0050d183adf6fso231111pfg.19 for ; Tue, 10 May 2022 17:08:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ousdsQMGoWGPLGfdJHRlc9S6OKJxuNlkPfZprs239LE=; b=e3rAodX5f8r102qY1GuiiUNVeM6ToTpHYpSFbU4cVPlW3L9nkcmzWJgDA9lX+agIkr Qd/bEFbE9ZCvcix75fmuOcnctpaM6mvYi4Gfiw+o24cYfyNIq5I0zDiUPvmsBPlXs729 HrOSPCyCn7ZI0Qy92j8PyfGtDJrCsCRQIe3RM3/roq6okzGovg5YE6P0S2cyeW0Hsx/E F9uOz81XMpK6On/HDJXdfcGYs6j/Cc75gc8HlGFYSeEq5BQ+kjrRDldiCso5cNyeWzxA S3CFwBiAUqGMPL2ERrhipC20Wzs7FZ6/gI2ZK47SUW/9tm79/wYkxDmNHVIbaaNR43Ym IfBA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ousdsQMGoWGPLGfdJHRlc9S6OKJxuNlkPfZprs239LE=; b=OIjQ0bb471p7PIVrgBRe2EQpmQYTHGVL7fAfsv2RT8+azGKOsR+ac7a9fsr9oLsRWe dHc0l2ZPnwa5P93mP731k7XWNKxnGZuTuH4e0jsTDDWrIIrRCm6EKV0YMzpk2FvDmCP6 iHA9MGw6qQc5MTVpljm+UdU2OPjaHY37r7Rir1oVw3jTO4E8yn+7ChX/pWBkXZJCiIoE 64XzaoUTPBtWzU3YmklldxLfBa0oFy0TdGVqKQxs9zacOsVl+KUwasNW/0CUdYivwbW6 Qami1/vYaDKqT0hfwt7tHdvtDz99ZCc685YsGhV4yNV+g3G6cVjQx23cCP0BLch0KFew baUQ== X-Gm-Message-State: AOAM5313N2nV+oDTDl1hYMtjfE0LS2ZiLV8g3G6Nr3twkzu7XrVTSwdC Jw3K75ajjtvHrjnY7PtTuqQmzuuFdxOUsA29 X-Google-Smtp-Source: ABdhPJzTmLb9y8mDaPQrDC9lq/Q88kd08tL6Wmu2p9a7OyjMRa4/Lb3QJlT+4la0xLIeV0XCQla+8xYaGu3VNwyk X-Received: from vannapurve2.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:41f8]) (user=vannapurve job=sendgmr) by 2002:a17:902:e549:b0:15e:aa63:6fd8 with SMTP id n9-20020a170902e54900b0015eaa636fd8mr22998658plf.152.1652227707794; Tue, 10 May 2022 17:08:27 -0700 (PDT) Date: Wed, 11 May 2022 00:08:05 +0000 In-Reply-To: <20220511000811.384766-1-vannapurve@google.com> Message-Id: <20220511000811.384766-4-vannapurve@google.com> Mime-Version: 1.0 References: <20220511000811.384766-1-vannapurve@google.com> X-Mailer: git-send-email 2.36.0.550.gb090851708-goog Subject: [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion From: Vishal Annapurve To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org Cc: pbonzini@redhat.com, vkuznets@redhat.com, 
From patchwork Wed May 11 00:08:05 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12845631
Date: Wed, 11 May 2022 00:08:05 +0000
Message-Id: <20220511000811.384766-4-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for
 memory conversion
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add handling of explicit private/shared memory conversion requested via the
KVM_HC_MAP_GPA_RANGE hypercall, and of implicit memory conversion requested
via KVM_EXIT_MEMORY_ERROR exits.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index bbb58c62e186..55e24c893b07 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -155,6 +155,83 @@ static struct test_run_helper priv_memfd_testsuite[] = {
     },
 };
 
+static void handle_vm_exit_hypercall(struct kvm_run *run,
+        uint32_t test_id)
+{
+    uint64_t gpa, npages, attrs;
+    int priv_memfd =
+        priv_memfd_testsuite[test_id].priv_memfd;
+    int ret;
+    int fallocate_mode;
+
+    if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
+        TEST_FAIL("Unhandled Hypercall %lld\n",
+                run->hypercall.nr);
+    }
+
+    gpa = run->hypercall.args[0];
+    npages = run->hypercall.args[1];
+    attrs = run->hypercall.args[2];
+
+    if ((gpa < TEST_MEM_GPA) || ((gpa +
+        (npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) {
+        TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
+            gpa, npages);
+    }
+
+    if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+        fallocate_mode = 0;
+    else {
+        fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+                FALLOC_FL_KEEP_SIZE);
+    }
+    pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
+        (gpa - TEST_MEM_GPA), npages,
+        fallocate_mode ?
+            "shared" : "private");
+    ret = fallocate(priv_memfd, fallocate_mode,
+        (gpa - TEST_MEM_GPA),
+        npages << MIN_PAGE_SHIFT);
+    TEST_ASSERT(ret != -1,
+        "fallocate failed in hc handling");
+    run->hypercall.ret = 0;
+}
+
+ "shared" : "private"); + ret = fallocate(priv_memfd, fallocate_mode, + (gpa - TEST_MEM_GPA), size); + TEST_ASSERT(ret != -1, + "fallocate failed in memory error handling"); +} + static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) { struct kvm_run *run; @@ -181,6 +258,16 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) continue; } + if (run->exit_reason == KVM_EXIT_HYPERCALL) { + handle_vm_exit_hypercall(run, test_id); + continue; + } + + if (run->exit_reason == KVM_EXIT_MEMORY_ERROR) { + handle_vm_exit_memory_error(run, test_id); + continue; + } + TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason); break; } From patchwork Wed May 11 00:08:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Annapurve X-Patchwork-Id: 12845637 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DBF2DC433EF for ; Wed, 11 May 2022 00:09:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239337AbiEKAJX (ORCPT ); Tue, 10 May 2022 20:09:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34966 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239399AbiEKAIj (ORCPT ); Tue, 10 May 2022 20:08:39 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A826060DA2 for ; Tue, 10 May 2022 17:08:31 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id z34-20020a056a001da200b0050e057fdd7eso241105pfw.12 for ; Tue, 10 May 2022 17:08:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=wLjvVbIdoRCH49YCihnoSmA9ubv4t2s+xqBSCz7F9jg=; b=Ki3eaTjsQ7M5NE3TLT5tuYldDGfIk2buuSWB26+3w04czbg6Ph679zSyiiZgtT9PQs 6chWf7oec2aK+bxvv3wEI653g/J5vu6EkLMuyK1wYj3x1tTeDby0vt6KjCQvEFwUIBH6 ut043eELAUSze7vzZH+uxRVRXoRZFSZuFvoMuWscyNgr/3bM6OtHGPWEXelTgJauaNk+ nBcCnvg+3SPAcI2O7dVVV3wQPahjvvLidqndO5FdjzqXtuFuRteNvFVbzjClCWZza/us AILv/Sr51khSNwawZts3gpqxwMCWbPAm0liWiAInJ/kTt3VltafGCej6wxxvpmkL1Isn jn2Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=wLjvVbIdoRCH49YCihnoSmA9ubv4t2s+xqBSCz7F9jg=; b=KDWPBNFGWcAKoAYVJKVOpdMdGOKxUsvwuvx8t6/BJ21PLMOjxTaV9fnbhuXlzBumng LAkNl7GbJ8oObuuO/GBkJ6tsEynNzzEMGvdWmA0krdZN5OoJZPphhuF7B8NxNbksifui GCLm2/bLLXX1rv60WiYWnxFMemWrJmnB8r1rPqGjfVQaTJ3JclSrUlfkhCa0t8AguXWA v92UyUqDGmS4Az5s2lAXS/5mYpjX5ukFlk8spqB9jxwaXAGcSi6ioWyEKXlqzl2cpOcX qKx8H7sJ4itk6ttwKctbDJIeMVr5YyKEfXU0kJq9Qi3edhr/w+nEILnXWx81sXAII7AI ChaQ== X-Gm-Message-State: AOAM531Jl2to3v4Y6J/bcNDXdZoETQU0eCiauS2RoNNytWsfBcNA6st9 kSu01cdXGu/K0SppPYYJAvFwNp62hHX0g+Ff X-Google-Smtp-Source: ABdhPJzVAtm2BkSdXZuq4Ku3XaizDmU0qHSs3QkXhQaKqeH6miipiHUsyaTDTE95H36auWRbhZrpMBywulp15kHC X-Received: from vannapurve2.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:41f8]) (user=vannapurve job=sendgmr) by 2002:a17:90a:e510:b0:1d9:ee23:9fa1 with SMTP id t16-20020a17090ae51000b001d9ee239fa1mr55560pjy.0.1652227710272; Tue, 10 May 2022 17:08:30 -0700 (PDT) Date: Wed, 11 May 2022 00:08:06 +0000 In-Reply-To: 
From patchwork Wed May 11 00:08:06 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12845637
Date: Wed, 11 May 2022 00:08:06 +0000
Message-Id: <20220511000811.384766-5-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared
 access test
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add a test that accesses private memory in a shared fashion, which should
exercise the implicit memory conversion path using KVM_EXIT_MEMORY_ERROR.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 69 +++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 55e24c893b07..48bc4343e7b5 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -147,12 +147,81 @@ static void pmpat_guest_code(void)
     GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on private memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest reads private gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and continues guest execution.
+ */
+#define PMSAT_ID                1
+#define PMSAT_DESC              "PrivateMemorySharedAccessTest"
+
+/* Guest code execution stages for private mem access test */
+#define PMSAT_GUEST_STARTED             0ULL
+#define PMSAT_GUEST_TEST_MEM_UPDATED    1ULL
+
+static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
+            void *test_info,
+            uint64_t stage)
+{
+    void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+    switch (stage) {
+    case PMSAT_GUEST_STARTED: {
+        /* Initialize the contents of shared memory */
+        TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory update failed");
+        VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED);
+        break;
+    }
+    case PMSAT_GUEST_TEST_MEM_UPDATED: {
+        /* verify data to be same as what guest wrote */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED);
+        break;
+    }
+    default:
+        printf("Unhandled VM stage %ld\n", stage);
+        return false;
+    }
+
+    return true;
+}
+
+static void pmsat_guest_code(void)
+{
+    void *shared_mem = (void *)TEST_MEM_GPA;
+
+    GUEST_SYNC(PMSAT_GUEST_STARTED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+    GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED);
+
+    GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
     [PMPAT_ID] = {
         .test_desc = PMPAT_DESC,
         .vmst_handler = pmpat_handle_vm_stage,
         .guest_fn = pmpat_guest_code,
     },
+    [PMSAT_ID] = {
+        .test_desc = PMSAT_DESC,
+        .vmst_handler = pmsat_handle_vm_stage,
+        .guest_fn = pmsat_guest_code,
+    },
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
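Note the contrast with PMPAT above: pmsat_guest_code() never issues the
KVM_MARK_GPA_RANGE_ENC_ACCESS hypercall, so its loads and stores are treated
as shared accesses to a GPA range that is still backed privately. The
expectation, per the commit message, is that such accesses get resolved
through the KVM_EXIT_MEMORY_ERROR path added in the previous patch, which
punches out the private backing so the shared view takes effect.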
From patchwork Wed May 11 00:08:07 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12845635
Date: Wed, 11 May 2022 00:08:07 +0000
Message-Id: <20220511000811.384766-6-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion
 tests
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add tests to exercise the implicit memory conversion path.

Signed-off-by: Vishal Annapurve
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 384 +++++++++++++++++-
 1 file changed, 383 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 48bc4343e7b5..f6f6b064a101 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -211,6 +211,369 @@ static void pmsat_guest_code(void)
     GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on shared memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM deallocates the backing private memory and populates the shared
+ *    memory with known pattern and continues guest execution.
+ * 3) Guest reads shared gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and continues guest execution.
+ */
+#define SMSAT_ID                2
+#define SMSAT_DESC              "SharedMemorySharedAccessTest"
+
+#define SMSAT_GUEST_STARTED             0ULL
+#define SMSAT_GUEST_TEST_MEM_UPDATED    1ULL
+
+static bool smsat_handle_vm_stage(struct kvm_vm *vm,
+            void *test_info,
+            uint64_t stage)
+{
+    void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+    int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+    switch (stage) {
+    case SMSAT_GUEST_STARTED: {
+        /* Remove the backing private memory storage */
+        int ret = fallocate(priv_memfd,
+            FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+            0, TEST_MEM_SIZE);
+        TEST_ASSERT(ret != -1,
+            "fallocate failed in smsat handling");
+        /* Initialize the contents of shared memory */
+        TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory update failed");
+        VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED);
+        break;
+    }
+    case SMSAT_GUEST_TEST_MEM_UPDATED: {
+        /* verify data to be same as what guest wrote */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED);
+        break;
+    }
+    default:
+        printf("Unhandled VM stage %ld\n", stage);
+        return false;
+    }
+
+    return true;
+}
+
+static void smsat_guest_code(void)
+{
+    void *shared_mem = (void *)TEST_MEM_GPA;
+
+    GUEST_SYNC(SMSAT_GUEST_STARTED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+    GUEST_SYNC(SMSAT_GUEST_TEST_MEM_UPDATED);
+
+    GUEST_DONE();
+}
+
+/* Test to verify guest private accesses on shared memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM deallocates the backing private memory and populates the shared
+ *    memory with known pattern and continues guest execution.
+ * 3) Guest writes gpa range via private access and signals VMM.
+ * 4) VMM verifies shared memory contents to be same as the data populated
+ *    in step 2 and continues guest execution.
+ * 5) Guest reads gpa range via private access and verifies that the contents
+ *    are same as written in step 3.
+ */
+#define SMPAT_ID                3
+#define SMPAT_DESC              "SharedMemoryPrivateAccessTest"
+
+#define SMPAT_GUEST_STARTED             0ULL
+#define SMPAT_GUEST_TEST_MEM_UPDATED    1ULL
+
+static bool smpat_handle_vm_stage(struct kvm_vm *vm,
+            void *test_info,
+            uint64_t stage)
+{
+    void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+    int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+    switch (stage) {
+    case SMPAT_GUEST_STARTED: {
+        /* Remove the backing private memory storage */
+        int ret = fallocate(priv_memfd,
+            FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+            0, TEST_MEM_SIZE);
+        TEST_ASSERT(ret != -1,
+            "fallocate failed in smpat handling");
+        /* Initialize the contents of shared memory */
+        TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory update failed");
+        VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED);
+        break;
+    }
+    case SMPAT_GUEST_TEST_MEM_UPDATED: {
+        /* verify data to be same as what vmm wrote earlier */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED);
+        break;
+    }
+    default:
+        printf("Unhandled VM stage %ld\n", stage);
+        return false;
+    }
+
+    return true;
+}
+
+static void smpat_guest_code(void)
+{
+    void *shared_mem = (void *)TEST_MEM_GPA;
+    int ret;
+
+    GUEST_SYNC(SMPAT_GUEST_STARTED);
+
+    /* Mark the GPA range to be treated as always accessed privately */
+    ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+        TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+        KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+    GUEST_ASSERT_1(ret == 0, ret);
+
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+    GUEST_SYNC(SMPAT_GUEST_TEST_MEM_UPDATED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+    GUEST_DONE();
+}
+
+/* Test to verify guest shared and private accesses on memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest writes shared gpa range in a private fashion and signals VMM.
+ * 4) VMM verifies that shared memory still contains the pattern written in
+ *    step 2 and continues guest execution.
+ * 5) Guest verifies private memory contents to be same as the data populated
+ *    in step 3 and signals VMM.
+ * 6) VMM removes the private memory backing, which should also clear out the
+ *    second stage mappings for the VM.
+ * 7) Guest does shared write access on shared memory and signals VMM.
+ * 8) VMM reads the shared memory and verifies that the data is same as what
+ *    guest wrote in step 7 and continues guest execution.
+ * 9) Guest reads the private memory and verifies that the data is same as
+ *    written in step 7.
+ */
+#define PSAT_ID                 4
+#define PSAT_DESC               "PrivateSharedAccessTest"
+
+#define PSAT_GUEST_STARTED                  0ULL
+#define PSAT_GUEST_PRIVATE_MEM_UPDATED      1ULL
+#define PSAT_GUEST_PRIVATE_MEM_VERIFIED     2ULL
+#define PSAT_GUEST_SHARED_MEM_UPDATED       3ULL
+
+static bool psat_handle_vm_stage(struct kvm_vm *vm,
+            void *test_info,
+            uint64_t stage)
+{
+    void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+    int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+    switch (stage) {
+    case PSAT_GUEST_STARTED: {
+        /* Initialize the contents of shared memory */
+        TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory update failed");
+        VM_STAGE_PROCESSED(PSAT_GUEST_STARTED);
+        break;
+    }
+    case PSAT_GUEST_PRIVATE_MEM_UPDATED: {
+        /* verify data to be same as what vmm wrote earlier */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED);
+        break;
+    }
+    case PSAT_GUEST_PRIVATE_MEM_VERIFIED: {
+        /* Remove the backing private memory storage so that
+         * subsequent accesses from guest cause a second stage
+         * page fault
+         */
+        int ret = fallocate(priv_memfd,
+            FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+            0, TEST_MEM_SIZE);
+        TEST_ASSERT(ret != -1,
+            "fallocate failed in psat handling");
+        VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
+        break;
+    }
+    case PSAT_GUEST_SHARED_MEM_UPDATED: {
+        /* verify data to be same as what guest wrote */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED);
+        break;
+    }
+    default:
+        printf("Unhandled VM stage %ld\n", stage);
+        return false;
+    }
+
+    return true;
+}
+
+static void psat_guest_code(void)
+{
+    void *shared_mem = (void *)TEST_MEM_GPA;
+    int ret;
+
+    GUEST_SYNC(PSAT_GUEST_STARTED);
+    /* Mark the GPA range to be treated as always accessed privately */
+    ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+        TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+        KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+    GUEST_ASSERT_1(ret == 0, ret);
+
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+    GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+    GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
+
+    /* Mark no GPA range to be treated as accessed privately */
+    ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0,
+        0, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+    GUEST_ASSERT_1(ret == 0, ret);
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+    GUEST_SYNC(PSAT_GUEST_SHARED_MEM_UPDATED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+    GUEST_DONE();
+}
+
+/* Test to verify guest shared and private accesses on memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM removes the private memory backing and populates the shared memory
+ *    with known pattern and continues guest execution.
+ * 3) Guest reads shared gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and installs private memory backing again to allow guest
+ *    to do private access and invalidate second stage mappings.
+ * 6) Guest does private write access on shared memory and signals VMM.
+ * 7) VMM reads the shared memory and verifies that the data is still same
+ *    as in step 4 and continues guest execution.
+ * 8) Guest reads the private memory and verifies that the data is same as
+ *    written in step 6.
+ */
+#define SPAT_ID                 5
+#define SPAT_DESC               "SharedPrivateAccessTest"
+
+#define SPAT_GUEST_STARTED                  0ULL
+#define SPAT_GUEST_SHARED_MEM_UPDATED       1ULL
+#define SPAT_GUEST_PRIVATE_MEM_UPDATED      2ULL
+
+static bool spat_handle_vm_stage(struct kvm_vm *vm,
+            void *test_info,
+            uint64_t stage)
+{
+    void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+    int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+    switch (stage) {
+    case SPAT_GUEST_STARTED: {
+        /* Remove the backing private memory storage so that
+         * subsequent accesses from guest cause a second stage
+         * page fault
+         */
+        int ret = fallocate(priv_memfd,
+            FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+            0, TEST_MEM_SIZE);
+        TEST_ASSERT(ret != -1,
+            "fallocate failed in spat handling");
+
+        /* Initialize the contents of shared memory */
+        TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+                TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+            "Shared memory update failed");
+        VM_STAGE_PROCESSED(SPAT_GUEST_STARTED);
+        break;
+    }
+    case SPAT_GUEST_SHARED_MEM_UPDATED: {
+        /* verify data to be same as what guest wrote earlier */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        /* Allocate memory for private backing store */
+        int ret = fallocate(priv_memfd,
+                0, 0, TEST_MEM_SIZE);
+        TEST_ASSERT(ret != -1,
+            "fallocate failed in spat handling");
+        VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED);
+        break;
+    }
+    case SPAT_GUEST_PRIVATE_MEM_UPDATED: {
+        /* verify data to be same as what guest wrote earlier */
+        TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+                TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+            "Shared memory view mismatch");
+        VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED);
+        break;
+    }
+    default:
+        printf("Unhandled VM stage %ld\n", stage);
+        return false;
+    }
+
+    return true;
+}
+
+static void spat_guest_code(void)
+{
+    void *shared_mem = (void *)TEST_MEM_GPA;
+    int ret;
+
+    GUEST_SYNC(SPAT_GUEST_STARTED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+    GUEST_SYNC(SPAT_GUEST_SHARED_MEM_UPDATED);
+    /* Mark the GPA range to be treated as always accessed privately */
+    ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+        TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+        KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+    GUEST_ASSERT_1(ret == 0, ret);
+
+    GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+            TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+    GUEST_SYNC(SPAT_GUEST_PRIVATE_MEM_UPDATED);
+    GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+            TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+    GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
     [PMPAT_ID] = {
         .test_desc = PMPAT_DESC,
@@ -222,6 +585,26 @@ static struct test_run_helper priv_memfd_testsuite[] = {
         .vmst_handler = pmsat_handle_vm_stage,
         .guest_fn = pmsat_guest_code,
     },
+    [SMSAT_ID] = {
+        .test_desc = SMSAT_DESC,
+        .vmst_handler = smsat_handle_vm_stage,
+        .guest_fn = smsat_guest_code,
+    },
+    [SMPAT_ID] = {
+        .test_desc = SMPAT_DESC,
+        .vmst_handler = smpat_handle_vm_stage,
+        .guest_fn = smpat_guest_code,
+    },
+    [PSAT_ID] = {
+        .test_desc = PSAT_DESC,
+        .vmst_handler = psat_handle_vm_stage,
+        .guest_fn = psat_guest_code,
+    },
+    [SPAT_ID] = {
+        .test_desc = SPAT_DESC,
+        .vmst_handler = spat_handle_vm_stage,
+        .guest_fn = spat_guest_code,
+    },
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
@@ -365,7 +748,6 @@ static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
         guest_addr);
 }
 
-/* Do private access to the guest's private memory */
 static void setup_and_execute_test(uint32_t test_id)
 {
     struct kvm_vm *vm;
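With this patch applied, the suite covers both access modes against both
backing states (names as defined in the tests above):

  PMPAT - private backing present, guest accesses privately
  PMSAT - private backing present, guest accesses in shared fashion
  SMSAT - private backing punched out, guest accesses in shared fashion
  SMPAT - private backing punched out, guest accesses privately
  PSAT  - starts with private accesses, backing punched out mid-test,
          ends with shared accesses
  SPAT  - starts with shared accesses, backing reinstalled mid-test,
          ends with private accesses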
From patchwork Wed May 11 00:08:08 2022
X-Patchwork-Submitter: Vishal Annapurve
X-Patchwork-Id: 12845634
Date: Wed, 11 May 2022 00:08:08 +0000
Message-Id: <20220511000811.384766-7-vannapurve@google.com>
In-Reply-To: <20220511000811.384766-1-vannapurve@google.com>
References: <20220511000811.384766-1-vannapurve@google.com>
Subject: [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE
 hypercall test
From: Vishal Annapurve
To: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org

Add a test to exercise the explicit memory conversion path using the
KVM_HC_MAP_GPA_RANGE hypercall.

Signed-off-by: Vishal Annapurve
---
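The guest-side conversion protocol this test drives can be condensed into a
helper along these lines (an illustrative sketch, not part of the patch; it
assumes kvm_hypercall(), GUEST_ASSERT_1, MIN_PAGE_SHIFT and the
KVM_MAP_GPA_RANGE_*/KVM_MARK_GPA_RANGE_ENC_ACCESS flags used elsewhere in
this series):

    static inline void guest_map_gpa_range(uint64_t gpa, uint64_t size,
                                           bool encrypted)
    {
        uint64_t attrs = (encrypted ? KVM_MAP_GPA_RANGE_ENCRYPTED :
                                      KVM_MAP_GPA_RANGE_DECRYPTED) |
                         KVM_MAP_GPA_RANGE_PAGE_SZ_4K;
        int ret;

        /* Ask the VMM to convert the range's backing... */
        ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa,
                            size >> MIN_PAGE_SHIFT, attrs, 0);
        GUEST_ASSERT_1(ret == 0, ret);

        /* ...then tell KVM how subsequent guest accesses to it should be
         * treated: mark the range for private access, or mark no range at
         * all so everything is accessed as shared (mirroring the sequences
         * in pspahct_guest_code() below). */
        ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE,
                            encrypted ? gpa : 0,
                            encrypted ? (size >> MIN_PAGE_SHIFT) : 0,
                            KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
        GUEST_ASSERT_1(ret == 0, ret);
    }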
+ */ +#define PSPAHCT_ID 6 +#define PSPAHCT_DESC "PrivateSharedPrivateAccessHyperCallTest" + +#define PSPAHCT_GUEST_STARTED 0ULL +#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED 1ULL +#define PSPAHCT_GUEST_SHARED_MEM_UPDATED 2ULL +#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2 3ULL + +static bool pspahct_handle_vm_stage(struct kvm_vm *vm, + void *test_info, + uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + + switch (stage) { + case PSPAHCT_GUEST_STARTED: { + /* Initialize the contents of shared memory */ + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + "Shared memory update failed"); + VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED); + break; + } + case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); + break; + } + case PSPAHCT_GUEST_SHARED_MEM_UPDATED: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED); + break; + } + case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + "Shared memory view mismatch"); + VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); + break; + } + default: + printf("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void pspahct_guest_code(void) +{ + void *test_mem = (void *)TEST_MEM_GPA; + int ret; + + GUEST_SYNC(PSPAHCT_GUEST_STARTED); + + /* Mark the GPA range to be treated as always accessed privately */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + + GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + + /* Map the GPA range to be treated as shared */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via shared + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + GUEST_SYNC(PSPAHCT_GUEST_SHARED_MEM_UPDATED); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + + /* Map the GPA range to be treated as private */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via private + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + 
GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + GUEST_DONE(); +} + static struct test_run_helper priv_memfd_testsuite[] = { [PMPAT_ID] = { .test_desc = PMPAT_DESC, @@ -605,6 +748,11 @@ static struct test_run_helper priv_memfd_testsuite[] = { .vmst_handler = spat_handle_vm_stage, .guest_fn = spat_guest_code, }, + [PSPAHCT_ID] = { + .test_desc = PSPAHCT_DESC, + .vmst_handler = pspahct_handle_vm_stage, + .guest_fn = pspahct_guest_code, + }, }; static void handle_vm_exit_hypercall(struct kvm_run *run,
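On the host side, each conversion requested through KVM_HC_MAP_GPA_RANGE comes down to a fallocate() call on the private memfd: allocate backing store when a range becomes private, punch a hole when it becomes shared. A standalone sketch of just that half, with an ordinary memfd standing in for the MFD_INACCESSIBLE one (which needs the kernel support this series builds on):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const off_t size = 0x2000;
	/* Ordinary memfd as a stand-in for the MFD_INACCESSIBLE private memfd. */
	int fd = memfd_create("demo_private_mem", 0);

	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}

	/* Range converted to private: allocate the backing storage. */
	if (fallocate(fd, 0, 0, size))
		perror("fallocate(alloc)");

	/* Range converted to shared: punch a hole so the private pages are
	 * freed and the range is not backed twice.
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, size))
		perror("fallocate(punch)");

	close(fd);
	return 0;
}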
From patchwork Wed May 11 00:08:09 2022 Date: Wed, 11 May 2022 00:08:09 +0000 In-Reply-To: <20220511000811.384766-1-vannapurve@google.com> Message-Id: <20220511000811.384766-8-vannapurve@google.com> References: <20220511000811.384766-1-vannapurve@google.com> Subject: [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite. From: Vishal Annapurve From: Austin Diviness Adds the ability to run the priv_memfd_test suite across various page sizes for shared/private memory. Shared and private memory can be allocated with differently sized pages. In order to verify that there isn't a behavior change based on page size, this change runs the tests using the currently supported permutations. Adds command line flags to control whether the tests should run with hugepages backing the test memory.
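For the 2MB-backed shared memory case this reduces to adding MAP_HUGETLB plus an explicit huge page size to the usual anonymous mmap(), then pre-faulting the range. A standalone sketch of that mapping, assuming hugepages have been reserved beforehand (e.g. via /proc/sys/vm/nr_hugepages):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_2MB
#define MAP_HUGE_2MB (21 << MAP_HUGE_SHIFT)
#endif

int main(void)
{
	const size_t size = 2UL * 1024 * 1024; /* one 2MB page */
	/* Same flag combination the patch uses for 2MB-backed shared memory. */
	void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE |
			 MAP_HUGETLB | MAP_HUGE_2MB | MAP_POPULATE, -1, 0);

	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault the pages in up front, as the test does after mmap(). */
	if (madvise(mem, size, MADV_WILLNEED))
		perror("madvise");

	munmap(mem, size);
	return 0;
}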
Signed-off-by: Austin Diviness Signed-off-by: Vishal Annapurve --- tools/testing/selftests/kvm/priv_memfd_test.c | 369 ++++++++++++++---- 1 file changed, 294 insertions(+), 75 deletions(-) diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c index c2ea8f67337c..dbe6ead92ba7 100644 --- a/tools/testing/selftests/kvm/priv_memfd_test.c +++ b/tools/testing/selftests/kvm/priv_memfd_test.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #define _GNU_SOURCE /* for program_invocation_short_name */ #include +#include #include #include #include @@ -17,9 +18,18 @@ #include #include +#define BYTE_MASK 0xFF + +// flags for mmap +#define MAP_HUGE_2MB (21 << MAP_HUGE_SHIFT) +#define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT) + +// page sizes +#define PAGE_SIZE_4KB ((size_t)0x1000) +#define PAGE_SIZE_2MB (PAGE_SIZE_4KB * (size_t)512) +#define PAGE_SIZE_1GB ((PAGE_SIZE_4KB * 256) * 1024) + #define TEST_MEM_GPA 0xb0000000 -#define TEST_MEM_SIZE 0x2000 -#define TEST_MEM_END (TEST_MEM_GPA + TEST_MEM_SIZE) #define TEST_MEM_DATA_PAT1 0x6666666666666666 #define TEST_MEM_DATA_PAT2 0x9999999999999999 #define TEST_MEM_DATA_PAT3 0x3333333333333333 @@ -34,8 +44,16 @@ enum mem_op { #define VCPU_ID 0 +// address where guests can receive the mem size of the data +// allocated to them by the vmm +#define MEM_SIZE_MMIO_ADDRESS 0xa0000000 + #define VM_STAGE_PROCESSED(x) pr_info("Processed stage %s\n", #x) +// global used for storing the current mem allocation size +// for the running test +static size_t test_mem_size; + typedef bool (*vm_stage_handler_fn)(struct kvm_vm *, void *, uint64_t); typedef void (*guest_code_fn)(void); @@ -47,6 +65,36 @@ struct test_run_helper { int priv_memfd; }; +enum page_size { + PAGE_4KB, + PAGE_2MB, + PAGE_1GB +}; + +struct page_combo { + enum page_size shared; + enum page_size private; +}; + +static char *page_size_to_str(enum page_size x) +{ + switch (x) { + case PAGE_4KB: + return "PAGE_4KB"; + case PAGE_2MB: + return "PAGE_2MB"; + case PAGE_1GB: + return "PAGE_1GB"; + default: + return "UNKNOWN"; + } +} + +static uint64_t test_mem_end(const uint64_t start, const uint64_t size) +{ + return start + size; +} + /* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load * which doesn't handle global offset table updates. Calling standard libc * functions would normally result in referring to the global offset table. 
@@ -103,7 +151,7 @@ static bool pmpat_handle_vm_stage(struct kvm_vm *vm, case PMPAT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failure"); VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED); break; @@ -111,7 +159,7 @@ static bool pmpat_handle_vm_stage(struct kvm_vm *vm, case PMPAT_GUEST_PRIV_MEM_UPDATED: { /* verify host updated data is still intact */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED); break; @@ -131,18 +179,20 @@ static void pmpat_guest_code(void) GUEST_SYNC(PMPAT_GUEST_STARTED); + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2, - TEST_MEM_SIZE)); + mem_size)); GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_DONE(); } @@ -175,7 +225,7 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm, case PMSAT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failed"); VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED); break; @@ -183,7 +233,7 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm, case PMSAT_GUEST_TEST_MEM_UPDATED: { /* verify data to be same as what guest wrote */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED); break; @@ -199,13 +249,14 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm, static void pmsat_guest_code(void) { void *shared_mem = (void *)TEST_MEM_GPA; + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); GUEST_SYNC(PMSAT_GUEST_STARTED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED); GUEST_DONE(); @@ -240,12 +291,12 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm, /* Remove the backing private memory storage */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in smsat handling"); /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory updated failed"); VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED); break; @@ -253,7 +304,7 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm, case SMSAT_GUEST_TEST_MEM_UPDATED: { /* verify data to be same as what guest wrote */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); 
VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED); break; @@ -269,13 +320,14 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm, static void smsat_guest_code(void) { void *shared_mem = (void *)TEST_MEM_GPA; + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); GUEST_SYNC(SMSAT_GUEST_STARTED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(SMSAT_GUEST_TEST_MEM_UPDATED); GUEST_DONE(); @@ -309,12 +361,12 @@ static bool smpat_handle_vm_stage(struct kvm_vm *vm, /* Remove the backing private memory storage */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in smpat handling"); /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory updated failed"); VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED); break; @@ -322,7 +374,7 @@ static bool smpat_handle_vm_stage(struct kvm_vm *vm, case SMPAT_GUEST_TEST_MEM_UPDATED: { /* verify data to be same as what vmm wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED); break; @@ -342,17 +394,19 @@ static void smpat_guest_code(void) GUEST_SYNC(SMPAT_GUEST_STARTED); + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(SMPAT_GUEST_TEST_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_DONE(); } @@ -394,7 +448,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, case PSAT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failed"); VM_STAGE_PROCESSED(PSAT_GUEST_STARTED); break; @@ -402,7 +456,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, case PSAT_GUEST_PRIVATE_MEM_UPDATED: { /* verify data to be same as what vmm wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED); break; @@ -414,7 +468,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in smpat handling"); VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED); @@ -423,7 +477,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm, case PSAT_GUEST_SHARED_MEM_UPDATED: { /* verify data to be same as what guest wrote */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory 
view mismatch"); VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED); break; @@ -442,17 +496,20 @@ static void psat_guest_code(void) int ret; GUEST_SYNC(PSAT_GUEST_STARTED); + + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_VERIFIED); @@ -461,10 +518,10 @@ static void psat_guest_code(void) 0, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSAT_GUEST_SHARED_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_DONE(); } @@ -509,13 +566,13 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm, */ int ret = fallocate(priv_memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, TEST_MEM_SIZE); + 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in spat handling"); /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory updated failed"); VM_STAGE_PROCESSED(SPAT_GUEST_STARTED); break; @@ -523,11 +580,11 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm, case SPAT_GUEST_SHARED_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); /* Allocate memory for private backing store */ int ret = fallocate(priv_memfd, - 0, 0, TEST_MEM_SIZE); + 0, 0, test_mem_size); TEST_ASSERT(ret != -1, "fallocate failed in spat handling"); VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED); @@ -536,7 +593,7 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm, case SPAT_GUEST_PRIVATE_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED); break; @@ -554,23 +611,26 @@ static void spat_guest_code(void) void *shared_mem = (void *)TEST_MEM_GPA; int ret; + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + GUEST_SYNC(SPAT_GUEST_STARTED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SYNC(SPAT_GUEST_SHARED_MEM_UPDATED); /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); 
GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_DONE(); } @@ -617,7 +677,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_STARTED: { /* Initialize the contents of shared memory */ TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory update failed"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED); break; @@ -625,7 +685,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT1, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); break; @@ -633,7 +693,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_SHARED_MEM_UPDATED: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED); break; @@ -641,7 +701,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm, case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: { /* verify data to be same as what guest wrote earlier */ TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE), + TEST_MEM_DATA_PAT2, test_mem_size), "Shared memory view mismatch"); VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); break; @@ -661,21 +721,23 @@ static void pspahct_guest_code(void) GUEST_SYNC(PSPAHCT_GUEST_STARTED); + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + /* Mark the GPA range to be treated as always accessed privately */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); /* Map the GPA range to be treated as shared */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); GUEST_ASSERT_1(ret == 0, ret); @@ -687,17 +749,17 @@ static void pspahct_guest_code(void) GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); GUEST_SYNC(PSPAHCT_GUEST_SHARED_MEM_UPDATED); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT2, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT2, mem_size)); /* Map the GPA range to be treated as private */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE >> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); GUEST_ASSERT_1(ret == 0, ret); @@ -705,15 +767,15 @@ static void pspahct_guest_code(void) * access */ ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, - TEST_MEM_SIZE 
>> MIN_PAGE_SHIFT, + mem_size >> MIN_PAGE_SHIFT, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); GUEST_ASSERT_1(ret == 0, ret); GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2); GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, - TEST_MEM_DATA_PAT1, TEST_MEM_SIZE)); + TEST_MEM_DATA_PAT1, mem_size)); GUEST_DONE(); } @@ -758,7 +820,7 @@ static struct test_run_helper priv_memfd_testsuite[] = { static void handle_vm_exit_hypercall(struct kvm_run *run, uint32_t test_id) { - uint64_t gpa, npages, attrs; + uint64_t gpa, npages, attrs, mem_end; int priv_memfd = priv_memfd_testsuite[test_id].priv_memfd; int ret; @@ -772,9 +834,10 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, gpa = run->hypercall.args[0]; npages = run->hypercall.args[1]; attrs = run->hypercall.args[2]; + mem_end = test_mem_end(gpa, test_mem_size); if ((gpa < TEST_MEM_GPA) || ((gpa + - (npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) { + (npages << MIN_PAGE_SHIFT)) > mem_end)) { TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n", gpa, npages); } @@ -800,7 +863,7 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, static void handle_vm_exit_memory_error(struct kvm_run *run, uint32_t test_id) { - uint64_t gpa, size, flags; + uint64_t gpa, size, flags, mem_end; int ret; int priv_memfd = priv_memfd_testsuite[test_id].priv_memfd; @@ -809,9 +872,10 @@ static void handle_vm_exit_memory_error(struct kvm_run *run, gpa = run->memory.gpa; size = run->memory.size; flags = run->memory.flags; + mem_end = test_mem_end(gpa, test_mem_size); if ((gpa < TEST_MEM_GPA) || ((gpa + size) - > TEST_MEM_END)) { + > mem_end)) { TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n", gpa, size); } @@ -858,6 +922,22 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) continue; } + if (run->exit_reason == KVM_EXIT_MMIO) { + if (run->mmio.phys_addr == MEM_SIZE_MMIO_ADDRESS) { + // tell the guest the size of the memory + // it's been allocated + int shift_amount = 0; + + for (int i = 0; i < sizeof(uint64_t); ++i) { + run->mmio.data[i] = + (test_mem_size >> + shift_amount) & BYTE_MASK; + shift_amount += CHAR_BIT; + } + } + continue; + } + if (run->exit_reason == KVM_EXIT_HYPERCALL) { handle_vm_exit_hypercall(run, test_id); continue; @@ -896,7 +976,9 @@ static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot, guest_addr); } -static void setup_and_execute_test(uint32_t test_id) +static void setup_and_execute_test(uint32_t test_id, + const enum page_size shared, + const enum page_size private) { struct kvm_vm *vm; int priv_memfd; @@ -907,27 +989,82 @@ static void setup_and_execute_test(uint32_t test_id) vm = vm_create_default(VCPU_ID, 0, priv_memfd_testsuite[test_id].guest_fn); + // use 2 pages by default + size_t mem_size = PAGE_SIZE_4KB * 2; + bool using_hugepages = false; + + int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE; + + switch (shared) { + case PAGE_4KB: + // no additional flags are needed + break; + case PAGE_2MB: + mmap_flags |= MAP_HUGETLB | MAP_HUGE_2MB | MAP_POPULATE; + mem_size = max(mem_size, PAGE_SIZE_2MB); + using_hugepages = true; + break; + case PAGE_1GB: + mmap_flags |= MAP_HUGETLB | MAP_HUGE_1GB | MAP_POPULATE; + mem_size = max(mem_size, PAGE_SIZE_1GB); + using_hugepages = true; + break; + default: + TEST_FAIL("unknown page size for shared memory\n"); + } + + unsigned int memfd_flags = MFD_INACCESSIBLE; + + switch (private) { + case PAGE_4KB: + // no additional flags are needed + break; + case PAGE_2MB: 
+ memfd_flags |= MFD_HUGETLB | MFD_HUGE_2MB; + mem_size = PAGE_SIZE_2MB; + using_hugepages = true; + break; + case PAGE_1GB: + memfd_flags |= MFD_HUGETLB | MFD_HUGE_1GB; + mem_size = PAGE_SIZE_1GB; + using_hugepages = true; + break; + default: + TEST_FAIL("unknown page size for private memory\n"); + } + + // set global for mem size to use later + test_mem_size = mem_size; + /* Allocate shared memory */ - shared_mem = mmap(NULL, TEST_MEM_SIZE, + shared_mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE, - MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); + mmap_flags, -1, 0); TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host"); + if (using_hugepages) { + ret = madvise(shared_mem, mem_size, MADV_WILLNEED); + TEST_ASSERT(ret == 0, "madvise failed"); + } + /* Allocate private memory */ - priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE); + priv_memfd = memfd_create("vm_private_mem", memfd_flags); TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd"); - ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE); + ret = fallocate(priv_memfd, 0, 0, mem_size); TEST_ASSERT(ret != -1, "fallocate failed"); priv_memory_region_add(vm, shared_mem, - TEST_MEM_SLOT, TEST_MEM_SIZE, + TEST_MEM_SLOT, mem_size, TEST_MEM_GPA, priv_memfd, 0); - pr_info("Mapping test memory pages 0x%x page_size 0x%x\n", - TEST_MEM_SIZE/vm_get_page_size(vm), + pr_info("Mapping test memory pages 0x%zx page_size 0x%x\n", + mem_size/vm_get_page_size(vm), vm_get_page_size(vm)); virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA, - (TEST_MEM_SIZE/vm_get_page_size(vm))); + (mem_size/vm_get_page_size(vm))); + + // add mmio communication page + virt_map(vm, MEM_SIZE_MMIO_ADDRESS, MEM_SIZE_MMIO_ADDRESS, 1); /* Enable exit on KVM_HC_MAP_GPA_RANGE */ pr_info("Enabling exit on map_gpa_range hypercall\n"); @@ -945,24 +1082,106 @@ static void setup_and_execute_test(uint32_t test_id) priv_memfd_testsuite[test_id].priv_memfd = priv_memfd; vcpu_work(vm, test_id); - munmap(shared_mem, TEST_MEM_SIZE); + munmap(shared_mem, mem_size); priv_memfd_testsuite[test_id].shared_mem = NULL; close(priv_memfd); priv_memfd_testsuite[test_id].priv_memfd = -1; kvm_vm_free(vm); } +static void hugepage_requirements_text(const struct page_combo matrix) +{ + int pages_needed_2mb = 0; + int pages_needed_1gb = 0; + enum page_size sizes[] = { matrix.shared, matrix.private }; + + for (int i = 0; i < ARRAY_SIZE(sizes); ++i) { + if (sizes[i] == PAGE_2MB) + ++pages_needed_2mb; + if (sizes[i] == PAGE_1GB) + ++pages_needed_1gb; + } + if (pages_needed_2mb != 0 && pages_needed_1gb != 0) { + pr_info("This test requires %d 2MB page(s) and %d 1GB page(s)\n", + pages_needed_2mb, pages_needed_1gb); + } else if (pages_needed_2mb != 0) { + pr_info("This test requires %d 2MB page(s)\n", pages_needed_2mb); + } else if (pages_needed_1gb != 0) { + pr_info("This test requires %d 1GB page(s)\n", pages_needed_1gb); + } +} + +static bool should_skip_test(const struct page_combo matrix, + const bool use_2mb_pages, + const bool use_1gb_pages) +{ + if ((matrix.shared == PAGE_2MB || matrix.private == PAGE_2MB) + && !use_2mb_pages) + return true; + if ((matrix.shared == PAGE_1GB || matrix.private == PAGE_1GB) + && !use_1gb_pages) + return true; + return false; +} + +static void print_help(const char *const name) +{ + puts(""); + printf("usage %s [-h] [-m] [-g]\n", name); + puts(""); + printf(" -h: Display this help message\n"); + printf(" -m: include test runs using 2MB page permutations\n"); + printf(" -g: include test runs using 1GB page permutations\n"); + exit(0); +} + int main(int 
argc, char *argv[]) { /* Tell stdout not to buffer its content */ setbuf(stdout, NULL); + // arg parsing + int opt; + bool use_2mb_pages = false; + bool use_1gb_pages = false; + + while ((opt = getopt(argc, argv, "mgh")) != -1) { + switch (opt) { + case 'm': + use_2mb_pages = true; + break; + case 'g': + use_1gb_pages = true; + break; + case 'h': + default: + print_help(argv[0]); + } + } + + struct page_combo page_size_matrix[] = { + { .shared = PAGE_4KB, .private = PAGE_4KB }, + { .shared = PAGE_2MB, .private = PAGE_4KB }, + }; + for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) { - pr_info("=== Starting test %s... ===\n", - priv_memfd_testsuite[i].test_desc); - setup_and_execute_test(i); - pr_info("--- completed test %s ---\n\n", - priv_memfd_testsuite[i].test_desc); + for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) { + const struct page_combo current_page_matrix = page_size_matrix[j]; + + if (should_skip_test(current_page_matrix, + use_2mb_pages, use_1gb_pages)) + break; + pr_info("=== Starting test %s... ===\n", + priv_memfd_testsuite[i].test_desc); + pr_info("using page sizes shared: %s private: %s\n", + page_size_to_str(current_page_matrix.shared), + page_size_to_str(current_page_matrix.private)); + hugepage_requirements_text(current_page_matrix); + setup_and_execute_test(i, current_page_matrix.shared, + current_page_matrix.private); + pr_info("--- completed test %s ---\n\n", + priv_memfd_testsuite[i].test_desc); + } } return 0;
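The size handoff above is a little-endian, byte-at-a-time serialization of test_mem_size into the 8-byte MMIO data buffer; the guest's plain 64-bit load from MEM_SIZE_MMIO_ADDRESS reassembles it. The encoding in isolation (the reassembly via memcpy only matches on little-endian targets such as the x86 hosts this test runs on):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BYTE_MASK 0xFF

int main(void)
{
	uint64_t test_mem_size = 0x200000; /* example value */
	uint8_t mmio_data[sizeof(uint64_t)];
	int shift_amount = 0;

	/* VMM side: low byte first, exactly as vcpu_work() fills
	 * run->mmio.data on a KVM_EXIT_MMIO read.
	 */
	for (int i = 0; i < (int)sizeof(uint64_t); ++i) {
		mmio_data[i] = (test_mem_size >> shift_amount) & BYTE_MASK;
		shift_amount += CHAR_BIT;
	}

	/* Guest side: a plain 64-bit load from the MMIO page. */
	uint64_t decoded;

	memcpy(&decoded, mmio_data, sizeof(decoded));
	printf("decoded size: 0x%llx\n", (unsigned long long)decoded);
	return 0;
}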
From patchwork Wed May 11 00:08:10 2022 Date: Wed, 11 May 2022 00:08:10 +0000 In-Reply-To: <20220511000811.384766-1-vannapurve@google.com> Message-Id: <20220511000811.384766-9-vannapurve@google.com> References: <20220511000811.384766-1-vannapurve@google.com> Subject: [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test avoiding double allocation From: Vishal Annapurve Add a memory conversion test that does not lead to double allocation of the memory backing gpa ranges. Signed-off-by: Vishal Annapurve --- tools/testing/selftests/kvm/priv_memfd_test.c | 225 ++++++++++++++++-- 1 file changed, 211 insertions(+), 14 deletions(-) diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c index dbe6ead92ba7..3b6e84cf6a44 100644 --- a/tools/testing/selftests/kvm/priv_memfd_test.c +++ b/tools/testing/selftests/kvm/priv_memfd_test.c @@ -63,6 +63,8 @@ struct test_run_helper { guest_code_fn guest_fn; void *shared_mem; int priv_memfd; + bool disallow_boot_shared_access; + bool toggle_shared_mem_state; }; enum page_size { @@ -779,6 +781,151 @@ static void pspahct_guest_code(void) GUEST_DONE(); } +/* Test to verify guest accesses without double allocation: + * Guest starts with shared memory access disallowed by default. + * 1) Guest writes the private memory privately via a known pattern + * 2) Guest reads the private memory privately and verifies that the contents + * are same as written. + * 3) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as shared + * and marks the range to be accessed via shared access.
+ * 4) Guest writes shared memory with another pattern and signals VMM + * 5) VMM verifies the memory contents to be same as written by guest in step + * 4 and updates the memory with a different pattern + * 6) Guest verifies the memory contents to be same as written in step 5. + * 7) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as private + * and marks the range to be accessed via private access. + * 8) Guest writes a known pattern to the test memory and verifies the contents + * to be same as written. + * 9) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as shared + * and marks the range to be accessed via shared access. + * 10) Guest writes shared memory with another pattern and signals VMM + * 11) VMM verifies the memory contents to be same as written by guest in step + * 10 and updates the memory with a different pattern + * 12) Guest verifies the memory contents to be same as written in step 11. + */ +#define PSAWDAT_ID 7 +#define PSAWDAT_DESC "PrivateSharedAccessWithoutDoubleAllocationTest" + +#define PSAWDAT_GUEST_SHARED_MEM_UPDATED1 1ULL +#define PSAWDAT_GUEST_SHARED_MEM_UPDATED2 2ULL + +static bool psawdat_handle_vm_stage(struct kvm_vm *vm, + void *test_info, + uint64_t stage) +{ + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem; + + switch (stage) { + case PSAWDAT_GUEST_SHARED_MEM_UPDATED1: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT2, test_mem_size), + "Shared memory view mismatch"); + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT1, test_mem_size), + "Shared mem update failure"); + VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED1); + break; + } + case PSAWDAT_GUEST_SHARED_MEM_UPDATED2: { + /* verify data to be same as what guest wrote earlier */ + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem, + TEST_MEM_DATA_PAT3, test_mem_size), + "Shared memory view mismatch"); + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem, + TEST_MEM_DATA_PAT4, test_mem_size), + "Shared mem update failure"); + VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED2); + break; + } + default: + printf("Unhandled VM stage %ld\n", stage); + return false; + } + + return true; +} + +static void psawdat_guest_code(void) +{ + void *test_mem = (void *)TEST_MEM_GPA; + int ret; + + const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS); + + /* Mark the GPA range to be treated as always accessed privately */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + /* Map the GPA range to be treated as shared */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via shared + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_SYNC(PSAWDAT_GUEST_SHARED_MEM_UPDATED1); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT1, mem_size)); + + /* Map the GPA range to be treated as private */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, +
mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via private + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT2, mem_size)); + + /* Map the GPA range to be treated as shared */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA, + mem_size >> MIN_PAGE_SHIFT, + KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0); + GUEST_ASSERT_1(ret == 0, ret); + + /* Mark the GPA range to be treated as always accessed via shared + * access + */ + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0, + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0); + GUEST_ASSERT_1(ret == 0, ret); + + GUEST_ASSERT(do_mem_op(SET_PAT, test_mem, + TEST_MEM_DATA_PAT3, mem_size)); + GUEST_SYNC(PSAWDAT_GUEST_SHARED_MEM_UPDATED2); + + GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem, + TEST_MEM_DATA_PAT4, mem_size)); + + GUEST_DONE(); +} + static struct test_run_helper priv_memfd_testsuite[] = { [PMPAT_ID] = { .test_desc = PMPAT_DESC, @@ -815,6 +962,13 @@ static struct test_run_helper priv_memfd_testsuite[] = { .vmst_handler = pspahct_handle_vm_stage, .guest_fn = pspahct_guest_code, }, + [PSAWDAT_ID] = { + .test_desc = PSAWDAT_DESC, + .vmst_handler = psawdat_handle_vm_stage, + .guest_fn = psawdat_guest_code, + .toggle_shared_mem_state = true, + .disallow_boot_shared_access = true, + }, }; static void handle_vm_exit_hypercall(struct kvm_run *run, @@ -825,6 +979,10 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, priv_memfd_testsuite[test_id].priv_memfd; int ret; int fallocate_mode; + void *shared_mem = priv_memfd_testsuite[test_id].shared_mem; + bool toggle_shared_mem_state = + priv_memfd_testsuite[test_id].toggle_shared_mem_state; + int mprotect_mode; if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) { TEST_FAIL("Unhandled Hypercall %lld\n", @@ -842,11 +1000,13 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, gpa, npages); } - if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) + if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) { fallocate_mode = 0; - else { + mprotect_mode = PROT_NONE; + } else { fallocate_mode = (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE); + mprotect_mode = PROT_READ | PROT_WRITE; } pr_info("Converting off 0x%lx pages 0x%lx to %s\n", (gpa - TEST_MEM_GPA), npages, @@ -857,6 +1017,17 @@ static void handle_vm_exit_hypercall(struct kvm_run *run, npages << MIN_PAGE_SHIFT); TEST_ASSERT(ret != -1, "fallocate failed in hc handling"); + if (toggle_shared_mem_state) { + if (fallocate_mode) { + ret = madvise(shared_mem, test_mem_size, + MADV_DONTNEED); + TEST_ASSERT(ret != -1, + "madvise failed in hc handling"); + } + ret = mprotect(shared_mem, test_mem_size, mprotect_mode); + TEST_ASSERT(ret != -1, + "mprotect failed in hc handling"); + } run->hypercall.ret = 0; } @@ -867,7 +1038,11 @@ static void handle_vm_exit_memory_error(struct kvm_run *run, int ret; int priv_memfd = priv_memfd_testsuite[test_id].priv_memfd; + void *shared_mem = priv_memfd_testsuite[test_id].shared_mem; + bool toggle_shared_mem_state = + priv_memfd_testsuite[test_id].toggle_shared_mem_state; int fallocate_mode; + int mprotect_mode; gpa = run->memory.gpa; size = run->memory.size; @@ -880,11 +1055,13 @@ static void handle_vm_exit_memory_error(struct 
kvm_run *run, gpa, size); } - if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) + if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) { fallocate_mode = 0; - else { + mprotect_mode = PROT_NONE; + } else { fallocate_mode = (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE); + mprotect_mode = PROT_READ | PROT_WRITE; } pr_info("Converting off 0x%lx size 0x%lx to %s\n", (gpa - TEST_MEM_GPA), size, @@ -894,6 +1071,18 @@ static void handle_vm_exit_memory_error(struct kvm_run *run, (gpa - TEST_MEM_GPA), size); TEST_ASSERT(ret != -1, "fallocate failed in memory error handling"); + + if (toggle_shared_mem_state) { + if (fallocate_mode) { + ret = madvise(shared_mem, test_mem_size, + MADV_DONTNEED); + TEST_ASSERT(ret != -1, + "madvise failed in memory error handling"); + } + ret = mprotect(shared_mem, test_mem_size, mprotect_mode); + TEST_ASSERT(ret != -1, + "mprotect failed in memory error handling"); + } } static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) @@ -924,14 +1113,14 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id) if (run->exit_reason == KVM_EXIT_MMIO) { if (run->mmio.phys_addr == MEM_SIZE_MMIO_ADDRESS) { - // tell the guest the size of the memory - // it's been allocated + /* tell the guest the size of the memory it's + * been allocated + */ int shift_amount = 0; for (int i = 0; i < sizeof(uint64_t); ++i) { - run->mmio.data[i] = - (test_mem_size >> - shift_amount) & BYTE_MASK; + run->mmio.data[i] = (test_mem_size >> + shift_amount) & BYTE_MASK; shift_amount += CHAR_BIT; } } @@ -985,6 +1174,9 @@ static void setup_and_execute_test(uint32_t test_id, int ret; void *shared_mem; struct kvm_enable_cap cap; + bool disallow_boot_shared_access = + priv_memfd_testsuite[test_id].disallow_boot_shared_access; + int prot_flags = PROT_READ | PROT_WRITE; vm = vm_create_default(VCPU_ID, 0, priv_memfd_testsuite[test_id].guest_fn); @@ -1036,10 +1228,12 @@ static void setup_and_execute_test(uint32_t test_id, // set global for mem size to use later test_mem_size = mem_size; + if (disallow_boot_shared_access) + prot_flags = PROT_NONE; + /* Allocate shared memory */ shared_mem = mmap(NULL, mem_size, - PROT_READ | PROT_WRITE, - mmap_flags, -1, 0); + prot_flags, mmap_flags, -1, 0); TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host"); if (using_hugepages) { @@ -1166,7 +1360,8 @@ int main(int argc, char *argv[]) for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) { for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) { - const struct page_combo current_page_matrix = page_size_matrix[j]; + const struct page_combo current_page_matrix = + page_size_matrix[j]; if (should_skip_test(current_page_matrix, use_2mb_pages, use_1gb_pages)) @@ -1174,8 +1369,10 @@ int main(int argc, char *argv[]) pr_info("=== Starting test %s... ===\n", priv_memfd_testsuite[i].test_desc); pr_info("using page sizes shared: %s private: %s\n", - page_size_to_str(current_page_matrix.shared), - page_size_to_str(current_page_matrix.private)); + page_size_to_str( + current_page_matrix.shared), + page_size_to_str( + current_page_matrix.private)); hugepage_requirements_text(current_page_matrix); setup_and_execute_test(i, current_page_matrix.shared, current_page_matrix.private);
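To avoid double allocation, the conversion handlers above pair the fallocate() on the private memfd with mprotect()/madvise() on the shared mapping: the shared view stays PROT_NONE while the range is private, and is dropped with MADV_DONTNEED and reopened read/write when the range converts to shared. A standalone sketch of the shared-mapping half of that toggle:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	const size_t size = 0x2000;
	/* The shared view starts inaccessible, as with
	 * disallow_boot_shared_access in the patch.
	 */
	void *shared = mmap(NULL, size, PROT_NONE,
			    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

	if (shared == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Range converts to shared: drop stale pages, then allow access. */
	if (madvise(shared, size, MADV_DONTNEED))
		perror("madvise");
	if (mprotect(shared, size, PROT_READ | PROT_WRITE))
		perror("mprotect(rw)");

	/* Range converts back to private: any further shared access faults. */
	if (mprotect(shared, size, PROT_NONE))
		perror("mprotect(none)");

	munmap(shared, size);
	return 0;
}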