[V2,6/6] KVM: selftests: x86: Add selftest for private memory

Message ID 20221205232341.4131240-7-vannapurve@google.com (mailing list archive)
State New, archived
Series KVM: selftests: selftests for fd-based private memory

Commit Message

Vishal Annapurve Dec. 5, 2022, 11:23 p.m. UTC
Add a selftest to exercise implicit/explicit conversion functionality
within KVM and verify:
1) Shared memory is visible to host userspace after conversion
2) Private memory is not visible to host userspace before/after conversion
3) Host userspace and guest can communicate over shared memory

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/private_mem_test.c   | 212 ++++++++++++++++++
 3 files changed, 214 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/private_mem_test.c

Comments

Sean Christopherson Jan. 18, 2023, 12:53 a.m. UTC | #1
On Mon, Dec 05, 2022, Vishal Annapurve wrote:
> +#define TEST_AREA_SLOT		10

I vote for using "DATA" instead of "TEST_AREA", just because it's shorter.

> +#define TEST_AREA_GPA		0xC0000000

Put the GPA above 4GiB, that way it'll never run afoul of the PCI hole, e.g. won't
overlap with the local APIC.

> +#define TEST_AREA_SIZE		(2 * 1024 * 1024)

Use the sizes from <linux/sizes.h>, e.g. SZ_2M.

> +#define GUEST_TEST_MEM_OFFSET	(1 * 1024 * 1024)

Same here.

> +#define GUEST_TEST_MEM_SIZE	(10 * 4096)

Why 10*4KiB?  SZ_2M + PAGE_SIZE seems like the best compromise until selftests
no longer explode on mapping huge memslots in the guest.  That lets us test
boundary conditions with hugepages.
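
Pulling these suggestions together, the defines could look something like the
below (a sketch; the DATA_* names follow the rename suggested above, and the
exact GPA is illustrative):

	#include <linux/sizes.h>

	/* Arbitrary slot number for the test's lone data memslot. */
	#define DATA_SLOT	10

	/* Put the data above 4GiB to stay out of the PCI hole. */
	#define DATA_GPA	((uint64_t)(1ull << 32))

	/* 2MiB + 4KiB so that conversions can cross hugepage boundaries. */
	#define DATA_SIZE	(SZ_2M + PAGE_SIZE)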

> +
> +#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)

Comments on this in the context of host_conv_test_fn().

> +#define TEST_MEM_DATA_PATTERN1	0x66

Regarding the values of the patterns: the patterns themselves are not at all
interesting.  The goal of the test is to verify that KVM/kernel plumbs through the
correct memory.  Verifying that hardware doesn't barf on a pattern is very much a
non-goal, i.e. we aren't writing a test for DRAM.

While debugging my own goofs, I found it much, much easier to triage failures by
simply using the pattern number as the value, e.g. 1 = 0x11, 2 = 0x22, etc.

> +#define TEST_MEM_DATA_PATTERN2	0x99
> +#define TEST_MEM_DATA_PATTERN3	0x33
> +#define TEST_MEM_DATA_PATTERN4	0xaa
> +#define TEST_MEM_DATA_PATTERN5	0x12

While I like using macros instead of copy+pasting magic numbers, in this case the
macros make the code really, really hard to read.  Longish names with just one
character different all look the same when glancing through the code, e.g. identifying
when the test switches between patterns is difficult.

And IMO, having the guest explicitly tell the host what to expect yields a more
maintainable, easier to understand test overall.  And by having the guest say what
pattern(s) to expect/write, there's no need for the guest to have a priori
knowledge of the patterns, and thus no need for macros.

> +static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
> +{
> +	uint8_t *buf = (uint8_t *)mem;
> +
> +	for (uint32_t i = 0; i < size; i++) {
> +		if (buf[i] != pattern)
> +			return false;

It took me blundering into a SHUTDOWN before I figured out these return booleans
instead of simply asserting: using TEST_ASSERT() in the guest obviously doesn't
fare well.

While I like deduplicating code, in this case it does more harm than good because
the context of exactly what fails is lost, e.g. expected vs. actual pattern,
offset, etc...  We could use macros to dedup code, i.e. have only the assertion
be different, but I don't think that ends up being a net positive either.
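
E.g. an asserting guest-side comparison helper could look something like this
(a sketch; GUEST_ASSERT_4() is used to report the failing GPA, offset, and
expected vs. actual values back to the host):

	static void memcmp_g(uint64_t gpa, uint8_t pattern, uint64_t size)
	{
		uint8_t *mem = (uint8_t *)gpa;
		uint64_t i;

		/* Assert on the first mismatch, with enough context to triage. */
		for (i = 0; i < size; i++)
			GUEST_ASSERT_4(mem[i] == pattern, gpa + i, i, mem[i], pattern);
	}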

> +	}
> +
> +	return true;
> +}
> +
> +static void populate_test_area(void *test_area_base, uint64_t pattern)
> +{
> +	memset(test_area_base, pattern, TEST_AREA_SIZE);
> +}
> +
> +static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
> +{
> +	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);

Generally speaking, don't add one-line wrappers that don't provide novel
functionality.  E.g. there's nothing fancy here, and so the wrappers just end up
obfuscating the size of a memset().

> +}
> +
> +static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
> +	uint64_t guest_pattern)
> +{
> +	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
> +	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
> +	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
> +			GUEST_TEST_MEM_SIZE));
> +
> +	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
> +		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
> +		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
> +}
> +
> +#define GUEST_STARTED			0
> +#define GUEST_PRIVATE_MEM_POPULATED	1
> +#define GUEST_SHARED_MEM_POPULATED	2
> +#define GUEST_PRIVATE_MEM_POPULATED2	3
> +
> +/*
> + * Run memory conversion tests with explicit conversion:
> + * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
> + * to back/unback private memory. Subsequent accesses by guest to the gpa range
> + * will not cause exit to userspace.
> + *
> + * Test memory conversion scenarios with following steps:
> + * 1) Access private memory using private access and verify that memory contents
> + *   are not visible to userspace.
> + * 2) Convert memory to shared using explicit conversions and ensure that
> + *   userspace is able to access the shared regions.
> + * 3) Convert memory back to private using explicit conversions and ensure that
> + *   userspace is again not able to access converted private regions.
> + */
> +static void guest_conv_test_fn(void)
> +{
> +	void *test_area_base = (void *)TEST_AREA_GPA;
> +	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
> +	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
> +
> +	GUEST_SYNC(GUEST_STARTED);
> +
> +	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);

Tangentially related to my "the patterns themselves are uninteresting" comment,
and very related to the "avoid global defines for the patterns" suggestion, I would
like to structure this test so that it's easy to test GPA+size combinations.  E.g. to
test that KVM does the right thing when a conversion spans regions that KVM is
likely to map with hugepages, or is misaligned with respect to hugepages, etc.

If the guest explicitly tells the host which patterns to expect/write, then testing
combinations of addresses is just a matter of looping in the guest, e.g.

	struct {
		uint64_t offset;
		uint64_t size;
	} stages[] = {
		GUEST_STAGE(0, PAGE_SIZE),
		GUEST_STAGE(0, SZ_2M),
		GUEST_STAGE(PAGE_SIZE, PAGE_SIZE),
		GUEST_STAGE(PAGE_SIZE, SZ_2M),
		GUEST_STAGE(SZ_2M, PAGE_SIZE),
	};
	const uint8_t init_p = 0xcc;
	uint64_t j;
	int i;

	/* Memory should be shared by default. */
	memset((void *)DATA_GPA, ~init_p, DATA_SIZE);
	GUEST_SYNC4(DATA_GPA, DATA_SIZE, ~init_p, init_p);
	memcmp_g(DATA_GPA, init_p, DATA_SIZE);

	for (i = 0; i < ARRAY_SIZE(stages); i++) {
		blah blah blah
	}

> +	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);

As above, use the sync to effectively tell/request the host to do something, as
opposed to having the host infer what it's supposed to do based on the current
stage.
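
With that approach, the host side collapses into one generic sync handler instead
of a per-stage sequence, roughly along these lines (a sketch; it assumes a
GUEST_SYNC4(gpa, size, current_pattern, new_pattern) variant that lands its
parameters in uc.args[0..3], and memcmp_h() is a hypothetical TEST_ASSERT-based
host-side counterpart to memcmp_g()):

	case UCALL_SYNC: {
		uint8_t *hva = addr_gpa2hva(vm, uc.args[0]);
		uint64_t size = uc.args[1];

		/* Verify the pattern the guest says it just wrote... */
		memcmp_h(hva, uc.args[2], size);

		/* ...then write the pattern the guest wants to read back. */
		memset(hva, uc.args[3], size);
		break;
	}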

Aside from wanting to be able to iterate on GPA+size, I really, really dislike
the GUEST_SYNC(stage) pattern.  It works ok for small tests, but the pattern
doesn't scale, e.g. see xen_shinfo_test.c.  Even at smaller scales, the resulting
magic numbers can be unnecessarily difficult to understand, e.g. smm_test.c isn't
doing anything _that_ complex, but every time I look at the test I spend several
minutes relearning what it does.  Using macros instead of magic numbers helps a
little, but it doesn't fix the underlying issue of bundling a bunch of testcases
into a single monolithic sequence.

> +	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
> +		TEST_MEM_DATA_PATTERN1));

Align params to help delineate the boundaries between the assert and the function
call.  E.g. so that we'd end up with this code:

	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
				      TEST_MEM_DATA_PATTERN1));

But as (much further) above, just assert in the comparison helper to avoid
the problem entirely.

> +
> +	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
> +
> +	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
> +
> +	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
> +	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
> +		TEST_MEM_DATA_PATTERN5));
> +
> +	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
> +
> +	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
> +	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
> +
> +	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
> +		TEST_MEM_DATA_PATTERN3));
> +	GUEST_DONE();
> +}
> +
> +#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage) \
> +	{ \
> +		struct ucall uc; \
> +		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO); \
> +		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC); \
> +		ASSERT_EQ(uc.args[1], stage); \
> +	}
> +
> +#define ASSERT_GUEST_DONE(vcpu) \
> +	{ \
> +		struct ucall uc; \
> +		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO); \
> +		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE); \
> +	}
> +
> +static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)

For future reference, "fn" is completely redundant.  It's a function, no need for
the label.  When a function pointer is a parameter, and isn't obviously such, then
"fn" can be helpful, but here it's just noise.

> +{
> +	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
> +	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
> +
> +	vcpu_run_and_handle_mapgpa(vm, vcpu);
> +	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
> +	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
> +	VM_STAGE_PROCESSED(GUEST_STARTED);

Again for future reference since I think it's better to avoid named stages, if
you add macros to wrap trivial code in order to do something clever (pretty print
the name of the stage), use a name that more precisely captures the triviality of
the code.  VM_STAGE_PROCESSED() sounds like the macro is doing bookkeeping,
e.g. advancing a stage/counter or something, whereas something like print_stage()
makes it fairly clear that the helper/macro is doing nothing more than printing,
i.e. saves the reader from having to go look at the implementation to understand
the code.

Regarding the printing itself, I suspect one of the reasons why you added the
pretty printing of stages was to help debug.  While there is a time and place for
printf debugging, when it comes to KVM tests, the "need" to resort to printing
out every step is usually the symptom of unhelpful assertions and error messages.

E.g. if each pattern "encodes" its number (p1 = 0x11), then capturing the line
number, the expected versus actual pattern, and the GPA provides pretty much all
the debug info needed to figure out what failed.

Seeing the test actually make progress can be useful, e.g. as a heartbeat for
long-running tests, but again outside of development/debug it's mostly noise.
E.g. if a test fails in CI, earlier messages may or may not be captured depending
on the whims of the CI instance/robot, and so we want as much information as
possible in the error message itself.

> +	vcpu_run_and_handle_mapgpa(vm, vcpu);
> +	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
> +	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
> +			TEST_MEM_DATA_PATTERN4), "failed");
> +	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
> +
> +	vcpu_run_and_handle_mapgpa(vm, vcpu);
> +	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
> +	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
> +			TEST_MEM_DATA_PATTERN2), "failed");
> +	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
> +	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
> +
> +	vcpu_run_and_handle_mapgpa(vm, vcpu);
> +	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
> +	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
> +			TEST_MEM_DATA_PATTERN5), "failed");
> +	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
> +
> +	vcpu_run_and_handle_mapgpa(vm, vcpu);
> +	ASSERT_GUEST_DONE(vcpu);
> +}
> +
> +static void execute_vm_with_private_test_mem(
> +			enum vm_mem_backing_src_type test_mem_src)
> +{
> +	struct kvm_vm *vm;
> +	struct kvm_enable_cap cap;
> +	struct kvm_vcpu *vcpu;
> +
> +	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
> +
> +	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);

TEST_REQUIRE() so that the test is skipped, not failed.
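
E.g. something like:

	TEST_REQUIRE(kvm_check_cap(KVM_CAP_EXIT_HYPERCALL) &
		     (1 << KVM_HC_MAP_GPA_RANGE));

That also verifies the map_gpa_range hypercall specifically can be forwarded to
userspace, not just that the capability exists.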

> +	cap.cap = KVM_CAP_EXIT_HYPERCALL;
> +	cap.flags = 0;
> +	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
> +	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);

vm_enable_cap() will do most of this for you.
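
I.e. the whole sequence can collapse to a single line:

	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, 1 << KVM_HC_MAP_GPA_RANGE);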

> +
> +	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
> +		TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size, KVM_MEM_PRIVATE);

Align params.

> +	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
> +
> +	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
> +
> +	host_conv_test_fn(vm, vcpu);
> +
> +	kvm_vm_free(vm);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	execute_vm_with_private_test_mem(
> +				VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD);
> +
> +	/* Needs 2MB Hugepages */
> +	if (get_free_huge_2mb_pages() >= 1) {
> +		printf("Running private mem test with 2M pages\n");
> +		execute_vm_with_private_test_mem(
> +				VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD);
> +	} else
> +		printf("Skipping private mem test with 2M pages\n");
> +
> +	return 0;
> +}
> -- 
> 2.39.0.rc0.267.gcb52ba06e7-goog
>

Patch

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 082855d94c72..19cdcde2ed08 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -34,6 +34,7 @@ 
 /x86_64/nested_exceptions_test
 /x86_64/nx_huge_pages_test
 /x86_64/platform_info_test
+/x86_64/private_mem_test
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 97f7d52c553b..beb793dd3e1c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -99,6 +99,7 @@  TEST_GEN_PROGS_x86_64 += x86_64/monitor_mwait_test
 TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
+TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
new file mode 100644
index 000000000000..015ada2e3d54
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -0,0 +1,212 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <limits.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/kvm_para.h>
+#include <linux/memfd.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <private_mem.h>
+#include <processor.h>
+
+#define TEST_AREA_SLOT		10
+#define TEST_AREA_GPA		0xC0000000
+#define TEST_AREA_SIZE		(2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET	(1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE	(10 * 4096)
+
+#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PATTERN1	0x66
+#define TEST_MEM_DATA_PATTERN2	0x99
+#define TEST_MEM_DATA_PATTERN3	0x33
+#define TEST_MEM_DATA_PATTERN4	0xaa
+#define TEST_MEM_DATA_PATTERN5	0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pattern)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pattern)
+			return false;
+	}
+
+	return true;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pattern)
+{
+	memset(test_area_base, pattern, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pattern)
+{
+	memset(guest_test_mem, pattern, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pattern,
+	uint64_t guest_pattern)
+{
+	void *guest_test_mem = test_area_base + GUEST_TEST_MEM_OFFSET;
+	void *test_area2_base = guest_test_mem + GUEST_TEST_MEM_SIZE;
+	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
+			GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area_base, GUEST_TEST_MEM_OFFSET, area_pattern) &&
+		verify_mem_contents(guest_test_mem, GUEST_TEST_MEM_SIZE, guest_pattern) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pattern));
+}
+
+#define GUEST_STARTED			0
+#define GUEST_PRIVATE_MEM_POPULATED	1
+#define GUEST_SHARED_MEM_POPULATED	2
+#define GUEST_PRIVATE_MEM_POPULATED2	3
+
+/*
+ * Run memory conversion tests with explicit conversion:
+ * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
+ * to back/unback private memory. Subsequent accesses by guest to the gpa range
+ * will not cause exit to userspace.
+ *
+ * Test memory conversion scenarios with following steps:
+ * 1) Access private memory using private access and verify that memory contents
+ *   are not visible to userspace.
+ * 2) Convert memory to shared using explicit conversions and ensure that
+ *   userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit conversions and ensure that
+ *   userspace is again not able to access converted private regions.
+ */
+static void guest_conv_test_fn(void)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PATTERN1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+		TEST_MEM_DATA_PATTERN1));
+
+	kvm_hypercall_map_shared((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+		TEST_MEM_DATA_PATTERN5));
+
+	kvm_hypercall_map_private((uint64_t)guest_test_mem, guest_test_size);
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PATTERN3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PATTERN1,
+		TEST_MEM_DATA_PATTERN3));
+	GUEST_DONE();
+}
+
+#define ASSERT_CONV_TEST_EXIT_IO(vcpu, stage) \
+	{ \
+		struct ucall uc; \
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO); \
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC); \
+		ASSERT_EQ(uc.args[1], stage); \
+	}
+
+#define ASSERT_GUEST_DONE(vcpu) \
+	{ \
+		struct ucall uc; \
+		ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO); \
+		ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_DONE); \
+	}
+
+static void host_conv_test_fn(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_STARTED);
+	populate_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4);
+	VM_STAGE_PROCESSED(GUEST_STARTED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+			TEST_MEM_DATA_PATTERN4), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_SHARED_MEM_POPULATED);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+			TEST_MEM_DATA_PATTERN2), "failed");
+	populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PATTERN5);
+	VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_CONV_TEST_EXIT_IO(vcpu, GUEST_PRIVATE_MEM_POPULATED2);
+	TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PATTERN4,
+			TEST_MEM_DATA_PATTERN5), "failed");
+	VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+
+	vcpu_run_and_handle_mapgpa(vm, vcpu);
+	ASSERT_GUEST_DONE(vcpu);
+}
+
+static void execute_vm_with_private_test_mem(
+			enum vm_mem_backing_src_type test_mem_src)
+{
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_conv_test_fn);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	vm_userspace_mem_region_add(vm, test_mem_src, TEST_AREA_GPA,
+		TEST_AREA_SLOT, TEST_AREA_SIZE / vm->page_size, KVM_MEM_PRIVATE);
+	vm_allocate_private_mem(vm, TEST_AREA_GPA, TEST_AREA_SIZE);
+
+	virt_map(vm, TEST_AREA_GPA, TEST_AREA_GPA, TEST_AREA_SIZE/vm->page_size);
+
+	host_conv_test_fn(vm, vcpu);
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	execute_vm_with_private_test_mem(
+				VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD);
+
+	/* Needs 2MB Hugepages */
+	if (get_free_huge_2mb_pages() >= 1) {
+		printf("Running private mem test with 2M pages\n");
+		execute_vm_with_private_test_mem(
+				VM_MEM_SRC_ANON_HTLB2M_AND_RESTRICTED_MEMFD);
+	} else
+		printf("Skipping private mem test with 2M pages\n");
+
+	return 0;
+}