
[3/4] KVM: selftests: Use asserting kvm_ioctl() macros when getting ARM page sizes

Message ID 20230804004226.1984505-4-seanjc@google.com (mailing list archive)
State New, archived
Series KVM: selftests: ioctl() macro cleanups

Commit Message

Sean Christopherson Aug. 4, 2023, 12:42 a.m. UTC
Use kvm_ioctl() instead of open coding equivalent ioctl()+TEST_ASSERT()
calls when getting the supported page sizes on ARM.  The macro usage is a
little funky since the "kvm_fd" parameter implies an actual /dev/kvm fd,
but on the other hand the code is invoking KVM ioctl()s.
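
(For reference, a simplified sketch of what the asserting macro boils down
to; the real definition in tools/testing/selftests/kvm/include/kvm_util_base.h
has a few more layers, so details may differ:)

	/* Do the ioctl() and fail the test on any non-zero return. */
	#define kvm_ioctl(fd, cmd, arg)					\
	({								\
		int ret = __kvm_ioctl(fd, cmd, arg);			\
									\
		TEST_ASSERT(!ret, __KVM_IOCTL_ERROR(#cmd, ret));	\
	})

kvm_fd_ioctl(), used below for KVM_CREATE_VM and KVM_CREATE_VCPU, is assumed
to be the analogous wrapper (added earlier in this series) for ioctls that
return a new file descriptor, i.e. it asserts ret >= 0 instead.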

Alternatively, the core utilities could expose a vm_open()+vm_close()
pair so that the ARM code could create a dummy, on-stack VM+vCPU pair and
use {vm,vcpu}_ioctl() as appropriate.  But the odds of something breaking
due to oddball, partial usage of kvm_vm and kvm_vcpu structures are much
higher than realizing meaningful benefit from using {vm,vcpu}_ioctl().
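
(Purely illustrative: a hypothetical vm_open()/vm_close() pair for the
alternative above; helpers with these exact signatures do not exist:)

	/* HYPOTHETICAL: create/destroy a bare VM without full kvm_vm setup. */
	static void vm_open(struct kvm_vm *vm, unsigned long type)
	{
		vm->kvm_fd = open_kvm_dev_path_or_exit();
		vm->fd = kvm_fd_ioctl(vm->kvm_fd, KVM_CREATE_VM, (void *)type);
	}

	static void vm_close(struct kvm_vm *vm)
	{
		close(vm->fd);
		close(vm->kvm_fd);
	}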

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 .../selftests/kvm/lib/aarch64/processor.c      | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

Comments

Michal Luczaj Aug. 4, 2023, 2:58 p.m. UTC | #1
On 8/4/23 02:42, Sean Christopherson wrote:
> Use kvm_ioctl() instead of open coding equivalent ioctl()+TEST_ASSERT()
> calls when getting the supported page sizes on ARM.  The macro usage is a
> little funky since the "kvm_fd" parameter implies an actual /dev/kvm fd,
> but on the other hand the code is invoking KVM ioctl()s.
> 
> Alternatively, the core utilities could expose a vm_open()+vm_close()
> pair so that the ARM code could create a dummy, on-stack VM+vCPU pair and
> use {vm,vcpu}_ioctl() as appropriate.  But the odds of something breaking
> due to oddball, partial usage of kvm_vm and kvm_vcpu structures are much
> higher than realizing meaningful benefit from using {vm,vcpu}_ioctl().

Since you're doing the cleanup, does mmio_warning_test qualify for the
same (funky usage ahead)?

-       kvm = open("/dev/kvm", O_RDWR);
-       TEST_ASSERT(kvm != -1, "failed to open /dev/kvm");
-       kvmvm = __kvm_ioctl(kvm, KVM_CREATE_VM, NULL);
-       TEST_ASSERT(kvmvm > 0, KVM_IOCTL_ERROR(KVM_CREATE_VM, kvmvm));
-       kvmcpu = ioctl(kvmvm, KVM_CREATE_VCPU, 0);
-       TEST_ASSERT(kvmcpu != -1, KVM_IOCTL_ERROR(KVM_CREATE_VCPU, kvmcpu));
+       kvm = open_path_or_exit(KVM_DEV_PATH, O_RDWR);
+       kvmvm = kvm_fd_ioctl(kvm, KVM_CREATE_VM, NULL);
+       kvmcpu = kvm_fd_ioctl(kvmvm, KVM_CREATE_VCPU, NULL);
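
(For context, open_path_or_exit() already exists in lib/kvm_util.c and is
roughly, modulo details:)

	int open_path_or_exit(const char *path, int flags)
	{
		int fd = open(path, flags);

		/* Skip, rather than fail, the test if the path is unavailable. */
		__TEST_REQUIRE(fd >= 0, "%s not available (errno: %d)", path, errno);

		return fd;
	}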

Side note, just in case this wasn't your intention: no kvm@ in cc.

thanks,
Michal
Sean Christopherson Aug. 4, 2023, 3:23 p.m. UTC | #2
On Fri, Aug 04, 2023, Michal Luczaj wrote:
> On 8/4/23 02:42, Sean Christopherson wrote:
> > Use kvm_ioctl() instead of open coding equivalent ioctl()+TEST_ASSERT()
> > calls when getting the supported page sizes on ARM.  The macro usage is a
> > little funky since the "kvm_fd" parameter implies an actual /dev/kvm fd,
> > but on the other hand the code is invoking KVM ioctl()s.
> > 
> > Alternatively, the core utilities could expose a vm_open()+vm_close()
> > pair so that the ARM code could create a dummy, on-stack VM+vCPU pair and
> > use {vm,vcpu}_ioctl() as appropriate.  But the odds of something breaking
> > due to oddball, partial usage of kvm_vm and kvm_vcpu structures are much
> > higher than realizing meaningful benefit from using {vm,vcpu}_ioctl().
> 
> Since you're doing the cleanup, does mmio_warning_test qualify for the
> same (funky usage ahead)?

Hmm, I'm heavily leaning towards deleting that test entirely.  It's almost
literally a copy+paste of the most basic syzkaller test, e.g. spawn a vCPU with
no backing memory and watch it die a horrible death.  Unless I'm missing something,
the test is complete overkill too, e.g. I highly doubt the original KVM bug required
userspace to fork() and whatnot, but syzkaller spawns threads for everything and
so that got copied into the selftest.

And this stuff is just silly:

	TEST_REQUIRE(host_cpu_is_intel);

	TEST_REQUIRE(!vm_is_unrestricted_guest(NULL));

because crashing the VM doesn't require Intel, nor does it require !URG; those
just happened to be the conditions for the bug.

As much as I like having explicit testcases, adding a new selftest for every bug
that syzkaller finds is neither realistic nor productive.  In other words, I think
we should treat syzkaller as being part of KVM's test infrastructure.

I'll send a patch to nuke the test.
 
> -       kvm = open("/dev/kvm", O_RDWR);
> -       TEST_ASSERT(kvm != -1, "failed to open /dev/kvm");
> -       kvmvm = __kvm_ioctl(kvm, KVM_CREATE_VM, NULL);
> -       TEST_ASSERT(kvmvm > 0, KVM_IOCTL_ERROR(KVM_CREATE_VM, kvmvm));
> -       kvmcpu = ioctl(kvmvm, KVM_CREATE_VCPU, 0);
> -       TEST_ASSERT(kvmcpu != -1, KVM_IOCTL_ERROR(KVM_CREATE_VCPU, kvmcpu));
> +       kvm = open_path_or_exit(KVM_DEV_PATH, O_RDWR);
> +       kvmvm = kvm_fd_ioctl(kvm, KVM_CREATE_VM, NULL);
> +       kvmcpu = kvm_fd_ioctl(kvmvm, KVM_CREATE_VCPU, NULL);
> 
> Side note, just in case this wasn't your intention: no kvm@ in cc.

Wasn't intentional; I was moving too fast at the end of the day and missed that
KVM wasn't Cc'd.  Grr.

Patch

diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 3a0259e25335..afec1a30916f 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -496,7 +496,7 @@  void aarch64_get_supported_page_sizes(uint32_t ipa,
 				      bool *ps4k, bool *ps16k, bool *ps64k)
 {
 	struct kvm_vcpu_init preferred_init;
-	int kvm_fd, vm_fd, vcpu_fd, err;
+	int kvm_fd, vm_fd, vcpu_fd;
 	uint64_t val;
 	struct kvm_one_reg reg = {
 		.id	= KVM_ARM64_SYS_REG(SYS_ID_AA64MMFR0_EL1),
@@ -504,19 +504,13 @@  void aarch64_get_supported_page_sizes(uint32_t ipa,
 	};
 
 	kvm_fd = open_kvm_dev_path_or_exit();
-	vm_fd = __kvm_ioctl(kvm_fd, KVM_CREATE_VM, (void *)(unsigned long)ipa);
-	TEST_ASSERT(vm_fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_VM, vm_fd));
+	vm_fd = kvm_fd_ioctl(kvm_fd, KVM_CREATE_VM, (void *)(unsigned long)ipa);
+	vcpu_fd = kvm_fd_ioctl(vm_fd, KVM_CREATE_VCPU, (void *)0ul);
 
-	vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
-	TEST_ASSERT(vcpu_fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_VCPU, vcpu_fd));
+	kvm_ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
+	kvm_ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &preferred_init);
 
-	err = ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &preferred_init);
-	TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_ARM_PREFERRED_TARGET, err));
-	err = ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &preferred_init);
-	TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_ARM_VCPU_INIT, err));
-
-	err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
-	TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd));
+	kvm_ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
 
 	*ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4), val) != 0xf;
 	*ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64), val) == 0;
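
(Context past the end of the hunk, reconstructed from the kernel sources and
therefore an assumption: the 16K check follows the same pattern, where a
TGRAN16 value of 0 means 16K granules are unsupported:)

	*ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16), val) != 0;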