Message ID | 20231105163040.14904-35-pbonzini@redhat.com (mailing list archive)
---|---
State | Handled Elsewhere
Series | KVM: guest_memfd() and per-page attributes
Applying [1] and [2] reveals that this also breaks non-x86 builds: the
MEM_REGION_GPA/SLOT definitions are guarded behind an #ifdef
__x86_64__, while the usages introduced here aren't.

Should

On Sun, Nov 5, 2023 at 8:35 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> +	test_invalid_memory_region_flags();

be #ifdef'd, perhaps? I'm not quite sure what the intent is.

Side note: I wasn't able to get [2] to apply by copy-pasting the diff
and trying "git apply", and that was after checking out the relevant
commit. Eventually I just did it manually. If anyone can successfully
apply it, please let me know what you did so I can see what I was
doing wrong :)

[1] https://lore.kernel.org/kvm/20231108233723.3380042-1-amoorthy@google.com/
[2] https://lore.kernel.org/kvm/affca7a8-116e-4b0f-9edf-6cdc05ba65ca@redhat.com/
Hi Anish,

On Thu, Nov 9, 2023 at 1:08 AM Anish Moorthy <amoorthy@google.com> wrote:
>
> Applying [1] and [2] reveals that this also breaks non-x86 builds: the
> MEM_REGION_GPA/SLOT definitions are guarded behind an #ifdef
> __x86_64__, while the usages introduced here aren't.
>
> Should
>
> On Sun, Nov 5, 2023 at 8:35 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
> >
> > +	test_invalid_memory_region_flags();
>
> be #ifdef'd, perhaps? I'm not quite sure what the intent is.
>
> Side note: I wasn't able to get [2] to apply by copy-pasting the diff
> and trying "git apply", and that was after checking out the relevant
> commit. Eventually I just did it manually. If anyone can successfully
> apply it, please let me know what you did so I can see what I was
> doing wrong :)

I applied the whole series as follows: check out kvm-x86-next-2023.11.01
(45b890f7689e) from https://github.com/kvm-x86/linux.git, then use b4:

b4 am -o - 20231105163040.14904-1-pbonzini@redhat.com | git am -3

Cheers,
/fuad

> [1] https://lore.kernel.org/kvm/20231108233723.3380042-1-amoorthy@google.com/
> [2] https://lore.kernel.org/kvm/affca7a8-116e-4b0f-9edf-6cdc05ba65ca@redhat.com/
On Wed, Nov 08, 2023 at 05:08:01PM -0800, Anish Moorthy wrote:
> Applying [1] and [2] reveals that this also breaks non-x86 builds: the
> MEM_REGION_GPA/SLOT definitions are guarded behind an #ifdef
> __x86_64__, while the usages introduced here aren't.
>
> Should
>
> On Sun, Nov 5, 2023 at 8:35 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
> >
> > +	test_invalid_memory_region_flags();
>
> be #ifdef'd, perhaps? I'm not quite sure what the intent is.

This has been broken in -next for a week now. Do we have any progress
on a fix, or should we just revert the patch?
On Mon, Nov 20, 2023 at 3:09 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Wed, Nov 08, 2023 at 05:08:01PM -0800, Anish Moorthy wrote:
> > Applying [1] and [2] reveals that this also breaks non-x86 builds: the
> > MEM_REGION_GPA/SLOT definitions are guarded behind an #ifdef
> > __x86_64__, while the usages introduced here aren't.
> >
> > Should
> >
> > On Sun, Nov 5, 2023 at 8:35 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
> > >
> > > +	test_invalid_memory_region_flags();
> >
> > be #ifdef'd, perhaps? I'm not quite sure what the intent is.
>
> This has been broken in -next for a week now, do we have any progress
> on a fix or should we just revert the patch?

Sorry, I was away last week. I have now posted a patch.

Paolo
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 1891774eb6d4..343e807043e1 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -326,6 +326,53 @@ static void test_zero_memory_regions(void)
 }
 #endif /* __x86_64__ */
 
+static void test_invalid_memory_region_flags(void)
+{
+	uint32_t supported_flags = KVM_MEM_LOG_DIRTY_PAGES;
+	const uint32_t v2_only_flags = KVM_MEM_GUEST_MEMFD;
+	struct kvm_vm *vm;
+	int r, i;
+
+#ifdef __x86_64__
+	supported_flags |= KVM_MEM_READONLY;
+
+	if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))
+		vm = vm_create_barebones_protected_vm();
+	else
+#endif
+		vm = vm_create_barebones();
+
+	if (kvm_check_cap(KVM_CAP_MEMORY_ATTRIBUTES) & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+		supported_flags |= KVM_MEM_GUEST_MEMFD;
+
+	for (i = 0; i < 32; i++) {
+		if ((supported_flags & BIT(i)) && !(v2_only_flags & BIT(i)))
+			continue;
+
+		r = __vm_set_user_memory_region(vm, MEM_REGION_SLOT, BIT(i),
+						MEM_REGION_GPA, MEM_REGION_SIZE, NULL);
+
+		TEST_ASSERT(r && errno == EINVAL,
+			    "KVM_SET_USER_MEMORY_REGION should have failed on v2 only flag 0x%lx", BIT(i));
+
+		if (supported_flags & BIT(i))
+			continue;
+
+		r = __vm_set_user_memory_region2(vm, MEM_REGION_SLOT, BIT(i),
+						 MEM_REGION_GPA, MEM_REGION_SIZE, NULL, 0, 0);
+		TEST_ASSERT(r && errno == EINVAL,
+			    "KVM_SET_USER_MEMORY_REGION2 should have failed on unsupported flag 0x%lx", BIT(i));
+	}
+
+	if (supported_flags & KVM_MEM_GUEST_MEMFD) {
+		r = __vm_set_user_memory_region2(vm, MEM_REGION_SLOT,
+						 KVM_MEM_LOG_DIRTY_PAGES | KVM_MEM_GUEST_MEMFD,
+						 MEM_REGION_GPA, MEM_REGION_SIZE, NULL, 0, 0);
+		TEST_ASSERT(r && errno == EINVAL,
+			    "KVM_SET_USER_MEMORY_REGION2 should have failed, dirty logging private memory is unsupported");
+	}
+}
+
 /*
  * Test it can be added memory slots up to KVM_CAP_NR_MEMSLOTS, then any
  * tentative to add further slots should fail.
@@ -491,6 +538,8 @@ int main(int argc, char *argv[])
 	test_zero_memory_regions();
 #endif
 
+	test_invalid_memory_region_flags();
+
 	test_add_max_memory_regions();
 
 	if (kvm_has_cap(KVM_CAP_GUEST_MEMFD) &&