From patchwork Tue Sep 6 18:09:18 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12967995
Date: Tue, 6 Sep 2022 18:09:18 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-2-ricarkol@google.com>
Subject: [PATCH v6 01/13] KVM: selftests: Add a userfaultfd library
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller
Move the generic userfaultfd code out of demand_paging_test.c into a
common library, userfaultfd_util. This library consists of a setup and a
stop function. The setup function starts a thread for handling page
faults using a handler callback function; it returns a uffd_desc object,
which is then passed to the stop function (to wait for and destroy the
thread).

Reviewed-by: Ben Gardon
Signed-off-by: Ricardo Koller
Reviewed-by: Oliver Upton
---
 tools/testing/selftests/kvm/Makefile           |   1 +
 .../selftests/kvm/demand_paging_test.c         | 228 +++---------------
 .../selftests/kvm/include/userfaultfd_util.h   |  46 ++++
 .../selftests/kvm/lib/userfaultfd_util.c       | 187 ++++++++++++++
 4 files changed, 264 insertions(+), 198 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/userfaultfd_util.h
 create mode 100644 tools/testing/selftests/kvm/lib/userfaultfd_util.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4c122f1b1737..1bb471aeb103 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -47,6 +47,7 @@ LIBKVM += lib/perf_test_util.c
 LIBKVM += lib/rbtree.c
 LIBKVM += lib/sparsebit.c
 LIBKVM += lib/test_util.c
+LIBKVM += lib/userfaultfd_util.c
 
 LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 779ae54f89c4..8e1fe4ffcccd 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -22,23 +22,13 @@
 #include "test_util.h"
 #include "perf_test_util.h"
 #include "guest_modes.h"
+#include "userfaultfd_util.h"
 
 #ifdef __NR_userfaultfd
 
-#ifdef PRINT_PER_PAGE_UPDATES
-#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__)
-#else
-#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__)
-#endif
-
-#ifdef PRINT_PER_VCPU_UPDATES
-#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__)
-#else
-#define PER_VCPU_DEBUG(...)
_no_printf(__VA_ARGS__) -#endif - static int nr_vcpus = 1; static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE; + static size_t demand_paging_size; static char *guest_data_prototype; @@ -67,9 +57,11 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args) ts_diff.tv_sec, ts_diff.tv_nsec); } -static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr) +static int handle_uffd_page_request(int uffd_mode, int uffd, + struct uffd_msg *msg) { pid_t tid = syscall(__NR_gettid); + uint64_t addr = msg->arg.pagefault.address; struct timespec start; struct timespec ts_diff; int r; @@ -116,174 +108,32 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr) return 0; } -bool quit_uffd_thread; - -struct uffd_handler_args { +struct test_params { int uffd_mode; - int uffd; - int pipefd; - useconds_t delay; + useconds_t uffd_delay; + enum vm_mem_backing_src_type src_type; + bool partition_vcpu_memory_access; }; -static void *uffd_handler_thread_fn(void *arg) -{ - struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg; - int uffd = uffd_args->uffd; - int pipefd = uffd_args->pipefd; - useconds_t delay = uffd_args->delay; - int64_t pages = 0; - struct timespec start; - struct timespec ts_diff; - - clock_gettime(CLOCK_MONOTONIC, &start); - while (!quit_uffd_thread) { - struct uffd_msg msg; - struct pollfd pollfd[2]; - char tmp_chr; - int r; - uint64_t addr; - - pollfd[0].fd = uffd; - pollfd[0].events = POLLIN; - pollfd[1].fd = pipefd; - pollfd[1].events = POLLIN; - - r = poll(pollfd, 2, -1); - switch (r) { - case -1: - pr_info("poll err"); - continue; - case 0: - continue; - case 1: - break; - default: - pr_info("Polling uffd returned %d", r); - return NULL; - } - - if (pollfd[0].revents & POLLERR) { - pr_info("uffd revents has POLLERR"); - return NULL; - } - - if (pollfd[1].revents & POLLIN) { - r = read(pollfd[1].fd, &tmp_chr, 1); - TEST_ASSERT(r == 1, - "Error reading pipefd in UFFD thread\n"); - return NULL; - } - - if (!(pollfd[0].revents & POLLIN)) - continue; - - r = read(uffd, &msg, sizeof(msg)); - if (r == -1) { - if (errno == EAGAIN) - continue; - pr_info("Read of uffd got errno %d\n", errno); - return NULL; - } - - if (r != sizeof(msg)) { - pr_info("Read on uffd returned unexpected size: %d bytes", r); - return NULL; - } - - if (!(msg.event & UFFD_EVENT_PAGEFAULT)) - continue; - - if (delay) - usleep(delay); - addr = msg.arg.pagefault.address; - r = handle_uffd_page_request(uffd_args->uffd_mode, uffd, addr); - if (r < 0) - return NULL; - pages++; - } - - ts_diff = timespec_elapsed(start); - PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n", - pages, ts_diff.tv_sec, ts_diff.tv_nsec, - pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0)); - - return NULL; -} - -static void setup_demand_paging(struct kvm_vm *vm, - pthread_t *uffd_handler_thread, int pipefd, - int uffd_mode, useconds_t uffd_delay, - struct uffd_handler_args *uffd_args, - void *hva, void *alias, uint64_t len) +static void prefault_mem(void *alias, uint64_t len) { - bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR); - int uffd; - struct uffdio_api uffdio_api; - struct uffdio_register uffdio_register; - uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY; - int ret; + size_t p; - PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n", - is_minor ? "MINOR" : "MISSING", - is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY"); - - /* In order to get minor faults, prefault via the alias. 
*/ - if (is_minor) { - size_t p; - - expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE; - - TEST_ASSERT(alias != NULL, "Alias required for minor faults"); - for (p = 0; p < (len / demand_paging_size); ++p) { - memcpy(alias + (p * demand_paging_size), - guest_data_prototype, demand_paging_size); - } + TEST_ASSERT(alias != NULL, "Alias required for minor faults"); + for (p = 0; p < (len / demand_paging_size); ++p) { + memcpy(alias + (p * demand_paging_size), + guest_data_prototype, demand_paging_size); } - - uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); - TEST_ASSERT(uffd >= 0, __KVM_SYSCALL_ERROR("userfaultfd()", uffd)); - - uffdio_api.api = UFFD_API; - uffdio_api.features = 0; - ret = ioctl(uffd, UFFDIO_API, &uffdio_api); - TEST_ASSERT(ret != -1, __KVM_SYSCALL_ERROR("UFFDIO_API", ret)); - - uffdio_register.range.start = (uint64_t)hva; - uffdio_register.range.len = len; - uffdio_register.mode = uffd_mode; - ret = ioctl(uffd, UFFDIO_REGISTER, &uffdio_register); - TEST_ASSERT(ret != -1, __KVM_SYSCALL_ERROR("UFFDIO_REGISTER", ret)); - TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) == - expected_ioctls, "missing userfaultfd ioctls"); - - uffd_args->uffd_mode = uffd_mode; - uffd_args->uffd = uffd; - uffd_args->pipefd = pipefd; - uffd_args->delay = uffd_delay; - pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn, - uffd_args); - - PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n", - hva, hva + len); } -struct test_params { - int uffd_mode; - useconds_t uffd_delay; - enum vm_mem_backing_src_type src_type; - bool partition_vcpu_memory_access; -}; - static void run_test(enum vm_guest_mode mode, void *arg) { struct test_params *p = arg; - pthread_t *uffd_handler_threads = NULL; - struct uffd_handler_args *uffd_args = NULL; + struct uffd_desc **uffd_descs = NULL; struct timespec start; struct timespec ts_diff; - int *pipefds = NULL; struct kvm_vm *vm; - int r, i; + int i; vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1, p->src_type, p->partition_vcpu_memory_access); @@ -296,15 +146,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) memset(guest_data_prototype, 0xAB, demand_paging_size); if (p->uffd_mode) { - uffd_handler_threads = - malloc(nr_vcpus * sizeof(*uffd_handler_threads)); - TEST_ASSERT(uffd_handler_threads, "Memory allocation failed"); - - uffd_args = malloc(nr_vcpus * sizeof(*uffd_args)); - TEST_ASSERT(uffd_args, "Memory allocation failed"); - - pipefds = malloc(sizeof(int) * nr_vcpus * 2); - TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd"); + uffd_descs = malloc(nr_vcpus * sizeof(struct uffd_desc *)); + TEST_ASSERT(uffd_descs, "Memory allocation failed"); for (i = 0; i < nr_vcpus; i++) { struct perf_test_vcpu_args *vcpu_args; @@ -317,19 +160,17 @@ static void run_test(enum vm_guest_mode mode, void *arg) vcpu_hva = addr_gpa2hva(vm, vcpu_args->gpa); vcpu_alias = addr_gpa2alias(vm, vcpu_args->gpa); + prefault_mem(vcpu_alias, + vcpu_args->pages * perf_test_args.guest_page_size); + /* * Set up user fault fd to handle demand paging * requests. 
*/ - r = pipe2(&pipefds[i * 2], - O_CLOEXEC | O_NONBLOCK); - TEST_ASSERT(!r, "Failed to set up pipefd"); - - setup_demand_paging(vm, &uffd_handler_threads[i], - pipefds[i * 2], p->uffd_mode, - p->uffd_delay, &uffd_args[i], - vcpu_hva, vcpu_alias, - vcpu_args->pages * perf_test_args.guest_page_size); + uffd_descs[i] = uffd_setup_demand_paging( + p->uffd_mode, p->uffd_delay, vcpu_hva, + vcpu_args->pages * perf_test_args.guest_page_size, + &handle_uffd_page_request); } } @@ -344,15 +185,9 @@ static void run_test(enum vm_guest_mode mode, void *arg) pr_info("All vCPU threads joined\n"); if (p->uffd_mode) { - char c; - /* Tell the user fault fd handler threads to quit */ - for (i = 0; i < nr_vcpus; i++) { - r = write(pipefds[i * 2 + 1], &c, 1); - TEST_ASSERT(r == 1, "Unable to write to pipefd"); - - pthread_join(uffd_handler_threads[i], NULL); - } + for (i = 0; i < nr_vcpus; i++) + uffd_stop_demand_paging(uffd_descs[i]); } pr_info("Total guest execution time: %ld.%.9lds\n", @@ -364,11 +199,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) perf_test_destroy_vm(vm); free(guest_data_prototype); - if (p->uffd_mode) { - free(uffd_handler_threads); - free(uffd_args); - free(pipefds); - } + if (p->uffd_mode) + free(uffd_descs); } static void help(char *name) diff --git a/tools/testing/selftests/kvm/include/userfaultfd_util.h b/tools/testing/selftests/kvm/include/userfaultfd_util.h new file mode 100644 index 000000000000..a1a386c083b0 --- /dev/null +++ b/tools/testing/selftests/kvm/include/userfaultfd_util.h @@ -0,0 +1,46 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KVM userfaultfd util + * Adapted from demand_paging_test.c + * + * Copyright (C) 2018, Red Hat, Inc. + * Copyright (C) 2019-2022 Google LLC + */ + +#define _GNU_SOURCE /* for pipe2 */ + +#include +#include +#include +#include + +#include "test_util.h" + +typedef int (*uffd_handler_t)(int uffd_mode, int uffd, struct uffd_msg *msg); + +struct uffd_desc { + int uffd_mode; + int uffd; + int pipefds[2]; + useconds_t delay; + uffd_handler_t handler; + pthread_t thread; +}; + +struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, + useconds_t uffd_delay, void *hva, uint64_t len, + uffd_handler_t handler); + +void uffd_stop_demand_paging(struct uffd_desc *uffd); + +#ifdef PRINT_PER_PAGE_UPDATES +#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__) +#else +#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__) +#endif + +#ifdef PRINT_PER_VCPU_UPDATES +#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__) +#else +#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__) +#endif diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c new file mode 100644 index 000000000000..4395032ccbe4 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c @@ -0,0 +1,187 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KVM userfaultfd util + * Adapted from demand_paging_test.c + * + * Copyright (C) 2018, Red Hat, Inc. + * Copyright (C) 2019, Google, Inc. + * Copyright (C) 2022, Google, Inc. 
+ */ + +#define _GNU_SOURCE /* for pipe2 */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kvm_util.h" +#include "test_util.h" +#include "perf_test_util.h" +#include "userfaultfd_util.h" + +#ifdef __NR_userfaultfd + +static void *uffd_handler_thread_fn(void *arg) +{ + struct uffd_desc *uffd_desc = (struct uffd_desc *)arg; + int uffd = uffd_desc->uffd; + int pipefd = uffd_desc->pipefds[0]; + useconds_t delay = uffd_desc->delay; + int64_t pages = 0; + struct timespec start; + struct timespec ts_diff; + + clock_gettime(CLOCK_MONOTONIC, &start); + while (1) { + struct uffd_msg msg; + struct pollfd pollfd[2]; + char tmp_chr; + int r; + + pollfd[0].fd = uffd; + pollfd[0].events = POLLIN; + pollfd[1].fd = pipefd; + pollfd[1].events = POLLIN; + + r = poll(pollfd, 2, -1); + switch (r) { + case -1: + pr_info("poll err"); + continue; + case 0: + continue; + case 1: + break; + default: + pr_info("Polling uffd returned %d", r); + return NULL; + } + + if (pollfd[0].revents & POLLERR) { + pr_info("uffd revents has POLLERR"); + return NULL; + } + + if (pollfd[1].revents & POLLIN) { + r = read(pollfd[1].fd, &tmp_chr, 1); + TEST_ASSERT(r == 1, + "Error reading pipefd in UFFD thread\n"); + return NULL; + } + + if (!(pollfd[0].revents & POLLIN)) + continue; + + r = read(uffd, &msg, sizeof(msg)); + if (r == -1) { + if (errno == EAGAIN) + continue; + pr_info("Read of uffd got errno %d\n", errno); + return NULL; + } + + if (r != sizeof(msg)) { + pr_info("Read on uffd returned unexpected size: %d bytes", r); + return NULL; + } + + if (!(msg.event & UFFD_EVENT_PAGEFAULT)) + continue; + + if (delay) + usleep(delay); + r = uffd_desc->handler(uffd_desc->uffd_mode, uffd, &msg); + if (r < 0) + return NULL; + pages++; + } + + ts_diff = timespec_elapsed(start); + PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n", + pages, ts_diff.tv_sec, ts_diff.tv_nsec, + pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0)); + + return NULL; +} + +struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, + useconds_t uffd_delay, void *hva, uint64_t len, + uffd_handler_t handler) +{ + struct uffd_desc *uffd_desc; + bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR); + int uffd; + struct uffdio_api uffdio_api; + struct uffdio_register uffdio_register; + uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY; + int ret; + + PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n", + is_minor ? "MINOR" : "MISSING", + is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY"); + + uffd_desc = malloc(sizeof(struct uffd_desc)); + TEST_ASSERT(uffd_desc, "malloc failed"); + + /* In order to get minor faults, prefault via the alias. 
+ */
+	if (is_minor)
+		expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno);
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1,
+		    "ioctl UFFDIO_API failed: %" PRIu64,
+		    (uint64_t)uffdio_api.api);
+
+	uffdio_register.range.start = (uint64_t)hva;
+	uffdio_register.range.len = len;
+	uffdio_register.mode = uffd_mode;
+	TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1,
+		    "ioctl UFFDIO_REGISTER failed");
+	TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) ==
+		    expected_ioctls, "missing userfaultfd ioctls");
+
+	ret = pipe2(uffd_desc->pipefds, O_CLOEXEC | O_NONBLOCK);
+	TEST_ASSERT(!ret, "Failed to set up pipefd");
+
+	uffd_desc->uffd_mode = uffd_mode;
+	uffd_desc->uffd = uffd;
+	uffd_desc->delay = uffd_delay;
+	uffd_desc->handler = handler;
+	pthread_create(&uffd_desc->thread, NULL, uffd_handler_thread_fn,
+		       uffd_desc);
+
+	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
+		       hva, hva + len);
+
+	return uffd_desc;
+}
+
+void uffd_stop_demand_paging(struct uffd_desc *uffd)
+{
+	char c = 0;
+	int ret;
+
+	ret = write(uffd->pipefds[1], &c, 1);
+	TEST_ASSERT(ret == 1, "Unable to write to pipefd");
+
+	ret = pthread_join(uffd->thread, NULL);
+	TEST_ASSERT(ret == 0, "Pthread_join failed.");
+
+	close(uffd->uffd);
+
+	close(uffd->pipefds[1]);
+	close(uffd->pipefds[0]);
+
+	free(uffd);
+}
+
+#endif /* __NR_userfaultfd */
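For reference, a test consumes the new library roughly as follows. This
is a minimal sketch, not part of the patch; the handler body and the
hva/len values are placeholders:

	static int my_handler(int uffd_mode, int uffd, struct uffd_msg *msg)
	{
		/* Resolve the fault (e.g., with UFFDIO_COPY); return < 0 to stop. */
		return 0;
	}

		struct uffd_desc *desc;

		/* Spawn a handler thread for the range [hva, hva + len). */
		desc = uffd_setup_demand_paging(UFFDIO_REGISTER_MODE_MISSING,
						0 /* no delay */, hva, len,
						&my_handler);

		/* ... run vCPUs; faults in the range are resolved by my_handler ... */

		uffd_stop_demand_paging(desc);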
From patchwork Tue Sep 6 18:09:19 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12967997
Date: Tue, 6 Sep 2022 18:09:19 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-3-ricarkol@google.com>
Subject: [PATCH v6 02/13] KVM: selftests: aarch64: Add virt_get_pte_hva() library function
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add a library function to get the PTE (a host virtual address) of a
given GVA. This will be used in a future commit by a test to clear and
check the access flag of a particular page.
Reviewed-by: Andrew Jones
Signed-off-by: Ricardo Koller
Reviewed-by: Oliver Upton
---
 .../selftests/kvm/include/aarch64/processor.h       |  2 ++
 tools/testing/selftests/kvm/lib/aarch64/processor.c | 13 ++++++++++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index a8124f9dd68a..df4bfac69551 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -109,6 +109,8 @@ void vm_install_exception_handler(struct kvm_vm *vm,
 void vm_install_sync_handler(struct kvm_vm *vm,
 		int vector, int ec, handler_fn handler);
 
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva);
+
 static inline void cpu_relax(void)
 {
 	asm volatile("yield" ::: "memory");

diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 6f5551368944..63ef3c78e55e 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -138,7 +138,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	_virt_pg_map(vm, vaddr, paddr, attr_idx);
 }
 
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	uint64_t *ptep;
 
@@ -169,11 +169,18 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 		TEST_FAIL("Page table levels must be 2, 3, or 4");
 	}
 
-	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
+	return ptep;
 
 unmapped_gva:
 	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
-	exit(1);
+	exit(EXIT_FAILURE);
+}
+
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	uint64_t *ptep = virt_get_pte_hva(vm, gva);
+
+	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
 }
 
 static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)
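The future use mentioned in the commit message would look roughly like
the sketch below. It is not part of the patch; PTE_AF is defined here
for illustration (the access flag is bit 10 of VMSAv8-64 descriptors),
and TLB maintenance after rewriting the PTE is glossed over:

	#define PTE_AF (1ULL << 10)	/* access flag, VMSAv8-64 descriptors */

		uint64_t *ptep = virt_get_pte_hva(vm, gva);

		*ptep &= ~PTE_AF;	/* clear the access flag from the host */
		/* ... resume the guest and let it touch the page ... */
		TEST_ASSERT(*ptep & PTE_AF, "guest access should set the AF");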
From patchwork Tue Sep 6 18:09:20 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12967996
Date: Tue, 6 Sep 2022 18:09:20 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-4-ricarkol@google.com>
Subject: [PATCH v6 03/13] KVM: selftests: Add missing close and munmap in __vm_mem_region_delete()
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Deleting a memslot (when freeing a VM) does not close the backing fd,
nor does it unmap the alias mapping. Fix this by adding the missing
close and munmap.

Reviewed-by: Andrew Jones
Reviewed-by: Oliver Upton
Reviewed-by: Ben Gardon
Signed-off-by: Ricardo Koller
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..9dd03eda2eb9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -544,6 +544,12 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 	sparsebit_free(&region->unused_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
+	if (region->fd >= 0) {
+		/* There's an extra map when using shared memory. */
+		ret = munmap(region->mmap_alias, region->mmap_size);
+		TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
+		close(region->fd);
+	}
 
 	free(region);
 }
From patchwork Tue Sep 6 18:09:21 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12967998
Date: Tue, 6 Sep 2022 18:09:21 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-5-ricarkol@google.com>
Subject: [PATCH v6 04/13] KVM: selftests: aarch64: Construct DEFAULT_MAIR_EL1 using sysreg.h macros
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Define macros for memory type indexes and construct DEFAULT_MAIR_EL1
with macros from asm/sysreg.h. The index macros can then be used when
constructing PTEs (instead of using raw numbers).

Reviewed-by: Andrew Jones
Reviewed-by: Oliver Upton
Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/include/aarch64/processor.h | 25 ++++++++++++++-----
 .../selftests/kvm/lib/aarch64/processor.c     |  2 +-
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index df4bfac69551..c1ddca8db225 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -38,12 +38,25 @@
  * NORMAL            4     1111:1111
  * NORMAL_WT         5     1011:1011
  */
-#define DEFAULT_MAIR_EL1 ((0x00ul << (0 * 8)) | \
-			  (0x04ul << (1 * 8)) | \
-			  (0x0cul << (2 * 8)) | \
-			  (0x44ul << (3 * 8)) | \
-			  (0xfful << (4 * 8)) | \
-			  (0xbbul << (5 * 8)))
+
+/* Linux doesn't use these memory types, so let's define them. */
+#define MAIR_ATTR_DEVICE_GRE	UL(0x0c)
+#define MAIR_ATTR_NORMAL_WT	UL(0xbb)
+
+#define MT_DEVICE_nGnRnE	0
+#define MT_DEVICE_nGnRE		1
+#define MT_DEVICE_GRE		2
+#define MT_NORMAL_NC		3
+#define MT_NORMAL		4
+#define MT_NORMAL_WT		5
+
+#define DEFAULT_MAIR_EL1						\
+	(MAIR_ATTRIDX(MAIR_ATTR_DEVICE_nGnRnE, MT_DEVICE_nGnRnE) |	\
+	 MAIR_ATTRIDX(MAIR_ATTR_DEVICE_nGnRE, MT_DEVICE_nGnRE) |	\
+	 MAIR_ATTRIDX(MAIR_ATTR_DEVICE_GRE, MT_DEVICE_GRE) |		\
+	 MAIR_ATTRIDX(MAIR_ATTR_NORMAL_NC, MT_NORMAL_NC) |		\
+	 MAIR_ATTRIDX(MAIR_ATTR_NORMAL, MT_NORMAL) |			\
+	 MAIR_ATTRIDX(MAIR_ATTR_NORMAL_WT, MT_NORMAL_WT))
 
 #define MPIDR_HWID_BITMASK (0xff00fffffful)

diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 63ef3c78e55e..26f0eccff6fe 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -133,7 +133,7 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
-	uint64_t attr_idx = 4; /* NORMAL (See DEFAULT_MAIR_EL1) */
+	uint64_t attr_idx = MT_NORMAL;
 
 	_virt_pg_map(vm, vaddr, paddr, attr_idx);
 }
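Since MAIR_ATTRIDX(attr, idx) in asm/sysreg.h expands to
((attr) << ((idx) * 8)), each new term reproduces one byte of the old
raw constant. For example:

	/* Byte 4 of MAIR_EL1, the NORMAL attribute: */
	MAIR_ATTRIDX(MAIR_ATTR_NORMAL, MT_NORMAL)
		== (0xfful << (4 * 8))	/* the old raw term for index 4 */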
From patchwork Tue Sep 6 18:09:22 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12967999
Date: Tue, 6 Sep 2022 18:09:22 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-6-ricarkol@google.com>
Subject: [PATCH v6 05/13] tools: Copy bitfield.h from the kernel sources
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller,
    Jakub Kicinski, Arnaldo Carvalho de Melo

Copy bitfield.h from include/linux/bitfield.h. A subsequent change will
make use of some FIELD_{GET,PREP} macros defined in this header. The
header was copied as-is, no changes needed.
Cc: Jakub Kicinski Cc: Arnaldo Carvalho de Melo Reviewed-by: Oliver Upton Signed-off-by: Ricardo Koller --- tools/include/linux/bitfield.h | 176 +++++++++++++++++++++++++++++++++ 1 file changed, 176 insertions(+) create mode 100644 tools/include/linux/bitfield.h diff --git a/tools/include/linux/bitfield.h b/tools/include/linux/bitfield.h new file mode 100644 index 000000000000..6093fa6db260 --- /dev/null +++ b/tools/include/linux/bitfield.h @@ -0,0 +1,176 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2014 Felix Fietkau + * Copyright (C) 2004 - 2009 Ivo van Doorn + */ + +#ifndef _LINUX_BITFIELD_H +#define _LINUX_BITFIELD_H + +#include +#include + +/* + * Bitfield access macros + * + * FIELD_{GET,PREP} macros take as first parameter shifted mask + * from which they extract the base mask and shift amount. + * Mask must be a compilation time constant. + * + * Example: + * + * #define REG_FIELD_A GENMASK(6, 0) + * #define REG_FIELD_B BIT(7) + * #define REG_FIELD_C GENMASK(15, 8) + * #define REG_FIELD_D GENMASK(31, 16) + * + * Get: + * a = FIELD_GET(REG_FIELD_A, reg); + * b = FIELD_GET(REG_FIELD_B, reg); + * + * Set: + * reg = FIELD_PREP(REG_FIELD_A, 1) | + * FIELD_PREP(REG_FIELD_B, 0) | + * FIELD_PREP(REG_FIELD_C, c) | + * FIELD_PREP(REG_FIELD_D, 0x40); + * + * Modify: + * reg &= ~REG_FIELD_C; + * reg |= FIELD_PREP(REG_FIELD_C, c); + */ + +#define __bf_shf(x) (__builtin_ffsll(x) - 1) + +#define __scalar_type_to_unsigned_cases(type) \ + unsigned type: (unsigned type)0, \ + signed type: (unsigned type)0 + +#define __unsigned_scalar_typeof(x) typeof( \ + _Generic((x), \ + char: (unsigned char)0, \ + __scalar_type_to_unsigned_cases(char), \ + __scalar_type_to_unsigned_cases(short), \ + __scalar_type_to_unsigned_cases(int), \ + __scalar_type_to_unsigned_cases(long), \ + __scalar_type_to_unsigned_cases(long long), \ + default: (x))) + +#define __bf_cast_unsigned(type, x) ((__unsigned_scalar_typeof(type))(x)) + +#define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx) \ + ({ \ + BUILD_BUG_ON_MSG(!__builtin_constant_p(_mask), \ + _pfx "mask is not constant"); \ + BUILD_BUG_ON_MSG((_mask) == 0, _pfx "mask is zero"); \ + BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \ + ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \ + _pfx "value too large for the field"); \ + BUILD_BUG_ON_MSG(__bf_cast_unsigned(_mask, _mask) > \ + __bf_cast_unsigned(_reg, ~0ull), \ + _pfx "type of reg too small for mask"); \ + __BUILD_BUG_ON_NOT_POWER_OF_2((_mask) + \ + (1ULL << __bf_shf(_mask))); \ + }) + +/** + * FIELD_MAX() - produce the maximum value representable by a field + * @_mask: shifted mask defining the field's length and position + * + * FIELD_MAX() returns the maximum value that can be held in the field + * specified by @_mask. + */ +#define FIELD_MAX(_mask) \ + ({ \ + __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_MAX: "); \ + (typeof(_mask))((_mask) >> __bf_shf(_mask)); \ + }) + +/** + * FIELD_FIT() - check if value fits in the field + * @_mask: shifted mask defining the field's length and position + * @_val: value to test against the field + * + * Return: true if @_val can fit inside @_mask, false if @_val is too big. + */ +#define FIELD_FIT(_mask, _val) \ + ({ \ + __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \ + !((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \ + }) + +/** + * FIELD_PREP() - prepare a bitfield element + * @_mask: shifted mask defining the field's length and position + * @_val: value to put in the field + * + * FIELD_PREP() masks and shifts up the value. 
The result should
+ * be combined with other fields of the bitfield using logical OR.
+ */
+#define FIELD_PREP(_mask, _val)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: ");	\
+		((typeof(_mask))(_val) << __bf_shf(_mask)) & (_mask);	\
+	})
+
+/**
+ * FIELD_GET() - extract a bitfield element
+ * @_mask: shifted mask defining the field's length and position
+ * @_reg: value of entire bitfield
+ *
+ * FIELD_GET() extracts the field specified by @_mask from the
+ * bitfield passed in as @_reg by masking and shifting it down.
+ */
+#define FIELD_GET(_mask, _reg)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, _reg, 0U, "FIELD_GET: ");	\
+		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask));	\
+	})
+
+extern void __compiletime_error("value doesn't fit into mask")
+__field_overflow(void);
+extern void __compiletime_error("bad bitfield mask")
+__bad_mask(void);
+static __always_inline u64 field_multiplier(u64 field)
+{
+	if ((field | (field - 1)) & ((field | (field - 1)) + 1))
+		__bad_mask();
+	return field & -field;
+}
+static __always_inline u64 field_mask(u64 field)
+{
+	return field / field_multiplier(field);
+}
+#define field_max(field)	((typeof(field))field_mask(field))
+#define ____MAKE_OP(type,base,to,from)					\
+static __always_inline __##type type##_encode_bits(base v, base field)	\
+{									\
+	if (__builtin_constant_p(v) && (v & ~field_mask(field)))	\
+		__field_overflow();					\
+	return to((v & field_mask(field)) * field_multiplier(field));	\
+}									\
+static __always_inline __##type type##_replace_bits(__##type old,	\
+					base val, base field)		\
+{									\
+	return (old & ~to(field)) | type##_encode_bits(val, field);	\
+}									\
+static __always_inline void type##p_replace_bits(__##type *p,		\
+					base val, base field)		\
+{									\
+	*p = (*p & ~to(field)) | type##_encode_bits(val, field);	\
+}									\
+static __always_inline base type##_get_bits(__##type v, base field)	\
+{									\
+	return (from(v) & field)/field_multiplier(field);		\
+}
+#define __MAKE_OP(size)							\
+	____MAKE_OP(le##size,u##size,cpu_to_le##size,le##size##_to_cpu)	\
+	____MAKE_OP(be##size,u##size,cpu_to_be##size,be##size##_to_cpu)	\
+	____MAKE_OP(u##size,u##size,,)
+____MAKE_OP(u8,u8,,)
+__MAKE_OP(16)
+__MAKE_OP(32)
+__MAKE_OP(64)
+#undef __MAKE_OP
+#undef ____MAKE_OP
+
+#endif
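A quick worked example of the two macros the subsequent change will use
(the register value and field are illustrative, not from the patch):

	#define REG_FIELD_C	GENMASK(15, 8)	/* mask 0xff00 */

	FIELD_GET(REG_FIELD_C, 0xabcd)	/* == 0xab: (0xabcd & 0xff00) >> 8 */
	FIELD_PREP(REG_FIELD_C, 0xab)	/* == 0xab00: (0xab << 8) & 0xff00 */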
From patchwork Tue Sep 6 18:09:23 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12968000
Date: Tue, 6 Sep 2022 18:09:23 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-7-ricarkol@google.com>
Subject: [PATCH v6 06/13] KVM: selftests: Stash backing_src_type in struct userspace_mem_region
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add the backing_src_type to struct userspace_mem_region. This struct
already stores a lot of info about memory regions, except the backing
source type. That info will be used by a future commit to determine the
method for punching a hole.
Signed-off-by: Ricardo Koller
Reviewed-by: Oliver Upton
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 24fde97f6121..b2dbe253d4d0 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -34,6 +34,7 @@ struct userspace_mem_region {
 	struct sparsebit *unused_phy_pages;
 	int fd;
 	off_t offset;
+	enum vm_mem_backing_src_type backing_src_type;
 	void *host_mem;
 	void *host_alias;
 	void *mmap_start;

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9dd03eda2eb9..5a9f080ff888 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -887,6 +887,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			vm_mem_backing_src_alias(src_type)->name);
 	}
 
+	region->backing_src_type = src_type;
 	region->unused_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 		guest_paddr >> vm->page_shift, npages);
From patchwork Tue Sep 6 18:09:24 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12968001
Date: Tue, 6 Sep 2022 18:09:24 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-8-ricarkol@google.com>
Subject: [PATCH v6 07/13] KVM: selftests: Change ____vm_create() to take struct kvm_vm_mem_params
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
    alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
    reijiw@google.com, rananta@google.com, bgardon@google.com,
    dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

The vm_create() helpers are hardcoded to place most page types (code,
page tables, stacks, etc.) in the same memslot #0, always backed by
anonymous 4K pages. There are a couple of issues with that. First, tests
that want to differ even slightly, e.g., by placing page tables in a
different backing source type, must replicate much of what's already
done by the vm_create() functions. Second, the hardcoded assumption of
memslot #0 holding most things is spread everywhere; this makes it very
hard to change.

Fix the above issues by having selftests specify how they want memory to
be laid out: define the memory regions to use for code, pt (page
tables), and data. Introduce a new structure, struct kvm_vm_mem_params,
that defines the guest mode, a list of memory region descriptions, and
some fields specifying which regions to use for code, pt, and data.

There is no functional change intended. The current commit adds a
default struct kvm_vm_mem_params that lays out memory exactly as before.
The next commit will change the allocators to get the region they should
be using, e.g., the page table allocators will use the pt memslot.

Cc: Sean Christopherson
Cc: Andrew Jones
Signed-off-by: Ricardo Koller
Reviewed-by: Andrew Jones
---
 .../selftests/kvm/include/kvm_util_base.h     | 51 +++++++++++++++-
 .../selftests/kvm/lib/aarch64/processor.c     |  3 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 58 ++++++++++++++++---
 3 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b2dbe253d4d0..5dbca38a512b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -65,6 +65,13 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+enum kvm_mem_region_type {
+	MEM_REGION_CODE,
+	MEM_REGION_PT,
+	MEM_REGION_DATA,
+	NR_MEM_REGIONS,
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -93,6 +100,13 @@ struct kvm_vm {
 	int stats_fd;
 	struct kvm_stats_header stats_header;
 	struct kvm_stats_desc *stats_desc;
+
+	/*
+	 * KVM region slots.
 .../selftests/kvm/include/kvm_util_base.h     | 51 +++++++++++++++-
 .../selftests/kvm/lib/aarch64/processor.c     |  3 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 58 ++++++++++++++++---
 3 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b2dbe253d4d0..5dbca38a512b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -65,6 +65,13 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+enum kvm_mem_region_type {
+	MEM_REGION_CODE,
+	MEM_REGION_PT,
+	MEM_REGION_DATA,
+	NR_MEM_REGIONS,
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -93,6 +100,13 @@ struct kvm_vm {
 	int stats_fd;
 	struct kvm_stats_header stats_header;
 	struct kvm_stats_desc *stats_desc;
+
+	/*
+	 * KVM region slots. These are the default memslots used by page
+	 * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
+	 * memslot.
+	 */
+	uint32_t memslots[NR_MEM_REGIONS];
 };
 
@@ -105,6 +119,13 @@ struct kvm_vm {
 struct userspace_mem_region *
 memslot2region(struct kvm_vm *vm, uint32_t memslot);
 
+inline struct userspace_mem_region *
+vm_get_mem_region(struct kvm_vm *vm, enum kvm_mem_region_type mrt)
+{
+	assert(mrt < NR_MEM_REGIONS);
+	return memslot2region(vm, vm->memslots[mrt]);
+}
+
 /* Minimum allocated guest virtual and physical addresses */
 #define KVM_UTIL_MIN_VADDR 0x2000
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
@@ -637,19 +658,45 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
+struct kvm_vm_mem_params {
+	enum vm_guest_mode mode;
+
+	struct {
+		enum vm_mem_backing_src_type src_type;
+		uint64_t guest_paddr;
+		/*
+		 * KVM region slot (same meaning as in struct
+		 * kvm_userspace_memory_region).
+		 */
+		uint32_t slot;
+		uint64_t npages;
+		uint32_t flags;
+		bool enabled;
+	} region[NR_MEM_REGIONS];
+
+	/* Each region type points to a region in the above array. */
+	uint16_t region_idx[NR_MEM_REGIONS];
+};
+
+extern struct kvm_vm_mem_params kvm_vm_mem_default;
+
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
  * loads the test binary into guest memory and creates an IRQ chip (x86 only).
  * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
  * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
  */
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
+struct kvm_vm *____vm_create(struct kvm_vm_mem_params *mem_params);
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages);
 
 static inline struct kvm_vm *vm_create_barebones(void)
 {
-	return ____vm_create(VM_MODE_DEFAULT, 0);
+	struct kvm_vm_mem_params params_wo_memslots = {
+		.mode = kvm_vm_mem_default.mode,
+	};
+
+	return ____vm_create(&params_wo_memslots);
 }
 
 static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 26f0eccff6fe..5a31dc85d054 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -508,7 +508,8 @@ void aarch64_get_supported_page_sizes(uint32_t ipa,
  */
 void __attribute__((constructor)) init_guest_modes(void)
 {
-	guest_modes_append_default();
+	guest_modes_append_default();
+	kvm_vm_mem_default.mode = VM_MODE_DEFAULT;
 }
 
 void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 5a9f080ff888..02532bc528da 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -143,12 +143,37 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
+/*
+ * A single memslot #0 for code, data, and page tables.
+ *
+ * .region[0].npages should be set by the user.
+ */
+struct kvm_vm_mem_params kvm_vm_mem_default = {
+#ifndef __aarch64__
+	/* arm64 kvm_vm_mem_default.mode set in init_guest_modes() */
+	.mode = VM_MODE_DEFAULT,
+#endif
+	.region[0] = {
+		.src_type = VM_MEM_SRC_ANONYMOUS,
+		.guest_paddr = 0,
+		.slot = 0,
+		.npages = 0,
+		.flags = 0,
+		.enabled = true,
+	},
+	.region_idx[MEM_REGION_CODE] = 0,
+	.region_idx[MEM_REGION_PT] = 0,
+	.region_idx[MEM_REGION_DATA] = 0,
+};
+
+struct kvm_vm *____vm_create(struct kvm_vm_mem_params *mem_params)
 {
+	enum vm_guest_mode mode = mem_params->mode;
 	struct kvm_vm *vm;
+	enum kvm_mem_region_type mrt;
+	int idx;
 
-	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
-		 vm_guest_mode_string(mode), nr_pages);
+	pr_debug("%s: mode='%s'\n", __func__, vm_guest_mode_string(mode));
 
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
@@ -245,9 +270,25 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	/* Allocate and setup memory for guest. */
 	vm->vpages_mapped = sparsebit_alloc();
-	if (nr_pages != 0)
-		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-					    0, 0, nr_pages, 0);
+
+	/* Create all mem regions according to mem_params specifications. */
+	for (idx = 0; idx < NR_MEM_REGIONS; idx++) {
+		if (!mem_params->region[idx].enabled)
+			continue;
+
+		vm_userspace_mem_region_add(vm,
+			mem_params->region[idx].src_type,
+			mem_params->region[idx].guest_paddr,
+			mem_params->region[idx].slot,
+			mem_params->region[idx].npages,
+			mem_params->region[idx].flags);
+	}
+
+	/* Set all memslot types for the VM, also according to the spec. */
+	for (mrt = 0; mrt < NR_MEM_REGIONS; mrt++) {
+		idx = mem_params->region_idx[mrt];
+		vm->memslots[mrt] = mem_params->region[idx].slot;
+	}
 
 	return vm;
 }
@@ -292,9 +333,12 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 {
 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
 						 nr_extra_pages);
+	struct kvm_vm_mem_params mem_params = kvm_vm_mem_default;
 	struct kvm_vm *vm;
 
-	vm = ____vm_create(mode, nr_pages);
+	mem_params.region[0].npages = nr_pages;
+	mem_params.mode = mode;
+	vm = ____vm_create(&mem_params);
 
 	kvm_vm_elf_load(vm, program_invocation_name);

From patchwork Tue Sep 6 18:09:25 2022
Date: Tue, 6 Sep 2022 18:09:25 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-9-ricarkol@google.com>
Subject: [PATCH v6 08/13] KVM: selftests: Use the right memslot for code, page-tables, and data allocations
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

The previous commit added support for callers of ____vm_create() to
specify what memslots to use for code, page-tables, and data allocations.
Change them accordingly:

- stacks, code, and exception tables use the code memslot
- page tables and the pgd use the pt memslot
- data (anything allocated with vm_vaddr_alloc()) uses the data memslot

No functional change intended. All allocators keep using memslot #0.
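In practice, each allocator now resolves its memslot through a region
type instead of the hardcoded 0. A short sketch of how callers use the
two helpers this patch introduces (both appear in the diff below; the
size and vaddr arguments are just examples):

	/* Sketch: data pages come from the data region... */
	vm_vaddr_t data = __vm_vaddr_alloc(vm, vm->page_size,
					   KVM_UTIL_MIN_VADDR,
					   MEM_REGION_DATA);
	/* ...and page-table pages from the pt region. */
	vm_vaddr_t pte = __vm_vaddr_alloc_page(vm, MEM_REGION_PT);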
Cc: Sean Christopherson Cc: Andrew Jones Signed-off-by: Ricardo Koller Reviewed-by: Andrew Jones --- .../selftests/kvm/include/kvm_util_base.h | 3 + .../selftests/kvm/lib/aarch64/processor.c | 11 ++-- tools/testing/selftests/kvm/lib/elf.c | 3 +- tools/testing/selftests/kvm/lib/kvm_util.c | 57 ++++++++++++------- .../selftests/kvm/lib/riscv/processor.c | 7 ++- .../selftests/kvm/lib/s390x/processor.c | 7 ++- .../selftests/kvm/lib/x86_64/processor.c | 13 +++-- 7 files changed, 61 insertions(+), 40 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 5dbca38a512b..76a087a88efd 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -402,7 +402,10 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); +vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, + vm_vaddr_t vaddr_min, enum kvm_mem_region_type mr); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); +vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type mr); vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm); void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index 5a31dc85d054..885b893a5f40 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -79,7 +79,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) if (!vm->pgd_created) { vm_paddr_t paddr = vm_phy_pages_alloc(vm, page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, vm->memslots[MEM_REGION_PT]); vm->pgd = paddr; vm->pgd_created = true; } @@ -328,8 +328,9 @@ struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, size_t stack_size = vm->page_size == 4096 ? 
DEFAULT_STACK_PGS * vm->page_size : vm->page_size;
-	uint64_t stack_vaddr = vm_vaddr_alloc(vm, stack_size,
-					      DEFAULT_ARM64_GUEST_STACK_VADDR_MIN);
+	uint64_t stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
+						DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
+						MEM_REGION_CODE);
 	struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, vcpu_id);
 
 	aarch64_vcpu_setup(vcpu, init);
@@ -435,8 +436,8 @@ void route_exception(struct ex_regs *regs, int vector)
 
 void vm_init_descriptor_tables(struct kvm_vm *vm)
 {
-	vm->handlers = vm_vaddr_alloc(vm, sizeof(struct handlers),
-			vm->page_size);
+	vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
+			vm->page_size, MEM_REGION_CODE);
 
 	*(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) =
 		vm->handlers;
 }
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index 9f54c098d9d0..51f280c412ba 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -161,7 +161,8 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
 		seg_vend |= vm->page_size - 1;
 		size_t seg_size = seg_vend - seg_vstart + 1;
 
-		vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart);
+		vm_vaddr_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart,
+						    MEM_REGION_CODE);
 		TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
 			    "virtual memory for segment at requested min addr,\n"
 			    "  segment idx: %u\n"
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 02532bc528da..ff457af44c53 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1226,32 +1226,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
-/*
- * VM Virtual Address Allocate
- *
- * Input Args:
- *   vm - Virtual Machine
- *   sz - Size in bytes
- *   vaddr_min - Minimum starting virtual address
- *
- * Output Args: None
- *
- * Return:
- *   Starting guest virtual address
- *
- * Allocates at least sz bytes within the virtual address space of the vm
- * given by vm. The allocated bytes are mapped to a virtual address >=
- * the address given by vaddr_min. Note that each allocation uses a
- * a unique set of pages, with the minimum real allocation being at least
- * a page.
- */
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
+			    vm_vaddr_t vaddr_min, enum kvm_mem_region_type mrt)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm);
 	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
-					      KVM_UTIL_MIN_PFN * vm->page_size, 0);
+					      KVM_UTIL_MIN_PFN * vm->page_size,
+					      vm->memslots[mrt]);
 
 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1272,6 +1255,30 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 	return vaddr_start;
 }
 
+/*
+ * VM Virtual Address Allocate
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   sz - Size in bytes
+ *   vaddr_min - Minimum starting virtual address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Starting guest virtual address
+ *
+ * Allocates at least sz bytes within the virtual address space of the vm
+ * given by vm. The allocated bytes are mapped to a virtual address >=
+ * the address given by vaddr_min. Note that each allocation uses a
+ * unique set of pages, with the minimum real allocation being at least
+ * a page. The allocated physical space comes from the data memory region.
+ */ +vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) +{ + return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_DATA); +} + /* * VM Virtual Address Allocate Pages * @@ -1291,6 +1298,11 @@ vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages) return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR); } +vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type mrt) +{ + return __vm_vaddr_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, mrt); +} + /* * VM Virtual Address Allocate Page * @@ -1856,7 +1868,8 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm) { - return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); } /* diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index 604478151212..26c8d3dffb9a 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -58,7 +58,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) if (!vm->pgd_created) { vm_paddr_t paddr = vm_phy_pages_alloc(vm, page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, vm->memslots[MEM_REGION_PT]); vm->pgd = paddr; vm->pgd_created = true; } @@ -282,8 +282,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, size_t stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size : vm->page_size; - unsigned long stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_RISCV_GUEST_STACK_VADDR_MIN); + unsigned long stack_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_RISCV_GUEST_STACK_VADDR_MIN, + MEM_REGION_CODE); unsigned long current_gp = 0; struct kvm_mp_state mps; struct kvm_vcpu *vcpu; diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c index 89d7340d9cbd..410ae2b59847 100644 --- a/tools/testing/selftests/kvm/lib/s390x/processor.c +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c @@ -21,7 +21,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) return; paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, vm->memslots[MEM_REGION_PT]); memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size); vm->pgd = paddr; @@ -167,8 +167,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x", vm->page_size); - stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_GUEST_STACK_VADDR_MIN); + stack_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_GUEST_STACK_VADDR_MIN, + MEM_REGION_CODE); vcpu = __vm_vcpu_add(vm, vcpu_id); diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 2e6e61bbe81b..f7b90a6c7d19 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -525,7 +525,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt) { if (!vm->gdt) - vm->gdt = vm_vaddr_alloc_page(vm); + vm->gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_CODE); dt->base = vm->gdt; dt->limit = getpagesize(); @@ -535,7 +535,7 @@ static void 
kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp, int selector)
 {
 	if (!vm->tss)
-		vm->tss = vm_vaddr_alloc_page(vm);
+		vm->tss = __vm_vaddr_alloc_page(vm, MEM_REGION_CODE);
 
 	memset(segp, 0, sizeof(*segp));
 	segp->base = vm->tss;
@@ -620,8 +620,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
 	vm_vaddr_t stack_vaddr;
 	struct kvm_vcpu *vcpu;
 
-	stack_vaddr = vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
-				     DEFAULT_GUEST_STACK_VADDR_MIN);
+	stack_vaddr = __vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
+				       DEFAULT_GUEST_STACK_VADDR_MIN,
+				       MEM_REGION_CODE);
 
 	vcpu = __vm_vcpu_add(vm, vcpu_id);
 	vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
@@ -1118,8 +1119,8 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
 	extern void *idt_handlers;
 	int i;
 
-	vm->idt = vm_vaddr_alloc_page(vm);
-	vm->handlers = vm_vaddr_alloc_page(vm);
+	vm->idt = __vm_vaddr_alloc_page(vm, MEM_REGION_CODE);
+	vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_CODE);
 	/* Handlers have the same address in both address spaces.*/
 	for (i = 0; i < NUM_INTERRUPTS; i++)
 		set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,

From patchwork Tue Sep 6 18:09:26 2022
Date: Tue, 6 Sep 2022 18:09:26 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-10-ricarkol@google.com>
Subject: [PATCH v6 09/13] KVM: selftests: aarch64: Add aarch64/page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add a new test for stage 2 faults when using different combinations of
guest accesses (e.g., write, S1PTW), backing source types (e.g., anon),
and types of faults (e.g., read on hugetlbfs with a hole). The next
commits will add different handling methods and more faults (e.g., uffd
and dirty logging). This first commit starts by adding two sanity checks
for all types of accesses: that the hardware sets the Access Flag (AF),
and that accesses to memslots with holes behave as expected.

Signed-off-by: Ricardo Koller
---
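The test is table-driven; as a rough illustration of its shape, a
hypothetical entry built from the names defined in the new file below
(this exact case is not in the table, the values are for illustration):

	struct test_desc example = {
		.name = "write64_with_af",
		/* Enable HW AF updates, then clear the PTE's AF bit. */
		.guest_prepare = { guest_set_ha, guest_clear_pte_af },
		/* The access under test: a 64-bit write to TEST_GVA. */
		.guest_test = guest_write64,
		/* Verify that the hardware set the AF bit back. */
		.guest_test_check = { guest_check_pte_af },
		.mem_mark_cmd = CMD_NONE,
	};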
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/aarch64/page_fault_test.c   | 622 ++++++++++++++++++
 .../selftests/kvm/include/aarch64/processor.h |   8 +
 3 files changed, 631 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/page_fault_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1bb471aeb103..850e317b9e82 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -149,6 +149,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
 TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
+TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
new file mode 100644
index 000000000000..1c7e02003753
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -0,0 +1,622 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * page_fault_test.c - Test stage 2 faults.
+ *
+ * This test tries different combinations of guest accesses (e.g., write,
+ * S1PTW), backing source type (e.g., anon) and types of faults (e.g., read on
+ * hugetlbfs with a hole). It checks that the expected handling method is
+ * called (e.g., uffd faults with the right address and write/read flag).
+ */
+
+#define _GNU_SOURCE
+#include <linux/bitmap.h>
+#include <fcntl.h>
+#include <test_util.h>
+#include <kvm_util.h>
+#include <processor.h>
+#include <asm/sysreg.h>
+#include <linux/bitfield.h>
+#include "guest_modes.h"
+#include "userfaultfd_util.h"
+
+/* Guest virtual addresses that point to the test page and its PTE. */
+#define TEST_GVA			0xc0000000
+#define TEST_EXEC_GVA			0xc0000008
+#define TEST_PTE_GVA			0xb0000000
+#define TEST_DATA			0x0123456789ABCDEF
+
+static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
+
+#define CMD_NONE			(0)
+#define CMD_SKIP_TEST			(1ULL << 1)
+#define CMD_HOLE_PT			(1ULL << 2)
+#define CMD_HOLE_DATA			(1ULL << 3)
+
+#define PREPARE_FN_NR			10
+#define CHECK_FN_NR			10
+
+struct test_desc {
+	const char *name;
+	uint64_t mem_mark_cmd;
+	/* Skip the test if any prepare function returns false */
+	bool (*guest_prepare[PREPARE_FN_NR])(void);
+	void (*guest_test)(void);
+	void (*guest_test_check[CHECK_FN_NR])(void);
+	void (*dabt_handler)(struct ex_regs *regs);
+	void (*iabt_handler)(struct ex_regs *regs);
+	uint32_t pt_memslot_flags;
+	uint32_t data_memslot_flags;
+	bool skip;
+};
+
+struct test_params {
+	enum vm_mem_backing_src_type src_type;
+	struct test_desc *test_desc;
+};
+
+static inline void flush_tlb_page(uint64_t vaddr)
+{
+	uint64_t page = vaddr >> 12;
+
+	dsb(ishst);
+	asm volatile("tlbi vaae1is, %0" :: "r" (page));
+	dsb(ish);
+	isb();
+}
+
+static void guest_write64(void)
+{
+	uint64_t val;
+
+	WRITE_ONCE(*guest_test_memory, TEST_DATA);
+	val = READ_ONCE(*guest_test_memory);
+	GUEST_ASSERT_EQ(val, TEST_DATA);
+}
+
+/* Check the system for atomic instructions. */
+static bool guest_check_lse(void)
+{
+	uint64_t isar0 = read_sysreg(id_aa64isar0_el1);
+	uint64_t atomic;
+
+	atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS), isar0);
+	return atomic >= 2;
+}
+
+static bool guest_check_dc_zva(void)
+{
+	uint64_t dczid = read_sysreg(dczid_el0);
+	uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_DZP), dczid);
+
+	return dzp == 0;
+}
+
+/* Compare and swap instruction. */
+static void guest_cas(void)
+{
+	uint64_t val;
+
+	GUEST_ASSERT(guest_check_lse());
+	asm volatile(".arch_extension lse\n"
+		     "casal %0, %1, [%2]\n"
+		     :: "r" (0), "r" (TEST_DATA), "r" (guest_test_memory));
+	val = READ_ONCE(*guest_test_memory);
+	GUEST_ASSERT_EQ(val, TEST_DATA);
+}
+
+static void guest_read64(void)
+{
+	uint64_t val;
+
+	val = READ_ONCE(*guest_test_memory);
+	GUEST_ASSERT_EQ(val, 0);
+}
+
+/* Address translation instruction */
+static void guest_at(void)
+{
+	uint64_t par;
+
+	asm volatile("at s1e1r, %0" :: "r" (guest_test_memory));
+	par = read_sysreg(par_el1);
+	isb();
+
+	/* Bit 0 (PAR_EL1.F) indicates whether the AT was successful */
+	GUEST_ASSERT_EQ(par & 1, 0);
+}
+
+/*
+ * The size of the block written by "dc zva" is guaranteed to be between (2 <<
+ * 0) and (2 << 9), which is safe in our case as we need the write to happen
+ * for at least a word, and not more than a page.
+ */
+static void guest_dc_zva(void)
+{
+	uint16_t val;
+
+	asm volatile("dc zva, %0" :: "r" (guest_test_memory));
+	dsb(ish);
+	val = READ_ONCE(*guest_test_memory);
+	GUEST_ASSERT_EQ(val, 0);
+}
+
+/*
+ * Pre-indexing loads and stores don't have a valid syndrome (ESR_EL2.ISV==0).
+ * And that's special because KVM must take special care with those: they
+ * should still count as accesses for dirty logging or user-faulting, but
+ * should be handled differently on mmio.
+ */
+static void guest_ld_preidx(void)
+{
+	uint64_t val;
+	uint64_t addr = TEST_GVA - 8;
+
+	/*
+	 * This ends up accessing "TEST_GVA + 8 - 8", where "TEST_GVA - 8" is
+	 * in a gap between memslots not backed by anything.
+ */ + asm volatile("ldr %0, [%1, #8]!" + : "=r" (val), "+r" (addr)); + GUEST_ASSERT_EQ(val, 0); + GUEST_ASSERT_EQ(addr, TEST_GVA); +} + +static void guest_st_preidx(void) +{ + uint64_t val = TEST_DATA; + uint64_t addr = TEST_GVA - 8; + + asm volatile("str %0, [%1, #8]!" + : "+r" (val), "+r" (addr)); + + GUEST_ASSERT_EQ(addr, TEST_GVA); + val = READ_ONCE(*guest_test_memory); +} + +static bool guest_set_ha(void) +{ + uint64_t mmfr1 = read_sysreg(id_aa64mmfr1_el1); + uint64_t hadbs, tcr; + + /* Skip if HA is not supported. */ + hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS), mmfr1); + if (hadbs == 0) + return false; + + tcr = read_sysreg(tcr_el1) | TCR_EL1_HA; + write_sysreg(tcr, tcr_el1); + isb(); + + return true; +} + +static bool guest_clear_pte_af(void) +{ + *((uint64_t *)TEST_PTE_GVA) &= ~PTE_AF; + flush_tlb_page(TEST_GVA); + + return true; +} + +static void guest_check_pte_af(void) +{ + dsb(ish); + GUEST_ASSERT_EQ(*((uint64_t *)TEST_PTE_GVA) & PTE_AF, PTE_AF); +} + +static void guest_exec(void) +{ + int (*code)(void) = (int (*)(void))TEST_EXEC_GVA; + int ret; + + ret = code(); + GUEST_ASSERT_EQ(ret, 0x77); +} + +static bool guest_prepare(struct test_desc *test) +{ + bool (*prepare_fn)(void); + int i; + + for (i = 0; i < PREPARE_FN_NR; i++) { + prepare_fn = test->guest_prepare[i]; + if (prepare_fn && !prepare_fn()) + return false; + } + + return true; +} + +static void guest_test_check(struct test_desc *test) +{ + void (*check_fn)(void); + int i; + + for (i = 0; i < CHECK_FN_NR; i++) { + check_fn = test->guest_test_check[i]; + if (check_fn) + check_fn(); + } +} + +static void guest_code(struct test_desc *test) +{ + if (!guest_prepare(test)) + GUEST_SYNC(CMD_SKIP_TEST); + + GUEST_SYNC(test->mem_mark_cmd); + + if (test->guest_test) + test->guest_test(); + + guest_test_check(test); + GUEST_DONE(); +} + +static void no_dabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, read_sysreg(far_el1)); +} + +static void no_iabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, regs->pc); +} + +/* Returns true to continue the test, and false if it should be skipped. */ +static bool punch_hole_in_memslot(struct kvm_vm *vm, + struct userspace_mem_region *region) +{ + void *hva = (void *)region->region.userspace_addr; + uint64_t paging_size = region->region.memory_size; + int ret, fd = region->fd; + + if (fd != -1) { + ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, paging_size); + TEST_ASSERT(ret == 0, "fallocate failed, errno: %d\n", errno); + } else { + if (is_backing_src_hugetlb(region->backing_src_type)) + return false; + + ret = madvise(hva, paging_size, MADV_DONTNEED); + TEST_ASSERT(ret == 0, "madvise failed, errno: %d\n", errno); + } + + return true; +} + +/* Returns true to continue the test, and false if it should be skipped. 
*/ +static bool handle_cmd(struct kvm_vm *vm, int cmd) +{ + struct userspace_mem_region *data_region, *pt_region; + bool continue_test = true; + + data_region = vm_get_mem_region(vm, MEM_REGION_DATA); + pt_region = vm_get_mem_region(vm, MEM_REGION_PT); + + if (cmd == CMD_SKIP_TEST) + continue_test = false; + + if (cmd & CMD_HOLE_PT) + continue_test = punch_hole_in_memslot(vm, pt_region); + if (cmd & CMD_HOLE_DATA) + continue_test = punch_hole_in_memslot(vm, data_region); + + return continue_test; +} + +extern unsigned char __exec_test; + +void noinline __return_0x77(void) +{ + asm volatile("__exec_test: mov x0, #0x77\n" + "ret\n"); +} + +/* + * Note that this function runs on the host before the test VM starts: there's + * no need to sync the D$ and I$ caches. + */ +static void load_exec_code_for_test(struct kvm_vm *vm) +{ + uint64_t *code, *c; + struct userspace_mem_region *region; + void *hva; + + region = vm_get_mem_region(vm, MEM_REGION_DATA); + hva = (void *)region->region.userspace_addr; + + assert(TEST_EXEC_GVA - TEST_GVA); + code = hva + 8; + + /* + * We need the cast to be separate in order for the compiler to not + * complain with: "‘memcpy’ forming offset [1, 7] is out of the bounds + * [0, 1] of object ‘__exec_test’ with type ‘unsigned char’" + */ + c = (uint64_t *)&__exec_test; + memcpy(code, c, 8); +} + +static void setup_abort_handlers(struct kvm_vm *vm, struct kvm_vcpu *vcpu, + struct test_desc *test) +{ + vm_init_descriptor_tables(vm); + vcpu_init_descriptor_tables(vcpu); + + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + ESR_EC_DABT, no_dabt_handler); + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + ESR_EC_IABT, no_iabt_handler); +} + +static void setup_gva_maps(struct kvm_vm *vm) +{ + struct userspace_mem_region *region; + uint64_t pte_gpa; + + region = vm_get_mem_region(vm, MEM_REGION_DATA); + /* Map TEST_GVA first. This will install a new PTE. */ + virt_pg_map(vm, TEST_GVA, region->region.guest_phys_addr); + /* Then map TEST_PTE_GVA to the above PTE. */ + pte_gpa = addr_hva2gpa(vm, virt_get_pte_hva(vm, TEST_GVA)); + virt_pg_map(vm, TEST_PTE_GVA, pte_gpa); +} + +unsigned long get_max_gfn(enum vm_guest_mode mode) +{ + unsigned int pa_bits = vm_guest_mode_params[mode].pa_bits; + unsigned int page_shift = vm_guest_mode_params[mode].page_shift; + + return ((1ULL << pa_bits) >> page_shift) - 1; +} + +/* Create a code memslot at pfn=0, and data and PT ones at max_gfn. */ +static struct kvm_vm_mem_params setup_memslots(enum vm_guest_mode mode, + struct test_params *p) +{ + uint64_t backing_src_pagesz = get_backing_src_pagesz(p->src_type); + uint64_t guest_page_size = vm_guest_mode_params[mode].page_size; + uint64_t max_gfn = get_max_gfn(mode); + /* Enough for 2M of code when using 4K guest pages. */ + uint64_t code_npages = 512; + uint64_t pt_size, data_size, data_gpa; + + /* + * This test requires 1 pgd, 2 pud, 4 pmd, and 6 pte pages when using + * VM_MODE_P48V48_4K. Note that the .text takes ~1.6MBs. That's 13 + * pages. VM_MODE_P48V48_4K is the mode with most PT pages; let's use + * twice that just in case. 
+ */ + pt_size = 26 * guest_page_size; + + /* memslot sizes and gpa's must be aligned to the backing page size */ + pt_size = align_up(pt_size, backing_src_pagesz); + data_size = align_up(guest_page_size, backing_src_pagesz); + data_gpa = (max_gfn * guest_page_size) - data_size; + data_gpa = align_down(data_gpa, backing_src_pagesz); + + struct kvm_vm_mem_params mem_params = { + .region[0] = { + .src_type = VM_MEM_SRC_ANONYMOUS, + .guest_paddr = 0, + .slot = 0, + .npages = code_npages, + .flags = 0, + .enabled = true, + }, + .region[1] = { + .src_type = p->src_type, + .guest_paddr = data_gpa - pt_size, + .slot = 1, + .npages = pt_size / guest_page_size, + .flags = p->test_desc->pt_memslot_flags, + .enabled = true, + }, + .region[2] = { + .src_type = p->src_type, + .guest_paddr = data_gpa, + .slot = 2, + .npages = data_size / guest_page_size, + .flags = p->test_desc->data_memslot_flags, + .enabled = true, + }, + .region_idx[MEM_REGION_CODE] = 0, + .region_idx[MEM_REGION_PT] = 1, + .region_idx[MEM_REGION_DATA] = 2, + .mode = mode, + }; + + return mem_params; +} + +static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) +{ + struct test_desc *test = p->test_desc; + + pr_debug("Test: %s\n", test->name); + pr_debug("Testing guest mode: %s\n", vm_guest_mode_string(mode)); + pr_debug("Testing memory backing src type: %s\n", + vm_mem_backing_src_alias(p->src_type)->name); +} + +/* + * This function either succeeds, skips the test (after setting test->skip), or + * fails with a TEST_FAIL that aborts all tests. + */ +static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu, + struct test_desc *test) +{ + struct ucall uc; + + for (;;) { + vcpu_run(vcpu); + + switch (get_ucall(vcpu, &uc)) { + case UCALL_SYNC: + if (!handle_cmd(vm, uc.args[1])) { + test->skip = true; + goto done; + } + break; + case UCALL_ABORT: + REPORT_GUEST_ASSERT_2(uc, "values: %#lx, %#lx"); + break; + case UCALL_DONE: + goto done; + default: + TEST_FAIL("Unknown ucall %lu", uc.cmd); + } + } + +done: + pr_debug(test->skip ? 
"Skipped.\n" : "Done.\n"); + return; +} + +static void run_test(enum vm_guest_mode mode, void *arg) +{ + struct test_params *p = (struct test_params *)arg; + struct test_desc *test = p->test_desc; + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + struct kvm_vm_mem_params mem_params; + + print_test_banner(mode, p); + + mem_params = setup_memslots(mode, p); + vm = ____vm_create(&mem_params); + kvm_vm_elf_load(vm, program_invocation_name); + vcpu = vm_vcpu_add(vm, 0, guest_code); + + setup_gva_maps(vm); + + ucall_init(vm, NULL); + + load_exec_code_for_test(vm); + setup_abort_handlers(vm, vcpu, test); + vcpu_args_set(vcpu, 1, test); + + vcpu_run_loop(vm, vcpu, test); + + ucall_uninit(vm); + kvm_vm_free(vm); +} + +static void help(char *name) +{ + puts(""); + printf("usage: %s [-h] [-s mem-type]\n", name); + puts(""); + guest_modes_help(); + backing_src_help("-s"); + puts(""); +} + +#define SNAME(s) #s +#define SCAT2(a, b) SNAME(a ## _ ## b) +#define SCAT3(a, b, c) SCAT2(a, SCAT2(b, c)) + +#define _CHECK(_test) _CHECK_##_test +#define _PREPARE(_test) _PREPARE_##_test +#define _PREPARE_guest_read64 NULL +#define _PREPARE_guest_ld_preidx NULL +#define _PREPARE_guest_write64 NULL +#define _PREPARE_guest_st_preidx NULL +#define _PREPARE_guest_exec NULL +#define _PREPARE_guest_at NULL +#define _PREPARE_guest_dc_zva guest_check_dc_zva +#define _PREPARE_guest_cas guest_check_lse + +/* With or without access flag checks */ +#define _PREPARE_with_af guest_set_ha, guest_clear_pte_af +#define _PREPARE_no_af NULL +#define _CHECK_with_af guest_check_pte_af +#define _CHECK_no_af NULL + +/* Performs an access and checks that no faults were triggered. */ +#define TEST_ACCESS(_access, _with_af, _mark_cmd) \ +{ \ + .name = SCAT3(_access, _with_af, #_mark_cmd), \ + .guest_prepare = { _PREPARE(_with_af), \ + _PREPARE(_access) }, \ + .mem_mark_cmd = _mark_cmd, \ + .guest_test = _access, \ + .guest_test_check = { _CHECK(_with_af) }, \ +} + +static struct test_desc tests[] = { + + /* Check that HW is setting the Access Flag (AF) (sanity checks). */ + TEST_ACCESS(guest_read64, with_af, CMD_NONE), + TEST_ACCESS(guest_ld_preidx, with_af, CMD_NONE), + TEST_ACCESS(guest_cas, with_af, CMD_NONE), + TEST_ACCESS(guest_write64, with_af, CMD_NONE), + TEST_ACCESS(guest_st_preidx, with_af, CMD_NONE), + TEST_ACCESS(guest_dc_zva, with_af, CMD_NONE), + TEST_ACCESS(guest_exec, with_af, CMD_NONE), + + /* + * Accessing a hole in the data memslot (punched with fallocate or + * madvise) shouldn't fault (more sanity checks). 
+	 */
+	TEST_ACCESS(guest_read64, no_af, CMD_HOLE_DATA),
+	TEST_ACCESS(guest_cas, no_af, CMD_HOLE_DATA),
+	TEST_ACCESS(guest_ld_preidx, no_af, CMD_HOLE_DATA),
+	TEST_ACCESS(guest_write64, no_af, CMD_HOLE_DATA),
+	TEST_ACCESS(guest_st_preidx, no_af, CMD_HOLE_DATA),
+	TEST_ACCESS(guest_at, no_af, CMD_HOLE_DATA),
+	TEST_ACCESS(guest_dc_zva, no_af, CMD_HOLE_DATA),
+
+	{ 0 }
+};
+
+static void for_each_test_and_guest_mode(
+	void (*func)(enum vm_guest_mode m, void *a),
+	enum vm_mem_backing_src_type src_type)
+{
+	struct test_desc *t;
+
+	for (t = &tests[0]; t->name; t++) {
+		if (t->skip)
+			continue;
+
+		struct test_params p = {
+			.src_type = src_type,
+			.test_desc = t,
+		};
+
+		for_each_guest_mode(run_test, &p);
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	enum vm_mem_backing_src_type src_type;
+	int opt;
+
+	setbuf(stdout, NULL);
+
+	src_type = DEFAULT_VM_MEM_SRC;
+
+	guest_modes_append_default();
+
+	while ((opt = getopt(argc, argv, "hm:s:")) != -1) {
+		switch (opt) {
+		case 'm':
+			guest_modes_cmdline(optarg);
+			break;
+		case 's':
+			src_type = parse_backing_src_type(optarg);
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			exit(0);
+		}
+	}
+
+	for_each_test_and_guest_mode(run_test, src_type);
+	return 0;
+}
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index c1ddca8db225..5f977528e09c 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -105,11 +105,19 @@ enum {
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
 #define ESR_EC_SVC64		0x15
+#define ESR_EC_IABT		0x21
+#define ESR_EC_DABT		0x25
 #define ESR_EC_HW_BP_CURRENT	0x31
 #define ESR_EC_SSTEP_CURRENT	0x33
 #define ESR_EC_WP_CURRENT	0x35
 #define ESR_EC_BRK_INS		0x3c
 
+/* Access flag */
+#define PTE_AF			(1ULL << 10)
+
+/* Access flag update enable/disable */
+#define TCR_EL1_HA		(1ULL << 39)
+
 void aarch64_get_supported_page_sizes(uint32_t ipa,
 				      bool *ps4k, bool *ps16k, bool *ps64k);

From patchwork Tue Sep 6 18:09:27 2022
Date: Tue, 6 Sep 2022 18:09:27 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-11-ricarkol@google.com>
Subject: [PATCH v6 10/13] KVM: selftests: aarch64: Add userfaultfd tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add some userfaultfd tests into page_fault_test. Punch holes into the
data and/or page-table memslots, perform some accesses, and check that
the faults are taken (or not taken) when expected.

Signed-off-by: Ricardo Koller
---
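For background, the handlers added below resolve MISSING-mode faults with
UFFDIO_COPY. A minimal, self-contained sketch of that protocol
(illustrative only; this is not the userfaultfd_util code, the function
name is made up, and error handling is trimmed):

	#include <linux/userfaultfd.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	/* Sketch: resolve one MISSING fault by copying in a prepared page. */
	static void handle_one_fault(int uffd, void *src_page, long page_size)
	{
		struct uffd_msg msg;
		struct uffdio_copy copy;

		/* Blocks until the guest touches an unpopulated page. */
		if (read(uffd, &msg, sizeof(msg)) <= 0 ||
		    msg.event != UFFD_EVENT_PAGEFAULT)
			return;

		copy.src = (unsigned long)src_page;
		copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
		copy.len = page_size;
		copy.mode = 0;
		ioctl(uffd, UFFDIO_COPY, &copy);  /* wakes the faulting vCPU */
	}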
 .../selftests/kvm/aarch64/page_fault_test.c   | 190 ++++++++++++++++-
 1 file changed, 188 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 1c7e02003753..57466d213b95 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -35,6 +35,12 @@ static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
 #define PREPARE_FN_NR			10
 #define CHECK_FN_NR			10
 
+static struct event_cnt {
+	int uffd_faults;
+	/* uffd_faults is incremented from multiple threads. */
+	pthread_mutex_t uffd_faults_mutex;
+} events;
+
 struct test_desc {
 	const char *name;
 	uint64_t mem_mark_cmd;
@@ -42,11 +48,14 @@ struct test_desc {
 	bool (*guest_prepare[PREPARE_FN_NR])(void);
 	void (*guest_test)(void);
 	void (*guest_test_check[CHECK_FN_NR])(void);
+	uffd_handler_t uffd_pt_handler;
+	uffd_handler_t uffd_data_handler;
 	void (*dabt_handler)(struct ex_regs *regs);
 	void (*iabt_handler)(struct ex_regs *regs);
 	uint32_t pt_memslot_flags;
 	uint32_t data_memslot_flags;
 	bool skip;
+	struct event_cnt expected_events;
 };
 
 struct test_params {
@@ -263,7 +272,110 @@ static void no_iabt_handler(struct ex_regs *regs)
 	GUEST_ASSERT_1(false, regs->pc);
 }
 
+static struct uffd_args {
+	char *copy;
+	void *hva;
+	uint64_t paging_size;
+} pt_args, data_args;
+
 /* Returns true to continue the test, and false if it should be skipped. */
+static int uffd_generic_handler(int uffd_mode, int uffd,
+		struct uffd_msg *msg, struct uffd_args *args,
+		bool expect_write)
+{
+	uint64_t addr = msg->arg.pagefault.address;
+	uint64_t flags = msg->arg.pagefault.flags;
+	struct uffdio_copy copy;
+	int ret;
+
+	TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING,
+		    "The only expected UFFD mode is MISSING");
+	ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write);
+	ASSERT_EQ(addr, (uint64_t)args->hva);
+
+	pr_debug("uffd fault: addr=%p write=%d\n",
+		 (void *)addr, !!(flags & UFFD_PAGEFAULT_FLAG_WRITE));
+
+	copy.src = (uint64_t)args->copy;
+	copy.dst = addr;
+	copy.len = args->paging_size;
+	copy.mode = 0;
+
+	ret = ioctl(uffd, UFFDIO_COPY, &copy);
+	if (ret == -1) {
+		pr_info("Failed UFFDIO_COPY in 0x%lx with errno: %d\n",
+			addr, errno);
+		return ret;
+	}
+
+	pthread_mutex_lock(&events.uffd_faults_mutex);
+	events.uffd_faults += 1;
+	pthread_mutex_unlock(&events.uffd_faults_mutex);
+	return 0;
+}
+
+static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg)
+{
+	return uffd_generic_handler(mode, uffd, msg, &pt_args, true);
+}
+
+static int uffd_data_write_handler(int mode, int uffd, struct uffd_msg *msg)
+{
+	return uffd_generic_handler(mode, uffd, msg, &data_args, true);
+}
+
+static int uffd_data_read_handler(int mode, int uffd, struct uffd_msg *msg)
+{
+	return uffd_generic_handler(mode, uffd, msg, &data_args, false);
+}
+
+static void setup_uffd_args(struct userspace_mem_region *region,
+			    struct uffd_args *args)
+{
+	args->hva = (void *)region->region.userspace_addr;
+	args->paging_size = region->region.memory_size;
+
+	args->copy = malloc(args->paging_size);
+	TEST_ASSERT(args->copy, "Failed to allocate data copy.");
+	memcpy(args->copy, args->hva, args->paging_size);
+}
+
+static void setup_uffd(struct kvm_vm *vm, struct test_params *p,
+		       struct uffd_desc **pt_uffd, struct uffd_desc **data_uffd)
+{
+	struct test_desc *test = p->test_desc;
+
+	setup_uffd_args(vm_get_mem_region(vm, MEM_REGION_PT), &pt_args);
+	setup_uffd_args(vm_get_mem_region(vm, MEM_REGION_DATA), &data_args);
+
+	*pt_uffd = NULL;
+	if (test->uffd_pt_handler)
+		*pt_uffd = uffd_setup_demand_paging(
+				UFFDIO_REGISTER_MODE_MISSING, 0,
+				pt_args.hva, pt_args.paging_size,
+				test->uffd_pt_handler);
+
+	*data_uffd = NULL;
+	if (test->uffd_data_handler)
+		*data_uffd = uffd_setup_demand_paging(
+				UFFDIO_REGISTER_MODE_MISSING, 0,
+				data_args.hva, data_args.paging_size,
+				test->uffd_data_handler);
+}
+
+static void free_uffd(struct test_desc *test, struct uffd_desc *pt_uffd,
+		      struct uffd_desc *data_uffd)
+{
+	if (test->uffd_pt_handler)
+		uffd_stop_demand_paging(pt_uffd);
+	if (test->uffd_data_handler)
+		
uffd_stop_demand_paging(data_uffd); + + free(pt_args.copy); + free(data_args.copy); +} + +/* Returns false if the test should be skipped. */ static bool punch_hole_in_memslot(struct kvm_vm *vm, struct userspace_mem_region *region) { @@ -431,6 +543,11 @@ static struct kvm_vm_mem_params setup_memslots(enum vm_guest_mode mode, return mem_params; } +static void check_event_counts(struct test_desc *test) +{ + ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults); +} + static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) { struct test_desc *test = p->test_desc; @@ -441,12 +558,17 @@ static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) vm_mem_backing_src_alias(p->src_type)->name); } +static void reset_event_counts(void) +{ + memset(&events, 0, sizeof(events)); +} + /* * This function either succeeds, skips the test (after setting test->skip), or * fails with a TEST_FAIL that aborts all tests. */ static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu, - struct test_desc *test) + struct test_desc *test) { struct ucall uc; @@ -481,6 +603,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) struct test_desc *test = p->test_desc; struct kvm_vm *vm; struct kvm_vcpu *vcpu; + struct uffd_desc *pt_uffd, *data_uffd; struct kvm_vm_mem_params mem_params; print_test_banner(mode, p); @@ -494,7 +617,16 @@ static void run_test(enum vm_guest_mode mode, void *arg) ucall_init(vm, NULL); + reset_event_counts(); + + /* + * Set some code in the data memslot for the guest to execute (only + * applicable to the EXEC tests). This has to be done before + * setup_uffd() as that function copies the memslot data for the uffd + * handler. + */ load_exec_code_for_test(vm); + setup_uffd(vm, p, &pt_uffd, &data_uffd); setup_abort_handlers(vm, vcpu, test); vcpu_args_set(vcpu, 1, test); @@ -502,6 +634,14 @@ static void run_test(enum vm_guest_mode mode, void *arg) ucall_uninit(vm); kvm_vm_free(vm); + free_uffd(test, pt_uffd, data_uffd); + + /* + * Make sure we check the events after the uffd threads have exited, + * which means they updated their respective event counters. + */ + if (!test->skip) + check_event_counts(test); } static void help(char *name) @@ -517,6 +657,7 @@ static void help(char *name) #define SNAME(s) #s #define SCAT2(a, b) SNAME(a ## _ ## b) #define SCAT3(a, b, c) SCAT2(a, SCAT2(b, c)) +#define SCAT4(a, b, c, d) SCAT2(a, SCAT3(b, c, d)) #define _CHECK(_test) _CHECK_##_test #define _PREPARE(_test) _PREPARE_##_test @@ -535,7 +676,7 @@ static void help(char *name) #define _CHECK_with_af guest_check_pte_af #define _CHECK_no_af NULL -/* Performs an access and checks that no faults were triggered. */ +/* Performs an access and checks that no faults (no events) were triggered. 
*/
 #define TEST_ACCESS(_access, _with_af, _mark_cmd)			\
 {									\
 	.name = SCAT3(_access, _with_af, #_mark_cmd),			\
@@ -544,6 +685,21 @@ static void help(char *name)
 	.mem_mark_cmd = _mark_cmd,					\
 	.guest_test = _access,						\
 	.guest_test_check = { _CHECK(_with_af) },			\
+	.expected_events = { 0 },					\
+}
+
+#define TEST_UFFD(_access, _with_af, _mark_cmd,				\
+		  _uffd_data_handler, _uffd_pt_handler, _uffd_faults)	\
+{									\
+	.name = SCAT4(uffd, _access, _with_af, #_mark_cmd),		\
+	.guest_prepare = { _PREPARE(_with_af),				\
+			   _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.mem_mark_cmd = _mark_cmd,					\
+	.guest_test_check = { _CHECK(_with_af) },			\
+	.uffd_data_handler = _uffd_data_handler,			\
+	.uffd_pt_handler = _uffd_pt_handler,				\
+	.expected_events = { .uffd_faults = _uffd_faults, },		\
 }
 
 static struct test_desc tests[] = {
@@ -569,6 +725,36 @@ static struct test_desc tests[] = {
 	TEST_ACCESS(guest_at, no_af, CMD_HOLE_DATA),
 	TEST_ACCESS(guest_dc_zva, no_af, CMD_HOLE_DATA),
 
+	/*
+	 * Punch holes in the test and PT memslots and mark them for
+	 * userfaultfd handling. This should result in 2 faults: the test
+	 * access and its respective S1 page table walk (S1PTW).
+	 */
+	TEST_UFFD(guest_read64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+	/* no_af should also lead to a PT write. */
+	TEST_UFFD(guest_read64, no_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+	/* Note that cas invokes the read handler. */
+	TEST_UFFD(guest_cas, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+	/*
+	 * Can't test guest_at with_af as it's IMPDEF whether the AF is set.
+	 * The S1PTW fault should still be marked as a write.
+	 */
+	TEST_UFFD(guest_at, no_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 1),
+	TEST_UFFD(guest_ld_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_write64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_write_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_dc_zva, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_write_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_st_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_write_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+
 	{ 0 }
 };

From patchwork Tue Sep 6 18:09:28 2022
Date: Tue, 6 Sep 2022 18:09:28 +0000
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
References: <20220906180930.230218-1-ricarkol@google.com>
Message-ID: <20220906180930.230218-12-ricarkol@google.com>
Subject: [PATCH v6 11/13] KVM: selftests: aarch64: Add dirty logging tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add some dirty logging tests into page_fault_test. Mark the data and/or
page-table memslots for dirty logging, perform some accesses, and check
that the dirty log bits are set or clean when expected.
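The checks ride on the existing guest/host command channel: the guest
issues a GUEST_SYNC with a check command and the host run loop does the
dirty-log lookup. A condensed sketch of the host side
(slot_first_page_dirty is a made-up name; the patch's real helper is
check_write_in_dirty_log() in the diff below):

	/* Sketch: was the slot's first host page marked dirty? */
	static bool slot_first_page_dirty(struct kvm_vm *vm, uint32_t slot,
					  uint64_t slot_size)
	{
		unsigned long *bmap = bitmap_zalloc(slot_size / getpagesize());
		bool dirty;

		kvm_vm_get_dirty_log(vm, slot, bmap);	/* KVM_GET_DIRTY_LOG */
		dirty = test_bit(0, bmap);
		free(bmap);
		return dirty;
	}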
Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/aarch64/page_fault_test.c | 75 +++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 57466d213b95..481c96eec34a 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -31,6 +31,11 @@ static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
 #define CMD_SKIP_TEST				(1ULL << 1)
 #define CMD_HOLE_PT				(1ULL << 2)
 #define CMD_HOLE_DATA				(1ULL << 3)
+#define CMD_CHECK_WRITE_IN_DIRTY_LOG		(1ULL << 4)
+#define CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG		(1ULL << 5)
+#define CMD_CHECK_NO_WRITE_IN_DIRTY_LOG		(1ULL << 6)
+#define CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG	(1ULL << 7)
+#define CMD_SET_PTE_AF				(1ULL << 8)

 #define PREPARE_FN_NR				10
 #define CHECK_FN_NR				10
@@ -213,6 +218,21 @@ static void guest_check_pte_af(void)
	GUEST_ASSERT_EQ(*((uint64_t *)TEST_PTE_GVA) & PTE_AF, PTE_AF);
 }

+static void guest_check_write_in_dirty_log(void)
+{
+	GUEST_SYNC(CMD_CHECK_WRITE_IN_DIRTY_LOG);
+}
+
+static void guest_check_no_write_in_dirty_log(void)
+{
+	GUEST_SYNC(CMD_CHECK_NO_WRITE_IN_DIRTY_LOG);
+}
+
+static void guest_check_s1ptw_wr_in_dirty_log(void)
+{
+	GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG);
+}
+
 static void guest_exec(void)
 {
	int (*code)(void) = (int (*)(void))TEST_EXEC_GVA;
@@ -398,6 +418,21 @@ static bool punch_hole_in_memslot(struct kvm_vm *vm,
	return true;
 }

+static bool check_write_in_dirty_log(struct kvm_vm *vm,
+		struct userspace_mem_region *region, uint64_t host_pg_nr)
+{
+	unsigned long *bmap;
+	bool first_page_dirty;
+	uint64_t size = region->region.memory_size;
+
+	/* getpagesize() is not always equal to vm->page_size */
+	bmap = bitmap_zalloc(size / getpagesize());
+	kvm_vm_get_dirty_log(vm, region->region.slot, bmap);
+	first_page_dirty = test_bit(host_pg_nr, bmap);
+	free(bmap);
+	return first_page_dirty;
+}
+
 /* Returns true to continue the test, and false if it should be skipped. */
 static bool handle_cmd(struct kvm_vm *vm, int cmd)
 {
@@ -414,6 +449,18 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd)
		continue_test = punch_hole_in_memslot(vm, pt_region);
	if (cmd & CMD_HOLE_DATA)
		continue_test = punch_hole_in_memslot(vm, data_region);
+	if (cmd & CMD_CHECK_WRITE_IN_DIRTY_LOG)
+		TEST_ASSERT(check_write_in_dirty_log(vm, data_region, 0),
+			    "Missing write in dirty log");
+	if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG)
+		TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, 0),
+			    "Missing s1ptw write in dirty log");
+	if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG)
+		TEST_ASSERT(!check_write_in_dirty_log(vm, data_region, 0),
+			    "Unexpected write in dirty log");
+	if (cmd & CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG)
+		TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, 0),
+			    "Unexpected s1ptw write in dirty log");

	return continue_test;
 }
@@ -702,6 +749,19 @@ static void help(char *name)
	.expected_events = { .uffd_faults = _uffd_faults, },		\
 }

+#define TEST_DIRTY_LOG(_access, _with_af, _test_check)			\
+{									\
+	.name = SCAT3(dirty_log, _access, _with_af),			\
+	.data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.guest_prepare = { _PREPARE(_with_af),				\
+			   _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.guest_test_check = { _CHECK(_with_af), _test_check,		\
+			      guest_check_s1ptw_wr_in_dirty_log},	\
+	.expected_events = { 0 },					\
+}
+
 static struct test_desc tests[] = {
	/*
	 * Check that HW is setting the Access Flag (AF) (sanity checks).
	 */
@@ -755,6 +815,21 @@ static struct test_desc tests[] = {
	TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
		  uffd_data_read_handler, uffd_pt_write_handler, 2),

+	/*
+	 * Try accesses when the data and PT memslots are both tracked for
+	 * dirty logging.
+	 */
+	TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log),
+	/* no_af should also lead to a PT write. */
+	TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_ld_preidx, with_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),
+
	{ 0 }
 };
From patchwork Tue Sep 6 18:09:29 2022
From: Ricardo Koller
Date: Tue, 6 Sep 2022 18:09:29 +0000
Subject: [PATCH v6 12/13] KVM: selftests: aarch64: Add readonly memslot tests
 into page_fault_test
Message-ID: <20220906180930.230218-13-ricarkol@google.com>
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add some readonly memslot tests into page_fault_test. Mark the data
and/or page-table memslots as readonly, perform some accesses, and check
that the right fault is triggered when expected (e.g., a store with no
write-back should lead to an mmio exit).
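[Note: when the guest writes to a read-only memslot with a syndrome KVM can decode, KVM_RUN returns to userspace with KVM_EXIT_MMIO and the run struct describes the access. A sketch of the userspace side, with addr_gpa2hva() standing in for however the host mapping is found; the patch below instead uses the memslot's userspace_addr directly.]

if (run->exit_reason == KVM_EXIT_MMIO && run->mmio.is_write) {
	/* Replay the guest's write into the host view of the page. */
	void *hva = addr_gpa2hva(vm, run->mmio.phys_addr);

	memcpy(hva, run->mmio.data, run->mmio.len);
}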
Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/aarch64/page_fault_test.c | 101 +++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 481c96eec34a..39fc2103fdd6 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -41,6 +41,8 @@ static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
 #define CHECK_FN_NR				10

 static struct event_cnt {
+	int mmio_exits;
+	int fail_vcpu_runs;
	int uffd_faults;
	/* uffd_faults is incremented from multiple threads. */
	pthread_mutex_t uffd_faults_mutex;
@@ -57,6 +59,8 @@ struct test_desc {
	uffd_handler_t uffd_data_handler;
	void (*dabt_handler)(struct ex_regs *regs);
	void (*iabt_handler)(struct ex_regs *regs);
+	void (*mmio_handler)(struct kvm_vm *vm, struct kvm_run *run);
+	void (*fail_vcpu_run_handler)(int ret);
	uint32_t pt_memslot_flags;
	uint32_t data_memslot_flags;
	bool skip;
@@ -418,6 +422,31 @@ static bool punch_hole_in_memslot(struct kvm_vm *vm,
	return true;
 }

+static void mmio_on_test_gpa_handler(struct kvm_vm *vm, struct kvm_run *run)
+{
+	struct userspace_mem_region *region;
+	void *hva;
+
+	region = vm_get_mem_region(vm, MEM_REGION_DATA);
+	hva = (void *)region->region.userspace_addr;
+
+	ASSERT_EQ(run->mmio.phys_addr, region->region.guest_phys_addr);
+
+	memcpy(hva, run->mmio.data, run->mmio.len);
+	events.mmio_exits += 1;
+}
+
+static void mmio_no_handler(struct kvm_vm *vm, struct kvm_run *run)
+{
+	uint64_t data;
+
+	memcpy(&data, run->mmio.data, sizeof(data));
+	pr_debug("addr=%llx len=%d w=%d data=%lx\n",
+		 run->mmio.phys_addr, run->mmio.len,
+		 run->mmio.is_write, data);
+	TEST_FAIL("No MMIO exit was expected.");
+}
+
 static bool check_write_in_dirty_log(struct kvm_vm *vm,
		struct userspace_mem_region *region, uint64_t host_pg_nr)
 {
@@ -465,6 +494,18 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd)
	return continue_test;
 }

+void fail_vcpu_run_no_handler(int ret)
+{
+	TEST_FAIL("Unexpected vcpu run failure");
+}
+
+void fail_vcpu_run_mmio_no_syndrome_handler(int ret)
+{
+	TEST_ASSERT(errno == ENOSYS,
+		    "The mmio handler should have returned not implemented (ENOSYS).");
+	events.fail_vcpu_runs += 1;
+}
+
 extern unsigned char __exec_test;

 void noinline __return_0x77(void)
@@ -590,9 +631,20 @@ static struct kvm_vm_mem_params setup_memslots(enum vm_guest_mode mode,
	return mem_params;
 }

+static void setup_default_handlers(struct test_desc *test)
+{
+	if (!test->mmio_handler)
+		test->mmio_handler = mmio_no_handler;
+
+	if (!test->fail_vcpu_run_handler)
+		test->fail_vcpu_run_handler = fail_vcpu_run_no_handler;
+}
+
 static void check_event_counts(struct test_desc *test)
 {
	ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults);
+	ASSERT_EQ(test->expected_events.mmio_exits, events.mmio_exits);
+	ASSERT_EQ(test->expected_events.fail_vcpu_runs, events.fail_vcpu_runs);
 }

 static void print_test_banner(enum vm_guest_mode mode, struct test_params *p)
@@ -617,10 +669,18 @@ static void reset_event_counts(void)
 static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
			  struct test_desc *test)
 {
+	struct kvm_run *run;
	struct ucall uc;
+	int ret;
+
+	run = vcpu->run;

	for (;;) {
-		vcpu_run(vcpu);
+		ret = _vcpu_run(vcpu);
+		if (ret) {
+			test->fail_vcpu_run_handler(ret);
+			goto done;
+		}

		switch (get_ucall(vcpu, &uc)) {
		case UCALL_SYNC:
@@ -634,6 +694,10 @@ static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
			break;
		case UCALL_DONE:
			goto done;
+		case UCALL_NONE:
+			if (run->exit_reason == KVM_EXIT_MMIO)
+				test->mmio_handler(vm, run);
+			break;
		default:
			TEST_FAIL("Unknown ucall %lu", uc.cmd);
		}
@@ -675,6 +739,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
	load_exec_code_for_test(vm);
	setup_uffd(vm, p, &pt_uffd, &data_uffd);
	setup_abort_handlers(vm, vcpu, test);
+	setup_default_handlers(test);
	vcpu_args_set(vcpu, 1, test);

	vcpu_run_loop(vm, vcpu, test);
@@ -762,6 +827,25 @@ static void help(char *name)
	.expected_events = { 0 },					\
 }

+#define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits)		\
+{									\
+	.name = SCAT2(ro_memslot, _access),				\
+	.data_memslot_flags = KVM_MEM_READONLY,				\
+	.guest_prepare = { _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.mmio_handler = _mmio_handler,					\
+	.expected_events = { .mmio_exits = _mmio_exits },		\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME(_access)				\
+{									\
+	.name = SCAT2(ro_memslot_no_syndrome, _access),			\
+	.data_memslot_flags = KVM_MEM_READONLY,				\
+	.guest_test = _access,						\
+	.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \
+	.expected_events = { .fail_vcpu_runs = 1 },			\
+}
+
 static struct test_desc tests[] = {
	/*
	 * Check that HW is setting the Access Flag (AF) (sanity checks).
	 */
@@ -830,6 +914,21 @@ static struct test_desc tests[] = {
	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),

+	/*
+	 * Try accesses when the data memslot is marked read-only (with
+	 * KVM_MEM_READONLY). Writes with a syndrome result in an MMIO exit,
+	 * writes with no syndrome (e.g., CAS) result in a failed vcpu run,
+	 * and reads/execs with and without syndromes do not fault.
+	 */
+	TEST_RO_MEMSLOT(guest_read64, 0, 0),
+	TEST_RO_MEMSLOT(guest_ld_preidx, 0, 0),
+	TEST_RO_MEMSLOT(guest_at, 0, 0),
+	TEST_RO_MEMSLOT(guest_exec, 0, 0),
+	TEST_RO_MEMSLOT(guest_write64, mmio_on_test_gpa_handler, 1),
+	TEST_RO_MEMSLOT_NO_SYNDROME(guest_dc_zva),
+	TEST_RO_MEMSLOT_NO_SYNDROME(guest_cas),
+	TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx),
+
	{ 0 }
 };
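[Note: the no-syndrome cases behave differently. With no ISS to decode, KVM cannot build an MMIO exit, so KVM_RUN itself fails. A sketch of what the handler above checks for, assuming _vcpu_run() is a thin wrapper around the KVM_RUN ioctl that preserves errno:]

ret = _vcpu_run(vcpu);
if (ret) {
	/* The series expects ENOSYS for syndrome-less writes. */
	TEST_ASSERT(errno == ENOSYS,
		    "unexpected KVM_RUN error: errno=%d", errno);
}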
From patchwork Tue Sep 6 18:09:30 2022
From: Ricardo Koller
Date: Tue, 6 Sep 2022 18:09:30 +0000
Subject: [PATCH v6 13/13] KVM: selftests: aarch64: Add mix of tests into
 page_fault_test
Message-ID: <20220906180930.230218-14-ricarkol@google.com>
In-Reply-To: <20220906180930.230218-1-ricarkol@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add a mix of tests into page_fault_test: memslots with all the pairwise
combinations of read-only, userfaultfd, and dirty-logging. For example,
writing into a read-only memslot which has a hole handled with
userfaultfd.
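[Note: memslot flags are a bitmask, so the read-only plus dirty-logging combinations below fall out of OR-ing the two flags when a memslot is registered. A sketch using the selftests API; the GPA, slot number, and page count are placeholders.]

vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, data_gpa,
			    /*slot=*/1, /*npages=*/1,
			    KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES);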
Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/aarch64/page_fault_test.c | 155 ++++++++++++++++++
 1 file changed, 155 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 39fc2103fdd6..60a6a8a45fa4 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -399,6 +399,12 @@ static void free_uffd(struct test_desc *test, struct uffd_desc *pt_uffd,
	free(data_args.copy);
 }

+static int uffd_no_handler(int mode, int uffd, struct uffd_msg *msg)
+{
+	TEST_FAIL("No UFFD fault was expected.");
+	return -1;
+}
+
 /* Returns false if the test should be skipped. */
 static bool punch_hole_in_memslot(struct kvm_vm *vm,
				   struct userspace_mem_region *region)
@@ -827,6 +833,22 @@ static void help(char *name)
	.expected_events = { 0 },					\
 }

+#define TEST_UFFD_AND_DIRTY_LOG(_access, _with_af, _uffd_data_handler,	\
+				_uffd_faults, _test_check)		\
+{									\
+	.name = SCAT3(uffd_and_dirty_log, _access, _with_af),		\
+	.data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.guest_prepare = { _PREPARE(_with_af),				\
+			   _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT,			\
+	.guest_test_check = { _CHECK(_with_af), _test_check },		\
+	.uffd_data_handler = _uffd_data_handler,			\
+	.uffd_pt_handler = uffd_pt_write_handler,			\
+	.expected_events = { .uffd_faults = _uffd_faults, },		\
+}
+
 #define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits)		\
 {									\
	.name = SCAT2(ro_memslot, _access),				\
@@ -846,6 +868,59 @@ static void help(char *name)
	.expected_events = { .fail_vcpu_runs = 1 },			\
 }

+#define TEST_RO_MEMSLOT_AND_DIRTY_LOG(_access, _mmio_handler, _mmio_exits, \
+				      _test_check)			\
+{									\
+	.name = SCAT2(ro_memslot_and_dlog, _access),			\
+	.data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \
+	.pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.guest_prepare = { _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.guest_test_check = { _test_check },				\
+	.mmio_handler = _mmio_handler,					\
+	.expected_events = { .mmio_exits = _mmio_exits },		\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(_access, _test_check) \
+{									\
+	.name = SCAT2(ro_memslot_no_syn_and_dlog, _access),		\
+	.data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \
+	.pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.guest_test = _access,						\
+	.guest_test_check = { _test_check },				\
+	.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \
+	.expected_events = { .fail_vcpu_runs = 1 },			\
+}
+
+#define TEST_RO_MEMSLOT_AND_UFFD(_access, _mmio_handler, _mmio_exits,	\
+				 _uffd_data_handler, _uffd_faults)	\
+{									\
+	.name = SCAT2(ro_memslot_uffd, _access),			\
+	.data_memslot_flags = KVM_MEM_READONLY,				\
+	.mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT,			\
+	.guest_prepare = { _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.uffd_data_handler = _uffd_data_handler,			\
+	.uffd_pt_handler = uffd_pt_write_handler,			\
+	.mmio_handler = _mmio_handler,					\
+	.expected_events = { .mmio_exits = _mmio_exits,			\
+			     .uffd_faults = _uffd_faults },		\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(_access, _uffd_data_handler, \
+					     _uffd_faults)		\
+{									\
+	.name = SCAT2(ro_memslot_no_syn_and_uffd, _access),		\
+	.data_memslot_flags = KVM_MEM_READONLY,				\
+	.mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT,			\
+	.guest_test = _access,						\
+	.uffd_data_handler = _uffd_data_handler,			\
+	.uffd_pt_handler = uffd_pt_write_handler,			\
+	.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \
+	.expected_events = { .fail_vcpu_runs = 1,			\
+			     .uffd_faults = _uffd_faults },		\
+}
+
 static struct test_desc tests[] = {
	/*
	 * Check that HW is setting the Access Flag (AF) (sanity checks).
	 */
@@ -914,6 +989,35 @@ static struct test_desc tests[] = {
	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),

+	/*
+	 * Access when the data and PT memslots are both marked for dirty
+	 * logging and handled with userfaultfd at the same time. The
+	 * expected result is that writes should mark the dirty log and
+	 * trigger a userfaultfd write fault.
+	 * Reads/execs should result in a read userfaultfd fault, and
+	 * nothing in the dirty log. Any S1PTW should result in a write in
+	 * the dirty log and a userfaultfd write.
+	 */
+	TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, uffd_data_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	/* no_af should also lead to a PT write. */
+	TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, uffd_data_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, uffd_data_read_handler,
+				2, guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, 0, 1,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, uffd_data_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, uffd_data_write_handler,
+				2, guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, uffd_data_read_handler, 2,
+				guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, uffd_data_write_handler,
+				2, guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_st_preidx, with_af,
+				uffd_data_write_handler, 2,
+				guest_check_write_in_dirty_log),
+
	/*
	 * Try accesses when the data memslot is marked read-only (with
	 * KVM_MEM_READONLY). Writes with a syndrome result in an MMIO exit,
@@ -929,6 +1033,57 @@ static struct test_desc tests[] = {
	TEST_RO_MEMSLOT_NO_SYNDROME(guest_cas),
	TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx),

+	/*
+	 * Access when the data memslot is both read-only and marked for
+	 * dirty logging at the same time. The expected result is that for
+	 * writes there should be no write in the dirty log. The readonly
+	 * handling is the same as if the memslot were not marked for dirty
+	 * logging: writes with a syndrome result in an MMIO exit, and
+	 * writes with no syndrome result in a failed vcpu run.
+	 */
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_read64, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_ld_preidx, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_at, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_exec, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_write64, mmio_on_test_gpa_handler,
+				      1, guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_dc_zva,
+						   guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_cas,
+						   guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_st_preidx,
+						   guest_check_no_write_in_dirty_log),
+
+	/*
+	 * Access when the data memslot is both read-only and punched with
+	 * holes that are tracked with userfaultfd. The expected result is
+	 * the union of both userfaultfd and read-only behaviors. For
+	 * example, write accesses result in a userfaultfd write fault and
+	 * an MMIO exit. Writes with no syndrome result in a failed vcpu
+	 * run and no userfaultfd write fault. Reads result in userfaultfd
+	 * getting triggered.
+	 */
+	TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0,
+				 uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0,
+				 uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0,
+				 uffd_no_handler, 1),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0,
+				 uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1,
+				 uffd_data_write_handler, 2),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas,
+					     uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva,
+					     uffd_no_handler, 1),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx,
+					     uffd_no_handler, 1),
+
	{ 0 }
 };
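[Note: a closing remark on the table's shape. The zero-initialized sentinel entry lets the driver walk tests[] without a separate count, along the following lines; the iteration and the run_one_test() name are assumptions about the runner defined earlier in the file, not code from the series.]

for (struct test_desc *t = &tests[0]; t->name; t++)
	run_one_test(t);	/* hypothetical per-test driver */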