From patchwork Tue Oct 11 01:06:15 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:15 +0000
Subject: [PATCH v9 01/14] KVM: selftests: Add a userfaultfd library
Message-ID: <20221011010628.1734342-2-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev

Move the generic userfaultfd code out of demand_paging_test.c into a
common library, userfaultfd_util. This library consists of a setup and a
stop function. The setup function starts a thread for handling page
faults using the handler callback function. This setup returns a
uffd_desc object, which is then used in the stop function (to wait for
and destroy the threads).

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 tools/testing/selftests/kvm/Makefile           |   1 +
 .../selftests/kvm/demand_paging_test.c         | 228 +++---------------
 .../selftests/kvm/include/userfaultfd_util.h   |  45 ++++
 .../selftests/kvm/lib/userfaultfd_util.c       | 186 ++++++++++++++
 4 files changed, 262 insertions(+), 198 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/userfaultfd_util.h
 create mode 100644 tools/testing/selftests/kvm/lib/userfaultfd_util.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0172eb6cb6ee..08a2606aff33 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -47,6 +47,7 @@ LIBKVM += lib/perf_test_util.c
 LIBKVM += lib/rbtree.c
 LIBKVM += lib/sparsebit.c
 LIBKVM += lib/test_util.c
+LIBKVM += lib/userfaultfd_util.c
 
 LIBKVM_STRING += lib/string_override.c
 
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 779ae54f89c4..8e1fe4ffcccd 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -22,23 +22,13 @@
 #include "test_util.h"
 #include "perf_test_util.h"
 #include "guest_modes.h"
+#include "userfaultfd_util.h"
 
 #ifdef __NR_userfaultfd
 
-#ifdef PRINT_PER_PAGE_UPDATES
-#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__)
-#else
-#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__)
-#endif
-
-#ifdef PRINT_PER_VCPU_UPDATES
-#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__)
-#else
-#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__)
-#endif
-
 static int nr_vcpus = 1;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+
 static size_t demand_paging_size;
 static char *guest_data_prototype;
 
@@ -67,9 +57,11 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 }
 
-static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr)
+static int handle_uffd_page_request(int uffd_mode, int uffd,
+				    struct uffd_msg *msg)
 {
 	pid_t tid = syscall(__NR_gettid);
+	uint64_t addr = msg->arg.pagefault.address;
 	struct timespec start;
 	struct timespec ts_diff;
 	int r;
@@ -116,174 +108,32 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr)
 	return 0;
 }
 
-bool quit_uffd_thread;
-
-struct uffd_handler_args {
+struct test_params {
 	int uffd_mode;
-	int uffd;
-	int pipefd;
-	useconds_t delay;
+	useconds_t uffd_delay;
+	enum vm_mem_backing_src_type src_type;
+	bool partition_vcpu_memory_access;
 };
 
-static void *uffd_handler_thread_fn(void *arg)
-{
-	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
-	int uffd = uffd_args->uffd;
-	int pipefd = uffd_args->pipefd;
-	useconds_t delay = uffd_args->delay;
-	int64_t pages = 0;
-	struct timespec start;
-	struct timespec ts_diff;
-
-	clock_gettime(CLOCK_MONOTONIC, &start);
-	while (!quit_uffd_thread) {
-		struct uffd_msg msg;
-		struct pollfd pollfd[2];
-		char tmp_chr;
-		int r;
-		uint64_t addr;
-
-		pollfd[0].fd = uffd;
-		pollfd[0].events = POLLIN;
-		pollfd[1].fd = pipefd;
-		pollfd[1].events = POLLIN;
-
-		r = poll(pollfd, 2, -1);
-		switch (r) {
-		case -1:
-			pr_info("poll err");
-			continue;
-		case 0:
-			continue;
-		case 1:
-			break;
-		default:
-			pr_info("Polling uffd returned %d", r);
-			return NULL;
-		}
-
-		if (pollfd[0].revents & POLLERR) {
-			pr_info("uffd revents has POLLERR");
-			return NULL;
-		}
-
-		if (pollfd[1].revents & POLLIN) {
-			r = read(pollfd[1].fd, &tmp_chr, 1);
-			TEST_ASSERT(r == 1,
-				    "Error reading pipefd in UFFD thread\n");
-			return NULL;
-		}
-
-		if (!(pollfd[0].revents & POLLIN))
-			continue;
-
-		r = read(uffd, &msg, sizeof(msg));
-		if (r == -1) {
-			if (errno == EAGAIN)
-				continue;
-			pr_info("Read of uffd got errno %d\n", errno);
-			return NULL;
-		}
-
-		if (r != sizeof(msg)) {
-			pr_info("Read on uffd returned unexpected size: %d bytes", r);
-			return NULL;
-		}
-
-		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
-			continue;
-
-		if (delay)
-			usleep(delay);
-		addr = msg.arg.pagefault.address;
-		r = handle_uffd_page_request(uffd_args->uffd_mode, uffd, addr);
-		if (r < 0)
-			return NULL;
-		pages++;
-	}
-
-	ts_diff = timespec_elapsed(start);
-	PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n",
-		       pages, ts_diff.tv_sec, ts_diff.tv_nsec,
-		       pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
-
-	return NULL;
-}
-
-static void setup_demand_paging(struct kvm_vm *vm,
-				pthread_t *uffd_handler_thread, int pipefd,
-				int uffd_mode, useconds_t uffd_delay,
-				struct uffd_handler_args *uffd_args,
-				void *hva, void *alias, uint64_t len)
+static void prefault_mem(void *alias, uint64_t len)
 {
-	bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR);
-	int uffd;
-	struct uffdio_api uffdio_api;
-	struct uffdio_register uffdio_register;
-	uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY;
-	int ret;
+	size_t p;
 
-	PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n",
-		       is_minor ? "MINOR" : "MISSING",
-		       is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY");
-
-	/* In order to get minor faults, prefault via the alias. */
-	if (is_minor) {
-		size_t p;
-
-		expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
-
-		TEST_ASSERT(alias != NULL, "Alias required for minor faults");
-		for (p = 0; p < (len / demand_paging_size); ++p) {
-			memcpy(alias + (p * demand_paging_size),
-			       guest_data_prototype, demand_paging_size);
-		}
+	TEST_ASSERT(alias != NULL, "Alias required for minor faults");
+	for (p = 0; p < (len / demand_paging_size); ++p) {
+		memcpy(alias + (p * demand_paging_size),
+		       guest_data_prototype, demand_paging_size);
 	}
-
-	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
-	TEST_ASSERT(uffd >= 0, __KVM_SYSCALL_ERROR("userfaultfd()", uffd));
-
-	uffdio_api.api = UFFD_API;
-	uffdio_api.features = 0;
-	ret = ioctl(uffd, UFFDIO_API, &uffdio_api);
-	TEST_ASSERT(ret != -1, __KVM_SYSCALL_ERROR("UFFDIO_API", ret));
-
-	uffdio_register.range.start = (uint64_t)hva;
-	uffdio_register.range.len = len;
-	uffdio_register.mode = uffd_mode;
-	ret = ioctl(uffd, UFFDIO_REGISTER, &uffdio_register);
-	TEST_ASSERT(ret != -1, __KVM_SYSCALL_ERROR("UFFDIO_REGISTER", ret));
-	TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) ==
-		    expected_ioctls, "missing userfaultfd ioctls");
-
-	uffd_args->uffd_mode = uffd_mode;
-	uffd_args->uffd = uffd;
-	uffd_args->pipefd = pipefd;
-	uffd_args->delay = uffd_delay;
-	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
-		       uffd_args);
-
-	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
-		       hva, hva + len);
 }
 
-struct test_params {
-	int uffd_mode;
-	useconds_t uffd_delay;
-	enum vm_mem_backing_src_type src_type;
-	bool partition_vcpu_memory_access;
-};
-
 static void run_test(enum vm_guest_mode mode, void *arg)
 {
 	struct test_params *p = arg;
-	pthread_t *uffd_handler_threads = NULL;
-	struct uffd_handler_args *uffd_args = NULL;
+	struct uffd_desc **uffd_descs = NULL;
 	struct timespec start;
 	struct timespec ts_diff;
-	int *pipefds = NULL;
 	struct kvm_vm *vm;
-	int r, i;
+	int i;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
 				 p->src_type, p->partition_vcpu_memory_access);
@@ -296,15 +146,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	memset(guest_data_prototype, 0xAB, demand_paging_size);
 
 	if (p->uffd_mode) {
-		uffd_handler_threads =
-			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
-		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
-
-		uffd_args = malloc(nr_vcpus * sizeof(*uffd_args));
-		TEST_ASSERT(uffd_args, "Memory allocation failed");
-
-		pipefds = malloc(sizeof(int) * nr_vcpus * 2);
-		TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd");
+		uffd_descs = malloc(nr_vcpus * sizeof(struct uffd_desc *));
+		TEST_ASSERT(uffd_descs, "Memory allocation failed");
 
 		for (i = 0; i < nr_vcpus; i++) {
 			struct perf_test_vcpu_args *vcpu_args;
@@ -317,19 +160,17 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			vcpu_hva = addr_gpa2hva(vm, vcpu_args->gpa);
 			vcpu_alias = addr_gpa2alias(vm, vcpu_args->gpa);
 
+			prefault_mem(vcpu_alias,
+				vcpu_args->pages * perf_test_args.guest_page_size);
+
 			/*
 			 * Set up user fault fd to handle demand paging
 			 * requests.
			 */
-			r = pipe2(&pipefds[i * 2],
-				  O_CLOEXEC | O_NONBLOCK);
-			TEST_ASSERT(!r, "Failed to set up pipefd");
-
-			setup_demand_paging(vm, &uffd_handler_threads[i],
-					    pipefds[i * 2], p->uffd_mode,
-					    p->uffd_delay, &uffd_args[i],
-					    vcpu_hva, vcpu_alias,
-					    vcpu_args->pages * perf_test_args.guest_page_size);
+			uffd_descs[i] = uffd_setup_demand_paging(
+				p->uffd_mode, p->uffd_delay, vcpu_hva,
+				vcpu_args->pages * perf_test_args.guest_page_size,
+				&handle_uffd_page_request);
 		}
 	}
 
@@ -344,15 +185,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("All vCPU threads joined\n");
 
 	if (p->uffd_mode) {
-		char c;
-
 		/* Tell the user fault fd handler threads to quit */
-		for (i = 0; i < nr_vcpus; i++) {
-			r = write(pipefds[i * 2 + 1], &c, 1);
-			TEST_ASSERT(r == 1, "Unable to write to pipefd");
-
-			pthread_join(uffd_handler_threads[i], NULL);
-		}
+		for (i = 0; i < nr_vcpus; i++)
+			uffd_stop_demand_paging(uffd_descs[i]);
 	}
 
 	pr_info("Total guest execution time: %ld.%.9lds\n",
@@ -364,11 +199,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	perf_test_destroy_vm(vm);
 
 	free(guest_data_prototype);
-	if (p->uffd_mode) {
-		free(uffd_handler_threads);
-		free(uffd_args);
-		free(pipefds);
-	}
+	if (p->uffd_mode)
+		free(uffd_descs);
 }
 
 static void help(char *name)
diff --git a/tools/testing/selftests/kvm/include/userfaultfd_util.h b/tools/testing/selftests/kvm/include/userfaultfd_util.h
new file mode 100644
index 000000000000..aef029300b76
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/userfaultfd_util.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * KVM userfaultfd util
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2019-2022 Google LLC
+ */
+
+#define _GNU_SOURCE /* for pipe2 */
+
+#include <inttypes.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/userfaultfd.h>
+
+#include "test_util.h"
+
+typedef int (*uffd_handler_t)(int uffd_mode, int uffd, struct uffd_msg *msg);
+
+struct uffd_desc {
+	int uffd_mode;
+	int uffd;
+	int pipefds[2];
+	useconds_t delay;
+	uffd_handler_t handler;
+	pthread_t thread;
+};
+
+struct uffd_desc *uffd_setup_demand_paging(int uffd_mode,
+		useconds_t uffd_delay, void *hva, uint64_t len,
+		uffd_handler_t handler);
+
+void uffd_stop_demand_paging(struct uffd_desc *uffd);
+
+#ifdef PRINT_PER_PAGE_UPDATES
+#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__)
+#else
+#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__)
+#endif
+
+#ifdef PRINT_PER_VCPU_UPDATES
+#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__)
+#else
+#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__)
+#endif
diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
new file mode 100644
index 000000000000..9c08ddca03c9
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
@@ -0,0 +1,186 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM userfaultfd util
+ * Adapted from demand_paging_test.c
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2019-2022 Google LLC
+ */
+
+#define _GNU_SOURCE /* for pipe2 */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include <poll.h>
+#include <pthread.h>
+#include <linux/userfaultfd.h>
+#include <sys/syscall.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "perf_test_util.h"
+#include "userfaultfd_util.h"
+
+#ifdef __NR_userfaultfd
+
+static void *uffd_handler_thread_fn(void *arg)
+{
+	struct uffd_desc *uffd_desc = (struct uffd_desc *)arg;
+	int uffd = uffd_desc->uffd;
+	int pipefd = uffd_desc->pipefds[0];
+	useconds_t delay = uffd_desc->delay;
+	int64_t pages = 0;
+	struct timespec start;
+	struct timespec ts_diff;
+
+	clock_gettime(CLOCK_MONOTONIC, &start);
+	while (1) {
+		struct uffd_msg msg;
+		struct pollfd pollfd[2];
+		char tmp_chr;
+		int r;
+
+		pollfd[0].fd = uffd;
+		pollfd[0].events = POLLIN;
+		pollfd[1].fd = pipefd;
+		pollfd[1].events = POLLIN;
+
+		r = poll(pollfd, 2, -1);
+		switch (r) {
+		case -1:
+			pr_info("poll err");
+			continue;
+		case 0:
+			continue;
+		case 1:
+			break;
+		default:
+			pr_info("Polling uffd returned %d", r);
+			return NULL;
+		}
+
+		if (pollfd[0].revents & POLLERR) {
+			pr_info("uffd revents has POLLERR");
+			return NULL;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			r = read(pollfd[1].fd, &tmp_chr, 1);
+			TEST_ASSERT(r == 1,
+				    "Error reading pipefd in UFFD thread\n");
+			return NULL;
+		}
+
+		if (!(pollfd[0].revents & POLLIN))
+			continue;
+
+		r = read(uffd, &msg, sizeof(msg));
+		if (r == -1) {
+			if (errno == EAGAIN)
+				continue;
+			pr_info("Read of uffd got errno %d\n", errno);
+			return NULL;
+		}
+
+		if (r != sizeof(msg)) {
+			pr_info("Read on uffd returned unexpected size: %d bytes", r);
+			return NULL;
+		}
+
+		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
+			continue;
+
+		if (delay)
+			usleep(delay);
+		r = uffd_desc->handler(uffd_desc->uffd_mode, uffd, &msg);
+		if (r < 0)
+			return NULL;
+		pages++;
+	}
+
+	ts_diff = timespec_elapsed(start);
+	PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n",
+		       pages, ts_diff.tv_sec, ts_diff.tv_nsec,
+		       pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
+
+	return NULL;
+}
+
+struct uffd_desc *uffd_setup_demand_paging(int uffd_mode,
+		useconds_t uffd_delay, void *hva, uint64_t len,
+		uffd_handler_t handler)
+{
+	struct uffd_desc *uffd_desc;
+	bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR);
+	int uffd;
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+	uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY;
+	int ret;
+
+	PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n",
+		       is_minor ? "MINOR" : "MISSING",
+		       is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY");
+
+	uffd_desc = malloc(sizeof(struct uffd_desc));
+	TEST_ASSERT(uffd_desc, "malloc failed");
+
+	/* In order to get minor faults, prefault via the alias. */
+	if (is_minor)
+		expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno);
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1,
+		    "ioctl UFFDIO_API failed: %" PRIu64,
+		    (uint64_t)uffdio_api.api);
+
+	uffdio_register.range.start = (uint64_t)hva;
+	uffdio_register.range.len = len;
+	uffdio_register.mode = uffd_mode;
+	TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1,
+		    "ioctl UFFDIO_REGISTER failed");
+	TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) ==
+		    expected_ioctls, "missing userfaultfd ioctls");
+
+	ret = pipe2(uffd_desc->pipefds, O_CLOEXEC | O_NONBLOCK);
+	TEST_ASSERT(!ret, "Failed to set up pipefd");
+
+	uffd_desc->uffd_mode = uffd_mode;
+	uffd_desc->uffd = uffd;
+	uffd_desc->delay = uffd_delay;
+	uffd_desc->handler = handler;
+	pthread_create(&uffd_desc->thread, NULL, uffd_handler_thread_fn,
+		       uffd_desc);
+
+	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
+		       hva, hva + len);
+
+	return uffd_desc;
+}
+
+void uffd_stop_demand_paging(struct uffd_desc *uffd)
+{
+	char c = 0;
+	int ret;
+
+	ret = write(uffd->pipefds[1], &c, 1);
+	TEST_ASSERT(ret == 1, "Unable to write to pipefd");
+
+	ret = pthread_join(uffd->thread, NULL);
+	TEST_ASSERT(ret == 0, "Pthread_join failed.");
+
+	close(uffd->uffd);
+
+	close(uffd->pipefds[1]);
+	close(uffd->pipefds[0]);
+
+	free(uffd);
+}
+
+#endif /* __NR_userfaultfd */
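
For context, a minimal sketch of how a test is expected to drive the new
library (the handler below is hypothetical; the signatures are the ones
declared in userfaultfd_util.h above):

	static int my_handler(int uffd_mode, int uffd, struct uffd_msg *msg)
	{
		/* Resolve msg->arg.pagefault.address, e.g., with a
		 * UFFDIO_COPY ioctl; return < 0 to stop the thread. */
		return 0;
	}

	struct uffd_desc *desc;

	/* Starts one handler thread covering [hva, hva + len). */
	desc = uffd_setup_demand_paging(UFFDIO_REGISTER_MODE_MISSING,
					0 /* no delay */, hva, len,
					&my_handler);
	/* ... run vCPUs, trigger faults ... */
	uffd_stop_demand_paging(desc);	/* signal the pipe, join, free */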
From patchwork Tue Oct 11 01:06:16 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:16 +0000
Subject: [PATCH v9 02/14] KVM: selftests: aarch64: Add virt_get_pte_hva() library function
Message-ID: <20221011010628.1734342-3-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>

Add a library function to get the PTE (a host virtual address) of a
given GVA. This will be used in a future commit by a test to clear and
check the access flag of a particular page.
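
To make the intended use concrete, here is a hedged sketch of the kind
of check the future test could do. The PTE_AF definition (the access
flag is bit 10 of an arm64 stage-1 descriptor) is an assumption for
illustration, not something this patch adds:

	/* Illustrative only; not part of this patch. */
	#define PTE_AF	(1ULL << 10)

	uint64_t *ptep = virt_get_pte_hva(vm, gva);

	*ptep &= ~PTE_AF;		/* clear the access flag */
	/* ... let the guest access the page ... */
	TEST_ASSERT(*ptep & PTE_AF, "AF should be set on access");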
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 .../selftests/kvm/include/aarch64/processor.h       |  2 ++
 tools/testing/selftests/kvm/lib/aarch64/processor.c | 13 ++++++++++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index a8124f9dd68a..df4bfac69551 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -109,6 +109,8 @@ void vm_install_exception_handler(struct kvm_vm *vm,
 void vm_install_sync_handler(struct kvm_vm *vm,
 		int vector, int ec, handler_fn handler);
 
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva);
+
 static inline void cpu_relax(void)
 {
 	asm volatile("yield" ::: "memory");
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 6f5551368944..63ef3c78e55e 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -138,7 +138,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	_virt_pg_map(vm, vaddr, paddr, attr_idx);
 }
 
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	uint64_t *ptep;
 
@@ -169,11 +169,18 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 		TEST_FAIL("Page table levels must be 2, 3, or 4");
 	}
 
-	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
+	return ptep;
 
 unmapped_gva:
 	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
-	exit(1);
+	exit(EXIT_FAILURE);
+}
+
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	uint64_t *ptep = virt_get_pte_hva(vm, gva);
+
+	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
 }
 
 static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)

From patchwork Tue Oct 11 01:06:17 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:17 +0000
Subject: [PATCH v9 03/14] KVM: selftests: Add missing close and munmap in __vm_mem_region_delete()
Message-ID: <20221011010628.1734342-4-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>

Deleting a memslot (when freeing a VM) is not closing the backing fd,
nor is it unmapping the alias mapping. Fix by adding the missing close
and munmap.

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f1cb1627161f..19e37fb7de7c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -586,6 +586,12 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 	sparsebit_free(&region->unused_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
+	if (region->fd >= 0) {
+		/* There's an extra map when using shared memory. */
+		ret = munmap(region->mmap_alias, region->mmap_size);
+		TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
+		close(region->fd);
+	}
 
 	free(region);
 }

From patchwork Tue Oct 11 01:06:18 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:18 +0000
Subject: [PATCH v9 04/14] KVM: selftests: aarch64: Construct DEFAULT_MAIR_EL1 using sysreg.h macros
Message-ID: <20221011010628.1734342-5-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>

Define macros for memory type indexes and construct DEFAULT_MAIR_EL1
with macros from asm/sysreg.h. The index macros can then be used when
constructing PTEs (instead of using raw numbers).

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 .../selftests/kvm/include/aarch64/processor.h | 25 ++++++++++++++-----
 .../selftests/kvm/lib/aarch64/processor.c     |  2 +-
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index df4bfac69551..c1ddca8db225 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -38,12 +38,25 @@
  * NORMAL              4     1111:1111
  * NORMAL_WT           5     1011:1011
  */
-#define DEFAULT_MAIR_EL1 ((0x00ul << (0 * 8)) | \
-			  (0x04ul << (1 * 8)) | \
-			  (0x0cul << (2 * 8)) | \
-			  (0x44ul << (3 * 8)) | \
-			  (0xfful << (4 * 8)) | \
-			  (0xbbul << (5 * 8)))
+
+/* Linux doesn't use these memory types, so let's define them. */
+#define MAIR_ATTR_DEVICE_GRE	UL(0x0c)
+#define MAIR_ATTR_NORMAL_WT	UL(0xbb)
+
+#define MT_DEVICE_nGnRnE	0
+#define MT_DEVICE_nGnRE		1
+#define MT_DEVICE_GRE		2
+#define MT_NORMAL_NC		3
+#define MT_NORMAL		4
+#define MT_NORMAL_WT		5
+
+#define DEFAULT_MAIR_EL1						\
+	(MAIR_ATTRIDX(MAIR_ATTR_DEVICE_nGnRnE, MT_DEVICE_nGnRnE) |	\
+	 MAIR_ATTRIDX(MAIR_ATTR_DEVICE_nGnRE, MT_DEVICE_nGnRE) |	\
+	 MAIR_ATTRIDX(MAIR_ATTR_DEVICE_GRE, MT_DEVICE_GRE) |		\
+	 MAIR_ATTRIDX(MAIR_ATTR_NORMAL_NC, MT_NORMAL_NC) |		\
+	 MAIR_ATTRIDX(MAIR_ATTR_NORMAL, MT_NORMAL) |			\
+	 MAIR_ATTRIDX(MAIR_ATTR_NORMAL_WT, MT_NORMAL_WT))
 
 #define MPIDR_HWID_BITMASK (0xff00fffffful)
 
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 63ef3c78e55e..26f0eccff6fe 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -133,7 +133,7 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
-	uint64_t attr_idx = 4; /* NORMAL (See DEFAULT_MAIR_EL1) */
+	uint64_t attr_idx = MT_NORMAL;
 
 	_virt_pg_map(vm, vaddr, paddr, attr_idx);
 }
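
As a sanity check on the rewrite: MAIR_ATTRIDX(attr, idx) from
asm/sysreg.h places the 8-bit attribute at byte idx of MAIR_EL1, so the
macro form expands to the same value as the old literal definition. A
hedged compile-time check a test could carry (the _Static_assert itself
is illustrative, not part of the patch):

	/* MAIR_ATTRIDX(attr, idx) == ((attr) << ((idx) * 8)); e.g., the
	 * NORMAL attribute 0xff lands in byte 4, matching the old
	 * (0xfful << (4 * 8)) term. */
	_Static_assert(DEFAULT_MAIR_EL1 ==
		       ((0x00ul << (0 * 8)) | (0x04ul << (1 * 8)) |
			(0x0cul << (2 * 8)) | (0x44ul << (3 * 8)) |
			(0xfful << (4 * 8)) | (0xbbul << (5 * 8))),
		       "DEFAULT_MAIR_EL1 changed by the macro rewrite");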
From patchwork Tue Oct 11 01:06:19 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:19 +0000
Subject: [PATCH v9 05/14] tools: Copy bitfield.h from the kernel sources
Message-ID: <20221011010628.1734342-6-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>

Copy bitfield.h from include/linux/bitfield.h. A subsequent change will
make use of some FIELD_{GET,PREP} macros defined in this header. The
header was copied as-is, no changes needed.
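
For readers unfamiliar with these helpers, a small usage sketch (the
register layout below is made up for illustration):

	#include <linux/bitfield.h>

	#define ID_FIELD	GENMASK(7, 4)	/* hypothetical 4-bit field */

	uint32_t reg = 0;
	uint32_t id;

	reg |= FIELD_PREP(ID_FIELD, 0x9);	/* reg == 0x90 */
	id = FIELD_GET(ID_FIELD, reg);		/* id  == 0x9  */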
Cc: Jakub Kicinski
Cc: Arnaldo Carvalho de Melo
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 tools/include/linux/bitfield.h | 176 +++++++++++++++++++++++++++++++++
 1 file changed, 176 insertions(+)
 create mode 100644 tools/include/linux/bitfield.h

diff --git a/tools/include/linux/bitfield.h b/tools/include/linux/bitfield.h
new file mode 100644
index 000000000000..6093fa6db260
--- /dev/null
+++ b/tools/include/linux/bitfield.h
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2014 Felix Fietkau
+ * Copyright (C) 2004 - 2009 Ivo van Doorn
+ */
+
+#ifndef _LINUX_BITFIELD_H
+#define _LINUX_BITFIELD_H
+
+#include <linux/build_bug.h>
+#include <asm/byteorder.h>
+
+/*
+ * Bitfield access macros
+ *
+ * FIELD_{GET,PREP} macros take as first parameter shifted mask
+ * from which they extract the base mask and shift amount.
+ * Mask must be a compilation time constant.
+ *
+ * Example:
+ *
+ *  #define REG_FIELD_A  GENMASK(6, 0)
+ *  #define REG_FIELD_B  BIT(7)
+ *  #define REG_FIELD_C  GENMASK(15, 8)
+ *  #define REG_FIELD_D  GENMASK(31, 16)
+ *
+ * Get:
+ *  a = FIELD_GET(REG_FIELD_A, reg);
+ *  b = FIELD_GET(REG_FIELD_B, reg);
+ *
+ * Set:
+ *  reg = FIELD_PREP(REG_FIELD_A, 1) |
+ *	  FIELD_PREP(REG_FIELD_B, 0) |
+ *	  FIELD_PREP(REG_FIELD_C, c) |
+ *	  FIELD_PREP(REG_FIELD_D, 0x40);
+ *
+ * Modify:
+ *  reg &= ~REG_FIELD_C;
+ *  reg |= FIELD_PREP(REG_FIELD_C, c);
+ */
+
+#define __bf_shf(x) (__builtin_ffsll(x) - 1)
+
+#define __scalar_type_to_unsigned_cases(type)				\
+		unsigned type:	(unsigned type)0,			\
+		signed type:	(unsigned type)0
+
+#define __unsigned_scalar_typeof(x) typeof(				\
+		_Generic((x),						\
+			char:	(unsigned char)0,			\
+			__scalar_type_to_unsigned_cases(char),		\
+			__scalar_type_to_unsigned_cases(short),		\
+			__scalar_type_to_unsigned_cases(int),		\
+			__scalar_type_to_unsigned_cases(long),		\
+			__scalar_type_to_unsigned_cases(long long),	\
+			default: (x)))
+
+#define __bf_cast_unsigned(type, x)	((__unsigned_scalar_typeof(type))(x))
+
+#define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx)			\
+	({								\
+		BUILD_BUG_ON_MSG(!__builtin_constant_p(_mask),		\
+				 _pfx "mask is not constant");		\
+		BUILD_BUG_ON_MSG((_mask) == 0, _pfx "mask is zero");	\
+		BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ?		\
+				 ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \
+				 _pfx "value too large for the field"); \
+		BUILD_BUG_ON_MSG(__bf_cast_unsigned(_mask, _mask) >	\
+				 __bf_cast_unsigned(_reg, ~0ull),	\
+				 _pfx "type of reg too small for mask"); \
+		__BUILD_BUG_ON_NOT_POWER_OF_2((_mask) +			\
+					      (1ULL << __bf_shf(_mask))); \
+	})
+
+/**
+ * FIELD_MAX() - produce the maximum value representable by a field
+ * @_mask: shifted mask defining the field's length and position
+ *
+ * FIELD_MAX() returns the maximum value that can be held in the field
+ * specified by @_mask.
+ */
+#define FIELD_MAX(_mask)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_MAX: ");	\
+		(typeof(_mask))((_mask) >> __bf_shf(_mask));		\
+	})
+
+/**
+ * FIELD_FIT() - check if value fits in the field
+ * @_mask: shifted mask defining the field's length and position
+ * @_val: value to test against the field
+ *
+ * Return: true if @_val can fit inside @_mask, false if @_val is too big.
+ */
+#define FIELD_FIT(_mask, _val)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: ");	\
+		!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
+	})
+
+/**
+ * FIELD_PREP() - prepare a bitfield element
+ * @_mask: shifted mask defining the field's length and position
+ * @_val: value to put in the field
+ *
+ * FIELD_PREP() masks and shifts up the value.
+ * The result should
+ * be combined with other fields of the bitfield using logical OR.
+ */
+#define FIELD_PREP(_mask, _val)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: ");	\
+		((typeof(_mask))(_val) << __bf_shf(_mask)) & (_mask);	\
+	})
+
+/**
+ * FIELD_GET() - extract a bitfield element
+ * @_mask: shifted mask defining the field's length and position
+ * @_reg:  value of entire bitfield
+ *
+ * FIELD_GET() extracts the field specified by @_mask from the
+ * bitfield passed in as @_reg by masking and shifting it down.
+ */
+#define FIELD_GET(_mask, _reg)						\
+	({								\
+		__BF_FIELD_CHECK(_mask, _reg, 0U, "FIELD_GET: ");	\
+		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask)); \
+	})
+
+extern void __compiletime_error("value doesn't fit into mask")
+__field_overflow(void);
+extern void __compiletime_error("bad bitfield mask")
+__bad_mask(void);
+static __always_inline u64 field_multiplier(u64 field)
+{
+	if ((field | (field - 1)) & ((field | (field - 1)) + 1))
+		__bad_mask();
+	return field & -field;
+}
+static __always_inline u64 field_mask(u64 field)
+{
+	return field / field_multiplier(field);
+}
+#define field_max(field)	((typeof(field))field_mask(field))
+#define ____MAKE_OP(type,base,to,from)					\
+static __always_inline __##type type##_encode_bits(base v, base field)	\
+{									\
+	if (__builtin_constant_p(v) && (v & ~field_mask(field)))	\
+		__field_overflow();					\
+	return to((v & field_mask(field)) * field_multiplier(field));	\
+}									\
+static __always_inline __##type type##_replace_bits(__##type old,	\
+					base val, base field)		\
+{									\
+	return (old & ~to(field)) | type##_encode_bits(val, field);	\
+}									\
+static __always_inline void type##p_replace_bits(__##type *p,		\
+					base val, base field)		\
+{									\
+	*p = (*p & ~to(field)) | type##_encode_bits(val, field);	\
+}									\
+static __always_inline base type##_get_bits(__##type v, base field)	\
+{									\
+	return (from(v) & field)/field_multiplier(field);		\
+}
+#define __MAKE_OP(size)							\
+	____MAKE_OP(le##size,u##size,cpu_to_le##size,le##size##_to_cpu)	\
+	____MAKE_OP(be##size,u##size,cpu_to_be##size,be##size##_to_cpu)	\
+	____MAKE_OP(u##size,u##size,,)
+____MAKE_OP(u8,u8,,)
+__MAKE_OP(16)
+__MAKE_OP(32)
+__MAKE_OP(64)
+#undef __MAKE_OP
+#undef ____MAKE_OP
+
+#endif
From patchwork Tue Oct 11 01:06:20 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:20 +0000
Subject: [PATCH v9 06/14] KVM: selftests: Stash backing_src_type in struct userspace_mem_region
Message-ID: <20221011010628.1734342-7-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>

Add the backing_src_type into struct userspace_mem_region. This struct
already stores a lot of info about memory regions, except the backing
source type. This info will be used by a future commit in order to
determine the method for punching a hole.
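
To see why the stash matters, here is a hedged sketch of the kind of
decision the future commit can make (the helper name and the exact
madvise/fallocate split are assumptions for illustration):

	/* Hypothetical helper, not part of this patch. */
	static void punch_hole(struct userspace_mem_region *region,
			       uint64_t offset, uint64_t len)
	{
		if (backing_src_is_shared(region->backing_src_type))
			/* fd-backed memory: punch a hole in the file. */
			TEST_ASSERT(!fallocate(region->fd,
					FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
					region->offset + offset, len),
				    "fallocate() failed");
		else
			/* Anonymous memory: drop the pages instead. */
			TEST_ASSERT(!madvise(region->host_mem + offset, len,
					     MADV_DONTNEED),
				    "madvise(MADV_DONTNEED) failed");
	}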
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c          | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index e42a09cd24a0..a9264ed22cca 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -34,6 +34,7 @@ struct userspace_mem_region {
 	struct sparsebit *unused_phy_pages;
 	int fd;
 	off_t offset;
+	enum vm_mem_backing_src_type backing_src_type;
 	void *host_mem;
 	void *host_alias;
 	void *mmap_start;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 19e37fb7de7c..6affce47e899 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -929,6 +929,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			    vm_mem_backing_src_alias(src_type)->name);
 	}
 
+	region->backing_src_type = src_type;
 	region->unused_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 		guest_paddr >> vm->page_shift, npages);

From patchwork Tue Oct 11 01:06:21 2022
From: Ricardo Koller <ricarkol@google.com>
Date: Tue, 11 Oct 2022 01:06:21 +0000
Subject: [PATCH v9 07/14] KVM: selftests: Add vm->memslots[] and enum kvm_mem_region_type
Message-ID: <20221011010628.1734342-8-ricarkol@google.com>
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>

The vm_create() helpers are hardcoded to place most page types (code,
page-tables, stacks, etc) in the same memslot #0, and always backed with
anonymous 4K. There are a couple of issues with that. First, tests
willing to differ a bit, like placing page-tables in a different backing
source type, must replicate much of what's already done by the
vm_create() functions. Second, the hardcoded assumption of memslot #0
holding most things is spread everywhere; this makes it very hard to
change.

Fix the above issues by having selftests specify how they want memory to
be laid out. Start by changing ____vm_create() to not create memslot #0;
a test (to come) will specify all memslots used by the VM. Then, add the
vm->memslots[] array to specify the right memslot for different memory
allocators, e.g., lib/elf should use the vm->memslots[MEM_REGION_CODE]
memslot. This will be used as a way to specify the page-tables memslots
(to be backed by huge pages, for example).

There is no functional change intended. The current commit lays out
memory exactly as before. A future commit will change the allocators to
get the region they should be using, e.g., the page table allocators
using the pt memslot.
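
A hedged sketch of what that follow-up change could look like for one
allocator (the function body below is illustrative; only
vm_get_mem_region() and the enum come from this patch):

	vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
	{
		/* Allocate page-table pages from the memslot registered
		 * for MEM_REGION_PT instead of hardcoded memslot 0. */
		struct userspace_mem_region *region =
			vm_get_mem_region(vm, MEM_REGION_PT);

		return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
					 region->region.slot);
	}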
Cc: Sean Christopherson <seanjc@google.com>
Cc: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Reviewed-by: Sean Christopherson <seanjc@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h  | 26 +++++++++++++++++--
 tools/testing/selftests/kvm/lib/kvm_util.c | 18 +++++++------
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index a9264ed22cca..6442aa9e9061 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -65,6 +65,14 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+enum kvm_mem_region_type {
+	MEM_REGION_CODE,
+	MEM_REGION_DATA,
+	MEM_REGION_PT,
+	MEM_REGION_TEST_DATA,
+	NR_MEM_REGIONS,
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -93,6 +101,13 @@ struct kvm_vm {
 	int stats_fd;
 	struct kvm_stats_header stats_header;
 	struct kvm_stats_desc *stats_desc;
+
+	/*
+	 * KVM region slots. These are the default memslots used by page
+	 * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
+	 * memslot.
+	 */
+	uint32_t memslots[NR_MEM_REGIONS];
 };
 
 
@@ -105,6 +120,13 @@ struct kvm_vm {
 struct userspace_mem_region *
 memslot2region(struct kvm_vm *vm, uint32_t memslot);
 
+static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm,
+					enum kvm_mem_region_type type)
+{
+	assert(type < NR_MEM_REGIONS);
+	return memslot2region(vm, vm->memslots[type]);
+}
+
 /* Minimum allocated guest virtual and physical addresses */
 #define KVM_UTIL_MIN_VADDR 0x2000
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
@@ -647,13 +669,13 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
  * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
  * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
  */
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
+struct kvm_vm *____vm_create(enum vm_guest_mode mode);
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages);
 
 static inline struct kvm_vm *vm_create_barebones(void)
 {
-	return ____vm_create(VM_MODE_DEFAULT, 0);
+	return ____vm_create(VM_MODE_DEFAULT);
 }
 
 static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 6affce47e899..f3dfa4e9ee0f 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -185,13 +185,10 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
+struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
 
-	pr_debug("%s: mode='%s' pages='%ld'\n", __func__,
-		 vm_guest_mode_string(mode), nr_pages);
-
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
@@ -287,9 +284,6 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	/* Allocate and setup memory for guest. */
*/ vm->vpages_mapped = sparsebit_alloc(); - if (nr_pages != 0) - vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, - 0, 0, nr_pages, 0); return vm; } @@ -335,8 +329,16 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus, uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus, nr_extra_pages); struct kvm_vm *vm; + int i; + + pr_debug("%s: mode='%s' pages='%ld'\n", __func__, + vm_guest_mode_string(mode), nr_pages); + + vm = ____vm_create(mode); - vm = ____vm_create(mode, nr_pages); + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, 0); + for (i = 0; i < NR_MEM_REGIONS; i++) + vm->memslots[i] = 0; kvm_vm_elf_load(vm, program_invocation_name);

From patchwork Tue Oct 11 01:06:22 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13003500
Date: Tue, 11 Oct 2022 01:06:22 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
References: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-9-ricarkol@google.com>
Subject: [PATCH v9 08/14] KVM: selftests: Fix alignment in virt_arch_pgd_alloc() and vm_vaddr_alloc()
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Refactor virt_arch_pgd_alloc() and vm_vaddr_alloc() in both RISC-V and aarch64 to fix the alignment of parameters in a couple of calls. This will make it easier to fix the alignment in a future commit that adds an extra (and rather long) parameter. No functional change intended.

Suggested-by: Sean Christopherson Signed-off-by: Ricardo Koller Reviewed-by: Andrew Jones --- .../selftests/kvm/lib/aarch64/processor.c | 27 ++++++++++--------- .../selftests/kvm/lib/riscv/processor.c | 27 ++++++++++--------- 2 files changed, 30 insertions(+), 24 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index 26f0eccff6fe..6ff2b9d6cea6 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -76,13 +76,14 @@ static uint64_t __maybe_unused ptrs_per_pte(struct kvm_vm *vm) void virt_arch_pgd_alloc(struct kvm_vm *vm) { - if (!vm->pgd_created) { - vm_paddr_t paddr = vm_phy_pages_alloc(vm, - page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); - vm->pgd = paddr; - vm->pgd_created = true; - } + size_t nr_pages = page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size; + + if (vm->pgd_created) + return; + + vm->pgd = vm_phy_pages_alloc(vm, nr_pages, + KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + vm->pgd_created = true; } static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, @@ -325,13 +326,15 @@ void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent) struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, struct kvm_vcpu_init *init, void *guest_code) { - size_t stack_size = vm->page_size == 4096 ? - DEFAULT_STACK_PGS * vm->page_size : - vm->page_size; - uint64_t stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_ARM64_GUEST_STACK_VADDR_MIN); + size_t stack_size; + uint64_t stack_vaddr; struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, vcpu_id); + stack_size = vm->page_size == 4096 ?
DEFAULT_STACK_PGS * vm->page_size : + vm->page_size; + stack_vaddr = vm_vaddr_alloc(vm, stack_size, + DEFAULT_ARM64_GUEST_STACK_VADDR_MIN); + aarch64_vcpu_setup(vcpu, init); vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), stack_vaddr + stack_size); diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index 604478151212..ac7fc9d317db 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -55,13 +55,14 @@ static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level) void virt_arch_pgd_alloc(struct kvm_vm *vm) { - if (!vm->pgd_created) { - vm_paddr_t paddr = vm_phy_pages_alloc(vm, - page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); - vm->pgd = paddr; - vm->pgd_created = true; - } + size_t nr_pages = page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size; + + if (vm->pgd_created) + return; + + vm->pgd = vm_phy_pages_alloc(vm, nr_pages, + KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + vm->pgd_created = true; } void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) @@ -279,15 +280,17 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code) { int r; - size_t stack_size = vm->page_size == 4096 ? - DEFAULT_STACK_PGS * vm->page_size : - vm->page_size; - unsigned long stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_RISCV_GUEST_STACK_VADDR_MIN); + size_t stack_size; + unsigned long stack_vaddr; unsigned long current_gp = 0; struct kvm_mp_state mps; struct kvm_vcpu *vcpu; + stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size : + vm->page_size; + stack_vaddr = vm_vaddr_alloc(vm, stack_size, + DEFAULT_RISCV_GUEST_STACK_VADDR_MIN); + vcpu = __vm_vcpu_add(vm, vcpu_id); riscv_vcpu_mmu_setup(vcpu);

From patchwork Tue Oct 11 01:06:23 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13003503
Date: Tue, 11 Oct 2022 01:06:23 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
References: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-10-ricarkol@google.com>
Subject: [PATCH v9 09/14] KVM: selftests: Use the right memslot for code, page-tables, and data allocations
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Now that kvm_vm allows specifying different memslots for code, page tables, and data, use the appropriate memslot when making allocations in common/library code. Change them accordingly:

- code (allocated by lib/elf) uses the CODE memslot
- stacks, exception tables, and other core data pages (like the TSS in x86) use the DATA memslot
- page tables and the PGD use the PT memslot
- test data (anything allocated with vm_vaddr_alloc()) uses the TEST_DATA memslot

No functional change intended. All allocators keep using memslot #0.
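For illustration only (a hypothetical caller, not code from this patch; `vm' and `guest_stack_size' are assumed to exist), the resulting allocation pattern looks like:

	/* Stacks and other core data pages come from the DATA region... */
	vm_vaddr_t stack_vaddr = __vm_vaddr_alloc(vm, guest_stack_size,
						  DEFAULT_GUEST_STACK_VADDR_MIN,
						  MEM_REGION_DATA);

	/* ...while plain vm_vaddr_alloc() callers implicitly get TEST_DATA. */
	vm_vaddr_t data_vaddr = vm_vaddr_alloc(vm, getpagesize(),
					       KVM_UTIL_MIN_VADDR);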
Cc: Sean Christopherson Cc: Andrew Jones Signed-off-by: Ricardo Koller Reviewed-by: Sean Christopherson Reviewed-by: Andrew Jones --- .../selftests/kvm/include/kvm_util_base.h | 4 ++ .../selftests/kvm/lib/aarch64/processor.c | 12 ++-- tools/testing/selftests/kvm/lib/elf.c | 3 +- tools/testing/selftests/kvm/lib/kvm_util.c | 57 ++++++++++++------- .../selftests/kvm/lib/riscv/processor.c | 8 ++- .../selftests/kvm/lib/s390x/processor.c | 8 ++- .../selftests/kvm/lib/x86_64/processor.c | 13 +++-- 7 files changed, 65 insertions(+), 40 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 6442aa9e9061..b0da75af1ff3 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -407,7 +407,11 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); +vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + enum kvm_mem_region_type type); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); +vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, + enum kvm_mem_region_type type); vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm); void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index 6ff2b9d6cea6..2883dfd1ad49 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -82,7 +82,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) return; vm->pgd = vm_phy_pages_alloc(vm, nr_pages, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); vm->pgd_created = true; } @@ -332,8 +333,9 @@ struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, stack_size = vm->page_size == 4096 ? 
DEFAULT_STACK_PGS * vm->page_size : vm->page_size; - stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_ARM64_GUEST_STACK_VADDR_MIN); + stack_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_ARM64_GUEST_STACK_VADDR_MIN, + MEM_REGION_DATA); aarch64_vcpu_setup(vcpu, init); @@ -438,8 +440,8 @@ void route_exception(struct ex_regs *regs, int vector) void vm_init_descriptor_tables(struct kvm_vm *vm) { - vm->handlers = vm_vaddr_alloc(vm, sizeof(struct handlers), - vm->page_size); + vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers), + vm->page_size, MEM_REGION_DATA); *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; } diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c index 9f54c098d9d0..51f280c412ba 100644 --- a/tools/testing/selftests/kvm/lib/elf.c +++ b/tools/testing/selftests/kvm/lib/elf.c @@ -161,7 +161,8 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename) seg_vend |= vm->page_size - 1; size_t seg_size = seg_vend - seg_vstart + 1; - vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart); + vm_vaddr_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart, + MEM_REGION_CODE); TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate " "virtual memory for segment at requested min addr,\n" " segment idx: %u\n" diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index f3dfa4e9ee0f..5ad4acaec8e0 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1226,32 +1226,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, return pgidx_start * vm->page_size; } -/* - * VM Virtual Address Allocate - * - * Input Args: - * vm - Virtual Machine - * sz - Size in bytes - * vaddr_min - Minimum starting virtual address - * - * Output Args: None - * - * Return: - * Starting guest virtual address - * - * Allocates at least sz bytes within the virtual address space of the vm - * given by vm. The allocated bytes are mapped to a virtual address >= - * the address given by vaddr_min. Note that each allocation uses a - * a unique set of pages, with the minimum real allocation being at least - * a page. - */ -vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) +vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + enum kvm_mem_region_type type) { uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0); virt_pgd_alloc(vm); vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages, - KVM_UTIL_MIN_PFN * vm->page_size, 0); + KVM_UTIL_MIN_PFN * vm->page_size, + vm->memslots[type]); /* * Find an unused range of virtual page addresses of at least @@ -1272,6 +1255,30 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) return vaddr_start; } +/* + * VM Virtual Address Allocate + * + * Input Args: + * vm - Virtual Machine + * sz - Size in bytes + * vaddr_min - Minimum starting virtual address + * + * Output Args: None + * + * Return: + * Starting guest virtual address + * + * Allocates at least sz bytes within the virtual address space of the vm + * given by vm. The allocated bytes are mapped to a virtual address >= + * the address given by vaddr_min. Note that each allocation uses a + * a unique set of pages, with the minimum real allocation being at least + * a page. The allocated physical space comes from the TEST_DATA memory region. 
+ */ +vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) +{ + return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA); +} + /* * VM Virtual Address Allocate Pages * @@ -1291,6 +1298,11 @@ vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages) return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR); } +vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type) +{ + return __vm_vaddr_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type); +} + /* * VM Virtual Address Allocate Page * @@ -1856,7 +1868,8 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min, vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm) { - return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); } /* diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index ac7fc9d317db..d146ca71e0c0 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -61,7 +61,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) return; vm->pgd = vm_phy_pages_alloc(vm, nr_pages, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); vm->pgd_created = true; } @@ -288,8 +289,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size : vm->page_size; - stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_RISCV_GUEST_STACK_VADDR_MIN); + stack_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_RISCV_GUEST_STACK_VADDR_MIN, + MEM_REGION_DATA); vcpu = __vm_vcpu_add(vm, vcpu_id); riscv_vcpu_mmu_setup(vcpu); diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c index 89d7340d9cbd..15945121daf1 100644 --- a/tools/testing/selftests/kvm/lib/s390x/processor.c +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c @@ -21,7 +21,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm) return; paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION, - KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0); + KVM_GUEST_PAGE_TABLE_MIN_PADDR, + vm->memslots[MEM_REGION_PT]); memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size); vm->pgd = paddr; @@ -167,8 +168,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x", vm->page_size); - stack_vaddr = vm_vaddr_alloc(vm, stack_size, - DEFAULT_GUEST_STACK_VADDR_MIN); + stack_vaddr = __vm_vaddr_alloc(vm, stack_size, + DEFAULT_GUEST_STACK_VADDR_MIN, + MEM_REGION_DATA); vcpu = __vm_vcpu_add(vm, vcpu_id); diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 39c4409ef56a..b199dde90e9f 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -552,7 +552,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt) { if (!vm->gdt) - vm->gdt = vm_vaddr_alloc_page(vm); + vm->gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA); dt->base = vm->gdt; dt->limit = getpagesize(); @@ -562,7 +562,7 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp, int selector) { if (!vm->tss) - vm->tss = vm_vaddr_alloc_page(vm); + vm->tss = 
__vm_vaddr_alloc_page(vm, MEM_REGION_DATA); memset(segp, 0, sizeof(*segp)); segp->base = vm->tss; @@ -647,8 +647,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, vm_vaddr_t stack_vaddr; struct kvm_vcpu *vcpu; - stack_vaddr = vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(), - DEFAULT_GUEST_STACK_VADDR_MIN); + stack_vaddr = __vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(), + DEFAULT_GUEST_STACK_VADDR_MIN, + MEM_REGION_DATA); vcpu = __vm_vcpu_add(vm, vcpu_id); vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid()); @@ -1145,8 +1146,8 @@ void vm_init_descriptor_tables(struct kvm_vm *vm) extern void *idt_handlers; int i; - vm->idt = vm_vaddr_alloc_page(vm); - vm->handlers = vm_vaddr_alloc_page(vm); + vm->idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA); + vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA); /* Handlers have the same address in both address spaces.*/ for (i = 0; i < NUM_INTERRUPTS; i++) set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,

From patchwork Tue Oct 11 01:06:24 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13003502
Date: Tue, 11 Oct 2022 01:06:24 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
References: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-11-ricarkol@google.com>
Subject: [PATCH v9 10/14] KVM: selftests: aarch64: Add aarch64/page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Add a new test for stage 2 faults when using different combinations of guest accesses (e.g., write, S1PTW), backing source type (e.g., anon) and types of faults (e.g., read on hugetlbfs with a hole). The next commits will add different handling methods and more faults (e.g., uffd and dirty logging). This first commit starts by adding two sanity checks for all types of accesses: AF setting by the hw, and accessing memslots with holes.

Signed-off-by: Ricardo Koller --- tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/aarch64/page_fault_test.c | 596 ++++++++++++++++++ .../selftests/kvm/include/aarch64/processor.h | 8 + 4 files changed, 606 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/page_fault_test.c diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index 2f0d705db9db..4a30d684e208 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -4,6 +4,7 @@ /aarch64/debug-exceptions /aarch64/get-reg-list /aarch64/hypercalls +/aarch64/page_fault_test /aarch64/psci_test /aarch64/vcpu_width_config /aarch64/vgic_init diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 08a2606aff33..50c30335460f 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -153,6 +153,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list TEST_GEN_PROGS_aarch64 += aarch64/hypercalls +TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test TEST_GEN_PROGS_aarch64 += aarch64/psci_test TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config TEST_GEN_PROGS_aarch64 += aarch64/vgic_init diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c new file mode 100644 index 000000000000..8583be16e1fe --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -0,0 +1,596 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * page_fault_test.c - Test stage 2 faults.
+ * + * This test tries different combinations of guest accesses (e.g., write, + * S1PTW), backing source type (e.g., anon) and types of faults (e.g., read on + * hugetlbfs with a hole). It checks that the expected handling method is + * called (e.g., uffd faults with the right address and write/read flag). + */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include "guest_modes.h" +#include "userfaultfd_util.h" + +/* Guest virtual addresses that point to the test page and its PTE. */ +#define TEST_GVA 0xc0000000 +#define TEST_EXEC_GVA (TEST_GVA + 0x8) +#define TEST_PTE_GVA 0xb0000000 +#define TEST_DATA 0x0123456789ABCDEF + +static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA; + +#define CMD_NONE (0) +#define CMD_SKIP_TEST (1ULL << 1) +#define CMD_HOLE_PT (1ULL << 2) +#define CMD_HOLE_DATA (1ULL << 3) + +#define PREPARE_FN_NR 10 +#define CHECK_FN_NR 10 + +struct test_desc { + const char *name; + uint64_t mem_mark_cmd; + /* Skip the test if any prepare function returns false */ + bool (*guest_prepare[PREPARE_FN_NR])(void); + void (*guest_test)(void); + void (*guest_test_check[CHECK_FN_NR])(void); + void (*dabt_handler)(struct ex_regs *regs); + void (*iabt_handler)(struct ex_regs *regs); + uint32_t pt_memslot_flags; + uint32_t data_memslot_flags; + bool skip; +}; + +struct test_params { + enum vm_mem_backing_src_type src_type; + struct test_desc *test_desc; +}; + +static inline void flush_tlb_page(uint64_t vaddr) +{ + uint64_t page = vaddr >> 12; + + dsb(ishst); + asm volatile("tlbi vaae1is, %0" :: "r" (page)); + dsb(ish); + isb(); +} + +static void guest_write64(void) +{ + uint64_t val; + + WRITE_ONCE(*guest_test_memory, TEST_DATA); + val = READ_ONCE(*guest_test_memory); + GUEST_ASSERT_EQ(val, TEST_DATA); +} + +/* Check the system for atomic instructions. */ +static bool guest_check_lse(void) +{ + uint64_t isar0 = read_sysreg(id_aa64isar0_el1); + uint64_t atomic; + + atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS), isar0); + return atomic >= 2; +} + +static bool guest_check_dc_zva(void) +{ + uint64_t dczid = read_sysreg(dczid_el0); + uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_DZP), dczid); + + return dzp == 0; +} + +/* Compare and swap instruction. */ +static void guest_cas(void) +{ + uint64_t val; + + GUEST_ASSERT(guest_check_lse()); + asm volatile(".arch_extension lse\n" + "casal %0, %1, [%2]\n" + :: "r" (0), "r" (TEST_DATA), "r" (guest_test_memory)); + val = READ_ONCE(*guest_test_memory); + GUEST_ASSERT_EQ(val, TEST_DATA); +} + +static void guest_read64(void) +{ + uint64_t val; + + val = READ_ONCE(*guest_test_memory); + GUEST_ASSERT_EQ(val, 0); +} + +/* Address translation instruction */ +static void guest_at(void) +{ + uint64_t par; + + asm volatile("at s1e1r, %0" :: "r" (guest_test_memory)); + par = read_sysreg(par_el1); + isb(); + + /* Bit 1 indicates whether the AT was successful */ + GUEST_ASSERT_EQ(par & 1, 0); +} + +/* + * The size of the block written by "dc zva" is guaranteed to be between (2 << + * 0) and (2 << 9), which is safe in our case as we need the write to happen + * for at least a word, and not more than a page. + */ +static void guest_dc_zva(void) +{ + uint16_t val; + + asm volatile("dc zva, %0" :: "r" (guest_test_memory)); + dsb(ish); + val = READ_ONCE(*guest_test_memory); + GUEST_ASSERT_EQ(val, 0); +} + +/* + * Pre-indexing loads and stores don't have a valid syndrome (ESR_EL2.ISV==0). 
+ * And that's special because KVM must take special care with those: they + * should still count as accesses for dirty logging or user-faulting, but + * should be handled differently on mmio. + */ +static void guest_ld_preidx(void) +{ + uint64_t val; + uint64_t addr = TEST_GVA - 8; + + /* + * This ends up accessing "TEST_GVA + 8 - 8", where "TEST_GVA - 8" is + * in a gap between memslots not backed by anything. + */ + asm volatile("ldr %0, [%1, #8]!" + : "=r" (val), "+r" (addr)); + GUEST_ASSERT_EQ(val, 0); + GUEST_ASSERT_EQ(addr, TEST_GVA); +} + +static void guest_st_preidx(void) +{ + uint64_t val = TEST_DATA; + uint64_t addr = TEST_GVA - 8; + + asm volatile("str %0, [%1, #8]!" + : "+r" (val), "+r" (addr)); + + GUEST_ASSERT_EQ(addr, TEST_GVA); + val = READ_ONCE(*guest_test_memory); +} + +static bool guest_set_ha(void) +{ + uint64_t mmfr1 = read_sysreg(id_aa64mmfr1_el1); + uint64_t hadbs, tcr; + + /* Skip if HA is not supported. */ + hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS), mmfr1); + if (hadbs == 0) + return false; + + tcr = read_sysreg(tcr_el1) | TCR_EL1_HA; + write_sysreg(tcr, tcr_el1); + isb(); + + return true; +} + +static bool guest_clear_pte_af(void) +{ + *((uint64_t *)TEST_PTE_GVA) &= ~PTE_AF; + flush_tlb_page(TEST_GVA); + + return true; +} + +static void guest_check_pte_af(void) +{ + dsb(ish); + GUEST_ASSERT_EQ(*((uint64_t *)TEST_PTE_GVA) & PTE_AF, PTE_AF); +} + +static void guest_exec(void) +{ + int (*code)(void) = (int (*)(void))TEST_EXEC_GVA; + int ret; + + ret = code(); + GUEST_ASSERT_EQ(ret, 0x77); +} + +static bool guest_prepare(struct test_desc *test) +{ + bool (*prepare_fn)(void); + int i; + + for (i = 0; i < PREPARE_FN_NR; i++) { + prepare_fn = test->guest_prepare[i]; + if (prepare_fn && !prepare_fn()) + return false; + } + + return true; +} + +static void guest_test_check(struct test_desc *test) +{ + void (*check_fn)(void); + int i; + + for (i = 0; i < CHECK_FN_NR; i++) { + check_fn = test->guest_test_check[i]; + if (check_fn) + check_fn(); + } +} + +static void guest_code(struct test_desc *test) +{ + if (!guest_prepare(test)) + GUEST_SYNC(CMD_SKIP_TEST); + + GUEST_SYNC(test->mem_mark_cmd); + + if (test->guest_test) + test->guest_test(); + + guest_test_check(test); + GUEST_DONE(); +} + +static void no_dabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, read_sysreg(far_el1)); +} + +static void no_iabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, regs->pc); +} + +/* Returns true to continue the test, and false if it should be skipped. */ +static bool punch_hole_in_backing_store(struct kvm_vm *vm, + struct userspace_mem_region *region) +{ + void *hva = (void *)region->region.userspace_addr; + uint64_t paging_size = region->region.memory_size; + int ret, fd = region->fd; + + if (fd != -1) { + ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, paging_size); + TEST_ASSERT(ret == 0, "fallocate failed, errno: %d\n", errno); + } else { + ret = madvise(hva, paging_size, MADV_DONTNEED); + TEST_ASSERT(ret == 0, "madvise failed, errno: %d\n", errno); + } + + return true; +} + +/* Returns true to continue the test, and false if it should be skipped.
*/ +static bool handle_cmd(struct kvm_vm *vm, int cmd) +{ + struct userspace_mem_region *data_region, *pt_region; + bool continue_test = true; + + data_region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA); + pt_region = vm_get_mem_region(vm, MEM_REGION_PT); + + if (cmd == CMD_SKIP_TEST) + continue_test = false; + + if (cmd & CMD_HOLE_PT) + continue_test = punch_hole_in_backing_store(vm, pt_region); + if (cmd & CMD_HOLE_DATA) + continue_test = punch_hole_in_backing_store(vm, data_region); + + return continue_test; +} + +typedef uint32_t aarch64_insn_t; +extern aarch64_insn_t __exec_test[2]; + +noinline void __return_0x77(void) +{ + asm volatile("__exec_test: mov x0, #0x77\n" + "ret\n"); +} + +/* + * Note that this function runs on the host before the test VM starts: there's + * no need to sync the D$ and I$ caches. + */ +static void load_exec_code_for_test(struct kvm_vm *vm) +{ + uint64_t *code; + struct userspace_mem_region *region; + void *hva; + + region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA); + hva = (void *)region->region.userspace_addr; + + assert(TEST_EXEC_GVA > TEST_GVA); + code = hva + TEST_EXEC_GVA - TEST_GVA; + memcpy(code, __exec_test, sizeof(__exec_test)); +} + +static void setup_abort_handlers(struct kvm_vm *vm, struct kvm_vcpu *vcpu, + struct test_desc *test) +{ + vm_init_descriptor_tables(vm); + vcpu_init_descriptor_tables(vcpu); + + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + ESR_EC_DABT, no_dabt_handler); + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + ESR_EC_IABT, no_iabt_handler); +} + +static void setup_gva_maps(struct kvm_vm *vm) +{ + struct userspace_mem_region *region; + uint64_t pte_gpa; + + region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA); + /* Map TEST_GVA first. This will install a new PTE. */ + virt_pg_map(vm, TEST_GVA, region->region.guest_phys_addr); + /* Then map TEST_PTE_GVA to the above PTE. */ + pte_gpa = addr_hva2gpa(vm, virt_get_pte_hva(vm, TEST_GVA)); + virt_pg_map(vm, TEST_PTE_GVA, pte_gpa); +} + +enum pf_test_memslots { + CODE_AND_DATA_MEMSLOT, + PAGE_TABLE_MEMSLOT, + TEST_DATA_MEMSLOT, +}; + +/* + * Create a memslot for code and data at pfn=0, and test-data and PT ones + * at max_gfn. + */ +static void setup_memslots(struct kvm_vm *vm, struct test_params *p) +{ + uint64_t backing_src_pagesz = get_backing_src_pagesz(p->src_type); + uint64_t guest_page_size = vm->page_size; + uint64_t max_gfn = vm_compute_max_gfn(vm); + /* Enough for 2M of code when using 4K guest pages. */ + uint64_t code_npages = 512; + uint64_t pt_size, data_size, data_gpa; + + /* + * This test requires 1 pgd, 2 pud, 4 pmd, and 6 pte pages when using + * VM_MODE_P48V48_4K. Note that the .text takes ~1.6MBs. That's 13 + * pages. VM_MODE_P48V48_4K is the mode with most PT pages; let's use + * twice that just in case. 
+ */ + pt_size = 26 * guest_page_size; + + /* memslot sizes and gpa's must be aligned to the backing page size */ + pt_size = align_up(pt_size, backing_src_pagesz); + data_size = align_up(guest_page_size, backing_src_pagesz); + data_gpa = (max_gfn * guest_page_size) - data_size; + data_gpa = align_down(data_gpa, backing_src_pagesz); + + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, + CODE_AND_DATA_MEMSLOT, code_npages, 0); + vm->memslots[MEM_REGION_CODE] = CODE_AND_DATA_MEMSLOT; + vm->memslots[MEM_REGION_DATA] = CODE_AND_DATA_MEMSLOT; + + vm_userspace_mem_region_add(vm, p->src_type, data_gpa - pt_size, + PAGE_TABLE_MEMSLOT, pt_size / guest_page_size, + p->test_desc->pt_memslot_flags); + vm->memslots[MEM_REGION_PT] = PAGE_TABLE_MEMSLOT; + + vm_userspace_mem_region_add(vm, p->src_type, data_gpa, TEST_DATA_MEMSLOT, + data_size / guest_page_size, + p->test_desc->data_memslot_flags); + vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT; +} + +static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) +{ + struct test_desc *test = p->test_desc; + + pr_debug("Test: %s\n", test->name); + pr_debug("Testing guest mode: %s\n", vm_guest_mode_string(mode)); + pr_debug("Testing memory backing src type: %s\n", + vm_mem_backing_src_alias(p->src_type)->name); +} + +/* + * This function either succeeds, skips the test (after setting test->skip), or + * fails with a TEST_FAIL that aborts all tests. + */ +static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu, + struct test_desc *test) +{ + struct ucall uc; + + for (;;) { + vcpu_run(vcpu); + + switch (get_ucall(vcpu, &uc)) { + case UCALL_SYNC: + if (!handle_cmd(vm, uc.args[1])) { + test->skip = true; + goto done; + } + break; + case UCALL_ABORT: + REPORT_GUEST_ASSERT_2(uc, "values: %#lx, %#lx"); + break; + case UCALL_DONE: + goto done; + default: + TEST_FAIL("Unknown ucall %lu", uc.cmd); + } + } + +done: + pr_debug(test->skip ? "Skipped.\n" : "Done.\n"); +} + +static void run_test(enum vm_guest_mode mode, void *arg) +{ + struct test_params *p = (struct test_params *)arg; + struct test_desc *test = p->test_desc; + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + print_test_banner(mode, p); + + vm = ____vm_create(mode); + setup_memslots(vm, p); + kvm_vm_elf_load(vm, program_invocation_name); + vcpu = vm_vcpu_add(vm, 0, guest_code); + + setup_gva_maps(vm); + + ucall_init(vm, NULL); + + load_exec_code_for_test(vm); + setup_abort_handlers(vm, vcpu, test); + vcpu_args_set(vcpu, 1, test); + + vcpu_run_loop(vm, vcpu, test); + + ucall_uninit(vm); + kvm_vm_free(vm); +} + +static void help(char *name) +{ + puts(""); + printf("usage: %s [-h] [-s mem-type]\n", name); + puts(""); + guest_modes_help(); + backing_src_help("-s"); + puts(""); +} + +#define SNAME(s) #s +#define SCAT2(a, b) SNAME(a ## _ ## b) +#define SCAT3(a, b, c) SCAT2(a, SCAT2(b, c)) + +#define _CHECK(_test) _CHECK_##_test +#define _PREPARE(_test) _PREPARE_##_test +#define _PREPARE_guest_read64 NULL +#define _PREPARE_guest_ld_preidx NULL +#define _PREPARE_guest_write64 NULL +#define _PREPARE_guest_st_preidx NULL +#define _PREPARE_guest_exec NULL +#define _PREPARE_guest_at NULL +#define _PREPARE_guest_dc_zva guest_check_dc_zva +#define _PREPARE_guest_cas guest_check_lse + +/* With or without access flag checks */ +#define _PREPARE_with_af guest_set_ha, guest_clear_pte_af +#define _PREPARE_no_af NULL +#define _CHECK_with_af guest_check_pte_af +#define _CHECK_no_af NULL + +/* Performs an access and checks that no faults were triggered. 
*/ +#define TEST_ACCESS(_access, _with_af, _mark_cmd) \ +{ \ + .name = SCAT3(_access, _with_af, #_mark_cmd), \ + .guest_prepare = { _PREPARE(_with_af), \ + _PREPARE(_access) }, \ + .mem_mark_cmd = _mark_cmd, \ + .guest_test = _access, \ + .guest_test_check = { _CHECK(_with_af) }, \ +} + +static struct test_desc tests[] = { + + /* Check that HW is setting the Access Flag (AF) (sanity checks). */ + TEST_ACCESS(guest_read64, with_af, CMD_NONE), + TEST_ACCESS(guest_ld_preidx, with_af, CMD_NONE), + TEST_ACCESS(guest_cas, with_af, CMD_NONE), + TEST_ACCESS(guest_write64, with_af, CMD_NONE), + TEST_ACCESS(guest_st_preidx, with_af, CMD_NONE), + TEST_ACCESS(guest_dc_zva, with_af, CMD_NONE), + TEST_ACCESS(guest_exec, with_af, CMD_NONE), + + /* + * Punch a hole in the data backing store, and then try multiple + * accesses: reads should return zeroes, and writes should + * re-populate the page. Moreover, the test also checks that no + * exception was generated in the guest. Note that this + * reading/writing behavior is the same as reading/writing a + * punched page (with fallocate(FALLOC_FL_PUNCH_HOLE)) from + * userspace. + */ + TEST_ACCESS(guest_read64, no_af, CMD_HOLE_DATA), + TEST_ACCESS(guest_cas, no_af, CMD_HOLE_DATA), + TEST_ACCESS(guest_ld_preidx, no_af, CMD_HOLE_DATA), + TEST_ACCESS(guest_write64, no_af, CMD_HOLE_DATA), + TEST_ACCESS(guest_st_preidx, no_af, CMD_HOLE_DATA), + TEST_ACCESS(guest_at, no_af, CMD_HOLE_DATA), + TEST_ACCESS(guest_dc_zva, no_af, CMD_HOLE_DATA), + + { 0 } +}; + +static void for_each_test_and_guest_mode( + void (*func)(enum vm_guest_mode m, void *a), + enum vm_mem_backing_src_type src_type) +{ + struct test_desc *t; + + for (t = &tests[0]; t->name; t++) { + if (t->skip) + continue; + + struct test_params p = { + .src_type = src_type, + .test_desc = t, + }; + + for_each_guest_mode(run_test, &p); + } +} + +int main(int argc, char *argv[]) +{ + enum vm_mem_backing_src_type src_type; + int opt; + + setbuf(stdout, NULL); + + src_type = DEFAULT_VM_MEM_SRC; + + while ((opt = getopt(argc, argv, "hm:s:")) != -1) { + switch (opt) { + case 'm': + guest_modes_cmdline(optarg); + break; + case 's': + src_type = parse_backing_src_type(optarg); + break; + case 'h': + default: + help(argv[0]); + exit(0); + } + } + + for_each_test_and_guest_mode(run_test, src_type); + return 0; +} diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h index c1ddca8db225..5f977528e09c 100644 --- a/tools/testing/selftests/kvm/include/aarch64/processor.h +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h @@ -105,11 +105,19 @@ enum { #define ESR_EC_MASK (ESR_EC_NUM - 1) #define ESR_EC_SVC64 0x15 +#define ESR_EC_IABT 0x21 +#define ESR_EC_DABT 0x25 #define ESR_EC_HW_BP_CURRENT 0x31 #define ESR_EC_SSTEP_CURRENT 0x33 #define ESR_EC_WP_CURRENT 0x35 #define ESR_EC_BRK_INS 0x3c +/* Access flag */ +#define PTE_AF (1ULL << 10) + +/* Access flag update enable/disable */ +#define TCR_EL1_HA (1ULL << 39) + void aarch64_get_supported_page_sizes(uint32_t ipa, bool *ps4k, bool *ps16k, bool *ps64k);

From patchwork Tue Oct 11 01:06:25 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13003505
Date: Tue, 11 Oct 2022 01:06:25 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
References: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-12-ricarkol@google.com>
Subject: [PATCH v9 11/14] KVM: selftests: aarch64: Add userfaultfd tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Add some userfaultfd tests into page_fault_test. Punch holes into the data and/or page-table memslots, perform some accesses, and check that the faults are taken (or not taken) when expected.
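At a high level, each test wires a handler over a memslot roughly as follows (a condensed sketch of the helpers added below, not new API; the setup of data_args is elided here):

	struct uffd_desc *data_uffd;

	/* Register a MISSING-mode handler over the test-data memslot... */
	data_uffd = uffd_setup_demand_paging(UFFDIO_REGISTER_MODE_MISSING, 0,
					     data_args.hva,
					     data_args.paging_size,
					     test->uffd_data_handler);

	/* ...run the guest, then stop the handler thread. */
	vcpu_run_loop(vm, vcpu, test);
	uffd_stop_demand_paging(data_uffd);

Each handler bumps events.uffd_faults under a mutex, and the final count is compared against the test's expected_events once the uffd threads have exited.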
Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 189 +++++++++++++++++- 1 file changed, 188 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index 8583be16e1fe..cf937b90deab 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -35,6 +35,12 @@ static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA; #define PREPARE_FN_NR 10 #define CHECK_FN_NR 10 +static struct event_cnt { + int uffd_faults; + /* uffd_faults is incremented from multiple threads. */ + pthread_mutex_t uffd_faults_mutex; +} events; + struct test_desc { const char *name; uint64_t mem_mark_cmd; @@ -42,11 +48,14 @@ struct test_desc { bool (*guest_prepare[PREPARE_FN_NR])(void); void (*guest_test)(void); void (*guest_test_check[CHECK_FN_NR])(void); + uffd_handler_t uffd_pt_handler; + uffd_handler_t uffd_data_handler; void (*dabt_handler)(struct ex_regs *regs); void (*iabt_handler)(struct ex_regs *regs); uint32_t pt_memslot_flags; uint32_t data_memslot_flags; bool skip; + struct event_cnt expected_events; }; struct test_params { @@ -263,7 +272,110 @@ static void no_iabt_handler(struct ex_regs *regs) GUEST_ASSERT_1(false, regs->pc); } +static struct uffd_args { + char *copy; + void *hva; + uint64_t paging_size; +} pt_args, data_args; + /* Returns true to continue the test, and false if it should be skipped. */ +static int uffd_generic_handler(int uffd_mode, int uffd, + struct uffd_msg *msg, struct uffd_args *args, + bool expect_write) +{ + uint64_t addr = msg->arg.pagefault.address; + uint64_t flags = msg->arg.pagefault.flags; + struct uffdio_copy copy; + int ret; + + TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING, + "The only expected UFFD mode is MISSING"); + ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write); + ASSERT_EQ(addr, (uint64_t)args->hva); + + pr_debug("uffd fault: addr=%p write=%d\n", + (void *)addr, !!(flags & UFFD_PAGEFAULT_FLAG_WRITE)); + + copy.src = (uint64_t)args->copy; + copy.dst = addr; + copy.len = args->paging_size; + copy.mode = 0; + + ret = ioctl(uffd, UFFDIO_COPY, ©); + if (ret == -1) { + pr_info("Failed UFFDIO_COPY in 0x%lx with errno: %d\n", + addr, errno); + return ret; + } + + pthread_mutex_lock(&events.uffd_faults_mutex); + events.uffd_faults += 1; + pthread_mutex_unlock(&events.uffd_faults_mutex); + return 0; +} + +static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &pt_args, true); +} + +static int uffd_data_write_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &data_args, true); +} + +static int uffd_data_read_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &data_args, false); +} + +static void setup_uffd_args(struct userspace_mem_region *region, + struct uffd_args *args) +{ + args->hva = (void *)region->region.userspace_addr; + args->paging_size = region->region.memory_size; + + args->copy = malloc(args->paging_size); + TEST_ASSERT(args->copy, "Failed to allocate data copy."); + memcpy(args->copy, args->hva, args->paging_size); +} + +static void setup_uffd(struct kvm_vm *vm, struct test_params *p, + struct uffd_desc **pt_uffd, struct uffd_desc **data_uffd) +{ + struct test_desc *test = p->test_desc; + + setup_uffd_args(vm_get_mem_region(vm, MEM_REGION_PT), &pt_args); + 
setup_uffd_args(vm_get_mem_region(vm, MEM_REGION_TEST_DATA), &data_args); + + *pt_uffd = NULL; + if (test->uffd_pt_handler) + *pt_uffd = uffd_setup_demand_paging( + UFFDIO_REGISTER_MODE_MISSING, 0, + pt_args.hva, pt_args.paging_size, + test->uffd_pt_handler); + + *data_uffd = NULL; + if (test->uffd_data_handler) + *data_uffd = uffd_setup_demand_paging( + UFFDIO_REGISTER_MODE_MISSING, 0, + data_args.hva, data_args.paging_size, + test->uffd_data_handler); +} + +static void free_uffd(struct test_desc *test, struct uffd_desc *pt_uffd, + struct uffd_desc *data_uffd) +{ + if (test->uffd_pt_handler) + uffd_stop_demand_paging(pt_uffd); + if (test->uffd_data_handler) + uffd_stop_demand_paging(data_uffd); + + free(pt_args.copy); + free(data_args.copy); +} + +/* Returns false if the test should be skipped. */ static bool punch_hole_in_backing_store(struct kvm_vm *vm, struct userspace_mem_region *region) { @@ -404,6 +516,11 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p) vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT; } +static void check_event_counts(struct test_desc *test) +{ + ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults); +} + static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) { struct test_desc *test = p->test_desc; @@ -414,12 +531,17 @@ static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) vm_mem_backing_src_alias(p->src_type)->name); } +static void reset_event_counts(void) +{ + memset(&events, 0, sizeof(events)); +} + /* * This function either succeeds, skips the test (after setting test->skip), or * fails with a TEST_FAIL that aborts all tests. */ static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu, - struct test_desc *test) + struct test_desc *test) { struct ucall uc; @@ -453,6 +575,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) struct test_desc *test = p->test_desc; struct kvm_vm *vm; struct kvm_vcpu *vcpu; + struct uffd_desc *pt_uffd, *data_uffd; print_test_banner(mode, p); @@ -465,7 +588,16 @@ static void run_test(enum vm_guest_mode mode, void *arg) ucall_init(vm, NULL); + reset_event_counts(); + + /* + * Set some code in the data memslot for the guest to execute (only + * applicable to the EXEC tests). This has to be done before + * setup_uffd() as that function copies the memslot data for the uffd + * handler. + */ load_exec_code_for_test(vm); + setup_uffd(vm, p, &pt_uffd, &data_uffd); setup_abort_handlers(vm, vcpu, test); vcpu_args_set(vcpu, 1, test); @@ -473,6 +605,14 @@ static void run_test(enum vm_guest_mode mode, void *arg) ucall_uninit(vm); kvm_vm_free(vm); + free_uffd(test, pt_uffd, data_uffd); + + /* + * Make sure we check the events after the uffd threads have exited, + * which means they updated their respective event counters. 
+ */ + if (!test->skip) + check_event_counts(test); } static void help(char *name) @@ -488,6 +628,7 @@ static void help(char *name) #define SNAME(s) #s #define SCAT2(a, b) SNAME(a ## _ ## b) #define SCAT3(a, b, c) SCAT2(a, SCAT2(b, c)) +#define SCAT4(a, b, c, d) SCAT2(a, SCAT3(b, c, d)) #define _CHECK(_test) _CHECK_##_test #define _PREPARE(_test) _PREPARE_##_test @@ -515,6 +656,21 @@ static void help(char *name) .mem_mark_cmd = _mark_cmd, \ .guest_test = _access, \ .guest_test_check = { _CHECK(_with_af) }, \ + .expected_events = { 0 }, \ +} + +#define TEST_UFFD(_access, _with_af, _mark_cmd, \ + _uffd_data_handler, _uffd_pt_handler, _uffd_faults) \ +{ \ + .name = SCAT4(uffd, _access, _with_af, #_mark_cmd), \ + .guest_prepare = { _PREPARE(_with_af), \ + _PREPARE(_access) }, \ + .guest_test = _access, \ + .mem_mark_cmd = _mark_cmd, \ + .guest_test_check = { _CHECK(_with_af) }, \ + .uffd_data_handler = _uffd_data_handler, \ + .uffd_pt_handler = _uffd_pt_handler, \ + .expected_events = { .uffd_faults = _uffd_faults, }, \ } static struct test_desc tests[] = { @@ -545,6 +701,37 @@ static struct test_desc tests[] = { TEST_ACCESS(guest_at, no_af, CMD_HOLE_DATA), TEST_ACCESS(guest_dc_zva, no_af, CMD_HOLE_DATA), + /* + * Punch holes in the data and PT backing stores and mark them for + * userfaultfd handling. This should result in 2 faults: the access + * on the data backing store, and its respective S1 page table walk + * (S1PTW). + */ + TEST_UFFD(guest_read64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, + uffd_data_read_handler, uffd_pt_write_handler, 2), + /* no_af should also lead to a PT write. */ + TEST_UFFD(guest_read64, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, + uffd_data_read_handler, uffd_pt_write_handler, 2), + /* Note how that cas invokes the read handler. */ + TEST_UFFD(guest_cas, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, + uffd_data_read_handler, uffd_pt_write_handler, 2), + /* + * Can't test guest_at with_af as it's IMPDEF whether the AF is set. + * The S1PTW fault should still be marked as a write. 
+	 */
+	TEST_UFFD(guest_at, no_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 1),
+	TEST_UFFD(guest_ld_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_write64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_write_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_dc_zva, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_write_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_st_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_write_handler, uffd_pt_write_handler, 2),
+	TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
+		  uffd_data_read_handler, uffd_pt_write_handler, 2),
+
 	{ 0 }
 };
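The calling convention of the new library is easy to see from how setup_uffd() and free_uffd() use it above: one call spawns a handler thread for a registered range, one call tears it down. The fragment below is purely illustrative, not part of this series; it assumes the uffd_setup_demand_paging()/uffd_stop_demand_paging() signatures used above and resolves missing-page faults with a plain UFFDIO_COPY. zero_page, demo_handler and demo are hypothetical names.

	#include <errno.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>

	#include "userfaultfd_util.h"

	static char zero_page[4096] __attribute__((aligned(4096)));

	/* Resolve a missing-page fault by copying in a page of zeros. */
	static int demo_handler(int mode, int uffd, struct uffd_msg *msg)
	{
		struct uffdio_copy copy = {
			.src = (uint64_t)(unsigned long)zero_page,
			.dst = msg->arg.pagefault.address & ~(uint64_t)4095,
			.len = 4096,
		};

		return ioctl(uffd, UFFDIO_COPY, &copy) == -1 ? -errno : 0;
	}

	static void demo(void *hva, uint64_t size)
	{
		struct uffd_desc *uffd;

		/* Starts a handler thread for faults on [hva, hva + size). */
		uffd = uffd_setup_demand_paging(UFFDIO_REGISTER_MODE_MISSING, 0,
						hva, size, demo_handler);

		/* ... touch pages in the region from another thread/vCPU ... */

		/* Waits for the handler thread and destroys the uffd_desc. */
		uffd_stop_demand_paging(uffd);
	}

Note the ordering this implies for the tests above: event counters are only safe to check once uffd_stop_demand_paging() has reaped the handler threads, which is exactly why run_test() calls free_uffd() before check_event_counts().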
From patchwork Tue Oct 11 01:06:26 2022
Date: Tue, 11 Oct 2022 01:06:26 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-13-ricarkol@google.com>
Subject: [PATCH v9 12/14] KVM: selftests: aarch64: Add dirty logging tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu,
 andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add some dirty logging tests into page_fault_test. Mark the data and/or
page-table memory regions for dirty logging, perform some accesses, and
check that the dirty log bits are set or clean when expected.

Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/aarch64/page_fault_test.c | 75 +++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index cf937b90deab..06fab0c00184 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -31,6 +31,11 @@ static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
 #define CMD_SKIP_TEST				(1ULL << 1)
 #define CMD_HOLE_PT				(1ULL << 2)
 #define CMD_HOLE_DATA				(1ULL << 3)
+#define CMD_CHECK_WRITE_IN_DIRTY_LOG		(1ULL << 4)
+#define CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG		(1ULL << 5)
+#define CMD_CHECK_NO_WRITE_IN_DIRTY_LOG		(1ULL << 6)
+#define CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG	(1ULL << 7)
+#define CMD_SET_PTE_AF				(1ULL << 8)
 
 #define PREPARE_FN_NR				10
 #define CHECK_FN_NR				10
@@ -213,6 +218,21 @@ static void guest_check_pte_af(void)
 	GUEST_ASSERT_EQ(*((uint64_t *)TEST_PTE_GVA) & PTE_AF, PTE_AF);
 }
 
+static void guest_check_write_in_dirty_log(void)
+{
+	GUEST_SYNC(CMD_CHECK_WRITE_IN_DIRTY_LOG);
+}
+
+static void guest_check_no_write_in_dirty_log(void)
+{
+	GUEST_SYNC(CMD_CHECK_NO_WRITE_IN_DIRTY_LOG);
+}
+
+static void guest_check_s1ptw_wr_in_dirty_log(void)
+{
+	GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG);
+}
+
 static void guest_exec(void)
 {
 	int (*code)(void) = (int (*)(void))TEST_EXEC_GVA;
@@ -395,6 +415,21 @@ static bool punch_hole_in_backing_store(struct kvm_vm *vm,
 	return true;
 }
 
+static bool check_write_in_dirty_log(struct kvm_vm *vm,
+		struct userspace_mem_region *region, uint64_t host_pg_nr)
+{
+	unsigned long *bmap;
+	bool first_page_dirty;
+	uint64_t size = region->region.memory_size;
+
+	/* getpagesize() is not always equal to vm->page_size */
+	bmap = bitmap_zalloc(size / getpagesize());
+	kvm_vm_get_dirty_log(vm, region->region.slot, bmap);
+	first_page_dirty = test_bit(host_pg_nr, bmap);
+	free(bmap);
+	return first_page_dirty;
+}
+
 /* Returns true to continue the test, and false if it should be skipped. */
 static bool handle_cmd(struct kvm_vm *vm, int cmd)
 {
@@ -411,6 +446,18 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd)
 		continue_test = punch_hole_in_backing_store(vm, pt_region);
 	if (cmd & CMD_HOLE_DATA)
 		continue_test = punch_hole_in_backing_store(vm, data_region);
+	if (cmd & CMD_CHECK_WRITE_IN_DIRTY_LOG)
+		TEST_ASSERT(check_write_in_dirty_log(vm, data_region, 0),
+			    "Missing write in dirty log");
+	if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG)
+		TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, 0),
+			    "Missing s1ptw write in dirty log");
+	if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG)
+		TEST_ASSERT(!check_write_in_dirty_log(vm, data_region, 0),
+			    "Unexpected write in dirty log");
+	if (cmd & CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG)
+		TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, 0),
+			    "Unexpected s1ptw write in dirty log");
 
 	return continue_test;
 }
@@ -673,6 +720,19 @@ static void help(char *name)
 	.expected_events	= { .uffd_faults = _uffd_faults, },	\
 }
 
+#define TEST_DIRTY_LOG(_access, _with_af, _test_check)			\
+{									\
+	.name			= SCAT3(dirty_log, _access, _with_af),	\
+	.data_memslot_flags	= KVM_MEM_LOG_DIRTY_PAGES,		\
+	.pt_memslot_flags	= KVM_MEM_LOG_DIRTY_PAGES,		\
+	.guest_prepare		= { _PREPARE(_with_af),			\
+				    _PREPARE(_access) },		\
+	.guest_test		= _access,				\
+	.guest_test_check	= { _CHECK(_with_af), _test_check,	\
+				    guest_check_s1ptw_wr_in_dirty_log }, \
+	.expected_events	= { 0 },				\
+}
+
 static struct test_desc tests[] = {
 
 	/* Check that HW is setting the Access Flag (AF) (sanity checks). */
@@ -732,6 +792,21 @@ static struct test_desc tests[] = {
 	TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT,
 		  uffd_data_read_handler, uffd_pt_write_handler, 2),
 
+	/*
+	 * Try accesses when the data and PT memory regions are both
+	 * tracked for dirty logging.
+	 */
+	TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log),
+	/* no_af should also lead to a PT write. */
+	TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_ld_preidx, with_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
+	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),
+
 	{ 0 }
 };
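check_write_in_dirty_log() above goes through the selftest library's kvm_vm_get_dirty_log(), which is a thin wrapper over the KVM_GET_DIRTY_LOG ioctl. For readers who have not used that UAPI, a rough, self-contained sketch follows; error handling is omitted and get_dirty_bitmap() is a hypothetical helper, not code from this series.

	#include <stdint.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* One bit per page in the memslot; u64-aligned, as KVM expects. */
	static void *get_dirty_bitmap(int vm_fd, uint32_t slot, uint64_t npages)
	{
		struct kvm_dirty_log log = { .slot = slot };
		size_t len = ((npages + 63) / 64) * 8;
		void *bitmap = calloc(1, len);

		log.dirty_bitmap = bitmap;
		ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
		return bitmap;
	}

In the default dirty-log mode the ioctl also clears KVM's internal bitmap, so each call observes only the writes made since the previous one; that is what lets consecutive dirty-log checks in these tests treat the log as a fresh window.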
From patchwork Tue Oct 11 01:06:27 2022
Date: Tue, 11 Oct 2022 01:06:27 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-14-ricarkol@google.com>
Subject: [PATCH v9 13/14] KVM: selftests: aarch64: Add readonly memslot tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu,
 andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add some readonly memslot tests into page_fault_test. Mark the data
and/or page-table memory regions as readonly, perform some accesses,
and check that the right fault is triggered when expected (e.g., a
store with no write-back should lead to an mmio exit).

Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/aarch64/page_fault_test.c | 102 +++++++++++++++++-
 1 file changed, 101 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 06fab0c00184..51d74909c50f 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -41,6 +43,8 @@ static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
 #define CHECK_FN_NR				10
 
 static struct event_cnt {
+	int mmio_exits;
+	int fail_vcpu_runs;
 	int uffd_faults;
 	/* uffd_faults is incremented from multiple threads. */
 	pthread_mutex_t uffd_faults_mutex;
@@ -57,6 +59,8 @@ struct test_desc {
 	uffd_handler_t uffd_data_handler;
 	void (*dabt_handler)(struct ex_regs *regs);
 	void (*iabt_handler)(struct ex_regs *regs);
+	void (*mmio_handler)(struct kvm_vm *vm, struct kvm_run *run);
+	void (*fail_vcpu_run_handler)(int ret);
 	uint32_t pt_memslot_flags;
 	uint32_t data_memslot_flags;
 	bool skip;
@@ -415,6 +419,31 @@ static bool punch_hole_in_backing_store(struct kvm_vm *vm,
 	return true;
 }
 
+static void mmio_on_test_gpa_handler(struct kvm_vm *vm, struct kvm_run *run)
+{
+	struct userspace_mem_region *region;
+	void *hva;
+
+	region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
+	hva = (void *)region->region.userspace_addr;
+
+	ASSERT_EQ(run->mmio.phys_addr, region->region.guest_phys_addr);
+
+	memcpy(hva, run->mmio.data, run->mmio.len);
+	events.mmio_exits += 1;
+}
+
+static void mmio_no_handler(struct kvm_vm *vm, struct kvm_run *run)
+{
+	uint64_t data;
+
+	memcpy(&data, run->mmio.data, sizeof(data));
+	pr_debug("addr=%llx len=%d w=%d data=%lx\n",
+		 run->mmio.phys_addr, run->mmio.len,
+		 run->mmio.is_write, data);
+	TEST_FAIL("No MMIO exit was expected.");
+}
+
 static bool check_write_in_dirty_log(struct kvm_vm *vm,
 		struct userspace_mem_region *region, uint64_t host_pg_nr)
 {
@@ -462,6 +491,18 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd)
 	return continue_test;
 }
 
+void fail_vcpu_run_no_handler(int ret)
+{
+	TEST_FAIL("Unexpected vcpu run failure");
+}
+
+void fail_vcpu_run_mmio_no_syndrome_handler(int ret)
+{
+	TEST_ASSERT(errno == ENOSYS,
+		    "vcpu run should have failed with ENOSYS (MMIO with no syndrome).");
+	events.fail_vcpu_runs += 1;
+}
+
 typedef uint32_t aarch64_insn_t;
 extern aarch64_insn_t __exec_test[2];
 
@@ -563,9 +604,20 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
 }
 
+static void setup_default_handlers(struct test_desc *test)
+{
+	if (!test->mmio_handler)
+		test->mmio_handler = mmio_no_handler;
+
+	if (!test->fail_vcpu_run_handler)
+		test->fail_vcpu_run_handler = fail_vcpu_run_no_handler;
+}
+
 static void check_event_counts(struct test_desc *test)
 {
 	ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults);
+	ASSERT_EQ(test->expected_events.mmio_exits, events.mmio_exits);
+	ASSERT_EQ(test->expected_events.fail_vcpu_runs, events.fail_vcpu_runs);
 }
 
 static void print_test_banner(enum vm_guest_mode mode, struct test_params *p)
@@ -590,10 +642,18 @@ static void reset_event_counts(void)
 static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
 			  struct test_desc *test)
 {
+	struct kvm_run *run;
 	struct ucall uc;
+	int ret;
+
+	run = vcpu->run;
 
 	for (;;) {
-		vcpu_run(vcpu);
+		ret = _vcpu_run(vcpu);
+		if (ret) {
+			test->fail_vcpu_run_handler(ret);
+			goto done;
+		}
 
 		switch (get_ucall(vcpu, &uc)) {
 		case UCALL_SYNC:
@@ -607,6 +667,10 @@ static void vcpu_run_loop(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
 			break;
 		case UCALL_DONE:
 			goto done;
+		case UCALL_NONE:
+			if (run->exit_reason == KVM_EXIT_MMIO)
+				test->mmio_handler(vm, run);
+			break;
 		default:
 			TEST_FAIL("Unknown ucall %lu", uc.cmd);
 		}
@@ -646,6 +710,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	load_exec_code_for_test(vm);
 	setup_uffd(vm, p, &pt_uffd, &data_uffd);
 	setup_abort_handlers(vm, vcpu, test);
+	setup_default_handlers(test);
 	vcpu_args_set(vcpu, 1, test);
 
 	vcpu_run_loop(vm, vcpu, test);
@@ -733,6 +798,25 @@ static void help(char *name)
 	.expected_events	= { 0 },				\
 }
 
+#define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits)		\
+{									\
+	.name			= SCAT2(ro_memslot, _access),		\
+	.data_memslot_flags	= KVM_MEM_READONLY,			\
+	.guest_prepare		= { _PREPARE(_access) },		\
+	.guest_test		= _access,				\
+	.mmio_handler		= _mmio_handler,			\
+	.expected_events	= { .mmio_exits = _mmio_exits },	\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME(_access)				\
+{									\
+	.name			= SCAT2(ro_memslot_no_syndrome, _access), \
+	.data_memslot_flags	= KVM_MEM_READONLY,			\
+	.guest_test		= _access,				\
+	.fail_vcpu_run_handler	= fail_vcpu_run_mmio_no_syndrome_handler, \
+	.expected_events	= { .fail_vcpu_runs = 1 },		\
+}
+
 static struct test_desc tests[] = {
 
 	/* Check that HW is setting the Access Flag (AF) (sanity checks). */
@@ -807,6 +891,22 @@ static struct test_desc tests[] = {
 	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
 	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),
 
+	/*
+	 * Try accesses when the data memory region is marked read-only
+	 * (with KVM_MEM_READONLY). Writes with a syndrome result in an
+	 * MMIO exit, writes with no syndrome (e.g., CAS) result in a
+	 * failed vcpu run, and reads/execs with and without syndromes do
+	 * not fault.
+	 */
+	TEST_RO_MEMSLOT(guest_read64, 0, 0),
+	TEST_RO_MEMSLOT(guest_ld_preidx, 0, 0),
+	TEST_RO_MEMSLOT(guest_at, 0, 0),
+	TEST_RO_MEMSLOT(guest_exec, 0, 0),
+	TEST_RO_MEMSLOT(guest_write64, mmio_on_test_gpa_handler, 1),
+	TEST_RO_MEMSLOT_NO_SYNDROME(guest_dc_zva),
+	TEST_RO_MEMSLOT_NO_SYNDROME(guest_cas),
+	TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx),
+
 	{ 0 }
 };
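The read-only memslot tests above hinge on one KVM behavior worth restating from the userspace side of KVM_RUN: a guest write to a KVM_MEM_READONLY slot, when the instruction provides valid syndrome information, completes KVM_RUN with exit_reason == KVM_EXIT_MMIO and describes the access in the shared kvm_run page. A minimal dispatch sketch, independent of the selftest harness (handle_exit(), backing and gpa_base are hypothetical names):

	#include <stdint.h>
	#include <string.h>
	#include <linux/kvm.h>

	/* Sketch: apply one KVM_EXIT_MMIO write to a userspace backing buffer. */
	static void handle_exit(struct kvm_run *run, uint8_t *backing,
				uint64_t gpa_base)
	{
		if (run->exit_reason != KVM_EXIT_MMIO || !run->mmio.is_write)
			return;

		memcpy(backing + (run->mmio.phys_addr - gpa_base),
		       run->mmio.data, run->mmio.len);
	}

Writes that carry no syndrome information (e.g. dc zva, or the store half of cas) cannot be turned into such an exit; on arm64, KVM_RUN then fails with ENOSYS, which is exactly the path fail_vcpu_run_mmio_no_syndrome_handler() asserts on.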
From patchwork Tue Oct 11 01:06:28 2022
Date: Tue, 11 Oct 2022 01:06:28 +0000
In-Reply-To: <20221011010628.1734342-1-ricarkol@google.com>
Message-ID: <20221011010628.1734342-15-ricarkol@google.com>
Subject: [PATCH v9 14/14] KVM: selftests: aarch64: Add mix of tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu,
 andrew.jones@linux.dev
Cc: pbonzini@redhat.com, maz@kernel.org, seanjc@google.com,
 alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com,
 reijiw@google.com, rananta@google.com, bgardon@google.com,
 dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller

Add some mix of tests into page_fault_test: memory regions with all the
pairwise combinations of read-only, userfaultfd, and dirty-logging. For
example, writing into a read-only region which has a hole handled with
userfaultfd.

Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/aarch64/page_fault_test.c | 155 ++++++++++++++++++
 1 file changed, 155 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 51d74909c50f..98b133920eb1 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -399,6 +399,12 @@ static void free_uffd(struct test_desc *test, struct uffd_desc *pt_uffd,
 	free(data_args.copy);
 }
 
+static int uffd_no_handler(int mode, int uffd, struct uffd_msg *msg)
+{
+	TEST_FAIL("No UFFD fault was expected.");
+	return -1;
+}
+
 /* Returns false if the test should be skipped. */
 static bool punch_hole_in_backing_store(struct kvm_vm *vm,
 					struct userspace_mem_region *region)
@@ -798,6 +804,22 @@ static void help(char *name)
 	.expected_events	= { 0 },				\
 }
 
+#define TEST_UFFD_AND_DIRTY_LOG(_access, _with_af, _uffd_data_handler,	\
+				_uffd_faults, _test_check)		\
+{									\
+	.name			= SCAT3(uffd_and_dirty_log, _access, _with_af), \
+	.data_memslot_flags	= KVM_MEM_LOG_DIRTY_PAGES,		\
+	.pt_memslot_flags	= KVM_MEM_LOG_DIRTY_PAGES,		\
+	.guest_prepare		= { _PREPARE(_with_af),			\
+				    _PREPARE(_access) },		\
+	.guest_test		= _access,				\
+	.mem_mark_cmd		= CMD_HOLE_DATA | CMD_HOLE_PT,		\
+	.guest_test_check	= { _CHECK(_with_af), _test_check },	\
+	.uffd_data_handler	= _uffd_data_handler,			\
+	.uffd_pt_handler	= uffd_pt_write_handler,		\
+	.expected_events	= { .uffd_faults = _uffd_faults, },	\
+}
+
 #define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits)		\
 {									\
 	.name			= SCAT2(ro_memslot, _access),		\
@@ -817,6 +839,59 @@ static void help(char *name)
 	.expected_events	= { .fail_vcpu_runs = 1 },		\
 }
 
+#define TEST_RO_MEMSLOT_AND_DIRTY_LOG(_access, _mmio_handler, _mmio_exits, \
+				      _test_check)			\
+{									\
+	.name			= SCAT2(ro_memslot_and_dlog, _access),	\
+	.data_memslot_flags	= KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \
+	.pt_memslot_flags	= KVM_MEM_LOG_DIRTY_PAGES,		\
+	.guest_prepare		= { _PREPARE(_access) },		\
+	.guest_test		= _access,				\
+	.guest_test_check	= { _test_check },			\
+	.mmio_handler		= _mmio_handler,			\
+	.expected_events	= { .mmio_exits = _mmio_exits },	\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(_access, _test_check) \
+{									\
+	.name			= SCAT2(ro_memslot_no_syn_and_dlog, _access), \
+	.data_memslot_flags	= KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \
+	.pt_memslot_flags	= KVM_MEM_LOG_DIRTY_PAGES,		\
+	.guest_test		= _access,				\
+	.guest_test_check	= { _test_check },			\
+	.fail_vcpu_run_handler	= fail_vcpu_run_mmio_no_syndrome_handler, \
+	.expected_events	= { .fail_vcpu_runs = 1 },		\
+}
+
+#define TEST_RO_MEMSLOT_AND_UFFD(_access, _mmio_handler, _mmio_exits,	\
+				 _uffd_data_handler, _uffd_faults)	\
+{									\
+	.name			= SCAT2(ro_memslot_uffd, _access),	\
+	.data_memslot_flags	= KVM_MEM_READONLY,			\
+	.mem_mark_cmd		= CMD_HOLE_DATA | CMD_HOLE_PT,		\
+	.guest_prepare		= { _PREPARE(_access) },		\
+	.guest_test		= _access,				\
+	.uffd_data_handler	= _uffd_data_handler,			\
+	.uffd_pt_handler	= uffd_pt_write_handler,		\
+	.mmio_handler		= _mmio_handler,			\
+	.expected_events	= { .mmio_exits = _mmio_exits,		\
+				    .uffd_faults = _uffd_faults },	\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(_access, _uffd_data_handler, \
+					     _uffd_faults)		\
+{									\
+	.name			= SCAT2(ro_memslot_no_syn_and_uffd, _access), \
+	.data_memslot_flags	= KVM_MEM_READONLY,			\
+	.mem_mark_cmd		= CMD_HOLE_DATA | CMD_HOLE_PT,		\
+	.guest_test		= _access,				\
+	.uffd_data_handler	= _uffd_data_handler,			\
+	.uffd_pt_handler	= uffd_pt_write_handler,		\
+	.fail_vcpu_run_handler	= fail_vcpu_run_mmio_no_syndrome_handler, \
+	.expected_events	= { .fail_vcpu_runs = 1,		\
+				    .uffd_faults = _uffd_faults },	\
+}
+
 static struct test_desc tests[] = {
 
 	/* Check that HW is setting the Access Flag (AF) (sanity checks). */
@@ -891,6 +966,35 @@ static struct test_desc tests[] = {
 	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
 	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),
 
+	/*
+	 * Access when the data and PT memory regions are both marked for
+	 * dirty logging and UFFD at the same time. The expected result is
+	 * that writes should mark the dirty log and trigger a userfaultfd
+	 * write fault.
+	 * Reads/execs should result in a read userfaultfd fault, and
+	 * nothing in the dirty log. Any S1PTW should result in a write in
+	 * the dirty log and a userfaultfd write.
+	 */
+	TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, uffd_data_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	/* no_af should also lead to a PT write. */
+	TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, uffd_data_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, uffd_data_read_handler,
+				2, guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, 0, 1,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, uffd_data_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, uffd_data_write_handler,
+				2, guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, uffd_data_read_handler, 2,
+				guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, uffd_data_write_handler,
+				2, guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_st_preidx, with_af,
+				uffd_data_write_handler, 2,
+				guest_check_write_in_dirty_log),
+
 	/*
 	 * Try accesses when the data memory region is marked read-only
 	 * (with KVM_MEM_READONLY). Writes with a syndrome result in an
@@ -907,6 +1011,57 @@ static struct test_desc tests[] = {
 	TEST_RO_MEMSLOT_NO_SYNDROME(guest_cas),
 	TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx),
 
+	/*
+	 * Access when the data region is both read-only and marked for
+	 * dirty logging at the same time. The expected result is that
+	 * writes should leave nothing in the dirty log. The readonly
+	 * handling is the same as if the memslot was not marked for
+	 * dirty logging: writes with a syndrome result in an MMIO exit,
+	 * and writes with no syndrome result in a failed vcpu run.
+	 */
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_read64, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_ld_preidx, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_at, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_exec, 0, 0,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_write64, mmio_on_test_gpa_handler,
+				      1, guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_dc_zva,
+						  guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_cas,
+						  guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_st_preidx,
+						  guest_check_no_write_in_dirty_log),
+
+	/*
+	 * Access when the data region is both read-only and punched with
+	 * holes tracked with userfaultfd. The expected result is the
+	 * union of both userfaultfd and read-only behaviors. For example,
+	 * write accesses result in a userfaultfd write fault and an MMIO
+	 * exit. Writes with no syndrome result in a failed vcpu run and
+	 * no userfaultfd write fault. Reads result in userfaultfd getting
+	 * triggered.
+	 */
+	TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0,
+				 uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0,
+				 uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0,
+				 uffd_no_handler, 1),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0,
+				 uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1,
+				 uffd_data_write_handler, 2),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas,
+					     uffd_data_read_handler, 2),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva,
+					     uffd_no_handler, 1),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx,
+					     uffd_no_handler, 1),
+
 	{ 0 }
 };
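To make the pairwise composition concrete, it may help to expand one entry by hand. TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1, uffd_data_write_handler, 2) produces roughly the initializer below (guest_prepare is omitted; it expands through the _PREPARE() table defined earlier in the file). The two expected uffd faults are the S1PTW write on the holed PT region plus the data-region fault taken when mmio_on_test_gpa_handler() copies into the hole; the single MMIO exit is the guest's write hitting the read-only slot:

	{
		.name			= "ro_memslot_uffd_guest_write64",
		.data_memslot_flags	= KVM_MEM_READONLY,
		.mem_mark_cmd		= CMD_HOLE_DATA | CMD_HOLE_PT,
		.guest_test		= guest_write64,
		.uffd_data_handler	= uffd_data_write_handler,
		.uffd_pt_handler	= uffd_pt_write_handler,
		.mmio_handler		= mmio_on_test_gpa_handler,
		.expected_events	= { .mmio_exits = 1, .uffd_faults = 2 },
	}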