From patchwork Wed Mar 23 22:53:55 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12790189
Date: Wed, 23 Mar 2022 15:53:55 -0700
In-Reply-To: <20220323225405.267155-1-ricarkol@google.com>
Message-Id: <20220323225405.267155-2-ricarkol@google.com>
Subject: [PATCH v2 01/11] KVM: selftests: Add a userfaultfd library
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com,
    eric.auger@redhat.com, oupton@google.com, reijiw@google.com,
    rananta@google.com, bgardon@google.com, axelrasmussen@google.com,
    Ricardo Koller
List-ID: X-Mailing-List: kvm@vger.kernel.org

Move the generic userfaultfd code out of demand_paging_test.c into a
common library, userfaultfd_util. This library consists of a setup and a
stop function. The setup function starts a thread for handling page
faults using the handler callback function. This setup returns a
uffd_desc object, which is then used in the stop function (to wait for
and destroy the threads).

Signed-off-by: Ricardo Koller
Reviewed-by: Ben Gardon
Reviewed-by: Oliver Upton
---
 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../selftests/kvm/demand_paging_test.c        | 227 +++---------------
 .../selftests/kvm/include/userfaultfd_util.h  |  47 ++++
 .../selftests/kvm/lib/userfaultfd_util.c      | 187 +++++++++++++++
 4 files changed, 264 insertions(+), 199 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/userfaultfd_util.h
 create mode 100644 tools/testing/selftests/kvm/lib/userfaultfd_util.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0e4926bc9a58..bc5f89b3700e 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -37,7 +37,7 @@ ifeq ($(ARCH),riscv)
 	UNAME_M := riscv
 endif
 
-LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
+LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c lib/userfaultfd_util.c
 LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 6a719d065599..b3d457cecd68 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -22,23 +22,13 @@
 #include "test_util.h"
 #include "perf_test_util.h"
 #include "guest_modes.h"
+#include "userfaultfd_util.h"
 
 #ifdef __NR_userfaultfd
 
-#ifdef PRINT_PER_PAGE_UPDATES
-#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__)
-#else
-#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__)
-#endif
-
-#ifdef PRINT_PER_VCPU_UPDATES
-#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__)
-#else
-#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__)
-#endif
-
 static int nr_vcpus = 1;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+
 static size_t demand_paging_size;
 static char *guest_data_prototype;
 
@@ -69,9 +59,11 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
 		       ts_diff.tv_sec, ts_diff.tv_nsec);
 }
 
-static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr)
+static int handle_uffd_page_request(int uffd_mode, int uffd,
+				    struct uffd_msg *msg)
 {
 	pid_t tid = syscall(__NR_gettid);
+	uint64_t addr = msg->arg.pagefault.address;
 	struct timespec start;
 	struct timespec ts_diff;
 	int r;
@@ -118,175 +110,32 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr)
 	return 0;
 }
 
-bool quit_uffd_thread;
-
-struct uffd_handler_args {
+struct test_params {
 	int uffd_mode;
-	int uffd;
-	int pipefd;
-	useconds_t delay;
+	useconds_t uffd_delay;
+	enum vm_mem_backing_src_type src_type;
+	bool partition_vcpu_memory_access;
 };
 
-static void *uffd_handler_thread_fn(void *arg)
+static void prefault_mem(void *alias, uint64_t len)
 {
-	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
-	int uffd = uffd_args->uffd;
-	int pipefd = uffd_args->pipefd;
-	useconds_t delay = uffd_args->delay;
-	int64_t pages = 0;
-	struct timespec start;
-	struct timespec ts_diff;
-
-	clock_gettime(CLOCK_MONOTONIC, &start);
-	while (!quit_uffd_thread) {
-		struct uffd_msg msg;
-		struct pollfd pollfd[2];
-		char tmp_chr;
-		int r;
-		uint64_t addr;
-
-		pollfd[0].fd = uffd;
-		pollfd[0].events = POLLIN;
-		pollfd[1].fd = pipefd;
-		pollfd[1].events = POLLIN;
-
-		r = poll(pollfd, 2, -1);
-		switch (r) {
-		case -1:
-			pr_info("poll err");
-			continue;
-		case 0:
-			continue;
-		case 1:
-			break;
-		default:
-			pr_info("Polling uffd returned %d", r);
-			return NULL;
-		}
-
-		if (pollfd[0].revents & POLLERR) {
-			pr_info("uffd revents has POLLERR");
-			return NULL;
-		}
-
-		if (pollfd[1].revents & POLLIN) {
-			r = read(pollfd[1].fd, &tmp_chr, 1);
-			TEST_ASSERT(r == 1,
-				    "Error reading pipefd in UFFD thread\n");
-			return NULL;
-		}
-
-		if (!(pollfd[0].revents & POLLIN))
-			continue;
-
-		r = read(uffd, &msg, sizeof(msg));
-		if (r == -1) {
-			if (errno == EAGAIN)
-				continue;
-			pr_info("Read of uffd got errno %d\n", errno);
-			return NULL;
-		}
-
-		if (r != sizeof(msg)) {
-			pr_info("Read on uffd returned unexpected size: %d bytes", r);
-			return NULL;
-		}
-
-		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
-			continue;
+	size_t p;
 
-		if (delay)
-			usleep(delay);
-		addr = msg.arg.pagefault.address;
-		r = handle_uffd_page_request(uffd_args->uffd_mode, uffd, addr);
-		if (r < 0)
-			return NULL;
-		pages++;
+	TEST_ASSERT(alias != NULL, "Alias required for minor faults");
+	for (p = 0; p < (len / demand_paging_size); ++p) {
+		memcpy(alias + (p * demand_paging_size),
+		       guest_data_prototype, demand_paging_size);
 	}
-
-	ts_diff = timespec_elapsed(start);
-	PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n",
-		       pages, ts_diff.tv_sec, ts_diff.tv_nsec,
-		       pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
-
-	return NULL;
 }
 
-static void setup_demand_paging(struct kvm_vm *vm,
-				pthread_t *uffd_handler_thread, int pipefd,
-				int uffd_mode, useconds_t uffd_delay,
-				struct uffd_handler_args *uffd_args,
-				void *hva, void *alias, uint64_t len)
-{
-	bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR);
-	int uffd;
-	struct uffdio_api uffdio_api;
-	struct uffdio_register uffdio_register;
-	uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY;
-
-	PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n",
-		       is_minor ? "MINOR" : "MISSING",
-		       is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY");
-
-	/* In order to get minor faults, prefault via the alias. */
-	if (is_minor) {
-		size_t p;
-
-		expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
-
-		TEST_ASSERT(alias != NULL, "Alias required for minor faults");
-		for (p = 0; p < (len / demand_paging_size); ++p) {
-			memcpy(alias + (p * demand_paging_size),
-			       guest_data_prototype, demand_paging_size);
-		}
-	}
-
-	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
-	TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno);
-
-	uffdio_api.api = UFFD_API;
-	uffdio_api.features = 0;
-	TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1,
-		    "ioctl UFFDIO_API failed: %" PRIu64,
-		    (uint64_t)uffdio_api.api);
-
-	uffdio_register.range.start = (uint64_t)hva;
-	uffdio_register.range.len = len;
-	uffdio_register.mode = uffd_mode;
-	TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1,
-		    "ioctl UFFDIO_REGISTER failed");
-	TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) ==
-		    expected_ioctls, "missing userfaultfd ioctls");
-
-	uffd_args->uffd_mode = uffd_mode;
-	uffd_args->uffd = uffd;
-	uffd_args->pipefd = pipefd;
-	uffd_args->delay = uffd_delay;
-	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
-		       uffd_args);
-
-	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
-		       hva, hva + len);
-}
-
-struct test_params {
-	int uffd_mode;
-	useconds_t uffd_delay;
-	enum vm_mem_backing_src_type src_type;
-	bool partition_vcpu_memory_access;
-};
-
 static void run_test(enum vm_guest_mode mode, void *arg)
 {
 	struct test_params *p = arg;
-	pthread_t *uffd_handler_threads = NULL;
-	struct uffd_handler_args *uffd_args = NULL;
+	struct uffd_desc **uffd_descs = NULL;
 	struct timespec start;
 	struct timespec ts_diff;
-	int *pipefds = NULL;
 	struct kvm_vm *vm;
 	int vcpu_id;
-	int r;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
 				 p->src_type, p->partition_vcpu_memory_access);
@@ -299,15 +148,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	memset(guest_data_prototype, 0xAB, demand_paging_size);
 
 	if (p->uffd_mode) {
-		uffd_handler_threads =
-			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
-		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
-
-		uffd_args = malloc(nr_vcpus * sizeof(*uffd_args));
-		TEST_ASSERT(uffd_args, "Memory allocation failed");
-
-		pipefds = malloc(sizeof(int) * nr_vcpus * 2);
-		TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd");
+		uffd_descs = malloc(nr_vcpus * sizeof(struct uffd_desc *));
+		TEST_ASSERT(uffd_descs, "Memory allocation failed");
 
 		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
 			struct perf_test_vcpu_args *vcpu_args;
@@ -320,19 +162,17 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			vcpu_hva = addr_gpa2hva(vm, vcpu_args->gpa);
 			vcpu_alias = addr_gpa2alias(vm, vcpu_args->gpa);
 
+			prefault_mem(vcpu_alias,
+				vcpu_args->pages * perf_test_args.guest_page_size);
+
 			/*
 			 * Set up user fault fd to handle demand paging
 			 * requests.
 			 */
-			r = pipe2(&pipefds[vcpu_id * 2],
-				  O_CLOEXEC | O_NONBLOCK);
-			TEST_ASSERT(!r, "Failed to set up pipefd");
-
-			setup_demand_paging(vm, &uffd_handler_threads[vcpu_id],
-					    pipefds[vcpu_id * 2], p->uffd_mode,
-					    p->uffd_delay, &uffd_args[vcpu_id],
-					    vcpu_hva, vcpu_alias,
-					    vcpu_args->pages * perf_test_args.guest_page_size);
+			uffd_descs[vcpu_id] = uffd_setup_demand_paging(
+				p->uffd_mode, p->uffd_delay, vcpu_hva,
+				vcpu_args->pages * perf_test_args.guest_page_size,
+				&handle_uffd_page_request);
 		}
 	}
 
@@ -347,15 +187,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("All vCPU threads joined\n");
 
 	if (p->uffd_mode) {
-		char c;
-
 		/* Tell the user fault fd handler threads to quit */
-		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
-			r = write(pipefds[vcpu_id * 2 + 1], &c, 1);
-			TEST_ASSERT(r == 1, "Unable to write to pipefd");
-
-			pthread_join(uffd_handler_threads[vcpu_id], NULL);
-		}
+		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++)
+			uffd_stop_demand_paging(uffd_descs[vcpu_id]);
 	}
 
 	pr_info("Total guest execution time: %ld.%.9lds\n",
@@ -367,11 +201,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	perf_test_destroy_vm(vm);
 
 	free(guest_data_prototype);
-	if (p->uffd_mode) {
-		free(uffd_handler_threads);
-		free(uffd_args);
-		free(pipefds);
-	}
+	if (p->uffd_mode)
+		free(uffd_descs);
 }
 
 static void help(char *name)
diff --git a/tools/testing/selftests/kvm/include/userfaultfd_util.h b/tools/testing/selftests/kvm/include/userfaultfd_util.h
new file mode 100644
index 000000000000..dffb4e768d56
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/userfaultfd_util.h
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM userfaultfd util
+ * Adapted from demand_paging_test.c
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2019, Google, Inc.
+ * Copyright (C) 2022, Google, Inc.
+ */
+
+#define _GNU_SOURCE /* for pipe2 */
+
+#include <inttypes.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/userfaultfd.h>
+
+#include "test_util.h"
+
+typedef int (*uffd_handler_t)(int uffd_mode, int uffd, struct uffd_msg *msg);
+
+struct uffd_desc {
+	int uffd_mode;
+	int uffd;
+	int pipefds[2];
+	useconds_t delay;
+	uffd_handler_t handler;
+	pthread_t thread;
+};
+
+struct uffd_desc *uffd_setup_demand_paging(int uffd_mode,
+	useconds_t uffd_delay, void *hva, uint64_t len,
+	uffd_handler_t handler);
+
+void uffd_stop_demand_paging(struct uffd_desc *uffd);
+
+#ifdef PRINT_PER_PAGE_UPDATES
+#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__)
+#else
+#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__)
+#endif
+
+#ifdef PRINT_PER_VCPU_UPDATES
+#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__)
+#else
+#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__)
+#endif
diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
new file mode 100644
index 000000000000..4395032ccbe4
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM userfaultfd util
+ * Adapted from demand_paging_test.c
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2019, Google, Inc.
+ * Copyright (C) 2022, Google, Inc.
+ */
+
+#define _GNU_SOURCE /* for pipe2 */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include <poll.h>
+#include <pthread.h>
+#include <linux/userfaultfd.h>
+#include <sys/syscall.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "perf_test_util.h"
+#include "userfaultfd_util.h"
+
+#ifdef __NR_userfaultfd
+
+static void *uffd_handler_thread_fn(void *arg)
+{
+	struct uffd_desc *uffd_desc = (struct uffd_desc *)arg;
+	int uffd = uffd_desc->uffd;
+	int pipefd = uffd_desc->pipefds[0];
+	useconds_t delay = uffd_desc->delay;
+	int64_t pages = 0;
+	struct timespec start;
+	struct timespec ts_diff;
+
+	clock_gettime(CLOCK_MONOTONIC, &start);
+	while (1) {
+		struct uffd_msg msg;
+		struct pollfd pollfd[2];
+		char tmp_chr;
+		int r;
+
+		pollfd[0].fd = uffd;
+		pollfd[0].events = POLLIN;
+		pollfd[1].fd = pipefd;
+		pollfd[1].events = POLLIN;
+
+		r = poll(pollfd, 2, -1);
+		switch (r) {
+		case -1:
+			pr_info("poll err");
+			continue;
+		case 0:
+			continue;
+		case 1:
+			break;
+		default:
+			pr_info("Polling uffd returned %d", r);
+			return NULL;
+		}
+
+		if (pollfd[0].revents & POLLERR) {
+			pr_info("uffd revents has POLLERR");
+			return NULL;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			r = read(pollfd[1].fd, &tmp_chr, 1);
+			TEST_ASSERT(r == 1,
+				    "Error reading pipefd in UFFD thread\n");
+			return NULL;
+		}
+
+		if (!(pollfd[0].revents & POLLIN))
+			continue;
+
+		r = read(uffd, &msg, sizeof(msg));
+		if (r == -1) {
+			if (errno == EAGAIN)
+				continue;
+			pr_info("Read of uffd got errno %d\n", errno);
+			return NULL;
+		}
+
+		if (r != sizeof(msg)) {
+			pr_info("Read on uffd returned unexpected size: %d bytes", r);
+			return NULL;
+		}
+
+		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
+			continue;
+
+		if (delay)
+			usleep(delay);
+		r = uffd_desc->handler(uffd_desc->uffd_mode, uffd, &msg);
+		if (r < 0)
+			return NULL;
+		pages++;
+	}
+
+	ts_diff = timespec_elapsed(start);
+	PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n",
+		       pages, ts_diff.tv_sec, ts_diff.tv_nsec,
+		       pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
+
+	return NULL;
+}
+
+struct uffd_desc *uffd_setup_demand_paging(int uffd_mode,
+	useconds_t uffd_delay, void *hva, uint64_t len,
+	uffd_handler_t handler)
+{
+	struct uffd_desc *uffd_desc;
+	bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR);
+	int uffd;
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+	uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY;
+	int ret;
+
+	PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n",
+		       is_minor ? "MINOR" : "MISSING",
+		       is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY");
+
+	uffd_desc = malloc(sizeof(struct uffd_desc));
+	TEST_ASSERT(uffd_desc, "malloc failed");
+
+	/* In order to get minor faults, prefault via the alias. */
+	if (is_minor)
+		expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno);
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1,
+		    "ioctl UFFDIO_API failed: %" PRIu64,
+		    (uint64_t)uffdio_api.api);
+
+	uffdio_register.range.start = (uint64_t)hva;
+	uffdio_register.range.len = len;
+	uffdio_register.mode = uffd_mode;
+	TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1,
+		    "ioctl UFFDIO_REGISTER failed");
+	TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) ==
+		    expected_ioctls, "missing userfaultfd ioctls");
+
+	ret = pipe2(uffd_desc->pipefds, O_CLOEXEC | O_NONBLOCK);
+	TEST_ASSERT(!ret, "Failed to set up pipefd");
+
+	uffd_desc->uffd_mode = uffd_mode;
+	uffd_desc->uffd = uffd;
+	uffd_desc->delay = uffd_delay;
+	uffd_desc->handler = handler;
+	pthread_create(&uffd_desc->thread, NULL, uffd_handler_thread_fn,
+		       uffd_desc);
+
+	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
+		       hva, hva + len);
+
+	return uffd_desc;
+}
+
+void uffd_stop_demand_paging(struct uffd_desc *uffd)
+{
+	char c = 0;
+	int ret;
+
+	ret = write(uffd->pipefds[1], &c, 1);
+	TEST_ASSERT(ret == 1, "Unable to write to pipefd");
+
+	ret = pthread_join(uffd->thread, NULL);
+	TEST_ASSERT(ret == 0, "Pthread_join failed.");
+
+	close(uffd->uffd);
+
+	close(uffd->pipefds[1]);
+	close(uffd->pipefds[0]);
+
+	free(uffd);
+}
+
+#endif /* __NR_userfaultfd */
From patchwork Wed Mar 23 22:53:56 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12790188
Date: Wed, 23 Mar 2022 15:53:56 -0700
In-Reply-To: <20220323225405.267155-1-ricarkol@google.com>
Message-Id: <20220323225405.267155-3-ricarkol@google.com>
Subject: [PATCH v2 02/11] KVM: selftests: aarch64: Add vm_get_pte_gpa library function
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com,
    eric.auger@redhat.com, oupton@google.com, reijiw@google.com,
    rananta@google.com, bgardon@google.com, axelrasmussen@google.com,
    Ricardo Koller
List-ID: X-Mailing-List: kvm@vger.kernel.org

Add a library function (usable in-guest) to get the GPA of the PTE of a
particular GVA. This will be used in a future commit by a test to clear
and check the AF (access flag) of a particular page.

Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/include/aarch64/processor.h |  2 ++
 .../selftests/kvm/lib/aarch64/processor.c     | 24 +++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 8f9f46979a00..caa572d83062 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -125,6 +125,8 @@ void vm_install_exception_handler(struct kvm_vm *vm,
 void vm_install_sync_handler(struct kvm_vm *vm,
 			     int vector, int ec, handler_fn handler);
 
+vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva);
+
 static inline void cpu_relax(void)
 {
 	asm volatile("yield" ::: "memory");
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 9343d82519b4..ee006d354b79 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -139,7 +139,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	_virt_pg_map(vm, vaddr, paddr, attr_idx);
 }
 
-vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	uint64_t *ptep;
 
@@ -162,7 +162,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 			goto unmapped_gva;
 		/* fall through */
 	case 2:
-		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, gva) * 8;
+		ptep = (uint64_t *)(pte_addr(vm, *ptep) + pte_index(vm, gva) * 8);
 		if (!ptep)
 			goto unmapped_gva;
 		break;
@@ -170,6 +170,26 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 		TEST_FAIL("Page table levels must be 2, 3, or 4");
 	}
 
+	return (vm_paddr_t)ptep;
+
+unmapped_gva:
+	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
+	exit(1);
+}
+
+vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	uint64_t *ptep;
+	vm_paddr_t ptep_gpa;
+
+	ptep_gpa = vm_get_pte_gpa(vm, gva);
+	if (!ptep_gpa)
+		goto unmapped_gva;
+
+	ptep = addr_gpa2hva(vm, ptep_gpa);
+	if (!ptep)
+		goto unmapped_gva;
+
 	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
 
 unmapped_gva:
From patchwork Wed Mar 23 22:53:57 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12790190
Date: Wed, 23 Mar 2022 15:53:57 -0700
In-Reply-To: <20220323225405.267155-1-ricarkol@google.com>
Message-Id: <20220323225405.267155-4-ricarkol@google.com>
Subject: [PATCH v2 03/11] KVM: selftests: Add vm_alloc_page_table_in_memslot library function
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com,
    eric.auger@redhat.com, oupton@google.com, reijiw@google.com,
    rananta@google.com, bgardon@google.com, axelrasmussen@google.com,
    Ricardo Koller
List-ID: X-Mailing-List: kvm@vger.kernel.org

Add a library function to allocate a page-table physical page in a
particular memslot. The default behavior is to create new page-table
pages in memslot 0.

Reviewed-by: Ben Gardon
Signed-off-by: Ricardo Koller
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c          | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 4ed6aa049a91..3a69b35e37cc 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -306,6 +306,7 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+vm_paddr_t vm_alloc_page_table_in_memslot(struct kvm_vm *vm, uint32_t pt_memslot);
 
 /*
  * Create a VM with reasonable defaults
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index d8cf851ab119..e18f1c93e4b4 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2386,9 +2386,15 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+vm_paddr_t vm_alloc_page_table_in_memslot(struct kvm_vm *vm, uint32_t pt_memslot)
+{
+	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+				 pt_memslot);
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	return vm_alloc_page_table_in_memslot(vm, 0);
 }
 
 /*
CAwkLoMgc/qP3QbDy25hkiSReX2tT9tC6vSgSEySGo7isptcgwss2l/gkMch3s7tfuzu 98Llsg+DNc2bStrrIO/cd3U7ESCoLfTBIbXdoEDJ6tA743vQnawC4XKjB+SFuExCYNK5 XSeA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=oEktOHA4h/Pzsk6xdNePnTp7Hf0kOVWpP7CdgL+bzKw=; b=5zCLv9j69YUUxHWamDe9Td6F5c+W+Q2sqdyeso5t8fnp3YM8O7Dhc+0VUXXenQuzhS Xzpy5GFYNlq1AYJptoW07pcdj+o4TeUwsj5TaWphuBF5nHqpdN/QWKRfdKxPDFyS3+wZ 6rLIh/k1cG53xXa9rO4ImqSHoiaXt+Xyp/7g1V3TiqG8NlXswmu4abn4fwg3yNPVLGQB wm1MonPKqcQTpI+s6jFzPJ7KojXFbsXaSIq2o1VkEBT7b9oW8qYJx50bfWuVGgVsFYgA 9ob4hKDOJahMehFT9wjQa1VSZrw6q5HLLe+QKapMB8J3+LiAbjOTjZoeObObO4Vk8GtC 3sPQ== X-Gm-Message-State: AOAM532GKtZNBQSO+xWmxX8lpBplaupdTeVODo59I3kogzW2tL6rZP8p kO8ztwH8l+0Zx+XipwBkv+IuNvyUmrw8z1ERLmuKLQmb+6/sqW0x+DyFpFoy9Qg1V+xYIqYOZAw 1ylf98KRyEQWD6PhMe0Hs3HTm/Fc3q078V0TdT0dz9nXYcBQo5KXFAfuH8jghnIw= X-Google-Smtp-Source: ABdhPJymrSPUSVCVfy9VYz8wQJ7EnH3jfORcGwmLLgCnmXsuGDss/MsB32qZGt5NM91bPbaw971dur38maD3xA== X-Received: from ricarkol2.c.googlers.com ([fda3:e722:ac3:cc00:24:72f4:c0a8:62fe]) (user=ricarkol job=sendgmr) by 2002:a17:902:b189:b0:14d:6f87:7c25 with SMTP id s9-20020a170902b18900b0014d6f877c25mr2446366plr.31.1648076055887; Wed, 23 Mar 2022 15:54:15 -0700 (PDT) Date: Wed, 23 Mar 2022 15:53:58 -0700 In-Reply-To: <20220323225405.267155-1-ricarkol@google.com> Message-Id: <20220323225405.267155-5-ricarkol@google.com> Mime-Version: 1.0 References: <20220323225405.267155-1-ricarkol@google.com> X-Mailer: git-send-email 2.35.1.894.gb6a874cedc-goog Subject: [PATCH v2 04/11] KVM: selftests: aarch64: Export _virt_pg_map with a pt_memslot arg From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo 
Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add an argument, pt_memslot, into _virt_pg_map in order to use a specific memslot for the page-table allocations performed when creating a new map. This will be used in a future commit to test having PTEs stored on memslots with different setups (e.g., hugetlb with a hole). Signed-off-by: Ricardo Koller --- .../selftests/kvm/include/aarch64/processor.h | 3 +++ tools/testing/selftests/kvm/lib/aarch64/processor.c | 12 ++++++------ 2 files changed, 9 insertions(+), 6 deletions(-) diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h index caa572d83062..3965a5ac778e 100644 --- a/tools/testing/selftests/kvm/include/aarch64/processor.h +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h @@ -125,6 +125,9 @@ void vm_install_exception_handler(struct kvm_vm *vm, void vm_install_sync_handler(struct kvm_vm *vm, int vector, int ec, handler_fn handler); +void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + uint64_t flags, uint32_t pt_memslot); + vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva); static inline void cpu_relax(void) diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index ee006d354b79..8f4ec1be4364 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -86,8 +86,8 @@ void virt_pgd_alloc(struct kvm_vm *vm) } } -static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, - uint64_t flags) +void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + uint64_t flags, uint32_t pt_memslot) { uint8_t attr_idx = flags & 7; uint64_t *ptep; @@ -108,18 +108,18 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, vaddr) * 8; if (!*ptep) - *ptep = 
vm_alloc_page_table(vm) | 3; + *ptep = vm_alloc_page_table_in_memslot(vm, pt_memslot) | 3; switch (vm->pgtable_levels) { case 4: ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, vaddr) * 8; if (!*ptep) - *ptep = vm_alloc_page_table(vm) | 3; + *ptep = vm_alloc_page_table_in_memslot(vm, pt_memslot) | 3; /* fall through */ case 3: ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pmd_index(vm, vaddr) * 8; if (!*ptep) - *ptep = vm_alloc_page_table(vm) | 3; + *ptep = vm_alloc_page_table_in_memslot(vm, pt_memslot) | 3; /* fall through */ case 2: ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, vaddr) * 8; @@ -136,7 +136,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { uint64_t attr_idx = 4; /* NORMAL (See DEFAULT_MAIR_EL1) */ - _virt_pg_map(vm, vaddr, paddr, attr_idx); + _virt_pg_map(vm, vaddr, paddr, attr_idx, 0); } vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva)
From patchwork Wed Mar 23 22:53:59 2022 X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 12790192 Date: Wed, 23 Mar 2022 15:53:59 -0700 In-Reply-To: <20220323225405.267155-1-ricarkol@google.com> Message-Id: <20220323225405.267155-6-ricarkol@google.com> Subject: [PATCH v2 05/11] KVM: selftests: Add missing close and munmap in __vm_mem_region_delete From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller
Deleting a memslot (when freeing a VM) does not close the backing fd, nor does it unmap the alias mapping. Fix by adding the missing close and munmap. Signed-off-by: Ricardo Koller Reviewed-by: Ben Gardon --- tools/testing/selftests/kvm/lib/kvm_util.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index e18f1c93e4b4..268ad3d75fe2 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -679,6 +679,12 @@ static void __vm_mem_region_delete(struct kvm_vm *vm, sparsebit_free(&region->unused_phy_pages); ret = munmap(region->mmap_start, region->mmap_size); TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno); + if (region->fd >= 0) { + /* There's an extra map when using shared memory.
*/ + ret = munmap(region->mmap_alias, region->mmap_size); + TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno); + close(region->fd); + } free(region); }
From patchwork Wed Mar 23 22:54:00 2022 X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 12790193 Date: Wed, 23 Mar 2022 15:54:00 -0700 In-Reply-To: <20220323225405.267155-1-ricarkol@google.com> Message-Id: <20220323225405.267155-7-ricarkol@google.com> Subject: [PATCH v2 06/11] KVM: selftests: Add vm_mem_region_get_src_fd library function From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller
Add a library function to get the backing source FD of a memslot.
Signed-off-by: Ricardo Koller --- .../selftests/kvm/include/kvm_util_base.h | 1 + tools/testing/selftests/kvm/lib/kvm_util.c | 23 +++++++++++++++++++ 2 files changed, 24 insertions(+) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 3a69b35e37cc..c8dce12a9a52 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -163,6 +163,7 @@ int _kvm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg); void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags); void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); +int vm_mem_region_get_src_fd(struct kvm_vm *vm, uint32_t memslot); void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 268ad3d75fe2..a0a9cd575fac 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -580,6 +580,29 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start, return &region->region; } +/* + * KVM Userspace Memory Get Backing Source FD + * + * Input Args: + * vm - Virtual Machine + * memslot - KVM memory slot ID + * + * Output Args: None + * + * Return: + * Backing source file descriptor, -1 if the memslot is an anonymous region. + * + * Returns the backing source fd of a memslot, so tests can use it to punch + * holes, or to set up permissions.
+ */ +int vm_mem_region_get_src_fd(struct kvm_vm *vm, uint32_t memslot) +{ + struct userspace_mem_region *region; + + region = memslot2region(vm, memslot); + return region->fd; +} + /* * VCPU Find *
From patchwork Wed Mar 23 22:54:01 2022 X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 12790194 Date: Wed, 23 Mar 2022 15:54:01 -0700 In-Reply-To: <20220323225405.267155-1-ricarkol@google.com> Message-Id: <20220323225405.267155-8-ricarkol@google.com> Subject: [PATCH v2 07/11] KVM: selftests: aarch64: Add aarch64/page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller
Add a new test for stage 2 faults when using different combinations of guest accesses (e.g., write, S1PTW), backing source type (e.g., anon) and types of
faults (e.g., read on hugetlbfs with a hole). The next commits will add different handling methods and more faults (e.g., uffd and dirty logging). This first commit starts by adding two sanity checks for all types of accesses: AF setting by the hw, and accessing memslots with holes. Note that this commit borrows some code from kvm-unit-tests: RET, MOV_X0, and flush_tlb_page. Signed-off-by: Ricardo Koller --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/aarch64/page_fault_test.c | 667 ++++++++++++++++++ 2 files changed, 668 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/page_fault_test.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index bc5f89b3700e..6a192798b217 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -103,6 +103,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list +TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c new file mode 100644 index 000000000000..00477a4f10cb --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -0,0 +1,667 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * page_fault_test.c - Test stage 2 faults. + * + * This test tries different combinations of guest accesses (e.g., write, + * S1PTW), backing source type (e.g., anon) and types of faults (e.g., read on + * hugetlbfs with a hole). It checks that the expected handling method is + * called (e.g., uffd faults with the right address and write/read flag). 
+ */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include "guest_modes.h" +#include "userfaultfd_util.h" + +#define VCPU_ID 0 + +#define TEST_MEM_SLOT_INDEX 1 +#define TEST_PT_SLOT_INDEX 2 + +/* Max number of backing pages per guest page */ +#define BACKING_PG_PER_GUEST_PG (64 / 4) + +/* Test memslot size in backing source pages */ +#define TEST_MEMSLOT_BACKING_SRC_NPAGES (1 * BACKING_PG_PER_GUEST_PG) + +/* PT memslot size in backing source pages */ +#define PT_MEMSLOT_BACKING_SRC_NPAGES (4 * BACKING_PG_PER_GUEST_PG) + +/* Guest virtual addresses that point to the test page and its PTE. */ +#define GUEST_TEST_GVA 0xc0000000 +#define GUEST_TEST_EXEC_GVA 0xc0000008 +#define GUEST_TEST_PTE_GVA 0xd0000000 + +/* Access flag */ +#define PTE_AF (1ULL << 10) + +/* Access flag update enable/disable */ +#define TCR_EL1_HA (1ULL << 39) + +#define CMD_SKIP_TEST (-1LL) +#define CMD_HOLE_PT (1ULL << 2) +#define CMD_HOLE_TEST (1ULL << 3) + +#define PREPARE_FN_NR 10 +#define CHECK_FN_NR 10 + +static const uint64_t test_gva = GUEST_TEST_GVA; +static const uint64_t test_exec_gva = GUEST_TEST_EXEC_GVA; +static const uint64_t pte_gva = GUEST_TEST_PTE_GVA; +uint64_t pte_gpa; + +enum { PT, TEST, NR_MEMSLOTS}; + +struct memslot_desc { + void *hva; + uint64_t gpa; + uint64_t size; + uint64_t guest_pages; + uint64_t backing_pages; + enum vm_mem_backing_src_type src_type; + uint32_t idx; +} memslot[NR_MEMSLOTS] = { + { + .idx = TEST_PT_SLOT_INDEX, + .backing_pages = PT_MEMSLOT_BACKING_SRC_NPAGES, + }, + { + .idx = TEST_MEM_SLOT_INDEX, + .backing_pages = TEST_MEMSLOT_BACKING_SRC_NPAGES, + }, +}; + +static struct event_cnt { + int aborts; + int fail_vcpu_runs; +} events; + +struct test_desc { + const char *name; + uint64_t mem_mark_cmd; + /* Skip the test if any prepare function returns false */ + bool (*guest_prepare[PREPARE_FN_NR])(void); + void (*guest_test)(void); + void (*guest_test_check[CHECK_FN_NR])(void); + void (*dabt_handler)(struct ex_regs *regs); + void
(*iabt_handler)(struct ex_regs *regs); + uint32_t pt_memslot_flags; + uint32_t test_memslot_flags; + void (*guest_pre_run)(struct kvm_vm *vm); + bool skip; + struct event_cnt expected_events; +}; + +struct test_params { + enum vm_mem_backing_src_type src_type; + struct test_desc *test_desc; +}; + + +static inline void flush_tlb_page(uint64_t vaddr) +{ + uint64_t page = vaddr >> 12; + + dsb(ishst); + asm("tlbi vaae1is, %0" :: "r" (page)); + dsb(ish); + isb(); +} + +#define RET 0xd65f03c0 +#define MOV_X0(x) (0xd2800000 | (((x) & 0xffff) << 5)) + +static void guest_test_nop(void) +{} + +static void guest_test_write64(void) +{ + uint64_t val; + + WRITE_ONCE(*((uint64_t *)test_gva), 0x0123456789ABCDEF); + val = READ_ONCE(*(uint64_t *)test_gva); + GUEST_ASSERT_EQ(val, 0x0123456789ABCDEF); +} + +/* Check the system for atomic instructions. */ +static bool guest_check_lse(void) +{ + uint64_t isar0 = read_sysreg(id_aa64isar0_el1); + uint64_t atomic = (isar0 >> 20) & 7; + + return atomic >= 2; +} + +/* Compare and swap instruction. 
*/ +static void guest_test_cas(void) +{ + uint64_t val; + uint64_t addr = test_gva; + + GUEST_ASSERT_EQ(guest_check_lse(), 1); + asm volatile(".arch_extension lse\n" + "casal %0, %1, [%2]\n" + :: "r" (0), "r" (0x0123456789ABCDEF), "r" (addr)); + val = READ_ONCE(*(uint64_t *)(addr)); + GUEST_ASSERT_EQ(val, 0x0123456789ABCDEF); +} + +static void guest_test_read64(void) +{ + uint64_t val; + + val = READ_ONCE(*(uint64_t *)test_gva); + GUEST_ASSERT_EQ(val, 0); +} + +/* Address translation instruction */ +static void guest_test_at(void) +{ + uint64_t par; + uint64_t addr = 0; + + asm volatile("at s1e1r, %0" :: "r" (test_gva)); + par = read_sysreg(par_el1); + + /* Bit 1 indicates whether the AT was successful */ + GUEST_ASSERT_EQ(par & 1, 0); + /* The PA in bits [51:12] */ + addr = par & (((1ULL << 40) - 1) << 12); + GUEST_ASSERT_EQ(addr, memslot[TEST].gpa); +} + +static void guest_test_dc_zva(void) +{ + /* The smallest guaranteed block size (bs) is a word. */ + uint16_t val; + + asm volatile("dc zva, %0\n" + "dsb ish\n" + :: "r" (test_gva)); + val = READ_ONCE(*(uint16_t *)test_gva); + GUEST_ASSERT_EQ(val, 0); +} + +static void guest_test_ld_preidx(void) +{ + uint64_t val; + uint64_t addr = test_gva - 8; + + /* + * This ends up accessing "test_gva + 8 - 8", where "test_gva - 8" + * is not backed by a memslot. + */ + asm volatile("ldr %0, [%1, #8]!" + : "=r" (val), "+r" (addr)); + GUEST_ASSERT_EQ(val, 0); + GUEST_ASSERT_EQ(addr, test_gva); +} + +static void guest_test_st_preidx(void) +{ + uint64_t val = 0x0123456789ABCDEF; + uint64_t addr = test_gva - 8; + + asm volatile("str %0, [%1, #8]!" + : "+r" (val), "+r" (addr)); + + GUEST_ASSERT_EQ(addr, test_gva); + val = READ_ONCE(*(uint64_t *)test_gva); +} + +static bool guest_set_ha(void) +{ + uint64_t mmfr1 = read_sysreg(id_aa64mmfr1_el1); + uint64_t hadbs = mmfr1 & 6; + uint64_t tcr; + + /* Skip if HA is not supported. 
*/ + if (hadbs == 0) + return false; + + tcr = read_sysreg(tcr_el1) | TCR_EL1_HA; + write_sysreg(tcr, tcr_el1); + isb(); + + return true; +} + +static bool guest_clear_pte_af(void) +{ + *((uint64_t *)pte_gva) &= ~PTE_AF; + flush_tlb_page(pte_gva); + + return true; +} + +static void guest_check_pte_af(void) +{ + flush_tlb_page(pte_gva); + GUEST_ASSERT_EQ(*((uint64_t *)pte_gva) & PTE_AF, PTE_AF); +} + +static void guest_test_exec(void) +{ + int (*code)(void) = (int (*)(void))test_exec_gva; + int ret; + + ret = code(); + GUEST_ASSERT_EQ(ret, 0x77); +} + +static bool guest_prepare(struct test_desc *test) +{ + bool (*prepare_fn)(void); + int i; + + for (i = 0; i < PREPARE_FN_NR; i++) { + prepare_fn = test->guest_prepare[i]; + if (prepare_fn && !prepare_fn()) + return false; + } + + return true; +} + +static void guest_test_check(struct test_desc *test) +{ + void (*check_fn)(void); + int i; + + for (i = 0; i < CHECK_FN_NR; i++) { + check_fn = test->guest_test_check[i]; + if (!check_fn) + continue; + check_fn(); + } +} + +static void guest_code(struct test_desc *test) +{ + if (!test->guest_test) + test->guest_test = guest_test_nop; + + if (!guest_prepare(test)) + GUEST_SYNC(CMD_SKIP_TEST); + + GUEST_SYNC(test->mem_mark_cmd); + test->guest_test(); + + guest_test_check(test); + GUEST_DONE(); +} + +static void no_dabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, read_sysreg(far_el1)); +} + +static void no_iabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, regs->pc); +} + +static void punch_hole_in_memslot(struct kvm_vm *vm, + struct memslot_desc *memslot) +{ + int ret, fd; + void *hva; + + fd = vm_mem_region_get_src_fd(vm, memslot->idx); + if (fd != -1) { + ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, memslot->size); + TEST_ASSERT(ret == 0, "fallocate failed, errno: %d\n", errno); + } else { + hva = addr_gpa2hva(vm, memslot->gpa); + ret = madvise(hva, memslot->size, MADV_DONTNEED); + TEST_ASSERT(ret == 0, "madvise failed, errno: 
%d\n", errno); + } +} + +static void handle_cmd(struct kvm_vm *vm, int cmd) +{ + if (cmd & CMD_HOLE_PT) + punch_hole_in_memslot(vm, &memslot[PT]); + if (cmd & CMD_HOLE_TEST) + punch_hole_in_memslot(vm, &memslot[TEST]); +} + +static void sync_stats_from_guest(struct kvm_vm *vm) +{ + struct event_cnt *ec = addr_gva2hva(vm, (uint64_t)&events); + + events.aborts += ec->aborts; +} + +void fail_vcpu_run_no_handler(int ret) +{ + TEST_FAIL("Unexpected vcpu run failure\n"); +} + +static uint64_t get_total_guest_pages(enum vm_guest_mode mode, + struct test_params *p) +{ + uint64_t large_page_size = get_backing_src_pagesz(p->src_type); + uint64_t guest_page_size = vm_guest_mode_params[mode].page_size; + uint64_t size; + + size = PT_MEMSLOT_BACKING_SRC_NPAGES * large_page_size; + size += TEST_MEMSLOT_BACKING_SRC_NPAGES * large_page_size; + + return size / guest_page_size; +} + +static void load_exec_code_for_test(void) +{ + uint32_t *code; + + /* Write this "code" into test_exec_gva */ + assert(test_exec_gva - test_gva); + code = memslot[TEST].hva + 8; + + code[0] = MOV_X0(0x77); + code[1] = RET; +} + +static void setup_guest_args(struct kvm_vm *vm, struct test_desc *test) +{ + vm_vaddr_t test_desc_gva; + + test_desc_gva = vm_vaddr_alloc_page(vm); + memcpy(addr_gva2hva(vm, test_desc_gva), test, + sizeof(struct test_desc)); + vcpu_args_set(vm, 0, 1, test_desc_gva); +} + +static void setup_abort_handlers(struct kvm_vm *vm, struct test_desc *test) +{ + vm_init_descriptor_tables(vm); + vcpu_init_descriptor_tables(vm, VCPU_ID); + if (!test->dabt_handler) + test->dabt_handler = no_dabt_handler; + if (!test->iabt_handler) + test->iabt_handler = no_iabt_handler; + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + 0x25, test->dabt_handler); + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + 0x21, test->iabt_handler); +} + +static void setup_memslots(struct kvm_vm *vm, enum vm_guest_mode mode, + struct test_params *p) +{ + uint64_t large_page_size = 
get_backing_src_pagesz(p->src_type); + uint64_t guest_page_size = vm_guest_mode_params[mode].page_size; + struct test_desc *test = p->test_desc; + uint64_t hole_gpa; + uint64_t alignment; + int i; + + /* Calculate the test and PT memslot sizes */ + for (i = 0; i < NR_MEMSLOTS; i++) { + memslot[i].size = large_page_size * memslot[i].backing_pages; + memslot[i].guest_pages = memslot[i].size / guest_page_size; + memslot[i].src_type = p->src_type; + } + + TEST_ASSERT(memslot[TEST].size >= guest_page_size, + "The test memslot should have space for one guest page.\n"); + TEST_ASSERT(memslot[PT].size >= (4 * guest_page_size), + "The PT memslot should have space for 4 guest pages.\n"); + + /* Place the memslots' GPAs at the end of physical memory */ + alignment = max(large_page_size, guest_page_size); + memslot[TEST].gpa = (vm_get_max_gfn(vm) - memslot[TEST].guest_pages) * + guest_page_size; + memslot[TEST].gpa = align_down(memslot[TEST].gpa, alignment); + /* Add a 1-guest_page-hole between the two memslots */ + hole_gpa = memslot[TEST].gpa - guest_page_size; + virt_pg_map(vm, test_gva - guest_page_size, hole_gpa); + memslot[PT].gpa = hole_gpa - (memslot[PT].guest_pages * + guest_page_size); + memslot[PT].gpa = align_down(memslot[PT].gpa, alignment); + + /* Create memslots for the test data and a PTE. */ + vm_userspace_mem_region_add(vm, p->src_type, memslot[PT].gpa, + memslot[PT].idx, memslot[PT].guest_pages, + test->pt_memslot_flags); + vm_userspace_mem_region_add(vm, p->src_type, memslot[TEST].gpa, + memslot[TEST].idx, memslot[TEST].guest_pages, + test->test_memslot_flags); + + for (i = 0; i < NR_MEMSLOTS; i++) + memslot[i].hva = addr_gpa2hva(vm, memslot[i].gpa); + + /* Map test_gva using the PT memslot. */ + _virt_pg_map(vm, test_gva, memslot[TEST].gpa, + 4 /* NORMAL (See DEFAULT_MAIR_EL1) */, + TEST_PT_SLOT_INDEX); + + /* + * Find the PTE of the test page and map it in the guest so it can + * clear the AF.
+ */ + pte_gpa = vm_get_pte_gpa(vm, test_gva); + TEST_ASSERT(memslot[PT].gpa <= pte_gpa && + pte_gpa < (memslot[PT].gpa + memslot[PT].size), + "The PTE should be in the PT memslot."); + /* This is an arbitrary requirement just to make things simpler. */ + TEST_ASSERT(pte_gpa % guest_page_size == 0, + "The pte_gpa (%p) should be aligned to the guest page (%lx).", + (void *)pte_gpa, guest_page_size); + virt_pg_map(vm, pte_gva, pte_gpa); +} + +static void check_event_counts(struct test_desc *test) +{ + ASSERT_EQ(test->expected_events.aborts, events.aborts); +} + +static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) +{ + struct test_desc *test = p->test_desc; + + pr_debug("Test: %s\n", test->name); + pr_debug("Testing guest mode: %s\n", vm_guest_mode_string(mode)); + pr_debug("Testing memory backing src type: %s\n", + vm_mem_backing_src_alias(p->src_type)->name); +} + +static void reset_event_counts(void) +{ + memset(&events, 0, sizeof(events)); +} + +static bool vcpu_run_loop(struct kvm_vm *vm, struct test_desc *test) +{ + bool skip_test = false; + struct ucall uc; + int stage; + + for (stage = 0; ; stage++) { + vcpu_run(vm, VCPU_ID); + + switch (get_ucall(vm, VCPU_ID, &uc)) { + case UCALL_SYNC: + if (uc.args[1] == CMD_SKIP_TEST) { + pr_debug("Skipped.\n"); + skip_test = true; + goto done; + } + handle_cmd(vm, uc.args[1]); + break; + case UCALL_ABORT: + TEST_FAIL("%s at %s:%ld\n\tvalues: %#lx, %#lx", + (const char *)uc.args[0], + __FILE__, uc.args[1], uc.args[2], uc.args[3]); + break; + case UCALL_DONE: + pr_debug("Done.\n"); + goto done; + default: + TEST_FAIL("Unknown ucall %lu", uc.cmd); + } + } + +done: + return skip_test; +} + +static void run_test(enum vm_guest_mode mode, void *arg) +{ + struct test_params *p = (struct test_params *)arg; + struct test_desc *test = p->test_desc; + struct kvm_vm *vm; + bool skip_test = false; + + print_test_banner(mode, p); + + vm = vm_create_with_vcpus(mode, 1, DEFAULT_GUEST_PHY_PAGES, +
get_total_guest_pages(mode, p), 0, guest_code, NULL); + ucall_init(vm, NULL); + + reset_event_counts(); + setup_memslots(vm, mode, p); + + load_exec_code_for_test(); + setup_abort_handlers(vm, test); + setup_guest_args(vm, test); + + if (test->guest_pre_run) + test->guest_pre_run(vm); + + sync_global_to_guest(vm, memslot); + + skip_test = vcpu_run_loop(vm, test); + + sync_stats_from_guest(vm); + ucall_uninit(vm); + kvm_vm_free(vm); + + if (!skip_test) + check_event_counts(test); +} + +static void for_each_test_and_guest_mode(void (*func)(enum vm_guest_mode, void *), + enum vm_mem_backing_src_type src_type); + +static void help(char *name) +{ + puts(""); + printf("usage: %s [-h] [-m mode] [-s mem-type]\n", name); + puts(""); + guest_modes_help(); + backing_src_help("-s"); + puts(""); +} + +int main(int argc, char *argv[]) +{ + enum vm_mem_backing_src_type src_type; + int opt; + + setbuf(stdout, NULL); + + src_type = DEFAULT_VM_MEM_SRC; + + guest_modes_append_default(); + + while ((opt = getopt(argc, argv, "hm:s:")) != -1) { + switch (opt) { + case 'm': + guest_modes_cmdline(optarg); + break; + case 's': + src_type = parse_backing_src_type(optarg); + break; + case 'h': + default: + help(argv[0]); + exit(0); + } + } + + for_each_test_and_guest_mode(run_test, src_type); + return 0; +} + +#define SNAME(s) #s +#define SCAT(a, b) SNAME(a ## _ ## b) + +#define TEST_BASIC_ACCESS(__a, ...) 
\ +{ \ + .name = SNAME(BASIC_ACCESS ## _ ## __a), \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define __AF_TEST_ARGS \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, }, \ + .guest_test_check = { guest_check_pte_af, }, \ + +#define __AF_LSE_TEST_ARGS \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, \ + guest_check_lse, }, \ + .guest_test_check = { guest_check_pte_af, }, \ + +#define __PREPARE_LSE_TEST_ARGS \ + .guest_prepare = { guest_check_lse, }, + +#define TEST_HW_ACCESS_FLAG(__a) \ + TEST_BASIC_ACCESS(__a, __AF_TEST_ARGS) + +#define TEST_ACCESS_ON_HOLE_NO_FAULTS(__a, ...) \ +{ \ + .name = SNAME(ACCESS_ON_HOLE_NO_FAULTS ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_TEST, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +static struct test_desc tests[] = { + /* Check that HW is setting the AF (sanity checks). */ + TEST_HW_ACCESS_FLAG(guest_test_read64), + TEST_HW_ACCESS_FLAG(guest_test_ld_preidx), + TEST_BASIC_ACCESS(guest_test_cas, __AF_LSE_TEST_ARGS), + TEST_HW_ACCESS_FLAG(guest_test_write64), + TEST_HW_ACCESS_FLAG(guest_test_st_preidx), + TEST_HW_ACCESS_FLAG(guest_test_dc_zva), + TEST_HW_ACCESS_FLAG(guest_test_exec), + + /* Accessing a hole shouldn't fault (more sanity checks). 
*/ + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_read64), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_ld_preidx), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_write64), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_at), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_dc_zva), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_st_preidx), + + { 0 }, +}; + +static void for_each_test_and_guest_mode( + void (*func)(enum vm_guest_mode m, void *a), + enum vm_mem_backing_src_type src_type) +{ + struct test_desc *t; + + for (t = &tests[0]; t->name; t++) { + if (t->skip) + continue; + + struct test_params p = { + .src_type = src_type, + .test_desc = t, + }; + + for_each_guest_mode(run_test, &p); + } +}
From patchwork Wed Mar 23 22:54:02 2022
Date: Wed, 23 Mar 2022 15:54:02 -0700
In-Reply-To: <20220323225405.267155-1-ricarkol@google.com>
Message-Id: <20220323225405.267155-9-ricarkol@google.com>
References: <20220323225405.267155-1-ricarkol@google.com>
Subject: [PATCH v2 08/11] KVM: 
selftests: aarch64: Add userfaultfd tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some userfaultfd tests into page_fault_test. Punch holes into the data and/or page-table memslots, perform some accesses, and check that the faults are taken (or not taken) when expected. Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 232 +++++++++++++++++- 1 file changed, 229 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index 00477a4f10cb..99449eaddb2b 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -57,6 +57,8 @@ uint64_t pte_gpa; enum { PT, TEST, NR_MEMSLOTS}; struct memslot_desc { + size_t paging_size; + char *data_copy; void *hva; uint64_t gpa; uint64_t size; @@ -78,6 +80,9 @@ struct memslot_desc { static struct event_cnt { int aborts; int fail_vcpu_runs; + int uffd_faults; + /* uffd_faults is incremented from multiple threads. 
*/ + pthread_mutex_t uffd_faults_mutex; } events; struct test_desc { @@ -87,6 +92,8 @@ struct test_desc { bool (*guest_prepare[PREPARE_FN_NR])(void); void (*guest_test)(void); void (*guest_test_check[CHECK_FN_NR])(void); + int (*uffd_pt_handler)(int mode, int uffd, struct uffd_msg *msg); + int (*uffd_test_handler)(int mode, int uffd, struct uffd_msg *msg); void (*dabt_handler)(struct ex_regs *regs); void (*iabt_handler)(struct ex_regs *regs); uint32_t pt_memslot_flags; @@ -305,6 +312,56 @@ static void no_iabt_handler(struct ex_regs *regs) GUEST_ASSERT_1(false, regs->pc); } +static int uffd_generic_handler(int uffd_mode, int uffd, + struct uffd_msg *msg, struct memslot_desc *memslot, + bool expect_write) +{ + uint64_t addr = msg->arg.pagefault.address; + uint64_t flags = msg->arg.pagefault.flags; + struct uffdio_copy copy; + int ret; + + TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING, + "The only expected UFFD mode is MISSING"); + ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write); + ASSERT_EQ(addr, (uint64_t)memslot->hva); + + pr_debug("uffd fault: addr=%p write=%d\n", + (void *)addr, !!(flags & UFFD_PAGEFAULT_FLAG_WRITE)); + + copy.src = (uint64_t)memslot->data_copy; + copy.dst = addr; + copy.len = memslot->paging_size; + copy.mode = 0; + + ret = ioctl(uffd, UFFDIO_COPY, ©); + if (ret == -1) { + pr_info("Failed UFFDIO_COPY in 0x%lx with errno: %d\n", + addr, errno); + return ret; + } + + pthread_mutex_lock(&events.uffd_faults_mutex); + events.uffd_faults += 1; + pthread_mutex_unlock(&events.uffd_faults_mutex); + return 0; +} + +static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &memslot[PT], true); +} + +static int uffd_test_write_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &memslot[TEST], true); +} + +static int uffd_test_read_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, 
uffd, msg, &memslot[TEST], false); +} + static void punch_hole_in_memslot(struct kvm_vm *vm, struct memslot_desc *memslot) { @@ -314,11 +371,11 @@ static void punch_hole_in_memslot(struct kvm_vm *vm, fd = vm_mem_region_get_src_fd(vm, memslot->idx); if (fd != -1) { ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, memslot->size); + 0, memslot->paging_size); TEST_ASSERT(ret == 0, "fallocate failed, errno: %d\n", errno); } else { hva = addr_gpa2hva(vm, memslot->gpa); - ret = madvise(hva, memslot->size, MADV_DONTNEED); + ret = madvise(hva, memslot->paging_size, MADV_DONTNEED); TEST_ASSERT(ret == 0, "madvise failed, errno: %d\n", errno); } } @@ -457,9 +514,60 @@ static void setup_memslots(struct kvm_vm *vm, enum vm_guest_mode mode, virt_pg_map(vm, pte_gva, pte_gpa); } +static void setup_uffd(enum vm_guest_mode mode, struct test_params *p, + struct uffd_desc **uffd) +{ + struct test_desc *test = p->test_desc; + uint64_t large_page_size = get_backing_src_pagesz(p->src_type); + int i; + + /* + * When creating the map, we might not only have created a pte page, + * but also an intermediate level (pte_gpa != gpa[PT]). So, we + * might need to demand page both. 
+ */ + memslot[PT].paging_size = align_up(pte_gpa - memslot[PT].gpa, + large_page_size) + large_page_size; + memslot[TEST].paging_size = large_page_size; + + for (i = 0; i < NR_MEMSLOTS; i++) { + memslot[i].data_copy = malloc(memslot[i].paging_size); + TEST_ASSERT(memslot[i].data_copy, "Failed malloc."); + memcpy(memslot[i].data_copy, memslot[i].hva, + memslot[i].paging_size); + } + + uffd[PT] = NULL; + if (test->uffd_pt_handler) + uffd[PT] = uffd_setup_demand_paging( + UFFDIO_REGISTER_MODE_MISSING, 0, + memslot[PT].hva, memslot[PT].paging_size, + test->uffd_pt_handler); + + uffd[TEST] = NULL; + if (test->uffd_test_handler) + uffd[TEST] = uffd_setup_demand_paging( + UFFDIO_REGISTER_MODE_MISSING, 0, + memslot[TEST].hva, memslot[TEST].paging_size, + test->uffd_test_handler); +} + static void check_event_counts(struct test_desc *test) { ASSERT_EQ(test->expected_events.aborts, events.aborts); + ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults); +} + +static void free_uffd(struct test_desc *test, struct uffd_desc **uffd) +{ + int i; + + if (test->uffd_pt_handler) + uffd_stop_demand_paging(uffd[PT]); + if (test->uffd_test_handler) + uffd_stop_demand_paging(uffd[TEST]); + for (i = 0; i < NR_MEMSLOTS; i++) + free(memslot[i].data_copy); } static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) @@ -517,6 +625,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) struct test_params *p = (struct test_params *)arg; struct test_desc *test = p->test_desc; struct kvm_vm *vm; + struct uffd_desc *uffd[NR_MEMSLOTS]; bool skip_test = false; print_test_banner(mode, p); @@ -528,7 +637,14 @@ static void run_test(enum vm_guest_mode mode, void *arg) reset_event_counts(); setup_memslots(vm, mode, p); + /* + * Set some code at memslot[TEST].hva for the guest to execute (only + * applicable to the EXEC tests). This has to be done before + * setup_uffd() as that function copies the memslot data for the uffd + * handler. 
+ */ load_exec_code_for_test(); + setup_uffd(mode, p, uffd); setup_abort_handlers(vm, test); setup_guest_args(vm, test); @@ -542,7 +658,12 @@ static void run_test(enum vm_guest_mode mode, void *arg) sync_stats_from_guest(vm); ucall_uninit(vm); kvm_vm_free(vm); + free_uffd(test, uffd); + /* + * Make sure this is called after the uffd threads have exited (and + * updated their respective event counters). + */ if (!skip_test) check_event_counts(test); } @@ -625,6 +746,43 @@ int main(int argc, char *argv[]) __VA_ARGS__ \ } +#define TEST_ACCESS_ON_HOLE_UFFD(__a, __uffd_handler, ...) \ +{ \ + .name = SNAME(ACCESS_ON_HOLE_UFFD ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_TEST, \ + .uffd_test_handler = __uffd_handler, \ + .expected_events = { .uffd_faults = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, ...) \ +{ \ + .name = SNAME(S1PTW_ON_HOLE_UFFD ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_PT, \ + .uffd_pt_handler = __uffd_handler, \ + .expected_events = { .uffd_faults = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_ON_HOLE_UFFD_AF(__a, __uffd_handler) \ + TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, __AF_TEST_ARGS) + +#define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, ...) \ +{ \ + .name = SNAME(ACCESS_S1PTW_ON_HOLE_UFFD ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_PT | CMD_HOLE_TEST, \ + .uffd_pt_handler = __ph, \ + .uffd_test_handler = __th, \ + .expected_events = { .uffd_faults = 2, }, \ + __VA_ARGS__ \ +} + +#define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(__a, __th, __ph) \ + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, __AF_TEST_ARGS) + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). 
*/ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -640,10 +798,78 @@ static struct test_desc tests[] = { TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_cas, __PREPARE_LSE_TEST_ARGS), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_ld_preidx), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_write64), - TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_at), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_dc_zva), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_st_preidx), + /* UFFD basic (sanity checks) */ + TEST_ACCESS_ON_HOLE_UFFD(guest_test_read64, uffd_test_read_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_cas, uffd_test_read_handler, + __PREPARE_LSE_TEST_ARGS), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_ld_preidx, uffd_test_read_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_write64, uffd_test_write_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_st_preidx, uffd_test_write_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_dc_zva, uffd_test_write_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_exec, uffd_test_read_handler), + + /* UFFD fault due to S1PTW. Note how they are all write faults. */ + TEST_S1PTW_ON_HOLE_UFFD(guest_test_read64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_cas, uffd_pt_write_handler, + __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_at, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_ld_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_write64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_dc_zva, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_st_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_exec, uffd_pt_write_handler), + + /* UFFD fault due to S1PTW with AF. Note how they are all write faults. */ + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_read64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_cas, uffd_pt_write_handler, + __AF_LSE_TEST_ARGS), + /* + * Can't test the AF case for address translation insts (D5.4.11) as + * it's IMPDEF whether that sets the AF. 
+ */ + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_ld_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_write64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_st_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_dc_zva, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, uffd_pt_write_handler), + + /* UFFD faults due to an access and its S1PTW. */ + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_read64, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_cas, + uffd_test_read_handler, uffd_pt_write_handler, + __PREPARE_LSE_TEST_ARGS), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_ld_preidx, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_write64, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_dc_zva, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_st_preidx, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_exec, + uffd_test_read_handler, uffd_pt_write_handler), + + /* UFFD faults due to an access and its S1PTW with AF. 
*/ + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_read64, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_cas, + uffd_test_read_handler, uffd_pt_write_handler, + __AF_LSE_TEST_ARGS), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_ld_preidx, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_write64, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_dc_zva, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_st_preidx, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, + uffd_test_read_handler, uffd_pt_write_handler), + { 0 }, };
From patchwork Wed Mar 23 22:54:03 2022
Date: Wed, 23 Mar 2022 15:54:03 -0700
In-Reply-To: <20220323225405.267155-1-ricarkol@google.com>
Message-Id: <20220323225405.267155-10-ricarkol@google.com>
References: <20220323225405.267155-1-ricarkol@google.com>
X-Mailer: git-send-email 
2.35.1.894.gb6a874cedc-goog Subject: [PATCH v2 09/11] KVM: selftests: aarch64: Add dirty logging tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some dirty logging tests into page_fault_test. Mark the data and/or page-table memslots for dirty logging, perform some accesses, and check that the dirty log bits are set or clean when expected. Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 123 ++++++++++++++++++ 1 file changed, 123 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index 99449eaddb2b..b41da9317242 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -45,6 +45,11 @@ #define CMD_SKIP_TEST (-1LL) #define CMD_HOLE_PT (1ULL << 2) #define CMD_HOLE_TEST (1ULL << 3) +#define CMD_RECREATE_PT_MEMSLOT_WR (1ULL << 4) +#define CMD_CHECK_WRITE_IN_DIRTY_LOG (1ULL << 5) +#define CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG (1ULL << 6) +#define CMD_CHECK_NO_WRITE_IN_DIRTY_LOG (1ULL << 7) +#define CMD_SET_PTE_AF (1ULL << 8) #define PREPARE_FN_NR 10 #define CHECK_FN_NR 10 @@ -251,6 +256,21 @@ static void guest_check_pte_af(void) GUEST_ASSERT_EQ(*((uint64_t *)pte_gva) & PTE_AF, PTE_AF); } +static void guest_check_write_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_WRITE_IN_DIRTY_LOG); +} + +static void guest_check_no_write_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_NO_WRITE_IN_DIRTY_LOG); +} + +static void guest_check_s1ptw_wr_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG); +} + static void guest_test_exec(void) { int 
(*code)(void) = (int (*)(void))test_exec_gva; @@ -380,12 +400,34 @@ static void punch_hole_in_memslot(struct kvm_vm *vm, } } +static bool check_write_in_dirty_log(struct kvm_vm *vm, + struct memslot_desc *ms, uint64_t host_pg_nr) +{ + unsigned long *bmap; + bool first_page_dirty; + + bmap = bitmap_zalloc(ms->size / getpagesize()); + kvm_vm_get_dirty_log(vm, ms->idx, bmap); + first_page_dirty = test_bit(host_pg_nr, bmap); + free(bmap); + return first_page_dirty; +} + static void handle_cmd(struct kvm_vm *vm, int cmd) { if (cmd & CMD_HOLE_PT) punch_hole_in_memslot(vm, &memslot[PT]); if (cmd & CMD_HOLE_TEST) punch_hole_in_memslot(vm, &memslot[TEST]); + if (cmd & CMD_CHECK_WRITE_IN_DIRTY_LOG) + TEST_ASSERT(check_write_in_dirty_log(vm, &memslot[TEST], 0), + "Missing write in dirty log"); + if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG) + TEST_ASSERT(!check_write_in_dirty_log(vm, &memslot[TEST], 0), + "Unexpected s1ptw write in dirty log"); + if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG) + TEST_ASSERT(check_write_in_dirty_log(vm, &memslot[PT], 0), + "Missing s1ptw write in dirty log"); } static void sync_stats_from_guest(struct kvm_vm *vm) @@ -783,6 +825,56 @@ int main(int argc, char *argv[]) #define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(__a, __th, __ph) \ TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, __AF_TEST_ARGS) +#define __TEST_ACCESS_DIRTY_LOG(__a, ...) \ +{ \ + .name = SNAME(TEST_ACCESS_DIRTY_LOG ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define __CHECK_WRITE_IN_DIRTY_LOG \ + .guest_test_check = { guest_check_write_in_dirty_log, }, + +#define __CHECK_NO_WRITE_IN_DIRTY_LOG \ + .guest_test_check = { guest_check_no_write_in_dirty_log, }, + +#define TEST_WRITE_DIRTY_LOG(__a, ...) \ + __TEST_ACCESS_DIRTY_LOG(__a, __CHECK_WRITE_IN_DIRTY_LOG __VA_ARGS__) + +#define TEST_NO_WRITE_DIRTY_LOG(__a, ...) 
\ + __TEST_ACCESS_DIRTY_LOG(__a, __CHECK_NO_WRITE_IN_DIRTY_LOG __VA_ARGS__) + +#define __TEST_S1PTW_DIRTY_LOG(__a, ...) \ +{ \ + .name = SNAME(S1PTW_AF_DIRTY_LOG ## _ ## __a), \ + .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define __CHECK_S1PTW_WR_IN_DIRTY_LOG \ + .guest_test_check = { guest_check_s1ptw_wr_in_dirty_log, }, + +#define TEST_S1PTW_DIRTY_LOG(__a, ...) \ + __TEST_S1PTW_DIRTY_LOG(__a, __CHECK_S1PTW_WR_IN_DIRTY_LOG __VA_ARGS__) + +#define __AF_TEST_ARGS_FOR_DIRTY_LOG \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, }, \ + .guest_test_check = { guest_check_s1ptw_wr_in_dirty_log, \ + guest_check_pte_af, }, + +#define __AF_AND_LSE_ARGS_FOR_DIRTY_LOG \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, \ + guest_check_lse, }, \ + .guest_test_check = { guest_check_s1ptw_wr_in_dirty_log, \ + guest_check_pte_af, }, + +#define TEST_S1PTW_AF_DIRTY_LOG(__a, ...) \ + TEST_S1PTW_DIRTY_LOG(__a, __AF_TEST_ARGS_FOR_DIRTY_LOG) + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). */ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -793,6 +885,37 @@ static struct test_desc tests[] = { TEST_HW_ACCESS_FLAG(guest_test_dc_zva), TEST_HW_ACCESS_FLAG(guest_test_exec), + /* Dirty log basic checks. */ + TEST_WRITE_DIRTY_LOG(guest_test_write64), + TEST_WRITE_DIRTY_LOG(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_DIRTY_LOG(guest_test_dc_zva), + TEST_WRITE_DIRTY_LOG(guest_test_st_preidx), + TEST_NO_WRITE_DIRTY_LOG(guest_test_read64), + TEST_NO_WRITE_DIRTY_LOG(guest_test_ld_preidx), + TEST_NO_WRITE_DIRTY_LOG(guest_test_at), + TEST_NO_WRITE_DIRTY_LOG(guest_test_exec), + + /* + * S1PTW on a PT (no AF) which is marked for dirty logging. Note that + * this still shows up in the dirty log as a write. 
+ */ + TEST_S1PTW_DIRTY_LOG(guest_test_write64), + TEST_S1PTW_DIRTY_LOG(guest_test_st_preidx), + TEST_S1PTW_DIRTY_LOG(guest_test_read64), + TEST_S1PTW_DIRTY_LOG(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_DIRTY_LOG(guest_test_ld_preidx), + TEST_S1PTW_DIRTY_LOG(guest_test_at), + TEST_S1PTW_DIRTY_LOG(guest_test_dc_zva), + TEST_S1PTW_DIRTY_LOG(guest_test_exec), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_write64), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_st_preidx), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_read64), + TEST_S1PTW_DIRTY_LOG(guest_test_cas, __AF_AND_LSE_ARGS_FOR_DIRTY_LOG), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_ld_preidx), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_at), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_dc_zva), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_exec), + /* Accessing a hole shouldn't fault (more sanity checks). */ TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_read64), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_cas, __PREPARE_LSE_TEST_ARGS),
From patchwork Wed Mar 23 22:54:04 2022
Date: Wed, 23 Mar 2022 15:54:04 -0700
In-Reply-To: <20220323225405.267155-1-ricarkol@google.com>
Message-Id: 
<20220323225405.267155-11-ricarkol@google.com> Mime-Version: 1.0 References: <20220323225405.267155-1-ricarkol@google.com> X-Mailer: git-send-email 2.35.1.894.gb6a874cedc-goog Subject: [PATCH v2 10/11] KVM: selftests: aarch64: Add readonly memslot tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some readonly memslot tests into page_fault_test. Mark the data and/or page-table memslots as readonly, perform some accesses, and check that the right fault is triggered when expected (e.g., a store with no write-back should lead to an mmio exit). Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 303 +++++++++++++++++- 1 file changed, 300 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index b41da9317242..e6607f903bc1 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -84,6 +84,7 @@ struct memslot_desc { static struct event_cnt { int aborts; + int mmio_exits; int fail_vcpu_runs; int uffd_faults; /* uffd_faults is incremented from multiple threads. 
*/ @@ -101,6 +102,8 @@ struct test_desc { int (*uffd_test_handler)(int mode, int uffd, struct uffd_msg *msg); void (*dabt_handler)(struct ex_regs *regs); void (*iabt_handler)(struct ex_regs *regs); + void (*mmio_handler)(struct kvm_run *run); + void (*fail_vcpu_run_handler)(int ret); uint32_t pt_memslot_flags; uint32_t test_memslot_flags; void (*guest_pre_run)(struct kvm_vm *vm); @@ -322,6 +325,20 @@ static void guest_code(struct test_desc *test) GUEST_DONE(); } +static void dabt_s1ptw_on_ro_memslot_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_EQ(read_sysreg(far_el1), GUEST_TEST_GVA); + events.aborts += 1; + GUEST_SYNC(CMD_RECREATE_PT_MEMSLOT_WR); +} + +static void iabt_s1ptw_on_ro_memslot_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_EQ(regs->pc, GUEST_TEST_EXEC_GVA); + events.aborts += 1; + GUEST_SYNC(CMD_RECREATE_PT_MEMSLOT_WR); +} + static void no_dabt_handler(struct ex_regs *regs) { GUEST_ASSERT_1(false, read_sysreg(far_el1)); @@ -400,6 +417,57 @@ static void punch_hole_in_memslot(struct kvm_vm *vm, } } +static int __memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot, + uint32_t size, uint64_t guest_addr, + uint32_t flags) +{ + struct kvm_userspace_memory_region region; + int ret; + + region.slot = slot; + region.flags = flags; + region.guest_phys_addr = guest_addr; + region.memory_size = size; + region.userspace_addr = (uintptr_t) mem; + ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, ®ion); + + return ret; +} + +static void recreate_memslot(struct kvm_vm *vm, struct memslot_desc *ms, + uint32_t flags) +{ + __memory_region_add(vm, ms->hva, ms->idx, 0, ms->gpa, 0); + __memory_region_add(vm, ms->hva, ms->idx, ms->size, ms->gpa, flags); +} + +static void clear_pte_accessflag(struct kvm_vm *vm) +{ + volatile uint64_t *pte_hva; + + pte_hva = (uint64_t *)addr_gpa2hva(vm, pte_gpa); + *pte_hva &= ~PTE_AF; +} + +static void mmio_on_test_gpa_handler(struct kvm_run *run) +{ + ASSERT_EQ(run->mmio.phys_addr, memslot[TEST].gpa); + + 
memcpy(memslot[TEST].hva, run->mmio.data, run->mmio.len); + events.mmio_exits += 1; +} + +static void mmio_no_handler(struct kvm_run *run) +{ + uint64_t data; + + memcpy(&data, run->mmio.data, sizeof(data)); + pr_debug("addr=%lld len=%d w=%d data=%lx\n", + run->mmio.phys_addr, run->mmio.len, + run->mmio.is_write, data); + TEST_FAIL("There was no MMIO exit expected."); +} + static bool check_write_in_dirty_log(struct kvm_vm *vm, struct memslot_desc *ms, uint64_t host_pg_nr) { @@ -419,6 +487,8 @@ static void handle_cmd(struct kvm_vm *vm, int cmd) punch_hole_in_memslot(vm, &memslot[PT]); if (cmd & CMD_HOLE_TEST) punch_hole_in_memslot(vm, &memslot[TEST]); + if (cmd & CMD_RECREATE_PT_MEMSLOT_WR) + recreate_memslot(vm, &memslot[PT], 0); if (cmd & CMD_CHECK_WRITE_IN_DIRTY_LOG) TEST_ASSERT(check_write_in_dirty_log(vm, &memslot[TEST], 0), "Missing write in dirty log"); @@ -442,6 +512,13 @@ void fail_vcpu_run_no_handler(int ret) TEST_FAIL("Unexpected vcpu run failure\n"); } +void fail_vcpu_run_mmio_no_syndrome_handler(int ret) +{ + TEST_ASSERT(errno == ENOSYS, "The mmio handler in the kernel" + " should have returned not implemented."); + events.fail_vcpu_runs += 1; +} + static uint64_t get_total_guest_pages(enum vm_guest_mode mode, struct test_params *p) { @@ -594,10 +671,21 @@ static void setup_uffd(enum vm_guest_mode mode, struct test_params *p, test->uffd_test_handler); } +static void setup_default_handlers(struct test_desc *test) +{ + if (!test->mmio_handler) + test->mmio_handler = mmio_no_handler; + + if (!test->fail_vcpu_run_handler) + test->fail_vcpu_run_handler = fail_vcpu_run_no_handler; +} + static void check_event_counts(struct test_desc *test) { ASSERT_EQ(test->expected_events.aborts, events.aborts); ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults); + ASSERT_EQ(test->expected_events.mmio_exits, events.mmio_exits); + ASSERT_EQ(test->expected_events.fail_vcpu_runs, events.fail_vcpu_runs); } static void free_uffd(struct test_desc *test, struct 
uffd_desc **uffd) @@ -629,12 +717,20 @@ static void reset_event_counts(void) static bool vcpu_run_loop(struct kvm_vm *vm, struct test_desc *test) { + struct kvm_run *run; bool skip_test = false; struct ucall uc; - int stage; + int stage, ret; + + run = vcpu_state(vm, VCPU_ID); for (stage = 0; ; stage++) { - vcpu_run(vm, VCPU_ID); + ret = _vcpu_run(vm, VCPU_ID); + if (ret) { + test->fail_vcpu_run_handler(ret); + pr_debug("Done.\n"); + goto done; + } switch (get_ucall(vm, VCPU_ID, &uc)) { case UCALL_SYNC: @@ -653,6 +749,10 @@ static bool vcpu_run_loop(struct kvm_vm *vm, struct test_desc *test) case UCALL_DONE: pr_debug("Done.\n"); goto done; + case UCALL_NONE: + if (run->exit_reason == KVM_EXIT_MMIO) + test->mmio_handler(run); + break; default: TEST_FAIL("Unknown ucall %lu", uc.cmd); } @@ -677,6 +777,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) ucall_init(vm, NULL); reset_event_counts(); + setup_abort_handlers(vm, test); setup_memslots(vm, mode, p); /* @@ -687,7 +788,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) */ load_exec_code_for_test(); setup_uffd(mode, p, uffd); - setup_abort_handlers(vm, test); + setup_default_handlers(test); setup_guest_args(vm, test); if (test->guest_pre_run) @@ -875,6 +976,135 @@ int main(int argc, char *argv[]) #define TEST_S1PTW_AF_DIRTY_LOG(__a, ...) \ TEST_S1PTW_DIRTY_LOG(__a, __AF_TEST_ARGS_FOR_DIRTY_LOG) +#define TEST_WRITE_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SNAME(WRITE_ON_RO_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .mmio_handler = mmio_on_test_gpa_handler, \ + .expected_events = { .mmio_exits = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_READ_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SNAME(READ_ON_RO_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define TEST_CM_ON_RO_MEMSLOT(__a, ...) 
\ +{ \ + .name = SNAME(CM_ON_RO_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ + .expected_events = { .fail_vcpu_runs = 1, }, \ + __VA_ARGS__ \ +} + +#define __AF_TEST_IN_RO_MEMSLOT_ARGS \ + .guest_pre_run = clear_pte_accessflag, \ + .guest_prepare = { guest_set_ha, }, \ + .guest_test_check = { guest_check_pte_af, } + +#define __AF_LSE_IN_RO_MEMSLOT_ARGS \ + .guest_pre_run = clear_pte_accessflag, \ + .guest_prepare = { guest_set_ha, guest_check_lse, }, \ + .guest_test_check = { guest_check_pte_af, } + +#define TEST_WRITE_ON_RO_MEMSLOT_AF(__a) \ + TEST_WRITE_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_READ_ON_RO_MEMSLOT_AF(__a) \ + TEST_READ_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_CM_ON_RO_MEMSLOT_AF(__a) \ + TEST_CM_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_S1PTW_ON_RO_MEMSLOT_DATA(__a, ...) \ +{ \ + .name = SNAME(S1PTW_ON_RO_MEMSLOT_DATA ## _ ## __a), \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_ON_RO_MEMSLOT_EXEC(__a, ...) \ +{ \ + .name = SNAME(S1PTW_ON_RO_MEMSLOT_EXEC ## _ ## __a), \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .iabt_handler = iabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(__a) \ + TEST_S1PTW_ON_RO_MEMSLOT_DATA(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_S1PTW_AF_ON_RO_MEMSLOT_EXEC(__a) \ + TEST_S1PTW_ON_RO_MEMSLOT_EXEC(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) 
\ +{ \ + .name = SCAT(WRITE_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .mmio_handler = mmio_on_test_gpa_handler, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .mmio_exits = 1, .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SCAT(READ_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SCAT(CM_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ + .expected_events = { .aborts = 1, .fail_vcpu_runs = 1 }, \ + __VA_ARGS__ \ +} + +#define TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) 
\ +{ \ + .name = SCAT(EXEC_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .iabt_handler = iabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_READ_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). */ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -993,6 +1223,73 @@ static struct test_desc tests[] = { TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, uffd_test_read_handler, uffd_pt_write_handler), + /* Access on readonly memslot (sanity check). */ + TEST_WRITE_ON_RO_MEMSLOT(guest_test_write64), + TEST_READ_ON_RO_MEMSLOT(guest_test_read64), + TEST_READ_ON_RO_MEMSLOT(guest_test_ld_preidx), + TEST_READ_ON_RO_MEMSLOT(guest_test_exec), + /* + * CM and ld/st with pre-indexing don't have any syndrome. And so + * vcpu_run just fails; which is expected. + */ + TEST_CM_ON_RO_MEMSLOT(guest_test_dc_zva), + TEST_CM_ON_RO_MEMSLOT(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_CM_ON_RO_MEMSLOT(guest_test_st_preidx), + + /* Access on readonly memslot w/ non-faulting S1PTW w/ AF. 
*/ + TEST_WRITE_ON_RO_MEMSLOT_AF(guest_test_write64), + TEST_READ_ON_RO_MEMSLOT_AF(guest_test_read64), + TEST_READ_ON_RO_MEMSLOT_AF(guest_test_ld_preidx), + TEST_CM_ON_RO_MEMSLOT(guest_test_cas, __AF_LSE_IN_RO_MEMSLOT_ARGS), + TEST_CM_ON_RO_MEMSLOT_AF(guest_test_dc_zva), + TEST_CM_ON_RO_MEMSLOT_AF(guest_test_st_preidx), + TEST_READ_ON_RO_MEMSLOT_AF(guest_test_exec), + + /* + * S1PTW without AF on a readonly memslot. Note that even though this + * page table walk does not actually write the access flag, it is still + * considered a write, and therefore there is a fault. + */ + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_write64), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_read64), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_ld_preidx), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_at), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_dc_zva), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_st_preidx), + TEST_S1PTW_ON_RO_MEMSLOT_EXEC(guest_test_exec), + + /* S1PTW with AF on a readonly memslot. */ + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_write64), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_read64), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_cas, + __AF_LSE_IN_RO_MEMSLOT_ARGS), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_ld_preidx), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_at), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_st_preidx), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_dc_zva), + TEST_S1PTW_AF_ON_RO_MEMSLOT_EXEC(guest_test_exec), + + /* Access on a RO memslot with S1PTW also on a RO memslot. 
*/ + TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(guest_test_write64), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(guest_test_ld_preidx), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(guest_test_read64), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_cas, + __PREPARE_LSE_TEST_ARGS), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_dc_zva), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_st_preidx), + TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(guest_test_exec), + + /* Access on a RO memslot with S1PTW w/ AF also on a RO memslot. */ + TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_write64), + TEST_READ_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_read64), + TEST_READ_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_ld_preidx), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_cas, + __AF_LSE_IN_RO_MEMSLOT_ARGS), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_dc_zva), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_st_preidx), + TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_exec), + { 0 }, }; From patchwork Wed Mar 23 22:54:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 12790198 Date: Wed, 23 Mar 2022 15:54:05 -0700 In-Reply-To: <20220323225405.267155-1-ricarkol@google.com> Message-Id:
<20220323225405.267155-12-ricarkol@google.com> Mime-Version: 1.0 References: <20220323225405.267155-1-ricarkol@google.com> X-Mailer: git-send-email 2.35.1.894.gb6a874cedc-goog Subject: [PATCH v2 11/11] KVM: selftests: aarch64: Add mix of tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some mix of tests into page_fault_test, like stage 2 faults on memslots marked for both userfaultfd and dirty-logging. Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 148 ++++++++++++++++++ 1 file changed, 148 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index e6607f903bc1..f1a5bf081a5b 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -399,6 +399,12 @@ static int uffd_test_read_handler(int mode, int uffd, struct uffd_msg *msg) return uffd_generic_handler(mode, uffd, msg, &memslot[TEST], false); } +static int uffd_no_handler(int mode, int uffd, struct uffd_msg *msg) +{ + TEST_FAIL("There was no UFFD fault expected."); + return -1; +} + static void punch_hole_in_memslot(struct kvm_vm *vm, struct memslot_desc *memslot) { @@ -912,6 +918,30 @@ int main(int argc, char *argv[]) #define TEST_S1PTW_ON_HOLE_UFFD_AF(__a, __uffd_handler) \ TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, __AF_TEST_ARGS) +#define __DIRTY_LOG_TEST \ + .test_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test_check = { guest_check_write_in_dirty_log, }, \ + +#define __DIRTY_LOG_S1PTW_TEST \ + .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test_check = { 
guest_check_s1ptw_wr_in_dirty_log, }, \ + +#define TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(__a, __uffd_handler, ...) \ + TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_TEST __VA_ARGS__) + +#define TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(__a, __uffd_handler, ...) \ + TEST_ACCESS_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_TEST __VA_ARGS__) + +#define TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(__a, __uffd_handler, ...) \ + TEST_ACCESS_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_S1PTW_TEST __VA_ARGS__) + +#define TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(__a, __uffd_handler, ...) \ + TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_S1PTW_TEST __VA_ARGS__) + #define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, ...) \ { \ .name = SNAME(ACCESS_S1PTW_ON_HOLE_UFFD ## _ ## __a), \ @@ -1015,6 +1045,10 @@ int main(int argc, char *argv[]) .guest_prepare = { guest_set_ha, guest_check_lse, }, \ .guest_test_check = { guest_check_pte_af, } +#define __NULL_UFFD_HANDLERS \ + .uffd_test_handler = uffd_no_handler, \ + .uffd_pt_handler = uffd_no_handler + #define TEST_WRITE_ON_RO_MEMSLOT_AF(__a) \ TEST_WRITE_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) @@ -1105,6 +1139,37 @@ int main(int argc, char *argv[]) #define TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) +#define TEST_READ_AND_S1PTW_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) +#define TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) +#define TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) + +#define TEST_WRITE_ON_RO_DIRTY_LOG_MEMSLOT(__a, ...) 
\ +{ \ + .name = SNAME(WRITE_ON_RO_DIRTY_LOG_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .guest_test_check = { guest_check_no_write_in_dirty_log, }, \ + .mmio_handler = mmio_on_test_gpa_handler, \ + .expected_events = { .mmio_exits = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(__a, ...) \ +{ \ + .name = SNAME(CM_ON_RO_DIRTY_LOG_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .guest_test_check = { guest_check_no_write_in_dirty_log, }, \ + .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ + .expected_events = { .fail_vcpu_runs = 1, }, \ + __VA_ARGS__ \ +} + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). */ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -1223,6 +1288,65 @@ static struct test_desc tests[] = { TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, uffd_test_read_handler, uffd_pt_write_handler), + /* Write into a memslot marked for both dirty logging and UFFD. */ + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_write64, + uffd_test_write_handler), + /* Note that the cas uffd handler is for a read. */ + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_cas, + uffd_test_read_handler, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_dc_zva, + uffd_test_write_handler), + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_st_preidx, + uffd_test_write_handler), + + /* + * Access whose s1ptw faults on a hole that's marked for both dirty + * logging and UFFD. 
+ */ + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_read64, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_cas, + uffd_pt_write_handler, __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_ld_preidx, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_exec, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_write64, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_st_preidx, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_dc_zva, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_at, + uffd_pt_write_handler), + + /* + * Write on a memslot marked for dirty logging whose related s1ptw + * is on a hole marked with UFFD. + */ + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_write64, + uffd_pt_write_handler), + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_cas, + uffd_pt_write_handler, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_dc_zva, + uffd_pt_write_handler), + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_st_preidx, + uffd_pt_write_handler), + + /* + * Write on a memslot that's on a hole marked with UFFD, whose related + * s1ptw is on a memslot marked for dirty logging. + */ + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_write64, + uffd_test_write_handler), + /* Note that the uffd handler is for a read. */ + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_cas, + uffd_test_read_handler, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_dc_zva, + uffd_test_write_handler), + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_st_preidx, + uffd_test_write_handler), + /* Access on readonly memslot (sanity check). 
*/ TEST_WRITE_ON_RO_MEMSLOT(guest_test_write64), TEST_READ_ON_RO_MEMSLOT(guest_test_read64), @@ -1290,6 +1414,30 @@ static struct test_desc tests[] = { TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_st_preidx), TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_exec), + /* + * Access on a memslot marked as readonly that also has dirty log + * tracking enabled. There should be no write in the dirty log. + */ + TEST_WRITE_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_write64), + TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_cas, + __PREPARE_LSE_TEST_ARGS), + TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_dc_zva), + TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_st_preidx), + + /* + * Access on a RO memslot with S1PTW also on a RO memslot, while also + * having those memslot regions marked for UFFD fault handling. The + * result is that UFFD fault handlers should not be called. + */ + TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_write64), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT_WITH_UFFD(guest_test_read64), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT_WITH_UFFD(guest_test_ld_preidx), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_cas, + __PREPARE_LSE_TEST_ARGS __NULL_UFFD_HANDLERS), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_dc_zva), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_st_preidx), + TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_exec), + { 0 }, };