From patchwork Fri Mar 11 06:01:57 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12777470
From: Ricardo Koller <ricarkol@google.com>
Date: Thu, 10 Mar 2022 22:01:57 -0800
Subject: [PATCH 01/11] KVM: selftests: Add a userfaultfd library
Message-Id: <20220311060207.2438667-2-ricarkol@google.com>
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller <ricarkol@google.com>
List-ID: kvm@vger.kernel.org

Move the generic userfaultfd code out of demand_paging_test.c into a common library, userfaultfd_util. This library consists of a setup and a stop function. The setup function starts a thread for handling page faults using the handler callback function.
This setup returns a uffd_desc object which is then used in the stop function (to wait and destroy the threads). Signed-off-by: Ricardo Koller --- tools/testing/selftests/kvm/Makefile | 2 +- .../selftests/kvm/demand_paging_test.c | 227 +++--------------- .../selftests/kvm/include/userfaultfd_util.h | 46 ++++ .../selftests/kvm/lib/userfaultfd_util.c | 196 +++++++++++++++ 4 files changed, 272 insertions(+), 199 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/userfaultfd_util.h create mode 100644 tools/testing/selftests/kvm/lib/userfaultfd_util.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 0e4926bc9a58..bc5f89b3700e 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -37,7 +37,7 @@ ifeq ($(ARCH),riscv) UNAME_M := riscv endif -LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c +LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c lib/userfaultfd_util.c LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 6a719d065599..b3d457cecd68 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -22,23 +22,13 @@ #include "test_util.h" #include "perf_test_util.h" #include "guest_modes.h" +#include "userfaultfd_util.h" #ifdef __NR_userfaultfd -#ifdef PRINT_PER_PAGE_UPDATES -#define 
PER_PAGE_DEBUG(...) printf(__VA_ARGS__) -#else -#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__) -#endif - -#ifdef PRINT_PER_VCPU_UPDATES -#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__) -#else -#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__) -#endif - static int nr_vcpus = 1; static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE; + static size_t demand_paging_size; static char *guest_data_prototype; @@ -69,9 +59,11 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args) ts_diff.tv_sec, ts_diff.tv_nsec); } -static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr) +static int handle_uffd_page_request(int uffd_mode, int uffd, + struct uffd_msg *msg) { pid_t tid = syscall(__NR_gettid); + uint64_t addr = msg->arg.pagefault.address; struct timespec start; struct timespec ts_diff; int r; @@ -118,175 +110,32 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, uint64_t addr) return 0; } -bool quit_uffd_thread; - -struct uffd_handler_args { +struct test_params { int uffd_mode; - int uffd; - int pipefd; - useconds_t delay; + useconds_t uffd_delay; + enum vm_mem_backing_src_type src_type; + bool partition_vcpu_memory_access; }; -static void *uffd_handler_thread_fn(void *arg) +static void prefault_mem(void *alias, uint64_t len) { - struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg; - int uffd = uffd_args->uffd; - int pipefd = uffd_args->pipefd; - useconds_t delay = uffd_args->delay; - int64_t pages = 0; - struct timespec start; - struct timespec ts_diff; - - clock_gettime(CLOCK_MONOTONIC, &start); - while (!quit_uffd_thread) { - struct uffd_msg msg; - struct pollfd pollfd[2]; - char tmp_chr; - int r; - uint64_t addr; - - pollfd[0].fd = uffd; - pollfd[0].events = POLLIN; - pollfd[1].fd = pipefd; - pollfd[1].events = POLLIN; - - r = poll(pollfd, 2, -1); - switch (r) { - case -1: - pr_info("poll err"); - continue; - case 0: - continue; - case 1: - break; - default: - pr_info("Polling uffd returned 
%d", r); - return NULL; - } - - if (pollfd[0].revents & POLLERR) { - pr_info("uffd revents has POLLERR"); - return NULL; - } - - if (pollfd[1].revents & POLLIN) { - r = read(pollfd[1].fd, &tmp_chr, 1); - TEST_ASSERT(r == 1, - "Error reading pipefd in UFFD thread\n"); - return NULL; - } - - if (!(pollfd[0].revents & POLLIN)) - continue; - - r = read(uffd, &msg, sizeof(msg)); - if (r == -1) { - if (errno == EAGAIN) - continue; - pr_info("Read of uffd got errno %d\n", errno); - return NULL; - } - - if (r != sizeof(msg)) { - pr_info("Read on uffd returned unexpected size: %d bytes", r); - return NULL; - } - - if (!(msg.event & UFFD_EVENT_PAGEFAULT)) - continue; + size_t p; - if (delay) - usleep(delay); - addr = msg.arg.pagefault.address; - r = handle_uffd_page_request(uffd_args->uffd_mode, uffd, addr); - if (r < 0) - return NULL; - pages++; + TEST_ASSERT(alias != NULL, "Alias required for minor faults"); + for (p = 0; p < (len / demand_paging_size); ++p) { + memcpy(alias + (p * demand_paging_size), + guest_data_prototype, demand_paging_size); } - - ts_diff = timespec_elapsed(start); - PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n", - pages, ts_diff.tv_sec, ts_diff.tv_nsec, - pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0)); - - return NULL; } -static void setup_demand_paging(struct kvm_vm *vm, - pthread_t *uffd_handler_thread, int pipefd, - int uffd_mode, useconds_t uffd_delay, - struct uffd_handler_args *uffd_args, - void *hva, void *alias, uint64_t len) -{ - bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR); - int uffd; - struct uffdio_api uffdio_api; - struct uffdio_register uffdio_register; - uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY; - - PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n", - is_minor ? "MINOR" : "MISSING", - is_minor ? "UFFDIO_CONINUE" : "UFFDIO_COPY"); - - /* In order to get minor faults, prefault via the alias. 
*/ - if (is_minor) { - size_t p; - - expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE; - - TEST_ASSERT(alias != NULL, "Alias required for minor faults"); - for (p = 0; p < (len / demand_paging_size); ++p) { - memcpy(alias + (p * demand_paging_size), - guest_data_prototype, demand_paging_size); - } - } - - uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); - TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno); - - uffdio_api.api = UFFD_API; - uffdio_api.features = 0; - TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1, - "ioctl UFFDIO_API failed: %" PRIu64, - (uint64_t)uffdio_api.api); - - uffdio_register.range.start = (uint64_t)hva; - uffdio_register.range.len = len; - uffdio_register.mode = uffd_mode; - TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1, - "ioctl UFFDIO_REGISTER failed"); - TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) == - expected_ioctls, "missing userfaultfd ioctls"); - - uffd_args->uffd_mode = uffd_mode; - uffd_args->uffd = uffd; - uffd_args->pipefd = pipefd; - uffd_args->delay = uffd_delay; - pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn, - uffd_args); - - PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n", - hva, hva + len); -} - -struct test_params { - int uffd_mode; - useconds_t uffd_delay; - enum vm_mem_backing_src_type src_type; - bool partition_vcpu_memory_access; -}; - static void run_test(enum vm_guest_mode mode, void *arg) { struct test_params *p = arg; - pthread_t *uffd_handler_threads = NULL; - struct uffd_handler_args *uffd_args = NULL; + struct uffd_desc **uffd_descs = NULL; struct timespec start; struct timespec ts_diff; - int *pipefds = NULL; struct kvm_vm *vm; int vcpu_id; - int r; vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1, p->src_type, p->partition_vcpu_memory_access); @@ -299,15 +148,8 @@ static void run_test(enum vm_guest_mode mode, void *arg) memset(guest_data_prototype, 0xAB, demand_paging_size); if (p->uffd_mode) { - 
uffd_handler_threads = - malloc(nr_vcpus * sizeof(*uffd_handler_threads)); - TEST_ASSERT(uffd_handler_threads, "Memory allocation failed"); - - uffd_args = malloc(nr_vcpus * sizeof(*uffd_args)); - TEST_ASSERT(uffd_args, "Memory allocation failed"); - - pipefds = malloc(sizeof(int) * nr_vcpus * 2); - TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd"); + uffd_descs = malloc(nr_vcpus * sizeof(struct uffd_desc *)); + TEST_ASSERT(uffd_descs, "Memory allocation failed"); for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) { struct perf_test_vcpu_args *vcpu_args; @@ -320,19 +162,17 @@ static void run_test(enum vm_guest_mode mode, void *arg) vcpu_hva = addr_gpa2hva(vm, vcpu_args->gpa); vcpu_alias = addr_gpa2alias(vm, vcpu_args->gpa); + prefault_mem(vcpu_alias, + vcpu_args->pages * perf_test_args.guest_page_size); + /* * Set up user fault fd to handle demand paging * requests. */ - r = pipe2(&pipefds[vcpu_id * 2], - O_CLOEXEC | O_NONBLOCK); - TEST_ASSERT(!r, "Failed to set up pipefd"); - - setup_demand_paging(vm, &uffd_handler_threads[vcpu_id], - pipefds[vcpu_id * 2], p->uffd_mode, - p->uffd_delay, &uffd_args[vcpu_id], - vcpu_hva, vcpu_alias, - vcpu_args->pages * perf_test_args.guest_page_size); + uffd_descs[vcpu_id] = uffd_setup_demand_paging( + p->uffd_mode, p->uffd_delay, vcpu_hva, + vcpu_args->pages * perf_test_args.guest_page_size, + &handle_uffd_page_request); } } @@ -347,15 +187,9 @@ static void run_test(enum vm_guest_mode mode, void *arg) pr_info("All vCPU threads joined\n"); if (p->uffd_mode) { - char c; - /* Tell the user fault fd handler threads to quit */ - for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) { - r = write(pipefds[vcpu_id * 2 + 1], &c, 1); - TEST_ASSERT(r == 1, "Unable to write to pipefd"); - - pthread_join(uffd_handler_threads[vcpu_id], NULL); - } + for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) + uffd_stop_demand_paging(uffd_descs[vcpu_id]); } pr_info("Total guest execution time: %ld.%.9lds\n", @@ -367,11 +201,8 @@ static void 
run_test(enum vm_guest_mode mode, void *arg) perf_test_destroy_vm(vm); free(guest_data_prototype); - if (p->uffd_mode) { - free(uffd_handler_threads); - free(uffd_args); - free(pipefds); - } + if (p->uffd_mode) + free(uffd_descs); } static void help(char *name) diff --git a/tools/testing/selftests/kvm/include/userfaultfd_util.h b/tools/testing/selftests/kvm/include/userfaultfd_util.h new file mode 100644 index 000000000000..7b294ce8147c --- /dev/null +++ b/tools/testing/selftests/kvm/include/userfaultfd_util.h @@ -0,0 +1,46 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KVM userfaultfd util + * Adapted from demand_paging_test.c + * + * Copyright (C) 2018, Red Hat, Inc. + * Copyright (C) 2019, Google, Inc. + * Copyright (C) 2022, Google, Inc. + */ + +#define _GNU_SOURCE /* for pipe2 */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kvm_util.h" +#include "test_util.h" +#include "perf_test_util.h" + +typedef int (*uffd_handler_t)(int uffd_mode, int uffd, struct uffd_msg *msg); + +struct uffd_desc; + +struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, + useconds_t uffd_delay, void *hva, uint64_t len, + uffd_handler_t handler); + +void uffd_stop_demand_paging(struct uffd_desc *uffd); + +#ifdef PRINT_PER_PAGE_UPDATES +#define PER_PAGE_DEBUG(...) printf(__VA_ARGS__) +#else +#define PER_PAGE_DEBUG(...) _no_printf(__VA_ARGS__) +#endif + +#ifdef PRINT_PER_VCPU_UPDATES +#define PER_VCPU_DEBUG(...) printf(__VA_ARGS__) +#else +#define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__) +#endif diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c new file mode 100644 index 000000000000..5e0878878a69 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c @@ -0,0 +1,196 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KVM userfaultfd util + * Adapted from demand_paging_test.c + * + * Copyright (C) 2018, Red Hat, Inc. + * Copyright (C) 2019, Google, Inc. 
+ * Copyright (C) 2022, Google, Inc. + */ + +#define _GNU_SOURCE /* for pipe2 */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kvm_util.h" +#include "test_util.h" +#include "perf_test_util.h" +#include "userfaultfd_util.h" + +#ifdef __NR_userfaultfd + +struct uffd_desc { + int uffd_mode; + int uffd; + int pipefds[2]; + useconds_t delay; + uffd_handler_t handler; + pthread_t thread; +}; + +static void *uffd_handler_thread_fn(void *arg) +{ + struct uffd_desc *uffd_desc = (struct uffd_desc *)arg; + int uffd = uffd_desc->uffd; + int pipefd = uffd_desc->pipefds[0]; + useconds_t delay = uffd_desc->delay; + int64_t pages = 0; + struct timespec start; + struct timespec ts_diff; + + clock_gettime(CLOCK_MONOTONIC, &start); + while (1) { + struct uffd_msg msg; + struct pollfd pollfd[2]; + char tmp_chr; + int r; + + pollfd[0].fd = uffd; + pollfd[0].events = POLLIN; + pollfd[1].fd = pipefd; + pollfd[1].events = POLLIN; + + r = poll(pollfd, 2, -1); + switch (r) { + case -1: + pr_info("poll err"); + continue; + case 0: + continue; + case 1: + break; + default: + pr_info("Polling uffd returned %d", r); + return NULL; + } + + if (pollfd[0].revents & POLLERR) { + pr_info("uffd revents has POLLERR"); + return NULL; + } + + if (pollfd[1].revents & POLLIN) { + r = read(pollfd[1].fd, &tmp_chr, 1); + TEST_ASSERT(r == 1, + "Error reading pipefd in UFFD thread\n"); + return NULL; + } + + if (!(pollfd[0].revents & POLLIN)) + continue; + + r = read(uffd, &msg, sizeof(msg)); + if (r == -1) { + if (errno == EAGAIN) + continue; + pr_info("Read of uffd got errno %d\n", errno); + return NULL; + } + + if (r != sizeof(msg)) { + pr_info("Read on uffd returned unexpected size: %d bytes", r); + return NULL; + } + + if (!(msg.event & UFFD_EVENT_PAGEFAULT)) + continue; + + if (delay) + usleep(delay); + r = uffd_desc->handler(uffd_desc->uffd_mode, uffd, &msg); + if (r < 0) + return NULL; + pages++; + } + + ts_diff = timespec_elapsed(start); + 
PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n", + pages, ts_diff.tv_sec, ts_diff.tv_nsec, + pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 1000000000.0)); + + return NULL; +} + +struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, + useconds_t uffd_delay, void *hva, uint64_t len, + uffd_handler_t handler) +{ + struct uffd_desc *uffd_desc; + bool is_minor = (uffd_mode == UFFDIO_REGISTER_MODE_MINOR); + int uffd; + struct uffdio_api uffdio_api; + struct uffdio_register uffdio_register; + uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY; + int ret; + + PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n", + is_minor ? "MINOR" : "MISSING", + is_minor ? "UFFDIO_CONTINUE" : "UFFDIO_COPY"); + + uffd_desc = malloc(sizeof(struct uffd_desc)); + TEST_ASSERT(uffd_desc, "malloc failed"); + + /* In order to get minor faults, prefault via the alias. */ + if (is_minor) + expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE; + + uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); + TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno); + + uffdio_api.api = UFFD_API; + uffdio_api.features = 0; + TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1, + "ioctl UFFDIO_API failed: %" PRIu64, + (uint64_t)uffdio_api.api); + + uffdio_register.range.start = (uint64_t)hva; + uffdio_register.range.len = len; + uffdio_register.mode = uffd_mode; + TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1, + "ioctl UFFDIO_REGISTER failed"); + TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) == + expected_ioctls, "missing userfaultfd ioctls"); + + ret = pipe2(uffd_desc->pipefds, O_CLOEXEC | O_NONBLOCK); + TEST_ASSERT(!ret, "Failed to set up pipefd"); + + uffd_desc->uffd_mode = uffd_mode; + uffd_desc->uffd = uffd; + uffd_desc->delay = uffd_delay; + uffd_desc->handler = handler; + pthread_create(&uffd_desc->thread, NULL, uffd_handler_thread_fn, + uffd_desc); + + PER_VCPU_DEBUG("Created uffd thread for HVA range 
[%p, %p)\n", + hva, hva + len); + + return uffd_desc; +} + +void uffd_stop_demand_paging(struct uffd_desc *uffd) +{ + char c = 0; + int ret; + + ret = write(uffd->pipefds[1], &c, 1); + TEST_ASSERT(ret == 1, "Unable to write to pipefd"); + + ret = pthread_join(uffd->thread, NULL); + TEST_ASSERT(ret == 0, "Pthread_join failed."); + + close(uffd->uffd); + + close(uffd->pipefds[1]); + close(uffd->pipefds[0]); + + free(uffd); +} + +#endif /* __NR_userfaultfd */

From patchwork Fri Mar 11 06:01:58 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12777468
From: Ricardo Koller <ricarkol@google.com>
Date: Thu, 10 Mar 2022 22:01:58 -0800
Subject: [PATCH 02/11] KVM: selftests: Add vm_mem_region_get_src_fd library function
Message-Id: <20220311060207.2438667-3-ricarkol@google.com>
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller <ricarkol@google.com>
List-ID: kvm@vger.kernel.org

Add a library function to get the backing source FD of a memslot.

Signed-off-by: Ricardo Koller --- .../selftests/kvm/include/kvm_util_base.h | 1 + tools/testing/selftests/kvm/lib/kvm_util.c | 23 +++++++++++++++++++ 2 files changed, 24 insertions(+) diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 4ed6aa049a91..d6acec0858c0 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -163,6 +163,7 @@ int _kvm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg); void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags); void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); +int vm_mem_region_get_src_fd(struct kvm_vm *vm, uint32_t memslot); void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index d8cf851ab119..64ef245b73de 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -580,6 +580,29 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start, return &region->region; } +/* + * KVM Userspace Memory Get Backing Source FD + * + * Input Args: + * vm - Virtual Machine + * memslot - KVM memory slot ID + * + * Output Args: None + * + * Return: + * Backing source file descriptor, -1 if the memslot is an anonymous region. + * + * Returns the backing source fd of a memslot, so tests can use it to punch + * holes, or to setup permissions. 
+ */ +int vm_mem_region_get_src_fd(struct kvm_vm *vm, uint32_t memslot) +{ + struct userspace_mem_region *region; + + region = memslot2region(vm, memslot); + return region->fd; +} + /* * VCPU Find *

From patchwork Fri Mar 11 06:01:59 2022
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 12777466
From: Ricardo Koller <ricarkol@google.com>
Date: Thu, 10 Mar 2022 22:01:59 -0800
Subject: [PATCH 03/11] KVM: selftests: aarch64: Add vm_get_pte_gpa library function
Message-Id: <20220311060207.2438667-4-ricarkol@google.com>
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller <ricarkol@google.com>
List-ID: kvm@vger.kernel.org

Add a library function (in-guest) to get the GPA of the PTE of a particular GVA. 
This will be used in a future commit by a test to clear and check the AF (access flag) of a particular page. Signed-off-by: Ricardo Koller --- .../selftests/kvm/include/aarch64/processor.h | 2 ++ .../selftests/kvm/lib/aarch64/processor.c | 24 +++++++++++++++++-- 2 files changed, 24 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h index 8f9f46979a00..caa572d83062 100644 --- a/tools/testing/selftests/kvm/include/aarch64/processor.h +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h @@ -125,6 +125,8 @@ void vm_install_exception_handler(struct kvm_vm *vm, void vm_install_sync_handler(struct kvm_vm *vm, int vector, int ec, handler_fn handler); +vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva); + static inline void cpu_relax(void) { asm volatile("yield" ::: "memory"); diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c index 9343d82519b4..ee006d354b79 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c @@ -139,7 +139,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) _virt_pg_map(vm, vaddr, paddr, attr_idx); } -vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) +vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva) { uint64_t *ptep; @@ -162,7 +162,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) goto unmapped_gva; /* fall through */ case 2: - ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, gva) * 8; + ptep = (uint64_t *)(pte_addr(vm, *ptep) + pte_index(vm, gva) * 8); if (!ptep) goto unmapped_gva; break; @@ -170,6 +170,26 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) TEST_FAIL("Page table levels must be 2, 3, or 4"); } + return (vm_paddr_t)ptep; + +unmapped_gva: + TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva); 
+ exit(1); +} + +vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva) +{ + uint64_t *ptep; + vm_paddr_t ptep_gpa; + + ptep_gpa = vm_get_pte_gpa(vm, gva); + if (!ptep_gpa) + goto unmapped_gva; + + ptep = addr_gpa2hva(vm, ptep_gpa); + if (!ptep) + goto unmapped_gva; + return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1)); unmapped_gva: From patchwork Fri Mar 11 06:02:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 12777459 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9EEA8C433FE for ; Fri, 11 Mar 2022 06:03:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245286AbiCKGEq (ORCPT ); Fri, 11 Mar 2022 01:04:46 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1347302AbiCKGDy (ORCPT ); Fri, 11 Mar 2022 01:03:54 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF8EE1A9494 for ; Thu, 10 Mar 2022 22:02:17 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id d145-20020a621d97000000b004f7285f67e8so4642516pfd.2 for ; Thu, 10 Mar 2022 22:02:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=8ix8hjq3ugfBG2VjWflpErmFpudSQCGZkT4xxCEQk+0=; b=kccGSI/hDvCHSQRVFA8c/lxGAUB465N9ZlpJ8LNzZljadoy6eaRouGO0ksm+3PwcW+ Vhlw6MIBLNr7pwZaCJLg9mCQ6G0xiNhxkQ+NkNKPzxeF0mj2MpMlYGQIZxa+UYj3GQmg tof+9FzK+Tco2G/FpPmm/Aq/BarjXPx/wESPFMnryqUzzL0JXlK2j+ocXj/Hd21xGLNR 
Date: Thu, 10 Mar 2022 22:02:00 -0800
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
Message-Id: <20220311060207.2438667-5-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
Subject: [PATCH 04/11] KVM: selftests: Add vm_alloc_page_table_in_memslot library function
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo
Koller
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Add a library function to allocate a page-table physical page in a
particular memslot. The default behavior is to create new page-table
pages in memslot 0.

Signed-off-by: Ricardo Koller
Reviewed-by: Ben Gardon
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c          | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index d6acec0858c0..c8dce12a9a52 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -307,6 +307,7 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+vm_paddr_t vm_alloc_page_table_in_memslot(struct kvm_vm *vm, uint32_t pt_memslot);
 
 /*
  * Create a VM with reasonable defaults
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 64ef245b73de..ae21564241c8 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2409,9 +2409,15 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 
 /* Arbitrary minimum physical address used for virtual translation tables.
 */
#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+vm_paddr_t vm_alloc_page_table_in_memslot(struct kvm_vm *vm, uint32_t pt_memslot)
+{
+	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+				 pt_memslot);
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	return vm_alloc_page_table_in_memslot(vm, 0);
 }
 
 /*

From patchwork Fri Mar 11 06:02:01 2022
Date: Thu, 10 Mar 2022 22:02:01 -0800
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
Message-Id: <20220311060207.2438667-6-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
Subject: [PATCH 05/11] KVM: selftests: aarch64: Export _virt_pg_map with a pt_memslot arg
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo
Koller
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Add an argument, pt_memslot, to _virt_pg_map so that a specific memslot
can be used for the page-table allocations performed when creating a new
map. This will be used in a future commit to test having PTEs stored in
memslots with different setups (e.g., hugetlb with a hole).

Signed-off-by: Ricardo Koller
---
 .../selftests/kvm/include/aarch64/processor.h       |  3 +++
 tools/testing/selftests/kvm/lib/aarch64/processor.c | 12 ++++++------
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index caa572d83062..3965a5ac778e 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -125,6 +125,9 @@ void vm_install_exception_handler(struct kvm_vm *vm,
 void vm_install_sync_handler(struct kvm_vm *vm,
 		int vector, int ec, handler_fn handler);
 
+void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		  uint64_t flags, uint32_t pt_memslot);
+
 vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva);
 
 static inline void cpu_relax(void)
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index ee006d354b79..8f4ec1be4364 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -86,8 +86,8 @@ void virt_pgd_alloc(struct kvm_vm *vm)
 	}
 }
 
-static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-			 uint64_t flags)
+void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		  uint64_t flags, uint32_t pt_memslot)
 {
 	uint8_t attr_idx = flags & 7;
 	uint64_t *ptep;
@@ -108,18 +108,18 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 
 	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, vaddr) * 8;
 	if (!*ptep)
-		*ptep =
vm_alloc_page_table(vm) | 3;
+		*ptep = vm_alloc_page_table_in_memslot(vm, pt_memslot) | 3;
 
 	switch (vm->pgtable_levels) {
 	case 4:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, vaddr) * 8;
 		if (!*ptep)
-			*ptep = vm_alloc_page_table(vm) | 3;
+			*ptep = vm_alloc_page_table_in_memslot(vm, pt_memslot) | 3;
 		/* fall through */
 	case 3:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pmd_index(vm, vaddr) * 8;
 		if (!*ptep)
-			*ptep = vm_alloc_page_table(vm) | 3;
+			*ptep = vm_alloc_page_table_in_memslot(vm, pt_memslot) | 3;
 		/* fall through */
 	case 2:
 		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, vaddr) * 8;
@@ -136,7 +136,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 {
 	uint64_t attr_idx = 4; /* NORMAL (See DEFAULT_MAIR_EL1) */
 
-	_virt_pg_map(vm, vaddr, paddr, attr_idx);
+	_virt_pg_map(vm, vaddr, paddr, attr_idx, 0);
 }
 
 vm_paddr_t vm_get_pte_gpa(struct kvm_vm *vm, vm_vaddr_t gva)

From patchwork Fri Mar 11 06:02:02 2022
Date: Thu, 10 Mar 2022 22:02:02 -0800
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
Message-Id:
<20220311060207.2438667-7-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
Subject: [PATCH 06/11] KVM: selftests: Add missing close and munmap in __vm_mem_region_delete
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Deleting a memslot (when freeing a VM) is neither closing the backing fd
nor unmapping the alias mapping. Fix this by adding the missing close
and munmap.

Signed-off-by: Ricardo Koller
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ae21564241c8..c25c79f97695 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -702,6 +702,12 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 	sparsebit_free(&region->unused_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno);
+	if (region->fd >= 0) {
+		/* There's an extra map if shared memory.
 */
+		ret = munmap(region->mmap_alias, region->mmap_size);
+		TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno);
+		close(region->fd);
+	}
 
 	free(region);
 }

From patchwork Fri Mar 11 06:02:03 2022
Date: Thu, 10 Mar 2022 22:02:03 -0800
In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com>
Message-Id: <20220311060207.2438667-8-ricarkol@google.com>
References: <20220311060207.2438667-1-ricarkol@google.com>
Subject: [PATCH 07/11] KVM: selftests: aarch64: Add aarch64/page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Add a new test for stage 2 faults when using different combinations of guest accesses (e.g., write, S1PTW), backing source type (e.g., anon) and types of faults (e.g., read on
hugetlbfs with a hole). The next commits will add different handling methods and more faults (e.g., uffd and dirty logging). This first commit starts by adding two sanity checks for all types of accesses: AF setting by the hw, and accessing memslots with holes. Note that this commit borrows some code from kvm-unit-tests: RET, MOV_X0, and flush_tlb_page. Signed-off-by: Ricardo Koller --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/aarch64/page_fault_test.c | 667 ++++++++++++++++++ 2 files changed, 668 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/page_fault_test.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index bc5f89b3700e..6a192798b217 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -103,6 +103,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list +TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c new file mode 100644 index 000000000000..00477a4f10cb --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -0,0 +1,667 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * page_fault_test.c - Test stage 2 faults. + * + * This test tries different combinations of guest accesses (e.g., write, + * S1PTW), backing source type (e.g., anon) and types of faults (e.g., read on + * hugetlbfs with a hole). It checks that the expected handling method is + * called (e.g., uffd faults with the right address and write/read flag). 
+ */
+
+#define _GNU_SOURCE
+#include
+#include
+#include
+#include
+#include
+#include "guest_modes.h"
+#include "userfaultfd_util.h"
+
+#define VCPU_ID 0
+
+#define TEST_MEM_SLOT_INDEX 1
+#define TEST_PT_SLOT_INDEX 2
+
+/* Max number of backing pages per guest page */
+#define BACKING_PG_PER_GUEST_PG (64 / 4)
+
+/* Test memslot in backing source pages */
+#define TEST_MEMSLOT_BACKING_SRC_NPAGES (1 * BACKING_PG_PER_GUEST_PG)
+
+/* PT memslot size in backing source pages */
+#define PT_MEMSLOT_BACKING_SRC_NPAGES (4 * BACKING_PG_PER_GUEST_PG)
+
+/* Guest virtual addresses that point to the test page and its PTE. */
+#define GUEST_TEST_GVA 0xc0000000
+#define GUEST_TEST_EXEC_GVA 0xc0000008
+#define GUEST_TEST_PTE_GVA 0xd0000000
+
+/* Access flag */
+#define PTE_AF (1ULL << 10)
+
+/* Access flag update enable/disable */
+#define TCR_EL1_HA (1ULL << 39)
+
+#define CMD_SKIP_TEST (-1LL)
+#define CMD_HOLE_PT (1ULL << 2)
+#define CMD_HOLE_TEST (1ULL << 3)
+
+#define PREPARE_FN_NR 10
+#define CHECK_FN_NR 10
+
+static const uint64_t test_gva = GUEST_TEST_GVA;
+static const uint64_t test_exec_gva = GUEST_TEST_EXEC_GVA;
+static const uint64_t pte_gva = GUEST_TEST_PTE_GVA;
+uint64_t pte_gpa;
+
+enum { PT, TEST, NR_MEMSLOTS };
+
+struct memslot_desc {
+	void *hva;
+	uint64_t gpa;
+	uint64_t size;
+	uint64_t guest_pages;
+	uint64_t backing_pages;
+	enum vm_mem_backing_src_type src_type;
+	uint32_t idx;
+} memslot[NR_MEMSLOTS] = {
+	{
+		.idx = TEST_PT_SLOT_INDEX,
+		.backing_pages = PT_MEMSLOT_BACKING_SRC_NPAGES,
+	},
+	{
+		.idx = TEST_MEM_SLOT_INDEX,
+		.backing_pages = TEST_MEMSLOT_BACKING_SRC_NPAGES,
+	},
+};
+
+static struct event_cnt {
+	int aborts;
+	int fail_vcpu_runs;
+} events;
+
+struct test_desc {
+	const char *name;
+	uint64_t mem_mark_cmd;
+	/* Skip the test if any prepare function returns false */
+	bool (*guest_prepare[PREPARE_FN_NR])(void);
+	void (*guest_test)(void);
+	void (*guest_test_check[CHECK_FN_NR])(void);
+	void (*dabt_handler)(struct ex_regs *regs);
+	void
(*iabt_handler)(struct ex_regs *regs); + uint32_t pt_memslot_flags; + uint32_t test_memslot_flags; + void (*guest_pre_run)(struct kvm_vm *vm); + bool skip; + struct event_cnt expected_events; +}; + +struct test_params { + enum vm_mem_backing_src_type src_type; + struct test_desc *test_desc; +}; + + +static inline void flush_tlb_page(uint64_t vaddr) +{ + uint64_t page = vaddr >> 12; + + dsb(ishst); + asm("tlbi vaae1is, %0" :: "r" (page)); + dsb(ish); + isb(); +} + +#define RET 0xd65f03c0 +#define MOV_X0(x) (0xd2800000 | (((x) & 0xffff) << 5)) + +static void guest_test_nop(void) +{} + +static void guest_test_write64(void) +{ + uint64_t val; + + WRITE_ONCE(*((uint64_t *)test_gva), 0x0123456789ABCDEF); + val = READ_ONCE(*(uint64_t *)test_gva); + GUEST_ASSERT_EQ(val, 0x0123456789ABCDEF); +} + +/* Check the system for atomic instructions. */ +static bool guest_check_lse(void) +{ + uint64_t isar0 = read_sysreg(id_aa64isar0_el1); + uint64_t atomic = (isar0 >> 20) & 7; + + return atomic >= 2; +} + +/* Compare and swap instruction. 
*/ +static void guest_test_cas(void) +{ + uint64_t val; + uint64_t addr = test_gva; + + GUEST_ASSERT_EQ(guest_check_lse(), 1); + asm volatile(".arch_extension lse\n" + "casal %0, %1, [%2]\n" + :: "r" (0), "r" (0x0123456789ABCDEF), "r" (addr)); + val = READ_ONCE(*(uint64_t *)(addr)); + GUEST_ASSERT_EQ(val, 0x0123456789ABCDEF); +} + +static void guest_test_read64(void) +{ + uint64_t val; + + val = READ_ONCE(*(uint64_t *)test_gva); + GUEST_ASSERT_EQ(val, 0); +} + +/* Address translation instruction */ +static void guest_test_at(void) +{ + uint64_t par; + uint64_t addr = 0; + + asm volatile("at s1e1r, %0" :: "r" (test_gva)); + par = read_sysreg(par_el1); + + /* Bit 1 indicates whether the AT was successful */ + GUEST_ASSERT_EQ(par & 1, 0); + /* The PA in bits [51:12] */ + addr = par & (((1ULL << 40) - 1) << 12); + GUEST_ASSERT_EQ(addr, memslot[TEST].gpa); +} + +static void guest_test_dc_zva(void) +{ + /* The smallest guaranteed block size (bs) is a word. */ + uint16_t val; + + asm volatile("dc zva, %0\n" + "dsb ish\n" + :: "r" (test_gva)); + val = READ_ONCE(*(uint16_t *)test_gva); + GUEST_ASSERT_EQ(val, 0); +} + +static void guest_test_ld_preidx(void) +{ + uint64_t val; + uint64_t addr = test_gva - 8; + + /* + * This ends up accessing "test_gva + 8 - 8", where "test_gva - 8" + * is not backed by a memslot. + */ + asm volatile("ldr %0, [%1, #8]!" + : "=r" (val), "+r" (addr)); + GUEST_ASSERT_EQ(val, 0); + GUEST_ASSERT_EQ(addr, test_gva); +} + +static void guest_test_st_preidx(void) +{ + uint64_t val = 0x0123456789ABCDEF; + uint64_t addr = test_gva - 8; + + asm volatile("str %0, [%1, #8]!" + : "+r" (val), "+r" (addr)); + + GUEST_ASSERT_EQ(addr, test_gva); + val = READ_ONCE(*(uint64_t *)test_gva); +} + +static bool guest_set_ha(void) +{ + uint64_t mmfr1 = read_sysreg(id_aa64mmfr1_el1); + uint64_t hadbs = mmfr1 & 6; + uint64_t tcr; + + /* Skip if HA is not supported. 
*/ + if (hadbs == 0) + return false; + + tcr = read_sysreg(tcr_el1) | TCR_EL1_HA; + write_sysreg(tcr, tcr_el1); + isb(); + + return true; +} + +static bool guest_clear_pte_af(void) +{ + *((uint64_t *)pte_gva) &= ~PTE_AF; + flush_tlb_page(pte_gva); + + return true; +} + +static void guest_check_pte_af(void) +{ + flush_tlb_page(pte_gva); + GUEST_ASSERT_EQ(*((uint64_t *)pte_gva) & PTE_AF, PTE_AF); +} + +static void guest_test_exec(void) +{ + int (*code)(void) = (int (*)(void))test_exec_gva; + int ret; + + ret = code(); + GUEST_ASSERT_EQ(ret, 0x77); +} + +static bool guest_prepare(struct test_desc *test) +{ + bool (*prepare_fn)(void); + int i; + + for (i = 0; i < PREPARE_FN_NR; i++) { + prepare_fn = test->guest_prepare[i]; + if (prepare_fn && !prepare_fn()) + return false; + } + + return true; +} + +static void guest_test_check(struct test_desc *test) +{ + void (*check_fn)(void); + int i; + + for (i = 0; i < CHECK_FN_NR; i++) { + check_fn = test->guest_test_check[i]; + if (!check_fn) + continue; + check_fn(); + } +} + +static void guest_code(struct test_desc *test) +{ + if (!test->guest_test) + test->guest_test = guest_test_nop; + + if (!guest_prepare(test)) + GUEST_SYNC(CMD_SKIP_TEST); + + GUEST_SYNC(test->mem_mark_cmd); + test->guest_test(); + + guest_test_check(test); + GUEST_DONE(); +} + +static void no_dabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, read_sysreg(far_el1)); +} + +static void no_iabt_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_1(false, regs->pc); +} + +static void punch_hole_in_memslot(struct kvm_vm *vm, + struct memslot_desc *memslot) +{ + int ret, fd; + void *hva; + + fd = vm_mem_region_get_src_fd(vm, memslot->idx); + if (fd != -1) { + ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, + 0, memslot->size); + TEST_ASSERT(ret == 0, "fallocate failed, errno: %d\n", errno); + } else { + hva = addr_gpa2hva(vm, memslot->gpa); + ret = madvise(hva, memslot->size, MADV_DONTNEED); + TEST_ASSERT(ret == 0, "madvise failed, errno: 
%d\n", errno); + } +} + +static void handle_cmd(struct kvm_vm *vm, int cmd) +{ + if (cmd & CMD_HOLE_PT) + punch_hole_in_memslot(vm, &memslot[PT]); + if (cmd & CMD_HOLE_TEST) + punch_hole_in_memslot(vm, &memslot[TEST]); +} + +static void sync_stats_from_guest(struct kvm_vm *vm) +{ + struct event_cnt *ec = addr_gva2hva(vm, (uint64_t)&events); + + events.aborts += ec->aborts; +} + +void fail_vcpu_run_no_handler(int ret) +{ + TEST_FAIL("Unexpected vcpu run failure\n"); +} + +static uint64_t get_total_guest_pages(enum vm_guest_mode mode, + struct test_params *p) +{ + uint64_t large_page_size = get_backing_src_pagesz(p->src_type); + uint64_t guest_page_size = vm_guest_mode_params[mode].page_size; + uint64_t size; + + size = PT_MEMSLOT_BACKING_SRC_NPAGES * large_page_size; + size += TEST_MEMSLOT_BACKING_SRC_NPAGES * large_page_size; + + return size / guest_page_size; +} + +static void load_exec_code_for_test(void) +{ + uint32_t *code; + + /* Write this "code" into test_exec_gva */ + assert(test_exec_gva - test_gva); + code = memslot[TEST].hva + 8; + + code[0] = MOV_X0(0x77); + code[1] = RET; +} + +static void setup_guest_args(struct kvm_vm *vm, struct test_desc *test) +{ + vm_vaddr_t test_desc_gva; + + test_desc_gva = vm_vaddr_alloc_page(vm); + memcpy(addr_gva2hva(vm, test_desc_gva), test, + sizeof(struct test_desc)); + vcpu_args_set(vm, 0, 1, test_desc_gva); +} + +static void setup_abort_handlers(struct kvm_vm *vm, struct test_desc *test) +{ + vm_init_descriptor_tables(vm); + vcpu_init_descriptor_tables(vm, VCPU_ID); + if (!test->dabt_handler) + test->dabt_handler = no_dabt_handler; + if (!test->iabt_handler) + test->iabt_handler = no_iabt_handler; + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + 0x25, test->dabt_handler); + vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, + 0x21, test->iabt_handler); +} + +static void setup_memslots(struct kvm_vm *vm, enum vm_guest_mode mode, + struct test_params *p) +{ + uint64_t large_page_size = 
get_backing_src_pagesz(p->src_type);
+	uint64_t guest_page_size = vm_guest_mode_params[mode].page_size;
+	struct test_desc *test = p->test_desc;
+	uint64_t hole_gpa;
+	uint64_t alignment;
+	int i;
+
+	/* Calculate the test and PT memslot sizes */
+	for (i = 0; i < NR_MEMSLOTS; i++) {
+		memslot[i].size = large_page_size * memslot[i].backing_pages;
+		memslot[i].guest_pages = memslot[i].size / guest_page_size;
+		memslot[i].src_type = p->src_type;
+	}
+
+	TEST_ASSERT(memslot[TEST].size >= guest_page_size,
+		    "The test memslot should have space for one guest page.\n");
+	TEST_ASSERT(memslot[PT].size >= (4 * guest_page_size),
+		    "The PT memslot should have space for 4 guest pages.\n");
+
+	/* Place the memslots GPAs at the end of physical memory */
+	alignment = max(large_page_size, guest_page_size);
+	memslot[TEST].gpa = (vm_get_max_gfn(vm) - memslot[TEST].guest_pages) *
+			    guest_page_size;
+	memslot[TEST].gpa = align_down(memslot[TEST].gpa, alignment);
+	/* Add a 1-guest_page-hole between the two memslots */
+	hole_gpa = memslot[TEST].gpa - guest_page_size;
+	virt_pg_map(vm, test_gva - guest_page_size, hole_gpa);
+	memslot[PT].gpa = hole_gpa - (memslot[PT].guest_pages *
+				      guest_page_size);
+	memslot[PT].gpa = align_down(memslot[PT].gpa, alignment);
+
+	/* Create memslots for the test data and a PTE. */
+	vm_userspace_mem_region_add(vm, p->src_type, memslot[PT].gpa,
+				    memslot[PT].idx, memslot[PT].guest_pages,
+				    test->pt_memslot_flags);
+	vm_userspace_mem_region_add(vm, p->src_type, memslot[TEST].gpa,
+				    memslot[TEST].idx, memslot[TEST].guest_pages,
+				    test->test_memslot_flags);
+
+	for (i = 0; i < NR_MEMSLOTS; i++)
+		memslot[i].hva = addr_gpa2hva(vm, memslot[i].gpa);
+
+	/* Map test_gva using the PT memslot. */
+	_virt_pg_map(vm, test_gva, memslot[TEST].gpa,
+		     4 /* NORMAL (See DEFAULT_MAIR_EL1) */,
+		     TEST_PT_SLOT_INDEX);
+
+	/*
+	 * Find the PTE of the test page and map it in the guest so it can
+	 * clear the AF.
+	 */
+	pte_gpa = vm_get_pte_gpa(vm, test_gva);
+	TEST_ASSERT(memslot[PT].gpa <= pte_gpa &&
+		    pte_gpa < (memslot[PT].gpa + memslot[PT].size),
+		    "The PTE should be in the PT memslot.");
+	/* This is an arbitrary requirement just to make things simpler. */
+	TEST_ASSERT(pte_gpa % guest_page_size == 0,
+		    "The pte_gpa (%p) should be aligned to the guest page (%lx).",
+		    (void *)pte_gpa, guest_page_size);
+	virt_pg_map(vm, pte_gva, pte_gpa);
+}
+
+static void check_event_counts(struct test_desc *test)
+{
+	ASSERT_EQ(test->expected_events.aborts, events.aborts);
+}
+
+static void print_test_banner(enum vm_guest_mode mode, struct test_params *p)
+{
+	struct test_desc *test = p->test_desc;
+
+	pr_debug("Test: %s\n", test->name);
+	pr_debug("Testing guest mode: %s\n", vm_guest_mode_string(mode));
+	pr_debug("Testing memory backing src type: %s\n",
+		 vm_mem_backing_src_alias(p->src_type)->name);
+}
+
+static void reset_event_counts(void)
+{
+	memset(&events, 0, sizeof(events));
+}
+
+static bool vcpu_run_loop(struct kvm_vm *vm, struct test_desc *test)
+{
+	bool skip_test = false;
+	struct ucall uc;
+	int stage;
+
+	for (stage = 0; ; stage++) {
+		vcpu_run(vm, VCPU_ID);
+
+		switch (get_ucall(vm, VCPU_ID, &uc)) {
+		case UCALL_SYNC:
+			if (uc.args[1] == CMD_SKIP_TEST) {
+				pr_debug("Skipped.\n");
+				skip_test = true;
+				goto done;
+			}
+			handle_cmd(vm, uc.args[1]);
+			break;
+		case UCALL_ABORT:
+			TEST_FAIL("%s at %s:%ld\n\tvalues: %#lx, %#lx",
+				  (const char *)uc.args[0],
+				  __FILE__, uc.args[1], uc.args[2], uc.args[3]);
+			break;
+		case UCALL_DONE:
+			pr_debug("Done.\n");
+			goto done;
+		default:
+			TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		}
+	}
+
+done:
+	return skip_test;
+}
+
+static void run_test(enum vm_guest_mode mode, void *arg)
+{
+	struct test_params *p = (struct test_params *)arg;
+	struct test_desc *test = p->test_desc;
+	struct kvm_vm *vm;
+	bool skip_test = false;
+
+	print_test_banner(mode, p);
+
+	vm = vm_create_with_vcpus(mode, 1, DEFAULT_GUEST_PHY_PAGES,
+
get_total_guest_pages(mode, p), 0, guest_code, NULL); + ucall_init(vm, NULL); + + reset_event_counts(); + setup_memslots(vm, mode, p); + + load_exec_code_for_test(); + setup_abort_handlers(vm, test); + setup_guest_args(vm, test); + + if (test->guest_pre_run) + test->guest_pre_run(vm); + + sync_global_to_guest(vm, memslot); + + skip_test = vcpu_run_loop(vm, test); + + sync_stats_from_guest(vm); + ucall_uninit(vm); + kvm_vm_free(vm); + + if (!skip_test) + check_event_counts(test); +} + +static void for_each_test_and_guest_mode(void (*func)(enum vm_guest_mode, void *), + enum vm_mem_backing_src_type src_type); + +static void help(char *name) +{ + puts(""); + printf("usage: %s [-h] [-m mode] [-s mem-type]\n", name); + puts(""); + guest_modes_help(); + backing_src_help("-s"); + puts(""); +} + +int main(int argc, char *argv[]) +{ + enum vm_mem_backing_src_type src_type; + int opt; + + setbuf(stdout, NULL); + + src_type = DEFAULT_VM_MEM_SRC; + + guest_modes_append_default(); + + while ((opt = getopt(argc, argv, "hm:s:")) != -1) { + switch (opt) { + case 'm': + guest_modes_cmdline(optarg); + break; + case 's': + src_type = parse_backing_src_type(optarg); + break; + case 'h': + default: + help(argv[0]); + exit(0); + } + } + + for_each_test_and_guest_mode(run_test, src_type); + return 0; +} + +#define SNAME(s) #s +#define SCAT(a, b) SNAME(a ## _ ## b) + +#define TEST_BASIC_ACCESS(__a, ...) 
\ +{ \ + .name = SNAME(BASIC_ACCESS ## _ ## __a), \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define __AF_TEST_ARGS \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, }, \ + .guest_test_check = { guest_check_pte_af, }, \ + +#define __AF_LSE_TEST_ARGS \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, \ + guest_check_lse, }, \ + .guest_test_check = { guest_check_pte_af, }, \ + +#define __PREPARE_LSE_TEST_ARGS \ + .guest_prepare = { guest_check_lse, }, + +#define TEST_HW_ACCESS_FLAG(__a) \ + TEST_BASIC_ACCESS(__a, __AF_TEST_ARGS) + +#define TEST_ACCESS_ON_HOLE_NO_FAULTS(__a, ...) \ +{ \ + .name = SNAME(ACCESS_ON_HOLE_NO_FAULTS ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_TEST, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +static struct test_desc tests[] = { + /* Check that HW is setting the AF (sanity checks). */ + TEST_HW_ACCESS_FLAG(guest_test_read64), + TEST_HW_ACCESS_FLAG(guest_test_ld_preidx), + TEST_BASIC_ACCESS(guest_test_cas, __AF_LSE_TEST_ARGS), + TEST_HW_ACCESS_FLAG(guest_test_write64), + TEST_HW_ACCESS_FLAG(guest_test_st_preidx), + TEST_HW_ACCESS_FLAG(guest_test_dc_zva), + TEST_HW_ACCESS_FLAG(guest_test_exec), + + /* Accessing a hole shouldn't fault (more sanity checks). 
*/ + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_read64), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_ld_preidx), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_write64), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_at), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_dc_zva), + TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_st_preidx), + + { 0 }, +}; + +static void for_each_test_and_guest_mode( + void (*func)(enum vm_guest_mode m, void *a), + enum vm_mem_backing_src_type src_type) +{ + struct test_desc *t; + + for (t = &tests[0]; t->name; t++) { + if (t->skip) + continue; + + struct test_params p = { + .src_type = src_type, + .test_desc = t, + }; + + for_each_guest_mode(func, &p); + } +} From patchwork Fri Mar 11 06:02:04 2022 Date: Thu, 10 Mar 2022 22:02:04 -0800 In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com> Message-Id: <20220311060207.2438667-9-ricarkol@google.com> References: <20220311060207.2438667-1-ricarkol@google.com> Subject: [PATCH 08/11] KVM: 
selftests: aarch64: Add userfaultfd tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some userfaultfd tests into page_fault_test. Punch holes into the data and/or page-table memslots, perform some accesses, and check that the faults are taken (or not taken) when expected. Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 232 +++++++++++++++++- 1 file changed, 229 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index 00477a4f10cb..99449eaddb2b 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -57,6 +57,8 @@ uint64_t pte_gpa; enum { PT, TEST, NR_MEMSLOTS}; struct memslot_desc { + size_t paging_size; + char *data_copy; void *hva; uint64_t gpa; uint64_t size; @@ -78,6 +80,9 @@ struct memslot_desc { static struct event_cnt { int aborts; int fail_vcpu_runs; + int uffd_faults; + /* uffd_faults is incremented from multiple threads. 
*/ + pthread_mutex_t uffd_faults_mutex; } events; struct test_desc { @@ -87,6 +92,8 @@ struct test_desc { bool (*guest_prepare[PREPARE_FN_NR])(void); void (*guest_test)(void); void (*guest_test_check[CHECK_FN_NR])(void); + int (*uffd_pt_handler)(int mode, int uffd, struct uffd_msg *msg); + int (*uffd_test_handler)(int mode, int uffd, struct uffd_msg *msg); void (*dabt_handler)(struct ex_regs *regs); void (*iabt_handler)(struct ex_regs *regs); uint32_t pt_memslot_flags; @@ -305,6 +312,56 @@ static void no_iabt_handler(struct ex_regs *regs) GUEST_ASSERT_1(false, regs->pc); } +static int uffd_generic_handler(int uffd_mode, int uffd, + struct uffd_msg *msg, struct memslot_desc *memslot, + bool expect_write) +{ + uint64_t addr = msg->arg.pagefault.address; + uint64_t flags = msg->arg.pagefault.flags; + struct uffdio_copy copy; + int ret; + + TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING, + "The only expected UFFD mode is MISSING"); + ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write); + ASSERT_EQ(addr, (uint64_t)memslot->hva); + + pr_debug("uffd fault: addr=%p write=%d\n", + (void *)addr, !!(flags & UFFD_PAGEFAULT_FLAG_WRITE)); + + copy.src = (uint64_t)memslot->data_copy; + copy.dst = addr; + copy.len = memslot->paging_size; + copy.mode = 0; + + ret = ioctl(uffd, UFFDIO_COPY, ©); + if (ret == -1) { + pr_info("Failed UFFDIO_COPY in 0x%lx with errno: %d\n", + addr, errno); + return ret; + } + + pthread_mutex_lock(&events.uffd_faults_mutex); + events.uffd_faults += 1; + pthread_mutex_unlock(&events.uffd_faults_mutex); + return 0; +} + +static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &memslot[PT], true); +} + +static int uffd_test_write_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, uffd, msg, &memslot[TEST], true); +} + +static int uffd_test_read_handler(int mode, int uffd, struct uffd_msg *msg) +{ + return uffd_generic_handler(mode, 
uffd, msg, &memslot[TEST], false); +} + static void punch_hole_in_memslot(struct kvm_vm *vm, struct memslot_desc *memslot) { @@ -314,11 +371,11 @@ static void punch_hole_in_memslot(struct kvm_vm *vm, fd = vm_mem_region_get_src_fd(vm, memslot->idx); if (fd != -1) { ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, - 0, memslot->size); + 0, memslot->paging_size); TEST_ASSERT(ret == 0, "fallocate failed, errno: %d\n", errno); } else { hva = addr_gpa2hva(vm, memslot->gpa); - ret = madvise(hva, memslot->size, MADV_DONTNEED); + ret = madvise(hva, memslot->paging_size, MADV_DONTNEED); TEST_ASSERT(ret == 0, "madvise failed, errno: %d\n", errno); } } @@ -457,9 +514,60 @@ static void setup_memslots(struct kvm_vm *vm, enum vm_guest_mode mode, virt_pg_map(vm, pte_gva, pte_gpa); } +static void setup_uffd(enum vm_guest_mode mode, struct test_params *p, + struct uffd_desc **uffd) +{ + struct test_desc *test = p->test_desc; + uint64_t large_page_size = get_backing_src_pagesz(p->src_type); + int i; + + /* + * When creating the map, we might not only have created a pte page, + * but also an intermediate level (pte_gpa != gpa[PT]). So, we + * might need to demand page both. 
+ */ + memslot[PT].paging_size = align_up(pte_gpa - memslot[PT].gpa, + large_page_size) + large_page_size; + memslot[TEST].paging_size = large_page_size; + + for (i = 0; i < NR_MEMSLOTS; i++) { + memslot[i].data_copy = malloc(memslot[i].paging_size); + TEST_ASSERT(memslot[i].data_copy, "Failed malloc."); + memcpy(memslot[i].data_copy, memslot[i].hva, + memslot[i].paging_size); + } + + uffd[PT] = NULL; + if (test->uffd_pt_handler) + uffd[PT] = uffd_setup_demand_paging( + UFFDIO_REGISTER_MODE_MISSING, 0, + memslot[PT].hva, memslot[PT].paging_size, + test->uffd_pt_handler); + + uffd[TEST] = NULL; + if (test->uffd_test_handler) + uffd[TEST] = uffd_setup_demand_paging( + UFFDIO_REGISTER_MODE_MISSING, 0, + memslot[TEST].hva, memslot[TEST].paging_size, + test->uffd_test_handler); +} + static void check_event_counts(struct test_desc *test) { ASSERT_EQ(test->expected_events.aborts, events.aborts); + ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults); +} + +static void free_uffd(struct test_desc *test, struct uffd_desc **uffd) +{ + int i; + + if (test->uffd_pt_handler) + uffd_stop_demand_paging(uffd[PT]); + if (test->uffd_test_handler) + uffd_stop_demand_paging(uffd[TEST]); + for (i = 0; i < NR_MEMSLOTS; i++) + free(memslot[i].data_copy); } static void print_test_banner(enum vm_guest_mode mode, struct test_params *p) @@ -517,6 +625,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) struct test_params *p = (struct test_params *)arg; struct test_desc *test = p->test_desc; struct kvm_vm *vm; + struct uffd_desc *uffd[NR_MEMSLOTS]; bool skip_test = false; print_test_banner(mode, p); @@ -528,7 +637,14 @@ static void run_test(enum vm_guest_mode mode, void *arg) reset_event_counts(); setup_memslots(vm, mode, p); + /* + * Set some code at memslot[TEST].hva for the guest to execute (only + * applicable to the EXEC tests). This has to be done before + * setup_uffd() as that function copies the memslot data for the uffd + * handler. 
+ */ load_exec_code_for_test(); + setup_uffd(mode, p, uffd); setup_abort_handlers(vm, test); setup_guest_args(vm, test); @@ -542,7 +658,12 @@ static void run_test(enum vm_guest_mode mode, void *arg) sync_stats_from_guest(vm); ucall_uninit(vm); kvm_vm_free(vm); + free_uffd(test, uffd); + /* + * Make sure this is called after the uffd threads have exited (and + * updated their respective event counters). + */ if (!skip_test) check_event_counts(test); } @@ -625,6 +746,43 @@ int main(int argc, char *argv[]) __VA_ARGS__ \ } +#define TEST_ACCESS_ON_HOLE_UFFD(__a, __uffd_handler, ...) \ +{ \ + .name = SNAME(ACCESS_ON_HOLE_UFFD ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_TEST, \ + .uffd_test_handler = __uffd_handler, \ + .expected_events = { .uffd_faults = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, ...) \ +{ \ + .name = SNAME(S1PTW_ON_HOLE_UFFD ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_PT, \ + .uffd_pt_handler = __uffd_handler, \ + .expected_events = { .uffd_faults = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_ON_HOLE_UFFD_AF(__a, __uffd_handler) \ + TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, __AF_TEST_ARGS) + +#define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, ...) \ +{ \ + .name = SNAME(ACCESS_S1PTW_ON_HOLE_UFFD ## _ ## __a), \ + .guest_test = __a, \ + .mem_mark_cmd = CMD_HOLE_PT | CMD_HOLE_TEST, \ + .uffd_pt_handler = __ph, \ + .uffd_test_handler = __th, \ + .expected_events = { .uffd_faults = 2, }, \ + __VA_ARGS__ \ +} + +#define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(__a, __th, __ph) \ + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, __AF_TEST_ARGS) + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). 
*/ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -640,10 +798,78 @@ static struct test_desc tests[] = { TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_cas, __PREPARE_LSE_TEST_ARGS), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_ld_preidx), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_write64), - TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_at), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_dc_zva), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_st_preidx), + /* UFFD basic (sanity checks) */ + TEST_ACCESS_ON_HOLE_UFFD(guest_test_read64, uffd_test_read_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_cas, uffd_test_read_handler, + __PREPARE_LSE_TEST_ARGS), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_ld_preidx, uffd_test_read_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_write64, uffd_test_write_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_st_preidx, uffd_test_write_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_dc_zva, uffd_test_write_handler), + TEST_ACCESS_ON_HOLE_UFFD(guest_test_exec, uffd_test_read_handler), + + /* UFFD fault due to S1PTW. Note how they are all write faults. */ + TEST_S1PTW_ON_HOLE_UFFD(guest_test_read64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_cas, uffd_pt_write_handler, + __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_at, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_ld_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_write64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_dc_zva, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_st_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_exec, uffd_pt_write_handler), + + /* UFFD fault due to S1PTW with AF. Note how they are all write faults. */ + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_read64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD(guest_test_cas, uffd_pt_write_handler, + __AF_LSE_TEST_ARGS), + /* + * Can't test the AF case for address translation insts (D5.4.11) as + * it's IMPDEF whether that sets the AF. 
+ */ + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_ld_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_write64, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_st_preidx, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_dc_zva, uffd_pt_write_handler), + TEST_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, uffd_pt_write_handler), + + /* UFFD faults due to an access and its S1PTW. */ + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_read64, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_cas, + uffd_test_read_handler, uffd_pt_write_handler, + __PREPARE_LSE_TEST_ARGS), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_ld_preidx, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_write64, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_dc_zva, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_st_preidx, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_exec, + uffd_test_read_handler, uffd_pt_write_handler), + + /* UFFD faults due to an access and its S1PTW with AF. 
*/ + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_read64, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(guest_test_cas, + uffd_test_read_handler, uffd_pt_write_handler, + __AF_LSE_TEST_ARGS), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_ld_preidx, + uffd_test_read_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_write64, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_dc_zva, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_st_preidx, + uffd_test_write_handler, uffd_pt_write_handler), + TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, + uffd_test_read_handler, uffd_pt_write_handler), + { 0 }, }; From patchwork Fri Mar 11 06:02:05 2022 Date: Thu, 10 Mar 2022 22:02:05 -0800 In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com> Message-Id: <20220311060207.2438667-10-ricarkol@google.com> References: <20220311060207.2438667-1-ricarkol@google.com> 
Subject: [PATCH 09/11] KVM: selftests: aarch64: Add dirty logging tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some dirty logging tests into page_fault_test. Mark the data and/or page-table memslots for dirty logging, perform some accesses, and check that the dirty log bits are set or clean when expected. Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 123 ++++++++++++++++++ 1 file changed, 123 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index 99449eaddb2b..b41da9317242 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -45,6 +45,11 @@ #define CMD_SKIP_TEST (-1LL) #define CMD_HOLE_PT (1ULL << 2) #define CMD_HOLE_TEST (1ULL << 3) +#define CMD_RECREATE_PT_MEMSLOT_WR (1ULL << 4) +#define CMD_CHECK_WRITE_IN_DIRTY_LOG (1ULL << 5) +#define CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG (1ULL << 6) +#define CMD_CHECK_NO_WRITE_IN_DIRTY_LOG (1ULL << 7) +#define CMD_SET_PTE_AF (1ULL << 8) #define PREPARE_FN_NR 10 #define CHECK_FN_NR 10 @@ -251,6 +256,21 @@ static void guest_check_pte_af(void) GUEST_ASSERT_EQ(*((uint64_t *)pte_gva) & PTE_AF, PTE_AF); } +static void guest_check_write_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_WRITE_IN_DIRTY_LOG); +} + +static void guest_check_no_write_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_NO_WRITE_IN_DIRTY_LOG); +} + +static void guest_check_s1ptw_wr_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG); +} + static void guest_test_exec(void) { int (*code)(void) = (int 
(*)(void))test_exec_gva; @@ -380,12 +400,34 @@ static void punch_hole_in_memslot(struct kvm_vm *vm, } } +static bool check_write_in_dirty_log(struct kvm_vm *vm, + struct memslot_desc *ms, uint64_t host_pg_nr) +{ + unsigned long *bmap; + bool first_page_dirty; + + bmap = bitmap_zalloc(ms->size / getpagesize()); + kvm_vm_get_dirty_log(vm, ms->idx, bmap); + first_page_dirty = test_bit(host_pg_nr, bmap); + free(bmap); + return first_page_dirty; +} + static void handle_cmd(struct kvm_vm *vm, int cmd) { if (cmd & CMD_HOLE_PT) punch_hole_in_memslot(vm, &memslot[PT]); if (cmd & CMD_HOLE_TEST) punch_hole_in_memslot(vm, &memslot[TEST]); + if (cmd & CMD_CHECK_WRITE_IN_DIRTY_LOG) + TEST_ASSERT(check_write_in_dirty_log(vm, &memslot[TEST], 0), + "Missing write in dirty log"); + if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG) + TEST_ASSERT(!check_write_in_dirty_log(vm, &memslot[TEST], 0), + "Unexpected s1ptw write in dirty log"); + if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG) + TEST_ASSERT(check_write_in_dirty_log(vm, &memslot[PT], 0), + "Missing s1ptw write in dirty log"); } static void sync_stats_from_guest(struct kvm_vm *vm) @@ -783,6 +825,56 @@ int main(int argc, char *argv[]) #define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(__a, __th, __ph) \ TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, __AF_TEST_ARGS) +#define __TEST_ACCESS_DIRTY_LOG(__a, ...) \ +{ \ + .name = SNAME(TEST_ACCESS_DIRTY_LOG ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define __CHECK_WRITE_IN_DIRTY_LOG \ + .guest_test_check = { guest_check_write_in_dirty_log, }, + +#define __CHECK_NO_WRITE_IN_DIRTY_LOG \ + .guest_test_check = { guest_check_no_write_in_dirty_log, }, + +#define TEST_WRITE_DIRTY_LOG(__a, ...) \ + __TEST_ACCESS_DIRTY_LOG(__a, __CHECK_WRITE_IN_DIRTY_LOG __VA_ARGS__) + +#define TEST_NO_WRITE_DIRTY_LOG(__a, ...) 
\ + __TEST_ACCESS_DIRTY_LOG(__a, __CHECK_NO_WRITE_IN_DIRTY_LOG __VA_ARGS__) + +#define __TEST_S1PTW_DIRTY_LOG(__a, ...) \ +{ \ + .name = SNAME(S1PTW_AF_DIRTY_LOG ## _ ## __a), \ + .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define __CHECK_S1PTW_WR_IN_DIRTY_LOG \ + .guest_test_check = { guest_check_s1ptw_wr_in_dirty_log, }, + +#define TEST_S1PTW_DIRTY_LOG(__a, ...) \ + __TEST_S1PTW_DIRTY_LOG(__a, __CHECK_S1PTW_WR_IN_DIRTY_LOG __VA_ARGS__) + +#define __AF_TEST_ARGS_FOR_DIRTY_LOG \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, }, \ + .guest_test_check = { guest_check_s1ptw_wr_in_dirty_log, \ + guest_check_pte_af, }, + +#define __AF_AND_LSE_ARGS_FOR_DIRTY_LOG \ + .guest_prepare = { guest_set_ha, guest_clear_pte_af, \ + guest_check_lse, }, \ + .guest_test_check = { guest_check_s1ptw_wr_in_dirty_log, \ + guest_check_pte_af, }, + +#define TEST_S1PTW_AF_DIRTY_LOG(__a, ...) \ + TEST_S1PTW_DIRTY_LOG(__a, __AF_TEST_ARGS_FOR_DIRTY_LOG) + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). */ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -793,6 +885,37 @@ static struct test_desc tests[] = { TEST_HW_ACCESS_FLAG(guest_test_dc_zva), TEST_HW_ACCESS_FLAG(guest_test_exec), + /* Dirty log basic checks. */ + TEST_WRITE_DIRTY_LOG(guest_test_write64), + TEST_WRITE_DIRTY_LOG(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_DIRTY_LOG(guest_test_dc_zva), + TEST_WRITE_DIRTY_LOG(guest_test_st_preidx), + TEST_NO_WRITE_DIRTY_LOG(guest_test_read64), + TEST_NO_WRITE_DIRTY_LOG(guest_test_ld_preidx), + TEST_NO_WRITE_DIRTY_LOG(guest_test_at), + TEST_NO_WRITE_DIRTY_LOG(guest_test_exec), + + /* + * S1PTW on a PT (no AF) which is marked for dirty logging. Note that + * this still shows up in the dirty log as a write. 
+ */ + TEST_S1PTW_DIRTY_LOG(guest_test_write64), + TEST_S1PTW_DIRTY_LOG(guest_test_st_preidx), + TEST_S1PTW_DIRTY_LOG(guest_test_read64), + TEST_S1PTW_DIRTY_LOG(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_DIRTY_LOG(guest_test_ld_preidx), + TEST_S1PTW_DIRTY_LOG(guest_test_at), + TEST_S1PTW_DIRTY_LOG(guest_test_dc_zva), + TEST_S1PTW_DIRTY_LOG(guest_test_exec), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_write64), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_st_preidx), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_read64), + TEST_S1PTW_DIRTY_LOG(guest_test_cas, __AF_AND_LSE_ARGS_FOR_DIRTY_LOG), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_ld_preidx), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_at), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_dc_zva), + TEST_S1PTW_AF_DIRTY_LOG(guest_test_exec), + /* Accessing a hole shouldn't fault (more sanity checks). */ TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_read64), TEST_ACCESS_ON_HOLE_NO_FAULTS(guest_test_cas, __PREPARE_LSE_TEST_ARGS), From patchwork Fri Mar 11 06:02:06 2022 Date: Thu, 10 Mar 2022 22:02:06 -0800 In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com> Message-Id: 
<20220311060207.2438667-11-ricarkol@google.com> Mime-Version: 1.0 References: <20220311060207.2438667-1-ricarkol@google.com> X-Mailer: git-send-email 2.35.1.723.g4982287a31-goog Subject: [PATCH 10/11] KVM: selftests: aarch64: Add readonly memslot tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some readonly memslot tests into page_fault_test. Mark the data and/or page-table memslots as readonly, perform some accesses, and check that the right fault is triggered when expected (e.g., a store with no write-back should lead to an mmio exit). Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 303 +++++++++++++++++- 1 file changed, 300 insertions(+), 3 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index b41da9317242..e6607f903bc1 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -84,6 +84,7 @@ struct memslot_desc { static struct event_cnt { int aborts; + int mmio_exits; int fail_vcpu_runs; int uffd_faults; /* uffd_faults is incremented from multiple threads. 
*/ @@ -101,6 +102,8 @@ struct test_desc { int (*uffd_test_handler)(int mode, int uffd, struct uffd_msg *msg); void (*dabt_handler)(struct ex_regs *regs); void (*iabt_handler)(struct ex_regs *regs); + void (*mmio_handler)(struct kvm_run *run); + void (*fail_vcpu_run_handler)(int ret); uint32_t pt_memslot_flags; uint32_t test_memslot_flags; void (*guest_pre_run)(struct kvm_vm *vm); @@ -322,6 +325,20 @@ static void guest_code(struct test_desc *test) GUEST_DONE(); } +static void dabt_s1ptw_on_ro_memslot_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_EQ(read_sysreg(far_el1), GUEST_TEST_GVA); + events.aborts += 1; + GUEST_SYNC(CMD_RECREATE_PT_MEMSLOT_WR); +} + +static void iabt_s1ptw_on_ro_memslot_handler(struct ex_regs *regs) +{ + GUEST_ASSERT_EQ(regs->pc, GUEST_TEST_EXEC_GVA); + events.aborts += 1; + GUEST_SYNC(CMD_RECREATE_PT_MEMSLOT_WR); +} + static void no_dabt_handler(struct ex_regs *regs) { GUEST_ASSERT_1(false, read_sysreg(far_el1)); @@ -400,6 +417,57 @@ static void punch_hole_in_memslot(struct kvm_vm *vm, } } +static int __memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot, + uint32_t size, uint64_t guest_addr, + uint32_t flags) +{ + struct kvm_userspace_memory_region region; + int ret; + + region.slot = slot; + region.flags = flags; + region.guest_phys_addr = guest_addr; + region.memory_size = size; + region.userspace_addr = (uintptr_t) mem; + ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, ®ion); + + return ret; +} + +static void recreate_memslot(struct kvm_vm *vm, struct memslot_desc *ms, + uint32_t flags) +{ + __memory_region_add(vm, ms->hva, ms->idx, 0, ms->gpa, 0); + __memory_region_add(vm, ms->hva, ms->idx, ms->size, ms->gpa, flags); +} + +static void clear_pte_accessflag(struct kvm_vm *vm) +{ + volatile uint64_t *pte_hva; + + pte_hva = (uint64_t *)addr_gpa2hva(vm, pte_gpa); + *pte_hva &= ~PTE_AF; +} + +static void mmio_on_test_gpa_handler(struct kvm_run *run) +{ + ASSERT_EQ(run->mmio.phys_addr, memslot[TEST].gpa); + + 
memcpy(memslot[TEST].hva, run->mmio.data, run->mmio.len); + events.mmio_exits += 1; +} + +static void mmio_no_handler(struct kvm_run *run) +{ + uint64_t data; + + memcpy(&data, run->mmio.data, sizeof(data)); + pr_debug("addr=%lld len=%d w=%d data=%lx\n", + run->mmio.phys_addr, run->mmio.len, + run->mmio.is_write, data); + TEST_FAIL("There was no MMIO exit expected."); +} + static bool check_write_in_dirty_log(struct kvm_vm *vm, struct memslot_desc *ms, uint64_t host_pg_nr) { @@ -419,6 +487,8 @@ static void handle_cmd(struct kvm_vm *vm, int cmd) punch_hole_in_memslot(vm, &memslot[PT]); if (cmd & CMD_HOLE_TEST) punch_hole_in_memslot(vm, &memslot[TEST]); + if (cmd & CMD_RECREATE_PT_MEMSLOT_WR) + recreate_memslot(vm, &memslot[PT], 0); if (cmd & CMD_CHECK_WRITE_IN_DIRTY_LOG) TEST_ASSERT(check_write_in_dirty_log(vm, &memslot[TEST], 0), "Missing write in dirty log"); @@ -442,6 +512,13 @@ void fail_vcpu_run_no_handler(int ret) TEST_FAIL("Unexpected vcpu run failure\n"); } +void fail_vcpu_run_mmio_no_syndrome_handler(int ret) +{ + TEST_ASSERT(errno == ENOSYS, "The mmio handler in the kernel" + " should have returned not implemented."); + events.fail_vcpu_runs += 1; +} + static uint64_t get_total_guest_pages(enum vm_guest_mode mode, struct test_params *p) { @@ -594,10 +671,21 @@ static void setup_uffd(enum vm_guest_mode mode, struct test_params *p, test->uffd_test_handler); } +static void setup_default_handlers(struct test_desc *test) +{ + if (!test->mmio_handler) + test->mmio_handler = mmio_no_handler; + + if (!test->fail_vcpu_run_handler) + test->fail_vcpu_run_handler = fail_vcpu_run_no_handler; +} + static void check_event_counts(struct test_desc *test) { ASSERT_EQ(test->expected_events.aborts, events.aborts); ASSERT_EQ(test->expected_events.uffd_faults, events.uffd_faults); + ASSERT_EQ(test->expected_events.mmio_exits, events.mmio_exits); + ASSERT_EQ(test->expected_events.fail_vcpu_runs, events.fail_vcpu_runs); } static void free_uffd(struct test_desc *test, struct 
uffd_desc **uffd) @@ -629,12 +717,20 @@ static void reset_event_counts(void) static bool vcpu_run_loop(struct kvm_vm *vm, struct test_desc *test) { + struct kvm_run *run; bool skip_test = false; struct ucall uc; - int stage; + int stage, ret; + + run = vcpu_state(vm, VCPU_ID); for (stage = 0; ; stage++) { - vcpu_run(vm, VCPU_ID); + ret = _vcpu_run(vm, VCPU_ID); + if (ret) { + test->fail_vcpu_run_handler(ret); + pr_debug("Done.\n"); + goto done; + } switch (get_ucall(vm, VCPU_ID, &uc)) { case UCALL_SYNC: @@ -653,6 +749,10 @@ static bool vcpu_run_loop(struct kvm_vm *vm, struct test_desc *test) case UCALL_DONE: pr_debug("Done.\n"); goto done; + case UCALL_NONE: + if (run->exit_reason == KVM_EXIT_MMIO) + test->mmio_handler(run); + break; default: TEST_FAIL("Unknown ucall %lu", uc.cmd); } @@ -677,6 +777,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) ucall_init(vm, NULL); reset_event_counts(); + setup_abort_handlers(vm, test); setup_memslots(vm, mode, p); /* @@ -687,7 +788,7 @@ static void run_test(enum vm_guest_mode mode, void *arg) */ load_exec_code_for_test(); setup_uffd(mode, p, uffd); - setup_abort_handlers(vm, test); + setup_default_handlers(test); setup_guest_args(vm, test); if (test->guest_pre_run) @@ -875,6 +976,135 @@ int main(int argc, char *argv[]) #define TEST_S1PTW_AF_DIRTY_LOG(__a, ...) \ TEST_S1PTW_DIRTY_LOG(__a, __AF_TEST_ARGS_FOR_DIRTY_LOG) +#define TEST_WRITE_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SNAME(WRITE_ON_RO_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .mmio_handler = mmio_on_test_gpa_handler, \ + .expected_events = { .mmio_exits = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_READ_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SNAME(READ_ON_RO_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .expected_events = { 0 }, \ + __VA_ARGS__ \ +} + +#define TEST_CM_ON_RO_MEMSLOT(__a, ...) 
\ +{ \ + .name = SNAME(CM_ON_RO_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ + .expected_events = { .fail_vcpu_runs = 1, }, \ + __VA_ARGS__ \ +} + +#define __AF_TEST_IN_RO_MEMSLOT_ARGS \ + .guest_pre_run = clear_pte_accessflag, \ + .guest_prepare = { guest_set_ha, }, \ + .guest_test_check = { guest_check_pte_af, } + +#define __AF_LSE_IN_RO_MEMSLOT_ARGS \ + .guest_pre_run = clear_pte_accessflag, \ + .guest_prepare = { guest_set_ha, guest_check_lse, }, \ + .guest_test_check = { guest_check_pte_af, } + +#define TEST_WRITE_ON_RO_MEMSLOT_AF(__a) \ + TEST_WRITE_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_READ_ON_RO_MEMSLOT_AF(__a) \ + TEST_READ_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_CM_ON_RO_MEMSLOT_AF(__a) \ + TEST_CM_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_S1PTW_ON_RO_MEMSLOT_DATA(__a, ...) \ +{ \ + .name = SNAME(S1PTW_ON_RO_MEMSLOT_DATA ## _ ## __a), \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_ON_RO_MEMSLOT_EXEC(__a, ...) \ +{ \ + .name = SNAME(S1PTW_ON_RO_MEMSLOT_EXEC ## _ ## __a), \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .iabt_handler = iabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(__a) \ + TEST_S1PTW_ON_RO_MEMSLOT_DATA(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_S1PTW_AF_ON_RO_MEMSLOT_EXEC(__a) \ + TEST_S1PTW_ON_RO_MEMSLOT_EXEC(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + +#define TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) 
\ +{ \ + .name = SCAT(WRITE_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .mmio_handler = mmio_on_test_gpa_handler, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .mmio_exits = 1, .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SCAT(READ_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) \ +{ \ + .name = SCAT(CM_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .dabt_handler = dabt_s1ptw_on_ro_memslot_handler, \ + .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ + .expected_events = { .aborts = 1, .fail_vcpu_runs = 1 }, \ + __VA_ARGS__ \ +} + +#define TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, ...) 
\ +{ \ + .name = SCAT(EXEC_AND_S1PTW_ON_RO_MEMSLOT, __a), \ + .test_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ + .guest_test = __a, \ + .iabt_handler = iabt_s1ptw_on_ro_memslot_handler, \ + .expected_events = { .aborts = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_READ_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ + TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). */ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -993,6 +1223,73 @@ static struct test_desc tests[] = { TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, uffd_test_read_handler, uffd_pt_write_handler), + /* Access on readonly memslot (sanity check). */ + TEST_WRITE_ON_RO_MEMSLOT(guest_test_write64), + TEST_READ_ON_RO_MEMSLOT(guest_test_read64), + TEST_READ_ON_RO_MEMSLOT(guest_test_ld_preidx), + TEST_READ_ON_RO_MEMSLOT(guest_test_exec), + /* + * CM and ld/st with pre-indexing don't have any syndrome. And so + * vcpu_run just fails; which is expected. + */ + TEST_CM_ON_RO_MEMSLOT(guest_test_dc_zva), + TEST_CM_ON_RO_MEMSLOT(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_CM_ON_RO_MEMSLOT(guest_test_st_preidx), + + /* Access on readonly memslot w/ non-faulting S1PTW w/ AF. 
*/ + TEST_WRITE_ON_RO_MEMSLOT_AF(guest_test_write64), + TEST_READ_ON_RO_MEMSLOT_AF(guest_test_read64), + TEST_READ_ON_RO_MEMSLOT_AF(guest_test_ld_preidx), + TEST_CM_ON_RO_MEMSLOT(guest_test_cas, __AF_LSE_IN_RO_MEMSLOT_ARGS), + TEST_CM_ON_RO_MEMSLOT_AF(guest_test_dc_zva), + TEST_CM_ON_RO_MEMSLOT_AF(guest_test_st_preidx), + TEST_READ_ON_RO_MEMSLOT_AF(guest_test_exec), + + /* + * S1PTW without AF on a readonly memslot. Note that even though this + * page table walk does not actually write the access flag, it is still + * considered a write, and therefore there is a fault. + */ + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_write64), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_read64), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_ld_preidx), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_cas, __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_at), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_dc_zva), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_st_preidx), + TEST_S1PTW_ON_RO_MEMSLOT_EXEC(guest_test_exec), + + /* S1PTW with AF on a readonly memslot. */ + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_write64), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_read64), + TEST_S1PTW_ON_RO_MEMSLOT_DATA(guest_test_cas, + __AF_LSE_IN_RO_MEMSLOT_ARGS), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_ld_preidx), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_at), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_st_preidx), + TEST_S1PTW_AF_ON_RO_MEMSLOT_DATA(guest_test_dc_zva), + TEST_S1PTW_AF_ON_RO_MEMSLOT_EXEC(guest_test_exec), + + /* Access on a RO memslot with S1PTW also on a RO memslot. 
*/ + TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(guest_test_write64), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(guest_test_ld_preidx), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(guest_test_read64), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_cas, + __PREPARE_LSE_TEST_ARGS), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_dc_zva), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_st_preidx), + TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(guest_test_exec), + + /* Access on a RO memslot with S1PTW w/ AF also on a RO memslot. */ + TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_write64), + TEST_READ_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_read64), + TEST_READ_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_ld_preidx), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_cas, + __AF_LSE_IN_RO_MEMSLOT_ARGS), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_dc_zva), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_st_preidx), + TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_exec), + { 0 }, }; From patchwork Fri Mar 11 06:02:07 2022 X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 12777471 Date: Thu, 10 Mar 2022 22:02:07 -0800 In-Reply-To: <20220311060207.2438667-1-ricarkol@google.com> Message-Id:
<20220311060207.2438667-12-ricarkol@google.com> Mime-Version: 1.0 References: <20220311060207.2438667-1-ricarkol@google.com> X-Mailer: git-send-email 2.35.1.723.g4982287a31-goog Subject: [PATCH 11/11] KVM: selftests: aarch64: Add mix of tests into page_fault_test From: Ricardo Koller To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, axelrasmussen@google.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add some mix of tests into page_fault_test, like stage 2 faults on memslots marked for both userfaultfd and dirty-logging. Signed-off-by: Ricardo Koller --- .../selftests/kvm/aarch64/page_fault_test.c | 148 ++++++++++++++++++ 1 file changed, 148 insertions(+) diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index e6607f903bc1..f1a5bf081a5b 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -399,6 +399,12 @@ static int uffd_test_read_handler(int mode, int uffd, struct uffd_msg *msg) return uffd_generic_handler(mode, uffd, msg, &memslot[TEST], false); } +static int uffd_no_handler(int mode, int uffd, struct uffd_msg *msg) +{ + TEST_FAIL("There was no UFFD fault expected."); + return -1; +} + static void punch_hole_in_memslot(struct kvm_vm *vm, struct memslot_desc *memslot) { @@ -912,6 +918,30 @@ int main(int argc, char *argv[]) #define TEST_S1PTW_ON_HOLE_UFFD_AF(__a, __uffd_handler) \ TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, __AF_TEST_ARGS) +#define __DIRTY_LOG_TEST \ + .test_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test_check = { guest_check_write_in_dirty_log, }, \ + +#define __DIRTY_LOG_S1PTW_TEST \ + .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test_check = { 
guest_check_s1ptw_wr_in_dirty_log, }, \ + +#define TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(__a, __uffd_handler, ...) \ + TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_TEST __VA_ARGS__) + +#define TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(__a, __uffd_handler, ...) \ + TEST_ACCESS_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_TEST __VA_ARGS__) + +#define TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(__a, __uffd_handler, ...) \ + TEST_ACCESS_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_S1PTW_TEST __VA_ARGS__) + +#define TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(__a, __uffd_handler, ...) \ + TEST_S1PTW_ON_HOLE_UFFD(__a, __uffd_handler, \ + __DIRTY_LOG_S1PTW_TEST __VA_ARGS__) + #define TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD(__a, __th, __ph, ...) \ { \ .name = SNAME(ACCESS_S1PTW_ON_HOLE_UFFD ## _ ## __a), \ @@ -1015,6 +1045,10 @@ int main(int argc, char *argv[]) .guest_prepare = { guest_set_ha, guest_check_lse, }, \ .guest_test_check = { guest_check_pte_af, } +#define __NULL_UFFD_HANDLERS \ + .uffd_test_handler = uffd_no_handler, \ + .uffd_pt_handler = uffd_no_handler + #define TEST_WRITE_ON_RO_MEMSLOT_AF(__a) \ TEST_WRITE_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) @@ -1105,6 +1139,37 @@ int main(int argc, char *argv[]) #define TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(__a) \ TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, __AF_TEST_IN_RO_MEMSLOT_ARGS) +#define TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_WRITE_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) +#define TEST_READ_AND_S1PTW_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) +#define TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) +#define TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(__a) \ + TEST_EXEC_AND_S1PTW_ON_RO_MEMSLOT(__a, __NULL_UFFD_HANDLERS) + +#define TEST_WRITE_ON_RO_DIRTY_LOG_MEMSLOT(__a, ...) 
\ +{ \ + .name = SNAME(WRITE_ON_RO_DIRTY_LOG_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .guest_test_check = { guest_check_no_write_in_dirty_log, }, \ + .mmio_handler = mmio_on_test_gpa_handler, \ + .expected_events = { .mmio_exits = 1, }, \ + __VA_ARGS__ \ +} + +#define TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(__a, ...) \ +{ \ + .name = SNAME(CM_ON_RO_DIRTY_LOG_MEMSLOT ## _ ## __a), \ + .test_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ + .guest_test = __a, \ + .guest_test_check = { guest_check_no_write_in_dirty_log, }, \ + .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ + .expected_events = { .fail_vcpu_runs = 1, }, \ + __VA_ARGS__ \ +} + static struct test_desc tests[] = { /* Check that HW is setting the AF (sanity checks). */ TEST_HW_ACCESS_FLAG(guest_test_read64), @@ -1223,6 +1288,65 @@ static struct test_desc tests[] = { TEST_ACCESS_AND_S1PTW_ON_HOLE_UFFD_AF(guest_test_exec, uffd_test_read_handler, uffd_pt_write_handler), + /* Write into a memslot marked for both dirty logging and UFFD. */ + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_write64, + uffd_test_write_handler), + /* Note that the cas uffd handler is for a read. */ + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_cas, + uffd_test_read_handler, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_dc_zva, + uffd_test_write_handler), + TEST_WRITE_ON_DIRTY_LOG_AND_UFFD(guest_test_st_preidx, + uffd_test_write_handler), + + /* + * Access whose s1ptw faults on a hole that's marked for both dirty + * logging and UFFD. 
+ */ + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_read64, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_cas, + uffd_pt_write_handler, __PREPARE_LSE_TEST_ARGS), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_ld_preidx, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_exec, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_write64, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_st_preidx, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_dc_zva, + uffd_pt_write_handler), + TEST_S1PTW_ON_DIRTY_LOG_AND_UFFD(guest_test_at, + uffd_pt_write_handler), + + /* + * Write on a memslot marked for dirty logging whose related s1ptw + * is on a hole marked with UFFD. + */ + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_write64, + uffd_pt_write_handler), + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_cas, + uffd_pt_write_handler, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_dc_zva, + uffd_pt_write_handler), + TEST_WRITE_DIRTY_LOG_AND_S1PTW_ON_UFFD(guest_test_st_preidx, + uffd_pt_write_handler), + + /* + * Write on a memslot that's on a hole marked with UFFD, whose related + * s1ptw is on a memslot marked for dirty logging. + */ + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_write64, + uffd_test_write_handler), + /* Note that the uffd handler is for a read. */ + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_cas, + uffd_test_read_handler, __PREPARE_LSE_TEST_ARGS), + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_dc_zva, + uffd_test_write_handler), + TEST_WRITE_UFFD_AND_S1PTW_ON_DIRTY_LOG(guest_test_st_preidx, + uffd_test_write_handler), + /* Access on readonly memslot (sanity check). 
*/ TEST_WRITE_ON_RO_MEMSLOT(guest_test_write64), TEST_READ_ON_RO_MEMSLOT(guest_test_read64), @@ -1290,6 +1414,30 @@ static struct test_desc tests[] = { TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_st_preidx), TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT(guest_test_exec), + /* + * Access on a memslot marked as readonly with also dirty log tracking. + * There should be no write in the dirty log. + */ + TEST_WRITE_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_write64), + TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_cas, + __PREPARE_LSE_TEST_ARGS), + TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_dc_zva), + TEST_CM_ON_RO_DIRTY_LOG_MEMSLOT(guest_test_st_preidx), + + /* + * Access on a RO memslot with S1PTW also on a RO memslot, while also + * having those memslot regions marked for UFFD fault handling. The + * result is that UFFD fault handlers should not be called. + */ + TEST_WRITE_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_write64), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT_WITH_UFFD(guest_test_read64), + TEST_READ_AND_S1PTW_ON_RO_MEMSLOT_WITH_UFFD(guest_test_ld_preidx), + TEST_CM_AND_S1PTW_ON_RO_MEMSLOT(guest_test_cas, + __PREPARE_LSE_TEST_ARGS __NULL_UFFD_HANDLERS), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_dc_zva), + TEST_CM_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_st_preidx), + TEST_EXEC_AND_S1PTW_AF_ON_RO_MEMSLOT_WITH_UFFD(guest_test_exec), + { 0 }, };