From patchwork Mon Dec 16 21:38:54 2019
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11295401
Date: Mon, 16 Dec 2019 13:38:54 -0800
Message-Id: <20191216213901.106941-2-bgardon@google.com>
In-Reply-To: <20191216213901.106941-1-bgardon@google.com>
Subject: [PATCH v3 1/8] KVM: selftests: Create a demand paging test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

While userfaultfd, the mechanism used to implement demand paging for
KVM guests, is not specific to KVM, having a benchmark for its
performance will be useful for guiding performance improvements to KVM.

As a first step towards creating a userfaultfd demand paging test,
create a simple memory access test, based on dirty_log_test.

Signed-off-by: Ben Gardon
Reviewed-by: Peter Xu
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/demand_paging_test.c        | 268 ++++++++++++++++++
 3 files changed, 270 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 30072c3f52fbe..9619d96e15c41 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -17,3 +17,4 @@
 /clear_dirty_log_test
 /dirty_log_test
 /kvm_create_max_vcpus
+/demand_paging_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 3138a916574a9..8c412cdd527e6 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -28,6 +28,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
 TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
+TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 
 TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
new file mode 100644
index 0000000000000..36e12db5da56b
--- /dev/null
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -0,0 +1,268 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM demand paging test
+ * Adapted from dirty_log_test.c
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2019, Google, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_name */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/bitmap.h>
+#include <linux/bitops.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+
+#define VCPU_ID 1
+
+/* The memory slot index to demand page */
+#define TEST_MEM_SLOT_INDEX 1
+
+/* Default guest test virtual memory offset */
+#define DEFAULT_GUEST_TEST_MEM 0xc0000000
+
+/*
+ * Guest/Host shared variables. Ensure addr_gva2hva() and/or
+ * sync_global_to/from_guest() are used when accessing from
+ * the host. READ/WRITE_ONCE() should also be used with anything
+ * that may change.
+ */
+static uint64_t host_page_size;
+static uint64_t guest_page_size;
+static uint64_t guest_num_pages;
+
+/*
+ * Guest physical memory offset of the testing memory slot.
+ * This will be set to the topmost valid physical address minus
+ * the test memory size.
+ */
+static uint64_t guest_test_phys_mem;
+
+/*
+ * Guest virtual memory offset of the testing memory slot.
+ * Must not conflict with identity mapped test code.
+ */
+static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+
+/*
+ * Continuously write to the first 8 bytes of each page in the demand paging
+ * memory region.
+ */
+static void guest_code(void)
+{
+	int i;
+
+	for (i = 0; i < guest_num_pages; i++) {
+		uint64_t addr = guest_test_virt_mem;
+
+		addr += i * guest_page_size;
+		addr &= ~(host_page_size - 1);
+		*(uint64_t *)addr = 0x0123456789ABCDEF;
+	}
+
+	GUEST_SYNC(1);
+}
+
+/* Points to the test VM memory region on which we are doing demand paging */
+static void *host_test_mem;
+static uint64_t host_num_pages;
+
+static void *vcpu_worker(void *data)
+{
+	int ret;
+	struct kvm_vm *vm = data;
+	struct kvm_run *run;
+
+	run = vcpu_state(vm, VCPU_ID);
+
+	/* Let the guest access its memory */
+	ret = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
+	if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) {
+		TEST_ASSERT(false,
+			    "Invalid guest sync status: exit_reason=%s\n",
+			    exit_reason_str(run->exit_reason));
+	}
+
+	return NULL;
+}
+
+static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
+				uint64_t extra_mem_pages, void *guest_code)
+{
+	struct kvm_vm *vm;
+	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
+
+	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+#ifdef __x86_64__
+	vm_create_irqchip(vm);
+#endif
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+	return vm;
+}
+
+#define GUEST_MEM_SHIFT 30 /* 1G */
+#define PAGE_SHIFT_4K 12
+
+static void run_test(enum vm_guest_mode mode)
+{
+	pthread_t vcpu_thread;
+	struct kvm_vm *vm;
+
+	/*
+	 * We reserve page table for 2 times of extra dirty mem which
+	 * will definitely cover the original (1G+) test range. Here
+	 * we do the calculation with 4K page size which is the
+	 * smallest so the page number will be enough for all archs
+	 * (e.g., 64K page size guest will need even less memory for
+	 * page tables).
+	 */
+	vm = create_vm(mode, VCPU_ID,
+		       2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
+		       guest_code);
+
+	guest_page_size = vm_get_page_size(vm);
+	/*
+	 * A little more than 1G of guest page sized pages. Cover the
+	 * case where the size is not aligned to 64 pages.
+	 */
+	guest_num_pages = (1ul << (GUEST_MEM_SHIFT -
+				   vm_get_page_shift(vm))) + 16;
+#ifdef __s390x__
+	/* Round up to multiple of 1M (segment size) */
+	guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
+#endif
+
+	host_page_size = getpagesize();
+	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
+			 !!((guest_num_pages * guest_page_size) %
+			    host_page_size);
+
+	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
+			      guest_page_size;
+	guest_test_phys_mem &= ~(host_page_size - 1);
+
+#ifdef __s390x__
+	/* Align to 1M (segment size) */
+	guest_test_phys_mem &= ~((1 << 20) - 1);
+#endif
+
+	DEBUG("guest physical test memory offset: 0x%lx\n",
+	      guest_test_phys_mem);
+
+	/* Add an extra memory slot for testing demand paging */
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    guest_test_phys_mem,
+				    TEST_MEM_SLOT_INDEX,
+				    guest_num_pages, 0);
+
+	/* Do mapping for the demand paging memory slot */
+	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem,
+		 guest_num_pages * guest_page_size, 0);
+
+	/* Cache the HVA pointer of the region */
+	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+
+#ifdef __x86_64__
+	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+#endif
+
+	/* Export the shared variables to the guest */
+	sync_global_to_guest(vm, host_page_size);
+	sync_global_to_guest(vm, guest_page_size);
+	sync_global_to_guest(vm, guest_test_virt_mem);
+	sync_global_to_guest(vm, guest_num_pages);
+
+	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
+
+	/* Wait for the vcpu thread to quit */
+	pthread_join(vcpu_thread, NULL);
+
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+struct vm_guest_mode_params {
+	bool supported;
+	bool enabled;
+};
+struct vm_guest_mode_params vm_guest_mode_params[NUM_VM_MODES];
+
+#define vm_guest_mode_params_init(mode, supported, enabled)		\
+({									\
+	vm_guest_mode_params[mode] =					\
+		(struct vm_guest_mode_params){ supported, enabled };	\
+})
+
+static void help(char *name)
+{
+	int i;
+
+	puts("");
+	printf("usage: %s [-h] [-m mode]\n", name);
+	printf(" -m: specify the guest mode ID to test\n"
+	       "     (default: test all supported modes)\n"
+	       "     This option may be used multiple times.\n"
+	       "     Guest mode IDs:\n");
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
+		       vm_guest_mode_params[i].supported ? " (supported)" : "");
+	}
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	bool mode_selected = false;
+	unsigned int mode;
+	int opt, i;
+
+#ifdef __x86_64__
+	vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
+#endif
+#ifdef __s390x__
+	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
+#endif
+
+	while ((opt = getopt(argc, argv, "hm:")) != -1) {
+		switch (opt) {
+		case 'm':
+			if (!mode_selected) {
+				for (i = 0; i < NUM_VM_MODES; ++i)
+					vm_guest_mode_params[i].enabled = false;
+				mode_selected = true;
+			}
+			mode = strtoul(optarg, NULL, 10);
+			TEST_ASSERT(mode < NUM_VM_MODES,
+				    "Guest mode ID %d too big", mode);
+			vm_guest_mode_params[mode].enabled = true;
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		if (!vm_guest_mode_params[i].enabled)
+			continue;
+		TEST_ASSERT(vm_guest_mode_params[i].supported,
+			    "Guest mode ID %d (%s) not supported.",
+			    i, vm_guest_mode_string(i));
+		run_test(i);
+	}
+
+	return 0;
+}
From patchwork Mon Dec 16 21:38:55 2019
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11295399
Date: Mon, 16 Dec 2019 13:38:55 -0800
Message-Id: <20191216213901.106941-3-bgardon@google.com>
In-Reply-To: <20191216213901.106941-1-bgardon@google.com>
Subject: [PATCH v3 2/8] KVM: selftests: Add demand paging content to the
 demand paging test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

The demand paging test is currently a simple page access test which,
while potentially useful, doesn't add much versus the existing dirty
logging test. To improve the demand paging test, add a basic userfaultfd
demand paging implementation.

Signed-off-by: Ben Gardon
---
 .../selftests/kvm/demand_paging_test.c        | 177 +++++++++++++++++-
 1 file changed, 173 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 36e12db5da56b..a8f775dab7d4a 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -11,11 +11,14 @@
 
 #include <stdio.h>
 #include <stdlib.h>
+#include <sys/syscall.h>
 #include <unistd.h>
 #include <time.h>
+#include <poll.h>
 #include <pthread.h>
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
+#include <linux/userfaultfd.h>
 
 #include "test_util.h"
 #include "kvm_util.h"
@@ -39,6 +42,8 @@
 static uint64_t host_page_size;
 static uint64_t guest_page_size;
 static uint64_t guest_num_pages;
 
+static char *guest_data_prototype;
+
 /*
  * Guest physical memory offset of the testing memory slot.
@@ -110,13 +115,158 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 	return vm;
 }
 
+static int handle_uffd_page_request(int uffd, uint64_t addr)
+{
+	pid_t tid;
+	struct uffdio_copy copy;
+	int r;
+
+	tid = syscall(__NR_gettid);
+
+	copy.src = (uint64_t)guest_data_prototype;
+	copy.dst = addr;
+	copy.len = host_page_size;
+	copy.mode = 0;
+
+	r = ioctl(uffd, UFFDIO_COPY, &copy);
+	if (r == -1) {
+		DEBUG("Failed to page in 0x%lx from thread %d with errno: %d\n",
+		      addr, tid, errno);
+		return r;
+	}
+
+	return 0;
+}
+
+bool quit_uffd_thread;
+
+struct uffd_handler_args {
+	int uffd;
+};
+
+static void *uffd_handler_thread_fn(void *arg)
+{
+	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
+	int uffd = uffd_args->uffd;
+	int64_t pages = 0;
+
+	while (!quit_uffd_thread) {
+		struct uffd_msg msg;
+		struct pollfd pollfd[1];
+		int r;
+		uint64_t addr;
+
+		pollfd[0].fd = uffd;
+		pollfd[0].events = POLLIN;
+
+		/*
+		 * TODO this introduces a 0.5sec delay at the end of the test.
+		 * Reduce the timeout or eliminate it following the example in
+		 * tools/testing/selftests/vm/userfaultfd.c
+		 */
+		r = poll(pollfd, 1, 500);
+		switch (r) {
+		case -1:
+			DEBUG("poll err");
+			continue;
+		case 0:
+			continue;
+		case 1:
+			break;
+		default:
+			DEBUG("Polling uffd returned %d", r);
+			return NULL;
+		}
+
+		if (pollfd[0].revents & POLLERR) {
+			DEBUG("uffd revents has POLLERR");
+			return NULL;
+		}
+
+		if (!(pollfd[0].revents & POLLIN))
+			continue;
+
+		r = read(uffd, &msg, sizeof(msg));
+		if (r == -1) {
+			if (errno == EAGAIN)
+				continue;
+			DEBUG("Read of uffd got errno %d", errno);
+			return NULL;
+		}
+
+		if (r != sizeof(msg)) {
+			DEBUG("Read on uffd returned unexpected size: %d bytes",
+			      r);
+			return NULL;
+		}
+
+		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
+			continue;
+
+		addr = msg.arg.pagefault.address;
+		r = handle_uffd_page_request(uffd, addr);
+		if (r < 0)
+			return NULL;
+		pages++;
+	}
+
+	return NULL;
+}
+
+static int setup_demand_paging(struct kvm_vm *vm,
+			       pthread_t *uffd_handler_thread)
+{
+	int uffd;
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+	struct uffd_handler_args uffd_args;
+
+	guest_data_prototype = malloc(host_page_size);
+	memset(guest_data_prototype, 0xAB, host_page_size);
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd == -1) {
+		DEBUG("uffd creation failed\n");
+		return -1;
+	}
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api) == -1) {
+		DEBUG("ioctl uffdio_api failed\n");
+		return -1;
+	}
+
+	uffdio_register.range.start = (uint64_t)host_test_mem;
+	uffdio_register.range.len = host_num_pages * host_page_size;
+	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
+		DEBUG("ioctl uffdio_register failed\n");
+		return -1;
+	}
+
+	if ((uffdio_register.ioctls & UFFD_API_RANGE_IOCTLS) !=
+	    UFFD_API_RANGE_IOCTLS) {
+		DEBUG("unexpected userfaultfd ioctl set\n");
+		return -1;
+	}
+
+	uffd_args.uffd = uffd;
+	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
+		       &uffd_args);
+
+	return 0;
+}
+
 #define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K 12
 
-static void run_test(enum vm_guest_mode mode)
+static void run_test(enum vm_guest_mode mode, bool use_uffd)
 {
 	pthread_t vcpu_thread;
+	pthread_t uffd_handler_thread;
 	struct kvm_vm *vm;
+	int r;
 
 	/*
 	 * We reserve page table for 2 times of extra dirty mem which
@@ -173,6 +323,14 @@ static void run_test(enum vm_guest_mode mode)
 	/* Cache the HVA pointer of the region */
 	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
 
+	if (use_uffd) {
+		/* Set up user fault fd to handle demand paging requests. */
+		quit_uffd_thread = false;
+		r = setup_demand_paging(vm, &uffd_handler_thread);
+		if (r < 0)
+			exit(-r);
+	}
+
 #ifdef __x86_64__
 	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
 #endif
@@ -188,6 +346,12 @@ static void run_test(enum vm_guest_mode mode)
 	/* Wait for the vcpu thread to quit */
 	pthread_join(vcpu_thread, NULL);
 
+	if (use_uffd) {
+		/* Tell the user fault fd handler thread to quit */
+		quit_uffd_thread = true;
+		pthread_join(uffd_handler_thread, NULL);
+	}
+
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
@@ -209,7 +373,7 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-m mode]\n", name);
+	printf("usage: %s [-h] [-m mode] [-u]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -218,6 +382,7 @@ static void help(char *name)
 		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
 		       vm_guest_mode_params[i].supported ? " (supported)" : "");
 	}
+	printf(" -u: Use User Fault FD to handle vCPU page faults.\n");
 	puts("");
 	exit(0);
 }
@@ -227,6 +392,7 @@ int main(int argc, char *argv[])
 	bool mode_selected = false;
 	unsigned int mode;
 	int opt, i;
+	bool use_uffd = false;
 
 #ifdef __x86_64__
 	vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
@@ -235,7 +401,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:u")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -248,6 +414,9 @@ int main(int argc, char *argv[])
 				    "Guest mode ID %d too big", mode);
 			vm_guest_mode_params[mode].enabled = true;
 			break;
+		case 'u':
+			use_uffd = true;
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -261,7 +430,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i);
+		run_test(i, use_uffd);
 	}
 
 	return 0;
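
The userfaultfd flow this patch adds follows the standard protocol:
create the fd with the userfaultfd() syscall, negotiate the API with the
UFFDIO_API ioctl, register the test region with UFFDIO_REGISTER in
MISSING mode, then service faults by reading struct uffd_msg events and
resolving each one with UFFDIO_COPY. Note one hazard in the code as
posted: setup_demand_paging() hands the handler thread a pointer to its
stack-local uffd_args and then returns, so the thread may read a dead
stack frame; patch 6 of this series reworks the function so the caller
owns that struct. A minimal synchronous sketch of the same protocol
(illustrative only; it blocks in read() instead of polling, assumes 4K
pages, and omits error handling):

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define PAGE_SIZE 4096

    static char source_page[PAGE_SIZE];

    int resolve_one_fault(void *region, unsigned long len)
    {
    	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    	struct uffdio_register reg;
    	struct uffdio_copy copy;
    	struct uffd_msg msg;
    	int uffd;

    	/* Without O_NONBLOCK, the read() below blocks until a fault */
    	uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
    	ioctl(uffd, UFFDIO_API, &api);

    	reg.range.start = (unsigned long)region;
    	reg.range.len = len;
    	reg.mode = UFFDIO_REGISTER_MODE_MISSING;
    	ioctl(uffd, UFFDIO_REGISTER, &reg);

    	/* Another thread touching an unpopulated page produces a msg */
    	read(uffd, &msg, sizeof(msg));
    	if (msg.event != UFFD_EVENT_PAGEFAULT)
    		return -1;

    	/* Atomically install a filled page at the faulting address */
    	memset(source_page, 0xAB, PAGE_SIZE);
    	copy.src = (unsigned long)source_page;
    	copy.dst = msg.arg.pagefault.address & ~(PAGE_SIZE - 1);
    	copy.len = PAGE_SIZE;
    	copy.mode = 0;
    	return ioctl(uffd, UFFDIO_COPY, &copy);
    }

The selftest uses the asynchronous variant of this (O_NONBLOCK plus
poll()) so the handler thread can also notice the quit flag and exit.
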
From patchwork Mon Dec 16 21:38:56 2019
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11295391
Date: Mon, 16 Dec 2019 13:38:56 -0800
Message-Id: <20191216213901.106941-4-bgardon@google.com>
In-Reply-To: <20191216213901.106941-1-bgardon@google.com>
Subject: [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

When running the demand paging test with the -u option, the User Fault
FD handler essentially adds an arbitrary delay to page fault resolution.
To enable better simulation of a real demand paging scenario, add a
configurable delay to the UFFD handler.

Signed-off-by: Ben Gardon
Reviewed-by: Peter Xu
---
 .../selftests/kvm/demand_paging_test.c        | 32 +++++++++++++++----
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index a8f775dab7d4a..11de5b58995fb 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -142,12 +142,14 @@ bool quit_uffd_thread;
 
 struct uffd_handler_args {
 	int uffd;
+	useconds_t delay;
 };
 
 static void *uffd_handler_thread_fn(void *arg)
 {
 	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
 	int uffd = uffd_args->uffd;
+	useconds_t delay = uffd_args->delay;
 	int64_t pages = 0;
 
 	while (!quit_uffd_thread) {
@@ -203,6 +205,8 @@ static void *uffd_handler_thread_fn(void *arg)
 		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
 			continue;
 
+		if (delay)
+			usleep(delay);
 		addr = msg.arg.pagefault.address;
 		r = handle_uffd_page_request(uffd, addr);
 		if (r < 0)
@@ -214,7 +218,8 @@ static void *uffd_handler_thread_fn(void *arg)
 }
 
 static int setup_demand_paging(struct kvm_vm *vm,
-			       pthread_t *uffd_handler_thread)
+			       pthread_t *uffd_handler_thread,
+			       useconds_t uffd_delay)
 {
 	int uffd;
 	struct uffdio_api uffdio_api;
@@ -252,6 +257,7 @@ static int setup_demand_paging(struct kvm_vm *vm,
 	}
 
 	uffd_args.uffd = uffd;
+	uffd_args.delay = uffd_delay;
 	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
 		       &uffd_args);
 
@@ -261,7 +267,8 @@ static int setup_demand_paging(struct kvm_vm *vm,
 #define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K 12
 
-static void run_test(enum vm_guest_mode mode, bool use_uffd)
+static void run_test(enum vm_guest_mode mode, bool use_uffd,
+		     useconds_t uffd_delay)
 {
 	pthread_t vcpu_thread;
 	pthread_t uffd_handler_thread;
@@ -326,7 +333,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd)
 	if (use_uffd) {
 		/* Set up user fault fd to handle demand paging requests. */
 		quit_uffd_thread = false;
-		r = setup_demand_paging(vm, &uffd_handler_thread);
+		r = setup_demand_paging(vm, &uffd_handler_thread,
+					uffd_delay);
 		if (r < 0)
 			exit(-r);
 	}
@@ -373,7 +381,7 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-m mode] [-u]\n", name);
+	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -382,7 +390,11 @@ static void help(char *name)
 		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
 		       vm_guest_mode_params[i].supported ? " (supported)" : "");
 	}
-	printf(" -u: Use User Fault FD to handle vCPU page faults.\n");
+	printf(" -u: use User Fault FD to handle vCPU page\n"
+	       "     faults.\n");
+	printf(" -d: add a delay in usec to the User Fault\n"
+	       "     FD handler to simulate demand paging\n"
+	       "     overheads. Ignored without -u.\n");
 	puts("");
 	exit(0);
 }
@@ -393,6 +405,7 @@ int main(int argc, char *argv[])
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
+	useconds_t uffd_delay = 0;
 
 #ifdef __x86_64__
 	vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
@@ -401,7 +414,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:u")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:ud:")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -417,6 +430,11 @@ int main(int argc, char *argv[])
 		case 'u':
 			use_uffd = true;
 			break;
+		case 'd':
+			uffd_delay = strtoul(optarg, NULL, 0);
+			TEST_ASSERT(uffd_delay >= 0,
+				    "A negative UFFD delay is not supported.");
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -430,7 +448,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd);
+		run_test(i, use_uffd, uffd_delay);
 	}
 
 	return 0;
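
Without -d, a fault is resolved as fast as the handler thread can issue
UFFDIO_COPY; the delay stands in for the real cost of fetching page
contents from a slow source, such as the migration source host during
post-copy migration or a remote backing store. A plausible invocation
(illustrative only; guest mode IDs are architecture-dependent, see -h):

    ./demand_paging_test -u -d 500

routes every page fault through userfaultfd and sleeps 500 usec before
resolving each one, so the measured runtime approximates guest progress
under that per-page fetch latency.
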
From patchwork Mon Dec 16 21:38:57 2019
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11295389
Date: Mon, 16 Dec 2019 13:38:57 -0800
Message-Id: <20191216213901.106941-5-bgardon@google.com>
In-Reply-To: <20191216213901.106941-1-bgardon@google.com>
Subject: [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the
 demand paging test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

Add an argument to allow the demand paging test to work on larger and
smaller guest sizes.

Signed-off-by: Ben Gardon
---
 .../selftests/kvm/demand_paging_test.c        | 56 ++++++++++++-------
 1 file changed, 35 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 11de5b58995fb..4aa90a3fce99c 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -32,6 +32,8 @@
 /* Default guest test virtual memory offset */
 #define DEFAULT_GUEST_TEST_MEM 0xc0000000
 
+#define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
+
 /*
  * Guest/Host shared variables. Ensure addr_gva2hva() and/or
  * sync_global_to/from_guest() are used when accessing from
@@ -264,11 +266,10 @@ static int setup_demand_paging(struct kvm_vm *vm,
 	return 0;
 }
 
-#define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K 12
 
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay)
+		     useconds_t uffd_delay, uint64_t guest_memory_bytes)
 {
 	pthread_t vcpu_thread;
 	pthread_t uffd_handler_thread;
@@ -276,33 +277,40 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	int r;
 
 	/*
-	 * We reserve page table for 2 times of extra dirty mem which
-	 * will definitely cover the original (1G+) test range. Here
-	 * we do the calculation with 4K page size which is the
-	 * smallest so the page number will be enough for all archs
-	 * (e.g., 64K page size guest will need even less memory for
-	 * page tables).
+	 * We reserve page table for twice the amount of memory we intend
+	 * to use in the test region for demand paging. Here we do the
+	 * calculation with 4K page size which is the smallest so the page
+	 * number will be enough for all archs. (e.g., 64K page size guest
+	 * will need even less memory for page tables).
 	 */
 	vm = create_vm(mode, VCPU_ID,
-		       2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
+		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
 		       guest_code);
 
 	guest_page_size = vm_get_page_size(vm);
-	/*
-	 * A little more than 1G of guest page sized pages. Cover the
-	 * case where the size is not aligned to 64 pages.
-	 */
-	guest_num_pages = (1ul << (GUEST_MEM_SHIFT -
-				   vm_get_page_shift(vm))) + 16;
+
+	TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
+		    "Guest memory size is not guest page size aligned.");
+
+	guest_num_pages = guest_memory_bytes / guest_page_size;
+
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
 	guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
 #endif
+	/*
+	 * If there should be more memory in the guest test region than there
+	 * can be pages in the guest, it will definitely cause problems.
+	 */
+	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
+		    "Requested more guest memory than address space allows.\n"
+		    "    guest pages: %lx max gfn: %lx\n",
+		    guest_num_pages, vm_get_max_gfn(vm));
 
 	host_page_size = getpagesize();
-	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
-			 !!((guest_num_pages * guest_page_size) %
-			    host_page_size);
+	TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
+		    "Guest memory size is not host page size aligned.");
+	host_num_pages = guest_memory_bytes / host_page_size;
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      guest_page_size;
@@ -381,7 +389,8 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n", name);
+	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
+	       "          [-b bytes test memory]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -395,6 +404,8 @@ static void help(char *name)
 	printf(" -d: add a delay in usec to the User Fault\n"
 	       "     FD handler to simulate demand paging\n"
 	       "     overheads. Ignored without -u.\n");
+	printf(" -b: specify the number of bytes of memory which should be\n"
+	       "     allocated to the guest.\n");
 	puts("");
 	exit(0);
 }
@@ -402,6 +413,7 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	bool mode_selected = false;
+	uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
@@ -426,7 +439,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:ud:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -447,6 +460,9 @@ int main(int argc, char *argv[])
 			TEST_ASSERT(uffd_delay >= 0,
 				    "A negative UFFD delay is not supported.");
 			break;
+		case 'b':
+			guest_memory_bytes = strtoull(optarg, NULL, 0);
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -462,7 +475,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay);
+		run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
 	}
 
 	return 0;
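
To make the page-table reservation above concrete: with the 1G default,
run_test() asks create_vm() for (2 * 2^30) >> 12 = 524288 extra 4K
pages, and create_vm() in turn reserves 524288 / 512 * 2 = 2048 pages
(8 MiB) of page-table backing, since one 4K page table maps 512 4K
pages. A tiny checkable version of that arithmetic (illustrative only,
not part of the series):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
    	uint64_t guest_memory_bytes = 1ULL << 30;	/* the 1G default */
    	/* run_test(): extra 4K pages requested from create_vm() */
    	uint64_t extra_mem_pages = (2 * guest_memory_bytes) >> 12;
    	/* create_vm(): one 4K page table maps 512 4K pages */
    	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;

    	/* Prints: extra_mem_pages=524288 extra_pg_pages=2048 (8 MiB) */
    	printf("extra_mem_pages=%" PRIu64 " extra_pg_pages=%" PRIu64
    	       " (%" PRIu64 " MiB)\n", extra_mem_pages, extra_pg_pages,
    	       (extra_pg_pages * 4096) >> 20);
    	return 0;
    }
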
From patchwork Mon Dec 16 21:38:58 2019
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11295385
Date: Mon, 16 Dec 2019 13:38:58 -0800
Message-Id: <20191216213901.106941-6-bgardon@google.com>
In-Reply-To: <20191216213901.106941-1-bgardon@google.com>
Subject: [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using
 globals
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

In preparation for supporting multiple vCPUs in the demand paging test,
pass arguments to the vCPU instead of syncing globals to it.

Signed-off-by: Ben Gardon
---
 .../selftests/kvm/demand_paging_test.c        | 61 +++++++++++--------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 4aa90a3fce99c..8ede26e088ab6 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -42,7 +42,6 @@
  */
 static uint64_t host_page_size;
 static uint64_t guest_page_size;
-static uint64_t guest_num_pages;
 
 static char *guest_data_prototype;
 
@@ -63,14 +62,13 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
 /*
  * Continuously write to the first 8 bytes of each page in the demand paging
  * memory region.
  */
-static void guest_code(void)
+static void guest_code(uint64_t gva, uint64_t pages)
 {
 	int i;
 
-	for (i = 0; i < guest_num_pages; i++) {
-		uint64_t addr = guest_test_virt_mem;
+	for (i = 0; i < pages; i++) {
+		uint64_t addr = gva + (i * guest_page_size);
 
-		addr += i * guest_page_size;
 		addr &= ~(host_page_size - 1);
 		*(uint64_t *)addr = 0x0123456789ABCDEF;
 	}
@@ -82,18 +80,31 @@ static void guest_code(void)
 static void *host_test_mem;
 static uint64_t host_num_pages;
 
+struct vcpu_thread_args {
+	uint64_t gva;
+	uint64_t pages;
+	struct kvm_vm *vm;
+	int vcpu_id;
+};
+
 static void *vcpu_worker(void *data)
 {
 	int ret;
-	struct kvm_vm *vm = data;
+	struct vcpu_thread_args *args = (struct vcpu_thread_args *)data;
+	struct kvm_vm *vm = args->vm;
+	int vcpu_id = args->vcpu_id;
+	uint64_t gva = args->gva;
+	uint64_t pages = args->pages;
 	struct kvm_run *run;
 
-	run = vcpu_state(vm, VCPU_ID);
+	vcpu_args_set(vm, vcpu_id, 2, gva, pages);
+
+	run = vcpu_state(vm, vcpu_id);
 
 	/* Let the guest access its memory */
-	ret = _vcpu_run(vm, VCPU_ID);
+	ret = _vcpu_run(vm, vcpu_id);
 	TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
-	if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) {
+	if (get_ucall(vm, vcpu_id, NULL) != UCALL_SYNC) {
 		TEST_ASSERT(false,
 			    "Invalid guest sync status: exit_reason=%s\n",
 			    exit_reason_str(run->exit_reason));
@@ -269,11 +280,13 @@ static int setup_demand_paging(struct kvm_vm *vm,
 #define PAGE_SHIFT_4K 12
 
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay, uint64_t guest_memory_bytes)
+		     useconds_t uffd_delay, uint64_t vcpu_wss)
 {
 	pthread_t vcpu_thread;
 	pthread_t uffd_handler_thread;
 	struct kvm_vm *vm;
+	struct vcpu_thread_args vcpu_args;
+	uint64_t guest_num_pages;
 	int r;
 
 	/*
@@ -283,16 +296,15 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	 * number will be enough for all archs. (e.g., 64K page size guest
 	 * will need even less memory for page tables).
 	 */
-	vm = create_vm(mode, VCPU_ID,
-		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
+	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
 		       guest_code);
 
 	guest_page_size = vm_get_page_size(vm);
 
-	TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
+	TEST_ASSERT(vcpu_wss % guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = guest_memory_bytes / guest_page_size;
+	guest_num_pages = vcpu_wss / guest_page_size;
 
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
@@ -308,9 +320,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		    guest_num_pages, vm_get_max_gfn(vm));
 
 	host_page_size = getpagesize();
-	TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
+	TEST_ASSERT(vcpu_wss % host_page_size == 0,
 		    "Guest memory size is not host page size aligned.");
-	host_num_pages = guest_memory_bytes / host_page_size;
+	host_num_pages = vcpu_wss / host_page_size;
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      guest_page_size;
@@ -354,10 +366,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	/* Export the shared variables to the guest */
 	sync_global_to_guest(vm, host_page_size);
 	sync_global_to_guest(vm, guest_page_size);
-	sync_global_to_guest(vm, guest_test_virt_mem);
-	sync_global_to_guest(vm, guest_num_pages);
 
-	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
+	vcpu_args.vm = vm;
+	vcpu_args.vcpu_id = VCPU_ID;
+	vcpu_args.gva = guest_test_virt_mem;
+	vcpu_args.pages = guest_num_pages;
+	pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
 
 	/* Wait for the vcpu thread to quit */
 	pthread_join(vcpu_thread, NULL);
@@ -404,8 +418,7 @@ static void help(char *name)
 	printf(" -d: add a delay in usec to the User Fault\n"
 	       "     FD handler to simulate demand paging\n"
 	       "     overheads. Ignored without -u.\n");
-	printf(" -b: specify the number of bytes of memory which should be\n"
-	       "     allocated to the guest.\n");
+	printf(" -b: specify the working set size, in bytes for each vCPU.\n");
 	puts("");
 	exit(0);
 }
@@ -413,7 +426,7 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	bool mode_selected = false;
-	uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
+	uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE;
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
@@ -448,7 +461,7 @@ int main(int argc, char *argv[])
 		case 'b':
-			guest_memory_bytes = strtoull(optarg, NULL, 0);
+			vcpu_wss = strtoull(optarg, NULL, 0);
 			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -462,7 +475,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
+		run_test(i, use_uffd, uffd_delay, vcpu_wss);
 	}
 
 	return 0;
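
Two mechanisms are doing the work in this patch. On the guest side,
vcpu_args_set(vm, vcpu_id, 2, gva, pages) places the two values in the
registers the architecture's calling convention uses for the first two
function arguments (e.g. RDI/RSI on x86_64), so guest_code(gva, pages)
receives them as ordinary parameters. On the host side it is the
standard pthreads idiom of a per-thread argument struct, sketched
generically below (illustrative only; note the struct must outlive the
thread, which is why run_test() keeps vcpu_args in its frame until
pthread_join()):

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    struct worker_args {
    	int id;
    	uint64_t gva;
    	uint64_t pages;
    };

    static void *worker(void *data)
    {
    	struct worker_args *args = data;

    	printf("worker %d: gva=0x%llx pages=%llu\n", args->id,
    	       (unsigned long long)args->gva,
    	       (unsigned long long)args->pages);
    	return NULL;
    }

    int main(void)
    {
    	/* One struct per thread; fields mirror vcpu_thread_args above */
    	struct worker_args args = { .id = 0, .gva = 0xc0000000, .pages = 4 };
    	pthread_t thread;

    	pthread_create(&thread, NULL, worker, &args);
    	/* args lives on main's stack, so join before it goes out of scope */
    	pthread_join(thread, NULL);
    	return 0;
    }
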
From patchwork Mon Dec 16 21:38:59 2019
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11295387
Date: Mon, 16 Dec 2019 13:38:59 -0800
Message-Id: <20191216213901.106941-7-bgardon@google.com>
In-Reply-To: <20191216213901.106941-1-bgardon@google.com>
Subject: [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand
 paging test
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

Most VMs have multiple vCPUs, the concurrent execution of which has a
substantial impact on demand paging performance. Add an option to create
multiple vCPUs, each of which accesses a disjoint region of memory.

Signed-off-by: Ben Gardon
---
 .../selftests/kvm/demand_paging_test.c        | 199 ++++++++++++------
 1 file changed, 136 insertions(+), 63 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 8ede26e088ab6..2b80f614dd537 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -24,8 +24,6 @@
 #include "kvm_util.h"
 #include "processor.h"
 
-#define VCPU_ID 1
-
 /* The memory slot index to demand page */
 #define TEST_MEM_SLOT_INDEX 1
 
@@ -34,6 +32,12 @@
 
 #define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
 
+#ifdef PRINT_PER_VCPU_UPDATES
+#define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__)
+#else
+#define PER_VCPU_DEBUG(...)
+#endif
+
 /*
  * Guest/Host shared variables. Ensure addr_gva2hva() and/or
  * sync_global_to/from_guest() are used when accessing from
@@ -76,10 +80,6 @@ static void guest_code(uint64_t gva, uint64_t pages)
 	GUEST_SYNC(1);
 }
 
-/* Points to the test VM memory region on which we are doing demand paging */
-static void *host_test_mem;
-static uint64_t host_num_pages;
-
 struct vcpu_thread_args {
 	uint64_t gva;
 	uint64_t pages;
@@ -113,18 +113,32 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-				uint64_t extra_mem_pages, void *guest_code)
+#define PAGE_SHIFT_4K 12
+#define PTES_PER_PT 512
+
+static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
+				uint64_t vcpu_wss)
 {
 	struct kvm_vm *vm;
-	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
+	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
 
-	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	/* Account for a few pages per-vCPU for stacks */
+	pages += DEFAULT_STACK_PGS * vcpus;
+
+	/*
+	 * Reserve twice the amount of memory needed to map the test region and
+	 * the page table / stacks region, at 4k, for page tables. Do the
+	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
+	 * page size guest will need even less memory for page tables).
+	 */
+	pages += (2 * pages) / PTES_PER_PT;
+	pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
+
+	vm = _vm_create(mode, pages, O_RDWR);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
 #endif
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
 	return vm;
 }
 
@@ -232,15 +246,13 @@ static void *uffd_handler_thread_fn(void *arg)
 
 static int setup_demand_paging(struct kvm_vm *vm,
 			       pthread_t *uffd_handler_thread,
-			       useconds_t uffd_delay)
+			       useconds_t uffd_delay,
+			       struct uffd_handler_args *uffd_args,
+			       void *hva, uint64_t len)
 {
 	int uffd;
 	struct uffdio_api uffdio_api;
 	struct uffdio_register uffdio_register;
-	struct uffd_handler_args uffd_args;
-
-	guest_data_prototype = malloc(host_page_size);
-	memset(guest_data_prototype, 0xAB, host_page_size);
 
 	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
 	if (uffd == -1) {
@@ -255,8 +267,8 @@ static int setup_demand_paging(struct kvm_vm *vm,
 		return -1;
 	}
 
-	uffdio_register.range.start = (uint64_t)host_test_mem;
-	uffdio_register.range.len = host_num_pages * host_page_size;
+	uffdio_register.range.start = (uint64_t)hva;
+	uffdio_register.range.len = len;
 	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
 	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
 		DEBUG("ioctl uffdio_register failed\n");
@@ -269,42 +281,37 @@ static int setup_demand_paging(struct kvm_vm *vm,
 		return -1;
 	}
 
-	uffd_args.uffd = uffd;
-	uffd_args.delay = uffd_delay;
+	uffd_args->uffd = uffd;
+	uffd_args->delay = uffd_delay;
 	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
-		       &uffd_args);
+		       uffd_args);
+
+	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
+		       hva, hva + len);
 
 	return 0;
 }
 
-#define PAGE_SHIFT_4K 12
-
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay, uint64_t vcpu_wss)
+		     useconds_t uffd_delay, int vcpus, uint64_t vcpu_wss)
 {
-	pthread_t vcpu_thread;
-	pthread_t uffd_handler_thread;
+	pthread_t *vcpu_threads;
+	pthread_t *uffd_handler_threads = NULL;
+	struct uffd_handler_args *uffd_args = NULL;
 	struct kvm_vm *vm;
-	struct vcpu_thread_args vcpu_args;
+	struct vcpu_thread_args *vcpu_args;
 	uint64_t guest_num_pages;
+	int vcpu_id;
 	int r;
 
-	/*
-	 * We reserve page table for twice the amount of memory we intend
-	 * to use in the test region for demand paging. Here we do the
-	 * calculation with 4K page size which is the smallest so the page
-	 * number will be enough for all archs. (e.g., 64K page size guest
-	 * will need even less memory for page tables).
-	 */
-	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
-		       guest_code);
+	vm = create_vm(mode, vcpus, vcpu_wss);
 
 	guest_page_size = vm_get_page_size(vm);
 
 	TEST_ASSERT(vcpu_wss % guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = vcpu_wss / guest_page_size;
+	guest_num_pages = (vcpus * vcpu_wss) / guest_page_size;
 
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
 	guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
@@ -316,13 +323,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	 */
 	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
 		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %lx\n",
-		    guest_num_pages, vm_get_max_gfn(vm));
+		    "    guest pages: %lx max gfn: %lx vcpus: %d wss: %lx]\n",
+		    guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_wss);
 
 	host_page_size = getpagesize();
 	TEST_ASSERT(vcpu_wss % host_page_size == 0,
 		    "Guest memory size is not host page size aligned.");
-	host_num_pages = vcpu_wss / host_page_size;
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      guest_page_size;
@@ -347,43 +353,102 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem,
 		 guest_num_pages * guest_page_size, 0);
 
-	/* Cache the HVA pointer of the region */
-	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+	/* Export the shared variables to the guest */
+	sync_global_to_guest(vm, host_page_size);
+	sync_global_to_guest(vm, guest_page_size);
+
+	guest_data_prototype = malloc(host_page_size);
+	TEST_ASSERT(guest_data_prototype, "Memory allocation failed");
+	memset(guest_data_prototype, 0xAB, host_page_size);
+
+	vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads));
+	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
 	if (use_uffd) {
-		/* Set up user fault fd to handle demand paging requests. */
 		quit_uffd_thread = false;
-		r = setup_demand_paging(vm, &uffd_handler_thread,
-					uffd_delay);
-		if (r < 0)
-			exit(-r);
+
+		uffd_handler_threads =
+			malloc(vcpus * sizeof(*uffd_handler_threads));
+		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
+
+		uffd_args = malloc(vcpus * sizeof(*uffd_args));
+		TEST_ASSERT(uffd_args, "Memory allocation failed");
 	}
 
+	vcpu_args = malloc(vcpus * sizeof(*vcpu_args));
+	TEST_ASSERT(vcpu_args, "Memory allocation failed");
+
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		vm_paddr_t vcpu_gpa;
+		void *vcpu_hva;
+
+		vm_vcpu_add_default(vm, vcpu_id, guest_code);
+
+		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_wss);
+		PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
+			       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_wss);
+
+		/* Cache the HVA pointer of the region */
+		vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
+
+		if (use_uffd) {
+			/*
+			 * Set up user fault fd to handle demand paging
+ */ + r = setup_demand_paging(vm, + &uffd_handler_threads[vcpu_id], + uffd_delay, &uffd_args[vcpu_id], + vcpu_hva, vcpu_wss); + if (r < 0) + exit(-r); + } + #ifdef __x86_64__ - vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid()); + vcpu_set_cpuid(vm, vcpu_id, kvm_get_supported_cpuid()); #endif - /* Export the shared variables to the guest */ - sync_global_to_guest(vm, host_page_size); - sync_global_to_guest(vm, guest_page_size); + vcpu_args[vcpu_id].vm = vm; + vcpu_args[vcpu_id].vcpu_id = vcpu_id; + vcpu_args[vcpu_id].gva = guest_test_virt_mem + + (vcpu_id * vcpu_wss); + vcpu_args[vcpu_id].pages = vcpu_wss / guest_page_size; + } + + DEBUG("Finished creating vCPUs and starting uffd threads\n"); + + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { + pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker, + &vcpu_args[vcpu_id]); + } + + DEBUG("Started all vCPUs\n"); - vcpu_args.vm = vm; - vcpu_args.vcpu_id = VCPU_ID; - vcpu_args.gva = guest_test_virt_mem; - vcpu_args.pages = guest_num_pages; - pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args); + /* Wait for the vcpu threads to quit */ + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { + pthread_join(vcpu_threads[vcpu_id], NULL); + PER_VCPU_DEBUG("Joined thread for vCPU %d\n", vcpu_id); + } - /* Wait for the vcpu thread to quit */ - pthread_join(vcpu_thread, NULL); + DEBUG("All vCPU threads joined\n"); if (use_uffd) { - /* Tell the user fault fd handler thread to quit */ + /* Tell the user fault fd handler threads to quit */ quit_uffd_thread = true; - pthread_join(uffd_handler_thread, NULL); + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) + pthread_join(uffd_handler_threads[vcpu_id], NULL); } ucall_uninit(vm); kvm_vm_free(vm); + + free(guest_data_prototype); + free(vcpu_threads); + if (use_uffd) { + free(uffd_handler_threads); + free(uffd_args); + } + free(vcpu_args); } struct vm_guest_mode_params { @@ -404,7 +469,7 @@ static void help(char *name) puts(""); printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n" - " [-b bytes test memory]\n", name); + " [-b bytes test memory] [-v vcpus]\n", name); printf(" -m: specify the guest mode ID to test\n" " (default: test all supported modes)\n" " This option may be used multiple times.\n" @@ -419,6 +484,7 @@ static void help(char *name) " FD handler to simulate demand paging\n" " overheads. 
Ignored without -u.\n"); printf(" -b: specify the working set size, in bytes for each vCPU.\n"); + printf(" -v: specify the number of vCPUs to run.\n"); puts(""); exit(0); } @@ -427,6 +493,7 @@ int main(int argc, char *argv[]) { bool mode_selected = false; uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE; + int vcpus = 1; unsigned int mode; int opt, i; bool use_uffd = false; @@ -439,7 +506,7 @@ int main(int argc, char *argv[]) vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true); #endif - while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) { + while ((opt = getopt(argc, argv, "hm:ud:b:v:")) != -1) { switch (opt) { case 'm': if (!mode_selected) { @@ -462,6 +529,12 @@ int main(int argc, char *argv[]) break; case 'b': vcpu_wss = strtoull(optarg, NULL, 0); + break; + case 'v': + vcpus = atoi(optarg); + TEST_ASSERT(vcpus > 0, + "Must have a positive number of vCPUs"); + break; case 'h': default: help(argv[0]); @@ -475,7 +548,7 @@ int main(int argc, char *argv[]) TEST_ASSERT(vm_guest_mode_params[i].supported, "Guest mode ID %d (%s) not supported.", i, vm_guest_mode_string(i)); - run_test(i, use_uffd, uffd_delay, vcpu_wss); + run_test(i, use_uffd, uffd_delay, vcpus, vcpu_wss); } return 0;
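For illustration only, a hypothetical invocation of the test with the options added above (the flag values are examples, not defaults):

./demand_paging_test -u -d 100 -b 0x40000000 -v 4

This would run four vCPUs, each demand paging a 1 GiB working set (-b accepts hex because it is parsed with strtoull base 0), with a per-vCPU userfaultfd handler delaying each page-in by 100 usec.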
From patchwork Mon Dec 16 21:39:00 2019 Date: Mon, 16 Dec 2019 13:39:00 -0800 In-Reply-To: <20191216213901.106941-1-bgardon@google.com> Message-Id: <20191216213901.106941-8-bgardon@google.com> References: <20191216213901.106941-1-bgardon@google.com> Subject: [PATCH v3 7/8] KVM: selftests: Time guest demand paging From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org Cc: Paolo Bonzini , Cannon Matthews , Peter Xu , Andrew Jones , Ben Gardon In order to quantify demand paging performance, time guest execution during demand paging. Signed-off-by: Ben Gardon --- .../selftests/kvm/demand_paging_test.c | 68 +++++++++++++++++++ 1 file changed, 68 insertions(+) diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 2b80f614dd537..d93d72bdea4a3 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -32,6 +32,12 @@ #define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */ +#ifdef PRINT_PER_PAGE_UPDATES +#define PER_PAGE_DEBUG(...) DEBUG(__VA_ARGS__) +#else +#define PER_PAGE_DEBUG(...) +#endif + #ifdef PRINT_PER_VCPU_UPDATES #define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__) #else @@ -62,6 +68,26 @@ static uint64_t guest_test_phys_mem; */ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM; +int64_t to_ns(struct timespec ts) +{ + return (int64_t)ts.tv_nsec + 1000000000LL * (int64_t)ts.tv_sec; +} + +struct timespec diff(struct timespec start, struct timespec end) +{ + struct timespec temp; + + if ((end.tv_nsec-start.tv_nsec) < 0) { + temp.tv_sec = end.tv_sec - start.tv_sec - 1; + temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec; + } else { + temp.tv_sec = end.tv_sec - start.tv_sec; + temp.tv_nsec = end.tv_nsec - start.tv_nsec; + } + + return temp; +} + /* * Continuously write to the first 8 bytes of each page in the demand paging * memory region.
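For context, and not part of the patch: a minimal self-contained sketch of how the to_ns() and diff() helpers above compose to turn two CLOCK_MONOTONIC samples into a paging rate. The page count and the timed work are placeholders.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static int64_t to_ns(struct timespec ts)
{
	return (int64_t)ts.tv_nsec + 1000000000LL * (int64_t)ts.tv_sec;
}

static struct timespec diff(struct timespec start, struct timespec end)
{
	struct timespec temp;

	if ((end.tv_nsec - start.tv_nsec) < 0) {
		temp.tv_sec = end.tv_sec - start.tv_sec - 1;
		temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
	} else {
		temp.tv_sec = end.tv_sec - start.tv_sec;
		temp.tv_nsec = end.tv_nsec - start.tv_nsec;
	}
	return temp;
}

int main(void)
{
	struct timespec start, end;
	int64_t pages = 262144; /* placeholder: 1 GiB of 4K pages */

	clock_gettime(CLOCK_MONOTONIC, &start);
	/* ... the demand paging work being timed would run here ... */
	clock_gettime(CLOCK_MONOTONIC, &end);

	/* tv_nsec counts nanoseconds, so the divisor is 1e9. */
	printf("%f pages/sec\n", (double)pages /
	       ((double)to_ns(diff(start, end)) / 1000000000.0));
	return 0;
}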
@@ -96,11 +122,15 @@ static void *vcpu_worker(void *data) uint64_t gva = args->gva; uint64_t pages = args->pages; struct kvm_run *run; + struct timespec start; + struct timespec end; vcpu_args_set(vm, vcpu_id, 2, gva, pages); run = vcpu_state(vm, vcpu_id); + clock_gettime(CLOCK_MONOTONIC, &start); + /* Let the guest access its memory */ ret = _vcpu_run(vm, vcpu_id); TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); @@ -110,6 +140,11 @@ static void *vcpu_worker(void *data) exit_reason_str(run->exit_reason)); } + clock_gettime(CLOCK_MONOTONIC, &end); + PER_VCPU_DEBUG("vCPU %d execution time: %lld.%.9lds\n", vcpu_id, + (long long)(diff(start, end).tv_sec), + diff(start, end).tv_nsec); + return NULL; } @@ -145,6 +180,8 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, static int handle_uffd_page_request(int uffd, uint64_t addr) { pid_t tid; + struct timespec start; + struct timespec end; struct uffdio_copy copy; int r; @@ -155,6 +192,8 @@ static int handle_uffd_page_request(int uffd, uint64_t addr) copy.len = host_page_size; copy.mode = 0; + clock_gettime(CLOCK_MONOTONIC, &start); + r = ioctl(uffd, UFFDIO_COPY, &copy); if (r == -1) { DEBUG("Failed Paged in 0x%lx from thread %d with errno: %d\n", @@ -162,6 +201,13 @@ static int handle_uffd_page_request(int uffd, uint64_t addr) return r; } + clock_gettime(CLOCK_MONOTONIC, &end); + + PER_PAGE_DEBUG("UFFDIO_COPY %d \t%lld ns\n", tid, + (long long)to_ns(diff(start, end))); + PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n", + host_page_size, addr, tid); + return 0; } @@ -178,7 +224,10 @@ static void *uffd_handler_thread_fn(void *arg) int uffd = uffd_args->uffd; useconds_t delay = uffd_args->delay; int64_t pages = 0; + struct timespec start; + struct timespec end; + clock_gettime(CLOCK_MONOTONIC, &start); while (!quit_uffd_thread) { struct uffd_msg msg; struct pollfd pollfd[1]; @@ -241,6 +290,13 @@ static void *uffd_handler_thread_fn(void *arg) pages++; } + clock_gettime(CLOCK_MONOTONIC, &end); + PER_VCPU_DEBUG("userfaulted %ld pages over %lld.%.9lds. (%f/sec)\n", + pages, (long long)(diff(start, end).tv_sec), + diff(start, end).tv_nsec, pages / + ((double)diff(start, end).tv_sec + + (double)diff(start, end).tv_nsec / 1000000000.0)); + return NULL; }
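As background, not part of the patch: the fault-service step the handler thread performs, reduced to a standalone helper. This assumes uffd was opened with the userfaultfd syscall and the faulting range was registered with UFFDIO_REGISTER_MODE_MISSING; service_one_fault() and src_page are illustrative names.

#include <poll.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static int service_one_fault(int uffd, void *src_page, uint64_t page_size)
{
	struct pollfd pfd = { .fd = uffd, .events = POLLIN };
	struct uffd_msg msg;
	struct uffdio_copy copy;

	/* Wait for a fault notification and read one message. */
	if (poll(&pfd, 1, -1) <= 0)
		return -1;
	if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
		return -1;
	if (msg.event != UFFD_EVENT_PAGEFAULT)
		return -1;

	/* Resolve the fault by copying a prepared page into place. */
	copy.src = (uint64_t)src_page;
	copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
	copy.len = page_size;
	copy.mode = 0;
	return ioctl(uffd, UFFDIO_COPY, &copy);
}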
@@ -303,6 +359,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, uint64_t guest_num_pages; int vcpu_id; int r; + struct timespec start; + struct timespec end; vm = create_vm(mode, vcpus, vcpu_wss); @@ -417,6 +475,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, DEBUG("Finished creating vCPUs and starting uffd threads\n"); + clock_gettime(CLOCK_MONOTONIC, &start); + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker, &vcpu_args[vcpu_id]); } @@ -432,6 +492,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, DEBUG("All vCPU threads joined\n"); + clock_gettime(CLOCK_MONOTONIC, &end); + if (use_uffd) { /* Tell the user fault fd handler threads to quit */ quit_uffd_thread = true; @@ -439,6 +501,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd, pthread_join(uffd_handler_threads[vcpu_id], NULL); } + DEBUG("Total guest execution time: %lld.%.9lds\n", + (long long)(diff(start, end).tv_sec), diff(start, end).tv_nsec); + DEBUG("Overall demand paging rate: %f pgs/sec\n", + guest_num_pages / ((double)diff(start, end).tv_sec + + (double)diff(start, end).tv_nsec / 1000000000.0)); + ucall_uninit(vm); kvm_vm_free(vm); From patchwork Mon Dec 16 21:39:01 2019
Date: Mon, 16 Dec 2019 13:39:01 -0800 In-Reply-To: <20191216213901.106941-1-bgardon@google.com> Message-Id: <20191216213901.106941-9-bgardon@google.com> References: <20191216213901.106941-1-bgardon@google.com> Subject: [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org Cc: Paolo Bonzini , Cannon Matthews , Peter Xu , Andrew Jones , Ben Gardon KVM creates internal memslots covering guest physical addresses between 3 GiB and 4 GiB on the first vCPU creation. If memslot 0 is large enough, it collides with these memslots and causes vCPU creation to fail. When more than 3 GiB is requested, start memslot 0 at 4 GiB in _vm_create. Signed-off-by: Ben Gardon --- tools/testing/selftests/kvm/lib/kvm_util.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 41cf45416060f..886d58e6cac39 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = { _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES, "Missing new mode strings?"); +#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30) +#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30) /* * VM Create * @@ -128,13 +130,16 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES, * * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K). * When phy_pages is non-zero, a memory region of phy_pages physical pages - is created and mapped starting at guest physical address 0. The file - descriptor to control the created VM is created with the permissions - given by perm (e.g. O_RDWR). + is created. If the region fits below 3G, it is mapped starting at guest + physical address 0. If it would extend above 3G, it is mapped starting + 4G into the guest physical address space to avoid the KVM internal + memslots, which map the region between 3G and 4G. The file descriptor to + control the created VM is created with the permissions given by perm + (e.g. O_RDWR). */ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) { struct kvm_vm *vm; + uint64_t guest_paddr = 0; DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode)); @@ -227,9 +232,11 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) /* Allocate and setup memory for guest.
*/ vm->vpages_mapped = sparsebit_alloc(); + if (guest_paddr + phy_pages * vm->page_size > + KVM_INTERNAL_MEMSLOTS_START_PADDR) + guest_paddr = KVM_INTERNAL_MEMSLOTS_END_PADDR; if (phy_pages != 0) vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, - 0, 0, phy_pages, 0); + guest_paddr, 0, phy_pages, 0); return vm; }
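To make the placement rule above concrete, a standalone sketch; memslot0_start_paddr() is a hypothetical helper, not part of the patch, and sizes are in bytes:

#include <stdint.h>

#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)

/*
 * Memslot 0 starts at guest physical address 0 if the region fits
 * below the KVM internal memslots at [3G, 4G); otherwise it starts
 * at 4G so it sits entirely above them.
 */
static uint64_t memslot0_start_paddr(uint64_t phy_pages, uint64_t page_size)
{
	if (phy_pages * page_size > KVM_INTERNAL_MEMSLOTS_START_PADDR)
		return KVM_INTERNAL_MEMSLOTS_END_PADDR;
	return 0;
}

With 4K pages, for example, any request above 786432 pages (3 GiB worth) relocates memslot 0 to start at 4 GiB, leaving [3G, 4G) free for KVM's internal slots.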