From patchwork Wed Jan 3 08:44:24 2024
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 13509778
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, shuah@kernel.org, stevensd@chromium.org, Yan Zhao
Subject: [RFC PATCH v2 1/3] KVM: allow mapping of compound tail pages for IO or PFNMAP mapping
Date: Wed, 3 Jan 2024 16:44:24 +0800
Message-Id: <20240103084424.20014-1-yan.y.zhao@intel.com>
In-Reply-To: <20240103084327.19955-1-yan.y.zhao@intel.com>
References: <20240103084327.19955-1-yan.y.zhao@intel.com>

Allow mapping of tail pages of compound pages for IO or PFNMAP mappings
by trying to take a reference on the head page instead.

IO or PFNMAP mappings are sometimes backed by compound pages. KVM
currently returns an error when mapping a tail page of such a compound
page, because the ref count of a tail page is always 0. So, rather than
checking and taking a reference on the tail page itself, check and take
a reference on its folio (head page) to allow mapping of compound tail
pages.

This does not break the original intention of disallowing mapping of
tail pages of non-compound higher-order allocations, as the folio of a
non-compound tail page is the page itself. On the release side,
put_page() already converts the page to a folio before dropping the
page reference.
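
[Editor's note: for illustration only, not part of the patch. A minimal
sketch contrasting the two ref-taking paths for a PFN that turns out to
be a compound tail page; the try_get_pfn_old()/try_get_pfn_new() names
are hypothetical, while get_page_unless_zero(), page_folio() and
folio_try_get() are the existing kernel helpers referenced above.]

static int try_get_pfn_old(unsigned long pfn)
{
	struct page *page = pfn_to_page(pfn);

	/* A compound tail page keeps a refcount of 0, so this always fails. */
	return get_page_unless_zero(page);
}

static int try_get_pfn_new(unsigned long pfn)
{
	struct page *page = pfn_to_page(pfn);

	/*
	 * page_folio() resolves a compound tail page to its head page
	 * (folio); for a non-compound page it returns the page itself,
	 * so the old behavior for higher-order non-compound pages is kept.
	 */
	return folio_try_get(page_folio(page));
}
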
Signed-off-by: Yan Zhao
---
 virt/kvm/kvm_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index acd67fb40183..f53b58446ac7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2892,7 +2892,7 @@ static int kvm_try_get_pfn(kvm_pfn_t pfn)
 	if (!page)
 		return 1;
 
-	return get_page_unless_zero(page);
+	return folio_try_get(page_folio(page));
 }
 
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,

From patchwork Wed Jan 3 08:44:57 2024
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 13509779
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, shuah@kernel.org, stevensd@chromium.org, Yan Zhao
Subject: [RFC PATCH v2 2/3] KVM: selftests: add selftest driver for KVM to test memory slots for MMIO BARs
Date: Wed, 3 Jan 2024 16:44:57 +0800
Message-Id: <20240103084457.20086-1-yan.y.zhao@intel.com>
In-Reply-To: <20240103084327.19955-1-yan.y.zhao@intel.com>
References: <20240103084327.19955-1-yan.y.zhao@intel.com>

This driver is for testing KVM memory slots for device MMIO BARs that
are mapped to pages serving as device resources.

The driver implements a mock device whose device resource is an array
of pages that can be mmap()ed into user space. It provides an ioctl
interface that lets users configure whether the pages are allocated as
a compound huge page or not. KVM selftest code can then map the mock
device resource into KVM memslots and check whether any error is
encountered.

After VM shutdown, the page reference counts of the mock device
resource are checked to ensure KVM does not hold extra references
across memslot addition/removal.
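
[Editor's note: a rough userspace usage sketch of the ioctl interface
described above, not part of the patch. It assumes the module is loaded
so that /dev/kvm_mock_device exists (the node the selftest in patch 3
opens), and that the uapi header's u64/u32 types resolve in the build
environment, as they do for the KVM selftests.]

#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#include "test_kvm_mock_device_uapi.h"

int main(void)
{
	uint64_t bar_size;
	uint32_t compound = 1, bad_refs = 0;
	int fd = open("/dev/kvm_mock_device", O_RDWR);

	/* Query the BAR size and back it with a compound huge page. */
	if (fd < 0 || ioctl(fd, KVM_MOCK_DEVICE_GET_BAR_SIZE, &bar_size) ||
	    ioctl(fd, KVM_MOCK_DEVICE_PREPARE_RESOURCE, &compound))
		return 1;

	/* Map the BAR; this is the HVA a KVM memslot would point at. */
	void *bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (bar == MAP_FAILED)
		return 1;

	/* ... hand "bar" to KVM as a memslot, run the guest, tear it down ... */

	/* Ask the driver to verify the page refcounts are back to baseline. */
	if (ioctl(fd, KVM_MOCK_DEVICE_CHECK_BACKEND_REF, &bad_refs) || bad_refs)
		fprintf(stderr, "unexpected page refcounts: %u\n", bad_refs);

	munmap(bar, bar_size);
	close(fd);
	return 0;
}
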
Signed-off-by: Yan Zhao
---
 lib/Kconfig.debug               |  14 ++
 lib/Makefile                    |   1 +
 lib/test_kvm_mock_device.c      | 281 ++++++++++++++++++++++++++++++++
 lib/test_kvm_mock_device_uapi.h |  16 ++
 4 files changed, 312 insertions(+)
 create mode 100644 lib/test_kvm_mock_device.c
 create mode 100644 lib/test_kvm_mock_device_uapi.h

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cc7d53d9dc01..c0fd4b53db89 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2922,6 +2922,20 @@ config TEST_HMM
 
 	  If unsure, say N.
 
+config TEST_KVM_MOCK_DEVICE
+	tristate "Test page-backed BAR of a KVM mock device"
+	help
+	  This is a mock KVM assigned device whose MMIO BAR is backed by
+	  struct page.
+	  Say M here if you want to build the "test_kvm_mock_device" module.
+	  Doing so will allow you to run the KVM selftest
+	  tools/testing/selftests/kvm/set_memory_region_io, which tests the
+	  functionality of adding page-backed MMIO memslots in KVM and
+	  ensures that the reference counts of the backing pages are
+	  correctly handled.
+
+	  If unsure, say N.
+
 config TEST_FREE_PAGES
 	tristate "Test freeing pages"
 	help
diff --git a/lib/Makefile b/lib/Makefile
index 6b09731d8e61..894a185bbabd 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
 obj-$(CONFIG_TEST_DYNAMIC_DEBUG) += test_dynamic_debug.o
 obj-$(CONFIG_TEST_PRINTF) += test_printf.o
 obj-$(CONFIG_TEST_SCANF) += test_scanf.o
+obj-$(CONFIG_TEST_KVM_MOCK_DEVICE) += test_kvm_mock_device.o
 obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
 
 ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_KASAN),yy)
diff --git a/lib/test_kvm_mock_device.c b/lib/test_kvm_mock_device.c
new file mode 100644
index 000000000000..4e7527c230cd
--- /dev/null
+++ b/lib/test_kvm_mock_device.c
@@ -0,0 +1,281 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * This is a module to test KVM DEVICE MMIO PASSTHROUGH.
+ */
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+
+#include "test_kvm_mock_device_uapi.h"
+
+/* kvm mock device */
+struct kvm_mock_dev {
+	dev_t devt;
+	struct device device;
+	struct cdev cdev;
+};
+static struct kvm_mock_dev kvm_mock_dev;
+
+struct kvm_mock_device {
+	bool compound;
+	struct page *resource;
+	u64 bar_size;
+	int order;
+	int *ref_array;
+	struct mutex lock;
+	bool prepared;
+};
+
+static bool opened;
+
+#define BAR_SIZE 0x200000UL
+#define DEFAULT_COMPOUND true
+
+static vm_fault_t kvm_mock_device_mmap_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct kvm_mock_device *kmdev = vma->vm_private_data;
+	struct page *p = kmdev->resource;
+	vm_fault_t ret = VM_FAULT_NOPAGE;
+	unsigned long addr;
+	int i;
+
+	for (addr = vma->vm_start, i = vma->vm_pgoff; addr < vma->vm_end;
+	     addr += PAGE_SIZE, i++) {
+
+		ret = vmf_insert_pfn(vma, addr, page_to_pfn(p + i));
+		if (ret == VM_FAULT_NOPAGE)
+			continue;
+
+		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+		return ret;
+	}
+	return ret;
+}
+
+static const struct vm_operations_struct kvm_mock_device_mmap_ops = {
+	.fault = kvm_mock_device_mmap_fault,
+};
+
+static int kvm_mock_device_fops_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct kvm_mock_device *kmdev = file->private_data;
+	u64 offset, req_len;
+	int ret = 0;
+
+	mutex_lock(&kmdev->lock);
+	if (!kmdev->prepared) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	offset = vma->vm_pgoff << PAGE_SHIFT;
+	req_len = vma->vm_end - vma->vm_start;
+	if (offset + req_len > BAR_SIZE) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
+	vma->vm_ops = &kvm_mock_device_mmap_ops;
+	vma->vm_private_data = kmdev;
+out:
+	mutex_unlock(&kmdev->lock);
+	return ret;
+}
+
+static int kvm_mock_device_prepare_resource(struct kvm_mock_device *kmdev)
+{
+	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
+	unsigned int order = kmdev->order;
+	unsigned long count = 1 << order;
+	unsigned long i;
+	struct page *p;
+	int ret = 0;
+
+	mutex_lock(&kmdev->lock);
+	if (kmdev->prepared) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	if (kmdev->compound)
+		gfp_flags |= __GFP_COMP;
+
+	p = alloc_pages_node(0, gfp_flags, order);
+	if (!p) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	kmdev->ref_array = kmalloc_array(count, sizeof(*kmdev->ref_array),
+					 GFP_KERNEL_ACCOUNT);
+	if (!kmdev->ref_array) {
+		__free_pages(p, order);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/* Record the initial refcount of every backing page. */
+	for (i = 0; i < count; i++)
+		kmdev->ref_array[i] = page_ref_count(p + i);
+
+	kmdev->resource = p;
+	kmdev->prepared = true;
+out:
+	mutex_unlock(&kmdev->lock);
+	return ret;
+}
+
+static int kvm_mock_device_check_resource_ref(struct kvm_mock_device *kmdev)
+{
+	u32 i, count = 1 << kmdev->order;
+	struct page *p = kmdev->resource;
+	int inequal = 0;
+
+	mutex_lock(&kmdev->lock);
+	if (!kmdev->prepared) {
+		mutex_unlock(&kmdev->lock);
+		return -ENODEV;
+	}
+
+	for (i = 0; i < count; i++) {
+		if (kmdev->ref_array[i] == page_ref_count(p + i))
+			continue;
+
+		pr_err("kvm test device check resource page %d old ref=%d new ref=%d\n",
+		       i, kmdev->ref_array[i], page_ref_count(p + i));
+		inequal++;
+	}
+	mutex_unlock(&kmdev->lock);
+
+	return inequal;
+}
+
+static int kvm_mock_device_fops_open(struct inode *inode, struct file *filp)
+{
+	struct kvm_mock_device *kmdev;
+
+	if (opened)
+		return -EBUSY;
+
+	kmdev = kzalloc(sizeof(*kmdev), GFP_KERNEL_ACCOUNT);
+	if (!kmdev)
+		return -ENOMEM;
+
+	kmdev->compound = DEFAULT_COMPOUND;
+	kmdev->bar_size = BAR_SIZE;
+	kmdev->order = get_order(kmdev->bar_size);
+	mutex_init(&kmdev->lock);
+	filp->private_data = kmdev;
+
+	opened = true;
+	return 0;
+}
+
+static int kvm_mock_device_fops_release(struct inode *inode, struct file *filp)
+{
+	struct kvm_mock_device *kmdev = filp->private_data;
+
+	if (kmdev->prepared)
+		__free_pages(kmdev->resource, kmdev->order);
+	mutex_destroy(&kmdev->lock);
+	kfree(kmdev->ref_array);
+	kfree(kmdev);
+	opened = false;
+	return 0;
+}
+
+static long kvm_mock_device_fops_unlocked_ioctl(struct file *filp,
+						unsigned int command,
+						unsigned long arg)
+{
+	struct kvm_mock_device *kmdev = filp->private_data;
+	int r;
+
+	switch (command) {
+	case KVM_MOCK_DEVICE_GET_BAR_SIZE: {
+		u64 bar_size;
+
+		bar_size = kmdev->bar_size;
+		r = put_user(bar_size, (u64 __user *)arg);
+		break;
+	}
+	case KVM_MOCK_DEVICE_PREPARE_RESOURCE: {
+		u32 compound;
+
+		r = get_user(compound, (u32 __user *)arg);
+		if (r)
+			return r;
+
+		kmdev->compound = compound;
+		r = kvm_mock_device_prepare_resource(kmdev);
+		break;
+	}
+	case KVM_MOCK_DEVICE_CHECK_BACKEND_REF: {
+		int inequal;
+
+		inequal = kvm_mock_device_check_resource_ref(kmdev);
+		if (inequal < 0)
+			return inequal;
+
+		r = put_user(inequal, (u32 __user *)arg);
+		break;
+	}
+	default:
+		r = -EOPNOTSUPP;
+	}
+
+	return r;
+}
+
+static const struct file_operations kvm_mock_device_fops = {
+	.open = kvm_mock_device_fops_open,
+	.release = kvm_mock_device_fops_release,
+	.mmap = kvm_mock_device_fops_mmap,
+	.unlocked_ioctl = kvm_mock_device_fops_unlocked_ioctl,
+	.llseek = default_llseek,
+	.owner = THIS_MODULE,
+};
+
+static int __init kvm_mock_device_test_init(void)
+{
+	int ret;
+
+	ret = alloc_chrdev_region(&kvm_mock_dev.devt, 0, 1, "KVM-MOCK-DEVICE");
+	if (ret)
+		goto out;
+
+	cdev_init(&kvm_mock_dev.cdev, &kvm_mock_device_fops);
+	kvm_mock_dev.cdev.owner = THIS_MODULE;
+	device_initialize(&kvm_mock_dev.device);
+	kvm_mock_dev.device.devt = MKDEV(MAJOR(kvm_mock_dev.devt), 0);
+	ret = dev_set_name(&kvm_mock_dev.device, "kvm_mock_device");
+	if (ret)
+		goto out;
+
+	ret = cdev_device_add(&kvm_mock_dev.cdev, &kvm_mock_dev.device);
+	if (ret)
+		goto out;
+
+out:
+	return ret;
+}
+
+static void __exit kvm_mock_device_test_exit(void)
+{
+	cdev_device_del(&kvm_mock_dev.cdev, &kvm_mock_dev.device);
+	unregister_chrdev_region(kvm_mock_dev.devt, 1);
+}
+
+module_init(kvm_mock_device_test_init);
+module_exit(kvm_mock_device_test_exit);
+MODULE_LICENSE("GPL");
diff --git a/lib/test_kvm_mock_device_uapi.h b/lib/test_kvm_mock_device_uapi.h
new file mode 100644
index 000000000000..227d0bf1d430
--- /dev/null
+++ b/lib/test_kvm_mock_device_uapi.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * This is a module to help test KVM guest access of KVM mock device's BAR,
+ * whose backend is mapped to pages.
+ */
+#ifndef _LIB_TEST_KVM_MOCK_DEVICE_UAPI_H
+#define _LIB_TEST_KVM_MOCK_DEVICE_UAPI_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define KVM_MOCK_DEVICE_GET_BAR_SIZE		_IOR('M', 0x00, u64)
+#define KVM_MOCK_DEVICE_PREPARE_RESOURCE	_IOWR('M', 0x01, u32)
+#define KVM_MOCK_DEVICE_CHECK_BACKEND_REF	_IOWR('M', 0x02, u32)
+
+#endif /* _LIB_TEST_KVM_MOCK_DEVICE_UAPI_H */

From patchwork Wed Jan 3 08:45:35 2024
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 13509780
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, shuah@kernel.org, stevensd@chromium.org, Yan Zhao
Subject: [RFC PATCH v2 3/3] KVM: selftests: Add set_memory_region_io to test memslots for MMIO BARs
Date: Wed, 3 Jan 2024 16:45:35 +0800
Message-Id: <20240103084535.20162-1-yan.y.zhao@intel.com>
In-Reply-To: <20240103084327.19955-1-yan.y.zhao@intel.com>
References: <20240103084327.19955-1-yan.y.zhao@intel.com>

Add a selftest, set_memory_region_io, to test memslots for MMIO BARs.
The MMIO BARs are backed by compound or non-compound huge pages serving
as device resources allocated by a mock device driver.

The selftest asserts and reports "errno=14 - Bad address" in vcpu_run()
if adding such an MMIO BAR memslot fails. After the MMIO memslots are
removed, the page reference counts of the device resources are also
checked.

Because this selftest interacts with the mock device
"/dev/kvm_mock_device", it depends on the test driver
test_kvm_mock_device.ko, so CONFIG_TEST_KVM_MOCK_DEVICE=m must be
enabled in the kernel. Currently, this selftest is only compiled for
__x86_64__.
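
[Editor's note: the central step of the test is simply handing the
mmap()ed mock-device BAR to KVM as an ordinary user memory region. The
sketch below is illustrative only and not part of the patch;
map_bar_into_guest() is a hypothetical helper, while
vm_set_user_memory_region() and virt_map() are the existing KVM
selftest harness helpers the patch itself uses.]

#include <stdint.h>
#include <unistd.h>
#include <kvm_util.h>

#define MEM_REGION_GPA_BASE	0xc0000000
#define MEM_REGION_SLOT_ID	10

static void map_bar_into_guest(struct kvm_vm *vm, void *bar, uint64_t bar_size)
{
	/* Create the memslot whose host backing is the PFNMAP'ed BAR. */
	vm_set_user_memory_region(vm, MEM_REGION_SLOT_ID, 0,
				  MEM_REGION_GPA_BASE, bar_size, bar);

	/* Identity-map the region in the guest page tables. */
	virt_map(vm, MEM_REGION_GPA_BASE, MEM_REGION_GPA_BASE,
		 bar_size / getpagesize());
}
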
Signed-off-by: Yan Zhao
---
 tools/testing/selftests/kvm/Makefile              |   1 +
 .../selftests/kvm/set_memory_region_io.c          | 188 ++++++++++++++++++
 2 files changed, 189 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/set_memory_region_io.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4412b42d95de..9d39514b6403 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -144,6 +144,7 @@ TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86_64 += memslot_perf_test
 TEST_GEN_PROGS_x86_64 += rseq_test
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
+TEST_GEN_PROGS_x86_64 += set_memory_region_io
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
 TEST_GEN_PROGS_x86_64 += system_counter_offset_test
diff --git a/tools/testing/selftests/kvm/set_memory_region_io.c b/tools/testing/selftests/kvm/set_memory_region_io.c
new file mode 100644
index 000000000000..e221103091f4
--- /dev/null
+++ b/tools/testing/selftests/kvm/set_memory_region_io.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <pthread.h>
+#include <sched.h>
+#include <semaphore.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+
+#include <linux/compiler.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <processor.h>
+
+#include <../../../../lib/test_kvm_mock_device_uapi.h>
+
+/*
+ * Somewhat arbitrary location and slot, intended to not overlap anything.
+ */
+#define MEM_REGION_GPA_BASE	0xc0000000
+#define RANDOM_OFFSET		0x1000
+#define MEM_REGION_GPA_RANDOM	(MEM_REGION_GPA_BASE + RANDOM_OFFSET)
+#define MEM_REGION_SLOT_ID	10
+
+static const bool non_compound_supported;
+
+static const uint64_t BASE_VAL = 0x1111;
+static const uint64_t RANDOM_VAL = 0x2222;
+
+static unsigned long bar_size;
+
+static sem_t vcpu_ready;
+
+static void guest_code_read_bar(void)
+{
+	uint64_t val;
+
+	GUEST_SYNC(0);
+
+	val = READ_ONCE(*((uint64_t *)MEM_REGION_GPA_RANDOM));
+	GUEST_ASSERT_EQ(val, RANDOM_VAL);
+
+	val = READ_ONCE(*((uint64_t *)MEM_REGION_GPA_BASE));
+	GUEST_ASSERT_EQ(val, BASE_VAL);
+
+	GUEST_DONE();
+}
+
+static void *vcpu_worker(void *data)
+{
+	struct kvm_vcpu *vcpu = data;
+	struct kvm_run *run = vcpu->run;
+	struct ucall uc;
+	uint64_t cmd;
+
+	/*
+	 * Loop until the guest is done. Re-enter the guest on all MMIO exits,
+	 * which will occur if the guest attempts to access a memslot after it
+	 * has been deleted or while it is being moved.
+	 */
+	while (1) {
+		vcpu_run(vcpu);
+
+		if (run->exit_reason == KVM_EXIT_IO) {
+			cmd = get_ucall(vcpu, &uc);
+			if (cmd != UCALL_SYNC)
+				break;
+
+			sem_post(&vcpu_ready);
+			continue;
+		}
+
+		if (run->exit_reason != KVM_EXIT_MMIO)
+			break;
+
+		TEST_ASSERT(!run->mmio.is_write, "Unexpected exit mmio write");
+		TEST_ASSERT(run->mmio.len == 8,
+			    "Unexpected exit mmio size = %u", run->mmio.len);
+
+		TEST_ASSERT(run->mmio.phys_addr < MEM_REGION_GPA_BASE ||
+			    run->mmio.phys_addr >= MEM_REGION_GPA_BASE + bar_size,
+			    "Unexpected exit mmio address = 0x%llx",
+			    run->mmio.phys_addr);
+	}
+
+	if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
+		REPORT_GUEST_ASSERT(uc);
+
+	return NULL;
+}
+
+static void wait_for_vcpu(void)
+{
+	struct timespec ts;
+
+	TEST_ASSERT(!clock_gettime(CLOCK_REALTIME, &ts),
+		    "clock_gettime() failed: %d\n", errno);
+
+	ts.tv_sec += 2;
+	TEST_ASSERT(!sem_timedwait(&vcpu_ready, &ts),
+		    "sem_timedwait() failed: %d\n", errno);
+
+	/* Wait for the vCPU thread to reenter the guest. */
+	usleep(100000);
+}
+
+static void test_kvm_mock_device_bar(bool compound)
+{
+	struct kvm_vm *vm;
+	void *mem;
+	struct kvm_vcpu *vcpu;
+	pthread_t vcpu_thread;
+	int fd, ret;
+	u32 param_compound = compound;
+	u32 inequal = 0;
+
+	fd = open("/dev/kvm_mock_device", O_RDWR);
+	if (fd < 0) {
+		pr_info("Please ensure \"CONFIG_TEST_KVM_MOCK_DEVICE=m\" is enabled in the kernel");
+		pr_info(", and execute\n\"modprobe test_kvm_mock_device\"\n");
+	}
+	TEST_ASSERT(fd >= 0, "Failed to open kvm mock device.");
+
+	ret = ioctl(fd, KVM_MOCK_DEVICE_GET_BAR_SIZE, &bar_size);
+	TEST_ASSERT(ret == 0, "Failed to get bar size of kvm mock device");
+
+	ret = ioctl(fd, KVM_MOCK_DEVICE_PREPARE_RESOURCE, &param_compound);
+	TEST_ASSERT(ret == 0, "Failed to prepare resource of kvm mock device");
+
+	mem = mmap(NULL, (size_t)bar_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+		   fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() kvm mock device bar");
+
+	*(u64 *)mem = BASE_VAL;
+	*(u64 *)(mem + RANDOM_OFFSET) = RANDOM_VAL;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code_read_bar);
+
+	vm_set_user_memory_region(vm, MEM_REGION_SLOT_ID, 0, MEM_REGION_GPA_BASE,
+				  bar_size, mem);
+
+	virt_map(vm, MEM_REGION_GPA_BASE, MEM_REGION_GPA_BASE,
+		 (RANDOM_OFFSET / getpagesize()) + 1);
+
+	pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu);
+
+	/* Ensure the guest thread is spun up. */
+	wait_for_vcpu();
+
+	pthread_join(vcpu_thread, NULL);
+
+	vm_set_user_memory_region(vm, MEM_REGION_SLOT_ID, 0, 0, 0, 0);
+	kvm_vm_free(vm);
+
+	ret = ioctl(fd, KVM_MOCK_DEVICE_CHECK_BACKEND_REF, &inequal);
+	TEST_ASSERT(ret == 0 && inequal == 0, "Incorrect resource ref of KVM device");
+
+	munmap(mem, bar_size);
+	close(fd);
+}
+
+static void test_non_compound_backend(void)
+{
+	pr_info("Testing non-compound huge page backend for mem slot\n");
+	test_kvm_mock_device_bar(false);
+}
+
+static void test_compound_backend(void)
+{
+	pr_info("Testing compound huge page backend for mem slot\n");
+	test_kvm_mock_device_bar(true);
+}
+
+int main(int argc, char *argv[])
+{
+#ifdef __x86_64__
+	test_compound_backend();
+	if (non_compound_supported)
+		test_non_compound_backend();
+#endif
+
+	return 0;
+}