From patchwork Tue Aug 20 13:33:28 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13770109
From: Ilias Stamatis
Subject: [PATCH v3 1/6] KVM: Fix coalesced_mmio_has_room()
Date: Tue, 20 Aug 2024 14:33:28 +0100
Message-ID: <20240820133333.1724191-2-ilstam@amazon.com>
In-Reply-To: <20240820133333.1724191-1-ilstam@amazon.com>

The following calculation used in coalesced_mmio_has_room() to check
whether the ring buffer is full is wrong and only allows half the
buffer to be used.

  avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
  if (avail == 0)
	  /* full */

The % operator in C is not the modulo operator but the remainder
operator, and the two differ with respect to negative values. All
values here are unsigned anyway, so the subtraction wraps around
instead of going negative. The above might have worked as expected in
Python, for example:

  >>> (-86) % 170
  84

However, it doesn't work the same way in C.

  printf("avail: %d\n", (-86) % 170);
  printf("avail: %u\n", (-86) % 170);
  printf("avail: %u\n", (-86u) % 170u);

Using gcc-11 these print:

  avail: -86
  avail: 4294967210
  avail: 0

Fix the calculation and allow all but one entry in the buffer to be
used, as originally intended.

Fixes: 105f8d40a737 ("KVM: Calculate available entries in coalesced mmio ring")
Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 virt/kvm/coalesced_mmio.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
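To see the off-by-half failure concretely, here is a minimal standalone
sketch (not part of the patch; KVM_COALESCED_MMIO_MAX is assumed to be
170, its value for 4K pages) contrasting the old and new checks:

  #include <stdio.h>
  #include <stdint.h>

  #define KVM_COALESCED_MMIO_MAX 170  /* assumed: (4096 - 8) / 24 */

  int main(void)
  {
          uint32_t first = 0;   /* consumer index */
          uint32_t last = 85;   /* producer index: half the ring used */

          /* Old check: the unsigned subtraction wraps, and "%" takes the
           * remainder of a huge positive number, not a modular distance. */
          uint32_t avail = (first - last - 1) % KVM_COALESCED_MMIO_MAX;
          printf("old: avail=%u -> %s\n", avail, avail == 0 ? "full" : "room");

          /* New check: full only when advancing last would hit first. */
          printf("new: %s\n",
                 (last + 1) % KVM_COALESCED_MMIO_MAX == first ? "full" : "room");
          return 0;
  }

With first=0 and last=85 the old check reports a full ring even though
84 more entries could still be stored.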
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 1b90acb6e3fe..184c5c40c9c1 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -43,7 +43,6 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
 {
 	struct kvm_coalesced_mmio_ring *ring;
-	unsigned avail;
 
 	/* Are we able to batch it ? */
@@ -52,8 +51,7 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
 	 * there is always one unused entry in the buffer
 	 */
 	ring = dev->kvm->coalesced_mmio_ring;
-	avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
-	if (avail == 0) {
+	if ((last + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
 		/* full */
 		return 0;
 	}

From patchwork Tue Aug 20 13:33:29 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13770110
From: Ilias Stamatis
Subject: [PATCH v3 2/6] KVM: Add KVM_CREATE_COALESCED_MMIO_BUFFER ioctl
Date: Tue, 20 Aug 2024 14:33:29 +0100
Message-ID: <20240820133333.1724191-3-ilstam@amazon.com>
In-Reply-To: <20240820133333.1724191-1-ilstam@amazon.com>

The current MMIO coalescing design has a few drawbacks which limit its
usefulness. Currently, all coalesced MMIO zones use the same ring
buffer. That means that upon a userspace exit we have to handle
potentially unrelated MMIO writes synchronously. And a VM-wide lock
needs to be taken in the kernel when an MMIO exit occurs.

Additionally, there is no direct way for userspace to be notified about
coalesced MMIO writes. If the next MMIO exit to userspace is when the
ring buffer has filled, then a substantial (and unbounded) amount of
time may have passed since the first coalesced MMIO.

Add a KVM_CREATE_COALESCED_MMIO_BUFFER ioctl to KVM. This ioctl simply
returns a file descriptor to the caller but does not allocate a ring
buffer. Userspace can then pass this fd to mmap() to actually allocate
a buffer and map it into its address space.

Subsequent patches will allow userspace to:

- Associate the fd with a coalescing zone when registering it so that
  writes to that zone are accumulated in that specific ring buffer
  rather than the VM-wide one.
- Poll for MMIO writes using this fd.

Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
v2->v3:
- Removed unnecessary brackets in a switch case and adjusted
  indentation on a spinlock as suggested by Sean Christopherson

 include/linux/kvm_host.h  |   1 +
 include/uapi/linux/kvm.h  |   2 +
 virt/kvm/coalesced_mmio.c | 141 +++++++++++++++++++++++++++++++++++---
 virt/kvm/coalesced_mmio.h |   9 +++
 virt/kvm/kvm_main.c       |   3 +
 5 files changed, 148 insertions(+), 8 deletions(-)
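As a rough illustration of the intended userspace flow (a sketch, not
part of the patch; vm_fd is assumed to be an existing VM fd and headers
are assumed to come from a kernel carrying this series):

  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kvm.h>

  /* Returns the buffer fd with its one-page ring mapped, or -1 on error. */
  int create_coalesced_buffer(int vm_fd, struct kvm_coalesced_mmio_ring **ring)
  {
          int buf_fd = ioctl(vm_fd, KVM_CREATE_COALESCED_MMIO_BUFFER);

          if (buf_fd < 0)
                  return -1;

          /* The fd backs no memory until mmap() allocates the ring page. */
          *ring = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
                       buf_fd, 0);
          if (*ring == MAP_FAILED) {
                  close(buf_fd);
                  return -1;
          }
          return buf_fd;
  }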
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ed0520268de4..efb07422e76b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -808,6 +808,7 @@ struct kvm {
 	struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
 	spinlock_t ring_lock;
 	struct list_head coalesced_zones;
+	struct list_head coalesced_buffers;
 #endif
 
 	struct mutex irq_lock;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..87f79a820fc0 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1573,4 +1573,6 @@ struct kvm_pre_fault_memory {
 	__u64 padding[5];
 };
 
+#define KVM_CREATE_COALESCED_MMIO_BUFFER _IO(KVMIO, 0xd6)
+
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 184c5c40c9c1..98b7e8760aa7 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -4,6 +4,7 @@
  *
  * Copyright (c) 2008 Bull S.A.S.
  * Copyright 2009 Red Hat, Inc. and/or its affiliates.
+ * Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
  *
  *  Author: Laurent Vivier
  *
@@ -14,6 +15,7 @@
 #include <linux/kvm_host.h>
 #include <linux/slab.h>
 #include <linux/kvm.h>
+#include <linux/anon_inodes.h>
 
 #include "coalesced_mmio.h"
 
@@ -40,17 +42,14 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 	return 1;
 }
 
-static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
+static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_ring *ring, u32 last)
 {
-	struct kvm_coalesced_mmio_ring *ring;
-
 	/* Are we able to batch it ? */
 
 	/* last is the first free entry
 	 * check if we don't meet the first used entry
 	 * there is always one unused entry in the buffer
 	 */
-	ring = dev->kvm->coalesced_mmio_ring;
 	if ((last + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
 		/* full */
 		return 0;
@@ -65,17 +64,27 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+	spinlock_t *lock = dev->buffer_dev ? &dev->buffer_dev->ring_lock :
+					     &dev->kvm->ring_lock;
 	__u32 insert;
 
 	if (!coalesced_mmio_in_range(dev, addr, len))
 		return -EOPNOTSUPP;
 
-	spin_lock(&dev->kvm->ring_lock);
+	spin_lock(lock);
+
+	if (dev->buffer_dev) {
+		ring = dev->buffer_dev->ring;
+		if (!ring) {
+			spin_unlock(lock);
+			return -EOPNOTSUPP;
+		}
+	}
 
 	insert = READ_ONCE(ring->last);
-	if (!coalesced_mmio_has_room(dev, insert) ||
+	if (!coalesced_mmio_has_room(ring, insert) ||
 	    insert >= KVM_COALESCED_MMIO_MAX) {
-		spin_unlock(&dev->kvm->ring_lock);
+		spin_unlock(lock);
 		return -EOPNOTSUPP;
 	}
 
@@ -87,7 +96,7 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 	ring->coalesced_mmio[insert].pio = dev->zone.pio;
 	smp_wmb();
 	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
-	spin_unlock(&dev->kvm->ring_lock);
+	spin_unlock(lock);
 
 	return 0;
 }
@@ -122,6 +131,7 @@ int kvm_coalesced_mmio_init(struct kvm *kvm)
 	 */
 	spin_lock_init(&kvm->ring_lock);
 	INIT_LIST_HEAD(&kvm->coalesced_zones);
+	INIT_LIST_HEAD(&kvm->coalesced_buffers);
 
 	return 0;
 }
@@ -132,11 +142,125 @@ void kvm_coalesced_mmio_free(struct kvm *kvm)
 	free_page((unsigned long)kvm->coalesced_mmio_ring);
 }
 
+static void coalesced_mmio_buffer_vma_close(struct vm_area_struct *vma)
+{
+	struct kvm_coalesced_mmio_buffer_dev *dev = vma->vm_private_data;
+
+	spin_lock(&dev->ring_lock);
+
+	vfree(dev->ring);
+	dev->ring = NULL;
+
+	spin_unlock(&dev->ring_lock);
+}
+
+static const struct vm_operations_struct coalesced_mmio_buffer_vm_ops = {
+	.close = coalesced_mmio_buffer_vma_close,
+};
+
+static int coalesced_mmio_buffer_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct kvm_coalesced_mmio_buffer_dev *dev = file->private_data;
+	unsigned long pfn;
+	int ret = 0;
+
+	spin_lock(&dev->ring_lock);
+
+	if (dev->ring) {
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
+	dev->ring = vmalloc_user(PAGE_SIZE);
+	if (!dev->ring) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	pfn = vmalloc_to_pfn(dev->ring);
+
+	if (remap_pfn_range(vma, vma->vm_start, pfn, PAGE_SIZE,
+			    vma->vm_page_prot)) {
+		vfree(dev->ring);
+		dev->ring = NULL;
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	vma->vm_ops = &coalesced_mmio_buffer_vm_ops;
+	vma->vm_private_data = dev;
+
+out_unlock:
+	spin_unlock(&dev->ring_lock);
+
+	return ret;
+}
+
+static int coalesced_mmio_buffer_release(struct inode *inode, struct file *file)
+{
+
+	struct kvm_coalesced_mmio_buffer_dev *buffer_dev = file->private_data;
+	struct kvm_coalesced_mmio_dev *mmio_dev, *tmp;
+	struct kvm *kvm = buffer_dev->kvm;
+
+	/* Deregister all zones associated with this ring buffer */
+	mutex_lock(&kvm->slots_lock);
+
+	list_for_each_entry_safe(mmio_dev, tmp, &kvm->coalesced_zones, list) {
+		if (mmio_dev->buffer_dev == buffer_dev) {
+			if (kvm_io_bus_unregister_dev(kvm,
+			    mmio_dev->zone.pio ? KVM_PIO_BUS : KVM_MMIO_BUS,
+			    &mmio_dev->dev))
+				break;
+		}
+	}
+
+	list_del(&buffer_dev->list);
+	kfree(buffer_dev);
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return 0;
+}
+
+static const struct file_operations coalesced_mmio_buffer_ops = {
+	.mmap = coalesced_mmio_buffer_mmap,
+	.release = coalesced_mmio_buffer_release,
+};
+
+int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
+{
+	int ret;
+	struct kvm_coalesced_mmio_buffer_dev *dev;
+
+	dev = kzalloc(sizeof(struct kvm_coalesced_mmio_buffer_dev),
+		      GFP_KERNEL_ACCOUNT);
+	if (!dev)
+		return -ENOMEM;
+
+	dev->kvm = kvm;
+	spin_lock_init(&dev->ring_lock);
+
+	ret = anon_inode_getfd("coalesced_mmio_buf", &coalesced_mmio_buffer_ops,
+			       dev, O_RDWR | O_CLOEXEC);
+	if (ret < 0) {
+		kfree(dev);
+		return ret;
+	}
+
+	mutex_lock(&kvm->slots_lock);
+	list_add_tail(&dev->list, &kvm->coalesced_buffers);
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 					 struct kvm_coalesced_mmio_zone *zone)
 {
 	int ret;
 	struct kvm_coalesced_mmio_dev *dev;
+	struct kvm_coalesced_mmio_buffer_dev *buffer_dev = NULL;
 
 	if (zone->pio != 1 && zone->pio != 0)
 		return -EINVAL;
@@ -149,6 +273,7 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 	kvm_iodevice_init(&dev->dev, &coalesced_mmio_ops);
 	dev->kvm = kvm;
 	dev->zone = *zone;
+	dev->buffer_dev = buffer_dev;
 
 	mutex_lock(&kvm->slots_lock);
 	ret = kvm_io_bus_register_dev(kvm,
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
index 36f84264ed25..37d9d8f325bb 100644
--- a/virt/kvm/coalesced_mmio.h
+++ b/virt/kvm/coalesced_mmio.h
@@ -20,6 +20,14 @@ struct kvm_coalesced_mmio_dev {
 	struct kvm_io_device dev;
 	struct kvm *kvm;
 	struct kvm_coalesced_mmio_zone zone;
+	struct kvm_coalesced_mmio_buffer_dev *buffer_dev;
+};
+
+struct kvm_coalesced_mmio_buffer_dev {
+	struct list_head list;
+	struct kvm *kvm;
+	spinlock_t ring_lock;
+	struct kvm_coalesced_mmio_ring *ring;
 };
 
 int kvm_coalesced_mmio_init(struct kvm *kvm);
@@ -28,6 +36,7 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 					 struct kvm_coalesced_mmio_zone *zone);
 int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
 					   struct kvm_coalesced_mmio_zone *zone);
+int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm);
 
 #else
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 238940c3cb32..9f6ad6e03317 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5244,6 +5244,9 @@ static long kvm_vm_ioctl(struct file *filp,
 		r = kvm_vm_ioctl_unregister_coalesced_mmio(kvm, &zone);
 		break;
 	}
+	case KVM_CREATE_COALESCED_MMIO_BUFFER:
+		r = kvm_vm_ioctl_create_coalesced_mmio_buffer(kvm);
+		break;
 #endif
 	case KVM_IRQFD: {
 		struct kvm_irqfd data;

From patchwork Tue Aug 20 13:33:30 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13770111
From: Ilias Stamatis
Subject: [PATCH v3 3/6] KVM: Support poll() on coalesced mmio buffer fds
Date: Tue, 20 Aug 2024 14:33:30 +0100
Message-ID: <20240820133333.1724191-4-ilstam@amazon.com>
In-Reply-To: <20240820133333.1724191-1-ilstam@amazon.com>

There is no direct way for userspace to be notified about coalesced
MMIO writes when using KVM_REGISTER_COALESCED_MMIO. If the next MMIO
exit is when the ring buffer has filled, then a substantial (and
unbounded) amount of time may have passed since the first coalesced
MMIO.

To improve this, make it possible for userspace to use poll() and
select() on the fd returned by the KVM_CREATE_COALESCED_MMIO_BUFFER
ioctl. This way a userspace VMM can have dedicated threads that deal
with writes to specific MMIO zones.

For example, a common use of MMIO, particularly in the realm of network
devices, is as a doorbell. A write to a doorbell register will trigger
the device to initiate a DMA transfer. When a network device is
emulated by userspace, a write to a doorbell register would typically
result in an MMIO exit so that userspace can emulate the DMA transfer
in a timely manner. No further processing can be done until userspace
performs the necessary emulation and re-invokes KVM_RUN. Even if
userspace makes use of another thread to emulate the DMA transfer, such
MMIO exits are disruptive to the vCPU, and they may also be quite
frequent if, for example, the vCPU is sending a sequence of short
packets to the network device.

By supporting poll() on coalesced buffer fds, userspace can have
dedicated threads wait for new doorbell writes and avoid the
performance hit of userspace exits on the main vCPU threads.

Signed-off-by: Ilias Stamatis
---
v2->v3:
- Changed POLLIN | POLLRDNORM to EPOLLIN | EPOLLRDNORM

 virt/kvm/coalesced_mmio.c | 22 ++++++++++++++++++++++
 virt/kvm/coalesced_mmio.h |  1 +
 2 files changed, 23 insertions(+)
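A sketch of the dedicated-thread pattern described above (illustrative
only, not part of the patch; assumes the buf_fd/ring pair from the
previous patch's flow, and memory-ordering details are deliberately
elided):

  #include <poll.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096  /* assumed, so KVM_COALESCED_MMIO_MAX expands */
  #include <linux/kvm.h>

  void doorbell_thread(int buf_fd, struct kvm_coalesced_mmio_ring *ring)
  {
          struct pollfd pfd = { .fd = buf_fd, .events = POLLIN };

          for (;;) {
                  if (poll(&pfd, 1, -1) <= 0)
                          continue;

                  /* Userspace owns 'first'; the kernel advances 'last'. */
                  while (ring->first != ring->last) {
                          struct kvm_coalesced_mmio *e =
                                  &ring->coalesced_mmio[ring->first];

                          printf("doorbell: %u bytes at 0x%llx\n", e->len,
                                 (unsigned long long)e->phys_addr);
                          ring->first = (ring->first + 1) %
                                        KVM_COALESCED_MMIO_MAX;
                  }
          }
  }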
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 98b7e8760aa7..039c6ffcb2a8 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -16,6 +16,7 @@
 #include <linux/slab.h>
 #include <linux/kvm.h>
 #include <linux/anon_inodes.h>
+#include <linux/poll.h>
 
 #include "coalesced_mmio.h"
 
@@ -97,6 +98,10 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 	smp_wmb();
 	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
 	spin_unlock(lock);
+
+	if (dev->buffer_dev)
+		wake_up_interruptible(&dev->buffer_dev->wait_queue);
+
 	return 0;
 }
 
@@ -223,9 +228,25 @@ static int coalesced_mmio_buffer_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
+static __poll_t coalesced_mmio_buffer_poll(struct file *file, struct poll_table_struct *wait)
+{
+	struct kvm_coalesced_mmio_buffer_dev *dev = file->private_data;
+	__poll_t mask = 0;
+
+	poll_wait(file, &dev->wait_queue, wait);
+
+	spin_lock(&dev->ring_lock);
+	if (dev->ring && (READ_ONCE(dev->ring->first) != READ_ONCE(dev->ring->last)))
+		mask = EPOLLIN | EPOLLRDNORM;
+	spin_unlock(&dev->ring_lock);
+
+	return mask;
+}
+
 static const struct file_operations coalesced_mmio_buffer_ops = {
 	.mmap = coalesced_mmio_buffer_mmap,
 	.release = coalesced_mmio_buffer_release,
+	.poll = coalesced_mmio_buffer_poll,
 };
 
 int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
@@ -239,6 +260,7 @@ int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
 		return -ENOMEM;
 
 	dev->kvm = kvm;
+	init_waitqueue_head(&dev->wait_queue);
 	spin_lock_init(&dev->ring_lock);
 
 	ret = anon_inode_getfd("coalesced_mmio_buf", &coalesced_mmio_buffer_ops,
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
index 37d9d8f325bb..d1807ce26464 100644
--- a/virt/kvm/coalesced_mmio.h
+++ b/virt/kvm/coalesced_mmio.h
@@ -26,6 +26,7 @@ struct kvm_coalesced_mmio_dev {
 struct kvm_coalesced_mmio_buffer_dev {
 	struct list_head list;
 	struct kvm *kvm;
+	wait_queue_head_t wait_queue;
 	spinlock_t ring_lock;
 	struct kvm_coalesced_mmio_ring *ring;
 };

From patchwork Tue Aug 20 13:33:31 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13770112
From: Ilias Stamatis
Subject: [PATCH v3 4/6] KVM: Add KVM_(UN)REGISTER_COALESCED_MMIO2 ioctls
Date: Tue, 20 Aug 2024 14:33:31 +0100
Message-ID: <20240820133333.1724191-5-ilstam@amazon.com>
In-Reply-To: <20240820133333.1724191-1-ilstam@amazon.com>

Add two new ioctls, KVM_REGISTER_COALESCED_MMIO2 and
KVM_UNREGISTER_COALESCED_MMIO2. These do the same thing as their v1
equivalents, except that an fd returned by
KVM_CREATE_COALESCED_MMIO_BUFFER needs to be passed as an argument to
them.

The fd representing a ring buffer is associated with an MMIO region
registered for coalescing, and all writes to that region are
accumulated there. This is in contrast to the v1 API, where all regions
have to share the same buffer. Nevertheless, userspace code can still
use the same ring buffer for multiple zones if it wishes to do so.

Userspace can check for the availability of the new API by checking if
the KVM_CAP_COALESCED_MMIO2 capability is supported.

Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
v2->v3:
- Changed type of buffer_fd from int to __u32
- Removed 0 initialisation of ret in
  kvm_vm_ioctl_register_coalesced_mmio()

 include/uapi/linux/kvm.h  | 16 ++++++++++++++++
 virt/kvm/coalesced_mmio.c | 36 +++++++++++++++++++++++++++++++-----
 virt/kvm/coalesced_mmio.h |  7 ++++---
 virt/kvm/kvm_main.c       | 34 +++++++++++++++++++++++++++++++++-
 4 files changed, 84 insertions(+), 9 deletions(-)
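Continuing the userspace sketch from patch 2 (illustrative only; the
address and size are made-up example values, and buf_fd must already be
mmap()ed or the ioctl fails with ENOBUFS):

  struct kvm_coalesced_mmio_zone2 zone = {
          .addr      = 0xc0000000,  /* example doorbell GPA */
          .size      = 4096,
          .pio       = 0,           /* MMIO, not port I/O */
          .buffer_fd = buf_fd,
  };

  if (ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO2, &zone) < 0)
          perror("KVM_REGISTER_COALESCED_MMIO2");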
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 87f79a820fc0..5e9fcc560cc1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -480,6 +480,16 @@ struct kvm_coalesced_mmio_zone {
 	};
 };
 
+struct kvm_coalesced_mmio_zone2 {
+	__u64 addr;
+	__u32 size;
+	union {
+		__u32 pad;
+		__u32 pio;
+	};
+	__u32 buffer_fd;
+};
+
 struct kvm_coalesced_mmio {
 	__u64 phys_addr;
 	__u32 len;
@@ -933,6 +943,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_PRE_FAULT_MEMORY 236
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_COALESCED_MMIO2 239
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1573,6 +1584,11 @@ struct kvm_pre_fault_memory {
 	__u64 padding[5];
 };
 
+/* Available with KVM_CAP_COALESCED_MMIO2 */
 #define KVM_CREATE_COALESCED_MMIO_BUFFER _IO(KVMIO, 0xd6)
+#define KVM_REGISTER_COALESCED_MMIO2 \
+	_IOW(KVMIO, 0xd7, struct kvm_coalesced_mmio_zone2)
+#define KVM_UNREGISTER_COALESCED_MMIO2 \
+	_IOW(KVMIO, 0xd8, struct kvm_coalesced_mmio_zone2)
 
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 039c6ffcb2a8..4e237ee66711 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -17,6 +17,7 @@
 #include <linux/kvm.h>
 #include <linux/anon_inodes.h>
 #include <linux/poll.h>
+#include <linux/file.h>
 
 #include "coalesced_mmio.h"
 
@@ -278,19 +279,40 @@ int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
 }
 
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
-					 struct kvm_coalesced_mmio_zone *zone)
+					 struct kvm_coalesced_mmio_zone2 *zone,
+					 bool use_buffer_fd)
 {
 	int ret;
+	struct file *file;
 	struct kvm_coalesced_mmio_dev *dev;
 	struct kvm_coalesced_mmio_buffer_dev *buffer_dev = NULL;
 
 	if (zone->pio != 1 && zone->pio != 0)
 		return -EINVAL;
 
+	if (use_buffer_fd) {
+		file = fget(zone->buffer_fd);
+		if (!file)
+			return -EBADF;
+
+		if (file->f_op != &coalesced_mmio_buffer_ops) {
+			fput(file);
+			return -EINVAL;
+		}
+
+		buffer_dev = file->private_data;
+		if (!buffer_dev->ring) {
+			fput(file);
+			return -ENOBUFS;
+		}
+	}
+
 	dev = kzalloc(sizeof(struct kvm_coalesced_mmio_dev),
 		      GFP_KERNEL_ACCOUNT);
-	if (!dev)
-		return -ENOMEM;
+	if (!dev) {
+		ret = -ENOMEM;
+		goto out_free_file;
+	}
 
 	kvm_iodevice_init(&dev->dev, &coalesced_mmio_ops);
 	dev->kvm = kvm;
@@ -306,17 +328,21 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 	list_add_tail(&dev->list, &kvm->coalesced_zones);
 	mutex_unlock(&kvm->slots_lock);
 
-	return 0;
+	ret = 0;
+	goto out_free_file;
 
 out_free_dev:
 	mutex_unlock(&kvm->slots_lock);
 	kfree(dev);
 
+out_free_file:
+	if (use_buffer_fd)
+		fput(file);
+
 	return ret;
 }
 
 int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
-					   struct kvm_coalesced_mmio_zone *zone)
+					   struct kvm_coalesced_mmio_zone2 *zone)
 {
 	struct kvm_coalesced_mmio_dev *dev, *tmp;
 	int r;
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
index d1807ce26464..32792adb7cb4 100644
--- a/virt/kvm/coalesced_mmio.h
+++ b/virt/kvm/coalesced_mmio.h
@@ -19,7 +19,7 @@ struct kvm_coalesced_mmio_dev {
 	struct list_head list;
 	struct kvm_io_device dev;
 	struct kvm *kvm;
-	struct kvm_coalesced_mmio_zone zone;
+	struct kvm_coalesced_mmio_zone2 zone;
 	struct kvm_coalesced_mmio_buffer_dev *buffer_dev;
 };
 
@@ -34,9 +34,10 @@ struct kvm_coalesced_mmio_buffer_dev {
 int kvm_coalesced_mmio_init(struct kvm *kvm);
 void kvm_coalesced_mmio_free(struct kvm *kvm);
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
-					 struct kvm_coalesced_mmio_zone *zone);
+					 struct kvm_coalesced_mmio_zone2 *zone,
+					 bool use_buffer_fd);
 int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
-					   struct kvm_coalesced_mmio_zone *zone);
+					   struct kvm_coalesced_mmio_zone2 *zone);
 int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm);
 
 #else
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 9f6ad6e03317..0850f151ef16 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4890,6 +4890,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_MMIO
 	case KVM_CAP_COALESCED_MMIO:
 		return KVM_COALESCED_MMIO_PAGE_OFFSET;
+	case KVM_CAP_COALESCED_MMIO2:
 	case KVM_CAP_COALESCED_PIO:
 		return 1;
 #endif
@@ -5228,15 +5229,46 @@ static long kvm_vm_ioctl(struct file *filp,
 #ifdef CONFIG_KVM_MMIO
 	case KVM_REGISTER_COALESCED_MMIO: {
 		struct kvm_coalesced_mmio_zone zone;
+		struct kvm_coalesced_mmio_zone2 zone2;
 
 		r = -EFAULT;
 		if (copy_from_user(&zone, argp, sizeof(zone)))
 			goto out;
-		r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone);
+
+		zone2.addr = zone.addr;
+		zone2.size = zone.size;
+		zone2.pio = zone.pio;
+
+		r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone2, false);
 		break;
 	}
+	case KVM_REGISTER_COALESCED_MMIO2: {
+		struct kvm_coalesced_mmio_zone2 zone;
+
+		r = -EFAULT;
+		if (copy_from_user(&zone, argp, sizeof(zone)))
+			goto out;
+
+		r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone, true);
+		break;
+	}
 	case KVM_UNREGISTER_COALESCED_MMIO: {
 		struct kvm_coalesced_mmio_zone zone;
+		struct kvm_coalesced_mmio_zone2 zone2;
+
+		r = -EFAULT;
+		if (copy_from_user(&zone, argp, sizeof(zone)))
+			goto out;
+
+		zone2.addr = zone.addr;
+		zone2.size = zone.size;
+		zone2.pio = zone.pio;
+
+		r = kvm_vm_ioctl_unregister_coalesced_mmio(kvm, &zone2);
+		break;
+	}
+	case KVM_UNREGISTER_COALESCED_MMIO2: {
+		struct kvm_coalesced_mmio_zone2 zone;
 
 		r = -EFAULT;
 		if (copy_from_user(&zone, argp, sizeof(zone)))

From patchwork Tue Aug 20 13:33:32 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13770113
From: Ilias Stamatis
Subject: [PATCH v3 5/6] KVM: Documentation: Document v2 of coalesced MMIO API
Date: Tue, 20 Aug 2024 14:33:32 +0100
Message-ID: <20240820133333.1724191-6-ilstam@amazon.com>
In-Reply-To: <20240820133333.1724191-1-ilstam@amazon.com>
Document the KVM_CREATE_COALESCED_MMIO_BUFFER and
KVM_REGISTER_COALESCED_MMIO2 ioctls.

Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 Documentation/virt/kvm/api.rst | 91 ++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)
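Since the whole v2 API is gated on KVM_CAP_COALESCED_MMIO2, userspace
should probe for it before use; a minimal probe might look like this
(an illustrative sketch, not part of the patch; kvm_fd is an assumed
open handle on /dev/kvm, with headers from a kernel carrying this
series):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Returns non-zero if the v2 coalesced MMIO API is available. */
  int have_coalesced_mmio2(int kvm_fd)
  {
          return ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                       KVM_CAP_COALESCED_MMIO2) > 0;
  }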
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b4d1cf2e4628..0b3ca05e380a 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4922,6 +4922,8 @@ For the definition of struct kvm_nested_state, see KVM_GET_NESTED_STATE.
 :Parameters: struct kvm_coalesced_mmio_zone
 :Returns: 0 on success, < 0 on error
 
+KVM_(UN)REGISTER_COALESCED_MMIO2 can be used instead if available.
+
 Coalesced I/O is a performance optimization that defers hardware
 register write emulation so that userspace exits are avoided.  It is
 typically used to reduce the overhead of emulating frequently accessed
@@ -6427,6 +6429,95 @@ the capability to be present.
 
 `flags` must currently be zero.
 
+4.144 KVM_CREATE_COALESCED_MMIO_BUFFER
+--------------------------------------
+
+:Capability: KVM_CAP_COALESCED_MMIO2
+:Architectures: all
+:Type: vm ioctl
+:Parameters: none
+:Returns: An fd on success, < 0 on error
+
+Returns an fd, but does not allocate a buffer. Also see
+KVM_REGISTER_COALESCED_MMIO2.
+
+The fd must first be passed to mmap() to allocate a page to be used as
+a ring buffer that is shared between kernel and userspace. The page
+must be interpreted as a struct kvm_coalesced_mmio_ring.
+
+::
+
+  struct kvm_coalesced_mmio_ring {
+	__u32 first, last;
+	struct kvm_coalesced_mmio coalesced_mmio[];
+  };
+
+The kernel will increment the last index and userspace is expected to
+do the same with the first index after consuming entries. The upper
+bound of the coalesced_mmio array is defined as KVM_COALESCED_MMIO_MAX.
+
+::
+
+  struct kvm_coalesced_mmio {
+	__u64 phys_addr;
+	__u32 len;
+	union {
+		__u32 pad;
+		__u32 pio;
+	};
+	__u8  data[8];
+  };
+
+After allocating a buffer with mmap(), the fd must be passed as an
+argument to KVM_REGISTER_COALESCED_MMIO2 to associate an I/O region to
+which writes are coalesced with the ring buffer. Multiple I/O regions
+can be associated with the same ring buffer. Closing the fd after
+unmapping it automatically deregisters all I/O regions associated with
+it.
+
+poll() is also supported on the fd so that userspace can be notified of
+I/O writes without having to wait until the next exit to userspace.
+
+4.145 KVM_(UN)REGISTER_COALESCED_MMIO2
+--------------------------------------
+
+:Capability: KVM_CAP_COALESCED_MMIO2
+:Architectures: all
+:Type: vm ioctl
+:Parameters: struct kvm_coalesced_mmio_zone2
+:Returns: 0 on success, < 0 on error
+
+Coalesced I/O is a performance optimization that defers hardware
+register write emulation so that userspace exits are avoided. It is
+typically used to reduce the overhead of emulating frequently accessed
+hardware registers.
+
+When a hardware register is configured for coalesced I/O, write
+accesses do not exit to userspace and their value is recorded in a ring
+buffer that is shared between kernel and userspace.
+
+::
+
+  struct kvm_coalesced_mmio_zone2 {
+	__u64 addr;
+	__u32 size;
+	union {
+		__u32 pad;
+		__u32 pio;
+	};
+	__u32 buffer_fd;
+  };
+
+KVM_CREATE_COALESCED_MMIO_BUFFER must be used to allocate a buffer fd,
+which must first be mmap()ed before being passed to
+KVM_REGISTER_COALESCED_MMIO2; otherwise the ioctl will fail.
+
+Coalesced I/O is used if one or more write accesses to a hardware
+register can be deferred until a read or a write to another hardware
+register on the same device. This last access will cause a vmexit, and
+userspace will process accesses from the ring buffer before emulating
+it. That will avoid exiting to userspace on repeated writes.
+
+Alternatively, userspace can call poll() on the buffer fd if it wishes
+to be notified of new I/O writes that way.
+
 5. The kvm_run structure
 ========================

From patchwork Tue Aug 20 13:33:33 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13770114
From: Ilias Stamatis
Subject: [PATCH v3 6/6] KVM: selftests: Add coalesced_mmio_test
Date: Tue, 20 Aug 2024 14:33:33 +0100
Message-ID: <20240820133333.1724191-7-ilstam@amazon.com>
In-Reply-To: <20240820133333.1724191-1-ilstam@amazon.com>

Test the KVM_CREATE_COALESCED_MMIO_BUFFER, KVM_REGISTER_COALESCED_MMIO2
and KVM_UNREGISTER_COALESCED_MMIO2 ioctls.

Signed-off-by: Ilias Stamatis
---
 tools/testing/selftests/kvm/Makefile        |   1 +
 .../selftests/kvm/coalesced_mmio_test.c     | 313 ++++++++++++++++++
 2 files changed, 314 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/coalesced_mmio_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 48d32c5aa3eb..527297bfb9c5 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -147,6 +147,7 @@ TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
 TEST_GEN_PROGS_x86_64 += system_counter_offset_test
 TEST_GEN_PROGS_x86_64 += pre_fault_memory_test
+TEST_GEN_PROGS_x86_64 += coalesced_mmio_test
 
 # Compiled outputs used by test targets
 TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/coalesced_mmio_test.c b/tools/testing/selftests/kvm/coalesced_mmio_test.c
new file mode 100644
index 000000000000..7a000596279f
--- /dev/null
+++ b/tools/testing/selftests/kvm/coalesced_mmio_test.c
@@ -0,0 +1,313 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ *
+ * Test the KVM_CREATE_COALESCED_MMIO_BUFFER, KVM_REGISTER_COALESCED_MMIO2 and
+ * KVM_UNREGISTER_COALESCED_MMIO2 ioctls by making sure that MMIO writes to
+ * associated zones end up in the correct ring buffer. Also test that we don't
+ * exit to userspace when there is space in the corresponding buffer.
+ */
+
+#include <kvm_util.h>
+#include <sys/mman.h>
+
+#define PAGE_SIZE 4096
+
+/*
+ * Somewhat arbitrary location and slot, intended to not overlap anything.
+ */
+#define MEM_REGION_SLOT 10
+#define MEM_REGION_GPA 0xc0000000UL
+#define MEM_REGION_SIZE (PAGE_SIZE * 2)
+#define MEM_REGION_PAGES DIV_ROUND_UP(MEM_REGION_SIZE, PAGE_SIZE)
+
+#define COALESCING_ZONE1_GPA MEM_REGION_GPA
+#define COALESCING_ZONE1_SIZE PAGE_SIZE
+#define COALESCING_ZONE2_GPA (COALESCING_ZONE1_GPA + COALESCING_ZONE1_SIZE)
+#define COALESCING_ZONE2_SIZE PAGE_SIZE
+
+#define MMIO_WRITE_DATA 0xdeadbeef
+#define MMIO_WRITE_DATA2 0xbadc0de
+
+#define BATCH_SIZE 4
+
+static void guest_code(void)
+{
+	uint64_t *gpa;
+
+	/*
+	 * The first write should result in an exit
+	 */
+	gpa = (uint64_t *)(MEM_REGION_GPA);
+	WRITE_ONCE(*gpa, MMIO_WRITE_DATA);
+
+	/*
+	 * These writes should be stored in a coalescing ring buffer and only
+	 * the last one should result in an exit.
+	 */
+	for (int i = 0; i < KVM_COALESCED_MMIO_MAX; i++) {
+		gpa = (uint64_t *)(COALESCING_ZONE1_GPA + i * sizeof(*gpa));
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA + i);
+
+		/* Let's throw a PIO into the mix */
+		if (i == KVM_COALESCED_MMIO_MAX / 2)
+			GUEST_SYNC(0);
+	}
+
+	/*
+	 * These writes should be stored in two separate ring buffers and they
+	 * shouldn't result in an exit.
+	 */
+	for (int i = 0; i < BATCH_SIZE; i++) {
+		gpa = (uint64_t *)(COALESCING_ZONE1_GPA + i * sizeof(*gpa));
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA + i);
+
+		gpa = (uint64_t *)(COALESCING_ZONE2_GPA + i * sizeof(*gpa));
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA2 + i);
+	}
+
+	GUEST_SYNC(0);
+
+	/*
+	 * These writes should be stored in the same ring buffer and they
+	 * shouldn't result in an exit.
+	 */
+	for (int i = 0; i < BATCH_SIZE; i++) {
+		if (i < BATCH_SIZE / 2)
+			gpa = (uint64_t *)(COALESCING_ZONE1_GPA + i * sizeof(*gpa));
+		else
+			gpa = (uint64_t *)(COALESCING_ZONE2_GPA + i * sizeof(*gpa));
+
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA2 + i);
+	}
+
+	GUEST_SYNC(0);
+
+	/*
+	 * This last write should result in an exit because the host should
+	 * have disabled I/O coalescing by now.
+	 */
+	gpa = (uint64_t *)(COALESCING_ZONE1_GPA);
+	WRITE_ONCE(*gpa, MMIO_WRITE_DATA);
+}
+
+static void assert_mmio_write(struct kvm_vcpu *vcpu, uint64_t addr, uint64_t value)
+{
+	uint64_t data;
+
+	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_MMIO);
+	TEST_ASSERT(vcpu->run->mmio.is_write, "Got MMIO read, not MMIO write");
+
+	memcpy(&data, vcpu->run->mmio.data, vcpu->run->mmio.len);
+	TEST_ASSERT_EQ(vcpu->run->mmio.phys_addr, addr);
+	TEST_ASSERT_EQ(value, data);
+}
+
+static void assert_ucall_exit(struct kvm_vcpu *vcpu, uint64_t command)
+{
+	uint64_t cmd;
+	struct ucall uc;
+
+	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+	cmd = get_ucall(vcpu, &uc);
+	TEST_ASSERT_EQ(cmd, command);
+}
+
+static void assert_ring_entries(struct kvm_coalesced_mmio_ring *ring,
+				uint32_t nentries,
+				uint64_t addr,
+				uint64_t value)
+{
+	uint64_t data;
+
+	for (int i = READ_ONCE(ring->first); i < nentries; i++) {
+		TEST_ASSERT_EQ(READ_ONCE(ring->coalesced_mmio[i].len),
+			       sizeof(data));
+		memcpy(&data, ring->coalesced_mmio[i].data,
+		       READ_ONCE(ring->coalesced_mmio[i].len));
+
+		TEST_ASSERT_EQ(READ_ONCE(ring->coalesced_mmio[i].phys_addr),
+			       addr + i * sizeof(data));
+		TEST_ASSERT_EQ(data, value + i);
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	uint64_t gpa;
+	struct kvm_coalesced_mmio_ring *ring, *ring2;
+	struct kvm_coalesced_mmio_zone2 zone, zone2;
+	int ring_fd, ring_fd2;
+	int r;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_COALESCED_MMIO2));
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_READONLY_MEM));
+	TEST_ASSERT(BATCH_SIZE * 2 <= KVM_COALESCED_MMIO_MAX,
+		    "KVM_COALESCED_MMIO_MAX too small");
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, MEM_REGION_GPA,
+				    MEM_REGION_SLOT, MEM_REGION_PAGES,
+				    KVM_MEM_READONLY);
+
+	gpa = vm_phy_pages_alloc(vm, MEM_REGION_PAGES, MEM_REGION_GPA,
+				 MEM_REGION_SLOT);
+	TEST_ASSERT(gpa == MEM_REGION_GPA, "Failed vm_phy_pages_alloc");
+
+	virt_map(vm, MEM_REGION_GPA, MEM_REGION_GPA, MEM_REGION_PAGES);
+
+	/*
+	 * Test that allocating an fd and memory mapping it works
+	 */
+	ring_fd = __vm_ioctl(vm, KVM_CREATE_COALESCED_MMIO_BUFFER, NULL);
+	TEST_ASSERT(ring_fd != -1, "Failed KVM_CREATE_COALESCED_MMIO_BUFFER");
+
+	ring = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+		    ring_fd, 0);
+	TEST_ASSERT(ring != MAP_FAILED, "Failed to allocate ring buffer");
+
+	/*
+	 * Test that trying to map the same fd again fails
+	 */
+	ring2 = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+		     ring_fd, 0);
+	TEST_ASSERT(ring2 == MAP_FAILED && errno == EBUSY,
+		    "Mapping the same fd again should fail with EBUSY");
+
+	/*
+	 * Test that the first and last ring indices are zero
+	 */
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last), 0);
+
+	/*
+	 * Run the vCPU and make sure the first MMIO write results in a
+	 * userspace exit since we have not setup MMIO coalescing yet.
+	 */
+	vcpu_run(vcpu);
+	assert_mmio_write(vcpu, MEM_REGION_GPA, MMIO_WRITE_DATA);
+
+	/*
+	 * Let's actually setup MMIO coalescing now...
+	 */
+	zone.addr = COALESCING_ZONE1_GPA;
+	zone.size = COALESCING_ZONE1_SIZE;
+	zone.pio = 0;
+	zone.buffer_fd = ring_fd;
+	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone);
+	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
+
+	/*
+	 * The guest will start doing MMIO writes in the coalesced regions but
+	 * will also do a ucall when the buffer is half full. The first
+	 * userspace exit should be due to the ucall and not an MMIO exit.
+	 */
+	vcpu_run(vcpu);
+	assert_ucall_exit(vcpu, UCALL_SYNC);
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last), KVM_COALESCED_MMIO_MAX / 2 + 1);
+
+	/*
+	 * Run the guest again. Next exit should be when the buffer is full.
+	 * One entry always remains unused.
+	 */
+	vcpu_run(vcpu);
+	assert_mmio_write(vcpu,
+			  COALESCING_ZONE1_GPA + (KVM_COALESCED_MMIO_MAX - 1) * sizeof(uint64_t),
+			  MMIO_WRITE_DATA + KVM_COALESCED_MMIO_MAX - 1);
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last), KVM_COALESCED_MMIO_MAX - 1);
+
+	assert_ring_entries(ring, KVM_COALESCED_MMIO_MAX - 1,
+			    COALESCING_ZONE1_GPA, MMIO_WRITE_DATA);
+
+	/*
+	 * Let's setup another region with a separate buffer
+	 */
+	ring_fd2 = __vm_ioctl(vm, KVM_CREATE_COALESCED_MMIO_BUFFER, NULL);
+	TEST_ASSERT(ring_fd2 != -1, "Failed KVM_CREATE_COALESCED_MMIO_BUFFER");
+
+	ring2 = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+		     ring_fd2, 0);
+	TEST_ASSERT(ring2 != MAP_FAILED, "Failed to allocate ring buffer");
+
+	zone2.addr = COALESCING_ZONE2_GPA;
+	zone2.size = COALESCING_ZONE2_SIZE;
+	zone2.pio = 0;
+	zone2.buffer_fd = ring_fd2;
+	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone2);
+	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
+
+	/*
+	 * Move the consumer pointer of the first ring forward.
+	 *
+	 * When re-entering the vCPU the guest will write BATCH_SIZE
+	 * times to each MMIO zone.
+	 */
+	WRITE_ONCE(ring->first,
+		   (READ_ONCE(ring->first) + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+
+	vcpu_run(vcpu);
+	assert_ucall_exit(vcpu, UCALL_SYNC);
+
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), BATCH_SIZE);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last),
+		       (KVM_COALESCED_MMIO_MAX - 1 + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+	TEST_ASSERT_EQ(READ_ONCE(ring2->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring2->last), BATCH_SIZE);
+
+	assert_ring_entries(ring, BATCH_SIZE, COALESCING_ZONE1_GPA, MMIO_WRITE_DATA);
+	assert_ring_entries(ring2, BATCH_SIZE, COALESCING_ZONE2_GPA, MMIO_WRITE_DATA2);
+
+	/*
+	 * Unregister zone 2 and register it again but this time use the same
+	 * ring buffer used for zone 1.
+	 */
+	r = __vm_ioctl(vm, KVM_UNREGISTER_COALESCED_MMIO2, &zone2);
+	TEST_ASSERT(r != -1, "Failed KVM_UNREGISTER_COALESCED_MMIO2");
+
+	zone2.buffer_fd = ring_fd;
+	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone2);
+	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
+
+	/*
+	 * Enter the vCPU again. This time writes to both regions should go
+	 * to the same ring buffer.
+	 */
+	WRITE_ONCE(ring->first,
+		   (READ_ONCE(ring->first) + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+
+	vcpu_run(vcpu);
+	assert_ucall_exit(vcpu, UCALL_SYNC);
+
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), BATCH_SIZE * 2);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last),
+		       (KVM_COALESCED_MMIO_MAX - 1 + BATCH_SIZE * 2) % KVM_COALESCED_MMIO_MAX);
+
+	WRITE_ONCE(ring->first,
+		   (READ_ONCE(ring->first) + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+
+	/*
+	 * Test that munmap and close work.
+	 */
+	r = munmap(ring, PAGE_SIZE);
+	TEST_ASSERT(r == 0, "Failed to munmap()");
+	r = close(ring_fd);
+	TEST_ASSERT(r == 0, "Failed to close()");
+
+	r = munmap(ring2, PAGE_SIZE);
+	TEST_ASSERT(r == 0, "Failed to munmap()");
+	r = close(ring_fd2);
+	TEST_ASSERT(r == 0, "Failed to close()");
+
+	/*
+	 * close() should have also deregistered all I/O regions associated
+	 * with the ring buffer automatically. Make sure that when the guest
+	 * writes to the region again this results in an immediate exit.
+	 */
+	vcpu_run(vcpu);
+	assert_mmio_write(vcpu, COALESCING_ZONE1_GPA, MMIO_WRITE_DATA);
+
+	return 0;
+}