From patchwork Wed Jun 1 08:57:38 2016
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 9147083
Message-Id: <201606011011.u51A9CrR022642@mx0a-001b2d01.pphosted.com>
From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: qemu-devel@nongnu.org
Cc: Alexey Kardashevskiy, Alex Williamson, qemu-ppc@nongnu.org, Alexander Graf, David Gibson
Date: Wed, 1 Jun 2016 18:57:38 +1000
Subject: [Qemu-devel] [PATCH qemu v17 07/12] vfio: spapr: Add DMA memory preregistering (SPAPR IOMMU v2)
In-Reply-To: <1464771463-37214-1-git-send-email-aik@ozlabs.ru>
References: <1464771463-37214-1-git-send-email-aik@ozlabs.ru>
X-Mailer: git-send-email 2.5.0.rc3

This makes use of the new "memory registering" feature. The idea is to
give userspace the ability to notify the host kernel about pages which
are going to be used for DMA. With this information, the host kernel can
pin them all once per user process, do locked-pages accounting once, and
avoid doing that at run time, when failures cannot always be handled
gracefully.

This adds a prereg memory listener which listens on address_space_memory
and notifies a VFIO container about memory which needs to be
pinned/unpinned. VFIO MMIO regions (i.e. "skip dump" regions) are
skipped.

As there is no per-IOMMU-type release() callback anymore, this stores
the IOMMU type in the container so vfio_listener_release() can determine
whether it needs to unregister @prereg_listener.

The feature is only enabled for SPAPR IOMMU v2. Host kernel changes are
required.
Since v2 does not need/support VFIO_IOMMU_ENABLE, this does not call it
when v2 is detected and enabled.

This requires guest RAM blocks to be host page size aligned; however,
this is not new as KVM already requires memory slots to be host page
size aligned.

Signed-off-by: Alexey Kardashevskiy
---
Changes:
v17:
* s/prereg\.c/spapr.c/
* s/vfio_prereg_gpa_to_ua/vfio_prereg_gpa_to_vaddr/
* vfio_prereg_listener_skipped_section does hw_error() on IOMMUs

v16:
* switched to 64bit math everywhere as there is no chance to see
region_add on RAM blocks even remotely close to 1<<64 bytes

v15:
* banned unaligned sections
* added a vfio_prereg_gpa_to_ua() helper

v14:
* s/free_container_exit/listener_release_exit/g
* added "if memory_region_is_iommu()" to
vfio_prereg_listener_skipped_section
---
 hw/vfio/Makefile.objs         |   1 +
 hw/vfio/common.c              |  38 +++++++++---
 hw/vfio/spapr.c               | 137 ++++++++++++++++++++++++++++++++++++++++++
 include/hw/vfio/vfio-common.h |   4 ++
 trace-events                  |   2 +
 5 files changed, 172 insertions(+), 10 deletions(-)
 create mode 100644 hw/vfio/spapr.c

diff --git a/hw/vfio/Makefile.objs b/hw/vfio/Makefile.objs
index ceddbb8..c25e32b 100644
--- a/hw/vfio/Makefile.objs
+++ b/hw/vfio/Makefile.objs
@@ -4,4 +4,5 @@ obj-$(CONFIG_PCI) += pci.o pci-quirks.o
 obj-$(CONFIG_SOFTMMU) += platform.o
 obj-$(CONFIG_SOFTMMU) += calxeda-xgmac.o
 obj-$(CONFIG_SOFTMMU) += amd-xgbe.o
+obj-$(CONFIG_SOFTMMU) += spapr.o
 endif
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index f1a12b0..770f630 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -504,6 +504,9 @@ static const MemoryListener vfio_memory_listener = {
 static void vfio_listener_release(VFIOContainer *container)
 {
     memory_listener_unregister(&container->listener);
+    if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {
+        memory_listener_unregister(&container->prereg_listener);
+    }
 }
 
 static struct vfio_info_cap_header *
@@ -862,8 +865,8 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             goto free_container_exit;
         }
 
-        ret = ioctl(fd, VFIO_SET_IOMMU,
-                    v2 ? VFIO_TYPE1v2_IOMMU : VFIO_TYPE1_IOMMU);
+        container->iommu_type = v2 ? VFIO_TYPE1v2_IOMMU : VFIO_TYPE1_IOMMU;
+        ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_type);
         if (ret) {
             error_report("vfio: failed to set iommu for container: %m");
             ret = -errno;
@@ -888,8 +891,10 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
         if ((ret == 0) && (info.flags & VFIO_IOMMU_INFO_PGSIZES)) {
             container->iova_pgsizes = info.iova_pgsizes;
         }
-    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
+    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU) ||
+               ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU)) {
         struct vfio_iommu_spapr_tce_info info;
+        bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU);
 
         ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
         if (ret) {
@@ -897,7 +902,9 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             ret = -errno;
             goto free_container_exit;
         }
-        ret = ioctl(fd, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);
+        container->iommu_type =
+            v2 ? VFIO_SPAPR_TCE_v2_IOMMU : VFIO_SPAPR_TCE_IOMMU;
+        ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_type);
         if (ret) {
             error_report("vfio: failed to set iommu for container: %m");
             ret = -errno;
@@ -909,11 +916,22 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
          * when container fd is closed so we do not call it explicitly
          * in this file.
          */
-        ret = ioctl(fd, VFIO_IOMMU_ENABLE);
-        if (ret) {
-            error_report("vfio: failed to enable container: %m");
-            ret = -errno;
-            goto free_container_exit;
+        if (!v2) {
+            ret = ioctl(fd, VFIO_IOMMU_ENABLE);
+            if (ret) {
+                error_report("vfio: failed to enable container: %m");
+                ret = -errno;
+                goto free_container_exit;
+            }
+        } else {
+            container->prereg_listener = vfio_prereg_listener;
+
+            memory_listener_register(&container->prereg_listener,
+                                     &address_space_memory);
+            if (container->error) {
+                error_report("vfio: RAM memory listener initialization failed for container");
+                goto listener_release_exit;
+            }
         }
 
         /*
@@ -926,7 +944,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
         if (ret) {
             error_report("vfio: VFIO_IOMMU_SPAPR_TCE_GET_INFO failed: %m");
             ret = -errno;
-            goto free_container_exit;
+            goto listener_release_exit;
         }
         container->min_iova = info.dma32_window_start;
         container->max_iova = container->min_iova + info.dma32_window_size - 1;
diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
new file mode 100644
index 0000000..f339472
--- /dev/null
+++ b/hw/vfio/spapr.c
@@ -0,0 +1,137 @@
+/*
+ * DMA memory preregistration
+ *
+ * Authors:
+ *  Alexey Kardashevskiy
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include <sys/ioctl.h>
+#include <linux/vfio.h>
+
+#include "hw/vfio/vfio-common.h"
+#include "hw/hw.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+static bool vfio_prereg_listener_skipped_section(MemoryRegionSection *section)
+{
+    if (memory_region_is_iommu(section->mr)) {
+        hw_error("Cannot possibly preregister IOMMU memory");
+    }
+
+    return !memory_region_is_ram(section->mr) ||
+            memory_region_is_skip_dump(section->mr);
+}
+
+static void *vfio_prereg_gpa_to_vaddr(MemoryRegionSection *section, hwaddr gpa)
+{
+    return memory_region_get_ram_ptr(section->mr) +
+        section->offset_within_region +
+        (gpa - section->offset_within_address_space);
+}
+
+static void vfio_prereg_listener_region_add(MemoryListener *listener,
+                                            MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            prereg_listener);
+    const hwaddr gpa = section->offset_within_address_space;
+    hwaddr end;
+    int ret;
+    hwaddr page_mask = qemu_real_host_page_mask;
+    struct vfio_iommu_spapr_register_memory reg = {
+        .argsz = sizeof(reg),
+        .flags = 0,
+    };
+
+    if (vfio_prereg_listener_skipped_section(section)) {
+        trace_vfio_listener_region_add_skip(
+                section->offset_within_address_space,
+                section->offset_within_address_space +
+                int128_get64(int128_sub(section->size, int128_one())));
+        return;
+    }
+
+    if (unlikely((section->offset_within_address_space & ~page_mask) ||
+                 (section->offset_within_region & ~page_mask) ||
+                 (int128_get64(section->size) & ~page_mask))) {
+        error_report("%s received unaligned region", __func__);
+        return;
+    }
+
+    end = section->offset_within_address_space + int128_get64(section->size);
+    g_assert(gpa < end);
+
+    memory_region_ref(section->mr);
+
+    reg.vaddr = (__u64) vfio_prereg_gpa_to_vaddr(section, gpa);
+    reg.size = end - gpa;
+
+    ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
+    trace_vfio_ram_register(reg.vaddr, reg.size, ret ? -errno : 0);
+    if (ret) {
+        /*
+         * On the initfn path, store the first error in the container so we
+         * can gracefully fail. Runtime, there's not much we can do other
+         * than throw a hardware error.
+         */
+        if (!container->initialized) {
+            if (!container->error) {
+                container->error = ret;
+            }
+        } else {
+            hw_error("vfio: Memory registering failed, unable to continue");
+        }
+    }
+}
+
+static void vfio_prereg_listener_region_del(MemoryListener *listener,
+                                            MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            prereg_listener);
+    const hwaddr gpa = section->offset_within_address_space;
+    hwaddr end;
+    int ret;
+    hwaddr page_mask = qemu_real_host_page_mask;
+    struct vfio_iommu_spapr_register_memory reg = {
+        .argsz = sizeof(reg),
+        .flags = 0,
+    };
+
+    if (vfio_prereg_listener_skipped_section(section)) {
+        trace_vfio_listener_region_del_skip(
+                section->offset_within_address_space,
+                section->offset_within_address_space +
+                int128_get64(int128_sub(section->size, int128_one())));
+        return;
+    }
+
+    if (unlikely((section->offset_within_address_space & ~page_mask) ||
+                 (section->offset_within_region & ~page_mask) ||
+                 (int128_get64(section->size) & ~page_mask))) {
+        error_report("%s received unaligned region", __func__);
+        return;
+    }
+
+    end = section->offset_within_address_space + int128_get64(section->size);
+    if (gpa >= end) {
+        return;
+    }
+
+    reg.vaddr = (__u64) vfio_prereg_gpa_to_vaddr(section, gpa);
+    reg.size = end - gpa;
+
+    ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg);
+    trace_vfio_ram_unregister(reg.vaddr, reg.size, ret ? -errno : 0);
+}
+
+const MemoryListener vfio_prereg_listener = {
+    .region_add = vfio_prereg_listener_region_add,
+    .region_del = vfio_prereg_listener_region_del,
+};
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 0610377..405c3b2 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -73,6 +73,8 @@ typedef struct VFIOContainer {
     VFIOAddressSpace *space;
     int fd; /* /dev/vfio/vfio, empowered by the attached groups */
     MemoryListener listener;
+    MemoryListener prereg_listener;
+    unsigned iommu_type;
     int error;
     bool initialized;
     /*
@@ -158,4 +160,6 @@ int vfio_get_region_info(VFIODevice *vbasedev, int index,
 int vfio_get_dev_region_info(VFIODevice *vbasedev, uint32_t type,
                              uint32_t subtype, struct vfio_region_info **info);
 #endif
+extern const MemoryListener vfio_prereg_listener;
+
 #endif /* !HW_VFIO_VFIO_COMMON_H */
diff --git a/trace-events b/trace-events
index de42012..ddb8676 100644
--- a/trace-events
+++ b/trace-events
@@ -1766,6 +1766,8 @@ vfio_region_mmaps_set_enabled(const char *name, bool enabled) "Region %s mmaps e
 vfio_region_sparse_mmap_header(const char *name, int index, int nr_areas) "Device %s region %d: %d sparse mmap entries"
 vfio_region_sparse_mmap_entry(int i, unsigned long start, unsigned long end) "sparse entry %d [0x%lx - 0x%lx]"
 vfio_get_dev_region(const char *name, int index, uint32_t type, uint32_t subtype) "%s index %d, %08x/%0x8"
+vfio_ram_register(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
+vfio_ram_unregister(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
 
 # hw/vfio/platform.c
 vfio_platform_base_device_init(char *name, int groupid) "%s belongs to group #%d"
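
For reference, here is a minimal userspace sketch of the SPAPR IOMMU v2
preregistration ioctls that the new prereg listener drives. It is
illustrative only and not part of this patch; the helper names
spapr_prereg()/spapr_unprereg() and the container_fd parameter are made up
for the example. It assumes the container already has a group attached,
VFIO_SET_IOMMU has been done with VFIO_SPAPR_TCE_v2_IOMMU, and buf/size are
host page size aligned.

/* Illustrative sketch only; not part of the patch above. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int spapr_prereg(int container_fd, void *buf, unsigned long size)
{
    struct vfio_iommu_spapr_register_memory reg;

    memset(&reg, 0, sizeof(reg));
    reg.argsz = sizeof(reg);
    reg.flags = 0;
    reg.vaddr = (__u64)(unsigned long)buf;
    reg.size = size;

    /* The host kernel pins the pages and does locked-page accounting once */
    return ioctl(container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
}

static int spapr_unprereg(int container_fd, void *buf, unsigned long size)
{
    struct vfio_iommu_spapr_register_memory reg;

    memset(&reg, 0, sizeof(reg));
    reg.argsz = sizeof(reg);
    reg.flags = 0;
    reg.vaddr = (__u64)(unsigned long)buf;
    reg.size = size;

    /* Undoes the pinning/accounting once the memory is no longer used for DMA */
    return ioctl(container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg);
}

This is essentially what vfio_prereg_listener_region_add() and
vfio_prereg_listener_region_del() above do for every aligned RAM section of
address_space_memory.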