From patchwork Tue Oct 16 18:12:35 2018
X-Patchwork-Submitter: Kirti Wankhede
X-Patchwork-Id: 10644047
From: Kirti Wankhede
Subject: [RFC PATCH v1 1/4] VFIO KABI for migration interface
Date: Tue, 16 Oct 2018 23:42:35 +0530
Message-ID: <1539713558-2453-2-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>
References: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>

- Added vfio_device_migration_info structure to interact with the vendor
  driver.
- Different flags are used to get or set migration related information
  from/to the vendor driver:
  Flag VFIO_MIGRATION_PROBE: query whether the feature is supported
  Flag VFIO_MIGRATION_GET_REGION: get migration region info
  Flag VFIO_MIGRATION_SET_STATE: convey the device state to the vendor driver
  Flag VFIO_MIGRATION_GET_PENDING: get pending bytes yet to be migrated
       from the vendor driver
  Flag VFIO_MIGRATION_GET_BUFFER: the vendor driver writes data to the
       migration region and returns the number of bytes written
  Flag VFIO_MIGRATION_SET_BUFFER: in the migration resume path, the user
       space app writes data to the migration region and conveys it to the
       vendor driver with this flag
  Flag VFIO_MIGRATION_GET_DIRTY_PFNS: get a bitmap of dirty pages from the
       vendor driver for a given start address
- Added enum for possible device states.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 linux-headers/linux/vfio.h | 91 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index 3615a269d378..8e9045ed9aa8 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -602,6 +602,97 @@ struct vfio_device_ioeventfd {
 
 #define VFIO_DEVICE_IOEVENTFD _IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_DEVICE_MIGRATION_INFO - _IOW(VFIO_TYPE, VFIO_BASE + 17,
+ *                                   struct vfio_device_migration_info)
+ * Flag VFIO_MIGRATION_PROBE:
+ *      To query if the feature is supported
+ *
+ * Flag VFIO_MIGRATION_GET_REGION:
+ *      To get migration region info
+ *      region_index [output] : region index to be used for the migration region
+ *      size [output] : size of the migration region
+ *
+ * Flag VFIO_MIGRATION_SET_STATE:
+ *      To set the device state in the vendor driver
+ *      device_state [input] : user space app sends the device state to the
+ *           vendor driver on state change
+ *
+ * Flag VFIO_MIGRATION_GET_PENDING:
+ *      To get pending bytes yet to be migrated from the vendor driver
+ *      threshold_size [input] : threshold of the buffer in the user space app
+ *      pending_precopy_only [output] : pending data which must be migrated in
+ *           the precopy phase or in the stopped state, in other words, before
+ *           the target VM starts
+ *      pending_compatible [output] : pending data which may be migrated in
+ *           any phase
+ *      pending_postcopy_only [output] : pending data which must be migrated in
+ *           the postcopy phase or in the stopped state, in other words, after
+ *           the source VM stops
+ *      The sum of pending_precopy_only, pending_compatible and
+ *      pending_postcopy_only is the whole amount of pending data.
+ *
+ * Flag VFIO_MIGRATION_GET_BUFFER:
+ *      On this flag, the vendor driver should write data to the migration
+ *      region and return the number of bytes written in the region.
+ *      bytes_written [output] : number of bytes written in the migration
+ *           buffer by the vendor driver
+ *
+ * Flag VFIO_MIGRATION_SET_BUFFER:
+ *      In the migration resume path, the user space app writes to the
+ *      migration region and conveys it to the vendor driver with this flag.
+ *      bytes_written [input] : number of bytes written in the migration
+ *           buffer by the user space app
+ *
+ * Flag VFIO_MIGRATION_GET_DIRTY_PFNS:
+ *      Get a bitmap of dirty pages from the vendor driver for the given
+ *      start address.
+ *      start_addr [input] : start address
+ *      pfn_count [input] : total pfn count from start_addr for which the
+ *           dirty bitmap is requested
+ *      dirty_bitmap [output] : bitmap memory allocated by the user space
+ *           application; the vendor driver should return the bitmap with
+ *           bits set only for pages to be marked dirty
+ * Return: 0 on success, -errno on failure.
+ */
+
+struct vfio_device_migration_info {
+    __u32 argsz;
+    __u32 flags;
+#define VFIO_MIGRATION_PROBE           (1 << 0)
+#define VFIO_MIGRATION_GET_REGION      (1 << 1)
+#define VFIO_MIGRATION_SET_STATE       (1 << 2)
+#define VFIO_MIGRATION_GET_PENDING     (1 << 3)
+#define VFIO_MIGRATION_GET_BUFFER      (1 << 4)
+#define VFIO_MIGRATION_SET_BUFFER      (1 << 5)
+#define VFIO_MIGRATION_GET_DIRTY_PFNS  (1 << 6)
+    __u32 region_index;      /* region index */
+    __u64 size;              /* size */
+    __u32 device_state;      /* VFIO device state */
+    __u64 pending_precopy_only;
+    __u64 pending_compatible;
+    __u64 pending_postcopy_only;
+    __u64 threshold_size;
+    __u64 bytes_written;
+    __u64 start_addr;
+    __u64 pfn_count;
+    __u8  dirty_bitmap[];
+};
+
+enum {
+    VFIO_DEVICE_STATE_NONE,
+    VFIO_DEVICE_STATE_RUNNING,
+    VFIO_DEVICE_STATE_MIGRATION_SETUP,
+    VFIO_DEVICE_STATE_MIGRATION_PRECOPY_ACTIVE,
+    VFIO_DEVICE_STATE_MIGRATION_STOPNCOPY_ACTIVE,
+    VFIO_DEVICE_STATE_MIGRATION_SAVE_COMPLETED,
+    VFIO_DEVICE_STATE_MIGRATION_RESUME,
+    VFIO_DEVICE_STATE_MIGRATION_RESUME_COMPLETED,
+    VFIO_DEVICE_STATE_MIGRATION_FAILED,
+    VFIO_DEVICE_STATE_MIGRATION_CANCELLED,
+};
+
+#define VFIO_DEVICE_MIGRATION_INFO _IO(VFIO_TYPE, VFIO_BASE + 17)
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
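[Editor's note, not part of the patch: the ioctl above is the whole user-visible
interface, so a short standalone sketch of how a user-space driver might use it
may help review. It assumes <linux/vfio.h> already carries the additions from
this patch and that device_fd was obtained through VFIO_GROUP_GET_DEVICE_FD;
probe_migration() is a hypothetical helper, not code from the series.]

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int probe_migration(int device_fd)
{
    struct vfio_device_migration_info info;

    /* VFIO_MIGRATION_PROBE: ask the vendor driver whether migration is supported. */
    memset(&info, 0, sizeof(info));
    info.argsz = sizeof(info);
    info.flags = VFIO_MIGRATION_PROBE;
    if (ioctl(device_fd, VFIO_DEVICE_MIGRATION_INFO, &info) < 0) {
        return -1;                      /* no migration support */
    }

    /* VFIO_MIGRATION_GET_REGION: region_index and size are output fields. */
    memset(&info, 0, sizeof(info));
    info.argsz = sizeof(info);
    info.flags = VFIO_MIGRATION_GET_REGION;
    if (ioctl(device_fd, VFIO_DEVICE_MIGRATION_INFO, &info) < 0) {
        return -1;
    }

    printf("migration region index %u, size 0x%llx\n",
           info.region_index, (unsigned long long)info.size);
    return 0;
}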
From patchwork Tue Oct 16 18:12:36 2018
X-Patchwork-Submitter: Kirti Wankhede
X-Patchwork-Id: 10644055
From: Kirti Wankhede
Subject: [RFC PATCH v1 2/4] Add migration functions for VFIO devices
Date: Tue, 16 Oct 2018 23:42:36 +0530
Message-ID: <1539713558-2453-3-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>
References: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>

- Migration functions are implemented for VFIO_DEVICE_TYPE_PCI devices.
- Added SaveVMHandlers and implemented all basic functions required for
  live migration.
- Added a VM state change handler to know the running or stopped state of
  the VM.
- Added a migration state change notifier to get notification on migration
  state change. This state is translated to the VFIO device state and
  conveyed to the vendor driver.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 hw/vfio/Makefile.objs         |   2 +-
 hw/vfio/migration.c           | 716 ++++++++++++++++++++++++++++++++++++++++++
 include/hw/vfio/vfio-common.h |  23 ++
 3 files changed, 740 insertions(+), 1 deletion(-)
 create mode 100644 hw/vfio/migration.c

diff --git a/hw/vfio/Makefile.objs b/hw/vfio/Makefile.objs
index a2e7a0a7cf02..6206ad47e90e 100644
--- a/hw/vfio/Makefile.objs
+++ b/hw/vfio/Makefile.objs
@@ -1,6 +1,6 @@
 ifeq ($(CONFIG_LINUX), y)
 obj-$(CONFIG_SOFTMMU) += common.o
-obj-$(CONFIG_PCI) += pci.o pci-quirks.o display.o
+obj-$(CONFIG_PCI) += pci.o pci-quirks.o display.o migration.o
 obj-$(CONFIG_VFIO_CCW) += ccw.o
 obj-$(CONFIG_SOFTMMU) += platform.o
 obj-$(CONFIG_VFIO_XGMAC) += calxeda-xgmac.o
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
new file mode 100644
index 000000000000..8a4f515226e0
--- /dev/null
+++ b/hw/vfio/migration.c
@@ -0,0 +1,716 @@
+/*
+ * Migration support for VFIO devices
+ *
+ * Copyright NVIDIA, Inc. 2018
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */ + +#include "qemu/osdep.h" +#include +#include + +#include "hw/vfio/vfio-common.h" +#include "cpu.h" +#include "migration/migration.h" +#include "migration/qemu-file.h" +#include "migration/register.h" +#include "migration/blocker.h" +#include "migration/misc.h" +#include "qapi/error.h" +#include "exec/ramlist.h" +#include "exec/ram_addr.h" +#include "pci.h" + +/* + * Flags used as delimiter: + * 0xffffffff => MSB 32-bit all 1s + * 0xef10 => emulated (virtual) function IO + * 0x0000 => 16-bits reserved for flags + */ +#define VFIO_MIG_FLAG_END_OF_STATE (0xffffffffef100001ULL) +#define VFIO_MIG_FLAG_DEV_CONFIG_STATE (0xffffffffef100002ULL) +#define VFIO_MIG_FLAG_DEV_SETUP_STATE (0xffffffffef100003ULL) + +static void vfio_migration_region_exit(VFIODevice *vbasedev) +{ + VFIOMigration *migration = vbasedev->migration; + + if (!migration) { + return; + } + + if (migration->region.buffer.size) { + vfio_region_exit(&migration->region.buffer); + vfio_region_finalize(&migration->region.buffer); + } + g_free(vbasedev->migration); +} + +static int vfio_migration_region_init(VFIODevice *vbasedev) +{ + VFIOMigration *migration; + Object *obj = NULL; + int ret; + struct vfio_device_migration_info migration_info = { + .argsz = sizeof(migration_info), + .flags = VFIO_MIGRATION_GET_REGION, + }; + + /* Migration support added for PCI device only */ + if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) { + VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev); + + obj = OBJECT(vdev); + } else + return -EINVAL; + + ret = ioctl(vbasedev->fd, VFIO_DEVICE_MIGRATION_INFO, &migration_info); + if (ret < 0) { + error_report("Failed to migration region %s", + strerror(errno)); + return ret; + } + + if (!migration_info.size || !migration_info.region_index) { + error_report("Incorrect migration region params index: %d,size: 0x%llx", + migration_info.region_index, migration_info.size); + return -EINVAL; + } + + vbasedev->migration = g_new0(VFIOMigration, 1); + migration = vbasedev->migration; + + migration->region.index = migration_info.region_index; + + ret = vfio_region_setup(obj, vbasedev, + &migration->region.buffer, + migration_info.region_index, + "migration"); + if (ret != 0) { + error_report("%s: vfio_region_setup(%d): %s", + __func__, migration_info.region_index, strerror(-ret)); + goto err; + } + + if (migration->region.buffer.mmaps == NULL) { + ret = -EINVAL; + error_report("%s: Migration region (%d) not mappable : %s", + __func__, migration_info.region_index, strerror(-ret)); + goto err; + } + + ret = vfio_region_mmap(&migration->region.buffer); + if (ret != 0) { + error_report("%s: vfio_region_mmap(%d): %s", __func__, + migration_info.region_index, strerror(-ret)); + goto err; + } + assert(migration->region.buffer.mmaps[0].mmap != NULL); + + return 0; + +err: + vfio_migration_region_exit(vbasedev); + return ret; +} + +static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state) +{ + int ret = 0; + struct vfio_device_migration_info migration_info = { + .argsz = sizeof(migration_info), + .flags = VFIO_MIGRATION_SET_STATE, + .device_state = state, + }; + + if (vbasedev->device_state == state) { + return ret; + } + + ret = ioctl(vbasedev->fd, VFIO_DEVICE_MIGRATION_INFO, &migration_info); + if (ret < 0) { + error_report("Failed to set migration state %d %s", + ret, strerror(errno)); + return ret; + } + + vbasedev->device_state = state; + return ret; +} + +void vfio_get_dirty_page_list(VFIODevice *vbasedev, + uint64_t start_addr, + uint64_t pfn_count) +{ + uint64_t count = 0; + int ret; 
+ struct vfio_device_migration_info *migration_info; + uint64_t bitmap_size; + + bitmap_size = (BITS_TO_LONGS(pfn_count) + 1) * sizeof(unsigned long); + + migration_info = g_malloc0(sizeof(*migration_info) + bitmap_size); + if (!migration_info) { + error_report("Failed to allocated migration_info %s", + strerror(errno)); + return; + } + + memset(migration_info, 0, sizeof(*migration_info) + bitmap_size); + migration_info->flags = VFIO_MIGRATION_GET_DIRTY_PFNS, + migration_info->start_addr = start_addr; + migration_info->pfn_count = pfn_count; + migration_info->argsz = sizeof(*migration_info) + bitmap_size; + + ret = ioctl(vbasedev->fd, VFIO_DEVICE_MIGRATION_INFO, migration_info); + if (ret < 0) { + error_report("Failed to get dirty pages bitmap %d %s", + ret, strerror(errno)); + g_free(migration_info); + return; + } + + if (migration_info->pfn_count) { + cpu_physical_memory_set_dirty_lebitmap( + (unsigned long *)&migration_info->dirty_bitmap, + migration_info->start_addr, migration_info->pfn_count); + count += migration_info->pfn_count; + } + g_free(migration_info); +} + +static int vfio_save_device_config_state(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_STATE); + + if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) { + VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev); + PCIDevice *pdev = &vdev->pdev; + uint32_t msi_flags, msi_addr_lo, msi_addr_hi = 0, msi_data; + bool msi_64bit; + int i; + + for (i = 0; i < PCI_ROM_SLOT; i++) { + uint32_t bar; + + bar = pci_default_read_config(pdev, PCI_BASE_ADDRESS_0 + i * 4, 4); + qemu_put_be32(f, bar); + } + + msi_flags = pci_default_read_config(pdev, + pdev->msi_cap + PCI_MSI_FLAGS, 2); + msi_64bit = (msi_flags & PCI_MSI_FLAGS_64BIT); + + msi_addr_lo = pci_default_read_config(pdev, + pdev->msi_cap + PCI_MSI_ADDRESS_LO, 4); + qemu_put_be32(f, msi_addr_lo); + + if (msi_64bit) { + msi_addr_hi = pci_default_read_config(pdev, + pdev->msi_cap + PCI_MSI_ADDRESS_HI, + 4); + } + qemu_put_be32(f, msi_addr_hi); + + msi_data = pci_default_read_config(pdev, + pdev->msi_cap + (msi_64bit ? 
PCI_MSI_DATA_64 : PCI_MSI_DATA_32), + 2); + qemu_put_be32(f, msi_data); + } + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + return qemu_file_get_error(f); +} + +static int vfio_load_device_config_state(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + + if (vbasedev->type == VFIO_DEVICE_TYPE_PCI) { + VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev); + PCIDevice *pdev = &vdev->pdev; + uint32_t pci_cmd; + uint32_t msi_flags, msi_addr_lo, msi_addr_hi = 0, msi_data; + bool msi_64bit; + int i; + + /* retore pci bar configuration */ + pci_cmd = pci_default_read_config(pdev, PCI_COMMAND, 2); + vfio_pci_write_config(pdev, PCI_COMMAND, + pci_cmd & (!(PCI_COMMAND_IO | PCI_COMMAND_MEMORY)), 2); + for (i = 0; i < PCI_ROM_SLOT; i++) { + uint32_t bar = qemu_get_be32(f); + + vfio_pci_write_config(pdev, PCI_BASE_ADDRESS_0 + i * 4, bar, 4); + } + vfio_pci_write_config(pdev, PCI_COMMAND, + pci_cmd | PCI_COMMAND_IO | PCI_COMMAND_MEMORY, 2); + + /* restore msi configuration */ + msi_flags = pci_default_read_config(pdev, + pdev->msi_cap + PCI_MSI_FLAGS, + 2); + msi_64bit = (msi_flags & PCI_MSI_FLAGS_64BIT); + + vfio_pci_write_config(&vdev->pdev, + pdev->msi_cap + PCI_MSI_FLAGS, + msi_flags & (!PCI_MSI_FLAGS_ENABLE), + 2); + + msi_addr_lo = qemu_get_be32(f); + vfio_pci_write_config(pdev, + pdev->msi_cap + PCI_MSI_ADDRESS_LO, + msi_addr_lo, + 4); + + msi_addr_hi = qemu_get_be32(f); + if (msi_64bit) { + vfio_pci_write_config(pdev, pdev->msi_cap + PCI_MSI_ADDRESS_HI, + msi_addr_hi, 4); + } + msi_data = qemu_get_be32(f); + vfio_pci_write_config(pdev, + pdev->msi_cap + (msi_64bit ? PCI_MSI_DATA_64 : + PCI_MSI_DATA_32), + msi_data, + 2); + + vfio_pci_write_config(&vdev->pdev, + pdev->msi_cap + PCI_MSI_FLAGS, + msi_flags | PCI_MSI_FLAGS_ENABLE, + 2); + } + + if (qemu_get_be64(f) != VFIO_MIG_FLAG_END_OF_STATE) { + error_report("%s Wrong end of block ", __func__); + return -EINVAL; + } + + return qemu_file_get_error(f); +} + +/* ---------------------------------------------------------------------- */ + +static bool vfio_is_active_iterate(void *opaque) +{ + VFIODevice *vbasedev = opaque; + + if (vbasedev->vm_running && vbasedev->migration && + (vbasedev->migration->pending_precopy_only != 0)) + return true; + + if (!vbasedev->vm_running && vbasedev->migration && + (vbasedev->migration->pending_postcopy != 0)) + return true; + + return false; +} + +static int vfio_save_setup(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + int ret; + + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE); + + qemu_mutex_lock_iothread(); + ret = vfio_migration_region_init(vbasedev); + qemu_mutex_unlock_iothread(); + if (ret) { + return ret; + } + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + + return 0; +} + +static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev) +{ + VFIOMigration *migration = vbasedev->migration; + uint8_t *buf = (uint8_t *)migration->region.buffer.mmaps[0].mmap; + int ret; + struct vfio_device_migration_info migration_info = { + .argsz = sizeof(migration_info), + .flags = VFIO_MIGRATION_GET_BUFFER, + }; + + ret = ioctl(vbasedev->fd, VFIO_DEVICE_MIGRATION_INFO, &migration_info); + if (ret < 0) { + error_report("Failed to get migration buffer information %s", + strerror(errno)); + return ret; + } + + qemu_put_be64(f, migration_info.bytes_written); + + if (migration_info.bytes_written) { + qemu_put_buffer(f, buf, migration_info.bytes_written); + } + + ret = qemu_file_get_error(f); + if (ret) { + 
return ret; + } + + return migration_info.bytes_written; +} + +static int vfio_save_iterate(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + int ret; + + ret = vfio_save_buffer(f, vbasedev); + if (ret < 0) { + error_report("vfio_save_buffer failed %s", + strerror(errno)); + return ret; + } + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + + return ret; +} + +static void vfio_update_pending(VFIODevice *vbasedev, uint64_t threshold_size) +{ + struct vfio_device_migration_info migration_info; + VFIOMigration *migration = vbasedev->migration; + int ret; + + migration_info.argsz = sizeof(migration_info); + migration_info.flags = VFIO_MIGRATION_GET_PENDING; + migration_info.threshold_size = threshold_size; + + ret = ioctl(vbasedev->fd, VFIO_DEVICE_MIGRATION_INFO, &migration_info); + if (ret < 0) { + error_report("Failed to get pending bytes %s", + strerror(errno)); + return; + } + + migration->pending_precopy_only = migration_info.pending_precopy_only; + migration->pending_compatible = migration_info.pending_compatible; + migration->pending_postcopy = migration_info.pending_postcopy_only; + + return; +} + +static void vfio_save_pending(QEMUFile *f, void *opaque, + uint64_t threshold_size, + uint64_t *res_precopy_only, + uint64_t *res_compatible, + uint64_t *res_postcopy_only) +{ + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + + vfio_update_pending(vbasedev, threshold_size); + + *res_precopy_only += migration->pending_precopy_only; + *res_compatible += migration->pending_compatible; + *res_postcopy_only += migration->pending_postcopy; +} + +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + MigrationState *ms = migrate_get_current(); + int ret; + + if (vbasedev->vm_running) { + vbasedev->vm_running = 0; + } + + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_STOPNCOPY_ACTIVE); + if (ret) { + error_report("Failed to set state STOPNCOPY_ACTIVE"); + return ret; + } + + ret = vfio_save_device_config_state(f, opaque); + if (ret) { + return ret; + } + + do { + vfio_update_pending(vbasedev, ms->threshold_size); + + if (vfio_is_active_iterate(opaque)) { + ret = vfio_save_buffer(f, vbasedev); + if (ret < 0) { + error_report("Failed to save buffer"); + break; + } else if (ret == 0) { + break; + } + } + } while ((migration->pending_compatible + migration->pending_postcopy) > 0); + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_SAVE_COMPLETED); + if (ret) { + error_report("Failed to set state SAVE_COMPLETED"); + return ret; + } + return ret; +} + +static void vfio_save_cleanup(void *opaque) +{ + VFIODevice *vbasedev = opaque; + + vfio_migration_region_exit(vbasedev); +} + +static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) +{ + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + uint8_t *buf = (uint8_t *)migration->region.buffer.mmaps[0].mmap; + int ret; + uint64_t data; + + data = qemu_get_be64(f); + while (data != VFIO_MIG_FLAG_END_OF_STATE) { + if (data == VFIO_MIG_FLAG_DEV_CONFIG_STATE) { + ret = vfio_load_device_config_state(f, opaque); + if (ret) { + return ret; + } + } else if (data == VFIO_MIG_FLAG_DEV_SETUP_STATE) { + data = qemu_get_be64(f); + if (data == 
VFIO_MIG_FLAG_END_OF_STATE) { + return 0; + } else { + error_report("SETUP STATE: EOS not found 0x%lx", data); + return -EINVAL; + } + } else if (data != 0) { + struct vfio_device_migration_info migration_info = { + .argsz = sizeof(migration_info), + .flags = VFIO_MIGRATION_SET_BUFFER, + }; + + qemu_get_buffer(f, buf, data); + migration_info.bytes_written = data; + + ret = ioctl(vbasedev->fd, + VFIO_DEVICE_MIGRATION_INFO, + &migration_info); + if (ret < 0) { + error_report("Failed to set migration buffer information %s", + strerror(errno)); + return ret; + } + } + + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + data = qemu_get_be64(f); + } + + return 0; +} + +static int vfio_load_setup(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + int ret; + + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_RESUME); + if (ret) { + error_report("Failed to set state RESUME"); + } + + ret = vfio_migration_region_init(vbasedev); + if (ret) { + error_report("Failed to initialise migration region"); + return ret; + } + + return 0; +} + +static int vfio_load_cleanup(void *opaque) +{ + VFIODevice *vbasedev = opaque; + int ret = 0; + + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_RESUME_COMPLETED); + if (ret) { + error_report("Failed to set state RESUME_COMPLETED"); + } + + vfio_migration_region_exit(vbasedev); + return ret; +} + +static SaveVMHandlers savevm_vfio_handlers = { + .save_setup = vfio_save_setup, + .save_live_iterate = vfio_save_iterate, + .save_live_complete_precopy = vfio_save_complete_precopy, + .save_live_pending = vfio_save_pending, + .save_cleanup = vfio_save_cleanup, + .load_state = vfio_load_state, + .load_setup = vfio_load_setup, + .load_cleanup = vfio_load_cleanup, + .is_active_iterate = vfio_is_active_iterate, +}; + +static void vfio_vmstate_change(void *opaque, int running, RunState state) +{ + VFIODevice *vbasedev = opaque; + + if ((vbasedev->vm_running != running) && running) { + int ret; + + ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING); + if (ret) { + error_report("Failed to set state RUNNING"); + } + } + + vbasedev->vm_running = running; +} + +static void vfio_migration_state_notifier(Notifier *notifier, void *data) +{ + MigrationState *s = data; + VFIODevice *vbasedev = container_of(notifier, VFIODevice, migration_state); + int ret; + + switch (s->state) { + case MIGRATION_STATUS_SETUP: + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_SETUP); + if (ret) { + error_report("Failed to set state SETUP"); + } + return; + + case MIGRATION_STATUS_ACTIVE: + if (vbasedev->device_state == VFIO_DEVICE_STATE_MIGRATION_SETUP) { + if (vbasedev->vm_running) { + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_PRECOPY_ACTIVE); + if (ret) { + error_report("Failed to set state PRECOPY_ACTIVE"); + } + } else { + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_STOPNCOPY_ACTIVE); + if (ret) { + error_report("Failed to set state STOPNCOPY_ACTIVE"); + } + } + } else { + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_RESUME); + if (ret) { + error_report("Failed to set state RESUME"); + } + } + return; + + case MIGRATION_STATUS_CANCELLING: + case MIGRATION_STATUS_CANCELLED: + ret = vfio_migration_set_state(vbasedev, + VFIO_DEVICE_STATE_MIGRATION_CANCELLED); + if (ret) { + error_report("Failed to set state CANCELLED"); + } + return; + + case MIGRATION_STATUS_FAILED: + ret = vfio_migration_set_state(vbasedev, + 
VFIO_DEVICE_STATE_MIGRATION_FAILED); + if (ret) { + error_report("Failed to set state FAILED"); + } + return; + } +} + +static int vfio_migration_init(VFIODevice *vbasedev) +{ + register_savevm_live(NULL, "vfio", -1, 1, &savevm_vfio_handlers, vbasedev); + vbasedev->vm_state = qemu_add_vm_change_state_handler(vfio_vmstate_change, + vbasedev); + + vbasedev->migration_state.notify = vfio_migration_state_notifier; + add_migration_state_change_notifier(&vbasedev->migration_state); + + return 0; +} + + +/* ---------------------------------------------------------------------- */ + +int vfio_migration_probe(VFIODevice *vbasedev, Error **errp) +{ + struct vfio_device_migration_info probe; + Error *local_err = NULL; + int ret; + + memset(&probe, 0, sizeof(probe)); + probe.argsz = sizeof(probe); + probe.flags = VFIO_MIGRATION_PROBE; + ret = ioctl(vbasedev->fd, VFIO_DEVICE_MIGRATION_INFO, &probe); + + if (ret == 0) { + return vfio_migration_init(vbasedev); + } + + error_setg(&vbasedev->migration_blocker, + "VFIO device doesn't support migration"); + ret = migrate_add_blocker(vbasedev->migration_blocker, &local_err); + if (local_err) { + error_propagate(errp, local_err); + error_free(vbasedev->migration_blocker); + return ret; + } + + return 0; +} + +void vfio_migration_finalize(VFIODevice *vbasedev) +{ + if (vbasedev->vm_state) { + qemu_del_vm_change_state_handler(vbasedev->vm_state); + remove_migration_state_change_notifier(&vbasedev->migration_state); + } + + if (vbasedev->migration_blocker) { + migrate_del_blocker(vbasedev->migration_blocker); + error_free(vbasedev->migration_blocker); + } +} diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h index a9036929b220..ab8217c9e249 100644 --- a/include/hw/vfio/vfio-common.h +++ b/include/hw/vfio/vfio-common.h @@ -30,6 +30,8 @@ #include #endif +#include "sysemu/sysemu.h" + #define ERR_PREFIX "vfio error: %s: " #define WARN_PREFIX "vfio warning: %s: " @@ -57,6 +59,16 @@ typedef struct VFIORegion { uint8_t nr; /* cache the region number for debug */ } VFIORegion; +typedef struct VFIOMigration { + struct { + VFIORegion buffer; + uint32_t index; + } region; + uint64_t pending_precopy_only; + uint64_t pending_compatible; + uint64_t pending_postcopy; +} VFIOMigration; + typedef struct VFIOAddressSpace { AddressSpace *as; QLIST_HEAD(, VFIOContainer) containers; @@ -116,6 +128,12 @@ typedef struct VFIODevice { unsigned int num_irqs; unsigned int num_regions; unsigned int flags; + uint32_t device_state; + VMChangeStateEntry *vm_state; + int vm_running; + Notifier migration_state; + VFIOMigration *migration; + Error *migration_blocker; } VFIODevice; struct VFIODeviceOps { @@ -193,4 +211,9 @@ int vfio_spapr_create_window(VFIOContainer *container, int vfio_spapr_remove_window(VFIOContainer *container, hwaddr offset_within_address_space); +int vfio_migration_probe(VFIODevice *vbasedev, Error **errp); +void vfio_migration_finalize(VFIODevice *vbasedev); +void vfio_get_dirty_page_list(VFIODevice *vbasedev, uint64_t start_addr, + uint64_t pfn_count); + #endif /* HW_VFIO_VFIO_COMMON_H */ From patchwork Tue Oct 16 18:12:37 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kirti Wankhede X-Patchwork-Id: 10644049 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2E4A1109C for ; Tue, 16 Oct 2018 18:14:28 +0000 (UTC) Received: from mail.wl.linuxfoundation.org 
From: Kirti Wankhede
Subject: [RFC PATCH v1 3/4] Add vfio_listerner_log_sync to mark dirty pages
Date: Tue, 16 Oct 2018 23:42:37 +0530
Message-ID: <1539713558-2453-4-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>
References: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>

vfio_listerner_log_sync gets the list of dirty pages from the vendor driver
and marks those pages dirty.
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 hw/vfio/common.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index fb396cf00ac4..817d93750337 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -697,9 +697,41 @@ static void vfio_listener_region_del(MemoryListener *listener,
     }
 }
 
+static void vfio_listerner_log_sync(MemoryListener *listener,
+        MemoryRegionSection *section)
+{
+    uint64_t start_addr, size, pfn_count;
+    VFIOGroup *group;
+    VFIODevice *vbasedev;
+
+    QLIST_FOREACH(group, &vfio_group_list, next) {
+        QLIST_FOREACH(vbasedev, &group->device_list, next) {
+            switch (vbasedev->device_state) {
+            case VFIO_DEVICE_STATE_MIGRATION_PRECOPY_ACTIVE:
+            case VFIO_DEVICE_STATE_MIGRATION_STOPNCOPY_ACTIVE:
+                continue;
+
+            default:
+                return;
+            }
+        }
+    }
+
+    start_addr = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+    size = int128_get64(section->size);
+    pfn_count = size >> TARGET_PAGE_BITS;
+
+    QLIST_FOREACH(group, &vfio_group_list, next) {
+        QLIST_FOREACH(vbasedev, &group->device_list, next) {
+            vfio_get_dirty_page_list(vbasedev, start_addr, pfn_count);
+        }
+    }
+}
+
 static const MemoryListener vfio_memory_listener = {
     .region_add = vfio_listener_region_add,
     .region_del = vfio_listener_region_del,
+    .log_sync = vfio_listerner_log_sync,
 };
 
 static void vfio_listener_release(VFIOContainer *container)
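[Editor's note, not part of the series: patch 3 derives pfn_count from the
MemoryRegionSection, and patch 2 sizes the dirty bitmap from that count before
issuing VFIO_MIGRATION_GET_DIRTY_PFNS. The same arithmetic as a standalone
check, assuming 4 KiB target pages and 64-bit longs:]

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS   12                      /* assumes 4 KiB target pages */
#define BITS_PER_LONG      (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n)   (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
    uint64_t section_size = 1ULL << 30;            /* hypothetical 1 GiB section */
    uint64_t pfn_count = section_size >> TARGET_PAGE_BITS;
    /* One bit per pfn; patch 2 adds one extra long of slack when allocating. */
    uint64_t bitmap_size = (BITS_TO_LONGS(pfn_count) + 1) * sizeof(unsigned long);

    printf("%llu pfns -> %llu byte dirty bitmap\n",
           (unsigned long long)pfn_count, (unsigned long long)bitmap_size);
    return 0;
}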
From patchwork Tue Oct 16 18:12:38 2018
X-Patchwork-Submitter: Kirti Wankhede
X-Patchwork-Id: 10644053
From: Kirti Wankhede
Subject: [RFC PATCH v1 4/4] Make vfio-pci device migration capable.
Date: Tue, 16 Oct 2018 23:42:38 +0530
Message-ID: <1539713558-2453-5-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>
References: <1539713558-2453-1-git-send-email-kwankhede@nvidia.com>

Call vfio_migration_probe() and vfio_migration_finalize() for the vfio-pci
device to enable migration for VFIO PCI devices.
Removed the vfio_pci_vmstate structure.

Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
---
 hw/vfio/pci.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 6cbb8fa0549d..dd833f3f1830 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -2835,6 +2835,7 @@ static void vfio_realize(PCIDevice *pdev, Error **errp)
     vdev->vbasedev.ops = &vfio_pci_ops;
     vdev->vbasedev.type = VFIO_DEVICE_TYPE_PCI;
     vdev->vbasedev.dev = &vdev->pdev.qdev;
+    vdev->vbasedev.device_state = VFIO_DEVICE_STATE_NONE;
 
     tmp = g_strdup_printf("%s/iommu_group", vdev->vbasedev.sysfsdev);
     len = readlink(tmp, group_path, sizeof(group_path));
@@ -3046,10 +3047,11 @@ static void vfio_realize(PCIDevice *pdev, Error **errp)
         }
     }
 
+    ret = vfio_migration_probe(&vdev->vbasedev, errp);
+
     vfio_register_err_notifier(vdev);
     vfio_register_req_notifier(vdev);
     vfio_setup_resetfn_quirk(vdev);
-
     return;
 
 out_teardown:
@@ -3085,6 +3087,8 @@ static void vfio_exitfn(PCIDevice *pdev)
 {
     VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
 
+    vdev->vbasedev.device_state = VFIO_DEVICE_STATE_NONE;
+
     vfio_unregister_req_notifier(vdev);
     vfio_unregister_err_notifier(vdev);
     pci_device_set_intx_routing_notifier(&vdev->pdev, NULL);
@@ -3094,6 +3098,7 @@ static void vfio_exitfn(PCIDevice *pdev)
     }
     vfio_teardown_msi(vdev);
     vfio_bars_exit(vdev);
+    vfio_migration_finalize(&vdev->vbasedev);
 }
 
 static void vfio_pci_reset(DeviceState *dev)
@@ -3199,11 +3204,6 @@ static Property vfio_pci_dev_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
-static const VMStateDescription vfio_pci_vmstate = {
-    .name = "vfio-pci",
-    .unmigratable = 1,
-};
-
 static void vfio_pci_dev_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
@@ -3211,7 +3211,6 @@ static void vfio_pci_dev_class_init(ObjectClass *klass, void *data)
 
     dc->reset = vfio_pci_reset;
     dc->props = vfio_pci_dev_properties;
-    dc->vmsd = &vfio_pci_vmstate;
     dc->desc = "VFIO-based PCI device assignment";
     set_bit(DEVICE_CATEGORY_MISC, dc->categories);
     pdc->realize = vfio_realize;
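[Editor's note, not part of the series: for review convenience, a bare-bones
sketch of the save loop that patches 1 and 2 imply, written against the raw
ioctl rather than QEMU's SaveVMHandlers. It assumes <linux/vfio.h> carries the
proposed definitions; device_fd, mig_buf and consume() are hypothetical
stand-ins for the VFIO device fd, the mmap'd migration region and the code
that ships bytes to the destination.]

#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int save_iteration(int device_fd, const void *mig_buf,
                          void (*consume)(const void *buf, __u64 len))
{
    struct vfio_device_migration_info info;

    for (;;) {
        /* Ask the vendor driver how much data is still outstanding. */
        memset(&info, 0, sizeof(info));
        info.argsz = sizeof(info);
        info.flags = VFIO_MIGRATION_GET_PENDING;
        info.threshold_size = 1024 * 1024;          /* hypothetical threshold */
        if (ioctl(device_fd, VFIO_DEVICE_MIGRATION_INFO, &info) < 0) {
            return -1;
        }
        if (info.pending_precopy_only + info.pending_compatible == 0) {
            return 0;                               /* nothing left in this phase */
        }

        /* Have the vendor driver fill the mmap'd migration region. */
        memset(&info, 0, sizeof(info));
        info.argsz = sizeof(info);
        info.flags = VFIO_MIGRATION_GET_BUFFER;
        if (ioctl(device_fd, VFIO_DEVICE_MIGRATION_INFO, &info) < 0) {
            return -1;
        }
        if (info.bytes_written == 0) {
            return 0;
        }
        consume(mig_buf, info.bytes_written);       /* ship to the destination */
    }
}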