From patchwork Mon Feb 6 12:31:35 2023
X-Patchwork-Submitter: Avihai Horon <avihaih@nvidia.com>
X-Patchwork-Id: 13129806
From: Avihai Horon <avihaih@nvidia.com>
To: qemu-devel@nongnu.org
Cc: Alex Williamson, Halil Pasic, Christian Borntraeger, Eric Farman,
    Richard Henderson, David Hildenbrand, Ilya Leoshkevich, Thomas Huth,
    Juan Quintela, "Dr. David Alan Gilbert", "Michael S. Tsirkin",
    Cornelia Huck, Paolo Bonzini, Stefan Hajnoczi, Fam Zheng, Eric Blake,
    Vladimir Sementsov-Ogievskiy, John Snow, Cédric Le Goater,
    Yishai Hadas, Jason Gunthorpe, Maor Gottlieb, Avihai Horon,
    Kirti Wankhede, Tarun Gupta, Joao Martins
Subject: [PATCH v9 12/14] vfio/migration: Remove VFIO migration protocol v1
Date: Mon, 6 Feb 2023 14:31:35 +0200
Message-ID: <20230206123137.31149-13-avihaih@nvidia.com>
In-Reply-To: <20230206123137.31149-1-avihaih@nvidia.com>
References: <20230206123137.31149-1-avihaih@nvidia.com>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0

Now that the v2 protocol implementation has been added, remove the
deprecated v1 implementation.
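For context, the v2 protocol that remains after this patch drives the device
entirely through the VFIO_DEVICE_FEATURE uAPI: the device is moved between
migration states and, for states that open a data transfer session, the kernel
hands back a data_fd through which the device state is streamed. The snippet
below is only a rough illustration of that state-transition ioctl, based on
the vfio_device_feature_mig_state definitions in <linux/vfio.h>; the helper
name, parameters and error handling here are hypothetical and much simpler
than QEMU's actual vfio_migration_set_state().

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Illustrative sketch: ask the kernel to move a VFIO device to 'new_state'
 * (e.g. VFIO_DEVICE_STATE_STOP_COPY).  When the new state opens a data
 * transfer session, the kernel returns a file descriptor in mig->data_fd;
 * the field is only meaningful for such transitions.
 */
static int mig_set_state(int device_fd, uint32_t new_state, int *data_fd)
{
    /* The feature struct carries a variable-size payload; keep it aligned. */
    uint64_t buf[(sizeof(struct vfio_device_feature) +
                  sizeof(struct vfio_device_feature_mig_state) + 7) / 8];
    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
    struct vfio_device_feature_mig_state *mig =
        (struct vfio_device_feature_mig_state *)feature->data;

    memset(buf, 0, sizeof(buf));
    feature->argsz = sizeof(buf);
    feature->flags = VFIO_DEVICE_FEATURE_SET |
                     VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
    mig->device_state = new_state;

    if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature)) {
        return -errno;
    }

    if (data_fd) {
        *data_fd = mig->data_fd;
    }
    return 0;
}

All of the region-based plumbing removed below (vfio_mig_rw(),
vfio_save_buffer(), vfio_update_pending(), the v1 SaveVMHandlers, ...) existed
only to drive the equivalent transitions through the deprecated v1 migration
region, which is why it can go away wholesale.
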
Signed-off-by: Avihai Horon Reviewed-by: Cédric Le Goater --- include/hw/vfio/vfio-common.h | 5 - hw/vfio/common.c | 17 +- hw/vfio/migration.c | 703 +--------------------------------- hw/vfio/trace-events | 9 - 4 files changed, 23 insertions(+), 711 deletions(-) diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h index 3c94660608..f7da9f50a9 100644 --- a/include/hw/vfio/vfio-common.h +++ b/include/hw/vfio/vfio-common.h @@ -61,18 +61,13 @@ typedef struct VFIORegion { typedef struct VFIOMigration { struct VFIODevice *vbasedev; VMChangeStateEntry *vm_state; - VFIORegion region; - uint32_t device_state_v1; - int vm_running; Notifier migration_state; NotifierWithReturn migration_data; - uint64_t pending_bytes; uint32_t device_state; int data_fd; void *data_buffer; size_t data_buffer_size; uint64_t stop_copy_size; - bool v2; } VFIOMigration; typedef struct VFIOAddressSpace { diff --git a/hw/vfio/common.c b/hw/vfio/common.c index 8628f2c18c..f5055df86d 100644 --- a/hw/vfio/common.c +++ b/hw/vfio/common.c @@ -406,14 +406,7 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container) return false; } - if (!migration->v2 && - (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) && - (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING)) { - return false; - } - - if (migration->v2 && - vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF && + if (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF && migration->device_state == VFIO_DEVICE_STATE_RUNNING) { return false; } @@ -443,13 +436,7 @@ static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container) return false; } - if (!migration->v2 && - migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) { - continue; - } - - if (migration->v2 && - migration->device_state == VFIO_DEVICE_STATE_RUNNING) { + if (migration->device_state == VFIO_DEVICE_STATE_RUNNING) { continue; } else { return false; diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index 5daeb5a106..cf5c196299 100644 --- a/hw/vfio/migration.c +++ b/hw/vfio/migration.c @@ -140,220 +140,6 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, return 0; } -static inline int vfio_mig_access(VFIODevice *vbasedev, void *val, int count, - off_t off, bool iswrite) -{ - int ret; - - ret = iswrite ? pwrite(vbasedev->fd, val, count, off) : - pread(vbasedev->fd, val, count, off); - if (ret < count) { - error_report("vfio_mig_%s %d byte %s: failed at offset 0x%" - HWADDR_PRIx", err: %s", iswrite ? "write" : "read", count, - vbasedev->name, off, strerror(errno)); - return (ret < 0) ? ret : -EINVAL; - } - return 0; -} - -static int vfio_mig_rw(VFIODevice *vbasedev, __u8 *buf, size_t count, - off_t off, bool iswrite) -{ - int ret, done = 0; - __u8 *tbuf = buf; - - while (count) { - int bytes = 0; - - if (count >= 8 && !(off % 8)) { - bytes = 8; - } else if (count >= 4 && !(off % 4)) { - bytes = 4; - } else if (count >= 2 && !(off % 2)) { - bytes = 2; - } else { - bytes = 1; - } - - ret = vfio_mig_access(vbasedev, tbuf, bytes, off, iswrite); - if (ret) { - return ret; - } - - count -= bytes; - done += bytes; - off += bytes; - tbuf += bytes; - } - return done; -} - -#define vfio_mig_read(f, v, c, o) vfio_mig_rw(f, (__u8 *)v, c, o, false) -#define vfio_mig_write(f, v, c, o) vfio_mig_rw(f, (__u8 *)v, c, o, true) - -#define VFIO_MIG_STRUCT_OFFSET(f) \ - offsetof(struct vfio_device_migration_info, f) -/* - * Change the device_state register for device @vbasedev. 
Bits set in @mask - * are preserved, bits set in @value are set, and bits not set in either @mask - * or @value are cleared in device_state. If the register cannot be accessed, - * the resulting state would be invalid, or the device enters an error state, - * an error is returned. - */ - -static int vfio_migration_v1_set_state(VFIODevice *vbasedev, uint32_t mask, - uint32_t value) -{ - VFIOMigration *migration = vbasedev->migration; - VFIORegion *region = &migration->region; - off_t dev_state_off = region->fd_offset + - VFIO_MIG_STRUCT_OFFSET(device_state); - uint32_t device_state; - int ret; - - ret = vfio_mig_read(vbasedev, &device_state, sizeof(device_state), - dev_state_off); - if (ret < 0) { - return ret; - } - - device_state = (device_state & mask) | value; - - if (!VFIO_DEVICE_STATE_VALID(device_state)) { - return -EINVAL; - } - - ret = vfio_mig_write(vbasedev, &device_state, sizeof(device_state), - dev_state_off); - if (ret < 0) { - int rret; - - rret = vfio_mig_read(vbasedev, &device_state, sizeof(device_state), - dev_state_off); - - if ((rret < 0) || (VFIO_DEVICE_STATE_IS_ERROR(device_state))) { - hw_error("%s: Device in error state 0x%x", vbasedev->name, - device_state); - return rret ? rret : -EIO; - } - return ret; - } - - migration->device_state_v1 = device_state; - trace_vfio_migration_v1_set_state(vbasedev->name, device_state); - return 0; -} - -static void *get_data_section_size(VFIORegion *region, uint64_t data_offset, - uint64_t data_size, uint64_t *size) -{ - void *ptr = NULL; - uint64_t limit = 0; - int i; - - if (!region->mmaps) { - if (size) { - *size = MIN(data_size, region->size - data_offset); - } - return ptr; - } - - for (i = 0; i < region->nr_mmaps; i++) { - VFIOMmap *map = region->mmaps + i; - - if ((data_offset >= map->offset) && - (data_offset < map->offset + map->size)) { - - /* check if data_offset is within sparse mmap areas */ - ptr = map->mmap + data_offset - map->offset; - if (size) { - *size = MIN(data_size, map->offset + map->size - data_offset); - } - break; - } else if ((data_offset < map->offset) && - (!limit || limit > map->offset)) { - /* - * data_offset is not within sparse mmap areas, find size of - * non-mapped area. Check through all list since region->mmaps list - * is not sorted. - */ - limit = map->offset; - } - } - - if (!ptr && size) { - *size = limit ? 
MIN(data_size, limit - data_offset) : data_size; - } - return ptr; -} - -static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size) -{ - VFIOMigration *migration = vbasedev->migration; - VFIORegion *region = &migration->region; - uint64_t data_offset = 0, data_size = 0, sz; - int ret; - - ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_offset)); - if (ret < 0) { - return ret; - } - - ret = vfio_mig_read(vbasedev, &data_size, sizeof(data_size), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_size)); - if (ret < 0) { - return ret; - } - - trace_vfio_save_buffer(vbasedev->name, data_offset, data_size, - migration->pending_bytes); - - qemu_put_be64(f, data_size); - sz = data_size; - - while (sz) { - void *buf; - uint64_t sec_size; - bool buf_allocated = false; - - buf = get_data_section_size(region, data_offset, sz, &sec_size); - - if (!buf) { - buf = g_try_malloc(sec_size); - if (!buf) { - error_report("%s: Error allocating buffer ", __func__); - return -ENOMEM; - } - buf_allocated = true; - - ret = vfio_mig_read(vbasedev, buf, sec_size, - region->fd_offset + data_offset); - if (ret < 0) { - g_free(buf); - return ret; - } - } - - qemu_put_buffer(f, buf, sec_size); - - if (buf_allocated) { - g_free(buf); - } - sz -= sec_size; - data_offset += sec_size; - } - - ret = qemu_file_get_error(f); - - if (!ret && size) { - *size = data_size; - } - - bytes_transferred += data_size; - return ret; -} - static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t data_size) { @@ -366,96 +152,6 @@ static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, return ret; } -static int vfio_v1_load_buffer(QEMUFile *f, VFIODevice *vbasedev, - uint64_t data_size) -{ - VFIORegion *region = &vbasedev->migration->region; - uint64_t data_offset = 0, size, report_size; - int ret; - - do { - ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_offset)); - if (ret < 0) { - return ret; - } - - if (data_offset + data_size > region->size) { - /* - * If data_size is greater than the data section of migration region - * then iterate the write buffer operation. This case can occur if - * size of migration region at destination is smaller than size of - * migration region at source. 
- */ - report_size = size = region->size - data_offset; - data_size -= size; - } else { - report_size = size = data_size; - data_size = 0; - } - - trace_vfio_v1_load_state_device_data(vbasedev->name, data_offset, size); - - while (size) { - void *buf; - uint64_t sec_size; - bool buf_alloc = false; - - buf = get_data_section_size(region, data_offset, size, &sec_size); - - if (!buf) { - buf = g_try_malloc(sec_size); - if (!buf) { - error_report("%s: Error allocating buffer ", __func__); - return -ENOMEM; - } - buf_alloc = true; - } - - qemu_get_buffer(f, buf, sec_size); - - if (buf_alloc) { - ret = vfio_mig_write(vbasedev, buf, sec_size, - region->fd_offset + data_offset); - g_free(buf); - - if (ret < 0) { - return ret; - } - } - size -= sec_size; - data_offset += sec_size; - } - - ret = vfio_mig_write(vbasedev, &report_size, sizeof(report_size), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_size)); - if (ret < 0) { - return ret; - } - } while (data_size); - - return 0; -} - -static int vfio_update_pending(VFIODevice *vbasedev) -{ - VFIOMigration *migration = vbasedev->migration; - VFIORegion *region = &migration->region; - uint64_t pending_bytes = 0; - int ret; - - ret = vfio_mig_read(vbasedev, &pending_bytes, sizeof(pending_bytes), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(pending_bytes)); - if (ret < 0) { - migration->pending_bytes = 0; - return ret; - } - - migration->pending_bytes = pending_bytes; - trace_vfio_update_pending(vbasedev->name, pending_bytes); - return 0; -} - static int vfio_save_device_config_state(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -508,15 +204,6 @@ static void vfio_migration_cleanup(VFIODevice *vbasedev) migration->data_fd = -1; } -static void vfio_migration_v1_cleanup(VFIODevice *vbasedev) -{ - VFIOMigration *migration = vbasedev->migration; - - if (migration->region.mmaps) { - vfio_region_unmap(&migration->region); - } -} - static int vfio_query_stop_copy_size(VFIODevice *vbasedev, uint64_t *stop_copy_size) { @@ -591,49 +278,6 @@ static int vfio_save_setup(QEMUFile *f, void *opaque) return qemu_file_get_error(f); } -static int vfio_v1_save_setup(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - int ret; - - trace_vfio_v1_save_setup(vbasedev->name); - - qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE); - - if (migration->region.mmaps) { - /* - * Calling vfio_region_mmap() from migration thread. Memory API called - * from this function require locking the iothread when called from - * outside the main loop thread. 
- */ - qemu_mutex_lock_iothread(); - ret = vfio_region_mmap(&migration->region); - qemu_mutex_unlock_iothread(); - if (ret) { - error_report("%s: Failed to mmap VFIO migration region: %s", - vbasedev->name, strerror(-ret)); - error_report("%s: Falling back to slow path", vbasedev->name); - } - } - - ret = vfio_migration_v1_set_state(vbasedev, VFIO_DEVICE_STATE_MASK, - VFIO_DEVICE_STATE_V1_SAVING); - if (ret) { - error_report("%s: Failed to set state SAVING", vbasedev->name); - return ret; - } - - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - - ret = qemu_file_get_error(f); - if (ret) { - return ret; - } - - return 0; -} - static void vfio_save_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; @@ -645,14 +289,6 @@ static void vfio_save_cleanup(void *opaque) trace_vfio_save_cleanup(vbasedev->name); } -static void vfio_v1_save_cleanup(void *opaque) -{ - VFIODevice *vbasedev = opaque; - - vfio_migration_v1_cleanup(vbasedev); - trace_vfio_save_cleanup(vbasedev->name); -} - static void vfio_save_pending(void *opaque, uint64_t threshold_size, uint64_t *res_precopy_only, uint64_t *res_compatible, @@ -668,73 +304,6 @@ static void vfio_save_pending(void *opaque, uint64_t threshold_size, migration->stop_copy_size); } -static void vfio_v1_save_pending(void *opaque, uint64_t threshold_size, - uint64_t *res_precopy_only, - uint64_t *res_compatible, - uint64_t *res_postcopy_only) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - int ret; - - ret = vfio_update_pending(vbasedev); - if (ret) { - return; - } - - *res_precopy_only += migration->pending_bytes; - - trace_vfio_v1_save_pending(vbasedev->name, *res_precopy_only, - *res_postcopy_only, *res_compatible); -} - -static int vfio_save_iterate(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - uint64_t data_size; - int ret; - - qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); - - if (migration->pending_bytes == 0) { - ret = vfio_update_pending(vbasedev); - if (ret) { - return ret; - } - - if (migration->pending_bytes == 0) { - qemu_put_be64(f, 0); - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - /* indicates data finished, goto complete phase */ - return 1; - } - } - - ret = vfio_save_buffer(f, vbasedev, &data_size); - if (ret) { - error_report("%s: vfio_save_buffer failed %s", vbasedev->name, - strerror(errno)); - return ret; - } - - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - - ret = qemu_file_get_error(f); - if (ret) { - return ret; - } - - /* - * Reset pending_bytes as .save_live_pending is not called during savevm or - * snapshot case, in such case vfio_update_pending() at the start of this - * function updates pending_bytes. 
- */ - migration->pending_bytes = 0; - trace_vfio_save_iterate(vbasedev->name, data_size); - return 0; -} - static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -770,62 +339,6 @@ static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) return ret; } -static int vfio_v1_save_complete_precopy(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - uint64_t data_size; - int ret; - - ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_RUNNING, - VFIO_DEVICE_STATE_V1_SAVING); - if (ret) { - error_report("%s: Failed to set state STOP and SAVING", - vbasedev->name); - return ret; - } - - ret = vfio_update_pending(vbasedev); - if (ret) { - return ret; - } - - while (migration->pending_bytes > 0) { - qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); - ret = vfio_save_buffer(f, vbasedev, &data_size); - if (ret < 0) { - error_report("%s: Failed to save buffer", vbasedev->name); - return ret; - } - - if (data_size == 0) { - break; - } - - ret = vfio_update_pending(vbasedev); - if (ret) { - return ret; - } - } - - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - - ret = qemu_file_get_error(f); - if (ret) { - return ret; - } - - ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_SAVING, - 0); - if (ret) { - error_report("%s: Failed to set state STOPPED", vbasedev->name); - return ret; - } - - trace_vfio_v1_save_complete_precopy(vbasedev->name); - return ret; -} - static void vfio_save_state(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -847,33 +360,6 @@ static int vfio_load_setup(QEMUFile *f, void *opaque) vbasedev->migration->device_state); } -static int vfio_v1_load_setup(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - int ret = 0; - - if (migration->region.mmaps) { - ret = vfio_region_mmap(&migration->region); - if (ret) { - error_report("%s: Failed to mmap VFIO migration region %d: %s", - vbasedev->name, migration->region.nr, - strerror(-ret)); - error_report("%s: Falling back to slow path", vbasedev->name); - } - } - - ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_MASK, - VFIO_DEVICE_STATE_V1_RESUMING); - if (ret) { - error_report("%s: Failed to set state RESUMING", vbasedev->name); - if (migration->region.mmaps) { - vfio_region_unmap(&migration->region); - } - } - return ret; -} - static int vfio_load_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; @@ -884,15 +370,6 @@ static int vfio_load_cleanup(void *opaque) return 0; } -static int vfio_v1_load_cleanup(void *opaque) -{ - VFIODevice *vbasedev = opaque; - - vfio_migration_v1_cleanup(vbasedev); - trace_vfio_load_cleanup(vbasedev->name); - return 0; -} - static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) { VFIODevice *vbasedev = opaque; @@ -926,11 +403,7 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) uint64_t data_size = qemu_get_be64(f); if (data_size) { - if (vbasedev->migration->v2) { - ret = vfio_load_buffer(f, vbasedev, data_size); - } else { - ret = vfio_v1_load_buffer(f, vbasedev, data_size); - } + ret = vfio_load_buffer(f, vbasedev, data_size); if (ret < 0) { return ret; } @@ -962,18 +435,6 @@ static const SaveVMHandlers savevm_vfio_handlers = { .load_state = vfio_load_state, }; -static SaveVMHandlers savevm_vfio_v1_handlers = { - .save_setup = vfio_v1_save_setup, - .save_cleanup = vfio_v1_save_cleanup, - .save_live_pending = vfio_v1_save_pending, - 
.save_live_iterate = vfio_save_iterate, - .save_live_complete_precopy = vfio_v1_save_complete_precopy, - .save_state = vfio_save_state, - .load_setup = vfio_v1_load_setup, - .load_cleanup = vfio_v1_load_cleanup, - .load_state = vfio_load_state, -}; - /* ---------------------------------------------------------------------- */ static void vfio_vmstate_change(void *opaque, bool running, RunState state) @@ -1004,70 +465,12 @@ static void vfio_vmstate_change(void *opaque, bool running, RunState state) mig_state_to_str(new_state)); } -static void vfio_v1_vmstate_change(void *opaque, bool running, RunState state) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - uint32_t value, mask; - int ret; - - if (vbasedev->migration->vm_running == running) { - return; - } - - if (running) { - /* - * Here device state can have one of _SAVING, _RESUMING or _STOP bit. - * Transition from _SAVING to _RUNNING can happen if there is migration - * failure, in that case clear _SAVING bit. - * Transition from _RESUMING to _RUNNING occurs during resuming - * phase, in that case clear _RESUMING bit. - * In both the above cases, set _RUNNING bit. - */ - mask = ~VFIO_DEVICE_STATE_MASK; - value = VFIO_DEVICE_STATE_V1_RUNNING; - } else { - /* - * Here device state could be either _RUNNING or _SAVING|_RUNNING. Reset - * _RUNNING bit - */ - mask = ~VFIO_DEVICE_STATE_V1_RUNNING; - - /* - * When VM state transition to stop for savevm command, device should - * start saving data. - */ - if (state == RUN_STATE_SAVE_VM) { - value = VFIO_DEVICE_STATE_V1_SAVING; - } else { - value = 0; - } - } - - ret = vfio_migration_v1_set_state(vbasedev, mask, value); - if (ret) { - /* - * Migration should be aborted in this case, but vm_state_notify() - * currently does not support reporting failures. 
- */ - error_report("%s: Failed to set device state 0x%x", vbasedev->name, - (migration->device_state_v1 & mask) | value); - if (migrate_get_current()->to_dst_file) { - qemu_file_set_error(migrate_get_current()->to_dst_file, ret); - } - } - vbasedev->migration->vm_running = running; - trace_vfio_v1_vmstate_change(vbasedev->name, running, RunState_str(state), - (migration->device_state_v1 & mask) | value); -} - static void vfio_migration_state_notifier(Notifier *notifier, void *data) { MigrationState *s = data; VFIOMigration *migration = container_of(notifier, VFIOMigration, migration_state); VFIODevice *vbasedev = migration->vbasedev; - int ret; trace_vfio_migration_state_notifier(vbasedev->name, MigrationStatus_str(s->state)); @@ -1077,18 +480,8 @@ static void vfio_migration_state_notifier(Notifier *notifier, void *data) case MIGRATION_STATUS_CANCELLED: case MIGRATION_STATUS_FAILED: bytes_transferred = 0; - if (migration->v2) { - vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING, - VFIO_DEVICE_STATE_ERROR); - } else { - ret = vfio_migration_v1_set_state(vbasedev, - ~(VFIO_DEVICE_STATE_V1_SAVING | - VFIO_DEVICE_STATE_V1_RESUMING), - VFIO_DEVICE_STATE_V1_RUNNING); - if (ret) { - error_report("%s: Failed to set state RUNNING", vbasedev->name); - } - } + vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING, + VFIO_DEVICE_STATE_ERROR); } } @@ -1128,12 +521,6 @@ static int vfio_migration_data_notifier(NotifierWithReturn *n, void *data) static void vfio_migration_exit(VFIODevice *vbasedev) { - VFIOMigration *migration = vbasedev->migration; - - if (!migration->v2) { - vfio_region_exit(&migration->region); - vfio_region_finalize(&migration->region); - } g_free(vbasedev->migration); vbasedev->migration = NULL; } @@ -1173,7 +560,6 @@ static int vfio_migration_init(VFIODevice *vbasedev) VFIOMigration *migration; char id[256] = ""; g_autofree char *path = NULL, *oid = NULL; - struct vfio_region_info *info; uint64_t mig_flags = 0; if (!vbasedev->ops->vfio_get_object) { @@ -1186,52 +572,20 @@ static int vfio_migration_init(VFIODevice *vbasedev) } ret = vfio_migration_query_flags(vbasedev, &mig_flags); - if (!ret) { - /* Migration v2 */ - /* Basic migration functionality must be supported */ - if (!(mig_flags & VFIO_MIGRATION_STOP_COPY)) { - return -EOPNOTSUPP; - } - vbasedev->migration = g_new0(VFIOMigration, 1); - vbasedev->migration->device_state = VFIO_DEVICE_STATE_RUNNING; - vbasedev->migration->data_fd = -1; - vbasedev->migration->v2 = true; - } else if (ret == -ENOTTY) { - /* Migration v1 */ - ret = vfio_get_dev_region_info(vbasedev, - VFIO_REGION_TYPE_MIGRATION_DEPRECATED, - VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED, - &info); - if (ret) { - return ret; - } - - vbasedev->migration = g_new0(VFIOMigration, 1); - vbasedev->migration->device_state_v1 = VFIO_DEVICE_STATE_V1_RUNNING; - vbasedev->migration->vm_running = runstate_is_running(); - - ret = vfio_region_setup(obj, vbasedev, &vbasedev->migration->region, - info->index, "migration"); - if (ret) { - error_report("%s: Failed to setup VFIO migration region %d: %s", - vbasedev->name, info->index, strerror(-ret)); - goto err; - } - - if (!vbasedev->migration->region.size) { - error_report("%s: Invalid zero-sized VFIO migration region %d", - vbasedev->name, info->index); - ret = -EINVAL; - goto err; - } - - g_free(info); - } else { + if (ret) { return ret; } + /* Basic migration functionality must be supported */ + if (!(mig_flags & VFIO_MIGRATION_STOP_COPY)) { + return -EOPNOTSUPP; + } + + vbasedev->migration = 
g_new0(VFIOMigration, 1); migration = vbasedev->migration; migration->vbasedev = vbasedev; + migration->device_state = VFIO_DEVICE_STATE_RUNNING; + migration->data_fd = -1; oid = vmstate_if_get_id(VMSTATE_IF(DEVICE(obj))); if (oid) { @@ -1241,31 +595,18 @@ static int vfio_migration_init(VFIODevice *vbasedev) } strpadcpy(id, sizeof(id), path, '\0'); - if (migration->v2) { - register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, - &savevm_vfio_handlers, vbasedev); - - migration->vm_state = qdev_add_vm_change_state_handler( - vbasedev->dev, vfio_vmstate_change, vbasedev); - - migration->migration_data.notify = vfio_migration_data_notifier; - precopy_add_notifier(&migration->migration_data); - } else { - register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, - &savevm_vfio_v1_handlers, vbasedev); - - migration->vm_state = qdev_add_vm_change_state_handler( - vbasedev->dev, vfio_v1_vmstate_change, vbasedev); - } + register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, &savevm_vfio_handlers, + vbasedev); + migration->vm_state = qdev_add_vm_change_state_handler(vbasedev->dev, + vfio_vmstate_change, + vbasedev); migration->migration_state.notify = vfio_migration_state_notifier; add_migration_state_change_notifier(&migration->migration_state); - return 0; + migration->migration_data.notify = vfio_migration_data_notifier; + precopy_add_notifier(&migration->migration_data); -err: - g_free(info); - vfio_migration_exit(vbasedev); - return ret; + return 0; } /* ---------------------------------------------------------------------- */ @@ -1314,9 +655,7 @@ void vfio_migration_finalize(VFIODevice *vbasedev) VFIOMigration *migration = vbasedev->migration; vfio_unblock_multiple_devices_migration(); - if (migration->v2) { - precopy_remove_notifier(&migration->migration_data); - } + precopy_remove_notifier(&migration->migration_data); remove_migration_state_change_notifier(&migration->migration_state); qemu_del_vm_change_state_handler(migration->vm_state); unregister_savevm(VMSTATE_IF(vbasedev->dev), "vfio", vbasedev); diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index a8a64f0627..60c49b2ecf 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -150,24 +150,15 @@ vfio_display_edid_write_error(void) "" # migration.c vfio_migration_probe(const char *name) " (%s)" vfio_migration_set_state(const char *name, const char *state) " (%s) state %s" -vfio_migration_v1_set_state(const char *name, uint32_t state) " (%s) state %d" vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s" -vfio_v1_vmstate_change(const char *name, int running, const char *reason, uint32_t dev_state) " (%s) running %d reason %s device state %d" vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" vfio_save_setup(const char *name, uint64_t data_buffer_size) " (%s) data buffer size 0x%"PRIx64 -vfio_v1_save_setup(const char *name) " (%s)" vfio_save_cleanup(const char *name) " (%s)" -vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64 -vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 vfio_save_device_config_state(const char *name) " (%s)" vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible, uint64_t stopcopy_size) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64" stopcopy size 0x%"PRIx64 -vfio_v1_save_pending(const char *name, 
uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64 -vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d" vfio_save_complete_precopy(const char *name, int ret) " (%s) ret %d" -vfio_v1_save_complete_precopy(const char *name) " (%s)" vfio_load_device_config_state(const char *name) " (%s)" vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64 -vfio_v1_load_state_device_data(const char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64 vfio_load_state_device_data(const char *name, uint64_t data_size, int ret) " (%s) size 0x%"PRIx64" ret %d" vfio_load_cleanup(const char *name) " (%s)" vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64