From patchwork Wed Nov 30 09:44:01 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059652
From: Avihai Horon
Subject: [PATCH v4 01/14] migration: No save_live_pending() method uses the QEMUFile parameter
Date: Wed, 30 Nov 2022 11:44:01 +0200
Message-ID: <20221130094414.27247-2-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>

From: Juan Quintela

So remove it everywhere.

Signed-off-by: Juan Quintela
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Dr. David Alan Gilbert
---
 hw/s390x/s390-stattrib.c       | 2 +-
 hw/vfio/migration.c            | 3 +--
 include/migration/register.h   | 3 +--
 migration/block-dirty-bitmap.c | 3 +--
 migration/block.c              | 2 +-
 migration/migration.c          | 4 ++--
 migration/ram.c                | 2 +-
 migration/savevm.c             | 7 +++----
 migration/savevm.h             | 3 +--
 9 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/hw/s390x/s390-stattrib.c b/hw/s390x/s390-stattrib.c
index 9eda1c3b2a..a553a1e850 100644
--- a/hw/s390x/s390-stattrib.c
+++ b/hw/s390x/s390-stattrib.c
@@ -182,7 +182,7 @@ static int cmma_save_setup(QEMUFile *f, void *opaque)
     return 0;
 }
 
-static void cmma_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
+static void cmma_save_pending(void *opaque, uint64_t max_size,
                               uint64_t *res_precopy_only,
                               uint64_t *res_compatible,
                               uint64_t *res_postcopy_only)
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index c74453e0b5..e1413ac90c 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -456,8 +456,7 @@ static void vfio_save_cleanup(void *opaque)
     trace_vfio_save_cleanup(vbasedev->name);
 }
 
-static void vfio_save_pending(QEMUFile *f, void *opaque,
-                              uint64_t threshold_size,
+static void vfio_save_pending(void *opaque, uint64_t threshold_size,
                               uint64_t *res_precopy_only,
                               uint64_t *res_compatible,
                               uint64_t *res_postcopy_only)
diff --git a/include/migration/register.h b/include/migration/register.h
index c1dcff0f90..eb6266a877 100644
--- a/include/migration/register.h
+++ b/include/migration/register.h
@@ -46,8 +46,7 @@ typedef struct SaveVMHandlers {
     /* This runs outside the iothread lock! */
     int (*save_setup)(QEMUFile *f, void *opaque);
-    void (*save_live_pending)(QEMUFile *f, void *opaque,
-                              uint64_t threshold_size,
+    void (*save_live_pending)(void *opaque, uint64_t threshold_size,
                               uint64_t *res_precopy_only,
                               uint64_t *res_compatible,
                               uint64_t *res_postcopy_only);
diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c
index 9aba7d9c22..50c8ad1c5a 100644
--- a/migration/block-dirty-bitmap.c
+++ b/migration/block-dirty-bitmap.c
@@ -761,8 +761,7 @@ static int dirty_bitmap_save_complete(QEMUFile *f, void *opaque)
     return 0;
 }
 
-static void dirty_bitmap_save_pending(QEMUFile *f, void *opaque,
-                                      uint64_t max_size,
+static void dirty_bitmap_save_pending(void *opaque, uint64_t max_size,
                                       uint64_t *res_precopy_only,
                                       uint64_t *res_compatible,
                                       uint64_t *res_postcopy_only)
diff --git a/migration/block.c b/migration/block.c
index 4347da1526..b6a98caf78 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -862,7 +862,7 @@ static int block_save_complete(QEMUFile *f, void *opaque)
     return 0;
 }
 
-static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
+static void block_save_pending(void *opaque, uint64_t max_size,
                                uint64_t *res_precopy_only,
                                uint64_t *res_compatible,
                                uint64_t *res_postcopy_only)
diff --git a/migration/migration.c b/migration/migration.c
index f485eea5fb..edefba954e 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3756,8 +3756,8 @@ static MigIterateState migration_iteration_run(MigrationState *s)
     uint64_t pending_size, pend_pre, pend_compat, pend_post;
     bool in_postcopy = s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE;
 
-    qemu_savevm_state_pending(s->to_dst_file, s->threshold_size, &pend_pre,
-                              &pend_compat, &pend_post);
+    qemu_savevm_state_pending(s->threshold_size, &pend_pre, &pend_compat,
+                              &pend_post);
     pending_size = pend_pre + pend_compat + pend_post;
 
     trace_migrate_pending(pending_size, s->threshold_size,
diff --git a/migration/ram.c b/migration/ram.c
index 1338e47665..02f5e7ad00 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3441,7 +3441,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
     return 0;
 }
 
-static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
+static void ram_save_pending(void *opaque, uint64_t max_size,
                              uint64_t *res_precopy_only,
                              uint64_t *res_compatible,
                              uint64_t *res_postcopy_only)
diff --git a/migration/savevm.c b/migration/savevm.c
index a0cdb714f7..a94e637904 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1471,7 +1471,7 @@ flush:
  * the result is split into the amount for units that can and
  * for units that can't do postcopy.
  */
-void qemu_savevm_state_pending(QEMUFile *f, uint64_t threshold_size,
+void qemu_savevm_state_pending(uint64_t threshold_size,
                                uint64_t *res_precopy_only,
                                uint64_t *res_compatible,
                                uint64_t *res_postcopy_only)
@@ -1492,9 +1492,8 @@ void qemu_savevm_state_pending(QEMUFile *f, uint64_t threshold_size,
                 continue;
             }
         }
-        se->ops->save_live_pending(f, se->opaque, threshold_size,
-                                   res_precopy_only, res_compatible,
-                                   res_postcopy_only);
+        se->ops->save_live_pending(se->opaque, threshold_size, res_precopy_only,
+                                   res_compatible, res_postcopy_only);
     }
 }
diff --git a/migration/savevm.h b/migration/savevm.h
index 6461342cb4..6dec468cc3 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -40,8 +40,7 @@ void qemu_savevm_state_cleanup(void);
 void qemu_savevm_state_complete_postcopy(QEMUFile *f);
 int qemu_savevm_state_complete_precopy(QEMUFile *f, bool iterable_only,
                                        bool inactivate_disks);
-void qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size,
-                               uint64_t *res_precopy_only,
+void qemu_savevm_state_pending(uint64_t max_size, uint64_t *res_precopy_only,
                                uint64_t *res_compatible,
                                uint64_t *res_postcopy_only);
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value);
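For illustration only, not part of the patch: a minimal sketch of a device
handler using the new callback signature. Only the save_live_pending()
prototype comes from the register.h hunk above; MyDeviceState, its bytes_left
field and the handler name are hypothetical.

    /* Illustrative sketch; assumes QEMU's "migration/register.h" is included. */
    typedef struct MyDeviceState {
        uint64_t bytes_left;    /* hypothetical: precopy data still to send */
    } MyDeviceState;

    static void my_device_save_pending(void *opaque, uint64_t threshold_size,
                                       uint64_t *res_precopy_only,
                                       uint64_t *res_compatible,
                                       uint64_t *res_postcopy_only)
    {
        MyDeviceState *s = opaque;

        /* No QEMUFile is needed just to report how much data is pending. */
        *res_precopy_only += s->bytes_left;
    }

    static SaveVMHandlers my_device_savevm_handlers = {
        .save_live_pending = my_device_save_pending,
    };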
From patchwork Wed Nov 30 09:44:02 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059634
From: Avihai Horon
Subject: [PATCH v4 02/14] migration: Simplify migration_iteration_run()
Date: Wed, 30 Nov 2022 11:44:02 +0200
Message-ID: <20221130094414.27247-3-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>
From: Juan Quintela

Signed-off-by: Juan Quintela
Signed-off-by: Avihai Horon
---
 migration/migration.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index edefba954e..630e4af02f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3763,23 +3763,24 @@ static MigIterateState migration_iteration_run(MigrationState *s)
 
     trace_migrate_pending(pending_size, s->threshold_size,
                           pend_pre, pend_compat, pend_post);
-    if (pending_size && pending_size >= s->threshold_size) {
-        /* Still a significant amount to transfer */
-        if (!in_postcopy && pend_pre <= s->threshold_size &&
-            qatomic_read(&s->start_postcopy)) {
-            if (postcopy_start(s)) {
-                error_report("%s: postcopy failed to start", __func__);
-            }
-            return MIG_ITERATE_SKIP;
-        }
-        /* Just another iteration step */
-        qemu_savevm_state_iterate(s->to_dst_file, in_postcopy);
-    } else {
+
+    if (!pending_size || pending_size < s->threshold_size) {
         trace_migration_thread_low_pending(pending_size);
         migration_completion(s);
         return MIG_ITERATE_BREAK;
     }
 
+    /* Still a significant amount to transfer */
+    if (!in_postcopy && pend_pre <= s->threshold_size &&
+        qatomic_read(&s->start_postcopy)) {
+        if (postcopy_start(s)) {
+            error_report("%s: postcopy failed to start", __func__);
+        }
+        return MIG_ITERATE_SKIP;
+    }
+
+    /* Just another iteration step */
+    qemu_savevm_state_iterate(s->to_dst_file, in_postcopy);
     return MIG_ITERATE_RESUME;
 }
From patchwork Wed Nov 30 09:44:03 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059632
From: Avihai Horon
Subject: [PATCH v4 03/14] vfio/migration: Fix NULL pointer dereference bug
Date: Wed, 30 Nov 2022 11:44:03 +0200
Message-ID: <20221130094414.27247-4-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>
As part of its error flow, vfio_vmstate_change() accesses
MigrationState->to_dst_file without any checks. This can cause a NULL
pointer dereference if the error flow is taken and
MigrationState->to_dst_file is not set.

For example, this can happen if the VM is started or stopped outside of
a migration and the vfio_vmstate_change() error flow is taken, as
MigrationState->to_dst_file is not set at that time.

Fix it by checking that MigrationState->to_dst_file is set before using
it.

Fixes: 02a7e71b1e5b ("vfio: Add VM state change handler to know state of VM")
Signed-off-by: Avihai Horon
Reviewed-by: Juan Quintela
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 hw/vfio/migration.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index e1413ac90c..09fe7c1de2 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -743,7 +743,9 @@ static void vfio_vmstate_change(void *opaque, bool running, RunState state)
          */
         error_report("%s: Failed to set device state 0x%x", vbasedev->name,
                      (migration->device_state & mask) | value);
-        qemu_file_set_error(migrate_get_current()->to_dst_file, ret);
+        if (migrate_get_current()->to_dst_file) {
+            qemu_file_set_error(migrate_get_current()->to_dst_file, ret);
+        }
     }
     vbasedev->migration->vm_running = running;
     trace_vfio_vmstate_change(vbasedev->name, running, RunState_str(state),
From patchwork Wed Nov 30 09:44:04 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059653
From: Avihai Horon
Subject: [PATCH v4 04/14] vfio/migration: Allow migration without VFIO IOMMU dirty tracking support
Date: Wed, 30 Nov 2022 11:44:04 +0200
Message-ID: <20221130094414.27247-5-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>
Currently, if the IOMMU of a VFIO container doesn't support dirty page
tracking, migration is blocked. This is because a DMA-able VFIO device
can dirty RAM pages without updating QEMU about it, thus breaking the
migration.

However, this doesn't mean that migration can't be done at all. In such
a case, allow migration and let the QEMU VFIO code mark the entire
bitmap dirty. This guarantees that all pages that might have gotten
dirty are reported back, and thus guarantees a valid migration even
without VFIO IOMMU dirty tracking support.

The motivation for this patch is the future introduction of iommufd [1].
iommufd will directly implement the /dev/vfio/vfio container IOCTLs by
mapping them into its internal ops, allowing the usage of these IOCTLs
over iommufd. However, VFIO IOMMU dirty tracking will not be supported
by this VFIO compatibility API.

This patch will allow migration by hosts that use the VFIO compatibility
API and prevent migration regressions caused by the lack of VFIO IOMMU
dirty tracking support.

[1] https://lore.kernel.org/kvm/0-v2-f9436d0bde78+4bb-iommufd_jgg@nvidia.com/

Signed-off-by: Avihai Horon
---
 hw/vfio/common.c    | 100 ++++++++++++++++++++++++++------------
 hw/vfio/migration.c |   3 +-
 2 files changed, 61 insertions(+), 42 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 130e5d1dc7..67104e2fc2 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -397,51 +397,61 @@ static int vfio_dma_unmap_bitmap(VFIOContainer *container,
                                  IOMMUTLBEntry *iotlb)
 {
     struct vfio_iommu_type1_dma_unmap *unmap;
-    struct vfio_bitmap *bitmap;
+    struct vfio_bitmap *vbitmap;
+    unsigned long *bitmap;
+    uint64_t bitmap_size;
     uint64_t pages = REAL_HOST_PAGE_ALIGN(size) / qemu_real_host_page_size();
     int ret;
 
-    unmap = g_malloc0(sizeof(*unmap) + sizeof(*bitmap));
+    unmap = g_malloc0(sizeof(*unmap) + sizeof(*vbitmap));
 
-    unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
+    unmap->argsz = sizeof(*unmap);
     unmap->iova = iova;
     unmap->size = size;
-    unmap->flags |= VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
-    bitmap = (struct vfio_bitmap *)&unmap->data;
+    bitmap_size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
+                  BITS_PER_BYTE;
+    bitmap = g_try_malloc0(bitmap_size);
+    if (!bitmap) {
+        ret = -ENOMEM;
+        goto unmap_exit;
+    }
+
+    if (!container->dirty_pages_supported) {
+        bitmap_set(bitmap, 0, pages);
+        goto do_unmap;
+    }
+
+    unmap->argsz += sizeof(*vbitmap);
+    unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
+
+    vbitmap = (struct vfio_bitmap *)&unmap->data;
+    vbitmap->data = (__u64 *)bitmap;
 
     /*
      * cpu_physical_memory_set_dirty_lebitmap() supports pages in bitmap of
      * qemu_real_host_page_size to mark those dirty. Hence set bitmap_pgsize
      * to qemu_real_host_page_size.
      */
+    vbitmap->pgsize = qemu_real_host_page_size();
+    vbitmap->size = bitmap_size;
 
-    bitmap->pgsize = qemu_real_host_page_size();
-    bitmap->size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
-                   BITS_PER_BYTE;
-
-    if (bitmap->size > container->max_dirty_bitmap_size) {
-        error_report("UNMAP: Size of bitmap too big 0x%"PRIx64,
-                     (uint64_t)bitmap->size);
+    if (bitmap_size > container->max_dirty_bitmap_size) {
+        error_report("UNMAP: Size of bitmap too big 0x%"PRIx64, bitmap_size);
         ret = -E2BIG;
         goto unmap_exit;
     }
 
-    bitmap->data = g_try_malloc0(bitmap->size);
-    if (!bitmap->data) {
-        ret = -ENOMEM;
-        goto unmap_exit;
-    }
-
+do_unmap:
     ret = ioctl(container->fd, VFIO_IOMMU_UNMAP_DMA, unmap);
     if (!ret) {
-        cpu_physical_memory_set_dirty_lebitmap((unsigned long *)bitmap->data,
-                                               iotlb->translated_addr, pages);
+        cpu_physical_memory_set_dirty_lebitmap(bitmap, iotlb->translated_addr,
+                                               pages);
     } else {
         error_report("VFIO_UNMAP_DMA with DIRTY_BITMAP : %m");
     }
 
-    g_free(bitmap->data);
 unmap_exit:
+    g_free(bitmap);
     g_free(unmap);
     return ret;
 }
@@ -460,8 +470,7 @@ static int vfio_dma_unmap(VFIOContainer *container,
         .size = size,
     };
 
-    if (iotlb && container->dirty_pages_supported &&
-        vfio_devices_all_running_and_saving(container)) {
+    if (iotlb && vfio_devices_all_running_and_saving(container)) {
         return vfio_dma_unmap_bitmap(container, iova, size, iotlb);
     }
 
@@ -1201,6 +1210,10 @@ static void vfio_set_dirty_page_tracking(VFIOContainer *container, bool start)
         .argsz = sizeof(dirty),
     };
 
+    if (!container->dirty_pages_supported) {
+        return;
+    }
+
     if (start) {
         dirty.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START;
     } else {
@@ -1231,11 +1244,26 @@ static void vfio_listener_log_global_stop(MemoryListener *listener)
 static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
                                  uint64_t size, ram_addr_t ram_addr)
 {
-    struct vfio_iommu_type1_dirty_bitmap *dbitmap;
+    struct vfio_iommu_type1_dirty_bitmap *dbitmap = NULL;
     struct vfio_iommu_type1_dirty_bitmap_get *range;
+    unsigned long *bitmap;
+    uint64_t bitmap_size;
     uint64_t pages;
     int ret;
 
+    pages = REAL_HOST_PAGE_ALIGN(size) / qemu_real_host_page_size();
+    bitmap_size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
+                  BITS_PER_BYTE;
+    bitmap = g_try_malloc0(bitmap_size);
+    if (!bitmap) {
+        return -ENOMEM;
+    }
+
+    if (!container->dirty_pages_supported) {
+        bitmap_set(bitmap, 0, pages);
+        goto set_dirty;
+    }
+
     dbitmap = g_malloc0(sizeof(*dbitmap) + sizeof(*range));
 
     dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
@@ -1250,15 +1278,8 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
      * to qemu_real_host_page_size.
      */
     range->bitmap.pgsize = qemu_real_host_page_size();
-
-    pages = REAL_HOST_PAGE_ALIGN(range->size) / qemu_real_host_page_size();
-    range->bitmap.size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
-                         BITS_PER_BYTE;
-    range->bitmap.data = g_try_malloc0(range->bitmap.size);
-    if (!range->bitmap.data) {
-        ret = -ENOMEM;
-        goto err_out;
-    }
+    range->bitmap.size = bitmap_size;
+    range->bitmap.data = (__u64 *)bitmap;
 
     ret = ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
     if (ret) {
@@ -1268,13 +1289,13 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
         goto err_out;
     }
 
-    cpu_physical_memory_set_dirty_lebitmap((unsigned long *)range->bitmap.data,
-                                           ram_addr, pages);
+set_dirty:
+    cpu_physical_memory_set_dirty_lebitmap(bitmap, ram_addr, pages);
 
-    trace_vfio_get_dirty_bitmap(container->fd, range->iova, range->size,
-                                range->bitmap.size, ram_addr);
+    trace_vfio_get_dirty_bitmap(container->fd, iova, size, bitmap_size,
+                                ram_addr);
 
 err_out:
-    g_free(range->bitmap.data);
+    g_free(bitmap);
     g_free(dbitmap);
 
     return ret;
@@ -1409,8 +1430,7 @@ static void vfio_listener_log_sync(MemoryListener *listener,
 {
     VFIOContainer *container = container_of(listener, VFIOContainer, listener);
 
-    if (vfio_listener_skipped_section(section) ||
-        !container->dirty_pages_supported) {
+    if (vfio_listener_skipped_section(section)) {
         return;
     }
 
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 09fe7c1de2..552c2313b2 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -860,11 +860,10 @@ int64_t vfio_mig_bytes_transferred(void)
 
 int vfio_migration_probe(VFIODevice *vbasedev, Error **errp)
 {
-    VFIOContainer *container = vbasedev->group->container;
     struct vfio_region_info *info = NULL;
     int ret = -ENOTSUP;
 
-    if (!vbasedev->enable_migration || !container->dirty_pages_supported) {
+    if (!vbasedev->enable_migration) {
         goto add_blocker;
     }
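To see the fallback in isolation: a stripped-down sketch of the idea the two
hunks above implement, i.e. report the whole range as dirty when the container
cannot track dirty pages. The standalone helper and its name are invented for
illustration; in the patch the logic lives inline in vfio_dma_unmap_bitmap()
and vfio_get_dirty_bitmap().

    /*
     * Illustrative sketch only (helper name invented): report a range as
     * fully dirty, mirroring the fallback added for containers without
     * VFIO IOMMU dirty tracking.
     */
    static int vfio_report_range_fully_dirty(uint64_t size, ram_addr_t ram_addr)
    {
        uint64_t pages = REAL_HOST_PAGE_ALIGN(size) / qemu_real_host_page_size();
        uint64_t bitmap_size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
                               BITS_PER_BYTE;
        unsigned long *bitmap = g_try_malloc0(bitmap_size);

        if (!bitmap) {
            return -ENOMEM;
        }

        /* Every page in the range may have been dirtied by device DMA. */
        bitmap_set(bitmap, 0, pages);
        cpu_physical_memory_set_dirty_lebitmap(bitmap, ram_addr, pages);

        g_free(bitmap);
        return 0;
    }

Reporting everything dirty trades extra data transfer for correctness: precopy
convergence gets worse, but no page the device may have written is ever missed.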
From patchwork Wed Nov 30 09:44:05 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059636
From: Avihai Horon
Subject: [PATCH v4 05/14] migration/qemu-file: Add qemu_file_get_to_fd()
Date: Wed, 30 Nov 2022 11:44:05 +0200
Message-ID: <20221130094414.27247-6-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>
Add new function qemu_file_get_to_fd() that allows reading data from
QEMUFile and writing it straight into a given fd. This will be used later
in VFIO migration code.

Signed-off-by: Avihai Horon
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
 migration/qemu-file.c | 34 ++++++++++++++++++++++++++++++++++
 migration/qemu-file.h |  1 +
 2 files changed, 35 insertions(+)

diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 2d5f74ffc2..79303c9d34 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -940,3 +940,37 @@ QIOChannel *qemu_file_get_ioc(QEMUFile *file)
 {
     return file->ioc;
 }
+
+/*
+ * Read size bytes from QEMUFile f and write them to fd.
+ */
+int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size)
+{
+    while (size) {
+        size_t pending = f->buf_size - f->buf_index;
+        ssize_t rc;
+
+        if (!pending) {
+            rc = qemu_fill_buffer(f);
+            if (rc < 0) {
+                return rc;
+            }
+            if (rc == 0) {
+                return -1;
+            }
+            continue;
+        }
+
+        rc = write(fd, f->buf + f->buf_index, MIN(pending, size));
+        if (rc < 0) {
+            return rc;
+        }
+        if (rc == 0) {
+            return -1;
+        }
+        f->buf_index += rc;
+        size -= rc;
+    }
+
+    return 0;
+}
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index fa13d04d78..9d0155a2a1 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -148,6 +148,7 @@ int qemu_file_shutdown(QEMUFile *f);
 QEMUFile *qemu_file_get_return_path(QEMUFile *f);
 void qemu_fflush(QEMUFile *f);
 void qemu_file_set_blocking(QEMUFile *f, bool block);
+int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
 void ram_control_before_iterate(QEMUFile *f, uint64_t flags);
 void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
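For context, a minimal sketch of how a device load path might drive the new helper, pulling size-prefixed chunks out of the migration stream and writing them into a device file descriptor. The chunk framing and the load_chunks_to_fd()/data_fd names are assumptions made up for this illustration, not part of the series; qemu_get_be64() and qemu_file_get_error() are existing QEMUFile helpers.

    /* Illustrative sketch, not from this series. */
    static int load_chunks_to_fd(QEMUFile *f, int data_fd)
    {
        uint64_t data_size;

        /* Assume each chunk is preceded by its length and a 0 terminates. */
        while ((data_size = qemu_get_be64(f)) != 0) {
            int ret = qemu_file_get_to_fd(f, data_fd, data_size);

            if (ret) {
                return ret;
            }
        }

        return qemu_file_get_error(f);
    }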
From patchwork Wed Nov 30 09:44:06 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059633
From: Avihai Horon
Subject: [PATCH v4 06/14] vfio/common: Change vfio_devices_all_running_and_saving() logic to equivalent one
Date: Wed, 30 Nov 2022 11:44:06 +0200
Message-ID: <20221130094414.27247-7-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
vfio_devices_all_running_and_saving() is used to check if migration is in
the pre-copy phase. This is done by checking if migration is in the setup or
active states and if all VFIO devices are in pre-copy state, i.e.
_SAVING | _RUNNING.

In VFIO migration protocol v2, pre-copy support is optional. Hence, a
matching v2 protocol pre-copy state can't be used here.

As preparation for adding the v2 protocol, change the
vfio_devices_all_running_and_saving() logic such that it doesn't use the
VFIO pre-copy state.

The new, equivalent logic checks if migration is in the active state and if
all VFIO devices are in running state [1]. No functional changes intended.

[1] Note that checking if migration is in the setup or active states and if
all VFIO devices are in running state doesn't guarantee that we are in the
pre-copy phase, thus we check if migration is only in the active state.

Signed-off-by: Avihai Horon
---
 hw/vfio/common.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 67104e2fc2..7a35edb0e9 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -40,6 +40,7 @@
 #include "trace.h"
 #include "qapi/error.h"
 #include "migration/migration.h"
+#include "migration/misc.h"
 #include "sysemu/tpm.h"
 
 VFIOGroupList vfio_group_list =
@@ -363,13 +364,16 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
     return true;
 }
 
-static bool vfio_devices_all_running_and_saving(VFIOContainer *container)
+/*
+ * Check if all VFIO devices are running and migration is active, which is
+ * essentially equivalent to the migration being in pre-copy phase.
+ */
+static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
 {
     VFIOGroup *group;
     VFIODevice *vbasedev;
-    MigrationState *ms = migrate_get_current();
 
-    if (!migration_is_setup_or_active(ms->state)) {
+    if (!migration_is_active(migrate_get_current())) {
         return false;
     }
 
@@ -381,8 +385,7 @@ static bool vfio_devices_all_running_and_saving(VFIOContainer *container)
                 return false;
             }
 
-            if ((migration->device_state & VFIO_DEVICE_STATE_V1_SAVING) &&
-                (migration->device_state & VFIO_DEVICE_STATE_V1_RUNNING)) {
+            if (migration->device_state & VFIO_DEVICE_STATE_V1_RUNNING) {
                 continue;
             } else {
                 return false;
@@ -470,7 +473,7 @@ static int vfio_dma_unmap(VFIOContainer *container,
         .size = size,
     };
 
-    if (iotlb && vfio_devices_all_running_and_saving(container)) {
+    if (iotlb && vfio_devices_all_running_and_mig_active(container)) {
         return vfio_dma_unmap_bitmap(container, iova, size, iotlb);
     }
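To make the footnote above concrete, here is the migration-state half of the check before and after, side by side; the per-device _RUNNING walk is unchanged in spirit. migration_is_setup_or_active() and migration_is_active() are existing QEMU helpers, but the snippet itself is illustrative and not taken from the patch.

    /* Illustrative sketch, not from this series. */
    MigrationState *ms = migrate_get_current();

    /* Old check: also true while migration is still in the setup state. */
    bool was_precopyish = migration_is_setup_or_active(ms->state);

    /*
     * New check: true only once migration is actually active; together with
     * every VFIO device still being _RUNNING, this is what the pre-copy
     * phase looks like from here.
     */
    bool is_precopyish = migration_is_active(ms);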
From patchwork Wed Nov 30 09:44:07 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059651
From: Avihai Horon
Subject: [PATCH v4 07/14] vfio/migration: Move migration v1 logic to vfio_migration_init()
Date: Wed, 30 Nov 2022 11:44:07 +0200
Message-ID: <20221130094414.27247-8-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
Move the vfio_get_dev_region_info() logic from vfio_migration_probe() to
vfio_migration_init(). This logic is specific to the v1 protocol, and moving
it will make it easier to add the v2 protocol implementation later.

No functional changes intended.

Signed-off-by: Avihai Horon
---
 hw/vfio/migration.c  | 30 +++++++++++++++---------------
 hw/vfio/trace-events |  2 +-
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 552c2313b2..977da64411 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -788,14 +788,14 @@ static void vfio_migration_exit(VFIODevice *vbasedev)
     vbasedev->migration = NULL;
 }
 
-static int vfio_migration_init(VFIODevice *vbasedev,
-                               struct vfio_region_info *info)
+static int vfio_migration_init(VFIODevice *vbasedev)
 {
     int ret;
     Object *obj;
     VFIOMigration *migration;
     char id[256] = "";
     g_autofree char *path = NULL, *oid = NULL;
+    struct vfio_region_info *info;
 
     if (!vbasedev->ops->vfio_get_object) {
         return -EINVAL;
@@ -806,6 +806,14 @@ static int vfio_migration_init(VFIODevice *vbasedev,
         return -EINVAL;
     }
 
+    ret = vfio_get_dev_region_info(vbasedev,
+                                   VFIO_REGION_TYPE_MIGRATION_DEPRECATED,
+                                   VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED,
+                                   &info);
+    if (ret) {
+        return ret;
+    }
+
     vbasedev->migration = g_new0(VFIOMigration, 1);
     vbasedev->migration->device_state = VFIO_DEVICE_STATE_V1_RUNNING;
     vbasedev->migration->vm_running = runstate_is_running();
@@ -825,6 +833,8 @@ static int vfio_migration_init(VFIODevice *vbasedev,
         goto err;
     }
 
+    g_free(info);
+
     migration = vbasedev->migration;
     migration->vbasedev = vbasedev;
 
@@ -847,6 +857,7 @@ static int vfio_migration_init(VFIODevice *vbasedev,
     return 0;
 
 err:
+    g_free(info);
     vfio_migration_exit(vbasedev);
     return ret;
 }
@@ -860,34 +871,23 @@ int64_t vfio_mig_bytes_transferred(void)
 
 int vfio_migration_probe(VFIODevice *vbasedev, Error **errp)
 {
-    struct vfio_region_info *info = NULL;
     int ret = -ENOTSUP;
 
     if (!vbasedev->enable_migration) {
         goto add_blocker;
     }
 
-    ret = vfio_get_dev_region_info(vbasedev,
-                                   VFIO_REGION_TYPE_MIGRATION_DEPRECATED,
-                                   VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED,
-                                   &info);
-    if (ret) {
-        goto add_blocker;
-    }
-
-    ret = vfio_migration_init(vbasedev, info);
+    ret = vfio_migration_init(vbasedev);
     if (ret) {
         goto add_blocker;
     }
 
-    trace_vfio_migration_probe(vbasedev->name, info->index);
-    g_free(info);
+    trace_vfio_migration_probe(vbasedev->name);
 
     return 0;
 
 add_blocker:
     error_setg(&vbasedev->migration_blocker,
                "VFIO device doesn't support migration");
-    g_free(info);
 
     ret = migrate_add_blocker(vbasedev->migration_blocker, errp);
     if (ret < 0) {
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 73dffe9e00..b259dcc644 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -148,7 +148,7 @@ vfio_display_edid_update(uint32_t prefx, uint32_t prefy) "%ux%u"
 vfio_display_edid_write_error(void) ""
 
 # migration.c
-vfio_migration_probe(const char *name, uint32_t index) " (%s) Region %d"
+vfio_migration_probe(const char *name) " (%s)"
 vfio_migration_set_state(const char *name, uint32_t state) " (%s) state %d"
 vfio_vmstate_change(const char *name, int running, const char *reason, uint32_t dev_state) " (%s) running %d reason %s device state %d"
 vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s"
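As a reminder of what the moved logic actually probes for, here is a small sketch of a v1 region lookup using the same helper and constants that appear in the hunks above. The wrapper name is made up for illustration; the real code keeps the region info around for the later region setup rather than just testing for its existence.

    /* Illustrative sketch, not from this series. */
    static bool vfio_has_v1_migration_region(VFIODevice *vbasedev)
    {
        struct vfio_region_info *info = NULL;

        if (vfio_get_dev_region_info(vbasedev,
                                     VFIO_REGION_TYPE_MIGRATION_DEPRECATED,
                                     VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED,
                                     &info)) {
            return false;   /* no v1 migration region exposed by the kernel */
        }

        g_free(info);       /* only needed to know that the region exists */
        return true;
    }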
From patchwork Wed Nov 30 09:44:08 2022
X-Patchwork-Submitter: Avihai Horon
X-Patchwork-Id: 13059635
From: Avihai Horon
Subject: [PATCH v4 08/14] vfio/migration: Rename functions/structs related to v1 protocol
Date: Wed, 30 Nov 2022 11:44:08 +0200
Message-ID: <20221130094414.27247-9-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
To avoid name collisions, rename functions and structs related to VFIO
migration protocol v1. This will allow the two protocols to co-exist when
the v2 protocol is added, until v1 is removed.

No functional changes intended.

Signed-off-by: Avihai Horon
---
 hw/vfio/common.c              |   6 +-
 hw/vfio/migration.c           | 104 +++++++++++++++++-----------------
 hw/vfio/trace-events          |  10 ++--
 include/hw/vfio/vfio-common.h |   2 +-
 4 files changed, 61 insertions(+), 61 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 7a35edb0e9..6f0157c848 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -355,8 +355,8 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
                 return false;
             }
 
-            if ((vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF)
-                && (migration->device_state & VFIO_DEVICE_STATE_V1_RUNNING)) {
+            if ((vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
+                (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING)) {
                 return false;
             }
         }
@@ -385,7 +385,7 @@ static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
                 return false;
             }
 
-            if (migration->device_state & VFIO_DEVICE_STATE_V1_RUNNING) {
+            if (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) {
                 continue;
             } else {
                 return false;
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 977da64411..e59b463b6c 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -107,8 +107,8 @@ static int vfio_mig_rw(VFIODevice *vbasedev, __u8 *buf, size_t count,
  * an error is returned.
*/ -static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask, - uint32_t value) +static int vfio_migration_v1_set_state(VFIODevice *vbasedev, uint32_t mask, + uint32_t value) { VFIOMigration *migration = vbasedev->migration; VFIORegion *region = &migration->region; @@ -145,8 +145,8 @@ static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t mask, return ret; } - migration->device_state = device_state; - trace_vfio_migration_set_state(vbasedev->name, device_state); + migration->device_state_v1 = device_state; + trace_vfio_migration_v1_set_state(vbasedev->name, device_state); return 0; } @@ -260,8 +260,8 @@ static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size) return ret; } -static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, - uint64_t data_size) +static int vfio_v1_load_buffer(QEMUFile *f, VFIODevice *vbasedev, + uint64_t data_size) { VFIORegion *region = &vbasedev->migration->region; uint64_t data_offset = 0, size, report_size; @@ -288,7 +288,7 @@ static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, data_size = 0; } - trace_vfio_load_state_device_data(vbasedev->name, data_offset, size); + trace_vfio_v1_load_state_device_data(vbasedev->name, data_offset, size); while (size) { void *buf; @@ -394,7 +394,7 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque) return qemu_file_get_error(f); } -static void vfio_migration_cleanup(VFIODevice *vbasedev) +static void vfio_migration_v1_cleanup(VFIODevice *vbasedev) { VFIOMigration *migration = vbasedev->migration; @@ -405,7 +405,7 @@ static void vfio_migration_cleanup(VFIODevice *vbasedev) /* ---------------------------------------------------------------------- */ -static int vfio_save_setup(QEMUFile *f, void *opaque) +static int vfio_v1_save_setup(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; @@ -431,8 +431,8 @@ static int vfio_save_setup(QEMUFile *f, void *opaque) } } - ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_MASK, - VFIO_DEVICE_STATE_V1_SAVING); + ret = vfio_migration_v1_set_state(vbasedev, VFIO_DEVICE_STATE_MASK, + VFIO_DEVICE_STATE_V1_SAVING); if (ret) { error_report("%s: Failed to set state SAVING", vbasedev->name); return ret; @@ -448,18 +448,18 @@ static int vfio_save_setup(QEMUFile *f, void *opaque) return 0; } -static void vfio_save_cleanup(void *opaque) +static void vfio_v1_save_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; - vfio_migration_cleanup(vbasedev); + vfio_migration_v1_cleanup(vbasedev); trace_vfio_save_cleanup(vbasedev->name); } -static void vfio_save_pending(void *opaque, uint64_t threshold_size, - uint64_t *res_precopy_only, - uint64_t *res_compatible, - uint64_t *res_postcopy_only) +static void vfio_v1_save_pending(void *opaque, uint64_t threshold_size, + uint64_t *res_precopy_only, + uint64_t *res_compatible, + uint64_t *res_postcopy_only) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; @@ -472,8 +472,8 @@ static void vfio_save_pending(void *opaque, uint64_t threshold_size, *res_precopy_only += migration->pending_bytes; - trace_vfio_save_pending(vbasedev->name, *res_precopy_only, - *res_postcopy_only, *res_compatible); + trace_vfio_v1_save_pending(vbasedev->name, *res_precopy_only, + *res_postcopy_only, *res_compatible); } static int vfio_save_iterate(QEMUFile *f, void *opaque) @@ -523,15 +523,15 @@ static int vfio_save_iterate(QEMUFile *f, void *opaque) return 0; } -static int vfio_save_complete_precopy(QEMUFile 
*f, void *opaque) +static int vfio_v1_save_complete_precopy(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; uint64_t data_size; int ret; - ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_RUNNING, - VFIO_DEVICE_STATE_V1_SAVING); + ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_RUNNING, + VFIO_DEVICE_STATE_V1_SAVING); if (ret) { error_report("%s: Failed to set state STOP and SAVING", vbasedev->name); @@ -568,13 +568,14 @@ static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) return ret; } - ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_SAVING, 0); + ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_SAVING, + 0); if (ret) { error_report("%s: Failed to set state STOPPED", vbasedev->name); return ret; } - trace_vfio_save_complete_precopy(vbasedev->name); + trace_vfio_v1_save_complete_precopy(vbasedev->name); return ret; } @@ -591,7 +592,7 @@ static void vfio_save_state(QEMUFile *f, void *opaque) } } -static int vfio_load_setup(QEMUFile *f, void *opaque) +static int vfio_v1_load_setup(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; @@ -607,8 +608,8 @@ static int vfio_load_setup(QEMUFile *f, void *opaque) } } - ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_MASK, - VFIO_DEVICE_STATE_V1_RESUMING); + ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_MASK, + VFIO_DEVICE_STATE_V1_RESUMING); if (ret) { error_report("%s: Failed to set state RESUMING", vbasedev->name); if (migration->region.mmaps) { @@ -618,11 +619,11 @@ static int vfio_load_setup(QEMUFile *f, void *opaque) return ret; } -static int vfio_load_cleanup(void *opaque) +static int vfio_v1_load_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; - vfio_migration_cleanup(vbasedev); + vfio_migration_v1_cleanup(vbasedev); trace_vfio_load_cleanup(vbasedev->name); return 0; } @@ -660,7 +661,7 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) uint64_t data_size = qemu_get_be64(f); if (data_size) { - ret = vfio_load_buffer(f, vbasedev, data_size); + ret = vfio_v1_load_buffer(f, vbasedev, data_size); if (ret < 0) { return ret; } @@ -681,21 +682,21 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) return ret; } -static SaveVMHandlers savevm_vfio_handlers = { - .save_setup = vfio_save_setup, - .save_cleanup = vfio_save_cleanup, - .save_live_pending = vfio_save_pending, +static SaveVMHandlers savevm_vfio_v1_handlers = { + .save_setup = vfio_v1_save_setup, + .save_cleanup = vfio_v1_save_cleanup, + .save_live_pending = vfio_v1_save_pending, .save_live_iterate = vfio_save_iterate, - .save_live_complete_precopy = vfio_save_complete_precopy, + .save_live_complete_precopy = vfio_v1_save_complete_precopy, .save_state = vfio_save_state, - .load_setup = vfio_load_setup, - .load_cleanup = vfio_load_cleanup, + .load_setup = vfio_v1_load_setup, + .load_cleanup = vfio_v1_load_cleanup, .load_state = vfio_load_state, }; /* ---------------------------------------------------------------------- */ -static void vfio_vmstate_change(void *opaque, bool running, RunState state) +static void vfio_v1_vmstate_change(void *opaque, bool running, RunState state) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; @@ -735,21 +736,21 @@ static void vfio_vmstate_change(void *opaque, bool running, RunState state) } } - ret = vfio_migration_set_state(vbasedev, mask, value); + ret = 
vfio_migration_v1_set_state(vbasedev, mask, value); if (ret) { /* * Migration should be aborted in this case, but vm_state_notify() * currently does not support reporting failures. */ error_report("%s: Failed to set device state 0x%x", vbasedev->name, - (migration->device_state & mask) | value); + (migration->device_state_v1 & mask) | value); if (migrate_get_current()->to_dst_file) { qemu_file_set_error(migrate_get_current()->to_dst_file, ret); } } vbasedev->migration->vm_running = running; - trace_vfio_vmstate_change(vbasedev->name, running, RunState_str(state), - (migration->device_state & mask) | value); + trace_vfio_v1_vmstate_change(vbasedev->name, running, RunState_str(state), + (migration->device_state_v1 & mask) | value); } static void vfio_migration_state_notifier(Notifier *notifier, void *data) @@ -768,10 +769,10 @@ static void vfio_migration_state_notifier(Notifier *notifier, void *data) case MIGRATION_STATUS_CANCELLED: case MIGRATION_STATUS_FAILED: bytes_transferred = 0; - ret = vfio_migration_set_state(vbasedev, - ~(VFIO_DEVICE_STATE_V1_SAVING | - VFIO_DEVICE_STATE_V1_RESUMING), - VFIO_DEVICE_STATE_V1_RUNNING); + ret = vfio_migration_v1_set_state(vbasedev, + ~(VFIO_DEVICE_STATE_V1_SAVING | + VFIO_DEVICE_STATE_V1_RESUMING), + VFIO_DEVICE_STATE_V1_RUNNING); if (ret) { error_report("%s: Failed to set state RUNNING", vbasedev->name); } @@ -815,7 +816,7 @@ static int vfio_migration_init(VFIODevice *vbasedev) } vbasedev->migration = g_new0(VFIOMigration, 1); - vbasedev->migration->device_state = VFIO_DEVICE_STATE_V1_RUNNING; + vbasedev->migration->device_state_v1 = VFIO_DEVICE_STATE_V1_RUNNING; vbasedev->migration->vm_running = runstate_is_running(); ret = vfio_region_setup(obj, vbasedev, &vbasedev->migration->region, @@ -846,12 +847,11 @@ static int vfio_migration_init(VFIODevice *vbasedev) } strpadcpy(id, sizeof(id), path, '\0'); - register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, &savevm_vfio_handlers, - vbasedev); + register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, + &savevm_vfio_v1_handlers, vbasedev); - migration->vm_state = qdev_add_vm_change_state_handler(vbasedev->dev, - vfio_vmstate_change, - vbasedev); + migration->vm_state = qdev_add_vm_change_state_handler( + vbasedev->dev, vfio_v1_vmstate_change, vbasedev); migration->migration_state.notify = vfio_migration_state_notifier; add_migration_state_change_notifier(&migration->migration_state); return 0; diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index b259dcc644..664f679e0d 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -149,20 +149,20 @@ vfio_display_edid_write_error(void) "" # migration.c vfio_migration_probe(const char *name) " (%s)" -vfio_migration_set_state(const char *name, uint32_t state) " (%s) state %d" -vfio_vmstate_change(const char *name, int running, const char *reason, uint32_t dev_state) " (%s) running %d reason %s device state %d" +vfio_migration_v1_set_state(const char *name, uint32_t state) " (%s) state %d" +vfio_v1_vmstate_change(const char *name, int running, const char *reason, uint32_t dev_state) " (%s) running %d reason %s device state %d" vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" vfio_save_setup(const char *name) " (%s)" vfio_save_cleanup(const char *name) " (%s)" vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64 vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 
 vfio_save_device_config_state(const char *name) " (%s)"
-vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64
+vfio_v1_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64
 vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d"
-vfio_save_complete_precopy(const char *name) " (%s)"
+vfio_v1_save_complete_precopy(const char *name) " (%s)"
 vfio_load_device_config_state(const char *name) " (%s)"
 vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64
-vfio_load_state_device_data(const char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64
+vfio_v1_load_state_device_data(const char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64
 vfio_load_cleanup(const char *name) " (%s)"
 vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
 vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index e573f5a9f1..bbaf72ba00 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -62,7 +62,7 @@ typedef struct VFIOMigration {
     struct VFIODevice *vbasedev;
     VMChangeStateEntry *vm_state;
     VFIORegion region;
-    uint32_t device_state;
+    uint32_t device_state_v1;
     int vm_running;
     Notifier migration_state;
     uint64_t pending_bytes;
From: Avihai Horon <avihaih@nvidia.com>
To: qemu-devel@nongnu.org
Cc: Alex Williamson, Halil Pasic, Christian Borntraeger, Eric Farman,
    Richard Henderson, David Hildenbrand, Ilya Leoshkevich, Thomas Huth,
    Juan Quintela, Dr. David Alan Gilbert, Michael S. Tsirkin, Cornelia Huck,
    Paolo Bonzini, Stefan Hajnoczi, Fam Zheng, Eric Blake,
    Vladimir Sementsov-Ogievskiy, John Snow, Yishai Hadas, Jason Gunthorpe,
    Maor Gottlieb, Shay Drory, Avihai Horon, Kirti Wankhede, Tarun Gupta,
    Joao Martins
Subject: [PATCH v4 09/14] vfio/migration: Implement VFIO migration protocol v2
Date: Wed, 30 Nov 2022 11:44:09 +0200
Message-ID: <20221130094414.27247-10-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>

Add an implementation of VFIO migration protocol v2. The two protocols, v1
and v2, co-exist after this patch; the v1 protocol is removed in the next
patch.

The main differences between the v1 and v2 protocols are:

- The VFIO device state is now represented as a finite state machine instead
  of a bitmap.

- The migration interface with the kernel now uses the VFIO_DEVICE_FEATURE
  ioctl and plain read() and write() instead of the migration region.

- VFIO migration protocol v2 currently doesn't support the pre-copy phase of
  migration.

Detailed information about VFIO migration protocol v2 and its differences
from v1 can be found in [1].

[1] https://lore.kernel.org/all/20220224142024.147653-10-yishaih@nvidia.com/

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
---
 hw/vfio/common.c              |  19 +-
 hw/vfio/migration.c           | 420 +++++++++++++++++++++++++++++++---
 hw/vfio/trace-events          |   6 +
 include/hw/vfio/vfio-common.h |   5 +
 4 files changed, 411 insertions(+), 39 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 6f0157c848..70dff8ea42 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -355,10 +355,18 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
                 return false;
             }
-            if ((vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
+            if (!migration->v2 &&
+                (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
                 (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING)) {
                 return false;
             }
+
+            if (migration->v2 &&
+                (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
+                (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
+                 migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P)) {
+                return false;
+            }
         }
     }
     return true;
@@ -385,7 +393,14 @@ static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
                 return false;
             }
-            if (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) {
+            if (!migration->v2 &&
+                migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) {
+                continue;
+            }
+
+            if (migration->v2 &&
+                (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
+                 migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P)) {
                 continue;
             } else {
                 return false;
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index e59b463b6c..152690c90d 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -10,6 +10,7 @@
 #include "qemu/osdep.h"
 #include "qemu/main-loop.h"
 #include "qemu/cutils.h"
+#include "qemu/units.h"
 #include <linux/vfio.h>
 #include <sys/ioctl.h>
@@ -44,8 +45,103 @@
 #define VFIO_MIG_FLAG_DEV_SETUP_STATE   (0xffffffffef100003ULL)
 #define VFIO_MIG_FLAG_DEV_DATA_STATE    (0xffffffffef100004ULL)
+/*
+ * This is an arbitrary size based on migration of mlx5 devices, where typically
+ * total device migration size is on the order of 100s of MB. Testing with
+ * larger values, e.g. 128MB and 1GB, did not show a performance improvement.
+ */ +#define VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE (1 * MiB) + static int64_t bytes_transferred; +static const char *mig_state_to_str(enum vfio_device_mig_state state) +{ + switch (state) { + case VFIO_DEVICE_STATE_ERROR: + return "ERROR"; + case VFIO_DEVICE_STATE_STOP: + return "STOP"; + case VFIO_DEVICE_STATE_RUNNING: + return "RUNNING"; + case VFIO_DEVICE_STATE_STOP_COPY: + return "STOP_COPY"; + case VFIO_DEVICE_STATE_RESUMING: + return "RESUMING"; + case VFIO_DEVICE_STATE_RUNNING_P2P: + return "RUNNING_P2P"; + default: + return "UNKNOWN STATE"; + } +} + +static int vfio_migration_set_state(VFIODevice *vbasedev, + enum vfio_device_mig_state new_state, + enum vfio_device_mig_state recover_state) +{ + VFIOMigration *migration = vbasedev->migration; + uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) + + sizeof(struct vfio_device_feature_mig_state), + sizeof(uint64_t))] = {}; + struct vfio_device_feature *feature = (struct vfio_device_feature *)buf; + struct vfio_device_feature_mig_state *mig_state = + (struct vfio_device_feature_mig_state *)feature->data; + + feature->argsz = sizeof(buf); + feature->flags = + VFIO_DEVICE_FEATURE_SET | VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE; + mig_state->device_state = new_state; + if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) { + /* Try to set the device in some good state */ + int err = -errno; + + error_report( + "%s: Failed setting device state to %s, err: %s. Setting device in recover state %s", + vbasedev->name, mig_state_to_str(new_state), + strerror(errno), mig_state_to_str(recover_state)); + + mig_state->device_state = recover_state; + if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) { + err = -errno; + error_report( + "%s: Failed setting device in recover state, err: %s. Resetting device", + vbasedev->name, strerror(errno)); + + if (ioctl(vbasedev->fd, VFIO_DEVICE_RESET)) { + hw_error("%s: Failed resetting device, err: %s", vbasedev->name, + strerror(errno)); + } + + migration->device_state = VFIO_DEVICE_STATE_RUNNING; + + return err; + } + + migration->device_state = recover_state; + + return err; + } + + migration->device_state = new_state; + if (mig_state->data_fd != -1) { + if (migration->data_fd != -1) { + /* + * This can happen if the device is asynchronously reset and + * terminates a data transfer. 
+ */ + error_report("%s: data_fd out of sync", vbasedev->name); + close(mig_state->data_fd); + + return -EBADF; + } + + migration->data_fd = mig_state->data_fd; + } + + trace_vfio_migration_set_state(vbasedev->name, mig_state_to_str(new_state)); + + return 0; +} + static inline int vfio_mig_access(VFIODevice *vbasedev, void *val, int count, off_t off, bool iswrite) { @@ -260,6 +356,18 @@ static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size) return ret; } +static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, + uint64_t data_size) +{ + VFIOMigration *migration = vbasedev->migration; + int ret; + + ret = qemu_file_get_to_fd(f, migration->data_fd, data_size); + trace_vfio_load_state_device_data(vbasedev->name, data_size, ret); + + return ret; +} + static int vfio_v1_load_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t data_size) { @@ -394,6 +502,14 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque) return qemu_file_get_error(f); } +static void vfio_migration_cleanup(VFIODevice *vbasedev) +{ + VFIOMigration *migration = vbasedev->migration; + + close(migration->data_fd); + migration->data_fd = -1; +} + static void vfio_migration_v1_cleanup(VFIODevice *vbasedev) { VFIOMigration *migration = vbasedev->migration; @@ -405,6 +521,27 @@ static void vfio_migration_v1_cleanup(VFIODevice *vbasedev) /* ---------------------------------------------------------------------- */ +static int vfio_save_setup(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE); + + migration->data_buffer = g_try_malloc0(migration->data_buffer_size); + if (!migration->data_buffer) { + error_report("%s: Failed to allocate migration data buffer", + vbasedev->name); + return -ENOMEM; + } + + trace_vfio_save_setup(vbasedev->name); + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + + return qemu_file_get_error(f); +} + static int vfio_v1_save_setup(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -448,6 +585,17 @@ static int vfio_v1_save_setup(QEMUFile *f, void *opaque) return 0; } +static void vfio_save_cleanup(void *opaque) +{ + VFIODevice *vbasedev = opaque; + VFIOMigration *migration = vbasedev->migration; + + g_free(migration->data_buffer); + migration->data_buffer = NULL; + vfio_migration_cleanup(vbasedev); + trace_vfio_save_cleanup(vbasedev->name); +} + static void vfio_v1_save_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; @@ -456,6 +604,31 @@ static void vfio_v1_save_cleanup(void *opaque) trace_vfio_save_cleanup(vbasedev->name); } +/* + * Migration size of VFIO devices can be as little as a few KBs or as big as + * many GBs. This value should be big enough to cover the worst case. + */ +#define VFIO_MIG_STOP_COPY_SIZE (100 * GiB) +static void vfio_save_pending(void *opaque, uint64_t threshold_size, + uint64_t *res_precopy_only, + uint64_t *res_compatible, + uint64_t *res_postcopy_only) +{ + VFIODevice *vbasedev = opaque; + + /* + * VFIO migration protocol v2 currently doesn't have an API to get pending + * migration size. Until such an API is introduced, report big pending size + * so the device migration size will be taken into account and downtime + * limit won't be violated. 
+ */ + *res_precopy_only += VFIO_MIG_STOP_COPY_SIZE; + + trace_vfio_save_pending(vbasedev->name, *res_precopy_only, + *res_postcopy_only, *res_compatible, + VFIO_MIG_STOP_COPY_SIZE); +} + static void vfio_v1_save_pending(void *opaque, uint64_t threshold_size, uint64_t *res_precopy_only, uint64_t *res_compatible, @@ -523,6 +696,65 @@ static int vfio_save_iterate(QEMUFile *f, void *opaque) return 0; } +/* Returns 1 if end-of-stream is reached, 0 if more data and -1 if error */ +static int vfio_save_block(QEMUFile *f, VFIOMigration *migration) +{ + ssize_t data_size; + + data_size = read(migration->data_fd, migration->data_buffer, + migration->data_buffer_size); + if (data_size < 0) { + return -errno; + } + if (data_size == 0) { + return 1; + } + + qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); + qemu_put_be64(f, data_size); + qemu_put_buffer(f, migration->data_buffer, data_size); + bytes_transferred += data_size; + + trace_vfio_save_block(migration->vbasedev->name, data_size); + + return qemu_file_get_error(f); +} + +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + enum vfio_device_mig_state recover_state; + int ret; + + /* We reach here with device state STOP only */ + recover_state = VFIO_DEVICE_STATE_STOP; + ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOP_COPY, + recover_state); + if (ret) { + return ret; + } + + do { + ret = vfio_save_block(f, vbasedev->migration); + if (ret < 0) { + return ret; + } + } while (!ret); + + qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); + ret = qemu_file_get_error(f); + if (ret) { + return ret; + } + + recover_state = VFIO_DEVICE_STATE_ERROR; + ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOP, + recover_state); + trace_vfio_save_complete_precopy(vbasedev->name, ret); + + return ret; +} + static int vfio_v1_save_complete_precopy(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -592,6 +824,14 @@ static void vfio_save_state(QEMUFile *f, void *opaque) } } +static int vfio_load_setup(QEMUFile *f, void *opaque) +{ + VFIODevice *vbasedev = opaque; + + return vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RESUMING, + vbasedev->migration->device_state); +} + static int vfio_v1_load_setup(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -619,6 +859,16 @@ static int vfio_v1_load_setup(QEMUFile *f, void *opaque) return ret; } +static int vfio_load_cleanup(void *opaque) +{ + VFIODevice *vbasedev = opaque; + + vfio_migration_cleanup(vbasedev); + trace_vfio_load_cleanup(vbasedev->name); + + return 0; +} + static int vfio_v1_load_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; @@ -661,7 +911,11 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) uint64_t data_size = qemu_get_be64(f); if (data_size) { - ret = vfio_v1_load_buffer(f, vbasedev, data_size); + if (vbasedev->migration->v2) { + ret = vfio_load_buffer(f, vbasedev, data_size); + } else { + ret = vfio_v1_load_buffer(f, vbasedev, data_size); + } if (ret < 0) { return ret; } @@ -682,6 +936,17 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) return ret; } +static const SaveVMHandlers savevm_vfio_handlers = { + .save_setup = vfio_save_setup, + .save_cleanup = vfio_save_cleanup, + .save_live_pending = vfio_save_pending, + .save_live_complete_precopy = vfio_save_complete_precopy, + .save_state = vfio_save_state, + .load_setup = vfio_load_setup, + .load_cleanup = vfio_load_cleanup, + .load_state = vfio_load_state, +}; + static SaveVMHandlers 
savevm_vfio_v1_handlers = { .save_setup = vfio_v1_save_setup, .save_cleanup = vfio_v1_save_cleanup, @@ -696,6 +961,34 @@ static SaveVMHandlers savevm_vfio_v1_handlers = { /* ---------------------------------------------------------------------- */ +static void vfio_vmstate_change(void *opaque, bool running, RunState state) +{ + VFIODevice *vbasedev = opaque; + enum vfio_device_mig_state new_state; + int ret; + + if (running) { + new_state = VFIO_DEVICE_STATE_RUNNING; + } else { + new_state = VFIO_DEVICE_STATE_STOP; + } + + ret = vfio_migration_set_state(vbasedev, new_state, + VFIO_DEVICE_STATE_ERROR); + if (ret) { + /* + * Migration should be aborted in this case, but vm_state_notify() + * currently does not support reporting failures. + */ + if (migrate_get_current()->to_dst_file) { + qemu_file_set_error(migrate_get_current()->to_dst_file, ret); + } + } + + trace_vfio_vmstate_change(vbasedev->name, running, RunState_str(state), + mig_state_to_str(new_state)); +} + static void vfio_v1_vmstate_change(void *opaque, bool running, RunState state) { VFIODevice *vbasedev = opaque; @@ -769,12 +1062,17 @@ static void vfio_migration_state_notifier(Notifier *notifier, void *data) case MIGRATION_STATUS_CANCELLED: case MIGRATION_STATUS_FAILED: bytes_transferred = 0; - ret = vfio_migration_v1_set_state(vbasedev, - ~(VFIO_DEVICE_STATE_V1_SAVING | - VFIO_DEVICE_STATE_V1_RESUMING), - VFIO_DEVICE_STATE_V1_RUNNING); - if (ret) { - error_report("%s: Failed to set state RUNNING", vbasedev->name); + if (migration->v2) { + vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING, + VFIO_DEVICE_STATE_ERROR); + } else { + ret = vfio_migration_v1_set_state(vbasedev, + ~(VFIO_DEVICE_STATE_V1_SAVING | + VFIO_DEVICE_STATE_V1_RESUMING), + VFIO_DEVICE_STATE_V1_RUNNING); + if (ret) { + error_report("%s: Failed to set state RUNNING", vbasedev->name); + } } } } @@ -783,12 +1081,34 @@ static void vfio_migration_exit(VFIODevice *vbasedev) { VFIOMigration *migration = vbasedev->migration; - vfio_region_exit(&migration->region); - vfio_region_finalize(&migration->region); + if (!migration->v2) { + vfio_region_exit(&migration->region); + vfio_region_finalize(&migration->region); + } g_free(vbasedev->migration); vbasedev->migration = NULL; } +static int vfio_migration_query_flags(VFIODevice *vbasedev, uint64_t *mig_flags) +{ + uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) + + sizeof(struct vfio_device_feature_migration), + sizeof(uint64_t))] = {}; + struct vfio_device_feature *feature = (struct vfio_device_feature *)buf; + struct vfio_device_feature_migration *mig = + (struct vfio_device_feature_migration *)feature->data; + + feature->argsz = sizeof(buf); + feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION; + if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) { + return -EOPNOTSUPP; + } + + *mig_flags = mig->flags; + + return 0; +} + static int vfio_migration_init(VFIODevice *vbasedev) { int ret; @@ -797,6 +1117,7 @@ static int vfio_migration_init(VFIODevice *vbasedev) char id[256] = ""; g_autofree char *path = NULL, *oid = NULL; struct vfio_region_info *info; + uint64_t mig_flags; if (!vbasedev->ops->vfio_get_object) { return -EINVAL; @@ -807,34 +1128,50 @@ static int vfio_migration_init(VFIODevice *vbasedev) return -EINVAL; } - ret = vfio_get_dev_region_info(vbasedev, - VFIO_REGION_TYPE_MIGRATION_DEPRECATED, - VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED, - &info); - if (ret) { - return ret; - } + ret = vfio_migration_query_flags(vbasedev, &mig_flags); + if (!ret) { + /* Migration v2 
*/ + /* Basic migration functionality must be supported */ + if (!(mig_flags & VFIO_MIGRATION_STOP_COPY)) { + return -EOPNOTSUPP; + } + vbasedev->migration = g_new0(VFIOMigration, 1); + vbasedev->migration->device_state = VFIO_DEVICE_STATE_RUNNING; + vbasedev->migration->data_buffer_size = + VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE; + vbasedev->migration->data_fd = -1; + vbasedev->migration->v2 = true; + } else { + /* Migration v1 */ + ret = vfio_get_dev_region_info(vbasedev, + VFIO_REGION_TYPE_MIGRATION_DEPRECATED, + VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED, + &info); + if (ret) { + return ret; + } - vbasedev->migration = g_new0(VFIOMigration, 1); - vbasedev->migration->device_state_v1 = VFIO_DEVICE_STATE_V1_RUNNING; - vbasedev->migration->vm_running = runstate_is_running(); + vbasedev->migration = g_new0(VFIOMigration, 1); + vbasedev->migration->device_state_v1 = VFIO_DEVICE_STATE_V1_RUNNING; + vbasedev->migration->vm_running = runstate_is_running(); - ret = vfio_region_setup(obj, vbasedev, &vbasedev->migration->region, - info->index, "migration"); - if (ret) { - error_report("%s: Failed to setup VFIO migration region %d: %s", - vbasedev->name, info->index, strerror(-ret)); - goto err; - } + ret = vfio_region_setup(obj, vbasedev, &vbasedev->migration->region, + info->index, "migration"); + if (ret) { + error_report("%s: Failed to setup VFIO migration region %d: %s", + vbasedev->name, info->index, strerror(-ret)); + goto err; + } - if (!vbasedev->migration->region.size) { - error_report("%s: Invalid zero-sized VFIO migration region %d", - vbasedev->name, info->index); - ret = -EINVAL; - goto err; - } + if (!vbasedev->migration->region.size) { + error_report("%s: Invalid zero-sized VFIO migration region %d", + vbasedev->name, info->index); + ret = -EINVAL; + goto err; + } - g_free(info); + g_free(info); + } migration = vbasedev->migration; migration->vbasedev = vbasedev; @@ -847,11 +1184,20 @@ static int vfio_migration_init(VFIODevice *vbasedev) } strpadcpy(id, sizeof(id), path, '\0'); - register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, - &savevm_vfio_v1_handlers, vbasedev); + if (migration->v2) { + register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, + &savevm_vfio_handlers, vbasedev); + + migration->vm_state = qdev_add_vm_change_state_handler( + vbasedev->dev, vfio_vmstate_change, vbasedev); + } else { + register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, + &savevm_vfio_v1_handlers, vbasedev); + + migration->vm_state = qdev_add_vm_change_state_handler( + vbasedev->dev, vfio_v1_vmstate_change, vbasedev); + } - migration->vm_state = qdev_add_vm_change_state_handler( - vbasedev->dev, vfio_v1_vmstate_change, vbasedev); migration->migration_state.notify = vfio_migration_state_notifier; add_migration_state_change_notifier(&migration->migration_state); return 0; diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index 664f679e0d..71536caacb 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -149,7 +149,9 @@ vfio_display_edid_write_error(void) "" # migration.c vfio_migration_probe(const char *name) " (%s)" +vfio_migration_set_state(const char *name, const char *state) " (%s) state %s" vfio_migration_v1_set_state(const char *name, uint32_t state) " (%s) state %d" +vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s" vfio_v1_vmstate_change(const char *name, int running, const char *reason, uint32_t dev_state) " (%s) running %d reason %s device state %d" vfio_migration_state_notifier(const char 
*name, const char *state) " (%s) state %s"
vfio_save_setup(const char *name) " (%s)"
@@ -157,12 +159,16 @@ vfio_save_cleanup(const char *name) " (%s)"
vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64
vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64
vfio_save_device_config_state(const char *name) " (%s)"
+vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible, uint64_t stopcopy) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64" stopcopy size 0x%"PRIx64
vfio_v1_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64
vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d"
+vfio_save_complete_precopy(const char *name, int ret) " (%s) ret %d"
vfio_v1_save_complete_precopy(const char *name) " (%s)"
vfio_load_device_config_state(const char *name) " (%s)"
vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64
vfio_v1_load_state_device_data(const char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64
+vfio_load_state_device_data(const char *name, uint64_t data_size, int ret) " (%s) size 0x%"PRIx64" ret %d"
vfio_load_cleanup(const char *name) " (%s)"
vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
+vfio_save_block(const char *name, int data_size) " (%s) data_size %d"
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index bbaf72ba00..2ec3346fea 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -66,6 +66,11 @@ typedef struct VFIOMigration {
     int vm_running;
     Notifier migration_state;
     uint64_t pending_bytes;
+    enum vfio_device_mig_state device_state;
+    int data_fd;
+    void *data_buffer;
+    size_t data_buffer_size;
+    bool v2;
 } VFIOMigration;

 typedef struct VFIOAddressSpace {

From patchwork Wed Nov 30 09:44:10 2022
X-Patchwork-Submitter: Avihai Horon <avihaih@nvidia.com>
X-Patchwork-Id: 13059645
From: Avihai Horon <avihaih@nvidia.com>
To: qemu-devel@nongnu.org
Cc: Alex Williamson, Halil Pasic, Christian Borntraeger, Eric Farman,
    Richard Henderson, David Hildenbrand, Ilya Leoshkevich, Thomas Huth,
    Juan Quintela, Dr. David Alan Gilbert, Michael S. Tsirkin, Cornelia Huck,
    Paolo Bonzini, Stefan Hajnoczi, Fam Zheng, Eric Blake,
    Vladimir Sementsov-Ogievskiy, John Snow, Yishai Hadas, Jason Gunthorpe,
    Maor Gottlieb, Shay Drory, Avihai Horon, Kirti Wankhede, Tarun Gupta,
    Joao Martins
Subject: [PATCH v4 10/14] vfio/migration: Remove VFIO migration protocol v1
Date: Wed, 30 Nov 2022 11:44:10 +0200
Message-ID: <20221130094414.27247-11-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
References: <20221130094414.27247-1-avihaih@nvidia.com>

Now that the v2 protocol implementation has been added, remove the
deprecated v1 implementation.

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
---
 hw/vfio/common.c              |  19 +-
 hw/vfio/migration.c           | 695 +---------------------------------
 hw/vfio/trace-events          |   8 -
 include/hw/vfio/vfio-common.h |   5 -
 4 files changed, 22 insertions(+), 705 deletions(-)
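With the v1 region-based probe gone, whether a device can be migrated is
decided solely by the VFIO_DEVICE_FEATURE_MIGRATION query that the remaining
v2 path performs (vfio_migration_query_flags() below), and QEMU requires the
STOP_COPY capability. A minimal standalone sketch of that probe (illustrative
only, not part of the patch; it assumes an open VFIO device fd and the
linux/vfio.h v2 UAPI):

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static bool vfio_device_supports_mig_v2(int device_fd)
{
    uint64_t buf[(sizeof(struct vfio_device_feature) +
                  sizeof(struct vfio_device_feature_migration) +
                  sizeof(uint64_t) - 1) / sizeof(uint64_t)];
    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
    struct vfio_device_feature_migration *mig =
        (struct vfio_device_feature_migration *)feature->data;

    memset(buf, 0, sizeof(buf));
    feature->argsz = sizeof(buf);
    feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;

    if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature)) {
        return false;               /* no v2 migration support at all */
    }

    /* Basic stop-copy migration is the minimum QEMU requires */
    return mig->flags & VFIO_MIGRATION_STOP_COPY;
}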
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 70dff8ea42..4fdf281a12 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -355,14 +355,7 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
                 return false;
             }
-            if (!migration->v2 &&
-                (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
-                (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING)) {
-                return false;
-            }
-
-            if (migration->v2 &&
-                (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
+            if ((vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
                 (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
                  migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P)) {
                 return false;
@@ -393,14 +386,8 @@ static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
                 return false;
             }
-            if (!migration->v2 &&
-                migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) {
-                continue;
-            }
-
-            if (migration->v2 &&
-                (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
-                 migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P)) {
+            if (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
+                migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P) {
                 continue;
             } else {
                 return false;
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 152690c90d..98bde4a517 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -142,220 +142,6 @@ static int vfio_migration_set_state(VFIODevice *vbasedev,
     return 0;
 }
-static inline int vfio_mig_access(VFIODevice *vbasedev, void *val, int count,
-                                  off_t off, bool iswrite)
-{
-    int ret;
-
-    ret = iswrite ? pwrite(vbasedev->fd, val, count, off) :
-                    pread(vbasedev->fd, val, count, off);
-    if (ret < count) {
-        error_report("vfio_mig_%s %d byte %s: failed at offset 0x%"
-                     HWADDR_PRIx", err: %s", iswrite ? "write" : "read", count,
-                     vbasedev->name, off, strerror(errno));
-        return (ret < 0) ?
ret : -EINVAL; - } - return 0; -} - -static int vfio_mig_rw(VFIODevice *vbasedev, __u8 *buf, size_t count, - off_t off, bool iswrite) -{ - int ret, done = 0; - __u8 *tbuf = buf; - - while (count) { - int bytes = 0; - - if (count >= 8 && !(off % 8)) { - bytes = 8; - } else if (count >= 4 && !(off % 4)) { - bytes = 4; - } else if (count >= 2 && !(off % 2)) { - bytes = 2; - } else { - bytes = 1; - } - - ret = vfio_mig_access(vbasedev, tbuf, bytes, off, iswrite); - if (ret) { - return ret; - } - - count -= bytes; - done += bytes; - off += bytes; - tbuf += bytes; - } - return done; -} - -#define vfio_mig_read(f, v, c, o) vfio_mig_rw(f, (__u8 *)v, c, o, false) -#define vfio_mig_write(f, v, c, o) vfio_mig_rw(f, (__u8 *)v, c, o, true) - -#define VFIO_MIG_STRUCT_OFFSET(f) \ - offsetof(struct vfio_device_migration_info, f) -/* - * Change the device_state register for device @vbasedev. Bits set in @mask - * are preserved, bits set in @value are set, and bits not set in either @mask - * or @value are cleared in device_state. If the register cannot be accessed, - * the resulting state would be invalid, or the device enters an error state, - * an error is returned. - */ - -static int vfio_migration_v1_set_state(VFIODevice *vbasedev, uint32_t mask, - uint32_t value) -{ - VFIOMigration *migration = vbasedev->migration; - VFIORegion *region = &migration->region; - off_t dev_state_off = region->fd_offset + - VFIO_MIG_STRUCT_OFFSET(device_state); - uint32_t device_state; - int ret; - - ret = vfio_mig_read(vbasedev, &device_state, sizeof(device_state), - dev_state_off); - if (ret < 0) { - return ret; - } - - device_state = (device_state & mask) | value; - - if (!VFIO_DEVICE_STATE_VALID(device_state)) { - return -EINVAL; - } - - ret = vfio_mig_write(vbasedev, &device_state, sizeof(device_state), - dev_state_off); - if (ret < 0) { - int rret; - - rret = vfio_mig_read(vbasedev, &device_state, sizeof(device_state), - dev_state_off); - - if ((rret < 0) || (VFIO_DEVICE_STATE_IS_ERROR(device_state))) { - hw_error("%s: Device in error state 0x%x", vbasedev->name, - device_state); - return rret ? rret : -EIO; - } - return ret; - } - - migration->device_state_v1 = device_state; - trace_vfio_migration_v1_set_state(vbasedev->name, device_state); - return 0; -} - -static void *get_data_section_size(VFIORegion *region, uint64_t data_offset, - uint64_t data_size, uint64_t *size) -{ - void *ptr = NULL; - uint64_t limit = 0; - int i; - - if (!region->mmaps) { - if (size) { - *size = MIN(data_size, region->size - data_offset); - } - return ptr; - } - - for (i = 0; i < region->nr_mmaps; i++) { - VFIOMmap *map = region->mmaps + i; - - if ((data_offset >= map->offset) && - (data_offset < map->offset + map->size)) { - - /* check if data_offset is within sparse mmap areas */ - ptr = map->mmap + data_offset - map->offset; - if (size) { - *size = MIN(data_size, map->offset + map->size - data_offset); - } - break; - } else if ((data_offset < map->offset) && - (!limit || limit > map->offset)) { - /* - * data_offset is not within sparse mmap areas, find size of - * non-mapped area. Check through all list since region->mmaps list - * is not sorted. - */ - limit = map->offset; - } - } - - if (!ptr && size) { - *size = limit ? 
MIN(data_size, limit - data_offset) : data_size; - } - return ptr; -} - -static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size) -{ - VFIOMigration *migration = vbasedev->migration; - VFIORegion *region = &migration->region; - uint64_t data_offset = 0, data_size = 0, sz; - int ret; - - ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_offset)); - if (ret < 0) { - return ret; - } - - ret = vfio_mig_read(vbasedev, &data_size, sizeof(data_size), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_size)); - if (ret < 0) { - return ret; - } - - trace_vfio_save_buffer(vbasedev->name, data_offset, data_size, - migration->pending_bytes); - - qemu_put_be64(f, data_size); - sz = data_size; - - while (sz) { - void *buf; - uint64_t sec_size; - bool buf_allocated = false; - - buf = get_data_section_size(region, data_offset, sz, &sec_size); - - if (!buf) { - buf = g_try_malloc(sec_size); - if (!buf) { - error_report("%s: Error allocating buffer ", __func__); - return -ENOMEM; - } - buf_allocated = true; - - ret = vfio_mig_read(vbasedev, buf, sec_size, - region->fd_offset + data_offset); - if (ret < 0) { - g_free(buf); - return ret; - } - } - - qemu_put_buffer(f, buf, sec_size); - - if (buf_allocated) { - g_free(buf); - } - sz -= sec_size; - data_offset += sec_size; - } - - ret = qemu_file_get_error(f); - - if (!ret && size) { - *size = data_size; - } - - bytes_transferred += data_size; - return ret; -} - static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t data_size) { @@ -368,96 +154,6 @@ static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev, return ret; } -static int vfio_v1_load_buffer(QEMUFile *f, VFIODevice *vbasedev, - uint64_t data_size) -{ - VFIORegion *region = &vbasedev->migration->region; - uint64_t data_offset = 0, size, report_size; - int ret; - - do { - ret = vfio_mig_read(vbasedev, &data_offset, sizeof(data_offset), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_offset)); - if (ret < 0) { - return ret; - } - - if (data_offset + data_size > region->size) { - /* - * If data_size is greater than the data section of migration region - * then iterate the write buffer operation. This case can occur if - * size of migration region at destination is smaller than size of - * migration region at source. 
- */ - report_size = size = region->size - data_offset; - data_size -= size; - } else { - report_size = size = data_size; - data_size = 0; - } - - trace_vfio_v1_load_state_device_data(vbasedev->name, data_offset, size); - - while (size) { - void *buf; - uint64_t sec_size; - bool buf_alloc = false; - - buf = get_data_section_size(region, data_offset, size, &sec_size); - - if (!buf) { - buf = g_try_malloc(sec_size); - if (!buf) { - error_report("%s: Error allocating buffer ", __func__); - return -ENOMEM; - } - buf_alloc = true; - } - - qemu_get_buffer(f, buf, sec_size); - - if (buf_alloc) { - ret = vfio_mig_write(vbasedev, buf, sec_size, - region->fd_offset + data_offset); - g_free(buf); - - if (ret < 0) { - return ret; - } - } - size -= sec_size; - data_offset += sec_size; - } - - ret = vfio_mig_write(vbasedev, &report_size, sizeof(report_size), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(data_size)); - if (ret < 0) { - return ret; - } - } while (data_size); - - return 0; -} - -static int vfio_update_pending(VFIODevice *vbasedev) -{ - VFIOMigration *migration = vbasedev->migration; - VFIORegion *region = &migration->region; - uint64_t pending_bytes = 0; - int ret; - - ret = vfio_mig_read(vbasedev, &pending_bytes, sizeof(pending_bytes), - region->fd_offset + VFIO_MIG_STRUCT_OFFSET(pending_bytes)); - if (ret < 0) { - migration->pending_bytes = 0; - return ret; - } - - migration->pending_bytes = pending_bytes; - trace_vfio_update_pending(vbasedev->name, pending_bytes); - return 0; -} - static int vfio_save_device_config_state(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -510,15 +206,6 @@ static void vfio_migration_cleanup(VFIODevice *vbasedev) migration->data_fd = -1; } -static void vfio_migration_v1_cleanup(VFIODevice *vbasedev) -{ - VFIOMigration *migration = vbasedev->migration; - - if (migration->region.mmaps) { - vfio_region_unmap(&migration->region); - } -} - /* ---------------------------------------------------------------------- */ static int vfio_save_setup(QEMUFile *f, void *opaque) @@ -542,49 +229,6 @@ static int vfio_save_setup(QEMUFile *f, void *opaque) return qemu_file_get_error(f); } -static int vfio_v1_save_setup(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - int ret; - - trace_vfio_save_setup(vbasedev->name); - - qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE); - - if (migration->region.mmaps) { - /* - * Calling vfio_region_mmap() from migration thread. Memory API called - * from this function require locking the iothread when called from - * outside the main loop thread. 
- */ - qemu_mutex_lock_iothread(); - ret = vfio_region_mmap(&migration->region); - qemu_mutex_unlock_iothread(); - if (ret) { - error_report("%s: Failed to mmap VFIO migration region: %s", - vbasedev->name, strerror(-ret)); - error_report("%s: Falling back to slow path", vbasedev->name); - } - } - - ret = vfio_migration_v1_set_state(vbasedev, VFIO_DEVICE_STATE_MASK, - VFIO_DEVICE_STATE_V1_SAVING); - if (ret) { - error_report("%s: Failed to set state SAVING", vbasedev->name); - return ret; - } - - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - - ret = qemu_file_get_error(f); - if (ret) { - return ret; - } - - return 0; -} - static void vfio_save_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; @@ -596,14 +240,6 @@ static void vfio_save_cleanup(void *opaque) trace_vfio_save_cleanup(vbasedev->name); } -static void vfio_v1_save_cleanup(void *opaque) -{ - VFIODevice *vbasedev = opaque; - - vfio_migration_v1_cleanup(vbasedev); - trace_vfio_save_cleanup(vbasedev->name); -} - /* * Migration size of VFIO devices can be as little as a few KBs or as big as * many GBs. This value should be big enough to cover the worst case. @@ -629,73 +265,6 @@ static void vfio_save_pending(void *opaque, uint64_t threshold_size, VFIO_MIG_STOP_COPY_SIZE); } -static void vfio_v1_save_pending(void *opaque, uint64_t threshold_size, - uint64_t *res_precopy_only, - uint64_t *res_compatible, - uint64_t *res_postcopy_only) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - int ret; - - ret = vfio_update_pending(vbasedev); - if (ret) { - return; - } - - *res_precopy_only += migration->pending_bytes; - - trace_vfio_v1_save_pending(vbasedev->name, *res_precopy_only, - *res_postcopy_only, *res_compatible); -} - -static int vfio_save_iterate(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - uint64_t data_size; - int ret; - - qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); - - if (migration->pending_bytes == 0) { - ret = vfio_update_pending(vbasedev); - if (ret) { - return ret; - } - - if (migration->pending_bytes == 0) { - qemu_put_be64(f, 0); - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - /* indicates data finished, goto complete phase */ - return 1; - } - } - - ret = vfio_save_buffer(f, vbasedev, &data_size); - if (ret) { - error_report("%s: vfio_save_buffer failed %s", vbasedev->name, - strerror(errno)); - return ret; - } - - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - - ret = qemu_file_get_error(f); - if (ret) { - return ret; - } - - /* - * Reset pending_bytes as .save_live_pending is not called during savevm or - * snapshot case, in such case vfio_update_pending() at the start of this - * function updates pending_bytes. 
- */ - migration->pending_bytes = 0; - trace_vfio_save_iterate(vbasedev->name, data_size); - return 0; -} - /* Returns 1 if end-of-stream is reached, 0 if more data and -1 if error */ static int vfio_save_block(QEMUFile *f, VFIOMigration *migration) { @@ -755,62 +324,6 @@ static int vfio_save_complete_precopy(QEMUFile *f, void *opaque) return ret; } -static int vfio_v1_save_complete_precopy(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - uint64_t data_size; - int ret; - - ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_RUNNING, - VFIO_DEVICE_STATE_V1_SAVING); - if (ret) { - error_report("%s: Failed to set state STOP and SAVING", - vbasedev->name); - return ret; - } - - ret = vfio_update_pending(vbasedev); - if (ret) { - return ret; - } - - while (migration->pending_bytes > 0) { - qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE); - ret = vfio_save_buffer(f, vbasedev, &data_size); - if (ret < 0) { - error_report("%s: Failed to save buffer", vbasedev->name); - return ret; - } - - if (data_size == 0) { - break; - } - - ret = vfio_update_pending(vbasedev); - if (ret) { - return ret; - } - } - - qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE); - - ret = qemu_file_get_error(f); - if (ret) { - return ret; - } - - ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_V1_SAVING, - 0); - if (ret) { - error_report("%s: Failed to set state STOPPED", vbasedev->name); - return ret; - } - - trace_vfio_v1_save_complete_precopy(vbasedev->name); - return ret; -} - static void vfio_save_state(QEMUFile *f, void *opaque) { VFIODevice *vbasedev = opaque; @@ -832,33 +345,6 @@ static int vfio_load_setup(QEMUFile *f, void *opaque) vbasedev->migration->device_state); } -static int vfio_v1_load_setup(QEMUFile *f, void *opaque) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - int ret = 0; - - if (migration->region.mmaps) { - ret = vfio_region_mmap(&migration->region); - if (ret) { - error_report("%s: Failed to mmap VFIO migration region %d: %s", - vbasedev->name, migration->region.nr, - strerror(-ret)); - error_report("%s: Falling back to slow path", vbasedev->name); - } - } - - ret = vfio_migration_v1_set_state(vbasedev, ~VFIO_DEVICE_STATE_MASK, - VFIO_DEVICE_STATE_V1_RESUMING); - if (ret) { - error_report("%s: Failed to set state RESUMING", vbasedev->name); - if (migration->region.mmaps) { - vfio_region_unmap(&migration->region); - } - } - return ret; -} - static int vfio_load_cleanup(void *opaque) { VFIODevice *vbasedev = opaque; @@ -869,15 +355,6 @@ static int vfio_load_cleanup(void *opaque) return 0; } -static int vfio_v1_load_cleanup(void *opaque) -{ - VFIODevice *vbasedev = opaque; - - vfio_migration_v1_cleanup(vbasedev); - trace_vfio_load_cleanup(vbasedev->name); - return 0; -} - static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) { VFIODevice *vbasedev = opaque; @@ -911,11 +388,7 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id) uint64_t data_size = qemu_get_be64(f); if (data_size) { - if (vbasedev->migration->v2) { - ret = vfio_load_buffer(f, vbasedev, data_size); - } else { - ret = vfio_v1_load_buffer(f, vbasedev, data_size); - } + ret = vfio_load_buffer(f, vbasedev, data_size); if (ret < 0) { return ret; } @@ -947,18 +420,6 @@ static const SaveVMHandlers savevm_vfio_handlers = { .load_state = vfio_load_state, }; -static SaveVMHandlers savevm_vfio_v1_handlers = { - .save_setup = vfio_v1_save_setup, - .save_cleanup = vfio_v1_save_cleanup, - 
.save_live_pending = vfio_v1_save_pending, - .save_live_iterate = vfio_save_iterate, - .save_live_complete_precopy = vfio_v1_save_complete_precopy, - .save_state = vfio_save_state, - .load_setup = vfio_v1_load_setup, - .load_cleanup = vfio_v1_load_cleanup, - .load_state = vfio_load_state, -}; - /* ---------------------------------------------------------------------- */ static void vfio_vmstate_change(void *opaque, bool running, RunState state) @@ -989,70 +450,12 @@ static void vfio_vmstate_change(void *opaque, bool running, RunState state) mig_state_to_str(new_state)); } -static void vfio_v1_vmstate_change(void *opaque, bool running, RunState state) -{ - VFIODevice *vbasedev = opaque; - VFIOMigration *migration = vbasedev->migration; - uint32_t value, mask; - int ret; - - if (vbasedev->migration->vm_running == running) { - return; - } - - if (running) { - /* - * Here device state can have one of _SAVING, _RESUMING or _STOP bit. - * Transition from _SAVING to _RUNNING can happen if there is migration - * failure, in that case clear _SAVING bit. - * Transition from _RESUMING to _RUNNING occurs during resuming - * phase, in that case clear _RESUMING bit. - * In both the above cases, set _RUNNING bit. - */ - mask = ~VFIO_DEVICE_STATE_MASK; - value = VFIO_DEVICE_STATE_V1_RUNNING; - } else { - /* - * Here device state could be either _RUNNING or _SAVING|_RUNNING. Reset - * _RUNNING bit - */ - mask = ~VFIO_DEVICE_STATE_V1_RUNNING; - - /* - * When VM state transition to stop for savevm command, device should - * start saving data. - */ - if (state == RUN_STATE_SAVE_VM) { - value = VFIO_DEVICE_STATE_V1_SAVING; - } else { - value = 0; - } - } - - ret = vfio_migration_v1_set_state(vbasedev, mask, value); - if (ret) { - /* - * Migration should be aborted in this case, but vm_state_notify() - * currently does not support reporting failures. 
- */ - error_report("%s: Failed to set device state 0x%x", vbasedev->name, - (migration->device_state_v1 & mask) | value); - if (migrate_get_current()->to_dst_file) { - qemu_file_set_error(migrate_get_current()->to_dst_file, ret); - } - } - vbasedev->migration->vm_running = running; - trace_vfio_v1_vmstate_change(vbasedev->name, running, RunState_str(state), - (migration->device_state_v1 & mask) | value); -} - static void vfio_migration_state_notifier(Notifier *notifier, void *data) { MigrationState *s = data; VFIOMigration *migration = container_of(notifier, VFIOMigration, migration_state); VFIODevice *vbasedev = migration->vbasedev; - int ret; trace_vfio_migration_state_notifier(vbasedev->name, MigrationStatus_str(s->state)); @@ -1062,29 +465,13 @@ static void vfio_migration_state_notifier(Notifier *notifier, void *data) case MIGRATION_STATUS_CANCELLED: case MIGRATION_STATUS_FAILED: bytes_transferred = 0; - if (migration->v2) { - vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING, - VFIO_DEVICE_STATE_ERROR); - } else { - ret = vfio_migration_v1_set_state(vbasedev, - ~(VFIO_DEVICE_STATE_V1_SAVING | - VFIO_DEVICE_STATE_V1_RESUMING), - VFIO_DEVICE_STATE_V1_RUNNING); - if (ret) { - error_report("%s: Failed to set state RUNNING", vbasedev->name); - } - } + vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING, + VFIO_DEVICE_STATE_ERROR); } } static void vfio_migration_exit(VFIODevice *vbasedev) { - VFIOMigration *migration = vbasedev->migration; - - if (!migration->v2) { - vfio_region_exit(&migration->region); - vfio_region_finalize(&migration->region); - } g_free(vbasedev->migration); vbasedev->migration = NULL; } @@ -1116,7 +503,6 @@ static int vfio_migration_init(VFIODevice *vbasedev) VFIOMigration *migration; char id[256] = ""; g_autofree char *path = NULL, *oid = NULL; - struct vfio_region_info *info; uint64_t mig_flags; if (!vbasedev->ops->vfio_get_object) { @@ -1129,52 +515,21 @@ static int vfio_migration_init(VFIODevice *vbasedev) } ret = vfio_migration_query_flags(vbasedev, &mig_flags); - if (!ret) { - /* Migration v2 */ - /* Basic migration functionality must be supported */ - if (!(mig_flags & VFIO_MIGRATION_STOP_COPY)) { - return -EOPNOTSUPP; - } - vbasedev->migration = g_new0(VFIOMigration, 1); - vbasedev->migration->device_state = VFIO_DEVICE_STATE_RUNNING; - vbasedev->migration->data_buffer_size = - VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE; - vbasedev->migration->data_fd = -1; - vbasedev->migration->v2 = true; - } else { - /* Migration v1 */ - ret = vfio_get_dev_region_info(vbasedev, - VFIO_REGION_TYPE_MIGRATION_DEPRECATED, - VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED, - &info); - if (ret) { - return ret; - } - - vbasedev->migration = g_new0(VFIOMigration, 1); - vbasedev->migration->device_state_v1 = VFIO_DEVICE_STATE_V1_RUNNING; - vbasedev->migration->vm_running = runstate_is_running(); - - ret = vfio_region_setup(obj, vbasedev, &vbasedev->migration->region, - info->index, "migration"); - if (ret) { - error_report("%s: Failed to setup VFIO migration region %d: %s", - vbasedev->name, info->index, strerror(-ret)); - goto err; - } - - if (!vbasedev->migration->region.size) { - error_report("%s: Invalid zero-sized VFIO migration region %d", - vbasedev->name, info->index); - ret = -EINVAL; - goto err; - } + if (ret) { + return ret; + } - g_free(info); + /* Basic migration functionality must be supported */ + if (!(mig_flags & VFIO_MIGRATION_STOP_COPY)) { + return -EOPNOTSUPP; } + vbasedev->migration = g_new0(VFIOMigration, 1); migration = vbasedev->migration; 
migration->vbasedev = vbasedev; + migration->device_state = VFIO_DEVICE_STATE_RUNNING; + migration->data_buffer_size = VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE; + migration->data_fd = -1; oid = vmstate_if_get_id(VMSTATE_IF(DEVICE(obj))); if (oid) { @@ -1184,28 +539,16 @@ static int vfio_migration_init(VFIODevice *vbasedev) } strpadcpy(id, sizeof(id), path, '\0'); - if (migration->v2) { - register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, - &savevm_vfio_handlers, vbasedev); - - migration->vm_state = qdev_add_vm_change_state_handler( - vbasedev->dev, vfio_vmstate_change, vbasedev); - } else { - register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, - &savevm_vfio_v1_handlers, vbasedev); - - migration->vm_state = qdev_add_vm_change_state_handler( - vbasedev->dev, vfio_v1_vmstate_change, vbasedev); - } + register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1, &savevm_vfio_handlers, + vbasedev); + migration->vm_state = qdev_add_vm_change_state_handler(vbasedev->dev, + vfio_vmstate_change, + vbasedev); migration->migration_state.notify = vfio_migration_state_notifier; add_migration_state_change_notifier(&migration->migration_state); - return 0; -err: - g_free(info); - vfio_migration_exit(vbasedev); - return ret; + return 0; } /* ---------------------------------------------------------------------- */ diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index 71536caacb..f727e0e00c 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -150,23 +150,15 @@ vfio_display_edid_write_error(void) "" # migration.c vfio_migration_probe(const char *name) " (%s)" vfio_migration_set_state(const char *name, const char *state) " (%s) state %s" -vfio_migration_v1_set_state(const char *name, uint32_t state) " (%s) state %d" vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s" -vfio_v1_vmstate_change(const char *name, int running, const char *reason, uint32_t dev_state) " (%s) running %d reason %s device state %d" vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s" vfio_save_setup(const char *name) " (%s)" vfio_save_cleanup(const char *name) " (%s)" -vfio_save_buffer(const char *name, uint64_t data_offset, uint64_t data_size, uint64_t pending) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64" pending 0x%"PRIx64 -vfio_update_pending(const char *name, uint64_t pending) " (%s) pending 0x%"PRIx64 vfio_save_device_config_state(const char *name) " (%s)" vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible, uint64_t stopcopy) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64" stopcopy size 0x%"PRIx64 -vfio_v1_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64 -vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d" vfio_save_complete_precopy(const char *name, int ret) " (%s) ret %d" -vfio_v1_save_complete_precopy(const char *name) " (%s)" vfio_load_device_config_state(const char *name) " (%s)" vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64 -vfio_v1_load_state_device_data(const char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64 vfio_load_state_device_data(const char *name, uint64_t data_size, int ret) " (%s) size 0x%"PRIx64" ret %d" vfio_load_cleanup(const char *name) " (%s)" vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, 
uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64 diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h index 2ec3346fea..76d470178f 100644 --- a/include/hw/vfio/vfio-common.h +++ b/include/hw/vfio/vfio-common.h @@ -61,16 +61,11 @@ typedef struct VFIORegion { typedef struct VFIOMigration { struct VFIODevice *vbasedev; VMChangeStateEntry *vm_state; - VFIORegion region; - uint32_t device_state_v1; - int vm_running; Notifier migration_state; - uint64_t pending_bytes; enum vfio_device_mig_state device_state; int data_fd; void *data_buffer; size_t data_buffer_size; - bool v2; } VFIOMigration; typedef struct VFIOAddressSpace { From patchwork Wed Nov 30 09:44:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avihai Horon X-Patchwork-Id: 13059637 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 13FFEC47088 for ; Wed, 30 Nov 2022 09:46:29 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p0JfJ-0004D3-Ht; Wed, 30 Nov 2022 04:46:05 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p0JfI-00049X-85; Wed, 30 Nov 2022 04:46:04 -0500 Received: from mail-bn8nam11on2040.outbound.protection.outlook.com ([40.107.236.40] helo=NAM11-BN8-obe.outbound.protection.outlook.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p0JfG-0003I4-7I; Wed, 30 Nov 2022 04:46:03 -0500 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=Tf7t0H5kO9WbdrOzh9dpOmQZhnFqc9UdHkePk10A1TAN5djyNHhf43TD6ZfntzUKuKC2DaEOTtfCjsokpJD4pPsnLYtOr/HR1jkFW3I4EosDoQrxcKqLCb9e6znCsRjNN9LZp6Qn1F9qYLb/eSjiPy8N1oszSqSJE8qkDijAOgUtFK9phn1Zc+cmhBbpB4ODV/59AayVKVf+Q0lbXFkW15Gid0iJ6FPbs0ZEJMCuIE/KZV6VzSDwbFukudDeRhtGJ6i76F0R3Y+ssgBIuPkO3Nf8V9hPwEF99NN21YGNWHT1mm1oz0IdanbtCAY7mv2M6bT1QeIUhasMGegyrrGAZg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=NSWRVRxTdDw2Vys6qAICitr/noXx+ipo4LS0aKlkBrI=; b=AkEE9yWvVCob485/lXEjvOxuaXx2Z3MQPX/h1PqbS7pwoC9dwr9eWol42XRpsV3/3vjPdR6hD0npuW9sSFp5Q+gPHXZKVVuIy7s6AaQhu0mCLLLu1oEbfBV+MHF/9yVQd6ygwdTaprHAqhSXyz74u8z6UoXtN26FsKxQqRfpT1WBfqDcr1xUtFGNXsWTSHn+0QOkJlu47p8r3pcZs0wcnW2fiASnAQZ9oEF25nu8DPvjA5b06pXTCiFdne/DWXOBliL7GzPwoOCH5CJbyBZom5FP05+jyQKB5cvdMdkb+vhoUG5yi9jhcWYDhwYaWK+ZVzNmji2NP/5DazufVMu3TQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=nongnu.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=NSWRVRxTdDw2Vys6qAICitr/noXx+ipo4LS0aKlkBrI=; 
From patchwork Wed Nov 30 09:44:11 2022
From: Avihai Horon
Subject: [PATCH v4 11/14] vfio: Alphabetize migration section of VFIO trace-events file
Date: Wed, 30 Nov 2022 11:44:11 +0200
Message-ID: <20221130094414.27247-12-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
Sort the migration section of the VFIO trace-events file alphabetically and
move two misplaced trace events to the common.c section.

Signed-off-by: Avihai Horon
---
 hw/vfio/trace-events | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index f727e0e00c..6c1db71a1e 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -119,6 +119,8 @@ vfio_region_sparse_mmap_header(const char *name, int index, int nr_areas) "Devic
 vfio_region_sparse_mmap_entry(int i, unsigned long start, unsigned long end) "sparse entry %d [0x%lx - 0x%lx]"
 vfio_get_dev_region(const char *name, int index, uint32_t type, uint32_t subtype) "%s index %d, %08x/%0x8"
 vfio_dma_unmap_overflow_workaround(void) ""
+vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
+vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
 
 # platform.c
 vfio_platform_base_device_init(char *name, int groupid) "%s belongs to group #%d"
@@ -148,19 +150,17 @@ vfio_display_edid_update(uint32_t prefx, uint32_t prefy) "%ux%u"
 vfio_display_edid_write_error(void) ""
 
 # migration.c
+vfio_load_cleanup(const char *name) " (%s)"
+vfio_load_device_config_state(const char *name) " (%s)"
+vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64
+vfio_load_state_device_data(const char *name, uint64_t data_size, int ret) " (%s) size 0x%"PRIx64" ret %d"
 vfio_migration_probe(const char *name) " (%s)"
 vfio_migration_set_state(const char *name, const char *state) " (%s) state %s"
-vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s"
 vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s"
-vfio_save_setup(const char *name) " (%s)"
+vfio_save_block(const char *name, int data_size) " (%s) data_size %d"
 vfio_save_cleanup(const char *name) " (%s)"
+vfio_save_complete_precopy(const char *name, int ret) " (%s) ret %d"
 vfio_save_device_config_state(const char *name) " (%s)"
 vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible, uint64_t stopcopy) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64" stopcopy size 0x%"PRIx64
-vfio_save_complete_precopy(const char *name, int ret) " (%s) ret %d"
-vfio_load_device_config_state(const char *name) " (%s)"
-vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64
-vfio_load_state_device_data(const char *name, uint64_t data_size, int ret) " (%s) size 0x%"PRIx64" ret %d"
-vfio_load_cleanup(const char *name) " (%s)"
-vfio_get_dirty_bitmap(int fd, uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start) "container fd=%d, iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64
-vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
-vfio_save_block(const char *name, int data_size) " (%s) data_size %d"
+vfio_save_setup(const char *name) " (%s)"
+vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s"

From patchwork Wed Nov 30 09:44:12 2022
From: Avihai Horon
Subject: [PATCH v4 12/14] docs/devel: Align vfio-migration docs to VFIO migration v2
Date: Wed, 30 Nov 2022 11:44:12 +0200
Message-ID: <20221130094414.27247-13-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
Align the vfio-migration documentation to VFIO migration protocol v2.

Signed-off-by: Avihai Horon
---
 docs/devel/vfio-migration.rst | 68 ++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 38 deletions(-)

diff --git a/docs/devel/vfio-migration.rst b/docs/devel/vfio-migration.rst
index 9ff6163c88..ad991b7eeb 100644
--- a/docs/devel/vfio-migration.rst
+++ b/docs/devel/vfio-migration.rst
@@ -7,46 +7,39 @@ the guest is running on source host and restoring this saved state on the
 destination host. This document details how saving and restoring of VFIO
 devices is done in QEMU.
 
-Migration of VFIO devices consists of two phases: the optional pre-copy phase,
-and the stop-and-copy phase. The pre-copy phase is iterative and allows to
-accommodate VFIO devices that have a large amount of data that needs to be
-transferred. The iterative pre-copy phase of migration allows for the guest to
-continue whilst the VFIO device state is transferred to the destination, this
-helps to reduce the total downtime of the VM. VFIO devices can choose to skip
-the pre-copy phase of migration by returning pending_bytes as zero during the
-pre-copy phase.
+Migration of VFIO devices currently consists of a single stop-and-copy phase.
+During the stop-and-copy phase the guest is stopped and the entire VFIO device
+data is transferred to the destination.
+
+The pre-copy phase of migration is currently not supported for VFIO devices,
+so VFIO device data is not transferred during pre-copy phase.
 
 A detailed description of the UAPI for VFIO device migration can be found in
-the comment for the ``vfio_device_migration_info`` structure in the header
-file linux-headers/linux/vfio.h.
+the comment for the ``vfio_device_mig_state`` structure in the header file
+linux-headers/linux/vfio.h.
 
 VFIO implements the device hooks for the iterative approach as follows:
 
-* A ``save_setup`` function that sets up the migration region and sets _SAVING
-  flag in the VFIO device state.
+* A ``save_setup`` function that sets up migration on the source.
 
-* A ``load_setup`` function that sets up the migration region on the
-  destination and sets _RESUMING flag in the VFIO device state.
+* A ``load_setup`` function that sets the VFIO device on the destination in
+  _RESUMING state.
 
 * A ``save_live_pending`` function that reads pending_bytes from the vendor
   driver, which indicates the amount of data that the vendor driver has yet to
   save for the VFIO device.
 
-* A ``save_live_iterate`` function that reads the VFIO device's data from the
-  vendor driver through the migration region during iterative phase.
-
 * A ``save_state`` function to save the device config space if it is present.
 
-* A ``save_live_complete_precopy`` function that resets _RUNNING flag from the
-  VFIO device state and iteratively copies the remaining data for the VFIO
-  device until the vendor driver indicates that no data remains (pending bytes
-  is zero).
+* A ``save_live_complete_precopy`` function that sets the VFIO device in
+  _STOP_COPY state and iteratively copies the data for the VFIO device until
+  the vendor driver indicates that no data remains.
 
 * A ``load_state`` function that loads the config section and the data
-  sections that are generated by the save functions above
+  sections that are generated by the save functions above.
 
 * ``cleanup`` functions for both save and load that perform any migration
-  related cleanup, including unmapping the migration region
+  related cleanup.
 
 The VFIO migration code uses a VM state change handler to change the VFIO
@@ -71,13 +64,13 @@ tracking can identify dirtied pages, but any page pinned by the vendor driver
 can also be written by the device. There is currently no device or IOMMU
 support for dirty page tracking in hardware.
 
-By default, dirty pages are tracked when the device is in pre-copy as well as
-stop-and-copy phase. So, a page pinned by the vendor driver will be copied to
-the destination in both phases. Copying dirty pages in pre-copy phase helps
-QEMU to predict if it can achieve its downtime tolerances. If QEMU during
-pre-copy phase keeps finding dirty pages continuously, then it understands
-that even in stop-and-copy phase, it is likely to find dirty pages and can
-predict the downtime accordingly.
+By default, dirty pages are tracked during pre-copy as well as stop-and-copy
+phase. So, a page pinned by the vendor driver will be copied to the destination
+in both phases. Copying dirty pages in pre-copy phase helps QEMU to predict if
+it can achieve its downtime tolerances. If QEMU during pre-copy phase keeps
+finding dirty pages continuously, then it understands that even in stop-and-copy
+phase, it is likely to find dirty pages and can predict the downtime
+accordingly.
 
 QEMU also provides a per device opt-out option ``pre-copy-dirty-page-tracking``
 which disables querying the dirty bitmap during pre-copy phase. If it is set to
@@ -111,23 +104,22 @@ Live migration save path
                                   |
                      migrate_init spawns migration_thread
                 Migration thread then calls each device's .save_setup()
-                       (RUNNING, _SETUP, _RUNNING|_SAVING)
+                       (RUNNING, _SETUP, _RUNNING)
                                   |
-                       (RUNNING, _ACTIVE, _RUNNING|_SAVING)
+                       (RUNNING, _ACTIVE, _RUNNING)
       If device is active, get pending_bytes by .save_live_pending()
       If total pending_bytes >= threshold_size, call .save_live_iterate()
-                 Data of VFIO device for pre-copy phase is copied
        Iterate till total pending bytes converge and are less than threshold
                                   |
   On migration completion, vCPU stops and calls .save_live_complete_precopy for
-  each active device. The VFIO device is then transitioned into _SAVING state
-                   (FINISH_MIGRATE, _DEVICE, _SAVING)
+  each active device. The VFIO device is then transitioned into _STOP_COPY state
+                   (FINISH_MIGRATE, _DEVICE, _STOP_COPY)
                                   |
      For the VFIO device, iterate in .save_live_complete_precopy until
                          pending data is 0
-                   (FINISH_MIGRATE, _DEVICE, _STOPPED)
+                   (FINISH_MIGRATE, _DEVICE, _STOP)
                                   |
-                 (FINISH_MIGRATE, _COMPLETED, _STOPPED)
+                 (FINISH_MIGRATE, _COMPLETED, _STOP)
     Migraton thread schedules cleanup bottom half and exits
 
 Live migration resume path
@@ -136,7 +128,7 @@ Live migration resume path
 ::
 
               Incoming migration calls .load_setup for each device
-                          (RESTORE_VM, _ACTIVE, _STOPPED)
+                          (RESTORE_VM, _ACTIVE, _STOP)
                                   |
     For each device, .load_state is called for that device section data
                         (RESTORE_VM, _ACTIVE, _RESUMING)
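The hooks listed in the updated document correspond to the handler table the series registers with register_savevm_live(). The sketch below shows its approximate shape, with handler names taken from the trace events earlier in the series; treat it as an illustration (it needs QEMU's internal SaveVMHandlers type), not the literal savevm_vfio_handlers definition in hw/vfio/migration.c:

/* Approximate shape of the v2 handler table (illustrative, not verbatim). */
static const SaveVMHandlers savevm_vfio_handlers_sketch = {
    .save_setup = vfio_save_setup,                  /* allocate the data buffer        */
    .save_cleanup = vfio_save_cleanup,              /* free buffer, device back to run */
    .save_live_pending = vfio_save_pending,         /* report stop-copy size           */
    .save_live_complete_precopy = vfio_save_complete_precopy, /* STOP_COPY, drain fd  */
    .save_state = vfio_save_state,                  /* device config section           */
    .load_setup = vfio_load_setup,                  /* enter RESUMING on destination   */
    .load_cleanup = vfio_load_cleanup,
    .load_state = vfio_load_state,                  /* config + device data sections   */
};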
From patchwork Wed Nov 30 09:44:13 2022
From: Avihai Horon
Subject: [PATCH v4 13/14] vfio/migration: Use VFIO_DEVICE_FEATURE_MIG_DATA_SIZE ioctl
Date: Wed, 30 Nov 2022 11:44:13 +0200
Message-ID: <20221130094414.27247-14-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
Use the VFIO_DEVICE_FEATURE_MIG_DATA_SIZE ioctl to query the device's
stop-copy data size and report this value in vfio_save_pending() instead of
the hardcoded value that is currently used. Use this ioctl in
vfio_save_setup() as well, to adjust the migration data buffer size.

Signed-off-by: Avihai Horon
---
 hw/vfio/migration.c        | 49 +++++++++++++++++++++++++++++++-------
 hw/vfio/trace-events       |  2 +-
 linux-headers/linux/vfio.h | 13 ++++++++++
 3 files changed, 54 insertions(+), 10 deletions(-)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 98bde4a517..9285746183 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -208,13 +208,42 @@ static void vfio_migration_cleanup(VFIODevice *vbasedev)
 
 /* ---------------------------------------------------------------------- */
 
+static int vfio_query_stop_copy_size(VFIODevice *vbasedev,
+                                     uint64_t *stop_copy_size)
+{
+    uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) +
+                              sizeof(struct vfio_device_feature_mig_data_size),
+                              sizeof(uint64_t))] = {};
+    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
+    struct vfio_device_feature_mig_data_size *mig_data_size =
+        (struct vfio_device_feature_mig_data_size *)feature->data;
+
+    feature->argsz = sizeof(buf);
+    feature->flags =
+        VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIG_DATA_SIZE;
+
+    if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
+        return -errno;
+    }
+
+    *stop_copy_size = mig_data_size->stop_copy_length;
+
+    return 0;
+}
+
 static int vfio_save_setup(QEMUFile *f, void *opaque)
 {
     VFIODevice *vbasedev = opaque;
     VFIOMigration *migration = vbasedev->migration;
+    uint64_t stop_copy_size;
 
     qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE);
 
+    if (vfio_query_stop_copy_size(vbasedev, &stop_copy_size)) {
+        stop_copy_size = VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE;
+    }
+    migration->data_buffer_size = MIN(VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE,
+                                      stop_copy_size);
     migration->data_buffer = g_try_malloc0(migration->data_buffer_size);
     if (!migration->data_buffer) {
         error_report("%s: Failed to allocate migration data buffer",
@@ -222,7 +251,7 @@ static int vfio_save_setup(QEMUFile *f, void *opaque)
         return -ENOMEM;
     }
 
-    trace_vfio_save_setup(vbasedev->name);
+    trace_vfio_save_setup(vbasedev->name, migration->data_buffer_size);
 
     qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
 
@@ -251,18 +280,20 @@ static void vfio_save_pending(void *opaque, uint64_t threshold_size,
                               uint64_t *res_postcopy_only)
 {
     VFIODevice *vbasedev = opaque;
+    uint64_t stop_copy_size;
 
-    /*
-     * VFIO migration protocol v2 currently doesn't have an API to get pending
-     * migration size. Until such an API is introduced, report big pending size
-     * so the device migration size will be taken into account and downtime
-     * limit won't be violated.
-     */
-    *res_precopy_only += VFIO_MIG_STOP_COPY_SIZE;
+    if (vfio_query_stop_copy_size(vbasedev, &stop_copy_size)) {
+        /*
+         * Failed to get pending migration size. Report big pending size so
+         * downtime limit won't be violated.
+         */
+        stop_copy_size = VFIO_MIG_STOP_COPY_SIZE;
+    }
 
+    *res_precopy_only += stop_copy_size;
     trace_vfio_save_pending(vbasedev->name, *res_precopy_only,
                             *res_postcopy_only, *res_compatible,
-                            VFIO_MIG_STOP_COPY_SIZE);
+                            stop_copy_size);
 }
 
 /* Returns 1 if end-of-stream is reached, 0 if more data and -1 if error */
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 6c1db71a1e..2723a5d1aa 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -162,5 +162,5 @@ vfio_save_cleanup(const char *name) " (%s)"
 vfio_save_complete_precopy(const char *name, int ret) " (%s) ret %d"
 vfio_save_device_config_state(const char *name) " (%s)"
 vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible, uint64_t stopcopy) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64" stopcopy size 0x%"PRIx64
-vfio_save_setup(const char *name) " (%s)"
+vfio_save_setup(const char *name, uint64_t data_buffer_size) " (%s) data buffer size 0x%"PRIx64
 vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s"
diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index ede44b5572..5c4ddf424f 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -986,6 +986,19 @@ enum vfio_device_mig_state {
     VFIO_DEVICE_STATE_RUNNING_P2P = 5,
 };
 
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back the estimated data length that will
+ * be required to complete stop copy.
+ *
+ * Note: Can be called on each device state.
+ */
+
+struct vfio_device_feature_mig_data_size {
+    __aligned_u64 stop_copy_length;
+};
+
+#define VFIO_DEVICE_FEATURE_MIG_DATA_SIZE 9
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
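To see why an accurate estimate matters, consider some illustrative numbers (not from the patch): with the previous 100 GiB placeholder, the migration core could never conclude on its own that the device fits into, say, a 300 ms downtime limit on a 10 Gb/s link, which allows roughly 375 MB of data to be transferred, whereas a device that actually reports 64 MiB of stop-copy state fits that budget easily. The same query also keeps vfio_save_setup() from allocating a transfer buffer larger than the device can ever produce, since the buffer size is clamped to MIN(default, reported stop-copy size).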
From patchwork Wed Nov 30 09:44:14 2022
From: Avihai Horon
Subject: [PATCH v4 14/14] vfio/migration: Optimize vfio_save_pending()
Date: Wed, 30 Nov 2022 11:44:14 +0200
Message-ID: <20221130094414.27247-15-avihaih@nvidia.com>
In-Reply-To: <20221130094414.27247-1-avihaih@nvidia.com>
During the pre-copy phase of migration, vfio_save_pending() is called
repeatedly and queries the VFIO device for its pending data size. As long as
the pending RAM size is over the threshold, migration can't converge and
complete, so during this time there is no point in querying the VFIO device's
pending data size.

Avoid these unnecessary queries by issuing them from a RAM pre-copy notifier
instead of vfio_save_pending(). This way the VFIO device is queried only when
the pending RAM data is below the threshold, when there is an actual chance
for migration to converge.

Signed-off-by: Avihai Horon
---
 hw/vfio/migration.c           | 55 +++++++++++++++++++++++++----------
 hw/vfio/trace-events          |  1 +
 include/hw/vfio/vfio-common.h |  2 ++
 3 files changed, 43 insertions(+), 15 deletions(-)

diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 9285746183..d57cda5516 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -269,31 +269,19 @@ static void vfio_save_cleanup(void *opaque)
     trace_vfio_save_cleanup(vbasedev->name);
 }
 
-/*
- * Migration size of VFIO devices can be as little as a few KBs or as big as
- * many GBs. This value should be big enough to cover the worst case.
- */
-#define VFIO_MIG_STOP_COPY_SIZE (100 * GiB)
 static void vfio_save_pending(void *opaque, uint64_t threshold_size,
                               uint64_t *res_precopy_only,
                               uint64_t *res_compatible,
                               uint64_t *res_postcopy_only)
 {
     VFIODevice *vbasedev = opaque;
-    uint64_t stop_copy_size;
+    VFIOMigration *migration = vbasedev->migration;
 
-    if (vfio_query_stop_copy_size(vbasedev, &stop_copy_size)) {
-        /*
-         * Failed to get pending migration size. Report big pending size so
-         * downtime limit won't be violated.
-         */
-        stop_copy_size = VFIO_MIG_STOP_COPY_SIZE;
-    }
+    *res_precopy_only += migration->stop_copy_size;
 
-    *res_precopy_only += stop_copy_size;
     trace_vfio_save_pending(vbasedev->name, *res_precopy_only,
                             *res_postcopy_only, *res_compatible,
-                            stop_copy_size);
+                            migration->stop_copy_size);
 }
 
 /* Returns 1 if end-of-stream is reached, 0 if more data and -1 if error */
@@ -501,6 +489,40 @@ static void vfio_migration_state_notifier(Notifier *notifier, void *data)
     }
 }
 
+/*
+ * Migration size of VFIO devices can be as little as a few KBs or as big as
+ * many GBs. This value should be big enough to cover the worst case.
+ */
+#define VFIO_MIG_STOP_COPY_SIZE (100 * GiB)
+static int vfio_migration_data_notifier(NotifierWithReturn *n, void *data)
+{
+    VFIOMigration *migration = container_of(n, VFIOMigration, migration_data);
+    VFIODevice *vbasedev = migration->vbasedev;
+    PrecopyNotifyData *pnd = data;
+
+    if (pnd->reason != PRECOPY_NOTIFY_AFTER_BITMAP_SYNC) {
+        return 0;
+    }
+
+    /* No need to get pending size when finishing migration */
+    if (runstate_check(RUN_STATE_FINISH_MIGRATE)) {
+        return 0;
+    }
+
+    if (vfio_query_stop_copy_size(vbasedev, &migration->stop_copy_size)) {
+        /*
+         * Failed to get pending migration size. Report big pending size so
+         * downtime limit won't be violated.
+         */
+        migration->stop_copy_size = VFIO_MIG_STOP_COPY_SIZE;
+    }
+
+    trace_vfio_migration_data_notifier(vbasedev->name,
+                                       migration->stop_copy_size);
+
+    return 0;
+}
+
 static void vfio_migration_exit(VFIODevice *vbasedev)
 {
     g_free(vbasedev->migration);
@@ -578,6 +600,8 @@ static int vfio_migration_init(VFIODevice *vbasedev)
                                                            vbasedev);
     migration->migration_state.notify = vfio_migration_state_notifier;
     add_migration_state_change_notifier(&migration->migration_state);
+    migration->migration_data.notify = vfio_migration_data_notifier;
+    precopy_add_notifier(&migration->migration_data);
 
     return 0;
 }
@@ -622,6 +646,7 @@ void vfio_migration_finalize(VFIODevice *vbasedev)
     if (vbasedev->migration) {
         VFIOMigration *migration = vbasedev->migration;
 
+        precopy_remove_notifier(&migration->migration_data);
         remove_migration_state_change_notifier(&migration->migration_state);
         qemu_del_vm_change_state_handler(migration->vm_state);
         unregister_savevm(VMSTATE_IF(vbasedev->dev), "vfio", vbasedev);
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 2723a5d1aa..e377a24f0e 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -154,6 +154,7 @@ vfio_load_cleanup(const char *name) " (%s)"
 vfio_load_device_config_state(const char *name) " (%s)"
 vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64
 vfio_load_state_device_data(const char *name, uint64_t data_size, int ret) " (%s) size 0x%"PRIx64" ret %d"
+vfio_migration_data_notifier(const char *name, uint64_t stopcopy_size) " (%s) stopcopy size 0x%"PRIx64
 vfio_migration_probe(const char *name) " (%s)"
 vfio_migration_set_state(const char *name, const char *state) " (%s) state %s"
 vfio_migration_state_notifier(const char *name, const char *state) " (%s) state %s"
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 76d470178f..2aba45887c 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -62,10 +62,12 @@ typedef struct VFIOMigration {
     struct VFIODevice *vbasedev;
     VMChangeStateEntry *vm_state;
     Notifier migration_state;
+    NotifierWithReturn migration_data;
     enum vfio_device_mig_state device_state;
     int data_fd;
     void *data_buffer;
     size_t data_buffer_size;
+    uint64_t stop_copy_size;
 } VFIOMigration;
 
 typedef struct VFIOAddressSpace {