From patchwork Thu Nov 9 15:46:35 2023
X-Patchwork-Submitter: "Liu, Yuan1"
X-Patchwork-Id: 13452192
From: Yuan Liu
To: quintela@redhat.com, peterx@redhat.com, farosas@suse.de, leobras@redhat.com
Cc: qemu-devel@nongnu.org, yuan1.liu@intel.com, nanhai.zou@intel.com
Subject: [PATCH v2 1/4] migration: Introduce multifd-compression-accel parameter
Date: Thu, 9 Nov 2023 23:46:35 +0800
Message-Id: <20231109154638.488213-2-yuan1.liu@intel.com>
In-Reply-To: <20231109154638.488213-1-yuan1.liu@intel.com>
References: <20231109154638.488213-1-yuan1.liu@intel.com>
Introduce the multifd-compression-accel option to enable or disable a
(de)compression accelerator for live migration data.

The default value of multifd-compression-accel is auto: whether an
accelerator is used, and which one, is detected automatically. Setting
multifd-compression-accel=none disables acceleration, and a specific
accelerator can be requested explicitly by name, for example
multifd-compression-accel=qpl.

Signed-off-by: Yuan Liu
Reviewed-by: Nanhai Zou
---
 hw/core/qdev-properties-system.c    | 11 +++++++++++
 include/hw/qdev-properties-system.h |  4 ++++
 migration/migration-hmp-cmds.c      | 10 ++++++++++
 migration/options.c                 | 24 ++++++++++++++++++++++++
 migration/options.h                 |  1 +
 qapi/migration.json                 | 26 +++++++++++++++++++++++++-
 6 files changed, 75 insertions(+), 1 deletion(-)

--
2.39.3

diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 688340610e..ed23035845 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -673,6 +673,17 @@ const PropertyInfo qdev_prop_multifd_compression = {
     .set_default_value = qdev_propinfo_set_default_value_enum,
 };
 
+/* --- MultiFD Compression Accelerator --- */
+
+const PropertyInfo qdev_prop_multifd_compression_accel = {
+    .name = "MultiFDCompressionAccel",
+    .description = "MultiFD Compression Accelerator, "
+                   "auto/none/qpl",
+    .enum_table = &MultiFDCompressionAccel_lookup,
+    .get = qdev_propinfo_get_enum,
+    .set = qdev_propinfo_set_enum,
+    .set_default_value = qdev_propinfo_set_default_value_enum,
+};
 /* --- Reserved Region --- */
 
 /*
diff --git a/include/hw/qdev-properties-system.h b/include/hw/qdev-properties-system.h
index 0ac327ae60..da086bd836 100644
--- a/include/hw/qdev-properties-system.h
+++ b/include/hw/qdev-properties-system.h
@@ -7,6 +7,7 @@ extern const PropertyInfo qdev_prop_chr;
 extern const PropertyInfo qdev_prop_macaddr;
 extern const PropertyInfo qdev_prop_reserved_region;
 extern const PropertyInfo qdev_prop_multifd_compression;
+extern const PropertyInfo qdev_prop_multifd_compression_accel;
 extern const PropertyInfo qdev_prop_losttickpolicy;
 extern const PropertyInfo qdev_prop_blockdev_on_error;
 extern const PropertyInfo qdev_prop_bios_chs_trans;
@@ -41,6 +42,9 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
 #define DEFINE_PROP_MULTIFD_COMPRESSION(_n, _s, _f, _d) \
     DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_multifd_compression, \
                        MultiFDCompression)
+#define DEFINE_PROP_MULTIFD_COMPRESSION_ACCEL(_n, _s, _f, _d) \
+    DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_multifd_compression_accel, \
+                       MultiFDCompressionAccel)
 #define DEFINE_PROP_LOSTTICKPOLICY(_n, _s, _f, _d) \
     DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_losttickpolicy, \
                        LostTickPolicy)
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index a82597f18e..3a278c89d9 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -344,6 +344,11 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %s\n",
             MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_COMPRESSION),
             MultiFDCompression_str(params->multifd_compression));
+        assert(params->has_multifd_compression_accel);
+        monitor_printf(mon, "%s: %s\n",
+            MigrationParameter_str(
+                MIGRATION_PARAMETER_MULTIFD_COMPRESSION_ACCEL),
+            MultiFDCompressionAccel_str(params->multifd_compression_accel));
         monitor_printf(mon, "%s: %" PRIu64 " bytes\n",
             MigrationParameter_str(MIGRATION_PARAMETER_XBZRLE_CACHE_SIZE),
             params->xbzrle_cache_size);
@@ -610,6 +615,11 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         visit_type_MultiFDCompression(v, param,
                                       &p->multifd_compression, &err);
         break;
+    case MIGRATION_PARAMETER_MULTIFD_COMPRESSION_ACCEL:
+        p->has_multifd_compression_accel = true;
+        visit_type_MultiFDCompressionAccel(v, param,
+                                           &p->multifd_compression_accel, &err);
+        break;
     case MIGRATION_PARAMETER_MULTIFD_ZLIB_LEVEL:
         p->has_multifd_zlib_level = true;
         visit_type_uint8(v, param, &p->multifd_zlib_level, &err);
diff --git a/migration/options.c b/migration/options.c
index 42fb818956..4c567c49e6 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -59,6 +59,8 @@
 #define DEFAULT_MIGRATE_X_CHECKPOINT_DELAY (200 * 100)
 #define DEFAULT_MIGRATE_MULTIFD_CHANNELS 2
 #define DEFAULT_MIGRATE_MULTIFD_COMPRESSION MULTIFD_COMPRESSION_NONE
+/* By default use the accelerator for multifd compression */
+#define DEFAULT_MIGRATE_MULTIFD_COMPRESSION_ACCEL MULTIFD_COMPRESSION_ACCEL_AUTO
 /* 0: means nocompress, 1: best speed, ... 9: best compress ratio */
 #define DEFAULT_MIGRATE_MULTIFD_ZLIB_LEVEL 1
 /* 0: means nocompress, 1: best speed, ... 20: best compress ratio */
@@ -139,6 +141,9 @@ Property migration_properties[] = {
     DEFINE_PROP_MULTIFD_COMPRESSION("multifd-compression", MigrationState,
                       parameters.multifd_compression,
                       DEFAULT_MIGRATE_MULTIFD_COMPRESSION),
+    DEFINE_PROP_MULTIFD_COMPRESSION_ACCEL("multifd-compression-accel",
+                      MigrationState, parameters.multifd_compression_accel,
+                      DEFAULT_MIGRATE_MULTIFD_COMPRESSION_ACCEL),
     DEFINE_PROP_UINT8("multifd-zlib-level", MigrationState,
                       parameters.multifd_zlib_level,
                       DEFAULT_MIGRATE_MULTIFD_ZLIB_LEVEL),
@@ -818,6 +823,15 @@ MultiFDCompression migrate_multifd_compression(void)
     return s->parameters.multifd_compression;
 }
 
+MultiFDCompressionAccel migrate_multifd_compression_accel(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    assert(s->parameters.multifd_compression_accel <
+           MULTIFD_COMPRESSION_ACCEL__MAX);
+    return s->parameters.multifd_compression_accel;
+}
+
 int migrate_multifd_zlib_level(void)
 {
     MigrationState *s = migrate_get_current();
@@ -945,6 +959,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->multifd_channels = s->parameters.multifd_channels;
     params->has_multifd_compression = true;
     params->multifd_compression = s->parameters.multifd_compression;
+    params->has_multifd_compression_accel = true;
+    params->multifd_compression_accel = s->parameters.multifd_compression_accel;
     params->has_multifd_zlib_level = true;
     params->multifd_zlib_level = s->parameters.multifd_zlib_level;
     params->has_multifd_zstd_level = true;
@@ -999,6 +1015,7 @@ void migrate_params_init(MigrationParameters *params)
     params->has_block_incremental = true;
     params->has_multifd_channels = true;
     params->has_multifd_compression = true;
+    params->has_multifd_compression_accel = true;
     params->has_multifd_zlib_level = true;
     params->has_multifd_zstd_level = true;
     params->has_xbzrle_cache_size = true;
@@ -1273,6 +1290,9 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_multifd_compression) {
         dest->multifd_compression = params->multifd_compression;
     }
+    if (params->has_multifd_compression_accel) {
+        dest->multifd_compression_accel = params->multifd_compression_accel;
+    }
     if (params->has_xbzrle_cache_size) {
         dest->xbzrle_cache_size = params->xbzrle_cache_size;
     }
@@ -1394,6 +1414,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
     if (params->has_multifd_compression) {
         s->parameters.multifd_compression = params->multifd_compression;
     }
+    if (params->has_multifd_compression_accel) {
+        s->parameters.multifd_compression_accel =
+            params->multifd_compression_accel;
+    }
     if (params->has_xbzrle_cache_size) {
         s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
         xbzrle_cache_resize(params->xbzrle_cache_size, errp);
diff --git a/migration/options.h b/migration/options.h
index 237f2d6b4a..e59bf4b5c1 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -85,6 +85,7 @@ uint64_t migrate_avail_switchover_bandwidth(void);
 uint64_t migrate_max_postcopy_bandwidth(void);
 int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
+MultiFDCompressionAccel migrate_multifd_compression_accel(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
 uint8_t migrate_throttle_trigger_threshold(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index db3df12d6c..47239328e4 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -616,6 +616,22 @@
     { 'name': 'zstd', 'if': 'CONFIG_ZSTD' } ] }
 
 ##
+# @MultiFDCompressionAccel:
+#
+# An enumeration of multifd compression accelerator.
+#
+# @auto: automatically determined if accelerator is available.
+#
+# @none: disable compression accelerator.
+#
+# @qpl: enable qpl compression accelerator.
+#
+# Since: 8.2
+##
+{ 'enum': 'MultiFDCompressionAccel',
+  'data': [ 'auto', 'none',
+            { 'name': 'qpl', 'if': 'CONFIG_QPL' } ] }
+##
 # @BitmapMigrationBitmapAliasTransform:
 #
 # @persistent: If present, the bitmap will be made persistent or
@@ -798,6 +814,9 @@
 # @multifd-compression: Which compression method to use. Defaults to
 #     none. (Since 5.0)
 #
+# @multifd-compression-accel: Which compression accelerator to use. Defaults to
+#     auto. (Since 8.2)
+#
 # @multifd-zlib-level: Set the compression level to be used in live
 #     migration, the compression level is an integer between 0 and 9,
 #     where 0 means no compression, 1 means the best compression
@@ -853,7 +872,7 @@
            'block-incremental',
            'multifd-channels',
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
-           'max-cpu-throttle', 'multifd-compression',
+           'max-cpu-throttle', 'multifd-compression', 'multifd-compression-accel',
            'multifd-zlib-level', 'multifd-zstd-level',
            'block-bitmap-mapping',
            { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
@@ -974,6 +993,9 @@
 # @multifd-compression: Which compression method to use. Defaults to
 #     none. (Since 5.0)
 #
+# @multifd-compression-accel: Which compression accelerator to use. Defaults to
+#     auto. (Since 8.2)
+#
 # @multifd-zlib-level: Set the compression level to be used in live
 #     migration, the compression level is an integer between 0 and 9,
 #     where 0 means no compression, 1 means the best compression
@@ -1046,6 +1068,7 @@
             '*max-postcopy-bandwidth': 'size',
             '*max-cpu-throttle': 'uint8',
             '*multifd-compression': 'MultiFDCompression',
+            '*multifd-compression-accel': 'MultiFDCompressionAccel',
             '*multifd-zlib-level': 'uint8',
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
@@ -1257,6 +1280,7 @@
             '*max-postcopy-bandwidth': 'size',
             '*max-cpu-throttle': 'uint8',
             '*multifd-compression': 'MultiFDCompression',
+            '*multifd-compression-accel': 'MultiFDCompressionAccel',
             '*multifd-zlib-level': 'uint8',
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
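As an illustration of how the parameter added above is driven, the QMP equivalent of the HMP handling in migration-hmp-cmds.c would look roughly like this (a usage sketch based only on the parameter and enum names introduced in this patch, not a captured session; source and destination need matching settings):

    { "execute": "migrate-set-parameters",
      "arguments": { "multifd-compression": "zlib",
                     "multifd-compression-accel": "qpl" } }

    { "execute": "query-migrate-parameters" }
    (the reply then includes "multifd-compression-accel" among the other parameters)

Leaving multifd-compression-accel at its default of auto lets QEMU pick an available accelerator automatically; none forces the plain software implementation.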
From patchwork Thu Nov 9 15:46:36 2023
X-Patchwork-Submitter: "Liu, Yuan1"
X-Patchwork-Id: 13452190
From: Yuan Liu
To: quintela@redhat.com, peterx@redhat.com, farosas@suse.de, leobras@redhat.com
Cc: qemu-devel@nongnu.org, yuan1.liu@intel.com, nanhai.zou@intel.com
Subject: [PATCH v2 2/4] multifd: Implement multifd compression accelerator
Date: Thu, 9 Nov 2023 23:46:36 +0800
Message-Id: <20231109154638.488213-3-yuan1.liu@intel.com>
In-Reply-To: <20231109154638.488213-1-yuan1.liu@intel.com>
References: <20231109154638.488213-1-yuan1.liu@intel.com>

When multifd live migration starts with a compression method enabled,
that compression method can now be accelerated by a registered
accelerator.

Signed-off-by: Yuan Liu
Reviewed-by: Nanhai Zou
Reviewed-by: Fabiano Rosas
---
 migration/multifd.c | 38 ++++++++++++++++++++++++++++++++++++--
 migration/multifd.h |  8 ++++++++
 2 files changed, 44 insertions(+), 2 deletions(-)

--
2.39.3

diff --git a/migration/multifd.c b/migration/multifd.c
index 1fe53d3b98..7149e67867 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -165,6 +165,34 @@ static MultiFDMethods multifd_nocomp_ops = {
 static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
     [MULTIFD_COMPRESSION_NONE] = &multifd_nocomp_ops,
 };
+static MultiFDAccelMethods *accel_multifd_ops[MULTIFD_COMPRESSION_ACCEL__MAX];
+
+static MultiFDMethods *get_multifd_ops(void)
+{
+    MultiFDCompression comp = migrate_multifd_compression();
+    MultiFDCompressionAccel accel = migrate_multifd_compression_accel();
+
+    if (comp == MULTIFD_COMPRESSION_NONE ||
+        accel == MULTIFD_COMPRESSION_ACCEL_NONE) {
+        return multifd_ops[comp];
+    }
+    if (accel == MULTIFD_COMPRESSION_ACCEL_AUTO) {
+        for (int i = 0; i < MULTIFD_COMPRESSION_ACCEL__MAX; i++) {
+            if (accel_multifd_ops[i] &&
+                accel_multifd_ops[i]->is_supported(comp)) {
+                return accel_multifd_ops[i]->get_multifd_methods();
+            }
+        }
+        return multifd_ops[comp];
+    }
+
+    /* Check if a specified accelerator is available */
+    if (accel_multifd_ops[accel] &&
+        accel_multifd_ops[accel]->is_supported(comp)) {
+        return accel_multifd_ops[accel]->get_multifd_methods();
+    }
+    return multifd_ops[comp];
+}
 
 void multifd_register_ops(int method, MultiFDMethods *ops)
 {
@@ -172,6 +200,12 @@ void multifd_register_ops(int method, MultiFDMethods *ops)
     multifd_ops[method] = ops;
 }
 
+void multifd_register_accel_ops(int accel, MultiFDAccelMethods *ops)
+{
+    assert(0 < accel && accel < MULTIFD_COMPRESSION_ACCEL__MAX);
+    accel_multifd_ops[accel] = ops;
+}
+
 static int multifd_send_initial_packet(MultiFDSendParams *p, Error **errp)
 {
     MultiFDInit_t msg = {};
@@ -922,7 +956,7 @@ int multifd_save_setup(Error **errp)
     multifd_send_state->pages = multifd_pages_init(page_count);
     qemu_sem_init(&multifd_send_state->channels_ready, 0);
     qatomic_set(&multifd_send_state->exiting, 0);
-    multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
+    multifd_send_state->ops = get_multifd_ops();
 
     for (i = 0; i < thread_count; i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
@@ -1180,7 +1214,7 @@ int multifd_load_setup(Error **errp)
     multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
     qatomic_set(&multifd_recv_state->count, 0);
     qemu_sem_init(&multifd_recv_state->sem_sync, 0);
-    multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()];
+    multifd_recv_state->ops = get_multifd_ops();
 
     for (i = 0; i < thread_count; i++) {
         MultiFDRecvParams *p = &multifd_recv_state->params[i];
diff --git a/migration/multifd.h b/migration/multifd.h
index a835643b48..c40ff79443 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -206,7 +206,15 @@ typedef struct {
     int (*recv_pages)(MultiFDRecvParams *p, Error **errp);
 } MultiFDMethods;
 
+typedef struct {
+    /* Check if the compression method supports acceleration */
+    bool (*is_supported) (MultiFDCompression compression);
+    /* Get multifd methods of the accelerator */
+    MultiFDMethods* (*get_multifd_methods)(void);
+} MultiFDAccelMethods;
+
 void multifd_register_ops(int method, MultiFDMethods *ops);
+void multifd_register_accel_ops(int accel, MultiFDAccelMethods *ops);
 
 #endif
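To make the selection logic in get_multifd_ops() concrete, an accelerator backend is expected to plug into the hooks added above roughly as follows (a minimal sketch; the my_accel_* names are invented for illustration, and the real QPL backend arrives in patch 4/4):

    /* Illustrative only: a backend that claims to accelerate zlib. */
    static MultiFDMethods my_accel_multifd_ops = {
        /* .send_setup, .send_prepare, .recv_pages, ... provided by the backend */
    };

    static bool my_accel_is_supported(MultiFDCompression compression)
    {
        /* advertise only the algorithms this accelerator can handle */
        return compression == MULTIFD_COMPRESSION_ZLIB;
    }

    static MultiFDMethods *my_accel_get_multifd_methods(void)
    {
        return &my_accel_multifd_ops;
    }

    static MultiFDAccelMethods my_accel_ops = {
        .is_supported = my_accel_is_supported,
        .get_multifd_methods = my_accel_get_multifd_methods,
    };

    static void my_accel_register(void)
    {
        /* MULTIFD_COMPRESSION_ACCEL_QPL is the only accelerator slot defined so
         * far; a different backend would add its own enum value in patch 1/4. */
        multifd_register_accel_ops(MULTIFD_COMPRESSION_ACCEL_QPL, &my_accel_ops);
    }
    migration_init(my_accel_register);

With such a registration in place, multifd_save_setup() and multifd_load_setup() end up using the accelerator's MultiFDMethods whenever the configured compression method is supported and multifd-compression-accel is auto or names that backend.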
From patchwork Thu Nov 9 15:46:37 2023
X-Patchwork-Submitter: "Liu, Yuan1"
X-Patchwork-Id: 13452189
From: Yuan Liu
To: quintela@redhat.com, peterx@redhat.com, farosas@suse.de, leobras@redhat.com
Cc: qemu-devel@nongnu.org, yuan1.liu@intel.com, nanhai.zou@intel.com
Subject: [PATCH v2 3/4] configure: add qpl option
Date: Thu, 9 Nov 2023 23:46:37 +0800
Message-Id: <20231109154638.488213-4-yuan1.liu@intel.com>
In-Reply-To: <20231109154638.488213-1-yuan1.liu@intel.com>
References: <20231109154638.488213-1-yuan1.liu@intel.com>

The Query Processing Library (QPL) is an open-source library that
provides data compression and decompression features.

Add --enable-qpl and --disable-qpl options to enable and disable the
QPL compression accelerator. The QPL compression accelerator can
accelerate the zlib compression algorithm during live migration.

Signed-off-by: Yuan Liu
Reviewed-by: Nanhai Zou
---
 meson.build                   | 7 +++++++
 meson_options.txt             | 2 ++
 scripts/meson-buildoptions.sh | 3 +++
 3 files changed, 12 insertions(+)

--
2.39.3

diff --git a/meson.build b/meson.build
index 259dc5f308..b4ba30b4fa 100644
--- a/meson.build
+++ b/meson.build
@@ -1032,6 +1032,11 @@ if not get_option('zstd').auto() or have_block
                     required: get_option('zstd'),
                     method: 'pkg-config')
 endif
+qpl = not_found
+if not get_option('qpl').auto()
+  qpl = dependency('libqpl', required: get_option('qpl'),
+                   method: 'pkg-config')
+endif
 virgl = not_found
 
 have_vhost_user_gpu = have_tools and targetos == 'linux' and pixman.found()
@@ -2165,6 +2170,7 @@ config_host_data.set('CONFIG_MALLOC_TRIM', has_malloc_trim)
 config_host_data.set('CONFIG_STATX', has_statx)
 config_host_data.set('CONFIG_STATX_MNT_ID', has_statx_mnt_id)
 config_host_data.set('CONFIG_ZSTD', zstd.found())
+config_host_data.set('CONFIG_QPL', qpl.found())
 config_host_data.set('CONFIG_FUSE', fuse.found())
 config_host_data.set('CONFIG_FUSE_LSEEK', fuse_lseek.found())
 config_host_data.set('CONFIG_SPICE_PROTOCOL', spice_protocol.found())
@@ -4325,6 +4331,7 @@ summary_info += {'snappy support': snappy}
 summary_info += {'bzip2 support': libbzip2}
 summary_info += {'lzfse support': liblzfse}
 summary_info += {'zstd support': zstd}
+summary_info += {'Query Processing Library support': qpl}
 summary_info += {'NUMA host support': numa}
 summary_info += {'capstone': capstone}
 summary_info += {'libpmem support': libpmem}
diff --git a/meson_options.txt b/meson_options.txt
index 3c7398f3c6..71cd533985 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -255,6 +255,8 @@ option('xkbcommon', type : 'feature', value : 'auto',
        description: 'xkbcommon support')
 option('zstd', type : 'feature', value : 'auto',
        description: 'zstd compression support')
+option('qpl', type : 'feature', value : 'auto',
+       description: 'Query Processing Library support')
 option('fuse', type: 'feature', value: 'auto',
        description: 'FUSE block device export')
 option('fuse_lseek', type : 'feature', value : 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 7ca4b77eae..0909d1d517 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -220,6 +220,7 @@ meson_options_help() {
   printf "%s\n" '                           Xen PCI passthrough support'
   printf "%s\n" '  xkbcommon       xkbcommon support'
   printf "%s\n" '  zstd            zstd compression support'
+  printf "%s\n" '  qpl             Query Processing Library support'
 }
 _meson_option_parse() {
   case $1 in
@@ -556,6 +557,8 @@ _meson_option_parse() {
     --disable-xkbcommon) printf "%s" -Dxkbcommon=disabled ;;
     --enable-zstd) printf "%s" -Dzstd=enabled ;;
     --disable-zstd) printf "%s" -Dzstd=disabled ;;
+    --enable-qpl) printf "%s" -Dqpl=enabled ;;
+    --disable-qpl) printf "%s" -Dqpl=disabled ;;
     *) return 1 ;;
   esac
 }
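With this change in place, a build that should include (or exclude) the accelerator is configured along these lines (assuming the QPL library and its libqpl pkg-config file are installed on the build host):

    # opt in explicitly; configure fails if libqpl cannot be found
    ./configure --enable-qpl

    # default is 'auto'; as written in this patch the library is only
    # probed when the feature is explicitly enabled
    ./configure

    # opt out
    ./configure --disable-qpl

When the dependency('libqpl', ...) lookup succeeds, CONFIG_QPL is defined and the 'Query Processing Library support' line appears in the configuration summary.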
From patchwork Thu Nov 9 15:46:38 2023
X-Patchwork-Submitter: "Liu, Yuan1"
X-Patchwork-Id: 13452188
From: Yuan Liu
To: quintela@redhat.com, peterx@redhat.com, farosas@suse.de, leobras@redhat.com
Cc: qemu-devel@nongnu.org, yuan1.liu@intel.com, nanhai.zou@intel.com
Subject: [PATCH v2 4/4] multifd: Introduce QPL compression accelerator
Date: Thu, 9 Nov 2023 23:46:38 +0800
Message-Id: <20231109154638.488213-5-yuan1.liu@intel.com>
In-Reply-To: <20231109154638.488213-1-yuan1.liu@intel.com>
References: <20231109154638.488213-1-yuan1.liu@intel.com>

The Intel Query Processing Library (QPL) is an open-source library for
data compression. It supports the deflate compression algorithm,
compatible with zlib and gzip streams.

QPL supports both software and hardware compression. Software
compression relies on instruction-level optimizations to speed up data
compression and can be used widely on Intel CPUs. Hardware compression
uses the Intel In-Memory Analytics Accelerator (IAA), which is
available on Intel Xeon Sapphire Rapids processors.

During multifd live migration, the QPL accelerator can be selected to
accelerate the zlib compression algorithm; QPL automatically chooses
software or hardware acceleration based on the platform.

Signed-off-by: Yuan Liu
Reviewed-by: Nanhai Zou
---
 migration/meson.build   |   1 +
 migration/multifd-qpl.c | 326 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 327 insertions(+)
 create mode 100644 migration/multifd-qpl.c

--
2.39.3

diff --git a/migration/meson.build b/migration/meson.build
index 92b1cc4297..c155c2d781 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -40,6 +40,7 @@ if get_option('live_block_migration').allowed()
   system_ss.add(files('block.c'))
 endif
 system_ss.add(when: zstd, if_true: files('multifd-zstd.c'))
+system_ss.add(when: qpl, if_true: files('multifd-qpl.c'))
 
 specific_ss.add(when: 'CONFIG_SYSTEM_ONLY',
                 if_true: files('ram.c',
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
new file mode 100644
index 0000000000..9d2ca9e44e
--- /dev/null
+++ b/migration/multifd-qpl.c
@@ -0,0 +1,326 @@
+/*
+ * Multifd qpl compression accelerator implementation
+ *
+ * Copyright (c) 2023 Intel Corporation
+ *
+ * Authors:
+ *  Yuan Liu
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/rcu.h"
+#include "exec/ramblock.h"
+#include "exec/target_page.h"
+#include "qapi/error.h"
+#include "migration.h"
+#include "trace.h"
+#include "options.h"
+#include "multifd.h"
+#include "qpl/qpl.h"
+
+#define MAX_BUF_SIZE (MULTIFD_PACKET_SIZE * 2)
+
+static bool support_compression_methods[MULTIFD_COMPRESSION__MAX];
+
+struct qpl_data {
+    qpl_job *job;
+    /* compressed data buffer */
+    uint8_t *buf;
+    /* decompressed data buffer */
+    uint8_t *zbuf;
+};
+
+static int init_qpl(struct qpl_data *qpl, uint8_t channel_id, Error **errp)
+{
+    qpl_status status;
+    qpl_path_t path = qpl_path_auto;
+    uint32_t job_size = 0;
+
+    status = qpl_get_job_size(path, &job_size);
+    if (status != QPL_STS_OK) {
+        error_setg(errp, "multfd: %u: failed to get QPL size, error %d",
+                   channel_id, status);
+        return -1;
+    }
+
+    qpl->job = g_try_malloc0(job_size);
+    if (!qpl->job) {
+        error_setg(errp, "multfd: %u: failed to allocate QPL job", channel_id);
+        return -1;
+    }
+
+    status = qpl_init_job(path, qpl->job);
+    if (status != QPL_STS_OK) {
+        error_setg(errp, "multfd: %u: failed to init QPL hardware, error %d",
+                   channel_id, status);
+        g_free(qpl->job);
+        return -1;
+    }
+    return 0;
+}
+
+static void deinit_qpl(struct qpl_data *qpl)
+{
+    if (qpl->job) {
+        qpl_fini_job(qpl->job);
+        g_free(qpl->job);
+    }
+}
+
+/**
+ * qpl_send_setup: setup send side
+ *
+ * Setup each channel with QPL compression.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_send_setup(MultiFDSendParams *p, Error **errp)
+{
+    struct qpl_data *qpl = g_new0(struct qpl_data, 1);
+    /* prefault the memory to avoid the IO page faults */
+    int flags = MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS;
+    const char *err_msg;
+
+    if (init_qpl(qpl, p->id, errp) != 0) {
+        err_msg = "failed to initialize QPL\n";
+        goto err_qpl_init;
+    }
+    qpl->zbuf = mmap(NULL, MAX_BUF_SIZE, PROT_READ | PROT_WRITE, flags, -1, 0);
+    if (qpl->zbuf == MAP_FAILED) {
+        err_msg = "failed to allocate QPL zbuf\n";
+        goto err_zbuf_mmap;
+    }
+    p->data = qpl;
+    return 0;
+
+err_zbuf_mmap:
+    deinit_qpl(qpl);
+err_qpl_init:
+    g_free(qpl);
+    error_setg(errp, "multifd %u: %s", p->id, err_msg);
+    return -1;
+}
+
+/**
+ * qpl_send_cleanup: cleanup send side
+ *
+ * Close the channel and return memory.
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
+{
+    struct qpl_data *qpl = p->data;
+
+    deinit_qpl(qpl);
+    if (qpl->zbuf) {
+        munmap(qpl->zbuf, MAX_BUF_SIZE);
+        qpl->zbuf = NULL;
+    }
+    g_free(p->data);
+    p->data = NULL;
+}
+
+/**
+ * qpl_send_prepare: prepare data to be able to send
+ *
+ * Create a compressed buffer with all the pages that we are going to
+ * send.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_send_prepare(MultiFDSendParams *p, Error **errp)
+{
+    struct qpl_data *qpl = p->data;
+    qpl_job *job = qpl->job;
+    qpl_status status;
+
+    job->op = qpl_op_compress;
+    job->next_out_ptr = qpl->zbuf;
+    job->available_out = MAX_BUF_SIZE;
+    job->flags = QPL_FLAG_FIRST | QPL_FLAG_OMIT_VERIFY | QPL_FLAG_ZLIB_MODE;
+    /* QPL supports compression level 1 */
+    job->level = 1;
+    for (int i = 0; i < p->normal_num; i++) {
+        if (i == p->normal_num - 1) {
+            job->flags |= (QPL_FLAG_LAST | QPL_FLAG_OMIT_VERIFY);
+        }
+        job->next_in_ptr = p->pages->block->host + p->normal[i];
+        job->available_in = p->page_size;
+        status = qpl_execute_job(job);
+        if (status != QPL_STS_OK) {
+            error_setg(errp, "multifd %u: execute job error %d ",
+                       p->id, status);
+            return -1;
+        }
+        job->flags &= ~QPL_FLAG_FIRST;
+    }
+    p->iov[p->iovs_num].iov_base = qpl->zbuf;
+    p->iov[p->iovs_num].iov_len = job->total_out;
+    p->iovs_num++;
+    p->next_packet_size += job->total_out;
+    p->flags |= MULTIFD_FLAG_ZLIB;
+    return 0;
+}
+
+/**
+ * qpl_recv_setup: setup receive side
+ *
+ * Create the compressed channel and buffer.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
+{
+    struct qpl_data *qpl = g_new0(struct qpl_data, 1);
+    int flags = MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS;
+    const char *err_msg;
+
+    if (init_qpl(qpl, p->id, errp) != 0) {
+        err_msg = "failed to initialize QPL\n";
+        goto err_qpl_init;
+    }
+    qpl->zbuf = mmap(NULL, MAX_BUF_SIZE, PROT_READ | PROT_WRITE, flags, -1, 0);
+    if (qpl->zbuf == MAP_FAILED) {
+        err_msg = "failed to allocate QPL zbuf\n";
+        goto err_zbuf_mmap;
+    }
+    qpl->buf = mmap(NULL, MAX_BUF_SIZE, PROT_READ | PROT_WRITE, flags, -1, 0);
+    if (qpl->buf == MAP_FAILED) {
+        err_msg = "failed to allocate QPL buf\n";
+        goto err_buf_mmap;
+    }
+    p->data = qpl;
+    return 0;
+
+err_buf_mmap:
+    munmap(qpl->zbuf, MAX_BUF_SIZE);
+    qpl->zbuf = NULL;
+err_zbuf_mmap:
+    deinit_qpl(qpl);
+err_qpl_init:
+    g_free(qpl);
+    error_setg(errp, "multifd %u: %s", p->id, err_msg);
+    return -1;
+}
+
+/**
+ * qpl_recv_cleanup: setup receive side
+ *
+ * For no compression this function does nothing.
+ *
+ * @p: Params for the channel that we are using
+ */
+static void qpl_recv_cleanup(MultiFDRecvParams *p)
+{
+    struct qpl_data *qpl = p->data;
+
+    deinit_qpl(qpl);
+    if (qpl->zbuf) {
+        munmap(qpl->zbuf, MAX_BUF_SIZE);
+        qpl->zbuf = NULL;
+    }
+    if (qpl->buf) {
+        munmap(qpl->buf, MAX_BUF_SIZE);
+        qpl->buf = NULL;
+    }
+    g_free(p->data);
+    p->data = NULL;
+}
+
+/**
+ * qpl_recv_pages: read the data from the channel into actual pages
+ *
+ * Read the compressed buffer, and uncompress it into the actual
+ * pages.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp)
+{
+    struct qpl_data *qpl = p->data;
+    uint32_t in_size = p->next_packet_size;
+    uint32_t expected_size = p->normal_num * p->page_size;
+    uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+    qpl_job *job = qpl->job;
+    qpl_status status;
+    int ret;
+
+    if (flags != MULTIFD_FLAG_ZLIB) {
+        error_setg(errp, "multifd %u: flags received %x flags expected %x",
+                   p->id, flags, MULTIFD_FLAG_ZLIB);
+        return -1;
+    }
+    ret = qio_channel_read_all(p->c, (void *)qpl->zbuf, in_size, errp);
+    if (ret != 0) {
+        return ret;
+    }
+
+    job->op = qpl_op_decompress;
+    job->next_in_ptr = qpl->zbuf;
+    job->available_in = in_size;
+    job->next_out_ptr = qpl->buf;
+    job->available_out = expected_size;
+    job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST | QPL_FLAG_OMIT_VERIFY |
+                 QPL_FLAG_ZLIB_MODE;
+    status = qpl_execute_job(job);
+    if ((status != QPL_STS_OK) || (job->total_out != expected_size)) {
+        error_setg(errp, "multifd %u: execute job error %d, expect %u, out %u",
+                   p->id, status, job->total_out, expected_size);
+        return -1;
+    }
+    for (int i = 0; i < p->normal_num; i++) {
+        memcpy(p->host + p->normal[i], qpl->buf + (i * p->page_size),
+               p->page_size);
+    }
+    return 0;
+}
+
+static MultiFDMethods multifd_qpl_ops = {
+    .send_setup = qpl_send_setup,
+    .send_cleanup = qpl_send_cleanup,
+    .send_prepare = qpl_send_prepare,
+    .recv_setup = qpl_recv_setup,
+    .recv_cleanup = qpl_recv_cleanup,
+    .recv_pages = qpl_recv_pages
+};
+
+static bool is_supported(MultiFDCompression compression)
+{
+    return support_compression_methods[compression];
+}
+
+static MultiFDMethods *get_qpl_multifd_methods(void)
+{
+    return &multifd_qpl_ops;
+}
+
+static MultiFDAccelMethods multifd_qpl_accel_ops = {
+    .is_supported = is_supported,
+    .get_multifd_methods = get_qpl_multifd_methods,
+};
+
+static void multifd_qpl_register(void)
+{
+    multifd_register_accel_ops(MULTIFD_COMPRESSION_ACCEL_QPL,
+                               &multifd_qpl_accel_ops);
+    support_compression_methods[MULTIFD_COMPRESSION_ZLIB] = true;
+}
+
+migration_init(multifd_qpl_register);
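Taken together with the previous patches, an end-to-end run that exercises the QPL path would look roughly like this (an illustrative HMP sequence assuming a QEMU built with --enable-qpl on hosts where QPL is usable; the destination address is a placeholder):

    # on both source and destination monitors
    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-compression zlib
    (qemu) migrate_set_parameter multifd-compression-accel qpl

    # on the source
    (qemu) migrate -d tcp:<dest-host>:<port>

get_multifd_ops() then returns multifd_qpl_ops instead of the generic zlib implementation, and each multifd channel compresses and decompresses its pages through the QPL jobs set up in this file, using IAA hardware when the platform provides it and QPL's software path otherwise.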