From patchwork Mon Jan 27 14:17:39 2025
X-Patchwork-Submitter: Michael Tokarev
X-Patchwork-Id: 13951494
From: Michael Tokarev
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Yuan Liu, Jason Zeng, Peter Xu, Fabiano Rosas, Michael Tokarev
Subject: [Stable-9.2.1 25/41] multifd: bugfix for migration using compression methods
Date: Mon, 27 Jan 2025 17:17:39 +0300
Message-Id: <20250127141803.3514882-25-mjt@tls.msk.ru>

From: Yuan Liu

When compression is enabled on the migration channel and all of the pages
being processed are zero pages, those pages are neither sent nor updated
on the target side, leaving the memory contents of the source and target
inconsistent.

The root cause is that all compression methods call
multifd_send_prepare_common to determine whether to compress dirty pages,
but multifd_send_prepare_common does not update the IOV of MultiFDPacket_t
when all of the dirty pages are zero pages.

The solution is to always update the IOV of MultiFDPacket_t, regardless of
whether the dirty pages are all zero pages.
Fixes: 303e6f54f9 ("migration/multifd: Implement zero page transmission on the multifd thread.")
Cc: qemu-stable@nongnu.org #9.0+
Signed-off-by: Yuan Liu
Reviewed-by: Jason Zeng
Reviewed-by: Peter Xu
Message-Id: <20241218091413.140396-2-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas
(cherry picked from commit cdc3970f8597ebdc1a4c2090cfb4d11e297329ed)
Signed-off-by: Michael Tokarev

diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
index 55191152f9..2e4aaac285 100644
--- a/migration/multifd-nocomp.c
+++ b/migration/multifd-nocomp.c
@@ -362,6 +362,7 @@ int multifd_ram_flush_and_sync(void)
 bool multifd_send_prepare_common(MultiFDSendParams *p)
 {
     MultiFDPages_t *pages = &p->data->u.ram;
+    multifd_send_prepare_header(p);
     multifd_send_zero_page_detect(p);
 
     if (!pages->normal_num) {
@@ -369,8 +370,6 @@ bool multifd_send_prepare_common(MultiFDSendParams *p)
         return false;
     }
 
-    multifd_send_prepare_header(p);
-
     return true;
 }
 
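[Editor's note, not part of the patch: for readers following the reasoning
without the QEMU tree at hand, below is a minimal, self-contained C sketch
of the send-prepare flow after the fix. All names in it (struct send_params,
packet_hdr, prepare_header, prepare_common) are hypothetical stand-ins for
MultiFDSendParams, the packet header, multifd_send_prepare_header and
multifd_send_prepare_common; it only illustrates why the header must be
placed into iov[0] before the zero-page early return.]

/*
 * Simplified model of the multifd send-prepare flow; NOT QEMU code.
 * All types and helpers are hypothetical stand-ins.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct packet_hdr {
    unsigned normal_num;   /* pages that still carry payload */
    unsigned zero_num;     /* pages detected as all-zero */
};

struct send_params {
    struct packet_hdr hdr;
    struct { void *base; size_t len; } iov[2];
    unsigned iovs_num;     /* iov entries that will be written out */
};

/* The header must always occupy iov[0] so the packet reaches the wire. */
static void prepare_header(struct send_params *p)
{
    p->iov[0].base = &p->hdr;
    p->iov[0].len  = sizeof(p->hdr);
    p->iovs_num    = 1;
}

/* Returns true when payload remains to compress after zero-page detection,
 * false when the batch was nothing but zero pages. */
static bool prepare_common(struct send_params *p,
                           unsigned normal_pages, unsigned zero_pages)
{
    prepare_header(p);               /* the fix: do this unconditionally */
    p->hdr.normal_num = normal_pages;
    p->hdr.zero_num   = zero_pages;

    if (normal_pages == 0) {
        /*
         * All pages were zero pages: nothing to compress, but the header
         * describing them must still be sent.  Before the fix, the code
         * returned here without ever filling iov[0], so the destination
         * never learned about the zero pages.
         */
        return false;
    }
    return true;
}

int main(void)
{
    struct send_params p;
    memset(&p, 0, sizeof(p));

    /* A batch consisting only of zero pages. */
    bool has_payload = prepare_common(&p, 0, 128);

    printf("payload to compress: %s, iov entries to send: %u\n",
           has_payload ? "yes" : "no", p.iovs_num);
    return 0;
}

In this model, a batch that turns out to be all zero pages still produces
one iov entry (the packet header), matching what the commit message asks
for; with the pre-fix ordering the early return would leave iovs_num at 0
and the zero-page update would never reach the destination.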