From patchwork Mon Sep 9 13:47:13 2024
From: Hyman Huang
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster, David Hildenbrand,
    Philippe Mathieu-Daudé, Paolo Bonzini, yong.huang@smartx.com
Subject: [PATCH RFC 01/10] migration: Introduce structs for periodic CPU throttle
Date: Mon, 9 Sep 2024 21:47:13 +0800

Introduce shadow_bmap, iter_bmap, iter_dirty_pages, and
periodic_sync_shown_up to satisfy the needs of periodic CPU throttling.
Meanwhile, introduce an enumeration of dirty bitmap sync methods.
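For orientation (this note and the snippet are illustration, not part of
the patch): "bmap" keeps the pages the sender still has to transmit,
"iter_bmap" accumulates the pages that syncs report dirty during the
current iteration, and "shadow_bmap" holds a copy of "bmap" taken at a
periodic sync so that pages sent in the meantime can be recognized. The
per-iteration reset of that state, as the next patch in this series
implements it (ramblock_reset_iter_stats() in patch 02), looks like:

    static void ramblock_reset_iter_stats(RAMBlock *rb)
    {
        /* Forget the periodic-sync snapshot and the per-iteration backlog. */
        bitmap_clear(rb->shadow_bmap, 0, rb->used_length >> TARGET_PAGE_BITS);
        bitmap_clear(rb->iter_bmap, 0, rb->used_length >> TARGET_PAGE_BITS);
        rb->iter_dirty_pages = 0;
        rb->periodic_sync_shown_up = false;
    }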
Signed-off-by: Hyman Huang
---
 include/exec/ramblock.h | 45 +++++++++++++++++++++++++++++++++++++++++
 migration/ram.c         |  6 ++++++
 2 files changed, 51 insertions(+)

diff --git a/include/exec/ramblock.h b/include/exec/ramblock.h
index 0babd105c0..619c52885a 100644
--- a/include/exec/ramblock.h
+++ b/include/exec/ramblock.h
@@ -24,6 +24,30 @@
 #include "qemu/rcu.h"
 #include "exec/ramlist.h"
 
+/* Possible bits for migration_bitmap_sync */
+
+/*
+ * The old-fashioned sync method, which is, in turn, used for CPU
+ * throttle and memory transfer.
+ */
+#define RAMBLOCK_SYN_LEGACY_ITER   (1U << 0)
+
+/*
+ * The modern sync method, which is, in turn, used for CPU throttle
+ * and memory transfer.
+ */
+#define RAMBLOCK_SYN_MODERN_ITER   (1U << 1)
+
+/* The modern sync method, which is used for CPU throttle only */
+#define RAMBLOCK_SYN_MODERN_PERIOD (1U << 2)
+
+#define RAMBLOCK_SYN_MASK          (0x7)
+
+typedef enum RAMBlockSynMode {
+    RAMBLOCK_SYN_LEGACY, /* Old-fashioned mode */
+    RAMBLOCK_SYN_MODERN,
+} RAMBlockSynMode;
+
 struct RAMBlock {
     struct rcu_head rcu;
     struct MemoryRegion *mr;
@@ -89,6 +113,27 @@ struct RAMBlock {
      * could not have been valid on the source.
      */
     ram_addr_t postcopy_length;
+
+    /*
+     * Used to backup the bmap during periodic sync to see whether any dirty
+     * pages were sent during that time.
+     */
+    unsigned long *shadow_bmap;
+
+    /*
+     * The bitmap "bmap," which was initially used for both sync and memory
+     * transfer, will be replaced by two bitmaps: the previously used "bmap"
+     * and the recently added "iter_bmap." Only the memory transfer is
+     * conducted with the previously used "bmap"; the recently added
+     * "iter_bmap" is utilized for sync.
+     */
+    unsigned long *iter_bmap;
+
+    /* Number of new dirty pages during iteration */
+    uint64_t iter_dirty_pages;
+
+    /* If periodic sync has shown up during iteration */
+    bool periodic_sync_shown_up;
 };
 #endif
 #endif
diff --git a/migration/ram.c b/migration/ram.c
index 67ca3d5d51..f29faa82d6 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2362,6 +2362,10 @@ static void ram_bitmaps_destroy(void)
         block->bmap = NULL;
         g_free(block->file_bmap);
         block->file_bmap = NULL;
+        g_free(block->shadow_bmap);
+        block->shadow_bmap = NULL;
+        g_free(block->iter_bmap);
+        block->iter_bmap = NULL;
     }
 }
 
@@ -2753,6 +2757,8 @@ static void ram_list_init_bitmaps(void)
             }
             block->clear_bmap_shift = shift;
             block->clear_bmap = bitmap_new(clear_bmap_size(pages, shift));
+            block->shadow_bmap = bitmap_new(pages);
+            block->iter_bmap = bitmap_new(pages);
         }
     }
 }
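How the three RAMBLOCK_SYN_* bits get chosen only becomes visible in the
next patch; as a preview (condensed from patch 02, where sync_mode, the
periodic flag, and the receiving assert live), exactly one bit is passed
per sync call:

    unsigned int flag = RAMBLOCK_SYN_LEGACY_ITER;

    if (sync_mode == RAMBLOCK_SYN_MODERN) {
        flag = periodic ? RAMBLOCK_SYN_MODERN_PERIOD : RAMBLOCK_SYN_MODERN_ITER;
    }
    /* The callee double-checks that a single known bit was passed. */
    assert(flag && !(flag & (~RAMBLOCK_SYN_MASK)));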
From patchwork Mon Sep 9 13:47:14 2024
From: Hyman Huang
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster, David Hildenbrand,
    Philippe Mathieu-Daudé, Paolo Bonzini, yong.huang@smartx.com
Subject: [PATCH RFC 02/10] migration: Refine util functions to support periodic CPU throttle
Date: Mon, 9 Sep 2024 21:47:14 +0800
Message-Id: <7b06d849b1b4ebf184f7e2d71b444fcb6393a339.1725889277.git.yong.huang@smartx.com>

Supply the migration_bitmap_sync function with a periodic argument.
Introduce the sync_mode global variable to track the sync mode and to
support periodic throttling while keeping backward compatibility.
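The heart of this patch is one reconciliation rule: pages that were
still pending in "bmap" when a periodic sync snapshotted it into
"shadow_bmap", but are no longer pending at the next per-iteration sync,
must have been sent in between, so they are dropped from "iter_bmap"
instead of being re-queued. A toy, self-contained illustration of just
that rule (plain C over an 8-page block; all names are local to the
example, nothing here is QEMU code):

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGES 8

    int main(void)
    {
        bool bmap[PAGES]   = {0};  /* pages the sender still has to transmit */
        bool shadow[PAGES] = {0};  /* snapshot of bmap taken at periodic sync */
        bool iter[PAGES]   = {0};  /* dirty pages collected for this iteration */

        /* Periodic sync: the guest dirtied pages 2 and 5; pages 2 and 3
         * were still unsent at that moment, so the snapshot records them. */
        iter[2] = iter[5] = true;
        bmap[2] = bmap[3] = true;
        for (int i = 0; i < PAGES; i++) {
            shadow[i] = bmap[i];
        }

        bmap[2] = false;  /* ...the sender transmits page 2 meanwhile... */

        /* Per-iteration sync: drop pages provably sent since the snapshot
         * (set in shadow, clear in bmap) -- page 2 here; page 5 survives
         * and will be folded back into bmap for sending. */
        for (int i = 0; i < PAGES; i++) {
            if (shadow[i] && !bmap[i]) {
                iter[i] = false;
            }
        }

        for (int i = 0; i < PAGES; i++) {
            printf("page %d: resend=%d\n", i, iter[i]);
        }
        return 0;
    }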
Signed-off-by: Hyman Huang --- include/exec/ram_addr.h | 117 ++++++++++++++++++++++++++++++++++++---- migration/ram.c | 49 +++++++++++++---- 2 files changed, 147 insertions(+), 19 deletions(-) diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h index 891c44cf2d..43fa4d7b18 100644 --- a/include/exec/ram_addr.h +++ b/include/exec/ram_addr.h @@ -472,17 +472,68 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start, cpu_physical_memory_test_and_clear_dirty(start, length, DIRTY_MEMORY_CODE); } +static void ramblock_clear_iter_bmap(RAMBlock *rb, + ram_addr_t start, + ram_addr_t length) +{ + ram_addr_t addr; + unsigned long *bmap = rb->bmap; + unsigned long *shadow_bmap = rb->shadow_bmap; + unsigned long *iter_bmap = rb->iter_bmap; + + for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) { + long k = (start + addr) >> TARGET_PAGE_BITS; + if (test_bit(k, shadow_bmap) && !test_bit(k, bmap)) { + /* Page has been sent, clear the iter bmap */ + clear_bit(k, iter_bmap); + } + } +} + +static void ramblock_update_iter_bmap(RAMBlock *rb, + ram_addr_t start, + ram_addr_t length) +{ + ram_addr_t addr; + unsigned long *bmap = rb->bmap; + unsigned long *iter_bmap = rb->iter_bmap; + + for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) { + long k = (start + addr) >> TARGET_PAGE_BITS; + if (test_bit(k, iter_bmap)) { + if (!test_bit(k, bmap)) { + set_bit(k, bmap); + rb->iter_dirty_pages++; + } + } + } +} /* Called with RCU critical section */ static inline uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb, ram_addr_t start, - ram_addr_t length) + ram_addr_t length, + unsigned int flag) { ram_addr_t addr; unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS); uint64_t num_dirty = 0; unsigned long *dest = rb->bmap; + unsigned long *shadow_bmap = rb->shadow_bmap; + unsigned long *iter_bmap = rb->iter_bmap; + + assert(flag && !(flag & (~RAMBLOCK_SYN_MASK))); + + /* + * We must remove the sent dirty page from the iter_bmap in order to + * minimize redundant page transfers if periodic sync has appeared + * during this iteration. + */ + if (rb->periodic_sync_shown_up && + (flag & (RAMBLOCK_SYN_MODERN_ITER | RAMBLOCK_SYN_MODERN_PERIOD))) { + ramblock_clear_iter_bmap(rb, start, length); + } /* start address and length is aligned at the start of a word? 
*/ if (((word * BITS_PER_LONG) << TARGET_PAGE_BITS) == @@ -503,8 +554,20 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb, if (src[idx][offset]) { unsigned long bits = qatomic_xchg(&src[idx][offset], 0); unsigned long new_dirty; + if (flag & (RAMBLOCK_SYN_MODERN_ITER | + RAMBLOCK_SYN_MODERN_PERIOD)) { + /* Back-up bmap for the next iteration */ + iter_bmap[k] |= bits; + if (flag == RAMBLOCK_SYN_MODERN_PERIOD) { + /* Back-up bmap to detect pages has been sent */ + shadow_bmap[k] = dest[k]; + } + } new_dirty = ~dest[k]; - dest[k] |= bits; + if (flag == RAMBLOCK_SYN_LEGACY_ITER) { + dest[k] |= bits; + } + new_dirty &= bits; num_dirty += ctpopl(new_dirty); } @@ -534,18 +597,54 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb, ram_addr_t offset = rb->offset; for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) { - if (cpu_physical_memory_test_and_clear_dirty( - start + addr + offset, - TARGET_PAGE_SIZE, - DIRTY_MEMORY_MIGRATION)) { - long k = (start + addr) >> TARGET_PAGE_BITS; - if (!test_and_set_bit(k, dest)) { - num_dirty++; + long k = (start + addr) >> TARGET_PAGE_BITS; + if (flag == RAMBLOCK_SYN_MODERN_PERIOD) { + if (test_bit(k, dest)) { + /* Back-up bmap to detect pages has been sent */ + set_bit(k, shadow_bmap); + } + } + + if (flag == RAMBLOCK_SYN_LEGACY_ITER) { + if (cpu_physical_memory_test_and_clear_dirty( + start + addr + offset, + TARGET_PAGE_SIZE, + DIRTY_MEMORY_MIGRATION)) { + if (!test_and_set_bit(k, dest)) { + num_dirty++; + } + } + } else { + if (cpu_physical_memory_test_and_clear_dirty( + start + addr + offset, + TARGET_PAGE_SIZE, + DIRTY_MEMORY_MIGRATION)) { + if (!test_bit(k, dest)) { + num_dirty++; + } + /* Back-up bmap for the next iteration */ + set_bit(k, iter_bmap); } } } } + /* + * If periodic sync has emerged, we have to resync every dirty + * page from the iter_bmap one by one. It's possible that not + * all of the dirty pages that this iteration is meant to send + * are included in the bitmap that the current sync retrieved + * from the KVM. + */ + if (rb->periodic_sync_shown_up && + (flag == RAMBLOCK_SYN_MODERN_ITER)) { + ramblock_update_iter_bmap(rb, start, length); + } + + if (flag == RAMBLOCK_SYN_MODERN_PERIOD) { + rb->periodic_sync_shown_up = true; + } + return num_dirty; } #endif diff --git a/migration/ram.c b/migration/ram.c index f29faa82d6..a56634eb46 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -112,6 +112,8 @@ XBZRLECacheStats xbzrle_counters; +static RAMBlockSynMode sync_mode = RAMBLOCK_SYN_LEGACY; + /* used by the search for pages to send */ struct PageSearchStatus { /* The migration channel used for a specific host page */ @@ -912,13 +914,38 @@ bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start) return false; } +static void ramblock_reset_iter_stats(RAMBlock *rb) +{ + bitmap_clear(rb->shadow_bmap, 0, rb->used_length >> TARGET_PAGE_BITS); + bitmap_clear(rb->iter_bmap, 0, rb->used_length >> TARGET_PAGE_BITS); + rb->iter_dirty_pages = 0; + rb->periodic_sync_shown_up = false; +} + /* Called with RCU critical section */ -static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb) +static void ramblock_sync_dirty_bitmap(RAMState *rs, + RAMBlock *rb, + bool periodic) { - uint64_t new_dirty_pages = - cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length); + uint64_t new_dirty_pages; + unsigned int flag = RAMBLOCK_SYN_LEGACY_ITER; + + if (sync_mode == RAMBLOCK_SYN_MODERN) { + flag = periodic ? 
RAMBLOCK_SYN_MODERN_PERIOD : RAMBLOCK_SYN_MODERN_ITER; + } + + new_dirty_pages = + cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length, flag); + + if (flag & (RAMBLOCK_SYN_LEGACY_ITER | RAMBLOCK_SYN_MODERN_ITER)) { + if (flag == RAMBLOCK_SYN_LEGACY_ITER) { + rs->migration_dirty_pages += new_dirty_pages; + } else { + rs->migration_dirty_pages += rb->iter_dirty_pages; + ramblock_reset_iter_stats(rb); + } + } - rs->migration_dirty_pages += new_dirty_pages; rs->num_dirty_pages_period += new_dirty_pages; } @@ -1041,7 +1068,9 @@ static void migration_trigger_throttle(RAMState *rs) } } -static void migration_bitmap_sync(RAMState *rs, bool last_stage) +static void migration_bitmap_sync(RAMState *rs, + bool last_stage, + bool periodic) { RAMBlock *block; int64_t end_time; @@ -1058,7 +1087,7 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage) WITH_QEMU_LOCK_GUARD(&rs->bitmap_mutex) { WITH_RCU_READ_LOCK_GUARD() { RAMBLOCK_FOREACH_NOT_IGNORED(block) { - ramblock_sync_dirty_bitmap(rs, block); + ramblock_sync_dirty_bitmap(rs, block, periodic); } stat64_set(&mig_stats.dirty_bytes_last_sync, ram_bytes_remaining()); } @@ -1101,7 +1130,7 @@ static void migration_bitmap_sync_precopy(RAMState *rs, bool last_stage) local_err = NULL; } - migration_bitmap_sync(rs, last_stage); + migration_bitmap_sync(rs, last_stage, false); if (precopy_notify(PRECOPY_NOTIFY_AFTER_BITMAP_SYNC, &local_err)) { error_report_err(local_err); @@ -2594,7 +2623,7 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms) RCU_READ_LOCK_GUARD(); /* This should be our last sync, the src is now paused */ - migration_bitmap_sync(rs, false); + migration_bitmap_sync(rs, false, false); /* Easiest way to make sure we don't resume in the middle of a host-page */ rs->pss[RAM_CHANNEL_PRECOPY].last_sent_block = NULL; @@ -3581,7 +3610,7 @@ void colo_incoming_start_dirty_log(void) memory_global_dirty_log_sync(false); WITH_RCU_READ_LOCK_GUARD() { RAMBLOCK_FOREACH_NOT_IGNORED(block) { - ramblock_sync_dirty_bitmap(ram_state, block); + ramblock_sync_dirty_bitmap(ram_state, block, false); /* Discard this dirty bitmap record */ bitmap_zero(block->bmap, block->max_length >> TARGET_PAGE_BITS); } @@ -3862,7 +3891,7 @@ void colo_flush_ram_cache(void) qemu_mutex_lock(&ram_state->bitmap_mutex); WITH_RCU_READ_LOCK_GUARD() { RAMBLOCK_FOREACH_NOT_IGNORED(block) { - ramblock_sync_dirty_bitmap(ram_state, block); + ramblock_sync_dirty_bitmap(ram_state, block, false); } } From patchwork Mon Sep 9 13:47:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797079 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8D5F3ECE579 for ; Mon, 9 Sep 2024 13:49:23 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snel8-0000Zc-AA; Mon, 09 Sep 2024 09:48:50 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1snel7-0000UZ-1d for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:48:49 -0400 Received: from mail-pj1-x102d.google.com ([2607:f8b0:4864:20::102d]) by eggs.gnu.org with esmtps 
From: Hyman Huang
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster, David Hildenbrand,
    Philippe Mathieu-Daudé, Paolo Bonzini, yong.huang@smartx.com
Subject: [PATCH RFC 03/10] qapi/migration: Introduce periodic CPU throttling parameters
Date: Mon, 9 Sep 2024 21:47:15 +0800
Message-Id: <4eb9f818219880b4fbdd5609c8fb998626febe62.1725889277.git.yong.huang@smartx.com>

To activate the periodic CPU
throttleing feature, introduce the cpu-periodic-throttle. To control the frequency of throttling, introduce the cpu-periodic-throttle-interval. Signed-off-by: Hyman Huang --- migration/migration-hmp-cmds.c | 17 +++++++++++ migration/options.c | 54 ++++++++++++++++++++++++++++++++++ migration/options.h | 2 ++ qapi/migration.json | 25 +++++++++++++++- 4 files changed, 97 insertions(+), 1 deletion(-) diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c index 7d608d26e1..f7b8e06bb4 100644 --- a/migration/migration-hmp-cmds.c +++ b/migration/migration-hmp-cmds.c @@ -264,6 +264,15 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict) monitor_printf(mon, "%s: %s\n", MigrationParameter_str(MIGRATION_PARAMETER_CPU_THROTTLE_TAILSLOW), params->cpu_throttle_tailslow ? "on" : "off"); + assert(params->has_cpu_periodic_throttle); + monitor_printf(mon, "%s: %s\n", + MigrationParameter_str(MIGRATION_PARAMETER_CPU_PERIODIC_THROTTLE), + params->cpu_periodic_throttle ? "on" : "off"); + assert(params->has_cpu_periodic_throttle_interval); + monitor_printf(mon, "%s: %u\n", + MigrationParameter_str( + MIGRATION_PARAMETER_CPU_PERIODIC_THROTTLE_INTERVAL), + params->cpu_periodic_throttle_interval); assert(params->has_max_cpu_throttle); monitor_printf(mon, "%s: %u\n", MigrationParameter_str(MIGRATION_PARAMETER_MAX_CPU_THROTTLE), @@ -512,6 +521,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict) p->has_cpu_throttle_tailslow = true; visit_type_bool(v, param, &p->cpu_throttle_tailslow, &err); break; + case MIGRATION_PARAMETER_CPU_PERIODIC_THROTTLE: + p->has_cpu_periodic_throttle = true; + visit_type_bool(v, param, &p->cpu_periodic_throttle, &err); + break; + case MIGRATION_PARAMETER_CPU_PERIODIC_THROTTLE_INTERVAL: + p->has_cpu_periodic_throttle_interval = true; + visit_type_uint8(v, param, &p->cpu_periodic_throttle_interval, &err); + break; case MIGRATION_PARAMETER_MAX_CPU_THROTTLE: p->has_max_cpu_throttle = true; visit_type_uint8(v, param, &p->max_cpu_throttle, &err); diff --git a/migration/options.c b/migration/options.c index 645f55003d..2dbe275ba0 100644 --- a/migration/options.c +++ b/migration/options.c @@ -44,6 +44,7 @@ #define DEFAULT_MIGRATE_THROTTLE_TRIGGER_THRESHOLD 50 #define DEFAULT_MIGRATE_CPU_THROTTLE_INITIAL 20 #define DEFAULT_MIGRATE_CPU_THROTTLE_INCREMENT 10 +#define DEFAULT_MIGRATE_CPU_PERIODIC_THROTTLE_INTERVAL 5 #define DEFAULT_MIGRATE_MAX_CPU_THROTTLE 99 /* Migration XBZRLE default cache size */ @@ -104,6 +105,11 @@ Property migration_properties[] = { DEFAULT_MIGRATE_CPU_THROTTLE_INCREMENT), DEFINE_PROP_BOOL("x-cpu-throttle-tailslow", MigrationState, parameters.cpu_throttle_tailslow, false), + DEFINE_PROP_BOOL("x-cpu-periodic-throttle", MigrationState, + parameters.cpu_periodic_throttle, false), + DEFINE_PROP_UINT8("x-cpu-periodic-throttle-interval", MigrationState, + parameters.cpu_periodic_throttle_interval, + DEFAULT_MIGRATE_CPU_PERIODIC_THROTTLE_INTERVAL), DEFINE_PROP_SIZE("x-max-bandwidth", MigrationState, parameters.max_bandwidth, MAX_THROTTLE), DEFINE_PROP_SIZE("avail-switchover-bandwidth", MigrationState, @@ -695,6 +701,20 @@ uint8_t migrate_cpu_throttle_initial(void) return s->parameters.cpu_throttle_initial; } +uint8_t migrate_periodic_throttle_interval(void) +{ + MigrationState *s = migrate_get_current(); + + return s->parameters.cpu_periodic_throttle_interval; +} + +bool migrate_periodic_throttle(void) +{ + MigrationState *s = migrate_get_current(); + + return s->parameters.cpu_periodic_throttle; +} + bool 
migrate_cpu_throttle_tailslow(void) { MigrationState *s = migrate_get_current(); @@ -874,6 +894,11 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp) params->cpu_throttle_increment = s->parameters.cpu_throttle_increment; params->has_cpu_throttle_tailslow = true; params->cpu_throttle_tailslow = s->parameters.cpu_throttle_tailslow; + params->has_cpu_periodic_throttle = true; + params->cpu_periodic_throttle = s->parameters.cpu_periodic_throttle; + params->has_cpu_periodic_throttle_interval = true; + params->cpu_periodic_throttle_interval = + s->parameters.cpu_periodic_throttle_interval; params->tls_creds = g_strdup(s->parameters.tls_creds); params->tls_hostname = g_strdup(s->parameters.tls_hostname); params->tls_authz = g_strdup(s->parameters.tls_authz ? @@ -940,6 +965,8 @@ void migrate_params_init(MigrationParameters *params) params->has_cpu_throttle_initial = true; params->has_cpu_throttle_increment = true; params->has_cpu_throttle_tailslow = true; + params->has_cpu_periodic_throttle = true; + params->has_cpu_periodic_throttle_interval = true; params->has_max_bandwidth = true; params->has_downtime_limit = true; params->has_x_checkpoint_delay = true; @@ -996,6 +1023,15 @@ bool migrate_params_check(MigrationParameters *params, Error **errp) return false; } + if (params->has_cpu_periodic_throttle_interval && + (params->cpu_periodic_throttle_interval < 2 || + params->cpu_periodic_throttle_interval > 10)) { + error_setg(errp, QERR_INVALID_PARAMETER_VALUE, + "cpu_periodic_throttle_interval", + "an integer in the range of 2 to 10"); + return false; + } + if (params->has_max_bandwidth && (params->max_bandwidth > SIZE_MAX)) { error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "max_bandwidth", @@ -1163,6 +1199,15 @@ static void migrate_params_test_apply(MigrateSetParameters *params, dest->cpu_throttle_tailslow = params->cpu_throttle_tailslow; } + if (params->has_cpu_periodic_throttle) { + dest->cpu_periodic_throttle = params->cpu_periodic_throttle; + } + + if (params->has_cpu_periodic_throttle_interval) { + dest->cpu_periodic_throttle_interval = + params->cpu_periodic_throttle_interval; + } + if (params->tls_creds) { assert(params->tls_creds->type == QTYPE_QSTRING); dest->tls_creds = params->tls_creds->u.s; @@ -1271,6 +1316,15 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp) s->parameters.cpu_throttle_tailslow = params->cpu_throttle_tailslow; } + if (params->has_cpu_periodic_throttle) { + s->parameters.cpu_periodic_throttle = params->cpu_periodic_throttle; + } + + if (params->has_cpu_periodic_throttle_interval) { + s->parameters.cpu_periodic_throttle_interval = + params->cpu_periodic_throttle_interval; + } + if (params->tls_creds) { g_free(s->parameters.tls_creds); assert(params->tls_creds->type == QTYPE_QSTRING); diff --git a/migration/options.h b/migration/options.h index a2397026db..efeac01470 100644 --- a/migration/options.h +++ b/migration/options.h @@ -68,6 +68,8 @@ bool migrate_has_block_bitmap_mapping(void); uint32_t migrate_checkpoint_delay(void); uint8_t migrate_cpu_throttle_increment(void); uint8_t migrate_cpu_throttle_initial(void); +uint8_t migrate_periodic_throttle_interval(void); +bool migrate_periodic_throttle(void); bool migrate_cpu_throttle_tailslow(void); bool migrate_direct_io(void); uint64_t migrate_downtime_limit(void); diff --git a/qapi/migration.json b/qapi/migration.json index 7324571e92..8281d4a83b 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -724,6 +724,12 @@ # be excessive at tail stage. 
The default value is false. (Since # 5.1) # +# @cpu-periodic-throttle: Make CPU throttling periodically. +# (Since 9.1) +# +# @cpu-periodic-throttle-interval: Interval of the periodic CPU throttling. +# (Since 9.1) +# # @tls-creds: ID of the 'tls-creds' object that provides credentials # for establishing a TLS connection over the migration data # channel. On the outgoing side of the migration, the credentials @@ -844,7 +850,8 @@ 'announce-rounds', 'announce-step', 'throttle-trigger-threshold', 'cpu-throttle-initial', 'cpu-throttle-increment', - 'cpu-throttle-tailslow', + 'cpu-throttle-tailslow', 'cpu-periodic-throttle', + 'cpu-periodic-throttle-interval', 'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth', 'avail-switchover-bandwidth', 'downtime-limit', { 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] }, @@ -899,6 +906,12 @@ # be excessive at tail stage. The default value is false. (Since # 5.1) # +# @cpu-periodic-throttle: Make CPU throttling periodically. +# (Since 9.1) +# +# @cpu-periodic-throttle-interval: Interval of the periodic CPU throttling. +# (Since 9.1) +# # @tls-creds: ID of the 'tls-creds' object that provides credentials # for establishing a TLS connection over the migration data # channel. On the outgoing side of the migration, the credentials @@ -1026,6 +1039,8 @@ '*cpu-throttle-initial': 'uint8', '*cpu-throttle-increment': 'uint8', '*cpu-throttle-tailslow': 'bool', + '*cpu-periodic-throttle': 'bool', + '*cpu-periodic-throttle-interval': 'uint8', '*tls-creds': 'StrOrNull', '*tls-hostname': 'StrOrNull', '*tls-authz': 'StrOrNull', @@ -1107,6 +1122,12 @@ # be excessive at tail stage. The default value is false. (Since # 5.1) # +# @cpu-periodic-throttle: Make CPU throttling periodically. +# (Since 9.1) +# +# @cpu-periodic-throttle-interval: Interval of the periodic CPU throttling. +# (Since 9.1) +# # @tls-creds: ID of the 'tls-creds' object that provides credentials # for establishing a TLS connection over the migration data # channel. 
On the outgoing side of the migration, the credentials @@ -1227,6 +1248,8 @@ '*cpu-throttle-initial': 'uint8', '*cpu-throttle-increment': 'uint8', '*cpu-throttle-tailslow': 'bool', + '*cpu-periodic-throttle': 'bool', + '*cpu-periodic-throttle-interval': 'uint8', '*tls-creds': 'str', '*tls-hostname': 'str', '*tls-authz': 'str', From patchwork Mon Sep 9 13:47:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797076 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 849E4ECE579 for ; Mon, 9 Sep 2024 13:49:05 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snelB-0000lE-0K; Mon, 09 Sep 2024 09:48:53 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1snel9-0000eO-Ho for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:48:51 -0400 Received: from mail-pj1-x1035.google.com ([2607:f8b0:4864:20::1035]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1snel7-0000AU-O9 for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:48:51 -0400 Received: by mail-pj1-x1035.google.com with SMTP id 98e67ed59e1d1-2da4ea973bdso3145927a91.1 for ; Mon, 09 Sep 2024 06:48:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=smartx-com.20230601.gappssmtp.com; s=20230601; t=1725889728; x=1726494528; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=A4+6JdQYvda/1Hm0Qzqic1h3nzIQOs1z0LtM+uvLl5s=; b=oPGJTCsOOaxq9O3nXP6ljQSozmzTYPr2Gexw49v97d5ToMdo9jyujURTwIHMMgU3Pl 3XsmZeWSJkzvPysfQt4ZqFIUD4edkpKzGJBb72Gn/kiiN+Edzk/7PpTDvn8dw5jZjiNJ yj7HDAoQ3xIviEAdZI99yRYYlnnMEOtsPKN2MmI5xKhc48y0ItvKyXFN0yFjuyQBNU+h fin2ow+EGKp6qXdf4O+IH5dKOpXhR5sDeLCB+dRf9bDlSWzd/Sf6a6/xtITSgqlugs9o MWWyWi12c0hmyKe3evqtSYKGwRAKfwtOFtkMNKHQFK3/yoYR4Ql/QUmJAlAQibQavFSL Ha5g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725889728; x=1726494528; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=A4+6JdQYvda/1Hm0Qzqic1h3nzIQOs1z0LtM+uvLl5s=; b=uoDDkVAcll5zD0mGzVzAWePHEczwSOWtR27fDZBF5YofNfNmpmK+cRwIE4KSXeI45a L8uCgaUhmqH8UeJlEtsC2x81vh4lo17k3VTL7i1uuy0N7MujuJimeODCDG8/KM9+NQVF dCSN2vQJXhfWxrwVGNvFb5Fb0NRwknwzb1GB2m/mxslg3pXkQKoD7B6eQeqNsiS52iCr U9VBLRrfWQwhPkmr+YUl5ddSvRSEIdfeTc/5gJ6ElmQymCz+X5Gi+QYJYfcgCjd46WTi nZLmeW3/OeoXWI54n75uvJ0Wai/+N8qO7ddn+WDQA9U+eRDlvZXsjTzTlaejAIqCDp9q pyFQ== X-Gm-Message-State: AOJu0YwypArgMPji89+pWaiefA1ybiqcxctSjHN/Ul7unKm2/4NM0Tro MLssF0YDb6tXtgTZHEvos9kWz+VZBHz5zhs0QJNndc+SX8v6dDB4MiynJBqupKOibANtVqJFgb+ aGm9mDg== X-Google-Smtp-Source: AGHT+IESCKcOO4LVfEP9Ndg7n2I2JOagqHf46h1y0dkJYnnSKHxfKI6gV+ISspCMp7qbYrUtSpAH3w== X-Received: by 2002:a17:90b:4c48:b0:2d8:8d34:5b8 with SMTP id 98e67ed59e1d1-2dafcf1b325mr9328266a91.10.1725889727508; Mon, 09 Sep 2024 06:48:47 -0700 (PDT) Received: from localhost.localdomain ([118.114.94.226]) by 
From: Hyman Huang
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster, David Hildenbrand,
    Philippe Mathieu-Daudé, Paolo Bonzini, yong.huang@smartx.com
Subject: [PATCH RFC 04/10] qapi/migration: Introduce the iteration-count
Date: Mon, 9 Sep 2024 21:47:16 +0800
Message-Id: <8d2b0314e4c9d8be52f50ed41d60dd6c2bbd5804.1725889277.git.yong.huang@smartx.com>

The original migration statistic dirty-sync-count can no longer reflect
the iteration count once the periodic synchronization introduced in the
next commits is enabled, because every periodic sync also increments it;
add iteration-count to compensate.

Signed-off-by: Hyman Huang
---
 migration/migration-stats.h  |  4 ++++
 migration/migration.c        |  1 +
 migration/ram.c              | 12 ++++++++----
 qapi/migration.json          |  6 +++++-
 tests/qtest/migration-test.c |  2 +-
 5 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/migration/migration-stats.h b/migration/migration-stats.h
index 05290ade76..43ee0f4f05 100644
--- a/migration/migration-stats.h
+++ b/migration/migration-stats.h
@@ -50,6 +50,10 @@ typedef struct {
      * Number of times we have synchronized guest bitmaps.
      */
     Stat64 dirty_sync_count;
+    /*
+     * Number of migration iterations processed.
+     */
+    Stat64 iteration_count;
     /*
      * Number of times zero copy failed to send any page using zero
      * copy.
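Once periodic sync is wired up, dirty-sync-count counts every bitmap
sync, including the periodic ones, while iteration-count counts only
per-iteration syncs, so dirty-sync-count >= iteration-count. As a rough,
purely illustrative figure: with the default 5-second interval, an
iteration lasting 60 seconds adds 1 to iteration-count but about
1 + 60/5 = 13 to dirty-sync-count. That is why the XBZRLE cache
generation and the migration-pass event in the hunks below are switched
over to iteration-count.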
diff --git a/migration/migration.c b/migration/migration.c index 3dea06d577..055d527ff6 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -1197,6 +1197,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s) info->ram->mbps = s->mbps; info->ram->dirty_sync_count = stat64_get(&mig_stats.dirty_sync_count); + info->ram->iteration_count = stat64_get(&mig_stats.iteration_count); info->ram->dirty_sync_missed_zero_copy = stat64_get(&mig_stats.dirty_sync_missed_zero_copy); info->ram->postcopy_requests = diff --git a/migration/ram.c b/migration/ram.c index a56634eb46..23471c9e5a 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -594,7 +594,7 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr) /* We don't care if this fails to allocate a new cache page * as long as it updated an old one */ cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page, - stat64_get(&mig_stats.dirty_sync_count)); + stat64_get(&mig_stats.iteration_count)); } #define ENCODING_FLAG_XBZRLE 0x1 @@ -620,7 +620,7 @@ static int save_xbzrle_page(RAMState *rs, PageSearchStatus *pss, int encoded_len = 0, bytes_xbzrle; uint8_t *prev_cached_page; QEMUFile *file = pss->pss_channel; - uint64_t generation = stat64_get(&mig_stats.dirty_sync_count); + uint64_t generation = stat64_get(&mig_stats.iteration_count); if (!cache_is_cached(XBZRLE.cache, current_addr, generation)) { xbzrle_counters.cache_miss++; @@ -1075,6 +1075,10 @@ static void migration_bitmap_sync(RAMState *rs, RAMBlock *block; int64_t end_time; + if (!periodic) { + stat64_add(&mig_stats.iteration_count, 1); + } + stat64_add(&mig_stats.dirty_sync_count, 1); if (!rs->time_last_bitmap_sync) { @@ -1111,8 +1115,8 @@ static void migration_bitmap_sync(RAMState *rs, rs->num_dirty_pages_period = 0; rs->bytes_xfer_prev = migration_transferred_bytes(); } - if (migrate_events()) { - uint64_t generation = stat64_get(&mig_stats.dirty_sync_count); + if (!periodic && migrate_events()) { + uint64_t generation = stat64_get(&mig_stats.iteration_count); qapi_event_send_migration_pass(generation); } } diff --git a/qapi/migration.json b/qapi/migration.json index 8281d4a83b..6d8358c202 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -60,6 +60,9 @@ # between 0 and @dirty-sync-count * @multifd-channels. (since # 7.1) # +# @iteration-count: The number of iterations since migration started. 
+# (since 9.2) +# # Since: 0.14 ## { 'struct': 'MigrationStats', @@ -72,7 +75,8 @@ 'multifd-bytes': 'uint64', 'pages-per-second': 'uint64', 'precopy-bytes': 'uint64', 'downtime-bytes': 'uint64', 'postcopy-bytes': 'uint64', - 'dirty-sync-missed-zero-copy': 'uint64' } } + 'dirty-sync-missed-zero-copy': 'uint64', + 'iteration-count' : 'int' } } ## # @XBZRLECacheStats: diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c index 9d08101643..2fb10658d4 100644 --- a/tests/qtest/migration-test.c +++ b/tests/qtest/migration-test.c @@ -278,7 +278,7 @@ static int64_t read_migrate_property_int(QTestState *who, const char *property) static uint64_t get_migration_pass(QTestState *who) { - return read_ram_property_int(who, "dirty-sync-count"); + return read_ram_property_int(who, "iteration-count"); } static void read_blocktime(QTestState *who) From patchwork Mon Sep 9 13:47:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797078 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 88978ECE579 for ; Mon, 9 Sep 2024 13:49:19 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snelD-0000vn-Hy; Mon, 09 Sep 2024 09:48:55 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1snelC-0000r5-Bh for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:48:54 -0400 Received: from mail-pg1-x533.google.com ([2607:f8b0:4864:20::533]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1snelA-0000Au-MH for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:48:54 -0400 Received: by mail-pg1-x533.google.com with SMTP id 41be03b00d2f7-7d50e865b7aso3080343a12.0 for ; Mon, 09 Sep 2024 06:48:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=smartx-com.20230601.gappssmtp.com; s=20230601; t=1725889731; x=1726494531; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=aPNIJP+CkLZPeSwpwgFasxHEV3wc4ZaUYexML1Q2BSA=; b=UKXRRcfum9ixJcpqaM9S/o5cCfs6Iz84/j2dMukmxuiAA2TioES+a8ITIf8tPvsp4T BnDwxZsmm/9N2kLuyWrwmfe4ItEeEd8RIoT2Xct0or0dI1MGnRSqB+Abe9UlrBCon8FM 1o1JRuEVNiJvSqMtmS+YhumKfzkd1vXtcWDjfNp6qr5wGMoL6qRY/qE3rgU+xU2i3Umd GI83kUA80XG8YprVXa5Xb2QC+g29Hvk9sqomVNi0Ne0cJKx6P+l/3aNlWzcmbmekDXUE 0ZD4/i/O/3v16bj78SpA4Y31p/pHIf6fbCo5ZXk2kZruXCQOYVv+5qto1NAD6SDeQ1bK BWfg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725889731; x=1726494531; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=aPNIJP+CkLZPeSwpwgFasxHEV3wc4ZaUYexML1Q2BSA=; b=CaHg1SKnRWo8gRy/jBNhyppNPrCSZ/NS2gblEB2IQQGgOyHYeYXcJlGHQrEvaRwnEr f5dFkkNuTjtJREILqmbbzm4Dz0v97SuqBY5DEUl7PGENwJAUE+1Hx23s6SOhkSmw9U+P NyQZsqqaeAgQhCBG5Fh5K3zINLpCGyO/qF09zhTiGWxurdNm5fsTWoqgalxGZsibzAyM ETECMqrrv2SUGilHahdiX9fltH7Jl41JbpnVZho6vqmmmdz+STBIPajdzgNrGYKqrBQH 
From: Hyman Huang
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster, David Hildenbrand,
    Philippe Mathieu-Daudé, Paolo Bonzini, yong.huang@smartx.com
Subject: [PATCH RFC 05/10] migration: Introduce util functions for periodic CPU throttle
Date: Mon, 9 Sep 2024 21:47:17 +0800

Provide utility functions to manage the periodic_throttle_thread's
lifecycle. Additionally, provide periodic_throttle_setup to select the
sync mode.

Signed-off-by: Hyman Huang
---
 migration/ram.c        | 98 +++++++++++++++++++++++++++++++++++++++++-
 migration/ram.h        |  4 ++
 migration/trace-events |  3 ++
 3 files changed, 104 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 23471c9e5a..d9d8ed0fda 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -416,6 +416,10 @@ struct RAMState {
      * RAM migration.
      */
     unsigned int postcopy_bmap_sync_requested;
+
+    /* Periodic throttle information */
+    bool throttle_running;
+    QemuThread throttle_thread;
 };
 typedef struct RAMState RAMState;
 
@@ -1075,7 +1079,13 @@ static void migration_bitmap_sync(RAMState *rs,
     RAMBlock *block;
     int64_t end_time;
 
-    if (!periodic) {
+    if (periodic) {
+        /* Be careful that we don't synchronize too often */
+        int64_t curr_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+        if (curr_time < rs->time_last_bitmap_sync + 1000) {
+            return;
+        }
+    } else {
         stat64_add(&mig_stats.iteration_count, 1);
     }
 
@@ -1121,6 +1131,92 @@ static void migration_bitmap_sync(RAMState *rs,
     }
 }
 
+static void *periodic_throttle_thread(void *opaque)
+{
+    RAMState *rs = opaque;
+    bool skip_sleep = false;
+    int sleep_duration = migrate_periodic_throttle_interval();
+
+    rcu_register_thread();
+
+    while (qatomic_read(&rs->throttle_running)) {
+        int64_t curr_time;
+        /*
+         * The first iteration copies all memory anyhow and has no
+         * effect on guest performance, therefore omit it to avoid
+         * paying extra for the sync penalty.
+ */ + if (stat64_get(&mig_stats.iteration_count) <= 1) { + continue; + } + + if (!skip_sleep) { + sleep(sleep_duration); + } + + /* Be careful that we don't synchronize too often */ + curr_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME); + if (curr_time > rs->time_last_bitmap_sync + 1000) { + bql_lock(); + trace_migration_periodic_throttle(); + WITH_RCU_READ_LOCK_GUARD() { + migration_bitmap_sync(rs, false, true); + } + bql_unlock(); + skip_sleep = false; + } else { + skip_sleep = true; + } + } + + rcu_unregister_thread(); + + return NULL; +} + +void periodic_throttle_start(void) +{ + RAMState *rs = ram_state; + + if (!rs) { + return; + } + + if (qatomic_read(&rs->throttle_running)) { + return; + } + + trace_migration_periodic_throttle_start(); + + qatomic_set(&rs->throttle_running, 1); + qemu_thread_create(&rs->throttle_thread, + NULL, periodic_throttle_thread, + rs, QEMU_THREAD_JOINABLE); +} + +void periodic_throttle_stop(void) +{ + RAMState *rs = ram_state; + + if (!rs) { + return; + } + + if (!qatomic_read(&rs->throttle_running)) { + return; + } + + trace_migration_periodic_throttle_stop(); + + qatomic_set(&rs->throttle_running, 0); + qemu_thread_join(&rs->throttle_thread); +} + +void periodic_throttle_setup(bool enable) +{ + sync_mode = enable ? RAMBLOCK_SYN_MODERN : RAMBLOCK_SYN_LEGACY; +} + static void migration_bitmap_sync_precopy(RAMState *rs, bool last_stage) { Error *local_err = NULL; diff --git a/migration/ram.h b/migration/ram.h index bc0318b834..f7c7b2e7ad 100644 --- a/migration/ram.h +++ b/migration/ram.h @@ -93,4 +93,8 @@ void ram_write_tracking_prepare(void); int ram_write_tracking_start(void); void ram_write_tracking_stop(void); +/* Periodic throttle */ +void periodic_throttle_start(void); +void periodic_throttle_stop(void); +void periodic_throttle_setup(bool enable); #endif diff --git a/migration/trace-events b/migration/trace-events index c65902f042..5b9db57c8f 100644 --- a/migration/trace-events +++ b/migration/trace-events @@ -95,6 +95,9 @@ get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, unsigned migration_bitmap_sync_start(void) "" migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64 migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx" +migration_periodic_throttle(void) "" +migration_periodic_throttle_start(void) "" +migration_periodic_throttle_stop(void) "" migration_throttle(void) "" migration_dirty_limit_guest(int64_t dirtyrate) "guest dirty page rate limit %" PRIi64 " MB/s" ram_discard_range(const char *rbname, uint64_t start, size_t len) "%s: start: %" PRIx64 " %zx" From patchwork Mon Sep 9 13:47:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797109 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2FE25ECE57E for ; Mon, 9 Sep 2024 13:51:43 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snelH-00017q-4L; Mon, 09 Sep 2024 09:48:59 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) 
From: Hyman Huang
To: qemu-devel@nongnu.org
Cc: Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster, David Hildenbrand,
    Philippe Mathieu-Daudé, Paolo Bonzini, yong.huang@smartx.com
Subject: [PATCH RFC 06/10] migration: Support periodic CPU throttle
Date: Mon, 9 Sep 2024 21:47:18 +0800
Message-Id: <5ee66750057034adba99696a450aa676fd0cedb3.1725889277.git.yong.huang@smartx.com>
When a VM is configured with huge memory, the current throttle logic
does not scale well: migration_trigger_throttle() is called only once
per iteration, so it may not be invoked for a long time if a single
iteration takes a long time.

The periodic sync and throttle aim to fix this by synchronizing the
remote dirty bitmap and triggering the throttle periodically. This is a
trade-off between synchronization overhead and CPU throttle impact.

Signed-off-by: Hyman Huang
---
 migration/migration.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/migration/migration.c b/migration/migration.c
index 055d527ff6..fefd93b683 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1420,6 +1420,9 @@ static void migrate_fd_cleanup(MigrationState *s)
         qemu_thread_join(&s->thread);
         s->migration_thread_running = false;
     }
+    if (migrate_periodic_throttle()) {
+        periodic_throttle_stop();
+    }
 
     bql_lock();
     multifd_send_shutdown();
@@ -3263,6 +3266,9 @@ static MigIterateState migration_iteration_run(MigrationState *s)
     if ((!pending_size || pending_size < s->threshold_size) && can_switchover) {
         trace_migration_thread_low_pending(pending_size);
+        if (migrate_periodic_throttle()) {
+            periodic_throttle_stop();
+        }
         migration_completion(s);
         return MIG_ITERATE_BREAK;
     }
@@ -3508,6 +3514,11 @@ static void *migration_thread(void *opaque)
     ret = qemu_savevm_state_setup(s->to_dst_file, &local_err);
     bql_unlock();
 
+    if (migrate_periodic_throttle()) {
+        periodic_throttle_setup(true);
+        periodic_throttle_start();
+    }
+
     qemu_savevm_wait_unplug(s, MIGRATION_STATUS_SETUP,
                             MIGRATION_STATUS_ACTIVE);
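To see why the per-iteration hook alone is too coarse, consider a
purely illustrative case: a guest with 1 TiB of RAM migrating at
roughly 10 GiB/s needs on the order of 100 s for one pass, so without
the periodic thread the dirty bitmap is re-synced and the throttle
re-evaluated only about once every 100 s. The lifecycle wired up above
reduces to the following sequence (condensed from patches 05 and 06;
not a literal excerpt, error handling omitted):

    if (migrate_periodic_throttle()) {
        periodic_throttle_setup(true);   /* switch sync_mode to RAMBLOCK_SYN_MODERN */
        periodic_throttle_start();       /* spawn periodic_throttle_thread() */
    }

    /* ... iterative RAM migration runs; the thread wakes every
     * cpu-periodic-throttle-interval seconds, takes the BQL, and calls
     * migration_bitmap_sync(rs, false, true) so auto-converge sees fresh
     * dirty-page numbers between iterations ... */

    if (migrate_periodic_throttle()) {
        periodic_throttle_stop();        /* join the thread before completion/cleanup */
    }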
N9kO4xtnJ3K7yEUkxgIwt1Mr5smbs4Cuq4Wh+FceHLe36gOBclLVJYMHF/hHvp0fRuq2 vntalflTHlxbL2ccQJMMYSczKMCblYcPw74xGeHMqwAKZkybRskiy4e7yXhL/SF7dNM6 jWUzdX3RwSPvkU9Jc2Umu71Uw3SybbCoXj9eosa8b1kD55pnGcswH7hCVTsei960+U18 E0owFhgwJm6SqhYW8cp50OMMrA6U+7j2iprGclM/QYe1bUlU5itYjjTYY1KM2f7jER/6 CcRw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725889734; x=1726494534; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=fDNjzZDuWCBsPbd7tcy7/yK9DFVnmavsqu74xEof5CQ=; b=GSUPopTM36adQwaSd/B99+hsodwYh8IgRiKNJfeosmU98FrdBShMESlVVDABK8NFxB s4DLSgvyie7nWUQ6WtMa3xYILV/Vrul+U/7Z3pxFjBWjdVr5SctxotbYcyOCMwcfJW+/ MQJrJwa9Qcy3geNClMK7PQl+VPnT6m70q7E8APEskqNOZQjgQ31NChB7H2iobhqnUZPq gjilat0HMdo3MONFB4ATfJVeaLFWuHyfCIdcRg/UjGeAdZri77wCWymgYX4d0TFObS4h 5Hns15VzqjIeReN8Mdp/OMbOOqOZSg/nDddfq91pZl0uFXFw5ojP0jVca/D4H9qpiob+ 2RgQ== X-Gm-Message-State: AOJu0YwrpXdCDMfN0lykaIr2EFbfqXfqeOQko14QIXWECQj/J2GcFdyg f7yaJFDZaSXxIO/bRpzd+aNVqgJN+5OsaK/TsnWGVAXpb97rcZljJ8AykBOs6l8zvJAIlKkVxh/ vbrDxSg== X-Google-Smtp-Source: AGHT+IFTdRMIzH1GiH5spX+qhhyuT6ezfRw/SFBJVv8ypkWM/WD1opAIgqASUIbTxg3nBRqN6tsj+w== X-Received: by 2002:a17:90a:7c04:b0:2d8:9255:396d with SMTP id 98e67ed59e1d1-2dafcd5c767mr8829709a91.0.1725889733775; Mon, 09 Sep 2024 06:48:53 -0700 (PDT) Received: from localhost.localdomain ([118.114.94.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2dab2c6b0b9sm7841031a91.0.2024.09.09.06.48.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2024 06:48:53 -0700 (PDT) From: Hyman Huang To: qemu-devel@nongnu.org Cc: Peter Xu , Fabiano Rosas , Eric Blake , Markus Armbruster , David Hildenbrand , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Paolo Bonzini , yong.huang@smartx.com Subject: [PATCH RFC 07/10] tests/migration-tests: Add test case for periodic throttle Date: Mon, 9 Sep 2024 21:47:19 +0800 Message-Id: <8903506e56f2c1d36cb83b54fe4875a1253b7691.1725889277.git.yong.huang@smartx.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::102b; envelope-from=yong.huang@smartx.com; helo=mail-pj1-x102b.google.com X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org To make sure the periodic throttle feature doesn't regress any existing features or functionality, enable it in the auto-converge migration test.
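Outside of the qtest harness, the periodic throttle added earlier in this series would presumably be enabled through the usual migration-parameter interface before the migration starts. The QMP sketch below is illustrative only; the parameter names are the ones introduced by this series, the 2-second interval is an arbitrary example, and auto-converge still has to be switched on for any throttling to take place:

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
        { "capability": "auto-converge", "state": true } ] } }
  { "execute": "migrate-set-parameters",
    "arguments": { "cpu-periodic-throttle": true,
                   "cpu-periodic-throttle-interval": 2 } }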
Signed-off-by: Hyman Huang --- tests/qtest/migration-test.c | 56 +++++++++++++++++++++++++++++++++++- 1 file changed, 55 insertions(+), 1 deletion(-) diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c index 2fb10658d4..61d7182f88 100644 --- a/tests/qtest/migration-test.c +++ b/tests/qtest/migration-test.c @@ -281,6 +281,11 @@ static uint64_t get_migration_pass(QTestState *who) return read_ram_property_int(who, "iteration-count"); } +static uint64_t get_migration_dirty_sync_count(QTestState *who) +{ + return read_ram_property_int(who, "dirty-sync-count"); +} + static void read_blocktime(QTestState *who) { QDict *rsp_return; @@ -710,6 +715,11 @@ typedef struct { PostcopyRecoveryFailStage postcopy_recovery_fail_stage; } MigrateCommon; +typedef struct { + /* CPU throttle parameters */ + bool periodic; +} AutoConvergeArgs; + static int test_migrate_start(QTestState **from, QTestState **to, const char *uri, MigrateStart *args) { @@ -2778,12 +2788,13 @@ static void test_validate_uri_channels_none_set(void) * To make things even worse, we need to run the initial stage at * 3MB/s so we enter autoconverge even when host is (over)loaded. */ -static void test_migrate_auto_converge(void) +static void test_migrate_auto_converge_args(AutoConvergeArgs *input_args) { g_autofree char *uri = g_strdup_printf("unix:%s/migsocket", tmpfs); MigrateStart args = {}; QTestState *from, *to; int64_t percentage; + bool periodic = (input_args && input_args->periodic); /* * We want the test to be stable and as fast as possible. @@ -2791,6 +2802,7 @@ static void test_migrate_auto_converge(void) * so we need to decrease a bandwidth. */ const int64_t init_pct = 5, inc_pct = 25, max_pct = 95; + const int64_t periodic_throttle_interval = 2; if (test_migrate_start(&from, &to, uri, &args)) { return; @@ -2801,6 +2813,12 @@ static void test_migrate_auto_converge(void) migrate_set_parameter_int(from, "cpu-throttle-increment", inc_pct); migrate_set_parameter_int(from, "max-cpu-throttle", max_pct); + if (periodic) { + migrate_set_parameter_bool(from, "cpu-periodic-throttle", true); + migrate_set_parameter_int(from, "cpu-periodic-throttle-interval", + periodic_throttle_interval); + } + /* * Set the initial parameters so that the migration could not converge * without throttling. @@ -2827,6 +2845,29 @@ static void test_migrate_auto_converge(void) } while (true); /* The first percentage of throttling should be at least init_pct */ g_assert_cmpint(percentage, >=, init_pct); + + if (periodic) { + /* + * Check if periodic sync take effect, set the timeout with 20s + * (max_try_count * 1s), if extra sync doesn't show up, fail test. 
+ */ + uint64_t iteration_count, dirty_sync_count; + bool extra_sync = false; + int max_try_count = 20; + + /* Check if periodic sync take effect */ + while (--max_try_count) { + usleep(1000 * 1000); + iteration_count = get_migration_pass(from); + dirty_sync_count = get_migration_dirty_sync_count(from); + if (dirty_sync_count > iteration_count) { + extra_sync = true; + break; + } + } + g_assert(extra_sync); + } + /* Now, when we tested that throttling works, let it converge */ migrate_ensure_converge(from); @@ -2849,6 +2890,17 @@ static void test_migrate_auto_converge(void) test_migrate_end(from, to, true); } +static void test_migrate_auto_converge(void) +{ + test_migrate_auto_converge_args(NULL); +} + +static void test_migrate_auto_converge_periodic_throttle(void) +{ + AutoConvergeArgs args = {.periodic = true}; + test_migrate_auto_converge_args(&args); +} + static void * test_migrate_precopy_tcp_multifd_start_common(QTestState *from, QTestState *to, @@ -3900,6 +3952,8 @@ int main(int argc, char **argv) if (g_test_slow()) { migration_test_add("/migration/auto_converge", test_migrate_auto_converge); + migration_test_add("/migration/auto_converge_periodic_throttle", + test_migrate_auto_converge_periodic_throttle); if (g_str_equal(arch, "x86_64") && has_kvm && kvm_dirty_ring_supported()) { migration_test_add("/migration/dirty_limit", From patchwork Mon Sep 9 13:47:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797077 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8F90FECE579 for ; Mon, 9 Sep 2024 13:49:14 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snelO-0001ZS-OB; Mon, 09 Sep 2024 09:49:06 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1snelM-0001PF-PP for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:49:04 -0400 Received: from mail-pj1-x1035.google.com ([2607:f8b0:4864:20::1035]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1snelK-0000Bi-S5 for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:49:04 -0400 Received: by mail-pj1-x1035.google.com with SMTP id 98e67ed59e1d1-2d89dbb60bdso2923562a91.1 for ; Mon, 09 Sep 2024 06:49:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=smartx-com.20230601.gappssmtp.com; s=20230601; t=1725889741; x=1726494541; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=r0+VMi25/Zc4CDone71m1lCGybIDfbvM1CcXIqSCA9s=; b=p26fR4YAEElT+Byzfj/pD3ukk90mrAwxNCHoLIeOXDw0FPKN7Oy6bqLgFLHtC7fSb/ CMpChNOYCpnpFvnCPCvaniTTtf87aFe/yLIO6WpRABM2IIJ1zRf6MWSm++dwahUh3nfI b0ccSi9IywJLFAQPZDLiEFFbHLYROcpwbjD/G0An62r1VzqpM7OK3FItrfAbR9d1LwNQ BxrfPSN8c/0PCkqjadU4o7oH9wIYY9/KBdTVON5ncmqnlPx5v8koGsUSlLogNruPONJG scIantOiVcNL69R9orhRippqkkPPrSx3qmca1aO/1LpnWbZ83yuBzmHXYUBhjuxQYgT6 gMaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725889741; x=1726494541; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=r0+VMi25/Zc4CDone71m1lCGybIDfbvM1CcXIqSCA9s=; b=RYgoiIhcCGkik83y3DQGlbDvKUIN1Syif4iCnjq3nol+l1/CfzWR2yXtnJySQS/A3P sd28BZSxlrsx2qqlwE7LN0UovfrGZBE4HE5kfyOnSx76ig12L1aY4fzMoD2WJKqvH3lA o+cmVsTUTQO09UerIEUUMSPqtQbGBnAM2SfFqvUoKvTo0c096rrz0lcIahaJP6ZSe6n9 quAE84SyQHtU9St1GaXJ6ufRUP7l18YM1AX8JDjAaOVs9QjMCTtJ+4L7YLDHCYIkmOD1 qw7hd20ivuejt4VFqkt/CLXOZ2ae60wNxONfDnx03rUHtaxwcoqZFHqZvoDTX9rYmSYr sCAg== X-Gm-Message-State: AOJu0YzbrMOYOJYGEyOrpOtpBrbf+6ZuLdRmXbYW7oO6Dwya9oJZhBht 1vTmzG3YgaOe6v0pjaTDQX8heA8O/tqTKhrotrhrBgeRa9/SSrSrwhtfm16HFltMNYxOhcjci6u 1+BjWvg== X-Google-Smtp-Source: AGHT+IEv0vyqNLBwnpgRZISdlcOecif4whKql3fH/CgRnN1+3UtbJcnUagKcl/aWN4VEJ+LhY92qkg== X-Received: by 2002:a17:90a:ba93:b0:2d8:a731:7db0 with SMTP id 98e67ed59e1d1-2daffe2735fmr8929016a91.35.1725889740675; Mon, 09 Sep 2024 06:49:00 -0700 (PDT) Received: from localhost.localdomain ([118.114.94.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2dab2c6b0b9sm7841031a91.0.2024.09.09.06.48.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2024 06:49:00 -0700 (PDT) From: Hyman Huang To: qemu-devel@nongnu.org Cc: Peter Xu , Fabiano Rosas , Eric Blake , Markus Armbruster , David Hildenbrand , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Paolo Bonzini , yong.huang@smartx.com Subject: [PATCH RFC 08/10] migration: Introduce cpu-responsive-throttle parameter Date: Mon, 9 Sep 2024 21:47:20 +0800 Message-Id: <24b2598916502ae298fdf6ca296e8c1559710d78.1725889277.git.yong.huang@smartx.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::1035; envelope-from=yong.huang@smartx.com; helo=mail-pj1-x1035.google.com X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org To enable the responsive throttle that will be implemented in the next commit, introduce the cpu-responsive-throttle parameter. Signed-off-by: Hyman Huang --- migration/migration-hmp-cmds.c | 8 ++++++++ migration/options.c | 20 ++++++++++++++++++++ migration/options.h | 1 + qapi/migration.json | 16 +++++++++++++++- 4 files changed, 44 insertions(+), 1 deletion(-) diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c index f7b8e06bb4..a3d4d3f62f 100644 --- a/migration/migration-hmp-cmds.c +++ b/migration/migration-hmp-cmds.c @@ -273,6 +273,10 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict) MigrationParameter_str( MIGRATION_PARAMETER_CPU_PERIODIC_THROTTLE_INTERVAL), params->cpu_periodic_throttle_interval); + assert(params->has_cpu_responsive_throttle); + monitor_printf(mon, "%s: %s\n", + MigrationParameter_str(MIGRATION_PARAMETER_CPU_RESPONSIVE_THROTTLE), + params->cpu_responsive_throttle ? 
"on" : "off"); assert(params->has_max_cpu_throttle); monitor_printf(mon, "%s: %u\n", MigrationParameter_str(MIGRATION_PARAMETER_MAX_CPU_THROTTLE), @@ -529,6 +533,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict) p->has_cpu_periodic_throttle_interval = true; visit_type_uint8(v, param, &p->cpu_periodic_throttle_interval, &err); break; + case MIGRATION_PARAMETER_CPU_RESPONSIVE_THROTTLE: + p->has_cpu_responsive_throttle = true; + visit_type_bool(v, param, &p->cpu_responsive_throttle, &err); + break; case MIGRATION_PARAMETER_MAX_CPU_THROTTLE: p->has_max_cpu_throttle = true; visit_type_uint8(v, param, &p->max_cpu_throttle, &err); diff --git a/migration/options.c b/migration/options.c index 2dbe275ba0..aa233684ee 100644 --- a/migration/options.c +++ b/migration/options.c @@ -110,6 +110,8 @@ Property migration_properties[] = { DEFINE_PROP_UINT8("x-cpu-periodic-throttle-interval", MigrationState, parameters.cpu_periodic_throttle_interval, DEFAULT_MIGRATE_CPU_PERIODIC_THROTTLE_INTERVAL), + DEFINE_PROP_BOOL("x-cpu-responsive-throttle", MigrationState, + parameters.cpu_responsive_throttle, false), DEFINE_PROP_SIZE("x-max-bandwidth", MigrationState, parameters.max_bandwidth, MAX_THROTTLE), DEFINE_PROP_SIZE("avail-switchover-bandwidth", MigrationState, @@ -715,6 +717,13 @@ bool migrate_periodic_throttle(void) return s->parameters.cpu_periodic_throttle; } +bool migrate_responsive_throttle(void) +{ + MigrationState *s = migrate_get_current(); + + return s->parameters.cpu_responsive_throttle; +} + bool migrate_cpu_throttle_tailslow(void) { MigrationState *s = migrate_get_current(); @@ -899,6 +908,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp) params->has_cpu_periodic_throttle_interval = true; params->cpu_periodic_throttle_interval = s->parameters.cpu_periodic_throttle_interval; + params->has_cpu_responsive_throttle = true; + params->cpu_responsive_throttle = s->parameters.cpu_responsive_throttle; params->tls_creds = g_strdup(s->parameters.tls_creds); params->tls_hostname = g_strdup(s->parameters.tls_hostname); params->tls_authz = g_strdup(s->parameters.tls_authz ? 
@@ -967,6 +978,7 @@ void migrate_params_init(MigrationParameters *params) params->has_cpu_throttle_tailslow = true; params->has_cpu_periodic_throttle = true; params->has_cpu_periodic_throttle_interval = true; + params->has_cpu_responsive_throttle = true; params->has_max_bandwidth = true; params->has_downtime_limit = true; params->has_x_checkpoint_delay = true; @@ -1208,6 +1220,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params, params->cpu_periodic_throttle_interval; } + if (params->has_cpu_responsive_throttle) { + dest->cpu_responsive_throttle = params->cpu_responsive_throttle; + } + if (params->tls_creds) { assert(params->tls_creds->type == QTYPE_QSTRING); dest->tls_creds = params->tls_creds->u.s; @@ -1325,6 +1341,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp) params->cpu_periodic_throttle_interval; } + if (params->has_cpu_responsive_throttle) { + s->parameters.cpu_responsive_throttle = params->cpu_responsive_throttle; + } + if (params->tls_creds) { g_free(s->parameters.tls_creds); assert(params->tls_creds->type == QTYPE_QSTRING); diff --git a/migration/options.h b/migration/options.h index efeac01470..613d675003 100644 --- a/migration/options.h +++ b/migration/options.h @@ -70,6 +70,7 @@ uint8_t migrate_cpu_throttle_increment(void); uint8_t migrate_cpu_throttle_initial(void); uint8_t migrate_periodic_throttle_interval(void); bool migrate_periodic_throttle(void); +bool migrate_responsive_throttle(void); bool migrate_cpu_throttle_tailslow(void); bool migrate_direct_io(void); uint64_t migrate_downtime_limit(void); diff --git a/qapi/migration.json b/qapi/migration.json index 6d8358c202..9f52ed1899 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -734,6 +734,10 @@ # @cpu-periodic-throttle-interval: Interval of the periodic CPU throttling. # (Since 9.1) # +# @cpu-responsive-throttle: Make CPU throttling more responsively by +# introduce an extra detection metric of +# migration convergence. (Since 9.1) +# # @tls-creds: ID of the 'tls-creds' object that provides credentials # for establishing a TLS connection over the migration data # channel. On the outgoing side of the migration, the credentials @@ -855,7 +859,7 @@ 'throttle-trigger-threshold', 'cpu-throttle-initial', 'cpu-throttle-increment', 'cpu-throttle-tailslow', 'cpu-periodic-throttle', - 'cpu-periodic-throttle-interval', + 'cpu-periodic-throttle-interval', 'cpu-responsive-throttle', 'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth', 'avail-switchover-bandwidth', 'downtime-limit', { 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] }, @@ -916,6 +920,10 @@ # @cpu-periodic-throttle-interval: Interval of the periodic CPU throttling. # (Since 9.1) # +# @cpu-responsive-throttle: Make CPU throttling more responsively by +# introduce an extra detection metric of +# migration convergence. (Since 9.1) +# # @tls-creds: ID of the 'tls-creds' object that provides credentials # for establishing a TLS connection over the migration data # channel. On the outgoing side of the migration, the credentials @@ -1045,6 +1053,7 @@ '*cpu-throttle-tailslow': 'bool', '*cpu-periodic-throttle': 'bool', '*cpu-periodic-throttle-interval': 'uint8', + '*cpu-responsive-throttle': 'bool', '*tls-creds': 'StrOrNull', '*tls-hostname': 'StrOrNull', '*tls-authz': 'StrOrNull', @@ -1132,6 +1141,10 @@ # @cpu-periodic-throttle-interval: Interval of the periodic CPU throttling. 
# (Since 9.1) # +# @cpu-responsive-throttle: Make CPU throttling more responsively by +# introduce an extra detection metric of +# migration convergence. (Since 9.1) +# # @tls-creds: ID of the 'tls-creds' object that provides credentials # for establishing a TLS connection over the migration data # channel. On the outgoing side of the migration, the credentials @@ -1254,6 +1267,7 @@ '*cpu-throttle-tailslow': 'bool', '*cpu-periodic-throttle': 'bool', '*cpu-periodic-throttle-interval': 'uint8', + '*cpu-responsive-throttle': 'bool', '*tls-creds': 'str', '*tls-hostname': 'str', '*tls-authz': 'str', From patchwork Mon Sep 9 13:47:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797083 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AD9F7ECE579 for ; Mon, 9 Sep 2024 13:50:20 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snelR-0001kx-KO; Mon, 09 Sep 2024 09:49:09 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1snelQ-0001fI-2j for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:49:08 -0400 Received: from mail-pg1-x52c.google.com ([2607:f8b0:4864:20::52c]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1snelO-0000C5-8x for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:49:07 -0400 Received: by mail-pg1-x52c.google.com with SMTP id 41be03b00d2f7-7c3e1081804so2082561a12.3 for ; Mon, 09 Sep 2024 06:49:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=smartx-com.20230601.gappssmtp.com; s=20230601; t=1725889744; x=1726494544; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Ff6vB6OP2BGQkLDtZ3odfZR0Boq6VYV5mOBp8pXlipM=; b=1Z2YyacJiJ1DrB6A9KF05NNk13payUof4bvrM5yHQ+fJw/OCmPWy9nQGgoqkxexeUR PmnG15vX1ox+h3R1KUMc7+BcSij7nncUsNrHDgUdpKdX/MjA31p0Z3oJl7SlI0HRHMdZ +xAqJG6+AqDC5z5oGZ/c9i+lJthcfTKPm9okuePflLRD1dmaLcpD7u4e7ona+KYO0VFY BFxAWC44Wq3ke3Cd4X77NKRCun4gbW1dHWhtPjdZhVHgoCkAvpHZ9vuXT2JXULittzVB zM/KuLxm4fqFjIl4trTBRBBYOzCDilc18e4dOW9qlO6sEYoMAWHF7yIeT3fvzI09lWTM IaVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725889744; x=1726494544; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Ff6vB6OP2BGQkLDtZ3odfZR0Boq6VYV5mOBp8pXlipM=; b=Slxri35jmzXK3LDZIAna/u0425jLVxuCK7VEHa+/bIyViiynF7Ck8XxfIxBdA5Ve36 Wzg/9xynzR2F/DJ0aBok+sn3mxmEvEU9+K5AADARA+MmLcD9A4WoyH4ZQUuFiI2ANZLT 9FPJNp1pW61mjV9U99ev6ZUPPGfKMkLmCvTqx6VsslDHgQ4Y+L0U+sfqd1oCNSthDbPy B7CRE8DiG1rJeMv17hKs1CsVpY9Ijq9NOqjo6kX7/7nXUG/TOzTkTSaIXIQe0H7NBqJM oMrlcMTa1e8quL6HcCbKbc+52oHdh9W/tvXQddKZ52olsF5lZjvWoYaLAFethknvsEVJ exKQ== X-Gm-Message-State: AOJu0Yx4lzxlT3pwr5ByJ19CWaGyF6EioAt8uUFN4hukSjqtpXfHx45Q UCJQWd0ybAgJfOt98VNan/gOt62PUpnrX4zIWbto/ndx0Jn5SUBFB4gH45spHvGBVCbjssH1Ive p2fvvfQ== X-Google-Smtp-Source: 
AGHT+IH6MILuGxeh8ES1f3j/FcRRnKw8LKasVvdN2f0UQVldT44bJhIKCFrwfB8j7HiyYlYPExOn6Q== X-Received: by 2002:a05:6a21:513:b0:1ce:e952:c0dd with SMTP id adf61e73a8af0-1cf2a0b5ed1mr6969929637.43.1725889743930; Mon, 09 Sep 2024 06:49:03 -0700 (PDT) Received: from localhost.localdomain ([118.114.94.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2dab2c6b0b9sm7841031a91.0.2024.09.09.06.49.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2024 06:49:03 -0700 (PDT) From: Hyman Huang To: qemu-devel@nongnu.org Cc: Peter Xu , Fabiano Rosas , Eric Blake , Markus Armbruster , David Hildenbrand , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Paolo Bonzini , yong.huang@smartx.com Subject: [PATCH RFC 09/10] migration: Support responsive CPU throttle Date: Mon, 9 Sep 2024 21:47:21 +0800 Message-Id: <641bc3ac36205cd636f16b23d7960bac9c9a8931.1725889277.git.yong.huang@smartx.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::52c; envelope-from=yong.huang@smartx.com; helo=mail-pg1-x52c.google.com X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Currently, the convergence algorithm determines that the migration cannot converge according to the following principle: the dirty pages generated in the current iteration exceed a specific percentage (throttle-trigger-threshold, 50 by default) of the number of transmissions. Let's refer to this criterion as the "dirty rate". If this criterion is met at least twice (dirty_rate_high_cnt >= 2), the throttle percentage is increased. In most cases, the above implementation is appropriate. However, for a VM under a heavy memory load, each iteration is time-consuming. The VM's computing performance may be throttled at a high percentage for a long time due to the repeated confirmation behavior, which may be intolerable for some computationally sensitive software in the VM. As the comment in the migration_trigger_throttle function mentions, the original algorithm confirms the criterion repeatedly in order to avoid erroneous detection. Put differently, the criterion does not need to be validated again once the detection is reliable enough. In this refinement, in order to make the detection more accurate, we introduce another criterion, called the "dirty ratio", to determine migration convergence. The "dirty ratio" is the ratio of bytes_dirty_period to bytes_xfer_period. When the algorithm repeatedly detects that the "dirty ratio" of the current sync is above the threshold and no lower than that of the previous one, it determines that the migration cannot converge. If either of the two criteria, the "dirty rate" or the "dirty ratio", is met, the penalty percentage is increased. This makes the CPU throttle more responsive, which shortens each iteration and therefore reduces the time during which VM performance is degraded. In conclusion, this refinement significantly reduces the time required for the throttle percentage to step to its maximum while the VM is under a high memory load.
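As a concrete illustration with made-up numbers, assume the default throttle-trigger-threshold of 50: if one sync period transfers 1 GiB while the guest dirties 700 MiB, the dirty ratio is about 70%; if the next period again transfers 1 GiB but 800 MiB are dirtied, the ratio (80%) is above the threshold and no lower than before, so the new check counts one strike, and a second consecutive strike lets the throttle be raised without waiting for the classic dirty-rate counter to reach two on its own. The migration_dirty_ratio_high() helper added below implements this bookkeeping.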
Signed-off-by: Hyman Huang --- migration/ram.c | 55 ++++++++++++++++++++++++++++++++++-- migration/trace-events | 1 + tests/qtest/migration-test.c | 1 + 3 files changed, 55 insertions(+), 2 deletions(-) diff --git a/migration/ram.c b/migration/ram.c index d9d8ed0fda..5fba572f3e 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -420,6 +420,12 @@ struct RAMState { /* Periodic throttle information */ bool throttle_running; QemuThread throttle_thread; + + /* + * Ratio of bytes_dirty_period and bytes_xfer_period in the previous + * sync. + */ + uint64_t dirty_ratio_pct; }; typedef struct RAMState RAMState; @@ -1044,6 +1050,43 @@ static void migration_dirty_limit_guest(void) trace_migration_dirty_limit_guest(quota_dirtyrate); } +static bool migration_dirty_ratio_high(RAMState *rs) +{ + static int dirty_ratio_high_cnt; + uint64_t threshold = migrate_throttle_trigger_threshold(); + uint64_t bytes_xfer_period = + migration_transferred_bytes() - rs->bytes_xfer_prev; + uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE; + bool dirty_ratio_high = false; + uint64_t prev, curr; + + /* Calculate the dirty ratio percentage */ + curr = 100 * (bytes_dirty_period * 1.0 / bytes_xfer_period); + + prev = rs->dirty_ratio_pct; + rs->dirty_ratio_pct = curr; + + if (prev == 0) { + return false; + } + + /* + * If current dirty ratio is greater than previouse, determine + * that the migration do not converge. + */ + if (curr > threshold && curr >= prev) { + trace_migration_dirty_ratio_high(curr, prev); + dirty_ratio_high_cnt++; + } + + if (dirty_ratio_high_cnt >= 2) { + dirty_ratio_high = true; + dirty_ratio_high_cnt = 0; + } + + return dirty_ratio_high; +} + static void migration_trigger_throttle(RAMState *rs) { uint64_t threshold = migrate_throttle_trigger_threshold(); @@ -1051,6 +1094,11 @@ static void migration_trigger_throttle(RAMState *rs) migration_transferred_bytes() - rs->bytes_xfer_prev; uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE; uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100; + bool dirty_ratio_high = false; + + if (migrate_responsive_throttle() && (bytes_xfer_period != 0)) { + dirty_ratio_high = migration_dirty_ratio_high(rs); + } /* * The following detection logic can be refined later. For now: @@ -1060,8 +1108,11 @@ static void migration_trigger_throttle(RAMState *rs) * twice, start or increase throttling. */ if ((bytes_dirty_period > bytes_dirty_threshold) && - (++rs->dirty_rate_high_cnt >= 2)) { - rs->dirty_rate_high_cnt = 0; + ((++rs->dirty_rate_high_cnt >= 2) || dirty_ratio_high)) { + + rs->dirty_rate_high_cnt = + rs->dirty_rate_high_cnt >= 2 ? 
0 : rs->dirty_rate_high_cnt; + if (migrate_auto_converge()) { trace_migration_throttle(); mig_throttle_guest_down(bytes_dirty_period, diff --git a/migration/trace-events b/migration/trace-events index 5b9db57c8f..241bbfcee9 100644 --- a/migration/trace-events +++ b/migration/trace-events @@ -95,6 +95,7 @@ get_queued_page_not_dirty(const char *block_name, uint64_t tmp_offset, unsigned migration_bitmap_sync_start(void) "" migration_bitmap_sync_end(uint64_t dirty_pages) "dirty_pages %" PRIu64 migration_bitmap_clear_dirty(char *str, uint64_t start, uint64_t size, unsigned long page) "rb %s start 0x%"PRIx64" size 0x%"PRIx64" page 0x%lx" +migration_dirty_ratio_high(uint64_t cur, uint64_t prev) "current ratio: %" PRIu64 " previous ratio: %" PRIu64 migration_periodic_throttle(void) "" migration_periodic_throttle_start(void) "" migration_periodic_throttle_stop(void) "" diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c index 61d7182f88..4626301435 100644 --- a/tests/qtest/migration-test.c +++ b/tests/qtest/migration-test.c @@ -2812,6 +2812,7 @@ static void test_migrate_auto_converge_args(AutoConvergeArgs *input_args) migrate_set_parameter_int(from, "cpu-throttle-initial", init_pct); migrate_set_parameter_int(from, "cpu-throttle-increment", inc_pct); migrate_set_parameter_int(from, "max-cpu-throttle", max_pct); + migrate_set_parameter_bool(from, "cpu-responsive-throttle", true); if (periodic) { migrate_set_parameter_bool(from, "cpu-periodic-throttle", true); From patchwork Mon Sep 9 13:47:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyman Huang X-Patchwork-Id: 13797110 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B35E4ECE579 for ; Mon, 9 Sep 2024 13:51:43 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1snelV-0002EL-I6; Mon, 09 Sep 2024 09:49:13 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1snelT-0001yF-Im for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:49:11 -0400 Received: from mail-pg1-x52d.google.com ([2607:f8b0:4864:20::52d]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1snelR-0000CO-Sd for qemu-devel@nongnu.org; Mon, 09 Sep 2024 09:49:11 -0400 Received: by mail-pg1-x52d.google.com with SMTP id 41be03b00d2f7-7163489149eso3490940a12.1 for ; Mon, 09 Sep 2024 06:49:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=smartx-com.20230601.gappssmtp.com; s=20230601; t=1725889748; x=1726494548; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3t+rJYL+OLNJsbkNL6rt5vBSdNx9BVxn2HZsRTYOAAc=; b=Vk5g3uuJKmWRiPeNB1SKeckjqi9j4WFAI4DbheuCZnr67K48RZXa//dR/OUgGmU9hr odVRKKD8e+F31n4y3X2q4Rx8GM85RTT9ArpuY1DKHpR+GVt0/neTZB9pQXIuQsQuXa3K hP7qZ1C7JGdZ03A7oL8ez29VH5cHJ465EhSqHuAQnNwE2GFoaBhaPrbu121g+ML6XaRV Fdee78uraHCnAdRam4fPGypuGF/EW8M7MQVdR+/GlNxRWxszVIzo+26g6i51ZDgtS7xP hEdILDHIN4ZlVWP8yGW37/m2rG7uRdzV80UgI+iIBKXRjkK9gUogSlKpFHA6uXPpogcY xNtA== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725889748; x=1726494548; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=3t+rJYL+OLNJsbkNL6rt5vBSdNx9BVxn2HZsRTYOAAc=; b=w9T53o3S4e6OoKhM0xN7NIeMxX440SYSqsVMgWd5V7XiLChEbowoEQIPKWdgBIR8y7 xP6gE8eqzfGpqzSiDQwxXepXjIfmRfH9cU3Zqyrx2MYgJVqVgjbmcgOsSibo2E7BEZ+W NPLXy303l0cxbxNpS7HIUf2E2pRmablKAmMm6IZTkeGvDMTeF2NeAdIP9vIFa0Eho5E8 AoLPqy+zN/TEwp100YJi3MviloNs3COY/qgxVyD1MM9d5O1pS1NJEogeVnHzaATksMaF M9jyAlhH8Ndfy6dDxZjy+FhE9NsClSB9mKXy6zLgb7ZlIrNubqSnXx9hRitbJXm7acZf BaDw== X-Gm-Message-State: AOJu0Ywu/r4g8IP+2H3+MXZ5XYdnBk1Z7CSMh097lElgFwNHEzkg/f2l iqXlF2lzMjj9Kok6q8TfIo76S3O3/s84MkkktwmZG5F0k50Yi5JHetY1rpKcLZiTy2bf6HhUjk7 WXpdw4g== X-Google-Smtp-Source: AGHT+IEIHouTQrQxgz/J2sEL4N1nMmDwA/0cnxC2g3gQYu8nJi+33wGAmO0AA91dNxDkkL4OiicYUQ== X-Received: by 2002:a05:6a20:cf8f:b0:1cf:39ee:f259 with SMTP id adf61e73a8af0-1cf39eef59emr5103855637.5.1725889747596; Mon, 09 Sep 2024 06:49:07 -0700 (PDT) Received: from localhost.localdomain ([118.114.94.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2dab2c6b0b9sm7841031a91.0.2024.09.09.06.49.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2024 06:49:07 -0700 (PDT) From: Hyman Huang To: qemu-devel@nongnu.org Cc: Peter Xu , Fabiano Rosas , Eric Blake , Markus Armbruster , David Hildenbrand , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Paolo Bonzini , yong.huang@smartx.com Subject: [PATCH RFC 10/10] tests/migration-tests: Add test case for responsive CPU throttle Date: Mon, 9 Sep 2024 21:47:22 +0800 Message-Id: <96eeea4efd3417212d6e2639bc118b90d4dcf926.1725889277.git.yong.huang@smartx.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::52d; envelope-from=yong.huang@smartx.com; helo=mail-pg1-x52d.google.com X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Even though the responsive CPU throttle is enabled, the dirty sync count may not always increase, because it is an optimization that may not take effect in every situation. This test case just makes sure the feature doesn't interfere with any existing functionality.
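For reference, outside the test harness the responsive throttle from the previous two patches would presumably be turned on like any other throttle parameter. The QMP sketch below is illustrative only and uses the parameter name added by this series; the auto-converge capability must still be enabled for the throttle to act:

  { "execute": "migrate-set-parameters",
    "arguments": { "cpu-responsive-throttle": true } }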
Signed-off-by: Hyman Huang --- tests/qtest/migration-test.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c index 4626301435..cf0b1dcb50 100644 --- a/tests/qtest/migration-test.c +++ b/tests/qtest/migration-test.c @@ -718,6 +718,7 @@ typedef struct { typedef struct { /* CPU throttle parameters */ bool periodic; + bool responsive; } AutoConvergeArgs; static int test_migrate_start(QTestState **from, QTestState **to, @@ -2795,6 +2796,7 @@ static void test_migrate_auto_converge_args(AutoConvergeArgs *input_args) QTestState *from, *to; int64_t percentage; bool periodic = (input_args && input_args->periodic); + bool responsive = (input_args && input_args->responsive); /* * We want the test to be stable and as fast as possible. @@ -2820,6 +2822,16 @@ static void test_migrate_auto_converge_args(AutoConvergeArgs *input_args) periodic_throttle_interval); } + if (responsive) { + /* + * The dirty-sync-count may not always go down while using responsive + * throttle because it is an optimization and may not take effect in + * any scenario. Just making sure this feature doesn't break any + * existing functionality by turning it on. + */ + migrate_set_parameter_bool(from, "cpu-responsive-throttle", true); + } + /* * Set the initial parameters so that the migration could not converge * without throttling. @@ -2902,6 +2914,12 @@ static void test_migrate_auto_converge_periodic_throttle(void) test_migrate_auto_converge_args(&args); } +static void test_migrate_auto_converge_responsive_throttle(void) +{ + AutoConvergeArgs args = {.responsive = true}; + test_migrate_auto_converge_args(&args); +} + static void * test_migrate_precopy_tcp_multifd_start_common(QTestState *from, QTestState *to, @@ -3955,6 +3973,8 @@ int main(int argc, char **argv) test_migrate_auto_converge); migration_test_add("/migration/auto_converge_periodic_throttle", test_migrate_auto_converge_periodic_throttle); + migration_test_add("/migration/auto_converge_responsive_throttle", + test_migrate_auto_converge_responsive_throttle); if (g_str_equal(arch, "x86_64") && has_kvm && kvm_dirty_ring_supported()) { migration_test_add("/migration/dirty_limit",