From patchwork Mon Feb 27 04:26:26 2023
From: Gavin Shan
To: qemu-arm@nongnu.org
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, peter.maydell@linaro.org,
    peterx@redhat.com, david@redhat.com, philmd@linaro.org, mst@redhat.com,
    cohuck@redhat.com, quintela@redhat.com, dgilbert@redhat.com,
    maz@kernel.org, zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH v2 1/4] migration: Add last stage indicator to global dirty log synchronization
Date: Mon, 27 Feb 2023 12:26:26 +0800
Message-Id: <20230227042629.339747-2-gshan@redhat.com>
In-Reply-To: <20230227042629.339747-1-gshan@redhat.com>
References: <20230227042629.339747-1-gshan@redhat.com>

The global dirty log synchronization is used when KVM and the dirty ring
are enabled. There is a particularity for ARM64, where the backup bitmap
is used to track dirty pages in non-running-vcpu situations. This means
the dirty ring works as a combination of the ring buffer and the backup
bitmap, and the dirty bits in the backup bitmap need to be collected in
the last stage of live migration.

In order to identify the last stage of live migration and pass it down,
an extra parameter is added to the relevant functions and callbacks. This
last stage indicator isn't used until the dirty ring is enabled in the
subsequent patches.

No functional change intended.

Signed-off-by: Gavin Shan
Reviewed-by: Peter Xu
Tested-by: Zhenyu Zhang
---
 accel/kvm/kvm-all.c   |  2 +-
 include/exec/memory.h |  7 +++++--
 migration/dirtyrate.c |  4 ++--
 migration/ram.c       | 20 ++++++++++----------
 softmmu/memory.c      | 10 +++++-----
 5 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 9b26582655..01a6a026af 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1554,7 +1554,7 @@ static void kvm_log_sync(MemoryListener *listener,
     kvm_slots_unlock();
 }
 
-static void kvm_log_sync_global(MemoryListener *l)
+static void kvm_log_sync_global(MemoryListener *l, bool last_stage)
 {
     KVMMemoryListener *kml = container_of(l, KVMMemoryListener, listener);
     KVMState *s = kvm_state;
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 2e602a2fad..b5463b3a7a 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -929,8 +929,11 @@ struct MemoryListener {
      * its @log_sync must be NULL. Vice versa.
      *
      * @listener: The #MemoryListener.
+     * @last_stage: The last stage to synchronize the log during migration.
+     * The caller should guarantee that the synchronization with true for
+     * @last_stage is triggered only once after all VCPUs have been stopped.
      */
-    void (*log_sync_global)(MemoryListener *listener);
+    void (*log_sync_global)(MemoryListener *listener, bool last_stage);
 
     /**
      * @log_clear:
@@ -2408,7 +2411,7 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
  *
  * Synchronizes the dirty page log for all address spaces.
  */
-void memory_global_dirty_log_sync(void);
+void memory_global_dirty_log_sync(bool last_stage);
 
 /**
  * memory_global_dirty_log_sync: synchronize the dirty log for all memory
diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index 575d48c397..da9b4a1f8d 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -101,7 +101,7 @@ void global_dirty_log_change(unsigned int flag, bool start)
 static void global_dirty_log_sync(unsigned int flag, bool one_shot)
 {
     qemu_mutex_lock_iothread();
-    memory_global_dirty_log_sync();
+    memory_global_dirty_log_sync(false);
     if (one_shot) {
         memory_global_dirty_log_stop(flag);
     }
@@ -553,7 +553,7 @@ static void calculate_dirtyrate_dirty_bitmap(struct DirtyRateConfig config)
      * skip it unconditionally and start dirty tracking
      * from 2'round of log sync
      */
-    memory_global_dirty_log_sync();
+    memory_global_dirty_log_sync(false);
 
     /*
      * reset page protect manually and unconditionally.
diff --git a/migration/ram.c b/migration/ram.c
index 96e8a19a58..22e712c8b9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1217,7 +1217,7 @@ static void migration_trigger_throttle(RAMState *rs)
     }
 }
 
-static void migration_bitmap_sync(RAMState *rs)
+static void migration_bitmap_sync(RAMState *rs, bool last_stage)
 {
     RAMBlock *block;
     int64_t end_time;
@@ -1229,7 +1229,7 @@ static void migration_bitmap_sync(RAMState *rs)
     }
 
     trace_migration_bitmap_sync_start();
-    memory_global_dirty_log_sync();
+    memory_global_dirty_log_sync(last_stage);
 
     qemu_mutex_lock(&rs->bitmap_mutex);
     WITH_RCU_READ_LOCK_GUARD() {
@@ -1263,7 +1263,7 @@ static void migration_bitmap_sync(RAMState *rs)
     }
 }
 
-static void migration_bitmap_sync_precopy(RAMState *rs)
+static void migration_bitmap_sync_precopy(RAMState *rs, bool last_stage)
 {
     Error *local_err = NULL;
 
@@ -1276,7 +1276,7 @@ static void migration_bitmap_sync_precopy(RAMState *rs)
         local_err = NULL;
     }
 
-    migration_bitmap_sync(rs);
+    migration_bitmap_sync(rs, last_stage);
 
     if (precopy_notify(PRECOPY_NOTIFY_AFTER_BITMAP_SYNC, &local_err)) {
         error_report_err(local_err);
@@ -2937,7 +2937,7 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms)
     RCU_READ_LOCK_GUARD();
 
     /* This should be our last sync, the src is now paused */
-    migration_bitmap_sync(rs);
+    migration_bitmap_sync(rs, false);
 
     /* Easiest way to make sure we don't resume in the middle of a host-page */
     rs->pss[RAM_CHANNEL_PRECOPY].last_sent_block = NULL;
@@ -3128,7 +3128,7 @@ static void ram_init_bitmaps(RAMState *rs)
         /* We don't use dirty log with background snapshots */
         if (!migrate_background_snapshot()) {
             memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION);
-            migration_bitmap_sync_precopy(rs);
+            migration_bitmap_sync_precopy(rs, false);
         }
     }
     qemu_mutex_unlock_ramlist();
@@ -3446,7 +3446,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 
     WITH_RCU_READ_LOCK_GUARD() {
         if (!migration_in_postcopy()) {
-            migration_bitmap_sync_precopy(rs);
+            migration_bitmap_sync_precopy(rs, true);
         }
 
         ram_control_before_iterate(f, RAM_CONTROL_FINISH);
@@ -3516,7 +3516,7 @@ static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
     if (!migration_in_postcopy()) {
         qemu_mutex_lock_iothread();
         WITH_RCU_READ_LOCK_GUARD() {
-            migration_bitmap_sync_precopy(rs);
+            migration_bitmap_sync_precopy(rs, false);
         }
         qemu_mutex_unlock_iothread();
         remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
@@ -3926,7 +3926,7 @@ void colo_incoming_start_dirty_log(void)
     qemu_mutex_lock_iothread();
     qemu_mutex_lock_ramlist();
 
-    memory_global_dirty_log_sync();
+    memory_global_dirty_log_sync(false);
     WITH_RCU_READ_LOCK_GUARD() {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(ram_state, block);
@@ -4217,7 +4217,7 @@ void colo_flush_ram_cache(void)
     void *src_host;
     unsigned long offset = 0;
 
-    memory_global_dirty_log_sync();
+    memory_global_dirty_log_sync(false);
     WITH_RCU_READ_LOCK_GUARD() {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             ramblock_sync_dirty_bitmap(ram_state, block);
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 9d64efca26..1cc36ef028 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -2224,7 +2224,7 @@ void memory_region_set_dirty(MemoryRegion *mr, hwaddr addr,
  * If memory region `mr' is NULL, do global sync. Otherwise, sync
  * dirty bitmap for the specified memory region.
  */
-static void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
+static void memory_region_sync_dirty_bitmap(MemoryRegion *mr, bool last_stage)
 {
     MemoryListener *listener;
     AddressSpace *as;
@@ -2254,7 +2254,7 @@ static void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
              * is to do a global sync, because we are not capable to
              * sync in a finer granularity.
              */
-            listener->log_sync_global(listener);
+            listener->log_sync_global(listener, last_stage);
             trace_memory_region_sync_dirty(mr ? mr->name : "(all)", listener->name, 1);
         }
     }
@@ -2318,7 +2318,7 @@ DirtyBitmapSnapshot *memory_region_snapshot_and_clear_dirty(MemoryRegion *mr,
 {
     DirtyBitmapSnapshot *snapshot;
     assert(mr->ram_block);
-    memory_region_sync_dirty_bitmap(mr);
+    memory_region_sync_dirty_bitmap(mr, false);
     snapshot = cpu_physical_memory_snapshot_and_clear_dirty(mr, addr, size, client);
     memory_global_after_dirty_log_sync();
     return snapshot;
@@ -2844,9 +2844,9 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
     return mr && mr != container;
 }
 
-void memory_global_dirty_log_sync(void)
+void memory_global_dirty_log_sync(bool last_stage)
 {
-    memory_region_sync_dirty_bitmap(NULL);
+    memory_region_sync_dirty_bitmap(NULL, last_stage);
 }
 
 void memory_global_after_dirty_log_sync(void)
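
To illustrate the contract that the new @last_stage parameter establishes, here is a
minimal, hypothetical MemoryListener sketch (the listener name, the registration helper
and the split of work between the two branches are illustrative only, not part of the
patch): per-round dirty collection happens on every sync, while state that can only
change while the vCPUs are stopped is collected exactly once, on the final call with
last_stage == true.

    #include "qemu/osdep.h"
    #include "exec/memory.h"
    #include "exec/address-spaces.h"

    /* Hypothetical listener following the extended log_sync_global() contract. */
    static void demo_log_sync_global(MemoryListener *listener, bool last_stage)
    {
        /* Per-round work: drain whatever dirty-tracking backend is in use. */

        if (last_stage) {
            /*
             * One-off work: collect dirty bits that are only produced in
             * non-vcpu context (e.g. a backup bitmap), now that all VCPUs
             * have been stopped.
             */
        }
    }

    static MemoryListener demo_listener = {
        .name = "demo-dirty-sync",
        .log_sync_global = demo_log_sync_global,
    };

    /* Registration against the system address space, for illustration only. */
    static void demo_register(void)
    {
        memory_listener_register(&demo_listener, &address_space_memory);
    }
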
From patchwork Mon Feb 27 04:26:27 2023
From: Gavin Shan
To: qemu-arm@nongnu.org
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, peter.maydell@linaro.org,
    peterx@redhat.com, david@redhat.com, philmd@linaro.org, mst@redhat.com,
    cohuck@redhat.com, quintela@redhat.com, dgilbert@redhat.com,
    maz@kernel.org, zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH v2 2/4] kvm: Synchronize the backup bitmap in the last stage
Date: Mon, 27 Feb 2023 12:26:27 +0800
Message-Id: <20230227042629.339747-3-gshan@redhat.com>
In-Reply-To: <20230227042629.339747-1-gshan@redhat.com>
References: <20230227042629.339747-1-gshan@redhat.com>

In the last stage of live migration, or when a memory slot is removed, the
backup bitmap needs to be synchronized if it has been enabled.

Signed-off-by: Gavin Shan
Reviewed-by: Peter Xu
Tested-by: Zhenyu Zhang
---
 accel/kvm/kvm-all.c      | 11 +++++++++++
 include/sysemu/kvm_int.h |  1 +
 2 files changed, 12 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 01a6a026af..b5e12de522 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1352,6 +1352,10 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
              */
             if (kvm_state->kvm_dirty_ring_size) {
                 kvm_dirty_ring_reap_locked(kvm_state, NULL);
+                if (kvm_state->kvm_dirty_ring_with_bitmap) {
+                    kvm_slot_sync_dirty_pages(mem);
+                    kvm_slot_get_dirty_log(kvm_state, mem);
+                }
             } else {
                 kvm_slot_get_dirty_log(kvm_state, mem);
             }
@@ -1573,6 +1577,12 @@ static void kvm_log_sync_global(MemoryListener *l, bool last_stage)
         mem = &kml->slots[i];
         if (mem->memory_size && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
             kvm_slot_sync_dirty_pages(mem);
+
+            if (s->kvm_dirty_ring_with_bitmap && last_stage &&
+                kvm_slot_get_dirty_log(s, mem)) {
+                kvm_slot_sync_dirty_pages(mem);
+            }
+
             /*
              * This is not needed by KVM_GET_DIRTY_LOG because the
              * ioctl will unconditionally overwrite the whole region.
@@ -3701,6 +3711,7 @@ static void kvm_accel_instance_init(Object *obj)
     s->kernel_irqchip_split = ON_OFF_AUTO_AUTO;
     /* KVM dirty ring is by default off */
     s->kvm_dirty_ring_size = 0;
+    s->kvm_dirty_ring_with_bitmap = false;
     s->notify_vmexit = NOTIFY_VMEXIT_OPTION_RUN;
     s->notify_window = 0;
 }
diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
index 60b520a13e..fdd5b1bde0 100644
--- a/include/sysemu/kvm_int.h
+++ b/include/sysemu/kvm_int.h
@@ -115,6 +115,7 @@ struct KVMState
     } *as;
     uint64_t kvm_dirty_ring_bytes;  /* Size of the per-vcpu dirty ring */
     uint32_t kvm_dirty_ring_size;   /* Number of dirty GFNs per ring */
+    bool kvm_dirty_ring_with_bitmap;
     struct KVMDirtyRingReaper reaper;
     NotifyVmexitOption notify_vmexit;
     uint32_t notify_window;
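
For background on the above, kvm_slot_get_dirty_log() ultimately boils down to the
conventional KVM_GET_DIRTY_LOG ioctl on the VM file descriptor; with the dirty ring
active, that ioctl is what reports pages dirtied outside of vCPU context through the
backup bitmap. A stripped-down, hypothetical sketch of that kernel interface follows
(function name, slot number, fd handling and bitmap sizing are illustrative, not
QEMU code):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Fetch the backup bitmap of one memory slot straight from KVM. */
    static int fetch_backup_bitmap(int vm_fd, unsigned int slot,
                                   void *bitmap, size_t bitmap_size)
    {
        struct kvm_dirty_log log = { .slot = slot };

        memset(bitmap, 0, bitmap_size);
        log.dirty_bitmap = bitmap;

        /*
         * Returns 0 on success; the bitmap then holds the pages dirtied
         * without a running vCPU, e.g. by the in-kernel GICv3 ITS save.
         */
        return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
    }
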
From patchwork Mon Feb 27 04:26:28 2023
From: Gavin Shan
To: qemu-arm@nongnu.org
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, peter.maydell@linaro.org,
    peterx@redhat.com, david@redhat.com, philmd@linaro.org, mst@redhat.com,
    cohuck@redhat.com, quintela@redhat.com, dgilbert@redhat.com,
    maz@kernel.org, zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH v2 3/4] kvm: Add helper kvm_dirty_ring_init()
Date: Mon, 27 Feb 2023 12:26:28 +0800
Message-Id: <20230227042629.339747-4-gshan@redhat.com>
In-Reply-To: <20230227042629.339747-1-gshan@redhat.com>
References: <20230227042629.339747-1-gshan@redhat.com>

There are multiple capabilities associated with the dirty ring on different
architectures: KVM_CAP_DIRTY_LOG_RING for x86 and KVM_CAP_DIRTY_LOG_RING_ACQ_REL
for arm64, and more work will be needed to support the dirty ring on arm64.
Add the helper kvm_dirty_ring_init() to enable the dirty ring, which makes
the code a bit cleaner.

No functional change intended.

Signed-off-by: Gavin Shan
Reviewed-by: Peter Xu
Tested-by: Zhenyu Zhang
---
 accel/kvm/kvm-all.c | 76 ++++++++++++++++++++++++++++-----------------
 1 file changed, 47 insertions(+), 29 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index b5e12de522..e5035026c9 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1453,6 +1453,50 @@ static int kvm_dirty_ring_reaper_init(KVMState *s)
     return 0;
 }
 
+static int kvm_dirty_ring_init(KVMState *s)
+{
+    uint32_t ring_size = s->kvm_dirty_ring_size;
+    uint64_t ring_bytes = ring_size * sizeof(struct kvm_dirty_gfn);
+    int ret;
+
+    s->kvm_dirty_ring_size = 0;
+    s->kvm_dirty_ring_bytes = 0;
+
+    /* Bail if the dirty ring size isn't specified */
+    if (!ring_size) {
+        return 0;
+    }
+
+    /*
+     * Read the max supported pages. Fall back to dirty logging mode
+     * if the dirty ring isn't supported.
+     */
+    ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING);
+    if (ret <= 0) {
+        warn_report("KVM dirty ring not available, using bitmap method");
+        return 0;
+    }
+
+    if (ring_bytes > ret) {
+        error_report("KVM dirty ring size %" PRIu32 " too big "
+                     "(maximum is %ld). Please use a smaller value.",
+                     ring_size, (long)ret / sizeof(struct kvm_dirty_gfn));
+        return -EINVAL;
+    }
+
+    ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0, ring_bytes);
+    if (ret) {
+        error_report("Enabling of KVM dirty ring failed: %s. "
+                     "Suggested minimum value is 1024.", strerror(-ret));
+        return -EIO;
+    }
+
+    s->kvm_dirty_ring_size = ring_size;
+    s->kvm_dirty_ring_bytes = ring_bytes;
+
+    return 0;
+}
+
 static void kvm_region_add(MemoryListener *listener,
                            MemoryRegionSection *section)
 {
@@ -2522,35 +2566,9 @@ static int kvm_init(MachineState *ms)
      * Enable KVM dirty ring if supported, otherwise fall back to
      * dirty logging mode
      */
-    if (s->kvm_dirty_ring_size > 0) {
-        uint64_t ring_bytes;
-
-        ring_bytes = s->kvm_dirty_ring_size * sizeof(struct kvm_dirty_gfn);
-
-        /* Read the max supported pages */
-        ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING);
-        if (ret > 0) {
-            if (ring_bytes > ret) {
-                error_report("KVM dirty ring size %" PRIu32 " too big "
-                             "(maximum is %ld). Please use a smaller value.",
-                             s->kvm_dirty_ring_size,
-                             (long)ret / sizeof(struct kvm_dirty_gfn));
-                ret = -EINVAL;
-                goto err;
-            }
-
-            ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0, ring_bytes);
-            if (ret) {
-                error_report("Enabling of KVM dirty ring failed: %s. "
-                             "Suggested minimum value is 1024.", strerror(-ret));
-                goto err;
-            }
-
-            s->kvm_dirty_ring_bytes = ring_bytes;
-        } else {
-            warn_report("KVM dirty ring not available, using bitmap method");
-            s->kvm_dirty_ring_size = 0;
-        }
+    ret = kvm_dirty_ring_init(s);
+    if (ret < 0) {
+        goto err;
     }
 
     /*
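
As a worked example of the sizing that kvm_dirty_ring_init() performs (the numbers
and the local struct name are illustrative): each struct kvm_dirty_gfn is 16 bytes,
so a dirty-ring-size of 4096 entries translates into a 64 KiB per-vCPU ring, which
is then compared against the byte limit returned by the capability check.

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the layout of struct kvm_dirty_gfn (flags, slot, offset). */
    struct dirty_gfn_layout {
        uint32_t flags;
        uint32_t slot;
        uint64_t offset;
    };

    int main(void)
    {
        uint32_t ring_size = 4096;   /* entries, as given by dirty-ring-size */
        uint64_t ring_bytes = ring_size * sizeof(struct dirty_gfn_layout);

        /* Prints 65536, i.e. a 64 KiB ring per vCPU. */
        printf("%u entries -> %llu bytes per vCPU ring\n",
               ring_size, (unsigned long long)ring_bytes);
        return 0;
    }
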
From patchwork Mon Feb 27 04:26:29 2023
From: Gavin Shan
To: qemu-arm@nongnu.org
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, peter.maydell@linaro.org,
    peterx@redhat.com, david@redhat.com, philmd@linaro.org, mst@redhat.com,
    cohuck@redhat.com, quintela@redhat.com, dgilbert@redhat.com,
    maz@kernel.org, zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH v2 4/4] kvm: Enable dirty ring for arm64
Date: Mon, 27 Feb 2023 12:26:29 +0800
Message-Id: <20230227042629.339747-5-gshan@redhat.com>
In-Reply-To: <20230227042629.339747-1-gshan@redhat.com>
References: <20230227042629.339747-1-gshan@redhat.com>

arm64 uses a different capability from x86 to enable the dirty ring:
KVM_CAP_DIRTY_LOG_RING_ACQ_REL. Besides, arm64 also needs the backup bitmap
extension (KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP) when the 'kvm-arm-gicv3' or
'arm-its-kvm' device is enabled. Here the extension is always enabled, which
introduces the unnecessary overhead of the last-stage dirty log
synchronization when those two devices aren't used, but that overhead should
be very small and acceptable. The benefit is that future cases where those
two devices are used are covered without modifying the code.

Signed-off-by: Gavin Shan
Reviewed-by: Juan Quintela
Tested-by: Zhenyu Zhang
---
 accel/kvm/kvm-all.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index e5035026c9..d96bca618b 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1457,6 +1457,7 @@ static int kvm_dirty_ring_init(KVMState *s)
 {
     uint32_t ring_size = s->kvm_dirty_ring_size;
     uint64_t ring_bytes = ring_size * sizeof(struct kvm_dirty_gfn);
+    unsigned int capability = KVM_CAP_DIRTY_LOG_RING;
     int ret;
 
     s->kvm_dirty_ring_size = 0;
@@ -1471,7 +1472,12 @@ static int kvm_dirty_ring_init(KVMState *s)
      * Read the max supported pages. Fall back to dirty logging mode
      * if the dirty ring isn't supported.
      */
-    ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING);
+    ret = kvm_vm_check_extension(s, capability);
+    if (ret <= 0) {
+        capability = KVM_CAP_DIRTY_LOG_RING_ACQ_REL;
+        ret = kvm_vm_check_extension(s, capability);
+    }
+
     if (ret <= 0) {
         warn_report("KVM dirty ring not available, using bitmap method");
         return 0;
@@ -1484,13 +1490,26 @@ static int kvm_dirty_ring_init(KVMState *s)
         return -EINVAL;
     }
 
-    ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0, ring_bytes);
+    ret = kvm_vm_enable_cap(s, capability, 0, ring_bytes);
     if (ret) {
         error_report("Enabling of KVM dirty ring failed: %s. "
                      "Suggested minimum value is 1024.", strerror(-ret));
         return -EIO;
     }
 
+    /* Enable the backup bitmap if it is supported */
+    ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP);
+    if (ret > 0) {
+        ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP, 0);
+        if (ret) {
+            error_report("Enabling of KVM dirty ring's backup bitmap failed: "
+                         "%s. ", strerror(-ret));
+            return -EIO;
+        }
+
+        s->kvm_dirty_ring_with_bitmap = true;
+    }
+
     s->kvm_dirty_ring_size = ring_size;
     s->kvm_dirty_ring_bytes = ring_bytes;
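
For reference, this series adds no new user-visible knob: the new path is driven by
the existing dirty-ring-size accelerator property. Assuming a host kernel that
advertises KVM_CAP_DIRTY_LOG_RING_ACQ_REL, an invocation along these lines (machine
options and ring size illustrative) would exercise the arm64 dirty ring, including
the backup bitmap when the in-kernel GICv3/ITS is used:

    qemu-system-aarch64 -M virt,gic-version=3 -accel kvm,dirty-ring-size=4096 ...
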