Message ID | 20241206224755.1108686-8-peterx@redhat.com (mailing list archive)
---|---
State | New
Series | migration/multifd: Some VFIO / postcopy preparations on flush
Peter Xu <peterx@redhat.com> writes:

> It's not straightforward to see why src QEMU needs to sync multifd during
> setup() phase.  After all, there's no page queued at that point.
>
> For old QEMUs, there's a solid reason: EOS requires it to work.  While it's
> clueless on the new QEMUs which do not take EOS message as sync requests.
>
> One will figure that out only when this is conditionally removed.  In fact,
> the author did try it out.  Logically we could still avoid doing this on
> new machine types, however that needs a separate compat field and that can
> be an overkill in some trivial overhead in setup() phase.
>
> Let's instead document it completely, to avoid someone else tries this
> again and do the debug one more time, or anyone confused on why this ever
> existed.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Fabiano Rosas <farosas@suse.de>
```diff
diff --git a/migration/ram.c b/migration/ram.c
index 5d4bdefe69..e5c590b259 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3036,6 +3036,31 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
         migration_ops->ram_save_target_page = ram_save_target_page_legacy;
     }
 
+    /*
+     * This operation is unfortunate..
+     *
+     * For legacy QEMUs using per-section sync
+     * =======================================
+     *
+     * This must exist because the EOS below requires the SYNC messages
+     * per-channel to work.
+     *
+     * For modern QEMUs using per-round sync
+     * =====================================
+     *
+     * Logically such sync is not needed, and recv threads should not run
+     * until setup ready (using things like channels_ready on src).  Then
+     * we should be all fine.
+     *
+     * However even if we add channels_ready to recv side in new QEMUs, old
+     * QEMU won't have them so this sync will still be needed to make sure
+     * multifd recv threads won't start processing guest pages early before
+     * ram_load_setup() is properly done.
+     *
+     * Let's stick with this.  Fortunately the overhead is low to sync
+     * during setup because the VM is running, so at least it's not
+     * accounted as part of downtime.
+     */
     bql_unlock();
     ret = multifd_ram_flush_and_sync(f);
     bql_lock();
```
It's not straightforward to see why src QEMU needs to sync multifd during
setup() phase.  After all, there's no page queued at that point.

For old QEMUs, there's a solid reason: EOS requires it to work.  While it's
clueless on the new QEMUs which do not take EOS message as sync requests.

One will figure that out only when this is conditionally removed.  In fact,
the author did try it out.  Logically we could still avoid doing this on
new machine types, however that needs a separate compat field and that can
be an overkill in some trivial overhead in setup() phase.

Let's instead document it completely, to avoid someone else tries this
again and do the debug one more time, or anyone confused on why this ever
existed.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/ram.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)