diff mbox series

[13/14] migration/multifd: Move header prepare/fill into send_prepare()

Message ID 20240131103111.306523-14-peterx@redhat.com (mailing list archive)
State New, archived
Series migration/multifd: Refactor ->send_prepare() and cleanups

Commit Message

Peter Xu Jan. 31, 2024, 10:31 a.m. UTC
From: Peter Xu <peterx@redhat.com>

This patch redefines the interface of ->send_prepare().  It further
simplifies multifd_send_thread(), especially on zero copy.

Now with the new interface, we require the hook to do all the work of
preparing the IOVs to send.  After it completes, the IOVs should be ready
to be dumped into the specific multifd QIOChannel later.

So now the API looks like:

  p->pages ----------->  send_prepare() -------------> IOVs

This also prepares for the case where the input can be extended beyond
p->pages.  But that's for later.
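A minimal sketch of that contract, using simplified stand-in types rather
than the real MultiFDSendParams/MultiFDPages_t (field names mirror the
originals, sizes are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

/* Hypothetical, simplified stand-ins for the real multifd structures. */
typedef struct {
    uint8_t *host;        /* base of the RAM block */
    size_t   num;         /* number of pages queued */
    size_t   offset[16];  /* per-page offsets into the block */
} Pages;

typedef struct {
    Pages       *pages;
    struct iovec iov[16];
    size_t       iovs_num;
    size_t       page_size;
} SendParams;

/*
 * Under the new contract, send_prepare() alone turns p->pages into
 * ready-to-write IOVs; the send thread no longer touches them.
 */
static int nocomp_send_prepare(SendParams *p)
{
    for (size_t i = 0; i < p->pages->num; i++) {
        p->iov[p->iovs_num].iov_base = p->pages->host + p->pages->offset[i];
        p->iov[p->iovs_num].iov_len = p->page_size;
        p->iovs_num++;
    }
    return 0;
}
```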

This patch achieves a similar goal to what Fabiano proposed here:

https://lore.kernel.org/r/20240126221943.26628-1-farosas@suse.de

However, the send() interface may not be necessary.  I'm boldly attaching a
"Co-developed-by" for Fabiano.

Co-developed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/multifd.h      |  1 +
 migration/multifd-zlib.c |  4 ++++
 migration/multifd-zstd.c |  4 ++++
 migration/multifd.c      | 45 ++++++++++++++++++++--------------------
 4 files changed, 32 insertions(+), 22 deletions(-)

Comments

Fabiano Rosas Jan. 31, 2024, 9:42 p.m. UTC | #1
peterx@redhat.com writes:

> From: Peter Xu <peterx@redhat.com>
>
> This patch redefines the interfacing of ->send_prepare().  It further
> simplifies multifd_send_thread() especially on zero copy.
>
> Now with the new interface, we require the hook to do all the work for
> preparing the IOVs to send.  After it's completed, the IOVs should be ready
> to be dumped into the specific multifd QIOChannel later.
>
> So now the API looks like:
>
>   p->pages ----------->  send_prepare() -------------> IOVs
>
> This also prepares for the case where the input can be extended to even not
> any p->pages.  But that's for later.
>
> This patch will achieve similar goal of what Fabiano used to propose here:
>
> https://lore.kernel.org/r/20240126221943.26628-1-farosas@suse.de
>
> However the send() interface may not be necessary.  I'm boldly attaching a

So should I drop send() for fixed-ram as well? Or do you still want a
separate layer just for send()?

> "Co-developed-by" for Fabiano.
>
> Co-developed-by: Fabiano Rosas <farosas@suse.de>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Peter Xu Feb. 1, 2024, 10:15 a.m. UTC | #2
On Wed, Jan 31, 2024 at 06:42:57PM -0300, Fabiano Rosas wrote:
> peterx@redhat.com writes:
> 
> > From: Peter Xu <peterx@redhat.com>
> >
> > This patch redefines the interfacing of ->send_prepare().  It further
> > simplifies multifd_send_thread() especially on zero copy.
> >
> > Now with the new interface, we require the hook to do all the work for
> > preparing the IOVs to send.  After it's completed, the IOVs should be ready
> > to be dumped into the specific multifd QIOChannel later.
> >
> > So now the API looks like:
> >
> >   p->pages ----------->  send_prepare() -------------> IOVs
> >
> > This also prepares for the case where the input can be extended to even not
> > any p->pages.  But that's for later.
> >
> > This patch will achieve similar goal of what Fabiano used to propose here:
> >
> > https://lore.kernel.org/r/20240126221943.26628-1-farosas@suse.de
> >
> > However the send() interface may not be necessary.  I'm boldly attaching a
> 
> So should I drop send() for fixed-ram as well? Or do you still want a
> separate layer just for send()?

Currently, after the whole set is applied, the IO side is pretty much like
before, and IMHO straightforward enough:

            ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num, NULL,
                                              0, p->write_flags, &local_err);
            if (ret != 0) {
                qemu_mutex_unlock(&p->mutex);
                break;
            }

IIRC with your file interface added, it could logically become something
like:

            if (use_socket()) {
                ret = qio_channel_writev_full_all(IOV, ...);
            } else {
                /*
                 * Assert "file:".  I forgot what name we discussed for this
                 * interface.. but I remember we need the ramblock, so perhaps
                 * passing "p" would work?  Then we have p->pages->ramblock,
                 * along with the IOVs, etc.
                 */
                ret = qio_channel_XXX(p, ...);
            }

            if (ret != 0) {
                qemu_mutex_unlock(&p->mutex);
                break;
            }

So it's only one way or the other.  We could even add one helper to wrap
these two.
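Such a helper might look roughly like this (a sketch with made-up names and
stubbed IO paths; multifd_send_data() and the Channel type are hypothetical,
not actual QEMU API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical channel type and per-channel params. */
typedef struct { bool is_socket; } Channel;
typedef struct {
    Channel *c;
    int last_path;  /* test hook: 1 = socket path taken, 2 = file path */
} SendParams;

/* Stubbed IO paths; real code would call qio_channel_*() here. */
static int channel_writev(SendParams *p)  { p->last_path = 1; return 0; }
static int channel_pwritev(SendParams *p) { p->last_path = 2; return 0; }

/* The single helper wrapping the two IO paths suggested above. */
static int multifd_send_data(SendParams *p)
{
    if (p->c->is_socket) {
        return channel_writev(p);
    }
    /*
     * "file:" needs the ramblock; passing the whole "p" keeps it
     * reachable (p->pages->ramblock in the real structures).
     */
    return channel_pwritev(p);
}
```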

IMHO a hook would be more helpful if there could be a bunch of "if, else if,
... else" branches, so at least three options perhaps?  But if you prefer a
hook, that'll also work for me.  So.. your call. :)

But if the send() is to exist, I hope it's a separate ops, so that the
compression accelerators can avoid worrying at all about how the data will
be dumped when they prepare their new MultiFDMethods (even though your
"file:" will need to block them all as of now, since it only supports no
compression, IIUC).
Peter Xu Feb. 2, 2024, 3:57 a.m. UTC | #3
On Wed, Jan 31, 2024 at 06:31:10PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> This patch redefines the interfacing of ->send_prepare().  It further
> simplifies multifd_send_thread() especially on zero copy.
> 
> Now with the new interface, we require the hook to do all the work for
> preparing the IOVs to send.  After it's completed, the IOVs should be ready
> to be dumped into the specific multifd QIOChannel later.
> 
> So now the API looks like:
> 
>   p->pages ----------->  send_prepare() -------------> IOVs
> 
> This also prepares for the case where the input can be extended to even not
> any p->pages.  But that's for later.
> 
> This patch will achieve similar goal of what Fabiano used to propose here:
> 
> https://lore.kernel.org/r/20240126221943.26628-1-farosas@suse.de
> 
> However the send() interface may not be necessary.  I'm boldly attaching a
> "Co-developed-by" for Fabiano.
> 
> Co-developed-by: Fabiano Rosas <farosas@suse.de>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Just a heads-up: I plan to squash something like the below into it as well.
That's mostly Fabiano's:

https://lore.kernel.org/r/20240126221943.26628-6-farosas@suse.de

But instead of overwriting write_flags in the hook, I made it a
conditional "OR", just in case we extend write_flags later in common
paths and it gets overlooked.

In short, I'll keep all the zerocopy changes together in this single patch;
hopefully that's clearer.
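The overwrite-vs-OR distinction can be sketched quickly (flag values below
are made up for illustration; only the QIO_CHANNEL_WRITE_FLAG_ZERO_COPY
name is real, and it is stubbed here):

```c
#include <assert.h>
#include <stdint.h>

enum {
    WRITE_FLAG_ZERO_COPY = 1u << 0, /* stand-in for QIO_CHANNEL_WRITE_FLAG_ZERO_COPY */
    WRITE_FLAG_FUTURE    = 1u << 1, /* imaginary flag set earlier on a common path */
};

/* Overwriting silently drops anything common code already set... */
static uint32_t set_overwrite(uint32_t flags)
{
    (void)flags;                    /* old value discarded */
    return WRITE_FLAG_ZERO_COPY;
}

/* ...while a conditional "OR" preserves it. */
static uint32_t set_or(uint32_t flags)
{
    return flags | WRITE_FLAG_ZERO_COPY;
}
```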

=====
diff --git a/migration/multifd.c b/migration/multifd.c
index cd4467aff4..6aa44340de 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -50,15 +50,15 @@ typedef struct {
 /**
  * nocomp_send_setup: setup send side
  *
- * For no compression this function does nothing.
- *
- * Returns 0 for success or -1 for error
- *
  * @p: Params for the channel that we are using
  * @errp: pointer to an error
  */
 static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
 {
+    if (migrate_zero_copy_send()) {
+        p->write_flags |= QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
+    }
+
     return 0;
 }

Patch

diff --git a/migration/multifd.h b/migration/multifd.h
index 4ec005f53f..34a2ecb9f4 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -208,6 +208,7 @@  typedef struct {
 } MultiFDMethods;
 
 void multifd_register_ops(int method, MultiFDMethods *ops);
+void multifd_send_fill_packet(MultiFDSendParams *p);
 
 static inline void multifd_send_prepare_header(MultiFDSendParams *p)
 {
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 100809abc1..012e3bdea1 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -123,6 +123,8 @@  static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
     int ret;
     uint32_t i;
 
+    multifd_send_prepare_header(p);
+
     for (i = 0; i < pages->num; i++) {
         uint32_t available = z->zbuff_len - out_size;
         int flush = Z_NO_FLUSH;
@@ -172,6 +174,8 @@  static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
     p->next_packet_size = out_size;
     p->flags |= MULTIFD_FLAG_ZLIB;
 
+    multifd_send_fill_packet(p);
+
     return 0;
 }
 
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 2023edd8cc..dc8fe43e94 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -118,6 +118,8 @@  static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
     int ret;
     uint32_t i;
 
+    multifd_send_prepare_header(p);
+
     z->out.dst = z->zbuff;
     z->out.size = z->zbuff_len;
     z->out.pos = 0;
@@ -161,6 +163,8 @@  static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
     p->next_packet_size = z->out.pos;
     p->flags |= MULTIFD_FLAG_ZSTD;
 
+    multifd_send_fill_packet(p);
+
     return 0;
 }
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 1b0035787e..0f22646f95 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -88,7 +88,17 @@  static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
  */
 static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
 {
+    bool use_zero_copy_send = migrate_zero_copy_send();
     MultiFDPages_t *pages = p->pages;
+    int ret;
+
+    if (!use_zero_copy_send) {
+        /*
+         * Only !zero_copy needs the header in IOV; zerocopy will
+         * send it separately.
+         */
+        multifd_send_prepare_header(p);
+    }
 
     for (int i = 0; i < pages->num; i++) {
         p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
@@ -98,6 +108,18 @@  static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
 
     p->next_packet_size = pages->num * p->page_size;
     p->flags |= MULTIFD_FLAG_NOCOMP;
+
+    multifd_send_fill_packet(p);
+
+    if (use_zero_copy_send) {
+        /* Send header first, without zerocopy */
+        ret = qio_channel_write_all(p->c, (void *)p->packet,
+                                    p->packet_len, errp);
+        if (ret != 0) {
+            return -1;
+        }
+    }
+
     return 0;
 }
 
@@ -266,7 +288,7 @@  static void multifd_pages_clear(MultiFDPages_t *pages)
     g_free(pages);
 }
 
-static void multifd_send_fill_packet(MultiFDSendParams *p)
+void multifd_send_fill_packet(MultiFDSendParams *p)
 {
     MultiFDPacket_t *packet = p->packet;
     MultiFDPages_t *pages = p->pages;
@@ -683,7 +705,6 @@  static void *multifd_send_thread(void *opaque)
     MigrationThread *thread = NULL;
     Error *local_err = NULL;
     int ret = 0;
-    bool use_zero_copy_send = migrate_zero_copy_send();
 
     thread = migration_threads_add(p->name, qemu_get_thread_id());
 
@@ -708,15 +729,6 @@  static void *multifd_send_thread(void *opaque)
             MultiFDPages_t *pages = p->pages;
 
             p->iovs_num = 0;
-
-            if (!use_zero_copy_send) {
-                /*
-                 * Only !zero_copy needs the header in IOV; zerocopy will
-                 * send it separately.
-                 */
-                multifd_send_prepare_header(p);
-            }
-
             assert(pages->num);
 
             ret = multifd_send_state->ops->send_prepare(p, &local_err);
@@ -725,17 +737,6 @@  static void *multifd_send_thread(void *opaque)
                 break;
             }
 
-            multifd_send_fill_packet(p);
-
-            if (use_zero_copy_send) {
-                /* Send header first, without zerocopy */
-                ret = qio_channel_write_all(p->c, (void *)p->packet,
-                                            p->packet_len, &local_err);
-                if (ret != 0) {
-                    break;
-                }
-            }
-
             ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num, NULL,
                                               0, p->write_flags, &local_err);
             if (ret != 0) {