Message ID: 20240620212111.29319-7-farosas@suse.de (mailing list archive)
State: New, archived
Series: migration/multifd: Introduce storage slots
On 6/21/2024 5:21, Fabiano Rosas wrote:
> Multifd currently has a simple scheduling mechanism that distributes
> work to the various channels by providing the client (producer) with a
> memory slot and swapping that slot with a free slot from the next idle
> channel (consumer). Or graphically:
>
> []       <-- multifd_send_state->pages
> [][][][] <-- channels' p->pages pointers
>
> 1) client fills the empty slot with data:
> [a]
> [][][][]
>
> 2) multifd_send_pages() finds an idle channel and swaps the pointers:
> [a]
> [][][][]
>  ^idle
>
> []
> [a][][][]
>
> 3) client can immediately fill new slot with more data:
> [b]
> [a][][][]
>
> 4) channel processes the data, the channel slot is now free to use
> again:
> [b]
> [][][][]
>
> This works just fine, except that it doesn't allow different types of
> payloads to be processed at the same time in different channels,
> i.e. the data type of multifd_send_state->pages needs to be the same
> as p->pages. For each new data type different from MultiFDPage_t that
> is to be handled, this logic needs to be duplicated by adding new
> fields to multifd_send_state and to the channels.
>
> The core of the issue here is that we're using the channel parameters
> (MultiFDSendParams) to hold the storage space on behalf of the multifd
> client (currently ram.c). This is cumbersome because it forces us to
> change multifd_send_pages() to check the data type being handled
> before deciding which field to use.
>
> One way to solve this is to detach the storage space from the multifd
> channel and put it somewhere else, in control of the multifd
> client. That way, multifd_send_pages() can operate on an opaque
> pointer without needing to be adapted to each new data type.
> Implement this logic with a new "slots" abstraction:
>
> struct MultiFDSendData {
>     void *opaque;
>     size_t size;
> }
>
> struct MultiFDSlots {
>     MultiFDSendData **free;  <-- what used to be p->pages
>     MultiFDSendData *active; <-- what used to be multifd_send_state->pages
> };
>
> Each multifd client now gets one set of slots to use. The slots are
> passed into multifd_send_pages() (renamed to multifd_send). The
> channels now only hold a pointer to the generic MultiFDSendData, and
> after it's processed that reference can be dropped.
>
> Or graphically:
>
> 1) client fills the active slot with data. Channels point to nothing
> at this point:
> [a]      <-- active slot
> [][][][] <-- free slots, one per-channel
>
> [][][][] <-- channels' p->data pointers
>
> 2) multifd_send() swaps the pointers inside the client slot. Channels
> still point to nothing:
> []
> [a][][][]
>
> [][][][]
>
> 3) multifd_send() finds an idle channel and updates its pointer:

It seems the action "finds an idle channel" is in step 2 rather than step 3,
which means the free slot is selected based on the id of the channel found, am I
understanding correctly?

> []
> [a][][][]
>
> [a][][][]
>  ^idle
>
> 4) a second client calls multifd_send(), but with its own slots:
> []        [b]
> [a][][][] [][][][]
>
> [a][][][]
>
> 5) multifd_send() does steps 2 and 3 again:
> []        []
> [a][][][] [][b][][]
>
> [a][b][][]
>     ^idle
>
> 6) The channels continue processing the data and lose/acquire the
> references as multifd_send() updates them. The free lists of each
> client are not affected.
>
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
>  migration/multifd.c | 119 +++++++++++++++++++++++++++++++-------------
>  migration/multifd.h |  17 +++++++
>  migration/ram.c     |   1 +
>  3 files changed, 102 insertions(+), 35 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 6fe339b378..f22a1c2e84 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -97,6 +97,30 @@ struct {
>      MultiFDMethods *ops;
>  } *multifd_recv_state;
>
> +MultiFDSlots *multifd_allocate_slots(void *(*alloc_fn)(void),
> +                                     void (*reset_fn)(void *),
> +                                     void (*cleanup_fn)(void *))
> +{
> +    int thread_count = migrate_multifd_channels();
> +    MultiFDSlots *slots = g_new0(MultiFDSlots, 1);
> +
> +    slots->active = g_new0(MultiFDSendData, 1);
> +    slots->free = g_new0(MultiFDSendData *, thread_count);
> +
> +    slots->active->opaque = alloc_fn();
> +    slots->active->reset = reset_fn;
> +    slots->active->cleanup = cleanup_fn;
> +
> +    for (int i = 0; i < thread_count; i++) {
> +        slots->free[i] = g_new0(MultiFDSendData, 1);
> +        slots->free[i]->opaque = alloc_fn();
> +        slots->free[i]->reset = reset_fn;
> +        slots->free[i]->cleanup = cleanup_fn;
> +    }
> +
> +    return slots;
> +}
> +
>  static bool multifd_use_packets(void)
>  {
>      return !migrate_mapped_ram();
> @@ -313,8 +337,10 @@ void multifd_register_ops(int method, MultiFDMethods *ops)
>  }
>
>  /* Reset a MultiFDPages_t* object for the next use */
> -static void multifd_pages_reset(MultiFDPages_t *pages)
> +static void multifd_pages_reset(void *opaque)
>  {
> +    MultiFDPages_t *pages = opaque;
> +
>      /*
>       * We don't need to touch offset[] array, because it will be
>       * overwritten later when reused.
> @@ -388,8 +414,9 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
>      return msg.id;
>  }
>
> -static MultiFDPages_t *multifd_pages_init(uint32_t n)
> +static void *multifd_pages_init(void)
>  {
> +    uint32_t n = MULTIFD_PACKET_SIZE / qemu_target_page_size();
>      MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
>
>      pages->allocated = n;
> @@ -398,13 +425,24 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
>      return pages;
>  }
>
> -static void multifd_pages_clear(MultiFDPages_t *pages)
> +static void multifd_pages_clear(void *opaque)
>  {
> +    MultiFDPages_t *pages = opaque;
> +
>      multifd_pages_reset(pages);
>      pages->allocated = 0;
>      g_free(pages->offset);
>      pages->offset = NULL;
> -    g_free(pages);
> +}
> +
> +/* TODO: move these to multifd-ram.c */
> +MultiFDSlots *multifd_ram_send_slots;
> +
> +void multifd_ram_save_setup(void)
> +{
> +    multifd_ram_send_slots = multifd_allocate_slots(multifd_pages_init,
> +                                                    multifd_pages_reset,
> +                                                    multifd_pages_clear);
> +}
>
>  static void multifd_ram_fill_packet(MultiFDSendParams *p)
> @@ -617,13 +655,12 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
>   *
>   * Returns true if succeed, false otherwise.
>   */
> -static bool multifd_send_pages(void)
> +static bool multifd_send(MultiFDSlots *slots)
>  {
>      int i;
>      static int next_channel;
>      MultiFDSendParams *p = NULL; /* make happy gcc */
> -    MultiFDPages_t *channel_pages;
> -    MultiFDSendData *data = multifd_send_state->data;
> +    MultiFDSendData *active_slot;
>
>      if (multifd_send_should_exit()) {
>          return false;
>      }
> @@ -659,11 +696,24 @@
>       */
>      smp_mb_acquire();
>
> -    channel_pages = p->data->opaque;
> -    assert(!channel_pages->num);
> +    assert(!slots->free[p->id]->size);
> +
> +    /*
> +     * Swap the slots. The client gets a free slot to fill up for the
> +     * next iteration and the channel gets the active slot for
> +     * processing.
> +     */
> +    active_slot = slots->active;
> +    slots->active = slots->free[p->id];
> +    p->data = active_slot;
> +
> +    /*
> +     * By the next time we arrive here, the channel will certainly
> +     * have consumed the active slot. Put it back on the free list
> +     * now.
> +     */
> +    slots->free[p->id] = active_slot;
>
> -    multifd_send_state->data = p->data;
> -    p->data = data;
>      /*
>       * Making sure p->data is setup before marking pending_job=true. Pairs
>       * with the qatomic_load_acquire() in multifd_send_thread().
> @@ -687,6 +737,7 @@ static inline bool multifd_queue_full(MultiFDPages_t *pages)
>  static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
>  {
>      pages->offset[pages->num++] = offset;
> +    multifd_ram_send_slots->active->size += qemu_target_page_size();
>  }
>
>  /* Returns true if enqueue successful, false otherwise */
> @@ -695,7 +746,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
>      MultiFDPages_t *pages;
>
>  retry:
> -    pages = multifd_send_state->data->opaque;
> +    pages = multifd_ram_send_slots->active->opaque;
>
>      /* If the queue is empty, we can already enqueue now */
>      if (multifd_queue_empty(pages)) {
> @@ -713,7 +764,7 @@ retry:
>       * After flush, always retry.
>       */
>      if (pages->block != block || multifd_queue_full(pages)) {
> -        if (!multifd_send_pages()) {
> +        if (!multifd_send(multifd_ram_send_slots)) {
>              return false;
>          }
>          goto retry;
>      }
> @@ -825,10 +876,12 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
>      qemu_sem_destroy(&p->sem_sync);
>      g_free(p->name);
>      p->name = NULL;
> -    multifd_pages_clear(p->data->opaque);
> -    p->data->opaque = NULL;
> -    g_free(p->data);
> -    p->data = NULL;
> +    if (p->data) {
> +        p->data->cleanup(p->data->opaque);
> +        p->data->opaque = NULL;
> +        /* p->data was not allocated by us, just clear the pointer */
> +        p->data = NULL;
> +    }
>      p->packet_len = 0;
>      g_free(p->packet);
>      p->packet = NULL;
> @@ -845,10 +898,6 @@ static void multifd_send_cleanup_state(void)
>      qemu_sem_destroy(&multifd_send_state->channels_ready);
>      g_free(multifd_send_state->params);
>      multifd_send_state->params = NULL;
> -    multifd_pages_clear(multifd_send_state->data->opaque);
> -    multifd_send_state->data->opaque = NULL;
> -    g_free(multifd_send_state->data);
> -    multifd_send_state->data = NULL;
>      g_free(multifd_send_state);
>      multifd_send_state = NULL;
>  }
> @@ -897,14 +946,13 @@ int multifd_send_sync_main(void)
>  {
>      int i;
>      bool flush_zero_copy;
> -    MultiFDPages_t *pages;
>
>      if (!migrate_multifd()) {
>          return 0;
>      }
> -    pages = multifd_send_state->data->opaque;
> -    if (pages->num) {
> -        if (!multifd_send_pages()) {
> +
> +    if (multifd_ram_send_slots->active->size) {
> +        if (!multifd_send(multifd_ram_send_slots)) {
>              error_report("%s: multifd_send_pages fail", __func__);
>              return -1;
>          }
> @@ -979,13 +1027,11 @@ static void *multifd_send_thread(void *opaque)
>
>          /*
>           * Read pending_job flag before p->data. Pairs with the
> -         * qatomic_store_release() in multifd_send_pages().
> +         * qatomic_store_release() in multifd_send().
>           */
>          if (qatomic_load_acquire(&p->pending_job)) {
> -            MultiFDPages_t *pages = p->data->opaque;
> -
>              p->iovs_num = 0;
> -            assert(pages->num);
> +            assert(p->data->size);
>
>              ret = multifd_send_state->ops->send_prepare(p, &local_err);
>              if (ret != 0) {
> @@ -1008,13 +1054,20 @@ static void *multifd_send_thread(void *opaque)
>              stat64_add(&mig_stats.multifd_bytes,
>                         p->next_packet_size + p->packet_len);
>
> -            multifd_pages_reset(pages);
>              p->next_packet_size = 0;
>
> +            /*
> +             * The data has now been sent. Since multifd_send()
> +             * already put this slot on the free list, reset the
> +             * entire slot before releasing the barrier below.
> +             */
> +            p->data->size = 0;
> +            p->data->reset(p->data->opaque);
> +
>              /*
>               * Making sure p->data is published before saying "we're
>               * free". Pairs with the smp_mb_acquire() in
> -             * multifd_send_pages().
> +             * multifd_send().
>               */
>              qatomic_store_release(&p->pending_job, false);
>          } else {
> @@ -1208,8 +1261,6 @@ bool multifd_send_setup(void)
>      thread_count = migrate_multifd_channels();
>      multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
>      multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
> -    multifd_send_state->data = g_new0(MultiFDSendData, 1);
> -    multifd_send_state->data->opaque = multifd_pages_init(page_count);
>      qemu_sem_init(&multifd_send_state->channels_created, 0);
>      qemu_sem_init(&multifd_send_state->channels_ready, 0);
>      qatomic_set(&multifd_send_state->exiting, 0);
> @@ -1221,8 +1272,6 @@ bool multifd_send_setup(void)
>          qemu_sem_init(&p->sem, 0);
>          qemu_sem_init(&p->sem_sync, 0);
>          p->id = i;
> -        p->data = g_new0(MultiFDSendData, 1);
> -        p->data->opaque = multifd_pages_init(page_count);
>
>          if (use_packets) {
>              p->packet_len = sizeof(MultiFDPacket_t)
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 2029bfd80a..5230729077 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -17,6 +17,10 @@
>
>  typedef struct MultiFDRecvData MultiFDRecvData;
>  typedef struct MultiFDSendData MultiFDSendData;
> +typedef struct MultiFDSlots MultiFDSlots;
> +
> +typedef void *(multifd_data_alloc_cb)(void);
> +typedef void (multifd_data_cleanup_cb)(void *);
>
>  bool multifd_send_setup(void);
>  void multifd_send_shutdown(void);
> @@ -93,8 +97,21 @@ struct MultiFDRecvData {
>  struct MultiFDSendData {
>      void *opaque;
>      size_t size;
> +    /* reset the slot for reuse after successful transfer */
> +    void (*reset)(void *);
> +    void (*cleanup)(void *);
>  };
>
> +struct MultiFDSlots {
> +    MultiFDSendData **free;
> +    MultiFDSendData *active;
> +};
> +
> +MultiFDSlots *multifd_allocate_slots(void *(*alloc_fn)(void),
> +                                     void (*reset_fn)(void *),
> +                                     void (*cleanup_fn)(void *));
> +void multifd_ram_save_setup(void);
> +
>  typedef struct {
>      /* Fields are only written at creating/deletion time */
>      /* No lock required for them, they are read only */
> diff --git a/migration/ram.c b/migration/ram.c
> index ceea586b06..c33a9dcf3f 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -3058,6 +3058,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
>      migration_ops = g_malloc0(sizeof(MigrationOps));
>
>      if (migrate_multifd()) {
> +        multifd_ram_save_setup();
>          migration_ops->ram_save_target_page = ram_save_target_page_multifd;
>      } else {
>          migration_ops->ram_save_target_page = ram_save_target_page_legacy;
On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
> > Or graphically:
> >
> > 1) client fills the active slot with data. Channels point to nothing
> > at this point:
> > [a]      <-- active slot
> > [][][][] <-- free slots, one per-channel
> >
> > [][][][] <-- channels' p->data pointers
> >
> > 2) multifd_send() swaps the pointers inside the client slot. Channels
> > still point to nothing:
> > []
> > [a][][][]
> >
> > [][][][]
> >
> > 3) multifd_send() finds an idle channel and updates its pointer:
>
> It seems the action "finds an idle channel" is in step 2 rather than step 3,
> which means the free slot is selected based on the id of the channel found, am I
> understanding correctly?

I think you're right.

Actually I also feel like the description here is ambiguous, even though I
think I get what Fabiano wanted to say.

The free slot should be the first step of step 2+3, here what Fabiano
really wanted to suggest is we move the free buffer array from multifd
channels into the callers, then the caller can pass in whatever data to
send.

So I think maybe it's cleaner to write it as this in code (note: I didn't
really change the code, just some ordering and comments):

===8<===
@@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
      */
     active_slot = slots->active;
     slots->active = slots->free[p->id];
-    p->data = active_slot;
-
-    /*
-     * By the next time we arrive here, the channel will certainly
-     * have consumed the active slot. Put it back on the free list
-     * now.
-     */
     slots->free[p->id] = active_slot;

+    /* Assign the current active slot to the chosen thread */
+    p->data = active_slot;
===8<===

The comment I removed is slightly misleading to me too, because right now
active_slot contains the data that hasn't yet been delivered to multifd, so
we're "putting it back to free list" not because it's free, but because
we know it won't get used until the multifd send thread consumes it
(because before that the thread will be busy, and we won't use the buffer
if so in upcoming send()s).

And then when I'm looking at this again, I think maybe it's a slight
overkill, and maybe we can still keep the "opaque data" managed by multifd.
One reason might be that I don't expect the "opaque data" payload to keep
growing at all: it should really be either RAM or device state as I
commented elsewhere in a relevant thread, after all it's a thread model
only for migration purposes to move vmstates..

Putting it managed by the multifd thread should involve less change than
this series, but it could look like this:

typedef enum {
    MULTIFD_PAYLOAD_RAM = 0,
    MULTIFD_PAYLOAD_DEVICE_STATE = 1,
} MultifdPayloadType;

typedef enum {
    MultiFDPages_t ram_payload;
    MultifdDeviceState_t device_payload;
} MultifdPayload;

struct MultiFDSendData {
    MultifdPayloadType type;
    MultifdPayload data;
};

Then the "enum" makes sure the payload consumes only the max of both
types; a side benefit to save some memory.

I think we need to make sure MultifdDeviceState_t is generic enough so that
it will work for mostly everything (especially normal VMSDs). In this case
the VFIO series should be good as that was currently defined as:

typedef struct {
    MultiFDPacketHdr_t hdr;

    char idstr[256] QEMU_NONSTRING;
    uint32_t instance_id;

    /* size of the next packet that contains the actual data */
    uint32_t next_packet_size;
} __attribute__((packed)) MultiFDPacketDeviceState_t;

IIUC that was what we need exactly with idstr+instance_id, so as to nail
exactly where the "opaque device state" should go, then load it with
a buffer-based loader when it's ready (starting from VFIO, to get rid of
qemufile). For VMSDs in the future if ever possible, that should be a
modified version of vmstate_load() where it may take buffers not qemufiles.

To Maciej: please see whether the above makes sense to you, and if you also
agree please consider that with your VFIO work.

Thanks,

> > []
> > [a][][][]
> >
> > [a][][][]
> >  ^idle
On Thu, Jun 27, 2024 at 10:40:11AM -0400, Peter Xu wrote:
> On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
> > > Or graphically:
> > >
> > > 1) client fills the active slot with data. Channels point to nothing
> > > at this point:
> > > [a]      <-- active slot
> > > [][][][] <-- free slots, one per-channel
> > >
> > > [][][][] <-- channels' p->data pointers
> > >
> > > 2) multifd_send() swaps the pointers inside the client slot. Channels
> > > still point to nothing:
> > > []
> > > [a][][][]
> > >
> > > [][][][]
> > >
> > > 3) multifd_send() finds an idle channel and updates its pointer:
> >
> > It seems the action "finds an idle channel" is in step 2 rather than step 3,
> > which means the free slot is selected based on the id of the channel found, am I
> > understanding correctly?
>
> I think you're right.
>
> Actually I also feel like the description here is ambiguous, even though I
> think I get what Fabiano wanted to say.
>
> The free slot should be the first step of step 2+3, here what Fabiano
> really wanted to suggest is we move the free buffer array from multifd
> channels into the callers, then the caller can pass in whatever data to
> send.
>
> So I think maybe it's cleaner to write it as this in code (note: I didn't
> really change the code, just some ordering and comments):
>
> ===8<===
> @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
>       */
>      active_slot = slots->active;
>      slots->active = slots->free[p->id];
> -    p->data = active_slot;
> -
> -    /*
> -     * By the next time we arrive here, the channel will certainly
> -     * have consumed the active slot. Put it back on the free list
> -     * now.
> -     */
>      slots->free[p->id] = active_slot;
>
> +    /* Assign the current active slot to the chosen thread */
> +    p->data = active_slot;
> ===8<===
>
> The comment I removed is slightly misleading to me too, because right now
> active_slot contains the data that hasn't yet been delivered to multifd, so
> we're "putting it back to free list" not because it's free, but because
> we know it won't get used until the multifd send thread consumes it
> (because before that the thread will be busy, and we won't use the buffer
> if so in upcoming send()s).
>
> And then when I'm looking at this again, I think maybe it's a slight
> overkill, and maybe we can still keep the "opaque data" managed by multifd.
> One reason might be that I don't expect the "opaque data" payload to keep
> growing at all: it should really be either RAM or device state as I
> commented elsewhere in a relevant thread, after all it's a thread model
> only for migration purposes to move vmstates..
>
> Putting it managed by the multifd thread should involve less change than
> this series, but it could look like this:
>
> typedef enum {
>     MULTIFD_PAYLOAD_RAM = 0,
>     MULTIFD_PAYLOAD_DEVICE_STATE = 1,
> } MultifdPayloadType;
>
> typedef enum {
>     MultiFDPages_t ram_payload;
>     MultifdDeviceState_t device_payload;
> } MultifdPayload;

PS: please conditionally read "enum" as "union" throughout the previous
email of mine, sorry.

[I'll leave that to readers to decide when to do the replacement..]

> struct MultiFDSendData {
>     MultifdPayloadType type;
>     MultifdPayload data;
> };
>
> Then the "enum" makes sure the payload consumes only the max of both
> types; a side benefit to save some memory.
>
> I think we need to make sure MultifdDeviceState_t is generic enough so that
> it will work for mostly everything (especially normal VMSDs). In this case
> the VFIO series should be good as that was currently defined as:
>
> typedef struct {
>     MultiFDPacketHdr_t hdr;
>
>     char idstr[256] QEMU_NONSTRING;
>     uint32_t instance_id;
>
>     /* size of the next packet that contains the actual data */
>     uint32_t next_packet_size;
> } __attribute__((packed)) MultiFDPacketDeviceState_t;
>
> IIUC that was what we need exactly with idstr+instance_id, so as to nail
> exactly where the "opaque device state" should go, then load it with
> a buffer-based loader when it's ready (starting from VFIO, to get rid of
> qemufile). For VMSDs in the future if ever possible, that should be a
> modified version of vmstate_load() where it may take buffers not qemufiles.
>
> To Maciej: please see whether the above makes sense to you, and if you also
> agree please consider that with your VFIO work.
>
> Thanks,
>
> > > []
> > > [a][][][]
> > >
> > > [a][][][]
> > >  ^idle
>
> --
> Peter Xu
Peter Xu <peterx@redhat.com> writes:

> On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
>> > Or graphically:
>> >
>> > 1) client fills the active slot with data. Channels point to nothing
>> > at this point:
>> > [a]      <-- active slot
>> > [][][][] <-- free slots, one per-channel
>> >
>> > [][][][] <-- channels' p->data pointers
>> >
>> > 2) multifd_send() swaps the pointers inside the client slot. Channels
>> > still point to nothing:
>> > []
>> > [a][][][]
>> >
>> > [][][][]
>> >
>> > 3) multifd_send() finds an idle channel and updates its pointer:
>>
>> It seems the action "finds an idle channel" is in step 2 rather than step 3,
>> which means the free slot is selected based on the id of the channel found, am I
>> understanding correctly?
>
> I think you're right.
>
> Actually I also feel like the description here is ambiguous, even though I
> think I get what Fabiano wanted to say.
>
> The free slot should be the first step of step 2+3, here what Fabiano
> really wanted to suggest is we move the free buffer array from multifd
> channels into the callers, then the caller can pass in whatever data to
> send.
>
> So I think maybe it's cleaner to write it as this in code (note: I didn't
> really change the code, just some ordering and comments):
>
> ===8<===
> @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
>       */
>      active_slot = slots->active;
>      slots->active = slots->free[p->id];
> -    p->data = active_slot;
> -
> -    /*
> -     * By the next time we arrive here, the channel will certainly
> -     * have consumed the active slot. Put it back on the free list
> -     * now.
> -     */
>      slots->free[p->id] = active_slot;
>
> +    /* Assign the current active slot to the chosen thread */
> +    p->data = active_slot;
> ===8<===
>
> The comment I removed is slightly misleading to me too, because right now
> active_slot contains the data that hasn't yet been delivered to multifd, so
> we're "putting it back to free list" not because it's free, but because
> we know it won't get used until the multifd send thread consumes it
> (because before that the thread will be busy, and we won't use the buffer
> if so in upcoming send()s).
>
> And then when I'm looking at this again, I think maybe it's a slight
> overkill, and maybe we can still keep the "opaque data" managed by multifd.
> One reason might be that I don't expect the "opaque data" payload to keep
> growing at all: it should really be either RAM or device state as I
> commented elsewhere in a relevant thread, after all it's a thread model
> only for migration purposes to move vmstates..

Some amount of flexibility needs to be baked in. For instance, what
about the handshake procedure? Don't we want to use multifd threads to
put some information on the wire for that as well?

> Putting it managed by the multifd thread should involve less change than
> this series, but it could look like this:
>
> typedef enum {
>     MULTIFD_PAYLOAD_RAM = 0,
>     MULTIFD_PAYLOAD_DEVICE_STATE = 1,
> } MultifdPayloadType;
>
> typedef enum {
>     MultiFDPages_t ram_payload;
>     MultifdDeviceState_t device_payload;
> } MultifdPayload;
>
> struct MultiFDSendData {
>     MultifdPayloadType type;
>     MultifdPayload data;
> };

Is that a union up there? So you want to simply allocate in multifd the
max amount of memory between the two types of payload? But then we'll
need a memset(p->data, 0, ...) at every round of sending to avoid giving
stale data from one client to another. That doesn't work with the
current ram migration because it wants p->pages to remain active across
several calls of multifd_queue_page().

> Then the "enum" makes sure the payload consumes only the max of both
> types; a side benefit to save some memory.
>
> I think we need to make sure MultifdDeviceState_t is generic enough so that
> it will work for mostly everything (especially normal VMSDs). In this case
> the VFIO series should be good as that was currently defined as:
>
> typedef struct {
>     MultiFDPacketHdr_t hdr;
>
>     char idstr[256] QEMU_NONSTRING;
>     uint32_t instance_id;
>
>     /* size of the next packet that contains the actual data */
>     uint32_t next_packet_size;
> } __attribute__((packed)) MultiFDPacketDeviceState_t;

This is the packet, a different thing. Not sure if your paragraph above
means to talk about that or really MultifdDeviceState, which is what is
exchanged between the multifd threads and the client code.

> IIUC that was what we need exactly with idstr+instance_id, so as to nail
> exactly where the "opaque device state" should go, then load it with
> a buffer-based loader when it's ready (starting from VFIO, to get rid of
> qemufile). For VMSDs in the future if ever possible, that should be a
> modified version of vmstate_load() where it may take buffers not qemufiles.
>
> To Maciej: please see whether the above makes sense to you, and if you also
> agree please consider that with your VFIO work.
>
> Thanks,
>
>> > []
>> > [a][][][]
>> >
>> > [a][][][]
>> >  ^idle
On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
> >> > Or graphically:
> >> >
> >> > 1) client fills the active slot with data. Channels point to nothing
> >> > at this point:
> >> > [a]      <-- active slot
> >> > [][][][] <-- free slots, one per-channel
> >> >
> >> > [][][][] <-- channels' p->data pointers
> >> >
> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels
> >> > still point to nothing:
> >> > []
> >> > [a][][][]
> >> >
> >> > [][][][]
> >> >
> >> > 3) multifd_send() finds an idle channel and updates its pointer:
> >>
> >> It seems the action "finds an idle channel" is in step 2 rather than step 3,
> >> which means the free slot is selected based on the id of the channel found, am I
> >> understanding correctly?
> >
> > I think you're right.
> >
> > Actually I also feel like the description here is ambiguous, even though I
> > think I get what Fabiano wanted to say.
> >
> > The free slot should be the first step of step 2+3, here what Fabiano
> > really wanted to suggest is we move the free buffer array from multifd
> > channels into the callers, then the caller can pass in whatever data to
> > send.
> >
> > So I think maybe it's cleaner to write it as this in code (note: I didn't
> > really change the code, just some ordering and comments):
> >
> > ===8<===
> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
> >       */
> >      active_slot = slots->active;
> >      slots->active = slots->free[p->id];
> > -    p->data = active_slot;
> > -
> > -    /*
> > -     * By the next time we arrive here, the channel will certainly
> > -     * have consumed the active slot. Put it back on the free list
> > -     * now.
> > -     */
> >      slots->free[p->id] = active_slot;
> >
> > +    /* Assign the current active slot to the chosen thread */
> > +    p->data = active_slot;
> > ===8<===
> >
> > The comment I removed is slightly misleading to me too, because right now
> > active_slot contains the data that hasn't yet been delivered to multifd, so
> > we're "putting it back to free list" not because it's free, but because
> > we know it won't get used until the multifd send thread consumes it
> > (because before that the thread will be busy, and we won't use the buffer
> > if so in upcoming send()s).
> >
> > And then when I'm looking at this again, I think maybe it's a slight
> > overkill, and maybe we can still keep the "opaque data" managed by multifd.
> > One reason might be that I don't expect the "opaque data" payload to keep
> > growing at all: it should really be either RAM or device state as I
> > commented elsewhere in a relevant thread, after all it's a thread model
> > only for migration purposes to move vmstates..
>
> Some amount of flexibility needs to be baked in. For instance, what
> about the handshake procedure? Don't we want to use multifd threads to
> put some information on the wire for that as well?

Is this an orthogonal question?

What I meant above is it looks fine to me to keep "device state" in
multifd.c, as long as it is not only about VFIO.

What you were saying seems to be about how to identify this is a device
state, then I just hope VFIO shares the same flag with any future device
that would also like to send its state via multifd, like:

#define MULTIFD_FLAG_DEVICE_STATE (32 << 1)

Then set it in MultiFDPacket_t.flags. The dest qemu should route that
packet to the device vmsd / save_entry for parsing.

> > Putting it managed by the multifd thread should involve less change than
> > this series, but it could look like this:
> >
> > typedef enum {
> >     MULTIFD_PAYLOAD_RAM = 0,
> >     MULTIFD_PAYLOAD_DEVICE_STATE = 1,
> > } MultifdPayloadType;
> >
> > typedef enum {
> >     MultiFDPages_t ram_payload;
> >     MultifdDeviceState_t device_payload;
> > } MultifdPayload;
> >
> > struct MultiFDSendData {
> >     MultifdPayloadType type;
> >     MultifdPayload data;
> > };
>
> Is that a union up there? So you want to simply allocate in multifd the

Yes.

> max amount of memory between the two types of payload? But then we'll

Yes.

> need a memset(p->data, 0, ...) at every round of sending to avoid giving
> stale data from one client to another. That doesn't work with the

I think as long as the one to enqueue will always setup the fields, we
don't need to do memset. I am not sure if it's a major concern to always
set all the relevant fields in the multifd enqueue threads. It sounds
like the thing we should always better do.

> current ram migration because it wants p->pages to remain active across
> several calls of multifd_queue_page().

I don't think I followed here.

What I meant: QEMU maintains SendData[8], now a bunch of pages arrives,
it enqueues "pages" into a free slot index 2 (set type=pages), then
before thread 2 finished sending the bunch of pages, SendData[2] will
always represent those pages without being used by anything else. What
did I miss?

> > Then the "enum" makes sure the payload consumes only the max of both
> > types; a side benefit to save some memory.
> >
> > I think we need to make sure MultifdDeviceState_t is generic enough so that
> > it will work for mostly everything (especially normal VMSDs). In this case
> > the VFIO series should be good as that was currently defined as:
> >
> > typedef struct {
> >     MultiFDPacketHdr_t hdr;
> >
> >     char idstr[256] QEMU_NONSTRING;
> >     uint32_t instance_id;
> >
> >     /* size of the next packet that contains the actual data */
> >     uint32_t next_packet_size;
> > } __attribute__((packed)) MultiFDPacketDeviceState_t;
>
> This is the packet, a different thing. Not sure if your paragraph above
> means to talk about that or really MultifdDeviceState, which is what is
> exchanged between the multifd threads and the client code.

I meant the wire protocol looks great from that POV. We may need a similar
thing for the type==device_state slots just to be generic.

> > IIUC that was what we need exactly with idstr+instance_id, so as to nail
> > exactly where the "opaque device state" should go, then load it with
> > a buffer-based loader when it's ready (starting from VFIO, to get rid of
> > qemufile). For VMSDs in the future if ever possible, that should be a
> > modified version of vmstate_load() where it may take buffers not qemufiles.
> >
> > To Maciej: please see whether the above makes sense to you, and if you also
> > agree please consider that with your VFIO work.
> >
> > Thanks,
> >
> >> > []
> >> > [a][][][]
> >> >
> >> > [a][][][]
> >> >  ^idle
>
Peter Xu <peterx@redhat.com> writes: > On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: >> Peter Xu <peterx@redhat.com> writes: >> >> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: >> >> > Or graphically: >> >> > >> >> > 1) client fills the active slot with data. Channels point to nothing >> >> > at this point: >> >> > [a] <-- active slot >> >> > [][][][] <-- free slots, one per-channel >> >> > >> >> > [][][][] <-- channels' p->data pointers >> >> > >> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels >> >> > still point to nothing: >> >> > [] >> >> > [a][][][] >> >> > >> >> > [][][][] >> >> > >> >> > 3) multifd_send() finds an idle channel and updates its pointer: >> >> >> >> It seems the action "finds an idle channel" is in step 2 rather than step 3, >> >> which means the free slot is selected based on the id of the channel found, am I >> >> understanding correctly? >> > >> > I think you're right. >> > >> > Actually I also feel like the desription here is ambiguous, even though I >> > think I get what Fabiano wanted to say. >> > >> > The free slot should be the first step of step 2+3, here what Fabiano >> > really wanted to suggest is we move the free buffer array from multifd >> > channels into the callers, then the caller can pass in whatever data to >> > send. >> > >> > So I think maybe it's cleaner to write it as this in code (note: I didn't >> > really change the code, just some ordering and comments): >> > >> > ===8<=== >> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) >> > */ >> > active_slot = slots->active; >> > slots->active = slots->free[p->id]; >> > - p->data = active_slot; >> > - >> > - /* >> > - * By the next time we arrive here, the channel will certainly >> > - * have consumed the active slot. Put it back on the free list >> > - * now. 
>> > - */ >> > slots->free[p->id] = active_slot; >> > >> > + /* Assign the current active slot to the chosen thread */ >> > + p->data = active_slot; >> > ===8<=== >> > >> > The comment I removed is slightly misleading to me too, because right now >> > active_slot contains the data hasn't yet been delivered to multifd, so >> > we're "putting it back to free list" not because of it's free, but because >> > we know it won't get used until the multifd send thread consumes it >> > (because before that the thread will be busy, and we won't use the buffer >> > if so in upcoming send()s). >> > >> > And then when I'm looking at this again, I think maybe it's a slight >> > overkill, and maybe we can still keep the "opaque data" managed by multifd. >> > One reason might be that I don't expect the "opaque data" payload keep >> > growing at all: it should really be either RAM or device state as I >> > commented elsewhere in a relevant thread, after all it's a thread model >> > only for migration purpose to move vmstates.. >> >> Some amount of flexibility needs to be baked in. For instance, what >> about the handshake procedure? Don't we want to use multifd threads to >> put some information on the wire for that as well? > > Is this an orthogonal question? I don't think so. You say the payload data should be either RAM or device state. I'm asking what other types of data do we want the multifd channel to transmit and suggesting we need to allow room for the addition of that, whatever it is. One thing that comes to mind that is neither RAM or device state is some form of handshake or capabilities negotiation. > > What I meant above is it looks fine to me to keep "device state" in > multifd.c, as long as it is not only about VFIO. 
> > What you were saying seems to be about how to identify this is a device > state, then I just hope VFIO shares the same flag with any future device > that would also like to send its state via multifd, like: > > #define MULTIFD_FLAG_DEVICE_STATE (32 << 1) > > Then set it in MultiFDPacket_t.flags. The dest qemu should route that > packet to the device vmsd / save_entry for parsing. Sure, that part I agree with, no issue here. > >> >> > Putting it managed by multifd thread should involve less change than this >> > series, but it could look like this: >> > >> > typedef enum { >> > MULTIFD_PAYLOAD_RAM = 0, >> > MULTIFD_PAYLOAD_DEVICE_STATE = 1, >> > } MultifdPayloadType; >> > >> > typedef enum { >> > MultiFDPages_t ram_payload; >> > MultifdDeviceState_t device_payload; >> > } MultifdPayload; >> > >> > struct MultiFDSendData { >> > MultifdPayloadType type; >> > MultifdPayload data; >> > }; >> >> Is that an union up there? So you want to simply allocate in multifd the > > Yes. > >> max amount of memory between the two types of payload? But then we'll > > Yes. > >> need a memset(p->data, 0, ...) at every round of sending to avoid giving >> stale data from one client to another. That doesn't work with the > > I think as long as the one to enqueue will always setup the fields, we > don't need to do memset. I am not sure if it's a major concern to always > set all the relevant fields in the multifd enqueue threads. It sounds like > the thing we should always better do. Well, writing to a region of memory that was "owned" by another multifd client and already has a bunch of data there is somewhat prone to bugs. Just forget to set something and now things start to behave weirdly. I guess that's just the price of having an union. I'm not against that, but I would maybe prefer to have each client hold its own data and not have to think about anything else. Much of this feeling comes from how the RAM code currently works (more on that below). 
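Peter's suggestion that no memset is needed if the enqueuer always sets all the fields can be illustrated concretely; whether that discipline is robust enough is exactly the disagreement here. In the sketch below (payload structs and field names invented), assigning a whole compound literal rewrites every field of the active union member, so stale bytes from a previous client cannot leak through that member:

```c
#include <assert.h>

typedef struct { int num_pages; long offset; } PagesPayload;
typedef struct { int instance_id; int buf_len; } DeviceStatePayload;

typedef enum { PAYLOAD_RAM, PAYLOAD_DEVICE_STATE } PayloadType;

typedef struct {
    PayloadType type;
    union {
        PagesPayload ram;
        DeviceStatePayload dev;
    } u;
} SendData;

/* Assigning a complete compound literal (rather than individual fields)
 * guarantees the entire member is initialized, including any field the
 * author might otherwise forget to set. */
void enqueue_device_state(SendData *sd, int instance_id, int buf_len)
{
    sd->type = PAYLOAD_DEVICE_STATE;
    sd->u.dev = (DeviceStatePayload){ .instance_id = instance_id,
                                      .buf_len = buf_len };
}
```

Setting fields one by one, in contrast, is where the "forget to set something" bug class lives.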
> >> current ram migration because it wants p->pages to remain active across >> several calls of multifd_queue_page(). > > I don't think I followed here. > > What I meant: QEMU maintains SendData[8], now a bunch of pages arrives, it > enqueues "pages" into a free slot index 2 (set type=pages), then before > thread 2 finished sending the bunch of pages, SendData[2] will always > represent those pages without being used by anything else. What did I miss? > You're missing multifd_send_state->pages and the fact that it holds unsent data on behalf of the client. At every call to multifd_queue_pages(), the RAM code expects the previously filled pages structure to be there. Since we intend to have more than one multifd client, now the other client (say device state) might run, it will take that slot and fill it with it's own stuff (or rather fill p->send_data and multifd_send_pages() switches the pointer). Next call to multifd_queue_pages(), it will take multifd_send_state->pages and there'll be garbage there. The code is not: take a free slot from the next idle channel and fill it with data. It is: take from multifd_send_state the active slot which *might* have previously been consumed by the last channel and (continue to) fill it with data. "might", because successive calls to multifd_queue_page() don't need to call multifd_send_page() to flush to the channel. >> >> > >> > Then the "enum" makes sure the payload only consumes only the max of both >> > types; a side benefit to save some memory. >> > >> > I think we need to make sure MultifdDeviceState_t is generic enough so that >> > it will work for mostly everything (especially normal VMSDs). 
In this case >> > the VFIO series should be good as that was currently defined as: >> > >> > typedef struct { >> > MultiFDPacketHdr_t hdr; >> > >> > char idstr[256] QEMU_NONSTRING; >> > uint32_t instance_id; >> > >> > /* size of the next packet that contains the actual data */ >> > uint32_t next_packet_size; >> > } __attribute__((packed)) MultiFDPacketDeviceState_t; >> >> This is the packet, a different thing. Not sure if your paragraph above >> means to talk about that or really MultifdDeviceState, which is what is >> exchanged between the multifd threads and the client code. > > I meant the wire protocol looks great from that POV. We may need similar > thing for the type==device_state slots just to be generic. > >> >> > >> > IIUC that was what we need exactly with idstr+instance_id, so as to nail >> > exactly at where should the "opaque device state" go to, then load it with >> > a buffer-based loader when it's ready (starting from VFIO, to get rid of >> > qemufile). For VMSDs in the future if ever possible, that should be a >> > modified version of vmstate_load() where it may take buffers not qemufiles. >> > >> > To Maciej: please see whether above makes sense to you, and if you also >> > agree please consider that with your VFIO work. >> > >> > Thanks, >> > >> >> >> >> > [] >> >> > [a][][][] >> >> > >> >> > [a][][][] >> >> > ^idle >>
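Peter's reordered hunk, expanded into a self-contained function for clarity. The types here are simplified stand-ins, and the real multifd_send() also selects the idle channel and kicks its semaphore around this swap:

```c
#include <assert.h>

enum { N_CHANNELS = 4 };

typedef struct { void *opaque; } MultiFDSendData;

typedef struct {
    MultiFDSendData *active;              /* slot the client fills */
    MultiFDSendData *free[N_CHANNELS];    /* one spare per channel */
} MultiFDSlots;

typedef struct {
    int id;
    MultiFDSendData *data;
} MultiFDSendParams;

/* Hand the filled active slot to channel p; the channel's spare becomes
 * the new active slot so the client can keep filling immediately. The
 * old active slot is parked in free[p->id]: not actually free yet, but
 * it won't be handed out again before that channel goes idle. */
void assign_slot(MultiFDSlots *slots, MultiFDSendParams *p)
{
    MultiFDSendData *active_slot = slots->active;

    slots->active = slots->free[p->id];
    slots->free[p->id] = active_slot;

    /* Assign the current active slot to the chosen thread. */
    p->data = active_slot;
}
```

Note that nothing is copied: the whole mechanism is three pointer assignments per enqueue.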
On Wed, Jul 10, 2024 at 05:16:36PM -0300, Fabiano Rosas wrote: > Peter Xu <peterx@redhat.com> writes: > > > On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: > >> Peter Xu <peterx@redhat.com> writes: > >> > >> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: > >> >> > Or graphically: > >> >> > > >> >> > 1) client fills the active slot with data. Channels point to nothing > >> >> > at this point: > >> >> > [a] <-- active slot > >> >> > [][][][] <-- free slots, one per-channel > >> >> > > >> >> > [][][][] <-- channels' p->data pointers > >> >> > > >> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels > >> >> > still point to nothing: > >> >> > [] > >> >> > [a][][][] > >> >> > > >> >> > [][][][] > >> >> > > >> >> > 3) multifd_send() finds an idle channel and updates its pointer: > >> >> > >> >> It seems the action "finds an idle channel" is in step 2 rather than step 3, > >> >> which means the free slot is selected based on the id of the channel found, am I > >> >> understanding correctly? > >> > > >> > I think you're right. > >> > > >> > Actually I also feel like the desription here is ambiguous, even though I > >> > think I get what Fabiano wanted to say. > >> > > >> > The free slot should be the first step of step 2+3, here what Fabiano > >> > really wanted to suggest is we move the free buffer array from multifd > >> > channels into the callers, then the caller can pass in whatever data to > >> > send. > >> > > >> > So I think maybe it's cleaner to write it as this in code (note: I didn't > >> > really change the code, just some ordering and comments): > >> > > >> > ===8<=== > >> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) > >> > */ > >> > active_slot = slots->active; > >> > slots->active = slots->free[p->id]; > >> > - p->data = active_slot; > >> > - > >> > - /* > >> > - * By the next time we arrive here, the channel will certainly > >> > - * have consumed the active slot. 
Put it back on the free list > >> > - * now. > >> > - */ > >> > slots->free[p->id] = active_slot; > >> > > >> > + /* Assign the current active slot to the chosen thread */ > >> > + p->data = active_slot; > >> > ===8<=== > >> > > >> > The comment I removed is slightly misleading to me too, because right now > >> > active_slot contains the data hasn't yet been delivered to multifd, so > >> > we're "putting it back to free list" not because of it's free, but because > >> > we know it won't get used until the multifd send thread consumes it > >> > (because before that the thread will be busy, and we won't use the buffer > >> > if so in upcoming send()s). > >> > > >> > And then when I'm looking at this again, I think maybe it's a slight > >> > overkill, and maybe we can still keep the "opaque data" managed by multifd. > >> > One reason might be that I don't expect the "opaque data" payload keep > >> > growing at all: it should really be either RAM or device state as I > >> > commented elsewhere in a relevant thread, after all it's a thread model > >> > only for migration purpose to move vmstates.. > >> > >> Some amount of flexibility needs to be baked in. For instance, what > >> about the handshake procedure? Don't we want to use multifd threads to > >> put some information on the wire for that as well? > > > > Is this an orthogonal question? > > I don't think so. You say the payload data should be either RAM or > device state. I'm asking what other types of data do we want the multifd > channel to transmit and suggesting we need to allow room for the > addition of that, whatever it is. One thing that comes to mind that is > neither RAM or device state is some form of handshake or capabilities > negotiation. Indeed what I thought was multifd payload should be either ram or device, nothing else. The worst case is we can add one more into the union, but I can't think of. I wonder why handshake needs to be done per-thread. 
I was naturally thinking the handshake should happen sequentially, talking over everything including multifd. IMO multifd to have these threads are mostly for the sake of performance. I sometimes think we have some tiny places where we "over-engineered" multifd, e.g. on attaching ZLIB/ZSTD/... flags on each packet header, even if they should never change, and that is the part of thing we can put into handshake too, and after handshake we should assume both sides and all threads are in sync. There's no need to worry compressor per-packet, per-channel. It could be a global thing and done upfront, even if Libvirt didn't guarantee those. > > > > > What I meant above is it looks fine to me to keep "device state" in > > multifd.c, as long as it is not only about VFIO. > > > > What you were saying seems to be about how to identify this is a device > > state, then I just hope VFIO shares the same flag with any future device > > that would also like to send its state via multifd, like: > > > > #define MULTIFD_FLAG_DEVICE_STATE (32 << 1) > > > > Then set it in MultiFDPacket_t.flags. The dest qemu should route that > > packet to the device vmsd / save_entry for parsing. > > Sure, that part I agree with, no issue here. > > > > >> > >> > Putting it managed by multifd thread should involve less change than this > >> > series, but it could look like this: > >> > > >> > typedef enum { > >> > MULTIFD_PAYLOAD_RAM = 0, > >> > MULTIFD_PAYLOAD_DEVICE_STATE = 1, > >> > } MultifdPayloadType; > >> > > >> > typedef enum { > >> > MultiFDPages_t ram_payload; > >> > MultifdDeviceState_t device_payload; > >> > } MultifdPayload; > >> > > >> > struct MultiFDSendData { > >> > MultifdPayloadType type; > >> > MultifdPayload data; > >> > }; > >> > >> Is that an union up there? So you want to simply allocate in multifd the > > > > Yes. > > > >> max amount of memory between the two types of payload? But then we'll > > > > Yes. > > > >> need a memset(p->data, 0, ...) 
at every round of sending to avoid giving > >> stale data from one client to another. That doesn't work with the > > > > I think as long as the one to enqueue will always setup the fields, we > > don't need to do memset. I am not sure if it's a major concern to always > > set all the relevant fields in the multifd enqueue threads. It sounds like > > the thing we should always better do. > > Well, writing to a region of memory that was "owned" by another multifd > client and already has a bunch of data there is somewhat prone to > bugs. Just forget to set something and now things start to behave > weirdly. I guess that's just the price of having an union. I'm not > against that, but I would maybe prefer to have each client hold its own > data and not have to think about anything else. Much of this feeling > comes from how the RAM code currently works (more on that below). IIUC so far not using union is fine. However in the future what if we extend device states to vcpus? In that case the vcpu will need to have its own array of SendData[], even though it will share the same structure with what VFIO would use. If with an union, and if we can settle it'll be either PAGES or DEV_STATE, then when vcpu state is involved, vcpu will simply enqueue the same DEV_STATE. One thing to mention is that when with an union we may probably need to get rid of multifd_send_state->pages already. The object can't be a global cache (in which case so far it's N+1, N being n_multifd_channels, while "1" is the extra buffer as only RAM uses it). In the union world we'll need to allocate M+N SendData, where N is still the n_multifd_channels, and M is the number of users, in VFIO's case, VFIO allocates the cached SendData and use that to enqueue, right after enqueue it'll get a free one by switching it with another one in the multifd's array[N]. Same to RAM. Then there'll be N+2 SendData and VFIO/RAM needs to free their own SendData when cleanup (multifd owns the N per-thread only). 
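The M+N ownership described above can be sketched as follows (all names hypothetical): multifd allocates one SendData per channel, each user allocates exactly one cache of its own, and enqueueing is a pointer swap, so the total stays at N plus the number of users and each side frees only what it owns:

```c
#include <assert.h>
#include <stdlib.h>

enum { N_CHANNELS = 4 };

typedef struct { int type; } SendData;

static SendData *channel_slot[N_CHANNELS];   /* the N owned by multifd */

void multifd_slots_init(void)
{
    for (int i = 0; i < N_CHANNELS; i++) {
        channel_slot[i] = calloc(1, sizeof(SendData));
    }
}

/* Each user (RAM, VFIO, a future vcpu-state sender, ...) allocates and
 * later frees exactly one cache of its own. */
SendData *user_cache_new(void)
{
    return calloc(1, sizeof(SendData));
}

/* Enqueue: channel `id` takes the user's filled cache; the user takes
 * that channel's slot back as its new empty cache. No allocation, no
 * copy, totals unchanged. */
SendData *enqueue_swap(SendData *filled, int id)
{
    SendData *spare = channel_slot[id];
    channel_slot[id] = filled;
    return spare;
}
```

With this shape there is no shared `multifd_send_state->pages` cache at all; whatever a user was filling always lives in the user's own pointer.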
> > > > >> current ram migration because it wants p->pages to remain active across > >> several calls of multifd_queue_page(). > > > > I don't think I followed here. > > > > What I meant: QEMU maintains SendData[8], now a bunch of pages arrives, it > > enqueues "pages" into a free slot index 2 (set type=pages), then before > > thread 2 finished sending the bunch of pages, SendData[2] will always > > represent those pages without being used by anything else. What did I miss? > > > > You're missing multifd_send_state->pages and the fact that it holds > unsent data on behalf of the client. At every call to > multifd_queue_pages(), the RAM code expects the previously filled pages > structure to be there. Since we intend to have more than one multifd > client, now the other client (say device state) might run, it will take > that slot and fill it with it's own stuff (or rather fill p->send_data > and multifd_send_pages() switches the pointer). Next call to > multifd_queue_pages(), it will take multifd_send_state->pages and > there'll be garbage there. Please see above to see whether that may answer here too; in general I think we need to drop multifd_send_state->pages, but let SendData caches be managed by the users, so instead of multifd managing N+1 SendData, it only manages N, leaving the rest to possible users of multifd. It also means it's a must any multifd enqueue user will first provide a valid cache object of SendData. > > The code is not: take a free slot from the next idle channel and fill it > with data. > > It is: take from multifd_send_state the active slot which *might* have > previously been consumed by the last channel and (continue to) fill it > with data. > > "might", because successive calls to multifd_queue_page() don't need to > call multifd_send_page() to flush to the channel. > > >> > >> > > >> > Then the "enum" makes sure the payload only consumes only the max of both > >> > types; a side benefit to save some memory. 
> >> > > >> > I think we need to make sure MultifdDeviceState_t is generic enough so that > >> > it will work for mostly everything (especially normal VMSDs). In this case > >> > the VFIO series should be good as that was currently defined as: > >> > > >> > typedef struct { > >> > MultiFDPacketHdr_t hdr; > >> > > >> > char idstr[256] QEMU_NONSTRING; > >> > uint32_t instance_id; > >> > > >> > /* size of the next packet that contains the actual data */ > >> > uint32_t next_packet_size; > >> > } __attribute__((packed)) MultiFDPacketDeviceState_t; > >> > >> This is the packet, a different thing. Not sure if your paragraph above > >> means to talk about that or really MultifdDeviceState, which is what is > >> exchanged between the multifd threads and the client code. > > > > I meant the wire protocol looks great from that POV. We may need similar > > thing for the type==device_state slots just to be generic. > > > >> > >> > > >> > IIUC that was what we need exactly with idstr+instance_id, so as to nail > >> > exactly at where should the "opaque device state" go to, then load it with > >> > a buffer-based loader when it's ready (starting from VFIO, to get rid of > >> > qemufile). For VMSDs in the future if ever possible, that should be a > >> > modified version of vmstate_load() where it may take buffers not qemufiles. > >> > > >> > To Maciej: please see whether above makes sense to you, and if you also > >> > agree please consider that with your VFIO work. > >> > > >> > Thanks, > >> > > >> >> > >> >> > [] > >> >> > [a][][][] > >> >> > > >> >> > [a][][][] > >> >> > ^idle > >> >
Peter Xu <peterx@redhat.com> writes: > On Wed, Jul 10, 2024 at 05:16:36PM -0300, Fabiano Rosas wrote: >> Peter Xu <peterx@redhat.com> writes: >> >> > On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: >> >> Peter Xu <peterx@redhat.com> writes: >> >> >> >> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: >> >> >> > Or graphically: >> >> >> > >> >> >> > 1) client fills the active slot with data. Channels point to nothing >> >> >> > at this point: >> >> >> > [a] <-- active slot >> >> >> > [][][][] <-- free slots, one per-channel >> >> >> > >> >> >> > [][][][] <-- channels' p->data pointers >> >> >> > >> >> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels >> >> >> > still point to nothing: >> >> >> > [] >> >> >> > [a][][][] >> >> >> > >> >> >> > [][][][] >> >> >> > >> >> >> > 3) multifd_send() finds an idle channel and updates its pointer: >> >> >> >> >> >> It seems the action "finds an idle channel" is in step 2 rather than step 3, >> >> >> which means the free slot is selected based on the id of the channel found, am I >> >> >> understanding correctly? >> >> > >> >> > I think you're right. >> >> > >> >> > Actually I also feel like the desription here is ambiguous, even though I >> >> > think I get what Fabiano wanted to say. >> >> > >> >> > The free slot should be the first step of step 2+3, here what Fabiano >> >> > really wanted to suggest is we move the free buffer array from multifd >> >> > channels into the callers, then the caller can pass in whatever data to >> >> > send. 
>> >> > >> >> > So I think maybe it's cleaner to write it as this in code (note: I didn't >> >> > really change the code, just some ordering and comments): >> >> > >> >> > ===8<=== >> >> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) >> >> > */ >> >> > active_slot = slots->active; >> >> > slots->active = slots->free[p->id]; >> >> > - p->data = active_slot; >> >> > - >> >> > - /* >> >> > - * By the next time we arrive here, the channel will certainly >> >> > - * have consumed the active slot. Put it back on the free list >> >> > - * now. >> >> > - */ >> >> > slots->free[p->id] = active_slot; >> >> > >> >> > + /* Assign the current active slot to the chosen thread */ >> >> > + p->data = active_slot; >> >> > ===8<=== >> >> > >> >> > The comment I removed is slightly misleading to me too, because right now >> >> > active_slot contains the data hasn't yet been delivered to multifd, so >> >> > we're "putting it back to free list" not because of it's free, but because >> >> > we know it won't get used until the multifd send thread consumes it >> >> > (because before that the thread will be busy, and we won't use the buffer >> >> > if so in upcoming send()s). >> >> > >> >> > And then when I'm looking at this again, I think maybe it's a slight >> >> > overkill, and maybe we can still keep the "opaque data" managed by multifd. >> >> > One reason might be that I don't expect the "opaque data" payload keep >> >> > growing at all: it should really be either RAM or device state as I >> >> > commented elsewhere in a relevant thread, after all it's a thread model >> >> > only for migration purpose to move vmstates.. >> >> >> >> Some amount of flexibility needs to be baked in. For instance, what >> >> about the handshake procedure? Don't we want to use multifd threads to >> >> put some information on the wire for that as well? >> > >> > Is this an orthogonal question? >> >> I don't think so. You say the payload data should be either RAM or >> device state. 
I'm asking what other types of data do we want the multifd >> channel to transmit and suggesting we need to allow room for the >> addition of that, whatever it is. One thing that comes to mind that is >> neither RAM or device state is some form of handshake or capabilities >> negotiation. > > Indeed what I thought was multifd payload should be either ram or device, > nothing else. The worst case is we can add one more into the union, but I > can't think of. What about the QEMUFile traffic? There's an iov in there. I have been thinking of replacing some of qemu-file.c guts with calls to multifd. Instead of several qemu_put_byte() we could construct an iov and give it to multifd for transferring, call multifd_sync at the end and get rid of the QEMUFile entirely. I don't have that completely laid out at the moment, but I think it should be possible. I get concerned about making assumptions on the types of data we're ever going to want to transmit. I bet someone thought in the past that multifd would never be used for anything other than ram. > > I wonder why handshake needs to be done per-thread. I was naturally > thinking the handshake should happen sequentially, talking over everything > including multifd. Well, it would still be thread-based. Just that it would be 1 thread and it would not be managed by multifd. I don't see the point. We could make everything be multifd-based. Any piece of data that needs to reach the other side of the migration could be sent through multifd, no? Also, when you say "per-thread", that's the model we're trying to get away from. There should be nothing "per-thread", the threads just consume the data produced by the clients. Anything "per-thread" that is not strictly related to the thread model should go away. For instance, p->page_size, p->page_count, p->write_flags, p->flags, etc. None of these should be in MultiFDSendParams. 
That thing should be (say) MultifdChannelState and contain only the semaphores and control flags for the threads. It would be nice if we could once and for all have a model that can dispatch data transfers without having to fiddle with threading all the time. Any time someone wants to do something different in the migration code, there it goes a random qemu_create_thread() flying around. > IMO multifd to have these threads are mostly for the > sake of performance. I sometimes think we have some tiny places where we > "over-engineered" multifd, e.g. on attaching ZLIB/ZSTD/... flags on each > packet header, even if they should never change, and that is the part of > thing we can put into handshake too, and after handshake we should assume > both sides and all threads are in sync. There's no need to worry > compressor per-packet, per-channel. It could be a global thing and done > upfront, even if Libvirt didn't guarantee those. Yep, I agree. And if any client needs such granularity, it can add a sub-header of it's own. >> >> > >> > What I meant above is it looks fine to me to keep "device state" in >> > multifd.c, as long as it is not only about VFIO. >> > >> > What you were saying seems to be about how to identify this is a device >> > state, then I just hope VFIO shares the same flag with any future device >> > that would also like to send its state via multifd, like: >> > >> > #define MULTIFD_FLAG_DEVICE_STATE (32 << 1) >> > >> > Then set it in MultiFDPacket_t.flags. The dest qemu should route that >> > packet to the device vmsd / save_entry for parsing. >> >> Sure, that part I agree with, no issue here. 
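The destination-side routing both sides agree on fits in a few lines. The flag value is the one proposed in the thread; the rest is illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define MULTIFD_FLAG_DEVICE_STATE (32 << 1)

typedef enum { ROUTE_RAM, ROUTE_DEVICE_STATE } PacketRoute;

/* Dest QEMU: one shared flag bit routes the packet either to the RAM
 * path or to the owning device's vmsd/save_entry for parsing, the same
 * for VFIO and any future device sending state via multifd. */
PacketRoute route_packet(uint32_t flags)
{
    return (flags & MULTIFD_FLAG_DEVICE_STATE) ? ROUTE_DEVICE_STATE
                                               : ROUTE_RAM;
}
```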
>> >> > >> >> >> >> > Putting it managed by multifd thread should involve less change than this >> >> > series, but it could look like this: >> >> > >> >> > typedef enum { >> >> > MULTIFD_PAYLOAD_RAM = 0, >> >> > MULTIFD_PAYLOAD_DEVICE_STATE = 1, >> >> > } MultifdPayloadType; >> >> > >> >> > typedef enum { >> >> > MultiFDPages_t ram_payload; >> >> > MultifdDeviceState_t device_payload; >> >> > } MultifdPayload; >> >> > >> >> > struct MultiFDSendData { >> >> > MultifdPayloadType type; >> >> > MultifdPayload data; >> >> > }; >> >> >> >> Is that an union up there? So you want to simply allocate in multifd the >> > >> > Yes. >> > >> >> max amount of memory between the two types of payload? But then we'll >> > >> > Yes. >> > >> >> need a memset(p->data, 0, ...) at every round of sending to avoid giving >> >> stale data from one client to another. That doesn't work with the >> > >> > I think as long as the one to enqueue will always setup the fields, we >> > don't need to do memset. I am not sure if it's a major concern to always >> > set all the relevant fields in the multifd enqueue threads. It sounds like >> > the thing we should always better do. >> >> Well, writing to a region of memory that was "owned" by another multifd >> client and already has a bunch of data there is somewhat prone to >> bugs. Just forget to set something and now things start to behave >> weirdly. I guess that's just the price of having an union. I'm not >> against that, but I would maybe prefer to have each client hold its own >> data and not have to think about anything else. Much of this feeling >> comes from how the RAM code currently works (more on that below). > > IIUC so far not using union is fine. However in the future what if we > extend device states to vcpus? In that case the vcpu will need to have its > own array of SendData[], even though it will share the same structure with > what VFIO would use. 
> > If with an union, and if we can settle it'll be either PAGES or DEV_STATE, > then when vcpu state is involved, vcpu will simply enqueue the same > DEV_STATE. Yeah, the union is advantageous from that aspect. > One thing to mention is that when with an union we may probably need to get > rid of multifd_send_state->pages already. Hehe, please don't do this like "oh, by the way...". This is a major pain point. I've been complaining about that "holding of client data" since the first time I read that code. So if you're going to propose something, it needs to account for that. > The object can't be a global > cache (in which case so far it's N+1, N being n_multifd_channels, while "1" > is the extra buffer as only RAM uses it). In the union world we'll need to > allocate M+N SendData, where N is still the n_multifd_channels, and M is > the number of users, in VFIO's case, VFIO allocates the cached SendData and > use that to enqueue, right after enqueue it'll get a free one by switching > it with another one in the multifd's array[N]. Same to RAM. Then there'll > be N+2 SendData and VFIO/RAM needs to free their own SendData when cleanup > (multifd owns the N per-thread only). > At first sight, that seems to work. It's similar to this series, but you're moving the free slots back into the channels. Should I keep SendData as an actual separate array instead of multiple p->data? Let me know, I'll write some code and see what it looks like. >> >> > >> >> current ram migration because it wants p->pages to remain active across >> >> several calls of multifd_queue_page(). >> > >> > I don't think I followed here. >> > >> > What I meant: QEMU maintains SendData[8], now a bunch of pages arrives, it >> > enqueues "pages" into a free slot index 2 (set type=pages), then before >> > thread 2 finished sending the bunch of pages, SendData[2] will always >> > represent those pages without being used by anything else. What did I miss? 
>> > >> >> You're missing multifd_send_state->pages and the fact that it holds >> unsent data on behalf of the client. At every call to >> multifd_queue_pages(), the RAM code expects the previously filled pages >> structure to be there. Since we intend to have more than one multifd >> client, now the other client (say device state) might run, it will take >> that slot and fill it with it's own stuff (or rather fill p->send_data >> and multifd_send_pages() switches the pointer). Next call to >> multifd_queue_pages(), it will take multifd_send_state->pages and >> there'll be garbage there. > > Please see above to see whether that may answer here too; in general I > think we need to drop multifd_send_state->pages, but let SendData caches be > managed by the users, so instead of multifd managing N+1 SendData, it only > manages N, leaving the rest to possible users of multifd. It also means > it's a must any multifd enqueue user will first provide a valid cache > object of SendData. > >> >> The code is not: take a free slot from the next idle channel and fill it >> with data. >> >> It is: take from multifd_send_state the active slot which *might* have >> previously been consumed by the last channel and (continue to) fill it >> with data. >> >> "might", because successive calls to multifd_queue_page() don't need to >> call multifd_send_page() to flush to the channel. >> >> >> >> >> > >> >> > Then the "enum" makes sure the payload only consumes only the max of both >> >> > types; a side benefit to save some memory. >> >> > >> >> > I think we need to make sure MultifdDeviceState_t is generic enough so that >> >> > it will work for mostly everything (especially normal VMSDs). 
In this case >> >> > the VFIO series should be good as that was currently defined as: >> >> > >> >> > typedef struct { >> >> > MultiFDPacketHdr_t hdr; >> >> > >> >> > char idstr[256] QEMU_NONSTRING; >> >> > uint32_t instance_id; >> >> > >> >> > /* size of the next packet that contains the actual data */ >> >> > uint32_t next_packet_size; >> >> > } __attribute__((packed)) MultiFDPacketDeviceState_t; >> >> >> >> This is the packet, a different thing. Not sure if your paragraph above >> >> means to talk about that or really MultifdDeviceState, which is what is >> >> exchanged between the multifd threads and the client code. >> > >> > I meant the wire protocol looks great from that POV. We may need similar >> > thing for the type==device_state slots just to be generic. >> > >> >> >> >> > >> >> > IIUC that was what we need exactly with idstr+instance_id, so as to nail >> >> > exactly at where should the "opaque device state" go to, then load it with >> >> > a buffer-based loader when it's ready (starting from VFIO, to get rid of >> >> > qemufile). For VMSDs in the future if ever possible, that should be a >> >> > modified version of vmstate_load() where it may take buffers not qemufiles. >> >> > >> >> > To Maciej: please see whether above makes sense to you, and if you also >> >> > agree please consider that with your VFIO work. >> >> > >> >> > Thanks, >> >> > >> >> >> >> >> >> > [] >> >> >> > [a][][][] >> >> >> > >> >> >> > [a][][][] >> >> >> > ^idle >> >> >>
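For concreteness, the union-based MultiFDSendData being discussed could look roughly like this (a sketch with hypothetical names and field layouts, not QEMU's actual definitions; the device-state side mirrors the idstr/instance_id pair from the packet header quoted above):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical payload types; names are illustrative only. */
typedef enum {
    MULTIFD_PAYLOAD_NONE,
    MULTIFD_PAYLOAD_RAM,
    MULTIFD_PAYLOAD_DEVICE_STATE,
} MultiFDPayloadType;

typedef struct {
    uint32_t num;            /* pages queued so far */
    uint64_t offset[512];    /* page offsets within the ramblock */
} MultiFDPages_t;

typedef struct {
    char idstr[256];         /* which device this state belongs to */
    uint32_t instance_id;
    uint8_t *buf;            /* opaque device state blob */
    size_t buf_len;
} MultiFDDeviceState_t;

/*
 * One allocation serves either client; the union consumes only the
 * max of both payload sizes -- the memory saving mentioned above.
 */
typedef struct {
    MultiFDPayloadType type;
    union {
        MultiFDPages_t ram;
        MultiFDDeviceState_t device_state;
    } u;
} MultiFDSendData;

size_t multifd_send_data_size(void)
{
    return sizeof(MultiFDSendData);
}
```

The `type` field plays the role of the "enum" mentioned above, telling the channel thread which member of the union is live.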
On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote: > What about the QEMUFile traffic? There's an iov in there. I have been > thinking of replacing some of qemu-file.c guts with calls to > multifd. Instead of several qemu_put_byte() we could construct an iov > and give it to multifd for transferring, call multifd_sync at the end and > get rid of the QEMUFile entirely. I don't have that completely laid out > at the moment, but I think it should be possible. I get concerned about > making assumptions on the types of data we're ever going to want to > transmit. I bet someone thought in the past that multifd would never be > used for anything other than ram. Hold on a bit.. there're two things I want to clarify with you. Firstly, qemu_put_byte() has buffering on f->buf[]. Directly changing them to iochannels may regress performance. I never checked, but I would assume some buffering will be needed for small chunks of data even with iochannels. Secondly, why does multifd have anything to do with this? What you're talking about is more like the qemufile->iochannel rework to me, and IIUC that doesn't yet involve multifd. For many such conversions, it'll still be operating on the main channel, which is not the multifd channels. What matters might be what's in your mind to put over the multifd channels. > > > > > I wonder why handshake needs to be done per-thread. I was naturally > > thinking the handshake should happen sequentially, talking over everything > > including multifd. > > Well, it would still be thread based. Just that it would be 1 thread and > it would not be managed by multifd. I don't see the point. We could make > everything be multifd-based. Any piece of data that needs to reach the > other side of the migration could be sent through multifd, no? Hmm.... yes we can. But what do we gain from it, if we know it'll be a few MBs in total? There ain't a lot of huge stuff to move, it seems to me. 
> > Also, when you say "per-thread", that's the model we're trying to get > away from. There should be nothing "per-thread", the threads just > consume the data produced by the clients. Anything "per-thread" that is > not strictly related to the thread model should go away. For instance, > p->page_size, p->page_count, p->write_flags, p->flags, etc. None of > these should be in MultiFDSendParams. That thing should be (say) > MultifdChannelState and contain only the semaphores and control flags > for the threads. > > It would be nice if we could once and for all have a model that can > dispatch data transfers without having to fiddle with threading all the > time. Any time someone wants to do something different in the migration > code, there goes a random qemu_create_thread() flying around. That's exactly what I want to avoid. Not all things will need a thread, only performance-relevant ones. So now we have multifd threads, they're for IO throughput: if we want to push a fast NIC, that's the only way to go. Anything that wants to push that NIC should use multifd. Then it turns out we want more concurrency, it's about VFIO save()/load() of the kernel drivers and it can block. Same for other devices that can take time to save()/load() if it can happen concurrently in the future. I think that's the reason why I suggested the VFIO solution to provide a generic concept of thread pool so it services a generic purpose, and can be reused in the future. I hope that'll stop anyone else in migration from creating yet another thread randomly, and I definitely don't like that either. I would _suspect_ the next one to come as such is TDX.. I remember at least in the very initial proposal years ago, TDX migration involves its own "channel" to migrate, migration.c may not even know where that channel is. We'll see. [...] > > One thing to mention is that when with a union we may probably need to get > > rid of multifd_send_state->pages already. 
> > Hehe, please don't do this like "oh, by the way...". This is a major > pain point. I've been complaining about that "holding of client data" > since the first time I read that code. So if you're going to propose > something, it needs to account for that. The client puts something into a buffer (SendData), then it delivers it to multifd (who silently switches the buffer). Once enqueued, the client assumes the buffer is sent and reusable again. It looks pretty common to me, what is the concern within the procedure? What's the "holding of client data" issue? > > > The object can't be a global > > cache (in which case so far it's N+1, N being n_multifd_channels, while "1" > > is the extra buffer as only RAM uses it). In the union world we'll need to > > allocate M+N SendData, where N is still the n_multifd_channels, and M is > > the number of users, in VFIO's case, VFIO allocates the cached SendData and > > uses that to enqueue, right after enqueue it'll get a free one by switching > > it with another one in the multifd's array[N]. Same for RAM. Then there'll > > be N+2 SendData and VFIO/RAM need to free their own SendData at cleanup > > (multifd owns the N per-thread only). > > > > At first sight, that seems to work. It's similar to this series, but > you're moving the free slots back into the channels. Should I keep > SendData as an actual separate array instead of multiple p->data? I don't know.. they look similar to me yet so far, as long as multifd is managing the N buffers, while the clients will manage one for each. There should be a helper to allocate/free the generic multifd buffers (SendData in this case) so everyone should be using that. > > Let me know, I'll write some code and see what it looks like. I think Maciej has been working on this too during your absence, as I saw he decided to base his work on top of yours and he's preparing the new version. I hope you two won't conflict or duplicate the work. 
Might be good to ask / wait and see how far Maciej has gone.
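The M+N buffer flow being converged on in this exchange — multifd owning one SendData per channel, each client owning one cached SendData, with enqueue implemented as a pointer swap — could be sketched like this (hypothetical names; real code would block on a semaphore rather than return NULL, and would kick the channel thread after the swap):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for the generic payload discussed in the thread. */
typedef struct {
    int type;                /* 0 == empty/free */
} MultiFDSendData;

typedef struct {
    MultiFDSendData *data;   /* buffer currently owned by this channel */
    bool idle;
} MultiFDSendParams;

/*
 * Hand a filled buffer to an idle channel. The channel's free buffer is
 * swapped out and returned, so the caller can refill it immediately for
 * the next enqueue. Neither side ever copies the payload itself.
 */
MultiFDSendData *multifd_enqueue(MultiFDSendParams *channels, int n,
                                 MultiFDSendData *filled)
{
    for (int i = 0; i < n; i++) {
        if (channels[i].idle) {
            MultiFDSendData *free_buf = channels[i].data;
            channels[i].data = filled;  /* channel now owns the payload */
            channels[i].idle = false;   /* real code kicks the thread here */
            return free_buf;
        }
    }
    return NULL;                        /* real code blocks on a semaphore */
}
```

With M clients each holding one cached buffer, cleanup responsibility splits naturally: multifd frees the N per-channel buffers, each client frees its own.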
Peter Xu <peterx@redhat.com> writes: > On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote: >> What about the QEMUFile traffic? There's an iov in there. I have been >> thinking of replacing some of qemu-file.c guts with calls to >> multifd. Instead of several qemu_put_byte() we could construct an iov >> and give it to multifd for transferring, call multifd_sync at the end and >> get rid of the QEMUFile entirely. I don't have that completely laid out >> at the moment, but I think it should be possible. I get concerned about >> making assumptions on the types of data we're ever going to want to >> transmit. I bet someone thought in the past that multifd would never be >> used for anything other than ram. > > Hold on a bit.. there're two things I want to clarify with you. > > Firstly, qemu_put_byte() has buffering on f->buf[]. Directly changing them > to iochannels may regress performance. I never checked, but I would assume > some buffering will be needed for small chunks of data even with iochannels. Right, but there's an extra memcpy to do that. Not sure how those balance out. We also don't flush the iov at once, so f->buf seems redundant to me. But of course, if we touch any of that we must ensure we're not dropping any major optimization. > Secondly, why does multifd have anything to do with this? What you're talking > about is more like the qemufile->iochannel rework to me, and IIUC > that doesn't yet involve multifd. For many such conversions, it'll > still be operating on the main channel, which is not the multifd channels. > What matters might be what's in your mind to put over the multifd > channels. > Any piece of code that fills an iov with data could likely send that data through multifd as well. From this perspective, multifd is just a way to give work to an iochannel. We don't *need* to use it, but it might be simple enough to the point that the benefit of ditching QEMUFile can be reached without too much rework. 
Say we provision multifd threads early and leave them waiting for any part of the migration code to send some data. We could have n-1 threads idle waiting for the bulk of the data and use a single thread for any early traffic that does not need to be parallel. I'm not suggesting we do any of this right away or even that this is the correct way to go, I'm just letting you know some of my ideas and why I think ram + device state might not be the only data we put through multifd. >> >> > >> > I wonder why handshake needs to be done per-thread. I was naturally >> > thinking the handshake should happen sequentially, talking over everything >> > including multifd. >> >> Well, it would still be thread based. Just that it would be 1 thread and >> it would not be managed by multifd. I don't see the point. We could make >> everything be multifd-based. Any piece of data that needs to reach the >> other side of the migration could be sent through multifd, no? > > Hmm.... yes we can. But what do we gain from it, if we know it'll be a few > MBs in total? There ain't a lot of huge stuff to move, it seems to me. Well it depends on what the alternative is. If we're going to create a thread to send small chunks of data anyway, we could use the multifd threads instead. > >> >> Also, when you say "per-thread", that's the model we're trying to get >> away from. There should be nothing "per-thread", the threads just >> consume the data produced by the clients. Anything "per-thread" that is >> not strictly related to the thread model should go away. For instance, >> p->page_size, p->page_count, p->write_flags, p->flags, etc. None of >> these should be in MultiFDSendParams. That thing should be (say) >> MultifdChannelState and contain only the semaphores and control flags >> for the threads. >> >> It would be nice if we could once and for all have a model that can >> dispatch data transfers without having to fiddle with threading all the >> time. 
Any time someone wants to do something different in the migration >> code, there goes a random qemu_create_thread() flying around. > > That's exactly what I want to avoid. Not all things will need a thread, > only performance-relevant ones. > > So now we have multifd threads, they're for IO throughput: if we want to > push a fast NIC, that's the only way to go. Anything that wants to push that > NIC should use multifd. > > Then it turns out we want more concurrency, it's about VFIO save()/load() > of the kernel drivers and it can block. Same for other devices that can > take time to save()/load() if it can happen concurrently in the future. I > think that's the reason why I suggested the VFIO solution to provide a > generic concept of thread pool so it services a generic purpose, and can be > reused in the future. Just to be clear, do you want a thread-pool to replace multifd? Or would that be only used for concurrency on the producer side? > I hope that'll stop anyone else in migration from creating yet another thread > randomly, and I definitely don't like that either. I would _suspect_ the > next one to come as such is TDX.. I remember at least in the very initial > proposal years ago, TDX migration involves its own "channel" to migrate, > migration.c may not even know where that channel is. We'll see. > It would be good if we could standardize on a single structure for data transfer. Today it's kind of a mess, QEMUFile with its several data sizes, iovs, MultiFDPages_t, whatever comes out of this series in p->data, etc. That makes it difficult to change the underlying model without rewriting a bunch of logic. > [...] > >> > One thing to mention is that when with a union we may probably need to get >> > rid of multifd_send_state->pages already. >> >> Hehe, please don't do this like "oh, by the way...". This is a major >> pain point. I've been complaining about that "holding of client data" >> since the first time I read that code. 
So if you're going to propose >> something, it needs to account for that. > > The client puts something into a buffer (SendData), then it delivers it to > multifd (who silently switches the buffer). Once enqueued, the client > assumes the buffer is sent and reusable again. > > It looks pretty common to me, what is the concern within the procedure? > What's the "holding of client data" issue? > No concern, it's just that you didn't mention any of this when you suggested the union, so I thought you simply ignored it. "holding of the client data" is what we're discussing here: the fact that multifd keeps multifd_send_state->pages around for the next call to multifd_queue_pages() to reach. Anyway, your M+N suggestion seems to be enough to address this, let's see. >> >> > The object can't be a global >> > cache (in which case so far it's N+1, N being n_multifd_channels, while "1" >> > is the extra buffer as only RAM uses it). In the union world we'll need to >> > allocate M+N SendData, where N is still the n_multifd_channels, and M is >> > the number of users, in VFIO's case, VFIO allocates the cached SendData and >> > uses that to enqueue, right after enqueue it'll get a free one by switching >> > it with another one in the multifd's array[N]. Same for RAM. Then there'll >> > be N+2 SendData and VFIO/RAM need to free their own SendData at cleanup >> > (multifd owns the N per-thread only). >> > >> >> At first sight, that seems to work. It's similar to this series, but >> you're moving the free slots back into the channels. Should I keep >> SendData as an actual separate array instead of multiple p->data? > > I don't know.. they look similar to me yet so far, as long as multifd is > managing the N buffers, while the clients will manage one for each. There > should be a helper to allocate/free the generic multifd buffers (SendData > in this case) so everyone should be using that. > >> >> Let me know, I'll write some code and see what it looks like. 
> > I think Maciej has been working on this too during your absence, as I saw he > decided to base his work on top of yours and he's preparing the new > version. I hope you two won't conflict or duplicate the work. Might be > good to ask / wait and see how far Maciej has gone.
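On the f->buf[] buffering question raised earlier in this subthread, one possible direction is to coalesce small writes into iov entries directly, paying the memcpy only for small chunks while referencing large buffers in place. A toy sketch (hypothetical names, no flushing or error handling):

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>

#define COALESCE_BUF 4096
#define SMALL_WRITE  256

typedef struct {
    struct iovec iov[64];
    int iovcnt;
    char buf[COALESCE_BUF];  /* bounce buffer for small writes */
    size_t buf_used;
} IovWriter;

/*
 * Small writes are memcpy'd into the bounce buffer (and merged with the
 * previous iov entry when contiguous); large ones are referenced in
 * place, avoiding the copy that f->buf[] always pays.
 */
void iov_writer_put(IovWriter *w, const void *data, size_t len)
{
    if (len < SMALL_WRITE && w->buf_used + len <= COALESCE_BUF) {
        char *dst = w->buf + w->buf_used;
        memcpy(dst, data, len);
        w->buf_used += len;
        if (w->iovcnt > 0) {
            struct iovec *last = &w->iov[w->iovcnt - 1];
            if ((char *)last->iov_base + last->iov_len == dst) {
                last->iov_len += len;  /* extend the previous entry */
                return;
            }
        }
        w->iov[w->iovcnt++] = (struct iovec){ dst, len };
    } else {
        /* large buffer: reference the caller's memory directly */
        w->iov[w->iovcnt++] = (struct iovec){ (void *)data, len };
    }
}
```

The resulting iov could then be handed to writev() on an iochannel, or to a multifd channel, in one call instead of many small ones.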
On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: [...] > We also don't flush the iov at once, so f->buf seems redundant to > me. But of course, if we touch any of that we must ensure we're not > dropping any major optimization. Yes, some tests on that would be more persuasive when the time comes. Per my limited experience in the past few years: memcpy on chips nowadays is pretty cheap. You'll see very soon one more example of that when you start to look at the qatzip series: that series decided to do one more memcpy for all guest pages, to make it a larger chunk of buffer instead of submitting the compression tasks in 4k chunks (while I thought 4k wasn't too small itself). That may be more involved so may not be a great example (e.g. the compression algo can be special in this case where it just likes larger buffers), but it's not uncommon that I see people trade things with memcpy, especially small buffers. [...] > Any piece of code that fills an iov with data could likely send > that data through multifd as well. From this perspective, multifd is just a > way to give work to an iochannel. We don't *need* to use it, but it > might be simple enough to the point that the benefit of ditching > QEMUFile can be reached without too much rework. > > Say we provision multifd threads early and leave them waiting for any > part of the migration code to send some data. We could have n-1 threads > idle waiting for the bulk of the data and use a single thread for any > early traffic that does not need to be parallel. > > I'm not suggesting we do any of this right away or even that this is the > correct way to go, I'm just letting you know some of my ideas and why I > think ram + device state might not be the only data we put through > multifd. We can wait and see whether that can be of any use in the future, even if so, we still have the chance to add more types into the union, I think. But again, I don't expect it. 
My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) non-IO, data onto multifd. Again, I would ask "why not the main channel", otherwise. [...] > Just to be clear, do you want a thread-pool to replace multifd? Or would > that be only used for concurrency on the producer side? Not replace multifd. It's just that I was imagining multifd threads only manage IO stuff, nothing else. I was indeed thinking whether we can reuse multifd threads, but then I found there's a risk of mangling these two concepts, as: when we do more than IO in multifd threads (e.g., talking to the VFIO kernel fetching data, which can block), we risk blocking IO even when we could push more, leaving the NICs idle. There's also the complexity where the job fetches data from the VFIO kernel and wants to enqueue again, it means a multifd task can enqueue to itself, and circular enqueue can be challenging: imagine 8 concurrent tasks (with a total of 8 multifd threads) trying to enqueue at the same time; they starve themselves to death. Things like that. Then I figured the remaining jobs are really fn(void*) type of things; they deserve their own pool of threads. So the VFIO threads (used to be per-device) become migration worker threads, we need them for both src/dst: on dst there's still pending work to apply the continuous VFIO data back to the kernel driver, and that can't be done by a multifd thread either, for a similar reason. Then those dest side worker threads can also do load() not only for VFIO but also other device states if we can add more. To summarize, we'll have: - 1 main thread (send / recv) - N multifd threads (IOs only) - M worker threads (jobs only) Of course, postcopy not involved.. How's that sound?
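The "M worker threads (jobs only)" piece could be as simple as a queue of fn(void *) jobs serviced by a pool. A bare-bones sketch (hypothetical names, not QEMU's actual thread-pool API):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

#define JOB_QUEUE_MAX 64

typedef struct {
    void (*fn)(void *);
    void *opaque;
} MigJob;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    MigJob jobs[JOB_QUEUE_MAX];
    int head, tail, count;
    bool shutdown;
} MigWorkerPool;

/* Worker loop: drain jobs, exit once shut down and empty. */
void *worker_main(void *arg)
{
    MigWorkerPool *p = arg;
    for (;;) {
        pthread_mutex_lock(&p->lock);
        while (p->count == 0 && !p->shutdown) {
            pthread_cond_wait(&p->cond, &p->lock);
        }
        if (p->count == 0) {   /* shutdown requested and queue drained */
            pthread_mutex_unlock(&p->lock);
            return NULL;
        }
        MigJob job = p->jobs[p->head];
        p->head = (p->head + 1) % JOB_QUEUE_MAX;
        p->count--;
        pthread_mutex_unlock(&p->lock);
        job.fn(job.opaque);    /* may block, e.g. a VFIO save()/load() */
    }
}

void pool_submit(MigWorkerPool *p, void (*fn)(void *), void *opaque)
{
    pthread_mutex_lock(&p->lock);
    p->jobs[p->tail] = (MigJob){ fn, opaque };
    p->tail = (p->tail + 1) % JOB_QUEUE_MAX;
    p->count++;
    pthread_cond_signal(&p->cond);
    pthread_mutex_unlock(&p->lock);
}

void pool_shutdown(MigWorkerPool *p, pthread_t t)
{
    pthread_mutex_lock(&p->lock);
    p->shutdown = true;
    pthread_cond_broadcast(&p->cond);
    pthread_mutex_unlock(&p->lock);
    pthread_join(t, NULL);
}

/* Example job: bump a counter. */
void job_incr(void *opaque)
{
    ++*(int *)opaque;
}
```

The point of keeping this separate from the multifd channels is exactly the one above: a blocking job never steals an IO thread from the NIC, and a job that wants to enqueue more data cannot deadlock against its own pool.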
Peter Xu <peterx@redhat.com> writes: > On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: > > [...] > >> We also don't flush the iov at once, so f->buf seems redundant to >> me. But of course, if we touch any of that we must ensure we're not >> dropping any major optimization. > > Yes some tests over that would be more persuasive when it comes. > > Per my limited experience in the past few years: memcpy on chips nowadays > is pretty cheap. You'll see very soon one more example of that when you > start to look at the qatzip series: that series decided to do one more > memcpy for all guest pages, to make it a larger chunk of buffer instead of > submitting the compression tasks in 4k chunks (while I thought 4k wasn't > too small itself). > > That may be more involved so may not be a great example (e.g. the > compression algo can be special in this case where it just likes larger > buffers), but it's not uncommon that I see people trade things with memcpy, > especially small buffers. > > [...] > >> Any piece of code that fills an iov with data is prone to be able to >> send that data through multifd. From this perspective, multifd is just a >> way to give work to an iochannel. We don't *need* to use it, but it >> might be simple enough to the point that the benefit of ditching >> QEMUFile can be reached without too much rework. >> >> Say we provision multifd threads early and leave them waiting for any >> part of the migration code to send some data. We could have n-1 threads >> idle waiting for the bulk of the data and use a single thread for any >> early traffic that does not need to be parallel. >> >> I'm not suggesting we do any of this right away or even that this is the >> correct way to go, I'm just letting you know some of my ideas and why I >> think ram + device state might not be the only data we put through >> multifd. 
> > We can wait and see whether that can be of any use in the future, even if > so, we still have the chance to add more types into the union, I think. But > again, I don't expect it. > > My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) > non-IO, data onto multifd. Again, I would ask "why not the main channel", > otherwise. > > [...] > >> Just to be clear, do you want a thread-pool to replace multifd? Or would >> that be only used for concurrency on the producer side? > > Not replace multifd. It's just that I was imagining multifd threads only > manage IO stuff, nothing else. > > I was indeed thinking whether we can reuse multifd threads, but then I > found there's a risk of mangling these two concepts, as: when we do more than IO > in multifd threads (e.g., talking to VFIO kernel fetching data which can > block), we risk blocking IO even when we could push more, leaving the NICs > idle. There's also the complexity where the job fetches data > from VFIO kernel and wants to enqueue again, it means a multifd task can > enqueue to itself, and circular enqueue can be challenging: imagine 8 > concurrent tasks (with a total of 8 multifd threads) trying to enqueue at > the same time; they starve themselves to death. Things like that. Then I > figured the remaining jobs are really fn(void*) type of things; they > deserve their own pool of threads. > > So the VFIO threads (used to be per-device) become migration worker > threads, we need them for both src/dst: on dst there's still pending work > to apply the continuous VFIO data back to the kernel driver, and that can't > be done by a multifd thread either, for a similar reason. Then those dest > side worker threads can also do load() not only for VFIO but also other > device states if we can add more. > > To summarize, we'll have: > > - 1 main thread (send / recv) > - N multifd threads (IOs only) > - M worker threads (jobs only) > > Of course, postcopy not involved.. How's that sound? 
Looks good. There's a better divide between producer and consumer this way. I think it will help when designing new features. One observation is that we'll still have two different entities doing IO (multifd threads and the migration thread), which I would prefer were using common code at a higher level than the iochannel. One thing that I tried to look into for mapped-ram was whether we could set up iouring in the migration code, but got entirely discouraged by the migration thread doing IO at random points. And of course, you've seen what we had to do with direct-io. That was in part due to having the migration thread in parallel doing its small writes at undetermined points in time.
On Thu, Jul 11, 2024 at 06:12:44PM -0300, Fabiano Rosas wrote: > Peter Xu <peterx@redhat.com> writes: > > > On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: > > > > [...] > > > >> We also don't flush the iov at once, so f->buf seems redundant to > >> me. But of course, if we touch any of that we must ensure we're not > >> dropping any major optimization. > > > > Yes some tests over that would be more persuasive when it comes. > > > > Per my limited experience in the past few years: memcpy on chips nowadays > > is pretty cheap. You'll see very soon one more example of that when you > > start to look at the qatzip series: that series decided to do one more > > memcpy for all guest pages, to make it a larger chunk of buffer instead of > > submitting the compression tasks in 4k chunks (while I thought 4k wasn't > > too small itself). > > > > That may be more involved so may not be a great example (e.g. the > > compression algo can be special in this case where it just likes larger > > buffers), but it's not uncommon that I see people trade things with memcpy, > > especially small buffers. > > > > [...] > > > >> Any piece of code that fills an iov with data is prone to be able to > >> send that data through multifd. From this perspective, multifd is just a > >> way to give work to an iochannel. We don't *need* to use it, but it > >> might be simple enough to the point that the benefit of ditching > >> QEMUFile can be reached without too much rework. > >> > >> Say we provision multifd threads early and leave them waiting for any > >> part of the migration code to send some data. We could have n-1 threads > >> idle waiting for the bulk of the data and use a single thread for any > >> early traffic that does not need to be parallel. 
> >> > >> I'm not suggesting we do any of this right away or even that this is the > >> correct way to go, I'm just letting you know some of my ideas and why I > >> think ram + device state might not be the only data we put through > >> multifd. > > > > We can wait and see whether that can be of any use in the future, even if > > so, we still have chance to add more types into the union, I think. But > > again, I don't expect. > > > > My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) > > non-IO, data onto multifd. Again, I would ask "why not the main channel", > > otherwise. > > > > [...] > > > >> Just to be clear, do you want a thread-pool to replace multifd? Or would > >> that be only used for concurrency on the producer side? > > > > Not replace multifd. It's just that I was imagining multifd threads only > > manage IO stuff, nothing else. > > > > I was indeed thinking whether we can reuse multifd threads, but then I > > found there's risk mangling these two concepts, as: when we do more than IO > > in multifd threads (e.g., talking to VFIO kernel fetching data which can > > block), we have risk of blocking IO even if we can push more so the NICs > > can be idle again. There's also the complexity where the job fetches data > > from VFIO kernel and want to enqueue again, it means an multifd task can > > enqueue to itself, and circular enqueue can be challenging: imagine 8 > > concurrent tasks (with a total of 8 multifd threads) trying to enqueue at > > the same time; they hunger themselves to death. Things like that. Then I > > figured the rest jobs are really fn(void*) type of things; they should > > deserve their own pool of threads. > > > > So the VFIO threads (used to be per-device) becomes migration worker > > threads, we need them for both src/dst: on dst there's still pending work > > to apply the continuous VFIO data back to the kernel driver, and that can't > > be done by multifd thread too due to similar same reason. 
Then those dest > > side worker threads can also do load() not only for VFIO but also other > > device states if we can add more. > > > > To summarize, we'll have: > > > > - 1 main thread (send / recv) > > - N multifd threads (IOs only) > > - M worker threads (jobs only) > > > > Of course, postcopy not involved.. How's that sound? > > Looks good. There's a better divide between producer and consumer this > way. I think it will help when designing new features. > > One observation is that we'll still have two different entities doing IO > (multifd threads and the migration thread), which I would prefer were > using common code at a higher level than the iochannel. At least for the main channel probably yes. I think Dan has had the idea of adding the buffering layer over iochannels, then replacing qemufiles with that. Multifd channels look ok so far to use as raw channels. > > One thing that I tried to look into for mapped-ram was whether we could > set up iouring in the migration code, but got entirely discouraged by > the migration thread doing IO at random points. And of course, you've > seen what we had to do with direct-io. That was in part due to having > the migration thread in parallel doing its small writes at undetermined > points in time. On the locked_vm part: probably yes, we'd better try to avoid using page pinning if possible. It just looks like it becomes a more important scenario nowadays to put VMs into containers, which means such a feature may not always be usable there. For the rest: I really don't know much on iouring, but I remember it can be fast normally only in a poll model with interrupt-less context? Not sure whether it suits us here, as I guess we should avoid consuming cpu resources with no good reason, and polling for perf falls into that category, I think. 
Even without it, kubevirt now already has an issue with multifd eating cpus, and people observe multifd threads causing vcpu threads to be throttled, interrupting guest workloads; they're currently put in the same container. I'm also not sure how much it'll help compared to when we have the multi-threading ready. I suspect not that much.
Peter Xu <peterx@redhat.com> writes: > On Thu, Jul 11, 2024 at 06:12:44PM -0300, Fabiano Rosas wrote: >> Peter Xu <peterx@redhat.com> writes: >> >> > On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: >> > >> > [...] >> > >> >> We also don't flush the iov at once, so f->buf seems redundant to >> >> me. But of course, if we touch any of that we must ensure we're not >> >> dropping any major optimization. >> > >> > Yes some tests over that would be more persuasive when it comes. >> > >> > Per my limited experience in the past few years: memcpy on chips nowadays >> > is pretty cheap. You'll see very soon one more example of that when you >> > start to look at the qatzip series: that series decided to do one more >> > memcpy for all guest pages, to make it a larger chunk of buffer instead of >> > submitting the compression tasks in 4k chunks (while I thought 4k wasn't >> > too small itself). >> > >> > That may be more involved so may not be a great example (e.g. the >> > compression algo can be special in this case where it just likes larger >> > buffers), but it's not uncommon that I see people trade things with memcpy, >> > especially small buffers. >> > >> > [...] >> > >> >> Any piece of code that fills an iov with data is prone to be able to >> >> send that data through multifd. From this perspective, multifd is just a >> >> way to give work to an iochannel. We don't *need* to use it, but it >> >> might be simple enough to the point that the benefit of ditching >> >> QEMUFile can be reached without too much rework. >> >> >> >> Say we provision multifd threads early and leave them waiting for any >> >> part of the migration code to send some data. We could have n-1 threads >> >> idle waiting for the bulk of the data and use a single thread for any >> >> early traffic that does not need to be parallel. 
>> >> >> >> I'm not suggesting we do any of this right away or even that this is the >> >> correct way to go, I'm just letting you know some of my ideas and why I >> >> think ram + device state might not be the only data we put through >> >> multifd. >> > >> > We can wait and see whether that can be of any use in the future, even if >> > so, we still have chance to add more types into the union, I think. But >> > again, I don't expect. >> > >> > My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) >> > non-IO, data onto multifd. Again, I would ask "why not the main channel", >> > otherwise. >> > >> > [...] >> > >> >> Just to be clear, do you want a thread-pool to replace multifd? Or would >> >> that be only used for concurrency on the producer side? >> > >> > Not replace multifd. It's just that I was imagining multifd threads only >> > manage IO stuff, nothing else. >> > >> > I was indeed thinking whether we can reuse multifd threads, but then I >> > found there's risk mangling these two concepts, as: when we do more than IO >> > in multifd threads (e.g., talking to VFIO kernel fetching data which can >> > block), we have risk of blocking IO even if we can push more so the NICs >> > can be idle again. There's also the complexity where the job fetches data >> > from VFIO kernel and want to enqueue again, it means an multifd task can >> > enqueue to itself, and circular enqueue can be challenging: imagine 8 >> > concurrent tasks (with a total of 8 multifd threads) trying to enqueue at >> > the same time; they hunger themselves to death. Things like that. Then I >> > figured the rest jobs are really fn(void*) type of things; they should >> > deserve their own pool of threads. 
>> > So the VFIO threads (used to be per-device) become migration worker
>> > threads; we need them for both src/dst: on dst there's still pending work
>> > to apply the continuous VFIO data back to the kernel driver, and that can't
>> > be done by a multifd thread either, for a similar reason. Then those dest
>> > side worker threads can also do load() not only for VFIO but also other
>> > device states if we can add more.
>> >
>> > So to summarize, we'll have:
>> >
>> > - 1 main thread (send / recv)
>> > - N multifd threads (IOs only)
>> > - M worker threads (jobs only)
>> >
>> > Of course, postcopy not involved.. How's that sound?
>>
>> Looks good. There's a better divide between producer and consumer this
>> way. I think it will help when designing new features.
>>
>> One observation is that we'll still have two different entities doing IO
>> (multifd threads and the migration thread), which I would prefer were
>> using common code at a higher level than the iochannel.
>
> At least for the main channel probably yes. I think Dan has had the idea
> of adding a buffering layer over iochannels, then replacing qemufiles with
> that. Multifd channels look OK so far to use as raw channels.
>
>> One thing that I tried to look into for mapped-ram was whether we could
>> set up iouring in the migration code, but got entirely discouraged by
>> the migration thread doing IO at random points. And of course, you've
>> seen what we had to do with direct-io. That was in part due to having
>> the migration thread in parallel doing its small writes at undetermined
>> points in time.
>
> On the locked_vm part: probably yes, we'd better try to avoid using page
> pinning if possible. It just looks like putting VMs into containers has
> become a more important scenario nowadays, which means such a feature may
> not always be usable there.
> For the rest: I really don't know much on iouring, but I remember it can
> be fast normally only in a poll model with interrupt-less context?

I'm not sure. I mainly thought about using it to save syscalls on the write
side. But as I said, I didn't look further into it.

> Not sure whether it suits us here, as I guess we should avoid consuming cpu
> resources with no good reason, and polling for perf falls into that
> category, I think. Even without it, kubevirt already has an issue with
> multifd eating cpus, and people observe multifd threads causing vcpu
> threads to be throttled, interrupting guest workloads; they're currently
> put in the same container. I'm also not sure how much it'll help compared
> to when we have the multi-threading ready. I suspect not that much.

Do you have a reference for that kubevirt issue I could look at? It may be
interesting to investigate further. Where's the throttling coming from? And
doesn't less vcpu time imply less dirtying and therefore faster convergence?
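The "M worker threads (jobs only)" split that Peter sketches earlier in the thread boils down to a queue of fn(void*) jobs drained by a fixed pool. A minimal pthread sketch of that idea follows; nothing here is QEMU code (QEMU would presumably build on its existing thread-pool infrastructure), and all names are made up for illustration:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

#define POOL_THREADS 4

/* A queued fn(void *) job, the shape discussed for non-IO work. */
typedef struct Job {
    void (*fn)(void *);
    void *opaque;
    struct Job *next;
} Job;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    Job *head, *tail;
    bool shutdown;
    pthread_t threads[POOL_THREADS];
} WorkerPool;

static void *worker(void *arg)
{
    WorkerPool *pool = arg;
    for (;;) {
        pthread_mutex_lock(&pool->lock);
        while (!pool->head && !pool->shutdown) {
            pthread_cond_wait(&pool->cond, &pool->lock);
        }
        if (!pool->head) {          /* shutdown requested, queue drained */
            pthread_mutex_unlock(&pool->lock);
            return NULL;
        }
        Job *job = pool->head;
        pool->head = job->next;
        if (!pool->head) {
            pool->tail = NULL;
        }
        pthread_mutex_unlock(&pool->lock);
        job->fn(job->opaque);       /* run the job outside the lock */
        free(job);
    }
}

static void pool_init(WorkerPool *pool)
{
    pthread_mutex_init(&pool->lock, NULL);
    pthread_cond_init(&pool->cond, NULL);
    pool->head = pool->tail = NULL;
    pool->shutdown = false;
    for (int i = 0; i < POOL_THREADS; i++) {
        pthread_create(&pool->threads[i], NULL, worker, pool);
    }
}

static void pool_push(WorkerPool *pool, void (*fn)(void *), void *opaque)
{
    Job *job = malloc(sizeof(*job));
    job->fn = fn;
    job->opaque = opaque;
    job->next = NULL;
    pthread_mutex_lock(&pool->lock);
    if (pool->tail) {
        pool->tail->next = job;
    } else {
        pool->head = job;
    }
    pool->tail = job;
    pthread_cond_signal(&pool->cond);
    pthread_mutex_unlock(&pool->lock);
}

/* Lets the workers drain the queue, then joins them. */
static void pool_shutdown(WorkerPool *pool)
{
    pthread_mutex_lock(&pool->lock);
    pool->shutdown = true;
    pthread_cond_broadcast(&pool->cond);
    pthread_mutex_unlock(&pool->lock);
    for (int i = 0; i < POOL_THREADS; i++) {
        pthread_join(pool->threads[i], NULL);
    }
}

static _Atomic int jobs_done;

static void bump(void *opaque)
{
    (void)opaque;
    atomic_fetch_add(&jobs_done, 1);
}
```

The key property for the discussion above is that the pool consumes opaque work items; it never inspects what a job does, which keeps blocking device work off the IO (multifd) threads.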
On Fri, Jul 12, 2024 at 09:44:02AM -0300, Fabiano Rosas wrote:
> Do you have a reference for that kubevirt issue I could look at? It
> may be interesting to investigate further. Where's the throttling coming
> from? And doesn't less vcpu time imply less dirtying and therefore
> faster convergence?

Sorry, I don't have a link on hand.. Sometimes it's not about convergence; it's about impacting the guest workload too much without intention, which is not wanted, especially on a public cloud. It's understandable to me since they're under the same cgroup, with throttled cpu resources applied to the QEMU+Libvirt processes as a whole, probably based on N_VCPUS with some tiny extra room for other stuff. For example, I remember they also hit other threads contending with the vcpu threads, like the block layer thread pools.

It's a separate issue from locked_vm here, as kubevirt probably needs to figure out a way to say "these are mgmt threads, and those are vcpu threads", because mgmt threads can take quite some cpu resources sometimes and that's not avoidable.

Page pinning will be another story, as in many cases pinning should not be required, except for VFIO, zerocopy and other special stuff.
On 10.07.2024 22:16, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
>> On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote:
>>> Peter Xu <peterx@redhat.com> writes:
>>>
>>>> On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote:
>>>>>> Or graphically:
>>>>>>
>>>>>> 1) client fills the active slot with data. Channels point to nothing
>>>>>>    at this point:
>>>>>>   [a]      <-- active slot
>>>>>>   [][][][] <-- free slots, one per-channel
>>>>>>
>>>>>>   [][][][] <-- channels' p->data pointers
>>>>>>
>>>>>> 2) multifd_send() swaps the pointers inside the client slot. Channels
>>>>>>    still point to nothing:
>>>>>>   []
>>>>>>   [a][][][]
>>>>>>
>>>>>>   [][][][]
>>>>>>
>>>>>> 3) multifd_send() finds an idle channel and updates its pointer:
>>>>>
>>>>> It seems the action "finds an idle channel" is in step 2 rather than step 3,
>>>>> which means the free slot is selected based on the id of the channel found, am I
>>>>> understanding correctly?
>>>>
>>>> I think you're right.
>>>>
>>>> Actually I also feel like the description here is ambiguous, even though I
>>>> think I get what Fabiano wanted to say.
>>>>
>>>> The free slot should be the first step of step 2+3; here what Fabiano
>>>> really wanted to suggest is we move the free buffer array from multifd
>>>> channels into the callers, then the caller can pass in whatever data to
>>>> send.
>>>>
>>>> So I think maybe it's cleaner to write it as this in code (note: I didn't
>>>> really change the code, just some ordering and comments):
>>>>
>>>> ===8<===
>>>> @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots)
>>>>       */
>>>>      active_slot = slots->active;
>>>>      slots->active = slots->free[p->id];
>>>> -    p->data = active_slot;
>>>> -
>>>> -    /*
>>>> -     * By the next time we arrive here, the channel will certainly
>>>> -     * have consumed the active slot. Put it back on the free list
>>>> -     * now.
>>>> -     */
>>>>      slots->free[p->id] = active_slot;
>>>>
>>>> +    /* Assign the current active slot to the chosen thread */
>>>> +    p->data = active_slot;
>>>> ===8<===
>>>>
>>>> The comment I removed is slightly misleading to me too, because right now
>>>> active_slot contains data that hasn't yet been delivered to multifd, so
>>>> we're "putting it back on the free list" not because it's free, but because
>>>> we know it won't get used until the multifd send thread consumes it
>>>> (because before that the thread will be busy, and we won't use the buffer
>>>> if so in upcoming send()s).
>>>>
>>>> And then when I'm looking at this again, I think maybe it's a slight
>>>> overkill, and maybe we can still keep the "opaque data" managed by multifd.
>>>> One reason might be that I don't expect the "opaque data" payload to keep
>>>> growing at all: it should really be either RAM or device state as I
>>>> commented elsewhere in a relevant thread; after all it's a thread model
>>>> only for migration purposes to move vmstates..
>>>
>>> Some amount of flexibility needs to be baked in. For instance, what
>>> about the handshake procedure? Don't we want to use multifd threads to
>>> put some information on the wire for that as well?
>>
>> Is this an orthogonal question?
>
> I don't think so. You say the payload data should be either RAM or
> device state. I'm asking what other types of data do we want the multifd
> channel to transmit and suggesting we need to allow room for the
> addition of that, whatever it is. One thing that comes to mind that is
> neither RAM nor device state is some form of handshake or capabilities
> negotiation.

The RFC version of my multifd device state transfer patch set introduced
a new migration channel header (by Avihai) for clean and extensible
migration channel handshaking, but people didn't like it so it was removed
in v1.

Thanks,
Maciej
On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote: > > > > > The comment I removed is slightly misleading to me too, because right now > > > > > active_slot contains the data hasn't yet been delivered to multifd, so > > > > > we're "putting it back to free list" not because of it's free, but because > > > > > we know it won't get used until the multifd send thread consumes it > > > > > (because before that the thread will be busy, and we won't use the buffer > > > > > if so in upcoming send()s). > > > > > > > > > > And then when I'm looking at this again, I think maybe it's a slight > > > > > overkill, and maybe we can still keep the "opaque data" managed by multifd. > > > > > One reason might be that I don't expect the "opaque data" payload keep > > > > > growing at all: it should really be either RAM or device state as I > > > > > commented elsewhere in a relevant thread, after all it's a thread model > > > > > only for migration purpose to move vmstates.. > > > > > > > > Some amount of flexibility needs to be baked in. For instance, what > > > > about the handshake procedure? Don't we want to use multifd threads to > > > > put some information on the wire for that as well? > > > > > > Is this an orthogonal question? > > > > I don't think so. You say the payload data should be either RAM or > > device state. I'm asking what other types of data do we want the multifd > > channel to transmit and suggesting we need to allow room for the > > addition of that, whatever it is. One thing that comes to mind that is > > neither RAM or device state is some form of handshake or capabilities > > negotiation. > > The RFC version of my multifd device state transfer patch set introduced > a new migration channel header (by Avihai) for clean and extensible > migration channel handshaking but people didn't like so it was removed in v1. 
Hmm, I'm not sure this is relevant to the context of discussion here, but I confess I didn't notice the per-channel header thing in the previous RFC series. Link is here:

https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigiero@oracle.com

Maciej, if you want, you can split that out of the series. So far it looks like a good thing with/without how VFIO tackles it.

Thanks,
On 17.07.2024 21:00, Peter Xu wrote: > On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote: >>>>>> The comment I removed is slightly misleading to me too, because right now >>>>>> active_slot contains the data hasn't yet been delivered to multifd, so >>>>>> we're "putting it back to free list" not because of it's free, but because >>>>>> we know it won't get used until the multifd send thread consumes it >>>>>> (because before that the thread will be busy, and we won't use the buffer >>>>>> if so in upcoming send()s). >>>>>> >>>>>> And then when I'm looking at this again, I think maybe it's a slight >>>>>> overkill, and maybe we can still keep the "opaque data" managed by multifd. >>>>>> One reason might be that I don't expect the "opaque data" payload keep >>>>>> growing at all: it should really be either RAM or device state as I >>>>>> commented elsewhere in a relevant thread, after all it's a thread model >>>>>> only for migration purpose to move vmstates.. >>>>> >>>>> Some amount of flexibility needs to be baked in. For instance, what >>>>> about the handshake procedure? Don't we want to use multifd threads to >>>>> put some information on the wire for that as well? >>>> >>>> Is this an orthogonal question? >>> >>> I don't think so. You say the payload data should be either RAM or >>> device state. I'm asking what other types of data do we want the multifd >>> channel to transmit and suggesting we need to allow room for the >>> addition of that, whatever it is. One thing that comes to mind that is >>> neither RAM or device state is some form of handshake or capabilities >>> negotiation. >> >> The RFC version of my multifd device state transfer patch set introduced >> a new migration channel header (by Avihai) for clean and extensible >> migration channel handshaking but people didn't like so it was removed in v1. 
> Hmm, I'm not sure this is relevant to the context of discussion here, but I
> confess I didn't notice the per-channel header thing in the previous RFC
> series. Link is here:
>
> https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigiero@oracle.com

The channel header patches were dropped because Daniel didn't like them:
https://lore.kernel.org/qemu-devel/Zh-KF72Fe9oV6tfT@redhat.com/
https://lore.kernel.org/qemu-devel/Zh_6W8u3H4FmGS49@redhat.com/

> Maciej, if you want, you can split that out of the series. So far it looks
> like a good thing with/without how VFIO tackles it.

Unfortunately, Avihai's channel header patches obviously impact the wire protocol and are a bit intermingled with the rest of the device state transfer patch set, so it would be good to know upfront whether there is some consensus to (re)introduce this new channel header (CCed Daniel, too).

> Thanks,

Thanks,
Maciej
On Wed, Jul 17, 2024 at 11:07:17PM +0200, Maciej S. Szmigiero wrote: > On 17.07.2024 21:00, Peter Xu wrote: > > On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote: > > > > > > > The comment I removed is slightly misleading to me too, because right now > > > > > > > active_slot contains the data hasn't yet been delivered to multifd, so > > > > > > > we're "putting it back to free list" not because of it's free, but because > > > > > > > we know it won't get used until the multifd send thread consumes it > > > > > > > (because before that the thread will be busy, and we won't use the buffer > > > > > > > if so in upcoming send()s). > > > > > > > > > > > > > > And then when I'm looking at this again, I think maybe it's a slight > > > > > > > overkill, and maybe we can still keep the "opaque data" managed by multifd. > > > > > > > One reason might be that I don't expect the "opaque data" payload keep > > > > > > > growing at all: it should really be either RAM or device state as I > > > > > > > commented elsewhere in a relevant thread, after all it's a thread model > > > > > > > only for migration purpose to move vmstates.. > > > > > > > > > > > > Some amount of flexibility needs to be baked in. For instance, what > > > > > > about the handshake procedure? Don't we want to use multifd threads to > > > > > > put some information on the wire for that as well? > > > > > > > > > > Is this an orthogonal question? > > > > > > > > I don't think so. You say the payload data should be either RAM or > > > > device state. I'm asking what other types of data do we want the multifd > > > > channel to transmit and suggesting we need to allow room for the > > > > addition of that, whatever it is. One thing that comes to mind that is > > > > neither RAM or device state is some form of handshake or capabilities > > > > negotiation. 
> > > > > > The RFC version of my multifd device state transfer patch set introduced > > > a new migration channel header (by Avihai) for clean and extensible > > > migration channel handshaking but people didn't like so it was removed in v1. > > > > Hmm, I'm not sure this is relevant to the context of discussion here, but I > > confess I didn't notice the per-channel header thing in the previous RFC > > series. Link is here: > > > > https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigiero@oracle.com > > The channel header patches were dropped because Daniel didn't like them: > https://lore.kernel.org/qemu-devel/Zh-KF72Fe9oV6tfT@redhat.com/ > https://lore.kernel.org/qemu-devel/Zh_6W8u3H4FmGS49@redhat.com/ Ah I missed that too when I quickly went over the old series, sorry. I think what Dan meant was that we'd better do that with the handshake work, which should cover more than this. I've no problem with that. It's just that sooner or later, we should provide something more solid than commit 6720c2b327 ("migration: check magic value for deciding the mapping of channels"). > > > Maciej, if you want, you can split that out of the seriess. So far it looks > > like a good thing with/without how VFIO tackles it. > > Unfortunately, these Avihai's channel header patches obviously impact wire > protocol and are a bit of intermingled with the rest of the device state > transfer patch set so it would be good to know upfront whether there is > some consensus to (re)introduce this new channel header (CCed Daniel, too). When I mentioned posting it separately, it'll still not be relevant to the VFIO series. 
IOW, I think below is definitely not needed (and I think we're on the same page now to reuse multifd threads as generic channels, so there's no issue now): https://lore.kernel.org/qemu-devel/027695db92ace07d2d6ee66da05f8e85959fd46a.1713269378.git.maciej.szmigiero@oracle.com/ So I assume we should leave that for later for whoever refactors the handshake process. Thanks,
Peter Xu <peterx@redhat.com> writes:

> On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote:
>> What about the QEMUFile traffic? There's an iov in there. I have been
>> thinking of replacing some of qemu-file.c's guts with calls to
>> multifd. Instead of several qemu_put_byte() calls we could construct an
>> iov and give it to multifd for transferring, call multifd_sync at the end
>> and get rid of the QEMUFile entirely. I don't have that completely laid
>> out at the moment, but I think it should be possible. I get concerned
>> about making assumptions about the types of data we're ever going to want
>> to transmit. I bet someone thought in the past that multifd would never
>> be used for anything other than ram.
>
> Hold on a bit.. there are two things I want to clarify with you.
>
> Firstly, qemu_put_byte() has buffering on f->buf[]. Directly changing it
> to iochannels may regress performance. I never checked, but I would assume
> some buffering will be needed for small chunks of data even with iochannels.
>
> Secondly, why does multifd have anything to do with this? What you're
> talking about is more like the qemufile->iochannel rework to me, and IIUC
> that doesn't yet involve multifd. Many such conversions will still be
> operating on the main channel, which is not the multifd channels. What
> matters might be what you have in mind to be put over multifd channels
> there.
>
>> > I wonder why the handshake needs to be done per-thread. I was naturally
>> > thinking the handshake should happen sequentially, talking over
>> > everything including multifd.
>>
>> Well, it would still be thread based. Just that it would be 1 thread and
>> it would not be managed by multifd. I don't see the point. We could make
>> everything be multifd-based. Any piece of data that needs to reach the
>> other side of the migration could be sent through multifd, no?
>
> Hmm.... yes we can. But what do we gain from it, if we know it'll be a few
> MBs in total?
There ain't a lot of huge stuff to move, it seems to me.

>> Also, when you say "per-thread", that's the model we're trying to get
>> away from. There should be nothing "per-thread"; the threads just
>> consume the data produced by the clients. Anything "per-thread" that is
>> not strictly related to the thread model should go away. For instance,
>> p->page_size, p->page_count, p->write_flags, p->flags, etc. None of
>> these should be in MultiFDSendParams. That thing should be (say)
>> MultifdChannelState and contain only the semaphores and control flags
>> for the threads.
>>
>> It would be nice if we could once and for all have a model that can
>> dispatch data transfers without having to fiddle with threading all the
>> time. Any time someone wants to do something different in the migration
>> code, there goes a random qemu_create_thread() flying around.
>
> That's exactly what I want to avoid. Not all things will need a thread,
> only performance-relevant ones.
>
> So now we have multifd threads; they're for IO throughput: if we want to
> push a fast NIC, that's the only way to go. Anything that wants to push
> that NIC should use multifd.
>
> Then it turns out we want more concurrency; it's about VFIO save()/load()
> in the kernel drivers and it can block. Same for other devices that can
> take time to save()/load() if it can happen concurrently in the future. I
> think that's the reason why I suggested the VFIO solution provide a
> generic concept of a thread pool, so it serves a generic purpose and can
> be reused in the future.
>
> I hope that'll stop anyone else in migration from creating yet another
> thread randomly, and I definitely don't like that either. I would
> _suspect_ the next one to come as such is TDX.. I remember at least in the
> very initial proposal years ago, TDX migration involves its own "channel"
> to migrate; migration.c may not even know where that channel is. We'll
> see.
>
> [...]
>> > One thing to mention is that with a union we may probably need to get
>> > rid of multifd_send_state->pages already.
>>
>> Hehe, please don't do this like "oh, by the way...". This is a major
>> pain point. I've been complaining about that "holding of client data"
>> since the first time I read that code. So if you're going to propose
>> something, it needs to account for that.
>
> The client puts something into a buffer (SendData), then it delivers it to
> multifd (who silently switches the buffer). Once enqueued, the client
> assumes the buffer is sent and reusable again.
>
> It looks pretty common to me; what is the concern within the procedure?
> What's the "holding of client data" issue?

v2 is ready, but unfortunately this approach doesn't work. When client A takes the payload, it fills it with its data, which may include allocating memory. MultiFDPages_t does that for the offset. This means we need a round of free/malloc at every packet sent, for every client and every allocation they decide to do.
On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote:
> v2 is ready, but unfortunately this approach doesn't work. When client A
> takes the payload, it fills it with its data, which may include
> allocating memory. MultiFDPages_t does that for the offset. This means
> we need a round of free/malloc at every packet sent, for every client
> and every allocation they decide to do.

Shouldn't be a blocker? E.g. one option is:

    /* Allocate both the pages + offset[] */
    MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) +
                                      sizeof(ram_addr_t) * n);
    pages->allocated = n;
    pages->offset = (ram_addr_t *)&pages[1];

Or.. we can also make offset[] dynamic size, if that looks less tricky:

    typedef struct {
        /* number of used pages */
        uint32_t num;
        /* number of normal pages */
        uint32_t normal_num;
        /* number of allocated pages */
        uint32_t allocated;
        RAMBlock *block;
        /* offset of each page */
        ram_addr_t offset[0];
    } MultiFDPages_t;
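The single-allocation idea behind both options above can be sketched as a small standalone snippet. The type and function names below are hypothetical stand-ins (ram_addr_t mimics QEMU's; this is not the real multifd struct), just to show that the header and the offset[] tail share one allocation, so swapping the pointer moves the offsets with it:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for QEMU's ram_addr_t. */
typedef uint64_t ram_addr_t;

typedef struct {
    uint32_t num;        /* pages used so far */
    uint32_t allocated;  /* capacity of offset[] */
    ram_addr_t offset[]; /* C99 flexible array member, sized at alloc time */
} MultiFDPages_t;

/*
 * One allocation covers both the header and the offset[] tail, so
 * handing a MultiFDPages_t pointer from client to channel moves the
 * offsets along with it -- no per-packet free/malloc round trip.
 */
static MultiFDPages_t *multifd_pages_new(uint32_t n)
{
    MultiFDPages_t *pages =
        calloc(1, sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t));
    pages->allocated = n;
    return pages;
}
```

A flexible array member (`offset[]`) is the standard C99 form of the `offset[0]` GNU extension used in the quoted sketch; both keep the array inside the struct's own allocation.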
Peter Xu <peterx@redhat.com> writes: > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> v2 is ready, but unfortunately this approach doesn't work. When client A >> takes the payload, it fills it with it's data, which may include >> allocating memory. MultiFDPages_t does that for the offset. This means >> we need a round of free/malloc at every packet sent. For every client >> and every allocation they decide to do. > > Shouldn't be a blocker? E.g. one option is: > > /* Allocate both the pages + offset[] */ > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > sizeof(ram_addr_t) * n, 1); > pages->allocated = n; > pages->offset = &pages[1]; > > Or.. we can also make offset[] dynamic size, if that looks less tricky: > > typedef struct { > /* number of used pages */ > uint32_t num; > /* number of normal pages */ > uint32_t normal_num; > /* number of allocated pages */ > uint32_t allocated; > RAMBlock *block; > /* offset of each page */ > ram_addr_t offset[0]; > } MultiFDPages_t; I think you missed the point. If we hold a pointer inside the payload, we lose the reference when the other client takes the structure and puts its own data there. So we'll need to alloc/free everytime we send a packet.
On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: > Peter Xu <peterx@redhat.com> writes: > > > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: > >> v2 is ready, but unfortunately this approach doesn't work. When client A > >> takes the payload, it fills it with it's data, which may include > >> allocating memory. MultiFDPages_t does that for the offset. This means > >> we need a round of free/malloc at every packet sent. For every client > >> and every allocation they decide to do. > > > > Shouldn't be a blocker? E.g. one option is: > > > > /* Allocate both the pages + offset[] */ > > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > > sizeof(ram_addr_t) * n, 1); > > pages->allocated = n; > > pages->offset = &pages[1]; > > > > Or.. we can also make offset[] dynamic size, if that looks less tricky: > > > > typedef struct { > > /* number of used pages */ > > uint32_t num; > > /* number of normal pages */ > > uint32_t normal_num; > > /* number of allocated pages */ > > uint32_t allocated; > > RAMBlock *block; > > /* offset of each page */ > > ram_addr_t offset[0]; > > } MultiFDPages_t; > > I think you missed the point. If we hold a pointer inside the payload, > we lose the reference when the other client takes the structure and puts > its own data there. So we'll need to alloc/free everytime we send a > packet. For option 1: when the buffer switch happens, MultiFDPages_t will switch as a whole, including its offset[], because its offset[] always belong to this MultiFDPages_t. So yes, we want to lose that *offset reference together with MultiFDPages_t here, so the offset[] always belongs to one single MultiFDPages_t object for its lifetime. For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, but make it part of the struct (MultiFDPages_t.offset[]). Logically it's the same as option 1 but maybe slight cleaner. We just need to make it sized 0 so as to be dynamic in size. Hmm.. is it the case?
Peter Xu <peterx@redhat.com> writes: > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: >> Peter Xu <peterx@redhat.com> writes: >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> >> v2 is ready, but unfortunately this approach doesn't work. When client A >> >> takes the payload, it fills it with it's data, which may include >> >> allocating memory. MultiFDPages_t does that for the offset. This means >> >> we need a round of free/malloc at every packet sent. For every client >> >> and every allocation they decide to do. >> > >> > Shouldn't be a blocker? E.g. one option is: >> > >> > /* Allocate both the pages + offset[] */ >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + >> > sizeof(ram_addr_t) * n, 1); >> > pages->allocated = n; >> > pages->offset = &pages[1]; >> > >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: >> > >> > typedef struct { >> > /* number of used pages */ >> > uint32_t num; >> > /* number of normal pages */ >> > uint32_t normal_num; >> > /* number of allocated pages */ >> > uint32_t allocated; >> > RAMBlock *block; >> > /* offset of each page */ >> > ram_addr_t offset[0]; >> > } MultiFDPages_t; >> >> I think you missed the point. If we hold a pointer inside the payload, >> we lose the reference when the other client takes the structure and puts >> its own data there. So we'll need to alloc/free everytime we send a >> packet. > > For option 1: when the buffer switch happens, MultiFDPages_t will switch as > a whole, including its offset[], because its offset[] always belong to this > MultiFDPages_t. So yes, we want to lose that *offset reference together > with MultiFDPages_t here, so the offset[] always belongs to one single > MultiFDPages_t object for its lifetime. 
MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated individually:

    struct MultiFDSendData {
        MultiFDPayloadType type;
        union {
            MultiFDPages_t ram_payload;
        } u;
    };

(and even if it did, we'd lose the pointer to ram_payload anyway - or require multiple free/alloc)

> For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore,
> but make it part of the struct (MultiFDPages_t.offset[]). Logically it's
> the same as option 1 but maybe slightly cleaner. We just need to make it
> sized 0 so as to be dynamic in size.

Seems like an undefined-behavior magnet. If I had sent this as the first version, you'd have NACKed me right away.

Besides, it's an unnecessary restriction to impose on the client code. And as above, we don't allocate the struct directly; it's part of MultiFDSendData - that's an advantage of using the union.

I think we've reached the point where I'd like to hear more concrete reasons for not going with the current proposal, beyond the simplicity argument you already made. I like the union idea, but OTOH we already have a working solution right here.

> Hmm.. is it the case?
On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: > Peter Xu <peterx@redhat.com> writes: > > > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: > >> Peter Xu <peterx@redhat.com> writes: > >> > >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: > >> >> v2 is ready, but unfortunately this approach doesn't work. When client A > >> >> takes the payload, it fills it with it's data, which may include > >> >> allocating memory. MultiFDPages_t does that for the offset. This means > >> >> we need a round of free/malloc at every packet sent. For every client > >> >> and every allocation they decide to do. > >> > > >> > Shouldn't be a blocker? E.g. one option is: > >> > > >> > /* Allocate both the pages + offset[] */ > >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > >> > sizeof(ram_addr_t) * n, 1); > >> > pages->allocated = n; > >> > pages->offset = &pages[1]; > >> > > >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: > >> > > >> > typedef struct { > >> > /* number of used pages */ > >> > uint32_t num; > >> > /* number of normal pages */ > >> > uint32_t normal_num; > >> > /* number of allocated pages */ > >> > uint32_t allocated; > >> > RAMBlock *block; > >> > /* offset of each page */ > >> > ram_addr_t offset[0]; > >> > } MultiFDPages_t; > >> > >> I think you missed the point. If we hold a pointer inside the payload, > >> we lose the reference when the other client takes the structure and puts > >> its own data there. So we'll need to alloc/free everytime we send a > >> packet. > > > > For option 1: when the buffer switch happens, MultiFDPages_t will switch as > > a whole, including its offset[], because its offset[] always belong to this > > MultiFDPages_t. So yes, we want to lose that *offset reference together > > with MultiFDPages_t here, so the offset[] always belongs to one single > > MultiFDPages_t object for its lifetime. 
> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated
> individually:
>
>     struct MultiFDSendData {
>         MultiFDPayloadType type;
>         union {
>             MultiFDPages_t ram_payload;
>         } u;
>     };
>
> (and even if it did, then we'd lose the pointer to ram_payload anyway -
> or require multiple free/alloc)

IMHO it's the same. The core idea is we allocate a buffer to hold MultiFDSendData, which may contain either Pages_t or DeviceState_t, and the size of the buffer should be MAX(A, B).

> > For option 2: I meant MultiFDPages_t will have no offset[] pointer
> > anymore, but make it part of the struct (MultiFDPages_t.offset[]).
> > Logically it's the same as option 1 but maybe slightly cleaner. We just
> > need to make it sized 0 so as to be dynamic in size.
>
> Seems like an undefined behavior magnet. If I sent this as the first
> version, you'd NACK me right away.
>
> Besides, it's an unnecessary restriction to impose in the client
> code. And like above, we don't allocate the struct directly, it's part
> of MultiFDSendData, that's an advantage of using the union.
>
> I think we've reached the point where I'd like to hear more concrete
> reasons for not going with the current proposal, except for the
> simplicity argument you already put. I like the union idea, but OTOH we
> already have a working solution right here.

I think the issue with the current proposal is that each client will need to allocate (N+1)*buffer, so the more users there are, the more buffers we'll need (M users means M*(N+1)*buffer). Currently it seems to me we will have at least 3 users: RAM, VFIO, and some other VMSD devices TBD in the mid-to-long term; the latter two will share the same DeviceState_t. Maybe vDPA as well at some point? Then 4.

I'd agree with this approach only if multifd were flexible enough to not even know what the buffers are, but that's not the case, and we seem to only care about two:

    if (type==RAM)
        ...
    else
        assert(type==DEVICE);
        ...
In this case I think it's easier we have multifd manage all the buffers (after all, it knows them well...). Then the consumption is not M*(N+1)*buffer, but (M+N)*buffer. Perhaps push your tree somewhere so we can have a quick look? I'm totally lost when you said I'll nack it.. so maybe I didn't really get what you meant. Codes may clarify that.
Peter Xu <peterx@redhat.com> writes: > On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: >> Peter Xu <peterx@redhat.com> writes: >> >> > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: >> >> Peter Xu <peterx@redhat.com> writes: >> >> >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> >> >> v2 is ready, but unfortunately this approach doesn't work. When client A >> >> >> takes the payload, it fills it with it's data, which may include >> >> >> allocating memory. MultiFDPages_t does that for the offset. This means >> >> >> we need a round of free/malloc at every packet sent. For every client >> >> >> and every allocation they decide to do. >> >> > >> >> > Shouldn't be a blocker? E.g. one option is: >> >> > >> >> > /* Allocate both the pages + offset[] */ >> >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + >> >> > sizeof(ram_addr_t) * n, 1); >> >> > pages->allocated = n; >> >> > pages->offset = &pages[1]; >> >> > >> >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: >> >> > >> >> > typedef struct { >> >> > /* number of used pages */ >> >> > uint32_t num; >> >> > /* number of normal pages */ >> >> > uint32_t normal_num; >> >> > /* number of allocated pages */ >> >> > uint32_t allocated; >> >> > RAMBlock *block; >> >> > /* offset of each page */ >> >> > ram_addr_t offset[0]; >> >> > } MultiFDPages_t; >> >> >> >> I think you missed the point. If we hold a pointer inside the payload, >> >> we lose the reference when the other client takes the structure and puts >> >> its own data there. So we'll need to alloc/free everytime we send a >> >> packet. >> > >> > For option 1: when the buffer switch happens, MultiFDPages_t will switch as >> > a whole, including its offset[], because its offset[] always belong to this >> > MultiFDPages_t. 
So yes, we want to lose that *offset reference together >> > with MultiFDPages_t here, so the offset[] always belongs to one single >> > MultiFDPages_t object for its lifetime. >> >> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated >> individually: >> >> struct MultiFDSendData { >> MultiFDPayloadType type; >> union { >> MultiFDPages_t ram_payload; >> } u; >> }; >> >> (and even if it did, then we'd lose the pointer to ram_payload anyway - >> or require multiple free/alloc) > > IMHO it's the same. > > The core idea is we allocate a buffer to put MultiFDSendData which may > contain either Pages_t or DeviceState_t, and the size of the buffer should > be MAX(A, B). > Right, but with your zero-length array proposals we need to have a separate allocation for MultiFDPages_t because to expand the array we need to include the number of pages. Also, don't think only about MultiFDPages_t. With this approach we cannot have pointers to memory allocated by the client at all anywhere inside the union. Every pointer needs to have another reference somewhere else to ensure we don't leak it. That's an unnecessary restriction. >> >> > >> > For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, >> > but make it part of the struct (MultiFDPages_t.offset[]). Logically it's >> > the same as option 1 but maybe slight cleaner. We just need to make it >> > sized 0 so as to be dynamic in size. >> >> Seems like an undefined behavior magnet. If I sent this as the first >> version, you'd NACK me right away. >> >> Besides, it's an unnecessary restriction to impose in the client >> code. And like above, we don't allocate the struct directly, it's part >> of MultiFDSendData, that's an advantage of using the union. >> >> I think we've reached the point where I'd like to hear more concrete >> reasons for not going with the current proposal, except for the >> simplicity argument you already put. 
I like the union idea, but OTOH we >> already have a working solution right here. > > I think the issue with current proposal is each client will need to > allocate (N+1)*buffer, so more user using it the more buffers we'll need (M > users, then M*(N+1)*buffer). Currently it seems to me we will have 3 users > at least: RAM, VFIO, and some other VMSD devices TBD in mid-long futures; > the latter two will share the same DeviceState_t. Maybe vDPA as well at > some point? Then 4. You used the opposite argument earlier in this thread to argue in favor of the union: We'll only have 2 clients. I'm confused. Although, granted, this RFC does use more memory. > I'd agree with this approach only if multifd is flexible enough to not even > know what's the buffers, but it's not the case, and we seem only care about > two: > > if (type==RAM) > ... > else > assert(type==DEVICE); > ... I don't understand: "not even know what's the buffers" is exactly what this series is about. It doesn't have any such conditional on "type". > > In this case I think it's easier we have multifd manage all the buffers > (after all, it knows them well...). Then the consumption is not > M*(N+1)*buffer, but (M+N)*buffer. Fine. As I said, I like the union approach. It's just that it doesn't work if the client wants to have a pointer in there. Again, this is client data that multifd holds, it's not multifd data. MultiFDPages_t or DeviceState_t have nothing to do with multifd. It should be ok to have: DeviceState_t *devstate = &p->data->u.device; devstate->foo = g_new0(...); devstate->bar = g_new0(...); just like we have: MultiFDPages_t *pages = &p->data->u.ram; pages->offset = g_new0(ram_addr_t, page_count); > > Perhaps push your tree somewhere so we can have a quick look? https://gitlab.com/farosas/qemu/-/commits/multifd-pages-decouple > I'm totally > lost when you said I'll nack it.. so maybe I didn't really get what you > meant. Codes may clarify that. 
I'm conjecturing that any contributor adding a zero-length array (a[0]) would probably be given a hard time on the mailing list. There are 10 instances of it in the code base. The proper way to grow an array is to use a flexible array (a[]) instead.
On Fri, Jul 19, 2024 at 01:54:37PM -0300, Fabiano Rosas wrote: > Peter Xu <peterx@redhat.com> writes: > > > On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: > >> Peter Xu <peterx@redhat.com> writes: > >> > >> > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: > >> >> Peter Xu <peterx@redhat.com> writes: > >> >> > >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: > >> >> >> v2 is ready, but unfortunately this approach doesn't work. When client A > >> >> >> takes the payload, it fills it with it's data, which may include > >> >> >> allocating memory. MultiFDPages_t does that for the offset. This means > >> >> >> we need a round of free/malloc at every packet sent. For every client > >> >> >> and every allocation they decide to do. > >> >> > > >> >> > Shouldn't be a blocker? E.g. one option is: > >> >> > > >> >> > /* Allocate both the pages + offset[] */ > >> >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > >> >> > sizeof(ram_addr_t) * n, 1); > >> >> > pages->allocated = n; > >> >> > pages->offset = &pages[1]; > >> >> > > >> >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: > >> >> > > >> >> > typedef struct { > >> >> > /* number of used pages */ > >> >> > uint32_t num; > >> >> > /* number of normal pages */ > >> >> > uint32_t normal_num; > >> >> > /* number of allocated pages */ > >> >> > uint32_t allocated; > >> >> > RAMBlock *block; > >> >> > /* offset of each page */ > >> >> > ram_addr_t offset[0]; > >> >> > } MultiFDPages_t; > >> >> > >> >> I think you missed the point. If we hold a pointer inside the payload, > >> >> we lose the reference when the other client takes the structure and puts > >> >> its own data there. So we'll need to alloc/free everytime we send a > >> >> packet. > >> > > >> > For option 1: when the buffer switch happens, MultiFDPages_t will switch as > >> > a whole, including its offset[], because its offset[] always belong to this > >> > MultiFDPages_t. 
So yes, we want to lose that *offset reference together > >> > with MultiFDPages_t here, so the offset[] always belongs to one single > >> > MultiFDPages_t object for its lifetime. > >> > >> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated > >> individually: > >> > >> struct MultiFDSendData { > >> MultiFDPayloadType type; > >> union { > >> MultiFDPages_t ram_payload; > >> } u; > >> }; > >> > >> (and even if it did, then we'd lose the pointer to ram_payload anyway - > >> or require multiple free/alloc) > > > > IMHO it's the same. > > > > The core idea is we allocate a buffer to put MultiFDSendData which may > > contain either Pages_t or DeviceState_t, and the size of the buffer should > > be MAX(A, B). > > > > Right, but with your zero-length array proposals we need to have a > separate allocation for MultiFDPages_t because to expand the array we > need to include the number of pages. We need to fetch the max size we need and allocate one object covers all the sizes we need. I sincerely don't understand why it's an issue.. > > Also, don't think only about MultiFDPages_t. With this approach we > cannot have pointers to memory allocated by the client at all anywhere > inside the union. Every pointer needs to have another reference > somewhere else to ensure we don't leak it. That's an unnecessary > restriction. So even if there can be multiple pointers we can definitely play the same trick that we allocate object A+B+C+D in the same chunk and let A->b points to B, A->c points to C, and so on. Before that, my question is do we really need that. For device states, AFAIU it'll always be an opaque buffer.. VFIO needs that, vDPA probably the same, and for VMSDs it'll be a temp buffer to put the VMSD dump. For multifd, I used offset[0] just to make sure things like "dynamic sized multifd buffers" will easily work without much changes. 
Or even we could have this, afaict: #define MULTIFD_PAGES_PER_PACKET (128) typedef struct { /* number of used pages */ uint32_t num; /* number of normal pages */ uint32_t normal_num; /* number of allocated pages */ uint32_t allocated; RAMBlock *block; /* offset of each page */ ram_addr_t offset[MULTIFD_PAGES_PER_PACKET]; } MultiFDPages_t; It might change perf on a few archs where psize is not 4K, but I don't see it a huge deal, personally. Then everything will have no pointers, and it can be even slightly faster because we use 64B cachelines in most systems nowadays, and one indirect pointer may always need a load on a new cacheline otherwise.. This whole cacheline thing is trivial. What I worried that you worry too much on that flexibility that we may never need. And even with that flexibilty I don't understand why you don't like allocating an object that's larger than how the union is defined: I really don't see it a problem.. It'll need care on alloc/free, true, but it should be pretty manageable in this case to me. > > >> > >> > > >> > For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, > >> > but make it part of the struct (MultiFDPages_t.offset[]). Logically it's > >> > the same as option 1 but maybe slight cleaner. We just need to make it > >> > sized 0 so as to be dynamic in size. > >> > >> Seems like an undefined behavior magnet. If I sent this as the first > >> version, you'd NACK me right away. > >> > >> Besides, it's an unnecessary restriction to impose in the client > >> code. And like above, we don't allocate the struct directly, it's part > >> of MultiFDSendData, that's an advantage of using the union. > >> > >> I think we've reached the point where I'd like to hear more concrete > >> reasons for not going with the current proposal, except for the > >> simplicity argument you already put. I like the union idea, but OTOH we > >> already have a working solution right here. 
> > > > I think the issue with current proposal is each client will need to > > allocate (N+1)*buffer, so more user using it the more buffers we'll need (M > > users, then M*(N+1)*buffer). Currently it seems to me we will have 3 users > > at least: RAM, VFIO, and some other VMSD devices TBD in mid-long futures; > > the latter two will share the same DeviceState_t. Maybe vDPA as well at > > some point? Then 4. > > You used the opposite argument earlier in this thread to argue in favor > of the union: We'll only have 2 clients. I'm confused. Maybe I meant "2 types of clients"? VDPA will also use the same device state buffer. > > Although, granted, this RFC does use more memory. IMHO it's also easier to understand, where any user always has a free SendData buffer to manipulate, and multifd always has one buffer for each channel (free or busy). That is compared to each client needs to allocate N buffers and we're actually at least leaking "number of multifd channels" into the client which may not be wanted. IOW, I wonder whether you're happy with below to drop the union idea: struct MultiFDSendData { MultiFDPayloadType type; MultiFDPages_t ram_payload; MultiFDDeviceState_t device_payload; }; Then we keep the "(M+N)" usage model, but don't use union and simply forget about the memory consumption (similar to your original memory consumption with this, but will be better as long as anything else joins, e.g. vDPA, because then vDPA will at least share that same buffer with VFIO). Do you think you would accept this? > > > I'd agree with this approach only if multifd is flexible enough to not even > > know what's the buffers, but it's not the case, and we seem only care about > > two: > > > > if (type==RAM) > > ... > > else > > assert(type==DEVICE); > > ... > > I don't understand: "not even know what's the buffers" is exactly what > this series is about. It doesn't have any such conditional on "type". 
> > > > > In this case I think it's easier we have multifd manage all the buffers > > (after all, it knows them well...). Then the consumption is not > > M*(N+1)*buffer, but (M+N)*buffer. > > Fine. As I said, I like the union approach. It's just that it doesn't > work if the client wants to have a pointer in there. > > Again, this is client data that multifd holds, it's not multifd > data. MultiFDPages_t or DeviceState_t have nothing to do with > multifd. It should be ok to have: > > DeviceState_t *devstate = &p->data->u.device; > devstate->foo = g_new0(...); > devstate->bar = g_new0(...); > > just like we have: > > MultiFDPages_t *pages = &p->data->u.ram; > pages->offset = g_new0(ram_addr_t, page_count); > > > > > Perhaps push your tree somewhere so we can have a quick look? > > https://gitlab.com/farosas/qemu/-/commits/multifd-pages-decouple > > > I'm totally > > lost when you said I'll nack it.. so maybe I didn't really get what you > > meant. Codes may clarify that. > > I'm conjecturing that any contributor adding a zero-length array (a[0]) > would probably be given a hard time on the mailing list. There's 10 > instances of it in the code base. The proper way to grow an array is to > use a flexible array (a[]) instead. I'm not familiar with flexible array. What's the difference between: struct { int a[]; }; v.s. struct { int a[0]; }; ? If that works for you, it should work for me. Or if you really hate the union / zero-sized array thing, it'll still be nice to me to drop the union but keep using M+N objects model. Thanks.
Peter Xu <peterx@redhat.com> writes: > On Fri, Jul 19, 2024 at 01:54:37PM -0300, Fabiano Rosas wrote: >> Peter Xu <peterx@redhat.com> writes: >> >> > On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: >> >> Peter Xu <peterx@redhat.com> writes: >> >> >> >> > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: >> >> >> Peter Xu <peterx@redhat.com> writes: >> >> >> >> >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> >> >> >> v2 is ready, but unfortunately this approach doesn't work. When client A >> >> >> >> takes the payload, it fills it with it's data, which may include >> >> >> >> allocating memory. MultiFDPages_t does that for the offset. This means >> >> >> >> we need a round of free/malloc at every packet sent. For every client >> >> >> >> and every allocation they decide to do. >> >> >> > >> >> >> > Shouldn't be a blocker? E.g. one option is: >> >> >> > >> >> >> > /* Allocate both the pages + offset[] */ >> >> >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + >> >> >> > sizeof(ram_addr_t) * n, 1); >> >> >> > pages->allocated = n; >> >> >> > pages->offset = &pages[1]; >> >> >> > >> >> >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: >> >> >> > >> >> >> > typedef struct { >> >> >> > /* number of used pages */ >> >> >> > uint32_t num; >> >> >> > /* number of normal pages */ >> >> >> > uint32_t normal_num; >> >> >> > /* number of allocated pages */ >> >> >> > uint32_t allocated; >> >> >> > RAMBlock *block; >> >> >> > /* offset of each page */ >> >> >> > ram_addr_t offset[0]; >> >> >> > } MultiFDPages_t; >> >> >> >> >> >> I think you missed the point. If we hold a pointer inside the payload, >> >> >> we lose the reference when the other client takes the structure and puts >> >> >> its own data there. So we'll need to alloc/free everytime we send a >> >> >> packet. 
>> >> > >> >> > For option 1: when the buffer switch happens, MultiFDPages_t will switch as >> >> > a whole, including its offset[], because its offset[] always belong to this >> >> > MultiFDPages_t. So yes, we want to lose that *offset reference together >> >> > with MultiFDPages_t here, so the offset[] always belongs to one single >> >> > MultiFDPages_t object for its lifetime. >> >> >> >> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated >> >> individually: >> >> >> >> struct MultiFDSendData { >> >> MultiFDPayloadType type; >> >> union { >> >> MultiFDPages_t ram_payload; >> >> } u; >> >> }; >> >> >> >> (and even if it did, then we'd lose the pointer to ram_payload anyway - >> >> or require multiple free/alloc) >> > >> > IMHO it's the same. >> > >> > The core idea is we allocate a buffer to put MultiFDSendData which may >> > contain either Pages_t or DeviceState_t, and the size of the buffer should >> > be MAX(A, B). >> > >> >> Right, but with your zero-length array proposals we need to have a >> separate allocation for MultiFDPages_t because to expand the array we >> need to include the number of pages. > > We need to fetch the max size we need and allocate one object covers all > the sizes we need. I sincerely don't understand why it's an issue.. > What you describe is this: p->data = g_malloc(sizeof(MultiFDPayloadType) + max(sizeof(MultiFDPages_t) + sizeof(ram_addr_t) * page_count, sizeof(MultiFDDevice_t))); This pushes the payload specific information into multifd_send_setup() which is against what we've been doing, namely isolating payload information out of multifd main code. >> >> Also, don't think only about MultiFDPages_t. With this approach we >> cannot have pointers to memory allocated by the client at all anywhere >> inside the union. Every pointer needs to have another reference >> somewhere else to ensure we don't leak it. That's an unnecessary >> restriction. 
> > So even if there can be multiple pointers we can definitely play the same > trick that we allocate object A+B+C+D in the same chunk and let A->b points > to B, A->c points to C, and so on. > > Before that, my question is do we really need that. > > For device states, AFAIU it'll always be an opaque buffer.. VFIO needs > that, vDPA probably the same, and for VMSDs it'll be a temp buffer to put > the VMSD dump. > > For multifd, I used offset[0] just to make sure things like "dynamic sized > multifd buffers" will easily work without much changes. Or even we could > have this, afaict: > > #define MULTIFD_PAGES_PER_PACKET (128) > > typedef struct { > /* number of used pages */ > uint32_t num; > /* number of normal pages */ > uint32_t normal_num; > /* number of allocated pages */ > uint32_t allocated; > RAMBlock *block; > /* offset of each page */ > ram_addr_t offset[MULTIFD_PAGES_PER_PACKET]; > } MultiFDPages_t; I think this is off the table, we're looking into allowing multifd packet size to change. Page size can change as well. Also future payload types might need to add dynamically allocated data to the payload. Although it might be ok if we have a rule that everything in the payload needs to be static, it just seems unnecessary. > > It might change perf on a few archs where psize is not 4K, but I don't see > it a huge deal, personally. > > Then everything will have no pointers, and it can be even slightly faster > because we use 64B cachelines in most systems nowadays, and one indirect > pointer may always need a load on a new cacheline otherwise.. > > This whole cacheline thing is trivial. What I worried that you worry too > much on that flexibility that we may never need. > > And even with that flexibilty I don't understand why you don't like > allocating an object that's larger than how the union is defined: I really The object being larger than the union is not the point. 
The point is we don't know what size that array will have, because that's client-specific data. Say RAM code wants an array of X size, vmstate might need another of Y size, etc. > don't see it a problem.. It'll need care on alloc/free, true, but it > should be pretty manageable in this case to me. > >> >> >> >> >> > >> >> > For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, >> >> > but make it part of the struct (MultiFDPages_t.offset[]). Logically it's >> >> > the same as option 1 but maybe slight cleaner. We just need to make it >> >> > sized 0 so as to be dynamic in size. >> >> >> >> Seems like an undefined behavior magnet. If I sent this as the first >> >> version, you'd NACK me right away. >> >> >> >> Besides, it's an unnecessary restriction to impose in the client >> >> code. And like above, we don't allocate the struct directly, it's part >> >> of MultiFDSendData, that's an advantage of using the union. >> >> >> >> I think we've reached the point where I'd like to hear more concrete >> >> reasons for not going with the current proposal, except for the >> >> simplicity argument you already put. I like the union idea, but OTOH we >> >> already have a working solution right here. >> > >> > I think the issue with current proposal is each client will need to >> > allocate (N+1)*buffer, so more user using it the more buffers we'll need (M >> > users, then M*(N+1)*buffer). Currently it seems to me we will have 3 users >> > at least: RAM, VFIO, and some other VMSD devices TBD in mid-long futures; >> > the latter two will share the same DeviceState_t. Maybe vDPA as well at >> > some point? Then 4. >> >> You used the opposite argument earlier in this thread to argue in favor >> of the union: We'll only have 2 clients. I'm confused. > > Maybe I meant "2 types of clients"? VDPA will also use the same device > state buffer. > >> >> Although, granted, this RFC does use more memory. 
> > IMHO it's also easier to understand, where any user always has a free > SendData buffer to manipulate, and multifd always has one buffer for each > channel (free or busy). > > That is compared to each client needs to allocate > N buffers and we're actually at least leaking "number of multifd channels" > into the client which may not be wanted. That's a stretch. multifd-channels is a migration parameter, it's fine if any code has access to it. > > IOW, I wonder whether you're happy with below to drop the union idea: > > struct MultiFDSendData { > MultiFDPayloadType type; > MultiFDPages_t ram_payload; > MultiFDDeviceState_t device_payload; > }; > > Then we keep the "(M+N)" usage model, but don't use union and simply forget > about the memory consumption (similar to your original memory consumption > with this, but will be better as long as anything else joins, e.g. vDPA, > because then vDPA will at least share that same buffer with VFIO). > > Do you think you would accept this? I'm not sure. I'll try some alternatives first. Maybe the a[] approach is not so bad. More below... > >> >> > I'd agree with this approach only if multifd is flexible enough to not even >> > know what's the buffers, but it's not the case, and we seem only care about >> > two: >> > >> > if (type==RAM) >> > ... >> > else >> > assert(type==DEVICE); >> > ... >> >> I don't understand: "not even know what's the buffers" is exactly what >> this series is about. It doesn't have any such conditional on "type". >> >> > >> > In this case I think it's easier we have multifd manage all the buffers >> > (after all, it knows them well...). Then the consumption is not >> > M*(N+1)*buffer, but (M+N)*buffer. >> >> Fine. As I said, I like the union approach. It's just that it doesn't >> work if the client wants to have a pointer in there. >> >> Again, this is client data that multifd holds, it's not multifd >> data. MultiFDPages_t or DeviceState_t have nothing to do with >> multifd. 
It should be ok to have: >> >> DeviceState_t *devstate = &p->data->u.device; >> devstate->foo = g_new0(...); >> devstate->bar = g_new0(...); >> >> just like we have: >> >> MultiFDPages_t *pages = &p->data->u.ram; >> pages->offset = g_new0(ram_addr_t, page_count); >> >> > >> > Perhaps push your tree somewhere so we can have a quick look? >> >> https://gitlab.com/farosas/qemu/-/commits/multifd-pages-decouple >> >> > I'm totally >> > lost when you said I'll nack it.. so maybe I didn't really get what you >> > meant. Codes may clarify that. >> >> I'm conjecturing that any contributor adding a zero-length array (a[0]) >> would probably be given a hard time on the mailing list. There's 10 >> instances of it in the code base. The proper way to grow an array is to >> use a flexible array (a[]) instead. > > I'm not familiar with flexible array. What's the difference between: > > struct { > int a[]; > }; > > v.s. > > struct { > int a[0]; > }; Both are ways of making a dynamically sized structure. We allocate them with: s = malloc(sizeof(struct) + n * sizeof(int)); a[0] is the older way and full of issues: - sizeof might return 0 or 1 depending on compiler extensions - the compiler can't tell when the array is not at the end of the struct - I'm not sure putting this inside a union is well defined. gcc docs mention: "Declaring zero-length arrays in other contexts, including as interior members of structure objects or as non-member objects, is discouraged. Accessing elements of zero-length arrays declared in such contexts is undefined and may be diagnosed." -- https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html - the kernel has deprecated this usage entirely: "...this led to other problems ... like not being able to detect when such an array is accidentally being used _not_ at the end of a structure (which could happen directly, or when such a struct was in unions, structs of structs, etc)."
-- https://www.kernel.org/doc/html/v5.16/process/deprecated.html#zero-length-and-one-element-arrays a[] is the modern way and doesn't have the issues above, but it's still not a perfect solution for us: - note above how allocation doesn't reference "a" directly, which means we need to know "n" at the time of allocating p->data. This requires the sizeof mess I mentioned at the beginning. - the array needs to be at the end of the structure, so we can only have one of them per payload. - I have no idea how this would work if more than one payload needed an array like that. I'll give this one a shot and see how it looks. I'm getting tired of looking at this code.
diff --git a/migration/multifd.c b/migration/multifd.c index 6fe339b378..f22a1c2e84 100644 --- a/migration/multifd.c +++ b/migration/multifd.c @@ -97,6 +97,30 @@ struct { MultiFDMethods *ops; } *multifd_recv_state; +MultiFDSlots *multifd_allocate_slots(void *(*alloc_fn)(void), + void (*reset_fn)(void *), + void (*cleanup_fn)(void *)) +{ + int thread_count = migrate_multifd_channels(); + MultiFDSlots *slots = g_new0(MultiFDSlots, 1); + + slots->active = g_new0(MultiFDSendData, 1); + slots->free = g_new0(MultiFDSendData *, thread_count); + + slots->active->opaque = alloc_fn(); + slots->active->reset = reset_fn; + slots->active->cleanup = cleanup_fn; + + for (int i = 0; i < thread_count; i++) { + slots->free[i] = g_new0(MultiFDSendData, 1); + slots->free[i]->opaque = alloc_fn(); + slots->free[i]->reset = reset_fn; + slots->free[i]->cleanup = cleanup_fn; + } + + return slots; +} + static bool multifd_use_packets(void) { return !migrate_mapped_ram(); @@ -313,8 +337,10 @@ void multifd_register_ops(int method, MultiFDMethods *ops) } /* Reset a MultiFDPages_t* object for the next use */ -static void multifd_pages_reset(MultiFDPages_t *pages) +static void multifd_pages_reset(void *opaque) { + MultiFDPages_t *pages = opaque; + /* * We don't need to touch offset[] array, because it will be * overwritten later when reused. 
@@ -388,8 +414,9 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
     return msg.id;
 }
 
-static MultiFDPages_t *multifd_pages_init(uint32_t n)
+static void *multifd_pages_init(void)
 {
+    uint32_t n = MULTIFD_PACKET_SIZE / qemu_target_page_size();
     MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
 
     pages->allocated = n;
@@ -398,13 +425,24 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
     return pages;
 }
 
-static void multifd_pages_clear(MultiFDPages_t *pages)
+static void multifd_pages_clear(void *opaque)
 {
+    MultiFDPages_t *pages = opaque;
+
     multifd_pages_reset(pages);
     pages->allocated = 0;
     g_free(pages->offset);
     pages->offset = NULL;
-    g_free(pages);
+}
+
+/* TODO: move these to multifd-ram.c */
+MultiFDSlots *multifd_ram_send_slots;
+
+void multifd_ram_save_setup(void)
+{
+    multifd_ram_send_slots = multifd_allocate_slots(multifd_pages_init,
+                                                    multifd_pages_reset,
+                                                    multifd_pages_clear);
 }
 
 static void multifd_ram_fill_packet(MultiFDSendParams *p)
@@ -617,13 +655,12 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
  *
  * Returns true if succeed, false otherwise.
  */
-static bool multifd_send_pages(void)
+static bool multifd_send(MultiFDSlots *slots)
 {
     int i;
     static int next_channel;
     MultiFDSendParams *p = NULL; /* make happy gcc */
-    MultiFDPages_t *channel_pages;
-    MultiFDSendData *data = multifd_send_state->data;
+    MultiFDSendData *active_slot;
 
     if (multifd_send_should_exit()) {
         return false;
@@ -659,11 +696,24 @@ static bool multifd_send_pages(void)
      */
     smp_mb_acquire();
 
-    channel_pages = p->data->opaque;
-    assert(!channel_pages->num);
+    assert(!slots->free[p->id]->size);
+
+    /*
+     * Swap the slots. The client gets a free slot to fill up for the
+     * next iteration and the channel gets the active slot for
+     * processing.
+     */
+    active_slot = slots->active;
+    slots->active = slots->free[p->id];
+    p->data = active_slot;
+
+    /*
+     * By the next time we arrive here, the channel will certainly
+     * have consumed the active slot. Put it back on the free list
+     * now.
+     */
+    slots->free[p->id] = active_slot;
 
-    multifd_send_state->data = p->data;
-    p->data = data;
     /*
      * Making sure p->data is setup before marking pending_job=true. Pairs
      * with the qatomic_load_acquire() in multifd_send_thread().
@@ -687,6 +737,7 @@ static inline bool multifd_queue_full(MultiFDPages_t *pages)
 static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
 {
     pages->offset[pages->num++] = offset;
+    multifd_ram_send_slots->active->size += qemu_target_page_size();
 }
 
 /* Returns true if enqueue successful, false otherwise */
@@ -695,7 +746,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
     MultiFDPages_t *pages;
 
 retry:
-    pages = multifd_send_state->data->opaque;
+    pages = multifd_ram_send_slots->active->opaque;
 
     /* If the queue is empty, we can already enqueue now */
     if (multifd_queue_empty(pages)) {
@@ -713,7 +764,7 @@ retry:
      * After flush, always retry.
      */
     if (pages->block != block || multifd_queue_full(pages)) {
-        if (!multifd_send_pages()) {
+        if (!multifd_send(multifd_ram_send_slots)) {
             return false;
         }
         goto retry;
@@ -825,10 +876,12 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
     qemu_sem_destroy(&p->sem_sync);
     g_free(p->name);
     p->name = NULL;
-    multifd_pages_clear(p->data->opaque);
-    p->data->opaque = NULL;
-    g_free(p->data);
-    p->data = NULL;
+    if (p->data) {
+        p->data->cleanup(p->data->opaque);
+        p->data->opaque = NULL;
+        /* p->data was not allocated by us, just clear the pointer */
+        p->data = NULL;
+    }
     p->packet_len = 0;
     g_free(p->packet);
     p->packet = NULL;
@@ -845,10 +898,6 @@ static void multifd_send_cleanup_state(void)
     qemu_sem_destroy(&multifd_send_state->channels_ready);
     g_free(multifd_send_state->params);
     multifd_send_state->params = NULL;
-    multifd_pages_clear(multifd_send_state->data->opaque);
-    multifd_send_state->data->opaque = NULL;
-    g_free(multifd_send_state->data);
-    multifd_send_state->data = NULL;
     g_free(multifd_send_state);
     multifd_send_state = NULL;
 }
@@ -897,14 +946,13 @@ int multifd_send_sync_main(void)
 {
     int i;
     bool flush_zero_copy;
-    MultiFDPages_t *pages;
 
     if (!migrate_multifd()) {
         return 0;
     }
-    pages = multifd_send_state->data->opaque;
-    if (pages->num) {
-        if (!multifd_send_pages()) {
+
+    if (multifd_ram_send_slots->active->size) {
+        if (!multifd_send(multifd_ram_send_slots)) {
             error_report("%s: multifd_send_pages fail", __func__);
             return -1;
         }
@@ -979,13 +1027,11 @@ static void *multifd_send_thread(void *opaque)
 
         /*
          * Read pending_job flag before p->data. Pairs with the
-         * qatomic_store_release() in multifd_send_pages().
+         * qatomic_store_release() in multifd_send().
          */
         if (qatomic_load_acquire(&p->pending_job)) {
-            MultiFDPages_t *pages = p->data->opaque;
-
             p->iovs_num = 0;
-            assert(pages->num);
+            assert(p->data->size);
 
             ret = multifd_send_state->ops->send_prepare(p, &local_err);
             if (ret != 0) {
@@ -1008,13 +1054,20 @@ static void *multifd_send_thread(void *opaque)
             stat64_add(&mig_stats.multifd_bytes,
                        p->next_packet_size + p->packet_len);
 
-            multifd_pages_reset(pages);
             p->next_packet_size = 0;
 
+            /*
+             * The data has now been sent. Since multifd_send()
+             * already put this slot on the free list, reset the
+             * entire slot before releasing the barrier below.
+             */
+            p->data->size = 0;
+            p->data->reset(p->data->opaque);
+
             /*
              * Making sure p->data is published before saying "we're
              * free". Pairs with the smp_mb_acquire() in
-             * multifd_send_pages().
+             * multifd_send().
              */
             qatomic_store_release(&p->pending_job, false);
         } else {
@@ -1208,8 +1261,6 @@ bool multifd_send_setup(void)
     thread_count = migrate_multifd_channels();
     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
-    multifd_send_state->data = g_new0(MultiFDSendData, 1);
-    multifd_send_state->data->opaque = multifd_pages_init(page_count);
     qemu_sem_init(&multifd_send_state->channels_created, 0);
     qemu_sem_init(&multifd_send_state->channels_ready, 0);
     qatomic_set(&multifd_send_state->exiting, 0);
@@ -1221,8 +1272,6 @@ bool multifd_send_setup(void)
         qemu_sem_init(&p->sem, 0);
         qemu_sem_init(&p->sem_sync, 0);
         p->id = i;
-        p->data = g_new0(MultiFDSendData, 1);
-        p->data->opaque = multifd_pages_init(page_count);
 
         if (use_packets) {
             p->packet_len = sizeof(MultiFDPacket_t)
diff --git a/migration/multifd.h b/migration/multifd.h
index 2029bfd80a..5230729077 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -17,6 +17,10 @@
 
 typedef struct MultiFDRecvData MultiFDRecvData;
 typedef struct MultiFDSendData MultiFDSendData;
+typedef struct MultiFDSlots MultiFDSlots;
+
+typedef void *(multifd_data_alloc_cb)(void);
+typedef void (multifd_data_cleanup_cb)(void *);
 
 bool multifd_send_setup(void);
 void multifd_send_shutdown(void);
@@ -93,8 +97,21 @@ struct MultiFDRecvData {
 struct MultiFDSendData {
     void *opaque;
     size_t size;
+    /* reset the slot for reuse after successful transfer */
+    void (*reset)(void *);
+    void (*cleanup)(void *);
 };
 
+struct MultiFDSlots {
+    MultiFDSendData **free;
+    MultiFDSendData *active;
+};
+
+MultiFDSlots *multifd_allocate_slots(void *(*alloc_fn)(void),
+                                     void (*reset_fn)(void *),
+                                     void (*cleanup_fn)(void *));
+void multifd_ram_save_setup(void);
+
 typedef struct {
     /* Fields are only written at creating/deletion time */
     /* No lock required for them, they are read only */
diff --git a/migration/ram.c b/migration/ram.c
index ceea586b06..c33a9dcf3f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3058,6 +3058,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
     migration_ops = g_malloc0(sizeof(MigrationOps));
 
     if (migrate_multifd()) {
+        multifd_ram_save_setup();
         migration_ops->ram_save_target_page = ram_save_target_page_multifd;
     } else {
         migration_ops->ram_save_target_page = ram_save_target_page_legacy;
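For readers following the hunks above, the pointer exchange inside multifd_send() can be modeled outside QEMU as a small standalone sketch. SendData, Slots, slots_new() and channel_data[] are simplified stand-ins invented for illustration; they are not the QEMU types, and the pending_job handshake, locking and memory barriers are deliberately omitted.

```c
/*
 * Minimal model of the slot exchange done by multifd_send() in the
 * patch above. SendData/Slots mirror MultiFDSendData/MultiFDSlots
 * in shape only.
 */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define NCHANNELS 4

typedef struct {
    void *opaque;  /* client-defined payload */
    size_t size;   /* bytes queued in this slot */
} SendData;

typedef struct {
    SendData **free;  /* one free slot per channel */
    SendData *active; /* slot the client is currently filling */
} Slots;

/* Stand-in for each channel's p->data pointer */
static SendData *channel_data[NCHANNELS];

static Slots *slots_new(void)
{
    Slots *s = malloc(sizeof(*s));

    s->free = calloc(NCHANNELS, sizeof(SendData *));
    for (int i = 0; i < NCHANNELS; i++) {
        s->free[i] = calloc(1, sizeof(SendData));
    }
    s->active = calloc(1, sizeof(SendData));
    return s;
}

/*
 * The core of multifd_send(): the idle channel takes the active
 * slot, the client takes that channel's free slot, and the active
 * slot is pre-returned to the free list for when the channel has
 * finished consuming it.
 */
static void slots_swap(Slots *s, int idle_channel)
{
    SendData *active_slot = s->active;

    assert(s->free[idle_channel]->size == 0);

    s->active = s->free[idle_channel];
    channel_data[idle_channel] = active_slot;
    s->free[idle_channel] = active_slot;
}
```

Note the deliberate aliasing: after the swap, channel_data[i] and s->free[i] point at the same slot, which is why the channel resets size to 0 before clearing pending_job in the real code.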
Multifd currently has a simple scheduling mechanism that distributes
work to the various channels by providing the client (producer) with a
memory slot and swapping that slot with a free slot from the next idle
channel (consumer). Or graphically:

[]       <-- multifd_send_state->pages
[][][][] <-- channels' p->pages pointers

1) client fills the empty slot with data:
   [a]
   [][][][]

2) multifd_send_pages() finds an idle channel and swaps the pointers:
   [a]
   [][][][]
   ^idle

   []
   [a][][][]

3) client can immediately fill the new slot with more data:
   [b]
   [a][][][]

4) channel processes the data, the channel slot is now free to use
   again:
   [b]
   [][][][]

This works just fine, except that it doesn't allow different types of
payloads to be processed at the same time in different channels,
i.e. the data type of multifd_send_state->pages needs to be the same
as p->pages. For each new data type different from MultiFDPage_t that
is to be handled, this logic needs to be duplicated by adding new
fields to multifd_send_state and to the channels.

The core of the issue here is that we're using the channel parameters
(MultiFDSendParams) to hold the storage space on behalf of the multifd
client (currently ram.c). This is cumbersome because it forces us to
change multifd_send_pages() to check the data type being handled
before deciding which field to use.

One way to solve this is to detach the storage space from the multifd
channel and put it somewhere else, in control of the multifd
client. That way, multifd_send_pages() can operate on an opaque
pointer without needing to be adapted to each new data type.

Implement this logic with a new "slots" abstraction:

struct MultiFDSendData {
    void *opaque;
    size_t size;
}

struct MultiFDSlots {
    MultiFDSendData **free;   <-- what used to be p->pages
    MultiFDSendData *active;  <-- what used to be multifd_send_state->pages
};

Each multifd client now gets one set of slots to use. The slots are
passed into multifd_send_pages() (renamed to multifd_send). The
channels now only hold a pointer to the generic MultiFDSendData, and
after it's processed that reference can be dropped.

Or graphically:

1) client fills the active slot with data. Channels point to nothing
   at this point:
   [a]      <-- active slot
   [][][][] <-- free slots, one per-channel

   [][][][] <-- channels' p->data pointers

2) multifd_send() swaps the pointers inside the client slot. Channels
   still point to nothing:
   []
   [a][][][]

   [][][][]

3) multifd_send() finds an idle channel and updates its pointer:
   []
   [a][][][]

   [a][][][]
   ^idle

4) a second client calls multifd_send(), but with its own slots:
   []        [b]
   [a][][][] [][][][]

   [a][][][]

5) multifd_send() does steps 2 and 3 again:
   []        []
   [a][][][] [][b][][]

   [a][b][][]
      ^idle

6) The channels continue processing the data and lose/acquire the
   references as multifd_send() updates them. The free lists of each
   client are not affected.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
 migration/multifd.c | 119 +++++++++++++++++++++++++++++++-------------
 migration/multifd.h |  17 +++++++
 migration/ram.c     |   1 +
 3 files changed, 102 insertions(+), 35 deletions(-)
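Under the same simplified model, the per-client slot set described above (one free slot per channel plus one active slot, each payload built from the client's alloc/reset/cleanup callbacks) might be constructed as sketched below. allocate_slots(), slot_new() and the dummy callbacks are hypothetical names for illustration only, not the actual multifd_allocate_slots() implementation, and plain calloc/free stand in for GLib's g_new0/g_free.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    void *opaque;            /* client payload, e.g. a MultiFDPages_t */
    size_t size;
    void (*reset)(void *);   /* reuse the slot after a successful transfer */
    void (*cleanup)(void *); /* free the payload at shutdown */
} SendData;

typedef struct {
    SendData **free;   /* one free slot per channel */
    SendData *active;  /* slot the client fills between sends */
} Slots;

/* Dummy client callbacks, standing in for multifd_pages_init() & co. */
static int alloc_count;
static void *dummy_alloc(void) { alloc_count++; return calloc(1, 64); }
static void dummy_reset(void *opaque) { (void)opaque; }
static void dummy_cleanup(void *opaque) { free(opaque); }

static SendData *slot_new(void *(*alloc_fn)(void),
                          void (*reset_fn)(void *),
                          void (*cleanup_fn)(void *))
{
    SendData *d = calloc(1, sizeof(*d));

    d->opaque = alloc_fn();
    d->reset = reset_fn;
    d->cleanup = cleanup_fn;
    return d;
}

/* One set of slots per client: nchannels free slots plus the active one */
static Slots *allocate_slots(int nchannels,
                             void *(*alloc_fn)(void),
                             void (*reset_fn)(void *),
                             void (*cleanup_fn)(void *))
{
    Slots *s = malloc(sizeof(*s));

    s->free = calloc(nchannels, sizeof(SendData *));
    for (int i = 0; i < nchannels; i++) {
        s->free[i] = slot_new(alloc_fn, reset_fn, cleanup_fn);
    }
    s->active = slot_new(alloc_fn, reset_fn, cleanup_fn);
    return s;
}
```

Because the callbacks travel with each slot, a channel can clean up or reset whatever payload it currently holds without knowing the client's data type, which is the point of the abstraction.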