Message ID | 20240722175914.24022-5-farosas@suse.de (mailing list archive)
---|---
State | New, archived
Series | migration/multifd: Remove multifd_send_state->pages
On Mon, Jul 22, 2024 at 02:59:09PM -0300, Fabiano Rosas wrote:
> We're about to use MultiFDPages_t from inside the MultiFDSendData
> payload union, which means we cannot have pointers to allocated data
> inside the pages structure, otherwise we'd lose the reference to that
> memory once another payload type touches the union. Move the offset
> array into the end of the structure and turn it into a flexible array
> member, so it is allocated along with the rest of MultiFDSendData in
> the next patches.
>
> Note that the ramblock pointer is still fine because the storage for
> it is not owned by the migration code.
>
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
>  migration/multifd.c | 21 ++++++---------------
>  migration/multifd.h |  4 ++--
>  2 files changed, 8 insertions(+), 17 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 20a767157e..440319b361 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -389,22 +389,10 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
>      return msg.id;
>  }
>
> -static MultiFDPages_t *multifd_pages_init(uint32_t n)
> -{
> -    MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
> -
> -    pages->allocated = n;
> -    pages->offset = g_new0(ram_addr_t, n);
> -
> -    return pages;
> -}

Considering this is the tricky object to allocate here, shall we keep the
function just for readability (and already dedups below two callers)?  With
it, someone else will notice g_new0() stops working for MultiFDPages_t.
Some verbose comment would be nice too.

Maybe we can also move multifd_ram_payload_size() from the next patch to
here.

> -
>  static void multifd_pages_clear(MultiFDPages_t *pages)
>  {
>      multifd_pages_reset(pages);
>      pages->allocated = 0;
> -    g_free(pages->offset);
> -    pages->offset = NULL;
>      g_free(pages);
>  }
>
> @@ -1169,7 +1157,9 @@ bool multifd_send_setup(void)
>      thread_count = migrate_multifd_channels();
>      multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
>      multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
> -    multifd_send_state->pages = multifd_pages_init(page_count);
> +    multifd_send_state->pages = g_malloc0(sizeof(MultiFDPages_t) +
> +                                          page_count * sizeof(ram_addr_t));
> +    multifd_send_state->pages->allocated = page_count;
>      qemu_sem_init(&multifd_send_state->channels_created, 0);
>      qemu_sem_init(&multifd_send_state->channels_ready, 0);
>      qatomic_set(&multifd_send_state->exiting, 0);
> @@ -1181,8 +1171,9 @@ bool multifd_send_setup(void)
>          qemu_sem_init(&p->sem, 0);
>          qemu_sem_init(&p->sem_sync, 0);
>          p->id = i;
> -        p->pages = multifd_pages_init(page_count);
> -
> +        p->pages = g_malloc0(sizeof(MultiFDPages_t) +
> +                             page_count * sizeof(ram_addr_t));
> +        p->pages->allocated = page_count;
>          if (use_packets) {
>              p->packet_len = sizeof(MultiFDPacket_t)
>                            + sizeof(uint64_t) * page_count;
> diff --git a/migration/multifd.h b/migration/multifd.h
> index c7b1ebe099..12d4247e23 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -78,9 +78,9 @@ typedef struct {
>      uint32_t normal_num;
>      /* number of allocated pages */
>      uint32_t allocated;
> +    RAMBlock *block;
>      /* offset of each page */
> -    ram_addr_t *offset;
> -    RAMBlock *block;
> +    ram_addr_t offset[];
>  } MultiFDPages_t;
>
>  struct MultiFDRecvData {
> --
> 2.35.3
>
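The hazard the commit message describes can be shown with a small standalone program. This is a toy sketch, not QEMU code: all type and function names below are made up for illustration, and the observed clobbering relies on common union layout behavior (as defined by GCC/Clang), not on anything the C standard guarantees about reading a different union member.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy payload that owns a heap pointer, like MultiFDPages_t before this
 * patch moved 'offset' into a flexible array member. */
typedef struct {
    uint32_t allocated;
    uintptr_t *offset;   /* heap-owned pointer: unsafe inside a union */
} PagesPayload;

typedef struct {
    char bytes[64];      /* some other payload type sharing the union */
} OtherPayload;

typedef union {
    PagesPayload pages;
    OtherPayload other;
} Payload;

/* Store an offset array in the pages payload, then let another payload
 * type write to the union.  On common ABIs the returned pointer is no
 * longer the stored value, so the allocation it pointed to is leaked
 * unless a copy was kept elsewhere. */
static uintptr_t *offset_after_clobber(Payload *p, uintptr_t *offsets)
{
    p->pages.offset = offsets;               /* pages payload active */
    memset(&p->other, 0, sizeof(p->other));  /* other payload takes over */
    return p->pages.offset;                  /* reference is gone */
}
```

This is why the patch keeps only inline storage (and the externally owned RAMBlock pointer) inside the structure destined for the payload union.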
Peter Xu <peterx@redhat.com> writes:

> On Mon, Jul 22, 2024 at 02:59:09PM -0300, Fabiano Rosas wrote:
>> We're about to use MultiFDPages_t from inside the MultiFDSendData
>> payload union, which means we cannot have pointers to allocated data
>> inside the pages structure, otherwise we'd lose the reference to that
>> memory once another payload type touches the union. Move the offset
>> array into the end of the structure and turn it into a flexible array
>> member, so it is allocated along with the rest of MultiFDSendData in
>> the next patches.
>>
>> Note that the ramblock pointer is still fine because the storage for
>> it is not owned by the migration code.
>>
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> ---
>>  migration/multifd.c | 21 ++++++---------------
>>  migration/multifd.h |  4 ++--
>>  2 files changed, 8 insertions(+), 17 deletions(-)
>>
>> diff --git a/migration/multifd.c b/migration/multifd.c
>> index 20a767157e..440319b361 100644
>> --- a/migration/multifd.c
>> +++ b/migration/multifd.c
>> @@ -389,22 +389,10 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp)
>>      return msg.id;
>>  }
>>
>> -static MultiFDPages_t *multifd_pages_init(uint32_t n)
>> -{
>> -    MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
>> -
>> -    pages->allocated = n;
>> -    pages->offset = g_new0(ram_addr_t, n);
>> -
>> -    return pages;
>> -}
>
> Considering this is the tricky object to allocate here, shall we keep the
> function just for readability (and already dedups below two callers)?  With
> it, someone else will notice g_new0() stops working for MultiFDPages_t.
> Some verbose comment would be nice too.
>
> Maybe we can also move multifd_ram_payload_size() from the next patch to
> here.

I guess so, I was trying to keep this diff small so it's easier to grasp.

>
>> -
>>  static void multifd_pages_clear(MultiFDPages_t *pages)
>>  {
>>      multifd_pages_reset(pages);
>>      pages->allocated = 0;
>> -    g_free(pages->offset);
>> -    pages->offset = NULL;
>>      g_free(pages);
>>  }
>>
>> @@ -1169,7 +1157,9 @@ bool multifd_send_setup(void)
>>      thread_count = migrate_multifd_channels();
>>      multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
>>      multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
>> -    multifd_send_state->pages = multifd_pages_init(page_count);
>> +    multifd_send_state->pages = g_malloc0(sizeof(MultiFDPages_t) +
>> +                                          page_count * sizeof(ram_addr_t));
>> +    multifd_send_state->pages->allocated = page_count;
>>      qemu_sem_init(&multifd_send_state->channels_created, 0);
>>      qemu_sem_init(&multifd_send_state->channels_ready, 0);
>>      qatomic_set(&multifd_send_state->exiting, 0);
>> @@ -1181,8 +1171,9 @@ bool multifd_send_setup(void)
>>          qemu_sem_init(&p->sem, 0);
>>          qemu_sem_init(&p->sem_sync, 0);
>>          p->id = i;
>> -        p->pages = multifd_pages_init(page_count);
>> -
>> +        p->pages = g_malloc0(sizeof(MultiFDPages_t) +
>> +                             page_count * sizeof(ram_addr_t));
>> +        p->pages->allocated = page_count;
>>          if (use_packets) {
>>              p->packet_len = sizeof(MultiFDPacket_t)
>>                            + sizeof(uint64_t) * page_count;
>> diff --git a/migration/multifd.h b/migration/multifd.h
>> index c7b1ebe099..12d4247e23 100644
>> --- a/migration/multifd.h
>> +++ b/migration/multifd.h
>> @@ -78,9 +78,9 @@ typedef struct {
>>      uint32_t normal_num;
>>      /* number of allocated pages */
>>      uint32_t allocated;
>> +    RAMBlock *block;
>>      /* offset of each page */
>> -    ram_addr_t *offset;
>> -    RAMBlock *block;
>> +    ram_addr_t offset[];
>>  } MultiFDPages_t;
>>
>>  struct MultiFDRecvData {
>> --
>> 2.35.3
>>
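The helper Peter asks to keep might look something like the sketch below. This is a standalone illustration under stated assumptions: plain C stand-ins replace QEMU/glib types, calloc() stands in for g_malloc0(), and multifd_ram_payload_size() is written here as the size computation Peter describes, not copied from the actual next patch.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uintptr_t ram_addr_t;      /* stand-in for QEMU's ram_addr_t */
typedef struct RAMBlock RAMBlock;  /* opaque for this sketch */

typedef struct {
    uint32_t num;
    uint32_t normal_num;
    uint32_t allocated;
    RAMBlock *block;
    ram_addr_t offset[];           /* flexible array member, as in the patch */
} MultiFDPages_t;

/* Size of the struct including its trailing flexible array. */
static size_t multifd_ram_payload_size(uint32_t n)
{
    return sizeof(MultiFDPages_t) + n * sizeof(ram_addr_t);
}

/*
 * Keep a named constructor so readers notice that MultiFDPages_t can no
 * longer be allocated with a plain g_new0(): the offset[] storage must be
 * allocated together with the struct in a single allocation.
 */
static MultiFDPages_t *multifd_pages_init(uint32_t n)
{
    MultiFDPages_t *pages = calloc(1, multifd_ram_payload_size(n));

    pages->allocated = n;
    return pages;
}
```

A single free() (g_free() in QEMU) then releases the struct and its offset array together, which is what allows the pages payload to live by value inside MultiFDSendData later in the series.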