diff mbox series

[1/6] vdpa: check for iova tree initialized at net_client_start

Message ID 20240111190222.496695-2-eperezma@redhat.com (mailing list archive)
State New, archived
Headers show
Series Move memory listener register to vhost_vdpa_init | expand

Commit Message

Eugenio Perez Martin Jan. 11, 2024, 7:02 p.m. UTC
To map the guest memory while it is being migrated, we need to create the
iova_tree whenever the destination uses x-svq=on. Check that we do not
override an already-initialized tree.

The function vhost_vdpa_net_client_stop clears it when the device is
stopped. If the guest starts the device again, the iova tree is
recreated by vhost_vdpa_net_data_start_first or vhost_vdpa_net_cvq_start
if needed, so the old behavior is kept.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Si-Wei Liu Jan. 31, 2024, 10:06 a.m. UTC | #1
Hi Eugenio,

Maybe there's some patch missing, but I saw this core dump when x-svq=on 
is specified while waiting for the incoming migration on destination host:

(gdb) bt
#0  0x00005643b24cc13c in vhost_iova_tree_map_alloc (tree=0x0, map=map@entry=0x7ffd58c54830) at ../hw/virtio/vhost-iova-tree.c:89
#1  0x00005643b234f193 in vhost_vdpa_listener_region_add (listener=0x5643b4403fd8, section=0x7ffd58c548d0) at /home/opc/qemu-upstream/include/qemu/int128.h:34
#2  0x00005643b24e6a61 in address_space_update_topology_pass (as=as@entry=0x5643b35a3840 <address_space_memory>, old_view=old_view@entry=0x5643b442b5f0, new_view=new_view@entry=0x5643b44a2130, adding=adding@entry=true) at ../system/memory.c:1004
#3  0x00005643b24e6e60 in address_space_set_flatview (as=0x5643b35a3840 <address_space_memory>) at ../system/memory.c:1080
#4  0x00005643b24ea750 in memory_region_transaction_commit () at ../system/memory.c:1132
#5  0x00005643b24ea750 in memory_region_transaction_commit () at ../system/memory.c:1117
#6  0x00005643b241f4c1 in pc_memory_init (pcms=pcms@entry=0x5643b43c8400, system_memory=system_memory@entry=0x5643b43d18b0, rom_memory=rom_memory@entry=0x5643b449a960, pci_hole64_size=<optimized out>) at ../hw/i386/pc.c:954
#7  0x00005643b240d088 in pc_q35_init (machine=0x5643b43c8400) at ../hw/i386/pc_q35.c:222
#8  0x00005643b21e1da8 in machine_run_board_init (machine=<optimized out>, mem_path=<optimized out>, errp=<optimized out>, errp@entry=0x5643b35b7958 <error_fatal>) at ../hw/core/machine.c:1509
#9  0x00005643b237c0f6 in qmp_x_exit_preconfig () at ../system/vl.c:2613
#10 0x00005643b237c0f6 in qmp_x_exit_preconfig (errp=<optimized out>) at ../system/vl.c:2704
#11 0x00005643b237fcdd in qemu_init (errp=<optimized out>) at ../system/vl.c:3753
#12 0x00005643b237fcdd in qemu_init (argc=<optimized out>, argv=<optimized out>) at ../system/vl.c:3753
#13 0x00005643b2158249 in main (argc=<optimized out>, argv=<optimized out>) at ../system/main.c:47

Shall we create the iova tree early, during vdpa dev init, for the
x-svq=on case?

+    if (s->always_svq) {
+        /* iova tree is needed because of SVQ */
+        shared->iova_tree = vhost_iova_tree_new(shared->iova_range.first,
+                                                shared->iova_range.last);
+    }
+

Regards,
-Siwei

On 1/11/2024 11:02 AM, Eugenio Pérez wrote:
> To map the guest memory while it is migrating we need to create the
> iova_tree, as long as the destination uses x-svq=on. Checking to not
> override it.
>
> The function vhost_vdpa_net_client_stop clear it if the device is
> stopped. If the guest starts the device again, the iova tree is
> recreated by vhost_vdpa_net_data_start_first or vhost_vdpa_net_cvq_start
> if needed, so old behavior is kept.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>   net/vhost-vdpa.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 3726ee5d67..e11b390466 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -341,7 +341,9 @@ static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
>   
>       migration_add_notifier(&s->migration_state,
>                              vdpa_net_migration_state_notifier);
> -    if (v->shadow_vqs_enabled) {
> +
> +    /* iova_tree may be initialized by vhost_vdpa_net_load_setup */
> +    if (v->shadow_vqs_enabled && !v->shared->iova_tree) {
>           v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
>                                                      v->shared->iova_range.last);
>       }
Eugenio Perez Martin Feb. 1, 2024, 10:14 a.m. UTC | #2
On Wed, Jan 31, 2024 at 11:07 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> Hi Eugenio,
>
> Maybe there's some patch missing, but I saw this core dump when x-svq=on
> is specified while waiting for the incoming migration on destination host:
>
> (gdb) bt
> #0  0x00005643b24cc13c in vhost_iova_tree_map_alloc (tree=0x0,
> map=map@entry=0x7ffd58c54830) at ../hw/virtio/vhost-iova-tree.c:89
> #1  0x00005643b234f193 in vhost_vdpa_listener_region_add
> (listener=0x5643b4403fd8, section=0x7ffd58c548d0) at
> /home/opc/qemu-upstream/include/qemu/int128.h:34
> #2  0x00005643b24e6a61 in address_space_update_topology_pass
> (as=as@entry=0x5643b35a3840 <address_space_memory>,
> old_view=old_view@entry=0x5643b442b5f0,
> new_view=new_view@entry=0x5643b44a2130, adding=adding@entry=true) at
> ../system/memory.c:1004
> #3  0x00005643b24e6e60 in address_space_set_flatview (as=0x5643b35a3840
> <address_space_memory>) at ../system/memory.c:1080
> #4  0x00005643b24ea750 in memory_region_transaction_commit () at
> ../system/memory.c:1132
> #5  0x00005643b24ea750 in memory_region_transaction_commit () at
> ../system/memory.c:1117
> #6  0x00005643b241f4c1 in pc_memory_init
> (pcms=pcms@entry=0x5643b43c8400,
> system_memory=system_memory@entry=0x5643b43d18b0,
> rom_memory=rom_memory@entry=0x5643b449a960, pci_hole64_size=<optimized
> out>) at ../hw/i386/pc.c:954
> #7  0x00005643b240d088 in pc_q35_init (machine=0x5643b43c8400) at
> ../hw/i386/pc_q35.c:222
> #8  0x00005643b21e1da8 in machine_run_board_init (machine=<optimized
> out>, mem_path=<optimized out>, errp=<optimized out>,
> errp@entry=0x5643b35b7958 <error_fatal>)
>      at ../hw/core/machine.c:1509
> #9  0x00005643b237c0f6 in qmp_x_exit_preconfig () at ../system/vl.c:2613
> #10 0x00005643b237c0f6 in qmp_x_exit_preconfig (errp=<optimized out>) at
> ../system/vl.c:2704
> #11 0x00005643b237fcdd in qemu_init (errp=<optimized out>) at
> ../system/vl.c:3753
> #12 0x00005643b237fcdd in qemu_init (argc=<optimized out>,
> argv=<optimized out>) at ../system/vl.c:3753
> #13 0x00005643b2158249 in main (argc=<optimized out>, argv=<optimized
> out>) at ../system/main.c:47
>
> Shall we create the iova tree early during vdpa dev int for the x-svq=on
> case?
>
> +    if (s->always_svq) {
> +        /* iova tree is needed because of SVQ */
> +        shared->iova_tree = vhost_iova_tree_new(shared->iova_range.first,
> + shared->iova_range.last);
> +    }
> +
>

Right.

As your series will make the maps permanent in the best case, I think
it is wise to follow the same path as with the memory listener and
always create the tree, not conditionally.

I'll send a new series by today with this fix and the other one that
Lei detected.

Thanks!


Patch

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3726ee5d67..e11b390466 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -341,7 +341,9 @@ static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
 
     migration_add_notifier(&s->migration_state,
                            vdpa_net_migration_state_notifier);
-    if (v->shadow_vqs_enabled) {
+
+    /* iova_tree may be initialized by vhost_vdpa_net_load_setup */
+    if (v->shadow_vqs_enabled && !v->shared->iova_tree) {
         v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
                                                    v->shared->iova_range.last);
     }
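For illustration, the guard this patch adds can be contrasted with the unconditional creation at device init suggested in the review. The sketch below is self-contained toy code, not QEMU source: VhostIOVATree, VhostVDPAShared, and both functions are simplified stand-ins, and only the control flow mirrors the patch and the thread's conclusion.

```c
/* Minimal sketch (stand-in types, not QEMU code). */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct VhostIOVATree {
    uint64_t first, last;            /* usable IOVA range */
} VhostIOVATree;

typedef struct VhostVDPAShared {
    struct { uint64_t first, last; } iova_range;
    VhostIOVATree *iova_tree;        /* NULL until created */
} VhostVDPAShared;

static VhostIOVATree *vhost_iova_tree_new(uint64_t first, uint64_t last)
{
    VhostIOVATree *t = calloc(1, sizeof(*t));
    t->first = first;
    t->last = last;
    return t;
}

/* This patch: create lazily at data_start_first, guarded so a tree set
 * up earlier (e.g. by vhost_vdpa_net_load_setup) is not overwritten. */
static void data_start_first(VhostVDPAShared *shared, bool shadow_vqs_enabled)
{
    if (shadow_vqs_enabled && !shared->iova_tree) {
        shared->iova_tree = vhost_iova_tree_new(shared->iova_range.first,
                                                shared->iova_range.last);
    }
}

/* Direction agreed in the thread: create the tree at init, so the
 * memory listener never dereferences a NULL tree (the crash in the
 * backtrace, tree=0x0 in vhost_iova_tree_map_alloc). */
static void vdpa_init(VhostVDPAShared *shared)
{
    shared->iova_tree = vhost_iova_tree_new(shared->iova_range.first,
                                            shared->iova_range.last);
}
```

With init-time creation, the lazy path's guard simply finds the tree already present and leaves it alone, which is why the two changes compose.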