Message ID | 20240617-stage-vdpa-vq-precreate-v1-20-8c0483f0ca2a@nvidia.com (mailing list archive)
---|---
State | Superseded
Series | vdpa/mlx5: Pre-create HW VQs to reduce LM downtime
Context | Check | Description
---|---|---
netdev/tree_selection | success | Not a local patch
On Mon, Jun 17, 2024 at 5:09 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> Currently, hardware VQs are created right when the vdpa device gets into
> DRIVER_OK state. That is easier because most of the VQ state is known by
> then.
>
> This patch switches to creating all VQs and their associated resources
> at device creation time. The motivation is to reduce the vdpa device
> live migration downtime by moving the expensive operation of creating
> all the hardware VQs and their associated resources out of downtime on
> the destination VM.
>
> The VQs are now created in a blank state. The VQ configuration will
> happen later, on DRIVER_OK. Then the configuration will be applied when
> the VQs are moved to the Ready state.
>
> When .set_vq_ready() is called on a VQ before DRIVER_OK, special care is
> needed: now that the VQ is already created a resume_vq() will be
> triggered too early when no mr has been configured yet. Skip calling
> resume_vq() in this case, let it be handled during DRIVER_OK.
>
> For virtio-vdpa, the device configuration is done earlier during
> .vdpa_dev_add() by vdpa_register_device(). Avoid calling
> setup_vq_resources() a second time in that case.
>

I guess this happens if virtio_vdpa is already loaded, but I cannot
see how this is different here. Apart from the IOTLB, what else does
it change from the mlx5_vdpa POV?
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 37 ++++++++++++++++++++++++++++++++-----
>  1 file changed, 32 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 249b5afbe34a..b2836fd3d1dd 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2444,7 +2444,7 @@ static void mlx5_vdpa_set_vq_ready(struct vdpa_device *vdev, u16 idx, bool ready
>  	mvq = &ndev->vqs[idx];
>  	if (!ready) {
>  		suspend_vq(ndev, mvq);
> -	} else {
> +	} else if (mvdev->status & VIRTIO_CONFIG_S_DRIVER_OK) {
>  		if (resume_vq(ndev, mvq))
>  			ready = false;
>  	}
> @@ -3078,10 +3078,18 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
>  			goto err_setup;
>  		}
>  		register_link_notifier(ndev);
> -		err = setup_vq_resources(ndev, true);
> -		if (err) {
> -			mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
> -			goto err_driver;
> +		if (ndev->setup) {
> +			err = resume_vqs(ndev);
> +			if (err) {
> +				mlx5_vdpa_warn(mvdev, "failed to resume VQs\n");
> +				goto err_driver;
> +			}
> +		} else {
> +			err = setup_vq_resources(ndev, true);
> +			if (err) {
> +				mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
> +				goto err_driver;
> +			}
>  		}
>  	} else {
>  		mlx5_vdpa_warn(mvdev, "did not expect DRIVER_OK to be cleared\n");
> @@ -3142,6 +3150,7 @@ static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
>  		if (mlx5_vdpa_create_dma_mr(mvdev))
>  			mlx5_vdpa_warn(mvdev, "create MR failed\n");
>  	}
> +	setup_vq_resources(ndev, false);
>  	up_write(&ndev->reslock);
>
>  	return 0;
> @@ -3836,8 +3845,21 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>  		goto err_reg;
>
>  	mgtdev->ndev = ndev;
> +
> +	/* For virtio-vdpa, the device was set up during device register. */
> +	if (ndev->setup)
> +		return 0;
> +
> +	down_write(&ndev->reslock);
> +	err = setup_vq_resources(ndev, false);
> +	up_write(&ndev->reslock);
> +	if (err)
> +		goto err_setup_vq_res;
> +
>  	return 0;
>
> +err_setup_vq_res:
> +	_vdpa_unregister_device(&mvdev->vdev);
>  err_reg:
>  	destroy_workqueue(mvdev->wq);
>  err_res2:
> @@ -3863,6 +3885,11 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
>
>  	unregister_link_notifier(ndev);
>  	_vdpa_unregister_device(dev);
> +
> +	down_write(&ndev->reslock);
> +	teardown_vq_resources(ndev);
> +	up_write(&ndev->reslock);
> +
>  	wq = mvdev->wq;
>  	mvdev->wq = NULL;
>  	destroy_workqueue(wq);
>
> --
> 2.45.1
>
On Wed, 2024-06-19 at 17:54 +0200, Eugenio Perez Martin wrote:
> On Mon, Jun 17, 2024 at 5:09 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> >
> > Currently, hardware VQs are created right when the vdpa device gets into
> > DRIVER_OK state. That is easier because most of the VQ state is known by
> > then.
> >
> > This patch switches to creating all VQs and their associated resources
> > at device creation time. The motivation is to reduce the vdpa device
> > live migration downtime by moving the expensive operation of creating
> > all the hardware VQs and their associated resources out of downtime on
> > the destination VM.
> >
> > The VQs are now created in a blank state. The VQ configuration will
> > happen later, on DRIVER_OK. Then the configuration will be applied when
> > the VQs are moved to the Ready state.
> >
> > When .set_vq_ready() is called on a VQ before DRIVER_OK, special care is
> > needed: now that the VQ is already created a resume_vq() will be
> > triggered too early when no mr has been configured yet. Skip calling
> > resume_vq() in this case, let it be handled during DRIVER_OK.
> >
> > For virtio-vdpa, the device configuration is done earlier during
> > .vdpa_dev_add() by vdpa_register_device(). Avoid calling
> > setup_vq_resources() a second time in that case.
> >
>
> I guess this happens if virtio_vdpa is already loaded, but I cannot
> see how this is different here. Apart from the IOTLB, what else does
> it change from the mlx5_vdpa POV?
>
I don't understand your question, could you rephrase or provide more
context please?

Thanks,
Dragos

> > Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> > Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
> > ---
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 37 ++++++++++++++++++++++++++++++++-----
> >  1 file changed, 32 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > index 249b5afbe34a..b2836fd3d1dd 100644
> > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > @@ -2444,7 +2444,7 @@ static void mlx5_vdpa_set_vq_ready(struct vdpa_device *vdev, u16 idx, bool ready
> >  	mvq = &ndev->vqs[idx];
> >  	if (!ready) {
> >  		suspend_vq(ndev, mvq);
> > -	} else {
> > +	} else if (mvdev->status & VIRTIO_CONFIG_S_DRIVER_OK) {
> >  		if (resume_vq(ndev, mvq))
> >  			ready = false;
> >  	}
> > @@ -3078,10 +3078,18 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
> >  			goto err_setup;
> >  		}
> >  		register_link_notifier(ndev);
> > -		err = setup_vq_resources(ndev, true);
> > -		if (err) {
> > -			mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
> > -			goto err_driver;
> > +		if (ndev->setup) {
> > +			err = resume_vqs(ndev);
> > +			if (err) {
> > +				mlx5_vdpa_warn(mvdev, "failed to resume VQs\n");
> > +				goto err_driver;
> > +			}
> > +		} else {
> > +			err = setup_vq_resources(ndev, true);
> > +			if (err) {
> > +				mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
> > +				goto err_driver;
> > +			}
> >  		}
> >  	} else {
> >  		mlx5_vdpa_warn(mvdev, "did not expect DRIVER_OK to be cleared\n");
> > @@ -3142,6 +3150,7 @@ static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
> >  		if (mlx5_vdpa_create_dma_mr(mvdev))
> >  			mlx5_vdpa_warn(mvdev, "create MR failed\n");
> >  	}
> > +	setup_vq_resources(ndev, false);
> >  	up_write(&ndev->reslock);
> >
> >  	return 0;
> > @@ -3836,8 +3845,21 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> >  		goto err_reg;
> >
> >  	mgtdev->ndev = ndev;
> > +
> > +	/* For virtio-vdpa, the device was set up during device register. */
> > +	if (ndev->setup)
> > +		return 0;
> > +
> > +	down_write(&ndev->reslock);
> > +	err = setup_vq_resources(ndev, false);
> > +	up_write(&ndev->reslock);
> > +	if (err)
> > +		goto err_setup_vq_res;
> > +
> >  	return 0;
> >
> > +err_setup_vq_res:
> > +	_vdpa_unregister_device(&mvdev->vdev);
> >  err_reg:
> >  	destroy_workqueue(mvdev->wq);
> >  err_res2:
> > @@ -3863,6 +3885,11 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
> >
> >  	unregister_link_notifier(ndev);
> >  	_vdpa_unregister_device(dev);
> > +
> > +	down_write(&ndev->reslock);
> > +	teardown_vq_resources(ndev);
> > +	up_write(&ndev->reslock);
> > +
> >  	wq = mvdev->wq;
> >  	mvdev->wq = NULL;
> >  	destroy_workqueue(wq);
> >
> > --
> > 2.45.1
> >
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 249b5afbe34a..b2836fd3d1dd 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2444,7 +2444,7 @@ static void mlx5_vdpa_set_vq_ready(struct vdpa_device *vdev, u16 idx, bool ready
 	mvq = &ndev->vqs[idx];
 	if (!ready) {
 		suspend_vq(ndev, mvq);
-	} else {
+	} else if (mvdev->status & VIRTIO_CONFIG_S_DRIVER_OK) {
 		if (resume_vq(ndev, mvq))
 			ready = false;
 	}
@@ -3078,10 +3078,18 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
 			goto err_setup;
 		}
 		register_link_notifier(ndev);
-		err = setup_vq_resources(ndev, true);
-		if (err) {
-			mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
-			goto err_driver;
+		if (ndev->setup) {
+			err = resume_vqs(ndev);
+			if (err) {
+				mlx5_vdpa_warn(mvdev, "failed to resume VQs\n");
+				goto err_driver;
+			}
+		} else {
+			err = setup_vq_resources(ndev, true);
+			if (err) {
+				mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
+				goto err_driver;
+			}
 		}
 	} else {
 		mlx5_vdpa_warn(mvdev, "did not expect DRIVER_OK to be cleared\n");
@@ -3142,6 +3150,7 @@ static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
 		if (mlx5_vdpa_create_dma_mr(mvdev))
 			mlx5_vdpa_warn(mvdev, "create MR failed\n");
 	}
+	setup_vq_resources(ndev, false);
 	up_write(&ndev->reslock);
 
 	return 0;
@@ -3836,8 +3845,21 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 		goto err_reg;
 
 	mgtdev->ndev = ndev;
+
+	/* For virtio-vdpa, the device was set up during device register. */
+	if (ndev->setup)
+		return 0;
+
+	down_write(&ndev->reslock);
+	err = setup_vq_resources(ndev, false);
+	up_write(&ndev->reslock);
+	if (err)
+		goto err_setup_vq_res;
+
 	return 0;
 
+err_setup_vq_res:
+	_vdpa_unregister_device(&mvdev->vdev);
 err_reg:
 	destroy_workqueue(mvdev->wq);
 err_res2:
@@ -3863,6 +3885,11 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
 
 	unregister_link_notifier(ndev);
 	_vdpa_unregister_device(dev);
+
+	down_write(&ndev->reslock);
+	teardown_vq_resources(ndev);
+	up_write(&ndev->reslock);
+
 	wq = mvdev->wq;
 	mvdev->wq = NULL;
 	destroy_workqueue(wq);