| Message ID | 20240617-stage-vdpa-vq-precreate-v1-8-8c0483f0ca2a@nvidia.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | vdpa/mlx5: Pre-create HW VQs to reduce LM downtime |
| Context | Check | Description |
|---|---|---|
| netdev/tree_selection | success | Not a local patch |
On Mon, Jun 17, 2024 at 5:08 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> The hardware VQ configuration is mirrored by data in struct
> mlx5_vdpa_virtqueue. Instead of clearing just a few fields at reset,
> fully clear the struct and initialize with the appropriate default
> values.
>
> As clear_vqs_ready() is used only during reset, get rid of it.
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>

Acked-by: Eugenio Pérez <eperezma@redhat.com>

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 16 +++-------------
>  1 file changed, 3 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index c8b5c87f001d..de013b5a2815 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2941,18 +2941,6 @@ static void teardown_vq_resources(struct mlx5_vdpa_net *ndev)
>          ndev->setup = false;
>  }
>
> -static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
> -{
> -        int i;
> -
> -        for (i = 0; i < ndev->mvdev.max_vqs; i++) {
> -                ndev->vqs[i].ready = false;
> -                ndev->vqs[i].modified_fields = 0;
> -        }
> -
> -        ndev->mvdev.cvq.ready = false;
> -}
> -
>  static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
>  {
>          struct mlx5_control_vq *cvq = &mvdev->cvq;
> @@ -3035,12 +3023,14 @@ static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
>          down_write(&ndev->reslock);
>          unregister_link_notifier(ndev);
>          teardown_vq_resources(ndev);
> -        clear_vqs_ready(ndev);
> +        init_mvqs(ndev);

Nitpick / suggestion if you have to send a v2. The init_mvqs function
name sounds like it can allocate stuff that needs to be released to
me. But I'm very bad at naming :). Maybe something like
"mvqs_set_defaults" or similar?

> +
>          if (flags & VDPA_RESET_F_CLEAN_MAP)
>                  mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
>          ndev->mvdev.status = 0;
>          ndev->mvdev.suspended = false;
>          ndev->cur_num_vqs = MLX5V_DEFAULT_VQ_COUNT;
> +        ndev->mvdev.cvq.ready = false;
>          ndev->mvdev.cvq.received_desc = 0;
>          ndev->mvdev.cvq.completed_desc = 0;
>          memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
>
> --
> 2.45.1
>
On Wed, 2024-06-19 at 13:28 +0200, Eugenio Perez Martin wrote:
> On Mon, Jun 17, 2024 at 5:08 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> >
> > The hardware VQ configuration is mirrored by data in struct
> > mlx5_vdpa_virtqueue. Instead of clearing just a few fields at reset,
> > fully clear the struct and initialize with the appropriate default
> > values.
> >
> > As clear_vqs_ready() is used only during reset, get rid of it.
> >
> > Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> > Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
>
> Acked-by: Eugenio Pérez <eperezma@redhat.com>
>
> > ---
> >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 16 +++-------------
> >  1 file changed, 3 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > index c8b5c87f001d..de013b5a2815 100644
> > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > @@ -2941,18 +2941,6 @@ static void teardown_vq_resources(struct mlx5_vdpa_net *ndev)
> >          ndev->setup = false;
> >  }
> >
> > -static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
> > -{
> > -        int i;
> > -
> > -        for (i = 0; i < ndev->mvdev.max_vqs; i++) {
> > -                ndev->vqs[i].ready = false;
> > -                ndev->vqs[i].modified_fields = 0;
> > -        }
> > -
> > -        ndev->mvdev.cvq.ready = false;
> > -}
> > -
> >  static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
> >  {
> >          struct mlx5_control_vq *cvq = &mvdev->cvq;
> > @@ -3035,12 +3023,14 @@ static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
> >          down_write(&ndev->reslock);
> >          unregister_link_notifier(ndev);
> >          teardown_vq_resources(ndev);
> > -        clear_vqs_ready(ndev);
> > +        init_mvqs(ndev);
>
> Nitpick / suggestion if you have to send a v2. The init_mvqs function
> name sounds like it can allocate stuff that needs to be released to
> me. But I'm very bad at naming :). Maybe something like
> "mvqs_set_defaults" or similar?

Makes sense. I think I will call it mvqs_reset / reset_mvqs to keep
things consistent.

Thanks,
Dragos

>
> > +
> >          if (flags & VDPA_RESET_F_CLEAN_MAP)
> >                  mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
> >          ndev->mvdev.status = 0;
> >          ndev->mvdev.suspended = false;
> >          ndev->cur_num_vqs = MLX5V_DEFAULT_VQ_COUNT;
> > +        ndev->mvdev.cvq.ready = false;
> >          ndev->mvdev.cvq.received_desc = 0;
> >          ndev->mvdev.cvq.completed_desc = 0;
> >          memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
> >
> > --
> > 2.45.1
> >
>
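A minimal sketch of the helper shape being discussed above, using the reset_mvqs name Dragos proposes. The body of the driver's real init_mvqs() is not part of this patch, so apart from the fields visible in the diff (ready, modified_fields, max_vqs) the restored defaults below (index, ndev back-pointer) are assumptions about struct mlx5_vdpa_virtqueue, not the actual implementation.

```c
/*
 * Illustrative sketch only -- not code from this patch. It shows the
 * "fully clear the struct, then restore defaults" pattern described in
 * the commit message, under the reset_mvqs name suggested in the thread.
 * The index/ndev defaults are assumptions; the real helper in
 * mlx5_vnet.c may restore different fields.
 */
static void reset_mvqs(struct mlx5_vdpa_net *ndev)
{
	int i;

	for (i = 0; i < ndev->mvdev.max_vqs; i++) {
		struct mlx5_vdpa_virtqueue *mvq = &ndev->vqs[i];

		/* Clears ready, modified_fields and any other stale VQ state. */
		memset(mvq, 0, sizeof(*mvq));
		mvq->index = i;    /* assumed default: preserve the VQ index */
		mvq->ndev = ndev;  /* assumed default: back-pointer to the net device */
	}
}
```

Compared with the removed clear_vqs_ready(), which reset only ready and modified_fields, a memset-based helper cannot silently miss newly added struct members; only the non-zero defaults need to be re-established afterwards.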
```diff
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index c8b5c87f001d..de013b5a2815 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2941,18 +2941,6 @@ static void teardown_vq_resources(struct mlx5_vdpa_net *ndev)
         ndev->setup = false;
 }
 
-static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
-{
-        int i;
-
-        for (i = 0; i < ndev->mvdev.max_vqs; i++) {
-                ndev->vqs[i].ready = false;
-                ndev->vqs[i].modified_fields = 0;
-        }
-
-        ndev->mvdev.cvq.ready = false;
-}
-
 static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
 {
         struct mlx5_control_vq *cvq = &mvdev->cvq;
@@ -3035,12 +3023,14 @@ static int mlx5_vdpa_compat_reset(struct vdpa_device *vdev, u32 flags)
         down_write(&ndev->reslock);
         unregister_link_notifier(ndev);
         teardown_vq_resources(ndev);
-        clear_vqs_ready(ndev);
+        init_mvqs(ndev);
+
         if (flags & VDPA_RESET_F_CLEAN_MAP)
                 mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
         ndev->mvdev.status = 0;
         ndev->mvdev.suspended = false;
         ndev->cur_num_vqs = MLX5V_DEFAULT_VQ_COUNT;
+        ndev->mvdev.cvq.ready = false;
         ndev->mvdev.cvq.received_desc = 0;
         ndev->mvdev.cvq.completed_desc = 0;
         memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
```