| Message ID | 20240617-stage-vdpa-vq-precreate-v1-12-8c0483f0ca2a@nvidia.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | vdpa/mlx5: Pre-create HW VQs to reduce LM downtime |
On Mon, Jun 17, 2024 at 5:09 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> Currently rqt_size is initialized during device flag configuration.
> That's because it is the earliest moment when the device knows whether
> MQ (multi queue) is on or off.
>
> Shift this configuration earlier to device creation time. This implies
> that non-MQ devices will have a larger RQT size. But the configuration
> will still be correct.
>
> This is done in preparation for the pre-creation of hardware virtqueues
> at device add time. When that change is added, the RQT will be created
> at device creation time, so it needs to be initialized to its max size.
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>

Acked-by: Eugenio Pérez <eperezma@redhat.com>

> ---
>  drivers/vdpa/mlx5/net/mlx5_vnet.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 1181e0ac3671..0201c6fe61e1 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2731,10 +2731,6 @@ static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
>                 return err;
>
>         ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
> -       if (ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_MQ))
> -               ndev->rqt_size = mlx5vdpa16_to_cpu(mvdev, ndev->config.max_virtqueue_pairs);
> -       else
> -               ndev->rqt_size = 1;
>
>         /* Interested in changes of vq features only. */
>         if (get_features(old_features) != get_features(mvdev->actual_features)) {
> @@ -3719,8 +3715,12 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>                 goto err_alloc;
>         }
>
> -       if (device_features & BIT_ULL(VIRTIO_NET_F_MQ))
> +       if (device_features & BIT_ULL(VIRTIO_NET_F_MQ)) {
>                 config->max_virtqueue_pairs = cpu_to_mlx5vdpa16(mvdev, max_vqs / 2);
> +               ndev->rqt_size = max_vqs / 2;
> +       } else {
> +               ndev->rqt_size = 1;
> +       }
>
>         ndev->mvdev.mlx_features = device_features;
>         mvdev->vdev.dma_dev = &mdev->pdev->dev;
>
> --
> 2.45.1
>
```diff
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 1181e0ac3671..0201c6fe61e1 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2731,10 +2731,6 @@ static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
 		return err;
 
 	ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
-	if (ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_MQ))
-		ndev->rqt_size = mlx5vdpa16_to_cpu(mvdev, ndev->config.max_virtqueue_pairs);
-	else
-		ndev->rqt_size = 1;
 
 	/* Interested in changes of vq features only. */
 	if (get_features(old_features) != get_features(mvdev->actual_features)) {
@@ -3719,8 +3715,12 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 		goto err_alloc;
 	}
 
-	if (device_features & BIT_ULL(VIRTIO_NET_F_MQ))
+	if (device_features & BIT_ULL(VIRTIO_NET_F_MQ)) {
 		config->max_virtqueue_pairs = cpu_to_mlx5vdpa16(mvdev, max_vqs / 2);
+		ndev->rqt_size = max_vqs / 2;
+	} else {
+		ndev->rqt_size = 1;
+	}
 
 	ndev->mvdev.mlx_features = device_features;
 	mvdev->vdev.dma_dev = &mdev->pdev->dev;
```
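The sizing rule the patch moves to device-add time can be summarized in a minimal stand-alone sketch. This is not the driver code: `rqt_size_at_dev_add()` is a hypothetical helper, and only the VIRTIO_NET_F_MQ check and the `max_vqs / 2` relationship (data virtqueues come in RX/TX pairs, so the number of queue pairs is half the data VQ count) are taken from the diff above.

```c
#include <stdint.h>
#include <stdio.h>

/* Bit number of VIRTIO_NET_F_MQ in the virtio-net feature space. */
#define VIRTIO_NET_F_MQ 22

/*
 * Illustrative only: rqt_size is now fixed at device-add time from the
 * management-layer max_vqs, instead of being derived later from the
 * negotiated features in set_driver_features().
 */
static uint16_t rqt_size_at_dev_add(uint64_t device_features, uint16_t max_vqs)
{
	if (device_features & (1ULL << VIRTIO_NET_F_MQ))
		return max_vqs / 2;	/* one RQT entry per queue pair */
	return 1;			/* non-MQ device: single pair */
}

int main(void)
{
	printf("MQ on,  16 data VQs -> rqt_size %u\n",
	       rqt_size_at_dev_add(1ULL << VIRTIO_NET_F_MQ, 16));
	printf("MQ off, 16 data VQs -> rqt_size %u\n",
	       rqt_size_at_dev_add(0, 16));
	return 0;
}
```

As the commit message notes, a non-MQ device now gets its RQT sized for the maximum queue count rather than for a single pair; the configuration remains correct, it is just no longer trimmed to the negotiated features.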