Message ID | 20230912130132.561193-10-dtatulea@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | vdpa: Add support for vq descriptor mappings
On Tue, Sep 12, 2023 at 9:02 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> This patch adapts the mr creation/deletion code to be able to work with
> any given mr struct pointer. All the APIs are adapted to take an extra
> parameter for the mr.
>
> mlx5_vdpa_create/delete_mr doesn't need an ASID parameter anymore. The
> check is done in the caller instead (mlx5_set_map).
>
> This change is needed for a followup patch which will introduce an
> additional mr for the vq descriptor data.
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> ---

Thinking of this decoupling, I have a question.

We advertise 2 address spaces and 2 groups. So we actually don't know,
for example, which address space will be used by dvq.

And we actually allow user space to do something like:

set_group_asid(dvq_group, 0)
set_map(0)
set_group_asid(dvq_group, 1)
set_map(1)

I wonder if the decoupling in this patch can work, and why.

It looks to me the easiest way is to let each AS be backed by an MR.
Then we don't even need to care about the dvq, cvq.

Thanks
On Tue, 2023-09-26 at 12:44 +0800, Jason Wang wrote:
> On Tue, Sep 12, 2023 at 9:02 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> >
> > [...]
>
> Thinking of this decoupling, I have a question.
>
> We advertise 2 address spaces and 2 groups. So we actually don't know,
> for example, which address space will be used by dvq.
>
> And we actually allow user space to do something like:
>
> set_group_asid(dvq_group, 0)
> set_map(0)
> set_group_asid(dvq_group, 1)
> set_map(1)
>
> I wonder if the decoupling in this patch can work, and why.
>
This scenario could indeed work, especially if you look at the 13th patch [0]
where hw support is added. Are you wondering if this should work at all or if
it should be blocked?

> It looks to me the easiest way is to let each AS be backed by an MR.
> Then we don't even need to care about the dvq, cvq.

That's what this patch series does.

Thanks,
Dragos

[0] https://lore.kernel.org/virtualization/20230912130132.561193-14-dtatulea@nvidia.com/T/#u
On Tue, Sep 26, 2023 at 3:21 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> On Tue, 2023-09-26 at 12:44 +0800, Jason Wang wrote:
> > On Tue, Sep 12, 2023 at 9:02 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> > >
> > > [...]
> >
> > Thinking of this decoupling, I have a question.
> >
> > We advertise 2 address spaces and 2 groups. So we actually don't know,
> > for example, which address space will be used by dvq.
> >
> > And we actually allow user space to do something like:
> >
> > set_group_asid(dvq_group, 0)
> > set_map(0)
> > set_group_asid(dvq_group, 1)
> > set_map(1)
> >
> > I wonder if the decoupling in this patch can work, and why.
> >
> This scenario could indeed work, especially if you look at the 13th patch [0]
> where hw support is added. Are you wondering if this should work at all or if
> it should be blocked?

It would be great if it can work with the following patches. But at
least for this patch, it seems not.

For example, what happens if we switch back to group 0 for dvq?

set_group_asid(dvq_group, 0)
set_map(0)
set_group_asid(dvq_group, 1)
set_map(1)
// here we destroy the mr created for asid 0
set_group_asid(dvq_group, 0)

Btw, if this is a new issue, I haven't checked whether or not it
exists before this series (if yes, we can fix on top).

> > It looks to me the easiest way is to let each AS be backed by an MR.
> > Then we don't even need to care about the dvq, cvq.
> That's what this patch series does.

Good to know this, I will review the series.

Thanks

> Thanks,
> Dragos
>
> [0] https://lore.kernel.org/virtualization/20230912130132.561193-14-dtatulea@nvidia.com/T/#u
On Sun, 2023-10-08 at 12:25 +0800, Jason Wang wrote:
> On Tue, Sep 26, 2023 at 3:21 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> >
> > On Tue, 2023-09-26 at 12:44 +0800, Jason Wang wrote:
> > > On Tue, Sep 12, 2023 at 9:02 PM Dragos Tatulea <dtatulea@nvidia.com>
> > > wrote:
> > > >
> > > > [...]
> > >
> > > Thinking of this decoupling, I have a question.
> > >
> > > We advertise 2 address spaces and 2 groups. So we actually don't know,
> > > for example, which address space will be used by dvq.
> > >
> > > And we actually allow user space to do something like:
> > >
> > > set_group_asid(dvq_group, 0)
> > > set_map(0)
> > > set_group_asid(dvq_group, 1)
> > > set_map(1)
> > >
> > > I wonder if the decoupling in this patch can work, and why.
> > >
> > This scenario could indeed work, especially if you look at the 13th patch
> > [0] where hw support is added. Are you wondering if this should work at
> > all or if it should be blocked?
>
> It would be great if it can work with the following patches. But at
> least for this patch, it seems not.
>
> For example, what happens if we switch back to group 0 for dvq?
>
> set_group_asid(dvq_group, 0)
> set_map(0)
> set_group_asid(dvq_group, 1)
> set_map(1)
> // here we destroy the mr created for asid 0
> set_group_asid(dvq_group, 0)
>
If by destroy you mean .reset, I think it works: during .reset the mapping in
ASID 0 is reset back to the DMA/physical map (mlx5_vdpa_create_dma_mr). Am I
missing something?

> Btw, if this is a new issue, I haven't checked whether or not it
> exists before this series (if yes, we can fix on top).
>
> > > It looks to me the easiest way is to let each AS be backed by an MR.
> > > Then we don't even need to care about the dvq, cvq.
> > That's what this patch series does.
>
> Good to know this, I will review the series.
>
I was planning to spin a v3 with Eugenio's suggestions. Should I wait for your
feedback before doing that?

Thanks,
Dragos
On Sun, Oct 8, 2023 at 8:05 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
>
> On Sun, 2023-10-08 at 12:25 +0800, Jason Wang wrote:
> > On Tue, Sep 26, 2023 at 3:21 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> > >
> > > On Tue, 2023-09-26 at 12:44 +0800, Jason Wang wrote:
> > > > On Tue, Sep 12, 2023 at 9:02 PM Dragos Tatulea <dtatulea@nvidia.com>
> > > > wrote:
> > > > >
> > > > > [...]
> > > >
> > > > Thinking of this decoupling, I have a question.
> > > >
> > > > We advertise 2 address spaces and 2 groups. So we actually don't know,
> > > > for example, which address space will be used by dvq.
> > > >
> > > > And we actually allow user space to do something like:
> > > >
> > > > set_group_asid(dvq_group, 0)
> > > > set_map(0)
> > > > set_group_asid(dvq_group, 1)
> > > > set_map(1)
> > > >
> > > > I wonder if the decoupling in this patch can work, and why.
> > > >
> > > This scenario could indeed work, especially if you look at the 13th
> > > patch [0] where hw support is added. Are you wondering if this should
> > > work at all or if it should be blocked?
> >
> > It would be great if it can work with the following patches. But at
> > least for this patch, it seems not.
> >
> > For example, what happens if we switch back to group 0 for dvq?
> >
> > set_group_asid(dvq_group, 0)
> > set_map(0)
> > set_group_asid(dvq_group, 1)
> > set_map(1)
> > // here we destroy the mr created for asid 0
> > set_group_asid(dvq_group, 0)
> >
> If by destroy you mean .reset,

It's not .reset. During the second map, the mr is destroyed by
mlx5_vdpa_change_map().

> I think it works: during .reset the mapping in
> ASID 0 is reset back to the DMA/physical map (mlx5_vdpa_create_dma_mr). Am I
> missing something?
>
> > Btw, if this is a new issue, I haven't checked whether or not it
> > exists before this series (if yes, we can fix on top).
> >
> > > > It looks to me the easiest way is to let each AS be backed by an MR.
> > > > Then we don't even need to care about the dvq, cvq.
> > > That's what this patch series does.
> >
> > Good to know this, I will review the series.
> >
> I was planning to spin a v3 with Eugenio's suggestions. Should I wait for
> your feedback before doing that?

You can post v3 and we can move the discussion there if you wish.

Thanks

> Thanks,
> Dragos
On Mon, 2023-10-09 at 14:39 +0800, Jason Wang wrote:
> On Sun, Oct 8, 2023 at 8:05 PM Dragos Tatulea <dtatulea@nvidia.com> wrote:
> >
> > On Sun, 2023-10-08 at 12:25 +0800, Jason Wang wrote:
> > > On Tue, Sep 26, 2023 at 3:21 PM Dragos Tatulea <dtatulea@nvidia.com>
> > > wrote:
> > > >
> > > > On Tue, 2023-09-26 at 12:44 +0800, Jason Wang wrote:
> > > > > On Tue, Sep 12, 2023 at 9:02 PM Dragos Tatulea <dtatulea@nvidia.com>
> > > > > wrote:
> > > > > >
> > > > > > [...]
> > > > >
> > > > > Thinking of this decoupling, I have a question.
> > > > >
> > > > > We advertise 2 address spaces and 2 groups. So we actually don't
> > > > > know, for example, which address space will be used by dvq.
> > > > >
> > > > > And we actually allow user space to do something like:
> > > > >
> > > > > set_group_asid(dvq_group, 0)
> > > > > set_map(0)
> > > > > set_group_asid(dvq_group, 1)
> > > > > set_map(1)
> > > > >
> > > > > I wonder if the decoupling in this patch can work, and why.
> > > > >
> > > > This scenario could indeed work, especially if you look at the 13th
> > > > patch [0] where hw support is added. Are you wondering if this should
> > > > work at all or if it should be blocked?
> > >
> > > It would be great if it can work with the following patches. But at
> > > least for this patch, it seems not.
> > >
> > > For example, what happens if we switch back to group 0 for dvq?
> > >
> > > set_group_asid(dvq_group, 0)
> > > set_map(0)
> > > set_group_asid(dvq_group, 1)
> > > set_map(1)
> > > // here we destroy the mr created for asid 0
> > > set_group_asid(dvq_group, 0)
> > >
> > If by destroy you mean .reset,
>
> It's not .reset. During the second map, the mr is destroyed by
> mlx5_vdpa_change_map().
>
Oh, now I understand what you mean. This is not the case anymore: this patch
series introduces one mr per asid, and mlx5_vdpa_change_map will only destroy
the old mr in the given asid. Before, there was one mr for all asids.

> > I think it works: during .reset the mapping in
> > ASID 0 is reset back to the DMA/physical map (mlx5_vdpa_create_dma_mr).
> > Am I missing something?
> >
> > > Btw, if this is a new issue, I haven't checked whether or not it
> > > exists before this series (if yes, we can fix on top).
> > >
> > > > > It looks to me the easiest way is to let each AS be backed by an MR.
> > > > > Then we don't even need to care about the dvq, cvq.
> > > > That's what this patch series does.
> > >
> > > Good to know this, I will review the series.
> > >
> > I was planning to spin a v3 with Eugenio's suggestions. Should I wait for
> > your feedback before doing that?
>
> You can post v3 and we can move the discussion there if you wish.
>
Ack.

Thanks,
Dragos
diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
index e1e6e7aba50e..01d4ee58ccb1 100644
--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
@@ -116,10 +116,12 @@ int mlx5_vdpa_create_mkey(struct mlx5_vdpa_dev *mvdev, u32 *mkey, u32 *in,
 int mlx5_vdpa_destroy_mkey(struct mlx5_vdpa_dev *mvdev, u32 mkey);
 int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
			     bool *change_map, unsigned int asid);
-int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
-			unsigned int asid);
+int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
+			struct mlx5_vdpa_mr *mr,
+			struct vhost_iotlb *iotlb);
 void mlx5_vdpa_destroy_mr_resources(struct mlx5_vdpa_dev *mvdev);
-void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev, unsigned int asid);
+void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev,
+			  struct mlx5_vdpa_mr *mr);
 int mlx5_vdpa_update_cvq_iotlb(struct mlx5_vdpa_dev *mvdev,
			       struct vhost_iotlb *iotlb,
			       unsigned int asid);
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
index 00dcce190a1f..6f29e8eaabb1 100644
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -301,10 +301,13 @@ static void unmap_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct
 	sg_free_table(&mr->sg_head);
 }

-static int add_direct_chain(struct mlx5_vdpa_dev *mvdev, u64 start, u64 size, u8 perm,
+static int add_direct_chain(struct mlx5_vdpa_dev *mvdev,
+			    struct mlx5_vdpa_mr *mr,
+			    u64 start,
+			    u64 size,
+			    u8 perm,
			    struct vhost_iotlb *iotlb)
 {
-	struct mlx5_vdpa_mr *mr = &mvdev->mr;
 	struct mlx5_vdpa_direct_mr *dmr;
 	struct mlx5_vdpa_direct_mr *n;
 	LIST_HEAD(tmp);
@@ -354,9 +357,10 @@ static int add_direct_chain(struct mlx5_vdpa_dev *mvdev, u64 start, u64 size, u8
  * indirect memory key that provides access to the enitre address space given
  * by iotlb.
  */
-static int create_user_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb)
+static int create_user_mr(struct mlx5_vdpa_dev *mvdev,
+			  struct mlx5_vdpa_mr *mr,
+			  struct vhost_iotlb *iotlb)
 {
-	struct mlx5_vdpa_mr *mr = &mvdev->mr;
 	struct mlx5_vdpa_direct_mr *dmr;
 	struct mlx5_vdpa_direct_mr *n;
 	struct vhost_iotlb_map *map;
@@ -384,7 +388,7 @@ static int create_user_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb
					       LOG_MAX_KLM_SIZE);
			mr->num_klms += nnuls;
		}
-		err = add_direct_chain(mvdev, ps, pe - ps, pperm, iotlb);
+		err = add_direct_chain(mvdev, mr, ps, pe - ps, pperm, iotlb);
		if (err)
			goto err_chain;
	}
@@ -393,7 +397,7 @@ static int create_user_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb
			pperm = map->perm;
		}
	}
-	err = add_direct_chain(mvdev, ps, pe - ps, pperm, iotlb);
+	err = add_direct_chain(mvdev, mr, ps, pe - ps, pperm, iotlb);
	if (err)
		goto err_chain;

@@ -489,13 +493,8 @@ static void destroy_user_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_mr *mr
	}
 }

-static void _mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev, unsigned int asid)
+static void _mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_mr *mr)
 {
-	struct mlx5_vdpa_mr *mr = &mvdev->mr;
-
-	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] != asid)
-		return;
-
 	if (!mr->initialized)
		return;

@@ -507,38 +506,33 @@ static void _mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev, unsigned int asid
 	mr->initialized = false;
 }

-void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev, unsigned int asid)
+void mlx5_vdpa_destroy_mr(struct mlx5_vdpa_dev *mvdev,
+			  struct mlx5_vdpa_mr *mr)
 {
-	struct mlx5_vdpa_mr *mr = &mvdev->mr;
-
 	mutex_lock(&mr->mkey_mtx);
-	_mlx5_vdpa_destroy_mr(mvdev, asid);
+	_mlx5_vdpa_destroy_mr(mvdev, mr);
	mutex_unlock(&mr->mkey_mtx);
 }

 void mlx5_vdpa_destroy_mr_resources(struct mlx5_vdpa_dev *mvdev)
 {
-	mlx5_vdpa_destroy_mr(mvdev, mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP]);
+	mlx5_vdpa_destroy_mr(mvdev, &mvdev->mr);

	prune_iotlb(mvdev);
 }

 static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
-				struct vhost_iotlb *iotlb,
-				unsigned int asid)
+				struct mlx5_vdpa_mr *mr,
+				struct vhost_iotlb *iotlb)
 {
-	struct mlx5_vdpa_mr *mr = &mvdev->mr;
 	int err;

-	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] != asid)
-		return 0;
-
 	if (mr->initialized)
		return 0;

	if (iotlb)
-		err = create_user_mr(mvdev, iotlb);
+		err = create_user_mr(mvdev, mr, iotlb);
	else
		err = create_dma_mr(mvdev, mr);

@@ -550,13 +544,14 @@ static int _mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
 	return 0;
 }

-int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
-			unsigned int asid)
+int mlx5_vdpa_create_mr(struct mlx5_vdpa_dev *mvdev,
+			struct mlx5_vdpa_mr *mr,
+			struct vhost_iotlb *iotlb)
 {
	int err;

	mutex_lock(&mvdev->mr.mkey_mtx);
-	err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
+	err = _mlx5_vdpa_create_mr(mvdev, mr, iotlb);
	mutex_unlock(&mvdev->mr.mkey_mtx);
	return err;
 }
@@ -574,7 +569,7 @@ int mlx5_vdpa_handle_set_map(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *io
		*change_map = true;
	}
	if (!*change_map)
-		err = _mlx5_vdpa_create_mr(mvdev, iotlb, asid);
+		err = _mlx5_vdpa_create_mr(mvdev, mr, iotlb);
	mutex_unlock(&mr->mkey_mtx);

	return err;
@@ -603,7 +598,7 @@ int mlx5_vdpa_create_dma_mr(struct mlx5_vdpa_dev *mvdev)
 {
	int err;

-	err = mlx5_vdpa_create_mr(mvdev, NULL, 0);
+	err = mlx5_vdpa_create_mr(mvdev, &mvdev->mr, NULL);
	if (err)
		return err;

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 4d759ab96319..4793b6a7ab54 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2644,8 +2644,8 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
		goto err_mr;

	teardown_driver(ndev);
-	mlx5_vdpa_destroy_mr(mvdev, asid);
-	err = mlx5_vdpa_create_mr(mvdev, iotlb, asid);
+	mlx5_vdpa_destroy_mr(mvdev, &mvdev->mr);
+	err = mlx5_vdpa_create_mr(mvdev, &mvdev->mr, iotlb);
	if (err)
		goto err_mr;

@@ -2660,7 +2660,7 @@ static int mlx5_vdpa_change_map(struct mlx5_vdpa_dev *mvdev,
 	return 0;

 err_setup:
-	mlx5_vdpa_destroy_mr(mvdev, asid);
+	mlx5_vdpa_destroy_mr(mvdev, &mvdev->mr);
 err_mr:
	return err;
 }
@@ -2878,6 +2878,9 @@ static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
 	bool change_map;
	int err;

+	if (mvdev->group2asid[MLX5_VDPA_DATAVQ_GROUP] != asid)
+		goto end;
+
	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map, asid);
	if (err) {
		mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
@@ -2890,6 +2893,7 @@ static int set_map_data(struct mlx5_vdpa_dev *mvdev, struct vhost_iotlb *iotlb,
		return err;
	}

+end:
	return mlx5_vdpa_update_cvq_iotlb(mvdev, iotlb, asid);
 }
This patch adapts the mr creation/deletion code to be able to work with
any given mr struct pointer. All the APIs are adapted to take an extra
parameter for the mr.

mlx5_vdpa_create/delete_mr doesn't need an ASID parameter anymore. The
check is done in the caller instead (mlx5_set_map).

This change is needed for a followup patch which will introduce an
additional mr for the vq descriptor data.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
---
 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  8 +++--
 drivers/vdpa/mlx5/core/mr.c        | 53 ++++++++++++++----------------
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 10 ++++--
 3 files changed, 36 insertions(+), 35 deletions(-)