Message ID | 1599842643-2553-2-git-send-email-mjrosato@linux.ibm.com |
---|---|
State | New, archived |
Series | vfio iommu: Add dma limit capability |
On Fri, 11 Sep 2020 12:44:03 -0400
Matthew Rosato <mjrosato@linux.ibm.com> wrote:

> Commit 492855939bdb ("vfio/type1: Limit DMA mappings per container")
> added the ability to limit the number of memory backed DMA mappings.
> However on s390x, when lazy mapping is in use, we use a very large
> number of concurrent mappings. Let's provide the limitation to
> userspace via the IOMMU info chain so that userspace can take
> appropriate mitigation.
>
> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 17 +++++++++++++++++
>  include/uapi/linux/vfio.h       | 16 ++++++++++++++++
>  2 files changed, 33 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 5fbf0c1..573c2c9 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -2609,6 +2609,20 @@ static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
>  	return vfio_info_add_capability(caps, &cap_mig.header, sizeof(cap_mig));
>  }
>
> +static int vfio_iommu_dma_limit_build_caps(struct vfio_iommu *iommu,
> +					   struct vfio_info_cap *caps)
> +{
> +	struct vfio_iommu_type1_info_dma_limit cap_dma_limit;
> +
> +	cap_dma_limit.header.id = VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT;
> +	cap_dma_limit.header.version = 1;
> +
> +	cap_dma_limit.max = dma_entry_limit;

I think you want to report iommu->dma_avail, which might change the
naming and semantics of the capability a bit. dma_entry_limit is a
writable module param, so the current value might not be relevant to
this container at the time that it's read. When a container is opened
we set iommu->dma_avail to the current dma_entry_limit, therefore later
modifications of dma_entry_limit are only relevant to subsequent
containers.

It seems like there are additional benefits to reporting available dma
entries as well, for example on mapping failure userspace could
reevaluate, perhaps even validate usage counts between kernel and user.

Thanks,

Alex

> +
> +	return vfio_info_add_capability(caps, &cap_dma_limit.header,
> +					sizeof(cap_dma_limit));
> +}
> +
>  static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
>  				     unsigned long arg)
>  {
> @@ -2642,6 +2656,9 @@ static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
>  		ret = vfio_iommu_migration_build_caps(iommu, &caps);
>
>  	if (!ret)
> +		ret = vfio_iommu_dma_limit_build_caps(iommu, &caps);
> +
> +	if (!ret)
>  		ret = vfio_iommu_iova_build_caps(iommu, &caps);
>
>  	mutex_unlock(&iommu->lock);
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 9204705..c91e471 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -1039,6 +1039,22 @@ struct vfio_iommu_type1_info_cap_migration {
>  	__u64	max_dirty_bitmap_size;	/* in bytes */
>  };
>
> +/*
> + * The DMA limit capability allows to report the number of simultaneously
> + * outstanding DMA mappings are supported.
> + *
> + * The structures below define version 1 of this capability.
> + *
> + * max: specifies the maximum number of outstanding DMA mappings allowed.
> + */
> +#define VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT	3
> +
> +struct vfio_iommu_type1_info_dma_limit {
> +	struct vfio_info_cap_header header;
> +	__u32	max;
> +};
> +
> +
>  #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
>
>  /**
On 9/11/20 1:09 PM, Alex Williamson wrote:
> On Fri, 11 Sep 2020 12:44:03 -0400
> Matthew Rosato <mjrosato@linux.ibm.com> wrote:
>
>> Commit 492855939bdb ("vfio/type1: Limit DMA mappings per container")
>> added the ability to limit the number of memory backed DMA mappings.
>> However on s390x, when lazy mapping is in use, we use a very large
>> number of concurrent mappings. Let's provide the limitation to
>> userspace via the IOMMU info chain so that userspace can take
>> appropriate mitigation.
>>
>> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
>> ---
>>  drivers/vfio/vfio_iommu_type1.c | 17 +++++++++++++++++
>>  include/uapi/linux/vfio.h       | 16 ++++++++++++++++
>>  2 files changed, 33 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 5fbf0c1..573c2c9 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -2609,6 +2609,20 @@ static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
>>  	return vfio_info_add_capability(caps, &cap_mig.header, sizeof(cap_mig));
>>  }
>>
>> +static int vfio_iommu_dma_limit_build_caps(struct vfio_iommu *iommu,
>> +					   struct vfio_info_cap *caps)
>> +{
>> +	struct vfio_iommu_type1_info_dma_limit cap_dma_limit;
>> +
>> +	cap_dma_limit.header.id = VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT;
>> +	cap_dma_limit.header.version = 1;
>> +
>> +	cap_dma_limit.max = dma_entry_limit;
>
> I think you want to report iommu->dma_avail, which might change the
> naming and semantics of the capability a bit. dma_entry_limit is a
> writable module param, so the current value might not be relevant to
> this container at the time that it's read. When a container is opened
> we set iommu->dma_avail to the current dma_entry_limit, therefore later
> modifications of dma_entry_limit are only relevant to subsequent
> containers.
>
> It seems like there are additional benefits to reporting available dma
> entries as well, for example on mapping failure userspace could
> reevaluate, perhaps even validate usage counts between kernel and user.

Hmm, both good points. I'll re-work to something that presents the
current dma_avail for the container instead. Thanks!

> Thanks,
>
> Alex
>
>> +
>> +	return vfio_info_add_capability(caps, &cap_dma_limit.header,
>> +					sizeof(cap_dma_limit));
>> +}
>> +
>>  static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
>>  				     unsigned long arg)
>>  {
>> @@ -2642,6 +2656,9 @@ static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
>>  		ret = vfio_iommu_migration_build_caps(iommu, &caps);
>>
>>  	if (!ret)
>> +		ret = vfio_iommu_dma_limit_build_caps(iommu, &caps);
>> +
>> +	if (!ret)
>>  		ret = vfio_iommu_iova_build_caps(iommu, &caps);
>>
>>  	mutex_unlock(&iommu->lock);
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 9204705..c91e471 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -1039,6 +1039,22 @@ struct vfio_iommu_type1_info_cap_migration {
>>  	__u64	max_dirty_bitmap_size;	/* in bytes */
>>  };
>>
>> +/*
>> + * The DMA limit capability allows to report the number of simultaneously
>> + * outstanding DMA mappings are supported.
>> + *
>> + * The structures below define version 1 of this capability.
>> + *
>> + * max: specifies the maximum number of outstanding DMA mappings allowed.
>> + */
>> +#define VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT	3
>> +
>> +struct vfio_iommu_type1_info_dma_limit {
>> +	struct vfio_info_cap_header header;
>> +	__u32	max;
>> +};
>> +
>> +
>>  #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
>>
>>  /**
>
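For illustration, here is a minimal sketch of the direction agreed above: snapshotting the container's remaining entries (iommu->dma_avail, mentioned in Alex's review) instead of the writable dma_entry_limit module parameter. The names vfio_iommu_dma_avail_build_caps, VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL, struct vfio_iommu_type1_info_dma_avail, and its avail field are all assumptions; the actual reworked patch may spell them differently.

```c
/* Hypothetical uapi counterpart (names assumed, not the posted patch): */
struct vfio_iommu_type1_info_dma_avail {
	struct vfio_info_cap_header header;
	__u32	avail;
};

/*
 * Sketch only: report the container's remaining mapping entries
 * (iommu->dma_avail) rather than the dma_entry_limit module param,
 * which can be rewritten after the container was opened.
 */
static int vfio_iommu_dma_avail_build_caps(struct vfio_iommu *iommu,
					   struct vfio_info_cap *caps)
{
	struct vfio_iommu_type1_info_dma_avail cap_dma_avail;

	cap_dma_avail.header.id = VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL;
	cap_dma_avail.header.version = 1;

	/*
	 * The caller holds iommu->lock, so this snapshot is consistent
	 * with the mappings accounted to the container so far.
	 */
	cap_dma_avail.avail = iommu->dma_avail;

	return vfio_info_add_capability(caps, &cap_dma_avail.header,
					sizeof(cap_dma_avail));
}
```

Because the value changes with every map and unmap, userspace would treat it as a point-in-time snapshot, for example re-querying after a failed VFIO_IOMMU_MAP_DMA as Alex suggests.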
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 5fbf0c1..573c2c9 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2609,6 +2609,20 @@ static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
 	return vfio_info_add_capability(caps, &cap_mig.header, sizeof(cap_mig));
 }
 
+static int vfio_iommu_dma_limit_build_caps(struct vfio_iommu *iommu,
+					   struct vfio_info_cap *caps)
+{
+	struct vfio_iommu_type1_info_dma_limit cap_dma_limit;
+
+	cap_dma_limit.header.id = VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT;
+	cap_dma_limit.header.version = 1;
+
+	cap_dma_limit.max = dma_entry_limit;
+
+	return vfio_info_add_capability(caps, &cap_dma_limit.header,
+					sizeof(cap_dma_limit));
+}
+
 static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
 				     unsigned long arg)
 {
@@ -2642,6 +2656,9 @@ static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
 		ret = vfio_iommu_migration_build_caps(iommu, &caps);
 
 	if (!ret)
+		ret = vfio_iommu_dma_limit_build_caps(iommu, &caps);
+
+	if (!ret)
 		ret = vfio_iommu_iova_build_caps(iommu, &caps);
 
 	mutex_unlock(&iommu->lock);
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9204705..c91e471 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1039,6 +1039,22 @@ struct vfio_iommu_type1_info_cap_migration {
 	__u64	max_dirty_bitmap_size;	/* in bytes */
 };
 
+/*
+ * The DMA limit capability allows to report the number of simultaneously
+ * outstanding DMA mappings are supported.
+ *
+ * The structures below define version 1 of this capability.
+ *
+ * max: specifies the maximum number of outstanding DMA mappings allowed.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT	3
+
+struct vfio_iommu_type1_info_dma_limit {
+	struct vfio_info_cap_header header;
+	__u32	max;
+};
+
+
 #define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
 
 /**
Commit 492855939bdb ("vfio/type1: Limit DMA mappings per container")
added the ability to limit the number of memory backed DMA mappings.
However on s390x, when lazy mapping is in use, we use a very large
number of concurrent mappings. Let's provide the limitation to
userspace via the IOMMU info chain so that userspace can take
appropriate mitigation.

Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
---
 drivers/vfio/vfio_iommu_type1.c | 17 +++++++++++++++++
 include/uapi/linux/vfio.h       | 16 ++++++++++++++++
 2 files changed, 33 insertions(+)
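To show how userspace would consume the capability defined in the patch above, here is a sketch that walks the VFIO_IOMMU_GET_INFO capability chain. The two-call pattern, VFIO_IOMMU_INFO_CAPS flag, cap_offset field, and vfio_info_cap_header layout are the existing VFIO uapi; VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT and struct vfio_iommu_type1_info_dma_limit exist only with this patch applied to linux/vfio.h, and query_dma_limit is an illustrative helper name.

```c
#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>	/* must include this patch for the new cap */

/* Illustrative helper: fills *max and returns 0 on success. */
static int query_dma_limit(int container_fd, __u32 *max)
{
	struct vfio_iommu_type1_info *info, *tmp;
	struct vfio_info_cap_header *hdr;
	__u32 offset;

	info = calloc(1, sizeof(*info));
	if (!info)
		return -ENOMEM;
	info->argsz = sizeof(*info);

	/* First call: the kernel reports the buffer size it needs. */
	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info))
		goto out;

	if (info->argsz > sizeof(*info)) {
		tmp = realloc(info, info->argsz);
		if (!tmp)
			goto out;
		info = tmp;
		/* Second call actually fills the capability chain. */
		if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info))
			goto out;
	}

	if (!(info->flags & VFIO_IOMMU_INFO_CAPS) || !info->cap_offset)
		goto out;

	/* cap_offset and each hdr->next are byte offsets from &info. */
	for (offset = info->cap_offset; offset; offset = hdr->next) {
		hdr = (struct vfio_info_cap_header *)((char *)info + offset);
		if (hdr->id == VFIO_IOMMU_TYPE1_INFO_DMA_LIMIT) {
			struct vfio_iommu_type1_info_dma_limit *cap =
				(struct vfio_iommu_type1_info_dma_limit *)hdr;
			*max = cap->max;
			free(info);
			return 0;
		}
	}
out:
	free(info);
	return -ENOENT;
}
```

On a kernel without this patch the loop simply never matches and the helper fails gracefully, which is the forward-compatibility property the info capability chain is designed to provide.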