Message ID: 1459890090-16040-1-git-send-email-mitchelh@codeaurora.org (mailing list archive)
State: New, archived
Hi Mitch,

On 05/04/16 22:01, Mitchel Humpherys wrote:
> Currently we use a single pgsize_bitmap per IOMMU driver. However, some
> IOMMU drivers might service different IOMMUs with different supported
> page sizes. Some drivers might also want to restrict page sizes for
> different use cases. Support these use cases by adding a
> .get_pgsize_bitmap function to the iommu_ops which can optionally be
> used by the driver to return a domain-specific pgsize_bitmap.

Funnily enough, I've just been doing very much the same thing. How would you handle said restriction of page sizes under this scheme? What you hint at in the commit message sounds the same as the mumblings I've heard from graphics guys about e.g. using short-descriptor at forced 1MB granularity for minimal table walk latency - to me it seemed most logical in that situation to simply let the owner of the domain mess directly with its bitmap themselves.

I'll clean up what I have and try to get it posted this afternoon so we can compare.

Robin.
> Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
> ---
>  drivers/iommu/iommu.c | 28 +++++++++++++++++++---------
>  include/linux/iommu.h |  3 +++
>  2 files changed, 22 insertions(+), 9 deletions(-)

[quoted diff snipped; the full patch appears below]
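Robin's suggestion — letting the owner of the domain "mess directly with its bitmap" — can be modelled outside the kernel as a per-domain copy of the hardware bitmap that the owner narrows before mapping. This is only an illustrative userspace sketch; the field and function names are assumptions, not the kernel API.

```c
#include <assert.h>

/* Sketch of a domain that carries its own page-size bitmap,
 * initialised from what the hardware supports. */
struct iommu_domain {
	unsigned long pgsize_bitmap;
};

#define SZ_4K (1UL << 12)
#define SZ_1M (1UL << 20)
#define SZ_2M (1UL << 21)

/* Domain starts out with every size the hardware can do... */
static void domain_init(struct iommu_domain *d, unsigned long hw_bitmap)
{
	d->pgsize_bitmap = hw_bitmap;
}

/* ...and the owner may clear bits afterwards, e.g. forcing 1MB-only
 * mappings for minimal table walk latency, as in Robin's graphics
 * example. Mappings requested later would honour the narrowed set. */
static void domain_restrict_pgsizes(struct iommu_domain *d,
				    unsigned long mask)
{
	d->pgsize_bitmap &= mask;
}
```

The appeal of this shape is that no new iommu_ops callback is needed: the restriction lives in the domain, under the control of whoever allocated it.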
On Tue, Apr 05, 2016 at 02:01:29PM -0700, Mitchel Humpherys wrote:
> Currently we use a single pgsize_bitmap per IOMMU driver. However, some
> IOMMU drivers might service different IOMMUs with different supported
> page sizes. Some drivers might also want to restrict page sizes for
> different use cases. Support these use cases by adding a
> .get_pgsize_bitmap function to the iommu_ops which can optionally be
> used by the driver to return a domain-specific pgsize_bitmap.

No, at least not this way. I said it before and I say it again: we are not going to lift the iommu-api requirements in this undetectable way.

The iommu-api works with domains/groups and devices. The general expectation is that every group can be part of every domain. I know that this is not the case already with some drivers, but I am not going to move the code further in the wrong direction.

The way I'd like to see that solved is:

 * Introduce per-group pgsize-bitmaps (group->pgsize_bmp)
 * Calculate a global pgsize-bitmap from all groups' pgsize-bitmaps
 * Also store a pgsize_bitmap in each domain on allocation
 * Modify iommu_domain_alloc to set domain->pgsize_bmp to the global
   pgsize_bitmap
 * Introduce an iommu_group_alloc_domain(group) function which allocates
   a new domain only for the given group. This function sets
   domain->pgsize_bitmap to group->pgsize_bmp.
 * Note that now you can have multiple page-tables per domain, one
   page-table for each required format.

Regards,

	Joerg
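The steps Joerg lists can be sketched as a small userspace model: a generic domain gets the intersection of all groups' bitmaps (so it works for any group), while a domain allocated for a single group gets that group's full bitmap. All names here are illustrative stand-ins for an API that did not exist at the time of this thread.

```c
#include <assert.h>
#include <stddef.h>

struct iommu_group {
	unsigned long pgsize_bmp;   /* per-group supported page sizes */
};

struct iommu_domain {
	unsigned long pgsize_bmp;   /* stored in the domain on allocation */
};

/* Global bitmap: only the page sizes that every group supports. */
static unsigned long global_pgsize_bitmap(const struct iommu_group **groups,
					  size_t n)
{
	unsigned long bmp = ~0UL;
	size_t i;

	for (i = 0; i < n; i++)
		bmp &= groups[i]->pgsize_bmp;
	return bmp;
}

/* Model of iommu_domain_alloc(): any group may attach, so the domain
 * must use the global (intersected) bitmap. */
static void iommu_domain_alloc_init(struct iommu_domain *d,
				    const struct iommu_group **groups,
				    size_t n)
{
	d->pgsize_bmp = global_pgsize_bitmap(groups, n);
}

/* Model of iommu_group_alloc_domain(): the domain serves exactly one
 * group, so it inherits that group's full bitmap. */
static void iommu_group_alloc_domain(struct iommu_domain *d,
				     const struct iommu_group *group)
{
	d->pgsize_bmp = group->pgsize_bmp;
}

/* Two hypothetical groups: one supports 4K and 2M, one 4K and 1M. */
static const struct iommu_group g_a = { .pgsize_bmp = (1UL << 12) | (1UL << 21) };
static const struct iommu_group g_b = { .pgsize_bmp = (1UL << 12) | (1UL << 20) };
static const struct iommu_group *all_groups[] = { &g_a, &g_b };
```

With these two groups, a generic domain ends up 4K-only (the common subset), while a domain allocated for g_a alone keeps 4K and 2M — which is the detectability Joerg is after: the usable page sizes follow from the domain's allocation, not from a hidden driver callback.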
On Wed, Apr 06 2016 at 11:47:19 AM, Robin Murphy <robin.murphy@arm.com> wrote:
> How would you handle said restriction of page sizes under this scheme?

I have a custom io-pgtable implementation that gets wired up based on an IOMMU domain attribute, which is set by yet another custom DMA mapper. My main goal is to give clients a way to specify the page table format they want to use. It's a bit of a mess but hopefully I can clean it up and send it out soon.

> I'll clean up what I have and try to get it posted this afternoon so
> we can compare.

Cool, I have some comments that I'll leave over there.


-Mitch
On Thu, Apr 07 2016 at 12:29:59 PM, Mitchel Humpherys <mitchelh@codeaurora.org> wrote:
>> I'll clean up what I have and try to get it posted this afternoon so
>> we can compare.
>
> Cool, I have some comments that I'll leave over there.

Never mind, my comments weren't relevant. I'll try to test your series out soon...


-Mitch
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index bfd4f7c3b1d8..6141710f3091 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -325,6 +325,13 @@ int iommu_group_set_name(struct iommu_group *group, const char *name)
 }
 EXPORT_SYMBOL_GPL(iommu_group_set_name);
 
+static unsigned long iommu_get_pgsize_bitmap(struct iommu_domain *domain)
+{
+	if (domain->ops->get_pgsize_bitmap)
+		return domain->ops->get_pgsize_bitmap(domain);
+	return domain->ops->pgsize_bitmap;
+}
+
 static int iommu_group_create_direct_mappings(struct iommu_group *group,
 					      struct device *dev)
 {
@@ -337,9 +344,9 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
 	if (!domain || domain->type != IOMMU_DOMAIN_DMA)
 		return 0;
 
-	BUG_ON(!domain->ops->pgsize_bitmap);
+	BUG_ON(!(domain->ops->pgsize_bitmap || domain->ops->get_pgsize_bitmap));
 
-	pg_size = 1UL << __ffs(domain->ops->pgsize_bitmap);
+	pg_size = 1UL << __ffs(iommu_get_pgsize_bitmap(domain));
 	INIT_LIST_HEAD(&mappings);
 
 	iommu_get_dm_regions(dev, &mappings);
@@ -1318,14 +1325,15 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 	int ret = 0;
 
 	if (unlikely(domain->ops->map == NULL ||
-		     domain->ops->pgsize_bitmap == 0UL))
+		     (domain->ops->pgsize_bitmap == 0UL &&
+		      !domain->ops->get_pgsize_bitmap)))
 		return -ENODEV;
 
 	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
 		return -EINVAL;
 
 	/* find out the minimum page size supported */
-	min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+	min_pagesz = 1 << __ffs(iommu_get_pgsize_bitmap(domain));
 
 	/*
 	 * both the virtual address and the physical one, as well as
@@ -1372,14 +1380,15 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 	unsigned long orig_iova = iova;
 
 	if (unlikely(domain->ops->unmap == NULL ||
-		     domain->ops->pgsize_bitmap == 0UL))
+		     (domain->ops->pgsize_bitmap == 0UL &&
+		      !domain->ops->get_pgsize_bitmap)))
 		return -ENODEV;
 
 	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
 		return -EINVAL;
 
 	/* find out the minimum page size supported */
-	min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+	min_pagesz = 1 << __ffs(iommu_get_pgsize_bitmap(domain));
 
 	/*
 	 * The virtual address, as well as the size of the mapping, must be
@@ -1425,10 +1434,11 @@ size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 	unsigned int i, min_pagesz;
 	int ret;
 
-	if (unlikely(domain->ops->pgsize_bitmap == 0UL))
+	if (unlikely(domain->ops->pgsize_bitmap == 0UL &&
+		     !domain->ops->get_pgsize_bitmap))
 		return 0;
 
-	min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+	min_pagesz = 1 << __ffs(iommu_get_pgsize_bitmap(domain));
 
 	for_each_sg(sg, s, nents, i) {
 		phys_addr_t phys = page_to_phys(sg_page(s)) + s->offset;
@@ -1509,7 +1519,7 @@ int iommu_domain_get_attr(struct iommu_domain *domain,
 		break;
 	case DOMAIN_ATTR_PAGING:
 		paging = data;
-		*paging = (domain->ops->pgsize_bitmap != 0UL);
+		*paging = (iommu_get_pgsize_bitmap(domain) != 0UL);
 		break;
 	case DOMAIN_ATTR_WINDOWS:
 		count = data;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index a5c539fa5d2b..03f8d50670db 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -156,6 +156,8 @@ struct iommu_dm_region {
  * @domain_get_windows: Return the number of windows for a domain
  * @of_xlate: add OF master IDs to iommu grouping
  * @pgsize_bitmap: bitmap of supported page sizes
+ * @get_pgsize_bitmap: gets a bitmap of supported page sizes for a domain.
+ *                     This takes precedence over @pgsize_bitmap.
  * @priv: per-instance data private to the iommu driver
  */
 struct iommu_ops {
@@ -200,6 +202,7 @@ struct iommu_ops {
 #endif
 
 	unsigned long pgsize_bitmap;
+	unsigned long (*get_pgsize_bitmap)(struct iommu_domain *domain);
 	void *priv;
 };
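The dispatch the patch adds can be exercised outside the kernel with a minimal model: the helper prefers a driver's optional .get_pgsize_bitmap callback and falls back to the static ops bitmap. The structs mirror the patch, but the "demo" driver and its use_large_pages flag are hypothetical, invented here only to show the callback overriding the static value.

```c
#include <assert.h>
#include <stddef.h>

struct iommu_domain;

/* Cut-down model of the patched struct iommu_ops. */
struct iommu_ops {
	unsigned long pgsize_bitmap;
	unsigned long (*get_pgsize_bitmap)(struct iommu_domain *domain);
};

struct iommu_domain {
	const struct iommu_ops *ops;
	int use_large_pages;   /* hypothetical per-domain state */
};

/* Same logic as the patch: callback, if present, wins. */
static unsigned long iommu_get_pgsize_bitmap(struct iommu_domain *domain)
{
	if (domain->ops->get_pgsize_bitmap)
		return domain->ops->get_pgsize_bitmap(domain);
	return domain->ops->pgsize_bitmap;
}

/* Hypothetical driver restricting a domain to 1MB sections on request. */
static unsigned long demo_get_pgsize_bitmap(struct iommu_domain *domain)
{
	if (domain->use_large_pages)
		return 1UL << 20;                  /* 1MB only */
	return (1UL << 12) | (1UL << 20);          /* 4K | 1MB */
}

/* A legacy driver sets only the static bitmap... */
static const struct iommu_ops legacy_ops = {
	.pgsize_bitmap = 1UL << 12,
};
/* ...while the demo driver relies entirely on the callback. */
static const struct iommu_ops demo_ops = {
	.get_pgsize_bitmap = demo_get_pgsize_bitmap,
};

static struct iommu_domain legacy_dom = { .ops = &legacy_ops };
static struct iommu_domain big_dom = { .ops = &demo_ops,
				       .use_large_pages = 1 };
```

This is also why Joerg calls the scheme "undetectable" in his reply: nothing in the domain itself records which bitmap is in effect — the answer can change per call, at the driver's discretion.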
Currently we use a single pgsize_bitmap per IOMMU driver. However, some
IOMMU drivers might service different IOMMUs with different supported
page sizes. Some drivers might also want to restrict page sizes for
different use cases. Support these use cases by adding a
.get_pgsize_bitmap function to the iommu_ops which can optionally be
used by the driver to return a domain-specific pgsize_bitmap.

Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
---
 drivers/iommu/iommu.c | 28 +++++++++++++++++++---------
 include/linux/iommu.h |  3 +++
 2 files changed, 22 insertions(+), 9 deletions(-)