Message ID | 20220518182120.1136715-5-jacob.jun.pan@linux.intel.com (mailing list archive)
State | Not Applicable
Series | Enable PASID for DMA API users
> From: Jacob Pan <jacob.jun.pan@linux.intel.com>
> Sent: Thursday, May 19, 2022 2:21 AM
>
> The DMA mapping API is the de facto standard for in-kernel DMA. It
> operates on a per-device/RID basis, which is not PASID-aware.
>
> Some modern devices, such as the Intel Data Streaming Accelerator,
> require a PASID for certain work submissions. To allow such devices to
> use the DMA mapping API, we need the following functionality:
> 1. Provide the device a way to retrieve a PASID for work submission
>    within the kernel
> 2. Enable the kernel PASID on the IOMMU for the device
> 3. Attach the kernel PASID to the device's default DMA domain, be it
>    IOVA or physical address in the pass-through case.
>
> This patch introduces a driver-facing API that enables DMA API PASID
> usage. Once enabled, device drivers can continue to use the DMA APIs as
> is. The dma_handle is the same with and without a PASID.
>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> ---
>  drivers/iommu/dma-iommu.c | 114 ++++++++++++++++++++++++++++++++++++++
>  include/linux/dma-iommu.h |   3 +
>  2 files changed, 117 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 1ca85d37eeab..6ad7ba619ef0 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -34,6 +34,8 @@ struct iommu_dma_msi_page {
>  	phys_addr_t		phys;
>  };
>
> +static DECLARE_IOASID_SET(iommu_dma_pasid);
> +
>  enum iommu_dma_cookie_type {
>  	IOMMU_DMA_IOVA_COOKIE,
>  	IOMMU_DMA_MSI_COOKIE,
> @@ -370,6 +372,118 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
>  	domain->iova_cookie = NULL;
>  }
>
> +/* Protect iommu_domain DMA PASID data */
> +static DEFINE_MUTEX(dma_pasid_lock);
> +/**
> + * iommu_attach_dma_pasid -- Attach a PASID for in-kernel DMA. Use the
> + * device's DMA domain.
> + * @dev: Device to be enabled
> + * @pasid: The returned kernel PASID to be used for DMA
> + *
> + * DMA request with PASID will be mapped the same way as the legacy DMA.
> + * If the device is in pass-through, PASID will also pass-through. If the
> + * device is in IOVA, the PASID will point to the same IOVA page table.
> + *
> + * @return err code or 0 on success
> + */
> +int iommu_attach_dma_pasid(struct device *dev, ioasid_t *pasid)

iommu_attach_dma_domain_pasid? 'dma_pasid' is too broad from an API p.o.v.

> +{
> +	struct iommu_domain *dom;
> +	ioasid_t id, max;
> +	int ret = 0;
> +
> +	dom = iommu_get_domain_for_dev(dev);
> +	if (!dom || !dom->ops || !dom->ops->attach_dev_pasid)
> +		return -ENODEV;
> +
> +	/* Only support domain types that DMA API can be used */
> +	if (dom->type == IOMMU_DOMAIN_UNMANAGED ||
> +	    dom->type == IOMMU_DOMAIN_BLOCKED) {
> +		dev_warn(dev, "Invalid domain type %d", dom->type);
> +		return -EPERM;
> +	}

WARN_ON. And probably we can just check whether the domain is the default
domain here.

> +
> +	mutex_lock(&dma_pasid_lock);
> +	id = dom->dma_pasid;
> +	if (!id) {
> +		/*
> +		 * First device to use PASID in its DMA domain, allocate
> +		 * a single PASID per DMA domain is all we need, it is also
> +		 * good for performance when it comes down to IOTLB flush.
> +		 */
> +		max = 1U << dev->iommu->pasid_bits;
> +		if (!max) {
> +			ret = -EINVAL;
> +			goto done_unlock;
> +		}
> +
> +		id = ioasid_alloc(&iommu_dma_pasid, 1, max, dev);
> +		if (id == INVALID_IOASID) {
> +			ret = -ENOMEM;
> +			goto done_unlock;
> +		}
> +
> +		dom->dma_pasid = id;
> +		atomic_set(&dom->dma_pasid_users, 1);

This is always accessed with the lock held, hence no need to be atomic.

> +	}
> +
> +	ret = iommu_attach_device_pasid(dom, dev, id);
> +	if (!ret) {
> +		*pasid = id;
> +		atomic_inc(&dom->dma_pasid_users);
> +		goto done_unlock;
> +	}
> +
> +	if (atomic_dec_and_test(&dom->dma_pasid_users)) {
> +		ioasid_free(id);
> +		dom->dma_pasid = 0;
> +	}
> +done_unlock:
> +	mutex_unlock(&dma_pasid_lock);
> +	return ret;
> +}
> +EXPORT_SYMBOL(iommu_attach_dma_pasid);
> +
> +/**
> + * iommu_detach_dma_pasid -- Disable in-kernel DMA request with PASID
> + * @dev: Device's PASID DMA to be disabled
> + *
> + * It is the device driver's responsibility to ensure no more incoming DMA
> + * requests with the kernel PASID before calling this function. IOMMU driver
> + * ensures PASID cache, IOTLBs related to the kernel PASID are cleared and
> + * drained.
> + */
> +void iommu_detach_dma_pasid(struct device *dev)
> +{
> +	struct iommu_domain *dom;
> +	ioasid_t pasid;
> +
> +	dom = iommu_get_domain_for_dev(dev);
> +	if (WARN_ON(!dom || !dom->ops || !dom->ops->detach_dev_pasid))
> +		return;
> +
> +	/* Only support DMA API managed domain type */
> +	if (WARN_ON(dom->type == IOMMU_DOMAIN_UNMANAGED ||
> +		    dom->type == IOMMU_DOMAIN_BLOCKED))
> +		return;
> +
> +	mutex_lock(&dma_pasid_lock);
> +	pasid = iommu_get_pasid_from_domain(dev, dom);
> +	if (!pasid || pasid == INVALID_IOASID) {
> +		dev_err(dev, "No valid DMA PASID attached\n");
> +		mutex_unlock(&dma_pasid_lock);
> +		return;
> +	}

Here just use dom->dma_pasid and let the IOMMU driver figure out underneath
whether this device has been attached to the domain with the said PASID.

> +	iommu_detach_device_pasid(dom, dev, pasid);
> +	if (atomic_dec_and_test(&dom->dma_pasid_users)) {
> +		ioasid_free(pasid);
> +		dom->dma_pasid = 0;
> +	}
> +	mutex_unlock(&dma_pasid_lock);
> +}
> +EXPORT_SYMBOL(iommu_detach_dma_pasid);
> +
>  /**
>   * iommu_dma_get_resv_regions - Reserved region driver helper
>   * @dev: Device from iommu_get_resv_regions()
> diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
> index 24607dc3c2ac..538650b9cb75 100644
> --- a/include/linux/dma-iommu.h
> +++ b/include/linux/dma-iommu.h
> @@ -18,6 +18,9 @@ int iommu_get_dma_cookie(struct iommu_domain *domain);
>  int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);
>  void iommu_put_dma_cookie(struct iommu_domain *domain);
>
> +int iommu_attach_dma_pasid(struct device *dev, ioasid_t *pasid);
> +void iommu_detach_dma_pasid(struct device *dev);
> +
>  /* Setup call for arch DMA mapping code */
>  void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit);
>  int iommu_dma_init_fq(struct iommu_domain *domain);
> --
> 2.25.1
Hi Kevin,

On Mon, 23 May 2022 08:25:33 +0000, "Tian, Kevin" <kevin.tian@intel.com>
wrote:

> > +int iommu_attach_dma_pasid(struct device *dev, ioasid_t *pasid)
>
> iommu_attach_dma_domain_pasid? 'dma_pasid' is too broad from an API
> p.o.v.
>
I agree 'dma_pasid' is too broad; technically it is a DMA API PASID, but
that seems too long. My concern with 'dma_domain_pasid' is that the PASID
can also be used for the identity domain.

> > +	/* Only support domain types that DMA API can be used */
> > +	if (dom->type == IOMMU_DOMAIN_UNMANAGED ||
> > +	    dom->type == IOMMU_DOMAIN_BLOCKED) {
> > +		dev_warn(dev, "Invalid domain type %d", dom->type);
> > +		return -EPERM;
> > +	}
>
> WARN_ON. And probably we can just check whether the domain is the
> default domain here.
>
Good point, I will just use:

	struct iommu_domain *def_domain = iommu_get_dma_domain(dev);

> > +		dom->dma_pasid = id;
> > +		atomic_set(&dom->dma_pasid_users, 1);
>
> This is always accessed with the lock held, hence no need to be atomic.
>
Good catch, will fix.

> > +	pasid = iommu_get_pasid_from_domain(dev, dom);
> > +	if (!pasid || pasid == INVALID_IOASID) {
> > +		dev_err(dev, "No valid DMA PASID attached\n");
> > +		mutex_unlock(&dma_pasid_lock);
> > +		return;
> > +	}
>
> Here just use dom->dma_pasid and let the IOMMU driver figure out
> underneath whether this device has been attached to the domain with the
> said PASID.
>
Yeah, I am checking the PASID matching in the IOMMU driver. My thinking is
that this is a quick sanity check in the common code to rule out invalid
values.

Thanks a lot!

Jacob