From patchwork Tue Jan 4 01:56:31 2022
X-Patchwork-Submitter: Baolu Lu
X-Patchwork-Id: 12702902
X-Patchwork-Delegate: bhelgaas@google.com
From: Lu Baolu
To: Greg Kroah-Hartman, Joerg Roedel, Alex Williamson, Bjorn Helgaas,
    Jason Gunthorpe, Christoph Hellwig, Kevin Tian, Ashok Raj
Cc: Will Deacon, Robin Murphy, Dan Williams, rafael@kernel.org,
    Diana Craciun, Cornelia Huck, Eric Auger, Liu Yi L, Jacob jun Pan,
    Chaitanya Kulkarni, Stuart Yoder, Laurentiu Tudor, Thierry Reding,
    David Airlie, Daniel Vetter, Jonathan Hunter, Li Yang,
    Dmitry Osipenko, iommu@lists.linux-foundation.org,
    linux-pci@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v5 01/14] iommu: Add dma ownership management interfaces
Date: Tue, 4 Jan 2022 09:56:31 +0800
Message-Id: <20220104015644.2294354-2-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220104015644.2294354-1-baolu.lu@linux.intel.com>
References: <20220104015644.2294354-1-baolu.lu@linux.intel.com>
X-Mailing-List: linux-pci@vger.kernel.org

Multiple devices may be placed in the same IOMMU group because they cannot
be isolated from each other. Such devices must be either entirely under
kernel control or entirely under userspace control, never a mixture.

Add DMA ownership management to the iommu core and expose several
interfaces to device drivers and to the userspace device assignment
framework (i.e. vfio), so that any conflict between user-controlled and
kernel-controlled DMA is detected as early as possible.

The device-driver-oriented interfaces are:

  int iommu_device_use_dma_api(struct device *dev);
  void iommu_device_unuse_dma_api(struct device *dev);

A device under kernel driver control must pass iommu_device_use_dma_api()
before its driver probes; the driver binding process must be aborted if
the call returns failure.
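Purely as an illustration of the intended calling convention (this sketch
is not part of the patch; example_bus_probe()/example_bus_remove() are
invented names, and the real call sites are expected to be wired up by
later patches in this series), a bus probe path could bracket driver
binding roughly like this:

  #include <linux/device.h>
  #include <linux/iommu.h>

  /* Illustrative only: hypothetical bus probe/remove helpers. */
  static int example_bus_probe(struct device *dev)
  {
          struct device_driver *drv = dev->driver;
          int ret;

          /* Refuse the bind if the group is already claimed for user DMA. */
          ret = iommu_device_use_dma_api(dev);
          if (ret)
                  return ret;

          ret = drv->probe ? drv->probe(dev) : 0;
          if (ret)
                  /* Probe failed: drop the ownership reference again. */
                  iommu_device_unuse_dma_api(dev);

          return ret;
  }

  static void example_bus_remove(struct device *dev)
  {
          struct device_driver *drv = dev->driver;

          if (drv->remove)
                  drv->remove(dev);

          /* Balance the iommu_device_use_dma_api() taken at probe time. */
          iommu_device_unuse_dma_api(dev);
  }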
The vfio-oriented interfaces are:

  int iommu_group_set_dma_owner(struct iommu_group *group, void *owner);
  void iommu_group_release_dma_owner(struct iommu_group *group);
  bool iommu_group_dma_owner_claimed(struct iommu_group *group);

Userspace device assignment must be refused if iommu_group_set_dma_owner()
returns failure (see the sketch after the diff below).

Signed-off-by: Jason Gunthorpe
Signed-off-by: Kevin Tian
Signed-off-by: Lu Baolu
---
 include/linux/iommu.h |  31 ++++++++
 drivers/iommu/iommu.c | 161 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 189 insertions(+), 3 deletions(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index de0c57a567c8..568f285468cf 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -682,6 +682,13 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev,
 void iommu_sva_unbind_device(struct iommu_sva *handle);
 u32 iommu_sva_get_pasid(struct iommu_sva *handle);
 
+int iommu_device_use_dma_api(struct device *dev);
+void iommu_device_unuse_dma_api(struct device *dev);
+
+int iommu_group_set_dma_owner(struct iommu_group *group, void *owner);
+void iommu_group_release_dma_owner(struct iommu_group *group);
+bool iommu_group_dma_owner_claimed(struct iommu_group *group);
+
 #else /* CONFIG_IOMMU_API */
 
 struct iommu_ops {};
@@ -1082,6 +1089,30 @@ static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
 {
         return NULL;
 }
+
+static inline int iommu_device_use_dma_api(struct device *dev)
+{
+        return 0;
+}
+
+static inline void iommu_device_unuse_dma_api(struct device *dev)
+{
+}
+
+static inline int
+iommu_group_set_dma_owner(struct iommu_group *group, void *owner)
+{
+        return -ENODEV;
+}
+
+static inline void iommu_group_release_dma_owner(struct iommu_group *group)
+{
+}
+
+static inline bool iommu_group_dma_owner_claimed(struct iommu_group *group)
+{
+        return false;
+}
 #endif /* CONFIG_IOMMU_API */
 
 /**
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8b86406b7162..ff0c8c1ad5af 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -48,6 +48,8 @@ struct iommu_group {
         struct iommu_domain *default_domain;
         struct iommu_domain *domain;
         struct list_head entry;
+        unsigned int owner_cnt;
+        void *owner;
 };
 
 struct group_device {
@@ -289,7 +291,12 @@ int iommu_probe_device(struct device *dev)
         mutex_lock(&group->mutex);
         iommu_alloc_default_domain(group, dev);
 
-        if (group->default_domain) {
+        /*
+         * If the device joined an existing group that has already been
+         * claimed for non-kernel DMA purposes, avoid attaching the
+         * default domain.
+         */
+        if (group->default_domain && !group->owner) {
                 ret = __iommu_attach_device(group->default_domain, dev);
                 if (ret) {
                         mutex_unlock(&group->mutex);
@@ -2320,7 +2327,7 @@ static int __iommu_attach_group(struct iommu_domain *domain,
 {
         int ret;
 
-        if (group->default_domain && group->domain != group->default_domain)
+        if (group->domain && group->domain != group->default_domain)
                 return -EBUSY;
 
         ret = __iommu_group_for_each_dev(group, domain,
@@ -2357,7 +2364,11 @@ static void __iommu_detach_group(struct iommu_domain *domain,
 {
         int ret;
 
-        if (!group->default_domain) {
+        /*
+         * If the group has been claimed for non-kernel DMA purposes,
+         * avoid re-attaching the default domain.
+         */
+        if (!group->default_domain || group->owner) {
                 __iommu_group_for_each_dev(group, domain,
                                            iommu_group_do_detach_device);
                 group->domain = NULL;
@@ -3351,3 +3362,147 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 
         return ret;
 }
+
+/**
+ * iommu_device_use_dma_api() - Device driver wants to do DMA through
+ *                              the kernel DMA API.
+ * @dev: The device.
+ *
+ * The device driver about to bind @dev wants to do DMA through the kernel
+ * DMA API. Return 0 if it is allowed, otherwise an error.
+ */
+int iommu_device_use_dma_api(struct device *dev)
+{
+        struct iommu_group *group = iommu_group_get(dev);
+        int ret = 0;
+
+        if (!group)
+                return 0;
+
+        mutex_lock(&group->mutex);
+        if (group->owner_cnt) {
+                if (group->domain != group->default_domain ||
+                    group->owner) {
+                        ret = -EBUSY;
+                        goto unlock_out;
+                }
+        }
+
+        group->owner_cnt++;
+
+unlock_out:
+        mutex_unlock(&group->mutex);
+        iommu_group_put(group);
+
+        return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_device_use_dma_api);
+
+/**
+ * iommu_device_unuse_dma_api() - Device driver doesn't want to do DMA
+ *                                through the kernel DMA API anymore.
+ * @dev: The device.
+ *
+ * The device driver no longer wants to do DMA through the kernel DMA API.
+ * It must be called after iommu_device_use_dma_api().
+ */
+void iommu_device_unuse_dma_api(struct device *dev)
+{
+        struct iommu_group *group = iommu_group_get(dev);
+
+        if (!group)
+                return;
+
+        mutex_lock(&group->mutex);
+        if (!WARN_ON(!group->owner_cnt))
+                group->owner_cnt--;
+
+        mutex_unlock(&group->mutex);
+        iommu_group_put(group);
+}
+EXPORT_SYMBOL_GPL(iommu_device_unuse_dma_api);
+
+/**
+ * iommu_group_set_dma_owner() - Set DMA ownership of a group
+ * @group: The group.
+ * @owner: Caller-specified pointer. Used for exclusive ownership.
+ *
+ * This is to support backward compatibility for vfio, which manages the
+ * DMA ownership at the iommu_group level. New invocations of this
+ * interface should be prohibited.
+ */
+int iommu_group_set_dma_owner(struct iommu_group *group, void *owner)
+{
+        int ret = 0;
+
+        mutex_lock(&group->mutex);
+        if (group->owner_cnt) {
+                if (group->owner != owner) {
+                        ret = -EPERM;
+                        goto unlock_out;
+                }
+        } else {
+                if (group->domain && group->domain != group->default_domain) {
+                        ret = -EBUSY;
+                        goto unlock_out;
+                }
+
+                group->owner = owner;
+                if (group->domain)
+                        __iommu_detach_group(group->domain, group);
+        }
+
+        group->owner_cnt++;
+unlock_out:
+        mutex_unlock(&group->mutex);
+
+        return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_group_set_dma_owner);
+
+/**
+ * iommu_group_release_dma_owner() - Release DMA ownership of a group
+ * @group: The group.
+ *
+ * Release the DMA ownership claimed by iommu_group_set_dma_owner().
+ */
+void iommu_group_release_dma_owner(struct iommu_group *group)
+{
+        mutex_lock(&group->mutex);
+        if (WARN_ON(!group->owner_cnt || !group->owner))
+                goto unlock_out;
+
+        if (--group->owner_cnt > 0)
+                goto unlock_out;
+
+        /*
+         * The UNMANAGED domain should be detached before all USER
+         * owners have been released.
+         */
+        if (!WARN_ON(group->domain) && group->default_domain)
+                __iommu_attach_group(group->default_domain, group);
+        group->owner = NULL;
+
+unlock_out:
+        mutex_unlock(&group->mutex);
+}
+EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner);
+
+/**
+ * iommu_group_dma_owner_claimed() - Query group DMA ownership status
+ * @group: The group.
+ *
+ * This provides a status query on a given group. It is racy and only for
+ * non-binding status reporting.
+ */
+bool iommu_group_dma_owner_claimed(struct iommu_group *group)
+{
+        unsigned int user;
+
+        mutex_lock(&group->mutex);
+        user = group->owner_cnt;
+        mutex_unlock(&group->mutex);
+
+        return user;
+}
+EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);
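
For completeness, here is a minimal sketch of how a vfio-like user-DMA
framework might use the group-oriented interfaces. It is not part of this
patch; struct example_user_ctx and the helper names are invented for
illustration only.

  #include <linux/iommu.h>

  /* Hypothetical per-open context, used only as the ownership token. */
  struct example_user_ctx {
          struct iommu_group *group;
  };

  static int example_claim_group(struct example_user_ctx *ctx)
  {
          int ret;

          /*
           * Fails with -EPERM if the group is already in use by kernel
           * drivers or claimed by a different owner, or -EBUSY if a
           * non-default domain is currently attached.
           */
          ret = iommu_group_set_dma_owner(ctx->group, ctx);
          if (ret)
                  return ret;

          /* ... allocate and attach an UNMANAGED domain for user DMA ... */

          return 0;
  }

  static void example_release_group(struct example_user_ctx *ctx)
  {
          /* Detach/free the user domain first, then drop the claim. */
          iommu_group_release_dma_owner(ctx->group);
  }

  static bool example_group_claimed(struct iommu_group *group)
  {
          /* Racy, informational only (e.g. for status reporting). */
          return iommu_group_dma_owner_claimed(group);
  }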