From patchwork Fri Dec 17 06:36:56 2021
From: Lu Baolu
Subject: [PATCH v4 01/13] iommu: Add device dma ownership set/release interfaces
Date: Fri, 17 Dec 2021 14:36:56 +0800
Message-Id: <20211217063708.1740334-2-baolu.lu@linux.intel.com>

From the perspective of who initiates the device to do DMA, device DMA can be divided into the following types:

    DMA_OWNER_DMA_API: device DMAs are initiated by a kernel driver through
        the kernel DMA API.
    DMA_OWNER_PRIVATE_DOMAIN: device DMAs are initiated by a kernel driver
        with its own PRIVATE domain.
    DMA_OWNER_PRIVATE_DOMAIN_USER: device DMAs are initiated by userspace.

Different DMA ownerships are exclusive for all devices in the same iommu group, as an iommu group is the smallest granularity of device isolation and protection that the IOMMU subsystem can guarantee. This extends the iommu core to enforce that exclusion. Two new interfaces are provided:

    int iommu_device_set_dma_owner(struct device *dev,
                                   enum iommu_dma_owner type,
                                   void *owner_cookie);
    void iommu_device_release_dma_owner(struct device *dev,
                                        enum iommu_dma_owner type);

Although the above interfaces are per-device, DMA ownership is tracked per group under the hood. An iommu group cannot have different DMA ownerships set at the same time; violating this assumption makes iommu_device_set_dma_owner() fail.
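For illustration, a kernel driver that wants to run the device under its own unmanaged domain could claim and release ownership roughly as below. This is only a sketch: the foo_* function is hypothetical and error handling is abbreviated; only the two interfaces above come from this patch, the rest is existing IOMMU API.

    /* Hypothetical driver claiming exclusive, non-DMA-API ownership. */
    static int foo_enable_private_domain(struct device *dev)
    {
            struct iommu_domain *domain;
            int ret;

            /* Fails with -EBUSY if the group already has a different owner. */
            ret = iommu_device_set_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN, NULL);
            if (ret)
                    return ret;

            domain = iommu_domain_alloc(dev->bus);
            if (!domain) {
                    iommu_device_release_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN);
                    return -ENOMEM;
            }

            /* ... attach @domain and build the driver's private mappings ... */
            return 0;
    }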
Kernel driver which does DMA have DMA_OWNER_DMA_API automatically set/ released in the driver binding/unbinding process (see next patch). Kernel driver which doesn't do DMA could avoid setting the owner type. Device bound to such driver is considered same as a driver-less device which is compatible to all owner types. Userspace driver framework (e.g. vfio) should set DMA_OWNER_PRIVATE_DOMAIN_USER for a device before the userspace is allowed to access it, plus a owner cookie pointer to mark the user identity so a single group cannot be operated by multiple users simultaneously. Vice versa, the owner type should be released after the user access permission is withdrawn. Signed-off-by: Jason Gunthorpe Signed-off-by: Kevin Tian Signed-off-by: Lu Baolu --- include/linux/iommu.h | 34 ++++++++++++++++ drivers/iommu/iommu.c | 95 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 129 insertions(+) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index d2f3435e7d17..53a023ee1ac0 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -162,6 +162,21 @@ enum iommu_dev_features { IOMMU_DEV_FEAT_IOPF, }; +/** + * enum iommu_dma_owner - IOMMU DMA ownership + * @DMA_OWNER_DMA_API: Device DMAs are initiated by a kernel driver through + * the kernel DMA API. + * @DMA_OWNER_PRIVATE_DOMAIN: Device DMAs are initiated by a kernel driver + * which provides an UNMANAGED domain. + * @DMA_OWNER_PRIVATE_DOMAIN_USER: Device DMAs are initiated by userspace, + * kernel ensures that DMAs never go to kernel memory. + */ +enum iommu_dma_owner { + DMA_OWNER_DMA_API, + DMA_OWNER_PRIVATE_DOMAIN, + DMA_OWNER_PRIVATE_DOMAIN_USER, +}; + #define IOMMU_PASID_INVALID (-1U) #ifdef CONFIG_IOMMU_API @@ -681,6 +696,10 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev, void iommu_sva_unbind_device(struct iommu_sva *handle); u32 iommu_sva_get_pasid(struct iommu_sva *handle); +int iommu_device_set_dma_owner(struct device *dev, enum iommu_dma_owner owner, + void *owner_cookie); +void iommu_device_release_dma_owner(struct device *dev, enum iommu_dma_owner owner); + #else /* CONFIG_IOMMU_API */ struct iommu_ops {}; @@ -1081,6 +1100,21 @@ static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev) { return NULL; } + +static inline int iommu_device_set_dma_owner(struct device *dev, + enum iommu_dma_owner owner, + void *owner_cookie) +{ + if (owner != DMA_OWNER_DMA_API) + return -EINVAL; + + return 0; +} + +static inline void iommu_device_release_dma_owner(struct device *dev, + enum iommu_dma_owner owner) +{ +} #endif /* CONFIG_IOMMU_API */ /** diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 8b86406b7162..5439bf45afb2 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -48,6 +48,9 @@ struct iommu_group { struct iommu_domain *default_domain; struct iommu_domain *domain; struct list_head entry; + enum iommu_dma_owner dma_owner; + unsigned int owner_cnt; + void *owner_cookie; }; struct group_device { @@ -3351,3 +3354,95 @@ static ssize_t iommu_group_store_type(struct iommu_group *group, return ret; } + +static int iommu_group_set_dma_owner(struct iommu_group *group, + enum iommu_dma_owner owner, + void *owner_cookie) +{ + int ret = 0; + + mutex_lock(&group->mutex); + if (group->owner_cnt && + (group->dma_owner != owner || + group->owner_cookie != owner_cookie)) { + ret = -EBUSY; + goto unlock_out; + } + + group->dma_owner = owner; + group->owner_cookie = owner_cookie; + group->owner_cnt++; + +unlock_out: + mutex_unlock(&group->mutex); + + return ret; +} + 
+static void iommu_group_release_dma_owner(struct iommu_group *group, + enum iommu_dma_owner owner) +{ + mutex_lock(&group->mutex); + if (WARN_ON(!group->owner_cnt || group->dma_owner != owner)) + goto unlock_out; + + if (--group->owner_cnt > 0) + goto unlock_out; + + group->dma_owner = DMA_OWNER_DMA_API; + +unlock_out: + mutex_unlock(&group->mutex); +} + +/** + * iommu_device_set_dma_owner() - Set DMA ownership of a device + * @dev: The device. + * @owner: DMA ownership type. + * @owner_cookie: Caller specified pointer. Could be used for exclusive + * declaration. Could be NULL. + * + * Set the DMA ownership of a device. The different ownerships are + * exclusive. The caller could specify a owner_cookie pointer so that + * the same DMA ownership could be exclusive among different owners. + */ +int iommu_device_set_dma_owner(struct device *dev, enum iommu_dma_owner owner, + void *owner_cookie) +{ + struct iommu_group *group = iommu_group_get(dev); + int ret; + + if (!group) { + if (owner == DMA_OWNER_DMA_API) + return 0; + else + return -ENODEV; + } + + ret = iommu_group_set_dma_owner(group, owner, owner_cookie); + iommu_group_put(group); + + return ret; +} +EXPORT_SYMBOL_GPL(iommu_device_set_dma_owner); + +/** + * iommu_device_release_dma_owner() - Release DMA ownership of a device + * @dev: The device. + * @owner: The DMA ownership type. + * + * Release the DMA ownership claimed by iommu_device_set_dma_owner(). + */ +void iommu_device_release_dma_owner(struct device *dev, enum iommu_dma_owner owner) +{ + struct iommu_group *group = iommu_group_get(dev); + + if (!group) { + WARN_ON(owner != DMA_OWNER_DMA_API); + return; + } + + iommu_group_release_dma_owner(group, owner); + iommu_group_put(group); +} +EXPORT_SYMBOL_GPL(iommu_device_release_dma_owner); From patchwork Fri Dec 17 06:36:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683747 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBC43C433F5 for ; Fri, 17 Dec 2021 06:37:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233288AbhLQGhu (ORCPT ); Fri, 17 Dec 2021 01:37:50 -0500 Received: from mga02.intel.com ([134.134.136.20]:16976 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233282AbhLQGhu (ORCPT ); Fri, 17 Dec 2021 01:37:50 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="226979359" X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="226979359" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:37:49 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623128" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:37:42 -0800 From: Lu Baolu To: Greg Kroah-Hartman , Joerg Roedel , Alex Williamson , Bjorn Helgaas , Jason Gunthorpe , Christoph Hellwig , Kevin Tian , Ashok Raj Cc: Will Deacon , Robin Murphy , Dan Williams , rafael@kernel.org, Diana Craciun , Cornelia Huck , Eric Auger , Liu Yi L , Jacob jun Pan , Chaitanya Kulkarni , Stuart Yoder , Laurentiu Tudor , Thierry Reding , David Airlie , Daniel Vetter , Jonathan Hunter , Li Yang , Dmitry 
Osipenko, iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v4 02/13] driver core: Set DMA ownership during driver bind/unbind
Date: Fri, 17 Dec 2021 14:36:57 +0800
Message-Id: <20211217063708.1740334-3-baolu.lu@linux.intel.com>

This extends really_probe() to check for DMA ownership conflicts during the driver binding process. By default, DMA_OWNER_DMA_API is claimed for the bound driver before calling its .probe() callback. If this operation fails (e.g. the iommu group of the target device already has DMA_OWNER_PRIVATE_DOMAIN_USER set), the binding process is aborted to avoid breaking the security contract for devices in the iommu group.

Without this change, the vfio driver has to listen to the bus BOUND_DRIVER event and then BUG_ON() in case of a DMA ownership conflict. That leads to a bad user experience, since a careless driver binding operation may crash the system if the admin overlooks the group restriction. Aside from being bad design, it is also a security problem: a root user can force the kernel to BUG() even with lockdown=integrity.

A driver may set a new flag (suppress_auto_claim_dma_owner) to disable the automatic claim during binding. This covers kernel drivers (pci_stub, PCI bridge drivers, etc.) which do not trigger DMA at all and can therefore be safely exempted from the DMA ownership check, as well as userspace framework drivers (vfio/vdpa, etc.) which need to manually claim DMA_OWNER_PRIVATE_DOMAIN_USER when assigning a device to userspace.

Suggested-by: Jason Gunthorpe
Link: https://lore.kernel.org/linux-iommu/20210922123931.GI327412@nvidia.com/
Link: https://lore.kernel.org/linux-iommu/20210928115751.GK964074@nvidia.com/
Signed-off-by: Lu Baolu
---
 include/linux/device/driver.h |  2 ++
 drivers/base/dd.c             | 37 ++++++++++++++++++++++++++++++-----
 2 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/include/linux/device/driver.h b/include/linux/device/driver.h
index a498ebcf4993..f5bf7030c416 100644
--- a/include/linux/device/driver.h
+++ b/include/linux/device/driver.h
@@ -54,6 +54,7 @@ enum probe_type {
  * @owner: The module owner.
  * @mod_name: Used for built-in modules.
  * @suppress_bind_attrs: Disables bind/unbind via sysfs.
+ * @suppress_auto_claim_dma_owner: Disable kernel dma auto-claim.
  * @probe_type: Type of the probe (synchronous or asynchronous) to use.
  * @of_match_table: The open firmware table.
  * @acpi_match_table: The ACPI match table.
@@ -100,6 +101,7 @@ struct device_driver { const char *mod_name; /* used for built-in modules */ bool suppress_bind_attrs; /* disables bind/unbind via sysfs */ + bool suppress_auto_claim_dma_owner; enum probe_type probe_type; const struct of_device_id *of_match_table; diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 68ea1f949daa..b04eec5dcefa 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -28,6 +28,7 @@ #include #include #include +#include #include "base.h" #include "power/power.h" @@ -538,6 +539,32 @@ static int call_driver_probe(struct device *dev, struct device_driver *drv) return ret; } +static int device_dma_configure(struct device *dev, struct device_driver *drv) +{ + int ret; + + if (!dev->bus->dma_configure) + return 0; + + ret = dev->bus->dma_configure(dev); + if (ret) + return ret; + + if (!drv->suppress_auto_claim_dma_owner) + ret = iommu_device_set_dma_owner(dev, DMA_OWNER_DMA_API, NULL); + + return ret; +} + +static void device_dma_cleanup(struct device *dev, struct device_driver *drv) +{ + if (!dev->bus->dma_configure) + return; + + if (!drv->suppress_auto_claim_dma_owner) + iommu_device_release_dma_owner(dev, DMA_OWNER_DMA_API); +} + static int really_probe(struct device *dev, struct device_driver *drv) { bool test_remove = IS_ENABLED(CONFIG_DEBUG_TEST_DRIVER_REMOVE) && @@ -574,11 +601,8 @@ static int really_probe(struct device *dev, struct device_driver *drv) if (ret) goto pinctrl_bind_failed; - if (dev->bus->dma_configure) { - ret = dev->bus->dma_configure(dev); - if (ret) - goto probe_failed; - } + if (device_dma_configure(dev, drv)) + goto pinctrl_bind_failed; ret = driver_sysfs_add(dev); if (ret) { @@ -660,6 +684,8 @@ static int really_probe(struct device *dev, struct device_driver *drv) if (dev->bus) blocking_notifier_call_chain(&dev->bus->p->bus_notifier, BUS_NOTIFY_DRIVER_NOT_BOUND, dev); + + device_dma_cleanup(dev, drv); pinctrl_bind_failed: device_links_no_driver(dev); devres_release_all(dev); @@ -1204,6 +1230,7 @@ static void __device_release_driver(struct device *dev, struct device *parent) else if (drv->remove) drv->remove(dev); + device_dma_cleanup(dev, drv); device_links_driver_cleanup(dev); devres_release_all(dev); From patchwork Fri Dec 17 06:36:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683749 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A081C433EF for ; Fri, 17 Dec 2021 06:38:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233320AbhLQGiJ (ORCPT ); Fri, 17 Dec 2021 01:38:09 -0500 Received: from mga01.intel.com ([192.55.52.88]:16250 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232173AbhLQGiI (ORCPT ); Fri, 17 Dec 2021 01:38:08 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="263864777" X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="263864777" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:37:57 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623157" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:37:50 -0800 From: Lu 
Baolu
Subject: [PATCH v4 03/13] PCI: pci_stub: Suppress kernel DMA ownership auto-claiming
Date: Fri, 17 Dec 2021 14:36:58 +0800
Message-Id: <20211217063708.1740334-4-baolu.lu@linux.intel.com>

pci_dma_configure() marks the iommu_group as containing only devices with kernel drivers that manage DMA. Avoid this default behavior for pci-stub because it does not program any DMA itself. This keeps pci-stub usable by the admin to block driver binding after the DMA ownership has been assigned to vfio.

Signed-off-by: Lu Baolu
---
 drivers/pci/pci-stub.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/pci/pci-stub.c b/drivers/pci/pci-stub.c
index e408099fea52..6324c68602b4 100644
--- a/drivers/pci/pci-stub.c
+++ b/drivers/pci/pci-stub.c
@@ -36,6 +36,9 @@ static struct pci_driver stub_driver = {
 	.name		= "pci-stub",
 	.id_table	= NULL,	/* only dynamic id's */
 	.probe		= pci_stub_probe,
+	.driver		= {
+		.suppress_auto_claim_dma_owner = true,
+	},
 };
 
 static int __init pci_stub_init(void)

From patchwork Fri Dec 17 06:36:59 2021
From: Lu Baolu
Subject: [PATCH v4 04/13] PCI: portdrv: Suppress kernel DMA ownership auto-claiming
Date: Fri, 17 Dec 2021 14:36:59 +0800
Message-Id: <20211217063708.1740334-5-baolu.lu@linux.intel.com>

IOMMU grouping on PCI necessitates that if we lack isolation on a bridge, all of the downstream devices become part of the same IOMMU group as the bridge. The existing vfio framework allows the portdrv driver to be bound to the bridge while its downstream devices are assigned to user space. pci_dma_configure() marks the iommu_group as containing only devices with kernel drivers that manage DMA. Avoid this default behavior for portdrv in order to stay compatible with the current vfio policy.

Commit 5f096b14d421b ("vfio: Whitelist PCI bridges") extended the above policy to all kernel drivers of the bridge class. This is not always safe. For example, the shpchp_core driver relies on PCI MMIO access for the controller functionality. With its downstream devices assigned to userspace, the MMIO might be changed through user-initiated P2P accesses without any notification. This might break the kernel driver's integrity and lead to unpredictable consequences.

For any bridge driver, before suppressing the default kernel DMA ownership claim, we should consider:

 1) Does the bridge driver use DMA? Calling pci_set_master() or a
    dma_map_* API is a sure indication that the driver is doing DMA.

 2) If the bridge driver uses MMIO, is it tolerant to hostile userspace
    also touching the same MMIO registers via P2P DMA attacks?
    Conservatively, if the driver maps an MMIO region at all, we can say
    that it fails the test.
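As a sketch of how these two criteria map onto the flag introduced in patch 02, a hypothetical bridge driver that neither programs DMA nor maps MMIO could opt out of the automatic DMA_OWNER_DMA_API claim as below. The driver below is made up for illustration; only suppress_auto_claim_dma_owner comes from this series.

    #include <linux/module.h>
    #include <linux/pci.h>

    static int foo_bridge_probe(struct pci_dev *pdev,
                                const struct pci_device_id *id)
    {
            /*
             * Config-space access only: no pci_set_master(), no dma_map_*(),
             * no MMIO mapping, so the kernel DMA API claim is not needed.
             */
            return 0;
    }

    static struct pci_driver foo_bridge_driver = {
            .name           = "foo-bridge",
            .id_table       = NULL,         /* dynamic ids only, like pci-stub */
            .probe          = foo_bridge_probe,
            .driver         = {
                    .suppress_auto_claim_dma_owner = true,
            },
    };
    module_pci_driver(foo_bridge_driver);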
Suggested-by: Jason Gunthorpe
Suggested-by: Kevin Tian
Signed-off-by: Lu Baolu
---
 drivers/pci/pcie/portdrv_pci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index 35eca6277a96..c48a8734f9c4 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -202,7 +202,10 @@ static struct pci_driver pcie_portdriver = {
 
 	.err_handler	= &pcie_portdrv_err_handler,
 
-	.driver.pm	= PCIE_PORTDRV_PM_OPS,
+	.driver		= {
+		.pm	= PCIE_PORTDRV_PM_OPS,
+		.suppress_auto_claim_dma_owner = true,
+	},
 };
 
 static int __init dmi_pcie_pme_disable_msi(const struct dmi_system_id *d)

From patchwork Fri Dec 17 06:37:00 2021
From: Lu Baolu
Subject: [PATCH v4 05/13] iommu: Add security context management for assigned devices
Date: Fri, 17 Dec 2021 14:37:00 +0800
Message-Id: <20211217063708.1740334-6-baolu.lu@linux.intel.com>

When an iommu group has DMA_OWNER_PRIVATE_DOMAIN_USER set for the first time, it is a contract that the group can be assigned to userspace from now on. The group must be detached from the default iommu domain, and all devices in the group are blocked from doing DMA until the group is attached to a user-controlled iommu_domain. Correspondingly, the default domain should be re-attached after the last DMA_OWNER_PRIVATE_DOMAIN_USER owner releases its ownership.
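The contract above boils down to the following ordering; this is a condensed sketch of how an assignment path is expected to drive these interfaces, assuming the group-level set/release helpers exposed later in this series (patch 06), with the caller-supplied domain and cookie as placeholders.

    /* Sketch: claim USER ownership, then attach the user-controlled domain. */
    static int assign_group_to_user(struct iommu_group *group,
                                    struct iommu_domain *user_domain,
                                    void *user_cookie)
    {
            int ret;

            /*
             * The group is detached from its default domain here; device DMA
             * is blocked until a user domain is attached.
             */
            ret = iommu_group_set_dma_owner(group, DMA_OWNER_PRIVATE_DOMAIN_USER,
                                            user_cookie);
            if (ret)
                    return ret;

            ret = iommu_attach_group(user_domain, group);
            if (ret)        /* Releasing re-attaches the default domain. */
                    iommu_group_release_dma_owner(group,
                                                  DMA_OWNER_PRIVATE_DOMAIN_USER);
            return ret;
    }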
Signed-off-by: Lu Baolu --- drivers/iommu/iommu.c | 35 ++++++++++++++++++++++++++++++++--- 1 file changed, 32 insertions(+), 3 deletions(-) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 5439bf45afb2..573e253bad51 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -292,7 +292,12 @@ int iommu_probe_device(struct device *dev) mutex_lock(&group->mutex); iommu_alloc_default_domain(group, dev); - if (group->default_domain) { + /* + * If any device in the group has been initialized for user dma, + * avoid attaching the default domain. + */ + if (group->default_domain && + group->dma_owner != DMA_OWNER_PRIVATE_DOMAIN_USER) { ret = __iommu_attach_device(group->default_domain, dev); if (ret) { mutex_unlock(&group->mutex); @@ -2323,7 +2328,7 @@ static int __iommu_attach_group(struct iommu_domain *domain, { int ret; - if (group->default_domain && group->domain != group->default_domain) + if (group->domain && group->domain != group->default_domain) return -EBUSY; ret = __iommu_group_for_each_dev(group, domain, @@ -2360,7 +2365,12 @@ static void __iommu_detach_group(struct iommu_domain *domain, { int ret; - if (!group->default_domain) { + /* + * If any device in the group has been initialized for user dma, + * avoid re-attaching the default domain. + */ + if (!group->default_domain || + group->dma_owner == DMA_OWNER_PRIVATE_DOMAIN_USER) { __iommu_group_for_each_dev(group, domain, iommu_group_do_detach_device); group->domain = NULL; @@ -3373,6 +3383,16 @@ static int iommu_group_set_dma_owner(struct iommu_group *group, group->owner_cookie = owner_cookie; group->owner_cnt++; + /* + * We must ensure that any device DMAs issued after this call + * are discarded. DMAs can only reach real memory once someone + * has attached a real domain. + */ + if (owner == DMA_OWNER_PRIVATE_DOMAIN_USER && + group->domain && + !WARN_ON(group->domain != group->default_domain)) + __iommu_detach_group(group->domain, group); + unlock_out: mutex_unlock(&group->mutex); @@ -3391,6 +3411,15 @@ static void iommu_group_release_dma_owner(struct iommu_group *group, group->dma_owner = DMA_OWNER_DMA_API; + /* + * The UNMANAGED domain should be detached before all USER + * owners have been released. 
+	 */
+	if (owner == DMA_OWNER_PRIVATE_DOMAIN_USER) {
+		if (!WARN_ON(group->domain) && group->default_domain)
+			__iommu_attach_group(group->default_domain, group);
+	}
+
 unlock_out:
 	mutex_unlock(&group->mutex);
 }

From patchwork Fri Dec 17 06:37:01 2021
From: Lu Baolu
Subject: [PATCH v4 06/13] iommu: Expose group variants of dma ownership interfaces
Date: Fri, 17 Dec 2021 14:37:01 +0800
Message-Id: <20211217063708.1740334-7-baolu.lu@linux.intel.com>

vfio needs to set DMA_OWNER_PRIVATE_DOMAIN_USER for the entire group when attaching it to a vfio container. Expose group variants of the set/release DMA ownership interfaces for this purpose. This also exposes the helper iommu_group_dma_owner_unclaimed() so that vfio can report to userspace whether the group is viable for user assignment, for compatibility with VFIO_GROUP_FLAGS_VIABLE.
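For illustration, the owner_cookie is what gives the exclusion its "single user" meaning: the same owner type and cookie can be claimed repeatedly (ownership is refcounted per group), while a different cookie is rejected. A sketch of the expected behavior, where the cookies stand in for whatever identifies a user context (e.g. the container file in the vfio patches later in this series):

    /* Sketch: the same owner type with a different cookie is rejected. */
    static void demo_owner_cookie(struct iommu_group *group,
                                  void *user_a, void *user_b)
    {
            int ret;

            ret = iommu_group_set_dma_owner(group, DMA_OWNER_PRIVATE_DOMAIN_USER,
                                            user_a);
            if (ret)
                    return;         /* already owned by someone else */

            ret = iommu_group_set_dma_owner(group, DMA_OWNER_PRIVATE_DOMAIN_USER,
                                            user_b);
            WARN_ON(ret != -EBUSY); /* a different user cannot share the group */

            /* Only the successful claim is paired with a release. */
            iommu_group_release_dma_owner(group, DMA_OWNER_PRIVATE_DOMAIN_USER);
    }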
Signed-off-by: Lu Baolu --- include/linux/iommu.h | 21 +++++++++++++++++++ drivers/iommu/iommu.c | 47 ++++++++++++++++++++++++++++++++++++++----- 2 files changed, 63 insertions(+), 5 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 53a023ee1ac0..5ad4cf13370d 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -699,6 +699,10 @@ u32 iommu_sva_get_pasid(struct iommu_sva *handle); int iommu_device_set_dma_owner(struct device *dev, enum iommu_dma_owner owner, void *owner_cookie); void iommu_device_release_dma_owner(struct device *dev, enum iommu_dma_owner owner); +int iommu_group_set_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner, + void *owner_cookie); +void iommu_group_release_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner); +bool iommu_group_dma_owner_unclaimed(struct iommu_group *group); #else /* CONFIG_IOMMU_API */ @@ -1115,6 +1119,23 @@ static inline void iommu_device_release_dma_owner(struct device *dev, enum iommu_dma_owner owner) { } + +static inline int iommu_group_set_dma_owner(struct iommu_group *group, + enum iommu_dma_owner owner, + void *owner_cookie) +{ + return -EINVAL; +} + +static inline void iommu_group_release_dma_owner(struct iommu_group *group, + enum iommu_dma_owner owner) +{ +} + +static inline bool iommu_group_dma_owner_unclaimed(struct iommu_group *group) +{ + return false; +} #endif /* CONFIG_IOMMU_API */ /** diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 573e253bad51..8bec71b1cc18 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -3365,9 +3365,19 @@ static ssize_t iommu_group_store_type(struct iommu_group *group, return ret; } -static int iommu_group_set_dma_owner(struct iommu_group *group, - enum iommu_dma_owner owner, - void *owner_cookie) +/** + * iommu_group_set_dma_owner() - Set DMA ownership of a group + * @group: The group. + * @owner: DMA owner type. + * @owner_cookie: Caller specified pointer. Could be used for exclusive + * declaration. Could be NULL. + * + * This is to support backward compatibility for legacy vfio which manages + * dma ownership in group level. New invocations on this interface should be + * prohibited. Instead, please turn to iommu_device_set_dma_owner(). + */ +int iommu_group_set_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner, + void *owner_cookie) { int ret = 0; @@ -3398,9 +3408,16 @@ static int iommu_group_set_dma_owner(struct iommu_group *group, return ret; } +EXPORT_SYMBOL_GPL(iommu_group_set_dma_owner); -static void iommu_group_release_dma_owner(struct iommu_group *group, - enum iommu_dma_owner owner) +/** + * iommu_group_release_dma_owner() - Release DMA ownership of a group + * @group: The group. + * @owner: DMA owner type. + * + * Release the DMA ownership claimed by iommu_group_set_dma_owner(). + */ +void iommu_group_release_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner) { mutex_lock(&group->mutex); if (WARN_ON(!group->owner_cnt || group->dma_owner != owner)) @@ -3423,6 +3440,26 @@ static void iommu_group_release_dma_owner(struct iommu_group *group, unlock_out: mutex_unlock(&group->mutex); } +EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner); + +/** + * iommu_group_dma_owner_unclaimed() - Is group dma ownership claimed + * @group: The group. + * + * This provides status check on a given group. It is racey and only for + * non-binding status reporting. 
+ */
+bool iommu_group_dma_owner_unclaimed(struct iommu_group *group)
+{
+	unsigned int user;
+
+	mutex_lock(&group->mutex);
+	user = group->owner_cnt;
+	mutex_unlock(&group->mutex);
+
+	return !user;
+}
+EXPORT_SYMBOL_GPL(iommu_group_dma_owner_unclaimed);
 
 /**
  * iommu_device_set_dma_owner() - Set DMA ownership of a device

From patchwork Fri Dec 17 06:37:02 2021
From: Lu Baolu
Subject: [PATCH v4 07/13] iommu: Add iommu_at[de]tach_device_shared() for multi-device groups
Date: Fri, 17 Dec 2021 14:37:02 +0800
Message-Id: <20211217063708.1740334-8-baolu.lu@linux.intel.com>

The iommu_attach/detach_device() interfaces were exposed for device drivers to attach/detach their own domains. Commit 426a273834eae ("iommu: Limit iommu_attach/detach_device to device with their own group") restricted them to singleton groups to avoid different devices in a group attaching different domains.

Now that device DMA ownership has been introduced into the iommu core, we can add interfaces for multiple-device groups while still guaranteeing that all devices in a group share the same address space. The iommu_attach/detach_device_shared() interfaces can be used when multiple drivers sharing the group claim DMA_OWNER_PRIVATE_DOMAIN ownership. The first call of iommu_attach_device_shared() attaches the domain to the group; other drivers can join it later. The domain is detached from the group after all drivers have unjoined it.
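For illustration, two cooperating drivers of devices in the same group could share one private domain as below. This is a minimal sketch: the foo_* helpers are hypothetical, and only the ownership and shared attach/detach interfaces come from this series.

    /* Each driver claims the same ownership type, then joins the shared domain. */
    static int foo_join_shared_domain(struct device *dev,
                                      struct iommu_domain *domain)
    {
            int ret;

            ret = iommu_device_set_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN, NULL);
            if (ret)
                    return ret;

            /* First caller attaches the domain to the group, later callers join. */
            ret = iommu_attach_device_shared(domain, dev);
            if (ret)
                    iommu_device_release_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN);

            return ret;
    }

    static void foo_leave_shared_domain(struct device *dev,
                                        struct iommu_domain *domain)
    {
            /* The last caller detaches the domain from the group. */
            iommu_detach_device_shared(domain, dev);
            iommu_device_release_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN);
    }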
Signed-off-by: Jason Gunthorpe Signed-off-by: Lu Baolu Tested-by: Dmitry Osipenko --- include/linux/iommu.h | 13 +++++++ drivers/iommu/iommu.c | 79 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 92 insertions(+) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 5ad4cf13370d..1bc03118dfb3 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -703,6 +703,8 @@ int iommu_group_set_dma_owner(struct iommu_group *group, enum iommu_dma_owner ow void *owner_cookie); void iommu_group_release_dma_owner(struct iommu_group *group, enum iommu_dma_owner owner); bool iommu_group_dma_owner_unclaimed(struct iommu_group *group); +int iommu_attach_device_shared(struct iommu_domain *domain, struct device *dev); +void iommu_detach_device_shared(struct iommu_domain *domain, struct device *dev); #else /* CONFIG_IOMMU_API */ @@ -743,11 +745,22 @@ static inline int iommu_attach_device(struct iommu_domain *domain, return -ENODEV; } +static inline int iommu_attach_device_shared(struct iommu_domain *domain, + struct device *dev) +{ + return -ENODEV; +} + static inline void iommu_detach_device(struct iommu_domain *domain, struct device *dev) { } +static inline void iommu_detach_device_shared(struct iommu_domain *domain, + struct device *dev) +{ +} + static inline struct iommu_domain *iommu_get_domain_for_dev(struct device *dev) { return NULL; diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 8bec71b1cc18..3ad66cb9bedc 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -50,6 +50,7 @@ struct iommu_group { struct list_head entry; enum iommu_dma_owner dma_owner; unsigned int owner_cnt; + unsigned int attach_cnt; void *owner_cookie; }; @@ -3512,3 +3513,81 @@ void iommu_device_release_dma_owner(struct device *dev, enum iommu_dma_owner own iommu_group_put(group); } EXPORT_SYMBOL_GPL(iommu_device_release_dma_owner); + +/** + * iommu_attach_device_shared() - Attach shared domain to a device + * @domain: The shared domain. + * @dev: The device. + * + * Similar to iommu_attach_device(), but allowed for shared-group devices + * and guarantees that all devices in an iommu group could only be attached + * by a same iommu domain. The caller should explicitly set the dma ownership + * of DMA_OWNER_PRIVATE_DOMAIN or DMA_OWNER_PRIVATE_DOMAIN_USER type before + * calling it and use the paired helper iommu_detach_device_shared() for + * cleanup. + */ +int iommu_attach_device_shared(struct iommu_domain *domain, struct device *dev) +{ + struct iommu_group *group; + int ret = 0; + + group = iommu_group_get(dev); + if (!group) + return -ENODEV; + + mutex_lock(&group->mutex); + if (group->dma_owner != DMA_OWNER_PRIVATE_DOMAIN && + group->dma_owner != DMA_OWNER_PRIVATE_DOMAIN_USER) { + ret = -EPERM; + goto unlock_out; + } + + if (group->attach_cnt) { + if (group->domain != domain) { + ret = -EBUSY; + goto unlock_out; + } + } else { + ret = __iommu_attach_group(domain, group); + if (ret) + goto unlock_out; + } + + group->attach_cnt++; +unlock_out: + mutex_unlock(&group->mutex); + iommu_group_put(group); + + return ret; +} +EXPORT_SYMBOL_GPL(iommu_attach_device_shared); + +/** + * iommu_detach_device_shared() - Detach a domain from device + * @domain: The domain. + * @dev: The device. + * + * The detach helper paired with iommu_attach_device_shared(). 
+ */ +void iommu_detach_device_shared(struct iommu_domain *domain, struct device *dev) +{ + struct iommu_group *group; + + group = iommu_group_get(dev); + if (!group) + return; + + mutex_lock(&group->mutex); + if (WARN_ON(!group->attach_cnt || group->domain != domain || + (group->dma_owner != DMA_OWNER_PRIVATE_DOMAIN && + group->dma_owner != DMA_OWNER_PRIVATE_DOMAIN_USER))) + goto unlock_out; + + if (--group->attach_cnt == 0) + __iommu_detach_group(domain, group); + +unlock_out: + mutex_unlock(&group->mutex); + iommu_group_put(group); +} +EXPORT_SYMBOL_GPL(iommu_detach_device_shared); From patchwork Fri Dec 17 06:37:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683759 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6D5B6C433F5 for ; Fri, 17 Dec 2021 06:38:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233403AbhLQGik (ORCPT ); Fri, 17 Dec 2021 01:38:40 -0500 Received: from mga05.intel.com ([192.55.52.43]:56699 "EHLO mga05.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233382AbhLQGik (ORCPT ); Fri, 17 Dec 2021 01:38:40 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="325979496" X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="325979496" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:38:39 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623324" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:38:32 -0800 From: Lu Baolu To: Greg Kroah-Hartman , Joerg Roedel , Alex Williamson , Bjorn Helgaas , Jason Gunthorpe , Christoph Hellwig , Kevin Tian , Ashok Raj Cc: Will Deacon , Robin Murphy , Dan Williams , rafael@kernel.org, Diana Craciun , Cornelia Huck , Eric Auger , Liu Yi L , Jacob jun Pan , Chaitanya Kulkarni , Stuart Yoder , Laurentiu Tudor , Thierry Reding , David Airlie , Daniel Vetter , Jonathan Hunter , Li Yang , Dmitry Osipenko , iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu Subject: [PATCH v4 08/13] vfio: Set DMA USER ownership for VFIO devices Date: Fri, 17 Dec 2021 14:37:03 +0800 Message-Id: <20211217063708.1740334-9-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211217063708.1740334-1-baolu.lu@linux.intel.com> References: <20211217063708.1740334-1-baolu.lu@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org Set DMA_OWNER_PRIVATE_DOMAIN_USER when an iommu group is set to a container, and release DMA_OWNER_USER once the iommu group is unset from a container. 
Signed-off-by: Lu Baolu --- drivers/vfio/fsl-mc/vfio_fsl_mc.c | 1 + drivers/vfio/pci/vfio_pci.c | 3 +++ drivers/vfio/platform/vfio_amba.c | 1 + drivers/vfio/platform/vfio_platform.c | 1 + drivers/vfio/vfio.c | 13 ++++++++++++- 5 files changed, 18 insertions(+), 1 deletion(-) diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c index 6e2e62c6f47a..b749d092a185 100644 --- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c +++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c @@ -587,6 +587,7 @@ static struct fsl_mc_driver vfio_fsl_mc_driver = { .driver = { .name = "vfio-fsl-mc", .owner = THIS_MODULE, + .suppress_auto_claim_dma_owner = true, }, }; diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index a5ce92beb655..ce3e814b5506 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -193,6 +193,9 @@ static struct pci_driver vfio_pci_driver = { .remove = vfio_pci_remove, .sriov_configure = vfio_pci_sriov_configure, .err_handler = &vfio_pci_core_err_handlers, + .driver = { + .suppress_auto_claim_dma_owner = true, + }, }; static void __init vfio_pci_fill_ids(void) diff --git a/drivers/vfio/platform/vfio_amba.c b/drivers/vfio/platform/vfio_amba.c index badfffea14fb..2146ee52901a 100644 --- a/drivers/vfio/platform/vfio_amba.c +++ b/drivers/vfio/platform/vfio_amba.c @@ -94,6 +94,7 @@ static struct amba_driver vfio_amba_driver = { .drv = { .name = "vfio-amba", .owner = THIS_MODULE, + .suppress_auto_claim_dma_owner = true, }, }; diff --git a/drivers/vfio/platform/vfio_platform.c b/drivers/vfio/platform/vfio_platform.c index 68a1c87066d7..5ef06e668192 100644 --- a/drivers/vfio/platform/vfio_platform.c +++ b/drivers/vfio/platform/vfio_platform.c @@ -75,6 +75,7 @@ static struct platform_driver vfio_platform_driver = { .remove = vfio_platform_remove, .driver = { .name = "vfio-platform", + .suppress_auto_claim_dma_owner = true, }, }; diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index 735d1d344af9..b75ba7551079 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -1198,6 +1198,9 @@ static void __vfio_group_unset_container(struct vfio_group *group) driver->ops->detach_group(container->iommu_data, group->iommu_group); + iommu_group_release_dma_owner(group->iommu_group, + DMA_OWNER_PRIVATE_DOMAIN_USER); + group->container = NULL; wake_up(&group->container_q); list_del(&group->container_next); @@ -1282,13 +1285,21 @@ static int vfio_group_set_container(struct vfio_group *group, int container_fd) goto unlock_out; } + ret = iommu_group_set_dma_owner(group->iommu_group, + DMA_OWNER_PRIVATE_DOMAIN_USER, f.file); + if (ret) + goto unlock_out; + driver = container->iommu_driver; if (driver) { ret = driver->ops->attach_group(container->iommu_data, group->iommu_group, group->type); - if (ret) + if (ret) { + iommu_group_release_dma_owner(group->iommu_group, + DMA_OWNER_PRIVATE_DOMAIN_USER); goto unlock_out; + } } group->container = container; From patchwork Fri Dec 17 06:37:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683761 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7DBC6C433FE for ; Fri, 17 Dec 2021 06:38:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233417AbhLQGis (ORCPT ); Fri, 17 
Dec 2021 01:38:48 -0500
From: Lu Baolu
Subject: [PATCH v4 09/13] vfio: Remove use of vfio_group_viable()
Date: Fri, 17 Dec 2021 14:37:04 +0800
Message-Id: <20211217063708.1740334-10-baolu.lu@linux.intel.com>

Since DMA USER ownership is claimed for the iommu group when a vfio group is added to a vfio container, the vfio group's viability is guaranteed as long as group->container_users > 0. Remove the group viability checks that are only reached when group->container_users is non-zero.

The only remaining user is GROUP_GET_STATUS, which can be called at any time while the group fd is valid. Replace vfio_group_viable() there by asking the iommu core for the viability status directly.
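From the userspace side, the viability report kept alive here is consumed roughly as below; a minimal sketch using the existing VFIO group ioctl, where group_fd and the error handling are assumptions for illustration.

    #include <err.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Userspace sketch: fail unless the group is viable for user assignment. */
    static void check_group_viable(int group_fd)
    {
            struct vfio_group_status status = { .argsz = sizeof(status) };

            if (ioctl(group_fd, VFIO_GROUP_GET_STATUS, &status))
                    err(1, "VFIO_GROUP_GET_STATUS");
            if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
                    errx(1, "group not viable: a device is claimed by a non-vfio driver");
    }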
Signed-off-by: Lu Baolu --- drivers/vfio/vfio.c | 18 ++++++------------ 1 file changed, 6 insertions(+), 12 deletions(-) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index b75ba7551079..241756b85705 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -1316,12 +1316,6 @@ static int vfio_group_set_container(struct vfio_group *group, int container_fd) return ret; } -static bool vfio_group_viable(struct vfio_group *group) -{ - return (iommu_group_for_each_dev(group->iommu_group, - group, vfio_dev_viable) == 0); -} - static int vfio_group_add_container_user(struct vfio_group *group) { if (!atomic_inc_not_zero(&group->container_users)) @@ -1331,7 +1325,7 @@ static int vfio_group_add_container_user(struct vfio_group *group) atomic_dec(&group->container_users); return -EPERM; } - if (!group->container->iommu_driver || !vfio_group_viable(group)) { + if (!group->container->iommu_driver) { atomic_dec(&group->container_users); return -EINVAL; } @@ -1349,7 +1343,7 @@ static int vfio_group_get_device_fd(struct vfio_group *group, char *buf) int ret = 0; if (0 == atomic_read(&group->container_users) || - !group->container->iommu_driver || !vfio_group_viable(group)) + !group->container->iommu_driver) return -EINVAL; if (group->type == VFIO_NO_IOMMU && !capable(CAP_SYS_RAWIO)) @@ -1441,11 +1435,11 @@ static long vfio_group_fops_unl_ioctl(struct file *filep, status.flags = 0; - if (vfio_group_viable(group)) - status.flags |= VFIO_GROUP_FLAGS_VIABLE; - if (group->container) - status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET; + status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET | + VFIO_GROUP_FLAGS_VIABLE; + else if (iommu_group_dma_owner_unclaimed(group->iommu_group)) + status.flags |= VFIO_GROUP_FLAGS_VIABLE; if (copy_to_user((void __user *)arg, &status, minsz)) return -EFAULT; From patchwork Fri Dec 17 06:37:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683765 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55765C43219 for ; Fri, 17 Dec 2021 06:39:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233460AbhLQGjE (ORCPT ); Fri, 17 Dec 2021 01:39:04 -0500 Received: from mga05.intel.com ([192.55.52.43]:56719 "EHLO mga05.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233445AbhLQGjD (ORCPT ); Fri, 17 Dec 2021 01:39:03 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="325979542" X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="325979542" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:38:55 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623398" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:38:47 -0800 From: Lu Baolu To: Greg Kroah-Hartman , Joerg Roedel , Alex Williamson , Bjorn Helgaas , Jason Gunthorpe , Christoph Hellwig , Kevin Tian , Ashok Raj Cc: Will Deacon , Robin Murphy , Dan Williams , rafael@kernel.org, Diana Craciun , Cornelia Huck , Eric Auger , Liu Yi L , Jacob jun Pan , Chaitanya Kulkarni , Stuart Yoder , Laurentiu Tudor , Thierry Reding , David Airlie , Daniel Vetter , Jonathan Hunter , Li 
Yang , Dmitry Osipenko , iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu Subject: [PATCH v4 10/13] vfio: Delete the unbound_list Date: Fri, 17 Dec 2021 14:37:05 +0800 Message-Id: <20211217063708.1740334-11-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211217063708.1740334-1-baolu.lu@linux.intel.com> References: <20211217063708.1740334-1-baolu.lu@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org From: Jason Gunthorpe commit 60720a0fc646 ("vfio: Add device tracking during unbind") added the unbound list to plug a problem with KVM where KVM_DEV_VFIO_GROUP_DEL relied on vfio_group_get_external_user() succeeding to return the vfio_group from a group file descriptor. The unbound list allowed vfio_group_get_external_user() to continue to succeed in edge cases. However commit 5d6dee80a1e9 ("vfio: New external user group/file match") deleted the call to vfio_group_get_external_user() during KVM_DEV_VFIO_GROUP_DEL. Instead vfio_external_group_match_file() is used to directly match the file descriptor to the group pointer. This in turn avoids the call down to vfio_dev_viable() during KVM_DEV_VFIO_GROUP_DEL and also avoids the trouble the first commit was trying to fix. There are no other users of vfio_dev_viable() that care about the time after vfio_unregister_group_dev() returns, so simply delete the unbound_list entirely. Reviewed-by: Chaitanya Kulkarni Signed-off-by: Jason Gunthorpe Signed-off-by: Lu Baolu --- drivers/vfio/vfio.c | 74 ++------------------------------------------- 1 file changed, 2 insertions(+), 72 deletions(-) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index 241756b85705..6426b29e73a2 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -62,11 +62,6 @@ struct vfio_container { bool noiommu; }; -struct vfio_unbound_dev { - struct device *dev; - struct list_head unbound_next; -}; - struct vfio_group { struct device dev; struct cdev cdev; @@ -79,8 +74,6 @@ struct vfio_group { struct notifier_block nb; struct list_head vfio_next; struct list_head container_next; - struct list_head unbound_list; - struct mutex unbound_lock; atomic_t opened; wait_queue_head_t container_q; enum vfio_group_type type; @@ -340,16 +333,8 @@ vfio_group_get_from_iommu(struct iommu_group *iommu_group) static void vfio_group_release(struct device *dev) { struct vfio_group *group = container_of(dev, struct vfio_group, dev); - struct vfio_unbound_dev *unbound, *tmp; - - list_for_each_entry_safe(unbound, tmp, - &group->unbound_list, unbound_next) { - list_del(&unbound->unbound_next); - kfree(unbound); - } mutex_destroy(&group->device_lock); - mutex_destroy(&group->unbound_lock); iommu_group_put(group->iommu_group); ida_free(&vfio.group_ida, MINOR(group->dev.devt)); kfree(group); @@ -381,8 +366,6 @@ static struct vfio_group *vfio_group_alloc(struct iommu_group *iommu_group, refcount_set(&group->users, 1); INIT_LIST_HEAD(&group->device_list); mutex_init(&group->device_lock); - INIT_LIST_HEAD(&group->unbound_list); - mutex_init(&group->unbound_lock); init_waitqueue_head(&group->container_q); group->iommu_group = iommu_group; /* put in vfio_group_release() */ @@ -571,19 +554,8 @@ static int vfio_dev_viable(struct device *dev, void *data) struct vfio_group *group = data; struct vfio_device *device; struct device_driver *drv = READ_ONCE(dev->driver); - struct vfio_unbound_dev *unbound; - int ret = -EINVAL; - mutex_lock(&group->unbound_lock); - 
list_for_each_entry(unbound, &group->unbound_list, unbound_next) { - if (dev == unbound->dev) { - ret = 0; - break; - } - } - mutex_unlock(&group->unbound_lock); - - if (!ret || !drv || vfio_dev_driver_allowed(dev, drv)) + if (!drv || vfio_dev_driver_allowed(dev, drv)) return 0; device = vfio_group_get_device(group, dev); @@ -592,7 +564,7 @@ static int vfio_dev_viable(struct device *dev, void *data) return 0; } - return ret; + return -EINVAL; } /* @@ -634,7 +606,6 @@ static int vfio_iommu_group_notifier(struct notifier_block *nb, { struct vfio_group *group = container_of(nb, struct vfio_group, nb); struct device *dev = data; - struct vfio_unbound_dev *unbound; switch (action) { case IOMMU_GROUP_NOTIFY_ADD_DEVICE: @@ -663,28 +634,6 @@ static int vfio_iommu_group_notifier(struct notifier_block *nb, __func__, iommu_group_id(group->iommu_group), dev->driver->name); break; - case IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER: - dev_dbg(dev, "%s: group %d unbound from driver\n", __func__, - iommu_group_id(group->iommu_group)); - /* - * XXX An unbound device in a live group is ok, but we'd - * really like to avoid the above BUG_ON by preventing other - * drivers from binding to it. Once that occurs, we have to - * stop the system to maintain isolation. At a minimum, we'd - * want a toggle to disable driver auto probe for this device. - */ - - mutex_lock(&group->unbound_lock); - list_for_each_entry(unbound, - &group->unbound_list, unbound_next) { - if (dev == unbound->dev) { - list_del(&unbound->unbound_next); - kfree(unbound); - break; - } - } - mutex_unlock(&group->unbound_lock); - break; } return NOTIFY_OK; } @@ -889,29 +838,10 @@ static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group, void vfio_unregister_group_dev(struct vfio_device *device) { struct vfio_group *group = device->group; - struct vfio_unbound_dev *unbound; unsigned int i = 0; bool interrupted = false; long rc; - /* - * When the device is removed from the group, the group suddenly - * becomes non-viable; the device has a driver (until the unbind - * completes), but it's not present in the group. This is bad news - * for any external users that need to re-acquire a group reference - * in order to match and release their existing reference. To - * solve this, we track such devices on the unbound_list to bridge - * the gap until they're fully unbound. 
- */ - unbound = kzalloc(sizeof(*unbound), GFP_KERNEL); - if (unbound) { - unbound->dev = device->dev; - mutex_lock(&group->unbound_lock); - list_add(&unbound->unbound_next, &group->unbound_list); - mutex_unlock(&group->unbound_lock); - } - WARN_ON(!unbound); - vfio_device_put(device); rc = try_wait_for_completion(&device->comp); while (rc <= 0) { From patchwork Fri Dec 17 06:37:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683763 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14B67C433FE for ; Fri, 17 Dec 2021 06:39:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233441AbhLQGjD (ORCPT ); Fri, 17 Dec 2021 01:39:03 -0500 Received: from mga14.intel.com ([192.55.52.115]:25480 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233422AbhLQGjC (ORCPT ); Fri, 17 Dec 2021 01:39:02 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="239916360" X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="239916360" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:39:02 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623419" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:38:55 -0800 From: Lu Baolu To: Greg Kroah-Hartman , Joerg Roedel , Alex Williamson , Bjorn Helgaas , Jason Gunthorpe , Christoph Hellwig , Kevin Tian , Ashok Raj Cc: Will Deacon , Robin Murphy , Dan Williams , rafael@kernel.org, Diana Craciun , Cornelia Huck , Eric Auger , Liu Yi L , Jacob jun Pan , Chaitanya Kulkarni , Stuart Yoder , Laurentiu Tudor , Thierry Reding , David Airlie , Daniel Vetter , Jonathan Hunter , Li Yang , Dmitry Osipenko , iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu Subject: [PATCH v4 11/13] vfio: Remove iommu group notifier Date: Fri, 17 Dec 2021 14:37:06 +0800 Message-Id: <20211217063708.1740334-12-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211217063708.1740334-1-baolu.lu@linux.intel.com> References: <20211217063708.1740334-1-baolu.lu@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The iommu core and driver core have been enhanced to avoid unsafe driver binding to a live group after iommu_group_set_dma_owner(PRIVATE_USER) has been called. There's no need to register an iommu group notifier anymore. This removes the iommu group notifier, which contains BUG_ON() and WARN().
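As a minimal sketch of the enforcement that replaces the notifier: the example_* helpers below are hypothetical, and only iommu_device_set_dma_owner(), iommu_device_release_dma_owner() and the owner types come from patch 1 of this series.

#include <linux/iommu.h>

/*
 * Sketch only: while a group is claimed for user-space DMA, the driver
 * core's automatic DMA_OWNER_DMA_API claim at bind time fails with
 * -EBUSY, so vfio no longer needs BUG_ON()/WARN() policing in a
 * notifier callback.
 */
static int example_claim_for_user(struct device *dev, void *user_cookie)
{
	return iommu_device_set_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN_USER,
					  user_cookie);
}

static void example_release_user_claim(struct device *dev)
{
	iommu_device_release_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN_USER);
}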
Signed-off-by: Lu Baolu --- drivers/vfio/vfio.c | 147 -------------------------------------------- 1 file changed, 147 deletions(-) diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c index 6426b29e73a2..539f5da9eb34 100644 --- a/drivers/vfio/vfio.c +++ b/drivers/vfio/vfio.c @@ -71,7 +71,6 @@ struct vfio_group { struct vfio_container *container; struct list_head device_list; struct mutex device_lock; - struct notifier_block nb; struct list_head vfio_next; struct list_head container_next; atomic_t opened; @@ -274,8 +273,6 @@ void vfio_unregister_iommu_driver(const struct vfio_iommu_driver_ops *ops) } EXPORT_SYMBOL_GPL(vfio_unregister_iommu_driver); -static int vfio_iommu_group_notifier(struct notifier_block *nb, - unsigned long action, void *data); static void vfio_group_get(struct vfio_group *group); /* @@ -395,13 +392,6 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group, goto err_put; } - group->nb.notifier_call = vfio_iommu_group_notifier; - err = iommu_group_register_notifier(iommu_group, &group->nb); - if (err) { - ret = ERR_PTR(err); - goto err_put; - } - mutex_lock(&vfio.group_lock); /* Did we race creating this group? */ @@ -422,7 +412,6 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group, err_unlock: mutex_unlock(&vfio.group_lock); - iommu_group_unregister_notifier(group->iommu_group, &group->nb); err_put: put_device(&group->dev); return ret; @@ -447,7 +436,6 @@ static void vfio_group_put(struct vfio_group *group) cdev_device_del(&group->cdev, &group->dev); mutex_unlock(&vfio.group_lock); - iommu_group_unregister_notifier(group->iommu_group, &group->nb); put_device(&group->dev); } @@ -503,141 +491,6 @@ static struct vfio_device *vfio_group_get_device(struct vfio_group *group, return NULL; } -/* - * Some drivers, like pci-stub, are only used to prevent other drivers from - * claiming a device and are therefore perfectly legitimate for a user owned - * group. The pci-stub driver has no dependencies on DMA or the IOVA mapping - * of the device, but it does prevent the user from having direct access to - * the device, which is useful in some circumstances. - * - * We also assume that we can include PCI interconnect devices, ie. bridges. - * IOMMU grouping on PCI necessitates that if we lack isolation on a bridge - * then all of the downstream devices will be part of the same IOMMU group as - * the bridge. Thus, if placing the bridge into the user owned IOVA space - * breaks anything, it only does so for user owned devices downstream. Note - * that error notification via MSI can be affected for platforms that handle - * MSI within the same IOVA space as DMA. - */ -static const char * const vfio_driver_allowed[] = { "pci-stub" }; - -static bool vfio_dev_driver_allowed(struct device *dev, - struct device_driver *drv) -{ - if (dev_is_pci(dev)) { - struct pci_dev *pdev = to_pci_dev(dev); - - if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL) - return true; - } - - return match_string(vfio_driver_allowed, - ARRAY_SIZE(vfio_driver_allowed), - drv->name) >= 0; -} - -/* - * A vfio group is viable for use by userspace if all devices are in - * one of the following states: - * - driver-less - * - bound to a vfio driver - * - bound to an otherwise allowed driver - * - a PCI interconnect device - * - * We use two methods to determine whether a device is bound to a vfio - * driver. The first is to test whether the device exists in the vfio - * group. 
The second is to test if the device exists on the group - * unbound_list, indicating it's in the middle of transitioning from - * a vfio driver to driver-less. - */ -static int vfio_dev_viable(struct device *dev, void *data) -{ - struct vfio_group *group = data; - struct vfio_device *device; - struct device_driver *drv = READ_ONCE(dev->driver); - - if (!drv || vfio_dev_driver_allowed(dev, drv)) - return 0; - - device = vfio_group_get_device(group, dev); - if (device) { - vfio_device_put(device); - return 0; - } - - return -EINVAL; -} - -/* - * Async device support - */ -static int vfio_group_nb_add_dev(struct vfio_group *group, struct device *dev) -{ - struct vfio_device *device; - - /* Do we already know about it? We shouldn't */ - device = vfio_group_get_device(group, dev); - if (WARN_ON_ONCE(device)) { - vfio_device_put(device); - return 0; - } - - /* Nothing to do for idle groups */ - if (!atomic_read(&group->container_users)) - return 0; - - /* TODO Prevent device auto probing */ - dev_WARN(dev, "Device added to live group %d!\n", - iommu_group_id(group->iommu_group)); - - return 0; -} - -static int vfio_group_nb_verify(struct vfio_group *group, struct device *dev) -{ - /* We don't care what happens when the group isn't in use */ - if (!atomic_read(&group->container_users)) - return 0; - - return vfio_dev_viable(dev, group); -} - -static int vfio_iommu_group_notifier(struct notifier_block *nb, - unsigned long action, void *data) -{ - struct vfio_group *group = container_of(nb, struct vfio_group, nb); - struct device *dev = data; - - switch (action) { - case IOMMU_GROUP_NOTIFY_ADD_DEVICE: - vfio_group_nb_add_dev(group, dev); - break; - case IOMMU_GROUP_NOTIFY_DEL_DEVICE: - /* - * Nothing to do here. If the device is in use, then the - * vfio sub-driver should block the remove callback until - * it is unused. If the device is unused or attached to a - * stub driver, then it should be released and we don't - * care that it will be going away. 
- */ - break; - case IOMMU_GROUP_NOTIFY_BIND_DRIVER: - dev_dbg(dev, "%s: group %d binding to driver\n", __func__, - iommu_group_id(group->iommu_group)); - break; - case IOMMU_GROUP_NOTIFY_BOUND_DRIVER: - dev_dbg(dev, "%s: group %d bound to driver %s\n", __func__, - iommu_group_id(group->iommu_group), dev->driver->name); - BUG_ON(vfio_group_nb_verify(group, dev)); - break; - case IOMMU_GROUP_NOTIFY_UNBIND_DRIVER: - dev_dbg(dev, "%s: group %d unbinding from driver %s\n", - __func__, iommu_group_id(group->iommu_group), - dev->driver->name); - break; - } - return NOTIFY_OK; -} - /* * VFIO driver API */ From patchwork Fri Dec 17 06:37:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683767 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22778C433F5 for ; Fri, 17 Dec 2021 06:39:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233464AbhLQGjO (ORCPT ); Fri, 17 Dec 2021 01:39:14 -0500 Received: from mga01.intel.com ([192.55.52.88]:16346 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233484AbhLQGjJ (ORCPT ); Fri, 17 Dec 2021 01:39:09 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="263864940" X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="263864940" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:39:09 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623437" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:39:02 -0800 From: Lu Baolu To: Greg Kroah-Hartman , Joerg Roedel , Alex Williamson , Bjorn Helgaas , Jason Gunthorpe , Christoph Hellwig , Kevin Tian , Ashok Raj Cc: Will Deacon , Robin Murphy , Dan Williams , rafael@kernel.org, Diana Craciun , Cornelia Huck , Eric Auger , Liu Yi L , Jacob jun Pan , Chaitanya Kulkarni , Stuart Yoder , Laurentiu Tudor , Thierry Reding , David Airlie , Daniel Vetter , Jonathan Hunter , Li Yang , Dmitry Osipenko , iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu , Christoph Hellwig Subject: [PATCH v4 12/13] iommu: Remove iommu group changes notifier Date: Fri, 17 Dec 2021 14:37:07 +0800 Message-Id: <20211217063708.1740334-13-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211217063708.1740334-1-baolu.lu@linux.intel.com> References: <20211217063708.1740334-1-baolu.lu@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org The iommu group changes notifier is no longer referenced anywhere in the tree. Remove it to avoid dead code.
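For illustration, the notifier-free way to ask about a group's state is a direct ownership query; the wrapper below is hypothetical, and only iommu_group_dma_owner_unclaimed() comes from earlier in this series.

#include <linux/iommu.h>

/*
 * Hypothetical wrapper: instead of subscribing to group-change
 * notifications, a consumer such as vfio checks whether DMA ownership
 * of the group is still unclaimed before exposing it to user space.
 */
static bool example_group_available_for_user(struct iommu_group *iommu_group)
{
	return iommu_group_dma_owner_unclaimed(iommu_group);
}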
Suggested-by: Christoph Hellwig Signed-off-by: Lu Baolu --- include/linux/iommu.h | 23 ------------- drivers/iommu/iommu.c | 75 ------------------------------------------- 2 files changed, 98 deletions(-) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 1bc03118dfb3..860ac545ac77 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -417,13 +417,6 @@ static inline void iommu_iotlb_gather_init(struct iommu_iotlb_gather *gather) }; } -#define IOMMU_GROUP_NOTIFY_ADD_DEVICE 1 /* Device added */ -#define IOMMU_GROUP_NOTIFY_DEL_DEVICE 2 /* Pre Device removed */ -#define IOMMU_GROUP_NOTIFY_BIND_DRIVER 3 /* Pre Driver bind */ -#define IOMMU_GROUP_NOTIFY_BOUND_DRIVER 4 /* Post Driver bind */ -#define IOMMU_GROUP_NOTIFY_UNBIND_DRIVER 5 /* Pre Driver unbind */ -#define IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER 6 /* Post Driver unbind */ - extern int bus_set_iommu(struct bus_type *bus, const struct iommu_ops *ops); extern int bus_iommu_probe(struct bus_type *bus); extern bool iommu_present(struct bus_type *bus); @@ -496,10 +489,6 @@ extern int iommu_group_for_each_dev(struct iommu_group *group, void *data, extern struct iommu_group *iommu_group_get(struct device *dev); extern struct iommu_group *iommu_group_ref_get(struct iommu_group *group); extern void iommu_group_put(struct iommu_group *group); -extern int iommu_group_register_notifier(struct iommu_group *group, - struct notifier_block *nb); -extern int iommu_group_unregister_notifier(struct iommu_group *group, - struct notifier_block *nb); extern int iommu_register_device_fault_handler(struct device *dev, iommu_dev_fault_handler_t handler, void *data); @@ -913,18 +902,6 @@ static inline void iommu_group_put(struct iommu_group *group) { } -static inline int iommu_group_register_notifier(struct iommu_group *group, - struct notifier_block *nb) -{ - return -ENODEV; -} - -static inline int iommu_group_unregister_notifier(struct iommu_group *group, - struct notifier_block *nb) -{ - return 0; -} - static inline int iommu_register_device_fault_handler(struct device *dev, iommu_dev_fault_handler_t handler, diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 3ad66cb9bedc..053f6ea54fbd 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include #include @@ -40,7 +39,6 @@ struct iommu_group { struct kobject *devices_kobj; struct list_head devices; struct mutex mutex; - struct blocking_notifier_head notifier; void *iommu_data; void (*iommu_data_release)(void *iommu_data); char *name; @@ -629,7 +627,6 @@ struct iommu_group *iommu_group_alloc(void) mutex_init(&group->mutex); INIT_LIST_HEAD(&group->devices); INIT_LIST_HEAD(&group->entry); - BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier); ret = ida_simple_get(&iommu_group_ida, 0, 0, GFP_KERNEL); if (ret < 0) { @@ -904,10 +901,6 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev) if (ret) goto err_put_group; - /* Notify any listeners about change to group. */ - blocking_notifier_call_chain(&group->notifier, - IOMMU_GROUP_NOTIFY_ADD_DEVICE, dev); - trace_add_device_to_group(group->id, dev); dev_info(dev, "Adding to iommu group %d\n", group->id); @@ -949,10 +942,6 @@ void iommu_group_remove_device(struct device *dev) dev_info(dev, "Removing from iommu group %d\n", group->id); - /* Pre-notify listeners that a device is being removed. 
*/ - blocking_notifier_call_chain(&group->notifier, - IOMMU_GROUP_NOTIFY_DEL_DEVICE, dev); - mutex_lock(&group->mutex); list_for_each_entry(tmp_device, &group->devices, list) { if (tmp_device->dev == dev) { @@ -1075,36 +1064,6 @@ void iommu_group_put(struct iommu_group *group) } EXPORT_SYMBOL_GPL(iommu_group_put); -/** - * iommu_group_register_notifier - Register a notifier for group changes - * @group: the group to watch - * @nb: notifier block to signal - * - * This function allows iommu group users to track changes in a group. - * See include/linux/iommu.h for actions sent via this notifier. Caller - * should hold a reference to the group throughout notifier registration. - */ -int iommu_group_register_notifier(struct iommu_group *group, - struct notifier_block *nb) -{ - return blocking_notifier_chain_register(&group->notifier, nb); -} -EXPORT_SYMBOL_GPL(iommu_group_register_notifier); - -/** - * iommu_group_unregister_notifier - Unregister a notifier - * @group: the group to watch - * @nb: notifier block to signal - * - * Unregister a previously registered group notifier block. - */ -int iommu_group_unregister_notifier(struct iommu_group *group, - struct notifier_block *nb) -{ - return blocking_notifier_chain_unregister(&group->notifier, nb); -} -EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier); - /** * iommu_register_device_fault_handler() - Register a device fault handler * @dev: the device @@ -1653,14 +1612,8 @@ static int remove_iommu_group(struct device *dev, void *data) static int iommu_bus_notifier(struct notifier_block *nb, unsigned long action, void *data) { - unsigned long group_action = 0; struct device *dev = data; - struct iommu_group *group; - /* - * ADD/DEL call into iommu driver ops if provided, which may - * result in ADD/DEL notifiers to group->notifier - */ if (action == BUS_NOTIFY_ADD_DEVICE) { int ret; @@ -1671,34 +1624,6 @@ static int iommu_bus_notifier(struct notifier_block *nb, return NOTIFY_OK; } - /* - * Remaining BUS_NOTIFYs get filtered and republished to the - * group, if anyone is listening - */ - group = iommu_group_get(dev); - if (!group) - return 0; - - switch (action) { - case BUS_NOTIFY_BIND_DRIVER: - group_action = IOMMU_GROUP_NOTIFY_BIND_DRIVER; - break; - case BUS_NOTIFY_BOUND_DRIVER: - group_action = IOMMU_GROUP_NOTIFY_BOUND_DRIVER; - break; - case BUS_NOTIFY_UNBIND_DRIVER: - group_action = IOMMU_GROUP_NOTIFY_UNBIND_DRIVER; - break; - case BUS_NOTIFY_UNBOUND_DRIVER: - group_action = IOMMU_GROUP_NOTIFY_UNBOUND_DRIVER; - break; - } - - if (group_action) - blocking_notifier_call_chain(&group->notifier, - group_action, dev); - - iommu_group_put(group); return 0; } From patchwork Fri Dec 17 06:37:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Baolu Lu X-Patchwork-Id: 12683769 X-Patchwork-Delegate: bhelgaas@google.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58500C433F5 for ; Fri, 17 Dec 2021 06:39:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233487AbhLQGjS (ORCPT ); Fri, 17 Dec 2021 01:39:18 -0500 Received: from mga11.intel.com ([192.55.52.93]:61855 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233489AbhLQGjR (ORCPT ); Fri, 17 Dec 2021 01:39:17 -0500 X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="237232563" 
X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="237232563" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 22:39:16 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,213,1635231600"; d="scan'208";a="519623456" Received: from allen-box.sh.intel.com ([10.239.159.118]) by orsmga008.jf.intel.com with ESMTP; 16 Dec 2021 22:39:09 -0800 From: Lu Baolu To: Greg Kroah-Hartman , Joerg Roedel , Alex Williamson , Bjorn Helgaas , Jason Gunthorpe , Christoph Hellwig , Kevin Tian , Ashok Raj Cc: Will Deacon , Robin Murphy , Dan Williams , rafael@kernel.org, Diana Craciun , Cornelia Huck , Eric Auger , Liu Yi L , Jacob jun Pan , Chaitanya Kulkarni , Stuart Yoder , Laurentiu Tudor , Thierry Reding , David Airlie , Daniel Vetter , Jonathan Hunter , Li Yang , Dmitry Osipenko , iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Lu Baolu Subject: [PATCH v4 13/13] drm/tegra: Use the iommu dma_owner mechanism Date: Fri, 17 Dec 2021 14:37:08 +0800 Message-Id: <20211217063708.1740334-14-baolu.lu@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211217063708.1740334-1-baolu.lu@linux.intel.com> References: <20211217063708.1740334-1-baolu.lu@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org From: Jason Gunthorpe Tegra joins many platform devices onto the same iommu_domain and builds sort-of a DMA API on top of it. Given that iommu_attach/detatch_device_shared() has supported this usage model. Each device that wants to use the special domain will use suppress_auto_claim_dma_owner and call iommu_attach_device_shared() which will use dma owner framework to lock out other usages of the group and refcount the domain attachment. When the last device calls detatch the domain will be disconnected. Signed-off-by: Jason Gunthorpe Signed-off-by: Lu Baolu Tested-by: Dmitry Osipenko # Nexus7 T30 --- drivers/gpu/drm/tegra/dc.c | 1 + drivers/gpu/drm/tegra/drm.c | 54 +++++++++++++++++------------------- drivers/gpu/drm/tegra/gr2d.c | 1 + drivers/gpu/drm/tegra/gr3d.c | 1 + drivers/gpu/drm/tegra/vic.c | 3 +- 5 files changed, 30 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c index a29d64f87563..8fd7a083cc44 100644 --- a/drivers/gpu/drm/tegra/dc.c +++ b/drivers/gpu/drm/tegra/dc.c @@ -3108,6 +3108,7 @@ struct platform_driver tegra_dc_driver = { .driver = { .name = "tegra-dc", .of_match_table = tegra_dc_of_match, + .suppress_auto_claim_dma_owner = true, }, .probe = tegra_dc_probe, .remove = tegra_dc_remove, diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c index 8d37d6b00562..8a5fd390f85f 100644 --- a/drivers/gpu/drm/tegra/drm.c +++ b/drivers/gpu/drm/tegra/drm.c @@ -928,12 +928,15 @@ int tegra_drm_unregister_client(struct tegra_drm *tegra, return 0; } +/* + * Clients which use this function must set suppress_auto_claim_dma_owner in + * their platform_driver's device_driver struct. + */ int host1x_client_iommu_attach(struct host1x_client *client) { struct iommu_domain *domain = iommu_get_domain_for_dev(client->dev); struct drm_device *drm = dev_get_drvdata(client->host); struct tegra_drm *tegra = drm->dev_private; - struct iommu_group *group = NULL; int err; /* @@ -941,48 +944,41 @@ int host1x_client_iommu_attach(struct host1x_client *client) * not the shared IOMMU domain, don't try to attach it to a different * domain. 
This allows using the IOMMU-backed DMA API. */ - if (domain && domain != tegra->domain) - return 0; + client->group = NULL; + if (!client->dev->iommu_group || (domain && domain != tegra->domain)) + return iommu_device_set_dma_owner(client->dev, + DMA_OWNER_DMA_API, NULL); - if (tegra->domain) { - group = iommu_group_get(client->dev); - if (!group) - return -ENODEV; - - if (domain != tegra->domain) { - err = iommu_attach_group(tegra->domain, group); - if (err < 0) { - iommu_group_put(group); - return err; - } - } + err = iommu_device_set_dma_owner(client->dev, + DMA_OWNER_PRIVATE_DOMAIN, NULL); + if (err) + return err; - tegra->use_explicit_iommu = true; + err = iommu_attach_device_shared(tegra->domain, client->dev); + if (err) { + iommu_device_release_dma_owner(client->dev, + DMA_OWNER_PRIVATE_DOMAIN); + return err; } - client->group = group; - + tegra->use_explicit_iommu = true; + client->group = client->dev->iommu_group; return 0; } void host1x_client_iommu_detach(struct host1x_client *client) { + struct iommu_domain *domain = iommu_get_domain_for_dev(client->dev); struct drm_device *drm = dev_get_drvdata(client->host); struct tegra_drm *tegra = drm->dev_private; - struct iommu_domain *domain; if (client->group) { - /* - * Devices that are part of the same group may no longer be - * attached to a domain at this point because their group may - * have been detached by an earlier client. - */ - domain = iommu_get_domain_for_dev(client->dev); - if (domain) - iommu_detach_group(tegra->domain, client->group); - - iommu_group_put(client->group); + iommu_detach_device_shared(tegra->domain, client->dev); + iommu_device_release_dma_owner(client->dev, + DMA_OWNER_PRIVATE_DOMAIN); client->group = NULL; + } else { + iommu_device_release_dma_owner(client->dev, DMA_OWNER_DMA_API); } } diff --git a/drivers/gpu/drm/tegra/gr2d.c b/drivers/gpu/drm/tegra/gr2d.c index de288cba3905..2e8bb9342da2 100644 --- a/drivers/gpu/drm/tegra/gr2d.c +++ b/drivers/gpu/drm/tegra/gr2d.c @@ -268,6 +268,7 @@ struct platform_driver tegra_gr2d_driver = { .driver = { .name = "tegra-gr2d", .of_match_table = gr2d_match, + .suppress_auto_claim_dma_owner = true, }, .probe = gr2d_probe, .remove = gr2d_remove, diff --git a/drivers/gpu/drm/tegra/gr3d.c b/drivers/gpu/drm/tegra/gr3d.c index 24442ade0da3..20133ac59e78 100644 --- a/drivers/gpu/drm/tegra/gr3d.c +++ b/drivers/gpu/drm/tegra/gr3d.c @@ -397,6 +397,7 @@ struct platform_driver tegra_gr3d_driver = { .driver = { .name = "tegra-gr3d", .of_match_table = tegra_gr3d_match, + .suppress_auto_claim_dma_owner = true, }, .probe = gr3d_probe, .remove = gr3d_remove, diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c index c02010ff2b7f..b4f65574e36f 100644 --- a/drivers/gpu/drm/tegra/vic.c +++ b/drivers/gpu/drm/tegra/vic.c @@ -523,7 +523,8 @@ struct platform_driver tegra_vic_driver = { .driver = { .name = "tegra-vic", .of_match_table = tegra_vic_of_match, - .pm = &vic_pm_ops + .pm = &vic_pm_ops, + .suppress_auto_claim_dma_owner = true, }, .probe = vic_probe, .remove = vic_remove,
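Recapping the driver-side pattern the drm/tegra patch above adopts, a rough sketch of a platform driver sharing one UNMANAGED domain could look as follows; all example_* names are placeholders, while suppress_auto_claim_dma_owner, iommu_device_set_dma_owner(), iommu_attach_device_shared() and their release/detach counterparts are the interfaces introduced by this series.

#include <linux/iommu.h>
#include <linux/platform_device.h>

/*
 * Sketch only: attach a client device to a driver-managed shared domain
 * under DMA_OWNER_PRIVATE_DOMAIN, mirroring host1x_client_iommu_attach()
 * above. shared_domain stands in for the driver's own domain bookkeeping.
 */
static int example_attach_shared_domain(struct device *dev,
					struct iommu_domain *shared_domain)
{
	int err;

	/* Claim kernel-private ownership of the group first. */
	err = iommu_device_set_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN, NULL);
	if (err)
		return err;

	/* Refcounted attach of the shared, driver-managed domain. */
	err = iommu_attach_device_shared(shared_domain, dev);
	if (err)
		iommu_device_release_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN);

	return err;
}

static void example_detach_shared_domain(struct device *dev,
					 struct iommu_domain *shared_domain)
{
	iommu_detach_device_shared(shared_domain, dev);
	iommu_device_release_dma_owner(dev, DMA_OWNER_PRIVATE_DOMAIN);
}

/*
 * The driver opts out of the automatic DMA_OWNER_DMA_API claim so it can
 * take DMA_OWNER_PRIVATE_DOMAIN itself; probe/remove callbacks omitted.
 */
static struct platform_driver example_driver = {
	.driver = {
		.name = "example-shared-iommu",
		.suppress_auto_claim_dma_owner = true,
	},
};

Taking the ownership claim before the attach and dropping it only after the detach keeps the group from being handed to user space while the kernel driver still uses its private domain.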