From patchwork Wed May 18 18:21:15 2022
From: Jacob Pan
To: iommu@lists.linux-foundation.org, LKML, dmaengine@vger.kernel.org,
	Joerg Roedel, David Woodhouse, Jean-Philippe Brucker, Lu Baolu,
	Jason Gunthorpe, Christoph Hellwig, vkoul@kernel.org,
	robin.murphy@arm.com, will@kernel.org
Cc: Yi Liu, Dave Jiang, "Tian, Kevin", Raj Ashok, Eric Auger, Jacob Pan
Subject: [PATCH v4 1/6] iommu: Add a per domain PASID for DMA API
Date: Wed, 18 May 2022 11:21:15 -0700
Message-Id: <20220518182120.1136715-2-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20220518182120.1136715-1-jacob.jun.pan@linux.intel.com>

DMA requests tagged with PASID can target individual IOMMU domains.
Introduce a domain-wide PASID for the DMA API; it will be used on the
same mapping as legacy DMA without PASID, whether that mapping is IOVA
or, for an identity domain, PA.
Signed-off-by: Jacob Pan
---
 include/linux/iommu.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 9405034e3013..36ad007084cc 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -106,6 +106,8 @@ struct iommu_domain {
 	enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault,
 						      void *data);
 	void *fault_data;
+	ioasid_t dma_pasid;	/* Used for DMA requests with PASID */
+	atomic_t dma_pasid_users;
 };
 
 static inline bool iommu_is_dma_domain(struct iommu_domain *domain)

From patchwork Wed May 18 18:21:16 2022
From: Jacob Pan
Subject: [PATCH v4 2/6] iommu: Add a helper to do PASID lookup from domain
Date: Wed, 18 May 2022 11:21:16 -0700
Message-Id: <20220518182120.1136715-3-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20220518182120.1136715-1-jacob.jun.pan@linux.intel.com>

The IOMMU group maintains a PASID array which stores the associated
IOMMU domains.
This patch introduces a helper function to do the domain-to-PASID
lookup. It will be used by TLB flush and device-PASID attach
verification.

Signed-off-by: Jacob Pan
---
 drivers/iommu/iommu.c | 22 ++++++++++++++++++++++
 include/linux/iommu.h |  6 +++++-
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 00d0262a1fe9..22f44833db64 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3199,3 +3199,25 @@ struct iommu_domain *iommu_get_domain_for_iopf(struct device *dev,
 
 	return domain;
 }
+
+ioasid_t iommu_get_pasid_from_domain(struct device *dev, struct iommu_domain *domain)
+{
+	struct iommu_domain *tdomain;
+	struct iommu_group *group;
+	unsigned long index;
+	ioasid_t pasid = INVALID_IOASID;
+
+	group = iommu_group_get(dev);
+	if (!group)
+		return pasid;
+
+	xa_for_each(&group->pasid_array, index, tdomain) {
+		if (domain == tdomain) {
+			pasid = index;
+			break;
+		}
+	}
+	iommu_group_put(group);
+
+	return pasid;
+}
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 36ad007084cc..c0440a4be699 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -694,7 +694,7 @@ void iommu_detach_device_pasid(struct iommu_domain *domain,
 			       struct device *dev, ioasid_t pasid);
 struct iommu_domain *
 iommu_get_domain_for_iopf(struct device *dev, ioasid_t pasid);
-
+ioasid_t iommu_get_pasid_from_domain(struct device *dev, struct iommu_domain *domain);
 #else /* CONFIG_IOMMU_API */
 
 struct iommu_ops {};
@@ -1070,6 +1070,10 @@ iommu_get_domain_for_iopf(struct device *dev, ioasid_t pasid)
 {
 	return NULL;
 }
+static ioasid_t iommu_get_pasid_from_domain(struct device *dev, struct iommu_domain *domain)
+{
+	return INVALID_IOASID;
+}
 #endif /* CONFIG_IOMMU_API */
 
 #ifdef CONFIG_IOMMU_SVA
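
For illustration, a minimal sketch of how an IOMMU driver's invalidation
path might consume this helper (the VT-d patch later in this series does
essentially this); my_flush_dev_iotlb() and the two flush callees are
hypothetical placeholders, only iommu_get_pasid_from_domain() and
INVALID_IOASID come from this series:

	/*
	 * Sketch only: issue an extra PASID-scoped device-TLB flush when a
	 * DMA PASID is attached to the device's domain.
	 */
	static void my_flush_dev_iotlb(struct device *dev,
				       struct iommu_domain *domain,
				       u64 addr, unsigned int mask)
	{
		ioasid_t pasid = iommu_get_pasid_from_domain(dev, domain);

		if (pasid != INVALID_IOASID)
			my_flush_dev_iotlb_pasid(dev, pasid, addr, mask);

		/* Legacy flush for requests without PASID. */
		my_flush_dev_iotlb_nopasid(dev, addr, mask);
	}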
From patchwork Wed May 18 18:21:17 2022
From: Jacob Pan
Subject: [PATCH v4 3/6] iommu/vt-d: Implement domain ops for attach_dev_pasid
Date: Wed, 18 May 2022 11:21:17 -0700
Message-Id: <20220518182120.1136715-4-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20220518182120.1136715-1-jacob.jun.pan@linux.intel.com>

On VT-d platforms with scalable mode enabled, devices that issue DMA
requests with PASID need to attach those PASIDs to the given IOMMU
domain. The attach operation involves the following:

- Programming the PASID into the device's PASID table
- Tracking the device/domain/PASID relationship
- Managing IOTLB and device TLB invalidations

This patch adds attach_dev_pasid functions to the default domain ops,
which are used by the DMA and identity domain types. It could be
extended to support other domain types whenever necessary.

Signed-off-by: Lu Baolu
Signed-off-by: Jacob Pan
---
 drivers/iommu/intel/iommu.c | 72 +++++++++++++++++++++++++++++++++++--
 1 file changed, 70 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 1c2c92b657c7..75615c105fdf 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1556,12 +1556,18 @@ static void __iommu_flush_dev_iotlb(struct device_domain_info *info,
 				    u64 addr, unsigned int mask)
 {
 	u16 sid, qdep;
+	ioasid_t pasid;
 
 	if (!info || !info->ats_enabled)
 		return;
 
 	sid = info->bus << 8 | info->devfn;
 	qdep = info->ats_qdep;
+	pasid = iommu_get_pasid_from_domain(info->dev, &info->domain->domain);
+	if (pasid != INVALID_IOASID) {
+		qi_flush_dev_iotlb_pasid(info->iommu, sid, info->pfsid,
+					 pasid, qdep, addr, mask);
+	}
 	qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
 			   qdep, addr, mask);
 }
@@ -1591,6 +1597,7 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 	unsigned int mask = ilog2(aligned_pages);
 	uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
 	u16 did = domain->iommu_did[iommu->seq_id];
+	struct iommu_domain *iommu_domain = &domain->domain;
 
 	BUG_ON(pages == 0);
 
@@ -1599,6 +1606,9 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 	if (domain_use_first_level(domain)) {
 		qi_flush_piotlb(iommu, did, PASID_RID2PASID, addr, pages, ih);
+		/* flush additional kernel DMA PASIDs attached */
+		if (iommu_domain->dma_pasid)
+			qi_flush_piotlb(iommu, did, iommu_domain->dma_pasid, addr, pages, ih);
 	} else {
 		unsigned long bitmask = aligned_pages - 1;
 
@@ -4255,6 +4265,7 @@ static void __dmar_remove_one_dev_info(struct device_domain_info *info)
 	struct dmar_domain *domain;
 	struct intel_iommu *iommu;
 	unsigned long flags;
+	ioasid_t pasid;
 
 	assert_spin_locked(&device_domain_lock);
 
@@ -4265,10 +4276,15 @@ static void __dmar_remove_one_dev_info(struct device_domain_info *info)
 	domain = info->domain;
 
 	if (info->dev && !dev_is_real_dma_subdevice(info->dev)) {
-		if (dev_is_pci(info->dev) && sm_supported(iommu))
+		if (dev_is_pci(info->dev) && sm_supported(iommu)) {
 			intel_pasid_tear_down_entry(iommu, info->dev,
 					PASID_RID2PASID, false);
-
+			pasid = iommu_get_pasid_from_domain(info->dev,
+							    &info->domain->domain);
+			if (pasid != INVALID_IOASID)
+				intel_pasid_tear_down_entry(iommu, info->dev,
+							    pasid, false);
+		}
 		iommu_disable_dev_iotlb(info);
 		domain_context_clear(info);
 		intel_pasid_free_table(info->dev);
@@ -4904,6 +4920,56 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
 	}
 }
 
+static int intel_iommu_attach_dev_pasid(struct iommu_domain *domain,
+					struct device *dev,
+					ioasid_t pasid)
+{
+	struct device_domain_info *info = dev_iommu_priv_get(dev);
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	struct intel_iommu *iommu = info->iommu;
+	unsigned long flags;
+	int ret = 0;
+
+	if (!sm_supported(iommu) || !info)
+		return -ENODEV;
+
+	if (WARN_ON(pasid == PASID_RID2PASID))
+		return -EINVAL;
+
+	spin_lock_irqsave(&device_domain_lock, flags);
+	spin_lock(&iommu->lock);
+	if (hw_pass_through && domain_type_is_si(dmar_domain))
+		ret = intel_pasid_setup_pass_through(iommu, dmar_domain,
+						     dev, pasid);
+	else if (domain_use_first_level(dmar_domain))
+		ret = domain_setup_first_level(iommu, dmar_domain,
+					       dev, pasid);
+	else
+		ret = intel_pasid_setup_second_level(iommu, dmar_domain,
+						     dev, pasid);
+
+	spin_unlock(&iommu->lock);
+	spin_unlock_irqrestore(&device_domain_lock, flags);
+
+	return ret;
+}
+
+static void intel_iommu_detach_dev_pasid(struct iommu_domain *domain,
+					 struct device *dev,
+					 ioasid_t pasid)
+{
+	struct device_domain_info *info = dev_iommu_priv_get(dev);
+	struct intel_iommu *iommu = info->iommu;
+	unsigned long flags;
+
+	if (pasid != iommu_get_pasid_from_domain(info->dev, &info->domain->domain))
+		return;
+
+	spin_lock_irqsave(&iommu->lock, flags);
+	intel_pasid_tear_down_entry(iommu, dev, pasid, false);
+	spin_unlock_irqrestore(&iommu->lock, flags);
+}
+
 const struct iommu_ops intel_iommu_ops = {
 	.capable		= intel_iommu_capable,
 	.domain_alloc		= intel_iommu_domain_alloc,
@@ -4932,6 +4998,8 @@ const struct iommu_ops intel_iommu_ops = {
 		.iova_to_phys		= intel_iommu_iova_to_phys,
 		.free			= intel_iommu_domain_free,
 		.enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
+		.attach_dev_pasid	= intel_iommu_attach_dev_pasid,
+		.detach_dev_pasid	= intel_iommu_detach_dev_pasid,
 	}
 };
From patchwork Wed May 18 18:21:18 2022
From: Jacob Pan
Subject: [PATCH v4 4/6] iommu: Add PASID support for DMA mapping API users
Date: Wed, 18 May 2022 11:21:18 -0700
Message-Id: <20220518182120.1136715-5-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20220518182120.1136715-1-jacob.jun.pan@linux.intel.com>

The DMA mapping API is the de facto standard for in-kernel DMA. It
operates on a per-device/RID basis, which is not PASID-aware. For some
modern devices, such as the Intel Data Streaming Accelerator, a PASID
is required for certain work submissions. To allow such devices to use
the DMA mapping API, we need the following functionalities:
1. Provide the device a way to retrieve a PASID for work submission
   within the kernel
2. Enable the kernel PASID on the IOMMU for the device
3. Attach the kernel PASID to the device's default DMA domain, whether
   it is backed by IOVA or, in the pass-through case, by physical
   addresses.

This patch introduces a driver-facing API that enables DMA API PASID
usage. Once enabled, device drivers can continue to use the DMA APIs as
is. The dma_handle is identical with and without a PASID.

Signed-off-by: Jacob Pan
---
 drivers/iommu/dma-iommu.c | 114 ++++++++++++++++++++++++++++++++++++++
 include/linux/dma-iommu.h |   3 +
 2 files changed, 117 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 1ca85d37eeab..6ad7ba619ef0 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -34,6 +34,8 @@ struct iommu_dma_msi_page {
 	phys_addr_t		phys;
 };
 
+static DECLARE_IOASID_SET(iommu_dma_pasid);
+
 enum iommu_dma_cookie_type {
 	IOMMU_DMA_IOVA_COOKIE,
 	IOMMU_DMA_MSI_COOKIE,
@@ -370,6 +372,118 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
 	domain->iova_cookie = NULL;
 }
 
+/* Protect iommu_domain DMA PASID data */
+static DEFINE_MUTEX(dma_pasid_lock);
+/**
+ * iommu_attach_dma_pasid --Attach a PASID for in-kernel DMA. Use the device's
+ * DMA domain.
+ * @dev: Device to be enabled
+ * @pasid: The returned kernel PASID to be used for DMA
+ *
+ * DMA request with PASID will be mapped the same way as the legacy DMA.
+ * If the device is in pass-through, PASID will also pass-through. If the
+ * device is in IOVA, the PASID will point to the same IOVA page table.
+ *
+ * @return err code or 0 on success
+ */
+int iommu_attach_dma_pasid(struct device *dev, ioasid_t *pasid)
+{
+	struct iommu_domain *dom;
+	ioasid_t id, max;
+	int ret = 0;
+
+	dom = iommu_get_domain_for_dev(dev);
+	if (!dom || !dom->ops || !dom->ops->attach_dev_pasid)
+		return -ENODEV;
+
+	/* Only support domain types that DMA API can be used */
+	if (dom->type == IOMMU_DOMAIN_UNMANAGED ||
+	    dom->type == IOMMU_DOMAIN_BLOCKED) {
+		dev_warn(dev, "Invalid domain type %d", dom->type);
+		return -EPERM;
+	}
+
+	mutex_lock(&dma_pasid_lock);
+	id = dom->dma_pasid;
+	if (!id) {
+		/*
+		 * First device to use PASID in its DMA domain, allocate
+		 * a single PASID per DMA domain is all we need, it is also
+		 * good for performance when it comes down to IOTLB flush.
+		 */
+		max = 1U << dev->iommu->pasid_bits;
+		if (!max) {
+			ret = -EINVAL;
+			goto done_unlock;
+		}
+
+		id = ioasid_alloc(&iommu_dma_pasid, 1, max, dev);
+		if (id == INVALID_IOASID) {
+			ret = -ENOMEM;
+			goto done_unlock;
+		}
+
+		dom->dma_pasid = id;
+		atomic_set(&dom->dma_pasid_users, 1);
+	}
+
+	ret = iommu_attach_device_pasid(dom, dev, id);
+	if (!ret) {
+		*pasid = id;
+		atomic_inc(&dom->dma_pasid_users);
+		goto done_unlock;
+	}
+
+	if (atomic_dec_and_test(&dom->dma_pasid_users)) {
+		ioasid_free(id);
+		dom->dma_pasid = 0;
+	}
+done_unlock:
+	mutex_unlock(&dma_pasid_lock);
+	return ret;
+}
+EXPORT_SYMBOL(iommu_attach_dma_pasid);
+
+/**
+ * iommu_detach_dma_pasid --Disable in-kernel DMA request with PASID
+ * @dev: Device's PASID DMA to be disabled
+ *
+ * It is the device driver's responsibility to ensure no more incoming DMA
+ * requests with the kernel PASID before calling this function. IOMMU driver
+ * ensures PASID cache, IOTLBs related to the kernel PASID are cleared and
+ * drained.
+ *
+ */
+void iommu_detach_dma_pasid(struct device *dev)
+{
+	struct iommu_domain *dom;
+	ioasid_t pasid;
+
+	dom = iommu_get_domain_for_dev(dev);
+	if (WARN_ON(!dom || !dom->ops || !dom->ops->detach_dev_pasid))
+		return;
+
+	/* Only support DMA API managed domain type */
+	if (WARN_ON(dom->type == IOMMU_DOMAIN_UNMANAGED ||
+		    dom->type == IOMMU_DOMAIN_BLOCKED))
+		return;
+
+	mutex_lock(&dma_pasid_lock);
+	pasid = iommu_get_pasid_from_domain(dev, dom);
+	if (!pasid || pasid == INVALID_IOASID) {
+		dev_err(dev, "No valid DMA PASID attached\n");
+		mutex_unlock(&dma_pasid_lock);
+		return;
+	}
+	iommu_detach_device_pasid(dom, dev, pasid);
+	if (atomic_dec_and_test(&dom->dma_pasid_users)) {
+		ioasid_free(pasid);
+		dom->dma_pasid = 0;
+	}
+	mutex_unlock(&dma_pasid_lock);
+}
+EXPORT_SYMBOL(iommu_detach_dma_pasid);
+
 /**
  * iommu_dma_get_resv_regions - Reserved region driver helper
  * @dev: Device from iommu_get_resv_regions()
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 24607dc3c2ac..538650b9cb75 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -18,6 +18,9 @@ int iommu_get_dma_cookie(struct iommu_domain *domain);
 int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);
 void iommu_put_dma_cookie(struct iommu_domain *domain);
 
+int iommu_attach_dma_pasid(struct device *dev, ioasid_t *pasid);
+void iommu_detach_dma_pasid(struct device *dev);
+
 /* Setup call for arch DMA mapping code */
 void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit);
 
 int iommu_dma_init_fq(struct iommu_domain *domain);
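
As a usage illustration, a minimal sketch of the driver-side flow this
API enables; my_probe() and my_device_program_pasid() are hypothetical
stand-ins for driver- and device-specific code, only
iommu_attach_dma_pasid()/iommu_detach_dma_pasid() and the DMA mapping
calls are real interfaces (the idxd conversion in the next patch follows
this pattern):

	#include <linux/dma-iommu.h>
	#include <linux/dma-mapping.h>
	#include <linux/slab.h>

	static int my_probe(struct device *dev)
	{
		ioasid_t pasid;
		dma_addr_t handle;
		void *buf;
		int ret;

		ret = iommu_attach_dma_pasid(dev, &pasid);
		if (ret)
			return ret;	/* or fall back to DMA without PASID */

		/* Hypothetical: tag the device's work submissions with the PASID. */
		my_device_program_pasid(dev, pasid);

		/*
		 * DMA API usage is unchanged; the same mapping serves requests
		 * with and without the PASID.
		 */
		buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
		if (!buf) {
			iommu_detach_dma_pasid(dev);
			return -ENOMEM;
		}
		handle = dma_map_single(dev, buf, PAGE_SIZE, DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, handle)) {
			kfree(buf);
			iommu_detach_dma_pasid(dev);
			return -ENOMEM;
		}

		/* ... submit work tagged with the PASID ... */

		dma_unmap_single(dev, handle, PAGE_SIZE, DMA_BIDIRECTIONAL);
		kfree(buf);
		iommu_detach_dma_pasid(dev);
		return 0;
	}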
From patchwork Wed May 18 18:21:19 2022
From: Jacob Pan
Subject: [PATCH v4 5/6] dmaengine: idxd: Use DMA API for in-kernel DMA with PASID
Date: Wed, 18 May 2022 11:21:19 -0700
Message-Id: <20220518182120.1136715-6-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20220518182120.1136715-1-jacob.jun.pan@linux.intel.com>

The current in-kernel supervisor PASID support is based on the SVM/SVA
machinery in the SVA lib. The binding between a kernel PASID and kernel
mapping has many flaws. See the discussion in the link below.

This patch enables in-kernel DMA by switching from the SVA lib to the
standard DMA mapping APIs. Since DMA requests with and without PASIDs
are mapped identically, there is no change to how the DMA APIs are used
after the kernel PASID is enabled.

Link: https://lore.kernel.org/linux-iommu/20210511194726.GP1002214@nvidia.com/
Signed-off-by: Jacob Pan
---
 drivers/dma/idxd/idxd.h  |  1 -
 drivers/dma/idxd/init.c  | 34 +++++++++-------------------------
 drivers/dma/idxd/sysfs.c |  7 -------
 3 files changed, 9 insertions(+), 33 deletions(-)

diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h
index ccbefd0be617..190b08bd7c08 100644
--- a/drivers/dma/idxd/idxd.h
+++ b/drivers/dma/idxd/idxd.h
@@ -277,7 +277,6 @@ struct idxd_device {
 	struct idxd_wq **wqs;
 	struct idxd_engine **engines;
 
-	struct iommu_sva *sva;
 	unsigned int pasid;
 
 	int num_groups;
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index e1b5d1e4a949..e2e1c0eae6d6 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "../dmaengine.h"
@@ -466,36 +467,22 @@ static struct idxd_device *idxd_alloc(struct pci_dev *pdev, struct idxd_driver_d
 
 static int idxd_enable_system_pasid(struct idxd_device *idxd)
 {
-	int flags;
-	unsigned int pasid;
-	struct iommu_sva *sva;
+	u32 pasid;
+	int ret;
 
-	flags = SVM_FLAG_SUPERVISOR_MODE;
-
-	sva = iommu_sva_bind_device(&idxd->pdev->dev, NULL, &flags);
-	if (IS_ERR(sva)) {
-		dev_warn(&idxd->pdev->dev,
-			 "iommu sva bind failed: %ld\n", PTR_ERR(sva));
-		return PTR_ERR(sva);
-	}
-
-	pasid = iommu_sva_get_pasid(sva);
-	if (pasid == IOMMU_PASID_INVALID) {
-		iommu_sva_unbind_device(sva);
-		return -ENODEV;
+	ret = iommu_attach_dma_pasid(&idxd->pdev->dev, &pasid);
+	if (ret) {
+		dev_err(&idxd->pdev->dev, "No DMA PASID %d\n", ret);
+		return ret;
 	}
-
-	idxd->sva = sva;
 	idxd->pasid = pasid;
-	dev_dbg(&idxd->pdev->dev, "system pasid: %u\n", pasid);
+
 	return 0;
 }
 
 static void idxd_disable_system_pasid(struct idxd_device *idxd)
 {
-
-	iommu_sva_unbind_device(idxd->sva);
-	idxd->sva = NULL;
+	iommu_detach_dma_pasid(&idxd->pdev->dev);
 }
 
 static int idxd_probe(struct idxd_device *idxd)
@@ -527,10 +514,7 @@ static int idxd_probe(struct idxd_device *idxd)
 		else
 			set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags);
 		}
-	} else if (!sva) {
-		dev_warn(dev, "User forced SVA off via module param.\n");
 	}
-
 	idxd_read_caps(idxd);
 	idxd_read_table_offsets(idxd);
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index dfd549685c46..a48928973bd4 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -839,13 +839,6 @@ static ssize_t wq_name_store(struct device *dev,
 	if (strlen(buf) > WQ_NAME_SIZE || strlen(buf) == 0)
 		return -EINVAL;
 
-	/*
-	 * This is temporarily placed here until we have SVM support for
-	 * dmaengine.
-	 */
-	if (wq->type == IDXD_WQT_KERNEL && device_pasid_enabled(wq->idxd))
-		return -EOPNOTSUPP;
-
 	memset(wq->name, 0, WQ_NAME_SIZE + 1);
 	strncpy(wq->name, buf, WQ_NAME_SIZE);
 	strreplace(wq->name, '\n', '\0');

From patchwork Wed May 18 18:21:20 2022
From: Jacob Pan
Subject: [PATCH v4 6/6] iommu/vt-d: Delete unused SVM flag
Date: Wed, 18 May 2022 11:21:20 -0700
Message-Id: <20220518182120.1136715-7-jacob.jun.pan@linux.intel.com>
In-Reply-To: <20220518182120.1136715-1-jacob.jun.pan@linux.intel.com>
Supervisor PASID for SVA/SVM is no longer supported; delete the unused
flag.

Signed-off-by: Jacob Pan
---
 drivers/iommu/intel/svm.c |  2 +-
 include/linux/intel-svm.h | 13 -------------
 2 files changed, 1 insertion(+), 14 deletions(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 44331db060e4..5b220d464218 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -750,7 +750,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 		 * to unbind the mm while any page faults are outstanding.
 		 */
 		svm = pasid_private_find(req->pasid);
-		if (IS_ERR_OR_NULL(svm) || (svm->flags & SVM_FLAG_SUPERVISOR_MODE))
+		if (IS_ERR_OR_NULL(svm))
 			goto bad_req;
 	}
 
diff --git a/include/linux/intel-svm.h b/include/linux/intel-svm.h
index b3b125b332aa..6835a665c195 100644
--- a/include/linux/intel-svm.h
+++ b/include/linux/intel-svm.h
@@ -13,17 +13,4 @@
 #define PRQ_RING_MASK	((0x1000 << PRQ_ORDER) - 0x20)
 #define PRQ_DEPTH	((0x1000 << PRQ_ORDER) >> 5)
 
-/*
- * The SVM_FLAG_SUPERVISOR_MODE flag requests a PASID which can be used only
- * for access to kernel addresses. No IOTLB flushes are automatically done
- * for kernel mappings; it is valid only for access to the kernel's static
- * 1:1 mapping of physical memory — not to vmalloc or even module mappings.
- * A future API addition may permit the use of such ranges, by means of an
- * explicit IOTLB flush call (akin to the DMA API's unmap method).
- *
- * It is unlikely that we will ever hook into flush_tlb_kernel_range() to
- * do such IOTLB flushes automatically.
- */
-#define SVM_FLAG_SUPERVISOR_MODE	BIT(0)
-
 #endif /* __INTEL_SVM_H__ */