From patchwork Tue Oct 10 14:49:37 2017
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 9996409
Subject: [PATCH v8 07/14] iommu, dma-mapping: introduce dma_get_iommu_domain()
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: Jan Kara, Ashok Raj, "Darrick J. Wong", linux-rdma@vger.kernel.org,
    Greg Kroah-Hartman, Joerg Roedel, Dave Chinner,
    iommu@lists.linux-foundation.org, linux-xfs@vger.kernel.org,
    linux-mm@kvack.org, Jeff Moyer, linux-api@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Ross Zwisler, David Woodhouse,
    Robin Murphy, Christoph Hellwig, Marek Szyprowski
Date: Tue, 10 Oct 2017 07:49:37 -0700
Message-ID: <150764697773.16882.5489456954873798235.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <150764693502.16882.15848797003793552156.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <150764693502.16882.15848797003793552156.stgit@dwillia2-desk3.amr.corp.intel.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

Add a dma-mapping API helper to retrieve the generic iommu_domain for a
device.

The motivation for this interface is making RDMA transfers to DAX
mappings safe. If the DAX file's block map changes, we need to be able
to reliably stop accesses to blocks that have been freed or reassigned
to a new file. With the iommu_domain, and a callback from the DAX
filesystem, the kernel can safely revoke a DMA device's access. The
process that performed the RDMA memory registration is also notified of
this revocation event, but the kernel cannot otherwise be put in the
position of waiting for userspace to quiesce the device.

Since PMEM+DAX is currently only enabled for x86, we only update the
x86 iommu drivers.
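For illustration only (not part of this patch): a minimal sketch of how
a filesystem-side revocation path might consume the new helper. The
dax_revoke_dma() wrapper, its iova/size arguments, and the return codes
are hypothetical; iommu_unmap() is the existing generic IOMMU API, and
dma_get_iommu_domain() is the helper introduced below.

  /*
   * Hypothetical sketch: tear down a device's IOMMU mappings over a
   * range whose backing DAX blocks the filesystem has just freed, so
   * that further DMA to that range faults instead of reaching storage
   * that may have been reassigned to another file.
   */
  #include <linux/dma-mapping.h>
  #include <linux/iommu.h>

  static int dax_revoke_dma(struct device *dev, unsigned long iova,
  		size_t size)
  {
  	struct iommu_domain *domain = dma_get_iommu_domain(dev);

  	/* Device is not behind an iommu-backed dma_map_ops */
  	if (!domain)
  		return -EOPNOTSUPP;

  	/* Remove the translation for the revoked range */
  	if (iommu_unmap(domain, iova, size) != size)
  		return -EIO;

  	return 0;
  }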
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Greg Kroah-Hartman
Cc: Joerg Roedel
Cc: David Woodhouse
Cc: Ashok Raj
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Christoph Hellwig
Cc: Dave Chinner
Cc: "Darrick J. Wong"
Cc: Ross Zwisler
Signed-off-by: Dan Williams
---
 drivers/base/dma-mapping.c  | 10 ++++++++++
 drivers/iommu/amd_iommu.c   | 10 ++++++++++
 drivers/iommu/intel-iommu.c | 15 +++++++++++++++
 include/linux/dma-mapping.h |  3 +++
 4 files changed, 38 insertions(+)

diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index e584eddef0a7..fdb9764f95a4 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -369,3 +369,13 @@ void dma_deconfigure(struct device *dev)
 	of_dma_deconfigure(dev);
 	acpi_dma_deconfigure(dev);
 }
+
+struct iommu_domain *dma_get_iommu_domain(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (ops && ops->get_iommu)
+		return ops->get_iommu(dev);
+	return NULL;
+}
+EXPORT_SYMBOL(dma_get_iommu_domain);
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 51f8215877f5..c8e1a45af182 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2271,6 +2271,15 @@ static struct protection_domain *get_domain(struct device *dev)
 	return domain;
 }
 
+static struct iommu_domain *amd_dma_get_iommu(struct device *dev)
+{
+	struct protection_domain *domain = get_domain(dev);
+
+	if (IS_ERR(domain))
+		return NULL;
+	return &domain->domain;
+}
+
 static void update_device_table(struct protection_domain *domain)
 {
 	struct iommu_dev_data *dev_data;
@@ -2689,6 +2698,7 @@ static const struct dma_map_ops amd_iommu_dma_ops = {
 	.unmap_sg	= unmap_sg,
 	.dma_supported	= amd_iommu_dma_supported,
 	.mapping_error	= amd_iommu_mapping_error,
+	.get_iommu	= amd_dma_get_iommu,
 };
 
 static int init_reserved_iova_ranges(void)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 6784a05dd6b2..f3f4939cebad 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3578,6 +3578,20 @@ static int iommu_no_mapping(struct device *dev)
 	return 0;
 }
 
+static struct iommu_domain *intel_dma_get_iommu(struct device *dev)
+{
+	struct dmar_domain *domain;
+
+	if (iommu_no_mapping(dev))
+		return NULL;
+
+	domain = get_valid_domain_for_dev(dev);
+	if (!domain)
+		return NULL;
+
+	return &domain->domain;
+}
+
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
 				     size_t size, int dir, u64 dma_mask)
 {
@@ -3872,6 +3886,7 @@ const struct dma_map_ops intel_dma_ops = {
 	.map_page = intel_map_page,
 	.unmap_page = intel_unmap_page,
 	.mapping_error = intel_mapping_error,
+	.get_iommu = intel_dma_get_iommu,
 #ifdef CONFIG_X86
 	.dma_supported = x86_dma_supported,
 #endif
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 29ce9815da87..aa62df1d0d72 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -128,6 +128,7 @@ struct dma_map_ops {
 			enum dma_data_direction dir);
 	int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
 	int (*dma_supported)(struct device *dev, u64 mask);
+	struct iommu_domain *(*get_iommu)(struct device *dev);
 #ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK
 	u64 (*get_required_mask)(struct device *dev);
 #endif
@@ -221,6 +222,8 @@ static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
 }
 #endif
 
+extern struct iommu_domain *dma_get_iommu_domain(struct device *dev);
+
 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
 					      size_t size,
 					      enum dma_data_direction dir,