From patchwork Wed May 17 10:07:10 2017
X-Patchwork-Submitter: Magnus Damm
X-Patchwork-Id: 9730657
X-Patchwork-Delegate: geert@linux-m68k.org
From: Magnus Damm
To: joro@8bytes.org
Cc: laurent.pinchart+renesas@ideasonboard.com, geert+renesas@glider.be,
    robin.murphy@arm.com, will.deacon@arm.com, linux-kernel@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, iommu@lists.linux-foundation.org,
    horms+renesas@verge.net.au, Magnus Damm, sricharan@codeaurora.org,
    m.szyprowski@samsung.com
Date: Wed, 17 May 2017 19:07:10 +0900
Message-Id: <149501563010.21593.2577334360194191792.sendpatchset@little-apple>
In-Reply-To: <149501557669.21593.1017116915706613060.sendpatchset@little-apple>
References: <149501557669.21593.1017116915706613060.sendpatchset@little-apple>
Subject: [PATCH v8 05/08] iommu/ipmmu-vmsa: Add new IOMMU_DOMAIN_DMA ops

From: Magnus Damm

Introduce an alternative set of iommu_ops suitable for 64-bit ARM as well
as 32-bit ARM when CONFIG_IOMMU_DMA=y. Also adjust the Kconfig to depend
on ARM or IOMMU_DMA. Initialize the device from ->xlate() when
CONFIG_IOMMU_DMA=y.

Signed-off-by: Magnus Damm
---

 Changes since V7:
 - None

 drivers/iommu/Kconfig      |    1 
 drivers/iommu/ipmmu-vmsa.c |  164 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 156 insertions(+), 9 deletions(-)

--- 0001/drivers/iommu/Kconfig
+++ work/drivers/iommu/Kconfig	2017-05-17 16:41:43.030607110 +0900
@@ -274,6 +274,7 @@ config EXYNOS_IOMMU_DEBUG
 
 config IPMMU_VMSA
 	bool "Renesas VMSA-compatible IPMMU"
+	depends on ARM || IOMMU_DMA
 	depends on ARM_LPAE
 	depends on ARCH_RENESAS || COMPILE_TEST
 	select IOMMU_API
--- 0009/drivers/iommu/ipmmu-vmsa.c
+++ work/drivers/iommu/ipmmu-vmsa.c	2017-05-17 16:42:02.730607110 +0900
@@ -10,6 +10,7 @@
 #include
 #include
+#include <linux/dma-iommu.h>
 #include
 #include
 #include
@@ -22,8 +23,10 @@
 #include
 #include
+#if defined(CONFIG_ARM) && !defined(CONFIG_IOMMU_DMA)
 #include <asm/dma-iommu.h>
 #include <asm/pgalloc.h>
+#endif
 
 #include "io-pgtable.h"
@@ -57,6 +60,8 @@ struct ipmmu_vmsa_archdata {
 	struct ipmmu_vmsa_device *mmu;
 	unsigned int *utlbs;
 	unsigned int num_utlbs;
+	struct device *dev;
+	struct list_head list;
 };
 
 static DEFINE_SPINLOCK(ipmmu_devices_lock);
@@ -522,14 +527,6 @@ static struct iommu_domain *__ipmmu_doma
 	return &domain->io_domain;
 }
 
-static struct iommu_domain *ipmmu_domain_alloc(unsigned type)
-{
-	if (type != IOMMU_DOMAIN_UNMANAGED)
-		return NULL;
-
-	return __ipmmu_domain_alloc(type);
-}
-
 static void ipmmu_domain_free(struct iommu_domain *io_domain)
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
@@ -572,7 +569,8 @@ static int ipmmu_attach_device(struct io
 		dev_err(dev, "Can't attach IPMMU %s to domain on IPMMU %s\n",
 			dev_name(mmu->dev), dev_name(domain->mmu->dev));
 		ret = -EINVAL;
-	}
+	} else
+		dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id);
 
 	spin_unlock_irqrestore(&domain->lock, flags);
@@ -708,6 +706,7 @@ static int ipmmu_init_platform_device(st
 	archdata->mmu = mmu;
 	archdata->utlbs = utlbs;
 	archdata->num_utlbs = num_utlbs;
+	archdata->dev = dev;
 	dev->archdata.iommu = archdata;
 	return 0;
@@ -716,6 +715,16 @@ error:
 	return ret;
 }
 
+#if defined(CONFIG_ARM) && !defined(CONFIG_IOMMU_DMA)
+
+static struct iommu_domain *ipmmu_domain_alloc(unsigned type)
+{
+	if (type != IOMMU_DOMAIN_UNMANAGED)
+		return NULL;
+
+	return __ipmmu_domain_alloc(type);
+}
+
 static int ipmmu_add_device(struct device *dev)
 {
 	struct ipmmu_vmsa_archdata *archdata;
@@ -825,6 +834,141 @@ static const struct iommu_ops ipmmu_ops
 	.pgsize_bitmap = SZ_1G | SZ_2M | SZ_4K,
 };
 
+#endif /* !CONFIG_ARM && CONFIG_IOMMU_DMA */
+
+#ifdef CONFIG_IOMMU_DMA
+
+static DEFINE_SPINLOCK(ipmmu_slave_devices_lock);
+static LIST_HEAD(ipmmu_slave_devices);
+
+static struct iommu_domain *ipmmu_domain_alloc_dma(unsigned type)
+{
+	struct iommu_domain *io_domain = NULL;
+
+	switch (type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		io_domain = __ipmmu_domain_alloc(type);
+		break;
+
+	case IOMMU_DOMAIN_DMA:
+		io_domain = __ipmmu_domain_alloc(type);
+		if (io_domain)
+			iommu_get_dma_cookie(io_domain);
+		break;
+	}
+
+	return io_domain;
+}
+
+static void ipmmu_domain_free_dma(struct iommu_domain *io_domain)
+{
+	switch (io_domain->type) {
+	case IOMMU_DOMAIN_DMA:
+		iommu_put_dma_cookie(io_domain);
+		/* fall-through */
+	default:
+		ipmmu_domain_free(io_domain);
+		break;
+	}
+}
+
+static int ipmmu_add_device_dma(struct device *dev)
+{
+	struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu;
+	struct iommu_group *group;
+
+	/* The device has been verified in xlate() */
+	if (!archdata)
+		return -ENODEV;
+
+	group = iommu_group_get_for_dev(dev);
+	if (IS_ERR(group))
+		return PTR_ERR(group);
+
+	spin_lock(&ipmmu_slave_devices_lock);
+	list_add(&archdata->list, &ipmmu_slave_devices);
+	spin_unlock(&ipmmu_slave_devices_lock);
+	return 0;
+}
+
+static void ipmmu_remove_device_dma(struct device *dev)
+{
+	struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu;
+
+	spin_lock(&ipmmu_slave_devices_lock);
+	list_del(&archdata->list);
+	spin_unlock(&ipmmu_slave_devices_lock);
+
+	iommu_group_remove_device(dev);
+}
+
+static struct device *ipmmu_find_sibling_device(struct device *dev)
+{
+	struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu;
+	struct ipmmu_vmsa_archdata *sibling_archdata = NULL;
+	bool found = false;
+
+	spin_lock(&ipmmu_slave_devices_lock);
+
+	list_for_each_entry(sibling_archdata, &ipmmu_slave_devices, list) {
+		if (archdata == sibling_archdata)
+			continue;
+		if (sibling_archdata->mmu == archdata->mmu) {
+			found = true;
+			break;
+		}
+	}
+
+	spin_unlock(&ipmmu_slave_devices_lock);
+
+	return found ? sibling_archdata->dev : NULL;
+}
+
+static struct iommu_group *ipmmu_find_group_dma(struct device *dev)
+{
+	struct iommu_group *group;
+	struct device *sibling;
+
+	sibling = ipmmu_find_sibling_device(dev);
+	if (sibling)
+		group = iommu_group_get(sibling);
+	if (!sibling || IS_ERR(group))
+		group = generic_device_group(dev);
+
+	return group;
+}
+
+static int ipmmu_of_xlate_dma(struct device *dev,
+			      struct of_phandle_args *spec)
+{
+	/* If the IPMMU device is disabled in DT then return error
+	 * to make sure the of_iommu code does not install ops
+	 * even though the iommu device is disabled
+	 */
+	if (!of_device_is_available(spec->np))
+		return -ENODEV;
+
+	return ipmmu_init_platform_device(dev);
+}
+
+static const struct iommu_ops ipmmu_ops = {
+	.domain_alloc = ipmmu_domain_alloc_dma,
+	.domain_free = ipmmu_domain_free_dma,
+	.attach_dev = ipmmu_attach_device,
+	.detach_dev = ipmmu_detach_device,
+	.map = ipmmu_map,
+	.unmap = ipmmu_unmap,
+	.map_sg = default_iommu_map_sg,
+	.iova_to_phys = ipmmu_iova_to_phys,
+	.add_device = ipmmu_add_device_dma,
+	.remove_device = ipmmu_remove_device_dma,
+	.device_group = ipmmu_find_group_dma,
+	.pgsize_bitmap = SZ_1G | SZ_2M | SZ_4K,
+	.of_xlate = ipmmu_of_xlate_dma,
+};
+
+#endif /* CONFIG_IOMMU_DMA */
+
 /* -----------------------------------------------------------------------------
  * Probe/remove and init
  */
@@ -914,7 +1058,9 @@ static int ipmmu_remove(struct platform_
 	list_del(&mmu->list);
 	spin_unlock(&ipmmu_devices_lock);
 
+#if defined(CONFIG_ARM) && !defined(CONFIG_IOMMU_DMA)
 	arm_iommu_release_mapping(mmu->mapping);
+#endif
 
 	ipmmu_device_reset(mmu);
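
For readers not familiar with the dma-iommu cookie convention used by
ipmmu_domain_alloc_dma()/ipmmu_domain_free_dma() above, here is a condensed,
driver-agnostic sketch of the same pattern against the 4.12-era
<linux/dma-iommu.h> API. The my_domain* names are placeholders for
illustration only and do not appear in this driver:

#include <linux/dma-iommu.h>
#include <linux/iommu.h>
#include <linux/kernel.h>
#include <linux/slab.h>

struct my_domain {
	struct iommu_domain io_domain;	/* embedded generic domain */
	/* driver-private page table state would live here */
};

static struct iommu_domain *my_domain_alloc(unsigned type)
{
	struct my_domain *dom;

	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
		return NULL;

	dom = kzalloc(sizeof(*dom), GFP_KERNEL);
	if (!dom)
		return NULL;

	/* DMA domains carry an IOVA allocator ("cookie") used by dma-iommu */
	if (type == IOMMU_DOMAIN_DMA &&
	    iommu_get_dma_cookie(&dom->io_domain)) {
		kfree(dom);
		return NULL;
	}

	return &dom->io_domain;
}

static void my_domain_free(struct iommu_domain *io_domain)
{
	/* iommu_put_dma_cookie() is a no-op for domains without a cookie */
	iommu_put_dma_cookie(io_domain);
	kfree(container_of(io_domain, struct my_domain, io_domain));
}

Requesting the cookie only for IOMMU_DOMAIN_DMA domains keeps unmanaged
(VFIO-style) domains free of IOVA-allocator state, which is why the patch
handles the two domain types separately in one switch statement.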