From patchwork Thu Nov 22 11:33:26 2012
X-Patchwork-Submitter: Cho KyongHo
X-Patchwork-Id: 1783541
From: Cho KyongHo
To: linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Cc: 'Kukjin Kim', prathyush.k@samsung.com, 'Joerg Roedel',
	sw0312.kim@samsung.com, 'Subash Patel', 'Sanghyun Lee',
	rahul.sharma@samsung.com
Subject: [PATCH v4 08/12] iommu/exynos: set System MMU as the parent of client device
Date: Thu, 22 Nov 2012 20:33:26 +0900
Message-id: <002901cdc8a5$2ff903a0$8feb0ae0$%cho@samsung.com>

This commit sets System MMU as the parent of the client device for power
management. If System MMU is the parent of a device, it is guaranteed that
System MMU is suspended later than the device and resumed earlier. Runtime
suspend/resume on the device is also propagated to the System MMU.
If a device is configured to have more than one System MMU, the power
management advantage still works and the System MMUs also have parent/child
relationships among themselves. In this situation, the client device is still
a descendant of its System MMUs.

Cc: Rahul Sharma
Signed-off-by: KyongHo Cho
---
 drivers/iommu/exynos-iommu.c | 540 ++++++++++++++++++++++++++++---------------
 1 file changed, 360 insertions(+), 180 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index e39ddac..576f6b1 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -104,6 +104,17 @@
 #define REG_PB1_SADDR		0x054
 #define REG_PB1_EADDR		0x058
 
+static void *sysmmu_placeholder; /* Indicate if a device is System MMU */
+
+#define is_sysmmu(sysmmu) (sysmmu->archdata.iommu == &sysmmu_placeholder)
+#define has_sysmmu(dev) \
+	(dev->parent && dev->archdata.iommu && is_sysmmu(dev->parent))
+#define for_each_sysmmu(dev, sysmmu) \
+	for (sysmmu = dev->parent; sysmmu && is_sysmmu(sysmmu); \
+		sysmmu = sysmmu->parent)
+#define for_each_sysmmu_until(dev, sysmmu, until) \
+	for (sysmmu = dev->parent; sysmmu != until; sysmmu = sysmmu->parent)
+
 static struct kmem_cache *lv2table_kmem_cache;
 
 static unsigned long *section_entry(unsigned long *pgtable, unsigned long iova)
@@ -170,6 +181,16 @@ struct exynos_iommu_domain {
 	spinlock_t pgtablelock; /* lock for modifying page table @ pgtable */
 };
 
+/* exynos_iommu_owner
+ * Metadata attached to the owner of a group of System MMUs that belong
+ * to the same owner device.
+ */
+struct exynos_iommu_owner {
+	struct list_head client; /* entry of exynos_iommu_domain.clients */
+	struct device *dev;
+	spinlock_t lock; /* Lock to preserve consistency of System MMU */
+};
+
 struct sysmmu_version {
 	unsigned char major; /* major = 0 means that driver must use MMU_VERSION
				register instead of this structure */
@@ -177,9 +198,8 @@ struct sysmmu_version {
 };
 
 struct sysmmu_drvdata {
-	struct list_head node; /* entry of exynos_iommu_domain.clients */
 	struct device *sysmmu; /* System MMU's device descriptor */
-	struct device *dev; /* Owner of system MMU */
+	struct device *master; /* Client device that needs System MMU */
 	int nsfrs;
 	struct clk *clk;
 	int activations;
@@ -281,62 +301,70 @@ void exynos_sysmmu_set_prefbuf(struct device *dev,
 				unsigned long base0, unsigned long size0,
 				unsigned long base1, unsigned long size1)
 {
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
-	unsigned long flags;
-	int i;
+	struct device *sysmmu;
 
-	BUG_ON((base0 + size0) <= base0);
-	BUG_ON((size1 > 0) && ((base1 + size1) <= base1));
+	for_each_sysmmu(dev, sysmmu) {
+		int i;
+		unsigned long flags;
+		struct sysmmu_drvdata *data = dev_get_drvdata(sysmmu);
 
-	spin_lock_irqsave(&data->lock, flags);
-	if (!is_sysmmu_active(data))
-		goto finish;
+		BUG_ON((base0 + size0) <= base0);
+		BUG_ON((size1 > 0) && ((base1 + size1) <= base1));
 
-	for (i = 0; i < data->nsfrs; i++) {
-		if (__sysmmu_version(data, i, NULL) == 3) {
-			if (!sysmmu_block(data->sfrbases[i]))
-				continue;
-
-			if (size1 == 0) {
-				if (size0 <= SZ_128K) {
-					base1 = base0;
-					size1 = size0;
-				} else {
-					size1 = size0 -
+		spin_lock_irqsave(&data->lock, flags);
+		if (!is_sysmmu_active(data)) {
+			spin_unlock_irqrestore(&data->lock, flags);
+			continue;
+		}
+
+		for (i = 0; i < data->nsfrs; i++) {
+			if (__sysmmu_version(data, i, NULL) == 3) {
+				if (!sysmmu_block(data->sfrbases[i]))
+					continue;
+
+				if (size1 == 0) {
+					if (size0 <= SZ_128K) {
+						base1 = base0;
+						size1 = size0;
+					} else {
+						size1 = size0 -
 						ALIGN(size0 / 2, SZ_64K);
-					size0 = size0 - size1;
-					base1 = base0 + size0;
+						size0 = size0 - size1;
+						base1 = base0 + size0;
+					}
 				}
-			}
-			__sysmmu_set_prefbuf(
+				__sysmmu_set_prefbuf(
 					data->sfrbases[i], base0, size0, 0);
-			__sysmmu_set_prefbuf(
+				__sysmmu_set_prefbuf(
 					data->sfrbases[i], base1, size1, 1);
-			sysmmu_unblock(data->sfrbases[i]);
+				sysmmu_unblock(data->sfrbases[i]);
+			}
 		}
+		spin_unlock_irqrestore(&data->lock, flags);
 	}
-finish:
-	spin_unlock_irqrestore(&data->lock, flags);
 }
 
 static void __set_fault_handler(struct sysmmu_drvdata *data,
 					sysmmu_fault_handler_t handler)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&data->lock, flags);
 	data->fault_handler = handler;
-	spin_unlock_irqrestore(&data->lock, flags);
 }
 
 void exynos_sysmmu_set_fault_handler(struct device *dev,
 					sysmmu_fault_handler_t handler)
 {
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+	struct exynos_iommu_owner *owner = dev->archdata.iommu;
+	struct device *sysmmu;
+	unsigned long flags;
+
+	spin_lock_irqsave(&owner->lock, flags);
 
-	__set_fault_handler(data, handler);
+	for_each_sysmmu(dev, sysmmu)
+		__set_fault_handler(dev_get_drvdata(sysmmu), handler);
+
+	spin_unlock_irqrestore(&owner->lock, flags);
 }
 
 static int default_fault_handler(enum exynos_sysmmu_inttype itype,
@@ -400,7 +428,7 @@ static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
 	}
 
 	if (data->domain)
-		ret = report_iommu_fault(data->domain, data->dev,
+		ret = report_iommu_fault(data->domain, data->master,
 				addr, itype);
 
 	if ((ret == -ENOSYS) && data->fault_handler) {
@@ -425,174 +453,264 @@ static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-static bool __exynos_sysmmu_disable(struct sysmmu_drvdata *data)
+static void __sysmmu_disable_nocount(struct sysmmu_drvdata *drvdata)
 {
-	unsigned long flags;
-	bool disabled = false;
 	int i;
 
-	spin_lock_irqsave(&data->lock, flags);
+	for (i = 0; i < drvdata->nsfrs; i++)
+		__raw_writel(CTRL_DISABLE,
+			drvdata->sfrbases[i] + REG_MMU_CTRL);
+
+	clk_disable(drvdata->clk);
+}
+
+static bool __sysmmu_disable(struct sysmmu_drvdata *drvdata)
+{
+	bool disabled;
+	unsigned long flags;
 
-	if (!set_sysmmu_inactive(data))
-		goto finish;
+	spin_lock_irqsave(&drvdata->lock, flags);
 
-	for (i = 0; i < data->nsfrs; i++)
-		__raw_writel(CTRL_DISABLE, data->sfrbases[i] + REG_MMU_CTRL);
+	disabled = set_sysmmu_inactive(drvdata);
 
-	if (data->clk)
-		clk_disable(data->clk);
+	if (disabled) {
+		drvdata->pgtable = 0;
+		drvdata->domain = NULL;
 
-	disabled = true;
-	data->pgtable = 0;
-	data->domain = NULL;
-finish:
-	spin_unlock_irqrestore(&data->lock, flags);
+		__sysmmu_disable_nocount(drvdata);
 
-	if (disabled)
-		dev_dbg(data->sysmmu, "Disabled\n");
-	else
-		dev_dbg(data->sysmmu, "%d times left to be disabled\n",
-			data->activations);
+		dev_dbg(drvdata->sysmmu, "Disabled\n");
+	} else {
+		dev_dbg(drvdata->sysmmu, "%d times left to be disabled\n",
+			drvdata->activations);
+	}
+
+	spin_unlock_irqrestore(&drvdata->lock, flags);
 
 	return disabled;
 }
 
-/* __exynos_sysmmu_enable: Enables System MMU
- *
- * returns -error if an error occurred and System MMU is not enabled,
- * 0 if the System MMU has been just enabled and 1 if System MMU was already
- * enabled before.
- */
-static int __exynos_sysmmu_enable(struct sysmmu_drvdata *data,
-			unsigned long pgtable, struct iommu_domain *domain)
+static bool __exynos_sysmmu_disable(struct device *dev)
 {
-	int i, ret = 0;
 	unsigned long flags;
+	bool disabled = true;
+	struct exynos_iommu_owner *owner = dev->archdata.iommu;
+	struct device *sysmmu;
 
-	spin_lock_irqsave(&data->lock, flags);
+	BUG_ON(!has_sysmmu(dev));
 
-	if (!set_sysmmu_active(data)) {
-		if (WARN_ON(pgtable != data->pgtable)) {
-			ret = -EBUSY;
-			set_sysmmu_inactive(data);
-		} else {
-			ret = 1;
-		}
+	spin_lock_irqsave(&owner->lock, flags);
 
-		dev_dbg(data->sysmmu, "Already enabled\n");
-		goto finish;
+	/* Every call to __sysmmu_disable() must return same result */
+	for_each_sysmmu(dev, sysmmu) {
+		struct sysmmu_drvdata *drvdata = dev_get_drvdata(sysmmu);
+		disabled = __sysmmu_disable(drvdata);
+		if (disabled)
+			drvdata->master = NULL;
 	}
 
-	if (data->clk)
-		clk_enable(data->clk);
+	spin_unlock_irqrestore(&owner->lock, flags);
 
-	data->pgtable = pgtable;
+	return disabled;
+}
 
-	for (i = 0; i < data->nsfrs; i++) {
-		__sysmmu_set_ptbase(data->sfrbases[i], pgtable);
+static void __sysmmu_enable_nocount(struct sysmmu_drvdata *drvdata)
+{
+	int i;
+
+	clk_enable(drvdata->clk);
+
+	for (i = 0; i < drvdata->nsfrs; i++) {
+		int maj, min;
+		unsigned long cfg = 1;
 
-		if (__sysmmu_version(data, i, NULL) == 3) {
+		__sysmmu_set_ptbase(drvdata->sfrbases[i], drvdata->pgtable);
+
+		/* Initialization of REG_MMU_CFG must be prior to
+		   call to __sysmmu_init_prefbuf() */
+		maj = __sysmmu_version(drvdata, i, &min);
+		if (maj == 3) { /* System MMU version is 3.x */
 			__raw_writel((1 << 12) | (2 << 28),
-				data->sfrbases[i] + REG_MMU_CFG);
-			__sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 0);
-			__sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 1);
+				drvdata->sfrbases[i] + REG_MMU_CFG);
+			__sysmmu_set_prefbuf(drvdata->sfrbases[i], 0, -1, 0);
+			__sysmmu_set_prefbuf(drvdata->sfrbases[i], 0, -1, 1);
 		}
 
-		__raw_writel(CTRL_ENABLE, data->sfrbases[i] + REG_MMU_CTRL);
+		__raw_writel(cfg, drvdata->sfrbases[i] + REG_MMU_CFG);
+
+		__raw_writel(CTRL_ENABLE, drvdata->sfrbases[i] + REG_MMU_CTRL);
 	}
+}
+
+static int __sysmmu_enable(struct sysmmu_drvdata *drvdata,
+			unsigned long pgtable, struct iommu_domain *domain)
+{
+	int ret = 0;
+	unsigned long flags;
 
-	data->domain = domain;
+	spin_lock_irqsave(&drvdata->lock, flags);
+	if (set_sysmmu_active(drvdata)) {
+		drvdata->pgtable = pgtable;
+		drvdata->domain = domain;
+
+		__sysmmu_enable_nocount(drvdata);
+
+		dev_dbg(drvdata->sysmmu, "Enabled\n");
+	} else {
+		ret = (pgtable == drvdata->pgtable) ? 1 : -EBUSY;
+
+		dev_dbg(drvdata->sysmmu, "Already enabled\n");
+	}
 
-	dev_dbg(data->sysmmu, "Enabled\n");
-finish:
-	spin_unlock_irqrestore(&data->lock, flags);
+	if (WARN_ON(ret < 0))
+		set_sysmmu_inactive(drvdata); /* decrement count */
+
+	spin_unlock_irqrestore(&drvdata->lock, flags);
+
+	return ret;
+}
+
+/* __exynos_sysmmu_enable: Enables System MMU
+ *
+ * returns -error if an error occurred and System MMU is not enabled,
+ * 0 if the System MMU has been just enabled and 1 if System MMU was already
+ * enabled before.
+ */
+static int __exynos_sysmmu_enable(struct device *dev, unsigned long pgtable,
+				struct iommu_domain *domain)
+{
+	int ret = 0;
+	unsigned long flags;
+	struct exynos_iommu_owner *owner = dev->archdata.iommu;
+	struct device *sysmmu;
+
+	BUG_ON(!has_sysmmu(dev));
+
+	spin_lock_irqsave(&owner->lock, flags);
+
+	for_each_sysmmu(dev, sysmmu) {
+		struct sysmmu_drvdata *drvdata = dev_get_drvdata(sysmmu);
+		ret = __sysmmu_enable(drvdata, pgtable, domain);
+		if (ret < 0) {
+			struct device *iter;
+			for_each_sysmmu_until(dev, iter, sysmmu) {
+				drvdata = dev_get_drvdata(iter);
+				__sysmmu_disable(drvdata);
+			}
+		} else {
+			drvdata->master = dev;
+		}
+	}
+
+	spin_unlock_irqrestore(&owner->lock, flags);
 
 	return ret;
 }
 
 int exynos_sysmmu_enable(struct device *dev, unsigned long pgtable)
 {
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
 	int ret;
+	struct device *sysmmu;
 
 	BUG_ON(!memblock_is_memory(pgtable));
 
-	ret = pm_runtime_get_sync(data->sysmmu);
+	for_each_sysmmu(dev, sysmmu) {
+		ret = pm_runtime_get_sync(sysmmu);
+		if (ret < 0)
+			break;
+	}
+
 	if (ret < 0) {
-		dev_dbg(data->sysmmu, "Failed to enable\n");
+		struct device *start;
+		for_each_sysmmu_until(dev, start, sysmmu)
+			pm_runtime_put(start);
+
 		return ret;
 	}
 
-	ret = __exynos_sysmmu_enable(data, pgtable, NULL);
-	if (WARN_ON(ret < 0)) {
-		pm_runtime_put(data->sysmmu);
-		dev_err(data->sysmmu,
-			"Already enabled with page table %#lx\n",
-			data->pgtable);
-	} else {
-		data->dev = dev;
-	}
+	ret = __exynos_sysmmu_enable(dev, pgtable, NULL);
+	if (ret < 0)
+		for_each_sysmmu(dev, sysmmu)
+			pm_runtime_put(sysmmu);
 
 	return ret;
 }
 
 bool exynos_sysmmu_disable(struct device *dev)
 {
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
 	bool disabled;
+	struct device *sysmmu;
+
+	disabled = __exynos_sysmmu_disable(dev);
 
-	disabled = __exynos_sysmmu_disable(data);
-	pm_runtime_put(data->sysmmu);
+	for_each_sysmmu(dev, sysmmu)
+		pm_runtime_put(sysmmu);
 
 	return disabled;
 }
 
 static void sysmmu_tlb_invalidate_entry(struct device *dev, unsigned long iova)
 {
-	unsigned long flags;
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+	struct device *sysmmu;
 
-	spin_lock_irqsave(&data->lock, flags);
+	for_each_sysmmu(dev, sysmmu) {
+		unsigned long flags;
+		struct sysmmu_drvdata *data;
 
-	if (is_sysmmu_active(data)) {
-		int i;
-		for (i = 0; i < data->nsfrs; i++) {
-			if (sysmmu_block(data->sfrbases[i])) {
-				__sysmmu_tlb_invalidate_entry(
+		data = dev_get_drvdata(sysmmu);
+
+		spin_lock_irqsave(&data->lock, flags);
+		if (is_sysmmu_active(data)) {
+			int i;
+			for (i = 0; i < data->nsfrs; i++) {
+				if (sysmmu_block(data->sfrbases[i])) {
+					__sysmmu_tlb_invalidate_entry(
 						data->sfrbases[i], iova);
-				sysmmu_unblock(data->sfrbases[i]);
+					sysmmu_unblock(data->sfrbases[i]);
+				} else {
+					dev_err(dev,
+					"%s failed due to blocking timeout\n",
+						__func__);
+				}
 			}
+		} else {
+			dev_dbg(dev,
+			"Disabled. Skipping TLB invalidation for %#lx\n", iova);
 		}
-	} else {
-		dev_dbg(data->sysmmu,
-			"Disabled. Skipping invalidating TLB.\n");
+		spin_unlock_irqrestore(&data->lock, flags);
 	}
-
-	spin_unlock_irqrestore(&data->lock, flags);
 }
 
 void exynos_sysmmu_tlb_invalidate(struct device *dev)
 {
-	unsigned long flags;
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
-
-	spin_lock_irqsave(&data->lock, flags);
-
-	if (is_sysmmu_active(data)) {
-		int i;
-		for (i = 0; i < data->nsfrs; i++) {
-			if (sysmmu_block(data->sfrbases[i])) {
-				__sysmmu_tlb_invalidate(data->sfrbases[i]);
-				sysmmu_unblock(data->sfrbases[i]);
+	struct device *sysmmu;
+
+	for_each_sysmmu(dev, sysmmu) {
+		unsigned long flags;
+		struct sysmmu_drvdata *data;
+
+		data = dev_get_drvdata(sysmmu);
+
+		spin_lock_irqsave(&data->lock, flags);
+		if (is_sysmmu_active(data)) {
+			int i;
+			for (i = 0; i < data->nsfrs; i++) {
+				if (sysmmu_block(data->sfrbases[i])) {
+					__sysmmu_tlb_invalidate(
+						data->sfrbases[i]);
+					sysmmu_unblock(data->sfrbases[i]);
+				} else {
+					dev_err(dev,
					"%s failed due to blocking timeout\n",
+						__func__);
+				}
 			}
+		} else {
+			dev_dbg(dev, "Disabled. Skipping TLB invalidation\n");
 		}
-	} else {
-		dev_dbg(data->sysmmu,
-			"Disabled. Skipping invalidating TLB.\n");
+		spin_unlock_irqrestore(&data->lock, flags);
 	}
-
-	spin_unlock_irqrestore(&data->lock, flags);
 }
 
 static int __init __sysmmu_init_clock(struct device *sysmmu,
@@ -646,6 +764,7 @@ static int __init __sysmmu_setup(struct device *sysmmu,
 					struct sysmmu_drvdata *drvdata)
 {
 	struct device_node *master_node;
+	struct device *child;
 	const char *compat;
 	struct platform_device *pmaster = NULL;
 	u32 master_inst_no = -1;
@@ -679,12 +798,41 @@ static int __init __sysmmu_setup(struct device *sysmmu,
 		return __sysmmu_init_clock(sysmmu, drvdata, NULL);
 	}
 
-	pmaster->dev.archdata.iommu = sysmmu;
+	child = &pmaster->dev;
+
+	while (child->parent && is_sysmmu(child->parent))
+		child = child->parent;
+
+	ret = device_move(child, sysmmu, DPM_ORDER_PARENT_BEFORE_DEV);
+	if (ret) {
+		dev_err(sysmmu, "Failed to set parent of %s\n",
+			dev_name(child));
+		goto err_dev_put;
+	}
+
+	if (!pmaster->dev.archdata.iommu) {
+		struct exynos_iommu_owner *owner;
+		owner = devm_kzalloc(sysmmu, sizeof(*owner), GFP_KERNEL);
+		if (!owner) {
+			ret = -ENOMEM;
+			dev_err(sysmmu, "Failed to allocate iommu data\n");
+			goto err_dev_put;
+		}
+
+		INIT_LIST_HEAD(&owner->client);
+		owner->dev = &pmaster->dev;
+		spin_lock_init(&owner->lock);
+
+		pmaster->dev.archdata.iommu = owner;
+	}
 
 	ret = __sysmmu_init_clock(sysmmu, drvdata, &pmaster->dev);
 	if (ret)
 		dev_err(sysmmu, "Failed to initialize gating clocks\n");
-
+	else
+		dev_dbg(sysmmu, "Assigned master device %s\n",
+			dev_name(&pmaster->dev));
+err_dev_put:
 	of_dev_put(pmaster);
 
 	return ret;
@@ -749,13 +897,13 @@ static int __init exynos_sysmmu_probe(struct platform_device *pdev)
 	if (!ret) {
 		data->sysmmu = dev;
 		spin_lock_init(&data->lock);
-		INIT_LIST_HEAD(&data->node);
 
 		__set_fault_handler(data, &default_fault_handler);
 
 		platform_set_drvdata(pdev, data);
 
-		dev_dbg(dev, "Initialized\n");
+		dev->archdata.iommu = &sysmmu_placeholder;
+		dev_dbg(dev, "Initialized successfully!\n");
 	}
 
 	return ret;
@@ -850,7 +998,7 @@ err_pgtable:
 static void exynos_iommu_domain_destroy(struct iommu_domain *domain)
 {
 	struct exynos_iommu_domain *priv = domain->priv;
-	struct sysmmu_drvdata *data;
+	struct exynos_iommu_owner *owner, *n;
 	unsigned long flags;
 	int i;
 
@@ -858,9 +1006,14 @@ static void exynos_iommu_domain_destroy(struct iommu_domain *domain)
 
 	spin_lock_irqsave(&priv->lock, flags);
 
-	list_for_each_entry(data, &priv->clients, node) {
-		while (!exynos_sysmmu_disable(data->dev))
+	list_for_each_entry_safe(owner, n, &priv->clients, client) {
+		struct device *sysmmu;
+		while (!__exynos_sysmmu_disable(owner->dev))
 			; /* until System MMU is actually disabled */
+		list_del_init(&owner->client);
+
+		for_each_sysmmu(owner->dev, sysmmu)
+			pm_runtime_put(sysmmu);
 	}
 
 	spin_unlock_irqrestore(&priv->lock, flags);
@@ -879,37 +1032,68 @@ static void exynos_iommu_domain_destroy(struct iommu_domain *domain)
 static int exynos_iommu_attach_device(struct iommu_domain *domain,
 				   struct device *dev)
 {
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+	struct exynos_iommu_owner *owner = dev->archdata.iommu;
 	struct exynos_iommu_domain *priv = domain->priv;
 	unsigned long flags;
 	int ret;
+	struct device *sysmmu;
 
-	ret = pm_runtime_get_sync(data->sysmmu);
-	if (ret < 0)
-		return ret;
+	if (WARN_ON(!list_empty(&owner->client))) {
+		bool found = false;
+		struct exynos_iommu_owner *tmpowner;
 
-	ret = 0;
+		spin_lock_irqsave(&priv->lock, flags);
+		list_for_each_entry(tmpowner, &priv->clients, client) {
+			if (tmpowner == owner) {
+				found = true;
+				break;
+			}
+		}
+		spin_unlock_irqrestore(&priv->lock, flags);
 
-	spin_lock_irqsave(&priv->lock, flags);
+		if (!found) {
+			dev_err(dev, "%s: Already attached to another domain\n",
+								__func__);
+			return -EBUSY;
+		}
+
+		dev_dbg(dev, "%s: Already attached to this domain\n", __func__);
+		return 0;
+	}
+
+	for_each_sysmmu(dev, sysmmu) {
+		ret = pm_runtime_get_sync(sysmmu);
+		if (ret < 0)
+			break;
+	}
 
-	ret = __exynos_sysmmu_enable(data, __pa(priv->pgtable), domain);
+	if (ret < 0) {
+		struct device *start;
+		for_each_sysmmu_until(dev, start, sysmmu)
+			pm_runtime_put(start);
 
-	if (ret == 0) {
-		/* 'data->node' must not be appeared in priv->clients */
-		BUG_ON(!list_empty(&data->node));
-		data->dev = dev;
-		list_add_tail(&data->node, &priv->clients);
+		return ret;
 	}
 
+	spin_lock_irqsave(&priv->lock, flags);
+
+	ret = __exynos_sysmmu_enable(dev, __pa(priv->pgtable), domain);
+
+	/*
+	 * __exynos_sysmmu_enable() returns 1
+	 * if the System MMU of dev is already enabled
+	 */
+	BUG_ON(ret > 0);
+
+	list_add_tail(&owner->client, &priv->clients);
+
 	spin_unlock_irqrestore(&priv->lock, flags);
 
 	if (ret < 0) {
 		dev_err(dev, "%s: Failed to attach IOMMU with pgtable %#lx\n",
 				__func__, __pa(priv->pgtable));
-		pm_runtime_put(data->sysmmu);
-	} else if (ret > 0) {
-		dev_dbg(dev, "%s: IOMMU with pgtable 0x%lx already attached\n",
-					__func__, __pa(priv->pgtable));
+		for_each_sysmmu(dev, sysmmu)
+			pm_runtime_put(sysmmu);
 	} else {
 		dev_dbg(dev, "%s: Attached new IOMMU with pgtable 0x%lx\n",
 					__func__, __pa(priv->pgtable));
@@ -921,39 +1105,33 @@ static int exynos_iommu_attach_device(struct iommu_domain *domain,
 static void exynos_iommu_detach_device(struct iommu_domain *domain,
 				    struct device *dev)
 {
-	struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+	struct exynos_iommu_owner *owner, *n;
 	struct exynos_iommu_domain *priv = domain->priv;
-	struct list_head *pos;
 	unsigned long flags;
-	bool found = false;
 
 	spin_lock_irqsave(&priv->lock, flags);
 
-	list_for_each(pos, &priv->clients) {
-		if (list_entry(pos, struct sysmmu_drvdata, node) == data) {
-			found = true;
+	list_for_each_entry_safe(owner, n, &priv->clients, client) {
+		if (owner == dev->archdata.iommu) {
+			if (__exynos_sysmmu_disable(dev))
+				list_del_init(&owner->client);
+			else
+				BUG();
 			break;
 		}
 	}
 
-	if (!found)
-		goto finish;
+	spin_unlock_irqrestore(&priv->lock, flags);
 
-	if (__exynos_sysmmu_disable(data)) {
+	if (owner == dev->archdata.iommu) {
+		struct device *sysmmu;
 		dev_dbg(dev, "%s: Detached IOMMU with pgtable %#lx\n",
 					__func__, __pa(priv->pgtable));
-		list_del_init(&data->node);
-
-	} else {
-		dev_dbg(dev, "%s: Detaching IOMMU with pgtable %#lx delayed",
-					__func__, __pa(priv->pgtable));
-	}
+		for_each_sysmmu(dev, sysmmu)
+			pm_runtime_put(sysmmu);
 
-finish:
-	spin_unlock_irqrestore(&priv->lock, flags);
-
-	if (found)
-		pm_runtime_put(data->sysmmu);
+	} else
+		dev_dbg(dev, "%s: No IOMMU is attached\n", __func__);
 }
 
 static unsigned long *alloc_lv2entry(unsigned long *sent, unsigned long iova,
@@ -1068,7 +1246,6 @@ static size_t exynos_iommu_unmap(struct iommu_domain *domain,
 					unsigned long iova, size_t size)
 {
 	struct exynos_iommu_domain *priv = domain->priv;
-	struct sysmmu_drvdata *data;
 	unsigned long flags;
 	unsigned long *ent;
 
@@ -1120,8 +1297,11 @@ done:
 	spin_unlock_irqrestore(&priv->pgtablelock, flags);
 
 	spin_lock_irqsave(&priv->lock, flags);
-	list_for_each_entry(data, &priv->clients, node)
-		sysmmu_tlb_invalidate_entry(data->dev, iova);
+	{
+		struct exynos_iommu_owner *owner;
+		list_for_each_entry(owner, &priv->clients, client)
+			sysmmu_tlb_invalidate_entry(owner->dev, iova);
+	}
 	spin_unlock_irqrestore(&priv->lock, flags);