From patchwork Wed Mar 1 17:42:54 2017
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 9598857
X-Patchwork-Delegate: agross@codeaurora.org
From: Rob Clark <robdclark@gmail.com>
To: iommu@lists.linux-foundation.org
Cc: linux-arm-msm@vger.kernel.org, Robin Murphy, Will Deacon, Sricharan,
 Mark Rutland, Stanimir Varbanov, Rob Clark
Subject: [PATCH 5/9] iommu: add qcom_iommu
Date: Wed, 1 Mar 2017 12:42:54 -0500
Message-Id: <20170301174258.14618-6-robdclark@gmail.com>
In-Reply-To: <20170301174258.14618-1-robdclark@gmail.com>
References: <20170301174258.14618-1-robdclark@gmail.com>

An iommu driver for Qualcomm "B" family devices which do not completely
implement the ARM SMMU spec.  These devices have a context-bank register
layout similar to the ARM SMMU, but no global register space (or at least
not one that is accessible).

Signed-off-by: Rob Clark <robdclark@gmail.com>
---
 drivers/iommu/Kconfig         |  10 +
 drivers/iommu/Makefile        |   1 +
 drivers/iommu/arm-smmu-regs.h |   2 +
 drivers/iommu/qcom_iommu.c    | 825 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 838 insertions(+)
 create mode 100644 drivers/iommu/qcom_iommu.c
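For review context, a rough sketch of the device-tree shape this driver
expects (the actual binding is added earlier in this series; the node
names, unit addresses, clock specifiers, interrupt numbers, and secure-id
value below are illustrative guesses, not taken from this patch):

    apps_iommu: iommu@1e20000 {
            compatible = "qcom,msm-iommu-v1";
            reg = <0x1e20000 0x40000>;
            /* clock names match the devm_clk_get() calls in
             * qcom_iommu_device_probe() */
            clocks = <&gcc GCC_SMMU_CFG_CLK>, <&gcc GCC_APSS_TCU_CLK>;
            clock-names = "iface_clk", "bus_clk";
            qcom,iommu-secure-id = <17>;
            #address-cells = <1>;
            #size-cells = <1>;
            ranges = <0 0x1e20000 0x40000>;
            #iommu-cells = <1>;

            /* one child node per context bank; qcom_iommu_ctx_probe()
             * derives asid = (context bank "reg" offset) / 0x1000 */
            iommu-ctx@1000 {
                    compatible = "qcom,msm-iommu-v1-ns";
                    reg = <0x1000 0x1000>;
                    interrupts = <GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>;
            };

            iommu-ctx@2000 {
                    compatible = "qcom,msm-iommu-v1-sec";
                    reg = <0x2000 0x1000>;
                    interrupts = <GIC_SPI 71 IRQ_TYPE_LEVEL_HIGH>;
            };
    };

A master then references a context bank by its asid, matching the
single-cell argument that qcom_iommu_of_xlate() expects:

    gpu@1c00000 {
            /* ... */
            iommus = <&apps_iommu 1>;
    };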
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 37e204f..400a404 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -359,4 +359,14 @@ config MTK_IOMMU_V1
 	  if unsure, say N here.
 
+config QCOM_IOMMU
+	bool "Qualcomm IOMMU Support"
+	depends on ARM || ARM64
+	depends on ARCH_QCOM || COMPILE_TEST
+	select IOMMU_API
+	select IOMMU_IO_PGTABLE_LPAE
+	select ARM_DMA_USE_IOMMU
+	help
+	  Support for IOMMU on certain Qualcomm SoCs.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 195f7b9..b910aea 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -27,3 +27,4 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
 obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
+obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
diff --git a/drivers/iommu/arm-smmu-regs.h b/drivers/iommu/arm-smmu-regs.h
index 632240f..e643164 100644
--- a/drivers/iommu/arm-smmu-regs.h
+++ b/drivers/iommu/arm-smmu-regs.h
@@ -174,6 +174,8 @@ enum arm_smmu_s2cr_privcfg {
 #define ARM_SMMU_CB_S1_TLBIVAL		0x620
 #define ARM_SMMU_CB_S2_TLBIIPAS2	0x630
 #define ARM_SMMU_CB_S2_TLBIIPAS2L	0x638
+#define ARM_SMMU_CB_TLBSYNC		0x7f0
+#define ARM_SMMU_CB_TLBSTATUS		0x7f4
 #define ARM_SMMU_CB_ATS1PR		0x800
 #define ARM_SMMU_CB_ATSR		0x8f0
diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
new file mode 100644
index 0000000..5d3bb63
--- /dev/null
+++ b/drivers/iommu/qcom_iommu.c
@@ -0,0 +1,825 @@
+/*
+ * IOMMU API for QCOM secure IOMMUs.  Somewhat based on arm-smmu.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) 2013 ARM Limited
+ * Copyright (C) 2017 Red Hat
+ */
+
+#define pr_fmt(fmt) "qcom-iommu: " fmt
+
+#include <linux/atomic.h>
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/io-64-nonatomic-hi-lo.h>
+#include <linux/iommu.h>
+#include <linux/iopoll.h>
+#include <linux/kconfig.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_iommu.h>
+#include <linux/platform_device.h>
+#include <linux/pm.h>
+#include <linux/pm_runtime.h>
+#include <linux/qcom_scm.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "io-pgtable.h"
+#include "arm-smmu-regs.h"
+
+#define SMMU_INTR_SEL_NS	0x2000
+
+struct qcom_iommu_dev {
+	/* IOMMU core code handle */
+	struct iommu_device	 iommu;
+	struct device		*dev;
+	struct clk		*iface_clk;
+	struct clk		*bus_clk;
+	void __iomem		*local_base;
+	u32			 sec_id;
+	struct list_head	 context_list;	/* list of qcom_iommu_ctx */
+};
+
+struct qcom_iommu_ctx {
+	struct device		*dev;
+	void __iomem		*base;
+	int			 irq;
+	bool			 secure_init;
+	u32			 asid;		/* asid and ctx bank # are 1:1 */
+	struct iommu_group	*group;
+	struct list_head	 node;		/* entry in qcom_iommu_dev::context_list */
+};
+
+struct qcom_iommu_domain {
+	struct io_pgtable_ops	*pgtbl_ops;
+	spinlock_t		 pgtbl_lock;
+	struct mutex		 init_mutex;	/* Protects iommu pointer */
+	struct iommu_domain	 domain;
+	struct qcom_iommu_dev	*iommu;
+};
+
+static struct qcom_iommu_domain *to_qcom_iommu_domain(struct iommu_domain *dom)
+{
+	return container_of(dom, struct qcom_iommu_domain, domain);
+}
+
+static const struct iommu_ops qcom_iommu_ops;
+static struct platform_driver qcom_iommu_driver;
+
+static struct qcom_iommu_dev * __to_iommu(struct iommu_fwspec *fwspec)
+{
+	if (WARN_ON(!fwspec || fwspec->ops != &qcom_iommu_ops))
+		return NULL;
+	return fwspec->iommu_priv;
+}
+
+static struct qcom_iommu_ctx * __to_ctx(struct iommu_fwspec *fwspec, unsigned asid)
+{
+	struct qcom_iommu_dev *qcom_iommu = __to_iommu(fwspec);
+	struct qcom_iommu_ctx *ctx;
+
+	if (!qcom_iommu)
+		return NULL;
+
+	list_for_each_entry(ctx, &qcom_iommu->context_list, node)
+		if (ctx->asid == asid)
+			return ctx;
+
+	WARN(1, "no ctx for asid %u\n", asid);
+	return NULL;
+}
+
+static inline void
+iommu_writel(struct qcom_iommu_ctx *ctx, unsigned reg, u32 val)
+{
+	writel_relaxed(val, ctx->base + reg);
+}
+
+static inline void
+iommu_writeq(struct qcom_iommu_ctx *ctx, unsigned reg, u64 val)
+{
+	writeq_relaxed(val, ctx->base + reg);
+}
+
+static inline u32
+iommu_readl(struct qcom_iommu_ctx *ctx, unsigned reg)
+{
+	return readl_relaxed(ctx->base + reg);
+}
+
+static inline u64
+iommu_readq(struct qcom_iommu_ctx *ctx, unsigned reg)
+{
+	return readq_relaxed(ctx->base + reg);
+}
+
+static void __sync_tlb(struct qcom_iommu_ctx *ctx)
+{
+	unsigned int val;
+	int ret;
+
+	iommu_writel(ctx, ARM_SMMU_CB_TLBSYNC, 0);
+
+	ret = readl_poll_timeout(ctx->base + ARM_SMMU_CB_TLBSTATUS, val,
+				 (val & 0x1) == 0, 0, 5000000);
+	if (ret)
+		dev_err(ctx->dev, "timeout waiting for TLB SYNC\n");
+}
+
+static void qcom_iommu_tlb_sync(void *cookie)
+{
+	struct iommu_fwspec *fwspec = cookie;
+	unsigned i;
+
+	for (i = 0; i < fwspec->num_ids; i++)
+		__sync_tlb(__to_ctx(fwspec, fwspec->ids[i]));
+}
+
+static void qcom_iommu_tlb_inv_context(void *cookie)
+{
+	struct iommu_fwspec *fwspec = cookie;
+	unsigned i;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		struct qcom_iommu_ctx *ctx = __to_ctx(fwspec, fwspec->ids[i]);
+
+		iommu_writel(ctx, ARM_SMMU_CB_S1_TLBIASID, ctx->asid);
+		__sync_tlb(ctx);
+	}
+}
+
+static void qcom_iommu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+					    size_t granule, bool leaf, void *cookie)
+{
+	struct iommu_fwspec *fwspec = cookie;
+	unsigned i, reg;
+
+	reg = leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		struct qcom_iommu_ctx *ctx = __to_ctx(fwspec, fwspec->ids[i]);
+		size_t s = size;
+
+		/* TLBIVA(L) takes a page-aligned address with the ASID in
+		 * the low bits, so mask off the 4k page offset:
+		 */
+		iova &= ~(SZ_4K - 1);
+		iova |= ctx->asid;
+		do {
+			iommu_writel(ctx, reg, iova);
+			iova += granule;
+		} while (s -= granule);
+	}
+}
+
+static const struct iommu_gather_ops qcom_gather_ops = {
+	.tlb_flush_all	= qcom_iommu_tlb_inv_context,
+	.tlb_add_flush	= qcom_iommu_tlb_inv_range_nosync,
+	.tlb_sync	= qcom_iommu_tlb_sync,
+};
+
+static irqreturn_t qcom_iommu_fault(int irq, void *dev)
+{
+	struct qcom_iommu_ctx *ctx = dev;
+	u32 fsr, fsynr;
+	unsigned long iova;
+
+	fsr = iommu_readl(ctx, ARM_SMMU_CB_FSR);
+
+	if (!(fsr & FSR_FAULT))
+		return IRQ_NONE;
+
+	fsynr = iommu_readl(ctx, ARM_SMMU_CB_FSYNR0);
+	iova = iommu_readq(ctx, ARM_SMMU_CB_FAR);
+
+	dev_err_ratelimited(ctx->dev,
+			    "Unhandled context fault: fsr=0x%x, "
+			    "iova=0x%08lx, fsynr=0x%x, cb=%d\n",
+			    fsr, iova, fsynr, ctx->asid);
+
+	iommu_writel(ctx, ARM_SMMU_CB_FSR, fsr);
+
+	return IRQ_HANDLED;
+}
+
+static int qcom_iommu_init_domain(struct iommu_domain *domain,
+				  struct qcom_iommu_dev *qcom_iommu,
+				  struct iommu_fwspec *fwspec)
+{
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+	struct io_pgtable_ops *pgtbl_ops;
+	struct io_pgtable_cfg pgtbl_cfg;
+	int i, ret = 0;
+	u32 reg;
+
+	mutex_lock(&qcom_domain->init_mutex);
+	if (qcom_domain->iommu)
+		goto out_unlock;
+
+	pgtbl_cfg = (struct io_pgtable_cfg) {
+		.pgsize_bitmap	= qcom_iommu_ops.pgsize_bitmap,
+		.ias		= 32,
+		.oas		= 40,
+		.tlb		= &qcom_gather_ops,
+		.iommu_dev	= qcom_iommu->dev,
+	};
+
+	qcom_domain->iommu = qcom_iommu;
+	pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, fwspec);
+	if (!pgtbl_ops) {
+		dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
+		ret = -ENOMEM;
+		goto out_clear_iommu;
+	}
+
+	/* Update the domain's page sizes to reflect the page table format */
+	domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+	domain->geometry.aperture_end = (1ULL << 48) - 1;
+	domain->geometry.force_aperture = true;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		struct qcom_iommu_ctx *ctx = __to_ctx(fwspec, fwspec->ids[i]);
+
+		if (!ctx->secure_init) {
+			ret = qcom_scm_restore_sec_cfg(qcom_iommu->sec_id, ctx->asid);
+			if (ret) {
+				dev_err(qcom_iommu->dev, "secure init failed: %d\n", ret);
+				goto out_clear_iommu;
+			}
+			ctx->secure_init = true;
+		}
+
+		/* TTBRs */
+		iommu_writeq(ctx, ARM_SMMU_CB_TTBR0,
+			     pgtbl_cfg.arm_lpae_s1_cfg.ttbr[0] |
+			     ((u64)ctx->asid << TTBRn_ASID_SHIFT));
+		iommu_writeq(ctx, ARM_SMMU_CB_TTBR1,
+			     pgtbl_cfg.arm_lpae_s1_cfg.ttbr[1] |
+			     ((u64)ctx->asid << TTBRn_ASID_SHIFT));
+
+		/* TTBCR */
+		iommu_writel(ctx, ARM_SMMU_CB_TTBCR2,
+			     (pgtbl_cfg.arm_lpae_s1_cfg.tcr >> 32) |
+			     TTBCR2_SEP_UPSTREAM);
+		iommu_writel(ctx, ARM_SMMU_CB_TTBCR,
+			     pgtbl_cfg.arm_lpae_s1_cfg.tcr);
+
+		/* MAIRs (stage-1 only) */
+		iommu_writel(ctx, ARM_SMMU_CB_S1_MAIR0,
+			     pgtbl_cfg.arm_lpae_s1_cfg.mair[0]);
+		iommu_writel(ctx, ARM_SMMU_CB_S1_MAIR1,
+			     pgtbl_cfg.arm_lpae_s1_cfg.mair[1]);
+
+		/* SCTLR */
+		reg = SCTLR_CFIE | SCTLR_CFRE | SCTLR_AFE | SCTLR_TRE |
+			SCTLR_M | SCTLR_S1_ASIDPNE;
+#ifdef __BIG_ENDIAN
+		reg |= SCTLR_E;
+#endif
+		iommu_writel(ctx, ARM_SMMU_CB_SCTLR, reg);
+	}
+
+	mutex_unlock(&qcom_domain->init_mutex);
+
+	/* Publish page table ops for map/unmap */
+	qcom_domain->pgtbl_ops = pgtbl_ops;
+
+	return 0;
+
+out_clear_iommu:
+	qcom_domain->iommu = NULL;
+out_unlock:
+	mutex_unlock(&qcom_domain->init_mutex);
+	return ret;
+}
+
+static struct iommu_domain *qcom_iommu_domain_alloc(unsigned type)
+{
+	struct qcom_iommu_domain *qcom_domain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+	/*
+	 * Allocate the domain and initialise some of its data structures.
+	 * We can't really do anything meaningful until we've added a
+	 * master.
+	 */
+	qcom_domain = kzalloc(sizeof(*qcom_domain), GFP_KERNEL);
+	if (!qcom_domain)
+		return NULL;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&qcom_domain->domain)) {
+		kfree(qcom_domain);
+		return NULL;
+	}
+
+	mutex_init(&qcom_domain->init_mutex);
+	spin_lock_init(&qcom_domain->pgtbl_lock);
+
+	return &qcom_domain->domain;
+}
+
+static void qcom_iommu_domain_free(struct iommu_domain *domain)
+{
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+
+	if (WARN_ON(qcom_domain->iommu))    /* forgot to detach? */
+		return;
+
+	iommu_put_dma_cookie(domain);
+
+	free_io_pgtable_ops(qcom_domain->pgtbl_ops);
+
+	kfree(qcom_domain);
+}
+
+static int qcom_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	struct qcom_iommu_dev *qcom_iommu = __to_iommu(dev->iommu_fwspec);
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+	int ret;
+
+	if (!qcom_iommu) {
+		dev_err(dev, "cannot attach to IOMMU, is it on the same bus?\n");
+		return -ENXIO;
+	}
+
+	/* Ensure that the domain is finalized */
+	pm_runtime_get_sync(qcom_iommu->dev);
+	ret = qcom_iommu_init_domain(domain, qcom_iommu, dev->iommu_fwspec);
+	pm_runtime_put_sync(qcom_iommu->dev);
+	if (ret < 0)
+		return ret;
+
+	/*
+	 * Sanity check the domain. We don't support domains across
+	 * different IOMMUs.
+	 */
+	if (qcom_domain->iommu != qcom_iommu) {
+		dev_err(dev, "cannot attach to IOMMU %s while already "
+			"attached to domain on IOMMU %s\n",
+			dev_name(qcom_domain->iommu->dev),
+			dev_name(qcom_iommu->dev));
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void qcom_iommu_detach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct qcom_iommu_dev *qcom_iommu = __to_iommu(fwspec);
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+	unsigned i;
+
+	if (!qcom_domain->iommu)
+		return;
+
+	pm_runtime_get_sync(qcom_iommu->dev);
+	for (i = 0; i < fwspec->num_ids; i++) {
+		struct qcom_iommu_ctx *ctx = __to_ctx(fwspec, fwspec->ids[i]);
+
+		/* Disable the context bank: */
+		iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);
+	}
+	pm_runtime_put_sync(qcom_iommu->dev);
+
+	qcom_domain->iommu = NULL;
+}
+
+static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
+			  phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	unsigned long flags;
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+	struct io_pgtable_ops *ops = qcom_domain->pgtbl_ops;
+
+	if (!ops)
+		return -ENODEV;
+
+	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
+	ret = ops->map(ops, iova, paddr, size, prot);
+	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
+	return ret;
+}
+
+static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			       size_t size)
+{
+	size_t ret;
+	unsigned long flags;
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+	struct io_pgtable_ops *ops = qcom_domain->pgtbl_ops;
+
+	if (!ops)
+		return 0;
+
+	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
+	ret = ops->unmap(ops, iova, size);
+	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
+	return ret;
+}
+
+static phys_addr_t qcom_iommu_iova_to_phys(struct iommu_domain *domain,
+					   dma_addr_t iova)
+{
+	phys_addr_t ret;
+	unsigned long flags;
+	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
+	struct io_pgtable_ops *ops = qcom_domain->pgtbl_ops;
+
+	if (!ops)
+		return 0;
+
+	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
+	ret = ops->iova_to_phys(ops, iova);
+	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
+
+	return ret;
+}
+
+static bool qcom_iommu_capable(enum iommu_cap cap)
+{
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		/*
+		 * Return true here as the SMMU can always send out coherent
+		 * requests.
+		 */
+		return true;
+	case IOMMU_CAP_NOEXEC:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static int qcom_iommu_add_device(struct device *dev)
+{
+	struct qcom_iommu_dev *qcom_iommu = __to_iommu(dev->iommu_fwspec);
+	struct iommu_group *group;
+	struct device_link *link;
+
+	if (!qcom_iommu)
+		return -ENODEV;
+
+	group = iommu_group_get_for_dev(dev);
+	if (IS_ERR_OR_NULL(group))
+		return PTR_ERR_OR_ZERO(group);
+
+	iommu_group_put(group);
+	iommu_device_link(&qcom_iommu->iommu, dev);
+
+	/*
+	 * Establish the link between iommu and master, so that the
+	 * iommu gets runtime enabled/disabled as per the master's
+	 * needs.
+	 */
+	link = device_link_add(dev, qcom_iommu->dev, DL_FLAG_PM_RUNTIME);
+	if (!link) {
+		dev_warn(qcom_iommu->dev,
+			 "Unable to create device link between %s and %s\n",
+			 dev_name(qcom_iommu->dev), dev_name(dev));
+		/* TODO fatal or ignore? */
+	}
+
+	return 0;
+}
+
+static void qcom_iommu_remove_device(struct device *dev)
+{
+	struct qcom_iommu_dev *qcom_iommu = __to_iommu(dev->iommu_fwspec);
+
+	if (!qcom_iommu)
+		return;
+
+	iommu_group_remove_device(dev);
+	iommu_device_unlink(&qcom_iommu->iommu, dev);
+	iommu_fwspec_free(dev);
+}
+
+static struct iommu_group *qcom_iommu_device_group(struct device *dev)
+{
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct iommu_group *group = NULL;
+	unsigned i;
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		struct qcom_iommu_ctx *ctx = __to_ctx(fwspec, fwspec->ids[i]);
+
+		if (group && ctx->group && group != ctx->group)
+			return ERR_PTR(-EINVAL);
+
+		group = ctx->group;
+	}
+
+	if (group)
+		return iommu_group_ref_get(group);
+
+	group = generic_device_group(dev);
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		struct qcom_iommu_ctx *ctx = __to_ctx(fwspec, fwspec->ids[i]);
+		ctx->group = iommu_group_ref_get(group);
+	}
+
+	return group;
+}
+
+static int qcom_iommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	struct platform_device *iommu_pdev;
+
+	if (args->args_count != 1) {
+		dev_err(dev, "incorrect number of iommu params found for %s "
+			"(found %d, expected 1)\n",
+			args->np->full_name, args->args_count);
+		return -EINVAL;
+	}
+
+	if (!dev->iommu_fwspec->iommu_priv) {
+		iommu_pdev = of_find_device_by_node(args->np);
+		if (WARN_ON(!iommu_pdev))
+			return -EINVAL;
+
+		dev->iommu_fwspec->iommu_priv = platform_get_drvdata(iommu_pdev);
+	}
+
+	return iommu_fwspec_add_ids(dev, &args->args[0], 1);
+}
+
+static const struct iommu_ops qcom_iommu_ops = {
+	.capable	= qcom_iommu_capable,
+	.domain_alloc	= qcom_iommu_domain_alloc,
+	.domain_free	= qcom_iommu_domain_free,
+	.attach_dev	= qcom_iommu_attach_dev,
+	.detach_dev	= qcom_iommu_detach_dev,
+	.map		= qcom_iommu_map,
+	.unmap		= qcom_iommu_unmap,
+	.map_sg		= default_iommu_map_sg,
+	.iova_to_phys	= qcom_iommu_iova_to_phys,
+	.add_device	= qcom_iommu_add_device,
+	.remove_device	= qcom_iommu_remove_device,
+	.device_group	= qcom_iommu_device_group,
+	.of_xlate	= qcom_iommu_of_xlate,
+	.pgsize_bitmap	= SZ_4K | SZ_64K | SZ_1M | SZ_16M,
+};
+
+static int qcom_iommu_enable_clocks(struct qcom_iommu_dev *qcom_iommu)
+{
+	int ret;
+
+	ret = clk_prepare_enable(qcom_iommu->iface_clk);
+	if (ret) {
+		dev_err(qcom_iommu->dev, "Couldn't enable iface_clk\n");
+		return ret;
+	}
+
+	ret = clk_prepare_enable(qcom_iommu->bus_clk);
+	if (ret) {
+		dev_err(qcom_iommu->dev, "Couldn't enable bus_clk\n");
+		clk_disable_unprepare(qcom_iommu->iface_clk);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void qcom_iommu_disable_clocks(struct qcom_iommu_dev *qcom_iommu)
+{
+	clk_disable_unprepare(qcom_iommu->bus_clk);
+	clk_disable_unprepare(qcom_iommu->iface_clk);
+}
+
+static int qcom_iommu_ctx_probe(struct platform_device *pdev)
+{
+	struct qcom_iommu_ctx *ctx;
+	struct device *dev = &pdev->dev;
+	struct qcom_iommu_dev *qcom_iommu = dev_get_drvdata(dev->parent);
+	struct resource *res;
+	int ret;
+	u32 reg;
+
+	ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
+	if (!ctx) {
+		dev_err(dev, "failed to allocate qcom_iommu_context\n");
+		return -ENOMEM;
+	}
+
+	ctx->dev = dev;
+	platform_set_drvdata(pdev, ctx);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	ctx->base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(ctx->base))
+		return PTR_ERR(ctx->base);
+
+	ctx->irq = platform_get_irq(pdev, 0);
+	if (ctx->irq < 0) {
+		dev_err(dev, "failed to get irq\n");
+		return -ENODEV;
+	}
+
+	ret = devm_request_irq(dev, ctx->irq,
+			       qcom_iommu_fault,
+			       IRQF_SHARED,
+			       "qcom-iommu-fault",
+			       ctx);
+	if (ret) {
+		dev_err(dev, "failed to request IRQ %u\n", ctx->irq);
+		return ret;
+	}
+
+	/* read the "reg" property directly to get the relative address
+	 * of the context bank, and calculate the asid from that:
+	 */
+	if (of_property_read_u32_index(dev->of_node, "reg", 0, &reg)) {
+		dev_err(dev, "missing reg property\n");
+		return -ENODEV;
+	}
+
+	ctx->asid = reg / 0x1000;
+
+	dev_info(dev, "found asid %u\n", ctx->asid);
+
+	list_add_tail(&ctx->node, &qcom_iommu->context_list);
+
+	return 0;
+}
+
+static int qcom_iommu_ctx_remove(struct platform_device *pdev)
+{
+	struct qcom_iommu_ctx *ctx = platform_get_drvdata(pdev);
+
+	if (!ctx)
+		return 0;
+
+	iommu_group_put(ctx->group);
+	platform_set_drvdata(pdev, NULL);
+
+	return 0;
+}
+
+static const struct of_device_id ctx_of_match[] = {
+	{ .compatible = "qcom,msm-iommu-v1-ns" },
+	{ .compatible = "qcom,msm-iommu-v1-sec" },
+	{ /* sentinel */ }
+};
+
+static struct platform_driver qcom_iommu_ctx_driver = {
+	.driver	= {
+		.name		= "qcom-iommu-ctx",
+		.of_match_table	= of_match_ptr(ctx_of_match),
+	},
+	.probe	= qcom_iommu_ctx_probe,
+	.remove	= qcom_iommu_ctx_remove,
+};
+module_platform_driver(qcom_iommu_ctx_driver);
+
+static int qcom_iommu_device_probe(struct platform_device *pdev)
+{
+	struct qcom_iommu_dev *qcom_iommu;
+	struct device *dev = &pdev->dev;
+	struct resource *res;
+	int ret;
+
+	qcom_iommu = devm_kzalloc(dev, sizeof(*qcom_iommu), GFP_KERNEL);
+	if (!qcom_iommu) {
+		dev_err(dev, "failed to allocate qcom_iommu_device\n");
+		return -ENOMEM;
+	}
+	qcom_iommu->dev = dev;
+
+	INIT_LIST_HEAD(&qcom_iommu->context_list);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (res) {
+		qcom_iommu->local_base = devm_ioremap_resource(dev, res);
+		if (IS_ERR(qcom_iommu->local_base))
+			return PTR_ERR(qcom_iommu->local_base);
+	}
+
+	qcom_iommu->iface_clk = devm_clk_get(dev, "iface_clk");
+	if (IS_ERR(qcom_iommu->iface_clk)) {
+		dev_err(dev, "failed to get iface_clk\n");
+		return PTR_ERR(qcom_iommu->iface_clk);
+	}
+
+	qcom_iommu->bus_clk = devm_clk_get(dev, "bus_clk");
+	if (IS_ERR(qcom_iommu->bus_clk)) {
+		dev_err(dev, "failed to get bus_clk\n");
+		return PTR_ERR(qcom_iommu->bus_clk);
+	}
bus_clk\n"); + return PTR_ERR(qcom_iommu->bus_clk); + } + + if (of_property_read_u32(dev->of_node, "qcom,iommu-secure-id", + &qcom_iommu->sec_id)) { + dev_err(dev, "missing qcom,iommu-secure-id property\n"); + return -ENODEV; + } + + platform_set_drvdata(pdev, qcom_iommu); + + /* register context bank devices, which are child nodes: */ + ret = of_platform_populate(dev->of_node, ctx_of_match, NULL, dev); + if (ret) { + dev_err(dev, "Failed to populate iommu contexts\n"); + return ret; + } + + ret = iommu_device_sysfs_add(&qcom_iommu->iommu, dev, NULL, + "smmu.%pa", &res->start); + if (ret) { + dev_err(dev, "Failed to register iommu in sysfs\n"); + return ret; + } + + iommu_device_set_ops(&qcom_iommu->iommu, &qcom_iommu_ops); + iommu_device_set_fwnode(&qcom_iommu->iommu, dev->fwnode); + + ret = iommu_device_register(&qcom_iommu->iommu); + if (ret) { + dev_err(dev, "Failed to register iommu\n"); + return ret; + } + + pm_runtime_enable(dev); + bus_set_iommu(&platform_bus_type, &qcom_iommu_ops); + + if (qcom_iommu->local_base) { + pm_runtime_get_sync(dev); + writel_relaxed(0xffffffff, qcom_iommu->local_base + SMMU_INTR_SEL_NS); + pm_runtime_put_sync(dev); + } + + return 0; +} + +static int qcom_iommu_device_remove(struct platform_device *pdev) +{ + pm_runtime_force_suspend(&pdev->dev); + platform_set_drvdata(pdev, NULL); + + return 0; +} + +#ifdef CONFIG_PM +static int qcom_iommu_resume(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct qcom_iommu_dev *qcom_iommu = platform_get_drvdata(pdev); + + return qcom_iommu_enable_clocks(qcom_iommu); +} + +static int qcom_iommu_suspend(struct device *dev) +{ + struct platform_device *pdev = to_platform_device(dev); + struct qcom_iommu_dev *qcom_iommu = platform_get_drvdata(pdev); + + qcom_iommu_disable_clocks(qcom_iommu); + + return 0; +} +#endif + +static const struct dev_pm_ops qcom_iommu_pm_ops = { + SET_RUNTIME_PM_OPS(qcom_iommu_suspend, qcom_iommu_resume, NULL) + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, + pm_runtime_force_resume) +}; + +static const struct of_device_id qcom_iommu_of_match[] = { + { .compatible = "qcom,msm-iommu-v1" }, + { /* sentinel */ } +}; +MODULE_DEVICE_TABLE(of, qcom_iommu_of_match); + +static struct platform_driver qcom_iommu_driver = { + .driver = { + .name = "qcom-iommu", + .of_match_table = of_match_ptr(qcom_iommu_of_match), + .pm = &qcom_iommu_pm_ops, + }, + .probe = qcom_iommu_device_probe, + .remove = qcom_iommu_device_remove, +}; +module_platform_driver(qcom_iommu_driver); + +IOMMU_OF_DECLARE(qcom_iommu_dev, "qcom,msm-iommu-v1", NULL); + +MODULE_DESCRIPTION("IOMMU API for QCOM IOMMU v1 implementations"); +MODULE_LICENSE("GPL v2");