From patchwork Tue Sep 1 16:46:31 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11749267
From: Rob Clark
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
    linux-arm-msm@vger.kernel.org, Will Deacon, Robin Murphy
Cc: Rob Clark, open list, Jonathan Marek, Greg Kroah-Hartman, Joerg Roedel,
    Akhil P Oommen, Stephen Boyd, Jordan Crouse, Sibi Sankar, Vivek Gautam,
    Bjorn Andersson, Hanna Hawa, "moderated list:ARM SMMU DRIVERS"
Subject: [PATCH v16 14/20] iommu/arm-smmu: Prepare for the adreno-smmu implementation
Date: Tue, 1 Sep 2020 09:46:31 -0700
Message-Id: <20200901164707.2645413-15-robdclark@gmail.com>
In-Reply-To: <20200901164707.2645413-1-robdclark@gmail.com>
References: <20200901164707.2645413-1-robdclark@gmail.com>

From: Jordan Crouse

Do a bit of prep work to add the upcoming adreno-smmu implementation.

Add a hook to allow the implementation to choose which context banks
to allocate.

Move some of the common structs to arm-smmu.h in anticipation of them
being used by the implementations, and update some of the existing hooks
to pass more information that the implementations will need.

These modifications will be used by the upcoming Adreno SMMU
implementation to identify the GPU device and properly configure it
for pagetable switching.

Co-developed-by: Rob Clark
Signed-off-by: Jordan Crouse
Signed-off-by: Rob Clark
Reviewed-by: Bjorn Andersson
---
 drivers/iommu/arm/arm-smmu/arm-smmu-impl.c |  2 +-
 drivers/iommu/arm/arm-smmu/arm-smmu.c      | 69 ++++++----------------
 drivers/iommu/arm/arm-smmu/arm-smmu.h      | 51 +++++++++++++++-
 3 files changed, 68 insertions(+), 54 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
index a9861dcd0884..88f17cc33023 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
@@ -69,7 +69,7 @@ static int cavium_cfg_probe(struct arm_smmu_device *smmu)
 }
 
 static int cavium_init_context(struct arm_smmu_domain *smmu_domain,
-		struct io_pgtable_cfg *pgtbl_cfg)
+		struct io_pgtable_cfg *pgtbl_cfg, struct device *dev)
 {
 	struct cavium_smmu *cs = container_of(smmu_domain->smmu,
 			struct cavium_smmu, smmu);
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 8e884e58f208..68b7b9e6140e 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -65,41 +65,10 @@ module_param(disable_bypass, bool, S_IRUGO);
 MODULE_PARM_DESC(disable_bypass,
 	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
 
-struct arm_smmu_s2cr {
-	struct iommu_group		*group;
-	int				count;
-	enum arm_smmu_s2cr_type		type;
-	enum arm_smmu_s2cr_privcfg	privcfg;
-	u8				cbndx;
-};
-
 #define s2cr_init_val (struct arm_smmu_s2cr){				\
 	.type = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS,	\
 }
 
-struct arm_smmu_smr {
-	u16				mask;
-	u16				id;
-	bool				valid;
-};
-
-struct arm_smmu_cb {
-	u64				ttbr[2];
-	u32				tcr[2];
-	u32				mair[2];
-	struct arm_smmu_cfg		*cfg;
-};
-
-struct arm_smmu_master_cfg {
-	struct arm_smmu_device		*smmu;
-	s16				smendx[];
-};
-#define INVALID_SMENDX			-1
-#define cfg_smendx(cfg, fw, i) \
-	(i >= fw->num_ids ? INVALID_SMENDX : cfg->smendx[i])
-#define for_each_cfg_sme(cfg, fw, i, idx) \
-	for (i = 0; idx = cfg_smendx(cfg, fw, i), i < fw->num_ids; ++i)
-
 static bool using_legacy_binding, using_generic_binding;
 
 static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu)
@@ -234,19 +203,6 @@ static int arm_smmu_register_legacy_master(struct device *dev,
 }
 #endif /* CONFIG_ARM_SMMU_LEGACY_DT_BINDINGS */
 
-static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
-{
-	int idx;
-
-	do {
-		idx = find_next_zero_bit(map, end, start);
-		if (idx == end)
-			return -ENOSPC;
-	} while (test_and_set_bit(idx, map));
-
-	return idx;
-}
-
 static void __arm_smmu_free_bitmap(unsigned long *map, int idx)
 {
 	clear_bit(idx, map);
@@ -578,7 +534,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
 	}
 }
 
-static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx)
+void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx)
 {
 	u32 reg;
 	bool stage1;
@@ -665,7 +621,8 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx)
 }
 
 static int arm_smmu_init_domain_context(struct iommu_domain *domain,
-					struct arm_smmu_device *smmu)
+					struct arm_smmu_device *smmu,
+					struct device *dev)
 {
 	int irq, start, ret = 0;
 	unsigned long ias, oas;
@@ -780,10 +737,20 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 		ret = -EINVAL;
 		goto out_unlock;
 	}
-	ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
+
+	smmu_domain->smmu = smmu;
+
+	if (smmu->impl && smmu->impl->alloc_context_bank)
+		ret = smmu->impl->alloc_context_bank(smmu_domain, dev,
+				start, smmu->num_context_banks);
+	else
+		ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
 				      smmu->num_context_banks);
-	if (ret < 0)
+
+	if (ret < 0) {
+		smmu_domain->smmu = NULL;
 		goto out_unlock;
+	}
 
 	cfg->cbndx = ret;
 	if (smmu->version < ARM_SMMU_V2) {
@@ -798,8 +765,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 	else
 		cfg->asid = cfg->cbndx;
 
-	smmu_domain->smmu = smmu;
-
 	pgtbl_cfg = (struct io_pgtable_cfg) {
 		.pgsize_bitmap	= smmu->pgsize_bitmap,
 		.ias		= ias,
@@ -810,7 +775,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 	};
 
 	if (smmu->impl && smmu->impl->init_context) {
-		ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg);
+		ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg, dev);
 		if (ret)
 			goto out_clear_smmu;
 	}
@@ -1194,7 +1159,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		return ret;
 
 	/* Ensure that the domain is finalised */
-	ret = arm_smmu_init_domain_context(domain, smmu);
+	ret = arm_smmu_init_domain_context(domain, smmu, dev);
 	if (ret < 0)
 		goto rpm_put;
 
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h
index f3e456893f28..59ff3fc5c6c8 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.h
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h
@@ -256,6 +256,21 @@ enum arm_smmu_implementation {
 	QCOM_SMMUV2,
 };
 
+struct arm_smmu_s2cr {
+	struct iommu_group		*group;
+	int				count;
+	enum arm_smmu_s2cr_type		type;
+	enum arm_smmu_s2cr_privcfg	privcfg;
+	u8				cbndx;
+};
+
+struct arm_smmu_smr {
+	u16				mask;
+	u16				id;
+	bool				valid;
+	bool				pinned;
+};
+
 struct arm_smmu_device {
 	struct device			*dev;
 
@@ -331,6 +346,13 @@ struct arm_smmu_cfg {
 };
 #define ARM_SMMU_INVALID_IRPTNDX	0xff
 
+struct arm_smmu_cb {
+	u64				ttbr[2];
+	u32				tcr[2];
+	u32				mair[2];
+	struct arm_smmu_cfg		*cfg;
+};
+
 enum arm_smmu_domain_stage {
 	ARM_SMMU_DOMAIN_S1 = 0,
 	ARM_SMMU_DOMAIN_S2,
@@ -350,6 +372,11 @@ struct arm_smmu_domain {
 	struct iommu_domain		domain;
 };
 
+struct arm_smmu_master_cfg {
+	struct arm_smmu_device		*smmu;
+	s16				smendx[];
+};
+
 static inline u32 arm_smmu_lpae_tcr(struct io_pgtable_cfg *cfg)
 {
 	u32 tcr = FIELD_PREP(ARM_SMMU_TCR_TG0, cfg->arm_lpae_s1_cfg.tcr.tg) |
@@ -400,14 +427,35 @@ struct arm_smmu_impl {
 	int (*cfg_probe)(struct arm_smmu_device *smmu);
 	int (*reset)(struct arm_smmu_device *smmu);
 	int (*init_context)(struct arm_smmu_domain *smmu_domain,
-			struct io_pgtable_cfg *cfg);
+			struct io_pgtable_cfg *cfg, struct device *dev);
 	void (*tlb_sync)(struct arm_smmu_device *smmu, int page, int sync,
 			 int status);
 	int (*def_domain_type)(struct device *dev);
 	irqreturn_t (*global_fault)(int irq, void *dev);
 	irqreturn_t (*context_fault)(int irq, void *dev);
+	int (*alloc_context_bank)(struct arm_smmu_domain *smmu_domain,
+			struct device *dev, int start, int max);
 };
 
+#define INVALID_SMENDX			-1
+#define cfg_smendx(cfg, fw, i) \
+	(i >= fw->num_ids ? INVALID_SMENDX : cfg->smendx[i])
+#define for_each_cfg_sme(cfg, fw, i, idx) \
+	for (i = 0; idx = cfg_smendx(cfg, fw, i), i < fw->num_ids; ++i)
+
+static inline int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
+{
+	int idx;
+
+	do {
+		idx = find_next_zero_bit(map, end, start);
+		if (idx == end)
+			return -ENOSPC;
+	} while (test_and_set_bit(idx, map));
+
+	return idx;
+}
+
 static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n)
 {
 	return smmu->base + (n << smmu->pgshift);
@@ -472,6 +520,7 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu);
 struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu);
 struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu);
 
+void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx);
 int arm_mmu500_reset(struct arm_smmu_device *smmu);
 
 #endif /* _ARM_SMMU_H */
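
For illustration, here is a minimal sketch of how an implementation might plug
into the new alloc_context_bank() hook and the struct device argument added
above. It is not part of this patch: the example_* names and the "example,gpu"
compatible string are hypothetical, and only the arm_smmu_* types and hooks
come from this series.

/* Illustrative sketch only: a hypothetical implementation using the new hooks. */
#include <linux/bitops.h>
#include <linux/of.h>

#include "arm-smmu.h"

/* Hypothetical check: is this client the device we want to special-case? */
static bool example_client_is_gpu(struct device *dev)
{
	return dev->of_node &&
	       of_device_is_compatible(dev->of_node, "example,gpu");
}

/*
 * Because the core now passes the client device down, the implementation
 * can pick the context bank itself, e.g. pin the GPU to context bank 0.
 * Note that smmu_domain->smmu is already set when this hook runs.
 */
static int example_alloc_context_bank(struct arm_smmu_domain *smmu_domain,
				      struct device *dev, int start, int max)
{
	struct arm_smmu_device *smmu = smmu_domain->smmu;

	if (example_client_is_gpu(dev)) {
		/* Reserve context bank 0 for the GPU if it is still free. */
		if (test_and_set_bit(0, smmu->context_map))
			return -ENOSPC;
		return 0;
	}

	/* Everyone else uses the bitmap allocator now exposed by arm-smmu.h. */
	return __arm_smmu_alloc_bitmap(smmu->context_map, start, max);
}

/* init_context() also sees the client device now. */
static int example_init_context(struct arm_smmu_domain *smmu_domain,
				struct io_pgtable_cfg *pgtbl_cfg,
				struct device *dev)
{
	/* Per-device page-table tweaks could be applied to pgtbl_cfg here. */
	dev_dbg(smmu_domain->smmu->dev, "init context for %s\n", dev_name(dev));
	return 0;
}

/* Would be wired up from a hypothetical example_smmu_impl_init(). */
static const struct arm_smmu_impl example_smmu_impl = {
	.init_context		= example_init_context,
	.alloc_context_bank	= example_alloc_context_bank,
};

The fallback path mirrors what the core does in arm_smmu_init_domain_context():
when no alloc_context_bank() hook is provided, it uses __arm_smmu_alloc_bitmap(),
which is why that helper moves into arm-smmu.h as a static inline in this patch.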