From patchwork Wed Feb 1 12:53:13 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124395
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
    oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
    dbrazdil@google.com, ryan.roberts@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 29/45] iommu/arm-smmu-v3: Move some functions to arm-smmu-v3-common.c
Date: Wed, 1 Feb 2023 12:53:13 +0000
Message-Id: <20230201125328.2186498-30-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Move functions that can be shared between the normal and KVM drivers to
arm-smmu-v3-common.c.

Only straightforward moves here. More subtle factoring will be done in
the next patches.
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm/arm-smmu-v3/Makefile        |   1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |   9 +
 .../arm/arm-smmu-v3/arm-smmu-v3-common.c      | 296 ++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 293 -----------------
 4 files changed, 306 insertions(+), 293 deletions(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c

diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile
index 54feb1ecccad..c4fcc796213c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/Makefile
+++ b/drivers/iommu/arm/arm-smmu-v3/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_ARM_SMMU_V3) += arm_smmu_v3.o
 arm_smmu_v3-objs-y += arm-smmu-v3.o
+arm_smmu_v3-objs-y += arm-smmu-v3-common.o
 arm_smmu_v3-objs-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o
 arm_smmu_v3-objs := $(arm_smmu_v3-objs-y)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 32ce835ab4eb..59e8101d4ff5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -269,6 +269,15 @@ extern struct xarray arm_smmu_asid_xa;
 extern struct mutex arm_smmu_asid_lock;
 extern struct arm_smmu_ctx_desc quiet_cd;
 
+int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+			    unsigned int reg_off, unsigned int ack_off);
+int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr);
+int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+bool arm_smmu_capable(struct device *dev, enum iommu_cap cap);
+struct iommu_group *arm_smmu_device_group(struct device *dev);
+int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args);
+int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu);
+
 int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 			    struct arm_smmu_ctx_desc *cd);
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
new file mode 100644
index 000000000000..5e43329c0826
--- /dev/null
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-common.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+
+#include "arm-smmu-v3.h"
+
+int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+{
+	u32 reg;
+	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+
+	/* IDR0 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+	/* 2-level structures */
+	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	if (reg & IDR0_CD2L)
+		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
+
+	/*
+	 * Translation table endianness.
+	 * We currently require the same endianness as the CPU, but this
+	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
+	 */
+	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
+	case IDR0_TTENDIAN_MIXED:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
+		break;
+#ifdef __BIG_ENDIAN
+	case IDR0_TTENDIAN_BE:
+		smmu->features |= ARM_SMMU_FEAT_TT_BE;
+		break;
+#else
+	case IDR0_TTENDIAN_LE:
+		smmu->features |= ARM_SMMU_FEAT_TT_LE;
+		break;
+#endif
+	default:
+		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
+		return -ENXIO;
+	}
+
+	/* Boolean feature flags */
+	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
+		smmu->features |= ARM_SMMU_FEAT_PRI;
+
+	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
+		smmu->features |= ARM_SMMU_FEAT_ATS;
+
+	if (reg & IDR0_SEV)
+		smmu->features |= ARM_SMMU_FEAT_SEV;
+
+	if (reg & IDR0_MSI) {
+		smmu->features |= ARM_SMMU_FEAT_MSI;
+		if (coherent)
+			smmu->options |= ARM_SMMU_OPT_MSIPOLL;
+	}
+
+	if (reg & IDR0_HYP) {
+		smmu->features |= ARM_SMMU_FEAT_HYP;
+		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+			smmu->features |= ARM_SMMU_FEAT_E2H;
+	}
+
+	/*
+	 * The coherency feature as set by FW is used in preference to the ID
+	 * register, but warn on mismatch.
+	 */
+	if (!!(reg & IDR0_COHACC) != coherent)
+		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
+			 coherent ? "true" : "false");
+
+	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
+	case IDR0_STALL_MODEL_FORCE:
+		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
+		fallthrough;
+	case IDR0_STALL_MODEL_STALL:
+		smmu->features |= ARM_SMMU_FEAT_STALLS;
+	}
+
+	if (reg & IDR0_S1P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
+
+	if (reg & IDR0_S2P)
+		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
+		dev_err(smmu->dev, "no translation support!\n");
+		return -ENXIO;
+	}
+
+	/* We only support the AArch64 table format at present */
+	switch (FIELD_GET(IDR0_TTF, reg)) {
+	case IDR0_TTF_AARCH32_64:
+		smmu->ias = 40;
+		fallthrough;
+	case IDR0_TTF_AARCH64:
+		break;
+	default:
+		dev_err(smmu->dev, "AArch64 table format not supported!\n");
+		return -ENXIO;
+	}
+
+	/* ASID/VMID sizes */
+	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
+	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+	/* IDR1 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
+		dev_err(smmu->dev, "embedded implementation not supported\n");
+		return -ENXIO;
+	}
+
+	/* Queue sizes, capped to ensure natural alignment */
+	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_CMDQS, reg));
+	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
+		/*
+		 * We don't support splitting up batches, so one batch of
+		 * commands plus an extra sync needs to fit inside the command
+		 * queue. There's also no way we can handle the weird alignment
+		 * restrictions on the base pointer for a unit-length queue.
+		 */
+		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
+			CMDQ_BATCH_ENTRIES);
+		return -ENXIO;
+	}
+
+	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_EVTQS, reg));
+	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
+					     FIELD_GET(IDR1_PRIQS, reg));
+
+	/* SID/SSID sizes */
+	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
+	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
+	smmu->iommu.max_pasids = 1UL << smmu->ssid_bits;
+
+	/*
+	 * If the SMMU supports fewer bits than would fill a single L2 stream
+	 * table, use a linear table instead.
+	 */
+	if (smmu->sid_bits <= STRTAB_SPLIT)
+		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+	/* IDR3 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+	if (FIELD_GET(IDR3_RIL, reg))
+		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
+
+	/* IDR5 */
+	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+	/* Maximum number of outstanding stalls */
+	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
+
+	/* Page sizes */
+	if (reg & IDR5_GRAN64K)
+		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+	if (reg & IDR5_GRAN16K)
+		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+	if (reg & IDR5_GRAN4K)
+		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+	/* Input address size */
+	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
+		smmu->features |= ARM_SMMU_FEAT_VAX;
+
+	/* Output address size */
+	switch (FIELD_GET(IDR5_OAS, reg)) {
+	case IDR5_OAS_32_BIT:
+		smmu->oas = 32;
+		break;
+	case IDR5_OAS_36_BIT:
+		smmu->oas = 36;
+		break;
+	case IDR5_OAS_40_BIT:
+		smmu->oas = 40;
+		break;
+	case IDR5_OAS_42_BIT:
+		smmu->oas = 42;
+		break;
+	case IDR5_OAS_44_BIT:
+		smmu->oas = 44;
+		break;
+	case IDR5_OAS_52_BIT:
+		smmu->oas = 52;
+		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
+		break;
+	default:
+		dev_info(smmu->dev,
+			"unknown output address size. Truncating to 48-bit\n");
+		fallthrough;
+	case IDR5_OAS_48_BIT:
+		smmu->oas = 48;
+	}
+
+	/* Set the DMA mask for our table walker */
+	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
+		dev_warn(smmu->dev,
+			 "failed to set DMA mask for table walker\n");
+
+	smmu->ias = max(smmu->ias, smmu->oas);
+
+	if (arm_smmu_sva_supported(smmu))
+		smmu->features |= ARM_SMMU_FEAT_SVA;
+
+	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+		 smmu->ias, smmu->oas, smmu->features);
+	return 0;
+}
+
+int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+			    unsigned int reg_off, unsigned int ack_off)
+{
+	u32 reg;
+
+	writel_relaxed(val, smmu->base + reg_off);
+	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+					  1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+/* GBPA is "special" */
+int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+{
+	int ret;
+	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
+
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	reg &= ~clr;
+	reg |= set;
+	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+					 1, ARM_SMMU_POLL_TIMEOUT_US);
+
+	if (ret)
+		dev_err(smmu->dev, "GBPA not responding to update\n");
+	return ret;
+}
+
+int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+	int ret;
+
+	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+	if (ret)
+		dev_err(smmu->dev, "failed to clear cr0\n");
+
+	return ret;
+}
+
+bool arm_smmu_capable(struct device *dev, enum iommu_cap cap)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		/* Assume that a coherent TCU implies coherent TBUs */
+		return master->smmu->features & ARM_SMMU_FEAT_COHERENCY;
+	case IOMMU_CAP_NOEXEC:
+		return true;
+	default:
+		return false;
+	}
+}
+
+
+struct iommu_group *arm_smmu_device_group(struct device *dev)
+{
+	struct iommu_group *group;
+
+	/*
+	 * We don't support devices sharing stream IDs other than PCI RID
+	 * aliases, since the necessary ID-to-device lookup becomes rather
+	 * impractical given a potential sparse 32-bit stream ID space.
+	 */
+	if (dev_is_pci(dev))
+		group = pci_device_group(dev);
+	else
+		group = generic_device_group(dev);
+
+	return group;
+}
+
+int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bcbd691ca96a..08fd79f66d29 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -17,13 +17,11 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
 #include
 #include
-#include
 #include
 #include
 
@@ -1642,8 +1640,6 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
 	return IRQ_HANDLED;
 }
 
-static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
-
 static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
 {
 	u32 gerror, gerrorn, active;
@@ -1990,21 +1986,6 @@ static const struct iommu_flush_ops arm_smmu_flush_ops = {
 };
 
 /* IOMMU API */
-static bool arm_smmu_capable(struct device *dev, enum iommu_cap cap)
-{
-	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
-
-	switch (cap) {
-	case IOMMU_CAP_CACHE_COHERENCY:
-		/* Assume that a coherent TCU implies coherent TBUs */
-		return master->smmu->features & ARM_SMMU_FEAT_COHERENCY;
-	case IOMMU_CAP_NOEXEC:
-		return true;
-	default:
-		return false;
-	}
-}
-
 static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 {
 	struct arm_smmu_domain *smmu_domain;
@@ -2698,23 +2679,6 @@ static void arm_smmu_release_device(struct device *dev)
 	kfree(master);
 }
 
-static struct iommu_group *arm_smmu_device_group(struct device *dev)
-{
-	struct iommu_group *group;
-
-	/*
-	 * We don't support devices sharing stream IDs other than PCI RID
-	 * aliases, since the necessary ID-to-device lookup becomes rather
-	 * impractical given a potential sparse 32-bit stream ID space.
-	 */
-	if (dev_is_pci(dev))
-		group = pci_device_group(dev);
-	else
-		group = generic_device_group(dev);
-
-	return group;
-}
-
 static int arm_smmu_enable_nesting(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -2730,11 +2694,6 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain)
 	return ret;
 }
 
-static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
-{
-	return iommu_fwspec_add_ids(dev, args->args, 1);
-}
-
 static void arm_smmu_get_resv_regions(struct device *dev,
 				      struct list_head *head)
 {
@@ -3081,38 +3040,6 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
 	return arm_smmu_init_strtab(smmu);
 }
 
-static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
-				   unsigned int reg_off, unsigned int ack_off)
-{
-	u32 reg;
-
-	writel_relaxed(val, smmu->base + reg_off);
-	return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
-					  1, ARM_SMMU_POLL_TIMEOUT_US);
-}
-
-/* GBPA is "special" */
-static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
-{
-	int ret;
-	u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
-
-	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
-					 1, ARM_SMMU_POLL_TIMEOUT_US);
-	if (ret)
-		return ret;
-
-	reg &= ~clr;
-	reg |= set;
-	writel_relaxed(reg | GBPA_UPDATE, gbpa);
-	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
-					 1, ARM_SMMU_POLL_TIMEOUT_US);
-
-	if (ret)
-		dev_err(smmu->dev, "GBPA not responding to update\n");
-	return ret;
-}
-
 static void arm_smmu_free_msis(void *data)
 {
 	struct device *dev = data;
@@ -3258,17 +3185,6 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
 	return 0;
 }
 
-static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
-{
-	int ret;
-
-	ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
-	if (ret)
-		dev_err(smmu->dev, "failed to clear cr0\n");
-
-	return ret;
-}
-
 static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 {
 	int ret;
@@ -3404,215 +3320,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	return 0;
 }
 
-static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
-{
-	u32 reg;
-	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
-
-	/* IDR0 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
-
-	/* 2-level structures */
-	if (FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL)
-		smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
-
-	if (reg & IDR0_CD2L)
-		smmu->features |= ARM_SMMU_FEAT_2_LVL_CDTAB;
-
-	/*
-	 * Translation table endianness.
-	 * We currently require the same endianness as the CPU, but this
-	 * could be changed later by adding a new IO_PGTABLE_QUIRK.
-	 */
-	switch (FIELD_GET(IDR0_TTENDIAN, reg)) {
-	case IDR0_TTENDIAN_MIXED:
-		smmu->features |= ARM_SMMU_FEAT_TT_LE | ARM_SMMU_FEAT_TT_BE;
-		break;
-#ifdef __BIG_ENDIAN
-	case IDR0_TTENDIAN_BE:
-		smmu->features |= ARM_SMMU_FEAT_TT_BE;
-		break;
-#else
-	case IDR0_TTENDIAN_LE:
-		smmu->features |= ARM_SMMU_FEAT_TT_LE;
-		break;
-#endif
-	default:
-		dev_err(smmu->dev, "unknown/unsupported TT endianness!\n");
-		return -ENXIO;
-	}
-
-	/* Boolean feature flags */
-	if (IS_ENABLED(CONFIG_PCI_PRI) && reg & IDR0_PRI)
-		smmu->features |= ARM_SMMU_FEAT_PRI;
-
-	if (IS_ENABLED(CONFIG_PCI_ATS) && reg & IDR0_ATS)
-		smmu->features |= ARM_SMMU_FEAT_ATS;
-
-	if (reg & IDR0_SEV)
-		smmu->features |= ARM_SMMU_FEAT_SEV;
-
-	if (reg & IDR0_MSI) {
-		smmu->features |= ARM_SMMU_FEAT_MSI;
-		if (coherent)
-			smmu->options |= ARM_SMMU_OPT_MSIPOLL;
-	}
-
-	if (reg & IDR0_HYP) {
-		smmu->features |= ARM_SMMU_FEAT_HYP;
-		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
-			smmu->features |= ARM_SMMU_FEAT_E2H;
-	}
-
-	/*
-	 * The coherency feature as set by FW is used in preference to the ID
-	 * register, but warn on mismatch.
-	 */
-	if (!!(reg & IDR0_COHACC) != coherent)
-		dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
-			 coherent ? "true" : "false");
-
-	switch (FIELD_GET(IDR0_STALL_MODEL, reg)) {
-	case IDR0_STALL_MODEL_FORCE:
-		smmu->features |= ARM_SMMU_FEAT_STALL_FORCE;
-		fallthrough;
-	case IDR0_STALL_MODEL_STALL:
-		smmu->features |= ARM_SMMU_FEAT_STALLS;
-	}
-
-	if (reg & IDR0_S1P)
-		smmu->features |= ARM_SMMU_FEAT_TRANS_S1;
-
-	if (reg & IDR0_S2P)
-		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
-
-	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
-		dev_err(smmu->dev, "no translation support!\n");
-		return -ENXIO;
-	}
-
-	/* We only support the AArch64 table format at present */
-	switch (FIELD_GET(IDR0_TTF, reg)) {
-	case IDR0_TTF_AARCH32_64:
-		smmu->ias = 40;
-		fallthrough;
-	case IDR0_TTF_AARCH64:
-		break;
-	default:
-		dev_err(smmu->dev, "AArch64 table format not supported!\n");
-		return -ENXIO;
-	}
-
-	/* ASID/VMID sizes */
-	smmu->asid_bits = reg & IDR0_ASID16 ? 16 : 8;
-	smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
-
-	/* IDR1 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
-	if (reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL)) {
-		dev_err(smmu->dev, "embedded implementation not supported\n");
-		return -ENXIO;
-	}
-
-	/* Queue sizes, capped to ensure natural alignment */
-	smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
-					     FIELD_GET(IDR1_CMDQS, reg));
-	if (smmu->cmdq.q.llq.max_n_shift <= ilog2(CMDQ_BATCH_ENTRIES)) {
-		/*
-		 * We don't support splitting up batches, so one batch of
-		 * commands plus an extra sync needs to fit inside the command
-		 * queue. There's also no way we can handle the weird alignment
-		 * restrictions on the base pointer for a unit-length queue.
-		 */
-		dev_err(smmu->dev, "command queue size <= %d entries not supported\n",
-			CMDQ_BATCH_ENTRIES);
-		return -ENXIO;
-	}
-
-	smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
-					     FIELD_GET(IDR1_EVTQS, reg));
-	smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
-					     FIELD_GET(IDR1_PRIQS, reg));
-
-	/* SID/SSID sizes */
-	smmu->ssid_bits = FIELD_GET(IDR1_SSIDSIZE, reg);
-	smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
-	smmu->iommu.max_pasids = 1UL << smmu->ssid_bits;
-
-	/*
-	 * If the SMMU supports fewer bits than would fill a single L2 stream
-	 * table, use a linear table instead.
-	 */
-	if (smmu->sid_bits <= STRTAB_SPLIT)
-		smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
-
-	/* IDR3 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
-	if (FIELD_GET(IDR3_RIL, reg))
-		smmu->features |= ARM_SMMU_FEAT_RANGE_INV;
-
-	/* IDR5 */
-	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
-
-	/* Maximum number of outstanding stalls */
-	smmu->evtq.max_stalls = FIELD_GET(IDR5_STALL_MAX, reg);
-
-	/* Page sizes */
-	if (reg & IDR5_GRAN64K)
-		smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
-	if (reg & IDR5_GRAN16K)
-		smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
-	if (reg & IDR5_GRAN4K)
-		smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
-
-	/* Input address size */
-	if (FIELD_GET(IDR5_VAX, reg) == IDR5_VAX_52_BIT)
-		smmu->features |= ARM_SMMU_FEAT_VAX;
-
-	/* Output address size */
-	switch (FIELD_GET(IDR5_OAS, reg)) {
-	case IDR5_OAS_32_BIT:
-		smmu->oas = 32;
-		break;
-	case IDR5_OAS_36_BIT:
-		smmu->oas = 36;
-		break;
-	case IDR5_OAS_40_BIT:
-		smmu->oas = 40;
-		break;
-	case IDR5_OAS_42_BIT:
-		smmu->oas = 42;
-		break;
-	case IDR5_OAS_44_BIT:
-		smmu->oas = 44;
-		break;
-	case IDR5_OAS_52_BIT:
-		smmu->oas = 52;
-		smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
-		break;
-	default:
-		dev_info(smmu->dev,
-			"unknown output address size. Truncating to 48-bit\n");
-		fallthrough;
-	case IDR5_OAS_48_BIT:
-		smmu->oas = 48;
-	}
-
-	/* Set the DMA mask for our table walker */
-	if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(smmu->oas)))
-		dev_warn(smmu->dev,
-			 "failed to set DMA mask for table walker\n");
-
-	smmu->ias = max(smmu->ias, smmu->oas);
-
-	if (arm_smmu_sva_supported(smmu))
-		smmu->features |= ARM_SMMU_FEAT_SVA;
-
-	dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
-		 smmu->ias, smmu->oas, smmu->features);
-	return 0;
-}
-
 #ifdef CONFIG_ACPI
 static void acpi_smmu_get_options(u32 model, struct arm_smmu_device *smmu)
 {