From patchwork Thu Aug 15 18:37:21 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11096265
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 01/17] iommu/arm-smmu: Mask TLBI address correctly
Date: Thu, 15 Aug 2019 19:37:21 +0100
Message-Id: <33c764762c8e5baa961d92648c293dd5590559e7.1565892337.git.robin.murphy@arm.com>
Cc: robdclark@gmail.com,
 joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org,
 vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com,
 linux-arm-kernel@lists.infradead.org

The less said about "~12UL" the better. Oh dear.

We get away with it due to calling constraints that mean IOVAs are
implicitly at least page-aligned to begin with, but still; oh dear.

Signed-off-by: Robin Murphy
---
 drivers/iommu/arm-smmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 64977c131ee6..d60ee292ecee 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -504,7 +504,7 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;

 		if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) {
-			iova &= ~12UL;
+			iova = (iova >> 12) << 12;
 			iova |= cfg->asid;
 			do {
 				writel_relaxed(iova, reg);
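[Editor's note: an illustration of the bug fixed here and in the next patch, not part of the series. ~12UL is the complement of the value 12, so it clears only bits 2 and 3 of the IOVA; the replacement shifts away the low 12 bits, which is what a 4K-granule TLBI address actually needs. A standalone C sketch with a made-up address value:

#include <stdio.h>

int main(void)
{
	unsigned long iova = 0x12345678UL;

	/* Old code: ~12UL == ...fffff3, which only clears bits 2 and 3. */
	unsigned long wrong = iova & ~12UL;

	/* The fix: drop the low 12 bits, keeping a 4K-aligned address. */
	unsigned long right = (iova >> 12) << 12;

	printf("wrong: %#lx\n", wrong);  /* 0x12345670 - low bits survive */
	printf("right: %#lx\n", right);  /* 0x12345000 - page-aligned     */
	return 0;
}

End of editorial note.]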
From patchwork Thu Aug 15 18:37:22 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11096271
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 02/17] iommu/qcom: Mask TLBI addresses correctly
Date: Thu, 15 Aug 2019 19:37:22 +0100
Message-Id: <3353ab6006c49cb8821f298523f38c5f7dc1d00d.1565892337.git.robin.murphy@arm.com>
Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org,
 iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org,
 gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org

As with arm-smmu from whence this code was borrowed, the IOVAs passed
in here happen to be at least page-aligned anyway, but still; oh dear.

Signed-off-by: Robin Murphy
---
 drivers/iommu/qcom_iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
index 34d0b9783b3e..bed948c3058a 100644
--- a/drivers/iommu/qcom_iommu.c
+++ b/drivers/iommu/qcom_iommu.c
@@ -155,7 +155,7 @@ static void qcom_iommu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 		struct qcom_iommu_ctx *ctx = to_ctx(fwspec, fwspec->ids[i]);
 		size_t s = size;

-		iova &= ~12UL;
+		iova = (iova >> 12) << 12;
 		iova |= ctx->asid;
 		do {
 			iommu_writel(ctx, reg, iova);

From patchwork Thu Aug 15 18:37:23 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11096273
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 03/17] iommu/arm-smmu: Convert GR0 registers to bitfields
Date: Thu, 15 Aug 2019 19:37:23 +0100
Message-Id:
Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org,
 iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org,
 gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org

FIELD_PREP remains a terrible name, but the overall simplification will
make further work on this stuff that much more manageable.
This also serves as an audit of the header, wherein we can impose a consistent grouping and ordering of the offset and field definitions Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu-regs.h | 126 ++++++++++++++++------------------ drivers/iommu/arm-smmu.c | 51 +++++++------- 2 files changed, 84 insertions(+), 93 deletions(-) diff --git a/drivers/iommu/arm-smmu-regs.h b/drivers/iommu/arm-smmu-regs.h index 1c278f7ae888..351ab09c7d4f 100644 --- a/drivers/iommu/arm-smmu-regs.h +++ b/drivers/iommu/arm-smmu-regs.h @@ -10,111 +10,101 @@ #ifndef _ARM_SMMU_REGS_H #define _ARM_SMMU_REGS_H +#include + /* Configuration registers */ #define ARM_SMMU_GR0_sCR0 0x0 -#define sCR0_CLIENTPD (1 << 0) -#define sCR0_GFRE (1 << 1) -#define sCR0_GFIE (1 << 2) -#define sCR0_EXIDENABLE (1 << 3) -#define sCR0_GCFGFRE (1 << 4) -#define sCR0_GCFGFIE (1 << 5) -#define sCR0_USFCFG (1 << 10) -#define sCR0_VMIDPNE (1 << 11) -#define sCR0_PTM (1 << 12) -#define sCR0_FB (1 << 13) -#define sCR0_VMID16EN (1 << 31) -#define sCR0_BSU_SHIFT 14 -#define sCR0_BSU_MASK 0x3 +#define sCR0_VMID16EN BIT(31) +#define sCR0_BSU GENMASK(15, 14) +#define sCR0_FB BIT(13) +#define sCR0_PTM BIT(12) +#define sCR0_VMIDPNE BIT(11) +#define sCR0_USFCFG BIT(10) +#define sCR0_GCFGFIE BIT(5) +#define sCR0_GCFGFRE BIT(4) +#define sCR0_EXIDENABLE BIT(3) +#define sCR0_GFIE BIT(2) +#define sCR0_GFRE BIT(1) +#define sCR0_CLIENTPD BIT(0) /* Auxiliary Configuration register */ #define ARM_SMMU_GR0_sACR 0x10 /* Identification registers */ #define ARM_SMMU_GR0_ID0 0x20 +#define ID0_S1TS BIT(30) +#define ID0_S2TS BIT(29) +#define ID0_NTS BIT(28) +#define ID0_SMS BIT(27) +#define ID0_ATOSNS BIT(26) +#define ID0_PTFS_NO_AARCH32 BIT(25) +#define ID0_PTFS_NO_AARCH32S BIT(24) +#define ID0_NUMIRPT GENMASK(23, 16) +#define ID0_CTTW BIT(14) +#define ID0_NUMSIDB GENMASK(12, 9) +#define ID0_EXIDS BIT(8) +#define ID0_NUMSMRG GENMASK(7, 0) + #define ARM_SMMU_GR0_ID1 0x24 +#define ID1_PAGESIZE BIT(31) +#define ID1_NUMPAGENDXB GENMASK(30, 28) +#define ID1_NUMS2CB GENMASK(23, 16) +#define ID1_NUMCB GENMASK(7, 0) + #define ARM_SMMU_GR0_ID2 0x28 +#define ID2_VMID16 BIT(15) +#define ID2_PTFS_64K BIT(14) +#define ID2_PTFS_16K BIT(13) +#define ID2_PTFS_4K BIT(12) +#define ID2_UBS GENMASK(11, 8) +#define ID2_OAS GENMASK(7, 4) +#define ID2_IAS GENMASK(3, 0) + #define ARM_SMMU_GR0_ID3 0x2c #define ARM_SMMU_GR0_ID4 0x30 #define ARM_SMMU_GR0_ID5 0x34 #define ARM_SMMU_GR0_ID6 0x38 + #define ARM_SMMU_GR0_ID7 0x3c +#define ID7_MAJOR GENMASK(7, 4) +#define ID7_MINOR GENMASK(3, 0) + #define ARM_SMMU_GR0_sGFSR 0x48 #define ARM_SMMU_GR0_sGFSYNR0 0x50 #define ARM_SMMU_GR0_sGFSYNR1 0x54 #define ARM_SMMU_GR0_sGFSYNR2 0x58 -#define ID0_S1TS (1 << 30) -#define ID0_S2TS (1 << 29) -#define ID0_NTS (1 << 28) -#define ID0_SMS (1 << 27) -#define ID0_ATOSNS (1 << 26) -#define ID0_PTFS_NO_AARCH32 (1 << 25) -#define ID0_PTFS_NO_AARCH32S (1 << 24) -#define ID0_CTTW (1 << 14) -#define ID0_NUMIRPT_SHIFT 16 -#define ID0_NUMIRPT_MASK 0xff -#define ID0_NUMSIDB_SHIFT 9 -#define ID0_NUMSIDB_MASK 0xf -#define ID0_EXIDS (1 << 8) -#define ID0_NUMSMRG_SHIFT 0 -#define ID0_NUMSMRG_MASK 0xff - -#define ID1_PAGESIZE (1 << 31) -#define ID1_NUMPAGENDXB_SHIFT 28 -#define ID1_NUMPAGENDXB_MASK 7 -#define ID1_NUMS2CB_SHIFT 16 -#define ID1_NUMS2CB_MASK 0xff -#define ID1_NUMCB_SHIFT 0 -#define ID1_NUMCB_MASK 0xff - -#define ID2_OAS_SHIFT 4 -#define ID2_OAS_MASK 0xf -#define ID2_IAS_SHIFT 0 -#define ID2_IAS_MASK 0xf -#define ID2_UBS_SHIFT 8 -#define ID2_UBS_MASK 0xf -#define ID2_PTFS_4K (1 << 12) -#define ID2_PTFS_16K 
(1 << 13) -#define ID2_PTFS_64K (1 << 14) -#define ID2_VMID16 (1 << 15) - -#define ID7_MAJOR_SHIFT 4 -#define ID7_MAJOR_MASK 0xf - /* Global TLB invalidation */ #define ARM_SMMU_GR0_TLBIVMID 0x64 #define ARM_SMMU_GR0_TLBIALLNSNH 0x68 #define ARM_SMMU_GR0_TLBIALLH 0x6c #define ARM_SMMU_GR0_sTLBGSYNC 0x70 + #define ARM_SMMU_GR0_sTLBGSTATUS 0x74 -#define sTLBGSTATUS_GSACTIVE (1 << 0) +#define sTLBGSTATUS_GSACTIVE BIT(0) /* Stream mapping registers */ #define ARM_SMMU_GR0_SMR(n) (0x800 + ((n) << 2)) -#define SMR_VALID (1 << 31) -#define SMR_MASK_SHIFT 16 -#define SMR_ID_SHIFT 0 +#define SMR_VALID BIT(31) +#define SMR_MASK GENMASK(31, 16) +#define SMR_ID GENMASK(15, 0) #define ARM_SMMU_GR0_S2CR(n) (0xc00 + ((n) << 2)) -#define S2CR_CBNDX_SHIFT 0 -#define S2CR_CBNDX_MASK 0xff -#define S2CR_EXIDVALID (1 << 10) -#define S2CR_TYPE_SHIFT 16 -#define S2CR_TYPE_MASK 0x3 -enum arm_smmu_s2cr_type { - S2CR_TYPE_TRANS, - S2CR_TYPE_BYPASS, - S2CR_TYPE_FAULT, -}; - -#define S2CR_PRIVCFG_SHIFT 24 -#define S2CR_PRIVCFG_MASK 0x3 +#define S2CR_PRIVCFG GENMASK(25, 24) enum arm_smmu_s2cr_privcfg { S2CR_PRIVCFG_DEFAULT, S2CR_PRIVCFG_DIPAN, S2CR_PRIVCFG_UNPRIV, S2CR_PRIVCFG_PRIV, }; +#define S2CR_TYPE GENMASK(17, 16) +enum arm_smmu_s2cr_type { + S2CR_TYPE_TRANS, + S2CR_TYPE_BYPASS, + S2CR_TYPE_FAULT, +}; +#define S2CR_EXIDVALID BIT(10) +#define S2CR_CBNDX GENMASK(7, 0) /* Context bank attribute registers */ #define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2)) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index d60ee292ecee..105015798c06 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -1019,7 +1020,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain) static void arm_smmu_write_smr(struct arm_smmu_device *smmu, int idx) { struct arm_smmu_smr *smr = smmu->smrs + idx; - u32 reg = smr->id << SMR_ID_SHIFT | smr->mask << SMR_MASK_SHIFT; + u32 reg = FIELD_PREP(SMR_ID, smr->id) | FIELD_PREP(SMR_MASK, smr->mask); if (!(smmu->features & ARM_SMMU_FEAT_EXIDS) && smr->valid) reg |= SMR_VALID; @@ -1029,9 +1030,9 @@ static void arm_smmu_write_smr(struct arm_smmu_device *smmu, int idx) static void arm_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx) { struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx; - u32 reg = (s2cr->type & S2CR_TYPE_MASK) << S2CR_TYPE_SHIFT | - (s2cr->cbndx & S2CR_CBNDX_MASK) << S2CR_CBNDX_SHIFT | - (s2cr->privcfg & S2CR_PRIVCFG_MASK) << S2CR_PRIVCFG_SHIFT; + u32 reg = FIELD_PREP(S2CR_TYPE, s2cr->type) | + FIELD_PREP(S2CR_CBNDX, s2cr->cbndx) | + FIELD_PREP(S2CR_PRIVCFG, s2cr->privcfg); if (smmu->features & ARM_SMMU_FEAT_EXIDS && smmu->smrs && smmu->smrs[idx].valid) @@ -1063,15 +1064,15 @@ static void arm_smmu_test_smr_masks(struct arm_smmu_device *smmu) * bits are set, so check each one separately. We can reject * masters later if they try to claim IDs outside these masks. 
*/ - smr = smmu->streamid_mask << SMR_ID_SHIFT; + smr = FIELD_PREP(SMR_ID, smmu->streamid_mask); writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0)); smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0)); - smmu->streamid_mask = smr >> SMR_ID_SHIFT; + smmu->streamid_mask = FIELD_GET(SMR_ID, smr); - smr = smmu->streamid_mask << SMR_MASK_SHIFT; + smr = FIELD_PREP(SMR_MASK, smmu->streamid_mask); writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0)); smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0)); - smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT; + smmu->smr_mask_mask = FIELD_GET(SMR_MASK, smr); } static int arm_smmu_find_sme(struct arm_smmu_device *smmu, u16 id, u16 mask) @@ -1140,8 +1141,8 @@ static int arm_smmu_master_alloc_smes(struct device *dev) mutex_lock(&smmu->stream_map_mutex); /* Figure out a viable stream map entry allocation */ for_each_cfg_sme(fwspec, i, idx) { - u16 sid = fwspec->ids[i]; - u16 mask = fwspec->ids[i] >> SMR_MASK_SHIFT; + u16 sid = FIELD_GET(SMR_ID, fwspec->ids[i]); + u16 mask = FIELD_GET(SMR_MASK, fwspec->ids[i]); if (idx != INVALID_SMENDX) { ret = -EEXIST; @@ -1466,8 +1467,8 @@ static int arm_smmu_add_device(struct device *dev) ret = -EINVAL; for (i = 0; i < fwspec->num_ids; i++) { - u16 sid = fwspec->ids[i]; - u16 mask = fwspec->ids[i] >> SMR_MASK_SHIFT; + u16 sid = FIELD_GET(SMR_ID, fwspec->ids[i]); + u16 mask = FIELD_GET(SMR_MASK, fwspec->ids[i]); if (sid & ~smmu->streamid_mask) { dev_err(dev, "stream ID 0x%x out of range for SMMU (0x%x)\n", @@ -1648,12 +1649,12 @@ static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) u32 mask, fwid = 0; if (args->args_count > 0) - fwid |= (u16)args->args[0]; + fwid |= FIELD_PREP(SMR_ID, args->args[0]); if (args->args_count > 1) - fwid |= (u16)args->args[1] << SMR_MASK_SHIFT; + fwid |= FIELD_PREP(SMR_MASK, args->args[1]); else if (!of_property_read_u32(args->np, "stream-match-mask", &mask)) - fwid |= (u16)mask << SMR_MASK_SHIFT; + fwid |= FIELD_PREP(SMR_MASK, mask); return iommu_fwspec_add_ids(dev, &fwid, 1); } @@ -1728,7 +1729,7 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) * bit is only present in MMU-500r2 onwards. */ reg = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID7); - major = (reg >> ID7_MAJOR_SHIFT) & ID7_MAJOR_MASK; + major = FIELD_GET(ID7_MAJOR, reg); reg = readl_relaxed(gr0_base + ARM_SMMU_GR0_sACR); if (major >= 2) reg &= ~ARM_MMU500_ACR_CACHE_LOCK; @@ -1780,7 +1781,7 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) reg &= ~sCR0_FB; /* Don't upgrade barriers */ - reg &= ~(sCR0_BSU_MASK << sCR0_BSU_SHIFT); + reg &= ~(sCR0_BSU); if (smmu->features & ARM_SMMU_FEAT_VMID16) reg |= sCR0_VMID16EN; @@ -1879,12 +1880,12 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) smmu->features |= ARM_SMMU_FEAT_EXIDS; size = 1 << 16; } else { - size = 1 << ((id >> ID0_NUMSIDB_SHIFT) & ID0_NUMSIDB_MASK); + size = 1 << FIELD_GET(ID0_NUMSIDB, id); } smmu->streamid_mask = size - 1; if (id & ID0_SMS) { smmu->features |= ARM_SMMU_FEAT_STREAM_MATCH; - size = (id >> ID0_NUMSMRG_SHIFT) & ID0_NUMSMRG_MASK; + size = FIELD_GET(ID0_NUMSMRG, id); if (size == 0) { dev_err(smmu->dev, "stream-matching supported, but no SMRs present!\n"); @@ -1923,15 +1924,15 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) smmu->pgshift = (id & ID1_PAGESIZE) ? 
16 : 12;

 	/* Check for size mismatch of SMMU address space from mapped region */
-	size = 1 << (((id >> ID1_NUMPAGENDXB_SHIFT) & ID1_NUMPAGENDXB_MASK) + 1);
+	size = 1 << (FIELD_GET(ID1_NUMPAGENDXB, id) + 1);
 	size <<= smmu->pgshift;
 	if (smmu->cb_base != gr0_base + size)
 		dev_warn(smmu->dev,
			"SMMU address space size (0x%lx) differs from mapped region size (0x%tx)!\n",
			size * 2, (smmu->cb_base - gr0_base) * 2);
-	smmu->num_s2_context_banks = (id >> ID1_NUMS2CB_SHIFT) & ID1_NUMS2CB_MASK;
-	smmu->num_context_banks = (id >> ID1_NUMCB_SHIFT) & ID1_NUMCB_MASK;
+	smmu->num_s2_context_banks = FIELD_GET(ID1_NUMS2CB, id);
+	smmu->num_context_banks = FIELD_GET(ID1_NUMCB, id);
 	if (smmu->num_s2_context_banks > smmu->num_context_banks) {
 		dev_err(smmu->dev, "impossible number of S2 context banks!\n");
 		return -ENODEV;
 	}
@@ -1957,11 +1958,11 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)

 	/* ID2 */
 	id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID2);
-	size = arm_smmu_id_size_to_bits((id >> ID2_IAS_SHIFT) & ID2_IAS_MASK);
+	size = arm_smmu_id_size_to_bits(FIELD_GET(ID2_IAS, id));
 	smmu->ipa_size = size;

 	/* The output mask is also applied for bypass */
-	size = arm_smmu_id_size_to_bits((id >> ID2_OAS_SHIFT) & ID2_OAS_MASK);
+	size = arm_smmu_id_size_to_bits(FIELD_GET(ID2_OAS, id));
 	smmu->pa_size = size;

 	if (id & ID2_VMID16)
@@ -1981,7 +1982,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 		if (smmu->version == ARM_SMMU_V1_64K)
 			smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_64K;
 	} else {
-		size = (id >> ID2_UBS_SHIFT) & ID2_UBS_MASK;
+		size = FIELD_GET(ID2_UBS, id);
 		smmu->va_size = arm_smmu_id_size_to_bits(size);
 		if (id & ID2_PTFS_4K)
 			smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_4K;
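[Editor's note: patches 03-05 replace open-coded shift/mask pairs with the GENMASK()/FIELD_PREP()/FIELD_GET() helpers from <linux/bitfield.h>. The standalone program below uses simplified userspace stand-ins for those macros (the real kernel versions add compile-time mask checking) purely to show the encode/decode pattern; the SMR_ID/SMR_MASK layout is taken from the patch, the register values are made up.

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-ins for the <linux/bitfield.h> helpers (demo only). */
#define GENMASK(h, l)		(((~0U) << (l)) & (~0U >> (31 - (h))))
#define FIELD_SHIFT(mask)	(__builtin_ctz(mask))
#define FIELD_PREP(mask, val)	(((uint32_t)(val) << FIELD_SHIFT(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((uint32_t)(reg) & (mask)) >> FIELD_SHIFT(mask))

/* Field layout from the patch: SMR holds ID in [15:0] and MASK in [31:16]. */
#define SMR_MASK	GENMASK(31, 16)
#define SMR_ID		GENMASK(15, 0)

int main(void)
{
	/* Encode both fields into one register value, then pull them back out. */
	uint32_t reg = FIELD_PREP(SMR_ID, 0x42) | FIELD_PREP(SMR_MASK, 0x7f80);

	printf("SMR = %#x, id = %#x, mask = %#x\n",
	       reg, FIELD_GET(SMR_ID, reg), FIELD_GET(SMR_MASK, reg));
	return 0;
}

End of editorial note.]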
From patchwork Thu Aug 15 18:37:24 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11096277
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 04/17] iommu/arm-smmu: Convert GR1 registers to bitfields
Date: Thu, 15 Aug 2019 19:37:24 +0100
Message-Id: <98e0381a9612febbec5f3c5cc3c0b4b6350c5224.1565892337.git.robin.murphy@arm.com>
Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org,
 iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org,
 gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org

As for GR0, use the bitfield helpers to make GR1 usage a little cleaner,
and use it as an opportunity to audit and tidy the definitions. This
tweaks the handling of CBAR types to match what we did for S2CR a while
back, and fixes a couple of names which didn't quite match the latest
architecture spec (IHI0062D.c).
Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu-regs.h | 33 ++++++++++++++------------------- drivers/iommu/arm-smmu.c | 18 +++++++++--------- 2 files changed, 23 insertions(+), 28 deletions(-) diff --git a/drivers/iommu/arm-smmu-regs.h b/drivers/iommu/arm-smmu-regs.h index 351ab09c7d4f..8522330ee624 100644 --- a/drivers/iommu/arm-smmu-regs.h +++ b/drivers/iommu/arm-smmu-regs.h @@ -108,30 +108,25 @@ enum arm_smmu_s2cr_type { /* Context bank attribute registers */ #define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2)) -#define CBAR_VMID_SHIFT 0 -#define CBAR_VMID_MASK 0xff -#define CBAR_S1_BPSHCFG_SHIFT 8 -#define CBAR_S1_BPSHCFG_MASK 3 -#define CBAR_S1_BPSHCFG_NSH 3 -#define CBAR_S1_MEMATTR_SHIFT 12 -#define CBAR_S1_MEMATTR_MASK 0xf +#define CBAR_IRPTNDX GENMASK(31, 24) +#define CBAR_TYPE GENMASK(17, 16) +enum arm_smmu_cbar_type { + CBAR_TYPE_S2_TRANS, + CBAR_TYPE_S1_TRANS_S2_BYPASS, + CBAR_TYPE_S1_TRANS_S2_FAULT, + CBAR_TYPE_S1_TRANS_S2_TRANS, +}; +#define CBAR_S1_MEMATTR GENMASK(15, 12) #define CBAR_S1_MEMATTR_WB 0xf -#define CBAR_TYPE_SHIFT 16 -#define CBAR_TYPE_MASK 0x3 -#define CBAR_TYPE_S2_TRANS (0 << CBAR_TYPE_SHIFT) -#define CBAR_TYPE_S1_TRANS_S2_BYPASS (1 << CBAR_TYPE_SHIFT) -#define CBAR_TYPE_S1_TRANS_S2_FAULT (2 << CBAR_TYPE_SHIFT) -#define CBAR_TYPE_S1_TRANS_S2_TRANS (3 << CBAR_TYPE_SHIFT) -#define CBAR_IRPTNDX_SHIFT 24 -#define CBAR_IRPTNDX_MASK 0xff +#define CBAR_S1_BPSHCFG GENMASK(9, 8) +#define CBAR_S1_BPSHCFG_NSH 3 +#define CBAR_VMID GENMASK(7, 0) #define ARM_SMMU_GR1_CBFRSYNRA(n) (0x400 + ((n) << 2)) #define ARM_SMMU_GR1_CBA2R(n) (0x800 + ((n) << 2)) -#define CBA2R_RW64_32BIT (0 << 0) -#define CBA2R_RW64_64BIT (1 << 0) -#define CBA2R_VMID_SHIFT 16 -#define CBA2R_VMID_MASK 0xffff +#define CBA2R_VMID16 GENMASK(31, 16) +#define CBA2R_VA64 BIT(0) #define ARM_SMMU_CB_SCTLR 0x0 #define ARM_SMMU_CB_ACTLR 0x4 diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 105015798c06..293a95b0d682 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -237,7 +237,7 @@ struct arm_smmu_cfg { u16 asid; u16 vmid; }; - u32 cbar; + enum arm_smmu_cbar_type cbar; enum arm_smmu_context_fmt fmt; }; #define INVALID_IRPTNDX 0xff @@ -692,31 +692,31 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) /* CBA2R */ if (smmu->version > ARM_SMMU_V1) { if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) - reg = CBA2R_RW64_64BIT; + reg = CBA2R_VA64; else - reg = CBA2R_RW64_32BIT; + reg = 0; /* 16-bit VMIDs live in CBA2R */ if (smmu->features & ARM_SMMU_FEAT_VMID16) - reg |= cfg->vmid << CBA2R_VMID_SHIFT; + reg |= FIELD_PREP(CBA2R_VMID16, cfg->vmid); writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBA2R(idx)); } /* CBAR */ - reg = cfg->cbar; + reg = FIELD_PREP(CBAR_TYPE, cfg->cbar); if (smmu->version < ARM_SMMU_V2) - reg |= cfg->irptndx << CBAR_IRPTNDX_SHIFT; + reg |= FIELD_PREP(CBAR_IRPTNDX, cfg->irptndx); /* * Use the weakest shareability/memory types, so they are * overridden by the ttbcr/pte. 
 	 */
 	if (stage1) {
-		reg |= (CBAR_S1_BPSHCFG_NSH << CBAR_S1_BPSHCFG_SHIFT) |
-			(CBAR_S1_MEMATTR_WB << CBAR_S1_MEMATTR_SHIFT);
+		reg |= FIELD_PREP(CBAR_S1_BPSHCFG, CBAR_S1_BPSHCFG_NSH) |
+		       FIELD_PREP(CBAR_S1_MEMATTR, CBAR_S1_MEMATTR_WB);
 	} else if (!(smmu->features & ARM_SMMU_FEAT_VMID16)) {
 		/* 8-bit VMIDs live in CBAR */
-		reg |= cfg->vmid << CBAR_VMID_SHIFT;
+		reg |= FIELD_PREP(CBAR_VMID, cfg->vmid);
 	}

 	writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBAR(idx));

From patchwork Thu Aug 15 18:37:25 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11096279
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 05/17] iommu/arm-smmu: Convert context bank registers to bitfields
Date: Thu, 15 Aug 2019 19:37:25 +0100
Message-Id:
<30616d132784d6fb5a4f2ebefa3e55f004696332.1565892337.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113751_598741_D25B6047 X-CRM114-Status: GOOD ( 15.75 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Finish the final part of the job, once again updating some names to match the current spec. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu-regs.h | 86 ++++++++++++++++++----------------- drivers/iommu/arm-smmu.c | 16 +++---- drivers/iommu/qcom_iommu.c | 13 +++--- 3 files changed, 59 insertions(+), 56 deletions(-) diff --git a/drivers/iommu/arm-smmu-regs.h b/drivers/iommu/arm-smmu-regs.h index 8522330ee624..a8e288192285 100644 --- a/drivers/iommu/arm-smmu-regs.h +++ b/drivers/iommu/arm-smmu-regs.h @@ -129,19 +129,59 @@ enum arm_smmu_cbar_type { #define CBA2R_VA64 BIT(0) #define ARM_SMMU_CB_SCTLR 0x0 +#define SCTLR_S1_ASIDPNE BIT(12) +#define SCTLR_CFCFG BIT(7) +#define SCTLR_CFIE BIT(6) +#define SCTLR_CFRE BIT(5) +#define SCTLR_E BIT(4) +#define SCTLR_AFE BIT(2) +#define SCTLR_TRE BIT(1) +#define SCTLR_M BIT(0) + #define ARM_SMMU_CB_ACTLR 0x4 + #define ARM_SMMU_CB_RESUME 0x8 -#define ARM_SMMU_CB_TTBCR2 0x10 +#define RESUME_TERMINATE BIT(0) + +#define ARM_SMMU_CB_TCR2 0x10 +#define TCR2_SEP GENMASK(17, 15) +#define TCR2_SEP_UPSTREAM 0x7 +#define TCR2_AS BIT(4) + #define ARM_SMMU_CB_TTBR0 0x20 #define ARM_SMMU_CB_TTBR1 0x28 -#define ARM_SMMU_CB_TTBCR 0x30 +#define TTBRn_ASID GENMASK_ULL(63, 48) + +#define ARM_SMMU_CB_TCR 0x30 #define ARM_SMMU_CB_CONTEXTIDR 0x34 #define ARM_SMMU_CB_S1_MAIR0 0x38 #define ARM_SMMU_CB_S1_MAIR1 0x3c + #define ARM_SMMU_CB_PAR 0x50 +#define CB_PAR_F BIT(0) + #define ARM_SMMU_CB_FSR 0x58 +#define FSR_MULTI BIT(31) +#define FSR_SS BIT(30) +#define FSR_UUT BIT(8) +#define FSR_ASF BIT(7) +#define FSR_TLBLKF BIT(6) +#define FSR_TLBMCF BIT(5) +#define FSR_EF BIT(4) +#define FSR_PF BIT(3) +#define FSR_AFF BIT(2) +#define FSR_TF BIT(1) + +#define FSR_IGN (FSR_AFF | FSR_ASF | \ + FSR_TLBMCF | FSR_TLBLKF) +#define FSR_FAULT (FSR_MULTI | FSR_SS | FSR_UUT | \ + FSR_EF | FSR_PF | FSR_TF | FSR_IGN) + #define ARM_SMMU_CB_FAR 0x60 + #define ARM_SMMU_CB_FSYNR0 0x68 +#define FSYNR0_WNR BIT(4) + #define ARM_SMMU_CB_S1_TLBIVA 0x600 #define ARM_SMMU_CB_S1_TLBIASID 0x610 #define ARM_SMMU_CB_S1_TLBIVAL 0x620 @@ -150,46 +190,8 @@ enum arm_smmu_cbar_type { #define ARM_SMMU_CB_TLBSYNC 0x7f0 #define ARM_SMMU_CB_TLBSTATUS 0x7f4 #define ARM_SMMU_CB_ATS1PR 0x800 + #define ARM_SMMU_CB_ATSR 0x8f0 - -#define SCTLR_S1_ASIDPNE (1 << 12) -#define SCTLR_CFCFG (1 << 7) -#define SCTLR_CFIE (1 << 6) -#define SCTLR_CFRE (1 << 5) -#define SCTLR_E (1 << 4) -#define SCTLR_AFE (1 << 2) -#define SCTLR_TRE (1 << 1) -#define SCTLR_M (1 << 0) - -#define CB_PAR_F (1 << 0) - -#define ATSR_ACTIVE (1 << 0) - -#define RESUME_RETRY (0 << 0) -#define RESUME_TERMINATE (1 << 0) - -#define TTBCR2_SEP_SHIFT 15 -#define TTBCR2_SEP_UPSTREAM (0x7 << 
TTBCR2_SEP_SHIFT) -#define TTBCR2_AS (1 << 4) - -#define TTBRn_ASID_SHIFT 48 - -#define FSR_MULTI (1 << 31) -#define FSR_SS (1 << 30) -#define FSR_UUT (1 << 8) -#define FSR_ASF (1 << 7) -#define FSR_TLBLKF (1 << 6) -#define FSR_TLBMCF (1 << 5) -#define FSR_EF (1 << 4) -#define FSR_PF (1 << 3) -#define FSR_AFF (1 << 2) -#define FSR_TF (1 << 1) - -#define FSR_IGN (FSR_AFF | FSR_ASF | \ - FSR_TLBMCF | FSR_TLBLKF) -#define FSR_FAULT (FSR_MULTI | FSR_SS | FSR_UUT | \ - FSR_EF | FSR_PF | FSR_TF | FSR_IGN) - -#define FSYNR0_WNR (1 << 4) +#define ATSR_ACTIVE BIT(0) #endif /* _ARM_SMMU_REGS_H */ diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 293a95b0d682..a877de006d02 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -628,16 +628,16 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, cb->cfg = cfg; - /* TTBCR */ + /* TCR */ if (stage1) { if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) { cb->tcr[0] = pgtbl_cfg->arm_v7s_cfg.tcr; } else { cb->tcr[0] = pgtbl_cfg->arm_lpae_s1_cfg.tcr; cb->tcr[1] = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32; - cb->tcr[1] |= TTBCR2_SEP_UPSTREAM; + cb->tcr[1] |= FIELD_PREP(TCR2_SEP, TCR2_SEP_UPSTREAM); if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) - cb->tcr[1] |= TTBCR2_AS; + cb->tcr[1] |= TCR2_AS; } } else { cb->tcr[0] = pgtbl_cfg->arm_lpae_s2_cfg.vtcr; @@ -650,9 +650,9 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, cb->ttbr[1] = pgtbl_cfg->arm_v7s_cfg.ttbr[1]; } else { cb->ttbr[0] = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0]; - cb->ttbr[0] |= (u64)cfg->asid << TTBRn_ASID_SHIFT; + cb->ttbr[0] |= FIELD_PREP(TTBRn_ASID, cfg->asid); cb->ttbr[1] = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[1]; - cb->ttbr[1] |= (u64)cfg->asid << TTBRn_ASID_SHIFT; + cb->ttbr[1] |= FIELD_PREP(TTBRn_ASID, cfg->asid); } } else { cb->ttbr[0] = pgtbl_cfg->arm_lpae_s2_cfg.vttbr; @@ -721,13 +721,13 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBAR(idx)); /* - * TTBCR + * TCR * We must write this before the TTBRs, since it determines the * access behaviour of some fields (in particular, ASID[15:8]). 
*/ if (stage1 && smmu->version > ARM_SMMU_V1) - writel_relaxed(cb->tcr[1], cb_base + ARM_SMMU_CB_TTBCR2); - writel_relaxed(cb->tcr[0], cb_base + ARM_SMMU_CB_TTBCR); + writel_relaxed(cb->tcr[1], cb_base + ARM_SMMU_CB_TCR2); + writel_relaxed(cb->tcr[0], cb_base + ARM_SMMU_CB_TCR); /* TTBRs */ if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) { diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c index bed948c3058a..60a125dd7300 100644 --- a/drivers/iommu/qcom_iommu.c +++ b/drivers/iommu/qcom_iommu.c @@ -7,6 +7,7 @@ */ #include +#include #include #include #include @@ -247,16 +248,16 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain, /* TTBRs */ iommu_writeq(ctx, ARM_SMMU_CB_TTBR0, pgtbl_cfg.arm_lpae_s1_cfg.ttbr[0] | - ((u64)ctx->asid << TTBRn_ASID_SHIFT)); + FIELD_PREP(TTBRn_ASID, ctx->asid)); iommu_writeq(ctx, ARM_SMMU_CB_TTBR1, pgtbl_cfg.arm_lpae_s1_cfg.ttbr[1] | - ((u64)ctx->asid << TTBRn_ASID_SHIFT)); + FIELD_PREP(TTBRn_ASID, ctx->asid)); - /* TTBCR */ - iommu_writel(ctx, ARM_SMMU_CB_TTBCR2, + /* TCR */ + iommu_writel(ctx, ARM_SMMU_CB_TCR2, (pgtbl_cfg.arm_lpae_s1_cfg.tcr >> 32) | - TTBCR2_SEP_UPSTREAM); - iommu_writel(ctx, ARM_SMMU_CB_TTBCR, + FIELD_PREP(TCR2_SEP, TCR2_SEP_UPSTREAM)); + iommu_writel(ctx, ARM_SMMU_CB_TCR, pgtbl_cfg.arm_lpae_s1_cfg.tcr); /* MAIRs (stage-1 only) */ From patchwork Thu Aug 15 18:37:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096281 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0765214DB for ; Thu, 15 Aug 2019 18:39:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EAB64201CB for ; Thu, 15 Aug 2019 18:39:23 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DB50B28662; Thu, 15 Aug 2019 18:39:23 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 6F32B201CB for ; Thu, 15 Aug 2019 18:39:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=SMtugQG/au9D4a5jXtAmmQpcKVWE0izV5NI62FlVzuY=; b=cTnwqY5QohHWv0 8Ky1cYfK17C+tU0N2BgZQn6NDGjY+n05ODqlSEvmlXP2dbw+77pcegQ1Emj/2bUn6tIYNNfthaEiz 5JIXX06wvygERqF1lzLKLdrihyMUxJfF1d5Ro+wYxkCNjidRyJMyCStzet/AiX6kilPwK+Aq8/FQW II8IBNqptEfkgH6uEe5pesVw2IjO9wgM9rsdTp3lmTeQbNcQj+1ENCLlaAS6SWCj+kCzOg90RIEdp Ax8fxR3B2Sd+3HJijC3UHegPpF4YbMTXsvKTzXdx1ZbASY8lcspU7h7rKiibYtHVRLewGpRAFhNtb SVS45jhBMeKfaPTxVRyg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) 
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 06/17] iommu/arm-smmu: Rework cb_base handling
Date: Thu, 15 Aug 2019 19:37:26 +0100
Message-Id: <8856992662ff638e01fa7227cffb73fcff7bc5bb.1565892337.git.robin.murphy@arm.com>
Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org,
 iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org,
 gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org

To keep register-access quirks manageable, we want to structure things
to avoid needing too many individual overrides. It seems fairly clean
to have a single interface which handles both global and context
registers in terms of the architectural pages, so the first preparatory
step is to rework cb_base into a page number rather than an absolute
address.
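[Editor's note: an aside on the addressing scheme described above, not part of the patch. With cb_base expressed as a page count, translation context bank n lives (numpage + n) pages beyond the SMMU base, which is exactly what the reworked ARM_SMMU_CB() macro in the diff below computes. A standalone sketch with made-up example values (64 KB pages, 64 global pages, base 0x40000000):

#include <stdio.h>
#include <stdint.h>

/* Page-number-based context bank addressing: bank n starts
 * (numpage + n) pages beyond the SMMU register base. */
static uintptr_t arm_smmu_cb_addr(uintptr_t base, unsigned int numpage,
				  unsigned int pgshift, unsigned int n)
{
	return base + ((uintptr_t)(numpage + n) << pgshift);
}

int main(void)
{
	uintptr_t base = 0x40000000;		/* hypothetical MMIO base */
	unsigned int pgshift = 16, numpage = 64;	/* 64 KB pages, 64 global pages */

	/* Context bank 3: 0x40000000 + (64 + 3) * 0x10000 = 0x40430000. */
	printf("CB3 at %#lx\n",
	       (unsigned long)arm_smmu_cb_addr(base, numpage, pgshift, 3));
	return 0;
}

End of editorial note.]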
Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 25 +++++++++++++++---------- 1 file changed, 15 insertions(+), 10 deletions(-) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index a877de006d02..19126230c780 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -95,7 +95,7 @@ #endif /* Translation context bank */ -#define ARM_SMMU_CB(smmu, n) ((smmu)->cb_base + ((n) << (smmu)->pgshift)) +#define ARM_SMMU_CB(smmu, n) ((smmu)->base + (((smmu)->numpage + (n)) << (smmu)->pgshift)) #define MSI_IOVA_BASE 0x8000000 #define MSI_IOVA_LENGTH 0x100000 @@ -168,8 +168,8 @@ struct arm_smmu_device { struct device *dev; void __iomem *base; - void __iomem *cb_base; - unsigned long pgshift; + unsigned int numpage; + unsigned int pgshift; #define ARM_SMMU_FEAT_COHERENT_WALK (1 << 0) #define ARM_SMMU_FEAT_STREAM_MATCH (1 << 1) @@ -1815,7 +1815,7 @@ static int arm_smmu_id_size_to_bits(int size) static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) { - unsigned long size; + unsigned int size; void __iomem *gr0_base = ARM_SMMU_GR0(smmu); u32 id; bool cttw_reg, cttw_fw = smmu->features & ARM_SMMU_FEAT_COHERENT_WALK; @@ -1899,7 +1899,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) return -ENOMEM; dev_notice(smmu->dev, - "\tstream matching with %lu register groups", size); + "\tstream matching with %u register groups", size); } /* s2cr->type == 0 means translation, so initialise explicitly */ smmu->s2crs = devm_kmalloc_array(smmu->dev, size, sizeof(*smmu->s2crs), @@ -1925,11 +1925,12 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) /* Check for size mismatch of SMMU address space from mapped region */ size = 1 << (FIELD_GET(ID1_NUMPAGENDXB, id) + 1); - size <<= smmu->pgshift; - if (smmu->cb_base != gr0_base + size) + if (smmu->numpage != 2 * size << smmu->pgshift) dev_warn(smmu->dev, - "SMMU address space size (0x%lx) differs from mapped region size (0x%tx)!\n", - size * 2, (smmu->cb_base - gr0_base) * 2); + "SMMU address space size (0x%x) differs from mapped region size (0x%x)!\n", + 2 * size << smmu->pgshift, smmu->numpage); + /* Now properly encode NUMPAGE to subsequently derive SMMU_CB_BASE */ + smmu->numpage = size; smmu->num_s2_context_banks = FIELD_GET(ID1_NUMS2CB, id); smmu->num_context_banks = FIELD_GET(ID1_NUMCB, id); @@ -2200,7 +2201,11 @@ static int arm_smmu_device_probe(struct platform_device *pdev) smmu->base = devm_ioremap_resource(dev, res); if (IS_ERR(smmu->base)) return PTR_ERR(smmu->base); - smmu->cb_base = smmu->base + resource_size(res) / 2; + /* + * The resource size should effectively match the value of SMMU_TOP; + * stash that temporarily until we know PAGESIZE to validate it with. 
+	 */
+	smmu->numpage = resource_size(res);

 	num_irqs = 0;
 	while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, num_irqs))) {

From patchwork Thu Aug 15 18:37:27 2019
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 11096283
From: Robin Murphy
To: will@kernel.org
Subject: [PATCH v2 07/17] iommu/arm-smmu: Split arm_smmu_tlb_inv_range_nosync()
Date: Thu, 15 Aug 2019 19:37:27 +0100
Message-Id: <087400cdf15908066aa7813ecbdc57254529ad79.1565892337.git.robin.murphy@arm.com>
X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Since we now use separate iommu_gather_ops for stage 1 and stage 2 contexts, we may as well divide up the monolithic callback into its respective stage 1 and stage 2 parts. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 66 ++++++++++++++++++++++------------------ 1 file changed, 37 insertions(+), 29 deletions(-) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 19126230c780..5b12e96d7878 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -490,46 +490,54 @@ static void arm_smmu_tlb_inv_context_s2(void *cookie) arm_smmu_tlb_sync_global(smmu); } -static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size, - size_t granule, bool leaf, void *cookie) +static void arm_smmu_tlb_inv_range_s1(unsigned long iova, size_t size, + size_t granule, bool leaf, void *cookie) { struct arm_smmu_domain *smmu_domain = cookie; + struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_cfg *cfg = &smmu_domain->cfg; - bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS; - void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx); + void __iomem *reg = ARM_SMMU_CB(smmu, cfg->cbndx); - if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) + if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) wmb(); - if (stage1) { - reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA; + reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA; - if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) { - iova = (iova >> 12) << 12; - iova |= cfg->asid; - do { - writel_relaxed(iova, reg); - iova += granule; - } while (size -= granule); - } else { - iova >>= 12; - iova |= (u64)cfg->asid << 48; - do { - writeq_relaxed(iova, reg); - iova += granule >> 12; - } while (size -= granule); - } - } else { - reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L : - ARM_SMMU_CB_S2_TLBIIPAS2; - iova >>= 12; + if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) { + iova = (iova >> 12) << 12; + iova |= cfg->asid; do { - smmu_write_atomic_lq(iova, reg); + writel_relaxed(iova, reg); + iova += granule; + } while (size -= granule); + } else { + iova >>= 12; + iova |= (u64)cfg->asid << 48; + do { + writeq_relaxed(iova, reg); iova += granule >> 12; } while (size -= granule); } } +static void arm_smmu_tlb_inv_range_s2(unsigned long iova, size_t size, + size_t granule, bool leaf, void *cookie) +{ + struct arm_smmu_domain *smmu_domain = cookie; + struct arm_smmu_device *smmu = smmu_domain->smmu; + void __iomem *reg = ARM_SMMU_CB(smmu, smmu_domain->cfg.cbndx); + + if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) + wmb(); + + reg += leaf ? 
ARM_SMMU_CB_S2_TLBIIPAS2L : ARM_SMMU_CB_S2_TLBIIPAS2; + iova >>= 12; + do { + smmu_write_atomic_lq(iova, reg); + iova += granule >> 12; + } while (size -= granule); +} + /* * On MMU-401 at least, the cost of firing off multiple TLBIVMIDs appears * almost negligible, but the benefit of getting the first one in as far ahead @@ -550,13 +558,13 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size, static const struct iommu_gather_ops arm_smmu_s1_tlb_ops = { .tlb_flush_all = arm_smmu_tlb_inv_context_s1, - .tlb_add_flush = arm_smmu_tlb_inv_range_nosync, + .tlb_add_flush = arm_smmu_tlb_inv_range_s1, .tlb_sync = arm_smmu_tlb_sync_context, }; static const struct iommu_gather_ops arm_smmu_s2_tlb_ops_v2 = { .tlb_flush_all = arm_smmu_tlb_inv_context_s2, - .tlb_add_flush = arm_smmu_tlb_inv_range_nosync, + .tlb_add_flush = arm_smmu_tlb_inv_range_s2, .tlb_sync = arm_smmu_tlb_sync_context, }; From patchwork Thu Aug 15 18:37:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096285 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9044E17E2 for ; Thu, 15 Aug 2019 18:39:52 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 7C58B1FF13 for ; Thu, 15 Aug 2019 18:39:52 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6A91D28662; Thu, 15 Aug 2019 18:39:52 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 0A2921FF13 for ; Thu, 15 Aug 2019 18:39:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=H4LzRz5+G2JQsRBESooM6cAiLbSPGVhlO5XjhAI/e2E=; b=A8jWBlWksIIr7S lNLptV5a+jbaniUcXlUlbe9OuoqMKESxG0BnEW1ZXNxYRv/mWNOyW5d0n6W2xsZ69OQNJeQNncVr8 Vw0d9oLpPT+JK7Y5V8AKMy/YzzK/K592z6peUr9PYsa622Cq2DJMzyLtHORDBBaCeMschRv/WUvYb f5QMmBmLtbwZGHnafR7vOienECTz0l54hRfsyxGgDRaYnYm6BkPn8HSFeA9eN8qwRqI7XR1xJF+qQ k/iQWN1AU/KUqToLesoIBhKU6ATUEdQR8uzU75QqenLUYuEGJ9MAH3xjMaEEeqPA/D38jGhNfti6J OWlb5wxX3GyO/NbM4Ang==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKf9-0006d3-Ei; Thu, 15 Aug 2019 18:39:51 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdG-0004nO-Va for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:37:58 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D18D5360; Thu, 15 Aug 
2019 11:37:54 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7E5073F694; Thu, 15 Aug 2019 11:37:53 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 08/17] iommu/arm-smmu: Get rid of weird "atomic" write Date: Thu, 15 Aug 2019 19:37:28 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113755_144129_14BB5DF7 X-CRM114-Status: GOOD ( 13.95 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP The smmu_write_atomic_lq oddity made some sense when the context format was effectively tied to CONFIG_64BIT, but these days it's simpler to just pick an explicit access size based on the format for the one-and-a-half times we actually care. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 23 +++++++---------------- 1 file changed, 7 insertions(+), 16 deletions(-) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 5b12e96d7878..24b4de1a4185 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -83,17 +83,6 @@ ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) \ ? 0x400 : 0)) -/* - * Some 64-bit registers only make sense to write atomically, but in such - * cases all the data relevant to AArch32 formats lies within the lower word, - * therefore this actually makes more sense than it might first appear. - */ -#ifdef CONFIG_64BIT -#define smmu_write_atomic_lq writeq_relaxed -#else -#define smmu_write_atomic_lq writel_relaxed -#endif - /* Translation context bank */ #define ARM_SMMU_CB(smmu, n) ((smmu)->base + (((smmu)->numpage + (n)) << (smmu)->pgshift)) @@ -533,7 +522,10 @@ static void arm_smmu_tlb_inv_range_s2(unsigned long iova, size_t size, reg += leaf ? 
ARM_SMMU_CB_S2_TLBIIPAS2L : ARM_SMMU_CB_S2_TLBIIPAS2; iova >>= 12; do { - smmu_write_atomic_lq(iova, reg); + if (smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64) + writeq_relaxed(iova, reg); + else + writel_relaxed(iova, reg); iova += granule >> 12; } while (size -= granule); } @@ -1371,11 +1363,10 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain, cb_base = ARM_SMMU_CB(smmu, cfg->cbndx); spin_lock_irqsave(&smmu_domain->cb_lock, flags); - /* ATS1 registers can only be written atomically */ va = iova & ~0xfffUL; - if (smmu->version == ARM_SMMU_V2) - smmu_write_atomic_lq(va, cb_base + ARM_SMMU_CB_ATS1PR); - else /* Register is only 32-bit in v1 */ + if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) + writeq_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR); + else writel_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR); if (readl_poll_timeout_atomic(cb_base + ARM_SMMU_CB_ATSR, tmp, From patchwork Thu Aug 15 18:37:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096287 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0261017E2 for ; Thu, 15 Aug 2019 18:40:15 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E6DFC2003F for ; Thu, 15 Aug 2019 18:40:14 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DB30D286F9; Thu, 15 Aug 2019 18:40:14 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 6B11728662 for ; Thu, 15 Aug 2019 18:40:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=dic2+ldocgsVq749CUBUnvbEc/++q0Sk34ejG4PhNxs=; b=kNF1esRzUIZfhI yLLrqnOwQz2FdHlFGego2Gn3qHMFqvHA58Thz2ZaH4nX5onZBV3Iw0TPHG5LbC6Eqz+Kca2hkJAwm /r+BYehdYGtZe1lArG+1+sahv581xavsGh3Bcx1meIhV2i4XPNIJXlzjCppzxk5kKx2Pygg5ltufx T7xJ97hQaaYOtokiTRol4SSpvJbcTW1OIR4QOHjCyhg73sobT//SHu2PzluMp2DiKG6P4UW36IWSw lPeULwlbS510Z88AYgYRmBUx0pTTjsYLEUlNqq+dtxVyHGOd0wtWwlFCmdWvMcMiodKcm4+9wGiVu 3RNvMFRZqaBlhMvG73rg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKfV-00089W-Hw; Thu, 15 Aug 2019 18:40:13 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdI-0004rN-RL for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:37:59 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6552B344; Thu, 15 Aug 2019 11:37:56 
-0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 11F6C3F694; Thu, 15 Aug 2019 11:37:54 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 09/17] iommu/arm-smmu: Abstract GR1 accesses Date: Thu, 15 Aug 2019 19:37:29 +0100 Message-Id: <00d20b1757c8ab4206deefcc30fdce343383af52.1565892337.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113757_032540_D8EDB902 X-CRM114-Status: GOOD ( 13.72 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Introduce some register access abstractions which we will later use to encapsulate various quirks. GR1 is the easiest page to start with. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 34 +++++++++++++++++++++++++++------- 1 file changed, 27 insertions(+), 7 deletions(-) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 24b4de1a4185..d612dda2889f 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -71,7 +71,6 @@ /* SMMU global address space */ #define ARM_SMMU_GR0(smmu) ((smmu)->base) -#define ARM_SMMU_GR1(smmu) ((smmu)->base + (1 << (smmu)->pgshift)) /* * SMMU global address space with conditional offset to access secure @@ -250,6 +249,29 @@ struct arm_smmu_domain { struct iommu_domain domain; }; +static void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) +{ + return smmu->base + (n << smmu->pgshift); +} + +static u32 arm_smmu_readl(struct arm_smmu_device *smmu, int page, int offset) +{ + return readl_relaxed(arm_smmu_page(smmu, page) + offset); +} + +static void arm_smmu_writel(struct arm_smmu_device *smmu, int page, int offset, + u32 val) +{ + writel_relaxed(val, arm_smmu_page(smmu, page) + offset); +} + +#define ARM_SMMU_GR1 1 + +#define arm_smmu_gr1_read(s, o) \ + arm_smmu_readl((s), ARM_SMMU_GR1, (o)) +#define arm_smmu_gr1_write(s, o, v) \ + arm_smmu_writel((s), ARM_SMMU_GR1, (o), (v)) + struct arm_smmu_option_prop { u32 opt; const char *prop; @@ -574,7 +596,6 @@ static irqreturn_t arm_smmu_context_fault(int irq, void *dev) struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); struct arm_smmu_cfg *cfg = &smmu_domain->cfg; struct arm_smmu_device *smmu = smmu_domain->smmu; - void __iomem *gr1_base = ARM_SMMU_GR1(smmu); void __iomem *cb_base; cb_base = ARM_SMMU_CB(smmu, cfg->cbndx); @@ -585,7 +606,7 @@ static irqreturn_t arm_smmu_context_fault(int irq, void *dev) fsynr = readl_relaxed(cb_base + ARM_SMMU_CB_FSYNR0); iova = readq_relaxed(cb_base + ARM_SMMU_CB_FAR); - cbfrsynra = readl_relaxed(gr1_base + ARM_SMMU_GR1_CBFRSYNRA(cfg->cbndx)); + cbfrsynra = arm_smmu_gr1_read(smmu, ARM_SMMU_GR1_CBFRSYNRA(cfg->cbndx)); dev_err_ratelimited(smmu->dev, "Unhandled context fault: fsr=0x%x, iova=0x%08lx, fsynr=0x%x, cbfrsynra=0x%x, cb=%d\n", @@ -676,7 +697,7 @@ static void 
arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) bool stage1; struct arm_smmu_cb *cb = &smmu->cbs[idx]; struct arm_smmu_cfg *cfg = cb->cfg; - void __iomem *cb_base, *gr1_base; + void __iomem *cb_base; cb_base = ARM_SMMU_CB(smmu, idx); @@ -686,7 +707,6 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) return; } - gr1_base = ARM_SMMU_GR1(smmu); stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS; /* CBA2R */ @@ -699,7 +719,7 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) if (smmu->features & ARM_SMMU_FEAT_VMID16) reg |= FIELD_PREP(CBA2R_VMID16, cfg->vmid); - writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBA2R(idx)); + arm_smmu_gr1_write(smmu, ARM_SMMU_GR1_CBA2R(idx), reg); } /* CBAR */ @@ -718,7 +738,7 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) /* 8-bit VMIDs live in CBAR */ reg |= FIELD_PREP(CBAR_VMID, cfg->vmid); } - writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBAR(idx)); + arm_smmu_gr1_write(smmu, ARM_SMMU_GR1_CBAR(idx), reg); /* * TCR From patchwork Thu Aug 15 18:37:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096289 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 460E514DB for ; Thu, 15 Aug 2019 18:40:29 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3543D286E6 for ; Thu, 15 Aug 2019 18:40:29 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 28F6628701; Thu, 15 Aug 2019 18:40:29 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 4FBF5286E6 for ; Thu, 15 Aug 2019 18:40:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=Hg0P9h0evuuzk+ZIi3WrYVLxpSvcr48T/Ldxm+W5FBQ=; b=usvErvtBO8Aecl 8TSoJmPTo7RzghhBOlQ7Oz06x9F9Q+ddmEewAvJ1z95KrsHjG00OjBRKGqsgJaav7QEnUuoCkQtS6 V3rOK3iuQfIv+zK8SOhdx5Gv/lsTXXzw6uLgPhcphHGCk/iVJF5o9zt9y2fPGezjO/A4a6K6oknqH JeAhz5oJ8nz6TUMUKf8hHFVcT76nLVygZISWh4VlowiFArFHSfDjSJudkmS1ZyGJhl4Oz5hvGlyUi 0jKAL0ac98NnCZ3V8JzX8w/UEdwn1pjCSkvP3j87XW//imZv4PUwHNg7QhFQhWhOw0nfLp1Jtd00l q1r4aYatIVA4c0/IPS0Q==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKfj-0008OA-FL; Thu, 15 Aug 2019 18:40:27 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdK-0004yn-Kl for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:02 +0000 
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EDCCA1570; Thu, 15 Aug 2019 11:37:57 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9AC983F694; Thu, 15 Aug 2019 11:37:56 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 10/17] iommu/arm-smmu: Abstract context bank accesses Date: Thu, 15 Aug 2019 19:37:30 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113759_086994_23EE4D03 X-CRM114-Status: GOOD ( 19.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Context bank accesses are fiddly enough to deserve a number of extra helpers to keep the callsites looking sane, even though there are only one or two of each. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 138 +++++++++++++++++++++------------------ 1 file changed, 73 insertions(+), 65 deletions(-) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index d612dda2889f..e72554f334ee 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -82,9 +82,6 @@ ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) \ ? 
0x400 : 0)) -/* Translation context bank */ -#define ARM_SMMU_CB(smmu, n) ((smmu)->base + (((smmu)->numpage + (n)) << (smmu)->pgshift)) - #define MSI_IOVA_BASE 0x8000000 #define MSI_IOVA_LENGTH 0x100000 @@ -265,13 +262,34 @@ static void arm_smmu_writel(struct arm_smmu_device *smmu, int page, int offset, writel_relaxed(val, arm_smmu_page(smmu, page) + offset); } +static u64 arm_smmu_readq(struct arm_smmu_device *smmu, int page, int offset) +{ + return readq_relaxed(arm_smmu_page(smmu, page) + offset); +} + +static void arm_smmu_writeq(struct arm_smmu_device *smmu, int page, int offset, + u64 val) +{ + writeq_relaxed(val, arm_smmu_page(smmu, page) + offset); +} + #define ARM_SMMU_GR1 1 +#define ARM_SMMU_CB(s, n) ((s)->numpage + (n)) #define arm_smmu_gr1_read(s, o) \ arm_smmu_readl((s), ARM_SMMU_GR1, (o)) #define arm_smmu_gr1_write(s, o, v) \ arm_smmu_writel((s), ARM_SMMU_GR1, (o), (v)) +#define arm_smmu_cb_read(s, n, o) \ + arm_smmu_readl((s), ARM_SMMU_CB((s), (n)), (o)) +#define arm_smmu_cb_write(s, n, o, v) \ + arm_smmu_writel((s), ARM_SMMU_CB((s), (n)), (o), (v)) +#define arm_smmu_cb_readq(s, n, o) \ + arm_smmu_readq((s), ARM_SMMU_CB((s), (n)), (o)) +#define arm_smmu_cb_writeq(s, n, o, v) \ + arm_smmu_writeq((s), ARM_SMMU_CB((s), (n)), (o), (v)) + struct arm_smmu_option_prop { u32 opt; const char *prop; @@ -427,15 +445,17 @@ static void __arm_smmu_free_bitmap(unsigned long *map, int idx) } /* Wait for any pending TLB invalidations to complete */ -static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu, - void __iomem *sync, void __iomem *status) +static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu, int page, + int sync, int status) { unsigned int spin_cnt, delay; + u32 reg; - writel_relaxed(QCOM_DUMMY_VAL, sync); + arm_smmu_writel(smmu, page, sync, QCOM_DUMMY_VAL); for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) { for (spin_cnt = TLB_SPIN_COUNT; spin_cnt > 0; spin_cnt--) { - if (!(readl_relaxed(status) & sTLBGSTATUS_GSACTIVE)) + reg = arm_smmu_readl(smmu, page, status); + if (!(reg & sTLBGSTATUS_GSACTIVE)) return; cpu_relax(); } @@ -447,12 +467,11 @@ static void __arm_smmu_tlb_sync(struct arm_smmu_device *smmu, static void arm_smmu_tlb_sync_global(struct arm_smmu_device *smmu) { - void __iomem *base = ARM_SMMU_GR0(smmu); unsigned long flags; spin_lock_irqsave(&smmu->global_sync_lock, flags); - __arm_smmu_tlb_sync(smmu, base + ARM_SMMU_GR0_sTLBGSYNC, - base + ARM_SMMU_GR0_sTLBGSTATUS); + __arm_smmu_tlb_sync(smmu, 0, ARM_SMMU_GR0_sTLBGSYNC, + ARM_SMMU_GR0_sTLBGSTATUS); spin_unlock_irqrestore(&smmu->global_sync_lock, flags); } @@ -460,12 +479,11 @@ static void arm_smmu_tlb_sync_context(void *cookie) { struct arm_smmu_domain *smmu_domain = cookie; struct arm_smmu_device *smmu = smmu_domain->smmu; - void __iomem *base = ARM_SMMU_CB(smmu, smmu_domain->cfg.cbndx); unsigned long flags; spin_lock_irqsave(&smmu_domain->cb_lock, flags); - __arm_smmu_tlb_sync(smmu, base + ARM_SMMU_CB_TLBSYNC, - base + ARM_SMMU_CB_TLBSTATUS); + __arm_smmu_tlb_sync(smmu, ARM_SMMU_CB(smmu, smmu_domain->cfg.cbndx), + ARM_SMMU_CB_TLBSYNC, ARM_SMMU_CB_TLBSTATUS); spin_unlock_irqrestore(&smmu_domain->cb_lock, flags); } @@ -479,14 +497,13 @@ static void arm_smmu_tlb_sync_vmid(void *cookie) static void arm_smmu_tlb_inv_context_s1(void *cookie) { struct arm_smmu_domain *smmu_domain = cookie; - struct arm_smmu_cfg *cfg = &smmu_domain->cfg; - void __iomem *base = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx); - /* - * NOTE: this is not a relaxed write; it needs to guarantee that PTEs - * cleared by the current 
CPU are visible to the SMMU before the TLBI. + * The TLBI write may be relaxed, so ensure that PTEs cleared by the + * current CPU are visible beforehand. */ - writel(cfg->asid, base + ARM_SMMU_CB_S1_TLBIASID); + wmb(); + arm_smmu_cb_write(smmu_domain->smmu, smmu_domain->cfg.cbndx, + ARM_SMMU_CB_S1_TLBIASID, smmu_domain->cfg.asid); arm_smmu_tlb_sync_context(cookie); } @@ -507,25 +524,25 @@ static void arm_smmu_tlb_inv_range_s1(unsigned long iova, size_t size, struct arm_smmu_domain *smmu_domain = cookie; struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_cfg *cfg = &smmu_domain->cfg; - void __iomem *reg = ARM_SMMU_CB(smmu, cfg->cbndx); + int reg, idx = cfg->cbndx; if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) wmb(); - reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA; + reg = leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA; if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) { iova = (iova >> 12) << 12; iova |= cfg->asid; do { - writel_relaxed(iova, reg); + arm_smmu_cb_write(smmu, idx, reg, iova); iova += granule; } while (size -= granule); } else { iova >>= 12; iova |= (u64)cfg->asid << 48; do { - writeq_relaxed(iova, reg); + arm_smmu_cb_writeq(smmu, idx, reg, iova); iova += granule >> 12; } while (size -= granule); } @@ -536,18 +553,18 @@ static void arm_smmu_tlb_inv_range_s2(unsigned long iova, size_t size, { struct arm_smmu_domain *smmu_domain = cookie; struct arm_smmu_device *smmu = smmu_domain->smmu; - void __iomem *reg = ARM_SMMU_CB(smmu, smmu_domain->cfg.cbndx); + int reg, idx = smmu_domain->cfg.cbndx; if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) wmb(); - reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L : ARM_SMMU_CB_S2_TLBIIPAS2; + reg = leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L : ARM_SMMU_CB_S2_TLBIIPAS2; iova >>= 12; do { if (smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64) - writeq_relaxed(iova, reg); + arm_smmu_cb_writeq(smmu, idx, reg, iova); else - writel_relaxed(iova, reg); + arm_smmu_cb_write(smmu, idx, reg, iova); iova += granule >> 12; } while (size -= granule); } @@ -594,25 +611,22 @@ static irqreturn_t arm_smmu_context_fault(int irq, void *dev) unsigned long iova; struct iommu_domain *domain = dev; struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); - struct arm_smmu_cfg *cfg = &smmu_domain->cfg; struct arm_smmu_device *smmu = smmu_domain->smmu; - void __iomem *cb_base; - - cb_base = ARM_SMMU_CB(smmu, cfg->cbndx); - fsr = readl_relaxed(cb_base + ARM_SMMU_CB_FSR); + int idx = smmu_domain->cfg.cbndx; + fsr = arm_smmu_cb_read(smmu, idx, ARM_SMMU_CB_FSR); if (!(fsr & FSR_FAULT)) return IRQ_NONE; - fsynr = readl_relaxed(cb_base + ARM_SMMU_CB_FSYNR0); - iova = readq_relaxed(cb_base + ARM_SMMU_CB_FAR); - cbfrsynra = arm_smmu_gr1_read(smmu, ARM_SMMU_GR1_CBFRSYNRA(cfg->cbndx)); + fsynr = arm_smmu_cb_read(smmu, idx, ARM_SMMU_CB_FSYNR0); + iova = arm_smmu_cb_readq(smmu, idx, ARM_SMMU_CB_FAR); + cbfrsynra = arm_smmu_gr1_read(smmu, ARM_SMMU_GR1_CBFRSYNRA(idx)); dev_err_ratelimited(smmu->dev, "Unhandled context fault: fsr=0x%x, iova=0x%08lx, fsynr=0x%x, cbfrsynra=0x%x, cb=%d\n", - fsr, iova, fsynr, cbfrsynra, cfg->cbndx); + fsr, iova, fsynr, cbfrsynra, idx); - writel(fsr, cb_base + ARM_SMMU_CB_FSR); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_FSR, fsr); return IRQ_HANDLED; } @@ -697,13 +711,10 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) bool stage1; struct arm_smmu_cb *cb = &smmu->cbs[idx]; struct arm_smmu_cfg *cfg = cb->cfg; - void __iomem *cb_base; - - cb_base = ARM_SMMU_CB(smmu, idx); /* Unassigned context banks only 
need disabling */ if (!cfg) { - writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, 0); return; } @@ -746,24 +757,25 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) * access behaviour of some fields (in particular, ASID[15:8]). */ if (stage1 && smmu->version > ARM_SMMU_V1) - writel_relaxed(cb->tcr[1], cb_base + ARM_SMMU_CB_TCR2); - writel_relaxed(cb->tcr[0], cb_base + ARM_SMMU_CB_TCR); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_TCR2, cb->tcr[1]); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_TCR, cb->tcr[0]); /* TTBRs */ if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) { - writel_relaxed(cfg->asid, cb_base + ARM_SMMU_CB_CONTEXTIDR); - writel_relaxed(cb->ttbr[0], cb_base + ARM_SMMU_CB_TTBR0); - writel_relaxed(cb->ttbr[1], cb_base + ARM_SMMU_CB_TTBR1); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_CONTEXTIDR, cfg->asid); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_TTBR0, cb->ttbr[0]); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_TTBR1, cb->ttbr[1]); } else { - writeq_relaxed(cb->ttbr[0], cb_base + ARM_SMMU_CB_TTBR0); + arm_smmu_cb_writeq(smmu, idx, ARM_SMMU_CB_TTBR0, cb->ttbr[0]); if (stage1) - writeq_relaxed(cb->ttbr[1], cb_base + ARM_SMMU_CB_TTBR1); + arm_smmu_cb_writeq(smmu, idx, ARM_SMMU_CB_TTBR1, + cb->ttbr[1]); } /* MAIRs (stage-1 only) */ if (stage1) { - writel_relaxed(cb->mair[0], cb_base + ARM_SMMU_CB_S1_MAIR0); - writel_relaxed(cb->mair[1], cb_base + ARM_SMMU_CB_S1_MAIR1); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_S1_MAIR0, cb->mair[0]); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_S1_MAIR1, cb->mair[1]); } /* SCTLR */ @@ -773,7 +785,7 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) reg |= SCTLR_E; - writel_relaxed(reg, cb_base + ARM_SMMU_CB_SCTLR); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, reg); } static int arm_smmu_init_domain_context(struct iommu_domain *domain, @@ -1370,27 +1382,25 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain, struct arm_smmu_cfg *cfg = &smmu_domain->cfg; struct io_pgtable_ops *ops= smmu_domain->pgtbl_ops; struct device *dev = smmu->dev; - void __iomem *cb_base; + void __iomem *reg; u32 tmp; u64 phys; unsigned long va, flags; - int ret; + int ret, idx = cfg->cbndx; ret = arm_smmu_rpm_get(smmu); if (ret < 0) return 0; - cb_base = ARM_SMMU_CB(smmu, cfg->cbndx); - spin_lock_irqsave(&smmu_domain->cb_lock, flags); va = iova & ~0xfffUL; if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) - writeq_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR); + arm_smmu_cb_writeq(smmu, idx, ARM_SMMU_CB_ATS1PR, va); else - writel_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_ATS1PR, va); - if (readl_poll_timeout_atomic(cb_base + ARM_SMMU_CB_ATSR, tmp, - !(tmp & ATSR_ACTIVE), 5, 50)) { + reg = arm_smmu_page(smmu, ARM_SMMU_CB(smmu, idx)) + ARM_SMMU_CB_ATSR; + if (readl_poll_timeout_atomic(reg, tmp, !(tmp & ATSR_ACTIVE), 5, 50)) { spin_unlock_irqrestore(&smmu_domain->cb_lock, flags); dev_err(dev, "iova to phys timed out on %pad. 
Falling back to software table walk.\n", @@ -1398,7 +1408,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain, return ops->iova_to_phys(ops, iova); } - phys = readq_relaxed(cb_base + ARM_SMMU_CB_PAR); + phys = arm_smmu_cb_readq(smmu, idx, ARM_SMMU_CB_PAR); spin_unlock_irqrestore(&smmu_domain->cb_lock, flags); if (phys & CB_PAR_F) { dev_err(dev, "translation fault!\n"); @@ -1762,18 +1772,16 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) /* Make sure all context banks are disabled and clear CB_FSR */ for (i = 0; i < smmu->num_context_banks; ++i) { - void __iomem *cb_base = ARM_SMMU_CB(smmu, i); - arm_smmu_write_context_bank(smmu, i); - writel_relaxed(FSR_FAULT, cb_base + ARM_SMMU_CB_FSR); + arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_FSR, FSR_FAULT); /* * Disable MMU-500's not-particularly-beneficial next-page * prefetcher for the sake of errata #841119 and #826419. */ if (smmu->model == ARM_MMU500) { - reg = readl_relaxed(cb_base + ARM_SMMU_CB_ACTLR); + reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR); reg &= ~ARM_MMU500_ACTLR_CPRE; - writel_relaxed(reg, cb_base + ARM_SMMU_CB_ACTLR); + arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg); } } From patchwork Thu Aug 15 18:37:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096291 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 50E4C14DB for ; Thu, 15 Aug 2019 18:40:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 41347286A1 for ; Thu, 15 Aug 2019 18:40:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 357AE2870B; Thu, 15 Aug 2019 18:40:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 5C28F286A1 for ; Thu, 15 Aug 2019 18:40:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=nQkXL2QkTDvKGVXu8oAtmyceh584hfo4BjYlaaIEfQI=; b=AAYreCCWkrRwEs 79+iLRCcE5B5qJAGk9q9WS+ZZPuq1uRR8HEmRl6b8juB9ih5l6CDisOhAfEdHzihEGuBMvtRLlZi3 6I9PzWFELXp69yiP06qJsTDBh3poEeWdH68/kPZR0goVTHpZSoOpx34Eq79ShdNDmDg3MOGZwUu5H c+bH4vY59cQQZ4Y697GHBhv+jYuc+HkcDdBtwiKLwOPNohZrMKGGg3+g7FUERWcoRnNfqAt9YZ3hL itTPQel/TLjTxcoh6mqLOqEQuxwlZfwRFuK/fGSQG8oacdTNoOuv96znOsDlXWeCfSAwBB+Y6O+yR S1fTydU5WxyG9FH5bA7g==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKfv-0000BP-I4; Thu, 15 Aug 2019 18:40:39 +0000 Received: from foss.arm.com ([217.140.110.172]) by 
bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdL-0004nO-LT for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:03 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 820A7360; Thu, 15 Aug 2019 11:37:59 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 2EE0A3F694; Thu, 15 Aug 2019 11:37:58 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 11/17] iommu/arm-smmu: Abstract GR0 accesses Date: Thu, 15 Aug 2019 19:37:31 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113759_990348_0B4991A3 X-CRM114-Status: GOOD ( 19.86 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Clean up the remaining accesses to GR0 registers, so that everything is now neatly abstracted. This folds up the Non-Secure alias quirk as the first step towards moving it out of the way entirely. Although GR0 does technically contain some 64-bit registers (sGFAR and the weird SMMUv2 HYPC and MONC stuff), they're not ones we have any need to access. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 106 +++++++++++++++++++++------------------ 1 file changed, 58 insertions(+), 48 deletions(-) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index e72554f334ee..e9fd9117109e 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -69,19 +69,6 @@ /* Maximum number of context banks per SMMU */ #define ARM_SMMU_MAX_CBS 128 -/* SMMU global address space */ -#define ARM_SMMU_GR0(smmu) ((smmu)->base) - -/* - * SMMU global address space with conditional offset to access secure - * aliases of non-secure registers (e.g. nsCR0: 0x400, nsGFSR: 0x448, - * nsGFSYNR0: 0x450) - */ -#define ARM_SMMU_GR0_NS(smmu) \ - ((smmu)->base + \ - ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) \ - ? 
0x400 : 0)) - #define MSI_IOVA_BASE 0x8000000 #define MSI_IOVA_LENGTH 0x100000 @@ -246,6 +233,21 @@ struct arm_smmu_domain { struct iommu_domain domain; }; +static int arm_smmu_gr0_ns(int offset) +{ + switch(offset) { + case ARM_SMMU_GR0_sCR0: + case ARM_SMMU_GR0_sACR: + case ARM_SMMU_GR0_sGFSR: + case ARM_SMMU_GR0_sGFSYNR0: + case ARM_SMMU_GR0_sGFSYNR1: + case ARM_SMMU_GR0_sGFSYNR2: + return offset + 0x400; + default: + return offset; + } +} + static void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) { return smmu->base + (n << smmu->pgshift); @@ -253,12 +255,18 @@ static void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) static u32 arm_smmu_readl(struct arm_smmu_device *smmu, int page, int offset) { + if ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) && page == 0) + offset = arm_smmu_gr0_ns(offset); + return readl_relaxed(arm_smmu_page(smmu, page) + offset); } static void arm_smmu_writel(struct arm_smmu_device *smmu, int page, int offset, u32 val) { + if ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) && page == 0) + offset = arm_smmu_gr0_ns(offset); + writel_relaxed(val, arm_smmu_page(smmu, page) + offset); } @@ -273,9 +281,15 @@ static void arm_smmu_writeq(struct arm_smmu_device *smmu, int page, int offset, writeq_relaxed(val, arm_smmu_page(smmu, page) + offset); } +#define ARM_SMMU_GR0 0 #define ARM_SMMU_GR1 1 #define ARM_SMMU_CB(s, n) ((s)->numpage + (n)) +#define arm_smmu_gr0_read(s, o) \ + arm_smmu_readl((s), ARM_SMMU_GR0, (o)) +#define arm_smmu_gr0_write(s, o, v) \ + arm_smmu_writel((s), ARM_SMMU_GR0, (o), (v)) + #define arm_smmu_gr1_read(s, o) \ arm_smmu_readl((s), ARM_SMMU_GR1, (o)) #define arm_smmu_gr1_write(s, o, v) \ @@ -470,7 +484,7 @@ static void arm_smmu_tlb_sync_global(struct arm_smmu_device *smmu) unsigned long flags; spin_lock_irqsave(&smmu->global_sync_lock, flags); - __arm_smmu_tlb_sync(smmu, 0, ARM_SMMU_GR0_sTLBGSYNC, + __arm_smmu_tlb_sync(smmu, ARM_SMMU_GR0, ARM_SMMU_GR0_sTLBGSYNC, ARM_SMMU_GR0_sTLBGSTATUS); spin_unlock_irqrestore(&smmu->global_sync_lock, flags); } @@ -511,10 +525,10 @@ static void arm_smmu_tlb_inv_context_s2(void *cookie) { struct arm_smmu_domain *smmu_domain = cookie; struct arm_smmu_device *smmu = smmu_domain->smmu; - void __iomem *base = ARM_SMMU_GR0(smmu); - /* NOTE: see above */ - writel(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID); + /* See above */ + wmb(); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_TLBIVMID, smmu_domain->cfg.vmid); arm_smmu_tlb_sync_global(smmu); } @@ -579,12 +593,12 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size, size_t granule, bool leaf, void *cookie) { struct arm_smmu_domain *smmu_domain = cookie; - void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu); + struct arm_smmu_device *smmu = smmu_domain->smmu; - if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) + if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK) wmb(); - writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_TLBIVMID, smmu_domain->cfg.vmid); } static const struct iommu_gather_ops arm_smmu_s1_tlb_ops = { @@ -634,12 +648,11 @@ static irqreturn_t arm_smmu_global_fault(int irq, void *dev) { u32 gfsr, gfsynr0, gfsynr1, gfsynr2; struct arm_smmu_device *smmu = dev; - void __iomem *gr0_base = ARM_SMMU_GR0_NS(smmu); - gfsr = readl_relaxed(gr0_base + ARM_SMMU_GR0_sGFSR); - gfsynr0 = readl_relaxed(gr0_base + ARM_SMMU_GR0_sGFSYNR0); - gfsynr1 = readl_relaxed(gr0_base + ARM_SMMU_GR0_sGFSYNR1); - gfsynr2 = readl_relaxed(gr0_base + 
ARM_SMMU_GR0_sGFSYNR2); + gfsr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sGFSR); + gfsynr0 = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sGFSYNR0); + gfsynr1 = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sGFSYNR1); + gfsynr2 = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sGFSYNR2); if (!gfsr) return IRQ_NONE; @@ -650,7 +663,7 @@ static irqreturn_t arm_smmu_global_fault(int irq, void *dev) "\tGFSR 0x%08x, GFSYNR0 0x%08x, GFSYNR1 0x%08x, GFSYNR2 0x%08x\n", gfsr, gfsynr0, gfsynr1, gfsynr2); - writel(gfsr, gr0_base + ARM_SMMU_GR0_sGFSR); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sGFSR, gfsr); return IRQ_HANDLED; } @@ -1056,7 +1069,7 @@ static void arm_smmu_write_smr(struct arm_smmu_device *smmu, int idx) if (!(smmu->features & ARM_SMMU_FEAT_EXIDS) && smr->valid) reg |= SMR_VALID; - writel_relaxed(reg, ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_SMR(idx)); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_SMR(idx), reg); } static void arm_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx) @@ -1069,7 +1082,7 @@ static void arm_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx) if (smmu->features & ARM_SMMU_FEAT_EXIDS && smmu->smrs && smmu->smrs[idx].valid) reg |= S2CR_EXIDVALID; - writel_relaxed(reg, ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_S2CR(idx)); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_S2CR(idx), reg); } static void arm_smmu_write_sme(struct arm_smmu_device *smmu, int idx) @@ -1085,7 +1098,6 @@ static void arm_smmu_write_sme(struct arm_smmu_device *smmu, int idx) */ static void arm_smmu_test_smr_masks(struct arm_smmu_device *smmu) { - void __iomem *gr0_base = ARM_SMMU_GR0(smmu); u32 smr; if (!smmu->smrs) @@ -1097,13 +1109,13 @@ static void arm_smmu_test_smr_masks(struct arm_smmu_device *smmu) * masters later if they try to claim IDs outside these masks. */ smr = FIELD_PREP(SMR_ID, smmu->streamid_mask); - writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0)); - smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0)); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_SMR(0), smr); + smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(0)); smmu->streamid_mask = FIELD_GET(SMR_ID, smr); smr = FIELD_PREP(SMR_MASK, smmu->streamid_mask); - writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0)); - smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0)); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_SMR(0), smr); + smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(0)); smmu->smr_mask_mask = FIELD_GET(SMR_MASK, smr); } @@ -1736,13 +1748,12 @@ static struct iommu_ops arm_smmu_ops = { static void arm_smmu_device_reset(struct arm_smmu_device *smmu) { - void __iomem *gr0_base = ARM_SMMU_GR0(smmu); int i; u32 reg, major; /* clear global FSR */ - reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR); - writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR); + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sGFSR); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sGFSR, reg); /* * Reset stream mapping groups: Initial values mark all SMRn as @@ -1757,9 +1768,9 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) * clear CACHE_LOCK bit of ACR first. And, CACHE_LOCK * bit is only present in MMU-500r2 onwards. */ - reg = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID7); + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID7); major = FIELD_GET(ID7_MAJOR, reg); - reg = readl_relaxed(gr0_base + ARM_SMMU_GR0_sACR); + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sACR); if (major >= 2) reg &= ~ARM_MMU500_ACR_CACHE_LOCK; /* @@ -1767,7 +1778,7 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) * TLB entries for reduced latency. 
*/ reg |= ARM_MMU500_ACR_SMTNMB_TLBEN | ARM_MMU500_ACR_S2CRB_TLBEN; - writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_sACR); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sACR, reg); } /* Make sure all context banks are disabled and clear CB_FSR */ @@ -1786,10 +1797,10 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) } /* Invalidate the TLB, just in case */ - writel_relaxed(QCOM_DUMMY_VAL, gr0_base + ARM_SMMU_GR0_TLBIALLH); - writel_relaxed(QCOM_DUMMY_VAL, gr0_base + ARM_SMMU_GR0_TLBIALLNSNH); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_TLBIALLH, QCOM_DUMMY_VAL); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_TLBIALLNSNH, QCOM_DUMMY_VAL); - reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0); + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sCR0); /* Enable fault reporting */ reg |= (sCR0_GFRE | sCR0_GFIE | sCR0_GCFGFRE | sCR0_GCFGFIE); @@ -1818,7 +1829,7 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) /* Push the button */ arm_smmu_tlb_sync_global(smmu); - writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg); } static int arm_smmu_id_size_to_bits(int size) @@ -1843,7 +1854,6 @@ static int arm_smmu_id_size_to_bits(int size) static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) { unsigned int size; - void __iomem *gr0_base = ARM_SMMU_GR0(smmu); u32 id; bool cttw_reg, cttw_fw = smmu->features & ARM_SMMU_FEAT_COHERENT_WALK; int i; @@ -1853,7 +1863,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) smmu->version == ARM_SMMU_V2 ? 2 : 1); /* ID0 */ - id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID0); + id = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID0); /* Restrict available stages based on module parameter */ if (force_stage == 1) @@ -1947,7 +1957,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) } /* ID1 */ - id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID1); + id = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID1); smmu->pgshift = (id & ID1_PAGESIZE) ? 
16 : 12; /* Check for size mismatch of SMMU address space from mapped region */ @@ -1985,7 +1995,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) return -ENOMEM; /* ID2 */ - id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID2); + id = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID2); size = arm_smmu_id_size_to_bits(FIELD_GET(ID2_IAS, id)); smmu->ipa_size = size; @@ -2372,7 +2382,7 @@ static void arm_smmu_device_shutdown(struct platform_device *pdev) arm_smmu_rpm_get(smmu); /* Turn the thing off */ - writel(sCR0_CLIENTPD, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0); + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, sCR0_CLIENTPD); arm_smmu_rpm_put(smmu); if (pm_runtime_enabled(smmu->dev)) From patchwork Thu Aug 15 18:37:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096295 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B6B017E2 for ; Thu, 15 Aug 2019 18:40:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 0D18028726 for ; Thu, 15 Aug 2019 18:40:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 01D0F28775; Thu, 15 Aug 2019 18:40:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 068B628726 for ; Thu, 15 Aug 2019 18:40:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=oAL4AXlvxcMzgIwCpuTiS5NXNLMWrANRKl4s0H+CNPY=; b=NvW4hTDaeCFxE6 JQpxI2VTE3jQnkZaRznQ2t8jWUFNO6Fyuu3/mLetVX+jXR8Q++WnIsYoS/7fGBjvKf7VJNZJwyOEm rk5yr6maPcNLUenxsNGcUzfPxE9IMQQZgeKxB71PPvCoFX1ym5wM/oRrwoHU/v16QhV1U3ORSWkdl G6/UldF4yuOSksJfzLH3Ng0JltCvhBBTtFtv8xO7VR9Fp6GJThzIbUChKdjD3lpV6C9Z6xlH3kMKh YAGFkD8t0HXgu8Gegd6SFDs+bGshBV0vwTUxVdQCC3IlqCo/OR2BRpUSTY7fmvDiK7K4ET8DmIHvt LrZntgd+8lHbigSqyGRA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKgB-0000QH-GH; Thu, 15 Aug 2019 18:40:55 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdN-00052E-KM for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:04 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 15195344; Thu, 15 Aug 2019 11:38:01 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B60453F694; Thu, 15 Aug 2019 
11:37:59 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 12/17] iommu/arm-smmu: Rename arm-smmu-regs.h Date: Thu, 15 Aug 2019 19:37:32 +0100 Message-Id: <75a8504028a60b3890997af2bf25032158e0d7a7.1565892337.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113801_851756_04EEE79B X-CRM114-Status: GOOD ( 13.65 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP We're about to start using it for more than just register definitions, so generalise the name. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu.c | 2 +- drivers/iommu/{arm-smmu-regs.h => arm-smmu.h} | 6 +++--- drivers/iommu/qcom_iommu.c | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) rename drivers/iommu/{arm-smmu-regs.h => arm-smmu.h} (98%) diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index e9fd9117109e..f3b8301a3059 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -46,7 +46,7 @@ #include #include -#include "arm-smmu-regs.h" +#include "arm-smmu.h" /* * Apparently, some Qualcomm arm64 platforms which appear to expose their SMMU diff --git a/drivers/iommu/arm-smmu-regs.h b/drivers/iommu/arm-smmu.h similarity index 98% rename from drivers/iommu/arm-smmu-regs.h rename to drivers/iommu/arm-smmu.h index a8e288192285..ccc3097a4247 100644 --- a/drivers/iommu/arm-smmu-regs.h +++ b/drivers/iommu/arm-smmu.h @@ -7,8 +7,8 @@ * Author: Will Deacon */ -#ifndef _ARM_SMMU_REGS_H -#define _ARM_SMMU_REGS_H +#ifndef _ARM_SMMU_H +#define _ARM_SMMU_H #include @@ -194,4 +194,4 @@ enum arm_smmu_cbar_type { #define ARM_SMMU_CB_ATSR 0x8f0 #define ATSR_ACTIVE BIT(0) -#endif /* _ARM_SMMU_REGS_H */ +#endif /* _ARM_SMMU_H */ diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c index 60a125dd7300..a2062d13584f 100644 --- a/drivers/iommu/qcom_iommu.c +++ b/drivers/iommu/qcom_iommu.c @@ -33,7 +33,7 @@ #include #include -#include "arm-smmu-regs.h" +#include "arm-smmu.h" #define SMMU_INTR_SEL_NS 0x2000 From patchwork Thu Aug 15 18:37:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096297 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6165D14DB for ; Thu, 15 Aug 2019 18:41:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4F139286E6 for ; Thu, 15 Aug 2019 18:41:09 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4307F28833; Thu, 15 Aug 2019 18:41:09 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, 
DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 8D9C0286E6 for ; Thu, 15 Aug 2019 18:41:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=pSdtP3kHXXRNnnLsAG5HAnTN7UvBbSMiTNHCMFzximM=; b=BKfkDXhDgwAtSX Vc+8KXLcqK8rYtw8V+F1bBaMZO0T+HT0ZMhPU3RKpR5rgbuKDwyuJgNOW/Xqfs6KMVOAqvuTs/3VR YkTygqpF0y6ZZwZE30oBss6UDEvHBSR3JyDXfZCWw4puAwi2qMTvwu+ENRHObN25HEtHbRfZ+e1JH l03roXpNAF15qlYOIINilCSTsVNrP9D/HJaosNCIjad/hxUFPkLJ58HxTjAIruEcGUCcykpOVstjC vv+jklCjyNkGs6KIn4hzMqHh6lICKDEYFftlWsqWzHmarjN+9xptc/KkCGEULGCSFUPDLeVfH79YL nfsBgQNhnkvnI4NX4MfA==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKgO-0000dr-1O; Thu, 15 Aug 2019 18:41:08 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdP-00053d-0P for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:06 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9DAA21596; Thu, 15 Aug 2019 11:38:02 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4A55A3F694; Thu, 15 Aug 2019 11:38:01 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 13/17] iommu/arm-smmu: Add implementation infrastructure Date: Thu, 15 Aug 2019 19:37:33 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113803_333155_B07E38A0 X-CRM114-Status: GOOD ( 19.78 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Add some nascent infrastructure for handling implementation-specific details outside the flow of the architectural code. This will allow us to keep mutually-incompatible vendor-specific hooks in their own files where the respective interested parties can maintain them with minimal chance of conflicts. As somewhat of a template, we'll start with a general place to collect the relatively trivial existing quirks. 
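As a rough illustration of the shape of that infrastructure (an optional per-device "impl" structure of callbacks which the core code consults before falling back to its default behaviour), here is a minimal, self-contained C sketch. The demo_* names and the toy register array are invented purely for demonstration and are not part of the driver:

#include <stdint.h>
#include <stdio.h>

/* Optional per-implementation hooks; any of them may be left NULL. */
struct demo_impl {
	uint32_t (*read_reg)(uint32_t *base, int offset);
	void (*write_reg)(uint32_t *base, int offset, uint32_t val);
};

struct demo_device {
	uint32_t regs[16];             /* stand-in for the MMIO region */
	const struct demo_impl *impl;  /* NULL means "no quirks" */
};

/* Core accessor: defer to the implementation hook when one is installed. */
static uint32_t demo_readl(struct demo_device *dev, int offset)
{
	if (dev->impl && dev->impl->read_reg)
		return dev->impl->read_reg(dev->regs, offset);
	return dev->regs[offset];
}

/* Example quirk: reads of slot 0 are redirected to an alias at slot 8. */
static uint32_t quirk_read(uint32_t *base, int offset)
{
	return base[offset == 0 ? 8 : offset];
}

static const struct demo_impl quirk_impl = { .read_reg = quirk_read };

int main(void)
{
	struct demo_device plain  = { .regs = { [0] = 1, [8] = 2 } };
	struct demo_device quirky = { .regs = { [0] = 1, [8] = 2 },
				      .impl = &quirk_impl };

	printf("plain:  %u\n", (unsigned int)demo_readl(&plain, 0));   /* prints 1 */
	printf("quirky: %u\n", (unsigned int)demo_readl(&quirky, 0));  /* prints 2 */
	return 0;
}

The point of funnelling everything through one accessor is that a vendor quirk only has to supply the hooks it actually cares about, while devices without an impl keep the plain default path.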
Signed-off-by: Robin Murphy --- MAINTAINERS | 3 +- drivers/iommu/Makefile | 2 +- drivers/iommu/arm-smmu-impl.c | 13 +++++ drivers/iommu/arm-smmu.c | 82 ++------------------------------ drivers/iommu/arm-smmu.h | 89 +++++++++++++++++++++++++++++++++++ 5 files changed, 108 insertions(+), 81 deletions(-) create mode 100644 drivers/iommu/arm-smmu-impl.c diff --git a/MAINTAINERS b/MAINTAINERS index 6426db5198f0..35ff49ac303b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1350,8 +1350,7 @@ M: Will Deacon R: Robin Murphy L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) S: Maintained -F: drivers/iommu/arm-smmu.c -F: drivers/iommu/arm-smmu-v3.c +F: drivers/iommu/arm-smmu* F: drivers/iommu/io-pgtable-arm.c F: drivers/iommu/io-pgtable-arm-v7s.c diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index f13f36ae1af6..a2729aadd300 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -13,7 +13,7 @@ obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += amd_iommu_debugfs.o obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o -obj-$(CONFIG_ARM_SMMU) += arm-smmu.o +obj-$(CONFIG_ARM_SMMU) += arm-smmu.o arm-smmu-impl.o obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o obj-$(CONFIG_DMAR_TABLE) += dmar.o obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o intel-pasid.o diff --git a/drivers/iommu/arm-smmu-impl.c b/drivers/iommu/arm-smmu-impl.c new file mode 100644 index 000000000000..efeb6d78da17 --- /dev/null +++ b/drivers/iommu/arm-smmu-impl.c @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Miscellaneous Arm SMMU implementation and integration quirks +// Copyright (C) 2019 Arm Limited + +#define pr_fmt(fmt) "arm-smmu: " fmt + +#include "arm-smmu.h" + + +struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) +{ + return smmu; +} diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index f3b8301a3059..1e8153182830 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -19,7 +19,6 @@ #include #include -#include #include #include #include @@ -29,7 +28,6 @@ #include #include #include -#include #include #include #include @@ -41,7 +39,6 @@ #include #include #include -#include #include #include @@ -66,9 +63,6 @@ #define TLB_LOOP_TIMEOUT 1000000 /* 1s! 
*/ #define TLB_SPIN_COUNT 10 -/* Maximum number of context banks per SMMU */ -#define ARM_SMMU_MAX_CBS 128 - #define MSI_IOVA_BASE 0x8000000 #define MSI_IOVA_LENGTH 0x100000 @@ -86,19 +80,6 @@ module_param(disable_bypass, bool, S_IRUGO); MODULE_PARM_DESC(disable_bypass, "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU."); -enum arm_smmu_arch_version { - ARM_SMMU_V1, - ARM_SMMU_V1_64K, - ARM_SMMU_V2, -}; - -enum arm_smmu_implementation { - GENERIC_SMMU, - ARM_MMU500, - CAVIUM_SMMUV2, - QCOM_SMMUV2, -}; - struct arm_smmu_s2cr { struct iommu_group *group; int count; @@ -136,65 +117,6 @@ struct arm_smmu_master_cfg { #define for_each_cfg_sme(fw, i, idx) \ for (i = 0; idx = fwspec_smendx(fw, i), i < fw->num_ids; ++i) -struct arm_smmu_device { - struct device *dev; - - void __iomem *base; - unsigned int numpage; - unsigned int pgshift; - -#define ARM_SMMU_FEAT_COHERENT_WALK (1 << 0) -#define ARM_SMMU_FEAT_STREAM_MATCH (1 << 1) -#define ARM_SMMU_FEAT_TRANS_S1 (1 << 2) -#define ARM_SMMU_FEAT_TRANS_S2 (1 << 3) -#define ARM_SMMU_FEAT_TRANS_NESTED (1 << 4) -#define ARM_SMMU_FEAT_TRANS_OPS (1 << 5) -#define ARM_SMMU_FEAT_VMID16 (1 << 6) -#define ARM_SMMU_FEAT_FMT_AARCH64_4K (1 << 7) -#define ARM_SMMU_FEAT_FMT_AARCH64_16K (1 << 8) -#define ARM_SMMU_FEAT_FMT_AARCH64_64K (1 << 9) -#define ARM_SMMU_FEAT_FMT_AARCH32_L (1 << 10) -#define ARM_SMMU_FEAT_FMT_AARCH32_S (1 << 11) -#define ARM_SMMU_FEAT_EXIDS (1 << 12) - u32 features; - -#define ARM_SMMU_OPT_SECURE_CFG_ACCESS (1 << 0) - u32 options; - enum arm_smmu_arch_version version; - enum arm_smmu_implementation model; - - u32 num_context_banks; - u32 num_s2_context_banks; - DECLARE_BITMAP(context_map, ARM_SMMU_MAX_CBS); - struct arm_smmu_cb *cbs; - atomic_t irptndx; - - u32 num_mapping_groups; - u16 streamid_mask; - u16 smr_mask_mask; - struct arm_smmu_smr *smrs; - struct arm_smmu_s2cr *s2crs; - struct mutex stream_map_mutex; - - unsigned long va_size; - unsigned long ipa_size; - unsigned long pa_size; - unsigned long pgsize_bitmap; - - u32 num_global_irqs; - u32 num_context_irqs; - unsigned int *irqs; - struct clk_bulk_data *clks; - int num_clks; - - u32 cavium_id_base; /* Specific to Cavium */ - - spinlock_t global_sync_lock; - - /* IOMMU core code handle */ - struct iommu_device iommu; -}; - enum arm_smmu_context_fmt { ARM_SMMU_CTX_FMT_NONE, ARM_SMMU_CTX_FMT_AARCH64, @@ -2233,6 +2155,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev) if (err) return err; + smmu = arm_smmu_impl_init(smmu); + if (IS_ERR(smmu)) + return PTR_ERR(smmu); + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ioaddr = res->start; smmu->base = devm_ioremap_resource(dev, res); diff --git a/drivers/iommu/arm-smmu.h b/drivers/iommu/arm-smmu.h index ccc3097a4247..6fea0b0b7e51 100644 --- a/drivers/iommu/arm-smmu.h +++ b/drivers/iommu/arm-smmu.h @@ -10,7 +10,14 @@ #ifndef _ARM_SMMU_H #define _ARM_SMMU_H +#include #include +#include +#include +#include +#include +#include +#include /* Configuration registers */ #define ARM_SMMU_GR0_sCR0 0x0 @@ -194,4 +201,86 @@ enum arm_smmu_cbar_type { #define ARM_SMMU_CB_ATSR 0x8f0 #define ATSR_ACTIVE BIT(0) + +/* Maximum number of context banks per SMMU */ +#define ARM_SMMU_MAX_CBS 128 + + +/* Shared driver definitions */ +enum arm_smmu_arch_version { + ARM_SMMU_V1, + ARM_SMMU_V1_64K, + ARM_SMMU_V2, +}; + +enum arm_smmu_implementation { + GENERIC_SMMU, + ARM_MMU500, + CAVIUM_SMMUV2, + 
QCOM_SMMUV2, +}; + +struct arm_smmu_device { + struct device *dev; + + void __iomem *base; + unsigned int numpage; + unsigned int pgshift; + +#define ARM_SMMU_FEAT_COHERENT_WALK (1 << 0) +#define ARM_SMMU_FEAT_STREAM_MATCH (1 << 1) +#define ARM_SMMU_FEAT_TRANS_S1 (1 << 2) +#define ARM_SMMU_FEAT_TRANS_S2 (1 << 3) +#define ARM_SMMU_FEAT_TRANS_NESTED (1 << 4) +#define ARM_SMMU_FEAT_TRANS_OPS (1 << 5) +#define ARM_SMMU_FEAT_VMID16 (1 << 6) +#define ARM_SMMU_FEAT_FMT_AARCH64_4K (1 << 7) +#define ARM_SMMU_FEAT_FMT_AARCH64_16K (1 << 8) +#define ARM_SMMU_FEAT_FMT_AARCH64_64K (1 << 9) +#define ARM_SMMU_FEAT_FMT_AARCH32_L (1 << 10) +#define ARM_SMMU_FEAT_FMT_AARCH32_S (1 << 11) +#define ARM_SMMU_FEAT_EXIDS (1 << 12) + u32 features; + +#define ARM_SMMU_OPT_SECURE_CFG_ACCESS (1 << 0) + u32 options; + enum arm_smmu_arch_version version; + enum arm_smmu_implementation model; + + u32 num_context_banks; + u32 num_s2_context_banks; + DECLARE_BITMAP(context_map, ARM_SMMU_MAX_CBS); + struct arm_smmu_cb *cbs; + atomic_t irptndx; + + u32 num_mapping_groups; + u16 streamid_mask; + u16 smr_mask_mask; + struct arm_smmu_smr *smrs; + struct arm_smmu_s2cr *s2crs; + struct mutex stream_map_mutex; + + unsigned long va_size; + unsigned long ipa_size; + unsigned long pa_size; + unsigned long pgsize_bitmap; + + u32 num_global_irqs; + u32 num_context_irqs; + unsigned int *irqs; + struct clk_bulk_data *clks; + int num_clks; + + u32 cavium_id_base; /* Specific to Cavium */ + + spinlock_t global_sync_lock; + + /* IOMMU core code handle */ + struct iommu_device iommu; +}; + + +/* Implementation details, yay! */ +struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu); + #endif /* _ARM_SMMU_H */ From patchwork Thu Aug 15 18:37:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096301 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 513B017E2 for ; Thu, 15 Aug 2019 18:41:28 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 417AC286A1 for ; Thu, 15 Aug 2019 18:41:28 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3414E28833; Thu, 15 Aug 2019 18:41:28 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id BC28E288A9 for ; Thu, 15 Aug 2019 18:41:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=R70CPebUOXfc+gFRovS9CjZMehuo58czuJ6BJAxEmCs=; b=Y3EpIZi8dU9eAT 2jXrv3Rhg0LppEAp0gDtAm7p7KhaoP0exkE3zL85jFx0C5mQyLdw8by/ncfNYFNMYNXMEncoZDT6B 
iYaJl6zi/xnsU4F/t1tYWpPJml3oMZZ77GUlvcCYh6CU+xhu0PzJvn1jzxTMpJ6PFGl9KUpnXnSUt LZPLSMo/pcFjgHwvERg1i/eQrORzH9mJLF4zV/Q1lDrdaWmCDqsP8NWFSAyFkPpO5ElzQsszHInoz w5ilc7pYrJID/YXKtJYRD4FEEopXMSpvZwBK4k0Y1KP8cMAm5u+5Q4um8u9lz4R+wkaZuf4aNOYNd WTH/+Q9DibUVpxiSgl2A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKgc-0000wX-Uo; Thu, 15 Aug 2019 18:41:23 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdQ-0004nO-EC for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:07 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 330A9360; Thu, 15 Aug 2019 11:38:04 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id D2D693F694; Thu, 15 Aug 2019 11:38:02 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 14/17] iommu/arm-smmu: Move Secure access quirk to implementation Date: Thu, 15 Aug 2019 19:37:34 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113804_704614_E71B7845 X-CRM114-Status: GOOD ( 17.65 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Move detection of the Secure access quirk to its new home, trimming it down in the process - time has proven that boolean DT flags are neither ideal nor necessarily sufficient, so it's highly unlikely we'll ever add more, let alone enough to justify the frankly overengineered parsing machinery. 
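(As a purely illustrative sketch, not part of this patch: with register accesses now routed through the impl hooks, an integration-specific override only needs to supply the accessors it actually cares about. The foo_* names below are invented for the example; the struct arm_smmu_impl hooks and the arm_smmu_page() helper are the interface this patch moves into arm-smmu.h.)

#include "arm-smmu.h"

/*
 * Hypothetical example: trace GR0 reads on some imaginary "foo" integration.
 * Only the 32-bit read hook is overridden; all other accesses fall back to
 * the default readl/writel paths in the header.
 */
static u32 foo_read_reg(struct arm_smmu_device *smmu, int page, int offset)
{
	u32 val = readl_relaxed(arm_smmu_page(smmu, page) + offset);

	if (page == ARM_SMMU_GR0)
		dev_dbg(smmu->dev, "GR0 read: offset 0x%x = 0x%08x\n", offset, val);

	return val;
}

static const struct arm_smmu_impl foo_impl = {
	.read_reg = foo_read_reg,
};
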
Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu-impl.c | 44 ++++++++++++++++ drivers/iommu/arm-smmu.c | 97 ----------------------------------- drivers/iommu/arm-smmu.h | 72 +++++++++++++++++++++++++- 3 files changed, 114 insertions(+), 99 deletions(-) diff --git a/drivers/iommu/arm-smmu-impl.c b/drivers/iommu/arm-smmu-impl.c index efeb6d78da17..0657c85580cb 100644 --- a/drivers/iommu/arm-smmu-impl.c +++ b/drivers/iommu/arm-smmu-impl.c @@ -4,10 +4,54 @@ #define pr_fmt(fmt) "arm-smmu: " fmt +#include + #include "arm-smmu.h" +static int arm_smmu_gr0_ns(int offset) +{ + switch(offset) { + case ARM_SMMU_GR0_sCR0: + case ARM_SMMU_GR0_sACR: + case ARM_SMMU_GR0_sGFSR: + case ARM_SMMU_GR0_sGFSYNR0: + case ARM_SMMU_GR0_sGFSYNR1: + case ARM_SMMU_GR0_sGFSYNR2: + return offset + 0x400; + default: + return offset; + } +} + +static u32 arm_smmu_read_ns(struct arm_smmu_device *smmu, int page, + int offset) +{ + if (page == ARM_SMMU_GR0) + offset = arm_smmu_gr0_ns(offset); + return readl_relaxed(arm_smmu_page(smmu, page) + offset); +} + +static void arm_smmu_write_ns(struct arm_smmu_device *smmu, int page, + int offset, u32 val) +{ + if (page == ARM_SMMU_GR0) + offset = arm_smmu_gr0_ns(offset); + writel_relaxed(val, arm_smmu_page(smmu, page) + offset); +} + +/* Since we don't care for sGFAR, we can do without 64-bit accessors */ +const struct arm_smmu_impl calxeda_impl = { + .read_reg = arm_smmu_read_ns, + .write_reg = arm_smmu_write_ns, +}; + + struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) { + if (of_property_read_bool(smmu->dev->of_node, + "calxeda,smmu-secure-config-access")) + smmu->impl = &calxeda_impl; + return smmu; } diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 1e8153182830..432d781f05f3 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -155,91 +155,10 @@ struct arm_smmu_domain { struct iommu_domain domain; }; -static int arm_smmu_gr0_ns(int offset) -{ - switch(offset) { - case ARM_SMMU_GR0_sCR0: - case ARM_SMMU_GR0_sACR: - case ARM_SMMU_GR0_sGFSR: - case ARM_SMMU_GR0_sGFSYNR0: - case ARM_SMMU_GR0_sGFSYNR1: - case ARM_SMMU_GR0_sGFSYNR2: - return offset + 0x400; - default: - return offset; - } -} - -static void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) -{ - return smmu->base + (n << smmu->pgshift); -} - -static u32 arm_smmu_readl(struct arm_smmu_device *smmu, int page, int offset) -{ - if ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) && page == 0) - offset = arm_smmu_gr0_ns(offset); - - return readl_relaxed(arm_smmu_page(smmu, page) + offset); -} - -static void arm_smmu_writel(struct arm_smmu_device *smmu, int page, int offset, - u32 val) -{ - if ((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) && page == 0) - offset = arm_smmu_gr0_ns(offset); - - writel_relaxed(val, arm_smmu_page(smmu, page) + offset); -} - -static u64 arm_smmu_readq(struct arm_smmu_device *smmu, int page, int offset) -{ - return readq_relaxed(arm_smmu_page(smmu, page) + offset); -} - -static void arm_smmu_writeq(struct arm_smmu_device *smmu, int page, int offset, - u64 val) -{ - writeq_relaxed(val, arm_smmu_page(smmu, page) + offset); -} - -#define ARM_SMMU_GR0 0 -#define ARM_SMMU_GR1 1 -#define ARM_SMMU_CB(s, n) ((s)->numpage + (n)) - -#define arm_smmu_gr0_read(s, o) \ - arm_smmu_readl((s), ARM_SMMU_GR0, (o)) -#define arm_smmu_gr0_write(s, o, v) \ - arm_smmu_writel((s), ARM_SMMU_GR0, (o), (v)) - -#define arm_smmu_gr1_read(s, o) \ - arm_smmu_readl((s), ARM_SMMU_GR1, (o)) -#define arm_smmu_gr1_write(s, o, v) \ - 
arm_smmu_writel((s), ARM_SMMU_GR1, (o), (v)) - -#define arm_smmu_cb_read(s, n, o) \ - arm_smmu_readl((s), ARM_SMMU_CB((s), (n)), (o)) -#define arm_smmu_cb_write(s, n, o, v) \ - arm_smmu_writel((s), ARM_SMMU_CB((s), (n)), (o), (v)) -#define arm_smmu_cb_readq(s, n, o) \ - arm_smmu_readq((s), ARM_SMMU_CB((s), (n)), (o)) -#define arm_smmu_cb_writeq(s, n, o, v) \ - arm_smmu_writeq((s), ARM_SMMU_CB((s), (n)), (o), (v)) - -struct arm_smmu_option_prop { - u32 opt; - const char *prop; -}; - static atomic_t cavium_smmu_context_count = ATOMIC_INIT(0); static bool using_legacy_binding, using_generic_binding; -static struct arm_smmu_option_prop arm_smmu_options[] = { - { ARM_SMMU_OPT_SECURE_CFG_ACCESS, "calxeda,smmu-secure-config-access" }, - { 0, NULL}, -}; - static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu) { if (pm_runtime_enabled(smmu->dev)) @@ -259,20 +178,6 @@ static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom) return container_of(dom, struct arm_smmu_domain, domain); } -static void parse_driver_options(struct arm_smmu_device *smmu) -{ - int i = 0; - - do { - if (of_property_read_bool(smmu->dev->of_node, - arm_smmu_options[i].prop)) { - smmu->options |= arm_smmu_options[i].opt; - dev_notice(smmu->dev, "option %s\n", - arm_smmu_options[i].prop); - } - } while (arm_smmu_options[++i].opt); -} - static struct device_node *dev_get_dev_node(struct device *dev) { if (dev_is_pci(dev)) { @@ -2091,8 +1996,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev, smmu->version = data->version; smmu->model = data->model; - parse_driver_options(smmu); - legacy_binding = of_find_property(dev->of_node, "mmu-masters", NULL); if (legacy_binding && !using_generic_binding) { if (!using_legacy_binding) diff --git a/drivers/iommu/arm-smmu.h b/drivers/iommu/arm-smmu.h index 6fea0b0b7e51..d4fd29d70705 100644 --- a/drivers/iommu/arm-smmu.h +++ b/drivers/iommu/arm-smmu.h @@ -242,10 +242,9 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_EXIDS (1 << 12) u32 features; -#define ARM_SMMU_OPT_SECURE_CFG_ACCESS (1 << 0) - u32 options; enum arm_smmu_arch_version version; enum arm_smmu_implementation model; + const struct arm_smmu_impl *impl; u32 num_context_banks; u32 num_s2_context_banks; @@ -281,6 +280,75 @@ struct arm_smmu_device { /* Implementation details, yay! 
*/ +struct arm_smmu_impl { + u32 (*read_reg)(struct arm_smmu_device *smmu, int page, int offset); + void (*write_reg)(struct arm_smmu_device *smmu, int page, int offset, + u32 val); + u64 (*read_reg64)(struct arm_smmu_device *smmu, int page, int offset); + void (*write_reg64)(struct arm_smmu_device *smmu, int page, int offset, + u64 val); +}; + +static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) +{ + return smmu->base + (n << smmu->pgshift); +} + +static inline u32 arm_smmu_readl(struct arm_smmu_device *smmu, int page, int offset) +{ + if (smmu->impl && unlikely(smmu->impl->read_reg)) + return smmu->impl->read_reg(smmu, page, offset); + return readl_relaxed(arm_smmu_page(smmu, page) + offset); +} + +static inline void arm_smmu_writel(struct arm_smmu_device *smmu, int page, + int offset, u32 val) +{ + if (smmu->impl && unlikely(smmu->impl->write_reg)) + smmu->impl->write_reg(smmu, page, offset, val); + else + writel_relaxed(val, arm_smmu_page(smmu, page) + offset); +} + +static inline u64 arm_smmu_readq(struct arm_smmu_device *smmu, int page, int offset) +{ + if (smmu->impl && unlikely(smmu->impl->read_reg64)) + return smmu->impl->read_reg64(smmu, page, offset); + return readq_relaxed(arm_smmu_page(smmu, page) + offset); +} + +static inline void arm_smmu_writeq(struct arm_smmu_device *smmu, int page, + int offset, u64 val) +{ + if (smmu->impl && unlikely(smmu->impl->write_reg64)) + smmu->impl->write_reg64(smmu, page, offset, val); + else + writeq_relaxed(val, arm_smmu_page(smmu, page) + offset); +} + +#define ARM_SMMU_GR0 0 +#define ARM_SMMU_GR1 1 +#define ARM_SMMU_CB(s, n) ((s)->numpage + (n)) + +#define arm_smmu_gr0_read(s, o) \ + arm_smmu_readl((s), ARM_SMMU_GR0, (o)) +#define arm_smmu_gr0_write(s, o, v) \ + arm_smmu_writel((s), ARM_SMMU_GR0, (o), (v)) + +#define arm_smmu_gr1_read(s, o) \ + arm_smmu_readl((s), ARM_SMMU_GR1, (o)) +#define arm_smmu_gr1_write(s, o, v) \ + arm_smmu_writel((s), ARM_SMMU_GR1, (o), (v)) + +#define arm_smmu_cb_read(s, n, o) \ + arm_smmu_readl((s), ARM_SMMU_CB((s), (n)), (o)) +#define arm_smmu_cb_write(s, n, o, v) \ + arm_smmu_writel((s), ARM_SMMU_CB((s), (n)), (o), (v)) +#define arm_smmu_cb_readq(s, n, o) \ + arm_smmu_readq((s), ARM_SMMU_CB((s), (n)), (o)) +#define arm_smmu_cb_writeq(s, n, o, v) \ + arm_smmu_writeq((s), ARM_SMMU_CB((s), (n)), (o), (v)) + struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu); #endif /* _ARM_SMMU_H */ From patchwork Thu Aug 15 18:37:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096303 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3D9AF14DB for ; Thu, 15 Aug 2019 18:41:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2DE7A285D8 for ; Thu, 15 Aug 2019 18:41:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 21F7C288AC; Thu, 15 Aug 2019 18:41:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 
bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 794A5286E6 for ; Thu, 15 Aug 2019 18:41:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=zCyLMfqSAg7T8p75oDRYJgsEXI+0p36FpkDvgiDJ6z8=; b=IzIimyyEmbVcqf LFudiYWJYZuddZ9SjSz670mfaegWgY3oaTPYonU6h2C35qt3I/CHGJmiMQWhtgEKeFTA6bRUlOsdH nslfJo5tpJcCOcvJi8lzv0mPHHN68BarbtsPYHoI9Lc9aUovioDrsTEK1FfvNDrQ77e3ITxAYz27c epZYrW4CJukqd3aryZtes2cQcm6KchCiI0ULCgg6eO4DYwHwA9BibLLFajSzHs70hz/+CiR+geC7S JSkio3x77nLGs8EvaLt7wSurHomTrtaS7c1nd6rNoG97rPwfbulGBBraIADVre5G56DrSne5oC4uc hD46yJ3LUoR0aVfZTAWg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKgp-00019s-Pe; Thu, 15 Aug 2019 18:41:35 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdS-00052E-0f for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:09 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BD47E1570; Thu, 15 Aug 2019 11:38:05 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 695D83F694; Thu, 15 Aug 2019 11:38:04 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 15/17] iommu/arm-smmu: Add configuration implementation hook Date: Thu, 15 Aug 2019 19:37:35 +0100 Message-Id: <73f4b0bf9cf6478917c618011363b6cf06566a65.1565892337.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113806_198878_6C5467B1 X-CRM114-Status: GOOD ( 18.56 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Probing the ID registers and setting up the SMMU configuration is an area where overrides and workarounds may well be needed. Indeed, the Cavium workaround detection lives there at the moment, so let's break that out. 
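(Illustrative sketch only, not part of this patch: a cfg_probe hook runs at the end of arm_smmu_device_cfg_probe() and may adjust whatever the ID registers reported before the rest of the driver consumes it. The foo_* names and the particular adjustment are invented for the example.)

#include "arm-smmu.h"

/*
 * Hypothetical example: pretend the last context bank belongs to other
 * (e.g. secure-side) software, so hide it from Linux.
 */
static int foo_cfg_probe(struct arm_smmu_device *smmu)
{
	if (smmu->num_context_banks > 1) {
		smmu->num_context_banks--;
		dev_notice(smmu->dev, "\treserving context bank %u\n",
			   smmu->num_context_banks);
	}

	return 0;
}

static const struct arm_smmu_impl foo_impl = {
	.cfg_probe = foo_cfg_probe,
};
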
Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu-impl.c | 34 ++++++++++++++++++++++++++++++++++ drivers/iommu/arm-smmu.c | 17 +++-------------- drivers/iommu/arm-smmu.h | 1 + 3 files changed, 38 insertions(+), 14 deletions(-) diff --git a/drivers/iommu/arm-smmu-impl.c b/drivers/iommu/arm-smmu-impl.c index 0657c85580cb..696417908793 100644 --- a/drivers/iommu/arm-smmu-impl.c +++ b/drivers/iommu/arm-smmu-impl.c @@ -47,8 +47,42 @@ const struct arm_smmu_impl calxeda_impl = { }; +static int cavium_cfg_probe(struct arm_smmu_device *smmu) +{ + static atomic_t context_count = ATOMIC_INIT(0); + /* + * Cavium CN88xx erratum #27704. + * Ensure ASID and VMID allocation is unique across all SMMUs in + * the system. + */ + smmu->cavium_id_base = atomic_fetch_add(smmu->num_context_banks, + &context_count); + dev_notice(smmu->dev, "\tenabling workaround for Cavium erratum 27704\n"); + + return 0; +} + +const struct arm_smmu_impl cavium_impl = { + .cfg_probe = cavium_cfg_probe, +}; + + struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) { + /* + * We will inevitably have to combine model-specific implementation + * quirks with platform-specific integration quirks, but everything + * we currently support happens to work out as straightforward + * mutually-exclusive assignments. + */ + switch (smmu->model) { + case CAVIUM_SMMUV2: + smmu->impl = &cavium_impl; + break; + default: + break; + } + if (of_property_read_bool(smmu->dev->of_node, "calxeda,smmu-secure-config-access")) smmu->impl = &calxeda_impl; diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 432d781f05f3..362b6b5a28ee 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -155,8 +155,6 @@ struct arm_smmu_domain { struct iommu_domain domain; }; -static atomic_t cavium_smmu_context_count = ATOMIC_INIT(0); - static bool using_legacy_binding, using_generic_binding; static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu) @@ -1804,18 +1802,6 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) } dev_notice(smmu->dev, "\t%u context banks (%u stage-2 only)\n", smmu->num_context_banks, smmu->num_s2_context_banks); - /* - * Cavium CN88xx erratum #27704. - * Ensure ASID and VMID allocation is unique across all SMMUs in - * the system. 
- */ - if (smmu->model == CAVIUM_SMMUV2) { - smmu->cavium_id_base = - atomic_add_return(smmu->num_context_banks, - &cavium_smmu_context_count); - smmu->cavium_id_base -= smmu->num_context_banks; - dev_notice(smmu->dev, "\tenabling workaround for Cavium erratum 27704\n"); - } smmu->cbs = devm_kcalloc(smmu->dev, smmu->num_context_banks, sizeof(*smmu->cbs), GFP_KERNEL); if (!smmu->cbs) @@ -1884,6 +1870,9 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) dev_notice(smmu->dev, "\tStage-2: %lu-bit IPA -> %lu-bit PA\n", smmu->ipa_size, smmu->pa_size); + if (smmu->impl && smmu->impl->cfg_probe) + return smmu->impl->cfg_probe(smmu); + return 0; } diff --git a/drivers/iommu/arm-smmu.h b/drivers/iommu/arm-smmu.h index d4fd29d70705..f4e90f33fce2 100644 --- a/drivers/iommu/arm-smmu.h +++ b/drivers/iommu/arm-smmu.h @@ -287,6 +287,7 @@ struct arm_smmu_impl { u64 (*read_reg64)(struct arm_smmu_device *smmu, int page, int offset); void (*write_reg64)(struct arm_smmu_device *smmu, int page, int offset, u64 val); + int (*cfg_probe)(struct arm_smmu_device *smmu); }; static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) From patchwork Thu Aug 15 18:37:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096305 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D6A6314DB for ; Thu, 15 Aug 2019 18:41:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C7C2F28869 for ; Thu, 15 Aug 2019 18:41:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BC39E288B5; Thu, 15 Aug 2019 18:41:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id 3A649288AC for ; Thu, 15 Aug 2019 18:41:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=8uSS49Yp2x7/TS6Gvf3OmhewvzUi31YLHzKe/oGiWHU=; b=oKoe67eqP3FeR6 QrBdNG9KmmjPTJxkIC/1HIEOioVRRPrxgV/AupIpGcjmSk00gzhbGqx1SeZ5ENOIXg/ctTYVyPfWh 5eOWWs+U4i674OhRxr1WVjAgKruqGzFF5Wa5S5uU4c5jkWRzSWoQybrc0Vd2OK+wzK8HiGhRjaGY8 gZYEIyyEzX8KIfXHSY2ZETm4r39f6t3xvdilD66EQ4kChaSHsfWKpj3tLechktpleXTzDzTNkvg22 1wgrK4w0k588+ZVppiJQi2RmWLXoQ0dkMz1Sw75FsMlhK5XY2pCcx5+kdfFTObi+zZTm/T0L6S1nE 6oqqOrnxB+Qx07h8rQFg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKh3-0001NY-OP; Thu, 15 Aug 2019 18:41:49 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat 
Linux)) id 1hyKdT-00053d-IK for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:11 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 51380344; Thu, 15 Aug 2019 11:38:07 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F22223F694; Thu, 15 Aug 2019 11:38:05 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 16/17] iommu/arm-smmu: Add reset implementation hook Date: Thu, 15 Aug 2019 19:37:36 +0100 Message-Id: <9a050064bc835dc68e18937a4e6e502acff8fa3e.1565892337.git.robin.murphy@arm.com> X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113807_828431_1A063C5A X-CRM114-Status: GOOD ( 19.25 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Reset is an activity rife with implementation-defined poking. Add a corresponding hook, and use it to encapsulate the existing MMU-500 details. Signed-off-by: Robin Murphy --- drivers/iommu/arm-smmu-impl.c | 49 +++++++++++++++++++++++++++++++++++ drivers/iommu/arm-smmu.c | 39 +++------------------------- drivers/iommu/arm-smmu.h | 1 + 3 files changed, 54 insertions(+), 35 deletions(-) diff --git a/drivers/iommu/arm-smmu-impl.c b/drivers/iommu/arm-smmu-impl.c index 696417908793..4dc8b1c4befb 100644 --- a/drivers/iommu/arm-smmu-impl.c +++ b/drivers/iommu/arm-smmu-impl.c @@ -4,6 +4,7 @@ #define pr_fmt(fmt) "arm-smmu: " fmt +#include #include #include "arm-smmu.h" @@ -67,6 +68,51 @@ const struct arm_smmu_impl cavium_impl = { }; +#define ARM_MMU500_ACTLR_CPRE (1 << 1) + +#define ARM_MMU500_ACR_CACHE_LOCK (1 << 26) +#define ARM_MMU500_ACR_S2CRB_TLBEN (1 << 10) +#define ARM_MMU500_ACR_SMTNMB_TLBEN (1 << 8) + +static int arm_mmu500_reset(struct arm_smmu_device *smmu) +{ + u32 reg, major; + int i; + /* + * On MMU-500 r2p0 onwards we need to clear ACR.CACHE_LOCK before + * writes to the context bank ACTLRs will stick. And we just hope that + * Secure has also cleared SACR.CACHE_LOCK for this to take effect... + */ + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID7); + major = FIELD_GET(ID7_MAJOR, reg); + reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sACR); + if (major >= 2) + reg &= ~ARM_MMU500_ACR_CACHE_LOCK; + /* + * Allow unmatched Stream IDs to allocate bypass + * TLB entries for reduced latency. + */ + reg |= ARM_MMU500_ACR_SMTNMB_TLBEN | ARM_MMU500_ACR_S2CRB_TLBEN; + arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sACR, reg); + + /* + * Disable MMU-500's not-particularly-beneficial next-page + * prefetcher for the sake of errata #841119 and #826419. 
+ */ + for (i = 0; i < smmu->num_context_banks; ++i) { + reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR); + reg &= ~ARM_MMU500_ACTLR_CPRE; + arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg); + } + + return 0; +} + +const struct arm_smmu_impl arm_mmu500_impl = { + .reset = arm_mmu500_reset, +}; + + struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) { /* @@ -76,6 +122,9 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) * mutually-exclusive assignments. */ switch (smmu->model) { + case ARM_MMU500: + smmu->impl = &arm_mmu500_impl; + break; case CAVIUM_SMMUV2: smmu->impl = &cavium_impl; break; diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 362b6b5a28ee..fc98992d120d 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -54,12 +54,6 @@ */ #define QCOM_DUMMY_VAL -1 -#define ARM_MMU500_ACTLR_CPRE (1 << 1) - -#define ARM_MMU500_ACR_CACHE_LOCK (1 << 26) -#define ARM_MMU500_ACR_S2CRB_TLBEN (1 << 10) -#define ARM_MMU500_ACR_SMTNMB_TLBEN (1 << 8) - #define TLB_LOOP_TIMEOUT 1000000 /* 1s! */ #define TLB_SPIN_COUNT 10 @@ -1574,7 +1568,7 @@ static struct iommu_ops arm_smmu_ops = { static void arm_smmu_device_reset(struct arm_smmu_device *smmu) { int i; - u32 reg, major; + u32 reg; /* clear global FSR */ reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sGFSR); @@ -1587,38 +1581,10 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) for (i = 0; i < smmu->num_mapping_groups; ++i) arm_smmu_write_sme(smmu, i); - if (smmu->model == ARM_MMU500) { - /* - * Before clearing ARM_MMU500_ACTLR_CPRE, need to - * clear CACHE_LOCK bit of ACR first. And, CACHE_LOCK - * bit is only present in MMU-500r2 onwards. - */ - reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_ID7); - major = FIELD_GET(ID7_MAJOR, reg); - reg = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_sACR); - if (major >= 2) - reg &= ~ARM_MMU500_ACR_CACHE_LOCK; - /* - * Allow unmatched Stream IDs to allocate bypass - * TLB entries for reduced latency. - */ - reg |= ARM_MMU500_ACR_SMTNMB_TLBEN | ARM_MMU500_ACR_S2CRB_TLBEN; - arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sACR, reg); - } - /* Make sure all context banks are disabled and clear CB_FSR */ for (i = 0; i < smmu->num_context_banks; ++i) { arm_smmu_write_context_bank(smmu, i); arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_FSR, FSR_FAULT); - /* - * Disable MMU-500's not-particularly-beneficial next-page - * prefetcher for the sake of errata #841119 and #826419. 
- */ - if (smmu->model == ARM_MMU500) { - reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR); - reg &= ~ARM_MMU500_ACTLR_CPRE; - arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg); - } } /* Invalidate the TLB, just in case */ @@ -1652,6 +1618,9 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) if (smmu->features & ARM_SMMU_FEAT_EXIDS) reg |= sCR0_EXIDENABLE; + if (smmu->impl && smmu->impl->reset) + smmu->impl->reset(smmu); + /* Push the button */ arm_smmu_tlb_sync_global(smmu); arm_smmu_gr0_write(smmu, ARM_SMMU_GR0_sCR0, reg); diff --git a/drivers/iommu/arm-smmu.h b/drivers/iommu/arm-smmu.h index f4e90f33fce2..ddafe872a396 100644 --- a/drivers/iommu/arm-smmu.h +++ b/drivers/iommu/arm-smmu.h @@ -288,6 +288,7 @@ struct arm_smmu_impl { void (*write_reg64)(struct arm_smmu_device *smmu, int page, int offset, u64 val); int (*cfg_probe)(struct arm_smmu_device *smmu); + int (*reset)(struct arm_smmu_device *smmu); }; static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) From patchwork Thu Aug 15 18:37:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Robin Murphy X-Patchwork-Id: 11096307 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6AD6B17E2 for ; Thu, 15 Aug 2019 18:42:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5BF812003F for ; Thu, 15 Aug 2019 18:42:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 50A6E28735; Thu, 15 Aug 2019 18:42:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-5.2 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_MED autolearn=ham version=3.3.1 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.wl.linuxfoundation.org (Postfix) with ESMTPS id B33CB2003F for ; Thu, 15 Aug 2019 18:42:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=fkGazBCBy326LCYHOSbQtGD6zER9xfzLQURvu1Pe/l4=; b=ZEOuLaG8KV6nvE dxnhx8vb9dRKxkn6thytXicv7T7yWaX3YMjGxGDHmcMpGZzRVRyKw8wLUG5u/3xpsHk62GXJPAtIb XSdKKcQCPg1pzJUGgL/C5fnaFFR+nfLdW0vJgm6zsG+viI0BmzWwl49uGihWscqnw3l2NChHxRyeK ej6YUHrfq5eLVEBl1ZFKgy2/p3MJy7AJ+t3rB8TEFg1nRYfijasjFvxGnxfZK8ITxOXMCd2inevyT eOs3eGCT8ccReJ8zbV0Zd8jwQ7+gb7OCV/Nlt8CQnoQWfPekFAwRzIj6Ge+kIlWy9ffAIpW2N3sHH pSB89a9+v5ByKEjxg7+A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKhI-0001aj-TH; Thu, 15 Aug 2019 18:42:05 +0000 Received: from foss.arm.com ([217.140.110.172]) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyKdV-0005AD-7n for linux-arm-kernel@lists.infradead.org; Thu, 15 Aug 2019 18:38:12 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com 
(unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D97E11596; Thu, 15 Aug 2019 11:38:08 -0700 (PDT) Received: from e110467-lin.cambridge.arm.com (e110467-lin.cambridge.arm.com [10.1.197.57]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8603D3F694; Thu, 15 Aug 2019 11:38:07 -0700 (PDT) From: Robin Murphy To: will@kernel.org Subject: [PATCH v2 17/17] iommu/arm-smmu: Add context init implementation hook Date: Thu, 15 Aug 2019 19:37:37 +0100 Message-Id: X-Mailer: git-send-email 2.21.0.dirty In-Reply-To: References: MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20190815_113809_449458_3B1549AC X-CRM114-Status: GOOD ( 18.72 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: robdclark@gmail.com, joro@8bytes.org, bjorn.andersson@linaro.org, iommu@lists.linux-foundation.org, vivek.gautam@codeaurora.org, jcrouse@codeaurora.org, gregory.clement@bootlin.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Allocating and initialising a context for a domain is another point where certain implementations are known to want special behaviour. Currently the other half of the Cavium workaround comes into play here, so let's finish the job to get the whole thing right out of the way. Signed-off-by: Robin Murphy Signed-off-by: Vivek Gautam --- drivers/iommu/arm-smmu-impl.c | 42 ++++++++++++++++++++++++++--- drivers/iommu/arm-smmu.c | 51 +++++++---------------------------- drivers/iommu/arm-smmu.h | 42 +++++++++++++++++++++++++++-- 3 files changed, 87 insertions(+), 48 deletions(-) diff --git a/drivers/iommu/arm-smmu-impl.c b/drivers/iommu/arm-smmu-impl.c index 4dc8b1c4befb..e22e9004f449 100644 --- a/drivers/iommu/arm-smmu-impl.c +++ b/drivers/iommu/arm-smmu-impl.c @@ -48,25 +48,60 @@ const struct arm_smmu_impl calxeda_impl = { }; +struct cavium_smmu { + struct arm_smmu_device smmu; + u32 id_base; +}; + static int cavium_cfg_probe(struct arm_smmu_device *smmu) { static atomic_t context_count = ATOMIC_INIT(0); + struct cavium_smmu *cs = container_of(smmu, struct cavium_smmu, smmu); /* * Cavium CN88xx erratum #27704. * Ensure ASID and VMID allocation is unique across all SMMUs in * the system. 
*/ - smmu->cavium_id_base = atomic_fetch_add(smmu->num_context_banks, - &context_count); + cs->id_base = atomic_fetch_add(smmu->num_context_banks, &context_count); dev_notice(smmu->dev, "\tenabling workaround for Cavium erratum 27704\n"); return 0; } +int cavium_init_context(struct arm_smmu_domain *smmu_domain) +{ + struct cavium_smmu *cs = container_of(smmu_domain->smmu, + struct cavium_smmu, smmu); + + if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2) + smmu_domain->cfg.vmid += cs->id_base; + else + smmu_domain->cfg.asid += cs->id_base; + + return 0; +} + const struct arm_smmu_impl cavium_impl = { .cfg_probe = cavium_cfg_probe, + .init_context = cavium_init_context, }; +struct arm_smmu_device *cavium_smmu_impl_init(struct arm_smmu_device *smmu) +{ + struct cavium_smmu *cs; + + cs = devm_kzalloc(smmu->dev, sizeof(*cs), GFP_KERNEL); + if (!cs) + return ERR_PTR(-ENOMEM); + + cs->smmu = *smmu; + cs->smmu.impl = &cavium_impl; + + devm_kfree(smmu->dev, smmu); + + return &cs->smmu; +} + #define ARM_MMU500_ACTLR_CPRE (1 << 1) @@ -126,8 +161,7 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) smmu->impl = &arm_mmu500_impl; break; case CAVIUM_SMMUV2: - smmu->impl = &cavium_impl; - break; + return cavium_smmu_impl_init(smmu); default: break; } diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index fc98992d120d..b8628e2ab579 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -27,7 +27,6 @@ #include #include #include -#include #include #include #include @@ -111,44 +110,6 @@ struct arm_smmu_master_cfg { #define for_each_cfg_sme(fw, i, idx) \ for (i = 0; idx = fwspec_smendx(fw, i), i < fw->num_ids; ++i) -enum arm_smmu_context_fmt { - ARM_SMMU_CTX_FMT_NONE, - ARM_SMMU_CTX_FMT_AARCH64, - ARM_SMMU_CTX_FMT_AARCH32_L, - ARM_SMMU_CTX_FMT_AARCH32_S, -}; - -struct arm_smmu_cfg { - u8 cbndx; - u8 irptndx; - union { - u16 asid; - u16 vmid; - }; - enum arm_smmu_cbar_type cbar; - enum arm_smmu_context_fmt fmt; -}; -#define INVALID_IRPTNDX 0xff - -enum arm_smmu_domain_stage { - ARM_SMMU_DOMAIN_S1 = 0, - ARM_SMMU_DOMAIN_S2, - ARM_SMMU_DOMAIN_NESTED, - ARM_SMMU_DOMAIN_BYPASS, -}; - -struct arm_smmu_domain { - struct arm_smmu_device *smmu; - struct io_pgtable_ops *pgtbl_ops; - const struct iommu_gather_ops *tlb_ops; - struct arm_smmu_cfg cfg; - enum arm_smmu_domain_stage stage; - bool non_strict; - struct mutex init_mutex; /* Protects smmu pointer */ - spinlock_t cb_lock; /* Serialises ATS1* ops and TLB syncs */ - struct iommu_domain domain; -}; - static bool using_legacy_binding, using_generic_binding; static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu) @@ -749,9 +710,16 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, } if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2) - cfg->vmid = cfg->cbndx + 1 + smmu->cavium_id_base; + cfg->vmid = cfg->cbndx + 1; else - cfg->asid = cfg->cbndx + smmu->cavium_id_base; + cfg->asid = cfg->cbndx; + + smmu_domain->smmu = smmu; + if (smmu->impl && smmu->impl->init_context) { + ret = smmu->impl->init_context(smmu_domain); + if (ret) + goto out_unlock; + } pgtbl_cfg = (struct io_pgtable_cfg) { .pgsize_bitmap = smmu->pgsize_bitmap, @@ -765,7 +733,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, if (smmu_domain->non_strict) pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT; - smmu_domain->smmu = smmu; pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain); if (!pgtbl_ops) { ret = -ENOMEM; diff --git a/drivers/iommu/arm-smmu.h b/drivers/iommu/arm-smmu.h index 
ddafe872a396..611ed742e56f 100644 --- a/drivers/iommu/arm-smmu.h +++ b/drivers/iommu/arm-smmu.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -270,14 +271,50 @@ struct arm_smmu_device { struct clk_bulk_data *clks; int num_clks; - u32 cavium_id_base; /* Specific to Cavium */ - spinlock_t global_sync_lock; /* IOMMU core code handle */ struct iommu_device iommu; }; +enum arm_smmu_context_fmt { + ARM_SMMU_CTX_FMT_NONE, + ARM_SMMU_CTX_FMT_AARCH64, + ARM_SMMU_CTX_FMT_AARCH32_L, + ARM_SMMU_CTX_FMT_AARCH32_S, +}; + +struct arm_smmu_cfg { + u8 cbndx; + u8 irptndx; + union { + u16 asid; + u16 vmid; + }; + enum arm_smmu_cbar_type cbar; + enum arm_smmu_context_fmt fmt; +}; +#define INVALID_IRPTNDX 0xff + +enum arm_smmu_domain_stage { + ARM_SMMU_DOMAIN_S1 = 0, + ARM_SMMU_DOMAIN_S2, + ARM_SMMU_DOMAIN_NESTED, + ARM_SMMU_DOMAIN_BYPASS, +}; + +struct arm_smmu_domain { + struct arm_smmu_device *smmu; + struct io_pgtable_ops *pgtbl_ops; + const struct iommu_gather_ops *tlb_ops; + struct arm_smmu_cfg cfg; + enum arm_smmu_domain_stage stage; + bool non_strict; + struct mutex init_mutex; /* Protects smmu pointer */ + spinlock_t cb_lock; /* Serialises ATS1* ops and TLB syncs */ + struct iommu_domain domain; +}; + /* Implementation details, yay! */ struct arm_smmu_impl { @@ -289,6 +326,7 @@ struct arm_smmu_impl { u64 val); int (*cfg_probe)(struct arm_smmu_device *smmu); int (*reset)(struct arm_smmu_device *smmu); + int (*init_context)(struct arm_smmu_domain *smmu_domain); }; static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n)
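
(To round off the series with a purely hypothetical illustration, none of which appears in the patches themselves: the subclassing trick used for the Cavium workaround generalises to any implementation that needs private per-instance state alongside an init_context hook. The foo_* names and the ASID offset are invented for the example.)

#include "arm-smmu.h"

/* Hypothetical wrapper carrying per-instance state, mirroring cavium_smmu. */
struct foo_smmu {
	struct arm_smmu_device smmu;
	u32 quirk_asid_offset;
};

static int foo_init_context(struct arm_smmu_domain *smmu_domain)
{
	struct foo_smmu *fs = container_of(smmu_domain->smmu,
					   struct foo_smmu, smmu);

	/* Example quirk: offset stage-1 ASIDs by an integration-specific value */
	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1)
		smmu_domain->cfg.asid += fs->quirk_asid_offset;

	return 0;
}

static const struct arm_smmu_impl foo_impl = {
	.init_context = foo_init_context,
};

struct arm_smmu_device *foo_smmu_impl_init(struct arm_smmu_device *smmu)
{
	struct foo_smmu *fs;

	fs = devm_kzalloc(smmu->dev, sizeof(*fs), GFP_KERNEL);
	if (!fs)
		return ERR_PTR(-ENOMEM);

	fs->smmu = *smmu;
	fs->smmu.impl = &foo_impl;
	fs->quirk_asid_offset = 0x100;	/* made-up value */
	devm_kfree(smmu->dev, smmu);

	return &fs->smmu;
}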