From patchwork Tue Apr 23 13:14:06 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 13640072
From: Jason Gunthorpe
To: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org,
 Robin Murphy, Will Deacon
Cc: Eric Auger, Moritz Fischer, Michael Shavit, Nicolin Chen,
 patches@lists.linux.dev, Shameerali Kolothum Thodi, Mostafa Saleh
Subject: [PATCH v8 1/9] iommu/arm-smmu-v3: Add an ops indirection to the STE code
Date: Tue, 23 Apr 2024 10:14:06 -0300
Message-ID: <1-v8-4c4298c63951+13484-smmuv3_newapi_p2_jgg@nvidia.com>
In-Reply-To: <0-v8-4c4298c63951+13484-smmuv3_newapi_p2_jgg@nvidia.com>
Prepare to put the CD code into the same mechanism. Add an ops indirection
around all the STE specific code and make the worker functions independent
of the entry content being processed. get_used and sync ops are provided
to hook the correct code.

Signed-off-by: Michael Shavit
Reviewed-by: Michael Shavit
Reviewed-by: Nicolin Chen
Tested-by: Nicolin Chen
Tested-by: Shameer Kolothum
Signed-off-by: Jason Gunthorpe
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 176 ++++++++++++--------
 1 file changed, 104 insertions(+), 72 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 79c18e95dd293e..196aeaf280042c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -42,8 +42,19 @@ enum arm_smmu_msi_index {
 	ARM_SMMU_MAX_MSIS,
 };
 
-static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu,
-				      ioasid_t sid);
+struct arm_smmu_entry_writer_ops;
+struct arm_smmu_entry_writer {
+	const struct arm_smmu_entry_writer_ops *ops;
+	struct arm_smmu_master *master;
+};
+
+struct arm_smmu_entry_writer_ops {
+	void (*get_used)(const __le64 *entry, __le64 *used);
+	void (*sync)(struct arm_smmu_entry_writer *writer);
+};
+
+#define NUM_ENTRY_QWORDS 8
+static_assert(sizeof(struct arm_smmu_ste) == NUM_ENTRY_QWORDS * sizeof(u64));
 
 static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
 	[EVTQ_MSI_INDEX] = {
@@ -972,43 +983,42 @@ void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
  * would be nice if this was complete according to the spec, but minimally it
  * has to capture the bits this driver uses.
  */
-static void arm_smmu_get_ste_used(const struct arm_smmu_ste *ent,
-				  struct arm_smmu_ste *used_bits)
+static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
 {
-	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent->data[0]));
+	unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0]));
 
-	used_bits->data[0] = cpu_to_le64(STRTAB_STE_0_V);
-	if (!(ent->data[0] & cpu_to_le64(STRTAB_STE_0_V)))
+	used_bits[0] = cpu_to_le64(STRTAB_STE_0_V);
+	if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V)))
 		return;
 
-	used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
+	used_bits[0] |= cpu_to_le64(STRTAB_STE_0_CFG);
 
 	/* S1 translates */
 	if (cfg & BIT(0)) {
-		used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
-						  STRTAB_STE_0_S1CTXPTR_MASK |
-						  STRTAB_STE_0_S1CDMAX);
-		used_bits->data[1] |=
+		used_bits[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT |
+					    STRTAB_STE_0_S1CTXPTR_MASK |
+					    STRTAB_STE_0_S1CDMAX);
+		used_bits[1] |=
 			cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR |
 				    STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH |
 				    STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW |
 				    STRTAB_STE_1_EATS);
-		used_bits->data[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
+		used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID);
 	}
 
 	/* S2 translates */
 	if (cfg & BIT(1)) {
-		used_bits->data[1] |=
+		used_bits[1] |=
 			cpu_to_le64(STRTAB_STE_1_EATS | STRTAB_STE_1_SHCFG);
-		used_bits->data[2] |=
+		used_bits[2] |=
 			cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR |
 				    STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI |
 				    STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R);
-		used_bits->data[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
+		used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK);
 	}
 
 	if (cfg == STRTAB_STE_0_CFG_BYPASS)
-		used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
+		used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG);
 }
 
 /*
@@ -1017,57 +1027,55 @@ static void arm_smmu_get_ste_used(const struct arm_smmu_ste *ent,
  * unused_update is an intermediate value of entry that has unused bits set to
  * their new values.
  */
-static u8 arm_smmu_entry_qword_diff(const struct arm_smmu_ste *entry,
-				    const struct arm_smmu_ste *target,
-				    struct arm_smmu_ste *unused_update)
+static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
+				    const __le64 *entry, const __le64 *target,
+				    __le64 *unused_update)
 {
-	struct arm_smmu_ste target_used = {};
-	struct arm_smmu_ste cur_used = {};
+	__le64 target_used[NUM_ENTRY_QWORDS] = {};
+	__le64 cur_used[NUM_ENTRY_QWORDS] = {};
 	u8 used_qword_diff = 0;
 	unsigned int i;
 
-	arm_smmu_get_ste_used(entry, &cur_used);
-	arm_smmu_get_ste_used(target, &target_used);
+	writer->ops->get_used(entry, cur_used);
+	writer->ops->get_used(target, target_used);
 
-	for (i = 0; i != ARRAY_SIZE(target_used.data); i++) {
+	for (i = 0; i != NUM_ENTRY_QWORDS; i++) {
 		/*
 		 * Check that masks are up to date, the make functions are not
 		 * allowed to set a bit to 1 if the used function doesn't say it
 		 * is used.
 		 */
-		WARN_ON_ONCE(target->data[i] & ~target_used.data[i]);
+		WARN_ON_ONCE(target[i] & ~target_used[i]);
 
 		/* Bits can change because they are not currently being used */
-		unused_update->data[i] = (entry->data[i] & cur_used.data[i]) |
-					 (target->data[i] & ~cur_used.data[i]);
+		unused_update[i] = (entry[i] & cur_used[i]) |
+				   (target[i] & ~cur_used[i]);
 
 		/*
 		 * Each bit indicates that a used bit in a qword needs to be
 		 * changed after unused_update is applied.
 		 */
-		if ((unused_update->data[i] & target_used.data[i]) !=
-		    target->data[i])
+		if ((unused_update[i] & target_used[i]) != target[i])
 			used_qword_diff |= 1 << i;
 	}
 	return used_qword_diff;
 }
 
-static bool entry_set(struct arm_smmu_device *smmu, ioasid_t sid,
-		      struct arm_smmu_ste *entry,
-		      const struct arm_smmu_ste *target, unsigned int start,
+static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry,
+		      const __le64 *target, unsigned int start,
 		      unsigned int len)
 {
 	bool changed = false;
 	unsigned int i;
 
 	for (i = start; len != 0; len--, i++) {
-		if (entry->data[i] != target->data[i]) {
-			WRITE_ONCE(entry->data[i], target->data[i]);
+		if (entry[i] != target[i]) {
+			WRITE_ONCE(entry[i], target[i]);
 			changed = true;
 		}
 	}
 
 	if (changed)
-		arm_smmu_sync_ste_for_sid(smmu, sid);
+		writer->ops->sync(writer);
 
 	return changed;
 }
@@ -1097,24 +1105,21 @@ static bool entry_set(struct arm_smmu_device *smmu, ioasid_t sid,
  * V=0 process. This relies on the IGNORED behavior described in the
  * specification.
  */
-static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
-			       struct arm_smmu_ste *entry,
-			       const struct arm_smmu_ste *target)
+static void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer,
+				 __le64 *entry, const __le64 *target)
 {
-	unsigned int num_entry_qwords = ARRAY_SIZE(target->data);
-	struct arm_smmu_device *smmu = master->smmu;
-	struct arm_smmu_ste unused_update;
+	__le64 unused_update[NUM_ENTRY_QWORDS];
 	u8 used_qword_diff;
 
 	used_qword_diff =
-		arm_smmu_entry_qword_diff(entry, target, &unused_update);
+		arm_smmu_entry_qword_diff(writer, entry, target, unused_update);
 	if (hweight8(used_qword_diff) == 1) {
 		/*
 		 * Only one qword needs its used bits to be changed. This is a
-		 * hitless update, update all bits the current STE is ignoring
-		 * to their new values, then update a single "critical qword" to
-		 * change the STE and finally 0 out any bits that are now unused
-		 * in the target configuration.
+		 * hitless update, update all bits the current STE/CD is
+		 * ignoring to their new values, then update a single "critical
+		 * qword" to change the STE/CD and finally 0 out any bits that
+		 * are now unused in the target configuration.
 		 */
 		unsigned int critical_qword_index = ffs(used_qword_diff) - 1;
@@ -1123,22 +1128,21 @@ static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
 		 * writing it in the next step anyways. This can save a sync
 		 * when the only change is in that qword.
 		 */
-		unused_update.data[critical_qword_index] =
-			entry->data[critical_qword_index];
-		entry_set(smmu, sid, entry, &unused_update, 0, num_entry_qwords);
-		entry_set(smmu, sid, entry, target, critical_qword_index, 1);
-		entry_set(smmu, sid, entry, target, 0, num_entry_qwords);
+		unused_update[critical_qword_index] =
+			entry[critical_qword_index];
+		entry_set(writer, entry, unused_update, 0, NUM_ENTRY_QWORDS);
+		entry_set(writer, entry, target, critical_qword_index, 1);
+		entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS);
 	} else if (used_qword_diff) {
 		/*
 		 * At least two qwords need their inuse bits to be changed. This
 		 * requires a breaking update, zero the V bit, write all qwords
 		 * but 0, then set qword 0
 		 */
-		unused_update.data[0] = entry->data[0] &
-					cpu_to_le64(~STRTAB_STE_0_V);
-		entry_set(smmu, sid, entry, &unused_update, 0, 1);
-		entry_set(smmu, sid, entry, target, 1, num_entry_qwords - 1);
-		entry_set(smmu, sid, entry, target, 0, 1);
+		unused_update[0] = 0;
+		entry_set(writer, entry, unused_update, 0, 1);
+		entry_set(writer, entry, target, 1, NUM_ENTRY_QWORDS - 1);
+		entry_set(writer, entry, target, 0, 1);
 	} else {
 		/*
 		 * No inuse bit changed. Sanity check that all unused bits are 0
@@ -1146,18 +1150,7 @@ static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
 		 * compute_qword_diff().
 		 */
 		WARN_ON_ONCE(
-			entry_set(smmu, sid, entry, target, 0, num_entry_qwords));
-	}
-
-	/* It's likely that we'll want to use the new STE soon */
-	if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH)) {
-		struct arm_smmu_cmdq_ent
-			prefetch_cmd = { .opcode = CMDQ_OP_PREFETCH_CFG,
-					 .prefetch = {
-						 .sid = sid,
-					 } };
-
-		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+			entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS));
 	}
 }
 
@@ -1430,17 +1423,56 @@ arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
 	WRITE_ONCE(*dst, cpu_to_le64(val));
 }
 
-static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+struct arm_smmu_ste_writer {
+	struct arm_smmu_entry_writer writer;
+	u32 sid;
+};
+
+static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
 {
+	struct arm_smmu_ste_writer *ste_writer =
+		container_of(writer, struct arm_smmu_ste_writer, writer);
 	struct arm_smmu_cmdq_ent cmd = {
 		.opcode	= CMDQ_OP_CFGI_STE,
 		.cfgi	= {
-			.sid	= sid,
+			.sid	= ste_writer->sid,
 			.leaf	= true,
 		},
 	};
 
-	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+	arm_smmu_cmdq_issue_cmd_with_sync(writer->master->smmu, &cmd);
+}
+
+static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = {
+	.sync = arm_smmu_ste_writer_sync_entry,
+	.get_used = arm_smmu_get_ste_used,
+};
+
+static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
+			       struct arm_smmu_ste *ste,
+			       const struct arm_smmu_ste *target)
+{
+	struct arm_smmu_device *smmu = master->smmu;
+	struct arm_smmu_ste_writer ste_writer = {
+		.writer = {
+			.ops = &arm_smmu_ste_writer_ops,
+			.master = master,
+		},
+		.sid = sid,
+	};
+
+	arm_smmu_write_entry(&ste_writer.writer, ste->data, target->data);
+
+	/* It's likely that we'll want to use the new STE soon */
+	if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH)) {
+		struct arm_smmu_cmdq_ent
+			prefetch_cmd = { .opcode = CMDQ_OP_PREFETCH_CFG,
					 .prefetch = {
						 .sid = sid,
					 } };
+
+		arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+	}
 }
 
 static void arm_smmu_make_abort_ste(struct arm_smmu_ste *target)