From patchwork Tue Apr 16 19:28:12 2024
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 13632435
From: Jason Gunthorpe
To: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org, Robin Murphy, Will Deacon
Cc: Eric Auger, Moritz Fischer, Moritz Fischer, Michael Shavit, Nicolin Chen, patches@lists.linux.dev, Shameerali Kolothum Thodi, Mostafa Saleh
Subject: [PATCH v7 1/9] iommu/arm-smmu-v3: Add an ops indirection to the STE code
Date: Tue, 16 Apr 2024 16:28:12 -0300
Message-ID: <1-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>

Prepare to put the CD code into the same mechanism. Add an ops
indirection around all the STE specific code and make the worker
functions independent of the entry content being processed. get_used
and sync ops are provided to hook the correct code.
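[Editor's sketch, not part of the patch] For readers coming to the series cold, the following self-contained userspace C sketch shows the shape of the indirection this patch introduces. All names here (entry_writer, toy_get_used, write_entry, ...) are illustrative only; the driver's real structures are in the diff below. The idea: a writer object carries get_used/sync hooks, so one generic update routine can decide between a hitless update and a V=0 break-before-make without knowing whether it is touching an STE or a CD.

#include <stdint.h>
#include <stdio.h>

#define NUM_ENTRY_QWORDS 8

struct entry_writer;

/* Per-entry-type hooks; the patch calls the real thing arm_smmu_entry_writer_ops. */
struct entry_writer_ops {
	uint64_t v_bit;                            /* which bit of qword 0 is the valid bit */
	void (*get_used)(const uint64_t *entry, uint64_t *used);
	void (*sync)(struct entry_writer *writer); /* make the (pretend) SMMU observe the write */
};

struct entry_writer {
	const struct entry_writer_ops *ops;
};

/* Toy "STE-like" rules: a cfg field (bits 1-3) only matters while V=1. */
static void toy_get_used(const uint64_t *entry, uint64_t *used)
{
	used[0] = 0x1;
	if (entry[0] & 0x1)
		used[0] |= 0xe;
}

static void toy_sync(struct entry_writer *writer)
{
	(void)writer;
	puts("CFGI + CMD_SYNC would go here");
}

static const struct entry_writer_ops toy_ops = {
	.v_bit = 0x1,
	.get_used = toy_get_used,
	.sync = toy_sync,
};

/*
 * Generic worker: never looks at the STE or CD layout directly, only at what
 * the ops report as "used", so the same routine can serve both entry types.
 */
static void write_entry(struct entry_writer *writer, uint64_t *entry,
			const uint64_t *target)
{
	uint64_t cur_used[NUM_ENTRY_QWORDS] = {0};
	uint64_t tgt_used[NUM_ENTRY_QWORDS] = {0};
	unsigned int used_qword_diff = 0;
	unsigned int i;

	writer->ops->get_used(entry, cur_used);
	writer->ops->get_used(target, tgt_used);

	/* Which qwords change in bits the hardware actually inspects? */
	for (i = 0; i != NUM_ENTRY_QWORDS; i++)
		if ((entry[i] & cur_used[i]) != (target[i] & tgt_used[i]))
			used_qword_diff |= 1u << i;

	if (used_qword_diff & (used_qword_diff - 1)) {
		/* More than one qword of in-use bits changes: tear down via V=0 first. */
		entry[0] &= ~writer->ops->v_bit;
		writer->ops->sync(writer);
	}
	for (i = 0; i != NUM_ENTRY_QWORDS; i++)
		entry[i] = target[i];
	writer->ops->sync(writer);
}

int main(void)
{
	struct entry_writer w = { .ops = &toy_ops };
	uint64_t live[NUM_ENTRY_QWORDS] = { 0 };
	uint64_t target[NUM_ENTRY_QWORDS] = { 0x7 };

	write_entry(&w, live, target);
	return 0;
}

The real arm_smmu_write_entry() in the diff is more careful: in the hitless case it first updates bits the current entry ignores, then the single "critical qword", then clears newly unused bits, with a sync between each step. The sketch keeps only the decision between the two paths.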
Signed-off-by: Michael Shavit Reviewed-by: Michael Shavit Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Signed-off-by: Jason Gunthorpe Reviewed-by: Nicolin Chen --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 178 ++++++++++++-------- 1 file changed, 106 insertions(+), 72 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 79c18e95dd293e..bf105e914d38b1 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -42,8 +42,20 @@ enum arm_smmu_msi_index { ARM_SMMU_MAX_MSIS, }; -static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, - ioasid_t sid); +struct arm_smmu_entry_writer_ops; +struct arm_smmu_entry_writer { + const struct arm_smmu_entry_writer_ops *ops; + struct arm_smmu_master *master; +}; + +struct arm_smmu_entry_writer_ops { + __le64 v_bit; + void (*get_used)(const __le64 *entry, __le64 *used); + void (*sync)(struct arm_smmu_entry_writer *writer); +}; + +#define NUM_ENTRY_QWORDS 8 +static_assert(sizeof(struct arm_smmu_ste) == NUM_ENTRY_QWORDS * sizeof(u64)); static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = { [EVTQ_MSI_INDEX] = { @@ -972,43 +984,42 @@ void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid) * would be nice if this was complete according to the spec, but minimally it * has to capture the bits this driver uses. */ -static void arm_smmu_get_ste_used(const struct arm_smmu_ste *ent, - struct arm_smmu_ste *used_bits) +static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits) { - unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent->data[0])); + unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0])); - used_bits->data[0] = cpu_to_le64(STRTAB_STE_0_V); - if (!(ent->data[0] & cpu_to_le64(STRTAB_STE_0_V))) + used_bits[0] = cpu_to_le64(STRTAB_STE_0_V); + if (!(ent[0] & cpu_to_le64(STRTAB_STE_0_V))) return; - used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_CFG); + used_bits[0] |= cpu_to_le64(STRTAB_STE_0_CFG); /* S1 translates */ if (cfg & BIT(0)) { - used_bits->data[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT | - STRTAB_STE_0_S1CTXPTR_MASK | - STRTAB_STE_0_S1CDMAX); - used_bits->data[1] |= + used_bits[0] |= cpu_to_le64(STRTAB_STE_0_S1FMT | + STRTAB_STE_0_S1CTXPTR_MASK | + STRTAB_STE_0_S1CDMAX); + used_bits[1] |= cpu_to_le64(STRTAB_STE_1_S1DSS | STRTAB_STE_1_S1CIR | STRTAB_STE_1_S1COR | STRTAB_STE_1_S1CSH | STRTAB_STE_1_S1STALLD | STRTAB_STE_1_STRW | STRTAB_STE_1_EATS); - used_bits->data[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID); + used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID); } /* S2 translates */ if (cfg & BIT(1)) { - used_bits->data[1] |= + used_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS | STRTAB_STE_1_SHCFG); - used_bits->data[2] |= + used_bits[2] |= cpu_to_le64(STRTAB_STE_2_S2VMID | STRTAB_STE_2_VTCR | STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2ENDI | STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2R); - used_bits->data[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK); + used_bits[3] |= cpu_to_le64(STRTAB_STE_3_S2TTB_MASK); } if (cfg == STRTAB_STE_0_CFG_BYPASS) - used_bits->data[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG); + used_bits[1] |= cpu_to_le64(STRTAB_STE_1_SHCFG); } /* @@ -1017,57 +1028,55 @@ static void arm_smmu_get_ste_used(const struct arm_smmu_ste *ent, * unused_update is an intermediate value of entry that has unused bits set to * their new values. 
*/ -static u8 arm_smmu_entry_qword_diff(const struct arm_smmu_ste *entry, - const struct arm_smmu_ste *target, - struct arm_smmu_ste *unused_update) +static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer, + const __le64 *entry, const __le64 *target, + __le64 *unused_update) { - struct arm_smmu_ste target_used = {}; - struct arm_smmu_ste cur_used = {}; + __le64 target_used[NUM_ENTRY_QWORDS] = {}; + __le64 cur_used[NUM_ENTRY_QWORDS] = {}; u8 used_qword_diff = 0; unsigned int i; - arm_smmu_get_ste_used(entry, &cur_used); - arm_smmu_get_ste_used(target, &target_used); + writer->ops->get_used(entry, cur_used); + writer->ops->get_used(target, target_used); - for (i = 0; i != ARRAY_SIZE(target_used.data); i++) { + for (i = 0; i != NUM_ENTRY_QWORDS; i++) { /* * Check that masks are up to date, the make functions are not * allowed to set a bit to 1 if the used function doesn't say it * is used. */ - WARN_ON_ONCE(target->data[i] & ~target_used.data[i]); + WARN_ON_ONCE(target[i] & ~target_used[i]); /* Bits can change because they are not currently being used */ - unused_update->data[i] = (entry->data[i] & cur_used.data[i]) | - (target->data[i] & ~cur_used.data[i]); + unused_update[i] = (entry[i] & cur_used[i]) | + (target[i] & ~cur_used[i]); /* * Each bit indicates that a used bit in a qword needs to be * changed after unused_update is applied. */ - if ((unused_update->data[i] & target_used.data[i]) != - target->data[i]) + if ((unused_update[i] & target_used[i]) != target[i]) used_qword_diff |= 1 << i; } return used_qword_diff; } -static bool entry_set(struct arm_smmu_device *smmu, ioasid_t sid, - struct arm_smmu_ste *entry, - const struct arm_smmu_ste *target, unsigned int start, +static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry, + const __le64 *target, unsigned int start, unsigned int len) { bool changed = false; unsigned int i; for (i = start; len != 0; len--, i++) { - if (entry->data[i] != target->data[i]) { - WRITE_ONCE(entry->data[i], target->data[i]); + if (entry[i] != target[i]) { + WRITE_ONCE(entry[i], target[i]); changed = true; } } if (changed) - arm_smmu_sync_ste_for_sid(smmu, sid); + writer->ops->sync(writer); return changed; } @@ -1097,24 +1106,21 @@ static bool entry_set(struct arm_smmu_device *smmu, ioasid_t sid, * V=0 process. This relies on the IGNORED behavior described in the * specification. */ -static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid, - struct arm_smmu_ste *entry, - const struct arm_smmu_ste *target) +static void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, + __le64 *entry, const __le64 *target) { - unsigned int num_entry_qwords = ARRAY_SIZE(target->data); - struct arm_smmu_device *smmu = master->smmu; - struct arm_smmu_ste unused_update; + __le64 unused_update[NUM_ENTRY_QWORDS]; u8 used_qword_diff; used_qword_diff = - arm_smmu_entry_qword_diff(entry, target, &unused_update); + arm_smmu_entry_qword_diff(writer, entry, target, unused_update); if (hweight8(used_qword_diff) == 1) { /* * Only one qword needs its used bits to be changed. This is a - * hitless update, update all bits the current STE is ignoring - * to their new values, then update a single "critical qword" to - * change the STE and finally 0 out any bits that are now unused - * in the target configuration. 
+ * hitless update, update all bits the current STE/CD is + * ignoring to their new values, then update a single "critical + * qword" to change the STE/CD and finally 0 out any bits that + * are now unused in the target configuration. */ unsigned int critical_qword_index = ffs(used_qword_diff) - 1; @@ -1123,22 +1129,21 @@ static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid, * writing it in the next step anyways. This can save a sync * when the only change is in that qword. */ - unused_update.data[critical_qword_index] = - entry->data[critical_qword_index]; - entry_set(smmu, sid, entry, &unused_update, 0, num_entry_qwords); - entry_set(smmu, sid, entry, target, critical_qword_index, 1); - entry_set(smmu, sid, entry, target, 0, num_entry_qwords); + unused_update[critical_qword_index] = + entry[critical_qword_index]; + entry_set(writer, entry, unused_update, 0, NUM_ENTRY_QWORDS); + entry_set(writer, entry, target, critical_qword_index, 1); + entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS); } else if (used_qword_diff) { /* * At least two qwords need their inuse bits to be changed. This * requires a breaking update, zero the V bit, write all qwords * but 0, then set qword 0 */ - unused_update.data[0] = entry->data[0] & - cpu_to_le64(~STRTAB_STE_0_V); - entry_set(smmu, sid, entry, &unused_update, 0, 1); - entry_set(smmu, sid, entry, target, 1, num_entry_qwords - 1); - entry_set(smmu, sid, entry, target, 0, 1); + unused_update[0] = entry[0] & (~writer->ops->v_bit); + entry_set(writer, entry, unused_update, 0, 1); + entry_set(writer, entry, target, 1, NUM_ENTRY_QWORDS - 1); + entry_set(writer, entry, target, 0, 1); } else { /* * No inuse bit changed. Sanity check that all unused bits are 0 @@ -1146,18 +1151,7 @@ static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid, * compute_qword_diff(). 
*/ WARN_ON_ONCE( - entry_set(smmu, sid, entry, target, 0, num_entry_qwords)); - } - - /* It's likely that we'll want to use the new STE soon */ - if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH)) { - struct arm_smmu_cmdq_ent - prefetch_cmd = { .opcode = CMDQ_OP_PREFETCH_CFG, - .prefetch = { - .sid = sid, - } }; - - arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd); + entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS)); } } @@ -1430,17 +1424,57 @@ arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc) WRITE_ONCE(*dst, cpu_to_le64(val)); } -static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid) +struct arm_smmu_ste_writer { + struct arm_smmu_entry_writer writer; + u32 sid; +}; + +static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer) { + struct arm_smmu_ste_writer *ste_writer = + container_of(writer, struct arm_smmu_ste_writer, writer); struct arm_smmu_cmdq_ent cmd = { .opcode = CMDQ_OP_CFGI_STE, .cfgi = { - .sid = sid, + .sid = ste_writer->sid, .leaf = true, }, }; - arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd); + arm_smmu_cmdq_issue_cmd_with_sync(writer->master->smmu, &cmd); +} + +static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = { + .sync = arm_smmu_ste_writer_sync_entry, + .get_used = arm_smmu_get_ste_used, + .v_bit = cpu_to_le64(STRTAB_STE_0_V), +}; + +static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid, + struct arm_smmu_ste *ste, + const struct arm_smmu_ste *target) +{ + struct arm_smmu_device *smmu = master->smmu; + struct arm_smmu_ste_writer ste_writer = { + .writer = { + .ops = &arm_smmu_ste_writer_ops, + .master = master, + }, + .sid = sid, + }; + + arm_smmu_write_entry(&ste_writer.writer, ste->data, target->data); + + /* It's likely that we'll want to use the new STE soon */ + if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH)) { + struct arm_smmu_cmdq_ent + prefetch_cmd = { .opcode = CMDQ_OP_PREFETCH_CFG, + .prefetch = { + .sid = sid, + } }; + + arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd); + } } static void arm_smmu_make_abort_ste(struct arm_smmu_ste *target) From patchwork Tue Apr 16 19:28:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 13632582 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 38779C4345F for ; Tue, 16 Apr 2024 20:31:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wIUHOFJ1Gqcu3Ds6Nm9oabo1h2a/viCRIhY/bhNkT5Y=; b=lkJnbdJqK4W1Mf Viuf7CHQxHJL6mBoIsacQiqK3dyJwKaJ+YTK0Ppkx8mDZsayLenxQu5MldBSakela6IHr+vYKLMPP aWIzxohGb0l3Y+ywBzDkgCGdSp3iD+9teszOFvpfEwJz3+tZ+ee3yq72GDQXLD+gM9T/hqp/8wOM9 HI8VvSDTokl3MDxadSZ7dfyZFl9yOFMGzoKFqDnOAd8Dg1kjFNtxOELrdSRFGiYDyjp6BxM+U3u/F BJV6pJxlX8SoW4hB3fuDX63Lo5jC7fG70SoDBdcxnzOibcK4BQjUL3NaQj/jWFSE4bd7OTO1sv3H6 
From: Jason Gunthorpe
To: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org, Robin Murphy, Will Deacon
Cc: Eric Auger, Moritz Fischer, Moritz Fischer, Michael Shavit, Nicolin Chen, patches@lists.linux.dev, Shameerali Kolothum Thodi, Mostafa Saleh
Subject: [PATCH v7 2/9] iommu/arm-smmu-v3: Make CD programming use arm_smmu_write_entry()
Date: Tue, 16 Apr 2024 16:28:13 -0300
Message-ID: <2-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
X-MS-Exchange-CrossTenant-UserPrincipalName: etH6iGvtLLogTtk+yaU9oFC2hPmbFf5vmyx6cV9+GENSNA1L1y6WHdqNtbGEKbJ/ X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8213 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240416_122901_147061_F0ECD148 X-CRM114-Status: GOOD ( 18.54 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org CD table entries and STE's have the same essential programming sequence, just with different types. Have arm_smmu_write_ctx_desc() generate a target CD and call arm_smmu_write_entry() to do the programming. Due to the way the target CD is generated by modifying the existing CD this alone is not enough for the CD callers to be freed of the ordering requirements. The following patches will make the rest of the CD flow mirror the STE flow with precise CD contents generated in all cases. Signed-off-by: Michael Shavit Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Reviewed-by: Moritz Fischer Signed-off-by: Jason Gunthorpe Reviewed-by: Nicolin Chen --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 94 ++++++++++++++++----- 1 file changed, 74 insertions(+), 20 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index bf105e914d38b1..3983de90c2fa01 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -56,6 +56,7 @@ struct arm_smmu_entry_writer_ops { #define NUM_ENTRY_QWORDS 8 static_assert(sizeof(struct arm_smmu_ste) == NUM_ENTRY_QWORDS * sizeof(u64)); +static_assert(sizeof(struct arm_smmu_cd) == NUM_ENTRY_QWORDS * sizeof(u64)); static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = { [EVTQ_MSI_INDEX] = { @@ -1231,6 +1232,67 @@ static struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, return &l1_desc->l2ptr[idx]; } +struct arm_smmu_cd_writer { + struct arm_smmu_entry_writer writer; + unsigned int ssid; +}; + +static void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits) +{ + used_bits[0] = cpu_to_le64(CTXDESC_CD_0_V); + if (!(ent[0] & cpu_to_le64(CTXDESC_CD_0_V))) + return; + memset(used_bits, 0xFF, sizeof(struct arm_smmu_cd)); + + /* EPD0 means T0SZ/TG0/IR0/OR0/SH0/TTB0 are IGNORED */ + if (ent[0] & cpu_to_le64(CTXDESC_CD_0_TCR_EPD0)) { + used_bits[0] &= ~cpu_to_le64( + CTXDESC_CD_0_TCR_T0SZ | CTXDESC_CD_0_TCR_TG0 | + CTXDESC_CD_0_TCR_IRGN0 | CTXDESC_CD_0_TCR_ORGN0 | + CTXDESC_CD_0_TCR_SH0); + used_bits[1] &= ~cpu_to_le64(CTXDESC_CD_1_TTB0_MASK); + } +} + +static void arm_smmu_cd_writer_sync_entry(struct arm_smmu_entry_writer *writer) +{ + struct arm_smmu_cd_writer *cd_writer = + container_of(writer, struct arm_smmu_cd_writer, writer); + + arm_smmu_sync_cd(writer->master, cd_writer->ssid, true); +} + +static const struct arm_smmu_entry_writer_ops arm_smmu_cd_writer_ops = { + .sync = arm_smmu_cd_writer_sync_entry, + .get_used = arm_smmu_get_cd_used, + .v_bit = cpu_to_le64(CTXDESC_CD_0_V), +}; + +static void arm_smmu_write_cd_entry(struct arm_smmu_master *master, int ssid, + struct arm_smmu_cd *cdptr, + const struct arm_smmu_cd *target) +{ + struct arm_smmu_cd_writer cd_writer = { + .writer = { + .ops = &arm_smmu_cd_writer_ops, + .master = master, + }, + .ssid = ssid, + }; + + arm_smmu_write_entry(&cd_writer.writer, 
cdptr->data, target->data); +} + +static void arm_smmu_clean_cd_entry(struct arm_smmu_cd *target) +{ + struct arm_smmu_cd used = {}; + int i; + + arm_smmu_get_cd_used(target->data, used.data); + for (i = 0; i != ARRAY_SIZE(target->data); i++) + target->data[i] &= used.data[i]; +} + int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid, struct arm_smmu_ctx_desc *cd) { @@ -1247,17 +1309,20 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid, */ u64 val; bool cd_live; - struct arm_smmu_cd *cdptr; + struct arm_smmu_cd target; + struct arm_smmu_cd *cdptr = ⌖ + struct arm_smmu_cd *cd_table_entry; struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; struct arm_smmu_device *smmu = master->smmu; if (WARN_ON(ssid >= (1 << cd_table->s1cdmax))) return -E2BIG; - cdptr = arm_smmu_get_cd_ptr(master, ssid); - if (!cdptr) + cd_table_entry = arm_smmu_get_cd_ptr(master, ssid); + if (!cd_table_entry) return -ENOMEM; + target = *cd_table_entry; val = le64_to_cpu(cdptr->data[0]); cd_live = !!(val & CTXDESC_CD_0_V); @@ -1279,13 +1344,6 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid, cdptr->data[2] = 0; cdptr->data[3] = cpu_to_le64(cd->mair); - /* - * STE may be live, and the SMMU might read dwords of this CD in any - * order. Ensure that it observes valid values before reading - * V=1. - */ - arm_smmu_sync_cd(master, ssid, true); - val = cd->tcr | #ifdef __BIG_ENDIAN CTXDESC_CD_0_ENDI | @@ -1299,18 +1357,14 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid, if (cd_table->stall_enabled) val |= CTXDESC_CD_0_S; } - + cdptr->data[0] = cpu_to_le64(val); /* - * The SMMU accesses 64-bit values atomically. See IHI0070Ca 3.21.3 - * "Configuration structures and configuration invalidation completion" - * - * The size of single-copy atomic reads made by the SMMU is - * IMPLEMENTATION DEFINED but must be at least 64 bits. Any single - * field within an aligned 64-bit span of a structure can be altered - * without first making the structure invalid. + * Since the above is updating the CD entry based on the current value + * without zeroing unused bits it needs fixing before being passed to + * the programming logic. 
*/ - WRITE_ONCE(cdptr->data[0], cpu_to_le64(val)); - arm_smmu_sync_cd(master, ssid, true); + arm_smmu_clean_cd_entry(&target); + arm_smmu_write_cd_entry(master, ssid, cd_table_entry, &target); return 0; }
From patchwork Tue Apr 16 19:28:14 2024
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 13632434
From: Jason Gunthorpe
To: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org, Robin Murphy, Will Deacon
Cc: Eric Auger, Moritz Fischer, Moritz Fischer, Michael Shavit, Nicolin Chen, patches@lists.linux.dev, Shameerali Kolothum Thodi, Mostafa Saleh
Subject: [PATCH v7 3/9] iommu/arm-smmu-v3: Move the CD generation for S1 domains into a function
Date: Tue, 16 Apr 2024 16:28:14 -0300
Message-ID: <3-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>

Introduce arm_smmu_make_s1_cd() to build the CD from the paging S1 domain,
and reorganize all the places programming S1 domain CD table entries to
call it.

Split arm_smmu_update_s1_domain_cd_entry() from
arm_smmu_update_ctx_desc_devices() so that the S1 path has its own call
chain separate from the unrelated SVA path.

arm_smmu_update_s1_domain_cd_entry() only works on S1 domains attached to
RIDs and refreshes all their CDs.

Remove the forced clear of the CD during S1 domain attach,
arm_smmu_write_cd_entry() will do this automatically if necessary.
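[Editor's sketch] Condensed from the hunks below (error handling and locking trimmed), the caller-side sequence the new helpers give S1 domain code looks roughly like this; the same three calls appear in both the attach path and the SVA refresh path in the diff:

	struct arm_smmu_cd target_cd;
	struct arm_smmu_cd *cdptr;

	/* Find (or allocate) the CD table slot for this SSID */
	cdptr = arm_smmu_get_cd_ptr(master, IOMMU_NO_PASID);
	if (!cdptr)
		return -ENOMEM;

	/* Build the complete CD contents on the stack from the S1 paging domain */
	arm_smmu_make_s1_cd(&target_cd, master, smmu_domain);

	/* Program it into the live table through the shared entry-writer logic */
	arm_smmu_write_cd_entry(master, IOMMU_NO_PASID, cdptr, &target_cd);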
Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Reviewed-by: Michael Shavit Signed-off-by: Jason Gunthorpe Reviewed-by: Nicolin Chen Reviewed-by: Mostafa Saleh --- .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 25 +++++++- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 60 +++++++++++++------ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 9 +++ 3 files changed, 76 insertions(+), 18 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index 41b44baef15e80..d159f60480935e 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -53,6 +53,29 @@ static void arm_smmu_update_ctx_desc_devices(struct arm_smmu_domain *smmu_domain spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); } +static void +arm_smmu_update_s1_domain_cd_entry(struct arm_smmu_domain *smmu_domain) +{ + struct arm_smmu_master *master; + struct arm_smmu_cd target_cd; + unsigned long flags; + + spin_lock_irqsave(&smmu_domain->devices_lock, flags); + list_for_each_entry(master, &smmu_domain->devices, domain_head) { + struct arm_smmu_cd *cdptr; + + /* S1 domains only support RID attachment right now */ + cdptr = arm_smmu_get_cd_ptr(master, IOMMU_NO_PASID); + if (WARN_ON(!cdptr)) + continue; + + arm_smmu_make_s1_cd(&target_cd, master, smmu_domain); + arm_smmu_write_cd_entry(master, IOMMU_NO_PASID, cdptr, + &target_cd); + } + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); +} + /* * Check if the CPU ASID is available on the SMMU side. If a private context * descriptor is using it, try to replace it. @@ -96,7 +119,7 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid) * be some overlap between use of both ASIDs, until we invalidate the * TLB. */ - arm_smmu_update_ctx_desc_devices(smmu_domain, IOMMU_NO_PASID, cd); + arm_smmu_update_s1_domain_cd_entry(smmu_domain); /* Invalidate TLB entries previously associated with that context */ arm_smmu_tlb_inv_asid(smmu, asid); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 3983de90c2fa01..d24fa13a52b4e0 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1204,8 +1204,8 @@ static void arm_smmu_write_cd_l1_desc(__le64 *dst, WRITE_ONCE(*dst, cpu_to_le64(val)); } -static struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, - u32 ssid) +struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, + u32 ssid) { __le64 *l1ptr; unsigned int idx; @@ -1268,9 +1268,9 @@ static const struct arm_smmu_entry_writer_ops arm_smmu_cd_writer_ops = { .v_bit = cpu_to_le64(CTXDESC_CD_0_V), }; -static void arm_smmu_write_cd_entry(struct arm_smmu_master *master, int ssid, - struct arm_smmu_cd *cdptr, - const struct arm_smmu_cd *target) +void arm_smmu_write_cd_entry(struct arm_smmu_master *master, int ssid, + struct arm_smmu_cd *cdptr, + const struct arm_smmu_cd *target) { struct arm_smmu_cd_writer cd_writer = { .writer = { @@ -1283,6 +1283,32 @@ static void arm_smmu_write_cd_entry(struct arm_smmu_master *master, int ssid, arm_smmu_write_entry(&cd_writer.writer, cdptr->data, target->data); } +void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, + struct arm_smmu_master *master, + struct arm_smmu_domain *smmu_domain) +{ + struct arm_smmu_ctx_desc *cd = &smmu_domain->cd; + + memset(target, 0, sizeof(*target)); + + target->data[0] = cpu_to_le64( + cd->tcr | +#ifdef __BIG_ENDIAN + CTXDESC_CD_0_ENDI | +#endif + CTXDESC_CD_0_V | + 
CTXDESC_CD_0_AA64 | + (master->stall_enabled ? CTXDESC_CD_0_S : 0) | + CTXDESC_CD_0_R | + CTXDESC_CD_0_A | + CTXDESC_CD_0_ASET | + FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) + ); + + target->data[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK); + target->data[3] = cpu_to_le64(cd->mair); +} + static void arm_smmu_clean_cd_entry(struct arm_smmu_cd *target) { struct arm_smmu_cd used = {}; @@ -2644,29 +2670,29 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); switch (smmu_domain->stage) { - case ARM_SMMU_DOMAIN_S1: + case ARM_SMMU_DOMAIN_S1: { + struct arm_smmu_cd target_cd; + struct arm_smmu_cd *cdptr; + if (!master->cd_table.cdtab) { ret = arm_smmu_alloc_cd_tables(master); if (ret) goto out_list_del; - } else { - /* - * arm_smmu_write_ctx_desc() relies on the entry being - * invalid to work, clear any existing entry. - */ - ret = arm_smmu_write_ctx_desc(master, IOMMU_NO_PASID, - NULL); - if (ret) - goto out_list_del; } - ret = arm_smmu_write_ctx_desc(master, IOMMU_NO_PASID, &smmu_domain->cd); - if (ret) + cdptr = arm_smmu_get_cd_ptr(master, IOMMU_NO_PASID); + if (!cdptr) { + ret = -ENOMEM; goto out_list_del; + } + arm_smmu_make_s1_cd(&target_cd, master, smmu_domain); + arm_smmu_write_cd_entry(master, IOMMU_NO_PASID, cdptr, + &target_cd); arm_smmu_make_cdtable_ste(&target, master); arm_smmu_install_ste_for_dev(master, &target); break; + } case ARM_SMMU_DOMAIN_S2: arm_smmu_make_s2_domain_ste(&target, master, smmu_domain); arm_smmu_install_ste_for_dev(master, &target); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 4b767e0eeeb682..bb08f087ba39e4 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -751,6 +751,15 @@ extern struct xarray arm_smmu_asid_xa; extern struct mutex arm_smmu_asid_lock; extern struct arm_smmu_ctx_desc quiet_cd; +struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, + u32 ssid); +void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, + struct arm_smmu_master *master, + struct arm_smmu_domain *smmu_domain); +void arm_smmu_write_cd_entry(struct arm_smmu_master *master, int ssid, + struct arm_smmu_cd *cdptr, + const struct arm_smmu_cd *target); + int arm_smmu_write_ctx_desc(struct arm_smmu_master *smmu_master, int ssid, struct arm_smmu_ctx_desc *cd); void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid); From patchwork Tue Apr 16 19:28:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 13632431 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0B469C4345F for ; Tue, 16 Apr 2024 19:29:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; 
From: Jason Gunthorpe
To: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org, Robin Murphy, Will Deacon
Cc: Eric Auger, Moritz Fischer, Moritz Fischer, Michael Shavit, Nicolin Chen, patches@lists.linux.dev, Shameerali Kolothum Thodi, Mostafa Saleh
Subject: [PATCH v7 4/9] iommu/arm-smmu-v3: Consolidate clearing a CD table entry
Date: Tue, 16 Apr 2024 16:28:15 -0300
Message-ID: <4-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
DM6PR12MB3849.namprd12.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Apr 2024 19:28:23.8208 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: J2ihp6sLFoqtVZKL8qg5bmP7ZylHp6fzANzTm7dRCmE2SC25GDc2osDyUjm46cxR X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5912 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240416_122856_060972_A9AF059E X-CRM114-Status: GOOD ( 13.16 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org A cleared entry is all 0's. Make arm_smmu_clear_cd() do this sequence. If we are clearing an entry and for some reason it is not already allocated in the CD table then something has gone wrong. Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Reviewed-by: Michael Shavit Reviewed-by: Nicolin Chen Reviewed-by: Moritz Fischer Reviewed-by: Mostafa Saleh Signed-off-by: Jason Gunthorpe --- .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 2 +- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 20 ++++++++++++++----- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 1 + 3 files changed, 17 insertions(+), 6 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index d159f60480935e..7cf286f7a009fb 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -569,7 +569,7 @@ void arm_smmu_sva_remove_dev_pasid(struct iommu_domain *domain, mutex_lock(&sva_lock); - arm_smmu_write_ctx_desc(master, id, NULL); + arm_smmu_clear_cd(master, id); list_for_each_entry(t, &master->bonds, list) { if (t->mm == mm) { diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index d24fa13a52b4e0..f3df1ec8d258dc 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1309,6 +1309,19 @@ void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, target->data[3] = cpu_to_le64(cd->mair); } +void arm_smmu_clear_cd(struct arm_smmu_master *master, ioasid_t ssid) +{ + struct arm_smmu_cd target = {}; + struct arm_smmu_cd *cdptr; + + if (!master->cd_table.cdtab) + return; + cdptr = arm_smmu_get_cd_ptr(master, ssid); + if (WARN_ON(!cdptr)) + return; + arm_smmu_write_cd_entry(master, ssid, cdptr, &target); +} + static void arm_smmu_clean_cd_entry(struct arm_smmu_cd *target) { struct arm_smmu_cd used = {}; @@ -2696,9 +2709,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) case ARM_SMMU_DOMAIN_S2: arm_smmu_make_s2_domain_ste(&target, master, smmu_domain); arm_smmu_install_ste_for_dev(master, &target); - if (master->cd_table.cdtab) - arm_smmu_write_ctx_desc(master, IOMMU_NO_PASID, - NULL); + arm_smmu_clear_cd(master, IOMMU_NO_PASID); break; } @@ -2746,8 +2757,7 @@ static int arm_smmu_attach_dev_ste(struct device *dev, * arm_smmu_domain->devices to avoid races updating the same context * descriptor from arm_smmu_share_asid(). 
*/ - if (master->cd_table.cdtab) - arm_smmu_write_ctx_desc(master, IOMMU_NO_PASID, NULL); + arm_smmu_clear_cd(master, IOMMU_NO_PASID); return 0; } diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index bb08f087ba39e4..99fd6f24caa818 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -751,6 +751,7 @@ extern struct xarray arm_smmu_asid_xa; extern struct mutex arm_smmu_asid_lock; extern struct arm_smmu_ctx_desc quiet_cd; +void arm_smmu_clear_cd(struct arm_smmu_master *master, ioasid_t ssid); struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, u32 ssid); void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, From patchwork Tue Apr 16 19:28:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 13632432 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2001FC04FFF for ; Tue, 16 Apr 2024 19:29:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=POn3dib3W5G1CkL6Gm+NWkCC2Y4nMNxpkY2ZmnBTFQg=; b=bgp47WIS0GVFjv UVy54k5yIVJZ6wgb4RkSYUDSVDOSFnRWlaZwAjyEGykmIHTsi1DpLEQooS0XnynA5CMooLO2hdN5e 0zS7PAlvUVf7IeVuZNvYkkZh9fr+dnISHITe4g4hst9NbfG3d1AnzD1TpmMJq/xK3XmBt7wvxLmi+ nXFEk6SE0mXASaXIqgVhHy5XB7N8TjUskecICcyyELQE4GpIQpx8VXtDy+i+STaKXaKMKr+N8eYU3 pO+8QrJPmmdjZ9hKOHWb4WGBYgvLcet1Xk78bVStF4UYTvRgoSDrmm1HEjS98IK4mt+6Y9UHngK0v 9u6fGupNTuaW7S0tV2mA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUP-0000000DXaM-3sXR; Tue, 16 Apr 2024 19:29:09 +0000 Received: from mail-mw2nam12on20601.outbound.protection.outlook.com ([2a01:111:f403:200a::601] helo=NAM12-MW2-obe.outbound.protection.outlook.com) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUG-0000000DXQx-08d3 for linux-arm-kernel@lists.infradead.org; Tue, 16 Apr 2024 19:29:04 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=NsqTNHROR82tnP1BRtJmJU9w5xINr076Jm/g6fdQZF1vMaszxWTnHJVBsMtjZtY8mnYT1Wp8DeQJO4FI1h0HYGonog3EOzD7uiteMqsTtUR5x0esxmywfIUDDHz9eiw0WC5olboC/WcZkZhxaIMSjtXCC86rd+KGZgJnLm1WkEh9wOem7NGoqBfqMHDGjf8AMFR7AnDFitmpxVjQL0S38XOw1YCL82LqdQzIEfY30xhPQmyKUt9cOzWrS8tvhJexWSG3Lo5LvHmMfmXgQliNQxKT3b9uuJq6TnyhJr/bKbh3qj2dc706fQ2j2Vyhh2hKMQAxGvdpbQQUg/lpg73ZUw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=p/BEQyof/JfT3/ZJEAxYARzj2enWwJXlbOMwBtgNjt0=; 
b=oAss+bqHKcGABoqmJg09V4lPp3u3jVO1VCu7k5cGEN8YnuphrlSqorotnVm7y85tnc3A3pJVfuuQXIcWBnGV6O6wp9V0TXTRKz5EXZpO4LUfpFjgYMgWmSpP3UW51i53WUCNKO4rAnakArOgaAyHtY8yFJYLuC2mgkXdmDft5G0T/z7HgwBTY6wRV1GjDEKOjiZv6FuLgHDuI28dN0aA3bnMXibo58s8Q8EfBcxykZByjoy4x5rBeqtHoyfQEb4udmGhiOABpcM8KfC9LxP9ynpApdPV6rEPX9AOvM1TAkkhhVgPh6wA68e4jlFYEA/SIra8neaI/OWEwARvOU+p0w== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=p/BEQyof/JfT3/ZJEAxYARzj2enWwJXlbOMwBtgNjt0=; b=smVfiDiOVgQkYtCkHaIiKDrqcFnyPKkfzsuleBvbDWUsjQ4Mcw6at3nwR976vuAb+1PdOJCwQXGbz7zx6FDcIClcpn7x11l4NHRq4oargimPWLa3otI3kd/C6NxKWGMfUo8GLXQfkbBnqilxSDRgeonL5mSmto0Ggl22aXXXRjeXOJRYukGselaisL5hX7NsBJ9/IJQV87eK42uK/TF8u/NAWSRawCRFxQaYQUHh5YpAevZI6ifKB6S8awvE6ony4Z5lPtPptEY5EhfiHZKfev7ivR4kl13N1DzImChVZPSBFuhqZb3fzkK8rZyWY+FWKyg7BKaCqiMxcJoF99PgRA== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) by DS7PR12MB5912.namprd12.prod.outlook.com (2603:10b6:8:7d::15) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7452.50; Tue, 16 Apr 2024 19:28:30 +0000 Received: from DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222]) by DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222%5]) with mapi id 15.20.7452.049; Tue, 16 Apr 2024 19:28:30 +0000 From: Jason Gunthorpe To: iommu@lists.linux.dev, Joerg Roedel , linux-arm-kernel@lists.infradead.org, Robin Murphy , Will Deacon Cc: Eric Auger , Moritz Fischer , Moritz Fischer , Michael Shavit , Nicolin Chen , patches@lists.linux.dev, Shameerali Kolothum Thodi , Mostafa Saleh Subject: [PATCH v7 5/9] iommu/arm-smmu-v3: Make arm_smmu_alloc_cd_ptr() Date: Tue, 16 Apr 2024 16:28:16 -0300 Message-ID: <5-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> References: X-ClientProxiedBy: SN7P222CA0013.NAMP222.PROD.OUTLOOK.COM (2603:10b6:806:124::11) To DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DM6PR12MB3849:EE_|DS7PR12MB5912:EE_ X-MS-Office365-Filtering-Correlation-Id: 361b3ea7-d080-456f-b7fe-08dc5e4b6272 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: GUdcCpJ6HU6deQfmiOJT4nRFkSo9lpx1CcxKabR9I6FkWgrOPQmjri2UHSaAwmVOYvTj1AgH2WKzb4YrEoaNuufWWWss041KD+xRuD5JGDwrMPvs1YoTgzpcIJ+9abchj2lsxYDcHlsFj/1JFtipcCwIR9e90GVrgrmbhH5yYudRAvg89FGv1w5OWzF3w47jlIafg2zgvnXWUGGjUvJ8s76oNgqt+hKAx/B+dxuDo4xFUvBSYMgHGJ6HScFn1VPNmpStIY/jrJH7rv9CL6EaXOJV8eldOOe6OXz+v1Kp346IrIhfNfuqUergWEcrxAqsjFptQ0D49vZxjttF40StjZcfhap3m6+iF3kT9EHH7PqePWT39i63qgELmQ/yEPxR14YwinyGllKnamp3AiOPhsMdKP1ps1kV7UNkvh9nR37E0xpspj4RUX0Ysib+d+hdDZq4UAuJJcdVmS1bxO4ElSN/W6Z8pnc70/a2W4/VvIg3ASDanJ6qME/BHIfJVrXcxKnx/LOMf+p4zdBU/w7tBM1CLgTvDgYAJXHPJyVMviAh8WtfifY7wbgQJnvoF3ArDND7L98NudFs1mUbThythEiyKAGlwdCxk0m4HwBnsUffw+Oq8H43IAk8KqhKjPyqCoa2Fl2XrlgppLR+PQ9FEoIBUD7xvuvvnl80ggnFO6E= X-Forefront-Antispam-Report: 
CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM6PR12MB3849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(7416005)(376005)(1800799015);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: z+8zpPwlLR8VQipDjlZdR6ZLyoVhptieX0nMj1LRkPGcbskS/fxYvLs7hZXG6TZvL5Svel1o1kNgD6fIne5SzkOeqv3zNphe73PqRHaFQuup+I1DYUImT2C5h667beMgZPlVjo4Ebk2SIk0e0HWZIW1FA6wFUDGtmg6QALcznKiflDiTew8frcXHAdRF8SW1y3HOJsLzlCGWO6VtrH6bsCpjJkiBOUkhGRcJKd5AJtxuue5wzVisYzV/Yi9/1mmxu29RiMyzASuodeJG4Da7Uw3UoZrwdB9MFY5Gp1FacmRUdJzZEzkqVbRi7TcRXI1AwAb2OApRrIozho3hdKBAWU7dygP06uodHI7gvlRRDeMjGurVKBlizYyMZmNQbbS1aGfmnBplW+paVPg/+5QuNc+k5oekGLpqfzdbgRSAQ5ZBGMA4zD9y5cG4tQA/ejpQuQFaZ2GBUpd6WKVOhI1t7M6koz72s8eN6JquM5oEXqT/L41sgwqewtSFS9V9PdhVCfC+2B6siA1Ndh6Y2jDirVlavVAYFk08unpEKMlTlmP0s+t9L+cBrAiMUWzv68c5Rt+Af1M+3ssegDaJwG/z3QXjbZR1dbdKYTZxAgB8+sO3DpXEtho7F/5Zds+Y6SMHNRhCfgMzVdTeKLDPmMmyoxN85Bs1A8wHRaSGs1whpzDsTiX6VtI6i3OPQTmE/oxPg6KdcToaZS4d9vO7zyX1JsC6MJivSEWk+R6c3UxVU6LVnRyTNAiXfIQbHOmJahxYnbPKe0Ly5hsqhWIzSRPw43GCW/wGET5PUIs7H1/819WBXq0S0vtlJdZeTRfA/gxuUqMGr55CFOTzjBNtitbZZH6bVBTxZTRfibd7vBbxGx1ogtkCKiUOFe0PbCzygUeyUw2WGuO1K5SOR8FyjMNKzVi4ZhKY19VjEkTJsj0tkA2AHVIAiHJVGYF5m5nri1PZHTGenbdovV2MhOWQplsUirG6mWn77/2Dd3+FiRhiopCHInRkPT/Tc/oMLtzyGQHuTW6oZzupvzeRF2W5KOSf5qKxp9iKeSR/lZ76GOeIPF4o829Tfn/F3BsG/v1NiGttSI5o2bRmgp9MDm4syoXVI/92iepRDVdxJ48bpuADOyPLY03Z9B5xgaUeDvT7xJrj17F/knCdOi4bMBi2FOR7KIJqQ4laOb26coqtq704CvCex5k05hWk+Oz9MXUy9OxNPVKAoC6/lJveZMDkBjZSu+WvyIM9lXfCv5l2vdoC+l+oz7OmpuNyjxxFd79rdOOcGLDxOR12IqzA1+JVlZpthPXARbjpANJiKVNF6nRQumjvF6b4wdlMdsVmj/3Le2+mfqDGq1UwGkqg2rBFTyHfvnwf3FJmdKKP+akIRx1Exb9iDVNDorVRtvBjsM0zcW88qopwYOqXrlGuubYpUKIk1KqYpeNmshFiesXPBZv9deVElwGpU9RZ4EOKa7YodcTNEJFIE15BFsiSrt6us8fVLyM3Hz2Sfa9cDcIOKLIFXB9aD2oFaaKgxWz3QwEk7CI5rqtJOekdBe/eD+YlY6vITpPRbuXQmdn7pO91+F2lfN1VobU+Yl3+POJBHXi2vhoR X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-Network-Message-Id: 361b3ea7-d080-456f-b7fe-08dc5e4b6272 X-MS-Exchange-CrossTenant-AuthSource: DM6PR12MB3849.namprd12.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Apr 2024 19:28:24.7974 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: CzfBiF6Nj6vZvEZPaV0tB/vfsL+9icl030INp+Ej+Sftmt7f418ANIjhaewR4URl X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5912 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240416_122900_159302_158F4398 X-CRM114-Status: GOOD ( 14.97 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Only the attach callers can perform an allocation for the CD table entry, the other callers must not do so, they do not have the correct locking and they cannot sleep. Split up the functions so this is clear. arm_smmu_get_cd_ptr() will return pointer to a CD table entry without doing any kind of allocation. arm_smmu_alloc_cd_ptr() will allocate the table and any required leaf. A following patch will add lockdep assertions to arm_smmu_alloc_cd_ptr() once the restructuring is completed and arm_smmu_alloc_cd_ptr() is never called in the wrong context. 
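As a rough illustration of the calling convention this split establishes (a sketch only, not part of the patch; the two wrapper functions below are invented for the example), attach paths running in process context use the allocating variant, while update and teardown paths use the lookup-only variant and treat a missing entry as an error:

static struct arm_smmu_cd *example_cd_for_attach(struct arm_smmu_master *master)
{
	/* Process context: may sleep, allocates the table and leaf on demand. */
	return arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID);
}

static struct arm_smmu_cd *example_cd_for_update(struct arm_smmu_master *master,
						 u32 ssid)
{
	/* Pure lookup: never allocates, returns NULL if nothing was installed. */
	return arm_smmu_get_cd_ptr(master, ssid);
}
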
Signed-off-by: Jason Gunthorpe Reviewed-by: Nicolin Chen --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 61 +++++++++++++-------- 1 file changed, 39 insertions(+), 22 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index f3df1ec8d258dc..a0d1237272936f 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -98,6 +98,7 @@ static struct arm_smmu_option_prop arm_smmu_options[] = { static int arm_smmu_domain_finalise(struct arm_smmu_domain *smmu_domain, struct arm_smmu_device *smmu); +static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master); static void parse_driver_options(struct arm_smmu_device *smmu) { @@ -1207,29 +1208,51 @@ static void arm_smmu_write_cd_l1_desc(__le64 *dst, struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, u32 ssid) { - __le64 *l1ptr; - unsigned int idx; struct arm_smmu_l1_ctx_desc *l1_desc; - struct arm_smmu_device *smmu = master->smmu; struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; + if (!cd_table->cdtab) + return NULL; + if (cd_table->s1fmt == STRTAB_STE_0_S1FMT_LINEAR) return (struct arm_smmu_cd *)(cd_table->cdtab + ssid * CTXDESC_CD_DWORDS); - idx = ssid >> CTXDESC_SPLIT; - l1_desc = &cd_table->l1_desc[idx]; - if (!l1_desc->l2ptr) { - if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc)) - return NULL; + l1_desc = &cd_table->l1_desc[ssid / CTXDESC_L2_ENTRIES]; + if (!l1_desc->l2ptr) + return NULL; + return &l1_desc->l2ptr[ssid % CTXDESC_L2_ENTRIES]; +} - l1ptr = cd_table->cdtab + idx * CTXDESC_L1_DESC_DWORDS; - arm_smmu_write_cd_l1_desc(l1ptr, l1_desc); - /* An invalid L1CD can be cached */ - arm_smmu_sync_cd(master, ssid, false); +static struct arm_smmu_cd *arm_smmu_alloc_cd_ptr(struct arm_smmu_master *master, + u32 ssid) +{ + struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; + struct arm_smmu_device *smmu = master->smmu; + + if (!cd_table->cdtab) { + if (arm_smmu_alloc_cd_tables(master)) + return NULL; } - idx = ssid & (CTXDESC_L2_ENTRIES - 1); - return &l1_desc->l2ptr[idx]; + + if (cd_table->s1fmt == STRTAB_STE_0_S1FMT_64K_L2) { + unsigned int idx = ssid >> CTXDESC_SPLIT; + struct arm_smmu_l1_ctx_desc *l1_desc; + + l1_desc = &cd_table->l1_desc[idx]; + if (!l1_desc->l2ptr) { + __le64 *l1ptr; + + if (arm_smmu_alloc_cd_leaf_table(smmu, l1_desc)) + return NULL; + + l1ptr = cd_table->cdtab + idx * CTXDESC_L1_DESC_DWORDS; + arm_smmu_write_cd_l1_desc(l1ptr, l1_desc); + /* An invalid L1CD can be cached */ + arm_smmu_sync_cd(master, ssid, false); + } + } + return arm_smmu_get_cd_ptr(master, ssid); } struct arm_smmu_cd_writer { @@ -1357,7 +1380,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid, if (WARN_ON(ssid >= (1 << cd_table->s1cdmax))) return -E2BIG; - cd_table_entry = arm_smmu_get_cd_ptr(master, ssid); + cd_table_entry = arm_smmu_alloc_cd_ptr(master, ssid); if (!cd_table_entry) return -ENOMEM; @@ -2687,13 +2710,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) struct arm_smmu_cd target_cd; struct arm_smmu_cd *cdptr; - if (!master->cd_table.cdtab) { - ret = arm_smmu_alloc_cd_tables(master); - if (ret) - goto out_list_del; - } - - cdptr = arm_smmu_get_cd_ptr(master, IOMMU_NO_PASID); + cdptr = arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID); if (!cdptr) { ret = -ENOMEM; goto out_list_del; From patchwork Tue Apr 16 19:28:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Jason Gunthorpe X-Patchwork-Id: 13632430 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EAC9FC4345F for ; Tue, 16 Apr 2024 19:29:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=TvTdnyl68bprYes554o9eyhlRP1pPp9B193uqUy4Z3Y=; b=eGuVQEJFCDBKOS rrATZywJRFxOfclGO/UBwFf2F7OlXuxGx2eDbDayfNF6yJ8D0EVEou4/HF6YfC1LZnL4dR3sWrPVJ MMVpUefluevmBOwym/NgyeTrBHPbftJpBRIsemPPSwql5LOvTvt+plRsxyRNORprdVr6jxaHFuV1I irEqvgbcDGuyCy+87SNQdXoQFJvK8o8tbZzM1QZ8psQrIpk4yA0bFD1C6IK80yd+y1VgXFvmEC4d3 cPYNvYYLGQfOIXlb808pylGAbVk1z6QEJ8BWv1UDb0B7bzLlTB8/t99Bujl3hyUtt22O92pNWtjcs fFiWWF8huI9BZC7mps+A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUM-0000000DXYO-0Ybe; Tue, 16 Apr 2024 19:29:06 +0000 Received: from mail-mw2nam04on20601.outbound.protection.outlook.com ([2a01:111:f403:240a::601] helo=NAM04-MW2-obe.outbound.protection.outlook.com) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUE-0000000DXR2-0WZ6 for linux-arm-kernel@lists.infradead.org; Tue, 16 Apr 2024 19:29:00 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=gng72XtWHkxhn4JWHlws/9Av3sPLuMvyRArPVwalE14afSwma4spLCHzM8DnW2iMiCXUwNpV+NFmm2ukT/Jtox+3psxBfuFlB0WVjbnq2ZqV6jO2C1aKWKYoB8/RpoyG/bqKGsgRsa5QnTMjVcp7hrur0higUIovMd+Qq0zwoAeCB/0uESGXjl7WGxyAyEmFUtVC2+J9kTxyIPEi91PhWMmjgX7mErs+vES8/lm7SNNbJjls3rnceuGigmcXBdA11cVtHK/tF4ijohQ28HCJXMlO/TcNeSY61p3i5MW2yaFFXXaTHc0KxD29aOZrjAhzPlX4oKvdjQKpE8wz2BzvfA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=MGJWq/IwWubTSvQESDQ2LoocBJQ+Q/Eg0VdRZOpSZ8g=; b=cEp5h+4mKdW41eMHJGcTpFu6zJLrDYW3QhJFdhYgNzXova5QXSvARU+5WMviSqugXiUU2KHUNL04cqzPt6DVJ6W7l7q8d2VCgKv7/ag+7Sjmn2LdtOVneLYoJRuKKpgHan1bcyQCjphAO2QlhKSwz/s3IKxj7kkwtQMNMptIZdomKyIWiTxCkk2MC+pogzTCIDEFQksWyiVF8fjXLHb32NFOG1K76vTgUJU1D7cLdHHpwtFgEhsUEELUaa4wHYl/wW0f5xMqzb22U4zKM4mtgEXGl4FsBYzVHQnpvpUmLGfQZA4Mmrs22mLsSY9YmQ37gYDFdvkZHTuJ95miiT237g== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=MGJWq/IwWubTSvQESDQ2LoocBJQ+Q/Eg0VdRZOpSZ8g=; 
b=hikEl3l3LE0OciBT9BmfzmWejI6DSLqbODAFBkDDoVrrssGDxyyYBB4ceTtDX7mdF74S11Lp91qNE7rwkVD6ijaBnL6TynetE6aBb3OCoZ+wOwfWEWY4seethFnLqVWLCPEWR9JTxgEoUFvF38GKkvHkbcCktqNiW6P6Fko7mbEPrvYSSsEG4EhQ+r8h9Uei/BqmqIxqmUXqs24ixP35k3BX05/WPdunvQMy4mCy2T6mTdW3QEEr0RvHatWdZlsORohYZ6DFXNfVqAlJhXXI1EJAYW2d0n9t2z5/Co1nCSUBsKywR+OTb8U0NYQcKJTnFsKEkTskmu683Bwy+K/8pg== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) by CY8PR12MB8213.namprd12.prod.outlook.com (2603:10b6:930:71::10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7452.50; Tue, 16 Apr 2024 19:28:23 +0000 Received: from DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222]) by DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222%5]) with mapi id 15.20.7452.049; Tue, 16 Apr 2024 19:28:23 +0000 From: Jason Gunthorpe To: iommu@lists.linux.dev, Joerg Roedel , linux-arm-kernel@lists.infradead.org, Robin Murphy , Will Deacon Cc: Eric Auger , Moritz Fischer , Moritz Fischer , Michael Shavit , Nicolin Chen , patches@lists.linux.dev, Shameerali Kolothum Thodi , Mostafa Saleh Subject: [PATCH v7 6/9] iommu/arm-smmu-v3: Allocate the CD table entry in advance Date: Tue, 16 Apr 2024 16:28:17 -0300 Message-ID: <6-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> References: X-ClientProxiedBy: SN6PR01CA0004.prod.exchangelabs.com (2603:10b6:805:b6::17) To DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DM6PR12MB3849:EE_|CY8PR12MB8213:EE_ X-MS-Office365-Filtering-Correlation-Id: ce4d2837-2eef-40cb-271b-08dc5e4b6070 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: scfsq3rf5sZtTM6uj0KT6/5QNQDqCB+IFBu96USFwFzz59nv5QSUiDK0A6H7wWjQZv6FCdsxhh6pMKPdmA0af8Z/ZoUIka5OVXtNmlfKW4FGYlc2jfNfspCMSQQHO/B5u+r4rJmq/aiZkaAo1rVI8AhHgCAIcOXE3bVF4Kug563zKLo9uX+CQG+ngY4/m8RvGrc/ISMZw/1vQxqQyqHqAMNQUz+QnYTCXXnF4YWrNtRDmZUgsBjTKp0u3hyauYvgQNLHNDVIr0YXlC5671UY+8YXfshqzqlCAlP5oxqZstDqA44jN3jYYDy+aWdxA7jaYIE6PVCQ1B2YMbS0eJWZmabMWFN5E9ce7mI3fqk8F3/HDQLE7SlNvPITsmJH498of+DdS2hUw6wmM0+m5oyzEQ3VRGcASmHQ1rUXqB2AlUJY/3I95siOf4GBol6GZvy0RqR5Ybl4/Hj4TcJMYHx2evy2FVVrwU3MwpbF9zQoyqPueiuEL1aJ/nYG/FTUiS2mnvEpvU3Q7V0wC3oPe8H4sM3cDE9KVh4JAbNzQlZE/jxXhcm8vLF73FxnOSEbbRHzb5e7Yi+NCICYenNoyGqPwU5XrBPChOiS83TMPhJoiSNrtSlRECRGLq2deE5Oe+310IHeoDdz38UI4visar5+NDsUdC82M05qTeDOCDh/xN4= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM6PR12MB3849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(1800799015)(376005)(7416005);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
g3ONQZFpyXDGQ9Ckptiv1E2Eg+r5N9yrF+mSCVL4kbyfGT75O9qEGVHsFEo1qBeSlS7h4gcxr8LIVJwxtTD7Xz2FA6VJZs2/YF8MTFM9MR7fbXpQgSR9MuHwxEeVmh6rjJu+WFJRR3vk/TV47CYG/yRDPq63eLXP+At7AZ5vcAQ6sUASYnzF2HzuEDZ+p71GlAReGwTHQCLz3juRsJDEpkdQ/wVfFQzRNf4pdovO4wBXWIhapjJ+6lckgs8/qYsuJejHqY6l1fF7pS95BrZ9GS47MRac9OeDHq9qLgMBlmqfiN8LhBm5GEp7xt15zzkpeUot9ciquTRF27EWjjGLZDjAVd4eYaith+A55Ih9ImFWzkSKz9OTYtKmKzhdgdNE+ug9M0QNie46ujDEwntsqcDAlOvpecxu37xIaGxpgpHrUYmX90R4B6uNK1rKytudfBQ4tnoMdz6qfnmqUrzYRDWjw2wUy6kbo70yKAUvJr7R0P0xCcCENBwXXe4UFIE/AZed8mATkzBXy+YfUi34iggtG1rm04eFZm1NdOc884kZseOL65UvOY7XSKvu6lcgJ32nNFGZ20ztgkth3SXRp/RaaDDoC63WaSA2NGCpV9sdxNAMEnMnnpMUy3oTM+d1fAQiF3P8oJjurktoW08EYyG3KesY4RqCCo4YiuSfjFxghFLaDzcfy8IRY0GflqYknGbgr8FTJfM8ePGIZdQNFGT3tni1JPV5BG4nAxiTkRMC97umMUGb7MBSYtsMf0tMWB16XaFDS9AycOckxc+t6QTZgaRx9ZZsPrBB2Vat765/Qt3yIXztmltESU59kZSktHh9L3tFgv10qZ39hhu6lvs3PdFB034LEaxar31bAjiWBkF0r35poCXl/XfAdKWUEMJJWQDbmaLGy3937bhOfIKNJIVkUjGaupE/RrAxAk1HVW4JQeWnnSdddiYofEfzZ6gQWlpO9E7DgHxC94vI4HEbKuDul+gwn2OG+MLlmKEMtRWEs1x70y2OdxnN4aWUmP73IDKG8ekxUu9prZfG9p6Q6AeSVMpztb2UQPw4ak5MDx2CzhZyuxQ3OZLiwPA3fRzi+m1ptb1T5NIAelm5PohHT0+KfBw+71G2URTmV2A0ynknLNAF/T3srhmdnh+bTSOkpb7LDOLpPFBwPx74fqRoZ1rrJAycZMqszBUzb1lrPb1J8zYhZo2M+M0OHkhyyrqi/Ie6v07BMaW2mc/bJnO2zwm2+uELBc3oB/aJ8ESZKrFPn6s8ePOkfW5JwzPLtPaaRzMMmILb9oOGdozRbl9VzCK9XtjlFIxTWrfGt7eWnkZIpiqw7IaIdAJKpw8XK2cVD0TQzsvBS4LxMP8gnHq9ZwSE2pU9nojlAukoc6hs/hu7G4CzwlzebvVlD1sJIP/efQ8nnrtSb+ZyNIT0yytJ1sooLGagt6nnQt3FackA7OiGEE8p+iT54xhqLrrBcNCbYjeg57IPpsuIsTWzT8nA20tcJIATFmwIqxhk52hlPOHlxUmaNSTLEPyAK9HakTjqshcQHk5xnBAJmRYyblQyMviyx5o1yl8RwlvYbKC6/I/YnhmosLY8pqYAP/1x X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-Network-Message-Id: ce4d2837-2eef-40cb-271b-08dc5e4b6070 X-MS-Exchange-CrossTenant-AuthSource: DM6PR12MB3849.namprd12.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Apr 2024 19:28:22.1190 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: UjOMe/MZDUDQymC3cGougwnuNLWLCRNDSONdTtVzE+SjmE28twa1OH3X2txw74TN X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8213 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240416_122858_198166_4CE63423 X-CRM114-Status: GOOD ( 15.48 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Avoid arm_smmu_attach_dev() having to undo the changes to the smmu_domain->devices list, acquire the cdptr earlier so we don't need to handle that error. Now there is a clear break in arm_smmu_attach_dev() where all the prep-work has been done non-disruptively and we commit to making the HW change, which cannot fail. This completes transforming arm_smmu_attach_dev() so that it does not disturb the HW if it fails. 
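The resulting shape of the S1 attach path is roughly the sketch below (illustrative only; locking, ATS enablement and the other domain stages are omitted, and example_attach_flow() is an invented name):

static int example_attach_flow(struct arm_smmu_master *master,
			       struct arm_smmu_domain *smmu_domain)
{
	struct arm_smmu_cd target_cd;
	struct arm_smmu_cd *cdptr = NULL;

	/* Step 1: all fallible preparation, no HW state is touched yet. */
	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
		cdptr = arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID);
		if (!cdptr)
			return -ENOMEM;
	}

	/* Step 2: commit. Nothing below can fail, so no undo path is needed. */
	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
		arm_smmu_make_s1_cd(&target_cd, master, smmu_domain);
		arm_smmu_write_cd_entry(master, IOMMU_NO_PASID, cdptr,
					&target_cd);
	}
	return 0;
}
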
Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Reviewed-by: Michael Shavit Reviewed-by: Nicolin Chen Reviewed-by: Mostafa Saleh Signed-off-by: Jason Gunthorpe --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 24 +++++++-------------- 1 file changed, 8 insertions(+), 16 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index a0d1237272936f..0aacd95f34a479 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2661,6 +2661,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) struct arm_smmu_device *smmu; struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); struct arm_smmu_master *master; + struct arm_smmu_cd *cdptr; if (!fwspec) return -ENOENT; @@ -2689,6 +2690,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) if (ret) return ret; + if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { + cdptr = arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID); + if (!cdptr) + return -ENOMEM; + } + /* * Prevent arm_smmu_share_asid() from trying to change the ASID * of either the old or new domain while we are working on it. @@ -2708,13 +2715,6 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) switch (smmu_domain->stage) { case ARM_SMMU_DOMAIN_S1: { struct arm_smmu_cd target_cd; - struct arm_smmu_cd *cdptr; - - cdptr = arm_smmu_alloc_cd_ptr(master, IOMMU_NO_PASID); - if (!cdptr) { - ret = -ENOMEM; - goto out_list_del; - } arm_smmu_make_s1_cd(&target_cd, master, smmu_domain); arm_smmu_write_cd_entry(master, IOMMU_NO_PASID, cdptr, @@ -2731,16 +2731,8 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) } arm_smmu_enable_ats(master, smmu_domain); - goto out_unlock; - -out_list_del: - spin_lock_irqsave(&smmu_domain->devices_lock, flags); - list_del_init(&master->domain_head); - spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); - -out_unlock: mutex_unlock(&arm_smmu_asid_lock); - return ret; + return 0; } static int arm_smmu_attach_dev_ste(struct device *dev, From patchwork Tue Apr 16 19:28:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 13632436 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CFBCBC04FF6 for ; Tue, 16 Apr 2024 19:29:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=4WSrs8mfW9TBJCBRl158j7RyApz1TCUIJEeDE0qQIEg=; b=Z0f7WkJJ8Komaq pSv3xUfUspXyVLfRM49KhjTpaiRoDwu3FAr7InLR5jUJJLAGUs0TdUIHdOJTURaKARsP1X6szZHXm L3x4BKSPr2edJ1M05w88QYjhh7545ElVfMUResQ8rZfiVUOGzO3J4IohRR8UliJ/1Ew2L7JmvpplS H35d9UmIwfdiKxiBhyRcHoWVwDgLUETUjOH4rVWUSntM/ZLYwGGEmxMWjurpIfdC4hGwJJPuOBNhz o4gd0g26Gj5j3GtM/5r8FkWwHKuPk9Lvd4QMAXMheNUd/Jy1moPWyeLSjY0gVjXODzD30xzJjMCfR 
Q/yZ67+c9VSdit9aKOsg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUu-0000000DXoo-0xRm; Tue, 16 Apr 2024 19:29:40 +0000 Received: from mail-mw2nam04on20600.outbound.protection.outlook.com ([2a01:111:f403:240a::600] helo=NAM04-MW2-obe.outbound.protection.outlook.com) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUO-0000000DXPF-1dM1 for linux-arm-kernel@lists.infradead.org; Tue, 16 Apr 2024 19:29:11 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=RmcpuXPvD5CzqfYDAwnFhQ2XA/1Dxd8je3VUmGrC8bf0LZ2zHRMasgqty6O8Itvh9a9GpltjKtfzGwFvPp0L28ETV29Nib6iVafKZOsu0x70NcvDQxBAVj1DkrPRSVVJHnMIt9wgpLN9X/GjhgbqrQ/vgJ7D0EfEF0+3W9zkYgO4+5fEvlOcF3ec46AsInuPkQ2K2A4T8idS+pk3o97mkwXQlNDYCDcFDSVbY69W5RRev5ONqIq3HWLLF3+6vC7BY/iciVzL3OEO1QEKKHCUWhbnOszvj2Qmdu2i9HGvsrhs6TiLa5keIZzITSkN305Xky4TT7+H5qjVVvFalXE8yw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=gCkCsSGrfmIp+c0bi4wLp1+fU3w7dn66jI22Ex2wIH0=; b=Vu4GE2DzeEYLN44U+icmzii+VVRSDb5BEslxe+DIrwnHwIkNPtTRCybZoIMZtwpRKyNY3YR5jX6oN8NaFmVvAvRBi2e3zh5ZgouEeUUpyGjTVNFxOrysrwDGBxuse7OpiXpikCNudSNf28DR1PyOR4BVxtNR01LKKLbXF5tAN9KdkinSSBAPwgp6kPAHHJT0lpkNLM6HoFV4dSOxi0IOW/9aRxFy8IkJR5j+z7F9ba2JKaXLMzpeGF5vrmUq3u2ReoyjjP2MZOUC/IMdbxi1eAOmx2hQZ4xOeug3lnrU7DWM0MfEvHTj+Hzfu9WNT0Esz5JjmODM/SRqAFT3D1HDrA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=gCkCsSGrfmIp+c0bi4wLp1+fU3w7dn66jI22Ex2wIH0=; b=D8dS6m9DU4p3xzoi3nIQkcZQ8BYXBQ3pCKp3VgLkBwg30Baryj0OOHzgrBe99H/Ngrb97+/+4mpHBBVTRlROqBmjvIlt/MqXLaXtv7tGtMi9cNRw8mDvMjL0n4wtSVKtPCnXkvXsJ4+NYZSRjdUVZmej6KlQOD99LO2lp8icV3HZV/FgGk8a7KuloT1QzxbzWi/7SjqxFhig4a7SZ15/oM7zjyGH68czyUrdi1ZJxV+pcCNLuL7+jBPuN26kalnUtW9e99F3CvyyKY5A/STOur4Nx4IonNwHKwcqSl1Ywq5osDEDuP3NiTLaiLVPPnLgVUSKm4dlN5QsUiD8YFfACw== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) by CY8PR12MB8213.namprd12.prod.outlook.com (2603:10b6:930:71::10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7452.50; Tue, 16 Apr 2024 19:28:25 +0000 Received: from DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222]) by DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222%5]) with mapi id 15.20.7452.049; Tue, 16 Apr 2024 19:28:25 +0000 From: Jason Gunthorpe To: iommu@lists.linux.dev, Joerg Roedel , linux-arm-kernel@lists.infradead.org, Robin Murphy , Will Deacon Cc: Eric Auger , Moritz Fischer , Moritz Fischer , Michael Shavit , Nicolin Chen , patches@lists.linux.dev, Shameerali Kolothum Thodi , Mostafa Saleh Subject: [PATCH v7 7/9] iommu/arm-smmu-v3: Move the CD generation for SVA into a function Date: Tue, 16 Apr 2024 16:28:18 -0300 Message-ID: <7-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> References: 
X-ClientProxiedBy: SN6PR01CA0007.prod.exchangelabs.com (2603:10b6:805:b6::20) To DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DM6PR12MB3849:EE_|CY8PR12MB8213:EE_ X-MS-Office365-Filtering-Correlation-Id: 02bf238b-17e9-4f42-a6bf-08dc5e4b61a9 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: sdhQdcd3LpHSA609P0EdyUgUNYnsI1qEpoHWCmBHDIu/RDaSilpcGvXr6jrlY7bv4I41QQ3RlHaVxQuaEghCWLGSwbo48dvNy9wgoOWq3Skk48ql1tay7hhxleP2W8HG0ehOzfylRoBX6RvRaQftdO4DfQDcc8bJdtw3XWQwtYBvbtnSDQK+y0QIprsM+pxch4zdmIW3jh7Aig+Myu+wZBTZbexKPFzcwXnmmluX7y8Uu5sVOeJ41A5MpvWWNjZpRQuqsRQEUNcj7M1BhEarMe9/fGML+7X1v9V0yPc6WMcVtWGF3LdWWNL84/wd5BqcLUDPMeBO7T/rQZ+QsXGYADf2Ttg4jbwI1dRj7Xak3KToZ6eyiug6wXHPMwkwgf9KxSTC8GZyJzCT58e9T5iM1NCMIBrxabsMo2NLsDsC/LR/45jyoJeY/bvafoFAaOvGYgpiPDJOLdhBgKbzESD6+dQaMpSclwFyVEoyflniyI5NmBUXMYaP6scE57+9dw52S9WFnQORS9BdB+gbHggQChYn/Y4zo2oYBAJMo8r273ynxaY5PvnMbnKieHXxnd8RTljatT3xXRl9Dwk3N5OY99t4JDUc6DU4VMWgjFzympXPI3VPmrxQuCfrT6PAJlwTwRjgEQe1fMnMzhtqfCY5GY8WfQp3ONtIJEBXxtLlu9Q= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM6PR12MB3849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(1800799015)(376005)(7416005);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 35e2/aKW/Fe1VwSu1y02GT609LpQhPb4JLKVxlWooNSUVUEUG0bn4v1NjQzcr9kICoEfh5Cq87EmBwy6lCbm4c520Wm66aBe519CJg1qHnASgvVC2aA1P+5dO6XGj+yjBGLJvXcRFM6IdZAb5wOXeylM+W2CgOS5zZWu1+fWaQAqK746/Q7enO58PGXhhTCBuoR5aHgoA82LXnHqFSBfZxiXbvf9uKghl4v7KEbY6NmabQbv0d9GLI4yZ6FU31KVkYV1339sjnYWurhyCOHGfsGqjxjLfIP8yrU0A3gk0upon1pK44xkEK17zzF3ulIiX+G1wuzn4NOfDLgZ3HtUQOwhvbnTFtCjrUkxewHEICQTvS0nmL4fakIJlJhXBHbCffxzJ3dfNg7oO2xoz7zygy4Dbalt7pOqtjfbwkui3+KHbWD57nzx8C7mdu3kO1C4quSkzPoO4f+vK7gaye9kA3fwqfolXne3dGVSmihAY3IRrOH6q6H+Ils9se0SLyX87o7iZKQqag63Yshm6m8jQaW5U6tK6cVIQwNR0zHGFja7cB8BMYY41o1IhPwIAV/rpZ+O38JfTUTSqr/6UYVMN3TAgJE6nF+TRbCC526Y5uhR36sHpjbgAs9OorTmGiVhRb+f5JkQ19wkWO4GLYtWj9Qd3GYAtMZkgz3fKGOxsAdtg6FqXw7/QxyWT4RITRN5l+z5NJ9d63XHKplnJD4C7hEBhiWjqezTudrFJQkNqh+z/238dppYhbv0qlvswSjq4PycteVRZVlfdn+/62b5V81V5NnnhQA7384AfAUGPvz0nafhOZhaEwhtdJcYUAWUIuOcsLHn0pzw//lm+oporsRF6zhrW7UM6H3UdrMPfpeFy+0MyRkuMRoCNxfhbUTyfMt6/sb+JoTBE30YV+WHaWZOWKGpJdq0/esiagn2lMrFqypNUUi05I3CM0Y43NgGCoRocFwRu7R6/pZ29vKnrya0oBkt75/t2tA6IfR6i51Ye0MZsa/M3equXiSk4HaOxTeTw5P6641i0V1evqLFQdX6OangsxEfBdpFzs+EX2DrVm03VfxqKnRjSAATmRA3FuPtqIYHnLWwc7c74WSKpDhNsRfG79jUxRayZmju1UFMeTTi0NAFqARH2OAxmP+hc9vPJend9WSp1oDczmJwG63eAzpDsuQpG1cj2BCWS/q6y3/slEF8ymqeZcH6oGSVh6/ajUGvAt/JkN2H4cyPbFQIIRqr4U8qfeme8bXZjChytG+axOEz0DS9OLCphuNp6f8iMfKTLXZDk9X9XKbRcrM+WMRSCFH9V6B81auMAyHwfkNKZG8I4kQzL9p80sBDJ/E7cSG3350OmMjvPoYkl2iEWADh4kuDhphkw65ws4PTfTU0KyYukobxzoEaON6y5ATUxJGLsWGIny0DUK0lPuS94oLnBMDiMnPVgja2sZDl97t6ETkSCINKvdtbkG2SHfNmvSoowez+kvGztAjqvKlJcyubVHsS2cjo6lFEAjKEjKqQnNAo4h1RHdiK6RjZLs4eJk9YJ459MN/LOMLPyCdVaU7UhjlmcKJSlh7ULMz/xQXtvwEG8IWUEAw1VPhI X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-Network-Message-Id: 02bf238b-17e9-4f42-a6bf-08dc5e4b61a9 X-MS-Exchange-CrossTenant-AuthSource: DM6PR12MB3849.namprd12.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Apr 2024 19:28:23.6604 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED 
X-MS-Exchange-CrossTenant-UserPrincipalName: cjAwflGl51V2xTIBq/1AMfS6FR2KUXwYLZr8pbj8asWVefaMqMRexlE20nJmWK/9 X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8213 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240416_122908_476060_7C4B534B X-CRM114-Status: GOOD ( 24.82 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Pull all the calculations for building the CD table entry for a mmu_struct into arm_smmu_make_sva_cd(). Call it in the two places installing the SVA CD table entry. Open code the last caller of arm_smmu_update_ctx_desc_devices() and remove the function. Remove arm_smmu_write_ctx_desc() since all callers are gone. Add the locking assertions to arm_smmu_alloc_cd_ptr() since arm_smmu_update_ctx_desc_devices() was the last problematic caller. Remove quiet_cd since all users are gone, arm_smmu_make_sva_cd() creates the same value. The behavior of quiet_cd changes slightly, the old implementation edited the CD in place to set CTXDESC_CD_0_TCR_EPD0 assuming it was a SVA CD entry. This version generates a full CD entry with a 0 TTB0 and relies on arm_smmu_write_cd_entry() to install it hitlessly. Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Signed-off-by: Jason Gunthorpe Reviewed-by: Nicolin Chen --- .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 156 +++++++++++------- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 103 +----------- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 7 +- 3 files changed, 108 insertions(+), 158 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index 7cf286f7a009fb..80a7d559ef2d3f 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -34,25 +34,6 @@ struct arm_smmu_bond { static DEFINE_MUTEX(sva_lock); -/* - * Write the CD to the CD tables for all masters that this domain is attached - * to. Note that this is only used to update existing CD entries in the target - * CD table, for which it's assumed that arm_smmu_write_ctx_desc can't fail. 
- */ -static void arm_smmu_update_ctx_desc_devices(struct arm_smmu_domain *smmu_domain, - int ssid, - struct arm_smmu_ctx_desc *cd) -{ - struct arm_smmu_master *master; - unsigned long flags; - - spin_lock_irqsave(&smmu_domain->devices_lock, flags); - list_for_each_entry(master, &smmu_domain->devices, domain_head) { - arm_smmu_write_ctx_desc(master, ssid, cd); - } - spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); -} - static void arm_smmu_update_s1_domain_cd_entry(struct arm_smmu_domain *smmu_domain) { @@ -128,11 +109,86 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid) return NULL; } +static u64 page_size_to_cd(void) +{ + static_assert(PAGE_SIZE == SZ_4K || PAGE_SIZE == SZ_16K || + PAGE_SIZE == SZ_64K); + if (PAGE_SIZE == SZ_64K) + return ARM_LPAE_TCR_TG0_64K; + if (PAGE_SIZE == SZ_16K) + return ARM_LPAE_TCR_TG0_16K; + return ARM_LPAE_TCR_TG0_4K; +} + +static void arm_smmu_make_sva_cd(struct arm_smmu_cd *target, + struct arm_smmu_master *master, + struct mm_struct *mm, u16 asid) +{ + u64 par; + + memset(target, 0, sizeof(*target)); + + par = cpuid_feature_extract_unsigned_field( + read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1), + ID_AA64MMFR0_EL1_PARANGE_SHIFT); + + target->data[0] = cpu_to_le64( + CTXDESC_CD_0_TCR_EPD1 | +#ifdef __BIG_ENDIAN + CTXDESC_CD_0_ENDI | +#endif + CTXDESC_CD_0_V | + FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par) | + CTXDESC_CD_0_AA64 | + (master->stall_enabled ? CTXDESC_CD_0_S : 0) | + CTXDESC_CD_0_R | + CTXDESC_CD_0_A | + CTXDESC_CD_0_ASET | + FIELD_PREP(CTXDESC_CD_0_ASID, asid)); + + /* + * If no MM is passed then this creates a SVA entry that faults + * everything. arm_smmu_write_cd_entry() can hitlessly go between these + * two entries types since TTB0 is ignored by HW when EPD0 is set. + */ + if (mm) { + target->data[0] |= cpu_to_le64( + FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, + 64ULL - vabits_actual) | + FIELD_PREP(CTXDESC_CD_0_TCR_TG0, page_size_to_cd()) | + FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, + ARM_LPAE_TCR_RGN_WBWA) | + FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, + ARM_LPAE_TCR_RGN_WBWA) | + FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS)); + + target->data[1] = cpu_to_le64(virt_to_phys(mm->pgd) & + CTXDESC_CD_1_TTB0_MASK); + } else { + target->data[0] |= cpu_to_le64(CTXDESC_CD_0_TCR_EPD0); + + /* + * Disable stall and immediately generate an abort if stall + * disable is permitted. This speeds up cleanup for an unclean + * exit if the device is still doing a lot of DMA. 
+ */ + if (master->stall_enabled && + !(master->smmu->features & ARM_SMMU_FEAT_STALL_FORCE)) + target->data[0] &= + cpu_to_le64(~(CTXDESC_CD_0_S | CTXDESC_CD_0_R)); + } + + /* + * MAIR value is pretty much constant and global, so we can just get it + * from the current CPU register + */ + target->data[3] = cpu_to_le64(read_sysreg(mair_el1)); +} + static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) { u16 asid; int err = 0; - u64 tcr, par, reg; struct arm_smmu_ctx_desc *cd; struct arm_smmu_ctx_desc *ret = NULL; @@ -166,39 +222,6 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) if (err) goto out_free_asid; - tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - vabits_actual) | - FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) | - FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) | - FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) | - CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64; - - switch (PAGE_SIZE) { - case SZ_4K: - tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_4K); - break; - case SZ_16K: - tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_16K); - break; - case SZ_64K: - tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_64K); - break; - default: - WARN_ON(1); - err = -EINVAL; - goto out_free_asid; - } - - reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); - par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_EL1_PARANGE_SHIFT); - tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par); - - cd->ttbr = virt_to_phys(mm->pgd); - cd->tcr = tcr; - /* - * MAIR value is pretty much constant and global, so we can just get it - * from the current CPU register - */ - cd->mair = read_sysreg(mair_el1); cd->asid = asid; cd->mm = mm; @@ -276,6 +299,8 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) { struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn); struct arm_smmu_domain *smmu_domain = smmu_mn->domain; + struct arm_smmu_master *master; + unsigned long flags; mutex_lock(&sva_lock); if (smmu_mn->cleared) { @@ -287,8 +312,19 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) * DMA may still be running. Keep the cd valid to avoid C_BAD_CD events, * but disable translation. 
*/ - arm_smmu_update_ctx_desc_devices(smmu_domain, mm_get_enqcmd_pasid(mm), - &quiet_cd); + spin_lock_irqsave(&smmu_domain->devices_lock, flags); + list_for_each_entry(master, &smmu_domain->devices, domain_head) { + struct arm_smmu_cd target; + struct arm_smmu_cd *cdptr; + + cdptr = arm_smmu_get_cd_ptr(master, mm_get_enqcmd_pasid(mm)); + if (WARN_ON(!cdptr)) + continue; + arm_smmu_make_sva_cd(&target, master, NULL, smmu_mn->cd->asid); + arm_smmu_write_cd_entry(master, mm_get_enqcmd_pasid(mm), cdptr, + &target); + } + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid); arm_smmu_atc_inv_domain(smmu_domain, mm_get_enqcmd_pasid(mm), 0, 0); @@ -383,6 +419,8 @@ static int __arm_smmu_sva_bind(struct device *dev, ioasid_t pasid, struct mm_struct *mm) { int ret; + struct arm_smmu_cd target; + struct arm_smmu_cd *cdptr; struct arm_smmu_bond *bond; struct arm_smmu_master *master = dev_iommu_priv_get(dev); struct iommu_domain *domain = iommu_get_domain_for_dev(dev); @@ -409,9 +447,13 @@ static int __arm_smmu_sva_bind(struct device *dev, ioasid_t pasid, goto err_free_bond; } - ret = arm_smmu_write_ctx_desc(master, pasid, bond->smmu_mn->cd); - if (ret) + cdptr = arm_smmu_alloc_cd_ptr(master, mm_get_enqcmd_pasid(mm)); + if (!cdptr) { + ret = -ENOMEM; goto err_put_notifier; + } + arm_smmu_make_sva_cd(&target, master, mm, bond->smmu_mn->cd->asid); + arm_smmu_write_cd_entry(master, pasid, cdptr, &target); list_add(&bond->list, &master->bonds); return 0; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 0aacd95f34a479..d01b632197c0b7 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -84,12 +84,6 @@ struct arm_smmu_option_prop { DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa); DEFINE_MUTEX(arm_smmu_asid_lock); -/* - * Special value used by SVA when a process dies, to quiesce a CD without - * disabling it. 
- */ -struct arm_smmu_ctx_desc quiet_cd = { 0 }; - static struct arm_smmu_option_prop arm_smmu_options[] = { { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" }, { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"}, @@ -1201,7 +1195,7 @@ static void arm_smmu_write_cd_l1_desc(__le64 *dst, u64 val = (l1_desc->l2ptr_dma & CTXDESC_L1_DESC_L2PTR_MASK) | CTXDESC_L1_DESC_V; - /* See comment in arm_smmu_write_ctx_desc() */ + /* The HW has 64 bit atomicity with stores to the L2 CD table */ WRITE_ONCE(*dst, cpu_to_le64(val)); } @@ -1224,12 +1218,15 @@ struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, return &l1_desc->l2ptr[ssid % CTXDESC_L2_ENTRIES]; } -static struct arm_smmu_cd *arm_smmu_alloc_cd_ptr(struct arm_smmu_master *master, - u32 ssid) +struct arm_smmu_cd *arm_smmu_alloc_cd_ptr(struct arm_smmu_master *master, + u32 ssid) { struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; struct arm_smmu_device *smmu = master->smmu; + might_sleep(); + iommu_group_mutex_assert(master->dev); + if (!cd_table->cdtab) { if (arm_smmu_alloc_cd_tables(master)) return NULL; @@ -1345,91 +1342,6 @@ void arm_smmu_clear_cd(struct arm_smmu_master *master, ioasid_t ssid) arm_smmu_write_cd_entry(master, ssid, cdptr, &target); } -static void arm_smmu_clean_cd_entry(struct arm_smmu_cd *target) -{ - struct arm_smmu_cd used = {}; - int i; - - arm_smmu_get_cd_used(target->data, used.data); - for (i = 0; i != ARRAY_SIZE(target->data); i++) - target->data[i] &= used.data[i]; -} - -int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid, - struct arm_smmu_ctx_desc *cd) -{ - /* - * This function handles the following cases: - * - * (1) Install primary CD, for normal DMA traffic (SSID = IOMMU_NO_PASID = 0). - * (2) Install a secondary CD, for SID+SSID traffic. - * (3) Update ASID of a CD. Atomically write the first 64 bits of the - * CD, then invalidate the old entry and mappings. - * (4) Quiesce the context without clearing the valid bit. Disable - * translation, and ignore any translation fault. - * (5) Remove a secondary CD. - */ - u64 val; - bool cd_live; - struct arm_smmu_cd target; - struct arm_smmu_cd *cdptr = ⌖ - struct arm_smmu_cd *cd_table_entry; - struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; - struct arm_smmu_device *smmu = master->smmu; - - if (WARN_ON(ssid >= (1 << cd_table->s1cdmax))) - return -E2BIG; - - cd_table_entry = arm_smmu_alloc_cd_ptr(master, ssid); - if (!cd_table_entry) - return -ENOMEM; - - target = *cd_table_entry; - val = le64_to_cpu(cdptr->data[0]); - cd_live = !!(val & CTXDESC_CD_0_V); - - if (!cd) { /* (5) */ - val = 0; - } else if (cd == &quiet_cd) { /* (4) */ - if (!(smmu->features & ARM_SMMU_FEAT_STALL_FORCE)) - val &= ~(CTXDESC_CD_0_S | CTXDESC_CD_0_R); - val |= CTXDESC_CD_0_TCR_EPD0; - } else if (cd_live) { /* (3) */ - val &= ~CTXDESC_CD_0_ASID; - val |= FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid); - /* - * Until CD+TLB invalidation, both ASIDs may be used for tagging - * this substream's traffic - */ - } else { /* (1) and (2) */ - cdptr->data[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK); - cdptr->data[2] = 0; - cdptr->data[3] = cpu_to_le64(cd->mair); - - val = cd->tcr | -#ifdef __BIG_ENDIAN - CTXDESC_CD_0_ENDI | -#endif - CTXDESC_CD_0_R | CTXDESC_CD_0_A | - (cd->mm ? 
0 : CTXDESC_CD_0_ASET) | - CTXDESC_CD_0_AA64 | - FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) | - CTXDESC_CD_0_V; - - if (cd_table->stall_enabled) - val |= CTXDESC_CD_0_S; - } - cdptr->data[0] = cpu_to_le64(val); - /* - * Since the above is updating the CD entry based on the current value - * without zeroing unused bits it needs fixing before being passed to - * the programming logic. - */ - arm_smmu_clean_cd_entry(&target); - arm_smmu_write_cd_entry(master, ssid, cd_table_entry, &target); - return 0; -} - static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master) { int ret; @@ -1438,7 +1350,6 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_master *master) struct arm_smmu_device *smmu = master->smmu; struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; - cd_table->stall_enabled = master->stall_enabled; cd_table->s1cdmax = master->ssid_bits; max_contexts = 1 << cd_table->s1cdmax; @@ -1536,7 +1447,7 @@ arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc) val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span); val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK; - /* See comment in arm_smmu_write_ctx_desc() */ + /* The HW has 64 bit atomicity with stores to the L2 STE table */ WRITE_ONCE(*dst, cpu_to_le64(val)); } diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 99fd6f24caa818..8098bf8836a180 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -609,8 +609,6 @@ struct arm_smmu_ctx_desc_cfg { u8 s1fmt; /* log2 of the maximum number of CDs supported by this table */ u8 s1cdmax; - /* Whether CD entries in this table have the stall bit set. */ - u8 stall_enabled:1; }; struct arm_smmu_s2_cfg { @@ -749,11 +747,12 @@ static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom) extern struct xarray arm_smmu_asid_xa; extern struct mutex arm_smmu_asid_lock; -extern struct arm_smmu_ctx_desc quiet_cd; void arm_smmu_clear_cd(struct arm_smmu_master *master, ioasid_t ssid); struct arm_smmu_cd *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, u32 ssid); +struct arm_smmu_cd *arm_smmu_alloc_cd_ptr(struct arm_smmu_master *master, + u32 ssid); void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, struct arm_smmu_master *master, struct arm_smmu_domain *smmu_domain); @@ -761,8 +760,6 @@ void arm_smmu_write_cd_entry(struct arm_smmu_master *master, int ssid, struct arm_smmu_cd *cdptr, const struct arm_smmu_cd *target); -int arm_smmu_write_ctx_desc(struct arm_smmu_master *smmu_master, int ssid, - struct arm_smmu_ctx_desc *cd); void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid); void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid, size_t granule, bool leaf, From patchwork Tue Apr 16 19:28:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Gunthorpe X-Patchwork-Id: 13632433 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0AB67C04FF6 for ; Tue, 16 Apr 2024 19:29:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=gmYvhTWL3Xkza6nrbE0s3t/iR5uAR5gj9wgMpSaZ7EM=; b=YSp6tCK725qiBb WlN3f7KXg24aFrJxvUzZF8pN6kmFuF0/MoKxQXGQhe09LkWqNhj/ocqRXxkH4UrN0D1B917E83KeO qkpn+0YAkLopdEvxcM/1iojUGIFobEBbpmksk/XuXlQ96Pc8BUf625cvw1Es24273WZovY1fmVHZo 2thyTAqC9FHjK6+5Dc2wlEdwXZWuju/jgrih1sPC9X484KfWkMQ3lJHil2fR3vWld1SPXxGRfz9oD y1rmEI+8h6OLx4p56tyy/jzRCopcOjqFK8yKC942Vt8+RoNvHETObNPeL9c1MWslZvC5JKe4ckl2k zJsIxAWwv01QLDalGJ/g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUO-0000000DXZf-2oTU; Tue, 16 Apr 2024 19:29:08 +0000 Received: from mail-mw2nam04on20600.outbound.protection.outlook.com ([2a01:111:f403:240a::600] helo=NAM04-MW2-obe.outbound.protection.outlook.com) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rwoUE-0000000DXPF-34E2 for linux-arm-kernel@lists.infradead.org; Tue, 16 Apr 2024 19:29:02 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=UObruCvevDKE9mP9KnbrUBfFe0ab4Rk6QiysMRggsePgXh8pAkVHLh4AbG6RkSzMnu8mc2vIrSk/9eT1xX8ui2E+G0F0v1Z6qju0yKqa06EBGyQTHTRvdpURCg0u3Do6ommDZaiHqln0sqL6cL9BZ7CWP1lgPDJ6uEw2zPWPWtq66BRcJ89PrKi5NdNIh2+ZC8QFfMx2jx8sFX3wETS+ArjbfWCofuD/0UY5SDbDUVhmQ0IBW23wWnGzB7GMNVFWhmnDxyBz9IrXc+mLdVRKqnDT5+iMbRsPa3HcWNNFkFiBlx9/FxZ0oOb6RYOHnd4EFEkjr0tUDiVktcjCz0Sriw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=taoiH1AVBd3PfWHuo628rJ9YiYQvL5ZYmxibhBx3s9I=; b=WF6min7GGpeV2k+d8T6yn6+zV5NM8jbvI2ekMPxieQMcsGJqn6ZLP95fD4vIQUDGID5FseexmxdFhfjNNwpoafZHaxSCKwlZ2KikIACwPprtU7IOnWBEjsyT+yfrlEewbWC/ZZXWyHEKcq9258N8o7eb5zvRzQ2Q23OpiJWWkk3lkW879+HdcR1QH02Q8Jb09qrATJ4xOqTZX0pNrUYBLNaLYdu75a5M1vZFEGVtTZ9533DrajxL8lNmgoagKnWSulG3IDrwG70wXY/R4mS3NMStPnekgNyOYcjjD+zJ1dfUaVw7FxgaqywI5+nk+mt5Vm/VYLKMaIGCi5jtRjPtaw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=taoiH1AVBd3PfWHuo628rJ9YiYQvL5ZYmxibhBx3s9I=; b=IJHEwyR2ylCtE0XXsh1v/FeRZdQzK7UBJKt+wPpA5Pm5RWPwW/TvgjxrTV2crSG1RfTZRvnKlwN/2SNayvw+jmoBnPvBCeKmtbqWwgLoXGNCgIE5SU5E0VuSV8H/mg5hyfV0fHjPKoGen+uPNGGfGL5tvAYohGljLpmQQ+8GfW0K40BVC5cXikCkHpCD4TX+0jU/LNYEm0Wd9QdztKg101RkFCk6PG1ITzELglzZHfTFmEiuG5Kw5nK/1Epkj0MHHmEO3bsNhi5SgyICyMHo+wzdGKGwy0kCWhcQEXc5EuPgjk74e2uUd41X8nHH3q5+Xes+PHXipyuCEILFkJ4zwg== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) by CY8PR12MB8213.namprd12.prod.outlook.com (2603:10b6:930:71::10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7452.50; Tue, 16 Apr 2024 19:28:23 +0000 Received: from DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222]) by 
DM6PR12MB3849.namprd12.prod.outlook.com ([fe80::6aec:dbca:a593:a222%5]) with mapi id 15.20.7452.049; Tue, 16 Apr 2024 19:28:23 +0000 From: Jason Gunthorpe To: iommu@lists.linux.dev, Joerg Roedel , linux-arm-kernel@lists.infradead.org, Robin Murphy , Will Deacon Cc: Eric Auger , Moritz Fischer , Moritz Fischer , Michael Shavit , Nicolin Chen , patches@lists.linux.dev, Shameerali Kolothum Thodi , Mostafa Saleh Subject: [PATCH v7 8/9] iommu/arm-smmu-v3: Build the whole CD in arm_smmu_make_s1_cd() Date: Tue, 16 Apr 2024 16:28:19 -0300 Message-ID: <8-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com> References: X-ClientProxiedBy: SN6PR01CA0005.prod.exchangelabs.com (2603:10b6:805:b6::18) To DM6PR12MB3849.namprd12.prod.outlook.com (2603:10b6:5:1c7::26) MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DM6PR12MB3849:EE_|CY8PR12MB8213:EE_ X-MS-Office365-Filtering-Correlation-Id: be5bb160-4c0c-48af-271c-08dc5e4b6070 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: q6HyVaxCQ1neWYGkUj44hDmc+9WuY2Gr4pdKSbsvltyMfP/a/80wors4TC/7hKL6Ap364gnMegituctrp4/N3XQ18Wq42CbRPlb8PrKhw5juVhba48/oZtDywIvVIJOpRfTi1CNI5Fttlo80G3qlGo1GL1g8dGBV9zrN9ZDbdQeIM0Yv6euXPffwR+PjV2IdnWgjxQwxu8HR1RtUprIsGQDJAo9u6+9KvCQxKNnS6N4ThxZxEdpTdQLrWJ8ySzSOQ1J64SdtW7mueKHsxsWTz5Sh3UX5fyScsWdxToMnqcV+B4G0CKM3J6Lf/VpPqk8I0FxlLWZpDIr2VVkFBZP9sjnHN/tBnJPHbkFKrLlQ0KSABYBaq1HMk1C1w0gph3id9LHp8IAehQOgTi2xJu3m2wkNIfHvdDTmaZH2nPL33ACupc8DOhjD9129syOlmzn6z9KSFgVizxPzVcu6d9ZMph285w382xpw9v/DE00owFiNDOwhVCwbs9RpXF+3uvPzMfz/9ypKY9BrnThMA6hT01PVjrKwTZ1yqGEA3vhMeWB5iJRqVzYkeMv7oMuX8b+x2Cp71LBY1YfwhQZhTNNn3/1Bi6R04M4vRRJ85nTOpUo5UZoIs9uCcGQoSxWWPGxNKTnHNJCRiiDZEP2oLtPho3nm4XvuMhoAHZMvsZKlhdU= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM6PR12MB3849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(1800799015)(376005)(7416005);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
Half the code was living in arm_smmu_domain_finalise_s1(); just move it here and take the values directly from the pgtbl_ops instead of storing copies.
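For reference, after this rework a CD can be produced from nothing more than a domain whose pgtbl_ops point into a populated struct io_pgtable, which is how the KUnit test added later in this series drives it. A minimal sketch of such a caller (the dummy smmu/master and all field values here are purely illustrative, not real driver state):

	struct arm_smmu_device smmu = {};		/* dummy device, test-style */
	struct arm_smmu_master master = { .smmu = &smmu };
	struct io_pgtable pgtbl = {};			/* synthetic stage-1 io_pgtable */
	struct arm_smmu_domain smmu_domain = {
		.pgtbl_ops = &pgtbl.ops,	/* io_pgtable_ops_to_pgtable() recovers &pgtbl */
		.cd = { .asid = 42 },		/* arbitrary ASID */
	};
	struct arm_smmu_cd cd;

	pgtbl.cfg.arm_lpae_s1_cfg.ttbr = 0x1000;	/* arbitrary TTBR0 */
	pgtbl.cfg.arm_lpae_s1_cfg.tcr.tsz = 16;		/* arbitrary TCR field */
	pgtbl.cfg.arm_lpae_s1_cfg.mair = 0xff;		/* arbitrary MAIR */

	arm_smmu_make_s1_cd(&cd, &master, &smmu_domain);
	/* cd now holds V, ASID, TTBR0, TCR and MAIR derived from pgtbl.cfg */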
Tested-by: Nicolin Chen Tested-by: Shameer Kolothum Reviewed-by: Michael Shavit Reviewed-by: Mostafa Saleh Signed-off-by: Jason Gunthorpe Reviewed-by: Nicolin Chen --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 47 ++++++++------------- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 3 -- 2 files changed, 18 insertions(+), 32 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index d01b632197c0b7..72402f6a7ed4e0 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1308,15 +1308,25 @@ void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, struct arm_smmu_domain *smmu_domain) { struct arm_smmu_ctx_desc *cd = &smmu_domain->cd; + const struct io_pgtable_cfg *pgtbl_cfg = + &io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops)->cfg; + typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = + &pgtbl_cfg->arm_lpae_s1_cfg.tcr; memset(target, 0, sizeof(*target)); target->data[0] = cpu_to_le64( - cd->tcr | + FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) | + FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) | + FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) | + FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) | + FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) | #ifdef __BIG_ENDIAN CTXDESC_CD_0_ENDI | #endif + CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_V | + FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) | CTXDESC_CD_0_AA64 | (master->stall_enabled ? CTXDESC_CD_0_S : 0) | CTXDESC_CD_0_R | @@ -1324,9 +1334,9 @@ void arm_smmu_make_s1_cd(struct arm_smmu_cd *target, CTXDESC_CD_0_ASET | FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) ); - - target->data[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK); - target->data[3] = cpu_to_le64(cd->mair); + target->data[1] = cpu_to_le64(pgtbl_cfg->arm_lpae_s1_cfg.ttbr & + CTXDESC_CD_1_TTB0_MASK); + target->data[3] = cpu_to_le64(pgtbl_cfg->arm_lpae_s1_cfg.mair); } void arm_smmu_clear_cd(struct arm_smmu_master *master, ioasid_t ssid) @@ -2284,13 +2294,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain) } static int arm_smmu_domain_finalise_s1(struct arm_smmu_device *smmu, - struct arm_smmu_domain *smmu_domain, - struct io_pgtable_cfg *pgtbl_cfg) + struct arm_smmu_domain *smmu_domain) { int ret; u32 asid; struct arm_smmu_ctx_desc *cd = &smmu_domain->cd; - typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr; refcount_set(&cd->refs, 1); @@ -2298,31 +2306,13 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_device *smmu, mutex_lock(&arm_smmu_asid_lock); ret = xa_alloc(&arm_smmu_asid_xa, &asid, cd, XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL); - if (ret) - goto out_unlock; - cd->asid = (u16)asid; - cd->ttbr = pgtbl_cfg->arm_lpae_s1_cfg.ttbr; - cd->tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, tcr->tsz) | - FIELD_PREP(CTXDESC_CD_0_TCR_TG0, tcr->tg) | - FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, tcr->irgn) | - FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) | - FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) | - FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) | - CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64; - cd->mair = pgtbl_cfg->arm_lpae_s1_cfg.mair; - - mutex_unlock(&arm_smmu_asid_lock); - return 0; - -out_unlock: mutex_unlock(&arm_smmu_asid_lock); return ret; } static int arm_smmu_domain_finalise_s2(struct arm_smmu_device *smmu, - struct arm_smmu_domain *smmu_domain, - struct io_pgtable_cfg *pgtbl_cfg) + struct arm_smmu_domain *smmu_domain) { int vmid; struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg; @@ -2346,8 +2336,7 @@ static int arm_smmu_domain_finalise(struct 
arm_smmu_domain *smmu_domain, struct io_pgtable_cfg pgtbl_cfg; struct io_pgtable_ops *pgtbl_ops; int (*finalise_stage_fn)(struct arm_smmu_device *smmu, - struct arm_smmu_domain *smmu_domain, - struct io_pgtable_cfg *pgtbl_cfg); + struct arm_smmu_domain *smmu_domain); /* Restrict the stage to what we can actually support */ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1)) @@ -2390,7 +2379,7 @@ static int arm_smmu_domain_finalise(struct arm_smmu_domain *smmu_domain, smmu_domain->domain.geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1; smmu_domain->domain.geometry.force_aperture = true; - ret = finalise_stage_fn(smmu, smmu_domain, &pgtbl_cfg); + ret = finalise_stage_fn(smmu, smmu_domain); if (ret < 0) { free_io_pgtable_ops(pgtbl_ops); return ret; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 8098bf8836a180..8f791f67f9f7f4 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -588,9 +588,6 @@ struct arm_smmu_strtab_l1_desc { struct arm_smmu_ctx_desc { u16 asid; - u64 ttbr; - u64 tcr; - u64 mair; refcount_t refs; struct mm_struct *mm;

From patchwork Tue Apr 16 19:28:20 2024
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 13632437
From: Jason Gunthorpe
To: iommu@lists.linux.dev, Joerg Roedel, linux-arm-kernel@lists.infradead.org, Robin Murphy, Will Deacon
Cc: Eric Auger, Moritz Fischer, Moritz Fischer, Michael Shavit, Nicolin Chen, patches@lists.linux.dev, Shameerali Kolothum Thodi, Mostafa Saleh
Subject: [PATCH v7 9/9] iommu/arm-smmu-v3: Add unit tests for arm_smmu_write_entry
Date: Tue, 16 Apr 2024 16:28:20 -0300
Message-ID: <9-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
In-Reply-To: <0-v7-cb149db3a320+3b5-smmuv3_newapi_p2_jgg@nvidia.com>
Add tests for some of the more common STE update operations that we expect to see, as well as some artificial STE updates to test the edges of arm_smmu_write_entry. These also serve as a record of which common operation is expected to be hitless, and how many syncs they require.

arm_smmu_write_entry implements a generic algorithm that updates an STE/CD to any other arbitrary STE/CD configuration. The update requires a sequence of write+sync operations with some invariants that must be held true after each sync. arm_smmu_write_entry lends itself well to unit-testing since the function's interaction with the STE/CD is already abstracted by input callbacks that we can hook to introspect into the sequence of operations. We can use these hooks to guarantee that invariants are held throughout the entire update operation.

Link: https://lore.kernel.org/r/20240106083617.1173871-3-mshavit@google.com
Signed-off-by: Michael Shavit
Signed-off-by: Jason Gunthorpe
Tested-by: Nicolin Chen
---
 drivers/iommu/Kconfig                        |  12 +-
 drivers/iommu/arm/arm-smmu-v3/Makefile       |   2 +
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c  |   6 +-
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c | 467 ++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c  |  36 +-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h  |  30 ++
 6 files changed, 525 insertions(+), 28 deletions(-)
 create mode 100644 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index 0af39bbbe3a30e..2e597102baf6e5 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -397,9 +397,9 @@ config ARM_SMMU_V3 Say Y here if your system includes an IOMMU device implementing the ARM SMMUv3 architecture. +if ARM_SMMU_V3 config ARM_SMMU_V3_SVA bool "Shared Virtual Addressing support for the ARM SMMUv3" - depends on ARM_SMMU_V3 select IOMMU_SVA select IOMMU_IOPF select MMU_NOTIFIER @@ -410,6 +410,16 @@ config ARM_SMMU_V3_SVA Say Y here if your system supports SVA extensions such as PCIe PASID and PRI. +config ARM_SMMU_V3_KUNIT_TEST + tristate "KUnit tests for arm-smmu-v3 driver" if !KUNIT_ALL_TESTS + depends on KUNIT + default KUNIT_ALL_TESTS + help + Enable this option to unit-test arm-smmu-v3 driver functions. + + If unsure, say N.
+endif + config S390_IOMMU def_bool y if S390 && PCI depends on S390 && PCI diff --git a/drivers/iommu/arm/arm-smmu-v3/Makefile b/drivers/iommu/arm/arm-smmu-v3/Makefile index 54feb1ecccad89..014a997753a8a2 100644 --- a/drivers/iommu/arm/arm-smmu-v3/Makefile +++ b/drivers/iommu/arm/arm-smmu-v3/Makefile @@ -3,3 +3,5 @@ obj-$(CONFIG_ARM_SMMU_V3) += arm_smmu_v3.o arm_smmu_v3-objs-y += arm-smmu-v3.o arm_smmu_v3-objs-$(CONFIG_ARM_SMMU_V3_SVA) += arm-smmu-v3-sva.o arm_smmu_v3-objs := $(arm_smmu_v3-objs-y) + +obj-$(CONFIG_ARM_SMMU_V3_KUNIT_TEST) += arm-smmu-v3-test.o diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index 80a7d559ef2d3f..f56a2d38012b5c 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -120,9 +120,9 @@ static u64 page_size_to_cd(void) return ARM_LPAE_TCR_TG0_4K; } -static void arm_smmu_make_sva_cd(struct arm_smmu_cd *target, - struct arm_smmu_master *master, - struct mm_struct *mm, u16 asid) +void arm_smmu_make_sva_cd(struct arm_smmu_cd *target, + struct arm_smmu_master *master, struct mm_struct *mm, + u16 asid) { u64 par; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c new file mode 100644 index 00000000000000..14c8e40712a70e --- /dev/null +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c @@ -0,0 +1,467 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright 2024 Google LLC. + */ +#include +#include + +#include "arm-smmu-v3.h" + +struct arm_smmu_test_writer { + struct arm_smmu_entry_writer writer; + struct kunit *test; + const __le64 *init_entry; + const __le64 *target_entry; + __le64 *entry; + + bool invalid_entry_written; + unsigned int num_syncs; +}; + +#define NUM_ENTRY_QWORDS 8 +#define NUM_EXPECTED_SYNCS(x) x + +static struct arm_smmu_ste bypass_ste; +static struct arm_smmu_ste abort_ste; +static struct arm_smmu_device smmu = { + .features = ARM_SMMU_FEAT_STALLS | ARM_SMMU_FEAT_ATTR_TYPES_OVR +}; + +static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry, + const __le64 *used_bits, + const __le64 *target, + unsigned int length) +{ + bool differs = false; + unsigned int i; + + for (i = 0; i < length; i++) { + if ((entry[i] & used_bits[i]) != target[i]) + differs = true; + } + return differs; +} + +static void +arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer) +{ + struct arm_smmu_test_writer *test_writer = + container_of(writer, struct arm_smmu_test_writer, writer); + __le64 *entry_used_bits; + + entry_used_bits = kunit_kzalloc( + test_writer->test, sizeof(*entry_used_bits) * NUM_ENTRY_QWORDS, + GFP_KERNEL); + KUNIT_ASSERT_NOT_NULL(test_writer->test, entry_used_bits); + + pr_debug("STE value is now set to: "); + print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8, + test_writer->entry, + NUM_ENTRY_QWORDS * sizeof(*test_writer->entry), + false); + + test_writer->num_syncs += 1; + if (!(test_writer->entry[0] & writer->ops->v_bit)) { + test_writer->invalid_entry_written = true; + } else { + /* + * At any stage in a hitless transition, the entry must be + * equivalent to either the initial entry or the target entry + * when only considering the bits used by the current + * configuration. 
+ */ + writer->ops->get_used(test_writer->entry, entry_used_bits); + KUNIT_EXPECT_FALSE( + test_writer->test, + arm_smmu_entry_differs_in_used_bits( + test_writer->entry, entry_used_bits, + test_writer->init_entry, NUM_ENTRY_QWORDS) && + arm_smmu_entry_differs_in_used_bits( + test_writer->entry, entry_used_bits, + test_writer->target_entry, + NUM_ENTRY_QWORDS)); + } +} + +static void +arm_smmu_v3_test_debug_print_used_bits(struct arm_smmu_entry_writer *writer, + const __le64 *ste) +{ + __le64 used_bits[NUM_ENTRY_QWORDS] = {}; + + arm_smmu_get_ste_used(ste, used_bits); + pr_debug("STE used bits: "); + print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8, used_bits, + sizeof(used_bits), false); +} + +static const struct arm_smmu_entry_writer_ops test_ste_ops = { + .v_bit = cpu_to_le64(STRTAB_STE_0_V), + .sync = arm_smmu_test_writer_record_syncs, + .get_used = arm_smmu_get_ste_used, +}; + +static const struct arm_smmu_entry_writer_ops test_cd_ops = { + .v_bit = cpu_to_le64(CTXDESC_CD_0_V), + .sync = arm_smmu_test_writer_record_syncs, + .get_used = arm_smmu_get_cd_used, +}; + +static void arm_smmu_v3_test_ste_expect_transition( + struct kunit *test, const struct arm_smmu_ste *cur, + const struct arm_smmu_ste *target, unsigned int num_syncs_expected, + bool hitless) +{ + struct arm_smmu_ste cur_copy = *cur; + struct arm_smmu_test_writer test_writer = { + .writer = { + .ops = &test_ste_ops, + }, + .test = test, + .init_entry = cur->data, + .target_entry = target->data, + .entry = cur_copy.data, + .num_syncs = 0, + .invalid_entry_written = false, + + }; + + pr_debug("STE initial value: "); + print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8, cur_copy.data, + sizeof(cur_copy), false); + arm_smmu_v3_test_debug_print_used_bits(&test_writer.writer, cur->data); + pr_debug("STE target value: "); + print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8, target->data, + sizeof(cur_copy), false); + arm_smmu_v3_test_debug_print_used_bits(&test_writer.writer, + target->data); + + arm_smmu_write_entry(&test_writer.writer, cur_copy.data, target->data); + + KUNIT_EXPECT_EQ(test, test_writer.invalid_entry_written, !hitless); + KUNIT_EXPECT_EQ(test, test_writer.num_syncs, num_syncs_expected); + KUNIT_EXPECT_MEMEQ(test, target->data, cur_copy.data, sizeof(cur_copy)); +} + +static void arm_smmu_v3_test_ste_expect_hitless_transition( + struct kunit *test, const struct arm_smmu_ste *cur, + const struct arm_smmu_ste *target, unsigned int num_syncs_expected) +{ + arm_smmu_v3_test_ste_expect_transition(test, cur, target, + num_syncs_expected, true); +} + +static const dma_addr_t fake_cdtab_dma_addr = 0xF0F0F0F0F0F0; + +static void arm_smmu_test_make_cdtable_ste(struct arm_smmu_ste *ste, + const dma_addr_t dma_addr) +{ + struct arm_smmu_master master = { + .cd_table.cdtab_dma = dma_addr, + .cd_table.s1cdmax = 0xFF, + .cd_table.s1fmt = STRTAB_STE_0_S1FMT_64K_L2, + .smmu = &smmu, + }; + + arm_smmu_make_cdtable_ste(ste, &master); +} + +static void arm_smmu_v3_write_ste_test_bypass_to_abort(struct kunit *test) +{ + /* + * Bypass STEs has used bits in the first two Qwords, while abort STEs + * only have used bits in the first QWord. Transitioning from bypass to + * abort requires two syncs: the first to set the first qword and make + * the STE into an abort, the second to clean up the second qword. 
+ */ + arm_smmu_v3_test_ste_expect_hitless_transition( + test, &bypass_ste, &abort_ste, NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_abort_to_bypass(struct kunit *test) +{ + /* + * Transitioning from abort to bypass also requires two syncs: the first + * to set the second qword data required by the bypass STE, and the + * second to set the first qword and switch to bypass. + */ + arm_smmu_v3_test_ste_expect_hitless_transition( + test, &abort_ste, &bypass_ste, NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_cdtable_to_abort(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_cdtable_ste(&ste, fake_cdtab_dma_addr); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &ste, &abort_ste, + NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_abort_to_cdtable(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_cdtable_ste(&ste, fake_cdtab_dma_addr); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &abort_ste, &ste, + NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_cdtable_to_bypass(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_cdtable_ste(&ste, fake_cdtab_dma_addr); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &ste, &bypass_ste, + NUM_EXPECTED_SYNCS(3)); +} + +static void arm_smmu_v3_write_ste_test_bypass_to_cdtable(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_cdtable_ste(&ste, fake_cdtab_dma_addr); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &bypass_ste, &ste, + NUM_EXPECTED_SYNCS(3)); +} + +static void arm_smmu_test_make_s2_ste(struct arm_smmu_ste *ste, + bool ats_enabled) +{ + struct arm_smmu_master master = { + .smmu = &smmu, + .ats_enabled = ats_enabled, + }; + struct io_pgtable io_pgtable = {}; + struct arm_smmu_domain smmu_domain = { + .pgtbl_ops = &io_pgtable.ops, + }; + + io_pgtable.cfg.arm_lpae_s2_cfg.vttbr = 0xdaedbeefdeadbeefULL; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.ps = 1; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.tg = 2; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.sh = 3; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.orgn = 1; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.irgn = 2; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.sl = 3; + io_pgtable.cfg.arm_lpae_s2_cfg.vtcr.tsz = 4; + + arm_smmu_make_s2_domain_ste(ste, &master, &smmu_domain); +} + +static void arm_smmu_v3_write_ste_test_s2_to_abort(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_s2_ste(&ste, true); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &ste, &abort_ste, + NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_abort_to_s2(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_s2_ste(&ste, true); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &abort_ste, &ste, + NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_s2_to_bypass(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_s2_ste(&ste, true); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &ste, &bypass_ste, + NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_ste_test_bypass_to_s2(struct kunit *test) +{ + struct arm_smmu_ste ste; + + arm_smmu_test_make_s2_ste(&ste, true); + arm_smmu_v3_test_ste_expect_hitless_transition(test, &bypass_ste, &ste, + NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_test_cd_expect_transition( + struct kunit *test, const struct arm_smmu_cd *cur, + const struct arm_smmu_cd *target, unsigned int num_syncs_expected, + bool 
hitless) +{ + struct arm_smmu_cd cur_copy = *cur; + struct arm_smmu_test_writer test_writer = { + .writer = { + .ops = &test_cd_ops, + }, + .test = test, + .init_entry = cur->data, + .target_entry = target->data, + .entry = cur_copy.data, + .num_syncs = 0, + .invalid_entry_written = false, + + }; + + pr_debug("CD initial value: "); + print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8, cur_copy.data, + sizeof(cur_copy), false); + arm_smmu_v3_test_debug_print_used_bits(&test_writer.writer, cur->data); + pr_debug("CD target value: "); + print_hex_dump_debug(" ", DUMP_PREFIX_NONE, 16, 8, target->data, + sizeof(cur_copy), false); + arm_smmu_v3_test_debug_print_used_bits(&test_writer.writer, + target->data); + + arm_smmu_write_entry(&test_writer.writer, cur_copy.data, target->data); + + KUNIT_EXPECT_EQ(test, test_writer.invalid_entry_written, !hitless); + KUNIT_EXPECT_EQ(test, test_writer.num_syncs, num_syncs_expected); + KUNIT_EXPECT_MEMEQ(test, target->data, cur_copy.data, sizeof(cur_copy)); +} + +static void arm_smmu_v3_test_cd_expect_non_hitless_transition( + struct kunit *test, const struct arm_smmu_cd *cur, + const struct arm_smmu_cd *target, unsigned int num_syncs_expected) +{ + arm_smmu_v3_test_cd_expect_transition(test, cur, target, + num_syncs_expected, false); +} + +static void arm_smmu_v3_test_cd_expect_hitless_transition( + struct kunit *test, const struct arm_smmu_cd *cur, + const struct arm_smmu_cd *target, unsigned int num_syncs_expected) +{ + arm_smmu_v3_test_cd_expect_transition(test, cur, target, + num_syncs_expected, true); +} + +static void arm_smmu_test_make_s1_cd(struct arm_smmu_cd *cd, unsigned int asid) +{ + struct arm_smmu_master master = { + .smmu = &smmu, + }; + struct io_pgtable io_pgtable = {}; + struct arm_smmu_domain smmu_domain = { + .pgtbl_ops = &io_pgtable.ops, + .cd = { + .asid = asid, + }, + }; + + io_pgtable.cfg.arm_lpae_s1_cfg.ttbr = 0xdaedbeefdeadbeefULL; + io_pgtable.cfg.arm_lpae_s1_cfg.tcr.ips = 1; + io_pgtable.cfg.arm_lpae_s1_cfg.tcr.tg = 2; + io_pgtable.cfg.arm_lpae_s1_cfg.tcr.sh = 3; + io_pgtable.cfg.arm_lpae_s1_cfg.tcr.orgn = 1; + io_pgtable.cfg.arm_lpae_s1_cfg.tcr.irgn = 2; + io_pgtable.cfg.arm_lpae_s1_cfg.tcr.tsz = 4; + io_pgtable.cfg.arm_lpae_s1_cfg.mair = 0xabcdef012345678ULL; + + arm_smmu_make_s1_cd(cd, &master, &smmu_domain); +} + +static void arm_smmu_v3_write_cd_test_s1_clear(struct kunit *test) +{ + struct arm_smmu_cd cd = {}; + struct arm_smmu_cd cd_2; + + arm_smmu_test_make_s1_cd(&cd_2, 1997); + arm_smmu_v3_test_cd_expect_non_hitless_transition( + test, &cd, &cd_2, NUM_EXPECTED_SYNCS(2)); + arm_smmu_v3_test_cd_expect_non_hitless_transition( + test, &cd_2, &cd, NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_cd_test_s1_change_asid(struct kunit *test) +{ + struct arm_smmu_cd cd = {}; + struct arm_smmu_cd cd_2; + + arm_smmu_test_make_s1_cd(&cd, 778); + arm_smmu_test_make_s1_cd(&cd_2, 1997); + arm_smmu_v3_test_cd_expect_hitless_transition(test, &cd, &cd_2, + NUM_EXPECTED_SYNCS(1)); + arm_smmu_v3_test_cd_expect_hitless_transition(test, &cd_2, &cd, + NUM_EXPECTED_SYNCS(1)); +} + +static void arm_smmu_test_make_sva_cd(struct arm_smmu_cd *cd, unsigned int asid) +{ + struct arm_smmu_master master = { + .smmu = &smmu, + }; + struct mm_struct mm = { + .pgd = (void *)0xdaedbeefdeadbeefULL, + }; + + arm_smmu_make_sva_cd(cd, &master, &mm, asid); +} + +static void arm_smmu_test_make_sva_release_cd(struct arm_smmu_cd *cd, + unsigned int asid) +{ + struct arm_smmu_master master = { + .smmu = &smmu, + }; + + arm_smmu_make_sva_cd(cd, &master, 
NULL, asid); +} + +static void arm_smmu_v3_write_cd_test_sva_clear(struct kunit *test) +{ + struct arm_smmu_cd cd = {}; + struct arm_smmu_cd cd_2; + + arm_smmu_test_make_sva_cd(&cd_2, 1997); + arm_smmu_v3_test_cd_expect_non_hitless_transition( + test, &cd, &cd_2, NUM_EXPECTED_SYNCS(2)); + arm_smmu_v3_test_cd_expect_non_hitless_transition( + test, &cd_2, &cd, NUM_EXPECTED_SYNCS(2)); +} + +static void arm_smmu_v3_write_cd_test_sva_release(struct kunit *test) +{ + struct arm_smmu_cd cd; + struct arm_smmu_cd cd_2; + + arm_smmu_test_make_sva_cd(&cd, 1997); + arm_smmu_test_make_sva_release_cd(&cd_2, 1997); + arm_smmu_v3_test_cd_expect_hitless_transition(test, &cd, &cd_2, + NUM_EXPECTED_SYNCS(2)); + arm_smmu_v3_test_cd_expect_hitless_transition(test, &cd_2, &cd, + NUM_EXPECTED_SYNCS(2)); +} + +static struct kunit_case arm_smmu_v3_test_cases[] = { + KUNIT_CASE(arm_smmu_v3_write_ste_test_bypass_to_abort), + KUNIT_CASE(arm_smmu_v3_write_ste_test_abort_to_bypass), + KUNIT_CASE(arm_smmu_v3_write_ste_test_cdtable_to_abort), + KUNIT_CASE(arm_smmu_v3_write_ste_test_abort_to_cdtable), + KUNIT_CASE(arm_smmu_v3_write_ste_test_cdtable_to_bypass), + KUNIT_CASE(arm_smmu_v3_write_ste_test_bypass_to_cdtable), + KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_abort), + KUNIT_CASE(arm_smmu_v3_write_ste_test_abort_to_s2), + KUNIT_CASE(arm_smmu_v3_write_ste_test_s2_to_bypass), + KUNIT_CASE(arm_smmu_v3_write_ste_test_bypass_to_s2), + KUNIT_CASE(arm_smmu_v3_write_cd_test_s1_clear), + KUNIT_CASE(arm_smmu_v3_write_cd_test_s1_change_asid), + KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_clear), + KUNIT_CASE(arm_smmu_v3_write_cd_test_sva_release), + {}, +}; + +static int arm_smmu_v3_test_suite_init(struct kunit_suite *test) +{ + arm_smmu_make_bypass_ste(&smmu, &bypass_ste); + arm_smmu_make_abort_ste(&abort_ste); + return 0; +} + +static struct kunit_suite arm_smmu_v3_test_module = { + .name = "arm-smmu-v3-kunit-test", + .suite_init = arm_smmu_v3_test_suite_init, + .test_cases = arm_smmu_v3_test_cases, +}; +kunit_test_suites(&arm_smmu_v3_test_module); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 72402f6a7ed4e0..3ffaa3b34b44bf 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -42,18 +42,6 @@ enum arm_smmu_msi_index { ARM_SMMU_MAX_MSIS, }; -struct arm_smmu_entry_writer_ops; -struct arm_smmu_entry_writer { - const struct arm_smmu_entry_writer_ops *ops; - struct arm_smmu_master *master; -}; - -struct arm_smmu_entry_writer_ops { - __le64 v_bit; - void (*get_used)(const __le64 *entry, __le64 *used); - void (*sync)(struct arm_smmu_entry_writer *writer); -}; - #define NUM_ENTRY_QWORDS 8 static_assert(sizeof(struct arm_smmu_ste) == NUM_ENTRY_QWORDS * sizeof(u64)); static_assert(sizeof(struct arm_smmu_cd) == NUM_ENTRY_QWORDS * sizeof(u64)); @@ -980,7 +968,7 @@ void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid) * would be nice if this was complete according to the spec, but minimally it * has to capture the bits this driver uses. */ -static void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits) +void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits) { unsigned int cfg = FIELD_GET(STRTAB_STE_0_CFG, le64_to_cpu(ent[0])); @@ -1102,8 +1090,8 @@ static bool entry_set(struct arm_smmu_entry_writer *writer, __le64 *entry, * V=0 process. This relies on the IGNORED behavior described in the * specification. 
*/ -static void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, - __le64 *entry, const __le64 *target) +void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *entry, + const __le64 *target) { __le64 unused_update[NUM_ENTRY_QWORDS]; u8 used_qword_diff; @@ -1257,7 +1245,7 @@ struct arm_smmu_cd_writer { unsigned int ssid; }; -static void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits) +void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits) { used_bits[0] = cpu_to_le64(CTXDESC_CD_0_V); if (!(ent[0] & cpu_to_le64(CTXDESC_CD_0_V))) @@ -1514,7 +1502,7 @@ static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid, } } -static void arm_smmu_make_abort_ste(struct arm_smmu_ste *target) +void arm_smmu_make_abort_ste(struct arm_smmu_ste *target) { memset(target, 0, sizeof(*target)); target->data[0] = cpu_to_le64( @@ -1522,8 +1510,8 @@ static void arm_smmu_make_abort_ste(struct arm_smmu_ste *target) FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT)); } -static void arm_smmu_make_bypass_ste(struct arm_smmu_device *smmu, - struct arm_smmu_ste *target) +void arm_smmu_make_bypass_ste(struct arm_smmu_device *smmu, + struct arm_smmu_ste *target) { memset(target, 0, sizeof(*target)); target->data[0] = cpu_to_le64( @@ -1535,8 +1523,8 @@ static void arm_smmu_make_bypass_ste(struct arm_smmu_device *smmu, STRTAB_STE_1_SHCFG_INCOMING)); } -static void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target, - struct arm_smmu_master *master) +void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target, + struct arm_smmu_master *master) { struct arm_smmu_ctx_desc_cfg *cd_table = &master->cd_table; struct arm_smmu_device *smmu = master->smmu; @@ -1585,9 +1573,9 @@ static void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target, } } -static void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target, - struct arm_smmu_master *master, - struct arm_smmu_domain *smmu_domain) +void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target, + struct arm_smmu_master *master, + struct arm_smmu_domain *smmu_domain) { struct arm_smmu_s2_cfg *s2_cfg = &smmu_domain->s2_cfg; const struct io_pgtable_cfg *pgtbl_cfg = diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 8f791f67f9f7f4..0455498d24c730 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -737,6 +737,36 @@ struct arm_smmu_domain { struct list_head mmu_notifiers; }; +/* The following are exposed for testing purposes. 
*/ +struct arm_smmu_entry_writer_ops; +struct arm_smmu_entry_writer { + const struct arm_smmu_entry_writer_ops *ops; + struct arm_smmu_master *master; +}; + +struct arm_smmu_entry_writer_ops { + __le64 v_bit; + void (*get_used)(const __le64 *entry, __le64 *used); + void (*sync)(struct arm_smmu_entry_writer *writer); +}; + +void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits); +void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits); +void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *cur, + const __le64 *target); + +void arm_smmu_make_abort_ste(struct arm_smmu_ste *target); +void arm_smmu_make_bypass_ste(struct arm_smmu_device *smmu, + struct arm_smmu_ste *target); +void arm_smmu_make_cdtable_ste(struct arm_smmu_ste *target, + struct arm_smmu_master *master); +void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target, + struct arm_smmu_master *master, + struct arm_smmu_domain *smmu_domain); +void arm_smmu_make_sva_cd(struct arm_smmu_cd *target, + struct arm_smmu_master *master, struct mm_struct *mm, + u16 asid); + static inline struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom) { return container_of(dom, struct arm_smmu_domain, domain);