From patchwork Thu Aug 31 17:44:34 2023
X-Patchwork-Submitter: Michael Shavit
X-Patchwork-Id: 13371763
Date: Fri, 1 Sep 2023 01:44:34 +0800
In-Reply-To: <20230831174536.103472-1-mshavit@google.com>
Mime-Version: 1.0
References: <20230831174536.103472-1-mshavit@google.com>
X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog
Message-ID: <20230901014413.v7.5.I219054a6cf538df5bb22f4ada2d9933155d6058c@changeid>
Subject: [PATCH v7 5/9] iommu/arm-smmu-v3: Refactor write_ctx_desc
From: Michael Shavit
To: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Cc: will@kernel.org, robin.murphy@arm.com, jean-philippe@linaro.org,
 jgg@nvidia.com, nicolinc@nvidia.com, Michael Shavit, Jason Gunthorpe,
 Joerg Roedel, Kevin Tian, "Kirill A. Shutemov", Lu Baolu, Mark Brown,
 Yicong Yang

Update arm_smmu_write_ctx_desc and downstream functions to operate on
a master instead of an smmu domain. We expect arm_smmu_write_ctx_desc()
to only be called to write a CD entry into a CD table owned by the
master. Under the hood, arm_smmu_write_ctx_desc still fetches the CD
table from the domain that is attached to the master, but a subsequent
commit will move that table's ownership to the master.

Note that this change isn't a no-op refactor: SVA now calls
arm_smmu_write_ctx_desc in a loop for every master the domain is
attached to, even though they all currently share the same CD table.
The loop is redundant for now, but it becomes necessary when the CD
table becomes per-master in a subsequent commit.

Reviewed-by: Jason Gunthorpe
Reviewed-by: Nicolin Chen
Signed-off-by: Michael Shavit
---
Changes in v7:
- Rename the arm_smmu_write_ctx_desc_devices helper introduced in a
  previous version to arm_smmu_update_ctx_desc_devices, to distinguish
  it from the case where a potentially new CD entry is written. Add a
  comment clarifying that the update operation is assumed not to fail,
  so it is safe not to handle its return value. In contrast, the path
  that writes a new CD entry does not use the helper and does have to
  handle failure.
- Remove an unintended formatting change from this commit.

Changes in v6:
- Unwind the loop in arm_smmu_write_ctx_desc_devices on failure,
  NULLing out the CD entries that were successfully written.
- Add a comment clarifying the different usages of
  arm_smmu_write_ctx_desc_devices.

Changes in v3:
- Add a helper to write a CD to all masters that a domain is attached
  to.
- Fix an issue where an arm_smmu_write_ctx_desc error return wasn't
  correctly handled by its caller.

Changes in v2:
- Minor style fixes.

Changes in v1:
- arm_smmu_write_ctx_desc now gets the CD table to write to from the
  master parameter instead of a distinct parameter. This works well
  because the CD table being written to should always be owned by the
  master by the end of this series. This version no longer allows
  master to be NULL.
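For readers less familiar with the driver, here is a small standalone
sketch of the calling-convention change described above. This is plain
userspace C with made-up struct and function names, not SMMU driver
code: the CD write moves from taking the domain to taking a master, and
existing entries are updated once per attached master even while the
table is still shared.

#include <stdio.h>

struct domain;

/* Illustrative stand-ins only; not the driver's real structures. */
struct cd_table {
	int entries[8];
};

struct master {
	struct domain *domain;
	struct master *next;		/* models the domain's devices list */
};

struct domain {
	struct cd_table cd_table;	/* shared for now; per-master later */
	struct master *devices;
};

/* Old shape: the caller hands over the domain and writes its CD table. */
static int write_ctx_desc_domain(struct domain *d, int ssid, int val)
{
	d->cd_table.entries[ssid] = val;
	return 0;
}

/*
 * New shape: the caller hands over a master; the table is still looked
 * up through the attached domain, mirroring master->domain->cd_table
 * in the patch below.
 */
static int write_ctx_desc_master(struct master *m, int ssid, int val)
{
	m->domain->cd_table.entries[ssid] = val;
	return 0;
}

/*
 * Counterpart of the arm_smmu_update_ctx_desc_devices helper: update an
 * existing entry once per attached master. Redundant while the table is
 * shared, necessary once each master owns its own CD table.
 */
static void update_ctx_desc_devices(struct domain *d, int ssid, int val)
{
	struct master *m;

	for (m = d->devices; m; m = m->next)
		write_ctx_desc_master(m, ssid, val);
}

int main(void)
{
	struct domain d = { .devices = NULL };
	struct master m2 = { .domain = &d, .next = NULL };
	struct master m1 = { .domain = &d, .next = &m2 };

	d.devices = &m1;

	write_ctx_desc_domain(&d, 0, 1);	/* old calling convention */
	update_ctx_desc_devices(&d, 0, 2);	/* new per-master loop */

	printf("ssid 0 -> %d\n", d.cd_table.entries[0]);
	return 0;
}

The real helper additionally takes the domain's devices_lock and walks
the domain->devices list, as shown in the diff below.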
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 39 +++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 48 +++++++------------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  2 +-
 3 files changed, 54 insertions(+), 35 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 968559d625c40..c89cd8877c891 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -37,6 +37,25 @@ struct arm_smmu_bond {
 
 static DEFINE_MUTEX(sva_lock);
 
+/*
+ * Write the CD to the CD tables for all masters that this domain is attached
+ * to. Note that this is only used to update existing CD entries in the target
+ * CD table, for which it's assumed that arm_smmu_write_ctx_desc can't fail.
+ */
+static void arm_smmu_update_ctx_desc_devices(struct arm_smmu_domain *smmu_domain,
+					     int ssid,
+					     struct arm_smmu_ctx_desc *cd)
+{
+	struct arm_smmu_master *master;
+	unsigned long flags;
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+		arm_smmu_write_ctx_desc(master, ssid, cd);
+	}
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+}
+
 /*
  * Check if the CPU ASID is available on the SMMU side. If a private context
  * descriptor is using it, try to replace it.
@@ -80,7 +99,7 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 	 * be some overlap between use of both ASIDs, until we invalidate the
 	 * TLB.
 	 */
-	arm_smmu_write_ctx_desc(smmu_domain, 0, cd);
+	arm_smmu_update_ctx_desc_devices(smmu_domain, 0, cd);
 
 	/* Invalidate TLB entries previously associated with that context */
 	arm_smmu_tlb_inv_asid(smmu, asid);
@@ -222,7 +241,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	 * DMA may still be running. Keep the cd valid to avoid C_BAD_CD events,
 	 * but disable translation.
 	 */
-	arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, &quiet_cd);
+	arm_smmu_update_ctx_desc_devices(smmu_domain, mm->pasid, &quiet_cd);
 
 	arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
@@ -248,8 +267,10 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 			  struct mm_struct *mm)
 {
 	int ret;
+	unsigned long flags;
 	struct arm_smmu_ctx_desc *cd;
 	struct arm_smmu_mmu_notifier *smmu_mn;
+	struct arm_smmu_master *master;
 
 	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
 		if (smmu_mn->mn.mm == mm) {
@@ -279,7 +300,16 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 		goto err_free_cd;
 	}
 
-	ret = arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, cd);
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+		ret = arm_smmu_write_ctx_desc(master, mm->pasid, cd);
+		if (ret) {
+			list_for_each_entry_from_reverse(master, &smmu_domain->devices, domain_head)
+				arm_smmu_write_ctx_desc(master, mm->pasid, NULL);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 	if (ret)
 		goto err_put_notifier;
 
@@ -304,7 +334,8 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
 		return;
 
 	list_del(&smmu_mn->list);
-	arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, NULL);
+
+	arm_smmu_update_ctx_desc_devices(smmu_domain, mm->pasid, NULL);
 
 	/*
 	 * If we went through clear(), we've already invalidated, and no
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 44df7c0926802..69b9bb5c7f773 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -971,14 +971,12 @@ void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
 	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
 }
 
-static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
+static void arm_smmu_sync_cd(struct arm_smmu_master *master,
 			     int ssid, bool leaf)
 {
 	size_t i;
-	unsigned long flags;
-	struct arm_smmu_master *master;
 	struct arm_smmu_cmdq_batch cmds;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_device *smmu = master->smmu;
 	struct arm_smmu_cmdq_ent cmd = {
 		.opcode	= CMDQ_OP_CFGI_CD,
 		.cfgi	= {
@@ -988,15 +986,10 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 	};
 
 	cmds.num = 0;
-
-	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-		for (i = 0; i < master->num_streams; i++) {
-			cmd.cfgi.sid = master->streams[i].id;
-			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
-		}
+	for (i = 0; i < master->num_streams; i++) {
+		cmd.cfgi.sid = master->streams[i].id;
+		arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
 	}
-	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
 	arm_smmu_cmdq_batch_submit(smmu, &cmds);
 }
@@ -1026,14 +1019,13 @@ static void arm_smmu_write_cd_l1_desc(__le64 *dst,
 	WRITE_ONCE(*dst, cpu_to_le64(val));
 }
 
-static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
-				   u32 ssid)
+static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_master *master, u32 ssid)
 {
 	__le64 *l1ptr;
 	unsigned int idx;
 	struct arm_smmu_l1_ctx_desc *l1_desc;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->cd_table;
+	struct arm_smmu_device *smmu = master->smmu;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &master->domain->cd_table;
 
 	if (cdcfg->s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
 		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
@@ -1047,13 +1039,13 @@ static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
 		l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
 		arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
 		/* An invalid L1CD can be cached */
-		arm_smmu_sync_cd(smmu_domain, ssid, false);
+		arm_smmu_sync_cd(master, ssid, false);
 	}
 	idx = ssid & (CTXDESC_L2_ENTRIES - 1);
 	return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
 }
 
-int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
+int arm_smmu_write_ctx_desc(struct arm_smmu_master *master, int ssid,
 			    struct arm_smmu_ctx_desc *cd)
 {
 	/*
@@ -1070,11 +1062,12 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 	u64 val;
 	bool cd_live;
 	__le64 *cdptr;
+	struct arm_smmu_ctx_desc_cfg *cd_table = &master->domain->cd_table;
 
-	if (WARN_ON(ssid >= (1 << smmu_domain->cd_table.s1cdmax)))
+	if (WARN_ON(ssid >= (1 << cd_table->s1cdmax)))
 		return -E2BIG;
 
-	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
+	cdptr = arm_smmu_get_cd_ptr(master, ssid);
 	if (!cdptr)
 		return -ENOMEM;
 
@@ -1102,7 +1095,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 		 * order. Ensure that it observes valid values before reading
 		 * V=1.
 		 */
-		arm_smmu_sync_cd(smmu_domain, ssid, true);
+		arm_smmu_sync_cd(master, ssid, true);
 
 		val = cd->tcr |
 #ifdef __BIG_ENDIAN
@@ -1114,7 +1107,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 			FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
 			CTXDESC_CD_0_V;
 
-		if (smmu_domain->cd_table.stall_enabled)
+		if (cd_table->stall_enabled)
 			val |= CTXDESC_CD_0_S;
 	}
 
@@ -1128,7 +1121,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 	 * without first making the structure invalid.
 	 */
 	WRITE_ONCE(cdptr[0], cpu_to_le64(val));
-	arm_smmu_sync_cd(smmu_domain, ssid, true);
+	arm_smmu_sync_cd(master, ssid, true);
 	return 0;
 }
 
@@ -1138,7 +1131,7 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain,
 	int ret;
 	size_t l1size;
 	size_t max_contexts;
-	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_device *smmu = master->smmu;
 	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->cd_table;
 
 	cdcfg->stall_enabled = master->stall_enabled;
@@ -2135,12 +2128,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 			  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
 	cd->mair	= pgtbl_cfg->arm_lpae_s1_cfg.mair;
 
-	/*
-	 * Note that this will end up calling arm_smmu_sync_cd() before
-	 * the master has been added to the devices list for this domain.
-	 * This isn't an issue because the STE hasn't been installed yet.
-	 */
-	ret = arm_smmu_write_ctx_desc(smmu_domain, 0, cd);
+	ret = arm_smmu_write_ctx_desc(master, 0, cd);
 	if (ret)
 		goto out_free_cd_tables;
 
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 007758df57610..00f8e6388848e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -745,7 +745,7 @@ extern struct xarray arm_smmu_asid_xa;
 extern struct mutex arm_smmu_asid_lock;
 extern struct arm_smmu_ctx_desc quiet_cd;
 
-int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
+int arm_smmu_write_ctx_desc(struct arm_smmu_master *smmu_master, int ssid,
 			    struct arm_smmu_ctx_desc *cd);
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
 void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,