From patchwork Tue Jun  6 12:07:52 2023
X-Patchwork-Submitter: Michael Shavit
X-Patchwork-Id: 13269064
Date: Tue, 6 Jun 2023 20:07:52 +0800
In-Reply-To: <20230606120854.4170244-1-mshavit@google.com>
References: <20230606120854.4170244-1-mshavit@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Message-ID: <20230606120854.4170244-17-mshavit@google.com>
Subject: [PATCH v2 16/18] iommu/arm-smmu-v3-sva: Attach S1_SHARED_CD domain
From: Michael Shavit
To: Will Deacon, Robin Murphy, Joerg Roedel
Cc: Michael Shavit, jean-philippe@linaro.org, nicolinc@nvidia.com, jgg@nvidia.com, baolu.lu@linux.intel.com, linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev, linux-kernel@vger.kernel.org

Prepare an smmu domain of type S1_SHARED_CD per smmu_mmu_notifier. Attach
that domain using the common arm_smmu_domain_set_dev_pasid implementation
when attaching an SVA domain.
Signed-off-by: Michael Shavit
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 67 ++++++-------------
 1 file changed, 22 insertions(+), 45 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index e2a91f20f0906..9a2da579c3563 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -19,7 +19,7 @@ struct arm_smmu_mmu_notifier {
 	bool				cleared;
 	refcount_t			refs;
 	struct list_head		list;
-	struct arm_smmu_domain		*domain;
+	struct arm_smmu_domain		domain;
 };
 
 #define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
@@ -198,7 +198,7 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 					 unsigned long start, unsigned long end)
 {
 	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct arm_smmu_domain *smmu_domain = &smmu_mn->domain;
 	size_t size;
 
 	/*
@@ -217,7 +217,7 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
 	struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct arm_smmu_domain *smmu_domain = &smmu_mn->domain;
 	struct arm_smmu_master *master;
 	struct arm_smmu_attached_domain *attached_domain;
 	unsigned long flags;
@@ -233,15 +233,10 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	 * but disable translation.
 	 */
 	spin_lock_irqsave(&smmu_domain->attached_domains_lock, flags);
-	list_for_each_entry(attached_domain, &smmu_domain->attached_domains,
-			    domain_head) {
+	list_for_each_entry(attached_domain, &smmu_domain->attached_domains, domain_head) {
 		master = attached_domain->master;
-		/*
-		 * SVA domains piggyback on the attached_domain with SSID 0.
-		 */
-		if (attached_domain->ssid == 0)
-			arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg,
-						master, mm->pasid, &quiet_cd);
+		arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg, master,
+					attached_domain->ssid, &quiet_cd);
 	}
 	spin_unlock_irqrestore(&smmu_domain->attached_domains_lock, flags);
 
@@ -265,15 +260,13 @@ static const struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
 
 /* Allocate or get existing MMU notifier for this {domain, mm} pair */
 static struct arm_smmu_mmu_notifier *
-arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
+arm_smmu_mmu_notifier_get(struct arm_smmu_device *smmu,
+			  struct arm_smmu_domain *smmu_domain,
 			  struct mm_struct *mm)
 {
 	int ret;
-	unsigned long flags;
 	struct arm_smmu_ctx_desc *cd;
 	struct arm_smmu_mmu_notifier *smmu_mn;
-	struct arm_smmu_master *master;
-	struct arm_smmu_attached_domain *attached_domain;
 
 	list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
 		if (smmu_mn->mn.mm == mm) {
@@ -294,7 +287,6 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 
 	refcount_set(&smmu_mn->refs, 1);
 	smmu_mn->cd = cd;
-	smmu_mn->domain = smmu_domain;
 	smmu_mn->mn.ops = &arm_smmu_mmu_notifier_ops;
 
 	ret = mmu_notifier_register(&smmu_mn->mn, mm);
@@ -302,24 +294,11 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 		kfree(smmu_mn);
 		goto err_free_cd;
 	}
-
-	spin_lock_irqsave(&smmu_domain->attached_domains_lock, flags);
-	list_for_each_entry(attached_domain, &smmu_domain->attached_domains,
-			    domain_head) {
-		master = attached_domain->master;
-		ret = arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg,
-					      master, mm->pasid, cd);
-	}
-	spin_unlock_irqrestore(&smmu_domain->attached_domains_lock, flags);
-	if (ret)
-		goto err_put_notifier;
+	arm_smmu_init_shared_cd_domain(smmu, &smmu_mn->domain, cd);
 
 	list_add(&smmu_mn->list, &smmu_domain->mmu_notifiers);
 	return smmu_mn;
 
-err_put_notifier:
-	/* Frees smmu_mn */
-	mmu_notifier_put(&smmu_mn->mn);
 err_free_cd:
 	arm_smmu_free_shared_cd(cd);
 	return ERR_PTR(ret);
@@ -327,27 +306,15 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 
 static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
 {
-	unsigned long flags;
 	struct mm_struct *mm = smmu_mn->mn.mm;
 	struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
-	struct arm_smmu_attached_domain *attached_domain;
-	struct arm_smmu_master *master;
-	struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+	struct arm_smmu_domain *smmu_domain = &smmu_mn->domain;
 
 	if (!refcount_dec_and_test(&smmu_mn->refs))
 		return;
 
 	list_del(&smmu_mn->list);
 
-	spin_lock_irqsave(&smmu_domain->attached_domains_lock, flags);
-	list_for_each_entry(attached_domain, &smmu_domain->attached_domains,
-			    domain_head) {
-		master = attached_domain->master;
-		arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg, master,
-					mm->pasid, NULL);
-	}
-	spin_unlock_irqrestore(&smmu_domain->attached_domains_lock, flags);
-
 	/*
 	 * If we went through clear(), we've already invalidated, and no
 	 * new TLB entry can have been formed.
@@ -369,17 +336,26 @@ static int __arm_smmu_sva_bind(struct device *dev,
 	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	int ret;
 
 	if (!master || !master->sva_enabled)
 		return -ENODEV;
 
-	sva_domain->smmu_mn = arm_smmu_mmu_notifier_get(smmu_domain,
+	sva_domain->smmu_mn = arm_smmu_mmu_notifier_get(master->smmu,
+							smmu_domain,
 							mm);
 	if (IS_ERR(sva_domain->smmu_mn)) {
 		sva_domain->smmu_mn = NULL;
 		return PTR_ERR(sva_domain->smmu_mn);
 	}
+	master->nr_attached_sva_domains += 1;
+	smmu_domain = &sva_domain->smmu_mn->domain;
+	ret = arm_smmu_domain_set_dev_pasid(dev, master, smmu_domain, mm->pasid);
+	if (ret) {
+		arm_smmu_mmu_notifier_put(sva_domain->smmu_mn);
+		return ret;
+	}
 	return 0;
 }
 
@@ -544,8 +520,9 @@ void arm_smmu_sva_remove_dev_pasid(struct iommu_domain *domain,
 	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
 
 	mutex_lock(&sva_lock);
-	master->nr_attached_sva_domains -= 1;
+	arm_smmu_domain_remove_dev_pasid(dev, &sva_domain->smmu_mn->domain, id);
 	arm_smmu_mmu_notifier_put(sva_domain->smmu_mn);
+	master->nr_attached_sva_domains -= 1;
 	mutex_unlock(&sva_lock);
}