From patchwork Tue May 19 17:54:49 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 11558393
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
    linux-mm@kvack.org
Cc: fenghua.yu@intel.com, kevin.tian@intel.com, jacob.jun.pan@linux.intel.com,
    jgg@ziepe.ca, catalin.marinas@arm.com, joro@8bytes.org,
    robin.murphy@arm.com, hch@infradead.org, zhangfei.gao@linaro.org,
    Jean-Philippe Brucker <jean-philippe@linaro.org>,
    Jonathan.Cameron@huawei.com, felix.kuehling@amd.com, xuzaibo@huawei.com,
    will@kernel.org, christian.koenig@amd.com, baolu.lu@linux.intel.com
Subject: [PATCH v7 11/24] iommu/arm-smmu-v3: Seize private ASID
Date: Tue, 19 May 2020 19:54:49 +0200
Message-Id: <20200519175502.2504091-12-jean-philippe@linaro.org>
In-Reply-To: <20200519175502.2504091-1-jean-philippe@linaro.org>
References: <20200519175502.2504091-1-jean-philippe@linaro.org>

The SMMU has a single ASID space, the union of the shared and private ASID
sets. This means that the SMMU driver competes with the arch allocator for
ASIDs. Shared ASIDs are those of Linux processes, allocated by the arch, and
participate in broadcast TLB maintenance. Private ASIDs are allocated by the
SMMU driver and used for "classic" map/unmap DMA. They require command-queue
TLB invalidations.
When we pin down an mm_context and get an ASID that is already in use by the
SMMU, it belongs to a private context. We used to simply abort the bind, but
this is unfair to users, who would find themselves unable to bind a few
seemingly random processes. Try to allocate a new private ASID for the
context, and make the old ASID shared.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v6->v7: Replace context_lock spinlock with asid_lock mutex, remove
GFP_ATOMIC changes, add comments about locking.
---
 drivers/iommu/arm-smmu-v3.c | 100 ++++++++++++++++++++++++++++--------
 1 file changed, 80 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 52cbdf08f5e2..403871d36438 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -733,6 +733,7 @@ struct arm_smmu_option_prop {
 };
 
 static DEFINE_XARRAY_ALLOC1(asid_xa);
+static DEFINE_MUTEX(asid_lock);
 static DEFINE_MUTEX(sva_lock);
 
 static struct arm_smmu_option_prop arm_smmu_options[] = {
@@ -1537,6 +1538,17 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
 }
 
 /* Context descriptor manipulation functions */
+static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
+{
+	struct arm_smmu_cmdq_ent cmd = {
+		.opcode	= CMDQ_OP_TLBI_NH_ASID,
+		.tlbi.asid = asid,
+	};
+
+	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+	arm_smmu_cmdq_issue_sync(smmu);
+}
+
 static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 			     int ssid, bool leaf)
 {
@@ -1795,9 +1807,18 @@ static bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
 	return free;
 }
 
+/*
+ * Try to reserve this ASID in the SMMU. If it is in use, try to steal it from
+ * the private entry. Careful here, we may be modifying the context tables of
+ * another SMMU!
+ */
 static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid)
 {
+	int ret;
+	u32 new_asid;
 	struct arm_smmu_ctx_desc *cd;
+	struct arm_smmu_device *smmu;
+	struct arm_smmu_domain *smmu_domain;
 
 	cd = xa_load(&asid_xa, asid);
 	if (!cd)
@@ -1809,11 +1830,31 @@ static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid)
 		return cd;
 	}
 
+	smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
+	smmu = smmu_domain->smmu;
+
+	ret = xa_alloc(&asid_xa, &new_asid, cd,
+		       XA_LIMIT(1, 1 << smmu->asid_bits), GFP_KERNEL);
+	if (ret)
+		return ERR_PTR(-ENOSPC);
+	/*
+	 * Race with unmap: TLB invalidations will start targeting the new ASID,
+	 * which isn't assigned yet. We'll do an invalidate-all on the old ASID
+	 * later, so it doesn't matter.
+	 */
+	cd->asid = new_asid;
+
 	/*
-	 * Ouch, ASID is already in use for a private cd.
-	 * TODO: seize it.
+	 * Update ASID and invalidate CD in all associated masters. There will
+	 * be some overlap between use of both ASIDs, until we invalidate the
+	 * TLB.
 	 */
-	return ERR_PTR(-EEXIST);
+	arm_smmu_write_ctx_desc(smmu_domain, 0, cd);
+
+	/* Invalidate TLB entries previously associated with that context */
+	arm_smmu_tlb_inv_asid(smmu, asid);
+
+	return NULL;
 }
 
 __maybe_unused
@@ -1839,7 +1880,20 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
 
 	arm_smmu_init_cd(cd);
 
+	/*
+	 * Serialize against arm_smmu_domain_finalise_s1() and
+	 * arm_smmu_domain_free() as we might need to replace the private ASID
+	 * from an existing CD.
+	 */
+	mutex_lock(&asid_lock);
 	old_cd = arm_smmu_share_asid(asid);
+	if (!old_cd) {
+		ret = xa_insert(&asid_xa, asid, cd, GFP_KERNEL);
+		if (ret)
+			old_cd = ERR_PTR(ret);
+	}
+	mutex_unlock(&asid_lock);
+
 	if (IS_ERR(old_cd)) {
 		ret = PTR_ERR(old_cd);
 		goto err_free_cd;
@@ -1853,11 +1907,6 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
 		return old_cd;
 	}
 
-	/* Fails if a private ASID has been allocated since we last checked */
-	ret = xa_insert(&asid_xa, asid, cd, GFP_KERNEL);
-	if (ret)
-		goto err_free_cd;
-
 	tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) |
 	      FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
 	      FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
@@ -2401,15 +2450,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd;
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
-		cmd.tlbi.vmid	= 0;
-	} else {
-		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
-	}
-
 	/*
 	 * NOTE: when io-pgtable is in non-strict mode, we may get here with
 	 * PTEs previously cleared by unmaps on the current CPU not yet visible
@@ -2417,8 +2457,14 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 * insertion to guarantee those are observed before the TLBI. Do be
 	 * careful, 007.
 	 */
-	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-	arm_smmu_cmdq_issue_sync(smmu);
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+		arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg.cd.asid);
+	} else {
+		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		arm_smmu_cmdq_issue_sync(smmu);
+	}
 
 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
@@ -2602,9 +2648,15 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 
+		/*
+		 * Prevent arm_smmu_share_asid() from rewriting CD#0 while we're
+		 * freeing it.
+		 */
+		mutex_lock(&asid_lock);
 		if (cfg->cdcfg.cdtab)
 			arm_smmu_free_cd_tables(smmu_domain);
 		arm_smmu_free_asid(&cfg->cd);
+		mutex_unlock(&asid_lock);
 	} else {
 		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
 
 		if (cfg->vmid)
@@ -2626,10 +2678,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 
 	arm_smmu_init_cd(&cfg->cd);
 
+	/*
+	 * Prevent arm_smmu_share_asid() from seizing the private ASID we're
+	 * allocating here until it is written to the CD.
+	 */
+	mutex_lock(&asid_lock);
 	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
 		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
 	if (ret)
-		return ret;
+		goto out_unlock;
 
 	cfg->s1cdmax = master->ssid_bits;
 
@@ -2657,12 +2714,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 	if (ret)
 		goto out_free_cd_tables;
 
+	mutex_unlock(&asid_lock);
 	return 0;
 
 out_free_cd_tables:
 	arm_smmu_free_cd_tables(smmu_domain);
out_free_asid:
 	arm_smmu_free_asid(&cfg->cd);
+out_unlock:
+	mutex_unlock(&asid_lock);
 	return ret;
 }
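
[Editor's note] For readers who want to see the seize pattern in isolation,
below is a minimal, self-contained sketch built on the same stock kernel
primitives (XArray and mutex). All names here (id_xa, id_lock, struct my_ctx,
share_id()) are hypothetical and not part of the driver; the real
arm_smmu_share_asid() additionally rewrites the context descriptor and
invalidates the old ASID's TLB entries between relabelling the private owner
and publishing the shared entry, as the diff above shows.

/* Illustrative sketch only: hypothetical names, not SMMU driver code. */
#include <linux/xarray.h>
#include <linux/mutex.h>
#include <linux/gfp.h>
#include <linux/types.h>

static DEFINE_XARRAY_ALLOC1(id_xa);	/* IDs allocated from 1, like asid_xa */
static DEFINE_MUTEX(id_lock);		/* plays the role of asid_lock */

struct my_ctx {
	u32 id;
};

/*
 * Reserve @id for @new. If a private owner already holds it, move that owner
 * to a freshly allocated ID first. Returns 0 on success.
 */
static int share_id(struct my_ctx *new, u32 id, u32 id_max)
{
	struct my_ctx *old;
	u32 new_id;
	int ret = 0;

	mutex_lock(&id_lock);
	old = xa_load(&id_xa, id);
	if (!old) {
		/* ID is free: claim it directly. */
		ret = xa_insert(&id_xa, id, new, GFP_KERNEL);
	} else {
		/* Relocate the private owner to a new ID... */
		ret = xa_alloc(&id_xa, &new_id, old,
			       XA_LIMIT(1, id_max), GFP_KERNEL);
		if (!ret) {
			old->id = new_id;
			/*
			 * ...then publish the shared user at the old index.
			 * The driver rewrites the CD and invalidates the old
			 * ASID's TLB entries at this point.
			 */
			xa_store(&id_xa, id, new, GFP_KERNEL);
		}
	}
	if (!ret)
		new->id = id;
	mutex_unlock(&id_lock);
	return ret;
}

A sleeping mutex (rather than the spinlock used in v6) is what permits the
GFP_KERNEL allocations inside the critical section; that trade-off is exactly
the v6->v7 change noted in the changelog above.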