From patchwork Fri Oct 6 13:31:51 2017
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 9989475
From: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
To: linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-acpi@vger.kernel.org, devicetree@vger.kernel.org,
	iommu@lists.linux-foundation.org
Subject: [RFCv2 PATCH 24/36] iommu/arm-smmu-v3: Steal private ASID from a domain
Date: Fri, 6 Oct 2017 14:31:51 +0100
Message-Id: <20171006133203.22803-25-jean-philippe.brucker@arm.com>
X-Mailer: git-send-email 2.13.3
In-Reply-To: <20171006133203.22803-1-jean-philippe.brucker@arm.com>
References: <20171006133203.22803-1-jean-philippe.brucker@arm.com>
Cc: mark.rutland@arm.com, xieyisheng1@huawei.com, gabriele.paoloni@huawei.com,
	catalin.marinas@arm.com, will.deacon@arm.com, okaya@codeaurora.org,
	yi.l.liu@intel.com, lorenzo.pieralisi@arm.com, ashok.raj@intel.com,
	tn@semihalf.com, joro@8bytes.org, rfranz@cavium.com, lenb@kernel.org,
	jacob.jun.pan@linux.intel.com, alex.williamson@redhat.com,
	robh+dt@kernel.org, thunder.leizhen@huawei.com, bhelgaas@google.com,
	dwmw2@infradead.org, liubo95@huawei.com, rjw@rjwysocki.net,
	robdclark@gmail.com, hanjun.guo@linaro.org, sudeep.holla@arm.com,
	robin.murphy@arm.com, nwatters@codeaurora.org

The SMMU only has one ASID space, so the process allocator competes with
the domain allocator for ASIDs. Process ASIDs are allocated by the arch
allocator and shared with CPUs, whereas domain ASIDs are private to the
SMMU and not affected by broadcast TLB invalidations.

When the process allocator pins an mm_context and gets an ASID that is
already in use by the SMMU, that ASID necessarily belongs to a domain. At
the moment we simply abort the bind, but we can go one step further:
attempt to assign a new private ASID to the domain, and steal the old one
for our process.

Use the SMMU-wide ASID lock to prevent racing with attach_dev over the
foreign domain. We now also need to take this lock when modifying entry 0
of the context table. Concurrent modifications of a given context table
used to be prevented by group->mutex, but in this patch we modify the CD
of another group.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/arm-smmu-v3.c | 53 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 49 insertions(+), 4 deletions(-)
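[Reviewer note, not part of the commit: the ASID-stealing path added to
arm_smmu_process_share() below boils down to the following sketch. The
helper name arm_smmu_steal_asid_sketch() and the "victim" variable are
invented for illustration, error unwinding is reduced, and the caller is
assumed to already hold the SMMU-wide asid_lock.]

/*
 * Illustrative sketch only; it mirrors the new code added to
 * arm_smmu_process_share() below. Caller holds smmu->asid_lock.
 */
static int arm_smmu_steal_asid_sketch(struct arm_smmu_device *smmu,
				      struct arm_smmu_asid_state *asid_state,
				      int old_asid)
{
	struct arm_smmu_domain *victim = asid_state->domain;
	struct arm_smmu_asid_state *new_state;
	struct arm_smmu_cmdq_ent cmd = {
		.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
			  CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
	};
	int new_asid;

	/* 1) Give the victim domain a fresh private ASID. */
	new_state = kzalloc(sizeof(*new_state), GFP_ATOMIC);
	if (!new_state)
		return -ENOMEM;
	new_state->domain = victim;
	new_asid = idr_alloc_cyclic(&smmu->asid_idr, new_state, 0,
				    1 << smmu->asid_bits, GFP_ATOMIC);
	if (new_asid < 0) {
		kfree(new_state);
		return new_asid;
	}

	/*
	 * 2) Repoint the victim's context descriptor at the new ASID.
	 *    Both ASIDs may briefly coexist in the TLB until step 3.
	 */
	WRITE_ONCE(victim->s1_cfg.cd.asid, new_asid);
	arm_smmu_write_ctx_desc(victim, 0, &victim->s1_cfg.cd);

	/* 3) Flush every TLB entry still tagged with the old ASID. */
	cmd.tlbi.asid = old_asid;
	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
	cmd.opcode = CMDQ_OP_CMD_SYNC;
	arm_smmu_cmdq_issue_cmd(smmu, &cmd);

	/* 4) old_asid is now free: hand its asid_state to the process. */
	asid_state->domain = NULL;
	asid_state->refs = 1;

	return 0;
}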
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 293f260782c2..e89e6d1263d9 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1731,7 +1731,7 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		cmd.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
 				  CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+		cmd.tlbi.asid	= READ_ONCE(smmu_domain->s1_cfg.cd.asid);
 		cmd.tlbi.vmid	= 0;
 	} else {
 		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
@@ -1757,7 +1757,7 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		cmd.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
 				  CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+		cmd.tlbi.asid	= READ_ONCE(smmu_domain->s1_cfg.cd.asid);
 	} else {
 		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
 		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
@@ -2119,7 +2119,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		ste->s1_cfg = &smmu_domain->s1_cfg;
 		ste->s2_cfg = NULL;
+		spin_lock(&smmu->asid_lock);
 		arm_smmu_write_ctx_desc(smmu_domain, 0, &ste->s1_cfg->cd);
+		spin_unlock(&smmu->asid_lock);
 	} else {
 		ste->s1_cfg = NULL;
 		ste->s2_cfg = &smmu_domain->s2_cfg;
@@ -2253,14 +2255,57 @@ static int arm_smmu_process_share(struct arm_smmu_domain *smmu_domain,
 				  struct arm_smmu_process *smmu_process)
 {
 	int asid, ret;
-	struct arm_smmu_asid_state *asid_state;
+	struct arm_smmu_asid_state *asid_state, *new_state;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 
 	asid = smmu_process->ctx_desc.asid;
 
 	asid_state = idr_find(&smmu->asid_idr, asid);
 	if (asid_state && asid_state->domain) {
-		return -EEXIST;
+		struct arm_smmu_domain *smmu_domain = asid_state->domain;
+		struct arm_smmu_cmdq_ent cmd = {
+			.opcode	= smmu->features & ARM_SMMU_FEAT_E2H ?
+				  CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
+		};
+
+		new_state = kzalloc(sizeof(*new_state), GFP_ATOMIC);
+		if (!new_state)
+			return -ENOMEM;
+
+		new_state->domain = smmu_domain;
+
+		ret = idr_alloc_cyclic(&smmu->asid_idr, new_state, 0,
+				       1 << smmu->asid_bits, GFP_ATOMIC);
+		if (ret < 0) {
+			kfree(new_state);
+			return ret;
+		}
+
+		/*
+		 * Race with unmap; TLB invalidations will start targeting the
+		 * new ASID, which isn't assigned yet. We'll do an
+		 * invalidate-all on the old ASID later, so it doesn't matter.
+		 */
+		WRITE_ONCE(smmu_domain->s1_cfg.cd.asid, ret);
+
+		/*
+		 * Update ASID and invalidate CD in all associated masters.
+		 * There will be some overlapping between use of both ASIDs,
+		 * until we invalidate the TLB.
+		 */
+		arm_smmu_write_ctx_desc(smmu_domain, 0, &smmu_domain->s1_cfg.cd);
+
+		/* Invalidate TLB entries previously associated with that domain */
+		cmd.tlbi.asid = asid;
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+		cmd.opcode = CMDQ_OP_CMD_SYNC;
+		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+
+		asid_state->domain = NULL;
+		asid_state->refs = 1;
+
+		return 0;
+
 	} else if (asid_state) {
 		asid_state->refs++;
 		return 0;