From patchwork Mon Feb 12 18:33:36 2018
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 10214531
From: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
To: linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-acpi@vger.kernel.org, devicetree@vger.kernel.org,
 iommu@lists.linux-foundation.org, kvm@vger.kernel.org
Cc: mark.rutland@arm.com, xieyisheng1@huawei.com, ilias.apalodimas@linaro.org,
 catalin.marinas@arm.com, xuzaibo@huawei.com, jonathan.cameron@huawei.com,
 will.deacon@arm.com, okaya@codeaurora.org, yi.l.liu@intel.com,
 lorenzo.pieralisi@arm.com, ashok.raj@intel.com, tn@semihalf.com,
 joro@8bytes.org, bharatku@xilinx.com, rfranz@cavium.com, lenb@kernel.org,
 jacob.jun.pan@linux.intel.com, alex.williamson@redhat.com,
 robh+dt@kernel.org, thunder.leizhen@huawei.com, bhelgaas@google.com,
 shunyong.yang@hxt-semitech.com, dwmw2@infradead.org, liubo95@huawei.com,
 rjw@rjwysocki.net, jcrouse@codeaurora.org, robdclark@gmail.com,
 hanjun.guo@linaro.org, sudeep.holla@arm.com, robin.murphy@arm.com,
 christian.koenig@amd.com, nwatters@codeaurora.org
Subject: [PATCH 21/37] iommu/arm-smmu-v3: Seize private ASID
Date: Mon, 12 Feb 2018 18:33:36 +0000
Message-Id: <20180212183352.22730-22-jean-philippe.brucker@arm.com>
In-Reply-To: <20180212183352.22730-1-jean-philippe.brucker@arm.com>
References: <20180212183352.22730-1-jean-philippe.brucker@arm.com>

The SMMU has a single ASID space, the union of the shared and private ASID
sets. This means that the PASID library competes with the arch allocator
for ASIDs. Shared ASIDs are those of Linux processes, allocated by the
arch, and participate in broadcast TLB maintenance. Private ASIDs are
allocated by the SMMU driver and used for "classic" map/unmap DMA. They
require explicit TLB invalidations.

When we pin down an mm_context and get an ASID that is already in use by
the SMMU, it belongs to a private context. Until now we simply aborted the
bind, but this is unfair to users, who would be unable to bind a few
seemingly random processes. We can go one step further: allocate a new
private ASID for the context that currently holds the colliding one, and
make the old ASID shared. Introduce a new lock to prevent races when
rewriting context descriptors.
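To make the scheme concrete, here is a minimal userspace sketch of the
seizing flow, assuming a small flat ASID table. The names ASID_BITS,
asid_owner, alloc_private_asid, seize_asid_for_mm and rewrite_cd are
invented for the illustration and do not appear in the driver; rewrite_cd()
merely stands in for updating the context descriptor and invalidating the
old ASID's TLB entries. This is not the driver code from the diff below.

/*
 * Illustration only: a toy model of "seize the private ASID, then hand it
 * to the shared (mm) context". All names here are made up for the sketch.
 */
#include <stdio.h>

#define ASID_BITS	4
#define NR_ASIDS	(1 << ASID_BITS)

enum owner { FREE, PRIVATE, SHARED };

/* One slot per ASID; the SMMU has a single space for both kinds */
static enum owner asid_owner[NR_ASIDS];

/* Stand-in for rewriting the CD and flushing the old ASID's TLB entries */
static void rewrite_cd(int old_asid, int new_asid)
{
	printf("private context: ASID %d -> %d, TLBI all on ASID %d\n",
	       old_asid, new_asid, old_asid);
}

static int alloc_private_asid(void)
{
	for (int asid = 0; asid < NR_ASIDS; asid++) {
		if (asid_owner[asid] == FREE) {
			asid_owner[asid] = PRIVATE;
			return asid;
		}
	}
	return -1;
}

/*
 * The mm already owns @asid on the CPU side. If the SMMU currently uses it
 * for a private context, seize it: give that context a fresh private ASID
 * before marking @asid as shared.
 */
static int seize_asid_for_mm(int asid)
{
	if (asid_owner[asid] == PRIVATE) {
		int new_asid = alloc_private_asid();

		if (new_asid < 0)
			return -1;	/* ASID space exhausted, bind fails */
		rewrite_cd(asid, new_asid);
	}
	asid_owner[asid] = SHARED;
	return 0;
}

int main(void)
{
	int asid = alloc_private_asid();	/* a map/unmap domain grabs ASID 0 */

	if (seize_asid_for_mm(asid))		/* a process with CPU ASID 0 binds */
		return 1;
	printf("ASID %d is now shared with the CPU\n", asid);
	return 0;
}

Note that, as in arm_smmu_share_asid() below, there is a short window
during which the private context is reachable through both ASIDs; the
invalidate-all on the old ASID at the end of the move is what closes it.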
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/arm-smmu-v3-context.c | 88 ++++++++++++++++++++++++++++++++++---
 1 file changed, 82 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3-context.c b/drivers/iommu/arm-smmu-v3-context.c
index b7c90384ff56..5b8c5875e0d9 100644
--- a/drivers/iommu/arm-smmu-v3-context.c
+++ b/drivers/iommu/arm-smmu-v3-context.c
@@ -82,6 +82,7 @@
 	(((tcr) >> ARM64_TCR_##fld##_SHIFT & ARM64_TCR_##fld##_MASK)	\
 	 << CTXDESC_CD_0_TCR_##fld##_SHIFT)
 
+#define ARM_SMMU_NO_PASID		(-1)
 
 struct arm_smmu_cd {
 	struct iommu_pasid_entry	entry;
@@ -90,8 +91,14 @@ struct arm_smmu_cd {
 	u64				tcr;
 	u64				mair;
 
+	int				pasid;
+
+	/* 'refs' tracks alloc/free */
 	refcount_t			refs;
+	/* 'users' tracks attach/detach, and is only used for sanity checking */
+	unsigned int			users;
 	struct mm_struct		*mm;
+	struct arm_smmu_cd_tables	*tbl;
 };
 
 #define pasid_entry_to_cd(entry) \
@@ -123,6 +130,7 @@ struct arm_smmu_cd_tables {
 #define pasid_ops_to_tables(ops) \
 	pasid_to_cd_tables(iommu_pasid_table_ops_to_table(ops))
 
+static DEFINE_SPINLOCK(contexts_lock);
 static DEFINE_SPINLOCK(asid_lock);
 static DEFINE_IDR(asid_idr);
 
@@ -209,8 +217,8 @@ static u64 arm_smmu_cpu_tcr_to_cd(u64 tcr)
 	return val;
 }
 
-static int arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
-				   struct arm_smmu_cd *cd)
+static int __arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
+				     struct arm_smmu_cd *cd)
 {
 	u64 val;
 	bool cd_live;
@@ -284,6 +292,18 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
 	return 0;
 }
 
+static int arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
+				   struct arm_smmu_cd *cd)
+{
+	int ret;
+
+	spin_lock(&contexts_lock);
+	ret = __arm_smmu_write_ctx_desc(tbl, ssid, cd);
+	spin_unlock(&contexts_lock);
+
+	return ret;
+}
+
 static bool arm_smmu_free_asid(struct arm_smmu_cd *cd)
 {
 	bool free;
@@ -308,14 +328,25 @@ static struct arm_smmu_cd *arm_smmu_alloc_cd(struct arm_smmu_cd_tables *tbl)
 	if (!cd)
 		return NULL;
 
+	cd->pasid	= ARM_SMMU_NO_PASID;
+	cd->tbl		= tbl;
 	refcount_set(&cd->refs, 1);
 
 	return cd;
 }
 
+/*
+ * Try to reserve this ASID in the SMMU. If it is in use, try to steal it from
+ * the private entry. Careful here, we may be modifying the context tables of
+ * another SMMU!
+ */
 static struct arm_smmu_cd *arm_smmu_share_asid(u16 asid)
 {
+	int ret;
 	struct arm_smmu_cd *cd;
+	struct arm_smmu_cd_tables *tbl;
+	struct arm_smmu_context_cfg *cfg;
+	struct iommu_pasid_entry old_entry;
 
 	cd = idr_find(&asid_idr, asid);
 	if (!cd)
@@ -325,17 +356,47 @@ static struct arm_smmu_cd *arm_smmu_share_asid(u16 asid)
 		/*
 		 * It's pretty common to find a stale CD when doing unbind-bind,
 		 * given that the release happens after a RCU grace period.
-		 * Simply reuse it.
+		 * Simply reuse it, but check that it isn't active, because it's
+		 * going to be assigned a different PASID.
 		 */
+		if (WARN_ON(cd->users))
+			return ERR_PTR(-EINVAL);
+
 		refcount_inc(&cd->refs);
 		return cd;
 	}
 
+	tbl = cd->tbl;
+	cfg = &tbl->pasid.cfg.arm_smmu;
+
+	ret = idr_alloc_cyclic(&asid_idr, cd, 0, 1 << cfg->asid_bits,
+			       GFP_ATOMIC);
+	if (ret < 0)
+		return ERR_PTR(-ENOSPC);
+
+	/* Save the previous ASID */
+	old_entry = cd->entry;
+
+	/*
+	 * Race with unmap; TLB invalidations will start targeting the new ASID,
+	 * which isn't assigned yet. We'll do an invalidate-all on the old ASID
+	 * later, so it doesn't matter.
+	 */
+	cd->entry.tag = ret;
+
 	/*
-	 * Ouch, ASID is already in use for a private cd.
-	 * TODO: seize it, for the common good.
+	 * Update ASID and invalidate CD in all associated masters. There will
+	 * be some overlap between use of both ASIDs, until we invalidate the
+	 * TLB.
 	 */
-	return ERR_PTR(-EEXIST);
+	arm_smmu_write_ctx_desc(tbl, cd->pasid, cd);
+
+	/* Invalidate TLB entries previously associated with that context */
+	iommu_pasid_flush_tlbs(&tbl->pasid, cd->pasid, &old_entry);
+
+	idr_remove(&asid_idr, asid);
+
+	return NULL;
 }
 
 static struct iommu_pasid_entry *
@@ -500,6 +561,15 @@ static int arm_smmu_set_cd(struct iommu_pasid_table_ops *ops, int pasid,
 	if (WARN_ON(pasid > (1 << tbl->pasid.cfg.order)))
 		return -EINVAL;
 
+	if (WARN_ON(cd->pasid != ARM_SMMU_NO_PASID && cd->pasid != pasid))
+		return -EEXIST;
+
+	/*
+	 * There is a single cd structure for each address space, multiple
+	 * devices may use the same in different tables.
+	 */
+	cd->users++;
+	cd->pasid = pasid;
 	return arm_smmu_write_ctx_desc(tbl, pasid, cd);
 }
 
@@ -507,10 +577,16 @@ static void arm_smmu_clear_cd(struct iommu_pasid_table_ops *ops, int pasid,
 			      struct iommu_pasid_entry *entry)
 {
 	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
+	struct arm_smmu_cd *cd = pasid_entry_to_cd(entry);
 
 	if (WARN_ON(pasid > (1 << tbl->pasid.cfg.order)))
 		return;
 
+	WARN_ON(cd->pasid != pasid);
+
+	if (!(--cd->users))
+		cd->pasid = ARM_SMMU_NO_PASID;
+
 	arm_smmu_write_ctx_desc(tbl, pasid, NULL);
 
 	/*