From patchwork Fri May 11 19:06:24 2018
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 10394973
From: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
To: linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
    linux-acpi@vger.kernel.org, devicetree@vger.kernel.org,
    iommu@lists.linux-foundation.org, kvm@vger.kernel.org, linux-mm@kvack.org
Cc: joro@8bytes.org, will.deacon@arm.com, robin.murphy@arm.com,
    alex.williamson@redhat.com, tn@semihalf.com, liubo95@huawei.com,
    thunder.leizhen@huawei.com, xieyisheng1@huawei.com, xuzaibo@huawei.com,
    ilias.apalodimas@linaro.org, jonathan.cameron@huawei.com,
    liudongdong3@huawei.com, shunyong.yang@hxt-semitech.com,
    nwatters@codeaurora.org, okaya@codeaurora.org, jcrouse@codeaurora.org,
    rfranz@cavium.com, dwmw2@infradead.org, jacob.jun.pan@linux.intel.com,
    yi.l.liu@intel.com, ashok.raj@intel.com, kevin.tian@intel.com,
    baolu.lu@linux.intel.com, robdclark@gmail.com, christian.koenig@amd.com,
    bharatku@xilinx.com, rgummal@xilinx.com
Subject: [PATCH v2 23/40] iommu/arm-smmu-v3: Share process page tables
Date: Fri, 11 May 2018 20:06:24 +0100
Message-Id: <20180511190641.23008-24-jean-philippe.brucker@arm.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180511190641.23008-1-jean-philippe.brucker@arm.com>
References: <20180511190641.23008-1-jean-philippe.brucker@arm.com>

With Shared Virtual Addressing (SVA), we need to mirror the CPU TTBR, TCR,
MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space, split
into two sets: shared and private. Shared ASIDs correspond to those
obtained from the arch ASID allocator, and private ASIDs are used for
"classic" map/unmap DMA. Replace the ASID IDA with an IDR, which allows us
to keep information about each context. Initialize shared contexts with
info obtained from the mm.
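The bind-time share-or-allocate logic added below in
arm_smmu_alloc_shared_cd() can be summarised with a small userspace sketch
(simplified, hypothetical types standing in for refcount_t and the asid
IDR; this is not the kernel code itself):

```c
#include <stdlib.h>

/* Simplified stand-ins for the kernel's refcount_t and the asid IDR. */
#define ASID_MAX 16

struct cd {
	int refs;          /* refcount_t refs in the patch */
	void *mm;          /* non-NULL marks a shared (SVA) context */
	unsigned int asid; /* entry.tag in the patch */
};

static struct cd *asid_map[ASID_MAX]; /* stands in for asid_idr */

/*
 * Mirrors arm_smmu_share_asid(): reuse an existing shared CD for this
 * ASID, or report a conflict if a private CD already owns the ASID.
 */
static struct cd *share_asid(unsigned int asid, void *mm)
{
	struct cd *old = asid_map[asid];

	if (!old)
		return NULL;            /* free: caller allocates a new CD */
	if (old->mm == mm) {
		old->refs++;            /* like refcount_inc(&cd->refs) */
		return old;
	}
	return (struct cd *)-1;         /* -EEXIST in the patch */
}

static struct cd *alloc_shared_cd(unsigned int asid, void *mm)
{
	struct cd *cd, *old = share_asid(asid, mm);

	if (old == (struct cd *)-1)
		return NULL;            /* ASID held by a private context */
	if (old)
		return old;             /* reuse the existing shared CD */

	cd = calloc(1, sizeof(*cd));
	if (!cd)
		return NULL;
	cd->refs = 1;
	cd->mm = mm;
	cd->asid = asid;
	asid_map[asid] = cd;            /* like idr_alloc(asid, asid + 1) */
	return cd;
}
```

A second bind to the same mm finds the existing CD in the map and only
takes a reference, mirroring the refcount_inc() path in
arm_smmu_share_asid(); an ASID already claimed by a private context is
reported as a conflict, which the patch returns as -EEXIST.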
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/arm-smmu-v3-context.c | 182 ++++++++++++++++++++++++++--
 1 file changed, 172 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3-context.c b/drivers/iommu/arm-smmu-v3-context.c
index d68da99aa472..352cba3c1a62 100644
--- a/drivers/iommu/arm-smmu-v3-context.c
+++ b/drivers/iommu/arm-smmu-v3-context.c
@@ -10,8 +10,10 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 
+#include "io-pgtable-arm.h"
 #include "iommu-pasid-table.h"
 
 /*
@@ -69,6 +71,9 @@ struct arm_smmu_cd {
 	u64				ttbr;
 	u64				tcr;
 	u64				mair;
+
+	refcount_t			refs;
+	struct mm_struct		*mm;
 };
 
 #define pasid_entry_to_cd(entry) \
@@ -100,7 +105,8 @@ struct arm_smmu_cd_tables {
 #define pasid_ops_to_tables(ops) \
 	pasid_to_cd_tables(iommu_pasid_table_ops_to_table(ops))
 
-static DEFINE_IDA(asid_ida);
+static DEFINE_SPINLOCK(asid_lock);
+static DEFINE_IDR(asid_idr);
 
 static int arm_smmu_alloc_cd_leaf_table(struct device *dev,
 					struct arm_smmu_cd_table *desc,
@@ -239,7 +245,8 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
 #ifdef __BIG_ENDIAN
 			CTXDESC_CD_0_ENDI |
 #endif
-			CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
+			CTXDESC_CD_0_R | CTXDESC_CD_0_A |
+			(cd->mm ? 0 : CTXDESC_CD_0_ASET) |
 			CTXDESC_CD_0_AA64 |
 			FIELD_PREP(CTXDESC_CD_0_ASID, cd->entry.tag) |
 			CTXDESC_CD_0_V;
@@ -255,18 +262,161 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_cd_tables *tbl, int ssid,
 	return 0;
 }
 
+static bool arm_smmu_free_asid(struct arm_smmu_cd *cd)
+{
+	bool free;
+	struct arm_smmu_cd *old_cd;
+
+	spin_lock(&asid_lock);
+	free = refcount_dec_and_test(&cd->refs);
+	if (free) {
+		old_cd = idr_remove(&asid_idr, (u16)cd->entry.tag);
+		WARN_ON(old_cd != cd);
+	}
+	spin_unlock(&asid_lock);
+
+	return free;
+}
+
 static void arm_smmu_free_cd(struct iommu_pasid_entry *entry)
 {
 	struct arm_smmu_cd *cd = pasid_entry_to_cd(entry);
 
-	ida_simple_remove(&asid_ida, (u16)entry->tag);
+	if (!arm_smmu_free_asid(cd))
+		return;
+
+	if (cd->mm) {
+		/* Unpin ASID */
+		mm_context_put(cd->mm);
+	}
+
 	kfree(cd);
 }
 
+static struct arm_smmu_cd *arm_smmu_alloc_cd(struct arm_smmu_cd_tables *tbl)
+{
+	struct arm_smmu_cd *cd;
+
+	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+	if (!cd)
+		return NULL;
+
+	cd->entry.release = arm_smmu_free_cd;
+	refcount_set(&cd->refs, 1);
+
+	return cd;
+}
+
+static struct arm_smmu_cd *arm_smmu_share_asid(u16 asid)
+{
+	struct arm_smmu_cd *cd;
+
+	cd = idr_find(&asid_idr, asid);
+	if (!cd)
+		return NULL;
+
+	if (cd->mm) {
+		/*
+		 * It's pretty common to find a stale CD when doing unbind-bind,
+		 * given that the release happens after a RCU grace period.
+		 * Simply reuse it.
+		 */
+		refcount_inc(&cd->refs);
+		return cd;
+	}
+
+	/*
+	 * Ouch, ASID is already in use for a private cd.
+	 * TODO: seize it, for the common good.
+	 */
+	return ERR_PTR(-EEXIST);
+}
+
 static struct iommu_pasid_entry *
 arm_smmu_alloc_shared_cd(struct iommu_pasid_table_ops *ops, struct mm_struct *mm)
 {
-	return ERR_PTR(-ENODEV);
+	u16 asid;
+	u64 tcr, par, reg;
+	int ret = -ENOMEM;
+	struct arm_smmu_cd *cd;
+	struct arm_smmu_cd *old_cd = NULL;
+	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
+
+	asid = mm_context_get(mm);
+	if (!asid)
+		return ERR_PTR(-ESRCH);
+
+	cd = arm_smmu_alloc_cd(tbl);
+	if (!cd)
+		goto err_put_context;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&asid_lock);
+	old_cd = arm_smmu_share_asid(asid);
+	if (!old_cd)
+		ret = idr_alloc(&asid_idr, cd, asid, asid + 1, GFP_ATOMIC);
+	spin_unlock(&asid_lock);
+	idr_preload_end();
+
+	if (!IS_ERR_OR_NULL(old_cd)) {
+		if (WARN_ON(old_cd->mm != mm)) {
+			ret = -EINVAL;
+			goto err_free_cd;
+		}
+		kfree(cd);
+		mm_context_put(mm);
+		return &old_cd->entry;
+	} else if (old_cd) {
+		ret = PTR_ERR(old_cd);
+		goto err_free_cd;
+	}
+
+	tcr = TCR_T0SZ(VA_BITS) | TCR_IRGN0_WBWA | TCR_ORGN0_WBWA |
+	      TCR_SH0_INNER | ARM_LPAE_TCR_EPD1;
+
+	switch (PAGE_SIZE) {
+	case SZ_4K:
+		tcr |= TCR_TG0_4K;
+		break;
+	case SZ_16K:
+		tcr |= TCR_TG0_16K;
+		break;
+	case SZ_64K:
+		tcr |= TCR_TG0_64K;
+		break;
+	default:
+		WARN_ON(1);
+		ret = -EINVAL;
+		goto err_free_asid;
+	}
+
+	reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT);
+	tcr |= par << ARM_LPAE_TCR_IPS_SHIFT;
+
+	cd->ttbr = virt_to_phys(mm->pgd);
+	cd->tcr = tcr;
+	/*
+	 * MAIR value is pretty much constant and global, so we can just get it
+	 * from the current CPU register
+	 */
+	cd->mair = read_sysreg(mair_el1);
+
+	cd->mm = mm;
+	cd->entry.tag = asid;
+
+	return &cd->entry;
+
+err_free_asid:
+	arm_smmu_free_asid(cd);
+
+err_free_cd:
+	kfree(cd);
+
+err_put_context:
+	mm_context_put(mm);
+
+	return ERR_PTR(ret);
 }
 
 static struct iommu_pasid_entry *
@@ -280,20 +430,23 @@ arm_smmu_alloc_priv_cd(struct iommu_pasid_table_ops *ops,
 	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
 	struct arm_smmu_context_cfg *ctx_cfg = &tbl->pasid.cfg.arm_smmu;
 
-	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+	cd = arm_smmu_alloc_cd(tbl);
 	if (!cd)
 		return ERR_PTR(-ENOMEM);
 
-	asid = ida_simple_get(&asid_ida, 0, 1 << ctx_cfg->asid_bits,
-			      GFP_KERNEL);
+	idr_preload(GFP_KERNEL);
+	spin_lock(&asid_lock);
+	asid = idr_alloc_cyclic(&asid_idr, cd, 0, 1 << ctx_cfg->asid_bits,
+				GFP_ATOMIC);
+	cd->entry.tag = asid;
+	spin_unlock(&asid_lock);
+	idr_preload_end();
+
 	if (asid < 0) {
 		kfree(cd);
 		return ERR_PTR(asid);
 	}
 
-	cd->entry.tag = asid;
-	cd->entry.release = arm_smmu_free_cd;
-
 	switch (fmt) {
 	case ARM_64_LPAE_S1:
 		cd->ttbr = cfg->arm_lpae_s1_cfg.ttbr[0];
@@ -330,11 +483,20 @@ static void arm_smmu_clear_cd(struct iommu_pasid_table_ops *ops, int pasid,
 			     struct iommu_pasid_entry *entry)
 {
 	struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
+	struct arm_smmu_cd *cd = pasid_entry_to_cd(entry);
 
 	if (WARN_ON(pasid > (1 << tbl->pasid.cfg.order)))
 		return;
 
 	arm_smmu_write_ctx_desc(tbl, pasid, NULL);
+
+	/*
+	 * The ASID allocator won't broadcast the final TLB invalidations for
+	 * this ASID, so we need to do it manually. For private contexts,
+	 * freeing io-pgtable ops performs the invalidation.
+	 */
+	if (cd->mm)
+		iommu_pasid_flush_tlbs(&tbl->pasid, pasid, entry);
 }
 
 static struct iommu_pasid_table *
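For private contexts, the patch replaces ida_simple_get() with
idr_alloc_cyclic(), which reserves the ID and stores the CD pointer in one
step, and hands out IDs cyclically so that recently freed ASIDs are not
immediately reused. That cyclic behaviour can be approximated in userspace
as follows (a minimal sketch with a fixed-size table standing in for the
kernel IDR; names are hypothetical):

```c
#include <stddef.h>

#define ID_SPACE 8

static void *slots[ID_SPACE];
static unsigned int next_id; /* cursor, as kept by idr_alloc_cyclic() */

/* Allocate the next free ID at or after the cursor, wrapping once. */
static int alloc_cyclic(void *ptr)
{
	for (unsigned int i = 0; i < ID_SPACE; i++) {
		unsigned int id = (next_id + i) % ID_SPACE;

		if (!slots[id]) {
			slots[id] = ptr;
			next_id = id + 1;
			return (int)id;
		}
	}
	return -1; /* ID space exhausted (-ENOSPC in the kernel) */
}
```

Because the cursor advances past each allocation, an ID freed shortly after
use is skipped on the next allocation, which helps avoid reusing an ASID
whose TLB entries may still be in flight.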