From patchwork Mon Jun 26 13:38:48 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 9809715
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
Subject: [PATCH 3/5] iommu/arm-smmu-v3: add support for unmapping an iova range with only one tlb sync
Date: Mon, 26 Jun 2017 21:38:48 +0800
Message-ID: <1498484330-10840-4-git-send-email-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 1.9.5.msysgit.0
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
Cc: John Garry, Hanjun Guo, Xinwei Hu, Zefan Li, Tianhong Ding, Zhen Lei

1. Remove the tlb_sync operation from "unmap".
2. Make sure that each "unmap" is always followed by a tlb sync operation.

The resulting sequence of operations is as below:
	unmap memory page-1
	tlb invalidate page-1
	...
	unmap memory page-n
	tlb invalidate page-n
	tlb sync

Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu-v3.c    | 10 ++++++++++
 drivers/iommu/io-pgtable-arm.c | 30 ++++++++++++++++++++----------
 drivers/iommu/io-pgtable.h     |  1 +
 3 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4481123..328b9d7 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1724,6 +1724,15 @@ arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 	return ops->unmap(ops, iova, size);
 }
 
+static void arm_smmu_unmap_tlb_sync(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
+
+	if (ops && ops->unmap_tlb_sync)
+		ops->unmap_tlb_sync(ops);
+}
+
 static phys_addr_t
 arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 {
@@ -1943,6 +1952,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.attach_dev		= arm_smmu_attach_dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
+	.unmap_tlb_sync		= arm_smmu_unmap_tlb_sync,
 	.map_sg			= default_iommu_map_sg,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 52700fa..8137e62 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -304,6 +304,8 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		WARN_ON(!selftest_running);
 		return -EEXIST;
 	} else if (iopte_type(pte, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
+		size_t unmapped;
+
 		/*
 		 * We need to unmap and free the old table before
 		 * overwriting it with a block entry.
@@ -312,7 +314,9 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 		size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
 
 		tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
-		if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp) != sz))
+		unmapped = __arm_lpae_unmap(data, iova, sz, lvl, tblp);
+		io_pgtable_tlb_sync(&data->iop);
+		if (WARN_ON(unmapped != sz))
 			return -EINVAL;
 	}
 
@@ -576,7 +580,6 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 			/* Also flush any partial walks */
 			io_pgtable_tlb_add_flush(iop, iova, size,
 						 ARM_LPAE_GRANULE(data), false);
-			io_pgtable_tlb_sync(iop);
 			ptep = iopte_deref(pte, data);
 			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
 		} else {
@@ -601,16 +604,18 @@ static int __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 static int arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
 			  size_t size)
 {
-	size_t unmapped;
 	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 	arm_lpae_iopte *ptep = data->pgd;
 	int lvl = ARM_LPAE_START_LVL(data);
 
-	unmapped = __arm_lpae_unmap(data, iova, size, lvl, ptep);
-	if (unmapped)
-		io_pgtable_tlb_sync(&data->iop);
+	return __arm_lpae_unmap(data, iova, size, lvl, ptep);
+}
+
+static void arm_lpae_unmap_tlb_sync(struct io_pgtable_ops *ops)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
 
-	return unmapped;
+	io_pgtable_tlb_sync(&data->iop);
 }
 
 static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
@@ -723,6 +728,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
 	data->iop.ops = (struct io_pgtable_ops) {
 		.map		= arm_lpae_map,
 		.unmap		= arm_lpae_unmap,
+		.unmap_tlb_sync	= arm_lpae_unmap_tlb_sync,
 		.iova_to_phys	= arm_lpae_iova_to_phys,
 	};
 
@@ -1019,7 +1025,7 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	int i, j;
 	unsigned long iova;
-	size_t size;
+	size_t size, unmapped;
 	struct io_pgtable_ops *ops;
 
 	selftest_running = true;
@@ -1071,7 +1077,9 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		/* Partial unmap */
 		size = 1UL << __ffs(cfg->pgsize_bitmap);
-		if (ops->unmap(ops, SZ_1G + size, size) != size)
+		unmapped = ops->unmap(ops, SZ_1G + size, size);
+		ops->unmap_tlb_sync(ops);
+		if (unmapped != size)
 			return __FAIL(ops, i);
 
 		/* Remap of partial unmap */
@@ -1087,7 +1095,9 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		while (j != BITS_PER_LONG) {
 			size = 1UL << j;
-			if (ops->unmap(ops, iova, size) != size)
+			unmapped = ops->unmap(ops, iova, size);
+			ops->unmap_tlb_sync(ops);
+			if (unmapped != size)
 				return __FAIL(ops, i);
 
 			if (ops->iova_to_phys(ops, iova + 42))
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 524263a..7b3fc04 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -120,6 +120,7 @@ struct io_pgtable_ops {
 		   phys_addr_t paddr, size_t size, int prot);
 	int (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
 		     size_t size);
+	void (*unmap_tlb_sync)(struct io_pgtable_ops *ops);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
 };
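
[Editor's note] For readers unfamiliar with the calling convention this patch introduces, the split between "unmap" and "unmap_tlb_sync" can be modelled outside the kernel. The sketch below is a self-contained user-space C illustration of the batching pattern described in the commit message (unmap every page of a range, then issue a single sync). The struct and function names (demo_pgtable_ops, demo_unmap, demo_unmap_tlb_sync) only mirror io_pgtable_ops for illustration and are not part of this patch.

#include <stdio.h>
#include <stddef.h>

/*
 * Stand-in for io_pgtable_ops: unmap() only queues the TLB invalidation
 * for the pages it tears down, unmap_tlb_sync() waits for every queued
 * invalidation to complete.
 */
struct demo_pgtable_ops {
	size_t (*unmap)(struct demo_pgtable_ops *ops, unsigned long iova,
			size_t size);
	void (*unmap_tlb_sync)(struct demo_pgtable_ops *ops);
};

static size_t demo_unmap(struct demo_pgtable_ops *ops, unsigned long iova,
			 size_t size)
{
	(void)ops;
	printf("unmap memory page    0x%08lx\n", iova);
	printf("tlb invalidate page  0x%08lx (queued, no sync)\n", iova);
	return size;
}

static void demo_unmap_tlb_sync(struct demo_pgtable_ops *ops)
{
	(void)ops;
	printf("tlb sync (one completion wait for the whole range)\n");
}

int main(void)
{
	struct demo_pgtable_ops ops = {
		.unmap		= demo_unmap,
		.unmap_tlb_sync	= demo_unmap_tlb_sync,
	};
	const size_t page = 4096;
	unsigned long iova;

	/* Unmap an 8-page range page by page, then sync exactly once. */
	for (iova = 0; iova < 8 * page; iova += page) {
		if (ops.unmap(&ops, iova, page) != page)
			return 1;
	}
	ops.unmap_tlb_sync(&ops);

	return 0;
}

Deferring the sync to a single call at the end of the range amortises the cost of waiting for TLB invalidation completion over the whole range, instead of paying it once per page as the previous per-unmap sync did.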