From patchwork Thu Jul 12 06:18:30 2018
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 10521113
From: Zhen Lei
To: Jean-Philippe Brucker, Robin Murphy, Will Deacon, Joerg Roedel,
 linux-arm-kernel, iommu, linux-kernel
Subject: [PATCH v3 4/6] iommu/io-pgtable-arm: add support for non-strict mode
Date: Thu, 12 Jul 2018 14:18:30 +0800
Message-ID: <1531376312-2192-5-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>
References: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>
Cc: Zhen Lei

To support non-strict mode, only issue the TLB invalidation (tlbi) and
sync in strict mode. For the non-leaf case, however, always follow
strict mode, since the page-table pages being freed must not remain
visible to the device.

Use the lowest bit of the iova parameter to pass the strictness:
0, IOMMU_STRICT; 1, IOMMU_NON_STRICT. Treating 0 as IOMMU_STRICT keeps
the unmap operation compatible with other IOMMUs which still use strict
mode.

Signed-off-by: Zhen Lei
---
 drivers/iommu/io-pgtable-arm.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 010a254..9234db3 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -292,7 +292,7 @@ static void __arm_lpae_set_pte(arm_lpae_iopte *ptep, arm_lpae_iopte pte,
 
 static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
                                unsigned long iova, size_t size, int lvl,
-                               arm_lpae_iopte *ptep);
+                               arm_lpae_iopte *ptep, int strict);
 
 static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
                                 phys_addr_t paddr, arm_lpae_iopte prot,
@@ -334,7 +334,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
                 size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data);
 
                 tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data);
-                if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp) != sz))
+                if (WARN_ON(__arm_lpae_unmap(data, iova, sz, lvl, tblp, IOMMU_STRICT) != sz))
                         return -EINVAL;
         }
 
@@ -531,7 +531,7 @@ static void arm_lpae_free_pgtable(struct io_pgtable *iop)
 static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
                                        unsigned long iova, size_t size,
                                        arm_lpae_iopte blk_pte, int lvl,
-                                       arm_lpae_iopte *ptep)
+                                       arm_lpae_iopte *ptep, int strict)
 {
         struct io_pgtable_cfg *cfg = &data->iop.cfg;
         arm_lpae_iopte pte, *tablep;
@@ -576,15 +576,18 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
         }
 
         if (unmap_idx < 0)
-                return __arm_lpae_unmap(data, iova, size, lvl, tablep);
+                return __arm_lpae_unmap(data, iova, size, lvl, tablep, strict);
 
         io_pgtable_tlb_add_flush(&data->iop, iova, size, size, true);
+        if (!strict)
+                io_pgtable_tlb_sync(&data->iop);
+
         return size;
 }
 
 static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
                                unsigned long iova, size_t size, int lvl,
-                               arm_lpae_iopte *ptep)
+                               arm_lpae_iopte *ptep, int strict)
 {
         arm_lpae_iopte pte;
         struct io_pgtable *iop = &data->iop;
@@ -609,7 +612,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
                         io_pgtable_tlb_sync(iop);
                         ptep = iopte_deref(pte, data);
                         __arm_lpae_free_pgtable(data, lvl + 1, ptep);
-                } else {
+                } else if (strict) {
                         io_pgtable_tlb_add_flush(iop, iova, size, size, true);
                 }
 
@@ -620,25 +623,27 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
                  * minus the part we want to unmap
                  */
                 return arm_lpae_split_blk_unmap(data, iova, size, pte,
-                                                lvl + 1, ptep);
+                                                lvl + 1, ptep, strict);
         }
 
         /* Keep on walkin' */
         ptep = iopte_deref(pte, data);
-        return __arm_lpae_unmap(data, iova, size, lvl + 1, ptep);
+        return __arm_lpae_unmap(data, iova, size, lvl + 1, ptep, strict);
 }
 
 static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
                              size_t size)
 {
+        int strict = ((iova & IOMMU_STRICT_MODE_MASK) == IOMMU_STRICT);
         struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
         arm_lpae_iopte *ptep = data->pgd;
         int lvl = ARM_LPAE_START_LVL(data);
 
+        iova &= ~IOMMU_STRICT_MODE_MASK;
         if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias)))
                 return 0;
 
-        return __arm_lpae_unmap(data, iova, size, lvl, ptep);
+        return __arm_lpae_unmap(data, iova, size, lvl, ptep, strict);
 }
 
 static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
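
For illustration only, and not part of the diff above: a standalone
sketch of the bit-0 encoding that arm_lpae_unmap() now expects. The
macro values below are assumptions derived from the commit message
(lowest bit as the mask, 0 for strict, 1 for non-strict); the real
definitions are introduced elsewhere in this series.

/*
 * Standalone sketch: how a caller tags the iova and how the unmap path
 * recovers both the mode and the real address. Macro values assumed.
 */
#include <stdio.h>

#define IOMMU_STRICT            0UL
#define IOMMU_NON_STRICT        1UL
#define IOMMU_STRICT_MODE_MASK  1UL

int main(void)
{
        unsigned long iova = 0x1000;    /* page aligned, so bit 0 is free */

        /* Caller requests a lazy (non-strict) flush by tagging bit 0. */
        unsigned long tagged = iova | IOMMU_NON_STRICT;

        /* The unmap path recovers the mode and the real iova. */
        int strict = ((tagged & IOMMU_STRICT_MODE_MASK) == IOMMU_STRICT);
        unsigned long real_iova = tagged & ~IOMMU_STRICT_MODE_MASK;

        printf("strict=%d iova=0x%lx\n", strict, real_iova);
        return 0;
}

Because IOVAs passed to unmap are at least page aligned, bit 0 is
always available to carry this flag, and treating a clear bit as
IOMMU_STRICT means callers that never tag the iova keep today's strict
behaviour.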