From patchwork Wed Aug 15 01:28:25 2018
From: Zhen Lei
To: Robin Murphy, Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, linux-kernel
Subject: [PATCH v5 0/5] add non-strict mode support for arm-smmu-v3
Date: Wed, 15 Aug 2018 09:28:25 +0800
Message-ID: <1534296510-12888-1-git-send-email-thunder.leizhen@huawei.com>
Cc: John Garry, Hanjun Guo, LinuxArm, Libin, Zhen Lei

v4 -> v5:
1. Change the type of the global variable and struct member named "non_strict" from "int" to "bool".
2. Drop the unnecessary "strict" parameter of __arm_lpae_unmap that was added in v4.
3. Rename the boot option from "arm_iommu" to "iommu.non_strict".
4. Convert __iommu_dma_unmap to use iommu_unmap_fast()/iommu_tlb_sync(), because non-leaf unmaps still need to be synchronous (a sketch of this unmap flow is included below).
Thanks to Robin for the review comments.

v3 -> v4:
1. Add a new member "non_strict" to struct iommu_domain to mark whether that domain uses non-strict mode or not. This lets us remove the capability that was added in the previous version.
2. Add a new quirk, IO_PGTABLE_QUIRK_NON_STRICT, so that io-pgtable-arm.c can determine the strictness from data->iop.cfg.quirks.
3. Rename the new boot option to "arm_iommu".

v2 -> v3:
Add a boot option "iommu_strict_mode" so that the administrator can choose which mode to use. The first 5 patches are unchanged.
+ iommu_strict_mode=	[arm-smmu-v3]
+	0 - strict mode (default)
+	1 - non-strict mode

v1 -> v2:
Use the lowest bit of the iova parameter of io_pgtable_ops.unmap to pass the strict mode: 0 = IOMMU_STRICT, 1 = IOMMU_NON_STRICT. Treat 0 as IOMMU_STRICT so that the unmap operation stays compatible with other IOMMUs which still use strict mode; in other words, this patch series does not impact other IOMMU drivers.
I tried adding a new quirk IO_PGTABLE_QUIRK_NON_STRICT to io_pgtable_cfg.quirks, but that could not pass the strict mode of the domain from the SMMUv3 driver to the io-pgtable module.
Add a new member "domain_non_strict" to struct iommu_dma_cookie; this member is only initialized when the related domain and IOMMU driver support non-strict mode.

v1:
In general, an IOMMU unmap operation follows the steps below:
1. remove the mapping from the page tables for the specified iova range
2. execute a tlbi command to invalidate the mapping cached in the TLB
3. wait for the tlbi operation to finish
4. free the IOVA resource
5. free the physical memory resource

This can be a problem when unmaps are very frequent: the combination of tlbi and wait operations consumes a lot of time. A feasible method is to defer the tlbi and iova-free operations; once a certain number have accumulated, or a specified time has passed, execute a single tlbi_all command to clean up the TLB and then free the backed-up IOVAs. We call this non-strict mode.

It must be noted that, although the mapping has already been removed from the page tables, it may still exist in the TLB, and the freed physical memory may already have been reused for something else. So an attacker can keep accessing memory through the just-freed IOVA to obtain sensitive data or corrupt memory. VFIO should therefore always use strict mode.

One might consider deferring the physical memory free as well, which would preserve strict semantics. But for the map_sg cases the memory allocation is not controlled by the IOMMU APIs, so this is not enforceable.

Fortunately, Intel and AMD have already implemented non-strict mode and put the queue_iova() handling into the common file dma-iommu.c, and my work is based on that. The difference is that the arm-smmu-v3 driver calls the common IOMMU APIs to unmap, whereas the Intel and AMD IOMMU drivers do not.
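To make the deferred flow above a bit more concrete, here is a minimal sketch (in the spirit of v5 change 4) of how the DMA unmap path can split the page-table teardown from the TLB sync. It is illustrative only, not the literal patch: the function name is made up, it assumes the new domain->non_strict flag from this series plus direct access to the cookie's iova_domain, and it relies on the existing iommu_unmap_fast()/iommu_tlb_sync() and queue_iova()/free_iova_fast() helpers.

	/* Sketch only -- not the literal patch (dma-iommu.c context assumed). */
	static void __iommu_dma_unmap_sketch(struct iommu_domain *domain,
					     dma_addr_t dma_addr, size_t size)
	{
		struct iommu_dma_cookie *cookie = domain->iova_cookie;
		struct iova_domain *iovad = &cookie->iovad;
		size_t iova_off = iova_offset(iovad, dma_addr);

		dma_addr -= iova_off;
		size = iova_align(iovad, size + iova_off);

		/* Tear down the mapping, but do not wait for the TLBI here. */
		WARN_ON(iommu_unmap_fast(domain, dma_addr, size) != size);

		if (!domain->non_strict) {
			/* Strict: sync the invalidation, then recycle the IOVA. */
			iommu_tlb_sync(domain);
			free_iova_fast(iovad, iova_pfn(iovad, dma_addr),
				       size >> iova_shift(iovad));
		} else {
			/*
			 * Non-strict: queue the IOVA; the flush queue frees it
			 * later, after a single global invalidation.
			 */
			queue_iova(iovad, iova_pfn(iovad, dma_addr),
				   size >> iova_shift(iovad), 0);
		}
	}

The intent, per the changelog above, is that only leaf invalidations are deferred; non-leaf (table) teardown still needs synchronous invalidation, which the io-pgtable code can handle via the new quirk.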
Below is the performance data of strict vs non-strict for an NVMe device:
Random read IOPS:  146K (strict) vs 573K (non-strict)
Random write IOPS: 143K (strict) vs 513K (non-strict)

Zhen Lei (5):
  iommu/arm-smmu-v3: fix the implementation of flush_iotlb_all hook
  iommu/dma: add support for non-strict mode
  iommu/io-pgtable-arm: add support for non-strict mode
  iommu/arm-smmu-v3: add support for non-strict mode
  iommu/arm-smmu-v3: add bootup option "iommu.non_strict"

 Documentation/admin-guide/kernel-parameters.txt | 13 +++++++++
 drivers/iommu/arm-smmu-v3.c                     | 35 ++++++++++++++++++++++++-
 drivers/iommu/dma-iommu.c                       | 29 +++++++++++++++++++-
 drivers/iommu/io-pgtable-arm.c                  | 20 +++++++++----
 drivers/iommu/io-pgtable.h                      |  3 +++
 drivers/iommu/iommu.c                           |  1 +
 include/linux/iommu.h                           |  1 +
 7 files changed, 94 insertions(+), 8 deletions(-)
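For completeness, a usage note and a second, equally hypothetical sketch. With patch 5 applied, non-strict mode would be selected on the kernel command line, presumably as "iommu.non_strict=1" (the option is a bool as of v5; the older "iommu_strict_mode"/"arm_iommu" spellings are gone). The snippet below shows one plausible shape for parsing the option and propagating it into io-pgtable as the new IO_PGTABLE_QUIRK_NON_STRICT quirk; the function names, the early_param wiring, and the DMA-domain check are assumptions, not the literal patches.

	/* Sketch only: plausible parsing and propagation of iommu.non_strict. */
	static bool iommu_non_strict __read_mostly;	/* default: strict */

	static int __init iommu_non_strict_setup(char *str)
	{
		return kstrtobool(str, &iommu_non_strict);
	}
	early_param("iommu.non_strict", iommu_non_strict_setup);

	/*
	 * Called from the SMMUv3 domain finalise path, before the io-pgtable
	 * ops are allocated, so the quirk reaches io-pgtable-arm.c.
	 */
	static void arm_smmu_apply_non_strict(struct iommu_domain *domain,
					      struct io_pgtable_cfg *pgtbl_cfg)
	{
		if (iommu_non_strict && domain->type == IOMMU_DOMAIN_DMA) {
			domain->non_strict = true;			  /* new member */
			pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_NON_STRICT; /* new quirk  */
		}
	}

The IOMMU_DOMAIN_DMA check mirrors the point made above that unmanaged (e.g. VFIO) domains must always stay strict.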