From patchwork Thu Jul 12 06:18:32 2018
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 10521125
From: Zhen Lei
To: Jean-Philippe Brucker, Robin Murphy, Will Deacon, Joerg Roedel,
    linux-arm-kernel, iommu, linux-kernel
Cc: Zhen Lei
Subject: [PATCH v3 6/6] iommu/arm-smmu-v3: add bootup option "iommu_strict_mode"
Date: Thu, 12 Jul 2018 14:18:32 +0800
Message-ID: <1531376312-2192-7-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>
References: <1531376312-2192-1-git-send-email-thunder.leizhen@huawei.com>

The non-strict mode introduces a vulnerability window, so add a bootup
option that lets the administrator choose which mode to use. The default
mode is IOMMU_STRICT.

Signed-off-by: Zhen Lei
---
 Documentation/admin-guide/kernel-parameters.txt | 12 ++++++++++
 drivers/iommu/arm-smmu-v3.c                     | 32 ++++++++++++++++++++++---
 2 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index efc7aa7..0cc80bc 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1720,6 +1720,18 @@
 	nobypass	[PPC/POWERNV]
 			Disable IOMMU bypass, using IOMMU for PCI devices.
 
+	iommu_strict_mode=	[arm-smmu-v3]
+		0 - strict mode
+			Make sure all related TLB entries are invalidated
+			before the memory is released.
+		1 - non-strict mode
+			Put off TLB invalidation and release the memory first.
+			This mode introduces a vulnerability window: an
+			untrusted device can access the reused memory because
+			the TLB entries may still be valid. Please take this
+			into full consideration before choosing this mode.
+			Note that VFIO always uses strict mode.
+		others - strict mode
+
 	iommu.passthrough=
 			[ARM64] Configure DMA to bypass the IOMMU by default.
 			Format: { "0" | "1" }

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4a198a0..9b72fc4 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -631,6 +631,24 @@ struct arm_smmu_option_prop {
 	{ 0, NULL},
 };
 
+static u32 iommu_strict_mode __read_mostly = IOMMU_STRICT;
+
+static int __init setup_iommu_strict_mode(char *str)
+{
+	u32 strict_mode = IOMMU_STRICT;
+
+	get_option(&str, &strict_mode);
+	if (strict_mode == IOMMU_NON_STRICT) {
+		iommu_strict_mode = strict_mode;
+		pr_warn("WARNING: iommu non-strict mode was chosen.\n"
+			"It's good for scatter-gather performance but lacks full isolation\n");
+		add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
+	}
+
+	return 0;
+}
+early_param("iommu_strict_mode", setup_iommu_strict_mode);
+
 static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
 						 struct arm_smmu_device *smmu)
 {
@@ -1441,7 +1459,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
 	case IOMMU_CAP_NOEXEC:
 		return true;
 	case IOMMU_CAP_NON_STRICT:
-		return true;
+		return (iommu_strict_mode == IOMMU_NON_STRICT) ? true : false;
 	default:
 		return false;
 	}
@@ -1750,6 +1768,14 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	return ret;
 }
 
+static u32 arm_smmu_strict_mode(struct iommu_domain *domain)
+{
+	if (iommu_strict_mode == IOMMU_NON_STRICT)
+		return IOMMU_DOMAIN_STRICT_MODE(domain);
+
+	return IOMMU_STRICT;
+}
+
 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 			phys_addr_t paddr, size_t size, int prot)
 {
@@ -1769,7 +1795,7 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 	if (!ops)
 		return 0;
 
-	return ops->unmap(ops, iova | IOMMU_DOMAIN_STRICT_MODE(domain), size);
+	return ops->unmap(ops, iova | arm_smmu_strict_mode(domain), size);
 }
 
 static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
@@ -1784,7 +1810,7 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
 {
 	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
 
-	if (smmu && (IOMMU_DOMAIN_STRICT_MODE(domain) == IOMMU_STRICT))
+	if (smmu && (arm_smmu_strict_mode(domain) == IOMMU_STRICT))
 		__arm_smmu_tlb_sync(smmu);
 }