From patchwork Mon Mar 18 13:12:42 2019
X-Patchwork-Submitter: Zhen Lei
X-Patchwork-Id: 10857599
From: Zhen Lei
To: Jean-Philippe Brucker, Robin Murphy, Will Deacon, Joerg Roedel,
	linux-arm-kernel, iommu, linux-kernel
Cc: Zhen Lei
Subject: [PATCH v2 1/2] iommu/arm-smmu-v3: make sure stale cached copies of
	L1STDs are invalidated
Date: Mon, 18 Mar 2019 21:12:42 +0800
Message-ID: <20190318131243.20716-2-thunder.leizhen@huawei.com>
In-Reply-To: <20190318131243.20716-1-thunder.leizhen@huawei.com>
References: <20190318131243.20716-1-thunder.leizhen@huawei.com>

After the content of an L1STD (Level 1 Stream Table Descriptor) in DDR has
been modified, we should make sure that any cached copies of it are
invalidated.

Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu-v3.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index d3880010c6cfc8c..9b6afa8e69f70f6 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1071,13 +1071,14 @@ static void arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
 	*dst = cpu_to_le64(val);
 }
 
-static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+static void __arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu,
+					u32 sid, bool leaf)
 {
 	struct arm_smmu_cmdq_ent cmd = {
 		.opcode	= CMDQ_OP_CFGI_STE,
 		.cfgi	= {
 			.sid	= sid,
-			.leaf	= true,
+			.leaf	= leaf,
 		},
 	};
 
@@ -1085,6 +1086,16 @@ static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
 	arm_smmu_cmdq_issue_sync(smmu);
 }
 
+static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+	__arm_smmu_sync_ste_for_sid(smmu, sid, true);
+}
+
+static void arm_smmu_sync_std_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+	__arm_smmu_sync_ste_for_sid(smmu, sid, false);
+}
+
 static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
 				      __le64 *dst, struct arm_smmu_strtab_ent *ste)
 {
@@ -1232,6 +1243,7 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
 
 	arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
 	arm_smmu_write_strtab_l1_desc(strtab, desc);
+	arm_smmu_sync_std_for_sid(smmu, sid);
 	return 0;
 }
 

From patchwork Mon Mar 18 13:12:43 2019
X-Patchwork-Submitter: Zhen Lei
X-Patchwork-Id: 10857601
From: Zhen Lei
To: Jean-Philippe Brucker, Robin Murphy, Will Deacon, Joerg Roedel,
	linux-arm-kernel, iommu, linux-kernel
Cc: Zhen Lei
Subject: [PATCH v2 2/2] iommu/arm-smmu-v3: allow the SMMU to be enabled in
	the kdump kernel
Date: Mon, 18 Mar 2019 21:12:43 +0800
Message-ID: <20190318131243.20716-3-thunder.leizhen@huawei.com>
In-Reply-To: <20190318131243.20716-1-thunder.leizhen@huawei.com>
References: <20190318131243.20716-1-thunder.leizhen@huawei.com>

I do not know why device_shutdown() is not called in the first kernel before
execution switches to the secondary (kdump) kernel. Perhaps people fear that,
because it performs so many operations, it could crash the kernel again and
prevent the secondary kernel from ever being entered.

A device driver that is configured in the first kernel may not be configured
at all in the secondary kernel, because the latter runs with very limited
memory. (For ease of description, call such devices "unexpected devices".)
Because these devices were never shut down in the first kernel, they may
still access memory while the secondary kernel is running. For example, a
network card may still be receiving packets into its ring buffer.

Commit b63b3439b856 ("iommu/arm-smmu-v3: Abort all transactions if SMMU is
enabled in kdump kernel") sets SMMU_GBPA.ABORT to abort the accesses of the
unexpected devices, but it also aborts the memory accesses of devices that we
still need, such as the network card.
For example, a system may have no hard disk, so the vmcore has to be dumped
over the network.

In fact, we can use STE.config=0b000 to abort the memory accesses of the
unexpected devices only, as follows:

1. In the first kernel, all buffers used by the "unexpected" devices are
   correctly mapped, and they will not be corrupted by the secondary kernel,
   because the latter runs entirely within its own reserved memory.
2. On entry to the secondary kernel, set SMMU_GBPA.ABORT=1, then disable the
   SMMU.
3. Preset all STE entries to STE.config=0b000. For a 2-level Stream Table,
   pre-allocate one dummy L2ST (Level 2 Stream Table) and make every
   L1STD.l2ptr point to it; the dummy L2ST is shared by all L1STDs (Level 1
   Stream Table Descriptors).
4. Enable the SMMU. From now on, whenever a newly attached device is actually
   needed, a real L2ST is allocated for it and the corresponding L1STD.l2ptr
   is updated to point to that L2ST.

Please note that we still rely on desc->l2ptr to judge whether an L2ST has
already been allocated, and do not care about the value of L1STD.l2ptr.

Fixes: b63b3439b856 ("iommu/arm-smmu-v3: Abort all transactions if SMMU is enabled in kdump kernel")
Signed-off-by: Zhen Lei
---
 drivers/iommu/arm-smmu-v3.c | 72 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 21 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 9b6afa8e69f70f6..28b04d4aef62a9f 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1218,35 +1218,57 @@ static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
 	}
 }
 
-static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+static int __arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid,
+				     struct arm_smmu_strtab_l1_desc *desc)
 {
-	size_t size;
 	void *strtab;
 	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
-	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
 
-	if (desc->l2ptr)
-		return 0;
-
-	size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
 	strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
 
-	desc->span = STRTAB_SPLIT + 1;
-	desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, &desc->l2ptr_dma,
-					  GFP_KERNEL | __GFP_ZERO);
 	if (!desc->l2ptr) {
-		dev_err(smmu->dev,
-			"failed to allocate l2 stream table for SID %u\n",
-			sid);
-		return -ENOMEM;
+		size_t size;
+
+		size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+		desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
+						  &desc->l2ptr_dma,
+						  GFP_KERNEL | __GFP_ZERO);
+		if (!desc->l2ptr) {
+			dev_err(smmu->dev,
+				"failed to allocate l2 stream table for SID %u\n",
+				sid);
+			return -ENOMEM;
+		}
+
+		desc->span = STRTAB_SPLIT + 1;
+		arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
 	}
 
-	arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
 	arm_smmu_write_strtab_l1_desc(strtab, desc);
+	return 0;
+}
+
+static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+	int ret;
+	struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+	struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+
+	ret = __arm_smmu_init_l2_strtab(smmu, sid, desc);
+	if (ret)
+		return ret;
+
 	arm_smmu_sync_std_for_sid(smmu, sid);
 	return 0;
 }
 
+static int arm_smmu_init_dummy_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+	static struct arm_smmu_strtab_l1_desc dummy_desc;
+
+	return __arm_smmu_init_l2_strtab(smmu, sid, &dummy_desc);
+}
+
 /* IRQ and event handlers */
 static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 {
@@ -2149,8 +2171,12 @@ static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
 	}
 
 	for (i = 0; i < cfg->num_l1_ents; ++i) {
-		arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
-		strtab += STRTAB_L1_DESC_DWORDS << 3;
+		if (is_kdump_kernel()) {
+			arm_smmu_init_dummy_l2_strtab(smmu, i << STRTAB_SPLIT);
+		} else {
+			arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
+			strtab += STRTAB_L1_DESC_DWORDS << 3;
+		}
 	}
 	return 0;
 }
@@ -2466,11 +2492,8 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	/* Clear CR0 and sync (disables SMMU and queue processing) */
 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
 	if (reg & CR0_SMMUEN) {
-		if (is_kdump_kernel()) {
+		if (is_kdump_kernel())
 			arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
-			arm_smmu_device_disable(smmu);
-			return -EBUSY;
-		}
 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
 	}
 
@@ -2858,6 +2881,13 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	struct device *dev = &pdev->dev;
 	bool bypass;
 
+	/*
+	 * Force to disable bypass in the kdump kernel, abort all incoming
+	 * transactions from the unknown devices.
+	 */
+	if (is_kdump_kernel())
+		disable_bypass = 1;
+
 	smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
 	if (!smmu) {
 		dev_err(dev, "failed to allocate arm_smmu_device\n");
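
As a rough illustration of the shared dummy-L2ST scheme described in the
commit message above (steps 1-4), here is a minimal, self-contained C sketch.
It is not part of the patch and none of its names (ste, dummy_l2, l1,
preset_all_abort, attach_device) exist in the driver; it only models the idea
that every L1 descriptor initially points at one shared, all-abort L2 table,
and that a device we actually need later gets its own L2 table with just its
STE made valid.

/*
 * Illustrative sketch only: models the kdump "shared dummy L2ST" idea with
 * plain C structures. Sizes and names are made up; the real driver uses
 * arm_smmu_strtab_l1_desc, STRTAB_SPLIT, etc.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SPLIT       8                    /* SID[SPLIT-1:0] indexes the L2 table */
#define L2_ENTRIES  (1u << SPLIT)
#define L1_ENTRIES  16                   /* toy stream table: 16 * 256 SIDs */

struct ste { uint64_t config; };         /* config == 0 -> incoming transactions abort */

static struct ste dummy_l2[L2_ENTRIES];  /* zero-initialised: every STE aborts */
static struct ste *l1[L1_ENTRIES];       /* stands in for L1STD.l2ptr */

static void preset_all_abort(void)
{
	/* Step 3: every L1 descriptor shares the single dummy L2ST. */
	for (int i = 0; i < L1_ENTRIES; i++)
		l1[i] = dummy_l2;
}

static int attach_device(uint32_t sid)
{
	uint32_t idx = sid >> SPLIT;

	/* Step 4: a device we actually need gets its own L2ST... */
	if (l1[idx] == dummy_l2) {
		struct ste *l2 = calloc(L2_ENTRIES, sizeof(*l2));
		if (!l2)
			return -1;
		l1[idx] = l2;
	}
	/* ...and only its own STE is made valid; every other SID keeps aborting. */
	l1[idx][sid & (L2_ENTRIES - 1)].config = 1;
	return 0;
}

int main(void)
{
	preset_all_abort();
	attach_device(0x104);
	printf("SID 0x104 config=%llu, SID 0x105 config=%llu\n",
	       (unsigned long long)l1[0x104 >> SPLIT][0x104 & (L2_ENTRIES - 1)].config,
	       (unsigned long long)l1[0x105 >> SPLIT][0x105 & (L2_ENTRIES - 1)].config);
	return 0;
}

Running the sketch prints config=1 for the attached SID and config=0 (abort)
for its neighbour, mirroring how the patch lets needed devices do DMA in the
kdump kernel while the "unexpected" devices are silently aborted.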