From patchwork Mon Jun 13 08:09:32 2022
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12879157
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H . Peter Anvin",
 Eric Biederman, Rob Herring, Frank Rowand, Dave Young, Baoquan He,
 Vivek Goyal, Catalin Marinas, Will Deacon, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou,
 John Donnelly, Dave Kleikamp
Subject: [PATCH 5/5] arm64: kdump: Don't defer the reservation of crash high memory
Date: Mon, 13 Jun 2022 16:09:32 +0800
Message-ID: <20220613080932.663-6-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>
References: <20220613080932.663-1-thunder.leizhen@huawei.com>
List-Id: linux-arm-kernel.lists.infradead.org

If the crashkernel has both high memory above the DMA zones and low
memory in the DMA zones, kexec always loads content such as the Image
and the dtb to the high memory instead of the low
memory. This means that only the high memory requires write protection
based on page-level mapping. The allocation of the high memory does not
depend on the DMA boundary, so we can reserve the high memory first even
if the crashkernel reservation is deferred. As a result, block mapping
can still be used for the rest of the kernel linear address space, which
reduces the TLB miss rate and improves system performance.

Signed-off-by: Zhen Lei
---
 arch/arm64/mm/init.c | 71 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index fb24efbc46f5ef4..ae0bae2cafe6ab0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -141,15 +141,44 @@ static void __init reserve_crashkernel(int dma_state)
 	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
 	char *cmdline = boot_command_line;
 	int dma_enabled = IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
-	int ret;
+	int ret, skip_res = 0, skip_low_res = 0;
 	bool fixed_base;
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
 
-	if ((!dma_enabled && (dma_state != DMA_PHYS_LIMIT_UNKNOWN)) ||
-	    (dma_enabled && (dma_state != DMA_PHYS_LIMIT_KNOWN)))
-		return;
+	/*
+	 * In the following table:
+	 *   X,high  means crashkernel=X,high is specified
+	 *   unknown means dma_state = DMA_PHYS_LIMIT_UNKNOWN
+	 *   known   means dma_state = DMA_PHYS_LIMIT_KNOWN
+	 *
+	 * The first two columns indicate the status, and the last two
+	 * columns indicate the phase in which crash high or low memory
+	 * needs to be reserved.
+	 *  ---------------------------------------------------
+	 *  | DMA enabled | X,high used |  unknown  |  known  |
+	 *  ---------------------------------------------------
+	 *  |      N      |      N      |    low    |   NOP   |
+	 *  |      Y      |      N      |    NOP    |   low   |
+	 *  |      N      |      Y      | high/low  |   NOP   |
+	 *  |      Y      |      Y      |   high    |   low   |
+	 *  ---------------------------------------------------
+	 *
+	 * But in this function, the crash high memory allocation of
+	 * crashkernel=Y,high and the crash low memory allocation of
+	 * crashkernel=X[@offset] both go through crashk_res, so the
+	 * table above needs to be adjusted as below:
+	 *  ---------------------------------------------------
+	 *  | DMA enabled | X,high used |  unknown  |  known  |
+	 *  ---------------------------------------------------
+	 *  |      N      |      N      |    res    |   NOP   |
+	 *  |      Y      |      N      |    NOP    |   res   |
+	 *  |      N      |      Y      |res/low_res|   NOP   |
+	 *  |      Y      |      Y      |    res    | low_res |
+	 *  ---------------------------------------------------
+	 */
 
 	/* crashkernel=X[@offset] */
 	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
@@ -169,10 +198,33 @@ static void __init reserve_crashkernel(int dma_state)
 		else if (ret)
 			return;
 
+		/* See the third row of the second table above, NOP */
+		if (!dma_enabled && (dma_state == DMA_PHYS_LIMIT_KNOWN))
+			return;
+
+		/* See the fourth row of the second table above */
+		if (dma_enabled) {
+			if (dma_state == DMA_PHYS_LIMIT_UNKNOWN)
+				skip_low_res = 1;
+			else
+				skip_res = 1;
+		}
+
 		crash_max = CRASH_ADDR_HIGH_MAX;
 	} else if (ret || !crash_size) {
 		/* The specified value is invalid */
 		return;
+	} else {
+		/* See the 1-2 rows of the second table above, NOP */
+		if ((!dma_enabled && (dma_state == DMA_PHYS_LIMIT_KNOWN)) ||
+		    (dma_enabled && (dma_state == DMA_PHYS_LIMIT_UNKNOWN)))
+			return;
+	}
+
+	if (skip_res) {
+		crash_base = crashk_res.start;
+		crash_size = crashk_res.end - crashk_res.start + 1;
+		goto check_low;
 	}
 
 	fixed_base = !!crash_base;
@@ -202,9 +254,18 @@ static void __init reserve_crashkernel(int dma_state)
 		return;
 	}
 
+	crashk_res.start = crash_base;
+	crashk_res.end = crash_base + crash_size - 1;
+
+check_low:
+	if (skip_low_res)
+		return;
+
 	if ((crash_base >= CRASH_ADDR_LOW_MAX) && crash_low_size &&
 	    reserve_crashkernel_low(crash_low_size)) {
 		memblock_phys_free(crash_base, crash_size);
+		crashk_res.start = 0;
+		crashk_res.end = 0;
 		return;
 	}
@@ -219,8 +280,6 @@ static void __init reserve_crashkernel(int dma_state)
 	if (crashk_low_res.end)
 		kmemleak_ignore_phys(crashk_low_res.start);
 
-	crashk_res.start = crash_base;
-	crashk_res.end = crash_base + crash_size - 1;
 	insert_resource(&iomem_resource, &crashk_res);
 }