From patchwork Mon Jun 13 08:09:28 2022
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12879155
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Eric Biederman, Rob Herring, Frank Rowand, Dave Young, Baoquan He, Vivek Goyal, Catalin Marinas, Will Deacon, Jonathan Corbet
CC: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly, Dave Kleikamp
Subject: [PATCH 1/5] arm64: kdump: Provide default size when crashkernel=Y,low is not specified
Date: Mon, 13 Jun 2022 16:09:28 +0800
Message-ID: <20220613080932.663-2-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>

This keeps arm64 consistent with the x86 implementation and improves the
cross-platform user experience.
Try to allocate at least 256 MiB of low memory automatically when
crashkernel=Y,low is not specified.

Signed-off-by: Zhen Lei
---
 Documentation/admin-guide/kernel-parameters.txt |  8 +-------
 arch/arm64/mm/init.c                            | 12 +++++++++++-
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 8090130b544b070..61b179232b68001 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -843,7 +843,7 @@
 			available.
 			It will be ignored if crashkernel=X is specified.
 	crashkernel=size[KMG],low
-			[KNL, X86-64] range under 4G. When crashkernel=X,high
+			[KNL, X86-64, ARM64] range under 4G. When crashkernel=X,high
 			is passed, kernel could allocate physical memory region
 			above 4G, that cause second kernel crash on system
 			that require some amount of low memory, e.g. swiotlb
@@ -857,12 +857,6 @@
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
 
-			[KNL, ARM64] range in low memory.
-			This one lets the user specify a low range in the
-			DMA zone for the crash dump kernel.
-			It will be ignored when crashkernel=X,high is not used
-			or memory reserved is located in the DMA zones.
-
 	cryptomgr.notests
 			[KNL] Disable crypto self-tests

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 339ee84e5a61a0b..5390f361208ccf7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -96,6 +96,14 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
 #define CRASH_ADDR_LOW_MAX	arm64_dma_phys_limit
 #define CRASH_ADDR_HIGH_MAX	(PHYS_MASK + 1)
 
+/*
+ * This is an empirical value in x86_64 and taken here directly. Please
+ * refer to the code comment in reserve_crashkernel_low() of x86_64 for more
+ * details.
+ */
+#define DEFAULT_CRASH_KERNEL_LOW_SIZE	\
+	max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20)
+
 static int __init reserve_crashkernel_low(unsigned long long low_size)
 {
 	unsigned long long low_base;
@@ -147,7 +155,9 @@ static void __init reserve_crashkernel(void)
 	 * is not allowed.
 	 */
 	ret = parse_crashkernel_low(cmdline, 0, &crash_low_size, &crash_base);
-	if (ret && (ret != -ENOENT))
+	if (ret == -ENOENT)
+		crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
+	else if (ret)
 		return;
 
 	crash_max = CRASH_ADDR_HIGH_MAX;
From patchwork Mon Jun 13 08:09:29 2022
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12879154
From: Zhen Lei
Subject: [PATCH 2/5] arm64: kdump: Support crashkernel=X fall back to reserve region above DMA zones
Date: Mon, 13 Jun 2022 16:09:29 +0800
Message-ID: <20220613080932.663-3-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>

For crashkernel=X without '@offset', select a region within the DMA
zones first, and fall back to reserving a region above the DMA zones.
This allows users to use the same configuration on multiple platforms.
Signed-off-by: Zhen Lei
Acked-by: Baoquan He
---
 Documentation/admin-guide/kernel-parameters.txt |  2 +-
 arch/arm64/mm/init.c                            | 16 +++++++++++++++-
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 61b179232b68001..fdac18beba5624e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -823,7 +823,7 @@
 			memory region [offset, offset + size] for that kernel
 			image. If '@offset' is omitted, then a suitable offset
 			is selected automatically.
-			[KNL, X86-64] Select a region under 4G first, and
+			[KNL, X86-64, ARM64] Select a region under 4G first, and
 			fall back to reserve region above 4G when '@offset'
 			hasn't been specified.
 			See Documentation/admin-guide/kdump/kdump.rst for further details.

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 5390f361208ccf7..8539598f9e58b4d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -138,6 +138,7 @@ static void __init reserve_crashkernel(void)
 	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
 	char *cmdline = boot_command_line;
 	int ret;
+	bool fixed_base;
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
@@ -166,15 +167,28 @@ static void __init reserve_crashkernel(void)
 		return;
 	}
 
+	fixed_base = !!crash_base;
 	crash_size = PAGE_ALIGN(crash_size);
 
 	/* User specifies base address explicitly. */
-	if (crash_base)
+	if (fixed_base)
 		crash_max = crash_base + crash_size;
 
+retry:
 	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
 					       crash_base, crash_max);
 	if (!crash_base) {
+		/*
+		 * Attempt to fully allocate low memory failed, fall back
+		 * to high memory, the minimum required low memory will be
+		 * reserved later.
+		 */
+		if (!fixed_base && (crash_max == CRASH_ADDR_LOW_MAX)) {
+			crash_max = CRASH_ADDR_HIGH_MAX;
+			crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
+			goto retry;
+		}
+
 		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 			crash_size);
 		return;
From patchwork Mon Jun 13 08:09:30 2022
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12879153
From: Zhen Lei
Subject: [PATCH 3/5] arm64: kdump: Remove some redundant checks in map_mem()
Date: Mon, 13 Jun 2022 16:09:30 +0800
Message-ID: <20220613080932.663-4-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>

arm64_memblock_init()
	if (!IS_ENABLED(CONFIG_ZONE_DMA/DMA32))
		reserve_crashkernel()	//initialize crashk_res when
					//"crashkernel=" is correctly specified
paging_init()
	map_mem()

As shown in the pseudo code above, crashk_res.end can only be
initialized to a non-zero value when both
!IS_ENABLED(CONFIG_ZONE_DMA/DMA32) and crash_mem_map are true. So some
checks in map_mem() can be adjusted or optimized accordingly.

Signed-off-by: Zhen Lei
Acked-by: Baoquan He
---
 arch/arm64/mm/mmu.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 626ec32873c6c36..6028a5757e4eae2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,12 +529,12 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
-		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
-		    IS_ENABLED(CONFIG_ZONE_DMA32))
-			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
-		else if (crashk_res.end)
+		if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
 					    resource_size(&crashk_res));
+		else if (IS_ENABLED(CONFIG_ZONE_DMA) ||
+			 IS_ENABLED(CONFIG_ZONE_DMA32))
+			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 	}
 #endif
 
@@ -571,16 +571,13 @@ static void __init map_mem(pgd_t *pgdp)
 	 * through /sys/kernel/kexec_crash_size interface.
 	 */
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map &&
-	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
-		if (crashk_res.end) {
-			__map_memblock(pgdp, crashk_res.start,
-				       crashk_res.end + 1,
-				       PAGE_KERNEL,
-				       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
-			memblock_clear_nomap(crashk_res.start,
-					     resource_size(&crashk_res));
-		}
+	if (crashk_res.end) {
+		__map_memblock(pgdp, crashk_res.start,
+			       crashk_res.end + 1,
+			       PAGE_KERNEL,
+			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
+		memblock_clear_nomap(crashk_res.start,
+				     resource_size(&crashk_res));
 	}
 #endif
 }
From patchwork Mon Jun 13 08:09:31 2022
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12879156
From: Zhen Lei
Subject: [PATCH 4/5] arm64: kdump: Decide when to reserve crash memory in reserve_crashkernel()
Date: Mon, 13 Jun 2022 16:09:31 +0800
Message-ID: <20220613080932.663-5-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>

After kexec finishes loading data, the crash memory must be made
inaccessible, to prevent the current kernel from damaging the crash
kernel's data. But on some platforms the DMA zones are not known until
the dtb or ACPI tables are parsed, and by then the linear mapping has
already been created, so all of it is forced to page-level mappings. To
optimize system performance (reduce the TLB miss rate) when
crashkernel=X,high is used, the reservation of crash memory is divided
into two phases: reserve the crash high memory before paging_init() is
called and the crash low memory after it. We then only need to perform
page-level mapping for the crash high memory.
Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones") causes reserve_crashkernel() to be
called from one of two places, before or after paging_init(), controlled
by whether CONFIG_ZONE_DMA/DMA32 is enabled. Move that control into
reserve_crashkernel() itself, to prepare for the optimization mentioned
above.

Signed-off-by: Zhen Lei
---
 arch/arm64/mm/init.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8539598f9e58b4d..fb24efbc46f5ef4 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -90,6 +90,9 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
 #endif
 
+#define DMA_PHYS_LIMIT_UNKNOWN	0
+#define DMA_PHYS_LIMIT_KNOWN	1
+
 /* Current arm64 boot protocol requires 2MB alignment */
 #define CRASH_ALIGN		SZ_2M
 
@@ -131,18 +134,23 @@
  * line parameter. The memory reserved is used by dump capture kernel when
 * primary kernel is crashing.
 */
-static void __init reserve_crashkernel(void)
+static void __init reserve_crashkernel(int dma_state)
 {
 	unsigned long long crash_base, crash_size;
 	unsigned long long crash_low_size = 0;
 	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
 	char *cmdline = boot_command_line;
+	int dma_enabled = IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
 	int ret;
 	bool fixed_base;
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
 
+	if ((!dma_enabled && (dma_state != DMA_PHYS_LIMIT_UNKNOWN)) ||
+	    (dma_enabled && (dma_state != DMA_PHYS_LIMIT_KNOWN)))
+		return;
+
 	/* crashkernel=X[@offset] */
 	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
@@ -413,8 +421,7 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
-		reserve_crashkernel();
+	reserve_crashkernel(DMA_PHYS_LIMIT_UNKNOWN);
 
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 }
@@ -462,8 +469,7 @@ void __init bootmem_init(void)
 	 * request_standard_resources() depends on crashkernel's memory being
 	 * reserved, so do it here.
 	 */
-	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
-		reserve_crashkernel();
+	reserve_crashkernel(DMA_PHYS_LIMIT_KNOWN);
 
 	memblock_dump_all();
 }
From patchwork Mon Jun 13 08:09:32 2022
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12879157
From: Zhen Lei
Subject: [PATCH 5/5] arm64: kdump: Don't defer the reservation of crash high memory
Date: Mon, 13 Jun 2022 16:09:32 +0800
Message-ID: <20220613080932.663-6-thunder.leizhen@huawei.com>
In-Reply-To: <20220613080932.663-1-thunder.leizhen@huawei.com>

If the
crashkernel has both high memory above the DMA zones and low memory
within the DMA zones, kexec always loads content such as the Image and
the dtb into the high memory rather than the low memory. This means that
only the high memory requires write protection based on page-level
mapping. The allocation of the high memory does not depend on the DMA
boundary, so we can reserve the high memory first even if the
crashkernel reservation is deferred. Block mapping can then still be
used for the rest of the kernel linear address space, reducing the TLB
miss rate and improving system performance.

Signed-off-by: Zhen Lei
---
 arch/arm64/mm/init.c | 71 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index fb24efbc46f5ef4..ae0bae2cafe6ab0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -141,15 +141,44 @@ static void __init reserve_crashkernel(int dma_state)
 	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
 	char *cmdline = boot_command_line;
 	int dma_enabled = IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
-	int ret;
+	int ret, skip_res = 0, skip_low_res = 0;
 	bool fixed_base;
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
 
-	if ((!dma_enabled && (dma_state != DMA_PHYS_LIMIT_UNKNOWN)) ||
-	    (dma_enabled && (dma_state != DMA_PHYS_LIMIT_KNOWN)))
-		return;
+	/*
+	 * In the following table:
+	 *   X,high  means crashkernel=X,high
+	 *   unknown means dma_state = DMA_PHYS_LIMIT_UNKNOWN
+	 *   known   means dma_state = DMA_PHYS_LIMIT_KNOWN
+	 *
+	 * The first two columns indicate the status, and the last two
+	 * columns indicate the phase in which crash high or low memory
+	 * needs to be reserved.
+	 * ---------------------------------------------------
+	 * | DMA enabled | X,high used |  unknown  |   known  |
+	 * ---------------------------------------------------
+	 * |      N            N       |    low    |    NOP   |
+	 * |      Y            N       |    NOP    |    low   |
+	 * |      N            Y       | high/low  |    NOP   |
+	 * |      Y            Y       |   high    |    low   |
+	 * ---------------------------------------------------
+	 *
+	 * But in this function, the crash high memory allocation of
+	 * crashkernel=Y,high and the crash low memory allocation of
+	 * crashkernel=X[@offset] for crashk_res are mixed at one place.
+	 * So the table above needs to be adjusted as below:
+	 * ---------------------------------------------------
+	 * | DMA enabled | X,high used |  unknown  |   known  |
+	 * ---------------------------------------------------
+	 * |      N            N       |    res    |    NOP   |
+	 * |      Y            N       |    NOP    |    res   |
+	 * |      N            Y       |res/low_res|    NOP   |
+	 * |      Y            Y       |    res    |  low_res |
+	 * ---------------------------------------------------
+	 */
 
 	/* crashkernel=X[@offset] */
 	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
@@ -169,10 +198,33 @@ static void __init reserve_crashkernel(int dma_state)
 		else if (ret)
 			return;
 
+		/* See the third row of the second table above, NOP */
+		if (!dma_enabled && (dma_state == DMA_PHYS_LIMIT_KNOWN))
+			return;
+
+		/* See the fourth row of the second table above */
+		if (dma_enabled) {
+			if (dma_state == DMA_PHYS_LIMIT_UNKNOWN)
+				skip_low_res = 1;
+			else
+				skip_res = 1;
+		}
+
 		crash_max = CRASH_ADDR_HIGH_MAX;
 	} else if (ret || !crash_size) {
 		/* The specified value is invalid */
 		return;
+	} else {
+		/* See the 1-2 rows of the second table above, NOP */
+		if ((!dma_enabled && (dma_state == DMA_PHYS_LIMIT_KNOWN)) ||
+		    (dma_enabled && (dma_state == DMA_PHYS_LIMIT_UNKNOWN)))
+			return;
+	}
+
+	if (skip_res) {
+		crash_base = crashk_res.start;
+		crash_size = crashk_res.end - crashk_res.start + 1;
+		goto check_low;
 	}
 
 	fixed_base = !!crash_base;
@@ -202,9 +254,18 @@ static void __init reserve_crashkernel(int dma_state)
 		return;
 	}
 
+	crashk_res.start = crash_base;
+	crashk_res.end = crash_base + crash_size - 1;
+
+check_low:
+	if (skip_low_res)
+		return;
+
 	if ((crash_base >= CRASH_ADDR_LOW_MAX) && crash_low_size &&
 	    reserve_crashkernel_low(crash_low_size)) {
 		memblock_phys_free(crash_base, crash_size);
+		crashk_res.start = 0;
+		crashk_res.end = 0;
 		return;
 	}
 
@@ -219,8 +280,6 @@ static void __init reserve_crashkernel(int dma_state)
 	if (crashk_low_res.end)
 		kmemleak_ignore_phys(crashk_low_res.start);
 
-	crashk_res.start = crash_base;
-	crashk_res.end = crash_base + crash_size - 1;
 	insert_resource(&iomem_resource, &crashk_res);
 }