From patchwork Tue Apr 9 07:31:41 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10890639
From: Chen Zhou
Subject: [PATCH v2 1/3] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 9 Apr 2019 15:31:41 +0800
Message-ID: <20190409073143.75808-2-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190409073143.75808-1-chenzhou10@huawei.com>
References: <20190409073143.75808-1-chenzhou10@huawei.com>
Cc: wangkefeng.wang@huawei.com, linux-mm@kvack.org, Chen Zhou,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
    takahiro.akashi@linaro.org, horms@verge.net.au,
    linux-arm-kernel@lists.infradead.org

When crashkernel is reserved above 4G in memory, the kernel should also
reserve some amount of low memory for swiotlb and DMA buffers. As is done
on x86_64, the kernel automatically tries to allocate at least 256M below
4G when the crashkernel region is above 4G. Meanwhile, support
crashkernel=X,[high,low] on arm64.

Signed-off-by: Chen Zhou
---
 arch/arm64/include/asm/kexec.h |  3 ++
 arch/arm64/kernel/setup.c      |  3 ++
 arch/arm64/mm/init.c           | 26 +++++++++++++----
 arch/x86/include/asm/kexec.h   |  3 ++
 arch/x86/kernel/setup.c        | 66 +++++------------------------------------
 include/linux/kexec.h          |  1 +
 kernel/kexec_core.c            | 53 +++++++++++++++++++++++++++++++++
 7 files changed, 91 insertions(+), 64 deletions(-)
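As a rough usage sketch (the sizes below are made-up examples, not values
taken from this patch), a machine with most of its RAM above 4G could then
boot with:

    crashkernel=2G,high crashkernel=256M,low

which lets the main crash kernel region be placed above 4G while keeping
256M below 4G for swiotlb and 32-bit DMA buffers; with crashkernel=2G,high
alone, the kernel falls back to the default low reservation of at least
256M described above.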
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN             SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
                         request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
                 /* Userspace will find "Crash kernel" region in /proc/iomem. */
+                if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+                    crashk_low_res.end <= res->end)
+                        request_resource(res, &crashk_low_res);
                 if (crashk_res.end && crashk_res.start >= res->start &&
                     crashk_res.end <= res->end)
                         request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 972bf43..3bebddf 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -64,6 +64,7 @@ EXPORT_SYMBOL(memstart_addr);
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -74,20 +75,30 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
         unsigned long long crash_base, crash_size;
+        bool high = false;
         int ret;
 
         ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
                                 &crash_size, &crash_base);
         /* no crashkernel= or invalid value specified */
-        if (ret || !crash_size)
-                return;
+        if (ret || !crash_size) {
+                /* crashkernel=X,high */
+                ret = parse_crashkernel_high(boot_command_line,
+                                memblock_phys_mem_size(),
+                                &crash_size, &crash_base);
+                if (ret || !crash_size)
+                        return;
+                high = true;
+        }
 
         crash_size = PAGE_ALIGN(crash_size);
 
         if (crash_base == 0) {
                 /* Current arm64 boot protocol requires 2MB alignment */
-                crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-                                crash_size, SZ_2M);
+                crash_base = memblock_find_in_range(0,
+                                high ? memblock_end_of_DRAM()
+                                     : ARCH_LOW_ADDRESS_LIMIT,
+                                crash_size, CRASH_ALIGN);
                 if (crash_base == 0) {
                         pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
                                 crash_size);
@@ -105,13 +116,18 @@ static void __init reserve_crashkernel(void)
                         return;
                 }
 
-                if (!IS_ALIGNED(crash_base, SZ_2M)) {
+                if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
                         pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
                         return;
                 }
         }
         memblock_reserve(crash_base, crash_size);
 
+        if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+                memblock_free(crash_base, crash_size);
+                return;
+        }
+
         pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
                 crash_base, crash_base + crash_size, crash_size >> 20);
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 003f2da..485a514 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 
 # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN             (16 << 20)
+
 #ifndef __ASSEMBLY__
 
 #include
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3773905..4182035 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -447,9 +447,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN             (16 << 20)
-
 /*
  * Keep the crash kernel below this limit. On 32 bits earlier kernels
  * would limit the kernel to the low 512 MiB due to mapping restrictions.
@@ -463,59 +460,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 # define CRASH_ADDR_HIGH_MAX    MAXMEM
 #endif
 
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-        unsigned long long base, low_base = 0, low_size = 0;
-        unsigned long total_low_mem;
-        int ret;
-
-        total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
-
-        /* crashkernel=Y,low */
-        ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
-        if (ret) {
-                /*
-                 * two parts from lib/swiotlb.c:
-                 * -swiotlb size: user-specified with swiotlb= or default.
-                 *
-                 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-                 * to 8M for other buffers that may need to stay low too. Also
-                 * make sure we allocate enough extra low memory so that we
-                 * don't run out of DMA buffers for 32-bit devices.
-                 */
-                low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-        } else {
-                /* passed with crashkernel=0,low ? */
-                if (!low_size)
-                        return 0;
-        }
-
-        low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
-        if (!low_base) {
-                pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-                       (unsigned long)(low_size >> 20));
-                return -ENOMEM;
-        }
-
-        ret = memblock_reserve(low_base, low_size);
-        if (ret) {
-                pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
-                return ret;
-        }
-
-        pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
-                (unsigned long)(low_size >> 20),
-                (unsigned long)(low_base >> 20),
-                (unsigned long)(total_low_mem >> 20));
-
-        crashk_low_res.start = low_base;
-        crashk_low_res.end = low_base + low_size - 1;
-        insert_resource(&iomem_resource, &crashk_low_res);
-#endif
-        return 0;
-}
-
 static void __init reserve_crashkernel(void)
 {
         unsigned long long crash_size, crash_base, total_mem;
@@ -573,9 +517,13 @@ static void __init reserve_crashkernel(void)
                 return;
         }
 
-        if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-                memblock_free(crash_base, crash_size);
-                return;
+        if (crash_base >= (1ULL << 32)) {
+                if (reserve_crashkernel_low()) {
+                        memblock_free(crash_base, crash_size);
+                        return;
+                }
+
+                insert_resource(&iomem_resource, &crashk_low_res);
         }
 
         pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index b9b1bc5..6140cf8 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -281,6 +281,7 @@ extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
 int kexec_crash_loaded(void);
+int __init reserve_crashkernel_low(void);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d714044..f8e8f80 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -96,6 +98,57 @@ int kexec_crash_loaded(void)
 }
 EXPORT_SYMBOL_GPL(kexec_crash_loaded);
 
+int __init reserve_crashkernel_low(void)
+{
+        unsigned long long base, low_base = 0, low_size = 0;
+        unsigned long total_low_mem;
+        int ret;
+
+        total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
+
+        /* crashkernel=Y,low */
+        ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
+        if (ret) {
+                /*
+                 * two parts from lib/swiotlb.c:
+                 * -swiotlb size: user-specified with swiotlb= or default.
+                 *
+                 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+                 * to 8M for other buffers that may need to stay low too. Also
+                 * make sure we allocate enough extra low memory so that we
+                 * don't run out of DMA buffers for 32-bit devices.
+                 */
+                low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
+        } else {
+                /* passed with crashkernel=0,low ? */
+                if (!low_size)
+                        return 0;
+        }
+
+        low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
+        if (!low_base) {
+                pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+                       (unsigned long)(low_size >> 20));
+                return -ENOMEM;
+        }
+
+        ret = memblock_reserve(low_base, low_size);
+        if (ret) {
+                pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
+                return ret;
+        }
+
+        pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
+                (unsigned long)(low_size >> 20),
+                (unsigned long)(low_base >> 20),
+                (unsigned long)(total_low_mem >> 20));
+
+        crashk_low_res.start = low_base;
+        crashk_low_res.end = low_base + low_size - 1;
+
+        return 0;
+}
+
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses. On processors
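If both reservations succeed, the two regions become child resources of the
matching System RAM ranges and show up in /proc/iomem, roughly like the
sketch below (the addresses, and the exact label used for crashk_low_res,
are illustrative assumptions, not output produced by this patch):

    0000000040000000-00000000ffffffff : System RAM
      00000000f0000000-00000000ffffffff : Crash kernel (low)
    0000002000000000-00000023ffffffff : System RAM
      0000002380000000-00000023ffffffff : Crash kernel

so that userspace such as kexec-tools can locate both ranges when loading
the crash dump kernel.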
From patchwork Tue Apr 9 07:31:42 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10890637
From: Chen Zhou
Subject: [PATCH v2 2/3] arm64: kdump: support more than one crash kernel regions
Date: Tue, 9 Apr 2019 15:31:42 +0800
Message-ID: <20190409073143.75808-3-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190409073143.75808-1-chenzhou10@huawei.com>
References: <20190409073143.75808-1-chenzhou10@huawei.com>
Cc: wangkefeng.wang@huawei.com, linux-mm@kvack.org, Chen Zhou,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
    takahiro.akashi@linaro.org, horms@verge.net.au,
    linux-arm-kernel@lists.infradead.org

After the previous patch ("arm64: kdump: support reserving crashkernel
above 4G"), there may be two crash kernel regions: one below 4G and the
other above 4G. The crash dump kernel can now read more than one crash
kernel region via a dtb property under the /chosen node:

        linux,usable-memory-range = <BASE1 SIZE1 [BASE2 SIZE2]>

Signed-off-by: Chen Zhou
---
 arch/arm64/mm/init.c     | 66 ++++++++++++++++++++++++++++++++++++++++--------
 include/linux/memblock.h |  6 +++++
 mm/memblock.c            |  7 +++---
 3 files changed, 66 insertions(+), 13 deletions(-)
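As a concrete illustration of that property with two ranges, assuming
2 address cells and 2 size cells, the /chosen node handed to the crash dump
kernel could look like the sketch below; the base/size values are made-up
examples, not taken from this patch:

    chosen {
            linux,usable-memory-range = <0x0 0xf0000000 0x0 0x10000000    /* 256M low range */
                                         0x23 0x80000000 0x0 0x80000000>; /* 2G high range  */
    };

early_init_dt_scan_usablemem() in the diff below walks these <base size>
pairs and keeps at most CRASH_MAX_USABLE_RANGES of them.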
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3bebddf..0f18665 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -65,6 +65,11 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
 
+/* at most two crash kernel regions, low_region and high_region */
+#define CRASH_MAX_USABLE_RANGES 2
+#define LOW_REGION_IDX          0
+#define HIGH_REGION_IDX         1
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -297,8 +302,8 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
                 const char *uname, int depth, void *data)
 {
         struct memblock_region *usablemem = data;
-        const __be32 *reg;
-        int len;
+        const __be32 *reg, *endp;
+        int len, nr = 0;
 
         if (depth != 1 || strcmp(uname, "chosen") != 0)
                 return 0;
 
@@ -307,22 +312,63 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
         if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
                 return 1;
 
-        usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-        usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+        endp = reg + (len / sizeof(__be32));
+        while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+                usablemem[nr].base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+                usablemem[nr].size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+                if (++nr >= CRASH_MAX_USABLE_RANGES)
+                        break;
+        }
 
         return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-        struct memblock_region reg = {
-                .size = 0,
-        };
+        int i, cnt = 0;
+        struct memblock_region regs[CRASH_MAX_USABLE_RANGES];
+
+        memset(regs, 0, sizeof(regs));
+        of_scan_flat_dt(early_init_dt_scan_usablemem, regs);
+
+        for (i = 0; i < CRASH_MAX_USABLE_RANGES; i++)
+                if (regs[i].size)
+                        cnt++;
+                else
+                        break;
+
+        if (cnt - 1 == LOW_REGION_IDX)
+                memblock_cap_memory_range(regs[LOW_REGION_IDX].base,
+                                          regs[LOW_REGION_IDX].size);
+        else if (cnt - 1 == HIGH_REGION_IDX) {
+                /*
+                 * Two crash kernel regions, cap the memory range
+                 * [regs[LOW_REGION_IDX].base, regs[HIGH_REGION_IDX].end]
+                 * and then remove the memory range in the middle.
+                 */
+                int start_rgn, end_rgn, i, ret;
+                phys_addr_t mid_base, mid_size;
+
+                mid_base = regs[LOW_REGION_IDX].base + regs[LOW_REGION_IDX].size;
+                mid_size = regs[HIGH_REGION_IDX].base - mid_base;
+                ret = memblock_isolate_range(&memblock.memory, mid_base,
+                                             mid_size, &start_rgn, &end_rgn);
 
-        of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+                if (ret)
+                        return;
 
-        if (reg.size)
-                memblock_cap_memory_range(reg.base, reg.size);
+                memblock_cap_memory_range(regs[LOW_REGION_IDX].base,
+                                          regs[HIGH_REGION_IDX].base -
+                                          regs[LOW_REGION_IDX].base +
+                                          regs[HIGH_REGION_IDX].size);
+                for (i = end_rgn - 1; i >= start_rgn; i--) {
+                        if (!memblock_is_nomap(&memblock.memory.regions[i]))
+                                memblock_remove_region(&memblock.memory, i);
+                }
+                memblock_remove_range(&memblock.reserved, mid_base,
+                                      mid_base + mid_size);
+        }
 }
 
 void __init arm64_memblock_init(void)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 294d5d8..787d252 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -110,9 +110,15 @@ void memblock_discard(void);
 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
                                    phys_addr_t size, phys_addr_t align);
+void memblock_remove_region(struct memblock_type *type, unsigned long r);
 void memblock_allow_resize(void);
 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid);
 int memblock_add(phys_addr_t base, phys_addr_t size);
+int memblock_isolate_range(struct memblock_type *type,
+                           phys_addr_t base, phys_addr_t size,
+                           int *start_rgn, int *end_rgn);
+int memblock_remove_range(struct memblock_type *type,
+                          phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_free(phys_addr_t base, phys_addr_t size);
 int memblock_reserve(phys_addr_t base, phys_addr_t size);
diff --git a/mm/memblock.c b/mm/memblock.c
index e7665cf..1846e2d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -357,7 +357,8 @@ phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
         return ret;
 }
 
-static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
+void __init_memblock memblock_remove_region(struct memblock_type *type,
+                                            unsigned long r)
 {
         type->total_size -= type->regions[r].size;
         memmove(&type->regions[r], &type->regions[r + 1],
@@ -724,7 +725,7 @@ int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
  * Return:
  * 0 on success, -errno on failure.
  */
-static int __init_memblock memblock_isolate_range(struct memblock_type *type,
+int __init_memblock memblock_isolate_range(struct memblock_type *type,
                                         phys_addr_t base, phys_addr_t size,
                                         int *start_rgn, int *end_rgn)
 {
@@ -784,7 +785,7 @@ static int __init_memblock memblock_isolate_range(struct memblock_type *type,
         return 0;
 }
 
-static int __init_memblock memblock_remove_range(struct memblock_type *type,
+int __init_memblock memblock_remove_range(struct memblock_type *type,
                                           phys_addr_t base, phys_addr_t size)
 {
         int start_rgn, end_rgn;
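To visualize what fdt_enforce_memory_region() above is meant to leave
behind when it is handed two usable ranges (purely schematic, no real
addresses):

    all memory:      |...........................................|
    usable ranges:        [ low ]                 [    high    ]
    after capping:        [ low ]  -- removed --  [    high    ]

memblock is first capped to [low.base, high.end], and the isolated stretch
between the two ranges is then dropped (nomap regions excepted), so the
crash dump kernel only ever uses the two regions reserved for it.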
From patchwork Tue Apr 9 07:31:43 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10890635
From: Chen Zhou
Subject: [PATCH v2 3/3] kdump: update Documentation about crashkernel on arm64
Date: Tue, 9 Apr 2019 15:31:43 +0800
Message-ID: <20190409073143.75808-4-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190409073143.75808-1-chenzhou10@huawei.com>
References: <20190409073143.75808-1-chenzhou10@huawei.com>
Cc: wangkefeng.wang@huawei.com, linux-mm@kvack.org, Chen Zhou,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
    takahiro.akashi@linaro.org, horms@verge.net.au,
    linux-arm-kernel@lists.infradead.org

Now that crashkernel=X,[high,low] is supported on arm64, update the
Documentation accordingly.

Signed-off-by: Chen Zhou
---
 Documentation/admin-guide/kernel-parameters.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 060482d..d5c65e1 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -715,14 +715,14 @@
                         Documentation/kdump/kdump.txt for an example.
 
         crashkernel=size[KMG],high
-                        [KNL, x86_64] range could be above 4G. Allow kernel
+                        [KNL, x86_64, arm64] range could be above 4G. Allow kernel
                         to allocate physical memory region from top, so could
                         be above 4G if system have more than 4G ram installed.
                         Otherwise memory region will be allocated below 4G, if
                         available.
                         It will be ignored if crashkernel=X is specified.
         crashkernel=size[KMG],low
-                        [KNL, x86_64] range under 4G. When crashkernel=X,high
+                        [KNL, x86_64, arm64] range under 4G. When crashkernel=X,high
                         is passed, kernel could allocate physical memory region
                         above 4G, that cause second kernel crash on system
                         that require some amount of low memory, e.g. swiotlb