From patchwork Thu May 11 08:51:38 2023
From: Chen Jiahao
Subject: [PATCH -next v5 1/2] riscv: kdump: Implement crashkernel=X,[high,low]
Date: Thu, 11 May 2023 16:51:38 +0800
Message-ID: <20230511085139.1039088-2-chenjiahao16@huawei.com>
In-Reply-To: <20230511085139.1039088-1-chenjiahao16@huawei.com>
References: <20230511085139.1039088-1-chenjiahao16@huawei.com>

On riscv, the current crash kernel allocation logic tries to allocate
within the 32-bit addressable memory region by default; if that fails,
it retries without the 4G restriction.

To save DMA zone memory when a relatively large crash kernel region is
needed, allocating the reserved memory top down in high memory, without
overlapping the DMA zone, is a mature solution. This patch introduces
the parameter option crashkernel=X,[high,low]: one can reserve the
crash kernel from high memory above the DMA zone range by explicitly
passing "crashkernel=X,high", or reserve a memory range below 4G with
"crashkernel=X,low".
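For example (the sizes below are only illustrative, they are not values
taken from this patch), booting with

    crashkernel=512M,high crashkernel=128M,low

reserves 512M above the DMA zone plus an explicit 128M low region,
while

    crashkernel=512M,high

alone falls back to a default 128M low reservation (made when the high
allocation lands above the DMA zone), so that swiotlb still has
DMA-addressable memory to work with.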
Signed-off-by: Chen Jiahao
Acked-by: Guo Ren
Reviewed-by: Zhen Lei
Reviewed-by: Simon Horman
---
 arch/riscv/kernel/setup.c |  5 +++
 arch/riscv/mm/init.c      | 73 +++++++++++++++++++++++++++++++++++----
 2 files changed, 71 insertions(+), 7 deletions(-)

diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 36b026057503..e0b7c1651d60 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -176,6 +176,11 @@ static void __init init_resources(void)
                 if (ret < 0)
                         goto error;
         }
+        if (crashk_low_res.start != crashk_low_res.end) {
+                ret = add_resource(&iomem_resource, &crashk_low_res);
+                if (ret < 0)
+                        goto error;
+        }
 #endif
 
 #ifdef CONFIG_CRASH_DUMP
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 747e5b1ef02d..910d2e8c3c77 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1272,6 +1272,28 @@ static inline void setup_vm_final(void)
 }
 #endif /* CONFIG_MMU */
 
+/* Reserve 128M low memory by default for swiotlb buffer */
+#define DEFAULT_CRASH_KERNEL_LOW_SIZE        (128UL << 20)
+
+static int __init reserve_crashkernel_low(unsigned long long low_size)
+{
+        unsigned long long low_base;
+
+        low_base = memblock_phys_alloc_range(low_size, PMD_SIZE, 0, dma32_phys_limit);
+        if (!low_base) {
+                pr_err("cannot allocate crashkernel low memory (size:0x%llx).\n", low_size);
+                return -ENOMEM;
+        }
+
+        pr_info("crashkernel low memory reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+                low_base, low_base + low_size, low_size >> 20);
+
+        crashk_low_res.start = low_base;
+        crashk_low_res.end = low_base + low_size - 1;
+
+        return 0;
+}
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -1283,8 +1305,11 @@ static void __init reserve_crashkernel(void)
 {
         unsigned long long crash_base = 0;
         unsigned long long crash_size = 0;
+        unsigned long long crash_low_size = 0;
         unsigned long search_start = memblock_start_of_DRAM();
-        unsigned long search_end = memblock_end_of_DRAM();
+        unsigned long search_end = (unsigned long)dma32_phys_limit;
+        char *cmdline = boot_command_line;
+        bool fixed_base = false;
 
         int ret = 0;
 
@@ -1300,14 +1325,34 @@ static void __init reserve_crashkernel(void)
                 return;
         }
 
-        ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
+        ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
                                 &crash_size, &crash_base);
-        if (ret || !crash_size)
+        if (ret == -ENOENT) {
+                /* Fallback to crashkernel=X,[high,low] */
+                ret = parse_crashkernel_high(cmdline, 0, &crash_size, &crash_base);
+                if (ret || !crash_size)
+                        return;
+
+                /*
+                 * crashkernel=Y,low is valid only when crashkernel=X,high
+                 * is passed.
+                 */
+                ret = parse_crashkernel_low(cmdline, 0, &crash_low_size, &crash_base);
+                if (ret == -ENOENT)
+                        crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
+                else if (ret)
+                        return;
+
+                search_end = memblock_end_of_DRAM();
+        } else if (ret || !crash_size) {
+                /* Invalid argument value specified */
                 return;
+        }
 
         crash_size = PAGE_ALIGN(crash_size);
 
         if (crash_base) {
+                fixed_base = true;
                 search_start = crash_base;
                 search_end = crash_base + crash_size;
         }
@@ -1320,17 +1365,31 @@ static void __init reserve_crashkernel(void)
          * swiotlb can work on the crash kernel.
          */
         crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
-                                               search_start,
-                                               min(search_end, (unsigned long) SZ_4G));
+                                               search_start, search_end);
         if (crash_base == 0) {
-                /* Try again without restricting region to 32bit addressible memory */
+                if (fixed_base) {
+                        pr_warn("crashkernel: allocating failed with given size@offset\n");
+                        return;
+                }
+                search_end = memblock_end_of_DRAM();
+
+                /* Try again above the region of 32bit addressible memory */
                 crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
-                                                       search_start, search_end);
+                                                       search_start, search_end);
                 if (crash_base == 0) {
                         pr_warn("crashkernel: couldn't allocate %lldKB\n",
                                 crash_size >> 10);
                         return;
                 }
+
+                if (!crash_low_size)
+                        crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
+        }
+
+        if ((crash_base > dma32_phys_limit - crash_low_size) &&
+            crash_low_size && reserve_crashkernel_low(crash_low_size)) {
+                memblock_phys_free(crash_base, crash_size);
+                return;
         }
 
         pr_info("crashkernel: reserved 0x%016llx - 0x%016llx (%lld MB)\n",