From patchwork Tue Apr 16 11:35:16 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10902759
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [RESEND PATCH v5 1/4] x86: kdump: move reserve_crashkernel_low() into kexec_core.c
Date: Tue, 16 Apr 2019 19:35:16 +0800
Message-ID: <20190416113519.90507-2-chenzhou10@huawei.com>
In-Reply-To: <20190416113519.90507-1-chenzhou10@huawei.com>
References: <20190416113519.90507-1-chenzhou10@huawei.com>
In preparation for supporting more than one crash kernel region on arm64, as x86_64 does, move reserve_crashkernel_low() into kernel/kexec_core.c.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/x86/include/asm/kexec.h |  3 ++
 arch/x86/kernel/setup.c      | 66 +++++---------------------------------
 include/linux/kexec.h        |  5 ++++
 kernel/kexec_core.c          | 56 +++++++++++++++++++++++++++++++++++++
 4 files changed, 71 insertions(+), 59 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 003f2da..485a514 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN (16 << 20)
+
 #ifndef __ASSEMBLY__
 
 #include
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3773905..4182035 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -447,9 +447,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN (16 << 20)
-
 /*
  * Keep the crash kernel below this limit. On 32 bits earlier kernels
  * would limit the kernel to the low 512 MiB due to mapping restrictions.
  */
@@ -463,59 +460,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 # define CRASH_ADDR_HIGH_MAX MAXMEM
 #endif
 
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long total_low_mem;
-	int ret;
-
-	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from lib/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 * to 8M for other buffers that may need to stay low too. Also
-		 * make sure we allocate enough extra low memory so that we
-		 * don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ? */
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-		       (unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	ret = memblock_reserve(low_base, low_size);
-	if (ret) {
-		pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
-		return ret;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(total_low_mem >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
-#endif
-	return 0;
-}
-
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_size, crash_base, total_mem;
@@ -573,9 +517,13 @@ static void __init reserve_crashkernel(void)
 		return;
 	}
 
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-		memblock_free(crash_base, crash_size);
-		return;
+	if (crash_base >= (1ULL << 32)) {
+		if (reserve_crashkernel_low()) {
+			memblock_free(crash_base, crash_size);
+			return;
+		}
+
+		insert_resource(&iomem_resource, &crashk_low_res);
 	}
 
 	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index b9b1bc5..096ad63 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -63,6 +63,10 @@
 
 #define KEXEC_CORE_NOTE_NAME CRASH_CORE_NOTE_NAME
 
+#ifndef CRASH_ALIGN
+#define CRASH_ALIGN SZ_128M
+#endif
+
 /*
  * This structure is used to hold the arguments that are used when loading
  * kernel binaries.
@@ -281,6 +285,7 @@ extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
 int kexec_crash_loaded(void);
+int __init reserve_crashkernel_low(void);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d714044..3492abd 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -96,6 +98,60 @@ int kexec_crash_loaded(void)
 }
 EXPORT_SYMBOL_GPL(kexec_crash_loaded);
 
+int __init reserve_crashkernel_low(void)
+{
+	unsigned long long base, low_base = 0, low_size = 0;
+	unsigned long total_low_mem;
+	int ret;
+
+	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
+
+	/* crashkernel=Y,low */
+	ret = parse_crashkernel_low(boot_command_line, total_low_mem,
+			&low_size, &base);
+	if (ret) {
+		/*
+		 * two parts from lib/swiotlb.c:
+		 * -swiotlb size: user-specified with swiotlb= or default.
+		 *
+		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+		 * to 8M for other buffers that may need to stay low too. Also
+		 * make sure we allocate enough extra low memory so that we
+		 * don't run out of DMA buffers for 32-bit devices.
+		 */
+		low_size = max(swiotlb_size_or_default() + (8UL << 20),
+				256UL << 20);
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+			(unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	ret = memblock_reserve(low_base, low_size);
+	if (ret) {
+		pr_err("%s: Error reserving crashkernel low memblock.\n",
+			__func__);
+		return ret;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(total_low_mem >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end = low_base + low_size - 1;
+
+	return 0;
+}
+
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses. On processors

From patchwork Tue Apr 16 11:35:17 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10902763
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [RESEND PATCH v5 2/4] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 16 Apr 2019 19:35:17 +0800
Message-ID: <20190416113519.90507-3-chenzhou10@huawei.com>
In-Reply-To: <20190416113519.90507-1-chenzhou10@huawei.com>
References: <20190416113519.90507-1-chenzhou10@huawei.com>

When the crashkernel is reserved above 4G in memory, the kernel should also reserve some amount of low memory for swiotlb and DMA buffers. If the crashkernel is above 4G, the kernel tries to allocate at least 256M below 4G automatically, as x86_64 does. Meanwhile, support crashkernel=X,[high,low] on arm64.
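The sizing rule the series inherits from x86_64 (patch 1/4) can be illustrated outside the kernel. The following is a minimal Python sketch, not kernel code: the function name and the `swiotlb_default` parameter are illustrative, with 64M standing in for swiotlb_size_or_default() when no swiotlb= override is given.

```python
# Sketch of the low-memory sizing rule in reserve_crashkernel_low().
# All values are in bytes.

MB = 1 << 20

def crashkernel_low_size(requested=None, swiotlb_default=64 * MB):
    """Return the size of the low (below 4G) crashkernel reservation.

    `requested` models the crashkernel=Y,low value; None means the
    option was absent, so the kernel falls back to a swiotlb-based
    default. A requested size of 0 (crashkernel=0,low) disables the
    low reservation entirely.
    """
    if requested is None:
        # swiotlb size plus 8M headroom for other buffers that must
        # stay low, floored at 256M so 32-bit DMA does not starve.
        return max(swiotlb_default + 8 * MB, 256 * MB)
    return requested

print(crashkernel_low_size() // MB)          # default: 256
print(crashkernel_low_size(512 * MB) // MB)  # explicit crashkernel=512M,low
```

With the default 64M swiotlb, 64M + 8M is below the 256M floor, so the floor wins; only a swiotlb larger than 248M pushes the reservation past 256M.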
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/include/asm/kexec.h |  3 +++
 arch/arm64/kernel/setup.c      |  3 +++
 arch/arm64/mm/init.c           | 25 ++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
 		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 972bf43..f5dde73 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,20 +74,30 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
+	bool high = false;
 	int ret;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
 	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
+	if (ret || !crash_size) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line,
+				memblock_phys_mem_size(),
+				&crash_size, &crash_base);
+		if (ret || !crash_size)
+			return;
+		high = true;
+	}
 
 	crash_size = PAGE_ALIGN(crash_size);
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-				crash_size, SZ_2M);
+		crash_base = memblock_find_in_range(0,
+				high ? memblock_end_of_DRAM()
+				     : ARCH_LOW_ADDRESS_LIMIT,
+				crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -105,13 +115,18 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}
 	}
 	memblock_reserve(crash_base, crash_size);
 
+	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
 	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
 		crash_base, crash_base + crash_size, crash_size >> 20);

From patchwork Tue Apr 16 11:35:18 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10902765
From: Chen Zhou <chenzhou10@huawei.com>
Cc: Mike Rapoport, Chen Zhou
Subject: [RESEND PATCH v5 3/4] memblock: extend memblock_cap_memory_range to multiple ranges
Date: Tue, 16 Apr 2019 19:35:18 +0800
Message-ID: <20190416113519.90507-4-chenzhou10@huawei.com>
In-Reply-To: <20190416113519.90507-1-chenzhou10@huawei.com>
References: <20190416113519.90507-1-chenzhou10@huawei.com>

From: Mike Rapoport

memblock_cap_memory_range() removes all memory except the single range passed to it. Extend this function to receive an array of memblock_regions that should be kept, which allows switching to simple iteration over the memblock arrays with for_each_mem_range_rev to remove the unneeded memory.

Enable use of this function on arm64 for reservation of multiple regions for the crash kernel.
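The capping semantics can be shown outside the kernel. Below is a minimal Python sketch of the idea, not the memblock API: the function name and the half-open (start, end) interval representation are illustrative, and NOMAP handling is omitted.

```python
# Sketch of memblock_cap_memory_ranges() semantics (patch 3/4):
# given the current memory ranges and an array of ranges to keep,
# drop everything outside the keep set. Ranges are half-open
# (start, end) intervals in bytes.

def cap_memory_ranges(memory, keep):
    """Return the portions of `memory` that overlap any range in `keep`."""
    result = []
    for start, end in memory:
        for k_start, k_end in sorted(keep):
            # Intersect the memory range with each keep range.
            lo, hi = max(start, k_start), min(end, k_end)
            if lo < hi:
                result.append((lo, hi))
    return result

# Two crash kernel regions to keep: one low (below 4G), one high.
memory = [(0x0, 0x8000_0000), (0x1_0000_0000, 0x2_0000_0000)]
keep = [(0x1000_0000, 0x2000_0000), (0x1_8000_0000, 0x1_c000_0000)]
print([(hex(s), hex(e)) for s, e in cap_memory_ranges(memory, keep)])
```

Passing a single keep range reproduces the old memblock_cap_memory_range(base, size) behavior, which is exactly how the patch reimplements memblock_mem_limit_remove_map().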
Signed-off-by: Mike Rapoport
Signed-off-by: Chen Zhou
---
 arch/arm64/mm/init.c     | 34 ++++++++++++++++++++++++----------
 include/linux/memblock.h |  2 +-
 mm/memblock.c            | 44 ++++++++++++++++++++------------------------
 3 files changed, 45 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f5dde73..7f999bf 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -64,6 +64,10 @@ EXPORT_SYMBOL(memstart_addr);
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
+
+/* at most two crash kernel regions, low_region and high_region */
+#define CRASH_MAX_USABLE_RANGES	2
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -295,9 +299,9 @@ early_param("mem", early_mem);
 static int __init early_init_dt_scan_usablemem(unsigned long node,
 		const char *uname, int depth, void *data)
 {
-	struct memblock_region *usablemem = data;
-	const __be32 *reg;
-	int len;
+	struct memblock_type *usablemem = data;
+	const __be32 *reg, *endp;
+	int len, nr = 0;
 
 	if (depth != 1 || strcmp(uname, "chosen") != 0)
 		return 0;
@@ -306,22 +310,32 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
 	if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
 		return 1;
 
-	usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-	usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+	endp = reg + (len / sizeof(__be32));
+	while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+		unsigned long base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+		unsigned long size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+		if (memblock_add_range(usablemem, base, size, NUMA_NO_NODE,
+				       MEMBLOCK_NONE))
+			return 0;
+		if (++nr >= CRASH_MAX_USABLE_RANGES)
+			break;
+	}
 
 	return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-	struct memblock_region reg = {
-		.size = 0,
+	struct memblock_region usable_regions[CRASH_MAX_USABLE_RANGES];
+	struct memblock_type usablemem = {
+		.max = CRASH_MAX_USABLE_RANGES,
+		.regions = usable_regions,
 	};
 
-	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+	of_scan_flat_dt(early_init_dt_scan_usablemem, &usablemem);
 
-	if (reg.size)
-		memblock_cap_memory_range(reg.base, reg.size);
+	if (usablemem.cnt)
+		memblock_cap_memory_ranges(usablemem.regions, usablemem.cnt);
 }
 
 void __init arm64_memblock_init(void)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 47e3c06..e490a73 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -445,7 +445,7 @@ phys_addr_t memblock_mem_size(unsigned long limit_pfn);
 phys_addr_t memblock_start_of_DRAM(void);
 phys_addr_t memblock_end_of_DRAM(void);
 void memblock_enforce_memory_limit(phys_addr_t memory_limit);
-void memblock_cap_memory_range(phys_addr_t base, phys_addr_t size);
+void memblock_cap_memory_ranges(struct memblock_region *regions, int count);
 void memblock_mem_limit_remove_map(phys_addr_t limit);
 bool memblock_is_memory(phys_addr_t addr);
 bool memblock_is_map_memory(phys_addr_t addr);
diff --git a/mm/memblock.c b/mm/memblock.c
index f315eca..08581b1 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1669,36 +1669,31 @@ void __init memblock_enforce_memory_limit(phys_addr_t limit)
 			      PHYS_ADDR_MAX);
 }
 
-void __init memblock_cap_memory_range(phys_addr_t base, phys_addr_t size)
-{
-	int start_rgn, end_rgn;
-	int i, ret;
-
-	if (!size)
-		return;
-
-	ret = memblock_isolate_range(&memblock.memory, base, size,
-				     &start_rgn, &end_rgn);
-	if (ret)
-		return;
-
-	/* remove all the MAP regions */
-	for (i = memblock.memory.cnt - 1; i >= end_rgn; i--)
-		if (!memblock_is_nomap(&memblock.memory.regions[i]))
-			memblock_remove_region(&memblock.memory, i);
+void __init memblock_cap_memory_ranges(struct memblock_region *regions,
+				       int count)
+{
+	struct memblock_type regions_to_keep = {
+		.max = count,
+		.cnt = count,
+		.regions = regions,
+	};
+	phys_addr_t start, end;
+	u64 i;
 
-	for (i = start_rgn - 1; i >= 0; i--)
-		if (!memblock_is_nomap(&memblock.memory.regions[i]))
-			memblock_remove_region(&memblock.memory, i);
+	/* truncate memory while skipping NOMAP regions */
+	for_each_mem_range_rev(i, &memblock.memory, &regions_to_keep,
+			       NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
+		memblock_remove(start, end - start);
 
 	/* truncate the reserved regions */
-	memblock_remove_range(&memblock.reserved, 0, base);
-	memblock_remove_range(&memblock.reserved,
-			base + size, PHYS_ADDR_MAX);
+	for_each_mem_range_rev(i, &memblock.reserved, &regions_to_keep,
+			       NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
+		memblock_remove_range(&memblock.reserved, start, end - start);
 }
 
 void __init memblock_mem_limit_remove_map(phys_addr_t limit)
 {
+	struct memblock_region region = { 0 };
 	phys_addr_t max_addr;
 
 	if (!limit)
@@ -1710,7 +1705,8 @@ void __init memblock_mem_limit_remove_map(phys_addr_t limit)
 	if (max_addr == PHYS_ADDR_MAX)
 		return;
 
-	memblock_cap_memory_range(0, max_addr);
+	region.size = max_addr;
+	memblock_cap_memory_ranges(&region, 1);
 }
 
 static int __init_memblock memblock_search(struct memblock_type *type, phys_addr_t addr)
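For readers unfamiliar with flattened-device-tree properties, the scanning loop in early_init_dt_scan_usablemem above walks "linux,usable-memory-range" as a flat array of 32-bit cells: dt_root_addr_cells cells of base followed by dt_root_size_cells cells of size, repeated per region and capped at CRASH_MAX_USABLE_RANGES. A minimal userspace model of that walk (next_cell and parse_usablemem are illustrative stand-ins, not kernel functions, and the cells here are assumed to be already converted from big-endian, which the kernel does inside dt_mem_next_cell):

```c
#include <stdint.h>

#define CRASH_MAX_USABLE_RANGES 2   /* same limit as the patch */

struct range { uint64_t base, size; };

/* Fold 'cells' consecutive 32-bit cells into one value (cells are
 * assumed host-endian here; the kernel converts from big-endian). */
static uint64_t next_cell(int cells, const uint32_t **p)
{
	uint64_t v = 0;

	while (cells--)
		v = (v << 32) | *(*p)++;
	return v;
}

/* Walk (base, size) pairs until the property runs out of cells or the
 * region limit is hit; returns the number of regions parsed. */
static int parse_usablemem(const uint32_t *reg, int len_cells,
			   int addr_cells, int size_cells,
			   struct range *out)
{
	const uint32_t *endp = reg + len_cells;
	int nr = 0;

	while (endp - reg >= addr_cells + size_cells &&
	       nr < CRASH_MAX_USABLE_RANGES) {
		out[nr].base = next_cell(addr_cells, &reg);
		out[nr].size = next_cell(size_cells, &reg);
		nr++;
	}
	return nr;
}
```

With #address-cells = #size-cells = 2, a property describing one region below 4G and one above parses into exactly two ranges; any further pairs are ignored, mirroring the patch's CRASH_MAX_USABLE_RANGES cap.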
From patchwork Tue Apr 16 11:35:19 2019
From: Chen Zhou
Subject: [RESEND PATCH v5 4/4] kdump: update Documentation about crashkernel on arm64
Date: Tue, 16 Apr 2019 19:35:19 +0800
Message-ID: <20190416113519.90507-5-chenzhou10@huawei.com>
In-Reply-To: <20190416113519.90507-1-chenzhou10@huawei.com>
References: <20190416113519.90507-1-chenzhou10@huawei.com>

Now that crashkernel=X,[high,low] is supported on arm64, update the
Documentation accordingly.
Signed-off-by: Chen Zhou
---
 Documentation/admin-guide/kernel-parameters.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 308af3b..a055983 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -715,14 +715,14 @@
 			Documentation/kdump/kdump.txt for an example.
 
 	crashkernel=size[KMG],high
-			[KNL, x86_64] range could be above 4G. Allow kernel
+			[KNL, x86_64, arm64] range could be above 4G. Allow kernel
 			to allocate physical memory region from top, so could
 			be above 4G if system have more than 4G ram installed.
 			Otherwise memory region will be allocated below 4G, if
 			available. It will be ignored if crashkernel=X is
 			specified.
 	crashkernel=size[KMG],low
-			[KNL, x86_64] range under 4G. When crashkernel=X,high
+			[KNL, x86_64, arm64] range under 4G. When crashkernel=X,high
 			is passed, kernel could allocate physical memory region
 			above 4G, that cause second kernel crash on system that
 			require some amount of low memory, e.g. swiotlb
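As a rough illustration of how a crashkernel=size[KMG][,high|,low] string from the documentation above decomposes into a byte count plus a placement hint, here is a simplified userspace sketch (this is not the kernel's actual memparse/parse_crashkernel code; parse_crashkernel_arg and the ck_place enum are made-up names, and offsets like crashkernel=X@Y are deliberately not handled):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

enum ck_place { CK_DEFAULT, CK_HIGH, CK_LOW };

/* Parse "<size>[K|M|G][,high|,low]" into bytes and a placement hint. */
static uint64_t parse_crashkernel_arg(const char *arg, enum ck_place *place)
{
	char *end;
	uint64_t size = strtoull(arg, &end, 10);

	switch (*end) {              /* optional K/M/G multiplier */
	case 'G': case 'g': size <<= 10; /* fall through */
	case 'M': case 'm': size <<= 10; /* fall through */
	case 'K': case 'k': size <<= 10; end++;
	}

	*place = CK_DEFAULT;         /* plain crashkernel=X: below 4G */
	if (!strcmp(end, ",high"))
		*place = CK_HIGH;    /* region may be allocated above 4G */
	else if (!strcmp(end, ",low"))
		*place = CK_LOW;     /* companion low region for swiotlb etc. */
	return size;
}
```

For example, "512M,high" yields 512 MiB with a high placement hint, the case where the kernel may also need a crashkernel=Y,low companion region for devices that require memory under 4G.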