From patchwork Tue Apr 16 07:43:26 2019
X-Patchwork-Id: 10902091
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v5 1/4] x86: kdump: move reserve_crashkernel_low() into kexec_core.c
Date: Tue, 16 Apr 2019 15:43:26 +0800
Message-ID: <20190416074329.44928-2-chenzhou10@huawei.com>
In-Reply-To: <20190416074329.44928-1-chenzhou10@huawei.com>
References: <20190416074329.44928-1-chenzhou10@huawei.com>
In preparation for supporting more than one crash kernel region on arm64,
as x86_64 does, move reserve_crashkernel_low() into kernel/kexec_core.c.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/x86/include/asm/kexec.h |  3 ++
 arch/x86/kernel/setup.c      | 66 +++++---------------------------------------
 include/linux/kexec.h        |  5 ++++
 kernel/kexec_core.c          | 56 +++++++++++++++++++++++++++++++++++++
 4 files changed, 71 insertions(+), 59 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 003f2da..485a514 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN (16 << 20)
+
 #ifndef __ASSEMBLY__
 
 #include

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3773905..4182035 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -447,9 +447,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN (16 << 20)
-
 /*
  * Keep the crash kernel below this limit. On 32 bits earlier kernels
  * would limit the kernel to the low 512 MiB due to mapping restrictions.
@@ -463,59 +460,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 # define CRASH_ADDR_HIGH_MAX MAXMEM
 #endif
 
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long total_low_mem;
-	int ret;
-
-	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from lib/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 * to 8M for other buffers that may need to stay low too. Also
-		 * make sure we allocate enough extra low memory so that we
-		 * don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ? */
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-		       (unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	ret = memblock_reserve(low_base, low_size);
-	if (ret) {
-		pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
-		return ret;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(total_low_mem >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
-#endif
-	return 0;
-}
-
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_size, crash_base, total_mem;
@@ -573,9 +517,13 @@ static void __init reserve_crashkernel(void)
 		return;
 	}
 
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-		memblock_free(crash_base, crash_size);
-		return;
+	if (crash_base >= (1ULL << 32)) {
+		if (reserve_crashkernel_low()) {
+			memblock_free(crash_base, crash_size);
+			return;
+		}
+
+		insert_resource(&iomem_resource, &crashk_low_res);
 	}
 
 	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index b9b1bc5..096ad63 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -63,6 +63,10 @@
 
 #define KEXEC_CORE_NOTE_NAME CRASH_CORE_NOTE_NAME
 
+#ifndef CRASH_ALIGN
+#define CRASH_ALIGN SZ_128M
+#endif
+
 /*
  * This structure is used to hold the arguments that are used when loading
  * kernel binaries.
@@ -281,6 +285,7 @@ extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
 int kexec_crash_loaded(void);
+int __init reserve_crashkernel_low(void);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 
 extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d714044..3492abd 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -96,6 +98,60 @@ int kexec_crash_loaded(void)
 }
 EXPORT_SYMBOL_GPL(kexec_crash_loaded);
 
+int __init reserve_crashkernel_low(void)
+{
+	unsigned long long base, low_base = 0, low_size = 0;
+	unsigned long total_low_mem;
+	int ret;
+
+	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
+
+	/* crashkernel=Y,low */
+	ret = parse_crashkernel_low(boot_command_line, total_low_mem,
+			&low_size, &base);
+	if (ret) {
+		/*
+		 * two parts from lib/swiotlb.c:
+		 * -swiotlb size: user-specified with swiotlb= or default.
+		 *
+		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+		 * to 8M for other buffers that may need to stay low too. Also
+		 * make sure we allocate enough extra low memory so that we
+		 * don't run out of DMA buffers for 32-bit devices.
+		 */
+		low_size = max(swiotlb_size_or_default() + (8UL << 20),
+				256UL << 20);
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+				(unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	ret = memblock_reserve(low_base, low_size);
+	if (ret) {
+		pr_err("%s: Error reserving crashkernel low memblock.\n",
+				__func__);
+		return ret;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(total_low_mem >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end = low_base + low_size - 1;
+
+	return 0;
+}
+
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses.
 * On processors

From patchwork Tue Apr 16 07:43:27 2019
X-Patchwork-Id: 10902103
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v5 2/4] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 16 Apr 2019 15:43:27 +0800
Message-ID: <20190416074329.44928-3-chenzhou10@huawei.com>
In-Reply-To: <20190416074329.44928-1-chenzhou10@huawei.com>
References: <20190416074329.44928-1-chenzhou10@huawei.com>
When the crashkernel region is reserved above 4G, the kernel should also
reserve some amount of low memory for swiotlb and DMA buffers. As x86_64
does, the kernel will try to allocate at least 256M below 4G
automatically when crashkernel is above 4G. In addition, support
crashkernel=X,[high,low] on arm64.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/include/asm/kexec.h |  3 +++
 arch/arm64/kernel/setup.c      |  3 +++
 arch/arm64/mm/init.c           | 25 ++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
 		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 972bf43..f5dde73 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,20 +74,30 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
+	bool high = false;
 	int ret;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
 	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
+	if (ret || !crash_size) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line,
+				memblock_phys_mem_size(),
+				&crash_size, &crash_base);
+		if (ret || !crash_size)
+			return;
+		high = true;
+	}
 
 	crash_size = PAGE_ALIGN(crash_size);
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-				crash_size, SZ_2M);
+		crash_base = memblock_find_in_range(0,
+				high ? memblock_end_of_DRAM()
+				     : ARCH_LOW_ADDRESS_LIMIT,
+				crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -105,13 +115,18 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}
 	}
 	memblock_reserve(crash_base, crash_size);
 
+	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
 	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
 		crash_base, crash_base + crash_size, crash_size >> 20);

From patchwork Tue Apr 16 07:43:28 2019
X-Patchwork-Id: 10902095
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v5 3/4] memblock: extend memblock_cap_memory_range to multiple ranges
Date: Tue, 16 Apr 2019 15:43:28 +0800
Message-ID: <20190416074329.44928-4-chenzhou10@huawei.com>
In-Reply-To: <20190416074329.44928-1-chenzhou10@huawei.com>
References: <20190416074329.44928-1-chenzhou10@huawei.com>

memblock_cap_memory_range() removes all memory except the single range
passed to it. Extend this function to receive an array of
memblock_regions that should be kept. This allows switching to simple
iteration over the memblock arrays with for_each_mem_range_rev() to
remove the unneeded memory. Enable use of this function on arm64 for
reservation of multiple regions for the crash kernel.
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/mm/init.c     | 34 ++++++++++++++++++++++----------
 include/linux/memblock.h |  2 +-
 mm/memblock.c            | 44 ++++++++++++++++++--------------------------
 3 files changed, 45 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f5dde73..7f999bf 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -64,6 +64,10 @@ EXPORT_SYMBOL(memstart_addr);
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
+
+/* at most two crash kernel regions, low_region and high_region */
+#define CRASH_MAX_USABLE_RANGES 2
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
 *
@@ -295,9 +299,9 @@ early_param("mem", early_mem);
 static int __init early_init_dt_scan_usablemem(unsigned long node,
 		const char *uname, int depth, void *data)
 {
-	struct memblock_region *usablemem = data;
-	const __be32 *reg;
-	int len;
+	struct memblock_type *usablemem = data;
+	const __be32 *reg, *endp;
+	int len, nr = 0;
 
 	if (depth != 1 || strcmp(uname, "chosen") != 0)
 		return 0;
@@ -306,22 +310,32 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
 	if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
 		return 1;
 
-	usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-	usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+	endp = reg + (len / sizeof(__be32));
+	while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+		unsigned long base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+		unsigned long size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+		if (memblock_add_range(usablemem, base, size, NUMA_NO_NODE,
+				       MEMBLOCK_NONE))
+			return 0;
+		if (++nr >= CRASH_MAX_USABLE_RANGES)
+			break;
+	}
 
 	return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-	struct memblock_region reg = {
-		.size = 0,
+	struct memblock_region usable_regions[CRASH_MAX_USABLE_RANGES];
+	struct memblock_type usablemem = {
+		.max = CRASH_MAX_USABLE_RANGES,
+		.regions = usable_regions,
 	};
 
-	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+	of_scan_flat_dt(early_init_dt_scan_usablemem, &usablemem);
 
-	if (reg.size)
-		memblock_cap_memory_range(reg.base, reg.size);
+	if (usablemem.cnt)
+		memblock_cap_memory_ranges(usablemem.regions, usablemem.cnt);
 }
 
 void __init arm64_memblock_init(void)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 47e3c06..e490a73 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -445,7 +445,7 @@ phys_addr_t memblock_mem_size(unsigned long limit_pfn);
 phys_addr_t memblock_start_of_DRAM(void);
 phys_addr_t memblock_end_of_DRAM(void);
 void memblock_enforce_memory_limit(phys_addr_t memory_limit);
-void memblock_cap_memory_range(phys_addr_t base, phys_addr_t size);
+void memblock_cap_memory_ranges(struct memblock_region *regions, int count);
 void memblock_mem_limit_remove_map(phys_addr_t limit);
 bool memblock_is_memory(phys_addr_t addr);
 bool memblock_is_map_memory(phys_addr_t addr);

diff --git a/mm/memblock.c b/mm/memblock.c
index f315eca..08581b1 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1669,36 +1669,31 @@ void __init memblock_enforce_memory_limit(phys_addr_t limit)
 			      PHYS_ADDR_MAX);
 }
 
-void __init memblock_cap_memory_range(phys_addr_t base, phys_addr_t size)
-{
-	int start_rgn, end_rgn;
-	int i, ret;
-
-	if (!size)
-		return;
-
-	ret = memblock_isolate_range(&memblock.memory, base, size,
-					&start_rgn, &end_rgn);
-	if (ret)
-		return;
-
-	/* remove all the MAP regions */
-	for (i = memblock.memory.cnt - 1; i >= end_rgn; i--)
-		if (!memblock_is_nomap(&memblock.memory.regions[i]))
-			memblock_remove_region(&memblock.memory, i);
+void __init memblock_cap_memory_ranges(struct memblock_region *regions,
+				       int count)
+{
+	struct memblock_type regions_to_keep = {
+		.max = count,
+		.cnt = count,
+		.regions = regions,
+	};
+	phys_addr_t start, end;
+	u64 i;
 
-	for (i = start_rgn - 1; i >= 0; i--)
-		if (!memblock_is_nomap(&memblock.memory.regions[i]))
-			memblock_remove_region(&memblock.memory, i);
+	/* truncate memory while skipping NOMAP regions */
+	for_each_mem_range_rev(i, &memblock.memory, &regions_to_keep,
+			NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
+		memblock_remove(start, end - start);
 
 	/* truncate the reserved regions */
-	memblock_remove_range(&memblock.reserved, 0, base);
-	memblock_remove_range(&memblock.reserved,
-			base + size, PHYS_ADDR_MAX);
+	for_each_mem_range_rev(i, &memblock.reserved, &regions_to_keep,
+			NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
+		memblock_remove_range(&memblock.reserved, start, end - start);
 }
 
 void __init memblock_mem_limit_remove_map(phys_addr_t limit)
 {
+	struct memblock_region region = { 0 };
 	phys_addr_t max_addr;
 
 	if (!limit)
@@ -1710,7 +1705,8 @@ void __init memblock_mem_limit_remove_map(phys_addr_t limit)
 	if (max_addr == PHYS_ADDR_MAX)
 		return;
 
-	memblock_cap_memory_range(0, max_addr);
+	region.size = max_addr;
+	memblock_cap_memory_ranges(&region, 1);
 }
 
 static int __init_memblock memblock_search(struct memblock_type *type, phys_addr_t addr)

From patchwork Tue Apr 16 07:43:29 2019
X-Patchwork-Id: 10902099
From: Chen Zhou
Subject: [PATCH v5 4/4] kdump: update Documentation about crashkernel on arm64
Date: Tue, 16 Apr 2019 15:43:29 +0800
Message-ID: <20190416074329.44928-5-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190416074329.44928-1-chenzhou10@huawei.com>
References: <20190416074329.44928-1-chenzhou10@huawei.com>

Now that crashkernel=X,[high,low] is supported on arm64, update the Documentation accordingly.
Signed-off-by: Chen Zhou
---
 Documentation/admin-guide/kernel-parameters.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 308af3b..a055983 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -715,14 +715,14 @@
 			Documentation/kdump/kdump.txt for an example.

 	crashkernel=size[KMG],high
-			[KNL, x86_64] range could be above 4G. Allow kernel
+			[KNL, x86_64, arm64] range could be above 4G. Allow kernel
 			to allocate physical memory region from top, so could
 			be above 4G if system have more than 4G ram installed.
 			Otherwise memory region will be allocated below 4G, if
 			available. It will be ignored if crashkernel=X is
 			specified.
 	crashkernel=size[KMG],low
-			[KNL, x86_64] range under 4G. When crashkernel=X,high
+			[KNL, x86_64, arm64] range under 4G. When crashkernel=X,high
 			is passed, kernel could allocate physical memory region
 			above 4G, that cause second kernel crash on system that
 			require some amount of low memory, e.g. swiotlb
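As a worked example of the two options documented above — the sizes here are purely illustrative, not recommendations — a boot command line can combine them so the bulk of the crash kernel reservation may land above 4G while a smaller low region stays available for swiotlb/DMA users, e.g. in /etc/default/grub:

```
# Illustrative values only: reserve 512M for the crash kernel (allowed
# above 4G) plus 256M below 4G for devices that need low memory.
GRUB_CMDLINE_LINUX="crashkernel=512M,high crashkernel=256M,low"
```

Note the precedence stated in the documentation: both of these are ignored if a plain crashkernel=X is also specified.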