From patchwork Tue May 7 03:50:55 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10932191
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH 1/4] x86: kdump: move reserve_crashkernel_low() into kexec_core.c
Date: Tue, 7 May 2019 11:50:55 +0800
Message-ID: <20190507035058.63992-2-chenzhou10@huawei.com>
In-Reply-To: <20190507035058.63992-1-chenzhou10@huawei.com>
References: <20190507035058.63992-1-chenzhou10@huawei.com>

In preparation for supporting crashkernel reservation above 4G on arm64, as
x86_64 already does, move reserve_crashkernel_low() from arch/x86/kernel/setup.c
into kernel/kexec_core.c.
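With the helper in common code, each architecture's reserve_crashkernel() is
expected to follow the same pattern as the x86 hunk below: reserve the low
region only when the main region ends up above 4G, and publish crashk_low_res
itself. A condensed sketch of that caller pattern (identifiers are taken from
the x86 change below; error handling is abbreviated, not a drop-in replacement):

	/* after the main crashkernel region has been reserved ... */
	if (crash_base >= (1ULL << 32)) {
		/* carve out extra low memory for swiotlb and DMA buffers */
		if (reserve_crashkernel_low()) {
			/* low reservation failed: give the main region back */
			memblock_free(crash_base, crash_size);
			return;
		}

		/* the arch, not the generic helper, registers the resource */
		insert_resource(&iomem_resource, &crashk_low_res);
	}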
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/x86/include/asm/kexec.h |  3 ++
 arch/x86/kernel/setup.c      | 66 +++++---------------------------------------
 include/linux/kexec.h        |  5 ++++
 kernel/kexec_core.c          | 56 +++++++++++++++++++++++++++++++++++++
 4 files changed, 71 insertions(+), 59 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 003f2da..c51f293 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN SZ_16M
+
 #ifndef __ASSEMBLY__
 
 #include

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 905dae8..9ee33b6 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -448,9 +448,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN SZ_16M
-
 /*
  * Keep the crash kernel below this limit. On 32 bits earlier kernels
  * would limit the kernel to the low 512 MiB due to mapping restrictions.
@@ -463,59 +460,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 # define CRASH_ADDR_HIGH_MAX MAXMEM
 #endif
 
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long total_low_mem;
-	int ret;
-
-	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from lib/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 *  to 8M for other buffers that may need to stay low too. Also
-		 *  make sure we allocate enough extra low memory so that we
-		 *  don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ? */
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-		       (unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	ret = memblock_reserve(low_base, low_size);
-	if (ret) {
-		pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
-		return ret;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(total_low_mem >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
-#endif
-	return 0;
-}
-
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_size, crash_base, total_mem;
@@ -579,9 +523,13 @@ static void __init reserve_crashkernel(void)
 		return;
 	}
 
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-		memblock_free(crash_base, crash_size);
-		return;
+	if (crash_base >= (1ULL << 32)) {
+		if (reserve_crashkernel_low()) {
+			memblock_free(crash_base, crash_size);
+			return;
+		}
+
+		insert_resource(&iomem_resource, &crashk_low_res);
 	}
 
 	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index b9b1bc5..096ad63 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -63,6 +63,10 @@
 
 #define KEXEC_CORE_NOTE_NAME CRASH_CORE_NOTE_NAME
 
+#ifndef CRASH_ALIGN
+#define CRASH_ALIGN SZ_128M
+#endif
+
 /*
  * This structure is used to hold the arguments that are used when loading
  * kernel binaries.
@@ -281,6 +285,7 @@ extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
 int kexec_crash_loaded(void);
+int __init reserve_crashkernel_low(void);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d714044..3492abd 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -96,6 +98,60 @@ int kexec_crash_loaded(void)
 }
 EXPORT_SYMBOL_GPL(kexec_crash_loaded);
 
+int __init reserve_crashkernel_low(void)
+{
+	unsigned long long base, low_base = 0, low_size = 0;
+	unsigned long total_low_mem;
+	int ret;
+
+	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
+
+	/* crashkernel=Y,low */
+	ret = parse_crashkernel_low(boot_command_line, total_low_mem,
+			&low_size, &base);
+	if (ret) {
+		/*
+		 * two parts from lib/swiotlb.c:
+		 * -swiotlb size: user-specified with swiotlb= or default.
+		 *
+		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+		 *  to 8M for other buffers that may need to stay low too. Also
+		 *  make sure we allocate enough extra low memory so that we
+		 *  don't run out of DMA buffers for 32-bit devices.
+		 */
+		low_size = max(swiotlb_size_or_default() + (8UL << 20),
+				256UL << 20);
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+				(unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	ret = memblock_reserve(low_base, low_size);
+	if (ret) {
+		pr_err("%s: Error reserving crashkernel low memblock.\n",
+				__func__);
+		return ret;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(total_low_mem >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end = low_base + low_size - 1;
+
+	return 0;
+}
+
 /*
  * When kexec transitions to the new kernel there is a one-to-one
  * mapping between physical and virtual addresses. On processors

From patchwork Tue May 7 03:50:56 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10932189
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH 2/4] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 7 May 2019 11:50:56 +0800
Message-ID: <20190507035058.63992-3-chenzhou10@huawei.com>
In-Reply-To: <20190507035058.63992-1-chenzhou10@huawei.com>
References: <20190507035058.63992-1-chenzhou10@huawei.com>

When the crashkernel region is reserved above 4G, the kernel should also
reserve some amount of low memory for swiotlb and DMA buffers.

In addition, support crashkernel=X,[high,low] on arm64. When the plain
crashkernel=X parameter is used, try low memory first and fall back to high
memory, unless "crashkernel=X,high" is specified.
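For example (sizes purely illustrative), the following boot command lines
exercise the two paths described above on a machine with memory above 4G;
when ",low" is omitted together with ",high", a default low size is computed
by reserve_crashkernel_low():

	crashkernel=512M
		- try to reserve 512M below 4G first, fall back above 4G

	crashkernel=2G,high crashkernel=256M,low
		- reserve 2G from the top of memory (typically above 4G),
		  plus an explicit 256M low region for swiotlb/DMA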
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/include/asm/kexec.h |  3 +++
 arch/arm64/kernel/setup.c      |  3 +++
 arch/arm64/mm/init.c           | 34 ++++++++++++++++++++++++++++------
 3 files changed, 34 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
 		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2adffb..3fcd739 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,20 +74,37 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
+	bool high = false;
 	int ret;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
 	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
+	if (ret || !crash_size) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line,
+				memblock_phys_mem_size(),
+				&crash_size, &crash_base);
+		if (ret || !crash_size)
+			return;
+		high = true;
+	}
 
 	crash_size = PAGE_ALIGN(crash_size);
 
 	if (crash_base == 0) {
-		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-				crash_size, SZ_2M);
+		/*
+		 * Try low memory first and fall back to high memory
+		 * unless "crashkernel=size[KMG],high" is specified.
+		 */
+		if (!high)
+			crash_base = memblock_find_in_range(0,
+					ARCH_LOW_ADDRESS_LIMIT,
+					crash_size, CRASH_ALIGN);
+		if (!crash_base)
+			crash_base = memblock_find_in_range(0,
+					memblock_end_of_DRAM(),
+					crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -105,13 +122,18 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}
 	}
 	memblock_reserve(crash_base, crash_size);
 
+	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
 	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
 		crash_base, crash_base + crash_size, crash_size >> 20);

From patchwork Tue May 7 03:50:57 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10932195
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH 3/4] memblock: extend memblock_cap_memory_range to multiple ranges
Date: Tue, 7 May 2019 11:50:57 +0800
Message-ID: <20190507035058.63992-4-chenzhou10@huawei.com>
In-Reply-To: <20190507035058.63992-1-chenzhou10@huawei.com>
References: <20190507035058.63992-1-chenzhou10@huawei.com>

From: Mike Rapoport

memblock_cap_memory_range() removes all memory except the single range passed
to it. Extend this function to receive an array of memblock_regions that
should be kept, which allows switching to simple iteration over the memblock
arrays with for_each_mem_range_rev() to remove the unneeded memory.

Enable use of this function on arm64 to reserve multiple regions for the
crash kernel.
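With the extended interface, a caller describes the ranges to keep in a small
array of struct memblock_region and hands it over in one call. A condensed
sketch (the two regions and the low_base/high_base names are illustrative,
not taken from the patch):

	/* keep only two ranges, e.g. a low and a high crash kernel region */
	struct memblock_region keep[2] = {
		{ .base = low_base,  .size = low_size  },
		{ .base = high_base, .size = high_size },
	};

	/* everything outside these regions (except NOMAP) is removed */
	memblock_cap_memory_ranges(keep, 2);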
Signed-off-by: Mike Rapoport
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/mm/init.c     | 38 ++++++++++++++++++++++++++++----------
 include/linux/memblock.h |  2 +-
 mm/memblock.c            | 44 ++++++++++++++++++++------------------------
 3 files changed, 49 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3fcd739..2d8f302 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -63,6 +63,13 @@ EXPORT_SYMBOL(memstart_addr);
 
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
+/* The main usage of linux,usable-memory-range is for crash dump kernel.
+ * Originally, the number of usable-memory regions is one. Now crash dump
+ * kernel support at most two crash kernel regions, low_region and high
+ * region.
+ */
+#define MAX_USABLE_RANGES 2
+
 #ifdef CONFIG_KEXEC_CORE
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
@@ -302,9 +309,9 @@ early_param("mem", early_mem);
 static int __init early_init_dt_scan_usablemem(unsigned long node,
 		const char *uname, int depth, void *data)
 {
-	struct memblock_region *usablemem = data;
-	const __be32 *reg;
-	int len;
+	struct memblock_type *usablemem = data;
+	const __be32 *reg, *endp;
+	int len, nr = 0;
 
 	if (depth != 1 || strcmp(uname, "chosen") != 0)
 		return 0;
@@ -313,22 +320,33 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
 	if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
 		return 1;
 
-	usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-	usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+	endp = reg + (len / sizeof(__be32));
+	while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+		unsigned long base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+		unsigned long size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+		if (memblock_add_range(usablemem, base, size, NUMA_NO_NODE,
+				       MEMBLOCK_NONE))
+			return 0;
+		if (++nr >= MAX_USABLE_RANGES)
+			break;
+	}
 
 	return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-	struct memblock_region reg = {
-		.size = 0,
+	struct memblock_region usable_regions[MAX_USABLE_RANGES];
+	struct memblock_type usablemem = {
+		.max = MAX_USABLE_RANGES,
+		.regions = usable_regions,
 	};
 
-	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+	of_scan_flat_dt(early_init_dt_scan_usablemem, &usablemem);
 
-	if (reg.size)
-		memblock_cap_memory_range(reg.base, reg.size);
+	if (usablemem.cnt)
+		memblock_cap_memory_ranges(usablemem.regions, usablemem.cnt);
 }
 
 void __init arm64_memblock_init(void)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 676d390..526e279 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -446,7 +446,7 @@ phys_addr_t memblock_mem_size(unsigned long limit_pfn);
 phys_addr_t memblock_start_of_DRAM(void);
 phys_addr_t memblock_end_of_DRAM(void);
 void memblock_enforce_memory_limit(phys_addr_t memory_limit);
-void memblock_cap_memory_range(phys_addr_t base, phys_addr_t size);
+void memblock_cap_memory_ranges(struct memblock_region *regions, int count);
 void memblock_mem_limit_remove_map(phys_addr_t limit);
 bool memblock_is_memory(phys_addr_t addr);
 bool memblock_is_map_memory(phys_addr_t addr);

diff --git a/mm/memblock.c b/mm/memblock.c
index 6bbad46..ecdf8a9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1669,36 +1669,31 @@ void __init memblock_enforce_memory_limit(phys_addr_t limit)
 			      PHYS_ADDR_MAX);
 }
 
-void __init memblock_cap_memory_range(phys_addr_t base, phys_addr_t size)
-{
-	int start_rgn, end_rgn;
-	int i, ret;
-
-	if (!size)
-		return;
-
-	ret = memblock_isolate_range(&memblock.memory, base, size,
-			&start_rgn, &end_rgn);
-	if (ret)
-		return;
-
-	/* remove all the MAP regions */
-	for (i = memblock.memory.cnt - 1; i >= end_rgn; i--)
-		if (!memblock_is_nomap(&memblock.memory.regions[i]))
-			memblock_remove_region(&memblock.memory, i);
+void __init memblock_cap_memory_ranges(struct memblock_region *regions,
+				       int count)
+{
+	struct memblock_type regions_to_keep = {
+		.max = count,
+		.cnt = count,
+		.regions = regions,
+	};
+	phys_addr_t start, end;
+	u64 i;
 
-	for (i = start_rgn - 1; i >= 0; i--)
-		if (!memblock_is_nomap(&memblock.memory.regions[i]))
-			memblock_remove_region(&memblock.memory, i);
+	/* truncate memory while skipping NOMAP regions */
+	for_each_mem_range_rev(i, &memblock.memory, &regions_to_keep,
+			       NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
+		memblock_remove(start, end - start);
 
 	/* truncate the reserved regions */
-	memblock_remove_range(&memblock.reserved, 0, base);
-	memblock_remove_range(&memblock.reserved,
-			base + size, PHYS_ADDR_MAX);
+	for_each_mem_range_rev(i, &memblock.reserved, &regions_to_keep,
+			       NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL)
+		memblock_remove_range(&memblock.reserved, start, end - start);
 }
 
 void __init memblock_mem_limit_remove_map(phys_addr_t limit)
 {
+	struct memblock_region region = { 0 };
 	phys_addr_t max_addr;
 
 	if (!limit)
@@ -1710,7 +1705,8 @@ void __init memblock_mem_limit_remove_map(phys_addr_t limit)
 	if (max_addr == PHYS_ADDR_MAX)
 		return;
 
-	memblock_cap_memory_range(0, max_addr);
+	region.size = max_addr;
+	memblock_cap_memory_ranges(&region, 1);
 }
 
 static int __init_memblock memblock_search(struct memblock_type *type, phys_addr_t addr)
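On the capture kernel side, the usable ranges are passed via the
linux,usable-memory-range property under /chosen; with this change the
property may carry up to two base/size pairs (a low and a high region). A
hypothetical device-tree fragment, with purely illustrative addresses and
2-cell address/size values:

	chosen {
		/* 256M low region below 4G, 2G high region above 4G */
		linux,usable-memory-range = <0x0 0x60000000 0x0 0x10000000>,
					    <0x20 0x00000000 0x0 0x80000000>;
	};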

From patchwork Tue May 7 03:50:58 2019
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 10932197
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH 4/4] kdump: update Documentation about crashkernel on arm64
Date: Tue, 7 May 2019 11:50:58 +0800
Message-ID: <20190507035058.63992-5-chenzhou10@huawei.com>
In-Reply-To: <20190507035058.63992-1-chenzhou10@huawei.com>
References: <20190507035058.63992-1-chenzhou10@huawei.com>

Now that crashkernel=X,[high,low] is supported on arm64, update the
documentation accordingly.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 268b10a..03a08aa 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -705,7 +705,7 @@
 			memory region [offset, offset + size] for that kernel
 			image. If '@offset' is omitted, then a suitable offset
 			is selected automatically.
-			[KNL, x86_64] select a region under 4G first, and
+			[KNL, x86_64, arm64] select a region under 4G first, and
 			fall back to reserve region above 4G when '@offset'
 			hasn't been specified.
 			See Documentation/kdump/kdump.txt for further details.
@@ -718,14 +718,14 @@
 			Documentation/kdump/kdump.txt for an example.
 
 	crashkernel=size[KMG],high
-			[KNL, x86_64] range could be above 4G. Allow kernel
+			[KNL, x86_64, arm64] range could be above 4G. Allow kernel
 			to allocate physical memory region from top, so could
 			be above 4G if system have more than 4G ram installed.
 			Otherwise memory region will be allocated below 4G, if
 			available.
 			It will be ignored if crashkernel=X is specified.
 	crashkernel=size[KMG],low
-			[KNL, x86_64] range under 4G. When crashkernel=X,high
+			[KNL, x86_64, arm64] range under 4G. When crashkernel=X,high
 			is passed, kernel could allocate physical memory region
 			above 4G, that cause second kernel crash on system
 			that require some amount of low memory, e.g. swiotlb