From patchwork Sat Jan 30 07:10:15 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056859
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 01/11] x86: kdump: replace the hard-coded alignment with macro CRASH_ALIGN
Date: Sat, 30 Jan 2021 15:10:15 +0800
Message-ID: <20210130071025.65258-2-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

Move CRASH_ALIGN to the header asm/kexec.h for later use. Besides, the
alignment of crash kernel regions on x86 is 16M (CRASH_ALIGN), but
reserve_crashkernel() also used a hard-coded 1M alignment. So replace the
hard-coded 1M alignment with the macro CRASH_ALIGN.

Suggested-by: Dave Young
Suggested-by: Baoquan He
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Acked-by: Baoquan He
---
 arch/x86/include/asm/kexec.h | 3 +++
 arch/x86/kernel/setup.c      | 5 +----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 6802c59e8252..be18dc7ae51f 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 
 # define KEXEC_CONTROL_CODE_MAX_SIZE	2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN		SZ_16M
+
 #ifndef __ASSEMBLY__
 
 #include
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3412c4595efd..da769845597d 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -390,9 +390,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN		SZ_16M
-
 /*
  * Keep the crash kernel below this limit.
  *
@@ -510,7 +507,7 @@ static void __init reserve_crashkernel(void)
 	} else {
 		unsigned long long start;
 
-		start = memblock_phys_alloc_range(crash_size, SZ_1M, crash_base,
+		start = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base,
 						  crash_base + crash_size);
 		if (start != crash_base) {
 			pr_info("crashkernel reservation failed - memory is in use.\n");

From patchwork Sat Jan 30 07:10:16 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056855
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 02/11] x86: kdump: make the lower bound of crash kernel reservation consistent
Date: Sat, 30 Jan 2021 15:10:16 +0800
Message-ID: <20210130071025.65258-3-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

The lower bounds of the crash kernel reservation and the crash kernel low
reservation are different; use the consistent value CRASH_ALIGN for both.

Suggested-by: Dave Young
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Acked-by: Baoquan He
---
 arch/x86/kernel/setup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index da769845597d..27470479e4a3 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -439,7 +439,8 @@ static int __init reserve_crashkernel_low(void)
 			return 0;
 	}
 
-	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
+	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, CRASH_ALIGN,
+					     CRASH_ADDR_LOW_MAX);
 	if (!low_base) {
 		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
 			(unsigned long)(low_size >> 20));

From patchwork Sat Jan 30 07:10:17 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056861
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 03/11] x86: kdump: use macro CRASH_ADDR_LOW_MAX in function reserve_crashkernel()
Date: Sat, 30 Jan 2021 15:10:17 +0800
Message-ID: <20210130071025.65258-4-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

To make the function reserve_crashkernel() generic, replace the remaining
hard-coded numbers with the macro CRASH_ADDR_LOW_MAX.

Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Acked-by: Baoquan He
---
 arch/x86/kernel/setup.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 27470479e4a3..086a04235be4 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -487,8 +487,9 @@ static void __init reserve_crashkernel(void)
 	if (!crash_base) {
 		/*
 		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
-		 * crashkernel=x,high reserves memory over 4G, also allocates
-		 * 256M extra low memory for DMA buffers and swiotlb.
+		 * crashkernel=x,high reserves memory over CRASH_ADDR_LOW_MAX,
+		 * also allocates 256M extra low memory for DMA buffers
+		 * and swiotlb.
 		 * But the extra memory is not required for all machines.
 		 * So try low memory first and fall back to high memory
 		 * unless "crashkernel=size[KMG],high" is specified.
@@ -516,7 +517,7 @@ static void __init reserve_crashkernel(void)
 		}
 	}
 
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
+	if (crash_base >= CRASH_ADDR_LOW_MAX && reserve_crashkernel_low()) {
 		memblock_free(crash_base, crash_size);
 		return;
 	}

From patchwork Sat Jan 30 07:10:18 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056875
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 04/11] x86: kdump: move xen_pv_domain() check and insert_resource() to setup_arch()
Date: Sat, 30 Jan 2021 15:10:18 +0800
Message-ID: <20210130071025.65258-5-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

We will make the function reserve_crashkernel() generic. The xen_pv_domain()
check in reserve_crashkernel() is relevant only to x86, as is the
insert_resource() call in reserve_crashkernel[_low](). So move the
xen_pv_domain() check and the insert_resource() calls into setup_arch() to
keep them x86-specific.

Suggested-by: Mike Rapoport
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Acked-by: Baoquan He
---
 arch/x86/kernel/setup.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 086a04235be4..5d676efc32f6 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -454,7 +454,6 @@ static int __init reserve_crashkernel_low(void)
 
 	crashk_low_res.start = low_base;
 	crashk_low_res.end = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
 #endif
 	return 0;
 }
@@ -478,11 +477,6 @@ static void __init reserve_crashkernel(void)
 		high = true;
 	}
 
-	if (xen_pv_domain()) {
-		pr_info("Ignoring crashkernel for a Xen PV domain\n");
-		return;
-	}
-
 	/* 0 means: find the address automatically */
 	if (!crash_base) {
 		/*
@@ -529,7 +523,6 @@ static void __init reserve_crashkernel(void)
 
 	crashk_res.start = crash_base;
 	crashk_res.end = crash_base + crash_size - 1;
-	insert_resource(&iomem_resource, &crashk_res);
 }
 #else
 static void __init reserve_crashkernel(void)
@@ -1151,7 +1144,17 @@ void __init setup_arch(char **cmdline_p)
 	/*
 	 * Reserve memory for crash kernel after SRAT is parsed so that it
 	 * won't consume hotpluggable memory.
 	 */
-	reserve_crashkernel();
+	if (xen_pv_domain())
+		pr_info("Ignoring crashkernel for a Xen PV domain\n");
+	else {
+		reserve_crashkernel();
+#ifdef CONFIG_KEXEC_CORE
+		if (crashk_res.end > crashk_res.start)
+			insert_resource(&iomem_resource, &crashk_res);
+		if (crashk_low_res.end > crashk_low_res.start)
+			insert_resource(&iomem_resource, &crashk_low_res);
+#endif
+	}
 
 	memblock_find_dma_reserve();

From patchwork Sat Jan 30 07:10:19 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056877
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 05/11] x86: kdump: move reserve_crashkernel[_low]() into crash_core.c
Date: Sat, 30 Jan 2021 15:10:19 +0800
Message-ID: <20210130071025.65258-6-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

Make the functions reserve_crashkernel[_low]() generic. Arm64 will use them
to reimplement crashkernel=X.

Signed-off-by: Chen Zhou
Tested-by: John Donnelly
---
 arch/x86/include/asm/kexec.h |  25 ++++++
 arch/x86/kernel/setup.c      | 143 +------------------------------
 include/linux/crash_core.h   |   3 +
 include/linux/kexec.h        |   2 -
 kernel/crash_core.c          | 159 +++++++++++++++++++++++++++++++++++
 kernel/kexec_core.c          |  17 ----
 6 files changed, 189 insertions(+), 160 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index be18dc7ae51f..2b18f918203e 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -21,6 +21,27 @@
 /* 16M alignment for crash kernel regions */
 #define CRASH_ALIGN		SZ_16M
 
+/*
+ * Keep the crash kernel below this limit.
+ *
+ * Earlier 32-bits kernels would limit the kernel to the low 512 MB range
+ * due to mapping restrictions.
+ *
+ * 64-bit kdump kernels need to be restricted to be under 64 TB, which is
+ * the upper limit of system RAM in 4-level paging mode. Since the kdump
+ * jump could be from 5-level paging to 4-level paging, the jump will fail if
+ * the kernel is put above 64 TB, and during the 1st kernel bootup there's
+ * no good way to detect the paging mode of the target kernel which will be
+ * loaded for dumping.
+ */
+#ifdef CONFIG_X86_32
+# define CRASH_ADDR_LOW_MAX	SZ_512M
+# define CRASH_ADDR_HIGH_MAX	SZ_512M
+#else
+# define CRASH_ADDR_LOW_MAX	SZ_4G
+# define CRASH_ADDR_HIGH_MAX	SZ_64T
+#endif
+
 #ifndef __ASSEMBLY__
 
 #include
@@ -200,6 +221,10 @@ typedef void crash_vmclear_fn(void);
 extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
 extern void kdump_nmi_shootdown_cpus(void);
 
+#ifdef CONFIG_KEXEC_CORE
+extern void __init reserve_crashkernel(void);
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_KEXEC_H */
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 5d676efc32f6..d136d6ad3fa8 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -384,147 +385,7 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 	}
 }
 
-/*
- * --------- Crashkernel reservation ------------------------------
- */
-
-#ifdef CONFIG_KEXEC_CORE
-
-/*
- * Keep the crash kernel below this limit.
- *
- * Earlier 32-bits kernels would limit the kernel to the low 512 MB range
- * due to mapping restrictions.
- *
- * 64-bit kdump kernels need to be restricted to be under 64 TB, which is
- * the upper limit of system RAM in 4-level paging mode. Since the kdump
- * jump could be from 5-level paging to 4-level paging, the jump will fail if
- * the kernel is put above 64 TB, and during the 1st kernel bootup there's
- * no good way to detect the paging mode of the target kernel which will be
- * loaded for dumping.
- */
-#ifdef CONFIG_X86_32
-# define CRASH_ADDR_LOW_MAX	SZ_512M
-# define CRASH_ADDR_HIGH_MAX	SZ_512M
-#else
-# define CRASH_ADDR_LOW_MAX	SZ_4G
-# define CRASH_ADDR_HIGH_MAX	SZ_64T
-#endif
-
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long low_mem_limit;
-	int ret;
-
-	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, low_mem_limit, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from kernel/dma/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 * to 8M for other buffers that may need to stay low too. Also
-		 * make sure we allocate enough extra low memory so that we
-		 * don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ? */
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, CRASH_ALIGN,
-					     CRASH_ADDR_LOW_MAX);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-			(unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(low_mem_limit >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end = low_base + low_size - 1;
-#endif
-	return 0;
-}
-
-static void __init reserve_crashkernel(void)
-{
-	unsigned long long crash_size, crash_base, total_mem;
-	bool high = false;
-	int ret;
-
-	total_mem = memblock_phys_mem_size();
-
-	/* crashkernel=XM */
-	ret = parse_crashkernel(boot_command_line, total_mem, &crash_size, &crash_base);
-	if (ret != 0 || crash_size <= 0) {
-		/* crashkernel=X,high */
-		ret = parse_crashkernel_high(boot_command_line, total_mem,
-					     &crash_size, &crash_base);
-		if (ret != 0 || crash_size <= 0)
-			return;
-		high = true;
-	}
-
-	/* 0 means: find the address automatically */
-	if (!crash_base) {
-		/*
-		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
-		 * crashkernel=x,high reserves memory over CRASH_ADDR_LOW_MAX,
-		 * also allocates 256M extra low memory for DMA buffers
-		 * and swiotlb.
-		 * But the extra memory is not required for all machines.
-		 * So try low memory first and fall back to high memory
-		 * unless "crashkernel=size[KMG],high" is specified.
-		 */
-		if (!high)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_LOW_MAX);
-		if (!crash_base)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_HIGH_MAX);
-		if (!crash_base) {
-			pr_info("crashkernel reservation failed - No suitable area found.\n");
-			return;
-		}
-	} else {
-		unsigned long long start;
-
-		start = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base,
-						  crash_base + crash_size);
-		if (start != crash_base) {
-			pr_info("crashkernel reservation failed - memory is in use.\n");
-			return;
-		}
-	}
-
-	if (crash_base >= CRASH_ADDR_LOW_MAX && reserve_crashkernel_low()) {
-		memblock_free(crash_base, crash_size);
-		return;
-	}
-
-	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
-		(unsigned long)(crash_size >> 20),
-		(unsigned long)(crash_base >> 20),
-		(unsigned long)(total_mem >> 20));
-
-	crashk_res.start = crash_base;
-	crashk_res.end = crash_base + crash_size - 1;
-}
-#else
+#ifndef CONFIG_KEXEC_CORE
 static void __init reserve_crashkernel(void)
 {
 }
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 206bde8308b2..fc0ef33a76f7 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -69,6 +69,9 @@ extern unsigned char *vmcoreinfo_data;
 extern size_t vmcoreinfo_size;
 extern u32 *vmcoreinfo_note;
 
+extern struct resource crashk_res;
+extern struct resource crashk_low_res;
+
 /* raw contents of kernel .notes section */
 extern const void __start_notes __weak;
 extern const void __stop_notes __weak;
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 9e93bef52968..f301f2f5cfc4 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -337,8 +337,6 @@ extern int kexec_load_disabled;
 
 /* Location of a reserved region to hold the crash kernel. */
-extern struct resource crashk_res;
-extern struct resource crashk_low_res;
 extern note_buf_t __percpu *crash_notes;
 
 /* flag to track if kexec reboot is in progress */
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 825284baaf46..a0e790d6ea0f 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -7,6 +7,12 @@
 #include
 #include
 #include
+#include
+#include
+
+#ifdef CONFIG_KEXEC_CORE
+#include
+#endif
 
 #include
 #include
@@ -21,6 +27,22 @@ u32 *vmcoreinfo_note;
 /* trusted vmcoreinfo, e.g. we can make a copy in the crash memory */
 static unsigned char *vmcoreinfo_data_safecopy;
 
+/* Location of the reserved area for the crash kernel */
+struct resource crashk_res = {
+	.name  = "Crash kernel",
+	.start = 0,
+	.end   = 0,
+	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+	.desc  = IORES_DESC_CRASH_KERNEL
+};
+struct resource crashk_low_res = {
+	.name  = "Crash kernel",
+	.start = 0,
+	.end   = 0,
+	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+	.desc  = IORES_DESC_CRASH_KERNEL
+};
+
 /*
  * parsing the "crashkernel" commandline
  *
@@ -294,6 +316,143 @@ int __init parse_crashkernel_low(char *cmdline,
 				"crashkernel=", suffix_tbl[SUFFIX_LOW]);
 }
 
+/*
+ * --------- Crashkernel reservation ------------------------------
+ */
+
+#ifdef CONFIG_KEXEC_CORE
+
+#ifdef CONFIG_X86
+static int __init reserve_crashkernel_low(void)
+{
+#ifdef CONFIG_X86_64
+	unsigned long long base, low_base = 0, low_size = 0;
+	unsigned long low_mem_limit;
+	int ret;
+
+	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
+
+	/* crashkernel=Y,low */
+	ret = parse_crashkernel_low(boot_command_line, low_mem_limit, &low_size, &base);
+	if (ret) {
+		/*
+		 * two parts from kernel/dma/swiotlb.c:
+		 * -swiotlb size: user-specified with swiotlb= or default.
+		 *
+		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+		 * to 8M for other buffers that may need to stay low too. Also
+		 * make sure we allocate enough extra low memory so that we
+		 * don't run out of DMA buffers for 32-bit devices.
+		 */
+		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, CRASH_ALIGN,
+					     CRASH_ADDR_LOW_MAX);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+			(unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(low_mem_limit >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end = low_base + low_size - 1;
+#endif
+	return 0;
+}
+
+/*
+ * reserve_crashkernel() - reserves memory for crash kernel
+ *
+ * This function reserves memory area given in "crashkernel=" kernel command
+ * line parameter. The memory reserved is used by dump capture kernel when
+ * primary kernel is crashing.
+ */
+void __init reserve_crashkernel(void)
+{
+	unsigned long long crash_size, crash_base, total_mem;
+	bool high = false;
+	int ret;
+
+	total_mem = memblock_phys_mem_size();
+
+	/* crashkernel=XM */
+	ret = parse_crashkernel(boot_command_line, total_mem, &crash_size, &crash_base);
+	if (ret != 0 || crash_size <= 0) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line, total_mem,
+					     &crash_size, &crash_base);
+		if (ret != 0 || crash_size <= 0)
+			return;
+		high = true;
+	}
+
+	/* 0 means: find the address automatically */
+	if (!crash_base) {
+		/*
+		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
+		 * crashkernel=x,high reserves memory over CRASH_ADDR_LOW_MAX,
+		 * also allocates 256M extra low memory for DMA buffers
+		 * and swiotlb.
+		 * But the extra memory is not required for all machines.
+		 * So try low memory first and fall back to high memory
+		 * unless "crashkernel=size[KMG],high" is specified.
+		 */
+		if (!high)
+			crash_base = memblock_phys_alloc_range(crash_size,
+						CRASH_ALIGN, CRASH_ALIGN,
+						CRASH_ADDR_LOW_MAX);
+		if (!crash_base)
+			crash_base = memblock_phys_alloc_range(crash_size,
+						CRASH_ALIGN, CRASH_ALIGN,
+						CRASH_ADDR_HIGH_MAX);
+		if (!crash_base) {
+			pr_info("crashkernel reservation failed - No suitable area found.\n");
+			return;
+		}
+	} else {
+		/* User specifies base address explicitly. */
+		unsigned long long start;
+
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
+			pr_warn("cannot reserve crashkernel: base address is not %ldMB aligned\n",
+				(unsigned long)CRASH_ALIGN >> 20);
+			return;
+		}
+
+		start = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base,
+						  crash_base + crash_size);
+		if (start != crash_base) {
+			pr_info("crashkernel reservation failed - memory is in use.\n");
+			return;
+		}
+	}
+
+	if (crash_base >= CRASH_ADDR_LOW_MAX && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
+	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
+		(unsigned long)(crash_size >> 20),
+		(unsigned long)(crash_base >> 20),
+		(unsigned long)(total_mem >> 20));
+
+	crashk_res.start = crash_base;
+	crashk_res.end = crash_base + crash_size - 1;
+}
+#endif /* CONFIG_X86 */
+#endif /* CONFIG_KEXEC_CORE */
+
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
 			  void *data, size_t data_len)
 {
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 4f8efc278aa7..265799e93caf 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -52,23 +52,6 @@ note_buf_t __percpu *crash_notes;
 
 /* Flag to indicate we are going to kexec a new kernel */
 bool kexec_in_progress = false;
-
-/* Location of the reserved area for the crash kernel */
-struct resource crashk_res = {
-	.name  = "Crash kernel",
-	.start = 0,
-	.end   = 0,
-	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
-	.desc  = IORES_DESC_CRASH_KERNEL
-};
-struct resource crashk_low_res = {
-	.name  = "Crash kernel",
-	.start = 0,
-	.end   = 0,
-	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
-	.desc  = IORES_DESC_CRASH_KERNEL
-};
-
 int kexec_should_crash(struct task_struct *p)
 {
 	/*
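The following is not part of the patch; it is a reviewer-oriented sketch of
how another architecture is expected to hook into the now-generic
reservation: define the three address macros in its asm/kexec.h, extend the
"#ifdef CONFIG_X86" guard in kernel/crash_core.c, and call
reserve_crashkernel() from its early memory setup, as the arm64 patches later
in this series do. The file names, values and the setup hook below are
assumptions for illustration only, not taken from the series:

  /* arch/foo/include/asm/kexec.h -- illustrative values only */
  #define CRASH_ALIGN		SZ_2M			/* assumed reservation alignment */
  #define CRASH_ADDR_LOW_MAX	SZ_4G			/* assumed low/high boundary */
  #define CRASH_ADDR_HIGH_MAX	MEMBLOCK_ALLOC_ACCESSIBLE

  /* arch/foo early memory setup -- illustrative hook name */
  void __init foo_bootmem_init(void)
  {
  	/* parses "crashkernel=" and fills crashk_res / crashk_low_res */
  	reserve_crashkernel();
  }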
From patchwork Sat Jan 30 07:10:20 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056873
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 06/11] x86/elf: Move vmcore_elf_check_arch_cross to arch/x86/include/asm/elf.h
Date: Sat, 30 Jan 2021 15:10:20 +0800
Message-ID: <20210130071025.65258-7-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

Move the macro vmcore_elf_check_arch_cross from arch/x86/include/asm/kexec.h
to arch/x86/include/asm/elf.h to fix the following compile warning:

  make ARCH=i386

  In file included from arch/x86/kernel/setup.c:39:0:
  ./arch/x86/include/asm/kexec.h:77:0: warning: "vmcore_elf_check_arch_cross" redefined
   # define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64)
  In file included from arch/x86/kernel/setup.c:9:0:
  ./include/linux/crash_dump.h:39:0: note: this is the location of the previous definition
   #define vmcore_elf_check_arch_cross(x) 0

The root cause is that vmcore_elf_check_arch_cross under CONFIG_CRASH_CORE
depends on CONFIG_KEXEC_CORE. Commit 2db65f1db17d ("x86: kdump: move
reserve_crashkernel[_low]() into crash_core.c") triggered the issue.
As suggested by Mike, simply move vmcore_elf_check_arch_cross from
arch/x86/include/asm/kexec.h to arch/x86/include/asm/elf.h to fix the warning.

Fixes: 2db65f1db17d ("x86: kdump: move reserve_crashkernel[_low]() into crash_core.c")
Reported-by: kernel test robot
Suggested-by: Mike Rapoport
Signed-off-by: Chen Zhou
---
 arch/x86/include/asm/elf.h   | 3 +++
 arch/x86/include/asm/kexec.h | 3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 66bdfe838d61..5333777cc758 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -94,6 +94,9 @@ extern unsigned int vdso32_enabled;
 
 #define elf_check_arch(x)	elf_check_arch_ia32(x)
 
+/* We can also handle crash dumps from 64 bit kernel. */
+# define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64)
+
 /* SVR4/i386 ABI (pages 3-31, 3-32) says that when the program starts %edx
    contains a pointer to a function which might be registered using `atexit'.
    This provides a mean for the dynamic linker to call DT_FINI functions for
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 2b18f918203e..6fcae01a9cca 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -72,9 +72,6 @@ struct kimage;
 
 /* The native architecture */
 # define KEXEC_ARCH KEXEC_ARCH_386
-
-/* We can also handle crash dumps from 64 bit kernel. */
-# define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64)
 #else
 /* Maximum physical address we can use pages from */
 # define KEXEC_SOURCE_MEMORY_LIMIT (MAXMEM-1)

From patchwork Sat Jan 30 07:10:21 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056857
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 07/11] arm64: kdump: introduce some macros for crash kernel reservation
Date: Sat, 30 Jan 2021 15:10:21 +0800
Message-ID: <20210130071025.65258-8-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

Introduce the macro CRASH_ALIGN for the alignment, CRASH_ADDR_LOW_MAX for the
upper bound of low crash memory, and CRASH_ADDR_HIGH_MAX for the upper bound
of high crash memory, and use these macros instead of the open-coded values.
Besides, to keep consistent with x86, use CRASH_ALIGN as the lower bound of
the crash kernel reservation.

Signed-off-by: Chen Zhou
Tested-by: John Donnelly
---
 arch/arm64/include/asm/kexec.h | 6 ++++++
 arch/arm64/mm/init.c           | 6 +++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index d24b527e8c00..3f6ecae0bc68 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -25,6 +25,12 @@
 
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN		SZ_2M
+
+#define CRASH_ADDR_LOW_MAX	arm64_dma_phys_limit
+#define CRASH_ADDR_HIGH_MAX	MEMBLOCK_ALLOC_ACCESSIBLE
+
 #ifndef __ASSEMBLY__
 
 /**
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 709d98fea90c..912f64f505f7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -84,8 +84,8 @@ static void __init reserve_crashkernel(void)
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, arm64_dma_phys_limit,
-				crash_size, SZ_2M);
+		crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX,
+				crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -103,7 +103,7 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}

From patchwork Sat Jan 30 07:10:22 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056863
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v14 08/11] arm64: kdump: reimplement crashkernel=X
Date: Sat, 30 Jan 2021 15:10:22 +0800
Message-ID: <20210130071025.65258-9-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>
References: <20210130071025.65258-1-chenzhou10@huawei.com>

There are the following issues in arm64 kdump:

1. We use crashkernel=X to reserve the crash kernel below 4G, which fails
   when there is not enough low memory.
2. If the crash kernel is reserved above 4G, the crash dump kernel fails to
   boot because no low memory is available for allocation.

To solve these issues, change the behavior of crashkernel=X and introduce
crashkernel=X,[high,low]. crashkernel=X tries a low allocation in the DMA
zone and falls back to a high allocation if that fails. "crashkernel=X,high"
can be used to select a region above the DMA zone; it also tries to allocate
at least 256M in the DMA zone automatically. "crashkernel=Y,low" can be used
to allocate a low-memory region of the specified size.

Another minor change: there may now be two regions reserved for the crash
dump kernel. In order to distinguish the low region from the high one without
affecting existing kexec-tools, rename the low region "Crash kernel (low)".
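For illustration only (the sizes below are arbitrary examples, not part of
the patch and not a recommendation), the resulting command-line usage is:

  crashkernel=512M
      try a 512M reservation in the DMA zone first, fall back to a high
      reservation if that fails
  crashkernel=2G,high
      reserve 2G above the DMA zone, plus at least 256M of low memory
      reserved automatically
  crashkernel=2G,high crashkernel=256M,low
      as above, but with an explicitly sized low reservation
  crashkernel=2G,high crashkernel=0,low
      high reservation only, skip the automatic low reservation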
Signed-off-by: Chen Zhou Tested-by: John Donnelly --- arch/arm64/include/asm/kexec.h | 4 ++ arch/arm64/kernel/setup.c | 13 ++++++- arch/arm64/mm/init.c | 68 ++++++---------------------------- kernel/crash_core.c | 6 +-- 4 files changed, 30 insertions(+), 61 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 3f6ecae0bc68..f0caed0cb5e1 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -96,6 +96,10 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +#ifdef CONFIG_KEXEC_CORE +extern void __init reserve_crashkernel(void); +#endif + #ifdef CONFIG_KEXEC_FILE #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c index c18aacde8bb0..69c592c546de 100644 --- a/arch/arm64/kernel/setup.c +++ b/arch/arm64/kernel/setup.c @@ -238,7 +238,18 @@ static void __init request_standard_resources(void) kernel_data.end <= res->end) request_resource(res, &kernel_data); #ifdef CONFIG_KEXEC_CORE - /* Userspace will find "Crash kernel" region in /proc/iomem. */ + /* + * Userspace will find "Crash kernel" or "Crash kernel (low)" + * region in /proc/iomem. + * In order to distinct from the high region and make no effect + * to the use of existing kexec-tools, rename the low region as + * "Crash kernel (low)". + */ + if (crashk_low_res.end && crashk_low_res.start >= res->start && + crashk_low_res.end <= res->end) { + crashk_low_res.name = "Crash kernel (low)"; + request_resource(res, &crashk_low_res); + } if (crashk_res.end && crashk_res.start >= res->start && crashk_res.end <= res->end) request_resource(res, &crashk_res); diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 912f64f505f7..d20f5c444ebf 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include #include @@ -61,66 +62,11 @@ EXPORT_SYMBOL(memstart_addr); */ phys_addr_t arm64_dma_phys_limit __ro_after_init; -#ifdef CONFIG_KEXEC_CORE -/* - * reserve_crashkernel() - reserves memory for crash kernel - * - * This function reserves memory area given in "crashkernel=" kernel command - * line parameter. The memory reserved is used by dump capture kernel when - * primary kernel is crashing. - */ +#ifndef CONFIG_KEXEC_CORE static void __init reserve_crashkernel(void) { - unsigned long long crash_base, crash_size; - int ret; - - ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(), - &crash_size, &crash_base); - /* no crashkernel= or invalid value specified */ - if (ret || !crash_size) - return; - - crash_size = PAGE_ALIGN(crash_size); - - if (crash_base == 0) { - /* Current arm64 boot protocol requires 2MB alignment */ - crash_base = memblock_find_in_range(CRASH_ALIGN, CRASH_ADDR_LOW_MAX, - crash_size, CRASH_ALIGN); - if (crash_base == 0) { - pr_warn("cannot allocate crashkernel (size:0x%llx)\n", - crash_size); - return; - } - } else { - /* User specifies base address explicitly. 
*/ - if (!memblock_is_region_memory(crash_base, crash_size)) { - pr_warn("cannot reserve crashkernel: region is not memory\n"); - return; - } - - if (memblock_is_region_reserved(crash_base, crash_size)) { - pr_warn("cannot reserve crashkernel: region overlaps reserved memory\n"); - return; - } - - if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) { - pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n"); - return; - } - } - memblock_reserve(crash_base, crash_size); - - pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n", - crash_base, crash_base + crash_size, crash_size >> 20); - - crashk_res.start = crash_base; - crashk_res.end = crash_base + crash_size - 1; } -#else -static void __init reserve_crashkernel(void) -{ -} -#endif /* CONFIG_KEXEC_CORE */ +#endif #ifdef CONFIG_CRASH_DUMP static int __init early_init_dt_scan_elfcorehdr(unsigned long node, @@ -446,6 +392,14 @@ void __init bootmem_init(void) * reserved, so do it here. */ reserve_crashkernel(); +#ifdef CONFIG_KEXEC_CORE + /* + * The low region is intended to be used for crash dump kernel devices, + * just mark the low region as "nomap" simply. + */ + if (crashk_low_res.end) + memblock_mark_nomap(crashk_low_res.start, resource_size(&crashk_low_res)); +#endif memblock_dump_all(); } diff --git a/kernel/crash_core.c b/kernel/crash_core.c index a0e790d6ea0f..8479be270c0b 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -322,10 +322,10 @@ int __init parse_crashkernel_low(char *cmdline, #ifdef CONFIG_KEXEC_CORE -#ifdef CONFIG_X86 +#if defined(CONFIG_X86) || defined(CONFIG_ARM64) static int __init reserve_crashkernel_low(void) { -#ifdef CONFIG_X86_64 +#ifdef CONFIG_64BIT unsigned long long base, low_base = 0, low_size = 0; unsigned long low_mem_limit; int ret; @@ -450,7 +450,7 @@ void __init reserve_crashkernel(void) crashk_res.start = crash_base; crashk_res.end = crash_base + crash_size - 1; } -#endif /* CONFIG_X86 */ +#endif #endif /* CONFIG_KEXEC_CORE */ Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, From patchwork Sat Jan 30 07:10:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: chenzhou X-Patchwork-Id: 12056871 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 424DCC433E0 for ; Sat, 30 Jan 2021 07:09:22 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0594964E1B for ; Sat, 30 Jan 2021 07:09:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0594964E1B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: 
From: Chen Zhou
Subject: [PATCH v14 09/11] x86, arm64: Add ARCH_WANT_RESERVE_CRASH_KERNEL config
Date: Sat, 30 Jan 2021 15:10:23 +0800
Message-ID: <20210130071025.65258-10-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>

Make reserve_crashkernel[_low]() generic so that x86 and arm64 share one implementation. Since the reserve_crashkernel[_low]() implementations on other architectures are quite similar as well, more users can be converted later. Add CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL to arch/Kconfig and select it from X86 and ARM64.
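As a rough illustration of what selecting the new option means for an architecture (the Kconfig side is the one-line select shown in the hunks that follow; the function and call-site names below are hypothetical and not part of the patch):

	/* Illustrative only: a hypothetical architecture using the now-generic code. */
	#include <linux/init.h>
	#include <linux/kexec.h>	/* crashk_res, crashk_low_res */

	/*
	 * Built in kernel/crash_core.c once ARCH_WANT_RESERVE_CRASH_KERNEL is
	 * selected; arm64 adds this declaration to its asm/kexec.h in this series.
	 */
	extern void __init reserve_crashkernel(void);

	void __init example_arch_bootmem_init(void)
	{
		/* ... all RAM has already been registered with memblock ... */
		reserve_crashkernel();
		/* crashk_res (and possibly crashk_low_res) now describe the reservation. */
	}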
Suggested-by: Mike Rapoport Suggested-by: Baoquan He Signed-off-by: Chen Zhou Acked-by: Baoquan He --- arch/Kconfig | 3 +++ arch/arm64/Kconfig | 1 + arch/x86/Kconfig | 2 ++ kernel/crash_core.c | 7 ++----- 4 files changed, 8 insertions(+), 5 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 24862d15f3a3..0ca1ff5bb157 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -24,6 +24,9 @@ config KEXEC_ELF config HAVE_IMA_KEXEC bool +config ARCH_WANT_RESERVE_CRASH_KERNEL + bool + config SET_FS bool diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index f39568b28ec1..09365c7ff469 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -82,6 +82,7 @@ config ARM64 select ARCH_WANT_FRAME_POINTERS select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36) select ARCH_WANT_LD_ORPHAN_WARN + select ARCH_WANT_RESERVE_CRASH_KERNEL if KEXEC_CORE select ARCH_HAS_UBSAN_SANITIZE_ALL select ARM_AMBA select ARM_ARCH_TIMER diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 21f851179ff0..e6926fcb4a40 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -12,6 +12,7 @@ config X86_32 depends on !64BIT # Options that are inherently 32-bit kernel only: select ARCH_WANT_IPC_PARSE_VERSION + select ARCH_WANT_RESERVE_CRASH_KERNEL if KEXEC_CORE select CLKSRC_I8253 select CLONE_BACKWARDS select GENERIC_VDSO_32 @@ -28,6 +29,7 @@ config X86_64 select ARCH_HAS_GIGANTIC_PAGE select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 select ARCH_USE_CMPXCHG_LOCKREF + select ARCH_WANT_RESERVE_CRASH_KERNEL if KEXEC_CORE select HAVE_ARCH_SOFT_DIRTY select MODULES_USE_ELF_RELA select NEED_DMA_MAP_STATE diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 8479be270c0b..2c5783985db5 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -320,9 +320,7 @@ int __init parse_crashkernel_low(char *cmdline, * --------- Crashkernel reservation ------------------------------ */ -#ifdef CONFIG_KEXEC_CORE - -#if defined(CONFIG_X86) || defined(CONFIG_ARM64) +#ifdef CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL static int __init reserve_crashkernel_low(void) { #ifdef CONFIG_64BIT @@ -450,8 +448,7 @@ void __init reserve_crashkernel(void) crashk_res.start = crash_base; crashk_res.end = crash_base + crash_size - 1; } -#endif -#endif /* CONFIG_KEXEC_CORE */ +#endif /* CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL */ Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, void *data, size_t data_len) From patchwork Sat Jan 30 07:10:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: chenzhou X-Patchwork-Id: 12056867 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1BB0FC433E6 for ; Sat, 30 Jan 2021 07:08:36 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DB5F264E18 for ; Sat, 30 Jan 2021 07:08:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DB5F264E18 
From: Chen Zhou
Subject: [PATCH v14 10/11] arm64: kdump: add memory for devices by DT property linux,usable-memory-range
Date: Sat, 30 Jan 2021 15:10:24 +0800
Message-ID: <20210130071025.65258-11-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>

When the crash kernel is reserved in high memory, some low memory is reserved for crash dump kernel devices and is never mapped by the first kernel. This memory range is advertised to the crash dump kernel via a DT property under /chosen:

	linux,usable-memory-range = <BASE1 SIZE1 [BASE2 SIZE2]>

We reuse the DT property linux,usable-memory-range and pass the low memory region as the second range "BASE2 SIZE2", which keeps compatibility with existing user-space and older kdump kernels.
Crash dump kernel reads this property at boot time and call memblock_add() to add the low memory region after memblock_cap_memory_range() has been called. Signed-off-by: Chen Zhou Tested-by: John Donnelly --- arch/arm64/mm/init.c | 43 +++++++++++++++++++++++++++++++++---------- 1 file changed, 33 insertions(+), 10 deletions(-) diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index d20f5c444ebf..180a25b67f55 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -68,6 +68,15 @@ static void __init reserve_crashkernel(void) } #endif +/* + * The main usage of linux,usable-memory-range is for crash dump kernel. + * Originally, the number of usable-memory regions is one. Now there may + * be two regions, low region and high region. + * To make compatibility with existing user-space and older kdump, the low + * region is always the last range of linux,usable-memory-range if exist. + */ +#define MAX_USABLE_RANGES 2 + #ifdef CONFIG_CRASH_DUMP static int __init early_init_dt_scan_elfcorehdr(unsigned long node, const char *uname, int depth, void *data) @@ -201,9 +210,9 @@ early_param("mem", early_mem); static int __init early_init_dt_scan_usablemem(unsigned long node, const char *uname, int depth, void *data) { - struct memblock_region *usablemem = data; - const __be32 *reg; - int len; + struct memblock_region *usable_rgns = data; + const __be32 *reg, *endp; + int len, nr = 0; if (depth != 1 || strcmp(uname, "chosen") != 0) return 0; @@ -212,22 +221,36 @@ static int __init early_init_dt_scan_usablemem(unsigned long node, if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells))) return 1; - usablemem->base = dt_mem_next_cell(dt_root_addr_cells, ®); - usablemem->size = dt_mem_next_cell(dt_root_size_cells, ®); + endp = reg + (len / sizeof(__be32)); + while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) { + usable_rgns[nr].base = dt_mem_next_cell(dt_root_addr_cells, ®); + usable_rgns[nr].size = dt_mem_next_cell(dt_root_size_cells, ®); + + if (++nr >= MAX_USABLE_RANGES) + break; + } return 1; } static void __init fdt_enforce_memory_region(void) { - struct memblock_region reg = { - .size = 0, + struct memblock_region usable_rgns[MAX_USABLE_RANGES] = { + { .size = 0 }, + { .size = 0 } }; - of_scan_flat_dt(early_init_dt_scan_usablemem, ®); + of_scan_flat_dt(early_init_dt_scan_usablemem, &usable_rgns); - if (reg.size) - memblock_cap_memory_range(reg.base, reg.size); + /* + * The first range of usable-memory regions is for crash dump + * kernel with only one region or for high region with two regions, + * the second range is dedicated for low region if exist. 
+ */ + if (usable_rgns[0].size) + memblock_cap_memory_range(usable_rgns[0].base, usable_rgns[0].size); + if (usable_rgns[1].size) + memblock_add(usable_rgns[1].base, usable_rgns[1].size); } void __init arm64_memblock_init(void)

From patchwork Sat Jan 30 07:10:25 2021
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 12056869
From: Chen Zhou
Subject: [PATCH v14 11/11] kdump: update Documentation about crashkernel
Date: Sat, 30 Jan 2021 15:10:25 +0800
Message-ID: <20210130071025.65258-12-chenzhou10@huawei.com>
In-Reply-To: <20210130071025.65258-1-chenzhou10@huawei.com>

For arm64, the behavior of crashkernel=X has changed: it tries a low allocation in the DMA zone and falls back to a high allocation if that fails. "crashkernel=X,high" can be used to select a high region above the DMA zone, which also tries to allocate at least 256M of low memory in the DMA zone automatically, and "crashkernel=Y,low" can be used to allocate low memory of a specified size. Update the documentation accordingly.

Signed-off-by: Chen Zhou Tested-by: John Donnelly Acked-by: Baoquan He --- Documentation/admin-guide/kdump/kdump.rst | 22 ++++++++++++++++--- .../admin-guide/kernel-parameters.txt | 11 ++++++++-- 2 files changed, 28 insertions(+), 5 deletions(-) diff --git a/Documentation/admin-guide/kdump/kdump.rst b/Documentation/admin-guide/kdump/kdump.rst index 75a9dd98e76e..0877c76f8015 100644 --- a/Documentation/admin-guide/kdump/kdump.rst +++ b/Documentation/admin-guide/kdump/kdump.rst @@ -299,7 +299,16 @@ Boot into System Kernel "crashkernel=64M@16M" tells the system kernel to reserve 64 MB of memory starting at physical address 0x01000000 (16MB) for the dump-capture kernel. - On x86 and x86_64, use "crashkernel=64M@16M". + On x86 use "crashkernel=64M@16M". + + On x86_64, use "crashkernel=X" to select a region under 4G first, and + fall back to reserve region above 4G. And go for high allocation + directly if the required size is too large. + We can also use "crashkernel=X,high" to select a region above 4G, which + also tries to allocate at least 256M below 4G automatically and + "crashkernel=Y,low" can be used to allocate specified size low memory. + Use "crashkernel=Y@X" if you really have to reserve memory from specified + start address X. On ppc64, use "crashkernel=128M@32M". @@ -316,8 +325,15 @@ Boot into System Kernel kernel will automatically locate the crash kernel image within the first 512MB of RAM if X is not given. - On arm64, use "crashkernel=Y[@X]". Note that the start address of - the kernel, X if explicitly specified, must be aligned to 2MiB (0x200000). + On arm64, use "crashkernel=X" to try low allocation in DMA zone and + fall back to high allocation if it fails. + We can also use "crashkernel=X,high" to select a high region above + DMA zone, which also tries to allocate at least 256M low memory in + DMA zone automatically. + "crashkernel=Y,low" can be used to allocate specified size low memory. + Use "crashkernel=Y@X" if you really have to reserve memory from + specified start address X. Note that the start address of the kernel, + X if explicitly specified, must be aligned to 2MiB (0x200000).
Load the Dump-capture Kernel ============================ diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index a10b545c2070..908e5c8b61ba 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -738,6 +738,9 @@ [KNL, X86-64] Select a region under 4G first, and fall back to reserve region above 4G when '@offset' hasn't been specified. + [KNL, arm64] Try low allocation in DMA zone and fall back + to high allocation if it fails when '@offset' hasn't been + specified. See Documentation/admin-guide/kdump/kdump.rst for further details. crashkernel=range1:size1[,range2:size2,...][@offset] @@ -754,6 +757,8 @@ Otherwise memory region will be allocated below 4G, if available. It will be ignored if crashkernel=X is specified. + [KNL, arm64] range in high memory. + Allow kernel to allocate physical memory region from top. crashkernel=size[KMG],low [KNL, X86-64] range under 4G. When crashkernel=X,high is passed, kernel could allocate physical memory region @@ -762,13 +767,15 @@ requires at least 64M+32K low memory, also enough extra low memory is needed to make sure DMA buffers for 32-bit devices won't run out. Kernel would try to allocate at - at least 256M below 4G automatically. + least 256M below 4G automatically. This one let user to specify own low range under 4G for second kernel instead. 0: to disable low allocation. It will be ignored when crashkernel=X,high is not used or memory reserved is below 4G. - + [KNL, arm64] range in low memory. + This one let user to specify a low range in DMA zone for + crash dump kernel. cryptomgr.notests [KNL] Disable crypto self-tests
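For reference, a sketch of how the automatic low reservation mentioned above is typically sized when "crashkernel=Y,low" is not given -- modelled on the x86 reserve_crashkernel_low() logic that this series makes generic; treat the exact expression and helper name as illustrative rather than the exact kernel code:

	#include <linux/init.h>
	#include <linux/minmax.h>
	#include <linux/swiotlb.h>

	/* Illustrative: default size of the low (below 4G / DMA zone) reservation. */
	static unsigned long __init example_default_crash_low_size(void)
	{
		/*
		 * swiotlb needs roughly 64M+32K; leave extra room for 32-bit DMA
		 * buffers, but never reserve less than 256M.
		 */
		return max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
	}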