From patchwork Fri Jul 21 08:17:24 2023
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 13321605
From: thunder.leizhen@huaweicloud.com
To: Dave Young, Baoquan He, Vivek Goyal, Eric W. Biederman,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
    Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Cc: Zhen Lei
Subject: [PATCH 1/3] arm64: kdump: Allocate crash low memory in the bottom-up direction
Date: Fri, 21 Jul 2023 16:17:24 +0800
Message-Id: <20230721081726.882-2-thunder.leizhen@huaweicloud.com>
In-Reply-To: <20230721081726.882-1-thunder.leizhen@huaweicloud.com>
References: <20230721081726.882-1-thunder.leizhen@huaweicloud.com>

From: Zhen Lei

arm64_memblock_init()
    reserve_crashkernel()                       (1)
paging_init()
    map_mem()                                   (2)
unflatten_device_tree or parse ACPI             (3)
bootmem_init()
    zone_sizes_init()
        Update arm64_dma_phys_limit             (4)
    late_reserve_crashkernel()                  (5)

For most arm64 platforms, DMA-capable devices can access the whole low 4G
of memory without the SMMU enabled, so SZ_4G could simply be used as the
upper limit for the memblock allocation. However, the DMA zone does not
cover all of the 32-bit addressable memory on some platforms (e.g. it is
only 30 bits wide on Raspberry Pi 4), and the upper limit of the DMA zone
(arm64_dma_phys_limit) is only updated after map_mem(), see (3)(4) above.

Change the allocation direction of low memory from top-down to bottom-up.
This way, as long as the DMA zone has a contiguous free region of the
requested size, the memory reserved for the crash kernel will not exceed
the DMA zone. Of course, the DMA zone may still not be large enough, so
add late_reserve_crashkernel() to fall back if needed:
1. For crashkernel=X (no offset specified): fall back to reserving the
   region above the DMA zone, and reserve the default amount of low
   memory in the DMA zone.
2. For crashkernel=X,high: fall back to searching the low memory for the
   size specified by crashkernel=,high.
In reserve_crashkernel(), the allocation policy is as follows:

 low                                high
  |<---DMA---|--------------------->|
  |          |                      |
  |<<<-------------(1)--------------|   top-down
  |----------------(2)----------->>>|   bottom-up

(1) crashkernel=Y,high : upper limit is known, top-down.
(2) crashkernel=Y,low, crashkernel=X : upper limit is unknown, bottom-up.
(x) crashkernel=X@offset : fixed.

Signed-off-by: Zhen Lei
---
 arch/arm64/mm/init.c | 212 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 160 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d31c3a9290c5524..d2ab377520b2742 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -69,23 +69,168 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 
 #define CRASH_ADDR_LOW_MAX      arm64_dma_phys_limit
 #define CRASH_ADDR_HIGH_MAX     (PHYS_MASK + 1)
-#define CRASH_HIGH_SEARCH_BASE  SZ_4G
+#define CRASHKERNEL_TYPE_FIXED_BASE     1
+#define CRASHKERNEL_TYPE_HIGH           2
 
 #define DEFAULT_CRASH_KERNEL_LOW_SIZE   (128UL << 20)
 
+static int crashkernel_type __initdata;
+
+static phys_addr_t __init crashkernel_phys_alloc_range(phys_addr_t size,
+                                                       phys_addr_t start,
+                                                       phys_addr_t end)
+{
+        phys_addr_t base;
+        bool old_direction;
+
+        old_direction = memblock_bottom_up();
+        if (!end) {
+                /* The upper limit is unknown, let's allocate from bottom to up */
+                end = CRASH_ADDR_HIGH_MAX;
+                memblock_set_bottom_up(true);
+        }
+        base = memblock_phys_alloc_range(size, CRASH_ALIGN, start, end);
+        memblock_set_bottom_up(old_direction);
+
+        return base;
+}
+
+static void __init crashkernel_low_rollback(void)
+{
+        if (crashk_low_res.end) {
+                release_resource(&crashk_low_res);
+                memblock_phys_free(crashk_low_res.start, resource_size(&crashk_low_res));
+                crashk_low_res.start = 0;
+                crashk_low_res.end = 0;
+        }
+}
+
+static void __init crashkernel_rollback(void)
+{
+        release_resource(&crashk_res);
+        memblock_phys_free(crashk_res.start, resource_size(&crashk_res));
+        crashk_res.start = 0;
+        crashk_res.end = 0;
+
+        crashkernel_low_rollback();
+}
+
+static void __init late_reserve_crashkernel(void)
+{
+        struct resource *res;
+        unsigned long long low_base, low_size;
+        unsigned long long crash_base, crash_size;
+
+        res = &crashk_res;
+        if (!res->end)
+                return;
+
+        crash_size = resource_size(res);
+        if (crashkernel_type == CRASHKERNEL_TYPE_HIGH) {
+                /*
+                 *            CRASH_ADDR_LOW_MAX
+                 *                   |
+                 * |<----DMA---->|------------|
+                 *    |-high-|                    //case1
+                 *           |-high-|             //case2
+                 *                  |-high-|      //case3
+                 */
+                if (crashk_res.end < CRASH_ADDR_LOW_MAX)                /* case 1 */
+                        crashkernel_low_rollback();
+                else if (crashk_res.start >= CRASH_ADDR_LOW_MAX)        /* case 3 */
+                        res = &crashk_low_res;
+
+                low_size = crashk_low_res.end ? resource_size(&crashk_low_res) : 0;
+        }
+
+        /* All crashkernel memory is reserved as expected */
+        if (res->end < CRASH_ADDR_LOW_MAX)
+                goto ok;
+
+        crashkernel_rollback();
+
+        /* For details, see Documentation/arch/arm64/kdump.rst */
+        if (crashkernel_type == CRASHKERNEL_TYPE_FIXED_BASE) {
+                pr_warn("crashkernel reservation failed - memory range is invalid\n");
+                return;
+        } else if (crashkernel_type == CRASHKERNEL_TYPE_HIGH) {
+                /* Above case 3 (low memory is not enough) */
+                if (res == &crashk_low_res) {
+                        pr_warn("cannot allocate crashkernel low memory (size:0x%llx)\n", low_size);
+                        return;
+                }
+
+                /*
+                 * Above case 2. Fall back to searching the low memory with
+                 * the specified size in crashkernel=,high
+                 */
+                crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
+                                                       0, CRASH_ADDR_LOW_MAX);
+                if (!crash_base) {
+                        pr_warn("cannot allocate crashkernel (size:0x%llx)\n", crash_size);
+                        return;
+                }
+        } else {
+                /*
+                 * Fall back to reserve region above DMA zone and allocate default
+                 * size of memory in DMA zone.
+                 */
+                low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
+                low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
+                if (!low_base) {
+                        pr_warn("cannot allocate crashkernel low memory (size:0x%llx)\n", low_size);
+                        return;
+                }
+
+                crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
+                                                       CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX);
+                if (!crash_base) {
+                        pr_warn("cannot allocate crashkernel (size:0x%llx)\n", crash_size);
+                        memblock_phys_free(low_base, low_size);
+                        return;
+                }
+
+                crashk_low_res.start = low_base;
+                crashk_low_res.end = low_base + low_size - 1;
+                insert_resource(&iomem_resource, &crashk_low_res);
+        }
+
+        crashk_res.start = crash_base;
+        crashk_res.end = crash_base + crash_size - 1;
+        insert_resource(&iomem_resource, &crashk_res);
+
+ok:
+        crash_base = crashk_res.start;
+        crash_size = resource_size(&crashk_res);
+        pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
+                crash_base, crash_base + crash_size, crash_size >> 20);
+
+        if (crashk_low_res.end) {
+                low_base = crashk_low_res.start;
+                low_size = resource_size(&crashk_low_res);
+                pr_info("crashkernel low memory reserved: 0x%08llx - 0x%08llx (%lld MB)\n",
+                        low_base, low_base + low_size, low_size >> 20);
+        }
+
+        /*
+         * The crashkernel memory will be removed from the kernel linear
+         * map. Inform kmemleak so that it won't try to access it.
+         */
+        kmemleak_ignore_phys(crash_base);
+        if (crashk_low_res.end)
+                kmemleak_ignore_phys(crashk_low_res.start);
+}
+
 static int __init reserve_crashkernel_low(unsigned long long low_size)
 {
         unsigned long long low_base;
 
-        low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
+        low_base = crashkernel_phys_alloc_range(low_size, 0, CRASH_ADDR_LOW_MAX);
         if (!low_base) {
                 pr_err("cannot allocate crashkernel low memory (size:0x%llx).\n", low_size);
                 return -ENOMEM;
         }
 
-        pr_info("crashkernel low memory reserved: 0x%08llx - 0x%08llx (%lld MB)\n",
-                low_base, low_base + low_size, low_size >> 20);
-
         crashk_low_res.start = low_base;
         crashk_low_res.end = low_base + low_size - 1;
         insert_resource(&iomem_resource, &crashk_low_res);
@@ -102,12 +247,10 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
  */
 static void __init reserve_crashkernel(void)
 {
-        unsigned long long crash_low_size = 0, search_base = 0;
+        unsigned long long crash_low_size = 0;
         unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
         unsigned long long crash_base, crash_size;
         char *cmdline = boot_command_line;
-        bool fixed_base = false;
-        bool high = false;
         int ret;
 
         if (!IS_ENABLED(CONFIG_KEXEC_CORE))
@@ -131,9 +274,8 @@ static void __init reserve_crashkernel(void)
                 else if (ret)
                         return;
 
-                search_base = CRASH_HIGH_SEARCH_BASE;
                 crash_max = CRASH_ADDR_HIGH_MAX;
-                high = true;
+                crashkernel_type = CRASHKERNEL_TYPE_HIGH;
         } else if (ret || !crash_size) {
                 /* The specified value is invalid */
                 return;
@@ -143,67 +285,31 @@ static void __init reserve_crashkernel(void)
 
         /* User specifies base address explicitly. */
         if (crash_base) {
-                fixed_base = true;
-                search_base = crash_base;
+                crashkernel_type = CRASHKERNEL_TYPE_FIXED_BASE;
                 crash_max = crash_base + crash_size;
         }
 
-retry:
-        crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-                                               search_base, crash_max);
+        crash_base = crashkernel_phys_alloc_range(crash_size, crash_base, crash_max);
         if (!crash_base) {
                 /*
                  * For crashkernel=size[KMG]@offset[KMG], print out failure
                  * message if can't reserve the specified region.
                  */
-                if (fixed_base) {
+                if (crashkernel_type == CRASHKERNEL_TYPE_FIXED_BASE) {
                         pr_warn("crashkernel reservation failed - memory is in use.\n");
                         return;
                 }
 
-                /*
-                 * For crashkernel=size[KMG], if the first attempt was for
-                 * low memory, fall back to high memory, the minimum required
-                 * low memory will be reserved later.
-                 */
-                if (!high && crash_max == CRASH_ADDR_LOW_MAX) {
-                        crash_max = CRASH_ADDR_HIGH_MAX;
-                        search_base = CRASH_ADDR_LOW_MAX;
-                        crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
-                        goto retry;
-                }
+                pr_warn("cannot allocate crashkernel (size:0x%llx)\n", crash_size);
 
-                /*
-                 * For crashkernel=size[KMG],high, if the first attempt was
-                 * for high memory, fall back to low memory.
-                 */
-                if (high && crash_max == CRASH_ADDR_HIGH_MAX) {
-                        crash_max = CRASH_ADDR_LOW_MAX;
-                        search_base = 0;
-                        goto retry;
-                }
-                pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
-                        crash_size);
                 return;
         }
 
-        if ((crash_base >= CRASH_ADDR_LOW_MAX) && crash_low_size &&
-             reserve_crashkernel_low(crash_low_size)) {
+        if (crash_low_size && reserve_crashkernel_low(crash_low_size)) {
                 memblock_phys_free(crash_base, crash_size);
                 return;
         }
 
-        pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
-                crash_base, crash_base + crash_size, crash_size >> 20);
-
-        /*
-         * The crashkernel memory will be removed from the kernel linear
-         * map. Inform kmemleak so that it won't try to access it.
-         */
-        kmemleak_ignore_phys(crash_base);
-        if (crashk_low_res.end)
-                kmemleak_ignore_phys(crashk_low_res.start);
-
         crashk_res.start = crash_base;
         crashk_res.end = crash_base + crash_size - 1;
         insert_resource(&iomem_resource, &crashk_res);
@@ -408,6 +514,8 @@ void __init arm64_memblock_init(void)
 
         early_init_fdt_scan_reserved_mem();
 
+        reserve_crashkernel();
+
         high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 }
 
@@ -454,7 +562,7 @@ void __init bootmem_init(void)
          * request_standard_resources() depends on crashkernel's memory being
          * reserved, so do it here.
          */
-        reserve_crashkernel();
+        late_reserve_crashkernel();
 
         memblock_dump_all();
 }
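A standalone sketch of the placement logic above (not part of the patch): it
mirrors the case 1/2/3 classification that late_reserve_crashkernel() applies
to a crashkernel=,high reservation against CRASH_ADDR_LOW_MAX. The 4 GiB limit
and the sample ranges are illustrative assumptions only; the real limit is
arm64_dma_phys_limit, which is only known after zone_sizes_init().

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for arm64_dma_phys_limit; the real value is set at boot. */
#define CRASH_ADDR_LOW_MAX  (4ULL << 30)

/*
 * Mirror of the three crashkernel=,high placement cases handled by
 * late_reserve_crashkernel():
 *   case 1: region ends below CRASH_ADDR_LOW_MAX
 *           -> the separate low reservation is redundant and is rolled back.
 *   case 2: region straddles CRASH_ADDR_LOW_MAX
 *           -> everything is rolled back and the allocation is retried in low memory.
 *   case 3: region starts at or above CRASH_ADDR_LOW_MAX
 *           -> the high part is kept; only the low reservation is re-checked.
 */
static int classify_high_reservation(uint64_t start, uint64_t end)
{
        if (end < CRASH_ADDR_LOW_MAX)
                return 1;
        if (start >= CRASH_ADDR_LOW_MAX)
                return 3;
        return 2;
}

int main(void)
{
        /* Sample ranges, chosen only to hit each case once. */
        struct { uint64_t start, end; } samples[] = {
                { 1ULL << 30, (2ULL << 30) - 1 },   /* below the limit -> case 1 */
                { 3ULL << 30, (5ULL << 30) - 1 },   /* straddles it    -> case 2 */
                { 6ULL << 30, (7ULL << 30) - 1 },   /* above the limit -> case 3 */
        };

        for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
                printf("[%#llx - %#llx] -> case %d\n",
                       (unsigned long long)samples[i].start,
                       (unsigned long long)samples[i].end,
                       classify_high_reservation(samples[i].start, samples[i].end));

        return 0;
}

Only case 2 forces a full rollback and a retry in low memory; case 1 merely
drops the now-redundant low reservation, and case 3 keeps the high region and
only verifies the low one.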
From patchwork Fri Jul 21 08:17:25 2023
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 13321604
From: thunder.leizhen@huaweicloud.com
To: Dave Young, Baoquan He, Vivek Goyal, Eric W. Biederman,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
    Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Cc: Zhen Lei
Subject: [PATCH 2/3] arm64: kdump: use page-level mapping for crashkernel region
Date: Fri, 21 Jul 2023 16:17:25 +0800
Message-Id: <20230721081726.882-3-thunder.leizhen@huaweicloud.com>
In-Reply-To: <20230721081726.882-1-thunder.leizhen@huaweicloud.com>
References: <20230721081726.882-1-thunder.leizhen@huaweicloud.com>

From: Zhen Lei

Use page-level mappings for the crashkernel region so that
set_memory_valid() can later be used to apply access protection to it.

Signed-off-by: Zhen Lei
---
 arch/arm64/mm/mmu.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 95d360805f8aeb3..e0a197ebe14837d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -594,6 +594,11 @@ static void __init map_mem(pgd_t *pgdp)
          */
         memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
 
+#ifdef CONFIG_KEXEC_CORE
+        if (crashk_res.end)
+                memblock_mark_nomap(crashk_res.start, resource_size(&crashk_res));
+#endif
+
         /* map all the memory banks */
         for_each_mem_range(i, &start, &end) {
                 if (start >= end)
@@ -621,6 +626,22 @@ static void __init map_mem(pgd_t *pgdp)
                              PAGE_KERNEL, NO_CONT_MAPPINGS);
         memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
         arm64_kfence_map_pool(early_kfence_pool, pgdp);
+
+        /*
+         * Use page-level mappings here so that we can shrink the region
+         * in page granularity and put back unused memory to buddy system
+         * through /sys/kernel/kexec_crash_size interface.
+         */
+#ifdef CONFIG_KEXEC_CORE
+        if (crashk_res.end) {
+                __map_memblock(pgdp, crashk_res.start,
+                               crashk_res.end + 1,
+                               PAGE_KERNEL,
+                               NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
+                memblock_clear_nomap(crashk_res.start,
+                                     resource_size(&crashk_res));
+        }
+#endif
 }
 
 void mark_rodata_ro(void)
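A small standalone sketch (not from the patch; it assumes the common 4 KiB page
size) of why the page-level mapping above matters: the protection added in the
next patch flips validity per page, so a crashkernel segment of memsz bytes
becomes memsz >> PAGE_SHIFT individual PTE updates. A block or contiguous
mapping over the region would leave nothing at that granularity to flip, hence
NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS.

#include <stdint.h>
#include <stdio.h>

/* Assumed 4 KiB pages, the common arm64 default; other page sizes change the math. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Number of page-table entries that a set_memory_valid()-style helper would
 * have to toggle for a segment of 'memsz' bytes, assuming the region is
 * mapped at page granularity.
 */
static unsigned long pages_for_segment(uint64_t memsz)
{
        return (unsigned long)(memsz >> PAGE_SHIFT);
}

int main(void)
{
        /* Example segment sizes; a crashkernel region is typically a few hundred MiB. */
        uint64_t sizes[] = { 128ULL << 20, 256ULL << 20, 1ULL << 30 };

        for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("%llu MiB -> %lu pages of %lu bytes\n",
                       (unsigned long long)(sizes[i] >> 20),
                       pages_for_segment(sizes[i]), PAGE_SIZE);

        return 0;
}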
From patchwork Fri Jul 21 08:17:26 2023
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 13321606
From: thunder.leizhen@huaweicloud.com
To: Dave Young, Baoquan He, Vivek Goyal, Eric W. Biederman,
    kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
    Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Cc: Zhen Lei
Subject: [PATCH 3/3] arm64: kdump: add support for access protection of crashkernel region
Date: Fri, 21 Jul 2023 16:17:26 +0800
Message-Id: <20230721081726.882-4-thunder.leizhen@huaweicloud.com>
In-Reply-To: <20230721081726.882-1-thunder.leizhen@huaweicloud.com>
References: <20230721081726.882-1-thunder.leizhen@huaweicloud.com>

From: Zhen Lei

arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres() are
meant to be called by kexec_load() in order to protect the memory
allocated for the crash dump kernel once the image is loaded.

This basically reverts commit 0d124e96051b ("arm64: kdump : take off the
protection on crashkernel memory region"), except when the crashkernel
region has fallen back, because no page-level mapping was done for the
newly allocated region in that case.
Signed-off-by: Zhen Lei
---
 arch/arm64/include/asm/kexec.h    |  8 ++++++++
 arch/arm64/kernel/machine_kexec.c | 26 ++++++++++++++++++++++++++
 arch/arm64/mm/init.c              |  4 ++++
 3 files changed, 38 insertions(+)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 9ac9572a3bbee2c..a55388ff045e980 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -102,6 +102,14 @@ void cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
 
 int machine_kexec_post_load(struct kimage *image);
 #define machine_kexec_post_load machine_kexec_post_load
+
+extern bool crash_fallback;
+
+void arch_kexec_protect_crashkres(void);
+#define arch_kexec_protect_crashkres arch_kexec_protect_crashkres
+
+void arch_kexec_unprotect_crashkres(void);
+#define arch_kexec_unprotect_crashkres arch_kexec_unprotect_crashkres
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 078910db77a41b6..00392b48501d35c 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -269,6 +269,32 @@ void machine_crash_shutdown(struct pt_regs *regs)
         pr_info("Starting crashdump kernel...\n");
 }
 
+void arch_kexec_protect_crashkres(void)
+{
+        int i;
+
+        if (crash_fallback)
+                return;
+
+        for (i = 0; i < kexec_crash_image->nr_segments; i++)
+                set_memory_valid(
+                        __phys_to_virt(kexec_crash_image->segment[i].mem),
+                        kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 0);
+}
+
+void arch_kexec_unprotect_crashkres(void)
+{
+        int i;
+
+        if (crash_fallback)
+                return;
+
+        for (i = 0; i < kexec_crash_image->nr_segments; i++)
+                set_memory_valid(
+                        __phys_to_virt(kexec_crash_image->segment[i].mem),
+                        kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 1);
+}
+
 #ifdef CONFIG_HIBERNATION
 /*
  * To preserve the crash dump kernel image, the relevant memory segments

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2ab377520b2742..b544ed0ab04193d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,6 +74,7 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 
 #define DEFAULT_CRASH_KERNEL_LOW_SIZE   (128UL << 20)
 
+bool crash_fallback;
 static int crashkernel_type __initdata;
 
 static phys_addr_t __init crashkernel_phys_alloc_range(phys_addr_t size,
@@ -199,6 +200,9 @@ static void __init late_reserve_crashkernel(void)
         crashk_res.end = crash_base + crash_size - 1;
         insert_resource(&iomem_resource, &crashk_res);
 
+        crash_fallback = true;
+        pr_info("cannot allocate all crashkernel memory as expected, fallen back.\n");
+
 ok:
         crash_base = crashk_res.start;
         crash_size = resource_size(&crashk_res);