From patchwork Fri Aug 19 04:11:52 2022
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/5] arm64: rename defer_reserve_crashkernel() to have_zone_dma()
Date: Fri, 19 Aug 2022 07:11:52 +0300
Message-Id: <20220819041156.873873-2-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
References: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

The new name better describes what the function does and does not
restrict its use to crash kernel reservations.
Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h | 2 +-
 arch/arm64/mm/init.c            | 4 ++--
 arch/arm64/mm/mmu.c             | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9dd08cd339c3..27fce129b97e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -364,7 +364,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 void dump_mem_limit(void);
 
-static inline bool defer_reserve_crashkernel(void)
+static inline bool have_zone_dma(void)
 {
 	return IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
 }
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b9af30be813e..a6585d50a76c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -389,7 +389,7 @@ void __init arm64_memblock_init(void)
 
 	early_init_fdt_scan_reserved_mem();
 
-	if (!defer_reserve_crashkernel())
+	if (!have_zone_dma())
 		reserve_crashkernel();
 
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
@@ -438,7 +438,7 @@ void __init bootmem_init(void)
 	 * request_standard_resources() depends on crashkernel's memory being
 	 * reserved, so do it here.
 	 */
-	if (defer_reserve_crashkernel())
+	if (have_zone_dma())
 		reserve_crashkernel();
 
 	memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index db7c4e6ae57b..bf303f1dea25 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -548,7 +548,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
-		if (defer_reserve_crashkernel())
+		if (have_zone_dma())
 			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 		else if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
@@ -589,7 +589,7 @@ static void __init map_mem(pgd_t *pgdp)
 	 * through /sys/kernel/kexec_crash_size interface.
 	 */
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map && !defer_reserve_crashkernel()) {
+	if (crash_mem_map && !have_zone_dma()) {
 		if (crashk_res.end) {
 			__map_memblock(pgdp, crashk_res.start,
 				       crashk_res.end + 1,

From patchwork Fri Aug 19 04:11:53 2022
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/5] arm64/mmu: drop _hotplug from unmap_hotplug_* function names
Date: Fri, 19 Aug 2022 07:11:53 +0300
Message-Id: <20220819041156.873873-3-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
From: Mike Rapoport

Drop the _hotplug part of the unmap_hotplug_* function names so that
these helpers can also be used for remapping the crash kernel.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index bf303f1dea25..ea81e40a25cd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -911,7 +911,7 @@ static bool pgtable_range_aligned(unsigned long start, unsigned long end,
 	return true;
 }
 
-static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
+static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
 				    unsigned long end, bool free_mapped,
 				    struct vmem_altmap *altmap)
 {
@@ -932,7 +932,7 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
 	} while (addr += PAGE_SIZE, addr < end);
 }
 
-static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
+static void unmap_pmd_range(pud_t *pudp, unsigned long addr,
 				    unsigned long end, bool free_mapped,
 				    struct vmem_altmap *altmap)
 {
@@ -961,11 +961,11 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
 			continue;
 		}
 		WARN_ON(!pmd_table(pmd));
-		unmap_hotplug_pte_range(pmdp, addr, next, free_mapped, altmap);
+		unmap_pte_range(pmdp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }
 
-static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
+static void unmap_pud_range(p4d_t *p4dp, unsigned long addr,
 				    unsigned long end, bool free_mapped,
 				    struct vmem_altmap *altmap)
 {
@@ -994,11 +994,11 @@ static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
 			continue;
 		}
 		WARN_ON(!pud_table(pud));
-		unmap_hotplug_pmd_range(pudp, addr, next, free_mapped, altmap);
+		unmap_pmd_range(pudp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }
 
-static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
+static void unmap_p4d_range(pgd_t *pgdp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
 {
@@ -1013,11 +1013,11 @@ static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
 			continue;
 		WARN_ON(!p4d_present(p4d));
-		unmap_hotplug_pud_range(p4dp, addr, next, free_mapped, altmap);
+		unmap_pud_range(p4dp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }
 
-static void unmap_hotplug_range(unsigned long addr, unsigned long end,
+static void unmap_range(unsigned long addr, unsigned long end,
 			 bool free_mapped, struct vmem_altmap *altmap)
 {
 	unsigned long next;
@@ -1039,7 +1039,7 @@ static void unmap_hotplug_range(unsigned long addr, unsigned long end,
 			continue;
 		WARN_ON(!pgd_present(pgd));
-		unmap_hotplug_p4d_range(pgdp, addr, next, free_mapped, altmap);
+		unmap_p4d_range(pgdp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }
 
@@ -1258,7 +1258,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	unmap_hotplug_range(start, end, true, altmap);
+	unmap_range(start, end, true, altmap);
 	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
@@ -1522,7 +1522,7 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 	WARN_ON(pgdir != init_mm.pgd);
 	WARN_ON((start < PAGE_OFFSET) || (end > PAGE_END));
 
-	unmap_hotplug_range(start, end, false, NULL);
+	unmap_range(start, end, false, NULL);
 	free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
 }

From patchwork Fri Aug 19 04:11:54 2022
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/5] arm64/mmu: move helpers for hotplug page tables freeing close to callers
Date: Fri, 19 Aug 2022 07:11:54 +0300
Message-Id: <20220819041156.873873-4-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

Move the helpers for freeing hotplug page tables close to their callers
to minimize extra ifdefery when the unmap_*() methods are used to remap
the crash kernel.
Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 50 ++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ea81e40a25cd..92267e5e9b5f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -887,30 +887,6 @@ static void free_hotplug_page_range(struct page *page, size_t size,
 	}
 }
 
-static void free_hotplug_pgtable_page(struct page *page)
-{
-	free_hotplug_page_range(page, PAGE_SIZE, NULL);
-}
-
-static bool pgtable_range_aligned(unsigned long start, unsigned long end,
-				  unsigned long floor, unsigned long ceiling,
-				  unsigned long mask)
-{
-	start &= mask;
-	if (start < floor)
-		return false;
-
-	if (ceiling) {
-		ceiling &= mask;
-		if (!ceiling)
-			return false;
-	}
-
-	if (end - 1 > ceiling - 1)
-		return false;
-	return true;
-}
-
 static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
@@ -1043,6 +1019,30 @@ static void unmap_range(unsigned long addr, unsigned long end,
 	} while (addr = next, addr < end);
 }
 
+static bool pgtable_range_aligned(unsigned long start, unsigned long end,
+				  unsigned long floor, unsigned long ceiling,
+				  unsigned long mask)
+{
+	start &= mask;
+	if (start < floor)
+		return false;
+
+	if (ceiling) {
+		ceiling &= mask;
+		if (!ceiling)
+			return false;
+	}
+
+	if (end - 1 > ceiling - 1)
+		return false;
+	return true;
+}
+
+static void free_hotplug_pgtable_page(struct page *page)
+{
+	free_hotplug_page_range(page, PAGE_SIZE, NULL);
+}
+
 static void free_empty_pte_table(pmd_t *pmdp, unsigned long addr,
 				 unsigned long end, unsigned long floor,
 				 unsigned long ceiling)
@@ -1196,7 +1196,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
 		free_empty_p4d_table(pgdp, addr, next, floor, ceiling);
 	} while (addr = next, addr < end);
 }
-#endif
+#endif /* CONFIG_MEMORY_HOTPLUG */
 
 #if !ARM64_KERNEL_USES_PMD_MAPS
 int __meminit vmemmap_populate(unsigned long start,
 			       unsigned long end, int node,

From patchwork Fri Aug 19 04:11:55 2022
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 4/5] arm64/mm: remap crash kernel with base pages even if rodata_full disabled
Date: Fri, 19 Aug 2022 07:11:55 +0300
Message-Id: <20220819041156.873873-5-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

For server systems it is important to protect crash kernel memory for
post-mortem analysis.
In order to protect this memory it must be mapped at PTE level. When
CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled, using a crash kernel
essentially forces mapping of the entire linear map with base pages even
if rodata_full is not set (commit 2687275a5843 ("arm64: Force
NO_BLOCK_MAPPINGS if crashkernel reservation is required")), which causes
performance degradation.

With ZONE_DMA/DMA32 enabled, the crash kernel memory is reserved after
the linear map is created, but before multiprocessing and multithreading
are enabled, so it is safe to remap the crash kernel memory with base
pages as long as the page table entries that would be changed do not map
memory that might be accessed during the remapping.

To ensure there are no memory accesses in the range that will be
remapped, align the crash memory reservation to PUD_SIZE boundaries,
remap the entire PUD-aligned area and then free the memory that was
allocated beyond the crash_size requested by the user.

Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/mmu.h      |  3 ++
 arch/arm64/kernel/machine_kexec.c |  6 +++
 arch/arm64/mm/init.c              | 65 +++++++++++++++++++++++++------
 arch/arm64/mm/mmu.c               | 40 ++++++++++++++++---
 4 files changed, 98 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 48f8466a4be9..aba3c095272e 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);
+extern int remap_crashkernel(phys_addr_t start, phys_addr_t size,
+			     phys_addr_t aligned_size);
+extern bool crashkres_protection_possible;
 
 #define INIT_MM_CONTEXT(name)	\
 	.pgd = init_pg_dir,
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 19c2d487cb08..68295403aa40 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -272,6 +272,9 @@ void arch_kexec_protect_crashkres(void)
 {
 	int i;
 
+	if (!crashkres_protection_possible)
+		return;
+
 	for (i = 0; i < kexec_crash_image->nr_segments; i++)
 		set_memory_valid(
 			__phys_to_virt(kexec_crash_image->segment[i].mem),
@@ -282,6 +285,9 @@ void arch_kexec_unprotect_crashkres(void)
 {
 	int i;
 
+	if (!crashkres_protection_possible)
+		return;
+
 	for (i = 0; i < kexec_crash_image->nr_segments; i++)
 		set_memory_valid(
 			__phys_to_virt(kexec_crash_image->segment[i].mem),
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index a6585d50a76c..d5d647aaf23b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -70,19 +71,19 @@ EXPORT_SYMBOL(memstart_addr);
  * crash kernel memory which has a dependency on arm64_dma_phys_limit.
  * Reserving memory early for crash kernel allows linear creation of block
  * mappings (greater than page-granularity) for all the memory bank rangs.
- * In this scheme a comparatively quicker boot is observed.
+ * In this scheme a comparatively quicker boot is observed and overall
+ * memory access via the linear map is more efficient as there is less TLB
+ * pressure.
  *
  * If ZONE_DMA configs are defined, crash kernel memory reservation
  * is delayed until DMA zone memory range size initialization performed in
  * zone_sizes_init(). The defer is necessary to steer clear of DMA zone
- * memory range to avoid overlap allocation. So crash kernel memory boundaries
- * are not known when mapping all bank memory ranges, which otherwise means
- * not possible to exclude crash kernel range from creating block mappings
- * so page-granularity mappings are created for the entire memory range.
- * Hence a slightly slower boot is observed.
- *
- * Note: Page-granularity mappings are necessary for crash kernel memory
- * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
+ * memory range to avoid overlap allocation. To keep block mappings in the
+ * linear map, the first reservation attempt tries to allocate PUD-aligned
+ * region so that it would be possible to remap crash kernel memory with
+ * base pages. If there is not enough memory for such extended reservation,
+ * the exact amount of memory is reserved and crash kernel protection is
+ * disabled.
  */
 #if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
 phys_addr_t __ro_after_init arm64_dma_phys_limit;
@@ -90,6 +91,8 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
 #endif
 
+bool __ro_after_init crashkres_protection_possible;
+
 /* Current arm64 boot protocol requires 2MB alignment */
 #define CRASH_ALIGN			SZ_2M
 
@@ -116,6 +119,43 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
 	return 0;
 }
 
+static unsigned long long __init
+reserve_remap_crashkernel(unsigned long long crash_base,
+			  unsigned long long crash_size,
+			  unsigned long long crash_max)
+{
+	unsigned long long size;
+
+	/*
+	 * If linear map uses base pages or there is no ZONE_DMA/ZONE_DMA32
+	 * the crashk_res will be mapped with PTEs in mmu::map_mem()
+	 */
+	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE) ||
+	    !have_zone_dma()) {
+		crashkres_protection_possible = true;
+		return 0;
+	}
+
+	if (crash_base)
+		return 0;
+
+	size = ALIGN(crash_size, PUD_SIZE);
+
+	crash_base = memblock_phys_alloc_range(size, PUD_SIZE, 0, crash_max);
+	if (!crash_base)
+		return 0;
+
+	if (remap_crashkernel(crash_base, crash_size, size)) {
+		memblock_phys_free(crash_base, size);
+		return 0;
+	}
+
+	crashkres_protection_possible = true;
+	memblock_phys_free(crash_base + crash_size, size - crash_size);
+
+	return crash_base;
+}
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -162,8 +202,11 @@ static void __init reserve_crashkernel(void)
 	if (crash_base)
 		crash_max = crash_base + crash_size;
 
-	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-					       crash_base, crash_max);
+	crash_base = reserve_remap_crashkernel(crash_base, crash_size,
+					       crash_max);
+	if (!crash_base)
+		crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
+						       crash_base, crash_max);
 	if (!crash_base) {
 		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 			crash_size);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 92267e5e9b5f..83f2f18f7f34 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -547,10 +547,8 @@ static void __init map_mem(pgd_t *pgdp)
 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
 
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map) {
-		if (have_zone_dma())
-			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
-		else if (crashk_res.end)
+	if (crash_mem_map && !have_zone_dma()) {
+		if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
 					    resource_size(&crashk_res));
 	}
@@ -875,7 +873,7 @@ int kern_addr_valid(unsigned long addr)
 	return pfn_valid(pte_pfn(pte));
 }
 
-#ifdef CONFIG_MEMORY_HOTPLUG
+#if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_KEXEC_CORE)
 static void free_hotplug_page_range(struct page *page, size_t size,
 				    struct vmem_altmap *altmap)
 {
@@ -1018,7 +1016,9 @@ static void unmap_range(unsigned long addr, unsigned long end,
 		unmap_p4d_range(pgdp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }
+#endif /* CONFIG_MEMORY_HOTPLUG || CONFIG_KEXEC_CORE */
 
+#ifdef CONFIG_MEMORY_HOTPLUG
 static bool pgtable_range_aligned(unsigned long start, unsigned long end,
 				  unsigned long floor, unsigned long ceiling,
 				  unsigned long mask)
@@ -1263,6 +1263,36 @@ void vmemmap_free(unsigned long start, unsigned long end,
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+int __init remap_crashkernel(phys_addr_t start, phys_addr_t size,
+			     phys_addr_t aligned_size)
+{
+#ifdef CONFIG_KEXEC_CORE
+	phys_addr_t end = start + size;
+	phys_addr_t aligned_end = start + aligned_size;
+
+	if (!IS_ALIGNED(start, PUD_SIZE) || !IS_ALIGNED(aligned_end, PUD_SIZE))
+		return -EINVAL;
+
+	/* Clear PUDs containing crash kernel memory */
+	unmap_range(__phys_to_virt(start), __phys_to_virt(aligned_end),
+		    false, NULL);
+
+	/* map crash kernel memory with base pages */
+	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+			     size, PAGE_KERNEL, early_pgtable_alloc,
+			     NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS |
+			     NO_CONT_MAPPINGS);
+
+	/* map area from end of crash kernel to PUD end with large pages */
+	size = aligned_end - end;
+	if (size)
+		__create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
+				     size, PAGE_KERNEL, early_pgtable_alloc, 0);
+#endif
+
+	return 0;
+}
+
 static inline pud_t *fixmap_pud(unsigned long addr)
 {
 	pgd_t *pgdp = pgd_offset_k(addr);

From patchwork Fri Aug 19 04:11:56 2022
L0kRGz/daQpx6q7IW2MwUoqPVeXk/I+k87PKWLa9YPC2vEx4ENS/bTfLFV/o1BGzUePM0z3LE/dRA 2fUdyr+6ol4rJzCOUvqYOH3DZfBSbF54vx4wwu1aTiPCz3vJCCIlZ3T3MaydRKbIOlZ/QIqlaMSJ2 GQPRxzeqV60wE68hX0adqPQN+hGEVd3lDN5h0gA6JB7YVTJ6Mq8WMTpIAbTGHvs60XXKdSfzCeCfl 2yx+d6l+wrkTHU/U1DmACtpSTJ7nn8gQwVti0SQvIXZSW1EGFagsofKAMIPUSktM93NprCdJE448K RomM0a++deql8erqry5g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oOtO2-00HFUE-Jc; Fri, 19 Aug 2022 04:13:35 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oOtN4-00HDiu-4p for linux-arm-kernel@lists.infradead.org; Fri, 19 Aug 2022 04:12:35 +0000 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id AF01E6153F; Fri, 19 Aug 2022 04:12:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8142AC43140; Fri, 19 Aug 2022 04:12:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1660882353; bh=xA88Ovos9H3DHR/HPlEe0AwH/mYo7aEjxL8NAKpwIOI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dZb0HOnlL2Y22Acis6FSAF9KAcZ8A1LfVc5yWntVp0HlvxGO+KFsurLTJob9b+hQh PkfQ8LIifmzwzPin/wpr8baDOHLVGMK9hlbneLnyAg4Vhw+WTENfeCYHnsmxUFfadT /cx8xnmqRSVv/+OVb+4vOf/9n9Cg+KXY/kiu+eBy4vzbXYQcqBckcpqJYtcv8h1t/w qyDkiaBA+aAJjJK7hdkJcNfRp9/x8rfE3dnSkkyV//lEzdUx1c1+C9r06rmuX1dbKN T0Bkc9f7cE2HpKpkJ8/WyTDrSRuU4Ib/DgFQIkZHrw8dPEISYIzaKYU0+Y9UlLXG/k RCeyHxW12/8Cg== From: Mike Rapoport To: linux-arm-kernel@lists.infradead.org Cc: Ard Biesheuvel , Catalin Marinas , Guanghui Feng , Mark Rutland , Mike Rapoport , Mike Rapoport , Will Deacon , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 5/5] arm64/mmu: simplify logic around crash kernel mapping in map_mem() Date: 
Fri, 19 Aug 2022 07:11:56 +0300 Message-Id: <20220819041156.873873-6-rppt@kernel.org> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220819041156.873873-1-rppt@kernel.org> References: <20220819041156.873873-1-rppt@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220818_211234_336102_774867C7 X-CRM114-Status: GOOD ( 18.38 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Mike Rapoport The check for crashkernel command line parameter and presence of CONFIG_ZONE_DMA[32] in mmu::map_mem() are not necessary because crashk_res.end would be set by the time map_mem() runs only if reserve_crashkernel() was called from arm64_memblock_init() and only if there was proper crashkernel parameter in the command line. Leave only check that crashk_res.end is non-zero to decide whether crash kernel memory should be mapped with base pages. Signed-off-by: Mike Rapoport --- arch/arm64/mm/mmu.c | 44 ++++++++++++-------------------------------- 1 file changed, 12 insertions(+), 32 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 83f2f18f7f34..fa23cfa6b772 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -502,21 +502,6 @@ void __init mark_linear_text_alias_ro(void) PAGE_KERNEL_RO); } -static bool crash_mem_map __initdata; - -static int __init enable_crash_mem_map(char *arg) -{ - /* - * Proper parameter parsing is done by reserve_crashkernel(). We only - * need to know if the linear map has to avoid block mappings so that - * the crashkernel reservations can be unmapped later. 
-	 */
-	crash_mem_map = true;
-
-	return 0;
-}
-early_param("crashkernel", enable_crash_mem_map);
-
 static void __init map_mem(pgd_t *pgdp)
 {
 	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -547,11 +532,9 @@ static void __init map_mem(pgd_t *pgdp)
 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);

 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map && !have_zone_dma()) {
-		if (crashk_res.end)
-			memblock_mark_nomap(crashk_res.start,
-					    resource_size(&crashk_res));
-	}
+	if (crashk_res.end)
+		memblock_mark_nomap(crashk_res.start,
+				    resource_size(&crashk_res));
 #endif

 	/* map all the memory banks */
@@ -582,20 +565,17 @@ static void __init map_mem(pgd_t *pgdp)
 	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);

 	/*
-	 * Use page-level mappings here so that we can shrink the region
-	 * in page granularity and put back unused memory to buddy system
-	 * through /sys/kernel/kexec_crash_size interface.
+	 * Use page-level mappings here so that we can protect crash kernel
+	 * memory to allow post-mortem analysis when things go awry.
 	 */
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map && !have_zone_dma()) {
-		if (crashk_res.end) {
-			__map_memblock(pgdp, crashk_res.start,
-				       crashk_res.end + 1,
-				       PAGE_KERNEL,
-				       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
-			memblock_clear_nomap(crashk_res.start,
-					     resource_size(&crashk_res));
-		}
+	if (crashk_res.end) {
+		__map_memblock(pgdp, crashk_res.start,
+			       crashk_res.end + 1,
+			       PAGE_KERNEL,
+			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
+		memblock_clear_nomap(crashk_res.start,
+				     resource_size(&crashk_res));
 	}
 #endif
 }