From patchwork Thu Mar 31 07:40:53 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 12796813
From: Kefeng Wang
Subject: [PATCH v2 resend 1/3] arm64: mm: Do not defer reserve_crashkernel() if only ZONE_DMA32
Date: Thu, 31 Mar 2022 15:40:53 +0800
Message-ID: <20220331074055.125824-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20220331074055.125824-1-wangkefeng.wang@huawei.com>
References: <20220331074055.125824-1-wangkefeng.wang@huawei.com>
The kernel can benefit from block mappings, see commit 031495635b46
("arm64: Do not defer reserve_crashkernel() for platforms with no DMA
memory zones"). If only ZONE_DMA32 is configured, set arm64_dma_phys_limit
to max_zone_phys(32) earlier, in arm64_memblock_init(), so that platforms
with just ZONE_DMA32 enabled benefit as well.

Cc: Vijay Balakrishna
Cc: Pasha Tatashin
Cc: Will Deacon
Reviewed-by: Vijay Balakrishna
Signed-off-by: Kefeng Wang
---
 arch/arm64/mm/init.c | 23 +++++++++++++----------
 arch/arm64/mm/mmu.c  |  6 ++----
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8ac25f19084e..fb01eb489fa9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -65,8 +65,9 @@ EXPORT_SYMBOL(memstart_addr);
  * Memory reservation for crash kernel either done early or deferred
  * depending on DMA memory zones configs (ZONE_DMA) --
  *
- * In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
- * here instead of max_zone_phys().  This lets early reservation of
+ * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
+ * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
+ * initialised to dma32_phys_limit. This lets early reservation of
  * crash kernel memory which has a dependency on arm64_dma_phys_limit.
  * Reserving memory early for crash kernel allows linear creation of block
  * mappings (greater than page-granularity) for all the memory bank rangs.
@@ -84,6 +85,7 @@ EXPORT_SYMBOL(memstart_addr);
  * Note: Page-granularity mapppings are necessary for crash kernel memory
  * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
  */
+static phys_addr_t __ro_after_init dma32_phys_limit;
 #if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
 phys_addr_t __ro_after_init arm64_dma_phys_limit;
 #else
@@ -160,11 +162,10 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
 static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
         unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
-        unsigned int __maybe_unused acpi_zone_dma_bits;
-        unsigned int __maybe_unused dt_zone_dma_bits;
-        phys_addr_t __maybe_unused dma32_phys_limit = max_zone_phys(32);
-
 #ifdef CONFIG_ZONE_DMA
+        unsigned int acpi_zone_dma_bits;
+        unsigned int dt_zone_dma_bits;
+
         acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
         dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
         zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
@@ -173,8 +174,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 #endif
 #ifdef CONFIG_ZONE_DMA32
         max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
-        if (!arm64_dma_phys_limit)
-                arm64_dma_phys_limit = dma32_phys_limit;
 #endif
         max_zone_pfns[ZONE_NORMAL] = max;
@@ -336,8 +335,12 @@ void __init arm64_memblock_init(void)
         early_init_fdt_scan_reserved_mem();
 
-        if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
+        dma32_phys_limit = max_zone_phys(32);
+        if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
+                if (IS_ENABLED(CONFIG_ZONE_DMA32))
+                        arm64_dma_phys_limit = dma32_phys_limit;
                 reserve_crashkernel();
+        }
 
         high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 }
@@ -385,7 +388,7 @@ void __init bootmem_init(void)
          * request_standard_resources() depends on crashkernel's memory being
          * reserved, so do it here.
          */
-        if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
+        if (IS_ENABLED(CONFIG_ZONE_DMA))
                 reserve_crashkernel();
 
         memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 626ec32873c6..23734481318a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,8 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
         if (crash_mem_map) {
-                if (IS_ENABLED(CONFIG_ZONE_DMA) ||
-                    IS_ENABLED(CONFIG_ZONE_DMA32))
+                if (IS_ENABLED(CONFIG_ZONE_DMA))
                         flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
                 else if (crashk_res.end)
                         memblock_mark_nomap(crashk_res.start,
@@ -571,8 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
          * through /sys/kernel/kexec_crash_size interface.
          */
 #ifdef CONFIG_KEXEC_CORE
-        if (crash_mem_map &&
-            !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
+        if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
                 if (crashk_res.end) {
                         __map_memblock(pgdp, crashk_res.start,
                                        crashk_res.end + 1,
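
To make the new ordering concrete, here is a small standalone C sketch (not
kernel code) of the decision this patch moves into arm64_memblock_init():
the crash kernel can be reserved early whenever the DMA limit is already
known, i.e. when ZONE_DMA is disabled; with only ZONE_DMA32 the limit is
simply max_zone_phys(32). The struct, booleans and the 4 GiB value below are
illustrative stand-ins for the real Kconfig options and globals, not the
kernel's own API.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-ins for CONFIG_ZONE_DMA / CONFIG_ZONE_DMA32 (illustrative). */
    struct config {
            bool zone_dma;
            bool zone_dma32;
    };

    /*
     * Model of the early-vs-deferred choice: without ZONE_DMA the DMA
     * limit is known at memblock time, so reservation can happen early;
     * with ZONE_DMA it still waits for DT/ACPI parsing in zone_sizes_init().
     */
    static bool reserve_early(struct config c, uint64_t dma32_limit,
                              uint64_t *dma_phys_limit)
    {
            if (!c.zone_dma) {
                    if (c.zone_dma32)
                            *dma_phys_limit = dma32_limit; /* max_zone_phys(32) analogue */
                    return true;   /* reserve_crashkernel() now */
            }
            return false;          /* defer to bootmem_init() */
    }

    int main(void)
    {
            struct config cases[] = {
                    { .zone_dma = true,  .zone_dma32 = true  },
                    { .zone_dma = false, .zone_dma32 = true  }, /* newly early */
                    { .zone_dma = false, .zone_dma32 = false },
            };

            for (unsigned int i = 0; i < 3; i++) {
                    uint64_t limit = UINT64_MAX;
                    bool early = reserve_early(cases[i], 0x100000000ULL, &limit);

                    printf("ZONE_DMA=%d ZONE_DMA32=%d -> %s\n",
                           cases[i].zone_dma, cases[i].zone_dma32,
                           early ? "early reservation" : "deferred to bootmem_init()");
            }
            return 0;
    }

Early reservation is what allows map_mem() to keep block mappings for the
linear map instead of forcing page-granularity mappings.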
From patchwork Thu Mar 31 07:40:54 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 12796814
From: Kefeng Wang
Subject: [PATCH v2 resend 2/3] arm64: mm: Don't defer reserve_crashkernel() with dma_force_32bit
Date: Thu, 31 Mar 2022 15:40:54 +0800
Message-ID: <20220331074055.125824-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20220331074055.125824-1-wangkefeng.wang@huawei.com>
References: <20220331074055.125824-1-wangkefeng.wang@huawei.com>

ARM64 enables ZONE_DMA by default, and with ZONE_DMA the crash kernel
memory reservation is delayed until the DMA zone memory range size
initialization is performed in zone_sizes_init(). Most platforms, however,
use a 32-bit zone_dma_bits, so add a dma_force_32bit kernel parameter when
ZONE_DMA is enabled, and initialize arm64_dma_phys_limit to
dma32_phys_limit in arm64_memblock_init() when dma_force_32bit is set.
This lets the crash kernel be reserved earlier and allows linear creation
with block mappings.

Signed-off-by: Kefeng Wang
Reported-by: kernel test robot
Reviewed-by: Vijay Balakrishna
---
 arch/arm64/include/asm/kexec.h |  1 +
 arch/arm64/mm/init.c           | 42 ++++++++++++++++++++++++++--------
 arch/arm64/mm/mmu.c            |  4 ++--
 3 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 9839bfc163d7..8bea40aea359 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -95,6 +95,7 @@ void cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
                       unsigned long arg0, unsigned long arg1, unsigned long arg2);
 #endif
 
+bool crashkernel_could_early_reserve(void);
 
 #define ARCH_HAS_KIMAGE_ARCH
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index fb01eb489fa9..0aafa9181607 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -66,7 +66,8 @@ EXPORT_SYMBOL(memstart_addr);
  * depending on DMA memory zones configs (ZONE_DMA) --
  *
  * In absence of ZONE_DMA and ZONE_DMA32 configs arm64_dma_phys_limit
- * initialized here and if only with ZONE_DMA32 arm64_dma_phys_limit
+ * initialized here, and if only with ZONE_DMA32 or if with ZONE_DMA
+ * and dma_force_32bit kernel parameter, the arm64_dma_phys_limit is
  * initialised to dma32_phys_limit. This lets early reservation of
  * crash kernel memory which has a dependency on arm64_dma_phys_limit.
  * Reserving memory early for crash kernel allows linear creation of block
@@ -92,6 +93,27 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
 #endif
 
+static bool __ro_after_init arm64_dma_force_32bit;
+#ifdef CONFIG_ZONE_DMA
+static int __init arm64_dma_force_32bit_setup(char *p)
+{
+        zone_dma_bits = 32;
+        arm64_dma_force_32bit = true;
+
+        return 0;
+}
+early_param("dma_force_32bit", arm64_dma_force_32bit_setup);
+#endif
+
+bool __init crashkernel_could_early_reserve(void)
+{
+        if (!IS_ENABLED(CONFIG_ZONE_DMA))
+                return true;
+        if (arm64_dma_force_32bit)
+                return true;
+        return false;
+}
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -163,12 +185,14 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 {
         unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
 #ifdef CONFIG_ZONE_DMA
-        unsigned int acpi_zone_dma_bits;
-        unsigned int dt_zone_dma_bits;
+        if (!arm64_dma_force_32bit) {
+                unsigned int acpi_zone_dma_bits;
+                unsigned int dt_zone_dma_bits;
 
-        acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
-        dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
-        zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
+                acpi_zone_dma_bits = fls64(acpi_iort_dma_get_max_cpu_address());
+                dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));
+                zone_dma_bits = min3(32U, dt_zone_dma_bits, acpi_zone_dma_bits);
+        }
         arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
         max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
 #endif
@@ -336,8 +360,8 @@ void __init arm64_memblock_init(void)
         early_init_fdt_scan_reserved_mem();
 
         dma32_phys_limit = max_zone_phys(32);
-        if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
-                if (IS_ENABLED(CONFIG_ZONE_DMA32))
+        if (crashkernel_could_early_reserve()) {
+                if (IS_ENABLED(CONFIG_ZONE_DMA32) || arm64_dma_force_32bit)
                         arm64_dma_phys_limit = dma32_phys_limit;
                 reserve_crashkernel();
         }
@@ -388,7 +412,7 @@ void __init bootmem_init(void)
          * request_standard_resources() depends on crashkernel's memory being
          * reserved, so do it here.
          */
-        if (IS_ENABLED(CONFIG_ZONE_DMA))
+        if (!crashkernel_could_early_reserve())
                 reserve_crashkernel();
 
         memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 23734481318a..8f7e8452d906 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,7 +529,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
         if (crash_mem_map) {
-                if (IS_ENABLED(CONFIG_ZONE_DMA))
+                if (!crashkernel_could_early_reserve())
                         flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
                 else if (crashk_res.end)
                         memblock_mark_nomap(crashk_res.start,
@@ -570,7 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
          * through /sys/kernel/kexec_crash_size interface.
          */
 #ifdef CONFIG_KEXEC_CORE
-        if (crash_mem_map && !IS_ENABLED(CONFIG_ZONE_DMA)) {
+        if (crash_mem_map && crashkernel_could_early_reserve()) {
                 if (crashk_res.end) {
                         __map_memblock(pgdp, crashk_res.start,
                                        crashk_res.end + 1,
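
As a rough model of the helper added above, the following standalone C
sketch (again not kernel code; the two flags are plain booleans standing in
for CONFIG_ZONE_DMA and the new dma_force_32bit boot parameter) shows the
three cases crashkernel_could_early_reserve() distinguishes after this
patch:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Model of crashkernel_could_early_reserve(): with dma_force_32bit the
     * DMA limit is pinned to 32 bits, so it no longer depends on DT/ACPI
     * parsing and the crash kernel can again be reserved early.
     */
    static bool could_early_reserve(bool cfg_zone_dma, bool dma_force_32bit)
    {
            if (!cfg_zone_dma)
                    return true;   /* no ZONE_DMA: limit known up front */
            if (dma_force_32bit)
                    return true;   /* ZONE_DMA forced to 32 bits: also known */
            return false;          /* otherwise wait for zone_sizes_init() */
    }

    int main(void)
    {
            printf("ZONE_DMA=y, dma_force_32bit -> %s\n",
                   could_early_reserve(true, true) ? "early" : "deferred");
            printf("ZONE_DMA=y, default         -> %s\n",
                   could_early_reserve(true, false) ? "early" : "deferred");
            printf("ZONE_DMA=n                  -> %s\n",
                   could_early_reserve(false, false) ? "early" : "deferred");
            return 0;
    }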
From patchwork Thu Mar 31 07:40:55 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 12796815
From: Kefeng Wang
Subject: [PATCH v2 resend 3/3] arm64: mm: Cleanup useless parameters in zone_sizes_init()
Date: Thu, 31 Mar 2022 15:40:55 +0800
Message-ID: <20220331074055.125824-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20220331074055.125824-1-wangkefeng.wang@huawei.com>
References: <20220331074055.125824-1-wangkefeng.wang@huawei.com>
Directly use max_pfn for max, and since nothing uses min, kill both
parameters.

Signed-off-by: Kefeng Wang
Reviewed-by: Vijay Balakrishna
---
 arch/arm64/mm/init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 0aafa9181607..80e9ff37b697 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -181,7 +181,7 @@ static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
         return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
 }
 
-static void __init zone_sizes_init(unsigned long min, unsigned long max)
+static void __init zone_sizes_init(void)
 {
         unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
 #ifdef CONFIG_ZONE_DMA
@@ -199,7 +199,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 #ifdef CONFIG_ZONE_DMA32
         max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
 #endif
-        max_zone_pfns[ZONE_NORMAL] = max;
+        max_zone_pfns[ZONE_NORMAL] = max_pfn;
 
         free_area_init(max_zone_pfns);
 }
@@ -401,7 +401,7 @@ void __init bootmem_init(void)
          * done after the fixed reservations
          */
         sparse_init();
-        zone_sizes_init(min, max);
+        zone_sizes_init();
 
         /*
          * Reserve the CMA area after arm64_dma_phys_limit was initialised.
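
For completeness, a minimal standalone sketch of the arithmetic the
simplified zone_sizes_init() now does on its own: the ZONE_NORMAL ceiling
comes straight from the global max_pfn instead of the removed 'max'
argument, while ZONE_DMA32 still uses PFN_DOWN(dma32_phys_limit). The page
size and addresses below are purely illustrative assumptions (4K pages, 8
GiB of DRAM), not values from the patch.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                       /* assume 4K pages */
    #define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

    int main(void)
    {
            /* Illustrative stand-ins for the globals the kernel tracks. */
            uint64_t dma32_phys_limit = 0x100000000ULL;  /* 4 GiB */
            uint64_t max_pfn = PFN_DOWN(0x200000000ULL); /* DRAM ends at 8 GiB */

            printf("ZONE_DMA32  top pfn: 0x%llx\n",
                   (unsigned long long)PFN_DOWN(dma32_phys_limit));
            printf("ZONE_NORMAL top pfn: 0x%llx\n",  /* was the 'max' argument */
                   (unsigned long long)max_pfn);
            return 0;
    }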