From patchwork Tue Nov 8 03:44:06 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 13035775
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org
Cc: joey.gouly@arm.com, Anshuman Khandual, Catalin Marinas, Will Deacon
Subject: [PATCH V2] arm64/mm: Drop ARM64_KERNEL_USES_PMD_MAPS
Date: Tue, 8 Nov 2022 09:14:06 +0530
Message-Id: <20221108034406.2950071-1-anshuman.khandual@arm.com>

Currently ARM64_KERNEL_USES_PMD_MAPS is an unnecessary abstraction. Kernel
mapping at the PMD (aka huge page aka block) level is only applicable with
4K base pages, which makes it 2MB aligned, a necessary requirement for the
linear map and the physical memory start address. The same condition can be
checked directly against the base page size, so drop the now redundant
macro ARM64_KERNEL_USES_PMD_MAPS.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual
---
This applies on v6.1-rc4

Changes in V2:

- Reverted back macro movements to preserve existing in-code comments

Changes in V1:

https://lore.kernel.org/all/20220923130841.1382741-1-anshuman.khandual@arm.com/

 arch/arm64/include/asm/kernel-pgtable.h | 11 +++--------
 arch/arm64/mm/mmu.c                     |  2 +-
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 32d14f481f0c..fcd14197756f 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -18,11 +18,6 @@
  * with 4K (section size = 2M) but not with 16K (section size = 32M) or
  * 64K (section size = 512M).
  */
-#ifdef CONFIG_ARM64_4K_PAGES
-#define ARM64_KERNEL_USES_PMD_MAPS 1
-#else
-#define ARM64_KERNEL_USES_PMD_MAPS 0
-#endif
 
 /*
  * The idmap and swapper page tables need some space reserved in the kernel
@@ -34,7 +29,7 @@
  * VA range, so pages required to map highest possible PA are reserved in all
  * cases.
  */
-#if ARM64_KERNEL_USES_PMD_MAPS
+#ifdef CONFIG_ARM64_4K_PAGES
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS - 1)
 #else
 #define SWAPPER_PGTABLE_LEVELS	(CONFIG_PGTABLE_LEVELS)
@@ -96,7 +91,7 @@
 #define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
 
 /* Initial memory map size */
-#if ARM64_KERNEL_USES_PMD_MAPS
+#ifdef CONFIG_ARM64_4K_PAGES
 #define SWAPPER_BLOCK_SHIFT	PMD_SHIFT
 #define SWAPPER_BLOCK_SIZE	PMD_SIZE
 #define SWAPPER_TABLE_SHIFT	PUD_SHIFT
@@ -112,7 +107,7 @@
 #define SWAPPER_PTE_FLAGS	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define SWAPPER_PMD_FLAGS	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#if ARM64_KERNEL_USES_PMD_MAPS
+#ifdef CONFIG_ARM64_4K_PAGES
 #define SWAPPER_RW_MMUFLAGS	(PMD_ATTRINDX(MT_NORMAL) | SWAPPER_PMD_FLAGS)
 #define SWAPPER_RX_MMUFLAGS	(SWAPPER_RW_MMUFLAGS | PMD_SECT_RDONLY)
 #else
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9a7c38965154..d386033a074c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1196,7 +1196,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!ARM64_KERNEL_USES_PMD_MAPS)
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 
 	do {
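
A note for readers unfamiliar with the kconfig helper used in the mmu.c
hunk: IS_ENABLED() expands to the compile-time constant 1 or 0, so
"if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))" gets folded away by the compiler
exactly like the old 0/1-valued ARM64_KERNEL_USES_PMD_MAPS check. The
sketch below is not part of the patch; it uses a simplified version of the
preprocessor machinery from include/linux/kconfig.h (the real IS_ENABLED()
also handles =m options), and uses_pmd_maps() is a made-up function purely
for illustration. It compiles standalone:

/*
 * Simplified from include/linux/kconfig.h: when CONFIG_FOO is defined
 * to 1, token pasting turns __ARG_PLACEHOLDER_1 into "0," so the
 * second argument (1) is selected; otherwise the fallback 0 is picked.
 */
#define __ARG_PLACEHOLDER_1			0,
#define __take_second_arg(__ignored, val, ...)	val
#define ____is_defined(arg1_or_junk)		__take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val)			____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x)				___is_defined(x)
#define IS_ENABLED(option)			__is_defined(option)

/* Pretend this build selected 4K base pages. */
#define CONFIG_ARM64_4K_PAGES 1

/* Hypothetical helper, only to show the constant folding. */
static int uses_pmd_maps(void)
{
	/*
	 * IS_ENABLED(CONFIG_ARM64_4K_PAGES) is the constant 1 here, so
	 * this branch compiles to nothing, giving the same dead-code
	 * elimination the old 0/1 ARM64_KERNEL_USES_PMD_MAPS provided.
	 */
	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
		return 0;

	return 1;	/* PMD (block) mappings are usable */
}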