
[v7,02/33] arm64: mm: Avoid swapper block size when choosing vmemmap granularity

Message ID 20221111171201.2088501-3-ardb@kernel.org (mailing list archive)
State New, archived
Series arm64: robustify boot sequence and add support for WXN

Commit Message

Ard Biesheuvel Nov. 11, 2022, 5:11 p.m. UTC
The logic to decide between PTE and PMD mappings in the vmemmap region
is currently based on the granularity of the initial ID map, but the
two have little to do with each other.

The reason we use PMDs here on 4k page size kernels is that a struct
page array describing a single section of memory takes up at least the
size covered by a PMD, so mapping down to pages is pointless.
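
(Illustration, not part of the patch: a minimal userspace sketch of the
arithmetic above, assuming 4k pages, a 64-byte struct page, 128 MiB
sections and 2 MiB PMDs; the real values come from the kernel headers,
so this only mirrors the shape of the check, not every configuration.)

#include <stdio.h>

int main(void)
{
	const int page_shift        = 12; /* assumed: 4k pages */
	const int struct_page_shift = 6;  /* assumed: 64-byte struct page */
	const int section_size_bits = 27; /* assumed: 128 MiB memory sections */
	const int pmd_shift         = 21; /* assumed: 2 MiB PMD mappings */

	/* Each page of memory needs 2^struct_page_shift bytes of vmemmap,
	 * so the struct page array is smaller than the memory it describes
	 * by this many address bits (what VMEMMAP_SHIFT expresses on arm64). */
	int vmemmap_shift = page_shift - struct_page_shift;

	/* Size of the struct page array covering one memory section */
	long per_section_vmemmap = 1L << (section_size_bits - vmemmap_shift);

	printf("struct page array per section: %ld KiB\n",
	       per_section_vmemmap >> 10);
	printf("vmemmap uses %s mappings\n",
	       section_size_bits - vmemmap_shift < pmd_shift ? "PTE" : "PMD");
	return 0;
}

With those assumed numbers the array comes out to exactly 2 MiB and
fills a whole PMD, so the check keeps PMD mappings; a configuration
where it came out smaller than a PMD would fall back to base pages.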

So use the correct conditional, and add a comment to clarify it.

This allows us to remove or rename the swapper block size related
constants in the future.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/mm/mmu.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

Comments

Anshuman Khandual Nov. 24, 2022, 5:11 a.m. UTC | #1
On 11/11/22 22:41, Ard Biesheuvel wrote:
> The logic to decide between PTE and PMD mappings in the vmemmap region
> is currently based on the granularity of the initial ID map, but the
> two have little to do with each other.
> 
> The reason we use PMDs here on 4k page size kernels is that a struct
> page array describing a single section of memory takes up at least the
> size covered by a PMD, so mapping down to pages is pointless.
> 
> So use the correct conditional, and add a comment to clarify it.
> 
> This allows us to remove or rename the swapper block size related
> constants in the future.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---

The patch LGTM in itself.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

>  arch/arm64/mm/mmu.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 757c2fe54d2e99f0..0c35e1f195678695 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1196,7 +1196,12 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  
>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>  
> -	if (!ARM64_KERNEL_USES_PMD_MAPS)
> +	/*
> +	 * Use page mappings for the vmemmap region if the area taken up by a
> +	 * struct page array covering a single section is smaller than the area
> +	 * covered by a PMD.
> +	 */
> +	if (SECTION_SIZE_BITS - VMEMMAP_SHIFT < PMD_SHIFT)
>  		return vmemmap_populate_basepages(start, end, node, altmap);
>  
>  	do {

Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 757c2fe54d2e99f0..0c35e1f195678695 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1196,7 +1196,12 @@  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-	if (!ARM64_KERNEL_USES_PMD_MAPS)
+	/*
+	 * Use page mappings for the vmemmap region if the area taken up by a
+	 * struct page array covering a single section is smaller than the area
+	 * covered by a PMD.
+	 */
+	if (SECTION_SIZE_BITS - VMEMMAP_SHIFT < PMD_SHIFT)
 		return vmemmap_populate_basepages(start, end, node, altmap);
 
 	do {