
arm64: Force SPARSEMEM_VMEMMAP as the only memory management model

Message ID 20210420093559.23168-1-catalin.marinas@arm.com (mailing list archive)
State New, archived
Series arm64: Force SPARSEMEM_VMEMMAP as the only memory management model

Commit Message

Catalin Marinas April 20, 2021, 9:35 a.m. UTC
Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
does not seem to boot in certain configurations (guest under KVM with
Qemu as a VMM). Since the reduction of SECTION_SIZE_BITS to 27 (4K
pages) or 29 (64K pages), there's little argument against the memory
wasted by the mem_map array with SPARSEMEM.

Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and
remove the corresponding #ifdefs under arch/arm64/.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---

If there are any concerns, please shout (but show numbers as well to
back it up).

 arch/arm64/Kconfig                      | 10 +---------
 arch/arm64/include/asm/kernel-pgtable.h |  2 +-
 arch/arm64/include/asm/memory.h         |  4 ++--
 arch/arm64/include/asm/sparsemem.h      |  3 ---
 arch/arm64/mm/init.c                    |  8 ++------
 arch/arm64/mm/mmu.c                     |  2 --
 arch/arm64/mm/ptdump.c                  |  2 --
 7 files changed, 6 insertions(+), 25 deletions(-)
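A back-of-the-envelope sketch (not part of the patch) of the arithmetic behind the commit message's memory-waste argument; the 64-byte sizeof(struct page) is an assumed typical value and the helpers below are illustrative, not kernel code:

```c
#include <assert.h>

/* Illustrative values only, mirroring asm/sparsemem.h and common configs. */
#define STRUCT_PAGE_SIZE 64UL            /* assumed sizeof(struct page) */

/* Bytes of physical memory covered by one SPARSEMEM section. */
static unsigned long section_bytes(int section_size_bits)
{
	return 1UL << section_size_bits;
}

/* mem_map bytes needed to describe one section. */
static unsigned long memmap_per_section(int section_size_bits, int page_shift)
{
	unsigned long pages = 1UL << (section_size_bits - page_shift);
	return pages * STRUCT_PAGE_SIZE;
}
```

With SECTION_SIZE_BITS = 27 and 4K pages (page shift 12), a section spans 128 MiB and its slice of mem_map costs 2 MiB; with SECTION_SIZE_BITS = 29 and 64K pages (page shift 16), a 512 MiB section needs only 512 KiB of mem_map.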

Comments

Will Deacon April 20, 2021, 10:19 a.m. UTC | #1
On Tue, Apr 20, 2021 at 10:35:59AM +0100, Catalin Marinas wrote:
> Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
> SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
> does not seem to boot in certain configurations (guest under KVM with
> Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K
> pages) or 29 (64K page), there's little argument against the memory
> wasted by the mem_map array with SPARSEMEM.
> 
> Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and
> remove the corresponding #ifdefs under arch/arm64/.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
> 
> If there are any concerns, please shout (but show numbers as well to
> back it up).
> 
>  arch/arm64/Kconfig                      | 10 +---------
>  arch/arm64/include/asm/kernel-pgtable.h |  2 +-
>  arch/arm64/include/asm/memory.h         |  4 ++--
>  arch/arm64/include/asm/sparsemem.h      |  3 ---
>  arch/arm64/mm/init.c                    |  8 ++------
>  arch/arm64/mm/mmu.c                     |  2 --
>  arch/arm64/mm/ptdump.c                  |  2 --
>  7 files changed, 6 insertions(+), 25 deletions(-)

Acked-by: Will Deacon <will@kernel.org>

Will
Ard Biesheuvel April 20, 2021, 10:28 a.m. UTC | #2
On Tue, 20 Apr 2021 at 12:19, Will Deacon <will@kernel.org> wrote:
>
> On Tue, Apr 20, 2021 at 10:35:59AM +0100, Catalin Marinas wrote:
> > Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
> > SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
> > does not seem to boot in certain configurations (guest under KVM with
> > Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K
> > pages) or 29 (64K page), there's little argument against the memory
> > wasted by the mem_map array with SPARSEMEM.
> >
> > Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and
> > remove the corresponding #ifdefs under arch/arm64/.
> >
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >
> > If there are any concerns, please shout (but show numbers as well to
> > back it up).
> >
> >  arch/arm64/Kconfig                      | 10 +---------
> >  arch/arm64/include/asm/kernel-pgtable.h |  2 +-
> >  arch/arm64/include/asm/memory.h         |  4 ++--
> >  arch/arm64/include/asm/sparsemem.h      |  3 ---
> >  arch/arm64/mm/init.c                    |  8 ++------
> >  arch/arm64/mm/mmu.c                     |  2 --
> >  arch/arm64/mm/ptdump.c                  |  2 --
> >  7 files changed, 6 insertions(+), 25 deletions(-)
>
> Acked-by: Will Deacon <will@kernel.org>
>

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Marc Zyngier April 20, 2021, 10:47 a.m. UTC | #3
On Tue, 20 Apr 2021 10:35:59 +0100,
Catalin Marinas <catalin.marinas@arm.com> wrote:
> 
> Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
> SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
> does not seem to boot in certain configurations (guest under KVM with
> Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K
> pages) or 29 (64K page), there's little argument against the memory
> wasted by the mem_map array with SPARSEMEM.
> 
> Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and
> remove the corresponding #ifdefs under arch/arm64/.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>

Acked-by: Marc Zyngier <maz@kernel.org>

	M.
Anshuman Khandual April 21, 2021, 4:48 a.m. UTC | #4
On 4/20/21 3:05 PM, Catalin Marinas wrote:
> Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
> SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
> does not seem to boot in certain configurations (guest under KVM with
> Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K
> pages) or 29 (64K page), there's little argument against the memory
> wasted by the mem_map array with SPARSEMEM.
> 
> Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and
> remove the corresponding #ifdefs under arch/arm64/.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
> 
> If there are any concerns, please shout (but show numbers as well to
> back it up).
> 
>  arch/arm64/Kconfig                      | 10 +---------
>  arch/arm64/include/asm/kernel-pgtable.h |  2 +-
>  arch/arm64/include/asm/memory.h         |  4 ++--
>  arch/arm64/include/asm/sparsemem.h      |  3 ---
>  arch/arm64/mm/init.c                    |  8 ++------
>  arch/arm64/mm/mmu.c                     |  2 --
>  arch/arm64/mm/ptdump.c                  |  2 --
>  7 files changed, 6 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 9b4d629f7628..01c294035928 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1040,15 +1040,7 @@ source "kernel/Kconfig.hz"
>  config ARCH_SPARSEMEM_ENABLE
>  	def_bool y
>  	select SPARSEMEM_VMEMMAP_ENABLE
> -
> -config ARCH_SPARSEMEM_DEFAULT
> -	def_bool ARCH_SPARSEMEM_ENABLE
> -
> -config ARCH_SELECT_MEMORY_MODEL
> -	def_bool ARCH_SPARSEMEM_ENABLE
> -
> -config ARCH_FLATMEM_ENABLE
> -	def_bool !NUMA
> +	select SPARSEMEM_VMEMMAP
>  
>  config HW_PERF_EVENTS
>  	def_bool y
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 587c504a4c8b..d44df9d62fc9 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -136,7 +136,7 @@
>   * has a direct correspondence, and needs to appear sufficiently aligned
>   * in the virtual address space.
>   */
> -#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
> +#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
>  #define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
>  #else
>  #define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_SHIFT)
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index b943879c1c24..15018dc59554 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -329,7 +329,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>   */
>  #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
>  
> -#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> +#if defined(CONFIG_DEBUG_VIRTUAL)

A small nit: should this be #ifdef CONFIG_DEBUG_VIRTUAL instead? This is
a user-selectable config and the conditional check here does not have an
#elif part either. But then there are similar instances elsewhere on the
arm64 platform as well.

>  #define page_to_virt(x)	({						\
>  	__typeof__(x) __page = x;					\
>  	void *__addr = __va(page_to_phys(__page));			\
> @@ -349,7 +349,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
>  	(struct page *)__addr;						\
>  })
> -#endif /* !CONFIG_SPARSEMEM_VMEMMAP || CONFIG_DEBUG_VIRTUAL */
> +#endif /* CONFIG_DEBUG_VIRTUAL */
>  
>  #define virt_addr_valid(addr)	({					\
>  	__typeof__(addr) __addr = __tag_reset(addr);			\
> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
> index eb4a75d720ed..4b73463423c3 100644
> --- a/arch/arm64/include/asm/sparsemem.h
> +++ b/arch/arm64/include/asm/sparsemem.h
> @@ -5,7 +5,6 @@
>  #ifndef __ASM_SPARSEMEM_H
>  #define __ASM_SPARSEMEM_H
>  
> -#ifdef CONFIG_SPARSEMEM
>  #define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
>  
>  /*
> @@ -27,6 +26,4 @@
>  #define SECTION_SIZE_BITS 27
>  #endif /* CONFIG_ARM64_64K_PAGES */
>  
> -#endif /* CONFIG_SPARSEMEM*/
> -
>  #endif
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 3685e12aba9b..a205538aa1d5 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -220,6 +220,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>  int pfn_valid(unsigned long pfn)
>  {
>  	phys_addr_t addr = PFN_PHYS(pfn);
> +	struct mem_section *ms;
>  
>  	/*
>  	 * Ensure the upper PAGE_SHIFT bits are clear in the
> @@ -230,10 +231,6 @@ int pfn_valid(unsigned long pfn)
>  	if (PHYS_PFN(addr) != pfn)
>  		return 0;
>  
> -#ifdef CONFIG_SPARSEMEM
> -{
> -	struct mem_section *ms;
> -
>  	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>  		return 0;
>  
> @@ -252,8 +249,7 @@ int pfn_valid(unsigned long pfn)
>  	 */
>  	if (!early_section(ms))
>  		return pfn_section_valid(ms, pfn);
> -}
> -#endif
> +
>  	return memblock_is_map_memory(addr);
>  }
>  EXPORT_SYMBOL(pfn_valid);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index fac957ff5187..af0ebcad3e1f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1113,7 +1113,6 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
>  }
>  #endif
>  
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
>  #if !ARM64_SWAPPER_USES_SECTION_MAPS
>  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  		struct vmem_altmap *altmap)
> @@ -1177,7 +1176,6 @@ void vmemmap_free(unsigned long start, unsigned long end,
>  	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
>  #endif
>  }
> -#endif	/* CONFIG_SPARSEMEM_VMEMMAP */
>  
>  static inline pud_t *fixmap_pud(unsigned long addr)
>  {
> diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
> index a50e92ea1878..a1937dfff31c 100644
> --- a/arch/arm64/mm/ptdump.c
> +++ b/arch/arm64/mm/ptdump.c
> @@ -51,10 +51,8 @@ static struct addr_marker address_markers[] = {
>  	{ FIXADDR_TOP,			"Fixmap end" },
>  	{ PCI_IO_START,			"PCI I/O start" },
>  	{ PCI_IO_END,			"PCI I/O end" },
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
>  	{ VMEMMAP_START,		"vmemmap start" },
>  	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
> -#endif
>  	{ -1,				NULL },
>  };
>  
> 

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Catalin Marinas April 21, 2021, 1:02 p.m. UTC | #5
On Wed, Apr 21, 2021 at 10:18:56AM +0530, Anshuman Khandual wrote:
> On 4/20/21 3:05 PM, Catalin Marinas wrote:
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index b943879c1c24..15018dc59554 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -329,7 +329,7 @@ static inline void *phys_to_virt(phys_addr_t x)
> >   */
> >  #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
> >  
> > -#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> > +#if defined(CONFIG_DEBUG_VIRTUAL)
> 
> A small nit. Should this be #ifdef CONFIG_DEBUG_VIRTUAL instead ?

Yeah, for consistency I changed it to #ifdef.

> This is
> an user selectable config and the conditional check here does not have an
> #elseif part either. But then there are similar such instances else where
> on arm64 platform as well.

I'm not sure I get it. What would an #elif need to check? We already
have an #else block for this #ifdef.
Anshuman Khandual April 22, 2021, 3:04 a.m. UTC | #6
On 4/21/21 6:32 PM, Catalin Marinas wrote:
> On Wed, Apr 21, 2021 at 10:18:56AM +0530, Anshuman Khandual wrote:
>> On 4/20/21 3:05 PM, Catalin Marinas wrote:
>>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>>> index b943879c1c24..15018dc59554 100644
>>> --- a/arch/arm64/include/asm/memory.h
>>> +++ b/arch/arm64/include/asm/memory.h
>>> @@ -329,7 +329,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>>>   */
>>>  #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
>>>  
>>> -#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
>>> +#if defined(CONFIG_DEBUG_VIRTUAL)
>>
>> A small nit. Should this be #ifdef CONFIG_DEBUG_VIRTUAL instead ?
> 
> Yeah, for consistency I changed it to #ifdef.
> 
>> This is
>> an user selectable config and the conditional check here does not have an
>> #elseif part either. But then there are similar such instances else where
>> on arm64 platform as well.
> 
> I'm not sure I get it. What would an #elseif need to check? We already
> have an #else block for this #ifdef.

IIUC #elif always requires a defined() construct. In such cases #if
defined() might be preferable, in order to match the subsequent #elif.
The point being, in this particular case there is no #elif that would
have justified a preceding #if defined() construct, so plain #ifdef is
preferred.
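A toy (non-kernel) sketch of the distinction being discussed; CONFIG_FOO and CONFIG_BAR are made-up stand-ins for Kconfig symbols, not real options:

```c
#include <assert.h>

#define CONFIG_FOO 1            /* pretend Kconfig enabled this option */
/* CONFIG_BAR deliberately left undefined */

/* Plain #ifdef: tests a single macro, no operators, no #elif chain. */
#ifdef CONFIG_FOO
#define HAVE_FOO 1
#else
#define HAVE_FOO 0
#endif

/*
 * #if defined(): composes with ||/&& and pairs with #elif defined(),
 * which is why a lone condition with no #elif can use plain #ifdef.
 */
#if defined(CONFIG_BAR)
#define MODE 2
#elif defined(CONFIG_FOO)
#define MODE 1
#else
#define MODE 0
#endif

static int have_foo(void) { return HAVE_FOO; }
static int mode(void) { return MODE; }
```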
Mike Rapoport April 22, 2021, 10:08 a.m. UTC | #7
On Tue, Apr 20, 2021 at 10:35:59AM +0100, Catalin Marinas wrote:
> Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
> SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
> does not seem to boot in certain configurations (guest under KVM with
> Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K
> pages) or 29 (64K page), there's little argument against the memory
> wasted by the mem_map array with SPARSEMEM.
> 
> Make SPARSEMEM_VMEMMAP the only available option, non-selectable, and
> remove the corresponding #ifdefs under arch/arm64/.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>

Acked-by: Mike Rapoport <rppt@linux.ibm.com>

> ---
> 
> If there are any concerns, please shout (but show numbers as well to
> back it up).
> 
>  arch/arm64/Kconfig                      | 10 +---------
>  arch/arm64/include/asm/kernel-pgtable.h |  2 +-
>  arch/arm64/include/asm/memory.h         |  4 ++--
>  arch/arm64/include/asm/sparsemem.h      |  3 ---
>  arch/arm64/mm/init.c                    |  8 ++------
>  arch/arm64/mm/mmu.c                     |  2 --
>  arch/arm64/mm/ptdump.c                  |  2 --
>  7 files changed, 6 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 9b4d629f7628..01c294035928 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1040,15 +1040,7 @@ source "kernel/Kconfig.hz"
>  config ARCH_SPARSEMEM_ENABLE
>  	def_bool y
>  	select SPARSEMEM_VMEMMAP_ENABLE
> -
> -config ARCH_SPARSEMEM_DEFAULT
> -	def_bool ARCH_SPARSEMEM_ENABLE
> -
> -config ARCH_SELECT_MEMORY_MODEL
> -	def_bool ARCH_SPARSEMEM_ENABLE
> -
> -config ARCH_FLATMEM_ENABLE
> -	def_bool !NUMA
> +	select SPARSEMEM_VMEMMAP
>  
>  config HW_PERF_EVENTS
>  	def_bool y
> diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
> index 587c504a4c8b..d44df9d62fc9 100644
> --- a/arch/arm64/include/asm/kernel-pgtable.h
> +++ b/arch/arm64/include/asm/kernel-pgtable.h
> @@ -136,7 +136,7 @@
>   * has a direct correspondence, and needs to appear sufficiently aligned
>   * in the virtual address space.
>   */
> -#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
> +#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
>  #define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
>  #else
>  #define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_SHIFT)
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index b943879c1c24..15018dc59554 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -329,7 +329,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>   */
>  #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
>  
> -#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
> +#if defined(CONFIG_DEBUG_VIRTUAL)
>  #define page_to_virt(x)	({						\
>  	__typeof__(x) __page = x;					\
>  	void *__addr = __va(page_to_phys(__page));			\
> @@ -349,7 +349,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
>  	(struct page *)__addr;						\
>  })
> -#endif /* !CONFIG_SPARSEMEM_VMEMMAP || CONFIG_DEBUG_VIRTUAL */
> +#endif /* CONFIG_DEBUG_VIRTUAL */
>  
>  #define virt_addr_valid(addr)	({					\
>  	__typeof__(addr) __addr = __tag_reset(addr);			\
> diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
> index eb4a75d720ed..4b73463423c3 100644
> --- a/arch/arm64/include/asm/sparsemem.h
> +++ b/arch/arm64/include/asm/sparsemem.h
> @@ -5,7 +5,6 @@
>  #ifndef __ASM_SPARSEMEM_H
>  #define __ASM_SPARSEMEM_H
>  
> -#ifdef CONFIG_SPARSEMEM
>  #define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
>  
>  /*
> @@ -27,6 +26,4 @@
>  #define SECTION_SIZE_BITS 27
>  #endif /* CONFIG_ARM64_64K_PAGES */
>  
> -#endif /* CONFIG_SPARSEMEM*/
> -
>  #endif
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 3685e12aba9b..a205538aa1d5 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -220,6 +220,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>  int pfn_valid(unsigned long pfn)
>  {
>  	phys_addr_t addr = PFN_PHYS(pfn);
> +	struct mem_section *ms;
>  
>  	/*
>  	 * Ensure the upper PAGE_SHIFT bits are clear in the
> @@ -230,10 +231,6 @@ int pfn_valid(unsigned long pfn)
>  	if (PHYS_PFN(addr) != pfn)
>  		return 0;
>  
> -#ifdef CONFIG_SPARSEMEM
> -{
> -	struct mem_section *ms;
> -
>  	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>  		return 0;
>  
> @@ -252,8 +249,7 @@ int pfn_valid(unsigned long pfn)
>  	 */
>  	if (!early_section(ms))
>  		return pfn_section_valid(ms, pfn);
> -}
> -#endif
> +
>  	return memblock_is_map_memory(addr);
>  }
>  EXPORT_SYMBOL(pfn_valid);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index fac957ff5187..af0ebcad3e1f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1113,7 +1113,6 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
>  }
>  #endif
>  
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
>  #if !ARM64_SWAPPER_USES_SECTION_MAPS
>  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  		struct vmem_altmap *altmap)
> @@ -1177,7 +1176,6 @@ void vmemmap_free(unsigned long start, unsigned long end,
>  	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
>  #endif
>  }
> -#endif	/* CONFIG_SPARSEMEM_VMEMMAP */
>  
>  static inline pud_t *fixmap_pud(unsigned long addr)
>  {
> diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
> index a50e92ea1878..a1937dfff31c 100644
> --- a/arch/arm64/mm/ptdump.c
> +++ b/arch/arm64/mm/ptdump.c
> @@ -51,10 +51,8 @@ static struct addr_marker address_markers[] = {
>  	{ FIXADDR_TOP,			"Fixmap end" },
>  	{ PCI_IO_START,			"PCI I/O start" },
>  	{ PCI_IO_END,			"PCI I/O end" },
> -#ifdef CONFIG_SPARSEMEM_VMEMMAP
>  	{ VMEMMAP_START,		"vmemmap start" },
>  	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
> -#endif
>  	{ -1,				NULL },
>  };
>  
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Catalin Marinas April 23, 2021, 5:09 p.m. UTC | #8
On Tue, 20 Apr 2021 10:35:59 +0100, Catalin Marinas wrote:
> Currently arm64 allows a choice of FLATMEM, SPARSEMEM and
> SPARSEMEM_VMEMMAP. However, only the latter is tested regularly. FLATMEM
> does not seem to boot in certain configurations (guest under KVM with
> Qemu as a VMM). Since the reduction of the SECTION_SIZE_BITS to 27 (4K
> pages) or 29 (64K page), there's little argument against the memory
> wasted by the mem_map array with SPARSEMEM.
> 
> [...]

Applied to arm64 (for-next/core).

[1/1] arm64: Force SPARSEMEM_VMEMMAP as the only memory management model
      https://git.kernel.org/arm64/c/782276b4d0ad

Patch

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9b4d629f7628..01c294035928 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1040,15 +1040,7 @@ source "kernel/Kconfig.hz"
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
 	select SPARSEMEM_VMEMMAP_ENABLE
-
-config ARCH_SPARSEMEM_DEFAULT
-	def_bool ARCH_SPARSEMEM_ENABLE
-
-config ARCH_SELECT_MEMORY_MODEL
-	def_bool ARCH_SPARSEMEM_ENABLE
-
-config ARCH_FLATMEM_ENABLE
-	def_bool !NUMA
+	select SPARSEMEM_VMEMMAP
 
 config HW_PERF_EVENTS
 	def_bool y
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 587c504a4c8b..d44df9d62fc9 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -136,7 +136,7 @@ 
  * has a direct correspondence, and needs to appear sufficiently aligned
  * in the virtual address space.
  */
-#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
+#if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
 #define ARM64_MEMSTART_ALIGN	(1UL << SECTION_SIZE_BITS)
 #else
 #define ARM64_MEMSTART_ALIGN	(1UL << ARM64_MEMSTART_SHIFT)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b943879c1c24..15018dc59554 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -329,7 +329,7 @@ static inline void *phys_to_virt(phys_addr_t x)
  */
 #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
 
-#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
+#if defined(CONFIG_DEBUG_VIRTUAL)
 #define page_to_virt(x)	({						\
 	__typeof__(x) __page = x;					\
 	void *__addr = __va(page_to_phys(__page));			\
@@ -349,7 +349,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 	u64 __addr = VMEMMAP_START + (__idx * sizeof(struct page));	\
 	(struct page *)__addr;						\
 })
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP || CONFIG_DEBUG_VIRTUAL */
+#endif /* CONFIG_DEBUG_VIRTUAL */
 
 #define virt_addr_valid(addr)	({					\
 	__typeof__(addr) __addr = __tag_reset(addr);			\
diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index eb4a75d720ed..4b73463423c3 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -5,7 +5,6 @@ 
 #ifndef __ASM_SPARSEMEM_H
 #define __ASM_SPARSEMEM_H
 
-#ifdef CONFIG_SPARSEMEM
 #define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
 
 /*
@@ -27,6 +26,4 @@ 
 #define SECTION_SIZE_BITS 27
 #endif /* CONFIG_ARM64_64K_PAGES */
 
-#endif /* CONFIG_SPARSEMEM*/
-
 #endif
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3685e12aba9b..a205538aa1d5 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -220,6 +220,7 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 int pfn_valid(unsigned long pfn)
 {
 	phys_addr_t addr = PFN_PHYS(pfn);
+	struct mem_section *ms;
 
 	/*
 	 * Ensure the upper PAGE_SHIFT bits are clear in the
@@ -230,10 +231,6 @@ int pfn_valid(unsigned long pfn)
 	if (PHYS_PFN(addr) != pfn)
 		return 0;
 
-#ifdef CONFIG_SPARSEMEM
-{
-	struct mem_section *ms;
-
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
 
@@ -252,8 +249,7 @@ int pfn_valid(unsigned long pfn)
 	 */
 	if (!early_section(ms))
 		return pfn_section_valid(ms, pfn);
-}
-#endif
+
 	return memblock_is_map_memory(addr);
 }
 EXPORT_SYMBOL(pfn_valid);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index fac957ff5187..af0ebcad3e1f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1113,7 +1113,6 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
 }
 #endif
 
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
 #if !ARM64_SWAPPER_USES_SECTION_MAPS
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
@@ -1177,7 +1176,6 @@ void vmemmap_free(unsigned long start, unsigned long end,
 	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
 #endif
 }
-#endif	/* CONFIG_SPARSEMEM_VMEMMAP */
 
 static inline pud_t *fixmap_pud(unsigned long addr)
 {
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index a50e92ea1878..a1937dfff31c 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -51,10 +51,8 @@ static struct addr_marker address_markers[] = {
 	{ FIXADDR_TOP,			"Fixmap end" },
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
 	{ VMEMMAP_START,		"vmemmap start" },
 	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
-#endif
 	{ -1,				NULL },
 };