Message ID: a2d91c1b5874a1217e473ffd33cd4f765a0e78b7.1601506266.git.sudaraja@codeaurora.org (mailing list archive)
State: New, archived
Series: [v2] arm64/mm: add fallback option to allocate virtually contiguous memory
On 10/01/2020 04:43 AM, Sudarshan Rajagopalan wrote:
> When section mappings are enabled, we allocate vmemmap pages from physically
> continuous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section
> mappings are good to reduce TLB pressure. But when system is highly fragmented
> and memory blocks are being hot-added at runtime, its possible that such
> physically continuous memory allocations can fail. Rather than failing the
> memory hot-add procedure, add a fallback option to allocate vmemmap pages from
> discontinuous pages using vmemmap_populate_basepages().
>
> Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Steven Price <steven.price@arm.com>
> ---
>  arch/arm64/mm/mmu.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 75df62f..9edbbb8 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1121,8 +1121,18 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  		void *p = NULL;
>
>  		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
> -		if (!p)
> -			return -ENOMEM;
> +		if (!p) {
> +			if (altmap)
> +				return -ENOMEM; /* no fallback */

Why? If huge pages inside a vmemmap section might have been allocated
from altmap, the base pages could also fall back on altmap. If this
patch has just followed the existing x86 semantics, those were written
[1] long back, before vmemmap_populate_basepages() supported altmap
allocation. While adding that support [2] recently, it was deliberate
not to change the x86 semantics, as that was a platform decision.

Nonetheless, it makes sense to fall back on altmap base pages if and
when required.

[1] 4b94ffdc4163 ("x86, mm: introduce vmem_altmap to augment vmemmap_populate()")
[2] 1d9cfee7535c ("mm/sparsemem: enable vmem_altmap support in vmemmap_populate_basepages()")
On 2020-09-30 17:30, Anshuman Khandual wrote:
> On 10/01/2020 04:43 AM, Sudarshan Rajagopalan wrote:
>> When section mappings are enabled, we allocate vmemmap pages from
>> physically continuous memory of size PMD_SIZE using
>> vmemmap_alloc_block_buf(). Section mappings are good to reduce TLB
>> pressure. But when system is highly fragmented and memory blocks are
>> being hot-added at runtime, its possible that such physically
>> continuous memory allocations can fail. Rather than failing the
>> memory hot-add procedure, add a fallback option to allocate vmemmap
>> pages from discontinuous pages using vmemmap_populate_basepages().
>>
>> Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Logan Gunthorpe <logang@deltatee.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Steven Price <steven.price@arm.com>
>> ---
>>  arch/arm64/mm/mmu.c | 14 ++++++++++++--
>>  1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 75df62f..9edbbb8 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1121,8 +1121,18 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>  		void *p = NULL;
>>
>>  		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
>> -		if (!p)
>> -			return -ENOMEM;
>> +		if (!p) {
>> +			if (altmap)
>> +				return -ENOMEM; /* no fallback */
>
> Why ? If huge pages inside a vmemmap section might have been allocated
> from altmap, the base page could also fallback on altmap. If this patch
> has just followed the existing x86 semantics, it was written [1] long
> back before vmemmap_populate_basepages() supported altmap allocation.
> While adding that support [2] recently, it was deliberate not to change
> x86 semantics as it was a platform decision. Nonetheless, it makes
> sense to fallback on altmap bases pages if and when required.
>
> [1] 4b94ffdc4163 (x86, mm: introduce vmem_altmap to augment
> vmemmap_populate())
> [2] 1d9cfee7535c (mm/sparsemem: enable vmem_altmap support in
> vmemmap_populate_basepages())

Yes, agreed. We can allow fallback on altmap as well. I did indeed
follow the x86 semantics. Will send the updated patch.

Sudarshan

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
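Given the agreement above, the follow-up revision would plausibly drop the altmap bail-out and pass altmap through to vmemmap_populate_basepages() so that base pages can also come from the device-provided altmap. A hedged sketch of what that revised hunk might look like (this is not the actual v3 patch):

```diff
 		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
 		if (!p) {
-			if (altmap)
-				return -ENOMEM; /* no fallback */
-
 			/*
-			 * fallback allocating with virtually
-			 * contiguous memory for this section
+			 * Fall back to base pages for this section;
+			 * base pages may be served from altmap as well.
 			 */
-			if (vmemmap_populate_basepages(addr, next, node, NULL))
+			if (vmemmap_populate_basepages(addr, next, node, altmap))
 				return -ENOMEM;
 			continue;
 		}
```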
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 75df62f..9edbbb8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1121,8 +1121,18 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		void *p = NULL;
 
 		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
-		if (!p)
-			return -ENOMEM;
+		if (!p) {
+			if (altmap)
+				return -ENOMEM; /* no fallback */
+
+			/*
+			 * fallback allocating with virtually
+			 * contiguous memory for this section
+			 */
+			if (vmemmap_populate_basepages(addr, next, node, NULL))
+				return -ENOMEM;
+			continue;
+		}
 		pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
 	} else
When section mappings are enabled, we allocate vmemmap pages from
physically contiguous memory of size PMD_SIZE using
vmemmap_alloc_block_buf(). Section mappings are good for reducing TLB
pressure. But when the system is highly fragmented and memory blocks
are being hot-added at runtime, it's possible that such physically
contiguous memory allocations can fail. Rather than failing the memory
hot-add procedure, add a fallback option to allocate vmemmap pages
from discontiguous pages using vmemmap_populate_basepages().

Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Price <steven.price@arm.com>
---
 arch/arm64/mm/mmu.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)