Message ID | 339118202d0a4741ec22f215830dc8d9ba1ccd49.1602542734.git.sudaraja@codeaurora.org
---|---
State | New, archived
Series | [v3] arm64/mm: add fallback option to allocate virtually contiguous memory
On 10/13/2020 04:35 AM, Sudarshan Rajagopalan wrote:
> When section mappings are enabled, we allocate vmemmap pages from physically
> continuous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section
> mappings are good to reduce TLB pressure. But when system is highly fragmented
> and memory blocks are being hot-added at runtime, its possible that such
> physically continuous memory allocations can fail. Rather than failing the
> memory hot-add procedure, add a fallback option to allocate vmemmap pages from
> discontinuous pages using vmemmap_populate_basepages().

There is a checkpatch warning here, which could be fixed while merging?

WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#7:
When section mappings are enabled, we allocate vmemmap pages from physically

total: 0 errors, 1 warnings, 13 lines checked

>
> Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Steven Price <steven.price@arm.com>

Nonetheless, this looks fine. Did not see any particular problem
while creating an experimental vmemmap with interleaving section
and base page mapping.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  arch/arm64/mm/mmu.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 75df62fea1b6..44486fd0e883 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1121,8 +1121,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  		void *p = NULL;
>
>  		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
> -		if (!p)
> -			return -ENOMEM;
> +		if (!p) {
> +			if (vmemmap_populate_basepages(addr, next, node, altmap))
> +				return -ENOMEM;
> +			continue;
> +		}
>
>  		pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
>  	} else
>
On 2020-10-13 04:38, Anshuman Khandual wrote:
> On 10/13/2020 04:35 AM, Sudarshan Rajagopalan wrote:
>> When section mappings are enabled, we allocate vmemmap pages from physically
>> continuous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section
>> mappings are good to reduce TLB pressure. But when system is highly fragmented
>> and memory blocks are being hot-added at runtime, its possible that such
>> physically continuous memory allocations can fail. Rather than failing the
>> memory hot-add procedure, add a fallback option to allocate vmemmap pages from
>> discontinuous pages using vmemmap_populate_basepages().
>
> There is a checkpatch warning here, which could be fixed while merging?
>
> WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)
> #7:
> When section mappings are enabled, we allocate vmemmap pages from physically
>
> total: 0 errors, 1 warnings, 13 lines checked

Thanks Anshuman for the review. I sent out an updated patch fixing the
checkpatch warning.

>>
>> Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
>> Reviewed-by: Gavin Shan <gshan@redhat.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Logan Gunthorpe <logang@deltatee.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Steven Price <steven.price@arm.com>
>
> Nonetheless, this looks fine. Did not see any particular problem
> while creating an experimental vmemmap with interleaving section
> and base page mapping.
>
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
>
>> ---
>>  arch/arm64/mm/mmu.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 75df62fea1b6..44486fd0e883 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1121,8 +1121,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>  		void *p = NULL;
>>
>>  		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
>> -		if (!p)
>> -			return -ENOMEM;
>> +		if (!p) {
>> +			if (vmemmap_populate_basepages(addr, next, node, altmap))
>> +				return -ENOMEM;
>> +			continue;
>> +		}
>>
>>  		pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
>>  	} else
>>

Sudarshan

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 75df62fea1b6..44486fd0e883 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1121,8 +1121,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		void *p = NULL;

 		p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
-		if (!p)
-			return -ENOMEM;
+		if (!p) {
+			if (vmemmap_populate_basepages(addr, next, node, altmap))
+				return -ENOMEM;
+			continue;
+		}

 		pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
 	} else