
[RFC,6/6] arm64: fall back to vmemmap_populate_basepages if not aligned with PMD_SIZE

Message ID 20200729033424.2629-7-justin.he@arm.com
Series decrease unnecessary gap due to pmem kmem alignment

Commit Message

Justin He July 29, 2020, 3:34 a.m. UTC
In the dax pmem kmem case (dax pmem used as a RAM device), the start address
might not be aligned with PMD_SIZE, e.g.:
240000000-33fdfffff : Persistent Memory
  240000000-2421fffff : namespace0.0
  242400000-2bfffffff : dax0.0
    242400000-2bfffffff : System RAM (kmem)
pfn_to_page(0x242400000) is fffffe0007e90000.

Without this patch, vmemmap_populate(fffffe0007e90000, ...) incorrectly
creates a PMD mapping [fffffe0007e00000, fffffe0008000000] which contains
fffffe0007e90000.

Add the alignment check and fall back to vmemmap_populate_basepages()
when the range is not PMD_SIZE aligned.

Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/mm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d69feb2cfb84..3b21bd47e801 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1102,6 +1102,10 @@  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 
 	do {
 		next = pmd_addr_end(addr, end);
+		if (next - addr < PMD_SIZE) {
+		vmemmap_populate_basepages(addr, next, node, altmap);
+			continue;
+		}
 
 		pgdp = vmemmap_pgd_populate(addr, node);
 		if (!pgdp)