Message ID | 20160309113214.GB1535@rric.localdomain (mailing list archive)
---|---
State | New, archived
On 09.03.16 12:32:14, Robert Richter wrote:
> On 08.03.16 17:31:05, Ard Biesheuvel wrote:
> > On 8 March 2016 at 09:15, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >
> > I managed to reproduce and diagnose this. The problem is that vmemmap
> > is no longer zone aligned, which causes trouble in the zone based
> > rounding that occurs in memory_present. The below patch fixes this by
> > rounding down the subtracted offset. Since this implies that the
> > region could stick off the other end, it also reverts the halving of
> > the region size.
>
> I have seen the same panic. The fix solves the problem. See enclosed
> diff for reference as there was some patch corruption of the original.

So this is:

Tested-by: Robert Richter <rrichter@cavium.com>

-Robert
diff for reference as there was some patch corruption of the original.

Thanks,

-Robert

From 562760cc30905748cb851cc9aee2bb9d88c67d47 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Date: Tue, 8 Mar 2016 17:31:05 +0700
Subject: [PATCH] arm64: vmemmap: Fix use virtual projection of linear region

Signed-off-by: Robert Richter <rrichter@cavium.com>
---
 arch/arm64/include/asm/pgtable.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d9de87354869..98697488650f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -40,7 +40,7 @@
  * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
  * fixed mappings and modules
  */
-#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
+#define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
 
 #ifndef CONFIG_KASAN
 #define VMALLOC_START		(VA_START)
@@ -52,7 +52,7 @@
 #define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define VMEMMAP_START		(VMALLOC_END + SZ_64K)
-#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
+#define vmemmap			((struct page *)VMEMMAP_START - ((memstart_addr >> PAGE_SHIFT) & PAGE_SECTION_MASK))
 
 #define FIRST_USER_ADDRESS	0UL