Message ID | 20190621202407.7781-2-sstabellini@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Series | [v5,1/2] xen: Replace u64 with uint64_t in pdx_init_mask() and callers |
Hi Stefano,

On 6/21/19 9:24 PM, Stefano Stabellini wrote:
> The mask calculation in pdx_init_mask is wrong when the first bank
> starts at address 0x0. The reason is that pdx_init_mask will do '0 - 1',
> causing an underflow. As a result, the mask becomes 0xffffffffffffffff,
> which is the biggest possible mask, and ends up causing a significant
> memory waste in the frametable size computation.
>
> For instance, on platforms that have a low memory bank starting at 0x0
> and a high memory bank, the frametable will end up covering all the
> holes in between.
>
> The purpose of the mask is to be passed as a parameter to
> pfn_pdx_hole_setup, which, based on the mask parameter, calculates
> pfn_pdx_hole_shift, pfn_pdx_bottom_mask, etc., which are the masks that
> actually matter for frametable initialization later on.
>
> pfn_pdx_hole_setup never compresses addresses below MAX_ORDER bits (1GB
> on ARM). Thus, it is safe to initialize the mask by passing
> 1ULL << (MAX_ORDER + PAGE_SHIFT) as the start address to pdx_init_mask.
>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> Reviewed-by: Julien Grall <julien.grall@arm.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Unfortunately this patch breaks boot on AMD Seattle (laxton{0,1}), see [1].
The bisector fingered this patch [2]. To unblock osstest, I have taken the
liberty of reverting the patch on staging.

From Linux, the memory range for Seattle is
0x0000008000000000-0x00000087ffffffff.

I am not entirely sure why this patch affects the boot. Stefano, can you
look at it?

Cheers,

[1] http://logs.test-lab.xenproject.org/osstest/results/history/test-arm64-arm64-xl-xsm/xen-unstable-smoke.html
[2] https://lists.xen.org/archives/html/xen-devel/2019-06/msg01549.html
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index d5d188a105..215746a5c3 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -484,7 +484,14 @@ static void __init init_pdx(void)
 {
     paddr_t bank_start, bank_size, bank_end;
-    uint64_t mask = pdx_init_mask(bootinfo.mem.bank[0].start);
+    /*
+     * Arm does not have any restrictions on the bits to compress. Pass 0 to
+     * let the common code further restrict the mask.
+     *
+     * If the logic changes in pfn_pdx_hole_setup we might have to
+     * update this function too.
+     */
+    uint64_t mask = pdx_init_mask(0x0);
     int bank;

     for ( bank = 0 ; bank < bootinfo.mem.nr_banks; bank++ )
diff --git a/xen/common/pdx.c b/xen/common/pdx.c
index 8356f03ce8..c91875fabe 100644
--- a/xen/common/pdx.c
+++ b/xen/common/pdx.c
@@ -50,9 +50,11 @@ static u64 __init fill_mask(u64 mask)
     return mask;
 }

+/* We don't want to compress the low MAX_ORDER bits of the addresses. */
 uint64_t __init pdx_init_mask(uint64_t base_addr)
 {
-    return fill_mask(base_addr - 1);
+    return fill_mask(max(base_addr,
+                         (uint64_t)1 << (MAX_ORDER + PAGE_SHIFT)) - 1);
 }

 u64 __init pdx_region_mask(u64 base, u64 len)
@@ -80,6 +82,9 @@ void __init pfn_pdx_hole_setup(unsigned long mask)
      * This guarantees that page-pointer arithmetic remains valid within
      * contiguous aligned ranges of 2^MAX_ORDER pages. Among others, our
      * buddy allocator relies on this assumption.
+     *
+     * If the logic changes here, we might have to update the ARM specific
+     * init_pdx too.
      */
     for ( j = MAX_ORDER-1; ; )
     {