
ARM: mm: handle non-pmd-aligned end of RAM

Message ID 20150511103130.GA3490@leverpostej (mailing list archive)
State New, archived

Commit Message

Mark Rutland May 11, 2015, 10:31 a.m. UTC
At boot time we round the memblock limit down to section size in an
attempt to ensure that we will have mapped this RAM with section
mappings prior to allocating from it. When mapping RAM we iterate over
PMD-sized chunks, creating these section mappings.

Section mappings are only created when the end of a chunk is aligned to
section size. Unfortunately, with classic page tables (where PMD_SIZE is
2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
size the first 1M will not be mapped despite having been accounted for
in the memblock limit. This has been observed to result in page tables
being allocated from unmapped memory, causing boot-time hangs.

This patch modifies the memblock limit rounding to always round down to
PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
will round the memblock limit down to a 2M boundary, matching the limits
on section mappings, and preventing allocations from unmapped memory.
For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/mm/mmu.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)
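
To make the arithmetic concrete, here is a minimal user-space sketch (not part of the patch; the end-of-lowmem address is made up for illustration) of the two rounding behaviours with classic page tables, where SECTION_SIZE is 1 MiB and PMD_SIZE is 2 MiB:

#include <stdio.h>
#include <stdint.h>

/* Classic (non-LPAE) ARM page table geometry, as described above. */
#define SECTION_SIZE  (1UL << 20)          /* 1 MiB */
#define PMD_SIZE      (2 * SECTION_SIZE)   /* 2 MiB */

/* Mirrors the kernel's round_down() for power-of-two sizes. */
static uint32_t round_down(uint32_t x, uint32_t size)
{
	return x & ~(size - 1);
}

int main(void)
{
	/* Hypothetical end of lowmem: 1 MiB aligned but not 2 MiB aligned. */
	uint32_t arm_lowmem_limit = 0x2ff00000;

	/*
	 * Old behaviour: rounding to SECTION_SIZE leaves the limit at
	 * 0x2ff00000, so the 1 MiB above the last PMD boundary is
	 * allocatable even though it may never receive a section mapping.
	 */
	printf("SECTION_SIZE rounding: 0x%08x\n",
	       (unsigned int)round_down(arm_lowmem_limit, SECTION_SIZE));

	/*
	 * New behaviour: rounding to PMD_SIZE gives 0x2fe00000, matching
	 * the granularity at which section mappings are actually created.
	 */
	printf("PMD_SIZE rounding:     0x%08x\n",
	       (unsigned int)round_down(arm_lowmem_limit, PMD_SIZE));

	return 0;
}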

Comments

Stefan Agner May 11, 2015, 11:43 a.m. UTC | #1
On 2015-05-11 12:31, Mark Rutland wrote:
> At boot time we round the memblock limit down to section size in an
> attempt to ensure that we will have mapped this RAM with section
> mappings prior to allocating from it. When mapping RAM we iterate over
> PMD-sized chunks, creating these section mappings.
> 
> Section mappings are only created when the end of a chunk is aligned to
> section size. Unfortunately, with classic page tables (where PMD_SIZE is
> 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
> size the first 1M will not be mapped despite having been accounted for
> in the memblock limit. This has been observed to result in page tables
> being allocated from unmapped memory, causing boot-time hangs.
> 
> This patch modifies the memblock limit rounding to always round down to
> PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
> will round the memblock limit down to a 2M boundary, matching the limits
> on section mappings, and preventing allocations from unmapped memory.
> For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.

Thanks Mark, just tested the patch on the hardware where I had the issue;
looks good.

Tested-by: Stefan Agner <stefan@agner.ch>

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Reported-by: Stefan Agner <stefan@agner.ch>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Hans de Goede <hdegoede@redhat.com>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Russell King <rmk+kernel@arm.linux.org.uk>
> Cc: Steve Capper <steve.capper@linaro.org>
> ---
>  arch/arm/mm/mmu.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 4e6ef89..7186382 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
>  			}
>  
>  			/*
> -			 * Find the first non-section-aligned page, and point
> +			 * Find the first non-pmd-aligned page, and point
>  			 * memblock_limit at it. This relies on rounding the
> -			 * limit down to be section-aligned, which happens at
> -			 * the end of this function.
> +			 * limit down to be pmd-aligned, which happens at the
> +			 * end of this function.
>  			 *
>  			 * With this algorithm, the start or end of almost any
> -			 * bank can be non-section-aligned. The only exception
> -			 * is that the start of the bank 0 must be section-
> +			 * bank can be non-pmd-aligned. The only exception is
> +			 * that the start of the bank 0 must be section-
>  			 * aligned, since otherwise memory would need to be
>  			 * allocated when mapping the start of bank 0, which
>  			 * occurs before any free memory is mapped.
>  			 */
>  			if (!memblock_limit) {
> -				if (!IS_ALIGNED(block_start, SECTION_SIZE))
> +				if (!IS_ALIGNED(block_start, PMD_SIZE))
>  					memblock_limit = block_start;
> -				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
> +				else if (!IS_ALIGNED(block_end, PMD_SIZE))
>  					memblock_limit = arm_lowmem_limit;
>  			}
>  
> @@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
>  	high_memory = __va(arm_lowmem_limit - 1) + 1;
>  
>  	/*
> -	 * Round the memblock limit down to a section size.  This
> +	 * Round the memblock limit down to a pmd size.  This
>  	 * helps to ensure that we will allocate memory from the
> -	 * last full section, which should be mapped.
> +	 * last full pmd, which should be mapped.
>  	 */
>  	if (memblock_limit)
> -		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
> +		memblock_limit = round_down(memblock_limit, PMD_SIZE);
>  	if (!memblock_limit)
>  		memblock_limit = arm_lowmem_limit;
Laura Abbott May 12, 2015, 2:54 a.m. UTC | #2
On 05/11/2015 03:31 AM, Mark Rutland wrote:
> At boot time we round the memblock limit down to section size in an
> attempt to ensure that we will have mapped this RAM with section
> mappings prior to allocating from it. When mapping RAM we iterate over
> PMD-sized chunks, creating these section mappings.
>
> Section mappings are only created when the end of a chunk is aligned to
> section size. Unfortunately, with classic page tables (where PMD_SIZE is
> 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
> size the first 1M will not be mapped despite having been accounted for
> in the memblock limit. This has been observed to result in page tables
> being allocated from unmapped memory, causing boot-time hangs.
>
> This patch modifies the memblock limit rounding to always round down to
> PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
> will round the memblock limit down to a 2M boundary, matching the limits
> on section mappings, and preventing allocations from unmapped memory.
> For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Reported-by: Stefan Agner <stefan@agner.ch>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Hans de Goede <hdegoede@redhat.com>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Russell King <rmk+kernel@arm.linux.org.uk>
> Cc: Steve Capper <steve.capper@linaro.org>

Acked-by: Laura Abbott <labbott@redhat.com>

> ---
>   arch/arm/mm/mmu.c | 20 ++++++++++----------
>   1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 4e6ef89..7186382 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
>   			}
>
>   			/*
> -			 * Find the first non-section-aligned page, and point
> +			 * Find the first non-pmd-aligned page, and point
>   			 * memblock_limit at it. This relies on rounding the
> -			 * limit down to be section-aligned, which happens at
> -			 * the end of this function.
> +			 * limit down to be pmd-aligned, which happens at the
> +			 * end of this function.
>   			 *
>   			 * With this algorithm, the start or end of almost any
> -			 * bank can be non-section-aligned. The only exception
> -			 * is that the start of the bank 0 must be section-
> +			 * bank can be non-pmd-aligned. The only exception is
> +			 * that the start of the bank 0 must be section-
>   			 * aligned, since otherwise memory would need to be
>   			 * allocated when mapping the start of bank 0, which
>   			 * occurs before any free memory is mapped.
>   			 */
>   			if (!memblock_limit) {
> -				if (!IS_ALIGNED(block_start, SECTION_SIZE))
> +				if (!IS_ALIGNED(block_start, PMD_SIZE))
>   					memblock_limit = block_start;
> -				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
> +				else if (!IS_ALIGNED(block_end, PMD_SIZE))
>   					memblock_limit = arm_lowmem_limit;
>   			}
>
> @@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
>   	high_memory = __va(arm_lowmem_limit - 1) + 1;
>
>   	/*
> -	 * Round the memblock limit down to a section size.  This
> +	 * Round the memblock limit down to a pmd size.  This
>   	 * helps to ensure that we will allocate memory from the
> -	 * last full section, which should be mapped.
> +	 * last full pmd, which should be mapped.
>   	 */
>   	if (memblock_limit)
> -		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
> +		memblock_limit = round_down(memblock_limit, PMD_SIZE);
>   	if (!memblock_limit)
>   		memblock_limit = arm_lowmem_limit;
>
>
Hans de Goede May 13, 2015, 1:40 p.m. UTC | #3
Hi,

On 11-05-15 13:43, Stefan Agner wrote:
> On 2015-05-11 12:31, Mark Rutland wrote:
>> At boot time we round the memblock limit down to section size in an
>> attempt to ensure that we will have mapped this RAM with section
>> mappings prior to allocating from it. When mapping RAM we iterate over
>> PMD-sized chunks, creating these section mappings.
>>
>> Section mappings are only created when the end of a chunk is aligned to
>> section size. Unfortunately, with classic page tables (where PMD_SIZE is
>> 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
>> size the first 1M will not be mapped despite having been accounted for
>> in the memblock limit. This has been observed to result in page tables
>> being allocated from unmapped memory, causing boot-time hangs.
>>
>> This patch modifies the memblock limit rounding to always round down to
>> PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
>> will round the memblock limit down to a 2M boundary, matching the limits
>> on section mappings, and preventing allocations from unmapped memory.
>> For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
>
> Thanks Mark, just tested the patch on the hardware where I had the issue;
> looks good.
>
> Tested-by: Stefan Agner <stefan@agner.ch>

Same for me, this also fixes the issue I was seeing on an Allwinner A33
tablet with a 1024x600 LCD screen.

Tested-by: Hans de Goede <hdegoede@redhat.com>

Can we get this Cc-ed to stable@vger.kernel.org please? At least for 4.0?

Regards,

Hans


>
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> Reported-by: Stefan Agner <stefan@agner.ch>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Hans de Goede <hdegoede@redhat.com>
>> Cc: Laura Abbott <labbott@redhat.com>
>> Cc: Russell King <rmk+kernel@arm.linux.org.uk>
>> Cc: Steve Capper <steve.capper@linaro.org>
>> ---
>>   arch/arm/mm/mmu.c | 20 ++++++++++----------
>>   1 file changed, 10 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
>> index 4e6ef89..7186382 100644
>> --- a/arch/arm/mm/mmu.c
>> +++ b/arch/arm/mm/mmu.c
>> @@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
>>   			}
>>
>>   			/*
>> -			 * Find the first non-section-aligned page, and point
>> +			 * Find the first non-pmd-aligned page, and point
>>   			 * memblock_limit at it. This relies on rounding the
>> -			 * limit down to be section-aligned, which happens at
>> -			 * the end of this function.
>> +			 * limit down to be pmd-aligned, which happens at the
>> +			 * end of this function.
>>   			 *
>>   			 * With this algorithm, the start or end of almost any
>> -			 * bank can be non-section-aligned. The only exception
>> -			 * is that the start of the bank 0 must be section-
>> +			 * bank can be non-pmd-aligned. The only exception is
>> +			 * that the start of the bank 0 must be section-
>>   			 * aligned, since otherwise memory would need to be
>>   			 * allocated when mapping the start of bank 0, which
>>   			 * occurs before any free memory is mapped.
>>   			 */
>>   			if (!memblock_limit) {
>> -				if (!IS_ALIGNED(block_start, SECTION_SIZE))
>> +				if (!IS_ALIGNED(block_start, PMD_SIZE))
>>   					memblock_limit = block_start;
>> -				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
>> +				else if (!IS_ALIGNED(block_end, PMD_SIZE))
>>   					memblock_limit = arm_lowmem_limit;
>>   			}
>>
>> @@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
>>   	high_memory = __va(arm_lowmem_limit - 1) + 1;
>>
>>   	/*
>> -	 * Round the memblock limit down to a section size.  This
>> +	 * Round the memblock limit down to a pmd size.  This
>>   	 * helps to ensure that we will allocate memory from the
>> -	 * last full section, which should be mapped.
>> +	 * last full pmd, which should be mapped.
>>   	 */
>>   	if (memblock_limit)
>> -		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
>> +		memblock_limit = round_down(memblock_limit, PMD_SIZE);
>>   	if (!memblock_limit)
>>   		memblock_limit = arm_lowmem_limit;
>
Mark Rutland May 13, 2015, 2:11 p.m. UTC | #4
On Wed, May 13, 2015 at 02:40:16PM +0100, Hans de Goede wrote:
> Hi,
> 
> On 11-05-15 13:43, Stefan Agner wrote:
> > On 2015-05-11 12:31, Mark Rutland wrote:
> >> At boot time we round the memblock limit down to section size in an
> >> attempt to ensure that we will have mapped this RAM with section
> >> mappings prior to allocating from it. When mapping RAM we iterate over
> >> PMD-sized chunks, creating these section mappings.
> >>
> >> Section mappings are only created when the end of a chunk is aligned to
> >> section size. Unfortunately, with classic page tables (where PMD_SIZE is
> >> 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
> >> size the first 1M will not be mapped despite having been accounted for
> >> in the memblock limit. This has been observed to result in page tables
> >> being allocated from unmapped memory, causing boot-time hangs.
> >>
> >> This patch modifies the memblock limit rounding to always round down to
> >> PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
> >> will round the memblock limit down to a 2M boundary, matching the limits
> >> on section mappings, and preventing allocations from unmapped memory.
> >> For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
> >
> > Thanks Mark, just tested the patch on the hardware where I had the issue;
> > looks good.
> >
> > Tested-by: Stefan Agner <stefan@agner.ch>
> 
> Same for me, this also fixes the issue I was seeing on an Allwinner A33
> tablet with a 1024x600 LCD screen.
> 
> Tested-by: Hans de Goede <hdegoede@redhat.com>

Great. Thanks for testing!

> Can we get this Cc-ed to stable@vger.kernel.org please? At least for 4.0?

Sure, done.

Russell, on the assumption you're ok with the patch I've submitted it to
the patch system as 8356/1.

Thanks,
Mark.

> Regards,
> 
> Hans
> 
> 
> >
> >>
> >> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> >> Reported-by: Stefan Agner <stefan@agner.ch>
> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >> Cc: Hans de Goede <hdegoede@redhat.com>
> >> Cc: Laura Abbott <labbott@redhat.com>
> >> Cc: Russell King <rmk+kernel@arm.linux.org.uk>
> >> Cc: Steve Capper <steve.capper@linaro.org>
> >> ---
> >>   arch/arm/mm/mmu.c | 20 ++++++++++----------
> >>   1 file changed, 10 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> >> index 4e6ef89..7186382 100644
> >> --- a/arch/arm/mm/mmu.c
> >> +++ b/arch/arm/mm/mmu.c
> >> @@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
> >>   			}
> >>
> >>   			/*
> >> -			 * Find the first non-section-aligned page, and point
> >> +			 * Find the first non-pmd-aligned page, and point
> >>   			 * memblock_limit at it. This relies on rounding the
> >> -			 * limit down to be section-aligned, which happens at
> >> -			 * the end of this function.
> >> +			 * limit down to be pmd-aligned, which happens at the
> >> +			 * end of this function.
> >>   			 *
> >>   			 * With this algorithm, the start or end of almost any
> >> -			 * bank can be non-section-aligned. The only exception
> >> -			 * is that the start of the bank 0 must be section-
> >> +			 * bank can be non-pmd-aligned. The only exception is
> >> +			 * that the start of the bank 0 must be section-
> >>   			 * aligned, since otherwise memory would need to be
> >>   			 * allocated when mapping the start of bank 0, which
> >>   			 * occurs before any free memory is mapped.
> >>   			 */
> >>   			if (!memblock_limit) {
> >> -				if (!IS_ALIGNED(block_start, SECTION_SIZE))
> >> +				if (!IS_ALIGNED(block_start, PMD_SIZE))
> >>   					memblock_limit = block_start;
> >> -				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
> >> +				else if (!IS_ALIGNED(block_end, PMD_SIZE))
> >>   					memblock_limit = arm_lowmem_limit;
> >>   			}
> >>
> >> @@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
> >>   	high_memory = __va(arm_lowmem_limit - 1) + 1;
> >>
> >>   	/*
> >> -	 * Round the memblock limit down to a section size.  This
> >> +	 * Round the memblock limit down to a pmd size.  This
> >>   	 * helps to ensure that we will allocate memory from the
> >> -	 * last full section, which should be mapped.
> >> +	 * last full pmd, which should be mapped.
> >>   	 */
> >>   	if (memblock_limit)
> >> -		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
> >> +		memblock_limit = round_down(memblock_limit, PMD_SIZE);
> >>   	if (!memblock_limit)
> >>   		memblock_limit = arm_lowmem_limit;
> >
>
Russell King - ARM Linux May 14, 2015, 10:15 a.m. UTC | #5
On Wed, May 13, 2015 at 03:11:52PM +0100, Mark Rutland wrote:
> Russell, on the assumption you're ok with the patch I've submitted it to
> the patch system as 8356/1.

Thanks.
Javier Martinez Canillas June 30, 2015, 10:01 a.m. UTC | #6
[Adding Krzysztof, Doug, Kukjin and Sjoerd to cc since they are also
familiar with this platform]

Hello Mark,

On Mon, May 11, 2015 at 12:31 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> At boot time we round the memblock limit down to section size in an
> attempt to ensure that we will have mapped this RAM with section
> mappings prior to allocating from it. When mapping RAM we iterate over
> PMD-sized chunks, creating these section mappings.
>
> Section mappings are only created when the end of a chunk is aligned to
> section size. Unfortunately, with classic page tables (where PMD_SIZE is
> 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
> size the first 1M will not be mapped despite having been accounted for
> in the memblock limit. This has been observed to result in page tables
> being allocated from unmapped memory, causing boot-time hangs.
>
> This patch modifies the memblock limit rounding to always round down to
> PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
> will round the memblock limit down to a 2M boundary, matching the limits
> on section mappings, and preventing allocations from unmapped memory.
> For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Reported-by: Stefan Agner <stefan@agner.ch>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Hans de Goede <hdegoede@redhat.com>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Russell King <rmk+kernel@arm.linux.org.uk>
> Cc: Steve Capper <steve.capper@linaro.org>
> ---
>  arch/arm/mm/mmu.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 4e6ef89..7186382 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
>                         }
>
>                         /*
> -                        * Find the first non-section-aligned page, and point
> +                        * Find the first non-pmd-aligned page, and point
>                          * memblock_limit at it. This relies on rounding the
> -                        * limit down to be section-aligned, which happens at
> -                        * the end of this function.
> +                        * limit down to be pmd-aligned, which happens at the
> +                        * end of this function.
>                          *
>                          * With this algorithm, the start or end of almost any
> -                        * bank can be non-section-aligned. The only exception
> -                        * is that the start of the bank 0 must be section-
> +                        * bank can be non-pmd-aligned. The only exception is
> +                        * that the start of the bank 0 must be section-
>                          * aligned, since otherwise memory would need to be
>                          * allocated when mapping the start of bank 0, which
>                          * occurs before any free memory is mapped.
>                          */
>                         if (!memblock_limit) {
> -                               if (!IS_ALIGNED(block_start, SECTION_SIZE))
> +                               if (!IS_ALIGNED(block_start, PMD_SIZE))
>                                         memblock_limit = block_start;
> -                               else if (!IS_ALIGNED(block_end, SECTION_SIZE))
> +                               else if (!IS_ALIGNED(block_end, PMD_SIZE))
>                                         memblock_limit = arm_lowmem_limit;
>                         }
>
> @@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
>         high_memory = __va(arm_lowmem_limit - 1) + 1;
>
>         /*
> -        * Round the memblock limit down to a section size.  This
> +        * Round the memblock limit down to a pmd size.  This
>          * helps to ensure that we will allocate memory from the
> -        * last full section, which should be mapped.
> +        * last full pmd, which should be mapped.
>          */
>         if (memblock_limit)
> -               memblock_limit = round_down(memblock_limit, SECTION_SIZE);
> +               memblock_limit = round_down(memblock_limit, PMD_SIZE);
>         if (!memblock_limit)
>                 memblock_limit = arm_lowmem_limit;
>
> --

There were reports of 4.1 not booting on the Exynos Chromebooks. I did
a bisect on an Exynos5420 Peach Pit Chromebook and tracked it down to
$subject as the cause. Reverting this commit makes it boot again.

In case someone is not familiar with the Exynos Chromebooks: they have
a read-only u-boot that can only boot signed images, so there are two
ways to boot a kernel:

1) Booting a signed FIT image that contains the zImage + FDT
2) Chain loading a signed u-boot image (i.e. mainline u-boot) that can
in turn boot a non-signed kernel image

4.1 fails to boot only with 1); doing 2) leads to the kernel booting
successfully.

Adding some printouts, I found that the memory layout and total amount
of RAM are different when booting a FIT image directly and when chain
loading a mainline u-boot.

when chain loading a mainline u-boot:
------------------------------------------------------

memory block: block_start 0x40000000 size 0x60000000

This 1.5 GiB block is both PMD_SIZE and SECTION_SIZE aligned

when booting using the vendor u-boot and a FIT:
---------------------------------------------------------------------

memory block: block_start 0x20000000 size 0x1e00000
memory block: block_start 0x21f00000 size 0x7e100000

The first 30 MiB block is both PMD_SIZE and SECTION_SIZE aligned, but
the start of the second (~2 GiB) block is SECTION_SIZE aligned but not
PMD_SIZE aligned.

The Device Tree memory node in arch/arm/boot/dts/exynos5420-peach-pit.dts is:

memory {
      reg = <0x20000000 0x80000000>;
};

So besides wondering which memory layout is the correct one, I wonder
if this patch is not just exposing another latent bug on this
platform.

Before this patch, memblock_limit was always set to arm_lowmem_limit in
both cases, because the memory blocks were always SECTION_SIZE aligned
regardless of whether we boot a FIT directly or chain load mainline
u-boot.

But this is no longer true when rounding memblock_limit down to
PMD_SIZE, since when booting from a FIT the start of the second memory
area is only SECTION_SIZE aligned, not PMD_SIZE aligned.
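
To make the alignment point concrete, a minimal user-space sketch (not kernel code; the macros just mirror the kernel's definitions) checking the second block's start address:

#include <stdio.h>
#include <stdint.h>

#define SZ_1M  0x00100000UL   /* classic SECTION_SIZE */
#define SZ_2M  0x00200000UL   /* classic PMD_SIZE */

/* Mirrors the kernel's IS_ALIGNED() for power-of-two alignments. */
#define IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)

int main(void)
{
	uint32_t block_start = 0x21f00000;  /* second block when booting the FIT */

	/*
	 * 0x21f00000 is a multiple of 1 MiB but not of 2 MiB, so the new
	 * PMD_SIZE check now sets memblock_limit to this block start,
	 * where the old SECTION_SIZE check did not.
	 */
	printf("SECTION_SIZE aligned: %d\n", (int)IS_ALIGNED(block_start, SZ_1M));
	printf("PMD_SIZE aligned:     %d\n", (int)IS_ALIGNED(block_start, SZ_2M));

	return 0;
}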

Best regards,
Javier
Javier Martinez Canillas July 1, 2015, 8:13 a.m. UTC | #7
On Tue, Jun 30, 2015 at 12:01 PM, Javier Martinez Canillas
<javier@dowhile0.org> wrote:

[...]

>
> There were reports of 4.1 not booting on the Exynos Chromebooks. I did
> a bisect on an Exynos5420 Peach Pit Chromebook and tracked it down to
> $subject as the cause. Reverting this commit makes it boot again.
>

I found that the following patch posted by Laura Abbott makes the
system boot again:

[PATCH] arm: Update memblock limit after mapping lowmem [0].

Best regards,
Javier

[0]: https://lkml.org/lkml/2015/6/4/606

Patch

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e6ef89..7186382 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1112,22 +1112,22 @@  void __init sanity_check_meminfo(void)
 			}
 
 			/*
-			 * Find the first non-section-aligned page, and point
+			 * Find the first non-pmd-aligned page, and point
 			 * memblock_limit at it. This relies on rounding the
-			 * limit down to be section-aligned, which happens at
-			 * the end of this function.
+			 * limit down to be pmd-aligned, which happens at the
+			 * end of this function.
 			 *
 			 * With this algorithm, the start or end of almost any
-			 * bank can be non-section-aligned. The only exception
-			 * is that the start of the bank 0 must be section-
+			 * bank can be non-pmd-aligned. The only exception is
+			 * that the start of the bank 0 must be section-
 			 * aligned, since otherwise memory would need to be
 			 * allocated when mapping the start of bank 0, which
 			 * occurs before any free memory is mapped.
 			 */
 			if (!memblock_limit) {
-				if (!IS_ALIGNED(block_start, SECTION_SIZE))
+				if (!IS_ALIGNED(block_start, PMD_SIZE))
 					memblock_limit = block_start;
-				else if (!IS_ALIGNED(block_end, SECTION_SIZE))
+				else if (!IS_ALIGNED(block_end, PMD_SIZE))
 					memblock_limit = arm_lowmem_limit;
 			}
 
@@ -1137,12 +1137,12 @@  void __init sanity_check_meminfo(void)
 	high_memory = __va(arm_lowmem_limit - 1) + 1;
 
 	/*
-	 * Round the memblock limit down to a section size.  This
+	 * Round the memblock limit down to a pmd size.  This
 	 * helps to ensure that we will allocate memory from the
-	 * last full section, which should be mapped.
+	 * last full pmd, which should be mapped.
 	 */
 	if (memblock_limit)
-		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
+		memblock_limit = round_down(memblock_limit, PMD_SIZE);
 	if (!memblock_limit)
 		memblock_limit = arm_lowmem_limit;