diff mbox series

[-next] crash: Fix riscv64 crash memory reserve dead loop

Message ID 20240802090105.3871929-1-ruanjinjie@huawei.com (mailing list archive)
State Superseded
Headers show
Series [-next] crash: Fix riscv64 crash memory reserve dead loop | expand

Checks

Context Check Description
conchuod/vmtest-for-next-PR fail merge-conflict

Commit Message

Jinjie Ruan Aug. 2, 2024, 9:01 a.m. UTC
On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
will cause system stall as below:

	 Zone ranges:
	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
	   Normal   empty
	 Movable zone start for each node
	 Early memory node ranges
	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
	(stall here)

commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
bug") fixed this on 32-bit architectures. However, the problem is not
completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on a
64-bit architecture, for example when system memory is equal to
CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:

	-> reserve_crashkernel_generic() and high is true
	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).

Before the refactor in commit 9c08a2a139fe ("x86: kdump: use generic
interface to simplify crashkernel reservation code"), x86 did not try to
reserve crash memory at low if it failed to allocate above high 4G.
However, before the refactor in commit fdc268232dbba ("arm64: kdump: use
generic interface to simplify crashkernel reservation"), arm64 did try to
reserve crash memory at low if it failed above high 4G. For 64-bit
systems, this fallback brings little benefit; remove it to fix this bug
and align with the native x86 implementation.

After this patch, it prints:
	cannot allocate crashkernel (size:0x1f400000)

Fixes: 39365395046f ("riscv: kdump: use generic interface to simplify crashkernel reservation")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/crash_reserve.c | 9 ---------
 1 file changed, 9 deletions(-)

Comments

Baoquan He Aug. 2, 2024, 10:11 a.m. UTC | #1
On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
> On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
> will cause system stall as below:
> 
> 	 Zone ranges:
> 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
> 	   Normal   empty
> 	 Movable zone start for each node
> 	 Early memory node ranges
> 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
> 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
> 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> 	(stall here)
> 
> commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> bug") fix this on 32-bit architecture. However, the problem is not
> completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
> architecture, for example, when system memory is equal to
> CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:

Interesting, I didn't expect risc-v to define them like this.

#define CRASH_ADDR_LOW_MAX              dma32_phys_limit
#define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()
> 
> 	-> reserve_crashkernel_generic() and high is true
> 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
> 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
> 
> Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
> to simplify crashkernel reservation code"), x86 do not try to reserve crash
> memory at low if it fails to alloc above high 4G. However before refator in
> commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
> crashkernel reservation"), arm64 try to reserve crash memory at low if it
> fails above high 4G. For 64-bit systems, this attempt is less beneficial
> than the opposite, remove it to fix this bug and align with native x86
> implementation.

And I don't like the idea of a crashkernel=,high failure falling back to
an attempt in the low area, so this looks good to me.

> 
> After this patch, it print:
> 	cannot allocate crashkernel (size:0x1f400000)
> 
> Fixes: 39365395046f ("riscv: kdump: use generic interface to simplify crashkernel reservation")
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
>  kernel/crash_reserve.c | 9 ---------
>  1 file changed, 9 deletions(-)

Acked-by: Baoquan He <bhe@redhat.com>

> 
> diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
> index 5387269114f6..69e4b8b7b969 100644
> --- a/kernel/crash_reserve.c
> +++ b/kernel/crash_reserve.c
> @@ -420,15 +420,6 @@ void __init reserve_crashkernel_generic(char *cmdline,
>  				goto retry;
>  		}
>  
> -		/*
> -		 * For crashkernel=size[KMG],high, if the first attempt was
> -		 * for high memory, fall back to low memory.
> -		 */
> -		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
> -			search_end = CRASH_ADDR_LOW_MAX;
> -			search_base = 0;
> -			goto retry;
> -		}
>  		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>  			crash_size);
>  		return;
> -- 
> 2.34.1
>
Alexandre Ghiti Aug. 2, 2024, 12:24 p.m. UTC | #2
Hi Jinjie,

On 02/08/2024 11:01, Jinjie Ruan wrote:
> On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
> will cause system stall as below:
>
> 	 Zone ranges:
> 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
> 	   Normal   empty
> 	 Movable zone start for each node
> 	 Early memory node ranges
> 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
> 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
> 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> 	(stall here)
>
> commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop


I can't find this revision; was this patch merged in 6.11?


> bug") fix this on 32-bit architecture. However, the problem is not
> completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
> architecture, for example, when system memory is equal to
> CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:
>
> 	-> reserve_crashkernel_generic() and high is true
> 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
> 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
>
> Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
> to simplify crashkernel reservation code"), x86 do not try to reserve crash
> memory at low if it fails to alloc above high 4G. However before refator in
> commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
> crashkernel reservation"), arm64 try to reserve crash memory at low if it
> fails above high 4G. For 64-bit systems, this attempt is less beneficial
> than the opposite, remove it to fix this bug and align with native x86
> implementation.
>
> After this patch, it print:
> 	cannot allocate crashkernel (size:0x1f400000)
>
> Fixes: 39365395046f ("riscv: kdump: use generic interface to simplify crashkernel reservation")


Your patch subject indicates "-next" but I see this commit ^ landed in
6.7, so I think we should merge it now; let me know if I missed something.

Thanks,

Alex


> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
>   kernel/crash_reserve.c | 9 ---------
>   1 file changed, 9 deletions(-)
>
> diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
> index 5387269114f6..69e4b8b7b969 100644
> --- a/kernel/crash_reserve.c
> +++ b/kernel/crash_reserve.c
> @@ -420,15 +420,6 @@ void __init reserve_crashkernel_generic(char *cmdline,
>   				goto retry;
>   		}
>   
> -		/*
> -		 * For crashkernel=size[KMG],high, if the first attempt was
> -		 * for high memory, fall back to low memory.
> -		 */
> -		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
> -			search_end = CRASH_ADDR_LOW_MAX;
> -			search_base = 0;
> -			goto retry;
> -		}
>   		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>   			crash_size);
>   		return;
Jinjie Ruan Aug. 5, 2024, 2:01 a.m. UTC | #3
On 2024/8/2 20:24, Alexandre Ghiti wrote:
> Hi Jinjie,
> 
> On 02/08/2024 11:01, Jinjie Ruan wrote:
>> On RISCV64 Qemu machine with 512MB memory, cmdline
>> "crashkernel=500M,high"
>> will cause system stall as below:
>>
>>      Zone ranges:
>>        DMA32    [mem 0x0000000080000000-0x000000009fffffff]
>>        Normal   empty
>>      Movable zone start for each node
>>      Early memory node ranges
>>        node   0: [mem 0x0000000080000000-0x000000008005ffff]
>>        node   0: [mem 0x0000000080060000-0x000000009fffffff]
>>      Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
>>     (stall here)
>>
>> commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> 
> 
> I can't find this revision, was this patch merged in 6.11

Yes, it is in linux-next.


> 
> 
>> bug") fix this on 32-bit architecture. However, the problem is not
>> completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on
>> 64-bit
>> architecture, for example, when system memory is equal to
>> CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also
>> occur:
>>
>>     -> reserve_crashkernel_generic() and high is true
>>        -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
>>           -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
>>              (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
>>
>> Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic
>> interface
>> to simplify crashkernel reservation code"), x86 do not try to reserve
>> crash
>> memory at low if it fails to alloc above high 4G. However before
>> refator in
>> commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
>> crashkernel reservation"), arm64 try to reserve crash memory at low if it
>> fails above high 4G. For 64-bit systems, this attempt is less beneficial
>> than the opposite, remove it to fix this bug and align with native x86
>> implementation.
>>
>> After this patch, it print:
>>     cannot allocate crashkernel (size:0x1f400000)
>>
>> Fixes: 39365395046f ("riscv: kdump: use generic interface to simplify
>> crashkernel reservation")
> 
> 
> Your patch subject indicates "-next" but I see this commit ^ landed in
> 6.7, so I think we should merge it now, let me know if I missed something.
> 
> Thanks,
> 
> Alex
> 
> 
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>> ---
>>   kernel/crash_reserve.c | 9 ---------
>>   1 file changed, 9 deletions(-)
>>
>> diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
>> index 5387269114f6..69e4b8b7b969 100644
>> --- a/kernel/crash_reserve.c
>> +++ b/kernel/crash_reserve.c
>> @@ -420,15 +420,6 @@ void __init reserve_crashkernel_generic(char
>> *cmdline,
>>                   goto retry;
>>           }
>>   -        /*
>> -         * For crashkernel=size[KMG],high, if the first attempt was
>> -         * for high memory, fall back to low memory.
>> -         */
>> -        if (high && search_end == CRASH_ADDR_HIGH_MAX) {
>> -            search_end = CRASH_ADDR_LOW_MAX;
>> -            search_base = 0;
>> -            goto retry;
>> -        }
>>           pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>>               crash_size);
>>           return;
> 
Catalin Marinas Aug. 6, 2024, 7:10 p.m. UTC | #4
To Jinjie, if you make generic changes that affect other architectures,
please either cc the individual lists/maintainers or at least cross-post
to linux-arch. I don't follow lkml, there's just too much traffic there.

On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
> On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
> > On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
> > will cause system stall as below:
> > 
> > 	 Zone ranges:
> > 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
> > 	   Normal   empty
> > 	 Movable zone start for each node
> > 	 Early memory node ranges
> > 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
> > 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
> > 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> > 	(stall here)
> > 
> > commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> > bug") fix this on 32-bit architecture. However, the problem is not
> > completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
> > architecture, for example, when system memory is equal to
> > CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:
> 
> Interesting, I didn't expect risc-v defining them like these.
> 
> #define CRASH_ADDR_LOW_MAX              dma32_phys_limit
> #define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()

arm64 defines the high limit as PHYS_MASK+1; it doesn't need to be
dynamic, and x86 does something similar (SZ_64T). Not sure why the
generic code and riscv define it like this.

> > 	-> reserve_crashkernel_generic() and high is true
> > 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> > 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
> > 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
> > 
> > Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
> > to simplify crashkernel reservation code"), x86 do not try to reserve crash
> > memory at low if it fails to alloc above high 4G. However before refator in
> > commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
> > crashkernel reservation"), arm64 try to reserve crash memory at low if it
> > fails above high 4G. For 64-bit systems, this attempt is less beneficial
> > than the opposite, remove it to fix this bug and align with native x86
> > implementation.
> 
> And I don't like the idea crashkernel=,high failure will fallback to
> attempt in low area, so this looks good to me.

Well, I kind of liked this behaviour. One can specify ,high as a
preference rather than forcing a range. The arm64 land has different
platforms with some constrained memory layouts. Such a fallback works well
as a default command-line option shipped with distros without having to
guess the SoC memory layout.

Something like below should fix the issue as well (untested):

diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
index d3b4cd12bdd1..ae92d6745ef4 100644
--- a/kernel/crash_reserve.c
+++ b/kernel/crash_reserve.c
@@ -420,7 +420,8 @@ void __init reserve_crashkernel_generic(char *cmdline,
 		 * For crashkernel=size[KMG],high, if the first attempt was
 		 * for high memory, fall back to low memory.
 		 */
-		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
+		if (high && search_end == CRASH_ADDR_HIGH_MAX &&
+		    CRASH_ADDR_LOW_MAX < CRASH_ADDR_HIGH_MAX) {
 			search_end = CRASH_ADDR_LOW_MAX;
 			search_base = 0;
 			goto retry;
Catalin Marinas Aug. 6, 2024, 7:34 p.m. UTC | #5
On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
> On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
> > On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
> > > On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
> > > will cause system stall as below:
> > > 
> > > 	 Zone ranges:
> > > 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
> > > 	   Normal   empty
> > > 	 Movable zone start for each node
> > > 	 Early memory node ranges
> > > 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
> > > 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
> > > 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> > > 	(stall here)
> > > 
> > > commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> > > bug") fix this on 32-bit architecture. However, the problem is not
> > > completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
> > > architecture, for example, when system memory is equal to
> > > CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:
> > 
> > Interesting, I didn't expect risc-v defining them like these.
> > 
> > #define CRASH_ADDR_LOW_MAX              dma32_phys_limit
> > #define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()
> 
> arm64 defines the high limit as PHYS_MASK+1, it doesn't need to be
> dynamic and x86 does something similar (SZ_64T). Not sure why the
> generic code and riscv define it like this.
> 
> > > 	-> reserve_crashkernel_generic() and high is true
> > > 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> > > 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
> > > 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
> > > 
> > > Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
> > > to simplify crashkernel reservation code"), x86 do not try to reserve crash
> > > memory at low if it fails to alloc above high 4G. However before refator in
> > > commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
> > > crashkernel reservation"), arm64 try to reserve crash memory at low if it
> > > fails above high 4G. For 64-bit systems, this attempt is less beneficial
> > > than the opposite, remove it to fix this bug and align with native x86
> > > implementation.
> > 
> > And I don't like the idea crashkernel=,high failure will fallback to
> > attempt in low area, so this looks good to me.
> 
> Well, I kind of liked this behaviour. One can specify ,high as a
> preference rather than forcing a range. The arm64 land has different
> platforms with some constrained memory layouts. Such fallback works well
> as a default command line option shipped with distros without having to
> guess the SoC memory layout.

I haven't tried, but it's possible that this patch also breaks those
arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
memblock_end_of_DRAM(). Here all memory would be low and, in the absence
of a fallback, the allocation fails.

So, my strong preference would be to re-instate the current behaviour
and work around the infinite loop in a different way.

Thanks.
Jinjie Ruan Aug. 7, 2024, 1:40 a.m. UTC | #6
On 2024/8/7 3:10, Catalin Marinas wrote:
> To Jinjie, if you make generic changes that affect other architectures,
> please either cc the individual lists/maintainers or at least cross-post
> to linux-arch. I don't follow lkml, there's just too much traffic there.

Sorry, I forgot to Cc the other architectures.

> 
> On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
>> On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
>>> On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
>>> will cause system stall as below:
>>>
>>> 	 Zone ranges:
>>> 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
>>> 	   Normal   empty
>>> 	 Movable zone start for each node
>>> 	 Early memory node ranges
>>> 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
>>> 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
>>> 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
>>> 	(stall here)
>>>
>>> commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
>>> bug") fix this on 32-bit architecture. However, the problem is not
>>> completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
>>> architecture, for example, when system memory is equal to
>>> CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:
>>
>> Interesting, I didn't expect risc-v defining them like these.
>>
>> #define CRASH_ADDR_LOW_MAX              dma32_phys_limit
>> #define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()
> 
> arm64 defines the high limit as PHYS_MASK+1, it doesn't need to be
> dynamic and x86 does something similar (SZ_64T). Not sure why the
> generic code and riscv define it like this.
> 
>>> 	-> reserve_crashkernel_generic() and high is true
>>> 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
>>> 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
>>> 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
>>>
>>> Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
>>> to simplify crashkernel reservation code"), x86 do not try to reserve crash
>>> memory at low if it fails to alloc above high 4G. However before refator in
>>> commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
>>> crashkernel reservation"), arm64 try to reserve crash memory at low if it
>>> fails above high 4G. For 64-bit systems, this attempt is less beneficial
>>> than the opposite, remove it to fix this bug and align with native x86
>>> implementation.
>>
>> And I don't like the idea crashkernel=,high failure will fallback to
>> attempt in low area, so this looks good to me.
> 
> Well, I kind of liked this behaviour. One can specify ,high as a
> preference rather than forcing a range. The arm64 land has different
> platforms with some constrained memory layouts. Such fallback works well
> as a default command line option shipped with distros without having to
> guess the SoC memory layout.
> 
> Something like below should fix the issue as well (untested):

I tested it on QEMU, and it fixes this dead-loop problem.

> 
> diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
> index d3b4cd12bdd1..ae92d6745ef4 100644
> --- a/kernel/crash_reserve.c
> +++ b/kernel/crash_reserve.c
> @@ -420,7 +420,8 @@ void __init reserve_crashkernel_generic(char *cmdline,
>  		 * For crashkernel=size[KMG],high, if the first attempt was
>  		 * for high memory, fall back to low memory.
>  		 */
> -		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
> +		if (high && search_end == CRASH_ADDR_HIGH_MAX &&
> +		    CRASH_ADDR_LOW_MAX < CRASH_ADDR_HIGH_MAX) {
>  			search_end = CRASH_ADDR_LOW_MAX;
>  			search_base = 0;
>  			goto retry;
>
Jinjie Ruan Aug. 8, 2024, 7:56 a.m. UTC | #7
On 2024/8/7 3:34, Catalin Marinas wrote:
> On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
>> On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
>>> On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
>>>> On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
>>>> will cause system stall as below:
>>>>
>>>> 	 Zone ranges:
>>>> 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
>>>> 	   Normal   empty
>>>> 	 Movable zone start for each node
>>>> 	 Early memory node ranges
>>>> 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
>>>> 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
>>>> 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
>>>> 	(stall here)
>>>>
>>>> commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
>>>> bug") fix this on 32-bit architecture. However, the problem is not
>>>> completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
>>>> architecture, for example, when system memory is equal to
>>>> CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:
>>>
>>> Interesting, I didn't expect risc-v defining them like these.
>>>
>>> #define CRASH_ADDR_LOW_MAX              dma32_phys_limit
>>> #define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()
>>
>> arm64 defines the high limit as PHYS_MASK+1, it doesn't need to be
>> dynamic and x86 does something similar (SZ_64T). Not sure why the
>> generic code and riscv define it like this.
>>
>>>> 	-> reserve_crashkernel_generic() and high is true
>>>> 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
>>>> 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
>>>> 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
>>>>
>>>> Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
>>>> to simplify crashkernel reservation code"), x86 do not try to reserve crash
>>>> memory at low if it fails to alloc above high 4G. However before refator in
>>>> commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
>>>> crashkernel reservation"), arm64 try to reserve crash memory at low if it
>>>> fails above high 4G. For 64-bit systems, this attempt is less beneficial
>>>> than the opposite, remove it to fix this bug and align with native x86
>>>> implementation.
>>>
>>> And I don't like the idea crashkernel=,high failure will fallback to
>>> attempt in low area, so this looks good to me.
>>
>> Well, I kind of liked this behaviour. One can specify ,high as a
>> preference rather than forcing a range. The arm64 land has different
>> platforms with some constrained memory layouts. Such fallback works well
>> as a default command line option shipped with distros without having to
>> guess the SoC memory layout.
> 
> I haven't tried but it's possible that this patch also breaks those
> arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
> memblock_end_of_DRAM(). Here all memory would be low and in the absence
> of no fallback, it fails to allocate.
> 
> So, my strong preference would be to re-instate the current behaviour
> and work around the infinite loop in a different way.

Hi Baoquan, what's your opinion?

Should only this patch be reinstated, or all three dead-loop fix patches?

> 
> Thanks.
>
Baoquan He Aug. 9, 2024, 1:56 a.m. UTC | #8
On 08/08/24 at 03:56pm, Jinjie Ruan wrote:
> 
> 
> On 2024/8/7 3:34, Catalin Marinas wrote:
> > On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
> >> On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
> >>> On 08/02/24 at 05:01pm, Jinjie Ruan wrote:
> >>>> On RISCV64 Qemu machine with 512MB memory, cmdline "crashkernel=500M,high"
> >>>> will cause system stall as below:
> >>>>
> >>>> 	 Zone ranges:
> >>>> 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
> >>>> 	   Normal   empty
> >>>> 	 Movable zone start for each node
> >>>> 	 Early memory node ranges
> >>>> 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
> >>>> 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
> >>>> 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> >>>> 	(stall here)
> >>>>
> >>>> commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> >>>> bug") fix this on 32-bit architecture. However, the problem is not
> >>>> completely solved. If `CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX` on 64-bit
> >>>> architecture, for example, when system memory is equal to
> >>>> CRASH_ADDR_LOW_MAX on RISCV64, the following infinite loop will also occur:
> >>>
> >>> Interesting, I didn't expect risc-v defining them like these.
> >>>
> >>> #define CRASH_ADDR_LOW_MAX              dma32_phys_limit
> >>> #define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()
> >>
> >> arm64 defines the high limit as PHYS_MASK+1, it doesn't need to be
> >> dynamic and x86 does something similar (SZ_64T). Not sure why the
> >> generic code and riscv define it like this.
> >>
> >>>> 	-> reserve_crashkernel_generic() and high is true
> >>>> 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> >>>> 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly
> >>>> 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
> >>>>
> >>>> Before refactor in commit 9c08a2a139fe ("x86: kdump: use generic interface
> >>>> to simplify crashkernel reservation code"), x86 do not try to reserve crash
> >>>> memory at low if it fails to alloc above high 4G. However before refator in
> >>>> commit fdc268232dbba ("arm64: kdump: use generic interface to simplify
> >>>> crashkernel reservation"), arm64 try to reserve crash memory at low if it
> >>>> fails above high 4G. For 64-bit systems, this attempt is less beneficial
> >>>> than the opposite, remove it to fix this bug and align with native x86
> >>>> implementation.
> >>>
> >>> And I don't like the idea crashkernel=,high failure will fallback to
> >>> attempt in low area, so this looks good to me.
> >>
> >> Well, I kind of liked this behaviour. One can specify ,high as a
> >> preference rather than forcing a range. The arm64 land has different
> >> platforms with some constrained memory layouts. Such fallback works well
> >> as a default command line option shipped with distros without having to
> >> guess the SoC memory layout.
> > 
> > I haven't tried but it's possible that this patch also breaks those
> > arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
> > memblock_end_of_DRAM(). Here all memory would be low and in the absence
> > of no fallback, it fails to allocate.
> > 
> > So, my strong preference would be to re-instate the current behaviour
> > and work around the infinite loop in a different way.
> 
> Hi, baoquan, What's your opinion?
> 
> Only this patch should be re-instate or all the 3 dead loop fix patch?

I am not sure which approach Catalin is suggesting we take.

Hi Catalin,

Could you say a bit more about your preference so that Jinjie can
proceed accordingly?

Thanks
Baoquan
Catalin Marinas Aug. 9, 2024, 9:51 a.m. UTC | #9
On Thu, Aug 08, 2024 at 03:56:35PM +0800, Jinjie Ruan wrote:
> On 2024/8/7 3:34, Catalin Marinas wrote:
> > On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
> >> On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
> >>> And I don't like the idea crashkernel=,high failure will fallback to
> >>> attempt in low area, so this looks good to me.
> >>
> >> Well, I kind of liked this behaviour. One can specify ,high as a
> >> preference rather than forcing a range. The arm64 land has different
> >> platforms with some constrained memory layouts. Such fallback works well
> >> as a default command line option shipped with distros without having to
> >> guess the SoC memory layout.
> > 
> > I haven't tried but it's possible that this patch also breaks those
> > arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
> > memblock_end_of_DRAM(). Here all memory would be low and in the absence
> > of no fallback, it fails to allocate.
> > 
> > So, my strong preference would be to re-instate the current behaviour
> > and work around the infinite loop in a different way.
> 
> Hi, baoquan, What's your opinion?
> 
> Only this patch should be re-instate or all the 3 dead loop fix patch?

Only the riscv64 patch that removes the ,high reservation fallback
to ,low. From this series:

https://lore.kernel.org/r/20240719095735.1912878-1-ruanjinjie@huawei.com/

the first two fixes look fine (x86_32). The third one (arm32), not sure
why it's in the series called "crash: Fix x86_32 memory reserve dead
loop bug". Does it fix a problem on arm32? Anyway, I'm not against it
getting merged but I'm not maintaining arm32. If the first two patches
could be merged for 6.11, I think the arm32 one is more of a 6.12
material (unless it does fix something).

On the riscv64 patch removing the high->low fallback to avoid the
infinite loop, I'd rather replace it with something similar to the
x86_32 fix in the series above. I suggested something in the main if
block but, looking at the x86_32 fix, for consistency, I think it would
look better as something like:

diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
index d3b4cd12bdd1..64d44a52c011 100644
--- a/kernel/crash_reserve.c
+++ b/kernel/crash_reserve.c
@@ -423,7 +423,8 @@ void __init reserve_crashkernel_generic(char *cmdline,
 		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
 			search_end = CRASH_ADDR_LOW_MAX;
 			search_base = 0;
-			goto retry;
+			if (search_end != CRASH_ADDR_HIGH_MAX)
+				goto retry;
 		}
 		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 			crash_size);

In summary, just replace the riscv64 fix with something along the lines
of the diff above (or pick whatever you prefer that still keeps the
fallback).

Thanks.
Jinjie Ruan Aug. 9, 2024, 10:15 a.m. UTC | #10
On 2024/8/9 17:51, Catalin Marinas wrote:
> On Thu, Aug 08, 2024 at 03:56:35PM +0800, Jinjie Ruan wrote:
>> On 2024/8/7 3:34, Catalin Marinas wrote:
>>> On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
>>>> On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:
>>>>> And I don't like the idea that a crashkernel=,high failure will fall
>>>>> back to an attempt in the low area, so this looks good to me.
>>>>
>>>> Well, I kind of liked this behaviour. One can specify ,high as a
>>>> preference rather than forcing a range. The arm64 land has different
>>>> platforms with some constrained memory layouts. Such fallback works well
>>>> as a default command line option shipped with distros without having to
>>>> guess the SoC memory layout.
>>>
>>> I haven't tried but it's possible that this patch also breaks those
>>> arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
>>> memblock_end_of_DRAM(). Here all memory would be low and, in the
>>> absence of a fallback, it fails to allocate.
>>>
>>> So, my strong preference would be to re-instate the current behaviour
>>> and work around the infinite loop in a different way.
>>
>> Hi Baoquan, what's your opinion?
>> 
>> Should only this patch be reinstated, or all 3 dead-loop fix patches?
> 
> Only the riscv64 patch that removes the ,high reservation fallback
> to ,low. From this series:
> 
> https://lore.kernel.org/r/20240719095735.1912878-1-ruanjinjie@huawei.com/
> 
> the first two fixes look fine (x86_32). The third one (arm32), not sure
> why it's in the series called "crash: Fix x86_32 memory reserve dead
> loop bug". Does it fix a problem on arm32? Anyway, I'm not against it
> getting merged but I'm not maintaining arm32. If the first two patches
> could be merged for 6.11, I think the arm32 one is more of a 6.12
> material (unless it does fix something).
> 
> On the riscv64 patch removing the high->low fallback to avoid the
> infinite loop, I'd rather replace it with something similar to the
> x86_32 fix in the series above. I suggested something in the main if
> block but, looking at the x86_32 fix, for consistency, I think it would
> look better as something like:
> 
> diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
> index d3b4cd12bdd1..64d44a52c011 100644
> --- a/kernel/crash_reserve.c
> +++ b/kernel/crash_reserve.c
> @@ -423,7 +423,8 @@ void __init reserve_crashkernel_generic(char *cmdline,
>  		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
>  			search_end = CRASH_ADDR_LOW_MAX;
>  			search_base = 0;
> -			goto retry;
> +			if (search_end != CRASH_ADDR_HIGH_MAX)
> +				goto retry;
>  		}
>  		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>  			crash_size);
> 
> In summary, just replace the riscv64 fix with something along the lines
> of the diff above (or pick whatever you prefer that still keeps the
> fallback).

Hi Andrew,

Could you please remove the riscv64 fix from your mm tree, as Catalin
suggested? We will send a new patch soon.

> 
> Thanks.
>
Petr Tesařík Aug. 13, 2024, 8:40 a.m. UTC | #11
Hi Catalin,

On Tue, 6 Aug 2024 20:34:42 +0100
Catalin Marinas <catalin.marinas@arm.com> wrote:

> On Tue, Aug 06, 2024 at 08:10:30PM +0100, Catalin Marinas wrote:
> > On Fri, Aug 02, 2024 at 06:11:01PM +0800, Baoquan He wrote:  
> > > On 08/02/24 at 05:01pm, Jinjie Ruan wrote:  
> > > > On a RISCV64 QEMU machine with 512MB memory, the cmdline "crashkernel=500M,high"
> > > > causes the system to stall as shown below:
> > > > 
> > > > 	 Zone ranges:
> > > > 	   DMA32    [mem 0x0000000080000000-0x000000009fffffff]
> > > > 	   Normal   empty
> > > > 	 Movable zone start for each node
> > > > 	 Early memory node ranges
> > > > 	   node   0: [mem 0x0000000080000000-0x000000008005ffff]
> > > > 	   node   0: [mem 0x0000000080060000-0x000000009fffffff]
> > > > 	 Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
> > > > 	(stall here)
> > > > 
> > > > commit 5d99cadf1568 ("crash: fix x86_32 crash memory reserve dead loop
> > > > bug") fixed this on 32-bit architectures. However, the problem is not
> > > > completely solved. If `CRASH_ADDR_LOW_MAX == CRASH_ADDR_HIGH_MAX` on a
> > > > 64-bit architecture (for example, when system memory ends exactly at
> > > > CRASH_ADDR_LOW_MAX on RISCV64), the following infinite loop also occurs:  
> > > 
> > > Interesting, I didn't expect risc-v to define them like this.
> > > 
> > > #define CRASH_ADDR_LOW_MAX              dma32_phys_limit
> > > #define CRASH_ADDR_HIGH_MAX             memblock_end_of_DRAM()  
> > 
> > arm64 defines the high limit as PHYS_MASK+1; it doesn't need to be
> > dynamic, and x86 does something similar (SZ_64T). Not sure why the
> > generic code and riscv define it like this.
> >   
> > > > 	-> reserve_crashkernel_generic() and high is true
> > > > 	   -> alloc at [CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX] fail
> > > > 	      -> alloc at [0, CRASH_ADDR_LOW_MAX] fail and repeatedly  
> > > > 	         (because CRASH_ADDR_LOW_MAX = CRASH_ADDR_HIGH_MAX).
> > > > 
> > > > Before the refactor in commit 9c08a2a139fe ("x86: kdump: use generic
> > > > interface to simplify crashkernel reservation code"), x86 did not try
> > > > to reserve crash memory in the low range if the allocation above 4G
> > > > failed. However, before the refactor in commit fdc268232dbba ("arm64:
> > > > kdump: use generic interface to simplify crashkernel reservation"),
> > > > arm64 did try the low range if the allocation above 4G failed. For
> > > > 64-bit systems this fallback brings little benefit, so remove it to
> > > > fix this bug and align with the original x86 implementation.  
> > > 
> > > And I don't like the idea that a crashkernel=,high failure will fall
> > > back to an attempt in the low area, so this looks good to me.  
> > 
> > Well, I kind of liked this behaviour. One can specify ,high as a
> > preference rather than forcing a range. The arm64 land has different
> > platforms with some constrained memory layouts. Such a fallback works well
> > as a default command line option shipped with distros without having to
> > guess the SoC memory layout.  
> 
> I haven't tried but it's possible that this patch also breaks those
> arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
> memblock_end_of_DRAM(). Here all memory would be low and, in the absence
> of a fallback, it fails to allocate.

I'm afraid you've just opened a Pandora's box... ;-)

Another (unrelated) patch series made us aware of a platform where RAM
starts at 32G, but IIUC the host bridge maps 32G-33G to bus addresses
0-1G, and there is a device on that bus which can produce only 30-bit
addresses.

Now, what was the idea behind allocating some crash memory "low"?
Right, it should allow the crash kernel to access devices with
addressing constraints. So, on the above-mentioned platform, allocating
"low" would in fact mean allocating between 32G and 33G (in host address
domain).

Should we rethink the whole concept of high/low?

Petr T
Catalin Marinas Aug. 13, 2024, 12:04 p.m. UTC | #12
Hi Petr,

On Tue, Aug 13, 2024 at 10:40:06AM +0200, Petr Tesařík wrote:
> On Tue, 6 Aug 2024 20:34:42 +0100
> Catalin Marinas <catalin.marinas@arm.com> wrote:
> > I haven't tried but it's possible that this patch also breaks those
> > arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
> > memblock_end_of_DRAM(). Here all memory would be low and, in the
> > absence of a fallback, it fails to allocate.
> 
> I'm afraid you've just opened a Pandora's box... ;-)

Not that bad ;) but, yeah, this patch was dropped in favour of this:

https://lore.kernel.org/r/20240812062017.2674441-1-ruanjinjie@huawei.com/

> Another (unrelated) patch series made us aware of a platform where RAM
> starts at 32G, but IIUC the host bridge maps 32G-33G to bus addresses
> 0-1G, and there is a device on that bus which can produce only 30-bit
> addresses.
> 
> Now, what was the idea behind allocating some crash memory "low"?
> Right, it should allow the crash kernel to access devices with
> addressing constraints. So, on the above-mentioned platform, allocating
> "low" would in fact mean allocating between 32G and 33G (in host address
> domain).

Indeed. If that's not available, the crash kernel won't be able to boot
(unless the corresponding device is removed from DT or ACPI tables).

> Should we rethink the whole concept of high/low?

Yeah, it would be good to revisit those at some point. For the time
being, 'low' in this context on arm64 means ZONE_DMA memory, basically
the common denominator address range that supports all devices on an
SoC. For others like x86_32, this means the memory that the kernel can
actually map (not necessarily device/DMA related).

So, it's not always about the DMA capabilities but also what the crash
kernel can map (so somewhat different from the zone allocator case we've
been discussing in other threads).
Petr Tesařík Aug. 13, 2024, 1:33 p.m. UTC | #13
On Tue, 13 Aug 2024 13:04:31 +0100
Catalin Marinas <catalin.marinas@arm.com> wrote:

> Hi Petr,
> 
> On Tue, Aug 13, 2024 at 10:40:06AM +0200, Petr Tesařík wrote:
> > On Tue, 6 Aug 2024 20:34:42 +0100
> > Catalin Marinas <catalin.marinas@arm.com> wrote:  
> > > I haven't tried but it's possible that this patch also breaks those
> > > arm64 platforms with all RAM above 4GB when CRASH_ADDR_LOW_MAX is
> > > memblock_end_of_DRAM(). Here all memory would be low and, in the
> > > absence of a fallback, it fails to allocate.  
> > 
> > I'm afraid you've just opened a Pandora's box... ;-)  
> 
> Not that bad ;) but, yeah, this patch was dropped in favour of this:
> 
> https://lore.kernel.org/r/20240812062017.2674441-1-ruanjinjie@huawei.com/

Yes, I have noticed. That one simply preserves the status quo and a
fuzzy definition of "low".

> > Another (unrelated) patch series made us aware of a platform where RAM
> > starts at 32G, but IIUC the host bridge maps 32G-33G to bus addresses
> > 0-1G, and there is a device on that bus which can produce only 30-bit
> > addresses.
> > 
> > Now, what was the idea behind allocating some crash memory "low"?
> > Right, it should allow the crash kernel to access devices with
> > addressing constraints. So, on the above-mentioned platform, allocating
> > "low" would in fact mean allocating between 32G and 33G (in host address
> > domain).  
> 
> Indeed. If that's not available, the crash kernel won't be able to boot
> (unless the corresponding device is removed from DT or ACPI tables).

Then it may be able to boot, but it won't be able to save a crash dump
on disk or send it over the network, rendering the panic kernel
environment a bit less useful. 

> > Should we rethink the whole concept of high/low?  
> 
> Yeah, it would be good to revisit those at some point. For the time
> being, 'low' in this context on arm64 means ZONE_DMA memory, basically
> the common denominator address range that supports all devices on an
> SoC. For others like x86_32, this means the memory that the kernel can
> actually map (not necessarily device/DMA related).

Ah, right. I forgot that there are also constraints on the placement of
the kernel identity mapping in CPU physical address space.

> So, it's not always about the DMA capabilities but also what the crash
> kernel can map (so somewhat different from the zone allocator case we've
> been discussing in other threads).

It seems to me that a good panic kernel environment requires:

  1. memory where kernel text/data can be mapped (even at early init)
  2. memory that is accessible to I/O devices
  3. memory that can be allocated to user space (e.g. makedumpfile)

The first two blocks may require special placement in bus/CPU physical
address space, the third does not, but it needs to be big enough for
the workload.

I'll try to transform this knowledge into something actionable or even
reviewable.

For now, I agree there's nothing more to discuss.

Thanks
Petr T
diff mbox series

Patch

diff --git a/kernel/crash_reserve.c b/kernel/crash_reserve.c
index 5387269114f6..69e4b8b7b969 100644
--- a/kernel/crash_reserve.c
+++ b/kernel/crash_reserve.c
@@ -420,15 +420,6 @@  void __init reserve_crashkernel_generic(char *cmdline,
 				goto retry;
 		}
 
-		/*
-		 * For crashkernel=size[KMG],high, if the first attempt was
-		 * for high memory, fall back to low memory.
-		 */
-		if (high && search_end == CRASH_ADDR_HIGH_MAX) {
-			search_end = CRASH_ADDR_LOW_MAX;
-			search_base = 0;
-			goto retry;
-		}
 		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 			crash_size);
 		return;