Message ID: 20190809100800.5426-1-yanaijie@huawei.com (mailing list archive)
Series: implement KASLR for powerpc/fsl_booke/32
Hi Michael,

Is there anything more I should do to get this feature to meet the
requirements for mainline?

Thanks,
Jason

On 2019/8/9 18:07, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> Since CONFIG_RELOCATABLE is already supported, all we need to do is
> map or copy the kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
> entries are not suitable for mapping the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
>
> Entropy is derived from the banner and timer base, which will change every
> build and boot. This is not very safe, so additionally the bootloader may
> pass entropy via the /chosen/kaslr-seed node in the device tree.
>
> We will use the first 512M of low memory to randomize the kernel
> image. The memory will be split into 64M zones. We will use the lower 8
> bits of the entropy to decide the index of the 64M zone. Then we choose a
> 16K-aligned offset inside the 64M zone to put the kernel in.
>
>     KERNELBASE
>
>         |-->   64M   <--|
>         |               |
>         +---------------+    +----------------+---------------+
>         |               |....|    |kernel|    |               |
>         +---------------+    +----------------+---------------+
>         |                         |
>         |----->   offset    <-----|
>
>                               kernstart_virt_addr
>
> We also check whether we would overlap with some areas such as the dtb
> area, the initrd area or the crashkernel area. If we cannot find a proper
> area, kaslr will be disabled and the kernel will boot from its original
> location.
>
> Changes since v5:
>  - Rename M_IF_NEEDED to MAS2_M_IF_NEEDED
>  - Define some global variables as __ro_after_init
>  - Replace kimage_vaddr with kernstart_virt_addr
>  - Depend on RELOCATABLE, do not select it
>  - Modify the comment block below the SPDX tag
>  - Remove some useless headers in kaslr_booke.c and move the
>    is_second_reloc declaration to mmu_decl.h
>  - Remove DBG() and use pr_debug(), and rewrite the comment above
>    get_boot_seed()
>  - Add a patch to document the KASLR implementation
>  - Split a patch from patch #10 which exports the kaslr offset in
>    VMCOREINFO ELF notes
>  - Remove extra logic around finding the nokaslr string in cmdline
>  - Make regions static global and __initdata
>
> Changes since v4:
>  - Add Reviewed-by tag from Christophe
>  - Remove an unnecessary cast
>  - Remove unnecessary parentheses
>  - Fix a checkpatch warning
>
> Changes since v3:
>  - Add Reviewed-by and Tested-by tags from Diana
>  - Change the comment in fsl_booke_entry_mapping.S to be consistent
>    with the new code
>
> Changes since v2:
>  - Remove an unnecessary #ifdef
>  - Use SZ_64M instead of 0x4000000
>  - Call early_init_dt_scan_chosen() to init boot_command_line
>  - Rename kaslr_second_init() to kaslr_late_init()
>
> Changes since v1:
>  - Remove some useless 'extern' keywords
>  - Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
>  - Improve some assembly code
>  - Use memzero_explicit instead of memset
>  - Use boot_command_line and remove early_command_line
>  - Do not print the kaslr offset if kaslr is disabled
>
> Jason Yan (12):
>   powerpc: unify definition of M_IF_NEEDED
>   powerpc: move memstart_addr and kernstart_addr to init-common.c
>   powerpc: introduce kernstart_virt_addr to store the kernel base
>   powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>   powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>   powerpc/fsl_booke/32: implement KASLR infrastructure
>   powerpc/fsl_booke/32: randomize the kernel image offset
>   powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>   powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>   powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>   powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
>   powerpc/fsl_booke/32: Document KASLR implementation
>
>  Documentation/powerpc/kaslr-booke32.rst       |  42 ++
>  arch/powerpc/Kconfig                          |  11 +
>  arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>  arch/powerpc/include/asm/page.h               |   7 +
>  arch/powerpc/kernel/Makefile                  |   1 +
>  arch/powerpc/kernel/early_32.c                |   2 +-
>  arch/powerpc/kernel/exceptions-64e.S          |  12 +-
>  arch/powerpc/kernel/fsl_booke_entry_mapping.S |  27 +-
>  arch/powerpc/kernel/head_fsl_booke.S          |  55 ++-
>  arch/powerpc/kernel/kaslr_booke.c             | 393 ++++++++++++++++++
>  arch/powerpc/kernel/machine_kexec.c           |   1 +
>  arch/powerpc/kernel/misc_64.S                 |   7 +-
>  arch/powerpc/kernel/setup-common.c            |  20 +
>  arch/powerpc/mm/init-common.c                 |   7 +
>  arch/powerpc/mm/init_32.c                     |   5 -
>  arch/powerpc/mm/init_64.c                     |   5 -
>  arch/powerpc/mm/mmu_decl.h                    |  11 +
>  arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>  18 files changed, 572 insertions(+), 52 deletions(-)
>  create mode 100644 Documentation/powerpc/kaslr-booke32.rst
>  create mode 100644 arch/powerpc/kernel/kaslr_booke.c
A polite ping :)

What else should I do now?

Thanks

On 2019/8/19 14:12, Jason Yan wrote:
> Hi Michael,
>
> Is there anything more I should do to get this feature to meet the
> requirements for mainline?
>
> Thanks,
> Jason
>
> On 2019/8/9 18:07, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> [cover letter, changelog and diffstat snipped]
Jason Yan <yanaijie@huawei.com> writes:
> A polite ping :)
>
> What else should I do now?

That's a good question.

Scott, are you still maintaining FSL bits, and if so, any comments? Or
should I take this?

cheers

> On 2019/8/19 14:12, Jason Yan wrote:
>> Hi Michael,
>>
>> Is there anything more I should do to get this feature to meet the
>> requirements for mainline?
>>
>> [rest of quoted thread snipped]
On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> Since CONFIG_RELOCATABLE has already supported, what we need to do is
> map or copy kernel to a proper place and relocate.

Have you tested this with a kernel that was loaded at a non-zero address?
I tried loading a kernel at 0x04000000 (by changing the address in the
uImage, and setting bootm_low to 04000000 in U-Boot), and it works without
CONFIG_RANDOMIZE and fails with it.

> Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> entries are not suitable to map the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
>
> Entropy is derived from the banner and timer base, which will change every
> build and boot. This not so much safe so additionally the bootloader may
> pass entropy via the /chosen/kaslr-seed node in device tree.

How complicated would it be to directly access the HW RNG (if present) that
early in the boot? It'd be nice if a U-Boot update weren't required (and
particularly concerning that KASLR would appear to work without a U-Boot
update, but without decent entropy).

-Scott
On Tue, 2019-08-27 at 23:05 -0500, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> > Freescale Book-E
> > parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> > entries are not suitable to map the kernel directly in a randomized
> > region, so we chose to copy the kernel to a proper place and restart to
> > relocate.
> >
> > Entropy is derived from the banner and timer base, which will change every
> > build and boot. This not so much safe so additionally the bootloader may
> > pass entropy via the /chosen/kaslr-seed node in device tree.
>
> How complicated would it be to directly access the HW RNG (if present) that
> early in the boot? It'd be nice if a U-Boot update weren't required (and
> particularly concerning that KASLR would appear to work without a U-Boot
> update, but without decent entropy).

OK, I see that kaslr-seed is used on some other platforms, though arm64 aborts
KASLR if it doesn't get a seed. I'm not sure if that's better than a loud
warning message (or if it was a conscious choice rather than just not having
an alternative implemented), but silently using poor entropy for something
like this seems bad.

-Scott
On Tue, 2019-08-27 at 11:33 +1000, Michael Ellerman wrote:
> Jason Yan <yanaijie@huawei.com> writes:
> > A polite ping :)
> >
> > What else should I do now?
>
> That's a good question.
>
> Scott, are you still maintaining FSL bits,

Sort of... now that it's become very low volume, it's easy to forget when
something does show up (or miss it if I'm not CCed). It'd probably help if I
were to just ack patches instead of thinking "I'll do a pull request for this
later" when it's just one or two patches per cycle.

-Scott
Scott Wood <oss@buserror.net> writes:
> On Tue, 2019-08-27 at 11:33 +1000, Michael Ellerman wrote:
>> Jason Yan <yanaijie@huawei.com> writes:
>> > A polite ping :)
>> >
>> > What else should I do now?
>>
>> That's a good question.
>>
>> Scott, are you still maintaining FSL bits,
>
> Sort of... now that it's become very low volume, it's easy to forget when
> something does show up (or miss it if I'm not CCed). It'd probably help if I
> were to just ack patches instead of thinking "I'll do a pull request for
> this later" when it's just one or two patches per cycle.

Yep, understand. Just sending acks is totally fine if you don't have enough
for a pull request.

cheers
On 2019/8/28 12:05, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate.
>
> Have you tested this with a kernel that was loaded at a non-zero address?
> I tried loading a kernel at 0x04000000 (by changing the address in the
> uImage, and setting bootm_low to 04000000 in U-Boot), and it works without
> CONFIG_RANDOMIZE and fails with.

Not yet. I will test this kind of case in the next few days. Thank you so
much. If there are any other corner cases that have to be tested, please
let me know.

>> Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> Entropy is derived from the banner and timer base, which will change every
>> build and boot. This not so much safe so additionally the bootloader may
>> pass entropy via the /chosen/kaslr-seed node in device tree.
>
> How complicated would it be to directly access the HW RNG (if present) that
> early in the boot? It'd be nice if a U-Boot update weren't required (and
> particularly concerning that KASLR would appear to work without a U-Boot
> update, but without decent entropy).
>
> -Scott
On 2019/8/28 12:59, Scott Wood wrote:
> On Tue, 2019-08-27 at 23:05 -0500, Scott Wood wrote:
>> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
>>> Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>>> entries are not suitable to map the kernel directly in a randomized
>>> region, so we chose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> Entropy is derived from the banner and timer base, which will change every
>>> build and boot. This not so much safe so additionally the bootloader may
>>> pass entropy via the /chosen/kaslr-seed node in device tree.
>>
>> How complicated would it be to directly access the HW RNG (if present)
>> that early in the boot? It'd be nice if a U-Boot update weren't required
>> (and particularly concerning that KASLR would appear to work without a
>> U-Boot update, but without decent entropy).
>
> OK, I see that kaslr-seed is used on some other platforms, though arm64
> aborts KASLR if it doesn't get a seed. I'm not sure if that's better than
> a loud warning message (or if it was a conscious choice rather than just
> not having an alternative implemented), but silently using poor entropy
> for something like this seems bad.

It can still make the attacker's cost higher even with not-so-good entropy.
The same strategy exists on x86, where KASLR uses RDTSC when
X86_FEATURE_RDRAND is not supported. I agree that having a warning message
looks better for reminding people in this situation.

> -Scott
Hi Scott,

On 2019/8/28 12:05, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate.
>
> Have you tested this with a kernel that was loaded at a non-zero address?
> I tried loading a kernel at 0x04000000 (by changing the address in the
> uImage, and setting bootm_low to 04000000 in U-Boot), and it works without
> CONFIG_RANDOMIZE and fails with.

How did you change the load address of the uImage, by changing the
kernel config CONFIG_PHYSICAL_START or the "-a/-e" parameter of mkimage?
I tried both, but it did not work with or without CONFIG_RANDOMIZE.

Thanks,
Jason

>> Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> Entropy is derived from the banner and timer base, which will change every
>> build and boot. This not so much safe so additionally the bootloader may
>> pass entropy via the /chosen/kaslr-seed node in device tree.
>
> How complicated would it be to directly access the HW RNG (if present) that
> early in the boot? It'd be nice if a U-Boot update weren't required (and
> particularly concerning that KASLR would appear to work without a U-Boot
> update, but without decent entropy).
>
> -Scott
On Tue, 2019-09-10 at 13:34 +0800, Jason Yan wrote:
> Hi Scott,
>
> On 2019/8/28 12:05, Scott Wood wrote:
> > On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> > > This series implements KASLR for powerpc/fsl_booke/32, as a security
> > > feature that deters exploit attempts relying on knowledge of the
> > > location of kernel internals.
> > >
> > > Since CONFIG_RELOCATABLE has already supported, what we need to do is
> > > map or copy kernel to a proper place and relocate.
> >
> > Have you tested this with a kernel that was loaded at a non-zero
> > address? I tried loading a kernel at 0x04000000 (by changing the
> > address in the uImage, and setting bootm_low to 04000000 in U-Boot),
> > and it works without CONFIG_RANDOMIZE and fails with.
>
> How did you change the load address of the uImage, by changing the
> kernel config CONFIG_PHYSICAL_START or the "-a/-e" parameter of mkimage?
> I tried both, but it did not work with or without CONFIG_RANDOMIZE.

With mkimage. Did you set bootm_low in U-Boot as described above? Was
CONFIG_RELOCATABLE set in the non-CONFIG_RANDOMIZE kernel?

-Scott