Message ID | 20240619214711.work.953-kees@kernel.org (mailing list archive)
---|---
State | Accepted
Commit | 6db1208bf95b4c091897b597c415e11edeab2e2d
Series | randomize_kstack: Remove non-functional per-arch entropy filtering
On 2024/6/20 5:47, Kees Cook wrote:
> An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> Improve entropy diffusion") was that the per-architecture entropy size
> filtering reduced how many bits were being added to the mix, rather than
> how many bits were being used during the offsetting. All architectures
> fell back to the existing default of 0x3FF (10 bits), which will consume
> at most 1KiB of stack space. It seems that this is working just fine,
> so let's avoid the confusion and update everything to use the default.

My original intent was indeed to do this, but I regret not being more
explicit in the commit log.

Additionally, I've tested the stack entropy by applying the following patch,
the result was `Bits of stack entropy: 7` on arm64, too. It does not seem to
affect the entropy value, maybe removing it is OK, or there may be some
nuances of your intentions that I've overlooked.

--- a/include/linux/randomize_kstack.h
+++ b/include/linux/randomize_kstack.h
@@ -79,9 +79,7 @@ DECLARE_PER_CPU(u32, kstack_offset);
 #define choose_random_kstack_offset(rand) do {				\
 	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
 				&randomize_kstack_offset)) {		\
-		u32 offset = raw_cpu_read(kstack_offset);		\
-		offset = ror32(offset, 5) ^ (rand);			\
-		raw_cpu_write(kstack_offset, offset);			\
+		raw_cpu_write(kstack_offset, rand);			\
 	}								\
 } while (0)
 #else /* CONFIG_RANDOMIZE_KSTACK_OFFSET */

> The prior intent of the per-architecture limits were:
>
>   arm64: capped at 0x1FF (9 bits), 5 bits effective
>   powerpc: uncapped (10 bits), 6 or 7 bits effective
>   riscv: uncapped (10 bits), 6 bits effective
>   x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
>   s390: capped at 0xFF (8 bits), undocumented effective entropy
>
> Current discussion has led to just dropping the original per-architecture
> filters. The additional entropy appears to be safe for arm64, x86,
> and s390. Quoting Arnd, "There is no point pretending that 15.75KB is
> somehow safe to use while 15.00KB is not."
>
> Co-developed-by: Yuntao Liu <liuyuntao12@huawei.com>
> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> Fixes: 9c573cd31343 ("randomize_kstack: Improve entropy diffusion")
> Link: https://lore.kernel.org/r/20240617133721.377540-1-liuyuntao12@huawei.com
> Signed-off-by: Kees Cook <kees@kernel.org>
> ---
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Mark Rutland <mark.rutland@arm.com>
> ---
>  arch/arm64/kernel/syscall.c          | 16 +++++++---------
>  arch/s390/include/asm/entry-common.h |  2 +-
>  arch/x86/include/asm/entry-common.h  | 15 ++++++---------
>  3 files changed, 14 insertions(+), 19 deletions(-)
>
> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> index ad198262b981..7230f6e20ab8 100644
> --- a/arch/arm64/kernel/syscall.c
> +++ b/arch/arm64/kernel/syscall.c
> @@ -53,17 +53,15 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
>  	syscall_set_return_value(current, regs, 0, ret);
>
>  	/*
> -	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
> -	 * but not enough for arm64 stack utilization comfort. To keep
> -	 * reasonable stack head room, reduce the maximum offset to 9 bits.
> +	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
> +	 * bits. The actual entropy will be further reduced by the compiler
> +	 * when applying stack alignment constraints: the AAPCS mandates a
> +	 * 16-byte aligned SP at function boundaries, which will remove the
> +	 * 4 low bits from any entropy chosen here.
>  	 *
> -	 * The actual entropy will be further reduced by the compiler when
> -	 * applying stack alignment constraints: the AAPCS mandates a
> -	 * 16-byte (i.e. 4-bit) aligned SP at function boundaries.
> -	 *
> -	 * The resulting 5 bits of entropy is seen in SP[8:4].
> +	 * The resulting 6 bits of entropy is seen in SP[9:4].
>  	 */
> -	choose_random_kstack_offset(get_random_u16() & 0x1FF);
> +	choose_random_kstack_offset(get_random_u16());
>  }
>
>  static inline bool has_syscall_work(unsigned long flags)
> diff --git a/arch/s390/include/asm/entry-common.h b/arch/s390/include/asm/entry-common.h
> index 7f5004065e8a..35555c944630 100644
> --- a/arch/s390/include/asm/entry-common.h
> +++ b/arch/s390/include/asm/entry-common.h
> @@ -54,7 +54,7 @@ static __always_inline void arch_exit_to_user_mode(void)
>  static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  						   unsigned long ti_work)
>  {
> -	choose_random_kstack_offset(get_tod_clock_fast() & 0xff);
> +	choose_random_kstack_offset(get_tod_clock_fast());
>  }
>
>  #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
> diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
> index 7e523bb3d2d3..fb2809b20b0a 100644
> --- a/arch/x86/include/asm/entry-common.h
> +++ b/arch/x86/include/asm/entry-common.h
> @@ -73,19 +73,16 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  #endif
>
>  	/*
> -	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
> -	 * but not enough for x86 stack utilization comfort. To keep
> -	 * reasonable stack head room, reduce the maximum offset to 8 bits.
> -	 *
> -	 * The actual entropy will be further reduced by the compiler when
> -	 * applying stack alignment constraints (see cc_stack_align4/8 in
> +	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
> +	 * bits. The actual entropy will be further reduced by the compiler
> +	 * when applying stack alignment constraints (see cc_stack_align4/8 in
>  	 * arch/x86/Makefile), which will remove the 3 (x86_64) or 2 (ia32)
>  	 * low bits from any entropy chosen here.
>  	 *
> -	 * Therefore, final stack offset entropy will be 5 (x86_64) or
> -	 * 6 (ia32) bits.
> +	 * Therefore, final stack offset entropy will be 7 (x86_64) or
> +	 * 8 (ia32) bits.
>  	 */
> -	choose_random_kstack_offset(rdtsc() & 0xFF);
> +	choose_random_kstack_offset(rdtsc());
>  }
>  #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
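To see the commit's first paragraph concretely (the per-arch mask only limited what was fed into the mix, while the 0x3FF cut applied at offsetting time is what actually bounds the range), here is a minimal user-space C sketch. It is illustrative only, not kernel code; the 0xFF input mask, the iteration count, and the helper names are assumptions for the demo.

/*
 * User-space model of the post-9c573cd31343 flow: a per-arch mask on `rand`
 * no longer caps the final offset, because the ror32/xor diffusion spreads
 * the bits and only the 0x3FF mask at use time bounds the range.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t ror32(uint32_t word, unsigned int shift)
{
	return (word >> (shift & 31)) | (word << ((32 - shift) & 31));
}

int main(void)
{
	uint32_t kstack_offset = 0;
	uint32_t max_used = 0;

	srand(1);
	for (int i = 0; i < 100000; i++) {
		/* "choose": pretend the arch masked rand to 8 bits ... */
		uint32_t rand_bits = (uint32_t)rand() & 0xFF;

		/* ... but the diffusion spreads it over all 32 bits. */
		kstack_offset = ror32(kstack_offset, 5) ^ rand_bits;

		/* "use": only the KSTACK_OFFSET_MAX() cap bounds the offset. */
		uint32_t used = kstack_offset & 0x3FF;

		if (used > max_used)
			max_used = used;
	}
	/* Typically prints 0x3ff: the 8-bit input mask did not cap the range. */
	printf("max offset used: 0x%x\n", max_used);
	return 0;
}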
On Wed, Jun 19, 2024 at 02:47:15PM -0700, Kees Cook wrote:
> An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> Improve entropy diffusion") was that the per-architecture entropy size
> filtering reduced how many bits were being added to the mix, rather than
> how many bits were being used during the offsetting. All architectures
> fell back to the existing default of 0x3FF (10 bits), which will consume
> at most 1KiB of stack space. It seems that this is working just fine,
> so let's avoid the confusion and update everything to use the default.
>
> The prior intent of the per-architecture limits were:
>
>   arm64: capped at 0x1FF (9 bits), 5 bits effective
>   powerpc: uncapped (10 bits), 6 or 7 bits effective
>   riscv: uncapped (10 bits), 6 bits effective
>   x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
>   s390: capped at 0xFF (8 bits), undocumented effective entropy
>
> Current discussion has led to just dropping the original per-architecture
> filters. The additional entropy appears to be safe for arm64, x86,
> and s390. Quoting Arnd, "There is no point pretending that 15.75KB is
> somehow safe to use while 15.00KB is not."
>
> Co-developed-by: Yuntao Liu <liuyuntao12@huawei.com>
> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> Fixes: 9c573cd31343 ("randomize_kstack: Improve entropy diffusion")
> Link: https://lore.kernel.org/r/20240617133721.377540-1-liuyuntao12@huawei.com
> Signed-off-by: Kees Cook <kees@kernel.org>
> ---
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Mark Rutland <mark.rutland@arm.com>
> ---
>  arch/arm64/kernel/syscall.c          | 16 +++++++---------
>  arch/s390/include/asm/entry-common.h |  2 +-
>  arch/x86/include/asm/entry-common.h  | 15 ++++++---------
>  3 files changed, 14 insertions(+), 19 deletions(-)

Acked-by: Heiko Carstens <hca@linux.ibm.com> # s390
On Wed, Jun 19, 2024 at 02:47:15PM -0700, Kees Cook wrote:
> An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> Improve entropy diffusion") was that the per-architecture entropy size
> filtering reduced how many bits were being added to the mix, rather than
> how many bits were being used during the offsetting. All architectures
> fell back to the existing default of 0x3FF (10 bits), which will consume
> at most 1KiB of stack space. It seems that this is working just fine,
> so let's avoid the confusion and update everything to use the default.
>
> The prior intent of the per-architecture limits were:
>
>   arm64: capped at 0x1FF (9 bits), 5 bits effective
>   powerpc: uncapped (10 bits), 6 or 7 bits effective
>   riscv: uncapped (10 bits), 6 bits effective
>   x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
>   s390: capped at 0xFF (8 bits), undocumented effective entropy
>
> Current discussion has led to just dropping the original per-architecture
> filters. The additional entropy appears to be safe for arm64, x86,
> and s390. Quoting Arnd, "There is no point pretending that 15.75KB is
> somehow safe to use while 15.00KB is not."
>
> Co-developed-by: Yuntao Liu <liuyuntao12@huawei.com>
> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> Fixes: 9c573cd31343 ("randomize_kstack: Improve entropy diffusion")
> Link: https://lore.kernel.org/r/20240617133721.377540-1-liuyuntao12@huawei.com
> Signed-off-by: Kees Cook <kees@kernel.org>
> ---
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Mark Rutland <mark.rutland@arm.com>
> ---
>  arch/arm64/kernel/syscall.c          | 16 +++++++---------
>  arch/s390/include/asm/entry-common.h |  2 +-
>  arch/x86/include/asm/entry-common.h  | 15 ++++++---------
>  3 files changed, 14 insertions(+), 19 deletions(-)

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> index ad198262b981..7230f6e20ab8 100644
> --- a/arch/arm64/kernel/syscall.c
> +++ b/arch/arm64/kernel/syscall.c
> @@ -53,17 +53,15 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
>  	syscall_set_return_value(current, regs, 0, ret);
>
>  	/*
> -	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
> -	 * but not enough for arm64 stack utilization comfort. To keep
> -	 * reasonable stack head room, reduce the maximum offset to 9 bits.
> +	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
> +	 * bits. The actual entropy will be further reduced by the compiler
> +	 * when applying stack alignment constraints: the AAPCS mandates a
> +	 * 16-byte aligned SP at function boundaries, which will remove the
> +	 * 4 low bits from any entropy chosen here.
>  	 *
> -	 * The actual entropy will be further reduced by the compiler when
> -	 * applying stack alignment constraints: the AAPCS mandates a
> -	 * 16-byte (i.e. 4-bit) aligned SP at function boundaries.
> -	 *
> -	 * The resulting 5 bits of entropy is seen in SP[8:4].
> +	 * The resulting 6 bits of entropy is seen in SP[9:4].
>  	 */
> -	choose_random_kstack_offset(get_random_u16() & 0x1FF);
> +	choose_random_kstack_offset(get_random_u16());
>  }
>
>  static inline bool has_syscall_work(unsigned long flags)
> diff --git a/arch/s390/include/asm/entry-common.h b/arch/s390/include/asm/entry-common.h
> index 7f5004065e8a..35555c944630 100644
> --- a/arch/s390/include/asm/entry-common.h
> +++ b/arch/s390/include/asm/entry-common.h
> @@ -54,7 +54,7 @@ static __always_inline void arch_exit_to_user_mode(void)
>  static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  						   unsigned long ti_work)
>  {
> -	choose_random_kstack_offset(get_tod_clock_fast() & 0xff);
> +	choose_random_kstack_offset(get_tod_clock_fast());
>  }
>
>  #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
> diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
> index 7e523bb3d2d3..fb2809b20b0a 100644
> --- a/arch/x86/include/asm/entry-common.h
> +++ b/arch/x86/include/asm/entry-common.h
> @@ -73,19 +73,16 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  #endif
>
>  	/*
> -	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
> -	 * but not enough for x86 stack utilization comfort. To keep
> -	 * reasonable stack head room, reduce the maximum offset to 8 bits.
> -	 *
> -	 * The actual entropy will be further reduced by the compiler when
> -	 * applying stack alignment constraints (see cc_stack_align4/8 in
> +	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
> +	 * bits. The actual entropy will be further reduced by the compiler
> +	 * when applying stack alignment constraints (see cc_stack_align4/8 in
>  	 * arch/x86/Makefile), which will remove the 3 (x86_64) or 2 (ia32)
>  	 * low bits from any entropy chosen here.
>  	 *
> -	 * Therefore, final stack offset entropy will be 5 (x86_64) or
> -	 * 6 (ia32) bits.
> +	 * Therefore, final stack offset entropy will be 7 (x86_64) or
> +	 * 8 (ia32) bits.
>  	 */
> -	choose_random_kstack_offset(rdtsc() & 0xFF);
> +	choose_random_kstack_offset(rdtsc());
>  }
>  #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
>
> --
> 2.34.1
>
On Wed, Jun 19, 2024, at 23:47, Kees Cook wrote:
> An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> Improve entropy diffusion") was that the per-architecture entropy size
> filtering reduced how many bits were being added to the mix, rather than
> how many bits were being used during the offsetting. All architectures
> fell back to the existing default of 0x3FF (10 bits), which will consume
> at most 1KiB of stack space. It seems that this is working just fine,
> so let's avoid the confusion and update everything to use the default.
>
> The prior intent of the per-architecture limits were:
>
>   arm64: capped at 0x1FF (9 bits), 5 bits effective
>   powerpc: uncapped (10 bits), 6 or 7 bits effective
>   riscv: uncapped (10 bits), 6 bits effective
>   x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
>   s390: capped at 0xFF (8 bits), undocumented effective entropy
>
> Current discussion has led to just dropping the original per-architecture
> filters. The additional entropy appears to be safe for arm64, x86,
> and s390. Quoting Arnd, "There is no point pretending that 15.75KB is
> somehow safe to use while 15.00KB is not."
>
> Co-developed-by: Yuntao Liu <liuyuntao12@huawei.com>
> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> Fixes: 9c573cd31343 ("randomize_kstack: Improve entropy diffusion")
> Link: https://lore.kernel.org/r/20240617133721.377540-1-liuyuntao12@huawei.com
> Signed-off-by: Kees Cook <kees@kernel.org>

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
On Thu, Jun 20, 2024 at 11:47:58AM +0800, liuyuntao (F) wrote:
>
>
> On 2024/6/20 5:47, Kees Cook wrote:
> > An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> > Improve entropy diffusion") was that the per-architecture entropy size
> > filtering reduced how many bits were being added to the mix, rather than
> > how many bits were being used during the offsetting. All architectures
> > fell back to the existing default of 0x3FF (10 bits), which will consume
> > at most 1KiB of stack space. It seems that this is working just fine,
> > so let's avoid the confusion and update everything to use the default.
> >
>
> My original intent was indeed to do this, but I regret not being more
> explicit in the commit log.
>
> Additionally, I've tested the stack entropy by applying the following patch,
> the result was `Bits of stack entropy: 7` on arm64, too. It does not seem to
> affect the entropy value, maybe removing it is OK, or there may be some
> nuances of your intentions that I've overlooked.
>
> --- a/include/linux/randomize_kstack.h
> +++ b/include/linux/randomize_kstack.h
> @@ -79,9 +79,7 @@ DECLARE_PER_CPU(u32, kstack_offset);
>  #define choose_random_kstack_offset(rand) do {				\
>  	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
>  				&randomize_kstack_offset)) {		\
> -		u32 offset = raw_cpu_read(kstack_offset);		\
> -		offset = ror32(offset, 5) ^ (rand);			\
> -		raw_cpu_write(kstack_offset, offset);			\
> +		raw_cpu_write(kstack_offset, rand);			\
>  	}								\
>  } while (0)
>  #else /* CONFIG_RANDOMIZE_KSTACK_OFFSET */

I blame the multiple applications of the word "entropy" in this feature. :)

So, there's both:

- "how many bits CAN be randomized?" (i.e. within what range can all
  possible stack offsets be?)

and

- "is the randomization predictable?" (i.e. is the distribution of
  selected positions within the above range evenly distributed?)

Commit 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") was
trying to improve the latter, but accidentally also grew the former.
This patch is just trying to clean all this up now.

Thanks for testing! And I'm curious as to why arm64's stack offset
entropy is 7 for you when we're expecting it to be 6. Anyway, that's not
a problem, I don't think. Just a greater offset range than expected.

-Kees
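As a side note on the first sense of "entropy" above: the per-architecture "bits effective" figures quoted in the commit message can be reproduced as log2 of the number of distinct offsets that survive both the input cap and the compiler's stack alignment. The sketch below is a user-space illustration; the function name and the cap/alignment pairings are assumptions based on the commit message, not kernel code.

/*
 * Derive "effective bits" as log2 of the distinct stack offsets that remain
 * after applying the old per-arch cap and the arch's stack alignment.
 */
#include <stdio.h>

static int effective_bits(unsigned int cap_mask, unsigned int align)
{
	static unsigned int seen[1024];
	int distinct = 0, bits = 0;

	for (unsigned int i = 0; i < 1024; i++)
		seen[i] = 0;

	for (unsigned int rnd = 0; rnd <= cap_mask; rnd++) {
		/* 0x3FF cap at use time, then alignment strips low bits. */
		unsigned int offset = (rnd & 0x3FF) & ~(align - 1);

		if (!seen[offset]) {
			seen[offset] = 1;
			distinct++;
		}
	}
	while ((1 << bits) < distinct)
		bits++;
	return bits;	/* log2(distinct), rounded up */
}

int main(void)
{
	printf("arm64 (cap 0x1FF, 16-byte align): %d bits\n",
	       effective_bits(0x1FF, 16));	/* 5 */
	printf("x86_64 (cap 0xFF, 8-byte align):  %d bits\n",
	       effective_bits(0xFF, 8));	/* 5 */
	printf("uncapped (0x3FF, 16-byte align):  %d bits\n",
	       effective_bits(0x3FF, 16));	/* 6 */
	return 0;
}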
On Thu, Jun 20, 2024 at 11:34:22AM -0700, Kees Cook wrote:
> On Thu, Jun 20, 2024 at 11:47:58AM +0800, liuyuntao (F) wrote:
> >
> >
> > On 2024/6/20 5:47, Kees Cook wrote:
> > > An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> > > Improve entropy diffusion") was that the per-architecture entropy size
> > > filtering reduced how many bits were being added to the mix, rather than
> > > how many bits were being used during the offsetting. All architectures
> > > fell back to the existing default of 0x3FF (10 bits), which will consume
> > > at most 1KiB of stack space. It seems that this is working just fine,
> > > so let's avoid the confusion and update everything to use the default.
> > >
> >
> > My original intent was indeed to do this, but I regret not being more
> > explicit in the commit log.
> >
> > Additionally, I've tested the stack entropy by applying the following patch,
> > the result was `Bits of stack entropy: 7` on arm64, too. It does not seem to
> > affect the entropy value, maybe removing it is OK, or there may be some
> > nuances of your intentions that I've overlooked.
> >
> > --- a/include/linux/randomize_kstack.h
> > +++ b/include/linux/randomize_kstack.h
> > @@ -79,9 +79,7 @@ DECLARE_PER_CPU(u32, kstack_offset);
> >  #define choose_random_kstack_offset(rand) do {				\
> >  	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
> >  				&randomize_kstack_offset)) {		\
> > -		u32 offset = raw_cpu_read(kstack_offset);		\
> > -		offset = ror32(offset, 5) ^ (rand);			\
> > -		raw_cpu_write(kstack_offset, offset);			\
> > +		raw_cpu_write(kstack_offset, rand);			\
> >  	}								\
> >  } while (0)
> >  #else /* CONFIG_RANDOMIZE_KSTACK_OFFSET */
>
> I blame the multiple applications of the word "entropy" in this feature. :)
>
> So, there's both:
>
> - "how many bits CAN be randomized?" (i.e. within what range can all
>   possible stack offsets be?)
>
> and
>
> - "is the randomization predictable?" (i.e. is the distribution of
>   selected positions within the above range evenly distributed?)
>
> Commit 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") was
> trying to improve the latter, but accidentally also grew the former.
> This patch is just trying to clean all this up now.
>
> Thanks for testing! And I'm curious as to why arm64's stack offset
> entropy is 7 for you when we're expecting it to be 6. Anyway, that's not
> a problem, I don't think. Just a greater offset range than expected.

Hmm....

I think this is due to the way the compiler aligns the stack in alloca(); it
rounds up the value of KSTACK_OFFSET_MAX(offset) and ends up spilling over an
additional bit (e.g. 0x3f1 to 0x3ff round up to 0x400).

Looking at v6.10-rc4 defconfig + CONFIG_RANDOMIZE_STACKOFFSET=y, the
disassembly for arm64's invoke_syscall() looks like:

	// offset = raw_cpu_read(kstack_offset)
	mov	x4, sp
	adrp	x0, kstack_offset
	mrs	x5, tpidr_el1
	add	x0, x0, #:lo12:kstack_offset
	ldr	w0, [x0, x5]

	// offset = KSTACK_OFFSET_MAX(offset)
	and	x0, x0, #0x3ff

	// alloca(offset)
	add	x0, x0, #0xf
	and	x0, x0, #0x7f0
	sub	sp, x4, x0

... which in C would be:

	offset = raw_cpu_read(kstack_offset)
	offset &= 0x3ff;	// [0x0, 0x3ff]
	offset += 0xf;		// [0xf, 0x40e]
	offset &= 0x7f0;	// [0x0, 0x400]

... so when *all* bits [3:0] are 0, they'll have no impact, and when *any* of
bits [3:0] are 1 they'll trigger a carry into bit 4, which could ripple all the
way up and spill into bit 10.

I have no idea whether that's important. Kees, does that introduce a bias, and
if so do we need to care?

If I change the mask to discard the low bits:

	#define KSTACK_OFFSET_MAX(x)	((x) & 0x3F0)

... then the assembly avoids the rounding:

	mov	x4, sp
	adrp	x0, 0 <kstack_offset>
	mrs	x5, tpidr_el1
	add	x0, x0, #:lo12:kstack_offset
	ldr	w0, [x0, x5]
	and	x0, x0, #0x3f0
	sub	sp, x4, x0

Mark.
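The skew Mark describes can be reproduced by replaying that arithmetic for every possible 10-bit input. The following is a user-space sketch for illustration only, not kernel code:

/*
 * Replay ((offset & 0x3ff) + 0xf) & 0x7f0 from the arm64 disassembly above
 * for all 1024 inputs, counting how many inputs land on each final SP delta.
 */
#include <stdio.h>

int main(void)
{
	int hits[0x401] = { 0 };	/* final SP deltas are 0x0..0x400 */

	for (unsigned int offset = 0; offset <= 0x3ff; offset++) {
		unsigned int sp_delta = ((offset & 0x3ff) + 0xf) & 0x7f0;

		hits[sp_delta]++;
	}

	/* 0x000 is hit by 1 input, 0x400 by 15, every other step by 16. */
	for (unsigned int delta = 0; delta <= 0x400; delta += 0x10)
		printf("SP delta 0x%03x: %d inputs\n", delta, hits[delta]);
	return 0;
}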
On Fri, Jun 21, 2024 at 12:08:42PM +0100, Mark Rutland wrote:
> On Thu, Jun 20, 2024 at 11:34:22AM -0700, Kees Cook wrote:
> > On Thu, Jun 20, 2024 at 11:47:58AM +0800, liuyuntao (F) wrote:
> > >
> > >
> > > On 2024/6/20 5:47, Kees Cook wrote:
> > > > An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> > > > Improve entropy diffusion") was that the per-architecture entropy size
> > > > filtering reduced how many bits were being added to the mix, rather than
> > > > how many bits were being used during the offsetting. All architectures
> > > > fell back to the existing default of 0x3FF (10 bits), which will consume
> > > > at most 1KiB of stack space. It seems that this is working just fine,
> > > > so let's avoid the confusion and update everything to use the default.
> > > >
> > >
> > > My original intent was indeed to do this, but I regret not being more
> > > explicit in the commit log.
> > >
> > > Additionally, I've tested the stack entropy by applying the following patch,
> > > the result was `Bits of stack entropy: 7` on arm64, too. It does not seem to
> > > affect the entropy value, maybe removing it is OK, or there may be some
> > > nuances of your intentions that I've overlooked.
> > >
> > > --- a/include/linux/randomize_kstack.h
> > > +++ b/include/linux/randomize_kstack.h
> > > @@ -79,9 +79,7 @@ DECLARE_PER_CPU(u32, kstack_offset);
> > >  #define choose_random_kstack_offset(rand) do {				\
> > >  	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
> > >  				&randomize_kstack_offset)) {		\
> > > -		u32 offset = raw_cpu_read(kstack_offset);		\
> > > -		offset = ror32(offset, 5) ^ (rand);			\
> > > -		raw_cpu_write(kstack_offset, offset);			\
> > > +		raw_cpu_write(kstack_offset, rand);			\
> > >  	}								\
> > >  } while (0)
> > >  #else /* CONFIG_RANDOMIZE_KSTACK_OFFSET */
> >
> > I blame the multiple applications of the word "entropy" in this feature. :)
> >
> > So, there's both:
> >
> > - "how many bits CAN be randomized?" (i.e. within what range can all
> >   possible stack offsets be?)
> >
> > and
> >
> > - "is the randomization predictable?" (i.e. is the distribution of
> >   selected positions within the above range evenly distributed?)
> >
> > Commit 9c573cd31343 ("randomize_kstack: Improve entropy diffusion") was
> > trying to improve the latter, but accidentally also grew the former.
> > This patch is just trying to clean all this up now.
> >
> > Thanks for testing! And I'm curious as to why arm64's stack offset
> > entropy is 7 for you when we're expecting it to be 6. Anyway, that's not
> > a problem, I don't think. Just a greater offset range than expected.
>
> Hmm....
>
> I think this is due to the way the compiler aligns the stack in alloca(); it
> rounds up the value of KSTACK_OFFSET_MAX(offset) and ends up spilling over an
> additional bit (e.g. 0x3f1 to 0x3ff round up to 0x400).
>
> Looking at v6.10-rc4 defconfig + CONFIG_RANDOMIZE_STACKOFFSET=y, the
> disassembly for arm64's invoke_syscall() looks like:
>
> 	// offset = raw_cpu_read(kstack_offset)
> 	mov	x4, sp
> 	adrp	x0, kstack_offset
> 	mrs	x5, tpidr_el1
> 	add	x0, x0, #:lo12:kstack_offset
> 	ldr	w0, [x0, x5]
>
> 	// offset = KSTACK_OFFSET_MAX(offset)
> 	and	x0, x0, #0x3ff
>
> 	// alloca(offset)
> 	add	x0, x0, #0xf
> 	and	x0, x0, #0x7f0
> 	sub	sp, x4, x0
>
> ... which in C would be:
>
> 	offset = raw_cpu_read(kstack_offset)
> 	offset &= 0x3ff;	// [0x0, 0x3ff]
> 	offset += 0xf;		// [0xf, 0x40e]
> 	offset &= 0x7f0;	// [0x0, 0x400]
>
> ... so when *all* bits [3:0] are 0, they'll have no impact, and when *any* of
> bits [3:0] are 1 they'll trigger a carry into bit 4, which could ripple all the
> way up and spill into bit 10.
>
> I have no idea whether that's important. Kees, does that introduce a bias, and
> if so do we need to care?
>
> If I change the mask to discard the low bits:
>
> 	#define KSTACK_OFFSET_MAX(x)	((x) & 0x3F0)
>
> ... then the assembly avoids the rounding:
>
> 	mov	x4, sp
> 	adrp	x0, 0 <kstack_offset>
> 	mrs	x5, tpidr_el1
> 	add	x0, x0, #:lo12:kstack_offset
> 	ldr	w0, [x0, x5]
> 	and	x0, x0, #0x3f0
> 	sub	sp, x4, x0

Ah, interesting! I'd prefer to avoid the bias (or at least, the
weirdness). How about this as a solution?

diff --git a/include/linux/randomize_kstack.h b/include/linux/randomize_kstack.h
index 6d92b68efbf6..1d982dbdd0d0 100644
--- a/include/linux/randomize_kstack.h
+++ b/include/linux/randomize_kstack.h
@@ -32,13 +32,19 @@ DECLARE_PER_CPU(u32, kstack_offset);
 #endif
 
 /*
- * Use, at most, 10 bits of entropy. We explicitly cap this to keep the
- * "VLA" from being unbounded (see above). 10 bits leaves enough room for
- * per-arch offset masks to reduce entropy (by removing higher bits, since
- * high entropy may overly constrain usable stack space), and for
- * compiler/arch-specific stack alignment to remove the lower bits.
+ * Use, at most, 6 bits of entropy (on 64-bit; 8 on 32-bit). This cap is
+ * to keep the "VLA" from being unbounded (see above). Additionally clear
+ * the bottom 4 bits (on 64-bit systems, 2 for 32-bit), since stack
+ * alignment will always be at least word size. This makes the compiler
+ * code gen better when it is applying the actual per-arch alignment to
+ * the final offset. The resulting randomness is reasonable without overly
+ * constraining usable stack space.
  */
-#define KSTACK_OFFSET_MAX(x)	((x) & 0x3FF)
+#ifdef CONFIG_64BIT
+#define KSTACK_OFFSET_MAX(x)	((x) & 0b1111110000)
+#else
+#define KSTACK_OFFSET_MAX(x)	((x) & 0b1111111100)
+#endif
 
 /**
  * add_random_kstack_offset - Increase stack utilization by previously
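A quick user-space check of the masks proposed above: 0b1111110000 is 0x3F0 (6 usable bits above a 16-byte granule) and 0b1111111100 is 0x3FC (8 usable bits above a 4-byte granule), and with the low 4 bits already clear the 16-byte round-up from the earlier arm64 disassembly can no longer change the value, so nothing spills into bit 10. The _64/_32 macro names below are mine for illustration; the kernel proposal selects one definition via CONFIG_64BIT.

/* Sanity-check the proposed KSTACK_OFFSET_MAX() masks in user space. */
#include <assert.h>
#include <stdio.h>

#define KSTACK_OFFSET_MAX_64(x)	((x) & 0x3F0)	/* 0b1111110000 */
#define KSTACK_OFFSET_MAX_32(x)	((x) & 0x3FC)	/* 0b1111111100 */

int main(void)
{
	for (unsigned int offset = 0; offset <= 0x3ff; offset++) {
		unsigned int capped = KSTACK_OFFSET_MAX_64(offset);

		/* alloca()'s 16-byte round-up is now a no-op: no bit-10 spill. */
		assert(((capped + 0xf) & 0x7f0) == capped);
	}
	printf("64-bit mask: 0x%x, 32-bit mask: 0x%x\n",
	       KSTACK_OFFSET_MAX_64(~0u), KSTACK_OFFSET_MAX_32(~0u));
	return 0;
}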
Hello:

This patch was applied to riscv/linux.git (fixes)
by Kees Cook <kees@kernel.org>:

On Wed, 19 Jun 2024 14:47:15 -0700 you wrote:
> An unintended consequence of commit 9c573cd31343 ("randomize_kstack:
> Improve entropy diffusion") was that the per-architecture entropy size
> filtering reduced how many bits were being added to the mix, rather than
> how many bits were being used during the offsetting. All architectures
> fell back to the existing default of 0x3FF (10 bits), which will consume
> at most 1KiB of stack space. It seems that this is working just fine,
> so let's avoid the confusion and update everything to use the default.
>
> [...]

Here is the summary with links:
  - randomize_kstack: Remove non-functional per-arch entropy filtering
    https://git.kernel.org/riscv/c/6db1208bf95b

You are awesome, thank you!
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index ad198262b981..7230f6e20ab8 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -53,17 +53,15 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 	syscall_set_return_value(current, regs, 0, ret);
 
 	/*
-	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
-	 * but not enough for arm64 stack utilization comfort. To keep
-	 * reasonable stack head room, reduce the maximum offset to 9 bits.
+	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
+	 * bits. The actual entropy will be further reduced by the compiler
+	 * when applying stack alignment constraints: the AAPCS mandates a
+	 * 16-byte aligned SP at function boundaries, which will remove the
+	 * 4 low bits from any entropy chosen here.
 	 *
-	 * The actual entropy will be further reduced by the compiler when
-	 * applying stack alignment constraints: the AAPCS mandates a
-	 * 16-byte (i.e. 4-bit) aligned SP at function boundaries.
-	 *
-	 * The resulting 5 bits of entropy is seen in SP[8:4].
+	 * The resulting 6 bits of entropy is seen in SP[9:4].
 	 */
-	choose_random_kstack_offset(get_random_u16() & 0x1FF);
+	choose_random_kstack_offset(get_random_u16());
 }
 
 static inline bool has_syscall_work(unsigned long flags)
diff --git a/arch/s390/include/asm/entry-common.h b/arch/s390/include/asm/entry-common.h
index 7f5004065e8a..35555c944630 100644
--- a/arch/s390/include/asm/entry-common.h
+++ b/arch/s390/include/asm/entry-common.h
@@ -54,7 +54,7 @@ static __always_inline void arch_exit_to_user_mode(void)
 static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 						   unsigned long ti_work)
 {
-	choose_random_kstack_offset(get_tod_clock_fast() & 0xff);
+	choose_random_kstack_offset(get_tod_clock_fast());
 }
 
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index 7e523bb3d2d3..fb2809b20b0a 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -73,19 +73,16 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 #endif
 
 	/*
-	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
-	 * but not enough for x86 stack utilization comfort. To keep
-	 * reasonable stack head room, reduce the maximum offset to 8 bits.
-	 *
-	 * The actual entropy will be further reduced by the compiler when
-	 * applying stack alignment constraints (see cc_stack_align4/8 in
+	 * This value will get limited by KSTACK_OFFSET_MAX(), which is 10
+	 * bits. The actual entropy will be further reduced by the compiler
+	 * when applying stack alignment constraints (see cc_stack_align4/8 in
 	 * arch/x86/Makefile), which will remove the 3 (x86_64) or 2 (ia32)
 	 * low bits from any entropy chosen here.
 	 *
-	 * Therefore, final stack offset entropy will be 5 (x86_64) or
-	 * 6 (ia32) bits.
+	 * Therefore, final stack offset entropy will be 7 (x86_64) or
+	 * 8 (ia32) bits.
 	 */
-	choose_random_kstack_offset(rdtsc() & 0xFF);
+	choose_random_kstack_offset(rdtsc());
 }
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
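For readers unfamiliar with how the value chosen above is consumed, the sketch below is a user-space model of the choose/use split. It paraphrases the shape of choose_random_kstack_offset()/add_random_kstack_offset() from include/linux/randomize_kstack.h rather than copying the kernel macros, and handle_syscall() is a made-up stand-in for the kernel's syscall entry/exit hooks, not a real API.

/*
 * User-space model: entry burns a bounded amount of stack chosen on the
 * previous "syscall", exit mixes fresh entropy in for the next one.
 */
#include <alloca.h>
#include <stdint.h>
#include <stdio.h>

#define KSTACK_OFFSET_MAX(x)	((x) & 0x3FF)	/* 10-bit cap at use time */

static uint32_t kstack_offset;	/* per-CPU in the kernel; a global here */

static void handle_syscall(uint32_t fresh_rand)
{
	/* Entry: shift the stack by the previously chosen, capped offset. */
	uint8_t *pad = alloca(KSTACK_OFFSET_MAX(kstack_offset));

	/* The kernel keeps the allocation alive with a compiler barrier. */
	__asm__ __volatile__("" :: "r"(pad) : "memory");

	printf("stack shifted by up to 0x%x bytes\n",
	       (unsigned int)KSTACK_OFFSET_MAX(kstack_offset));

	/* Exit: diffuse fresh entropy into the stored value, as the kernel
	 * does with ror32(offset, 5) ^ rand. */
	kstack_offset = ((kstack_offset >> 5) | (kstack_offset << 27)) ^ fresh_rand;
}

int main(void)
{
	handle_syscall(0x1234);
	handle_syscall(0xBEEF);
	return 0;
}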