
remove AND operation in choose_random_kstack_offset()

Message ID: 20240617133721.377540-1-liuyuntao12@huawei.com (mailing list archive)
State: New
Series: remove AND operation in choose_random_kstack_offset()

Commit Message

liuyuntao (F) June 17, 2024, 1:37 p.m. UTC
Since the offset would be bitwise ANDed with 0x3FF in
add_random_kstack_offset(), just remove the AND operation here.

Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
---
 arch/arm64/kernel/syscall.c          | 2 +-
 arch/s390/include/asm/entry-common.h | 2 +-
 arch/x86/include/asm/entry-common.h  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
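
For context, the mask the commit message refers to lives in the generic helper, which truncates the stored per-cpu value at the point of use. A simplified sketch (paraphrased from include/linux/randomize_kstack.h; the exact code differs slightly between kernel versions):

#define KSTACK_OFFSET_MAX(x)	((x) & 0x3FF)

#define add_random_kstack_offset() do {					\
	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
				&randomize_kstack_offset)) {		\
		u32 offset = raw_cpu_read(kstack_offset);		\
		/* Whatever was stored is truncated to 10 bits here. */	\
		u8 *ptr = __kstack_alloca(KSTACK_OFFSET_MAX(offset));	\
		/* Keep the allocation even after "ptr" loses scope. */	\
		asm volatile("" :: "r"(ptr) : "memory");		\
	}								\
} while (0)

So bits above the low 10 of any value passed to choose_random_kstack_offset() are discarded when the offset is consumed, which is the redundancy the patch targets.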

Comments

Mark Rutland June 17, 2024, 3:52 p.m. UTC | #1
On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
> Since the offset would be bitwise ANDed with 0x3FF in
> add_random_kstack_offset(), just remove the AND operation here.
> 
> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>

The comments in arm64 and x86 say that they're deliberately capping the
offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
value with 0x3FF.

Maybe it's ok to expand that, but if that's the case the commit message
needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
x86), and those comments need to be updated accordingly.

As-is, I do not think this patch is ok.

Mark.

> ---
>  arch/arm64/kernel/syscall.c          | 2 +-
>  arch/s390/include/asm/entry-common.h | 2 +-
>  arch/x86/include/asm/entry-common.h  | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> index ad198262b981..43f555f7cd2d 100644
> --- a/arch/arm64/kernel/syscall.c
> +++ b/arch/arm64/kernel/syscall.c
> @@ -63,7 +63,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
>  	 *
>  	 * The resulting 5 bits of entropy is seen in SP[8:4].
>  	 */
> -	choose_random_kstack_offset(get_random_u16() & 0x1FF);
> +	choose_random_kstack_offset(get_random_u16());
>  }
>  
>  static inline bool has_syscall_work(unsigned long flags)
> diff --git a/arch/s390/include/asm/entry-common.h b/arch/s390/include/asm/entry-common.h
> index 7f5004065e8a..35555c944630 100644
> --- a/arch/s390/include/asm/entry-common.h
> +++ b/arch/s390/include/asm/entry-common.h
> @@ -54,7 +54,7 @@ static __always_inline void arch_exit_to_user_mode(void)
>  static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  						  unsigned long ti_work)
>  {
> -	choose_random_kstack_offset(get_tod_clock_fast() & 0xff);
> +	choose_random_kstack_offset(get_tod_clock_fast());
>  }
>  
>  #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
> diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
> index 7e523bb3d2d3..b28a307f2014 100644
> --- a/arch/x86/include/asm/entry-common.h
> +++ b/arch/x86/include/asm/entry-common.h
> @@ -85,7 +85,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  	 * Therefore, final stack offset entropy will be 5 (x86_64) or
>  	 * 6 (ia32) bits.
>  	 */
> -	choose_random_kstack_offset(rdtsc() & 0xFF);
> +	choose_random_kstack_offset(rdtsc());
>  }
>  #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
>  
> -- 
> 2.34.1
>
Kees Cook June 17, 2024, 6:22 p.m. UTC | #2
On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
> On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
> > Since the offset would be bitwise ANDed with 0x3FF in
> > add_random_kstack_offset(), just remove the AND operation here.
> > 
> > Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> 
> The comments in arm64 and x86 say that they're deliberately capping the
> offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
> value with 0x3FF.
> 
> Maybe it's ok to expand that, but if that's the case the commit message
> needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
> x86), and those comments need to be updated accordingly.
> 
> As-is, I do not think this patch is ok.

Yeah, I agree: the truncation is intentional and tuned to the
architecture.
Arnd Bergmann June 17, 2024, 8:33 p.m. UTC | #3
On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
> On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
>> On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
>> > Since the offset would be bitwise ANDed with 0x3FF in
>> > add_random_kstack_offset(), just remove the AND operation here.
>> > 
>> > Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
>> 
>> The comments in arm64 and x86 say that they're deliberately capping the
>> offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
>> value with 0x3FF.
>> 
>> Maybe it's ok to expand that, but if that's the case the commit message
>> needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
>> x86), and those comments need to be updated accordingly.
>> 
>> As-is, I do not think this patch is ok.
>
> Yeah, I agree: the truncation is intentional and tuned to the
> architecture.

It may be intentional, but it's clearly nonsense: there is nothing
inherent to the architecture that means we can go only 256
bytes instead of 512 bytes into the 16KB available stack space.

As far as I can tell, any code just gets bloated to the point
where it fills up the available memory, regardless of how
much you give it. I'm sure one can find code paths today that
exceed the 16KB, so there is no point pretending that 15.75KB
is somehow safe to use while 15.00KB is not.

I'm definitely in favor of making this less architecture
specific, we just need to pick a good value, and we may well
end up deciding to use less than the default 1KB. We can also
go the opposite way and make the limit 4KB but then increase
the default stack size to 20KB for kernels that enable
randomization.

       Arnd
Kees Cook June 17, 2024, 11:31 p.m. UTC | #4
On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
> > On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
> >> On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
> >> > Since the offset would be bitwise ANDed with 0x3FF in
> >> > add_random_kstack_offset(), just remove the AND operation here.
> >> > 
> >> > Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> >> 
> >> The comments in arm64 and x86 say that they're deliberately capping the
> >> offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
> >> value with 0x3FF.
> >> 
> >> Maybe it's ok to expand that, but if that's the case the commit message
> >> needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
> >> x86), and those comments need to be updated accordingly.
> >> 
> >> As-is, I do not think this patch is ok.
> >
> > Yeah, I agree: the truncation is intentional and tuned to the
> > architecture.
> 
> It may be intentional, but it's clearly nonsense: there is nothing
> inherent to the architecture that means we can go only 256
> bytes instead of 512 bytes into the 16KB available stack space.
> 
> As far as I can tell, any code just gets bloated to the point
> where it fills up the available memory, regardless of how
> much you give it. I'm sure one can find code paths today that
> exceed the 16KB, so there is no point pretending that 15.75KB
> is somehow safe to use while 15.00KB is not.
> 
> I'm definitely in favor of making this less architecture
> specific, we just need to pick a good value, and we may well
> end up deciding to use less than the default 1KB. We can also
> go the opposite way and make the limit 4KB but then increase
> the default stack size to 20KB for kernels that enable
> randomization.

I'm all for more entropy, but arch maintainers had wanted specific
control over this value, and given the years of bikeshedding over the
feature, I'm not inclined to dive back into that debate, but okay.

FWIW, here's the actual entropy (due to stack alignment enforced by
the compiler for the given arch ABI).

standard cap is 0x3FF (10 bits).

arm64: capped at 0x1FF (9 bits), 5 bits effective
powerpc: uncapped (10 bits), 6 or 7 bits effective
riscv: uncapped (10 bits), 6 bits effective
x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
s390: capped at 0xFF (8 bits), undocumented effective entropy

So if x86, arm64, and s390 maintainers are okay with it, we can try
dropping the masks on those architectures. They would gain 2, 1, and 2
bits respectively.

-Kees
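
For reference, the "effective" figures above follow from the compiler's stack alignment discarding the low bits of whatever survives the cap. A rough illustration (alignment values taken from the per-arch comments quoted in this thread; treat the helper as a sketch, not kernel code):

/* effective entropy = capped bits minus bits eaten by stack alignment */
static unsigned int effective_entropy_bits(unsigned int cap_bits,
					   unsigned int align_bits)
{
	return cap_bits - align_bits;
}

/* arm64:  16-byte alignment -> effective_entropy_bits(9, 4) == 5 (SP[8:4]) */
/* x86_64:  8-byte alignment -> effective_entropy_bits(8, 3) == 5           */
/* ia32:    4-byte alignment -> effective_entropy_bits(8, 2) == 6           */
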
Arnd Bergmann June 18, 2024, 6:46 a.m. UTC | #5
On Tue, Jun 18, 2024, at 01:31, Kees Cook wrote:
> On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
>> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
>
> I'm all for more entropy, but arch maintainers had wanted specific
> control over this value, and given the years of bikeshedding over the
> feature, I'm not inclined to dive back into that debate, but okay.
>
> FWIW, here's the actual entropy (due to stack alignment enforced by
> the compiler for the given arch ABI).
>
> standard cap is 0x3FF (10 bits).
>
> arm64: capped at 0x1FF (9 bits), 5 bits effective
> powerpc: uncapped (10 bits), 6 or 7 bits effective
> riscv: uncapped (10 bits), 6 bits effective
> x86: capped at 0xFF (8 bits), 5 (x86_64) or 6 (ia32) bits effective
> s390: capped at 0xFF (8 bits), undocumented effective entropy

Thanks for the summary. 

Right now of course we need to fix the bug from 9c573cd31343
("randomize_kstack: Improve entropy diffusion") that has led to
using the full 10 bits after diffusion while putting fewer bits in than
possible on some architectures. Unless you want to revert that
patch, we should ensure that any truncation is only done in
KSTACK_OFFSET_MAX() rather than applied to the value passed into
choose_random_kstack_offset().

     Arnd
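
For reference, the diffusion change Arnd refers to replaced a plain XOR of each new sample with a rotate-and-XOR of the per-cpu state, roughly (paraphrased, not the literal diff of 9c573cd31343):

	/* before: each new sample simply XORed into the per-cpu state */
	offset ^= (rand);

	/* after: the previous state is rotated first, so earlier samples
	 * spread across all 10 bits later kept by KSTACK_OFFSET_MAX(),
	 * even when an individual sample was masked to fewer bits by the
	 * caller */
	offset = ror32(offset, 5) ^ (rand);

This is why the offset already spans the full 10 bits after diffusion even though some callers feed in fewer, and why any truncation is better left to KSTACK_OFFSET_MAX() alone.
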
Mark Rutland June 18, 2024, 10:45 a.m. UTC | #6
Hi Arnd,

On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
> > On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
> >> On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
> >> > Since the offset would be bitwise ANDed with 0x3FF in
> >> > add_random_kstack_offset(), just remove the AND operation here.
> >> > 
> >> > Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
> >> 
> >> The comments in arm64 and x86 say that they're deliberately capping the
> >> offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
> >> value with 0x3FF.
> >> 
> >> Maybe it's ok to expand that, but if that's the case the commit message
> >> needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
> >> x86), and those comments need to be updated accordingly.
> >> 
> >> As-is, I do not think this patch is ok.
> >
> > Yeah, I agree: the truncation is intentional and tuned to the
> > architecture.
> 
> It may be intentional, but it's clearly nonsense: there is nothing
> inherent to the architecture that means we can go only 256
> bytes instead of 512 bytes into the 16KB available stack space.
> 
> As far as I can tell, any code just gets bloated to the point
> where it fills up the available memory, regardless of how
> much you give it. I'm sure one can find code paths today that
> exceed the 16KB, so there is no point pretending that 15.75KB
> is somehow safe to use while 15.00KB is not.
> 
> I'm definitely in favor of making this less architecture
> specific, we just need to pick a good value, and we may well
> end up deciding to use less than the default 1KB. We can also
> go the opposite way and make the limit 4KB but then increase
> the default stack size to 20KB for kernels that enable
> randomization.

Sorry, to be clear, I'm happy for this to change, so long as:

* The commit message explains why that's safe.

  IIUC this goes from 511 to 1023 bytes on arm64, which is ~3% of the
  stack, so maybe that is ok. It'd be nice to see any rationale/analysis
  beyond "the offset would be bitwise ANDed with 0x3FF".

* The comments in architecture code referring to the masking get
  removed/updated along with the masking.

My complaint was that the patch didn't do those things.

Mark.
Arnd Bergmann June 18, 2024, 11:14 a.m. UTC | #7
On Tue, Jun 18, 2024, at 12:45, Mark Rutland wrote:
> On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
>> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
>> > On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:

> Sorry, to be clear, I'm happy for this to change, so long as:
>
> * The commit message explains why that's safe.
>
>   IIUC this goes from 511 to 1023 bytes on arm64, which is ~3% of the
>   stack, so maybe that is ok. It'd be nice to see any rationale/analysis
>   beyond "the offset would be bitwise ANDed with 0x3FF".

Absolutely agreed, and the commit message should also clarify that
the increase has already happened as an unintended side-effect
of commit 9c573cd31343 ("randomize_kstack: Improve entropy
diffusion").

> * The comments in architecture code referring to the masking get
>   removed/updated along with the masking.

Right.

FWIW, I also wouldn't mind having a compile-time option
that configures the number of random bits on the stack offset,
but my preference here is to have a reasonable default and
not need a config option.

    Arnd
Mark Rutland June 18, 2024, 11:51 a.m. UTC | #8
On Tue, Jun 18, 2024 at 01:14:58PM +0200, Arnd Bergmann wrote:
> On Tue, Jun 18, 2024, at 12:45, Mark Rutland wrote:
> > On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
> >> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
> >> > On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
> 
> > Sorry, to be clear, I'm happy for this to change, so long as:
> >
> > * The commit message explains why that's safe.
> >
> >   IIUC this goes from 511 to 1023 bytes on arm64, which is ~3% of the
> >   stack, so maybe that is ok. It'd be nice to see any rationale/analysis
> >   beyond "the offset would be bitwise ANDed with 0x3FF".
> 
> Absolutely agreed, and the commit message should also clarify that
> the increase has already happened as an unintended side-effect
> of commit 9c573cd31343 ("randomize_kstack: Improve entropy
> diffusion").

FWIW, I think that alone is a reasonable justification.

> > * The comments in architecture code referring to the masking get
> >   removed/updated along with the masking.
> 
> Right.
> 
> FWIW, I also wouldn't mind having a compile-time option
> that configures the number of random bits on the stack offset,
> but my preference here is to have a reasonable default and
> not need a config option.

I agree; I think we should avoid a config option unless we actually see
a need for it in testing.

Mark.
liuyuntao (F) June 20, 2024, 4:04 a.m. UTC | #9
On 2024/6/18 18:45, Mark Rutland wrote:
> Hi Arnd,
> 
> On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
>> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
>>> On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
>>>> On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
>>>>> Since the offset would be bitwise ANDed with 0x3FF in
>>>>> add_random_kstack_offset(), just remove the AND operation here.
>>>>>
>>>>> Signed-off-by: Yuntao Liu <liuyuntao12@huawei.com>
>>>>
>>>> The comments in arm64 and x86 say that they're deliberately capping the
>>>> offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
>>>> value with 0x3FF.
>>>>
>>>> Maybe it's ok to expand that, but if that's the case the commit message
>>>> needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
>>>> x86), and those comments need to be updated accordingly.
>>>>
>>>> As-is, I do not think this patch is ok.
>>>
>>> Yeah, I agree: the truncation is intentional and tuned to the
>>> architecture.
>>
>> It may be intentional, but it's clearly nonsense: there is nothing
>> inherent to the architecture that means we can go only 256
>> bytes instead of 512 bytes into the 16KB available stack space.
>>
>> As far as I can tell, any code just gets bloated to the point
>> where it fills up the available memory, regardless of how
>> much you give it. I'm sure one can find code paths today that
>> exceed the 16KB, so there is no point pretending that 15.75KB
>> is somehow safe to use while 15.00KB is not.
>>
>> I'm definitely in favor of making this less architecture
>> specific, we just need to pick a good value, and we may well
>> end up deciding to use less than the default 1KB. We can also
>> go the opposite way and make the limit 4KB but then increase
>> the default stack size to 20KB for kernels that enable
>> randomization.
> 
> Sorry, to be clear, I'm happy for this to change, so long as:
> 
> * The commit message explains why that's safe.
> 
>    IIUC this goes from 511 to 1023 bytes on arm64, which is ~3% of the
>    stack, so maybe that is ok. It'd be nice to see any rationale/analysis
>    beyond "the offset would be bitwise ANDed with 0x3FF".
> 
> * The comments in architecture code referring to the masking get
>    removed/updated along with the masking.
> 
> My complaint was that the patch didn't do those things.
> 

Sorry that I didn't adjust the comments in the architecture code
referring to the masking.
I've tested the stack entropy by applying this patch on arm64.
before:
Bits of stack entropy: 6
after:
Bits of stack entropy: 7
It seems the difference was minimal, so I didn't reflect it in the 
commit message. Now it appears that I missed some of Kees's intentions.
Kees has resent the patch, and everything should be fine now.

Thanks!
Yuntao

> Mark.

Patch

diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index ad198262b981..43f555f7cd2d 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -63,7 +63,7 @@  static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 	 *
 	 * The resulting 5 bits of entropy is seen in SP[8:4].
 	 */
-	choose_random_kstack_offset(get_random_u16() & 0x1FF);
+	choose_random_kstack_offset(get_random_u16());
 }
 
 static inline bool has_syscall_work(unsigned long flags)
diff --git a/arch/s390/include/asm/entry-common.h b/arch/s390/include/asm/entry-common.h
index 7f5004065e8a..35555c944630 100644
--- a/arch/s390/include/asm/entry-common.h
+++ b/arch/s390/include/asm/entry-common.h
@@ -54,7 +54,7 @@  static __always_inline void arch_exit_to_user_mode(void)
 static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 						  unsigned long ti_work)
 {
-	choose_random_kstack_offset(get_tod_clock_fast() & 0xff);
+	choose_random_kstack_offset(get_tod_clock_fast());
 }
 
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index 7e523bb3d2d3..b28a307f2014 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -85,7 +85,7 @@  static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	 * Therefore, final stack offset entropy will be 5 (x86_64) or
 	 * 6 (ia32) bits.
 	 */
-	choose_random_kstack_offset(rdtsc() & 0xFF);
+	choose_random_kstack_offset(rdtsc());
 }
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare