
[v3,0/2] Refine kmalloc caches randomization in kvmalloc

Message ID: 20250212081505.2025320-1-gongruiqi1@huawei.com

Message

GONG Ruiqi Feb. 12, 2025, 8:15 a.m. UTC
Hi,

v3:
  - move everything from kmalloc_gfp_adjust to kvrealloc_noprof into
    mm/slub.c
  - reword the commit logs
v2: https://lore.kernel.org/all/20250208014723.1514049-1-gongruiqi1@huawei.com/
  - change the implementation as Vlastimil suggested
v1: https://lore.kernel.org/all/20250122074817.991060-1-gongruiqi1@huawei.com/

Tamás reported [1] that kmalloc cache randomization doesn't actually
take effect for kmalloc calls made via kvmalloc. For more details, see
the commit log of patch 2.

The fix requires a direct call from __kvmalloc_node_noprof to
__do_kmalloc_node, a static function in a different .c file. As
suggested by Vlastimil [2], this is achieved by simply moving
__kvmalloc_node_noprof from mm/util.c to mm/slub.c, together with some
other functions of the same family.

Link: https://github.com/google/security-research/blob/908d59b573960dc0b90adda6f16f7017aca08609/pocs/linux/kernelctf/CVE-2024-27397_mitigation/docs/exploit.md?plain=1#L259 [1]
Link: https://lore.kernel.org/all/62044279-0c56-4185-97f7-7afac65ff449@suse.cz/ [2]

GONG Ruiqi (2):
  slab: Adjust placement of __kvmalloc_node_noprof
  slab: Achieve better kmalloc caches randomization in kvmalloc

 mm/slub.c | 162 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/util.c | 162 ------------------------------------------------------
 2 files changed, 162 insertions(+), 162 deletions(-)

Comments

Hyeonggon Yoo Feb. 12, 2025, 2:20 p.m. UTC | #1
On Wed, Feb 12, 2025 at 04:15:03PM +0800, GONG Ruiqi wrote:
> [...]

Hi, GONG!
Sorry for my late review.

This patch series looks good to me (with a nit),
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Also, I verified that the problem you described exists on slab/for-next,
and the patch series fixes the problem. Please feel free to add,
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

nit: Does it make sense to call __kvmalloc_node_track_caller_noprof()
instead of __do_kmalloc_node() to avoid bloating the code size?

My simple build test says it saves 1592 bytes:
  $ ./scripts/bloat-o-meter slub.o.before slub.o.after
  add/remove: 0/1 grow/shrink: 0/1 up/down: 0/-1592 (-1592)
  Function                                     old     new   delta
  __kvmalloc_node_noprof.cold                   39       -     -39
  __kvmalloc_node_noprof                      1755     202   -1553
  Total: Before=79723, After=78131, chg -2.00%

Vlastimil Babka Feb. 12, 2025, 2:32 p.m. UTC | #2
On 2/12/25 15:20, Harry Yoo wrote:
> On Wed, Feb 12, 2025 at 04:15:03PM +0800, GONG Ruiqi wrote:
>> [...]
> 
> Hi, GONG!
> Sorry for my late review.
> 
> This patch series looks good to me (with a nit),
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> Also, I verified that the problem you described exists on slab/for-next,
> and the patch series fixes the problem. Please feel free to add,
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks!

> nit: Does it make sense to call __kvmalloc_node_track_caller_noprof()
> instead of __do_kmalloc_node() to avoid bloating the code size?

Hm I think it would be a bit arbitrary to make kvmalloc special like this
here. But we should probably change __do_kmalloc_node() to
__fastpath_inline. Or even check if not making it inline at all results in
other callers (not kvmalloc probably due to its complexity) doing a tail
call there, which should be fast enough.

Vlastimil Babka Feb. 12, 2025, 3:12 p.m. UTC | #3
On 2/12/25 09:15, GONG Ruiqi wrote:
> [...]
> 
> GONG Ruiqi (2):
>   slab: Adjust placement of __kvmalloc_node_noprof
>   slab: Achieve better kmalloc caches randomization in kvmalloc

Applied to slab/for-next, thanks!
