
[v2,2/2] slab: Achieve better kmalloc caches randomization in kvmalloc

Message ID 20250208014723.1514049-3-gongruiqi1@huawei.com (mailing list archive)
State Superseded
Series Refine kmalloc caches randomization in kvmalloc

Commit Message

GONG Ruiqi Feb. 8, 2025, 1:47 a.m. UTC
As revealed by this writeup[1], because __kmalloc_node (now renamed to
__kmalloc_node_noprof) is an exported symbol and will never get inlined,
using it in kvmalloc_node (now __kvmalloc_node_noprof) makes the
_RET_IP_ inside it always point to the same address:

    upper_caller
        kvmalloc
        kvmalloc_node
        kvmalloc_node_noprof
        __kvmalloc_node_noprof	<-- macros all the way down to here
            __kmalloc_node_noprof
                __do_kmalloc_node(.., _RET_IP_)
            ...			<-- _RET_IP_ points here

That literally means that all kmalloc calls invoked via kvmalloc would
use the same seed for cache randomization (CONFIG_RANDOM_KMALLOC_CACHES),
which makes this hardening unfunctional.
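
For context, with CONFIG_RANDOM_KMALLOC_CACHES the cache copy is picked
by hashing the caller address together with a per-boot seed, roughly as
below (a simplified sketch of kmalloc_type() in include/linux/slab.h;
exact names and details vary across kernel versions):

    /*
     * Sketch of the randomized kmalloc cache selection (the real
     * kmalloc_type() also handles DMA/RECLAIM/ACCOUNT flags).
     * 'caller' is the _RET_IP_ propagated from the allocation site.
     */
    static __always_inline enum kmalloc_cache_type
    kmalloc_type(gfp_t flags, unsigned long caller)
    {
            return KMALLOC_RANDOM_START +
                   hash_64(caller ^ random_kmalloc_seed,
                           ilog2(RANDOM_KMALLOC_CACHES_NR + 1));
    }

With a fixed caller address the hash is a per-boot constant, so every
kmalloc that funnels through kvmalloc lands in the same one of the (by
default 16) randomized cache copies.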

The root cause of this problem, IMHO, is that _RET_IP_ alone cannot
identify the actual allocation site when kmalloc is called inside
wrappers or helper functions. And I believe there could be similar
cases in other functions. Nevertheless, I haven't thought of a good
general solution, so for now let's solve this specific case first.
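
Note that for single-level wrappers the kernel already has a partial
remedy: helpers such as kstrdup() allocate via kmalloc_track_caller(),
a macro that forwards the wrapper's own _RET_IP_ into the slab core.
A minimal sketch of that pattern (names approximate, and
kstrdup_sketch() is illustrative rather than the real helper):

    /*
     * The macro expands inside the wrapper, so _RET_IP_ evaluates to
     * the wrapper's return address, i.e. a location in its caller.
     */
    #define kmalloc_track_caller(size, flags) \
            __kmalloc_node_track_caller(size, flags, NUMA_NO_NODE, _RET_IP_)

    char *kstrdup_sketch(const char *s, gfp_t gfp)
    {
            size_t len = strlen(s) + 1;
            char *buf = kmalloc_track_caller(len, gfp);

            if (buf)
                    memcpy(buf, s, len);
            return buf;
    }

The change below applies the same idea inside kvmalloc, by passing
_RET_IP_ to __do_kmalloc_node() directly.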

In __kvmalloc_node_noprof, replace the call to __kmalloc_node_noprof
with a direct call to __do_kmalloc_node, so that _RET_IP_ takes the
return address of kvmalloc itself and can differentiate each kvmalloc
invocation:

    upper_caller
        kvmalloc
        kvmalloc_node
        kvmalloc_node_noprof
        __kvmalloc_node_noprof	<-- macros all the way down to here
            __do_kmalloc_node(.., _RET_IP_)
        ...			<-- _RET_IP_ points here

Thanks to Tamás Koczka for the report and discussion!

Link: https://github.com/google/security-research/pull/83/files#diff-1604319b55a48c39a210ee52034ed7ff5b9cdc3d704d2d9e34eb230d19fae235R200 [1]
Reported-by: Tamás Koczka <poprdi@google.com>
Signed-off-by: GONG Ruiqi <gongruiqi1@huawei.com>
---
 mm/slub.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Comments

Vlastimil Babka Feb. 10, 2025, 10:05 a.m. UTC | #1
On 2/8/25 02:47, GONG Ruiqi wrote:
> As revealed by this writeup[1], because __kmalloc_node (now renamed to
> __kmalloc_node_noprof) is an exported symbol and will never get inlined,
> using it in kvmalloc_node (now __kvmalloc_node_noprof) makes the
> _RET_IP_ inside it always point to the same address:
> 
>     upper_caller
>         kvmalloc
>         kvmalloc_node
>         kvmalloc_node_noprof
>         __kvmalloc_node_noprof	<-- macros all the way down to here
>             __kmalloc_node_noprof
>                 __do_kmalloc_node(.., _RET_IP_)
>             ...			<-- _RET_IP_ points here
> 
> That literally means that all kmalloc calls invoked via kvmalloc would
> use the same seed for cache randomization (CONFIG_RANDOM_KMALLOC_CACHES),
> which makes this hardening unfunctional.

                 non-functional?

> The root cause of this problem, IMHO, is that _RET_IP_ alone cannot
> identify the actual allocation site when kmalloc is called inside
> wrappers or helper functions.

inside non-inlined wrappers... ?

>  And I believe there could be similar cases in other functions.
> Nevertheless, I haven't thought of a good general solution, so for now
> let's solve this specific case first.

Yeah, it's the same problem with shared allocation wrappers that
allocation tagging has.
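
For reference, allocation tagging deals with shared wrappers by keeping
the public allocator names as macros while wrapper bodies call only the
*_noprof variants, so the per-call-site tag is instantiated at the
outermost macro expansion, i.e. in the real caller. A rough sketch of
the alloc_hooks() idea from include/linux/alloc_tag.h (simplified, not
the verbatim implementation):

    /*
     * Each expansion of alloc_hooks() defines its own static tag,
     * identifying the call site at compile time rather than via a
     * runtime _RET_IP_.
     */
    #define alloc_hooks(_do_alloc)                                  \
    ({                                                              \
            DEFINE_ALLOC_TAG(_alloc_tag);                           \
            struct alloc_tag *_old = alloc_tag_save(&_alloc_tag);   \
            typeof(_do_alloc) _res = _do_alloc;                     \
            alloc_tag_restore(&_alloc_tag, _old);                   \
            _res;                                                   \
    })

    /* The wrapper body calls the _noprof variant; the tag is created
     * where the kvmalloc() macro expands, in the actual caller. */
    #define kvmalloc(size, flags) alloc_hooks(kvmalloc_noprof(size, flags))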

> In __kvmalloc_node_noprof, replace the call to __kmalloc_node_noprof
> with a direct call to __do_kmalloc_node, so that _RET_IP_ takes the
> return address of kvmalloc itself and can differentiate each kvmalloc
> invocation:
> 
>     upper_caller
>         kvmalloc
>         kvmalloc_node
>         kvmalloc_node_noprof
>         __kvmalloc_node_noprof	<-- macros all the way down to here
>             __do_kmalloc_node(.., _RET_IP_)
>         ...			<-- _RET_IP_ points here
> 
> Thanks to Tamás Koczka for the report and discussion!
> 
> Link: https://github.com/google/security-research/pull/83/files#diff-1604319b55a48c39a210ee52034ed7ff5b9cdc3d704d2d9e34eb230d19fae235R200 [1]

This should be slightly better? A permalink for the file itself:
https://github.com/google/security-research/blob/908d59b573960dc0b90adda6f16f7017aca08609/pocs/linux/kernelctf/CVE-2024-27397_mitigation/docs/exploit.md

Thanks.

> Reported-by: Tamás Koczka <poprdi@google.com>
> Signed-off-by: GONG Ruiqi <gongruiqi1@huawei.com>
> ---
>  mm/slub.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 0830894bb92c..46e884b77dca 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4903,9 +4903,9 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
>  	 * It doesn't really make sense to fallback to vmalloc for sub page
>  	 * requests
>  	 */
> -	ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
> -				    kmalloc_gfp_adjust(flags, size),
> -				    node);
> +	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
> +				kmalloc_gfp_adjust(flags, size),
> +				node, _RET_IP_);
>  	if (ret || size <= PAGE_SIZE)
>  		return ret;
>

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 0830894bb92c..46e884b77dca 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4903,9 +4903,9 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
 	 * It doesn't really make sense to fallback to vmalloc for sub page
 	 * requests
 	 */
-	ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
-				    kmalloc_gfp_adjust(flags, size),
-				    node);
+	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
+				kmalloc_gfp_adjust(flags, size),
+				node, _RET_IP_);
 	if (ret || size <= PAGE_SIZE)
 		return ret;