mm/ksm: fix possible UAF of stable_node

Message ID 20240513-b4-ksm-stable-node-uaf-v1-1-f687de76f452@linux.dev (mailing list archive)
State New
Series mm/ksm: fix possible UAF of stable_node

Commit Message

Chengming Zhou May 13, 2024, 3:07 a.m. UTC
The commit 2c653d0ee2ae ("ksm: introduce ksm_max_page_sharing per page
deduplication limit") introduced a possible failure case in
stable_tree_insert(): we may free the newly allocated stable_node_dup
if we fail to prepare the missing chain node.

In that case the kfolio is returned and unlocked with the freed
stable_node still set as its stable node, so any later MM activity that
accesses kfolio->mapping hits a use-after-free (UAF).
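
For context, the failure path looks roughly like this (a simplified,
from-memory sketch of stable_tree_insert(); the chain-allocation and
free helpers may not match the exact upstream code):

	INIT_HLIST_HEAD(&stable_node_dup->hlist);
	stable_node_dup->kpfn = kpfn;
	/* kfolio->mapping now points at stable_node_dup */
	folio_set_stable_node(kfolio, stable_node_dup);
	...
	if (need_chain) {
		if (!is_stable_node_chain(stable_node)) {
			/* chain node is missing, try to create it */
			stable_node = alloc_stable_node_chain(stable_node, root);
			if (!stable_node) {
				/*
				 * stable_node_dup is freed here, but
				 * kfolio->mapping still points at it and the
				 * caller unlocks and reuses the kfolio: UAF.
				 */
				free_stable_node(stable_node_dup);
				return NULL;
			}
		}
		stable_node_chain_add_dup(stable_node_dup, stable_node);
	}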

Fix it by moving folio_set_stable_node() to the end after stable_node
is inserted successfully.

Fixes: 2c653d0ee2ae ("ksm: introduce ksm_max_page_sharing per page deduplication limit")
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/ksm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)


---
base-commit: 7e8aafe0636cdcc5c9699ced05ff1f8ffcb937e2
change-id: 20240513-b4-ksm-stable-node-uaf-ccc7fe2fd6bd

Best regards,

Comments

David Hildenbrand May 14, 2024, 8:46 p.m. UTC | #1
On 13.05.24 05:07, Chengming Zhou wrote:
> The commit 2c653d0ee2ae ("ksm: introduce ksm_max_page_sharing per page
> deduplication limit") introduced a possible failure case in
> stable_tree_insert(): we may free the newly allocated stable_node_dup
> if we fail to prepare the missing chain node.
> 
> In that case the kfolio is returned and unlocked with the freed
> stable_node still set as its stable node, so any later MM activity that
> accesses kfolio->mapping hits a use-after-free (UAF).
> 
> Fix it by moving folio_set_stable_node() to the end after stable_node
> is inserted successfully.
> 
> Fixes: 2c653d0ee2ae ("ksm: introduce ksm_max_page_sharing per page deduplication limit")
> Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
> ---
>   mm/ksm.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index e1034bf1c937..a8b76af5cf64 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2153,7 +2153,6 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
>   
>   	INIT_HLIST_HEAD(&stable_node_dup->hlist);
>   	stable_node_dup->kpfn = kpfn;
> -	folio_set_stable_node(kfolio, stable_node_dup);
>   	stable_node_dup->rmap_hlist_len = 0;
>   	DO_NUMA(stable_node_dup->nid = nid);
>   	if (!need_chain) {
> @@ -2172,6 +2171,8 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
>   		stable_node_chain_add_dup(stable_node_dup, stable_node);
>   	}
>   
> +	folio_set_stable_node(kfolio, stable_node_dup);
> +
>   	return stable_node_dup;

Looks correct to me.

We might now link the node before the folio->mapping is set up. Do we 
care? Don't think so.

Acked-by: David Hildenbrand <david@redhat.com>
Chengming Zhou May 15, 2024, 3:10 a.m. UTC | #2
On 2024/5/15 04:46, David Hildenbrand wrote:
> On 13.05.24 05:07, Chengming Zhou wrote:
>> The commit 2c653d0ee2ae ("ksm: introduce ksm_max_page_sharing per page
>> deduplication limit") introduced a possible failure case in
>> stable_tree_insert(): we may free the newly allocated stable_node_dup
>> if we fail to prepare the missing chain node.
>>
>> In that case the kfolio is returned and unlocked with the freed
>> stable_node still set as its stable node, so any later MM activity that
>> accesses kfolio->mapping hits a use-after-free (UAF).
>>
>> Fix it by moving folio_set_stable_node() to the end after stable_node
>> is inserted successfully.
>>
>> Fixes: 2c653d0ee2ae ("ksm: introduce ksm_max_page_sharing per page deduplication limit")
>> Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
>> ---
>>   mm/ksm.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index e1034bf1c937..a8b76af5cf64 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -2153,7 +2153,6 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
>>  
>>  	INIT_HLIST_HEAD(&stable_node_dup->hlist);
>>  	stable_node_dup->kpfn = kpfn;
>> -	folio_set_stable_node(kfolio, stable_node_dup);
>>  	stable_node_dup->rmap_hlist_len = 0;
>>  	DO_NUMA(stable_node_dup->nid = nid);
>>  	if (!need_chain) {
>> @@ -2172,6 +2171,8 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
>>  		stable_node_chain_add_dup(stable_node_dup, stable_node);
>>  	}
>>  
>> +	folio_set_stable_node(kfolio, stable_node_dup);
>> +
>>  	return stable_node_dup;
> 
> Looks correct to me.
> 
> We might now link the node before the folio->mapping is set up. Do we care? Don't think so.

Yeah, it shouldn't be a problem, although it doesn't look very nice.

Another way to fix it would be to call folio_set_stable_node(folio, NULL)
in the failure case, which is safe since we still hold the folio lock.
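
Roughly something like this in the chain-allocation failure path (an
untested sketch, just to illustrate the idea):

	if (!stable_node) {
		/* clear the stale pointer before freeing the dup */
		folio_set_stable_node(kfolio, NULL);
		free_stable_node(stable_node_dup);
		return NULL;
	}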

> 
> Acked-by: David Hildenbrand <david@redhat.com>
> 

Thanks.

Patch

diff --git a/mm/ksm.c b/mm/ksm.c
index e1034bf1c937..a8b76af5cf64 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2153,7 +2153,6 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 
 	INIT_HLIST_HEAD(&stable_node_dup->hlist);
 	stable_node_dup->kpfn = kpfn;
-	folio_set_stable_node(kfolio, stable_node_dup);
 	stable_node_dup->rmap_hlist_len = 0;
 	DO_NUMA(stable_node_dup->nid = nid);
 	if (!need_chain) {
@@ -2172,6 +2171,8 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 		stable_node_chain_add_dup(stable_node_dup, stable_node);
 	}
 
+	folio_set_stable_node(kfolio, stable_node_dup);
+
 	return stable_node_dup;
 }