mm: vmalloc: Prevent use after free in _vm_unmap_aliases

Message ID 1616062105-23263-1-git-send-email-vjitta@codeaurora.org (mailing list archive)
State New, archived

Commit Message

Vijayanand Jitta March 18, 2021, 10:08 a.m. UTC
From: Vijayanand Jitta <vjitta@codeaurora.org>

A potential use after free can occur in _vm_unmap_aliases
where an already freed vmap_area could be accessed. Consider
the following scenario:

Process 1						Process 2

__vm_unmap_aliases					__vm_unmap_aliases
	purge_fragmented_blocks_allcpus				rcu_read_lock()
		rcu_read_lock()
			list_del_rcu(&vb->free_list)
									list_for_each_entry_rcu(vb .. )
	__purge_vmap_area_lazy
		kmem_cache_free(va)
										va_start = vb->va->va_start

Here Process 1 is in the purge path: it does list_del_rcu on the vmap_block
and later frees the vmap_area. Since Process 2 was holding the rcu read lock
at that time, the vmap_block is still visible to it, so Process 2 accesses
the vmap_area of that vmap_block, which was already freed by Process 1,
resulting in a use after free.

Fix this by checking vb->dirty before accessing the vmap_area
structure. Since vb->dirty is set to VMAP_BBMAP_BITS in the purge path,
checking for this prevents the use after free.
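
For illustration, here is a simplified sketch of the affected loop in
_vm_unmap_aliases() with the check applied, reconstructed from the hunk
below (surrounding code abbreviated; details may differ slightly from the
exact source):

	rcu_read_lock();
	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
		spin_lock(&vb->lock);
		/*
		 * Skip blocks being purged: the purge path sets vb->dirty to
		 * VMAP_BBMAP_BITS before freeing the block's vmap_area, so
		 * vb->va must not be dereferenced in that case.
		 */
		if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
			unsigned long va_start = vb->va->va_start;
			unsigned long s, e;

			s = va_start + (vb->dirty_min << PAGE_SHIFT);
			e = va_start + (vb->dirty_max << PAGE_SHIFT);

			start = min(s, start);
			end   = max(e, end);

			flush = 1;
		}
		spin_unlock(&vb->lock);
	}
	rcu_read_unlock();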

Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
---
 mm/vmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Uladzislau Rezki March 18, 2021, 4:59 p.m. UTC | #1
On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@codeaurora.org wrote:
> From: Vijayanand Jitta <vjitta@codeaurora.org>
> 
> A potential use after free can occur in _vm_unmap_aliases
> where an already freed vmap_area could be accessed, Consider
> the following scenario:
> 
> Process 1						Process 2
> 
> __vm_unmap_aliases					__vm_unmap_aliases
> 	purge_fragmented_blocks_allcpus				rcu_read_lock()
> 		rcu_read_lock()
> 			list_del_rcu(&vb->free_list)
> 									list_for_each_entry_rcu(vb .. )
> 	__purge_vmap_area_lazy
> 		kmem_cache_free(va)
> 										va_start = vb->va->va_start
Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?

--
Vlad Rezki
Vijayanand Jitta March 24, 2021, 3:29 a.m. UTC | #2
On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@codeaurora.org wrote:
>> From: Vijayanand Jitta <vjitta@codeaurora.org>
>>
>> A potential use after free can occur in _vm_unmap_aliases
>> where an already freed vmap_area could be accessed, Consider
>> the following scenario:
>>
>> Process 1						Process 2
>>
>> __vm_unmap_aliases					__vm_unmap_aliases
>> 	purge_fragmented_blocks_allcpus				rcu_read_lock()
>> 		rcu_read_lock()
>> 			list_del_rcu(&vb->free_list)
>> 									list_for_each_entry_rcu(vb .. )
>> 	__purge_vmap_area_lazy
>> 		kmem_cache_free(va)
>> 										va_start = vb->va->va_start
> Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
> 
> --
> Vlad Rezki
> 

Thanks for suggestion.

I see that free_vmap_area_lock (a spinlock) is taken in __purge_vmap_area_lazy
while it loops through the list and calls kmem_cache_free on the va's. So it
looks like we can't replace it with kfree_rcu, as that might cause scheduling
within atomic context.
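
Roughly, the pattern being referred to looks like this (heavily simplified
sketch, not the exact kernel code; list and variable names are illustrative):

	spin_lock(&free_vmap_area_lock);
	list_for_each_entry_safe(va, n_va, &local_purge_list, list) {
		/* ... return the area to the free tree/list ... */

		/*
		 * The vmap_area is freed here while the spinlock is held,
		 * i.e. in atomic context, so only non-sleeping APIs can be
		 * used at this point.
		 */
		kmem_cache_free(vmap_area_cachep, va);
	}
	spin_unlock(&free_vmap_area_lock);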

Thanks,
Vijay
Uladzislau Rezki March 24, 2021, 1:32 p.m. UTC | #3
> 
> On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> > On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@codeaurora.org wrote:
> >> From: Vijayanand Jitta <vjitta@codeaurora.org>
> >>
> >> A potential use after free can occur in _vm_unmap_aliases
> >> where an already freed vmap_area could be accessed, Consider
> >> the following scenario:
> >>
> >> Process 1						Process 2
> >>
> >> __vm_unmap_aliases					__vm_unmap_aliases
> >> 	purge_fragmented_blocks_allcpus				rcu_read_lock()
> >> 		rcu_read_lock()
> >> 			list_del_rcu(&vb->free_list)
> >> 									list_for_each_entry_rcu(vb .. )
> >> 	__purge_vmap_area_lazy
> >> 		kmem_cache_free(va)
> >> 										va_start = vb->va->va_start
> > Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
> > 
> > --
> > Vlad Rezki
> > 
> 
> Thanks for suggestion.
> 
> I see free_vmap_area_lock (spinlock) is taken in __purge_vmap_area_lazy
> while it loops through list and calls kmem_cache_free on va's. So, looks
> like we can't replace it with kfree_rcu as it might cause scheduling
> within atomic context.
> 
The double-argument form of kfree_rcu() is safe to use from atomic
contexts; it does not use any sleeping primitives, so the kmem_cache_free()
call could be replaced.
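
For reference, a minimal illustration of the double-argument form (the
struct and names below are made up for illustration; the real object would
need an embedded rcu_head for this to work):

	/* Hypothetical example object, not the actual struct vmap_area. */
	struct foo {
		unsigned long start;
		unsigned long end;
		struct rcu_head rcu;	/* required by kfree_rcu(ptr, field) */
	};

	static void release_foo(struct foo *f)
	{
		/*
		 * kfree_rcu(ptr, field) queues the object to be freed after a
		 * grace period via the embedded rcu_head and never sleeps, so
		 * it may be called under a spinlock. (The single-argument
		 * form, by contrast, may fall back to synchronize_rcu() and
		 * sleep.)
		 */
		kfree_rcu(f, rcu);
	}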

On the other hand, I see that the per-cpu KVA allocator is the only user of
the RCU here, and your change fixes it. Feel free to use:

Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

Thanks.

--
Vlad Rezki

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5f2a84..ebb6f57 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1762,7 +1762,7 @@  static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_lock();
 		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 			spin_lock(&vb->lock);
-			if (vb->dirty) {
+			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;