Message ID | 1616062105-23263-1-git-send-email-vjitta@codeaurora.org (mailing list archive)
---|---
State | New, archived
Series | mm: vmalloc: Prevent use after free in _vm_unmap_aliases
On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@codeaurora.org wrote:
> From: Vijayanand Jitta <vjitta@codeaurora.org>
>
> A potential use after free can occur in _vm_unmap_aliases
> where an already freed vmap_area could be accessed. Consider
> the following scenario:
>
> Process 1                                  Process 2
>
> __vm_unmap_aliases                         __vm_unmap_aliases
>   purge_fragmented_blocks_allcpus          rcu_read_lock()
>     rcu_read_lock()
>       list_del_rcu(&vb->free_list)
>                                            list_for_each_entry_rcu(vb .. )
>   __purge_vmap_area_lazy
>     kmem_cache_free(va)
>                                            va_start = vb->va->va_start

Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?

--
Vlad Rezki
On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@codeaurora.org wrote:
>> From: Vijayanand Jitta <vjitta@codeaurora.org>
>>
>> A potential use after free can occur in _vm_unmap_aliases
>> where an already freed vmap_area could be accessed. Consider
>> the following scenario:
>>
>> Process 1                                  Process 2
>>
>> __vm_unmap_aliases                         __vm_unmap_aliases
>>   purge_fragmented_blocks_allcpus          rcu_read_lock()
>>     rcu_read_lock()
>>       list_del_rcu(&vb->free_list)
>>                                            list_for_each_entry_rcu(vb .. )
>>   __purge_vmap_area_lazy
>>     kmem_cache_free(va)
>>                                            va_start = vb->va->va_start
> Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
>
> --
> Vlad Rezki

Thanks for the suggestion.

I see free_vmap_area_lock (a spinlock) is taken in __purge_vmap_area_lazy
while it loops through the list and calls kmem_cache_free() on the va's.
So it looks like we can't replace it with kfree_rcu(), as that might cause
scheduling within atomic context.

Thanks,
Vijay
>
> On 3/18/2021 10:29 PM, Uladzislau Rezki wrote:
> > On Thu, Mar 18, 2021 at 03:38:25PM +0530, vjitta@codeaurora.org wrote:
> >> From: Vijayanand Jitta <vjitta@codeaurora.org>
> >>
> >> A potential use after free can occur in _vm_unmap_aliases
> >> where an already freed vmap_area could be accessed. Consider
> >> the following scenario:
> >>
> >> Process 1                                  Process 2
> >>
> >> __vm_unmap_aliases                         __vm_unmap_aliases
> >>   purge_fragmented_blocks_allcpus          rcu_read_lock()
> >>     rcu_read_lock()
> >>       list_del_rcu(&vb->free_list)
> >>                                            list_for_each_entry_rcu(vb .. )
> >>   __purge_vmap_area_lazy
> >>     kmem_cache_free(va)
> >>                                            va_start = vb->va->va_start
> > Or maybe we should switch to kfree_rcu() instead of kmem_cache_free()?
> >
> > --
> > Vlad Rezki
>
> Thanks for the suggestion.
>
> I see free_vmap_area_lock (a spinlock) is taken in __purge_vmap_area_lazy
> while it loops through the list and calls kmem_cache_free() on the va's.
> So it looks like we can't replace it with kfree_rcu(), as that might cause
> scheduling within atomic context.

The double-argument form of kfree_rcu() is safe to use from atomic
contexts: it does not use any sleeping primitives, so the replacement
would be possible. On the other hand, I see that the per-CPU KVA
allocator is the only user of the RCU here, and your change fixes it.
Feel free to use:

Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

Thanks.

--
Vlad Rezki
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5f2a84..ebb6f57 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1762,7 +1762,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_lock();
 		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 			spin_lock(&vb->lock);
-			if (vb->dirty) {
+			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;