| Message ID | 20190701173042.221453-1-henryburns@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm/z3fold: Fix z3fold_buddy_slots use after free |
Hi Henry,

On Mon, Jul 1, 2019 at 8:31 PM Henry Burns <henryburns@google.com> wrote:
>
> Running z3fold stress testing with address sanitization
> showed zhdr->slots was being used after it was freed.
>
> z3fold_free(z3fold_pool, handle)
>   free_handle(handle)
>     kmem_cache_free(pool->c_handle, zhdr->slots)
>   release_z3fold_page_locked_list(kref)
>     __release_z3fold_page(zhdr, true)
>       zhdr_to_pool(zhdr)
>         slots_to_pool(zhdr->slots) *BOOM*

Thanks for looking into this. I'm not entirely sure I'm all for
splitting free_handle(), but let me think about it.

> Instead we split free_handle into two functions, release_handle()
> and free_slots(). We use release_handle() in place of free_handle(),
> and use free_slots() to call kmem_cache_free() after
> __release_z3fold_page() is done.

A little less intrusive solution would be to move the backlink to the
pool from the slots back to z3fold_header. Looks like it was a bad idea
from the start.

Best regards,
   Vitaly
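The call chain in the report can be reduced to a compact sketch. This is not the kernel code: `mini_pool`, `mini_slots`, `mini_header`, and the `freed` flag are hypothetical miniatures used only to model the dangling backlink, and returning NULL stands in for the poisoned read that the address sanitizer caught.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniatures of the z3fold structures: the header has no
 * pool pointer of its own, so the only backlink to the pool goes
 * through the slots object. */
struct mini_pool   { int id; };
struct mini_slots  { struct mini_pool *pool; bool freed; };
struct mini_header { struct mini_slots *slots; };

/* Models slots_to_pool(): the result is only meaningful while the
 * slots are still alive. NULL stands in for the use-after-free read. */
static struct mini_pool *mini_slots_to_pool(struct mini_slots *s)
{
	return s->freed ? NULL : s->pool;
}

/* Models zhdr_to_pool() in the buggy ordering: if free_handle() has
 * already released the slots, this lookup goes through stale memory. */
static struct mini_pool *mini_zhdr_to_pool(struct mini_header *h)
{
	return mini_slots_to_pool(h->slots);
}
```

In this model, z3fold_free() first marks the slots freed and only then runs the release path that calls mini_zhdr_to_pool(), which no longer has anything valid to return: the *BOOM* in the trace.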
On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool <vitalywool@gmail.com> wrote:
>
> Thanks for looking into this. I'm not entirely sure I'm all for
> splitting free_handle() but let me think about it.
>
> A little less intrusive solution would be to move backlink to pool
> from slots back to z3fold_header. Looks like it was a bad idea from
> the start.

We still want z3fold pages to be movable, though. Wouldn't moving the
backlink to the pool from the slots to z3fold_header prevent us from
enabling migration?
On Tue, Jul 2, 2019 at 6:57 PM Henry Burns <henryburns@google.com> wrote:
>
> We still want z3fold pages to be movable though. Wouldn't moving
> the backlink to the pool from slots to z3fold_header prevent us from
> enabling migration?

That is a valid point, but we can just add the pool pointer back to
z3fold_header. The thing here is, there's another patch in the pipeline
that allows for better (inter-page) compaction, and it will somewhat
complicate things, because sometimes slots will have to be released
after the z3fold page is released (because they will hold a handle to
another z3fold page). I would prefer that we just added pool back to
z3fold_header and changed zhdr_to_pool() to just return zhdr->pool,
kept the compaction patch valid, and then came back to the size
optimization later.

Best regards,
   Vitaly
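The suggestion of re-adding the pool backlink to the header, so that zhdr_to_pool() stops going through the slots entirely, can be sketched as follows. These are hypothetical miniature structs, not the real kernel definitions:

```c
#include <stddef.h>

struct mini_pool  { int id; };
struct mini_slots { struct mini_pool *pool; };

/* The header carries its own pool backlink again, so looking up the
 * pool never touches the (possibly already freed) slots object. */
struct mini_header {
	struct mini_pool  *pool;   /* re-added backlink */
	struct mini_slots *slots;
};

/* zhdr_to_pool() becomes a trivial field read with no lifetime
 * dependency on the slots. */
static struct mini_pool *mini_zhdr_to_pool(struct mini_header *h)
{
	return h->pool;
}
```

The trade-off being discussed: this costs one pointer per z3fold page header, but it removes the ordering dependency between freeing the slots and resolving the pool.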
On Tue, Jul 2, 2019 at 11:03 PM Vitaly Wool <vitalywool@gmail.com> wrote:
>
> That is a valid point but we can just add back pool pointer to
> z3fold_header. [...] I would prefer that we just added back
> pool to z3fold_header and changed zhdr_to_pool to just return
> zhdr->pool, then had the compaction patch valid again, and then we
> could come back to size optimization.

I see your point, patch incoming.
On Tue, Jul 2, 2019 at 11:03 PM Vitaly Wool <vitalywool@gmail.com> wrote:
>
> That is a valid point but we can just add back pool pointer to
> z3fold_header. The thing here is, there's another patch in the
> pipeline that allows for a better (inter-page) compaction and it will
> somewhat complicate things, because sometimes slots will have to be
> released after z3fold page is released (because they will hold a
> handle to another z3fold page). I would prefer that we just added back
> pool to z3fold_header and changed zhdr_to_pool to just return
> zhdr->pool, then had the compaction patch valid again, and then we
> could come back to size optimization.

By adding the pool pointer back to z3fold_header, will we still be able
to move/migrate/compact the z3fold pages?
On Wed, Jul 3, 2019, 10:14 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> By adding the pool pointer back to z3fold_header, will we still be
> able to move/migrate/compact the z3fold pages?

Sure, it's only zhdr_to_pool() that will change, basically.

~Vitaly
diff --git a/mm/z3fold.c b/mm/z3fold.c
index f7993ff778df..e174d1549734 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -213,31 +213,24 @@ static inline struct z3fold_buddy_slots *handle_to_slots(unsigned long handle)
 	return (struct z3fold_buddy_slots *)(handle & ~(SLOTS_ALIGN - 1));
 }
 
-static inline void free_handle(unsigned long handle)
+static inline void release_handle(unsigned long handle)
 {
-	struct z3fold_buddy_slots *slots;
-	int i;
-	bool is_free;
-
 	if (handle & (1 << PAGE_HEADLESS))
 		return;
 
 	WARN_ON(*(unsigned long *)handle == 0);
 	*(unsigned long *)handle = 0;
-	slots = handle_to_slots(handle);
-	is_free = true;
-	for (i = 0; i <= BUDDY_MASK; i++) {
-		if (slots->slot[i]) {
-			is_free = false;
-			break;
-		}
-	}
+}
 
-	if (is_free) {
-		struct z3fold_pool *pool = slots_to_pool(slots);
+/* At this point all of the slots should be empty */
+static inline void free_slots(struct z3fold_buddy_slots *slots)
+{
+	struct z3fold_pool *pool = slots_to_pool(slots);
+	int i;
 
-		kmem_cache_free(pool->c_handle, slots);
-	}
+	for (i = 0; i <= BUDDY_MASK; i++)
+		VM_BUG_ON(slots->slot[i]);
+	kmem_cache_free(pool->c_handle, slots);
 }
 
 static struct dentry *z3fold_do_mount(struct file_system_type *fs_type,
@@ -431,7 +424,8 @@ static inline struct z3fold_pool *zhdr_to_pool(struct z3fold_header *zhdr)
 static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 {
 	struct page *page = virt_to_page(zhdr);
-	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
+	struct z3fold_buddy_slots *slots = zhdr->slots;
+	struct z3fold_pool *pool = slots_to_pool(slots);
 
 	WARN_ON(!list_empty(&zhdr->buddy));
 	set_bit(PAGE_STALE, &page->private);
@@ -442,6 +436,7 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	spin_unlock(&pool->lock);
 	if (locked)
 		z3fold_page_unlock(zhdr);
+	free_slots(slots);
 	spin_lock(&pool->stale_lock);
 	list_add(&zhdr->buddy, &pool->stale);
 	queue_work(pool->release_wq, &pool->work);
@@ -1009,7 +1004,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 		return;
 	}
 
-	free_handle(handle);
+	release_handle(handle);
 	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list)) {
 		atomic64_dec(&pool->pages_nr);
 		return;
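The ordering change in __release_z3fold_page() above can be exercised in isolation. A minimal sketch, with hypothetical names and plain malloc/free standing in for the kmem_cache: the slots pointer is captured first, the pool derived from it while the slots are still live, and the slots freed only after nothing will read through them again.

```c
#include <stdlib.h>

struct mini_pool   { long pages_nr; };
struct mini_slots  { struct mini_pool *pool; };
struct mini_header { struct mini_slots *slots; };

/* Miniature of the patched release path: every read through the slots
 * happens before free(), mirroring how free_slots() is now called only
 * once the pool backlink has been resolved. */
static struct mini_pool *mini_release_page(struct mini_header *h)
{
	struct mini_slots *slots = h->slots;   /* capture before freeing */
	struct mini_pool *pool = slots->pool;  /* last read through slots */

	free(slots);                           /* stands in for kmem_cache_free() */
	h->slots = NULL;
	pool->pages_nr--;                      /* pool pointer is still valid */
	return pool;
}
```

Reversing the first two statements with the free() reproduces the original bug shape: the pool would then be read through already-freed memory.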
Running z3fold stress testing with address sanitization showed
zhdr->slots was being used after it was freed.

z3fold_free(z3fold_pool, handle)
  free_handle(handle)
    kmem_cache_free(pool->c_handle, zhdr->slots)
  release_z3fold_page_locked_list(kref)
    __release_z3fold_page(zhdr, true)
      zhdr_to_pool(zhdr)
        slots_to_pool(zhdr->slots)  *BOOM*

Instead, we split free_handle() into two functions: release_handle()
and free_slots(). We use release_handle() in place of free_handle(),
and use free_slots() to call kmem_cache_free() after
__release_z3fold_page() is done.

Fixes: 7c2b8baa61fe ("mm/z3fold.c: add structure for buddy handles")
Signed-off-by: Henry Burns <henryburns@google.com>
---
 mm/z3fold.c | 33 ++++++++++++++-------------------
 1 file changed, 14 insertions(+), 19 deletions(-)