Message ID | 20190708134808.e89f3bfadd9f6ffd7eff9ba9@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm/z3fold.c: don't try to use buddy slots after free |
On Mon, Jul 8, 2019 at 4:48 AM Vitaly Wool <vitalywool@gmail.com> wrote:
>
> From fd87fdc38ea195e5a694102a57bd4d59fc177433 Mon Sep 17 00:00:00 2001
> From: Vitaly Wool <vitalywool@gmail.com>
> Date: Mon, 8 Jul 2019 13:41:02 +0200
> [PATCH] mm/z3fold: don't try to use buddy slots after free
>
> As reported by Henry Burns:
>
> Running z3fold stress testing with address sanitization
> showed zhdr->slots was being used after it was freed.
>
> z3fold_free(z3fold_pool, handle)
>   free_handle(handle)
>     kmem_cache_free(pool->c_handle, zhdr->slots)
>   release_z3fold_page_locked_list(kref)
>     __release_z3fold_page(zhdr, true)
>       zhdr_to_pool(zhdr)
>         slots_to_pool(zhdr->slots)  *BOOM*
>
> To fix this, add a pointer to the pool back to z3fold_header and modify
> zhdr_to_pool to return zhdr->pool.
>
> Fixes: 7c2b8baa61fe ("mm/z3fold.c: add structure for buddy handles")
>
> Reported-by: Henry Burns <henryburns@google.com>
> Signed-off-by: Vitaly Wool <vitalywool@gmail.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

> ---
>  mm/z3fold.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 985732c8b025..e1686bf6d689 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -101,6 +101,7 @@ struct z3fold_buddy_slots {
>   * @refcount:		reference count for the z3fold page
>   * @work:		work_struct for page layout optimization
>   * @slots:		pointer to the structure holding buddy slots
> + * @pool:		pointer to the containing pool
>   * @cpu:		CPU which this page "belongs" to
>   * @first_chunks:	the size of the first buddy in chunks, 0 if free
>   * @middle_chunks:	the size of the middle buddy in chunks, 0 if free
> @@ -114,6 +115,7 @@ struct z3fold_header {
>  	struct kref refcount;
>  	struct work_struct work;
>  	struct z3fold_buddy_slots *slots;
> +	struct z3fold_pool *pool;
>  	short cpu;
>  	unsigned short first_chunks;
>  	unsigned short middle_chunks;
> @@ -320,6 +322,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page,
>  	zhdr->start_middle = 0;
>  	zhdr->cpu = -1;
>  	zhdr->slots = slots;
> +	zhdr->pool = pool;
>  	INIT_LIST_HEAD(&zhdr->buddy);
>  	INIT_WORK(&zhdr->work, compact_page_work);
>  	return zhdr;
> @@ -426,7 +429,7 @@ static enum buddy handle_to_buddy(unsigned long handle)
>
>  static inline struct z3fold_pool *zhdr_to_pool(struct z3fold_header *zhdr)
>  {
> -	return slots_to_pool(zhdr->slots);
> +	return zhdr->pool;
>  }
>
>  static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
> --
> 2.17.1
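For illustration, below is a minimal userspace sketch of the lifetime problem and of the fix of caching the pool pointer in the page header. The types and helpers here (struct header, pool_via_slots(), pool_via_header()) are simplified stand-ins invented for this sketch, not the actual z3fold code:

```c
/*
 * Minimal userspace sketch of the use-after-free and the fix, using
 * simplified stand-in types; this is not the kernel code itself.
 */
#include <stdio.h>
#include <stdlib.h>

struct pool {
	const char *name;
};

struct buddy_slots {
	struct pool *pool;		/* back-pointer used by the old lookup */
};

struct header {
	struct buddy_slots *slots;	/* freed first on the free path */
	struct pool *pool;		/* the fix: cache the pool here */
};

/* Old lookup path: depends on slots, which may already be freed. */
static struct pool *pool_via_slots(struct header *h)
{
	return h->slots->pool;
}

/* Fixed lookup path: uses the cached pointer, valid for the page's lifetime. */
static struct pool *pool_via_header(struct header *h)
{
	return h->pool;
}

int main(void)
{
	struct pool p = { .name = "z3fold-pool" };
	struct buddy_slots *slots = malloc(sizeof(*slots));
	struct header h;

	if (!slots)
		return 1;
	slots->pool = &p;
	h.slots = slots;
	h.pool = &p;	/* set once at page init, like init_z3fold_page() */

	/* Before the slots are released, both lookups are fine. */
	printf("via slots:  %s\n", pool_via_slots(&h)->name);

	/* The free path releases the slots first (kmem_cache_free in z3fold)... */
	free(slots);

	/*
	 * ...and the release path still needs the pool afterwards.
	 * pool_via_slots(&h) would now read freed memory; the cached
	 * pointer keeps working.
	 */
	printf("via header: %s\n", pool_via_header(&h)->name);
	return 0;
}
```

The cost of the fix is one extra pointer per z3fold page header; in exchange, zhdr_to_pool() no longer depends on the buddy slots object, whose lifetime can end earlier on the free path.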