[1/3] z3fold: avoid subtle race when freeing slots

Message ID: 20191127152118.6314b99074b0626d4c5a8835@gmail.com
State: New, archived
Series: z3fold fixes for intra-page compaction

Commit Message

Vitaly Wool Nov. 27, 2019, 2:21 p.m. UTC
There is a subtle race between freeing slots and zeroing the last
slot, because the HANDLES_ORPHANED flag was set only after the rwlock
had been released. Set the flag before releasing the lock to avoid
the rare memory leaks this race could cause.

Signed-off-by: Vitaly Wool <vitaly.vul@sony.com>
---
 mm/z3fold.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
Patch

diff --git a/mm/z3fold.c b/mm/z3fold.c
index d48d0ec3bcdd..36bd2612f609 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -327,6 +327,10 @@  static inline void free_handle(unsigned long handle)
 	zhdr->foreign_handles--;
 	is_free = true;
 	read_lock(&slots->lock);
+	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+		read_unlock(&slots->lock);
+		return;
+	}
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (slots->slot[i]) {
 			is_free = false;
@@ -335,7 +339,7 @@  static inline void free_handle(unsigned long handle)
 	}
 	read_unlock(&slots->lock);
 
-	if (is_free && test_and_clear_bit(HANDLES_ORPHANED, &slots->pool)) {
+	if (is_free) {
 		struct z3fold_pool *pool = slots_to_pool(slots);
 
 		kmem_cache_free(pool->c_handle, slots);
@@ -531,12 +535,12 @@  static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 			break;
 		}
 	}
+	if (!is_free)
+		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
 	read_unlock(&zhdr->slots->lock);
 
 	if (is_free)
 		kmem_cache_free(pool->c_handle, zhdr->slots);
-	else
-		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
 
 	if (locked)
 		z3fold_page_unlock(zhdr);