
[RFC,2/2] mm: add priority threshold to __purge_vmap_area_lazy()

Message ID: 20181019173538.590-3-urezki@gmail.com (mailing list archive)
State: New, archived
Series: improve vmalloc allocation

Commit Message

Uladzislau Rezki (Sony) Oct. 19, 2018, 5:35 p.m. UTC
commit 763b218ddfaf ("mm: add preempt points into
__purge_vmap_area_lazy()") introduced some preempt points,
one of which prioritizes an allocation over freeing.

Prioritizing an allocation over freeing does not work well
all the time; rather, it should be a compromise.

1) The number of lazy pages directly influences the busy list
length and thus operations like allocation, lookup, unmap,
remove, etc.

2) Under heavy simultaneous allocations/releases, memory usage
can grow too fast, hitting out_of_memory -> panic.

Establish a threshold past which the freeing path is prioritized
over allocation, creating a balance between the two.
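
As an illustration, here is a minimal userspace sketch of the
threshold idea; lazy_max_pages_demo() and the page counter are
hypothetical stand-ins for the kernel's lazy_max_pages() and
vmap_lazy_nr, not the kernel code itself:

    #include <stdbool.h>

    /* hypothetical stand-in for the kernel's lazy_max_pages() */
    static unsigned long lazy_max_pages_demo(void)
    {
            return 32UL * 1024;
    }

    /*
     * Offer to reschedule (letting allocators in) only while the
     * number of outstanding lazily freed pages is below twice the
     * lazy_max_pages() value; above that, keep freeing so the
     * purge path can catch up with allocations.
     */
    static bool should_offer_resched(unsigned long nr_lazy_pages)
    {
            unsigned long resched_threshold = lazy_max_pages_demo() << 1;

            return nr_lazy_pages < resched_threshold;
    }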

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a7f257540a05..bbafcff6632b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1124,23 +1124,23 @@  static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	struct llist_node *valist;
 	struct vmap_area *va;
 	struct vmap_area *n_va;
-	bool do_free = false;
+	int resched_threshold;
 
 	lockdep_assert_held(&vmap_purge_lock);
 
 	valist = llist_del_all(&vmap_purge_list);
+	if (unlikely(valist == NULL))
+		return false;
+
 	llist_for_each_entry(va, valist, purge_list) {
 		if (va->va_start < start)
 			start = va->va_start;
 		if (va->va_end > end)
 			end = va->va_end;
-		do_free = true;
 	}
 
-	if (!do_free)
-		return false;
-
 	flush_tlb_kernel_range(start, end);
+	resched_threshold = (int) lazy_max_pages() << 1;
 
 	spin_lock(&vmap_area_lock);
 	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
@@ -1148,7 +1148,9 @@  static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 
 		__free_vmap_area(va);
 		atomic_sub(nr, &vmap_lazy_nr);
-		cond_resched_lock(&vmap_area_lock);
+
+		if (atomic_read(&vmap_lazy_nr) < resched_threshold)
+			cond_resched_lock(&vmap_area_lock);
 	}
 	spin_unlock(&vmap_area_lock);
 	return true;