[8/9] mm: memcontrol: shrink the list lru size

Message ID 20210428094949.43579-9-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series Shrink the list lru size on memory cgroup removal

Commit Message

Muchun Song April 28, 2021, 9:49 a.m. UTC
On one of our servers, we found a suspected memory leak: the kmalloc-32
slab cache consumes more than 6GB of memory, while every other
kmem_cache consumes less than 2GB.

Further analysis showed that the kmalloc-32 memory consumption comes
from list_lru_one allocations.

  crash> p memcg_nr_cache_ids
  memcg_nr_cache_ids = $2 = 24574

memcg_nr_cache_ids is very large, and the memory consumption of each
list_lru can be calculated with the following formula:

  num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)

Our system has 4 NUMA nodes, so each list_lru consumes ~3MB.
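
Plugging the numbers in:

  4 * 24574 * 32 bytes = 3,145,472 bytes ≈ 3 MB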

  crash> list super_blocks | wc -l
  952

Every mount registers 2 list lrus, one for the inodes and another for
the dentries. With 952 super_blocks, the total comes to 952 * 2 * 3 MB
(~5.6GB). Yet there are fewer than 500 memory cgroups on the machine.
The memcg_nr_cache_ids value implies that more than 12286 containers
have been deployed on this machine at some point (I do not know why
there were so many; it may be a user bug, or the user really wanted to
do that). Although fewer than 500 containers remain in the system,
memcg_nr_cache_ids has never been reduced to a suitable value, which
wastes a lot of memory. Currently the only way to reduce
memcg_nr_cache_ids is to reboot the server, and that is not what we
want. So this patch dynamically adjusts memcg_nr_cache_ids to keep
memory consumption healthy, so that we can restore a healthy
environment even after users have created tens of thousands of memory
cgroups.
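
For reference, the 12286 figure falls straight out of the old sizing
formula "size = 2 * (id + 1)":

  24574 = 2 * (12286 + 1)

so reaching memcg_nr_cache_ids = 24574 means cache id 12286 was handed
out at some point, i.e. at least 12287 kmem-enabled memory cgroups have
existed on this machine.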

In this patch, I also adjusted the sizing formula in
memcg_alloc_cache_id() from "size = 2 * (id + 1)" to "size = 2 * id".
Because the IDA always hands out the lowest free id, the first id that
no longer fits is exactly memcg_nr_cache_ids, so the new formula simply
doubles the arrays and keeps memcg_nr_cache_ids a power of two. This
matches the power-of-two sizes produced by nearest_fit_id() on the
shrink path and makes shrinking the list lru size simpler.
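
To see why this simplifies shrinking, below is a minimal userspace
sketch (not kernel code; MIN_SIZE stands in for MEMCG_CACHES_MIN_SIZE
and is assumed to be 4 purely for illustration) showing how the grow
and shrink paths agree on power-of-two sizes:

  /*
   * Userspace sketch, not part of this patch: how the cache-id array
   * size evolves under the new "size = 2 * id" growth rule and the
   * nearest_fit_id() shrink rule.
   */
  #include <stdio.h>

  #define MIN_SIZE 4

  /* Smallest power of two strictly greater than id; the kernel
   * version computes this as 1 << (__fls(id) + 1). */
  static int nearest_fit_id(int id)
  {
      if (id < MIN_SIZE)
          return MIN_SIZE;
      return 1 << (32 - __builtin_clz((unsigned int)id));
  }

  int main(void)
  {
      int size = MIN_SIZE;
      int id;

      /* Growth: the IDA hands out the lowest free id, so the first
       * id that does not fit is exactly the current size, and
       * "2 * id" doubles the arrays: 4 -> 8 -> 16 -> 32 -> ... */
      for (id = size; id <= 64; id = size) {
          size = 2 * id;
          printf("grow:   id %d -> size %d\n", id, size);
      }

      /* Shrink: if the surviving maximum id is 9, the arrays can
       * snap back to nearest_fit_id(9) == 16. */
      printf("shrink: max id 9 -> size %d\n", nearest_fit_id(9));
      return 0;
  }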

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 49 insertions(+), 3 deletions(-)

Patch

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1610d501e7b5..f8cdd87cf693 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -362,6 +362,8 @@  static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
 static DEFINE_IDA(memcg_cache_ida);
 int memcg_nr_cache_ids;
 
+static int kmemcg_max_id;
+
 /* Protects memcg_nr_cache_ids */
 static DECLARE_RWSEM(memcg_cache_ids_sem);
 
@@ -2856,8 +2858,11 @@  static int memcg_alloc_cache_id(void)
 	if (id < 0)
 		return id;
 
-	if (id < memcg_nr_cache_ids)
+	if (id < memcg_nr_cache_ids) {
+		if (id > kmemcg_max_id)
+			kmemcg_max_id = id;
 		return id;
+	}
 
 	/*
 	 * There's no space for the new id in memcg_caches arrays,
@@ -2865,15 +2870,17 @@  static int memcg_alloc_cache_id(void)
 	 */
 	down_write(&memcg_cache_ids_sem);
 
-	size = 2 * (id + 1);
+	size = 2 * id;
 	if (size < MEMCG_CACHES_MIN_SIZE)
 		size = MEMCG_CACHES_MIN_SIZE;
 	else if (size > MEMCG_CACHES_MAX_SIZE)
 		size = MEMCG_CACHES_MAX_SIZE;
 
 	err = memcg_update_all_list_lrus(size);
-	if (!err)
+	if (!err) {
 		memcg_nr_cache_ids = size;
+		kmemcg_max_id = id;
+	}
 
 	up_write(&memcg_cache_ids_sem);
 
@@ -2884,9 +2891,48 @@  static int memcg_alloc_cache_id(void)
 	return id;
 }
 
+static inline int nearest_fit_id(int id)
+{
+	if (unlikely(id < MEMCG_CACHES_MIN_SIZE))
+		return MEMCG_CACHES_MIN_SIZE;
+
+	return 1 << (__fls(id) + 1);
+}
+
+/*
+ * memcg_alloc_cache_id() and memcg_free_cache_id() are serialized by
+ * cgroup_mutex, so there is no race on kmemcg_max_id.
+ */
 static void memcg_free_cache_id(int id)
 {
 	ida_simple_remove(&memcg_cache_ida, id);
+
+	if (kmemcg_max_id == id) {
+		/*
+		 * To avoid @memcg_nr_cache_ids bouncing between
+		 * @memcg_nr_cache_ids / 2 and @memcg_nr_cache_ids, only
+		 * shrink the list lru size when @kmemcg_max_id drops
+		 * below @memcg_nr_cache_ids / 3.
+		 */
+		int size = memcg_nr_cache_ids / 3;
+
+		kmemcg_max_id = ida_max(&memcg_cache_ida);
+		if (kmemcg_max_id < size) {
+			/*
+			 * Find the smallest power of two greater than
+			 * @kmemcg_max_id and shrink the list lrus to
+			 * that size.
+			 */
+			size = nearest_fit_id(kmemcg_max_id);
+
+			down_write(&memcg_cache_ids_sem);
+			if (size != memcg_nr_cache_ids) {
+				memcg_update_all_list_lrus(size);
+				memcg_nr_cache_ids = size;
+			}
+			up_write(&memcg_cache_ids_sem);
+		}
+	}
 }
 
 /*