[RFC,v2,02/10] mm: Make shrink_slab() lockless

Message ID 4ceb948c-7ce7-0db3-17d8-82ef1e6e47cc@virtuozzo.com (mailing list archive)
State New, archived

Commit Message

Kirill Tkhai Aug. 8, 2018, 1:20 p.m. UTC
[Added two more places that need srcu_dereference(). All ->shrinker_map
 dereferences must be under SRCU, and this v2 adds the ones missed in v1]

The patch makes the shrinker list and shrinker_idr SRCU-safe
for readers. This requires synchronize_srcu() at the finalize
stage of unregistration, which waits till all parallel
shrink_slab() calls are finished.

Note that the patch removes the rwsem_is_contended() checks from
the code, and this does not result in delays during
registration, since there is no waiting at all. The unregistration
case may be optimized by splitting unregister_shrinker()
in two stages, and this is done in the next patches.

Also, keep in mind that in case SRCU is not allowed
to be made unconditional (which is done in the previous patch),
it is possible to use a percpu_rw_semaphore instead of it.
percpu_down_read() would be used in shrink_slab_memcg()
and in shrink_slab(), and the consecutive calls

        percpu_down_write(percpu_rwsem);
        percpu_up_write(percpu_rwsem);

would be used instead of synchronize_srcu().

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---

Comments

Michal Hocko Aug. 9, 2018, 7:14 a.m. UTC | #1
On Wed 08-08-18 16:20:54, Kirill Tkhai wrote:
> [Added two more places that need srcu_dereference(). All ->shrinker_map
>  dereferences must be under SRCU, and this v2 adds the ones missed in v1]
> 
> The patch makes the shrinker list and shrinker_idr SRCU-safe
> for readers. This requires synchronize_srcu() at the finalize
> stage of unregistration, which waits till all parallel
> shrink_slab() calls are finished.
> 
> Note that the patch removes the rwsem_is_contended() checks from
> the code, and this does not result in delays during
> registration, since there is no waiting at all. The unregistration
> case may be optimized by splitting unregister_shrinker()
> in two stages, and this is done in the next patches.
> 
> Also, keep in mind that in case SRCU is not allowed
> to be made unconditional (which is done in the previous patch),
> it is possible to use a percpu_rw_semaphore instead of it.
> percpu_down_read() would be used in shrink_slab_memcg()
> and in shrink_slab(), and the consecutive calls
> 
>         percpu_down_write(percpu_rwsem);
>         percpu_up_write(percpu_rwsem);
> 
> would be used instead of synchronize_srcu().

An obvious question. Why didn't you go that way? What are the pros/cons of
both approaches?
Kirill Tkhai Aug. 9, 2018, 9:21 a.m. UTC | #2
On 09.08.2018 10:14, Michal Hocko wrote:
> On Wed 08-08-18 16:20:54, Kirill Tkhai wrote:
>> [Added two more places that need srcu_dereference(). All ->shrinker_map
>>  dereferences must be under SRCU, and this v2 adds the ones missed in v1]
>>
>> The patch makes the shrinker list and shrinker_idr SRCU-safe
>> for readers. This requires synchronize_srcu() at the finalize
>> stage of unregistration, which waits till all parallel
>> shrink_slab() calls are finished.
>>
>> Note that the patch removes the rwsem_is_contended() checks from
>> the code, and this does not result in delays during
>> registration, since there is no waiting at all. The unregistration
>> case may be optimized by splitting unregister_shrinker()
>> in two stages, and this is done in the next patches.
>>
>> Also, keep in mind that in case SRCU is not allowed
>> to be made unconditional (which is done in the previous patch),
>> it is possible to use a percpu_rw_semaphore instead of it.
>> percpu_down_read() would be used in shrink_slab_memcg()
>> and in shrink_slab(), and the consecutive calls
>>
>>         percpu_down_write(percpu_rwsem);
>>         percpu_up_write(percpu_rwsem);
>>
>> would be used instead of synchronize_srcu().
> 
> An obvious question. Why didn't you go that way? What are the pros/cons of
> both approaches?

1) After a percpu_rw_semaphore is introduced, shrink_slab() will not be able
  to take percpu_down_read_trylock() successfully for longer stretches of
  time than with the current behavior:

  [cpu0]                                                               [cpu1]
  {un,}register_shrinker();                                            shrink_slab()
    percpu_down_write();                                                 percpu_down_read_trylock() -> fail
      synchronize_rcu(); -> in some periods very slow on big SMP       ...
                                                                       shrink_slab()
                                                                         percpu_down_read_trylock() -> fail

  Also, register_shrinker() and unregister_shrinker() will become slower for the same reason.
  Unlike unregister_shrinker(), register_shrinker() can't be made asynchronous/delayed, so
  even a simple mount() will perform worse.

  It's possible both of these can be solved by using a percpu_rw_semaphore together with an
  rw_semaphore. shrink_slab() may fall back to the rw_semaphore in case the
  percpu_rw_semaphore can't be taken:

  shrink_slab()
  {
        bool percpu = true;

        if (!percpu_down_read_trylock()) {
                if (!down_read_trylock())
                        return 0;
                percpu = false;
        }

        shrinker = idr_find();
        ...

        if (percpu)
                percpu_up_read();
        else
                up_read();
  }

  register_shrinker()
  {
        down_write();
        idr_alloc();
        up_write();
  }

  unregister_shrinker()
  {
        percpu_down_write();
        down_write();
        idr_remove();
        up_write();
        percpu_up_write();
  }

   But a) on a big machine this may turn into always falling back to down_read_trylock(),
          like we have now;
       b) I'm not sure unlocked idr_find() is safe in parallel with idr_alloc(); maybe
          something else is needed around it (I just haven't investigated this).

   All of the above are cons. The pro is not having to enable SRCU.

2) SRCU. Pros: none of the above problems; we will have a completely lockless and
  scalable shrink_slab(). We will also have the possibility to avoid unregistering
  delays, like I did for the superblock shrinker. There will be full scalability.
  The con is enabling SRCU.
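
For reference, the SRCU read/update discipline relied on above is roughly the
following (a minimal sketch, not the patch itself; reader()/updater() are
illustrative names):

  DEFINE_SRCU(shrinker_srcu);

  /* Read side, as in shrink_slab(): never blocks and never fails. */
  static void reader(void)
  {
        int idx = srcu_read_lock(&shrinker_srcu);

        /* ... walk shrinker_list/shrinker_idr via srcu_dereference() ... */
        srcu_read_unlock(&shrinker_srcu, idx);
  }

  /* Update side, as in unregister_shrinker(). */
  static void updater(void)
  {
        /* unpublish: list_del_rcu()/idr_remove() under shrinker_rwsem */
        synchronize_srcu(&shrinker_srcu);  /* wait out all in-flight readers */
        /* now the shrinker and its nr_deferred can be freed safely */
  }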

Kirill
Tetsuo Handa Aug. 9, 2018, 10:37 a.m. UTC | #3
On 2018/08/09 18:21, Kirill Tkhai wrote:
> 2) SRCU. Pros: none of the above problems; we will have a completely lockless and
>   scalable shrink_slab(). We will also have the possibility to avoid unregistering
>   delays, like I did for the superblock shrinker. There will be full scalability.
>   The con is enabling SRCU.
> 

How can unregistering delays be avoided? Since you traverse all shrinkers
using one shrinker_srcu, synchronize_srcu(&shrinker_srcu) will block
unregistering threads until the longest in-flight srcu_read_lock() user
calls srcu_read_unlock().

Unless you use a per-shrinker counter like below, I wonder whether
unregistering delays can be avoided...

  https://marc.info/?l=linux-mm&m=151060909613004
  https://marc.info/?l=linux-mm&m=151060909713005
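
Schematically, the counter approach in those links is (a rough reconstruction,
not the exact patches; shrinker_wq is an illustrative waitqueue):

  /* shrink_slab() side: mark the shrinker busy around the call */
  atomic_inc(&shrinker->nr_active);
  ret = do_shrink_slab(&sc, shrinker, priority);
  if (atomic_dec_and_test(&shrinker->nr_active))
        wake_up(&shrinker_wq);

  /* unregister_shrinker() side: wait only for *this* shrinker's users */
  wait_event(shrinker_wq, !atomic_read(&shrinker->nr_active));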
Kirill Tkhai Aug. 9, 2018, 10:58 a.m. UTC | #4
On 09.08.2018 13:37, Tetsuo Handa wrote:
> On 2018/08/09 18:21, Kirill Tkhai wrote:
>> 2) SRCU. Pros: none of the above problems; we will have a completely lockless and
>>   scalable shrink_slab(). We will also have the possibility to avoid unregistering
>>   delays, like I did for the superblock shrinker. There will be full scalability.
>>   The con is enabling SRCU.
>>
> 
> How can unregistering delays be avoided? Since you traverse all shrinkers
> using one shrinker_srcu, synchronize_srcu(&shrinker_srcu) will block
> unregistering threads until the longest in-flight srcu_read_lock() user
> calls srcu_read_unlock().

Yes, but we can do synchronize_srcu() from an async work item like I did in the
further patches. The only thing we need is to teach shrinker::count_objects()
and shrinker::scan_objects() to be safe to call on a shrinker that is being
unregistered. The next patches do this for the superblock shrinker.
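
Schematically, the split looks like this (a sketch only; the real splitting is
done in the later patches of this series, and unregister_work is an
illustrative field):

  static void shrinker_unregister_workfn(struct work_struct *work)
  {
        struct shrinker *shrinker =
                container_of(work, struct shrinker, unregister_work);

        synchronize_srcu(&shrinker_srcu);  /* grace period off the caller's path */
        kfree(shrinker->nr_deferred);
        shrinker->nr_deferred = NULL;
  }

  void unregister_shrinker_delayed(struct shrinker *shrinker)
  {
        down_write(&shrinker_rwsem);
        list_del_rcu(&shrinker->list);     /* unpublish immediately */
        up_write(&shrinker_rwsem);

        INIT_WORK(&shrinker->unregister_work, shrinker_unregister_workfn);
        schedule_work(&shrinker->unregister_work);
  }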

> Unless you use a per-shrinker counter like below, I wonder whether
> unregistering delays can be avoided...
> 
>   https://marc.info/?l=linux-mm&m=151060909613004
>   https://marc.info/?l=linux-mm&m=151060909713005

I'm afraid these atomic_{inc,dec}(&shrinker->nr_active) may regularly bounce
CPU cache lines between CPUs on some workloads. Also, synchronize_rcu() is a
heavy delay as well.
Kirill Tkhai Aug. 9, 2018, 11:23 a.m. UTC | #5
On 09.08.2018 10:14, Michal Hocko wrote:
> On Wed 08-08-18 16:20:54, Kirill Tkhai wrote:
>> [Added two more places that need srcu_dereference(). All ->shrinker_map
>>  dereferences must be under SRCU, and this v2 adds the ones missed in v1]
>>
>> The patch makes the shrinker list and shrinker_idr SRCU-safe
>> for readers. This requires synchronize_srcu() at the finalize
>> stage of unregistration, which waits till all parallel
>> shrink_slab() calls are finished.
>>
>> Note that the patch removes the rwsem_is_contended() checks from
>> the code, and this does not result in delays during
>> registration, since there is no waiting at all. The unregistration
>> case may be optimized by splitting unregister_shrinker()
>> in two stages, and this is done in the next patches.
>>
>> Also, keep in mind that in case SRCU is not allowed
>> to be made unconditional (which is done in the previous patch),
>> it is possible to use a percpu_rw_semaphore instead of it.
>> percpu_down_read() would be used in shrink_slab_memcg()
>> and in shrink_slab(), and the consecutive calls
>>
>>         percpu_down_write(percpu_rwsem);
>>         percpu_up_write(percpu_rwsem);
>>
>> would be used instead of synchronize_srcu().
> 
> An obvious question. Why didn't you go that way? What are the pros/cons of
> both approaches?

The percpu_rw_semaphore-based variant looks something like this:

commit d581d4ad7ecf
Author: Kirill Tkhai <ktkhai@virtuozzo.com>
Date:   Thu Aug 9 14:21:12 2018 +0300

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0ff97e860759..fe8693775e33 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -168,6 +168,7 @@ unsigned long vm_total_pages;
 
 static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
+DEFINE_STATIC_PERCPU_RWSEM(shrinker_percpu_rwsem);
 
 #ifdef CONFIG_MEMCG_KMEM
 
@@ -198,7 +199,10 @@ static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 		goto unlock;
 
 	if (id >= shrinker_nr_max) {
-		if (memcg_expand_shrinker_maps(id)) {
+		percpu_down_write(&shrinker_percpu_rwsem);
+		ret = memcg_expand_shrinker_maps(id);
+		percpu_up_write(&shrinker_percpu_rwsem);
+		if (ret) {
 			idr_remove(&shrinker_idr, id);
 			goto unlock;
 		}
@@ -406,7 +410,7 @@ void free_prealloced_shrinker(struct shrinker *shrinker)
 void register_shrinker_prepared(struct shrinker *shrinker)
 {
 	down_write(&shrinker_rwsem);
-	list_add_tail(&shrinker->list, &shrinker_list);
+	list_add_tail_rcu(&shrinker->list, &shrinker_list);
 #ifdef CONFIG_MEMCG_KMEM
 	idr_replace(&shrinker_idr, shrinker, shrinker->id);
 #endif
@@ -434,8 +438,14 @@ void unregister_shrinker(struct shrinker *shrinker)
 	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
 		unregister_memcg_shrinker(shrinker);
 	down_write(&shrinker_rwsem);
-	list_del(&shrinker->list);
+	list_del_rcu(&shrinker->list);
 	up_write(&shrinker_rwsem);
+
+	synchronize_rcu();
+
+	percpu_down_write(&shrinker_percpu_rwsem);
+	percpu_up_write(&shrinker_percpu_rwsem);
+
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
 }
@@ -574,11 +584,11 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 	if (!memcg_kmem_enabled() || !mem_cgroup_online(memcg))
 		return 0;
 
-	if (!down_read_trylock(&shrinker_rwsem))
+	if (!percpu_down_read_trylock(&shrinker_percpu_rwsem))
 		return 0;
 
 	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
-					true);
+					true /* shrinker_percpu_rwsem */);
 	if (unlikely(!map))
 		goto unlock;
 
@@ -590,7 +600,22 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 		};
 		struct shrinker *shrinker;
 
+		/*
+		 * See shutdown sequence in unregister_shrinker().
+		 * RCU allows us to iterate IDR locklessly (this
+		 * is the way to synchronize with IDR changing by
+		 * idr_alloc()).
+		 *
+		 * If we see a shrinker pointer under RCU, this means
+		 * synchronize_rcu() in unregister_shrinker() has not
+		 * finished yet. Then, when we unlock RCU, synchronize_rcu()
+		 * can complete, but unregister_shrinker() can't proceed
+		 * before we release shrinker_percpu_rwsem.
+		 */
+		rcu_read_lock();
 		shrinker = idr_find(&shrinker_idr, i);
+		rcu_read_unlock();
+
 		if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING)) {
 			if (!shrinker)
 				clear_bit(i, map->map);
@@ -624,13 +649,13 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 		}
 		freed += ret;
 
-		if (rwsem_is_contended(&shrinker_rwsem)) {
+		if (!rcu_sync_is_idle(&shrinker_percpu_rwsem.rss)) {
 			freed = freed ? : 1;
 			break;
 		}
 	}
 unlock:
-	up_read(&shrinker_rwsem);
+	percpu_up_read(&shrinker_percpu_rwsem);
 	return freed;
 }
 #else /* CONFIG_MEMCG_KMEM */
@@ -672,15 +697,17 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 	if (!mem_cgroup_is_root(memcg))
 		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
 
-	if (!down_read_trylock(&shrinker_rwsem))
+	if (!percpu_down_read_trylock(&shrinker_percpu_rwsem))
 		goto out;
 
-	list_for_each_entry(shrinker, &shrinker_list, list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
 		struct shrink_control sc = {
 			.gfp_mask = gfp_mask,
 			.nid = nid,
 			.memcg = memcg,
 		};
+		rcu_read_unlock();
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
 		if (ret == SHRINK_EMPTY)
@@ -691,13 +718,16 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 		 * prevent the regsitration from being stalled for long periods
 		 * by parallel ongoing shrinking.
 		 */
-		if (rwsem_is_contended(&shrinker_rwsem)) {
+		if (!rcu_sync_is_idle(&shrinker_percpu_rwsem.rss)) {
 			freed = freed ? : 1;
 			break;
 		}
+
+		rcu_read_lock();
 	}
+	rcu_read_unlock();
 
-	up_read(&shrinker_rwsem);
+	percpu_up_read(&shrinker_percpu_rwsem);
 out:
 	cond_resched();
 	return freed;

Patch

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 9443cafd1969..94b44662f430 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -82,6 +82,8 @@  struct shrinker {
 #define SHRINKER_NUMA_AWARE	(1 << 0)
 #define SHRINKER_MEMCG_AWARE	(1 << 1)
 
+extern struct srcu_struct shrinker_srcu;
+
 extern int prealloc_shrinker(struct shrinker *shrinker);
 extern void register_shrinker_prepared(struct shrinker *shrinker);
 extern int register_shrinker(struct shrinker *shrinker);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4e3c1315b1de..ed40eb4b8300 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -332,8 +332,9 @@  static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 	lockdep_assert_held(&memcg_shrinker_map_mutex);
 
 	for_each_node(nid) {
-		old = rcu_dereference_protected(
-			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
+		old = srcu_dereference_check(
+			mem_cgroup_nodeinfo(memcg, nid)->shrinker_map,
+			&shrinker_srcu, true);
 		/* Not yet online memcg */
 		if (!old)
 			return 0;
@@ -347,7 +348,7 @@  static int memcg_expand_one_shrinker_map(struct mem_cgroup *memcg,
 		memset((void *)new->map + old_size, 0, size - old_size);
 
 		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, new);
-		call_rcu(&old->rcu, memcg_free_shrinker_map_rcu);
+		call_srcu(&shrinker_srcu, &old->rcu, memcg_free_shrinker_map_rcu);
 	}
 
 	return 0;
@@ -364,7 +365,8 @@  static void memcg_free_shrinker_maps(struct mem_cgroup *memcg)
 
 	for_each_node(nid) {
 		pn = mem_cgroup_nodeinfo(memcg, nid);
-		map = rcu_dereference_protected(pn->shrinker_map, true);
+		map = srcu_dereference_check(pn->shrinker_map,
+				&shrinker_srcu, true);
 		if (map)
 			kvfree(map);
 		rcu_assign_pointer(pn->shrinker_map, NULL);
@@ -427,13 +429,15 @@  void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 {
 	if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
 		struct memcg_shrinker_map *map;
+		int srcu_id;
 
-		rcu_read_lock();
-		map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+		srcu_id = srcu_read_lock(&shrinker_srcu);
+		map = srcu_dereference(memcg->nodeinfo[nid]->shrinker_map,
+				       &shrinker_srcu);
 		/* Pairs with smp mb in shrink_slab() */
 		smp_mb__before_atomic();
 		set_bit(shrinker_id, map->map);
-		rcu_read_unlock();
+		srcu_read_unlock(&shrinker_srcu, srcu_id);
 	}
 }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index da135e1acd94..acb087f3ac35 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -168,6 +168,7 @@  unsigned long vm_total_pages;
 
 static LIST_HEAD(shrinker_list);
 static DECLARE_RWSEM(shrinker_rwsem);
+DEFINE_SRCU(shrinker_srcu);
 
 #ifdef CONFIG_MEMCG_KMEM
 
@@ -192,7 +193,6 @@  static int prealloc_memcg_shrinker(struct shrinker *shrinker)
 	int id, ret = -ENOMEM;
 
 	down_write(&shrinker_rwsem);
-	/* This may call shrinker, so it must use down_read_trylock() */
 	id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);
 	if (id < 0)
 		goto unlock;
@@ -406,7 +406,7 @@  void free_prealloced_shrinker(struct shrinker *shrinker)
 void register_shrinker_prepared(struct shrinker *shrinker)
 {
 	down_write(&shrinker_rwsem);
-	list_add_tail(&shrinker->list, &shrinker_list);
+	list_add_tail_rcu(&shrinker->list, &shrinker_list);
 	idr_replace(&shrinker_idr, shrinker, shrinker->id);
 	up_write(&shrinker_rwsem);
 }
@@ -432,8 +432,11 @@  void unregister_shrinker(struct shrinker *shrinker)
 	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
 		unregister_memcg_shrinker(shrinker);
 	down_write(&shrinker_rwsem);
-	list_del(&shrinker->list);
+	list_del_rcu(&shrinker->list);
 	up_write(&shrinker_rwsem);
+
+	synchronize_srcu(&shrinker_srcu);
+
 	kfree(shrinker->nr_deferred);
 	shrinker->nr_deferred = NULL;
 }
@@ -567,16 +570,14 @@  static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 {
 	struct memcg_shrinker_map *map;
 	unsigned long freed = 0;
-	int ret, i;
+	int ret, i, srcu_id;
 
 	if (!memcg_kmem_enabled() || !mem_cgroup_online(memcg))
 		return 0;
 
-	if (!down_read_trylock(&shrinker_rwsem))
-		return 0;
-
-	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
-					true);
+	srcu_id = srcu_read_lock(&shrinker_srcu);
+	map = srcu_dereference(memcg->nodeinfo[nid]->shrinker_map,
+			       &shrinker_srcu);
 	if (unlikely(!map))
 		goto unlock;
 
@@ -621,14 +622,9 @@  static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 				memcg_set_shrinker_bit(memcg, nid, i);
 		}
 		freed += ret;
-
-		if (rwsem_is_contended(&shrinker_rwsem)) {
-			freed = freed ? : 1;
-			break;
-		}
 	}
 unlock:
-	up_read(&shrinker_rwsem);
+	srcu_read_unlock(&shrinker_srcu, srcu_id);
 	return freed;
 }
 #else /* CONFIG_MEMCG_KMEM */
@@ -665,15 +661,13 @@  static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 {
 	struct shrinker *shrinker;
 	unsigned long freed = 0;
-	int ret;
+	int srcu_id, ret;
 
 	if (!mem_cgroup_is_root(memcg))
 		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
 
-	if (!down_read_trylock(&shrinker_rwsem))
-		goto out;
-
-	list_for_each_entry(shrinker, &shrinker_list, list) {
+	srcu_id = srcu_read_lock(&shrinker_srcu);
+	list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
 		struct shrink_control sc = {
 			.gfp_mask = gfp_mask,
 			.nid = nid,
@@ -684,19 +678,9 @@  static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 		if (ret == SHRINK_EMPTY)
 			ret = 0;
 		freed += ret;
-		/*
-		 * Bail out if someone want to register a new shrinker to
-		 * prevent the regsitration from being stalled for long periods
-		 * by parallel ongoing shrinking.
-		 */
-		if (rwsem_is_contended(&shrinker_rwsem)) {
-			freed = freed ? : 1;
-			break;
-		}
 	}
+	srcu_read_unlock(&shrinker_srcu, srcu_id);
 
-	up_read(&shrinker_rwsem);
-out:
 	cond_resched();
 	return freed;
 }