[v2,05/21] mm: list_lru: remove lru node locking from memcg_update_list_lru_node

Message ID 20210527062148.9361-6-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series Optimize list lru memory consumption

Commit Message

Muchun Song May 27, 2021, 6:21 a.m. UTC
Since commit e5bc3af7734f ("rcu: Consolidate PREEMPT and !PREEMPT
synchronize_rcu()"), a spinlock critical section also serves as an
RCU read-side critical section, so the locking in
memcg_update_list_lru_node() can be removed.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/list_lru.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
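
For background, a minimal sketch (not the kernel source; names follow
mm/list_lru.c) of why the updater no longer needs nlru->lock: after the
RCU flavor consolidation, both reader forms below hold off the grace
period that kvfree_rcu() waits for, so neither can see the old array
freed under it.

    /* Reader A: a classic RCU read-side critical section. */
    rcu_read_lock();
    memcg_lrus = rcu_dereference(nlru->memcg_lrus);
    /* ... use memcg_lrus ... */
    rcu_read_unlock();

    /*
     * Reader B: a spinlock critical section, which also acts as an
     * RCU read-side critical section since commit e5bc3af7734f.
     */
    spin_lock(&nlru->lock);
    memcg_lrus = rcu_dereference_check(nlru->memcg_lrus,
                                       lockdep_is_held(&nlru->lock));
    /* ... use memcg_lrus ... */
    spin_unlock(&nlru->lock);

    /*
     * Updater: publish the new array, then free the old one after a
     * grace period; excluding readers via nlru->lock is unnecessary.
     */
    rcu_assign_pointer(nlru->memcg_lrus, new);
    kvfree_rcu(old, rcu);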

Patch

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 4962d48d4410..e86d4d055d3c 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -403,18 +403,9 @@  static int memcg_update_list_lru_node(struct list_lru_node *nlru,
 
 	memcpy(&new->lru, &old->lru, old_size * sizeof(void *));
 
-	/*
-	 * The locking below allows readers that hold nlru->lock avoid taking
-	 * rcu_read_lock (see list_lru_from_memcg_idx).
-	 *
-	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
-	 * we have to use IRQ-safe primitives here to avoid deadlock.
-	 */
-	spin_lock_irq(&nlru->lock);
 	rcu_assign_pointer(nlru->memcg_lrus, new);
-	spin_unlock_irq(&nlru->lock);
-
 	kvfree_rcu(old, rcu);
+
 	return 0;
 }
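
The comment removed above pointed readers at list_lru_from_memcg_idx().
For reference, a sketch of that helper (paraphrased from mm/list_lru.c
of this series' era, so treat the details as approximate) showing how
rcu_dereference_check() documents that either nlru->lock or
rcu_read_lock() protects the array:

    static inline struct list_lru_one *
    list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
    {
            struct list_lru_memcg *memcg_lrus;

            /*
             * Either nlru->lock or RCU protects the array of per-cgroup
             * lists from relocation (see memcg_update_list_lru_node()).
             */
            memcg_lrus = rcu_dereference_check(nlru->memcg_lrus,
                                               lockdep_is_held(&nlru->lock));
            if (memcg_lrus && idx >= 0)
                    return memcg_lrus->lru[idx];
            return &nlru->lru;
    }

With the lock gone from the updater, this check remains the
documentation of the dual protection, which is why the patch can drop
the spin_lock_irq()/spin_unlock_irq() pair without changing reader
behavior.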