
[2/2] mm/swap: Access struct pagevec remotely

Message ID 20180914145924.22055-3-bigeasy@linutronix.de (mailing list archive)
State New, archived
Series mm/swap: Add locking for pagevec

Commit Message

Sebastian Andrzej Siewior Sept. 14, 2018, 2:59 p.m. UTC
From: Thomas Gleixner <tglx@linutronix.de>

Now that struct pagevec is locked during access, it can safely be
accessed from a remote CPU. The advantage is that the drain work can be
done directly by the "requesting" CPU instead of scheduling a worker on
each remote CPU and waiting for it to complete.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: +commit message]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/swap.c | 37 +------------------------------------
 1 file changed, 1 insertion(+), 36 deletions(-)
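
For context, the .pvec member accesses in the diff below, e.g.
per_cpu(lru_add_pvec.pvec, cpu), imply that patch 1/2 wrapped each
per-CPU pagevec in a container pairing it with a lock. A minimal sketch
of what such a container could look like; only the .pvec member name is
confirmed by this diff, the struct name and the lock field (assumed here
to be a spinlock) are illustrative:

#include <linux/pagevec.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/*
 * Hypothetical container from patch 1/2: each per-CPU pagevec is
 * paired with a lock so a remote CPU can drain it safely. Only the
 * .pvec member name is confirmed by the diff below.
 */
struct swap_pagevec {
	spinlock_t	lock;
	struct pagevec	pvec;
};

static DEFINE_PER_CPU(struct swap_pagevec, lru_add_pvec) = {
	.lock = __SPIN_LOCK_UNLOCKED(lru_add_pvec.lock),
};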

Comments

Andrew Morton Nov. 9, 2018, 11:06 p.m. UTC | #1
On Fri, 14 Sep 2018 16:59:24 +0200 Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> Now that struct pagevec is locked during access, it can safely be
> accessed from a remote CPU. The advantage is that the drain work can
> be done directly by the "requesting" CPU instead of scheduling a
> worker on each remote CPU and waiting for it to complete.

Well, removing a deferred work thingy is always welcome.  But I'm not
sure this was the overall aim of the patchset.  In fact I'm somewhat
unclear on what the overall aim is.  Does it have some relevance to -RT
kernels?

Anyway, please see if you can clarify the high-level intent, refresh,
retest and resend?

Patch

diff --git a/mm/swap.c b/mm/swap.c
index 17702ee5bf81c..ec36e733aab5d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -733,54 +733,19 @@  void lru_add_drain(void)
 	lru_add_drain_cpu(raw_smp_processor_id());
 }
 
-static void lru_add_drain_per_cpu(struct work_struct *dummy)
-{
-	lru_add_drain();
-}
-
-static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
-
-/*
- * Doesn't need any cpu hotplug locking because we do rely on per-cpu
- * kworkers being shut down before our page_alloc_cpu_dead callback is
- * executed on the offlined cpu.
- * Calling this function with cpu hotplug locks held can actually lead
- * to obscure indirect dependencies via WQ context.
- */
 void lru_add_drain_all(void)
 {
-	static DEFINE_MUTEX(lock);
-	static struct cpumask has_work;
 	int cpu;
 
-	/*
-	 * Make sure nobody triggers this path before mm_percpu_wq is fully
-	 * initialized.
-	 */
-	if (WARN_ON(!mm_percpu_wq))
-		return;
-
-	mutex_lock(&lock);
-	cpumask_clear(&has_work);
-
 	for_each_online_cpu(cpu) {
-		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
-
 		if (pagevec_count(&per_cpu(lru_add_pvec.pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_rotate_pvecs.pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs.pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_lazyfree_pvecs.pvec, cpu)) ||
 		    need_activate_page_drain(cpu)) {
-			INIT_WORK(work, lru_add_drain_per_cpu);
-			queue_work_on(cpu, mm_percpu_wq, work);
-			cpumask_set_cpu(cpu, &has_work);
+			lru_add_drain_cpu(cpu);
 		}
 	}
-
-	for_each_cpu(cpu, &has_work)
-		flush_work(&per_cpu(lru_add_drain_work, cpu));
-
-	mutex_unlock(&lock);
 }
 
 /**
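
With the pagevec lock in place, each drain step can acquire the remote
CPU's lock instead of requiring CPU-local execution via a worker. A
rough sketch of the pattern, assuming the hypothetical swap_pagevec
container above; the locking discipline and all names other than
per_cpu(), pagevec_count() and __pagevec_lru_add() are illustrative,
not the patch's actual code:

/*
 * Illustrative only: how one drain step could run safely against a
 * remote CPU's pagevec once it is protected by a lock. The real
 * locking scheme lives in patch 1/2.
 */
static void drain_lru_add_pvec(int cpu)
{
	struct swap_pagevec *swpvec = &per_cpu(lru_add_pvec, cpu);
	unsigned long flags;

	/* Serializes against the remote CPU adding pages to its pagevec. */
	spin_lock_irqsave(&swpvec->lock, flags);
	if (pagevec_count(&swpvec->pvec))
		__pagevec_lru_add(&swpvec->pvec);
	spin_unlock_irqrestore(&swpvec->lock, flags);
}

The benefit over the removed worker scheme is visible in the diff: the
requesting CPU does all the work itself, with no queue_work_on() /
flush_work() round trip per online CPU.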