
[V1] spi: core: wake kworker only when there is a message in the queue

Message ID 20181130145549.5265-1-kernel@martin.sperl.org (mailing list archive)
State New, archived
Series [V1] spi: core: wake kworker only when there is a message in the queue

Commit Message

Martin Sperl Nov. 30, 2018, 2:55 p.m. UTC
From: Martin Sperl <kernel@martin.sperl.org>

Right now the worker thread running spi_pump_message is always woken
during spi_finalize_current_message.

When spi_sync is running alone, with no other spi devices connected to
the bus, the worker thread is still woken during spi_finalize_current_message.
This is completely unnecessary when there is no message queued.

This is less of a problem when the worker kthread is used to control
the spi bus, but even there it is unnecessary and wastes CPU cycles.

So wake the worker thread only when the queue is not empty.

For spi_sync transfers in a tight loop (say 40k messages/s) this
avoids the penalty of waking the worker thread 40k times/s.
On a Raspberry Pi 3 with 4 cores these wakeups consume about 32% of a
single core, only for the worker to find out that there is nothing in
the queue and go back to sleep.
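
For illustration, a minimal sketch of the kind of tight spi_sync loop
meant here - hypothetical client-driver code, not the actual benchmark:

	/* issue 'count' small synchronous transfers back to back */
	static void spi_sync_stress(struct spi_device *spi, unsigned int count)
	{
		u8 tx[4] = { 0 }, rx[4];
		struct spi_transfer xfer = {
			.tx_buf = tx,
			.rx_buf = rx,
			.len = sizeof(tx),
		};
		struct spi_message msg;
		unsigned int i;

		for (i = 0; i < count; i++) {
			spi_message_init_with_transfers(&msg, &xfer, 1);
			/* each call ends in spi_finalize_current_message() */
			spi_sync(spi, &msg);
		}
	}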

With this patch applied the spi worker is not woken even once!

I believe I have also seen situations where, during a spi_sync loop,
the worker thread (triggered by the last finished message) is slightly
faster and _wins_ the race to process the message, so we actually end
up running the kthread and letting it do some work...

This is also no longer observed with this patch applied.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>

---

More details:
The "running in threaded mode" while it is supposed to run immediately
can be seen by these temporarily exposed extra counters:
==> /sys/class/spi_master/spi0/statistics/pump_message <==
16250
==> /sys/class/spi_master/spi0/statistics/pump_message_already_running <==
8093
==> /sys/class/spi_master/spi0/statistics/pump_message_idling <==
0
==> /sys/class/spi_master/spi0/statistics/pump_message_in_kthread <==
8121

So out of the 16250 times that pump_message was called, 8121 calls
came from the kthread and 8093 found the message pump already running.
That means that a few times the kthread must have won the race while
the non-kthread caller lost, or may even have waited in
spin_lock_irqsave.
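
(Those counters came from a local debug patch that is not part of this
series; a rough guess at the shape of such instrumentation - using plain
atomic counters instead of the existing spi_statistics machinery, with
made-up names and placement - might be:)

	static atomic64_t stat_pump_message;
	static atomic64_t stat_pump_message_already_running;
	static atomic64_t stat_pump_message_idling;
	static atomic64_t stat_pump_message_in_kthread;

	/* bumped near the top of __spi_pump_messages(ctlr, in_kthread): */
	atomic64_inc(&stat_pump_message);
	if (in_kthread)
		atomic64_inc(&stat_pump_message_in_kthread);
	if (ctlr->idling)
		atomic64_inc(&stat_pump_message_idling);
	if (ctlr->cur_msg)
		atomic64_inc(&stat_pump_message_already_running);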

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
---
 drivers/spi/spi.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Martin Sperl Dec. 2, 2018, 8:55 a.m. UTC | #1
> On 30.11.2018, at 15:55, kernel@martin.sperl.org wrote:
> 
> From: Martin Sperl <kernel@martin.sperl.org>
> 
> Right now the worker thread running spi_pump_message is always woken
> during spi_finalize_current_message.
> 
...
> For spi_sync transfers in a tight loop (say 40k messages/s) this
> avoids the penalty of waking the worker thread 40k times/s.
> On a Raspberry Pi 3 with 4 cores these wakeups consume about 32% of a
> single core, only for the worker to find out that there is nothing in
> the queue and go back to sleep.
Note that this can also be observed with CAN controllers connected to
the spi bus: under high CAN-bus load 20-30k spi_sync messages/s are
processed, resulting in the kthread of spi0 consuming 30% CPU of a
single core.

> 
> With this patch applied the spi worker is not woken even once!
> 
Further testing with mixed spi_sync and spi_async shows that spi_async
is no longer working properly.

This seems to be related to shutdown code that normally gets triggered
in the next iteration of _spi_pump_message, which is no longer called
with this patch in place.

More investigation and probably a more invasive patch is needed
to solve this problem for real.

While reading the code it seems as if controller teardown - i.e.
ctlr->unprepare_transfer_hardware and the pm_runtime_* calls - is
possibly run far more often than necessary, namely whenever the queue
becomes empty.

Maybe deferring this by a few jiffies is all that is needed
and would be an improvement as well…
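
A rough sketch of that idea, assuming a kthread_delayed_work field
(called idle_teardown here, which does not exist in struct
spi_controller today) - names and the 10ms value are made up:

	static void spi_idle_teardown(struct kthread_work *work)
	{
		struct spi_controller *ctlr =
			container_of(work, struct spi_controller,
				     idle_teardown.work);
		unsigned long flags;

		spin_lock_irqsave(&ctlr->queue_lock, flags);
		/* only tear down if we are still idle after the delay */
		if (!list_empty(&ctlr->queue) || ctlr->busy) {
			spin_unlock_irqrestore(&ctlr->queue_lock, flags);
			return;
		}
		spin_unlock_irqrestore(&ctlr->queue_lock, flags);

		if (ctlr->unprepare_transfer_hardware)
			ctlr->unprepare_transfer_hardware(ctlr);
		if (ctlr->auto_runtime_pm) {
			pm_runtime_mark_last_busy(ctlr->dev.parent);
			pm_runtime_put_autosuspend(ctlr->dev.parent);
		}
	}

	/* instead of tearing down right away when the queue runs empty: */
	kthread_queue_delayed_work(&ctlr->kworker, &ctlr->idle_teardown,
				   msecs_to_jiffies(10));

New messages arriving within the delay could then cancel or push back
the pending teardown (kthread_mod_delayed_work exists for that).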

Martin
Mark Brown Dec. 6, 2018, 8:43 p.m. UTC | #2
On Sun, Dec 02, 2018 at 09:55:52AM +0100, kernel@martin.sperl.org wrote:
> > On 30.11.2018, at 15:55, kernel@martin.sperl.org wrote:

> > With this patch applied the spi worker is not woken even once!

> Further testing with mixed spi_sync and spi_async shows that spi_async
> is no longer working properly.

> This seems to be related to shutdown code that normally gets triggered
> in the next iteration of _spi_pump_message, which is no longer called
> with this patch in place.

> More investigation and probably a more invasive patch is needed
> to solve this problem for real.

Yes...  I remember thinking when I put the unconditional kick in there
that there were holes that would need to be plugged before we could skip
the thread, but I don't think I got as far as specifically identifying
them.  You're right that it's inefficient and we should be able to get
rid of it a lot of the time.  Ideally we'd be able to run the entire
message pump in interrupt context as the prior message completes
when the device is busy, but sadly some hardware (including the main
hardware I was using when I did that work) has limitations which prevent
this.

> While reading the code it seems as if controller teardown - i.e.
> ctlr->unprepare_transfer_hardware and the pm_runtime_* calls - is
> possibly run far more often than necessary, namely whenever the queue
> becomes empty.

> Maybe deferring this by a few jiffies is all that is needed
> and would be an improvement as well…

Yes, that's also a good idea - I did wonder about experimenting with a
delay there like we do with DAPM and with the autosuspend delay in
runtime PM, but IIRC my test system took negligible time to idle the
hardware so it wasn't helping my use case.
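
For context, the runtime-PM autosuspend delay mentioned above is the
generic mechanism where a device is only suspended once it has been idle
for a configurable time; a typical driver-side setup (not spi-core code,
and the 50ms value is just an example) looks roughly like this:

	/* during probe: */
	pm_runtime_set_autosuspend_delay(dev, 50); /* stay active 50ms after last use */
	pm_runtime_use_autosuspend(dev);
	pm_runtime_enable(dev);

	/* after each transfer, instead of suspending immediately: */
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);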

Patch

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 6ca59406b0b7..0f473d1b01f9 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1416,7 +1416,9 @@  void spi_finalize_current_message(struct spi_controller *ctlr)
 	spin_lock_irqsave(&ctlr->queue_lock, flags);
 	ctlr->cur_msg = NULL;
 	ctlr->cur_msg_prepared = false;
-	kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+	/* only wake message_pump if there is anything in the queue */
+	if (!list_empty(&ctlr->queue))
+		kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
 	spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 
 	trace_spi_message_done(mesg);