| Message ID | 20181130145549.5265-1-kernel@martin.sperl.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [V1] spi: core: wake kworker only when there is a message in the queue |
> On 30.11.2018, at 15:55, kernel@martin.sperl.org wrote:
>
> From: Martin Sperl <kernel@martin.sperl.org>
>
> Right now the worker thread running spi_pump_messages is always woken
> during spi_finalize_current_message.
> ...
> For spi_sync transfers in a tight loop (say 40k messages/s) this
> avoids the penalty of waking the worker thread 40k times/s.
> On a Raspberry Pi 3 with 4 cores this results in 32% of a single core
> spent only to find out that there is nothing in the queue and it can
> go back to sleep.

Note that this can also be observed with CAN controllers connected to
the SPI bus: under high CAN-bus load, 20-30k spi_sync messages/s are
processed, resulting in the kthread of spi0 consuming 30% CPU of a
single core.

> With this patch applied the spi-worker is not even woken once!

Further testing with mixed spi_sync and spi_async shows that spi_async
is no longer working properly.  This seems to be related to shutdown
code getting triggered in the next loop of __spi_pump_messages, which
is not called with this patch in place.

More investigation and probably a more invasive patch is needed to
solve this problem for real.

While reading the code it seems as if controller teardown (here
ctlr->unprepare_transfer_hardware and pm_runtime_*) is possibly called
way more often than necessary, i.e. whenever the queue is empty.
Maybe deferring this a few jiffies is all that is needed and would be
an improvement as well...

Martin
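For context, the tight spi_sync() loop described above looks roughly
like this from a driver's point of view (a minimal sketch; the
function name, register layout and buffer handling are made up for
illustration and are not taken from any driver in the thread):

#include <linux/slab.h>
#include <linux/spi/spi.h>

static int demo_read_reg(struct spi_device *spi, u8 reg, u8 *val)
{
	struct spi_transfer xfer = { };
	struct spi_message msg;
	u8 *buf;
	int ret;

	/* heap buffer so the transfer is DMA-safe */
	buf = kmalloc(2, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	buf[0] = reg;
	buf[1] = 0;

	spi_message_init(&msg);
	xfer.tx_buf = buf;
	xfer.rx_buf = buf;
	xfer.len = 2;
	spi_message_add_tail(&xfer, &msg);

	/*
	 * Each spi_sync() call ends in spi_finalize_current_message();
	 * before this patch that unconditionally woke the message pump,
	 * even though the caller is blocked here and the queue is empty.
	 */
	ret = spi_sync(spi, &msg);
	if (!ret)
		*val = buf[1];

	kfree(buf);
	return ret;
}

Calling a helper like this 20-40k times per second is what produces
the wake-up load described in the measurements above.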
On Sun, Dec 02, 2018 at 09:55:52AM +0100, kernel@martin.sperl.org wrote:
> > On 30.11.2018, at 15:55, kernel@martin.sperl.org wrote:

> > With this patch applied the spi-worker is not even woken once!

> Further testing with mixed spi_sync and spi_async shows that spi_async
> is no longer working properly.  This seems to be related to shutdown
> code getting triggered in the next loop of __spi_pump_messages, which
> is not called with this patch in place.

> More investigation and probably a more invasive patch is needed to
> solve this problem for real.

Yes...  I remember thinking when I put the unconditional kick in there
that there were holes that would need to be plugged before we could
skip the thread but I don't think I got as far as specifically
identifying them.  You're right that it's inefficient and we should be
able to get rid of it a lot of the time.

Ideally we'd be able to run the entire message pump in interrupt
context as the prior message completes when the device is busy but
sadly some hardware (including the main hardware I was using when I
did that work) has limitations which prevent this.

> While reading the code it seems as if controller teardown (here
> ctlr->unprepare_transfer_hardware and pm_runtime_*) is possibly
> called way more often than necessary, i.e. whenever the queue is
> empty.  Maybe deferring this a few jiffies is all that is needed and
> would be an improvement as well...

Yes, that's also a good idea - I did wonder about experimenting with a
delay there like we do with DAPM and with the autosuspend delay in
runtime PM but IIRC my test system took negligible time to idle the
hardware so it wasn't helping my use case.
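The runtime PM autosuspend mechanism referred to here works roughly as
follows in a driver (a generic sketch, not existing spi.c code; the
function names and the 100 ms delay are placeholders):

#include <linux/pm_runtime.h>

/* at probe time: only suspend the hardware after it has been idle
 * for the autosuspend delay, rather than immediately on last put */
static void demo_enable_autosuspend(struct device *dev)
{
	pm_runtime_set_autosuspend_delay(dev, 100);	/* placeholder: 100 ms */
	pm_runtime_use_autosuspend(dev);
	pm_runtime_enable(dev);
}

/* when a burst of work finishes: record the activity and drop the
 * usage count; the actual suspend only happens once the delay expires
 * with no further activity */
static void demo_work_finished(struct device *dev)
{
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
}

Under a steady stream of spi_sync() messages such a delay would keep
being pushed back, so the hardware would not be torn down between
messages.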
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 6ca59406b0b7..0f473d1b01f9 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1416,7 +1416,9 @@ void spi_finalize_current_message(struct spi_controller *ctlr)
 	spin_lock_irqsave(&ctlr->queue_lock, flags);
 	ctlr->cur_msg = NULL;
 	ctlr->cur_msg_prepared = false;
-	kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+	/* only wake message_pump if there is anything in the queue */
+	if (!list_empty(&ctlr->queue))
+		kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
 	spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 
 	trace_spi_message_done(mesg);
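One possible shape for the "defer the teardown a few jiffies" idea
raised in the discussion, using the kthread delayed-work API (a sketch
only; the idle_work member, spi_idle_teardown(),
spi_schedule_idle_teardown() and the 10 ms delay are hypothetical and
not part of the posted patch):

/*
 * Sketch only, not part of the posted patch: defer the queue-empty
 * teardown by a few jiffies instead of doing it immediately.
 */
#include <linux/jiffies.h>
#include <linux/kthread.h>

/* hypothetical new member of struct spi_controller:
 *	struct kthread_delayed_work idle_work;
 * initialised once alongside pump_messages with:
 *	kthread_init_delayed_work(&ctlr->idle_work, spi_idle_teardown);
 */

static void spi_idle_teardown(struct kthread_work *work)
{
	struct spi_controller *ctlr =
		container_of(work, struct spi_controller, idle_work.work);
	unsigned long flags;
	bool idle;

	spin_lock_irqsave(&ctlr->queue_lock, flags);
	idle = list_empty(&ctlr->queue) && !ctlr->cur_msg;
	spin_unlock_irqrestore(&ctlr->queue_lock, flags);

	/* only tear down if the queue is still empty after the delay;
	 * the pm_runtime_* put calls would move here as well */
	if (idle && ctlr->unprepare_transfer_hardware)
		ctlr->unprepare_transfer_hardware(ctlr);
}

/* called where the message pump currently tears down immediately */
static void spi_schedule_idle_teardown(struct spi_controller *ctlr)
{
	kthread_mod_delayed_work(&ctlr->kworker, &ctlr->idle_work,
				 msecs_to_jiffies(10));
}

Any newly queued message would still have to re-prepare the hardware
or cancel the pending idle work, which is part of why a more invasive
change is needed to solve this for real.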