
[3/3] spi: bcm2835: add module parameter to configure minimum length for dma

Message ID 20190224162311.23899-3-kernel@martin.sperl.org (mailing list archive)
State New, archived
Series [1/3] spi: bcm2835: add driver stats to debugfs

Commit Message

Martin Sperl Feb. 24, 2019, 4:23 p.m. UTC
From: Martin Sperl <kernel@martin.sperl.org>

Allow setting the transfer length at which DMA is used via a module
parameter.

Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
---
 drivers/spi/spi-bcm2835.c | 40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

--
2.11.0

Comments

Stefan Wahren Feb. 24, 2019, 7:10 p.m. UTC | #1
Hi Martin,

> kernel@martin.sperl.org wrote on 24 February 2019 at 17:23:
> 
> 
> From: Martin Sperl <kernel@martin.sperl.org>
> 
> Allow setting the transfer length at which DMA is used via a module
> parameter.

please provide the motivation for this change.

Regards
Stefan
Martin Sperl March 24, 2019, 8:52 a.m. UTC | #2
> On 24.02.2019, at 20:10, Stefan Wahren <stefan.wahren@i2se.com> wrote:
> 
> Hi Martin,
> 
>> kernel@martin.sperl.org wrote on 24 February 2019 at 17:23:
>> 
>> 
>> From: Martin Sperl <kernel@martin.sperl.org>
>> 
>> Allow setting the transfer length at which DMA is used via a module
>> parameter.
> 
> please provide the motivation for this change.
As we provide control over the selection of polling vs. interrupt mode,
we should - for consistency - also provide control over the selection
of DMA mode.

DMA mapping is quite expensive, and at higher SPI clock speeds it
may be more economical CPU-wise to run in polling mode instead of
DMA mode.

Also, DMA mode has one specific difference from polling mode:
there is no idle clock cycle between transferred bytes.

This may have a negative impact when transferring lots of bytes at
the fastest possible clock speed to MCUs without SPI buffers, which
benefit from a gap after each byte.
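
(For such devices one could then simply raise dma_min_bytes_limit -
or set it very high to effectively disable DMA - presumably via
/sys/module/spi_bcm2835/parameters/dma_min_bytes_limit at runtime,
or as a module load parameter.)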

Martin
Martin Sperl March 24, 2019, 8:58 a.m. UTC | #3
> On 22.03.2019, at 13:36, Lukas Wunner <lukas@wunner.de> wrote:
> 
> On Sun, Feb 24, 2019 at 04:23:11PM +0000, kernel@martin.sperl.org wrote:
>> +/* define dma min number of bytes to use in dma mode with value validation */
>> +static int dma_min_bytes_limit_set(const char *val,
>> +				   const struct kernel_param *kp)
>> +{
>> +	unsigned int v;
>> +
>> +	if (kstrtouint(val, 10, &v))
>> +		return -EINVAL;
>> +	/* value needs to be a multiple of 4 */
>> +	if (v % 4) {
>> +		pr_err("dma_min_bytes_limit needs to be a multiple of 4\n");
>> +		return -EINVAL;
>> +	}
> 
> Transfers don't need to be a multiple of 4 to be eligible for DMA,
> so this check can be dropped.

I definitely did not want to write a custom module argument parser,
but if I remember correctly there is one limitation on the transmission
path where you hit some inefficiencies in the DMA code when running
transfers that are not a multiple of 4 - especially short transfers.

That is why the check was implemented.

But I can leave this precaution out.
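
I.e. with the multiple-of-4 check dropped, the setter would reduce
to something like the following (untested sketch; it only keeps a
lower bound so the 6-byte-write/3-byte-read DMA prologue is always
covered):

static int dma_min_bytes_limit_set(const char *val,
				   const struct kernel_param *kp)
{
	unsigned int v;

	if (kstrtouint(val, 10, &v))
		return -EINVAL;
	/* the DMA prologue writes at most 6 bytes and reads at most 3,
	 * so the transfer needs to be guaranteed to cover that
	 */
	if (v < 6) {
		pr_err("dma_min_bytes_limit needs to be at least 6\n");
		return -EINVAL;
	}

	return param_set_uint(val, kp);
}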

Martin
Martin Sperl March 24, 2019, 11:23 a.m. UTC | #4
Hi Lukas!

> On 24.03.2019, at 11:15, Lukas Wunner <lukas@wunner.de> wrote:
> 
> On Sun, Mar 24, 2019 at 09:52:15AM +0100, kernel@martin.sperl.org wrote:
>>> On 24.02.2019, at 20:10, Stefan Wahren <stefan.wahren@i2se.com> wrote:
>>>> kernel@martin.sperl.org wrote on 24 February 2019 at 17:23:
>>>> Allow setting the transfer length at which DMA is used via a module
>>>> parameter.
>>> 
>>> please provide the motivation for this change.
>> 
>> As we provide control over the selection of polling vs. interrupt mode,
>> we should - for consistency - also provide control over the selection
>> of DMA mode.
>> 
>> DMA mapping is quite expensive, and at higher SPI clock speeds it
>> may be more economical CPU-wise to run in polling mode instead of
>> DMA mode.
> 
> The problem is that making the DMA minimum length configurable
> by itself impacts performance because a memory read is necessary
> to retrieve the limit, instead of a hardcoded immediate in the
> machine code.  Ultimately this feature is only of interest to
> developers optimizing the code, not really to end users.

The host path is IMO not so hot that one additional memory read is
that expensive. For all practical purposes - at least in DMA or
IRQ-driven mode - your “hot” code path will (typically) no longer
be in cache by the time the code executes again, as other
processes/threads have probably been scheduled on the CPU and
the cache lines for the code will be gone anyway.

> 
>> Also, DMA mode has one specific difference from polling mode:
>> there is no idle clock cycle between transferred bytes.
> 
> Seriously?  If that's true it should be documented in the driver.
> That seems like a major advantage of DMA mode.

It is true - and it is also true that it is not documented explicitly
(I believe it was mentioned in a cover letter).

But, as said, it is NOT always an advantage.

Also, I find it surprising that you have not looked at SPI traces
on a logic analyzer, where this is immediately visible.

> 
>> This may have a negative impact when transferring lots of bytes at
>> the fastest possible clock speed to MCUs without SPI buffers, which
>> benefit from a gap after each byte.
> 
> Hm, wouldn't a slower SPI clock speed achieve the same?

Yes, it would, but then you would need to make the SPI clock cycle
possibly 3-4 times as long to communicate with an ATmega MCU in slave
mode, which essentially wastes possible transfer rate.

> 
> As a general remark, the interrupt mode is currently suboptimal
> because when the TX FIFO becomes empty, there's a latency until
> it is filled again.  Instead, we should try to keep it non-empty
> at all times.  This can be achieved with the RXR interrupt:
> It signals that >= 48 bytes are in the RX FIFO, so in theory if
> we receive that interrupt, we could write 48 bytes to the TX FIFO.
> 
> The problem is, this clashes with your algorithm which tries to
> stuff as many bytes as possible in the TX FIFO.  Only if we give
> that FIFO stuffing algorithm up do we know for sure that 48 bytes
> are free in the TX FIFO.
> 
> Also, both poll mode and interrupt mode could be sped up by
> switching to pseudo-DMA mode, as I've done in 3bd7f6589f67,
> i.e. switch to DMA mode but access the chip with programmed I/O.
> That way, the number of MMIO accesses would be reduced by a
> factor of 4.  So if the TX FIFO is empty, perform 16 writes
> to fill it.  Write another 12 dwords once RXR is signaled.
> Read 16 dwords upon RXF or 12 dwords upon RXR.
How would you really speed up poll mode? It is polling and
consuming CPU cycles anyway!

> 
> This would make the time spent in the IRQ handler super short,
> but at the expense of receiving more interrupts.

In my experience minimizing interrupts should be the main goal,
because they add long “stalls” at high SPI clock speeds.
I do not have the exact numbers now, but there is a latency that
typically produces gaps on the order of 2-10us (depending on
clock frequencies).

So my original modifications of the driver were focused on
avoiding those in the first place.
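
For comparison, the RXR-driven refill you describe might look roughly
like the sketch below (untested; bcm2835_rd_fifo_count() and
bcm2835_wr_fifo_count() are assumed count-limited FIFO helpers here,
the rest uses the existing driver names). It keeps the handler short,
but at the cost of more interrupts per transfer:

static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id)
{
	struct spi_master *master = dev_id;
	struct bcm2835_spi *bs = spi_master_get_devdata(master);
	u32 cs = bcm2835_rd(bs, BCM2835_SPI_CS);

	if (cs & BCM2835_SPI_CS_RXR) {
		/* >= 48 bytes are in the RX FIFO, so at least 48 bytes
		 * of TX FIFO space are known to be free: move one
		 * 3/4-FIFO chunk in each direction
		 */
		bcm2835_rd_fifo_count(bs, BCM2835_SPI_FIFO_SIZE_3_4);
		bcm2835_wr_fifo_count(bs, BCM2835_SPI_FIFO_SIZE_3_4);
	} else {
		/* DONE: drain what is left and top up the TX FIFO */
		bcm2835_rd_fifo(bs);
		bcm2835_wr_fifo(bs);
	}

	if (!bs->rx_len) {
		/* transfer complete - reset HW and wake the framework */
		bcm2835_spi_reset_hw(master);
		complete(&master->xfer_completion);
	}

	return IRQ_HANDLED;
}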

Also, on single-core machines (like the original RPi or RPi Zero)
every interrupt means interrupting normal processes for quite
some time.

So this should be avoided in the first place.

And that is where the actual limit of 96 bytes for DMA came from!
At 96 (= 2 * 48) bytes we would run 2 interrupts in interrupt mode
and also 2 interrupts in DMA mode (one for RX, one for TX).

> 
> Poll mode could function the same and precalculate the time it
> takes for the TX FIFO to empty or the RX FIFO to become filled,
> and usleep_range() as long to yield the CPU to other tasks.
> Again, this means more wakeups for the thread.  I'm not sure
> which one is the lesser evil but your FIFO stuffing algorithm
> forces us to leave optimization potential on the table and that
> bothers me.

Poll mode deliberately spends CPU time to be as efficient as possible
from the throughput perspective, by avoiding unnecessary interrupts
and thread-wakeup latencies, which are not guaranteed to happen
as fast as possible.

Right now the implementation spends a bit more time filling
the FIFO, but if that were faster, you would still be reading
the status register in a tight loop, which does not make a huge
difference on the internal bus - it is a transfer either way.

This is especially efficient when running the SPI pump “inline”
in spi_sync - no wakeups, no interrupts, no context switches…

That is the only way to get SPI transfer latencies down to ~6us
between 2 transfers (that is, the time between stopping the clock
of the first transfer and starting the clock of the second transfer).

Otherwise you can get long latencies on the order of 1ms, which
is prohibitively long and just wastes resources on scheduling,
context switches, …

At one point I was wondering whether a “shared” polling infrastructure
(where multiple busses could poll in a single shared thread) would
help.

But that led to the realization that for this to work we would
need to schedule at least 2 thread wakeups: one for the “shared”
polling thread (even if filling the FIFO could get done before
scheduling the wakeup) and another one to wake spi_sync again.

Assuming that the polling thread is already running, the first
wakeup would not really be needed. But this would still leave us
with one thread wakeup.

As a general principle, there is other low-hanging fruit that
would improve CPU efficiency much more!

E.g.: unnecessary wakeups of the SPI pump thread when running
in sync mode, where we consume about 30% of a single CPU just
for these wakeups when running 22k SPI messages/second.
In total this resulted in 130% CPU utilization across 2 threads,
plus sometimes async scheduling...

If you want to give optimizing things a try (e.g. pseudo-DMA
mode), then please go ahead and post patches, and I will give
them a try in setups that trigger those code paths.

Finally: maybe these “use DMA/polling” decision settings
should be handled by the SPI core instead, so that they can
be set in the device tree per SPI device and not as a
general bus policy.

But I guess we have now moved from discussing a specific
patch to discussing “visions” for individual drivers
and the SPI core.

Martin

Patch

diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
index dcf922ca2603..ef4b3468a97d 100644
--- a/drivers/spi/spi-bcm2835.c
+++ b/drivers/spi/spi-bcm2835.c
@@ -31,6 +31,7 @@ 
 #include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/moduleparam.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_device.h>
@@ -44,6 +45,39 @@  module_param(polling_limit_us, uint, 0664);
 MODULE_PARM_DESC(polling_limit_us,
 		 "time in us to run a transfer in polling mode\n");

+/* define dma min number of bytes to use in dma mode with value validation */
+static int dma_min_bytes_limit_set(const char *val,
+				   const struct kernel_param *kp)
+{
+	unsigned int v;
+
+	if (kstrtouint(val, 10, &v))
+		return -EINVAL;
+	/* value needs to be a multiple of 4 */
+	if (v % 4) {
+		pr_err("dma_min_bytes_limit needs to be a multiple of 4\n");
+		return -EINVAL;
+	}
+	/* value needs to be at least 6 - so actually 8 - rationale below */
+	if (v < 6) {
+		pr_err("dma_min_bytes_limit needs to be at least 8\n");
+		return -EINVAL;
+	}
+
+	return param_set_uint(val, kp);
+}
+
+static const struct kernel_param_ops dma_min_bytes_limit_ops = {
+	.set	= dma_min_bytes_limit_set,
+	.get	= param_get_int,
+};
+
+unsigned int dma_min_bytes_limit = 96;
+module_param_cb(dma_min_bytes_limit, &dma_min_bytes_limit_ops,
+		&dma_min_bytes_limit, 0664);
+MODULE_PARM_DESC(dma_min_bytes_limit,
+		 "minimum number of bytes to run a transfer in dma mode\n");
+
 /* SPI register offsets */
 #define BCM2835_SPI_CS			0x00
 #define BCM2835_SPI_FIFO		0x04
@@ -80,7 +114,6 @@  MODULE_PARM_DESC(polling_limit_us,

 #define BCM2835_SPI_FIFO_SIZE		64
 #define BCM2835_SPI_FIFO_SIZE_3_4	48
-#define BCM2835_SPI_DMA_MIN_LENGTH	96
 #define BCM2835_SPI_MODE_BITS	(SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \
 				| SPI_NO_CS | SPI_3WIRE)

@@ -447,7 +480,8 @@  static int bcm2835_spi_transfer_one_irq(struct spi_master *master,
  * if the length of the first is *exactly* 1.
  *
  * At most 6 bytes are written and at most 3 bytes read.  Do we know the
- * transfer has this many bytes?  Yes, see BCM2835_SPI_DMA_MIN_LENGTH.
+ * transfer has this many bytes?  Yes, see validation in
+ * dma_min_bytes_limit_set.
  *
  * The FIFO is normally accessed with 8-bit width by the CPU and 32-bit width
  * by the DMA engine.  Toggling the DMA Enable flag in the CS register switches
@@ -690,7 +724,7 @@  static bool bcm2835_spi_can_dma(struct spi_master *master,
 				struct spi_transfer *tfr)
 {
 	/* we start DMA efforts only on bigger transfers */
-	if (tfr->len < BCM2835_SPI_DMA_MIN_LENGTH)
+	if (tfr->len < dma_min_bytes_limit)
 		return false;

 	/* BCM2835_SPI_DLEN has defined a max transfer size as