Message ID | 20220927112117.77599-3-vincent.whitchurch@axis.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 0c17ba73c08ff2690c1eff8df374b6709eed55ce |
Series | spi: Fix DMA bugs in (not only) spi-s3c64xx |
Hi,

CCed: Christoph and Robin, as the issue is partially dma-mapping related.

On 27.09.2022 13:21, Vincent Whitchurch wrote:
> The SPI core DMA mapping support performs cache management once for the
> entire message and not between transfers, and this leads to cache
> corruption if a message has two or more RX transfers with both
> transfers targeting the same cache line, and the controller driver
> decides to handle one using DMA and the other using PIO (for example,
> because one is much larger than the other).
>
> Fix it by syncing before/after the actual transfers. This also means
> that we can skip the sync during the map/unmap of the message.
>
> Fixes: 99adef310f68 ("spi: Provide core support for DMA mapping transfers")
> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
> ---

This patch landed in linux next-20220929 as commit 0c17ba73c08f ("spi:
Fix cache corruption due to DMA/PIO overlap"). Unfortunately it causes
a kernel oops on one of my test systems:

8<--- cut here ---
Unable to handle kernel NULL pointer dereference at virtual address 0000000c
[0000000c] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
Modules linked in: cmac bnep btsdio hci_uart btbcm s5p_mfc btintel
brcmfmac bluetooth videobuf2_dma_contig videobuf2_memops videobuf2_v4l2
videobuf2_common videodev cfg80211 mc ecdh_generic ecc brcmutil
CPU: 0 PID: 12 Comm: kworker/0:1 Not tainted 6.0.0-rc7-next-20220929-dirty #12903
Hardware name: Samsung Exynos (Flattened Device Tree)
Workqueue: events ax88796c_work
PC is at dma_direct_sync_sg_for_device+0x24/0xb8
LR is at spi_transfer_one_message+0x4c4/0xabc
pc : [<c01cbcf0>]    lr : [<c0739fcc>]    psr: 20000013
...
Process kworker/0:1 (pid: 12, stack limit = 0xca429928)
Stack: (0xe0071d38 to 0xe0072000)
...
 dma_direct_sync_sg_for_device from spi_transfer_one_message+0x4c4/0xabc
 spi_transfer_one_message from __spi_pump_transfer_message+0x300/0x770
 __spi_pump_transfer_message from __spi_sync+0x304/0x3f4
 __spi_sync from spi_sync+0x28/0x40
 spi_sync from axspi_read_rxq+0x98/0xc8
 axspi_read_rxq from ax88796c_work+0x7a8/0xf6c
 ax88796c_work from process_one_work+0x288/0x774
 process_one_work from worker_thread+0x44/0x504
 worker_thread from kthread+0xf0/0x124
 kthread from ret_from_fork+0x14/0x2c
Exception stack(0xe0071fb0 to 0xe0071ff8)
...
---[ end trace 0000000000000000 ]---

This happens because sg_free_table() doesn't clear table->orig_nents or
table->nents. If the given spi xfer object is reused without a
dma-mapped buffer, a NULL pointer dereference then happens at table->sgl
in spi_dma_sync_for_device()/spi_dma_sync_for_cpu(). A possible fix
would be to zero table->orig_nents in spi_unmap_buf_attrs(). I will send
a patch for this soon.

However, I think that clearing table->orig_nents and table->nents should
be added to __sg_free_table() in lib/scatterlist.c to avoid this kind of
issue in the future. That however would be a significant change that
might break code somewhere, if it relies on the nents/orig_nents values
after calling sg_free_table(). Christoph, Robin - what is your opinion?

> drivers/spi/spi.c | 109 +++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 88 insertions(+), 21 deletions(-)
>
> ...

Best regards
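A minimal sketch of the narrower fix Marek describes — zeroing the table
metadata when the SPI core unmaps a buffer — written against the
spi_unmap_buf_attrs() helper introduced by the patch at the bottom of this
page; the actual follow-up patch may differ:

static void spi_unmap_buf_attrs(struct spi_controller *ctlr,
				struct device *dev, struct sg_table *sgt,
				enum dma_data_direction dir,
				unsigned long attrs)
{
	if (sgt->orig_nents) {
		dma_unmap_sgtable(dev, sgt, dir, attrs);
		sg_free_table(sgt);
		/*
		 * sg_free_table() leaves orig_nents/nents untouched; clear
		 * them here so a reused transfer is not treated as still
		 * mapped by spi_dma_sync_for_{device,cpu}(). Sketch only.
		 */
		sgt->orig_nents = 0;
		sgt->nents = 0;
	}
}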
On 2022-09-30 12:20, Marek Szyprowski wrote:
> Hi,
>
> CCed: Christoph and Robin, as the issue is partially dma-mapping related.
>
[...]
>
> However, I think that clearing table->orig_nents and table->nents should
> be added to __sg_free_table() in lib/scatterlist.c to avoid this kind of
> issue in the future. That however would be a significant change that
> might break code somewhere, if it relies on the nents/orig_nents values
> after calling sg_free_table(). Christoph, Robin - what is your opinion?

Yes, that makes sense to me: the table->nents etc. fields logically
describe the list that table->sgl points to, so when sg_free_table()
sets that to NULL it seems right to also update the corresponding fields
accordingly.

I don't see much good reason for code to poke into an sg_table after it
has been freed, other than to reinitialise it with sg_alloc_table(),
which would overwrite those fields anyway, so I can't imagine it's a
particularly risky change.

That said, maybe this is something that's better to catch than to paper
over? Arguably the real bug here is that spi_unmap_buf() and the new
sync functions should use the same "{tx,rx}_buf != NULL" condition that
spi_map_buf() used for the DMA mapping decision in the first place.

Thanks,
Robin.
On Fri, Sep 30, 2022 at 02:10:28PM +0200, Robin Murphy wrote:
> That said, maybe this is something that's better to catch than to paper
> over? Arguably the real bug here is that spi_unmap_buf() and the new
> sync functions should use the same "{tx,rx}_buf != NULL" condition that
> spi_map_buf() used for the DMA mapping decision in the first place.

The "{tx,rx}_buf != NULL" condition would not be sufficient on its own;
the call to ->can_dma() is also part of the condition. __spi_unmap_msg()
already makes the ->can_dma() call even though it checks for
orig_nents != 0 instead of {tx,rx}_buf != NULL, but I omitted that call
in the new sync functions, incorrectly believing it to be redundant.

It looks like __spi_unmap_msg() would have triggered a similar crash
even before this patch, if a client had reused an xfer with both RX and
TX enabled the first time and only one of them enabled the next time
around (and with ->can_dma() returning true both times). Testing
{tx,rx}_buf instead of sgt->orig_nents would have avoided that, as you
say.
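A sketch of the combined check Vincent describes. Note that, unlike the
merged helpers in the patch below, this variant needs the message passed in
so it can call ->can_dma(); the signature is illustrative, not the upstream
code:

static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
				 struct spi_message *msg,
				 struct spi_transfer *xfer)
{
	/* A transfer was mapped iff the controller chose DMA for it... */
	if (!ctlr->cur_msg_mapped || !ctlr->can_dma(ctlr, msg->spi, xfer))
		return;

	/* ...and the corresponding buffer was non-NULL at map time. */
	if (xfer->rx_buf)
		dma_sync_sgtable_for_cpu(ctlr->cur_rx_dma_dev, &xfer->rx_sg,
					 DMA_FROM_DEVICE);
	if (xfer->tx_buf)
		dma_sync_sgtable_for_cpu(ctlr->cur_tx_dma_dev, &xfer->tx_sg,
					 DMA_TO_DEVICE);
}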
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index dd885df23870..f41a8c2752b8 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1010,9 +1010,9 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
 }
 
 #ifdef CONFIG_HAS_DMA
-int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
-		struct sg_table *sgt, void *buf, size_t len,
-		enum dma_data_direction dir)
+static int spi_map_buf_attrs(struct spi_controller *ctlr, struct device *dev,
+			     struct sg_table *sgt, void *buf, size_t len,
+			     enum dma_data_direction dir, unsigned long attrs)
 {
 	const bool vmalloced_buf = is_vmalloc_addr(buf);
 	unsigned int max_seg_size = dma_get_max_seg_size(dev);
@@ -1078,28 +1078,39 @@ int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
 		sg = sg_next(sg);
 	}
 
-	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
-	if (!ret)
-		ret = -ENOMEM;
+	ret = dma_map_sgtable(dev, sgt, dir, attrs);
 	if (ret < 0) {
 		sg_free_table(sgt);
 		return ret;
 	}
 
-	sgt->nents = ret;
-
 	return 0;
 }
 
-void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
-		   struct sg_table *sgt, enum dma_data_direction dir)
+int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+		struct sg_table *sgt, void *buf, size_t len,
+		enum dma_data_direction dir)
+{
+	return spi_map_buf_attrs(ctlr, dev, sgt, buf, len, dir, 0);
+}
+
+static void spi_unmap_buf_attrs(struct spi_controller *ctlr,
+				struct device *dev, struct sg_table *sgt,
+				enum dma_data_direction dir,
+				unsigned long attrs)
 {
 	if (sgt->orig_nents) {
-		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		dma_unmap_sgtable(dev, sgt, dir, attrs);
 		sg_free_table(sgt);
 	}
 }
 
+void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	spi_unmap_buf_attrs(ctlr, dev, sgt, dir, 0);
+}
+
 static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
 	struct device *tx_dev, *rx_dev;
@@ -1124,24 +1135,30 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		rx_dev = ctlr->dev.parent;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/* The sync is done before each transfer. */
+		unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
 
 		if (xfer->tx_buf != NULL) {
-			ret = spi_map_buf(ctlr, tx_dev, &xfer->tx_sg,
-					  (void *)xfer->tx_buf, xfer->len,
-					  DMA_TO_DEVICE);
+			ret = spi_map_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+						(void *)xfer->tx_buf,
+						xfer->len, DMA_TO_DEVICE,
+						attrs);
 			if (ret != 0)
 				return ret;
 		}
 
 		if (xfer->rx_buf != NULL) {
-			ret = spi_map_buf(ctlr, rx_dev, &xfer->rx_sg,
-					  xfer->rx_buf, xfer->len,
-					  DMA_FROM_DEVICE);
+			ret = spi_map_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+						xfer->rx_buf, xfer->len,
+						DMA_FROM_DEVICE, attrs);
 			if (ret != 0) {
-				spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg,
-					      DMA_TO_DEVICE);
+				spi_unmap_buf_attrs(ctlr, tx_dev,
+						    &xfer->tx_sg, DMA_TO_DEVICE,
+						    attrs);
+
 				return ret;
 			}
 		}
@@ -1164,17 +1181,52 @@ static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		return 0;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/* The sync has already been done after each transfer. */
+		unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
 
-		spi_unmap_buf(ctlr, rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
-		spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+		spi_unmap_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+				    DMA_FROM_DEVICE, attrs);
+		spi_unmap_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+				    DMA_TO_DEVICE, attrs);
 	}
 
 	ctlr->cur_msg_mapped = false;
 
 	return 0;
 }
+
+static void spi_dma_sync_for_device(struct spi_controller *ctlr,
+				    struct spi_transfer *xfer)
+{
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+	if (!ctlr->cur_msg_mapped)
+		return;
+
+	if (xfer->tx_sg.orig_nents)
+		dma_sync_sgtable_for_device(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+	if (xfer->rx_sg.orig_nents)
+		dma_sync_sgtable_for_device(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
+				 struct spi_transfer *xfer)
+{
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+	if (!ctlr->cur_msg_mapped)
+		return;
+
+	if (xfer->rx_sg.orig_nents)
+		dma_sync_sgtable_for_cpu(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+	if (xfer->tx_sg.orig_nents)
+		dma_sync_sgtable_for_cpu(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+}
 #else /* !CONFIG_HAS_DMA */
 static inline int __spi_map_msg(struct spi_controller *ctlr,
 				struct spi_message *msg)
@@ -1187,6 +1239,16 @@ static inline int __spi_unmap_msg(struct spi_controller *ctlr,
 {
 	return 0;
 }
+
+static void spi_dma_sync_for_device(struct spi_controller *ctrl,
+				    struct spi_transfer *xfer)
+{
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctrl,
+				 struct spi_transfer *xfer)
+{
+}
 #endif /* !CONFIG_HAS_DMA */
 
 static inline int spi_unmap_msg(struct spi_controller *ctlr,
@@ -1445,8 +1507,11 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
 		reinit_completion(&ctlr->xfer_completion);
 
 fallback_pio:
+			spi_dma_sync_for_device(ctlr, xfer);
 			ret = ctlr->transfer_one(ctlr, msg->spi, xfer);
 			if (ret < 0) {
+				spi_dma_sync_for_cpu(ctlr, xfer);
+
 				if (ctlr->cur_msg_mapped &&
 				   (xfer->error & SPI_TRANS_FAIL_NO_START)) {
 					__spi_unmap_msg(ctlr, msg);
@@ -1469,6 +1534,8 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
 				if (ret < 0)
 					msg->status = ret;
 			}
+
+			spi_dma_sync_for_cpu(ctlr, xfer);
 		} else {
 			if (xfer->len)
 				dev_err(&msg->spi->dev,
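In outline, the patch above moves the SPI core from "sync once at map/unmap
time" to the following split pattern. This is a generic sketch of the
dma-mapping API usage, not literal driver code; dev, sgt, and dir stand for
any mapped transfer:

#include <linux/dma-mapping.h>

static int example_dma_one_transfer(struct device *dev, struct sg_table *sgt,
				    enum dma_data_direction dir)
{
	int ret;

	/* Map once per message; the implicit cache maintenance is skipped. */
	ret = dma_map_sgtable(dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);
	if (ret < 0)
		return ret;

	/* Sync immediately before the hardware touches the buffer... */
	dma_sync_sgtable_for_device(dev, sgt, dir);

	/* ...the hardware performs the DMA transfer here... */

	/* ...and hand the buffer back to the CPU right after it is done. */
	dma_sync_sgtable_for_cpu(dev, sgt, dir);

	/* Unmap at end of message, again without redundant maintenance. */
	dma_unmap_sgtable(dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);

	return 0;
}

Because a transfer that falls back to PIO is simply never synced for the
device, its cache lines are left alone and cannot clobber data written by
the CPU.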
The SPI core DMA mapping support performs cache management once for the
entire message and not between transfers, and this leads to cache
corruption if a message has two or more RX transfers with both
transfers targeting the same cache line, and the controller driver
decides to handle one using DMA and the other using PIO (for example,
because one is much larger than the other).

Fix it by syncing before/after the actual transfers. This also means
that we can skip the sync during the map/unmap of the message.

Fixes: 99adef310f68 ("spi: Provide core support for DMA mapping transfers")
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
---
 drivers/spi/spi.c | 109 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 88 insertions(+), 21 deletions(-)
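To make the failure mode concrete, here is a hypothetical client pattern
that can hit the corruption described above; names, sizes, and the on-stack
buffers are invented to keep the sketch short:

#include <linux/spi/spi.h>

static int example_read(struct spi_device *spi)
{
	struct {
		u8 hdr[4];	/* small: driver may read this by PIO */
		u8 data[512];	/* large: driver may read this by DMA */
	} resp;			/* hdr and data[0..] may share a cache line */
	struct spi_transfer xfers[] = {
		{ .rx_buf = resp.hdr,  .len = sizeof(resp.hdr)  },
		{ .rx_buf = resp.data, .len = sizeof(resp.data) },
	};
	struct spi_message msg;

	spi_message_init_with_transfers(&msg, xfers, ARRAY_SIZE(xfers));

	/*
	 * Before this fix: the message-wide cache maintenance for the
	 * DMA'd data[] buffer could invalidate the shared cache line
	 * after PIO had already filled hdr[], corrupting hdr[].
	 */
	return spi_sync(spi, &msg);
}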