From patchwork Fri Sep 16 11:39:48 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 12978449
From: Vincent Whitchurch
Subject: [PATCH 1/4] spi: spi-loopback-test: Add test to trigger DMA/PIO mixing
Date: Fri, 16 Sep 2022 13:39:48 +0200
Message-ID: <20220916113951.228398-2-vincent.whitchurch@axis.com>
In-Reply-To: <20220916113951.228398-1-vincent.whitchurch@axis.com>
References: <20220916113951.228398-1-vincent.whitchurch@axis.com>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

Add a test where a small and a large transfer in a message hit the same
cache line.  This test currently fails on spi-s3c64xx in DMA mode since it
ends up mixing DMA and PIO without proper cache maintenance.

Signed-off-by: Vincent Whitchurch
---
 drivers/spi/spi-loopback-test.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
index 4d4f77a186a9..dd7de8fa37d0 100644
--- a/drivers/spi/spi-loopback-test.c
+++ b/drivers/spi/spi-loopback-test.c
@@ -313,6 +313,33 @@ static struct spi_test spi_tests[] = {
 			},
 		},
 	},
+	{
+		.description	= "three tx+rx transfers with overlapping cache lines",
+		.fill_option	= FILL_COUNT_8,
+		/*
+		 * This should be large enough for the controller driver to
+		 * choose to transfer it with DMA.
+		 */
+		.iterate_len	= { 512, -1 },
+		.iterate_transfer_mask = BIT(1),
+		.transfer_count = 3,
+		.transfers	= {
+			{
+				.len = 1,
+				.tx_buf = TX(0),
+				.rx_buf = RX(0),
+			},
+			{
+				.tx_buf = TX(1),
+				.rx_buf = RX(1),
+			},
+			{
+				.len = 1,
+				.tx_buf = TX(513),
+				.rx_buf = RX(513),
+			},
+		},
+	},
 
 	{ /* end of tests sequence */ }
 };
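[Editorial note] The transfer layout above is easiest to see with a little
arithmetic.  The following standalone sketch is not part of the patch; it
assumes a contiguous RX buffer and a 64-byte cache line, both purely
illustrative assumptions, and shows that the two 1-byte transfers land on
the same cache lines as the first and last bytes of the 512-byte transfer:

/* Standalone illustration, not kernel code. */
#include <stdio.h>

#define CACHE_LINE 64	/* assumed line size, for illustration only */

int main(void)
{
	/* offsets and lengths used by the new loopback test */
	struct { unsigned int off, len; } xfers[] = {
		{ 0, 1 }, { 1, 512 }, { 513, 1 },
	};

	for (unsigned int i = 0; i < 3; i++) {
		unsigned int first = xfers[i].off / CACHE_LINE;
		unsigned int last = (xfers[i].off + xfers[i].len - 1) / CACHE_LINE;

		printf("transfer %u: cache lines %u..%u\n", i, first, last);
	}
	return 0;
}

With 64-byte lines this prints 0..0, 0..8 and 8..8, so the small
PIO-friendly transfers share lines 0 and 8 with the large DMA-friendly one,
which is exactly the DMA/PIO overlap the test wants to provoke.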
From patchwork Fri Sep 16 11:39:49 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 12978446
From: Vincent Whitchurch
Subject: [PATCH 2/4] spi: Save current RX and TX DMA devices
Date: Fri, 16 Sep 2022 13:39:49 +0200
Message-ID: <20220916113951.228398-3-vincent.whitchurch@axis.com>
In-Reply-To: <20220916113951.228398-1-vincent.whitchurch@axis.com>
References: <20220916113951.228398-1-vincent.whitchurch@axis.com>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

Save the current RX and TX DMA devices to avoid having to duplicate the
logic to pick them, since we'll need access to them in some more functions
to fix a bug in the cache handling.
Signed-off-by: Vincent Whitchurch
---
 drivers/spi/spi.c       | 19 ++++---------------
 include/linux/spi/spi.h |  4 ++++
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 32c01e684af3..a9134d062ff1 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1147,6 +1147,8 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		}
 	}
 
+	ctlr->cur_rx_dma_dev = rx_dev;
+	ctlr->cur_tx_dma_dev = tx_dev;
 	ctlr->cur_msg_mapped = true;
 
 	return 0;
@@ -1154,26 +1156,13 @@
 
 static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
 	struct spi_transfer *xfer;
-	struct device *tx_dev, *rx_dev;
 
 	if (!ctlr->cur_msg_mapped || !ctlr->can_dma)
 		return 0;
 
-	if (ctlr->dma_tx)
-		tx_dev = ctlr->dma_tx->device->dev;
-	else if (ctlr->dma_map_dev)
-		tx_dev = ctlr->dma_map_dev;
-	else
-		tx_dev = ctlr->dev.parent;
-
-	if (ctlr->dma_rx)
-		rx_dev = ctlr->dma_rx->device->dev;
-	else if (ctlr->dma_map_dev)
-		rx_dev = ctlr->dma_map_dev;
-	else
-		rx_dev = ctlr->dev.parent;
-
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index f089ee1ead58..7466a171d7e5 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -378,6 +378,8 @@ extern struct spi_device *spi_new_ancillary_device(struct spi_device *spi, u8 ch
  * @cleanup: frees controller-specific state
  * @can_dma: determine whether this controller supports DMA
  * @dma_map_dev: device which can be used for DMA mapping
+ * @cur_rx_dma_dev: device which is currently used for RX DMA mapping
+ * @cur_tx_dma_dev: device which is currently used for TX DMA mapping
  * @queued: whether this controller is providing an internal message queue
  * @kworker: pointer to thread struct for message pump
  * @pump_messages: work struct for scheduling work to the message pump
@@ -610,6 +612,8 @@ struct spi_controller {
 			struct spi_device *spi,
 			struct spi_transfer *xfer);
 	struct device *dma_map_dev;
+	struct device *cur_rx_dma_dev;
+	struct device *cur_tx_dma_dev;
 
 	/*
 	 * These hooks are for drivers that want to use the generic
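[Editorial note] For readers following along, the selection order that
__spi_map_msg still applies, and that the new cur_rx_dma_dev/cur_tx_dma_dev
fields now remember for reuse, boils down to the following hypothetical
helper.  It is a sketch for illustration only; spi_pick_dma_dev() is not
something this patch adds:

/* Sketch only, kernel context assumed; not part of the patch. */
static struct device *spi_pick_dma_dev(struct spi_controller *ctlr,
					struct dma_chan *chan)
{
	if (chan)			/* dedicated DMA channel, e.g. ctlr->dma_rx */
		return chan->device->dev;
	if (ctlr->dma_map_dev)		/* explicit DMA mapping device */
		return ctlr->dma_map_dev;
	return ctlr->dev.parent;	/* otherwise map via the controller's parent */
}

The next patch reads the cached pointers instead of re-deriving this choice
every time it needs to sync a transfer.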
From patchwork Fri Sep 16 11:39:50 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 12978450
From: Vincent Whitchurch
Subject: [PATCH 3/4] spi: Fix cache corruption due to DMA/PIO overlap
Date: Fri, 16 Sep 2022 13:39:50 +0200
Message-ID: <20220916113951.228398-4-vincent.whitchurch@axis.com>
In-Reply-To: <20220916113951.228398-1-vincent.whitchurch@axis.com>
References: <20220916113951.228398-1-vincent.whitchurch@axis.com>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

The SPI core DMA mapping support performs cache management once for the
entire message and not between transfers, and this leads to cache
corruption if a message has two or more RX transfers with both transfers
targeting the same cache line, and the controller driver decides to handle
one using DMA and the other using PIO (for example, because one is much
larger than the other).

Fix it by syncing before/after the actual transfers.  This also means that
we can skip the sync during the map/unmap of the message.

Fixes: 99adef310f682d6343 ("spi: Provide core support for DMA mapping transfers")
Signed-off-by: Vincent Whitchurch
Reported-by: kernel test robot
---
 drivers/spi/spi.c | 107 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 86 insertions(+), 21 deletions(-)

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index a9134d062ff1..f7cd737bbf6f 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1010,9 +1010,9 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
 }
 
 #ifdef CONFIG_HAS_DMA
-int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
-		struct sg_table *sgt, void *buf, size_t len,
-		enum dma_data_direction dir)
+static int spi_map_buf_attrs(struct spi_controller *ctlr, struct device *dev,
+			     struct sg_table *sgt, void *buf, size_t len,
+			     enum dma_data_direction dir, unsigned long attrs)
 {
 	const bool vmalloced_buf = is_vmalloc_addr(buf);
 	unsigned int max_seg_size = dma_get_max_seg_size(dev);
@@ -1078,28 +1078,39 @@ int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
 		sg = sg_next(sg);
 	}
 
-	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
-	if (!ret)
-		ret = -ENOMEM;
+	ret = dma_map_sgtable(dev, sgt, dir, attrs);
 	if (ret < 0) {
 		sg_free_table(sgt);
 		return ret;
 	}
 
-	sgt->nents = ret;
-
 	return 0;
 }
 
-void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
-		   struct sg_table *sgt, enum dma_data_direction dir)
+int spi_map_buf(struct spi_controller *ctlr, struct device *dev,
+		struct sg_table *sgt, void *buf, size_t len,
+		enum dma_data_direction dir)
+{
+	return spi_map_buf_attrs(ctlr, dev, sgt, buf, len, dir, 0);
+}
+
+static void spi_unmap_buf_attrs(struct spi_controller *ctlr,
+				struct device *dev, struct sg_table *sgt,
+				enum dma_data_direction dir,
+				unsigned long attrs)
 {
 	if (sgt->orig_nents) {
-		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		dma_unmap_sgtable(dev, sgt, dir, attrs);
 		sg_free_table(sgt);
 	}
 }
 
+void spi_unmap_buf(struct spi_controller *ctlr, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	spi_unmap_buf_attrs(ctlr, dev, sgt, dir, 0);
+}
+
 static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 {
 	struct device *tx_dev, *rx_dev;
@@ -1124,24 +1135,30 @@ static int __spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		rx_dev = ctlr->dev.parent;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/* The sync is done before each transfer. */
+		unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
 
 		if (xfer->tx_buf != NULL) {
-			ret = spi_map_buf(ctlr, tx_dev, &xfer->tx_sg,
-					  (void *)xfer->tx_buf, xfer->len,
-					  DMA_TO_DEVICE);
+			ret = spi_map_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+						(void *)xfer->tx_buf,
+						xfer->len, DMA_TO_DEVICE,
+						attrs);
 			if (ret != 0)
 				return ret;
 		}
 
 		if (xfer->rx_buf != NULL) {
-			ret = spi_map_buf(ctlr, rx_dev, &xfer->rx_sg,
-					  xfer->rx_buf, xfer->len,
-					  DMA_FROM_DEVICE);
+			ret = spi_map_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+						xfer->rx_buf, xfer->len,
+						DMA_FROM_DEVICE, attrs);
 			if (ret != 0) {
-				spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg,
-					      DMA_TO_DEVICE);
+				spi_unmap_buf_attrs(ctlr, tx_dev,
+						    &xfer->tx_sg, DMA_TO_DEVICE,
+						    attrs);
+
 				return ret;
 			}
 		}
@@ -1164,17 +1181,52 @@ static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
 		return 0;
 
 	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
+		/* The sync has already been done after each transfer. */
+		unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
+
 		if (!ctlr->can_dma(ctlr, msg->spi, xfer))
 			continue;
 
-		spi_unmap_buf(ctlr, rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
-		spi_unmap_buf(ctlr, tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+		spi_unmap_buf_attrs(ctlr, rx_dev, &xfer->rx_sg,
+				    DMA_FROM_DEVICE, attrs);
+		spi_unmap_buf_attrs(ctlr, tx_dev, &xfer->tx_sg,
+				    DMA_TO_DEVICE, attrs);
 	}
 
 	ctlr->cur_msg_mapped = false;
 
 	return 0;
 }
+
+static void spi_dma_sync_for_device(struct spi_controller *ctlr,
+				    struct spi_transfer *xfer)
+{
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+	if (!ctlr->cur_msg_mapped)
+		return;
+
+	if (xfer->tx_sg.orig_nents)
+		dma_sync_sgtable_for_device(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+	if (xfer->rx_sg.orig_nents)
+		dma_sync_sgtable_for_device(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+}
+
+static void spi_dma_sync_for_cpu(struct spi_controller *ctlr,
+				 struct spi_transfer *xfer)
+{
+	struct device *rx_dev = ctlr->cur_rx_dma_dev;
+	struct device *tx_dev = ctlr->cur_tx_dma_dev;
+
+	if (!ctlr->cur_msg_mapped)
+		return;
+
+	if (xfer->rx_sg.orig_nents)
+		dma_sync_sgtable_for_cpu(rx_dev, &xfer->rx_sg, DMA_FROM_DEVICE);
+	if (xfer->tx_sg.orig_nents)
+		dma_sync_sgtable_for_cpu(tx_dev, &xfer->tx_sg, DMA_TO_DEVICE);
+}
 #else /* !CONFIG_HAS_DMA */
 static inline int __spi_map_msg(struct spi_controller *ctlr,
 				struct spi_message *msg)
@@ -1187,6 +1239,14 @@ static inline int __spi_unmap_msg(struct spi_controller *ctlr,
 {
 	return 0;
 }
+
+void spi_dma_sync_for_device(struct spi_controller *ctrl, struct spi_transfer *xfer)
+{
+}
+
+void spi_dma_sync_for_cpu(struct spi_controller *ctrl, struct spi_transfer *xfer)
+{
+}
 #endif /* !CONFIG_HAS_DMA */
 
 static inline int spi_unmap_msg(struct spi_controller *ctlr,
@@ -1444,8 +1504,11 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
 		reinit_completion(&ctlr->xfer_completion);
 
 fallback_pio:
+			spi_dma_sync_for_device(ctlr, xfer);
 			ret = ctlr->transfer_one(ctlr, msg->spi, xfer);
 			if (ret < 0) {
+				spi_dma_sync_for_cpu(ctlr, xfer);
+
 				if (ctlr->cur_msg_mapped &&
 				   (xfer->error & SPI_TRANS_FAIL_NO_START)) {
 					__spi_unmap_msg(ctlr, msg);
@@ -1468,6 +1531,8 @@ static int spi_transfer_one_message(struct spi_controller *ctlr,
 				if (ret < 0)
 					msg->status = ret;
 			}
+
+			spi_dma_sync_for_cpu(ctlr, xfer);
 		} else {
 			if (xfer->len)
 				dev_err(&msg->spi->dev,
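[Editorial note] The pattern the core adopts in this patch is the standard
DMA-API ownership handover, moved from message granularity to transfer
granularity: map once with DMA_ATTR_SKIP_CPU_SYNC, then hand the buffer to
the device only around transfers that actually go out via DMA.  A condensed
sketch, with kernel context assumed and do_dma_transfer()/do_pio_transfer()
as hypothetical placeholders rather than functions from the patch:

/*
 * Sketch only.  PIO transfers keep the buffer owned by the CPU, so they can
 * no longer be undone by cache maintenance done for a neighbouring DMA
 * transfer in the same message.
 */
static int run_one_rx_transfer(struct device *dma_dev, struct sg_table *rx_sgt,
			       bool use_dma)
{
	int ret;

	if (use_dma) {
		/* give the buffer to the device, do the DMA, take it back */
		dma_sync_sgtable_for_device(dma_dev, rx_sgt, DMA_FROM_DEVICE);
		ret = do_dma_transfer(rx_sgt);		/* hypothetical */
		dma_sync_sgtable_for_cpu(dma_dev, rx_sgt, DMA_FROM_DEVICE);
	} else {
		ret = do_pio_transfer(rx_sgt);		/* hypothetical; CPU owns the buffer */
	}

	return ret;
}

Because map/unmap now pass DMA_ATTR_SKIP_CPU_SYNC, the explicit per-transfer
sync is the only cache maintenance left, which is what makes mixing DMA and
PIO within one message safe.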
From patchwork Fri Sep 16 11:39:51 2022
X-Patchwork-Submitter: Vincent Whitchurch
X-Patchwork-Id: 12978448
From: Vincent Whitchurch
Subject: [PATCH 4/4] spi: s3c64xx: Fix large transfers with DMA
Date: Fri, 16 Sep 2022 13:39:51 +0200
Message-ID: <20220916113951.228398-5-vincent.whitchurch@axis.com>
In-Reply-To: <20220916113951.228398-1-vincent.whitchurch@axis.com>
References: <20220916113951.228398-1-vincent.whitchurch@axis.com>
X-Mailing-List: linux-samsung-soc@vger.kernel.org

The COUNT_VALUE in the PACKET_CNT register is 16-bit so the maximum value
is 65535.  Asking the driver to transfer a larger size currently leads to
the DMA transfer timing out.  Fix this by splitting the transfer as needed.

With this, the len > 64 KiB tests in spi-loopback-test pass.

(Note that len == 64 KiB tests work even without this patch for some
reason.  The driver programs 0 to the COUNT_VALUE field in that case, but
it's unclear if it's by design, since the hardware documentation doesn't
say anything about the behaviour when COUNT_VALUE == 0, so play it safe and
split at 65535.)

Fixes: 230d42d422e7b69 ("spi: Add s3c64xx SPI Controller driver")
Signed-off-by: Vincent Whitchurch
Acked-by: Krzysztof Kozlowski
---
 drivers/spi/spi-s3c64xx.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
index 7f346866614a..85e1d1f90109 100644
--- a/drivers/spi/spi-s3c64xx.c
+++ b/drivers/spi/spi-s3c64xx.c
@@ -701,6 +701,16 @@ static int s3c64xx_spi_prepare_message(struct spi_master *master,
 	struct spi_device *spi = msg->spi;
 	struct s3c64xx_spi_csinfo *cs = spi->controller_data;
 
+	if (master->can_dma) {
+		int ret;
+
+		/* Limited by size of PACKET_CNT.COUNT_VALUE. */
+		ret = spi_split_transfers_maxsize(master, msg, 65535,
+						  GFP_KERNEL | GFP_DMA);
+		if (ret)
+			return ret;
+	}
+
 	/* Configure feedback delay */
 	if (!cs)
 		/* No delay if not defined */
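[Editorial note] To make the 65535 split point concrete, here is a tiny
standalone illustration, not driver code, of what happens when a packet
count wider than 16 bits is squeezed into COUNT_VALUE (assuming 8-bit words,
so the packet count equals the byte count): 65536 truncates to 0, matching
the note above about 64 KiB transfers, and anything larger truncates to a
wrong non-zero count.

#include <stdio.h>

int main(void)
{
	unsigned int lens[] = { 65535, 65536, 65537, 200000 };

	/* only the low 16 bits fit in the COUNT_VALUE field */
	for (unsigned int i = 0; i < 4; i++)
		printf("requested %6u bytes -> 16-bit COUNT_VALUE %5u\n",
		       lens[i], lens[i] & 0xffff);

	return 0;
}

By calling spi_split_transfers_maxsize() in prepare_message, the driver caps
every transfer at 65535 bytes before the hardware ever sees it, so the field
always holds the true count.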