Message ID | 20240327-digigram-xdma-fixes-v1-2-45f4a52c0283@bootlin.com (mailing list archive)
---|---
State | Accepted
Commit | 6a40fb8245965b481b4dcce011cd63f20bf91ee0
Series | dmaengine: xilinx: xdma: Various fixes for xdma
On 3/27/24 02:58, Louis Chauvet wrote:
> The current xdma_synchronize method does not properly wait for the last
> transfer to be done. Due to limitations of the XDMA engine, it is not
> possible to stop a transfer in the middle of a descriptor. Said
> otherwise, if a stop is requested at the end of descriptor "N" and the OS
> is fast enough, the DMA controller will effectively stop immediately.
> However, if the OS is slightly too slow to request the stop and the DMA
> engine starts descriptor "N+1", the N+1 transfer will be performed until
> its end. This means that after a terminate_all, the last descriptor must
> remain valid and the synchronization must wait for this last descriptor to
> be terminated.
>
> Fixes: 855c2e1d1842 ("dmaengine: xilinx: xdma: Rework xdma_terminate_all()")
> Fixes: f5c392d106e7 ("dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks")
> Cc: stable@vger.kernel.org
> Suggested-by: Miquel Raynal <miquel.raynal@bootlin.com>
> Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com>
> ---
>  drivers/dma/xilinx/xdma-regs.h |  3 +++
>  drivers/dma/xilinx/xdma.c      | 26 ++++++++++++++++++--------
>  2 files changed, 21 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xdma-regs.h b/drivers/dma/xilinx/xdma-regs.h
> index 98f5f6fb9ff9..6ad08878e938 100644
> --- a/drivers/dma/xilinx/xdma-regs.h
> +++ b/drivers/dma/xilinx/xdma-regs.h
> @@ -117,6 +117,9 @@ struct xdma_hw_desc {
>  			 CHAN_CTRL_IE_WRITE_ERROR |	\
>  			 CHAN_CTRL_IE_DESC_ERROR)
>
> +/* bits of the channel status register */
> +#define XDMA_CHAN_STATUS_BUSY		BIT(0)
> +
>  #define XDMA_CHAN_STATUS_MASK CHAN_CTRL_START
>
>  #define XDMA_CHAN_ERROR_MASK (CHAN_CTRL_IE_DESC_ALIGN_MISMATCH |	\
> diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
> index b9788aa8f6b7..5a3a3293b21b 100644
> --- a/drivers/dma/xilinx/xdma.c
> +++ b/drivers/dma/xilinx/xdma.c
> @@ -71,6 +71,8 @@ struct xdma_chan {
>  	enum dma_transfer_direction	dir;
>  	struct dma_slave_config		cfg;
>  	u32				irq;
> +	struct completion		last_interrupt;
> +	bool				stop_requested;
>  };
>
>  /**
> @@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>  		return ret;
>
>  	xchan->busy = true;
> +	xchan->stop_requested = false;
> +	reinit_completion(&xchan->last_interrupt);

If stop_requested is true, it should not start another transfer. So I would suggest to add

	if (xchan->stop_requested)
		return -ENODEV;

at the beginning of xdma_xfer_start(). xdma_xfer_start() is protected by chan lock.

>
>  	return 0;
>  }
> @@ -387,7 +391,6 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>  static int xdma_xfer_stop(struct xdma_chan *xchan)
>  {
>  	int ret;
> -	u32 val;
>  	struct xdma_device *xdev = xchan->xdev_hdl;
>
>  	/* clear run stop bit to prevent any further auto-triggering */
> @@ -395,13 +398,7 @@ static int xdma_xfer_stop(struct xdma_chan *xchan)
>  			   CHAN_CTRL_RUN_STOP);
>  	if (ret)
>  		return ret;

Above two lines can be removed with your change.

> -
> -	/* Clear the channel status register */
> -	ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS_RC, &val);
> -	if (ret)
> -		return ret;
> -
> -	return 0;
> +	return ret;
>  }
>
>  /**
> @@ -474,6 +471,8 @@ static int xdma_alloc_channels(struct xdma_device *xdev,
>  		xchan->xdev_hdl = xdev;
>  		xchan->base = base + i * XDMA_CHAN_STRIDE;
>  		xchan->dir = dir;
> +		xchan->stop_requested = false;
> +		init_completion(&xchan->last_interrupt);
>
>  		ret = xdma_channel_init(xchan);
>  		if (ret)
> @@ -521,6 +520,7 @@ static int xdma_terminate_all(struct dma_chan *chan)
>  	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
>
>  	xdma_chan->busy = false;
> +	xdma_chan->stop_requested = true;
>  	vd = vchan_next_desc(&xdma_chan->vchan);
>  	if (vd) {
>  		list_del(&vd->node);
> @@ -542,6 +542,13 @@ static int xdma_terminate_all(struct dma_chan *chan)
>  static void xdma_synchronize(struct dma_chan *chan)
>  {
>  	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
> +	struct xdma_device *xdev = xdma_chan->xdev_hdl;
> +	int st = 0;
> +
> +	/* If the engine continues running, wait for the last interrupt */
> +	regmap_read(xdev->rmap, xdma_chan->base + XDMA_CHAN_STATUS, &st);
> +	if (st & XDMA_CHAN_STATUS_BUSY)
> +		wait_for_completion_timeout(&xdma_chan->last_interrupt, msecs_to_jiffies(1000));

I suggest to add error message for timeout case.

>
>  	vchan_synchronize(&xdma_chan->vchan);
>  }
> @@ -876,6 +883,9 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
>  	u32 st;
>  	bool repeat_tx;
>
> +	if (xchan->stop_requested)
> +		complete(&xchan->last_interrupt);
> +

This should be moved to the end of function to make sure processing previous transfer is completed.

out:
	if (xchan->stop_requested) {
		xchan->busy = false;
		complete(&xchan->last_interrupt);
	}

	spin_unlock(&xchan->vchan.lock);

	return IRQ_HANDLED;

Thanks,
Lizhi

>  	spin_lock(&xchan->vchan.lock);
>
>  	/* get submitted request */
>
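As a rough illustration of Lizhi's timeout remark, and not part of the posted patch: the timeout case could be reported by checking the return value of wait_for_completion_timeout(), which is 0 when the wait times out. The xdma_err() logging macro used here is assumed to be the driver's existing error helper.

static void xdma_synchronize(struct dma_chan *chan)
{
	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
	struct xdma_device *xdev = xdma_chan->xdev_hdl;
	int st = 0;

	/* If the engine continues running, wait for the last interrupt */
	regmap_read(xdev->rmap, xdma_chan->base + XDMA_CHAN_STATUS, &st);
	if (st & XDMA_CHAN_STATUS_BUSY) {
		/* a zero return value means the 1 s timeout expired */
		if (!wait_for_completion_timeout(&xdma_chan->last_interrupt,
						 msecs_to_jiffies(1000)))
			xdma_err(xdev, "timeout waiting for last interrupt");
	}

	vchan_synchronize(&xdma_chan->vchan);
}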
Hi Lizhi,

> > @@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
> >  		return ret;
> >  
> >  	xchan->busy = true;
> > +	xchan->stop_requested = false;
> > +	reinit_completion(&xchan->last_interrupt);
> 
> If stop_requested is true, it should not start another transfer. So I would suggest to add
> 
> 	if (xchan->stop_requested)
> 
> 		return -ENODEV;

Maybe -EBUSY in this case?

I thought synchronize() was mandatory in-between. If that's not the
case then indeed we need to block or error-out if a new transfer
gets started.

> at the beginning of xdma_xfer_start().
> 
> xdma_xfer_start() is protected by chan lock.
> 
> >  
> >  	return 0;
> >  }

Thanks,
Miquèl
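For illustration only, the guard being discussed here would look roughly like the sketch below, using the -EBUSY return value Miquel suggests. It is not part of the posted patch, and later in the thread the check is judged unnecessary because clients are not expected to issue new transfers between terminate_all() and synchronize().

static int xdma_xfer_start(struct xdma_chan *xchan)
{
	/*
	 * Sketch of the discussed guard: refuse to start a new transfer
	 * between terminate_all() and synchronize(). Callers hold the
	 * channel lock, so stop_requested can be tested directly.
	 */
	if (xchan->stop_requested)
		return -EBUSY;

	/* ... existing descriptor setup and engine start ... */

	xchan->busy = true;
	xchan->stop_requested = false;
	reinit_completion(&xchan->last_interrupt);

	return 0;
}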
Hi Louis,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 8e938e39866920ddc266898e6ae1fffc5c8f51aa]

url:    https://github.com/intel-lab-lkp/linux/commits/Louis-Chauvet/dmaengine-xilinx-xdma-Fix-wrong-offsets-in-the-buffers-addresses-in-dma-descriptor/20240327-180155
base:   8e938e39866920ddc266898e6ae1fffc5c8f51aa
patch link:    https://lore.kernel.org/r/20240327-digigram-xdma-fixes-v1-2-45f4a52c0283%40bootlin.com
patch subject: [PATCH 2/3] dmaengine: xilinx: xdma: Fix synchronization issue
config: sparc-allyesconfig (https://download.01.org/0day-ci/archive/20240328/202403280803.IAFl90ZE-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240328/202403280803.IAFl90ZE-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403280803.IAFl90ZE-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/dma/xilinx/xdma.c:76: warning: Function parameter or struct member 'last_interrupt' not described in 'xdma_chan'
>> drivers/dma/xilinx/xdma.c:76: warning: Function parameter or struct member 'stop_requested' not described in 'xdma_chan'


vim +76 drivers/dma/xilinx/xdma.c

17ce252266c7f0 Lizhi Hou      2023-01-19  53  
17ce252266c7f0 Lizhi Hou      2023-01-19  54  /**
17ce252266c7f0 Lizhi Hou      2023-01-19  55   * struct xdma_chan - Driver specific DMA channel structure
17ce252266c7f0 Lizhi Hou      2023-01-19  56   * @vchan: Virtual channel
17ce252266c7f0 Lizhi Hou      2023-01-19  57   * @xdev_hdl: Pointer to DMA device structure
17ce252266c7f0 Lizhi Hou      2023-01-19  58   * @base: Offset of channel registers
17ce252266c7f0 Lizhi Hou      2023-01-19  59   * @desc_pool: Descriptor pool
17ce252266c7f0 Lizhi Hou      2023-01-19  60   * @busy: Busy flag of the channel
17ce252266c7f0 Lizhi Hou      2023-01-19  61   * @dir: Transferring direction of the channel
17ce252266c7f0 Lizhi Hou      2023-01-19  62   * @cfg: Transferring config of the channel
17ce252266c7f0 Lizhi Hou      2023-01-19  63   * @irq: IRQ assigned to the channel
17ce252266c7f0 Lizhi Hou      2023-01-19  64   */
17ce252266c7f0 Lizhi Hou      2023-01-19  65  struct xdma_chan {
17ce252266c7f0 Lizhi Hou      2023-01-19  66  	struct virt_dma_chan		vchan;
17ce252266c7f0 Lizhi Hou      2023-01-19  67  	void				*xdev_hdl;
17ce252266c7f0 Lizhi Hou      2023-01-19  68  	u32				base;
17ce252266c7f0 Lizhi Hou      2023-01-19  69  	struct dma_pool			*desc_pool;
17ce252266c7f0 Lizhi Hou      2023-01-19  70  	bool				busy;
17ce252266c7f0 Lizhi Hou      2023-01-19  71  	enum dma_transfer_direction	dir;
17ce252266c7f0 Lizhi Hou      2023-01-19  72  	struct dma_slave_config		cfg;
17ce252266c7f0 Lizhi Hou      2023-01-19  73  	u32				irq;
70e8496bf693e1 Louis Chauvet  2024-03-27  74  	struct completion		last_interrupt;
70e8496bf693e1 Louis Chauvet  2024-03-27  75  	bool				stop_requested;
17ce252266c7f0 Lizhi Hou      2023-01-19 @76  };
17ce252266c7f0 Lizhi Hou      2023-01-19  77  
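The two W=1 warnings above come from the new struct members lacking kernel-doc lines in the struct xdma_chan comment block. The fix is simply to extend that comment; the wording below is illustrative and not necessarily the exact text used in a later revision.

/**
 * struct xdma_chan - Driver specific DMA channel structure
 * ...
 * @irq: IRQ assigned to the channel
 * @last_interrupt: Completion signalled by the last interrupt after a stop
 * @stop_requested: Set by terminate_all() until the channel is synchronized
 */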
On 3/27/24 17:23, Miquel Raynal wrote:
> Hi Lizhi,
>
>>> @@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
>>>  		return ret;
>>>  
>>>  	xchan->busy = true;
>>> +	xchan->stop_requested = false;
>>> +	reinit_completion(&xchan->last_interrupt);
>> If stop_requested is true, it should not start another transfer. So I would suggest to add
>>
>> 	if (xchan->stop_requested)
>>
>> 		return -ENODEV;
> Maybe -EBUSY in this case?
>
> I thought synchronize() was mandatory in-between. If that's not the
> case then indeed we need to block or error-out if a new transfer
> gets started.

Okay. It looks like issue_pending is not expected between terminate_all() and
synchronize(). This check is not needed.

Thanks,
Lizhi

>
>> at the beginning of xdma_xfer_start().
>>
>> xdma_xfer_start() is protected by chan lock.
>>
>>>  
>>>  	return 0;
>>>  }
> Thanks,
> Miquèl
diff --git a/drivers/dma/xilinx/xdma-regs.h b/drivers/dma/xilinx/xdma-regs.h
index 98f5f6fb9ff9..6ad08878e938 100644
--- a/drivers/dma/xilinx/xdma-regs.h
+++ b/drivers/dma/xilinx/xdma-regs.h
@@ -117,6 +117,9 @@ struct xdma_hw_desc {
 			 CHAN_CTRL_IE_WRITE_ERROR |	\
 			 CHAN_CTRL_IE_DESC_ERROR)
 
+/* bits of the channel status register */
+#define XDMA_CHAN_STATUS_BUSY		BIT(0)
+
 #define XDMA_CHAN_STATUS_MASK CHAN_CTRL_START
 
 #define XDMA_CHAN_ERROR_MASK (CHAN_CTRL_IE_DESC_ALIGN_MISMATCH |	\
diff --git a/drivers/dma/xilinx/xdma.c b/drivers/dma/xilinx/xdma.c
index b9788aa8f6b7..5a3a3293b21b 100644
--- a/drivers/dma/xilinx/xdma.c
+++ b/drivers/dma/xilinx/xdma.c
@@ -71,6 +71,8 @@ struct xdma_chan {
 	enum dma_transfer_direction	dir;
 	struct dma_slave_config		cfg;
 	u32				irq;
+	struct completion		last_interrupt;
+	bool				stop_requested;
 };
 
 /**
@@ -376,6 +378,8 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
 		return ret;
 
 	xchan->busy = true;
+	xchan->stop_requested = false;
+	reinit_completion(&xchan->last_interrupt);
 
 	return 0;
 }
@@ -387,7 +391,6 @@ static int xdma_xfer_start(struct xdma_chan *xchan)
 static int xdma_xfer_stop(struct xdma_chan *xchan)
 {
 	int ret;
-	u32 val;
 	struct xdma_device *xdev = xchan->xdev_hdl;
 
 	/* clear run stop bit to prevent any further auto-triggering */
@@ -395,13 +398,7 @@ static int xdma_xfer_stop(struct xdma_chan *xchan)
 			   CHAN_CTRL_RUN_STOP);
 	if (ret)
 		return ret;
-
-	/* Clear the channel status register */
-	ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS_RC, &val);
-	if (ret)
-		return ret;
-
-	return 0;
+	return ret;
 }
 
 /**
@@ -474,6 +471,8 @@ static int xdma_alloc_channels(struct xdma_device *xdev,
 		xchan->xdev_hdl = xdev;
 		xchan->base = base + i * XDMA_CHAN_STRIDE;
 		xchan->dir = dir;
+		xchan->stop_requested = false;
+		init_completion(&xchan->last_interrupt);
 
 		ret = xdma_channel_init(xchan);
 		if (ret)
@@ -521,6 +520,7 @@ static int xdma_terminate_all(struct dma_chan *chan)
 	spin_lock_irqsave(&xdma_chan->vchan.lock, flags);
 
 	xdma_chan->busy = false;
+	xdma_chan->stop_requested = true;
 	vd = vchan_next_desc(&xdma_chan->vchan);
 	if (vd) {
 		list_del(&vd->node);
@@ -542,6 +542,13 @@ static int xdma_terminate_all(struct dma_chan *chan)
 static void xdma_synchronize(struct dma_chan *chan)
 {
 	struct xdma_chan *xdma_chan = to_xdma_chan(chan);
+	struct xdma_device *xdev = xdma_chan->xdev_hdl;
+	int st = 0;
+
+	/* If the engine continues running, wait for the last interrupt */
+	regmap_read(xdev->rmap, xdma_chan->base + XDMA_CHAN_STATUS, &st);
+	if (st & XDMA_CHAN_STATUS_BUSY)
+		wait_for_completion_timeout(&xdma_chan->last_interrupt, msecs_to_jiffies(1000));
 
 	vchan_synchronize(&xdma_chan->vchan);
 }
@@ -876,6 +883,9 @@ static irqreturn_t xdma_channel_isr(int irq, void *dev_id)
 	u32 st;
 	bool repeat_tx;
 
+	if (xchan->stop_requested)
+		complete(&xchan->last_interrupt);
+
 	spin_lock(&xchan->vchan.lock);
 
 	/* get submitted request */
The current xdma_synchronize method does not properly wait for the last
transfer to be done. Due to limitations of the XDMA engine, it is not
possible to stop a transfer in the middle of a descriptor. Said
otherwise, if a stop is requested at the end of descriptor "N" and the OS
is fast enough, the DMA controller will effectively stop immediately.
However, if the OS is slightly too slow to request the stop and the DMA
engine starts descriptor "N+1", the N+1 transfer will be performed until
its end. This means that after a terminate_all, the last descriptor must
remain valid and the synchronization must wait for this last descriptor to
be terminated.

Fixes: 855c2e1d1842 ("dmaengine: xilinx: xdma: Rework xdma_terminate_all()")
Fixes: f5c392d106e7 ("dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks")
Cc: stable@vger.kernel.org
Suggested-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Louis Chauvet <louis.chauvet@bootlin.com>
---
 drivers/dma/xilinx/xdma-regs.h |  3 +++
 drivers/dma/xilinx/xdma.c      | 26 ++++++++++++++++++--------
 2 files changed, 21 insertions(+), 8 deletions(-)
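To see why this matters to dmaengine clients, consider the generic teardown sequence below. It is a sketch using the standard dmaengine client API, not code from this series: the buffer may only be unmapped and freed once dmaengine_synchronize() returns, which is only safe if the driver really waits for the possibly still-running descriptor "N+1".

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static void client_stop_and_release(struct dma_chan *chan, struct device *dev,
				    void *buf, dma_addr_t dma_addr, size_t len)
{
	/* Request the stop; this may return before the engine is idle */
	dmaengine_terminate_async(chan);

	/* Blocks until the driver guarantees no further buffer accesses */
	dmaengine_synchronize(chan);

	/* Only now is it safe to unmap and free the DMA buffer */
	dma_unmap_single(dev, dma_addr, len, DMA_FROM_DEVICE);
	kfree(buf);
}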