| Message ID | 20180625092724.22164-1-andrea.merello@gmail.com (mailing list archive) |
|---|---|
| State | Changes Requested |
On 25-06-18, 11:27, Andrea Merello wrote:
> Whenever a single or cyclic transaction is prepared, the driver
> could eventually split it over several SG descriptors in order
> to deal with the HW maximum transfer length.
>
> This could end up in DMA operations starting from a misaligned
> address. This seems fatal for the HW if DRE is not enabled.
>
> This patch eventually adjusts the transfer size in order to make sure
> all operations start from an aligned address.
>
> Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
> Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> ---
> Changes in v2:
> - don't introduce copy_mask field, rather rely on already-existent
>   copy_align field. Suggested by Radhey Shyam Pandey
> - reword title
> Changes in v3:
> - fix bug introduced in v2: wrong copy size when DRE is enabled;
>   use implementation suggested by Radhey Shyam Pandey
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 27b523530c4a..113d9bf1b6a1 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1793,6 +1793,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
>  			 */
>  			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
>  				     XILINX_DMA_MAX_TRANS_LEN);
> +
> +			if ((copy + sg_used < sg_dma_len(sg)) &&
> +			    chan->xdev->common.copy_align) {
> +				/*
> +				 * If this is not the last descriptor, make sure
> +				 * the next one will be properly aligned
> +				 */
> +				copy = rounddown(copy,
> +						 (1 << chan->xdev->common.copy_align));
> +			}
>  			hw = &segment->hw;
>
>  			/* Fill in the descriptor */
> @@ -1898,6 +1908,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_dma_cyclic(
>  			 */
>  			copy = min_t(size_t, period_len - sg_used,
>  				     XILINX_DMA_MAX_TRANS_LEN);
> +
> +			if ((copy + sg_used < period_len) &&
> +			    chan->xdev->common.copy_align) {
> +				/*
> +				 * If this is not the last descriptor, make sure
> +				 * the next one will be properly aligned
> +				 */
> +				copy = rounddown(copy,
> +						 (1 << chan->xdev->common.copy_align));
> +			}

same code pasted twice, can we have a routine for this... perhaps more
code can be made common too
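To see concretely what the added hunks do, the stand-alone sketch below replays the chunking arithmetic in user space. The constants are illustrative assumptions (a 23-bit maximum transfer length and copy_align = 3, i.e. 8-byte alignment), not values taken from the driver, and MIN()/ROUNDDOWN() stand in for the kernel's min_t()/rounddown() helpers.

```c
#include <stdio.h>
#include <stddef.h>

/*
 * Illustrative values only: the real driver takes the transfer-length
 * limit from its own defines and copy_align from the HW configuration.
 */
#define MAX_TRANS_LEN	0x7FFFFFUL	/* assumed 23-bit HW transfer limit */
#define COPY_ALIGN	3		/* assumed log2 of required alignment */

#define MIN(a, b)	((a) < (b) ? (a) : (b))
#define ROUNDDOWN(x, y)	((x) - ((x) % (y)))	/* rounddown() for y > 0 */

int main(void)
{
	size_t sg_len = 0x1000000;	/* one 16 MiB SG entry, must be split */
	size_t sg_used = 0;

	while (sg_used < sg_len) {
		size_t copy = MIN(sg_len - sg_used, MAX_TRANS_LEN);

		/*
		 * Not the last chunk: shrink it so the next chunk
		 * starts on an aligned address (what the patch adds).
		 */
		if (copy + sg_used < sg_len)
			copy = ROUNDDOWN(copy, 1UL << COPY_ALIGN);

		printf("chunk at offset 0x%zx, length 0x%zx\n", sg_used, copy);
		sg_used += copy;
	}
	return 0;
}
```

With the rounddown in place every non-final chunk length is a multiple of 8, so each following chunk starts at an aligned offset (0x7ffff8, 0xfffff0, ...); without it, the second chunk would start at the odd offset 0x7fffff, which is the misalignment the commit message describes.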
On Fri, Jun 29, 2018 at 9:25 AM, Vinod <vkoul@kernel.org> wrote:
> On 25-06-18, 11:27, Andrea Merello wrote:
>> Whenever a single or cyclic transaction is prepared, the driver
>> could eventually split it over several SG descriptors in order
>> to deal with the HW maximum transfer length.
>>
>> This could end up in DMA operations starting from a misaligned
>> address. This seems fatal for the HW if DRE is not enabled.
>>
>> This patch eventually adjusts the transfer size in order to make sure
>> all operations start from an aligned address.
>>
>> Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
>> Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
>> Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
>> ---
>> Changes in v2:
>> - don't introduce copy_mask field, rather rely on already-existent
>>   copy_align field. Suggested by Radhey Shyam Pandey
>> - reword title
>> Changes in v3:
>> - fix bug introduced in v2: wrong copy size when DRE is enabled;
>>   use implementation suggested by Radhey Shyam Pandey
>> ---
>> drivers/dma/xilinx/xilinx_dma.c | 20 ++++++++++++++++++++
>> 1 file changed, 20 insertions(+)
>>
>> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
>> index 27b523530c4a..113d9bf1b6a1 100644
>> --- a/drivers/dma/xilinx/xilinx_dma.c
>> +++ b/drivers/dma/xilinx/xilinx_dma.c
>> @@ -1793,6 +1793,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
>>  			 */
>>  			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
>>  				     XILINX_DMA_MAX_TRANS_LEN);
>> +
>> +			if ((copy + sg_used < sg_dma_len(sg)) &&
>> +			    chan->xdev->common.copy_align) {
>> +				/*
>> +				 * If this is not the last descriptor, make sure
>> +				 * the next one will be properly aligned
>> +				 */
>> +				copy = rounddown(copy,
>> +						 (1 << chan->xdev->common.copy_align));
>> +			}
>>  			hw = &segment->hw;
>>
>>  			/* Fill in the descriptor */
>> @@ -1898,6 +1908,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_dma_cyclic(
>>  			 */
>>  			copy = min_t(size_t, period_len - sg_used,
>>  				     XILINX_DMA_MAX_TRANS_LEN);
>> +
>> +			if ((copy + sg_used < period_len) &&
>> +			    chan->xdev->common.copy_align) {
>> +				/*
>> +				 * If this is not the last descriptor, make sure
>> +				 * the next one will be properly aligned
>> +				 */
>> +				copy = rounddown(copy,
>> +						 (1 << chan->xdev->common.copy_align));
>> +			}
>
> same code pasted twice, can we have a routine for this... perhaps more
> code can be made common too

Yes, I see. Indeed there was duplicated code before this series, and it
is still there after it.

I can see if we can have a routine as you suggested, at least for the
code portions touched by this patch. Do you want this extra change in
the same patch 1/5, or as a separate patch, i.e. 2/6 or 6/6?

> --
> ~Vinod
On 29-06-18, 09:46, Andrea Merello wrote:
> On Fri, Jun 29, 2018 at 9:25 AM, Vinod <vkoul@kernel.org> wrote:
> >> +
> >> +			if ((copy + sg_used < period_len) &&
> >> +			    chan->xdev->common.copy_align) {
> >> +				/*
> >> +				 * If this is not the last descriptor, make sure
> >> +				 * the next one will be properly aligned
> >> +				 */
> >> +				copy = rounddown(copy,
> >> +						 (1 << chan->xdev->common.copy_align));
> >> +			}
> >
> > same code pasted twice, can we have a routine for this... perhaps more
> > code can be made common too
>
> Yes, I see. Indeed there was duplicated code before this series, and it
> is still there after it.
>
> I can see if we can have a routine as you suggested, at least for the
> code portions touched by this patch. Do you want this extra change in
> the same patch 1/5, or as a separate patch, i.e. 2/6 or 6/6?

Each patch should do one thing, so it would make sense to do the move
first and then add your change on top of that: 1/6 commonize and 2/6
add this bit.
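One way the duplicated block could be factored out, along the lines suggested above, is sketched here. The helper name xilinx_dma_calc_copysize() and its exact signature are only a guess at what such a "commonize" patch might introduce, not the actual submission; everything it touches (min_t(), rounddown(), XILINX_DMA_MAX_TRANS_LEN, chan->xdev->common.copy_align) comes from the code quoted in this thread.

```c
/* Sketch of a possible common helper (not the actual patch). */
static size_t xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
				       size_t size, size_t done)
{
	size_t copy;

	/* Clamp the chunk to the HW maximum transfer length */
	copy = min_t(size_t, size - done, XILINX_DMA_MAX_TRANS_LEN);

	if ((copy + done < size) && chan->xdev->common.copy_align) {
		/*
		 * If this is not the last descriptor, make sure
		 * the next one will be properly aligned
		 */
		copy = rounddown(copy, (1 << chan->xdev->common.copy_align));
	}

	return copy;
}
```

In the split proposed above, 1/6 would introduce the helper containing only the min_t() clamp that both prepare functions already share, and 2/6 would add the alignment rounddown inside it.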
diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 27b523530c4a..113d9bf1b6a1 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -1793,6 +1793,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 			 */
 			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
 				     XILINX_DMA_MAX_TRANS_LEN);
+
+			if ((copy + sg_used < sg_dma_len(sg)) &&
+			    chan->xdev->common.copy_align) {
+				/*
+				 * If this is not the last descriptor, make sure
+				 * the next one will be properly aligned
+				 */
+				copy = rounddown(copy,
+						 (1 << chan->xdev->common.copy_align));
+			}
 			hw = &segment->hw;
 
 			/* Fill in the descriptor */
@@ -1898,6 +1908,16 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_dma_cyclic(
 			 */
 			copy = min_t(size_t, period_len - sg_used,
 				     XILINX_DMA_MAX_TRANS_LEN);
+
+			if ((copy + sg_used < period_len) &&
+			    chan->xdev->common.copy_align) {
+				/*
+				 * If this is not the last descriptor, make sure
+				 * the next one will be properly aligned
+				 */
+				copy = rounddown(copy,
+						 (1 << chan->xdev->common.copy_align));
+			}
 			hw = &segment->hw;
 
 			xilinx_axidma_buf(chan, hw, buf_addr, sg_used,
 					  period_len * i);
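For comparison, if the hypothetical helper sketched earlier in the thread were adopted, the two hunks in this diff would each reduce to a single call; the lines below illustrate those call sites and are not part of the posted patch.

```c
/* In xilinx_dma_prep_slave_sg(), inside the per-segment loop: */
copy = xilinx_dma_calc_copysize(chan, sg_dma_len(sg), sg_used);

/* In xilinx_dma_prep_dma_cyclic(), inside the per-period loop: */
copy = xilinx_dma_calc_copysize(chan, period_len, sg_used);
```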