Message ID | 20250219110847.725628-1-devverma@amd.com (mailing list archive)
---|---
State | New
Series | dmaengine: dw-edma: Add simple mode support
On Wed, Feb 19, 2025 at 04:38:47PM +0530, Devendra K Verma wrote:

+ Niklas (who also looked into the MEMCPY for eDMA)

> Added the simple or non-linked list DMA mode of transfer.
>

Patch subject and description are also simple :) You completely forgot to
mention that you are adding the DMA_MEMCPY support to this driver. That too,
only for HDMA.

> Signed-off-by: Devendra K Verma <devverma@amd.com>
> ---
>  drivers/dma/dw-edma/dw-edma-core.c    | 38 +++++++++++++++++
>  drivers/dma/dw-edma/dw-edma-core.h    |  1 +
>  drivers/dma/dw-edma/dw-hdma-v0-core.c | 59 ++++++++++++++++++++++++++-
>  3 files changed, 97 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
> index 68236247059d..bd975e6d419a 100644
> --- a/drivers/dma/dw-edma/dw-edma-core.c
> +++ b/drivers/dma/dw-edma/dw-edma-core.c
> @@ -595,6 +595,43 @@ dw_edma_device_prep_interleaved_dma(struct dma_chan *dchan,
>  	return dw_edma_device_transfer(&xfer);
>  }
>
> +static struct dma_async_tx_descriptor *
> +dw_edma_device_prep_dma_memcpy(struct dma_chan *dchan,
> +			       dma_addr_t dst,
> +			       dma_addr_t src, size_t len,
> +			       unsigned long flags)
> +{
> +	struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);
> +	struct dw_edma_chunk *chunk;
> +	struct dw_edma_burst *burst;
> +	struct dw_edma_desc *desc;
> +
> +	desc = dw_edma_alloc_desc(chan);
> +	if (unlikely(!desc))
> +		return NULL;
> +
> +	chunk = dw_edma_alloc_chunk(desc);
> +	if (unlikely(!chunk))
> +		goto err_alloc;
> +
> +	burst = dw_edma_alloc_burst(chunk);
> +	if (unlikely(!burst))
> +		goto err_alloc;
> +
> +	burst->sar = src;
> +	burst->dar = dst;

Niklas looked into adding MEMCPY support but was blocked by the fact that
device_prep_dma_memcpy() assumes that the direction is always MEM_TO_MEM. But
the eDMA driver (HDMA also?) only supports transfers between remote and local
DDR. So only MEM_TO_DEV and DEV_TO_MEM are valid directions (assuming that we
call the remote DDR as DEV).

One can also argue that since both are DDR addresses anyway, we could use the
MEM_TO_MEM direction. But that will not help in identifying the unsupported
local-to-local and remote-to-remote transfers.

I haven't referred to the HDMA spec yet, but does HDMA support the above case
as well?

- Mani
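For illustration, the early direction check being discussed could sit in front
of the posted callback, roughly as in the sketch below. This is only a sketch:
is_remote_ddr(), remote_base, and remote_sz are hypothetical placeholders, not
existing dw-edma fields, and classifying addresses this way assumes remote DDR
is reachable through a single known window.

/*
 * Hypothetical sketch, not part of the patch: keep MEM_TO_MEM at the
 * dmaengine API level, but classify each side of the copy so that
 * local-to-local and remote-to-remote requests fail early.
 */
static bool is_remote_ddr(struct dw_edma *dw, dma_addr_t addr)
{
	/* Assumption: remote DDR is reachable through one outbound window */
	return addr >= dw->remote_base &&
	       addr < dw->remote_base + dw->remote_sz;
}

static struct dma_async_tx_descriptor *
dw_edma_prep_memcpy_checked(struct dma_chan *dchan, dma_addr_t dst,
			    dma_addr_t src, size_t len, unsigned long flags)
{
	struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);

	/* Exactly one side must be remote (DEV_TO_MEM or MEM_TO_DEV) */
	if (is_remote_ddr(chan->dw, src) == is_remote_ddr(chan->dw, dst))
		return NULL;

	return dw_edma_device_prep_dma_memcpy(dchan, dst, src, len, flags);
}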
On Fri, Feb 21, 2025 at 01:16:12PM +0530, Manivannan Sadhasivam wrote:
> On Wed, Feb 19, 2025 at 04:38:47PM +0530, Devendra K Verma wrote:
>
> + Niklas (who also looked into the MEMCPY for eDMA)
>
> > Added the simple or non-linked list DMA mode of transfer.
> >
> Patch subject and description are also simple :) You completely forgot to
> mention that you are adding the DMA_MEMCPY support to this driver. That too,
> only for HDMA.
>
[...]
>
> Niklas looked into adding MEMCPY support but was blocked by the fact that
> device_prep_dma_memcpy() assumes that the direction is always MEM_TO_MEM.
> But the eDMA driver (HDMA also?) only supports transfers between remote and
> local DDR. So only MEM_TO_DEV and DEV_TO_MEM are valid directions (assuming
> that we call the remote DDR as DEV).

In the rk3588 TRM:

  MAP_FORMAT
  0x0 (EDMA_LEGACY_PL): Legacy DMA register map accessed by the port-logic
      registers
  0x1 (EDMA_LEGACY_UNROLL): Legacy DMA register map, mapped to a PF/BAR
  0x5 (HDMA_COMPATIBILITY_MODE): HDMA compatibility mode (CC_LEGACY_DMA_MAP = 1)
      register map, mapped to a PF/BAR
  0x7 (HDMA_NATIVE): HDMA native (CC_LEGACY_DMA_MAP = 0) register map,
      mapped to a PF/BAR

  Read-only register. Value After Reset: 0x1.

So the eDMA in rk3588 is EDMA_LEGACY_UNROLL. I don't know if the limitation
that you correctly described applies to all the other formats as well.

Kind regards,
Niklas
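For quick reference, the MAP_FORMAT encodings quoted above can be written out
as an enum. The names and values below are taken directly from the TRM text;
the mainline driver models the same values as enum dw_edma_map_format (the
EDMA_MF_* constants) in include/linux/dma/edma.h.

/* MAP_FORMAT encodings as documented in the rk3588 TRM (reference only) */
enum rk3588_dma_map_format {
	EDMA_LEGACY_PL          = 0x0, /* legacy map via port-logic registers */
	EDMA_LEGACY_UNROLL      = 0x1, /* legacy map via PF/BAR; rk3588 reset value */
	HDMA_COMPATIBILITY_MODE = 0x5, /* CC_LEGACY_DMA_MAP = 1, via PF/BAR */
	HDMA_NATIVE             = 0x7, /* CC_LEGACY_DMA_MAP = 0, via PF/BAR */
};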
Added the simple or non-linked list DMA mode of transfer.

Signed-off-by: Devendra K Verma <devverma@amd.com>
---
 drivers/dma/dw-edma/dw-edma-core.c    | 38 +++++++++++++++++
 drivers/dma/dw-edma/dw-edma-core.h    |  1 +
 drivers/dma/dw-edma/dw-hdma-v0-core.c | 59 ++++++++++++++++++++++++++-
 3 files changed, 97 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 68236247059d..bd975e6d419a 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -595,6 +595,43 @@ dw_edma_device_prep_interleaved_dma(struct dma_chan *dchan,
 	return dw_edma_device_transfer(&xfer);
 }
 
+static struct dma_async_tx_descriptor *
+dw_edma_device_prep_dma_memcpy(struct dma_chan *dchan,
+			       dma_addr_t dst,
+			       dma_addr_t src, size_t len,
+			       unsigned long flags)
+{
+	struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);
+	struct dw_edma_chunk *chunk;
+	struct dw_edma_burst *burst;
+	struct dw_edma_desc *desc;
+
+	desc = dw_edma_alloc_desc(chan);
+	if (unlikely(!desc))
+		return NULL;
+
+	chunk = dw_edma_alloc_chunk(desc);
+	if (unlikely(!chunk))
+		goto err_alloc;
+
+	burst = dw_edma_alloc_burst(chunk);
+	if (unlikely(!burst))
+		goto err_alloc;
+
+	burst->sar = src;
+	burst->dar = dst;
+	burst->sz = len;
+	chunk->non_ll_en = true;
+
+	desc->alloc_sz += burst->sz;
+
+	return vchan_tx_prep(&chan->vc, &desc->vd, flags);
+
+err_alloc:
+	dw_edma_free_desc(desc);
+	return NULL;
+}
+
 static void dw_edma_done_interrupt(struct dw_edma_chan *chan)
 {
 	struct dw_edma_desc *desc;
@@ -806,6 +843,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc)
 	dma->device_prep_slave_sg = dw_edma_device_prep_slave_sg;
 	dma->device_prep_dma_cyclic = dw_edma_device_prep_dma_cyclic;
 	dma->device_prep_interleaved_dma = dw_edma_device_prep_interleaved_dma;
+	dma->device_prep_dma_memcpy = dw_edma_device_prep_dma_memcpy;
 
 	dma_set_max_seg_size(dma->dev, U32_MAX);
 
diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h
index 71894b9e0b15..b496a1e5e326 100644
--- a/drivers/dma/dw-edma/dw-edma-core.h
+++ b/drivers/dma/dw-edma/dw-edma-core.h
@@ -58,6 +58,7 @@ struct dw_edma_chunk {
 	u8 cb;
 
 	struct dw_edma_region ll_region; /* Linked list */
+	bool non_ll_en;
 };
 
 struct dw_edma_desc {
diff --git a/drivers/dma/dw-edma/dw-hdma-v0-core.c b/drivers/dma/dw-edma/dw-hdma-v0-core.c
index e3f8db4fe909..0d5fdab925fd 100644
--- a/drivers/dma/dw-edma/dw-hdma-v0-core.c
+++ b/drivers/dma/dw-edma/dw-hdma-v0-core.c
@@ -225,7 +225,56 @@ static void dw_hdma_v0_sync_ll_data(struct dw_edma_chunk *chunk)
 		readl(chunk->ll_region.vaddr.io);
 }
 
-static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+static void dw_hdma_v0_non_ll_start(struct dw_edma_chunk *chunk)
+{
+	struct dw_edma_chan *chan = chunk->chan;
+	struct dw_edma *dw = chan->dw;
+	struct dw_edma_burst *burst;
+	u64 addr;
+	u32 val;
+
+	burst = list_first_entry(&chunk->burst->list,
+				 struct dw_edma_burst, list);
+	if (!burst)
+		return;
+
+	/* Source Address */
+	addr = burst->sar;
+
+	SET_CH_32(dw, chan->dir, chan->id, ch_en, BIT(0));
+
+	SET_CH_32(dw, chan->dir, chan->id, sar.lsb, lower_32_bits(addr));
+	SET_CH_32(dw, chan->dir, chan->id, sar.msb, upper_32_bits(addr));
+
+	/* Destination Address */
+	addr = burst->dar;
+
+	SET_CH_32(dw, chan->dir, chan->id, dar.lsb, lower_32_bits(addr));
+	SET_CH_32(dw, chan->dir, chan->id, dar.msb, upper_32_bits(addr));
+
+	/* Size */
+	SET_CH_32(dw, chan->dir, chan->id, transfer_size, burst->sz);
+
+	/* Interrupts */
+	val = GET_CH_32(dw, chan->dir, chan->id, int_setup) |
+	      HDMA_V0_STOP_INT_MASK | HDMA_V0_ABORT_INT_MASK |
+	      HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_ABORT_INT_EN;
+
+	if (!(dw->chip->flags & DW_EDMA_CHIP_LOCAL))
+		val |= HDMA_V0_REMOTE_STOP_INT_EN | HDMA_V0_REMOTE_ABORT_INT_EN;
+
+	SET_CH_32(dw, chan->dir, chan->id, int_setup, val);
+
+	/* Channel control */
+	val = GET_CH_32(dw, chan->dir, chan->id, control1);
+	val &= ~HDMA_V0_LINKLIST_EN;
+	SET_CH_32(dw, chan->dir, chan->id, control1, val);
+
+	/* Ring the doorbell */
+	SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START);
+}
+
+static void dw_hdma_v0_ll_start(struct dw_edma_chunk *chunk, bool first)
 {
 	struct dw_edma_chan *chan = chunk->chan;
 	struct dw_edma *dw = chan->dw;
@@ -263,6 +312,14 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
 	SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START);
 }
 
+static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+{
+	if (!chunk->non_ll_en)
+		dw_hdma_v0_ll_start(chunk, first);
+	else
+		dw_hdma_v0_non_ll_start(chunk);
+}
+
 static void dw_hdma_v0_core_ch_config(struct dw_edma_chan *chan)
 {
 	struct dw_edma *dw = chan->dw;
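For completeness, a client would reach the new callback through the generic
dmaengine MEMCPY helpers. The following is a minimal sketch under assumptions:
the channel is presumed to have been requested from this driver already,
dst/src are DMA addresses mapped by the caller, and a synchronous busy-wait
stands in for real completion handling.

#include <linux/dmaengine.h>

/* Client-side sketch: submit one MEMCPY transfer and wait for completion */
static int demo_memcpy(struct dma_chan *chan, dma_addr_t dst,
		       dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);

	return dma_sync_wait(chan, cookie) == DMA_COMPLETE ? 0 : -EIO;
}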