[2/3] dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC

Message ID: 20180901164215.3683-3-manivannan.sadhasivam@linaro.org (mailing list archive)
State: Changes Requested
Series: Add slave DMA support for Actions Semi S900 SoC

Commit Message

Manivannan Sadhasivam Sept. 1, 2018, 4:42 p.m. UTC
Add Slave and Cyclic mode support for the Actions Semi Owl S900 SoC. The
slave mode supports a bus width of 4 bytes, common to all peripherals, and
a 1-byte width specific to the UART.

The cyclic mode supports only block mode transfers.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/dma/owl-dma.c | 273 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 266 insertions(+), 7 deletions(-)

Comments

Vinod Koul Sept. 18, 2018, 4:35 p.m. UTC | #1
On 01-09-18, 22:12, Manivannan Sadhasivam wrote:

> @@ -364,6 +372,26 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
>  			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
>  			OWL_DMA_MODE_DAM_INC;
>  
> +		break;
> +	case DMA_MEM_TO_DEV:
> +		mode |= OWL_DMA_MODE_TS(vchan->drq)
> +			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
> +			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
> +
> +		/* Handle bus width for UART */
> +		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
> +			mode |= OWL_DMA_MODE_NDDBW_8BIT;

This is fine per se, but it is not the correct way to handle it in a
dmaengine driver. You should be agnostic to the user of dmaengine, so
handle all the bus widths the IP block supports and update the values
accordingly. That way, new users can be added w/o requiring changes in
the dmaengine driver.
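
For illustration, width handling along these lines can be factored into a
helper covering every width the IP supports, for either direction. A sketch,
where the helper name and the OWL_DMA_MODE_NDDBW_32BIT define are assumptions,
not part of this patch:

static int owl_dma_cfg_width(enum dma_slave_buswidth width, u32 *mode)
{
	/* The Owl IP distinguishes only 8-bit and 32-bit device bus widths */
	switch (width) {
	case DMA_SLAVE_BUSWIDTH_1_BYTE:
		*mode |= OWL_DMA_MODE_NDDBW_8BIT;
		break;
	case DMA_SLAVE_BUSWIDTH_4_BYTES:
		*mode |= OWL_DMA_MODE_NDDBW_32BIT;
		break;
	default:
		return -EINVAL;
	}

	return 0;
}

Such a helper serves both DMA_MEM_TO_DEV (dst_addr_width) and DMA_DEV_TO_MEM
(src_addr_width), and rejects widths the hardware cannot do instead of
silently falling back to the default.
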
Manivannan Sadhasivam Sept. 18, 2018, 10:52 p.m. UTC | #2
On Tue, Sep 18, 2018 at 09:35:12AM -0700, Vinod wrote:
> On 01-09-18, 22:12, Manivannan Sadhasivam wrote:
> 
> > @@ -364,6 +372,26 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
> >  			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
> >  			OWL_DMA_MODE_DAM_INC;
> >  
> > +		break;
> > +	case DMA_MEM_TO_DEV:
> > +		mode |= OWL_DMA_MODE_TS(vchan->drq)
> > +			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
> > +			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
> > +
> > +		/* Handle bus width for UART */
> > +		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
> > +			mode |= OWL_DMA_MODE_NDDBW_8BIT;
> 
> This is fine per se, but it is not the correct way to handle it in a
> dmaengine driver. You should be agnostic to the user of dmaengine, so
> handle all the bus widths the IP block supports and update the values
> accordingly. That way, new users can be added w/o requiring changes in
> the dmaengine driver.
>

Hi Vinod,

Currently, all members of the Owl family support only 32-bit and 8-bit
bus widths. 32-bit is common to all peripherals, and 8-bit applies only to
the UART, since its internal buffer is 8 bits wide. So this makes sense to me!

Thanks,
Mani

> -- 
> ~Vinod
Vinod Koul Sept. 18, 2018, 11:32 p.m. UTC | #3
Hi Mani,

On 18-09-18, 15:52, Manivannan Sadhasivam wrote:
> On Tue, Sep 18, 2018 at 09:35:12AM -0700, Vinod wrote:
> > On 01-09-18, 22:12, Manivannan Sadhasivam wrote:
> > 
> > > @@ -364,6 +372,26 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
> > >  			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
> > >  			OWL_DMA_MODE_DAM_INC;
> > >  
> > > +		break;
> > > +	case DMA_MEM_TO_DEV:
> > > +		mode |= OWL_DMA_MODE_TS(vchan->drq)
> > > +			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
> > > +			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
> > > +
> > > +		/* Handle bus width for UART */
> > > +		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
> > > +			mode |= OWL_DMA_MODE_NDDBW_8BIT;
> > 
> > This is fine per se, but it is not the correct way to handle it in a
> > dmaengine driver. You should be agnostic to the user of dmaengine, so
> > handle all the bus widths the IP block supports and update the values
> > accordingly. That way, new users can be added w/o requiring changes in
> > the dmaengine driver.
> 
> Currently, all members of the Owl family support only 32-bit and 8-bit
> bus widths. 32-bit is common to all peripherals, and 8-bit applies only to
> the UART, since its internal buffer is 8 bits wide. So this makes sense to me!

Above you are only handling DMA_SLAVE_BUSWIDTH_1_BYTE and not the 32-bit
width which this IP supports. You should handle all the widths supported
by the hardware.
Manivannan Sadhasivam Sept. 18, 2018, 11:34 p.m. UTC | #4
On Tue, Sep 18, 2018 at 04:32:00PM -0700, Vinod wrote:
> Hi Mani,
> 
> On 18-09-18, 15:52, Manivannan Sadhasivam wrote:
> > On Tue, Sep 18, 2018 at 09:35:12AM -0700, Vinod wrote:
> > > On 01-09-18, 22:12, Manivannan Sadhasivam wrote:
> > > 
> > > > @@ -364,6 +372,26 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
> > > >  			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
> > > >  			OWL_DMA_MODE_DAM_INC;
> > > >  
> > > > +		break;
> > > > +	case DMA_MEM_TO_DEV:
> > > > +		mode |= OWL_DMA_MODE_TS(vchan->drq)
> > > > +			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
> > > > +			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
> > > > +
> > > > +		/* Handle bus width for UART */
> > > > +		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
> > > > +			mode |= OWL_DMA_MODE_NDDBW_8BIT;
> > > 
> > > This is fine per se, but it is not the correct way to handle it in a
> > > dmaengine driver. You should be agnostic to the user of dmaengine, so
> > > handle all the bus widths the IP block supports and update the values
> > > accordingly. That way, new users can be added w/o requiring changes in
> > > the dmaengine driver.
> > 
> > Currently, all members of the Owl family support only 32-bit and 8-bit
> > bus widths. 32-bit is common to all peripherals, and 8-bit applies only to
> > the UART, since its internal buffer is 8 bits wide. So this makes sense to me!
> 
> Above you are only handling DMA_SLAVE_BUSWIDTH_1_BYTE and not the 32-bit
> width which this IP supports. You should handle all the widths supported
> by the hardware.
>

Hi Vinod,

The default width is 32-bit and we will only override it for the UART...
Should I add a comment stating this?

Thanks,
Mani

> -- 
> ~Vinod
Manivannan Sadhasivam Sept. 18, 2018, 11:56 p.m. UTC | #5
On Tue, Sep 18, 2018 at 04:34:14PM -0700, Manivannan Sadhasivam wrote:
> On Tue, Sep 18, 2018 at 04:32:00PM -0700, Vinod wrote:
> > Hi Mani,
> > 
> > On 18-09-18, 15:52, Manivannan Sadhasivam wrote:
> > > On Tue, Sep 18, 2018 at 09:35:12AM -0700, Vinod wrote:
> > > > On 01-09-18, 22:12, Manivannan Sadhasivam wrote:
> > > > 
> > > > > @@ -364,6 +372,26 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
> > > > >  			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
> > > > >  			OWL_DMA_MODE_DAM_INC;
> > > > >  
> > > > > +		break;
> > > > > +	case DMA_MEM_TO_DEV:
> > > > > +		mode |= OWL_DMA_MODE_TS(vchan->drq)
> > > > > +			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
> > > > > +			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
> > > > > +
> > > > > +		/* Handle bus width for UART */
> > > > > +		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
> > > > > +			mode |= OWL_DMA_MODE_NDDBW_8BIT;
> > > > 
> > > > This is fine per se, but it is not the correct way to handle it in a
> > > > dmaengine driver. You should be agnostic to the user of dmaengine, so
> > > > handle all the bus widths the IP block supports and update the values
> > > > accordingly. That way, new users can be added w/o requiring changes in
> > > > the dmaengine driver.
> > > 
> > > Currently, all members of the Owl family support only 32-bit and 8-bit
> > > bus widths. 32-bit is common to all peripherals, and 8-bit applies only to
> > > the UART, since its internal buffer is 8 bits wide. So this makes sense to me!
> > 
> > Above you are only handling DMA_SLAVE_BUSWIDTH_1_BYTE and not the 32-bit
> > width which this IP supports. You should handle all the widths supported
> > by the hardware.
> >
> 
> Hi Vinod,
> 
> The default width is 32-bit and we will only override it for the UART...
> Should I add a comment stating this?
>

I think it is better to select the 32-bit mode explicitly, even though it
is the default. I will update it in the next revision.

Thanks,
Mani

> Thanks,
> Mani
> 
> > -- 
> > ~Vinod
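
Following that conclusion, the DMA_MEM_TO_DEV branch of owl_dma_cfg_lli would
set the device bus width explicitly even when it matches the 32-bit default.
A sketch of what the next revision could look like, reusing the hypothetical
owl_dma_cfg_width() helper from above:

	case DMA_MEM_TO_DEV:
		mode |= OWL_DMA_MODE_TS(vchan->drq)
			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;

		/* Set the bus width explicitly, including the 32-bit default */
		if (owl_dma_cfg_width(sconfig->dst_addr_width, &mode))
			return -EINVAL;

		break;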

Patch

diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
index 7812a6338acd..7f7b3e76bcf7 100644
--- a/drivers/dma/owl-dma.c
+++ b/drivers/dma/owl-dma.c
@@ -21,6 +21,7 @@ 
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/of_device.h>
+#include <linux/of_dma.h>
 #include <linux/slab.h>
 #include "virt-dma.h"
 
@@ -165,6 +166,7 @@  struct owl_dma_lli {
 struct owl_dma_txd {
 	struct virt_dma_desc	vd;
 	struct list_head	lli_list;
+	bool			cyclic;
 };
 
 /**
@@ -191,6 +193,8 @@  struct owl_dma_vchan {
 	struct virt_dma_chan	vc;
 	struct owl_dma_pchan	*pchan;
 	struct owl_dma_txd	*txd;
+	struct dma_slave_config cfg;
+	u8			drq;
 };
 
 /**
@@ -336,9 +340,11 @@  static struct owl_dma_lli *owl_dma_alloc_lli(struct owl_dma *od)
 
 static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
 					   struct owl_dma_lli *prev,
-					   struct owl_dma_lli *next)
+					   struct owl_dma_lli *next,
+					   bool is_cyclic)
 {
-	list_add_tail(&next->node, &txd->lli_list);
+	if (!is_cyclic)
+		list_add_tail(&next->node, &txd->lli_list);
 
 	if (prev) {
 		prev->hw.next_lli = next->phys;
@@ -351,7 +357,9 @@  static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
 static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 				  struct owl_dma_lli *lli,
 				  dma_addr_t src, dma_addr_t dst,
-				  u32 len, enum dma_transfer_direction dir)
+				  u32 len, enum dma_transfer_direction dir,
+				  struct dma_slave_config *sconfig,
+				  bool is_cyclic)
 {
 	struct owl_dma_lli_hw *hw = &lli->hw;
 	u32 mode;
@@ -364,6 +372,26 @@  static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
 			OWL_DMA_MODE_DAM_INC;
 
+		break;
+	case DMA_MEM_TO_DEV:
+		mode |= OWL_DMA_MODE_TS(vchan->drq)
+			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
+			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
+
+		/* Handle bus width for UART */
+		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			mode |= OWL_DMA_MODE_NDDBW_8BIT;
+
+		break;
+	case DMA_DEV_TO_MEM:
+		mode |= OWL_DMA_MODE_TS(vchan->drq)
+			| OWL_DMA_MODE_ST_DEV | OWL_DMA_MODE_DT_DCU
+			| OWL_DMA_MODE_SAM_CONST | OWL_DMA_MODE_DAM_INC;
+
+		/* Handle bus width for UART */
+		if (sconfig->src_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			mode |= OWL_DMA_MODE_NDDBW_8BIT;
+
 		break;
 	default:
 		return -EINVAL;
@@ -381,7 +409,10 @@  static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 				 OWL_DMA_LLC_SAV_LOAD_NEXT |
 				 OWL_DMA_LLC_DAV_LOAD_NEXT);
 
-	hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);
+	if (is_cyclic)
+		hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_BLOCK);
+	else
+		hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);
 
 	return 0;
 }
@@ -443,6 +474,16 @@  static void owl_dma_terminate_pchan(struct owl_dma *od,
 	spin_unlock_irqrestore(&od->lock, flags);
 }
 
+static void owl_dma_pause_pchan(struct owl_dma_pchan *pchan)
+{
+	pchan_writel(pchan, 1, OWL_DMAX_PAUSE);
+}
+
+static void owl_dma_resume_pchan(struct owl_dma_pchan *pchan)
+{
+	pchan_writel(pchan, 0, OWL_DMAX_PAUSE);
+}
+
 static int owl_dma_start_next_txd(struct owl_dma_vchan *vchan)
 {
 	struct owl_dma *od = to_owl_dma(vchan->vc.chan.device);
@@ -464,7 +505,10 @@  static int owl_dma_start_next_txd(struct owl_dma_vchan *vchan)
 	lli = list_first_entry(&txd->lli_list,
 			       struct owl_dma_lli, node);
 
-	int_ctl = OWL_DMA_INTCTL_SUPER_BLOCK;
+	if (txd->cyclic)
+		int_ctl = OWL_DMA_INTCTL_BLOCK;
+	else
+		int_ctl = OWL_DMA_INTCTL_SUPER_BLOCK;
 
 	pchan_writel(pchan, OWL_DMAX_MODE, OWL_DMA_MODE_LME);
 	pchan_writel(pchan, OWL_DMAX_LINKLIST_CTL,
@@ -627,6 +671,54 @@  static int owl_dma_terminate_all(struct dma_chan *chan)
 	return 0;
 }
 
+static int owl_dma_config(struct dma_chan *chan,
+			  struct dma_slave_config *config)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+
+	/* Reject definitely invalid configurations */
+	if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
+	    config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
+		return -EINVAL;
+
+	memcpy(&vchan->cfg, config, sizeof(struct dma_slave_config));
+
+	return 0;
+}
+
+static int owl_dma_pause(struct dma_chan *chan)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	owl_dma_pause_pchan(vchan->pchan);
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
+static int owl_dma_resume(struct dma_chan *chan)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	unsigned long flags;
+
+	if (!vchan->pchan && !vchan->txd)
+		return 0;
+
+	dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	owl_dma_resume_pchan(vchan->pchan);
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
 static u32 owl_dma_getbytes_chan(struct owl_dma_vchan *vchan)
 {
 	struct owl_dma_pchan *pchan;
@@ -754,13 +846,14 @@  static struct dma_async_tx_descriptor
 		bytes = min_t(size_t, (len - offset), OWL_DMA_FRAME_MAX_LENGTH);
 
 		ret = owl_dma_cfg_lli(vchan, lli, src + offset, dst + offset,
-				      bytes, DMA_MEM_TO_MEM);
+				      bytes, DMA_MEM_TO_MEM,
+				      &vchan->cfg, txd->cyclic);
 		if (ret) {
 			dev_warn(chan2dev(chan), "failed to config lli\n");
 			goto err_txd_free;
 		}
 
-		prev = owl_dma_add_lli(txd, prev, lli);
+		prev = owl_dma_add_lli(txd, prev, lli, false);
 	}
 
 	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
@@ -770,6 +863,133 @@  static struct dma_async_tx_descriptor
 	return NULL;
 }
 
+static struct dma_async_tx_descriptor
+		*owl_dma_prep_slave_sg(struct dma_chan *chan,
+				       struct scatterlist *sgl,
+				       unsigned int sg_len,
+				       enum dma_transfer_direction dir,
+				       unsigned long flags, void *context)
+{
+	struct owl_dma *od = to_owl_dma(chan->device);
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct owl_dma_txd *txd;
+	struct owl_dma_lli *lli, *prev = NULL;
+	struct scatterlist *sg;
+	dma_addr_t addr, src = 0, dst = 0;
+	size_t len;
+	int ret, i;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	INIT_LIST_HEAD(&txd->lli_list);
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		addr = sg_dma_address(sg);
+		len = sg_dma_len(sg);
+
+		if (len > OWL_DMA_FRAME_MAX_LENGTH) {
+			dev_err(od->dma.dev,
+				"frame length exceeds max supported length\n");
+			goto err_txd_free;
+		}
+
+		lli = owl_dma_alloc_lli(od);
+		if (!lli) {
+			dev_err(chan2dev(chan), "failed to allocate lli\n");
+			goto err_txd_free;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = addr;
+			dst = sconfig->dst_addr;
+		} else {
+			src = sconfig->src_addr;
+			dst = addr;
+		}
+
+		ret = owl_dma_cfg_lli(vchan, lli, src, dst, len, dir, sconfig,
+				      txd->cyclic);
+		if (ret) {
+			dev_warn(chan2dev(chan), "failed to config lli\n");
+			goto err_txd_free;
+		}
+
+		prev = owl_dma_add_lli(txd, prev, lli, false);
+	}
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_txd_free:
+	owl_dma_free_txd(od, txd);
+
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor
+		*owl_prep_dma_cyclic(struct dma_chan *chan,
+				     dma_addr_t buf_addr, size_t buf_len,
+				     size_t period_len,
+				     enum dma_transfer_direction dir,
+				     unsigned long flags)
+{
+	struct owl_dma *od = to_owl_dma(chan->device);
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct owl_dma_txd *txd;
+	struct owl_dma_lli *lli, *prev = NULL, *first = NULL;
+	dma_addr_t src = 0, dst = 0;
+	unsigned int periods = buf_len / period_len;
+	int ret, i;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	INIT_LIST_HEAD(&txd->lli_list);
+	txd->cyclic = true;
+
+	for (i = 0; i < periods; i++) {
+		lli = owl_dma_alloc_lli(od);
+		if (!lli) {
+			dev_warn(chan2dev(chan), "failed to allocate lli\n");
+			goto err_txd_free;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = buf_addr + (period_len * i);
+			dst = sconfig->dst_addr;
+		} else if (dir == DMA_DEV_TO_MEM) {
+			src = sconfig->src_addr;
+			dst = buf_addr + (period_len * i);
+		}
+
+		ret = owl_dma_cfg_lli(vchan, lli, src, dst, period_len,
+				      dir, sconfig, txd->cyclic);
+		if (ret) {
+			dev_warn(chan2dev(chan), "failed to config lli\n");
+			goto err_txd_free;
+		}
+
+		if (!first)
+			first = lli;
+
+		prev = owl_dma_add_lli(txd, prev, lli, false);
+	}
+
+	/* Close the cyclic list: link the last lli back to the first */
+	owl_dma_add_lli(txd, prev, first, true);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_txd_free:
+	owl_dma_free_txd(od, txd);
+
+	return NULL;
+}
+
 static void owl_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
@@ -790,6 +1010,27 @@  static inline void owl_dma_free(struct owl_dma *od)
 	}
 }
 
+static struct dma_chan *owl_dma_of_xlate(struct of_phandle_args *dma_spec,
+					 struct of_dma *ofdma)
+{
+	struct owl_dma *od = ofdma->of_dma_data;
+	struct owl_dma_vchan *vchan;
+	struct dma_chan *chan;
+	u8 drq = dma_spec->args[0];
+
+	if (drq > od->nr_vchans)
+		return NULL;
+
+	chan = dma_get_any_slave_channel(&od->dma);
+	if (!chan)
+		return NULL;
+
+	vchan = to_owl_vchan(chan);
+	vchan->drq = drq;
+
+	return chan;
+}
+
 static int owl_dma_probe(struct platform_device *pdev)
 {
 	struct device_node *np = pdev->dev.of_node;
@@ -833,12 +1074,19 @@  static int owl_dma_probe(struct platform_device *pdev)
 	spin_lock_init(&od->lock);
 
 	dma_cap_set(DMA_MEMCPY, od->dma.cap_mask);
+	dma_cap_set(DMA_SLAVE, od->dma.cap_mask);
+	dma_cap_set(DMA_CYCLIC, od->dma.cap_mask);
 
 	od->dma.dev = &pdev->dev;
 	od->dma.device_free_chan_resources = owl_dma_free_chan_resources;
 	od->dma.device_tx_status = owl_dma_tx_status;
 	od->dma.device_issue_pending = owl_dma_issue_pending;
 	od->dma.device_prep_dma_memcpy = owl_dma_prep_memcpy;
+	od->dma.device_prep_slave_sg = owl_dma_prep_slave_sg;
+	od->dma.device_prep_dma_cyclic = owl_prep_dma_cyclic;
+	od->dma.device_config = owl_dma_config;
+	od->dma.device_pause = owl_dma_pause;
+	od->dma.device_resume = owl_dma_resume;
 	od->dma.device_terminate_all = owl_dma_terminate_all;
 	od->dma.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	od->dma.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
@@ -910,8 +1158,18 @@  static int owl_dma_probe(struct platform_device *pdev)
 		goto err_pool_free;
 	}
 
+	/* Device-tree DMA controller registration */
+	ret = of_dma_controller_register(pdev->dev.of_node,
+					 owl_dma_of_xlate, od);
+	if (ret) {
+		dev_err(&pdev->dev, "of_dma_controller_register failed\n");
+		goto err_dma_unregister;
+	}
+
 	return 0;
 
+err_dma_unregister:
+	dma_async_device_unregister(&od->dma);
 err_pool_free:
 	clk_disable_unprepare(od->clk);
 	dma_pool_destroy(od->lli_pool);
@@ -923,6 +1181,7 @@  static int owl_dma_remove(struct platform_device *pdev)
 {
 	struct owl_dma *od = platform_get_drvdata(pdev);
 
+	of_dma_controller_free(pdev->dev.of_node);
 	dma_async_device_unregister(&od->dma);
 
 	/* Mask all interrupts for this execution environment */
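
For context, a client driver exercises this slave support through the standard
dmaengine API. Below is a minimal, hypothetical MEM_TO_DEV (UART TX) sketch;
the function name, channel name, and callback are made-up placeholders, not
part of this patch:

#include <linux/dmaengine.h>

/* Hypothetical client: queue one UART TX buffer via the DMA slave API */
static int owl_uart_tx_dma(struct device *dev, dma_addr_t buf_dma, size_t len,
			   dma_addr_t uart_fifo_addr,
			   void (*uart_tx_done)(void *), void *arg)
{
	struct dma_slave_config cfg = {
		.dst_addr = uart_fifo_addr,
		/* 1-byte width: the UART case discussed in the thread above */
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
	};
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;
	int ret;

	/* "tx" matches a dmas/dma-names pair resolved via owl_dma_of_xlate */
	chan = dma_request_chan(dev, "tx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		goto err_release;

	desc = dmaengine_prep_slave_single(chan, buf_dma, len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc) {
		ret = -ENOMEM;
		goto err_release;
	}

	desc->callback = uart_tx_done;
	desc->callback_param = arg;
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;

err_release:
	dma_release_channel(chan);
	return ret;
}

In a real driver the channel would be requested once at probe time rather
than per transfer; it is requested inline here only to keep the sketch
self-contained.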