Message ID | 4ba085c7-5256-6c8a-5697-c0d5736a6e46@ti.com (mailing list archive) |
---|---|
State | Not Applicable |
On 04/17/2018 03:46 PM, Peter Ujfalusi wrote: > On 2018-04-17 15:54, Lars-Peter Clausen wrote: >> On 04/17/2018 01:43 PM, Radhey Shyam Pandey wrote: >>> Hi Vinod, >>> >>>> -----Original Message----- >>>> From: Vinod Koul [mailto:vinod.koul@intel.com] >>>> Sent: Wednesday, April 11, 2018 2:39 PM >>>> To: Radhey Shyam Pandey <radheys@xilinx.com> >>>> Cc: dan.j.williams@intel.com; michal.simek@xilinx.com; Appana Durga >>>> Kedareswara Rao <appanad@xilinx.com>; Radhey Shyam Pandey >>>> <radheys@xilinx.com>; lars@metafoo.de; dmaengine@vger.kernel.org; >>>> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org >>>> Subject: Re: [RFC 2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control >>>> words to netdev dma client >>>> >>>> On Mon, Apr 02, 2018 at 04:09:02PM +0530, Radhey Shyam Pandey wrote: >>>> >>>>> + >>>>> + if (chan->xdev->has_axieth_connected) { >>>>> + seg = list_first_entry(&desc->segments, >>>>> + struct xilinx_axidma_tx_segment, >>>> node); >>>>> + if (cb.callback_param) { >>>>> + app_w = (u32 *) cb.callback_param; >>>> >>>> why are you interpreting callback_param? This is plainly wrong. >>>> we do not know what is the interpretation of callback_param and it is >>>> internal to submitter. >>> In design, if AXI DMA is connected to AXI Ethernet IP there are certain >>> AXI4-Stream Status fields (RX) that we need to pass to ethernet driver >>> along with data buffer. An example includes: checksum fields, packet >>> length etc. To pass these control words there is a structure defined >>> between dmaengine and client. Before calling the client callback >>> stream control words are copied to dma client callback_param struct >>> (only if axieth is connected). >>> >>> I understand it's not an ideal way and we shouldn't be interpreting >>> callback_param but couldn't find any better alternative of passing >>> custom information from dmaengine to client driver and still be >>> aligned to the framework. >>> >>>> >>>> What exactly is the problem you are trying to solve? >>> As mentioned above we need to pass AXI4-stream words(custom >>> data) from dmaengine driver to dma client driver(ethernet) for >>> each DMA descriptor. Current solution populates callback_param >>> struct (only if axieth is connected). Please let me know if there is >>> an alternate solution. >> >> The standard interfaces need to work in a way that a client using them does >> not have to know to which DMA controller it is connected. In a similar way >> the DMA controller shouldn't have any client specific logic for the generic >> interfaces. Otherwise there is no point of having a generic interface. >> >> There are two options. >> >> Either you extend the generic interfaces so it can cover your usecase in a >> generic way. E.g. the ability to attach meta data to transfer. > > Fwiw I have this patch as part of a bigger work to achieve similar results: That's good stuff. Is this in a public tree somewhere? >> Or you can implement a interface that is specific to your DMA controller and >> any client using this interface knows it is talking to your DMA controller. > > Hrm, so we can have DMA driver specific calls? The reason why TI's keystone 2 > navigator DMA support was rejected that it was introducing NAV specific calls > for clients to configure features not yet supported by the framework. In my opinion it is OK, somebody else might have different ideas. 
I mean it is not nice, but it is better than the alternative of overloading the generic API with driver-specific semantics or introducing some kind of IOCTL catch-all callback. If there is tight coupling between the DMA core and client and there is no intention of using a generic client, the best solution might even be to not use DMAengine at all.
On 2018-04-17 16:58, Lars-Peter Clausen wrote: >>> There are two options. >>> >>> Either you extend the generic interfaces so it can cover your usecase in a >>> generic way. E.g. the ability to attach meta data to transfer. >> >> Fwiw I have this patch as part of a bigger work to achieve similar results: > > That's good stuff. Is this in a public tree somewhere? Not atm. I can not send the user of the new API and I did not wanted to send something like this out of the blue w/o context. But as it is a generic patch, I can send it as well. The only thing is that the need for the memcpy, so I might end up with ptr = get_metadata_ptr(desc, &size); /* size: in RX the valid size */ and set_metadata_size(); /* in TX to tell how the client placed */ Or something like that, the attach_metadata() as it is works just fine, but high throughput might not like the memcpy. >>> Or you can implement a interface that is specific to your DMA controller and >>> any client using this interface knows it is talking to your DMA controller. >> >> Hrm, so we can have DMA driver specific calls? The reason why TI's keystone 2 >> navigator DMA support was rejected that it was introducing NAV specific calls >> for clients to configure features not yet supported by the framework. > > In my opinion it is OK, somebody else might have different ideas. I mean it > is not nice, but it is better than the alternative of overloading the > generic API with driver specific semantics or introducing some kind of IOCTL > catch all callback. True, but the generic API can be extended as well to cover new grounds, features. Like this metadata thing. > If there is tight coupling between the DMA core and client and there is no > intention of using a generic client the best solution might even be to no > use DMAengine at all. This is how the knav stuff ended up. Well it is only used by networking atm, so it is 'fine' to have custom API, but it is not portable. > - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, Apr 17, 2018 at 04:46:43PM +0300, Peter Ujfalusi wrote: > @@ -709,6 +709,11 @@ struct dma_filter { > * be called after period_len bytes have been transferred. > * @device_prep_interleaved_dma: Transfer expression in a generic way. > * @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address > + * @device_attach_metadata: Some DMA engines can send and receive side band > + * information, commands or parameters which is not transferred within the > + * data stream itself. In such case clients can set the metadata to the > + * given descriptor and it is going to be sent to the peripheral, or in > + * case of DEV_TO_MEM the provided buffer will receive the metadata. > * @device_config: Pushes a new configuration to a channel, return 0 or an error > * code > * @device_pause: Pauses any transfer happening on a channel. Returns > @@ -796,6 +801,9 @@ struct dma_device { > struct dma_chan *chan, dma_addr_t dst, u64 data, > unsigned long flags); > > + int (*device_attach_metadata)(struct dma_async_tx_descriptor *desc, > + void *data, size_t len); while i am okay with the concept, I would not want to go again the custom pointer route, this is a no-go for me. Instead lets add the vendor data, define that explicitly. We can use struct, tokens or something else to define these. But lets try to stay away from opaque objects please :-)
On 04/17/2018 05:42 PM, Vinod Koul wrote: > On Tue, Apr 17, 2018 at 04:46:43PM +0300, Peter Ujfalusi wrote: > >> @@ -709,6 +709,11 @@ struct dma_filter { >> * be called after period_len bytes have been transferred. >> * @device_prep_interleaved_dma: Transfer expression in a generic way. >> * @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address >> + * @device_attach_metadata: Some DMA engines can send and receive side band >> + * information, commands or parameters which is not transferred within the >> + * data stream itself. In such case clients can set the metadata to the >> + * given descriptor and it is going to be sent to the peripheral, or in >> + * case of DEV_TO_MEM the provided buffer will receive the metadata. >> * @device_config: Pushes a new configuration to a channel, return 0 or an error >> * code >> * @device_pause: Pauses any transfer happening on a channel. Returns >> @@ -796,6 +801,9 @@ struct dma_device { >> struct dma_chan *chan, dma_addr_t dst, u64 data, >> unsigned long flags); >> >> + int (*device_attach_metadata)(struct dma_async_tx_descriptor *desc, >> + void *data, size_t len); > > while i am okay with the concept, I would not want to go again the custom > pointer route, this is a no-go for me. > > Instead lets add the vendor data, define that explicitly. We can use struct, > tokens or something else to define these. But lets try to stay away from > opaque objects please :-) Well, for all the DMA core cares about the meta-data stream would be as opaque as the data stream itself. The data is in the end client specific. It is data that sits somewhere in memory that should be send along with the actual data to the client. -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On 04/17/2018 04:53 PM, Peter Ujfalusi wrote: > On 2018-04-17 16:58, Lars-Peter Clausen wrote: >>>> There are two options. >>>> >>>> Either you extend the generic interfaces so it can cover your usecase in a >>>> generic way. E.g. the ability to attach meta data to transfer. >>> >>> Fwiw I have this patch as part of a bigger work to achieve similar results: >> >> That's good stuff. Is this in a public tree somewhere? > > Not atm. I can not send the user of the new API and I did not wanted to > send something like this out of the blue w/o context. > > But as it is a generic patch, I can send it as well. The only thing is > that the need for the memcpy, so I might end up with > ptr = get_metadata_ptr(desc, &size); /* size: in RX the valid size */ > > and set_metadata_size(); /* in TX to tell how the client placed */ > > Or something like that, the attach_metadata() as it is works just fine, > but high throughput might not like the memcpy. > In the most abstracted way I'd say metadata and data are two different data streams that are correlated and send/received at the same time. Think multi-planar transfer, like for audio when the right and left channel are in separate buffers and not interleaved. Or video with different color/luminance components in separate buffers. This is something that is at the moment not covered by the dmaengine API either. >>>> Or you can implement a interface that is specific to your DMA controller and >>>> any client using this interface knows it is talking to your DMA controller. >>> >>> Hrm, so we can have DMA driver specific calls? The reason why TI's keystone 2 >>> navigator DMA support was rejected that it was introducing NAV specific calls >>> for clients to configure features not yet supported by the framework. >> >> In my opinion it is OK, somebody else might have different ideas. I mean it >> is not nice, but it is better than the alternative of overloading the >> generic API with driver specific semantics or introducing some kind of IOCTL >> catch all callback. > > True, but the generic API can be extended as well to cover new grounds, > features. Like this metadata thing. > >> If there is tight coupling between the DMA core and client and there is no >> intention of using a generic client the best solution might even be to no >> use DMAengine at all. > > This is how the knav stuff ended up. Well it is only used by networking > atm, so it is 'fine' to have custom API, but it is not portable. I totally agree generic APIs are better, but not everybody has the resources to rewrite the whole framework just because they want to do this tiny thing that isn't covered by the framework yet. In that case it is better to go with a custom API (that might evolve into a generic API), rather than overloading the generic API and putting a strain on everybody who works on the generic API. -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On 2018-04-17 18:54, Lars-Peter Clausen wrote: > On 04/17/2018 04:53 PM, Peter Ujfalusi wrote: >> On 2018-04-17 16:58, Lars-Peter Clausen wrote: >>>>> There are two options. >>>>> >>>>> Either you extend the generic interfaces so it can cover your usecase in a >>>>> generic way. E.g. the ability to attach meta data to transfer. >>>> >>>> Fwiw I have this patch as part of a bigger work to achieve similar results: >>> >>> That's good stuff. Is this in a public tree somewhere? >> >> Not atm. I can not send the user of the new API and I did not wanted to >> send something like this out of the blue w/o context. >> >> But as it is a generic patch, I can send it as well. The only thing is >> that the need for the memcpy, so I might end up with >> ptr = get_metadata_ptr(desc, &size); /* size: in RX the valid size */ >> >> and set_metadata_size(); /* in TX to tell how the client placed */ >> >> Or something like that, the attach_metadata() as it is works just fine, >> but high throughput might not like the memcpy. >> > > In the most abstracted way I'd say metadata and data are two different data > streams that are correlated and send/received at the same time. In my case the meatdata is sideband information or parameters for/from the remote end. Like timestamp, algorithm parameters, keys, etc. It is tight to the data payload, but it is not part of it. But the API should be generic enough to cover other use cases where clients need to provide additional information. For me, the metadata is part of the descriptor we give and receive back from the DMA, others might have sideband channel to send that. For metadata handling we could have: struct dma_desc_metadata_ops { /* To give a buffer for the DMA with the metadata, as it was in my * original patch */ int (*desc_attach_metadata)(struct dma_async_tx_descriptor *desc, void *data, size_t len); void *(*desc_get_metadata_ptr)(struct dma_async_tx_descriptor *desc, size_t *payload_len, size_t *max_len); int (*desc_set_payload_len)(struct dma_async_tx_descriptor *desc, size_t payload_len); }; Probably a simple flag variable to indicate which of the two modes are supported: 1. Client provided metadata buffer handling Clients provide the buffer via desc_attach_metadata(), the DMA driver will do whatever it needs to do, copy it in place, send it differently, use parameters. In RX the received metadata is going to be placed to the provided buffer. 2. Ability to give the metadata pointer to user to work on it. In TX, clients can use desc_get_metadata_ptr() to get the pointer, current payload size and maximum size of the metadata and can work directly on the buffer to place the data. Then desc_set_payload_len() to let the DMA know how much data is actually placed there. In RX, desc_get_metadata_ptr() will give the user the pointer and the payload size so it can process that information correctly. DMA driver can implement either or both, but clients must only use either 1 or 2 to work with the metadata. > Think multi-planar transfer, like for audio when the right and left channel > are in separate buffers and not interleaved. Or video with different > color/luminance components in separate buffers. This is something that is at > the moment not covered by the dmaengine API either. Hrm, true, but it is hardly the metadata use case. It is more like different DMA transfer type. >>>>> Or you can implement a interface that is specific to your DMA controller and >>>>> any client using this interface knows it is talking to your DMA controller. 
>>>> >>>> Hrm, so we can have DMA driver specific calls? The reason why TI's keystone 2 >>>> navigator DMA support was rejected that it was introducing NAV specific calls >>>> for clients to configure features not yet supported by the framework. >>> >>> In my opinion it is OK, somebody else might have different ideas. I mean it >>> is not nice, but it is better than the alternative of overloading the >>> generic API with driver specific semantics or introducing some kind of IOCTL >>> catch all callback. >> >> True, but the generic API can be extended as well to cover new grounds, >> features. Like this metadata thing. >> >>> If there is tight coupling between the DMA core and client and there is no >>> intention of using a generic client the best solution might even be to no >>> use DMAengine at all. >> >> This is how the knav stuff ended up. Well it is only used by networking >> atm, so it is 'fine' to have custom API, but it is not portable. > > I totally agree generic APIs are better, but not everybody has the resources > to rewrite the whole framework just because they want to do this tiny thing > that isn't covered by the framework yet. In that case it is better to go > with a custom API (that might evolve into a generic API), rather than > overloading the generic API and putting a strain on everybody who works on > the generic API. At some point a threshold is reached when the burden of maintaining a custom API is more costly than investing on the extension of the framework. What happens when an existing driver (using DMAengine API) need to be supported on platform where only custom DMA code is available or if you want to migrate to a new DMA from the custom API the drier is wired for? It is just plain pain. At the end what we want to do with DMA: move data from opne place to another (in oversimplified view). The framework can be extended to cover new features w/o much impact on generality or drivers do not need the feature, like the metadata. The impact is minimal for drivers who don't care. If there is interest I can send the core patches for the metadata in couple of days (need to update the documentation as well) and test the new give me the metadata pointer support. Unfortunately I can not send atm the DMA driver itself which is using this. If this helps others to move ahead, I'm fine spending extra time to get it in a good shape for upstream. - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
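For illustration, a minimal client-side TX sketch of the second mode proposed above (working directly on the descriptor's metadata area) might look like the following. The dmaengine_desc_get_metadata_ptr()/dmaengine_desc_set_metadata_len() wrappers are assumed names around the proposed desc_get_metadata_ptr()/desc_set_payload_len() callbacks, and the channel/buffer setup is left to the caller; none of this is an existing mainline API at the time of this thread.

#include <linux/dmaengine.h>
#include <linux/string.h>

/* Sketch of "mode 2" (pointer mode) in the TX direction. The
 * dmaengine_desc_* helpers below are assumed wrappers around the proposed
 * dma_desc_metadata_ops callbacks and do not exist yet.
 */
static int client_send_with_metadata(struct dma_chan *chan, dma_addr_t buf,
				     size_t len, const void *meta,
				     size_t meta_len)
{
	struct dma_async_tx_descriptor *desc;
	size_t payload_len, max_len;
	void *mdata;

	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -ENOMEM;

	/* Borrow the descriptor's metadata area instead of having the DMA
	 * driver memcpy() a client buffer into it. */
	mdata = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);
	if (!mdata)
		return -ENOSYS;
	if (meta_len > max_len)
		return -EINVAL;

	memcpy(mdata, meta, meta_len);
	/* Tell the DMA driver how much metadata the client placed. */
	dmaengine_desc_set_metadata_len(desc, meta_len);

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}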
On 2018-04-17 18:42, Vinod Koul wrote: > On Tue, Apr 17, 2018 at 04:46:43PM +0300, Peter Ujfalusi wrote: > >> @@ -709,6 +709,11 @@ struct dma_filter { >> * be called after period_len bytes have been transferred. >> * @device_prep_interleaved_dma: Transfer expression in a generic way. >> * @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address >> + * @device_attach_metadata: Some DMA engines can send and receive side band >> + * information, commands or parameters which is not transferred within the >> + * data stream itself. In such case clients can set the metadata to the >> + * given descriptor and it is going to be sent to the peripheral, or in >> + * case of DEV_TO_MEM the provided buffer will receive the metadata. >> * @device_config: Pushes a new configuration to a channel, return 0 or an error >> * code >> * @device_pause: Pauses any transfer happening on a channel. Returns >> @@ -796,6 +801,9 @@ struct dma_device { >> struct dma_chan *chan, dma_addr_t dst, u64 data, >> unsigned long flags); >> >> + int (*device_attach_metadata)(struct dma_async_tx_descriptor *desc, >> + void *data, size_t len); > > while i am okay with the concept, I would not want to go again the custom > pointer route, this is a no-go for me. > > Instead lets add the vendor data, define that explicitly. We can use struct, > tokens or something else to define these. But lets try to stay away from > opaque objects please :-) The DMA does not interpret the metadata, it is information which can be only understood by the client driver and the remote peripheral. It is just chunk of data (parameters, timestamps, keys, etc) that needs to travel along with the payload. The content is not relevant for the DMA itself. - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On 2018-04-18 09:39, Peter Ujfalusi wrote: > > > On 2018-04-17 18:42, Vinod Koul wrote: >> On Tue, Apr 17, 2018 at 04:46:43PM +0300, Peter Ujfalusi wrote: >> >>> @@ -709,6 +709,11 @@ struct dma_filter { >>> * be called after period_len bytes have been transferred. >>> * @device_prep_interleaved_dma: Transfer expression in a generic way. >>> * @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address >>> + * @device_attach_metadata: Some DMA engines can send and receive side band >>> + * information, commands or parameters which is not transferred within the >>> + * data stream itself. In such case clients can set the metadata to the >>> + * given descriptor and it is going to be sent to the peripheral, or in >>> + * case of DEV_TO_MEM the provided buffer will receive the metadata. >>> * @device_config: Pushes a new configuration to a channel, return 0 or an error >>> * code >>> * @device_pause: Pauses any transfer happening on a channel. Returns >>> @@ -796,6 +801,9 @@ struct dma_device { >>> struct dma_chan *chan, dma_addr_t dst, u64 data, >>> unsigned long flags); >>> >>> + int (*device_attach_metadata)(struct dma_async_tx_descriptor *desc, >>> + void *data, size_t len); >> >> while i am okay with the concept, I would not want to go again the custom >> pointer route, this is a no-go for me. >> >> Instead lets add the vendor data, define that explicitly. We can use struct, >> tokens or something else to define these. But lets try to stay away from >> opaque objects please :-) > > The DMA does not interpret the metadata, it is information which can be > only understood by the client driver and the remote peripheral. It is > just chunk of data (parameters, timestamps, keys, etc) that needs to > travel along with the payload. > > The content is not relevant for the DMA itself. To add: different peripherals needs to send receive different metadata and even the same peripheral might pass different information based on their operating mode. The size of metadata can be different as well. So it is not really vendor specific metadata, but peripheral, operating mode and other factors affected chunk of data. > > - Péter > > Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. > Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki > - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On 04/18/2018 08:31 AM, Peter Ujfalusi wrote: > > On 2018-04-17 18:54, Lars-Peter Clausen wrote: >> On 04/17/2018 04:53 PM, Peter Ujfalusi wrote: >>> On 2018-04-17 16:58, Lars-Peter Clausen wrote: >>>>>> There are two options. >>>>>> >>>>>> Either you extend the generic interfaces so it can cover your usecase in a >>>>>> generic way. E.g. the ability to attach meta data to transfer. >>>>> >>>>> Fwiw I have this patch as part of a bigger work to achieve similar results: >>>> >>>> That's good stuff. Is this in a public tree somewhere? >>> >>> Not atm. I can not send the user of the new API and I did not wanted to >>> send something like this out of the blue w/o context. >>> >>> But as it is a generic patch, I can send it as well. The only thing is >>> that the need for the memcpy, so I might end up with >>> ptr = get_metadata_ptr(desc, &size); /* size: in RX the valid size */ >>> >>> and set_metadata_size(); /* in TX to tell how the client placed */ >>> >>> Or something like that, the attach_metadata() as it is works just fine, >>> but high throughput might not like the memcpy. >>> >> >> In the most abstracted way I'd say metadata and data are two different data >> streams that are correlated and send/received at the same time. > > In my case the meatdata is sideband information or parameters for/from > the remote end. Like timestamp, algorithm parameters, keys, etc. > > It is tight to the data payload, but it is not part of it. > > But the API should be generic enough to cover other use cases where > clients need to provide additional information. > For me, the metadata is part of the descriptor we give and receive back > from the DMA, others might have sideband channel to send that. > > For metadata handling we could have: > > struct dma_desc_metadata_ops { > /* To give a buffer for the DMA with the metadata, as it was in my > * original patch > */ > int (*desc_attach_metadata)(struct dma_async_tx_descriptor *desc, > void *data, size_t len); > > void *(*desc_get_metadata_ptr)(struct dma_async_tx_descriptor *desc, > size_t *payload_len, size_t *max_len); > int (*desc_set_payload_len)(struct dma_async_tx_descriptor *desc, > size_t payload_len); > }; > > Probably a simple flag variable to indicate which of the two modes are > supported: > 1. Client provided metadata buffer handling > Clients provide the buffer via desc_attach_metadata(), the DMA driver > will do whatever it needs to do, copy it in place, send it differently, > use parameters. > In RX the received metadata is going to be placed to the provided buffer. > 2. Ability to give the metadata pointer to user to work on it. > In TX, clients can use desc_get_metadata_ptr() to get the pointer, > current payload size and maximum size of the metadata and can work > directly on the buffer to place the data. Then desc_set_payload_len() to > let the DMA know how much data is actually placed there. > In RX, desc_get_metadata_ptr() will give the user the pointer and the > payload size so it can process that information correctly. > > DMA driver can implement either or both, but clients must only use > either 1 or 2 to work with the metadata. > > >> Think multi-planar transfer, like for audio when the right and left channel >> are in separate buffers and not interleaved. Or video with different >> color/luminance components in separate buffers. This is something that is at >> the moment not covered by the dmaengine API either. > > Hrm, true, but it is hardly the metadata use case. It is more like > different DMA transfer type. 
When I look at this with my astronaut architect view from high high up above I do not see a difference between metadata and multi-planar data. Both split the data that is sent to the peripheral into multiple sub-streams, each carrying part of the data. I'm sure there are peripherals that interleave data and metadata on the same data stream. Similar to how we have left and right channel interleaved in a audio stream. What about metadata that is not contiguous and split into multiple segments. How do you handle passing a sgl to the metadata interface? And then it suddenly looks quite similar to the normal DMA descriptor interface. But maybe that's just one abstraction level to high. >>>>>> Or you can implement a interface that is specific to your DMA controller and >>>>>> any client using this interface knows it is talking to your DMA controller. >>>>> >>>>> Hrm, so we can have DMA driver specific calls? The reason why TI's keystone 2 >>>>> navigator DMA support was rejected that it was introducing NAV specific calls >>>>> for clients to configure features not yet supported by the framework. >>>> >>>> In my opinion it is OK, somebody else might have different ideas. I mean it >>>> is not nice, but it is better than the alternative of overloading the >>>> generic API with driver specific semantics or introducing some kind of IOCTL >>>> catch all callback. >>> >>> True, but the generic API can be extended as well to cover new grounds, >>> features. Like this metadata thing. >>> >>>> If there is tight coupling between the DMA core and client and there is no >>>> intention of using a generic client the best solution might even be to no >>>> use DMAengine at all. >>> >>> This is how the knav stuff ended up. Well it is only used by networking >>> atm, so it is 'fine' to have custom API, but it is not portable. >> >> I totally agree generic APIs are better, but not everybody has the resources >> to rewrite the whole framework just because they want to do this tiny thing >> that isn't covered by the framework yet. In that case it is better to go >> with a custom API (that might evolve into a generic API), rather than >> overloading the generic API and putting a strain on everybody who works on >> the generic API. > > At some point a threshold is reached when the burden of maintaining a > custom API is more costly than investing on the extension of the framework. > What happens when an existing driver (using DMAengine API) need to be > supported on platform where only custom DMA code is available or if you > want to migrate to a new DMA from the custom API the drier is wired for? > It is just plain pain. > At the end what we want to do with DMA: move data from opne place to > another (in oversimplified view). I think we fully agree on this :) -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On 2018-04-18 16:06, Lars-Peter Clausen wrote: >> Hrm, true, but it is hardly the metadata use case. It is more like >> different DMA transfer type. > > When I look at this with my astronaut architect view from high high up above > I do not see a difference between metadata and multi-planar data. I tend to disagree. > Both split the data that is sent to the peripheral into multiple > sub-streams, each carrying part of the data. I'm sure there are peripherals > that interleave data and metadata on the same data stream. Similar to how we > have left and right channel interleaved in a audio stream. Slimbus, S/PDIF? > What about metadata that is not contiguous and split into multiple segments. > How do you handle passing a sgl to the metadata interface? And then it > suddenly looks quite similar to the normal DMA descriptor interface. Well, the metadata is for the descriptor. The descriptor describe the data transfer _and_ can convey additional information. Nothing is interleaved, the data and the descriptor are different things. It is more like TCP headers detached from the data (but pointing to it). > But maybe that's just one abstraction level to high. I understand your point, but at the end the metadata needs to end up in the descriptor which is describing the data that is going to be moved. The descriptor is not sent as a separate DMA trasnfer, it is part of the DMA transfer, it is handled internally by the DMA. - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
On Thu, Apr 19, 2018 at 02:40:26PM +0300, Peter Ujfalusi wrote: > > On 2018-04-18 16:06, Lars-Peter Clausen wrote: > >> Hrm, true, but it is hardly the metadata use case. It is more like > >> different DMA transfer type. > > > > When I look at this with my astronaut architect view from high high up above > > I do not see a difference between metadata and multi-planar data. > > I tend to disagree. and we will love to hear more :) > > Both split the data that is sent to the peripheral into multiple > > sub-streams, each carrying part of the data. I'm sure there are peripherals > > that interleave data and metadata on the same data stream. Similar to how we > > have left and right channel interleaved in a audio stream. > > Slimbus, S/PDIF? > > > What about metadata that is not contiguous and split into multiple segments. > > How do you handle passing a sgl to the metadata interface? And then it > > suddenly looks quite similar to the normal DMA descriptor interface. > > Well, the metadata is for the descriptor. The descriptor describe the > data transfer _and_ can convey additional information. Nothing is > interleaved, the data and the descriptor are different things. It is > more like TCP headers detached from the data (but pointing to it). > > > But maybe that's just one abstraction level to high. > > I understand your point, but at the end the metadata needs to end up in > the descriptor which is describing the data that is going to be moved. > > The descriptor is not sent as a separate DMA trasnfer, it is part of the > DMA transfer, it is handled internally by the DMA. That is bit confusing to me. I thought DMA was transparent to meta data and would blindly collect and transfer along with the descriptor. So at high level we are talking about two transfers (probably co-joined at hip and you want to call one transfer) but why can't we visualize this as just a DMA transfers. maybe you want to signal/attach to transfer, cant we do that with additional flag DMA_METADATA etc..?
On 2018-04-24 06:55, Vinod Koul wrote: > On Thu, Apr 19, 2018 at 02:40:26PM +0300, Peter Ujfalusi wrote: >> >> On 2018-04-18 16:06, Lars-Peter Clausen wrote: >>>> Hrm, true, but it is hardly the metadata use case. It is more like >>>> different DMA transfer type. >>> >>> When I look at this with my astronaut architect view from high high up above >>> I do not see a difference between metadata and multi-planar data. >> >> I tend to disagree. > > and we will love to hear more :) It is getting pretty off topic from the subject ;) and I'm sorry about that. Multi-planar data is _data_, the metadata is parameters/commands/information on _how_ to use the data. It is more like a replacement or extension of: configure peripheral send data to send data with configuration In both cases the same data is sent, but the configuration, parametrization is 'simplified' to allow per packet changes. >>> Both split the data that is sent to the peripheral into multiple >>> sub-streams, each carrying part of the data. I'm sure there are peripherals >>> that interleave data and metadata on the same data stream. Similar to how we >>> have left and right channel interleaved in a audio stream. >> >> Slimbus, S/PDIF? >> >>> What about metadata that is not contiguous and split into multiple segments. >>> How do you handle passing a sgl to the metadata interface? And then it >>> suddenly looks quite similar to the normal DMA descriptor interface. >> >> Well, the metadata is for the descriptor. The descriptor describe the >> data transfer _and_ can convey additional information. Nothing is >> interleaved, the data and the descriptor are different things. It is >> more like TCP headers detached from the data (but pointing to it). >> >>> But maybe that's just one abstraction level to high. >> >> I understand your point, but at the end the metadata needs to end up in >> the descriptor which is describing the data that is going to be moved. >> >> The descriptor is not sent as a separate DMA trasnfer, it is part of the >> DMA transfer, it is handled internally by the DMA. > > That is bit confusing to me. I thought DMA was transparent to meta data and > would blindly collect and transfer along with the descriptor. So at high > level we are talking about two transfers (probably co-joined at hip and you > want to call one transfer) At the end yes, both the descriptor and the data is going to be sent to the other end. As a reference see [1] The metadata is not a separate entity, it is part of the descriptor (Host Packet Descriptor - HPD). Each transfer (packet) is described with a HPD. The HPD have optional fields, like EPIB (Extended Packet Info Block), PSdata (Protocol Specific data). When the DMA reads the HPD, is going to move the data described by the HPD to the entry point (or from the entry point to memory), copies the EPIB/PSdata from the HPD to a destination HPD. The other end will use the destination HPD to know the size of the data and to get the metadata from the descriptor. In essence every entity within the Multicore Navigator system have pktdma, they all work in a similar way, but their capabilities might differ. Our entry to this mesh is via the DMA. > but why can't we visualize this as just a DMA > transfers. maybe you want to signal/attach to transfer, cant we do that with > additional flag DMA_METADATA etc..? For the data we need to call dmaengine_prep_slave_* to create the descriptor (HPD). The metadata needs to be present in the HPD, hence I was thinking of the attach_metadata as per descriptor API. 
If separate dmaengine_prep_slave_* is used for allocating the HPD and place the metadata in it then the consequent dmaengine_prep_slave_* call must be for the data of the transfer and it is still unclear how the prepare call would have any idea where to look for the HPD it needs to update with the parameters for the data transfer. I guess the driver could store the HPD pointer in the channel data if the prepare is called with DMA_METADATA and it would be mandatory that the next prepare is for the data portion. The driver would pick the pointer to the HPD we stored away and update the descriptor belonging to different tx_desc. But if we are here, we could have a flag like DMA_DESCRIPTOR and let client drivers to allocate the whole descriptor, fill in the metadata and give that to the DMA driver, which will update the rest of the HPD. Well, let's see where this is going to go when I can send the patches for review. [1] http://www.ti.com/lit/ug/sprugr9h/sprugr9h.pdf - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki -- To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
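As a purely illustrative view of the per-descriptor flow discussed above, a DEV_TO_MEM client could attach its metadata buffer to the very descriptor returned by the prep call, so there is no ordering ambiguity between a "metadata transfer" and the data transfer. The dmaengine_attach_metadata() helper is the one from the RFC patch quoted earlier; the context structure and callback wiring are assumptions of the sketch.

#include <linux/dmaengine.h>
#include <linux/printk.h>

/* Assumed client-side context; for illustration only. */
struct client_rx_ctx {
	u8 metadata[64];	/* filled by the DMA driver on completion */
};

static void client_rx_complete(void *param)
{
	struct client_rx_ctx *ctx = param;

	/* ctx->metadata now holds the sideband words for this packet
	 * (EPIB/PSdata here, or the AXI4-Stream APP words in the Xilinx
	 * case). */
	print_hex_dump_bytes("rx meta: ", DUMP_PREFIX_OFFSET,
			     ctx->metadata, sizeof(ctx->metadata));
}

static int client_queue_rx(struct dma_chan *chan, struct client_rx_ctx *ctx,
			   dma_addr_t buf, size_t len)
{
	struct dma_async_tx_descriptor *desc;
	int ret;

	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_DEV_TO_MEM,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	/* The metadata buffer is tied to this descriptor, not to the
	 * channel, so no DMA_METADATA-style flag or call ordering is
	 * needed. */
	ret = dmaengine_attach_metadata(desc, ctx->metadata,
					sizeof(ctx->metadata));
	if (ret)
		return ret;

	desc->callback = client_rx_complete;
	desc->callback_param = ctx;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}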
Hi, > -----Original Message----- > From: Peter Ujfalusi [mailto:peter.ujfalusi@ti.com] > Sent: Tuesday, April 24, 2018 3:21 PM > To: Vinod Koul <vinod.koul@intel.com> > Cc: Lars-Peter Clausen <lars@metafoo.de>; Radhey Shyam Pandey > <radheys@xilinx.com>; michal.simek@xilinx.com; linux- > kernel@vger.kernel.org; dmaengine@vger.kernel.org; > dan.j.williams@intel.com; Appana Durga Kedareswara Rao > <appanad@xilinx.com>; linux-arm-kernel@lists.infradead.org > Subject: Re: [RFC 2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control words > to netdev dma client > > On 2018-04-24 06:55, Vinod Koul wrote: > > On Thu, Apr 19, 2018 at 02:40:26PM +0300, Peter Ujfalusi wrote: > >> > >> On 2018-04-18 16:06, Lars-Peter Clausen wrote: > >>>> Hrm, true, but it is hardly the metadata use case. It is more like > >>>> different DMA transfer type. > >>> > >>> When I look at this with my astronaut architect view from high high up > above > >>> I do not see a difference between metadata and multi-planar data. > >> > >> I tend to disagree. > > > > and we will love to hear more :) > > It is getting pretty off topic from the subject ;) and I'm sorry about that. > > Multi-planar data is _data_, the metadata is > parameters/commands/information on _how_ to use the data. > It is more like a replacement or extension of: > configure peripheral > send data > > to > > send data with configuration > > In both cases the same data is sent, but the configuration, > parametrization is 'simplified' to allow per packet changes. > > >>> Both split the data that is sent to the peripheral into multiple > >>> sub-streams, each carrying part of the data. I'm sure there are peripherals > >>> that interleave data and metadata on the same data stream. Similar to > how we > >>> have left and right channel interleaved in a audio stream. > >> > >> Slimbus, S/PDIF? > >> > >>> What about metadata that is not contiguous and split into multiple > segments. > >>> How do you handle passing a sgl to the metadata interface? And then it > >>> suddenly looks quite similar to the normal DMA descriptor interface. > >> > >> Well, the metadata is for the descriptor. The descriptor describe the > >> data transfer _and_ can convey additional information. Nothing is > >> interleaved, the data and the descriptor are different things. It is > >> more like TCP headers detached from the data (but pointing to it). > >> > >>> But maybe that's just one abstraction level to high. > >> > >> I understand your point, but at the end the metadata needs to end up in > >> the descriptor which is describing the data that is going to be moved. > >> > >> The descriptor is not sent as a separate DMA trasnfer, it is part of the > >> DMA transfer, it is handled internally by the DMA. > > > > That is bit confusing to me. I thought DMA was transparent to meta data and > > would blindly collect and transfer along with the descriptor. So at high > > level we are talking about two transfers (probably co-joined at hip and you > > want to call one transfer) > > At the end yes, both the descriptor and the data is going to be sent to > the other end. > > As a reference see [1] > > The metadata is not a separate entity, it is part of the descriptor > (Host Packet Descriptor - HPD). > Each transfer (packet) is described with a HPD. The HPD have optional > fields, like EPIB (Extended Packet Info Block), PSdata (Protocol > Specific data). 
> > When the DMA reads the HPD, is going to move the data described by the > HPD to the entry point (or from the entry point to memory), copies the > EPIB/PSdata from the HPD to a destination HPD. The other end will use > the destination HPD to know the size of the data and to get the metadata > from the descriptor. > > In essence every entity within the Multicore Navigator system have > pktdma, they all work in a similar way, but their capabilities might > differ. Our entry to this mesh is via the DMA. > > > but why can't we visualize this as just a DMA > > transfers. maybe you want to signal/attach to transfer, cant we do that with > > additional flag DMA_METADATA etc..? > > For the data we need to call dmaengine_prep_slave_* to create the > descriptor (HPD). The metadata needs to be present in the HPD, hence I > was thinking of the attach_metadata as per descriptor API. > > If separate dmaengine_prep_slave_* is used for allocating the HPD and > place the metadata in it then the consequent dmaengine_prep_slave_* call > must be for the data of the transfer and it is still unclear how the > prepare call would have any idea where to look for the HPD it needs to > update with the parameters for the data transfer. > > I guess the driver could store the HPD pointer in the channel data if > the prepare is called with DMA_METADATA and it would be mandatory that > the next prepare is for the data portion. The driver would pick the > pointer to the HPD we stored away and update the descriptor belonging to > different tx_desc. > > But if we are here, we could have a flag like DMA_DESCRIPTOR and let > client drivers to allocate the whole descriptor, fill in the metadata > and give that to the DMA driver, which will update the rest of the HPD. > > Well, let's see where this is going to go when I can send the patches > for review. Thanks all. @Peter: If we have metadata patchset ready may be good to send an RFC? > > [1] http://www.ti.com/lit/ug/sprugr9h/sprugr9h.pdf > > - Péter > > Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. > Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
Hi, On 2018-05-17 09:39, Radhey Shyam Pandey wrote: >> Well, let's see where this is going to go when I can send the patches >> for review. > Thanks all. @Peter: If we have metadata patchset ready may be good > to send an RFC? Sorry for the delay, I got distracted by this: http://www.ti.com/lit/pdf/spruid7 Chapter 10. I have given some thought to the metadata attach patches. In my case the 'metadata' is more like a private data section within the DMA descriptor (10.1.2.2.1) which is used by the remote peripheral and the driver for the given peripheral and it is optional. I liked the idea of treating it as metadata as it gives a more generic API which can be adopted by other drivers if they need something similar. Another issue I have with the attach metadata way is that it would require one memcpy to copy the data to the DMA descriptor and in high throughput case it is not acceptable. For me probably a .get_private_area / .put_private_area like API would be desirable where I can give the pointer of the 'metadata' area (and size) to the user. But these can co-exist in my opinion and DMA drivers can opt to implement none, either or both of the callbacks. In a couple of days I can update the metadata patches I have atm and send as RFC. Is there anything from your side I should take into account when doing that? - Péter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
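One possible shape of the .get_private_area / .put_private_area idea mentioned above, written out only as a sketch; the names and signatures are illustrative, not an agreed interface.

/* Sketch only: the client borrows the descriptor's private ("metadata")
 * area, fills or reads it in place, and hands it back with the number of
 * valid bytes, avoiding the extra memcpy on the hot path.
 */
struct dma_desc_private_area_ops {
	/* Returns a pointer into the descriptor; *max_len reports the room
	 * available (TX) or the number of valid bytes received (RX). */
	void *(*get_private_area)(struct dma_async_tx_descriptor *desc,
				  size_t *max_len);
	/* Returns the area to the DMA driver; valid_len tells it how much
	 * data the client actually placed there (TX). */
	int (*put_private_area)(struct dma_async_tx_descriptor *desc,
				size_t valid_len);
};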
Hi,

> -----Original Message-----
> From: Peter Ujfalusi [mailto:peter.ujfalusi@ti.com]
> Sent: Tuesday, May 29, 2018 8:35 PM
> To: Radhey Shyam Pandey <radheys@xilinx.com>; Vinod Koul <vinod.koul@intel.com>
> Cc: Lars-Peter Clausen <lars@metafoo.de>; michal.simek@xilinx.com; linux-kernel@vger.kernel.org; dmaengine@vger.kernel.org; dan.j.williams@intel.com; Appana Durga Kedareswara Rao <appanad@xilinx.com>; linux-arm-kernel@lists.infradead.org
> Subject: Re: [RFC 2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control words to netdev dma client
>
> Hi,
>
> On 2018-05-17 09:39, Radhey Shyam Pandey wrote:
> >> Well, let's see where this is going to go when I can send the patches
> >> for review.
> > Thanks all. @Peter: If we have metadata patchset ready may be good
> > to send an RFC?
>
> Sorry for the delay, I got distracted by this:
> http://www.ti.com/lit/pdf/spruid7 Chapter 10.
>
> I have given some thought to the metadata attach patches.
> In my case the 'metadata' is more like a private data section within the
> DMA descriptor (10.1.2.2.1) which is used by the remote peripheral and
> the driver for the given peripheral and it is optional.
>
> I liked the idea of treating it as metadata as it gives a more generic API
> which can be adopted by other drivers if they need something similar.
>
> Another issue I have with the attach metadata way is that it would
> require one memcpy to copy the data to the DMA descriptor and in high
> throughput case it is not acceptable.

I think the memcpy is needed (alternative?) if the DMA engine doesn't directly
update metadata buffers, i.e. in RX we need to copy metadata from the
DMA descriptor fields to the client-allocated metadata buffer (the sideband/
metadata info is part of the buffer descriptor fields):

memcpy(app_w, hw->app, sizeof(u32) * XILINX_DMA_NUM_APP_WORDS);

Descriptor fields:
Address Space Offset | Name | Description
20h                  | APP0 | User Application Field 0
24h                  | APP1 | User Application Field 1
...
30h                  | APP5 | User Application Field 5

> For me probably a .get_private_area / .put_private_area like API would
> be desirable where I can give the pointer of the 'metadata' area (and
> size) to the user.
>
> But these can co-exist in my opinion and DMA drivers can opt to
> implement none, either or both of the callbacks.
>
> In a couple of days I can update the metadata patches I have atm and send
> as RFC.
>
> Is there anything from your side I should take into account when doing that?

I think a generic interface to attach/share a metadata buffer b/w the client and the
dmaengine driver is good enough. Is the metadata patchset (early version)
available in TI external repos?

Thanks,
Radhey

> - Péter
>
> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
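On the provider side, if xilinx_dma implemented the attach callback, the RX completion path could copy the APP words into the client's attached buffer instead of interpreting callback_param. The sketch below assumes new metadata/metadata_len fields on the driver's descriptor and approximates the driver's internal type names; only hw->app[] and XILINX_DMA_NUM_APP_WORDS come from the existing code.

/* Illustrative only: ->metadata/->metadata_len are assumed new fields on
 * the Xilinx tx descriptor; internal type names approximate xilinx_dma.c. */
static int xilinx_dma_attach_metadata(struct dma_async_tx_descriptor *txd,
				      void *data, size_t len)
{
	struct xilinx_dma_tx_descriptor *desc =
		container_of(txd, struct xilinx_dma_tx_descriptor, async_tx);

	if (len < sizeof(u32) * XILINX_DMA_NUM_APP_WORDS)
		return -EINVAL;

	desc->metadata = data;		/* assumed new field */
	desc->metadata_len = len;	/* assumed new field */
	return 0;
}

/* Called from the RX completion path instead of writing into the client's
 * callback_param: */
static void xilinx_dma_copy_rx_metadata(struct xilinx_dma_tx_descriptor *desc,
					struct xilinx_axidma_desc_hw *hw)
{
	if (desc->metadata)
		memcpy(desc->metadata, hw->app,
		       sizeof(u32) * XILINX_DMA_NUM_APP_WORDS);
}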
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index cc887a6f74ba..fb8e9c92fb01 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -709,6 +709,11 @@ struct dma_filter { * be called after period_len bytes have been transferred. * @device_prep_interleaved_dma: Transfer expression in a generic way. * @device_prep_dma_imm_data: DMA's 8 byte immediate data to the dst address + * @device_attach_metadata: Some DMA engines can send and receive side band + * information, commands or parameters which is not transferred within the + * data stream itself. In such case clients can set the metadata to the + * given descriptor and it is going to be sent to the peripheral, or in + * case of DEV_TO_MEM the provided buffer will receive the metadata. * @device_config: Pushes a new configuration to a channel, return 0 or an error * code * @device_pause: Pauses any transfer happening on a channel. Returns @@ -796,6 +801,9 @@ struct dma_device { struct dma_chan *chan, dma_addr_t dst, u64 data, unsigned long flags); + int (*device_attach_metadata)(struct dma_async_tx_descriptor *desc, + void *data, size_t len); + int (*device_config)(struct dma_chan *chan, struct dma_slave_config *config); int (*device_pause)(struct dma_chan *chan); @@ -909,6 +917,21 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memcpy( len, flags); } +static inline int dmaengine_attach_metadata( + struct dma_async_tx_descriptor *desc, void *data, size_t len) +{ + struct dma_chan *chan; + + if (!desc) + return 0; + + chan = desc->chan; + if (!chan || !chan->device || !chan->device->device_attach_metadata) + return 0; + + return chan->device->device_attach_metadata(desc, data, len); +} + /** * dmaengine_terminate_all() - Terminate all active DMA transfers * @chan: The channel for which to terminate the transfers
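For completeness, a minimal MEM_TO_DEV caller of the helper added above could look like the following; the channel setup and the contents of the metadata buffer are assumptions of the example.

#include <linux/dmaengine.h>

/* Sketch of a client using the dmaengine_attach_metadata() helper added by
 * this patch. The helper returns 0 (a no-op) when the DMA driver does not
 * implement device_attach_metadata, so generic clients keep working. */
static int client_xmit(struct dma_chan *chan, dma_addr_t buf, size_t len,
		       void *tx_meta, size_t meta_len)
{
	struct dma_async_tx_descriptor *desc;
	int ret;

	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -ENOMEM;

	ret = dmaengine_attach_metadata(desc, tx_meta, meta_len);
	if (ret)
		return ret;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}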