Message ID: 1654651005-15475-1-git-send-email-quic_clew@quicinc.com (mailing list archive)
Series: Introduction of rpmsg_rx_done
Hello Chris,

On 6/8/22 03:16, Chris Lew wrote:
> This series proposes an implementation for the rpmsg framework to do
> deferred cleanup of buffers provided in the rx callback. The current
> implementation assumes that the client is done with the buffer after
> returning from the rx callback.
>
> In some cases where the data size is large, the client may want to
> avoid copying the data in the rx callback for later processing. This
> series proposes two new facilities for signaling that they want to
> hold on to a buffer after the rx callback.
> They are:
> - New API rpmsg_rx_done() to tell the rpmsg framework the client is
>   done with the buffer
> - New return codes for the rx callback to signal that the client will
>   hold onto a buffer and later call rpmsg_rx_done()
>
> This series implements the qcom_glink_native backend for these new
> facilities.

The API you proposed seems to me quite smart and adaptable to the rpmsg
virtio backend.

My main concern is the release of the buffer when the endpoint is
destroyed. Should the buffer release be handled by each service or by
the core?

I wonder if the buffer list could be managed by the core by adding the
list to the rpmsg_endpoint structure. On destroy, the core could call
rx_done for each buffer remaining in the list...

I'll let Bjorn and Mathieu advise on this...

Thanks,
Arnaud

> Chris Lew (4):
>   rpmsg: core: Add rx done hooks
>   rpmsg: char: Add support to use rpmsg_rx_done
>   rpmsg: glink: Try to send rx done in irq
>   rpmsg: glink: Add support for rpmsg_rx_done
>
>  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
>  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
>  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
>  drivers/rpmsg/rpmsg_internal.h    |   1 +
>  include/linux/rpmsg.h             |  24 ++++++++
>  5 files changed, 183 insertions(+), 24 deletions(-)
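[Editor's note: the contract the cover letter describes — an rx callback that either lets the framework reclaim the buffer on return or holds it and calls rpmsg_rx_done() later — can be sketched in plain C. This is a userspace simulation, not the series' actual kernel code; the constant and function names (RPMSG_CB_DONE, RPMSG_CB_HELD, deliver(), rx_done()) are illustrative stand-ins, not the real API.]

```c
#include <stddef.h>

/* Illustrative return codes for the rx callback; the names the series
 * actually adds may differ. */
#define RPMSG_CB_DONE 0 /* framework may reclaim the buffer on return */
#define RPMSG_CB_HELD 1 /* client keeps the buffer, will call rx_done() */

struct rx_buf {
	void *data;
	size_t len;
	int released; /* set once the buffer has been given back */
};

/* A client callback that defers processing of large payloads: it holds
 * the buffer to avoid a copy and promises to call rx_done() later. */
static int client_rx_cb(struct rx_buf *buf)
{
	if (buf->len > 16)
		return RPMSG_CB_HELD; /* large payload: no copy, hold it */
	return RPMSG_CB_DONE;         /* small payload: done immediately */
}

/* Stand-in for rpmsg_rx_done(): client signals it is finished. */
static void rx_done(struct rx_buf *buf)
{
	buf->released = 1;
}

/* Framework-side delivery: reclaim the buffer right away only if the
 * callback did not ask to hold it (today's behavior for all buffers). */
static int deliver(struct rx_buf *buf)
{
	int ret = client_rx_cb(buf);

	if (ret == RPMSG_CB_DONE)
		buf->released = 1;
	return ret;
}
```

The point of the two-code scheme is that the common small-message path keeps today's zero-extra-work semantics, while only clients that opt in take on the obligation to call rx_done().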
On Mon, 18 Jul 2022 at 02:26, Arnaud POULIQUEN
<arnaud.pouliquen@foss.st.com> wrote:
>
> Hello Chris,
>
> On 6/8/22 03:16, Chris Lew wrote:
> > This series proposes an implementation for the rpmsg framework to do
> > deferred cleanup of buffers provided in the rx callback. The current
> > implementation assumes that the client is done with the buffer after
> > returning from the rx callback.
> >
> > In some cases where the data size is large, the client may want to
> > avoid copying the data in the rx callback for later processing. This
> > series proposes two new facilities for signaling that they want to
> > hold on to a buffer after the rx callback.
> > They are:
> > - New API rpmsg_rx_done() to tell the rpmsg framework the client is
> >   done with the buffer
> > - New return codes for the rx callback to signal that the client will
> >   hold onto a buffer and later call rpmsg_rx_done()
> >
> > This series implements the qcom_glink_native backend for these new
> > facilities.
>
> The API you proposed seems to me quite smart and adaptable to the rpmsg
> virtio backend.
>
> My main concern is the release of the buffer when the endpoint is
> destroyed. Should the buffer release be handled by each service or by
> the core?
>
> I wonder if the buffer list could be managed by the core by adding the
> list to the rpmsg_endpoint structure. On destroy, the core could call
> rx_done for each buffer remaining in the list...
>
> I'll let Bjorn and Mathieu advise on this...

Thanks for taking a look Arnaud. I'll get to this shortly.

> Thanks,
> Arnaud
>
> > Chris Lew (4):
> >   rpmsg: core: Add rx done hooks
> >   rpmsg: char: Add support to use rpmsg_rx_done
> >   rpmsg: glink: Try to send rx done in irq
> >   rpmsg: glink: Add support for rpmsg_rx_done
> >
> >  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
> >  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
> >  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
> >  drivers/rpmsg/rpmsg_internal.h    |   1 +
> >  include/linux/rpmsg.h             |  24 ++++++++
> >  5 files changed, 183 insertions(+), 24 deletions(-)
On Tue, Jun 07, 2022 at 06:16:41PM -0700, Chris Lew wrote:
> This series proposes an implementation for the rpmsg framework to do
> deferred cleanup of buffers provided in the rx callback. The current
> implementation assumes that the client is done with the buffer after
> returning from the rx callback.
>
> In some cases where the data size is large, the client may want to
> avoid copying the data in the rx callback for later processing. This
> series proposes two new facilities for signaling that they want to
> hold on to a buffer after the rx callback.
> They are:
> - New API rpmsg_rx_done() to tell the rpmsg framework the client is
>   done with the buffer
> - New return codes for the rx callback to signal that the client will
>   hold onto a buffer and later call rpmsg_rx_done()
>
> This series implements the qcom_glink_native backend for these new
> facilities.
>
> Chris Lew (4):
>   rpmsg: core: Add rx done hooks
>   rpmsg: char: Add support to use rpmsg_rx_done
>   rpmsg: glink: Try to send rx done in irq
>   rpmsg: glink: Add support for rpmsg_rx_done
>
>  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
>  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
>  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
>  drivers/rpmsg/rpmsg_internal.h    |   1 +
>  include/linux/rpmsg.h             |  24 ++++++++
>  5 files changed, 183 insertions(+), 24 deletions(-)

I have started reviewing this set. Comments to come later today or
tomorrow.

Thanks,
Mathieu

> --
> 2.7.4
On Mon, Jul 18, 2022 at 10:54:30AM -0600, Mathieu Poirier wrote:
> On Mon, 18 Jul 2022 at 02:26, Arnaud POULIQUEN
> <arnaud.pouliquen@foss.st.com> wrote:
> >
> > Hello Chris,
> >
> > On 6/8/22 03:16, Chris Lew wrote:
> > > This series proposes an implementation for the rpmsg framework to do
> > > deferred cleanup of buffers provided in the rx callback. The current
> > > implementation assumes that the client is done with the buffer after
> > > returning from the rx callback.
> > >
> > > In some cases where the data size is large, the client may want to
> > > avoid copying the data in the rx callback for later processing. This
> > > series proposes two new facilities for signaling that they want to
> > > hold on to a buffer after the rx callback.
> > > They are:
> > > - New API rpmsg_rx_done() to tell the rpmsg framework the client is
> > >   done with the buffer
> > > - New return codes for the rx callback to signal that the client will
> > >   hold onto a buffer and later call rpmsg_rx_done()
> > >
> > > This series implements the qcom_glink_native backend for these new
> > > facilities.
> >
> > The API you proposed seems to me quite smart and adaptable to the rpmsg
> > virtio backend.
> >
> > My main concern is the release of the buffer when the endpoint is
> > destroyed. Should the buffer release be handled by each service or by
> > the core?
> >
> > I wonder if the buffer list could be managed by the core by adding the
> > list to the rpmsg_endpoint structure. On destroy, the core could call
> > rx_done for each buffer remaining in the list...

Arnaud has a valid point, though rpmsg_endpoint_ops::destroy_ept() is
there for this kind of cleanup (and this patchset is making use of it).
I think we can leave things as they are now and consider moving to the
core if we see a trend in future submissions.

Thanks,
Mathieu

> >
> > I'll let Bjorn and Mathieu advise on this...
>
> Thanks for taking a look Arnaud. I'll get to this shortly.
>
> > Thanks,
> > Arnaud
> >
> > > Chris Lew (4):
> > >   rpmsg: core: Add rx done hooks
> > >   rpmsg: char: Add support to use rpmsg_rx_done
> > >   rpmsg: glink: Try to send rx done in irq
> > >   rpmsg: glink: Add support for rpmsg_rx_done
> > >
> > >  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
> > >  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
> > >  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
> > >  drivers/rpmsg/rpmsg_internal.h    |   1 +
> > >  include/linux/rpmsg.h             |  24 ++++++++
> > >  5 files changed, 183 insertions(+), 24 deletions(-)
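[Editor's note: Arnaud's suggestion — the core tracks held buffers on a per-endpoint list and releases any leftovers on destroy — can be sketched as a small userspace simulation. All names here (ept_hold_buf, ept_rx_done, ept_destroy, struct held_buf) are hypothetical illustrations of the idea, not the series' code; the actual patchset instead does this cleanup in each backend's destroy_ept() op.]

```c
#include <stdlib.h>

/* A held rx buffer, tracked by the core on a per-endpoint list. */
struct held_buf {
	void *data;
	struct held_buf *next;
};

struct endpoint {
	struct held_buf *held; /* buffers the client has not yet returned */
};

/* Core records a buffer the rx callback asked to hold. */
static void ept_hold_buf(struct endpoint *ept, void *data)
{
	struct held_buf *hb = malloc(sizeof(*hb));

	hb->data = data;
	hb->next = ept->held;
	ept->held = hb;
}

/* rpmsg_rx_done() equivalent: unlink one buffer from the endpoint list.
 * Returns -1 if the buffer was never held (a client bug). */
static int ept_rx_done(struct endpoint *ept, void *data)
{
	struct held_buf **p;

	for (p = &ept->held; *p; p = &(*p)->next) {
		if ((*p)->data == data) {
			struct held_buf *hb = *p;

			*p = hb->next;
			free(hb);
			return 0;
		}
	}
	return -1;
}

/* On destroy, the core releases whatever the client never returned,
 * so a misbehaving service cannot leak transport buffers. */
static int ept_destroy(struct endpoint *ept)
{
	int reclaimed = 0;

	while (ept->held) {
		struct held_buf *hb = ept->held;

		ept->held = hb->next;
		free(hb);
		reclaimed++;
	}
	return reclaimed; /* how many buffers needed forced cleanup */
}
```

The trade-off Mathieu weighs is exactly where this list lives: in the core (one implementation, uniform semantics for every service) versus in each backend's destroy_ept() (no core churn, at the cost of every backend repeating the walk-and-release loop).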