[v6,0/4] Add a vhost RPMsg API

Message ID 20200901151153.28111-1-guennadi.liakhovetski@linux.intel.com

Message

Guennadi Liakhovetski Sept. 1, 2020, 3:11 p.m. UTC
Hi,

Next update:

v6:
- rename include/linux/virtio_rpmsg.h -> include/linux/rpmsg/virtio.h

v5:
- don't hard-code message layout

v4:
- add endianness conversions to comply with the VirtIO standard

v3:
- address several checkpatch warnings
- address comments from Mathieu Poirier

v2:
- update patch #5 with a correct vhost_dev_init() prototype
- drop patch #6 - it depends on a different patch, that is currently
  an RFC
- address comments from Pierre-Louis Bossart:
  * remove "default n" from Kconfig

Linux supports RPMsg over VirtIO for "remote processor" / AMP use
cases. It can however also be used for virtualisation scenarios,
e.g. when using KVM to run Linux on both the host and the guests.
This patch set adds a wrapper API to facilitate writing vhost
drivers for such RPMsg-based solutions. The first use case is an
audio DSP virtualisation project, currently under development and
ready for review and submission, available at
https://github.com/thesofproject/linux/pull/1501/commits
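
For context: both sides of the link speak the same wire format, the RPMsg
message header that patch 2 of this series moves from
drivers/rpmsg/virtio_rpmsg_bus.c into include/linux/rpmsg/virtio.h. A sketch
of its layout (field types shown as the generic virtio endian types, per the
v4 endianness conversions; see the patches for the exact definitions):

/* Every message on the virtqueues, in either direction, starts with this. */
struct rpmsg_hdr {
	__virtio32 src;		/* source endpoint address */
	__virtio32 dst;		/* destination endpoint address */
	__virtio32 reserved;
	__virtio16 len;		/* payload length in bytes */
	__virtio16 flags;
	u8 data[];		/* payload follows the header */
} __packed;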

Thanks
Guennadi

Guennadi Liakhovetski (4):
  vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  rpmsg: move common structures and defines to headers
  rpmsg: update documentation
  vhost: add an RPMsg API

 Documentation/rpmsg.txt          |   6 +-
 drivers/rpmsg/virtio_rpmsg_bus.c |  78 +------
 drivers/vhost/Kconfig            |   7 +
 drivers/vhost/Makefile           |   3 +
 drivers/vhost/rpmsg.c            | 373 +++++++++++++++++++++++++++++++
 drivers/vhost/vhost_rpmsg.h      |  74 ++++++
 include/linux/rpmsg/virtio.h     |  83 +++++++
 include/uapi/linux/rpmsg.h       |   3 +
 include/uapi/linux/vhost.h       |   4 +-
 9 files changed, 551 insertions(+), 80 deletions(-)
 create mode 100644 drivers/vhost/rpmsg.c
 create mode 100644 drivers/vhost/vhost_rpmsg.h
 create mode 100644 include/linux/rpmsg/virtio.h

Comments

Arnaud POULIQUEN Sept. 15, 2020, 12:13 p.m. UTC | #1
Hi Guennadi,

On 9/1/20 5:11 PM, Guennadi Liakhovetski wrote:
> [...]

Mathieu pointed me to your series. On my side, I proposed the rpmsg_ns_msg
service [1], which does not match your implementation.
As I come late, I hope that I did not miss something in the history...
Don't hesitate to point me to the discussions if that is the case.

Regarding your patchset, it is quite confusing for me. It seems that you
implement your own protocol on top of vhost, forked from the RPMsg one,
but it looks to me like it is not the RPMsg protocol.

So I would agree with Vincent [2], who proposed switching to an RPMsg API
and creating a vhost rpmsg device. This is also proposed in the
"Enhance VHOST to enable SoC-to-SoC communication" RFC [3].
Do you think this alternative could match your need?

[1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335 
[2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
[3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html  

Thanks,
Arnaud

Vincent Whitchurch Sept. 17, 2020, 8:36 a.m. UTC | #2
On Thu, Sep 17, 2020 at 07:47:06AM +0200, Guennadi Liakhovetski wrote:
> On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > So I would agree with Vincent [2], who proposed switching to an RPMsg API
> > and creating a vhost rpmsg device. This is also proposed in the
> > "Enhance VHOST to enable SoC-to-SoC communication" RFC [3].
> > Do you think this alternative could match your need?
> 
> As I replied to Vincent, I understand his proposal and the approach taken
> in the series [3], but I'm not sure I agree that adding yet another
> virtual device / driver layer on the vhost side is a good idea. As far as
> I understand, adding completely new virtual devices isn't considered
> good practice in the kernel. Currently vhost is just a passive "library"
> and my vhost-rpmsg support keeps it that way. I'm not sure I'm in favour of
> converting vhost to a virtual device infrastructure.

I know it wasn't what you meant, but I noticed that the above paragraph
could be read as if my suggestion was to convert vhost to a virtual
device infrastructure, so I just want to clarify that that those are not
related.  The only similarity between what I suggested in the thread in
[2] and Kishon's RFC in [3] is that both involve creating a generic
vhost-rpmsg driver which would allow the RPMsg API to be used for both
sides of the link, instead of introducing a new API just for the server
side.  That can be done without rewriting drivers/vhost/.

> > [1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335 
> > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html
Guennadi Liakhovetski Sept. 17, 2020, 10:29 a.m. UTC | #3
Hi Vincent,

On Thu, Sep 17, 2020 at 10:36:44AM +0200, Vincent Whitchurch wrote:
> [...]
> 
> I know it wasn't what you meant, but I noticed that the above paragraph
> could be read as if my suggestion was to convert vhost to a virtual
> device infrastructure, so I just want to clarify that those are not
> related.  The only similarity between what I suggested in the thread in
> [2] and Kishon's RFC in [3] is that both involve creating a generic
> vhost-rpmsg driver which would allow the RPMsg API to be used for both
> sides of the link, instead of introducing a new API just for the server
> side.  That can be done without rewriting drivers/vhost/.

Thanks for the clarification. Another flexibility that I'm trying to preserve
with my approach is keeping direct access to iovec-style data buffers for
cases where that's the structure already used by the respective driver on the
host side. Since we already do packing and unpacking on the guest / client
side, we don't need the same on the host / server side again.
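
To illustrate, a minimal sketch of what such direct access could look like,
assuming the wrapper hands the server driver an iov_iter that vhost has
already set up over the guest buffers (copy_from_iter() is the standard
kernel primitive; handle_request() and its calling convention are purely
illustrative):

#include <linux/uio.h>

/* Consume an RPMsg payload directly from the guest's scatter-gather
 * buffers - no intermediate linear copy on the host / server side.
 */
static int handle_request(struct iov_iter *iter, void *buf, size_t len)
{
	if (copy_from_iter(buf, len, iter) != len)
		return -EFAULT;
	/* process buf ... */
	return 0;
}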

Thanks
Guennadi

> > > [1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335 
> > > [2]. https://www.spinics.net/lists/linux-virtualization/msg44195.html
> > > [3]. https://www.spinics.net/lists/linux-remoteproc/msg06634.html
Arnaud POULIQUEN Sept. 17, 2020, 3:21 p.m. UTC | #4
Hi Guennadi,

> -----Original Message-----
> From: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> Sent: jeudi 17 septembre 2020 07:47
> To: Arnaud POULIQUEN <arnaud.pouliquen@st.com>
> Cc: kvm@vger.kernel.org; linux-remoteproc@vger.kernel.org;
> virtualization@lists.linux-foundation.org; sound-open-firmware@alsa-
> project.org; Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>; Liam
> Girdwood <liam.r.girdwood@linux.intel.com>; Michael S. Tsirkin
> <mst@redhat.com>; Jason Wang <jasowang@redhat.com>; Ohad Ben-Cohen
> <ohad@wizery.com>; Bjorn Andersson <bjorn.andersson@linaro.org>; Mathieu
> Poirier <mathieu.poirier@linaro.org>; Vincent Whitchurch
> <vincent.whitchurch@axis.com>
> Subject: Re: [PATCH v6 0/4] Add a vhost RPMsg API
> 
> Hi Arnaud,
> 
> On Tue, Sep 15, 2020 at 02:13:23PM +0200, Arnaud POULIQUEN wrote:
> > Hi  Guennadi,
> >
> > On 9/1/20 5:11 PM, Guennadi Liakhovetski wrote:
> > > [...]
> >
> > Mathieu pointed me to your series. On my side, I proposed the rpmsg_ns_msg
> > service [1], which does not match your implementation.
> > As I come late, I hope that I did not miss something in the history...
> > Don't hesitate to point me to the discussions if that is the case.
> 
> Well, as you see, this is already v6 of this patch set, and apart from it there
> have been several side discussions and patch sets.
> 
> > Regarding your patchset, it is quite confusing for me. It seems that you
> > implement your own protocol on top of vhost, forked from the RPMsg one,
> > but it looks to me like it is not the RPMsg protocol.
> 
> I'm implementing a counterpart to the rpmsg protocol over VirtIO as initially
> implemented by drivers/rpmsg/virtio_rpmsg_bus.c for the "main CPU" (in case
> of remoteproc over VirtIO) or the guest side in case of Linux virtualisation.
> Since my implementation can talk to that driver, I don't think that I'm inventing
> a new protocol. I'm adding support for the same protocol for the opposite side
> of the VirtIO divide.

The main point I would like to highlight here relates to the use of the name
"RPMsg" more than to how you implement your IPC protocol.
If it is a counterpart, it probably does not respect the interface for RPMsg
clients. A good way to answer this might be to answer the following question:
can the rpmsg sample client [4] be used on top of your vhost RPMsg
implementation? If the answer is no, describing it as an RPMsg implementation
could lead to confusion...

[4] https://elixir.bootlin.com/linux/v5.9-rc5/source/samples/rpmsg/rpmsg_client_sample.c
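
For reference, the sample client uses only the transport-independent rpmsg
driver API; condensed from samples/rpmsg/rpmsg_client_sample.c it is
essentially:

#include <linux/module.h>
#include <linux/rpmsg.h>

static int rpmsg_sample_cb(struct rpmsg_device *rpdev, void *data, int len,
			   void *priv, u32 src)
{
	dev_info(&rpdev->dev, "incoming msg (src: 0x%x, len: %d)\n", src, len);
	/* echo the message back over the same endpoint */
	return rpmsg_send(rpdev->ept, data, len);
}

static int rpmsg_sample_probe(struct rpmsg_device *rpdev)
{
	static const char msg[] = "hello!";

	/* kick off the ping-pong - still fully transport-agnostic */
	return rpmsg_send(rpdev->ept, (void *)msg, sizeof(msg));
}

static struct rpmsg_device_id rpmsg_sample_id_table[] = {
	{ .name = "rpmsg-client-sample" },
	{ },
};

static struct rpmsg_driver rpmsg_sample_client = {
	.drv.name = KBUILD_MODNAME,
	.id_table = rpmsg_sample_id_table,
	.probe    = rpmsg_sample_probe,
	.callback = rpmsg_sample_cb,
};
module_rpmsg_driver(rpmsg_sample_client);

If a module like this binds and runs unmodified, the implementation exposes
the standard RPMsg client interface.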

Regards,
Arnaud

Arnaud POULIQUEN Sept. 18, 2020, 7:47 a.m. UTC | #5
Hi Guennadi,

On 9/18/20 7:44 AM, Guennadi Liakhovetski wrote:
> Hi Arnaud,
> 
> On Thu, Sep 17, 2020 at 05:21:02PM +0200, Arnaud POULIQUEN wrote:
>> [...]
>>
>> The main point I would like to highlight here relates to the use of the name
>> "RPMsg" more than to how you implement your IPC protocol.
>> If it is a counterpart, it probably does not respect the interface for RPMsg
>> clients. A good way to answer this might be to answer the following question:
>> can the rpmsg sample client [4] be used on top of your vhost RPMsg
>> implementation? If the answer is no, describing it as an RPMsg implementation
>> could lead to confusion...
> 
> Sorry, I don't quite understand your logic. RPMsg is a communication protocol, not an 
> API. An RPMsg implementation has to be able to communicate with other compliant RPMsg 
> implementations; it doesn't have to provide any specific API. Am I missing anything?

You are right, nothing written in stone says that compliance with the RPMsg
API defined in the Linux documentation [5] is mandatory.
IMO, as this API is defined in the Linux documentation [5], we should respect
it, to ensure one generic implementation. The RPMsg sample client [4] uses
this API, so it seems to me a good candidate to verify this.

That said, shall we multiply the RPMsg implementations in Linux with several
APIs, with the risk of making the RPMsg client devices dependent on these
implementations? That could lead to complex code or duplication...

I'm not the right person to answer, Bjorn and Mathieu are.

[5] https://elixir.bootlin.com/linux/v5.8.10/source/Documentation/rpmsg.txt#L66

Thanks,
Arnaud

  
Vincent Whitchurch Sept. 18, 2020, 10:39 a.m. UTC | #6
On Fri, Sep 18, 2020 at 11:47:20AM +0200, Guennadi Liakhovetski wrote:
> On Fri, Sep 18, 2020 at 09:47:45AM +0200, Arnaud POULIQUEN wrote:
> > [...]
> 
> > So, no, in my understanding there aren't two competing alternative APIs; you'd never have
> to choose between them. If you're writing a driver for Linux to communicate with remote 
> processors or to run on VMs, you use the existing API. If you're writing a driver for 
> Linux to communicate with those VMs, you use the vhost API and whatever help is available 
> for RPMsg processing.
> 
> However, I can in principle imagine a single driver, written to work on both sides. 
> Something like the rpmsg_char.c or maybe some networking driver. Is that what you're 
> referring to? I can see that as a fun exercise, but are there any real uses for that? 

I hinted at a real use case for this in the previous mail thread[0].
I'm exploring using rpmsg-char to allow communication between two chips,
both running Linux.  rpmsg-char can be used pretty much as-is for both
sides of the userspace-to-userspace communication and (the userspace
side of the) userspace-to-kernel communication between the two chips.

> You could do the same with VirtIO, however, it has been decided to go with two 
> > distinct APIs: virtio for guests and vhost for the host, no one bothered to create a
> single API for both and nobody seems to miss one. Why would we want one with RPMsg?

I think I answered this question in the previous mail thread as well[1]:
| virtio has distinct driver and device roles so the completely different
| APIs on each side are understandable.  But I don't see that distinction
| in the rpmsg API which is why it seems like a good idea to me to make it
| work from both sides of the link and allow the reuse of drivers like
| rpmsg-char, instead of imposing virtio's distinction on rpmsg.

[0] https://www.spinics.net/lists/linux-virtualization/msg43799.html
[1] https://www.spinics.net/lists/linux-virtualization/msg43802.html
Guennadi Liakhovetski Sept. 18, 2020, 11:02 a.m. UTC | #7
On Fri, Sep 18, 2020 at 12:39:07PM +0200, Vincent Whitchurch wrote:
> [...]
> 
> I think I answered this question in the previous mail thread as well[1]:
> | virtio has distinct driver and device roles so the completely different
> | APIs on each side are understandable.  But I don't see that distinction
> | in the rpmsg API which is why it seems like a good idea to me to make it
> | work from both sides of the link and allow the reuse of drivers like
> | rpmsg-char, instead of imposing virtio's distinction on rpmsg.

I think RPMsg is lacking real established documentation... Quoting from [2]:

<quote>
In the current protocol, at startup, the master sends notification to remote to let it 
know that it can receive name service announcement.
</quote>

Isn't that a sufficient asymmetry?
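
For reference, the announcement itself is a short fixed-layout message; as
defined in drivers/rpmsg/virtio_rpmsg_bus.c (shown here with plain u32
fields, i.e. before the endianness conversions from this series):

struct rpmsg_ns_msg {
	char name[RPMSG_NAME_SIZE];	/* service / channel name */
	u32 addr;			/* endpoint address of the service */
	u32 flags;			/* RPMSG_NS_CREATE or RPMSG_NS_DESTROY */
} __packed;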

Thanks
Guennadi

[2] https://github.com/OpenAMP/open-amp/wiki/RPMsg-Messaging-Protocol

> 
> [0] https://www.spinics.net/lists/linux-virtualization/msg43799.html
> [1] https://www.spinics.net/lists/linux-virtualization/msg43802.html
Arnaud POULIQUEN Sept. 18, 2020, 5:26 p.m. UTC | #8
Hi Guennadi,


On 9/18/20 11:47 AM, Guennadi Liakhovetski wrote:
> Hi Arnaud,
> 
> On Fri, Sep 18, 2020 at 09:47:45AM +0200, Arnaud POULIQUEN wrote:
>> [...]
>>
>> You are right, nothing written in stone says that compliance with the RPMsg
>> API defined in the Linux documentation [5] is mandatory.
> 
> A quote from [5]:
> 
> <quote>
> Rpmsg is a virtio-based messaging bus that allows kernel drivers to communicate
> with remote processors available on the system.
> </quote>
> 
> So, that document describes the API used by Linux drivers to talk to remote processors.
> It says nothing about VMs. What my patches do is add a capability to the Linux RPMsg
> implementation to also be used with VMs. Moreover, this is a particularly good fit,
> because both cases can use VirtIO, so the "VirtIO side" of the communication doesn't
> have to change, and indeed it remains unchanged and uses the API in [5]. But what I
> also do is add RPMsg support to the host side.

The feature you propose is very interesting, and using RPMsg for this is
clearly, for me, a good approach.

But I'm not sure that we are speaking about the same thing...

Perhaps I need to clarify my view by describing the RPMsg layers.

In the next part I focus only on the Linux local side (ignoring the remote
side for now). We can divide the RPMsg implementation into layers.

1) RPMsg service layer:
  This layer implements a service on top of the RPMsg protocol.
  It uses the RPMsg user API to:
    - register/unregister a device
    - create/destroy endpoints
    - send/receive messages
  This layer is independent of how the message is sent (virtio, vhost, ...).
  In the Linux kernel, examples are the RPMsg sample client and the rpmsg_char device.

2) The RPMsg core layer:
  This is the transport layer. It implements the RPMsg API.
  It is a kind of message mixer/router based on local and remote addresses.
  This layer is independent of how the message is sent (virtio, vhost, ...).

3) The RPMsg bus layer:
  This backend layer implements the RPMsg protocol over an IPC layer.
  This layer depends on the platform.
  Some examples are:
    - drivers/rpmsg/mtk_rpmsg.c
    - drivers/rpmsg/qcom_glink_native.c
    - drivers/rpmsg/virtio_rpmsg_bus.c

Regarding your implementation, your drivers/vhost/rpmsg.c replaces layers 2)
and 3) and defines a new "vhost RPMsg" API, right?
As a consequence, layer 1) has to be modified or duplicated to support the
"vhost RPMsg" API.

What Vincent and I proposed (please tell me, Vincent, if I'm wrong) is that
only layer 3) is implemented on top of vhost, for portability. This has been
proposed in "RFC patch 14/22" [6] from Kishon; a rough sketch of the idea
follows below.

But I'm not a vhost expert, so perhaps it is not suitable...?

[6] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2219863.html 
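
To make the layering concrete, this is roughly the shape a layer-3 backend
takes today: it only provides the transport ops. The ops structures come from
drivers/rpmsg/rpmsg_internal.h (as used by e.g. mtk_rpmsg.c); the vhost_*
names below are purely hypothetical:

#include <linux/rpmsg.h>
#include "rpmsg_internal.h"	/* rpmsg_device_ops, rpmsg_endpoint_ops */

/* Hypothetical vhost-backed layer-3 bus: only the transport callbacks
 * change; service drivers (layer 1) and the rpmsg core (layer 2) stay
 * untouched.
 */
static int vhost_rpmsg_ept_send(struct rpmsg_endpoint *ept, void *data,
				int len)
{
	/* queue the message on the guest-facing vhost virtqueue here */
	return 0;
}

static const struct rpmsg_endpoint_ops vhost_rpmsg_endpoint_ops = {
	.send = vhost_rpmsg_ept_send,
	/* .sendto, .trysend, .destroy_ept, ... */
};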

> 
>> [...]
> 
> > So, no, in my understanding there aren't two competing alternative APIs; you'd never have
> to choose between them. If you're writing a driver for Linux to communicate with remote 
> processors or to run on VMs, you use the existing API. If you're writing a driver for 
> Linux to communicate with those VMs, you use the vhost API and whatever help is available 
> for RPMsg processing.

This is what I would have expected here: to have only one driver per service,
not to instantiate it for each type of communication.

> 
> However, I can in principle imagine a single driver, written to work on both sides. 
> Something like the rpmsg_char.c or maybe some networking driver. Is that what you're 
> referring to? I can see that as a fun exercise, but are there any real uses for that? 
> You could do the same with VirtIO, however, it has been decided to go with two 
> > distinct APIs: virtio for guests and vhost for the host, no one bothered to create a
> single API for both and nobody seems to miss one. Why would we want one with RPMsg?

Regarding the RFC [3] mentioned in a previous mail, perhaps this requirement
exists. I have added Kishon in copy.

In ST, we have such a requirement, but not concerning vhost. Our need is to
facilitate porting services between an internal coprocessor (virtio) and an
external coprocessor (serial link) using RPMsg.

The Sound Open Firmware project could also benefit from unifying the
communication with the audio DSP, using the RPMsg API to address an internal
coprocessor, an external coprocessor or a virtual machine in the same way for
the control part...

And of course it would simplify the maintenance and evolution of the RPMsg
protocol in Linux.

That said, our approach also seems valid to me, as it respects the RPMsg
protocol.

Now there are two different patch series with two different approaches sent to
the mailing list, so I guess the maintainers will have to decide whether they
will take both or only one.

Thanks,
Arnaud

Mathieu Poirier Oct. 1, 2020, 5:57 p.m. UTC | #9
Hi all,

On Fri, Sep 18, 2020 at 07:26:50PM +0200, Arnaud POULIQUEN wrote:
> [...]
> 
> Now there are two different patch series with two different approaches sent to
> the mailing list, so I guess the maintainers will have to decide whether they
> will take both or only one.
> 

I finally had the time to look at Kishon's patchset yesterday.  If we skim out
the parts that deal with the realities of the NTB, his solution is quite
simple.  It also provides an implementation for both sides of the channel, that
is host and guest.  Lastly, current implementations such as rpmsg-char and
rpmsg_client_sample.c can run on it seamlessly.

Based on the above and the use case described by Vincent, I think following
Kishon's approach is the best way to move forward.
