
[virtio-dev,RFC,v1,2/2] virtio-gpu: add the ability to export resources

Message ID: CAD=HUj5U-TxqbPJiRRbyMHON21sht75q3nisvwvKrO+Bf=n8ng@mail.gmail.com (mailing list archive)
State: New, archived
Series: Cross-device resource sharing

Commit Message

David Stevens Jan. 8, 2020, 9:02 a.m. UTC
Signed-off-by: David Stevens <stevensd@chromium.org>
---
 virtio-gpu.tex | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

@@ -454,6 +458,31 @@ \subsubsection{Device Operation: controlq}\label{sec:Device Types / GPU Device /
 This detaches any backing pages from a resource, to be used in case of
 guest swapping or object destruction.

+\item[VIRTIO_GPU_CMD_EXPORT_RESOURCE] Exports a resource for use by other
+  virtio devices. Request data is \field{struct
+    virtio_gpu_export_resource}.  Response type is
+  VIRTIO_GPU_RESP_OK_EXPORT_RESOURCE, response data is \field{struct
+    virtio_gpu_resp_export_resource}.
+
+\begin{lstlisting}
+struct virtio_gpu_export_resource {
+        struct virtio_gpu_ctrl_hdr hdr;
+        le32 resource_id;
+        le32 padding;
+};
+
+struct virtio_gpu_resp_export_resource {
+        struct virtio_gpu_ctrl_hdr hdr;
+        le64 uuid_low;
+        le64 uuid_high;
+};
+\end{lstlisting}
+
+The response contains a UUID that identifies the host-private resource to
+other virtio devices. Note that if the resource has an attached backing,
+modifications made to an exported resource by other devices are not visible
+in the attached backing until they are transferred into the backing.
+
 \end{description}

\subsubsection{Device Operation: cursorq}\label{sec:Device Types / GPU Device / Device Operation / Device Operation: cursorq}

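To illustrate the new command end to end, here is a minimal guest-side sketch in C. It assumes the struct layouts from the patch plus the virtio_gpu_ctrl_hdr layout from the existing spec, a little-endian guest (so le32/le64 are written as uint32_t/uint64_t), and a hypothetical blocking submit_ctrl() helper; the 0x1105 response value follows from the enum in the patch below. A sketch, not part of the patch:

#include <stdint.h>
#include <stddef.h>

/* Header layout per the existing virtio-gpu spec. */
struct virtio_gpu_ctrl_hdr {
        uint32_t type;
        uint32_t flags;
        uint64_t fence_id;
        uint32_t ctx_id;
        uint32_t padding;
};

struct virtio_gpu_export_resource {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t resource_id;
        uint32_t padding;
};

struct virtio_gpu_resp_export_resource {
        struct virtio_gpu_ctrl_hdr hdr;
        uint64_t uuid_low;
        uint64_t uuid_high;
};

#define VIRTIO_GPU_CMD_EXPORT_RESOURCE     0x0400
#define VIRTIO_GPU_RESP_OK_EXPORT_RESOURCE 0x1105

/* Hypothetical helper: queue cmd on the controlq and block until the
 * device has written resp.  Returns 0 on success. */
int submit_ctrl(const void *cmd, size_t cmd_len, void *resp, size_t resp_len);

/* Export resource_id and return the device-assigned uuid. */
int virtio_gpu_export(uint32_t resource_id, uint64_t uuid[2])
{
        struct virtio_gpu_export_resource cmd = { 0 };
        struct virtio_gpu_resp_export_resource resp = { 0 };

        cmd.hdr.type = VIRTIO_GPU_CMD_EXPORT_RESOURCE;
        cmd.resource_id = resource_id;

        if (submit_ctrl(&cmd, sizeof(cmd), &resp, sizeof(resp)) != 0)
                return -1;
        if (resp.hdr.type != VIRTIO_GPU_RESP_OK_EXPORT_RESOURCE)
                return -1;

        /* Only now is it safe to hand the uuid to another virtio device;
         * see the race discussed in the comments below. */
        uuid[0] = resp.uuid_low;
        uuid[1] = resp.uuid_high;
        return 0;
}
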
Comments

Gerd Hoffmann Jan. 8, 2020, 10:44 a.m. UTC | #1
> +\begin{lstlisting}
> +struct virtio_gpu_export_resource {
> +        struct virtio_gpu_ctrl_hdr hdr;
> +        le32 resource_id;
> +        le32 padding;
> +};
> +
> +struct virtio_gpu_resp_export_resource {
> +        struct virtio_gpu_ctrl_hdr hdr;
> +        le64 uuid_low;
> +        le64 uuid_high;
> +};
> +\end{lstlisting}

Is there a specific reason why you want the host to pick the uuid?  I would
let the guest define the uuid, i.e. move the uuid fields to
virtio_gpu_export_resource and scratch virtio_gpu_resp_export_resource.

Also I'd suggest naming the command (and struct) RESOURCE_ASSIGN_UUID.

cheers,
  Gerd
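
For concreteness, the guest-assigned variant suggested above would presumably collapse the two structs into a single request, with the device replying with plain VIRTIO_GPU_RESP_OK_NODATA. A sketch inferred from the email, not from any posted patch:

struct virtio_gpu_resource_assign_uuid {
        struct virtio_gpu_ctrl_hdr hdr;
        le32 resource_id;
        le32 padding;
        le64 uuid_low;
        le64 uuid_high;
};
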
David Stevens Jan. 8, 2020, 11:20 a.m. UTC | #2
> Is there a specific reason why you want the host to pick the uuid?  I would
> let the guest define the uuid, i.e. move the uuid fields to
> virtio_gpu_export_resource and scratch virtio_gpu_resp_export_resource.

Sending the uuid in the original request doesn't really buy us
anything, at least in terms of asynchronicity. The guest would still
need to wait for the response to arrive before it could safely pass
the uuid to any other virtio devices, to prevent a race where the
import fails because it is processed before virtio-gpu processes the
export. Perhaps this wouldn't be the case if we supported sharing
fences between virtio devices, but even then, fences are more of a
thing for the operation of a pipeline, not for the setup of a
pipeline.

At that point, I think it's just a matter of aesthetics. I lean
slightly towards returning the uuid from the host, since that rules
out any implementation with the aforementioned race. That being said,
if there are any specific reasons or preferences to assigning the uuid
from the guest, I can switch to that direction.

-David
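
To make the race concrete: with guest-assigned uuids, a hypothetical timeline could look like this, with device and helper names as placeholders:

    guest: uuid = generate_uuid()
    guest: virtio-gpu controlq <- ASSIGN_UUID(resource, uuid)   [still in flight]
    guest: importing device    <- IMPORT(uuid)
    host:  importer looks up uuid before virtio-gpu has registered it -> failure

With a host-assigned uuid, the guest cannot know the uuid until the export response arrives, so the import can never be reordered ahead of the export.
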
Gerd Hoffmann Jan. 9, 2020, 1:16 p.m. UTC | #3
Hi,

> At that point, I think it's just a matter of aesthetics. I lean
> slightly towards returning the uuid from the host, since that rules
> out any implementation with the aforementioned race.

Ok, design the API in a way that you can't get it wrong.  Makes sense.
I'd still name it resource_assign_uuid though.

cheers,
  Gerd
Gurchetan Singh Jan. 10, 2020, 1:28 a.m. UTC | #4
I like the idea of having one central place in the kernel where
virtio devices get their uuid from -- i.e., no separate VM-specific,
device-specific implementations calling into uuid_gen().


Patch

diff --git a/virtio-gpu.tex b/virtio-gpu.tex
index af4ca61..522f478 100644
--- a/virtio-gpu.tex
+++ b/virtio-gpu.tex
@@ -186,12 +186,16 @@ \subsubsection{Device Operation: Request header}\label{sec:Device Types / GPU De
         VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
         VIRTIO_GPU_CMD_MOVE_CURSOR,

+        /* misc commands */
+        VIRTIO_GPU_CMD_EXPORT_RESOURCE = 0x0400,
+
         /* success responses */
         VIRTIO_GPU_RESP_OK_NODATA = 0x1100,
         VIRTIO_GPU_RESP_OK_DISPLAY_INFO,
         VIRTIO_GPU_RESP_OK_CAPSET_INFO,
         VIRTIO_GPU_RESP_OK_CAPSET,
         VIRTIO_GPU_RESP_OK_EDID,
+        VIRTIO_GPU_RESP_OK_EXPORT_RESOURCE,

         /* error responses */