[1/2] drm/virtio: Create Dumb BOs as guest Blobs

Message ID 20210331030439.1564032-1-vivek.kasireddy@intel.com
State New, archived
Series [1/2] drm/virtio: Create Dumb BOs as guest Blobs

Commit Message

Kasireddy, Vivek March 31, 2021, 3:04 a.m. UTC
If support for Blob resources is available, then dumb BOs created
by the driver can be considered as guest Blobs. And, for guest
Blobs, there is no need to do any transfers or flushes, but we do
need to do set_scanout even if the FB has not changed as part of
plane updates.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_gem.c    |  8 ++++++++
 drivers/gpu/drm/virtio/virtgpu_object.c |  3 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c  | 18 +++++++++++-------
 3 files changed, 22 insertions(+), 7 deletions(-)

Comments

Gerd Hoffmann March 31, 2021, 7:41 a.m. UTC | #1
On Tue, Mar 30, 2021 at 08:04:38PM -0700, Vivek Kasireddy wrote:
> If support for Blob resources is available, then dumb BOs created
> by the driver can be considered as guest Blobs. And, for guest
> Blobs, there is no need to do any transfers or flushes

No.  VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in
virtio terms) *can* create a shared mapping.  So, the guest still
needs to send transfer commands, and then the device can shortcut the
transfer commands on the host side in case a shared mapping exists.

Flush commands are still needed for dirty tracking.
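
In driver terms the plane update path keeps both calls, even for a
guest blob.  Roughly (a sketch only, reusing the helpers from your
patch, not the exact driver code):

	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);

	/* transfer stays: the device *may* have no shared mapping,
	 * in which case it still copies from guest memory */
	if (bo->dumb)
		virtio_gpu_update_dumb_bo(vgdev, plane->state, &rect);

	/* flush stays: the device uses it for dirty tracking */
	virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
				      rect.x1, rect.y1,
				      rect.x2 - rect.x1,
				      rect.y2 - rect.y1);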

> but we do need to do set_scanout even if the FB has not changed as
> part of plane updates.

Sounds like you workaround host bugs.  This should not be needed with
properly implemented flush.

take care,
  Gerd
Kasireddy, Vivek April 1, 2021, 3:51 a.m. UTC | #2
Hi Gerd,

> > If support for Blob resources is available, then dumb BOs created by
> > the driver can be considered as guest Blobs. And, for guest Blobs,
> > there is no need to do any transfers or flushes
> 
> No.  VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in virtio
> terms) *can* create a shared mapping.  So, the guest still needs to send transfer
> commands, and then the device can shortcut the transfer commands on the host side in
> case a shared mapping exists.
[Kasireddy, Vivek] Ok. IIUC, are you saying that the device may or may not create a shared
mapping (meaning res->image) and that the driver should not make any assumptions about
that, and should thus still do the transfers and flushes?

Also, could you please briefly explain what VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE
means, given that the spec does not describe these blob_flags clearly? This is what the spec says:

"The driver MUST inform the device if the blob resource is used for
memory access, sharing between driver instances and/or sharing with
other devices. This is done via the \field{blob_flags} field."

And, what should be the default blob_flags value for a dumb bo if userspace does not
specify one?

> 
> flush commands are still needed for dirty tracking.
> 
> > but we do need to do set_scanout even if the FB has not changed as
> > part of plane updates.
> 
> Sounds like you workaround host bugs.  This should not be needed with properly
> implemented flush.
[Kasireddy, Vivek] With the patches I tested with:
https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg09786.html

I noticed that if we do not have res->image and only have res->blob, we have to
re-submit the blob/dmabuf and update the displaysurface if the guest made updates to it
(in this case, the same FB), which can only happen if we call set_scanout_blob. IIUC, flush
only marks the area as dirty but does not re-submit the updated buffer/blob, and I see
a flicker if I let it do dpy_gfx_update().

Thanks,
Vivek

> 
> take care,
>   Gerd
Gerd Hoffmann April 1, 2021, 6:53 a.m. UTC | #3
Hi,

> > No.  VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in virtio
> > terms) *can* create a shared mapping.  So, the guest still needs to send transfer
> > commands, and then the device can shortcut the transfer commands on the host side in
> > case a shared mapping exists.
> [Kasireddy, Vivek] Ok. IIUC, are you saying that the device may or may not create a shared
> mapping (meaning res->image) and that the driver should not make any assumptions about
> that, and should thus still do the transfers and flushes?

Yes.

> Also, could you please briefly explain what
> VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE means, given that the spec does not
> describe these blob_flags clearly? This is what the spec says:

This matters for VIRTIO_GPU_BLOB_MEM_HOST3D resources only.
VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE indicates the guest wants to map the
resource for CPU access.  When the flag is not set, only GPU access is
needed.
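
For example (userspace sketch; this assumes the uapi names from
virtgpu_drm.h and a blob_id obtained beforehand via execbuffer):

	struct drm_virtgpu_resource_create_blob blob = {
		.blob_mem   = VIRTGPU_BLOB_MEM_HOST3D,
		.blob_flags = VIRTGPU_BLOB_FLAG_USE_MAPPABLE, /* cpu access */
		.size       = size,
		.blob_id    = blob_id,
	};

	/* without USE_MAPPABLE, mmap'ing the resulting bo is expected
	 * to fail; the resource is then for gpu access only */
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &blob);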

> And, what should be the default blob_flags value for a dumb bo if
> userspace does not specify one?

Just VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE is fine for
VIRTIO_GPU_BLOB_MEM_GUEST.

> [Kasireddy, Vivek] With the patches I tested with:
> https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg09786.html

Saw the series, not looked in detail yet.

> I noticed that if we do not have res->image and only have res->blob, we have to
> re-submit the blob/dmabuf and update the displaysurface if the guest made updates to it
> (in this case, the same FB)

flush must call dpy_gfx_update() or dpy_gl_update().
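
That is, something along these lines on the device side (sketch; the
real code in hw/display/virtio-gpu.c also clips the rectangle to the
scanout):

	if (scanout->resource_id == rf.resource_id) {
		/* repaint the dirty rectangle from the DisplaySurface */
		dpy_gfx_update(scanout->con, rf.r.x, rf.r.y,
			       rf.r.width, rf.r.height);
	}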

Oh, and make sure you use qemu master (or 6.0-rc).  In 5.2 and older,
display updates might not work properly due to a missing glFlush()
in qemu.

take care,
  Gerd

Patch

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 8502400b2f9c..5f49fb1cce65 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -64,6 +64,7 @@  int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 {
 	struct drm_gem_object *gobj;
 	struct virtio_gpu_object_params params = { 0 };
+	struct virtio_gpu_device *vgdev = dev->dev_private;
 	int ret;
 	uint32_t pitch;
 
@@ -79,6 +80,13 @@  int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 	params.height = args->height;
 	params.size = args->size;
 	params.dumb = true;
+
+	if (vgdev->has_resource_blob) {
+		params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
+		params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
+		params.blob = true;
+	}
+
 	ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
 				    &args->handle);
 	if (ret)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 4ff1ec28e630..f648b0e24447 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -254,6 +254,9 @@  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	}
 
 	if (params->blob) {
+		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
+			bo->guest_blob = true;
+
 		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
 						    ents, nents);
 	} else if (params->virgl) {
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 4e1b17548007..3731f1a6477d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -161,10 +161,11 @@  static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 		return;
 
 	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
-	if (bo->dumb)
+	if (bo->dumb && !bo->guest_blob)
 		virtio_gpu_update_dumb_bo(vgdev, plane->state, &rect);
 
-	if (plane->state->fb != old_state->fb ||
+	if ((bo->dumb && bo->guest_blob) ||
+	    plane->state->fb != old_state->fb ||
 	    plane->state->src_w != old_state->src_w ||
 	    plane->state->src_h != old_state->src_h ||
 	    plane->state->src_x != old_state->src_x ||
@@ -198,11 +199,14 @@  static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 		}
 	}
 
-	virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
-				      rect.x1,
-				      rect.y1,
-				      rect.x2 - rect.x1,
-				      rect.y2 - rect.y1);
+	if (!bo->guest_blob) {
+		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
+					      rect.x1,
+					      rect.y1,
+					      rect.x2 - rect.x1,
+					      rect.y2 - rect.y1);
+	}
+
 	virtio_gpu_notify(vgdev);
 }