[RFC,v2,0/1] fence per plane state

Message ID 20231003010013.15794-1-dongwon.kim@intel.com (mailing list archive)

Kim, Dongwon Oct. 3, 2023, 1 a.m. UTC
The patch "drm/virtio: new fence for every plane update" prevents a fence
synchronization problem that occurs when multiple planes reference a single large FB
(e.g. Xorg with multiple displays configured as one extended surface).

One example of a problematic flow:

1. virtio_gpu_plane_prepare_fb(plane_A) -> a fence for the FB is created (fence A).
2. virtio_gpu_resource_flush(plane_A) -> fence A is emitted, then the driver waits for
   fence A to be signaled.
3. virtio_gpu_plane_prepare_fb(plane_B) -> a new fence for the same FB is created
   (fence B) and FB->fence is replaced with fence B.
4. virtio_gpu_resource_flush(plane_A) -> fence A is signaled, but dma_fence_put is done
   on fence B because FB->fence is already fence B.
5. Fence A is never put and is not released for a long time, which stalls the guest
   display, and dmesg shows fence timeout errors.

The root cause of these problems is that the fence attached to the FB can be replaced
with a new one at any time another plane sharing the same FB is updated. The proposed
fix is therefore to allocate a new fence per plane state instead of per FB, as
described in the patch.

Tested system:

Host: QEMU + KVM on Linux running on a 12th Gen Intel CPU.
Guest: Ubuntu VM running Xorg with 2-3 virtual displays using blob scanouts.

Dongwon Kim (1):
  drm/virtio: new fence for every plane update

 drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 66 +++++++++++++++++---------
 2 files changed, 51 insertions(+), 22 deletions(-)