From patchwork Wed Jul 12 22:44:22 2023
X-Patchwork-Submitter: "Kim, Dongwon"
X-Patchwork-Id: 13310989
From: Dongwon Kim
To: dri-devel@lists.freedesktop.org
Cc: Vivek Kasireddy, kraxel@redhat.com, Dongwon Kim
Subject: [RFC PATCH 1/3] drm/virtio: .release ops for virtgpu fence release
Date: Wed, 12 Jul 2023 15:44:22 -0700
Message-Id: <20230712224424.30158-2-dongwon.kim@intel.com>
In-Reply-To: <20230712224424.30158-1-dongwon.kim@intel.com>
References: <20230712224424.30158-1-dongwon.kim@intel.com>

virtio_gpu_fence_release is added to free the virtio_gpu_fence upon
release of its dma_fence.
Cc: Gerd Hoffmann
Cc: Vivek Kasireddy
Signed-off-by: Dongwon Kim
---
 drivers/gpu/drm/virtio/virtgpu_fence.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
index f28357dbde35..ba659ac2a51d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_fence.c
+++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
@@ -63,12 +63,20 @@ static void virtio_gpu_timeline_value_str(struct dma_fence *f, char *str,
 		 (u64)atomic64_read(&fence->drv->last_fence_id));
 }
 
+static void virtio_gpu_fence_release(struct dma_fence *f)
+{
+	struct virtio_gpu_fence *fence = to_virtio_gpu_fence(f);
+
+	kfree(fence);
+}
+
 static const struct dma_fence_ops virtio_gpu_fence_ops = {
 	.get_driver_name     = virtio_gpu_get_driver_name,
 	.get_timeline_name   = virtio_gpu_get_timeline_name,
 	.signaled            = virtio_gpu_fence_signaled,
 	.fence_value_str     = virtio_gpu_fence_value_str,
 	.timeline_value_str  = virtio_gpu_timeline_value_str,
+	.release             = virtio_gpu_fence_release,
 };
 
 struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev,
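A minimal sketch of how the new .release hook is reached, assuming the
usual dma-fence refcounting; example_drop_fence() is a hypothetical
helper and not part of this patch:

	static void example_drop_fence(struct virtio_gpu_fence *fence)
	{
		/*
		 * Drop the caller's reference.  If it was the last one,
		 * dma_fence_release() invokes fence->f.ops->release, i.e.
		 * virtio_gpu_fence_release(), which container_of()s back
		 * to the virtio_gpu_fence and kfree()s it.  Without a
		 * .release op the core would instead fall back to
		 * dma_fence_free() on the bare dma_fence.
		 */
		dma_fence_put(&fence->f);
	}
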
From patchwork Wed Jul 12 22:44:23 2023
X-Patchwork-Submitter: "Kim, Dongwon"
X-Patchwork-Id: 13310990
From: Dongwon Kim
To: dri-devel@lists.freedesktop.org
Cc: Vivek Kasireddy, kraxel@redhat.com, Dongwon Kim
Subject: [RFC PATCH 2/3] drm/virtio: new fence for every plane update
Date: Wed, 12 Jul 2023 15:44:23 -0700
Message-Id: <20230712224424.30158-3-dongwon.kim@intel.com>
In-Reply-To: <20230712224424.30158-1-dongwon.kim@intel.com>
References: <20230712224424.30158-1-dongwon.kim@intel.com>

Linking the fence to the virtio_gpu_framebuffer during a plane update
causes a conflict when several planes reference the same framebuffer,
especially when those planes are updated concurrently (e.g. an Xorg
screen covering multiple displays configured for an extended mode). It
is therefore better to create a new fence for every plane update event
and link it to the plane state, since each plane update comes with a
new plane state object.

A virtio-gpu specific plane state, "struct virtio_gpu_plane_state", is
added for this. It wraps drm_plane_state and carries the reference to
the virtio_gpu_fence that previously lived in "struct
virtio_gpu_framebuffer". "virtio_gpu_plane_duplicate_state" and
"virtio_gpu_plane_destroy_state" are added as well to manage
virtio_gpu_plane_state. Several DRM helpers are modified accordingly to
use the fence from the new plane state structure.
virtio_gpu_plane_cleanup_fb is removed entirely since none of its code
is needed anymore. Also, the condition for adding the fence,
(plane->state->fb != new_state->fb), is dropped so that the FB update
stays synchronous even when the same FB is flushed again consecutively.

Cc: Gerd Hoffmann
Cc: Vivek Kasireddy
Signed-off-by: Dongwon Kim
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 76 +++++++++++++++-----------
 2 files changed, 51 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 4126c384286b..61fd37f95fbd 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -191,6 +191,13 @@ struct virtio_gpu_framebuffer {
 #define to_virtio_gpu_framebuffer(x) \
 	container_of(x, struct virtio_gpu_framebuffer, base)
 
+struct virtio_gpu_plane_state {
+	struct drm_plane_state base;
+	struct virtio_gpu_fence *fence;
+};
+#define to_virtio_gpu_plane_state(x) \
+	container_of(x, struct virtio_gpu_plane_state, base)
+
 struct virtio_gpu_queue {
 	struct virtqueue *vq;
 	spinlock_t qlock;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a2e045f3a000..a063f06ab6c5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -66,12 +66,36 @@ uint32_t virtio_gpu_translate_format(uint32_t drm_fourcc)
 	return format;
 }
 
+static struct
+drm_plane_state *virtio_gpu_plane_duplicate_state(struct drm_plane *plane)
+{
+	struct virtio_gpu_plane_state *new;
+
+	if (WARN_ON(!plane->state))
+		return NULL;
+
+	new = kzalloc(sizeof(*new), GFP_KERNEL);
+	if (!new)
+		return NULL;
+
+	__drm_atomic_helper_plane_duplicate_state(plane, &new->base);
+
+	return &new->base;
+}
+
+static void virtio_gpu_plane_destroy_state(struct drm_plane *plane,
+					   struct drm_plane_state *state)
+{
+	__drm_atomic_helper_plane_destroy_state(state);
+	kfree(to_virtio_gpu_plane_state(state));
+}
+
 static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
 	.update_plane		= drm_atomic_helper_update_plane,
 	.disable_plane		= drm_atomic_helper_disable_plane,
 	.reset			= drm_atomic_helper_plane_reset,
-	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
-	.atomic_destroy_state	= drm_atomic_helper_plane_destroy_state,
+	.atomic_duplicate_state = virtio_gpu_plane_duplicate_state,
+	.atomic_destroy_state	= virtio_gpu_plane_destroy_state,
 };
 
 static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
@@ -128,11 +152,13 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
 	struct drm_device *dev = plane->dev;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 	struct virtio_gpu_object *bo;
 
 	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	vgplane_st = to_virtio_gpu_plane_state(plane->state);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (vgfb->fence) {
+	if (vgplane_st->fence) {
 		struct virtio_gpu_object_array *objs;
 
 		objs = virtio_gpu_array_alloc(1);
@@ -141,13 +167,12 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
 		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
 		virtio_gpu_array_lock_resv(objs);
 		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
-					      width, height, objs, vgfb->fence);
+					      width, height, objs,
+					      vgplane_st->fence);
 		virtio_gpu_notify(vgdev);
-
-		dma_fence_wait_timeout(&vgfb->fence->f, true,
+		dma_fence_wait_timeout(&vgplane_st->fence->f, true,
 				       msecs_to_jiffies(50));
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
+		dma_fence_put(&vgplane_st->fence->f);
 	} else {
 		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
 					      width, height, NULL, NULL);
@@ -237,41 +262,29 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 	struct drm_device *dev = plane->dev;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 	struct virtio_gpu_object *bo;
 
 	if (!new_state->fb)
 		return 0;
 
 	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
+	vgplane_st = to_virtio_gpu_plane_state(new_state);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
 	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
 		return 0;
 
-	if (bo->dumb && (plane->state->fb != new_state->fb)) {
-		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
+	if (bo->dumb) {
+		vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
+							   vgdev->fence_drv.context,
 						     0);
-		if (!vgfb->fence)
+		if (!vgplane_st->fence)
 			return -ENOMEM;
 	}
 
 	return 0;
 }
 
-static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *state)
-{
-	struct virtio_gpu_framebuffer *vgfb;
-
-	if (!state->fb)
-		return;
-
-	vgfb = to_virtio_gpu_framebuffer(state->fb);
-	if (vgfb->fence) {
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
-	}
-}
-
 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 					   struct drm_atomic_state *state)
 {
@@ -281,6 +294,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_output *output = NULL;
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 	struct virtio_gpu_object *bo = NULL;
 	uint32_t handle;
 
@@ -293,6 +307,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 
 	if (plane->state->fb) {
 		vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+		vgplane_st = to_virtio_gpu_plane_state(plane->state);
 		bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
 		handle = bo->hw_res_handle;
 	} else {
@@ -312,11 +327,10 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 			(vgdev, 0,
 			 plane->state->crtc_w,
 			 plane->state->crtc_h,
-			 0, 0, objs, vgfb->fence);
+			 0, 0, objs, vgplane_st->fence);
 		virtio_gpu_notify(vgdev);
-		dma_fence_wait(&vgfb->fence->f, true);
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
+		dma_fence_wait(&vgplane_st->fence->f, true);
+		dma_fence_put(&vgplane_st->fence->f);
 	}
 
 	if (plane->state->fb != old_state->fb) {
@@ -351,14 +365,12 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 
 static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
 	.prepare_fb		= virtio_gpu_plane_prepare_fb,
-	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_primary_plane_update,
 };
 
 static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
 	.prepare_fb		= virtio_gpu_plane_prepare_fb,
-	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_cursor_plane_update,
 };
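A minimal sketch of the access pattern this enables, assuming a plane
in the middle of an atomic commit; example_fence_of() is a hypothetical
helper and not part of this patch:

	static struct virtio_gpu_fence *example_fence_of(struct drm_plane *plane)
	{
		/*
		 * plane->state was produced by virtio_gpu_plane_duplicate_state()
		 * for this commit, so the fence stored here belongs to exactly
		 * one plane update and cannot be clobbered by another plane
		 * that scans out the same framebuffer.
		 */
		struct virtio_gpu_plane_state *vgplane_st =
			to_virtio_gpu_plane_state(plane->state);

		return vgplane_st->fence;
	}
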
From patchwork Wed Jul 12 22:44:24 2023
X-Patchwork-Submitter: "Kim, Dongwon"
X-Patchwork-Id: 13310991
From: Dongwon Kim
To: dri-devel@lists.freedesktop.org
Cc: Vivek Kasireddy, kraxel@redhat.com, Dongwon Kim
Subject: [RFC PATCH 3/3] drm/virtio: drm_gem_plane_helper_prepare_fb for obj synchronization
Date: Wed, 12 Jul 2023 15:44:24 -0700
Message-Id: <20230712224424.30158-4-dongwon.kim@intel.com>
In-Reply-To: <20230712224424.30158-1-dongwon.kim@intel.com>
References: <20230712224424.30158-1-dongwon.kim@intel.com>

This helper is needed for framebuffer synchronization: without it,
stale framebuffer data is often displayed on the guest display.

Cc: Gerd Hoffmann
Cc: Vivek Kasireddy
Signed-off-by: Dongwon Kim
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a063f06ab6c5..e197299489ce 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -26,6 +26,7 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
 
 #include "virtgpu_drv.h"
 
@@ -271,6 +272,9 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
 	vgplane_st = to_virtio_gpu_plane_state(new_state);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
+	drm_gem_plane_helper_prepare_fb(plane, new_state);
+
 	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
 		return 0;
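A rough, simplified sketch of what drm_gem_plane_helper_prepare_fb()
does conceptually; the real helper lives in the DRM GEM atomic helpers,
and example_prepare_fb() below is only illustrative, not its actual
implementation:

	static int example_prepare_fb(struct drm_plane *plane,
				      struct drm_plane_state *state)
	{
		struct drm_gem_object *obj;
		struct dma_fence *fence;
		int ret;

		if (!state->fb)
			return 0;

		obj = drm_gem_fb_get_obj(state->fb, 0);
		/* Pick up the implicit (dma_resv) fence left by the renderer... */
		ret = dma_resv_get_singleton(obj->resv, DMA_RESV_USAGE_WRITE, &fence);
		if (ret)
			return ret;
		/*
		 * ...and let the atomic commit wait on it before flushing the
		 * plane, so stale framebuffer contents are not scanned out.
		 */
		drm_atomic_set_fence_for_plane(state, fence);
		return 0;
	}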