Message ID | 20190130094338.14203-6-kraxel@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | drm/virtio: ttm improvements |
On 30.01.2019 10:43, Gerd Hoffmann wrote:
> There is no need to wait for completion here.
>
> The host will process commands in submit order, so commands can
> reference the new resource just fine even when queued up before
> completion.
>
> On the guest side there is no need to wait for completion either. Which
> btw is different from resource destroy, where we have to make sure the
> host has seen the destroy and thus doesn't use it any more before
> releasing the pages backing the resource.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---

Acked-by: Noralf Trønnes <noralf@tronnes.org>
On Wed, Jan 30, 2019 at 1:43 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> There is no need to wait for completion here.
>
> The host will process commands in submit order, so commands can
> reference the new resource just fine even when queued up before
> completion.

Does virtio_gpu_execbuffer_ioctl also wait for completion for a host response?

From the guest driver perspective, a fence is just implemented as a
virtio 3D resource:

https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/gallium/winsys/virgl/drm/virgl_drm_winsys.c#L787

The DRM_IOCTL_VIRTGPU_WAIT ioctl essentially waits for the reservation
objects associated with that fence resource to become available. So
the flow is:

virtio_gpu_execbuffer_ioctl
virtio_gpu_resource_create_ioctl with fence resource
virtio_gpu_wait_ioctl with that fence resource --> associated with a
GL wait on the host side

Does this change modify this sequence of events?

> On the guest side there is no need to wait for completion either. Which
> btw is different from resource destroy, where we have to make sure the
> host has seen the destroy and thus doesn't use it any more before
> releasing the pages backing the resource.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c | 51 +---------------------------------
>  1 file changed, 1 insertion(+), 50 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index 431e5d767e..da06ebbb3a 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -279,10 +279,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
>  	struct virtio_gpu_object *qobj;
>  	struct drm_gem_object *obj;
>  	uint32_t handle = 0;
> -	struct list_head validate_list;
> -	struct ttm_validate_buffer mainbuf;
> -	struct virtio_gpu_fence *fence = NULL;
> -	struct ww_acquire_ctx ticket;
>  	struct virtio_gpu_object_params params = { 0 };
>
>  	if (vgdev->has_virgl_3d == false) {
> @@ -298,9 +294,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
>  		return -EINVAL;
>  	}
>
> -	INIT_LIST_HEAD(&validate_list);
> -	memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
> -
>  	params.pinned = false,
>  	params.format = rc->format;
>  	params.width = rc->width;
> @@ -329,62 +322,20 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
>
>  		ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
>  	} else {
> -		/* use a gem reference since unref list undoes them */
> -		drm_gem_object_get(&qobj->gem_base);
> -		mainbuf.bo = &qobj->tbo;
> -		list_add(&mainbuf.head, &validate_list);
> -
> -		ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
> -		if (ret) {
> -			DRM_DEBUG("failed to validate\n");
> -			goto fail_unref;
> -		}
> -
> -		fence = virtio_gpu_fence_alloc(vgdev);
> -		if (!fence) {
> -			ret = -ENOMEM;
> -			goto fail_backoff;
> -		}
> -
>  		virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &params);
> -		ret = virtio_gpu_object_attach(vgdev, qobj, fence);
> -		if (ret) {
> -			dma_fence_put(&fence->f);
> -			goto fail_backoff;
> -		}
> -		ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
> +		ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
>  	}
>
>  	ret = drm_gem_handle_create(file_priv, obj, &handle);
>  	if (ret) {
> -
>  		drm_gem_object_release(obj);
> -		if (vgdev->has_virgl_3d) {
> -			virtio_gpu_unref_list(&validate_list);
> -			dma_fence_put(&fence->f);
> -		}
>  		return ret;
>  	}
>  	drm_gem_object_put_unlocked(obj);
>
>  	rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
>  	rc->bo_handle = handle;
> -
> -	if (vgdev->has_virgl_3d) {
> -		virtio_gpu_unref_list(&validate_list);
> -		dma_fence_put(&fence->f);
> -	}
>  	return 0;
> -fail_backoff:
> -	ttm_eu_backoff_reservation(&ticket, &validate_list);
> -fail_unref:
> -	if (vgdev->has_virgl_3d) {
> -		virtio_gpu_unref_list(&validate_list);
> -		dma_fence_put(&fence->f);
> -	}
> -//fail_obj:
> -//	drm_gem_object_handle_unreference_unlocked(obj);
> -	return ret;
>  }
>
>  static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
> --
> 2.9.3
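[Editor's note: the guest userspace flow described above maps onto three virtgpu ioctls. The sketch below is illustrative only: it assumes the uAPI structs from include/uapi/drm/virtgpu_drm.h, the header path, helper name and field values are made up for the example rather than taken from Mesa's virgl winsys, and error handling is omitted.]

/*
 * Hypothetical illustration of the flow above, loosely following the
 * virtgpu uAPI in include/uapi/drm/virtgpu_drm.h.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <libdrm/virtgpu_drm.h>

static void submit_and_fence(int drm_fd, void *cmds, uint32_t cmds_size)
{
	/* 1. virtio_gpu_execbuffer_ioctl: queue the command stream. */
	struct drm_virtgpu_execbuffer eb = {
		.command = (uintptr_t)cmds,
		.size    = cmds_size,
	};
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb);

	/* 2. virtio_gpu_resource_create_ioctl: create a tiny dummy
	 *    resource that userspace treats as its fence. */
	struct drm_virtgpu_resource_create rc = {
		.target     = 2,   /* illustrative values only */
		.format     = 1,
		.width      = 1,
		.height     = 1,
		.depth      = 1,
		.array_size = 1,
		.size       = 4,
	};
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &rc);

	/* 3. virtio_gpu_wait_ioctl on the fence resource's BO: with the
	 *    pre-patch code this returns only after the host has processed
	 *    the create, and therefore everything queued before it. */
	struct drm_virtgpu_3d_wait wait = { .handle = rc.bo_handle };
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_WAIT, &wait);
}

Whether step 3 still blocks after this patch is exactly the question raised above.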
On Thu, Jan 31, 2019 at 11:04:31AM -0800, Gurchetan Singh wrote:
> On Wed, Jan 30, 2019 at 1:43 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > There is no need to wait for completion here.
> >
> > The host will process commands in submit order, so commands can
> > reference the new resource just fine even when queued up before
> > completion.
>
> Does virtio_gpu_execbuffer_ioctl also wait for completion for a host response?

No.  But you pass in a list of the objects used
(drm_virtgpu_execbuffer->bo_handles).  They will all get reserved, so you can
use DRM_IOCTL_VIRTGPU_WAIT on any of these objects to wait for completion.

Recently the driver got support for returning a fence fd for the execbuffer,
which you can also use to wait for completion in case your kernel is new
enough.

> From the guest driver perspective, a fence is just implemented as a
> virtio 3D resource.
>
> https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/gallium/winsys/virgl/drm/virgl_drm_winsys.c#L787
>
> The DRM_IOCTL_VIRTGPU_WAIT ioctl essentially waits for the reservation
> objects associated with that fence resource to become available. So
> the flow is:
>
> virtio_gpu_execbuffer_ioctl
> virtio_gpu_resource_create_ioctl with fence resource
> virtio_gpu_wait_ioctl with that fence resource --> associated with a
> GL wait on the host side

Oh.  /me looks surprised.  Wasn't aware that userspace is doing *that*.

> Does this change modify this sequence of events?

Yes.  DRM_IOCTL_VIRTGPU_WAIT will not wait any more.

Guess I have to scratch the patch then ...

cheers,
  Gerd
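[Editor's note: the two alternatives Gerd points at (waiting on one of the execbuffer's bo_handles, or on a fence fd returned by a newer kernel) might look roughly like this. The struct and flag names follow include/uapi/drm/virtgpu_drm.h as far as I know; the helper name and header path are assumptions, error handling is omitted, and option 2 assumes a kernel with VIRTGPU_EXECBUF_FENCE_FD_OUT support.]

/*
 * Hypothetical sketch of the two alternatives mentioned above.
 */
#include <poll.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <libdrm/virtgpu_drm.h>

static void wait_without_fence_resource(int drm_fd, void *cmds,
					uint32_t cmds_size, uint32_t bo_handle)
{
	uint32_t handles[] = { bo_handle };
	struct drm_virtgpu_execbuffer eb = {
		.flags          = VIRTGPU_EXECBUF_FENCE_FD_OUT, /* newer kernels */
		.command        = (uintptr_t)cmds,
		.size           = cmds_size,
		.bo_handles     = (uintptr_t)handles,
		.num_bo_handles = 1,
	};
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb);

	/* Option 1: every BO in bo_handles gets the submission's fence
	 * attached, so waiting on any of them waits for the execbuffer. */
	struct drm_virtgpu_3d_wait wait = { .handle = bo_handle };
	ioctl(drm_fd, DRM_IOCTL_VIRTGPU_WAIT, &wait);

	/* Option 2: the returned sync_file fd becomes readable once the
	 * fence has signalled, so a plain poll() waits for completion. */
	struct pollfd pfd = { .fd = eb.fence_fd, .events = POLLIN };
	poll(&pfd, 1, -1);
	close(eb.fence_fd);
}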
  Hi,

> virtio_gpu_execbuffer_ioctl
> virtio_gpu_resource_create_ioctl with fence resource
> virtio_gpu_wait_ioctl with that fence resource --> associated with a
> GL wait on the host side

After a long break (sidetracked with other stuff) I've finally sent v3
which should address this.

cheers,
  Gerd
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 431e5d767e..da06ebbb3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -279,10 +279,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 	struct virtio_gpu_object *qobj;
 	struct drm_gem_object *obj;
 	uint32_t handle = 0;
-	struct list_head validate_list;
-	struct ttm_validate_buffer mainbuf;
-	struct virtio_gpu_fence *fence = NULL;
-	struct ww_acquire_ctx ticket;
 	struct virtio_gpu_object_params params = { 0 };
 
 	if (vgdev->has_virgl_3d == false) {
@@ -298,9 +294,6 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}
 
-	INIT_LIST_HEAD(&validate_list);
-	memset(&mainbuf, 0, sizeof(struct ttm_validate_buffer));
-
 	params.pinned = false,
 	params.format = rc->format;
 	params.width = rc->width;
@@ -329,62 +322,20 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 
 		ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
 	} else {
-		/* use a gem reference since unref list undoes them */
-		drm_gem_object_get(&qobj->gem_base);
-		mainbuf.bo = &qobj->tbo;
-		list_add(&mainbuf.head, &validate_list);
-
-		ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-		if (ret) {
-			DRM_DEBUG("failed to validate\n");
-			goto fail_unref;
-		}
-
-		fence = virtio_gpu_fence_alloc(vgdev);
-		if (!fence) {
-			ret = -ENOMEM;
-			goto fail_backoff;
-		}
-
 		virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &params);
-		ret = virtio_gpu_object_attach(vgdev, qobj, fence);
-		if (ret) {
-			dma_fence_put(&fence->f);
-			goto fail_backoff;
-		}
-		ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
+		ret = virtio_gpu_object_attach(vgdev, qobj, NULL);
 	}
 
 	ret = drm_gem_handle_create(file_priv, obj, &handle);
 	if (ret) {
-
 		drm_gem_object_release(obj);
-		if (vgdev->has_virgl_3d) {
-			virtio_gpu_unref_list(&validate_list);
-			dma_fence_put(&fence->f);
-		}
 		return ret;
 	}
 	drm_gem_object_put_unlocked(obj);
 
 	rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
 	rc->bo_handle = handle;
-
-	if (vgdev->has_virgl_3d) {
-		virtio_gpu_unref_list(&validate_list);
-		dma_fence_put(&fence->f);
-	}
 	return 0;
-fail_backoff:
-	ttm_eu_backoff_reservation(&ticket, &validate_list);
-fail_unref:
-	if (vgdev->has_virgl_3d) {
-		virtio_gpu_unref_list(&validate_list);
-		dma_fence_put(&fence->f);
-	}
-//fail_obj:
-//	drm_gem_object_handle_unreference_unlocked(obj);
-	return ret;
 }
 
 static int virtio_gpu_resource_info_ioctl(struct drm_device *dev, void *data,
There is no need to wait for completion here.

The host will process commands in submit order, so commands can
reference the new resource just fine even when queued up before
completion.

On the guest side there is no need to wait for completion either. Which
btw is different from resource destroy, where we have to make sure the
host has seen the destroy and thus doesn't use it any more before
releasing the pages backing the resource.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 51 +---------------------------------
 1 file changed, 1 insertion(+), 50 deletions(-)