| Message ID | 20181105114152.2088-4-robert.foss@collabora.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | virgl: fence fd support |
On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
>
> When the execbuf call receives an in-fence it will get the dma_fence
> related to that fence fd and wait on it before submitting the draw call.
>
> On the out-fence side we get fence returned by the submitted draw call
> and attach it to a sync_file and send the sync_file fd to userspace. On
> error -1 is returned to userspace.
>
> Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
> Signed-off-by: Robert Foss <robert.foss@collabora.com>
> Suggested-by: Rob Herring <robh@kernel.org>
> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
> ---
>
> Changes since v3:
>  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block

Fwiw my suggestion was to explicitly document whether the IOCTL can
support, simultaneously, IN and OUT fence.

Merging the two patches makes things a bit meh. But as before - it's
for Gerd to make the final call.

-Emil
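The simultaneous IN/OUT case being discussed can be illustrated with a minimal
userspace sketch. The fence_fd field and the VIRTGPU_EXECBUF_FENCE_FD_IN/OUT
flags come from earlier patches in this series (not shown on this page), so the
names, header paths and helper below are assumptions for illustration, not
final uAPI.

/* Illustrative only: assumes the fence_fd field and the
 * VIRTGPU_EXECBUF_FENCE_FD_IN/OUT flags proposed by this series.
 * Error handling trimmed for brevity; header paths may differ. */
#include <stdint.h>
#include <xf86drm.h>
#include <libdrm/virtgpu_drm.h>

static int submit_with_fences(int drm_fd, void *cmds, uint32_t size,
                              uint32_t *bo_handles, uint32_t num_bos,
                              int in_fence_fd, int *out_fence_fd)
{
        struct drm_virtgpu_execbuffer eb = {
                .flags = VIRTGPU_EXECBUF_FENCE_FD_IN |
                         VIRTGPU_EXECBUF_FENCE_FD_OUT,
                .size = size,
                .command = (uintptr_t)cmds,
                .bo_handles = (uintptr_t)bo_handles,
                .num_bo_handles = num_bos,
                .fence_fd = in_fence_fd,      /* in: fence to wait on before submit */
        };
        int ret = drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &eb);

        if (ret == 0)
                *out_fence_fd = eb.fence_fd;  /* out: sync_file fd, or -1 on error */
        return ret;
}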
Heya,

On 2018-11-05 18:25, Emil Velikov wrote:
> On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
>>
>> When the execbuf call receives an in-fence it will get the dma_fence
>> related to that fence fd and wait on it before submitting the draw call.
>>
>> On the out-fence side we get fence returned by the submitted draw call
>> and attach it to a sync_file and send the sync_file fd to userspace. On
>> error -1 is returned to userspace.
>>
>> Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
>> Signed-off-by: Robert Foss <robert.foss@collabora.com>
>> Suggested-by: Rob Herring <robh@kernel.org>
>> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
>> ---
>>
>> Changes since v3:
>>  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
> Fwiw my suggestion was to explicitly document whether the IOCTL can
> support, simultaneously, IN and OUT fence.

I can't say I understood that part, but I'll happily spin another
revision with more documentation added.

But the change above does not entirely come from your feedback. While
looking into how other drivers do this, an issue in msm was found.
Since this code is being added to virtgpu now, we might as well do the
right thing here, and also end up with all of the in fence handling in
a single chunk.

> Merging the two patches makes things a bit meh. But as before - it's
> for Gerd to make the final call.
>
> -Emil
On Mon, Nov 05, 2018 at 05:25:05PM +0000, Emil Velikov wrote:
> On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
> >
> > When the execbuf call receives an in-fence it will get the dma_fence
> > related to that fence fd and wait on it before submitting the draw call.
> >
> > On the out-fence side we get fence returned by the submitted draw call
> > and attach it to a sync_file and send the sync_file fd to userspace. On
> > error -1 is returned to userspace.
> >
> > Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
> > Signed-off-by: Robert Foss <robert.foss@collabora.com>
> > Suggested-by: Rob Herring <robh@kernel.org>
> > Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
> > ---
> >
> > Changes since v3:
> >  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
> Fwiw my suggestion was to explicitly document whether the IOCTL can
> support, simultaneously, IN and OUT fence.

Yes, that would be good.  Code looks like it is supposed to work, but
explicitly saying so in the commit message would be nice.

Also: should we use separate fields for in/out fds?

cheers,
  Gerd
Hey Gerd,

On 2018-11-09 11:13, Gerd Hoffmann wrote:
> On Mon, Nov 05, 2018 at 05:25:05PM +0000, Emil Velikov wrote:
>> On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
>>>
>>> When the execbuf call receives an in-fence it will get the dma_fence
>>> related to that fence fd and wait on it before submitting the draw call.
>>>
>>> On the out-fence side we get fence returned by the submitted draw call
>>> and attach it to a sync_file and send the sync_file fd to userspace. On
>>> error -1 is returned to userspace.
>>>
>>> Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
>>> Signed-off-by: Robert Foss <robert.foss@collabora.com>
>>> Suggested-by: Rob Herring <robh@kernel.org>
>>> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
>>> ---
>>>
>>> Changes since v3:
>>>  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
>> Fwiw my suggestion was to explicitly document whether the IOCTL can
>> support, simultaneously, IN and OUT fence.
>
> Yes, that would be good.  Code looks like it is supposed to work, but
> explicitly saying so in the commit message would be nice.

On it! Will send out a v5.

>
> Also: should we use separate fields for in/out fds?

I'm not sure I understand which fields you're referring to.

>
> cheers,
>   Gerd
On Fri, Nov 09, 2018 at 06:13:52PM +0100, Robert Foss wrote:
> Hey Gerd,
>
> On 2018-11-09 11:13, Gerd Hoffmann wrote:
> > On Mon, Nov 05, 2018 at 05:25:05PM +0000, Emil Velikov wrote:
> > > On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
> > > >
> > > > When the execbuf call receives an in-fence it will get the dma_fence
> > > > related to that fence fd and wait on it before submitting the draw call.
> > > >
> > > > On the out-fence side we get fence returned by the submitted draw call
> > > > and attach it to a sync_file and send the sync_file fd to userspace. On
> > > > error -1 is returned to userspace.
> > > >
> > > > Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
> > > > Signed-off-by: Robert Foss <robert.foss@collabora.com>
> > > > Suggested-by: Rob Herring <robh@kernel.org>
> > > > Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
> > > > ---
> > > >
> > > > Changes since v3:
> > > >  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
> > > Fwiw my suggestion was to explicitly document whether the IOCTL can
> > > support, simultaneously, IN and OUT fence.
> >
> > Yes, that would be good.  Code looks like it is supposed to work, but
> > explicitly saying so in the commit message would be nice.
>
> On it! Will send out a v5.
>
> >
> > Also: should we use separate fields for in/out fds?
>
> I'm not sure I understand which fields you're referring to.

fence_in_fd & fence_out_fd in the ioctl struct (patch #2).

cheers,
  Gerd
Hey Gerd

On 2018-11-12 10:10, Gerd Hoffmann wrote:
> On Fri, Nov 09, 2018 at 06:13:52PM +0100, Robert Foss wrote:
>> Hey Gerd,
>>
>> On 2018-11-09 11:13, Gerd Hoffmann wrote:
>>> On Mon, Nov 05, 2018 at 05:25:05PM +0000, Emil Velikov wrote:
>>>> On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
>>>>>
>>>>> When the execbuf call receives an in-fence it will get the dma_fence
>>>>> related to that fence fd and wait on it before submitting the draw call.
>>>>>
>>>>> On the out-fence side we get fence returned by the submitted draw call
>>>>> and attach it to a sync_file and send the sync_file fd to userspace. On
>>>>> error -1 is returned to userspace.
>>>>>
>>>>> Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
>>>>> Signed-off-by: Robert Foss <robert.foss@collabora.com>
>>>>> Suggested-by: Rob Herring <robh@kernel.org>
>>>>> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
>>>>> ---
>>>>>
>>>>> Changes since v3:
>>>>>  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
>>>> Fwiw my suggestion was to explicitly document whether the IOCTL can
>>>> support, simultaneously, IN and OUT fence.
>>>
>>> Yes, that would be good.  Code looks like it is supposed to work, but
>>> explicitly saying so in the commit message would be nice.
>>
>> On it! Will send out a v5.
>>
>>>
>>> Also: should we use separate fields for in/out fds?
>>
>> I'm not sure I understand which fields you're referring to.
>
> fence_in_fd & fence_out_fd in the ioctl struct (patch #2).

I had a look into this and how other drivers are doing it.
msm[1] and etnaviv[2] seem to use the same dual-use variable.

There may be some value to keeping the virtgpu congruent with the other
drivers, but I'll leave the final call up to you.

[1] https://github.com/torvalds/linux/blob/master/include/uapi/drm/msm_drm.h#L223
[2] https://github.com/torvalds/linux/blob/master/include/uapi/drm/etnaviv_drm.h#L197

Rob.
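The dual-use pattern referenced here amounts to a single signed field that is
read as the in-fence fd when FENCE_FD_IN is set and overwritten with the
out-fence fd when FENCE_FD_OUT is set. The authoritative definition lives in
patch #2 of this series (not shown on this page), so the layout below is an
indicative sketch only.

/* Indicative sketch of the dual-use field discussed above; the real
 * definition is in patch #2 (include/uapi/drm/virtgpu_drm.h). Types
 * __u32/__u64/__s32 come from <linux/types.h>. */
struct drm_virtgpu_execbuffer {
        __u32 flags;            /* VIRTGPU_EXECBUF_FENCE_FD_IN / _OUT, ... */
        __u32 size;
        __u64 command;          /* void* */
        __u64 bo_handles;
        __u32 num_bo_handles;
        __s32 fence_fd;         /* in: fence fd to wait on (FENCE_FD_IN)
                                 * out: sync_file fd of this submit (FENCE_FD_OUT) */
};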
On Mon, Nov 12, 2018 at 11:30:57AM +0100, Robert Foss wrote:
> Hey Gerd
>
> On 2018-11-12 10:10, Gerd Hoffmann wrote:
> > On Fri, Nov 09, 2018 at 06:13:52PM +0100, Robert Foss wrote:
> > > Hey Gerd,
> > >
> > > On 2018-11-09 11:13, Gerd Hoffmann wrote:
> > > > On Mon, Nov 05, 2018 at 05:25:05PM +0000, Emil Velikov wrote:
> > > > > On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
> > > > > >
> > > > > > When the execbuf call receives an in-fence it will get the dma_fence
> > > > > > related to that fence fd and wait on it before submitting the draw call.
> > > > > >
> > > > > > On the out-fence side we get fence returned by the submitted draw call
> > > > > > and attach it to a sync_file and send the sync_file fd to userspace. On
> > > > > > error -1 is returned to userspace.
> > > > > >
> > > > > > Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
> > > > > > Signed-off-by: Robert Foss <robert.foss@collabora.com>
> > > > > > Suggested-by: Rob Herring <robh@kernel.org>
> > > > > > Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
> > > > > > ---
> > > > > >
> > > > > > Changes since v3:
> > > > > >  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
> > > > > Fwiw my suggestion was to explicitly document whether the IOCTL can
> > > > > support, simultaneously, IN and OUT fence.
> > > >
> > > > Yes, that would be good.  Code looks like it is supposed to work, but
> > > > explicitly saying so in the commit message would be nice.
> > >
> > > On it! Will send out a v5.
> > >
> > > >
> > > > Also: should we use separate fields for in/out fds?
> > >
> > > I'm not sure I understand which fields you're referring to.
> >
> > fence_in_fd & fence_out_fd in the ioctl struct (patch #2).
>
> I had a look into this and how other drivers are doing it.
> msm[1] and etnaviv[2] seem to use the same dual-use variable.

Ok, lets do it the same way then.
What is the status of the userspace side of this?

cheers,
  Gerd
On 2018-11-12 12:11, Gerd Hoffmann wrote:
> On Mon, Nov 12, 2018 at 11:30:57AM +0100, Robert Foss wrote:
>> Hey Gerd
>>
>> On 2018-11-12 10:10, Gerd Hoffmann wrote:
>>> On Fri, Nov 09, 2018 at 06:13:52PM +0100, Robert Foss wrote:
>>>> Hey Gerd,
>>>>
>>>> On 2018-11-09 11:13, Gerd Hoffmann wrote:
>>>>> On Mon, Nov 05, 2018 at 05:25:05PM +0000, Emil Velikov wrote:
>>>>>> On Mon, 5 Nov 2018 at 11:42, Robert Foss <robert.foss@collabora.com> wrote:
>>>>>>>
>>>>>>> When the execbuf call receives an in-fence it will get the dma_fence
>>>>>>> related to that fence fd and wait on it before submitting the draw call.
>>>>>>>
>>>>>>> On the out-fence side we get fence returned by the submitted draw call
>>>>>>> and attach it to a sync_file and send the sync_file fd to userspace. On
>>>>>>> error -1 is returned to userspace.
>>>>>>>
>>>>>>> Signed-off-by: Gustavo Padovan <gustavo.padovan@collabora.com>
>>>>>>> Signed-off-by: Robert Foss <robert.foss@collabora.com>
>>>>>>> Suggested-by: Rob Herring <robh@kernel.org>
>>>>>>> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
>>>>>>> ---
>>>>>>>
>>>>>>> Changes since v3:
>>>>>>>  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
>>>>>> Fwiw my suggestion was to explicitly document whether the IOCTL can
>>>>>> support, simultaneously, IN and OUT fence.
>>>>>
>>>>> Yes, that would be good.  Code looks like it is supposed to work, but
>>>>> explicitly saying so in the commit message would be nice.
>>>>
>>>> On it! Will send out a v5.
>>>>
>>>>>
>>>>> Also: should we use separate fields for in/out fds?
>>>>
>>>> I'm not sure I understand which fields you're referring to.
>>>
>>> fence_in_fd & fence_out_fd in the ioctl struct (patch #2).
>>
>> I had a look into this and how other drivers are doing it.
>> msm[1] and etnaviv[2] seem to use the same dual-use variable.
>
> Ok, lets do it the same way then.
> What is the status of the userspace side of this?

The patch is hosted here[1], but is as of yet unmerged.

I know the drm subsystem requires a userspace implementation of
all features, but does it have to be upstreamed first?

[1] https://gitlab.collabora.com/robertfoss/mesa/commits/virtio_fences_v3

Rob.
  Hi,

> > > I had a look into this and how other drivers are doing it.
> > > msm[1] and etnaviv[2] seem to use the same dual-use variable.
> >
> > Ok, lets do it the same way then.
> > What is the status of the userspace side of this?
>
> The patch is hosted here[1], but is as of yet unmerged.
>
> I know the drm subsystem requires a userspace implementation of
> all features, but does it have to be upstreamed first?

Which part is merged first doesn't matter much I think, but both kernel
and userspace side patches should be reviewed, especially the API bits,
so we don't end up with a bad API.

cheers,
  Gerd
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 3d497835b0f5..340f2513d829 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -28,6 +28,7 @@
 #include <drm/drmP.h>
 #include <drm/virtgpu_drm.h>
 #include <drm/ttm/ttm_execbuf_util.h>
+#include <linux/sync_file.h>
 
 #include "virtgpu_drv.h"
 
@@ -105,7 +106,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
 	struct drm_gem_object *gobj;
-	struct virtio_gpu_fence *fence;
+	struct virtio_gpu_fence *out_fence;
 	struct virtio_gpu_object *qobj;
 	int ret;
 	uint32_t *bo_handles = NULL;
@@ -114,6 +115,9 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	struct ttm_validate_buffer *buflist = NULL;
 	int i;
 	struct ww_acquire_ctx ticket;
+	struct sync_file *sync_file;
+	int in_fence_fd = exbuf->fence_fd;
+	int out_fence_fd = -1;
 	void *buf;
 
 	if (vgdev->has_virgl_3d == false)
@@ -124,6 +128,33 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 
 	exbuf->fence_fd = -1;
 
+	if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) {
+		struct dma_fence *in_fence;
+
+		in_fence = sync_file_get_fence(in_fence_fd);
+
+		if (!in_fence)
+			return -EINVAL;
+
+		/*
+		 * Wait if the fence is from a foreign context, or if the fence
+		 * array contains any fence from a foreign context.
+		 */
+		ret = 0;
+		if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context))
+			ret = dma_fence_wait(in_fence, true);
+
+		dma_fence_put(in_fence);
+		if (ret)
+			return ret;
+	}
+
+	if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) {
+		out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
+		if (out_fence_fd < 0)
+			return out_fence_fd;
+	}
+
 	INIT_LIST_HEAD(&validate_list);
 	if (exbuf->num_bo_handles) {
 
@@ -133,26 +164,22 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 					    sizeof(struct ttm_validate_buffer),
 					    GFP_KERNEL | __GFP_ZERO);
 		if (!bo_handles || !buflist) {
-			kvfree(bo_handles);
-			kvfree(buflist);
-			return -ENOMEM;
+			ret = -ENOMEM;
+			goto out_unused_fd;
 		}
 
 		user_bo_handles = (void __user *)(uintptr_t)exbuf->bo_handles;
 		if (copy_from_user(bo_handles, user_bo_handles,
 				   exbuf->num_bo_handles * sizeof(uint32_t))) {
 			ret = -EFAULT;
-			kvfree(bo_handles);
-			kvfree(buflist);
-			return ret;
+			goto out_unused_fd;
 		}
 
 		for (i = 0; i < exbuf->num_bo_handles; i++) {
 			gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
 			if (!gobj) {
-				kvfree(bo_handles);
-				kvfree(buflist);
-				return -ENOENT;
+				ret = -ENOENT;
+				goto out_unused_fd;
 			}
 
 			qobj = gem_to_virtio_gpu_obj(gobj);
@@ -161,6 +188,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 			list_add(&buflist[i].head, &validate_list);
 		}
 		kvfree(bo_handles);
+		bo_handles = NULL;
 	}
 
 	ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
@@ -174,28 +202,47 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 		goto out_unresv;
 	}
 
-	fence = virtio_gpu_fence_alloc(vgdev);
-	if (!fence) {
-		kfree(buf);
+	out_fence = virtio_gpu_fence_alloc(vgdev);
+	if(!out_fence) {
 		ret = -ENOMEM;
-		goto out_unresv;
+		goto out_memdup;
+	}
+
+	if (out_fence_fd >= 0) {
+		sync_file = sync_file_create(&out_fence->f);
+		if (!sync_file) {
+			dma_fence_put(&out_fence->f);
+			ret = -ENOMEM;
+			goto out_memdup;
+		}
+
+		exbuf->fence_fd = out_fence_fd;
+		fd_install(out_fence_fd, sync_file->file);
 	}
+
 	virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
-			      vfpriv->ctx_id, &fence);
+			      vfpriv->ctx_id, &out_fence);
 
-	ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
+	ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);
 
 	/* fence the command bo */
 	virtio_gpu_unref_list(&validate_list);
 	kvfree(buflist);
-	dma_fence_put(&fence->f);
 	return 0;
 
+out_memdup:
+	kfree(buf);
 out_unresv:
 	ttm_eu_backoff_reservation(&ticket, &validate_list);
 out_free:
 	virtio_gpu_unref_list(&validate_list);
+out_unused_fd:
+	kvfree(bo_handles);
 	kvfree(buflist);
+
+	if (out_fence_fd >= 0)
+		put_unused_fd(out_fence_fd);
+
 	return ret;
 }
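On the userspace side, the sync_file fd written back into fence_fd can be
waited on with plain poll(), since a sync_file becomes readable once its fence
signals. A minimal consumer sketch (the helper name is illustrative, not part
of this series):

/* Minimal sketch: wait for the out-fence sync_file fd returned by the
 * execbuffer ioctl. A sync_file fd reports POLLIN once the fence signals. */
#include <errno.h>
#include <poll.h>

static int wait_out_fence(int out_fence_fd, int timeout_ms)
{
        struct pollfd pfd = { .fd = out_fence_fd, .events = POLLIN };
        int ret;

        do {
                ret = poll(&pfd, 1, timeout_ms);
        } while (ret < 0 && errno == EINTR);

        return ret == 1 ? 0 : -1;   /* 0 = signalled, -1 = timeout or error */
}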