Message ID | 20210909013717.861-9-gurchetansingh@chromium.org (mailing list archive)
---|---
State | New, archived
Series | Context types
On Wed, Sep 08, 2021 at 06:37:13PM -0700, Gurchetan Singh wrote:
> The plumbing is all here to do this. Since we always use the
> default fence context when allocating a fence, this makes no
> functional difference.
>
> We can't process just the largest fence id anymore, since it's
> associated with different timelines. It's fine for fence_id
> 260 to signal before 259. As such, process each fence_id
> individually.

I guess you need to also update virtio_gpu_fence_event_process()
then?  It currently has the strict ordering logic baked in ...

take care,
  Gerd
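For background on the "strict ordering" being discussed: dma-fence sequence numbers are only comparable within a single fence context (timeline), so per-ring ordering requires one context per ring. A minimal sketch of that idea using the generic dma-fence API (illustrative only; num_rings and the local variable names are not taken from the driver):

  /* Reserve one dma-fence context per ring so seqnos are ordered
   * per ring instead of globally across the device.
   */
  u64 base = dma_fence_context_alloc(num_rings);
  u64 ctx  = base + ring_idx;	/* timeline for this ring */

  dma_fence_init(&fence->f, &virtio_gpu_fence_ops, &drv->lock, ctx, 0);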
On Tue, Sep 14, 2021 at 10:53 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> On Wed, Sep 08, 2021 at 06:37:13PM -0700, Gurchetan Singh wrote:
> > The plumbing is all here to do this. Since we always use the
> > default fence context when allocating a fence, this makes no
> > functional difference.
> >
> > We can't process just the largest fence id anymore, since it's
> > associated with different timelines. It's fine for fence_id
> > 260 to signal before 259. As such, process each fence_id
> > individually.
>
> I guess you need to also update virtio_gpu_fence_event_process()
> then?  It currently has the strict ordering logic baked in ...

The update to virtio_gpu_fence_event_process was done as a preparation a
few months back:

https://cgit.freedesktop.org/drm/drm-misc/commit/drivers/gpu/drm/virtio/virtgpu_fence.c?id=36549848ed27c22bb2ffd5d1468efc6505b05f97

> take care,
>   Gerd
Hi,

> > I guess you need to also update virtio_gpu_fence_event_process()
> > then?  It currently has the strict ordering logic baked in ...
>
> The update to virtio_gpu_fence_event_process was done as a preparation a
> few months back:
>
> https://cgit.freedesktop.org/drm/drm-misc/commit/drivers/gpu/drm/virtio/virtgpu_fence.c?id=36549848ed27c22bb2ffd5d1468efc6505b05f97

Ah, ok, missed the detail that the context check is already there.

thanks,
  Gerd
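The context check referenced above amounts to signalling fences per timeline rather than by a single global fence id. A simplified sketch of that rule (paraphrased, not the exact drm-misc code):

  /* Once the fence matching the completed fence_id is found ("signaled"),
   * only fences on the same dma-fence context that are not later than it
   * get signalled; fences on other timelines are left alone.
   */
  list_for_each_entry_safe(curr, tmp, &drv->fences, node) {
  	if (signaled->f.context != curr->f.context)
  		continue;	/* different timeline */
  	if (curr != signaled && !dma_fence_is_later(&signaled->f, &curr->f))
  		continue;	/* not yet due on this timeline */
  	dma_fence_signal_locked(&curr->f);
  	list_del(&curr->node);
  	dma_fence_put(&curr->f);
  }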
diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
index 24c728b65d21..98a00c1e654d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_fence.c
+++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
@@ -75,20 +75,25 @@ struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev,
 						uint64_t base_fence_ctx,
 						uint32_t ring_idx)
 {
+	uint64_t fence_context = base_fence_ctx + ring_idx;
 	struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
 	struct virtio_gpu_fence *fence = kzalloc(sizeof(struct virtio_gpu_fence),
 						GFP_KERNEL);
+
 	if (!fence)
 		return fence;
 
 	fence->drv = drv;
+	fence->ring_idx = ring_idx;
+	fence->emit_fence_info = !(base_fence_ctx == drv->context);
 
 	/* This only partially initializes the fence because the seqno is
 	 * unknown yet.  The fence must not be used outside of the driver
 	 * until virtio_gpu_fence_emit is called.
 	 */
-	dma_fence_init(&fence->f, &virtio_gpu_fence_ops, &drv->lock, drv->context,
-		       0);
+
+	dma_fence_init(&fence->f, &virtio_gpu_fence_ops, &drv->lock,
+		       fence_context, 0);
 
 	return fence;
 }
@@ -110,6 +115,13 @@ void virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
 
 	cmd_hdr->flags |= cpu_to_le32(VIRTIO_GPU_FLAG_FENCE);
 	cmd_hdr->fence_id = cpu_to_le64(fence->fence_id);
+
+	/* Only currently defined fence param. */
+	if (fence->emit_fence_info) {
+		cmd_hdr->flags |=
+			cpu_to_le32(VIRTIO_GPU_FLAG_INFO_RING_IDX);
+		cmd_hdr->ring_idx = (u8)fence->ring_idx;
+	}
 }
 
 void virtio_gpu_fence_event_process(struct virtio_gpu_device *vgdev,
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 496f8ce4cd41..938331554632 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -205,7 +205,7 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 	struct list_head reclaim_list;
 	struct virtio_gpu_vbuffer *entry, *tmp;
 	struct virtio_gpu_ctrl_hdr *resp;
-	u64 fence_id = 0;
+	u64 fence_id;
 
 	INIT_LIST_HEAD(&reclaim_list);
 	spin_lock(&vgdev->ctrlq.qlock);
@@ -232,23 +232,14 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 			DRM_DEBUG("response 0x%x\n", le32_to_cpu(resp->type));
 		}
 		if (resp->flags & cpu_to_le32(VIRTIO_GPU_FLAG_FENCE)) {
-			u64 f = le64_to_cpu(resp->fence_id);
-
-			if (fence_id > f) {
-				DRM_ERROR("%s: Oops: fence %llx -> %llx\n",
-					  __func__, fence_id, f);
-			} else {
-				fence_id = f;
-			}
+			fence_id = le64_to_cpu(resp->fence_id);
+			virtio_gpu_fence_event_process(vgdev, fence_id);
 		}
 		if (entry->resp_cb)
 			entry->resp_cb(vgdev, entry);
 	}
 	wake_up(&vgdev->ctrlq.ack_queue);
 
-	if (fence_id)
-		virtio_gpu_fence_event_process(vgdev, fence_id);
-
 	list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
 		if (entry->objs)
 			virtio_gpu_array_put_free_delayed(vgdev, entry->objs);
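For reference, a sketch of the two allocation modes the patch supports (caller names such as vfpriv->base_fence_ctx are assumptions about later patches in this series, not something this diff adds):

  /* Default/global timeline: fence_context == drv->context, so
   * emit_fence_info stays false and the command header is unchanged.
   */
  fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);

  /* Per-ring timeline: fence_context = base_fence_ctx + ring_idx, and
   * virtio_gpu_fence_emit() adds VIRTIO_GPU_FLAG_INFO_RING_IDX plus the
   * ring index so the host knows which timeline the fence belongs to.
   */
  fence = virtio_gpu_fence_alloc(vgdev, vfpriv->base_fence_ctx, ring_idx);

This also makes the commit message's example concrete: fence_id 260 on one ring may complete before fence_id 259 on another, and since virtio_gpu_fence_event_process() is now called per response, neither ordering is treated as an error.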