From patchwork Tue Sep  2 21:32:40 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Jesse Barnes
X-Patchwork-Id: 4828631
From: Jesse Barnes <jbarnes@virtuousgeek.org>
To: intel-gfx@lists.freedesktop.org
Date: Tue,  2 Sep 2014 14:32:40 -0700
Message-Id: <1409693561-1669-2-git-send-email-jbarnes@virtuousgeek.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1409693561-1669-1-git-send-email-jbarnes@virtuousgeek.org>
References: <1409693561-1669-1-git-send-email-jbarnes@virtuousgeek.org>
Subject: [Intel-gfx] [PATCH 1/2] drm/i915: Android sync points for i915 v2
List-Id: Intel graphics driver community testing & development

Expose an ioctl to create Android fences based on the Android sync
point infrastructure (which in turn is based on DMA-buf
fences).  Just a sketch at this point, no testing has been done.

There are a couple of goals here:
  1) allow applications and libraries to create fences without an
     associated buffer
  2) re-use a common API so userspace doesn't have to impedance match
     between different driver implementations too much
  3) allow applications and libraries to use explicit synchronization if
     they choose by exposing fences directly

v2: use struct fence directly using Maarten's new interface

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
---
 drivers/gpu/drm/i915/Kconfig     |   2 +
 drivers/gpu/drm/i915/Makefile    |   1 +
 drivers/gpu/drm/i915/i915_dma.c  |   1 +
 drivers/gpu/drm/i915/i915_drv.h  |  10 ++
 drivers/gpu/drm/i915/i915_gem.c  |  15 +-
 drivers/gpu/drm/i915/i915_irq.c  |   4 +-
 drivers/gpu/drm/i915/i915_sync.c | 323 +++++++++++++++++++++++++++++++++++++++
 include/uapi/drm/i915_drm.h      |  23 +++
 8 files changed, 373 insertions(+), 6 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_sync.c

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index 4e39ab3..cd0f2ec 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -6,6 +6,8 @@ config DRM_I915
 	select INTEL_GTT
 	select AGP_INTEL if AGP
 	select INTERVAL_TREE
+	select ANDROID
+	select SYNC
 	# we need shmfs for the swappable backing store, and in particular
 	# the shmem_readpage() which depends upon tmpfs
 	select SHMEM
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 91bd167..61a3eb5c 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -25,6 +25,7 @@ i915-y += i915_cmd_parser.o \
 	  i915_gem_execbuffer.o \
 	  i915_gem_gtt.o \
 	  i915_gem.o \
+	  i915_sync.o \
 	  i915_gem_stolen.o \
 	  i915_gem_tiling.o \
 	  i915_gem_userptr.o \
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 2e7f03a..84086e1 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -2043,6 +2043,7 @@ const struct drm_ioctl_desc i915_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_get_reset_stats_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(I915_GEM_FENCE, i915_sync_create_fence_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
 };
 
 int i915_max_ioctl = ARRAY_SIZE(i915_ioctls);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d604f4f..6eb119e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1388,6 +1388,8 @@ struct i915_frontbuffer_tracking {
 	unsigned flip_bits;
 };
 
+struct i915_sync_timeline;
+
 struct drm_i915_private {
 	struct drm_device *dev;
 	struct kmem_cache *slab;
@@ -1422,6 +1424,8 @@ struct drm_i915_private {
 	struct drm_i915_gem_object *semaphore_obj;
 	uint32_t last_seqno, next_seqno;
 
+	struct i915_sync_timeline *sync_tl[I915_NUM_RINGS];
+
 	drm_dma_handle_t *status_page_dmah;
 	struct resource mch_res;
 
@@ -2275,6 +2279,12 @@ void i915_init_vm(struct drm_i915_private *dev_priv,
 void i915_gem_free_object(struct drm_gem_object *obj);
 void i915_gem_vma_destroy(struct i915_vma *vma);
 
+/* i915_sync.c */
+int i915_sync_init(struct drm_i915_private *dev_priv);
+void i915_sync_fini(struct drm_i915_private *dev_priv);
+int i915_sync_create_fence_ioctl(struct drm_device *dev, void *data,
+				 struct drm_file *file);
+
 #define PIN_MAPPABLE 0x1
 #define PIN_NONBLOCK 0x2
 #define PIN_GLOBAL 0x4
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index dcd8d7b..ace716e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1146,11 +1146,11 @@ static bool can_wait_boost(struct drm_i915_file_private *file_priv)
  * Returns 0 if the seqno was found within the alloted time. Else returns the
  * errno with remaining time filled in timeout argument.
  */
-static int __wait_seqno(struct intel_engine_cs *ring, u32 seqno,
-			unsigned reset_counter,
-			bool interruptible,
-			struct timespec *timeout,
-			struct drm_i915_file_private *file_priv)
+int __wait_seqno(struct intel_engine_cs *ring, u32 seqno,
+		 unsigned reset_counter,
+		 bool interruptible,
+		 struct timespec *timeout,
+		 struct drm_i915_file_private *file_priv)
 {
 	struct drm_device *dev = ring->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
@@ -4775,6 +4775,9 @@ int i915_gem_init(struct drm_device *dev)
 		atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
 		ret = 0;
 	}
+
+	i915_sync_init(dev_priv);
+
 	mutex_unlock(&dev->struct_mutex);
 
 	/* Allow hardware batchbuffers unless told otherwise, but not for KMS. */
@@ -4970,6 +4973,8 @@ void i915_gem_release(struct drm_device *dev, struct drm_file *file)
 		request->file_priv = NULL;
 	}
 	spin_unlock(&file_priv->mm.lock);
+
+	i915_sync_fini(dev->dev_private);
 }
 
 static void
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 98abc22..149e083 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include "../../../staging/android/sync.h"
 #include "i915_drv.h"
 #include "i915_trace.h"
 #include "intel_drv.h"
@@ -2617,8 +2618,9 @@ static void i915_error_wake_up(struct drm_i915_private *dev_priv,
 	 */
 
 	/* Wake up __wait_seqno, potentially holding dev->struct_mutex. */
-	for_each_ring(ring, dev_priv, i)
+	for_each_ring(ring, dev_priv, i) {
 		wake_up_all(&ring->irq_queue);
+	}
 
 	/* Wake up intel_crtc_wait_for_pending_flips, holding crtc->mutex. */
 	wake_up_all(&dev_priv->pending_flip_queue);
diff --git a/drivers/gpu/drm/i915/i915_sync.c b/drivers/gpu/drm/i915/i915_sync.c
new file mode 100644
index 0000000..4938616
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_sync.c
@@ -0,0 +1,323 @@
+/*
+ * Copyright © 2014 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Jesse Barnes <jbarnes@virtuousgeek.org>
+ *
+ */
+
+#include
+#include
+#include
+#include "i915_drv.h"
+#include "i915_trace.h"
+#include "intel_drv.h"
+#include
+#include
+#include
+#include
+#include
+#include
+#include "../../../staging/android/sync.h"
+
+/* Nothing really to protect here... */
+spinlock_t fence_lock;
+
+/*
+ * i915 fences on sync timelines
+ *
+ * We implement sync points in terms of i915 seqnos.  They're exposed
+ * through the new DRM_I915_GEM_FENCE ioctl, and can be mixed and matched
+ * with other Android timelines and aggregated into sync_fences, etc.
+ *
+ * TODO:
+ *   rebase on top of Chris's seqno/request stuff and use requests
+ *   allow non-RCS fences (need ring/context association)
+ */
+
+struct i915_fence {
+	struct fence base;
+	struct intel_engine_cs *ring;
+	struct intel_context *ctx;
+	wait_queue_t wait;
+	u32 seqno;
+};
+
+#define to_intel_fence(x) container_of(x, struct i915_fence, base)
+
+int __wait_seqno(struct intel_engine_cs *ring, u32 seqno,
+		 unsigned reset_counter,
+		 bool interruptible,
+		 struct timespec *timeout,
+		 struct drm_i915_file_private *file_priv);
+
+static const char *i915_fence_get_driver_name(struct fence *fence)
+{
+	return "i915";
+}
+
+static const char *i915_fence_get_timeline_name(struct fence *fence)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+
+	return intel_fence->ring->name;
+}
+
+static int i915_fence_check(wait_queue_t *wait, unsigned mode, int flags,
+			    void *key)
+{
+	struct i915_fence *intel_fence = wait->private;
+	struct intel_engine_cs *ring = intel_fence->ring;
+
+	if (!i915_seqno_passed(ring->get_seqno(ring, false),
+			       intel_fence->seqno))
+		return 0;
+
+	fence_signal_locked(&intel_fence->base);
+
+	__remove_wait_queue(&ring->irq_queue, wait);
+	fence_put(&intel_fence->base);
+	ring->irq_put(ring);
+
+	return 0;
+}
+
+static bool i915_fence_enable_signaling(struct fence *fence)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+	struct intel_engine_cs *ring = intel_fence->ring;
+	struct drm_i915_private *dev_priv = ring->dev->dev_private;
+	wait_queue_t *wait = &intel_fence->wait;
+
+	/* queue fence wait queue on irq queue and get fence */
+	if (i915_seqno_passed(ring->get_seqno(ring, false),
+			      intel_fence->seqno) ||
+	    i915_terminally_wedged(&dev_priv->gpu_error))
+		return false;
+
+	if (!ring->irq_get(ring))
+		return false;
+
+	wait->flags = 0;
+	wait->private = intel_fence;
+	wait->func = i915_fence_check;
+
+	__add_wait_queue(&ring->irq_queue, wait);
+	fence_get(fence);
+
+	return true;
+}
+
+static bool i915_fence_signaled(struct fence *fence)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+	struct intel_engine_cs *ring = intel_fence->ring;
+
+	return i915_seqno_passed(ring->get_seqno(ring, false),
+				 intel_fence->seqno);
+}
+
+static signed long i915_fence_wait(struct fence *fence, bool intr,
+				   signed long timeout)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+	struct drm_i915_private *dev_priv = intel_fence->ring->dev->dev_private;
+	struct timespec ts;
+	int ret;
+
+	jiffies_to_timespec(timeout, &ts);
+
+	ret = __wait_seqno(intel_fence->ring, intel_fence->seqno,
+			   atomic_read(&dev_priv->gpu_error.reset_counter),
+			   intr, &ts, NULL);
+	if (ret == -ETIME)
+		return timespec_to_jiffies(&ts);
+
+	return ret;
+}
+
+static int i915_fence_fill_driver_data(struct fence *fence, void *data,
+				       int size)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+
+	if (size < sizeof(intel_fence->seqno))
+		return -ENOMEM;
+
+	memcpy(data, &intel_fence->seqno, sizeof(intel_fence->seqno));
+
+	return sizeof(intel_fence->seqno);
+}
+
+static void i915_fence_value_str(struct fence *fence, char *str, int size)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+
+	snprintf(str, size, "%u", intel_fence->seqno);
+}
+
+static void i915_fence_timeline_value_str(struct fence *fence, char *str,
+					  int size)
+{
+	struct i915_fence *intel_fence = to_intel_fence(fence);
+	struct intel_engine_cs *ring = intel_fence->ring;
+
+	snprintf(str, size, "%u", ring->get_seqno(ring, false));
+}
+
+static struct fence_ops i915_fence_ops = {
+	.get_driver_name = i915_fence_get_driver_name,
+	.get_timeline_name = i915_fence_get_timeline_name,
+	.enable_signaling = i915_fence_enable_signaling,
+	.signaled = i915_fence_signaled,
+	.wait = i915_fence_wait,
+	.fill_driver_data = i915_fence_fill_driver_data,
+	.fence_value_str = i915_fence_value_str,
+	.timeline_value_str = i915_fence_timeline_value_str,
+};
+
+static struct fence *i915_fence_create(struct intel_engine_cs *ring,
+				       struct intel_context *ctx)
+{
+	struct i915_fence *fence;
+	int ret;
+
+	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return NULL;
+
+	ret = ring->add_request(ring);
+	if (ret) {
+		DRM_ERROR("add_request failed\n");
+		fence_free((struct fence *)fence);
+		return NULL;
+	}
+
+	fence->ring = ring;
+	fence->ctx = ctx;
+	fence->seqno = ring->outstanding_lazy_seqno;
+	fence_init(&fence->base, &i915_fence_ops, &fence_lock, ctx->user_handle,
+		   fence->seqno);
+
+	return &fence->base;
+}
+
+/**
+ * i915_sync_create_fence_ioctl - fence creation function
+ * @dev: drm device
+ * @data: ioctl data
+ * @file: file struct
+ *
+ * This function creates a fence given a context and ring, and returns
+ * it to the caller in the form of a file descriptor.
+ *
+ * The returned descriptor is a sync fence fd, and can be used with all
+ * the usual sync fence operations (poll, ioctl, etc).
+ *
+ * The process fd limit should prevent an overallocation of fence objects,
+ * which need to be destroyed manually with a close() call.
+ */
+int i915_sync_create_fence_ioctl(struct drm_device *dev, void *data,
+				 struct drm_file *file)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_gem_fence *fdata = data;
+	struct fence *fence;
+	struct sync_fence *sfence;
+	struct intel_engine_cs *ring;
+	struct intel_context *ctx;
+	u32 ctx_id = fdata->ctx_id;
+	int fd = get_unused_fd_flags(O_CLOEXEC);
+	int ret = 0;
+
+	if (file == NULL) {
+		DRM_ERROR("no file priv?\n");
+		return -EINVAL;
+	}
+
+	ret = i915_mutex_lock_interruptible(dev);
+	if (ret) {
+		DRM_ERROR("mutex interrupted\n");
+		goto out;
+	}
+
+	ctx = i915_gem_context_get(file->driver_priv, ctx_id);
+	if (ctx == NULL) {
+		DRM_ERROR("context lookup failed\n");
+		ret = -ENOENT;
+		goto err;
+	}
+
+	ring = &dev_priv->ring[RCS];
+
+	if (!intel_ring_initialized(ring)) {
+		DRM_ERROR("ring not ready\n");
+		ret = -EIO;
+		goto err;
+	}
+
+	fence = i915_fence_create(ring, ctx);
+	if (!fence) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	fdata->name[sizeof(fdata->name) - 1] = '\0';
+	sfence = sync_fence_create_dma(fdata->name, fence);
+	if (!sfence) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	fdata->fd = fd;
+
+	sync_fence_install(sfence, fd);
+
+	mutex_unlock(&dev->struct_mutex);
+out:
+	return ret;
+
+err:
+	mutex_unlock(&dev->struct_mutex);
+	put_unused_fd(fd);
+	return ret;
+}
+
+int i915_sync_init(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *ring;
+	int i, ret = 0;
+
+	for_each_ring(ring, dev_priv, i) {
+		/* FIXME: non-RCS fences */
+	}
+
+	return ret;
+}
+
+void i915_sync_fini(struct drm_i915_private *dev_priv)
+{
+	int i;
+
+	for (i = 0; i < I915_NUM_RINGS; i++) {
+	}
+}
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index ff57f07..65bd271 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -224,6 +224,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_I915_REG_READ		0x31
 #define DRM_I915_GET_RESET_STATS	0x32
 #define DRM_I915_GEM_USERPTR		0x33
+#define DRM_I915_GEM_FENCE		0x34
 
 #define DRM_IOCTL_I915_INIT		DRM_IOW( DRM_COMMAND_BASE + DRM_I915_INIT, drm_i915_init_t)
 #define DRM_IOCTL_I915_FLUSH		DRM_IO ( DRM_COMMAND_BASE + DRM_I915_FLUSH)
@@ -275,6 +276,7 @@ typedef struct _drm_i915_sarea {
 #define DRM_IOCTL_I915_REG_READ			DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_REG_READ, struct drm_i915_reg_read)
 #define DRM_IOCTL_I915_GET_RESET_STATS		DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GET_RESET_STATS, struct drm_i915_reset_stats)
 #define DRM_IOCTL_I915_GEM_USERPTR			DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_USERPTR, struct drm_i915_gem_userptr)
+#define DRM_IOCTL_I915_GEM_FENCE		DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_FENCE, struct drm_i915_gem_fence)
 
 /* Allow drivers to submit batchbuffers directly to hardware, relying
  * on the security mechanisms provided by hardware.
@@ -1066,4 +1068,25 @@ struct drm_i915_gem_userptr {
 	__u32 handle;
 };
 
+/**
+ * drm_i915_gem_fence - create a fence
+ * @fd: fd for fence
+ * @ctx_id: context ID for fence
+ * @flags: flags for operation
+ *
+ * Creates a fence in @fd and returns it to the caller.  This fd can be
+ * passed around between processes as any other fd, and can be poll'd
+ * and read for status.
+ *
+ * RETURNS:
+ * A valid fd in the @fd field or an errno on error.
+ */
+struct drm_i915_gem_fence {
+	__s32 fd;
+	__u32 ctx_id;
+	__u32 flags;
+	__u32 pad;
+	char name[32];
+};
+
 #endif /* _UAPI_I915_DRM_H_ */