
[REPOST,1/8] fence: dma-buf cross-device synchronization (v17)

Message ID 20140618103653.15728.4942.stgit@patser (mailing list archive)
State New, archived

Commit Message

Maarten Lankhorst June 18, 2014, 10:36 a.m. UTC
A fence can be attached to a buffer which is being filled or consumed
by hw, to allow userspace to pass the buffer to another device without
waiting.  For example, userspace can call the page_flip ioctl to display the
next frame of graphics after kicking the GPU but while the GPU is still
rendering.  The display device sharing the buffer with the GPU would
attach a callback to get notified when the GPU's rendering-complete IRQ
fires, to update the scan-out address of the display, without having to
wake up userspace.

A driver must allocate a fence context for each execution ring that can
run in parallel. The allocation function takes the number of contexts to
allocate:
  + fence_context_alloc()

A fence is a transient, one-shot deal.  It is allocated and attached
to one or more dma-bufs.  When the party that attached it has finished
the pending operation, it can signal the fence:
  + fence_signal()

To get a rough indication of whether a fence has fired, call:
  + fence_is_signaled()

The dma-buf-mgr handles tracking, and waiting on, the fences associated
with a dma-buf.

A waiter on the fence can add an async callback:
  + fence_add_callback()

The callback can optionally be cancelled with:
  + fence_remove_callback()

To wait synchronously, optionally with a timeout:
  + fence_wait()
  + fence_wait_timeout()

When emitting a fence, call:
  + trace_fence_emit()

To annotate that a fence is blocking on another fence, call:
  + trace_fence_annotate_wait_on(fence, on_fence)

A default software-only implementation is provided, which can be used
by drivers attaching a fence to a buffer when they have no other means
for hw sync.  But a memory-backed fence is also envisioned, because it
is common that GPUs can write to, or poll on, some memory location for
synchronization.  For example:

  fence = custom_get_fence(...);
  if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
    struct dma_buf *fence_buf = seqno_fence->sync_buf;
    get_dma_buf(fence_buf);

    ... tell the hw the memory location to wait ...
    custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
  } else {
    /* fall back to sw sync */
    fence_add_callback(fence, my_cb);
  }

On SoC platforms, if some other hw mechanism is provided for synchronizing
between IP blocks, it could be supported as an alternate implementation
with its own fence ops in a similar way.

The enable_signaling callback is used to provide sw signaling in case a
CPU waiter is requested or no compatible hardware signaling could be used.

The intention is to provide a userspace interface (presumably via eventfd)
later, to be used in conjunction with dma-buf's mmap support for sw access
to buffers (or for userspace apps that would prefer to do their own
synchronization).

v1: Original
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
    that dma-fence didn't need to care about the sw->hw signaling path
    (it can be handled same as sw->sw case), and therefore the fence->ops
    can be simplified and more handled in the core.  So remove the signal,
    add_callback, cancel_callback, and wait ops, and replace with a simple
    enable_signaling() op which can be used to inform a fence supporting
    hw->hw signaling that one or more devices which do not support hw
    signaling are waiting (and therefore it should enable an irq or do
    whatever is necessary in order that the CPU is notified when the
    fence is passed).
v3: Fix locking fail in attach_fence() and get_fence()
v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
    we decided that we need to be able to attach one fence to N dma-buf's,
    so using the list_head in dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
    about checking if fence fired or not. This is broken by design.
    waitqueue_active during destruction is now fatal, since the signaller
    should be holding a reference in enable_signalling until it signalled
    the fence. Pass the original dma_fence_cb along, and call __remove_wait
    in the dma_fence_callback handler, so that no cleanup needs to be
    performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
    fence wasn't signaled yet, for example for hardware fences that may
    choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
    header and fixed include mess. dma-fence.h now includes dma-buf.h
    All members are now initialized, so kmalloc can be used for
    allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of
    enable_signaling to bool. Rework dma_fence_wait. Added
    dma_fence_is_signaled and dma_fence_wait_timeout.
    s/dma// and change exports to non GPL. Added fence_is_signaled and
    fence_enable_sw_signaling calls, add ability to override default
    wait operation.
v10: remove event_queue, use a custom list, export try_to_wake_up from
    scheduler. Remove fence lock and use a global spinlock instead,
    this should hopefully remove all the locking headaches I was having
    on trying to implement this. enable_signaling is called with this
    lock held.
v11:
    Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
    However I kept the guarantee that after fence_signal returns, it is
    guaranteed that enable_signaling has either been called to completion,
    or will not be called any more.

    Add contexts and seqno to base fence implementation. This allows you
    to wait for fewer fences, by testing for seqno + signaled, and then
    only waiting on the later fence.

    Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
    A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE
    spam, and another runtime option can turn it off at runtime.
v12:
    Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
    and fence->seqno members.
v13:
    Fixup CONFIG_FENCE_TRACE kconfig description.
    Move fence_context_alloc to fence.
    Simplify fence_later.
    Kill priv member to fence_cb.
v14:
    Remove priv argument from fence_add_callback, oops!
v15:
    Remove priv from documentation.
    Explicitly include linux/atomic.h.
v16:
    Add trace events.
    Import changes required by android syncpoints.
v17:
    Use wake_up_state instead of try_to_wake_up. (Colin Cross)
    Fix up commit description for seqno_fence. (Rob Clark)

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Reviewed-by: Rob Clark <robdclark@gmail.com>
---
 Documentation/DocBook/device-drivers.tmpl |    2 
 drivers/base/Kconfig                      |    9 +
 drivers/base/Makefile                     |    2 
 drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
 include/linux/fence.h                     |  333 +++++++++++++++++++++++
 include/trace/events/fence.h              |  128 +++++++++
 6 files changed, 889 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/fence.c
 create mode 100644 include/linux/fence.h
 create mode 100644 include/trace/events/fence.h

Comments

Greg KH June 19, 2014, 1:13 a.m. UTC | #1
On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> +#define CREATE_TRACE_POINTS
> +#include <trace/events/fence.h>
> +
> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);

Are you really willing to live with these as tracepoints for forever?
What is the use of them in debugging?  Was it just for debugging the
fence code, or for something else?

> +/**
> + * fence_context_alloc - allocate an array of fence contexts
> + * @num:	[in]	number of contexts to allocate
> + *
> + * This function will return the first index of the number of fences allocated.
> + * The fence context is used for setting fence->context to a unique number.
> + */
> +unsigned fence_context_alloc(unsigned num)
> +{
> +	BUG_ON(!num);
> +	return atomic_add_return(num, &fence_context_counter) - num;
> +}
> +EXPORT_SYMBOL(fence_context_alloc);

EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
Traditionally all of the driver core exports have been with this
marking, any objection to making that change here as well?

> +int __fence_signal(struct fence *fence)
> +{
> +	struct fence_cb *cur, *tmp;
> +	int ret = 0;
> +
> +	if (WARN_ON(!fence))
> +		return -EINVAL;
> +
> +	if (!ktime_to_ns(fence->timestamp)) {
> +		fence->timestamp = ktime_get();
> +		smp_mb__before_atomic();
> +	}
> +
> +	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> +		ret = -EINVAL;
> +
> +		/*
> +		 * we might have raced with the unlocked fence_signal,
> +		 * still run through all callbacks
> +		 */
> +	} else
> +		trace_fence_signaled(fence);
> +
> +	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
> +		list_del_init(&cur->node);
> +		cur->func(fence, cur);
> +	}
> +	return ret;
> +}
> +EXPORT_SYMBOL(__fence_signal);

Don't export a function with __ in front of it, you are saying that an
internal function is somehow "valid" for everyone else to call?  Why?
You aren't even documenting the function, so who knows how to use it?

> +/**
> + * fence_signal - signal completion of a fence
> + * @fence: the fence to signal
> + *
> + * Signal completion for software callbacks on a fence, this will unblock
> + * fence_wait() calls and run all the callbacks added with
> + * fence_add_callback(). Can be called multiple times, but since a fence
> + * can only go from unsignaled to signaled state, it will only be effective
> + * the first time.
> + */
> +int fence_signal(struct fence *fence)
> +{
> +	unsigned long flags;
> +
> +	if (!fence)
> +		return -EINVAL;
> +
> +	if (!ktime_to_ns(fence->timestamp)) {
> +		fence->timestamp = ktime_get();
> +		smp_mb__before_atomic();
> +	}
> +
> +	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return -EINVAL;
> +
> +	trace_fence_signaled(fence);
> +
> +	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
> +		struct fence_cb *cur, *tmp;
> +
> +		spin_lock_irqsave(fence->lock, flags);
> +		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
> +			list_del_init(&cur->node);
> +			cur->func(fence, cur);
> +		}
> +		spin_unlock_irqrestore(fence->lock, flags);
> +	}
> +	return 0;
> +}
> +EXPORT_SYMBOL(fence_signal);

So, why should I use this over __fence_signal?  What is the difference?
Am I missing some documentation somewhere?

> +void release_fence(struct kref *kref)
> +{
> +	struct fence *fence =
> +			container_of(kref, struct fence, refcount);
> +
> +	trace_fence_destroy(fence);
> +
> +	BUG_ON(!list_empty(&fence->cb_list));
> +
> +	if (fence->ops->release)
> +		fence->ops->release(fence);
> +	else
> +		kfree(fence);
> +}
> +EXPORT_SYMBOL(release_fence);

fence_release() makes it more unified with the other functions in this
file, right?

> +/**
> + * fence_default_wait - default sleep until the fence gets signaled
> + * or until timeout elapses
> + * @fence:	[in]	the fence to wait on
> + * @intr:	[in]	if true, do an interruptible wait
> + * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
> + *
> + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
> + * remaining timeout in jiffies on success.
> + */
> +long

Shouldn't this be signed to be explicit?

> +fence_default_wait(struct fence *fence, bool intr, signed long timeout)
> +{
> +	struct default_wait_cb cb;
> +	unsigned long flags;
> +	long ret = timeout;
> +	bool was_set;
> +
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return timeout;
> +
> +	spin_lock_irqsave(fence->lock, flags);
> +
> +	if (intr && signal_pending(current)) {
> +		ret = -ERESTARTSYS;
> +		goto out;
> +	}
> +
> +	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
> +
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		goto out;
> +
> +	if (!was_set) {
> +		trace_fence_enable_signal(fence);
> +
> +		if (!fence->ops->enable_signaling(fence)) {
> +			__fence_signal(fence);
> +			goto out;
> +		}
> +	}
> +
> +	cb.base.func = fence_default_wait_cb;
> +	cb.task = current;
> +	list_add(&cb.base.node, &fence->cb_list);
> +
> +	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
> +		if (intr)
> +			__set_current_state(TASK_INTERRUPTIBLE);
> +		else
> +			__set_current_state(TASK_UNINTERRUPTIBLE);
> +		spin_unlock_irqrestore(fence->lock, flags);
> +
> +		ret = schedule_timeout(ret);
> +
> +		spin_lock_irqsave(fence->lock, flags);
> +		if (ret > 0 && intr && signal_pending(current))
> +			ret = -ERESTARTSYS;
> +	}
> +
> +	if (!list_empty(&cb.base.node))
> +		list_del(&cb.base.node);
> +	__set_current_state(TASK_RUNNING);
> +
> +out:
> +	spin_unlock_irqrestore(fence->lock, flags);
> +	return ret;
> +}
> +EXPORT_SYMBOL(fence_default_wait);
> +
> +/**
> + * __fence_init - Initialize a custom fence.
> + * @fence:	[in]	the fence to initialize
> + * @ops:	[in]	the fence_ops for operations on this fence
> + * @lock:	[in]	the irqsafe spinlock to use for locking this fence
> + * @context:	[in]	the execution context this fence is run on
> + * @seqno:	[in]	a linear increasing sequence number for this context
> + *
> + * Initializes an allocated fence, the caller doesn't have to keep its
> + * refcount after committing with this fence, but it will need to hold a
> + * refcount again if fence_ops.enable_signaling gets called. This can
> + * be used for implementing other types of fences.
> + *
> + * context and seqno are used for easy comparison between fences, allowing
> + * to check which fence is later by simply using fence_later.
> + */
> +void
> +__fence_init(struct fence *fence, const struct fence_ops *ops,
> +	     spinlock_t *lock, unsigned context, unsigned seqno)
> +{
> +	BUG_ON(!lock);
> +	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
> +	       !ops->get_driver_name || !ops->get_timeline_name);
> +
> +	kref_init(&fence->refcount);
> +	fence->ops = ops;
> +	INIT_LIST_HEAD(&fence->cb_list);
> +	fence->lock = lock;
> +	fence->context = context;
> +	fence->seqno = seqno;
> +	fence->flags = 0UL;
> +
> +	trace_fence_init(fence);
> +}
> +EXPORT_SYMBOL(__fence_init);

Again with the __ exported function...

I don't even see a fence_init() function anywhere, why the __ ?


> diff --git a/include/linux/fence.h b/include/linux/fence.h
> new file mode 100644
> index 000000000000..65f2a01ee7e4
> --- /dev/null
> +++ b/include/linux/fence.h
> @@ -0,0 +1,333 @@
> +/*
> + * Fence mechanism for dma-buf to allow for asynchronous dma access
> + *
> + * Copyright (C) 2012 Canonical Ltd
> + * Copyright (C) 2012 Texas Instruments
> + *
> + * Authors:
> + * Rob Clark <robdclark@gmail.com>
> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __LINUX_FENCE_H
> +#define __LINUX_FENCE_H
> +
> +#include <linux/err.h>
> +#include <linux/wait.h>
> +#include <linux/list.h>
> +#include <linux/bitops.h>
> +#include <linux/kref.h>
> +#include <linux/sched.h>
> +#include <linux/printk.h>
> +
> +struct fence;
> +struct fence_ops;
> +struct fence_cb;
> +
> +/**
> + * struct fence - software synchronization primitive
> + * @refcount: refcount for this fence
> + * @ops: fence_ops associated with this fence
> + * @cb_list: list of all callbacks to call
> + * @lock: spin_lock_irqsave used for locking
> + * @context: execution context this fence belongs to, returned by
> + *           fence_context_alloc()
> + * @seqno: the sequence number of this fence inside the execution context,
> + * can be compared to decide which fence would be signaled later.
> + * @flags: A mask of FENCE_FLAG_* defined below
> + * @timestamp: Timestamp when the fence was signaled.
> + * @status: Optional, only valid if < 0, must be set before calling
> + * fence_signal, indicates that the fence has completed with an error.
> + *
> + * the flags member must be manipulated and read using the appropriate
> + * atomic ops (bit_*), so taking the spinlock will not be needed most
> + * of the time.
> + *
> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
> + * implementer of the fence for its own purposes. Can be used in different
> + * ways by different fence implementers, so do not rely on this.
> + *
> + * *) Since atomic bitops are used, this is not guaranteed to be the case.
> + * Particularly, if the bit was set, but fence_signal was called right
> + * before this bit was set, it would have been able to set the
> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
> + * after fence_signal was called, any enable_signaling call will have either
> + * been completed, or never called at all.
> + */
> +struct fence {
> +	struct kref refcount;
> +	const struct fence_ops *ops;
> +	struct list_head cb_list;
> +	spinlock_t *lock;
> +	unsigned context, seqno;
> +	unsigned long flags;
> +	ktime_t timestamp;
> +	int status;
> +};
> +
> +enum fence_flag_bits {
> +	FENCE_FLAG_SIGNALED_BIT,
> +	FENCE_FLAG_ENABLE_SIGNAL_BIT,
> +	FENCE_FLAG_USER_BITS, /* must always be last member */
> +};
> +
> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
> +
> +/**
> + * struct fence_cb - callback for fence_add_callback
> + * @node: used by fence_add_callback to append this struct to fence::cb_list
> + * @func: fence_func_t to call
> + *
> + * This struct will be initialized by fence_add_callback, additional
> + * data can be passed along by embedding fence_cb in another struct.
> + */
> +struct fence_cb {
> +	struct list_head node;
> +	fence_func_t func;
> +};
> +
> +/**
> + * struct fence_ops - operations implemented for fence
> + * @get_driver_name: returns the driver name.
> + * @get_timeline_name: return the name of the context this fence belongs to.
> + * @enable_signaling: enable software signaling of fence.
> + * @signaled: [optional] peek whether the fence is signaled, can be null.
> + * @wait: custom wait implementation, or fence_default_wait.
> + * @release: [optional] called on destruction of fence, can be null
> + * @fill_driver_data: [optional] callback to fill in free-form debug info
> + * Returns amount of bytes filled, or -errno.
> + * @fence_value_str: [optional] fills in the value of the fence as a string
> + * @timeline_value_str: [optional] fills in the current value of the timeline
> + * as a string
> + *
> + * Notes on enable_signaling:
> + * For fence implementations that have the capability for hw->hw
> + * signaling, they can implement this op to enable the necessary
> + * irqs, or insert commands into cmdstream, etc.  This is called
> + * in the first wait() or add_callback() path to let the fence
> + * implementation know that there is another driver waiting on
> + * the signal (ie. hw->sw case).
> + *
> + * This function can be called from atomic context, but not
> + * from irq context, so normal spinlocks can be used.
> + *
> + * A return value of false indicates the fence already passed,
> + * or some failure occurred that made it impossible to enable
> + * signaling. True indicates successful enabling.
> + *
> + * fence->status may be set in enable_signaling, but only when false is
> + * returned.
> + *
> + * Calling fence_signal before enable_signaling is called allows
> + * for a tiny race window in which enable_signaling is called during,
> + * before, or after fence_signal. To fight this, it is recommended
> + * that before enable_signaling returns true an extra reference is
> + * taken on the fence, to be released when the fence is signaled.
> + * This will mean fence_signal will still be called twice, but
> + * the second time will be a noop since it was already signaled.
> + *
> + * Notes on signaled:
> + * May set fence->status if returning true.
> + *
> + * Notes on wait:
> + * Must not be NULL, set to fence_default_wait for default implementation.
> + * the fence_default_wait implementation should work for any fence, as long
> + * as enable_signaling works correctly.
> + *
> + * Must return -ERESTARTSYS if the wait is intr = true and the wait was
> + * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
> + * timed out. Can also return other error values on custom implementations,
> + * which should be treated as if the fence is signaled. For example a hardware
> + * lockup could be reported like that.
> + *
> + * Notes on release:
> + * Can be NULL, this function allows additional commands to run on
> + * destruction of the fence. Can be called from irq context.
> + * If pointer is set to NULL, kfree will get called instead.
> + */
> +
> +struct fence_ops {
> +	const char * (*get_driver_name)(struct fence *fence);
> +	const char * (*get_timeline_name)(struct fence *fence);
> +	bool (*enable_signaling)(struct fence *fence);
> +	bool (*signaled)(struct fence *fence);
> +	long (*wait)(struct fence *fence, bool intr, signed long timeout);
> +	void (*release)(struct fence *fence);
> +
> +	int (*fill_driver_data)(struct fence *fence, void *data, int size);
> +	void (*fence_value_str)(struct fence *fence, char *str, int size);
> +	void (*timeline_value_str)(struct fence *fence, char *str, int size);
> +};
> +
> +void __fence_init(struct fence *fence, const struct fence_ops *ops,
> +		  spinlock_t *lock, unsigned context, unsigned seqno);
> +
> +/**
> + * fence_get - increases refcount of the fence
> + * @fence:	[in]	fence to increase refcount of
> + */
> +static inline void fence_get(struct fence *fence)
> +{
> +	if (WARN_ON(!fence))
> +		return;

Why warn?

> +	kref_get(&fence->refcount);
> +}

Why is this inline?

> +
> +extern void release_fence(struct kref *kref);
> +
> +/**
> + * fence_put - decreases refcount of the fence
> + * @fence:	[in]	fence to reduce refcount of
> + */
> +static inline void fence_put(struct fence *fence)
> +{
> +	if (WARN_ON(!fence))
> +		return;

Same as above.

> +	kref_put(&fence->refcount, release_fence);
> +}

Why inline?

> +int fence_signal(struct fence *fence);
> +int __fence_signal(struct fence *fence);

Let's randomly pick a function to call... :)

> +static inline bool
> +__fence_is_signaled(struct fence *fence)
> +{
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return true;
> +
> +	if (fence->ops->signaled && fence->ops->signaled(fence)) {
> +		__fence_signal(fence);
> +		return true;
> +	}
> +
> +	return false;
> +}

Oh nice, another __ function :(

> +
> +/**
> + * fence_is_signaled - Return an indication if the fence is signaled yet.
> + * @fence:	[in]	the fence to check
> + *
> + * Returns true if the fence was already signaled, false if not. Since this
> + * function doesn't enable signaling, it is not guaranteed to ever return true
> + * If fence_add_callback, fence_wait or fence_enable_sw_signaling
> + * haven't been called before.
> + *
> + * It's recommended for seqno fences to call fence_signal when the
> + * operation is complete, it makes it possible to prevent issues from
> + * wraparound between time of issue and time of use by checking the return
> + * value of this function before calling hardware-specific wait instructions.
> + */
> +static inline bool
> +fence_is_signaled(struct fence *fence)
> +{
> +	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> +		return true;
> +
> +	if (fence->ops->signaled && fence->ops->signaled(fence)) {
> +		fence_signal(fence);
> +		return true;
> +	}
> +
> +	return false;
> +}


Why inline for all of these, does it really matter for speed?

> +/**
> + * fence_later - return the chronologically later fence
> + * @f1:	[in]	the first fence from the same context
> + * @f2:	[in]	the second fence from the same context
> + *
> + * Returns NULL if both fences are signaled, otherwise the fence that would be
> + * signaled last. Both fences must be from the same context, since a seqno is
> + * not re-used across contexts.
> + */
> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
> +{
> +	BUG_ON(f1->context != f2->context);

Nice, you just crashed the kernel, making it impossible to debug or
recover :(

Don't do that.

thanks,

greg k-h
Greg KH June 19, 2014, 1:15 a.m. UTC | #2
On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.

I don't like this paragraph in all of the files, but if you insist that
some lawyer wants it there, I'll live with it...

> + * You should have received a copy of the GNU General Public License along with
> + * this program.  If not, see <http://www.gnu.org/licenses/>.

That's just not needed at all and is fluff.  Please remove.
Greg KH June 19, 2014, 1:16 a.m. UTC | #3
On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> A fence can be attached to a buffer which is being filled or consumed
> by hw, to allow userspace to pass the buffer without waiting to another
> device.  For example, userspace can call page_flip ioctl to display the
> next frame of graphics after kicking the GPU but while the GPU is still
> rendering.  The display device sharing the buffer with the GPU would
> attach a callback to get notified when the GPU's rendering-complete IRQ
> fires, to update the scan-out address of the display, without having to
> wake up userspace.
> 
> A driver must allocate a fence context for each execution ring that can
> run in parallel. The function for this takes an argument with how many
> contexts to allocate:
>   + fence_context_alloc()
> 
> A fence is transient, one-shot deal.  It is allocated and attached
> to one or more dma-buf's.  When the one that attached it is done, with
> the pending operation, it can signal the fence:
>   + fence_signal()
> 
> To have a rough approximation whether a fence is fired, call:
>   + fence_is_signaled()
> 
> The dma-buf-mgr handles tracking, and waiting on, the fences associated
> with a dma-buf.
> 
> The one pending on the fence can add an async callback:
>   + fence_add_callback()
> 
> The callback can optionally be cancelled with:
>   + fence_remove_callback()
> 
> To wait synchronously, optionally with a timeout:
>   + fence_wait()
>   + fence_wait_timeout()
> 
> When emitting a fence, call:
>   + trace_fence_emit()
> 
> To annotate that a fence is blocking on another fence, call:
>   + trace_fence_annotate_wait_on(fence, on_fence)
> 
> A default software-only implementation is provided, which can be used
> by drivers attaching a fence to a buffer when they have no other means
> for hw sync.  But a memory backed fence is also envisioned, because it
> is common that GPU's can write to, or poll on some memory location for
> synchronization.  For example:
> 
>   fence = custom_get_fence(...);
>   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
>     dma_buf *fence_buf = seqno_fence->sync_buf;
>     get_dma_buf(fence_buf);
> 
>     ... tell the hw the memory location to wait ...
>     custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
>   } else {
>     /* fall-back to sw sync * /
>     fence_add_callback(fence, my_cb);
>   }
> 
> On SoC platforms, if some other hw mechanism is provided for synchronizing
> between IP blocks, it could be supported as an alternate implementation
> with it's own fence ops in a similar way.
> 
> enable_signaling callback is used to provide sw signaling in case a cpu
> waiter is requested or no compatible hardware signaling could be used.
> 
> The intention is to provide a userspace interface (presumably via eventfd)
> later, to be used in conjunction with dma-buf's mmap support for sw access
> to buffers (or for userspace apps that would prefer to do their own
> synchronization).
> 
> v1: Original
> v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
>     that dma-fence didn't need to care about the sw->hw signaling path
>     (it can be handled same as sw->sw case), and therefore the fence->ops
>     can be simplified and more handled in the core.  So remove the signal,
>     add_callback, cancel_callback, and wait ops, and replace with a simple
>     enable_signaling() op which can be used to inform a fence supporting
>     hw->hw signaling that one or more devices which do not support hw
>     signaling are waiting (and therefore it should enable an irq or do
>     whatever is necessary in order that the CPU is notified when the
>     fence is passed).
> v3: Fix locking fail in attach_fence() and get_fence()
> v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankorst
>     we decided that we need to be able to attach one fence to N dma-buf's,
>     so using the list_head in dma-fence struct would be problematic.
> v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
> v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
>     about checking if fence fired or not. This is broken by design.
>     waitqueue_active during destruction is now fatal, since the signaller
>     should be holding a reference in enable_signalling until it signalled
>     the fence. Pass the original dma_fence_cb along, and call __remove_wait
>     in the dma_fence_callback handler, so that no cleanup needs to be
>     performed.
> v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
>     fence wasn't signaled yet, for example for hardware fences that may
>     choose to signal blindly.
> v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
>     header and fixed include mess. dma-fence.h now includes dma-buf.h
>     All members are now initialized, so kmalloc can be used for
>     allocating a dma-fence. More documentation added.
> v9: Change compiler bitfields to flags, change return type of
>     enable_signaling to bool. Rework dma_fence_wait. Added
>     dma_fence_is_signaled and dma_fence_wait_timeout.
>     s/dma// and change exports to non GPL. Added fence_is_signaled and
>     fence_enable_sw_signaling calls, add ability to override default
>     wait operation.
> v10: remove event_queue, use a custom list, export try_to_wake_up from
>     scheduler. Remove fence lock and use a global spinlock instead,
>     this should hopefully remove all the locking headaches I was having
>     on trying to implement this. enable_signaling is called with this
>     lock held.
> v11:
>     Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
>     However I kept the guarantee that after fence_signal returns, it is
>     guaranteed that enable_signaling has either been called to completion,
>     or will not be called any more.
> 
>     Add contexts and seqno to base fence implementation. This allows you
>     to wait for fewer fences, by testing for seqno + signaled, and then only
>     wait on the later fence.
> 
>     Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
>     A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE
>     spam, and another runtime option can turn it off at runtime.
> v12:
>     Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
>     and fence->seqno members.
> v13:
>     Fixup CONFIG_FENCE_TRACE kconfig description.
>     Move fence_context_alloc to fence.
>     Simplify fence_later.
>     Kill priv member to fence_cb.
> v14:
>     Remove priv argument from fence_add_callback, oops!
> v15:
>     Remove priv from documentation.
>     Explicitly include linux/atomic.h.
> v16:
>     Add trace events.
>     Import changes required by android syncpoints.
> v17:
>     Use wake_up_state instead of try_to_wake_up. (Colin Cross)
>     Fix up commit description for seqno_fence. (Rob Clark)
> 
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
> Reviewed-by: Rob Clark <robdclark@gmail.com>
> ---
>  Documentation/DocBook/device-drivers.tmpl |    2 
>  drivers/base/Kconfig                      |    9 +
>  drivers/base/Makefile                     |    2 
>  drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
>  include/linux/fence.h                     |  333 +++++++++++++++++++++++
>  include/trace/events/fence.h              |  128 +++++++++
>  6 files changed, 889 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/base/fence.c
>  create mode 100644 include/linux/fence.h
>  create mode 100644 include/trace/events/fence.h

Who is going to sign up to maintain this code?  (hint, it's not me...)

thanks,

greg k-h
Rob Clark June 19, 2014, 1:23 a.m. UTC | #4
On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> +#define CREATE_TRACE_POINTS
>> +#include <trace/events/fence.h>
>> +
>> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>
> Are you really willing to live with these as tracepoints for forever?
> What is the use of them in debugging?  Was it just for debugging the
> fence code, or for something else?

fwiw, the goal is something like this:

http://people.freedesktop.org/~robclark/perf-supertuxkart.svg

but without needing to make perf understand each driver's custom trace events

(from: http://bloggingthemonkey.blogspot.com/2013/09/freedreno-update-moar-fps.html
)

BR,
-R


>> +/**
>> + * fence_context_alloc - allocate an array of fence contexts
>> + * @num:     [in]    amount of contexts to allocate
>> + *
>> + * This function will return the first index of the number of fence contexts allocated.
>> + * The fence context is used for setting fence->context to a unique number.
>> + */
>> +unsigned fence_context_alloc(unsigned num)
>> +{
>> +     BUG_ON(!num);
>> +     return atomic_add_return(num, &fence_context_counter) - num;
>> +}
>> +EXPORT_SYMBOL(fence_context_alloc);
>
> EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> Traditionally all of the driver core exports have been with this
> marking, any objection to making that change here as well?
>
>> +int __fence_signal(struct fence *fence)
>> +{
>> +     struct fence_cb *cur, *tmp;
>> +     int ret = 0;
>> +
>> +     if (WARN_ON(!fence))
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> +             ret = -EINVAL;
>> +
>> +             /*
>> +              * we might have raced with the unlocked fence_signal,
>> +              * still run through all callbacks
>> +              */
>> +     } else
>> +             trace_fence_signaled(fence);
>> +
>> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +             list_del_init(&cur->node);
>> +             cur->func(fence, cur);
>> +     }
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(__fence_signal);
>
> Don't export a function with __ in front of it, you are saying that an
> internal function is somehow "valid" for everyone else to call?  Why?
> You aren't even documenting the function, so who knows how to use it?
>
>> +/**
>> + * fence_signal - signal completion of a fence
>> + * @fence: the fence to signal
>> + *
>> + * Signal completion for software callbacks on a fence, this will unblock
>> + * fence_wait() calls and run all the callbacks added with
>> + * fence_add_callback(). Can be called multiple times, but since a fence
>> + * can only go from unsignaled to signaled state, it will only be effective
>> + * the first time.
>> + */
>> +int fence_signal(struct fence *fence)
>> +{
>> +     unsigned long flags;
>> +
>> +     if (!fence)
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return -EINVAL;
>> +
>> +     trace_fence_signaled(fence);
>> +
>> +     if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
>> +             struct fence_cb *cur, *tmp;
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +                     list_del_init(&cur->node);
>> +                     cur->func(fence, cur);
>> +             }
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +     }
>> +     return 0;
>> +}
>> +EXPORT_SYMBOL(fence_signal);
>
> So, why should I use this over __fence_signal?  What is the difference?
> Am I missing some documentation somewhere?
>
>> +void release_fence(struct kref *kref)
>> +{
>> +     struct fence *fence =
>> +                     container_of(kref, struct fence, refcount);
>> +
>> +     trace_fence_destroy(fence);
>> +
>> +     BUG_ON(!list_empty(&fence->cb_list));
>> +
>> +     if (fence->ops->release)
>> +             fence->ops->release(fence);
>> +     else
>> +             kfree(fence);
>> +}
>> +EXPORT_SYMBOL(release_fence);
>
> fence_release() makes it more unified with the other functions in this
> file, right?
>
>> +/**
>> + * fence_default_wait - default sleep until the fence gets signaled
>> + * or until timeout elapses
>> + * @fence:   [in]    the fence to wait on
>> + * @intr:    [in]    if true, do an interruptible wait
>> + * @timeout: [in]    timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>> + *
>> + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
>> + * remaining timeout in jiffies on success.
>> + */
>> +long
>
> Shouldn't this be signed to be explicit?
>
>> +fence_default_wait(struct fence *fence, bool intr, signed long timeout)
>> +{
>> +     struct default_wait_cb cb;
>> +     unsigned long flags;
>> +     long ret = timeout;
>> +     bool was_set;
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return timeout;
>> +
>> +     spin_lock_irqsave(fence->lock, flags);
>> +
>> +     if (intr && signal_pending(current)) {
>> +             ret = -ERESTARTSYS;
>> +             goto out;
>> +     }
>> +
>> +     was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             goto out;
>> +
>> +     if (!was_set) {
>> +             trace_fence_enable_signal(fence);
>> +
>> +             if (!fence->ops->enable_signaling(fence)) {
>> +                     __fence_signal(fence);
>> +                     goto out;
>> +             }
>> +     }
>> +
>> +     cb.base.func = fence_default_wait_cb;
>> +     cb.task = current;
>> +     list_add(&cb.base.node, &fence->cb_list);
>> +
>> +     while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
>> +             if (intr)
>> +                     __set_current_state(TASK_INTERRUPTIBLE);
>> +             else
>> +                     __set_current_state(TASK_UNINTERRUPTIBLE);
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +
>> +             ret = schedule_timeout(ret);
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             if (ret > 0 && intr && signal_pending(current))
>> +                     ret = -ERESTARTSYS;
>> +     }
>> +
>> +     if (!list_empty(&cb.base.node))
>> +             list_del(&cb.base.node);
>> +     __set_current_state(TASK_RUNNING);
>> +
>> +out:
>> +     spin_unlock_irqrestore(fence->lock, flags);
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(fence_default_wait);
>> +
>> +/**
>> + * __fence_init - Initialize a custom fence.
>> + * @fence:   [in]    the fence to initialize
>> + * @ops:     [in]    the fence_ops for operations on this fence
>> + * @lock:    [in]    the irqsafe spinlock to use for locking this fence
>> + * @context: [in]    the execution context this fence is run on
>> + * @seqno:   [in]    a linear increasing sequence number for this context
>> + *
>> + * Initializes an allocated fence, the caller doesn't have to keep its
>> + * refcount after committing with this fence, but it will need to hold a
>> + * refcount again if fence_ops.enable_signaling gets called. This can
>> + * be used for implementing other types of fences.
>> + *
>> + * context and seqno are used for easy comparison between fences, making it
>> + * possible to check which fence is later by simply using fence_later.
>> + */
>> +void
>> +__fence_init(struct fence *fence, const struct fence_ops *ops,
>> +          spinlock_t *lock, unsigned context, unsigned seqno)
>> +{
>> +     BUG_ON(!lock);
>> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
>> +            !ops->get_driver_name || !ops->get_timeline_name);
>> +
>> +     kref_init(&fence->refcount);
>> +     fence->ops = ops;
>> +     INIT_LIST_HEAD(&fence->cb_list);
>> +     fence->lock = lock;
>> +     fence->context = context;
>> +     fence->seqno = seqno;
>> +     fence->flags = 0UL;
>> +
>> +     trace_fence_init(fence);
>> +}
>> +EXPORT_SYMBOL(__fence_init);
>
> Again with the __ exported function...
>
> I don't even see a fence_init() function anywhere, why the __ ?
>
>
>> diff --git a/include/linux/fence.h b/include/linux/fence.h
>> new file mode 100644
>> index 000000000000..65f2a01ee7e4
>> --- /dev/null
>> +++ b/include/linux/fence.h
>> @@ -0,0 +1,333 @@
>> +/*
>> + * Fence mechanism for dma-buf to allow for asynchronous dma access
>> + *
>> + * Copyright (C) 2012 Canonical Ltd
>> + * Copyright (C) 2012 Texas Instruments
>> + *
>> + * Authors:
>> + * Rob Clark <robdclark@gmail.com>
>> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __LINUX_FENCE_H
>> +#define __LINUX_FENCE_H
>> +
>> +#include <linux/err.h>
>> +#include <linux/wait.h>
>> +#include <linux/list.h>
>> +#include <linux/bitops.h>
>> +#include <linux/kref.h>
>> +#include <linux/sched.h>
>> +#include <linux/printk.h>
>> +
>> +struct fence;
>> +struct fence_ops;
>> +struct fence_cb;
>> +
>> +/**
>> + * struct fence - software synchronization primitive
>> + * @refcount: refcount for this fence
>> + * @ops: fence_ops associated with this fence
>> + * @cb_list: list of all callbacks to call
>> + * @lock: spin_lock_irqsave used for locking
>> + * @context: execution context this fence belongs to, returned by
>> + *           fence_context_alloc()
>> + * @seqno: the sequence number of this fence inside the execution context,
>> + * can be compared to decide which fence would be signaled later.
>> + * @flags: A mask of FENCE_FLAG_* defined below
>> + * @timestamp: Timestamp when the fence was signaled.
>> + * @status: Optional, only valid if < 0, must be set before calling
>> + * fence_signal, indicates that the fence has completed with an error.
>> + *
>> + * the flags member must be manipulated and read using the appropriate
>> + * atomic ops (bit_*), so taking the spinlock will not be needed most
>> + * of the time.
>> + *
>> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
>> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
>> + * implementer of the fence for its own purposes. Can be used in different
>> + * ways by different fence implementers, so do not rely on this.
>> + *
>> + * *) Since atomic bitops are used, this is not guaranteed to be the case.
>> + * Particularly, if the bit was set, but fence_signal was called right
>> + * before this bit was set, it would have been able to set the
>> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
>> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
>> + * after fence_signal was called, any enable_signaling call will have either
>> + * been completed, or never called at all.
>> + */
>> +struct fence {
>> +     struct kref refcount;
>> +     const struct fence_ops *ops;
>> +     struct list_head cb_list;
>> +     spinlock_t *lock;
>> +     unsigned context, seqno;
>> +     unsigned long flags;
>> +     ktime_t timestamp;
>> +     int status;
>> +};
>> +
>> +enum fence_flag_bits {
>> +     FENCE_FLAG_SIGNALED_BIT,
>> +     FENCE_FLAG_ENABLE_SIGNAL_BIT,
>> +     FENCE_FLAG_USER_BITS, /* must always be last member */
>> +};
>> +
>> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
>> +
>> +/**
>> + * struct fence_cb - callback for fence_add_callback
>> + * @node: used by fence_add_callback to append this struct to fence::cb_list
>> + * @func: fence_func_t to call
>> + *
>> + * This struct will be initialized by fence_add_callback, additional
>> + * data can be passed along by embedding fence_cb in another struct.
>> + */
>> +struct fence_cb {
>> +     struct list_head node;
>> +     fence_func_t func;
>> +};
>> +
>> +/**
>> + * struct fence_ops - operations implemented for fence
>> + * @get_driver_name: returns the driver name.
>> + * @get_timeline_name: return the name of the context this fence belongs to.
>> + * @enable_signaling: enable software signaling of fence.
>> + * @signaled: [optional] peek whether the fence is signaled, can be null.
>> + * @wait: custom wait implementation, or fence_default_wait.
>> + * @release: [optional] called on destruction of fence, can be null
>> + * @fill_driver_data: [optional] callback to fill in free-form debug info
>> + * Returns amount of bytes filled, or -errno.
>> + * @fence_value_str: [optional] fills in the value of the fence as a string
>> + * @timeline_value_str: [optional] fills in the current value of the timeline
>> + * as a string
>> + *
>> + * Notes on enable_signaling:
>> + * For fence implementations that have the capability for hw->hw
>> + * signaling, they can implement this op to enable the necessary
>> + * irqs, or insert commands into cmdstream, etc.  This is called
>> + * in the first wait() or add_callback() path to let the fence
>> + * implementation know that there is another driver waiting on
>> + * the signal (ie. hw->sw case).
>> + *
>> + * This function can be called from atomic context, but not
>> + * from irq context, so normal spinlocks can be used.
>> + *
>> + * A return value of false indicates the fence already passed,
>> + * or some failure occurred that made it impossible to enable
>> + * signaling. True indicates successful enabling.
>> + *
>> + * fence->status may be set in enable_signaling, but only when false is
>> + * returned.
>> + *
>> + * Calling fence_signal before enable_signaling is called allows
>> + * for a tiny race window in which enable_signaling is called during,
>> + * before, or after fence_signal. To fight this, it is recommended
>> + * that before enable_signaling returns true an extra reference is
>> + * taken on the fence, to be released when the fence is signaled.
>> + * This will mean fence_signal will still be called twice, but
>> + * the second time will be a noop since it was already signaled.
>> + *
>> + * Notes on signaled:
>> + * May set fence->status if returning true.
>> + *
>> + * Notes on wait:
>> + * Must not be NULL, set to fence_default_wait for default implementation.
>> + * The fence_default_wait implementation should work for any fence, as long
>> + * as enable_signaling works correctly.
>> + *
>> + * Must return -ERESTARTSYS if the wait is intr = true and the wait was
>> + * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
>> + * timed out. Can also return other error values on custom implementations,
>> + * which should be treated as if the fence is signaled. For example a hardware
>> + * lockup could be reported like that.
>> + *
>> + * Notes on release:
>> + * Can be NULL, this function allows additional commands to run on
>> + * destruction of the fence. Can be called from irq context.
>> + * If pointer is set to NULL, kfree will get called instead.
>> + */
>> +
>> +struct fence_ops {
>> +     const char * (*get_driver_name)(struct fence *fence);
>> +     const char * (*get_timeline_name)(struct fence *fence);
>> +     bool (*enable_signaling)(struct fence *fence);
>> +     bool (*signaled)(struct fence *fence);
>> +     long (*wait)(struct fence *fence, bool intr, signed long timeout);
>> +     void (*release)(struct fence *fence);
>> +
>> +     int (*fill_driver_data)(struct fence *fence, void *data, int size);
>> +     void (*fence_value_str)(struct fence *fence, char *str, int size);
>> +     void (*timeline_value_str)(struct fence *fence, char *str, int size);
>> +};
>> +
>> +void __fence_init(struct fence *fence, const struct fence_ops *ops,
>> +               spinlock_t *lock, unsigned context, unsigned seqno);
>> +
>> +/**
>> + * fence_get - increases refcount of the fence
>> + * @fence:   [in]    fence to increase refcount of
>> + */
>> +static inline void fence_get(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Why warn?
>
>> +     kref_get(&fence->refcount);
>> +}
>
> Why is this inline?
>
>> +
>> +extern void release_fence(struct kref *kref);
>> +
>> +/**
>> + * fence_put - decreases refcount of the fence
>> + * @fence:   [in]    fence to reduce refcount of
>> + */
>> +static inline void fence_put(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Same as above.
>
>> +     kref_put(&fence->refcount, release_fence);
>> +}
>
> Why inline?
>
>> +int fence_signal(struct fence *fence);
>> +int __fence_signal(struct fence *fence);
>
> Let's randomly pick a function to call... :)
>
>> +static inline bool
>> +__fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             __fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
> Oh nice, another __ function :(
>
>> +
>> +/**
>> + * fence_is_signaled - Return an indication if the fence is signaled yet.
>> + * @fence:   [in]    the fence to check
>> + *
>> + * Returns true if the fence was already signaled, false if not. Since this
>> + * function doesn't enable signaling, it is not guaranteed to ever return true
>> + * if fence_add_callback, fence_wait or fence_enable_sw_signaling
>> + * haven't been called before.
>> + *
>> + * It's recommended for seqno fences to call fence_signal when the
>> + * operation is complete, it makes it possible to prevent issues from
>> + * wraparound between time of issue and time of use by checking the return
>> + * value of this function before calling hardware-specific wait instructions.
>> + */
>> +static inline bool
>> +fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
>
> Why inline for all of these, does it really matter for speed?
>
>> +/**
>> + * fence_later - return the chronologically later fence
>> + * @f1:      [in]    the first fence from the same context
>> + * @f2:      [in]    the second fence from the same context
>> + *
>> + * Returns NULL if both fences are signaled, otherwise the fence that would be
>> + * signaled last. Both fences must be from the same context, since a seqno is
>> + * not re-used across contexts.
>> + */
>> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
>> +{
>> +     BUG_ON(f1->context != f2->context);
>
> Nice, you just crashed the kernel, making it impossible to debug or
> recover :(
>
> Don't do that.
>
> thanks,
>
> greg k-h
Rob Clark June 19, 2014, 1:25 a.m. UTC | #5
On Wed, Jun 18, 2014 at 9:16 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> A fence can be attached to a buffer which is being filled or consumed
>> by hw, to allow userspace to pass the buffer without waiting to another
>> device.  For example, userspace can call page_flip ioctl to display the
>> next frame of graphics after kicking the GPU but while the GPU is still
>> rendering.  The display device sharing the buffer with the GPU would
>> attach a callback to get notified when the GPU's rendering-complete IRQ
>> fires, to update the scan-out address of the display, without having to
>> wake up userspace.
>>
>> A driver must allocate a fence context for each execution ring that can
>> run in parallel. The function for this takes an argument with how many
>> contexts to allocate:
>>   + fence_context_alloc()
>>
>> A fence is transient, one-shot deal.  It is allocated and attached
>> to one or more dma-buf's.  When the one that attached it is done, with
>> the pending operation, it can signal the fence:
>>   + fence_signal()
>>
>> To have a rough approximation whether a fence is fired, call:
>>   + fence_is_signaled()
>>
>> The dma-buf-mgr handles tracking, and waiting on, the fences associated
>> with a dma-buf.
>>
>> The one pending on the fence can add an async callback:
>>   + fence_add_callback()
>>
>> The callback can optionally be cancelled with:
>>   + fence_remove_callback()
>>
>> To wait synchronously, optionally with a timeout:
>>   + fence_wait()
>>   + fence_wait_timeout()
>>
>> When emitting a fence, call:
>>   + trace_fence_emit()
>>
>> To annotate that a fence is blocking on another fence, call:
>>   + trace_fence_annotate_wait_on(fence, on_fence)
>>
>> A default software-only implementation is provided, which can be used
>> by drivers attaching a fence to a buffer when they have no other means
>> for hw sync.  But a memory backed fence is also envisioned, because it
>> is common that GPU's can write to, or poll on some memory location for
>> synchronization.  For example:
>>
>>   fence = custom_get_fence(...);
>>   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
>>     dma_buf *fence_buf = seqno_fence->sync_buf;
>>     get_dma_buf(fence_buf);
>>
>>     ... tell the hw the memory location to wait ...
>>     custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
>>   } else {
>>     /* fall-back to sw sync */
>>     fence_add_callback(fence, my_cb);
>>   }
>>
>> On SoC platforms, if some other hw mechanism is provided for synchronizing
>> between IP blocks, it could be supported as an alternate implementation
>> with its own fence ops in a similar way.
>>
>> enable_signaling callback is used to provide sw signaling in case a cpu
>> waiter is requested or no compatible hardware signaling could be used.
>>
>> The intention is to provide a userspace interface (presumably via eventfd)
>> later, to be used in conjunction with dma-buf's mmap support for sw access
>> to buffers (or for userspace apps that would prefer to do their own
>> synchronization).
>>
>> v1: Original
>> v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
>>     that dma-fence didn't need to care about the sw->hw signaling path
>>     (it can be handled same as sw->sw case), and therefore the fence->ops
>>     can be simplified and more handled in the core.  So remove the signal,
>>     add_callback, cancel_callback, and wait ops, and replace with a simple
>>     enable_signaling() op which can be used to inform a fence supporting
>>     hw->hw signaling that one or more devices which do not support hw
>>     signaling are waiting (and therefore it should enable an irq or do
>>     whatever is necessary in order that the CPU is notified when the
>>     fence is passed).
>> v3: Fix locking fail in attach_fence() and get_fence()
>> v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
>>     we decided that we need to be able to attach one fence to N dma-buf's,
>>     so using the list_head in dma-fence struct would be problematic.
>> v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
>> v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
>>     about checking if fence fired or not. This is broken by design.
>>     waitqueue_active during destruction is now fatal, since the signaller
>>     should be holding a reference in enable_signalling until it signalled
>>     the fence. Pass the original dma_fence_cb along, and call __remove_wait
>>     in the dma_fence_callback handler, so that no cleanup needs to be
>>     performed.
>> v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
>>     fence wasn't signaled yet, for example for hardware fences that may
>>     choose to signal blindly.
>> v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
>>     header and fixed include mess. dma-fence.h now includes dma-buf.h
>>     All members are now initialized, so kmalloc can be used for
>>     allocating a dma-fence. More documentation added.
>> v9: Change compiler bitfields to flags, change return type of
>>     enable_signaling to bool. Rework dma_fence_wait. Added
>>     dma_fence_is_signaled and dma_fence_wait_timeout.
>>     s/dma// and change exports to non GPL. Added fence_is_signaled and
>>     fence_enable_sw_signaling calls, add ability to override default
>>     wait operation.
>> v10: remove event_queue, use a custom list, export try_to_wake_up from
>>     scheduler. Remove fence lock and use a global spinlock instead,
>>     this should hopefully remove all the locking headaches I was having
>>     on trying to implement this. enable_signaling is called with this
>>     lock held.
>> v11:
>>     Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
>>     However I kept the guarantee that after fence_signal returns, it is
>>     guaranteed that enable_signaling has either been called to completion,
>>     or will not be called any more.
>>
>>     Add contexts and seqno to base fence implementation. This allows you
>>     to wait for fewer fences, by testing for seqno + signaled, and then only
>>     wait on the later fence.
>>
>>     Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
>>     A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE
>>     spam, and another runtime option can turn it off at runtime.
>> v12:
>>     Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
>>     and fence->seqno members.
>> v13:
>>     Fixup CONFIG_FENCE_TRACE kconfig description.
>>     Move fence_context_alloc to fence.
>>     Simplify fence_later.
>>     Kill priv member to fence_cb.
>> v14:
>>     Remove priv argument from fence_add_callback, oops!
>> v15:
>>     Remove priv from documentation.
>>     Explicitly include linux/atomic.h.
>> v16:
>>     Add trace events.
>>     Import changes required by android syncpoints.
>> v17:
>>     Use wake_up_state instead of try_to_wake_up. (Colin Cross)
>>     Fix up commit description for seqno_fence. (Rob Clark)
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
>> Reviewed-by: Rob Clark <robdclark@gmail.com>
>> ---
>>  Documentation/DocBook/device-drivers.tmpl |    2
>>  drivers/base/Kconfig                      |    9 +
>>  drivers/base/Makefile                     |    2
>>  drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
>>  include/linux/fence.h                     |  333 +++++++++++++++++++++++
>>  include/trace/events/fence.h              |  128 +++++++++
>>  6 files changed, 889 insertions(+), 1 deletion(-)
>>  create mode 100644 drivers/base/fence.c
>>  create mode 100644 include/linux/fence.h
>>  create mode 100644 include/trace/events/fence.h
>
> Who is going to sign up to maintain this code?  (hint, it's not me...)

that would be Sumit (dma-buf tree)..

probably we should move fence/reservation/dma-buf into drivers/dma-buf
(or something approximately like that)

BR,
-R


> thanks,
>
> greg k-h
Greg KH June 19, 2014, 1:44 a.m. UTC | #6
On Wed, Jun 18, 2014 at 09:23:06PM -0400, Rob Clark wrote:
> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >> +#define CREATE_TRACE_POINTS
> >> +#include <trace/events/fence.h>
> >> +
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
> >
> > Are you really willing to live with these as tracepoints for forever?
> > What is the use of them in debugging?  Was it just for debugging the
> > fence code, or for something else?
> 
> fwiw, the goal is something like this:
> 
> http://people.freedesktop.org/~robclark/perf-supertuxkart.svg
> 
> but without needing to make perf understand each driver's custom trace events
> 
> (from: http://bloggingthemonkey.blogspot.com/2013/09/freedreno-update-moar-fps.html
> )

Will these tracepoints provide something like that?  If so, great, but I
want to make sure as these now become a user/kernel ABI that you can not
break.

thanks,

greg k-h
Sumit Semwal June 19, 2014, 4:27 a.m. UTC | #7
Hi Greg,

On 19 June 2014 06:55, Rob Clark <robdclark@gmail.com> wrote:
> On Wed, Jun 18, 2014 at 9:16 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>>> A fence can be attached to a buffer which is being filled or consumed
>>> by hw, to allow userspace to pass the buffer without waiting to another
>>> device.  For example, userspace can call page_flip ioctl to display the
>>> next frame of graphics after kicking the GPU but while the GPU is still
>>> rendering.  The display device sharing the buffer with the GPU would
>>> attach a callback to get notified when the GPU's rendering-complete IRQ
>>> fires, to update the scan-out address of the display, without having to
>>> wake up userspace.
>>>
>>> A driver must allocate a fence context for each execution ring that can
>>> run in parallel. The function for this takes an argument with how many
>>> contexts to allocate:
>>>   + fence_context_alloc()
>>>
>>> A fence is a transient, one-shot deal.  It is allocated and attached
>>> to one or more dma-buf's.  When the one that attached it is done with
>>> the pending operation, it can signal the fence:
>>>   + fence_signal()
>>>
>>> To get a rough approximation of whether a fence has fired, call:
>>>   + fence_is_signaled()
>>>
>>> The dma-buf-mgr handles tracking, and waiting on, the fences associated
>>> with a dma-buf.
>>>
>>> The one pending on the fence can add an async callback:
>>>   + fence_add_callback()
>>>
>>> The callback can optionally be cancelled with:
>>>   + fence_remove_callback()
>>>
>>> To wait synchronously, optionally with a timeout:
>>>   + fence_wait()
>>>   + fence_wait_timeout()
>>>
>>> When emitting a fence, call:
>>>   + trace_fence_emit()
>>>
>>> To annotate that a fence is blocking on another fence, call:
>>>   + trace_fence_annotate_wait_on(fence, on_fence)
>>>
>>> A default software-only implementation is provided, which can be used
>>> by drivers attaching a fence to a buffer when they have no other means
>>> for hw sync.  But a memory backed fence is also envisioned, because it
>>> is common that GPUs can write to, or poll on, some memory location for
>>> synchronization.  For example:
>>>
>>>   fence = custom_get_fence(...);
>>>   if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
>>>     struct dma_buf *fence_buf = seqno_fence->sync_buf;
>>>     get_dma_buf(fence_buf);
>>>
>>>     ... tell the hw the memory location to wait ...
>>>     custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
>>>   } else {
>>>     /* fall-back to sw sync */
>>>     fence_add_callback(fence, my_cb);
>>>   }
>>>
>>> On SoC platforms, if some other hw mechanism is provided for synchronizing
>>> between IP blocks, it could be supported as an alternate implementation
>>> with its own fence ops in a similar way.
>>>
>>> The enable_signaling callback is used to provide sw signaling in case a cpu
>>> waiter is requested or no compatible hardware signaling could be used.
>>>
>>> The intention is to provide a userspace interface (presumably via eventfd)
>>> later, to be used in conjunction with dma-buf's mmap support for sw access
>>> to buffers (or for userspace apps that would prefer to do their own
>>> synchronization).
>>>
>>> v1: Original
>>> v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
>>>     that dma-fence didn't need to care about the sw->hw signaling path
>>>     (it can be handled same as sw->sw case), and therefore the fence->ops
>>>     can be simplified and more handled in the core.  So remove the signal,
>>>     add_callback, cancel_callback, and wait ops, and replace with a simple
>>>     enable_signaling() op which can be used to inform a fence supporting
>>>     hw->hw signaling that one or more devices which do not support hw
>>>     signaling are waiting (and therefore it should enable an irq or do
>>>     whatever is necessary in order that the CPU is notified when the
>>>     fence is passed).
>>> v3: Fix locking fail in attach_fence() and get_fence()
>>> v4: Remove tie-in w/ dma-buf..  after discussion w/ danvet and mlankhorst
>>>     we decided that we need to be able to attach one fence to N dma-buf's,
>>>     so using the list_head in dma-fence struct would be problematic.
>>> v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
>>> v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments
>>>     about checking if fence fired or not. This is broken by design.
>>>     waitqueue_active during destruction is now fatal, since the signaller
>>>     should be holding a reference in enable_signalling until it signalled
>>>     the fence. Pass the original dma_fence_cb along, and call __remove_wait
>>>     in the dma_fence_callback handler, so that no cleanup needs to be
>>>     performed.
>>> v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if
>>>     fence wasn't signaled yet, for example for hardware fences that may
>>>     choose to signal blindly.
>>> v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to
>>>     header and fixed include mess. dma-fence.h now includes dma-buf.h
>>>     All members are now initialized, so kmalloc can be used for
>>>     allocating a dma-fence. More documentation added.
>>> v9: Change compiler bitfields to flags, change return type of
>>>     enable_signaling to bool. Rework dma_fence_wait. Added
>>>     dma_fence_is_signaled and dma_fence_wait_timeout.
>>>     s/dma// and change exports to non GPL. Added fence_is_signaled and
>>>     fence_enable_sw_signaling calls, add ability to override default
>>>     wait operation.
>>> v10: remove event_queue, use a custom list, export try_to_wake_up from
>>>     scheduler. Remove fence lock and use a global spinlock instead,
>>>     this should hopefully remove all the locking headaches I was having
>>>     on trying to implement this. enable_signaling is called with this
>>>     lock held.
>>> v11:
>>>     Use atomic ops for flags, lifting the need for some spin_lock_irqsaves.
>>>     However I kept the guarantee that after fence_signal returns, it is
>>>     guaranteed that enable_signaling has either been called to completion,
>>>     or will not be called any more.
>>>
>>>     Add contexts and seqno to base fence implementation. This allows you
>>>     to wait for fewer fences, by testing for seqno + signaled, and then only
>>>     waiting on the later fence.
>>>
>>>     Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier.
>>>     A CONFIG_DEBUG_FENCE option will be added to turn off the FENCE_TRACE
>>>     spam, and another runtime option can turn it off at runtime.
>>> v12:
>>>     Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context
>>>     and fence->seqno members.
>>> v13:
>>>     Fixup CONFIG_FENCE_TRACE kconfig description.
>>>     Move fence_context_alloc to fence.
>>>     Simplify fence_later.
>>>     Kill priv member to fence_cb.
>>> v14:
>>>     Remove priv argument from fence_add_callback, oops!
>>> v15:
>>>     Remove priv from documentation.
>>>     Explicitly include linux/atomic.h.
>>> v16:
>>>     Add trace events.
>>>     Import changes required by android syncpoints.
>>> v17:
>>>     Use wake_up_state instead of try_to_wake_up. (Colin Cross)
>>>     Fix up commit description for seqno_fence. (Rob Clark)
>>>
>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>>> Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
>>> Reviewed-by: Rob Clark <robdclark@gmail.com>
>>> ---
>>>  Documentation/DocBook/device-drivers.tmpl |    2
>>>  drivers/base/Kconfig                      |    9 +
>>>  drivers/base/Makefile                     |    2
>>>  drivers/base/fence.c                      |  416 +++++++++++++++++++++++++++++
>>>  include/linux/fence.h                     |  333 +++++++++++++++++++++++
>>>  include/trace/events/fence.h              |  128 +++++++++
>>>  6 files changed, 889 insertions(+), 1 deletion(-)
>>>  create mode 100644 drivers/base/fence.c
>>>  create mode 100644 include/linux/fence.h
>>>  create mode 100644 include/trace/events/fence.h
>>
>> Who is going to sign up to maintain this code?  (hint, it's not me...)
>
> that would be Sumit (dma-buf tree)..
>
> probably we should move fence/reservation/dma-buf into drivers/dma-buf
> (or something approximately like that)
Yes, that would be me - it might be better to create a new directory
as suggested above (drivers/dma-buf).
>
> BR,
> -R
>
>
>> thanks,
>>
>> greg k-h
Best regards,
~Sumit.
Greg KH June 19, 2014, 4:54 a.m. UTC | #8
On Thu, Jun 19, 2014 at 09:57:35AM +0530, Sumit Semwal wrote:
> Hi Greg,
> 
> On 19 June 2014 06:55, Rob Clark <robdclark@gmail.com> wrote:
> > On Wed, Jun 18, 2014 at 9:16 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >>> [... full commit message snipped; quoted in full earlier in the thread ...]
> >>
> >> Who is going to sign up to maintain this code?  (hint, it's not me...)
> >
> > that would be Sumit (dma-buf tree)..
> >
> > probably we should move fence/reservation/dma-buf into drivers/dma-buf
> > (or something approximately like that)
> Yes, that would be me - it might be better to create a new directory
> as suggested above (drivers/dma-buf).

That's fine with me, there is going to be more than just one file in
there, right?  :)

greg k-h
Sumit Semwal June 19, 2014, 5:26 a.m. UTC | #9
On 19 June 2014 10:24, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 09:57:35AM +0530, Sumit Semwal wrote:
>> Hi Greg,
>>

>> >>
>> >> Who is going to sign up to maintain this code?  (hint, it's not me...)
>> >
>> > that would be Sumit (dma-buf tree)..
>> >
>> > probably we should move fence/reservation/dma-buf into drivers/dma-buf
>> > (or something approximately like that)
>> Yes, that would be me - it might be better to create a new directory
>> as suggested above (drivers/dma-buf).
>
> That's fine with me, there is going to be more than just one file in
> there, right?  :)
>
> greg k-h
Certainly at least 3 :)

~sumit
Rob Clark June 19, 2014, 2 p.m. UTC | #10
On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> +#define CREATE_TRACE_POINTS
>> +#include <trace/events/fence.h>
>> +
>> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>
> Are you really willing to live with these as tracepoints for forever?
> What is the use of them in debugging?  Was it just for debugging the
> fence code, or for something else?
>
>> +/**
>> + * fence_context_alloc - allocate an array of fence contexts
>> + * @num:     [in]    amount of contexts to allocate
>> + *
>> + * This function will return the first index of the number of contexts allocated.
>> + * The fence context is used for setting fence->context to a unique number.
>> + */
>> +unsigned fence_context_alloc(unsigned num)
>> +{
>> +     BUG_ON(!num);
>> +     return atomic_add_return(num, &fence_context_counter) - num;
>> +}
>> +EXPORT_SYMBOL(fence_context_alloc);
>
> EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> Traditionally all of the driver core exports have been with this
> marking, any objection to making that change here as well?

tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
life.  We already went through this debate once with dma-buf.  We
aren't going to change $evil_vendor's mind about non-gpl modules.  The
only result will be a more flugly convoluted solution (ie. use syncpt
EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
workaround, with the result that no-one benefits.

>> +int __fence_signal(struct fence *fence)
>> +{
>> +     struct fence_cb *cur, *tmp;
>> +     int ret = 0;
>> +
>> +     if (WARN_ON(!fence))
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> +             ret = -EINVAL;
>> +
>> +             /*
>> +              * we might have raced with the unlocked fence_signal,
>> +              * still run through all callbacks
>> +              */
>> +     } else
>> +             trace_fence_signaled(fence);
>> +
>> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +             list_del_init(&cur->node);
>> +             cur->func(fence, cur);
>> +     }
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(__fence_signal);
>
> Don't export a function with __ in front of it, you are saying that an
> internal function is somehow "valid" for everyone else to call?  Why?
> You aren't even documenting the function, so who knows how to use it?

fwiw, the __ versions appear to mainly be concessions for android
syncpt.  That is the only user outside of fence.c, and it should stay
that way.
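The distinction can be sketched with a small userspace model (all names below are illustrative stand-ins, not the kernel API): the plain signal function takes the fence's lock itself, while the __-prefixed variant assumes the caller already holds it, the way fence_default_wait calls __fence_signal under spin_lock_irqsave in the quoted patch.

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace sketch of the locked/unlocked signal pair; "mini_fence"
 * and its helpers are invented for illustration. */
struct mini_fence {
	pthread_mutex_t *lock;	/* models fence->lock */
	bool signaled;		/* models FENCE_FLAG_SIGNALED_BIT */
	int cb_runs;		/* counts "callback" invocations */
};

/* __-style variant: caller must already hold f->lock. */
static int mini_fence_signal_locked(struct mini_fence *f)
{
	if (f->signaled)
		return -1;	/* already signaled: error, like -EINVAL */
	f->signaled = true;
	f->cb_runs++;		/* run callbacks while still under the lock */
	return 0;
}

/* Plain variant: takes the lock itself, safe for ordinary callers. */
static int mini_fence_signal(struct mini_fence *f)
{
	int ret;

	pthread_mutex_lock(f->lock);
	ret = mini_fence_signal_locked(f);
	pthread_mutex_unlock(f->lock);
	return ret;
}
```

Signaling twice is harmless in this model: the second call returns an error and the callbacks run only once, mirroring the one-way unsignaled-to-signaled transition described in the commit message.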

>> +/**
>> + * fence_signal - signal completion of a fence
>> + * @fence: the fence to signal
>> + *
>> + * Signal completion for software callbacks on a fence, this will unblock
>> + * fence_wait() calls and run all the callbacks added with
>> + * fence_add_callback(). Can be called multiple times, but since a fence
>> + * can only go from unsignaled to signaled state, it will only be effective
>> + * the first time.
>> + */
>> +int fence_signal(struct fence *fence)
>> +{
>> +     unsigned long flags;
>> +
>> +     if (!fence)
>> +             return -EINVAL;
>> +
>> +     if (!ktime_to_ns(fence->timestamp)) {
>> +             fence->timestamp = ktime_get();
>> +             smp_mb__before_atomic();
>> +     }
>> +
>> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return -EINVAL;
>> +
>> +     trace_fence_signaled(fence);
>> +
>> +     if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
>> +             struct fence_cb *cur, *tmp;
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> +                     list_del_init(&cur->node);
>> +                     cur->func(fence, cur);
>> +             }
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +     }
>> +     return 0;
>> +}
>> +EXPORT_SYMBOL(fence_signal);
>
> So, why should I use this over __fence_signal?  What is the difference?
> Am I missing some documentation somewhere?
>
>> +void release_fence(struct kref *kref)
>> +{
>> +     struct fence *fence =
>> +                     container_of(kref, struct fence, refcount);
>> +
>> +     trace_fence_destroy(fence);
>> +
>> +     BUG_ON(!list_empty(&fence->cb_list));
>> +
>> +     if (fence->ops->release)
>> +             fence->ops->release(fence);
>> +     else
>> +             kfree(fence);
>> +}
>> +EXPORT_SYMBOL(release_fence);
>
> fence_release() makes it more unified with the other functions in this
> file, right?
>
>> +/**
>> + * fence_default_wait - default sleep until the fence gets signaled
>> + * or until timeout elapses
>> + * @fence:   [in]    the fence to wait on
>> + * @intr:    [in]    if true, do an interruptible wait
>> + * @timeout: [in]    timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
>> + *
>> + * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
>> + * remaining timeout in jiffies on success.
>> + */
>> +long
>
> Shouldn't this be signed to be explicit?
>
>> +fence_default_wait(struct fence *fence, bool intr, signed long timeout)
>> +{
>> +     struct default_wait_cb cb;
>> +     unsigned long flags;
>> +     long ret = timeout;
>> +     bool was_set;
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return timeout;
>> +
>> +     spin_lock_irqsave(fence->lock, flags);
>> +
>> +     if (intr && signal_pending(current)) {
>> +             ret = -ERESTARTSYS;
>> +             goto out;
>> +     }
>> +
>> +     was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
>> +
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             goto out;
>> +
>> +     if (!was_set) {
>> +             trace_fence_enable_signal(fence);
>> +
>> +             if (!fence->ops->enable_signaling(fence)) {
>> +                     __fence_signal(fence);
>> +                     goto out;
>> +             }
>> +     }
>> +
>> +     cb.base.func = fence_default_wait_cb;
>> +     cb.task = current;
>> +     list_add(&cb.base.node, &fence->cb_list);
>> +
>> +     while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
>> +             if (intr)
>> +                     __set_current_state(TASK_INTERRUPTIBLE);
>> +             else
>> +                     __set_current_state(TASK_UNINTERRUPTIBLE);
>> +             spin_unlock_irqrestore(fence->lock, flags);
>> +
>> +             ret = schedule_timeout(ret);
>> +
>> +             spin_lock_irqsave(fence->lock, flags);
>> +             if (ret > 0 && intr && signal_pending(current))
>> +                     ret = -ERESTARTSYS;
>> +     }
>> +
>> +     if (!list_empty(&cb.base.node))
>> +             list_del(&cb.base.node);
>> +     __set_current_state(TASK_RUNNING);
>> +
>> +out:
>> +     spin_unlock_irqrestore(fence->lock, flags);
>> +     return ret;
>> +}
>> +EXPORT_SYMBOL(fence_default_wait);
>> +
>> +/**
>> + * __fence_init - Initialize a custom fence.
>> + * @fence:   [in]    the fence to initialize
>> + * @ops:     [in]    the fence_ops for operations on this fence
>> + * @lock:    [in]    the irqsafe spinlock to use for locking this fence
>> + * @context: [in]    the execution context this fence is run on
>> + * @seqno:   [in]    a linear increasing sequence number for this context
>> + *
>> + * Initializes an allocated fence, the caller doesn't have to keep its
>> + * refcount after committing with this fence, but it will need to hold a
>> + * refcount again if fence_ops.enable_signaling gets called. This can
>> + * be used for implementing other types of fence.
>> + *
>> + * context and seqno are used for easy comparison between fences, allowing
>> + * to check which fence is later by simply using fence_later.
>> + */
>> +void
>> +__fence_init(struct fence *fence, const struct fence_ops *ops,
>> +          spinlock_t *lock, unsigned context, unsigned seqno)
>> +{
>> +     BUG_ON(!lock);
>> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
>> +            !ops->get_driver_name || !ops->get_timeline_name);
>> +
>> +     kref_init(&fence->refcount);
>> +     fence->ops = ops;
>> +     INIT_LIST_HEAD(&fence->cb_list);
>> +     fence->lock = lock;
>> +     fence->context = context;
>> +     fence->seqno = seqno;
>> +     fence->flags = 0UL;
>> +
>> +     trace_fence_init(fence);
>> +}
>> +EXPORT_SYMBOL(__fence_init);
>
> Again with the __ exported function...
>
> I don't even see a fence_init() function anywhere, why the __ ?
>

think of it as a 'protected' constructor.. only the derived fence
subclass should call it.
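That "protected constructor" pattern can be illustrated with a userspace sketch (the struct and helper names below are invented; the kernel helper under discussion is __fence_init): the driver embeds the base struct, only the driver's own init function calls the __-style initializer, and container_of recovers the subclass inside the ops callbacks.

```c
#include <stddef.h>

/* Userspace container_of, as in the kernel. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Base "fence", loosely modeled on struct fence in the patch. */
struct base_fence {
	unsigned context;
	unsigned seqno;
};

/* "Protected constructor": only derived-fence code should call this. */
static void __base_fence_init(struct base_fence *f,
			      unsigned context, unsigned seqno)
{
	f->context = context;
	f->seqno = seqno;
}

/* Driver-specific fence embedding the base struct. */
struct driver_fence {
	struct base_fence base;
	int hw_ring;		/* driver-private state */
};

/* The driver's public constructor wraps the __-style initializer. */
static void driver_fence_init(struct driver_fence *df,
			      unsigned context, unsigned seqno, int ring)
{
	__base_fence_init(&df->base, context, seqno);
	df->hw_ring = ring;
}

/* Recover the driver fence from a base pointer, as fence ops would. */
static struct driver_fence *to_driver_fence(struct base_fence *f)
{
	return container_of(f, struct driver_fence, base);
}
```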

>
>> diff --git a/include/linux/fence.h b/include/linux/fence.h
>> new file mode 100644
>> index 000000000000..65f2a01ee7e4
>> --- /dev/null
>> +++ b/include/linux/fence.h
>> @@ -0,0 +1,333 @@
>> +/*
>> + * Fence mechanism for dma-buf to allow for asynchronous dma access
>> + *
>> + * Copyright (C) 2012 Canonical Ltd
>> + * Copyright (C) 2012 Texas Instruments
>> + *
>> + * Authors:
>> + * Rob Clark <robdclark@gmail.com>
>> + * Maarten Lankhorst <maarten.lankhorst@canonical.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms of the GNU General Public License version 2 as published by
>> + * the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __LINUX_FENCE_H
>> +#define __LINUX_FENCE_H
>> +
>> +#include <linux/err.h>
>> +#include <linux/wait.h>
>> +#include <linux/list.h>
>> +#include <linux/bitops.h>
>> +#include <linux/kref.h>
>> +#include <linux/sched.h>
>> +#include <linux/printk.h>
>> +
>> +struct fence;
>> +struct fence_ops;
>> +struct fence_cb;
>> +
>> +/**
>> + * struct fence - software synchronization primitive
>> + * @refcount: refcount for this fence
>> + * @ops: fence_ops associated with this fence
>> + * @cb_list: list of all callbacks to call
>> + * @lock: spin_lock_irqsave used for locking
>> + * @context: execution context this fence belongs to, returned by
>> + *           fence_context_alloc()
>> + * @seqno: the sequence number of this fence inside the execution context,
>> + * can be compared to decide which fence would be signaled later.
>> + * @flags: A mask of FENCE_FLAG_* defined below
>> + * @timestamp: Timestamp when the fence was signaled.
>> + * @status: Optional, only valid if < 0, must be set before calling
>> + * fence_signal, indicates that the fence has completed with an error.
>> + *
>> + * the flags member must be manipulated and read using the appropriate
>> + * atomic ops (bit_*), so taking the spinlock will not be needed most
>> + * of the time.
>> + *
>> + * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
>> + * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
>> + * implementer of the fence for its own purposes. Can be used in different
>> + * ways by different fence implementers, so do not rely on this.
>> + *
>> + * *) Since atomic bitops are used, this is not guaranteed to be the case.
>> + * Particularly, if the bit was set, but fence_signal was called right
>> + * before this bit was set, it would have been able to set the
>> + * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
>> + * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
>> + * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
>> + * after fence_signal was called, any enable_signaling call will have either
>> + * been completed, or never called at all.
>> + */
>> +struct fence {
>> +     struct kref refcount;
>> +     const struct fence_ops *ops;
>> +     struct list_head cb_list;
>> +     spinlock_t *lock;
>> +     unsigned context, seqno;
>> +     unsigned long flags;
>> +     ktime_t timestamp;
>> +     int status;
>> +};
>> +
>> +enum fence_flag_bits {
>> +     FENCE_FLAG_SIGNALED_BIT,
>> +     FENCE_FLAG_ENABLE_SIGNAL_BIT,
>> +     FENCE_FLAG_USER_BITS, /* must always be last member */
>> +};
>> +
>> +typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
>> +
>> +/**
>> + * struct fence_cb - callback for fence_add_callback
>> + * @node: used by fence_add_callback to append this struct to fence::cb_list
>> + * @func: fence_func_t to call
>> + *
>> + * This struct will be initialized by fence_add_callback, additional
>> + * data can be passed along by embedding fence_cb in another struct.
>> + */
>> +struct fence_cb {
>> +     struct list_head node;
>> +     fence_func_t func;
>> +};
>> +
>> +/**
>> + * struct fence_ops - operations implemented for fence
>> + * @get_driver_name: returns the driver name.
>> + * @get_timeline_name: return the name of the context this fence belongs to.
>> + * @enable_signaling: enable software signaling of fence.
>> + * @signaled: [optional] peek whether the fence is signaled, can be null.
>> + * @wait: custom wait implementation, or fence_default_wait.
>> + * @release: [optional] called on destruction of fence, can be null
>> + * @fill_driver_data: [optional] callback to fill in free-form debug info
>> + * Returns amount of bytes filled, or -errno.
>> + * @fence_value_str: [optional] fills in the value of the fence as a string
>> + * @timeline_value_str: [optional] fills in the current value of the timeline
>> + * as a string
>> + *
>> + * Notes on enable_signaling:
>> + * For fence implementations that have the capability for hw->hw
>> + * signaling, they can implement this op to enable the necessary
>> + * irqs, or insert commands into cmdstream, etc.  This is called
>> + * in the first wait() or add_callback() path to let the fence
>> + * implementation know that there is another driver waiting on
>> + * the signal (ie. hw->sw case).
>> + *
>> + * This function can be called from atomic context, but not
>> + * from irq context, so normal spinlocks can be used.
>> + *
>> + * A return value of false indicates the fence already passed,
>> + * or some failure occured that made it impossible to enable
>> + * signaling. True indicates succesful enabling.
>> + *
>> + * fence->status may be set in enable_signaling, but only when false is
>> + * returned.
>> + *
>> + * Calling fence_signal before enable_signaling is called allows
>> + * for a tiny race window in which enable_signaling is called during,
>> + * before, or after fence_signal. To fight this, it is recommended
>> + * that before enable_signaling returns true an extra reference is
>> + * taken on the fence, to be released when the fence is signaled.
>> + * This will mean fence_signal will still be called twice, but
>> + * the second time will be a noop since it was already signaled.
>> + *
>> + * Notes on signaled:
>> + * May set fence->status if returning true.
>> + *
>> + * Notes on wait:
>> + * Must not be NULL; set to fence_default_wait for the default implementation.
>> + * The fence_default_wait implementation should work for any fence, as long
>> + * as enable_signaling works correctly.
>> + *
>> + * Must return -ERESTARTSYS if intr is true and the wait was
>> + * interrupted, the remaining jiffies if the fence has signaled, or 0 if
>> + * the wait timed out. Can also return other error values on custom implementations,
>> + * which should be treated as if the fence is signaled. For example a hardware
>> + * lockup could be reported like that.
>> + *
>> + * Notes on release:
>> + * Can be NULL, this function allows additional commands to run on
>> + * destruction of the fence. Can be called from irq context.
>> + * If pointer is set to NULL, kfree will get called instead.
>> + */
>> +
>> +struct fence_ops {
>> +     const char * (*get_driver_name)(struct fence *fence);
>> +     const char * (*get_timeline_name)(struct fence *fence);
>> +     bool (*enable_signaling)(struct fence *fence);
>> +     bool (*signaled)(struct fence *fence);
>> +     long (*wait)(struct fence *fence, bool intr, signed long timeout);
>> +     void (*release)(struct fence *fence);
>> +
>> +     int (*fill_driver_data)(struct fence *fence, void *data, int size);
>> +     void (*fence_value_str)(struct fence *fence, char *str, int size);
>> +     void (*timeline_value_str)(struct fence *fence, char *str, int size);
>> +};
>> +
>> +void __fence_init(struct fence *fence, const struct fence_ops *ops,
>> +               spinlock_t *lock, unsigned context, unsigned seqno);
>> +
>> +/**
>> + * fence_get - increases refcount of the fence
>> + * @fence:   [in]    fence to increase refcount of
>> + */
>> +static inline void fence_get(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Why warn?
>
>> +     kref_get(&fence->refcount);
>> +}
>
> Why is this inline?

performance can be critical.. especially if the driver is using this
fence mechanism for internal buffers as well as shared buffers (which
is what I'd like to do to avoid having to deal with two different
fencing mechanisms for shared vs non-shared buffers), since you could
easily have 100's or perhaps 1000's of buffers involved in a submit.

The fence stuff does try to inline as much stuff as possible,
especially critical-path stuff, for this reason.

>> +
>> +extern void release_fence(struct kref *kref);
>> +
>> +/**
>> + * fence_put - decreases refcount of the fence
>> + * @fence:   [in]    fence to reduce refcount of
>> + */
>> +static inline void fence_put(struct fence *fence)
>> +{
>> +     if (WARN_ON(!fence))
>> +             return;
>
> Same as above.
>
>> +     kref_put(&fence->refcount, release_fence);
>> +}
>
> Why inline?
>
>> +int fence_signal(struct fence *fence);
>> +int __fence_signal(struct fence *fence);
>
> Let's randomly pick a function to call... :)
>
>> +static inline bool
>> +__fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             __fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
> Oh nice, another __ function :(
>
>> +
>> +/**
>> + * fence_is_signaled - Return an indication if the fence is signaled yet.
>> + * @fence:   [in]    the fence to check
>> + *
>> + * Returns true if the fence was already signaled, false if not. Since this
>> + * function doesn't enable signaling, it is not guaranteed to ever return true
>> + * if fence_add_callback, fence_wait or fence_enable_sw_signaling
>> + * haven't been called before.
>> + *
>> + * It's recommended for seqno fences to call fence_signal when the
>> + * operation is complete; this makes it possible to prevent issues from
>> + * wraparound between time of issue and time of use by checking the return
>> + * value of this function before calling hardware-specific wait instructions.
>> + */
>> +static inline bool
>> +fence_is_signaled(struct fence *fence)
>> +{
>> +     if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>> +             return true;
>> +
>> +     if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> +             fence_signal(fence);
>> +             return true;
>> +     }
>> +
>> +     return false;
>> +}
>
>
> Why inline for all of these, does it really matter for speed?
>
>> +/**
>> + * fence_later - return the chronologically later fence
>> + * @f1:      [in]    the first fence from the same context
>> + * @f2:      [in]    the second fence from the same context
>> + *
>> + * Returns NULL if both fences are signaled, otherwise the fence that would be
>> + * signaled last. Both fences must be from the same context, since a seqno is
>> + * not re-used across contexts.
>> + */
>> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
>> +{
>> +     BUG_ON(f1->context != f2->context);
>
> Nice, you just crashed the kernel, making it impossible to debug or
> recover :(

agreed, that should probably be 'if (WARN_ON(...)) return NULL;'

(but at least I wouldn't expect to hit that under console_lock so you
should at least see the last N lines of the backtrace on the screen
;-))

BR,
-R


> Don't do that.
>
> thanks,
>
> greg k-h
Greg KH June 19, 2014, 5 p.m. UTC | #11
On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >> +#define CREATE_TRACE_POINTS
> >> +#include <trace/events/fence.h>
> >> +
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
> >
> > Are you really willing to live with these as tracepoints for forever?
> > What is the use of them in debugging?  Was it just for debugging the
> > fence code, or for something else?
> >
> >> +/**
> >> + * fence_context_alloc - allocate an array of fence contexts
> >> + * @num:     [in]    number of contexts to allocate
> >> + *
> >> + * This function returns the first index of the @num newly allocated contexts.
> >> + * The fence context is used for setting fence->context to a unique number.
> >> + */
> >> +unsigned fence_context_alloc(unsigned num)
> >> +{
> >> +     BUG_ON(!num);
> >> +     return atomic_add_return(num, &fence_context_counter) - num;
> >> +}
> >> +EXPORT_SYMBOL(fence_context_alloc);
> >
> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> > Traditionally all of the driver core exports have been with this
> > marking, any objection to making that change here as well?
> 
> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
> life.  We already went through this debate once with dma-buf.  We
> aren't going to change $evil_vendor's mind about non-gpl modules.  The
> only result will be a more flugly convoluted solution (ie. use syncpt
> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
> workaround, with the result that no-one benefits.

It has been proven that using _GPL() exports have caused companies to
release their code "properly" over the years, so as these really are
Linux-only apis, please change them to be marked this way, it helps
everyone out in the end.

> >> +int __fence_signal(struct fence *fence)
> >> +{
> >> +     struct fence_cb *cur, *tmp;
> >> +     int ret = 0;
> >> +
> >> +     if (WARN_ON(!fence))
> >> +             return -EINVAL;
> >> +
> >> +     if (!ktime_to_ns(fence->timestamp)) {
> >> +             fence->timestamp = ktime_get();
> >> +             smp_mb__before_atomic();
> >> +     }
> >> +
> >> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
> >> +             ret = -EINVAL;
> >> +
> >> +             /*
> >> +              * we might have raced with the unlocked fence_signal,
> >> +              * still run through all callbacks
> >> +              */
> >> +     } else
> >> +             trace_fence_signaled(fence);
> >> +
> >> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
> >> +             list_del_init(&cur->node);
> >> +             cur->func(fence, cur);
> >> +     }
> >> +     return ret;
> >> +}
> >> +EXPORT_SYMBOL(__fence_signal);
> >
> > Don't export a function with __ in front of it, you are saying that an
> > internal function is somehow "valid" for everyone else to call?  Why?
> > You aren't even documenting the function, so who knows how to use it?
> 
> fwiw, the __ versions appear to mainly be concessions for android
> syncpt.  That is the only user outside of fence.c, and it should stay
> that way.

How are you going to "ensure" this?  And where did you document it?
Please fix this up, it's a horrid way to create a new api.

If the android code needs to be fixed to fit into this model, then fix
it.

> >> +void
> >> +__fence_init(struct fence *fence, const struct fence_ops *ops,
> >> +          spinlock_t *lock, unsigned context, unsigned seqno)
> >> +{
> >> +     BUG_ON(!lock);
> >> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
> >> +            !ops->get_driver_name || !ops->get_timeline_name);
> >> +
> >> +     kref_init(&fence->refcount);
> >> +     fence->ops = ops;
> >> +     INIT_LIST_HEAD(&fence->cb_list);
> >> +     fence->lock = lock;
> >> +     fence->context = context;
> >> +     fence->seqno = seqno;
> >> +     fence->flags = 0UL;
> >> +
> >> +     trace_fence_init(fence);
> >> +}
> >> +EXPORT_SYMBOL(__fence_init);
> >
> > Again with the __ exported function...
> >
> > I don't even see a fence_init() function anywhere, why the __ ?
> >
> 
> think of it as a 'protected' constructor.. only the derived fence
> subclass should call.

Where do you say this?  Again, not a good reason, fix up the api please.

> >> +     kref_get(&fence->refcount);
> >> +}
> >
> > Why is this inline?
> 
> performance can be critical.. especially if the driver is using this
> fence mechanism for internal buffers as well as shared buffers (which
> is what I'd like to do to avoid having to deal with two different
> fencing mechanisms for shared vs non-shared buffers), since you could
> easily have 100's or perhaps 1000's of buffers involved in a submit.

"can be".  Did you actually measure it?  Please do so.

> The fence stuff does try to inline as much stuff as possible,
> especially critical-path stuff, for this reason.

Inlining code doesn't always mean "faster", in fact, on lots of
processors and with "large" inline functions, the opposite is true.  So
only do so if you can measure it.

> >> +/**
> >> + * fence_later - return the chronologically later fence
> >> + * @f1:      [in]    the first fence from the same context
> >> + * @f2:      [in]    the second fence from the same context
> >> + *
> >> + * Returns NULL if both fences are signaled, otherwise the fence that would be
> >> + * signaled last. Both fences must be from the same context, since a seqno is
> >> + * not re-used across contexts.
> >> + */
> >> +static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
> >> +{
> >> +     BUG_ON(f1->context != f2->context);
> >
> > Nice, you just crashed the kernel, making it impossible to debug or
> > recover :(
> 
> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> 
> (but at least I wouldn't expect to hit that under console_lock so you
> should at least see the last N lines of the backtrace on the screen
> ;-))

Lots of devices don't have console screens :)

thanks,

greg k-h
Rob Clark June 19, 2014, 5:45 p.m. UTC | #12
On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> >
>> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> > Traditionally all of the driver core exports have been with this
>> > marking, any objection to making that change here as well?
>>
>> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> life.  We already went through this debate once with dma-buf.  We
>> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> only result will be a more flugly convoluted solution (ie. use syncpt
>> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> workaround, with the result that no-one benefits.
>
> It has been proven that using _GPL() exports have caused companies to
> release their code "properly" over the years, so as these really are
> Linux-only apis, please change them to be marked this way, it helps
> everyone out in the end.

Well, maybe that is the true in some cases.  But it certainly didn't
work out that way for dma-buf.  And I think the end result is worse.

I don't really like coming down on the side of EXPORT_SYMBOL() instead
of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
result will only be creative workarounds using the _GPL symbols
indirectly by whatever is available via EXPORT_SYMBOL().  I don't
really see how that will be better.

>> >> +int __fence_signal(struct fence *fence)
>> >> +{
>> >> +     struct fence_cb *cur, *tmp;
>> >> +     int ret = 0;
>> >> +
>> >> +     if (WARN_ON(!fence))
>> >> +             return -EINVAL;
>> >> +
>> >> +     if (!ktime_to_ns(fence->timestamp)) {
>> >> +             fence->timestamp = ktime_get();
>> >> +             smp_mb__before_atomic();
>> >> +     }
>> >> +
>> >> +     if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>> >> +             ret = -EINVAL;
>> >> +
>> >> +             /*
>> >> +              * we might have raced with the unlocked fence_signal,
>> >> +              * still run through all callbacks
>> >> +              */
>> >> +     } else
>> >> +             trace_fence_signaled(fence);
>> >> +
>> >> +     list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
>> >> +             list_del_init(&cur->node);
>> >> +             cur->func(fence, cur);
>> >> +     }
>> >> +     return ret;
>> >> +}
>> >> +EXPORT_SYMBOL(__fence_signal);
>> >
>> > Don't export a function with __ in front of it, you are saying that an
>> > internal function is somehow "valid" for everyone else to call?  Why?
>> > You aren't even documenting the function, so who knows how to use it?
>>
>> fwiw, the __ versions appear to mainly be concessions for android
>> syncpt.  That is the only user outside of fence.c, and it should stay
>> that way.
>
> How are you going to "ensure" this?  And where did you document it?
> Please fix this up, it's a horrid way to create a new api.
>
> If the android code needs to be fixed to fit into this model, then fix
> it.

heh, and in fact I was wrong about this.. the __ versions are actually
for when the lock is already held.  Maarten needs to rename (ie
_locked suffix) and add some API docs for this.

>> >> +void
>> >> +__fence_init(struct fence *fence, const struct fence_ops *ops,
>> >> +          spinlock_t *lock, unsigned context, unsigned seqno)
>> >> +{
>> >> +     BUG_ON(!lock);
>> >> +     BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
>> >> +            !ops->get_driver_name || !ops->get_timeline_name);
>> >> +
>> >> +     kref_init(&fence->refcount);
>> >> +     fence->ops = ops;
>> >> +     INIT_LIST_HEAD(&fence->cb_list);
>> >> +     fence->lock = lock;
>> >> +     fence->context = context;
>> >> +     fence->seqno = seqno;
>> >> +     fence->flags = 0UL;
>> >> +
>> >> +     trace_fence_init(fence);
>> >> +}
>> >> +EXPORT_SYMBOL(__fence_init);
>> >
>> > Again with the __ exported function...
>> >
>> > I don't even see a fence_init() function anywhere, why the __ ?
>> >
>>
>> think of it as a 'protected' constructor.. only the derived fence
>> subclass should call.
>
> Where do you say this?  Again, not a good reason, fix up the api please.
>
>> >> +     kref_get(&fence->refcount);
>> >> +}
>> >
>> > Why is this inline?
>>
>> performance can be critical.. especially if the driver is using this
>> fence mechanism for internal buffers as well as shared buffers (which
>> is what I'd like to do to avoid having to deal with two different
>> fencing mechanisms for shared vs non-shared buffers), since you could
>> easily have 100's or perhaps 1000's of buffers involved in a submit.
>
> "can be".  Did you actually measure it?  Please do so.

"can be" == "depends on how the driver uses fence".  Ie. if it is only
used for buffers that are shared between devices, I wouldn't expect it
to matter.

I believe that Maarten did some measurements at some point, which
might be relevant..

At any rate, fences certainly aren't something new to graphics
drivers.  And at least in the case of TTM we are replacing code that
is directly inline with various parts of TTM with calls to static
inline fence fxns.  So for those starting w/ static-inline seems more
reasonable.. to *not* inline would be the bigger deviation from how
things work now.

>> The fence stuff does try to inline as much stuff as possible,
>> especially critical-path stuff, for this reason.
>
> Inlining code doesn't always mean "faster", in fact, on lots of
> processors and with "large" inline functions, the opposite is true.  So
> only do so if you can measure it.

fair enough.. although inline fxns here are not particularly large,
and since this is something we are retrofitting to TTM I think it is
not unreasonable to start with static-inline until proven otherwise
;-)

BR,
-R

Greg KH June 19, 2014, 6:19 p.m. UTC | #13
On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> >> >
> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> >> > Traditionally all of the driver core exports have been with this
> >> > marking, any objection to making that change here as well?
> >>
> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
> >> life.  We already went through this debate once with dma-buf.  We
> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
> >> only result will be a more flugly convoluted solution (ie. use syncpt
> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
> >> workaround, with the result that no-one benefits.
> >
> > It has been proven that using _GPL() exports have caused companies to
> > release their code "properly" over the years, so as these really are
> > Linux-only apis, please change them to be marked this way, it helps
> > everyone out in the end.
> 
> Well, maybe that is the true in some cases.  But it certainly didn't
> work out that way for dma-buf.  And I think the end result is worse.
> 
> I don't really like coming down on the side of EXPORT_SYMBOL() instead
> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
> result will only be creative workarounds using the _GPL symbols
> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
> really see how that will be better.

You are saying that you _know_ companies will violate our license, so
you should just "give up"?  And how do you know people aren't working on
preventing those "indirect" usages as well?  :)

Sorry, I'm not going to give up here, again, it has proven to work in
the past in changing the ways of _very_ large companies, why stop now?

thanks,

greg k-h
James Bottomley June 19, 2014, 6:37 p.m. UTC | #14
On Thu, 2014-06-19 at 11:19 -0700, Greg KH wrote:
> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
> > On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
> > >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> > >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
> > >> >
> > >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
> > >> > Traditionally all of the driver core exports have been with this
> > >> > marking, any objection to making that change here as well?
> > >>
> > >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
> > >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
> > >> life.  We already went through this debate once with dma-buf.  We
> > >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
> > >> only result will be a more flugly convoluted solution (ie. use syncpt
> > >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
> > >> workaround, with the result that no-one benefits.
> > >
> > > It has been proven that using _GPL() exports have caused companies to
> > > release their code "properly" over the years, so as these really are
> > > Linux-only apis, please change them to be marked this way, it helps
> > > everyone out in the end.
> > 
> > Well, maybe that is the true in some cases.  But it certainly didn't
> > work out that way for dma-buf.  And I think the end result is worse.
> > 
> > I don't really like coming down on the side of EXPORT_SYMBOL() instead
> > of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
> > result will only be creative workarounds using the _GPL symbols
> > indirectly by whatever is available via EXPORT_SYMBOL().  I don't
> > really see how that will be better.
> 
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)
> 
> Sorry, I'm not going to give up here, again, it has proven to work in
> the past in changing the ways of _very_ large companies, why stop now?

When you try to train a dog, you have to be consistent about it.  We're
fantastically inconsistent in symbol exports.

For instance, the mutex primitives are all EXPORT_SYMBOL(), so we're
telling proprietary modules they can use them.  However, when the kernel
is built with CONFIG_DEBUG_MUTEX, they all become
EXPORT_SYMBOL_GPL() ... what type of crazy message is that supposed to
send?  It's OK to use mutexes but it's potentially a GPL violation to
debug them?

We either need to decide that we have a defined and consistent part of
our API that's GPL only or make the bold statement that we don't have
any part of our API that's usable by non-GPL modules.  Right at the
minute we do neither and it confuses people no end about what is and
isn't allowed.

James
Rob Clark June 19, 2014, 6:52 p.m. UTC | #15
On Thu, Jun 19, 2014 at 2:19 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
>> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> >> >
>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> >> > Traditionally all of the driver core exports have been with this
>> >> > marking, any objection to making that change here as well?
>> >>
>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> >> life.  We already went through this debate once with dma-buf.  We
>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> >> only result will be a more flugly convoluted solution (ie. use syncpt
>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> >> workaround, with the result that no-one benefits.
>> >
>> > It has been proven that using _GPL() exports have caused companies to
>> > release their code "properly" over the years, so as these really are
>> > Linux-only apis, please change them to be marked this way, it helps
>> > everyone out in the end.
>>
>> Well, maybe that is the true in some cases.  But it certainly didn't
>> work out that way for dma-buf.  And I think the end result is worse.
>>
>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>> result will only be creative workarounds using the _GPL symbols
>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>> really see how that will be better.
>
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)

Well, all I know is what happened with dmabuf.  This seems like the
exact same scenario (same vendor, same driver, same use-case).

Not really sure how we could completely prevent indirect usage, given
that drm core and many of the drivers are dual MIT/GPL.   (But ofc,
IANAL.)

> Sorry, I'm not going to give up here, again, it has proven to work in
> the past in changing the ways of _very_ large companies, why stop now?

In the general case, I would agree.  But in this specific case, I am
not very optimistic.

That said, it isn't really my loss if it is _GPL()..  I don't have to
use or support that particular driver.  But given that we have some
history from the same debate with dma-buf, I think it is pretty easy
to infer the result from making fence EXPORT_SYMBOL_GPL().

BR,
-R

Daniel Vetter June 19, 2014, 7:15 p.m. UTC | #16
On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> +     BUG_ON(f1->context != f2->context);
>> >
>> > Nice, you just crashed the kernel, making it impossible to debug or
>> > recover :(
>>
>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
>>
>> (but at least I wouldn't expect to hit that under console_lock so you
>> should at least see the last N lines of the backtrace on the screen
>> ;-))
>
> Lots of devices don't have console screens :)

Aside: This is a pet peeve of mine and recently I've switched to
rejecting all patches that have a BUG_ON, period. Except when you can
prove that the kernel will die in the next few lines and there's
nothing you can do about it, a WARN_ON is always better - I've wasted
_way_ too much time debugging hard hangs because such a "benign"
BUG_ON ended up eating my irq handler or a spinlock required by such.
Or some other nonsense that makes debugging a royal pita, especially
if your remote debugger consists of a frustrated user staring at a
hung machine.

</rant>

Cheers, Daniel
Daniel Vetter June 19, 2014, 7:20 p.m. UTC | #17
On Thu, Jun 19, 2014 at 8:19 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> >> > Traditionally all of the driver core exports have been with this
>> >> > marking, any objection to making that change here as well?
>> >>
>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> >> life.  We already went through this debate once with dma-buf.  We
>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> >> only result will be a more flugly convoluted solution (ie. use syncpt
>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> >> workaround, with the result that no-one benefits.
>> >
>> > It has been proven that using _GPL() exports have caused companies to
>> > release their code "properly" over the years, so as these really are
>> > Linux-only apis, please change them to be marked this way, it helps
>> > everyone out in the end.
>>
>> Well, maybe that is the true in some cases.  But it certainly didn't
>> work out that way for dma-buf.  And I think the end result is worse.
>>
>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>> result will only be creative workarounds using the _GPL symbols
>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>> really see how that will be better.
>
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)
>
> Sorry, I'm not going to give up here, again, it has proven to work in
> the past in changing the ways of _very_ large companies, why stop now?

Dave should chime in here since currently dma-buf is _GPL and the
drm_prime.c wrapper for it is not (and he merged that one, contributed
from said $vendor). And since we're gfx people everything we do is MIT
licensed (that's where X is from after all), so _GPL for drm stuff
really doesn't make a lot of sense for us. ianal and all that applies.
-Daniel
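For reference, the mechanics being argued over look like this in kernel source. This is a fragment only (kernel module context, not standalone-compilable), and the function names are made up:

```c
/* Fragment for illustration; example_fn/example_gpl_fn are invented. */
int example_fn(void)
{
	return 0;
}
EXPORT_SYMBOL(example_fn);		/* linkable from any module */

int example_gpl_fn(void)
{
	return 0;
}
EXPORT_SYMBOL_GPL(example_gpl_fn);	/* the module loader refuses to
					 * resolve this for modules whose
					 * MODULE_LICENSE() is not
					 * GPL-compatible */
```

The "indirect usage" Rob describes is then a GPL shim module that calls the _GPL symbol internally and re-exports its own wrapper with plain EXPORT_SYMBOL, which is roughly the dma-buf / drm_prime.c situation Daniel refers to above.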
Greg KH June 19, 2014, 8:01 p.m. UTC | #18
On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >> >> +     BUG_ON(f1->context != f2->context);
> >> >
> >> > Nice, you just crashed the kernel, making it impossible to debug or
> >> > recover :(
> >>
> >> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> >>
> >> (but at least I wouldn't expect to hit that under console_lock so you
> >> should at least see the last N lines of the backtrace on the screen
> >> ;-))
> >
> > Lots of devices don't have console screens :)
> 
> Aside: This is a pet peeve of mine and recently I've switched to
> rejecting all patch that have a BUG_ON, period.

Please do, I have been for a few years now as well for the same reasons
you cite.

greg k-h
Dave Airlie June 19, 2014, 9:50 p.m. UTC | #19
On 20 June 2014 04:19, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
>> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>> >> >> +#define CREATE_TRACE_POINTS
>> >> >> +#include <trace/events/fence.h>
>> >> >> +
>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>> >> >
>> >> > Are you really willing to live with these as tracepoints for forever?
>> >> > What is the use of them in debugging?  Was it just for debugging the
>> >> > fence code, or for something else?
>> >> >
>> >> >> +/**
>> >> >> + * fence_context_alloc - allocate an array of fence contexts
>> >> >> + * @num:     [in]    amount of contexts to allocate
>> >> >> + *
>> >> >> + * This function will return the first index of the number of fences allocated.
>> >> >> + * The fence context is used for setting fence->context to a unique number.
>> >> >> + */
>> >> >> +unsigned fence_context_alloc(unsigned num)
>> >> >> +{
>> >> >> +     BUG_ON(!num);
>> >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
>> >> >> +}
>> >> >> +EXPORT_SYMBOL(fence_context_alloc);
>> >> >
>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>> >> > Traditionally all of the driver core exports have been with this
>> >> > marking, any objection to making that change here as well?
>> >>
>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>> >> life.  We already went through this debate once with dma-buf.  We
>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>> >> only result will be a more flugly convoluted solution (ie. use syncpt
>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>> >> workaround, with the result that no-one benefits.
>> >
>> > It has been proven that using _GPL() exports have caused companies to
>> > release their code "properly" over the years, so as these really are
>> > Linux-only apis, please change them to be marked this way, it helps
>> > everyone out in the end.
>>
>> Well, maybe that is the true in some cases.  But it certainly didn't
>> work out that way for dma-buf.  And I think the end result is worse.
>>
>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>> result will only be creative workarounds using the _GPL symbols
>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>> really see how that will be better.
>
> You are saying that you _know_ companies will violate our license, so
> you should just "give up"?  And how do you know people aren't working on
> preventing those "indirect" usages as well?  :)
>
> Sorry, I'm not going to give up here, again, it has proven to work in
> the past in changing the ways of _very_ large companies, why stop now?

I've found that large companies shipping lots of hw putting pressure on
other large/small companies seems to be the only way this has ever
happened; we'd like to cover that up and say it's some great GPL
enforcement thing.

To be honest, author's choice is how I'd treat this.

Personally I think _GPL is broken by design, and that Linus's initial
point for them has been so diluted by random lobby groups asking for
every symbol to be _GPL that they are becoming effectively pointless
now. I also dislike the fact that the lobby groups don't just bring
violators to court. I'm also sure someone like the LF could have a
nice income stream if Linus gave them permission to enforce his
copyrights.

But anyways, has someone checked that iOS or Windows don't have a
fence interface? so we know that this is a Linux only interface and
any works using it are derived? Say the nvidia driver isn't a derived
work now, will using this interface magically translate it into a
derived work, so we can go sue them? I don't think so.

But its up to Maarten and Rob, and if they say no _GPL then I don't
think we should be overriding authors intents.

Dave.
H. Peter Anvin June 19, 2014, 10:39 p.m. UTC | #20
On 06/19/2014 01:01 PM, Greg KH wrote:
> On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
>> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>> +     BUG_ON(f1->context != f2->context);
>>>>>
>>>>> Nice, you just crashed the kernel, making it impossible to debug or
>>>>> recover :(
>>>>
>>>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
>>>>
>>>> (but at least I wouldn't expect to hit that under console_lock so you
>>>> should at least see the last N lines of the backtrace on the screen
>>>> ;-))
>>>
>>> Lots of devices don't have console screens :)
>>
>> Aside: This is a pet peeve of mine and recently I've switched to
>> rejecting all patch that have a BUG_ON, period.
> 
> Please do, I have been for a few years now as well for the same reasons
> you cite.
> 

I'm actually concerned about this trend.  Downgrading things to WARN_ON
can allow a security bug in the kernel to continue to exist, for
example, or make the error message disappear.

I am wondering if the right thing here isn't to have a user (command
line?) settable policy as to how to proceed on an assert violation,
instead of hardcoding it at compile time.

	-hpa
James Bottomley June 19, 2014, 11:08 p.m. UTC | #21
On Thu, 2014-06-19 at 15:39 -0700, H. Peter Anvin wrote:
> On 06/19/2014 01:01 PM, Greg KH wrote:
> > On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
> >> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>> +     BUG_ON(f1->context != f2->context);
> >>>>>
> >>>>> Nice, you just crashed the kernel, making it impossible to debug or
> >>>>> recover :(
> >>>>
> >>>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> >>>>
> >>>> (but at least I wouldn't expect to hit that under console_lock so you
> >>>> should at least see the last N lines of the backtrace on the screen
> >>>> ;-))
> >>>
> >>> Lots of devices don't have console screens :)
> >>
> >> Aside: This is a pet peeve of mine and recently I've switched to
> >> rejecting all patch that have a BUG_ON, period.
> > 
> > Please do, I have been for a few years now as well for the same reasons
> > you cite.
> > 
> 
> I'm actually concerned about this trend.  Downgrading things to WARN_ON
> can allow a security bug in the kernel to continue to exist, for
> example, or make the error message disappear.

Me too.  We use BUG_ON in the I/O subsystem where we're forced to
violate a guarantee.  When the choice is corrupt something or panic the
system, I prefer the latter every time.

> I am wondering if the right thing here isn't to have a user (command
> line?) settable policy as to how to proceed on an assert violation,
> instead of hardcoding it at compile time.

I'd say it depends on the consequence of the assertion violation.  We
have assertions that are largely theoretical, ones that govern process
internal state (so killing the process mostly sanitizes the system) and
a few that imply data loss or data corruption.

James
Rob Clark June 19, 2014, 11:21 p.m. UTC | #22
On Thu, Jun 19, 2014 at 5:50 PM, Dave Airlie <airlied@gmail.com> wrote:
> On 20 June 2014 04:19, Greg KH <gregkh@linuxfoundation.org> wrote:
>> On Thu, Jun 19, 2014 at 01:45:30PM -0400, Rob Clark wrote:
>>> On Thu, Jun 19, 2014 at 1:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>> > On Thu, Jun 19, 2014 at 10:00:18AM -0400, Rob Clark wrote:
>>> >> On Wed, Jun 18, 2014 at 9:13 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
>>> >> > On Wed, Jun 18, 2014 at 12:36:54PM +0200, Maarten Lankhorst wrote:
>>> >> >> +#define CREATE_TRACE_POINTS
>>> >> >> +#include <trace/events/fence.h>
>>> >> >> +
>>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
>>> >> >> +EXPORT_TRACEPOINT_SYMBOL(fence_emit);
>>> >> >
>>> >> > Are you really willing to live with these as tracepoints for forever?
>>> >> > What is the use of them in debugging?  Was it just for debugging the
>>> >> > fence code, or for something else?
>>> >> >
>>> >> >> +/**
>>> >> >> + * fence_context_alloc - allocate an array of fence contexts
>>> >> >> + * @num:     [in]    amount of contexts to allocate
>>> >> >> + *
>>> >> >> + * This function will return the first index of the number of fences allocated.
>>> >> >> + * The fence context is used for setting fence->context to a unique number.
>>> >> >> + */
>>> >> >> +unsigned fence_context_alloc(unsigned num)
>>> >> >> +{
>>> >> >> +     BUG_ON(!num);
>>> >> >> +     return atomic_add_return(num, &fence_context_counter) - num;
>>> >> >> +}
>>> >> >> +EXPORT_SYMBOL(fence_context_alloc);
>>> >> >
>>> >> > EXPORT_SYMBOL_GPL()?  Same goes for all of the exports in here.
>>> >> > Traditionally all of the driver core exports have been with this
>>> >> > marking, any objection to making that change here as well?
>>> >>
>>> >> tbh, I prefer EXPORT_SYMBOL()..  well, I'd prefer even more if there
>>> >> wasn't even a need for EXPORT_SYMBOL_GPL(), but sadly it is a fact of
>>> >> life.  We already went through this debate once with dma-buf.  We
>>> >> aren't going to change $evil_vendor's mind about non-gpl modules.  The
>>> >> only result will be a more flugly convoluted solution (ie. use syncpt
>>> >> EXPORT_SYMBOL() on top of fence EXPORT_SYMBOL_GPL()) just as a
>>> >> workaround, with the result that no-one benefits.
>>> >
>>> > It has been proven that using _GPL() exports have caused companies to
>>> > release their code "properly" over the years, so as these really are
>>> > Linux-only apis, please change them to be marked this way, it helps
>>> > everyone out in the end.
>>>
>>> Well, maybe that is the true in some cases.  But it certainly didn't
>>> work out that way for dma-buf.  And I think the end result is worse.
>>>
>>> I don't really like coming down on the side of EXPORT_SYMBOL() instead
>>> of EXPORT_SYMBOL_GPL(), but if we do use EXPORT_SYMBOL_GPL() then the
>>> result will only be creative workarounds using the _GPL symbols
>>> indirectly by whatever is available via EXPORT_SYMBOL().  I don't
>>> really see how that will be better.
>>
>> You are saying that you _know_ companies will violate our license, so
>> you should just "give up"?  And how do you know people aren't working on
>> preventing those "indirect" usages as well?  :)
>>
>> Sorry, I'm not going to give up here, again, it has proven to work in
>> the past in changing the ways of _very_ large companies, why stop now?
>
> I've found large companies shipping lots of hw putting pressure on
> other large/small companies seems to be only way this has ever
> happened, we'd like to cover that up and say its some great GPL
> enforcement thing.
>
> To be honest, author's choice is how I'd treat this.
>
> Personally I think _GPL is broken by design, and that Linus's initial
> point for them has been so diluted by random lobby groups asking for
> every symbol to be _GPL that they are becoming effectively pointless
> now. I also dislike the fact that the lobby groups don't just bring
> violators to court. I'm also sure someone like the LF could have a
> nice income stream if Linus gave them permission to enforce his
> copyrights.
>
> But anyways, has someone checked that iOS or Windows don't have a
> fence interface? so we know that this is a Linux only interface and
> any works using it are derived? Say the nvidia driver isn't a derived
> work now, will using this interface magically translate it into a
> derived work, so we can go sue them? I don't think so.

I've no idea what the APIs are in Windows, but Windows has had
multi-gpu support for a *long* time, which implies some mechanism like
dmabuf and fence.. this isn't exactly an area where we are
trailblazing here.

BR,
-R


> But its up to Maarten and Rob, and if they say no _GPL then I don't
> think we should be overriding authors intents.
>
> Dave.
Greg KH June 19, 2014, 11:42 p.m. UTC | #23
On Thu, Jun 19, 2014 at 03:39:47PM -0700, H. Peter Anvin wrote:
> On 06/19/2014 01:01 PM, Greg KH wrote:
> > On Thu, Jun 19, 2014 at 09:15:36PM +0200, Daniel Vetter wrote:
> >> On Thu, Jun 19, 2014 at 7:00 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>> +     BUG_ON(f1->context != f2->context);
> >>>>>
> >>>>> Nice, you just crashed the kernel, making it impossible to debug or
> >>>>> recover :(
> >>>>
> >>>> agreed, that should probably be 'if (WARN_ON(...)) return NULL;'
> >>>>
> >>>> (but at least I wouldn't expect to hit that under console_lock so you
> >>>> should at least see the last N lines of the backtrace on the screen
> >>>> ;-))
> >>>
> >>> Lots of devices don't have console screens :)
> >>
> >> Aside: This is a pet peeve of mine and recently I've switched to
> >> rejecting all patch that have a BUG_ON, period.
> > 
> > Please do, I have been for a few years now as well for the same reasons
> > you cite.
> > 
> 
> I'm actually concerned about this trend.  Downgrading things to WARN_ON
> can allow a security bug in the kernel to continue to exist, for
> example, or make the error message disappear.

A BUG_ON makes any error message disappear pretty quickly :)

I'm talking about foolish "ASSERT-like" BUG_ON that driver authors like
to add to their code when writing it to catch things they are messing
up.  After the code is working, they should be removed, like this one.

Don't enforce an api requirement with a kernel crash, warn and return an
error which the caller should always be checking anyway.

thanks,

greg k-h
Daniel Vetter June 20, 2014, 8:24 a.m. UTC | #24
On Fri, Jun 20, 2014 at 12:39 AM, H. Peter Anvin <hpa@zytor.com> wrote:
>>> Aside: This is a pet peeve of mine and recently I've switched to
>>> rejecting all patch that have a BUG_ON, period.
>>
>> Please do, I have been for a few years now as well for the same reasons
>> you cite.
>>
>
> I'm actually concerned about this trend.  Downgrading things to WARN_ON
> can allow a security bug in the kernel to continue to exist, for
> example, or make the error message disappear.
>
> I am wondering if the right thing here isn't to have a user (command
> line?) settable policy as to how to proceed on an assert violation,
> instead of hardcoding it at compile time.

I should clarify: If it smells like the issue is a failure of our
ioctl/syscall validation code to catch crap, BUG_ON is the right
choice. And fundamentally I've had this rule for 1-2 years now; the
only recent change I've done is switch my scripts from warning by
default if there's a new BUG_ON to rejecting by default. Mostly
because I'm lazy and let too many BUG_ONs pass through by default.

Also if you add a new interface to i915 I'll make damn sure you supply
a full set of nasty testcases to abuse the ioctl hard. In the end it's
a tradeoff and overall I don't think I'm compromising security with my
current set of rules.

Also, people don't (yet) terribly care about data integrity as soon as
their data has passed once through a gpu.
-Daniel
Daniel Vetter June 20, 2014, 8:30 a.m. UTC | #25
On Fri, Jun 20, 2014 at 1:42 AM, Greg KH <gregkh@linuxfoundation.org> wrote:
>> I'm actually concerned about this trend.  Downgrading things to WARN_ON
>> can allow a security bug in the kernel to continue to exist, for
>> example, or make the error message disappear.
>
> A BUG_ON makes any error message disappear pretty quickly :)
>
> I'm talking about foolish "ASSERT-like" BUG_ON that driver authors like
> to add to their code when writing it to catch things they are messing
> up.  After the code is working, they should be removed, like this one.

Well except for cases where it's super performance critical I like to
retain these WARN_ON asserts (not BUG_ON). "Is the logic sufficiently
locked down with WARN_ONs?" is actually one of the main review
criteria I have for i915 patches, especially on the modeset side.
They're a bit of an annoyance for distros since they result in a
constant (but ever shifting) stream of backtraces, but for me they
serve as an excellent early warning sign when our driver has yet again
lost its marbles (or at least some) way before something user-visibly
bad happens.

And for those screaming that these checks should be hidden behind a
config option and only enabled for validation: Nope, there's too many
combinations of display hardware out there and I simply need our
entire user base to serve as guinea pigs. There's really no other way
to validate this mess called drm/i915.
-Daniel

Patch

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index cc63f30de166..7eef81069d1b 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -129,6 +129,8 @@  X!Edrivers/base/interface.c
      </sect1>
      <sect1><title>Device Drivers DMA Management</title>
 !Edrivers/base/dma-buf.c
+!Edrivers/base/fence.c
+!Iinclude/linux/fence.h
 !Edrivers/base/reservation.c
 !Iinclude/linux/reservation.h
 !Edrivers/base/dma-coherent.c
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 23b8726962af..00e13ce5cbbd 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -208,6 +208,15 @@  config DMA_SHARED_BUFFER
 	  APIs extension; the file's descriptor can then be passed on to other
 	  driver.
 
+config FENCE_TRACE
+	bool "Enable verbose FENCE_TRACE messages"
+	depends on DMA_SHARED_BUFFER
+	help
+	  Enable the FENCE_TRACE printks. This will add extra
+	  spam to the console log, but will make it easier to diagnose
+	  lockup related problems for dma-buffers shared across multiple
+	  devices.
+
 config DMA_CMA
 	bool "DMA Contiguous Memory Allocator"
 	depends on HAVE_DMA_CONTIGUOUS && CMA
diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 04b314e0fa51..eb4864aee073 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@  obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o fence.o reservation.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/fence.c b/drivers/base/fence.c
new file mode 100644
index 000000000000..1da7f4d6542a
--- /dev/null
+++ b/drivers/base/fence.c
@@ -0,0 +1,416 @@ 
+/*
+ * Fence mechanism for dma-buf and to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/slab.h>
+#include <linux/export.h>
+#include <linux/atomic.h>
+#include <linux/fence.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fence.h>
+
+EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
+EXPORT_TRACEPOINT_SYMBOL(fence_emit);
+
+/**
+ * fence context counter: each execution context should have its own
+ * fence context, this allows checking if fences belong to the same
+ * context or not. One device can have multiple separate contexts,
+ * and they're used if some engine can run independently of another.
+ */
+static atomic_t fence_context_counter = ATOMIC_INIT(0);
+
+/**
+ * fence_context_alloc - allocate an array of fence contexts
+ * @num:	[in]	amount of contexts to allocate
+ *
+ * This function will return the first index of the number of fences allocated.
+ * The fence context is used for setting fence->context to a unique number.
+ */
+unsigned fence_context_alloc(unsigned num)
+{
+	BUG_ON(!num);
+	return atomic_add_return(num, &fence_context_counter) - num;
+}
+EXPORT_SYMBOL(fence_context_alloc);
+
+int __fence_signal(struct fence *fence)
+{
+	struct fence_cb *cur, *tmp;
+	int ret = 0;
+
+	if (WARN_ON(!fence))
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_atomic();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		ret = -EINVAL;
+
+		/*
+		 * we might have raced with the unlocked fence_signal,
+		 * still run through all callbacks
+		 */
+	} else
+		trace_fence_signaled(fence);
+
+	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+		list_del_init(&cur->node);
+		cur->func(fence, cur);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__fence_signal);
+
+/**
+ * fence_signal - signal completion of a fence
+ * @fence: the fence to signal
+ *
+ * Signal completion for software callbacks on a fence, this will unblock
+ * fence_wait() calls and run all the callbacks added with
+ * fence_add_callback(). Can be called multiple times, but since a fence
+ * can only go from unsignaled to signaled state, it will only be effective
+ * the first time.
+ */
+int fence_signal(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!fence)
+		return -EINVAL;
+
+	if (!ktime_to_ns(fence->timestamp)) {
+		fence->timestamp = ktime_get();
+		smp_mb__before_atomic();
+	}
+
+	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return -EINVAL;
+
+	trace_fence_signaled(fence);
+
+	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
+		struct fence_cb *cur, *tmp;
+
+		spin_lock_irqsave(fence->lock, flags);
+		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+			list_del_init(&cur->node);
+			cur->func(fence, cur);
+		}
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(fence_signal);
+
+/**
+ * fence_wait_timeout - sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+long
+fence_wait_timeout(struct fence *fence, bool intr, signed long timeout)
+{
+	unsigned long ret;
+
+	if (WARN_ON(timeout < 0))
+		return -EINVAL;
+
+	trace_fence_wait_start(fence);
+	ret = fence->ops->wait(fence, intr, timeout);
+	trace_fence_wait_end(fence);
+	return ret;
+}
+EXPORT_SYMBOL(fence_wait_timeout);
+
+void release_fence(struct kref *kref)
+{
+	struct fence *fence =
+			container_of(kref, struct fence, refcount);
+
+	trace_fence_destroy(fence);
+
+	BUG_ON(!list_empty(&fence->cb_list));
+
+	if (fence->ops->release)
+		fence->ops->release(fence);
+	else
+		kfree(fence);
+}
+EXPORT_SYMBOL(release_fence);
+
+/**
+ * fence_enable_sw_signaling - enable signaling on fence
+ * @fence:	[in]	the fence to enable
+ *
+ * this will request for sw signaling to be enabled, to make the fence
+ * complete as soon as possible
+ */
+void fence_enable_sw_signaling(struct fence *fence)
+{
+	unsigned long flags;
+
+	if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) &&
+	    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		trace_fence_enable_signal(fence);
+
+		spin_lock_irqsave(fence->lock, flags);
+
+		if (!fence->ops->enable_signaling(fence))
+			__fence_signal(fence);
+
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
+}
+EXPORT_SYMBOL(fence_enable_sw_signaling);
+
+/**
+ * fence_add_callback - add a callback to be called when the fence
+ * is signaled
+ * @fence:	[in]	the fence to wait on
+ * @cb:		[in]	the callback to register
+ * @func:	[in]	the function to call
+ *
+ * cb will be initialized by fence_add_callback, no initialization
+ * by the caller is required. Any number of callbacks can be registered
+ * to a fence, but a callback can only be registered to one fence at a time.
+ *
+ * Note that the callback can be called from an atomic context.  If
+ * fence is already signaled, this function will return -ENOENT (and
+ * *not* call the callback)
+ *
+ * Add a software callback to the fence. Same restrictions apply to
+ * refcount as it does to fence_wait, however the caller doesn't need to
+ * keep a refcount to fence afterwards: when software access is enabled,
+ * the creator of the fence is required to keep the fence alive until
+ * after it signals with fence_signal. The callback itself can be called
+ * from irq context.
+ *
+ */
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func)
+{
+	unsigned long flags;
+	int ret = 0;
+	bool was_set;
+
+	if (WARN_ON(!fence || !func))
+		return -EINVAL;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+		INIT_LIST_HEAD(&cb->node);
+		return -ENOENT;
+	}
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		ret = -ENOENT;
+	else if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			ret = -ENOENT;
+		}
+	}
+
+	if (!ret) {
+		cb->func = func;
+		list_add_tail(&cb->node, &fence->cb_list);
+	} else
+		INIT_LIST_HEAD(&cb->node);
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_add_callback);
+
+/**
+ * fence_remove_callback - remove a callback from the signaling list
+ * @fence:	[in]	the fence to wait on
+ * @cb:		[in]	the callback to remove
+ *
+ * Remove a previously queued callback from the fence. This function returns
+ * true if the callback is successfully removed, or false if the fence has
+ * already been signaled.
+ *
+ * *WARNING*:
+ * Cancelling a callback should only be done if you really know what you're
+ * doing, since deadlocks and race conditions could occur all too easily. For
+ * this reason, it should only ever be done on hardware lockup recovery,
+ * with a reference held to the fence.
+ */
+bool
+fence_remove_callback(struct fence *fence, struct fence_cb *cb)
+{
+	unsigned long flags;
+	bool ret;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	ret = !list_empty(&cb->node);
+	if (ret)
+		list_del_init(&cb->node);
+
+	spin_unlock_irqrestore(fence->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL(fence_remove_callback);
+
+struct default_wait_cb {
+	struct fence_cb base;
+	struct task_struct *task;
+};
+
+static void
+fence_default_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct default_wait_cb *wait =
+		container_of(cb, struct default_wait_cb, base);
+
+	wake_up_state(wait->task, TASK_NORMAL);
+}
+
+/**
+ * fence_default_wait - default sleep until the fence gets signaled
+ * or until timeout elapses
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
+ *
+ * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
+ * remaining timeout in jiffies on success.
+ */
+long
+fence_default_wait(struct fence *fence, bool intr, signed long timeout)
+{
+	struct default_wait_cb cb;
+	unsigned long flags;
+	long ret = timeout;
+	bool was_set;
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return timeout;
+
+	spin_lock_irqsave(fence->lock, flags);
+
+	if (intr && signal_pending(current)) {
+		ret = -ERESTARTSYS;
+		goto out;
+	}
+
+	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		goto out;
+
+	if (!was_set) {
+		trace_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			__fence_signal(fence);
+			goto out;
+		}
+	}
+
+	cb.base.func = fence_default_wait_cb;
+	cb.task = current;
+	list_add(&cb.base.node, &fence->cb_list);
+
+	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
+		if (intr)
+			__set_current_state(TASK_INTERRUPTIBLE);
+		else
+			__set_current_state(TASK_UNINTERRUPTIBLE);
+		spin_unlock_irqrestore(fence->lock, flags);
+
+		ret = schedule_timeout(ret);
+
+		spin_lock_irqsave(fence->lock, flags);
+		if (ret > 0 && intr && signal_pending(current))
+			ret = -ERESTARTSYS;
+	}
+
+	if (!list_empty(&cb.base.node))
+		list_del(&cb.base.node);
+	__set_current_state(TASK_RUNNING);
+
+out:
+	spin_unlock_irqrestore(fence->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL(fence_default_wait);
+
+/**
+ * __fence_init - Initialize a custom fence.
+ * @fence:	[in]	the fence to initialize
+ * @ops:	[in]	the fence_ops for operations on this fence
+ * @lock:	[in]	the irqsafe spinlock to use for locking this fence
+ * @context:	[in]	the execution context this fence is run on
+ * @seqno:	[in]	a linear increasing sequence number for this context
+ *
+ * Initializes an allocated fence. The caller doesn't have to keep a
+ * refcount after committing with this fence, but it will need to hold a
+ * refcount again if fence_ops.enable_signaling gets called. This can
+ * be used for implementing other types of fences.
+ *
+ * context and seqno are used for easy comparison between fences, making
+ * it possible to check which fence is later by simply using fence_later.
+ */
+void
+__fence_init(struct fence *fence, const struct fence_ops *ops,
+	     spinlock_t *lock, unsigned context, unsigned seqno)
+{
+	BUG_ON(!lock);
+	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
+	       !ops->get_driver_name || !ops->get_timeline_name);
+
+	kref_init(&fence->refcount);
+	fence->ops = ops;
+	INIT_LIST_HEAD(&fence->cb_list);
+	fence->lock = lock;
+	fence->context = context;
+	fence->seqno = seqno;
+	fence->flags = 0UL;
+
+	trace_fence_init(fence);
+}
+EXPORT_SYMBOL(__fence_init);
diff --git a/include/linux/fence.h b/include/linux/fence.h
new file mode 100644
index 000000000000..65f2a01ee7e4
--- /dev/null
+++ b/include/linux/fence.h
@@ -0,0 +1,333 @@ 
+/*
+ * Fence mechanism for dma-buf to allow for asynchronous dma access
+ *
+ * Copyright (C) 2012 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <robdclark@gmail.com>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __LINUX_FENCE_H
+#define __LINUX_FENCE_H
+
+#include <linux/err.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/bitops.h>
+#include <linux/kref.h>
+#include <linux/sched.h>
+#include <linux/printk.h>
+
+struct fence;
+struct fence_ops;
+struct fence_cb;
+
+/**
+ * struct fence - software synchronization primitive
+ * @refcount: refcount for this fence
+ * @ops: fence_ops associated with this fence
+ * @cb_list: list of all callbacks to call
+ * @lock: spin_lock_irqsave used for locking
+ * @context: execution context this fence belongs to, returned by
+ *           fence_context_alloc()
+ * @seqno: the sequence number of this fence inside the execution context,
+ * can be compared to decide which fence would be signaled later.
+ * @flags: A mask of FENCE_FLAG_* defined below
+ * @timestamp: Timestamp when the fence was signaled.
+ * @status: Optional, only valid if < 0, must be set before calling
+ * fence_signal, indicates that the fence has completed with an error.
+ *
+ * The flags member must be manipulated and read using the appropriate
+ * atomic ops (bit_*), so taking the spinlock will not be needed most
+ * of the time.
+ *
+ * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
+ * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
+ * implementer of the fence for its own purposes. Can be used in different
+ * ways by different fence implementers, so do not rely on this.
+ *
+ * *) Since atomic bitops are used, this is not guaranteed to be the case.
+ * In particular, if fence_signal was called right before this bit was
+ * set, it may have set FENCE_FLAG_SIGNALED_BIT without enable_signaling
+ * ever being called.
+ * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
+ * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
+ * after fence_signal was called, any enable_signaling call will have either
+ * been completed, or never called at all.
+ */
+struct fence {
+	struct kref refcount;
+	const struct fence_ops *ops;
+	struct list_head cb_list;
+	spinlock_t *lock;
+	unsigned context, seqno;
+	unsigned long flags;
+	ktime_t timestamp;
+	int status;
+};
+
+enum fence_flag_bits {
+	FENCE_FLAG_SIGNALED_BIT,
+	FENCE_FLAG_ENABLE_SIGNAL_BIT,
+	FENCE_FLAG_USER_BITS, /* must always be last member */
+};
+
+typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);
+
+/**
+ * struct fence_cb - callback for fence_add_callback
+ * @node: used by fence_add_callback to append this struct to fence::cb_list
+ * @func: fence_func_t to call
+ *
+ * This struct will be initialized by fence_add_callback; additional
+ * data can be passed along by embedding fence_cb in another struct.
+ */
+struct fence_cb {
+	struct list_head node;
+	fence_func_t func;
+};
+
+/**
+ * struct fence_ops - operations implemented for fence
+ * @get_driver_name: returns the driver name.
+ * @get_timeline_name: return the name of the context this fence belongs to.
+ * @enable_signaling: enable software signaling of fence.
+ * @signaled: [optional] peek whether the fence is signaled, can be null.
+ * @wait: custom wait implementation, or fence_default_wait.
+ * @release: [optional] called on destruction of fence, can be null
+ * @fill_driver_data: [optional] callback to fill in free-form debug info
+ * Returns the number of bytes filled, or -errno.
+ * @fence_value_str: [optional] fills in the value of the fence as a string
+ * @timeline_value_str: [optional] fills in the current value of the timeline
+ * as a string
+ *
+ * Notes on enable_signaling:
+ * For fence implementations that have the capability for hw->hw
+ * signaling, they can implement this op to enable the necessary
+ * irqs, or insert commands into cmdstream, etc.  This is called
+ * in the first wait() or add_callback() path to let the fence
+ * implementation know that there is another driver waiting on
+ * the signal (ie. hw->sw case).
+ *
+ * This function can be called from atomic context, but not
+ * from irq context, so normal spinlocks can be used.
+ *
+ * A return value of false indicates the fence already passed,
+ * or some failure occurred that made it impossible to enable
+ * signaling. True indicates successful enabling.
+ *
+ * fence->status may be set in enable_signaling, but only when false is
+ * returned.
+ *
+ * Because fence_signal may run before enable_signaling has returned,
+ * there is a tiny race window in which enable_signaling is called
+ * during, before, or after fence_signal. To close it, it is recommended
+ * that enable_signaling take an extra reference on the fence before
+ * returning true, to be released when the fence is signaled. This means
+ * fence_signal may still be called twice, but the second call will be a
+ * noop since the fence was already signaled.
+ *
+ * Notes on signaled:
+ * May set fence->status if returning true.
+ *
+ * Notes on wait:
+ * Must not be NULL; set to fence_default_wait for the default
+ * implementation. The fence_default_wait implementation should work for
+ * any fence, as long as enable_signaling works correctly.
+ *
+ * Must return -ERESTARTSYS if intr is true and the wait was
+ * interrupted, the remaining jiffies if the fence has signaled, or 0 if
+ * the wait timed out. Custom implementations can also return other
+ * error values,
+ * which should be treated as if the fence is signaled. For example a hardware
+ * lockup could be reported like that.
+ *
+ * Notes on release:
+ * Can be NULL, this function allows additional commands to run on
+ * destruction of the fence. Can be called from irq context.
+ * If pointer is set to NULL, kfree will get called instead.
+ */
+
+struct fence_ops {
+	const char * (*get_driver_name)(struct fence *fence);
+	const char * (*get_timeline_name)(struct fence *fence);
+	bool (*enable_signaling)(struct fence *fence);
+	bool (*signaled)(struct fence *fence);
+	long (*wait)(struct fence *fence, bool intr, signed long timeout);
+	void (*release)(struct fence *fence);
+
+	int (*fill_driver_data)(struct fence *fence, void *data, int size);
+	void (*fence_value_str)(struct fence *fence, char *str, int size);
+	void (*timeline_value_str)(struct fence *fence, char *str, int size);
+};
+
+void __fence_init(struct fence *fence, const struct fence_ops *ops,
+		  spinlock_t *lock, unsigned context, unsigned seqno);
+
+/**
+ * fence_get - increases refcount of the fence
+ * @fence:	[in]	fence to increase refcount of
+ */
+static inline void fence_get(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_get(&fence->refcount);
+}
+
+extern void release_fence(struct kref *kref);
+
+/**
+ * fence_put - decreases refcount of the fence
+ * @fence:	[in]	fence to reduce refcount of
+ */
+static inline void fence_put(struct fence *fence)
+{
+	if (WARN_ON(!fence))
+		return;
+	kref_put(&fence->refcount, release_fence);
+}
+
+int fence_signal(struct fence *fence);
+int __fence_signal(struct fence *fence);
+long fence_default_wait(struct fence *fence, bool intr, signed long timeout);
+int fence_add_callback(struct fence *fence, struct fence_cb *cb,
+		       fence_func_t func);
+bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
+void fence_enable_sw_signaling(struct fence *fence);
+
+static inline bool
+__fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		__fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_is_signaled - Return an indication if the fence is signaled yet.
+ * @fence:	[in]	the fence to check
+ *
+ * Returns true if the fence was already signaled, false if not. Since this
+ * function doesn't enable signaling, it is not guaranteed to ever return
+ * true if fence_add_callback, fence_wait or fence_enable_sw_signaling
+ * haven't been called before.
+ *
+ * It's recommended for seqno fences to call fence_signal when the
+ * operation is complete: that makes it possible to prevent issues from
+ * wraparound between time of issue and time of use, by checking the
+ * return value of this function before calling hardware-specific wait
+ * instructions.
+ */
+static inline bool
+fence_is_signaled(struct fence *fence)
+{
+	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return true;
+
+	if (fence->ops->signaled && fence->ops->signaled(fence)) {
+		fence_signal(fence);
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * fence_later - return the chronologically later fence
+ * @f1:	[in]	the first fence from the same context
+ * @f2:	[in]	the second fence from the same context
+ *
+ * Returns NULL if both fences are signaled, otherwise the fence that would be
+ * signaled last. Both fences must be from the same context, since a seqno is
+ * not re-used across contexts.
+ */
+static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
+{
+	BUG_ON(f1->context != f2->context);
+
+	/*
+	 * Can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have
+	 * been set if enable_signaling wasn't called, and enabling that
+	 * here is overkill.
+	 */
+	if (f2->seqno - f1->seqno <= INT_MAX)
+		return fence_is_signaled(f2) ? NULL : f2;
+	else
+		return fence_is_signaled(f1) ? NULL : f1;
+}
+
+long fence_wait_timeout(struct fence *fence, bool intr, signed long timeout);
+
+/**
+ * fence_wait - sleep until the fence gets signaled
+ * @fence:	[in]	the fence to wait on
+ * @intr:	[in]	if true, do an interruptible wait
+ *
+ * This function will return -ERESTARTSYS if interrupted by a signal,
+ * or 0 if the fence was signaled. Other error values may be
+ * returned on custom implementations.
+ *
+ * Performs a synchronous wait on this fence. It is assumed the caller
+ * directly or indirectly (buf-mgr between reservation and committing)
+ * holds a reference to the fence, otherwise the fence might be
+ * freed before return, resulting in undefined behavior.
+ */
+static inline long fence_wait(struct fence *fence, bool intr)
+{
+	long ret;
+
+	/*
+	 * Since fence_wait_timeout cannot time out with
+	 * MAX_SCHEDULE_TIMEOUT, the only valid return values are
+	 * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT.
+	 */
+	ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT);
+
+	return ret < 0 ? ret : 0;
+}
+
+unsigned fence_context_alloc(unsigned num);
+
+#define FENCE_TRACE(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		if (config_enabled(CONFIG_FENCE_TRACE))			\
+			pr_info("f %u#%u: " fmt,			\
+				__ff->context, __ff->seqno, ##args);	\
+	} while (0)
+
+#define FENCE_WARN(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			 ##args);					\
+	} while (0)
+
+#define FENCE_ERR(f, fmt, args...) \
+	do {								\
+		struct fence *__ff = (f);				\
+		pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
+			##args);					\
+	} while (0)
+
+#endif /* __LINUX_FENCE_H */
diff --git a/include/trace/events/fence.h b/include/trace/events/fence.h
new file mode 100644
index 000000000000..98feb1b82896
--- /dev/null
+++ b/include/trace/events/fence.h
@@ -0,0 +1,128 @@ 
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM fence
+
+#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_FENCE_H
+
+#include <linux/tracepoint.h>
+
+struct fence;
+
+TRACE_EVENT(fence_annotate_wait_on,
+
+	/* fence: the fence waiting on f1, f1: the fence to be waited on. */
+	TP_PROTO(struct fence *fence, struct fence *f1),
+
+	TP_ARGS(fence, f1),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+
+		__string(waiting_driver, f1->ops->get_driver_name(f1))
+		__string(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__field(unsigned int, waiting_context)
+		__field(unsigned int, waiting_seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+
+		__assign_str(waiting_driver, f1->ops->get_driver_name(f1))
+		__assign_str(waiting_timeline, f1->ops->get_timeline_name(f1))
+		__entry->waiting_context = f1->context;
+		__entry->waiting_seqno = f1->seqno;
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u "	\
+		  "waits on driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno,
+		  __get_str(waiting_driver), __get_str(waiting_timeline),
+		  __entry->waiting_context, __entry->waiting_seqno)
+);
+
+DECLARE_EVENT_CLASS(fence,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence),
+
+	TP_STRUCT__entry(
+		__string(driver, fence->ops->get_driver_name(fence))
+		__string(timeline, fence->ops->get_timeline_name(fence))
+		__field(unsigned int, context)
+		__field(unsigned int, seqno)
+	),
+
+	TP_fast_assign(
+		__assign_str(driver, fence->ops->get_driver_name(fence))
+		__assign_str(timeline, fence->ops->get_timeline_name(fence))
+		__entry->context = fence->context;
+		__entry->seqno = fence->seqno;
+	),
+
+	TP_printk("driver=%s timeline=%s context=%u seqno=%u",
+		  __get_str(driver), __get_str(timeline), __entry->context,
+		  __entry->seqno)
+);
+
+DEFINE_EVENT(fence, fence_emit,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_init,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_destroy,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_enable_signal,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_signaled,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_start,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+DEFINE_EVENT(fence, fence_wait_end,
+
+	TP_PROTO(struct fence *fence),
+
+	TP_ARGS(fence)
+);
+
+#endif /*  _TRACE_FENCE_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>