[PATCHv3,2/2] spinlock: fair read-write locks

Message ID 1454326273-23588-3-git-send-email-david.vrabel@citrix.com (mailing list archive)
State New, archived

Commit Message

David Vrabel Feb. 1, 2016, 11:31 a.m. UTC
From: Jennifer Herbert <jennifer.herbert@citrix.com>

The current rwlocks are write-biased and unfair.  This allows writers
to starve readers in situations where there are many writers (e.g.,
p2m type changes from log dirty updates during domain save).

Replace the current implementation with queued read-write locks, which
use a fair spinlock (a ticket lock in this case) to ensure fairness
between readers and writers under contention.

This implementation is from the Linux commit 70af2f8a4f48 by Waiman
Long and Peter Zijlstra.

    locking/rwlocks: Introduce 'qrwlocks' - fair, queued rwlocks

    This rwlock uses the arch_spin_lock_t as a waitqueue, and assuming
    the arch_spin_lock_t is a fair lock (ticket, MCS, etc.) the
    resulting rwlock is a fair lock.

    It fits in the same 8 bytes as the regular rwlock_t by folding the
    reader and writer count into a single integer, using the remaining
    4 bytes for the arch_spinlock_t.

    Architectures that can store bytes with single-copy atomicity can
    optimize queue_write_unlock() with a 0 write to the LSB (the write
    count).
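
For orientation, the 32-bit lock word this patch introduces packs both
counts into a single integer (matching the _QW_*/_QR_* constants
defined in xen/include/xen/rwlock.h below):

     31               8 7             0
    +------------------+--------------+
    |   reader count   |  writer byte |
    +------------------+--------------+

    writer byte: 0x00 = unlocked,
                 0x01 = a writer is waiting (_QW_WAITING),
                 0xff = a writer holds the lock (_QW_LOCKED)

Readers enter and leave by adding/subtracting _QR_BIAS (1 << 8), so
reader traffic never disturbs the writer byte.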

We do not yet make use of the architecture-specific optimization noted
above.
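
For illustration only, a sketch of what that optimisation would look
like; this patch does not implement it, and the helper name is made up.
On a little-endian architecture with single-copy atomic byte stores,
releasing the write lock only needs to clear the writer byte:

/* Hypothetical helper, NOT part of this patch. */
static inline void _write_unlock_byte_store(rwlock_t *lock)
{
    smp_wmb();                             /* order the critical section */
    *(volatile uint8_t *)&lock->cnts = 0;  /* clear _QW_LOCKED (the LSB) */
}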

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
v2:
- Remove in_irq() special case from read slow path.
- Comment on why 8-bit write mask.
- Style.
- Use 0 in various cmpxchg() calls where appropriate.
---
 xen/common/rwlock.c        | 102 +++++++++++++++++++++++
 xen/common/spinlock.c      | 204 ---------------------------------------------
 xen/include/xen/rwlock.h   | 182 ++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/spinlock.h |  32 -------
 4 files changed, 284 insertions(+), 236 deletions(-)

Comments

Jan Beulich Feb. 3, 2016, 11:28 a.m. UTC | #1
>>> On 01.02.16 at 12:31, <david.vrabel@citrix.com> wrote:
> +void queue_write_lock_slowpath(rwlock_t *lock)
> +{
> +    u32 cnts;
> +
> +    /* Put the writer into the wait queue. */
> +    spin_lock(&lock->lock);
> +
> +    /* Try to acquire the lock directly if no reader is present. */
> +    if ( !atomic_read(&lock->cnts) &&
> +         (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0) )
> +        goto unlock;
> +
> +    /*
> +     * Set the waiting flag to notify readers that a writer is pending,
> +     * or wait for a previous writer to go away.
> +     */
> +    for (;;)

Since everything else here has been nicely converted to Xen style,
strictly speaking these should be

    for ( ; ; )

but of course this is no reason to block the patch. Since however,
as said in reply to patch 1, ...

> --- a/xen/include/xen/rwlock.h
> +++ b/xen/include/xen/rwlock.h
> @@ -3,6 +3,188 @@
>  
>  #include <xen/spinlock.h>

... this should go away if possible; it would be nice for the cosmetic
thing above to also be fixed up at once.

With at least the latter taken care of, and assuming this won't incur
other changes (apart from adding necessary includes here)
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
David Vrabel Feb. 3, 2016, 11:57 a.m. UTC | #2
On 03/02/16 11:28, Jan Beulich wrote:
>>>> On 01.02.16 at 12:31, <david.vrabel@citrix.com> wrote:
>> +void queue_write_lock_slowpath(rwlock_t *lock)
>> +{
>> +    u32 cnts;
>> +
>> +    /* Put the writer into the wait queue. */
>> +    spin_lock(&lock->lock);
>> +
>> +    /* Try to acquire the lock directly if no reader is present. */
>> +    if ( !atomic_read(&lock->cnts) &&
>> +         (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0) )
>> +        goto unlock;
>> +
>> +    /*
>> +     * Set the waiting flag to notify readers that a writer is pending,
>> +     * or wait for a previous writer to go away.
>> +     */
>> +    for (;;)
> 
> Since everything else here has been nicely converted to Xen style,
> strictly speaking these should be
> 
>     for ( ; ; )
> 
> but of course this is no reason to block the patch. Since however,
> as said in reply to patch 1, ...

TBH, I really think you're pointlessly nit-picking here.  This change
would have zero impact on readability.

>> --- a/xen/include/xen/rwlock.h
>> +++ b/xen/include/xen/rwlock.h
>> @@ -3,6 +3,188 @@
>>  
>>  #include <xen/spinlock.h>
> 
> ... this should go away if possible; it would be nice for the cosmetic
> thing above to also be fixed up at once.

The rwlock structure now includes a spinlock, so this #include is
required here.

David
Jan Beulich Feb. 3, 2016, 12:14 p.m. UTC | #3
>>> On 03.02.16 at 12:57, <david.vrabel@citrix.com> wrote:
> On 03/02/16 11:28, Jan Beulich wrote:
>>>>> On 01.02.16 at 12:31, <david.vrabel@citrix.com> wrote:
>>> +void queue_write_lock_slowpath(rwlock_t *lock)
>>> +{
>>> +    u32 cnts;
>>> +
>>> +    /* Put the writer into the wait queue. */
>>> +    spin_lock(&lock->lock);
>>> +
>>> +    /* Try to acquire the lock directly if no reader is present. */
>>> +    if ( !atomic_read(&lock->cnts) &&
>>> +         (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0) )
>>> +        goto unlock;
>>> +
>>> +    /*
>>> +     * Set the waiting flag to notify readers that a writer is pending,
>>> +     * or wait for a previous writer to go away.
>>> +     */
>>> +    for (;;)
>> 
>> Since everything else here has been nicely converted to Xen style,
>> strictly speaking these should be
>> 
>>     for ( ; ; )
>> 
>> but of course this is no reason to block the patch. Since however,
>> as said in reply to patch 1, ...
> 
> TBH, I really think you're pointlessly nit-picking here.  This change
> would have zero impact on readability.

Well, the style I've pointed out is the one used in well-formed code
(and in fact I have to remind myself of this each time I omit one or
more of the three expressions). But as said, I'm not demanding a
re-submission just because of this (and if I remember to do so, I
may well fix it up while committing).

>>> --- a/xen/include/xen/rwlock.h
>>> +++ b/xen/include/xen/rwlock.h
>>> @@ -3,6 +3,188 @@
>>>  
>>>  #include <xen/spinlock.h>
>> 
>> ... this should go away if possible; it would be nice for the cosmetic
>> thing above to also be fixed up at once.
> 
> The rwlock structure now includes a spinlock, so this #include is
> required here.

Ah, okay, patch 2 indeed makes this necessary.

Both patches
Acked-by: Jan Beulich <jbeulich@suse.com>
then.

Jan
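
As context for the patch below, a minimal usage sketch of the new API.
DEFINE_RWLOCK() and the read_lock()/read_unlock()/write_lock()/
write_unlock() wrappers are provided by xen/include/xen/rwlock.h; the
example names here are hypothetical:

static DEFINE_RWLOCK(example_lock);
static unsigned int example_state;

/* Readers may run concurrently with one another. */
unsigned int example_read(void)
{
    unsigned int v;

    read_lock(&example_lock);
    v = example_state;
    read_unlock(&example_lock);

    return v;
}

/*
 * A writer gets exclusive access; the ticket lock underneath the
 * qrwlock ensures that a steady stream of readers cannot starve a
 * writer, and vice versa.
 */
void example_write(unsigned int v)
{
    write_lock(&example_lock);
    example_state = v;
    write_unlock(&example_lock);
}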

Patch

diff --git a/xen/common/rwlock.c b/xen/common/rwlock.c
index 410d4dc..1fd8ea0 100644
--- a/xen/common/rwlock.c
+++ b/xen/common/rwlock.c
@@ -1,6 +1,108 @@ 
 #include <xen/rwlock.h>
 #include <xen/irq.h>
 
+/*
+ * rspin_until_writer_unlock - spin until writer is gone.
+ * @lock  : Pointer to queue rwlock structure.
+ * @cnts: Current queue rwlock writer status byte.
+ *
+ * In interrupt context or at the head of the queue, the reader will just
+ * increment the reader count & wait until the writer releases the lock.
+ */
+static inline void rspin_until_writer_unlock(rwlock_t *lock, u32 cnts)
+{
+    while ( (cnts & _QW_WMASK) == _QW_LOCKED )
+    {
+        cpu_relax();
+        smp_rmb();
+        cnts = atomic_read(&lock->cnts);
+    }
+}
+
+/*
+ * queue_read_lock_slowpath - acquire read lock of a queue rwlock.
+ * @lock: Pointer to queue rwlock structure.
+ */
+void queue_read_lock_slowpath(rwlock_t *lock)
+{
+    u32 cnts;
+
+    /*
+     * Readers come here when they cannot get the lock without waiting.
+     */
+    atomic_sub(_QR_BIAS, &lock->cnts);
+
+    /*
+     * Put the reader into the wait queue.
+     */
+    spin_lock(&lock->lock);
+
+    /*
+     * At the head of the wait queue now, wait until the writer state
+     * goes to 0 and then try to increment the reader count and get
+     * the lock. It is possible that an incoming writer may steal the
+     * lock in the interim, so it is necessary to check the writer byte
+     * to make sure that the write lock isn't taken.
+     */
+    while ( atomic_read(&lock->cnts) & _QW_WMASK )
+        cpu_relax();
+
+    cnts = atomic_add_return(_QR_BIAS, &lock->cnts) - _QR_BIAS;
+    rspin_until_writer_unlock(lock, cnts);
+
+    /*
+     * Signal the next one in queue to become queue head.
+     */
+    spin_unlock(&lock->lock);
+}
+
+/*
+ * queue_write_lock_slowpath - acquire write lock of a queue rwlock
+ * @lock : Pointer to queue rwlock structure.
+ */
+void queue_write_lock_slowpath(rwlock_t *lock)
+{
+    u32 cnts;
+
+    /* Put the writer into the wait queue. */
+    spin_lock(&lock->lock);
+
+    /* Try to acquire the lock directly if no reader is present. */
+    if ( !atomic_read(&lock->cnts) &&
+         (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0) )
+        goto unlock;
+
+    /*
+     * Set the waiting flag to notify readers that a writer is pending,
+     * or wait for a previous writer to go away.
+     */
+    for (;;)
+    {
+        cnts = atomic_read(&lock->cnts);
+        if ( !(cnts & _QW_WMASK) &&
+             (atomic_cmpxchg(&lock->cnts, cnts,
+                             cnts | _QW_WAITING) == cnts) )
+            break;
+
+        cpu_relax();
+    }
+
+    /* When no more readers, set the locked flag. */
+    for (;;)
+    {
+        cnts = atomic_read(&lock->cnts);
+        if ( (cnts == _QW_WAITING) &&
+             (atomic_cmpxchg(&lock->cnts, _QW_WAITING,
+                             _QW_LOCKED) == _QW_WAITING) )
+            break;
+
+        cpu_relax();
+    }
+ unlock:
+    spin_unlock(&lock->lock);
+}
+
+
 static DEFINE_PER_CPU(cpumask_t, percpu_rwlock_readers);
 
 void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 7b0cf6c..a43fa84 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -288,210 +288,6 @@  void _spin_unlock_recursive(spinlock_t *lock)
     }
 }
 
-void _read_lock(rwlock_t *lock)
-{
-    uint32_t x;
-
-    check_lock(&lock->debug);
-    do {
-        while ( (x = lock->lock) & RW_WRITE_FLAG )
-            cpu_relax();
-    } while ( cmpxchg(&lock->lock, x, x+1) != x );
-    preempt_disable();
-}
-
-void _read_lock_irq(rwlock_t *lock)
-{
-    uint32_t x;
-
-    ASSERT(local_irq_is_enabled());
-    local_irq_disable();
-    check_lock(&lock->debug);
-    do {
-        if ( (x = lock->lock) & RW_WRITE_FLAG )
-        {
-            local_irq_enable();
-            while ( (x = lock->lock) & RW_WRITE_FLAG )
-                cpu_relax();
-            local_irq_disable();
-        }
-    } while ( cmpxchg(&lock->lock, x, x+1) != x );
-    preempt_disable();
-}
-
-unsigned long _read_lock_irqsave(rwlock_t *lock)
-{
-    uint32_t x;
-    unsigned long flags;
-
-    local_irq_save(flags);
-    check_lock(&lock->debug);
-    do {
-        if ( (x = lock->lock) & RW_WRITE_FLAG )
-        {
-            local_irq_restore(flags);
-            while ( (x = lock->lock) & RW_WRITE_FLAG )
-                cpu_relax();
-            local_irq_disable();
-        }
-    } while ( cmpxchg(&lock->lock, x, x+1) != x );
-    preempt_disable();
-    return flags;
-}
-
-int _read_trylock(rwlock_t *lock)
-{
-    uint32_t x;
-
-    check_lock(&lock->debug);
-    do {
-        if ( (x = lock->lock) & RW_WRITE_FLAG )
-            return 0;
-    } while ( cmpxchg(&lock->lock, x, x+1) != x );
-    preempt_disable();
-    return 1;
-}
-
-#ifndef _raw_read_unlock
-# define _raw_read_unlock(l) do {                      \
-    uint32_t x = (l)->lock, y;                         \
-    while ( (y = cmpxchg(&(l)->lock, x, x - 1)) != x ) \
-        x = y;                                         \
-} while (0)
-#endif
-
-inline void _read_unlock(rwlock_t *lock)
-{
-    preempt_enable();
-    _raw_read_unlock(lock);
-}
-
-void _read_unlock_irq(rwlock_t *lock)
-{
-    _read_unlock(lock);
-    local_irq_enable();
-}
-
-void _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
-{
-    _read_unlock(lock);
-    local_irq_restore(flags);
-}
-
-void _write_lock(rwlock_t *lock)
-{
-    uint32_t x;
-
-    check_lock(&lock->debug);
-    do {
-        while ( (x = lock->lock) & RW_WRITE_FLAG )
-            cpu_relax();
-    } while ( cmpxchg(&lock->lock, x, x|RW_WRITE_FLAG) != x );
-    while ( x != 0 )
-    {
-        cpu_relax();
-        x = lock->lock & ~RW_WRITE_FLAG;
-    }
-    preempt_disable();
-}
-
-void _write_lock_irq(rwlock_t *lock)
-{
-    uint32_t x;
-
-    ASSERT(local_irq_is_enabled());
-    local_irq_disable();
-    check_lock(&lock->debug);
-    do {
-        if ( (x = lock->lock) & RW_WRITE_FLAG )
-        {
-            local_irq_enable();
-            while ( (x = lock->lock) & RW_WRITE_FLAG )
-                cpu_relax();
-            local_irq_disable();
-        }
-    } while ( cmpxchg(&lock->lock, x, x|RW_WRITE_FLAG) != x );
-    while ( x != 0 )
-    {
-        cpu_relax();
-        x = lock->lock & ~RW_WRITE_FLAG;
-    }
-    preempt_disable();
-}
-
-unsigned long _write_lock_irqsave(rwlock_t *lock)
-{
-    uint32_t x;
-    unsigned long flags;
-
-    local_irq_save(flags);
-    check_lock(&lock->debug);
-    do {
-        if ( (x = lock->lock) & RW_WRITE_FLAG )
-        {
-            local_irq_restore(flags);
-            while ( (x = lock->lock) & RW_WRITE_FLAG )
-                cpu_relax();
-            local_irq_disable();
-        }
-    } while ( cmpxchg(&lock->lock, x, x|RW_WRITE_FLAG) != x );
-    while ( x != 0 )
-    {
-        cpu_relax();
-        x = lock->lock & ~RW_WRITE_FLAG;
-    }
-    preempt_disable();
-    return flags;
-}
-
-int _write_trylock(rwlock_t *lock)
-{
-    uint32_t x;
-
-    check_lock(&lock->debug);
-    do {
-        if ( (x = lock->lock) != 0 )
-            return 0;
-    } while ( cmpxchg(&lock->lock, x, x|RW_WRITE_FLAG) != x );
-    preempt_disable();
-    return 1;
-}
-
-#ifndef _raw_write_unlock
-# define _raw_write_unlock(l) xchg(&(l)->lock, 0)
-#endif
-
-inline void _write_unlock(rwlock_t *lock)
-{
-    preempt_enable();
-    if ( _raw_write_unlock(lock) != RW_WRITE_FLAG )
-        BUG();
-}
-
-void _write_unlock_irq(rwlock_t *lock)
-{
-    _write_unlock(lock);
-    local_irq_enable();
-}
-
-void _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
-{
-    _write_unlock(lock);
-    local_irq_restore(flags);
-}
-
-int _rw_is_locked(rwlock_t *lock)
-{
-    check_lock(&lock->debug);
-    return (lock->lock != 0); /* anyone in critical section? */
-}
-
-int _rw_is_write_locked(rwlock_t *lock)
-{
-    check_lock(&lock->debug);
-    return (lock->lock == RW_WRITE_FLAG); /* writer in critical section? */
-}
-
 #ifdef LOCK_PROFILE
 
 struct lock_profile_anc {
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 9d87783..35657c5 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -3,6 +3,188 @@ 
 
 #include <xen/spinlock.h>
 
+#include <asm/atomic.h>
+#include <asm/system.h>
+
+typedef struct {
+    atomic_t cnts;
+    spinlock_t lock;
+} rwlock_t;
+
+#define    RW_LOCK_UNLOCKED {           \
+    .cnts = ATOMIC_INIT(0),             \
+    .lock = SPIN_LOCK_UNLOCKED          \
+}
+
+#define DEFINE_RWLOCK(l) rwlock_t l = RW_LOCK_UNLOCKED
+#define rwlock_init(l) (*(l) = (rwlock_t)RW_LOCK_UNLOCKED)
+
+/*
+ * Writer states & reader shift and bias.
+ *
+ * The writer field is 8 bits wide to allow for a potential
+ * optimisation; see _write_unlock().
+ */
+#define    _QW_WAITING  1               /* A writer is waiting     */
+#define    _QW_LOCKED   0xff            /* A writer holds the lock */
+#define    _QW_WMASK    0xff            /* Writer mask             */
+#define    _QR_SHIFT    8               /* Reader count shift      */
+#define    _QR_BIAS     (1U << _QR_SHIFT)
+
+void queue_read_lock_slowpath(rwlock_t *lock);
+void queue_write_lock_slowpath(rwlock_t *lock);
+
+/*
+ * _read_trylock - try to acquire read lock of a queue rwlock.
+ * @lock : Pointer to queue rwlock structure.
+ * Return: 1 if lock acquired, 0 if failed.
+ */
+static inline int _read_trylock(rwlock_t *lock)
+{
+    u32 cnts;
+
+    cnts = atomic_read(&lock->cnts);
+    if ( likely(!(cnts & _QW_WMASK)) )
+    {
+        cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
+        if ( likely(!(cnts & _QW_WMASK)) )
+            return 1;
+        atomic_sub(_QR_BIAS, &lock->cnts);
+    }
+    return 0;
+}
+
+/*
+ * _read_lock - acquire read lock of a queue rwlock.
+ * @lock: Pointer to queue rwlock structure.
+ */
+static inline void _read_lock(rwlock_t *lock)
+{
+    u32 cnts;
+
+    cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
+    if ( likely(!(cnts & _QW_WMASK)) )
+        return;
+
+    /* The slowpath will decrement the reader count, if necessary. */
+    queue_read_lock_slowpath(lock);
+}
+
+static inline void _read_lock_irq(rwlock_t *lock)
+{
+    ASSERT(local_irq_is_enabled());
+    local_irq_disable();
+    _read_lock(lock);
+}
+
+static inline unsigned long _read_lock_irqsave(rwlock_t *lock)
+{
+    unsigned long flags;
+    local_irq_save(flags);
+    _read_lock(lock);
+    return flags;
+}
+
+/*
+ * _read_unlock - release read lock of a queue rwlock.
+ * @lock : Pointer to queue rwlock structure.
+ */
+static inline void _read_unlock(rwlock_t *lock)
+{
+    /*
+     * Atomically decrement the reader count
+     */
+    atomic_sub(_QR_BIAS, &lock->cnts);
+}
+
+static inline void _read_unlock_irq(rwlock_t *lock)
+{
+    _read_unlock(lock);
+    local_irq_enable();
+}
+
+static inline void _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
+{
+    _read_unlock(lock);
+    local_irq_restore(flags);
+}
+
+static inline int _rw_is_locked(rwlock_t *lock)
+{
+    return atomic_read(&lock->cnts);
+}
+
+/*
+ * queue_write_lock - acquire write lock of a queue rwlock.
+ * @lock : Pointer to queue rwlock structure.
+ */
+static inline void _write_lock(rwlock_t *lock)
+{
+    /* Optimize for the unfair lock case where the fair flag is 0. */
+    if ( atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0 )
+        return;
+
+    queue_write_lock_slowpath(lock);
+}
+
+static inline void _write_lock_irq(rwlock_t *lock)
+{
+    ASSERT(local_irq_is_enabled());
+    local_irq_disable();
+    _write_lock(lock);
+}
+
+static inline unsigned long _write_lock_irqsave(rwlock_t *lock)
+{
+    unsigned long flags;
+
+    local_irq_save(flags);
+    _write_lock(lock);
+    return flags;
+}
+
+/*
+ * queue_write_trylock - try to acquire write lock of a queue rwlock.
+ * @lock : Pointer to queue rwlock structure.
+ * Return: 1 if lock acquired, 0 if failed.
+ */
+static inline int _write_trylock(rwlock_t *lock)
+{
+    u32 cnts;
+
+    cnts = atomic_read(&lock->cnts);
+    if ( unlikely(cnts) )
+        return 0;
+
+    return likely(atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0);
+}
+
+static inline void _write_unlock(rwlock_t *lock)
+{
+    /*
+     * If the writer field is atomic, it can be cleared directly.
+     * Otherwise, an atomic subtraction will be used to clear it.
+     */
+    atomic_sub(_QW_LOCKED, &lock->cnts);
+}
+
+static inline void _write_unlock_irq(rwlock_t *lock)
+{
+    _write_unlock(lock);
+    local_irq_enable();
+}
+
+static inline void _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags)
+{
+    _write_unlock(lock);
+    local_irq_restore(flags);
+}
+
+static inline int _rw_is_write_locked(rwlock_t *lock)
+{
+    return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
+}
+
 #define read_lock(l)                  _read_lock(l)
 #define read_lock_irq(l)              _read_lock_irq(l)
 #define read_lock_irqsave(l, f)                                 \
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 765db51..77ab7d5 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -152,17 +152,6 @@  typedef struct spinlock {
 
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
 
-typedef struct {
-    volatile uint32_t lock;
-    struct lock_debug debug;
-} rwlock_t;
-
-#define RW_WRITE_FLAG (1u<<31)
-
-#define RW_LOCK_UNLOCKED { 0, _LOCK_DEBUG }
-#define DEFINE_RWLOCK(l) rwlock_t l = RW_LOCK_UNLOCKED
-#define rwlock_init(l) (*(l) = (rwlock_t)RW_LOCK_UNLOCKED)
-
 void _spin_lock(spinlock_t *lock);
 void _spin_lock_irq(spinlock_t *lock);
 unsigned long _spin_lock_irqsave(spinlock_t *lock);
@@ -179,27 +168,6 @@  int _spin_trylock_recursive(spinlock_t *lock);
 void _spin_lock_recursive(spinlock_t *lock);
 void _spin_unlock_recursive(spinlock_t *lock);
 
-void _read_lock(rwlock_t *lock);
-void _read_lock_irq(rwlock_t *lock);
-unsigned long _read_lock_irqsave(rwlock_t *lock);
-
-void _read_unlock(rwlock_t *lock);
-void _read_unlock_irq(rwlock_t *lock);
-void _read_unlock_irqrestore(rwlock_t *lock, unsigned long flags);
-int _read_trylock(rwlock_t *lock);
-
-void _write_lock(rwlock_t *lock);
-void _write_lock_irq(rwlock_t *lock);
-unsigned long _write_lock_irqsave(rwlock_t *lock);
-int _write_trylock(rwlock_t *lock);
-
-void _write_unlock(rwlock_t *lock);
-void _write_unlock_irq(rwlock_t *lock);
-void _write_unlock_irqrestore(rwlock_t *lock, unsigned long flags);
-
-int _rw_is_locked(rwlock_t *lock);
-int _rw_is_write_locked(rwlock_t *lock);
-
 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_irq(l)              _spin_lock_irq(l)
 #define spin_lock_irqsave(l, f)                                 \