
[V3,13/20] Documentation: Add lock ordering and nesting documentation

Message ID 20200321113242.026561244@linutronix.de (mailing list archive)
State Not Applicable, archived
Series Lock ordering documentation and annotation for lockdep

Commit Message

Thomas Gleixner March 21, 2020, 11:25 a.m. UTC
From: Thomas Gleixner <tglx@linutronix.de>

The kernel provides a variety of locking primitives. The nesting of these
lock types and their implications on RT enabled kernels are nowhere
documented.

Add initial documentation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E . McKenney" <paulmck@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Randy Dunlap <rdunlap@infradead.org>
---
V3: Addressed review comments from Paul, Jonathan, Davidlohr
V2: Addressed review comments from Randy
---
 Documentation/locking/index.rst     |    1 
 Documentation/locking/locktypes.rst |  299 ++++++++++++++++++++++++++++++++++++
 2 files changed, 300 insertions(+)
 create mode 100644 Documentation/locking/locktypes.rst

Comments

Paul E. McKenney March 23, 2020, 2:55 a.m. UTC | #1
On Sat, Mar 21, 2020 at 12:25:57PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The kernel provides a variety of locking primitives. The nesting of these
> lock types and their implications on RT enabled kernels are nowhere
> documented.
> 
> Add initial documentation.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: "Paul E . McKenney" <paulmck@kernel.org>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Davidlohr Bueso <dave@stgolabs.net>
> Cc: Randy Dunlap <rdunlap@infradead.org>
> ---
> V3: Addressed review comments from Paul, Jonathan, Davidlohr
> V2: Addressed review comments from Randy
> ---
>  Documentation/locking/index.rst     |    1 
>  Documentation/locking/locktypes.rst |  299 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 300 insertions(+)
>  create mode 100644 Documentation/locking/locktypes.rst
> 
> --- a/Documentation/locking/index.rst
> +++ b/Documentation/locking/index.rst
> @@ -7,6 +7,7 @@ locking
>  .. toctree::
>      :maxdepth: 1
>  
> +    locktypes
>      lockdep-design
>      lockstat
>      locktorture
> --- /dev/null
> +++ b/Documentation/locking/locktypes.rst
> @@ -0,0 +1,299 @@

[ . . . Adding your example execution sequences . . . ]

> +PREEMPT_RT kernels preserve all other spinlock_t semantics:
> +
> + - Tasks holding a spinlock_t do not migrate.  Non-PREEMPT_RT kernels
> +   avoid migration by disabling preemption.  PREEMPT_RT kernels instead
> +   disable migration, which ensures that pointers to per-CPU variables
> +   remain valid even if the task is preempted.
> +
> + - Task state is preserved across spinlock acquisition, ensuring that the
> +   task-state rules apply to all kernel configurations.  Non-PREEMPT_RT
> +   kernels leave task state untouched.  However, PREEMPT_RT must change
> +   task state if the task blocks during acquisition.  Therefore, it saves
> +   the current task state before blocking and the corresponding lock wakeup
> +   restores it.
> +
> +   Other types of wakeups would normally unconditionally set the task state
> +   to RUNNING, but that does not work here because the task must remain
> +   blocked until the lock becomes available.  Therefore, when a non-lock
> +   wakeup attempts to awaken a task blocked waiting for a spinlock, it
> +   instead sets the saved state to RUNNING.  Then, when the lock
> +   acquisition completes, the lock wakeup sets the task state to the saved
> +   state, in this case setting it to RUNNING.

In the normal case where the task sleeps through the entire lock
acquisition, the sequence of events is as follows:

     state = UNINTERRUPTIBLE
     lock()
       block()
         real_state = state
         state = SLEEPONLOCK

                               lock wakeup
                                 state = real_state == UNINTERRUPTIBLE

This sequence of events can occur when the task acquires spinlocks
on its way to sleeping, for example, in a call to wait_event().

The non-lock wakeup can occur when a wakeup races with this wait_event(),
which can result in the following sequence of events:

     state = UNINTERRUPTIBLE
     lock()
       block()
         real_state = state
         state = SLEEPONLOCK

                             non lock wakeup
                                 real_state = RUNNING

                               lock wakeup
                                 state = real_state == RUNNING

Without this real_state subterfuge, the wakeup might be lost.
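
For illustration, here is a minimal and purely hypothetical caller that
produces exactly these sequences; the wait-queue head, the "cond" flag,
and both functions are invented, and the condition handling is
simplified:

    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(wq);
    static bool cond;

    static void waiter(void)
    {
            /*
             * wait_event() sets the task state to TASK_UNINTERRUPTIBLE
             * and then takes the wait-queue spinlock_t internally.  On
             * PREEMPT_RT that spinlock acquisition can itself block,
             * which is what requires the real_state handling above.
             */
            wait_event(wq, cond);
    }

    static void waker(void)
    {
            cond = true;    /* real code needs proper memory ordering */
            wake_up(&wq);   /* the "non lock wakeup" in the second sequence */
    }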

[ . . . and continuing where I left off earlier . . . ]

> +bit spinlocks
> +-------------
> +
> +Bit spinlocks are problematic for PREEMPT_RT as they cannot be easily
> +substituted by an RT-mutex based implementation for obvious reasons.
> +
> +The semantics of bit spinlocks are preserved on PREEMPT_RT kernels and the
> +caveats vs. raw_spinlock_t apply.
> +
> +Some bit spinlocks are substituted by regular spinlock_t for PREEMPT_RT but
> +this requires conditional (#ifdef'ed) code changes at the usage site while
> +the spinlock_t substitution is simply done by the compiler and the
> +conditionals are restricted to header files and core implementation of the
> +locking primitives and the usage sites do not require any changes.

PREEMPT_RT cannot substitute bit spinlocks because a single bit is
too small to accommodate an RT-mutex.  Therefore, the semantics of bit
spinlocks are preserved on PREEMPT_RT kernels, so that the raw_spinlock_t
caveats also apply to bit spinlocks.

Some bit spinlocks are replaced with regular spinlock_t for PREEMPT_RT
using conditional (#ifdef'ed) code changes at the usage site.
In contrast, usage-site changes are not needed for the spinlock_t
substitution.  Instead, conditionals in header files and the core locking
implementation enable the compiler to do the substitution transparently.
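
As a purely illustrative sketch of that #ifdef pattern (the structure
and helpers below are invented, not taken from the kernel, and lock
initialization is elided):

    #include <linux/bit_spinlock.h>
    #include <linux/spinlock.h>

    struct obj {
            unsigned long flags;    /* bit 0 doubles as the lock on !RT */
    #ifdef CONFIG_PREEMPT_RT
            spinlock_t lock;        /* a real spinlock_t on PREEMPT_RT */
    #endif
    };

    static inline void obj_lock(struct obj *o)
    {
    #ifdef CONFIG_PREEMPT_RT
            spin_lock(&o->lock);
    #else
            bit_spin_lock(0, &o->flags);
    #endif
    }

    static inline void obj_unlock(struct obj *o)
    {
    #ifdef CONFIG_PREEMPT_RT
            spin_unlock(&o->lock);
    #else
            bit_spin_unlock(0, &o->flags);
    #endif
    }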


> +Lock type nesting rules
> +=======================
> +
> +The most basic rules are:
> +
> +  - Lock types of the same lock category (sleeping, spinning) can nest
> +    arbitrarily as long as they respect the general lock ordering rules to
> +    prevent deadlocks.

  - Lock types in the same category (sleeping, spinning) can nest
    arbitrarily as long as they respect the general deadlock-avoidance
    ordering rules.

[ Give or take lockdep eventually complaining about too-deep nesting,
  but that is probably not worth mentioning here.  Leave that caveat
  to the lockdep documentation. ]

> +  - Sleeping lock types cannot nest inside spinning lock types.
> +
> +  - Spinning lock types can nest inside sleeping lock types.
> +
> +These rules apply in general independent of CONFIG_PREEMPT_RT.

These constraints apply both in CONFIG_PREEMPT_RT and otherwise.

> +As PREEMPT_RT changes the lock category of spinlock_t and rwlock_t from
> +spinning to sleeping this has obviously restrictions how they can nest with
> +raw_spinlock_t.
> +
> +This results in the following nest ordering:

The fact that PREEMPT_RT changes the lock category of spinlock_t and
rwlock_t from spinning to sleeping means that they cannot be acquired
while holding a raw spinlock.  This results in the following nesting
ordering:

> +  1) Sleeping locks
> +  2) spinlock_t and rwlock_t
> +  3) raw_spinlock_t and bit spinlocks
> +
> +Lockdep is aware of these constraints to ensure that they are respected.

Lockdep will complain if these constraints are violated, both in
CONFIG_PREEMPT_RT and otherwise.
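
To make the ordering concrete, a hedged example with three hypothetical
locks; lockdep accepts the first function in any configuration and
complains about the second, even on non-PREEMPT_RT kernels when the
corresponding lockdep checking is enabled:

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static DEFINE_MUTEX(m);                 /* sleeping lock */
    static DEFINE_SPINLOCK(s);              /* spinlock_t */
    static DEFINE_RAW_SPINLOCK(r);          /* raw_spinlock_t */

    static void good_nesting(void)
    {
            mutex_lock(&m);                 /* 1) sleeping lock */
            spin_lock(&s);                  /* 2) spinlock_t */
            raw_spin_lock(&r);              /* 3) raw_spinlock_t */
            /* ... */
            raw_spin_unlock(&r);
            spin_unlock(&s);
            mutex_unlock(&m);
    }

    static void bad_nesting(void)
    {
            raw_spin_lock(&r);
            spin_lock(&s);          /* BAD: sleeps on PREEMPT_RT */
            spin_unlock(&s);
            raw_spin_unlock(&r);
    }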


> +Owner semantics
> +===============
> +
> +Most lock types in the Linux kernel have strict owner semantics, i.e. the
> +context (task) which acquires a lock has to release it.

The aforementioned lock types have strict owner semantics: The context
(task) that acquired the lock must release it.

> +There are two exceptions:
> +
> +  - semaphores
> +  - rwsems
> +
> +semaphores have no owner semantics for historical reasons, and as such
> +trylock and release operations can be called from any context. They are
> +often used for both serialization and waiting purposes. That's generally
> +discouraged and should be replaced by separate serialization and wait
> +mechanisms, such as mutexes and completions.

semaphores lack owner semantics for historical reasons, so their trylock
and release operations may be called from any context. They are often
used for both serialization and waiting, but new use cases should
instead use separate serialization and wait mechanisms, such as mutexes
and completions.
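
As a hedged example of that split (all names invented), a semaphore
that both serializes submissions and waits for their completion becomes
a mutex plus a completion:

    #include <linux/completion.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(io_lock);           /* serialization */
    static DECLARE_COMPLETION(io_done);     /* waiting */

    static void submit_and_wait(void)
    {
            mutex_lock(&io_lock);
            /* queue the operation ... */
            mutex_unlock(&io_lock);

            wait_for_completion(&io_done);
    }

    /* May run in any context, e.g. an interrupt handler, because
     * complete() has no owner semantics to violate. */
    static void operation_done(void)
    {
            complete(&io_done);
    }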

> +rwsems have grown interfaces which allow non owner release for special
> +purposes. This usage is problematic on PREEMPT_RT because PREEMPT_RT
> +substitutes all locking primitives except semaphores with RT-mutex based
> +implementations to provide priority inheritance for all lock types except
> +the truly spinning ones. Priority inheritance on ownerless locks is
> +obviously impossible.
> +
> +For now the rwsem non-owner release excludes code which utilizes it from
> +being used on PREEMPT_RT enabled kernels. In some cases this can be
> +mitigated by disabling portions of the code, in other cases the complete
> +functionality has to be disabled until a workable solution has been found.

rwsems have grown special-purpose interfaces that allow non-owner release.
This non-owner release prevents PREEMPT_RT from substituting RT-mutex
implementations, for example, by defeating priority inheritance.
After all, if the lock has no owner, whose priority should be boosted?
As a result, PREEMPT_RT does not currently support rwsem, which in turn
means that code using it must therefore be disabled until a workable
solution presents itself.

[ Note: Not as confident as I would like to be in the above. ]

							Thanx, Paul
Thomas Gleixner March 24, 2020, 11:13 p.m. UTC | #2
Paul,

"Paul E. McKenney" <paulmck@kernel.org> writes:
> On Sat, Mar 21, 2020 at 12:25:57PM +0100, Thomas Gleixner wrote:
> In the normal case where the task sleeps through the entire lock
> acquisition, the sequence of events is as follows:
>
>      state = UNINTERRUPTIBLE
>      lock()
>        block()
>          real_state = state
>          state = SLEEPONLOCK
>
>                                lock wakeup
>                                  state = real_state == UNINTERRUPTIBLE
>
> This sequence of events can occur when the task acquires spinlocks
> on its way to sleeping, for example, in a call to wait_event().
>
> The non-lock wakeup can occur when a wakeup races with this wait_event(),
> which can result in the following sequence of events:
>
>      state = UNINTERRUPTIBLE
>      lock()
>        block()
>          real_state = state
>          state = SLEEPONLOCK
>
>                              non lock wakeup
>                                  real_state = RUNNING
>
>                                lock wakeup
>                                  state = real_state == RUNNING
>
> Without this real_state subterfuge, the wakeup might be lost.

I added this with a few modifications which reflect the actual
implementation. Conceptually the same.

> rwsems have grown special-purpose interfaces that allow non-owner release.
> This non-owner release prevents PREEMPT_RT from substituting RT-mutex
> implementations, for example, by defeating priority inheritance.
> After all, if the lock has no owner, whose priority should be boosted?
> As a result, PREEMPT_RT does not currently support rwsem, which in turn
> means that code using it must therefore be disabled until a workable
> solution presents itself.
>
> [ Note: Not as confident as I would like to be in the above. ]

I'm not confident either, especially not after looking at the actual
code.

In fact I feel really stupid because the rw_semaphore reader non-owner
restriction on RT simply does not exist anymore and my history-biased
memory tricked me.

The first rw_semaphore implementation of RT was simple and restricted
the reader side to a single reader to support PI on both the reader and
the writer side. That obviously did not scale well and made mmap_sem-heavy
use cases pretty unhappy.

The short interlude with multi-reader boosting turned out to be a failed
experiment - Steven might still disagree though :)

At some point we gave up and I myself (sic!) reimplemented the RT
variant of rw_semaphore with a reader-biased mechanism.

The reader never holds the underlying rt_mutex across the read side
critical section. It merely increments the reader count and drops it on
release.

The only time a reader takes the rt_mutex is when it blocks on a
writer. Writers hold the rt_mutex across the write side critical section
to allow incoming readers to boost them. Once the writer releases the
rw_semaphore it unlocks the rt_mutex which is then handed off to the
readers. They increment the reader count and then drop the rt_mutex
before continuing in the read side critical section.
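
A conceptual model of that scheme in deliberately simplified C -- this
is not the kernel's implementation (the writer spins where the real code
blocks, wakeups, fairness and initialization are elided) and WRITER_BIAS
plus all names are invented:

    #include <linux/atomic.h>
    #include <linux/rtmutex.h>

    #define WRITER_BIAS     (-(1 << 30))

    struct model_rwsem {
            atomic_t readers;        /* >= 0: reader count, < 0: writer */
            struct rt_mutex rtmutex; /* held by writers across the CS */
    };

    static void model_down_read(struct model_rwsem *sem)
    {
            /* Fast path: no writer; readers never hold the rt_mutex. */
            if (atomic_inc_unless_negative(&sem->readers))
                    return;
            /* Slow path: block on, and thereby boost, the writer; then
             * register as a reader and drop the rt_mutex again before
             * entering the read side critical section. */
            rt_mutex_lock(&sem->rtmutex);
            atomic_inc(&sem->readers);
            rt_mutex_unlock(&sem->rtmutex);
    }

    static void model_up_read(struct model_rwsem *sem)
    {
            atomic_dec(&sem->readers); /* real code also wakes writers */
    }

    static void model_down_write(struct model_rwsem *sem)
    {
            rt_mutex_lock(&sem->rtmutex);   /* boostable from here on */
            atomic_add(WRITER_BIAS, &sem->readers); /* stop new readers */
            while (atomic_read(&sem->readers) != WRITER_BIAS)
                    cpu_relax();    /* model only: the real code sleeps */
    }

    static void model_up_write(struct model_rwsem *sem)
    {
            atomic_sub(WRITER_BIAS, &sem->readers); /* re-admit readers */
            rt_mutex_unlock(&sem->rtmutex); /* hand off to blocked readers */
    }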

So while I changed the implementation, it obviously did not occur to me
that this also lifted the non-owner release restriction. Nobody else
noticed either. So we kept dragging this along in both memory and
implementation. Both will be fixed now :)

The owner semantics of down/up_read() are only enforced by lockdep. That
applies to both RT and !RT. The up/down_read_non_owner() variants are
just there to tell lockdep about it.
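
In code, that lockdep annotation looks like this (the rwsem and the
split across two contexts are invented for illustration):

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(sem);

    static void start_io(void)
    {
            /* Tell lockdep that this task will not do the release. */
            down_read_non_owner(&sem);
            /* ... hand the protected object to another context ... */
    }

    static void finish_io(void)     /* runs in a different task */
    {
            up_read_non_owner(&sem);
    }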

So, I picked up your other suggestions with slight modifications and
adjusted the owner, semaphore and rw_semaphore docs accordingly.

Please have a close look at the patch below (applies on tip core/locking).

Thanks,

        tglx, who is searching for a brown paper bag

8<----------

 Documentation/locking/locktypes.rst |  148 +++++++++++++++++++++++-------------
 1 file changed, 98 insertions(+), 50 deletions(-)

--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -67,6 +67,17 @@ Spinning locks implicitly disable preemp
  _irqsave/restore()   Save and disable / restore interrupt disabled state
  ===================  ====================================================
 
+Owner semantics
+===============
+
+The aforementioned lock types except semaphores have strict owner
+semantics:
+
+  The context (task) that acquired the lock must release it.
+
+rw_semaphores have a special interface which allows non-owner release for
+readers.
+
 
 rtmutex
 =======
@@ -83,6 +94,51 @@ interrupt handlers and soft interrupts.
 and rwlock_t to be implemented via RT-mutexes.
 
 
+sempahore
+=========
+
+semaphore is a counting semaphore implementation.
+
+Semaphores are often used for both serialization and waiting, but new use
+cases should instead use separate serialization and wait mechanisms, such
+as mutexes and completions.
+
+sempahores and PREEMPT_RT
+----------------------------
+
+PREEMPT_RT does not change the sempahore implementation. That's impossible
+due to the counting semaphore semantics which have no concept of owners.
+The lack of an owner conflicts with priority inheritance. After all an
+unknown owner cannot be boosted. As a consequence blocking on semaphores
+can be subject to priority inversion.
+
+
+rw_sempahore
+============
+
+rw_semaphore is a multiple readers and single writer lock mechanism.
+
+On non-PREEMPT_RT kernels the implementation is fair, thus preventing
+writer starvation.
+
+rw_semaphore complies by default with the strict owner semantics, but there
+exist special-purpose interfaces that allow non-owner release for readers.
+These work independent of the kernel configuration.
+
+rw_sempahore and PREEMPT_RT
+---------------------------
+
+PREEMPT_RT kernels map rw_sempahore to a separate rt_mutex-based
+implementation, thus changing the fairness:
+
+ Because an rw_sempaphore writer cannot grant its priority to multiple
+ readers, a preempted low-priority reader will continue holding its lock,
+ thus starving even high-priority writers.  In contrast, because readers
+ can grant their priority to a writer, a preempted low-priority writer will
+ have its priority boosted until it releases the lock, thus preventing that
+ writer from starving readers.
+
+
 raw_spinlock_t and spinlock_t
 =============================
 
@@ -140,7 +196,16 @@ On a PREEMPT_RT enabled kernel spinlock_
    kernels leave task state untouched.  However, PREEMPT_RT must change
    task state if the task blocks during acquisition.  Therefore, it saves
    the current task state before blocking and the corresponding lock wakeup
-   restores it.
+   restores it::
+
+    task->state = TASK_INTERRUPTIBLE
+     lock()
+       block()
+         task->saved_state = task->state
+	 task->state = TASK_UNINTERRUPTIBLE
+	 schedule()
+					lock wakeup
+					  task->state = task->saved_state
 
    Other types of wakeups would normally unconditionally set the task state
    to RUNNING, but that does not work here because the task must remain
@@ -148,7 +213,22 @@ On a PREEMPT_RT enabled kernel spinlock_
    wakeup attempts to awaken a task blocked waiting for a spinlock, it
    instead sets the saved state to RUNNING.  Then, when the lock
    acquisition completes, the lock wakeup sets the task state to the saved
-   state, in this case setting it to RUNNING.
+   state, in this case setting it to RUNNING::
+
+    task->state = TASK_INTERRUPTIBLE
+     lock()
+       block()
+         task->saved_state = task->state
+	 task->state = TASK_UNINTERRUPTIBLE
+	 schedule()
+					non lock wakeup
+					  task->saved_state = TASK_RUNNING
+
+					lock wakeup
+					  task->state = task->saved_state
+
+   This ensures that the real wakeup cannot be lost.
+
 
 rwlock_t
 ========
@@ -228,17 +308,16 @@ while holding normal non-raw spinlocks b
 bit spinlocks
 -------------
 
-Bit spinlocks are problematic for PREEMPT_RT as they cannot be easily
-substituted by an RT-mutex based implementation for obvious reasons.
-
-The semantics of bit spinlocks are preserved on PREEMPT_RT kernels and the
-caveats vs. raw_spinlock_t apply.
-
-Some bit spinlocks are substituted by regular spinlock_t for PREEMPT_RT but
-this requires conditional (#ifdef'ed) code changes at the usage site while
-the spinlock_t substitution is simply done by the compiler and the
-conditionals are restricted to header files and core implementation of the
-locking primitives and the usage sites do not require any changes.
+PREEMPT_RT cannot substitute bit spinlocks because a single bit is too
+small to accommodate an RT-mutex.  Therefore, the semantics of bit
+spinlocks are preserved on PREEMPT_RT kernels, so that the raw_spinlock_t
+caveats also apply to bit spinlocks.
+
+Some bit spinlocks are replaced with regular spinlock_t for PREEMPT_RT
+using conditional (#ifdef'ed) code changes at the usage site.  In contrast,
+usage-site changes are not needed for the spinlock_t substitution.
+Instead, conditionals in header files and the core locking implementation
+enable the compiler to do the substitution transparently.
 
 
 Lock type nesting rules
@@ -254,46 +333,15 @@ Lock type nesting rules
 
   - Spinning lock types can nest inside sleeping lock types.
 
-These rules apply in general independent of CONFIG_PREEMPT_RT.
+These constraints apply both in CONFIG_PREEMPT_RT and otherwise.
 
-As PREEMPT_RT changes the lock category of spinlock_t and rwlock_t from
-spinning to sleeping this has obviously restrictions how they can nest with
-raw_spinlock_t.
-
-This results in the following nest ordering:
+The fact that PREEMPT_RT changes the lock category of spinlock_t and
+rwlock_t from spinning to sleeping means that they cannot be acquired while
+holding a raw spinlock.  This results in the following nesting ordering:
 
   1) Sleeping locks
   2) spinlock_t and rwlock_t
   3) raw_spinlock_t and bit spinlocks
 
-Lockdep is aware of these constraints to ensure that they are respected.
-
-
-Owner semantics
-===============
-
-Most lock types in the Linux kernel have strict owner semantics, i.e. the
-context (task) which acquires a lock has to release it.
-
-There are two exceptions:
-
-  - semaphores
-  - rwsems
-
-semaphores have no owner semantics for historical reasons, and as such
-trylock and release operations can be called from any context. They are
-often used for both serialization and waiting purposes. That's generally
-discouraged and should be replaced by separate serialization and wait
-mechanisms, such as mutexes and completions.
-
-rwsems have grown interfaces which allow non owner release for special
-purposes. This usage is problematic on PREEMPT_RT because PREEMPT_RT
-substitutes all locking primitives except semaphores with RT-mutex based
-implementations to provide priority inheritance for all lock types except
-the truly spinning ones. Priority inheritance on ownerless locks is
-obviously impossible.
-
-For now the rwsem non-owner release excludes code which utilizes it from
-being used on PREEMPT_RT enabled kernels. In some cases this can be
-mitigated by disabling portions of the code, in other cases the complete
-functionality has to be disabled until a workable solution has been found.
+Lockdep will complain if these constraints are violated, both in
+CONFIG_PREEMPT_RT and otherwise.
Paul E. McKenney March 25, 2020, 12:28 a.m. UTC | #3
On Wed, Mar 25, 2020 at 12:13:34AM +0100, Thomas Gleixner wrote:
> Paul,
> 
> "Paul E. McKenney" <paulmck@kernel.org> writes:
> > On Sat, Mar 21, 2020 at 12:25:57PM +0100, Thomas Gleixner wrote:
> > In the normal case where the task sleeps through the entire lock
> > acquisition, the sequence of events is as follows:
> >
> >      state = UNINTERRUPTIBLE
> >      lock()
> >        block()
> >          real_state = state
> >          state = SLEEPONLOCK
> >
> >                                lock wakeup
> >                                  state = real_state == UNINTERRUPTIBLE
> >
> > This sequence of events can occur when the task acquires spinlocks
> > on its way to sleeping, for example, in a call to wait_event().
> >
> > The non-lock wakeup can occur when a wakeup races with this wait_event(),
> > which can result in the following sequence of events:
> >
> >      state = UNINTERRUPTIBLE
> >      lock()
> >        block()
> >          real_state = state
> >          state = SLEEPONLOCK
> >
> >                              non lock wakeup
> >                                  real_state = RUNNING
> >
> >                                lock wakeup
> >                                  state = real_state == RUNNING
> >
> > Without this real_state subterfuge, the wakeup might be lost.
> 
> I added this with a few modifications which reflect the actual
> implementation. Conceptually the same.

Looks good!

> > rwsems have grown special-purpose interfaces that allow non-owner release.
> > This non-owner release prevents PREEMPT_RT from substituting RT-mutex
> > implementations, for example, by defeating priority inheritance.
> > After all, if the lock has no owner, whose priority should be boosted?
> > As a result, PREEMPT_RT does not currently support rwsem, which in turn
> > means that code using it must therefore be disabled until a workable
> > solution presents itself.
> >
> > [ Note: Not as confident as I would like to be in the above. ]
> 
> I'm not confident either, especially not after looking at the actual
> code.
> 
> In fact I feel really stupid because the rw_semaphore reader non-owner
> restriction on RT simply does not exist anymore and my history-biased
> memory tricked me.

I guess I am glad that it is not just me.  ;-)

> The first rw_semaphore implementation of RT was simple and restricted
> the reader side to a single reader to support PI on both the reader and
> the writer side. That obviously did not scale well and made mmap_sem-heavy
> use cases pretty unhappy.
> 
> The short interlude with multi-reader boosting turned out to be a failed
> experiment - Steven might still disagree though :)
> 
> At some point we gave up and I myself (sic!) reimplemented the RT
> variant of rw_semaphore with a reader-biased mechanism.
> 
> The reader never holds the underlying rt_mutex across the read side
> critical section. It merely increments the reader count and drops it on
> release.
> 
> The only time a reader takes the rt_mutex is when it blocks on a
> writer. Writers hold the rt_mutex across the write side critical section
> to allow incoming readers to boost them. Once the writer releases the
> rw_semaphore it unlocks the rt_mutex which is then handed off to the
> readers. They increment the reader count and then drop the rt_mutex
> before continuing in the read side critical section.
> 
> So while I changed the implementation, it obviously did not occur to me
> that this also lifted the non-owner release restriction. Nobody else
> noticed either. So we kept dragging this along in both memory and
> implementation. Both will be fixed now :)
> 
> The owner semantics of down/up_read() are only enforced by lockdep. That
> applies to both RT and !RT. The up/down_read_non_owner() variants are
> just there to tell lockdep about it.
> 
> So, I picked up your other suggestions with slight modifications and
> adjusted the owner, semaphore and rw_semaphore docs accordingly.
> 
> Please have a close look at the patch below (applies on tip core/locking).
> 
> Thanks,
> 
>         tglx, who is searching for a brown paper bag

Sorry, used all the ones here over the past few days.  :-/

Please see below for a wordsmithing patch to be applied on top of
or merged into the patch in your email.

							Thanx, Paul

------------------------------------------------------------------------

commit e38c64ce8db45e2b0a19082f1e1f988c3b25fb81
Author: Paul E. McKenney <paulmck@kernel.org>
Date:   Tue Mar 24 17:23:36 2020 -0700

    Documentation: Wordsmith lock ordering and nesting documentation
    
    This commit is strictly wordsmithing with no (intended) semantic
    changes.
    
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
index ca7bf84..8eb52e9 100644
--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -94,7 +94,7 @@ interrupt handlers and soft interrupts.  This conversion allows spinlock_t
 and rwlock_t to be implemented via RT-mutexes.
 
 
-sempahore
+semaphore
 =========
 
 semaphore is a counting semaphore implementation.
@@ -103,17 +103,17 @@ Semaphores are often used for both serialization and waiting, but new use
 cases should instead use separate serialization and wait mechanisms, such
 as mutexes and completions.
 
-sempahores and PREEMPT_RT
+semaphores and PREEMPT_RT
 ----------------------------
 
-PREEMPT_RT does not change the sempahore implementation. That's impossible
-due to the counting semaphore semantics which have no concept of owners.
-The lack of an owner conflicts with priority inheritance. After all an
-unknown owner cannot be boosted. As a consequence blocking on semaphores
-can be subject to priority inversion.
+PREEMPT_RT does not change the semaphore implementation because counting
+semaphores have no concept of owners, thus preventing PREEMPT_RT from
+providing priority inheritance for semaphores.  After all, an unknown
+owner cannot be boosted. As a consequence, blocking on semaphores can
+result in priority inversion.
 
 
-rw_sempahore
+rw_semaphore
 ============
 
 rw_semaphore is a multiple readers and single writer lock mechanism.
@@ -125,13 +125,13 @@ rw_semaphore complies by default with the strict owner semantics, but there
 exist special-purpose interfaces that allow non-owner release for readers.
 These work independent of the kernel configuration.
 
-rw_sempahore and PREEMPT_RT
+rw_semaphore and PREEMPT_RT
 ---------------------------
 
-PREEMPT_RT kernels map rw_sempahore to a separate rt_mutex-based
+PREEMPT_RT kernels map rw_semaphore to a separate rt_mutex-based
 implementation, thus changing the fairness:
 
- Because an rw_sempaphore writer cannot grant its priority to multiple
+ Because an rw_semaphore writer cannot grant its priority to multiple
  readers, a preempted low-priority reader will continue holding its lock,
  thus starving even high-priority writers.  In contrast, because readers
  can grant their priority to a writer, a preempted low-priority writer will
@@ -158,7 +158,7 @@ critical section is tiny, thus avoiding RT-mutex overhead.
 spinlock_t
 ----------
 
-The semantics of spinlock_t change with the state of CONFIG_PREEMPT_RT.
+The semantics of spinlock_t change with the state of PREEMPT_RT.
 
 On a non PREEMPT_RT enabled kernel spinlock_t is mapped to raw_spinlock_t
 and has exactly the same semantics.
@@ -196,7 +196,7 @@ PREEMPT_RT kernels preserve all other spinlock_t semantics:
    kernels leave task state untouched.  However, PREEMPT_RT must change
    task state if the task blocks during acquisition.  Therefore, it saves
    the current task state before blocking and the corresponding lock wakeup
-   restores it::
+   restores it, as shown below::
 
     task->state = TASK_INTERRUPTIBLE
      lock()
@@ -333,7 +333,7 @@ The most basic rules are:
 
   - Spinning lock types can nest inside sleeping lock types.
 
-These constraints apply both in CONFIG_PREEMPT_RT and otherwise.
+These constraints apply both in PREEMPT_RT and otherwise.
 
 The fact that PREEMPT_RT changes the lock category of spinlock_t and
 rwlock_t from spinning to sleeping means that they cannot be acquired while
@@ -344,4 +344,4 @@ holding a raw spinlock.  This results in the following nesting ordering:
   3) raw_spinlock_t and bit spinlocks
 
 Lockdep will complain if these constraints are violated, both in
-CONFIG_PREEMPT_RT and otherwise.
+PREEMPT_RT and otherwise.

Patch

--- a/Documentation/locking/index.rst
+++ b/Documentation/locking/index.rst
@@ -7,6 +7,7 @@  locking
 .. toctree::
     :maxdepth: 1
 
+    locktypes
     lockdep-design
     lockstat
     locktorture
--- /dev/null
+++ b/Documentation/locking/locktypes.rst
@@ -0,0 +1,299 @@ 
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _kernel_hacking_locktypes:
+
+==========================
+Lock types and their rules
+==========================
+
+Introduction
+============
+
+The kernel provides a variety of locking primitives which can be divided
+into two categories:
+
+ - Sleeping locks
+ - Spinning locks
+
+This document conceptually describes these lock types and provides rules
+for their nesting, including the rules for use under PREEMPT_RT.
+
+
+Lock categories
+===============
+
+Sleeping locks
+--------------
+
+Sleeping locks can only be acquired in preemptible task context.
+
+Although implementations allow try_lock() from other contexts, it is
+necessary to carefully evaluate the safety of unlock() as well as of
+try_lock().  Furthermore, it is also necessary to evaluate the debugging
+versions of these primitives.  In short, don't acquire sleeping locks from
+other contexts unless there is no other option.
+
+Sleeping lock types:
+
+ - mutex
+ - rt_mutex
+ - semaphore
+ - rw_semaphore
+ - ww_mutex
+ - percpu_rw_semaphore
+
+On PREEMPT_RT kernels, these lock types are converted to sleeping locks:
+
+ - spinlock_t
+ - rwlock_t
+
+Spinning locks
+--------------
+
+ - raw_spinlock_t
+ - bit spinlocks
+
+On non-PREEMPT_RT kernels, these lock types are also spinning locks:
+
+ - spinlock_t
+ - rwlock_t
+
+Spinning locks implicitly disable preemption and the lock / unlock functions
+can have suffixes which apply further protections:
+
+ ===================  ====================================================
+ _bh()                Disable / enable bottom halves (soft interrupts)
+ _irq()               Disable / enable interrupts
+ _irqsave/restore()   Save and disable / restore interrupt disabled state
+ ===================  ====================================================
+
+
+rtmutex
+=======
+
+RT-mutexes are mutexes with support for priority inheritance (PI).
+
+PI has limitations on non PREEMPT_RT enabled kernels due to preemption and
+interrupt disabled sections.
+
+PI clearly cannot preempt preemption-disabled or interrupt-disabled
+regions of code, even on PREEMPT_RT kernels.  Instead, PREEMPT_RT kernels
+execute most such regions of code in preemptible task context, especially
+interrupt handlers and soft interrupts.  This conversion allows spinlock_t
+and rwlock_t to be implemented via RT-mutexes.
+
+
+raw_spinlock_t and spinlock_t
+=============================
+
+raw_spinlock_t
+--------------
+
+raw_spinlock_t is a strict spinning lock implementation in all kernels,
+including PREEMPT_RT kernels.  Use raw_spinlock_t only in real critical
+core code, low level interrupt handling and places where disabling
+preemption or interrupts is required, for example, to safely access
+hardware state.  raw_spinlock_t can sometimes also be used when the
+critical section is tiny, thus avoiding RT-mutex overhead.
+
+spinlock_t
+----------
+
+The semantics of spinlock_t change with the state of CONFIG_PREEMPT_RT.
+
+On a non PREEMPT_RT enabled kernel spinlock_t is mapped to raw_spinlock_t
+and has exactly the same semantics.
+
+spinlock_t and PREEMPT_RT
+-------------------------
+
+On a PREEMPT_RT enabled kernel spinlock_t is mapped to a separate
+implementation based on rt_mutex which changes the semantics:
+
+ - Preemption is not disabled
+
+ - The hard interrupt related suffixes for spin_lock / spin_unlock
+   operations (_irq, _irqsave / _irqrestore) do not affect the CPU's
+   interrupt disabled state
+
+ - The soft interrupt related suffix (_bh()) still disables softirq
+   handlers.
+
+   Non-PREEMPT_RT kernels disable preemption to get this effect.
+
+   PREEMPT_RT kernels use a per-CPU lock for serialization which keeps
+   preemption disabled. The lock disables softirq handlers and also
+   prevents reentrancy due to task preemption.
+
+PREEMPT_RT kernels preserve all other spinlock_t semantics:
+
+ - Tasks holding a spinlock_t do not migrate.  Non-PREEMPT_RT kernels
+   avoid migration by disabling preemption.  PREEMPT_RT kernels instead
+   disable migration, which ensures that pointers to per-CPU variables
+   remain valid even if the task is preempted.
+
+ - Task state is preserved across spinlock acquisition, ensuring that the
+   task-state rules apply to all kernel configurations.  Non-PREEMPT_RT
+   kernels leave task state untouched.  However, PREEMPT_RT must change
+   task state if the task blocks during acquisition.  Therefore, it saves
+   the current task state before blocking and the corresponding lock wakeup
+   restores it.
+
+   Other types of wakeups would normally unconditionally set the task state
+   to RUNNING, but that does not work here because the task must remain
+   blocked until the lock becomes available.  Therefore, when a non-lock
+   wakeup attempts to awaken a task blocked waiting for a spinlock, it
+   instead sets the saved state to RUNNING.  Then, when the lock
+   acquisition completes, the lock wakeup sets the task state to the saved
+   state, in this case setting it to RUNNING.
+
+rwlock_t
+========
+
+rwlock_t is a multiple readers and single writer lock mechanism.
+
+Non-PREEMPT_RT kernels implement rwlock_t as a spinning lock and the
+suffix rules of spinlock_t apply accordingly. The implementation is fair,
+thus preventing writer starvation.
+
+rwlock_t and PREEMPT_RT
+-----------------------
+
+PREEMPT_RT kernels map rwlock_t to a separate rt_mutex-based
+implementation, thus changing semantics:
+
+ - All the spinlock_t changes also apply to rwlock_t.
+
+ - Because an rwlock_t writer cannot grant its priority to multiple
+   readers, a preempted low-priority reader will continue holding its lock,
+   thus starving even high-priority writers.  In contrast, because readers
+   can grant their priority to a writer, a preempted low-priority writer
+   will have its priority boosted until it releases the lock, thus
+   preventing that writer from starving readers.
+
+
+PREEMPT_RT caveats
+==================
+
+spinlock_t and rwlock_t
+-----------------------
+
+These changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
+have a few implications.  For example, on a non-PREEMPT_RT kernel the
+following code sequence works as expected::
+
+   local_irq_disable();
+   spin_lock(&lock);
+
+and is fully equivalent to::
+
+   spin_lock_irq(&lock);
+
+Same applies to rwlock_t and the _irqsave() suffix variants.
+
+On PREEMPT_RT kernels this code sequence breaks because RT-mutex requires a
+fully preemptible context.  Instead, use spin_lock_irq() or
+spin_lock_irqsave() and their unlock counterparts.  In cases where the
+interrupt disabling and locking must remain separate, PREEMPT_RT offers a
+local_lock mechanism.  Acquiring the local_lock pins the task to a CPU,
+allowing things like per-CPU irq-disabled locks to be acquired.  However,
+this approach should be used only where absolutely necessary.
+
+
+raw_spinlock_t
+--------------
+
+Acquiring a raw_spinlock_t disables preemption and possibly also
+interrupts, so the critical section must avoid acquiring a regular
+spinlock_t or rwlock_t, for example, the critical section must avoid
+allocating memory.  Thus, on a non-PREEMPT_RT kernel the following code
+works perfectly::
+
+  raw_spin_lock(&lock);
+  p = kmalloc(sizeof(*p), GFP_ATOMIC);
+
+But this code fails on PREEMPT_RT kernels because the memory allocator is
+fully preemptible and therefore cannot be invoked from truly atomic
+contexts.  However, it is perfectly fine to invoke the memory allocator
+while holding normal non-raw spinlocks because they do not disable
+preemption on PREEMPT_RT kernels::
+
+  spin_lock(&lock);
+  p = kmalloc(sizeof(*p), GFP_ATOMIC);
+
+
+bit spinlocks
+-------------
+
+Bit spinlocks are problematic for PREEMPT_RT as they cannot be easily
+substituted by an RT-mutex based implementation for obvious reasons.
+
+The semantics of bit spinlocks are preserved on PREEMPT_RT kernels and the
+caveats vs. raw_spinlock_t apply.
+
+Some bit spinlocks are substituted by regular spinlock_t for PREEMPT_RT but
+this requires conditional (#ifdef'ed) code changes at the usage site while
+the spinlock_t substitution is simply done by the compiler and the
+conditionals are restricted to header files and core implementation of the
+locking primitives and the usage sites do not require any changes.
+
+
+Lock type nesting rules
+=======================
+
+The most basic rules are:
+
+  - Lock types of the same lock category (sleeping, spinning) can nest
+    arbitrarily as long as they respect the general lock ordering rules to
+    prevent deadlocks.
+
+  - Sleeping lock types cannot nest inside spinning lock types.
+
+  - Spinning lock types can nest inside sleeping lock types.
+
+These rules apply in general independent of CONFIG_PREEMPT_RT.
+
+As PREEMPT_RT changes the lock category of spinlock_t and rwlock_t from
+spinning to sleeping this has obviously restrictions how they can nest with
+raw_spinlock_t.
+
+This results in the following nest ordering:
+
+  1) Sleeping locks
+  2) spinlock_t and rwlock_t
+  3) raw_spinlock_t and bit spinlocks
+
+Lockdep is aware of these constraints to ensure that they are respected.
+
+
+Owner semantics
+===============
+
+Most lock types in the Linux kernel have strict owner semantics, i.e. the
+context (task) which acquires a lock has to release it.
+
+There are two exceptions:
+
+  - semaphores
+  - rwsems
+
+semaphores have no owner semantics for historical reasons, and as such
+trylock and release operations can be called from any context. They are
+often used for both serialization and waiting purposes. That's generally
+discouraged and should be replaced by separate serialization and wait
+mechanisms, such as mutexes and completions.
+
+rwsems have grown interfaces which allow non owner release for special
+purposes. This usage is problematic on PREEMPT_RT because PREEMPT_RT
+substitutes all locking primitives except semaphores with RT-mutex based
+implementations to provide priority inheritance for all lock types except
+the truly spinning ones. Priority inheritance on ownerless locks is
+obviously impossible.
+
+For now the rwsem non-owner release excludes code which utilizes it from
+being used on PREEMPT_RT enabled kernels. In some cases this can be
+mitigated by disabling portions of the code, in other cases the complete
+functionality has to be disabled until a workable solution has been found.