
[v3,0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys

Message ID 20220209153535.818830-1-mark.rutland@arm.com

Message

Mark Rutland Feb. 9, 2022, 3:35 p.m. UTC
This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
mechanism allowing the preemption functions to be enabled/disabled using
static keys rather than static calls, with architectures selecting
whether they use static calls or static keys.

With non-inline static calls, each function call results in a call to
the (out-of-line) trampoline which either tail-calls its associated
callee or performs an early return.
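
For reference, the existing static_call flavour is wired up roughly as
in the sketch below. This is an illustrative simplification modelled on
the upstream cond_resched() plumbing, not the exact code in this
series:

  #include <linux/static_call.h>

  /* One out-of-line trampoline per patchable function. */
  DEFINE_STATIC_CALL(cond_resched, __cond_resched);

  /* Every call site branches to the trampoline... */
  #define _cond_resched() static_call(cond_resched)()

  /*
   * ...and the trampoline is patched at runtime to either tail-call
   * the real callee or return immediately, e.g. from
   * sched_dynamic_update():
   *
   *   static_call_update(cond_resched, __cond_resched);         (enabled)
   *   static_call_update(cond_resched, __static_call_return0);  (disabled)
   */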

The key idea is that where we're only enabling/disabling a single
callee, we can inline this trampoline into the start of the callee,
using a static key to decide whether to return early, and leaving the
remaining codegen to the compiler. The overhead should be similar to
(and likely lower than) using a static call trampoline. Since most
codegen is up to the compiler, we sidestep a number of implementation
pain-points (e.g. things like CFI should "just work" as well as they do
for any other functions).
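
Concretely, the static-key flavour ends up looking something like the
following minimal sketch (using cond_resched() as the example; the key
and function names here are illustrative assumptions, not necessarily
what the patches use):

  #include <linux/jump_label.h>
  #include <linux/sched.h>

  /* Default-disabled, matching the v3 change to __cond_resched(). */
  static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);

  int dynamic_cond_resched(void)
  {
          /*
           * The "trampoline" is now a single patched branch at the
           * start of the callee: return early when the key is
           * disabled, fall through to the real work when enabled.
           */
          if (!static_branch_unlikely(&sk_dynamic_cond_resched))
                  return 0;
          return __cond_resched();
  }

  /*
   * Switching preemption model then reduces to flipping the branch,
   * e.g. from sched_dynamic_update():
   *
   *   static_branch_enable(&sk_dynamic_cond_resched);
   *   static_branch_disable(&sk_dynamic_cond_resched);
   */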

The bulk of the diffstat for kernel/sched/core.c comes from shuffling
the PREEMPT_DYNAMIC code later in the file; the actual additions are
fairly trivial.

I've given this very light build+boot testing so far.

Since v1 [1]:
* Rework Kconfig text to be clearer
* Rework arm64 entry code
* Clarify commit messages

Since v2 [2]:
* Add missing includes
* Always provide prototype for preempt_schedule()
* Always provide prototype for preempt_schedule_notrace()
* Fix __cond_resched() to default to disabled
* Fix might_resched() to default to disabled
* Clarify example in commit message

[1] https://lore.kernel.org/r/20211109172408.49641-1-mark.rutland@arm.com/
[2] https://lore.kernel.org/r/20220204150557.434610-1-mark.rutland@arm.com/

Mark Rutland (7):
  sched/preempt: move PREEMPT_DYNAMIC logic later
  sched/preempt: refactor sched_dynamic_update()
  sched/preempt: simplify irqentry_exit_cond_resched() callers
  sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY
  sched/preempt: add PREEMPT_DYNAMIC using static keys
  arm64: entry: centralize preemption decision
  arm64: support PREEMPT_DYNAMIC

 arch/Kconfig                     |  37 +++-
 arch/arm64/Kconfig               |   1 +
 arch/arm64/include/asm/preempt.h |  19 +-
 arch/arm64/kernel/entry-common.c |  28 ++-
 arch/x86/Kconfig                 |   2 +-
 arch/x86/include/asm/preempt.h   |  10 +-
 include/linux/entry-common.h     |  15 +-
 include/linux/kernel.h           |   7 +-
 include/linux/sched.h            |  10 +-
 kernel/entry/common.c            |  23 +-
 kernel/sched/core.c              | 347 ++++++++++++++++++-------------
 11 files changed, 327 insertions(+), 172 deletions(-)

Comments

Frederic Weisbecker Feb. 9, 2022, 7:58 p.m. UTC | #1
On Wed, Feb 09, 2022 at 03:35:28PM +0000, Mark Rutland wrote:
> This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
> mechanism allowing the preemption functions to be enabled/disabled using
> static keys rather than static calls, with architectures selecting
> whether they use static calls or static keys.
> 
> [...]

Acked-by: Frederic Weisbecker <frederic@kernel.org>

Thanks!

Ard Biesheuvel Feb. 10, 2022, 9:29 a.m. UTC | #2
On Wed, 9 Feb 2022 at 16:35, Mark Rutland <mark.rutland@arm.com> wrote:
>
> This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
> mechanism allowing the preemption functions to be enabled/disabled using
> static keys rather than static calls, with architectures selecting
> whether they use static calls or static keys.
>
> [...]

Acked-by: Ard Biesheuvel <ardb@kernel.org>