
[RFC,0/5] running kernel mode SIMD with softirqs disabled

Message ID 20201218170106.23280-1-ardb@kernel.org

Message

Ard Biesheuvel Dec. 18, 2020, 5:01 p.m. UTC
[ TL;DR for the non-ARM folks on CC: disabling softirq processing when using
  SIMD in kernel mode could reduce complexity and improve performance, but we
  need to decide whether we can do this, and how much softirq processing
  latency we can tolerate. If we can find a satisfactory solution for this,
  we might do the same for x86 and 32-bit ARM as well. ]

The crypto API provides two ways to invoke symmetric encryption algorithms:
- synchronously, where the transformation is guaranteed to be done by the
  time the function returns;
- asynchronously, where the function may return -EINPROGRESS, and a
  completion will be signalled when the transformation is done.
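
For reference, both styles go through the same request API; a minimal
sketch using the skcipher interface (error handling is omitted, and key,
src, dst, len and iv are placeholder buffers/scatterlists):

        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        DECLARE_CRYPTO_WAIT(wait);
        int err;

        tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
        crypto_skcipher_setkey(tfm, key, keylen);
        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, src, dst, len, iv);

        /* asynchronous style: encrypt may return -EINPROGRESS, in which
         * case crypto_wait_req() sleeps until the completion is signalled;
         * a synchronous implementation completes before returning */
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);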

The latter is mainly intended for h/w accelerators, where throughput would
otherwise be severely limited by the latency of each request. However, it is
also being used for software algorithms based on SIMD instructions, which
cannot be issued from arbitrary context (the rules differ between
architectures, but typically, SIMD can be used in task context, or in softirq
context provided that the softirq did not interrupt code that was already
using SIMD in kernel mode).
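
In practice, this means every SIMD code path has to be guarded, with a
fallback for contexts where SIMD is not usable; roughly (the two encrypt
function names below are illustrative):

        if (crypto_simd_usable()) {
                /* task context, or a softirq that did not interrupt
                 * kernel mode SIMD */
                kernel_neon_begin();
                gcm_encrypt_neon(dst, src, len);
                kernel_neon_end();
        } else {
                gcm_encrypt_scalar(dst, src, len);
        }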

Many users of the crypto API in the kernel today opt out of this
asynchronous interface (802.11, macsec, kerberos, sw kTLS), or use a library
interface that is fundamentally synchronous (wireguard). This means we end
up with a degraded mode both in the contended case (a scalar fallback) and
in the uncontended case (generic GCM/CCM/CTR chaining mode templates wrapped
around the SIMD cipher, as opposed to accelerated implementations of the full
chaining modes in question). Note that scalar AES runs ~20x slower than the
SIMD instruction based version.
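
The opt-out itself is a one-liner; for instance, mac80211 requests a
synchronous-only CCM transform by passing CRYPTO_ALG_ASYNC in the mask,
which filters out all asynchronous implementations:

        tfm = crypto_alloc_aead("ccm(aes)", 0, CRYPTO_ALG_ASYNC);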

So let's address this for arm64 by reorganizing kernel mode SIMD support so
that the SIMD unit can always be assumed to be available. This means we need
to defer softirq processing when grabbing the NEON unit in task context, so
that any use of it in softirq context is guaranteed not to interrupt code
that was already using the NEON.
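
The core of the change, as a sketch of the idea rather than the literal
patch:

        void kernel_neon_begin(void)
        {
                BUG_ON(!may_use_simd());
                local_bh_disable();     /* was: preempt_disable() */
                /* preserve and invalidate the current owner's state */
                fpsimd_save_and_flush_cpu_state();
        }

        void kernel_neon_end(void)
        {
                local_bh_enable();      /* was: preempt_enable() */
        }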

This obviously impacts softirq processing latency, which is why the existing
conditional NEON yield support is modified to take pending softirqs into
account.

As an example of how this impacts the code, the existing arm64 GCM driver is
updated to:
- Add yield support (see the sketch after this list). Currently, the pending
  softirq check is performed every 64 bytes of input, which is far too often;
  one of the desired outcomes of this RFC is a reasonable ballpark for how
  long we want to run with softirqs disabled.
- Remove the existing scalar fallbacks, which are no longer needed.
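
The yield condition mentioned in the first bullet, as a conceptual C
analogue (the real check lives in the cond_yield assembler macro; the
helper name is illustrative):

        static inline bool neon_yield_needed(void)
        {
                /* give up the NEON unit if a reschedule or a softirq
                 * is pending on this CPU */
                return test_thread_flag(TIF_NEED_RESCHED) ||
                       local_softirq_pending();
        }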

Questions:
- what did I miss or break horribly?
- does any of this matter for RT? AIUI, RT runs softirqs from a dedicated
  kthread, so I don't think it cares.
- what would be a reasonable upper bound to keep softirqs disabled? I suppose
  100s of cycles or less is overkill, but I'm not sure how to derive a better
  answer.
- could we do the same on x86, now that kernel_fpu_begin/end is no longer
  expensive?

Cc: Dave Martin <dave.martin@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>

Ard Biesheuvel (5):
  crypto: aead - disallow en/decrypt for non-task or non-softirq context
  crypto: skcipher - disallow en/decrypt for non-task or non-softirq
    context
  crypto: arm64/gcm-aes-ce - add NEON yield support
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  crypto: arm64/gcm-aes-ce - remove non-SIMD fallback path

 arch/arm64/crypto/ghash-ce-core.S  | 115 ++++++-----
 arch/arm64/crypto/ghash-ce-glue.c  | 209 +++++---------------
 arch/arm64/include/asm/assembler.h |  19 +-
 arch/arm64/kernel/asm-offsets.c    |   2 +
 arch/arm64/kernel/fpsimd.c         |   4 +-
 crypto/aead.c                      |  10 +
 crypto/skcipher.c                  |  10 +
 7 files changed, 155 insertions(+), 214 deletions(-)

Comments

Herbert Xu Dec. 19, 2020, 2:04 a.m. UTC | #1
On Fri, Dec 18, 2020 at 06:01:01PM +0100, Ard Biesheuvel wrote:
>
> Questions:
> - what did I miss or break horribly?
> - does any of this matter for RT? AIUI, RT runs softirqs from a dedicated
>   kthread, so I don't think it cares.
> - what would be a reasonable upper bound to keep softirqs disabled? I suppose
>   100s of cycles or less is overkill, but I'm not sure how to derive a better
>   answer.
> - could we do the same on x86, now that kernel_fpu_begin/end is no longer
>   expensive?

If this approach works, not only would it allow us to support the
synchronous users better, it would also allow us to remove loads
of cruft in the Crypto API that exists solely to support these SIMD
code paths.

So I eagerly await the assessment of the scheduler/RT folks on this
approach.

Thanks,
Ard Biesheuvel Jan. 14, 2021, 8:22 a.m. UTC | #2
On Sat, 19 Dec 2020 at 03:05, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Dec 18, 2020 at 06:01:01PM +0100, Ard Biesheuvel wrote:
> >
> > Questions:
> > - what did I miss or break horribly?
> > - does any of this matter for RT? AIUI, RT runs softirqs from a dedicated
> >   kthread, so I don't think it cares.
> > - what would be a reasonable upper bound to keep softirqs disabled? I suppose
> >   100s of cycles or less is overkill, but I'm not sure how to derive a better
> >   answer.
> > - could we do the same on x86, now that kernel_fpu_begin/end is no longer
> >   expensive?
>
> If this approach works, not only would it allow us to support the
> synchronous users better, it would also allow us to remove loads
> of cruft in the Crypto API that exists solely to support these SIMD
> code paths.
>
> So I eagerly await the assessment of the scheduler/RT folks on this
> approach.
>

Any insights here? Is there a ballpark upper bound for the duration of
a softirq-disabled section? Are there other reasons why disabling/enabling
softirq handling would be a bad idea?
Peter Zijlstra Feb. 16, 2021, 10:09 a.m. UTC | #3
On Fri, Dec 18, 2020 at 06:01:01PM +0100, Ard Biesheuvel wrote:
> [ TL;DR for the non-ARM folks on CC: disabling softirq processing when using
>   SIMD in kernel mode could reduce complexity and improve performance, but we
>   need to decide whether we can do this, and how much softirq processing
>   latency we can tolerate. If we can find a satisfactory solution for this,
>   we might do the same for x86 and 32-bit ARM as well. ]

> - could we do the same on x86, now that kernel_fpu_begin/end is no longer
>   expensive?

Can't we simply save/restore the relevant register set?

So something like (note amluto was wanting to add a regset argument):

	<task>
	kernel_fpu_begin(MMX)
		<SIRQ>
		kernel_fpu_begin(SSE)
		kernel_fpu_end();
		</SIRQ>
	...
	kernel_fpu_end()

We would have to save the MMX regs on the first SIRQ invocation of
kernel_fpu_begin(), and then have the softirq context termination (</SIRQ>
above) restore them.

I mean, we already do much the same for the first kernel_fpu_begin(),
which has to save the user registers; those will be restored when we go
back to userspace.

So why not do exactly the same for softirq context?
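
Sketched out, that scheme might look like this (hypothetical only:
kernel_fpu_begin() takes no regset argument today, and the helpers and
per-CPU variables below are invented for illustration):

        enum kfpu_regset { KFPU_MMX, KFPU_SSE };

        struct kfpu_area {
                u8 regs[512] __aligned(64);     /* fxsave-sized save area */
        };
        static DEFINE_PER_CPU(struct kfpu_area, kfpu_sirq_save);
        static DEFINE_PER_CPU(bool, kfpu_task_live);
        static DEFINE_PER_CPU(bool, kfpu_sirq_saved);

        void kernel_fpu_begin(enum kfpu_regset set)
        {
                if (in_serving_softirq() &&
                    __this_cpu_read(kfpu_task_live) &&
                    !__this_cpu_read(kfpu_sirq_saved)) {
                        /* first SIRQ use: stash the task's kernel regs */
                        kfpu_save_regs(this_cpu_ptr(&kfpu_sirq_save), set);
                        __this_cpu_write(kfpu_sirq_saved, true);
                } else if (!in_serving_softirq()) {
                        __this_cpu_write(kfpu_task_live, true);
                }
        }

        /* at </SIRQ>: restore whatever the first nested begin saved */
        void kfpu_softirq_exit(void)
        {
                if (__this_cpu_xchg(kfpu_sirq_saved, false))
                        kfpu_restore_regs(this_cpu_ptr(&kfpu_sirq_save));
        }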
Ard Biesheuvel Feb. 16, 2021, 10:35 a.m. UTC | #4
On Tue, 16 Feb 2021 at 11:10, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Dec 18, 2020 at 06:01:01PM +0100, Ard Biesheuvel wrote:
> > [ TL;DR for the non-ARM folks on CC: disabling softirq processing when using
> >   SIMD in kernel mode could reduce complexity and improve performance, but we
> >   need to decide whether we can do this, and how much softirq processing
> >   latency we can tolerate. If we can find a satisfactory solution for this,
> >   we might do the same for x86 and 32-bit ARM as well. ]
>
> > - could we do the same on x86, now that kernel_fpu_begin/end is no longer
> >   expensive?
>
> Can't we simply save/restore the relevant register set?
>
> So something like (note amluto was wanting to add a regset argument):
>
>         <task>
>         kernel_fpu_begin(MMX)
>                 <SIRQ>
>                 kernel_fpu_begin(SSE)
>                 kernel_fpu_end();
>                 </SIRQ>
>         ...
>         kernel_fpu_end()
>
> We would have to save the MMX regs on the first SIRQ invocation of
> kernel_fpu_begin(), and then have the softirq context termination (</SIRQ>
> above) restore them.
>
> I mean, we already do much the same for the first kernel_fpu_begin(),
> which has to save the user registers; those will be restored when we go
> back to userspace.
>
> So why not do exactly the same for softirq context?

That is what we originally had on arm64, with per-CPU buffers of the
appropriate size. This became a bit messy when SVE support was added,
because the register file is so large (32 registers of up to 2048 bits
each), and since the kernel does not use SVE itself, we want the inner
per-CPU buffer to only cover 128 bits per register. This means we
cannot allow the <sirq></sirq> region above to interrupt the outer
preserve (which is implemented entirely in software), since resuming
the outer preserve after a sirq would preserve content that was
corrupted by the inner preserve/restore. This could be addressed by
disabling interrupts across the outer preserve, but this caused a
problem somewhere else (Dave might remember the details better than I
do). So we ended up making SIMD in task context mutually exclusive with
SIMD in softirq context, in part because that is what 32-bit ARM and x86
were already doing.

But I understand that these concerns may not apply to x86 at all, so perhaps
the answer there is indeed to have an alternate buffer. And actually, Andy L.
suggested the same when I asked him about it on IRC.
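
For scale, the asymmetry behind the SVE concern, as a rough illustration
(the real FPSIMD layout is struct user_fpsimd_state; the two struct names
here are made up):

        /* 128 bits x 32 registers: 512 bytes per CPU */
        struct fpsimd_nest_buf { __uint128_t vregs[32]; };

        /* 2048 bits x 32 registers: 8 KiB per CPU at the maximum vector
         * length, not counting the predicate registers */
        struct sve_nest_buf { u8 zregs[32][2048 / 8]; };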