
[1/8] crypto: chacha-generic - fix use as arm64 no-NEON fallback

Message ID: 20190313051252.2917-2-ebiggers@kernel.org
State: Accepted
Delegated to: Herbert Xu
Series: crypto: test the !may_use_simd() fallback code

Commit Message

Eric Biggers March 13, 2019, 5:12 a.m. UTC
From: Eric Biggers <ebiggers@google.com>

The arm64 implementations of ChaCha and XChaCha are failing the extra
crypto self-tests following my patches to test the !may_use_simd() code
paths, which previously were untested.  The problem is as follows:

When !may_use_simd(), the arm64 NEON implementations fall back to the
generic implementation, which uses the skcipher_walk API to iterate
through the src/dst scatterlists.  Due to how the skcipher_walk API
works, walk.stride is set from the skcipher_alg actually being used,
which in this case is the arm64 NEON algorithm.  Thus walk.stride is
5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.
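
For illustration, here is a toy userspace model of that relationship: the
NEON algorithm advertises a five-block walksize (so its SIMD routine can
process five blocks per iteration), and the walk API copies that value into
walk.stride regardless of which code path ends up doing the work.  The
struct and field names below only echo the kernel's skcipher API; this is a
sketch of the mechanism, not kernel code.

	#include <stdio.h>

	#define CHACHA_BLOCK_SIZE 64

	/* Toy stand-ins for the kernel's skcipher_alg / skcipher_walk. */
	struct toy_skcipher_alg {
		const char *driver_name;
		unsigned int walksize;	/* copied into walk.stride by the walk API */
	};

	struct toy_skcipher_walk {
		unsigned int stride;
	};

	/* The arm64 NEON alg advertises a 5-block walksize. */
	static const struct toy_skcipher_alg chacha20_neon = {
		.driver_name	= "chacha20-neon",
		.walksize	= 5 * CHACHA_BLOCK_SIZE,
	};

	static void toy_walk_start(struct toy_skcipher_walk *walk,
				   const struct toy_skcipher_alg *alg)
	{
		/* The stride follows the alg in use, even on the fallback path. */
		walk->stride = alg->walksize;
	}

	int main(void)
	{
		struct toy_skcipher_walk walk;

		toy_walk_start(&walk, &chacha20_neon);
		printf("walk.stride = %u (CHACHA_BLOCK_SIZE = %u)\n",
		       walk.stride, CHACHA_BLOCK_SIZE);	/* prints 320 and 64 */
		return 0;
	}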

This unnecessarily large stride shouldn't cause an actual problem.
However, the generic implementation computes round_down(nbytes,
walk.stride).  round_down() assumes the round amount is a power of 2,
which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.

This causes the following case in skcipher_walk_done() to be hit,
causing a WARN() and failing the encryption operation:

	if (WARN_ON(err)) {
		/* unexpected case; didn't process all bytes */
		err = -EINVAL;
		goto finish;
	}

Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.

(Or we could replace round_down() with rounddown(), but that would add a
slow division operation every time, which I think we should avoid.)
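
To make the power-of-2 assumption (and the rounddown() trade-off) concrete,
here is a minimal userspace sketch using simplified copies of the two kernel
macros; the 500-byte length is just an arbitrary example, not taken from a
real request:

	#include <stdio.h>

	#define CHACHA_BLOCK_SIZE 64

	/* Simplified userspace copies of the kernel macros. */
	#define round_down(x, y) ((x) & ~((__typeof__(x))((y) - 1)))	/* y must be a power of 2 */
	#define rounddown(x, y)  ((x) - ((x) % (y)))			/* any y, but needs a division */

	int main(void)
	{
		unsigned int stride = 5 * CHACHA_BLOCK_SIZE;	/* 320, not a power of 2 */
		unsigned int nbytes = 500;			/* arbitrary example length */

		/* Wrong: 500 & ~319 = 192, not the expected 320. */
		printf("round_down(%u, %u) = %u\n", nbytes, stride, round_down(nbytes, stride));
		/* Right for any stride, but uses '%': 500 - (500 % 320) = 320. */
		printf("rounddown(%u, %u) = %u\n", nbytes, stride, rounddown(nbytes, stride));
		/* What this patch computes instead: 500 & ~63 = 448. */
		printf("round_down(%u, %u) = %u\n", nbytes, CHACHA_BLOCK_SIZE,
		       round_down(nbytes, CHACHA_BLOCK_SIZE));
		return 0;
	}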

Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/chacha_generic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Ard Biesheuvel March 13, 2019, 7:50 a.m. UTC | #1
On Wed, 13 Mar 2019 at 06:15, Eric Biggers <ebiggers@kernel.org> wrote:
>
> From: Eric Biggers <ebiggers@google.com>
>
> The arm64 implementations of ChaCha and XChaCha are failing the extra
> crypto self-tests following my patches to test the !may_use_simd() code
> paths, which previously were untested.  The problem is as follows:
>
> When !may_use_simd(), the arm64 NEON implementations fall back to the
> generic implementation, which uses the skcipher_walk API to iterate
> through the src/dst scatterlists.  Due to how the skcipher_walk API
> works, walk.stride is set from the skcipher_alg actually being used,
> which in this case is the arm64 NEON algorithm.  Thus walk.stride is
> 5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE.
>
> This unnecessarily large stride shouldn't cause an actual problem.
> However, the generic implementation computes round_down(nbytes,
> walk.stride).  round_down() assumes the round amount is a power of 2,
> which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result.
>
> This causes the following case in skcipher_walk_done() to be hit,
> causing a WARN() and failing the encryption operation:
>
>         if (WARN_ON(err)) {
>                 /* unexpected case; didn't process all bytes */
>                 err = -EINVAL;
>                 goto finish;
>         }
>
> Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride.
>
> (Or we could replace round_down() with rounddown(), but that would add a
> slow division operation every time, which I think we should avoid.)
>
> Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed")
> Cc: <stable@vger.kernel.org> # v5.0+
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> ---
>  crypto/chacha_generic.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c
> index 35b583101f4f..90ec0ec1b4f7 100644
> --- a/crypto/chacha_generic.c
> +++ b/crypto/chacha_generic.c
> @@ -52,7 +52,7 @@ static int chacha_stream_xor(struct skcipher_request *req,
>                 unsigned int nbytes = walk.nbytes;
>
>                 if (nbytes < walk.total)
> -                       nbytes = round_down(nbytes, walk.stride);
> +                       nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);
>
>                 chacha_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
>                                nbytes, ctx->nrounds);
> --
> 2.21.0
>

Patch

diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c
index 35b583101f4f..90ec0ec1b4f7 100644
--- a/crypto/chacha_generic.c
+++ b/crypto/chacha_generic.c
@@ -52,7 +52,7 @@  static int chacha_stream_xor(struct skcipher_request *req,
 		unsigned int nbytes = walk.nbytes;
 
 		if (nbytes < walk.total)
-			nbytes = round_down(nbytes, walk.stride);
+			nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);
 
 		chacha_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
 			       nbytes, ctx->nrounds);