Message ID | Y9eGyzZ+JAqRQvtm@gondor.apana.org.au (mailing list archive)
---|---
State | Accepted
Delegated to | Herbert Xu
Series | crypto: arm64/aes-ccm - Rewrite skcipher walker loop
On Mon, 30 Jan 2023 at 09:58, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> An often overlooked aspect of the skcipher walker API is that an
> error is not just indicated by a non-zero return value, but by the
> fact that walk->nbytes is zero.
>
> Thus it is an error to call skcipher_walk_done after getting back
> walk->nbytes == 0 from the previous interaction with the walker.
>
> This is because when walk->nbytes is zero the walker is left in
> an undefined state and any further calls to it may try to free
> uninitialised stack memory.
>
> The arm64 ccm code has to deal with zero-length messages, and
> it needs to process data even when walk->nbytes == 0 is returned.
> It doesn't have this bug because there is an explicit check for
> walk->nbytes != 0 prior to the skcipher_walk_done call.
>
> However, the loop is still sufficiently different from the usual
> layout and it appears to have been copied into other code which
> then ended up with this bug. This patch rewrites it to follow the
> usual convention of checking walk->nbytes.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
>

This works fine with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y so

Tested-by: Ard Biesheuvel <ardb@kernel.org>

> diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
> index c4f14415f5f0..25cd3808ecbe 100644
> --- a/arch/arm64/crypto/aes-ce-ccm-glue.c
> +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
> @@ -161,43 +161,39 @@ static int ccm_encrypt(struct aead_request *req)
>          memcpy(buf, req->iv, AES_BLOCK_SIZE);
>
>          err = skcipher_walk_aead_encrypt(&walk, req, false);
> -        if (unlikely(err))
> -                return err;
>

Should we keep this? No point in carrying on, and calling
ce_aes_ccm_final() and scatterwalk_map_and_copy() in this state is
best avoided.

>          kernel_neon_begin();
>
>          if (req->assoclen)
>                  ccm_calculate_auth_mac(req, mac);
>
> -        do {
> +        while (walk.nbytes) {
>                  u32 tail = walk.nbytes % AES_BLOCK_SIZE;
> +                bool final = walk.nbytes == walk.total;
>
> -                if (walk.nbytes == walk.total)
> +                if (final)
>                          tail = 0;
>
>                  ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                     walk.nbytes - tail, ctx->key_enc,
>                                     num_rounds(ctx), mac, walk.iv);
>
> -                if (walk.nbytes == walk.total)
> -                        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
> +                if (!final)
> +                        kernel_neon_end();
> +                err = skcipher_walk_done(&walk, tail);
> +                if (!final)
> +                        kernel_neon_begin();
> +        }
>
> -                kernel_neon_end();
> +        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
>
> -                if (walk.nbytes) {
> -                        err = skcipher_walk_done(&walk, tail);
> -                        if (unlikely(err))
> -                                return err;
> -                        if (unlikely(walk.nbytes))
> -                                kernel_neon_begin();
> -                }
> -        } while (walk.nbytes);
> +        kernel_neon_end();
>
>          /* copy authtag to end of dst */
>          scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen,
>                                   crypto_aead_authsize(aead), 1);
>
> -        return 0;
> +        return err;
>  }
>
>  static int ccm_decrypt(struct aead_request *req)
> @@ -219,37 +215,36 @@ static int ccm_decrypt(struct aead_request *req)
>          memcpy(buf, req->iv, AES_BLOCK_SIZE);
>
>          err = skcipher_walk_aead_decrypt(&walk, req, false);
> -        if (unlikely(err))
> -                return err;
>
>          kernel_neon_begin();
>
>          if (req->assoclen)
>                  ccm_calculate_auth_mac(req, mac);
>
> -        do {
> +        while (walk.nbytes) {
>                  u32 tail = walk.nbytes % AES_BLOCK_SIZE;
> +                bool final = walk.nbytes == walk.total;
>
> -                if (walk.nbytes == walk.total)
> +                if (final)
>                          tail = 0;
>
>                  ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                     walk.nbytes - tail, ctx->key_enc,
>                                     num_rounds(ctx), mac, walk.iv);
>
> -                if (walk.nbytes == walk.total)
> -                        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
> +                if (!final)
> +                        kernel_neon_end();
> +                err = skcipher_walk_done(&walk, tail);
> +                if (!final)
> +                        kernel_neon_begin();
> +        }
>
> -                kernel_neon_end();
> +        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
>
> -                if (walk.nbytes) {
> -                        err = skcipher_walk_done(&walk, tail);
> -                        if (unlikely(err))
> -                                return err;
> -                        if (unlikely(walk.nbytes))
> -                                kernel_neon_begin();
> -                }
> -        } while (walk.nbytes);
> +        kernel_neon_end();
> +
> +        if (unlikely(err))
> +                return err;
>
>          /* compare calculated auth tag with the stored one */
>          scatterwalk_map_and_copy(buf, req->src,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
On Mon, Jan 30, 2023 at 11:42:41AM +0100, Ard Biesheuvel wrote:
>
> > diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
> > index c4f14415f5f0..25cd3808ecbe 100644
> > --- a/arch/arm64/crypto/aes-ce-ccm-glue.c
> > +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
> > @@ -161,43 +161,39 @@ static int ccm_encrypt(struct aead_request *req)
> >          memcpy(buf, req->iv, AES_BLOCK_SIZE);
> >
> >          err = skcipher_walk_aead_encrypt(&walk, req, false);
> > -        if (unlikely(err))
> > -                return err;
> >
>
> Should we keep this? No point in carrying on, and calling
> ce_aes_ccm_final() and scatterwalk_map_and_copy() in this state is
> best avoided.

First of all, I don't think there is any risk of information leaks
in this case.  We're simply hashing the associated data by itself,
as if the message were zero-length.

So it's a question of doing unnecessary work on the error path.
The Linux way is to optimise for the common case, so adding a
short-circuit solely to improve the error case seems unnecessary.

For context, the errors we're expecting at this point are memory
allocation failures, not anything untoward in the input data.

Thanks,
On 1/30/23 4:58 PM, Herbert Xu wrote:
> An often overlooked aspect of the skcipher walker API is that an
> error is not just indicated by a non-zero return value, but by the
> fact that walk->nbytes is zero.
>
> Thus it is an error to call skcipher_walk_done after getting back
> walk->nbytes == 0 from the previous interaction with the walker.
>
> This is because when walk->nbytes is zero the walker is left in
> an undefined state and any further calls to it may try to free
> uninitialised stack memory.
>
> The arm64 ccm code has to deal with zero-length messages, and
> it needs to process data even when walk->nbytes == 0 is returned.
> It doesn't have this bug because there is an explicit check for
> walk->nbytes != 0 prior to the skcipher_walk_done call.
>
> However, the loop is still sufficiently different from the usual
> layout and it appears to have been copied into other code which
> then ended up with this bug. This patch rewrites it to follow the
> usual convention of checking walk->nbytes.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
>
> diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
> index c4f14415f5f0..25cd3808ecbe 100644
> --- a/arch/arm64/crypto/aes-ce-ccm-glue.c
> +++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
> @@ -161,43 +161,39 @@ static int ccm_encrypt(struct aead_request *req)
>          memcpy(buf, req->iv, AES_BLOCK_SIZE);
>
>          err = skcipher_walk_aead_encrypt(&walk, req, false);
> -        if (unlikely(err))
> -                return err;
>
>          kernel_neon_begin();
>
>          if (req->assoclen)
>                  ccm_calculate_auth_mac(req, mac);
>
> -        do {
> +        while (walk.nbytes) {
>                  u32 tail = walk.nbytes % AES_BLOCK_SIZE;
> +                bool final = walk.nbytes == walk.total;
>
> -                if (walk.nbytes == walk.total)
> +                if (final)
>                          tail = 0;
>
>                  ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                     walk.nbytes - tail, ctx->key_enc,
>                                     num_rounds(ctx), mac, walk.iv);
>
> -                if (walk.nbytes == walk.total)
> -                        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
> +                if (!final)
> +                        kernel_neon_end();
> +                err = skcipher_walk_done(&walk, tail);
> +                if (!final)
> +                        kernel_neon_begin();
> +        }
>
> -                kernel_neon_end();
> +        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
>
> -                if (walk.nbytes) {
> -                        err = skcipher_walk_done(&walk, tail);
> -                        if (unlikely(err))
> -                                return err;
> -                        if (unlikely(walk.nbytes))
> -                                kernel_neon_begin();
> -                }
> -        } while (walk.nbytes);
> +        kernel_neon_end();
>
>          /* copy authtag to end of dst */
>          scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen,
>                                   crypto_aead_authsize(aead), 1);
>
> -        return 0;
> +        return err;
>  }

I think the following is a cleaner way to rewrite the loop: handle the
last chunk separately, so that both gcm and ccm can be handled in the
same way.

        while (walk.nbytes != walk.total) {
                u32 tail = walk.nbytes % AES_BLOCK_SIZE;

                ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                   walk.nbytes - tail, ctx->key_enc,
                                   num_rounds(ctx), mac, walk.iv);

                kernel_neon_end();
                err = skcipher_walk_done(&walk, tail);
                kernel_neon_begin();
        }

        if (walk.nbytes) {
                ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                   walk.nbytes, ctx->key_enc,
                                   num_rounds(ctx), mac, walk.iv);

                err = skcipher_walk_done(&walk, 0);
        }

        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));

        kernel_neon_end();

I have tested it initially. What is your opinion?

>
>  static int ccm_decrypt(struct aead_request *req)
> @@ -219,37 +215,36 @@ static int ccm_decrypt(struct aead_request *req)
>          memcpy(buf, req->iv, AES_BLOCK_SIZE);
>
>          err = skcipher_walk_aead_decrypt(&walk, req, false);
> -        if (unlikely(err))
> -                return err;
>
>          kernel_neon_begin();
>
>          if (req->assoclen)
>                  ccm_calculate_auth_mac(req, mac);
>
> -        do {
> +        while (walk.nbytes) {
>                  u32 tail = walk.nbytes % AES_BLOCK_SIZE;
> +                bool final = walk.nbytes == walk.total;
>
> -                if (walk.nbytes == walk.total)
> +                if (final)
>                          tail = 0;
>
>                  ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>                                     walk.nbytes - tail, ctx->key_enc,
>                                     num_rounds(ctx), mac, walk.iv);
>
> -                if (walk.nbytes == walk.total)
> -                        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
> +                if (!final)
> +                        kernel_neon_end();
> +                err = skcipher_walk_done(&walk, tail);
> +                if (!final)
> +                        kernel_neon_begin();
> +        }
>
> -                kernel_neon_end();
> +        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
>
> -                if (walk.nbytes) {
> -                        err = skcipher_walk_done(&walk, tail);
> -                        if (unlikely(err))
> -                                return err;
> -                        if (unlikely(walk.nbytes))
> -                                kernel_neon_begin();
> -                }
> -        } while (walk.nbytes);
> +        kernel_neon_end();
> +
> +        if (unlikely(err))
> +                return err;
>
>          /* compare calculated auth tag with the stored one */
>          scatterwalk_map_and_copy(buf, req->src,
On Wed, Feb 01, 2023 at 10:53:37AM +0800, Tianjia Zhang wrote:
>
>         while (walk.nbytes != walk.total) {

This is still buggy, because we can have walk.nbytes == 0 and
walk.nbytes != walk.total.  You will enter the loop and call
skcipher_walk_done, which is not allowed.

That is why you should follow the standard calling convention for
skcipher walks: always check for walk.nbytes != 0, not whether the
walk returned an error.

Cheers,
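[For readers less familiar with the convention Herbert describes, here
is a minimal sketch of the standard skcipher walk loop shape. It is
illustrative only: process_chunk() is a hypothetical stand-in for the
cipher-specific work, and the surrounding setup is elided.]

        /*
         * Conventional skcipher walk loop: iterate while walk.nbytes
         * is non-zero, so skcipher_walk_done() is never called once
         * the walker has reported walk.nbytes == 0 (the error case).
         */
        err = skcipher_walk_virt(&walk, req, false);

        while (walk.nbytes) {
                unsigned int nbytes = walk.nbytes;

                /* all but the final chunk must be whole blocks */
                if (nbytes < walk.total)
                        nbytes = round_down(nbytes, AES_BLOCK_SIZE);

                /* hypothetical per-chunk cipher work */
                process_chunk(walk.dst.virt.addr, walk.src.virt.addr,
                              nbytes, walk.iv);

                /* report how many bytes were left unprocessed */
                err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
        }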
Hi Herbert,

On 2/1/23 4:55 PM, Herbert Xu wrote:
> On Wed, Feb 01, 2023 at 10:53:37AM +0800, Tianjia Zhang wrote:
>>
>>         while (walk.nbytes != walk.total) {
>
> This is still buggy, because we can have walk.nbytes == 0 and
> walk.nbytes != walk.total.  You will enter the loop and call
> skcipher_walk_done, which is not allowed.
>
> That is why you should follow the standard calling convention for
> skcipher walks: always check for walk.nbytes != 0, not whether the
> walk returned an error.
>
> Cheers,

According to your previous reply, the walker ensures that nbytes is
at least one chunk in each iteration, so if walk.nbytes == 0 it must
be the last chunk.  In that case, walk.nbytes == 0 &&
walk.nbytes != walk.total cannot occur.  Sorry, I am not very clear
on the details of the walker, so I may be misunderstanding.

Cheers,
Tianjia
On Wed, Feb 01, 2023 at 05:15:23PM +0800, Tianjia Zhang wrote:
>
> According to your previous reply, the walker ensures that nbytes is
> at least one chunk in each iteration, so if walk.nbytes == 0 it must

walk.nbytes == 0 is used to indicate an error.  Of course you could
check for an error return in addition to checking walk.nbytes, but
that is how this bug got created in the first place.

So always check for walk.nbytes == 0.

Cheers,
Hi Herbert,

On 2/1/23 5:21 PM, Herbert Xu wrote:
> On Wed, Feb 01, 2023 at 05:15:23PM +0800, Tianjia Zhang wrote:
>>
>> According to your previous reply, the walker ensures that nbytes is
>> at least one chunk in each iteration, so if walk.nbytes == 0 it must
>
> walk.nbytes == 0 is used to indicate an error.  Of course you could
> check for an error return in addition to checking walk.nbytes, but
> that is how this bug got created in the first place.
>
> So always check for walk.nbytes == 0.
>
> Cheers,

It seems that only the loop condition needs fixing, then.  If the
loop condition in the code above is changed to

        while (walk.nbytes && walk.nbytes != walk.total)

then the processing of the last chunk is still split out of the loop,
which is logically clearer.  What is your opinion?

Thanks,
Tianjia
On Wed, Feb 01, 2023 at 05:43:00PM +0800, Tianjia Zhang wrote:
>
> It seems that only the loop condition needs fixing, then.  If the
> loop condition in the code above is changed to
>
>         while (walk.nbytes && walk.nbytes != walk.total)
>
> then the processing of the last chunk is still split out of the loop,
> which is logically clearer.  What is your opinion?

Yes, that should work.

Cheers,
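[Putting the agreed pieces together, the encrypt side would take
roughly the shape below. This is a sketch assembled from Tianjia's
earlier proposal plus the corrected loop condition; it is not the
final submitted patch.]

        /*
         * Bulk chunks: walk.nbytes != 0 guards against the error
         * case, walk.nbytes != walk.total keeps the final chunk out
         * of the loop.
         */
        while (walk.nbytes && walk.nbytes != walk.total) {
                u32 tail = walk.nbytes % AES_BLOCK_SIZE;

                ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                   walk.nbytes - tail, ctx->key_enc,
                                   num_rounds(ctx), mac, walk.iv);

                kernel_neon_end();
                err = skcipher_walk_done(&walk, tail);
                kernel_neon_begin();
        }

        /*
         * Final chunk: no tail to hold back.  If the walk failed,
         * walk.nbytes is 0 and this block is skipped, so
         * skcipher_walk_done() is never called on a failed walker.
         */
        if (walk.nbytes) {
                ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                   walk.nbytes, ctx->key_enc,
                                   num_rounds(ctx), mac, walk.iv);

                err = skcipher_walk_done(&walk, 0);
        }

        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));

        kernel_neon_end();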
On 2/1/23 5:50 PM, Herbert Xu wrote:
> On Wed, Feb 01, 2023 at 05:43:00PM +0800, Tianjia Zhang wrote:
>>
>> It seems that only the loop condition needs fixing, then.  If the
>> loop condition in the code above is changed to
>>
>>         while (walk.nbytes && walk.nbytes != walk.total)
>>
>> then the processing of the last chunk is still split out of the loop,
>> which is logically clearer.  What is your opinion?
>
> Yes, that should work.
>
> Cheers,

Thanks, I will rewrite the loop in this form.

Best regards,
Tianjia
diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index c4f14415f5f0..25cd3808ecbe 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -161,43 +161,39 @@ static int ccm_encrypt(struct aead_request *req)
         memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
         err = skcipher_walk_aead_encrypt(&walk, req, false);
-        if (unlikely(err))
-                return err;
 
         kernel_neon_begin();
 
         if (req->assoclen)
                 ccm_calculate_auth_mac(req, mac);
 
-        do {
+        while (walk.nbytes) {
                 u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+                bool final = walk.nbytes == walk.total;
 
-                if (walk.nbytes == walk.total)
+                if (final)
                         tail = 0;
 
                 ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    walk.nbytes - tail, ctx->key_enc,
                                    num_rounds(ctx), mac, walk.iv);
 
-                if (walk.nbytes == walk.total)
-                        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+                if (!final)
+                        kernel_neon_end();
+                err = skcipher_walk_done(&walk, tail);
+                if (!final)
+                        kernel_neon_begin();
+        }
 
-                kernel_neon_end();
+        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
 
-                if (walk.nbytes) {
-                        err = skcipher_walk_done(&walk, tail);
-                        if (unlikely(err))
-                                return err;
-                        if (unlikely(walk.nbytes))
-                                kernel_neon_begin();
-                }
-        } while (walk.nbytes);
+        kernel_neon_end();
 
         /* copy authtag to end of dst */
         scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen,
                                  crypto_aead_authsize(aead), 1);
 
-        return 0;
+        return err;
 }
 
 static int ccm_decrypt(struct aead_request *req)
@@ -219,37 +215,36 @@ static int ccm_decrypt(struct aead_request *req)
         memcpy(buf, req->iv, AES_BLOCK_SIZE);
 
         err = skcipher_walk_aead_decrypt(&walk, req, false);
-        if (unlikely(err))
-                return err;
 
         kernel_neon_begin();
 
         if (req->assoclen)
                 ccm_calculate_auth_mac(req, mac);
 
-        do {
+        while (walk.nbytes) {
                 u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+                bool final = walk.nbytes == walk.total;
 
-                if (walk.nbytes == walk.total)
+                if (final)
                         tail = 0;
 
                 ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    walk.nbytes - tail, ctx->key_enc,
                                    num_rounds(ctx), mac, walk.iv);
 
-                if (walk.nbytes == walk.total)
-                        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
+                if (!final)
+                        kernel_neon_end();
+                err = skcipher_walk_done(&walk, tail);
+                if (!final)
+                        kernel_neon_begin();
+        }
 
-                kernel_neon_end();
+        ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
 
-                if (walk.nbytes) {
-                        err = skcipher_walk_done(&walk, tail);
-                        if (unlikely(err))
-                                return err;
-                        if (unlikely(walk.nbytes))
-                                kernel_neon_begin();
-                }
-        } while (walk.nbytes);
+        kernel_neon_end();
+
+        if (unlikely(err))
+                return err;
 
         /* compare calculated auth tag with the stored one */
         scatterwalk_map_and_copy(buf, req->src,
An often overlooked aspect of the skcipher walker API is that an
error is not just indicated by a non-zero return value, but by the
fact that walk->nbytes is zero.

Thus it is an error to call skcipher_walk_done after getting back
walk->nbytes == 0 from the previous interaction with the walker.

This is because when walk->nbytes is zero the walker is left in
an undefined state and any further calls to it may try to free
uninitialised stack memory.

The arm64 ccm code has to deal with zero-length messages, and
it needs to process data even when walk->nbytes == 0 is returned.
It doesn't have this bug because there is an explicit check for
walk->nbytes != 0 prior to the skcipher_walk_done call.

However, the loop is still sufficiently different from the usual
layout and it appears to have been copied into other code which
then ended up with this bug.  This patch rewrites it to follow the
usual convention of checking walk->nbytes.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
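[To make the anti-pattern concrete, here is a hedged illustration of
the loop shape the commit message warns against, alongside the
convention it recommends. The loop bodies are placeholders; this is
not code from any real driver.]

        /*
         * BUGGY (illustrative only): a do/while body runs at least
         * once, so when the walk fails and walk.nbytes == 0,
         * skcipher_walk_done() is still called on a walker left in
         * an undefined state.
         */
        err = skcipher_walk_aead_encrypt(&walk, req, false);
        do {
                /* ... process walk.nbytes bytes ... */
                err = skcipher_walk_done(&walk, 0);
        } while (walk.nbytes);

        /*
         * Correct convention: guard every iteration on walk.nbytes,
         * so a failed walk never reaches skcipher_walk_done().
         */
        err = skcipher_walk_aead_encrypt(&walk, req, false);
        while (walk.nbytes) {
                /* ... process walk.nbytes bytes ... */
                err = skcipher_walk_done(&walk, 0);
        }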