| Message ID | 20230118141928.48136-1-tianjia.zhang@linux.alibaba.com |
|---|---|
| State | New, archived |
| Series | crypto: arm64/sm4 - Fix possible crash in GCM cryption |
On Wed, Jan 18, 2023 at 10:19:28PM +0800, Tianjia Zhang wrote:
> When the cryption total length is zero, GCM cryption call
> skcipher_walk_done() will cause an unexpected crash, so skip calling
> this function to avoid possible crash when the GCM cryption length
> is equal to zero.
>
> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> index c450a2025ca9..9b63bcf9aa85 100644
> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> @@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>
>  		kernel_neon_end();
>
> -		err = skcipher_walk_done(walk, tail);
> -		if (err)
> -			return err;
> -		if (walk->nbytes)
> -			kernel_neon_begin();
> +		if (walk->nbytes) {

Please do

	if (!walk->nbytes)
		break;

As an additional improvement, the tail calculation can be removed
entirely because you already set the chunksize so the walker should
only be feeding you multiples of chunksize except at the end.

Cheers,
Hi Herbert,

On 1/18/23 10:54 PM, Herbert Xu wrote:
> On Wed, Jan 18, 2023 at 10:19:28PM +0800, Tianjia Zhang wrote:
>> When the cryption total length is zero, GCM cryption call
>> skcipher_walk_done() will cause an unexpected crash, so skip calling
>> this function to avoid possible crash when the GCM cryption length
>> is equal to zero.
>>
>> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
>> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
>> ---
>>  arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
>>  1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> index c450a2025ca9..9b63bcf9aa85 100644
>> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
>> @@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>>
>>  		kernel_neon_end();
>>
>> -		err = skcipher_walk_done(walk, tail);
>> -		if (err)
>> -			return err;
>> -		if (walk->nbytes)
>> -			kernel_neon_begin();
>> +		if (walk->nbytes) {
>
> Please do
>
> 	if (!walk->nbytes)
> 		break;

Thanks for the suggestion, a new patch has been sent.

> As an additional improvement, the tail calculation can be removed
> entirely because you already set the chunksize so the walker should
> only be feeding you multiples of chunksize except at the end.
>
> Cheers

I printed the walk->nbytes of each iteration of the walker, it is not
always multiples of chunksize except at the end when the algorithm test
manager is turned on. For example, during a GCM encryption process, I
get data like this:

  total = 4014, nbytes = 2078, tail = 14
  total = 1950, nbytes =   16, tail =  0
  total = 1934, nbytes =  311, tail =  7
  total = 1630, nbytes =   16, tail =  0
  total = 1614, nbytes =   16, tail =  0
  total = 1598, nbytes = 1598, tail = 14

Is my understanding wrong?

Best regards,
Tianjia
On Mon, Jan 30, 2023 at 03:34:42PM +0800, Tianjia Zhang wrote:
>
> I printed the walk->nbytes of each iteration of the walker, it is not
> always multiples of chunksize except at the end when the algorithm test
> manager is turned on.

Sorry I was mistaken. We only guarantee that a minimum of chunksize
bytes is given to you until the very end, not that it is exactly a
multiple of chunksize.

While you still need to compute tail, you could get rid of the else if
check as walk->nbytes - tail cannot be zero (we must provide you with
at least one chunk before the end):

	if (walk->nbytes == walk->total) {
		tail = 0;
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes, ghash,
				       ctx->ghash_table,
				       (const u8 *)&lengths);
	} else {
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
				       walk->nbytes - tail, ghash,
				       ctx->ghash_table, NULL);
	}

In fact we could rewrite it like this:

	unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
	unsigned int nbytes = walk->nbytes - tail;
	const u8 *src = walk->src.virt.addr;
	u8 *dst = walk->dst.virt.addr;
	u8 *lp = NULL;

	if (walk->nbytes == walk->total) {
		nbytes = walk->nbytes;
		tail = 0;
		lp = (u8 *)&lengths;
	}

	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv, nbytes,
			       ghash, ctx->ghash_table, lp);

The second part of that loop could also be rewritten as:

		kernel_neon_end();

		err = skcipher_walk_done(walk, tail);
		if (!walk->nbytes)
			return err;

		kernel_neon_begin();
	} while (1);

Actually I think there is a serious bug here. If you're doing an
empty message, you must not call skcipher_walk_done as that may
then free random uninitialised stack memory.

Did you copy this code from somewhere else? If so wherever you got
it from needs to be fixed too.

The loop should look like this:

	if (!walk->nbytes) {
		/* iv may be unaligned as the walker didn't run at all. */
		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL, iv, 0,
				       ghash, ctx->ghash_table,
				       (u8 *)&lengths);
		kernel_neon_end();
		return 0;
	}

	do {
		...
	}

Thanks,
On Mon, Jan 30, 2023 at 04:15:33PM +0800, Herbert Xu wrote:
>
> Actually I think there is a serious bug here. If you're doing an
> empty message, you must not call skcipher_walk_done as that may
> then free random uninitialised stack memory.

Hah, I had forgotten that this thread started with your patch to fix
this exact bug :)

Could you confirm that you did copy this from ccm?

It would be nice if you could rewrite your loop in a form similar
to my patch to ccm.

Thanks,
Hi Herbert,

On 1/30/23 5:01 PM, Herbert Xu wrote:
> On Mon, Jan 30, 2023 at 04:15:33PM +0800, Herbert Xu wrote:
>>
>> Actually I think there is a serious bug here. If you're doing an
>> empty message, you must not call skcipher_walk_done as that may
>> then free random uninitialised stack memory.
>
> Hah, I had forgotten that this thread started with your patch
> to fix this exact bug :)
>
> Could you confirm that you did copy this from ccm?
>
> It would be nice if you could rewrite your loop in a form similar
> to my patch to ccm.
>
> Thanks,

These codes are copied from gcm and ccm at the same time. I am not sure
which has more components, but I will rewrite the gcm and ccm
encryption loop of sm4 as soon as possible.

Cheers,
Tianjia
diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index c450a2025ca9..9b63bcf9aa85 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -178,11 +178,13 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
 
 		kernel_neon_end();
 
-		err = skcipher_walk_done(walk, tail);
-		if (err)
-			return err;
-		if (walk->nbytes)
-			kernel_neon_begin();
+		if (walk->nbytes) {
+			err = skcipher_walk_done(walk, tail);
+			if (err)
+				return err;
+			if (walk->nbytes)
+				kernel_neon_begin();
+		}
 	} while (walk->nbytes > 0);
 
 	return 0;
When the cryption total length is zero, GCM cryption call
skcipher_walk_done() will cause an unexpected crash, so skip calling
this function to avoid possible crash when the GCM cryption length
is equal to zero.

Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/arm64/crypto/sm4-ce-gcm-glue.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)