Message ID | 1523126303-23205-4-git-send-email-s.mesoraca16@gmail.com (mailing list archive) |
---|---|
State | Superseded |
On Sat, Apr 07, 2018 at 08:38:20PM +0200, Salvatore Mesoraca wrote:
>
>  int crypto_init_cipher_ops(struct crypto_tfm *tfm)
>  {
> +	const unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
> +	const unsigned int size = crypto_tfm_alg_blocksize(tfm);
> 	struct cipher_tfm *ops = &tfm->crt_cipher;
> 	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
>
> +	if (size > MAX_BLOCKSIZE || alignmask > MAX_ALIGNMASK)
> +		return -EINVAL;
> +

This check should be done when the algorithm is registered. Perhaps
crypto_check_alg.

Thanks,
2018-04-08 5:16 GMT+02:00 Herbert Xu <herbert@gondor.apana.org.au>:
> On Sat, Apr 07, 2018 at 08:38:20PM +0200, Salvatore Mesoraca wrote:
>>
>>  int crypto_init_cipher_ops(struct crypto_tfm *tfm)
>>  {
>> +	const unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
>> +	const unsigned int size = crypto_tfm_alg_blocksize(tfm);
>> 	struct cipher_tfm *ops = &tfm->crt_cipher;
>> 	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
>>
>> +	if (size > MAX_BLOCKSIZE || alignmask > MAX_ALIGNMASK)
>> +		return -EINVAL;
>> +
>
> This check should be done when the algorithm is registered. Perhaps
> crypto_check_alg.

Please correct me if I'm wrong:
isn't crypto_check_alg invoked also during hashing algorithm registration?
In this patch-set I'm dealing only with ciphers, because the maximum block
size (16) is relatively small and it's also the most common block size with
ciphers (maybe I should have explicitly referenced ciphers in the macro
names, my bad).
I don't think that it would be OK to use a similar approach for hashes too,
because some of them have block size >= 1024 bytes.

Thank you for your time,

Salvatore
On Sun, Apr 08, 2018 at 11:07:12AM +0200, Salvatore Mesoraca wrote:
>
> > This check should be done when the algorithm is registered. Perhaps
> > crypto_check_alg.
>
> Please correct me if I'm wrong:
> isn't crypto_check_alg invoked also during hashing algorithm registration?
> In this patch-set I'm dealing only with ciphers, because the maximum block
> size (16) is relatively small and it's also the most common block size with
> ciphers (maybe I should have explicitly referenced ciphers in the macro
> names, my bad).
> I don't think that it would be OK to use a similar approach for hashes too,
> because some of them have block size >= 1024 bytes.

Yes, we want to make it for ciphers only even if we move it to
crypto_check_alg.

For a legacy type like cipher you can do it by

	if (!alg->cra_type &&
	    (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER)
		do_cipher_specific_check();

Cheers,
2018-04-08 11:22 GMT+02:00 Herbert Xu <herbert@gondor.apana.org.au>:
> On Sun, Apr 08, 2018 at 11:07:12AM +0200, Salvatore Mesoraca wrote:
>>
>> > This check should be done when the algorithm is registered. Perhaps
>> > crypto_check_alg.
>>
>> Please correct me if I'm wrong:
>> isn't crypto_check_alg invoked also during hashing algorithm registration?
>> In this patch-set I'm dealing only with ciphers, because the maximum block
>> size (16) is relatively small and it's also the most common block size with
>> ciphers (maybe I should have explicitly referenced ciphers in the macro
>> names, my bad).
>> I don't think that it would be OK to use a similar approach for hashes too,
>> because some of them have block size >= 1024 bytes.
>
> Yes, we want to make it for ciphers only even if we move it to
> crypto_check_alg.
>
> For a legacy type like cipher you can do it by
>
>	if (!alg->cra_type &&
>	    (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER)
>		do_cipher_specific_check();

Thank you very much for your help.
I'm sending the new version.

Salvatore
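To make the direction agreed on above more concrete, here is a rough sketch (not the actual follow-up patch) of how a cipher-only limit check might be folded into crypto_check_alg() at registration time. MAX_BLOCKSIZE and MAX_ALIGNMASK are assumed to be the limits introduced earlier in this series, and the rest of crypto_check_alg()'s body is elided.

    /*
     * Illustrative sketch only, not the actual v2 patch: a cipher-only
     * limit check performed at registration time, along the lines
     * suggested above.
     */
    static int crypto_check_alg(struct crypto_alg *alg)
    {
            /* ... existing sanity checks elided ... */

            /*
             * Legacy single-block ciphers have no cra_type and are flagged
             * CRYPTO_ALG_TYPE_CIPHER; only they go through the fixed-size
             * stack buffer in cipher_crypt_unaligned(), so only they need
             * to be bounded here.
             */
            if (!alg->cra_type &&
                (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) {
                    if (alg->cra_blocksize > MAX_BLOCKSIZE ||
                        alg->cra_alignmask > MAX_ALIGNMASK)
                            return -EINVAL;
            }

            return 0;
    }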
diff --git a/crypto/cipher.c b/crypto/cipher.c
index 94fa355..9cedf23 100644
--- a/crypto/cipher.c
+++ b/crypto/cipher.c
@@ -67,7 +67,7 @@ static void cipher_crypt_unaligned(void (*fn)(struct crypto_tfm *, u8 *,
 {
 	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
 	unsigned int size = crypto_tfm_alg_blocksize(tfm);
-	u8 buffer[size + alignmask];
+	u8 buffer[MAX_BLOCKSIZE + MAX_ALIGNMASK];
 	u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 
 	memcpy(tmp, src, size);
@@ -105,9 +105,14 @@ static void cipher_decrypt_unaligned(struct crypto_tfm *tfm,
 
 int crypto_init_cipher_ops(struct crypto_tfm *tfm)
 {
+	const unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+	const unsigned int size = crypto_tfm_alg_blocksize(tfm);
 	struct cipher_tfm *ops = &tfm->crt_cipher;
 	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
 
+	if (size > MAX_BLOCKSIZE || alignmask > MAX_ALIGNMASK)
+		return -EINVAL;
+
 	ops->cit_setkey = setkey;
 	ops->cit_encrypt_one = crypto_tfm_alg_alignmask(tfm) ?
 		cipher_encrypt_unaligned : cipher->cia_encrypt;
We avoid a VLA[1] by always allocating MAX_BLOCKSIZE + MAX_ALIGNMASK bytes.
We also check the selected cipher at initialization time; if it doesn't
comply with these limits, initialization will fail.

[1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
---
 crypto/cipher.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
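For readers less familiar with the alignment trick used in cipher_crypt_unaligned(), the following stand-alone user-space sketch shows the same idea in isolation: a fixed-size buffer with MAX_ALIGNMASK bytes of slack replaces the VLA, and ALIGN() picks a suitably aligned pointer inside it. The MAX_BLOCKSIZE/MAX_ALIGNMASK values and the ALIGN() macro here are assumptions mirroring the kernel-side definitions from earlier in this series, not part of this patch.

    /*
     * Stand-alone user-space sketch of the fixed-buffer alignment trick
     * used in cipher_crypt_unaligned().  The MAX_* values and ALIGN()
     * macro are assumptions mirroring the kernel definitions.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_BLOCKSIZE 16  /* assumed largest cipher block size */
    #define MAX_ALIGNMASK 15  /* assumed largest cipher alignmask  */
    #define ALIGN(x, a)   (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

    static void copy_block_aligned(const uint8_t *src, size_t size,
                                   unsigned long alignmask)
    {
            /*
             * Worst case: the start of 'buffer' is maximally misaligned, so
             * up to MAX_ALIGNMASK padding bytes are skipped before the block.
             * The buffer size is a compile-time constant, hence no VLA.
             */
            uint8_t buffer[MAX_BLOCKSIZE + MAX_ALIGNMASK];
            uint8_t *tmp = (uint8_t *)ALIGN((uintptr_t)buffer, alignmask + 1);

            memcpy(tmp, src, size);
            printf("block copied at offset %zu, %lu-byte aligned\n",
                   (size_t)(tmp - buffer), alignmask + 1);
    }

    int main(void)
    {
            uint8_t block[MAX_BLOCKSIZE] = { 0 };

            /* e.g. an alignmask of 7 requests 8-byte alignment */
            copy_block_aligned(block, sizeof(block), 7);
            return 0;
    }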