Message ID   | 20190807055022.15551-1-ard.biesheuvel@linaro.org (mailing list archive)
-------------|--------------------------------------------------------------------------
State | Accepted, archived |
Delegated to: | Mike Snitzer |
Series | [RFC,v2] md/dm-crypt - reuse eboiv skcipher for IV generation
On 07/08/2019 07:50, Ard Biesheuvel wrote:
> Instead of instantiating a separate cipher to perform the encryption
> needed to produce the IV, reuse the skcipher used for the block data
> and invoke it one additional time for each block to encrypt a zero
> vector and use the output as the IV.
>
> For CBC mode, this is equivalent to using the bare block cipher, but
> without the risk of ending up with a non-time invariant implementation
> of AES when the skcipher itself is time variant (e.g., arm64 without
> Crypto Extensions has a NEON based time invariant implementation of
> cbc(aes) but no time invariant implementation of the core cipher other
> than aes-ti, which is not enabled by default)
>
> This approach is a compromise between dm-crypt API flexibility and
> reducing dependence on parts of the crypto API that should not usually
> be exposed to other subsystems, such as the bare cipher API.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Yes, this is a good idea, I'll test it.

Thanks!
Milan
On Wed, 7 Aug 2019 at 10:28, Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> wrote: > > Ard, > > I've actually been following this discussion with some interest, as it has > some relevance for some of the things I am doing at the moment as well. > > For example, for my CTS implementation I need to crypt one or two > seperate blocks and for the inside-secure driver I sometimes need to do > some single crypto block precomputes. (the XTS driver additionally > also already did such a single block encrypt for the tweak, also using > a seperate (non-sk)cipher instance - very similar to your IV case) > > Long story short, the current approach is to allocate a seperate > cipher instance so you can conveniently do crypto_cipher_en/decrypt_one. > (it would be nice to have a matching crypto_skcipher_en/decrypt_one > function available from the crypto API for these purposes?) > But if I understand you correctly, you may end up with an insecure > table-based implementation if you do that. Not what I want :-( > Table based AES is known to be vulnerable to plaintext attacks on the key, since each byte of the input xor'ed with the key is used as an index for doing Sbox lookups, and so with enough samples, there is an exploitable statistical correlation between the response time and the key. So in the context of EBOIV, where the user might specify a SIMD based time invariant skcipher, it would be really bad if the known plaintext encryptions of the byte offsets that occur with the *same* key would happen with a different cipher that is allocated implicitly and ends up being fulfilled by, e.g., aes-generic, since in that case, each block en/decryption is preceded by a single, time-variant AES invocation with an easily guessable input. In your case, we are not dealing with known plaintext attacks, and the higher latency of h/w accelerated crypto makes me less worried that the final, user observable latency would strongly correlate the way aes-generic in isolation does. > However, in many cases there would actually be a very good reason > NOT to want to use the main skcipher for this. As that is some > hardware accelerator with terrible latency that you wouldn't want > to use to process just one cipher block. For that, you want to have > some SW implementation that is efficient on a single block instead. > Indeed. Note that for EBOIV, such performance concerns are deemed irrelevant, but it is an issue in the general case. > In my humble opinion, such insecure table based implementations just > shouldn't exist at all - you can always do better, possibly at the > expense of some performance degradation. Or you should at least have > some flag available to specify you have some security requirements > and such an implementation is not an acceptable response. > We did some work to reduce the time variance of AES: there is the aes-ti driver, and there is now also the AES library, which is known to be slower than aes-generic, but does include some mitigations for cache timing attacks. Other than that, I have little to offer, given that the performance vs security tradeoffs were decided long before security became a thing like it is today, and so removing aes-generic is not an option, especially since the scalar alternatives we have are not truly time invariant either. 
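As an aside on the crypto_skcipher_en/decrypt_one idea raised above: no such helper exists in the crypto API today, but a synchronous one-shot wrapper can be sketched from the same calls the patch itself uses. The function name and signature below are hypothetical; only the underlying skcipher API calls (skcipher_request_alloc, crypto_wait_req, etc.) are real, and the helper assumes process context since it sleeps until the request completes.

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical "encrypt one short buffer" helper along the lines Pascal
 * asks for above: run a single synchronous encryption on an existing
 * skcipher tfm. Not an existing kernel API; a sketch only.
 */
static int crypto_skcipher_encrypt_one(struct crypto_skcipher *tfm,
				       u8 *dst, u8 *src,
				       unsigned int len, u8 *iv)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct skcipher_request *req;
	struct scatterlist sg_src, sg_dst;
	int err;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	/* Wrap the flat buffers in single-entry scatterlists. */
	sg_init_one(&sg_src, src, len);
	sg_init_one(&sg_dst, dst, len);
	skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg_src, &sg_dst, len, iv);

	/* Submit and sleep until the (possibly async) backend finishes. */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
	return err;
}

With such a helper, the EBOIV and XTS-tweak style "encrypt one block" cases could stay on the already allocated skcipher instead of implicitly allocating a second, possibly table-based, cipher.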
> > -----Original Message----- > > From: linux-crypto-owner@vger.kernel.org <linux-crypto-owner@vger.kernel.org> On Behalf Of > > Ard Biesheuvel > > Sent: Wednesday, August 7, 2019 7:50 AM > > To: linux-crypto@vger.kernel.org > > Cc: herbert@gondor.apana.org.au; ebiggers@kernel.org; agk@redhat.com; snitzer@redhat.com; > > dm-devel@redhat.com; gmazyland@gmail.com; Ard Biesheuvel <ard.biesheuvel@linaro.org> > > Subject: [RFC PATCH v2] md/dm-crypt - reuse eboiv skcipher for IV generation > > > > Instead of instantiating a separate cipher to perform the encryption > > needed to produce the IV, reuse the skcipher used for the block data > > and invoke it one additional time for each block to encrypt a zero > > vector and use the output as the IV. > > > > For CBC mode, this is equivalent to using the bare block cipher, but > > without the risk of ending up with a non-time invariant implementation > > of AES when the skcipher itself is time variant (e.g., arm64 without > > Crypto Extensions has a NEON based time invariant implementation of > > cbc(aes) but no time invariant implementation of the core cipher other > > than aes-ti, which is not enabled by default) > > > > This approach is a compromise between dm-crypt API flexibility and > > reducing dependence on parts of the crypto API that should not usually > > be exposed to other subsystems, such as the bare cipher API. > > > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> > > --- > > drivers/md/dm-crypt.c | 70 ++++++++++++++----------------------------- > > 1 file changed, 22 insertions(+), 48 deletions(-) > > > > diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c > > index d5216bcc4649..48cd76c88d77 100644 > > --- a/drivers/md/dm-crypt.c > > +++ b/drivers/md/dm-crypt.c > > @@ -120,10 +120,6 @@ struct iv_tcw_private { > > u8 *whitening; > > }; > > > > -struct iv_eboiv_private { > > - struct crypto_cipher *tfm; > > -}; > > - > > /* > > * Crypt: maps a linear range of a block device > > * and encrypts / decrypts at the same time. 
> > @@ -163,7 +159,6 @@ struct crypt_config { > > struct iv_benbi_private benbi; > > struct iv_lmk_private lmk; > > struct iv_tcw_private tcw; > > - struct iv_eboiv_private eboiv; > > } iv_gen_private; > > u64 iv_offset; > > unsigned int iv_size; > > @@ -847,65 +842,47 @@ static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv, > > return 0; > > } > > > > -static void crypt_iv_eboiv_dtr(struct crypt_config *cc) > > -{ > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > - > > - crypto_free_cipher(eboiv->tfm); > > - eboiv->tfm = NULL; > > -} > > - > > static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti, > > const char *opts) > > { > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > - struct crypto_cipher *tfm; > > - > > - tfm = crypto_alloc_cipher(cc->cipher, 0, 0); > > - if (IS_ERR(tfm)) { > > - ti->error = "Error allocating crypto tfm for EBOIV"; > > - return PTR_ERR(tfm); > > + if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags)) { > > + ti->error = "AEAD transforms not supported for EBOIV"; > > + return -EINVAL; > > } > > > > - if (crypto_cipher_blocksize(tfm) != cc->iv_size) { > > + if (crypto_skcipher_blocksize(any_tfm(cc)) != cc->iv_size) { > > ti->error = "Block size of EBOIV cipher does " > > "not match IV size of block cipher"; > > - crypto_free_cipher(tfm); > > return -EINVAL; > > } > > > > - eboiv->tfm = tfm; > > return 0; > > } > > > > -static int crypt_iv_eboiv_init(struct crypt_config *cc) > > +static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv, > > + struct dm_crypt_request *dmreq) > > { > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > + u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64)); > > + struct skcipher_request *req; > > + struct scatterlist src, dst; > > + struct crypto_wait wait; > > int err; > > > > - err = crypto_cipher_setkey(eboiv->tfm, cc->key, cc->key_size); > > - if (err) > > - return err; > > - > > - return 0; > > -} > > - > > -static int crypt_iv_eboiv_wipe(struct crypt_config *cc) > > -{ > > - /* Called after cc->key is set to random key in crypt_wipe() */ > > - return crypt_iv_eboiv_init(cc); > > -} > > + req = skcipher_request_alloc(any_tfm(cc), GFP_KERNEL | GFP_NOFS); > > + if (!req) > > + return -ENOMEM; > > > > -static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv, > > - struct dm_crypt_request *dmreq) > > -{ > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > + memset(buf, 0, cc->iv_size); > > + *(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size); > > > > - memset(iv, 0, cc->iv_size); > > - *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector * cc->sector_size); > > - crypto_cipher_encrypt_one(eboiv->tfm, iv, iv); > > + sg_init_one(&src, page_address(ZERO_PAGE(0)), cc->iv_size); > > + sg_init_one(&dst, iv, cc->iv_size); > > + skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf); > > + skcipher_request_set_callback(req, 0, crypto_req_done, &wait); > > + err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait); > > + skcipher_request_free(req); > > > > - return 0; > > + return err; > > } > > > > static const struct crypt_iv_operations crypt_iv_plain_ops = { > > @@ -962,9 +939,6 @@ static struct crypt_iv_operations crypt_iv_random_ops = { > > > > static struct crypt_iv_operations crypt_iv_eboiv_ops = { > > .ctr = crypt_iv_eboiv_ctr, > > - .dtr = crypt_iv_eboiv_dtr, > > - .init = crypt_iv_eboiv_init, > > - .wipe = crypt_iv_eboiv_wipe, > > .generator = crypt_iv_eboiv_gen > > }; > > > > -- > > 2.17.1 > 
On Wed, 7 Aug 2019 at 16:52, Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> wrote: > > Ard, > > > -----Original Message----- > > From: Ard Biesheuvel <ard.biesheuvel@linaro.org> > > Sent: Wednesday, August 7, 2019 3:17 PM > > To: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> > > Cc: linux-crypto@vger.kernel.org; herbert@gondor.apana.org.au; ebiggers@kernel.org; > > agk@redhat.com; snitzer@redhat.com; dm-devel@redhat.com; gmazyland@gmail.com > > Subject: Re: [RFC PATCH v2] md/dm-crypt - reuse eboiv skcipher for IV generation > > > > On Wed, 7 Aug 2019 at 10:28, Pascal Van Leeuwen > > <pvanleeuwen@verimatrix.com> wrote: > > > > > > Ard, > > > > > > I've actually been following this discussion with some interest, as it has > > > some relevance for some of the things I am doing at the moment as well. > > > > > > For example, for my CTS implementation I need to crypt one or two > > > seperate blocks and for the inside-secure driver I sometimes need to do > > > some single crypto block precomputes. (the XTS driver additionally > > > also already did such a single block encrypt for the tweak, also using > > > a seperate (non-sk)cipher instance - very similar to your IV case) > > > > > > Long story short, the current approach is to allocate a seperate > > > cipher instance so you can conveniently do crypto_cipher_en/decrypt_one. > > > (it would be nice to have a matching crypto_skcipher_en/decrypt_one > > > function available from the crypto API for these purposes?) > > > But if I understand you correctly, you may end up with an insecure > > > table-based implementation if you do that. Not what I want :-( > > > > > > > Table based AES is known to be vulnerable to plaintext attacks on the > > key, since each byte of the input xor'ed with the key is used as an > > index for doing Sbox lookups, and so with enough samples, there is an > > exploitable statistical correlation between the response time and the > > key. > > > > So in the context of EBOIV, where the user might specify a SIMD based > > time invariant skcipher, it would be really bad if the known plaintext > > encryptions of the byte offsets that occur with the *same* key would > > happen with a different cipher that is allocated implicitly and ends > > up being fulfilled by, e.g., aes-generic, since in that case, each > > block en/decryption is preceded by a single, time-variant AES > > invocation with an easily guessable input. > > > No need to tell me, doing crypto has been my dayjob for nearly 18.5 years > now :-) > I didn't mean to imply that you don't know your stuff :-) I am just reiterating the EBOIV issue so we can compare it to the issue you are bringing up > > In your case, we are not dealing with known plaintext attacks, > > > Since this is XTS, which is used for disk encryption, I would argue > we do! For the tweak encryption, the sector number is known plaintext, > same as for EBOIV. Also, you may be able to control data being written > to the disk encrypted, either directly or indirectly. > OK, part of the data into the CTS encryption will be previous ciphertext, > but that may be just 1 byte with the rest being the known plaintext. > The tweak encryption uses a dedicated key, so leaking it does not have the same impact as it does in the EBOIV case. And a plaintext attack on the data encryption part of XTS involves knowing the value of the tweak as well, so you'd have to successfully attack the tweak before you can attack the data. So while your point is valid, it's definitely less broken than EBOIV. 
> > and the > > higher latency of h/w accelerated crypto makes me less worried that > > the final, user observable latency would strongly correlate the way > > aes-generic in isolation does. > > > If that latency is constant - which it usually is - then it doesn't > really matter for correlation, it just filters out. > Due to the asynchronous nature of the driver, we'll usually be calling into the OS scheduler after queuing one or perhaps several blocks for processing by the hardware. Even if the processing time is fixed, the time it takes for the OS to respond to the completion IRQ and process the output is unlikely to correlate the way a table based software implementation does, especially if several blocks can be in flight at the same time. But note that we are basically in agreement here: falling back to table based AES is undesirable, but for EBOIV it is just much worse than for other modes. > > > However, in many cases there would actually be a very good reason > > > NOT to want to use the main skcipher for this. As that is some > > > hardware accelerator with terrible latency that you wouldn't want > > > to use to process just one cipher block. For that, you want to have > > > some SW implementation that is efficient on a single block instead. > > > > > > > Indeed. Note that for EBOIV, such performance concerns are deemed > > irrelevant, but it is an issue in the general case. > > > Yes, my interest was purely in the generic case. > > > > In my humble opinion, such insecure table based implementations just > > > shouldn't exist at all - you can always do better, possibly at the > > > expense of some performance degradation. Or you should at least have > > > some flag available to specify you have some security requirements > > > and such an implementation is not an acceptable response. > > > > > > > We did some work to reduce the time variance of AES: there is the > > aes-ti driver, and there is now also the AES library, which is known > > to be slower than aes-generic, but does include some mitigations for > > cache timing attacks. > > > > Other than that, I have little to offer, given that the performance vs > > security tradeoffs were decided long before security became a thing > > like it is today, and so removing aes-generic is not an option, > > especially since the scalar alternatives we have are not truly time > > invariant either. > > > Replacing aes-generic with a truly time-invariant implementation could > be an option. If you can find a truly time-invariant C implementation of AES that isn't orders of magnitude slower than aes-generic, I'm sure we can merge it. > Or selecting aes-generic only if some (new) "allow_insecure" > flag is set on the cipher request. (Obviously, you want to default to > secure, not insecure. Speaking as someone who earns his living doing > security :-) > We all do. But we all have different use cases to worry about, and different experiences and backgrounds :-) The main problem is that banning aes-generic is a bit too rigorous imo. It highly depends on whether there is known plaintext and whether there are observable latencies in the first place. > (Disclaimer: I do not know anything about the aes-generic implementation, > I'm just taking your word for it that it is not secure (enough) ...) > -- dm-devel mailing list dm-devel@redhat.com https://www.redhat.com/mailman/listinfo/dm-devel
On Wed, Aug 07, 2019 at 04:14:22PM +0000, Pascal Van Leeuwen wrote: > > > > In your case, we are not dealing with known plaintext attacks, > > > > > > > Since this is XTS, which is used for disk encryption, I would argue > > > we do! For the tweak encryption, the sector number is known plaintext, > > > same as for EBOIV. Also, you may be able to control data being written > > > to the disk encrypted, either directly or indirectly. > > > OK, part of the data into the CTS encryption will be previous ciphertext, > > > but that may be just 1 byte with the rest being the known plaintext. > > > > > > > The tweak encryption uses a dedicated key, so leaking it does not have > > the same impact as it does in the EBOIV case. > > > Well ... yes and no. The spec defines them as seperately controllable - > deviating from the original XEX definition - but in most practicle use cases > I've seen, the same key is used for both, as having 2 keys just increases > key storage requirements and does not actually improve effective security > (of the algorithm itself, implementation peculiarities like this one aside > :-), as XEX has been proven secure using a single key. And the security > proof for XTS actually builds on that while using 2 keys deviates from it. > This is a common misconception. Actually, XTS needs 2 distinct keys to be a CCA-secure tweakable block cipher, due to another subtle difference from XEX: XEX (by which I really mean "XEX[E,2]") builds the sequence of masks starting with x^1, while XTS starts with x^0. If only 1 key is used, the inclusion of the 0th power in XTS allows the attack described in Section 6 of the XEX paper (https://web.cs.ucdavis.edu/~rogaway/papers/offsets.pdf). Of course, it's debatable what this means *in practice* to the usual XTS use cases like disk encryption, for which CCA security may not be critical... But technically, single-key XTS isn't secure under as strong an attack model as XEX. - Eric -- dm-devel mailing list dm-devel@redhat.com https://www.redhat.com/mailman/listinfo/dm-devel
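Spelling out the difference Eric describes (notation assumed here, not taken from the thread): with data key K1, tweak key K2, sector number i and in-sector block index j, XTS processes each block as

    C_j = E_{K_1}(P_j \oplus \Delta_j) \oplus \Delta_j, \qquad \Delta_j = E_{K_2}(i) \otimes \alpha^j, \quad j = 0, 1, 2, \ldots

whereas XEX (in the XEX[E,2] sense) uses a single key but starts the mask sequence at j = 1, i.e. its first mask is E_K(i) ⊗ α. The combination that breaks is the XTS mask sequence, which includes the j = 0 term, together with K1 = K2.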
Hi, On 07/08/2019 07:50, Ard Biesheuvel wrote: > Instead of instantiating a separate cipher to perform the encryption > needed to produce the IV, reuse the skcipher used for the block data > and invoke it one additional time for each block to encrypt a zero > vector and use the output as the IV. > > For CBC mode, this is equivalent to using the bare block cipher, but > without the risk of ending up with a non-time invariant implementation > of AES when the skcipher itself is time variant (e.g., arm64 without > Crypto Extensions has a NEON based time invariant implementation of > cbc(aes) but no time invariant implementation of the core cipher other > than aes-ti, which is not enabled by default) > > This approach is a compromise between dm-crypt API flexibility and > reducing dependence on parts of the crypto API that should not usually > be exposed to other subsystems, such as the bare cipher API. > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> For now I have just pair of images here to test, but seems checksums are ok. Tested-by: Milan Broz <gmazyland@gmail.com> I talked with Mike already, so it should go through DM tree now. Thanks! Milan > --- > drivers/md/dm-crypt.c | 70 ++++++++++++++----------------------------- > 1 file changed, 22 insertions(+), 48 deletions(-) > > diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c > index d5216bcc4649..48cd76c88d77 100644 > --- a/drivers/md/dm-crypt.c > +++ b/drivers/md/dm-crypt.c > @@ -120,10 +120,6 @@ struct iv_tcw_private { > u8 *whitening; > }; > > -struct iv_eboiv_private { > - struct crypto_cipher *tfm; > -}; > - > /* > * Crypt: maps a linear range of a block device > * and encrypts / decrypts at the same time. > @@ -163,7 +159,6 @@ struct crypt_config { > struct iv_benbi_private benbi; > struct iv_lmk_private lmk; > struct iv_tcw_private tcw; > - struct iv_eboiv_private eboiv; > } iv_gen_private; > u64 iv_offset; > unsigned int iv_size; > @@ -847,65 +842,47 @@ static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv, > return 0; > } > > -static void crypt_iv_eboiv_dtr(struct crypt_config *cc) > -{ > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > - > - crypto_free_cipher(eboiv->tfm); > - eboiv->tfm = NULL; > -} > - > static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti, > const char *opts) > { > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > - struct crypto_cipher *tfm; > - > - tfm = crypto_alloc_cipher(cc->cipher, 0, 0); > - if (IS_ERR(tfm)) { > - ti->error = "Error allocating crypto tfm for EBOIV"; > - return PTR_ERR(tfm); > + if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags)) { > + ti->error = "AEAD transforms not supported for EBOIV"; > + return -EINVAL; > } > > - if (crypto_cipher_blocksize(tfm) != cc->iv_size) { > + if (crypto_skcipher_blocksize(any_tfm(cc)) != cc->iv_size) { > ti->error = "Block size of EBOIV cipher does " > "not match IV size of block cipher"; > - crypto_free_cipher(tfm); > return -EINVAL; > } > > - eboiv->tfm = tfm; > return 0; > } > > -static int crypt_iv_eboiv_init(struct crypt_config *cc) > +static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv, > + struct dm_crypt_request *dmreq) > { > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > + u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64)); > + struct skcipher_request *req; > + struct scatterlist src, dst; > + struct crypto_wait wait; > int err; > > - err = crypto_cipher_setkey(eboiv->tfm, cc->key, cc->key_size); > - if (err) > - return err; 
> - > - return 0; > -} > - > -static int crypt_iv_eboiv_wipe(struct crypt_config *cc) > -{ > - /* Called after cc->key is set to random key in crypt_wipe() */ > - return crypt_iv_eboiv_init(cc); > -} > + req = skcipher_request_alloc(any_tfm(cc), GFP_KERNEL | GFP_NOFS); > + if (!req) > + return -ENOMEM; > > -static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv, > - struct dm_crypt_request *dmreq) > -{ > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > + memset(buf, 0, cc->iv_size); > + *(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size); > > - memset(iv, 0, cc->iv_size); > - *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector * cc->sector_size); > - crypto_cipher_encrypt_one(eboiv->tfm, iv, iv); > + sg_init_one(&src, page_address(ZERO_PAGE(0)), cc->iv_size); > + sg_init_one(&dst, iv, cc->iv_size); > + skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf); > + skcipher_request_set_callback(req, 0, crypto_req_done, &wait); > + err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait); > + skcipher_request_free(req); > > - return 0; > + return err; > } > > static const struct crypt_iv_operations crypt_iv_plain_ops = { > @@ -962,9 +939,6 @@ static struct crypt_iv_operations crypt_iv_random_ops = { > > static struct crypt_iv_operations crypt_iv_eboiv_ops = { > .ctr = crypt_iv_eboiv_ctr, > - .dtr = crypt_iv_eboiv_dtr, > - .init = crypt_iv_eboiv_init, > - .wipe = crypt_iv_eboiv_wipe, > .generator = crypt_iv_eboiv_gen > }; > > -- dm-devel mailing list dm-devel@redhat.com https://www.redhat.com/mailman/listinfo/dm-devel
On 08/08/2019 11:31, Pascal Van Leeuwen wrote: >> -----Original Message----- >> From: Eric Biggers <ebiggers@kernel.org> >> Sent: Thursday, August 8, 2019 10:31 AM >> To: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> >> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>; linux-crypto@vger.kernel.org; >> herbert@gondor.apana.org.au; agk@redhat.com; snitzer@redhat.com; dm-devel@redhat.com; >> gmazyland@gmail.com >> Subject: Re: [RFC PATCH v2] md/dm-crypt - reuse eboiv skcipher for IV generation >> >> On Wed, Aug 07, 2019 at 04:14:22PM +0000, Pascal Van Leeuwen wrote: >>>>>> In your case, we are not dealing with known plaintext attacks, >>>>>> >>>>> Since this is XTS, which is used for disk encryption, I would argue >>>>> we do! For the tweak encryption, the sector number is known plaintext, >>>>> same as for EBOIV. Also, you may be able to control data being written >>>>> to the disk encrypted, either directly or indirectly. >>>>> OK, part of the data into the CTS encryption will be previous ciphertext, >>>>> but that may be just 1 byte with the rest being the known plaintext. >>>>> >>>> >>>> The tweak encryption uses a dedicated key, so leaking it does not have >>>> the same impact as it does in the EBOIV case. >>>> >>> Well ... yes and no. The spec defines them as seperately controllable - >>> deviating from the original XEX definition - but in most practicle use cases >>> I've seen, the same key is used for both, as having 2 keys just increases >>> key storage requirements and does not actually improve effective security >>> (of the algorithm itself, implementation peculiarities like this one aside >>> :-), as XEX has been proven secure using a single key. And the security >>> proof for XTS actually builds on that while using 2 keys deviates from it. >>> >> >> This is a common misconception. Actually, XTS needs 2 distinct keys to be a >> CCA-secure tweakable block cipher, due to another subtle difference from XEX: >> XEX (by which I really mean "XEX[E,2]") builds the sequence of masks starting >> with x^1, while XTS starts with x^0. If only 1 key is used, the inclusion of >> the 0th power in XTS allows the attack described in Section 6 of the XEX paper >> (https://web.cs.ucdavis.edu/~rogaway/papers/offsets.pdf). >> > Interesting ... I'm not a cryptographer, just a humble HW engineer specialized > in implementing crypto. I'm basing my views mostly on the Liskov/Minematsu > "Comments on XTS", who assert that using 2 keys in XTS was misguided. > (and I never saw any follow-on comments asserting that this view was wrong ...) > On not avoiding j=0 in the XTS spec they actually comment: > "This difference is significant in security, but has no impact on effectiveness > for practical applications.", which I read as "not relevant for normal use". > > In any case, it's frequently *used* with both keys being equal for performance > and key storage reasons. There is already check in kernel for XTS "weak" keys (tweak and encryption keys must not be the same). https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/crypto/xts.h#n27 For now it applies only in FIPS mode... (and if I see correctly it is duplicated in all drivers). Milan -- dm-devel mailing list dm-devel@redhat.com https://www.redhat.com/mailman/listinfo/dm-devel
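The check behind that link amounts to rejecting an XTS key whose two halves (data key and tweak key) are identical. Below is a simplified sketch of that rule, not a verbatim copy of the kernel's xts_verify_key(); the real helper additionally uses a constant-time comparison and only enforces the rule in FIPS mode or when weak keys are explicitly forbidden for the tfm.

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/*
 * Simplified illustration of the XTS "weak key" rule discussed above:
 * the combined key is two equally sized halves (data key || tweak key),
 * and the two halves must not be identical. The kernel helper in
 * include/crypto/xts.h does this with a constant-time compare.
 */
static int xts_key_halves_distinct(const u8 *key, unsigned int keylen)
{
	if (keylen % 2)		/* must split into two equal halves */
		return -EINVAL;

	if (!memcmp(key, key + keylen / 2, keylen / 2))
		return -EINVAL;	/* tweak key == data key: reject */

	return 0;
}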
On Thu, Aug 08, 2019 at 01:23:10PM +0000, Pascal Van Leeuwen wrote: > > -----Original Message----- > > From: Milan Broz <gmazyland@gmail.com> > > Sent: Thursday, August 8, 2019 2:53 PM > > To: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com>; Eric Biggers <ebiggers@kernel.org> > > Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>; linux-crypto@vger.kernel.org; > > herbert@gondor.apana.org.au; agk@redhat.com; snitzer@redhat.com; dm-devel@redhat.com > > Subject: Re: [RFC PATCH v2] md/dm-crypt - reuse eboiv skcipher for IV generation > > > > On 08/08/2019 11:31, Pascal Van Leeuwen wrote: > > >> -----Original Message----- > > >> From: Eric Biggers <ebiggers@kernel.org> > > >> Sent: Thursday, August 8, 2019 10:31 AM > > >> To: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> > > >> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>; linux-crypto@vger.kernel.org; > > >> herbert@gondor.apana.org.au; agk@redhat.com; snitzer@redhat.com; dm-devel@redhat.com; > > >> gmazyland@gmail.com > > >> Subject: Re: [RFC PATCH v2] md/dm-crypt - reuse eboiv skcipher for IV generation > > >> > > >> On Wed, Aug 07, 2019 at 04:14:22PM +0000, Pascal Van Leeuwen wrote: > > >>>>>> In your case, we are not dealing with known plaintext attacks, > > >>>>>> > > >>>>> Since this is XTS, which is used for disk encryption, I would argue > > >>>>> we do! For the tweak encryption, the sector number is known plaintext, > > >>>>> same as for EBOIV. Also, you may be able to control data being written > > >>>>> to the disk encrypted, either directly or indirectly. > > >>>>> OK, part of the data into the CTS encryption will be previous ciphertext, > > >>>>> but that may be just 1 byte with the rest being the known plaintext. > > >>>>> > > >>>> > > >>>> The tweak encryption uses a dedicated key, so leaking it does not have > > >>>> the same impact as it does in the EBOIV case. > > >>>> > > >>> Well ... yes and no. The spec defines them as seperately controllable - > > >>> deviating from the original XEX definition - but in most practicle use cases > > >>> I've seen, the same key is used for both, as having 2 keys just increases > > >>> key storage requirements and does not actually improve effective security > > >>> (of the algorithm itself, implementation peculiarities like this one aside > > >>> :-), as XEX has been proven secure using a single key. And the security > > >>> proof for XTS actually builds on that while using 2 keys deviates from it. > > >>> > > >> > > >> This is a common misconception. Actually, XTS needs 2 distinct keys to be a > > >> CCA-secure tweakable block cipher, due to another subtle difference from XEX: > > >> XEX (by which I really mean "XEX[E,2]") builds the sequence of masks starting > > >> with x^1, while XTS starts with x^0. If only 1 key is used, the inclusion of > > >> the 0th power in XTS allows the attack described in Section 6 of the XEX paper > > >> (https://web.cs.ucdavis.edu/~rogaway/papers/offsets.pdf). > > >> > > > Interesting ... I'm not a cryptographer, just a humble HW engineer specialized > > > in implementing crypto. I'm basing my views mostly on the Liskov/Minematsu > > > "Comments on XTS", who assert that using 2 keys in XTS was misguided. > > > (and I never saw any follow-on comments asserting that this view was wrong ...) > > > On not avoiding j=0 in the XTS spec they actually comment: > > > "This difference is significant in security, but has no impact on effectiveness > > > for practical applications.", which I read as "not relevant for normal use". 
See page 6 of "Comments on XTS": Note that j = 0 must be excluded, as f(0, v) = v for any v, which implies ρ = 1. Moreover, if j = 0 was allowed, a simple attack based on this fact existed, as pointed out by [6] and [3]. Hence if XEX is used, one must be careful to avoid j being 0. The part you quoted is only talking about XTS *as specified*, i.e. with 2 keys. > > > > > > In any case, it's frequently *used* with both keys being equal for performance > > > and key storage reasons. It's broken, so it's broken. Doesn't matter who is using it. > > > > There is already check in kernel for XTS "weak" keys (tweak and encryption keys must not be > > the same). > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/crypto/xts.h# > > n27 > > > > For now it applies only in FIPS mode... (and if I see correctly it is duplicated in all > > drivers). > > > I never had any need to look into FIPS for XTS before, but this actually appears > to be accurate. FIPS indeed *requires this*. Much to my surprise, I might add. > Still looking for some actual rationale that goes beyond suggestion and innuendo > (and is not too heavy on the math ;-) though. As I said, the attack is explained in the original XEX paper. Basically the adversary can submit a chosen ciphertext query for the first block of sector 0 to leak the first "mask" of that sector, then submit a chosen plaintext or ciphertext query for the reminder of the sector such that they can predict the output with 100% certainty. (The standard security model for tweakable block ciphers says the output must appear random.) - Eric -- dm-devel mailing list dm-devel@redhat.com https://www.redhat.com/mailman/listinfo/dm-devel
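One way to write out the attack Eric summarizes (a reconstruction from the description above, with sector number 0 encoded as an all-zero block): with a single key K, the mask for block 0 of sector 0 is Delta = E_K(0) ⊗ α^0 = E_K(0). Submitting the all-zero ciphertext block for decryption at that position yields

    P = D_K(0 \oplus \Delta) \oplus \Delta = D_K(E_K(0)) \oplus \Delta = 0 \oplus \Delta = \Delta,

so the sector's mask leaks directly, and every other mask of that sector follows as Delta ⊗ α^j, making further outputs predictable rather than random-looking. With two keys the inner term is D_{K_1}(E_{K_2}(0)), which no longer collapses.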
On Thu, 8 Aug 2019 at 14:53, Milan Broz <gmazyland@gmail.com> wrote: > > Hi, > > On 07/08/2019 07:50, Ard Biesheuvel wrote: > > Instead of instantiating a separate cipher to perform the encryption > > needed to produce the IV, reuse the skcipher used for the block data > > and invoke it one additional time for each block to encrypt a zero > > vector and use the output as the IV. > > > > For CBC mode, this is equivalent to using the bare block cipher, but > > without the risk of ending up with a non-time invariant implementation > > of AES when the skcipher itself is time variant (e.g., arm64 without > > Crypto Extensions has a NEON based time invariant implementation of > > cbc(aes) but no time invariant implementation of the core cipher other > > than aes-ti, which is not enabled by default) > > > > This approach is a compromise between dm-crypt API flexibility and > > reducing dependence on parts of the crypto API that should not usually > > be exposed to other subsystems, such as the bare cipher API. > > > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> > > For now I have just pair of images here to test, but seems checksums are ok. > > Tested-by: Milan Broz <gmazyland@gmail.com> > > I talked with Mike already, so it should go through DM tree now. > Thanks! > > > --- > > drivers/md/dm-crypt.c | 70 ++++++++++++++----------------------------- > > 1 file changed, 22 insertions(+), 48 deletions(-) > > > > diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c > > index d5216bcc4649..48cd76c88d77 100644 > > --- a/drivers/md/dm-crypt.c > > +++ b/drivers/md/dm-crypt.c > > @@ -120,10 +120,6 @@ struct iv_tcw_private { > > u8 *whitening; > > }; > > > > -struct iv_eboiv_private { > > - struct crypto_cipher *tfm; > > -}; > > - > > /* > > * Crypt: maps a linear range of a block device > > * and encrypts / decrypts at the same time. 
> > @@ -163,7 +159,6 @@ struct crypt_config { > > struct iv_benbi_private benbi; > > struct iv_lmk_private lmk; > > struct iv_tcw_private tcw; > > - struct iv_eboiv_private eboiv; > > } iv_gen_private; > > u64 iv_offset; > > unsigned int iv_size; > > @@ -847,65 +842,47 @@ static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv, > > return 0; > > } > > > > -static void crypt_iv_eboiv_dtr(struct crypt_config *cc) > > -{ > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > - > > - crypto_free_cipher(eboiv->tfm); > > - eboiv->tfm = NULL; > > -} > > - > > static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti, > > const char *opts) > > { > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > - struct crypto_cipher *tfm; > > - > > - tfm = crypto_alloc_cipher(cc->cipher, 0, 0); > > - if (IS_ERR(tfm)) { > > - ti->error = "Error allocating crypto tfm for EBOIV"; > > - return PTR_ERR(tfm); > > + if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags)) { > > + ti->error = "AEAD transforms not supported for EBOIV"; > > + return -EINVAL; > > } > > > > - if (crypto_cipher_blocksize(tfm) != cc->iv_size) { > > + if (crypto_skcipher_blocksize(any_tfm(cc)) != cc->iv_size) { > > ti->error = "Block size of EBOIV cipher does " > > "not match IV size of block cipher"; > > - crypto_free_cipher(tfm); > > return -EINVAL; > > } > > > > - eboiv->tfm = tfm; > > return 0; > > } > > > > -static int crypt_iv_eboiv_init(struct crypt_config *cc) > > +static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv, > > + struct dm_crypt_request *dmreq) > > { > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > + u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64)); > > + struct skcipher_request *req; > > + struct scatterlist src, dst; > > + struct crypto_wait wait; > > int err; > > > > - err = crypto_cipher_setkey(eboiv->tfm, cc->key, cc->key_size); > > - if (err) > > - return err; > > - > > - return 0; > > -} > > - > > -static int crypt_iv_eboiv_wipe(struct crypt_config *cc) > > -{ > > - /* Called after cc->key is set to random key in crypt_wipe() */ > > - return crypt_iv_eboiv_init(cc); > > -} > > + req = skcipher_request_alloc(any_tfm(cc), GFP_KERNEL | GFP_NOFS); > > + if (!req) > > + return -ENOMEM; > > > > -static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv, > > - struct dm_crypt_request *dmreq) > > -{ > > - struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv; > > + memset(buf, 0, cc->iv_size); > > + *(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size); > > > > - memset(iv, 0, cc->iv_size); > > - *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector * cc->sector_size); > > - crypto_cipher_encrypt_one(eboiv->tfm, iv, iv); > > + sg_init_one(&src, page_address(ZERO_PAGE(0)), cc->iv_size); > > + sg_init_one(&dst, iv, cc->iv_size); > > + skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf); > > + skcipher_request_set_callback(req, 0, crypto_req_done, &wait); > > + err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait); > > + skcipher_request_free(req); > > > > - return 0; > > + return err; > > } > > > > static const struct crypt_iv_operations crypt_iv_plain_ops = { > > @@ -962,9 +939,6 @@ static struct crypt_iv_operations crypt_iv_random_ops = { > > > > static struct crypt_iv_operations crypt_iv_eboiv_ops = { > > .ctr = crypt_iv_eboiv_ctr, > > - .dtr = crypt_iv_eboiv_dtr, > > - .init = crypt_iv_eboiv_init, > > - .wipe = crypt_iv_eboiv_wipe, > > .generator = crypt_iv_eboiv_gen > > }; > > > > -- dm-devel 
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index d5216bcc4649..48cd76c88d77 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -120,10 +120,6 @@ struct iv_tcw_private {
 	u8 *whitening;
 };
 
-struct iv_eboiv_private {
-	struct crypto_cipher *tfm;
-};
-
 /*
  * Crypt: maps a linear range of a block device
  * and encrypts / decrypts at the same time.
@@ -163,7 +159,6 @@ struct crypt_config {
 		struct iv_benbi_private benbi;
 		struct iv_lmk_private lmk;
 		struct iv_tcw_private tcw;
-		struct iv_eboiv_private eboiv;
 	} iv_gen_private;
 	u64 iv_offset;
 	unsigned int iv_size;
@@ -847,65 +842,47 @@ static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv,
 	return 0;
 }
 
-static void crypt_iv_eboiv_dtr(struct crypt_config *cc)
-{
-	struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
-
-	crypto_free_cipher(eboiv->tfm);
-	eboiv->tfm = NULL;
-}
-
 static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti,
 			      const char *opts)
 {
-	struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
-	struct crypto_cipher *tfm;
-
-	tfm = crypto_alloc_cipher(cc->cipher, 0, 0);
-	if (IS_ERR(tfm)) {
-		ti->error = "Error allocating crypto tfm for EBOIV";
-		return PTR_ERR(tfm);
+	if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags)) {
+		ti->error = "AEAD transforms not supported for EBOIV";
+		return -EINVAL;
 	}
 
-	if (crypto_cipher_blocksize(tfm) != cc->iv_size) {
+	if (crypto_skcipher_blocksize(any_tfm(cc)) != cc->iv_size) {
 		ti->error = "Block size of EBOIV cipher does "
			    "not match IV size of block cipher";
-		crypto_free_cipher(tfm);
 		return -EINVAL;
 	}
 
-	eboiv->tfm = tfm;
 	return 0;
 }
 
-static int crypt_iv_eboiv_init(struct crypt_config *cc)
+static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
+			      struct dm_crypt_request *dmreq)
 {
-	struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
+	u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
+	struct skcipher_request *req;
+	struct scatterlist src, dst;
+	struct crypto_wait wait;
 	int err;
 
-	err = crypto_cipher_setkey(eboiv->tfm, cc->key, cc->key_size);
-	if (err)
-		return err;
-
-	return 0;
-}
-
-static int crypt_iv_eboiv_wipe(struct crypt_config *cc)
-{
-	/* Called after cc->key is set to random key in crypt_wipe() */
-	return crypt_iv_eboiv_init(cc);
-}
+	req = skcipher_request_alloc(any_tfm(cc), GFP_KERNEL | GFP_NOFS);
+	if (!req)
+		return -ENOMEM;
 
-static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
-			      struct dm_crypt_request *dmreq)
-{
-	struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
+	memset(buf, 0, cc->iv_size);
+	*(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size);
 
-	memset(iv, 0, cc->iv_size);
-	*(__le64 *)iv = cpu_to_le64(dmreq->iv_sector * cc->sector_size);
-	crypto_cipher_encrypt_one(eboiv->tfm, iv, iv);
+	sg_init_one(&src, page_address(ZERO_PAGE(0)), cc->iv_size);
+	sg_init_one(&dst, iv, cc->iv_size);
+	skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf);
+	skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
+	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
+	skcipher_request_free(req);
 
-	return 0;
+	return err;
 }
 
 static const struct crypt_iv_operations crypt_iv_plain_ops = {
@@ -962,9 +939,6 @@ static struct crypt_iv_operations crypt_iv_random_ops = {
 
 static struct crypt_iv_operations crypt_iv_eboiv_ops = {
 	.ctr = crypt_iv_eboiv_ctr,
-	.dtr = crypt_iv_eboiv_dtr,
-	.init = crypt_iv_eboiv_init,
-	.wipe = crypt_iv_eboiv_wipe,
 	.generator = crypt_iv_eboiv_gen
 };
Instead of instantiating a separate cipher to perform the encryption needed to produce the IV, reuse the skcipher used for the block data and invoke it one additional time for each block to encrypt a zero vector and use the output as the IV. For CBC mode, this is equivalent to using the bare block cipher, but without the risk of ending up with a non-time invariant implementation of AES when the skcipher itself is time variant (e.g., arm64 without Crypto Extensions has a NEON based time invariant implementation of cbc(aes) but no time invariant implementation of the core cipher other than aes-ti, which is not enabled by default) This approach is a compromise between dm-crypt API flexibility and reducing dependence on parts of the crypto API that should not usually be exposed to other subsystems, such as the bare cipher API. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> --- drivers/md/dm-crypt.c | 70 ++++++++++++++----------------------------- 1 file changed, 22 insertions(+), 48 deletions(-)
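The CBC equivalence claimed above is easy to verify from the code: the request encrypts a single all-zero block (taken from ZERO_PAGE) with the little-endian sector byte offset placed in the IV buffer, so for cbc(aes) the generated IV is

    \mathrm{IV} = E_K(0 \oplus \mathrm{offset}) = E_K(\mathrm{offset}), \qquad \mathrm{offset} = \mathrm{iv\_sector} \times \mathrm{sector\_size},

which is exactly what the removed crypto_cipher_encrypt_one() call computed on the bare cipher. For other skcipher modes the result is still a deterministic function of the key and the sector offset, just defined in terms of that mode's single-block encryption of zero rather than the raw block cipher.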