From: Stephan Mueller <smueller@chronox.de>
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org
Subject: [PATCH v2] crypto: add key wrapping block chaining mode
Date: Sun, 26 Apr 2015 00:08:20 +0200
Message-ID: <3407264.GJDOVGtEDe@myon.chronox.de>
In-Reply-To: <1515730.LIeS5qas5m@myon.chronox.de>
References: <1515730.LIeS5qas5m@myon.chronox.de>

This patch implements AES key wrapping as specified in NIST SP800-38F
and RFC3394. The implementation covers key wrapping without padding.

The caller may provide an IV. If no IV is provided, the default IV
defined in SP800-38F is used for key wrapping and unwrapping.

Key wrapping is an authenticated encryption operation without
associated data. Setting AAD is therefore permissible, but that data
is not used by the cipher implementation.

Although the standards define key wrapping for AES only, the template
can be used with any other block cipher that has a block size of
16 bytes.

Testing with CAVS test vectors for AES-128, AES-192 and AES-256, in
both encryption and decryption with plaintext sizes up to 4096 bytes,
has been conducted successfully.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
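
A minimal usage sketch for reviewers, following the buffer layout
documented at the top of keywrap.c below. It assumes a synchronous
cipher implementation (the blkcipher-based template is synchronous);
the helper name kw_wrap_example and its error handling are
illustrative only and not part of the patch:

	#include <linux/crypto.h>
	#include <linux/err.h>
	#include <linux/scatterlist.h>

	/*
	 * Hypothetical example, not part of this patch: wrap the ptlen
	 * bytes of key material in pt with the KEK in key. The out
	 * buffer must hold ivsize + ptlen bytes and receives the full
	 * SP800-38F result: A (first semiblock) || C[1] .. C[n].
	 */
	static int kw_wrap_example(const u8 *key, unsigned int keylen,
				   const u8 *pt, unsigned int ptlen,
				   u8 *out)
	{
		struct crypto_ablkcipher *tfm;
		struct ablkcipher_request *req;
		struct scatterlist sg;
		unsigned int ivsize;
		int ret;

		tfm = crypto_alloc_ablkcipher("kw(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		ivsize = crypto_ablkcipher_ivsize(tfm); /* one semiblock */

		ret = crypto_ablkcipher_setkey(tfm, key, keylen);
		if (ret)
			goto out_free_tfm;

		req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			ret = -ENOMEM;
			goto out_free_tfm;
		}

		/* plaintext semiblocks follow the IV semiblock */
		memcpy(out + ivsize, pt, ptlen);
		sg_init_one(&sg, out + ivsize, ptlen);
		ablkcipher_request_set_crypt(req, &sg, &sg, ptlen, out);

		/*
		 * Assumes a synchronous cipher; a caller of an async
		 * implementation must handle -EINPROGRESS/-EBUSY via a
		 * completion. On success, the final A value has been
		 * written into out[0..ivsize-1] through the IV buffer.
		 */
		ret = crypto_ablkcipher_encrypt(req);

		ablkcipher_request_free(req);
	out_free_tfm:
		crypto_free_ablkcipher(tfm);
		return ret;
	}
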
 crypto/Kconfig   |   7 +
 crypto/Makefile  |   1 +
 crypto/keywrap.c | 502 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 crypto/testmgr.c |  25 +++
 crypto/testmgr.h |  41 +++++
 5 files changed, 576 insertions(+)
 create mode 100644 crypto/keywrap.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 8aaf298..3d62d8a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -295,6 +295,13 @@ config CRYPTO_XTS
 	  key size 256, 384 or 512 bits. This implementation currently
 	  can't handle a sectorsize which is not a multiple of 16 bytes.
 
+config CRYPTO_KEYWRAP
+	tristate "Key wrapping support"
+	select CRYPTO_BLKCIPHER
+	help
+	  Support for key wrapping (NIST SP800-38F / RFC3394) without
+	  padding.
+
 comment "Hash modes"
 
 config CRYPTO_CMAC

diff --git a/crypto/Makefile b/crypto/Makefile
index 97b7d3a..d2f4b69 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -56,6 +56,7 @@ obj-$(CONFIG_CRYPTO_CTS) += cts.o
 obj-$(CONFIG_CRYPTO_LRW) += lrw.o
 obj-$(CONFIG_CRYPTO_XTS) += xts.o
 obj-$(CONFIG_CRYPTO_CTR) += ctr.o
+obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
 obj-$(CONFIG_CRYPTO_GCM) += gcm.o
 obj-$(CONFIG_CRYPTO_CCM) += ccm.o
 obj-$(CONFIG_CRYPTO_PCRYPT) += pcrypt.o
diff --git a/crypto/keywrap.c b/crypto/keywrap.c
new file mode 100644
index 0000000..d70b0b3
--- /dev/null
+++ b/crypto/keywrap.c
@@ -0,0 +1,502 @@
+/*
+ * Key Wrapping: RFC3394 / NIST SP800-38F
+ *
+ * Implemented modes as defined in NIST SP800-38F: KW
+ *
+ * Copyright (C) 2015, Stephan Mueller <smueller@chronox.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, and the entire permission notice in its entirety,
+ *    including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *    products derived from this software without specific prior
+ *    written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+/*
+ * Note for using key wrapping:
+ *
+ *	* The result of the encryption operation is the ciphertext starting
+ *	  with the 2nd semiblock. The first semiblock is provided as the IV.
+ *	  The IV used to start the encryption operation is the default IV.
+ *
+ *	* The input for the decryption is the first semiblock handed in as an
+ *	  IV. The ciphertext is the data starting with the 2nd semiblock. The
+ *	  return code of the decryption operation will be -EBADMSG in case an
+ *	  integrity error occurs.
+ *
+ * To obtain the full result of an encryption as expected by SP800-38F, the
+ * caller must allocate a buffer of plaintext size + 8 bytes:
+ *
+ *	unsigned int datalen = ptlen + crypto_ablkcipher_ivsize(tfm);
+ *	u8 data[datalen];
+ *	u8 *iv = data;
+ *	u8 *pt = data + crypto_ablkcipher_ivsize(tfm);
+ *
+ *	sg_init_one(&sg, pt, ptlen);
+ *	ablkcipher_request_set_crypt(req, &sg, &sg, ptlen, iv);
+ *
+ *	==> After encryption, data now contains the full KW result as per
+ *	    SP800-38F.
+ *
+ * In case of decryption, the ciphertext already has the expected length
+ * and must be segmented appropriately:
+ *
+ *	unsigned int datalen = CTLEN;
+ *	u8 data[datalen];
+ *	u8 *iv = data;
+ *	u8 *ct = data + crypto_ablkcipher_ivsize(tfm);
+ *	unsigned int ctlen = datalen - crypto_ablkcipher_ivsize(tfm);
+ *
+ *	sg_init_one(&sg, ct, ctlen);
+ *	ablkcipher_request_set_crypt(req, &sg, &sg, ctlen, iv);
+ *
+ *	==> After decryption (which hopefully does not return -EBADMSG), the
+ *	    ct pointer now points to the plaintext of size ctlen.
+ */
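+
+/*
+ * For reference: the functions below implement the "alternative
+ * description" of the wrap and unwrap processes from RFC3394, sections
+ * 2.2.1 and 2.2.2, operating on 64-bit semiblocks R[1] .. R[n] with
+ * t = n * j + i:
+ *
+ * Wrap:
+ *	A = IV; R[i] = P[i] for i = 1 .. n
+ *	for j = 0 .. 5, for i = 1 .. n:
+ *		B = CIPH_K(A | R[i])
+ *		A = MSB_64(B) ^ t
+ *		R[i] = LSB_64(B)
+ *	output: A | R[1] | .. | R[n]
+ *
+ * Unwrap:
+ *	A = C[0]; R[i] = C[i] for i = 1 .. n
+ *	for j = 5 .. 0, for i = n .. 1:
+ *		B = CIPH_K^-1((A ^ t) | R[i])
+ *		A = MSB_64(B)
+ *		R[i] = LSB_64(B)
+ *	verify A == 0xA6A6A6A6A6A6A6A6, output: R[1] | .. | R[n]
+ */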
+
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <linux/scatterlist.h>
+#include <crypto/algapi.h>
+#include <crypto/scatterwalk.h>
+
+struct crypto_kw_ctx {
+	struct crypto_cipher *child;
+};
+
+struct crypto_rfc3394_ctx {
+	struct crypto_ablkcipher *child;
+};
+
+struct crypto_kw_block {
+#define SEMIBSIZE 8
+	u8 A[SEMIBSIZE];
+	u8 R[SEMIBSIZE];
+};
+
+/* convert a 64 bit integer into its big-endian representation */
+static inline void crypto_kw_cpu_to_be64(u64 val, u8 *buf)
+{
+	struct s {
+		__be64 conv;
+	};
+	struct s *conversion = (struct s *) buf;
+
+	conversion->conv = cpu_to_be64(val);
+}
+
+static inline void crypto_kw_copy_scatterlist(struct scatterlist *src,
+					      struct scatterlist *dst)
+{
+	memcpy(dst, src, sizeof(struct scatterlist));
+}
+
+/* find the next memory block in the scatter_walk of the given size */
+static inline bool crypto_kw_scatterwalk_find(struct scatter_walk *walk,
+					      unsigned int size)
+{
+	int n = scatterwalk_clamp(walk, size);
+
+	if (!n) {
+		scatterwalk_start(walk, sg_next(walk->sg));
+		n = scatterwalk_clamp(walk, size);
+	}
+	if (n != size)
+		return false;
+	return true;
+}
+
+/*
+ * Copy a memory block of the requested size from or to the scatter_walk,
+ * ending at the walk->offset pointer. The scatter_walk is processed in
+ * reverse.
+ */
+static bool crypto_kw_scatterwalk_memcpy_rev(struct scatter_walk *walk,
+					     unsigned int *walklen,
+					     u8 *buf, unsigned int bufsize,
+					     bool out)
+{
+	u8 *ptr = NULL;
+
+	walk->offset -= bufsize;
+	if (!crypto_kw_scatterwalk_find(walk, bufsize))
+		return false;
+
+	ptr = scatterwalk_map(walk);
+	if (out)
+		memcpy(ptr, buf, bufsize);
+	else
+		memcpy(buf, ptr, bufsize);
+	*walklen -= bufsize;
+	scatterwalk_unmap(ptr);
+	scatterwalk_done(walk, 0, *walklen);
+
+	return true;
+}
+
+/*
+ * Copy a memory block of the requested size from or to the scatter_walk,
+ * starting at the walk->offset pointer. The scatter_walk is processed
+ * forward.
+ */
+static bool crypto_kw_scatterwalk_memcpy(struct scatter_walk *walk,
+					 unsigned int *walklen,
+					 u8 *buf, unsigned int bufsize,
+					 bool out)
+{
+	u8 *ptr = NULL;
+
+	if (!crypto_kw_scatterwalk_find(walk, bufsize))
+		return false;
+
+	ptr = scatterwalk_map(walk);
+	if (out)
+		memcpy(ptr, buf, bufsize);
+	else
+		memcpy(buf, ptr, bufsize);
+	*walklen -= bufsize;
+	scatterwalk_unmap(ptr);
+	scatterwalk_advance(walk, bufsize);
+	scatterwalk_done(walk, 0, *walklen);
+
+	return true;
+}
+
+static int crypto_kw_decrypt(struct blkcipher_desc *desc,
+			     struct scatterlist *dst, struct scatterlist *src,
+			     unsigned int nbytes)
+{
+	struct crypto_blkcipher *tfm = desc->tfm;
+	struct crypto_kw_ctx *ctx = crypto_blkcipher_ctx(tfm);
+	struct crypto_cipher *child = ctx->child;
+
+	unsigned long alignmask = max_t(unsigned long, 4,
+					crypto_cipher_alignmask(child));
+	unsigned int src_nbytes, dst_nbytes, i;
+	struct scatter_walk src_walk, dst_walk;
+
+	u8 blockbuf[sizeof(struct crypto_kw_block) + alignmask];
+	struct crypto_kw_block *block = (struct crypto_kw_block *)
+					PTR_ALIGN(blockbuf + 0, alignmask + 1);
+
+	u8 tmpblock[SEMIBSIZE];
+	u64 t = 6 * (nbytes >> 3);
+	int ret = -EAGAIN;
+	struct scatterlist lsrc, ldst;
+
+	/*
+	 * Require at least 2 semiblocks (the 3rd semiblock required by
+	 * SP800-38F is the IV) and ensure that the given data is aligned
+	 * to the semiblock size.
+	 */
+	if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
+		return -EINVAL;
+
+	memcpy(block->A, desc->info, SEMIBSIZE);
+
+	/*
+	 * The src scatterlist is read-only, the dst scatterlist is r/w.
+	 * During the first round, lsrc points to the caller's src and ldst
+	 * to the caller's dst. For any subsequent round, the code operates
+	 * on dst only.
+	 */
+	crypto_kw_copy_scatterlist(src, &lsrc);
+	crypto_kw_copy_scatterlist(dst, &ldst);
+
+	for (i = 0; i < 6; i++) {
+		u8 tbe_buffer[SEMIBSIZE + alignmask];
+		/* alignment for the crypto_xor operation */
+		u8 *tbe = PTR_ALIGN(tbe_buffer + 0, alignmask + 1);
+		bool first_loop = true;
+
+		scatterwalk_start(&src_walk, &lsrc);
+		scatterwalk_start(&dst_walk, &ldst);
+		src_nbytes = dst_nbytes = nbytes;
+
+		/*
+		 * Point to the end of the scatterlists to walk them
+		 * backwards.
+		 */
+		src_walk.offset += src_nbytes;
+		dst_walk.offset += dst_nbytes;
+
+		while (src_nbytes) {
+			if (!crypto_kw_scatterwalk_memcpy_rev(&src_walk,
+					&src_nbytes, block->R, SEMIBSIZE,
+					false))
+				goto out;
+
+			crypto_kw_cpu_to_be64(t, tbe);
+			crypto_xor(block->A, tbe, SEMIBSIZE);
+			t--;
+			crypto_cipher_decrypt_one(child, (u8 *)block,
+						  (u8 *)block);
+
+			if (!first_loop) {
+				/*
+				 * Copy block->R from the last round into
+				 * place.
+				 */
+				if (!crypto_kw_scatterwalk_memcpy_rev(
+						&dst_walk, &dst_nbytes,
+						tmpblock, SEMIBSIZE, true))
+					goto out;
+			} else {
+				first_loop = false;
+			}
+
+			/*
+			 * Store the current block->R in the temp buffer to
+			 * copy it in place in the next round.
+			 */
+			memcpy(tmpblock, block->R, SEMIBSIZE);
+		}
+
+		/* process the final block->R */
+		if (!crypto_kw_scatterwalk_memcpy_rev(&dst_walk, &dst_nbytes,
+						      tmpblock, SEMIBSIZE,
+						      true))
+			goto out;
+
+		/* we now start to operate on the dst buffers only */
+		crypto_kw_copy_scatterlist(dst, &lsrc);
+		crypto_kw_copy_scatterlist(dst, &ldst);
+	}
+
+	if (crypto_memneq("\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", block->A,
+			  SEMIBSIZE))
+		ret = -EBADMSG;
+	else
+		ret = 0;
+
+out:
+	memzero_explicit(block, sizeof(struct crypto_kw_block));
+	memzero_explicit(tmpblock, sizeof(tmpblock));
+
+	return ret;
+}
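+
+/*
+ * Wrap operation: the n semiblocks in src are wrapped into n semiblocks
+ * in dst, and the final A value - the first semiblock of the SP800-38F
+ * ciphertext - is written to the IV buffer in desc->info. src and dst
+ * may reference the same memory, as each output semiblock is written
+ * with a delay of one iteration through tmpblock, after the overlapping
+ * input semiblock has been read.
+ */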
+static int crypto_kw_encrypt(struct blkcipher_desc *desc,
+			     struct scatterlist *dst, struct scatterlist *src,
+			     unsigned int nbytes)
+{
+	struct crypto_blkcipher *tfm = desc->tfm;
+	struct crypto_kw_ctx *ctx = crypto_blkcipher_ctx(tfm);
+	struct crypto_cipher *child = ctx->child;
+
+	unsigned long alignmask = max_t(unsigned long, 4,
+					crypto_cipher_alignmask(child));
+	unsigned int src_nbytes, dst_nbytes, i;
+	struct scatter_walk src_walk, dst_walk;
+
+	u8 blockbuf[sizeof(struct crypto_kw_block) + alignmask];
+	struct crypto_kw_block *block = (struct crypto_kw_block *)
+					PTR_ALIGN(blockbuf + 0, alignmask + 1);
+
+	u8 tmpblock[SEMIBSIZE];
+	u64 t = 1;
+	struct scatterlist lsrc, ldst;
+	int ret = -EAGAIN;
+
+	/*
+	 * Require at least 2 semiblocks (the 3rd semiblock required by
+	 * SP800-38F is the IV that occupies the first semiblock of the
+	 * result, i.e. the complete output is one semiblock larger than
+	 * src). Also ensure that the given data is aligned to the
+	 * semiblock size.
+	 */
+	if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
+		return -EINVAL;
+
+	memcpy(block->A, "\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", SEMIBSIZE);
+
+	/*
+	 * The src scatterlist is read-only, the dst scatterlist is r/w.
+	 * During the first round, lsrc points to the caller's src and ldst
+	 * to the caller's dst. For any subsequent round, the code operates
+	 * on dst only.
+	 */
+	crypto_kw_copy_scatterlist(src, &lsrc);
+	crypto_kw_copy_scatterlist(dst, &ldst);
+
+	for (i = 0; i < 6; i++) {
+		u8 tbe_buffer[SEMIBSIZE + alignmask];
+		/* alignment for the crypto_xor operation */
+		u8 *tbe = PTR_ALIGN(tbe_buffer + 0, alignmask + 1);
+		bool first_loop = true;
+
+		scatterwalk_start(&src_walk, &lsrc);
+		scatterwalk_start(&dst_walk, &ldst);
+		src_nbytes = dst_nbytes = nbytes;
+
+		while (src_nbytes) {
+			if (!crypto_kw_scatterwalk_memcpy(&src_walk,
+					&src_nbytes, block->R, SEMIBSIZE,
+					false))
+				goto out;
+
+			crypto_cipher_encrypt_one(child, (u8 *)block,
+						  (u8 *)block);
+			crypto_kw_cpu_to_be64(t, tbe);
+			crypto_xor(block->A, tbe, SEMIBSIZE);
+			t++;
+
+			if (!first_loop) {
+				/*
+				 * Copy block->R from the last round into
+				 * place.
+				 */
+				if (!crypto_kw_scatterwalk_memcpy(&dst_walk,
+						&dst_nbytes, tmpblock,
+						SEMIBSIZE, true))
+					goto out;
+			} else {
+				first_loop = false;
+			}
+
+			/*
+			 * Store the current block->R in the temp buffer to
+			 * copy it in place in the next round.
+			 */
+			memcpy(tmpblock, block->R, SEMIBSIZE);
+		}
+
+		/* process the final block->R */
+		if (!crypto_kw_scatterwalk_memcpy(&dst_walk, &dst_nbytes,
+						  tmpblock, SEMIBSIZE, true))
+			goto out;
+
+		/* we now start to operate on the dst buffers only */
+		crypto_kw_copy_scatterlist(dst, &lsrc);
+		crypto_kw_copy_scatterlist(dst, &ldst);
+	}
+
+	/* establish the final IV, i.e. the first semiblock of the result */
+	memcpy(desc->info, block->A, SEMIBSIZE);
+
+	ret = 0;
+out:
+	memzero_explicit(block, sizeof(struct crypto_kw_block));
+	memzero_explicit(tmpblock, sizeof(tmpblock));
+	return ret;
+}
+
+static int crypto_kw_setkey(struct crypto_tfm *parent, const u8 *key,
+			    unsigned int keylen)
+{
+	struct crypto_kw_ctx *ctx = crypto_tfm_ctx(parent);
+	struct crypto_cipher *child = ctx->child;
+	int err;
+
+	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+				       CRYPTO_TFM_REQ_MASK);
+	err = crypto_cipher_setkey(child, key, keylen);
+	crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+				     CRYPTO_TFM_RES_MASK);
+	return err;
+}
+
+static int crypto_kw_init_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+	struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct crypto_cipher *cipher;
+
+	cipher = crypto_spawn_cipher(spawn);
+	if (IS_ERR(cipher))
+		return PTR_ERR(cipher);
+
+	ctx->child = cipher;
+	return 0;
+}
+
+static void crypto_kw_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *crypto_kw_alloc(struct rtattr **tb)
+{
+	struct crypto_instance *inst = NULL;
+	struct crypto_alg *alg = NULL;
+	int err;
+
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
+	if (err)
+		return ERR_PTR(err);
+
+	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
+				  CRYPTO_ALG_TYPE_MASK);
+	if (IS_ERR(alg))
+		return ERR_CAST(alg);
+
+	inst = ERR_PTR(-EINVAL);
+	/* Section 5.1 requirement for KW and KWP */
+	if (alg->cra_blocksize != 2 * SEMIBSIZE)
+		goto err;
+
+	inst = crypto_alloc_instance("kw", alg);
+	if (IS_ERR(inst))
+		goto err;
+
+	inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+	inst->alg.cra_priority = alg->cra_priority;
+	inst->alg.cra_blocksize = SEMIBSIZE;
+	inst->alg.cra_alignmask = 0;
+	inst->alg.cra_type = &crypto_blkcipher_type;
+	inst->alg.cra_blkcipher.ivsize = SEMIBSIZE;
+	inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
+	inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
+
+	inst->alg.cra_ctxsize = sizeof(struct crypto_kw_ctx);
+
+	inst->alg.cra_init = crypto_kw_init_tfm;
+	inst->alg.cra_exit = crypto_kw_exit_tfm;
+
+	inst->alg.cra_blkcipher.setkey = crypto_kw_setkey;
+	inst->alg.cra_blkcipher.encrypt = crypto_kw_encrypt;
+	inst->alg.cra_blkcipher.decrypt = crypto_kw_decrypt;
+
+err:
+	crypto_mod_put(alg);
+	return inst;
+}
+
+static void crypto_kw_free(struct crypto_instance *inst)
+{
+	crypto_drop_spawn(crypto_instance_ctx(inst));
+	kfree(inst);
+}
+
+static struct crypto_template crypto_kw_tmpl = {
+	.name = "kw",
+	.alloc = crypto_kw_alloc,
+	.free = crypto_kw_free,
+	.module = THIS_MODULE,
+};
+
+static int __init crypto_kw_init(void)
+{
+	return crypto_register_template(&crypto_kw_tmpl);
+}
+
+static void __exit crypto_kw_exit(void)
+{
+	crypto_unregister_template(&crypto_kw_tmpl);
+}
+
+module_init(crypto_kw_init);
+module_exit(crypto_kw_exit);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Stephan Mueller <smueller@chronox.de>");
+MODULE_DESCRIPTION("Key Wrapping (RFC3394 / NIST SP800-38F)");
+MODULE_ALIAS_CRYPTO("kw");
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index d463978..4744437 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1021,6 +1021,15 @@ static int __test_skcipher(struct crypto_ablkcipher *tfm, int enc,
 			ret = -EINVAL;
 			goto out;
 		}
+		if (template[i].ivout &&
+		    memcmp(req->info, template[i].ivout,
+			   crypto_ablkcipher_ivsize(tfm))) {
+			pr_err("alg: skcipher%s: IV-test %d failed on %s for %s\n",
+			       d, j, e, algo);
+			hexdump(req->info, crypto_ablkcipher_ivsize(tfm));
+			ret = -EINVAL;
+			goto out;
+		}
 	}
 
 	j = 0;
@@ -3097,6 +3106,22 @@ static const struct alg_test_desc alg_test_descs[] = {
 			}
 		}
 	}, {
+		.alg = "kw(aes)",
+		.test = alg_test_skcipher,
+		.fips_allowed = 1,
+		.suite = {
+			.cipher = {
+				.enc = {
+					.vecs = aes_kw_enc_tv_template,
+					.count = ARRAY_SIZE(aes_kw_enc_tv_template)
+				},
+				.dec = {
+					.vecs = aes_kw_dec_tv_template,
+					.count = ARRAY_SIZE(aes_kw_dec_tv_template)
+				}
+			}
+		}
+	}, {
 		.alg = "lrw(aes)",
 		.test = alg_test_skcipher,
 		.suite = {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 62e2485..a9845fc 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -49,6 +49,7 @@ struct hash_testvec {
 struct cipher_testvec {
 	char *key;
 	char *iv;
+	char *ivout;
 	char *input;
 	char *result;
 	unsigned short tap[MAX_TAP];
@@ -20704,6 +20705,46 @@ static struct aead_testvec aes_ccm_rfc4309_dec_tv_template[] = {
 };
 
 /*
+ * All key wrapping test vectors taken from
+ * http://csrc.nist.gov/groups/STM/cavp/documents/mac/kwtestvectors.zip
+ *
+ * Note: as documented in keywrap.c, the ivout for encryption is the first
+ * semiblock of the ciphertext from the test vector. For decryption, iv is
+ * the first semiblock of the ciphertext.
+ */
+static struct cipher_testvec aes_kw_enc_tv_template[] = {
+	{
+		.key	= "\x75\x75\xda\x3a\x93\x60\x7c\xc2"
+			  "\xbf\xd8\xce\xc7\xaa\xdf\xd9\xa6",
+		.klen	= 16,
+		.input	= "\x42\x13\x6d\x3c\x38\x4a\x3e\xea"
+			  "\xc9\x5a\x06\x6f\xd2\x8f\xed\x3f",
+		.ilen	= 16,
+		.result	= "\xf6\x85\x94\x81\x6f\x64\xca\xa3"
+			  "\xf5\x6f\xab\xea\x25\x48\xf5\xfb",
+		.rlen	= 16,
+		.ivout	= "\x03\x1f\x6b\xd7\xe6\x1e\x64\x3d",
+	},
+};
+
+static struct cipher_testvec aes_kw_dec_tv_template[] = {
+	{
+		.key	= "\x80\xaa\x99\x73\x27\xa4\x80\x6b"
+			  "\x6a\x7a\x41\xa5\x2b\x86\xc3\x71"
+			  "\x03\x86\xf9\x32\x78\x6e\xf7\x96"
+			  "\x76\xfa\xfb\x90\xb8\x26\x3c\x5f",
+		.klen	= 32,
+		.input	= "\xd3\x3d\x3d\x97\x7b\xf0\xa9\x15"
+			  "\x59\xf9\x9c\x8a\xcd\x29\x3d\x43",
+		.ilen	= 16,
+		.result	= "\x0a\x25\x6b\xa7\x5c\xfa\x03\xaa"
+			  "\xa0\x2b\xa9\x42\x03\xf1\x5b\xaa",
+		.rlen	= 16,
+		.iv	= "\x42\x3c\x96\x0d\x8a\x2a\xc4\xc1",
+	},
+};
+
+/*
 * ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode)
 * test vectors, taken from Appendix B.2.9 and B.2.10:
 * http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf