From patchwork Mon Feb  1 18:02:29 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>, stable@vger.kernel.org
Subject: [PATCH 1/9] crypto: michael_mic - fix broken misalignment handling
Date: Mon, 1 Feb 2021 19:02:29 +0100
Message-Id: <20210201180237.3171-2-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

The Michael MIC driver uses the cra_alignmask to ensure that pointers
presented to its update and finup/final methods are 32-bit aligned.
However, due to the way the shash API works, this does not guarantee
that the 32-bit reads occurring in the update method are also aligned,
as the buffers presented to update may be of uneven length. For
instance, an update() of 3 bytes followed by an update() of 4 or more
bytes will result in a misaligned access using an accessor that is not
suitable for this.

On most architectures, this does not matter, and so setting the
cra_alignmask is pointless. On architectures where this does matter,
setting the cra_alignmask does not actually solve the problem.

So let's get rid of the cra_alignmask, and use unaligned accessors
instead, where appropriate.
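To make the failure mode concrete, here is a small stand-alone sketch of the scenario described above. The get_unaligned_le32() helper below is a portable user-space stand-in for the kernel accessor of the same name (the kernel's asm/unaligned.h is not usable outside the tree); the buffer contents and offsets are invented for illustration.

#include <stdint.h>
#include <stdio.h>

/* Portable stand-in for the kernel's get_unaligned_le32(): byte loads
 * carry no alignment requirement, so any address is acceptable. */
static uint32_t get_unaligned_le32(const void *p)
{
	const uint8_t *b = p;

	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(void)
{
	/* The buffer itself is 32-bit aligned, as cra_alignmask = 3 ensures. */
	_Alignas(4) uint8_t buf[16];
	int i;

	for (i = 0; i < 16; i++)
		buf[i] = i;

	/*
	 * After update(buf, 3), three bytes are pending. A follow-up
	 * update() consumes one byte to complete the pending word, and
	 * then loops over 32-bit words starting at offset 1 into its own
	 * (aligned) buffer, i.e. at a misaligned address. A plain
	 * (const __le32 *) dereference there is undefined behaviour on
	 * strict-alignment architectures; the byte-wise accessor is
	 * always well-defined.
	 */
	printf("word at misaligned offset 1: %08x\n",
	       get_unaligned_le32(buf + 1));
	return 0;
}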
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/michael_mic.c | 31 ++++++++------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/crypto/michael_mic.c b/crypto/michael_mic.c
index 63350c4ad461..f4c31049601c 100644
--- a/crypto/michael_mic.c
+++ b/crypto/michael_mic.c
@@ -7,7 +7,7 @@
  * Copyright (c) 2004 Jouni Malinen <j@w1.fi>
  */
 #include <crypto/internal/hash.h>
-#include <asm/byteorder.h>
+#include <asm/unaligned.h>
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/string.h>
@@ -19,7 +19,7 @@ struct michael_mic_ctx {
 };
 
 struct michael_mic_desc_ctx {
-	u8 pending[4];
+	__le32 pending;
 	size_t pending_len;
 
 	u32 l, r;
@@ -60,13 +60,12 @@ static int michael_update(struct shash_desc *desc, const u8 *data,
 			   unsigned int len)
 {
 	struct michael_mic_desc_ctx *mctx = shash_desc_ctx(desc);
-	const __le32 *src;
 
 	if (mctx->pending_len) {
 		int flen = 4 - mctx->pending_len;
 		if (flen > len)
 			flen = len;
-		memcpy(&mctx->pending[mctx->pending_len], data, flen);
+		memcpy((u8 *)&mctx->pending + mctx->pending_len, data, flen);
 		mctx->pending_len += flen;
 		data += flen;
 		len -= flen;
@@ -74,23 +73,21 @@ static int michael_update(struct shash_desc *desc, const u8 *data,
 		if (mctx->pending_len < 4)
 			return 0;
 
-		src = (const __le32 *)mctx->pending;
-		mctx->l ^= le32_to_cpup(src);
+		mctx->l ^= le32_to_cpu(mctx->pending);
 		michael_block(mctx->l, mctx->r);
 		mctx->pending_len = 0;
 	}
 
-	src = (const __le32 *)data;
-
 	while (len >= 4) {
-		mctx->l ^= le32_to_cpup(src++);
+		mctx->l ^= get_unaligned_le32(data);
 		michael_block(mctx->l, mctx->r);
+		data += 4;
 		len -= 4;
 	}
 
 	if (len > 0) {
 		mctx->pending_len = len;
-		memcpy(mctx->pending, src, len);
+		memcpy(&mctx->pending, data, len);
 	}
 
 	return 0;
@@ -100,8 +97,7 @@ static int michael_update(struct shash_desc *desc, const u8 *data,
 static int michael_final(struct shash_desc *desc, u8 *out)
 {
 	struct michael_mic_desc_ctx *mctx = shash_desc_ctx(desc);
-	u8 *data = mctx->pending;
-	__le32 *dst = (__le32 *)out;
+	u8 *data = (u8 *)&mctx->pending;
 
 	/* Last block and padding (0x5a, 4..7 x 0) */
 	switch (mctx->pending_len) {
@@ -123,8 +119,8 @@ static int michael_final(struct shash_desc *desc, u8 *out)
 	/* l ^= 0; */
 	michael_block(mctx->l, mctx->r);
 
-	dst[0] = cpu_to_le32(mctx->l);
-	dst[1] = cpu_to_le32(mctx->r);
+	put_unaligned_le32(mctx->l, out);
+	put_unaligned_le32(mctx->r, out + 4);
 
 	return 0;
 }
@@ -135,13 +131,11 @@ static int michael_setkey(struct crypto_shash *tfm, const u8 *key,
 {
 	struct michael_mic_ctx *mctx = crypto_shash_ctx(tfm);
 
-	const __le32 *data = (const __le32 *)key;
-
 	if (keylen != 8)
 		return -EINVAL;
 
-	mctx->l = le32_to_cpu(data[0]);
-	mctx->r = le32_to_cpu(data[1]);
+	mctx->l = get_unaligned_le32(key);
+	mctx->r = get_unaligned_le32(key + 4);
 	return 0;
 }
 
@@ -156,7 +150,6 @@ static struct shash_alg alg = {
 		.cra_name	=	"michael_mic",
 		.cra_driver_name=	"michael_mic-generic",
 		.cra_blocksize	=	8,
-		.cra_alignmask	=	3,
 		.cra_ctxsize	=	sizeof(struct michael_mic_ctx),
 		.cra_module	=	THIS_MODULE,
 	}

From patchwork Mon Feb  1 18:02:30 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 2/9] crypto: serpent - get rid of obsolete tnepres variant
Date: Mon, 1 Feb 2021 19:02:30 +0100
Message-Id: <20210201180237.3171-3-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

It is not trivial to trace back why exactly the tnepres variant of
serpent was added ~17 years ago - Google searches come up mostly
empty, but it seems to be related to the 'kerneli' version, which was
based on an incorrect interpretation of the serpent spec. In other
words, nobody is likely to care anymore today, so let's get rid of it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/Kconfig           |  3 +-
 crypto/serpent_generic.c | 82 ++------------------
 crypto/tcrypt.c          |  6 +-
 crypto/testmgr.c         |  6 --
 crypto/testmgr.h         | 79 -------------------
 5 files changed, 7 insertions(+), 169 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 9779c7f7531f..15c9c28d9f53 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1460,8 +1460,7 @@ config CRYPTO_SERPENT
 	  Serpent cipher algorithm, by Anderson, Biham & Knudsen.
 
 	  Keys are allowed to be from 0 to 256 bits in length, in steps
-	  of 8 bits.  Also includes the 'Tnepres' algorithm, a reversed
-	  variant of Serpent for compatibility with old kerneli.org code.
+	  of 8 bits.
 
 	  See also:
 	  <https://www.cl.cam.ac.uk/~rja14/serpent.html>
diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c
index 492c1d0bfe06..a932e0b2964f 100644
--- a/crypto/serpent_generic.c
+++ b/crypto/serpent_generic.c
@@ -5,11 +5,6 @@
  * Serpent Cipher Algorithm.
 *
 * Copyright (C) 2002 Dag Arne Osvik <osvik@ii.uib.no>
- *		 2003 Herbert Valerio Riedel <hvr@gnu.org>
- *
- * Added tnepres support:
- *		Ruben Jesus Garcia Hernandez <ruben@ugr.es>, 18.10.2004
- *		Based on code by hvr
 */
 
 #include <linux/init.h>
@@ -576,59 +571,7 @@ static void serpent_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	__serpent_decrypt(ctx, dst, src);
 }
 
-static int tnepres_setkey(struct crypto_tfm *tfm, const u8 *key,
-			  unsigned int keylen)
-{
-	u8 rev_key[SERPENT_MAX_KEY_SIZE];
-	int i;
-
-	for (i = 0; i < keylen; ++i)
-		rev_key[keylen - i - 1] = key[i];
-
-	return serpent_setkey(tfm, rev_key, keylen);
-}
-
-static void tnepres_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
-{
-	const u32 * const s = (const u32 * const)src;
-	u32 * const d = (u32 * const)dst;
-
-	u32 rs[4], rd[4];
-
-	rs[0] = swab32(s[3]);
-	rs[1] = swab32(s[2]);
-	rs[2] = swab32(s[1]);
-	rs[3] = swab32(s[0]);
-
-	serpent_encrypt(tfm, (u8 *)rd, (u8 *)rs);
-
-	d[0] = swab32(rd[3]);
-	d[1] = swab32(rd[2]);
-	d[2] = swab32(rd[1]);
-	d[3] = swab32(rd[0]);
-}
-
-static void tnepres_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
-{
-	const u32 * const s = (const u32 * const)src;
-	u32 * const d = (u32 * const)dst;
-
-	u32 rs[4], rd[4];
-
-	rs[0] = swab32(s[3]);
-	rs[1] = swab32(s[2]);
-	rs[2] = swab32(s[1]);
-	rs[3] = swab32(s[0]);
-
-	serpent_decrypt(tfm, (u8 *)rd, (u8 *)rs);
-
-	d[0] = swab32(rd[3]);
-	d[1] = swab32(rd[2]);
-	d[2] = swab32(rd[1]);
-	d[3] = swab32(rd[0]);
-}
-
-static struct crypto_alg srp_algs[2] = { {
+static struct crypto_alg srp_alg = {
 	.cra_name		=	"serpent",
 	.cra_driver_name	=	"serpent-generic",
 	.cra_priority		=	100,
@@ -643,38 +586,23 @@ static struct crypto_alg srp_algs[2] = { {
 			.cia_setkey		=	serpent_setkey,
 			.cia_encrypt		=	serpent_encrypt,
 			.cia_decrypt		=	serpent_decrypt } }
-}, {
-	.cra_name		=	"tnepres",
-	.cra_driver_name	=	"tnepres-generic",
-	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
-	.cra_blocksize		=	SERPENT_BLOCK_SIZE,
-	.cra_ctxsize		=	sizeof(struct serpent_ctx),
-	.cra_alignmask		=	3,
-	.cra_module		=	THIS_MODULE,
-	.cra_u			=	{ .cipher = {
-			.cia_min_keysize	=	SERPENT_MIN_KEY_SIZE,
-			.cia_max_keysize	=	SERPENT_MAX_KEY_SIZE,
-			.cia_setkey		=	tnepres_setkey,
-			.cia_encrypt		=	tnepres_encrypt,
-			.cia_decrypt		=	tnepres_decrypt } }
-} };
+};
 
 static int __init serpent_mod_init(void)
 {
-	return crypto_register_algs(srp_algs, ARRAY_SIZE(srp_algs));
+	return crypto_register_alg(&srp_alg);
 }
 
 static void __exit serpent_mod_fini(void)
 {
-	crypto_unregister_algs(srp_algs, ARRAY_SIZE(srp_algs));
+	crypto_unregister_alg(&srp_alg);
 }
 
 subsys_initcall(serpent_mod_init);
 module_exit(serpent_mod_fini);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("Serpent and tnepres (kerneli compatible serpent reversed) Cipher Algorithm");
+MODULE_DESCRIPTION("Serpent Cipher Algorithm");
 MODULE_AUTHOR("Dag Arne Osvik <osvik@ii.uib.no>");
-MODULE_ALIAS_CRYPTO("tnepres");
 MODULE_ALIAS_CRYPTO("serpent");
 MODULE_ALIAS_CRYPTO("serpent-generic");
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 2877b88cfa45..6b7c158dc508 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -70,7 +70,7 @@ static const char *check[] = {
 	"des", "md5", "des3_ede", "rot13", "sha1", "sha224", "sha256", "sm3",
 	"blowfish", "twofish", "serpent", "sha384", "sha512", "md4", "aes",
 	"cast6", "arc4", "michael_mic", "deflate", "crc32c", "tea", "xtea",
-	"khazad", "wp512", "wp384", "wp256", "tnepres", "xeta", "fcrypt",
+	"khazad", "wp512", "wp384", "wp256", "xeta", "fcrypt",
 	"camellia", "seed", "rmd160",
 	"lzo", "lzo-rle", "cts", "sha3-224", "sha3-256", "sha3-384",
 	"sha3-512", "streebog256", "streebog512",
@@ -1806,10 +1806,6 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("wp256");
 		break;
 
-	case 25:
-		ret += tcrypt_test("ecb(tnepres)");
-		break;
-
 	case 26:
 		ret += tcrypt_test("ecb(anubis)");
 		ret += tcrypt_test("cbc(anubis)");
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 1a4103b1b202..93359999c94b 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -4876,12 +4876,6 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.cipher = __VECS(tea_tv_template)
 		}
-	}, {
-		.alg = "ecb(tnepres)",
-		.test = alg_test_skcipher,
-		.suite = {
-			.cipher = __VECS(tnepres_tv_template)
-		}
 	}, {
 		.alg = "ecb(twofish)",
 		.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 99aca08263d2..ced56ea0c9b4 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -11415,85 +11415,6 @@ static const struct cipher_testvec serpent_tv_template[] = {
 	},
 };
 
-static const struct cipher_testvec tnepres_tv_template[] = {
-	{ /* KeySize=0 */
-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-		.ctext	= "\x41\xcc\x6b\x31\x59\x31\x45\x97"
-			  "\x6d\x6f\xbb\x38\x4b\x37\x21\x28",
-		.len	= 16,
-	},
-	{ /* KeySize=128, PT=0, I=1 */
-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
-		.key	= "\x80\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
-		.klen	= 16,
-		.ctext	= "\x49\xaf\xbf\xad\x9d\x5a\x34\x05"
-			  "\x2c\xd8\xff\xa5\x98\x6b\xd2\xdd",
-		.len	= 16,
-	}, { /* KeySize=128 */
-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-		.klen	= 16,
-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-		.ctext	= "\xea\xf4\xd7\xfc\xd8\x01\x34\x47"
-			  "\x81\x45\x0b\xfa\x0c\xd6\xad\x6e",
-		.len	= 16,
-	}, { /* KeySize=128, I=121 */
-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80",
-		.klen	= 16,
-		.ptext	= zeroed_string,
-		.ctext	= "\x3d\xda\xbf\xc0\x06\xda\xab\x06"
-			  "\x46\x2a\xf4\xef\x81\x54\x4e\x26",
-		.len	= 16,
-	}, { /* KeySize=192, PT=0, I=1 */
-		.key	= "\x80\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
-		.klen	= 24,
-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
-		.ctext	= "\xe7\x8e\x54\x02\xc7\x19\x55\x68"
-			  "\xac\x36\x78\xf7\xa3\xf6\x0c\x66",
-		.len	= 16,
-	}, { /* KeySize=256, PT=0, I=1 */
-		.key	= "\x80\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
-		.klen	= 32,
-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
-		.ctext	= "\xab\xed\x96\xe7\x66\xbf\x28\xcb"
-			  "\xc0\xeb\xd2\x1a\x82\xef\x08\x19",
-		.len	= 16,
-	}, { /* KeySize=256, I=257 */
-		.key	= "\x1f\x1e\x1d\x1c\x1b\x1a\x19\x18"
-			  "\x17\x16\x15\x14\x13\x12\x11\x10"
-			  "\x0f\x0e\x0d\x0c\x0b\x0a\x09\x08"
-			  "\x07\x06\x05\x04\x03\x02\x01\x00",
-		.klen	= 32,
-		.ptext	= "\x0f\x0e\x0d\x0c\x0b\x0a\x09\x08"
-			  "\x07\x06\x05\x04\x03\x02\x01\x00",
-		.ctext	= "\x5c\xe7\x1c\x70\xd2\x88\x2e\x5b"
-			  "\xb8\x32\xe4\x33\xf8\x9f\x26\xde",
-		.len	= 16,
-	}, { /* KeySize=256 */
-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
-		.klen	= 32,
-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-		.ctext	= "\x64\xa9\x1a\x37\xed\x9f\xe7\x49"
-			  "\xa8\x4e\x76\xd6\xf5\x0d\x78\xee",
-		.len	= 16,
-	}
-};
-
 static const struct cipher_testvec serpent_cbc_tv_template[] = {
 	{ /* Generated with Crypto++ */
 		.key	= "\x85\x62\x3F\x1C\xF9\xD6\x1C\xF9"

From patchwork Mon Feb  1 18:02:31 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 3/9] crypto: serpent - use unaligned accessors instead of alignmask
Date: Mon, 1 Feb 2021 19:02:31 +0100
Message-Id: <20210201180237.3171-4-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the Serpent input and output blocks, which propagates to mode drivers,
and results in pointless copying on architectures that don't care
about alignment, use the unaligned accessors, which will do the right
thing on each respective architecture, avoiding the need for double
buffering.
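The "pointless copying" referred to above happens because a nonzero cra_alignmask obliges the crypto layer to hand the cipher aligned buffers. The following is a simplified conceptual sketch of that bounce-buffering, not the kernel's actual walk code; encrypt_any and encrypt_aligned are invented names, and a 16-byte (serpent-sized) block is assumed.

#include <stdint.h>
#include <string.h>

#define ALIGNMASK 3	/* cra_alignmask = 3: cipher demands 4-byte alignment */
#define BLOCK	 16	/* serpent's block size in bytes */

/* Dummy stand-in for a cipher core that may only see aligned pointers. */
void encrypt_aligned(uint32_t dst[4], const uint32_t src[4])
{
	for (int i = 0; i < 4; i++)
		dst[i] = src[i] ^ 0xdeadbeef;	/* placeholder transform */
}

/* What the crypto layer conceptually has to do for a misaligned request
 * when the cipher declares an alignmask. */
void encrypt_any(uint8_t *dst, const uint8_t *src)
{
	if (((uintptr_t)src | (uintptr_t)dst) & ALIGNMASK) {
		uint32_t tmp[4];		/* aligned bounce buffer */

		memcpy(tmp, src, BLOCK);	/* copy in  */
		encrypt_aligned(tmp, tmp);	/* process  */
		memcpy(dst, tmp, BLOCK);	/* copy out */
	} else {
		encrypt_aligned((uint32_t *)dst, (const uint32_t *)src);
	}
}

With cra_alignmask set to 0 and unaligned accessors used inside the cipher itself, the misaligned branch - and with it the double buffering - disappears entirely.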
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/serpent_generic.c | 44 ++++++++------------
 1 file changed, 17 insertions(+), 27 deletions(-)

diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c
index a932e0b2964f..236c87547a17 100644
--- a/crypto/serpent_generic.c
+++ b/crypto/serpent_generic.c
@@ -10,7 +10,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/errno.h>
-#include <asm/byteorder.h>
+#include <asm/unaligned.h>
 #include <linux/crypto.h>
 #include <linux/types.h>
 #include <crypto/serpent.h>
@@ -448,19 +448,12 @@ void __serpent_encrypt(const void *c, u8 *dst, const u8 *src)
 {
 	const struct serpent_ctx *ctx = c;
 	const u32 *k = ctx->expkey;
-	const __le32 *s = (const __le32 *)src;
-	__le32	*d = (__le32 *)dst;
 	u32	r0, r1, r2, r3, r4;
 
-/*
- * Note: The conversions between u8* and u32* might cause trouble
- * on architectures with stricter alignment rules than x86
- */
-
-	r0 = le32_to_cpu(s[0]);
-	r1 = le32_to_cpu(s[1]);
-	r2 = le32_to_cpu(s[2]);
-	r3 = le32_to_cpu(s[3]);
+	r0 = get_unaligned_le32(src);
+	r1 = get_unaligned_le32(src + 4);
+	r2 = get_unaligned_le32(src + 8);
+	r3 = get_unaligned_le32(src + 12);
 
 	K(r0, r1, r2, r3, 0);
 	S0(r0, r1, r2, r3, r4);	LK(r2, r1, r3, r0, r4, 1);
@@ -496,10 +489,10 @@ void __serpent_encrypt(const void *c, u8 *dst, const u8 *src)
 	S6(r0, r1, r3, r2, r4);	LK(r3, r4, r1, r2, r0, 31);
 	S7(r3, r4, r1, r2, r0);	K(r0, r1, r2, r3, 32);
 
-	d[0] = cpu_to_le32(r0);
-	d[1] = cpu_to_le32(r1);
-	d[2] = cpu_to_le32(r2);
-	d[3] = cpu_to_le32(r3);
+	put_unaligned_le32(r0, dst);
+	put_unaligned_le32(r1, dst + 4);
+	put_unaligned_le32(r2, dst + 8);
+	put_unaligned_le32(r3, dst + 12);
 }
 EXPORT_SYMBOL_GPL(__serpent_encrypt);
 
@@ -514,14 +507,12 @@ void __serpent_decrypt(const void *c, u8 *dst, const u8 *src)
 {
 	const struct serpent_ctx *ctx = c;
 	const u32 *k = ctx->expkey;
-	const __le32 *s = (const __le32 *)src;
-	__le32	*d = (__le32 *)dst;
 	u32	r0, r1, r2, r3, r4;
 
-	r0 = le32_to_cpu(s[0]);
-	r1 = le32_to_cpu(s[1]);
-	r2 = le32_to_cpu(s[2]);
-	r3 = le32_to_cpu(s[3]);
+	r0 = get_unaligned_le32(src);
+	r1 = get_unaligned_le32(src + 4);
+	r2 = get_unaligned_le32(src + 8);
+	r3 = get_unaligned_le32(src + 12);
 
 	K(r0, r1, r2, r3, 32);
 	SI7(r0, r1, r2, r3, r4);	KL(r1, r3, r0, r4, r2, 31);
@@ -557,10 +548,10 @@ void __serpent_decrypt(const void *c, u8 *dst, const u8 *src)
 	SI1(r3, r1, r2, r0, r4);	KL(r4, r1, r2, r0, r3, 1);
 	SI0(r4, r1, r2, r0, r3);	K(r2, r3, r1, r4, 0);
 
-	d[0] = cpu_to_le32(r2);
-	d[1] = cpu_to_le32(r3);
-	d[2] = cpu_to_le32(r1);
-	d[3] = cpu_to_le32(r4);
+	put_unaligned_le32(r2, dst);
+	put_unaligned_le32(r3, dst + 4);
+	put_unaligned_le32(r1, dst + 8);
+	put_unaligned_le32(r4, dst + 12);
 }
 EXPORT_SYMBOL_GPL(__serpent_decrypt);
 
@@ -578,7 +569,6 @@ static struct crypto_alg srp_alg = {
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	SERPENT_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct serpent_ctx),
-	.cra_alignmask		=	3,
 	.cra_module		=	THIS_MODULE,
 	.cra_u			=	{ .cipher = {
 	.cia_min_keysize	=	SERPENT_MIN_KEY_SIZE,

From patchwork Mon Feb  1 18:02:32 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 4/9] crypto: blowfish - use unaligned accessors instead of alignmask
Date: Mon, 1 Feb 2021 19:02:32 +0100
Message-Id: <20210201180237.3171-5-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the Blowfish input and output blocks, which propagates to mode
drivers, and results in pointless copying on architectures that don't
care about alignment, use the unaligned accessors, which will do the
right thing on each respective architecture, avoiding the need for
double buffering.
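One way to convince yourself that a conversion like this preserves behaviour is to capture the ciphertext for a fixed key and plaintext from user space, via the kernel's AF_ALG interface, before and after the change. A minimal sketch follows; error handling is omitted for brevity, and the key and plaintext are arbitrary test values.

#include <linux/if_alg.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family	= AF_ALG,
		.salg_type	= "skcipher",
		.salg_name	= "ecb(blowfish)",  /* exercises blowfish-generic */
	};
	uint8_t key[16] = "0123456789abcdef";
	uint8_t pt[8] = "ABCDEFGH";	/* exactly one 8-byte block, no NUL */
	uint8_t ct[8];
	char cbuf[CMSG_SPACE(sizeof(uint32_t))] = { 0 };
	struct iovec iov = { .iov_base = pt, .iov_len = sizeof(pt) };
	struct msghdr msg = {
		.msg_control	= cbuf,
		.msg_controllen	= sizeof(cbuf),
		.msg_iov	= &iov,
		.msg_iovlen	= 1,
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
	int tfm, op, i;

	tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfm, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfm, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	op = accept(tfm, NULL, 0);

	/* Select the encrypt operation via ancillary data. */
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(sizeof(uint32_t));
	memcpy(CMSG_DATA(cmsg), &(uint32_t){ ALG_OP_ENCRYPT }, sizeof(uint32_t));

	sendmsg(op, &msg, 0);
	read(op, ct, sizeof(ct));

	for (i = 0; i < 8; i++)
		printf("%02x", ct[i]);
	printf("\n");

	close(op);
	close(tfm);
	return 0;
}

The same program run against kernels with and without this series should print identical ciphertext, since the series changes how blocks are loaded and stored, not what is computed.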
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/blowfish_generic.c | 23 ++++++++------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/crypto/blowfish_generic.c b/crypto/blowfish_generic.c
index c3c2041fe0c5..003b52c6880e 100644
--- a/crypto/blowfish_generic.c
+++ b/crypto/blowfish_generic.c
@@ -14,7 +14,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/mm.h>
-#include <asm/byteorder.h>
+#include <asm/unaligned.h>
 #include <linux/crypto.h>
 #include <linux/types.h>
 #include <crypto/blowfish.h>
@@ -36,12 +36,10 @@ static void bf_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
 	struct bf_ctx *ctx = crypto_tfm_ctx(tfm);
-	const __be32 *in_blk = (const __be32 *)src;
-	__be32 *const out_blk = (__be32 *)dst;
 	const u32 *P = ctx->p;
 	const u32 *S = ctx->s;
-	u32 yl = be32_to_cpu(in_blk[0]);
-	u32 yr = be32_to_cpu(in_blk[1]);
+	u32 yl = get_unaligned_be32(src);
+	u32 yr = get_unaligned_be32(src + 4);
 
 	ROUND(yr, yl, 0);
 	ROUND(yl, yr, 1);
@@ -63,19 +61,17 @@ static void bf_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	yl ^= P[16];
 	yr ^= P[17];
 
-	out_blk[0] = cpu_to_be32(yr);
-	out_blk[1] = cpu_to_be32(yl);
+	put_unaligned_be32(yr, dst);
+	put_unaligned_be32(yl, dst + 4);
 }
 
 static void bf_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
 	struct bf_ctx *ctx = crypto_tfm_ctx(tfm);
-	const __be32 *in_blk = (const __be32 *)src;
-	__be32 *const out_blk = (__be32 *)dst;
 	const u32 *P = ctx->p;
 	const u32 *S = ctx->s;
-	u32 yl = be32_to_cpu(in_blk[0]);
-	u32 yr = be32_to_cpu(in_blk[1]);
+	u32 yl = get_unaligned_be32(src);
+	u32 yr = get_unaligned_be32(src + 4);
 
 	ROUND(yr, yl, 17);
 	ROUND(yl, yr, 16);
@@ -97,8 +93,8 @@ static void bf_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	yl ^= P[1];
 	yr ^= P[0];
 
-	out_blk[0] = cpu_to_be32(yr);
-	out_blk[1] = cpu_to_be32(yl);
+	put_unaligned_be32(yr, dst);
+	put_unaligned_be32(yl, dst + 4);
 }
 
 static struct crypto_alg alg = {
@@ -108,7 +104,6 @@ static struct crypto_alg alg = {
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	BF_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct bf_ctx),
-	.cra_alignmask		=	3,
 	.cra_module		=	THIS_MODULE,
 	.cra_u			=	{ .cipher = {
 	.cia_min_keysize	=	BF_MIN_KEY_SIZE,

From patchwork Mon Feb  1 18:02:33 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 5/9] crypto: camellia - use unaligned accessors instead of alignmask
Date: Mon, 1 Feb 2021 19:02:33 +0100
Message-Id: <20210201180237.3171-6-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the Camellia input and output blocks, which propagates to mode
drivers, and results in pointless copying on architectures that don't
care about alignment, use the unaligned accessors, which will do the
right thing on each respective architecture, avoiding the need for
double buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/camellia_generic.c | 45 +++++++-------------
 1 file changed, 16 insertions(+), 29 deletions(-)

diff --git a/crypto/camellia_generic.c b/crypto/camellia_generic.c
index 0b9f409f7370..fd1a88af9e77 100644
--- a/crypto/camellia_generic.c
+++ b/crypto/camellia_generic.c
@@ -9,14 +9,6 @@
  * https://info.isl.ntt.co.jp/crypt/eng/camellia/specifications.html
  */
 
-/*
- *
- * NOTE --- NOTE --- NOTE --- NOTE
- * This implementation assumes that all memory addresses passed
- * as parameters are four-byte aligned.
- *
- */
-
 #include <linux/crypto.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -994,16 +986,14 @@ camellia_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 static void camellia_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct camellia_ctx *cctx = crypto_tfm_ctx(tfm);
-	const __be32 *src = (const __be32 *)in;
-	__be32 *dst = (__be32 *)out;
 	unsigned int max;
 
 	u32 tmp[4];
 
-	tmp[0] = be32_to_cpu(src[0]);
-	tmp[1] = be32_to_cpu(src[1]);
-	tmp[2] = be32_to_cpu(src[2]);
-	tmp[3] = be32_to_cpu(src[3]);
+	tmp[0] = get_unaligned_be32(in);
+	tmp[1] = get_unaligned_be32(in + 4);
+	tmp[2] = get_unaligned_be32(in + 8);
+	tmp[3] = get_unaligned_be32(in + 12);
 
 	if (cctx->key_length == 16)
 		max = 24;
@@ -1013,25 +1003,23 @@ static void camellia_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	camellia_do_encrypt(cctx->key_table, tmp, max);
 
 	/* do_encrypt returns 0,1 swapped with 2,3 */
-	dst[0] = cpu_to_be32(tmp[2]);
-	dst[1] = cpu_to_be32(tmp[3]);
-	dst[2] = cpu_to_be32(tmp[0]);
-	dst[3] = cpu_to_be32(tmp[1]);
+	put_unaligned_be32(tmp[2], out);
+	put_unaligned_be32(tmp[3], out + 4);
+	put_unaligned_be32(tmp[0], out + 8);
+	put_unaligned_be32(tmp[1], out + 12);
 }
 
 static void camellia_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct camellia_ctx *cctx = crypto_tfm_ctx(tfm);
-	const __be32 *src = (const __be32 *)in;
-	__be32 *dst = (__be32 *)out;
 	unsigned int max;
 
 	u32 tmp[4];
 
-	tmp[0] = be32_to_cpu(src[0]);
-	tmp[1] = be32_to_cpu(src[1]);
-	tmp[2] = be32_to_cpu(src[2]);
-	tmp[3] = be32_to_cpu(src[3]);
+	tmp[0] = get_unaligned_be32(in);
+	tmp[1] = get_unaligned_be32(in + 4);
+	tmp[2] = get_unaligned_be32(in + 8);
+	tmp[3] = get_unaligned_be32(in + 12);
 
 	if (cctx->key_length == 16)
 		max = 24;
@@ -1041,10 +1029,10 @@ static void camellia_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	camellia_do_decrypt(cctx->key_table, tmp, max);
 
 	/* do_decrypt returns 0,1 swapped with 2,3 */
-	dst[0] = cpu_to_be32(tmp[2]);
-	dst[1] = cpu_to_be32(tmp[3]);
-	dst[2] = cpu_to_be32(tmp[0]);
-	dst[3] = cpu_to_be32(tmp[1]);
+	put_unaligned_be32(tmp[2], out);
+	put_unaligned_be32(tmp[3], out + 4);
+	put_unaligned_be32(tmp[0], out + 8);
+	put_unaligned_be32(tmp[1], out + 12);
 }
 
 static struct crypto_alg camellia_alg = {
@@ -1054,7 +1042,6 @@ static struct crypto_alg camellia_alg = {
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	CAMELLIA_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct camellia_ctx),
-	.cra_alignmask		=	3,
 	.cra_module		=	THIS_MODULE,
 	.cra_u			=	{
 		.cipher = {

From patchwork Mon Feb  1 18:02:34 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 6/9] crypto: cast5 - use unaligned accessors instead of alignmask
Date: Mon, 1 Feb 2021 19:02:34 +0100
Message-Id: <20210201180237.3171-7-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the CAST5 input and output blocks, which propagates to mode drivers,
and results in pointless copying on architectures that don't care
about alignment, use the unaligned accessors, which will do the right
thing on each respective architecture, avoiding the need for double
buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/cast5_generic.c | 23 ++++++++-------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/crypto/cast5_generic.c b/crypto/cast5_generic.c
index 4095085d4e51..0257c14cefc2 100644
--- a/crypto/cast5_generic.c
+++ b/crypto/cast5_generic.c
@@ -13,7 +13,7 @@
 */
 
 
-#include <asm/byteorder.h>
+#include <asm/unaligned.h>
 #include <linux/init.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
@@ -302,8 +302,6 @@ static const u32 sb8[256] = {
 
 void __cast5_encrypt(struct cast5_ctx *c, u8 *outbuf, const u8 *inbuf)
 {
-	const __be32 *src = (const __be32 *)inbuf;
-	__be32 *dst = (__be32 *)outbuf;
 	u32 l, r, t;
 	u32 I;			/* used by the Fx macros */
 	u32 *Km;
@@ -315,8 +313,8 @@ void __cast5_encrypt(struct cast5_ctx *c, u8 *outbuf, const u8 *inbuf)
 	/* (L0,R0) <-- (m1...m64).	(Split the plaintext into left and
 	 * right 32-bit halves L0 = m1...m32 and R0 = m33...m64.)
 	 */
-	l = be32_to_cpu(src[0]);
-	r = be32_to_cpu(src[1]);
+	l = get_unaligned_be32(inbuf);
+	r = get_unaligned_be32(inbuf + 4);
 
 	/* (16 rounds) for i from 1 to 16, compute Li and Ri as follows:
 	 *	Li = Ri-1;
@@ -347,8 +345,8 @@ void __cast5_encrypt(struct cast5_ctx *c, u8 *outbuf, const u8 *inbuf)
 
 	/* c1...c64 <-- (R16,L16).	(Exchange final blocks L16, R16 and
 	 * concatenate to form the ciphertext.)
 	 */
-	dst[0] = cpu_to_be32(r);
-	dst[1] = cpu_to_be32(l);
+	put_unaligned_be32(r, outbuf);
+	put_unaligned_be32(l, outbuf + 4);
 }
 EXPORT_SYMBOL_GPL(__cast5_encrypt);
 
@@ -359,8 +357,6 @@ static void cast5_encrypt(struct crypto_tfm *tfm, u8 *outbuf, const u8 *inbuf)
 
 void __cast5_decrypt(struct cast5_ctx *c, u8 *outbuf, const u8 *inbuf)
 {
-	const __be32 *src = (const __be32 *)inbuf;
-	__be32 *dst = (__be32 *)outbuf;
 	u32 l, r, t;
 	u32 I;
 	u32 *Km;
@@ -369,8 +365,8 @@ void __cast5_decrypt(struct cast5_ctx *c, u8 *outbuf, const u8 *inbuf)
 	Km = c->Km;
 	Kr = c->Kr;
 
-	l = be32_to_cpu(src[0]);
-	r = be32_to_cpu(src[1]);
+	l = get_unaligned_be32(inbuf);
+	r = get_unaligned_be32(inbuf + 4);
 
 	if (!(c->rr)) {
 		t = l; l = r; r = t ^ F1(r, Km[15], Kr[15]);
@@ -391,8 +387,8 @@ void __cast5_decrypt(struct cast5_ctx *c, u8 *outbuf, const u8 *inbuf)
 		t = l; l = r; r = t ^ F2(r, Km[1], Kr[1]);
 		t = l; l = r; r = t ^ F1(r, Km[0], Kr[0]);
 
-	dst[0] = cpu_to_be32(r);
-	dst[1] = cpu_to_be32(l);
+	put_unaligned_be32(r, outbuf);
+	put_unaligned_be32(l, outbuf + 4);
 }
 EXPORT_SYMBOL_GPL(__cast5_decrypt);
 
@@ -513,7 +509,6 @@ static struct crypto_alg alg = {
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	CAST5_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct cast5_ctx),
-	.cra_alignmask		=	3,
 	.cra_module		=	THIS_MODULE,
 	.cra_u			=	{ .cipher = {

From patchwork Mon Feb  1 18:02:35 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 7/9] crypto: cast6 - use unaligned accessors instead of alignmask
Date: Mon, 1 Feb 2021 19:02:35 +0100
Message-Id: <20210201180237.3171-8-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the CAST6 input and output blocks, which propagates to mode drivers,
and results in pointless copying on architectures that don't care
about alignment, use the unaligned accessors, which will do the right
thing on each respective architecture, avoiding the need for double
buffering.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/cast6_generic.c | 39 +++++++++----------
 1 file changed, 17 insertions(+), 22 deletions(-)

diff --git a/crypto/cast6_generic.c b/crypto/cast6_generic.c
index c77ff6c8a2b2..75346380aa0b 100644
--- a/crypto/cast6_generic.c
+++ b/crypto/cast6_generic.c
@@ -10,7 +10,7 @@
 */
 
 
-#include <asm/byteorder.h>
+#include <asm/unaligned.h>
 #include <linux/init.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
@@ -172,16 +172,14 @@ static inline void QBAR(u32 *block, const u8 *Kr, const u32 *Km)
 
 void __cast6_encrypt(const void *ctx, u8 *outbuf, const u8 *inbuf)
 {
 	const struct cast6_ctx *c = ctx;
-	const __be32 *src = (const __be32 *)inbuf;
-	__be32 *dst = (__be32 *)outbuf;
 	u32 block[4];
 	const u32 *Km;
 	const u8 *Kr;
 
-	block[0] = be32_to_cpu(src[0]);
-	block[1] = be32_to_cpu(src[1]);
-	block[2] = be32_to_cpu(src[2]);
-	block[3] = be32_to_cpu(src[3]);
+	block[0] = get_unaligned_be32(inbuf);
+	block[1] = get_unaligned_be32(inbuf + 4);
+	block[2] = get_unaligned_be32(inbuf + 8);
+	block[3] = get_unaligned_be32(inbuf + 12);
 
 	Km = c->Km[0]; Kr = c->Kr[0]; Q(block, Kr, Km);
 	Km = c->Km[1]; Kr = c->Kr[1]; Q(block, Kr, Km);
@@ -196,10 +194,10 @@ void __cast6_encrypt(const void *ctx, u8 *outbuf, const u8 *inbuf)
 	Km = c->Km[10]; Kr = c->Kr[10]; QBAR(block, Kr, Km);
 	Km = c->Km[11]; Kr = c->Kr[11]; QBAR(block, Kr, Km);
 
-	dst[0] = cpu_to_be32(block[0]);
-	dst[1] = cpu_to_be32(block[1]);
-	dst[2] = cpu_to_be32(block[2]);
-	dst[3] = cpu_to_be32(block[3]);
+	put_unaligned_be32(block[0], outbuf);
+	put_unaligned_be32(block[1], outbuf + 4);
+	put_unaligned_be32(block[2], outbuf + 8);
+	put_unaligned_be32(block[3], outbuf + 12);
 }
 EXPORT_SYMBOL_GPL(__cast6_encrypt);
 
@@ -211,16 +209,14 @@ static void cast6_encrypt(struct crypto_tfm *tfm, u8 *outbuf, const u8 *inbuf)
 
 void __cast6_decrypt(const void *ctx, u8 *outbuf, const u8 *inbuf)
 {
 	const struct cast6_ctx *c = ctx;
-	const __be32 *src = (const __be32 *)inbuf;
-	__be32 *dst = (__be32 *)outbuf;
 	u32 block[4];
 	const u32 *Km;
 	const u8 *Kr;
 
-	block[0] = be32_to_cpu(src[0]);
-	block[1] = be32_to_cpu(src[1]);
-	block[2] = be32_to_cpu(src[2]);
-	block[3] = be32_to_cpu(src[3]);
+	block[0] = get_unaligned_be32(inbuf);
+	block[1] = get_unaligned_be32(inbuf + 4);
+	block[2] = get_unaligned_be32(inbuf + 8);
+	block[3] = get_unaligned_be32(inbuf + 12);
 
 	Km = c->Km[11]; Kr = c->Kr[11]; Q(block, Kr, Km);
 	Km = c->Km[10]; Kr = c->Kr[10]; Q(block, Kr, Km);
@@ -235,10 +231,10 @@ void __cast6_decrypt(const void *ctx, u8 *outbuf, const u8 *inbuf)
 	Km = c->Km[1]; Kr = c->Kr[1]; QBAR(block, Kr, Km);
 	Km = c->Km[0]; Kr = c->Kr[0]; QBAR(block, Kr, Km);
 
-	dst[0] = cpu_to_be32(block[0]);
-	dst[1] = cpu_to_be32(block[1]);
-	dst[2] = cpu_to_be32(block[2]);
-	dst[3] = cpu_to_be32(block[3]);
+	put_unaligned_be32(block[0], outbuf);
+	put_unaligned_be32(block[1], outbuf + 4);
+	put_unaligned_be32(block[2], outbuf + 8);
+	put_unaligned_be32(block[3], outbuf + 12);
 }
 EXPORT_SYMBOL_GPL(__cast6_decrypt);
 
@@ -254,7 +250,6 @@ static struct crypto_alg alg = {
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	CAST6_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct cast6_ctx),
-	.cra_alignmask		=	3,
 	.cra_module		=	THIS_MODULE,
 	.cra_u			=	{ .cipher = {

From patchwork Mon Feb  1 18:02:36 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 8/9] crypto: fcrypt - drop unneeded alignmask
Date: Mon, 1 Feb 2021 19:02:36 +0100
Message-Id: <20210201180237.3171-9-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

The fcrypt implementation uses memcpy() to access the input and
output buffers, so there is no need to set an alignmask.
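The reason memcpy()-based I/O needs no alignmask is that byte-wise copies carry no alignment preconditions: the compiler may not assume anything about the source or destination pointer. A generic sketch of the idiom follows - this is an illustration of the principle, not fcrypt's actual code.

#include <stdint.h>
#include <string.h>

/* Alignment-safe big-endian 32-bit load built on plain memcpy(). */
static uint32_t load_be32(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));	/* legal for any alignment of p */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	v = __builtin_bswap32(v);	/* convert from big-endian storage */
#endif
	return v;
}

/* Alignment-safe big-endian 32-bit store, same idea in reverse. */
static void store_be32(uint32_t v, void *p)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	v = __builtin_bswap32(v);
#endif
	memcpy(p, &v, sizeof(v));
}

On architectures with fast unaligned loads, compilers routinely collapse this pattern into a single (possibly byte-swapped) load or store, so nothing is lost relative to a pointer-cast access.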
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/fcrypt.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/crypto/fcrypt.c b/crypto/fcrypt.c
index 58f935315cf8..c36ea0c8be98 100644
--- a/crypto/fcrypt.c
+++ b/crypto/fcrypt.c
@@ -396,7 +396,6 @@ static struct crypto_alg fcrypt_alg = {
 	.cra_blocksize		=	8,
 	.cra_ctxsize		=	sizeof(struct fcrypt_ctx),
 	.cra_module		=	THIS_MODULE,
-	.cra_alignmask		=	3,
 	.cra_u			=	{ .cipher = {
 	.cia_min_keysize	=	8,
 	.cia_max_keysize	=	8,

From patchwork Mon Feb  1 18:02:37 2021
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org,
    Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH 9/9] crypto: twofish - use unaligned accessors instead of alignmask
Date: Mon, 1 Feb 2021 19:02:37 +0100
Message-Id: <20210201180237.3171-10-ardb@kernel.org>
In-Reply-To: <20210201180237.3171-1-ardb@kernel.org>
References: <20210201180237.3171-1-ardb@kernel.org>

Instead of using an alignmask of 0x3 to ensure 32-bit alignment of
the Twofish input and output blocks, which propagates to mode drivers,
and results in pointless copying on architectures that don't care
about alignment, use the unaligned accessors, which will do the right
thing on each respective architecture, avoiding the need for double
buffering.
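The interesting part of this particular conversion is that it happens inside the INPACK/OUTUNPACK helper macros rather than in open-coded loads and stores. Expanding INPACK(1, x, m) by hand shows what changes - a worked illustration derived from the diff below, not code taken from the patch itself:

/* Before: 'src' is a __le32 pointer, so src[1] is a 32-bit load that
 * is only well-defined when the input block is 4-byte aligned. */
x = le32_to_cpu(src[1]) ^ ctx->w[m];

/* After: the same little-endian word is read at byte offset 1 * 4 from
 * the raw u8 input pointer, which is valid at any alignment. */
x = get_unaligned_le32(in + 1 * 4) ^ ctx->w[m];

Because only the macros change, the round functions that use them need no edits, which is why this patch is so small.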
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/twofish_generic.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/crypto/twofish_generic.c b/crypto/twofish_generic.c
index 4f7c033224f9..86b2f067a416 100644
--- a/crypto/twofish_generic.c
+++ b/crypto/twofish_generic.c
@@ -24,7 +24,7 @@
  * Third Edition.
  */
 
-#include <asm/byteorder.h>
+#include <asm/unaligned.h>
 #include <crypto/twofish.h>
 #include <linux/module.h>
 #include <linux/init.h>
@@ -83,11 +83,11 @@
  * whitening subkey number m. */
 
 #define INPACK(n, x, m) \
-   x = le32_to_cpu(src[n]) ^ ctx->w[m]
+   x = get_unaligned_le32(in + (n) * 4) ^ ctx->w[m]
 
 #define OUTUNPACK(n, x, m) \
    x ^= ctx->w[m]; \
-   dst[n] = cpu_to_le32(x)
+   put_unaligned_le32(x, out + (n) * 4)
 
 
@@ -95,8 +95,6 @@
 static void twofish_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct twofish_ctx *ctx = crypto_tfm_ctx(tfm);
-	const __le32 *src = (const __le32 *)in;
-	__le32 *dst = (__le32 *)out;
 
 	/* The four 32-bit chunks of the text. */
 	u32 a, b, c, d;
@@ -132,8 +130,6 @@ static void twofish_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 static void twofish_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct twofish_ctx *ctx = crypto_tfm_ctx(tfm);
-	const __le32 *src = (const __le32 *)in;
-	__le32 *dst = (__le32 *)out;
 
 	/* The four 32-bit chunks of the text. */
 	u32 a, b, c, d;
@@ -172,7 +168,6 @@ static struct crypto_alg alg = {
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	TF_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct twofish_ctx),
-	.cra_alignmask		=	3,
 	.cra_module		=	THIS_MODULE,
 	.cra_u			=	{ .cipher = {
 	.cia_min_keysize	=	TF_MIN_KEY_SIZE,