From patchwork Thu Oct 19 05:53:27 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 01/17] crypto: sparc/crc32c - stop using the shash alignmask
Date: Wed, 18 Oct 2023 22:53:27 -0700
Message-ID: <20231019055343.588846-2-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

As far as I can tell, "crc32c-sparc64" is the only "shash" algorithm in the
kernel that sets a nonzero alignmask and actually relies on it to get the
crypto API to align the inputs and outputs.  This capability is not really
useful, though.  To unblock removing the support for alignmask from
shash_alg, this patch updates crc32c-sparc64 to no longer use the alignmask.
This means doing 8-byte alignment of the data when doing an update, using
get_unaligned_le32() when setting a non-default initial CRC, and using
put_unaligned_le32() to output the final CRC.

Partially tested with:

    export ARCH=sparc64 CROSS_COMPILE=sparc64-linux-gnu-
    make sparc64_defconfig
    echo CONFIG_CRYPTO_CRC32C_SPARC64=y >> .config
    echo '# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set' >> .config
    echo CONFIG_DEBUG_KERNEL=y >> .config
    echo CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y >> .config
    make olddefconfig
    make -j$(getconf _NPROCESSORS_ONLN)
    qemu-system-sparc64 -kernel arch/sparc/boot/image -nographic

However, qemu doesn't actually support the sparc CRC32C instructions, so for
the test I temporarily replaced crc32c_sparc64() with __crc32c_le() and made
sparc64_has_crc32c_opcode() always return true.  So essentially I tested the
glue code, not the actual SPARC part, which is unchanged.
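To make the new data handling concrete, here is a stand-alone sketch (not the
kernel code) of the head/bulk/tail split that the patched crc32c_compute()
below uses.  crc32c_hw_8byte() is a hypothetical stand-in for the SPARC
CRC32C instruction; it simply reuses the byte-wise routine so the example
builds and runs anywhere, and the standard check value for "123456789"
confirms that the split does not change the result:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Plain byte-at-a-time CRC-32C (Castagnoli), reflected polynomial. */
static uint32_t crc32c_byte(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
	}
	return crc;
}

/* Hypothetical accelerated routine: wants 8-byte-aligned input. */
static uint32_t crc32c_hw_8byte(uint32_t crc, const uint64_t *p, size_t len)
{
	return crc32c_byte(crc, (const uint8_t *)p, len);
}

static uint32_t crc32c_compute(uint32_t crc, const uint8_t *data, size_t len)
{
	size_t n = -(uintptr_t)data & 7;	/* bytes until 8-byte alignment */

	if (n) {				/* unaligned head */
		n = n < len ? n : len;
		crc = crc32c_byte(crc, data, n);
		data += n;
		len -= n;
	}
	n = len & ~(size_t)7;			/* 8-byte-aligned bulk */
	if (n) {
		crc = crc32c_hw_8byte(crc, (const uint64_t *)data, n);
		data += n;
		len -= n;
	}
	if (len)				/* short tail */
		crc = crc32c_byte(crc, data, len);
	return crc;
}

int main(void)
{
	const uint8_t msg[] = "123456789";

	/* Prints E3069283, the standard CRC-32C check value. */
	printf("%08X\n",
	       (unsigned)(crc32c_compute(0xFFFFFFFFu, msg, sizeof(msg) - 1) ^
			  0xFFFFFFFFu));
	return 0;
}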
Signed-off-by: Eric Biggers
---
 arch/sparc/crypto/crc32c_glue.c | 45 ++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/arch/sparc/crypto/crc32c_glue.c b/arch/sparc/crypto/crc32c_glue.c
index 82efb7f81c288..688db0dcb97d9 100644
--- a/arch/sparc/crypto/crc32c_glue.c
+++ b/arch/sparc/crypto/crc32c_glue.c
@@ -13,97 +13,101 @@
 #include
 #include
 #include
 #include
 #include
 #include
 #include
 #include
+#include

 #include "opcodes.h"

 /*
  * Setting the seed allows arbitrary accumulators and flexible XOR policy
  * If your algorithm starts with ~0, then XOR with ~0 before you set
  * the seed.
  */
 static int crc32c_sparc64_setkey(struct crypto_shash *hash, const u8 *key,
				  unsigned int keylen)
 {
	u32 *mctx = crypto_shash_ctx(hash);

	if (keylen != sizeof(u32))
		return -EINVAL;
-	*mctx = le32_to_cpup((__le32 *)key);
+	*mctx = get_unaligned_le32(key);
	return 0;
 }

 static int crc32c_sparc64_init(struct shash_desc *desc)
 {
	u32 *mctx = crypto_shash_ctx(desc->tfm);
	u32 *crcp = shash_desc_ctx(desc);

	*crcp = *mctx;

	return 0;
 }

 extern void crc32c_sparc64(u32 *crcp, const u64 *data, unsigned int len);

-static void crc32c_compute(u32 *crcp, const u64 *data, unsigned int len)
+static u32 crc32c_compute(u32 crc, const u8 *data, unsigned int len)
 {
-	unsigned int asm_len;
-
-	asm_len = len & ~7U;
-	if (asm_len) {
-		crc32c_sparc64(crcp, data, asm_len);
-		data += asm_len / 8;
-		len -= asm_len;
+	unsigned int n = -(uintptr_t)data & 7;
+
+	if (n) {
+		/* Data isn't 8-byte aligned.  Align it. */
+		n = min(n, len);
+		crc = __crc32c_le(crc, data, n);
+		data += n;
+		len -= n;
+	}
+	n = len & ~7U;
+	if (n) {
+		crc32c_sparc64(&crc, (const u64 *)data, n);
+		data += n;
+		len -= n;
 	}
 	if (len)
-		*crcp = __crc32c_le(*crcp, (const unsigned char *) data, len);
+		crc = __crc32c_le(crc, data, len);
+	return crc;
 }

 static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
				 unsigned int len)
 {
	u32 *crcp = shash_desc_ctx(desc);

-	crc32c_compute(crcp, (const u64 *) data, len);
-
+	*crcp = crc32c_compute(*crcp, data, len);
	return 0;
 }

-static int __crc32c_sparc64_finup(u32 *crcp, const u8 *data, unsigned int len,
-				  u8 *out)
+static int __crc32c_sparc64_finup(const u32 *crcp, const u8 *data,
+				  unsigned int len, u8 *out)
 {
-	u32 tmp = *crcp;
-
-	crc32c_compute(&tmp, (const u64 *) data, len);
-
-	*(__le32 *) out = ~cpu_to_le32(tmp);
+	put_unaligned_le32(~crc32c_compute(*crcp, data, len), out);
	return 0;
 }

 static int crc32c_sparc64_finup(struct shash_desc *desc, const u8 *data,
				unsigned int len, u8 *out)
 {
	return __crc32c_sparc64_finup(shash_desc_ctx(desc), data, len, out);
 }

 static int crc32c_sparc64_final(struct shash_desc *desc, u8 *out)
 {
	u32 *crcp = shash_desc_ctx(desc);

-	*(__le32 *) out = ~cpu_to_le32p(crcp);
+	put_unaligned_le32(~*crcp, out);
	return 0;
 }

 static int crc32c_sparc64_digest(struct shash_desc *desc, const u8 *data,
				 unsigned int len, u8 *out)
 {
	return __crc32c_sparc64_finup(crypto_shash_ctx(desc->tfm), data, len,
				      out);
 }
@@ -128,21 +132,20 @@ static struct shash_alg alg = {
 	.digest = crc32c_sparc64_digest,
 	.descsize = sizeof(u32),
 	.digestsize = CHKSUM_DIGEST_SIZE,
 	.base = {
 		.cra_name = "crc32c",
 		.cra_driver_name = "crc32c-sparc64",
 		.cra_priority = SPARC_CR_OPCODE_PRIORITY,
 		.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 		.cra_blocksize = CHKSUM_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(u32),
-		.cra_alignmask = 7,
 		.cra_module = THIS_MODULE,
 		.cra_init = crc32c_sparc64_cra_init,
 	}
 };

 static bool __init sparc64_has_crc32c_opcode(void)
 {
	unsigned long cfr;

	if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
From patchwork Thu Oct 19 05:53:28 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 02/17] crypto: stm32 - remove unnecessary alignmask
Date: Wed, 18 Oct 2023 22:53:28 -0700
Message-ID: <20231019055343.588846-3-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

The stm32 crc32 algorithms set a nonzero alignmask, but they don't seem to
actually need it.  Their ->update function already has code that handles
aligning the data to the same alignment that the alignmask specifies, their
->setkey function already uses get_unaligned_le32(), and their ->final
function already uses put_unaligned_le32().

Therefore, stop setting the alignmask.  This will allow these algorithms to
keep being registered after alignmask support is removed from shash.
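As a quick illustration of why those helpers make an alignmask unnecessary,
here is a minimal user-space sketch with local stand-ins for the kernel's
get_unaligned_le32()/put_unaligned_le32(): they load and store a 32-bit
little-endian value at any byte offset, so the caller's buffer alignment
simply does not matter:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Stand-in for the kernel helper: byte-wise, works at any address. */
static uint32_t get_unaligned_le32(const void *p)
{
	uint8_t b[4];

	memcpy(b, p, 4);
	return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
	       (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

static void put_unaligned_le32(uint32_t v, void *p)
{
	uint8_t b[4] = { v, v >> 8, v >> 16, v >> 24 };

	memcpy(p, b, 4);
}

int main(void)
{
	uint8_t buf[7] = { 0 };

	/* Offset 1 is misaligned for a u32, but the helpers don't care. */
	put_unaligned_le32(0x12345678u, buf + 1);
	printf("%08X\n", (unsigned)get_unaligned_le32(buf + 1)); /* 12345678 */
	return 0;
}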
Signed-off-by: Eric Biggers
---
 drivers/crypto/stm32/stm32-crc32.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
index 90a920e7f6642..fa4fec31fcfc4 100644
--- a/drivers/crypto/stm32/stm32-crc32.c
+++ b/drivers/crypto/stm32/stm32-crc32.c
@@ -276,21 +276,20 @@ static struct shash_alg algs[] = {
 		.finup = stm32_crc_finup,
 		.digest = stm32_crc_digest,
 		.descsize = sizeof(struct stm32_crc_desc_ctx),
 		.digestsize = CHKSUM_DIGEST_SIZE,
 		.base = {
 			.cra_name = "crc32",
 			.cra_driver_name = "stm32-crc32-crc32",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 			.cra_blocksize = CHKSUM_BLOCK_SIZE,
-			.cra_alignmask = 3,
 			.cra_ctxsize = sizeof(struct stm32_crc_ctx),
 			.cra_module = THIS_MODULE,
 			.cra_init = stm32_crc32_cra_init,
 		}
 	},
 	/* CRC-32Castagnoli */
 	{
 		.setkey = stm32_crc_setkey,
 		.init = stm32_crc_init,
 		.update = stm32_crc_update,
@@ -298,21 +297,20 @@ static struct shash_alg algs[] = {
 		.finup = stm32_crc_finup,
 		.digest = stm32_crc_digest,
 		.descsize = sizeof(struct stm32_crc_desc_ctx),
 		.digestsize = CHKSUM_DIGEST_SIZE,
 		.base = {
 			.cra_name = "crc32c",
 			.cra_driver_name = "stm32-crc32-crc32c",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 			.cra_blocksize = CHKSUM_BLOCK_SIZE,
-			.cra_alignmask = 3,
 			.cra_ctxsize = sizeof(struct stm32_crc_ctx),
 			.cra_module = THIS_MODULE,
 			.cra_init = stm32_crc32c_cra_init,
 		}
 	}
 };

 static int stm32_crc_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;

From patchwork Thu Oct 19 05:53:29 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 03/17] crypto: xilinx/zynqmp-sha - remove unnecessary alignmask
Date: Wed, 18 Oct 2023 22:53:29 -0700
Message-ID: <20231019055343.588846-4-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>
From: Eric Biggers

The zynqmp-sha3-384 algorithm sets a nonzero alignmask, but it doesn't
appear to actually need it.  Therefore, stop setting it.  This will allow
this algorithm to keep being registered after alignmask support is removed
from shash.

Signed-off-by: Eric Biggers
---
 drivers/crypto/xilinx/zynqmp-sha.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/crypto/xilinx/zynqmp-sha.c b/drivers/crypto/xilinx/zynqmp-sha.c
index 426bf1a72ba66..b0dbf6263b0db 100644
--- a/drivers/crypto/xilinx/zynqmp-sha.c
+++ b/drivers/crypto/xilinx/zynqmp-sha.c
@@ -175,21 +175,20 @@ static struct zynqmp_sha_drv_ctx sha3_drv_ctx = {
 	.digestsize = SHA3_384_DIGEST_SIZE,
 	.base = {
 		.cra_name = "sha3-384",
 		.cra_driver_name = "zynqmp-sha3-384",
 		.cra_priority = 300,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
			     CRYPTO_ALG_ALLOCATES_MEMORY |
			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA3_384_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct zynqmp_sha_tfm_ctx),
-		.cra_alignmask = 3,
 		.cra_module = THIS_MODULE,
 	}
 }
 };

 static int zynqmp_sha_probe(struct platform_device *pdev)
 {
	struct device *dev = &pdev->dev;
	int err;
	u32 v;

From patchwork Thu Oct 19 05:53:30 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 04/17] crypto: mips/crc32 - remove redundant setting of alignmask to 0
Date: Wed, 18 Oct 2023 22:53:30 -0700
Message-ID: <20231019055343.588846-5-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

This unnecessary explicit setting of cra_alignmask to 0 shows up when
grepping for shash algorithms that set
an alignmask.  Remove it.  No change in behavior.

Signed-off-by: Eric Biggers
---
 arch/mips/crypto/crc32-mips.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/mips/crypto/crc32-mips.c b/arch/mips/crypto/crc32-mips.c
index 3e4f5ba104f89..ec6d58008f8e1 100644
--- a/arch/mips/crypto/crc32-mips.c
+++ b/arch/mips/crypto/crc32-mips.c
@@ -283,21 +283,20 @@ static struct shash_alg crc32_alg = {
 	.final = chksum_final,
 	.finup = chksum_finup,
 	.digest = chksum_digest,
 	.descsize = sizeof(struct chksum_desc_ctx),
 	.base = {
 		.cra_name = "crc32",
 		.cra_driver_name = "crc32-mips-hw",
 		.cra_priority = 300,
 		.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 		.cra_blocksize = CHKSUM_BLOCK_SIZE,
-		.cra_alignmask = 0,
 		.cra_ctxsize = sizeof(struct chksum_ctx),
 		.cra_module = THIS_MODULE,
 		.cra_init = chksum_cra_init,
 	}
 };

 static struct shash_alg crc32c_alg = {
 	.digestsize = CHKSUM_DIGEST_SIZE,
 	.setkey = chksum_setkey,
 	.init = chksum_init,
@@ -305,21 +304,20 @@ static struct shash_alg crc32c_alg = {
 	.final = chksumc_final,
 	.finup = chksumc_finup,
 	.digest = chksumc_digest,
 	.descsize = sizeof(struct chksum_desc_ctx),
 	.base = {
 		.cra_name = "crc32c",
 		.cra_driver_name = "crc32c-mips-hw",
 		.cra_priority = 300,
 		.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 		.cra_blocksize = CHKSUM_BLOCK_SIZE,
-		.cra_alignmask = 0,
 		.cra_ctxsize = sizeof(struct chksum_ctx),
 		.cra_module = THIS_MODULE,
 		.cra_init = chksum_cra_init,
 	}
 };

 static int __init crc32_mod_init(void)
 {
	int err;

From patchwork Thu Oct 19 05:53:31 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 05/17] crypto: loongarch/crc32 - remove redundant setting of alignmask to 0
Date: Wed, 18 Oct 2023 22:53:31 -0700
Message-ID: <20231019055343.588846-6-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>
From: Eric Biggers

This unnecessary explicit setting of cra_alignmask to 0 shows up when
grepping for shash algorithms that set an alignmask.  Remove it.  No change
in behavior.

Signed-off-by: Eric Biggers
---
 arch/loongarch/crypto/crc32-loongarch.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/loongarch/crypto/crc32-loongarch.c b/arch/loongarch/crypto/crc32-loongarch.c
index 1f2a2c3839bcb..a49e507af38c0 100644
--- a/arch/loongarch/crypto/crc32-loongarch.c
+++ b/arch/loongarch/crypto/crc32-loongarch.c
@@ -232,21 +232,20 @@ static struct shash_alg crc32_alg = {
 	.final = chksum_final,
 	.finup = chksum_finup,
 	.digest = chksum_digest,
 	.descsize = sizeof(struct chksum_desc_ctx),
 	.base = {
 		.cra_name = "crc32",
 		.cra_driver_name = "crc32-loongarch",
 		.cra_priority = 300,
 		.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 		.cra_blocksize = CHKSUM_BLOCK_SIZE,
-		.cra_alignmask = 0,
 		.cra_ctxsize = sizeof(struct chksum_ctx),
 		.cra_module = THIS_MODULE,
 		.cra_init = chksum_cra_init,
 	}
 };

 static struct shash_alg crc32c_alg = {
 	.digestsize = CHKSUM_DIGEST_SIZE,
 	.setkey = chksum_setkey,
 	.init = chksum_init,
@@ -254,21 +253,20 @@ static struct shash_alg crc32c_alg = {
 	.final = chksumc_final,
 	.finup = chksumc_finup,
 	.digest = chksumc_digest,
 	.descsize = sizeof(struct chksum_desc_ctx),
 	.base = {
 		.cra_name = "crc32c",
 		.cra_driver_name = "crc32c-loongarch",
 		.cra_priority = 300,
 		.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
 		.cra_blocksize = CHKSUM_BLOCK_SIZE,
-		.cra_alignmask = 0,
 		.cra_ctxsize = sizeof(struct chksum_ctx),
 		.cra_module = THIS_MODULE,
 		.cra_init = chksumc_cra_init,
 	}
 };

 static int __init crc32_mod_init(void)
 {
	int err;

From patchwork Thu Oct 19 05:53:32 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 06/17] crypto: cbcmac - remove unnecessary alignment logic
Date: Wed, 18 Oct 2023 22:53:32
-0700 Message-ID: <20231019055343.588846-7-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers The cbcmac template is aligning a field in its desc context to the alignmask of its underlying 'cipher', at runtime. This is almost entirely pointless, since cbcmac is already using the cipher API functions that handle alignment themselves, and few ciphers set a nonzero alignmask anyway. Also, even without runtime alignment, an alignment of at least 4 bytes can be guaranteed. Thus, at best this code is optimizing for the rare case of ciphers that set an alignmask >= 7, at the cost of hurting the common cases. Therefore, remove the manual alignment code from cbcmac. Signed-off-by: Eric Biggers --- crypto/ccm.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/crypto/ccm.c b/crypto/ccm.c index 7af89a5b745c4..dd7aed63efc93 100644 --- a/crypto/ccm.c +++ b/crypto/ccm.c @@ -49,20 +49,21 @@ struct crypto_ccm_req_priv_ctx { struct skcipher_request skreq; }; }; struct cbcmac_tfm_ctx { struct crypto_cipher *child; }; struct cbcmac_desc_ctx { unsigned int len; + u8 dg[]; }; static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx( struct aead_request *req) { unsigned long align = crypto_aead_alignmask(crypto_aead_reqtfm(req)); return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1); } @@ -778,68 +779,65 @@ static int crypto_cbcmac_digest_setkey(struct crypto_shash *parent, { struct cbcmac_tfm_ctx *ctx = crypto_shash_ctx(parent); return crypto_cipher_setkey(ctx->child, inkey, keylen); } static int crypto_cbcmac_digest_init(struct shash_desc *pdesc) { struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc); int bs = crypto_shash_digestsize(pdesc->tfm); - u8 *dg = (u8 *)ctx + crypto_shash_descsize(pdesc->tfm) - bs; ctx->len = 0; - memset(dg, 0, bs); + memset(ctx->dg, 0, bs); return 0; } static int crypto_cbcmac_digest_update(struct shash_desc *pdesc, const u8 *p, unsigned int len) { struct crypto_shash *parent = pdesc->tfm; struct cbcmac_tfm_ctx *tctx = crypto_shash_ctx(parent); struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_digestsize(parent); - u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs; while (len > 0) { unsigned int l = min(len, bs - ctx->len); - crypto_xor(dg + ctx->len, p, l); + crypto_xor(&ctx->dg[ctx->len], p, l); ctx->len +=l; len -= l; p += l; if (ctx->len == bs) { - crypto_cipher_encrypt_one(tfm, dg, dg); + crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg); ctx->len = 0; } } return 0; } static int crypto_cbcmac_digest_final(struct shash_desc *pdesc, u8 *out) { struct crypto_shash *parent = pdesc->tfm; struct cbcmac_tfm_ctx *tctx = crypto_shash_ctx(parent); struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_digestsize(parent); - u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs; if (ctx->len) - crypto_cipher_encrypt_one(tfm, dg, dg); + crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg); - memcpy(out, dg, bs); + memcpy(out, ctx->dg, bs); return 0; } static int cbcmac_init_tfm(struct crypto_tfm *tfm) { struct crypto_cipher *cipher; struct crypto_instance *inst = (void *)tfm->__crt_alg; struct crypto_cipher_spawn *spawn = crypto_instance_ctx(inst); struct cbcmac_tfm_ctx *ctx = 
crypto_tfm_ctx(tfm); @@ -882,22 +880,21 @@ static int cbcmac_create(struct crypto_template *tmpl, struct rtattr **tb) alg = crypto_spawn_cipher_alg(spawn); err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg); if (err) goto err_free_inst; inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_blocksize = 1; inst->alg.digestsize = alg->cra_blocksize; - inst->alg.descsize = ALIGN(sizeof(struct cbcmac_desc_ctx), - alg->cra_alignmask + 1) + + inst->alg.descsize = sizeof(struct cbcmac_desc_ctx) + alg->cra_blocksize; inst->alg.base.cra_ctxsize = sizeof(struct cbcmac_tfm_ctx); inst->alg.base.cra_init = cbcmac_init_tfm; inst->alg.base.cra_exit = cbcmac_exit_tfm; inst->alg.init = crypto_cbcmac_digest_init; inst->alg.update = crypto_cbcmac_digest_update; inst->alg.final = crypto_cbcmac_digest_final; inst->alg.setkey = crypto_cbcmac_digest_setkey; From patchwork Thu Oct 19 05:53:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428257 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08A7FCDB486 for ; Thu, 19 Oct 2023 05:54:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232626AbjJSFyS (ORCPT ); Thu, 19 Oct 2023 01:54:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34382 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232638AbjJSFyQ (ORCPT ); Thu, 19 Oct 2023 01:54:16 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6156811D for ; Wed, 18 Oct 2023 22:54:14 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E46EEC433C7 for ; Thu, 19 Oct 2023 05:54:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694854; bh=j82LtqYf1qqTubk8yZYGD7oFCwPaQRV4h4D9NiVEthc=; h=From:To:Subject:Date:In-Reply-To:References:From; b=PDOrKfes+20uuQZciieRY9PLBClcbLWX2KedBROqnAvNhqMMuKprX1y8B/zTR2glJ B6w3n7h5Ssy3u/o97lbp1XIl8/6SudCeJ8bY0jNPUS3xMURd8PmYfEjfhGE9wXpj4k DIu6BV26J6WgG3h2JVG40lsbsmVsn6CpfTbNUGEJryiVxAK6EupCftswC6TCFEvmrZ 4Q7doVaMsTpnHXsaAT52e8MDthTYrgzEfxBbsca6Axu29dcXxJ65bGDCnpR3Hc0kKT lsK4Axf1SUOTmHtoc6+SZyPL9WR+UzV2UxcmLVSoJ9PcHq1LUYRDQZYZIYwNixDHCz Su1J7uzC+19CA== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 07/17] crypto: cmac - remove unnecessary alignment logic Date: Wed, 18 Oct 2023 22:53:33 -0700 Message-ID: <20231019055343.588846-8-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers The cmac template is setting its alignmask to that of its underlying 'cipher'. Yet, it doesn't care itself about how its inputs and outputs are aligned, which is ostensibly the point of the alignmask. Instead, cmac actually just uses its alignmask itself to runtime-align certain fields in its tfm and desc contexts appropriately for its underlying cipher. 
That is almost entirely pointless too, though, since cmac is already using the cipher API functions that handle alignment themselves, and few ciphers set a nonzero alignmask anyway. Also, even without runtime alignment, an alignment of at least 4 bytes can be guaranteed. Thus, at best this code is optimizing for the rare case of ciphers that set an alignmask >= 7, at the cost of hurting the common cases. Therefore, this patch removes the manual alignment code from cmac and makes it stop setting an alignmask. Signed-off-by: Eric Biggers --- crypto/cmac.c | 39 +++++++++++---------------------------- 1 file changed, 11 insertions(+), 28 deletions(-) diff --git a/crypto/cmac.c b/crypto/cmac.c index fce6b0f58e88e..c7aa3665b076e 100644 --- a/crypto/cmac.c +++ b/crypto/cmac.c @@ -21,47 +21,45 @@ * +------------------------ * | * +------------------------ * | cmac_tfm_ctx * +------------------------ * | consts (block size * 2) * +------------------------ */ struct cmac_tfm_ctx { struct crypto_cipher *child; - u8 ctx[]; + __be64 consts[]; }; /* * +------------------------ * | * +------------------------ * | cmac_desc_ctx * +------------------------ * | odds (block size) * +------------------------ * | prev (block size) * +------------------------ */ struct cmac_desc_ctx { unsigned int len; - u8 ctx[]; + u8 odds[]; }; static int crypto_cmac_digest_setkey(struct crypto_shash *parent, const u8 *inkey, unsigned int keylen) { - unsigned long alignmask = crypto_shash_alignmask(parent); struct cmac_tfm_ctx *ctx = crypto_shash_ctx(parent); unsigned int bs = crypto_shash_blocksize(parent); - __be64 *consts = PTR_ALIGN((void *)ctx->ctx, - (alignmask | (__alignof__(__be64) - 1)) + 1); + __be64 *consts = ctx->consts; u64 _const[2]; int i, err = 0; u8 msb_mask, gfmask; err = crypto_cipher_setkey(ctx->child, inkey, keylen); if (err) return err; /* encrypt the zero block */ memset(consts, 0, bs); @@ -97,41 +95,39 @@ static int crypto_cmac_digest_setkey(struct crypto_shash *parent, } break; } return 0; } static int crypto_cmac_digest_init(struct shash_desc *pdesc) { - unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm); struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); int bs = crypto_shash_blocksize(pdesc->tfm); - u8 *prev = PTR_ALIGN((void *)ctx->ctx, alignmask + 1) + bs; + u8 *prev = &ctx->odds[bs]; ctx->len = 0; memset(prev, 0, bs); return 0; } static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p, unsigned int len) { struct crypto_shash *parent = pdesc->tfm; - unsigned long alignmask = crypto_shash_alignmask(parent); struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent); struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_blocksize(parent); - u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1); + u8 *odds = ctx->odds; u8 *prev = odds + bs; /* checking the data can fill the block */ if ((ctx->len + len) <= bs) { memcpy(odds + ctx->len, p, len); ctx->len += len; return 0; } /* filling odds with new data and encrypting it */ @@ -158,47 +154,44 @@ static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p, memcpy(odds, p, len); ctx->len = len; } return 0; } static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out) { struct crypto_shash *parent = pdesc->tfm; - unsigned long alignmask = crypto_shash_alignmask(parent); struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent); struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = 
crypto_shash_blocksize(parent); - u8 *consts = PTR_ALIGN((void *)tctx->ctx, - (alignmask | (__alignof__(__be64) - 1)) + 1); - u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1); + u8 *odds = ctx->odds; u8 *prev = odds + bs; unsigned int offset = 0; if (ctx->len != bs) { unsigned int rlen; u8 *p = odds + ctx->len; *p = 0x80; p++; rlen = bs - ctx->len - 1; if (rlen) memset(p, 0, rlen); offset += bs; } crypto_xor(prev, odds, bs); - crypto_xor(prev, consts + offset, bs); + crypto_xor(prev, (const u8 *)tctx->consts + offset, bs); crypto_cipher_encrypt_one(tfm, out, prev); return 0; } static int cmac_init_tfm(struct crypto_shash *tfm) { struct shash_instance *inst = shash_alg_instance(tfm); struct cmac_tfm_ctx *ctx = crypto_shash_ctx(tfm); @@ -234,21 +227,20 @@ static void cmac_exit_tfm(struct crypto_shash *tfm) { struct cmac_tfm_ctx *ctx = crypto_shash_ctx(tfm); crypto_free_cipher(ctx->child); } static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb) { struct shash_instance *inst; struct crypto_cipher_spawn *spawn; struct crypto_alg *alg; - unsigned long alignmask; u32 mask; int err; err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH, &mask); if (err) return err; inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL); if (!inst) return -ENOMEM; @@ -266,37 +258,28 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb) break; default: err = -EINVAL; goto err_free_inst; } err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg); if (err) goto err_free_inst; - alignmask = alg->cra_alignmask; - inst->alg.base.cra_alignmask = alignmask; inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_blocksize = alg->cra_blocksize; + inst->alg.base.cra_ctxsize = sizeof(struct cmac_tfm_ctx) + + alg->cra_blocksize * 2; inst->alg.digestsize = alg->cra_blocksize; - inst->alg.descsize = - ALIGN(sizeof(struct cmac_desc_ctx), crypto_tfm_ctx_alignment()) - + (alignmask & ~(crypto_tfm_ctx_alignment() - 1)) - + alg->cra_blocksize * 2; - - inst->alg.base.cra_ctxsize = - ALIGN(sizeof(struct cmac_tfm_ctx), crypto_tfm_ctx_alignment()) - + ((alignmask | (__alignof__(__be64) - 1)) & - ~(crypto_tfm_ctx_alignment() - 1)) - + alg->cra_blocksize * 2; - + inst->alg.descsize = sizeof(struct cmac_desc_ctx) + + alg->cra_blocksize * 2; inst->alg.init = crypto_cmac_digest_init; inst->alg.update = crypto_cmac_digest_update; inst->alg.final = crypto_cmac_digest_final; inst->alg.setkey = crypto_cmac_digest_setkey; inst->alg.init_tfm = cmac_init_tfm; inst->alg.clone_tfm = cmac_clone_tfm; inst->alg.exit_tfm = cmac_exit_tfm; inst->free = shash_free_singlespawn_instance; From patchwork Thu Oct 19 05:53:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428259 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12C78CDB465 for ; Thu, 19 Oct 2023 05:54:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232712AbjJSFyZ (ORCPT ); Thu, 19 Oct 2023 01:54:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232650AbjJSFyQ (ORCPT ); Thu, 19 Oct 2023 01:54:16 -0400 Received: from smtp.kernel.org 
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 08/17] crypto: hmac - remove unnecessary alignment logic
Date: Wed, 18 Oct 2023 22:53:34 -0700
Message-ID: <20231019055343.588846-9-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

The hmac template is setting its alignmask to that of its underlying unkeyed
hash algorithm, and it is aligning the ipad and opad fields in its tfm
context to that alignment.  However, hmac does not actually need any sort of
alignment itself, which makes this pointless except to keep the pads aligned
to what the underlying algorithm prefers.  But very few shash algorithms
actually set an alignmask, and it is being removed from those remaining
ones; also, after setkey, the pads are only passed to crypto_shash_import
and crypto_shash_export which ignore the alignmask.

Therefore, make the hmac template stop setting an alignmask and simply use
natural alignment for ipad and opad.  Note, this change also moves the pads
from the beginning of the tfm context to the end, which makes much more
sense; the variable-length fields should be at the end.
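A minimal user-space sketch of the layout described above, with a stand-in
declaration for struct crypto_shash: ipad and opad simply live back to back
in a flexible pads[] array at offsets 0 and statesize, naturally aligned,
with no runtime pointer alignment:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct crypto_shash;			/* opaque stand-in for the kernel type */

struct hmac_ctx {
	struct crypto_shash *hash;
	/* Contains 'u8 ipad[statesize];', then 'u8 opad[statesize];' */
	unsigned char pads[];
};

int main(void)
{
	/* Example statesize; real code gets this from crypto_shash_statesize(). */
	unsigned int ss = 104;
	struct hmac_ctx *tctx = calloc(1, sizeof(*tctx) + 2 * (size_t)ss);
	unsigned char *ipad = &tctx->pads[0];
	unsigned char *opad = &tctx->pads[ss];

	memset(ipad, 0x36, ss);		/* HMAC ipad XOR constant, for illustration */
	memset(opad, 0x5c, ss);		/* HMAC opad XOR constant, for illustration */
	printf("ipad at offset %zu, opad at offset %zu\n",
	       (size_t)(ipad - (unsigned char *)tctx),
	       (size_t)(opad - (unsigned char *)tctx));
	free(tctx);
	return 0;
}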
Signed-off-by: Eric Biggers --- crypto/hmac.c | 56 ++++++++++++++++++++------------------------------- 1 file changed, 22 insertions(+), 34 deletions(-) diff --git a/crypto/hmac.c b/crypto/hmac.c index ea93f4c55f251..7cec25ff98891 100644 --- a/crypto/hmac.c +++ b/crypto/hmac.c @@ -17,45 +17,34 @@ #include #include #include #include #include #include #include struct hmac_ctx { struct crypto_shash *hash; + /* Contains 'u8 ipad[statesize];', then 'u8 opad[statesize];' */ + u8 pads[]; }; -static inline void *align_ptr(void *p, unsigned int align) -{ - return (void *)ALIGN((unsigned long)p, align); -} - -static inline struct hmac_ctx *hmac_ctx(struct crypto_shash *tfm) -{ - return align_ptr(crypto_shash_ctx_aligned(tfm) + - crypto_shash_statesize(tfm) * 2, - crypto_tfm_ctx_alignment()); -} - static int hmac_setkey(struct crypto_shash *parent, const u8 *inkey, unsigned int keylen) { int bs = crypto_shash_blocksize(parent); int ds = crypto_shash_digestsize(parent); int ss = crypto_shash_statesize(parent); - char *ipad = crypto_shash_ctx_aligned(parent); - char *opad = ipad + ss; - struct hmac_ctx *ctx = align_ptr(opad + ss, - crypto_tfm_ctx_alignment()); - struct crypto_shash *hash = ctx->hash; + struct hmac_ctx *tctx = crypto_shash_ctx(parent); + struct crypto_shash *hash = tctx->hash; + u8 *ipad = &tctx->pads[0]; + u8 *opad = &tctx->pads[ss]; SHASH_DESC_ON_STACK(shash, hash); unsigned int i; if (fips_enabled && (keylen < 112 / 8)) return -EINVAL; shash->tfm = hash; if (keylen > bs) { int err; @@ -87,105 +76,109 @@ static int hmac_setkey(struct crypto_shash *parent, static int hmac_export(struct shash_desc *pdesc, void *out) { struct shash_desc *desc = shash_desc_ctx(pdesc); return crypto_shash_export(desc, out); } static int hmac_import(struct shash_desc *pdesc, const void *in) { struct shash_desc *desc = shash_desc_ctx(pdesc); - struct hmac_ctx *ctx = hmac_ctx(pdesc->tfm); + const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm); - desc->tfm = ctx->hash; + desc->tfm = tctx->hash; return crypto_shash_import(desc, in); } static int hmac_init(struct shash_desc *pdesc) { - return hmac_import(pdesc, crypto_shash_ctx_aligned(pdesc->tfm)); + const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm); + + return hmac_import(pdesc, &tctx->pads[0]); } static int hmac_update(struct shash_desc *pdesc, const u8 *data, unsigned int nbytes) { struct shash_desc *desc = shash_desc_ctx(pdesc); return crypto_shash_update(desc, data, nbytes); } static int hmac_final(struct shash_desc *pdesc, u8 *out) { struct crypto_shash *parent = pdesc->tfm; int ds = crypto_shash_digestsize(parent); int ss = crypto_shash_statesize(parent); - char *opad = crypto_shash_ctx_aligned(parent) + ss; + const struct hmac_ctx *tctx = crypto_shash_ctx(parent); + const u8 *opad = &tctx->pads[ss]; struct shash_desc *desc = shash_desc_ctx(pdesc); return crypto_shash_final(desc, out) ?: crypto_shash_import(desc, opad) ?: crypto_shash_finup(desc, out, ds, out); } static int hmac_finup(struct shash_desc *pdesc, const u8 *data, unsigned int nbytes, u8 *out) { struct crypto_shash *parent = pdesc->tfm; int ds = crypto_shash_digestsize(parent); int ss = crypto_shash_statesize(parent); - char *opad = crypto_shash_ctx_aligned(parent) + ss; + const struct hmac_ctx *tctx = crypto_shash_ctx(parent); + const u8 *opad = &tctx->pads[ss]; struct shash_desc *desc = shash_desc_ctx(pdesc); return crypto_shash_finup(desc, data, nbytes, out) ?: crypto_shash_import(desc, opad) ?: crypto_shash_finup(desc, out, ds, out); } static int hmac_init_tfm(struct 
crypto_shash *parent) { struct crypto_shash *hash; struct shash_instance *inst = shash_alg_instance(parent); struct crypto_shash_spawn *spawn = shash_instance_ctx(inst); - struct hmac_ctx *ctx = hmac_ctx(parent); + struct hmac_ctx *tctx = crypto_shash_ctx(parent); hash = crypto_spawn_shash(spawn); if (IS_ERR(hash)) return PTR_ERR(hash); parent->descsize = sizeof(struct shash_desc) + crypto_shash_descsize(hash); - ctx->hash = hash; + tctx->hash = hash; return 0; } static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src) { - struct hmac_ctx *sctx = hmac_ctx(src); - struct hmac_ctx *dctx = hmac_ctx(dst); + struct hmac_ctx *sctx = crypto_shash_ctx(src); + struct hmac_ctx *dctx = crypto_shash_ctx(dst); struct crypto_shash *hash; hash = crypto_clone_shash(sctx->hash); if (IS_ERR(hash)) return PTR_ERR(hash); dctx->hash = hash; return 0; } static void hmac_exit_tfm(struct crypto_shash *parent) { - struct hmac_ctx *ctx = hmac_ctx(parent); + struct hmac_ctx *tctx = crypto_shash_ctx(parent); - crypto_free_shash(ctx->hash); + crypto_free_shash(tctx->hash); } static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb) { struct shash_instance *inst; struct crypto_shash_spawn *spawn; struct crypto_alg *alg; struct shash_alg *salg; u32 mask; int err; @@ -218,29 +211,24 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb) if (ds > alg->cra_blocksize || ss < alg->cra_blocksize) goto err_free_inst; err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg); if (err) goto err_free_inst; inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_blocksize = alg->cra_blocksize; - inst->alg.base.cra_alignmask = alg->cra_alignmask; + inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) + (ss * 2); - ss = ALIGN(ss, alg->cra_alignmask + 1); inst->alg.digestsize = ds; inst->alg.statesize = ss; - - inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) + - ALIGN(ss * 2, crypto_tfm_ctx_alignment()); - inst->alg.init = hmac_init; inst->alg.update = hmac_update; inst->alg.final = hmac_final; inst->alg.finup = hmac_finup; inst->alg.export = hmac_export; inst->alg.import = hmac_import; inst->alg.setkey = hmac_setkey; inst->alg.init_tfm = hmac_init_tfm; inst->alg.clone_tfm = hmac_clone_tfm; inst->alg.exit_tfm = hmac_exit_tfm; From patchwork Thu Oct 19 05:53:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428256 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 11465CDB487 for ; Thu, 19 Oct 2023 05:54:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232614AbjJSFyS (ORCPT ); Thu, 19 Oct 2023 01:54:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232626AbjJSFyQ (ORCPT ); Thu, 19 Oct 2023 01:54:16 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A90E7FE for ; Wed, 18 Oct 2023 22:54:14 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4C7CFC433C9 for ; Thu, 19 Oct 2023 05:54:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; 
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 09/17] crypto: vmac - don't set alignmask
Date: Wed, 18 Oct 2023 22:53:35 -0700
Message-ID: <20231019055343.588846-10-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

The vmac template is setting its alignmask to that of its underlying
'cipher'.  This doesn't actually accomplish anything useful, though, so stop
doing it.  (vmac_update() does have an alignment bug, where it assumes u64
alignment when it shouldn't, but that bug exists both before and after this
patch.)

This is a prerequisite for removing support for nonzero alignmasks from
shash.

Signed-off-by: Eric Biggers
---
 crypto/vmac.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/crypto/vmac.c b/crypto/vmac.c
index 4633b2dda1e0a..0a1d8efa6c1a6 100644
--- a/crypto/vmac.c
+++ b/crypto/vmac.c
@@ -642,21 +642,20 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
 	err = -EINVAL;
 	if (alg->cra_blocksize != VMAC_NONCEBYTES)
 		goto err_free_inst;

 	err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg);
 	if (err)
 		goto err_free_inst;

 	inst->alg.base.cra_priority = alg->cra_priority;
 	inst->alg.base.cra_blocksize = alg->cra_blocksize;
-	inst->alg.base.cra_alignmask = alg->cra_alignmask;
 	inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
 	inst->alg.base.cra_init = vmac_init_tfm;
 	inst->alg.base.cra_exit = vmac_exit_tfm;

 	inst->alg.descsize = sizeof(struct vmac_desc_ctx);
 	inst->alg.digestsize = VMAC_TAG_LEN / 8;
 	inst->alg.init = vmac_init;
 	inst->alg.update = vmac_update;
 	inst->alg.final = vmac_final;

From patchwork Thu Oct 19 05:53:36 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 10/17] crypto: xcbc - remove unnecessary alignment logic
Date: Wed, 18 Oct 2023 22:53:36 -0700
Message-ID: <20231019055343.588846-11-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

The xcbc template is setting its alignmask to that of its underlying
'cipher'.  Yet, it doesn't care itself about how its inputs and outputs are
aligned, which is ostensibly the point of the alignmask.  Instead, xcbc
actually just uses its alignmask itself to runtime-align certain fields in
its tfm and desc contexts appropriately for its underlying cipher.

That is almost entirely pointless too, though, since xcbc is already using
the cipher API functions that handle alignment themselves, and few ciphers
set a nonzero alignmask anyway.  Also, even without runtime alignment, an
alignment of at least 4 bytes can be guaranteed.

Thus, at best this code is optimizing for the rare case of ciphers that set
an alignmask >= 7, at the cost of hurting the common cases.

Therefore, this patch removes the manual alignment code from xcbc and makes
it stop setting an alignmask.
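For comparison, a simplified stand-alone sketch (hypothetical structs, not
the kernel code) of the old runtime-aligned addressing versus the
flexible-array layout that this patch, like the cmac and cbcmac patches,
switches to; the alignmask value is just an example of a large cipher
alignmask:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BS 16	/* xcbc requires a 16-byte block size */

struct xcbc_tfm_ctx_old {	/* old scheme: opaque tail, aligned at runtime */
	void *child;
	unsigned char ctx[];
};

struct xcbc_tfm_ctx_new {	/* new scheme: consts follow the fixed fields */
	void *child;
	unsigned char consts[];
};

int main(void)
{
	unsigned long alignmask = 15;	/* hypothetical alignmask >= 7 */
	struct xcbc_tfm_ctx_old *o = calloc(1, sizeof(*o) + alignmask + 2 * BS);
	struct xcbc_tfm_ctx_new *n = calloc(1, sizeof(*n) + 2 * BS);

	/* Old: round the start of the flexible area up to the alignment. */
	unsigned char *consts_old = (unsigned char *)
		(((uintptr_t)o->ctx + alignmask) & ~(uintptr_t)alignmask);
	/* New: the flexible array member is already suitably placed. */
	unsigned char *consts_new = n->consts;

	printf("old consts offset: %zu (allocation %zu bytes)\n",
	       (size_t)(consts_old - (unsigned char *)o),
	       (size_t)(sizeof(*o) + alignmask + 2 * BS));
	printf("new consts offset: %zu (allocation %zu bytes)\n",
	       (size_t)(consts_new - (unsigned char *)n),
	       (size_t)(sizeof(*n) + 2 * BS));
	free(o);
	free(n);
	return 0;
}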
Signed-off-by: Eric Biggers --- crypto/xcbc.c | 32 ++++++++++---------------------- 1 file changed, 10 insertions(+), 22 deletions(-) diff --git a/crypto/xcbc.c b/crypto/xcbc.c index 6074c5c1da492..a9e8ee9c1949c 100644 --- a/crypto/xcbc.c +++ b/crypto/xcbc.c @@ -20,85 +20,82 @@ static u_int32_t ks[12] = {0x01010101, 0x01010101, 0x01010101, 0x01010101, * +------------------------ * | * +------------------------ * | xcbc_tfm_ctx * +------------------------ * | consts (block size * 2) * +------------------------ */ struct xcbc_tfm_ctx { struct crypto_cipher *child; - u8 ctx[]; + u8 consts[]; }; /* * +------------------------ * | * +------------------------ * | xcbc_desc_ctx * +------------------------ * | odds (block size) * +------------------------ * | prev (block size) * +------------------------ */ struct xcbc_desc_ctx { unsigned int len; - u8 ctx[]; + u8 odds[]; }; #define XCBC_BLOCKSIZE 16 static int crypto_xcbc_digest_setkey(struct crypto_shash *parent, const u8 *inkey, unsigned int keylen) { - unsigned long alignmask = crypto_shash_alignmask(parent); struct xcbc_tfm_ctx *ctx = crypto_shash_ctx(parent); - u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1); + u8 *consts = ctx->consts; int err = 0; u8 key1[XCBC_BLOCKSIZE]; int bs = sizeof(key1); if ((err = crypto_cipher_setkey(ctx->child, inkey, keylen))) return err; crypto_cipher_encrypt_one(ctx->child, consts, (u8 *)ks + bs); crypto_cipher_encrypt_one(ctx->child, consts + bs, (u8 *)ks + bs * 2); crypto_cipher_encrypt_one(ctx->child, key1, (u8 *)ks); return crypto_cipher_setkey(ctx->child, key1, bs); } static int crypto_xcbc_digest_init(struct shash_desc *pdesc) { - unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm); struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc); int bs = crypto_shash_blocksize(pdesc->tfm); - u8 *prev = PTR_ALIGN(&ctx->ctx[0], alignmask + 1) + bs; + u8 *prev = &ctx->odds[bs]; ctx->len = 0; memset(prev, 0, bs); return 0; } static int crypto_xcbc_digest_update(struct shash_desc *pdesc, const u8 *p, unsigned int len) { struct crypto_shash *parent = pdesc->tfm; - unsigned long alignmask = crypto_shash_alignmask(parent); struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent); struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_blocksize(parent); - u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1); + u8 *odds = ctx->odds; u8 *prev = odds + bs; /* checking the data can fill the block */ if ((ctx->len + len) <= bs) { memcpy(odds + ctx->len, p, len); ctx->len += len; return 0; } /* filling odds with new data and encrypting it */ @@ -125,46 +122,44 @@ static int crypto_xcbc_digest_update(struct shash_desc *pdesc, const u8 *p, memcpy(odds, p, len); ctx->len = len; } return 0; } static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out) { struct crypto_shash *parent = pdesc->tfm; - unsigned long alignmask = crypto_shash_alignmask(parent); struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent); struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc); struct crypto_cipher *tfm = tctx->child; int bs = crypto_shash_blocksize(parent); - u8 *consts = PTR_ALIGN(&tctx->ctx[0], alignmask + 1); - u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1); + u8 *odds = ctx->odds; u8 *prev = odds + bs; unsigned int offset = 0; if (ctx->len != bs) { unsigned int rlen; u8 *p = odds + ctx->len; *p = 0x80; p++; rlen = bs - ctx->len -1; if (rlen) memset(p, 0, rlen); offset += bs; } crypto_xor(prev, odds, bs); - crypto_xor(prev, consts + offset, bs); + crypto_xor(prev, 
&tctx->consts[offset], bs); crypto_cipher_encrypt_one(tfm, out, prev); return 0; } static int xcbc_init_tfm(struct crypto_tfm *tfm) { struct crypto_cipher *cipher; struct crypto_instance *inst = (void *)tfm->__crt_alg; @@ -184,21 +179,20 @@ static void xcbc_exit_tfm(struct crypto_tfm *tfm) { struct xcbc_tfm_ctx *ctx = crypto_tfm_ctx(tfm); crypto_free_cipher(ctx->child); } static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb) { struct shash_instance *inst; struct crypto_cipher_spawn *spawn; struct crypto_alg *alg; - unsigned long alignmask; u32 mask; int err; err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH, &mask); if (err) return err; inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL); if (!inst) return -ENOMEM; @@ -211,35 +205,29 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb) alg = crypto_spawn_cipher_alg(spawn); err = -EINVAL; if (alg->cra_blocksize != XCBC_BLOCKSIZE) goto err_free_inst; err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg); if (err) goto err_free_inst; - alignmask = alg->cra_alignmask | 3; - inst->alg.base.cra_alignmask = alignmask; inst->alg.base.cra_priority = alg->cra_priority; inst->alg.base.cra_blocksize = alg->cra_blocksize; + inst->alg.base.cra_ctxsize = sizeof(struct xcbc_tfm_ctx) + + alg->cra_blocksize * 2; inst->alg.digestsize = alg->cra_blocksize; - inst->alg.descsize = ALIGN(sizeof(struct xcbc_desc_ctx), - crypto_tfm_ctx_alignment()) + - (alignmask & - ~(crypto_tfm_ctx_alignment() - 1)) + + inst->alg.descsize = sizeof(struct xcbc_desc_ctx) + alg->cra_blocksize * 2; - inst->alg.base.cra_ctxsize = ALIGN(sizeof(struct xcbc_tfm_ctx), - alignmask + 1) + - alg->cra_blocksize * 2; inst->alg.base.cra_init = xcbc_init_tfm; inst->alg.base.cra_exit = xcbc_exit_tfm; inst->alg.init = crypto_xcbc_digest_init; inst->alg.update = crypto_xcbc_digest_update; inst->alg.final = crypto_xcbc_digest_final; inst->alg.setkey = crypto_xcbc_digest_setkey; inst->free = shash_free_singlespawn_instance; From patchwork Thu Oct 19 05:53:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428266 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 472C7CDB465 for ; Thu, 19 Oct 2023 05:54:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231705AbjJSFyg (ORCPT ); Thu, 19 Oct 2023 01:54:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55462 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232678AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D589C112 for ; Wed, 18 Oct 2023 22:54:14 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id A79B5C433CA for ; Thu, 19 Oct 2023 05:54:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694854; bh=Q/ICmAR+hC7qnlX/jlK/bjhFFZtdntEIDOUu3Skh/nU=; h=From:To:Subject:Date:In-Reply-To:References:From; b=hH5m5fcIkTYfxyF6ttq+Qul7pZaUHgFgd+6wJnpSmN8SvYF5TwNG8skmiUUeZXhVg qBPQrCw3ivkgiG2KiF3r/bwl5JsnGi8EIdkjD68zRt6ptkr/s9uIyTGyLBGvDq9OvN 
pm8llPdsfDVqYsUFVkOvHKR1TnaBbu0rj49OAiX3qzJR5L8UZFee6Vt2jsJ1NqCc31 vffCckOeMbwq/RsDxq6Y/ixeP/EXrnOCEzuC0w2dQyQvH6FtXZ2Mvjm1aEZQ6g/n3m qj9zVOKd/EAV70AZpUA0gPjk9Ft1seBh9BTzZGEjkVD6OQAoyO+hX75zKcF+PKyqr8 IsJ1NtAyPC7tA== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 11/17] crypto: shash - remove support for nonzero alignmask Date: Wed, 18 Oct 2023 22:53:37 -0700 Message-ID: <20231019055343.588846-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Currently, the shash API checks the alignment of all message, key, and digest buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers. This is virtually useless, however. In the case of the message buffer, cryptographic hash functions internally operate on fixed-size blocks, so implementations end up needing to deal with byte-aligned data anyway because the length(s) passed to ->update might not be divisible by the block size. Word-alignment of the message can theoretically be helpful for CRCs, like what was being done in crc32c-sparc64. But in practice it's better for the algorithms to use unaligned accesses or align the message themselves. A similar argument applies to the key and digest. In any case, no shash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from shash. The benefit is that all the code to handle "misaligned" buffers in the shash API goes away, reducing the overhead of the shash API. Signed-off-by: Eric Biggers --- crypto/shash.c | 128 ++++--------------------------------------------- 1 file changed, 8 insertions(+), 120 deletions(-) diff --git a/crypto/shash.c b/crypto/shash.c index 52420c41db44a..409b33f9c97cc 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -3,264 +3,151 @@ * Synchronous Cryptographic Hash operations. 
* * Copyright (c) 2008 Herbert Xu */ #include #include #include #include #include -#include #include #include #include #include "hash.h" -#define MAX_SHASH_ALIGNMASK 63 - static const struct crypto_type crypto_shash_type; static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg) { return hash_get_stat(&alg->halg); } static inline int crypto_shash_errstat(struct shash_alg *alg, int err) { return crypto_hash_errstat(&alg->halg, err); } int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { return -ENOSYS; } EXPORT_SYMBOL_GPL(shash_no_setkey); -static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key, - unsigned int keylen) -{ - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); - unsigned long absize; - u8 *buffer, *alignbuffer; - int err; - - absize = keylen + (alignmask & ~(crypto_tfm_ctx_alignment() - 1)); - buffer = kmalloc(absize, GFP_ATOMIC); - if (!buffer) - return -ENOMEM; - - alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); - memcpy(alignbuffer, key, keylen); - err = shash->setkey(tfm, alignbuffer, keylen); - kfree_sensitive(buffer); - return err; -} - static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg) { if (crypto_shash_alg_needs_key(alg)) crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY); } int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); int err; - if ((unsigned long)key & alignmask) - err = shash_setkey_unaligned(tfm, key, keylen); - else - err = shash->setkey(tfm, key, keylen); - + err = shash->setkey(tfm, key, keylen); if (unlikely(err)) { shash_set_needkey(tfm, shash); return err; } crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); return 0; } EXPORT_SYMBOL_GPL(crypto_shash_setkey); -static int shash_update_unaligned(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - struct crypto_shash *tfm = desc->tfm; - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); - unsigned int unaligned_len = alignmask + 1 - - ((unsigned long)data & alignmask); - /* - * We cannot count on __aligned() working for large values: - * https://patchwork.kernel.org/patch/9507697/ - */ - u8 ubuf[MAX_SHASH_ALIGNMASK * 2]; - u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1); - int err; - - if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf))) - return -EINVAL; - - if (unaligned_len > len) - unaligned_len = len; - - memcpy(buf, data, unaligned_len); - err = shash->update(desc, buf, unaligned_len); - memset(buf, 0, unaligned_len); - - return err ?: - shash->update(desc, data + unaligned_len, len - unaligned_len); -} - int crypto_shash_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct crypto_shash *tfm = desc->tfm; - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); + struct shash_alg *shash = crypto_shash_alg(desc->tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) atomic64_add(len, &shash_get_stat(shash)->hash_tlen); - if ((unsigned long)data & alignmask) - err = shash_update_unaligned(desc, data, len); - else - err = shash->update(desc, data, len); + err = shash->update(desc, data, len); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_update); -static int shash_final_unaligned(struct shash_desc *desc, u8 *out) -{ - struct 
crypto_shash *tfm = desc->tfm; - unsigned long alignmask = crypto_shash_alignmask(tfm); - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned int ds = crypto_shash_digestsize(tfm); - /* - * We cannot count on __aligned() working for large values: - * https://patchwork.kernel.org/patch/9507697/ - */ - u8 ubuf[MAX_SHASH_ALIGNMASK + HASH_MAX_DIGESTSIZE]; - u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1); - int err; - - if (WARN_ON(buf + ds > ubuf + sizeof(ubuf))) - return -EINVAL; - - err = shash->final(desc, buf); - if (err) - goto out; - - memcpy(out, buf, ds); - -out: - memset(buf, 0, ds); - return err; -} - int crypto_shash_final(struct shash_desc *desc, u8 *out) { - struct crypto_shash *tfm = desc->tfm; - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); + struct shash_alg *shash = crypto_shash_alg(desc->tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) atomic64_inc(&shash_get_stat(shash)->hash_cnt); - if ((unsigned long)out & alignmask) - err = shash_final_unaligned(desc, out); - else - err = shash->final(desc, out); + err = shash->final(desc, out); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_final); -static int shash_finup_unaligned(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return shash_update_unaligned(desc, data, len) ?: - shash_final_unaligned(desc, out); -} - static int shash_default_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct shash_alg *shash = crypto_shash_alg(desc->tfm); return shash->update(desc, data, len) ?: shash->final(desc, out); } int crypto_shash_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct crypto_shash *tfm = desc->tfm; struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { struct crypto_istat_hash *istat = shash_get_stat(shash); atomic64_inc(&istat->hash_cnt); atomic64_add(len, &istat->hash_tlen); } - if (((unsigned long)data | (unsigned long)out) & alignmask) - err = shash_finup_unaligned(desc, data, len, out); - else - err = shash->finup(desc, data, len, out); - + err = shash->finup(desc, data, len, out); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_finup); static int shash_default_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct shash_alg *shash = crypto_shash_alg(desc->tfm); return shash->init(desc) ?: shash->finup(desc, data, len, out); } int crypto_shash_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct crypto_shash *tfm = desc->tfm; struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { struct crypto_istat_hash *istat = shash_get_stat(shash); atomic64_inc(&istat->hash_cnt); atomic64_add(len, &istat->hash_tlen); } if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) err = -ENOKEY; - else if (((unsigned long)data | (unsigned long)out) & alignmask) - err = shash->init(desc) ?: - shash_finup_unaligned(desc, data, len, out); else err = shash->digest(desc, data, len, out); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_digest); int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data, unsigned int len, u8 *out) { @@ -663,21 +550,22 @@ int hash_prepare_alg(struct hash_alg_common *alg) } static int shash_prepare_alg(struct shash_alg *alg) { 
struct crypto_alg *base = &alg->halg.base; int err; if (alg->descsize > HASH_MAX_DESCSIZE) return -EINVAL; - if (base->cra_alignmask > MAX_SHASH_ALIGNMASK) + /* alignmask is not useful for shash, so it is not supported. */ + if (base->cra_alignmask) return -EINVAL; if ((alg->export && !alg->import) || (alg->import && !alg->export)) return -EINVAL; err = hash_prepare_alg(&alg->halg); if (err) return err; base->cra_type = &crypto_shash_type; From patchwork Thu Oct 19 05:53:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428258 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73528CDB483 for ; Thu, 19 Oct 2023 05:54:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232650AbjJSFy0 (ORCPT ); Thu, 19 Oct 2023 01:54:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232057AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3E97A11B for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D5C1EC433CB for ; Thu, 19 Oct 2023 05:54:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694854; bh=WZ1hRhoDd4lCMgZ0YancZdiBlLpsjneFbre+ZmH4G/E=; h=From:To:Subject:Date:In-Reply-To:References:From; b=vKNjDSUqoOZWhVRnRj3JK3CAdtOS5Gf6iV7Q7ROCEkQjis7RCZdyMrbcqn89/h9A8 bT174ZX6KLxPwh7olqAiv/J3eN/w2aGUIZH1BuqF96dpH2TXx8pM+twzH5D7bXjIrR HsB0ujykcUfMCqaSlAwhKtByS2u2ffwjFKrUZEMArMUZ8Rqm7IEnbGinPsrtXVOSHH AiA1ohW//vX1YL929f7345oNb++UQYTmpSwIw8h1fna5wxvNqfKA5qDRxcC11u7dzT mx4pEUusOVNPFjffLf2LEgoAN4lN+eTprkPb+cRpS31vDi3/X9tvYEf+nUGXP64HGU uEpdHPuYGSrVg== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 12/17] libceph: stop checking crypto_shash_alignmask Date: Wed, 18 Oct 2023 22:53:38 -0700 Message-ID: <20231019055343.588846-13-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Now that the shash algorithm type does not support nonzero alignmasks, crypto_shash_alignmask() always returns 0 and will be removed. In preparation for this, stop checking crypto_shash_alignmask() in net/ceph/messenger_v2.c. 
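As an aside, purely illustrative and not part of the change itself: the WARN_ON()s
being deleted can no longer fire, because every shash tfm now reports an alignmask
of 0, which makes '(unsigned long)buf & alignmask' identically zero. A minimal
sketch demonstrating that property (the init function name is made up for the
example and does not exist in the tree):

	#include <crypto/hash.h>
	#include <linux/err.h>

	static int __init check_hmac_alignmask(void)
	{
		struct crypto_shash *tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);

		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		/* Always 0 for shash now, so alignment checks against it are no-ops. */
		WARN_ON(crypto_shash_alignmask(tfm) != 0);
		crypto_free_shash(tfm);
		return 0;
	}
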
Signed-off-by: Eric Biggers --- net/ceph/messenger_v2.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c index d09a39ff2cf04..f8ec60e1aba3a 100644 --- a/net/ceph/messenger_v2.c +++ b/net/ceph/messenger_v2.c @@ -726,22 +726,20 @@ static int setup_crypto(struct ceph_connection *con, noio_flag = memalloc_noio_save(); con->v2.hmac_tfm = crypto_alloc_shash("hmac(sha256)", 0, 0); memalloc_noio_restore(noio_flag); if (IS_ERR(con->v2.hmac_tfm)) { ret = PTR_ERR(con->v2.hmac_tfm); con->v2.hmac_tfm = NULL; pr_err("failed to allocate hmac tfm context: %d\n", ret); return ret; } - WARN_ON((unsigned long)session_key & - crypto_shash_alignmask(con->v2.hmac_tfm)); ret = crypto_shash_setkey(con->v2.hmac_tfm, session_key, session_key_len); if (ret) { pr_err("failed to set hmac key: %d\n", ret); return ret; } if (con->v2.con_mode == CEPH_CON_MODE_CRC) { WARN_ON(con_secret_len); return 0; /* auth_x, plain mode */ @@ -809,22 +807,20 @@ static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs, memset(hmac, 0, SHA256_DIGEST_SIZE); return 0; /* auth_none */ } desc->tfm = con->v2.hmac_tfm; ret = crypto_shash_init(desc); if (ret) goto out; for (i = 0; i < kvec_cnt; i++) { - WARN_ON((unsigned long)kvecs[i].iov_base & - crypto_shash_alignmask(con->v2.hmac_tfm)); ret = crypto_shash_update(desc, kvecs[i].iov_base, kvecs[i].iov_len); if (ret) goto out; } ret = crypto_shash_final(desc, hmac); out: shash_desc_zero(desc); From patchwork Thu Oct 19 05:53:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428260 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC9D5CDB465 for ; Thu, 19 Oct 2023 05:54:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232740AbjJSFyc (ORCPT ); Thu, 19 Oct 2023 01:54:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55424 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232658AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3C75B116 for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0F614C43391 for ; Thu, 19 Oct 2023 05:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694855; bh=TuzF4+jIfBft5VqDo/fE9UfXx0/TR1AztgYN6AHx+OQ=; h=From:To:Subject:Date:In-Reply-To:References:From; b=vGp0ylWCyNJcdylXwU1quTIfNQxtwNGHr6tFRyQ6H43DQ/GkmQEitHntKQ435Lyu0 Vt7tX4v7jv/p1h9ZIEiPsWgHFxI0q7c7of5tyLuHm4ZIAL3nSRL3T3UQEQd4aSfiuR aBI20FfnnbPLvERWMMbFD1ehxikbsl88ZbXd3CNm7atBO7Gc3A4VPWGfeuzZEh6RSb HbdIJuMJzfyVze0qz5RJ5wTtb/Xh0GJVgPpsq8uoXYBY/VEaAy4PIMU0G8nSa6NupA SyMiteaoTlBEJWUiy/PYe39DjG4UK5ScdkeQk2B4XnZOGRYNP5Dlj3JC9u50Pke+/3 4mq/LGD8dXTBQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 13/17] crypto: drbg - stop checking crypto_shash_alignmask Date: Wed, 18 Oct 2023 22:53:39 -0700 Message-ID: <20231019055343.588846-14-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: 
<20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Now that the shash algorithm type does not support nonzero alignmasks, crypto_shash_alignmask() always returns 0 and will be removed. In preparation for this, stop checking crypto_shash_alignmask() in drbg. Signed-off-by: Eric Biggers --- crypto/drbg.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/crypto/drbg.c b/crypto/drbg.c index ff4ebbc68efab..e01f8c7769d03 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -1691,21 +1691,21 @@ static int drbg_init_hash_kernel(struct drbg_state *drbg) sdesc = kzalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm), GFP_KERNEL); if (!sdesc) { crypto_free_shash(tfm); return -ENOMEM; } sdesc->shash.tfm = tfm; drbg->priv_data = sdesc; - return crypto_shash_alignmask(tfm); + return 0; } static int drbg_fini_hash_kernel(struct drbg_state *drbg) { struct sdesc *sdesc = drbg->priv_data; if (sdesc) { crypto_free_shash(sdesc->shash.tfm); kfree_sensitive(sdesc); } drbg->priv_data = NULL; From patchwork Thu Oct 19 05:53:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428264 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78D32CDB485 for ; Thu, 19 Oct 2023 05:54:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232057AbjJSFyg (ORCPT ); Thu, 19 Oct 2023 01:54:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55464 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232683AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9C97AB6 for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3D13CC433CC for ; Thu, 19 Oct 2023 05:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694855; bh=rDKS+YvCLRvkbTwqkwMcOAISlL7GPSBnvGKjtB0WB6E=; h=From:To:Subject:Date:In-Reply-To:References:From; b=fUh0PtvtFDXqoxVBd27XQlUtMdOsxPo50Wqb9jbwAFfXp3PDPpeJPA92LFsKLlT+y f0aNzXuG9fhBT5aB6D0tEvWwpPQC44S8F7s49BZmRAyaOCjRIEtJK0eL9Q2e6lUWpj RxUzE8vCM7d4mJu/ntxTLSzesnHdrlb66Nb+5KYh6Uq+YylgDVhncY1vM/7f+lJMdU Rsc0uKg4kkfgiuo7lyRcHZHGTh5b+duFLnLGYYE7RjnEBfAnXpIBfNF3KV42sQ/6SW WGuWqLWKdNYybwEJnH21kLB0WZ5oBJ1QNGmRwj42D7dvs5RqoNBX20EohFaJt4D77z HTLHFTur3SAew== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 14/17] crypto: testmgr - stop checking crypto_shash_alignmask Date: Wed, 18 Oct 2023 22:53:40 -0700 Message-ID: <20231019055343.588846-15-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Now that the shash algorithm type does not support nonzero alignmasks, crypto_shash_alignmask() always returns 0 and will be removed. In preparation for this, stop checking crypto_shash_alignmask() in testmgr. 
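Purely as an illustration of why passing 0 here is safe (this is not testmgr code;
the function name is hypothetical): shash implementations must now accept
arbitrarily aligned buffers themselves, so a caller can hash from any byte offset
without consulting an alignmask first:

	#include <crypto/hash.h>
	#include <linux/err.h>

	/* Assumes len >= 1 and that 'digest' is at least the digest size. */
	static int hash_at_odd_offset(const u8 *buf, unsigned int len, u8 *digest)
	{
		struct crypto_shash *tfm = crypto_alloc_shash("sha256", 0, 0);
		int err;

		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		/* buf + 1 is deliberately misaligned; the shash must handle it. */
		err = crypto_shash_tfm_digest(tfm, buf + 1, len - 1, digest);
		crypto_free_shash(tfm);
		return err;
	}
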
Signed-off-by: Eric Biggers --- crypto/testmgr.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 54135c7610f06..48a0929c7a158 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -1268,50 +1268,49 @@ static inline int check_shash_op(const char *op, int err, /* Test one hash test vector in one configuration, using the shash API */ static int test_shash_vec_cfg(const struct hash_testvec *vec, const char *vec_name, const struct testvec_config *cfg, struct shash_desc *desc, struct test_sglist *tsgl, u8 *hashstate) { struct crypto_shash *tfm = desc->tfm; - const unsigned int alignmask = crypto_shash_alignmask(tfm); const unsigned int digestsize = crypto_shash_digestsize(tfm); const unsigned int statesize = crypto_shash_statesize(tfm); const char *driver = crypto_shash_driver_name(tfm); const struct test_sg_division *divs[XBUFSIZE]; unsigned int i; u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN]; int err; /* Set the key, if specified */ if (vec->ksize) { err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize, - cfg, alignmask); + cfg, 0); if (err) { if (err == vec->setkey_error) return 0; pr_err("alg: shash: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n", driver, vec_name, vec->setkey_error, err, crypto_shash_get_flags(tfm)); return err; } if (vec->setkey_error) { pr_err("alg: shash: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n", driver, vec_name, vec->setkey_error); return -EINVAL; } } /* Build the scatterlist for the source data */ - err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs); + err = build_hash_sglist(tsgl, vec, cfg, 0, divs); if (err) { pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n", driver, vec_name, cfg->name); return err; } /* Do the actual hashing */ testmgr_poison(desc->__ctx, crypto_shash_descsize(tfm)); testmgr_poison(result, digestsize + TESTMGR_POISON_LEN); From patchwork Thu Oct 19 05:53:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428262 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C2638CDB482 for ; Thu, 19 Oct 2023 05:54:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232777AbjJSFye (ORCPT ); Thu, 19 Oct 2023 01:54:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55452 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232670AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 978C1124 for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6A203C433C9 for ; Thu, 19 Oct 2023 05:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694855; bh=nrvdUnNzBcEYHOgVy+o433Wmsw1Eb+ZN29j5yZ8/iW0=; h=From:To:Subject:Date:In-Reply-To:References:From; b=UcujRJ/ZzXStpBtb+1q8h6Z+lV26x/5L/eylyo7ppBI/IsGKUpKZ3CGIhnSJ8PQUF mLA1X0BT/VaiuoXvXDn1koLOFXQ76XJiP+fjS7u8eXh4WdiT4NReVhKgI//eUwr5wy PodKm46iaE16TmlGWzqKX6m8QIVykug0pIex6NSFnE29NqCm3bbd6OxWPZriOqwCeb 
+aI8M7DhUtxEjfYUy5oAAoIzI4+nBotm5VCQEiFHINflz34/rnAr320sb7rAcyakBR msCjlQ5Fe8x3CzTOFwZpyl9mcoReWINIkLSoXdEa2iT+dbQhNdGTunfCLzQgnbAWia CrijRO/Xw2IlQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 15/17] crypto: adiantum - stop using alignmask of shash_alg Date: Wed, 18 Oct 2023 22:53:41 -0700 Message-ID: <20231019055343.588846-16-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Now that the shash algorithm type does not support nonzero alignmasks, shash_alg::base.cra_alignmask is always 0, so OR-ing it into another value is a no-op. Signed-off-by: Eric Biggers --- crypto/adiantum.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/crypto/adiantum.c b/crypto/adiantum.c index 51703746d91e2..064a0a57c77c1 100644 --- a/crypto/adiantum.c +++ b/crypto/adiantum.c @@ -554,22 +554,21 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb) goto err_free_inst; if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "adiantum(%s,%s,%s)", streamcipher_alg->base.cra_driver_name, blockcipher_alg->cra_driver_name, hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) goto err_free_inst; inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE; inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx); - inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask | - hash_alg->base.cra_alignmask; + inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask; /* * The block cipher is only invoked once per message, so for long * messages (e.g. sectors for disk encryption) its performance doesn't * matter as much as that of the stream cipher and hash function. Thus, * weigh the block cipher's ->cra_priority less. 
*/ inst->alg.base.cra_priority = (4 * streamcipher_alg->base.cra_priority + 2 * hash_alg->base.cra_priority + blockcipher_alg->cra_priority) / 7; From patchwork Thu Oct 19 05:53:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428263 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 262FFC41513 for ; Thu, 19 Oct 2023 05:54:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232782AbjJSFyf (ORCPT ); Thu, 19 Oct 2023 01:54:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55466 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232680AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5712126 for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 981CBC433C7 for ; Thu, 19 Oct 2023 05:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694855; bh=ZglcWcSc/6r5kwTLpX9SlSkzhKg8wNTkUlAli5+ICIw=; h=From:To:Subject:Date:In-Reply-To:References:From; b=Trz63NTtoKbuZ6/LG+WwDV7L/PPd9dkgn3uHSmu9P8kPk+weOHNTX1jK618v1xueu LGCcaMYr2cZGhwRffg7h/B+RZwjIgTEdGDp3+W1JFughOZV/K78m5gQwAVqYMnbyym b6dpLBDGKu/Q+Oug8niYN35vLptzd8zUJX8/VTPT6uf8p8dFdfSrpU6WDNNW9D9n3r cx9pZo8KvuiNSMtukKsChuNmth+Sya8eF/x8+OeAySFg8TAfWmUCikkhsAxUUyr3AO JJP2JKMNa8/8+/Ur/mLCsc9j0W9GRik/4BwkNKT9No1+zpSnxih0sD1H7OV8xSkWkm x1DU3IsPaEb2A== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 16/17] crypto: hctr2 - stop using alignmask of shash_alg Date: Wed, 18 Oct 2023 22:53:42 -0700 Message-ID: <20231019055343.588846-17-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Now that the shash algorithm type does not support nonzero alignmasks, shash_alg::base.cra_alignmask is always 0, so OR-ing it into another value is a no-op. Signed-off-by: Eric Biggers --- crypto/hctr2.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/crypto/hctr2.c b/crypto/hctr2.c index 653fde727f0fa..87e7547ad1862 100644 --- a/crypto/hctr2.c +++ b/crypto/hctr2.c @@ -478,22 +478,21 @@ static int hctr2_create_common(struct crypto_template *tmpl, goto err_free_inst; if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "hctr2_base(%s,%s)", xctr_alg->base.cra_driver_name, polyval_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) goto err_free_inst; inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE; inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx) + polyval_alg->statesize * 2; - inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask | - polyval_alg->base.cra_alignmask; + inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask; /* * The hash function is called twice, so it is weighted higher than the * xctr and blockcipher. 
*/ inst->alg.base.cra_priority = (2 * xctr_alg->base.cra_priority + 4 * polyval_alg->base.cra_priority + blockcipher_alg->cra_priority) / 7; inst->alg.setkey = hctr2_setkey; inst->alg.encrypt = hctr2_encrypt; From patchwork Thu Oct 19 05:53:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13428265 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DACE4CDB484 for ; Thu, 19 Oct 2023 05:54:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232678AbjJSFyh (ORCPT ); Thu, 19 Oct 2023 01:54:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232695AbjJSFyY (ORCPT ); Thu, 19 Oct 2023 01:54:24 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 14A5DFE for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C5633C433CD for ; Thu, 19 Oct 2023 05:54:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694855; bh=GcLYinJeWpUvY/Uoy8rAE5b2gBzBptP2ijV9hrQy9vg=; h=From:To:Subject:Date:In-Reply-To:References:From; b=eXLwQyPQ4lCitWeeffNUOvNFLlJf21BdUFCQ0Fi/354X1YBx3PNkruJq4RHufgHik m5KNFkgC4w17IMHnK6EQkc+8vCfTjNbhZnjw8QLrTXKayapw206rn9bvbIve8sWhTp G7Hi+SMfrpKOp7kWWI8LM7HCHpmaFIgznOo52D75QM2GSs0wB1pbypDM3UR6kgfL65 55dFWOfTAuMY5c3pvy90nD3v8gT4UH+GbFZ409vDRSpbQS7C0wkhmguF/6AZuSH5mx k30YfFLx+OtaJ/bQy+ye0SddaLinsA9JmQGEqHwQyqiXqw0qkPPlyMk8pFA/r00MAe blNdiaCz8ETYQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 17/17] crypto: shash - remove crypto_shash_alignmask Date: Wed, 18 Oct 2023 22:53:43 -0700 Message-ID: <20231019055343.588846-18-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org> References: <20231019055343.588846-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers crypto_shash_alignmask() no longer has any callers, and it always returns 0 now that the shash algorithm type no longer supports nonzero alignmasks. Therefore, remove it. Signed-off-by: Eric Biggers --- include/crypto/hash.h | 6 ------ 1 file changed, 6 deletions(-) diff --git a/include/crypto/hash.h b/include/crypto/hash.h index 52e57e93b2f59..d3a380ae894ad 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -791,26 +791,20 @@ static inline void crypto_free_shash(struct crypto_shash *tfm) static inline const char *crypto_shash_alg_name(struct crypto_shash *tfm) { return crypto_tfm_alg_name(crypto_shash_tfm(tfm)); } static inline const char *crypto_shash_driver_name(struct crypto_shash *tfm) { return crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm)); } -static inline unsigned int crypto_shash_alignmask( - struct crypto_shash *tfm) -{ - return crypto_tfm_alg_alignmask(crypto_shash_tfm(tfm)); -} - /** * crypto_shash_blocksize() - obtain block size for cipher * @tfm: cipher handle * * The block size for the message digest cipher referenced with the cipher * handle is returned. 
* * Return: block size of cipher */ static inline unsigned int crypto_shash_blocksize(struct crypto_shash *tfm)