From patchwork Mon Dec 2 21:42:29 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 11269947
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 1/2] crypto: compress - remove crt_u.compress (struct compress_tfm)
Date: Mon, 2 Dec 2019 13:42:29 -0800
Message-Id: <20191202214230.164997-2-ebiggers@kernel.org>
In-Reply-To: <20191202214230.164997-1-ebiggers@kernel.org>
References: <20191202214230.164997-1-ebiggers@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Eric Biggers

crt_u.compress (struct compress_tfm) is pointless because its two
fields, ->cot_compress() and ->cot_decompress(), always point to
crypto_compress() and crypto_decompress().

Remove this pointless indirection, and just make crypto_comp_compress()
and crypto_comp_decompress() be direct calls to what used to be
crypto_compress() and crypto_decompress().

Also remove the unused function crypto_comp_cast().
Signed-off-by: Eric Biggers
---
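For reference, the caller-facing API is unchanged by this patch; a minimal,
hypothetical sketch of a crypto_comp user looks roughly like the following
(the function name, the "deflate" algorithm choice, and the buffer handling
are arbitrary illustrations, and error handling is trimmed):

#include <linux/crypto.h>
#include <linux/err.h>

/* Hypothetical example: compress one buffer with an arbitrary algorithm. */
static int example_deflate_compress(const u8 *src, unsigned int slen,
				    u8 *dst, unsigned int *dlen)
{
	struct crypto_comp *tfm;
	int err;

	/* allocate a transform for the (arbitrarily chosen) "deflate" algorithm */
	tfm = crypto_alloc_comp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* *dlen holds the capacity of @dst on entry and the output length on success */
	err = crypto_comp_compress(tfm, src, slen, dst, dlen);

	crypto_free_comp(tfm);
	return err;
}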
 crypto/api.c           |  2 +-
 crypto/compress.c      | 31 ++++++++++++------------------
 crypto/internal.h      |  1 -
 include/linux/crypto.h | 43 ++++++------------------------------------
 4 files changed, 19 insertions(+), 58 deletions(-)

diff --git a/crypto/api.c b/crypto/api.c
index 55bca28df92d86..d1ff266a92ff7d 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -301,7 +301,7 @@ static int crypto_init_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
 		return crypto_init_cipher_ops(tfm);
 
 	case CRYPTO_ALG_TYPE_COMPRESS:
-		return crypto_init_compress_ops(tfm);
+		return 0;
 
 	default:
 		break;
diff --git a/crypto/compress.c b/crypto/compress.c
index e9edf852478798..9048fe390c4630 100644
--- a/crypto/compress.c
+++ b/crypto/compress.c
@@ -6,34 +6,27 @@
  *
  * Copyright (c) 2002 James Morris
  */
-#include <linux/types.h>
 #include <linux/crypto.h>
-#include <linux/errno.h>
-#include <linux/string.h>
 #include "internal.h"
 
-static int crypto_compress(struct crypto_tfm *tfm,
-			   const u8 *src, unsigned int slen,
-			   u8 *dst, unsigned int *dlen)
+int crypto_comp_compress(struct crypto_comp *comp,
+			 const u8 *src, unsigned int slen,
+			 u8 *dst, unsigned int *dlen)
 {
+	struct crypto_tfm *tfm = crypto_comp_tfm(comp);
+
 	return tfm->__crt_alg->cra_compress.coa_compress(tfm, src, slen, dst,
 							 dlen);
 }
+EXPORT_SYMBOL_GPL(crypto_comp_compress);
 
-static int crypto_decompress(struct crypto_tfm *tfm,
-			     const u8 *src, unsigned int slen,
-			     u8 *dst, unsigned int *dlen)
+int crypto_comp_decompress(struct crypto_comp *comp,
+			   const u8 *src, unsigned int slen,
+			   u8 *dst, unsigned int *dlen)
 {
+	struct crypto_tfm *tfm = crypto_comp_tfm(comp);
+
 	return tfm->__crt_alg->cra_compress.coa_decompress(tfm, src, slen, dst,
 							   dlen);
 }
-
-int crypto_init_compress_ops(struct crypto_tfm *tfm)
-{
-	struct compress_tfm *ops = &tfm->crt_compress;
-
-	ops->cot_compress = crypto_compress;
-	ops->cot_decompress = crypto_decompress;
-
-	return 0;
-}
+EXPORT_SYMBOL_GPL(crypto_comp_decompress);
diff --git a/crypto/internal.h b/crypto/internal.h
index 93df7bec844a7c..a58a2af4b66963 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -59,7 +59,6 @@ struct crypto_alg *crypto_mod_get(struct crypto_alg *alg);
 struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask);
 
 int crypto_init_cipher_ops(struct crypto_tfm *tfm);
-int crypto_init_compress_ops(struct crypto_tfm *tfm);
 
 struct crypto_larval *crypto_larval_alloc(const char *name, u32 type, u32 mask);
 void crypto_larval_kill(struct crypto_alg *alg);
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 23365a9d062e3f..8f708564b98b11 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -606,17 +606,7 @@ struct cipher_tfm {
 	void (*cit_decrypt_one)(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
 };
 
-struct compress_tfm {
-	int (*cot_compress)(struct crypto_tfm *tfm,
-			    const u8 *src, unsigned int slen,
-			    u8 *dst, unsigned int *dlen);
-	int (*cot_decompress)(struct crypto_tfm *tfm,
-			      const u8 *src, unsigned int slen,
-			      u8 *dst, unsigned int *dlen);
-};
-
 #define crt_cipher	crt_u.cipher
-#define crt_compress	crt_u.compress
 
 struct crypto_tfm {
 
@@ -624,7 +614,6 @@ struct crypto_tfm {
 
 	union {
 		struct cipher_tfm cipher;
-		struct compress_tfm compress;
 	} crt_u;
 
 	void (*exit)(struct crypto_tfm *tfm);
@@ -928,13 +917,6 @@ static inline struct crypto_comp *__crypto_comp_cast(struct crypto_tfm *tfm)
 	return (struct crypto_comp *)tfm;
 }
 
-static inline struct crypto_comp *crypto_comp_cast(struct crypto_tfm *tfm)
-{
-	BUG_ON((crypto_tfm_alg_type(tfm) ^ CRYPTO_ALG_TYPE_COMPRESS) &
-	       CRYPTO_ALG_TYPE_MASK);
-	return __crypto_comp_cast(tfm);
-}
-
 static inline struct crypto_comp *crypto_alloc_comp(const char *alg_name,
 						    u32 type, u32 mask)
 {
@@ -969,26 +951,13 @@ static inline const char *crypto_comp_name(struct crypto_comp *tfm)
 	return crypto_tfm_alg_name(crypto_comp_tfm(tfm));
 }
 
-static inline struct compress_tfm *crypto_comp_crt(struct crypto_comp *tfm)
-{
-	return &crypto_comp_tfm(tfm)->crt_compress;
-}
-
-static inline int crypto_comp_compress(struct crypto_comp *tfm,
-					const u8 *src, unsigned int slen,
-					u8 *dst, unsigned int *dlen)
-{
-	return crypto_comp_crt(tfm)->cot_compress(crypto_comp_tfm(tfm),
-						  src, slen, dst, dlen);
-}
+int crypto_comp_compress(struct crypto_comp *tfm,
+			 const u8 *src, unsigned int slen,
+			 u8 *dst, unsigned int *dlen);
 
-static inline int crypto_comp_decompress(struct crypto_comp *tfm,
-					  const u8 *src, unsigned int slen,
-					  u8 *dst, unsigned int *dlen)
-{
-	return crypto_comp_crt(tfm)->cot_decompress(crypto_comp_tfm(tfm),
-						    src, slen, dst, dlen);
-}
+int crypto_comp_decompress(struct crypto_comp *tfm,
+			   const u8 *src, unsigned int slen,
+			   u8 *dst, unsigned int *dlen);
 
 #endif	/* _LINUX_CRYPTO_H */

From patchwork Mon Dec 2 21:42:30 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 11269949
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 2/2] crypto: cipher - remove crt_u.cipher (struct cipher_tfm)
Date: Mon, 2 Dec 2019 13:42:30 -0800
Message-Id: <20191202214230.164997-3-ebiggers@kernel.org>
In-Reply-To: <20191202214230.164997-1-ebiggers@kernel.org>
References: <20191202214230.164997-1-ebiggers@kernel.org>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Eric Biggers

Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey()
is pointless because it always points to setkey() in crypto/cipher.c.

->cit_decrypt_one() and ->cit_encrypt_one() are slightly less
pointless, since if the algorithm doesn't have an alignmask, they are
set directly to ->cia_encrypt() and ->cia_decrypt().  However, this
"optimization" isn't worthwhile because:

- The "cipher" algorithm type is the only algorithm still using crt_u,
  so it's bloating every struct crypto_tfm for every algorithm type.

- If the algorithm has an alignmask, this "optimization" actually makes
  things slower, as it causes 2 indirect calls per block rather than 1.

- It adds extra code complexity.

- Some templates already call ->cia_encrypt()/->cia_decrypt() directly
  instead of going through ->cit_encrypt_one()/->cit_decrypt_one().

- The "cipher" algorithm type never gives optimal performance anyway.
  For that, a higher-level type such as skcipher needs to be used.

Therefore, just remove the extra indirection, and make
crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and
crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.

Also remove the unused function crypto_cipher_cast().

Signed-off-by: Eric Biggers
---
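As with the previous patch, the caller-facing API does not change; a minimal,
hypothetical sketch of a single-block cipher user looks roughly like the
following (the function name and the "aes" algorithm choice are arbitrary
illustrations, and error handling is trimmed):

#include <linux/crypto.h>
#include <linux/err.h>

/* Hypothetical example: encrypt exactly one block with an arbitrary cipher. */
static int example_encrypt_block(const u8 *key, unsigned int keylen,
				 u8 *dst, const u8 *src)
{
	struct crypto_cipher *tfm;
	int err;

	/* allocate a transform for the (arbitrarily chosen) "aes" algorithm */
	tfm = crypto_alloc_cipher("aes", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_cipher_setkey(tfm, key, keylen);
	if (!err)
		/* encrypts one block (16 bytes for AES): src -> dst */
		crypto_cipher_encrypt_one(tfm, dst, src);

	crypto_free_cipher(tfm);
	return err;
}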
 crypto/api.c           | 15 +------
 crypto/cipher.c        | 92 +++++++++++++++++-------------------------
 crypto/internal.h      |  2 -
 include/linux/crypto.h | 48 +++------------------
 4 files changed, 43 insertions(+), 114 deletions(-)

diff --git a/crypto/api.c b/crypto/api.c
index d1ff266a92ff7d..72b6527d7e890c 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -295,20 +295,7 @@ static int crypto_init_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
 
 	if (type_obj)
 		return type_obj->init(tfm, type, mask);
-
-	switch (crypto_tfm_alg_type(tfm)) {
-	case CRYPTO_ALG_TYPE_CIPHER:
-		return crypto_init_cipher_ops(tfm);
-
-	case CRYPTO_ALG_TYPE_COMPRESS:
-		return 0;
-
-	default:
-		break;
-	}
-
-	BUG();
-	return -EINVAL;
+	return 0;
 }
 
 static void crypto_exit_ops(struct crypto_tfm *tfm)
diff --git a/crypto/cipher.c b/crypto/cipher.c
index 108427026e7c7a..aadd51cb7250c1 100644
--- a/crypto/cipher.c
+++ b/crypto/cipher.c
@@ -2,7 +2,7 @@
 /*
  * Cryptographic API.
  *
- * Cipher operations.
+ * Single-block cipher operations.
  *
  * Copyright (c) 2002 James Morris
 * Copyright (c) 2005 Herbert Xu
@@ -16,11 +16,11 @@
 #include <linux/string.h>
 #include "internal.h"
 
-static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
+static int setkey_unaligned(struct crypto_cipher *tfm, const u8 *key,
 			    unsigned int keylen)
 {
-	struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher;
-	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+	struct cipher_alg *cia = crypto_cipher_alg(tfm);
+	unsigned long alignmask = crypto_cipher_alignmask(tfm);
 	int ret;
 	u8 *buffer, *alignbuffer;
 	unsigned long absize;
@@ -32,83 +32,63 @@ static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
 
 	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
 	memcpy(alignbuffer, key, keylen);
-	ret = cia->cia_setkey(tfm, alignbuffer, keylen);
+	ret = cia->cia_setkey(crypto_cipher_tfm(tfm), alignbuffer, keylen);
 	memset(alignbuffer, 0, keylen);
 	kfree(buffer);
 	return ret;
 }
 
-static int setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen)
+int crypto_cipher_setkey(struct crypto_cipher *tfm,
+			 const u8 *key, unsigned int keylen)
 {
-	struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher;
-	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
+	struct cipher_alg *cia = crypto_cipher_alg(tfm);
+	unsigned long alignmask = crypto_cipher_alignmask(tfm);
 
-	tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
+	crypto_cipher_clear_flags(tfm, CRYPTO_TFM_RES_MASK);
 	if (keylen < cia->cia_min_keysize || keylen > cia->cia_max_keysize) {
-		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		crypto_cipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
 		return -EINVAL;
 	}
 
 	if ((unsigned long)key & alignmask)
 		return setkey_unaligned(tfm, key, keylen);
 
-	return cia->cia_setkey(tfm, key, keylen);
+	return cia->cia_setkey(crypto_cipher_tfm(tfm), key, keylen);
 }
+EXPORT_SYMBOL_GPL(crypto_cipher_setkey);
 
-static void cipher_crypt_unaligned(void (*fn)(struct crypto_tfm *, u8 *,
-					      const u8 *),
-				   struct crypto_tfm *tfm,
-				   u8 *dst, const u8 *src)
+static inline void cipher_crypt_one(struct crypto_cipher *tfm,
+				    u8 *dst, const u8 *src, bool enc)
 {
-	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
-	unsigned int size = crypto_tfm_alg_blocksize(tfm);
-	u8 buffer[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
-	u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
-
-	memcpy(tmp, src, size);
-	fn(tfm, tmp, tmp);
-	memcpy(dst, tmp, size);
-}
-
-static void cipher_encrypt_unaligned(struct crypto_tfm *tfm,
-				      u8 *dst, const u8 *src)
-{
-	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
+	unsigned long alignmask = crypto_cipher_alignmask(tfm);
+	struct cipher_alg *cia = crypto_cipher_alg(tfm);
+	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+		enc ? cia->cia_encrypt : cia->cia_decrypt;
 
 	if (unlikely(((unsigned long)dst | (unsigned long)src) & alignmask)) {
-		cipher_crypt_unaligned(cipher->cia_encrypt, tfm, dst, src);
-		return;
+		unsigned int bs = crypto_cipher_blocksize(tfm);
+		u8 buffer[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
+		u8 *tmp = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+
+		memcpy(tmp, src, bs);
+		fn(crypto_cipher_tfm(tfm), tmp, tmp);
+		memcpy(dst, tmp, bs);
+	} else {
+		fn(crypto_cipher_tfm(tfm), dst, src);
 	}
-
-	cipher->cia_encrypt(tfm, dst, src);
 }
 
-static void cipher_decrypt_unaligned(struct crypto_tfm *tfm,
-				     u8 *dst, const u8 *src)
+void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
+			       u8 *dst, const u8 *src)
 {
-	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
-
-	if (unlikely(((unsigned long)dst | (unsigned long)src) & alignmask)) {
-		cipher_crypt_unaligned(cipher->cia_decrypt, tfm, dst, src);
-		return;
-	}
-
-	cipher->cia_decrypt(tfm, dst, src);
+	cipher_crypt_one(tfm, dst, src, true);
 }
+EXPORT_SYMBOL_GPL(crypto_cipher_encrypt_one);
 
-int crypto_init_cipher_ops(struct crypto_tfm *tfm)
+void crypto_cipher_decrypt_one(struct crypto_cipher *tfm,
+			       u8 *dst, const u8 *src)
 {
-	struct cipher_tfm *ops = &tfm->crt_cipher;
-	struct cipher_alg *cipher = &tfm->__crt_alg->cra_cipher;
-
-	ops->cit_setkey = setkey;
-	ops->cit_encrypt_one = crypto_tfm_alg_alignmask(tfm) ?
-		cipher_encrypt_unaligned : cipher->cia_encrypt;
-	ops->cit_decrypt_one = crypto_tfm_alg_alignmask(tfm) ?
-		cipher_decrypt_unaligned : cipher->cia_decrypt;
-
-	return 0;
+	cipher_crypt_one(tfm, dst, src, false);
 }
+EXPORT_SYMBOL_GPL(crypto_cipher_decrypt_one);
diff --git a/crypto/internal.h b/crypto/internal.h
index a58a2af4b66963..ff06a3bd1ca10c 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -58,8 +58,6 @@ static inline unsigned int crypto_compress_ctxsize(struct crypto_alg *alg)
 struct crypto_alg *crypto_mod_get(struct crypto_alg *alg);
 struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask);
 
-int crypto_init_cipher_ops(struct crypto_tfm *tfm);
-
 struct crypto_larval *crypto_larval_alloc(const char *name, u32 type, u32 mask);
 void crypto_larval_kill(struct crypto_alg *alg);
 void crypto_alg_tested(const char *name, int err);
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 8f708564b98b11..c23f1eed797029 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -599,23 +599,10 @@ int crypto_has_alg(const char *name, u32 type, u32 mask);
  * crypto_free_*(), as well as the various helpers below.
  */
 
-struct cipher_tfm {
-	int (*cit_setkey)(struct crypto_tfm *tfm,
-			  const u8 *key, unsigned int keylen);
-	void (*cit_encrypt_one)(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
-	void (*cit_decrypt_one)(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
-};
-
-#define crt_cipher	crt_u.cipher
-
 struct crypto_tfm {
 
 	u32 crt_flags;
 
-	union {
-		struct cipher_tfm cipher;
-	} crt_u;
-
 	void (*exit)(struct crypto_tfm *tfm);
 
 	struct crypto_alg *__crt_alg;
@@ -752,12 +739,6 @@ static inline struct crypto_cipher *__crypto_cipher_cast(struct crypto_tfm *tfm)
 	return (struct crypto_cipher *)tfm;
 }
 
-static inline struct crypto_cipher *crypto_cipher_cast(struct crypto_tfm *tfm)
-{
-	BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
-	return __crypto_cipher_cast(tfm);
-}
-
 /**
  * crypto_alloc_cipher() - allocate single block cipher handle
  * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
@@ -815,11 +796,6 @@ static inline int crypto_has_cipher(const char *alg_name, u32 type, u32 mask)
 	return crypto_has_alg(alg_name, type, mask);
 }
 
-static inline struct cipher_tfm *crypto_cipher_crt(struct crypto_cipher *tfm)
-{
-	return &crypto_cipher_tfm(tfm)->crt_cipher;
-}
-
 /**
  * crypto_cipher_blocksize() - obtain block size for cipher
  * @tfm: cipher handle
@@ -873,12 +849,8 @@ static inline void crypto_cipher_clear_flags(struct crypto_cipher *tfm,
  *
  * Return: 0 if the setting of the key was successful; < 0 if an error occurred
  */
-static inline int crypto_cipher_setkey(struct crypto_cipher *tfm,
-				       const u8 *key, unsigned int keylen)
-{
-	return crypto_cipher_crt(tfm)->cit_setkey(crypto_cipher_tfm(tfm),
-						  key, keylen);
-}
+int crypto_cipher_setkey(struct crypto_cipher *tfm,
+			 const u8 *key, unsigned int keylen);
 
 /**
  * crypto_cipher_encrypt_one() - encrypt one block of plaintext
@@ -889,12 +861,8 @@ static inline int crypto_cipher_setkey(struct crypto_cipher *tfm,
  * Invoke the encryption operation of one block. The caller must ensure that
  * the plaintext and ciphertext buffers are at least one block in size.
  */
-static inline void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
-					     u8 *dst, const u8 *src)
-{
-	crypto_cipher_crt(tfm)->cit_encrypt_one(crypto_cipher_tfm(tfm),
-						dst, src);
-}
+void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
+			       u8 *dst, const u8 *src);
 
 /**
  * crypto_cipher_decrypt_one() - decrypt one block of ciphertext
@@ -905,12 +873,8 @@ static inline void crypto_cipher_encrypt_one(struct crypto_cipher *tfm,
  * Invoke the decryption operation of one block. The caller must ensure that
 * the plaintext and ciphertext buffers are at least one block in size.
 */
-static inline void crypto_cipher_decrypt_one(struct crypto_cipher *tfm,
-					     u8 *dst, const u8 *src)
-{
-	crypto_cipher_crt(tfm)->cit_decrypt_one(crypto_cipher_tfm(tfm),
-						dst, src);
-}
+void crypto_cipher_decrypt_one(struct crypto_cipher *tfm,
+			       u8 *dst, const u8 *src);
 
 static inline struct crypto_comp *__crypto_comp_cast(struct crypto_tfm *tfm)
 {