From patchwork Mon Nov 11 21:45:45 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 11237897
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook , João Moreira , Eric Biggers , Sami Tolvanen , "David S.
Miller" , Ard Biesheuvel , Stephan Mueller , x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com Subject: [PATCH v4 1/8] crypto: x86/glue_helper: Add function glue macros Date: Mon, 11 Nov 2019 13:45:45 -0800 Message-Id: <20191111214552.36717-2-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191111214552.36717-1-keescook@chromium.org> References: <20191111214552.36717-1-keescook@chromium.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The crypto glue performed function prototype casting to make indirect calls to assembly routines. Instead of performing casts at the call sites (which trips Control Flow Integrity prototype checking), create a set of macros to either declare the prototypes to avoid the need for casts, or build inline helpers to allow for various aliased functions. Co-developed-by: João Moreira Signed-off-by: Kees Cook --- arch/x86/include/asm/crypto/glue_helper.h | 24 +++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/arch/x86/include/asm/crypto/glue_helper.h b/arch/x86/include/asm/crypto/glue_helper.h index 8d4a8e1226ee..2fa4968ab8e2 100644 --- a/arch/x86/include/asm/crypto/glue_helper.h +++ b/arch/x86/include/asm/crypto/glue_helper.h @@ -23,6 +23,30 @@ typedef void (*common_glue_xts_func_t)(void *ctx, u128 *dst, const u128 *src, #define GLUE_CTR_FUNC_CAST(fn) ((common_glue_ctr_func_t)(fn)) #define GLUE_XTS_FUNC_CAST(fn) ((common_glue_xts_func_t)(fn)) +#define CRYPTO_FUNC(func) \ +asmlinkage void func(void *ctx, u8 *dst, const u8 *src) + +#define CRYPTO_FUNC_CBC(func) \ +asmlinkage void func(void *ctx, u128 *dst, const u128 *src) + +#define CRYPTO_FUNC_WRAP_CBC(func) \ +static inline void func ## _cbc(void *ctx, u128 *dst, const u128 *src) \ +{ func(ctx, (u8 *)dst, (u8 *)src); } + +#define CRYPTO_FUNC_CTR(func) \ +asmlinkage void func(void *ctx, u128 *dst, const u128 *src, le128 *iv); + +#define CRYPTO_FUNC_XTS(func) CRYPTO_FUNC_CTR(func) + +#define CRYPTO_FUNC_XOR(func) \ +asmlinkage void __ ## func(void *ctx, u8 *dst, const u8 *src, bool y); \ +asmlinkage static inline \ +void func(void *ctx, u8 *dst, const u8 *src) \ +{ __ ## func(ctx, dst, src, false); } \ +asmlinkage static inline \ +void func ## _xor(void *ctx, u8 *dst, const u8 *src) \ +{ __ ## func(ctx, dst, src, true); } + struct common_glue_func_entry { unsigned int num_blocks; /* number of blocks that @fn will process */ union { From patchwork Mon Nov 11 21:45:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237887 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8C2A816B1 for ; Mon, 11 Nov 2019 21:46:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 5543321D7F for ; Mon, 11 Nov 2019 21:46:09 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="MlTOhUGT" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727610AbfKKVqF (ORCPT ); Mon, 11 Nov 2019 16:46:05 -0500 Received: from mail-pf1-f194.google.com ([209.85.210.194]:33767 "EHLO mail-pf1-f194.google.com" rhost-flags-OK-OK-OK-OK) by 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, João Moreira, Eric Biggers, Sami Tolvanen, "David S. Miller", Ard Biesheuvel, Stephan Mueller, x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v4 2/8] crypto: x86/serpent: Use new glue function macros
Date: Mon, 11 Nov 2019 13:45:46 -0800
Message-Id: <20191111214552.36717-3-keescook@chromium.org>
In-Reply-To: <20191111214552.36717-1-keescook@chromium.org>
References: <20191111214552.36717-1-keescook@chromium.org>

Convert to function declaration macros from function prototype casts to
avoid triggering Control-Flow Integrity checks during indirect function
calls.
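For readers following the conversion, here is a minimal sketch of what the
declaration macros added in patch 1/8 expand to for the serpent routines used
in this patch. It is illustrative only; the expansions are taken from the
CRYPTO_FUNC* macro definitions added to glue_helper.h earlier in this series,
and u8, u128, le128 and asmlinkage are the kernel's usual types/annotations:

/* CRYPTO_FUNC(serpent_ecb_enc_16way); declares the asm routine with the
 * generic (void *ctx, u8 *, const u8 *) prototype the glue tables expect,
 * so no cast is needed when storing it in .fn_u.ecb:
 */
asmlinkage void serpent_ecb_enc_16way(void *ctx, u8 *dst, const u8 *src);

/* CRYPTO_FUNC_CTR(serpent_ctr_16way); does the same for the CTR signature: */
asmlinkage void serpent_ctr_16way(void *ctx, u128 *dst, const u128 *src,
				  le128 *iv);

/* CRYPTO_FUNC_WRAP_CBC(__serpent_decrypt); emits an inline alias with the
 * u128-based CBC prototype that forwards to the u8-based cipher, which is
 * why the CBC tables below now reference __serpent_decrypt_cbc:
 */
static inline void __serpent_decrypt_cbc(void *ctx, u128 *dst, const u128 *src)
{
	__serpent_decrypt(ctx, (u8 *)dst, (u8 *)src);
}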
Co-developed-by: João Moreira Signed-off-by: Kees Cook --- arch/x86/crypto/serpent_avx2_glue.c | 65 ++++++++++------------ arch/x86/crypto/serpent_avx_glue.c | 58 +++++++------------ arch/x86/crypto/serpent_sse2_glue.c | 24 +++++--- arch/x86/include/asm/crypto/serpent-avx.h | 23 +++----- arch/x86/include/asm/crypto/serpent-sse2.h | 6 +- crypto/serpent_generic.c | 6 +- include/crypto/serpent.h | 4 +- 7 files changed, 80 insertions(+), 106 deletions(-) diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c index 13fd8d3d2da0..b27139cf93c2 100644 --- a/arch/x86/crypto/serpent_avx2_glue.c +++ b/arch/x86/crypto/serpent_avx2_glue.c @@ -19,18 +19,12 @@ #define SERPENT_AVX2_PARALLEL_BLOCKS 16 /* 16-way AVX2 parallel cipher functions */ -asmlinkage void serpent_ecb_enc_16way(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void serpent_ecb_dec_16way(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void serpent_cbc_dec_16way(void *ctx, u128 *dst, const u128 *src); - -asmlinkage void serpent_ctr_16way(void *ctx, u128 *dst, const u128 *src, - le128 *iv); -asmlinkage void serpent_xts_enc_16way(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); -asmlinkage void serpent_xts_dec_16way(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); +CRYPTO_FUNC(serpent_ecb_enc_16way); +CRYPTO_FUNC(serpent_ecb_dec_16way); +CRYPTO_FUNC_CBC(serpent_cbc_dec_16way); +CRYPTO_FUNC_CTR(serpent_ctr_16way); +CRYPTO_FUNC_XTS(serpent_xts_enc_16way); +CRYPTO_FUNC_XTS(serpent_xts_dec_16way); static int serpent_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) @@ -44,13 +38,13 @@ static const struct common_glue_ctx serpent_enc = { .funcs = { { .num_blocks = 16, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_enc_16way) } + .fn_u = { .ecb = serpent_ecb_enc_16way } }, { .num_blocks = 8, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_enc_8way_avx) } + .fn_u = { .ecb = serpent_ecb_enc_8way_avx } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_encrypt) } + .fn_u = { .ecb = __serpent_encrypt } } } }; @@ -60,13 +54,13 @@ static const struct common_glue_ctx serpent_ctr = { .funcs = { { .num_blocks = 16, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_ctr_16way) } + .fn_u = { .ctr = serpent_ctr_16way } }, { .num_blocks = 8, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_ctr_8way_avx) } + .fn_u = { .ctr = serpent_ctr_8way_avx } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(__serpent_crypt_ctr) } + .fn_u = { .ctr = __serpent_crypt_ctr } } } }; @@ -76,13 +70,13 @@ static const struct common_glue_ctx serpent_enc_xts = { .funcs = { { .num_blocks = 16, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc_16way) } + .fn_u = { .xts = serpent_xts_enc_16way } }, { .num_blocks = 8, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc_8way_avx) } + .fn_u = { .xts = serpent_xts_enc_8way_avx } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc) } + .fn_u = { .xts = serpent_xts_enc } } } }; @@ -92,13 +86,13 @@ static const struct common_glue_ctx serpent_dec = { .funcs = { { .num_blocks = 16, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_dec_16way) } + .fn_u = { .ecb = serpent_ecb_dec_16way } }, { .num_blocks = 8, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_dec_8way_avx) } + .fn_u = { .ecb = serpent_ecb_dec_8way_avx } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_decrypt) } + .fn_u = { .ecb = __serpent_decrypt } } } }; @@ -108,13 +102,13 @@ static const struct 
common_glue_ctx serpent_dec_cbc = { .funcs = { { .num_blocks = 16, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_cbc_dec_16way) } + .fn_u = { .cbc = serpent_cbc_dec_16way } }, { .num_blocks = 8, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_cbc_dec_8way_avx) } + .fn_u = { .cbc = serpent_cbc_dec_8way_avx } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__serpent_decrypt) } + .fn_u = { .cbc = __serpent_decrypt_cbc } } } }; @@ -124,13 +118,13 @@ static const struct common_glue_ctx serpent_dec_xts = { .funcs = { { .num_blocks = 16, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec_16way) } + .fn_u = { .xts = serpent_xts_dec_16way } }, { .num_blocks = 8, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec_8way_avx) } + .fn_u = { .xts = serpent_xts_dec_8way_avx } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec) } + .fn_u = { .xts = serpent_xts_dec } } } }; @@ -146,8 +140,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__serpent_encrypt), - req); + return glue_cbc_encrypt_req_128bit(__serpent_encrypt, req); } static int cbc_decrypt(struct skcipher_request *req) @@ -166,8 +159,8 @@ static int xts_encrypt(struct skcipher_request *req) struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm); return glue_xts_req_128bit(&serpent_enc_xts, req, - XTS_TWEAK_CAST(__serpent_encrypt), - &ctx->tweak_ctx, &ctx->crypt_ctx, false); + __serpent_encrypt, &ctx->tweak_ctx, + &ctx->crypt_ctx, false); } static int xts_decrypt(struct skcipher_request *req) @@ -176,8 +169,8 @@ static int xts_decrypt(struct skcipher_request *req) struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm); return glue_xts_req_128bit(&serpent_dec_xts, req, - XTS_TWEAK_CAST(__serpent_encrypt), - &ctx->tweak_ctx, &ctx->crypt_ctx, true); + __serpent_encrypt, &ctx->tweak_ctx, + &ctx->crypt_ctx, true); } static struct skcipher_alg serpent_algs[] = { diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c index 7d3dca38a5a2..4a0a26195952 100644 --- a/arch/x86/crypto/serpent_avx_glue.c +++ b/arch/x86/crypto/serpent_avx_glue.c @@ -20,28 +20,11 @@ #include /* 8-way parallel cipher functions */ -asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(serpent_ecb_enc_8way_avx); - -asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(serpent_ecb_dec_8way_avx); - -asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(serpent_cbc_dec_8way_avx); - -asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); EXPORT_SYMBOL_GPL(serpent_ctr_8way_avx); - -asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); EXPORT_SYMBOL_GPL(serpent_xts_enc_8way_avx); - -asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); EXPORT_SYMBOL_GPL(serpent_xts_dec_8way_avx); void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv) @@ -58,15 +41,13 @@ EXPORT_SYMBOL_GPL(__serpent_crypt_ctr); void serpent_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(__serpent_encrypt)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, __serpent_encrypt); } EXPORT_SYMBOL_GPL(serpent_xts_enc); void serpent_xts_dec(void 
*ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(__serpent_decrypt)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, __serpent_decrypt); } EXPORT_SYMBOL_GPL(serpent_xts_dec); @@ -102,10 +83,10 @@ static const struct common_glue_ctx serpent_enc = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_enc_8way_avx) } + .fn_u = { .ecb = serpent_ecb_enc_8way_avx } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_encrypt) } + .fn_u = { .ecb = __serpent_encrypt } } } }; @@ -115,10 +96,10 @@ static const struct common_glue_ctx serpent_ctr = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_ctr_8way_avx) } + .fn_u = { .ctr = serpent_ctr_8way_avx } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(__serpent_crypt_ctr) } + .fn_u = { .ctr = __serpent_crypt_ctr } } } }; @@ -128,10 +109,10 @@ static const struct common_glue_ctx serpent_enc_xts = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc_8way_avx) } + .fn_u = { .xts = serpent_xts_enc_8way_avx } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_enc) } + .fn_u = { .xts = serpent_xts_enc } } } }; @@ -141,10 +122,10 @@ static const struct common_glue_ctx serpent_dec = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_ecb_dec_8way_avx) } + .fn_u = { .ecb = serpent_ecb_dec_8way_avx } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_decrypt) } + .fn_u = { .ecb = __serpent_decrypt } } } }; @@ -154,10 +135,10 @@ static const struct common_glue_ctx serpent_dec_cbc = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_cbc_dec_8way_avx) } + .fn_u = { .cbc = serpent_cbc_dec_8way_avx } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__serpent_decrypt) } + .fn_u = { .cbc = __serpent_decrypt_cbc } } } }; @@ -167,10 +148,10 @@ static const struct common_glue_ctx serpent_dec_xts = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec_8way_avx) } + .fn_u = { .xts = serpent_xts_dec_8way_avx } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(serpent_xts_dec) } + .fn_u = { .xts = serpent_xts_dec } } } }; @@ -186,8 +167,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__serpent_encrypt), - req); + return glue_cbc_encrypt_req_128bit(__serpent_encrypt, req); } static int cbc_decrypt(struct skcipher_request *req) @@ -206,8 +186,8 @@ static int xts_encrypt(struct skcipher_request *req) struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm); return glue_xts_req_128bit(&serpent_enc_xts, req, - XTS_TWEAK_CAST(__serpent_encrypt), - &ctx->tweak_ctx, &ctx->crypt_ctx, false); + __serpent_encrypt, &ctx->tweak_ctx, + &ctx->crypt_ctx, false); } static int xts_decrypt(struct skcipher_request *req) @@ -216,8 +196,8 @@ static int xts_decrypt(struct skcipher_request *req) struct serpent_xts_ctx *ctx = crypto_skcipher_ctx(tfm); return glue_xts_req_128bit(&serpent_dec_xts, req, - XTS_TWEAK_CAST(__serpent_encrypt), - &ctx->tweak_ctx, &ctx->crypt_ctx, true); + __serpent_encrypt, &ctx->tweak_ctx, + &ctx->crypt_ctx, true); } static struct skcipher_alg serpent_algs[] = { diff --git a/arch/x86/crypto/serpent_sse2_glue.c 
b/arch/x86/crypto/serpent_sse2_glue.c index 5fdf1931d069..1d4ba7359e8e 100644 --- a/arch/x86/crypto/serpent_sse2_glue.c +++ b/arch/x86/crypto/serpent_sse2_glue.c @@ -25,6 +25,12 @@ #include #include +CRYPTO_FUNC(__serpent_encrypt); +CRYPTO_FUNC(__serpent_decrypt); +CRYPTO_FUNC_WRAP_CBC(__serpent_decrypt); +CRYPTO_FUNC(serpent_enc_blk_xway); +CRYPTO_FUNC(serpent_dec_blk_xway); + static int serpent_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { @@ -79,10 +85,10 @@ static const struct common_glue_ctx serpent_enc = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_enc_blk_xway) } + .fn_u = { .ecb = serpent_enc_blk_xway } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_encrypt) } + .fn_u = { .ecb = __serpent_encrypt } } } }; @@ -92,10 +98,10 @@ static const struct common_glue_ctx serpent_ctr = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_crypt_ctr_xway) } + .fn_u = { .ctr = serpent_crypt_ctr_xway } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(serpent_crypt_ctr) } + .fn_u = { .ctr = serpent_crypt_ctr } } } }; @@ -105,10 +111,10 @@ static const struct common_glue_ctx serpent_dec = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(serpent_dec_blk_xway) } + .fn_u = { .ecb = serpent_dec_blk_xway } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__serpent_decrypt) } + .fn_u = { .ecb = __serpent_decrypt } } } }; @@ -118,10 +124,10 @@ static const struct common_glue_ctx serpent_dec_cbc = { .funcs = { { .num_blocks = SERPENT_PARALLEL_BLOCKS, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(serpent_decrypt_cbc_xway) } + .fn_u = { .cbc = serpent_decrypt_cbc_xway } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__serpent_decrypt) } + .fn_u = { .cbc = __serpent_decrypt_cbc } } } }; @@ -137,7 +143,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__serpent_encrypt), + return glue_cbc_encrypt_req_128bit(__serpent_encrypt, req); } diff --git a/arch/x86/include/asm/crypto/serpent-avx.h b/arch/x86/include/asm/crypto/serpent-avx.h index db7c9cc32234..7dd8ab476295 100644 --- a/arch/x86/include/asm/crypto/serpent-avx.h +++ b/arch/x86/include/asm/crypto/serpent-avx.h @@ -15,20 +15,15 @@ struct serpent_xts_ctx { struct serpent_ctx crypt_ctx; }; -asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); - -asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); - -asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); -asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); +CRYPTO_FUNC(__serpent_encrypt); +CRYPTO_FUNC(__serpent_decrypt); +CRYPTO_FUNC_WRAP_CBC(__serpent_decrypt); +CRYPTO_FUNC(serpent_ecb_enc_8way_avx); +CRYPTO_FUNC(serpent_ecb_dec_8way_avx); +CRYPTO_FUNC_CBC(serpent_cbc_dec_8way_avx); +CRYPTO_FUNC_CTR(serpent_ctr_8way_avx); +CRYPTO_FUNC_XTS(serpent_xts_enc_8way_avx); +CRYPTO_FUNC_XTS(serpent_xts_dec_8way_avx); extern void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv); diff --git 
a/arch/x86/include/asm/crypto/serpent-sse2.h b/arch/x86/include/asm/crypto/serpent-sse2.h index 1a345e8a7496..491a5a7d4e15 100644 --- a/arch/x86/include/asm/crypto/serpent-sse2.h +++ b/arch/x86/include/asm/crypto/serpent-sse2.h @@ -41,8 +41,7 @@ asmlinkage void __serpent_enc_blk_8way(struct serpent_ctx *ctx, u8 *dst, asmlinkage void serpent_dec_blk_8way(struct serpent_ctx *ctx, u8 *dst, const u8 *src); -static inline void serpent_enc_blk_xway(struct serpent_ctx *ctx, u8 *dst, - const u8 *src) +static inline void serpent_enc_blk_xway(void *ctx, u8 *dst, const u8 *src) { __serpent_enc_blk_8way(ctx, dst, src, false); } @@ -53,8 +52,7 @@ static inline void serpent_enc_blk_xway_xor(struct serpent_ctx *ctx, u8 *dst, __serpent_enc_blk_8way(ctx, dst, src, true); } -static inline void serpent_dec_blk_xway(struct serpent_ctx *ctx, u8 *dst, - const u8 *src) +static inline void serpent_dec_blk_xway(void *ctx, u8 *dst, const u8 *src) { serpent_dec_blk_8way(ctx, dst, src); } diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c index 56fa665a4f01..6309fdc77466 100644 --- a/crypto/serpent_generic.c +++ b/crypto/serpent_generic.c @@ -449,8 +449,9 @@ int serpent_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) } EXPORT_SYMBOL_GPL(serpent_setkey); -void __serpent_encrypt(struct serpent_ctx *ctx, u8 *dst, const u8 *src) +void __serpent_encrypt(void *c, u8 *dst, const u8 *src) { + struct serpent_ctx *ctx = c; const u32 *k = ctx->expkey; const __le32 *s = (const __le32 *)src; __le32 *d = (__le32 *)dst; @@ -514,8 +515,9 @@ static void serpent_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) __serpent_encrypt(ctx, dst, src); } -void __serpent_decrypt(struct serpent_ctx *ctx, u8 *dst, const u8 *src) +void __serpent_decrypt(void *c, u8 *dst, const u8 *src) { + struct serpent_ctx *ctx = c; const u32 *k = ctx->expkey; const __le32 *s = (const __le32 *)src; __le32 *d = (__le32 *)dst; diff --git a/include/crypto/serpent.h b/include/crypto/serpent.h index 7dd780c5d058..986659db5939 100644 --- a/include/crypto/serpent.h +++ b/include/crypto/serpent.h @@ -22,7 +22,7 @@ int __serpent_setkey(struct serpent_ctx *ctx, const u8 *key, unsigned int keylen); int serpent_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen); -void __serpent_encrypt(struct serpent_ctx *ctx, u8 *dst, const u8 *src); -void __serpent_decrypt(struct serpent_ctx *ctx, u8 *dst, const u8 *src); +void __serpent_encrypt(void *ctx, u8 *dst, const u8 *src); +void __serpent_decrypt(void *ctx, u8 *dst, const u8 *src); #endif From patchwork Mon Nov 11 21:45:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237907 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B57831747 for ; Mon, 11 Nov 2019 21:46:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7E722206BB for ; Mon, 11 Nov 2019 21:46:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="ZCU2lQim" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727559AbfKKVqp (ORCPT ); Mon, 11 Nov 2019 16:46:45 -0500 Received: from mail-pl1-f193.google.com ([209.85.214.193]:33374 "EHLO mail-pl1-f193.google.com" 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, João Moreira, Eric Biggers, Sami Tolvanen, "David S. Miller", Ard Biesheuvel, Stephan Mueller, x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v4 3/8] crypto: x86/camellia: Use new glue function macros
Date: Mon, 11 Nov 2019 13:45:47 -0800
Message-Id: <20191111214552.36717-4-keescook@chromium.org>
In-Reply-To: <20191111214552.36717-1-keescook@chromium.org>
References: <20191111214552.36717-1-keescook@chromium.org>

Convert to function declaration macros from function prototype casts to
avoid triggering Control-Flow Integrity checks during indirect function
calls.
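The class of failure these conversions avoid can be reproduced outside the
kernel. The sketch below is purely illustrative and none of its names come
from the patch: it mimics the old pattern of casting a cipher routine that
takes a concrete context type into a glue-table pointer that takes void *,
which Clang's indirect-call CFI rejects when built with roughly
"clang -flto -fvisibility=hidden -fsanitize=cfi-icall":

#include <stdio.h>

struct cipher_ctx { unsigned char key; };

/* Stand-in for an asm routine declared with a concrete context type,
 * e.g. the old camellia_ecb_enc_16way(struct camellia_ctx *ctx, ...). */
static void enc_blk(struct cipher_ctx *ctx, unsigned char *dst,
		    const unsigned char *src)
{
	dst[0] = src[0] ^ ctx->key;
}

/* Stand-in for common_glue_func_t: the glue tables store pointers that
 * take void *ctx. */
typedef void (*glue_fn)(void *ctx, unsigned char *dst,
			const unsigned char *src);

int main(void)
{
	struct cipher_ctx c = { .key = 0x42 };
	unsigned char in = 1, out = 0;

	/* Old style: force the mismatched prototype into the table slot.
	 * The call works at the ABI level, but the callee's CFI type hash
	 * does not match the call site's, so cfi-icall aborts here. */
	glue_fn fn = (glue_fn)enc_blk;
	fn(&c, &out, &in);

	/* New style (what this series does): declare the routine with the
	 * void *ctx prototype in the first place, so there is no cast and
	 * no mismatch. */
	printf("%u\n", out);
	return 0;
}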
Co-developed-by: João Moreira Signed-off-by: Kees Cook --- arch/x86/crypto/camellia_aesni_avx2_glue.c | 73 +++++++++------------- arch/x86/crypto/camellia_aesni_avx_glue.c | 63 +++++++------------ arch/x86/crypto/camellia_glue.c | 29 +++------ arch/x86/include/asm/crypto/camellia.h | 58 ++++------------- 4 files changed, 74 insertions(+), 149 deletions(-) diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c index a4f00128ea55..e32b4ded3b4e 100644 --- a/arch/x86/crypto/camellia_aesni_avx2_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c @@ -19,20 +19,12 @@ #define CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS 32 /* 32-way AVX2/AES-NI parallel cipher functions */ -asmlinkage void camellia_ecb_enc_32way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void camellia_ecb_dec_32way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); - -asmlinkage void camellia_cbc_dec_32way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void camellia_ctr_32way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); - -asmlinkage void camellia_xts_enc_32way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); -asmlinkage void camellia_xts_dec_32way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); +CRYPTO_FUNC(camellia_ecb_enc_32way); +CRYPTO_FUNC(camellia_ecb_dec_32way); +CRYPTO_FUNC_CBC(camellia_cbc_dec_32way); +CRYPTO_FUNC_CTR(camellia_ctr_32way); +CRYPTO_FUNC_XTS(camellia_xts_enc_32way); +CRYPTO_FUNC_XTS(camellia_xts_dec_32way); static const struct common_glue_ctx camellia_enc = { .num_funcs = 4, @@ -40,16 +32,16 @@ static const struct common_glue_ctx camellia_enc = { .funcs = { { .num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_enc_32way) } + .fn_u = { .ecb = camellia_ecb_enc_32way } }, { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_enc_16way) } + .fn_u = { .ecb = camellia_ecb_enc_16way } }, { .num_blocks = 2, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk_2way) } + .fn_u = { .ecb = camellia_enc_blk_2way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) } + .fn_u = { .ecb = camellia_enc_blk } } } }; @@ -59,16 +51,16 @@ static const struct common_glue_ctx camellia_ctr = { .funcs = { { .num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_ctr_32way) } + .fn_u = { .ctr = camellia_ctr_32way } }, { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_ctr_16way) } + .fn_u = { .ctr = camellia_ctr_16way } }, { .num_blocks = 2, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr_2way) } + .fn_u = { .ctr = camellia_crypt_ctr_2way } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr) } + .fn_u = { .ctr = camellia_crypt_ctr } } } }; @@ -78,13 +70,13 @@ static const struct common_glue_ctx camellia_enc_xts = { .funcs = { { .num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc_32way) } + .fn_u = { .xts = camellia_xts_enc_32way } }, { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc_16way) } + .fn_u = { .xts = camellia_xts_enc_16way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc) } + .fn_u = { .xts = camellia_xts_enc } } } }; @@ -94,16 +86,16 @@ static const struct common_glue_ctx camellia_dec = { .funcs = { { .num_blocks = 
CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_dec_32way) } + .fn_u = { .ecb = camellia_ecb_dec_32way } }, { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_dec_16way) } + .fn_u = { .ecb = camellia_ecb_dec_16way } }, { .num_blocks = 2, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk_2way) } + .fn_u = { .ecb = camellia_dec_blk_2way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk) } + .fn_u = { .ecb = camellia_dec_blk } } } }; @@ -113,16 +105,16 @@ static const struct common_glue_ctx camellia_dec_cbc = { .funcs = { { .num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_cbc_dec_32way) } + .fn_u = { .cbc = camellia_cbc_dec_32way } }, { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_cbc_dec_16way) } + .fn_u = { .cbc = camellia_cbc_dec_16way } }, { .num_blocks = 2, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_decrypt_cbc_2way) } + .fn_u = { .cbc = camellia_decrypt_cbc_2way } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_dec_blk) } + .fn_u = { .cbc = camellia_dec_blk_cbc } } } }; @@ -132,13 +124,13 @@ static const struct common_glue_ctx camellia_dec_xts = { .funcs = { { .num_blocks = CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec_32way) } + .fn_u = { .xts = camellia_xts_dec_32way } }, { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec_16way) } + .fn_u = { .xts = camellia_xts_dec_16way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec) } + .fn_u = { .xts = camellia_xts_dec } } } }; @@ -161,8 +153,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(camellia_enc_blk), - req); + return glue_cbc_encrypt_req_128bit(camellia_enc_blk, req); } static int cbc_decrypt(struct skcipher_request *req) @@ -180,8 +171,7 @@ static int xts_encrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&camellia_enc_xts, req, - XTS_TWEAK_CAST(camellia_enc_blk), + return glue_xts_req_128bit(&camellia_enc_xts, req, camellia_enc_blk, &ctx->tweak_ctx, &ctx->crypt_ctx, false); } @@ -190,8 +180,7 @@ static int xts_decrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&camellia_dec_xts, req, - XTS_TWEAK_CAST(camellia_enc_blk), + return glue_xts_req_128bit(&camellia_dec_xts, req, camellia_enc_blk, &ctx->tweak_ctx, &ctx->crypt_ctx, true); } diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c index f28d282779b8..70445c8d8540 100644 --- a/arch/x86/crypto/camellia_aesni_avx_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx_glue.c @@ -6,7 +6,6 @@ */ #include -#include #include #include #include @@ -18,41 +17,22 @@ #define CAMELLIA_AESNI_PARALLEL_BLOCKS 16 /* 16-way parallel cipher functions (avx/aes-ni) */ -asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(camellia_ecb_enc_16way); - -asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(camellia_ecb_dec_16way); - 
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(camellia_cbc_dec_16way); - -asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); EXPORT_SYMBOL_GPL(camellia_ctr_16way); - -asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); EXPORT_SYMBOL_GPL(camellia_xts_enc_16way); - -asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); EXPORT_SYMBOL_GPL(camellia_xts_dec_16way); void camellia_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(camellia_enc_blk)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, camellia_enc_blk); } EXPORT_SYMBOL_GPL(camellia_xts_enc); void camellia_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(camellia_dec_blk)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, camellia_dec_blk); } EXPORT_SYMBOL_GPL(camellia_xts_dec); @@ -62,13 +42,13 @@ static const struct common_glue_ctx camellia_enc = { .funcs = { { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_enc_16way) } + .fn_u = { .ecb = camellia_ecb_enc_16way } }, { .num_blocks = 2, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk_2way) } + .fn_u = { .ecb = camellia_enc_blk_2way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) } + .fn_u = { .ecb = camellia_enc_blk } } } }; @@ -78,13 +58,13 @@ static const struct common_glue_ctx camellia_ctr = { .funcs = { { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_ctr_16way) } + .fn_u = { .ctr = camellia_ctr_16way } }, { .num_blocks = 2, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr_2way) } + .fn_u = { .ctr = camellia_crypt_ctr_2way } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr) } + .fn_u = { .ctr = camellia_crypt_ctr } } } }; @@ -94,10 +74,10 @@ static const struct common_glue_ctx camellia_enc_xts = { .funcs = { { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc_16way) } + .fn_u = { .xts = camellia_xts_enc_16way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_enc) } + .fn_u = { .xts = camellia_xts_enc } } } }; @@ -107,13 +87,13 @@ static const struct common_glue_ctx camellia_dec = { .funcs = { { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_ecb_dec_16way) } + .fn_u = { .ecb = camellia_ecb_dec_16way } }, { .num_blocks = 2, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk_2way) } + .fn_u = { .ecb = camellia_dec_blk_2way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk) } + .fn_u = { .ecb = camellia_dec_blk } } } }; @@ -123,13 +103,13 @@ static const struct common_glue_ctx camellia_dec_cbc = { .funcs = { { .num_blocks = CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_cbc_dec_16way) } + .fn_u = { .cbc = camellia_cbc_dec_16way } }, { .num_blocks = 2, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_decrypt_cbc_2way) } + .fn_u = { .cbc = camellia_decrypt_cbc_2way } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_dec_blk) } + .fn_u = { .cbc = camellia_dec_blk_cbc } } } }; @@ -139,10 +119,10 @@ static const struct common_glue_ctx camellia_dec_xts = { .funcs = { { .num_blocks = 
CAMELLIA_AESNI_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec_16way) } + .fn_u = { .xts = camellia_xts_dec_16way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(camellia_xts_dec) } + .fn_u = { .xts = camellia_xts_dec } } } }; @@ -165,8 +145,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(camellia_enc_blk), - req); + return glue_cbc_encrypt_req_128bit(camellia_enc_blk, req); } static int cbc_decrypt(struct skcipher_request *req) @@ -207,7 +186,7 @@ static int xts_encrypt(struct skcipher_request *req) struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm); return glue_xts_req_128bit(&camellia_enc_xts, req, - XTS_TWEAK_CAST(camellia_enc_blk), + camellia_enc_blk, &ctx->tweak_ctx, &ctx->crypt_ctx, false); } @@ -217,7 +196,7 @@ static int xts_decrypt(struct skcipher_request *req) struct camellia_xts_ctx *ctx = crypto_skcipher_ctx(tfm); return glue_xts_req_128bit(&camellia_dec_xts, req, - XTS_TWEAK_CAST(camellia_enc_blk), + camellia_enc_blk, &ctx->tweak_ctx, &ctx->crypt_ctx, true); } diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c index 7c62db56ffe1..98d459e322e6 100644 --- a/arch/x86/crypto/camellia_glue.c +++ b/arch/x86/crypto/camellia_glue.c @@ -18,19 +18,11 @@ #include /* regular block cipher functions */ -asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, bool xor); EXPORT_SYMBOL_GPL(__camellia_enc_blk); -asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(camellia_dec_blk); /* 2-way parallel cipher functions */ -asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, bool xor); EXPORT_SYMBOL_GPL(__camellia_enc_blk_2way); -asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(camellia_dec_blk_2way); static void camellia_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) @@ -1305,7 +1297,7 @@ void camellia_crypt_ctr_2way(void *ctx, u128 *dst, const u128 *src, le128 *iv) le128_to_be128(&ctrblks[1], iv); le128_inc(iv); - camellia_enc_blk_xor_2way(ctx, (u8 *)dst, (u8 *)ctrblks); + camellia_enc_blk_2way_xor(ctx, (u8 *)dst, (u8 *)ctrblks); } EXPORT_SYMBOL_GPL(camellia_crypt_ctr_2way); @@ -1315,10 +1307,10 @@ static const struct common_glue_ctx camellia_enc = { .funcs = { { .num_blocks = 2, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk_2way) } + .fn_u = { .ecb = camellia_enc_blk_2way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_enc_blk) } + .fn_u = { .ecb = camellia_enc_blk } } } }; @@ -1328,10 +1320,10 @@ static const struct common_glue_ctx camellia_ctr = { .funcs = { { .num_blocks = 2, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr_2way) } + .fn_u = { .ctr = camellia_crypt_ctr_2way } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(camellia_crypt_ctr) } + .fn_u = { .ctr = camellia_crypt_ctr } } } }; @@ -1341,10 +1333,10 @@ static const struct common_glue_ctx camellia_dec = { .funcs = { { .num_blocks = 2, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk_2way) } + .fn_u = { .ecb = camellia_dec_blk_2way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(camellia_dec_blk) } + .fn_u = { .ecb = camellia_dec_blk } } } }; @@ -1354,10 +1346,10 @@ static const struct common_glue_ctx camellia_dec_cbc = { .funcs = { { .num_blocks = 2, - .fn_u = { .cbc = 
GLUE_CBC_FUNC_CAST(camellia_decrypt_cbc_2way) } + .fn_u = { .cbc = camellia_decrypt_cbc_2way } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(camellia_dec_blk) } + .fn_u = { .cbc = camellia_dec_blk_cbc } } } }; @@ -1373,8 +1365,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(camellia_enc_blk), - req); + return glue_cbc_encrypt_req_128bit(camellia_enc_blk, req); } static int cbc_decrypt(struct skcipher_request *req) diff --git a/arch/x86/include/asm/crypto/camellia.h b/arch/x86/include/asm/crypto/camellia.h index a5d86fc0593f..8053b01f8418 100644 --- a/arch/x86/include/asm/crypto/camellia.h +++ b/arch/x86/include/asm/crypto/camellia.h @@ -2,6 +2,7 @@ #ifndef ASM_X86_CAMELLIA_H #define ASM_X86_CAMELLIA_H +#include #include #include #include @@ -32,56 +33,21 @@ extern int xts_camellia_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen); /* regular block cipher functions */ -asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, bool xor); -asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); +CRYPTO_FUNC_XOR(camellia_enc_blk); +CRYPTO_FUNC(camellia_dec_blk); +CRYPTO_FUNC_WRAP_CBC(camellia_dec_blk); /* 2-way parallel cipher functions */ -asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, bool xor); -asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); +CRYPTO_FUNC_XOR(camellia_enc_blk_2way); +CRYPTO_FUNC(camellia_dec_blk_2way); /* 16-way parallel cipher functions (avx/aes-ni) */ -asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); - -asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); - -asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); -asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); - -static inline void camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst, - const u8 *src) -{ - __camellia_enc_blk(ctx, dst, src, false); -} - -static inline void camellia_enc_blk_xor(struct camellia_ctx *ctx, u8 *dst, - const u8 *src) -{ - __camellia_enc_blk(ctx, dst, src, true); -} - -static inline void camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src) -{ - __camellia_enc_blk_2way(ctx, dst, src, false); -} - -static inline void camellia_enc_blk_xor_2way(struct camellia_ctx *ctx, u8 *dst, - const u8 *src) -{ - __camellia_enc_blk_2way(ctx, dst, src, true); -} +CRYPTO_FUNC(camellia_ecb_enc_16way); +CRYPTO_FUNC(camellia_ecb_dec_16way); +CRYPTO_FUNC_CBC(camellia_cbc_dec_16way); +CRYPTO_FUNC_CTR(camellia_ctr_16way); +CRYPTO_FUNC_XTS(camellia_xts_enc_16way); +CRYPTO_FUNC_XTS(camellia_xts_dec_16way); /* glue helpers */ extern void camellia_decrypt_cbc_2way(void *ctx, u128 *dst, const u128 *src); From patchwork Mon Nov 11 21:45:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237911 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, João Moreira, Eric Biggers, Sami Tolvanen, "David S. Miller", Ard Biesheuvel, Stephan Mueller, x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v4 4/8] crypto: x86/twofish: Use new glue function macros
Date: Mon, 11 Nov 2019 13:45:48 -0800
Message-Id: <20191111214552.36717-5-keescook@chromium.org>
In-Reply-To: <20191111214552.36717-1-keescook@chromium.org>
References: <20191111214552.36717-1-keescook@chromium.org>

Convert to function declaration macros from function prototype casts to
avoid triggering Control-Flow Integrity checks during indirect function
calls.
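Twofish, like camellia, has routines that take a trailing bool selector, and
the twofish header in this patch declares them with patch 1/8's
CRYPTO_FUNC_XOR macro. A rough sketch of what CRYPTO_FUNC_XOR(twofish_enc_blk_3way)
expands to, simplified in that the macro's asmlinkage annotation on the inline
wrappers is omitted here:

/* Prototype for the underlying asm routine, which keeps its selector: */
asmlinkage void __twofish_enc_blk_3way(void *ctx, u8 *dst, const u8 *src,
				       bool y);

/* Generated inline aliases with the common glue prototype, replacing the
 * hand-written wrappers removed from the twofish glue files below: */
static inline void twofish_enc_blk_3way(void *ctx, u8 *dst, const u8 *src)
{
	__twofish_enc_blk_3way(ctx, dst, src, false);
}

static inline void twofish_enc_blk_3way_xor(void *ctx, u8 *dst, const u8 *src)
{
	__twofish_enc_blk_3way(ctx, dst, src, true);
}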
Co-developed-by: João Moreira Signed-off-by: Kees Cook --- arch/x86/crypto/twofish_avx_glue.c | 74 ++++++++++----------------- arch/x86/crypto/twofish_glue.c | 5 +- arch/x86/crypto/twofish_glue_3way.c | 25 ++++----- arch/x86/include/asm/crypto/twofish.h | 17 +++--- 4 files changed, 44 insertions(+), 77 deletions(-) diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c index d561c821788b..340b2676798d 100644 --- a/arch/x86/crypto/twofish_avx_glue.c +++ b/arch/x86/crypto/twofish_avx_glue.c @@ -16,26 +16,17 @@ #include #include #include -#include #include #define TWOFISH_PARALLEL_BLOCKS 8 /* 8-way parallel cipher functions */ -asmlinkage void twofish_ecb_enc_8way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void twofish_ecb_dec_8way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); - -asmlinkage void twofish_cbc_dec_8way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void twofish_ctr_8way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); - -asmlinkage void twofish_xts_enc_8way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); -asmlinkage void twofish_xts_dec_8way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); +CRYPTO_FUNC(twofish_ecb_enc_8way); +CRYPTO_FUNC(twofish_ecb_dec_8way); +CRYPTO_FUNC_CBC(twofish_cbc_dec_8way); +CRYPTO_FUNC_CTR(twofish_ctr_8way); +CRYPTO_FUNC_XTS(twofish_xts_enc_8way); +CRYPTO_FUNC_XTS(twofish_xts_dec_8way); static int twofish_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) @@ -43,22 +34,14 @@ static int twofish_setkey_skcipher(struct crypto_skcipher *tfm, return twofish_setkey(&tfm->base, key, keylen); } -static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src) -{ - __twofish_enc_blk_3way(ctx, dst, src, false); -} - static void twofish_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(twofish_enc_blk)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, twofish_enc_blk); } static void twofish_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(twofish_dec_blk)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, twofish_dec_blk); } struct twofish_xts_ctx { @@ -93,13 +76,13 @@ static const struct common_glue_ctx twofish_enc = { .funcs = { { .num_blocks = TWOFISH_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_ecb_enc_8way) } + .fn_u = { .ecb = twofish_ecb_enc_8way } }, { .num_blocks = 3, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_3way) } + .fn_u = { .ecb = twofish_enc_blk_3way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk) } + .fn_u = { .ecb = twofish_enc_blk } } } }; @@ -109,13 +92,13 @@ static const struct common_glue_ctx twofish_ctr = { .funcs = { { .num_blocks = TWOFISH_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(twofish_ctr_8way) } + .fn_u = { .ctr = twofish_ctr_8way } }, { .num_blocks = 3, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(twofish_enc_blk_ctr_3way) } + .fn_u = { .ctr = twofish_enc_blk_ctr_3way } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(twofish_enc_blk_ctr) } + .fn_u = { .ctr = twofish_enc_blk_ctr } } } }; @@ -125,10 +108,10 @@ static const struct common_glue_ctx twofish_enc_xts = { .funcs = { { .num_blocks = TWOFISH_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_enc_8way) } + .fn_u = { .xts = twofish_xts_enc_8way } }, { .num_blocks = 1, - .fn_u = { .xts 
= GLUE_XTS_FUNC_CAST(twofish_xts_enc) } + .fn_u = { .xts = twofish_xts_enc } } } }; @@ -138,13 +121,13 @@ static const struct common_glue_ctx twofish_dec = { .funcs = { { .num_blocks = TWOFISH_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_ecb_dec_8way) } + .fn_u = { .ecb = twofish_ecb_dec_8way } }, { .num_blocks = 3, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk_3way) } + .fn_u = { .ecb = twofish_dec_blk_3way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) } + .fn_u = { .ecb = twofish_dec_blk } } } }; @@ -154,13 +137,13 @@ static const struct common_glue_ctx twofish_dec_cbc = { .funcs = { { .num_blocks = TWOFISH_PARALLEL_BLOCKS, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_cbc_dec_8way) } + .fn_u = { .cbc = twofish_cbc_dec_8way } }, { .num_blocks = 3, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk_cbc_3way) } + .fn_u = { .cbc = twofish_dec_blk_cbc_3way } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) } + .fn_u = { .cbc = twofish_dec_blk_cbc } } } }; @@ -170,10 +153,10 @@ static const struct common_glue_ctx twofish_dec_xts = { .funcs = { { .num_blocks = TWOFISH_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_dec_8way) } + .fn_u = { .xts = twofish_xts_dec_8way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(twofish_xts_dec) } + .fn_u = { .xts = twofish_xts_dec } } } }; @@ -189,8 +172,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(twofish_enc_blk), - req); + return glue_cbc_encrypt_req_128bit(twofish_enc_blk, req); } static int cbc_decrypt(struct skcipher_request *req) @@ -208,8 +190,7 @@ static int xts_encrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct twofish_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&twofish_enc_xts, req, - XTS_TWEAK_CAST(twofish_enc_blk), + return glue_xts_req_128bit(&twofish_enc_xts, req, twofish_enc_blk, &ctx->tweak_ctx, &ctx->crypt_ctx, false); } @@ -218,8 +199,7 @@ static int xts_decrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct twofish_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&twofish_dec_xts, req, - XTS_TWEAK_CAST(twofish_enc_blk), + return glue_xts_req_128bit(&twofish_dec_xts, req, twofish_enc_blk, &ctx->tweak_ctx, &ctx->crypt_ctx, true); } diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c index 77e06c2da83d..4b47ad326bb6 100644 --- a/arch/x86/crypto/twofish_glue.c +++ b/arch/x86/crypto/twofish_glue.c @@ -43,12 +43,9 @@ #include #include #include +#include -asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(twofish_enc_blk); -asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); EXPORT_SYMBOL_GPL(twofish_dec_blk); static void twofish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c index 1dc9e29f221e..dd0b563483ff 100644 --- a/arch/x86/crypto/twofish_glue_3way.c +++ b/arch/x86/crypto/twofish_glue_3way.c @@ -25,12 +25,6 @@ static int twofish_setkey_skcipher(struct crypto_skcipher *tfm, return twofish_setkey(&tfm->base, key, keylen); } -static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src) -{ - __twofish_enc_blk_3way(ctx, dst, src, false); -} 
- static inline void twofish_enc_blk_xor_3way(struct twofish_ctx *ctx, u8 *dst, const u8 *src) { @@ -94,10 +88,10 @@ static const struct common_glue_ctx twofish_enc = { .funcs = { { .num_blocks = 3, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_3way) } + .fn_u = { .ecb = twofish_enc_blk_3way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk) } + .fn_u = { .ecb = twofish_enc_blk } } } }; @@ -107,10 +101,10 @@ static const struct common_glue_ctx twofish_ctr = { .funcs = { { .num_blocks = 3, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_ctr_3way) } + .fn_u = { .ctr = twofish_enc_blk_ctr_3way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_enc_blk_ctr) } + .fn_u = { .ctr = twofish_enc_blk_ctr } } } }; @@ -120,10 +114,10 @@ static const struct common_glue_ctx twofish_dec = { .funcs = { { .num_blocks = 3, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk_3way) } + .fn_u = { .ecb = twofish_dec_blk_3way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(twofish_dec_blk) } + .fn_u = { .ecb = twofish_dec_blk } } } }; @@ -133,10 +127,10 @@ static const struct common_glue_ctx twofish_dec_cbc = { .funcs = { { .num_blocks = 3, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk_cbc_3way) } + .fn_u = { .cbc = twofish_dec_blk_cbc_3way } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(twofish_dec_blk) } + .fn_u = { .cbc = twofish_dec_blk_cbc } } } }; @@ -152,8 +146,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(twofish_enc_blk), - req); + return glue_cbc_encrypt_req_128bit(twofish_enc_blk, req); } static int cbc_decrypt(struct skcipher_request *req) diff --git a/arch/x86/include/asm/crypto/twofish.h b/arch/x86/include/asm/crypto/twofish.h index f618bf272b90..ad54b456577c 100644 --- a/arch/x86/include/asm/crypto/twofish.h +++ b/arch/x86/include/asm/crypto/twofish.h @@ -5,23 +5,20 @@ #include #include #include +#include /* regular block cipher functions from twofish_x86_64 module */ -asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); +CRYPTO_FUNC(twofish_enc_blk); +CRYPTO_FUNC(twofish_dec_blk); +CRYPTO_FUNC_WRAP_CBC(twofish_dec_blk); /* 3-way parallel cipher functions */ -asmlinkage void __twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src, bool xor); -asmlinkage void twofish_dec_blk_3way(struct twofish_ctx *ctx, u8 *dst, - const u8 *src); +CRYPTO_FUNC_XOR(twofish_enc_blk_3way); +CRYPTO_FUNC(twofish_dec_blk_3way); /* helpers from twofish_x86_64-3way module */ extern void twofish_dec_blk_cbc_3way(void *ctx, u128 *dst, const u128 *src); -extern void twofish_enc_blk_ctr(void *ctx, u128 *dst, const u128 *src, - le128 *iv); +CRYPTO_FUNC_CTR(twofish_enc_blk_ctr); extern void twofish_enc_blk_ctr_3way(void *ctx, u128 *dst, const u128 *src, le128 *iv); From patchwork Mon Nov 11 21:45:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237903 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5688216B1 for ; Mon, 11 Nov 2019 21:46:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) 
with ESMTP id 295C3206BB for ; Mon, 11 Nov 2019 21:46:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="ZP43Y1V6" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727121AbfKKVql (ORCPT ); Mon, 11 Nov 2019 16:46:41 -0500 Received: from mail-pf1-f194.google.com ([209.85.210.194]:34353 "EHLO mail-pf1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727618AbfKKVqG (ORCPT ); Mon, 11 Nov 2019 16:46:06 -0500 Received: by mail-pf1-f194.google.com with SMTP id n13so11657749pff.1 for ; Mon, 11 Nov 2019 13:46:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZcOJA7rw+bZqOv1GdLBsInsGQzz6225k9cIjCh5MhhM=; b=ZP43Y1V6ff1HQ8Xg1Z0ybmTGnTgRWCPigzsp6aXFM0pQLo1PpYLDrNqdglWa7w2kBE ofnkcT7WE6/XgspKsZzZ5hZBEtphVnmkH3/mJOvpJtyAg1t+8XDm9JPGKrQruBHSlYFm lLKvS4N7e27k8xvRFE8aBMz/XQgulBfA9tQ5s= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZcOJA7rw+bZqOv1GdLBsInsGQzz6225k9cIjCh5MhhM=; b=s55o5sYYNhTK/yeUwOQ6aieC9PFjCQf8j3Y8lKoKCyk3BPCIJLpiekkBsh2+uXPn39 EbE5QQ0Ns15izSKQQW2InThRyvvrgxqEQBYaufevHXoILsLnM/fQE8BB9jp4Q0INmob+ HElKyb6Rebd+P+9R/RRaEUPQuR/dw6L0jJUUBPlztFbAXffMGlE1YP+v/Dh/979CnmRm lt0hivmsUEZjCL0bEvvY9VWEyGuGY0KO1S8jnYsfwGknYE32DXvFPGVDA1FgIfbAXm8v 8p8GO6H42wFMa946FJyMDzPsy5c1YaO/Vj8GSucdI2d7wYCjRLezBKxlt+Mr2h8macdm Cmqw== X-Gm-Message-State: APjAAAWseyYFxFHtbb5e9ZUY6x6ZVbRBCaa8zxxZWuCNCQI465e7U8oL YqWaH6fBnVfJqlavbr+ZiCx3Pg== X-Google-Smtp-Source: APXvYqyffBGHQKnSMZzfXoWRTqJhRDT1ivChICdHWoCl+K+VCWP8Doc5SEkO+zmrUTOghOMCkdnNiQ== X-Received: by 2002:a05:6a00:e:: with SMTP id h14mr30956789pfk.99.1573508765844; Mon, 11 Nov 2019 13:46:05 -0800 (PST) Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163]) by smtp.gmail.com with ESMTPSA id v10sm15268006pfg.11.2019.11.11.13.46.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Nov 2019 13:46:05 -0800 (PST) From: Kees Cook To: Herbert Xu Cc: Kees Cook , =?utf-8?q?Jo=C3=A3o_Moreira?= , Eric Biggers , Sami Tolvanen , "David S. Miller" , Ard Biesheuvel , Stephan Mueller , x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com Subject: [PATCH v4 5/8] crypto: x86/cast6: Use new glue function macros Date: Mon, 11 Nov 2019 13:45:49 -0800 Message-Id: <20191111214552.36717-6-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191111214552.36717-1-keescook@chromium.org> References: <20191111214552.36717-1-keescook@chromium.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Convert to function declaration macros from function prototype casts to avoid triggering Control-Flow Integrity checks during indirect function calls. 
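As a minimal userspace sketch of the idea (not kernel code; all names here — toy_ctx, toy_enc_blk, glue_func_t — are invented for illustration): a CFI-instrumented build checks at each indirect call that the callee's real prototype matches the function-pointer type, so a cast such as the old GLUE_FUNC_CAST() makes every glue dispatch a mismatch, while declaring the routine with the glue-compatible prototype removes the cast entirely.

/* Minimal sketch, invented names; compile with any C compiler. */
#include <stdio.h>

struct toy_ctx { unsigned char key; };

/* The glue layer dispatches through one generic prototype. */
typedef void (*glue_func_t)(void *ctx, unsigned char *dst, const unsigned char *src);

/* Old style: the cipher uses its own context type, so storing it in a glue
 * table needs a prototype cast like (glue_func_t)toy_enc_blk_old; under CFI
 * the indirect call through that cast pointer then fails the check. */
static void toy_enc_blk_old(struct toy_ctx *ctx, unsigned char *dst,
			    const unsigned char *src)
{
	dst[0] = src[0] ^ ctx->key;
}

/* New style: declare the function with the glue-compatible prototype and
 * recover the real context type inside, so no cast is needed anywhere. */
static void toy_enc_blk(void *ctx, unsigned char *dst, const unsigned char *src)
{
	struct toy_ctx *c = ctx;

	dst[0] = src[0] ^ c->key;
}

int main(void)
{
	struct toy_ctx c = { .key = 0x5a };
	unsigned char in = 0x01, out = 0;
	glue_func_t fn = toy_enc_blk;	/* assigned and called without any cast */

	fn(&c, &out, &in);
	toy_enc_blk_old(&c, &out, &in);	/* direct calls were never the problem */
	printf("0x%02x\n", out);
	return 0;
}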
Co-developed-by: João Moreira Signed-off-by: Kees Cook --- arch/x86/crypto/cast6_avx_glue.c | 62 ++++++++++++++------------------ crypto/cast6_generic.c | 6 ++-- include/crypto/cast6.h | 4 +-- 3 files changed, 32 insertions(+), 40 deletions(-) diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c index a8a38fffb4a9..841724f335a5 100644 --- a/arch/x86/crypto/cast6_avx_glue.c +++ b/arch/x86/crypto/cast6_avx_glue.c @@ -20,20 +20,15 @@ #define CAST6_PARALLEL_BLOCKS 8 -asmlinkage void cast6_ecb_enc_8way(struct cast6_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void cast6_ecb_dec_8way(struct cast6_ctx *ctx, u8 *dst, - const u8 *src); - -asmlinkage void cast6_cbc_dec_8way(struct cast6_ctx *ctx, u8 *dst, - const u8 *src); -asmlinkage void cast6_ctr_8way(struct cast6_ctx *ctx, u8 *dst, const u8 *src, - le128 *iv); - -asmlinkage void cast6_xts_enc_8way(struct cast6_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); -asmlinkage void cast6_xts_dec_8way(struct cast6_ctx *ctx, u8 *dst, - const u8 *src, le128 *iv); +CRYPTO_FUNC(__cast6_encrypt); +CRYPTO_FUNC(__cast6_decrypt); +CRYPTO_FUNC(cast6_ecb_enc_8way); +CRYPTO_FUNC(cast6_ecb_dec_8way); +CRYPTO_FUNC_CBC(cast6_cbc_dec_8way); +CRYPTO_FUNC_WRAP_CBC(__cast6_decrypt); +CRYPTO_FUNC_CTR(cast6_ctr_8way); +CRYPTO_FUNC_XTS(cast6_xts_enc_8way); +CRYPTO_FUNC_XTS(cast6_xts_dec_8way); static int cast6_setkey_skcipher(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) @@ -43,14 +38,12 @@ static int cast6_setkey_skcipher(struct crypto_skcipher *tfm, static void cast6_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(__cast6_encrypt)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, __cast6_encrypt); } static void cast6_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, - GLUE_FUNC_CAST(__cast6_decrypt)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, __cast6_decrypt); } static void cast6_crypt_ctr(void *ctx, u128 *dst, const u128 *src, le128 *iv) @@ -70,10 +63,10 @@ static const struct common_glue_ctx cast6_enc = { .funcs = { { .num_blocks = CAST6_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(cast6_ecb_enc_8way) } + .fn_u = { .ecb = cast6_ecb_enc_8way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__cast6_encrypt) } + .fn_u = { .ecb = __cast6_encrypt } } } }; @@ -83,10 +76,10 @@ static const struct common_glue_ctx cast6_ctr = { .funcs = { { .num_blocks = CAST6_PARALLEL_BLOCKS, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(cast6_ctr_8way) } + .fn_u = { .ctr = cast6_ctr_8way } }, { .num_blocks = 1, - .fn_u = { .ctr = GLUE_CTR_FUNC_CAST(cast6_crypt_ctr) } + .fn_u = { .ctr = cast6_crypt_ctr } } } }; @@ -96,10 +89,10 @@ static const struct common_glue_ctx cast6_enc_xts = { .funcs = { { .num_blocks = CAST6_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_enc_8way) } + .fn_u = { .xts = cast6_xts_enc_8way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_enc) } + .fn_u = { .xts = cast6_xts_enc } } } }; @@ -109,10 +102,10 @@ static const struct common_glue_ctx cast6_dec = { .funcs = { { .num_blocks = CAST6_PARALLEL_BLOCKS, - .fn_u = { .ecb = GLUE_FUNC_CAST(cast6_ecb_dec_8way) } + .fn_u = { .ecb = cast6_ecb_dec_8way } }, { .num_blocks = 1, - .fn_u = { .ecb = GLUE_FUNC_CAST(__cast6_decrypt) } + .fn_u = { .ecb = __cast6_decrypt } } } }; @@ -122,10 +115,10 @@ static const struct common_glue_ctx cast6_dec_cbc = { .funcs = { { .num_blocks = CAST6_PARALLEL_BLOCKS, - 
.fn_u = { .cbc = GLUE_CBC_FUNC_CAST(cast6_cbc_dec_8way) } + .fn_u = { .cbc = cast6_cbc_dec_8way } }, { .num_blocks = 1, - .fn_u = { .cbc = GLUE_CBC_FUNC_CAST(__cast6_decrypt) } + .fn_u = { .cbc = __cast6_decrypt_cbc } } } }; @@ -135,10 +128,10 @@ static const struct common_glue_ctx cast6_dec_xts = { .funcs = { { .num_blocks = CAST6_PARALLEL_BLOCKS, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_dec_8way) } + .fn_u = { .xts = cast6_xts_dec_8way } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(cast6_xts_dec) } + .fn_u = { .xts = cast6_xts_dec } } } }; @@ -154,8 +147,7 @@ static int ecb_decrypt(struct skcipher_request *req) static int cbc_encrypt(struct skcipher_request *req) { - return glue_cbc_encrypt_req_128bit(GLUE_FUNC_CAST(__cast6_encrypt), - req); + return glue_cbc_encrypt_req_128bit(__cast6_encrypt, req); } static int cbc_decrypt(struct skcipher_request *req) @@ -199,8 +191,7 @@ static int xts_encrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cast6_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&cast6_enc_xts, req, - XTS_TWEAK_CAST(__cast6_encrypt), + return glue_xts_req_128bit(&cast6_enc_xts, req, __cast6_encrypt, &ctx->tweak_ctx, &ctx->crypt_ctx, false); } @@ -209,8 +200,7 @@ static int xts_decrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cast6_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&cast6_dec_xts, req, - XTS_TWEAK_CAST(__cast6_encrypt), + return glue_xts_req_128bit(&cast6_dec_xts, req, __cast6_encrypt, &ctx->tweak_ctx, &ctx->crypt_ctx, true); } diff --git a/crypto/cast6_generic.c b/crypto/cast6_generic.c index a8248f8e2777..c51121bedf68 100644 --- a/crypto/cast6_generic.c +++ b/crypto/cast6_generic.c @@ -173,8 +173,9 @@ static inline void QBAR(u32 *block, u8 *Kr, u32 *Km) block[2] ^= F1(block[3], Kr[0], Km[0]); } -void __cast6_encrypt(struct cast6_ctx *c, u8 *outbuf, const u8 *inbuf) +void __cast6_encrypt(void *ctx, u8 *outbuf, const u8 *inbuf) { + struct cast6_ctx *c = ctx; const __be32 *src = (const __be32 *)inbuf; __be32 *dst = (__be32 *)outbuf; u32 block[4]; @@ -211,8 +212,9 @@ static void cast6_encrypt(struct crypto_tfm *tfm, u8 *outbuf, const u8 *inbuf) __cast6_encrypt(crypto_tfm_ctx(tfm), outbuf, inbuf); } -void __cast6_decrypt(struct cast6_ctx *c, u8 *outbuf, const u8 *inbuf) +void __cast6_decrypt(void *ctx, u8 *outbuf, const u8 *inbuf) { + struct cast6_ctx *c = ctx; const __be32 *src = (const __be32 *)inbuf; __be32 *dst = (__be32 *)outbuf; u32 block[4]; diff --git a/include/crypto/cast6.h b/include/crypto/cast6.h index c71f6ef47f0f..b6c3a0324959 100644 --- a/include/crypto/cast6.h +++ b/include/crypto/cast6.h @@ -19,7 +19,7 @@ int __cast6_setkey(struct cast6_ctx *ctx, const u8 *key, unsigned int keylen, u32 *flags); int cast6_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen); -void __cast6_encrypt(struct cast6_ctx *ctx, u8 *dst, const u8 *src); -void __cast6_decrypt(struct cast6_ctx *ctx, u8 *dst, const u8 *src); +void __cast6_encrypt(void *ctx, u8 *dst, const u8 *src); +void __cast6_decrypt(void *ctx, u8 *dst, const u8 *src); #endif From patchwork Mon Nov 11 21:45:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237893 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by 
pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 29CB41747 for ; Mon, 11 Nov 2019 21:46:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 085862184C for ; Mon, 11 Nov 2019 21:46:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="kmWX6Ijq" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727153AbfKKVqP (ORCPT ); Mon, 11 Nov 2019 16:46:15 -0500 Received: from mail-pl1-f196.google.com ([209.85.214.196]:35628 "EHLO mail-pl1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727668AbfKKVqI (ORCPT ); Mon, 11 Nov 2019 16:46:08 -0500 Received: by mail-pl1-f196.google.com with SMTP id s10so8379673plp.2 for ; Mon, 11 Nov 2019 13:46:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=I/3dBlJyaxPRtFTis6wLSqQyStGQBcFjrF54ZE6N9ys=; b=kmWX6IjqUtRMt+wusjNIA2uL5H2sIAh/5FOt2aG6VjnnT6ztx6kDdaxbUvAX9TMRTy 1J2LwAPQvXXt5zFk+ySCykBJoPhqC3OdlxvvrnCivG1qdIU0A5cslh1z9MZg5CbNkrFZ wgcO2qTsPS7Y+yU23hmLm27pp/92lo/gleDLU= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=I/3dBlJyaxPRtFTis6wLSqQyStGQBcFjrF54ZE6N9ys=; b=DCexYQp4t83qKmDUGmRkPWAKkjVwNxLWNxKILX7MsMCAjvGx0zBewqsQFTsWUfzQBb UGmgTck2HFXA0DNgTUJr+eR3f+unfDxRLtx2WRR5/72KMS12e0nSJkEGE9J8etvPv8yB jpPDdF0xmDu5fEZI0xFjgmEQBRhq45uE0BnLe93S60RoMiCE7Di7LM16c5I262/0t/bk 7w04Urfr18NUQ5/1h8zcal4uJ6DCSyrOQLcdu3ZxJ5kumVdP/q45CY/7EaL83eWPBh5T bM1mQEehfcUqP0M33AqMjrT3wdgnL0Ru0ZvNlZ/9bR1bXkOg/O55ypPAQ39LQ/RysRCH SfRg== X-Gm-Message-State: APjAAAVmRnnWacdE21PmdrUzsyKFy6jeMXCWzicIxY719hd8SJg7Nzqq OBi05f0tEC/cemS+agUF2TdBmg== X-Google-Smtp-Source: APXvYqztW9oWMklUPzVKJI3FAB8FKSIHkHfHiMvOVTc5Wn5vlQagD7druaWt2L6MWIHtHkHpx1wx3g== X-Received: by 2002:a17:902:8e86:: with SMTP id bg6mr28110421plb.240.1573508767350; Mon, 11 Nov 2019 13:46:07 -0800 (PST) Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163]) by smtp.gmail.com with ESMTPSA id v15sm13518755pfe.44.2019.11.11.13.46.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Nov 2019 13:46:05 -0800 (PST) From: Kees Cook To: Herbert Xu Cc: Kees Cook , =?utf-8?q?Jo=C3=A3o_Moreira?= , Eric Biggers , Sami Tolvanen , "David S. Miller" , Ard Biesheuvel , Stephan Mueller , x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com Subject: [PATCH v4 6/8] crypto: x86/aesni: Use new glue function macros Date: Mon, 11 Nov 2019 13:45:50 -0800 Message-Id: <20191111214552.36717-7-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191111214552.36717-1-keescook@chromium.org> References: <20191111214552.36717-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Convert to function declaration macros from function prototype casts to avoid triggering Control-Flow Integrity checks during indirect function calls. 
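Where the glue prototype differs in pointer type rather than only in the context argument (the CBC entries such as twofish_dec_blk_cbc and __cast6_decrypt_cbc earlier in the series), the cast is replaced by a thin inline wrapper instead of a declaration change. A minimal userspace sketch of that pattern, with invented names (u128_t, toy_dec_blk, cbc_func_t), not the kernel's macros:

/* Minimal sketch, invented names; compile with any C compiler. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint64_t a, b; } u128_t;	/* stand-in for a 128-bit block */

/* The CBC glue dispatches on whole 128-bit blocks... */
typedef void (*cbc_func_t)(void *ctx, u128_t *dst, const u128_t *src);

/* ...while the cipher primitive works on byte pointers. */
static void toy_dec_blk(void *ctx, uint8_t *dst, const uint8_t *src)
{
	(void)ctx;
	memcpy(dst, src, sizeof(u128_t));	/* identity "cipher" for the demo */
}

/* Thin wrapper instead of (cbc_func_t)toy_dec_blk: same behaviour, but the
 * indirect call now goes through a pointer whose prototype matches the
 * callee, so a CFI prototype check on the call site passes. */
static inline void toy_dec_blk_cbc(void *ctx, u128_t *dst, const u128_t *src)
{
	toy_dec_blk(ctx, (uint8_t *)dst, (const uint8_t *)src);
}

int main(void)
{
	u128_t in = { 1, 2 }, out = { 0, 0 };
	cbc_func_t fn = toy_dec_blk_cbc;	/* no cast needed */

	fn(NULL, &out, &in);
	printf("%llu %llu\n", (unsigned long long)out.a, (unsigned long long)out.b);
	return 0;
}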
Signed-off-by: Kees Cook --- arch/x86/crypto/aesni-intel_glue.c | 31 ++++++++++-------------------- 1 file changed, 10 insertions(+), 21 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 3e707e81afdb..e1072ea0a4fa 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -33,9 +33,7 @@ #include #include #include -#ifdef CONFIG_X86_64 #include -#endif #define AESNI_ALIGN 16 @@ -83,10 +81,8 @@ struct gcm_context_data { asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); -asmlinkage void aesni_enc(struct crypto_aes_ctx *ctx, u8 *out, - const u8 *in); -asmlinkage void aesni_dec(struct crypto_aes_ctx *ctx, u8 *out, - const u8 *in); +CRYPTO_FUNC(aesni_enc); +CRYPTO_FUNC(aesni_dec); asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len); asmlinkage void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out, @@ -550,19 +546,14 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key, } -static void aesni_xts_tweak(void *ctx, u8 *out, const u8 *in) -{ - aesni_enc(ctx, out, in); -} - static void aesni_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, GLUE_FUNC_CAST(aesni_enc)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, aesni_enc); } static void aesni_xts_dec(void *ctx, u128 *dst, const u128 *src, le128 *iv) { - glue_xts_crypt_128bit_one(ctx, dst, src, iv, GLUE_FUNC_CAST(aesni_dec)); + glue_xts_crypt_128bit_one(ctx, dst, src, iv, aesni_dec); } static void aesni_xts_enc8(void *ctx, u128 *dst, const u128 *src, le128 *iv) @@ -581,10 +572,10 @@ static const struct common_glue_ctx aesni_enc_xts = { .funcs = { { .num_blocks = 8, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_enc8) } + .fn_u = { .xts = aesni_xts_enc8 } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_enc) } + .fn_u = { .xts = aesni_xts_enc } } } }; @@ -594,10 +585,10 @@ static const struct common_glue_ctx aesni_dec_xts = { .funcs = { { .num_blocks = 8, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_dec8) } + .fn_u = { .xts = aesni_xts_dec8 } }, { .num_blocks = 1, - .fn_u = { .xts = GLUE_XTS_FUNC_CAST(aesni_xts_dec) } + .fn_u = { .xts = aesni_xts_dec } } } }; @@ -606,8 +597,7 @@ static int xts_encrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&aesni_enc_xts, req, - XTS_TWEAK_CAST(aesni_xts_tweak), + return glue_xts_req_128bit(&aesni_enc_xts, req, aesni_enc, aes_ctx(ctx->raw_tweak_ctx), aes_ctx(ctx->raw_crypt_ctx), false); @@ -618,8 +608,7 @@ static int xts_decrypt(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - return glue_xts_req_128bit(&aesni_dec_xts, req, - XTS_TWEAK_CAST(aesni_xts_tweak), + return glue_xts_req_128bit(&aesni_dec_xts, req, aesni_enc, aes_ctx(ctx->raw_tweak_ctx), aes_ctx(ctx->raw_crypt_ctx), true); From patchwork Mon Nov 11 21:45:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237901 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E0C461864 for ; Mon, 11 Nov 2019 
21:46:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id BE82921783 for ; Mon, 11 Nov 2019 21:46:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="X6cJO6j+" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727695AbfKKVqc (ORCPT ); Mon, 11 Nov 2019 16:46:32 -0500 Received: from mail-pl1-f181.google.com ([209.85.214.181]:32886 "EHLO mail-pl1-f181.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727638AbfKKVqH (ORCPT ); Mon, 11 Nov 2019 16:46:07 -0500 Received: by mail-pl1-f181.google.com with SMTP id ay6so8389991plb.0 for ; Mon, 11 Nov 2019 13:46:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Q2lZ7Dm1L5aRXMm9XxNsm9WxcGVGFqTOrxf2vwq2v/c=; b=X6cJO6j+LuNeFzoMBlJXSpwENsy0YOCb5jmcJTGxZvPS1RV2nWRXEkpU9UtVC0gzU3 +Skr+AL2+vJk7GunbZi7TcgodFA6q2V5iT3hVRsf4ee2IKQkEKCEGk0zUGnZWZV6us3x dsdp/HxQ5DvQXnLDmGpXnTwQRiGC4YdyBTjDE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Q2lZ7Dm1L5aRXMm9XxNsm9WxcGVGFqTOrxf2vwq2v/c=; b=POOIPjYGRwdCKc+OKHau6z7avfskONe6ZXgdAHrLasuIvtg0Td3DS4xAylOmiomDkY PFXwlqqTf//doaECPY/lw4G3GsVFD0FbypYJJJlhb5goJu27nQym5EPCDSsvSlmhZb++ 8QY7n3GY+zPjIo1um6DXza4goK33NecPR0iaR/HefTw6OLkGfg+UnxzxO331t+0mN/Vj eTrqU3NoyUr1St1UkgzY/UqaiyiiUseqWSyD2Isjcr7hJcRoMmHjihXnj3Ws5cz5ZdS0 +HviVu4h5+uqqHbhqBkl3Z//rDXwiit9x9LGWGlLsfatUQ9RGaryyYf+tEhzBrm1tr4C JVkg== X-Gm-Message-State: APjAAAUyT4B+GXs7MFp/Iuff1otm4HmZluKkLtXqao8uKUA9y/lysRol tVbgDAPGMmJ/pCO5swfwPTakKA== X-Google-Smtp-Source: APXvYqyte6oJuiojDpH+7VVh1GlXynL4x0mJaAVAGIUiJyoki2Cy/SuhjS+ya0YYXVwQoQZshdRvoA== X-Received: by 2002:a17:902:b948:: with SMTP id h8mr27221875pls.139.1573508766383; Mon, 11 Nov 2019 13:46:06 -0800 (PST) Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163]) by smtp.gmail.com with ESMTPSA id w138sm12427189pfc.68.2019.11.11.13.46.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Nov 2019 13:46:05 -0800 (PST) From: Kees Cook To: Herbert Xu Cc: Kees Cook , =?utf-8?q?Jo=C3=A3o_Moreira?= , Eric Biggers , Sami Tolvanen , "David S. Miller" , Ard Biesheuvel , Stephan Mueller , x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com Subject: [PATCH v4 7/8] crypto: x86/glue_helper: Remove function prototype cast helpers Date: Mon, 11 Nov 2019 13:45:51 -0800 Message-Id: <20191111214552.36717-8-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191111214552.36717-1-keescook@chromium.org> References: <20191111214552.36717-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Now that all users of the function prototype casting helpers have been removed, delete the unused macros. 
Signed-off-by: Kees Cook --- arch/x86/include/asm/crypto/glue_helper.h | 5 ----- include/crypto/xts.h | 2 -- 2 files changed, 7 deletions(-) diff --git a/arch/x86/include/asm/crypto/glue_helper.h b/arch/x86/include/asm/crypto/glue_helper.h index 2fa4968ab8e2..a9935bbb3eb9 100644 --- a/arch/x86/include/asm/crypto/glue_helper.h +++ b/arch/x86/include/asm/crypto/glue_helper.h @@ -18,11 +18,6 @@ typedef void (*common_glue_ctr_func_t)(void *ctx, u128 *dst, const u128 *src, typedef void (*common_glue_xts_func_t)(void *ctx, u128 *dst, const u128 *src, le128 *iv); -#define GLUE_FUNC_CAST(fn) ((common_glue_func_t)(fn)) -#define GLUE_CBC_FUNC_CAST(fn) ((common_glue_cbc_func_t)(fn)) -#define GLUE_CTR_FUNC_CAST(fn) ((common_glue_ctr_func_t)(fn)) -#define GLUE_XTS_FUNC_CAST(fn) ((common_glue_xts_func_t)(fn)) - #define CRYPTO_FUNC(func) \ asmlinkage void func(void *ctx, u8 *dst, const u8 *src) diff --git a/include/crypto/xts.h b/include/crypto/xts.h index 75fd96ff976b..15ae7fdc0478 100644 --- a/include/crypto/xts.h +++ b/include/crypto/xts.h @@ -8,8 +8,6 @@ #define XTS_BLOCK_SIZE 16 -#define XTS_TWEAK_CAST(x) ((void (*)(void *, u8*, const u8*))(x)) - static inline int xts_check_key(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) { From patchwork Mon Nov 11 21:45:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 11237889 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3F8551747 for ; Mon, 11 Nov 2019 21:46:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 0AE1521783 for ; Mon, 11 Nov 2019 21:46:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=chromium.org header.i=@chromium.org header.b="nQAAN4TM" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727726AbfKKVqM (ORCPT ); Mon, 11 Nov 2019 16:46:12 -0500 Received: from mail-pg1-f196.google.com ([209.85.215.196]:46687 "EHLO mail-pg1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727617AbfKKVqK (ORCPT ); Mon, 11 Nov 2019 16:46:10 -0500 Received: by mail-pg1-f196.google.com with SMTP id r18so10268085pgu.13 for ; Mon, 11 Nov 2019 13:46:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=2WYPO+Cs160lWLgrFwHAXtycZesNOJ8Ah35tHvgpbQI=; b=nQAAN4TMzTMOwTTpLhRLBu60PTHJUR7cQ/Y17oJwAbsKJ7d2W5XN8Fqx6V0ZnbNnwX ruKgSpRuRRxG8RBcC+AkTTxQZdZXiqr4Y/AQAWydLWw0qW0huXDKxIn8pwCRFliGuP9P GlrWZwkbma/KLtl4UwKMmnuRSf/WZps4nWD64= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=2WYPO+Cs160lWLgrFwHAXtycZesNOJ8Ah35tHvgpbQI=; b=P4DZA5NrZwf4PiHe+uoc13E4v54TXSoDwL1kmv+U7Y5pkwoC3z6iwBulXLkcbAzuRR 24lGlNEIuqQU0r2EVixpYf43HIar/3J6VG+6HlU5ajzITJzOYUU231zHni+oyGyPrc4G y8WmQU876vInJ74MXKELQJnngGzqR2SYNGzqy+67k1E6gYWkIlCdR+dvmFq2pnYdz0pG XVnHrWiaR3niwvKBNXYY1i6cqXqOfg6aPJjtUALqqlKKQsHaZCYmIDOhyvAEipuv37bZ 4J5mLQoIznipromvgR4m1YKiYckH9HZ5sYL4mWmhsWJpSSL6zVdEqlPpHi4ZVXu8oETH xntg== X-Gm-Message-State: APjAAAWKimEbaTeJJ9rCZhfhnMgCnfKkt2EClaJrPCJ/laJkrTECg2Oj /ewK5OA2lBvbVQDa2v4KALoHig== X-Google-Smtp-Source: 
APXvYqyyDwMPQYhs6q2UU3YEN3oqn4oWt9YMxPCJD6W8qjLSZJNsg+/JAgiqqjlfzDgYrzdwWlM30w== X-Received: by 2002:a17:90a:35d0:: with SMTP id r74mr1614656pjb.47.1573508767878; Mon, 11 Nov 2019 13:46:07 -0800 (PST) Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163]) by smtp.gmail.com with ESMTPSA id r20sm22895783pgo.74.2019.11.11.13.46.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Nov 2019 13:46:05 -0800 (PST) From: Kees Cook To: Herbert Xu Cc: Kees Cook , =?utf-8?q?Jo=C3=A3o_Moreira?= , Eric Biggers , Sami Tolvanen , "David S. Miller" , Ard Biesheuvel , Stephan Mueller , x86@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com Subject: [PATCH v4 8/8] crypto, x86/sha: Eliminate casts on asm implementations Date: Mon, 11 Nov 2019 13:45:52 -0800 Message-Id: <20191111214552.36717-9-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191111214552.36717-1-keescook@chromium.org> References: <20191111214552.36717-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org In order to avoid CFI function prototype mismatches, this removes the casts on assembly implementations of sha1/256/512 accelerators. The safety checks from BUILD_BUG_ON() remain. Signed-off-by: Kees Cook --- arch/x86/crypto/sha1_ssse3_glue.c | 61 ++++++++++++----------------- arch/x86/crypto/sha256_ssse3_glue.c | 31 +++++++-------- arch/x86/crypto/sha512_ssse3_glue.c | 28 ++++++------- 3 files changed, 50 insertions(+), 70 deletions(-) diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 639d4c2fd6a8..a151d899f37a 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -27,11 +27,8 @@ #include #include -typedef void (sha1_transform_fn)(u32 *digest, const char *data, - unsigned int rounds); - static int sha1_update(struct shash_desc *desc, const u8 *data, - unsigned int len, sha1_transform_fn *sha1_xform) + unsigned int len, sha1_block_fn *sha1_xform) { struct sha1_state *sctx = shash_desc_ctx(desc); @@ -39,48 +36,44 @@ static int sha1_update(struct shash_desc *desc, const u8 *data, (sctx->count % SHA1_BLOCK_SIZE) + len < SHA1_BLOCK_SIZE) return crypto_sha1_update(desc, data, len); - /* make sure casting to sha1_block_fn() is safe */ + /* make sure sha1_block_fn() use in generic routines is safe */ BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0); kernel_fpu_begin(); - sha1_base_do_update(desc, data, len, - (sha1_block_fn *)sha1_xform); + sha1_base_do_update(desc, data, len, sha1_xform); kernel_fpu_end(); return 0; } static int sha1_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out, sha1_transform_fn *sha1_xform) + unsigned int len, u8 *out, sha1_block_fn *sha1_xform) { if (!crypto_simd_usable()) return crypto_sha1_finup(desc, data, len, out); kernel_fpu_begin(); if (len) - sha1_base_do_update(desc, data, len, - (sha1_block_fn *)sha1_xform); - sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_xform); + sha1_base_do_update(desc, data, len, sha1_xform); + sha1_base_do_finalize(desc, sha1_xform); kernel_fpu_end(); return sha1_base_finish(desc, out); } -asmlinkage void sha1_transform_ssse3(u32 *digest, const char *data, - unsigned int rounds); +asmlinkage void sha1_transform_ssse3(struct sha1_state *digest, + u8 const *data, int rounds); static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha1_update(desc, 
data, len, - (sha1_transform_fn *) sha1_transform_ssse3); + return sha1_update(desc, data, len, sha1_transform_ssse3); } static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - return sha1_finup(desc, data, len, out, - (sha1_transform_fn *) sha1_transform_ssse3); + return sha1_finup(desc, data, len, out, sha1_transform_ssse3); } /* Add padding and return the message digest. */ @@ -119,21 +112,19 @@ static void unregister_sha1_ssse3(void) } #ifdef CONFIG_AS_AVX -asmlinkage void sha1_transform_avx(u32 *digest, const char *data, - unsigned int rounds); +asmlinkage void sha1_transform_avx(struct sha1_state *digest, + u8 const *data, int rounds); static int sha1_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha1_update(desc, data, len, - (sha1_transform_fn *) sha1_transform_avx); + return sha1_update(desc, data, len, sha1_transform_avx); } static int sha1_avx_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - return sha1_finup(desc, data, len, out, - (sha1_transform_fn *) sha1_transform_avx); + return sha1_finup(desc, data, len, out, sha1_transform_avx); } static int sha1_avx_final(struct shash_desc *desc, u8 *out) @@ -190,8 +181,8 @@ static inline void unregister_sha1_avx(void) { } #if defined(CONFIG_AS_AVX2) && (CONFIG_AS_AVX) #define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */ -asmlinkage void sha1_transform_avx2(u32 *digest, const char *data, - unsigned int rounds); +asmlinkage void sha1_transform_avx2(struct sha1_state *digest, + u8 const *data, int rounds); static bool avx2_usable(void) { @@ -203,8 +194,8 @@ static bool avx2_usable(void) return false; } -static void sha1_apply_transform_avx2(u32 *digest, const char *data, - unsigned int rounds) +static void sha1_apply_transform_avx2(struct sha1_state *digest, + u8 const *data, int rounds) { /* Select the optimal transform based on data block size */ if (rounds >= SHA1_AVX2_BLOCK_OPTSIZE) @@ -216,15 +207,13 @@ static void sha1_apply_transform_avx2(u32 *digest, const char *data, static int sha1_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha1_update(desc, data, len, - (sha1_transform_fn *) sha1_apply_transform_avx2); + return sha1_update(desc, data, len, sha1_apply_transform_avx2); } static int sha1_avx2_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - return sha1_finup(desc, data, len, out, - (sha1_transform_fn *) sha1_apply_transform_avx2); + return sha1_finup(desc, data, len, out, sha1_apply_transform_avx2); } static int sha1_avx2_final(struct shash_desc *desc, u8 *out) @@ -267,21 +256,19 @@ static inline void unregister_sha1_avx2(void) { } #endif #ifdef CONFIG_AS_SHA1_NI -asmlinkage void sha1_ni_transform(u32 *digest, const char *data, - unsigned int rounds); +asmlinkage void sha1_ni_transform(struct sha1_state *digest, u8 const *data, + int rounds); static int sha1_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha1_update(desc, data, len, - (sha1_transform_fn *) sha1_ni_transform); + return sha1_update(desc, data, len, sha1_ni_transform); } static int sha1_ni_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { - return sha1_finup(desc, data, len, out, - (sha1_transform_fn *) sha1_ni_transform); + return sha1_finup(desc, data, len, out, sha1_ni_transform); } static int sha1_ni_final(struct shash_desc *desc, u8 *out) diff --git a/arch/x86/crypto/sha256_ssse3_glue.c 
b/arch/x86/crypto/sha256_ssse3_glue.c index f9aff31fe59e..960f56100a6c 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -41,12 +41,11 @@ #include #include -asmlinkage void sha256_transform_ssse3(u32 *digest, const char *data, - u64 rounds); -typedef void (sha256_transform_fn)(u32 *digest, const char *data, u64 rounds); +asmlinkage void sha256_transform_ssse3(struct sha256_state *digest, + u8 const *data, int rounds); static int _sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len, sha256_transform_fn *sha256_xform) + unsigned int len, sha256_block_fn *sha256_xform) { struct sha256_state *sctx = shash_desc_ctx(desc); @@ -54,28 +53,26 @@ static int _sha256_update(struct shash_desc *desc, const u8 *data, (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE) return crypto_sha256_update(desc, data, len); - /* make sure casting to sha256_block_fn() is safe */ + /* make sure sha256_block_fn() use in generic routines is safe */ BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); kernel_fpu_begin(); - sha256_base_do_update(desc, data, len, - (sha256_block_fn *)sha256_xform); + sha256_base_do_update(desc, data, len, sha256_xform); kernel_fpu_end(); return 0; } static int sha256_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out, sha256_transform_fn *sha256_xform) + unsigned int len, u8 *out, sha256_block_fn *sha256_xform) { if (!crypto_simd_usable()) return crypto_sha256_finup(desc, data, len, out); kernel_fpu_begin(); if (len) - sha256_base_do_update(desc, data, len, - (sha256_block_fn *)sha256_xform); - sha256_base_do_finalize(desc, (sha256_block_fn *)sha256_xform); + sha256_base_do_update(desc, data, len, sha256_xform); + sha256_base_do_finalize(desc, sha256_xform); kernel_fpu_end(); return sha256_base_finish(desc, out); @@ -145,8 +142,8 @@ static void unregister_sha256_ssse3(void) } #ifdef CONFIG_AS_AVX -asmlinkage void sha256_transform_avx(u32 *digest, const char *data, - u64 rounds); +asmlinkage void sha256_transform_avx(struct sha256_state *digest, + u8 const *data, int blocks); static int sha256_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) @@ -227,8 +224,8 @@ static inline void unregister_sha256_avx(void) { } #endif #if defined(CONFIG_AS_AVX2) && defined(CONFIG_AS_AVX) -asmlinkage void sha256_transform_rorx(u32 *digest, const char *data, - u64 rounds); +asmlinkage void sha256_transform_rorx(struct sha256_state *digest, + u8 const *data, int rounds); static int sha256_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len) @@ -307,8 +304,8 @@ static inline void unregister_sha256_avx2(void) { } #endif #ifdef CONFIG_AS_SHA256_NI -asmlinkage void sha256_ni_transform(u32 *digest, const char *data, - u64 rounds); /*unsigned int rounds);*/ +asmlinkage void sha256_ni_transform(struct sha256_state *digest, + u8 const *data, int rounds); static int sha256_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 458356a3f124..09349d93f562 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -39,13 +39,11 @@ #include #include -asmlinkage void sha512_transform_ssse3(u64 *digest, const char *data, - u64 rounds); - -typedef void (sha512_transform_fn)(u64 *digest, const char *data, u64 rounds); +asmlinkage void sha512_transform_ssse3(struct sha512_state *digest, + u8 const *data, int rounds); static int sha512_update(struct 
shash_desc *desc, const u8 *data, - unsigned int len, sha512_transform_fn *sha512_xform) + unsigned int len, sha512_block_fn *sha512_xform) { struct sha512_state *sctx = shash_desc_ctx(desc); @@ -53,28 +51,26 @@ static int sha512_update(struct shash_desc *desc, const u8 *data, (sctx->count[0] % SHA512_BLOCK_SIZE) + len < SHA512_BLOCK_SIZE) return crypto_sha512_update(desc, data, len); - /* make sure casting to sha512_block_fn() is safe */ + /* make sure sha512_block_fn() use in generic routines is safe */ BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0); kernel_fpu_begin(); - sha512_base_do_update(desc, data, len, - (sha512_block_fn *)sha512_xform); + sha512_base_do_update(desc, data, len, sha512_xform); kernel_fpu_end(); return 0; } static int sha512_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out, sha512_transform_fn *sha512_xform) + unsigned int len, u8 *out, sha512_block_fn *sha512_xform) { if (!crypto_simd_usable()) return crypto_sha512_finup(desc, data, len, out); kernel_fpu_begin(); if (len) - sha512_base_do_update(desc, data, len, - (sha512_block_fn *)sha512_xform); - sha512_base_do_finalize(desc, (sha512_block_fn *)sha512_xform); + sha512_base_do_update(desc, data, len, sha512_xform); + sha512_base_do_finalize(desc, sha512_xform); kernel_fpu_end(); return sha512_base_finish(desc, out); @@ -144,8 +140,8 @@ static void unregister_sha512_ssse3(void) } #ifdef CONFIG_AS_AVX -asmlinkage void sha512_transform_avx(u64 *digest, const char *data, - u64 rounds); +asmlinkage void sha512_transform_avx(struct sha512_state *digest, + u8 const *data, int rounds); static bool avx_usable(void) { if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { @@ -225,8 +221,8 @@ static inline void unregister_sha512_avx(void) { } #endif #if defined(CONFIG_AS_AVX2) && defined(CONFIG_AS_AVX) -asmlinkage void sha512_transform_rorx(u64 *digest, const char *data, - u64 rounds); +asmlinkage void sha512_transform_rorx(struct sha512_state *digest, + u8 const *data, int rounds); static int sha512_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len)