From patchwork Wed Jun 12 12:48:19 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10989885
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 01/20] crypto: arm/aes-ce - cosmetic/whitespace cleanup
Date: Wed, 12 Jun 2019 14:48:19 +0200
Message-Id: <20190612124838.2492-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>

Rearrange the aes_algs[] array for legibility.

Signed-off-by: Ard Biesheuvel
---
 arch/arm/crypto/aes-ce-glue.c | 116 ++++++++++----------
 1 file changed, 56 insertions(+), 60 deletions(-)

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index 5affb8482379..04ba66903674 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -337,69 +337,65 @@ static int xts_decrypt(struct skcipher_request *req)
 }

 static struct skcipher_alg aes_algs[] = { {
-	.base = {
-		.cra_name		= "__ecb(aes)",
-		.cra_driver_name	= "__ecb-aes-ce",
-		.cra_priority		= 300,
-		.cra_flags		= CRYPTO_ALG_INTERNAL,
-		.cra_blocksize		= AES_BLOCK_SIZE,
-		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
-		.cra_module		= THIS_MODULE,
-	},
-	.min_keysize	= AES_MIN_KEY_SIZE,
-	.max_keysize	= AES_MAX_KEY_SIZE,
-	.setkey		= ce_aes_setkey,
-	.encrypt	= ecb_encrypt,
-	.decrypt	= ecb_decrypt,
+	.base.cra_name		= "__ecb(aes)",
+	.base.cra_driver_name	= "__ecb-aes-ce",
+	.base.cra_priority	= 300,
+	.base.cra_flags		= CRYPTO_ALG_INTERNAL,
+	.base.cra_blocksize	= AES_BLOCK_SIZE,
+	.base.cra_ctxsize	= sizeof(struct crypto_aes_ctx),
+	.base.cra_module	= THIS_MODULE,
+
+	.min_keysize		= AES_MIN_KEY_SIZE,
+	.max_keysize		= AES_MAX_KEY_SIZE,
+	.setkey			= ce_aes_setkey,
+	.encrypt		= ecb_encrypt,
+	.decrypt		= ecb_decrypt,
 }, {
-	.base = {
-		.cra_name		= "__cbc(aes)",
-		.cra_driver_name	= "__cbc-aes-ce",
-		.cra_priority		= 300,
-		.cra_flags		= CRYPTO_ALG_INTERNAL,
-		.cra_blocksize		= AES_BLOCK_SIZE,
-		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
-		.cra_module		= THIS_MODULE,
-	},
-	.min_keysize	= AES_MIN_KEY_SIZE,
-	.max_keysize	= AES_MAX_KEY_SIZE,
-	.ivsize		= AES_BLOCK_SIZE,
-	.setkey		= ce_aes_setkey,
-	.encrypt	= cbc_encrypt,
-	.decrypt	= cbc_decrypt,
+	.base.cra_name		= "__cbc(aes)",
+	.base.cra_driver_name	= "__cbc-aes-ce",
+	.base.cra_priority	= 300,
+	.base.cra_flags		= CRYPTO_ALG_INTERNAL,
+	.base.cra_blocksize	= AES_BLOCK_SIZE,
+	.base.cra_ctxsize	= sizeof(struct crypto_aes_ctx),
+	.base.cra_module	= THIS_MODULE,
+
+	.min_keysize		= AES_MIN_KEY_SIZE,
+	.max_keysize		= AES_MAX_KEY_SIZE,
+	.ivsize			= AES_BLOCK_SIZE,
+	.setkey			= ce_aes_setkey,
+	.encrypt		= cbc_encrypt,
+	.decrypt		= cbc_decrypt,
 }, {
-	.base = {
-		.cra_name		= "__ctr(aes)",
-		.cra_driver_name	= "__ctr-aes-ce",
-		.cra_priority		= 300,
-		.cra_flags		= CRYPTO_ALG_INTERNAL,
-		.cra_blocksize		= 1,
-		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
-		.cra_module		= THIS_MODULE,
-	},
-	.min_keysize	= AES_MIN_KEY_SIZE,
-	.max_keysize	= AES_MAX_KEY_SIZE,
-	.ivsize		= AES_BLOCK_SIZE,
-	.chunksize	= AES_BLOCK_SIZE,
-	.setkey		= ce_aes_setkey,
-	.encrypt	= ctr_encrypt,
-	.decrypt	= ctr_encrypt,
+	.base.cra_name		= "__ctr(aes)",
+	.base.cra_driver_name	= "__ctr-aes-ce",
+	.base.cra_priority	= 300,
+	.base.cra_flags		= CRYPTO_ALG_INTERNAL,
+	.base.cra_blocksize	= 1,
+	.base.cra_ctxsize	= sizeof(struct crypto_aes_ctx),
+	.base.cra_module	= THIS_MODULE,
+
+	.min_keysize		= AES_MIN_KEY_SIZE,
+	.max_keysize		= AES_MAX_KEY_SIZE,
+	.ivsize			= AES_BLOCK_SIZE,
+	.chunksize		= AES_BLOCK_SIZE,
+	.setkey			= ce_aes_setkey,
+	.encrypt		= ctr_encrypt,
+	.decrypt		= ctr_encrypt,
 }, {
-	.base = {
-		.cra_name		= "__xts(aes)",
-		.cra_driver_name	= "__xts-aes-ce",
-		.cra_priority		= 300,
-		.cra_flags		= CRYPTO_ALG_INTERNAL,
-		.cra_blocksize		= AES_BLOCK_SIZE,
-		.cra_ctxsize		= sizeof(struct crypto_aes_xts_ctx),
-		.cra_module		= THIS_MODULE,
-	},
-	.min_keysize	= 2 * AES_MIN_KEY_SIZE,
-	.max_keysize	= 2 * AES_MAX_KEY_SIZE,
-	.ivsize		= AES_BLOCK_SIZE,
-	.setkey		= xts_set_key,
-	.encrypt	= xts_encrypt,
-	.decrypt	= xts_decrypt,
+	.base.cra_name		= "__xts(aes)",
+	.base.cra_driver_name	= "__xts-aes-ce",
+	.base.cra_priority	= 300,
+	.base.cra_flags		= CRYPTO_ALG_INTERNAL,
+	.base.cra_blocksize	= AES_BLOCK_SIZE,
+	.base.cra_ctxsize	= sizeof(struct crypto_aes_xts_ctx),
+	.base.cra_module	= THIS_MODULE,
+
+	.min_keysize		= 2 * AES_MIN_KEY_SIZE,
+	.max_keysize		= 2 * AES_MAX_KEY_SIZE,
+	.ivsize			= AES_BLOCK_SIZE,
+	.setkey			= xts_set_key,
+	.encrypt		= xts_encrypt,
+	.decrypt		= xts_decrypt,
 } };

 static struct simd_skcipher_alg *aes_simd_algs[ARRAY_SIZE(aes_algs)];
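For reference, the two styles above are both standard C99 designated
initialization and produce identical objects; a stand-alone sketch with
hypothetical struct names (not from the kernel) that shows the equivalence:

	#include <stdio.h>

	struct base { const char *name; int prio; };
	struct alg  { struct base base; int keysize; };

	/* nested-braces form, as removed by this patch */
	static struct alg a1 = {
		.base = { .name = "ecb(aes)", .prio = 300 },
		.keysize = 16,
	};

	/* flat ".base.member" form, as introduced by this patch */
	static struct alg a2 = {
		.base.name = "ecb(aes)",
		.base.prio = 300,
		.keysize   = 16,
	};

	int main(void)
	{
		/* both initializers fill in the same fields */
		printf("%s %d / %s %d\n", a1.base.name, a1.keysize,
		       a2.base.name, a2.keysize);
		return 0;
	}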
From patchwork Wed Jun 12 12:48:20 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10989887
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 02/20] crypto: arm/aes - rename local routines to prevent future clashes
Date: Wed, 12 Jun 2019 14:48:20 +0200
Message-Id: <20190612124838.2492-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>

Rename some local AES encrypt/decrypt routines so they don't clash with
the names we are about to introduce for the routines exposed by the
generic AES library.

Signed-off-by: Ard Biesheuvel
---
 arch/arm/crypto/aes-cipher-glue.c   | 8 ++++----
 arch/arm64/crypto/aes-cipher-glue.c | 8 ++++----
 crypto/aes_generic.c                | 8 ++++----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm/crypto/aes-cipher-glue.c b/arch/arm/crypto/aes-cipher-glue.c
index c222f6e072ad..f6c07867b8ff 100644
--- a/arch/arm/crypto/aes-cipher-glue.c
+++ b/arch/arm/crypto/aes-cipher-glue.c
@@ -19,7 +19,7 @@ EXPORT_SYMBOL(__aes_arm_encrypt);
 asmlinkage void __aes_arm_decrypt(u32 *rk, int rounds, const u8 *in, u8 *out);
 EXPORT_SYMBOL(__aes_arm_decrypt);

-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -27,7 +27,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	__aes_arm_encrypt(ctx->key_enc, rounds, in, out);
 }

-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -47,8 +47,8 @@ static struct crypto_alg aes_alg = {
 	.cra_cipher.cia_min_keysize	= AES_MIN_KEY_SIZE,
 	.cra_cipher.cia_max_keysize	= AES_MAX_KEY_SIZE,
 	.cra_cipher.cia_setkey		= crypto_aes_set_key,
-	.cra_cipher.cia_encrypt		= aes_encrypt,
-	.cra_cipher.cia_decrypt		= aes_decrypt,
+	.cra_cipher.cia_encrypt		= aes_arm_encrypt,
+	.cra_cipher.cia_decrypt		= aes_arm_decrypt,

 #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	.cra_alignmask			= 3,
diff --git a/arch/arm64/crypto/aes-cipher-glue.c b/arch/arm64/crypto/aes-cipher-glue.c
index 7288e7cbebff..0e90b06ebcec 100644
--- a/arch/arm64/crypto/aes-cipher-glue.c
+++ b/arch/arm64/crypto/aes-cipher-glue.c
@@ -18,7 +18,7 @@ EXPORT_SYMBOL(__aes_arm64_encrypt);
 asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
 EXPORT_SYMBOL(__aes_arm64_decrypt);

-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -26,7 +26,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	__aes_arm64_encrypt(ctx->key_enc, out, in, rounds);
 }

-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_arm64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	int rounds = 6 + ctx->key_length / 4;
@@ -46,8 +46,8 @@ static struct crypto_alg aes_alg = {
 	.cra_cipher.cia_min_keysize	= AES_MIN_KEY_SIZE,
 	.cra_cipher.cia_max_keysize	= AES_MAX_KEY_SIZE,
 	.cra_cipher.cia_setkey		= crypto_aes_set_key,
-	.cra_cipher.cia_encrypt		= aes_encrypt,
-	.cra_cipher.cia_decrypt		= aes_decrypt
+	.cra_cipher.cia_encrypt		= aes_arm64_encrypt,
+	.cra_cipher.cia_decrypt		= aes_arm64_decrypt
 };

 static int __init aes_init(void)
diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
index f217568917e4..3aa4a715c216 100644
--- a/crypto/aes_generic.c
+++ b/crypto/aes_generic.c
@@ -1332,7 +1332,7 @@ EXPORT_SYMBOL_GPL(crypto_aes_set_key);
 	f_rl(bo, bi, 3, k);	\
 } while (0)

-static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	u32 b0[4], b1[4];
@@ -1402,7 +1402,7 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	i_rl(bo, bi, 3, k);	\
 } while (0)

-static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 	u32 b0[4], b1[4];
@@ -1454,8 +1454,8 @@ static struct crypto_alg aes_alg = {
 			.cia_min_keysize	= AES_MIN_KEY_SIZE,
 			.cia_max_keysize	= AES_MAX_KEY_SIZE,
 			.cia_setkey		= crypto_aes_set_key,
-			.cia_encrypt		= aes_encrypt,
-			.cia_decrypt		= aes_decrypt
+			.cia_encrypt		= crypto_aes_encrypt,
+			.cia_decrypt		= crypto_aes_decrypt
 		}
 	}
 };
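For context, the clash being avoided: once a header declares the library
prototypes (patch 4 adds them to include/crypto/aes.h), a file-local static
of the same name but a different signature no longer compiles. A condensed,
hypothetical illustration with stand-in types:

	typedef unsigned char u8;
	struct crypto_aes_ctx;
	struct crypto_tfm;

	/* declaration as it will appear in include/crypto/aes.h (patch 4) */
	void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);

	/*
	 * Pre-rename driver-local helper: with the declaration above in
	 * scope, GCC rejects this translation unit with
	 *   error: conflicting types for 'aes_encrypt'
	 *   error: static declaration of 'aes_encrypt' follows non-static declaration
	 */
	static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);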
From patchwork Wed Jun 12 12:48:21 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10989889
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 03/20] crypto: aes/fixed-time - align key schedule with other implementations
Date: Wed, 12 Jun 2019 14:48:21 +0200
Message-Id: <20190612124838.2492-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>

The fixed time AES code mangles the key schedule so that xoring the
first round key with values at fixed offsets across the Sbox produces
the correct value. This primes the D-cache with the entire Sbox before
any data dependent lookups are done, making it more difficult to infer
key bits from timing variances when the plaintext is known.

The downside of this approach is that it renders the key schedule
incompatible with other implementations of AES in the kernel, which
makes it cumbersome to use this implementation as a fallback for SIMD
based AES in contexts where this is not allowed.

So let's tweak the fixed Sbox indexes so that they add up to zero under
the xor operation. While at it, increase the granularity to 16 bytes so
we cover the entire Sbox even on systems with 16 byte cachelines.

Signed-off-by: Ard Biesheuvel
---
 crypto/aes_ti.c | 52 ++++++++------------
 1 file changed, 21 insertions(+), 31 deletions(-)

diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c
index 1ff9785b30f5..fd70dc322634 100644
--- a/crypto/aes_ti.c
+++ b/crypto/aes_ti.c
@@ -237,30 +237,8 @@ static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 			 unsigned int key_len)
 {
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
-	int err;

-	err = aesti_expand_key(ctx, in_key, key_len);
-	if (err)
-		return err;
-
-	/*
-	 * In order to force the compiler to emit data independent Sbox lookups
-	 * at the start of each block, xor the first round key with values at
-	 * fixed indexes in the Sbox. This will need to be repeated each time
-	 * the key is used, which will pull the entire Sbox into the D-cache
-	 * before any data dependent Sbox lookups are performed.
-	 */
-	ctx->key_enc[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128];
-	ctx->key_enc[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160];
-	ctx->key_enc[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192];
-	ctx->key_enc[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224];
-
-	ctx->key_dec[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128];
-	ctx->key_dec[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160];
-	ctx->key_dec[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192];
-	ctx->key_dec[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224];
-
-	return 0;
+	return aesti_expand_key(ctx, in_key, key_len);
 }

 static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
@@ -283,10 +261,16 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	 */
 	local_irq_save(flags);

-	st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128];
-	st0[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160];
-	st0[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192];
-	st0[3] ^= __aesti_sbox[96] ^ __aesti_sbox[224];
+	/*
+	 * Force the compiler to emit data independent Sbox references,
+	 * by xoring the input with Sbox values that are known to add up
+	 * to zero. This pulls the entire Sbox into the D-cache before any
+	 * data dependent lookups are done.
+	 */
+	st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[ 64] ^ __aesti_sbox[134] ^ __aesti_sbox[195];
+	st0[1] ^= __aesti_sbox[16] ^ __aesti_sbox[ 82] ^ __aesti_sbox[158] ^ __aesti_sbox[221];
+	st0[2] ^= __aesti_sbox[32] ^ __aesti_sbox[ 96] ^ __aesti_sbox[160] ^ __aesti_sbox[234];
+	st0[3] ^= __aesti_sbox[48] ^ __aesti_sbox[112] ^ __aesti_sbox[186] ^ __aesti_sbox[241];

 	for (round = 0;; round += 2, rkp += 8) {
 		st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0];
@@ -331,10 +315,16 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	 */
 	local_irq_save(flags);

-	st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128];
-	st0[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160];
-	st0[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192];
-	st0[3] ^= __aesti_inv_sbox[96] ^ __aesti_inv_sbox[224];
+	/*
+	 * Force the compiler to emit data independent Sbox references,
+	 * by xoring the input with Sbox values that are known to add up
+	 * to zero. This pulls the entire Sbox into the D-cache before any
+	 * data dependent lookups are done.
+	 */
+	st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[ 64] ^ __aesti_inv_sbox[129] ^ __aesti_inv_sbox[200];
+	st0[1] ^= __aesti_inv_sbox[16] ^ __aesti_inv_sbox[ 83] ^ __aesti_inv_sbox[150] ^ __aesti_inv_sbox[212];
+	st0[2] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[ 96] ^ __aesti_inv_sbox[160] ^ __aesti_inv_sbox[236];
+	st0[3] ^= __aesti_inv_sbox[48] ^ __aesti_inv_sbox[112] ^ __aesti_inv_sbox[187] ^ __aesti_inv_sbox[247];

 	for (round = 0;; round += 2, rkp += 8) {
 		st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0];
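The cancellation claim can be spot-checked in user space. A stand-alone
sketch (not part of the patch; the byte values below are transcribed from
the FIPS-197 S-box) that verifies each quadruple of encryption-path indexes
xors to zero and that the sixteen indexes together touch all sixteen
16-byte lines of the 256-byte table:

	#include <assert.h>
	#include <stdio.h>

	struct quad { unsigned char idx[4]; unsigned char val[4]; };

	/* S-box bytes at the indexes chosen by this patch */
	static const struct quad q[4] = {
		{ {  0,  64, 134, 195 }, { 0x63, 0x09, 0x44, 0x2e } },
		{ { 16,  82, 158, 221 }, { 0xca, 0x00, 0x0b, 0xc1 } },
		{ { 32,  96, 160, 234 }, { 0xb7, 0xd0, 0xe0, 0x87 } },
		{ { 48, 112, 186, 241 }, { 0x04, 0x51, 0xf4, 0xa1 } },
	};

	int main(void)
	{
		unsigned int lines = 0;

		for (int w = 0; w < 4; w++) {
			unsigned char x = 0;

			for (int i = 0; i < 4; i++) {
				x ^= q[w].val[i];
				lines |= 1u << (q[w].idx[i] / 16); /* 16-byte line hit */
			}
			assert(x == 0);		/* priming xor is a no-op */
		}
		assert(lines == 0xffff);	/* all 16 lines covered */
		printf("priming xors cancel and cover the whole Sbox\n");
		return 0;
	}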
From patchwork Wed Jun 12 12:48:22 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10989897
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 04/20] crypto: aes - create AES library based on the fixed time AES code
Date: Wed, 12 Jun 2019 14:48:22 +0200
Message-Id: <20190612124838.2492-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>

Take the existing small footprint and mostly time invariant C code and
turn it into an AES library that can be used for non-performance
critical, casual use of AES, and as a fallback for, e.g., SIMD code that
needs a secondary path that can be taken in contexts where the SIMD unit
is off limits (e.g., in hard interrupts taken from kernel context).

Signed-off-by: Ard Biesheuvel
---
 crypto/Kconfig       |   4 +
 crypto/aes_ti.c      | 325 +----
 include/crypto/aes.h |  34 ++
 lib/crypto/Makefile  |   3 +
 lib/crypto/aes.c     | 368 ++++++++++++++++++++
 5 files changed, 413 insertions(+), 321 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 5114b35ef3b4..dc6f93ef3ead 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1059,6 +1059,9 @@ config CRYPTO_GHASH_CLMUL_NI_INTEL

 comment "Ciphers"

+config CRYPTO_LIB_AES
+	tristate
+
 config CRYPTO_AES
 	tristate "AES cipher algorithms"
 	select CRYPTO_ALGAPI
@@ -1082,6 +1085,7 @@ config CRYPTO_AES
 config CRYPTO_AES_TI
 	tristate "Fixed time AES cipher"
 	select CRYPTO_ALGAPI
+	select CRYPTO_LIB_AES
 	help
 	  This is a generic implementation of AES that attempts to eliminate
 	  data dependent latencies as much as possible without affecting
diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c
index fd70dc322634..30d73b587acc 100644
--- a/crypto/aes_ti.c
+++ b/crypto/aes_ti.c
@@ -1,352 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  * Scalar fixed time AES core transform
  *
  * Copyright (C) 2017 Linaro Ltd
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
  */

 #include <crypto/aes.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
-#include <asm/unaligned.h>
-
-/*
- * Emit the sbox as volatile const to prevent the compiler from doing
- * constant folding on sbox references involving fixed indexes.
- */ -static volatile const u8 __cacheline_aligned __aesti_sbox[] = { - 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, - 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, - 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, - 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, - 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, - 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, - 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, - 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, - 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, - 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, - 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, - 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, - 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, - 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, - 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, - 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, - 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, - 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, - 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, - 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, - 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, - 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, - 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, - 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, - 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, - 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, - 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, - 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, - 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, - 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, - 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, - 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, -}; - -static volatile const u8 __cacheline_aligned __aesti_inv_sbox[] = { - 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, - 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, - 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, - 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, - 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, - 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, - 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, - 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, - 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, - 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, - 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, - 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, - 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, - 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, - 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, - 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, - 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, - 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, - 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, - 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, - 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, - 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, - 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, - 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, - 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, - 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, - 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, - 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, - 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, - 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, - 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, - 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d, -}; - -static u32 mul_by_x(u32 w) -{ - u32 x = w & 0x7f7f7f7f; - u32 y = w & 0x80808080; - - /* multiply by polynomial 'x' (0b10) in GF(2^8) */ - return (x << 1) ^ (y >> 7) * 0x1b; -} - -static u32 
mul_by_x2(u32 w) -{ - u32 x = w & 0x3f3f3f3f; - u32 y = w & 0x80808080; - u32 z = w & 0x40404040; - - /* multiply by polynomial 'x^2' (0b100) in GF(2^8) */ - return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b; -} - -static u32 mix_columns(u32 x) -{ - /* - * Perform the following matrix multiplication in GF(2^8) - * - * | 0x2 0x3 0x1 0x1 | | x[0] | - * | 0x1 0x2 0x3 0x1 | | x[1] | - * | 0x1 0x1 0x2 0x3 | x | x[2] | - * | 0x3 0x1 0x1 0x2 | | x[3] | - */ - u32 y = mul_by_x(x) ^ ror32(x, 16); - - return y ^ ror32(x ^ y, 8); -} - -static u32 inv_mix_columns(u32 x) -{ - /* - * Perform the following matrix multiplication in GF(2^8) - * - * | 0xe 0xb 0xd 0x9 | | x[0] | - * | 0x9 0xe 0xb 0xd | | x[1] | - * | 0xd 0x9 0xe 0xb | x | x[2] | - * | 0xb 0xd 0x9 0xe | | x[3] | - * - * which can conveniently be reduced to - * - * | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] | - * | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] | - * | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] | - * | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] | - */ - u32 y = mul_by_x2(x); - - return mix_columns(x ^ y ^ ror32(y, 16)); -} - -static __always_inline u32 subshift(u32 in[], int pos) -{ - return (__aesti_sbox[in[pos] & 0xff]) ^ - (__aesti_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^ - (__aesti_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ - (__aesti_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24); -} - -static __always_inline u32 inv_subshift(u32 in[], int pos) -{ - return (__aesti_inv_sbox[in[pos] & 0xff]) ^ - (__aesti_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^ - (__aesti_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ - (__aesti_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24); -} -static u32 subw(u32 in) -{ - return (__aesti_sbox[in & 0xff]) ^ - (__aesti_sbox[(in >> 8) & 0xff] << 8) ^ - (__aesti_sbox[(in >> 16) & 0xff] << 16) ^ - (__aesti_sbox[(in >> 24) & 0xff] << 24); -} - -static int aesti_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len) -{ - u32 kwords = key_len / sizeof(u32); - u32 rc, i, j; - - if (key_len != AES_KEYSIZE_128 && - key_len != AES_KEYSIZE_192 && - key_len != AES_KEYSIZE_256) - return -EINVAL; - - ctx->key_length = key_len; - - for (i = 0; i < kwords; i++) - ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); - - for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) { - u32 *rki = ctx->key_enc + (i * kwords); - u32 *rko = rki + kwords; - - rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0]; - rko[1] = rko[0] ^ rki[1]; - rko[2] = rko[1] ^ rki[2]; - rko[3] = rko[2] ^ rki[3]; - - if (key_len == 24) { - if (i >= 7) - break; - rko[4] = rko[3] ^ rki[4]; - rko[5] = rko[4] ^ rki[5]; - } else if (key_len == 32) { - if (i >= 6) - break; - rko[4] = subw(rko[3]) ^ rki[4]; - rko[5] = rko[4] ^ rki[5]; - rko[6] = rko[5] ^ rki[6]; - rko[7] = rko[6] ^ rki[7]; - } - } - - /* - * Generate the decryption keys for the Equivalent Inverse Cipher. - * This involves reversing the order of the round keys, and applying - * the Inverse Mix Columns transformation to all but the first and - * the last one. 
- */ - ctx->key_dec[0] = ctx->key_enc[key_len + 24]; - ctx->key_dec[1] = ctx->key_enc[key_len + 25]; - ctx->key_dec[2] = ctx->key_enc[key_len + 26]; - ctx->key_dec[3] = ctx->key_enc[key_len + 27]; - - for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) { - ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]); - ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]); - ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]); - ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]); - } - - ctx->key_dec[i] = ctx->key_enc[0]; - ctx->key_dec[i + 1] = ctx->key_enc[1]; - ctx->key_dec[i + 2] = ctx->key_enc[2]; - ctx->key_dec[i + 3] = ctx->key_enc[3]; - - return 0; -} static int aesti_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len) { struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - return aesti_expand_key(ctx, in_key, key_len); + return aes_expandkey(ctx, in_key, key_len); } static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - const u32 *rkp = ctx->key_enc + 4; - int rounds = 6 + ctx->key_length / 4; - u32 st0[4], st1[4]; - unsigned long flags; - int round; - st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); - st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4); - st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); - st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); - - /* - * Temporarily disable interrupts to avoid races where cachelines are - * evicted when the CPU is interrupted to do something else. - */ - local_irq_save(flags); - - /* - * Force the compiler to emit data independent Sbox references, - * by xoring the input with Sbox values that are known to add up - * to zero. This pulls the entire Sbox into the D-cache before any - * data dependent lookups are done. 
- */ - st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[ 64] ^ __aesti_sbox[134] ^ __aesti_sbox[195]; - st0[1] ^= __aesti_sbox[16] ^ __aesti_sbox[ 82] ^ __aesti_sbox[158] ^ __aesti_sbox[221]; - st0[2] ^= __aesti_sbox[32] ^ __aesti_sbox[ 96] ^ __aesti_sbox[160] ^ __aesti_sbox[234]; - st0[3] ^= __aesti_sbox[48] ^ __aesti_sbox[112] ^ __aesti_sbox[186] ^ __aesti_sbox[241]; - - for (round = 0;; round += 2, rkp += 8) { - st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0]; - st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1]; - st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2]; - st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3]; - - if (round == rounds - 2) - break; - - st0[0] = mix_columns(subshift(st1, 0)) ^ rkp[4]; - st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5]; - st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6]; - st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7]; - } - - put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out); - put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); - put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); - put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); - - local_irq_restore(flags); + aes_encrypt(ctx, out, in); } static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) { const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); - const u32 *rkp = ctx->key_dec + 4; - int rounds = 6 + ctx->key_length / 4; - u32 st0[4], st1[4]; - unsigned long flags; - int round; - - st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); - st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4); - st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); - st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); - - /* - * Temporarily disable interrupts to avoid races where cachelines are - * evicted when the CPU is interrupted to do something else. - */ - local_irq_save(flags); - - /* - * Force the compiler to emit data independent Sbox references, - * by xoring the input with Sbox values that are known to add up - * to zero. This pulls the entire Sbox into the D-cache before any - * data dependent lookups are done. 
- */ - st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[ 64] ^ __aesti_inv_sbox[129] ^ __aesti_inv_sbox[200]; - st0[1] ^= __aesti_inv_sbox[16] ^ __aesti_inv_sbox[ 83] ^ __aesti_inv_sbox[150] ^ __aesti_inv_sbox[212]; - st0[2] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[ 96] ^ __aesti_inv_sbox[160] ^ __aesti_inv_sbox[236]; - st0[3] ^= __aesti_inv_sbox[48] ^ __aesti_inv_sbox[112] ^ __aesti_inv_sbox[187] ^ __aesti_inv_sbox[247]; - - for (round = 0;; round += 2, rkp += 8) { - st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0]; - st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1]; - st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2]; - st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3]; - - if (round == rounds - 2) - break; - - st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4]; - st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5]; - st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6]; - st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7]; - } - - put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out); - put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); - put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); - put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); - local_irq_restore(flags); + aes_decrypt(ctx, out, in); } static struct crypto_alg aes_alg = { diff --git a/include/crypto/aes.h b/include/crypto/aes.h index 0fdb542c70cd..72ead82d3f98 100644 --- a/include/crypto/aes.h +++ b/include/crypto/aes.h @@ -37,4 +37,38 @@ int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len); int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); + +/** + * aes_expandkey - Expands the AES key as described in FIPS-197 + * @ctx: The location where the computed key will be stored. + * @in_key: The supplied key. + * @key_len: The length of the supplied key. + * + * Returns 0 on success. The function fails only if an invalid key size (or + * pointer) is supplied. + * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes + * key schedule plus a 16 bytes key which is used before the first round). + * The decryption key is prepared for the "Equivalent Inverse Cipher" as + * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is + * for the initial combination, the second slot for the first round and so on. 
+ */
+int aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key,
+		  unsigned int key_len);
+
+/**
+ * aes_encrypt - Encrypt a single AES block
+ * @ctx:	Context struct containing the key schedule
+ * @out:	Buffer to store the ciphertext
+ * @in:		Buffer containing the plaintext
+ */
+void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
+
+/**
+ * aes_decrypt - Decrypt a single AES block
+ * @ctx:	Context struct containing the key schedule
+ * @out:	Buffer to store the plaintext
+ * @in:		Buffer containing the ciphertext
+ */
+void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
+
 #endif
diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
index 88195c34932d..42a91c62d96d 100644
--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0

+obj-$(CONFIG_CRYPTO_LIB_AES)	+= libaes.o
+libaes-y			:= aes.o
+
 obj-$(CONFIG_CRYPTO_LIB_ARC4)	+= libarc4.o
 libarc4-y			:= arc4.o
diff --git a/lib/crypto/aes.c b/lib/crypto/aes.c
new file mode 100644
index 000000000000..57596148b010
--- /dev/null
+++ b/lib/crypto/aes.c
@@ -0,0 +1,368 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2017-2019 Linaro Ltd
+ */
+
+#include <crypto/aes.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+#include <asm/unaligned.h>
+
+/*
+ * Emit the sbox as volatile const to prevent the compiler from doing
+ * constant folding on sbox references involving fixed indexes.
+ */
+static volatile const u8 __cacheline_aligned aes_sbox[] = {
+	0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5,
+	0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76,
+	0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0,
+	0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0,
+	0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc,
+	0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15,
+	0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a,
+	0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75,
+	0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0,
+	0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84,
+	0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b,
+	0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf,
+	0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85,
+	0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8,
+	0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5,
+	0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2,
+	0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17,
+	0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73,
+	0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88,
+	0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb,
+	0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c,
+	0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79,
+	0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9,
+	0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08,
+	0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6,
+	0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a,
+	0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e,
+	0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e,
+	0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94,
+	0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf,
+	0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68,
+	0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16,
+};
+
+static volatile const u8 __cacheline_aligned aes_inv_sbox[] = {
+	0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38,
+	0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb,
+	0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87,
+	0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb,
+	0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d,
+	0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e,
+	0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2,
+	0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25,
+	0x72, 0xf8, 0xf6, 0x64,
0x86, 0x68, 0x98, 0x16, + 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, + 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, + 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, + 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, + 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, + 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, + 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, + 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, + 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, + 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, + 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, + 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, + 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, + 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, + 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, + 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, + 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, + 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, + 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, + 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, + 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, + 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, + 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d, +}; + +static u32 mul_by_x(u32 w) +{ + u32 x = w & 0x7f7f7f7f; + u32 y = w & 0x80808080; + + /* multiply by polynomial 'x' (0b10) in GF(2^8) */ + return (x << 1) ^ (y >> 7) * 0x1b; +} + +static u32 mul_by_x2(u32 w) +{ + u32 x = w & 0x3f3f3f3f; + u32 y = w & 0x80808080; + u32 z = w & 0x40404040; + + /* multiply by polynomial 'x^2' (0b100) in GF(2^8) */ + return (x << 2) ^ (y >> 7) * 0x36 ^ (z >> 6) * 0x1b; +} + +static u32 mix_columns(u32 x) +{ + /* + * Perform the following matrix multiplication in GF(2^8) + * + * | 0x2 0x3 0x1 0x1 | | x[0] | + * | 0x1 0x2 0x3 0x1 | | x[1] | + * | 0x1 0x1 0x2 0x3 | x | x[2] | + * | 0x3 0x1 0x1 0x2 | | x[3] | + */ + u32 y = mul_by_x(x) ^ ror32(x, 16); + + return y ^ ror32(x ^ y, 8); +} + +static u32 inv_mix_columns(u32 x) +{ + /* + * Perform the following matrix multiplication in GF(2^8) + * + * | 0xe 0xb 0xd 0x9 | | x[0] | + * | 0x9 0xe 0xb 0xd | | x[1] | + * | 0xd 0x9 0xe 0xb | x | x[2] | + * | 0xb 0xd 0x9 0xe | | x[3] | + * + * which can conveniently be reduced to + * + * | 0x2 0x3 0x1 0x1 | | 0x5 0x0 0x4 0x0 | | x[0] | + * | 0x1 0x2 0x3 0x1 | | 0x0 0x5 0x0 0x4 | | x[1] | + * | 0x1 0x1 0x2 0x3 | x | 0x4 0x0 0x5 0x0 | x | x[2] | + * | 0x3 0x1 0x1 0x2 | | 0x0 0x4 0x0 0x5 | | x[3] | + */ + u32 y = mul_by_x2(x); + + return mix_columns(x ^ y ^ ror32(y, 16)); +} + +static __always_inline u32 subshift(u32 in[], int pos) +{ + return (aes_sbox[in[pos] & 0xff]) ^ + (aes_sbox[(in[(pos + 1) % 4] >> 8) & 0xff] << 8) ^ + (aes_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ + (aes_sbox[(in[(pos + 3) % 4] >> 24) & 0xff] << 24); +} + +static __always_inline u32 inv_subshift(u32 in[], int pos) +{ + return (aes_inv_sbox[in[pos] & 0xff]) ^ + (aes_inv_sbox[(in[(pos + 3) % 4] >> 8) & 0xff] << 8) ^ + (aes_inv_sbox[(in[(pos + 2) % 4] >> 16) & 0xff] << 16) ^ + (aes_inv_sbox[(in[(pos + 1) % 4] >> 24) & 0xff] << 24); +} + +static u32 subw(u32 in) +{ + return (aes_sbox[in & 0xff]) ^ + (aes_sbox[(in >> 8) & 0xff] << 8) ^ + (aes_sbox[(in >> 16) & 0xff] << 16) ^ + (aes_sbox[(in >> 24) & 0xff] << 24); +} + +/** + * aes_expandkey - Expands the AES key as described in FIPS-197 + * @ctx: The location where the computed key will be stored. + * @in_key: The supplied key. + * @key_len: The length of the supplied key. + * + * Returns 0 on success. The function fails only if an invalid key size (or + * pointer) is supplied. 
+ * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes + * key schedule plus a 16 bytes key which is used before the first round). + * The decryption key is prepared for the "Equivalent Inverse Cipher" as + * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is + * for the initial combination, the second slot for the first round and so on. + */ +int aes_expandkey(struct crypto_aes_ctx *ctx, const u8 *in_key, + unsigned int key_len) +{ + u32 kwords = key_len / sizeof(u32); + u32 rc, i, j; + + if (key_len != AES_KEYSIZE_128 && + key_len != AES_KEYSIZE_192 && + key_len != AES_KEYSIZE_256) + return -EINVAL; + + ctx->key_length = key_len; + + for (i = 0; i < kwords; i++) + ctx->key_enc[i] = get_unaligned_le32(in_key + i * sizeof(u32)); + + for (i = 0, rc = 1; i < 10; i++, rc = mul_by_x(rc)) { + u32 *rki = ctx->key_enc + (i * kwords); + u32 *rko = rki + kwords; + + rko[0] = ror32(subw(rki[kwords - 1]), 8) ^ rc ^ rki[0]; + rko[1] = rko[0] ^ rki[1]; + rko[2] = rko[1] ^ rki[2]; + rko[3] = rko[2] ^ rki[3]; + + if (key_len == 24) { + if (i >= 7) + break; + rko[4] = rko[3] ^ rki[4]; + rko[5] = rko[4] ^ rki[5]; + } else if (key_len == 32) { + if (i >= 6) + break; + rko[4] = subw(rko[3]) ^ rki[4]; + rko[5] = rko[4] ^ rki[5]; + rko[6] = rko[5] ^ rki[6]; + rko[7] = rko[6] ^ rki[7]; + } + } + + /* + * Generate the decryption keys for the Equivalent Inverse Cipher. + * This involves reversing the order of the round keys, and applying + * the Inverse Mix Columns transformation to all but the first and + * the last one. + */ + ctx->key_dec[0] = ctx->key_enc[key_len + 24]; + ctx->key_dec[1] = ctx->key_enc[key_len + 25]; + ctx->key_dec[2] = ctx->key_enc[key_len + 26]; + ctx->key_dec[3] = ctx->key_enc[key_len + 27]; + + for (i = 4, j = key_len + 20; j > 0; i += 4, j -= 4) { + ctx->key_dec[i] = inv_mix_columns(ctx->key_enc[j]); + ctx->key_dec[i + 1] = inv_mix_columns(ctx->key_enc[j + 1]); + ctx->key_dec[i + 2] = inv_mix_columns(ctx->key_enc[j + 2]); + ctx->key_dec[i + 3] = inv_mix_columns(ctx->key_enc[j + 3]); + } + + ctx->key_dec[i] = ctx->key_enc[0]; + ctx->key_dec[i + 1] = ctx->key_enc[1]; + ctx->key_dec[i + 2] = ctx->key_enc[2]; + ctx->key_dec[i + 3] = ctx->key_enc[3]; + + return 0; +} +EXPORT_SYMBOL(aes_expandkey); + +/** + * aes_encrypt - Encrypt a single AES block + * @ctx: Context struct containing the key schedule + * @out: Buffer to store the ciphertext + * @in: Buffer containing the plaintext + */ +void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in) +{ + const u32 *rkp = ctx->key_enc + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; + unsigned long flags; + int round; + + st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); + st0[1] = ctx->key_enc[1] ^ get_unaligned_le32(in + 4); + st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); + + /* + * Temporarily disable interrupts to avoid races where cachelines are + * evicted when the CPU is interrupted to do something else. + */ + local_irq_save(flags); + + /* + * Force the compiler to emit data independent Sbox references, + * by xoring the input with Sbox values that are known to add up + * to zero. This pulls the entire Sbox into the D-cache before any + * data dependent lookups are done. 
+	 */
+	st0[0] ^= aes_sbox[ 0] ^ aes_sbox[ 64] ^ aes_sbox[134] ^ aes_sbox[195];
+	st0[1] ^= aes_sbox[16] ^ aes_sbox[ 82] ^ aes_sbox[158] ^ aes_sbox[221];
+	st0[2] ^= aes_sbox[32] ^ aes_sbox[ 96] ^ aes_sbox[160] ^ aes_sbox[234];
+	st0[3] ^= aes_sbox[48] ^ aes_sbox[112] ^ aes_sbox[186] ^ aes_sbox[241];
+
+	for (round = 0;; round += 2, rkp += 8) {
+		st1[0] = mix_columns(subshift(st0, 0)) ^ rkp[0];
+		st1[1] = mix_columns(subshift(st0, 1)) ^ rkp[1];
+		st1[2] = mix_columns(subshift(st0, 2)) ^ rkp[2];
+		st1[3] = mix_columns(subshift(st0, 3)) ^ rkp[3];
+
+		if (round == rounds - 2)
+			break;
+
+		st0[0] = mix_columns(subshift(st1, 0)) ^ rkp[4];
+		st0[1] = mix_columns(subshift(st1, 1)) ^ rkp[5];
+		st0[2] = mix_columns(subshift(st1, 2)) ^ rkp[6];
+		st0[3] = mix_columns(subshift(st1, 3)) ^ rkp[7];
+	}
+
+	put_unaligned_le32(subshift(st1, 0) ^ rkp[4], out);
+	put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4);
+	put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8);
+	put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12);
+
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL(aes_encrypt);
+
+/**
+ * aes_decrypt - Decrypt a single AES block
+ * @ctx:	Context struct containing the key schedule
+ * @out:	Buffer to store the plaintext
+ * @in:		Buffer containing the ciphertext
+ */
+void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in)
+{
+	const u32 *rkp = ctx->key_dec + 4;
+	int rounds = 6 + ctx->key_length / 4;
+	u32 st0[4], st1[4];
+	unsigned long flags;
+	int round;
+
+	st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in);
+	st0[1] = ctx->key_dec[1] ^ get_unaligned_le32(in + 4);
+	st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8);
+	st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12);
+
+	/*
+	 * Temporarily disable interrupts to avoid races where cachelines are
+	 * evicted when the CPU is interrupted to do something else.
+	 */
+	local_irq_save(flags);
+
+	/*
+	 * Force the compiler to emit data independent Sbox references,
+	 * by xoring the input with Sbox values that are known to add up
+	 * to zero. This pulls the entire Sbox into the D-cache before any
+	 * data dependent lookups are done.
+	 */
+	st0[0] ^= aes_inv_sbox[ 0] ^ aes_inv_sbox[ 64] ^ aes_inv_sbox[129] ^ aes_inv_sbox[200];
+	st0[1] ^= aes_inv_sbox[16] ^ aes_inv_sbox[ 83] ^ aes_inv_sbox[150] ^ aes_inv_sbox[212];
+	st0[2] ^= aes_inv_sbox[32] ^ aes_inv_sbox[ 96] ^ aes_inv_sbox[160] ^ aes_inv_sbox[236];
+	st0[3] ^= aes_inv_sbox[48] ^ aes_inv_sbox[112] ^ aes_inv_sbox[187] ^ aes_inv_sbox[247];
+
+	for (round = 0;; round += 2, rkp += 8) {
+		st1[0] = inv_mix_columns(inv_subshift(st0, 0)) ^ rkp[0];
+		st1[1] = inv_mix_columns(inv_subshift(st0, 1)) ^ rkp[1];
+		st1[2] = inv_mix_columns(inv_subshift(st0, 2)) ^ rkp[2];
+		st1[3] = inv_mix_columns(inv_subshift(st0, 3)) ^ rkp[3];
+
+		if (round == rounds - 2)
+			break;
+
+		st0[0] = inv_mix_columns(inv_subshift(st1, 0)) ^ rkp[4];
+		st0[1] = inv_mix_columns(inv_subshift(st1, 1)) ^ rkp[5];
+		st0[2] = inv_mix_columns(inv_subshift(st1, 2)) ^ rkp[6];
+		st0[3] = inv_mix_columns(inv_subshift(st1, 3)) ^ rkp[7];
+	}
+
+	put_unaligned_le32(inv_subshift(st1, 0) ^ rkp[4], out);
+	put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4);
+	put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8);
+	put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12);
+
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL(aes_decrypt);
+
+MODULE_DESCRIPTION("Generic AES library");
+MODULE_AUTHOR("Ard Biesheuvel ");
+MODULE_LICENSE("GPL v2");
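A minimal usage sketch of the library interface introduced above (not from
this series; written as a throwaway test module, with the test vector taken
from FIPS-197 Appendix C.1):

	#include <crypto/aes.h>
	#include <linux/errno.h>
	#include <linux/module.h>
	#include <linux/string.h>

	static int __init libaes_demo_init(void)
	{
		static const u8 key[AES_KEYSIZE_128] = {
			0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
			0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
		};
		static const u8 pt[AES_BLOCK_SIZE] = {
			0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
			0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff,
		};
		static const u8 ct[AES_BLOCK_SIZE] = {
			0x69, 0xc4, 0xe0, 0xd8, 0x6a, 0x7b, 0x04, 0x30,
			0xd8, 0xcd, 0xb7, 0x80, 0x70, 0xb4, 0xc5, 0x5a,
		};
		struct crypto_aes_ctx ctx;
		u8 buf[AES_BLOCK_SIZE];
		int err;

		err = aes_expandkey(&ctx, key, sizeof(key)); /* fails only on bad key size */
		if (err)
			return err;

		aes_encrypt(&ctx, buf, pt);	/* one 16-byte block */
		if (memcmp(buf, ct, sizeof(ct)))
			return -EINVAL;

		aes_decrypt(&ctx, buf, ct);
		return memcmp(buf, pt, sizeof(pt)) ? -EINVAL : 0;
	}
	module_init(libaes_demo_init);
	MODULE_LICENSE("GPL");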
C3t3tpAJqsEyHzbbjEelLLw/QJ8s+1v8JuGhRkVHs7D2rYPWmbLqi5m8rcPZLpUPDtUo 3jTA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kYNNbe+mFt8Q+RY9DxwwFbhQEO3KEw5kT4vc21gIvDg=; b=cdxOuqOeL9ETOjOaO0ZvATtonhw8PB0vud7+W5Z20sXcXbL1tGDoWejy3/rPAUwMo8 Q2jui4WHjGZjUlti5RFc8YkZ9+iU51VbsiYcWurE4AHgkxxy50UdFpU6pMgE1k7yCsOD rS6NoprcIjAN6h92P+zTY96C91y7DujfT7UDWTyaXPRPYTj7YGp27FilyM5GqfJrlA2p Yk/26SK1C5xrbKNr4xwP9ETpyXrZ9kT4Plm5HfIkNTG7PDIBo1aFPq5JsI+fxZshBXci y/4A0kzZwGpv82Y2zwkUmjJhDU6Q1aZ2fH0FZr4ygL2FOhDmp5WUNb/hMV5S6hO6Erx+ J3wA==
X-Gm-Message-State: APjAAAV9FD5I1Wafj+GfuvowXtJcuNJ2rt1Z4t1NUufXG0s3elcAY5B1 civF4xp9MkQP+1cYSQlPJVuHfY8rEoRzrA==
X-Google-Smtp-Source: APXvYqyF+C7SiBexU0MkQMJuAdUdezZFptyfoBb6DdReSF88CbWJsNt8wcl33GCpfJk6fqWFCLmSMQ==
X-Received: by 2002:a05:600c:2243:: with SMTP id a3mr21220424wmm.83.1560343732621; Wed, 12 Jun 2019 05:48:52 -0700 (PDT)
Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.51 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:52 -0700 (PDT)
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [RFC PATCH 05/20] crypto: x86/aes-ni - switch to generic for fallback and key routines
Date: Wed, 12 Jun 2019 14:48:23 +0200
Message-Id: <20190612124838.2492-6-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
MIME-Version: 1.0
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

The AES-NI code contains fallbacks for invocations that occur from a context where the SIMD unit is unavailable; in practice, that only happens when running in softirq context, where the softirq was raised by a hard IRQ that was taken while the kernel was already using the FPU. That means performance is not really a consideration, so we can just use the new library code for this use case, which has a smaller footprint and is believed to be time invariant. This will allow us to drop the non-SIMD asm routines in a subsequent patch.
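To make the shape of that fallback concrete: the glue boils down to the pattern below. This is a minimal sketch, not the literal hunk that follows; the wrapper name aesni_encrypt_glue is illustrative, while aes_ctx(), crypto_simd_usable(), kernel_fpu_begin()/kernel_fpu_end(), aesni_enc() and the library's aes_encrypt() are the existing interfaces.

static void aesni_encrypt_glue(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
	struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));

	if (!crypto_simd_usable()) {
		/* SIMD is off limits here: use the time-invariant AES library code */
		aes_encrypt(ctx, dst, src);
	} else {
		kernel_fpu_begin();		/* claim the FPU/SIMD unit */
		aesni_enc(ctx, dst, src);	/* hardware AES-NI block encryption */
		kernel_fpu_end();
	}
}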
Signed-off-by: Ard Biesheuvel --- arch/x86/crypto/aesni-intel_glue.c | 15 +++++++-------- arch/x86/include/asm/crypto/aes.h | 12 ------------ crypto/Kconfig | 3 +-- 3 files changed, 8 insertions(+), 22 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index e9b866e87d48..9952bd312ddc 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -26,7 +26,6 @@ #include #include #include -#include #include #include #include @@ -329,7 +328,7 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx, } if (!crypto_simd_usable()) - err = crypto_aes_expand_key(ctx, in_key, key_len); + err = aes_expandkey(ctx, in_key, key_len); else { kernel_fpu_begin(); err = aesni_set_key(ctx, in_key, key_len); @@ -349,9 +348,9 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) { struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm)); - if (!crypto_simd_usable()) - crypto_aes_encrypt_x86(ctx, dst, src); - else { + if (!crypto_simd_usable()) { + aes_encrypt(ctx, dst, src); + } else { kernel_fpu_begin(); aesni_enc(ctx, dst, src); kernel_fpu_end(); @@ -362,9 +361,9 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) { struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm)); - if (!crypto_simd_usable()) - crypto_aes_decrypt_x86(ctx, dst, src); - else { + if (!crypto_simd_usable()) { + aes_decrypt(ctx, dst, src); + } else { kernel_fpu_begin(); aesni_dec(ctx, dst, src); kernel_fpu_end(); diff --git a/arch/x86/include/asm/crypto/aes.h b/arch/x86/include/asm/crypto/aes.h deleted file mode 100644 index c508521dd190..000000000000 --- a/arch/x86/include/asm/crypto/aes.h +++ /dev/null @@ -1,12 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef ASM_X86_AES_H -#define ASM_X86_AES_H - -#include -#include - -void crypto_aes_encrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, - const u8 *src); -void crypto_aes_decrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, - const u8 *src); -#endif diff --git a/crypto/Kconfig b/crypto/Kconfig index dc6f93ef3ead..0d80985016bf 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1149,8 +1149,7 @@ config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 select CRYPTO_AEAD - select CRYPTO_AES_X86_64 if 64BIT - select CRYPTO_AES_586 if !64BIT + select CRYPTO_LIB_AES select CRYPTO_ALGAPI select CRYPTO_BLKCIPHER select CRYPTO_GLUE_HELPER_X86 if 64BIT From patchwork Wed Jun 12 12:48:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 10989901 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ED0AD76 for ; Wed, 12 Jun 2019 12:49:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D7DD5286D5 for ; Wed, 12 Jun 2019 12:49:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CC3D3288D9; Wed, 12 Jun 2019 12:49:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by 
mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3E55A28A49 for ; Wed, 12 Jun 2019 12:48:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439225AbfFLMs6 (ORCPT ); Wed, 12 Jun 2019 08:48:58 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:42815 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409161AbfFLMs6 (ORCPT ); Wed, 12 Jun 2019 08:48:58 -0400 Received: by mail-wr1-f67.google.com with SMTP id x17so1485018wrl.9 for ; Wed, 12 Jun 2019 05:48:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=76xN9WyeYIhCcwBpwFZTvWrdI8ZQQS6OHwuIy2N/Qdg=; b=VvGobrd1ZlgnDxeVKQG7TfwNsT5TYcduKpX9ULX1FgJQ8hijRWZyTZ7TqjRUGAqCkd brbRRJuKg9Pl2NWiu//dnM9AQhquseWXwStWe9yYCYWJ/S6nkA45M4ar33v8Bn1zHnii lBIn0wNNxs6d22hZmBT70eSasLpTGgmrBVG5casRaCz0ouO+QdOmvl0PpKwTZWh9FN41 nNF1MiCKNm1gB54jvs9f/4lL7aKV/TDQ1ztD3vyNflO96GgddvvvL12qxAF5yrY5kqiI UPRJIF3XqtYpL50/ymgsTvSczERjH+tAd+Tjc0fmTMbfXxud6aC65BvEMewuzgZrrgQ7 8/Yw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=76xN9WyeYIhCcwBpwFZTvWrdI8ZQQS6OHwuIy2N/Qdg=; b=M6FHCG59PLAsBaNyoMztEMOP5RknOoR4wNApyNLFex0iPYZUgbjF1Xn4CIXZAF3DTu J766abPwpfky62tUyeeoA3fxar5W6d1LXd3x+LcaYAy/n1EL6H7YYAa0rLgGNz18DzOs JQkuoESsa9iUgAvLKurHsMopPf/HamDud0souK4nyrUffi964xufQayvVOKtqaif2uhU qO5gDEr1dr1cY+ABZ0nia/43KMp7JaNpT+0oPCXdz+WXq7J8MkzAy/QWbNsPxBl8BK+u t9MarzrV6Tdk2DyisIqDuojbHwpTCUWyiBOfcSKIVVBryfrJ2LTVKMcf/SjMaa39UFYp clmQ== X-Gm-Message-State: APjAAAUSSIf4ebd40VRPQwAsYIIs0fSDwUA4c1I2AJ2yNuq0qW6CQXdH XPbS508Ka2zyxjKtL5XRJHZymoHtzpdBsQ== X-Google-Smtp-Source: APXvYqxEtqchIjBNQTVTlzBb/jFpGJVhx3YisaNBUH4Pr76o863+7JpMS8T4qCRXMtFptUF/lqI3xQ== X-Received: by 2002:a5d:43c9:: with SMTP id v9mr53672370wrr.70.1560343733768; Wed, 12 Jun 2019 05:48:53 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.52 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:53 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 06/20] crypto: x86/aes - drop scalar assembler implementations Date: Wed, 12 Jun 2019 14:48:24 +0200 Message-Id: <20190612124838.2492-7-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The AES assembler code for x86 isn't actually faster than code generated by the compiler from aes_generic.c, and considering the disproportionate maintenance burden of assembler code on x86, it is better just to drop it entirely. Modern x86 systems will use AES-NI anyway, and given that the modules being removed have a dependency on aes_generic already, we can remove them without running the risk of regressions. 
Signed-off-by: Ard Biesheuvel --- arch/x86/crypto/Makefile | 4 - arch/x86/crypto/aes-i586-asm_32.S | 362 -------------------- arch/x86/crypto/aes-x86_64-asm_64.S | 185 ---------- arch/x86/crypto/aes_glue.c | 71 ---- crypto/Kconfig | 44 --- 5 files changed, 666 deletions(-) diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index 45734e1cf967..b96a14e67ab0 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -14,11 +14,9 @@ sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no) obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o -obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o obj-$(CONFIG_CRYPTO_SERPENT_SSE2_586) += serpent-sse2-i586.o -obj-$(CONFIG_CRYPTO_AES_X86_64) += aes-x86_64.o obj-$(CONFIG_CRYPTO_DES3_EDE_X86_64) += des3_ede-x86_64.o obj-$(CONFIG_CRYPTO_CAMELLIA_X86_64) += camellia-x86_64.o obj-$(CONFIG_CRYPTO_BLOWFISH_X86_64) += blowfish-x86_64.o @@ -68,11 +66,9 @@ ifeq ($(avx2_supported),yes) obj-$(CONFIG_CRYPTO_MORUS1280_AVX2) += morus1280-avx2.o endif -aes-i586-y := aes-i586-asm_32.o aes_glue.o twofish-i586-y := twofish-i586-asm_32.o twofish_glue.o serpent-sse2-i586-y := serpent-sse2-i586-asm_32.o serpent_sse2_glue.o -aes-x86_64-y := aes-x86_64-asm_64.o aes_glue.o des3_ede-x86_64-y := des3_ede-asm_64.o des3_ede_glue.o camellia-x86_64-y := camellia-x86_64-asm_64.o camellia_glue.o blowfish-x86_64-y := blowfish-x86_64-asm_64.o blowfish_glue.o diff --git a/arch/x86/crypto/aes-i586-asm_32.S b/arch/x86/crypto/aes-i586-asm_32.S deleted file mode 100644 index 2849dbc59e11..000000000000 --- a/arch/x86/crypto/aes-i586-asm_32.S +++ /dev/null @@ -1,362 +0,0 @@ -// ------------------------------------------------------------------------- -// Copyright (c) 2001, Dr Brian Gladman < >, Worcester, UK. -// All rights reserved. -// -// LICENSE TERMS -// -// The free distribution and use of this software in both source and binary -// form is allowed (with or without changes) provided that: -// -// 1. distributions of this source code include the above copyright -// notice, this list of conditions and the following disclaimer// -// -// 2. distributions in binary form include the above copyright -// notice, this list of conditions and the following disclaimer -// in the documentation and/or other associated materials// -// -// 3. the copyright holder's name is not used to endorse products -// built using this software without specific written permission. -// -// -// ALTERNATIVELY, provided that this notice is retained in full, this product -// may be distributed under the terms of the GNU General Public License (GPL), -// in which case the provisions of the GPL apply INSTEAD OF those given above. -// -// Copyright (c) 2004 Linus Torvalds -// Copyright (c) 2004 Red Hat, Inc., James Morris - -// DISCLAIMER -// -// This software is provided 'as is' with no explicit or implied warranties -// in respect of its properties including, but not limited to, correctness -// and fitness for purpose. 
-// ------------------------------------------------------------------------- -// Issue Date: 29/07/2002 - -.file "aes-i586-asm.S" -.text - -#include -#include - -#define tlen 1024 // length of each of 4 'xor' arrays (256 32-bit words) - -/* offsets to parameters with one register pushed onto stack */ -#define ctx 8 -#define out_blk 12 -#define in_blk 16 - -/* offsets in crypto_aes_ctx structure */ -#define klen (480) -#define ekey (0) -#define dkey (240) - -// register mapping for encrypt and decrypt subroutines - -#define r0 eax -#define r1 ebx -#define r2 ecx -#define r3 edx -#define r4 esi -#define r5 edi - -#define eaxl al -#define eaxh ah -#define ebxl bl -#define ebxh bh -#define ecxl cl -#define ecxh ch -#define edxl dl -#define edxh dh - -#define _h(reg) reg##h -#define h(reg) _h(reg) - -#define _l(reg) reg##l -#define l(reg) _l(reg) - -// This macro takes a 32-bit word representing a column and uses -// each of its four bytes to index into four tables of 256 32-bit -// words to obtain values that are then xored into the appropriate -// output registers r0, r1, r4 or r5. - -// Parameters: -// table table base address -// %1 out_state[0] -// %2 out_state[1] -// %3 out_state[2] -// %4 out_state[3] -// idx input register for the round (destroyed) -// tmp scratch register for the round -// sched key schedule - -#define do_col(table, a1,a2,a3,a4, idx, tmp) \ - movzx %l(idx),%tmp; \ - xor table(,%tmp,4),%a1; \ - movzx %h(idx),%tmp; \ - shr $16,%idx; \ - xor table+tlen(,%tmp,4),%a2; \ - movzx %l(idx),%tmp; \ - movzx %h(idx),%idx; \ - xor table+2*tlen(,%tmp,4),%a3; \ - xor table+3*tlen(,%idx,4),%a4; - -// initialise output registers from the key schedule -// NB1: original value of a3 is in idx on exit -// NB2: original values of a1,a2,a4 aren't used -#define do_fcol(table, a1,a2,a3,a4, idx, tmp, sched) \ - mov 0 sched,%a1; \ - movzx %l(idx),%tmp; \ - mov 12 sched,%a2; \ - xor table(,%tmp,4),%a1; \ - mov 4 sched,%a4; \ - movzx %h(idx),%tmp; \ - shr $16,%idx; \ - xor table+tlen(,%tmp,4),%a2; \ - movzx %l(idx),%tmp; \ - movzx %h(idx),%idx; \ - xor table+3*tlen(,%idx,4),%a4; \ - mov %a3,%idx; \ - mov 8 sched,%a3; \ - xor table+2*tlen(,%tmp,4),%a3; - -// initialise output registers from the key schedule -// NB1: original value of a3 is in idx on exit -// NB2: original values of a1,a2,a4 aren't used -#define do_icol(table, a1,a2,a3,a4, idx, tmp, sched) \ - mov 0 sched,%a1; \ - movzx %l(idx),%tmp; \ - mov 4 sched,%a2; \ - xor table(,%tmp,4),%a1; \ - mov 12 sched,%a4; \ - movzx %h(idx),%tmp; \ - shr $16,%idx; \ - xor table+tlen(,%tmp,4),%a2; \ - movzx %l(idx),%tmp; \ - movzx %h(idx),%idx; \ - xor table+3*tlen(,%idx,4),%a4; \ - mov %a3,%idx; \ - mov 8 sched,%a3; \ - xor table+2*tlen(,%tmp,4),%a3; - - -// original Gladman had conditional saves to MMX regs. -#define save(a1, a2) \ - mov %a2,4*a1(%esp) - -#define restore(a1, a2) \ - mov 4*a2(%esp),%a1 - -// These macros perform a forward encryption cycle. They are entered with -// the first previous round column values in r0,r1,r4,r5 and -// exit with the final values in the same registers, using stack -// for temporary storage. 
- -// round column values -// on entry: r0,r1,r4,r5 -// on exit: r2,r1,r4,r5 -#define fwd_rnd1(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_fcol(table, r2,r5,r4,r1, r0,r3, arg); /* idx=r0 */ \ - do_col (table, r4,r1,r2,r5, r0,r3); /* idx=r4 */ \ - restore(r0,0); \ - do_col (table, r1,r2,r5,r4, r0,r3); /* idx=r1 */ \ - restore(r0,1); \ - do_col (table, r5,r4,r1,r2, r0,r3); /* idx=r5 */ - -// round column values -// on entry: r2,r1,r4,r5 -// on exit: r0,r1,r4,r5 -#define fwd_rnd2(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_fcol(table, r0,r5,r4,r1, r2,r3, arg); /* idx=r2 */ \ - do_col (table, r4,r1,r0,r5, r2,r3); /* idx=r4 */ \ - restore(r2,0); \ - do_col (table, r1,r0,r5,r4, r2,r3); /* idx=r1 */ \ - restore(r2,1); \ - do_col (table, r5,r4,r1,r0, r2,r3); /* idx=r5 */ - -// These macros performs an inverse encryption cycle. They are entered with -// the first previous round column values in r0,r1,r4,r5 and -// exit with the final values in the same registers, using stack -// for temporary storage - -// round column values -// on entry: r0,r1,r4,r5 -// on exit: r2,r1,r4,r5 -#define inv_rnd1(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_icol(table, r2,r1,r4,r5, r0,r3, arg); /* idx=r0 */ \ - do_col (table, r4,r5,r2,r1, r0,r3); /* idx=r4 */ \ - restore(r0,0); \ - do_col (table, r1,r4,r5,r2, r0,r3); /* idx=r1 */ \ - restore(r0,1); \ - do_col (table, r5,r2,r1,r4, r0,r3); /* idx=r5 */ - -// round column values -// on entry: r2,r1,r4,r5 -// on exit: r0,r1,r4,r5 -#define inv_rnd2(arg, table) \ - save (0,r1); \ - save (1,r5); \ - \ - /* compute new column values */ \ - do_icol(table, r0,r1,r4,r5, r2,r3, arg); /* idx=r2 */ \ - do_col (table, r4,r5,r0,r1, r2,r3); /* idx=r4 */ \ - restore(r2,0); \ - do_col (table, r1,r4,r5,r0, r2,r3); /* idx=r1 */ \ - restore(r2,1); \ - do_col (table, r5,r0,r1,r4, r2,r3); /* idx=r5 */ - -// AES (Rijndael) Encryption Subroutine -/* void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out_blk, const u8 *in_blk) */ - -.extern crypto_ft_tab -.extern crypto_fl_tab - -ENTRY(aes_enc_blk) - push %ebp - mov ctx(%esp),%ebp - -// CAUTION: the order and the values used in these assigns -// rely on the register mappings - -1: push %ebx - mov in_blk+4(%esp),%r2 - push %esi - mov klen(%ebp),%r3 // key size - push %edi -#if ekey != 0 - lea ekey(%ebp),%ebp // key pointer -#endif - -// input four columns and xor in first round key - - mov (%r2),%r0 - mov 4(%r2),%r1 - mov 8(%r2),%r4 - mov 12(%r2),%r5 - xor (%ebp),%r0 - xor 4(%ebp),%r1 - xor 8(%ebp),%r4 - xor 12(%ebp),%r5 - - sub $8,%esp // space for register saves on stack - add $16,%ebp // increment to next round key - cmp $24,%r3 - jb 4f // 10 rounds for 128-bit key - lea 32(%ebp),%ebp - je 3f // 12 rounds for 192-bit key - lea 32(%ebp),%ebp - -2: fwd_rnd1( -64(%ebp), crypto_ft_tab) // 14 rounds for 256-bit key - fwd_rnd2( -48(%ebp), crypto_ft_tab) -3: fwd_rnd1( -32(%ebp), crypto_ft_tab) // 12 rounds for 192-bit key - fwd_rnd2( -16(%ebp), crypto_ft_tab) -4: fwd_rnd1( (%ebp), crypto_ft_tab) // 10 rounds for 128-bit key - fwd_rnd2( +16(%ebp), crypto_ft_tab) - fwd_rnd1( +32(%ebp), crypto_ft_tab) - fwd_rnd2( +48(%ebp), crypto_ft_tab) - fwd_rnd1( +64(%ebp), crypto_ft_tab) - fwd_rnd2( +80(%ebp), crypto_ft_tab) - fwd_rnd1( +96(%ebp), crypto_ft_tab) - fwd_rnd2(+112(%ebp), crypto_ft_tab) - fwd_rnd1(+128(%ebp), crypto_ft_tab) - fwd_rnd2(+144(%ebp), crypto_fl_tab) // last round uses a different table - -// move 
final values to the output array. CAUTION: the -// order of these assigns rely on the register mappings - - add $8,%esp - mov out_blk+12(%esp),%ebp - mov %r5,12(%ebp) - pop %edi - mov %r4,8(%ebp) - pop %esi - mov %r1,4(%ebp) - pop %ebx - mov %r0,(%ebp) - pop %ebp - ret -ENDPROC(aes_enc_blk) - -// AES (Rijndael) Decryption Subroutine -/* void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out_blk, const u8 *in_blk) */ - -.extern crypto_it_tab -.extern crypto_il_tab - -ENTRY(aes_dec_blk) - push %ebp - mov ctx(%esp),%ebp - -// CAUTION: the order and the values used in these assigns -// rely on the register mappings - -1: push %ebx - mov in_blk+4(%esp),%r2 - push %esi - mov klen(%ebp),%r3 // key size - push %edi -#if dkey != 0 - lea dkey(%ebp),%ebp // key pointer -#endif - -// input four columns and xor in first round key - - mov (%r2),%r0 - mov 4(%r2),%r1 - mov 8(%r2),%r4 - mov 12(%r2),%r5 - xor (%ebp),%r0 - xor 4(%ebp),%r1 - xor 8(%ebp),%r4 - xor 12(%ebp),%r5 - - sub $8,%esp // space for register saves on stack - add $16,%ebp // increment to next round key - cmp $24,%r3 - jb 4f // 10 rounds for 128-bit key - lea 32(%ebp),%ebp - je 3f // 12 rounds for 192-bit key - lea 32(%ebp),%ebp - -2: inv_rnd1( -64(%ebp), crypto_it_tab) // 14 rounds for 256-bit key - inv_rnd2( -48(%ebp), crypto_it_tab) -3: inv_rnd1( -32(%ebp), crypto_it_tab) // 12 rounds for 192-bit key - inv_rnd2( -16(%ebp), crypto_it_tab) -4: inv_rnd1( (%ebp), crypto_it_tab) // 10 rounds for 128-bit key - inv_rnd2( +16(%ebp), crypto_it_tab) - inv_rnd1( +32(%ebp), crypto_it_tab) - inv_rnd2( +48(%ebp), crypto_it_tab) - inv_rnd1( +64(%ebp), crypto_it_tab) - inv_rnd2( +80(%ebp), crypto_it_tab) - inv_rnd1( +96(%ebp), crypto_it_tab) - inv_rnd2(+112(%ebp), crypto_it_tab) - inv_rnd1(+128(%ebp), crypto_it_tab) - inv_rnd2(+144(%ebp), crypto_il_tab) // last round uses a different table - -// move final values to the output array. CAUTION: the -// order of these assigns rely on the register mappings - - add $8,%esp - mov out_blk+12(%esp),%ebp - mov %r5,12(%ebp) - pop %edi - mov %r4,8(%ebp) - pop %esi - mov %r1,4(%ebp) - pop %ebx - mov %r0,(%ebp) - pop %ebp - ret -ENDPROC(aes_dec_blk) diff --git a/arch/x86/crypto/aes-x86_64-asm_64.S b/arch/x86/crypto/aes-x86_64-asm_64.S deleted file mode 100644 index 8739cf7795de..000000000000 --- a/arch/x86/crypto/aes-x86_64-asm_64.S +++ /dev/null @@ -1,185 +0,0 @@ -/* AES (Rijndael) implementation (FIPS PUB 197) for x86_64 - * - * Copyright (C) 2005 Andreas Steinmetz, - * - * License: - * This code can be distributed under the terms of the GNU General Public - * License (GPL) Version 2 provided that the above header down to and - * including this sentence is retained in full. 
- */ - -.extern crypto_ft_tab -.extern crypto_it_tab -.extern crypto_fl_tab -.extern crypto_il_tab - -.text - -#include -#include - -#define R1 %rax -#define R1E %eax -#define R1X %ax -#define R1H %ah -#define R1L %al -#define R2 %rbx -#define R2E %ebx -#define R2X %bx -#define R2H %bh -#define R2L %bl -#define R3 %rcx -#define R3E %ecx -#define R3X %cx -#define R3H %ch -#define R3L %cl -#define R4 %rdx -#define R4E %edx -#define R4X %dx -#define R4H %dh -#define R4L %dl -#define R5 %rsi -#define R5E %esi -#define R6 %rdi -#define R6E %edi -#define R7 %r9 /* don't use %rbp; it breaks stack traces */ -#define R7E %r9d -#define R8 %r8 -#define R10 %r10 -#define R11 %r11 - -#define prologue(FUNC,KEY,B128,B192,r1,r2,r5,r6,r7,r8,r9,r10,r11) \ - ENTRY(FUNC); \ - movq r1,r2; \ - leaq KEY+48(r8),r9; \ - movq r10,r11; \ - movl (r7),r5 ## E; \ - movl 4(r7),r1 ## E; \ - movl 8(r7),r6 ## E; \ - movl 12(r7),r7 ## E; \ - movl 480(r8),r10 ## E; \ - xorl -48(r9),r5 ## E; \ - xorl -44(r9),r1 ## E; \ - xorl -40(r9),r6 ## E; \ - xorl -36(r9),r7 ## E; \ - cmpl $24,r10 ## E; \ - jb B128; \ - leaq 32(r9),r9; \ - je B192; \ - leaq 32(r9),r9; - -#define epilogue(FUNC,r1,r2,r5,r6,r7,r8,r9) \ - movq r1,r2; \ - movl r5 ## E,(r9); \ - movl r6 ## E,4(r9); \ - movl r7 ## E,8(r9); \ - movl r8 ## E,12(r9); \ - ret; \ - ENDPROC(FUNC); - -#define round(TAB,OFFSET,r1,r2,r3,r4,r5,r6,r7,r8,ra,rb,rc,rd) \ - movzbl r2 ## H,r5 ## E; \ - movzbl r2 ## L,r6 ## E; \ - movl TAB+1024(,r5,4),r5 ## E;\ - movw r4 ## X,r2 ## X; \ - movl TAB(,r6,4),r6 ## E; \ - roll $16,r2 ## E; \ - shrl $16,r4 ## E; \ - movzbl r4 ## L,r7 ## E; \ - movzbl r4 ## H,r4 ## E; \ - xorl OFFSET(r8),ra ## E; \ - xorl OFFSET+4(r8),rb ## E; \ - xorl TAB+3072(,r4,4),r5 ## E;\ - xorl TAB+2048(,r7,4),r6 ## E;\ - movzbl r1 ## L,r7 ## E; \ - movzbl r1 ## H,r4 ## E; \ - movl TAB+1024(,r4,4),r4 ## E;\ - movw r3 ## X,r1 ## X; \ - roll $16,r1 ## E; \ - shrl $16,r3 ## E; \ - xorl TAB(,r7,4),r5 ## E; \ - movzbl r3 ## L,r7 ## E; \ - movzbl r3 ## H,r3 ## E; \ - xorl TAB+3072(,r3,4),r4 ## E;\ - xorl TAB+2048(,r7,4),r5 ## E;\ - movzbl r1 ## L,r7 ## E; \ - movzbl r1 ## H,r3 ## E; \ - shrl $16,r1 ## E; \ - xorl TAB+3072(,r3,4),r6 ## E;\ - movl TAB+2048(,r7,4),r3 ## E;\ - movzbl r1 ## L,r7 ## E; \ - movzbl r1 ## H,r1 ## E; \ - xorl TAB+1024(,r1,4),r6 ## E;\ - xorl TAB(,r7,4),r3 ## E; \ - movzbl r2 ## H,r1 ## E; \ - movzbl r2 ## L,r7 ## E; \ - shrl $16,r2 ## E; \ - xorl TAB+3072(,r1,4),r3 ## E;\ - xorl TAB+2048(,r7,4),r4 ## E;\ - movzbl r2 ## H,r1 ## E; \ - movzbl r2 ## L,r2 ## E; \ - xorl OFFSET+8(r8),rc ## E; \ - xorl OFFSET+12(r8),rd ## E; \ - xorl TAB+1024(,r1,4),r3 ## E;\ - xorl TAB(,r2,4),r4 ## E; - -#define move_regs(r1,r2,r3,r4) \ - movl r3 ## E,r1 ## E; \ - movl r4 ## E,r2 ## E; - -#define entry(FUNC,KEY,B128,B192) \ - prologue(FUNC,KEY,B128,B192,R2,R8,R1,R3,R4,R6,R10,R5,R11) - -#define return(FUNC) epilogue(FUNC,R8,R2,R5,R6,R3,R4,R11) - -#define encrypt_round(TAB,OFFSET) \ - round(TAB,OFFSET,R1,R2,R3,R4,R5,R6,R7,R10,R5,R6,R3,R4) \ - move_regs(R1,R2,R5,R6) - -#define encrypt_final(TAB,OFFSET) \ - round(TAB,OFFSET,R1,R2,R3,R4,R5,R6,R7,R10,R5,R6,R3,R4) - -#define decrypt_round(TAB,OFFSET) \ - round(TAB,OFFSET,R2,R1,R4,R3,R6,R5,R7,R10,R5,R6,R3,R4) \ - move_regs(R1,R2,R5,R6) - -#define decrypt_final(TAB,OFFSET) \ - round(TAB,OFFSET,R2,R1,R4,R3,R6,R5,R7,R10,R5,R6,R3,R4) - -/* void aes_enc_blk(stuct crypto_tfm *tfm, u8 *out, const u8 *in) */ - - entry(aes_enc_blk,0,.Le128,.Le192) - encrypt_round(crypto_ft_tab,-96) - encrypt_round(crypto_ft_tab,-80) -.Le192: 
encrypt_round(crypto_ft_tab,-64) - encrypt_round(crypto_ft_tab,-48) -.Le128: encrypt_round(crypto_ft_tab,-32) - encrypt_round(crypto_ft_tab,-16) - encrypt_round(crypto_ft_tab, 0) - encrypt_round(crypto_ft_tab, 16) - encrypt_round(crypto_ft_tab, 32) - encrypt_round(crypto_ft_tab, 48) - encrypt_round(crypto_ft_tab, 64) - encrypt_round(crypto_ft_tab, 80) - encrypt_round(crypto_ft_tab, 96) - encrypt_final(crypto_fl_tab,112) - return(aes_enc_blk) - -/* void aes_dec_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in) */ - - entry(aes_dec_blk,240,.Ld128,.Ld192) - decrypt_round(crypto_it_tab,-96) - decrypt_round(crypto_it_tab,-80) -.Ld192: decrypt_round(crypto_it_tab,-64) - decrypt_round(crypto_it_tab,-48) -.Ld128: decrypt_round(crypto_it_tab,-32) - decrypt_round(crypto_it_tab,-16) - decrypt_round(crypto_it_tab, 0) - decrypt_round(crypto_it_tab, 16) - decrypt_round(crypto_it_tab, 32) - decrypt_round(crypto_it_tab, 48) - decrypt_round(crypto_it_tab, 64) - decrypt_round(crypto_it_tab, 80) - decrypt_round(crypto_it_tab, 96) - decrypt_final(crypto_il_tab,112) - return(aes_dec_blk) diff --git a/arch/x86/crypto/aes_glue.c b/arch/x86/crypto/aes_glue.c deleted file mode 100644 index 9e9d819e8bc3..000000000000 --- a/arch/x86/crypto/aes_glue.c +++ /dev/null @@ -1,71 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Glue Code for the asm optimized version of the AES Cipher Algorithm - * - */ - -#include -#include -#include - -asmlinkage void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); -asmlinkage void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in); - -void crypto_aes_encrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src) -{ - aes_enc_blk(ctx, dst, src); -} -EXPORT_SYMBOL_GPL(crypto_aes_encrypt_x86); - -void crypto_aes_decrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src) -{ - aes_dec_blk(ctx, dst, src); -} -EXPORT_SYMBOL_GPL(crypto_aes_decrypt_x86); - -static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) -{ - aes_enc_blk(crypto_tfm_ctx(tfm), dst, src); -} - -static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) -{ - aes_dec_blk(crypto_tfm_ctx(tfm), dst, src); -} - -static struct crypto_alg aes_alg = { - .cra_name = "aes", - .cra_driver_name = "aes-asm", - .cra_priority = 200, - .cra_flags = CRYPTO_ALG_TYPE_CIPHER, - .cra_blocksize = AES_BLOCK_SIZE, - .cra_ctxsize = sizeof(struct crypto_aes_ctx), - .cra_module = THIS_MODULE, - .cra_u = { - .cipher = { - .cia_min_keysize = AES_MIN_KEY_SIZE, - .cia_max_keysize = AES_MAX_KEY_SIZE, - .cia_setkey = crypto_aes_set_key, - .cia_encrypt = aes_encrypt, - .cia_decrypt = aes_decrypt - } - } -}; - -static int __init aes_init(void) -{ - return crypto_register_alg(&aes_alg); -} - -static void __exit aes_fini(void) -{ - crypto_unregister_alg(&aes_alg); -} - -module_init(aes_init); -module_exit(aes_fini); - -MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, asm optimized"); -MODULE_LICENSE("GPL"); -MODULE_ALIAS_CRYPTO("aes"); -MODULE_ALIAS_CRYPTO("aes-asm"); diff --git a/crypto/Kconfig b/crypto/Kconfig index 0d80985016bf..2ed65185dde8 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1101,50 +1101,6 @@ config CRYPTO_AES_TI block. Interrupts are also disabled to avoid races where cachelines are evicted when the CPU is interrupted to do something else. -config CRYPTO_AES_586 - tristate "AES cipher algorithms (i586)" - depends on (X86 || UML_X86) && !64BIT - select CRYPTO_ALGAPI - select CRYPTO_AES - help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. 
- - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. - -config CRYPTO_AES_X86_64 - tristate "AES cipher algorithms (x86_64)" - depends on (X86 || UML_X86) && 64BIT - select CRYPTO_ALGAPI - select CRYPTO_AES - help - AES cipher algorithms (FIPS-197). AES uses the Rijndael - algorithm. - - Rijndael appears to be consistently a very good performer in - both hardware and software across a wide range of computing - environments regardless of its use in feedback or non-feedback - modes. Its key setup time is excellent, and its key agility is - good. Rijndael's very low memory requirements make it very well - suited for restricted-space environments, in which it also - demonstrates excellent performance. Rijndael's operations are - among the easiest to defend against power and timing attacks. - - The AES specifies three key sizes: 128, 192 and 256 bits - - See for more information. - config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 From patchwork Wed Jun 12 12:48:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 10989893 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D85FC13AF for ; Wed, 12 Jun 2019 12:48:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C6C1C286D5 for ; Wed, 12 Jun 2019 12:48:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BB0B828A49; Wed, 12 Jun 2019 12:48:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6994D286D5 for ; Wed, 12 Jun 2019 12:48:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2409163AbfFLMs4 (ORCPT ); Wed, 12 Jun 2019 08:48:56 -0400 Received: from mail-wr1-f66.google.com ([209.85.221.66]:33312 "EHLO mail-wr1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409157AbfFLMs4 (ORCPT ); Wed, 12 Jun 2019 08:48:56 -0400 Received: by mail-wr1-f66.google.com with SMTP id n9so16819636wru.0 for ; Wed, 12 Jun 2019 05:48:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=35aKMFTnfvApYT0bZnbnGPl1odG7lcC67cVgsN970Jc=; b=uesOFonJATVd0Dhi+ZGYHzoBuGkkmkYtb0S7HQv1Hz59/RBe4i5p43ZVuPUMbAGNHx 
9bpEHnn3XLn+RgRUFTon52E1BHildbm3m0MKNdS97AHqyTiqUkwWy1VHjpoZLou1P7AG r6I3q/xWtdUl+qLTdEO9jqMSgUoxL5QnSYzyV4l22NQ322Y1N09yiOP49WkSD8lVneJY 0rvI97kyanKet4adT23Xdj9kb9nzLJnzZK2itXiUChvrbGQiNl/foh5tvhZ3f4RwPhug ADKDGvtOpFHPbKSrQCbUKE88bbtYeKZWOeLH/fN+NcVVMwnOC3lgFXo5Fq3XhuRYhMMG R8Zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=35aKMFTnfvApYT0bZnbnGPl1odG7lcC67cVgsN970Jc=; b=cDGHxiFEoxSvLyWHQUg/XPpm5gDSl3mtUzVmCPnnphlvWBcf+Mjh80DMUP/uBl9yVq 8R8UFSruv4CoP6AYsg1eFAz+aS+2dogvfayTw7hiewXMnkPItqBKhLIqZu2jsSK1Z3P9 sBHSA37k+vVCiUPZ3zIoUQ0c8+9Vi+mSQg1rDdMbYszYMRdQjWmTH0KCbqztq30E9RsH tVCTDOB6bbqtuj8TrH1GCu3X4en2MI7+sLwE2uqdOBVcM/0LSd4TA90cVvruXIkZRIl+ cYfl8ymN3Ta/RxIMb7zPxtFctObDf4enHW2+iFOF2IflrDHrxkXRfMVABBPNLAYlZ1/L TcfQ== X-Gm-Message-State: APjAAAUz0xJnydBh/kpIh/n5dPC71TUAzjLJ82UUwO5H1tq4+iGfj+3c vrhcT6jP/QVFg3epHLJ4jFMwSepBsPQxgQ== X-Google-Smtp-Source: APXvYqz2LH34ke+zZvymZrS7jVoWGvw7k9Vc1McHT42Eze37IROOTN7xLWF6R7cSL45+2nzwGmuBkw== X-Received: by 2002:a5d:4a0b:: with SMTP id m11mr7629850wrq.251.1560343734764; Wed, 12 Jun 2019 05:48:54 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.53 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:54 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 07/20] crypto: padlock/aes - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:25 +0200 Message-Id: <20190612124838.2492-8-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Switch to the new AES library that also provides an implementation of the AES key expansion routine. This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/padlock-aes.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 0af08081e305..b7557eb69409 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -27,7 +27,7 @@ config CRYPTO_DEV_PADLOCK_AES tristate "PadLock driver for AES algorithm" depends on CRYPTO_DEV_PADLOCK select CRYPTO_BLKCIPHER - select CRYPTO_AES + select CRYPTO_LIB_AES help Use VIA PadLock for AES algorithm. 
diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c index ad020133da19..e73eab9bc22a 100644 --- a/drivers/crypto/padlock-aes.c +++ b/drivers/crypto/padlock-aes.c @@ -145,7 +145,7 @@ static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key, ctx->cword.encrypt.keygen = 1; ctx->cword.decrypt.keygen = 1; - if (crypto_aes_expand_key(&gen_aes, in_key, key_len)) { + if (aes_expandkey(&gen_aes, in_key, key_len)) { *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; return -EINVAL; } From patchwork Wed Jun 12 12:48:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 10989895 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7F61713AF for ; Wed, 12 Jun 2019 12:48:59 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6E410288D9 for ; Wed, 12 Jun 2019 12:48:59 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 62F2728A53; Wed, 12 Jun 2019 12:48:59 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 04D8D288D9 for ; Wed, 12 Jun 2019 12:48:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439224AbfFLMs6 (ORCPT ); Wed, 12 Jun 2019 08:48:58 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:45956 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409162AbfFLMs6 (ORCPT ); Wed, 12 Jun 2019 08:48:58 -0400 Received: by mail-wr1-f67.google.com with SMTP id f9so16737687wre.12 for ; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1JPW+Khv2z/aWoRuZihhCURcfSA9tf7hrRP493vIL8U=; b=L4vm5I00wk/ySgVsbGUIEtgHpepAypN9jVidRQ0lAW894f729XPrQ0KYdwsDyrslWT ncRcENt6WFZV7F+xqgrkMGCPqc7lGdRyzr8KXfatIm2TYMo0rHtSS0+Hkf/tOBC19zQ4 GZzHWR6pX8J2xJ6MdwaRyP3nJI8X8xJHhg8tnDal5qR8RjrXMsIFrsQ6TKEQnKhAEoht P0zp+nBE1+pTkNhHMxJyICEKBQ5TkTwSeV4oI1eg4bnJE1cpVNx/BWJTSdxgsbhtlZxE tAiuorRRi+S00qNTUOX9smGdKAeq09GawFffIHn2lVO6eqSqhclKZBvx1PSVDtRsoYsE nt7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1JPW+Khv2z/aWoRuZihhCURcfSA9tf7hrRP493vIL8U=; b=kE7KyRNPorsUT+VXuZr/3XxvFrL2O0Tw8cu5r92aoY59jm+r5JHOBCrCaYh0IJNmiB uePHFIumV80kLP28dz5HOurGP21fFjGWmouWKgxg4/W/EdkQEnIOQadEw8BhCbf17GSH xcZ85GmSQucs6OQLer/nnKzj2zPw7yEkCQtcu+bIC/R6DE+cQMAFwu/vWUXpx7gtFb6C LsYUwrr2nlZkQcztW5D4ptbOn0/RJnOPf6gSn+P4yuUl6k6SDHqS5hGbM8NVvI6RMVou tIzGMsCTSaSL1x1pr+lBtQ28ovnyWGzfXE4u/Oh1adJVqYwvJj+OnQBJnEFJkRxH7vbp MeYA== X-Gm-Message-State: APjAAAUur7QdbFLvWSKuoTVq3n9YhEZe3NKDG5+zNWCWjQpMCR55EAgH 5db+ZUS4JpSbD/Hr5iK907ZZJWVnHgEhoA== X-Google-Smtp-Source: 
APXvYqxgs2MtoxT26g2L9AGHoiQRgpePHpQuHHJdWV15RyUPCdlFTu8TgeU3u3GiYw5lC70m6rnr8g== X-Received: by 2002:a05:6000:1285:: with SMTP id f5mr13986859wrx.85.1560343735715; Wed, 12 Jun 2019 05:48:55 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.54 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:55 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 08/20] crypto: cesa/aes - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:26 +0200 Message-Id: <20190612124838.2492-9-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Switch to the new AES library that also provides an implementation of the AES key expansion routine. This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/marvell/cipher.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index b7557eb69409..539592e1d6f1 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -214,7 +214,7 @@ config CRYPTO_CRC32_S390 config CRYPTO_DEV_MARVELL_CESA tristate "Marvell's Cryptographic Engine driver" depends on PLAT_ORION || ARCH_MVEBU - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_DES select CRYPTO_BLKCIPHER select CRYPTO_HASH diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c index 2fd936b19c6d..debe7d9f00ae 100644 --- a/drivers/crypto/marvell/cipher.c +++ b/drivers/crypto/marvell/cipher.c @@ -257,7 +257,7 @@ static int mv_cesa_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, int ret; int i; - ret = crypto_aes_expand_key(&ctx->aes, key, len); + ret = aes_expandkey(&ctx->aes, key, len); if (ret) { crypto_skcipher_set_flags(cipher, CRYPTO_TFM_RES_BAD_KEY_LEN); return ret; From patchwork Wed Jun 12 12:48:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 10989899 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B8E631515 for ; Wed, 12 Jun 2019 12:49:00 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A6F4B288D9 for ; Wed, 12 Jun 2019 12:49:00 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 9B56728A4C; Wed, 12 Jun 2019 12:49:00 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 49D42286D5 for ; Wed, 12 Jun 
2019 12:49:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2406812AbfFLMs7 (ORCPT ); Wed, 12 Jun 2019 08:48:59 -0400 Received: from mail-wr1-f66.google.com ([209.85.221.66]:43857 "EHLO mail-wr1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439221AbfFLMs7 (ORCPT ); Wed, 12 Jun 2019 08:48:59 -0400 Received: by mail-wr1-f66.google.com with SMTP id p13so6668261wru.10 for ; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Rp2SyHztXuSc5EPDjn72803h6AQOk7vXYYLU7TdxcTM=; b=i589ezzqAAjlFViP74UzWJqZJFiqtT1FTmC1a0q0ImhT7u9tc+tCz60xh9CJLqUg8c UNA2tMsx9Yh1HnZlM21Aw5fUZlhYeViGelWAyjbm6p++n52DTRJpXRcZzw65FVgIjJqV OjBCZYAp5n3o3nQr3y/iU0RTpzpqIXaiPOVeiHghbqoDr2wiSK57WSDk3WeZfHGfj/bK zsIRzC2I36pon57I3tT8KN3q2OnloiQuNjPfEnMDRpSJENgsjy6djRQjffwk+WgctCp/ lvuNIGWEjds6ZOpJKkedu/x9IYDIUhgSive3a62OmT6cPwWsNTLTnuRC7p7oViSpyU71 Vewg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Rp2SyHztXuSc5EPDjn72803h6AQOk7vXYYLU7TdxcTM=; b=DrqPr7GtZlf+G7L/zd8Qj4Um8v0vPG/rb7soZ/0IDYIdaab7ZqmUz+wbls8uN8u6f/ KRSgKwu6Bus16q+kUn2IkrVRsm3YL75ErN3cEcPxaPXh2RKNOduwjxkt/6/dOmJ4jLny c/RxIsegu3nP7TsQDWFdbV1zNRN5OJew/rdfxlmLLV/maByFVzPcmbtGCT4gB9dmsvmv pBGQrcQGeSd5O0km36rgGtOb47lB38Id14ifPNagwc9eTIILTqyhrTfqoi8UIE/XXwOZ nT/ELBrnR7fMt+YQOnDZoMJkYe894RzMGSx5jizko7hmiI4XdslBMA7Z01qjzs8JomwA 5p9A== X-Gm-Message-State: APjAAAWMZfLUvRRucTWPSvjzPLNsb9IJKT/4/VJ+LhPnKfd2Z1SDNLdB 1ui5/Dd+HVXjPMABkRwyffB3HtCe7jNYkA== X-Google-Smtp-Source: APXvYqwH6HyQ7VqGyNJKlmKvWvkaDR0YvbujuJVEBHPVcsk7baLGxd/zb06pQ7W5AD/OminzAiskIA== X-Received: by 2002:adf:db81:: with SMTP id u1mr53014316wri.296.1560343736822; Wed, 12 Jun 2019 05:48:56 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.55 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:56 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 09/20] crypto: safexcel/aes - switch to library version of key expansion routine Date: Wed, 12 Jun 2019 14:48:27 +0200 Message-Id: <20190612124838.2492-10-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Switch to the new AES library that also provides an implementation of the AES key expansion routine. This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. 
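The conversion pattern is the same in the padlock, cesa and safexcel drivers: expand the key into a stack-allocated struct crypto_aes_ctx using the library routine, copy out the round keys the hardware needs, and wipe the stack copy. A minimal sketch of that pattern (mydrv_setkey and rk_out are hypothetical names; aes_expandkey(), struct crypto_aes_ctx and memzero_explicit() are the real kernel interfaces):

#include <crypto/aes.h>
#include <linux/string.h>

static int mydrv_setkey(u32 *rk_out, const u8 *key, unsigned int keylen)
{
	struct crypto_aes_ctx aes;
	int err;

	err = aes_expandkey(&aes, key, keylen);	/* also validates keylen */
	if (err)
		return err;	/* -EINVAL for anything but 16/24/32 byte keys */

	memcpy(rk_out, aes.key_enc, sizeof(aes.key_enc));
	memzero_explicit(&aes, sizeof(aes));	/* don't leave round keys on the stack */
	return 0;
}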
Signed-off-by: Ard Biesheuvel --- drivers/crypto/Kconfig | 2 +- drivers/crypto/inside-secure/safexcel_cipher.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig index 539592e1d6f1..a6067bb5a6a2 100644 --- a/drivers/crypto/Kconfig +++ b/drivers/crypto/Kconfig @@ -701,7 +701,7 @@ config CRYPTO_DEV_SAFEXCEL tristate "Inside Secure's SafeXcel cryptographic engine driver" depends on OF depends on (ARM64 && ARCH_MVEBU) || (COMPILE_TEST && 64BIT) - select CRYPTO_AES + select CRYPTO_LIB_AES select CRYPTO_AUTHENC select CRYPTO_BLKCIPHER select CRYPTO_DES diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c index de4be10b172f..483632546260 100644 --- a/drivers/crypto/inside-secure/safexcel_cipher.c +++ b/drivers/crypto/inside-secure/safexcel_cipher.c @@ -158,7 +158,7 @@ static int safexcel_skcipher_aes_setkey(struct crypto_skcipher *ctfm, struct crypto_aes_ctx aes; int ret, i; - ret = crypto_aes_expand_key(&aes, key, len); + ret = aes_expandkey(&aes, key, len); if (ret) { crypto_skcipher_set_flags(ctfm, CRYPTO_TFM_RES_BAD_KEY_LEN); return ret; From patchwork Wed Jun 12 12:48:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 10989903 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 545DE76 for ; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 426CE286D5 for ; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 36C3B28A49; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B4EEF286D5 for ; Wed, 12 Jun 2019 12:49:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2409159AbfFLMtB (ORCPT ); Wed, 12 Jun 2019 08:49:01 -0400 Received: from mail-wr1-f67.google.com ([209.85.221.67]:38209 "EHLO mail-wr1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2439226AbfFLMtA (ORCPT ); Wed, 12 Jun 2019 08:49:00 -0400 Received: by mail-wr1-f67.google.com with SMTP id d18so16761116wrs.5 for ; Wed, 12 Jun 2019 05:48:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8rzDm2pxsxDof2+7Zzrqa2tkF2eXq+Yv+cQAP5RFetY=; b=Rfb/Q/DVnFDyufHbo81DeLA5Mgu81RRzPJ8RgVQOhA92jjmdOkINg6u/Q2a+Djf8+V H7jl1IjuN6mJCuKDQmh4TufNwBNNkJU3Db5exPStu0jQfCbAYkTFikAFFRUw//WhEPIW 7zeCHsJnRdYMDSwwFOP3XJDPQMjX/4bdacNOPgGwnXAhtbuwuJTN9BKLuK769EHOH+NK 7bIFZURO5vC42W6Zw2A/4LMESvl5wfXIPEq0DgRx2TLANgOxDMbQYgNpdxXieFcd8DOl fIdzB92oSna3AebyUB6Vrhp4gPUf5Yw1Oq3A2DUI5MVTgaIsvYuXlLxgNz+Oyiw5mjDw SgNg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8rzDm2pxsxDof2+7Zzrqa2tkF2eXq+Yv+cQAP5RFetY=; b=kr8X39MnFBQl3L0oDK5ySbLeoZqWACyNb3SY77Cl5Kq8YEZr8hWqPoaMECA9fNOH+v ZBZk057QPZHaU+eoCw/KQyUIb5NsVogKXEpuLIhQEW/+zHeDUQuiAkBLKwm0Uw4i3RG7 RPh+4S+O98mjQ5q3qgoTKTqemKsd1VSgh1i8SaBdW601jXW+fYDNh3HE5W/ImIYaRfOa o9pY/itf3Um/qBuAMXz+Wx86fSwlzBiuw4e0wLXSEKLygBkmVBfkRaJMR5GbIXPRqaai EEmJLTcMd3V/u064inbGXlS6QA5Kxs+iAdxi4NB303a2F0S8hnKkA4vrwRSuA1mXKhKB ViIQ== X-Gm-Message-State: APjAAAXUzA0uomDF4W2C+zylJVwwl5/rJrUjGrXrZ4Q5yFyPhUqLWpWh HyATVwRR998+VzsmTTorqiHL2gqW3XRfJw== X-Google-Smtp-Source: APXvYqwQ+QzMDuxfANjYqhcJpbAjWfcU/OhTtV4SErkwC1hk7PvIfAdTpMuDQvJeQtJFqeqEwdrJiQ== X-Received: by 2002:a5d:67cd:: with SMTP id n13mr42203811wrw.138.1560343737744; Wed, 12 Jun 2019 05:48:57 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.56 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 12 Jun 2019 05:48:57 -0700 (PDT) From: Ard Biesheuvel To: linux-crypto@vger.kernel.org Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel Subject: [RFC PATCH 10/20] crypto: arm64/ghash - switch to AES library Date: Wed, 12 Jun 2019 14:48:28 +0200 Message-Id: <20190612124838.2492-11-ard.biesheuvel@linaro.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org> References: <20190612124838.2492-1-ard.biesheuvel@linaro.org> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The GHASH code uses the generic AES key expansion routines, and calls directly into the scalar table based AES cipher for arm64 from the fallback path, and since this implementation is known to be non-time invariant, doing so from a time invariant SIMD cipher is a bit nasty. So let's switch to the AES library - this makes the code more robust, and drops the dependency on the generic AES cipher, allowing us to omit it entirely in the future. 
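Once the fallback uses the library, the non-SIMD CTR path reduces to the usual encrypt-counter/xor/increment loop. Schematically (a sketch under assumed parameter names, not the exact hunks below; aes_encrypt(), crypto_xor_cpy() and crypto_inc() are existing kernel helpers from <crypto/aes.h> and <crypto/algapi.h>):

static void ctr_fallback(const struct crypto_aes_ctx *key,
			 u8 iv[AES_BLOCK_SIZE],
			 u8 *dst, const u8 *src, int blocks)
{
	u8 ks[AES_BLOCK_SIZE];

	while (blocks--) {
		aes_encrypt(key, ks, iv);	/* keystream block = E_K(counter) */
		crypto_xor_cpy(dst, src, ks, AES_BLOCK_SIZE);
		crypto_inc(iv, AES_BLOCK_SIZE);	/* big-endian counter increment */
		src += AES_BLOCK_SIZE;
		dst += AES_BLOCK_SIZE;
	}
}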
Signed-off-by: Ard Biesheuvel --- arch/arm64/crypto/Kconfig | 3 +- arch/arm64/crypto/ghash-ce-glue.c | 30 +++++++------------- 2 files changed, 11 insertions(+), 22 deletions(-) diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index d9a523ecdd83..1762055e7093 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -58,8 +58,7 @@ config CRYPTO_GHASH_ARM64_CE depends on KERNEL_MODE_NEON select CRYPTO_HASH select CRYPTO_GF128MUL - select CRYPTO_AES - select CRYPTO_AES_ARM64 + select CRYPTO_LIB_AES config CRYPTO_CRCT10DIF_ARM64_CE tristate "CRCT10DIF digest algorithm using PMULL instructions" diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c index b39ed99b06fb..90496765d22f 100644 --- a/arch/arm64/crypto/ghash-ce-glue.c +++ b/arch/arm64/crypto/ghash-ce-glue.c @@ -73,8 +73,6 @@ asmlinkage void pmull_gcm_decrypt(int blocks, u64 dg[], u8 dst[], asmlinkage void pmull_gcm_encrypt_block(u8 dst[], u8 const src[], u32 const rk[], int rounds); -asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); - static int ghash_init(struct shash_desc *desc) { struct ghash_desc_ctx *ctx = shash_desc_ctx(desc); @@ -312,14 +310,13 @@ static int gcm_setkey(struct crypto_aead *tfm, const u8 *inkey, u8 key[GHASH_BLOCK_SIZE]; int ret; - ret = crypto_aes_expand_key(&ctx->aes_key, inkey, keylen); + ret = aes_expandkey(&ctx->aes_key, inkey, keylen); if (ret) { tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; return -EINVAL; } - __aes_arm64_encrypt(ctx->aes_key.key_enc, key, (u8[AES_BLOCK_SIZE]){}, - num_rounds(&ctx->aes_key)); + aes_encrypt(&ctx->aes_key, key, (u8[AES_BLOCK_SIZE]){}); return __ghash_setkey(&ctx->ghash_key, key, sizeof(be128)); } @@ -470,7 +467,7 @@ static int gcm_encrypt(struct aead_request *req) rk = ctx->aes_key.key_enc; } while (walk.nbytes >= 2 * AES_BLOCK_SIZE); } else { - __aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds); + aes_encrypt(&ctx->aes_key, tag, iv); put_unaligned_be32(2, iv + GCM_IV_SIZE); while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) { @@ -481,8 +478,7 @@ static int gcm_encrypt(struct aead_request *req) int remaining = blocks; do { - __aes_arm64_encrypt(ctx->aes_key.key_enc, - ks, iv, nrounds); + aes_encrypt(&ctx->aes_key, ks, iv); crypto_xor_cpy(dst, src, ks, AES_BLOCK_SIZE); crypto_inc(iv, AES_BLOCK_SIZE); @@ -498,13 +494,10 @@ static int gcm_encrypt(struct aead_request *req) walk.nbytes % (2 * AES_BLOCK_SIZE)); } if (walk.nbytes) { - __aes_arm64_encrypt(ctx->aes_key.key_enc, ks, iv, - nrounds); + aes_encrypt(&ctx->aes_key, ks, iv); if (walk.nbytes > AES_BLOCK_SIZE) { crypto_inc(iv, AES_BLOCK_SIZE); - __aes_arm64_encrypt(ctx->aes_key.key_enc, - ks + AES_BLOCK_SIZE, iv, - nrounds); + aes_encrypt(&ctx->aes_key, ks + AES_BLOCK_SIZE, iv); } } } @@ -608,7 +601,7 @@ static int gcm_decrypt(struct aead_request *req) rk = ctx->aes_key.key_enc; } while (walk.nbytes >= 2 * AES_BLOCK_SIZE); } else { - __aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds); + aes_encrypt(&ctx->aes_key, tag, iv); put_unaligned_be32(2, iv + GCM_IV_SIZE); while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) { @@ -621,8 +614,7 @@ static int gcm_decrypt(struct aead_request *req) pmull_ghash_update_p64); do { - __aes_arm64_encrypt(ctx->aes_key.key_enc, - buf, iv, nrounds); + aes_encrypt(&ctx->aes_key, buf, iv); crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE); crypto_inc(iv, AES_BLOCK_SIZE); @@ -640,11 +632,9 @@ static int gcm_decrypt(struct aead_request *req) memcpy(iv2, iv, AES_BLOCK_SIZE); crypto_inc(iv2, AES_BLOCK_SIZE); 
- __aes_arm64_encrypt(ctx->aes_key.key_enc, iv2, - iv2, nrounds); + aes_encrypt(&ctx->aes_key, iv2, iv2); } - __aes_arm64_encrypt(ctx->aes_key.key_enc, iv, iv, - nrounds); + aes_encrypt(&ctx->aes_key, iv, iv); } } From patchwork Wed Jun 12 12:48:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ard Biesheuvel X-Patchwork-Id: 10989905 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C768B1515 for ; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B4B62286D5 for ; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A93A628A49; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 542E3288D9 for ; Wed, 12 Jun 2019 12:49:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2439226AbfFLMtB (ORCPT ); Wed, 12 Jun 2019 08:49:01 -0400 Received: from mail-wm1-f67.google.com ([209.85.128.67]:55471 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2409157AbfFLMtB (ORCPT ); Wed, 12 Jun 2019 08:49:01 -0400 Received: by mail-wm1-f67.google.com with SMTP id a15so6429725wmj.5 for ; Wed, 12 Jun 2019 05:48:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QXrlKaueW49CmOlW9q/RYTZvkffZ+6o0SuGgZq9akzA=; b=Bc8YNuhgRlYAqFqT5jtNDi4QTQiRYHmu4iimah+AMG+XNBcM4k/qp0UjZj6sdDr7++ qDgYUaHtKrj+24P0Ta+7rPPz6QrRYKIg4I/0DL+NmM8+bL9KJ89p8E/Hxn0gCKlTppog 2i4GGHcLUygzvj0xSb37E6y2gtjOiHtpQIkGO6fwUBtS8KMPWnB62lO5Q60zYmwSK+9T 83x72iiwWV/iz/077WlhuVns9Z5y+NDBNjpVxKwVDQ30INPw3m/ATjYlvSiD6oN2LwNZ mC8z6DbH65y6ZcGCzPcEwvckeIfXZN/7Fv+pT1xNEt/5xNe7TGUa6TPuZ/Vq7mIFQY1i TOHA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QXrlKaueW49CmOlW9q/RYTZvkffZ+6o0SuGgZq9akzA=; b=SAhXkaM4QR/poxvBAPPGsWpR2Y6HO4oEQRLEo4IBEiA9KX4IGtp9cgdMvVZ64eGnnj TFc4I+fVGTWjmgHPMjDOpW/fvMFkgb6a5PWyX0jTpzSdzpW3kgnvtohxn6GKP6DCiqA/ JP6YVjJvyGA8IkV4B1UogJ9iXbSxdBUoWb3g/LrlgzteT4o2NYHwaG0di/kq8OUuNtnE 6+AjudIoedvmMCXXxzQqbfOxsxX6qrsdhXKELIMctn1L38+/jpZTyjXFSvPVtQLZuwcE 5E87M41s034LmkdAIAwtSXdOKiteDCdEMU082DVWozwHalRjxEzvYvnEpZMAEjdGHZPH lBbQ== X-Gm-Message-State: APjAAAVnEmWONEqdvelDVutD9VHXUkUFJZQuo+KQUx2D5dmS4Jr73PLr Ksm/LMLGqV3F9cEz3toZ9pKG26iSr3JCSg== X-Google-Smtp-Source: APXvYqyuQ/y3rlo7dhNeeyzF+rlGQTYgd8NINhRXyB8oGigKj0kFm662vv9nOY31PNY0BuohLv8KgA== X-Received: by 2002:a1c:f415:: with SMTP id z21mr9228056wma.34.1560343738709; Wed, 12 Jun 2019 05:48:58 -0700 (PDT) Received: from sudo.home ([2a01:cb1d:112:6f00:353a:f33a:a393:3ada]) by smtp.gmail.com with ESMTPSA id s8sm28505480wra.55.2019.06.12.05.48.57 
From patchwork Wed Jun 12 12:48:29 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 11/20] crypto: arm/aes-neonbs - switch to library version of key expansion routine
Date: Wed, 12 Jun 2019 14:48:29 +0200
Message-Id: <20190612124838.2492-12-ard.biesheuvel@linaro.org>

Switch to the new AES library that also provides an implementation of
the AES key expansion routine. This removes the dependency on the
generic AES cipher, allowing it to be omitted entirely in the future.

Signed-off-by: Ard Biesheuvel
---
 arch/arm/crypto/Kconfig           | 2 +-
 arch/arm/crypto/aes-neonbs-glue.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
index a95322b59799..b24df84a1d7a 100644
--- a/arch/arm/crypto/Kconfig
+++ b/arch/arm/crypto/Kconfig
@@ -82,8 +82,8 @@ config CRYPTO_AES_ARM_BS
 	tristate "Bit sliced AES using NEON instructions"
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
+	select CRYPTO_LIB_AES
 	select CRYPTO_SIMD
-	select CRYPTO_AES
 	help
 	  Use a faster and more secure NEON based implementation of AES in CBC,
 	  CTR and XTS modes
diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
index 617c2c99ebfb..f43c9365b6a9 100644
--- a/arch/arm/crypto/aes-neonbs-glue.c
+++ b/arch/arm/crypto/aes-neonbs-glue.c
@@ -64,7 +64,7 @@ static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 	struct crypto_aes_ctx rk;
 	int err;
 
-	err = crypto_aes_expand_key(&rk, in_key, key_len);
+	err = aes_expandkey(&rk, in_key, key_len);
 	if (err)
 		return err;
 
@@ -123,7 +123,7 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 	struct crypto_aes_ctx rk;
 	int err;
 
-	err = crypto_aes_expand_key(&rk, in_key, key_len);
+	err = aes_expandkey(&rk, in_key, key_len);
 	if (err)
 		return err;
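[Editorial aside: the conversion is a drop-in rename. aes_expandkey() takes the same (ctx, in_key, key_len) arguments as the old crypto_aes_expand_key() and likewise fails only for key lengths other than 16, 24 or 32 bytes. A sketch of the resulting shape; names other than aes_expandkey() are illustrative.]

/* Sketch only; mirrors the aesbs_setkey() change above */
static int example_expand(struct crypto_aes_ctx *rk,
			  const u8 *in_key, unsigned int key_len)
{
	int err;

	err = aes_expandkey(rk, in_key, key_len);	/* -EINVAL on bad length */
	if (err)
		return err;

	/* rk->key_enc[] and rk->key_dec[] now hold the round keys */
	return 0;
}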
From patchwork Wed Jun 12 12:48:30 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 12/20] crypto: arm64/aes-ccm - switch to AES library
Date: Wed, 12 Jun 2019 14:48:30 +0200
Message-Id: <20190612124838.2492-13-ard.biesheuvel@linaro.org>

The CCM code calls directly into the scalar table-based AES cipher for
arm64 from the fallback path, and since that implementation is known
not to be time invariant, calling it from a time-invariant SIMD cipher
is a bit nasty. So let's switch to the AES library - this makes the
code more robust, and drops the dependency on the generic AES cipher,
allowing us to omit it entirely in the future.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig           |  2 +-
 arch/arm64/crypto/aes-ce-ccm-glue.c | 18 ++++++------------
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 1762055e7093..c6032bfb44fb 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -80,8 +80,8 @@ config CRYPTO_AES_ARM64_CE_CCM
 	depends on ARM64 && KERNEL_MODE_NEON
 	select CRYPTO_ALGAPI
 	select CRYPTO_AES_ARM64_CE
-	select CRYPTO_AES_ARM64
 	select CRYPTO_AEAD
+	select CRYPTO_LIB_AES
 
 config CRYPTO_AES_ARM64_CE_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index cb89c80800b5..b9b7cf4b5a8f 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -46,8 +46,6 @@ asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes,
 asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[],
 				 u32 rounds);
 
-asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-
 static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key,
 		      unsigned int key_len)
 {
@@ -127,8 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 	}
 
 	while (abytes >= AES_BLOCK_SIZE) {
-		__aes_arm64_encrypt(key->key_enc, mac, mac,
-				    num_rounds(key));
+		aes_encrypt(key, mac, mac);
 		crypto_xor(mac, in, AES_BLOCK_SIZE);
 
 		in += AES_BLOCK_SIZE;
@@ -136,8 +133,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
 	}
 
 	if (abytes > 0) {
-		__aes_arm64_encrypt(key->key_enc, mac, mac,
-				    num_rounds(key));
+		aes_encrypt(key, mac, mac);
 		crypto_xor(mac, in, abytes);
 		*macp = abytes;
 	}
@@ -209,10 +205,8 @@ static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[],
 			bsize = nbytes;
 
 		crypto_inc(walk->iv, AES_BLOCK_SIZE);
-		__aes_arm64_encrypt(ctx->key_enc, buf, walk->iv,
-				    num_rounds(ctx));
-		__aes_arm64_encrypt(ctx->key_enc, mac, mac,
-				    num_rounds(ctx));
+		aes_encrypt(ctx, buf, walk->iv);
+		aes_encrypt(ctx, mac, mac);
 		if (enc)
 			crypto_xor(mac, src, bsize);
 		crypto_xor_cpy(dst, src, buf, bsize);
@@ -227,8 +221,8 @@ static int ccm_crypt_fallback(struct skcipher_walk *walk, u8 mac[], u8 iv0[],
 	}
 
 	if (!err) {
-		__aes_arm64_encrypt(ctx->key_enc, buf, iv0, num_rounds(ctx));
-		__aes_arm64_encrypt(ctx->key_enc, mac, mac, num_rounds(ctx));
+		aes_encrypt(ctx, buf, iv0);
+		aes_encrypt(ctx, mac, mac);
 		crypto_xor(mac, buf, AES_BLOCK_SIZE);
 	}
 	return err;
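[Editorial aside: the CBC-MAC construction at the heart of CCM is easy to see in the converted code: each step encrypts the running MAC, then folds in the next block of input. A condensed sketch of ccm_update_mac()'s full-block loop, using only interfaces that appear in the diff above; the function name is illustrative.]

/* Illustrative condensation of ccm_update_mac()'s full-block loop */
static void cbcmac_blocks(const struct crypto_aes_ctx *key,
			  u8 mac[AES_BLOCK_SIZE],
			  const u8 *in, unsigned int blocks)
{
	while (blocks--) {
		aes_encrypt(key, mac, mac);		/* mac = AES-K(mac) */
		crypto_xor(mac, in, AES_BLOCK_SIZE);	/* fold in next block */
		in += AES_BLOCK_SIZE;
	}
}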
From patchwork Wed Jun 12 12:48:31 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 13/20] crypto: arm64/aes-neonbs - switch to library version of key expansion routine
Date: Wed, 12 Jun 2019 14:48:31 +0200
Message-Id: <20190612124838.2492-14-ard.biesheuvel@linaro.org>

Switch to the new AES library that also provides an implementation of
the AES key expansion routine. This removes the dependency on the
generic AES cipher, allowing it to be omitted entirely in the future.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig           | 1 +
 arch/arm64/crypto/aes-neonbs-glue.c | 8 ++++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index c6032bfb44fb..17bf5dc10aad 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -116,6 +116,7 @@ config CRYPTO_AES_ARM64_BS
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64_NEON_BLK
 	select CRYPTO_AES_ARM64
+	select CRYPTO_LIB_AES
 	select CRYPTO_SIMD
 
 endif
diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index 02b65d9eb947..cb8d90f795a0 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -77,7 +77,7 @@ static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 	struct crypto_aes_ctx rk;
 	int err;
 
-	err = crypto_aes_expand_key(&rk, in_key, key_len);
+	err = aes_expandkey(&rk, in_key, key_len);
 	if (err)
 		return err;
 
@@ -136,7 +136,7 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 	struct crypto_aes_ctx rk;
 	int err;
 
-	err = crypto_aes_expand_key(&rk, in_key, key_len);
+	err = aes_expandkey(&rk, in_key, key_len);
 	if (err)
 		return err;
 
@@ -208,7 +208,7 @@ static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key,
 	struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
 	int err;
 
-	err = crypto_aes_expand_key(&ctx->fallback, in_key, key_len);
+	err = aes_expandkey(&ctx->fallback, in_key, key_len);
 	if (err)
 		return err;
 
@@ -274,7 +274,7 @@ static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 		return err;
 
 	key_len /= 2;
-	err = crypto_aes_expand_key(&rk, in_key + key_len, key_len);
+	err = aes_expandkey(&rk, in_key + key_len, key_len);
 	if (err)
 		return err;
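[Editorial aside: for the bit-sliced driver, the library routine is only half of setkey. The expanded schedule still has to be converted to the bit-sliced layout under a NEON context, via the driver's own aesbs_convert_key() asm helper. A sketch of the expand-then-convert pattern with illustrative names; aesbs_convert_key(), kernel_neon_begin()/end() and the aesbs_ctx fields are real and appear in the diffs of this series.]

/* Sketch of the expand-then-convert setkey pattern (names illustrative) */
static int example_bs_setkey(struct aesbs_ctx *ctx, const u8 *in_key,
			     unsigned int key_len)
{
	struct crypto_aes_ctx rk;
	int err;

	err = aes_expandkey(&rk, in_key, key_len);	/* scalar, no NEON */
	if (err)
		return err;

	ctx->rounds = 6 + key_len / 4;			/* 10/12/14 rounds */

	kernel_neon_begin();
	aesbs_convert_key(ctx->rk, rk.key_enc, ctx->rounds);
	kernel_neon_end();

	memzero_explicit(&rk, sizeof(rk));
	return 0;
}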
From patchwork Wed Jun 12 12:48:32 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 14/20] crypto: arm64/aes-ce - switch to library version of key expansion routine
Date: Wed, 12 Jun 2019 14:48:32 +0200
Message-Id: <20190612124838.2492-15-ard.biesheuvel@linaro.org>

Switch to the new AES library that also provides an implementation of
the AES key expansion routine. This removes the dependency on the
generic AES cipher, allowing it to be omitted entirely in the future.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig    |  2 +-
 arch/arm64/crypto/aes-glue.c | 12 ++++++++----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 17bf5dc10aad..66dea518221c 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -96,7 +96,7 @@ config CRYPTO_AES_ARM64_NEON_BLK
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64
-	select CRYPTO_AES
+	select CRYPTO_LIB_AES
 	select CRYPTO_SIMD
 
 config CRYPTO_CHACHA20_NEON
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index f0ceb545bd1e..8fa17a764802 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -26,7 +26,6 @@
 #ifdef USE_V8_CRYPTO_EXTENSIONS
 #define MODE "ce"
 #define PRIO 300
-#define aes_setkey ce_aes_setkey
 #define aes_expandkey ce_aes_expandkey
 #define aes_ecb_encrypt ce_aes_ecb_encrypt
 #define aes_ecb_decrypt ce_aes_ecb_decrypt
@@ -42,8 +41,6 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
 #else
 #define MODE "neon"
 #define PRIO 200
-#define aes_setkey crypto_aes_set_key
-#define aes_expandkey crypto_aes_expand_key
 #define aes_ecb_encrypt neon_aes_ecb_encrypt
 #define aes_ecb_decrypt neon_aes_ecb_decrypt
 #define aes_cbc_encrypt neon_aes_cbc_encrypt
@@ -121,7 +118,14 @@ struct mac_desc_ctx {
 static int skcipher_aes_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 			       unsigned int key_len)
 {
-	return aes_setkey(crypto_skcipher_tfm(tfm), in_key, key_len);
+	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int ret;
+
+	ret = aes_expandkey(ctx, in_key, key_len);
+	if (ret)
+		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+
+	return ret;
 }
 
 static int xts_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
From patchwork Wed Jun 12 12:48:33 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 15/20] crypto: generic/aes - drop key expansion routine in favor of library version
Date: Wed, 12 Jun 2019 14:48:33 +0200
Message-Id: <20190612124838.2492-16-ard.biesheuvel@linaro.org>

Drop aes-generic's version of crypto_aes_expand_key(), and switch to
the key expansion routine provided by the AES library. AES key
expansion is not performance critical, and it is better to have a
single version shared by all AES implementations.

Signed-off-by: Ard Biesheuvel
---
 crypto/Kconfig       |   1 +
 crypto/aes_generic.c | 153 +-------------------
 include/crypto/aes.h |   2 -
 3 files changed, 3 insertions(+), 153 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 2ed65185dde8..3b08230fe3ba 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1065,6 +1065,7 @@ config CRYPTO_LIB_AES
 config CRYPTO_AES
 	tristate "AES cipher algorithms"
 	select CRYPTO_ALGAPI
+	select CRYPTO_LIB_AES
 	help
 	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
 	  algorithm.
diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c index 3aa4a715c216..426deb437f19 100644 --- a/crypto/aes_generic.c +++ b/crypto/aes_generic.c @@ -1125,155 +1125,6 @@ EXPORT_SYMBOL_GPL(crypto_fl_tab); EXPORT_SYMBOL_GPL(crypto_it_tab); EXPORT_SYMBOL_GPL(crypto_il_tab); -/* initialise the key schedule from the user supplied key */ - -#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b) - -#define imix_col(y, x) do { \ - u = star_x(x); \ - v = star_x(u); \ - w = star_x(v); \ - t = w ^ (x); \ - (y) = u ^ v ^ w; \ - (y) ^= ror32(u ^ t, 8) ^ \ - ror32(v ^ t, 16) ^ \ - ror32(t, 24); \ -} while (0) - -#define ls_box(x) \ - crypto_fl_tab[0][byte(x, 0)] ^ \ - crypto_fl_tab[1][byte(x, 1)] ^ \ - crypto_fl_tab[2][byte(x, 2)] ^ \ - crypto_fl_tab[3][byte(x, 3)] - -#define loop4(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[4 * i]; \ - ctx->key_enc[4 * i + 4] = t; \ - t ^= ctx->key_enc[4 * i + 1]; \ - ctx->key_enc[4 * i + 5] = t; \ - t ^= ctx->key_enc[4 * i + 2]; \ - ctx->key_enc[4 * i + 6] = t; \ - t ^= ctx->key_enc[4 * i + 3]; \ - ctx->key_enc[4 * i + 7] = t; \ -} while (0) - -#define loop6(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[6 * i]; \ - ctx->key_enc[6 * i + 6] = t; \ - t ^= ctx->key_enc[6 * i + 1]; \ - ctx->key_enc[6 * i + 7] = t; \ - t ^= ctx->key_enc[6 * i + 2]; \ - ctx->key_enc[6 * i + 8] = t; \ - t ^= ctx->key_enc[6 * i + 3]; \ - ctx->key_enc[6 * i + 9] = t; \ - t ^= ctx->key_enc[6 * i + 4]; \ - ctx->key_enc[6 * i + 10] = t; \ - t ^= ctx->key_enc[6 * i + 5]; \ - ctx->key_enc[6 * i + 11] = t; \ -} while (0) - -#define loop8tophalf(i) do { \ - t = ror32(t, 8); \ - t = ls_box(t) ^ rco_tab[i]; \ - t ^= ctx->key_enc[8 * i]; \ - ctx->key_enc[8 * i + 8] = t; \ - t ^= ctx->key_enc[8 * i + 1]; \ - ctx->key_enc[8 * i + 9] = t; \ - t ^= ctx->key_enc[8 * i + 2]; \ - ctx->key_enc[8 * i + 10] = t; \ - t ^= ctx->key_enc[8 * i + 3]; \ - ctx->key_enc[8 * i + 11] = t; \ -} while (0) - -#define loop8(i) do { \ - loop8tophalf(i); \ - t = ctx->key_enc[8 * i + 4] ^ ls_box(t); \ - ctx->key_enc[8 * i + 12] = t; \ - t ^= ctx->key_enc[8 * i + 5]; \ - ctx->key_enc[8 * i + 13] = t; \ - t ^= ctx->key_enc[8 * i + 6]; \ - ctx->key_enc[8 * i + 14] = t; \ - t ^= ctx->key_enc[8 * i + 7]; \ - ctx->key_enc[8 * i + 15] = t; \ -} while (0) - -/** - * crypto_aes_expand_key - Expands the AES key as described in FIPS-197 - * @ctx: The location where the computed key will be stored. - * @in_key: The supplied key. - * @key_len: The length of the supplied key. - * - * Returns 0 on success. The function fails only if an invalid key size (or - * pointer) is supplied. - * The expanded key size is 240 bytes (max of 14 rounds with a unique 16 bytes - * key schedule plus a 16 bytes key which is used before the first round). - * The decryption key is prepared for the "Equivalent Inverse Cipher" as - * described in FIPS-197. The first slot (16 bytes) of each key (enc or dec) is - * for the initial combination, the second slot for the first round and so on. 
- */
-int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
-		unsigned int key_len)
-{
-	u32 i, t, u, v, w, j;
-
-	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
-			key_len != AES_KEYSIZE_256)
-		return -EINVAL;
-
-	ctx->key_length = key_len;
-
-	ctx->key_enc[0] = get_unaligned_le32(in_key);
-	ctx->key_enc[1] = get_unaligned_le32(in_key + 4);
-	ctx->key_enc[2] = get_unaligned_le32(in_key + 8);
-	ctx->key_enc[3] = get_unaligned_le32(in_key + 12);
-
-	ctx->key_dec[key_len + 24] = ctx->key_enc[0];
-	ctx->key_dec[key_len + 25] = ctx->key_enc[1];
-	ctx->key_dec[key_len + 26] = ctx->key_enc[2];
-	ctx->key_dec[key_len + 27] = ctx->key_enc[3];
-
-	switch (key_len) {
-	case AES_KEYSIZE_128:
-		t = ctx->key_enc[3];
-		for (i = 0; i < 10; ++i)
-			loop4(i);
-		break;
-
-	case AES_KEYSIZE_192:
-		ctx->key_enc[4] = get_unaligned_le32(in_key + 16);
-		t = ctx->key_enc[5] = get_unaligned_le32(in_key + 20);
-		for (i = 0; i < 8; ++i)
-			loop6(i);
-		break;
-
-	case AES_KEYSIZE_256:
-		ctx->key_enc[4] = get_unaligned_le32(in_key + 16);
-		ctx->key_enc[5] = get_unaligned_le32(in_key + 20);
-		ctx->key_enc[6] = get_unaligned_le32(in_key + 24);
-		t = ctx->key_enc[7] = get_unaligned_le32(in_key + 28);
-		for (i = 0; i < 6; ++i)
-			loop8(i);
-		loop8tophalf(i);
-		break;
-	}
-
-	ctx->key_dec[0] = ctx->key_enc[key_len + 24];
-	ctx->key_dec[1] = ctx->key_enc[key_len + 25];
-	ctx->key_dec[2] = ctx->key_enc[key_len + 26];
-	ctx->key_dec[3] = ctx->key_enc[key_len + 27];
-
-	for (i = 4; i < key_len + 24; ++i) {
-		j = key_len + 24 - (i & ~3) + (i & 3);
-		imix_col(ctx->key_dec[j], ctx->key_enc[i]);
-	}
-	return 0;
-}
-EXPORT_SYMBOL_GPL(crypto_aes_expand_key);
-
 /**
  * crypto_aes_set_key - Set the AES key.
  * @tfm:	The %crypto_tfm that is used in the context.
@@ -1281,7 +1132,7 @@ EXPORT_SYMBOL_GPL(crypto_aes_expand_key);
  * @key_len:	The size of the key.
  *
  * Returns 0 on success, on failure the %CRYPTO_TFM_RES_BAD_KEY_LEN flag in tfm
- * is set. The function uses crypto_aes_expand_key() to expand the key.
+ * is set. The function uses aes_expandkey() to expand the key.
  * &crypto_aes_ctx _must_ be the private data embedded in @tfm which is
  * retrieved with crypto_tfm_ctx().
 */
@@ -1292,7 +1143,7 @@ int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 	u32 *flags = &tfm->crt_flags;
 	int ret;
 
-	ret = crypto_aes_expand_key(ctx, in_key, key_len);
+	ret = aes_expandkey(ctx, in_key, key_len);
 	if (!ret)
 		return 0;
 
diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index 72ead82d3f98..31ba40d803df 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -35,8 +35,6 @@ extern const u32 crypto_il_tab[4][256] ____cacheline_aligned;
 
 int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 		unsigned int key_len);
-int crypto_aes_expand_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
-		unsigned int key_len);
 
 /**
  * aes_expandkey - Expands the AES key as described in FIPS-197
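[Editorial aside: with the key expansion consolidated in the library, a complete single-block round trip needs nothing from the crypto API at all. A minimal sanity-check sketch using only the aes_expandkey()/aes_encrypt()/aes_decrypt() library interfaces; the function name is illustrative.]

#include <crypto/aes.h>
#include <linux/string.h>

/* Illustrative sanity check: encrypt one block and decrypt it back */
static int aes_lib_roundtrip(const u8 *key, unsigned int key_len,
			     const u8 pt[AES_BLOCK_SIZE])
{
	struct crypto_aes_ctx ctx;
	u8 ct[AES_BLOCK_SIZE], out[AES_BLOCK_SIZE];

	if (aes_expandkey(&ctx, key, key_len))
		return -EINVAL;

	aes_encrypt(&ctx, ct, pt);	/* out-of-place single block */
	aes_decrypt(&ctx, out, ct);

	return memcmp(out, pt, AES_BLOCK_SIZE) ? -EINVAL : 0;
}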
From patchwork Wed Jun 12 12:48:34 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 16/20] crypto: arm64/aes-ce-cipher - use AES library as fallback
Date: Wed, 12 Jun 2019 14:48:34 +0200
Message-Id: <20190612124838.2492-17-ard.biesheuvel@linaro.org>

Instead of calling into the table-based scalar AES code in situations
where the SIMD unit may not be used, use the generic AES code, which is
more appropriate since it is less likely to be susceptible to timing
attacks.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig           | 2 +-
 arch/arm64/crypto/aes-ce-glue.c     | 7 ++-----
 arch/arm64/crypto/aes-cipher-glue.c | 3 ---
 3 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 66dea518221c..4922c4451e7c 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -73,7 +73,7 @@ config CRYPTO_AES_ARM64_CE
 	tristate "AES core cipher using ARMv8 Crypto Extensions"
 	depends on ARM64 && KERNEL_MODE_NEON
 	select CRYPTO_ALGAPI
-	select CRYPTO_AES_ARM64
+	select CRYPTO_LIB_AES
 
 config CRYPTO_AES_ARM64_CE_CCM
diff --git a/arch/arm64/crypto/aes-ce-glue.c b/arch/arm64/crypto/aes-ce-glue.c
index 3213843fcb46..6890e003b8f1 100644
--- a/arch/arm64/crypto/aes-ce-glue.c
+++ b/arch/arm64/crypto/aes-ce-glue.c
@@ -23,9 +23,6 @@ MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions");
 MODULE_AUTHOR("Ard Biesheuvel");
 MODULE_LICENSE("GPL v2");
 
-asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-
 struct aes_block {
 	u8 b[AES_BLOCK_SIZE];
 };
@@ -54,7 +51,7 @@ static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (!crypto_simd_usable()) {
-		__aes_arm64_encrypt(ctx->key_enc, dst, src, num_rounds(ctx));
+		aes_encrypt(ctx, dst, src);
 		return;
 	}
 
@@ -68,7 +65,7 @@ static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
 	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (!crypto_simd_usable()) {
-		__aes_arm64_decrypt(ctx->key_dec, dst, src, num_rounds(ctx));
+		aes_decrypt(ctx, dst, src);
 		return;
 	}
 
diff --git a/arch/arm64/crypto/aes-cipher-glue.c b/arch/arm64/crypto/aes-cipher-glue.c
index 0e90b06ebcec..bf32cc6489e1 100644
--- a/arch/arm64/crypto/aes-cipher-glue.c
+++ b/arch/arm64/crypto/aes-cipher-glue.c
@@ -13,10 +13,7 @@
 #include
 asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-EXPORT_SYMBOL(__aes_arm64_encrypt);
-
 asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
-EXPORT_SYMBOL(__aes_arm64_decrypt);
 
 static void aes_arm64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
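[Editorial aside: the shape of the fallback in aes_cipher_encrypt()/aes_cipher_decrypt() above generalizes to every driver touched by this series: test crypto_simd_usable(), and route to the library when NEON cannot be used. Schematically, with an illustrative asm entry point standing in for the real one:]

/* Schematic fallback pattern; my_ce_encrypt_asm() is illustrative */
static void example_encrypt(struct crypto_aes_ctx *ctx, u8 *dst,
			    const u8 *src)
{
	if (!crypto_simd_usable()) {
		/* scalar library path, usable even in hard IRQ context */
		aes_encrypt(ctx, dst, src);
		return;
	}

	kernel_neon_begin();
	my_ce_encrypt_asm(ctx->key_enc, dst, src, 6 + ctx->key_length / 4);
	kernel_neon_end();
}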
From patchwork Wed Jun 12 12:48:35 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 17/20] crypto: aes - move ctr(aes) non-SIMD fallback to AES library
Date: Wed, 12 Jun 2019 14:48:35 +0200
Message-Id: <20190612124838.2492-18-ard.biesheuvel@linaro.org>

In preparation for duplicating the sync ctr(aes) functionality to
modules under arch/arm, move the helper function from an inline .h file
to the AES library, which is already depended upon by the drivers that
use this fallback.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/aes-ctr-fallback.h | 53 --------------------
 arch/arm64/crypto/aes-glue.c         | 17 ++++---
 arch/arm64/crypto/aes-neonbs-glue.c  | 12 +++--
 crypto/Kconfig                       |  1 +
 include/crypto/aes.h                 | 11 ++++
 lib/crypto/aes.c                     | 41 +++++++++++++++
 6 files changed, 72 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/crypto/aes-ctr-fallback.h b/arch/arm64/crypto/aes-ctr-fallback.h
deleted file mode 100644
index c9285717b6b5..000000000000
--- a/arch/arm64/crypto/aes-ctr-fallback.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Fallback for sync aes(ctr) in contexts where kernel mode NEON
- * is not allowed
- *
- * Copyright (C) 2017 Linaro Ltd
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */ - -#include -#include - -asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds); - -static inline int aes_ctr_encrypt_fallback(struct crypto_aes_ctx *ctx, - struct skcipher_request *req) -{ - struct skcipher_walk walk; - u8 buf[AES_BLOCK_SIZE]; - int err; - - err = skcipher_walk_virt(&walk, req, true); - - while (walk.nbytes > 0) { - u8 *dst = walk.dst.virt.addr; - u8 *src = walk.src.virt.addr; - int nbytes = walk.nbytes; - int tail = 0; - - if (nbytes < walk.total) { - nbytes = round_down(nbytes, AES_BLOCK_SIZE); - tail = walk.nbytes % AES_BLOCK_SIZE; - } - - do { - int bsize = min(nbytes, AES_BLOCK_SIZE); - - __aes_arm64_encrypt(ctx->key_enc, buf, walk.iv, - 6 + ctx->key_length / 4); - crypto_xor_cpy(dst, src, buf, bsize); - crypto_inc(walk.iv, AES_BLOCK_SIZE); - - dst += AES_BLOCK_SIZE; - src += AES_BLOCK_SIZE; - nbytes -= AES_BLOCK_SIZE; - } while (nbytes > 0); - - err = skcipher_walk_done(&walk, tail); - } - return err; -} diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c index 8fa17a764802..3d9cedbb91c9 100644 --- a/arch/arm64/crypto/aes-glue.c +++ b/arch/arm64/crypto/aes-glue.c @@ -21,7 +21,6 @@ #include #include "aes-ce-setkey.h" -#include "aes-ctr-fallback.h" #ifdef USE_V8_CRYPTO_EXTENSIONS #define MODE "ce" @@ -409,8 +408,15 @@ static int ctr_encrypt_sync(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); - if (!crypto_simd_usable()) - return aes_ctr_encrypt_fallback(ctx, req); + if (!crypto_simd_usable()) { + struct skcipher_walk walk; + int err; + + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + return skcipher_encrypt_aes_ctr(&walk, ctx); + } return ctr_encrypt(req); } @@ -653,15 +659,14 @@ static void mac_do_update(struct crypto_aes_ctx *ctx, u8 const in[], int blocks, kernel_neon_end(); } else { if (enc_before) - __aes_arm64_encrypt(ctx->key_enc, dg, dg, rounds); + aes_encrypt(ctx, dg, dg); while (blocks--) { crypto_xor(dg, in, AES_BLOCK_SIZE); in += AES_BLOCK_SIZE; if (blocks || enc_after) - __aes_arm64_encrypt(ctx->key_enc, dg, dg, - rounds); + aes_encrypt(ctx, dg, dg); } } } diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c index cb8d90f795a0..02d46e97c1e1 100644 --- a/arch/arm64/crypto/aes-neonbs-glue.c +++ b/arch/arm64/crypto/aes-neonbs-glue.c @@ -16,8 +16,6 @@ #include #include -#include "aes-ctr-fallback.h" - MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); @@ -288,9 +286,15 @@ static int ctr_encrypt_sync(struct skcipher_request *req) struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm); - if (!crypto_simd_usable()) - return aes_ctr_encrypt_fallback(&ctx->fallback, req); + if (!crypto_simd_usable()) { + struct skcipher_walk walk; + int err; + err = skcipher_walk_virt(&walk, req, true); + if (err) + return err; + return skcipher_encrypt_aes_ctr(&walk, &ctx->fallback); + } return ctr_encrypt(req); } diff --git a/crypto/Kconfig b/crypto/Kconfig index 3b08230fe3ba..efeb307c0594 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1061,6 +1061,7 @@ comment "Ciphers" config CRYPTO_LIB_AES tristate + select CRYPTO_ALGAPI config CRYPTO_AES tristate "AES cipher algorithms" diff --git a/include/crypto/aes.h b/include/crypto/aes.h index 31ba40d803df..f67c38500746 100644 --- a/include/crypto/aes.h +++ b/include/crypto/aes.h @@ -8,6 +8,8 @@ #include #include +#include +#include #define AES_MIN_KEY_SIZE 16 
#define AES_MAX_KEY_SIZE	32
@@ -69,4 +71,13 @@ void aes_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
  */
 void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
 
+/**
+ * skcipher_encrypt_aes_ctr - Process an aes(ctr) skcipher encryption request
+ *                            using the generic AES implementation.
+ * @walk: the skcipher walk data structure that describes the data to operate on
+ * @ctx: the AES key schedule
+ */
+int skcipher_encrypt_aes_ctr(struct skcipher_walk *walk,
+			     const struct crypto_aes_ctx *ctx);
+
 #endif
diff --git a/lib/crypto/aes.c b/lib/crypto/aes.c
index 57596148b010..f5ef29eaa714 100644
--- a/lib/crypto/aes.c
+++ b/lib/crypto/aes.c
@@ -363,6 +363,47 @@ void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in)
 }
 EXPORT_SYMBOL(aes_decrypt);
 
+/**
+ * skcipher_encrypt_aes_ctr - Process an aes(ctr) skcipher encryption request
+ *                            using the generic AES implementation.
+ * @walk: the skcipher walk data structure that describes the data to operate on
+ * @ctx: the AES key schedule
+ */
+int skcipher_encrypt_aes_ctr(struct skcipher_walk *walk,
+			     const struct crypto_aes_ctx *ctx)
+{
+	u8 buf[AES_BLOCK_SIZE];
+	int err = 0;
+
+	while (walk->nbytes > 0) {
+		u8 *dst = walk->dst.virt.addr;
+		u8 *src = walk->src.virt.addr;
+		int nbytes = walk->nbytes;
+		int tail = 0;
+
+		if (nbytes < walk->total) {
+			nbytes = round_down(nbytes, AES_BLOCK_SIZE);
+			tail = walk->nbytes % AES_BLOCK_SIZE;
+		}
+
+		do {
+			int bsize = min(nbytes, AES_BLOCK_SIZE);
+
+			aes_encrypt(ctx, buf, walk->iv);
+			crypto_xor_cpy(dst, src, buf, bsize);
+			crypto_inc(walk->iv, AES_BLOCK_SIZE);
+
+			dst += AES_BLOCK_SIZE;
+			src += AES_BLOCK_SIZE;
+			nbytes -= AES_BLOCK_SIZE;
+		} while (nbytes > 0);
+
+		err = skcipher_walk_done(walk, tail);
+	}
+	return err;
+}
+EXPORT_SYMBOL(skcipher_encrypt_aes_ctr);
+
 MODULE_DESCRIPTION("Generic AES library");
 MODULE_AUTHOR("Ard Biesheuvel");
 MODULE_LICENSE("GPL v2");
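[Editorial aside: callers are expected to set up the walk themselves and hand it to the new helper; the later patches in this series wire up their sync ctr(aes) fallbacks in exactly this shape. The function name below is illustrative.]

/* Illustrative caller of the helper introduced above */
static int example_ctr_fallback(struct skcipher_request *req,
				const struct crypto_aes_ctx *ctx)
{
	struct skcipher_walk walk;
	int err;

	/* atomic walk: the caller may not be allowed to sleep */
	err = skcipher_walk_virt(&walk, req, true);
	if (err)
		return err;

	return skcipher_encrypt_aes_ctr(&walk, ctx);
}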
From patchwork Wed Jun 12 12:48:36 2019
From: Ard Biesheuvel
Subject: [RFC PATCH 18/20] crypto: arm/aes-ce - provide a synchronous version of ctr(aes)
Date: Wed, 12 Jun 2019 14:48:36 +0200
Message-Id: <20190612124838.2492-19-ard.biesheuvel@linaro.org>

AES in CTR mode is used by modes such as GCM and CCM, which are often
used in contexts where only synchronous ciphers are permitted. So
provide a synchronous version of ctr(aes) based on the existing code.
This requires a non-SIMD fallback to deal with invocations occurring
from a context where SIMD instructions may not be used. We have a
helper for this now in the AES library, so wire that up.
Signed-off-by: Ard Biesheuvel
---
 arch/arm/crypto/aes-ce-glue.c | 36 ++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index 04ba66903674..cdcc4b09e7db 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -10,6 +10,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -292,6 +293,23 @@ static int ctr_encrypt(struct skcipher_request *req)
 	return err;
 }
 
+static int ctr_encrypt_sync(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (!crypto_simd_usable()) {
+		struct skcipher_walk walk;
+		int err;
+
+		err = skcipher_walk_virt(&walk, req, true);
+		if (err)
+			return err;
+		return skcipher_encrypt_aes_ctr(&walk, ctx);
+	}
+	return ctr_encrypt(req);
+}
+
 static int xts_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -381,6 +399,21 @@ static struct skcipher_alg aes_algs[] = { {
 	.setkey			= ce_aes_setkey,
 	.encrypt		= ctr_encrypt,
 	.decrypt		= ctr_encrypt,
+}, {
+	.base.cra_name		= "ctr(aes)",
+	.base.cra_driver_name	= "ctr-aes-ce-sync",
+	.base.cra_priority	= 300 - 1,
+	.base.cra_blocksize	= 1,
+	.base.cra_ctxsize	= sizeof(struct crypto_aes_ctx),
+	.base.cra_module	= THIS_MODULE,
+
+	.min_keysize		= AES_MIN_KEY_SIZE,
+	.max_keysize		= AES_MAX_KEY_SIZE,
+	.ivsize			= AES_BLOCK_SIZE,
+	.chunksize		= AES_BLOCK_SIZE,
+	.setkey			= ce_aes_setkey,
+	.encrypt		= ctr_encrypt_sync,
+	.decrypt		= ctr_encrypt_sync,
 }, {
 	.base.cra_name		= "__xts(aes)",
 	.base.cra_driver_name	= "__xts-aes-ce",
@@ -424,6 +457,9 @@ static int __init aes_init(void)
 		return err;
 
 	for (i = 0; i < ARRAY_SIZE(aes_algs); i++) {
+		if (!(aes_algs[i].base.cra_flags & CRYPTO_ALG_INTERNAL))
+			continue;
+
 		algname = aes_algs[i].base.cra_name + 2;
 		drvname = aes_algs[i].base.cra_driver_name + 2;
 		basename = aes_algs[i].base.cra_driver_name;
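[Editorial aside: for context, this is how such a synchronous instance gets selected. A user that cannot tolerate asynchronous implementations masks out CRYPTO_ALG_ASYNC when allocating the transform, so only sync algorithms like the "ctr-aes-ce-sync" instance registered above qualify. A sketch, using the standard crypto API allocator:]

/* Sketch: request a synchronous ctr(aes) implementation */
static struct crypto_skcipher *get_sync_ctr_aes(void)
{
	/* masking CRYPTO_ALG_ASYNC excludes async implementations */
	return crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC);
}

[The cra_priority of 300 - 1 keeps the plain async "ctr-aes-ce" preferred whenever the caller can use it.]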
From patchwork Wed Jun 12 12:48:37 2019
X-Patchwork-Id: 10989921
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [RFC PATCH 19/20] crypto: arm/aes-neonbs - provide a synchronous version of ctr(aes)
Date: Wed, 12 Jun 2019 14:48:37 +0200
Message-Id: <20190612124838.2492-20-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>

AES in CTR mode is used by modes such as GCM and CCM, which are often
used in contexts where only synchronous ciphers are permitted. So
provide a synchronous version of ctr(aes) based on the existing code.
This requires a non-SIMD fallback to deal with invocations occurring
from a context where SIMD instructions may not be used. We have a
helper for this now in the AES library, so wire that up.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/crypto/aes-neonbs-glue.c | 58 ++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
index f43c9365b6a9..62cadb92379b 100644
--- a/arch/arm/crypto/aes-neonbs-glue.c
+++ b/arch/arm/crypto/aes-neonbs-glue.c
@@ -9,6 +9,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -57,6 +58,11 @@ struct aesbs_xts_ctx {
 	struct crypto_cipher *tweak_tfm;
 };
 
+struct aesbs_ctr_ctx {
+	struct aesbs_ctx key;		/* must be first member */
+	struct crypto_aes_ctx fallback;
+};
+
 static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 			unsigned int key_len)
 {
@@ -192,6 +198,25 @@ static void cbc_exit(struct crypto_tfm *tfm)
 	crypto_free_cipher(ctx->enc_tfm);
 }
 
+static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key,
+				 unsigned int key_len)
+{
+	struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int err;
+
+	err = aes_expandkey(&ctx->fallback, in_key, key_len);
+	if (err)
+		return err;
+
+	ctx->key.rounds = 6 + key_len / 4;
+
+	kernel_neon_begin();
+	aesbs_convert_key(ctx->key.rk, ctx->fallback.key_enc, ctx->key.rounds);
+	kernel_neon_end();
+
+	return 0;
+}
+
 static int ctr_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -234,6 +259,23 @@ static int ctr_encrypt(struct skcipher_request *req)
 	return err;
 }
 
+static int ctr_encrypt_sync(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (!crypto_simd_usable()) {
+		struct skcipher_walk walk;
+		int err;
+
+		err = skcipher_walk_virt(&walk, req, true);
+		if (err)
+			return err;
+		return skcipher_encrypt_aes_ctr(&walk, &ctx->fallback);
+	}
+	return ctr_encrypt(req);
+}
+
 static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 			    unsigned int key_len)
 {
@@ -361,6 +403,22 @@ static struct skcipher_alg aes_algs[] = { {
 	.setkey			= aesbs_setkey,
 	.encrypt		= ctr_encrypt,
 	.decrypt		= ctr_encrypt,
+}, {
+	.base.cra_name		= "ctr(aes)",
+	.base.cra_driver_name	= "ctr-aes-neonbs-sync",
+	.base.cra_priority	= 250 - 1,
+	.base.cra_blocksize	= 1,
+	.base.cra_ctxsize	= sizeof(struct aesbs_ctr_ctx),
+	.base.cra_module	= THIS_MODULE,
+
+	.min_keysize		= AES_MIN_KEY_SIZE,
+	.max_keysize		= AES_MAX_KEY_SIZE,
+	.chunksize		= AES_BLOCK_SIZE,
+	.walksize		= 8 * AES_BLOCK_SIZE,
+	.ivsize			= AES_BLOCK_SIZE,
+	.setkey			= aesbs_ctr_setkey_sync,
+	.encrypt		= ctr_encrypt_sync,
+	.decrypt		= ctr_encrypt_sync,
 }, {
 	.base.cra_name		= "__xts(aes)",
 	.base.cra_driver_name	= "__xts-aes-neonbs",
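[Annotation] A design note on the setkey path above: the bit-sliced NEON
key representation produced by aesbs_convert_key() cannot be consumed by
scalar code, so the sync variant keeps a second, generic key schedule
(ctx->fallback, expanded with aes_expandkey()) purely for the
!crypto_simd_usable() path. The round count follows the standard AES rule,
6 + key_len / 4, i.e. 10, 12 and 14 rounds for 16-, 24- and 32-byte keys.
To see how a sync-only user ends up on one of these new algorithms, a
sketch (not part of the patch; crypto_alloc_sync_skcipher() is the real
kernel API and excludes CRYPTO_ALG_ASYNC implementations):

	#include <crypto/skcipher.h>

	/*
	 * Illustrative sketch only: request a synchronous ctr(aes).
	 * Async implementations are masked out, so on ARM this would
	 * resolve to "ctr-aes-ce-sync" (priority 299) when the Crypto
	 * Extensions are present, or "ctr-aes-neonbs-sync" (priority
	 * 249) otherwise.
	 */
	static struct crypto_sync_skcipher *get_sync_ctr_aes(void)
	{
		return crypto_alloc_sync_skcipher("ctr(aes)", 0, 0);
	}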
From patchwork Wed Jun 12 12:48:38 2019
X-Patchwork-Id: 10989923
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [RFC PATCH 20/20] crypto: arm/ghash - provide a synchronous version
Date: Wed, 12 Jun 2019 14:48:38 +0200
Message-Id: <20190612124838.2492-21-ard.biesheuvel@linaro.org>
In-Reply-To: <20190612124838.2492-1-ard.biesheuvel@linaro.org>
References: <20190612124838.2492-1-ard.biesheuvel@linaro.org>

GHASH is used by the GCM mode, which is often used in contexts where
only synchronous ciphers are permitted. So provide a synchronous version
of GHASH based on the existing code. This requires a non-SIMD fallback
to deal with invocations occurring from a context where SIMD
instructions may not be used.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/crypto/ghash-ce-glue.c | 78 +++++++++++++-------
 1 file changed, 52 insertions(+), 26 deletions(-)

diff --git a/arch/arm/crypto/ghash-ce-glue.c b/arch/arm/crypto/ghash-ce-glue.c
index 39d1ccec1aab..ebb237ca874b 100644
--- a/arch/arm/crypto/ghash-ce-glue.c
+++ b/arch/arm/crypto/ghash-ce-glue.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -33,6 +34,8 @@ struct ghash_key {
 	u64	h2[2];
 	u64	h3[2];
 	u64	h4[2];
+
+	be128	k;
 };
 
 struct ghash_desc_ctx {
@@ -65,6 +68,36 @@ static int ghash_init(struct shash_desc *desc)
 	return 0;
 }
 
+static void ghash_do_update(int blocks, u64 dg[], const char *src,
+			    struct ghash_key *key, const char *head)
+{
+	if (likely(crypto_simd_usable())) {
+		kernel_neon_begin();
+		pmull_ghash_update(blocks, dg, src, key, head);
+		kernel_neon_end();
+	} else {
+		be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) };
+
+		do {
+			const u8 *in = src;
+
+			if (head) {
+				in = head;
+				blocks++;
+				head = NULL;
+			} else {
+				src += GHASH_BLOCK_SIZE;
+			}
+
+			crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE);
+			gf128mul_lle(&dst, &key->k);
+		} while (--blocks);
+
+		dg[0] = be64_to_cpu(dst.b);
+		dg[1] = be64_to_cpu(dst.a);
+	}
+}
+
 static int ghash_update(struct shash_desc *desc, const u8 *src,
 			unsigned int len)
 {
@@ -88,10 +121,8 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 			blocks = len / GHASH_BLOCK_SIZE;
 			len %= GHASH_BLOCK_SIZE;
 
-			kernel_neon_begin();
-			pmull_ghash_update(blocks, ctx->digest, src, key,
-					   partial ? ctx->buf : NULL);
-			kernel_neon_end();
+			ghash_do_update(blocks, ctx->digest, src, key,
+					partial ? ctx->buf : NULL);
 			src += blocks * GHASH_BLOCK_SIZE;
 			partial = 0;
 		}
@@ -109,9 +140,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)
 		struct ghash_key *key = crypto_shash_ctx(desc->tfm);
 
 		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
-		kernel_neon_begin();
-		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL);
-		kernel_neon_end();
+		ghash_do_update(1, ctx->digest, ctx->buf, key, NULL);
 	}
 	put_unaligned_be64(ctx->digest[1], dst);
 	put_unaligned_be64(ctx->digest[0], dst + 8);
@@ -135,24 +164,25 @@ static int ghash_setkey(struct crypto_shash *tfm,
 			const u8 *inkey, unsigned int keylen)
 {
 	struct ghash_key *key = crypto_shash_ctx(tfm);
-	be128 h, k;
+	be128 h;
 
 	if (keylen != GHASH_BLOCK_SIZE) {
 		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
 		return -EINVAL;
 	}
 
-	memcpy(&k, inkey, GHASH_BLOCK_SIZE);
-	ghash_reflect(key->h, &k);
+	/* needed for the fallback */
+	memcpy(&key->k, inkey, GHASH_BLOCK_SIZE);
+	ghash_reflect(key->h, &key->k);
 
-	h = k;
-	gf128mul_lle(&h, &k);
+	h = key->k;
+	gf128mul_lle(&h, &key->k);
 	ghash_reflect(key->h2, &h);
 
-	gf128mul_lle(&h, &k);
+	gf128mul_lle(&h, &key->k);
 	ghash_reflect(key->h3, &h);
 
-	gf128mul_lle(&h, &k);
+	gf128mul_lle(&h, &key->k);
 	ghash_reflect(key->h4, &h);
 
 	return 0;
@@ -165,15 +195,13 @@ static struct shash_alg ghash_alg = {
 	.final			= ghash_final,
 	.setkey			= ghash_setkey,
 	.descsize		= sizeof(struct ghash_desc_ctx),
-	.base = {
-		.cra_name	= "__ghash",
-		.cra_driver_name = "__driver-ghash-ce",
-		.cra_priority	= 0,
-		.cra_flags	= CRYPTO_ALG_INTERNAL,
-		.cra_blocksize	= GHASH_BLOCK_SIZE,
-		.cra_ctxsize	= sizeof(struct ghash_key),
-		.cra_module	= THIS_MODULE,
-	},
+
+	.base.cra_name		= "ghash",
+	.base.cra_driver_name	= "ghash-ce-sync",
+	.base.cra_priority	= 300 - 1,
+	.base.cra_blocksize	= GHASH_BLOCK_SIZE,
+	.base.cra_ctxsize	= sizeof(struct ghash_key),
+	.base.cra_module	= THIS_MODULE,
 };
 
 static int ghash_async_init(struct ahash_request *req)
@@ -288,9 +316,7 @@ static int ghash_async_init_tfm(struct crypto_tfm *tfm)
 	struct cryptd_ahash *cryptd_tfm;
 	struct ghash_async_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	cryptd_tfm = cryptd_alloc_ahash("__driver-ghash-ce",
-					CRYPTO_ALG_INTERNAL,
-					CRYPTO_ALG_INTERNAL);
+	cryptd_tfm = cryptd_alloc_ahash("ghash-ce-sync", 0, 0);
 	if (IS_ERR(cryptd_tfm))
 		return PTR_ERR(cryptd_tfm);
 	ctx->cryptd_tfm = cryptd_tfm;
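[Annotation] The scalar path in ghash_do_update() is a direct restatement
of the GHASH definition: for each 16-byte block, the running digest is
XORed with the input and the result is multiplied by the hash key H in
GF(2^128). That is why ghash_setkey() now stashes the unprocessed key
(key->k) alongside the PMULL-reflected powers of H. A minimal restatement
of the per-block step, using only helpers that appear in the patch
(gf128mul_lle(), crypto_xor()); the function name is hypothetical:

	#include <crypto/algapi.h>
	#include <crypto/gf128mul.h>

	/*
	 * Illustrative sketch only: one block of scalar GHASH,
	 * digest = (digest ^ block) * H in GF(2^128).
	 */
	static void ghash_one_block(be128 *digest, const be128 *h,
				    const u8 block[16])
	{
		crypto_xor((u8 *)digest, block, sizeof(*digest));
		gf128mul_lle(digest, h);
	}

Note also the knock-on change at the bottom of the diff: with the shash no
longer marked CRYPTO_ALG_INTERNAL and renamed to "ghash-ce-sync", the
cryptd-based async wrapper now allocates it by that name with empty
type/mask flags.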