From patchwork Tue Oct 3 23:00:16 2017
X-Patchwork-Submitter: Christian Lamparter
X-Patchwork-Id: 9983641
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Christian Lamparter
To: linux-crypto@vger.kernel.org
Cc: Herbert Xu
Subject: [RFC 12/13] crypto: crypto4xx: add aes-ccm support
Date: Wed, 4 Oct 2017 01:00:16 +0200
Message-Id: <13a1688295f004c6489cfad1b4cbeeebafd636bf.1507070985.git.chunkeey@gmail.com>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <8c9b4bc7e3a88970fe0fc308034627b8ae972600.1507070985.git.chunkeey@gmail.com>
References: <8c9b4bc7e3a88970fe0fc308034627b8ae972600.1507070985.git.chunkeey@gmail.com>
List-ID: linux-crypto@vger.kernel.org

This patch adds aes-ccm support.

Signed-off-by: Christian Lamparter
---
 drivers/crypto/amcc/crypto4xx_alg.c  | 185 +++++++++++++++++++++++++++++++++++
 drivers/crypto/amcc/crypto4xx_core.c |  23 +++
 drivers/crypto/amcc/crypto4xx_core.h |   8 ++
 3 files changed, 216 insertions(+)

diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
index dd4241a5bf56..b1c4783feab9 100644
--- a/drivers/crypto/amcc/crypto4xx_alg.c
+++ b/drivers/crypto/amcc/crypto4xx_alg.c
@@ -231,6 +231,191 @@ int crypto4xx_rfc3686_decrypt(struct ablkcipher_request *req)
 				  ctx->sa_out, ctx->sa_len, 0);
 }
 
+static inline bool crypto4xx_aead_need_fallback(struct aead_request *req,
+						bool is_ccm, bool decrypt)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+
+	/* authsize has to be a multiple of 4 */
+	if (aead->authsize & 3)
+		return true;
+
+	/*
+	 * hardware does not handle cases where cryptlen
+	 * is less than a block
+	 */
+	if (req->cryptlen < AES_BLOCK_SIZE)
+		return true;
+
+	/* assoc len needs to be a multiple of 4 */
+	if (req->assoclen & 0x3)
+		return true;
+
+	/* CCM supports only counter field length of 2 and 4 bytes */
+	if (is_ccm && !(req->iv[0] == 1 || req->iv[0] == 3))
+		return true;
+
+	/* CCM - fix CBC MAC mismatch in special case */
+	if (is_ccm && decrypt && !req->assoclen)
+		return true;
+
+	return false;
+}
+
+static int crypto4xx_aead_fallback(struct aead_request *req,
+	struct crypto4xx_ctx *ctx, bool do_decrypt)
+{
+	char aead_req_data[sizeof(struct aead_request) +
+			   crypto_aead_reqsize(ctx->sw_cipher.aead)]
+		__aligned(__alignof__(struct aead_request));
+
+	struct aead_request *subreq = (void *) aead_req_data;
+
+	memset(subreq, 0, sizeof(aead_req_data));
+
+	aead_request_set_tfm(subreq, ctx->sw_cipher.aead);
+	aead_request_set_callback(subreq, req->base.flags,
+				  req->base.complete, req->base.data);
+	aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+			       req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+	return do_decrypt ? crypto_aead_decrypt(subreq) :
+			    crypto_aead_encrypt(subreq);
+}
+
+static int crypto4xx_setup_fallback(struct crypto4xx_ctx *ctx,
+				    struct crypto_aead *cipher,
+				    const u8 *key,
+				    unsigned int keylen)
+{
+	int rc;
+
+	crypto_aead_clear_flags(ctx->sw_cipher.aead, CRYPTO_TFM_REQ_MASK);
+	crypto_aead_set_flags(ctx->sw_cipher.aead,
+		crypto_aead_get_flags(cipher) & CRYPTO_TFM_REQ_MASK);
+	rc = crypto_aead_setkey(ctx->sw_cipher.aead, key, keylen);
+	crypto_aead_clear_flags(cipher, CRYPTO_TFM_RES_MASK);
+	crypto_aead_set_flags(cipher,
+		crypto_aead_get_flags(ctx->sw_cipher.aead) &
+		CRYPTO_TFM_RES_MASK);
+
+	return rc;
+}
+
+/**
+ * AES-CCM Functions
+ */
+
+int crypto4xx_setkey_aes_ccm(struct crypto_aead *cipher, const u8 *key,
+			     unsigned int keylen)
+{
+	struct crypto_tfm *tfm = crypto_aead_tfm(cipher);
+	struct crypto4xx_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct dynamic_sa_ctl *sa;
+	int rc = 0;
+
+	rc = crypto4xx_setup_fallback(ctx, cipher, key, keylen);
+	if (rc)
+		return rc;
+
+	if (ctx->sa_in || ctx->sa_out)
+		crypto4xx_free_sa(ctx);
+
+	rc = crypto4xx_alloc_sa(ctx, SA_AES128_CCM_LEN + (keylen - 16) / 4);
+	if (rc)
+		return rc;
+
+	/* Setup SA */
+	sa = (struct dynamic_sa_ctl *) ctx->sa_in;
+	sa->sa_contents.w = SA_AES_CCM_CONTENTS | (keylen << 2);
+
+	set_dynamic_sa_command_0(sa, SA_NOT_SAVE_HASH, SA_NOT_SAVE_IV,
+				 SA_LOAD_HASH_FROM_SA, SA_LOAD_IV_FROM_STATE,
+				 SA_NO_HEADER_PROC, SA_HASH_ALG_CBC_MAC,
+				 SA_CIPHER_ALG_AES,
+				 SA_PAD_TYPE_ZERO, SA_OP_GROUP_BASIC,
+				 SA_OPCODE_HASH_DECRYPT, DIR_INBOUND);
+
+	set_dynamic_sa_command_1(sa, CRYPTO_MODE_CTR, SA_HASH_MODE_HASH,
+				 CRYPTO_FEEDBACK_MODE_NO_FB, SA_EXTENDED_SN_OFF,
+				 SA_SEQ_MASK_OFF, SA_MC_ENABLE,
+				 SA_NOT_COPY_PAD, SA_COPY_PAYLOAD,
+				 SA_NOT_COPY_HDR);
+
+	sa->sa_command_1.bf.key_len = keylen >> 3;
+
+	crypto4xx_memcpy_to_le32(get_dynamic_sa_key_field(sa), key, keylen);
+
+	memcpy(ctx->sa_out, ctx->sa_in, ctx->sa_len * 4);
+	sa = (struct dynamic_sa_ctl *) ctx->sa_out;
+
+	set_dynamic_sa_command_0(sa, SA_SAVE_HASH, SA_NOT_SAVE_IV,
+				 SA_LOAD_HASH_FROM_SA, SA_LOAD_IV_FROM_STATE,
+				 SA_NO_HEADER_PROC, SA_HASH_ALG_CBC_MAC,
+				 SA_CIPHER_ALG_AES,
+				 SA_PAD_TYPE_ZERO, SA_OP_GROUP_BASIC,
+				 SA_OPCODE_ENCRYPT_HASH, DIR_OUTBOUND);
+
+	set_dynamic_sa_command_1(sa, CRYPTO_MODE_CTR, SA_HASH_MODE_HASH,
+				 CRYPTO_FEEDBACK_MODE_NO_FB, SA_EXTENDED_SN_OFF,
+				 SA_SEQ_MASK_OFF, SA_MC_ENABLE,
+				 SA_COPY_PAD, SA_COPY_PAYLOAD,
+				 SA_NOT_COPY_HDR);
+
+	sa->sa_command_1.bf.key_len = keylen >> 3;
+	return 0;
+}
+
+static int crypto4xx_crypt_aes_ccm(struct aead_request *req, bool decrypt)
+{
+	struct crypto4xx_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	unsigned int len = req->cryptlen;
+	__le32 iv[16];
+	u32 tmp_sa[ctx->sa_len * 4];
+	struct dynamic_sa_ctl *sa = (struct dynamic_sa_ctl *)tmp_sa;
+
+	if (crypto4xx_aead_need_fallback(req, true, decrypt))
+		return crypto4xx_aead_fallback(req, ctx, decrypt);
+
+	if (decrypt)
+		len -= crypto_aead_authsize(aead);
+
+	memcpy(tmp_sa, decrypt ? ctx->sa_in : ctx->sa_out, sizeof(tmp_sa));
+	sa->sa_command_0.bf.digest_len = crypto_aead_authsize(aead) >> 2;
+
+	if (req->iv[0] == 1) {
+		/* CRYPTO_MODE_AES_ICM */
+		sa->sa_command_1.bf.crypto_mode9_8 = 1;
+	}
+
+	iv[3] = cpu_to_le32(0);
+	crypto4xx_memcpy_to_le32(iv, req->iv, 16 - (req->iv[0] + 1));
+
+	return crypto4xx_build_pd(&req->base, ctx, req->src, req->dst,
+				  len, iv, sizeof(iv),
+				  sa, ctx->sa_len, req->assoclen);
+}
+
+int crypto4xx_encrypt_aes_ccm(struct aead_request *req)
+{
+	return crypto4xx_crypt_aes_ccm(req, false);
+}
+
+int crypto4xx_decrypt_aes_ccm(struct aead_request *req)
+{
+	return crypto4xx_crypt_aes_ccm(req, true);
+}
+
+int crypto4xx_setauthsize_aead(struct crypto_aead *cipher,
+			       unsigned int authsize)
+{
+	struct crypto_tfm *tfm = crypto_aead_tfm(cipher);
+	struct crypto4xx_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	return crypto_aead_setauthsize(ctx->sw_cipher.aead, authsize);
+}
+
 /**
  * HASH SHA1 Functions
  */
diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
index b5108259f1a6..85c650323c97 100644
--- a/drivers/crypto/amcc/crypto4xx_core.c
+++ b/drivers/crypto/amcc/crypto4xx_core.c
@@ -1210,6 +1210,29 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
 			}
 		}
 	} },
+
+	/* AEAD */
+	{ .type = CRYPTO_ALG_TYPE_AEAD, .u.aead = {
+		.setkey		= crypto4xx_setkey_aes_ccm,
+		.setauthsize	= crypto4xx_setauthsize_aead,
+		.encrypt	= crypto4xx_encrypt_aes_ccm,
+		.decrypt	= crypto4xx_decrypt_aes_ccm,
+		.init		= crypto4xx_aead_init,
+		.exit		= crypto4xx_aead_exit,
+		.ivsize		= AES_BLOCK_SIZE,
+		.maxauthsize	= 16,
+		.base = {
+			.cra_name	= "ccm(aes)",
+			.cra_driver_name = "ccm-aes-ppc4xx",
+			.cra_priority	= CRYPTO4XX_CRYPTO_PRIORITY,
+			.cra_flags	= CRYPTO_ALG_ASYNC |
+					  CRYPTO_ALG_NEED_FALLBACK |
+					  CRYPTO_ALG_KERN_DRIVER_ONLY,
+			.cra_blocksize	= 1,
+			.cra_ctxsize	= sizeof(struct crypto4xx_ctx),
+			.cra_module	= THIS_MODULE,
+		},
+	} },
 };
 
 /**
diff --git a/drivers/crypto/amcc/crypto4xx_core.h b/drivers/crypto/amcc/crypto4xx_core.h
index ab89c2af1e90..bdd5954c2388 100644
--- a/drivers/crypto/amcc/crypto4xx_core.h
+++ b/drivers/crypto/amcc/crypto4xx_core.h
@@ -222,4 +222,12 @@ static inline void crypto4xx_memcpy_to_le32(__le32 *dst, const void *buf,
 {
 	crypto4xx_memcpy_swab32((u32 *)dst, buf, len);
 }
+
+int crypto4xx_setauthsize_aead(struct crypto_aead *cipher,
+			       unsigned int authsize);
+int crypto4xx_setkey_aes_ccm(struct crypto_aead *cipher,
+			     const u8 *key, unsigned int keylen);
+int crypto4xx_encrypt_aes_ccm(struct aead_request *req);
+int crypto4xx_decrypt_aes_ccm(struct aead_request *req);
+
 #endif
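[Editor's note, not part of the patch] For readers unfamiliar with the AEAD side of the kernel crypto API, the sketch below shows how a kernel caller could exercise the "ccm(aes)" transform this patch registers. It is a minimal, hypothetical example: the function name ppc4xx_ccm_demo(), the all-zero key, and the buffer sizes are invented for illustration, and it assumes the crypto_wait_req()/DECLARE_CRYPTO_WAIT() helpers found in contemporary mainline kernels. The parameters are chosen to stay on the hardware path of the driver (authsize and assoclen are multiples of 4, cryptlen >= AES_BLOCK_SIZE, iv[0] == 3 selects a 4-byte counter field), so the software fallback added above is not taken.

#include <crypto/aead.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>

static int ppc4xx_ccm_demo(void)
{
	static const u8 key[16];		/* AES-128 key, all zero for the demo */
	u8 iv[16];
	u8 *buf = NULL;
	struct crypto_aead *tfm;
	struct aead_request *req = NULL;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* resolves to ccm-aes-ppc4xx when the driver is loaded and wins on priority */
	tfm = crypto_alloc_aead("ccm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, sizeof(key));
	if (!err)
		err = crypto_aead_setauthsize(tfm, 8);	/* multiple of 4: no fallback */
	if (err)
		goto out_free_tfm;

	/* 16 bytes AAD + 32 bytes plaintext + 8 bytes tag, processed in place */
	buf = kzalloc(16 + 32 + 8, GFP_KERNEL);
	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!buf || !req) {
		err = -ENOMEM;
		goto out_free_all;
	}

	/* CCM IV: iv[0] = L - 1 = 3 (4-byte counter), then the 11-byte nonce, counter zero */
	memset(iv, 0, sizeof(iv));
	iv[0] = 3;

	sg_init_one(&sg, buf, 16 + 32 + 8);
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				  CRYPTO_TFM_REQ_MAY_SLEEP,
				  crypto_req_done, &wait);
	aead_request_set_ad(req, 16);			/* AAD length, multiple of 4 */
	aead_request_set_crypt(req, &sg, &sg, 32, iv);	/* 32-byte plaintext */

	/* wait for the asynchronous hardware operation to complete */
	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);

out_free_all:
	aead_request_free(req);
	kfree(buf);
out_free_tfm:
	crypto_free_aead(tfm);
	return err;
}

The same request pattern with crypto_aead_decrypt() verifies the CBC-MAC tag; a request that violates one of the crypto4xx_aead_need_fallback() conditions (for example an authsize of 10) is transparently handled by the ctx->sw_cipher.aead fallback instead of the hardware.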