From patchwork Mon Oct  7 12:12:30 2013
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 2997001
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>, catalin.marinas@arm.com,
 patches@linaro.org, nico@linaro.org
Subject: [RFC PATCH 4/5] ARM64: add Crypto Extensions based synchronous AES
 in CCM mode
Date: Mon, 7 Oct 2013 14:12:30 +0200
Message-Id: <1381147951-7609-5-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1381147951-7609-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1381147951-7609-1-git-send-email-ard.biesheuvel@linaro.org>

This implements the CCM AEAD chaining mode for AES using Crypto
Extensions instructions.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Makefile    |   2 +-
 arch/arm64/crypto/aes-sync.c  | 323 +++++++++++++++++++++++++++++++++++++++++-
 arch/arm64/crypto/aesce-ccm.S | 159 +++++++++++++++++++++
 3 files changed, 479 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/crypto/aesce-ccm.S

diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index e598c0a..dfd1886 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -11,6 +11,6 @@ obj-y += aesce-emu.o
 
 ifeq ($(CONFIG_KERNEL_MODE_SYNC_CE_CRYPTO),y)
-aesce-sync-y := aes-sync.o
+aesce-sync-y := aes-sync.o aesce-ccm.o
 obj-m += aesce-sync.o
 endif
diff --git a/arch/arm64/crypto/aes-sync.c b/arch/arm64/crypto/aes-sync.c
index 5c5d641..263925a5 100644
--- a/arch/arm64/crypto/aes-sync.c
+++ b/arch/arm64/crypto/aes-sync.c
@@ -8,7 +8,10 @@
  * published by the Free Software Foundation.
  */
 
+#include <asm/unaligned.h>
 #include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/scatterwalk.h>
 #include <linux/crypto.h>
 #include <linux/module.h>
 
@@ -58,7 +61,281 @@ static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
 		     [key] "r"(ctx->key_dec));
 }
 
-static struct crypto_alg aes_alg = {
+struct crypto_ccm_aes_ctx {
+	struct crypto_aes_ctx	*key;
+	struct crypto_blkcipher	*blk_tfm;
+};
+
+asmlinkage void ce_aes_ccm_auth_data(u8 mac[], u8 const in[], long abytes,
+				     u32 const rk[], int rounds);
+
+asmlinkage void ce_aes_ccm_encrypt(u8 out[], u8 const in[], long cbytes,
+				   u32 const rk[], int rounds, u8 mac[],
+				   u8 ctr[]);
+
+asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], long cbytes,
+				   u32 const rk[], int rounds, u8 mac[],
+				   u8 ctr[]);
+
+asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[],
+				 long rounds);
+
+static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key,
+		      unsigned int key_len)
+{
+	struct crypto_ccm_aes_ctx *ctx = crypto_aead_ctx(tfm);
+	int ret;
+
+	ret = crypto_aes_expand_key(ctx->key, in_key, key_len);
+	if (!ret)
+		return 0;
+
+	tfm->base.crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+	return -EINVAL;
+}
+
+static int ccm_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+{
+	if ((authsize & 1) || authsize < 4)
+		return -EINVAL;
+	return 0;
+}
+
+static void ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	__be32 *n = (__be32 *)(&maciv[AES_BLOCK_SIZE - 4]);
+	u32 l = req->iv[0] + 1;
+
+	*n = cpu_to_be32(msglen);
+
+	memcpy(maciv, req->iv, AES_BLOCK_SIZE - l);
+
+	maciv[0] |= (crypto_aead_authsize(aead) - 2) << 2;
+	if (req->assoclen)
+		maciv[0] |= 0x40;
+
+	memset(&req->iv[AES_BLOCK_SIZE - l], 0, l);
+}
+
+static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct crypto_ccm_aes_ctx *ctx = crypto_aead_ctx(aead);
+	struct __packed { __be16 l; __be32 h; } ltag;
+	int rounds = 6 + ctx->key->key_length / 4;
+	struct scatter_walk walk;
+	u32 len = req->assoclen;
+	u32 macp;
+
+	/* prepend the AAD with a length tag */
+	if (len < 0xff00) {
+		ltag.l = cpu_to_be16(len);
+		macp = 2;
+	} else {
+		ltag.l = cpu_to_be16(0xfffe);
+		put_unaligned_be32(len, &ltag.h);
+		macp = 6;
+	}
+
+	ce_aes_ccm_auth_data(mac, (u8 *)&ltag, macp, ctx->key->key_enc, rounds);
+	scatterwalk_start(&walk, req->assoc);
+
+	do {
+		u32 n = scatterwalk_clamp(&walk, len);
+		u32 m;
+		u8 *p;
+
+		if (!n) {
+			scatterwalk_start(&walk, sg_next(walk.sg));
+			n = scatterwalk_clamp(&walk, len);
+		}
+		p = scatterwalk_map(&walk);
+		m = min(n, AES_BLOCK_SIZE - macp);
+		crypto_xor(&mac[macp], p, m);
+
+		len -= n;
+		n -= m;
+		macp += m;
+		if (macp == AES_BLOCK_SIZE && (n || len)) {
+			ce_aes_ccm_auth_data(mac, &p[m], n, ctx->key->key_enc,
+					     rounds);
+			macp = n % AES_BLOCK_SIZE;
+		}
+
+		scatterwalk_unmap(p);
+		scatterwalk_advance(&walk, n + m);
+		scatterwalk_done(&walk, 0, len);
+	} while (len);
+}
+
+struct ccm_inner_desc_info {
+	u8	ctriv[AES_BLOCK_SIZE];
+	u8	mac[AES_BLOCK_SIZE];
+} __aligned(8);
+
+static int ccm_inner_encrypt(struct blkcipher_desc *desc,
+			     struct scatterlist *dst, struct scatterlist *src,
+			     unsigned int nbytes)
+{
+	struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+	struct ccm_inner_desc_info *descinfo = desc->info;
+	int rounds = 6 + ctx->key_length / 4;
+	struct blkcipher_walk walk;
+	int err;
+
+	blkcipher_walk_init(&walk, dst, src, nbytes);
+	err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
+
+	while (walk.nbytes) {
+		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+
+		if (walk.nbytes == nbytes)
+			tail = 0;
+
+		ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				   walk.nbytes - tail, ctx->key_enc, rounds,
+				   descinfo->mac, descinfo->ctriv);
+
+		nbytes -= walk.nbytes - tail;
+		err = blkcipher_walk_done(desc, &walk, tail);
+	}
+	return err;
+}
+
+static int ccm_inner_decrypt(struct blkcipher_desc *desc,
+			     struct scatterlist *dst, struct scatterlist *src,
+			     unsigned int nbytes)
+{
+	struct crypto_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+	struct ccm_inner_desc_info *descinfo = desc->info;
+	int rounds = 6 + ctx->key_length / 4;
+	struct blkcipher_walk walk;
+	int err;
+
+	blkcipher_walk_init(&walk, dst, src, nbytes);
+	err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
+
+	while (walk.nbytes) {
+		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
+
+		if (walk.nbytes == nbytes)
+			tail = 0;
+
+		ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				   walk.nbytes - tail, ctx->key_enc, rounds,
+				   descinfo->mac, descinfo->ctriv);
+
+		nbytes -= walk.nbytes - tail;
+		err = blkcipher_walk_done(desc, &walk, tail);
+	}
+	return err;
+}
+
+static int ccm_encrypt(struct aead_request *req)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct crypto_ccm_aes_ctx *ctx = crypto_aead_ctx(aead);
+	int rounds = 6 + ctx->key->key_length / 4;
+	struct ccm_inner_desc_info descinfo;
+	int err;
+
+	struct blkcipher_desc desc = {
+		.tfm	= ctx->blk_tfm,
+		.info	= &descinfo,
+		.flags	= 0,
+	};
+
+	ccm_init_mac(req, descinfo.mac, req->cryptlen);
+
+	if (req->assoclen)
+		ccm_calculate_auth_mac(req, descinfo.mac);
+
+	memcpy(descinfo.ctriv, req->iv, AES_BLOCK_SIZE);
+
+	/* call inner blkcipher to process the payload */
+	err = ccm_inner_encrypt(&desc, req->dst, req->src, req->cryptlen);
+	if (err)
+		return err;
+
+	ce_aes_ccm_final(descinfo.mac, req->iv, ctx->key->key_enc, rounds);
+
+	/* copy authtag to end of dst */
+	scatterwalk_map_and_copy(descinfo.mac, req->dst, req->cryptlen,
+				 crypto_aead_authsize(aead), 1);
+
+	return 0;
+}
+
+static int ccm_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct crypto_ccm_aes_ctx *ctx = crypto_aead_ctx(aead);
+	int rounds = 6 + ctx->key->key_length / 4;
+	struct ccm_inner_desc_info descinfo;
+	u8 atag[AES_BLOCK_SIZE];
+	u32 len;
+	int err;
+
+	struct blkcipher_desc desc = {
+		.tfm	= ctx->blk_tfm,
+		.info	= &descinfo,
+		.flags	= 0,
+	};
+
+	len = req->cryptlen - crypto_aead_authsize(aead);
+	ccm_init_mac(req, descinfo.mac, len);
+
+	if (req->assoclen)
+		ccm_calculate_auth_mac(req, descinfo.mac);
+
+	memcpy(descinfo.ctriv, req->iv, AES_BLOCK_SIZE);
+
+	/* call inner blkcipher to process the payload */
+	err = ccm_inner_decrypt(&desc, req->dst, req->src, len);
+	if (err)
+		return err;
+
+	ce_aes_ccm_final(descinfo.mac, req->iv, ctx->key->key_enc, rounds);
+
+	/* compare calculated auth tag with the stored one */
+	scatterwalk_map_and_copy(atag, req->src, len,
+				 crypto_aead_authsize(aead), 0);
+
+	if (memcmp(descinfo.mac, atag, crypto_aead_authsize(aead)))
+		return -EBADMSG;
+	return 0;
+}
+
+static int ccm_init(struct crypto_tfm *tfm)
+{
+	struct crypto_ccm_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct crypto_blkcipher *blk_tfm;
+
+	blk_tfm = crypto_alloc_blkcipher("__driver-ccm-aesce-inner", 0, 0);
+	if (IS_ERR(blk_tfm))
+		return PTR_ERR(blk_tfm);
+
+	/* did we get the right one? (sanity check) */
+	if (crypto_blkcipher_crt(blk_tfm)->encrypt != ccm_inner_encrypt) {
+		crypto_free_blkcipher(blk_tfm);
+		return -EINVAL;
+	}
+
+	ctx->blk_tfm = blk_tfm;
+	ctx->key = crypto_blkcipher_ctx(blk_tfm);
+
+	return 0;
+}
+
+static void ccm_exit(struct crypto_tfm *tfm)
+{
+	struct crypto_ccm_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_blkcipher(ctx->blk_tfm);
+}
+
+static struct crypto_alg aes_algs[] = { {
 	.cra_name		= "aes",
 	.cra_driver_name	= "aes-ce",
 	.cra_priority		= 300,
@@ -73,18 +350,56 @@ static struct crypto_alg aes_alg = {
 		.cia_encrypt	= aes_cipher_encrypt,
 		.cia_decrypt	= aes_cipher_decrypt
 	}
-};
+}, {
+	.cra_name		= "__ccm-aesce-inner",
+	.cra_driver_name	= "__driver-ccm-aesce-inner",
+	.cra_priority		= 0,
+	.cra_flags		= CRYPTO_ALG_TYPE_BLKCIPHER,
+	.cra_blocksize		= 1,
+	.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
+	.cra_alignmask		= 7,
+	.cra_type		= &crypto_blkcipher_type,
+	.cra_module		= THIS_MODULE,
+	.cra_blkcipher = {
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= sizeof(struct ccm_inner_desc_info),
+		.setkey		= crypto_aes_set_key,
+		.encrypt	= ccm_inner_encrypt,
+		.decrypt	= ccm_inner_decrypt,
+	},
+}, {
+	.cra_name		= "ccm(aes)",
+	.cra_driver_name	= "ccm-aes-ce",
+	.cra_priority		= 300,
+	.cra_flags		= CRYPTO_ALG_TYPE_AEAD,
+	.cra_blocksize		= 1,
+	.cra_ctxsize		= sizeof(struct crypto_ccm_aes_ctx),
+	.cra_alignmask		= 7,
+	.cra_type		= &crypto_aead_type,
+	.cra_module		= THIS_MODULE,
+	.cra_init		= ccm_init,
+	.cra_exit		= ccm_exit,
+	.cra_aead = {
+		.ivsize		= AES_BLOCK_SIZE,
+		.maxauthsize	= AES_BLOCK_SIZE,
+		.setkey		= ccm_setkey,
+		.setauthsize	= ccm_setauthsize,
+		.encrypt	= ccm_encrypt,
+		.decrypt	= ccm_decrypt,
+	}
+} };
 
 static int __init aes_mod_init(void)
 {
 	if (0) // TODO check for crypto extensions
 		return -ENODEV;
-	return crypto_register_alg(&aes_alg);
+	return crypto_register_algs(aes_algs, ARRAY_SIZE(aes_algs));
 }
 
 static void __exit aes_mod_exit(void)
 {
-	crypto_unregister_alg(&aes_alg);
+	crypto_unregister_algs(aes_algs, ARRAY_SIZE(aes_algs));
 }
 
 module_init(aes_mod_init);
diff --git a/arch/arm64/crypto/aesce-ccm.S b/arch/arm64/crypto/aesce-ccm.S
new file mode 100644
index 0000000..35d09af
--- /dev/null
+++ b/arch/arm64/crypto/aesce-ccm.S
@@ -0,0 +1,159 @@
+/*
+ * linux/arch/arm64/crypto/aesce-ccm.S - AES-CCM transform for ARMv8 with
+ *                                       Crypto Extensions
+ *
+ * Copyright (C) 2013 Linaro Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+
+	.text
+	.arch	armv8-a+crypto
+	.align	4
+
+	/*
+	 * void ce_aes_ccm_auth_data(u8 mac[], u8 const in[], long abytes,
+	 *			     u32 const rk[], int rounds);
+	 */
+ENTRY(ce_aes_ccm_auth_data)
+	ld1	{v0.16b}, [x0]			/* load mac */
+	ld1	{v1.16b}, [x3]			/* load first round key */
+0:	mov	x7, x4
+	add	x6, x3, #16
+1:	aese	v0.16b, v1.16b
+	ld1	{v1.16b}, [x6], #16		/* load next round key */
+	subs	x7, x7, #1
+	beq	2f
+	aesmc	v0.16b, v0.16b
+	b	1b
+2:	eor	v0.16b, v0.16b, v1.16b		/* final round */
+	subs	x2, x2, #16			/* last data? */
+	bmi	3f
+	ld1	{v1.16b}, [x1], #16		/* load next input block */
+	eor	v0.16b, v0.16b, v1.16b		/* xor with mac */
+	beq	3f
+	ld1	{v1.16b}, [x3]			/* reload first round key */
+	b	0b
+3:	st1	{v0.16b}, [x0]			/* store mac */
+	beq	5f
+	adds	x2, x2, #16
+	beq	5f
+4:	ldrb	w7, [x1], #1
+	umov	w6, v0.b[0]
+	eor	w6, w6, w7
+	strb	w6, [x0], #1
+	subs	x2, x2, #1
+	beq	5f
+	ext	v0.16b, v0.16b, v0.16b, #1	/* rotate out the mac bytes */
+	b	4b
+5:	ret
+ENDPROC(ce_aes_ccm_auth_data)
+
+	.macro	aes_ccm_do_crypt,enc
+	stp	x29, x30, [sp, #-32]!		/* prologue */
+	mov	x29, sp
+	stp	x8, x9, [sp, #16]
+	ld1	{v0.16b}, [x5]			/* load mac */
+	ld1	{v2.16b}, [x6]			/* load ctr */
+	ld1	{v3.16b}, [x3]			/* load first round key */
+	umov	x8, v2.d[1]
+	rev	x8, x8				/* keep swabbed ctr in reg */
+0:	add	x8, x8, #1
+	rev	x9, x8
+	ins	v2.d[1], x9			/* no carry */
+	mov	x7, x4
+	add	x9, x3, #16
+1:	aese	v0.16b, v3.16b
+	aese	v2.16b, v3.16b
+	ld1	{v3.16b}, [x9], #16		/* load next round key */
+	subs	x7, x7, #1
+	beq	2f
+	aesmc	v0.16b, v0.16b
+	aesmc	v2.16b, v2.16b
+	b	1b
+2:	eor	v2.16b, v2.16b, v3.16b		/* final round enc */
+	eor	v0.16b, v0.16b, v3.16b		/* final round mac */
+	subs	x2, x2, #16
+	bmi	3f
+	ld1	{v1.16b}, [x1], #16		/* load next input block */
+	.if	\enc == 1
+	eor	v0.16b, v0.16b, v1.16b		/* xor mac with plaintext */
+	eor	v1.16b, v1.16b, v2.16b		/* xor with crypted ctr */
+	.else
+	eor	v1.16b, v1.16b, v2.16b		/* xor with crypted ctr */
+	eor	v0.16b, v0.16b, v1.16b		/* xor mac with plaintext */
+	.endif
+	st1	{v1.16b}, [x0], #16		/* write output block */
+	beq	5f
+	ld1	{v2.8b}, [x6]			/* reload ctriv */
+	ld1	{v3.16b}, [x3]			/* reload first round key */
+	b	0b
+3:	st1	{v0.16b}, [x5]			/* store mac */
+	add	x2, x2, #16			/* process partial tail block */
+4:	ldrb	w9, [x1], #1			/* get 1 byte of input */
+	umov	w6, v2.b[0]			/* get top crypted ctr byte */
+	umov	w7, v0.b[0]			/* get top mac byte */
+	.if	\enc == 1
+	eor	w7, w7, w9
+	eor	w9, w9, w6
+	.else
+	eor	w9, w9, w6
+	eor	w7, w7, w9
+	.endif
+	strb	w9, [x0], #1			/* store out byte */
+	strb	w7, [x5], #1			/* store mac byte */
+	subs	x2, x2, #1
+	beq	6f
+	ext	v0.16b, v0.16b, v0.16b, #1	/* shift out mac byte */
+	ext	v2.16b, v2.16b, v2.16b, #1	/* shift out ctr byte */
+	b	4b
+5:	rev	x8, x8
+	st1	{v0.16b}, [x5]			/* store mac */
+	str	x8, [x6, #8]			/* store lsb end of ctr (BE) */
+6:	ldp	x8, x9, [sp, #16]		/* epilogue */
+	ldp	x29, x30, [sp], #32
+	ret
+	.endm
+
+	/*
+	 * void ce_aes_ccm_encrypt(u8 out[], u8 const in[], long cbytes,
+	 *			   u32 const rk[], int rounds, u8 mac[],
+	 *			   u8 ctr[]);
+	 * void ce_aes_ccm_decrypt(u8 out[], u8 const in[], long cbytes,
+	 *			   u32 const rk[], int rounds, u8 mac[],
+	 *			   u8 ctr[]);
+	 */
+ENTRY(ce_aes_ccm_encrypt)
+	aes_ccm_do_crypt	1
+ENDPROC(ce_aes_ccm_encrypt)
+
+ENTRY(ce_aes_ccm_decrypt)
+	aes_ccm_do_crypt	0
+ENDPROC(ce_aes_ccm_decrypt)
+
+	/*
+	 * void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[],
+	 *			 long rounds);
+	 */
+ENTRY(ce_aes_ccm_final)
+	ld1	{v0.16b}, [x0]			/* load mac */
+	ld1	{v3.16b}, [x2], #16		/* load first round key */
+	ld1	{v2.16b}, [x1]			/* load 1st ctriv */
+0:	aese	v0.16b, v3.16b
+	aese	v2.16b, v3.16b
+	ld1	{v3.16b}, [x2], #16		/* load next round key */
+	subs	x3, x3, #1
+	beq	1f
+	aesmc	v0.16b, v0.16b
+	aesmc	v2.16b, v2.16b
+	b	0b
+1:	eor	v2.16b, v2.16b, v3.16b		/* final round enc */
+	eor	v0.16b, v0.16b, v3.16b		/* final round mac */
+	eor	v0.16b, v0.16b, v2.16b		/* en-/decrypt the mac */
+	st1	{v0.16b}, [x0]			/* store result */
+	ret
+ENDPROC(ce_aes_ccm_final)
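
A note for reviewers on ccm_init_mac(): it builds the B0 block from
RFC 3610 in place. req->iv[0] carries L - 1 (the width of the trailing
length field), the nonce fills the next 15 - L bytes, and the message
length is written big-endian into the tail before the nonce is copied
over the unused high bytes. The (authsize - 2) << 2 expression is the
standard flags field ((M - 2) / 2) << 3 pre-reduced. The standalone C
sketch below shows the same layout byte by byte; it is illustrative
only (plain userspace C, no validation) and not part of the patch:

#include <stdint.h>
#include <string.h>

/*
 * Illustration of the B0 block that ccm_init_mac() constructs
 * (RFC 3610, section 2.2). iv[0] must hold L - 1, and msglen must
 * fit in L bytes; the kernel side relies on that as well.
 */
static void ccm_b0_sketch(uint8_t b0[16], const uint8_t iv[16],
			  uint32_t msglen, unsigned int authsize,
			  int have_aad)
{
	unsigned int l = iv[0] + 1;	/* width of the length field */

	/* the length goes in first; for l < 4 the nonce copy below
	 * overwrites the excess high bytes, exactly as in the patch */
	b0[12] = msglen >> 24;
	b0[13] = msglen >> 16;
	b0[14] = msglen >> 8;
	b0[15] = msglen;

	memcpy(b0, iv, 16 - l);		/* L - 1 byte plus the nonce */

	/* flags: bit 6 = Adata, bits 5-3 = (M - 2) / 2, bits 2-0 = L - 1;
	 * (authsize - 2) << 2 equals ((authsize - 2) / 2) << 3 */
	b0[0] |= (authsize - 2) << 2;
	if (have_aad)
		b0[0] |= 0x40;
}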
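
And for anyone who wants to poke at the resulting "ccm(aes)" AEAD
without going through tcrypt, here is a minimal sketch of a throwaway
smoke-test module against the current AEAD API, assuming this series
is applied. The module name, the all-zero key, the 13-byte nonce and
the 8-byte tag are invented for the example and are not a real test
vector; it only checks that encryption runs and reports which driver
was selected:

#include <crypto/aead.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/scatterlist.h>

static int __init ccm_smoke_init(void)
{
	struct crypto_aead *tfm;
	struct aead_request *req;
	struct scatterlist sg;
	static u8 key[16];		/* all-zero demo key */
	static u8 buf[32 + 8];		/* 32-byte payload + 8-byte tag */
	u8 iv[16] = { 1 };		/* iv[0] = L - 1, i.e. 13-byte nonce */
	int err;

	tfm = crypto_alloc_aead("ccm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, sizeof(key));
	if (!err)
		err = crypto_aead_setauthsize(tfm, 8);
	if (err)
		goto out_free_tfm;

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, sizeof(buf));
	aead_request_set_callback(req, 0, NULL, NULL);
	aead_request_set_assoc(req, NULL, 0);	/* no AAD */
	aead_request_set_crypt(req, &sg, &sg, 32, iv);

	err = crypto_aead_encrypt(req);		/* tag lands at buf[32] */
	pr_info("ccm(aes): encrypt %d, driver %s\n", err,
		crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm)));

	aead_request_free(req);
out_free_tfm:
	crypto_free_aead(tfm);
	return err;
}

static void __exit ccm_smoke_exit(void)
{
}

module_init(ccm_smoke_init);
module_exit(ccm_smoke_exit);
MODULE_LICENSE("GPL");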