From patchwork Wed Mar 26 20:02:57 2014
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 3895151
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org
Cc: catalin.marinas@arm.com, steve.capper@linaro.org,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH] arm64: add support for GHASH secure hash using ARMv8 Crypto Extensions
Date: Wed, 26 Mar 2014 21:02:57 +0100
Message-Id: <1395864177-30115-1-git-send-email-ard.biesheuvel@linaro.org>

This is a port to ARMv8 (Crypto Extensions) of the Intel implementation
of the GHASH secure hash (used in the Galois/Counter chaining mode). It
relies on the optional PMULL/PMULL2 instructions (polynomial multiply
long, what Intel calls carry-less multiply).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
Only mildly tested, mainly because the internal tcrypt routine only
supplies a single test vector for ghash. Additional vectors can be
generated with the plain-C reference sketched after the patch.

Again, this patch requires the NEON patches to allow kernel mode NEON
in (soft)irq context.

 arch/arm64/crypto/Kconfig         |   5 ++
 arch/arm64/crypto/Makefile        |   3 +
 arch/arm64/crypto/ghash-ce-core.S | 119 +++++++++++++++++++++++++++++++
 arch/arm64/crypto/ghash-ce-glue.c | 143 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 270 insertions(+)
 create mode 100644 arch/arm64/crypto/ghash-ce-core.S
 create mode 100644 arch/arm64/crypto/ghash-ce-glue.c

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 2e869f4b925a..7b5da897a904 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -42,4 +42,9 @@ config CRYPTO_AES_ARM64_NEON_BLK
 	select CRYPTO_AES
 	select CRYPTO_ABLK_HELPER
 
+config CRYPTO_GHASH_ARM64_CE
+	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
+	depends on ARM64 && KERNEL_MODE_NEON
+	select CRYPTO_HASH
+
 endif
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 23fbe222cba8..8ad5c8fc8527 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -33,3 +33,6 @@ CFLAGS_aes-glue-ce.o := -DUSE_V8_CRYPTO_EXTENSIONS
 
 $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE
 	$(call if_changed_dep,cc_o_c)
+
+obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o
+ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o
diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
new file mode 100644
index 000000000000..a150ad7cae65
--- /dev/null
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -0,0 +1,119 @@
+/*
+ * Accelerated GHASH implementation with ARMv8 PMULL instructions.
+ *
+ * Copyright (C) 2014 Linaro Ltd. <ard.biesheuvel@linaro.org>
+ *
+ * Based on arch/x86/crypto/ghash-clmulni-intel_asm.S
+ *
+ * Copyright (c) 2009 Intel Corp.
+ *   Author: Huang Ying
+ *           Vinodh Gopal
+ *           Erdinc Ozturk
+ *           Deniz Karakoyunlu
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+	DATA	.req	v0
+	SHASH	.req	v1
+	T1	.req	v2
+	T2	.req	v3
+	T3	.req	v4
+	T4	.req	v5
+	IN1	.req	v5
+
+	.text
+	.arch	armv8-a+crypto
+	.align	3
+
+	/*
+	 * void pmull_ghash_update(char *dst, const char *src, u32 blocks,
+	 *			   const be128 *shash, const char *head);
+	 */
+ENTRY(pmull_ghash_update)
+	ld1	{DATA.2d}, [x0]
+	ld1	{SHASH.2d}, [x3]
+
+	/* do the head block first, if supplied */
+	cbz	x4, 0f
+	ld1	{IN1.2d}, [x4], #16
+	b	1f
+
+0:	sub	w2, w2, #1
+	ld1	{IN1.2d}, [x1], #16
+1:	rev64	IN1.16b, IN1.16b
+CPU_LE(	ext	IN1.16b, IN1.16b, IN1.16b, #8	)
+	eor	DATA.16b, DATA.16b, IN1.16b
+
+	/* multiply DATA by SHASH in GF(2^128) */
+	eor	T4.16b, T4.16b, T4.16b
+	ext	T2.16b, DATA.16b, DATA.16b, #8
+	ext	T3.16b, SHASH.16b, SHASH.16b, #8
+	eor	T2.16b, T2.16b, DATA.16b
+	eor	T3.16b, T3.16b, SHASH.16b
+
+	pmull2	T1.1q, SHASH.2d, DATA.2d	// a1 * b1
+	pmull	DATA.1q, SHASH.1d, DATA.1d	// a0 * b0
+	pmull	T2.1q, T2.1d, T3.1d		// (a1 + a0)(b1 + b0)
+	eor	T2.16b, T2.16b, T1.16b		// (a0 * b1) + (a1 * b0)
+	eor	T2.16b, T2.16b, DATA.16b
+
+	ext	T3.16b, T4.16b, T2.16b, #8
+	ext	T2.16b, T2.16b, T4.16b, #8
+	eor	DATA.16b, DATA.16b, T3.16b
+	eor	T1.16b, T1.16b, T2.16b	// <T1:DATA> is result of
+					// carry-less multiplication
+
+	/* first phase of the reduction */
+	shl	T3.2d, DATA.2d, #1
+	eor	T3.16b, T3.16b, DATA.16b
+	shl	T3.2d, T3.2d, #5
+	eor	T3.16b, T3.16b, DATA.16b
+	shl	T3.2d, T3.2d, #57
+	ext	T2.16b, T4.16b, T3.16b, #8
+	ext	T3.16b, T3.16b, T4.16b, #8
+	eor	DATA.16b, DATA.16b, T2.16b
+	eor	T1.16b, T1.16b, T3.16b
+
+	/* second phase of the reduction */
+	ushr	T2.2d, DATA.2d, #5
+	eor	T2.16b, T2.16b, DATA.16b
+	ushr	T2.2d, T2.2d, #1
+	eor	T2.16b, T2.16b, DATA.16b
+	ushr	T2.2d, T2.2d, #1
+	eor	T1.16b, T1.16b, T2.16b
+	eor	DATA.16b, DATA.16b, T1.16b
+
+	cbnz	w2, 0b
+
+	st1	{DATA.2d}, [x0]
+	ret
+ENDPROC(pmull_ghash_update)
+
+	/*
+	 * void pmull_ghash_setkey(be128 *shash, const u8 *key);
+	 *
+	 * Calculate hash_key << 1 mod poly
+	 */
+ENTRY(pmull_ghash_setkey)
+	ldp	x2, x3, [x1]
+	movz	x4, #0xc200, lsl #48	// BE GF(2^128) multiply mask
+CPU_LE(	rev	x5, x2	)
+CPU_LE(	rev	x6, x3	)
+CPU_BE(	mov	x5, x3	)
+CPU_BE(	mov	x6, x2	)
+	asr	x7, x5, #63
+	lsl	x2, x6, #1
+	and	x1, x4, x7
+	extr	x3, x5, x6, #63
+	and	x7, x7, #1
+	eor	x3, x3, x1
+	eor	x2, x2, x7
+	stp	x2, x3, [x0]
+	ret
+ENDPROC(pmull_ghash_setkey)
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
new file mode 100644
index 000000000000..1147646a3155
--- /dev/null
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -0,0 +1,143 @@
+/*
+ * Accelerated GHASH implementation with ARMv8 PMULL instructions.
+ *
+ * Copyright (C) 2014 Linaro Ltd. <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#include <asm/neon.h>
+#include <asm/unaligned.h>
+#include <crypto/b128ops.h>
+#include <crypto/internal/hash.h>
+#include <linux/cpufeature.h>
+#include <linux/crypto.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/string.h>
+
+MODULE_DESCRIPTION("GHASH secure hash using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL v2");
+
+#define GHASH_BLOCK_SIZE	16
+#define GHASH_DIGEST_SIZE	16
+
+asmlinkage void pmull_ghash_update(char *dst, const char *src, u32 blocks,
+				   const be128 *shash, const char *head);
+
+asmlinkage void pmull_ghash_setkey(be128 *shash, const u8 *key);
+
+struct ghash_desc_ctx {
+	u8 digest[GHASH_DIGEST_SIZE];
+	u8 buf[GHASH_BLOCK_SIZE];
+	u32 count;
+};
+
+static int ghash_init(struct shash_desc *desc)
+{
+	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	*dctx = (struct ghash_desc_ctx){};
+	return 0;
+}
+
+static int ghash_update(struct shash_desc *desc, const u8 *src,
+			unsigned int len)
+{
+	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+	unsigned int partial = dctx->count % GHASH_BLOCK_SIZE;
+
+	dctx->count += len;
+
+	if ((partial + len) >= GHASH_BLOCK_SIZE) {
+		be128 *skey = crypto_shash_ctx(desc->tfm);
+		int blocks;
+
+		if (partial) {
+			int p = GHASH_BLOCK_SIZE - partial;
+
+			memcpy(dctx->buf + partial, src, p);
+			src += p;
+			len -= p;
+		}
+
+		blocks = len / GHASH_BLOCK_SIZE;
+		len %= GHASH_BLOCK_SIZE;
+
+		kernel_neon_begin_partial(6);
+		pmull_ghash_update(dctx->digest, src, blocks, skey,
+				   partial ? dctx->buf : NULL);
+		kernel_neon_end();
+
+		src += blocks * GHASH_BLOCK_SIZE;
+		partial = 0;
+	}
+	if (len)
+		memcpy(dctx->buf + partial, src, len);
+	return 0;
+}
+
+static int ghash_final(struct shash_desc *desc, u8 *dst)
+{
+	struct ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+	int i;
+
+	if (dctx->count % GHASH_BLOCK_SIZE) {
+		be128 *skey = crypto_shash_ctx(desc->tfm);
+
+		kernel_neon_begin_partial(6);
+		pmull_ghash_update(dctx->digest, NULL, 0, skey, dctx->buf);
+		kernel_neon_end();
+	}
+	for (i = 0; i < GHASH_DIGEST_SIZE; i++)
+		dst[i] = dctx->digest[GHASH_DIGEST_SIZE - i - 1];
+
+	return 0;
+}
+
+static int ghash_setkey(struct crypto_shash *tfm,
+			const u8 *key, unsigned int keylen)
+{
+	be128 *skey = crypto_shash_ctx(tfm);
+
+	if (keylen != GHASH_BLOCK_SIZE) {
+		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+	pmull_ghash_setkey(skey, key);
+	return 0;
+}
+
+static struct shash_alg ghash_alg = {
+	.digestsize	= GHASH_DIGEST_SIZE,
+	.init		= ghash_init,
+	.update		= ghash_update,
+	.final		= ghash_final,
+	.setkey		= ghash_setkey,
+	.descsize	= sizeof(struct ghash_desc_ctx),
+	.base		= {
+		.cra_name		= "ghash",
+		.cra_driver_name	= "ghash-ce",
+		.cra_priority		= 200,
+		.cra_flags		= CRYPTO_ALG_TYPE_SHASH,
+		.cra_blocksize		= GHASH_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(be128),
+		.cra_module		= THIS_MODULE,
+	},
+};
+
+static int __init ghash_ce_mod_init(void)
+{
+	return crypto_register_shash(&ghash_alg);
+}
+
+static void __exit ghash_ce_mod_exit(void)
+{
+	crypto_unregister_shash(&ghash_alg);
+}
+
+module_cpu_feature_match(PMULL, ghash_ce_mod_init);
+module_exit(ghash_ce_mod_exit);
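
P.S. Since tcrypt only carries a single ghash test vector, extra vectors
can be generated against the textbook bit-serial GF(2^128) multiply below.
This is only a userland sketch of the algorithm from NIST SP 800-38D, not
part of the patch; the names gf128_mul() and ghash_ref() are illustrative.
Note that the driver keeps its running digest byte-reversed and multiplies
by the doubled key, so results should be compared at the ghash algorithm
level (e.g. through the shash API) rather than against raw register state.

#include <stdint.h>
#include <string.h>

/* Multiply x by y in GF(2^128) using the GCM bit ordering, reducing by
 * the polynomial x^128 + x^7 + x^2 + x + 1 (0xe1 with reflected bits). */
static void gf128_mul(uint8_t z[16], const uint8_t x[16], const uint8_t y[16])
{
	uint8_t acc[16] = { 0 };
	uint8_t v[16];
	int i, j, carry;

	memcpy(v, y, 16);

	for (i = 0; i < 128; i++) {
		/* bit i of x, most significant bit of byte 0 first */
		if ((x[i / 8] >> (7 - (i % 8))) & 1)
			for (j = 0; j < 16; j++)
				acc[j] ^= v[j];

		/* shift v right one bit in the GCM byte-string sense,
		 * folding a carried-out bit back in via 0xe1 */
		carry = v[15] & 1;
		for (j = 15; j > 0; j--)
			v[j] = (v[j] >> 1) | (v[j - 1] << 7);
		v[0] >>= 1;
		if (carry)
			v[0] ^= 0xe1;
	}
	memcpy(z, acc, 16);
}

/* GHASH over a whole number of blocks: Y_i = (Y_{i-1} ^ X_i) * H */
static void ghash_ref(uint8_t digest[16], const uint8_t *src, size_t blocks,
		      const uint8_t h[16])
{
	int i;

	while (blocks--) {
		for (i = 0; i < 16; i++)
			digest[i] ^= src[i];
		gf128_mul(digest, digest, h);
		src += 16;
	}
}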
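
For reference, the pmull2/pmull/pmull + eor sequence in pmull_ghash_update()
is a Karatsuba-style decomposition of the 128x128 carry-less multiply into
three 64x64 multiplies. A plain-C illustration, where clmul64() is a
bit-serial stand-in for the PMULL instruction and all names are again
illustrative:

#include <stdint.h>

/* Bit-serial 64x64 -> 128 carry-less multiply; stand-in for PMULL. */
static void clmul64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
	uint64_t h = 0, l = 0;
	int i;

	for (i = 0; i < 64; i++)
		if ((b >> i) & 1) {
			l ^= a << i;
			if (i)
				h ^= a >> (64 - i);
		}
	*hi = h;
	*lo = l;
}

/* 128x128 -> 256 carry-less multiply from three 64-bit multiplies:
 *   a * b = (a1*b1 << 128) ^ ((a1*b0 ^ a0*b1) << 64) ^ a0*b0
 * where the middle term is recovered as
 *   (a1 ^ a0)*(b1 ^ b0) ^ a1*b1 ^ a0*b0,
 * exactly what the pmull2/pmull/pmull + eor sequence computes. */
static void clmul128(uint64_t a1, uint64_t a0, uint64_t b1, uint64_t b0,
		     uint64_t r[4] /* r[0] = least significant */)
{
	uint64_t hh_h, hh_l, ll_h, ll_l, mm_h, mm_l;

	clmul64(a1, b1, &hh_h, &hh_l);			/* pmull2: a1 * b1 */
	clmul64(a0, b0, &ll_h, &ll_l);			/* pmull:  a0 * b0 */
	clmul64(a1 ^ a0, b1 ^ b0, &mm_h, &mm_l);	/* (a1+a0)(b1+b0)  */

	mm_h ^= hh_h ^ ll_h;				/* middle term:    */
	mm_l ^= hh_l ^ ll_l;				/* a1*b0 ^ a0*b1   */

	r[0] = ll_l;
	r[1] = ll_h ^ mm_l;
	r[2] = hh_l ^ mm_h;
	r[3] = hh_h;
}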
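
Likewise, the "hash_key << 1 mod poly" precomputed by pmull_ghash_setkey()
is the usual one-bit doubling of H, folding the carried-out bit back in
through the constant 0xc2000000000000000000000000000001. A sketch of the
arithmetic only (not of the exact register and memory layout of the asm),
with the key held as two big-endian 64-bit halves and ghash_double_h() an
illustrative name:

#include <stdint.h>

/* Double H in GF(2^128): shift left one bit; if a bit carried out,
 * xor the reduction constant back into both halves. */
static void ghash_double_h(uint64_t *hi, uint64_t *lo)
{
	/* all-ones if the top bit is set, like "asr x7, x5, #63" */
	uint64_t msb = (uint64_t)((int64_t)*hi >> 63);
	uint64_t new_hi = (*hi << 1) | (*lo >> 63);	/* cf. extr */
	uint64_t new_lo = *lo << 1;			/* cf. lsl  */

	*hi = new_hi ^ (msb & 0xc200000000000000ULL);
	*lo = new_lo ^ (msb & 1);
}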