From patchwork Wed Feb 5 17:13:38 2014
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 3587751
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: will.deacon@arm.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>, patches@linaro.org
Subject: [PATCH v2 4/4] arm64: add Crypto Extensions based synchronous core AES cipher
Date: Wed, 5 Feb 2014 18:13:38 +0100
Message-Id: <1391620418-3999-5-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1391620418-3999-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1391620418-3999-1-git-send-email-ard.biesheuvel@linaro.org>

This implements the
core AES cipher using the Crypto Extensions, using only NEON registers q0 - q3.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/Makefile               |   1 +
 arch/arm64/crypto/Makefile        |  13 ++++
 arch/arm64/crypto/aes-ce-cipher.c | 134 ++++++++++++++++++++++++++++++++++++++
 crypto/Kconfig                    |   6 ++
 4 files changed, 154 insertions(+)
 create mode 100644 arch/arm64/crypto/Makefile
 create mode 100644 arch/arm64/crypto/aes-ce-cipher.c

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2fceb71ac3b7..8185a913c5ed 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -45,6 +45,7 @@ export TEXT_OFFSET GZFLAGS
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
 core-$(CONFIG_KVM) += arch/arm64/kvm/
 core-$(CONFIG_XEN) += arch/arm64/xen/
+core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
 libs-y		:= arch/arm64/lib/ $(libs-y)
 libs-y		+= $(LIBGCC)
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
new file mode 100644
index 000000000000..ac58945c50b3
--- /dev/null
+++ b/arch/arm64/crypto/Makefile
@@ -0,0 +1,13 @@
+#
+# linux/arch/arm64/crypto/Makefile
+#
+# Copyright (C) 2013 Linaro Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+
+obj-$(CONFIG_CRYPTO_AES_ARM64_CE) += aes-ce-cipher.o
+
+CFLAGS_aes-ce-cipher.o += -march=armv8-a+crypto
diff --git a/arch/arm64/crypto/aes-ce-cipher.c b/arch/arm64/crypto/aes-ce-cipher.c
new file mode 100644
index 000000000000..6dd64106923f
--- /dev/null
+++ b/arch/arm64/crypto/aes-ce-cipher.c
@@ -0,0 +1,134 @@
+/*
+ * linux/arch/arm64/crypto/aes-ce-cipher.c
+ *
+ * Copyright (C) 2013 Linaro Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <asm/hwcap.h>
+#include <asm/neon.h>
+#include <crypto/aes.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+#include <linux/types.h>
+
+MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL");
+
+static int num_rounds(struct crypto_aes_ctx *ctx)
+{
+	/*
+	 * # of rounds specified by AES:
+	 * 128 bit key		10 rounds
+	 * 192 bit key		12 rounds
+	 * 256 bit key		14 rounds
+	 * => n byte key	=> 6 + (n/4) rounds
+	 */
+	return 6 + ctx->key_length / 4;
+}
+
+static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	kernel_neon_begin_partial(4);
+
+	__asm__("	ld1	{v0.16b}, [%[in]]		;"
+		"	cmp	%[rounds], #10			;"
+		"	bmi	0f				;"
+		"	bne	3f				;"
+		"	ld1	{v3.2d}, [%[key]], #16		;"
+		"	b	2f				;"
+		"0:	ld1	{v2.2d-v3.2d}, [%[key]], #32	;"
+		"1:	aese	v0.16b, v2.16b			;"
+		"	aesmc	v0.16b, v0.16b			;"
+		"2:	aese	v0.16b, v3.16b			;"
+		"	aesmc	v0.16b, v0.16b			;"
+		"3:	ld1	{v1.2d-v3.2d}, [%[key]], #48	;"
+		"	subs	%[rounds], %[rounds], #3	;"
+		"	aese	v0.16b, v1.16b			;"
+		"	aesmc	v0.16b, v0.16b			;"
+		"	bpl	1b				;"
+		"	aese	v0.16b, v2.16b			;"
+		"	eor	v0.16b, v0.16b, v3.16b		;"
+		"	st1	{v0.16b}, [%[out]]		;"
+	: :
+	[out]		"r"(dst),
+	[in]		"r"(src),
+	[rounds]	"r"(num_rounds(ctx) - 2),
+	[key]		"r"(ctx->key_enc)
+	: "cc", "memory");
+
+	kernel_neon_end();
+}
+
+static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	kernel_neon_begin_partial(4);
+
+	__asm__("	ld1	{v0.16b}, [%[in]]		;"
+		"	cmp	%[rounds], #10			;"
+		"	bmi	0f				;"
+		"	bne	3f				;"
+		"	ld1	{v3.2d}, [%[key]], #16		;"
+		"	b	2f				;"
+		"0:	ld1	{v2.2d-v3.2d}, [%[key]], #32	;"
+		"1:	aesd	v0.16b, v2.16b			;"
+		"	aesimc	v0.16b, v0.16b			;"
+		"2:	aesd	v0.16b, v3.16b			;"
+		"	aesimc	v0.16b, v0.16b			;"
+		"3:	ld1	{v1.2d-v3.2d}, [%[key]], #48	;"
+		"	subs	%[rounds], %[rounds], #3	;"
+		"	aesd	v0.16b, v1.16b			;"
+		"	aesimc	v0.16b, v0.16b			;"
+		"	bpl	1b				;"
+		"	aesd	v0.16b, v2.16b			;"
+		"	eor	v0.16b, v0.16b, v3.16b		;"
+		"	st1	{v0.16b}, [%[out]]		;"
+	: :
+	[out]		"r"(dst),
+	[in]		"r"(src),
+	[rounds]	"r"(num_rounds(ctx) - 2),
+	[key]		"r"(ctx->key_dec)
+	: "cc", "memory");
+
+	kernel_neon_end();
+}
+
+static struct crypto_alg aes_alg = {
+	.cra_name		= "aes",
+	.cra_driver_name	= "aes-ce",
+	.cra_priority		= 300,
+	.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
+	.cra_module		= THIS_MODULE,
+	.cra_cipher = {
+		.cia_min_keysize	= AES_MIN_KEY_SIZE,
+		.cia_max_keysize	= AES_MAX_KEY_SIZE,
+		.cia_setkey		= crypto_aes_set_key,
+		.cia_encrypt		= aes_cipher_encrypt,
+		.cia_decrypt		= aes_cipher_decrypt
+	}
+};
+
+static int __init aes_mod_init(void)
+{
+	if (!(elf_hwcap & HWCAP_AES))
+		return -ENODEV;
+	return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_mod_exit(void)
+{
+	crypto_unregister_alg(&aes_alg);
+}
+
+module_init(aes_mod_init);
+module_exit(aes_mod_exit);
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 7bcb70d216e1..f1d98bc346b6 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -791,6 +791,12 @@ config CRYPTO_AES_ARM_BS
 	  This implementation does not rely on any lookup tables so it is
 	  believed to be invulnerable to cache timing attacks.
 
+config CRYPTO_AES_ARM64_CE
+	tristate "Synchronous AES cipher using ARMv8 Crypto Extensions"
+	depends on ARM64 && KERNEL_MODE_NEON
+	select CRYPTO_ALGAPI
+	select CRYPTO_AES
+
 config CRYPTO_ANUBIS
 	tristate "Anubis cipher algorithm"
 	select CRYPTO_ALGAPI