From patchwork Mon Oct  7 12:12:29 2013
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>, catalin.marinas@arm.com,
	patches@linaro.org, nico@linaro.org
Subject: [RFC PATCH 3/5] ARM64: add Crypto Extensions based synchronous core
	AES cipher
Date: Mon, 7 Oct 2013 14:12:29 +0200
Message-Id: <1381147951-7609-4-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1381147951-7609-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1381147951-7609-1-git-send-email-ard.biesheuvel@linaro.org>
X-Patchwork-Id: 2996961
This implements the core AES cipher using the Crypto Extensions, using only
NEON registers q0 and q1.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Makefile   |  5 +++
 arch/arm64/crypto/aes-sync.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)
 create mode 100644 arch/arm64/crypto/aes-sync.c

diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index f87ec80..e598c0a 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -9,3 +9,8 @@
 #
 
 obj-y += aesce-emu.o
+
+ifeq ($(CONFIG_KERNEL_MODE_SYNC_CE_CRYPTO),y)
+aesce-sync-y := aes-sync.o
+obj-m += aesce-sync.o
+endif
diff --git a/arch/arm64/crypto/aes-sync.c b/arch/arm64/crypto/aes-sync.c
new file mode 100644
index 0000000..5c5d641
--- /dev/null
+++ b/arch/arm64/crypto/aes-sync.c
@@ -0,0 +1,95 @@
+/*
+ * linux/arch/arm64/crypto/aes-sync.c
+ *
+ * Copyright (C) 2013 Linaro Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/aes.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	int rounds = 6 + ctx->key_length / 4;
+
+	__asm__("	.arch	armv8-a+crypto			\n\t"
+		"	ld1	{v0.16b}, [%[in]]		\n\t"
+		"	ld1	{v1.16b}, [%[key]], #16		\n\t"
+		"0:	aese	v0.16b, v1.16b			\n\t"
+		"	subs	%[rounds], %[rounds], #1	\n\t"
+		"	ld1	{v1.16b}, [%[key]], #16		\n\t"
+		"	beq	1f				\n\t"
+		"	aesmc	v0.16b, v0.16b			\n\t"
+		"	b	0b				\n\t"
+		"1:	eor	v0.16b, v0.16b, v1.16b		\n\t"
+		"	st1	{v0.16b}, [%[out]]		\n\t"
+	: :
+		[out]		"r"(dst),
+		[in]		"r"(src),
+		[rounds]	"r"(rounds),
+		[key]		"r"(ctx->key_enc));
+}
+
+static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	int rounds = 6 + ctx->key_length / 4;
+
+	__asm__("	.arch	armv8-a+crypto			\n\t"
+		"	ld1	{v0.16b}, [%[in]]		\n\t"
+		"	ld1	{v1.16b}, [%[key]], #16		\n\t"
+		"0:	aesd	v0.16b, v1.16b			\n\t"
+		"	ld1	{v1.16b}, [%[key]], #16		\n\t"
+		"	subs	%[rounds], %[rounds], #1	\n\t"
+		"	beq	1f				\n\t"
+		"	aesimc	v0.16b, v0.16b			\n\t"
+		"	b	0b				\n\t"
+		"1:	eor	v0.16b, v0.16b, v1.16b		\n\t"
+		"	st1	{v0.16b}, [%[out]]		\n\t"
+	: :
+		[out]		"r"(dst),
+		[in]		"r"(src),
+		[rounds]	"r"(rounds),
+		[key]		"r"(ctx->key_dec));
+}
+
+static struct crypto_alg aes_alg = {
+	.cra_name		= "aes",
+	.cra_driver_name	= "aes-ce",
+	.cra_priority		= 300,
+	.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
+	.cra_module		= THIS_MODULE,
+	.cra_cipher = {
+		.cia_min_keysize	= AES_MIN_KEY_SIZE,
+		.cia_max_keysize	= AES_MAX_KEY_SIZE,
+		.cia_setkey		= crypto_aes_set_key,
+		.cia_encrypt		= aes_cipher_encrypt,
+		.cia_decrypt		= aes_cipher_decrypt
+	}
+};
+
+static int __init aes_mod_init(void)
+{
+	if (0) // TODO check for crypto extensions
+		return -ENODEV;
+	return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_mod_exit(void)
+{
+	crypto_unregister_alg(&aes_alg);
+}
+
+module_init(aes_mod_init);
+module_exit(aes_mod_exit);
+
+MODULE_DESCRIPTION("Synchronous AES using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL");