From patchwork Mon Jul 24 10:28:05 2017
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 9859115
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: herbert@gondor.apana.org.au, dave.martin@arm.com,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH resend 03/18] crypto: arm64/ghash-ce - add non-SIMD scalar fallback
Date: Mon, 24 Jul 2017 11:28:05 +0100
Message-Id: <20170724102820.16534-4-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170724102820.16534-1-ard.biesheuvel@linaro.org>
References: <20170724102820.16534-1-ard.biesheuvel@linaro.org>
List-ID: <linux-crypto.vger.kernel.org>
The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar C code that can be invoked in that case.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig         |  3 +-
 arch/arm64/crypto/ghash-ce-glue.c | 49 ++++++++++++++++----
 2 files changed, 43 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index d92293747d63..7d75a363e317 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -28,8 +28,9 @@ config CRYPTO_SHA2_ARM64_CE
 
 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_HASH
+	select CRYPTO_GF128MUL
 
 config CRYPTO_CRCT10DIF_ARM64_CE
 	tristate "CRCT10DIF digest algorithm using PMULL instructions"
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 833ec1e3f3e9..30221ef56e70 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * Accelerated GHASH implementation with ARMv8 PMULL instructions.
  *
- * Copyright (C) 2014 Linaro Ltd. <ard.biesheuvel@linaro.org>
+ * Copyright (C) 2014 - 2017 Linaro Ltd. <ard.biesheuvel@linaro.org>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License version 2 as published
@@ -9,7 +9,9 @@
  */
 
 #include <asm/neon.h>
+#include <asm/simd.h>
 #include <asm/unaligned.h>
+#include <crypto/gf128mul.h>
 #include <crypto/internal/hash.h>
 #include <linux/cpufeature.h>
 #include <linux/crypto.h>
@@ -25,6 +27,7 @@ MODULE_LICENSE("GPL v2");
 struct ghash_key {
 	u64 a;
 	u64 b;
+	be128 k;
 };
 
 struct ghash_desc_ctx {
@@ -44,6 +47,36 @@ static int ghash_init(struct shash_desc *desc)
 	return 0;
 }
 
+static void ghash_do_update(int blocks, u64 dg[], const char *src,
+			    struct ghash_key *key, const char *head)
+{
+	if (likely(may_use_simd())) {
+		kernel_neon_begin();
+		pmull_ghash_update(blocks, dg, src, key, head);
+		kernel_neon_end();
+	} else {
+		be128 dst = { cpu_to_be64(dg[1]), cpu_to_be64(dg[0]) };
+
+		do {
+			const u8 *in = src;
+
+			if (head) {
+				in = head;
+				blocks++;
+				head = NULL;
+			} else {
+				src += GHASH_BLOCK_SIZE;
+			}
+
+			crypto_xor((u8 *)&dst, in, GHASH_BLOCK_SIZE);
+			gf128mul_lle(&dst, &key->k);
+		} while (--blocks);
+
+		dg[0] = be64_to_cpu(dst.b);
+		dg[1] = be64_to_cpu(dst.a);
+	}
+}
+
 static int ghash_update(struct shash_desc *desc, const u8 *src,
 			unsigned int len)
 {
@@ -67,10 +100,9 @@ static int ghash_update(struct shash_desc *desc, const u8 *src,
 		blocks = len / GHASH_BLOCK_SIZE;
 		len %= GHASH_BLOCK_SIZE;
 
-		kernel_neon_begin_partial(8);
-		pmull_ghash_update(blocks, ctx->digest, src, key,
-				   partial ? ctx->buf : NULL);
-		kernel_neon_end();
+		ghash_do_update(blocks, ctx->digest, src, key,
+				partial ? ctx->buf : NULL);
+
 		src += blocks * GHASH_BLOCK_SIZE;
 		partial = 0;
 	}
@@ -89,9 +121,7 @@ static int ghash_final(struct shash_desc *desc, u8 *dst)
 
 		memset(ctx->buf + partial, 0, GHASH_BLOCK_SIZE - partial);
 
-		kernel_neon_begin_partial(8);
-		pmull_ghash_update(1, ctx->digest, ctx->buf, key, NULL);
-		kernel_neon_end();
+		ghash_do_update(1, ctx->digest, ctx->buf, key, NULL);
 	}
 	put_unaligned_be64(ctx->digest[1], dst);
 	put_unaligned_be64(ctx->digest[0], dst + 8);
@@ -111,6 +141,9 @@ static int ghash_setkey(struct crypto_shash *tfm,
 		return -EINVAL;
 	}
 
+	/* needed for the fallback */
+	memcpy(&key->k, inkey, GHASH_BLOCK_SIZE);
+
 	/* perform multiplication by 'x' in GF(2^128) */
 	b = get_unaligned_be64(inkey);
 	a = get_unaligned_be64(inkey + 8);
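
For reference, the scalar fallback above computes one GHASH step per
16-byte block, dg = (dg ^ block) * H in GF(2^128) with GCM's bit
convention. Below is a minimal standalone userspace sketch of that
computation, not kernel code: a bit-serial multiply (per NIST SP
800-38D, Algorithm 1) stands in for the kernel's table-driven
gf128mul_lle(), and the names gmul() and ghash_block() are invented
for illustration. The constants are the hash key H and first
ciphertext block from the GCM spec's test case 2.

/* ghash_sketch.c -- standalone illustration, not part of the patch */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GHASH_BLOCK_SIZE 16

/*
 * z = x * y in GF(2^128) using GCM's bit ordering and the polynomial
 * x^128 + x^7 + x^2 + x + 1 (bit-serial, per NIST SP 800-38D Alg. 1).
 * Stands in for the kernel's gf128mul_lle().
 */
static void gmul(uint8_t z[16], const uint8_t x[16], const uint8_t y[16])
{
	uint8_t v[16], r[16] = { 0 };
	int i, j;

	memcpy(v, y, 16);
	for (i = 0; i < 128; i++) {
		/* if bit i of x is set (bit 0 = MSB of byte 0), r ^= v */
		if (x[i / 8] & (0x80 >> (i % 8)))
			for (j = 0; j < 16; j++)
				r[j] ^= v[j];

		/* v = v * x: one right shift, reduced mod the polynomial */
		int lsb = v[15] & 1;
		for (j = 15; j > 0; j--)
			v[j] = (uint8_t)((v[j] >> 1) | (v[j - 1] << 7));
		v[0] >>= 1;
		if (lsb)
			v[0] ^= 0xe1;	/* 11100001: the reduction term */
	}
	memcpy(z, r, 16);
}

/* one GHASH step, dg = (dg ^ src) * h: the body of the patch's loop */
static void ghash_block(uint8_t dg[16], const uint8_t src[16],
			const uint8_t h[16])
{
	uint8_t t[16];
	int i;

	for (i = 0; i < GHASH_BLOCK_SIZE; i++)	/* crypto_xor() in the patch */
		t[i] = dg[i] ^ src[i];
	gmul(dg, t, h);				/* gf128mul_lle() in the patch */
}

int main(void)
{
	/* hash key H and ciphertext block C1 from GCM spec test case 2 */
	const uint8_t h[16] = {
		0x66, 0xe9, 0x4b, 0xd4, 0xef, 0x8a, 0x2c, 0x3b,
		0x88, 0x4c, 0xfa, 0x59, 0xca, 0x34, 0x2b, 0x2e,
	};
	const uint8_t c1[16] = {
		0x03, 0x88, 0xda, 0xce, 0x60, 0xb6, 0xa3, 0x92,
		0xf3, 0x28, 0xc2, 0xb9, 0x71, 0xb2, 0xfe, 0x78,
	};
	uint8_t dg[16] = { 0 };
	int i;

	ghash_block(dg, c1, h);	/* digest after one block: (0 ^ C1) * H */
	for (i = 0; i < GHASH_BLOCK_SIZE; i++)
		printf("%02x", dg[i]);
	printf("\n");
	return 0;
}

Built with any C99 compiler (e.g. cc ghash_sketch.c), this prints the
intermediate digest after one block, which can be checked against the
GCM spec's published intermediate values. The xor-then-multiply step
it performs is exactly what the patch's do/while loop does per block
when may_use_simd() says the NEON path must not be used.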