From patchwork Wed Dec 23 08:09:50 2020
X-Patchwork-Id: 11987841
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 01/14] crypto: blake2s - define shash_alg structs using macros
Date: Wed, 23 Dec 2020 00:09:50 -0800
Message-Id: <20201223081003.373663-2-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size. So, avoid
code duplication by using a macro to define these structs.

Acked-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 crypto/blake2s_generic.c | 88 ++++++++++++----------------------------
 1 file changed, 27 insertions(+), 61 deletions(-)

diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c
index 005783ff45ad0..e3aa6e7ff3d83 100644
--- a/crypto/blake2s_generic.c
+++ b/crypto/blake2s_generic.c
@@ -83,67 +83,33 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
 	return 0;
 }
 
-static struct shash_alg blake2s_algs[] = {{
-	.base.cra_name = "blake2s-128",
-	.base.cra_driver_name = "blake2s-128-generic",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_128_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}, {
-	.base.cra_name = "blake2s-160",
-	.base.cra_driver_name = "blake2s-160-generic",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_160_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}, {
-	.base.cra_name = "blake2s-224",
-	.base.cra_driver_name = "blake2s-224-generic",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_224_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}, {
-	.base.cra_name = "blake2s-256",
-	.base.cra_driver_name = "blake2s-256-generic",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_256_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}};
+#define BLAKE2S_ALG(name, driver_name, digest_size) \
+	{ \
+		.base.cra_name = name, \
+		.base.cra_driver_name = driver_name, \
+		.base.cra_priority = 100, \
+		.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
+		.base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
+		.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
+		.base.cra_module = THIS_MODULE, \
+		.digestsize = digest_size, \
+		.setkey = crypto_blake2s_setkey, \
+		.init = crypto_blake2s_init, \
+		.update = crypto_blake2s_update, \
+		.final = crypto_blake2s_final, \
+		.descsize = sizeof(struct blake2s_state), \
+	}
+
+static struct shash_alg blake2s_algs[] = {
+	BLAKE2S_ALG("blake2s-128", "blake2s-128-generic",
+		    BLAKE2S_128_HASH_SIZE),
+	BLAKE2S_ALG("blake2s-160", "blake2s-160-generic",
+		    BLAKE2S_160_HASH_SIZE),
+	BLAKE2S_ALG("blake2s-224", "blake2s-224-generic",
+		    BLAKE2S_224_HASH_SIZE),
+	BLAKE2S_ALG("blake2s-256", "blake2s-256-generic",
+		    BLAKE2S_256_HASH_SIZE),
+};
 
 static int __init blake2s_mod_init(void)
 {

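For reference, one entry of the new blake2s_algs[] array expands to roughly the
following initializer, shown here for the 128-bit variant (an illustrative
expansion derived from the macro above, not part of the patch itself):

	/* BLAKE2S_ALG("blake2s-128", "blake2s-128-generic", BLAKE2S_128_HASH_SIZE) */
	{
		.base.cra_name = "blake2s-128",
		.base.cra_driver_name = "blake2s-128-generic",
		.base.cra_priority = 100,
		.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
		.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
		.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
		.base.cra_module = THIS_MODULE,
		.digestsize = BLAKE2S_128_HASH_SIZE,
		.setkey = crypto_blake2s_setkey,
		.init = crypto_blake2s_init,
		.update = crypto_blake2s_update,
		.final = crypto_blake2s_final,
		.descsize = sizeof(struct blake2s_state),
	}
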
From patchwork Wed Dec 23 08:09:51 2020
X-Patchwork-Id: 11987849
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 02/14] crypto: x86/blake2s - define shash_alg structs using macros
Date: Wed, 23 Dec 2020 00:09:51 -0800
Message-Id: <20201223081003.373663-3-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size. So, avoid
code duplication by using a macro to define these structs.

Acked-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 arch/x86/crypto/blake2s-glue.c | 84 ++++++++++------------------------
 1 file changed, 23 insertions(+), 61 deletions(-)

diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c
index c025a01cf7084..4dcb2ee89efc9 100644
--- a/arch/x86/crypto/blake2s-glue.c
+++ b/arch/x86/crypto/blake2s-glue.c
@@ -129,67 +129,29 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
 	return 0;
 }
 
-static struct shash_alg blake2s_algs[] = {{
-	.base.cra_name = "blake2s-128",
-	.base.cra_driver_name = "blake2s-128-x86",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_128_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}, {
-	.base.cra_name = "blake2s-160",
-	.base.cra_driver_name = "blake2s-160-x86",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_160_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}, {
-	.base.cra_name = "blake2s-224",
-	.base.cra_driver_name = "blake2s-224-x86",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_224_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}, {
-	.base.cra_name = "blake2s-256",
-	.base.cra_driver_name = "blake2s-256-x86",
-	.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
-	.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx),
-	.base.cra_priority = 200,
-	.base.cra_blocksize = BLAKE2S_BLOCK_SIZE,
-	.base.cra_module = THIS_MODULE,
-
-	.digestsize = BLAKE2S_256_HASH_SIZE,
-	.setkey = crypto_blake2s_setkey,
-	.init = crypto_blake2s_init,
-	.update = crypto_blake2s_update,
-	.final = crypto_blake2s_final,
-	.descsize = sizeof(struct blake2s_state),
-}};
+#define BLAKE2S_ALG(name, driver_name, digest_size) \
+	{ \
+		.base.cra_name = name, \
+		.base.cra_driver_name = driver_name, \
+		.base.cra_priority = 200, \
+		.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
+		.base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
+		.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
+		.base.cra_module = THIS_MODULE, \
+		.digestsize = digest_size, \
+		.setkey = crypto_blake2s_setkey, \
+		.init = crypto_blake2s_init, \
+		.update = crypto_blake2s_update, \
+		.final = crypto_blake2s_final, \
+		.descsize = sizeof(struct blake2s_state), \
+	}
+
+static struct shash_alg blake2s_algs[] = {
+	BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE),
+	BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE),
+	BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE),
+	BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE),
+};
 
 static int __init blake2s_mod_init(void)
 {

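The blake2s_mod_init() bodies are not shown in these hunks. Registering a whole
shash_alg array is normally a single call; a sketch of the expected shape is
below (an assumption for illustration only; the real init functions may also
gate registration on CPU features or on CONFIG options before registering):

	static int __init blake2s_mod_init(void)
	{
		/* Register all four BLAKE2s digest sizes in one call. */
		return crypto_register_shashes(blake2s_algs,
					       ARRAY_SIZE(blake2s_algs));
	}
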
From patchwork Wed Dec 23 08:09:52 2020
X-Patchwork-Id: 11987843
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 03/14] crypto: blake2s - remove unneeded includes
Date: Wed, 23 Dec 2020 00:09:52 -0800
Message-Id: <20201223081003.373663-4-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

It doesn't make sense for the generic implementation of BLAKE2s to pull
in headers that would only be useful in an architecture-specific
implementation. Remove these unnecessary includes.

Acked-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 crypto/blake2s_generic.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c
index e3aa6e7ff3d83..b89536c3671cf 100644
--- a/crypto/blake2s_generic.c
+++ b/crypto/blake2s_generic.c
@@ -4,11 +4,9 @@
  */
 
 #include
-#include
 #include
 
 #include
-#include
 #include
 #include

From patchwork Wed Dec 23 08:09:53 2020
X-Patchwork-Id: 11987851
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 04/14] crypto: blake2s - move update and final logic to internal/blake2s.h
Date: Wed, 23 Dec 2020 00:09:53 -0800
Message-Id: <20201223081003.373663-5-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

Move most of blake2s_update() and blake2s_final() into new inline
functions __blake2s_update() and __blake2s_final() in
include/crypto/internal/blake2s.h so that this logic can be shared by
the shash helper functions. This will avoid duplicating this logic
between the library and shash implementations.

Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 include/crypto/internal/blake2s.h | 41 ++++++++++++++++++++++++++
 lib/crypto/blake2s.c              | 48 ++++++-------------------
 2 files changed, 49 insertions(+), 40 deletions(-)

diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h
index 6e376ae6b6b58..42deba4b8ceef 100644
--- a/include/crypto/internal/blake2s.h
+++ b/include/crypto/internal/blake2s.h
@@ -4,6 +4,7 @@
 #define BLAKE2S_INTERNAL_H
 
 #include
+#include
 
 struct blake2s_tfm_ctx {
 	u8 key[BLAKE2S_KEY_SIZE];
@@ -23,4 +24,44 @@ static inline void blake2s_set_lastblock(struct blake2s_state *state)
 	state->f[0] = -1;
 }
 
+typedef void (*blake2s_compress_t)(struct blake2s_state *state,
+				   const u8 *block, size_t nblocks, u32 inc);
+
+static inline void __blake2s_update(struct blake2s_state *state,
+				    const u8 *in, size_t inlen,
+				    blake2s_compress_t compress)
+{
+	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
+
+	if (unlikely(!inlen))
+		return;
+	if (inlen > fill) {
+		memcpy(state->buf + state->buflen, in, fill);
+		(*compress)(state, state->buf, 1, BLAKE2S_BLOCK_SIZE);
+		state->buflen = 0;
+		in += fill;
+		inlen -= fill;
+	}
+	if (inlen > BLAKE2S_BLOCK_SIZE) {
+		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
+		/* Hash one less (full) block than strictly possible */
+		(*compress)(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE);
+		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
+	}
+	memcpy(state->buf + state->buflen, in, inlen);
+	state->buflen += inlen;
+}
+
+static inline void __blake2s_final(struct blake2s_state *state, u8 *out,
+				   blake2s_compress_t compress)
+{
+	blake2s_set_lastblock(state);
+	memset(state->buf + state->buflen, 0,
+	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
+	(*compress)(state, state->buf, 1, state->buflen);
+	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
+	memcpy(out, state->h, state->outlen);
+}
+
 #endif /* BLAKE2S_INTERNAL_H */
diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c
index 6a4b6b78d630f..c64ac8bfb6a97 100644
--- a/lib/crypto/blake2s.c
+++ b/lib/crypto/blake2s.c
@@ -15,55 +15,23 @@
 #include
 #include
 #include
-#include
+
+#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S)
+# define blake2s_compress blake2s_compress_arch
+#else
+# define blake2s_compress blake2s_compress_generic
+#endif
 
 void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen)
 {
-	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
-
-	if (unlikely(!inlen))
-		return;
-	if (inlen > fill) {
-		memcpy(state->buf + state->buflen, in, fill);
-		if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S))
-			blake2s_compress_arch(state, state->buf, 1,
-					      BLAKE2S_BLOCK_SIZE);
-		else
-			blake2s_compress_generic(state, state->buf, 1,
-						 BLAKE2S_BLOCK_SIZE);
-		state->buflen = 0;
-		in += fill;
-		inlen -= fill;
-	}
-	if (inlen > BLAKE2S_BLOCK_SIZE) {
-		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
-		/* Hash one less (full) block than strictly possible */
-		if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S))
-			blake2s_compress_arch(state, in, nblocks - 1,
-					      BLAKE2S_BLOCK_SIZE);
-		else
-			blake2s_compress_generic(state, in, nblocks - 1,
-						 BLAKE2S_BLOCK_SIZE);
-		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
-		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
-	}
-	memcpy(state->buf + state->buflen, in, inlen);
-	state->buflen += inlen;
+	__blake2s_update(state, in, inlen, blake2s_compress);
 }
 EXPORT_SYMBOL(blake2s_update);
 
 void blake2s_final(struct blake2s_state *state, u8 *out)
 {
 	WARN_ON(IS_ENABLED(DEBUG) && !out);
-	blake2s_set_lastblock(state);
-	memset(state->buf + state->buflen, 0,
-	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
-	if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S))
-		blake2s_compress_arch(state, state->buf, 1, state->buflen);
-	else
-		blake2s_compress_generic(state, state->buf, 1, state->buflen);
-	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
-	memcpy(out, state->h, state->outlen);
+	__blake2s_final(state, out, blake2s_compress);
 	memzero_explicit(state, sizeof(*state));
 }
 EXPORT_SYMBOL(blake2s_final);

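For context, the library API whose internals are factored out here is used
roughly as follows (a minimal usage sketch; "data" and "data_len" are
placeholders, not taken from the patch):

	u8 digest[BLAKE2S_HASH_SIZE];
	struct blake2s_state state;

	blake2s_init(&state, sizeof(digest));
	blake2s_update(&state, data, data_len); /* buffers and compresses full blocks */
	blake2s_final(&state, digest);          /* pads the last block, then wipes the state */
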
From patchwork Wed Dec 23 08:09:54 2020
X-Patchwork-Id: 11987863
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 05/14] crypto: blake2s - share the "shash" API boilerplate code
Date: Wed, 23 Dec 2020 00:09:54 -0800
Message-Id: <20201223081003.373663-6-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

Add helper functions for shash implementations of BLAKE2s to
include/crypto/internal/blake2s.h, taking advantage of
__blake2s_update() and __blake2s_final() that were added by the
previous patch to share more code between the library and shash
implementations.

crypto_blake2s_setkey() and crypto_blake2s_init() are usable as
shash_alg::setkey and shash_alg::init directly, while
crypto_blake2s_update() and crypto_blake2s_final() take an extra
'blake2s_compress_t' function pointer parameter. This allows the
implementation of the compression function to be overridden, which is
the only part that optimized implementations really care about.

The new functions are inline functions (similar to those in
sha1_base.h, sha256_base.h, and sm3_base.h) because this avoids needing
to add a new module blake2s_helpers.ko, they aren't *too* long, and
this avoids indirect calls which are expensive these days. Note that
they can't go in blake2s_generic.ko, as that would require selecting
CRYPTO_BLAKE2S from CRYPTO_BLAKE2S_X86, which would cause a recursive
dependency.

Finally, use these new helper functions in the x86 implementation of
BLAKE2s. (This part should be a separate patch, but unfortunately the
x86 implementation used the exact same function names like
"crypto_blake2s_update()", so it had to be updated at the same time.)

Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 arch/x86/crypto/blake2s-glue.c    | 74 +++---------------------------
 crypto/blake2s_generic.c          | 76 ++++---------------------------
 include/crypto/internal/blake2s.h | 65 ++++++++++++++++++++++++--
 3 files changed, 76 insertions(+), 139 deletions(-)

diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c
index 4dcb2ee89efc9..a40365ab301ee 100644
--- a/arch/x86/crypto/blake2s-glue.c
+++ b/arch/x86/crypto/blake2s-glue.c
@@ -58,75 +58,15 @@ void blake2s_compress_arch(struct blake2s_state *state,
 }
 EXPORT_SYMBOL(blake2s_compress_arch);
 
-static int crypto_blake2s_setkey(struct crypto_shash *tfm, const u8 *key,
-				 unsigned int keylen)
+static int crypto_blake2s_update_x86(struct shash_desc *desc,
+				     const u8 *in, unsigned int inlen)
 {
-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
-
-	if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
-		return -EINVAL;
-
-	memcpy(tctx->key, key, keylen);
-	tctx->keylen = keylen;
-
-	return 0;
-}
-
-static int crypto_blake2s_init(struct shash_desc *desc)
-{
-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
-	struct blake2s_state *state = shash_desc_ctx(desc);
-	const int outlen = crypto_shash_digestsize(desc->tfm);
-
-	if (tctx->keylen)
-		blake2s_init_key(state, outlen, tctx->key, tctx->keylen);
-	else
-		blake2s_init(state, outlen);
-
-	return 0;
-}
-
-static int crypto_blake2s_update(struct shash_desc *desc, const u8 *in,
-				 unsigned int inlen)
-{
-	struct blake2s_state *state = shash_desc_ctx(desc);
-	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
-
-	if (unlikely(!inlen))
-		return 0;
-	if (inlen > fill) {
-		memcpy(state->buf + state->buflen, in, fill);
-		blake2s_compress_arch(state, state->buf, 1, BLAKE2S_BLOCK_SIZE);
-		state->buflen = 0;
-		in += fill;
-		inlen -= fill;
-	}
-	if (inlen > BLAKE2S_BLOCK_SIZE) {
-		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
-		/* Hash one less (full) block than strictly possible */
-		blake2s_compress_arch(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE);
-		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
-		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
-	}
-	memcpy(state->buf + state->buflen, in, inlen);
-	state->buflen += inlen;
-
-	return 0;
+	return crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch);
 }
 
-static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
+static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
 {
-	struct blake2s_state *state = shash_desc_ctx(desc);
-
-	blake2s_set_lastblock(state);
-	memset(state->buf + state->buflen, 0,
-	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
-	blake2s_compress_arch(state, state->buf, 1, state->buflen);
-	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
-	memcpy(out, state->h, state->outlen);
-	memzero_explicit(state, sizeof(*state));
-
-	return 0;
+	return crypto_blake2s_final(desc, out, blake2s_compress_arch);
 }
 
 #define BLAKE2S_ALG(name, driver_name, digest_size) \
@@ -141,8 +81,8 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
 		.digestsize = digest_size, \
 		.setkey = crypto_blake2s_setkey, \
 		.init = crypto_blake2s_init, \
-		.update = crypto_blake2s_update, \
-		.final = crypto_blake2s_final, \
+		.update = crypto_blake2s_update_x86, \
+		.final = crypto_blake2s_final_x86, \
 		.descsize = sizeof(struct blake2s_state), \
 	}
diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c
index b89536c3671cf..72fe480f9bd67 100644
--- a/crypto/blake2s_generic.c
+++ b/crypto/blake2s_generic.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /*
+ * shash interface to the generic implementation of BLAKE2s
+ *
  * Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
  */
 
@@ -10,75 +12,15 @@
 #include
 #include
 
-static int crypto_blake2s_setkey(struct crypto_shash *tfm, const u8 *key,
-				 unsigned int keylen)
+static int crypto_blake2s_update_generic(struct shash_desc *desc,
+					 const u8 *in, unsigned int inlen)
 {
-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
-
-	if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
-		return -EINVAL;
-
-	memcpy(tctx->key, key, keylen);
-	tctx->keylen = keylen;
-
-	return 0;
+	return crypto_blake2s_update(desc, in, inlen, blake2s_compress_generic);
 }
 
-static int crypto_blake2s_init(struct shash_desc *desc)
+static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out)
 {
-	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
-	struct blake2s_state *state = shash_desc_ctx(desc);
-	const int outlen = crypto_shash_digestsize(desc->tfm);
-
-	if (tctx->keylen)
-		blake2s_init_key(state, outlen, tctx->key, tctx->keylen);
-	else
-		blake2s_init(state, outlen);
-
-	return 0;
-}
-
-static int crypto_blake2s_update(struct shash_desc *desc, const u8 *in,
-				 unsigned int inlen)
-{
-	struct blake2s_state *state = shash_desc_ctx(desc);
-	const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen;
-
-	if (unlikely(!inlen))
-		return 0;
-	if (inlen > fill) {
-		memcpy(state->buf + state->buflen, in, fill);
-		blake2s_compress_generic(state, state->buf, 1, BLAKE2S_BLOCK_SIZE);
-		state->buflen = 0;
-		in += fill;
-		inlen -= fill;
-	}
-	if (inlen > BLAKE2S_BLOCK_SIZE) {
-		const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE);
-		/* Hash one less (full) block than strictly possible */
-		blake2s_compress_generic(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE);
-		in += BLAKE2S_BLOCK_SIZE * (nblocks - 1);
-		inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1);
-	}
-	memcpy(state->buf + state->buflen, in, inlen);
-	state->buflen += inlen;
-
-	return 0;
-}
-
-static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
-{
-	struct blake2s_state *state = shash_desc_ctx(desc);
-
-	blake2s_set_lastblock(state);
-	memset(state->buf + state->buflen, 0,
-	       BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */
-	blake2s_compress_generic(state, state->buf, 1, state->buflen);
-	cpu_to_le32_array(state->h, ARRAY_SIZE(state->h));
-	memcpy(out, state->h, state->outlen);
-	memzero_explicit(state, sizeof(*state));
-
-	return 0;
+	return crypto_blake2s_final(desc, out, blake2s_compress_generic);
 }
 
 #define BLAKE2S_ALG(name, driver_name, digest_size) \
@@ -93,8 +35,8 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out)
 		.digestsize = digest_size, \
 		.setkey = crypto_blake2s_setkey, \
 		.init = crypto_blake2s_init, \
-		.update = crypto_blake2s_update, \
-		.final = crypto_blake2s_final, \
+		.update = crypto_blake2s_update_generic, \
+		.final = crypto_blake2s_final_generic, \
 		.descsize = sizeof(struct blake2s_state), \
 	}
diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h
index 42deba4b8ceef..2ea0a8f5e7f41 100644
--- a/include/crypto/internal/blake2s.h
+++ b/include/crypto/internal/blake2s.h
@@ -1,16 +1,16 @@
 /* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/*
+ * Helper functions for BLAKE2s implementations.
+ * Keep this in sync with the corresponding BLAKE2b header.
+ */
 #ifndef BLAKE2S_INTERNAL_H
 #define BLAKE2S_INTERNAL_H
 
 #include
+#include
 #include
 
-struct blake2s_tfm_ctx {
-	u8 key[BLAKE2S_KEY_SIZE];
-	unsigned int keylen;
-};
-
 void blake2s_compress_generic(struct blake2s_state *state, const u8 *block,
			      size_t nblocks, const u32 inc);
 
@@ -27,6 +27,8 @@ static inline void blake2s_set_lastblock(struct blake2s_state *state)
 typedef void (*blake2s_compress_t)(struct blake2s_state *state,
				    const u8 *block, size_t nblocks, u32 inc);
 
+/* Helper functions for BLAKE2s shared by the library and shash APIs */
+
 static inline void __blake2s_update(struct blake2s_state *state,
				     const u8 *in, size_t inlen,
				     blake2s_compress_t compress)
@@ -64,4 +66,57 @@ static inline void __blake2s_final(struct blake2s_state *state, u8 *out,
 	memcpy(out, state->h, state->outlen);
 }
 
+/* Helper functions for shash implementations of BLAKE2s */
+
+struct blake2s_tfm_ctx {
+	u8 key[BLAKE2S_KEY_SIZE];
+	unsigned int keylen;
+};
+
+static inline int crypto_blake2s_setkey(struct crypto_shash *tfm,
+					const u8 *key, unsigned int keylen)
+{
+	struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm);
+
+	if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE)
+		return -EINVAL;
+
+	memcpy(tctx->key, key, keylen);
+	tctx->keylen = keylen;
+
+	return 0;
+}
+
+static inline int crypto_blake2s_init(struct shash_desc *desc)
+{
+	const struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
+	struct blake2s_state *state = shash_desc_ctx(desc);
+	unsigned int outlen = crypto_shash_digestsize(desc->tfm);
+
+	if (tctx->keylen)
+		blake2s_init_key(state, outlen, tctx->key, tctx->keylen);
+	else
+		blake2s_init(state, outlen);
+	return 0;
+}
+
+static inline int crypto_blake2s_update(struct shash_desc *desc,
+					const u8 *in, unsigned int inlen,
+					blake2s_compress_t compress)
+{
+	struct blake2s_state *state = shash_desc_ctx(desc);
+
+	__blake2s_update(state, in, inlen, compress);
+	return 0;
+}
+
+static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out,
+				       blake2s_compress_t compress)
+{
+	struct blake2s_state *state = shash_desc_ctx(desc);
+
+	__blake2s_final(state, out, compress);
+	return 0;
+}
+
 #endif /* BLAKE2S_INTERNAL_H */

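To illustrate the intended reuse, a hypothetical optimized driver (the "myarch"
names below are made up) would only need to supply its own compression function
plus two thin wrappers, and could use crypto_blake2s_setkey() and
crypto_blake2s_init() from the header unchanged, exactly as the x86 glue code
does above:

	static int crypto_blake2s_update_myarch(struct shash_desc *desc,
						const u8 *in, unsigned int inlen)
	{
		/* Forward to the shared helper, plugging in the arch compress fn. */
		return crypto_blake2s_update(desc, in, inlen, blake2s_compress_myarch);
	}

	static int crypto_blake2s_final_myarch(struct shash_desc *desc, u8 *out)
	{
		return crypto_blake2s_final(desc, out, blake2s_compress_myarch);
	}
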
From patchwork Wed Dec 23 08:09:55 2020
X-Patchwork-Id: 11987853
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 06/14] crypto: blake2s - optimize blake2s initialization
Date: Wed, 23 Dec 2020 00:09:55 -0800
Message-Id: <20201223081003.373663-7-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

If no key was provided, then don't waste time initializing the block
buffer, as its initial contents won't be used.

Also, make crypto_blake2s_init() and blake2s() call a single internal
function __blake2s_init() which treats the key as optional, rather than
conditionally calling blake2s_init() or blake2s_init_key(). This
reduces the compiled code size, as previously both blake2s_init() and
blake2s_init_key() were being inlined into these two callers, except
when the key size passed to blake2s() was a compile-time constant.

These optimizations aren't that significant for BLAKE2s. However, the
equivalent optimizations will be more significant for BLAKE2b, as
everything is twice as big in BLAKE2b. And it's good to keep things
consistent rather than making optimizations for BLAKE2b but not
BLAKE2s.

Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 include/crypto/blake2s.h          | 53 ++++++++++++++++---------------
 include/crypto/internal/blake2s.h |  5 +--
 2 files changed, 28 insertions(+), 30 deletions(-)

diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h
index b471deac28ff8..734ed22b7a6aa 100644
--- a/include/crypto/blake2s.h
+++ b/include/crypto/blake2s.h
@@ -43,29 +43,34 @@ enum blake2s_iv {
 	BLAKE2S_IV7 = 0x5BE0CD19UL,
 };
 
-void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen);
-void blake2s_final(struct blake2s_state *state, u8 *out);
-
-static inline void blake2s_init_param(struct blake2s_state *state,
-				      const u32 param)
+static inline void __blake2s_init(struct blake2s_state *state, size_t outlen,
+				  const void *key, size_t keylen)
 {
-	*state = (struct blake2s_state){{
-		BLAKE2S_IV0 ^ param,
-		BLAKE2S_IV1,
-		BLAKE2S_IV2,
-		BLAKE2S_IV3,
-		BLAKE2S_IV4,
-		BLAKE2S_IV5,
-		BLAKE2S_IV6,
-		BLAKE2S_IV7,
-	}};
+	state->h[0] = BLAKE2S_IV0 ^ (0x01010000 | keylen << 8 | outlen);
+	state->h[1] = BLAKE2S_IV1;
+	state->h[2] = BLAKE2S_IV2;
+	state->h[3] = BLAKE2S_IV3;
+	state->h[4] = BLAKE2S_IV4;
+	state->h[5] = BLAKE2S_IV5;
+	state->h[6] = BLAKE2S_IV6;
+	state->h[7] = BLAKE2S_IV7;
+	state->t[0] = 0;
+	state->t[1] = 0;
+	state->f[0] = 0;
+	state->f[1] = 0;
+	state->buflen = 0;
+	state->outlen = outlen;
+	if (keylen) {
+		memcpy(state->buf, key, keylen);
+		memset(&state->buf[keylen], 0, BLAKE2S_BLOCK_SIZE - keylen);
+		state->buflen = BLAKE2S_BLOCK_SIZE;
+	}
 }
 
 static inline void blake2s_init(struct blake2s_state *state,
				const size_t outlen)
 {
-	blake2s_init_param(state, 0x01010000 | outlen);
-	state->outlen = outlen;
+	__blake2s_init(state, outlen, NULL, 0);
 }
 
 static inline void blake2s_init_key(struct blake2s_state *state,
@@ -75,12 +80,12 @@ static inline void blake2s_init_key(struct blake2s_state *state,
 	WARN_ON(IS_ENABLED(DEBUG) && (!outlen || outlen > BLAKE2S_HASH_SIZE ||
		!key || !keylen || keylen > BLAKE2S_KEY_SIZE));
 
-	blake2s_init_param(state, 0x01010000 | keylen << 8 | outlen);
-	memcpy(state->buf, key, keylen);
-	state->buflen = BLAKE2S_BLOCK_SIZE;
-	state->outlen = outlen;
+	__blake2s_init(state, outlen, key, keylen);
 }
 
+void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen);
+void blake2s_final(struct blake2s_state *state, u8 *out);
+
 static inline void blake2s(u8 *out, const u8 *in, const u8 *key,
			   const size_t outlen, const size_t inlen,
			   const size_t keylen)
@@ -91,11 +96,7 @@ static inline void blake2s(u8 *out, const u8 *in, const u8 *key,
		outlen > BLAKE2S_HASH_SIZE || keylen > BLAKE2S_KEY_SIZE ||
		(!key && keylen)));
 
-	if (keylen)
-		blake2s_init_key(&state, outlen, key, keylen);
-	else
-		blake2s_init(&state, outlen);
-
+	__blake2s_init(&state, outlen, key, keylen);
 	blake2s_update(&state, in, inlen);
 	blake2s_final(&state, out);
 }
diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h
index 2ea0a8f5e7f41..867ef3753f5c1 100644
--- a/include/crypto/internal/blake2s.h
+++ b/include/crypto/internal/blake2s.h
@@ -93,10 +93,7 @@ static inline int crypto_blake2s_init(struct shash_desc *desc)
 	struct blake2s_state *state = shash_desc_ctx(desc);
 	unsigned int outlen = crypto_shash_digestsize(desc->tfm);
 
-	if (tctx->keylen)
-		blake2s_init_key(state, outlen, tctx->key, tctx->keylen);
-	else
-		blake2s_init(state, outlen);
+	__blake2s_init(state, outlen, tctx->key, tctx->keylen);
 	return 0;
 }

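As a worked example of the parameter-block math in __blake2s_init() (an
illustration, not part of the patch): for an unkeyed 32-byte digest, keylen is
0 and outlen is 32, so the first word of the state becomes

	state->h[0] = BLAKE2S_IV0 ^ (0x01010000 | 0 << 8 | 32); /* BLAKE2S_IV0 ^ 0x01010020 */

and, because keylen is zero, the copy of the key into state->buf is skipped
entirely, which is exactly the work this patch avoids for unkeyed hashing.
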
From patchwork Wed Dec 23 08:09:56 2020
X-Patchwork-Id: 11987847
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Jason A. Donenfeld, Herbert Xu, David Sterba, Ard Biesheuvel,
 linux-arm-kernel@lists.infradead.org, Paul Crowley
Subject: [PATCH v3 07/14] crypto: blake2s - add comment for blake2s_state fields
Date: Wed, 23 Dec 2020 00:09:56 -0800
Message-Id: <20201223081003.373663-8-ebiggers@kernel.org>
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>

The first three fields of 'struct blake2s_state' are used in assembly
code, which isn't immediately obvious, so add a comment to this effect.

Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 include/crypto/blake2s.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h
index 734ed22b7a6aa..f1c8330a61a91 100644
--- a/include/crypto/blake2s.h
+++ b/include/crypto/blake2s.h
@@ -24,6 +24,7 @@ enum blake2s_lengths {
 };
 
 struct blake2s_state {
+	/* 'h', 't', and 'f' are used in assembly code, so keep them as-is. */
 	u32 h[8];
 	u32 t[2];
 	u32 f[2];

*/ u32 h[8]; u32 t[2]; u32 f[2]; From patchwork Wed Dec 23 08:09:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987855 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06F8BC433E0 for ; Wed, 23 Dec 2020 08:15:19 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B8B362070B for ; Wed, 23 Dec 2020 08:15:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B8B362070B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=CJsx3ANfMYK4925Oi/wjMn7oGunaHtqc8scK/wdjDsk=; b=VmT5p4L6CFn5bvR7MsFTAyj9s DNgcdmx6MNr7NrebhnoDdRA/VSJvXT5hCOAs/JXY5qeOiJssdB/raacUla6OYhCfeHaHQsczUtbrx v/ctRWm/R4p/9psWa9nza9LCDHXX6KcvttJoesBrYu4qfsofhuD4CThcU85emtobdt0Dapm/NQRJe JroFAzhx9tsuSp+PYi0fmWakh+9MbjjiFjtbVNSqzZqCiTMsVMRBdE/89fIKSv+P/BluStygZil0V e6F+CGV+Tv9R5cvxO7sel3S4ePR0XahBwFzg9uu0fjc73vTgQvmj9IWqfqKHicAjtk0oHHosJtBnG 9qyxE40TQ==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzHH-0007Df-BZ; Wed, 23 Dec 2020 08:13:47 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzGT-0006xD-Iy for linux-arm-kernel@lists.infradead.org; Wed, 23 Dec 2020 08:13:03 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id AA16B224D4; Wed, 23 Dec 2020 08:12:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1608711176; bh=q4F9axpwjHT+hZCneYUehP2waWRDVZo8NQHm18Rvv8A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bRgb0YHY7nLLWkRGRYheVHAwG0hkbfuZl72KhzElKOFinUleUQ4wTPZULrx1EOiS/ 91+jd1LkKsWFcRWBQ6tw0Nu+k42lFGIkiCcMuuopKoNLR+ukNCuyykggGUqGbB/FeE VF38QzIeILBMrE3hoJ6k1oMOLz75rn9WPCm0xM0TMtmCVEb1nGNGvUOW+h2FSqs+yX LMKs71l1HfIu2jxyJc8b81qEWoHSHbVJax1NGm4Z1yalmGhd9ej3Zd7R7FXqilYTUj +wVRwMdfL4Q+1oe/05+qbxglVphIa86kPAeP4w7yY+2d9xeK5CYXCnDtJDL/D9eWYB NVfRIEmJURd2Q== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH v3 08/14] crypto: blake2s - adjust include guard naming Date: Wed, 23 Dec 2020 00:09:57 -0800 Message-Id: <20201223081003.373663-9-ebiggers@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org> 
References: <20201223081003.373663-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201223_031257_769327_A5C873F2 X-CRM114-Status: GOOD ( 12.46 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Use the full path in the include guards for the BLAKE2s headers to avoid ambiguity and to match the convention for most files in include/crypto/. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- include/crypto/blake2s.h | 6 +++--- include/crypto/internal/blake2s.h | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index f1c8330a61a91..3f06183c2d804 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -3,8 +3,8 @@ * Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. */ -#ifndef BLAKE2S_H -#define BLAKE2S_H +#ifndef _CRYPTO_BLAKE2S_H +#define _CRYPTO_BLAKE2S_H #include #include @@ -105,4 +105,4 @@ static inline void blake2s(u8 *out, const u8 *in, const u8 *key, void blake2s256_hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen, const size_t keylen); -#endif /* BLAKE2S_H */ +#endif /* _CRYPTO_BLAKE2S_H */ diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 867ef3753f5c1..8e50d487500f2 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -4,8 +4,8 @@ * Keep this in sync with the corresponding BLAKE2b header. 
*/ -#ifndef BLAKE2S_INTERNAL_H -#define BLAKE2S_INTERNAL_H +#ifndef _CRYPTO_INTERNAL_BLAKE2S_H +#define _CRYPTO_INTERNAL_BLAKE2S_H #include #include @@ -116,4 +116,4 @@ static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out, return 0; } -#endif /* BLAKE2S_INTERNAL_H */ +#endif /* _CRYPTO_INTERNAL_BLAKE2S_H */ From patchwork Wed Dec 23 08:09:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987857 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 33BF1C433E0 for ; Wed, 23 Dec 2020 08:15:42 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 82C4B2070B for ; Wed, 23 Dec 2020 08:15:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 82C4B2070B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=tO0heb28aqLrfeW2HE/JbD/OF4e9tnZVO2Yi7jE8JGo=; b=Ba5fV5+UR4MoR8dSbGREEloMC 276Jqkcu/kBIJ0mLg80c4sqjKT9j0e77czApEi5hoDcQPIZZ1n8KTPueq7YyCKIAnxwmZCyGEr8Oh C3Zd3ZuGqzzpYvIvBHvSpFLGts3U2m8+3NGluJ8UkVOHe5gDqI1zwKcsYO8Gsnh99yzsBPFowzxJ8 Cv9I4dBeB5lQKMdO6sKOQ6+YpNqN3aPl2THy/QM63KF+4tSAGEm7xB/Sq23fCM6DD4N9bMF1CmEux etKIF3Q9biSuS7IzV60IWc+OPZIjbytv2RqMMYjEEsr+tXDUGyk6QHs+Oj9bbLRRhH2kBqE+DiV1T Uihg/28UQ==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzHb-0007MT-GR; Wed, 23 Dec 2020 08:14:08 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzGU-0006yM-QQ for linux-arm-kernel@lists.infradead.org; Wed, 23 Dec 2020 08:13:03 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 21E37224DF; Wed, 23 Dec 2020 08:12:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1608711177; bh=hdS/nZMIHIRlLDU4jx0otHFHReXXhN43ibKqK9h7cxA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OgkNBHtTnftbt2Yj+NrWKD07vzQfqRiCOm5RYBPIAWygx4wk49tKLg9uxWVW8BapX QShd3x1h26NWOwlSVXGibc8YCZ0VlCSh/ACErt1bh3qJE2TM9Rzj0GDkTT2iPKPC92 vV4ecO7uvjODQkAhWBRUsdx5JyDOgWOocYmeUd6sIjDo85udnTQ5ZTge6Tpv04BA8K RoSanYrtI6Wecg48nt84HBLC+VFo6foFGLpq2geneq80fRwN7zzzsxDo6K/kr6MqCz AJO/xTaRkwxSrNmZOindPqu18cqbH4w2kq1Nzke6SNKv5ak9h19bupG+w+7w7hgGYx HQpo2hXw7N95w== From: Eric 
Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH v3 09/14] crypto: blake2s - include instead of Date: Wed, 23 Dec 2020 00:09:58 -0800 Message-Id: <20201223081003.373663-10-ebiggers@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org> References: <20201223081003.373663-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201223_031259_074570_DBC61AB3 X-CRM114-Status: GOOD ( 11.14 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Address the following checkpatch warning: WARNING: Use #include instead of Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- include/crypto/blake2s.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index 3f06183c2d804..bc3fb59442ce5 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -6,12 +6,11 @@ #ifndef _CRYPTO_BLAKE2S_H #define _CRYPTO_BLAKE2S_H +#include #include #include #include -#include - enum blake2s_lengths { BLAKE2S_BLOCK_SIZE = 64, BLAKE2S_HASH_SIZE = 32, From patchwork Wed Dec 23 08:09:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987865 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 502E4C433DB for ; Wed, 23 Dec 2020 08:16:24 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D4B7F2070B for ; Wed, 23 Dec 2020 08:16:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D4B7F2070B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=D4zRvVaHJ7/tqo0pHkTL0u8KtgfNAfRJ+bj+eIBlWRM=; b=oy2H2H4g3M03pMuHnVROdxufJ dajERJWMJSG6cWRF6VpwD5C7/GSNx7SvuwFxa7fBS9ciFzSLV89WJ+bm0stwq3FSBuIrQTdIKIo9O nPUdKg9B/2RZzG5ZLsTzQ9gPrPwx4dkMB59pyMXGK84obCw5HrHMslmPIEPc1wWGAg6Xu8xxKY4q/ 
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH v3 10/14] crypto: arm/blake2s - add ARM scalar optimized BLAKE2s
Date: Wed, 23 Dec 2020 00:09:59 -0800
Message-Id: <20201223081003.373663-11-ebiggers@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org>
References: <20201223081003.373663-1-ebiggers@kernel.org>
MIME-Version: 1.0
Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley

From: Eric Biggers

Add an ARM scalar optimized implementation of BLAKE2s.

NEON isn't very useful for BLAKE2s because the BLAKE2s block size is too small for NEON to help. Each NEON instruction would depend on the previous one, resulting in poor performance.

With scalar instructions, on the other hand, we can take advantage of ARM's "free" rotations (like I did in chacha-scalar-core.S) to get an implementation that runs much faster than the C implementation.

Performance results on Cortex-A7 in cycles per byte using the shash API:

    4096-byte messages:
        blake2s-256-arm:     18.8
        blake2s-256-generic: 26.0
    500-byte messages:
        blake2s-256-arm:     20.3
        blake2s-256-generic: 27.9
    100-byte messages:
        blake2s-256-arm:     29.7
        blake2s-256-generic: 39.2
    32-byte messages:
        blake2s-256-arm:     50.6
        blake2s-256-generic: 66.2

Except on very short messages, this is still slower than the NEON implementation of BLAKE2b which I've written; that is 14.0, 16.4, 25.8, and 76.1 cpb on 4096, 500, 100, and 32-byte messages, respectively. However, optimized BLAKE2s is useful for cases where BLAKE2s is used instead of BLAKE2b, such as WireGuard.
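For reference, the quarterround ("G function") that the assembly below implements looks like the following minimal standalone C sketch. This is illustrative only, not code from this patch; ror32() is open-coded here so the snippet builds on its own. The rotation amounts (16, 12, 8, 7) are exactly the ones the ARM code folds into the shifted second operand of add/eor rather than issuing separate rotate instructions:

#include <stdint.h>

/* 32-bit right rotate; the kernel's ror32() helper does the same thing. */
static inline uint32_t ror32(uint32_t v, unsigned int n)
{
	return (v >> n) | (v << (32 - n));
}

/*
 * One BLAKE2s quarterround: mixes one column or diagonal of the state v[16]
 * using two message words m0 and m1.  The ARM assembly gets these rotations
 * "for free" by deferring them until the rotated value is next used.
 */
static void blake2s_g(uint32_t v[16], int a, int b, int c, int d,
		      uint32_t m0, uint32_t m1)
{
	v[a] += v[b] + m0;
	v[d] = ror32(v[d] ^ v[a], 16);
	v[c] += v[d];
	v[b] = ror32(v[b] ^ v[c], 12);
	v[a] += v[b] + m1;
	v[d] = ror32(v[d] ^ v[a], 8);
	v[c] += v[d];
	v[b] = ror32(v[b] ^ v[c], 7);
}

Ten rounds of eight such quarterrounds (four columns, then four diagonals) compress one 64-byte block, which is what the loop in blake2s-core.S below performs.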
This new implementation is added in the form of a new module blake2s-arm.ko, which is analogous to blake2s-x86_64.ko in that it provides blake2s_compress_arch() for use by the library API as well as optionally register the algorithms with the shash API. Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Tested-by: Ard Biesheuvel --- arch/arm/crypto/Kconfig | 9 ++ arch/arm/crypto/Makefile | 2 + arch/arm/crypto/blake2s-core.S | 285 +++++++++++++++++++++++++++++++++ arch/arm/crypto/blake2s-glue.c | 78 +++++++++ 4 files changed, 374 insertions(+) create mode 100644 arch/arm/crypto/blake2s-core.S create mode 100644 arch/arm/crypto/blake2s-glue.c diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index c9bf2df85cb90..281c829c12d0b 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -62,6 +62,15 @@ config CRYPTO_SHA512_ARM SHA-512 secure hash standard (DFIPS 180-2) implemented using optimized ARM assembler and NEON, when available. +config CRYPTO_BLAKE2S_ARM + tristate "BLAKE2s digest algorithm (ARM)" + select CRYPTO_ARCH_HAVE_LIB_BLAKE2S + help + BLAKE2s digest algorithm optimized with ARM scalar instructions. This + is faster than the generic implementations of BLAKE2s and BLAKE2b, but + slower than the NEON implementation of BLAKE2b. (There is no NEON + implementation of BLAKE2s, since NEON doesn't really help with it.) + config CRYPTO_AES_ARM tristate "Scalar AES cipher for ARM" select CRYPTO_ALGAPI diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile index b745c17d356fe..5ad1e985a718b 100644 --- a/arch/arm/crypto/Makefile +++ b/arch/arm/crypto/Makefile @@ -9,6 +9,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o +obj-$(CONFIG_CRYPTO_BLAKE2S_ARM) += blake2s-arm.o obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha-neon.o obj-$(CONFIG_CRYPTO_POLY1305_ARM) += poly1305-arm.o obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o @@ -29,6 +30,7 @@ sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o sha256-arm-y := sha256-core.o sha256_glue.o $(sha256-arm-neon-y) sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y) +blake2s-arm-y := blake2s-core.o blake2s-glue.o sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o sha2-arm-ce-y := sha2-ce-core.o sha2-ce-glue.o aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o diff --git a/arch/arm/crypto/blake2s-core.S b/arch/arm/crypto/blake2s-core.S new file mode 100644 index 0000000000000..bed897e9a181a --- /dev/null +++ b/arch/arm/crypto/blake2s-core.S @@ -0,0 +1,285 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * BLAKE2s digest algorithm, ARM scalar implementation + * + * Copyright 2020 Google LLC + * + * Author: Eric Biggers + */ + +#include + + // Registers used to hold message words temporarily. There aren't + // enough ARM registers to hold the whole message block, so we have to + // load the words on-demand. 
+ M_0 .req r12 + M_1 .req r14 + +// The BLAKE2s initialization vector +.Lblake2s_IV: + .word 0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A + .word 0x510E527F, 0x9B05688C, 0x1F83D9AB, 0x5BE0CD19 + +.macro __ldrd a, b, src, offset +#if __LINUX_ARM_ARCH__ >= 6 + ldrd \a, \b, [\src, #\offset] +#else + ldr \a, [\src, #\offset] + ldr \b, [\src, #\offset + 4] +#endif +.endm + +.macro __strd a, b, dst, offset +#if __LINUX_ARM_ARCH__ >= 6 + strd \a, \b, [\dst, #\offset] +#else + str \a, [\dst, #\offset] + str \b, [\dst, #\offset + 4] +#endif +.endm + +// Execute a quarter-round of BLAKE2s by mixing two columns or two diagonals. +// (a0, b0, c0, d0) and (a1, b1, c1, d1) give the registers containing the two +// columns/diagonals. s0-s1 are the word offsets to the message words the first +// column/diagonal needs, and likewise s2-s3 for the second column/diagonal. +// M_0 and M_1 are free to use, and the message block can be found at sp + 32. +// +// Note that to save instructions, the rotations don't happen when the +// pseudocode says they should, but rather they are delayed until the values are +// used. See the comment above _blake2s_round(). +.macro _blake2s_quarterround a0, b0, c0, d0, a1, b1, c1, d1, s0, s1, s2, s3 + + ldr M_0, [sp, #32 + 4 * \s0] + ldr M_1, [sp, #32 + 4 * \s2] + + // a += b + m[blake2s_sigma[r][2*i + 0]]; + add \a0, \a0, \b0, ror #brot + add \a1, \a1, \b1, ror #brot + add \a0, \a0, M_0 + add \a1, \a1, M_1 + + // d = ror32(d ^ a, 16); + eor \d0, \a0, \d0, ror #drot + eor \d1, \a1, \d1, ror #drot + + // c += d; + add \c0, \c0, \d0, ror #16 + add \c1, \c1, \d1, ror #16 + + // b = ror32(b ^ c, 12); + eor \b0, \c0, \b0, ror #brot + eor \b1, \c1, \b1, ror #brot + + ldr M_0, [sp, #32 + 4 * \s1] + ldr M_1, [sp, #32 + 4 * \s3] + + // a += b + m[blake2s_sigma[r][2*i + 1]]; + add \a0, \a0, \b0, ror #12 + add \a1, \a1, \b1, ror #12 + add \a0, \a0, M_0 + add \a1, \a1, M_1 + + // d = ror32(d ^ a, 8); + eor \d0, \a0, \d0, ror#16 + eor \d1, \a1, \d1, ror#16 + + // c += d; + add \c0, \c0, \d0, ror#8 + add \c1, \c1, \d1, ror#8 + + // b = ror32(b ^ c, 7); + eor \b0, \c0, \b0, ror#12 + eor \b1, \c1, \b1, ror#12 +.endm + +// Execute one round of BLAKE2s by updating the state matrix v[0..15]. v[0..9] +// are in r0..r9. The stack pointer points to 8 bytes of scratch space for +// spilling v[8..9], then to v[9..15], then to the message block. r10-r12 and +// r14 are free to use. The macro arguments s0-s15 give the order in which the +// message words are used in this round. +// +// All rotates are performed using the implicit rotate operand accepted by the +// 'add' and 'eor' instructions. This is faster than using explicit rotate +// instructions. To make this work, we allow the values in the second and last +// rows of the BLAKE2s state matrix (rows 'b' and 'd') to temporarily have the +// wrong rotation amount. The rotation amount is then fixed up just in time +// when the values are used. 'brot' is the number of bits the values in row 'b' +// need to be rotated right to arrive at the correct values, and 'drot' +// similarly for row 'd'. (brot, drot) start out as (0, 0) but we make it such +// that they end up as (7, 8) after every round. +.macro _blake2s_round s0, s1, s2, s3, s4, s5, s6, s7, \ + s8, s9, s10, s11, s12, s13, s14, s15 + + // Mix first two columns: + // (v[0], v[4], v[8], v[12]) and (v[1], v[5], v[9], v[13]). 
+ __ldrd r10, r11, sp, 16 // load v[12] and v[13] + _blake2s_quarterround r0, r4, r8, r10, r1, r5, r9, r11, \ + \s0, \s1, \s2, \s3 + __strd r8, r9, sp, 0 + __strd r10, r11, sp, 16 + + // Mix second two columns: + // (v[2], v[6], v[10], v[14]) and (v[3], v[7], v[11], v[15]). + __ldrd r8, r9, sp, 8 // load v[10] and v[11] + __ldrd r10, r11, sp, 24 // load v[14] and v[15] + _blake2s_quarterround r2, r6, r8, r10, r3, r7, r9, r11, \ + \s4, \s5, \s6, \s7 + str r10, [sp, #24] // store v[14] + // v[10], v[11], and v[15] are used below, so no need to store them yet. + + .set brot, 7 + .set drot, 8 + + // Mix first two diagonals: + // (v[0], v[5], v[10], v[15]) and (v[1], v[6], v[11], v[12]). + ldr r10, [sp, #16] // load v[12] + _blake2s_quarterround r0, r5, r8, r11, r1, r6, r9, r10, \ + \s8, \s9, \s10, \s11 + __strd r8, r9, sp, 8 + str r11, [sp, #28] + str r10, [sp, #16] + + // Mix second two diagonals: + // (v[2], v[7], v[8], v[13]) and (v[3], v[4], v[9], v[14]). + __ldrd r8, r9, sp, 0 // load v[8] and v[9] + __ldrd r10, r11, sp, 20 // load v[13] and v[14] + _blake2s_quarterround r2, r7, r8, r10, r3, r4, r9, r11, \ + \s12, \s13, \s14, \s15 + __strd r10, r11, sp, 20 +.endm + +// +// void blake2s_compress_arch(struct blake2s_state *state, +// const u8 *block, size_t nblocks, u32 inc); +// +// Only the first three fields of struct blake2s_state are used: +// u32 h[8]; (inout) +// u32 t[2]; (inout) +// u32 f[2]; (in) +// + .align 5 +ENTRY(blake2s_compress_arch) + push {r0-r2,r4-r11,lr} // keep this an even number + +.Lnext_block: + // r0 is 'state' + // r1 is 'block' + // r3 is 'inc' + + // Load and increment the counter t[0..1]. + __ldrd r10, r11, r0, 32 + adds r10, r10, r3 + adc r11, r11, #0 + __strd r10, r11, r0, 32 + + // _blake2s_round is very short on registers, so copy the message block + // to the stack to save a register during the rounds. This also has the + // advantage that misalignment only needs to be dealt with in one place. + sub sp, sp, #64 + mov r12, sp + tst r1, #3 + bne .Lcopy_block_misaligned + ldmia r1!, {r2-r9} + stmia r12!, {r2-r9} + ldmia r1!, {r2-r9} + stmia r12, {r2-r9} +.Lcopy_block_done: + str r1, [sp, #68] // Update message pointer + + // Calculate v[8..15]. Push v[9..15] onto the stack, and leave space + // for spilling v[8..9]. Leave v[8..9] in r8-r9. + mov r14, r0 // r14 = state + adr r12, .Lblake2s_IV + ldmia r12!, {r8-r9} // load IV[0..1] + __ldrd r0, r1, r14, 40 // load f[0..1] + ldm r12, {r2-r7} // load IV[3..7] + eor r4, r4, r10 // v[12] = IV[4] ^ t[0] + eor r5, r5, r11 // v[13] = IV[5] ^ t[1] + eor r6, r6, r0 // v[14] = IV[6] ^ f[0] + eor r7, r7, r1 // v[15] = IV[7] ^ f[1] + push {r2-r7} // push v[9..15] + sub sp, sp, #8 // leave space for v[8..9] + + // Load h[0..7] == v[0..7]. + ldm r14, {r0-r7} + + // Execute the rounds. Each round is provided the order in which it + // needs to use the message words. 
+ .set brot, 0 + .set drot, 0 + _blake2s_round 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 + _blake2s_round 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 + _blake2s_round 11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4 + _blake2s_round 7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8 + _blake2s_round 9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13 + _blake2s_round 2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9 + _blake2s_round 12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11 + _blake2s_round 13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10 + _blake2s_round 6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5 + _blake2s_round 10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0 + + // Fold the final state matrix into the hash chaining value: + // + // for (i = 0; i < 8; i++) + // h[i] ^= v[i] ^ v[i + 8]; + // + ldr r14, [sp, #96] // r14 = &h[0] + add sp, sp, #8 // v[8..9] are already loaded. + pop {r10-r11} // load v[10..11] + eor r0, r0, r8 + eor r1, r1, r9 + eor r2, r2, r10 + eor r3, r3, r11 + ldm r14, {r8-r11} // load h[0..3] + eor r0, r0, r8 + eor r1, r1, r9 + eor r2, r2, r10 + eor r3, r3, r11 + stmia r14!, {r0-r3} // store new h[0..3] + ldm r14, {r0-r3} // load old h[4..7] + pop {r8-r11} // load v[12..15] + eor r0, r0, r4, ror #brot + eor r1, r1, r5, ror #brot + eor r2, r2, r6, ror #brot + eor r3, r3, r7, ror #brot + eor r0, r0, r8, ror #drot + eor r1, r1, r9, ror #drot + eor r2, r2, r10, ror #drot + eor r3, r3, r11, ror #drot + add sp, sp, #64 // skip copy of message block + stm r14, {r0-r3} // store new h[4..7] + + // Advance to the next block, if there is one. Note that if there are + // multiple blocks, then 'inc' (the counter increment amount) must be + // 64. So we can simply set it to 64 without re-loading it. + ldm sp, {r0, r1, r2} // load (state, block, nblocks) + mov r3, #64 // set 'inc' + subs r2, r2, #1 // nblocks-- + str r2, [sp, #8] + bne .Lnext_block // nblocks != 0? + + pop {r0-r2,r4-r11,pc} + + // The next message block (pointed to by r1) isn't 4-byte aligned, so it + // can't be loaded using ldmia. Copy it to the stack buffer (pointed to + // by r12) using an alternative method. r2-r9 are free to use. 
+.Lcopy_block_misaligned: + mov r2, #64 +1: +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS + ldr r3, [r1], #4 +#else + ldrb r3, [r1, #0] + ldrb r4, [r1, #1] + ldrb r5, [r1, #2] + ldrb r6, [r1, #3] + add r1, r1, #4 + orr r3, r3, r4, lsl #8 + orr r3, r3, r5, lsl #16 + orr r3, r3, r6, lsl #24 +#endif + subs r2, r2, #4 + str r3, [r12], #4 + bne 1b + b .Lcopy_block_done +ENDPROC(blake2s_compress_arch) diff --git a/arch/arm/crypto/blake2s-glue.c b/arch/arm/crypto/blake2s-glue.c new file mode 100644 index 0000000000000..f2cc1e5fc9ec1 --- /dev/null +++ b/arch/arm/crypto/blake2s-glue.c @@ -0,0 +1,78 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * BLAKE2s digest algorithm, ARM scalar implementation + * + * Copyright 2020 Google LLC + */ + +#include +#include + +#include + +/* defined in blake2s-core.S */ +EXPORT_SYMBOL(blake2s_compress_arch); + +static int crypto_blake2s_update_arm(struct shash_desc *desc, + const u8 *in, unsigned int inlen) +{ + return crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch); +} + +static int crypto_blake2s_final_arm(struct shash_desc *desc, u8 *out) +{ + return crypto_blake2s_final(desc, out, blake2s_compress_arch); +} + +#define BLAKE2S_ALG(name, driver_name, digest_size) \ + { \ + .base.cra_name = name, \ + .base.cra_driver_name = driver_name, \ + .base.cra_priority = 200, \ + .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ + .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \ + .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \ + .base.cra_module = THIS_MODULE, \ + .digestsize = digest_size, \ + .setkey = crypto_blake2s_setkey, \ + .init = crypto_blake2s_init, \ + .update = crypto_blake2s_update_arm, \ + .final = crypto_blake2s_final_arm, \ + .descsize = sizeof(struct blake2s_state), \ + } + +static struct shash_alg blake2s_arm_algs[] = { + BLAKE2S_ALG("blake2s-128", "blake2s-128-arm", BLAKE2S_128_HASH_SIZE), + BLAKE2S_ALG("blake2s-160", "blake2s-160-arm", BLAKE2S_160_HASH_SIZE), + BLAKE2S_ALG("blake2s-224", "blake2s-224-arm", BLAKE2S_224_HASH_SIZE), + BLAKE2S_ALG("blake2s-256", "blake2s-256-arm", BLAKE2S_256_HASH_SIZE), +}; + +static int __init blake2s_arm_mod_init(void) +{ + return IS_REACHABLE(CONFIG_CRYPTO_HASH) ? 
+ crypto_register_shashes(blake2s_arm_algs, + ARRAY_SIZE(blake2s_arm_algs)) : 0; +} + +static void __exit blake2s_arm_mod_exit(void) +{ + if (IS_REACHABLE(CONFIG_CRYPTO_HASH)) + crypto_unregister_shashes(blake2s_arm_algs, + ARRAY_SIZE(blake2s_arm_algs)); +} + +module_init(blake2s_arm_mod_init); +module_exit(blake2s_arm_mod_exit); + +MODULE_DESCRIPTION("BLAKE2s digest algorithm, ARM scalar implementation"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("blake2s-128"); +MODULE_ALIAS_CRYPTO("blake2s-128-arm"); +MODULE_ALIAS_CRYPTO("blake2s-160"); +MODULE_ALIAS_CRYPTO("blake2s-160-arm"); +MODULE_ALIAS_CRYPTO("blake2s-224"); +MODULE_ALIAS_CRYPTO("blake2s-224-arm"); +MODULE_ALIAS_CRYPTO("blake2s-256"); +MODULE_ALIAS_CRYPTO("blake2s-256-arm"); From patchwork Wed Dec 23 08:10:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987859 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2F4C1C433E0 for ; Wed, 23 Dec 2020 08:16:02 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id AFB072070B for ; Wed, 23 Dec 2020 08:16:01 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AFB072070B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=9CxgJfZ2AD0WbG/mWC/ASnKQmSLD0sBLq5VWu0yhaqA=; b=hMdt5sSxsyBYp2IdZWURKGd+Z seaLtX6TiDUbhWiV0cLPMlVtl1s4II3B3ZvcP/hp6XThzKATC+nv3StlNTd7Gg8qMwfUVxWymj1Sz nTwmPl7GTdQcfH6cfdAmsFV8aBIquC2DPwonosbet+OtYNj3stnziwVXSAwUvWPgmqzvDCtIoxwSk 08se+ERgm4hbHz503lP3wR+5j/cJv7NjCr7zq4I3kICGu/v0uK6ckL5aJdygN9ICtjaNx+TivxD/Z SE2WC2CYNP93ORhGnjGOesAltODWXp9NLI7X1rN9BDgBXfrWMPSyok4I0aSEIJcskuFbMrBHf4AsB yOIFmDmKg==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzHp-0007Qc-3Z; Wed, 23 Dec 2020 08:14:21 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzGW-0006z9-LY for linux-arm-kernel@lists.infradead.org; Wed, 23 Dec 2020 08:13:05 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 3042D22512; Wed, 23 Dec 2020 08:12:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1608711178; 
bh=gZU5h+3+EaOoF4SX/C9e5RdZA4PwrlP/kPxt0AaR0Xo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iKhNh3FyKWUCsqIS29saRGqHsC5eS6qYKuPIwzYZsAgPBI3TrWygntpp7WRLNILJ7 xR29UuidItsqwjG879rJIulFsYJAqdWVgEv5iiezs73frIYde1IDKFXTNmjrUEvWku ljjhtWYSejaV0u3ZAE1ZVNN2lIMrasdtF581j8GIlCUbc/dBsS0TN4Pg6z21Va3px4 1gL8pqO2iwpQP+VXJgrIMpZXbfmixtNn9nnxa63aLI5aOsb9h1F2S1iPUWEefX1nLc E4AuoNrwrzm8y15kVLTmm2G6Ptrn/rRt0/NsCzp6UK5O6ie8YvzetTx941LcaXsr5a YeBGDJwQw9xRw== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH v3 11/14] wireguard: Kconfig: select CRYPTO_BLAKE2S_ARM Date: Wed, 23 Dec 2020 00:10:00 -0800 Message-Id: <20201223081003.373663-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org> References: <20201223081003.373663-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201223_031300_863476_1FA2C4CA X-CRM114-Status: GOOD ( 10.22 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers When available, select the new implementation of BLAKE2s for 32-bit ARM. This is faster than the generic C implementation. Reviewed-by: Jason A. Donenfeld Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/net/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 260f9f46668b8..672fcdd9aecbb 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -90,6 +90,7 @@ config WIREGUARD select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON select CRYPTO_POLY1305_ARM if ARM + select CRYPTO_BLAKE2S_ARM if ARM select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2 select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT) From patchwork Wed Dec 23 08:10:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987867 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8970CC433E0 for ; Wed, 23 Dec 2020 08:16:31 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3D4CC2076D for ; Wed, 23 Dec 2020 08:16:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3D4CC2076D Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none 
smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=4xS7m5X6k117rZ1eIZ+cQrQqkeCsXxc+CY42DATCh54=; b=DaM9OEzTBGlrC74xOvmmGxbop w5YTXCip5Hl5kuutWDCFTh7dHErfU1/TjdyEIgNrBN7oeKszSmONbgRie7sXYFtdjXf9OvILavLex Je37CZ8eLP5/ft+eANH5NXewKIusfIRXYPEGyBzdk/v4CsAQDkGGwukc0uzoTixjedSlSA451aDui jDXJvnWsHNg+UxTCq+LW2Q4/EAHI5kMUqCx23fkYn9hVgSFX7ZY/+6TBBr0sLv1ctPrIuT+YRxYLe 0uM6SoEnU2RPauU5hneoE8dB3n5GmspWLPVgtLH6NJrc+4uBYAY8v7FunJ+VvqCqGOLBGOcJKLcbf aJX7+cdWQ==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzIC-0007ge-4j; Wed, 23 Dec 2020 08:14:44 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzGW-0006zB-Le for linux-arm-kernel@lists.infradead.org; Wed, 23 Dec 2020 08:13:11 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id A8E2422518; Wed, 23 Dec 2020 08:12:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1608711178; bh=K0+a1BVTUOXVw8Q/84YNLE+19Xr5Q5NMoQTfEbB3YXQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dKLHuRiGMR8WJWxJnr7O4Z7UzqmwhNHJ2ZK/52VscAjkqmNtR0zoT0nOxBPJ/fmCv y+RyaCKDpvmA6+/6e2HCgYwdJX6k+Yx8iImVcCNEjs5L93XjDieUwFoB1HVNUFhuLs uRfuAn5QPZpCLh1lUXkxc/amRg1i+Z3VB9oWFujxFltcUrTDqUQgNRKrmTd+kKMAkX Mbys65OyFcXW8uW9P+R9g0dn3EICbeHhPJl860hgpDiK2p3e9F7MkMbUZFyBKepS48 fuG3a7IEzZsRU3Z3d8fXfZpm8Jj7y2qekZm787LCAMhis20LwTEua/Hd/WaBPS8zPC tKEtWJPQR11tA== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH v3 12/14] crypto: blake2b - sync with blake2s implementation Date: Wed, 23 Dec 2020 00:10:01 -0800 Message-Id: <20201223081003.373663-13-ebiggers@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org> References: <20201223081003.373663-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201223_031300_982282_698C6288 X-CRM114-Status: GOOD ( 29.59 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Sync the BLAKE2b code with the BLAKE2s code as much as possible: - Move a lot of code into new headers and , and adjust it to be like the corresponding BLAKE2s code, i.e. like and . - Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE. - Use a macro BLAKE2B_ALG() to define the shash_alg structs. - Export blake2b_compress_generic() for use as a fallback. This makes it much easier to add optimized implementations of BLAKE2b, as optimized implementations can use the helper functions crypto_blake2b_{setkey,init,update,final}() and blake2b_compress_generic(). 
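As a concrete illustration (a sketch under assumptions, not code from this series), an arch-specific BLAKE2b module could plug its own compression routine into these helpers in the same way the ARM BLAKE2s glue code above does. Here blake2b_compress_arch() is a hypothetical name for such a routine:

#include <crypto/internal/blake2b.h>
#include <crypto/internal/hash.h>

/* Hypothetical arch-optimized compression function; not defined in this series. */
void blake2b_compress_arch(struct blake2b_state *state,
			   const u8 *block, size_t nblocks, u32 inc);

static int crypto_blake2b_update_arch(struct shash_desc *desc,
				      const u8 *in, unsigned int inlen)
{
	/* Reuse the shared buffering logic; only the compress step differs. */
	return crypto_blake2b_update(desc, in, inlen, blake2b_compress_arch);
}

static int crypto_blake2b_final_arch(struct shash_desc *desc, u8 *out)
{
	return crypto_blake2b_final(desc, out, blake2b_compress_arch);
}
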
The ARM implementation will use these. But this change is also helpful because it eliminates unnecessary differences between the BLAKE2b and BLAKE2s code, so that the same improvements can easily be made to both. (The two algorithms are basically identical, except for the word size and constants.) It also makes it straightforward to add a library API for BLAKE2b in the future if/when it's needed. This change does make the BLAKE2b code slightly more complicated than it needs to be, as it doesn't actually provide a library API yet. For example, __blake2b_update() doesn't really need to exist yet; it could just be inlined into crypto_blake2b_update(). But I believe this is outweighed by the benefits of keeping the code in sync. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- crypto/blake2b_generic.c | 226 +++++++----------------------- include/crypto/blake2b.h | 67 +++++++++ include/crypto/internal/blake2b.h | 115 +++++++++++++++ 3 files changed, 230 insertions(+), 178 deletions(-) create mode 100644 include/crypto/blake2b.h create mode 100644 include/crypto/internal/blake2b.h diff --git a/crypto/blake2b_generic.c b/crypto/blake2b_generic.c index a2ffe60e06d34..963f7fe0e4ea8 100644 --- a/crypto/blake2b_generic.c +++ b/crypto/blake2b_generic.c @@ -20,36 +20,11 @@ #include #include -#include #include #include +#include #include -#define BLAKE2B_160_DIGEST_SIZE (160 / 8) -#define BLAKE2B_256_DIGEST_SIZE (256 / 8) -#define BLAKE2B_384_DIGEST_SIZE (384 / 8) -#define BLAKE2B_512_DIGEST_SIZE (512 / 8) - -enum blake2b_constant { - BLAKE2B_BLOCKBYTES = 128, - BLAKE2B_KEYBYTES = 64, -}; - -struct blake2b_state { - u64 h[8]; - u64 t[2]; - u64 f[2]; - u8 buf[BLAKE2B_BLOCKBYTES]; - size_t buflen; -}; - -static const u64 blake2b_IV[8] = { - 0x6a09e667f3bcc908ULL, 0xbb67ae8584caa73bULL, - 0x3c6ef372fe94f82bULL, 0xa54ff53a5f1d36f1ULL, - 0x510e527fade682d1ULL, 0x9b05688c2b3e6c1fULL, - 0x1f83d9abfb41bd6bULL, 0x5be0cd19137e2179ULL -}; - static const u8 blake2b_sigma[12][16] = { { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 }, { 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 }, @@ -95,8 +70,8 @@ static void blake2b_increment_counter(struct blake2b_state *S, const u64 inc) G(r,7,v[ 3],v[ 4],v[ 9],v[14]); \ } while (0) -static void blake2b_compress(struct blake2b_state *S, - const u8 block[BLAKE2B_BLOCKBYTES]) +static void blake2b_compress_one_generic(struct blake2b_state *S, + const u8 block[BLAKE2B_BLOCK_SIZE]) { u64 m[16]; u64 v[16]; @@ -108,14 +83,14 @@ static void blake2b_compress(struct blake2b_state *S, for (i = 0; i < 8; ++i) v[i] = S->h[i]; - v[ 8] = blake2b_IV[0]; - v[ 9] = blake2b_IV[1]; - v[10] = blake2b_IV[2]; - v[11] = blake2b_IV[3]; - v[12] = blake2b_IV[4] ^ S->t[0]; - v[13] = blake2b_IV[5] ^ S->t[1]; - v[14] = blake2b_IV[6] ^ S->f[0]; - v[15] = blake2b_IV[7] ^ S->f[1]; + v[ 8] = BLAKE2B_IV0; + v[ 9] = BLAKE2B_IV1; + v[10] = BLAKE2B_IV2; + v[11] = BLAKE2B_IV3; + v[12] = BLAKE2B_IV4 ^ S->t[0]; + v[13] = BLAKE2B_IV5 ^ S->t[1]; + v[14] = BLAKE2B_IV6 ^ S->f[0]; + v[15] = BLAKE2B_IV7 ^ S->f[1]; ROUND(0); ROUND(1); @@ -139,159 +114,54 @@ static void blake2b_compress(struct blake2b_state *S, #undef G #undef ROUND -struct blake2b_tfm_ctx { - u8 key[BLAKE2B_KEYBYTES]; - unsigned int keylen; -}; - -static int blake2b_setkey(struct crypto_shash *tfm, const u8 *key, - unsigned int keylen) +void blake2b_compress_generic(struct blake2b_state *state, + const u8 *block, size_t nblocks, u32 inc) { - struct blake2b_tfm_ctx *tctx = crypto_shash_ctx(tfm); - - if (keylen == 0 || keylen > 
BLAKE2B_KEYBYTES) - return -EINVAL; - - memcpy(tctx->key, key, keylen); - tctx->keylen = keylen; - - return 0; + do { + blake2b_increment_counter(state, inc); + blake2b_compress_one_generic(state, block); + block += BLAKE2B_BLOCK_SIZE; + } while (--nblocks); } +EXPORT_SYMBOL(blake2b_compress_generic); -static int blake2b_init(struct shash_desc *desc) +static int crypto_blake2b_update_generic(struct shash_desc *desc, + const u8 *in, unsigned int inlen) { - struct blake2b_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); - struct blake2b_state *state = shash_desc_ctx(desc); - const int digestsize = crypto_shash_digestsize(desc->tfm); - - memset(state, 0, sizeof(*state)); - memcpy(state->h, blake2b_IV, sizeof(state->h)); - - /* Parameter block is all zeros except index 0, no xor for 1..7 */ - state->h[0] ^= 0x01010000 | tctx->keylen << 8 | digestsize; - - if (tctx->keylen) { - /* - * Prefill the buffer with the key, next call to _update or - * _final will process it - */ - memcpy(state->buf, tctx->key, tctx->keylen); - state->buflen = BLAKE2B_BLOCKBYTES; - } - return 0; + return crypto_blake2b_update(desc, in, inlen, blake2b_compress_generic); } -static int blake2b_update(struct shash_desc *desc, const u8 *in, - unsigned int inlen) +static int crypto_blake2b_final_generic(struct shash_desc *desc, u8 *out) { - struct blake2b_state *state = shash_desc_ctx(desc); - const size_t left = state->buflen; - const size_t fill = BLAKE2B_BLOCKBYTES - left; - - if (!inlen) - return 0; - - if (inlen > fill) { - state->buflen = 0; - /* Fill buffer */ - memcpy(state->buf + left, in, fill); - blake2b_increment_counter(state, BLAKE2B_BLOCKBYTES); - /* Compress */ - blake2b_compress(state, state->buf); - in += fill; - inlen -= fill; - while (inlen > BLAKE2B_BLOCKBYTES) { - blake2b_increment_counter(state, BLAKE2B_BLOCKBYTES); - blake2b_compress(state, in); - in += BLAKE2B_BLOCKBYTES; - inlen -= BLAKE2B_BLOCKBYTES; - } - } - memcpy(state->buf + state->buflen, in, inlen); - state->buflen += inlen; - - return 0; + return crypto_blake2b_final(desc, out, blake2b_compress_generic); } -static int blake2b_final(struct shash_desc *desc, u8 *out) -{ - struct blake2b_state *state = shash_desc_ctx(desc); - const int digestsize = crypto_shash_digestsize(desc->tfm); - size_t i; - - blake2b_increment_counter(state, state->buflen); - /* Set last block */ - state->f[0] = (u64)-1; - /* Padding */ - memset(state->buf + state->buflen, 0, BLAKE2B_BLOCKBYTES - state->buflen); - blake2b_compress(state, state->buf); - - /* Avoid temporary buffer and switch the internal output to LE order */ - for (i = 0; i < ARRAY_SIZE(state->h); i++) - __cpu_to_le64s(&state->h[i]); - - memcpy(out, state->h, digestsize); - return 0; -} +#define BLAKE2B_ALG(name, driver_name, digest_size) \ + { \ + .base.cra_name = name, \ + .base.cra_driver_name = driver_name, \ + .base.cra_priority = 100, \ + .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ + .base.cra_blocksize = BLAKE2B_BLOCK_SIZE, \ + .base.cra_ctxsize = sizeof(struct blake2b_tfm_ctx), \ + .base.cra_module = THIS_MODULE, \ + .digestsize = digest_size, \ + .setkey = crypto_blake2b_setkey, \ + .init = crypto_blake2b_init, \ + .update = crypto_blake2b_update_generic, \ + .final = crypto_blake2b_final_generic, \ + .descsize = sizeof(struct blake2b_state), \ + } static struct shash_alg blake2b_algs[] = { - { - .base.cra_name = "blake2b-160", - .base.cra_driver_name = "blake2b-160-generic", - .base.cra_priority = 100, - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_blocksize = BLAKE2B_BLOCKBYTES, - 
.base.cra_ctxsize = sizeof(struct blake2b_tfm_ctx), - .base.cra_module = THIS_MODULE, - .digestsize = BLAKE2B_160_DIGEST_SIZE, - .setkey = blake2b_setkey, - .init = blake2b_init, - .update = blake2b_update, - .final = blake2b_final, - .descsize = sizeof(struct blake2b_state), - }, { - .base.cra_name = "blake2b-256", - .base.cra_driver_name = "blake2b-256-generic", - .base.cra_priority = 100, - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_blocksize = BLAKE2B_BLOCKBYTES, - .base.cra_ctxsize = sizeof(struct blake2b_tfm_ctx), - .base.cra_module = THIS_MODULE, - .digestsize = BLAKE2B_256_DIGEST_SIZE, - .setkey = blake2b_setkey, - .init = blake2b_init, - .update = blake2b_update, - .final = blake2b_final, - .descsize = sizeof(struct blake2b_state), - }, { - .base.cra_name = "blake2b-384", - .base.cra_driver_name = "blake2b-384-generic", - .base.cra_priority = 100, - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_blocksize = BLAKE2B_BLOCKBYTES, - .base.cra_ctxsize = sizeof(struct blake2b_tfm_ctx), - .base.cra_module = THIS_MODULE, - .digestsize = BLAKE2B_384_DIGEST_SIZE, - .setkey = blake2b_setkey, - .init = blake2b_init, - .update = blake2b_update, - .final = blake2b_final, - .descsize = sizeof(struct blake2b_state), - }, { - .base.cra_name = "blake2b-512", - .base.cra_driver_name = "blake2b-512-generic", - .base.cra_priority = 100, - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_blocksize = BLAKE2B_BLOCKBYTES, - .base.cra_ctxsize = sizeof(struct blake2b_tfm_ctx), - .base.cra_module = THIS_MODULE, - .digestsize = BLAKE2B_512_DIGEST_SIZE, - .setkey = blake2b_setkey, - .init = blake2b_init, - .update = blake2b_update, - .final = blake2b_final, - .descsize = sizeof(struct blake2b_state), - } + BLAKE2B_ALG("blake2b-160", "blake2b-160-generic", + BLAKE2B_160_HASH_SIZE), + BLAKE2B_ALG("blake2b-256", "blake2b-256-generic", + BLAKE2B_256_HASH_SIZE), + BLAKE2B_ALG("blake2b-384", "blake2b-384-generic", + BLAKE2B_384_HASH_SIZE), + BLAKE2B_ALG("blake2b-512", "blake2b-512-generic", + BLAKE2B_512_HASH_SIZE), }; static int __init blake2b_mod_init(void) diff --git a/include/crypto/blake2b.h b/include/crypto/blake2b.h new file mode 100644 index 0000000000000..18875f16f8cad --- /dev/null +++ b/include/crypto/blake2b.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ + +#ifndef _CRYPTO_BLAKE2B_H +#define _CRYPTO_BLAKE2B_H + +#include +#include +#include +#include + +enum blake2b_lengths { + BLAKE2B_BLOCK_SIZE = 128, + BLAKE2B_HASH_SIZE = 64, + BLAKE2B_KEY_SIZE = 64, + + BLAKE2B_160_HASH_SIZE = 20, + BLAKE2B_256_HASH_SIZE = 32, + BLAKE2B_384_HASH_SIZE = 48, + BLAKE2B_512_HASH_SIZE = 64, +}; + +struct blake2b_state { + /* 'h', 't', and 'f' are used in assembly code, so keep them as-is. 
*/ + u64 h[8]; + u64 t[2]; + u64 f[2]; + u8 buf[BLAKE2B_BLOCK_SIZE]; + unsigned int buflen; + unsigned int outlen; +}; + +enum blake2b_iv { + BLAKE2B_IV0 = 0x6A09E667F3BCC908ULL, + BLAKE2B_IV1 = 0xBB67AE8584CAA73BULL, + BLAKE2B_IV2 = 0x3C6EF372FE94F82BULL, + BLAKE2B_IV3 = 0xA54FF53A5F1D36F1ULL, + BLAKE2B_IV4 = 0x510E527FADE682D1ULL, + BLAKE2B_IV5 = 0x9B05688C2B3E6C1FULL, + BLAKE2B_IV6 = 0x1F83D9ABFB41BD6BULL, + BLAKE2B_IV7 = 0x5BE0CD19137E2179ULL, +}; + +static inline void __blake2b_init(struct blake2b_state *state, size_t outlen, + const void *key, size_t keylen) +{ + state->h[0] = BLAKE2B_IV0 ^ (0x01010000 | keylen << 8 | outlen); + state->h[1] = BLAKE2B_IV1; + state->h[2] = BLAKE2B_IV2; + state->h[3] = BLAKE2B_IV3; + state->h[4] = BLAKE2B_IV4; + state->h[5] = BLAKE2B_IV5; + state->h[6] = BLAKE2B_IV6; + state->h[7] = BLAKE2B_IV7; + state->t[0] = 0; + state->t[1] = 0; + state->f[0] = 0; + state->f[1] = 0; + state->buflen = 0; + state->outlen = outlen; + if (keylen) { + memcpy(state->buf, key, keylen); + memset(&state->buf[keylen], 0, BLAKE2B_BLOCK_SIZE - keylen); + state->buflen = BLAKE2B_BLOCK_SIZE; + } +} + +#endif /* _CRYPTO_BLAKE2B_H */ diff --git a/include/crypto/internal/blake2b.h b/include/crypto/internal/blake2b.h new file mode 100644 index 0000000000000..982fe5e8471cd --- /dev/null +++ b/include/crypto/internal/blake2b.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ +/* + * Helper functions for BLAKE2b implementations. + * Keep this in sync with the corresponding BLAKE2s header. + */ + +#ifndef _CRYPTO_INTERNAL_BLAKE2B_H +#define _CRYPTO_INTERNAL_BLAKE2B_H + +#include +#include +#include + +void blake2b_compress_generic(struct blake2b_state *state, + const u8 *block, size_t nblocks, u32 inc); + +static inline void blake2b_set_lastblock(struct blake2b_state *state) +{ + state->f[0] = -1; +} + +typedef void (*blake2b_compress_t)(struct blake2b_state *state, + const u8 *block, size_t nblocks, u32 inc); + +static inline void __blake2b_update(struct blake2b_state *state, + const u8 *in, size_t inlen, + blake2b_compress_t compress) +{ + const size_t fill = BLAKE2B_BLOCK_SIZE - state->buflen; + + if (unlikely(!inlen)) + return; + if (inlen > fill) { + memcpy(state->buf + state->buflen, in, fill); + (*compress)(state, state->buf, 1, BLAKE2B_BLOCK_SIZE); + state->buflen = 0; + in += fill; + inlen -= fill; + } + if (inlen > BLAKE2B_BLOCK_SIZE) { + const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2B_BLOCK_SIZE); + /* Hash one less (full) block than strictly possible */ + (*compress)(state, in, nblocks - 1, BLAKE2B_BLOCK_SIZE); + in += BLAKE2B_BLOCK_SIZE * (nblocks - 1); + inlen -= BLAKE2B_BLOCK_SIZE * (nblocks - 1); + } + memcpy(state->buf + state->buflen, in, inlen); + state->buflen += inlen; +} + +static inline void __blake2b_final(struct blake2b_state *state, u8 *out, + blake2b_compress_t compress) +{ + int i; + + blake2b_set_lastblock(state); + memset(state->buf + state->buflen, 0, + BLAKE2B_BLOCK_SIZE - state->buflen); /* Padding */ + (*compress)(state, state->buf, 1, state->buflen); + for (i = 0; i < ARRAY_SIZE(state->h); i++) + __cpu_to_le64s(&state->h[i]); + memcpy(out, state->h, state->outlen); +} + +/* Helper functions for shash implementations of BLAKE2b */ + +struct blake2b_tfm_ctx { + u8 key[BLAKE2B_KEY_SIZE]; + unsigned int keylen; +}; + +static inline int crypto_blake2b_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) +{ + struct blake2b_tfm_ctx *tctx = crypto_shash_ctx(tfm); + + if (keylen == 0 || keylen > BLAKE2B_KEY_SIZE) + return 
-EINVAL; + + memcpy(tctx->key, key, keylen); + tctx->keylen = keylen; + + return 0; +} + +static inline int crypto_blake2b_init(struct shash_desc *desc) +{ + const struct blake2b_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); + struct blake2b_state *state = shash_desc_ctx(desc); + unsigned int outlen = crypto_shash_digestsize(desc->tfm); + + __blake2b_init(state, outlen, tctx->key, tctx->keylen); + return 0; +} + +static inline int crypto_blake2b_update(struct shash_desc *desc, + const u8 *in, unsigned int inlen, + blake2b_compress_t compress) +{ + struct blake2b_state *state = shash_desc_ctx(desc); + + __blake2b_update(state, in, inlen, compress); + return 0; +} + +static inline int crypto_blake2b_final(struct shash_desc *desc, u8 *out, + blake2b_compress_t compress) +{ + struct blake2b_state *state = shash_desc_ctx(desc); + + __blake2b_final(state, out, compress); + return 0; +} + +#endif /* _CRYPTO_INTERNAL_BLAKE2B_H */ From patchwork Wed Dec 23 08:10:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8B25C433E0 for ; Wed, 23 Dec 2020 08:16:13 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 69F8A2076D for ; Wed, 23 Dec 2020 08:16:13 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 69F8A2076D Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=+c3F7CcLx9wqZENfL5qPdh6JiIS5DhizkOQf857so+I=; b=BanVhhTdbsgfUdnveAHPAz5zM cUtWycaOu/tiOrY9KR5qd0Hby0udSSNUM8VcTFbMufw5S9CgDR82JhA68oxtTzLP2JzBwmwSIFp8R u3Utaqyk+LvuNIJ/I+WrRYHg0Ja+/s5xXtuRq8D321gTG9jVMMAJPd24mGXuVUawPp+TOqHokiuSr zGRAIT6o/sLpBHnRh0U/ZPiFQjqaL3OFJ4MNI5+klU+LTqS4U8HtW+FAvnp5o0O7Twx9vBK8FWaDm b6G1Nt5rcWK2atX0+YY/p0iF1nqvroJ4r4PFpE72jQtIUA4EzTwufyWA2VnrlFsAypaVkDTkSMw6Q LFlojz07g==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzI2-0007aa-RW; Wed, 23 Dec 2020 08:14:34 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzGX-0006zK-8e for linux-arm-kernel@lists.infradead.org; Wed, 23 Dec 2020 08:13:08 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 
1C3B122510; Wed, 23 Dec 2020 08:12:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1608711179; bh=qS5odlNvKzcQ1NLPrL+sF+R4JtS6E6dFmblxMXhMlEI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SkQPMYnRUb4z6xQnbpCKDKenc6E58KB7clteA90Q9gUGF2XAZWw1e17yNcZEio9Oy khoIW0jbfw+D84scqPCSzdlBZYShgWWoWBrs8rYtF6BVE1qXlHWlRZk9w9q3xIwkqQ D0wUj8Sq9xkyoJfts6qR4ExNDBrKDQAVQQRKtOJy4CzWj2Y0vnugoZKQaBFBpNlRRh q7hIitMn8yMM3kTcQB+3ggVnXPkYD+91ZSQF//wpWXFOIufcEQLqycJIzNiLnIvtTy pkGjyUxdDrkYHQzbSVr9jRK1L1pAm6VPd3V63bPewbpemh+KlY8QW2qEz8I0fMf9P2 mqsBm/b4t5P9Q== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH v3 13/14] crypto: blake2b - update file comment Date: Wed, 23 Dec 2020 00:10:02 -0800 Message-Id: <20201223081003.373663-14-ebiggers@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org> References: <20201223081003.373663-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201223_031301_483516_F8E13FCB X-CRM114-Status: GOOD ( 12.03 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers The file comment for blake2b_generic.c makes it sound like it's the reference implementation of BLAKE2b with only minor changes. But it's actually been changed a lot. Update the comment to make this clearer. Reviewed-by: David Sterba Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- crypto/blake2b_generic.c | 23 ++++++++++------------- 1 file changed, 10 insertions(+), 13 deletions(-) diff --git a/crypto/blake2b_generic.c b/crypto/blake2b_generic.c index 963f7fe0e4ea8..6704c03558896 100644 --- a/crypto/blake2b_generic.c +++ b/crypto/blake2b_generic.c @@ -1,21 +1,18 @@ // SPDX-License-Identifier: (GPL-2.0-only OR Apache-2.0) /* - * BLAKE2b reference source code package - reference C implementations + * Generic implementation of the BLAKE2b digest algorithm. Based on the BLAKE2b + * reference implementation, but it has been heavily modified for use in the + * kernel. The reference implementation was: * - * Copyright 2012, Samuel Neves . You may use this under the - * terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at - * your option. The terms of these licenses can be found at: + * Copyright 2012, Samuel Neves . You may use this under + * the terms of the CC0, the OpenSSL Licence, or the Apache Public License + * 2.0, at your option. The terms of these licenses can be found at: * - * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0 - * - OpenSSL license : https://www.openssl.org/source/license.html - * - Apache 2.0 : https://www.apache.org/licenses/LICENSE-2.0 + * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0 + * - OpenSSL license : https://www.openssl.org/source/license.html + * - Apache 2.0 : https://www.apache.org/licenses/LICENSE-2.0 * - * More information about the BLAKE2 hash function can be found at - * https://blake2.net. 
- * - * Note: the original sources have been modified for inclusion in linux kernel - * in terms of coding style, using generic helpers and simplifications of error - * handling. + * More information about BLAKE2 can be found at https://blake2.net. */ #include From patchwork Wed Dec 23 08:10:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 11987869 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.0 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 725B6C433DB for ; Wed, 23 Dec 2020 08:16:48 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1C5C52070B for ; Wed, 23 Dec 2020 08:16:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1C5C52070B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-Id:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=7clcSVb2KVrYIdxIXPuTZxFulS2lFEc2AWX9a7k98Ik=; b=BOUAMld1LlxMpcMLhjSqbVO8X CRg9yQ7q7QnjetXiT2Ar28sdNeo3zMcp3WRgyrd2YB3hibc3tGmBInp/7eY1cDR/BJGDgq8RCMfew 8pSFxU8tTcttm+18nShDifRSvNIZ89HpIwUx6t6hYK8maroo8O7kLyLWN3R5JrBBeEQ/9qn5iV7NZ YCnDSUTeWf1msjPDWM9UrDSITs6WN6Y6DyhirgT9kESKj6eDRm5ZHKIzY2FjbXyG7R+cc9v5+HzfX 4ClkXTmXWX6y7O1RUm+xvKJcpl+zWygHhi5tQyG8v2/J+a1pjgETKs50vduIv+fJzPKkDJfgfVtl+ lnkKbP6sg==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzIZ-0007pC-Ic; Wed, 23 Dec 2020 08:15:07 +0000 Received: from mail.kernel.org ([198.145.29.99]) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1krzGX-0006zJ-8t for linux-arm-kernel@lists.infradead.org; Wed, 23 Dec 2020 08:13:12 +0000 Received: by mail.kernel.org (Postfix) with ESMTPSA id 8C97E22517; Wed, 23 Dec 2020 08:12:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1608711179; bh=JRNq0Z9P755tRhBe2YcST7ZWWYVJW6j9hD075rTGk18=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GkCyLxmXvOxslZlRVec9YhhU72svlWdxAhclPdQ83SHmzi4I7jgiczfXudnDtQc+P hsSLZCyJ4TC0F4FGkHcqaNs0hfipIvkd9e7fbbWnQOAtOwJG3W7zlPp590r9Mg1Bup 3lkeKjdReCtROfPtLw53r/sWyxxYj5kiTUEkTuQyRSsR2wo/ZaS4arX2omVEf74oI6 gNHNtcWVhaQFRY97DwUAczCwGP8lwrXJq0OCNb+cyq7ePQkaVCevCSoN9dohYNn+I2 3XYiqFlsb1S5qF0yzdG3tdgyZmivh5omHJfVozvLuWjnRe9pEYwlEd/Fh9lkXRxJ8Q vamtM61DFqypg== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH v3 
14/14] crypto: arm/blake2b - add NEON-accelerated BLAKE2b Date: Wed, 23 Dec 2020 00:10:03 -0800 Message-Id: <20201223081003.373663-15-ebiggers@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201223081003.373663-1-ebiggers@kernel.org> References: <20201223081003.373663-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20201223_031301_620480_7C063AC4 X-CRM114-Status: GOOD ( 30.20 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: "Jason A . Donenfeld" , Herbert Xu , David Sterba , Ard Biesheuvel , linux-arm-kernel@lists.infradead.org, Paul Crowley Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Add a NEON-accelerated implementation of BLAKE2b. On Cortex-A7 (which these days is the most common ARM processor that doesn't have the ARMv8 Crypto Extensions), this is over twice as fast as SHA-256, and slightly faster than SHA-1. It is also almost three times as fast as the generic implementation of BLAKE2b:

	Algorithm            Cycles per byte (on 4096-byte messages)
	===================  =======================================
	blake2b-256-neon     14.0
	sha1-neon            16.3
	blake2s-256-arm      18.8
	sha1-asm             20.8
	blake2s-256-generic  26.0
	sha256-neon          28.9
	sha256-asm           32.0
	blake2b-256-generic  38.9

This implementation isn't directly based on any other implementation, but it borrows some ideas from previous NEON code I've written as well as from chacha-neon-core.S. At least on Cortex-A7, it is faster than the other NEON implementations of BLAKE2b I'm aware of (the implementation in the BLAKE2 official repository using intrinsics, and Andrew Moon's implementation which can be found in SUPERCOP). It does only one block at a time, so it performs well on short messages too. NEON-accelerated BLAKE2b is useful because there is interest in using BLAKE2b-256 for dm-verity on low-end Android devices (specifically, devices that lack the ARMv8 Crypto Extensions) to replace SHA-1. On these devices, the performance cost of upgrading to SHA-256 may be unacceptable, whereas BLAKE2b-256 would actually improve performance. Although BLAKE2b is intended for 64-bit platforms (unlike BLAKE2s which is intended for 32-bit platforms), on 32-bit ARM processors with NEON, BLAKE2b is actually faster than BLAKE2s. This is because NEON supports 64-bit operations, and because BLAKE2s's block size is too small for NEON to be helpful for it. The best I've been able to do with BLAKE2s on Cortex-A7 is 18.8 cpb with an optimized scalar implementation. (I didn't try BLAKE2sp and BLAKE3, which in theory would be faster, but they're more complex as they require running multiple hashes at once. Note that BLAKE2b already uses all the NEON bandwidth on the Cortex-A7, so I expect that any speedup from BLAKE2sp or BLAKE3 would come only from the smaller number of rounds, not from the extra parallelism.) For now this BLAKE2b implementation is only wired up to the shash API, since there is no library API for BLAKE2b yet. However, I've tried to keep things consistent with BLAKE2s, e.g. by defining blake2b_compress_arch() which is analogous to blake2s_compress_arch() and could be exported for use by the library API later if needed.
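As a purely illustrative sketch (not part of this patch): if blake2b_compress_arch() were exported later, a blake2s()-style one-shot helper could be layered on the __blake2b_*() helpers from <crypto/internal/blake2b.h> added earlier in this series. The function below is hypothetical:

	void blake2b(u8 *out, const u8 *in, const u8 *key,
		     size_t outlen, size_t inlen, size_t keylen)
	{
		struct blake2b_state state;

		__blake2b_init(&state, outlen, key, keylen);
		__blake2b_update(&state, in, inlen, blake2b_compress_arch);
		__blake2b_final(&state, out, blake2b_compress_arch);
	}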
Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Tested-by: Ard Biesheuvel --- arch/arm/crypto/Kconfig | 10 + arch/arm/crypto/Makefile | 2 + arch/arm/crypto/blake2b-neon-core.S | 347 ++++++++++++++++++++++++++++ arch/arm/crypto/blake2b-neon-glue.c | 105 +++++++++ 4 files changed, 464 insertions(+) create mode 100644 arch/arm/crypto/blake2b-neon-core.S create mode 100644 arch/arm/crypto/blake2b-neon-glue.c diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index 281c829c12d0b..2b575792363e5 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -71,6 +71,16 @@ config CRYPTO_BLAKE2S_ARM slower than the NEON implementation of BLAKE2b. (There is no NEON implementation of BLAKE2s, since NEON doesn't really help with it.) +config CRYPTO_BLAKE2B_NEON + tristate "BLAKE2b digest algorithm (ARM NEON)" + depends on KERNEL_MODE_NEON + select CRYPTO_BLAKE2B + help + BLAKE2b digest algorithm optimized with ARM NEON instructions. + On ARM processors that have NEON support but not the ARMv8 + Crypto Extensions, typically this BLAKE2b implementation is + much faster than SHA-2 and slightly faster than SHA-1. + config CRYPTO_AES_ARM tristate "Scalar AES cipher for ARM" select CRYPTO_ALGAPI diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile index 5ad1e985a718b..8f26c454ea12e 100644 --- a/arch/arm/crypto/Makefile +++ b/arch/arm/crypto/Makefile @@ -10,6 +10,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o obj-$(CONFIG_CRYPTO_BLAKE2S_ARM) += blake2s-arm.o +obj-$(CONFIG_CRYPTO_BLAKE2B_NEON) += blake2b-neon.o obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha-neon.o obj-$(CONFIG_CRYPTO_POLY1305_ARM) += poly1305-arm.o obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o @@ -31,6 +32,7 @@ sha256-arm-y := sha256-core.o sha256_glue.o $(sha256-arm-neon-y) sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y) blake2s-arm-y := blake2s-core.o blake2s-glue.o +blake2b-neon-y := blake2b-neon-core.o blake2b-neon-glue.o sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o sha2-arm-ce-y := sha2-ce-core.o sha2-ce-glue.o aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o diff --git a/arch/arm/crypto/blake2b-neon-core.S b/arch/arm/crypto/blake2b-neon-core.S new file mode 100644 index 0000000000000..0406a186377fb --- /dev/null +++ b/arch/arm/crypto/blake2b-neon-core.S @@ -0,0 +1,347 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * BLAKE2b digest algorithm, NEON accelerated + * + * Copyright 2020 Google LLC + * + * Author: Eric Biggers + */ + +#include + + .text + .fpu neon + + // The arguments to blake2b_compress_neon() + STATE .req r0 + BLOCK .req r1 + NBLOCKS .req r2 + INC .req r3 + + // Pointers to the rotation tables + ROR24_TABLE .req r4 + ROR16_TABLE .req r5 + + // The original stack pointer + ORIG_SP .req r6 + + // NEON registers which contain the message words of the current block. + // M_0-M_3 are occasionally used for other purposes too. + M_0 .req d16 + M_1 .req d17 + M_2 .req d18 + M_3 .req d19 + M_4 .req d20 + M_5 .req d21 + M_6 .req d22 + M_7 .req d23 + M_8 .req d24 + M_9 .req d25 + M_10 .req d26 + M_11 .req d27 + M_12 .req d28 + M_13 .req d29 + M_14 .req d30 + M_15 .req d31 + + .align 4 + // Tables for computing ror64(x, 24) and ror64(x, 16) using the vtbl.8 + // instruction. This is the most efficient way to implement these + // rotation amounts with NEON. 
(On Cortex-A53 it's the same speed as + // vshr.u64 + vsli.u64, while on Cortex-A7 it's faster.) +.Lror24_table: + .byte 3, 4, 5, 6, 7, 0, 1, 2 +.Lror16_table: + .byte 2, 3, 4, 5, 6, 7, 0, 1 + // The BLAKE2b initialization vector +.Lblake2b_IV: + .quad 0x6a09e667f3bcc908, 0xbb67ae8584caa73b + .quad 0x3c6ef372fe94f82b, 0xa54ff53a5f1d36f1 + .quad 0x510e527fade682d1, 0x9b05688c2b3e6c1f + .quad 0x1f83d9abfb41bd6b, 0x5be0cd19137e2179 + +// Execute one round of BLAKE2b by updating the state matrix v[0..15] in the +// NEON registers q0-q7. The message block is in q8..q15 (M_0-M_15). The stack +// pointer points to a 32-byte aligned buffer containing a copy of q8 and q9 +// (M_0-M_3), so that they can be reloaded if they are used as temporary +// registers. The macro arguments s0-s15 give the order in which the message +// words are used in this round. 'final' is 1 if this is the final round. +.macro _blake2b_round s0, s1, s2, s3, s4, s5, s6, s7, \ + s8, s9, s10, s11, s12, s13, s14, s15, final=0 + + // Mix the columns: + // (v[0], v[4], v[8], v[12]), (v[1], v[5], v[9], v[13]), + // (v[2], v[6], v[10], v[14]), and (v[3], v[7], v[11], v[15]). + + // a += b + m[blake2b_sigma[r][2*i + 0]]; + vadd.u64 q0, q0, q2 + vadd.u64 q1, q1, q3 + vadd.u64 d0, d0, M_\s0 + vadd.u64 d1, d1, M_\s2 + vadd.u64 d2, d2, M_\s4 + vadd.u64 d3, d3, M_\s6 + + // d = ror64(d ^ a, 32); + veor q6, q6, q0 + veor q7, q7, q1 + vrev64.32 q6, q6 + vrev64.32 q7, q7 + + // c += d; + vadd.u64 q4, q4, q6 + vadd.u64 q5, q5, q7 + + // b = ror64(b ^ c, 24); + vld1.8 {M_0}, [ROR24_TABLE, :64] + veor q2, q2, q4 + veor q3, q3, q5 + vtbl.8 d4, {d4}, M_0 + vtbl.8 d5, {d5}, M_0 + vtbl.8 d6, {d6}, M_0 + vtbl.8 d7, {d7}, M_0 + + // a += b + m[blake2b_sigma[r][2*i + 1]]; + // + // M_0 got clobbered above, so we have to reload it if any of the four + // message words this step needs happens to be M_0. Otherwise we don't + // need to reload it here, as it will just get clobbered again below. +.if \s1 == 0 || \s3 == 0 || \s5 == 0 || \s7 == 0 + vld1.8 {M_0}, [sp, :64] +.endif + vadd.u64 q0, q0, q2 + vadd.u64 q1, q1, q3 + vadd.u64 d0, d0, M_\s1 + vadd.u64 d1, d1, M_\s3 + vadd.u64 d2, d2, M_\s5 + vadd.u64 d3, d3, M_\s7 + + // d = ror64(d ^ a, 16); + vld1.8 {M_0}, [ROR16_TABLE, :64] + veor q6, q6, q0 + veor q7, q7, q1 + vtbl.8 d12, {d12}, M_0 + vtbl.8 d13, {d13}, M_0 + vtbl.8 d14, {d14}, M_0 + vtbl.8 d15, {d15}, M_0 + + // c += d; + vadd.u64 q4, q4, q6 + vadd.u64 q5, q5, q7 + + // b = ror64(b ^ c, 63); + // + // This rotation amount isn't a multiple of 8, so it has to be + // implemented using a pair of shifts, which requires temporary + // registers. Use q8-q9 (M_0-M_3) for this, and reload them afterwards. + veor q8, q2, q4 + veor q9, q3, q5 + vshr.u64 q2, q8, #63 + vshr.u64 q3, q9, #63 + vsli.u64 q2, q8, #1 + vsli.u64 q3, q9, #1 + vld1.8 {q8-q9}, [sp, :256] + + // Mix the diagonals: + // (v[0], v[5], v[10], v[15]), (v[1], v[6], v[11], v[12]), + // (v[2], v[7], v[8], v[13]), and (v[3], v[4], v[9], v[14]). + // + // There are two possible ways to do this: use 'vext' instructions to + // shift the rows of the matrix so that the diagonals become columns, + // and undo it afterwards; or just use 64-bit operations on 'd' + // registers instead of 128-bit operations on 'q' registers. We use the + // latter approach, as it performs much better on Cortex-A7. 
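+	// With the rows of v[0..15] held in q0-q7, each diagonal is just a
+	// fixed set of d registers: for example, (d0, d5, d10, d15) is
+	// (v[0], v[5], v[10], v[15]). The steps below therefore name the
+	// right d registers directly instead of shuffling with vext.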
+ + // a += b + m[blake2b_sigma[r][2*i + 0]]; + vadd.u64 d0, d0, d5 + vadd.u64 d1, d1, d6 + vadd.u64 d2, d2, d7 + vadd.u64 d3, d3, d4 + vadd.u64 d0, d0, M_\s8 + vadd.u64 d1, d1, M_\s10 + vadd.u64 d2, d2, M_\s12 + vadd.u64 d3, d3, M_\s14 + + // d = ror64(d ^ a, 32); + veor d15, d15, d0 + veor d12, d12, d1 + veor d13, d13, d2 + veor d14, d14, d3 + vrev64.32 d15, d15 + vrev64.32 d12, d12 + vrev64.32 d13, d13 + vrev64.32 d14, d14 + + // c += d; + vadd.u64 d10, d10, d15 + vadd.u64 d11, d11, d12 + vadd.u64 d8, d8, d13 + vadd.u64 d9, d9, d14 + + // b = ror64(b ^ c, 24); + vld1.8 {M_0}, [ROR24_TABLE, :64] + veor d5, d5, d10 + veor d6, d6, d11 + veor d7, d7, d8 + veor d4, d4, d9 + vtbl.8 d5, {d5}, M_0 + vtbl.8 d6, {d6}, M_0 + vtbl.8 d7, {d7}, M_0 + vtbl.8 d4, {d4}, M_0 + + // a += b + m[blake2b_sigma[r][2*i + 1]]; +.if \s9 == 0 || \s11 == 0 || \s13 == 0 || \s15 == 0 + vld1.8 {M_0}, [sp, :64] +.endif + vadd.u64 d0, d0, d5 + vadd.u64 d1, d1, d6 + vadd.u64 d2, d2, d7 + vadd.u64 d3, d3, d4 + vadd.u64 d0, d0, M_\s9 + vadd.u64 d1, d1, M_\s11 + vadd.u64 d2, d2, M_\s13 + vadd.u64 d3, d3, M_\s15 + + // d = ror64(d ^ a, 16); + vld1.8 {M_0}, [ROR16_TABLE, :64] + veor d15, d15, d0 + veor d12, d12, d1 + veor d13, d13, d2 + veor d14, d14, d3 + vtbl.8 d12, {d12}, M_0 + vtbl.8 d13, {d13}, M_0 + vtbl.8 d14, {d14}, M_0 + vtbl.8 d15, {d15}, M_0 + + // c += d; + vadd.u64 d10, d10, d15 + vadd.u64 d11, d11, d12 + vadd.u64 d8, d8, d13 + vadd.u64 d9, d9, d14 + + // b = ror64(b ^ c, 63); + veor d16, d4, d9 + veor d17, d5, d10 + veor d18, d6, d11 + veor d19, d7, d8 + vshr.u64 q2, q8, #63 + vshr.u64 q3, q9, #63 + vsli.u64 q2, q8, #1 + vsli.u64 q3, q9, #1 + // Reloading q8-q9 can be skipped on the final round. +.if ! \final + vld1.8 {q8-q9}, [sp, :256] +.endif +.endm + +// +// void blake2b_compress_neon(struct blake2b_state *state, +// const u8 *block, size_t nblocks, u32 inc); +// +// Only the first three fields of struct blake2b_state are used: +// u64 h[8]; (inout) +// u64 t[2]; (inout) +// u64 f[2]; (in) +// + .align 5 +ENTRY(blake2b_compress_neon) + push {r4-r10} + + // Allocate a 32-byte stack buffer that is 32-byte aligned. + mov ORIG_SP, sp + sub ip, sp, #32 + bic ip, ip, #31 + mov sp, ip + + adr ROR24_TABLE, .Lror24_table + adr ROR16_TABLE, .Lror16_table + + mov ip, STATE + vld1.64 {q0-q1}, [ip]! // Load h[0..3] + vld1.64 {q2-q3}, [ip]! // Load h[4..7] +.Lnext_block: + adr r10, .Lblake2b_IV + vld1.64 {q14-q15}, [ip] // Load t[0..1] and f[0..1] + vld1.64 {q4-q5}, [r10]! // Load IV[0..3] + vmov r7, r8, d28 // Copy t[0] to (r7, r8) + vld1.64 {q6-q7}, [r10] // Load IV[4..7] + adds r7, r7, INC // Increment counter + bcs .Lslow_inc_ctr + vmov.i32 d28[0], r7 + vst1.64 {d28}, [ip] // Update t[0] +.Linc_ctr_done: + + // Load the next message block and finish initializing the state matrix + // 'v'. Fortunately, there are exactly enough NEON registers to fit the + // entire state matrix in q0-q7 and the entire message block in q8-15. + // + // However, _blake2b_round also needs some extra registers for rotates, + // so we have to spill some registers. It's better to spill the message + // registers than the state registers, as the message doesn't change. + // Therefore we store a copy of the first 32 bytes of the message block + // (q8-q9) in an aligned buffer on the stack so that they can be + // reloaded when needed. (We could just reload directly from the + // message buffer, but it's faster to use aligned loads.) + vld1.8 {q8-q9}, [BLOCK]! + veor q6, q6, q14 // v[12..13] = IV[4..5] ^ t[0..1] + vld1.8 {q10-q11}, [BLOCK]! 
+ veor q7, q7, q15 // v[14..15] = IV[6..7] ^ f[0..1] + vld1.8 {q12-q13}, [BLOCK]! + vst1.8 {q8-q9}, [sp, :256] + mov ip, STATE + vld1.8 {q14-q15}, [BLOCK]! + + // Execute the rounds. Each round is provided the order in which it + // needs to use the message words. + _blake2b_round 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 + _blake2b_round 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 + _blake2b_round 11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4 + _blake2b_round 7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8 + _blake2b_round 9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13 + _blake2b_round 2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9 + _blake2b_round 12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11 + _blake2b_round 13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10 + _blake2b_round 6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5 + _blake2b_round 10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0 + _blake2b_round 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 + _blake2b_round 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 \ + final=1 + + // Fold the final state matrix into the hash chaining value: + // + // for (i = 0; i < 8; i++) + // h[i] ^= v[i] ^ v[i + 8]; + // + vld1.64 {q8-q9}, [ip]! // Load old h[0..3] + veor q0, q0, q4 // v[0..1] ^= v[8..9] + veor q1, q1, q5 // v[2..3] ^= v[10..11] + vld1.64 {q10-q11}, [ip] // Load old h[4..7] + veor q2, q2, q6 // v[4..5] ^= v[12..13] + veor q3, q3, q7 // v[6..7] ^= v[14..15] + veor q0, q0, q8 // v[0..1] ^= h[0..1] + veor q1, q1, q9 // v[2..3] ^= h[2..3] + mov ip, STATE + subs NBLOCKS, NBLOCKS, #1 // nblocks-- + vst1.64 {q0-q1}, [ip]! // Store new h[0..3] + veor q2, q2, q10 // v[4..5] ^= h[4..5] + veor q3, q3, q11 // v[6..7] ^= h[6..7] + vst1.64 {q2-q3}, [ip]! // Store new h[4..7] + + // Advance to the next block, if there is one. + bne .Lnext_block // nblocks != 0? + + mov sp, ORIG_SP + pop {r4-r10} + mov pc, lr + +.Lslow_inc_ctr: + // Handle the case where the counter overflowed its low 32 bits, by + // carrying the overflow bit into the full 128-bit counter. 
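+	// The carry flag set by the 'adds r7, r7, INC' above is still valid
+	// here, as no intervening instruction touches the flags. Copy t[1]
+	// into (r9, r10), then ripple the carry through the remaining 96 bits
+	// of the 128-bit counter with adcs/adc.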
+ vmov r9, r10, d29 + adcs r8, r8, #0 + adcs r9, r9, #0 + adc r10, r10, #0 + vmov d28, r7, r8 + vmov d29, r9, r10 + vst1.64 {q14}, [ip] // Update t[0] and t[1] + b .Linc_ctr_done +ENDPROC(blake2b_compress_neon) diff --git a/arch/arm/crypto/blake2b-neon-glue.c b/arch/arm/crypto/blake2b-neon-glue.c new file mode 100644 index 0000000000000..34d73200e7fa6 --- /dev/null +++ b/arch/arm/crypto/blake2b-neon-glue.c @@ -0,0 +1,105 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * BLAKE2b digest algorithm, NEON accelerated + * + * Copyright 2020 Google LLC + */ + +#include +#include +#include + +#include +#include + +#include +#include + +asmlinkage void blake2b_compress_neon(struct blake2b_state *state, + const u8 *block, size_t nblocks, u32 inc); + +static void blake2b_compress_arch(struct blake2b_state *state, + const u8 *block, size_t nblocks, u32 inc) +{ + if (!crypto_simd_usable()) { + blake2b_compress_generic(state, block, nblocks, inc); + return; + } + + do { + const size_t blocks = min_t(size_t, nblocks, + SZ_4K / BLAKE2B_BLOCK_SIZE); + + kernel_neon_begin(); + blake2b_compress_neon(state, block, blocks, inc); + kernel_neon_end(); + + nblocks -= blocks; + block += blocks * BLAKE2B_BLOCK_SIZE; + } while (nblocks); +} + +static int crypto_blake2b_update_neon(struct shash_desc *desc, + const u8 *in, unsigned int inlen) +{ + return crypto_blake2b_update(desc, in, inlen, blake2b_compress_arch); +} + +static int crypto_blake2b_final_neon(struct shash_desc *desc, u8 *out) +{ + return crypto_blake2b_final(desc, out, blake2b_compress_arch); +} + +#define BLAKE2B_ALG(name, driver_name, digest_size) \ + { \ + .base.cra_name = name, \ + .base.cra_driver_name = driver_name, \ + .base.cra_priority = 200, \ + .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ + .base.cra_blocksize = BLAKE2B_BLOCK_SIZE, \ + .base.cra_ctxsize = sizeof(struct blake2b_tfm_ctx), \ + .base.cra_module = THIS_MODULE, \ + .digestsize = digest_size, \ + .setkey = crypto_blake2b_setkey, \ + .init = crypto_blake2b_init, \ + .update = crypto_blake2b_update_neon, \ + .final = crypto_blake2b_final_neon, \ + .descsize = sizeof(struct blake2b_state), \ + } + +static struct shash_alg blake2b_neon_algs[] = { + BLAKE2B_ALG("blake2b-160", "blake2b-160-neon", BLAKE2B_160_HASH_SIZE), + BLAKE2B_ALG("blake2b-256", "blake2b-256-neon", BLAKE2B_256_HASH_SIZE), + BLAKE2B_ALG("blake2b-384", "blake2b-384-neon", BLAKE2B_384_HASH_SIZE), + BLAKE2B_ALG("blake2b-512", "blake2b-512-neon", BLAKE2B_512_HASH_SIZE), +}; + +static int __init blake2b_neon_mod_init(void) +{ + if (!(elf_hwcap & HWCAP_NEON)) + return -ENODEV; + + return crypto_register_shashes(blake2b_neon_algs, + ARRAY_SIZE(blake2b_neon_algs)); +} + +static void __exit blake2b_neon_mod_exit(void) +{ + return crypto_unregister_shashes(blake2b_neon_algs, + ARRAY_SIZE(blake2b_neon_algs)); +} + +module_init(blake2b_neon_mod_init); +module_exit(blake2b_neon_mod_exit); + +MODULE_DESCRIPTION("BLAKE2b digest algorithm, NEON accelerated"); +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Eric Biggers "); +MODULE_ALIAS_CRYPTO("blake2b-160"); +MODULE_ALIAS_CRYPTO("blake2b-160-neon"); +MODULE_ALIAS_CRYPTO("blake2b-256"); +MODULE_ALIAS_CRYPTO("blake2b-256-neon"); +MODULE_ALIAS_CRYPTO("blake2b-384"); +MODULE_ALIAS_CRYPTO("blake2b-384-neon"); +MODULE_ALIAS_CRYPTO("blake2b-512"); +MODULE_ALIAS_CRYPTO("blake2b-512-neon");
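As a usage note (not part of the patch series): the hashes registered above can be exercised from user space through the kernel's AF_ALG interface. The sketch below is illustrative only and omits error checking; the algorithm name "blake2b-256" and the 32-byte digest size come from the patch, everything else is an assumption.

	#include <linux/if_alg.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main(void)
	{
		struct sockaddr_alg sa = {
			.salg_family = AF_ALG,
			.salg_type   = "hash",
			/* The crypto API picks the highest-priority "blake2b-256"
			 * implementation, e.g. blake2b-256-neon when loaded. */
			.salg_name   = "blake2b-256",
		};
		unsigned char digest[32];	/* BLAKE2B_256_HASH_SIZE */
		int tfmfd, opfd, i;

		tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
		/* A keyed hash would be configured here with
		 * setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, keylen). */
		opfd = accept(tfmfd, NULL, 0);

		write(opfd, "test message", 12);	/* arbitrary input */
		read(opfd, digest, sizeof(digest));

		for (i = 0; i < (int)sizeof(digest); i++)
			printf("%02x", digest[i]);
		printf("\n");

		close(opfd);
		close(tfmfd);
		return 0;
	}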