From patchwork Fri Jan 14 14:20:13 2022
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 12713679
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	linux-crypto@vger.kernel.org
Cc: "Jason A. Donenfeld", Geert Uytterhoeven, Herbert Xu, Andy Lutomirski,
	Ard Biesheuvel, Jean-Philippe Aumasson, Alexei Starovoitov
Subject: [PATCH RFC v2 1/3] bpf: move from sha1 to blake2s in tag calculation
Date: Fri, 14 Jan 2022 15:20:13 +0100
Message-Id: <20220114142015.87974-2-Jason@zx2c4.com>
In-Reply-To: <20220114142015.87974-1-Jason@zx2c4.com>
References: <20220114142015.87974-1-Jason@zx2c4.com>

BLAKE2s is faster and more secure. SHA-1 has been broken for a long time
now. This also removes quite a bit of code, and lets us potentially
remove sha1 from lib, which would further reduce vmlinux size.

Cc: Geert Uytterhoeven
Cc: Herbert Xu
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Jean-Philippe Aumasson
Cc: Alexei Starovoitov
Signed-off-by: Jason A. Donenfeld
---
 kernel/bpf/core.c | 39 ++++-----------------------------------
 1 file changed, 4 insertions(+), 35 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index de3e5bc6781f..20a799d36ba8 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -265,24 +266,16 @@ void __bpf_prog_free(struct bpf_prog *fp)
 
 int bpf_prog_calc_tag(struct bpf_prog *fp)
 {
-	const u32 bits_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
 	u32 raw_size = bpf_prog_tag_scratch_size(fp);
-	u32 digest[SHA1_DIGEST_WORDS];
-	u32 ws[SHA1_WORKSPACE_WORDS];
-	u32 i, bsize, psize, blocks;
 	struct bpf_insn *dst;
 	bool was_ld_map;
-	u8 *raw, *todo;
-	__be32 *result;
-	__be64 *bits;
+	u8 *raw;
+	int i;
 
 	raw = vmalloc(raw_size);
 	if (!raw)
 		return -ENOMEM;
 
-	sha1_init(digest);
-	memset(ws, 0, sizeof(ws));
-
 	/* We need to take out the map fd for the digest calculation
	 * since they are unstable from user space side.
	 */
@@ -307,31 +300,7 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
 		}
 	}
 
-	psize = bpf_prog_insn_size(fp);
-	memset(&raw[psize], 0, raw_size - psize);
-	raw[psize++] = 0x80;
-
-	bsize = round_up(psize, SHA1_BLOCK_SIZE);
-	blocks = bsize / SHA1_BLOCK_SIZE;
-	todo = raw;
-	if (bsize - psize >= sizeof(__be64)) {
-		bits = (__be64 *)(todo + bsize - sizeof(__be64));
-	} else {
-		bits = (__be64 *)(todo + bsize + bits_offset);
-		blocks++;
-	}
-	*bits = cpu_to_be64((psize - 1) << 3);
-
-	while (blocks--) {
-		sha1_transform(digest, todo, ws);
-		todo += SHA1_BLOCK_SIZE;
-	}
-
-	result = (__force __be32 *)digest;
-	for (i = 0; i < SHA1_DIGEST_WORDS; i++)
-		result[i] = cpu_to_be32(digest[i]);
-	memcpy(fp->tag, result, sizeof(fp->tag));
-
+	blake2s(fp->tag, raw, NULL, sizeof(fp->tag), bpf_prog_insn_size(fp), 0);
 	vfree(raw);
 	return 0;
 }
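What the conversion buys: blake2s() is a one-shot hash that does its own
padding and finalization internally, so the Merkle-Damgard bookkeeping the
old code did by hand (append a 0x80 terminator byte, zero-pad to a 64-byte
block boundary, store the message bit length at the end of the final block,
then loop sha1_transform() over each block) is absorbed into the single
library call. A minimal standalone C sketch of just that removed padding
arithmetic, with a made-up psize standing in for bpf_prog_insn_size(fp):

#include <stdint.h>
#include <stdio.h>

#define SHA1_BLOCK_SIZE 64

/* Round n up to the next multiple of the (power-of-two) block size. */
static uint32_t round_up_block(uint32_t n)
{
	return (n + SHA1_BLOCK_SIZE - 1) & ~(uint32_t)(SHA1_BLOCK_SIZE - 1);
}

int main(void)
{
	/* Hypothetical program size, standing in for bpf_prog_insn_size(fp). */
	uint32_t psize = 200;
	uint32_t bsize, blocks;

	/* The old bpf_prog_calc_tag() appended the 0x80 terminator byte... */
	psize++;

	/* ...padded out to a whole number of 64-byte blocks... */
	bsize = round_up_block(psize);
	blocks = bsize / SHA1_BLOCK_SIZE;

	/* ...and needed 8 bytes at the end of the last block for the
	 * big-endian bit length, spilling into an extra block if the
	 * remaining tail couldn't hold it. */
	if (bsize - psize < sizeof(uint64_t)) {
		bsize += SHA1_BLOCK_SIZE;
		blocks++;
	}

	printf("%u message bytes -> %u padded bytes, %u compression calls\n",
	       psize - 1, bsize, blocks);
	return 0;
}

With BLAKE2s all of the above lives inside the library, and the call site
shrinks to one line taking the output, input, optional key, and their
lengths.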
Donenfeld" X-Patchwork-Id: 12713681 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 598A0C433EF for ; Fri, 14 Jan 2022 14:20:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237665AbiANOUu (ORCPT ); Fri, 14 Jan 2022 09:20:50 -0500 Received: from ams.source.kernel.org ([145.40.68.75]:56628 "EHLO ams.source.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237549AbiANOUp (ORCPT ); Fri, 14 Jan 2022 09:20:45 -0500 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 6AD05B825FD; Fri, 14 Jan 2022 14:20:44 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DC6F8C36AF3; Fri, 14 Jan 2022 14:20:41 +0000 (UTC) Authentication-Results: smtp.kernel.org; dkim=pass (1024-bit key) header.d=zx2c4.com header.i=@zx2c4.com header.b="UpmILH5D" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zx2c4.com; s=20210105; t=1642170040; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=cncAl1lhYlaogYKmR5zQjCe3rwFoN6Y6kAe4crZ9kvU=; b=UpmILH5DoSXmwuy6cHRdFXeBPVq/cj/NSwzm10lYJ3bJ5PSjhNLkisMY3OxAnV0Ue73wTs +r5+f8RZURvO9Zxq59N7Qb4WUOmzKZcYP+Savgy+8IB7WwC/ZWhyq+a8yt9CJlpKno2dMw 81DdppmXajYhlwMDlAX/KZezx1DdFds= Received: by mail.zx2c4.com (ZX2C4 Mail Server) with ESMTPSA id 378e8146 (TLSv1.3:AEAD-AES256-GCM-SHA384:256:NO); Fri, 14 Jan 2022 14:20:40 +0000 (UTC) From: "Jason A. Donenfeld" To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, linux-crypto@vger.kernel.org Cc: "Jason A. Donenfeld" , Geert Uytterhoeven , Herbert Xu , Andy Lutomirski , Ard Biesheuvel , Jean-Philippe Aumasson , Hannes Frederic Sowa , Fernando Gont , Erik Kline , Lorenzo Colitti Subject: [PATCH RFC v2 2/3] ipv6: move from sha1 to blake2s in address calculation Date: Fri, 14 Jan 2022 15:20:14 +0100 Message-Id: <20220114142015.87974-3-Jason@zx2c4.com> In-Reply-To: <20220114142015.87974-1-Jason@zx2c4.com> References: <20220114142015.87974-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org BLAKE2s is faster and more secure. SHA-1 has been broken for a long time now. This also removes some code complexity, and lets us potentially remove sha1 from lib, which would further reduce vmlinux size. This also lets us use the secret in the proper field for a secret, rather than the prepending done in the prior construction. Cc: Geert Uytterhoeven Cc: Herbert Xu Cc: Andy Lutomirski Cc: Ard Biesheuvel Cc: Jean-Philippe Aumasson Cc: Hannes Frederic Sowa Cc: Fernando Gont Cc: Erik Kline Cc: Lorenzo Colitti Signed-off-by: Jason A. 
---
 net/ipv6/addrconf.c | 56 ++++++++++++---------------------------------
 1 file changed, 14 insertions(+), 42 deletions(-)

diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 3eee17790a82..47048aafebd3 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -61,7 +61,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -3224,61 +3224,33 @@ static int ipv6_generate_stable_address(struct in6_addr *address,
 					u8 dad_count,
 					const struct inet6_dev *idev)
 {
-	static DEFINE_SPINLOCK(lock);
-	static __u32 digest[SHA1_DIGEST_WORDS];
-	static __u32 workspace[SHA1_WORKSPACE_WORDS];
-
-	static union {
-		char __data[SHA1_BLOCK_SIZE];
-		struct {
-			struct in6_addr secret;
-			__be32 prefix[2];
-			unsigned char hwaddr[MAX_ADDR_LEN];
-			u8 dad_count;
-		} __packed;
-	} data;
-
-	struct in6_addr secret;
-	struct in6_addr temp;
 	struct net *net = dev_net(idev->dev);
-
-	BUILD_BUG_ON(sizeof(data.__data) != sizeof(data));
+	const struct in6_addr *secret;
+	struct blake2s_state hash;
+	struct in6_addr proposal;
 
 	if (idev->cnf.stable_secret.initialized)
-		secret = idev->cnf.stable_secret.secret;
+		secret = &idev->cnf.stable_secret.secret;
 	else if (net->ipv6.devconf_dflt->stable_secret.initialized)
-		secret = net->ipv6.devconf_dflt->stable_secret.secret;
+		secret = &net->ipv6.devconf_dflt->stable_secret.secret;
 	else
 		return -1;
 
 retry:
-	spin_lock_bh(&lock);
-
-	sha1_init(digest);
-	memset(&data, 0, sizeof(data));
-	memset(workspace, 0, sizeof(workspace));
-	memcpy(data.hwaddr, idev->dev->perm_addr, idev->dev->addr_len);
-	data.prefix[0] = address->s6_addr32[0];
-	data.prefix[1] = address->s6_addr32[1];
-	data.secret = secret;
-	data.dad_count = dad_count;
-
-	sha1_transform(digest, data.__data, workspace);
-
-	temp = *address;
-	temp.s6_addr32[2] = (__force __be32)digest[0];
-	temp.s6_addr32[3] = (__force __be32)digest[1];
-
-	spin_unlock_bh(&lock);
+	blake2s_init_key(&hash, sizeof(proposal.s6_addr32[2]) * 2, secret, sizeof(*secret));
+	blake2s_update(&hash, (u8 *)&address->s6_addr32[0], sizeof(address->s6_addr32[0]) * 2);
+	blake2s_update(&hash, idev->dev->perm_addr, idev->dev->addr_len);
+	blake2s_update(&hash, (u8 *)&dad_count, sizeof(dad_count));
+	blake2s_final(&hash, (u8 *)&proposal.s6_addr32[2]);
 
-	if (ipv6_reserved_interfaceid(temp)) {
+	if (ipv6_reserved_interfaceid(proposal)) {
 		dad_count++;
-		if (dad_count > dev_net(idev->dev)->ipv6.sysctl.idgen_retries)
+		if (dad_count > net->ipv6.sysctl.idgen_retries)
 			return -1;
 		goto retry;
 	}
 
-	*address = temp;
+	*address = proposal;
 	return 0;
 }
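The shape of the new construction is a standard keyed incremental hash:
the stable secret goes into BLAKE2s's key slot, the message is just
prefix || hwaddr || dad_count, and the 8-byte output becomes the interface
identifier. A standalone sketch of the same construction, using libsodium's
crypto_generichash (BLAKE2b) as a stand-in for the in-kernel BLAKE2s; all
field values below are made up:

#include <sodium.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Stand-ins for the kernel's inputs; every value here is invented. */
	unsigned char secret[16] = "0123456789abcdef";  /* stable_secret */
	unsigned char prefix[8]  = { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 0 };
	unsigned char hwaddr[6]  = { 0x02, 0x00, 0x5e, 0x10, 0x20, 0x30 };
	unsigned char dad_count  = 0;
	unsigned char digest[16]; /* libsodium's minimum digest is 16 bytes */
	unsigned char iid[8];     /* the generated interface identifier */
	crypto_generichash_state st;
	size_t i;

	if (sodium_init() < 0)
		return 1;

	/* Keyed hash: the secret is the key, not part of the message. */
	crypto_generichash_init(&st, secret, sizeof(secret), sizeof(digest));
	crypto_generichash_update(&st, prefix, sizeof(prefix));
	crypto_generichash_update(&st, hwaddr, sizeof(hwaddr));
	crypto_generichash_update(&st, &dad_count, sizeof(dad_count));
	crypto_generichash_final(&st, digest, sizeof(digest));

	/* The kernel asks BLAKE2s for 8 bytes directly; libsodium's floor
	 * is 16 bytes, so this sketch truncates instead. */
	memcpy(iid, digest, sizeof(iid));

	for (i = 0; i < sizeof(iid); i++)
		printf("%02x", iid[i]);
	printf("\n");
	return 0;
}

Keying the hash, rather than prepending the secret to the hashed buffer as
the old union-based layout did, is the construction improvement the commit
message refers to.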
Donenfeld" X-Patchwork-Id: 12713680 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A2B4C433F5 for ; Fri, 14 Jan 2022 14:20:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238002AbiANOUu (ORCPT ); Fri, 14 Jan 2022 09:20:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47888 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238008AbiANOUr (ORCPT ); Fri, 14 Jan 2022 09:20:47 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 09642C061574; Fri, 14 Jan 2022 06:20:46 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 8AC7661D0D; Fri, 14 Jan 2022 14:20:45 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 416BBC36AE5; Fri, 14 Jan 2022 14:20:44 +0000 (UTC) Authentication-Results: smtp.kernel.org; dkim=pass (1024-bit key) header.d=zx2c4.com header.i=@zx2c4.com header.b="Ir2iB1Rl" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zx2c4.com; s=20210105; t=1642170043; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=2M0plqEKxWINnvXmEwZigA0qBNxyVTO7bInRkdCgSbI=; b=Ir2iB1RlDHtlSVT2J9bpjhsBW361nOkoDQYwV9aKUGllgEca/La0x3mqfh54zrZDk+Dsyh S1e5g8YjMDcMOdFUSMSAxMPjobcBRzSizR6J9kyrjHcIhSpJth0fZn3oY34PYyh6D/YzV3 Xl+xywK0YDmwqcujcMKjq0PHniYwbUA= Received: by mail.zx2c4.com (ZX2C4 Mail Server) with ESMTPSA id 6101a9e0 (TLSv1.3:AEAD-AES256-GCM-SHA384:256:NO); Fri, 14 Jan 2022 14:20:43 +0000 (UTC) From: "Jason A. Donenfeld" To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org, linux-crypto@vger.kernel.org Cc: "Jason A. Donenfeld" , Geert Uytterhoeven , Herbert Xu , Andy Lutomirski , Ard Biesheuvel Subject: [PATCH RFC v2 3/3] crypto: sha1_generic - import lib/sha1.c locally Date: Fri, 14 Jan 2022 15:20:15 +0100 Message-Id: <20220114142015.87974-4-Jason@zx2c4.com> In-Reply-To: <20220114142015.87974-1-Jason@zx2c4.com> References: <20220114142015.87974-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org With no non-crypto API users of this function, we can move it into the generic crypto/ code where it belongs. Cc: Geert Uytterhoeven Cc: Herbert Xu Cc: Andy Lutomirski Cc: Ard Biesheuvel Signed-off-by: Jason A. 
---
 crypto/sha1_generic.c | 182 +++++++++++++++++++++++++++++++++++++
 include/crypto/sha1.h |  10 ---
 lib/Makefile          |   2 +-
 lib/sha1.c            | 204 ------------------------------------------
 4 files changed, 183 insertions(+), 215 deletions(-)
 delete mode 100644 lib/sha1.c

diff --git a/crypto/sha1_generic.c b/crypto/sha1_generic.c
index 325b57fe28dc..1475a0fbbf4e 100644
--- a/crypto/sha1_generic.c
+++ b/crypto/sha1_generic.c
@@ -16,9 +16,191 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
+#include
+
+#define SHA1_DIGEST_WORDS	(SHA1_DIGEST_SIZE / 4)
+#define SHA1_WORKSPACE_WORDS	16
+
+/*
+ * If you have 32 registers or more, the compiler can (and should)
+ * try to change the array[] accesses into registers. However, on
+ * machines with less than ~25 registers, that won't really work,
+ * and at least gcc will make an unholy mess of it.
+ *
+ * So to avoid that mess which just slows things down, we force
+ * the stores to memory to actually happen (we might be better off
+ * with a 'W(t)=(val);asm("":"+m" (W(t))' there instead, as
+ * suggested by Artur Skawina - that will also make gcc unable to
+ * try to do the silly "optimize away loads" part because it won't
+ * see what the value will be).
+ *
+ * Ben Herrenschmidt reports that on PPC, the C version comes close
+ * to the optimized asm with this (ie on PPC you don't want that
+ * 'volatile', since there are lots of registers).
+ *
+ * On ARM we get the best code generation by forcing a full memory barrier
+ * between each SHA_ROUND, otherwise gcc happily get wild with spilling and
+ * the stack frame size simply explode and performance goes down the drain.
+ */
+
+#ifdef CONFIG_X86
+  #define setW(x, val) (*(volatile __u32 *)&W(x) = (val))
+#elif defined(CONFIG_ARM)
+  #define setW(x, val) do { W(x) = (val); __asm__("":::"memory"); } while (0)
+#else
+  #define setW(x, val) (W(x) = (val))
+#endif
+
+/* This "rolls" over the 512-bit array */
+#define W(x) (array[(x)&15])
+
+/*
+ * Where do we get the source from? The first 16 iterations get it from
+ * the input data, the next mix it from the 512-bit array.
+ */
+#define SHA_SRC(t) get_unaligned_be32((__u32 *)data + t)
+#define SHA_MIX(t) rol32(W(t+13) ^ W(t+8) ^ W(t+2) ^ W(t), 1)
+
+#define SHA_ROUND(t, input, fn, constant, A, B, C, D, E) do { \
+	__u32 TEMP = input(t); setW(t, TEMP); \
+	E += TEMP + rol32(A,5) + (fn) + (constant); \
+	B = ror32(B, 2); } while (0)
+
+#define T_0_15(t, A, B, C, D, E)  SHA_ROUND(t, SHA_SRC, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E )
+#define T_16_19(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E )
+#define T_20_39(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (B^C^D) , 0x6ed9eba1, A, B, C, D, E )
+#define T_40_59(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, ((B&C)+(D&(B^C))) , 0x8f1bbcdc, A, B, C, D, E )
+#define T_60_79(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (B^C^D) , 0xca62c1d6, A, B, C, D, E )
+
+/**
+ * sha1_transform - single block SHA1 transform (deprecated)
+ *
+ * @digest: 160 bit digest to update
+ * @data:   512 bits of data to hash
+ * @array:  16 words of workspace (see note)
+ *
+ * This function executes SHA-1's internal compression function. It updates the
+ * 160-bit internal state (@digest) with a single 512-bit data block (@data).
+ *
+ * Don't use this function. SHA-1 is no longer considered secure. And even if
+ * you do have to use SHA-1, this isn't the correct way to hash something with
+ * SHA-1 as this doesn't handle padding and finalization.
+ *
+ * Note: If the hash is security sensitive, the caller should be sure
+ * to clear the workspace. This is left to the caller to avoid
+ * unnecessary clears between chained hashing operations.
+ */
+static void sha1_transform(__u32 *digest, const char *data, __u32 *array)
+{
+	__u32 A, B, C, D, E;
+
+	A = digest[0];
+	B = digest[1];
+	C = digest[2];
+	D = digest[3];
+	E = digest[4];
+
+	/* Round 1 - iterations 0-16 take their input from 'data' */
+	T_0_15( 0, A, B, C, D, E);
+	T_0_15( 1, E, A, B, C, D);
+	T_0_15( 2, D, E, A, B, C);
+	T_0_15( 3, C, D, E, A, B);
+	T_0_15( 4, B, C, D, E, A);
+	T_0_15( 5, A, B, C, D, E);
+	T_0_15( 6, E, A, B, C, D);
+	T_0_15( 7, D, E, A, B, C);
+	T_0_15( 8, C, D, E, A, B);
+	T_0_15( 9, B, C, D, E, A);
+	T_0_15(10, A, B, C, D, E);
+	T_0_15(11, E, A, B, C, D);
+	T_0_15(12, D, E, A, B, C);
+	T_0_15(13, C, D, E, A, B);
+	T_0_15(14, B, C, D, E, A);
+	T_0_15(15, A, B, C, D, E);
+
+	/* Round 1 - tail. Input from 512-bit mixing array */
+	T_16_19(16, E, A, B, C, D);
+	T_16_19(17, D, E, A, B, C);
+	T_16_19(18, C, D, E, A, B);
+	T_16_19(19, B, C, D, E, A);
+
+	/* Round 2 */
+	T_20_39(20, A, B, C, D, E);
+	T_20_39(21, E, A, B, C, D);
+	T_20_39(22, D, E, A, B, C);
+	T_20_39(23, C, D, E, A, B);
+	T_20_39(24, B, C, D, E, A);
+	T_20_39(25, A, B, C, D, E);
+	T_20_39(26, E, A, B, C, D);
+	T_20_39(27, D, E, A, B, C);
+	T_20_39(28, C, D, E, A, B);
+	T_20_39(29, B, C, D, E, A);
+	T_20_39(30, A, B, C, D, E);
+	T_20_39(31, E, A, B, C, D);
+	T_20_39(32, D, E, A, B, C);
+	T_20_39(33, C, D, E, A, B);
+	T_20_39(34, B, C, D, E, A);
+	T_20_39(35, A, B, C, D, E);
+	T_20_39(36, E, A, B, C, D);
+	T_20_39(37, D, E, A, B, C);
+	T_20_39(38, C, D, E, A, B);
+	T_20_39(39, B, C, D, E, A);
+
+	/* Round 3 */
+	T_40_59(40, A, B, C, D, E);
+	T_40_59(41, E, A, B, C, D);
+	T_40_59(42, D, E, A, B, C);
+	T_40_59(43, C, D, E, A, B);
+	T_40_59(44, B, C, D, E, A);
+	T_40_59(45, A, B, C, D, E);
+	T_40_59(46, E, A, B, C, D);
+	T_40_59(47, D, E, A, B, C);
+	T_40_59(48, C, D, E, A, B);
+	T_40_59(49, B, C, D, E, A);
+	T_40_59(50, A, B, C, D, E);
+	T_40_59(51, E, A, B, C, D);
+	T_40_59(52, D, E, A, B, C);
+	T_40_59(53, C, D, E, A, B);
+	T_40_59(54, B, C, D, E, A);
+	T_40_59(55, A, B, C, D, E);
+	T_40_59(56, E, A, B, C, D);
+	T_40_59(57, D, E, A, B, C);
+	T_40_59(58, C, D, E, A, B);
+	T_40_59(59, B, C, D, E, A);
+
+	/* Round 4 */
+	T_60_79(60, A, B, C, D, E);
+	T_60_79(61, E, A, B, C, D);
+	T_60_79(62, D, E, A, B, C);
+	T_60_79(63, C, D, E, A, B);
+	T_60_79(64, B, C, D, E, A);
+	T_60_79(65, A, B, C, D, E);
+	T_60_79(66, E, A, B, C, D);
+	T_60_79(67, D, E, A, B, C);
+	T_60_79(68, C, D, E, A, B);
+	T_60_79(69, B, C, D, E, A);
+	T_60_79(70, A, B, C, D, E);
+	T_60_79(71, E, A, B, C, D);
+	T_60_79(72, D, E, A, B, C);
+	T_60_79(73, C, D, E, A, B);
+	T_60_79(74, B, C, D, E, A);
+	T_60_79(75, A, B, C, D, E);
+	T_60_79(76, E, A, B, C, D);
+	T_60_79(77, D, E, A, B, C);
+	T_60_79(78, C, D, E, A, B);
+	T_60_79(79, B, C, D, E, A);
+
+	digest[0] += A;
+	digest[1] += B;
+	digest[2] += C;
+	digest[3] += D;
+	digest[4] += E;
+}
 
 const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = {
 	0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d,
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..118a3cad5eb3 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -33,14 +33,4 @@ extern int crypto_sha1_update(struct shash_desc *desc, const u8 *data,
 extern int crypto_sha1_finup(struct shash_desc *desc, const u8 *data,
 			     unsigned int len, u8 *hash);
 
-/*
- * An implementation of SHA-1's compression function. Don't use in new code!
- * You shouldn't be using SHA-1, and even if you *have* to use SHA-1, this isn't
- * the correct way to hash something with SHA-1 (use crypto_shash instead).
- */
-#define SHA1_DIGEST_WORDS	(SHA1_DIGEST_SIZE / 4)
-#define SHA1_WORKSPACE_WORDS	16
-void sha1_init(__u32 *buf);
-void sha1_transform(__u32 *digest, const char *data, __u32 *W);
-
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/Makefile b/lib/Makefile
index b213a7bbf3fd..233a2fd2aba4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -29,7 +29,7 @@ endif
 
 lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 rbtree.o radix-tree.o timerqueue.o xarray.o \
-	 idr.o extable.o sha1.o irq_regs.o argv_split.o \
+	 idr.o extable.o irq_regs.o argv_split.o \
 	 flex_proportions.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
 	 earlycpio.o seq_buf.o siphash.o dec_and_lock.o \
diff --git a/lib/sha1.c b/lib/sha1.c
deleted file mode 100644
index 9bd1935a1472..000000000000
--- a/lib/sha1.c
+++ /dev/null
@@ -1,204 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * SHA1 routine optimized to do word accesses rather than byte accesses,
- * and to avoid unnecessary copies into the context array.
- *
- * This was based on the git SHA1 implementation.
- */
-
-#include
-#include
-#include
-#include
-#include
-
-/*
- * If you have 32 registers or more, the compiler can (and should)
- * try to change the array[] accesses into registers. However, on
- * machines with less than ~25 registers, that won't really work,
- * and at least gcc will make an unholy mess of it.
- *
- * So to avoid that mess which just slows things down, we force
- * the stores to memory to actually happen (we might be better off
- * with a 'W(t)=(val);asm("":"+m" (W(t))' there instead, as
- * suggested by Artur Skawina - that will also make gcc unable to
- * try to do the silly "optimize away loads" part because it won't
- * see what the value will be).
- *
- * Ben Herrenschmidt reports that on PPC, the C version comes close
- * to the optimized asm with this (ie on PPC you don't want that
- * 'volatile', since there are lots of registers).
- *
- * On ARM we get the best code generation by forcing a full memory barrier
- * between each SHA_ROUND, otherwise gcc happily get wild with spilling and
- * the stack frame size simply explode and performance goes down the drain.
- */
-
-#ifdef CONFIG_X86
-  #define setW(x, val) (*(volatile __u32 *)&W(x) = (val))
-#elif defined(CONFIG_ARM)
-  #define setW(x, val) do { W(x) = (val); __asm__("":::"memory"); } while (0)
-#else
-  #define setW(x, val) (W(x) = (val))
-#endif
-
-/* This "rolls" over the 512-bit array */
-#define W(x) (array[(x)&15])
-
-/*
- * Where do we get the source from? The first 16 iterations get it from
- * the input data, the next mix it from the 512-bit array.
- */
-#define SHA_SRC(t) get_unaligned_be32((__u32 *)data + t)
-#define SHA_MIX(t) rol32(W(t+13) ^ W(t+8) ^ W(t+2) ^ W(t), 1)
-
-#define SHA_ROUND(t, input, fn, constant, A, B, C, D, E) do { \
-	__u32 TEMP = input(t); setW(t, TEMP); \
-	E += TEMP + rol32(A,5) + (fn) + (constant); \
-	B = ror32(B, 2); } while (0)
-
-#define T_0_15(t, A, B, C, D, E)  SHA_ROUND(t, SHA_SRC, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E )
-#define T_16_19(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E )
-#define T_20_39(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (B^C^D) , 0x6ed9eba1, A, B, C, D, E )
-#define T_40_59(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, ((B&C)+(D&(B^C))) , 0x8f1bbcdc, A, B, C, D, E )
-#define T_60_79(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (B^C^D) , 0xca62c1d6, A, B, C, D, E )
-
-/**
- * sha1_transform - single block SHA1 transform (deprecated)
- *
- * @digest: 160 bit digest to update
- * @data:   512 bits of data to hash
- * @array:  16 words of workspace (see note)
- *
- * This function executes SHA-1's internal compression function. It updates the
- * 160-bit internal state (@digest) with a single 512-bit data block (@data).
- *
- * Don't use this function. SHA-1 is no longer considered secure. And even if
- * you do have to use SHA-1, this isn't the correct way to hash something with
- * SHA-1 as this doesn't handle padding and finalization.
- *
- * Note: If the hash is security sensitive, the caller should be sure
- * to clear the workspace. This is left to the caller to avoid
- * unnecessary clears between chained hashing operations.
- */
-void sha1_transform(__u32 *digest, const char *data, __u32 *array)
-{
-	__u32 A, B, C, D, E;
-
-	A = digest[0];
-	B = digest[1];
-	C = digest[2];
-	D = digest[3];
-	E = digest[4];
-
-	/* Round 1 - iterations 0-16 take their input from 'data' */
-	T_0_15( 0, A, B, C, D, E);
-	T_0_15( 1, E, A, B, C, D);
-	T_0_15( 2, D, E, A, B, C);
-	T_0_15( 3, C, D, E, A, B);
-	T_0_15( 4, B, C, D, E, A);
-	T_0_15( 5, A, B, C, D, E);
-	T_0_15( 6, E, A, B, C, D);
-	T_0_15( 7, D, E, A, B, C);
-	T_0_15( 8, C, D, E, A, B);
-	T_0_15( 9, B, C, D, E, A);
-	T_0_15(10, A, B, C, D, E);
-	T_0_15(11, E, A, B, C, D);
-	T_0_15(12, D, E, A, B, C);
-	T_0_15(13, C, D, E, A, B);
-	T_0_15(14, B, C, D, E, A);
-	T_0_15(15, A, B, C, D, E);
-
-	/* Round 1 - tail. Input from 512-bit mixing array */
-	T_16_19(16, E, A, B, C, D);
-	T_16_19(17, D, E, A, B, C);
-	T_16_19(18, C, D, E, A, B);
-	T_16_19(19, B, C, D, E, A);
-
-	/* Round 2 */
-	T_20_39(20, A, B, C, D, E);
-	T_20_39(21, E, A, B, C, D);
-	T_20_39(22, D, E, A, B, C);
-	T_20_39(23, C, D, E, A, B);
-	T_20_39(24, B, C, D, E, A);
-	T_20_39(25, A, B, C, D, E);
-	T_20_39(26, E, A, B, C, D);
-	T_20_39(27, D, E, A, B, C);
-	T_20_39(28, C, D, E, A, B);
-	T_20_39(29, B, C, D, E, A);
-	T_20_39(30, A, B, C, D, E);
-	T_20_39(31, E, A, B, C, D);
-	T_20_39(32, D, E, A, B, C);
-	T_20_39(33, C, D, E, A, B);
-	T_20_39(34, B, C, D, E, A);
-	T_20_39(35, A, B, C, D, E);
-	T_20_39(36, E, A, B, C, D);
-	T_20_39(37, D, E, A, B, C);
-	T_20_39(38, C, D, E, A, B);
-	T_20_39(39, B, C, D, E, A);
-
-	/* Round 3 */
-	T_40_59(40, A, B, C, D, E);
-	T_40_59(41, E, A, B, C, D);
-	T_40_59(42, D, E, A, B, C);
-	T_40_59(43, C, D, E, A, B);
-	T_40_59(44, B, C, D, E, A);
-	T_40_59(45, A, B, C, D, E);
-	T_40_59(46, E, A, B, C, D);
-	T_40_59(47, D, E, A, B, C);
-	T_40_59(48, C, D, E, A, B);
-	T_40_59(49, B, C, D, E, A);
-	T_40_59(50, A, B, C, D, E);
-	T_40_59(51, E, A, B, C, D);
-	T_40_59(52, D, E, A, B, C);
-	T_40_59(53, C, D, E, A, B);
-	T_40_59(54, B, C, D, E, A);
-	T_40_59(55, A, B, C, D, E);
-	T_40_59(56, E, A, B, C, D);
-	T_40_59(57, D, E, A, B, C);
-	T_40_59(58, C, D, E, A, B);
-	T_40_59(59, B, C, D, E, A);
-
-	/* Round 4 */
-	T_60_79(60, A, B, C, D, E);
-	T_60_79(61, E, A, B, C, D);
-	T_60_79(62, D, E, A, B, C);
-	T_60_79(63, C, D, E, A, B);
-	T_60_79(64, B, C, D, E, A);
-	T_60_79(65, A, B, C, D, E);
-	T_60_79(66, E, A, B, C, D);
-	T_60_79(67, D, E, A, B, C);
-	T_60_79(68, C, D, E, A, B);
-	T_60_79(69, B, C, D, E, A);
-	T_60_79(70, A, B, C, D, E);
-	T_60_79(71, E, A, B, C, D);
-	T_60_79(72, D, E, A, B, C);
-	T_60_79(73, C, D, E, A, B);
-	T_60_79(74, B, C, D, E, A);
-	T_60_79(75, A, B, C, D, E);
-	T_60_79(76, E, A, B, C, D);
-	T_60_79(77, D, E, A, B, C);
-	T_60_79(78, C, D, E, A, B);
-	T_60_79(79, B, C, D, E, A);
-
-	digest[0] += A;
-	digest[1] += B;
-	digest[2] += C;
-	digest[3] += D;
-	digest[4] += E;
-}
-EXPORT_SYMBOL(sha1_transform);
-
-/**
- * sha1_init - initialize the vectors for a SHA1 digest
- * @buf: vector to initialize
- */
-void sha1_init(__u32 *buf)
-{
-	buf[0] = 0x67452301;
-	buf[1] = 0xefcdab89;
-	buf[2] = 0x98badcfe;
-	buf[3] = 0x10325476;
-	buf[4] = 0xc3d2e1f0;
-}
-EXPORT_SYMBOL(sha1_init);
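With lib/sha1.c gone, the advice in the deleted header comment becomes the
only remaining route: kernel code that still genuinely needs SHA-1 should
go through the crypto API rather than call the compression function
directly. A rough sketch of a one-shot digest via crypto_shash (the
function name is illustrative, not from this series; real callers would
typically cache the tfm rather than allocate it per call, and allocating
"sha1" assumes CONFIG_CRYPTO_SHA1 is enabled):

#include <crypto/hash.h>
#include <crypto/sha1.h>
#include <linux/err.h>

/* One-shot SHA-1 of @data via crypto_shash, as the removed comment in
 * include/crypto/sha1.h recommends; a sketch, not a real in-tree user. */
static int sha1_digest_example(const u8 *data, unsigned int len,
			       u8 out[SHA1_DIGEST_SIZE])
{
	struct crypto_shash *tfm;
	int err;

	tfm = crypto_alloc_shash("sha1", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	{
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		err = crypto_shash_digest(desc, data, len, out);
		shash_desc_zero(desc);
	}

	crypto_free_shash(tfm);
	return err;
}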