From patchwork Sat Feb 8 02:49:06 2025
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13966241
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-kernel@vger.kernel.org
Cc: linux-crypto@vger.kernel.org, Ard Biesheuvel, Nathan Chancellor
Subject: [PATCH v2 1/6] mips/crc32: remove unused enums
Date: Fri, 7 Feb 2025 18:49:06 -0800
Message-ID: <20250208024911.14936-2-ebiggers@kernel.org>
In-Reply-To: <20250208024911.14936-1-ebiggers@kernel.org>
References: <20250208024911.14936-1-ebiggers@kernel.org>

Remove enum crc_op_size and enum crc_type, since they are never actually
used. Tokens with the names of the enum values do appear in the file, but
they are only used for token concatenation with the preprocessor. This
prevents a conflict with the addition of crc32c() to linux/crc32.h.
Reported-by: Nathan Chancellor
Closes: https://lore.kernel.org/r/20250207224233.GA1261167@ax162
Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 arch/mips/lib/crc32-mips.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/arch/mips/lib/crc32-mips.c b/arch/mips/lib/crc32-mips.c
index 083e5d693a169..100ac586aadb2 100644
--- a/arch/mips/lib/crc32-mips.c
+++ b/arch/mips/lib/crc32-mips.c
@@ -14,19 +14,10 @@ #include #include #include #include -enum crc_op_size { - b, h, w, d, -}; - -enum crc_type { - crc32, - crc32c, -}; - #ifndef TOOLCHAIN_SUPPORTS_CRC #define _ASM_SET_CRC(OP, SZ, TYPE) \ _ASM_MACRO_3R(OP, rt, rs, rt2, \ ".ifnc \\rt, \\rt2\n\t" \ ".error \"invalid operands \\\"" #OP " \\rt,\\rs,\\rt2\\\"\"\n\t" \
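A hypothetical, self-contained illustration of the token-concatenation pattern the commit message refers to (this is not the kernel's actual arch/mips/lib/crc32-mips.c macros; step_crc32w() and step_crc32cw() are made-up stand-ins): the type and size arguments are consumed entirely by ## pasting, so bare tokens such as "w" or "crc32c" never need an enum or any other C-level definition.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for per-width CRC step routines; purely illustrative. */
static uint32_t step_crc32w(uint32_t crc, uint32_t v)  { return (crc >> 1) ^ v; }
static uint32_t step_crc32cw(uint32_t crc, uint32_t v) { return (crc << 1) ^ v; }

/*
 * The first two arguments are only ever pasted into a larger identifier,
 * so "crc32", "crc32c" and "w" exist purely as preprocessor tokens.
 */
#define CRC_STEP(type, size, crc, value) \
	((crc) = step_##type##size((crc), (value)))

int main(void)
{
	uint32_t crc = 0;

	CRC_STEP(crc32,  w, crc, 0x12345678u);	/* expands to step_crc32w()  */
	CRC_STEP(crc32c, w, crc, 0x9abcdef0u);	/* expands to step_crc32cw() */
	printf("%08x\n", crc);
	return 0;
}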
From patchwork Sat Feb 8 02:49:07 2025
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13966243
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-kernel@vger.kernel.org
Cc: linux-crypto@vger.kernel.org, Ard Biesheuvel
Subject: [PATCH v2 2/6] lib/crc32: use void pointer for data
Date: Fri, 7 Feb 2025 18:49:07 -0800
Message-ID: <20250208024911.14936-3-ebiggers@kernel.org>
In-Reply-To: <20250208024911.14936-1-ebiggers@kernel.org>
References: <20250208024911.14936-1-ebiggers@kernel.org>

Update crc32_le(), crc32_be(), and __crc32c_le() to take the data as a
'const void *' instead of 'const u8 *'. This makes them slightly easier
to use, as it can eliminate the need for casts in the calling code. It's
the only pointer argument, so there is no possibility for confusion with
another pointer argument.

Also, some of the CRC library functions, for example crc32c() and
crc64_be(), already used 'const void *'. Let's standardize on that, as it
seems like a better choice.

The underlying base and arch functions continue to use 'const u8 *', as
that is often more convenient for the implementation.

Reviewed-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 include/linux/crc32.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/crc32.h b/include/linux/crc32.h
index e9bd40056687a..e70977014cfdc 100644
--- a/include/linux/crc32.h
+++ b/include/linux/crc32.h
@@ -13,26 +13,26 @@ u32 __pure crc32_le_base(u32 crc, const u8 *p, size_t len); u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len); u32 __pure crc32_be_base(u32 crc, const u8 *p, size_t len); u32 __pure crc32c_le_arch(u32 crc, const u8 *p, size_t len); u32 __pure crc32c_le_base(u32 crc, const u8 *p, size_t len); -static inline u32 __pure crc32_le(u32 crc, const u8 *p, size_t len) +static inline u32 __pure crc32_le(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32_le_arch(crc, p, len); return crc32_le_base(crc, p, len); } -static inline u32 __pure crc32_be(u32 crc, const u8 *p, size_t len) +static inline u32 __pure crc32_be(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32_be_arch(crc, p, len); return crc32_be_base(crc, p, len); } /* TODO: leading underscores should be dropped once callers have been updated */ -static inline u32 __pure __crc32c_le(u32 crc, const u8 *p, size_t len) +static inline u32 __pure __crc32c_le(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32c_le_arch(crc, p, len); return crc32c_le_base(crc, p, len); }
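A hypothetical caller (not taken from the tree, assuming <linux/crc32.h>) showing the cast elimination described above: with 'const u8 *' parameters the struct pointer had to be cast, with 'const void *' it converts implicitly.

#include <linux/crc32.h>
#include <linux/types.h>

struct foo_hdr {
	__le32 magic;
	__le32 payload_len;
};

static u32 foo_hdr_csum(const struct foo_hdr *hdr)
{
	/* Previously: crc32_le(~0, (const u8 *)hdr, sizeof(*hdr)); */
	return crc32_le(~0, hdr, sizeof(*hdr));
}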
From patchwork Sat Feb 8 02:49:08 2025
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13966242
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-kernel@vger.kernel.org
Cc: linux-crypto@vger.kernel.org, Ard Biesheuvel
Subject: [PATCH v2 3/6] lib/crc32: don't bother with pure and const function attributes
Date: Fri, 7 Feb 2025 18:49:08 -0800
Message-ID: <20250208024911.14936-4-ebiggers@kernel.org>
In-Reply-To: <20250208024911.14936-1-ebiggers@kernel.org>
References: <20250208024911.14936-1-ebiggers@kernel.org>

Drop the use of __pure and __attribute_const__ from the CRC32 library
functions that had them. Both of these are unusual optimizations that
don't help properly written code. They seem more likely to cause problems
than have any real benefit.

Reviewed-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 arch/arm64/lib/crc32-glue.c  |  6 +++---
 arch/riscv/lib/crc32-riscv.c | 13 ++++++-------
 include/linux/crc32.h        | 22 +++++++++++-----------
 lib/crc32.c                  | 15 +++++++--------
 4 files changed, 27 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/lib/crc32-glue.c b/arch/arm64/lib/crc32-glue.c
index 15c4c9db573ec..265fbf36914b6 100644
--- a/arch/arm64/lib/crc32-glue.c
+++ b/arch/arm64/lib/crc32-glue.c
@@ -20,11 +20,11 @@ asmlinkage u32 crc32_be_arm64(u32 crc, unsigned char const *p, size_t len); asmlinkage u32 crc32_le_arm64_4way(u32 crc, unsigned char const *p, size_t len); asmlinkage u32 crc32c_le_arm64_4way(u32 crc, unsigned char const *p, size_t len); asmlinkage u32 crc32_be_arm64_4way(u32 crc, unsigned char const *p, size_t len); -u32 __pure crc32_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) { if (!alternative_has_cap_likely(ARM64_HAS_CRC32)) return crc32_le_base(crc, p, len); if (len >= min_len && cpu_have_named_feature(PMULL) && crypto_simd_usable()) {
@@ -41,11 +41,11 @@ u32 __pure crc32_le_arch(u32 crc, const u8 *p, size_t len) return crc32_le_arm64(crc, p, len); } EXPORT_SYMBOL(crc32_le_arch); -u32 __pure crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) { if (!alternative_has_cap_likely(ARM64_HAS_CRC32)) return crc32c_le_base(crc, p, len); if (len >= min_len && cpu_have_named_feature(PMULL) && crypto_simd_usable()) {
@@ -62,11 +62,11 @@ u32 __pure crc32c_le_arch(u32 crc, const u8 *p, size_t len) return crc32c_le_arm64(crc, p, len); } EXPORT_SYMBOL(crc32c_le_arch); -u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len) +u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { if (!alternative_has_cap_likely(ARM64_HAS_CRC32)) return crc32_be_base(crc, p, len); if (len >= min_len && cpu_have_named_feature(PMULL) &&
crypto_simd_usable()) { diff --git a/arch/riscv/lib/crc32-riscv.c b/arch/riscv/lib/crc32-riscv.c index 53d56ab422c72..a50f8e010417d 100644 --- a/arch/riscv/lib/crc32-riscv.c +++ b/arch/riscv/lib/crc32-riscv.c @@ -173,14 +173,13 @@ static inline u32 crc32_le_unaligned(u32 crc, unsigned char const *p, crc ^= crc_low; return crc; } -static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p, - size_t len, u32 poly, - unsigned long poly_qt, - fallback crc_fb) +static inline u32 crc32_le_generic(u32 crc, unsigned char const *p, size_t len, + u32 poly, unsigned long poly_qt, + fallback crc_fb) { size_t offset, head_len, tail_len; unsigned long const *p_ul; unsigned long s; @@ -216,18 +215,18 @@ static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p, legacy: return crc_fb(crc, p, len); } -u32 __pure crc32_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) { return crc32_le_generic(crc, p, len, CRC32_POLY_LE, CRC32_POLY_QT_LE, crc32_le_base); } EXPORT_SYMBOL(crc32_le_arch); -u32 __pure crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) { return crc32_le_generic(crc, p, len, CRC32C_POLY_LE, CRC32C_POLY_QT_LE, crc32c_le_base); } EXPORT_SYMBOL(crc32c_le_arch); @@ -254,11 +253,11 @@ static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p, crc ^= crc_low; return crc; } -u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len) +u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { size_t offset, head_len, tail_len; unsigned long const *p_ul; unsigned long s; diff --git a/include/linux/crc32.h b/include/linux/crc32.h index e70977014cfdc..61a7ec29d6338 100644 --- a/include/linux/crc32.h +++ b/include/linux/crc32.h @@ -6,33 +6,33 @@ #define _LINUX_CRC32_H #include #include -u32 __pure crc32_le_arch(u32 crc, const u8 *p, size_t len); -u32 __pure crc32_le_base(u32 crc, const u8 *p, size_t len); -u32 __pure crc32_be_arch(u32 crc, const u8 *p, size_t len); -u32 __pure crc32_be_base(u32 crc, const u8 *p, size_t len); -u32 __pure crc32c_le_arch(u32 crc, const u8 *p, size_t len); -u32 __pure crc32c_le_base(u32 crc, const u8 *p, size_t len); +u32 crc32_le_arch(u32 crc, const u8 *p, size_t len); +u32 crc32_le_base(u32 crc, const u8 *p, size_t len); +u32 crc32_be_arch(u32 crc, const u8 *p, size_t len); +u32 crc32_be_base(u32 crc, const u8 *p, size_t len); +u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len); +u32 crc32c_le_base(u32 crc, const u8 *p, size_t len); -static inline u32 __pure crc32_le(u32 crc, const void *p, size_t len) +static inline u32 crc32_le(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32_le_arch(crc, p, len); return crc32_le_base(crc, p, len); } -static inline u32 __pure crc32_be(u32 crc, const void *p, size_t len) +static inline u32 crc32_be(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32_be_arch(crc, p, len); return crc32_be_base(crc, p, len); } /* TODO: leading underscores should be dropped once callers have been updated */ -static inline u32 __pure __crc32c_le(u32 crc, const void *p, size_t len) +static inline u32 __crc32c_le(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32c_le_arch(crc, p, len); return crc32c_le_base(crc, p, len); } @@ -68,11 +68,11 @@ static inline u32 crc32_optimizations(void) { return 0; } * the crc32_le() value of seq_full, then crc_full == * crc32_le_combine(crc1, crc2, len2) when crc_full was seeded * with the same 
initializer as crc1, and crc2 seed was 0. See * also crc32_combine_test(). */ -u32 __attribute_const__ crc32_le_shift(u32 crc, size_t len); +u32 crc32_le_shift(u32 crc, size_t len); static inline u32 crc32_le_combine(u32 crc1, u32 crc2, size_t len2) { return crc32_le_shift(crc1, len2) ^ crc2; } @@ -93,11 +93,11 @@ static inline u32 crc32_le_combine(u32 crc1, u32 crc2, size_t len2) * the __crc32c_le() value of seq_full, then crc_full == * __crc32c_le_combine(crc1, crc2, len2) when crc_full was * seeded with the same initializer as crc1, and crc2 seed * was 0. See also crc32c_combine_test(). */ -u32 __attribute_const__ __crc32c_le_shift(u32 crc, size_t len); +u32 __crc32c_le_shift(u32 crc, size_t len); static inline u32 __crc32c_le_combine(u32 crc1, u32 crc2, size_t len2) { return __crc32c_le_shift(crc1, len2) ^ crc2; } diff --git a/lib/crc32.c b/lib/crc32.c index ede6131f66fc4..3c080cda5e1c9 100644 --- a/lib/crc32.c +++ b/lib/crc32.c @@ -35,19 +35,19 @@ MODULE_AUTHOR("Matt Domsch "); MODULE_DESCRIPTION("Various CRC32 calculations"); MODULE_LICENSE("GPL"); -u32 __pure crc32_le_base(u32 crc, const u8 *p, size_t len) +u32 crc32_le_base(u32 crc, const u8 *p, size_t len) { while (len--) crc = (crc >> 8) ^ crc32table_le[(crc & 255) ^ *p++]; return crc; } EXPORT_SYMBOL(crc32_le_base); -u32 __pure crc32c_le_base(u32 crc, const u8 *p, size_t len) +u32 crc32c_le_base(u32 crc, const u8 *p, size_t len) { while (len--) crc = (crc >> 8) ^ crc32ctable_le[(crc & 255) ^ *p++]; return crc; } @@ -56,11 +56,11 @@ EXPORT_SYMBOL(crc32c_le_base); /* * This multiplies the polynomials x and y modulo the given modulus. * This follows the "little-endian" CRC convention that the lsbit * represents the highest power of x, and the msbit represents x^0. */ -static u32 __attribute_const__ gf2_multiply(u32 x, u32 y, u32 modulus) +static u32 gf2_multiply(u32 x, u32 y, u32 modulus) { u32 product = x & 1 ? y : 0; int i; for (i = 0; i < 31; i++) { @@ -82,12 +82,11 @@ static u32 __attribute_const__ gf2_multiply(u32 x, u32 y, u32 modulus) * over separate ranges of a buffer, then summing them. * This shifts the given CRC by 8*len bits (i.e. produces the same effect * as appending len bytes of zero to the data), in time proportional * to log(len). 
*/ -static u32 __attribute_const__ crc32_generic_shift(u32 crc, size_t len, - u32 polynomial) +static u32 crc32_generic_shift(u32 crc, size_t len, u32 polynomial) { u32 power = polynomial; /* CRC of x^32 */ int i; /* Shift up to 32 bits in the simple linear way */
@@ -112,23 +111,23 @@ static u32 __attribute_const__ crc32_generic_shift(u32 crc, size_t len, } return crc; } -u32 __attribute_const__ crc32_le_shift(u32 crc, size_t len) +u32 crc32_le_shift(u32 crc, size_t len) { return crc32_generic_shift(crc, len, CRC32_POLY_LE); } -u32 __attribute_const__ __crc32c_le_shift(u32 crc, size_t len) +u32 __crc32c_le_shift(u32 crc, size_t len) { return crc32_generic_shift(crc, len, CRC32C_POLY_LE); } EXPORT_SYMBOL(crc32_le_shift); EXPORT_SYMBOL(__crc32c_le_shift); -u32 __pure crc32_be_base(u32 crc, const u8 *p, size_t len) +u32 crc32_be_base(u32 crc, const u8 *p, size_t len) { while (len--) crc = (crc << 8) ^ crc32table_be[(crc >> 24) ^ *p++]; return crc; }

From patchwork Sat Feb 8 02:49:09 2025
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13966244
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-kernel@vger.kernel.org
Cc: linux-crypto@vger.kernel.org, Ard Biesheuvel
Subject: [PATCH v2 4/6] lib/crc32: standardize on crc32c() name for Castagnoli CRC32
Date: Fri, 7 Feb 2025 18:49:09 -0800
Message-ID: <20250208024911.14936-5-ebiggers@kernel.org>
X-Mailer: git-send-email
2.48.1 In-Reply-To: <20250208024911.14936-1-ebiggers@kernel.org> References: <20250208024911.14936-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers For historical reasons, the Castagnoli CRC32 is available under 3 names: crc32c(), crc32c_le(), and __crc32c_le(). Most callers use crc32c(). The more verbose versions are not really warranted; there is no "_be" version that the "_le" version needs to be differentiated from, and the leading underscores are pointless. Therefore, let's standardize on just crc32c(). Remove the other two names, and update callers accordingly. Specifically, the new crc32c() comes from what was previously __crc32c_le(), so compared to the old crc32c() it now takes a size_t length rather than unsigned int, and it's now in linux/crc32.h instead of just linux/crc32c.h (which includes linux/crc32.h). Later patches will also rename __crc32c_le_combine(), crc32c_le_base(), and crc32c_le_arch(). Reviewed-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- crypto/crc32c_generic.c | 4 +-- drivers/crypto/stm32/stm32-crc32.c | 2 +- drivers/md/raid5-cache.c | 31 +++++++++---------- drivers/md/raid5-ppl.c | 16 +++++----- .../net/ethernet/broadcom/bnx2x/bnx2x_sp.c | 2 +- drivers/thunderbolt/ctl.c | 2 +- drivers/thunderbolt/eeprom.c | 2 +- include/linux/crc32.h | 5 ++- include/linux/crc32c.h | 8 ----- include/net/sctp/checksum.h | 3 -- sound/soc/codecs/aw88395/aw88395_device.c | 2 +- 11 files changed, 32 insertions(+), 45 deletions(-) diff --git a/crypto/crc32c_generic.c b/crypto/crc32c_generic.c index 985da981d6e2a..770533d19b813 100644 --- a/crypto/crc32c_generic.c +++ b/crypto/crc32c_generic.c @@ -92,11 +92,11 @@ static int chksum_update(struct shash_desc *desc, const u8 *data, static int chksum_update_arch(struct shash_desc *desc, const u8 *data, unsigned int length) { struct chksum_desc_ctx *ctx = shash_desc_ctx(desc); - ctx->crc = __crc32c_le(ctx->crc, data, length); + ctx->crc = crc32c(ctx->crc, data, length); return 0; } static int chksum_final(struct shash_desc *desc, u8 *out) { @@ -113,11 +113,11 @@ static int __chksum_finup(u32 *crcp, const u8 *data, unsigned int len, u8 *out) } static int __chksum_finup_arch(u32 *crcp, const u8 *data, unsigned int len, u8 *out) { - put_unaligned_le32(~__crc32c_le(*crcp, data, len), out); + put_unaligned_le32(~crc32c(*crcp, data, len), out); return 0; } static int chksum_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c index de4d0402f1339..fd29785a3ecf3 100644 --- a/drivers/crypto/stm32/stm32-crc32.c +++ b/drivers/crypto/stm32/stm32-crc32.c @@ -160,11 +160,11 @@ static int burst_update(struct shash_desc *desc, const u8 *d8, if (!spin_trylock(&crc->lock)) { /* Hardware is busy, calculate crc32 by software */ if (mctx->poly == CRC32_POLY_LE) ctx->partial = crc32_le(ctx->partial, d8, length); else - ctx->partial = __crc32c_le(ctx->partial, d8, length); + ctx->partial = crc32c(ctx->partial, d8, length); goto pm_out; } /* diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c index e530271cb86bb..ba768ca7f4229 100644 --- a/drivers/md/raid5-cache.c +++ b/drivers/md/raid5-cache.c @@ -712,11 +712,11 @@ static void r5l_submit_current_io(struct r5l_log *log) if (!io) return; block = page_address(io->meta_page); block->meta_size = cpu_to_le32(io->meta_offset); - crc = crc32c_le(log->uuid_checksum, block, PAGE_SIZE); + 
crc = crc32c(log->uuid_checksum, block, PAGE_SIZE); block->checksum = cpu_to_le32(crc); log->current_io = NULL; spin_lock_irqsave(&log->io_list_lock, flags); if (io->has_flush || io->has_fua) { @@ -1018,12 +1018,12 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh) write_disks++; /* checksum is already calculated in last run */ if (test_bit(STRIPE_LOG_TRAPPED, &sh->state)) continue; addr = kmap_local_page(sh->dev[i].page); - sh->dev[i].log_checksum = crc32c_le(log->uuid_checksum, - addr, PAGE_SIZE); + sh->dev[i].log_checksum = crc32c(log->uuid_checksum, + addr, PAGE_SIZE); kunmap_local(addr); } parity_pages = 1 + !!(sh->qd_idx >= 0); data_pages = write_disks - parity_pages; @@ -1739,11 +1739,11 @@ static int r5l_recovery_read_meta_block(struct r5l_log *log, le64_to_cpu(mb->seq) != ctx->seq || mb->version != R5LOG_VERSION || le64_to_cpu(mb->position) != ctx->pos) return -EINVAL; - crc = crc32c_le(log->uuid_checksum, mb, PAGE_SIZE); + crc = crc32c(log->uuid_checksum, mb, PAGE_SIZE); if (stored_crc != crc) return -EINVAL; if (le32_to_cpu(mb->meta_size) > PAGE_SIZE) return -EINVAL; @@ -1778,12 +1778,11 @@ static int r5l_log_write_empty_meta_block(struct r5l_log *log, sector_t pos, page = alloc_page(GFP_KERNEL); if (!page) return -ENOMEM; r5l_recovery_create_empty_meta_block(log, page, pos, seq); mb = page_address(page); - mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum, - mb, PAGE_SIZE)); + mb->checksum = cpu_to_le32(crc32c(log->uuid_checksum, mb, PAGE_SIZE)); if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE | REQ_SYNC | REQ_FUA, false)) { __free_page(page); return -EIO; } @@ -1974,11 +1973,11 @@ r5l_recovery_verify_data_checksum(struct r5l_log *log, void *addr; u32 checksum; r5l_recovery_read_page(log, ctx, page, log_offset); addr = kmap_local_page(page); - checksum = crc32c_le(log->uuid_checksum, addr, PAGE_SIZE); + checksum = crc32c(log->uuid_checksum, addr, PAGE_SIZE); kunmap_local(addr); return (le32_to_cpu(log_checksum) == checksum) ? 
0 : -EINVAL; } /* @@ -2377,12 +2376,12 @@ r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log, payload->size = cpu_to_le32(BLOCK_SECTORS); payload->location = cpu_to_le64( raid5_compute_blocknr(sh, i, 0)); addr = kmap_local_page(dev->page); payload->checksum[0] = cpu_to_le32( - crc32c_le(log->uuid_checksum, addr, - PAGE_SIZE)); + crc32c(log->uuid_checksum, addr, + PAGE_SIZE)); kunmap_local(addr); sync_page_io(log->rdev, write_pos, PAGE_SIZE, dev->page, REQ_OP_WRITE, false); write_pos = r5l_ring_add(log, write_pos, BLOCK_SECTORS); @@ -2390,12 +2389,12 @@ r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log, sizeof(struct r5l_payload_data_parity); } } mb->meta_size = cpu_to_le32(offset); - mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum, - mb, PAGE_SIZE)); + mb->checksum = cpu_to_le32(crc32c(log->uuid_checksum, + mb, PAGE_SIZE)); sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page, REQ_OP_WRITE | REQ_SYNC | REQ_FUA, false); sh->log_start = ctx->pos; list_add_tail(&sh->r5c, &log->stripe_in_journal_list); atomic_inc(&log->stripe_in_journal_count); @@ -2883,12 +2882,12 @@ int r5c_cache_data(struct r5l_log *log, struct stripe_head *sh) void *addr; if (!test_bit(R5_Wantwrite, &sh->dev[i].flags)) continue; addr = kmap_local_page(sh->dev[i].page); - sh->dev[i].log_checksum = crc32c_le(log->uuid_checksum, - addr, PAGE_SIZE); + sh->dev[i].log_checksum = crc32c(log->uuid_checksum, + addr, PAGE_SIZE); kunmap_local(addr); pages++; } WARN_ON(pages == 0); @@ -2967,11 +2966,11 @@ static int r5l_load_log(struct r5l_log *log) create_super = true; goto create; } stored_crc = le32_to_cpu(mb->checksum); mb->checksum = 0; - expected_crc = crc32c_le(log->uuid_checksum, mb, PAGE_SIZE); + expected_crc = crc32c(log->uuid_checksum, mb, PAGE_SIZE); if (stored_crc != expected_crc) { create_super = true; goto create; } if (le64_to_cpu(mb->position) != cp) { @@ -3075,12 +3074,12 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev) log = kzalloc(sizeof(*log), GFP_KERNEL); if (!log) return -ENOMEM; log->rdev = rdev; log->need_cache_flush = bdev_write_cache(rdev->bdev); - log->uuid_checksum = crc32c_le(~0, rdev->mddev->uuid, - sizeof(rdev->mddev->uuid)); + log->uuid_checksum = crc32c(~0, rdev->mddev->uuid, + sizeof(rdev->mddev->uuid)); mutex_init(&log->io_mutex); spin_lock_init(&log->io_list_lock); INIT_LIST_HEAD(&log->running_ios); diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c index 37c4da5311ca7..c0fb335311aa6 100644 --- a/drivers/md/raid5-ppl.c +++ b/drivers/md/raid5-ppl.c @@ -344,13 +344,13 @@ static int ppl_log_stripe(struct ppl_log *log, struct stripe_head *sh) /* don't write any PP if full stripe write */ if (!test_bit(STRIPE_FULL_WRITE, &sh->state)) { le32_add_cpu(&e->pp_size, PAGE_SIZE); io->pp_size += PAGE_SIZE; - e->checksum = cpu_to_le32(crc32c_le(le32_to_cpu(e->checksum), - page_address(sh->ppl_page), - PAGE_SIZE)); + e->checksum = cpu_to_le32(crc32c(le32_to_cpu(e->checksum), + page_address(sh->ppl_page), + PAGE_SIZE)); } list_add_tail(&sh->log_list, &io->stripe_list); atomic_inc(&io->pending_stripes); sh->ppl_io = io; @@ -452,11 +452,11 @@ static void ppl_submit_iounit(struct ppl_io_unit *io) ilog2(ppl_conf->block_size >> 9)); e->checksum = cpu_to_le32(~le32_to_cpu(e->checksum)); } pplhdr->entries_count = cpu_to_le32(io->entries_count); - pplhdr->checksum = cpu_to_le32(~crc32c_le(~0, pplhdr, PPL_HEADER_SIZE)); + pplhdr->checksum = cpu_to_le32(~crc32c(~0, pplhdr, PPL_HEADER_SIZE)); /* Rewind the buffer if current PPL is larger then remaining space */ if 
(log->use_multippl && log->rdev->ppl.sector + log->rdev->ppl.size - log->next_io_sector < (PPL_HEADER_SIZE + io->pp_size) >> 9) @@ -996,11 +996,11 @@ static int ppl_recover(struct ppl_log *log, struct ppl_header *pplhdr, md_error(mddev, rdev); ret = -EIO; goto out; } - crc = crc32c_le(crc, page_address(page), s); + crc = crc32c(crc, page_address(page), s); pp_size -= s; sector += s >> 9; } @@ -1050,11 +1050,11 @@ static int ppl_write_empty_header(struct ppl_log *log) /* zero out PPL space to avoid collision with old PPLs */ blkdev_issue_zeroout(rdev->bdev, rdev->ppl.sector, log->rdev->ppl.size, GFP_NOIO, 0); memset(pplhdr->reserved, 0xff, PPL_HDR_RESERVED); pplhdr->signature = cpu_to_le32(log->ppl_conf->signature); - pplhdr->checksum = cpu_to_le32(~crc32c_le(~0, pplhdr, PAGE_SIZE)); + pplhdr->checksum = cpu_to_le32(~crc32c(~0, pplhdr, PAGE_SIZE)); if (!sync_page_io(rdev, rdev->ppl.sector - rdev->data_offset, PPL_HEADER_SIZE, page, REQ_OP_WRITE | REQ_SYNC | REQ_FUA, false)) { md_error(rdev->mddev, rdev); @@ -1104,11 +1104,11 @@ static int ppl_load_distributed(struct ppl_log *log) pplhdr = page_address(page); /* check header validity */ crc_stored = le32_to_cpu(pplhdr->checksum); pplhdr->checksum = 0; - crc = ~crc32c_le(~0, pplhdr, PAGE_SIZE); + crc = ~crc32c(~0, pplhdr, PAGE_SIZE); if (crc_stored != crc) { pr_debug("%s: ppl header crc does not match: stored: 0x%x calculated: 0x%x (offset: %llu)\n", __func__, crc_stored, crc, (unsigned long long)pplhdr_offset); @@ -1388,11 +1388,11 @@ int ppl_init_log(struct r5conf *conf) atomic64_set(&ppl_conf->seq, 0); INIT_LIST_HEAD(&ppl_conf->no_mem_stripes); spin_lock_init(&ppl_conf->no_mem_stripes_lock); if (!mddev->external) { - ppl_conf->signature = ~crc32c_le(~0, mddev->uuid, sizeof(mddev->uuid)); + ppl_conf->signature = ~crc32c(~0, mddev->uuid, sizeof(mddev->uuid)); ppl_conf->block_size = 512; } else { ppl_conf->block_size = queue_logical_block_size(mddev->gendisk->queue); } diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c index 8e04552d2216c..02c8213915a5d 100644 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c @@ -2591,11 +2591,11 @@ void bnx2x_init_rx_mode_obj(struct bnx2x *bp, } /********************* Multicast verbs: SET, CLEAR ****************************/ static inline u8 bnx2x_mcast_bin_from_mac(u8 *mac) { - return (crc32c_le(0, mac, ETH_ALEN) >> 24) & 0xff; + return (crc32c(0, mac, ETH_ALEN) >> 24) & 0xff; } struct bnx2x_mcast_mac_elem { struct list_head link; u8 mac[ETH_ALEN]; diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c index dc1f456736dc4..cd15e84c47f47 100644 --- a/drivers/thunderbolt/ctl.c +++ b/drivers/thunderbolt/ctl.c @@ -310,11 +310,11 @@ static void tb_cfg_print_error(struct tb_ctl *ctl, enum tb_cfg_space space, } } static __be32 tb_crc(const void *data, size_t len) { - return cpu_to_be32(~__crc32c_le(~0, data, len)); + return cpu_to_be32(~crc32c(~0, data, len)); } static void tb_ctl_pkg_free(struct ctl_pkg *pkg) { if (pkg) { diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c index 9c1d65d265531..e66183a72cf94 100644 --- a/drivers/thunderbolt/eeprom.c +++ b/drivers/thunderbolt/eeprom.c @@ -209,11 +209,11 @@ static u8 tb_crc8(u8 *data, int len) return val; } static u32 tb_crc32(void *data, size_t len) { - return ~__crc32c_le(~0, data, len); + return ~crc32c(~0, data, len); } #define TB_DROM_DATA_START 13 #define TB_DROM_HEADER_SIZE 22 #define USB4_DROM_HEADER_SIZE 16 diff 
--git a/include/linux/crc32.h b/include/linux/crc32.h index 61a7ec29d6338..bc39b023eac0f 100644 --- a/include/linux/crc32.h +++ b/include/linux/crc32.h @@ -27,12 +27,11 @@ static inline u32 crc32_be(u32 crc, const void *p, size_t len) if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32_be_arch(crc, p, len); return crc32_be_base(crc, p, len); } -/* TODO: leading underscores should be dropped once callers have been updated */ -static inline u32 __crc32c_le(u32 crc, const void *p, size_t len) +static inline u32 crc32c(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32c_le_arch(crc, p, len); return crc32c_le_base(crc, p, len); } @@ -43,11 +42,11 @@ static inline u32 __crc32c_le(u32 crc, const void *p, size_t len) * IS_ENABLED(CONFIG_CRC32_ARCH) it takes into account the different CRC32 * variants and also whether any needed CPU features are available at runtime. */ #define CRC32_LE_OPTIMIZATION BIT(0) /* crc32_le() is optimized */ #define CRC32_BE_OPTIMIZATION BIT(1) /* crc32_be() is optimized */ -#define CRC32C_OPTIMIZATION BIT(2) /* __crc32c_le() is optimized */ +#define CRC32C_OPTIMIZATION BIT(2) /* crc32c() is optimized */ #if IS_ENABLED(CONFIG_CRC32_ARCH) u32 crc32_optimizations(void); #else static inline u32 crc32_optimizations(void) { return 0; } #endif diff --git a/include/linux/crc32c.h b/include/linux/crc32c.h index 47eb78003c265..b8cff2f4309a7 100644 --- a/include/linux/crc32c.h +++ b/include/linux/crc32c.h @@ -2,14 +2,6 @@ #ifndef _LINUX_CRC32C_H #define _LINUX_CRC32C_H #include -static inline u32 crc32c(u32 crc, const void *address, unsigned int length) -{ - return __crc32c_le(crc, address, length); -} - -/* This macro exists for backwards-compatibility. */ -#define crc32c_le crc32c - #endif /* _LINUX_CRC32C_H */ diff --git a/include/net/sctp/checksum.h b/include/net/sctp/checksum.h index f514a0aa849ea..93041c970753e 100644 --- a/include/net/sctp/checksum.h +++ b/include/net/sctp/checksum.h @@ -28,13 +28,10 @@ #include #include static inline __wsum sctp_csum_update(const void *buff, int len, __wsum sum) { - /* This uses the crypto implementation of crc32c, which is either - * implemented w/ hardware support or resolves to __crc32c_le(). 
- */ return (__force __wsum)crc32c((__force __u32)sum, buff, len); } static inline __wsum sctp_csum_combine(__wsum csum, __wsum csum2, int offset, int len)
diff --git a/sound/soc/codecs/aw88395/aw88395_device.c b/sound/soc/codecs/aw88395/aw88395_device.c
index 6b333d1c6e946..b7ea8be0d0cb0 100644
--- a/sound/soc/codecs/aw88395/aw88395_device.c
+++ b/sound/soc/codecs/aw88395/aw88395_device.c
@@ -422,11 +422,11 @@ static int aw_dev_dsp_set_crc32(struct aw_device *aw_dev) if (crc_data_len & 0x11) { dev_err(aw_dev->dev, "The crc data len :%d unsupport", crc_data_len); return -EINVAL; } - crc_value = __crc32c_le(0xFFFFFFFF, crc_dsp_cfg->data, crc_data_len) ^ 0xFFFFFFFF; + crc_value = crc32c(0xFFFFFFFF, crc_dsp_cfg->data, crc_data_len) ^ 0xFFFFFFFF; return aw_dev_dsp_write(aw_dev, AW88395_DSP_REG_CRC_ADDR, crc_value, AW88395_DSP_32_DATA); }

From patchwork Sat Feb 8 02:49:10 2025
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 13966245
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-kernel@vger.kernel.org
Cc: linux-crypto@vger.kernel.org, Ard Biesheuvel
Subject: [PATCH v2 5/6] lib/crc32: rename __crc32c_le_combine() to crc32c_combine()
Date: Fri, 7 Feb 2025 18:49:10 -0800
Message-ID: <20250208024911.14936-6-ebiggers@kernel.org>
In-Reply-To: <20250208024911.14936-1-ebiggers@kernel.org>
References: <20250208024911.14936-1-ebiggers@kernel.org>
Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Since the Castagnoli CRC32 is now always just crc32c(), rename __crc32c_le_combine() and __crc32c_le_shift() accordingly. Reviewed-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/infiniband/sw/siw/siw.h | 4 ++-- include/linux/crc32.h | 28 +++++++++++++--------------- include/net/sctp/checksum.h | 4 ++-- lib/crc32.c | 6 +++--- lib/crc_kunit.c | 2 +- 5 files changed, 21 insertions(+), 23 deletions(-) diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h index ea5eee50dc39d..4e692de1da933 100644 --- a/drivers/infiniband/sw/siw/siw.h +++ b/drivers/infiniband/sw/siw/siw.h @@ -674,12 +674,12 @@ static inline __wsum siw_csum_update(const void *buff, int len, __wsum sum) } static inline __wsum siw_csum_combine(__wsum csum, __wsum csum2, int offset, int len) { - return (__force __wsum)__crc32c_le_combine((__force __u32)csum, - (__force __u32)csum2, len); + return (__force __wsum)crc32c_combine((__force __u32)csum, + (__force __u32)csum2, len); } static inline void siw_crc_skb(struct siw_rx_stream *srx, unsigned int len) { const struct skb_checksum_ops siw_cs_ops = { diff --git a/include/linux/crc32.h b/include/linux/crc32.h index bc39b023eac0f..535071964f52f 100644 --- a/include/linux/crc32.h +++ b/include/linux/crc32.h @@ -74,33 +74,31 @@ u32 crc32_le_shift(u32 crc, size_t len); static inline u32 crc32_le_combine(u32 crc1, u32 crc2, size_t len2) { return crc32_le_shift(crc1, len2) ^ crc2; } +u32 crc32c_shift(u32 crc, size_t len); + /** - * __crc32c_le_combine - Combine two crc32c check values into one. For two - * sequences of bytes, seq1 and seq2 with lengths len1 - * and len2, __crc32c_le() check values were calculated - * for each, crc1 and crc2. + * crc32c_combine - Combine two crc32c check values into one. For two sequences + * of bytes, seq1 and seq2 with lengths len1 and len2, crc32c() + * check values were calculated for each, crc1 and crc2. * * @crc1: crc32c of the first block * @crc2: crc32c of the second block * @len2: length of the second block * - * Return: The __crc32c_le() check value of seq1 and seq2 concatenated, - * requiring only crc1, crc2, and len2. Note: If seq_full denotes - * the concatenated memory area of seq1 with seq2, and crc_full - * the __crc32c_le() value of seq_full, then crc_full == - * __crc32c_le_combine(crc1, crc2, len2) when crc_full was - * seeded with the same initializer as crc1, and crc2 seed - * was 0. See also crc32c_combine_test(). + * Return: The crc32c() check value of seq1 and seq2 concatenated, requiring + * only crc1, crc2, and len2. Note: If seq_full denotes the concatenated + * memory area of seq1 with seq2, and crc_full the crc32c() value of + * seq_full, then crc_full == crc32c_combine(crc1, crc2, len2) when + * crc_full was seeded with the same initializer as crc1, and crc2 seed + * was 0. See also crc_combine_test(). 
*/ -u32 __crc32c_le_shift(u32 crc, size_t len); - -static inline u32 __crc32c_le_combine(u32 crc1, u32 crc2, size_t len2) +static inline u32 crc32c_combine(u32 crc1, u32 crc2, size_t len2) { - return __crc32c_le_shift(crc1, len2) ^ crc2; + return crc32c_shift(crc1, len2) ^ crc2; } #define crc32(seed, data, length) crc32_le(seed, (unsigned char const *)(data), length) /* diff --git a/include/net/sctp/checksum.h b/include/net/sctp/checksum.h index 93041c970753e..291465c258102 100644 --- a/include/net/sctp/checksum.h +++ b/include/net/sctp/checksum.h @@ -34,12 +34,12 @@ static inline __wsum sctp_csum_update(const void *buff, int len, __wsum sum) } static inline __wsum sctp_csum_combine(__wsum csum, __wsum csum2, int offset, int len) { - return (__force __wsum)__crc32c_le_combine((__force __u32)csum, - (__force __u32)csum2, len); + return (__force __wsum)crc32c_combine((__force __u32)csum, + (__force __u32)csum2, len); } static const struct skb_checksum_ops sctp_csum_ops = { .update = sctp_csum_update, .combine = sctp_csum_combine, diff --git a/lib/crc32.c b/lib/crc32.c index 3c080cda5e1c9..554ef6827b80d 100644 --- a/lib/crc32.c +++ b/lib/crc32.c @@ -115,17 +115,17 @@ static u32 crc32_generic_shift(u32 crc, size_t len, u32 polynomial) u32 crc32_le_shift(u32 crc, size_t len) { return crc32_generic_shift(crc, len, CRC32_POLY_LE); } +EXPORT_SYMBOL(crc32_le_shift); -u32 __crc32c_le_shift(u32 crc, size_t len) +u32 crc32c_shift(u32 crc, size_t len) { return crc32_generic_shift(crc, len, CRC32C_POLY_LE); } -EXPORT_SYMBOL(crc32_le_shift); -EXPORT_SYMBOL(__crc32c_le_shift); +EXPORT_SYMBOL(crc32c_shift); u32 crc32_be_base(u32 crc, const u8 *p, size_t len) { while (len--) crc = (crc << 8) ^ crc32table_be[(crc >> 24) ^ *p++]; diff --git a/lib/crc_kunit.c b/lib/crc_kunit.c index 1e82fcf9489ef..40b4b41f21847 100644 --- a/lib/crc_kunit.c +++ b/lib/crc_kunit.c @@ -361,11 +361,11 @@ static u64 crc32c_wrapper(u64 crc, const u8 *p, size_t len) return crc32c(crc, p, len); } static u64 crc32c_combine_wrapper(u64 crc1, u64 crc2, size_t len2) { - return __crc32c_le_combine(crc1, crc2, len2); + return crc32c_combine(crc1, crc2, len2); } static const struct crc_variant crc_variant_crc32c = { .bits = 32, .le = true, From patchwork Sat Feb 8 02:49:11 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13966246 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C171D1865EB; Sat, 8 Feb 2025 02:49:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738982964; cv=none; b=a8RyIJn784yKYn3eXQ/NCgMDqhnhxgpKvtwDRfKep20dHQQ+vNd/Matqs7yzvfw//GBQ4mSKzbDssJE7Zekkjy8eGo9TZnGBQYwrks90b6dBgcWVzzXnixESrh4ZsQr0G/t/+aw/Xt0MaVQHuqP2rpC56Y6HFjW6e3r5jjue3k0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738982964; c=relaxed/simple; bh=TGsjrJN8DMO1soH9ja5q3Xtc4TKdVEGfEvOJVNzUTvQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lXkS7jKpjc2L77/exT4o++wIIziusCf487ffWrHhDiZNB4MC9j9wIiUsAaTAuomEY+JEHj1jgUnMYCrj8PAPfwDguFpb8GZUJkREfMU1ZeS0rDbB0lQ1QAQTD5CpZqLkGaVnLrmcJffQBwnacCniZNBdDnhmOfxGVsfJCIraKBk= 
From: Eric Biggers
To: linux-kernel@vger.kernel.org
Cc: linux-crypto@vger.kernel.org, Ard Biesheuvel
Subject: [PATCH v2 6/6] lib/crc32: remove "_le" from crc32c base and arch functions
Date: Fri, 7 Feb 2025 18:49:11 -0800
Message-ID: <20250208024911.14936-7-ebiggers@kernel.org>
In-Reply-To: <20250208024911.14936-1-ebiggers@kernel.org>
References: <20250208024911.14936-1-ebiggers@kernel.org>

Following the standardization on crc32c() as the lib entry point for the
Castagnoli CRC32 instead of the previous mix of crc32c(), crc32c_le(), and
__crc32c_le(), make the same change to the underlying base and arch
functions that implement it.
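A hypothetical caller (not from the tree, assuming <linux/crc32.h>) showing the API as it stands after this series; the seed-and-invert convention mirrors existing users such as the raid5-ppl and thunderbolt code updated in patch 4.

#include <linux/crc32.h>
#include <linux/types.h>

/* Hypothetical one-shot CRC32C of a buffer. */
static u32 foo_buf_csum(const void *buf, size_t len)
{
	/*
	 * crc32c() takes a const void * and a size_t length, and resolves
	 * to crc32c_arch() or crc32c_base() depending on CONFIG_CRC32_ARCH.
	 */
	return ~crc32c(~0, buf, len);
}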
Reviewed-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- arch/arm/lib/crc32-glue.c | 12 ++++++------ arch/arm64/lib/crc32-glue.c | 6 +++--- arch/loongarch/lib/crc32-loongarch.c | 6 +++--- arch/mips/lib/crc32-mips.c | 6 +++--- arch/powerpc/lib/crc32-glue.c | 10 +++++----- arch/riscv/lib/crc32-riscv.c | 6 +++--- arch/s390/lib/crc32-glue.c | 2 +- arch/sparc/lib/crc32_glue.c | 10 +++++----- arch/x86/lib/crc32-glue.c | 6 +++--- crypto/crc32c_generic.c | 4 ++-- include/linux/crc32.h | 8 ++++---- lib/crc32.c | 4 ++-- 12 files changed, 40 insertions(+), 40 deletions(-) diff --git a/arch/arm/lib/crc32-glue.c b/arch/arm/lib/crc32-glue.c index 2c30ba3d80e6a..4340351dbde8c 100644 --- a/arch/arm/lib/crc32-glue.c +++ b/arch/arm/lib/crc32-glue.c @@ -57,39 +57,39 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) } return crc32_le_scalar(crc, p, len); } EXPORT_SYMBOL(crc32_le_arch); -static u32 crc32c_le_scalar(u32 crc, const u8 *p, size_t len) +static u32 crc32c_scalar(u32 crc, const u8 *p, size_t len) { if (static_branch_likely(&have_crc32)) return crc32c_armv8_le(crc, p, len); - return crc32c_le_base(crc, p, len); + return crc32c_base(crc, p, len); } -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { if (len >= PMULL_MIN_LEN + 15 && static_branch_likely(&have_pmull) && crypto_simd_usable()) { size_t n = -(uintptr_t)p & 15; /* align p to 16-byte boundary */ if (n) { - crc = crc32c_le_scalar(crc, p, n); + crc = crc32c_scalar(crc, p, n); p += n; len -= n; } n = round_down(len, 16); kernel_neon_begin(); crc = crc32c_pmull_le(p, n, crc); kernel_neon_end(); p += n; len -= n; } - return crc32c_le_scalar(crc, p, len); + return crc32c_scalar(crc, p, len); } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { return crc32_be_base(crc, p, len); } diff --git a/arch/arm64/lib/crc32-glue.c b/arch/arm64/lib/crc32-glue.c index 265fbf36914b6..ed3acd71178f8 100644 --- a/arch/arm64/lib/crc32-glue.c +++ b/arch/arm64/lib/crc32-glue.c @@ -41,14 +41,14 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) return crc32_le_arm64(crc, p, len); } EXPORT_SYMBOL(crc32_le_arch); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { if (!alternative_has_cap_likely(ARM64_HAS_CRC32)) - return crc32c_le_base(crc, p, len); + return crc32c_base(crc, p, len); if (len >= min_len && cpu_have_named_feature(PMULL) && crypto_simd_usable()) { kernel_neon_begin(); crc = crc32c_le_arm64_4way(crc, p, len); kernel_neon_end(); @@ -60,11 +60,11 @@ u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) return crc; } return crc32c_le_arm64(crc, p, len); } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { if (!alternative_has_cap_likely(ARM64_HAS_CRC32)) return crc32_be_base(crc, p, len); diff --git a/arch/loongarch/lib/crc32-loongarch.c b/arch/loongarch/lib/crc32-loongarch.c index 8af8113ecd9d3..c44ee4f325578 100644 --- a/arch/loongarch/lib/crc32-loongarch.c +++ b/arch/loongarch/lib/crc32-loongarch.c @@ -63,14 +63,14 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) return crc; } EXPORT_SYMBOL(crc32_le_arch); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { if (!static_branch_likely(&have_crc32)) - return crc32c_le_base(crc, p, len); + return crc32c_base(crc, p, len); while (len >= sizeof(u64)) { u64 value = get_unaligned_le64(p); CRC32C(crc, value, 
d); @@ -98,11 +98,11 @@ u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) CRC32C(crc, value, b); } return crc; } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { return crc32_be_base(crc, p, len); } diff --git a/arch/mips/lib/crc32-mips.c b/arch/mips/lib/crc32-mips.c index 100ac586aadb2..676a4b3e290b9 100644 --- a/arch/mips/lib/crc32-mips.c +++ b/arch/mips/lib/crc32-mips.c @@ -106,14 +106,14 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) return crc; } EXPORT_SYMBOL(crc32_le_arch); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { if (!static_branch_likely(&have_crc32)) - return crc32c_le_base(crc, p, len); + return crc32c_base(crc, p, len); if (IS_ENABLED(CONFIG_64BIT)) { for (; len >= sizeof(u64); p += sizeof(u64), len -= sizeof(u64)) { u64 value = get_unaligned_le64(p); @@ -147,11 +147,11 @@ u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) CRC32C(crc, value, b); } return crc; } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { return crc32_be_base(crc, p, len); } diff --git a/arch/powerpc/lib/crc32-glue.c b/arch/powerpc/lib/crc32-glue.c index 79cc954f499f1..dbd10f339183d 100644 --- a/arch/powerpc/lib/crc32-glue.c +++ b/arch/powerpc/lib/crc32-glue.c @@ -21,22 +21,22 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) { return crc32_le_base(crc, p, len); } EXPORT_SYMBOL(crc32_le_arch); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { unsigned int prealign; unsigned int tail; if (len < (VECTOR_BREAKPOINT + VMX_ALIGN) || !static_branch_likely(&have_vec_crypto) || !crypto_simd_usable()) - return crc32c_le_base(crc, p, len); + return crc32c_base(crc, p, len); if ((unsigned long)p & VMX_ALIGN_MASK) { prealign = VMX_ALIGN - ((unsigned long)p & VMX_ALIGN_MASK); - crc = crc32c_le_base(crc, p, prealign); + crc = crc32c_base(crc, p, prealign); len -= prealign; p += prealign; } if (len & ~VMX_ALIGN_MASK) { @@ -50,16 +50,16 @@ u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) } tail = len & VMX_ALIGN_MASK; if (tail) { p += len & ~VMX_ALIGN_MASK; - crc = crc32c_le_base(crc, p, tail); + crc = crc32c_base(crc, p, tail); } return crc; } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { return crc32_be_base(crc, p, len); } diff --git a/arch/riscv/lib/crc32-riscv.c b/arch/riscv/lib/crc32-riscv.c index a50f8e010417d..b5cb752847c40 100644 --- a/arch/riscv/lib/crc32-riscv.c +++ b/arch/riscv/lib/crc32-riscv.c @@ -222,16 +222,16 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len) return crc32_le_generic(crc, p, len, CRC32_POLY_LE, CRC32_POLY_QT_LE, crc32_le_base); } EXPORT_SYMBOL(crc32_le_arch); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { return crc32_le_generic(crc, p, len, CRC32C_POLY_LE, - CRC32C_POLY_QT_LE, crc32c_le_base); + CRC32C_POLY_QT_LE, crc32c_base); } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p, size_t len) { size_t bits = len * 8; diff --git a/arch/s390/lib/crc32-glue.c b/arch/s390/lib/crc32-glue.c index 137080e61f901..124214a273401 100644 --- a/arch/s390/lib/crc32-glue.c +++ b/arch/s390/lib/crc32-glue.c @@ -60,11 +60,11 @@ static DEFINE_STATIC_KEY_FALSE(have_vxrs); } \ EXPORT_SYMBOL(___fname); 
DEFINE_CRC32_VX(crc32_le_arch, crc32_le_vgfm_16, crc32_le_base) DEFINE_CRC32_VX(crc32_be_arch, crc32_be_vgfm_16, crc32_be_base) -DEFINE_CRC32_VX(crc32c_le_arch, crc32c_le_vgfm_16, crc32c_le_base) +DEFINE_CRC32_VX(crc32c_arch, crc32c_le_vgfm_16, crc32c_base) static int __init crc32_s390_init(void) { if (cpu_have_feature(S390_CPU_FEATURE_VXRS)) static_branch_enable(&have_vxrs); diff --git a/arch/sparc/lib/crc32_glue.c b/arch/sparc/lib/crc32_glue.c index 41076d2b1fd2d..a70752c729cf6 100644 --- a/arch/sparc/lib/crc32_glue.c +++ b/arch/sparc/lib/crc32_glue.c @@ -25,35 +25,35 @@ u32 crc32_le_arch(u32 crc, const u8 *data, size_t len) } EXPORT_SYMBOL(crc32_le_arch); void crc32c_sparc64(u32 *crcp, const u64 *data, size_t len); -u32 crc32c_le_arch(u32 crc, const u8 *data, size_t len) +u32 crc32c_arch(u32 crc, const u8 *data, size_t len) { size_t n = -(uintptr_t)data & 7; if (!static_branch_likely(&have_crc32c_opcode)) - return crc32c_le_base(crc, data, len); + return crc32c_base(crc, data, len); if (n) { /* Data isn't 8-byte aligned. Align it. */ n = min(n, len); - crc = crc32c_le_base(crc, data, n); + crc = crc32c_base(crc, data, n); data += n; len -= n; } n = len & ~7U; if (n) { crc32c_sparc64(&crc, (const u64 *)data, n); data += n; len -= n; } if (len) - crc = crc32c_le_base(crc, data, len); + crc = crc32c_base(crc, data, len); return crc; } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *data, size_t len) { return crc32_be_base(crc, data, len); } diff --git a/arch/x86/lib/crc32-glue.c b/arch/x86/lib/crc32-glue.c index 2dd18a886ded8..131c305e9ea0d 100644 --- a/arch/x86/lib/crc32-glue.c +++ b/arch/x86/lib/crc32-glue.c @@ -59,16 +59,16 @@ EXPORT_SYMBOL(crc32_le_arch); */ #define CRC32C_PCLMUL_BREAKEVEN 512 asmlinkage u32 crc32c_x86_3way(u32 crc, const u8 *buffer, size_t len); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) +u32 crc32c_arch(u32 crc, const u8 *p, size_t len) { size_t num_longs; if (!static_branch_likely(&have_crc32)) - return crc32c_le_base(crc, p, len); + return crc32c_base(crc, p, len); if (IS_ENABLED(CONFIG_X86_64) && len >= CRC32C_PCLMUL_BREAKEVEN && static_branch_likely(&have_pclmulqdq) && crypto_simd_usable()) { kernel_fpu_begin(); crc = crc32c_x86_3way(crc, p, len); @@ -83,11 +83,11 @@ u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len) for (len %= sizeof(unsigned long); len; len--, p++) asm("crc32b %1, %0" : "+r" (crc) : "rm" (*p)); return crc; } -EXPORT_SYMBOL(crc32c_le_arch); +EXPORT_SYMBOL(crc32c_arch); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len) { return crc32_be_base(crc, p, len); } diff --git a/crypto/crc32c_generic.c b/crypto/crc32c_generic.c index 770533d19b813..b1a36d32dc50c 100644 --- a/crypto/crc32c_generic.c +++ b/crypto/crc32c_generic.c @@ -83,11 +83,11 @@ static int chksum_setkey(struct crypto_shash *tfm, const u8 *key, static int chksum_update(struct shash_desc *desc, const u8 *data, unsigned int length) { struct chksum_desc_ctx *ctx = shash_desc_ctx(desc); - ctx->crc = crc32c_le_base(ctx->crc, data, length); + ctx->crc = crc32c_base(ctx->crc, data, length); return 0; } static int chksum_update_arch(struct shash_desc *desc, const u8 *data, unsigned int length) @@ -106,11 +106,11 @@ static int chksum_final(struct shash_desc *desc, u8 *out) return 0; } static int __chksum_finup(u32 *crcp, const u8 *data, unsigned int len, u8 *out) { - put_unaligned_le32(~crc32c_le_base(*crcp, data, len), out); + put_unaligned_le32(~crc32c_base(*crcp, data, len), out); return 0; } static int __chksum_finup_arch(u32 
*crcp, const u8 *data, unsigned int len, u8 *out) diff --git a/include/linux/crc32.h b/include/linux/crc32.h index 535071964f52f..69c2e8bb37829 100644 --- a/include/linux/crc32.h +++ b/include/linux/crc32.h @@ -10,12 +10,12 @@ u32 crc32_le_arch(u32 crc, const u8 *p, size_t len); u32 crc32_le_base(u32 crc, const u8 *p, size_t len); u32 crc32_be_arch(u32 crc, const u8 *p, size_t len); u32 crc32_be_base(u32 crc, const u8 *p, size_t len); -u32 crc32c_le_arch(u32 crc, const u8 *p, size_t len); -u32 crc32c_le_base(u32 crc, const u8 *p, size_t len); +u32 crc32c_arch(u32 crc, const u8 *p, size_t len); +u32 crc32c_base(u32 crc, const u8 *p, size_t len); static inline u32 crc32_le(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) return crc32_le_arch(crc, p, len); @@ -30,12 +30,12 @@ static inline u32 crc32_be(u32 crc, const void *p, size_t len) } static inline u32 crc32c(u32 crc, const void *p, size_t len) { if (IS_ENABLED(CONFIG_CRC32_ARCH)) - return crc32c_le_arch(crc, p, len); - return crc32c_le_base(crc, p, len); + return crc32c_arch(crc, p, len); + return crc32c_base(crc, p, len); } /* * crc32_optimizations() returns flags that indicate which CRC32 library * functions are using architecture-specific optimizations. Unlike diff --git a/lib/crc32.c b/lib/crc32.c index 554ef6827b80d..fddd424ff2245 100644 --- a/lib/crc32.c +++ b/lib/crc32.c @@ -43,17 +43,17 @@ u32 crc32_le_base(u32 crc, const u8 *p, size_t len) crc = (crc >> 8) ^ crc32table_le[(crc & 255) ^ *p++]; return crc; } EXPORT_SYMBOL(crc32_le_base); -u32 crc32c_le_base(u32 crc, const u8 *p, size_t len) +u32 crc32c_base(u32 crc, const u8 *p, size_t len) { while (len--) crc = (crc >> 8) ^ crc32ctable_le[(crc & 255) ^ *p++]; return crc; } -EXPORT_SYMBOL(crc32c_le_base); +EXPORT_SYMBOL(crc32c_base); /* * This multiplies the polynomials x and y modulo the given modulus. * This follows the "little-endian" CRC convention that the lsbit * represents the highest power of x, and the msbit represents x^0.
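As a closing sketch, a hypothetical self-check (not from the tree, assuming <linux/crc32.h>) of the combine identity documented in patch 5: the CRCs of two halves of a buffer, merged with crc32c_combine(), must equal the CRC of the whole buffer when the second half is computed with seed 0.

#include <linux/crc32.h>
#include <linux/types.h>

/* Hypothetical check of: crc_full == crc32c_combine(crc1, crc2, len2). */
static bool crc32c_combine_matches(const u8 *buf, size_t len1, size_t len2)
{
	u32 crc_full = crc32c(~0, buf, len1 + len2);
	u32 crc1 = crc32c(~0, buf, len1);		/* first piece, normal seed */
	u32 crc2 = crc32c(0, buf + len1, len2);		/* second piece, seed 0 */

	return crc_full == crc32c_combine(crc1, crc2, len2);
}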