From patchwork Fri Jun 21 16:59:08 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Bart Van Assche, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v6 01/15] crypto: shash - add support for finup_mb
Date: Fri, 21 Jun 2024 09:59:08 -0700
Message-ID: <20240621165922.77672-2-ebiggers@kernel.org>
In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org>
References: <20240621165922.77672-1-ebiggers@kernel.org>

Most cryptographic hash functions are serialized, in the sense that they have an internal block size and the blocks must be processed serially. (BLAKE3 is a notable exception that has tree-based hashing built-in, but all the more common choices such as the SHAs and BLAKE2 are serialized. ParallelHash and Sakura are parallel hashes based on SHA3, but SHA3 is much slower than SHA256 in software even with the ARMv8 SHA3 extension.) This limits the performance of computing a single hash. Yet, computing multiple hashes simultaneously does not have this limitation.

Modern CPUs are superscalar and often can execute independent instructions in parallel. As a result, on many modern CPUs, it is possible to hash two equal-length messages in about the same time as a single message, if all the instructions are interleaved.

Meanwhile, a very common use case for hashing in the Linux kernel is dm-verity and fs-verity. Both use a Merkle tree that has a fixed block size, usually 4096 bytes with an empty or 32-byte salt prepended. The hash algorithm is usually SHA-256. Usually, many blocks need to be hashed at a time. This is an ideal scenario for multibuffer hashing.

Linux actually used to support SHA-256 multibuffer hashing on x86_64, before it was removed by commit ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd"). However, it was integrated with the crypto API in a weird way, where it behaved as an asynchronous hash that queued up and executed all requests on a global queue. This made it very complex, buggy, and virtually unusable.

This patch takes a new approach of just adding an API crypto_shash_finup_mb() that synchronously computes the hash of multiple equal-length messages, starting from a common state that represents the (possibly empty) common prefix shared by the messages.

The new API is part of the "shash" algorithm type, as it does not make sense in "ahash". It does a "finup" operation rather than a "digest" operation in order to support the salt that is used by dm-verity and fs-verity. The data and output buffers are provided in arrays of length @num_msgs in order to make the API itself extensible to interleaving factors other than 2. (Though, initially only 2x will actually be used. There are some platforms in which a higher factor could help, but there are significant trade-offs.)
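As a usage illustration (not part of this patch), a caller such as a Merkle tree implementation could hash two equal-length blocks from a shared salted prefix as in the following sketch. The helper name hash_two_blocks(), the fixed 4096-byte block length, and the assumption that the "sha256" tfm reports crypto_shash_mb_max_msgs() >= 2 are illustrative only:

#include <crypto/hash.h>
#include <crypto/sha2.h>

/*
 * Illustrative sketch only: hash two equal-length 4096-byte blocks that
 * share a (possibly empty) salt prefix.  Assumes @tfm is an unkeyed
 * "sha256" shash and that the caller has checked
 * crypto_shash_mb_max_msgs(tfm) >= 2.
 */
static int hash_two_blocks(struct crypto_shash *tfm,
			   const u8 *salt, unsigned int salt_len,
			   const u8 *block1, const u8 *block2,
			   u8 digest1[SHA256_DIGEST_SIZE],
			   u8 digest2[SHA256_DIGEST_SIZE])
{
	SHASH_DESC_ON_STACK(desc, tfm);
	const u8 *data[2] = { block1, block2 };
	u8 *outs[2] = { digest1, digest2 };
	int err;

	desc->tfm = tfm;

	/* Hash the common prefix (e.g. a verity salt) once. */
	err = crypto_shash_init(desc);
	if (!err && salt_len)
		err = crypto_shash_update(desc, salt, salt_len);
	if (err)
		return err;

	/*
	 * Finish both messages from the common prefix state.  The two hashes
	 * are interleaved when the driver provides finup_mb; if the driver
	 * declines (-EOPNOTSUPP), the API falls back to sequential finups.
	 */
	return crypto_shash_finup_mb(desc, data, 4096, outs, 2);
}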
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- crypto/shash.c | 58 +++++++++++++++++++++++++++++++++++++++++++ include/crypto/hash.h | 52 +++++++++++++++++++++++++++++++++++++- 2 files changed, 109 insertions(+), 1 deletion(-) diff --git a/crypto/shash.c b/crypto/shash.c index 301ab42bf849..5ee5ce68c7b4 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -73,10 +73,57 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data, { return crypto_shash_alg(desc->tfm)->finup(desc, data, len, out); } EXPORT_SYMBOL_GPL(crypto_shash_finup); +static noinline_for_stack int +shash_finup_mb_fallback(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs) +{ + struct crypto_shash *tfm = desc->tfm; + SHASH_DESC_ON_STACK(desc2, tfm); + unsigned int i; + int err; + + for (i = 0; i < num_msgs - 1; i++) { + desc2->tfm = tfm; + memcpy(shash_desc_ctx(desc2), shash_desc_ctx(desc), + crypto_shash_descsize(tfm)); + err = crypto_shash_finup(desc2, data[i], len, outs[i]); + if (err) + return err; + } + return crypto_shash_finup(desc, data[i], len, outs[i]); +} + +int crypto_shash_finup_mb(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs) +{ + struct shash_alg *alg = crypto_shash_alg(desc->tfm); + int err; + + if (num_msgs == 1) + return crypto_shash_finup(desc, data[0], len, outs[0]); + + if (num_msgs == 0) + return 0; + + if (WARN_ON_ONCE(num_msgs > alg->mb_max_msgs)) + goto fallback; + + err = alg->finup_mb(desc, data, len, outs, num_msgs); + if (unlikely(err == -EOPNOTSUPP)) + goto fallback; + return err; + +fallback: + return shash_finup_mb_fallback(desc, data, len, outs, num_msgs); +} +EXPORT_SYMBOL_GPL(crypto_shash_finup_mb); + static int shash_default_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct shash_alg *shash = crypto_shash_alg(desc->tfm); @@ -312,10 +359,21 @@ static int shash_prepare_alg(struct shash_alg *alg) return -EINVAL; if ((alg->export && !alg->import) || (alg->import && !alg->export)) return -EINVAL; + if (alg->mb_max_msgs > 1) { + if (alg->mb_max_msgs > HASH_MAX_MB_MSGS) + return -EINVAL; + if (!alg->finup_mb) + return -EINVAL; + } else { + if (alg->finup_mb) + return -EINVAL; + alg->mb_max_msgs = 1; + } + err = hash_prepare_alg(&alg->halg); if (err) return err; base->cra_type = &crypto_shash_type; diff --git a/include/crypto/hash.h b/include/crypto/hash.h index 2d5ea9f9ff43..38511727b2ff 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -154,11 +154,13 @@ struct ahash_alg { struct shash_desc { struct crypto_shash *tfm; void *__ctx[] __aligned(ARCH_SLAB_MINALIGN); }; -#define HASH_MAX_DIGESTSIZE 64 +#define HASH_MAX_DIGESTSIZE 64 + +#define HASH_MAX_MB_MSGS 2 /* max value of crypto_shash_mb_max_msgs() */ /* * Worst case is hmac(sha3-224-generic). Its context is a nested 'shash_desc' * containing a 'struct sha3_state'. */ @@ -177,10 +179,19 @@ struct shash_desc { * @finup: see struct ahash_alg * @digest: see struct ahash_alg * @export: see struct ahash_alg * @import: see struct ahash_alg * @setkey: see struct ahash_alg + * @finup_mb: **[optional]** Multibuffer hashing support. Finish calculating + * the digests of multiple messages, interleaving the instructions to + * potentially achieve better performance than hashing each message + * individually. The num_msgs argument will be between 2 and + * @mb_max_msgs inclusively. 
If there are particular values of len + * or num_msgs, or a particular calling context (e.g. no-SIMD) that + * the implementation does not support with this function, then it + * must return -EOPNOTSUPP in those cases to cause the crypto API to + * fall back to repeated finups. * @init_tfm: Initialize the cryptographic transformation object. * This function is called only once at the instantiation * time, right after the transformation context was * allocated. In case the cryptographic hardware has * some special requirements which need to be handled @@ -192,10 +203,11 @@ struct shash_desc { * various changes set in @init_tfm. * @clone_tfm: Copy transform into new object, may allocate memory. * @descsize: Size of the operational state for the message digest. This state * size is the memory size that needs to be allocated for * shash_desc.__ctx + * @mb_max_msgs: Maximum supported value of num_msgs argument to @finup_mb * @halg: see struct hash_alg_common * @HASH_ALG_COMMON: see struct hash_alg_common */ struct shash_alg { int (*init)(struct shash_desc *desc); @@ -208,15 +220,19 @@ struct shash_alg { unsigned int len, u8 *out); int (*export)(struct shash_desc *desc, void *out); int (*import)(struct shash_desc *desc, const void *in); int (*setkey)(struct crypto_shash *tfm, const u8 *key, unsigned int keylen); + int (*finup_mb)(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs); int (*init_tfm)(struct crypto_shash *tfm); void (*exit_tfm)(struct crypto_shash *tfm); int (*clone_tfm)(struct crypto_shash *dst, struct crypto_shash *src); unsigned int descsize; + unsigned int mb_max_msgs; union { struct HASH_ALG_COMMON; struct hash_alg_common halg; }; @@ -750,10 +766,23 @@ static inline unsigned int crypto_shash_digestsize(struct crypto_shash *tfm) static inline unsigned int crypto_shash_statesize(struct crypto_shash *tfm) { return crypto_shash_alg(tfm)->statesize; } +/** + * crypto_shash_mb_max_msgs() - get max multibuffer interleaving factor + * @tfm: hash transformation object + * + * Return the maximum supported multibuffer hashing interleaving factor, i.e. + * the maximum num_msgs that can be passed to crypto_shash_finup_mb(). The + * return value will be between 1 and HASH_MAX_MB_MSGS inclusively. + */ +static inline unsigned int crypto_shash_mb_max_msgs(struct crypto_shash *tfm) +{ + return crypto_shash_alg(tfm)->mb_max_msgs; +} + static inline u32 crypto_shash_get_flags(struct crypto_shash *tfm) { return crypto_tfm_get_flags(crypto_shash_tfm(tfm)); } @@ -942,10 +971,31 @@ int crypto_shash_final(struct shash_desc *desc, u8 *out); * occurred */ int crypto_shash_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out); +/** + * crypto_shash_finup_mb() - multibuffer message hashing + * @desc: the starting state that is forked for each message. It contains the + * state after hashing a (possibly-empty) common prefix of the messages. + * @data: the data of each message (not including any common prefix from @desc) + * @len: length of each data buffer in bytes + * @outs: output buffer for each message digest + * @num_msgs: number of messages, i.e. the number of entries in @data and @outs. + * This can't be more than crypto_shash_mb_max_msgs(). + * + * This function provides support for hashing multiple messages with the + * instructions interleaved, if supported by the algorithm. This can + * significantly improve performance, depending on the CPU and algorithm. + * + * Context: Any context. 
+ * Return: 0 on success; a negative errno value on failure. + */ +int crypto_shash_finup_mb(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs); + static inline void shash_desc_zero(struct shash_desc *desc) { memzero_explicit(desc, sizeof(*desc) + crypto_shash_descsize(desc->tfm)); }

From patchwork Fri Jun 21 16:59:09 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Bart Van Assche, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v6 02/15] crypto: testmgr - generate power-of-2 lengths more often
Date: Fri, 21 Jun 2024 09:59:09 -0700
Message-ID: <20240621165922.77672-3-ebiggers@kernel.org>
In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org>
References: <20240621165922.77672-1-ebiggers@kernel.org>

Implementations of hash functions often have special cases when lengths are a multiple of the hash function's internal block size (e.g. 64 for SHA-256, 128 for SHA-512). Currently, when the fuzz testing code generates lengths, it doesn't prefer any length mod 64 over any other. This limits the coverage of these special cases.

Therefore, this patch updates the fuzz testing code to generate power-of-2 lengths and divide messages exactly in half a bit more often.

Reviewed-by: Sami Tolvanen
Acked-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 crypto/testmgr.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c index a780b615f8c6..f02cb075bd68 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -914,18 +914,24 @@ static unsigned int generate_random_length(struct rnd_state *rng, { unsigned int len = prandom_u32_below(rng, max_len + 1); switch (prandom_u32_below(rng, 4)) { case 0: - return len % 64; + len %= 64; + break; case 1: - return len % 256; + len %= 256; + break; case 2: - return len % 1024; + len %= 1024; + break; default: - return len; + break; } + if (len && prandom_u32_below(rng, 4) == 0) + len = rounddown_pow_of_two(len); + return len; } /* Flip a random bit in the given nonempty data buffer */ static void flip_random_bit(struct rnd_state *rng, u8 *buf, size_t size) { @@ -1017,10 +1023,12 @@ static char *generate_random_sgl_divisions(struct rnd_state *rng, unsigned int this_len; const char *flushtype_str; if (div == &divs[max_divs - 1] || prandom_bool(rng)) this_len = remaining; + else if (prandom_u32_below(rng, 4) == 0) + this_len = (remaining + 1) / 2; else this_len = prandom_u32_inclusive(rng, 1, remaining); div->proportion_of_total = this_len; if (prandom_u32_below(rng, 4) == 0)

From patchwork Fri Jun 21 16:59:10 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Bart Van Assche, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v6 03/15] crypto: testmgr - add tests for finup_mb
Date: Fri, 21 Jun 2024 09:59:10 -0700
Message-ID: <20240621165922.77672-4-ebiggers@kernel.org>
In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org>
References: <20240621165922.77672-1-ebiggers@kernel.org>

Update the shash self-tests to test the new finup_mb method when CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y.
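To make the property being tested concrete, here is a hedged, self-contained sketch of the kind of consistency check this enables: a digest computed through crypto_shash_finup_mb() must match the one from the ordinary single-message path. The helper name check_finup_mb_consistency() and the use of crypto_shash_tfm_digest() as the reference path are assumptions for illustration, not code from this series:

#include <crypto/hash.h>
#include <linux/errno.h>
#include <linux/string.h>

/*
 * Illustrative check only: hash the same message once through the regular
 * digest path and once via finup_mb with two identical messages, then
 * compare.  Assumes an unkeyed hash with crypto_shash_mb_max_msgs(tfm) >= 2.
 */
static int check_finup_mb_consistency(struct crypto_shash *tfm,
				      const u8 *msg, unsigned int len)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	u8 ref[HASH_MAX_DIGESTSIZE];
	u8 out0[HASH_MAX_DIGESTSIZE];
	u8 out1[HASH_MAX_DIGESTSIZE];
	const u8 *data[2] = { msg, msg };
	u8 *outs[2] = { out0, out1 };
	unsigned int ds = crypto_shash_digestsize(tfm);
	int err;

	/* Reference digest via the ordinary single-message path. */
	err = crypto_shash_tfm_digest(tfm, msg, len, ref);
	if (err)
		return err;

	/* Same message hashed twice in one finup_mb call. */
	desc->tfm = tfm;
	err = crypto_shash_init(desc) ?:
	      crypto_shash_finup_mb(desc, data, len, outs, 2);
	if (err)
		return err;

	return (memcmp(ref, out0, ds) || memcmp(ref, out1, ds)) ? -EINVAL : 0;
}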
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- crypto/testmgr.c | 73 +++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 66 insertions(+), 7 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index f02cb075bd68..577a32a792ad 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -227,10 +227,11 @@ enum flush_type { /* finalization function for hash algorithms */ enum finalization_type { FINALIZATION_TYPE_FINAL, /* use final() */ FINALIZATION_TYPE_FINUP, /* use finup() */ + FINALIZATION_TYPE_FINUP_MB, /* use finup_mb() */ FINALIZATION_TYPE_DIGEST, /* use digest() */ }; /* * Whether the crypto operation will occur in-place, and if so whether the @@ -290,10 +291,14 @@ struct test_sg_division { * the @iv_offset * @key_offset: misalignment of the key, where 0 is default alignment * @key_offset_relative_to_alignmask: if true, add the algorithm's alignmask to * the @key_offset * @finalization_type: what finalization function to use for hashes + * @multibuffer_index: random number used to generate the message index to use + * for finup_mb (when finup_mb is used). + * @multibuffer_count: random number used to generate the num_msgs parameter to + * finup_mb (when finup_mb is used). * @nosimd: execute with SIMD disabled? Requires !CRYPTO_TFM_REQ_MAY_SLEEP. * This applies to the parts of the operation that aren't controlled * individually by @nosimd_setkey or @src_divs[].nosimd. * @nosimd_setkey: set the key (if applicable) with SIMD disabled? Requires * !CRYPTO_TFM_REQ_MAY_SLEEP. @@ -307,10 +312,12 @@ struct testvec_config { unsigned int iv_offset; unsigned int key_offset; bool iv_offset_relative_to_alignmask; bool key_offset_relative_to_alignmask; enum finalization_type finalization_type; + unsigned int multibuffer_index; + unsigned int multibuffer_count; bool nosimd; bool nosimd_setkey; }; #define TESTVEC_CONFIG_NAMELEN 192 @@ -1122,19 +1129,27 @@ static void generate_random_testvec_config(struct rnd_state *rng, if (prandom_bool(rng)) { cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP; p += scnprintf(p, end - p, " may_sleep"); } - switch (prandom_u32_below(rng, 4)) { + switch (prandom_u32_below(rng, 8)) { case 0: + case 1: cfg->finalization_type = FINALIZATION_TYPE_FINAL; p += scnprintf(p, end - p, " use_final"); break; - case 1: + case 2: cfg->finalization_type = FINALIZATION_TYPE_FINUP; p += scnprintf(p, end - p, " use_finup"); break; + case 3: + case 4: + cfg->finalization_type = FINALIZATION_TYPE_FINUP_MB; + cfg->multibuffer_index = prandom_u32_state(rng); + cfg->multibuffer_count = prandom_u32_state(rng); + p += scnprintf(p, end - p, " use_finup_mb"); + break; default: cfg->finalization_type = FINALIZATION_TYPE_DIGEST; p += scnprintf(p, end - p, " use_digest"); break; } @@ -1289,10 +1304,37 @@ static inline int check_shash_op(const char *op, int err, pr_err("alg: shash: %s %s() failed with err %d on test vector %s, cfg=\"%s\"\n", driver, op, err, vec_name, cfg->name); return err; } +static int do_finup_mb(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *result, + const struct testvec_config *cfg, + const struct test_sglist *tsgl) +{ + struct crypto_shash *tfm = desc->tfm; + const u8 *unused_data = tsgl->bufs[XBUFSIZE - 1]; + u8 unused_result[HASH_MAX_DIGESTSIZE]; + const u8 *datas[HASH_MAX_MB_MSGS]; + u8 *outs[HASH_MAX_MB_MSGS]; + unsigned int num_msgs; + unsigned int msg_idx; + unsigned int i; + + num_msgs = 1 + (cfg->multibuffer_count % crypto_shash_mb_max_msgs(tfm)); + if (WARN_ON_ONCE(num_msgs > HASH_MAX_MB_MSGS)) + 
return -EINVAL; + msg_idx = cfg->multibuffer_index % num_msgs; + for (i = 0; i < num_msgs; i++) { + datas[i] = unused_data; + outs[i] = unused_result; + } + datas[msg_idx] = data; + outs[msg_idx] = result; + return crypto_shash_finup_mb(desc, datas, len, outs, num_msgs); +} + /* Test one hash test vector in one configuration, using the shash API */ static int test_shash_vec_cfg(const struct hash_testvec *vec, const char *vec_name, const struct testvec_config *cfg, struct shash_desc *desc, @@ -1365,11 +1407,14 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec, return -EINVAL; } goto result_ready; } - /* Using init(), zero or more update(), then final() or finup() */ + /* + * Using init(), zero or more update(), then either final(), finup(), or + * finup_mb(). + */ if (cfg->nosimd) crypto_disable_simd_for_test(); err = crypto_shash_init(desc); if (cfg->nosimd) @@ -1377,28 +1422,42 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec, err = check_shash_op("init", err, driver, vec_name, cfg); if (err) return err; for (i = 0; i < tsgl->nents; i++) { + const u8 *data = sg_virt(&tsgl->sgl[i]); + unsigned int len = tsgl->sgl[i].length; + if (i + 1 == tsgl->nents && cfg->finalization_type == FINALIZATION_TYPE_FINUP) { if (divs[i]->nosimd) crypto_disable_simd_for_test(); - err = crypto_shash_finup(desc, sg_virt(&tsgl->sgl[i]), - tsgl->sgl[i].length, result); + err = crypto_shash_finup(desc, data, len, result); if (divs[i]->nosimd) crypto_reenable_simd_for_test(); err = check_shash_op("finup", err, driver, vec_name, cfg); if (err) return err; goto result_ready; } + if (i + 1 == tsgl->nents && + cfg->finalization_type == FINALIZATION_TYPE_FINUP_MB) { + if (divs[i]->nosimd) + crypto_disable_simd_for_test(); + err = do_finup_mb(desc, data, len, result, cfg, tsgl); + if (divs[i]->nosimd) + crypto_reenable_simd_for_test(); + err = check_shash_op("finup_mb", err, driver, vec_name, + cfg); + if (err) + return err; + goto result_ready; + } if (divs[i]->nosimd) crypto_disable_simd_for_test(); - err = crypto_shash_update(desc, sg_virt(&tsgl->sgl[i]), - tsgl->sgl[i].length); + err = crypto_shash_update(desc, data, len); if (divs[i]->nosimd) crypto_reenable_simd_for_test(); err = check_shash_op("update", err, driver, vec_name, cfg); if (err) return err;

From patchwork Fri Jun 21 16:59:11 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Bart Van Assche, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v6 04/15] crypto: x86/sha256-ni - add support for finup_mb
Date: Fri, 21 Jun 2024 09:59:11 -0700
Message-ID: <20240621165922.77672-5-ebiggers@kernel.org>
In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org>
References: <20240621165922.77672-1-ebiggers@kernel.org>

Add an implementation of finup_mb to sha256-ni, using an interleaving factor of 2. It interleaves a finup operation for two equal-length messages that share a common prefix. dm-verity and fs-verity will take advantage of this for greatly improved performance on capable CPUs.

This increases the throughput of SHA-256 hashing 4096-byte messages by the following amounts on the following CPUs:

    AMD Zen 1:              84%
    AMD Zen 4:              98%
    Intel Ice Lake:          4%
    Intel Sapphire Rapids:  20%

For now, this seems to benefit AMD much more than Intel. This seems to be because current AMD CPUs support concurrent execution of the SHA-NI instructions, but unfortunately current Intel CPUs don't, except for the sha256msg2 instruction. Hopefully future Intel CPUs will support SHA-NI on more execution ports.
Zen 1 supports 2 concurrent sha256rnds2, and Zen 4 supports 4 concurrent sha256rnds2, which suggests that even better performance may be achievable on Zen 4 by interleaving more than two hashes; however, doing so poses a number of trade-offs. It's been reported that the method that achieves the highest SHA-256 throughput on Intel CPUs is actually computing 16 hashes simultaneously using AVX512. That method would be quite different to the SHA-NI method used in this patch. However, such a high interleaving factor isn't practical for the use cases being targeted in the kernel. Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- arch/x86/crypto/sha256_ni_asm.S | 368 ++++++++++++++++++++++++++++ arch/x86/crypto/sha256_ssse3_glue.c | 39 +++ 2 files changed, 407 insertions(+) diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S index d515a55a3bc1..5e97922a24e4 100644 --- a/arch/x86/crypto/sha256_ni_asm.S +++ b/arch/x86/crypto/sha256_ni_asm.S @@ -172,10 +172,378 @@ SYM_TYPED_FUNC_START(sha256_ni_transform) .Ldone_hash: RET SYM_FUNC_END(sha256_ni_transform) +#undef DIGEST_PTR +#undef DATA_PTR +#undef NUM_BLKS +#undef SHA256CONSTANTS +#undef MSG +#undef STATE0 +#undef STATE1 +#undef MSG0 +#undef MSG1 +#undef MSG2 +#undef MSG3 +#undef TMP +#undef SHUF_MASK +#undef ABEF_SAVE +#undef CDGH_SAVE + +// parameters for __sha256_ni_finup2x() +#define SCTX %rdi +#define DATA1 %rsi +#define DATA2 %rdx +#define LEN %ecx +#define LEN8 %cl +#define LEN64 %rcx +#define OUT1 %r8 +#define OUT2 %r9 + +// other scalar variables +#define SHA256CONSTANTS %rax +#define COUNT %r10 +#define COUNT32 %r10d +#define FINAL_STEP %r11d + +// rbx is used as a temporary. + +#define MSG %xmm0 // sha256rnds2 implicit operand +#define STATE0_A %xmm1 +#define STATE1_A %xmm2 +#define STATE0_B %xmm3 +#define STATE1_B %xmm4 +#define TMP_A %xmm5 +#define TMP_B %xmm6 +#define MSG0_A %xmm7 +#define MSG1_A %xmm8 +#define MSG2_A %xmm9 +#define MSG3_A %xmm10 +#define MSG0_B %xmm11 +#define MSG1_B %xmm12 +#define MSG2_B %xmm13 +#define MSG3_B %xmm14 +#define SHUF_MASK %xmm15 + +#define OFFSETOF_STATE 0 // offsetof(struct sha256_state, state) +#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count) +#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf) + +// Do 4 rounds of SHA-256 for each of two messages (interleaved). m0_a and m0_b +// contain the current 4 message schedule words for the first and second message +// respectively. +// +// If not all the message schedule words have been computed yet, then this also +// computes 4 more message schedule words for each message. m1_a-m3_a contain +// the next 3 groups of 4 message schedule words for the first message, and +// likewise m1_b-m3_b for the second. After consuming the current value of +// m0_a, this macro computes the group after m3_a and writes it to m0_a, and +// likewise for *_b. This means that the next (m0_a, m1_a, m2_a, m3_a) is the +// current (m1_a, m2_a, m3_a, m0_a), and likewise for *_b, so the caller must +// cycle through the registers accordingly. 
+.macro do_4rounds_2x i, m0_a, m1_a, m2_a, m3_a, m0_b, m1_b, m2_b, m3_b + movdqa (\i-32)*4(SHA256CONSTANTS), TMP_A + movdqa TMP_A, TMP_B + paddd \m0_a, TMP_A + paddd \m0_b, TMP_B +.if \i < 48 + sha256msg1 \m1_a, \m0_a + sha256msg1 \m1_b, \m0_b +.endif + movdqa TMP_A, MSG + sha256rnds2 STATE0_A, STATE1_A + movdqa TMP_B, MSG + sha256rnds2 STATE0_B, STATE1_B + pshufd $0x0E, TMP_A, MSG + sha256rnds2 STATE1_A, STATE0_A + pshufd $0x0E, TMP_B, MSG + sha256rnds2 STATE1_B, STATE0_B +.if \i < 48 + movdqa \m3_a, TMP_A + movdqa \m3_b, TMP_B + palignr $4, \m2_a, TMP_A + palignr $4, \m2_b, TMP_B + paddd TMP_A, \m0_a + paddd TMP_B, \m0_b + sha256msg2 \m3_a, \m0_a + sha256msg2 \m3_b, \m0_b +.endif +.endm + +// +// void __sha256_ni_finup2x(const struct sha256_state *sctx, +// const u8 *data1, const u8 *data2, int len, +// u8 out1[SHA256_DIGEST_SIZE], +// u8 out2[SHA256_DIGEST_SIZE]); +// +// This function computes the SHA-256 digests of two messages |data1| and +// |data2| that are both |len| bytes long, starting from the initial state +// |sctx|. |len| must be at least SHA256_BLOCK_SIZE. +// +// The instructions for the two SHA-256 operations are interleaved. On many +// CPUs, this is almost twice as fast as hashing each message individually due +// to taking better advantage of the CPU's SHA-256 and SIMD throughput. +// +SYM_FUNC_START(__sha256_ni_finup2x) + // Allocate 128 bytes of stack space, 16-byte aligned. + push %rbx + push %rbp + mov %rsp, %rbp + sub $128, %rsp + and $~15, %rsp + + // Load the shuffle mask for swapping the endianness of 32-bit words. + movdqa PSHUFFLE_BYTE_FLIP_MASK(%rip), SHUF_MASK + + // Set up pointer to the round constants. + lea K256+32*4(%rip), SHA256CONSTANTS + + // Initially we're not processing the final blocks. + xor FINAL_STEP, FINAL_STEP + + // Load the initial state from sctx->state. + movdqu OFFSETOF_STATE+0*16(SCTX), STATE0_A // DCBA + movdqu OFFSETOF_STATE+1*16(SCTX), STATE1_A // HGFE + movdqa STATE0_A, TMP_A + punpcklqdq STATE1_A, STATE0_A // FEBA + punpckhqdq TMP_A, STATE1_A // DCHG + pshufd $0x1B, STATE0_A, STATE0_A // ABEF + pshufd $0xB1, STATE1_A, STATE1_A // CDGH + + // Load sctx->count. Take the mod 64 of it to get the number of bytes + // that are buffered in sctx->buf. Also save it in a register with LEN + // added to it. + mov LEN, LEN + mov OFFSETOF_COUNT(SCTX), %rbx + lea (%rbx, LEN64, 1), COUNT + and $63, %ebx + jz .Lfinup2x_enter_loop // No bytes buffered? + + // %ebx bytes (1 to 63) are currently buffered in sctx->buf. Load them + // followed by the first 64 - %ebx bytes of data. Since LEN >= 64, we + // just load 64 bytes from each of sctx->buf, DATA1, and DATA2 + // unconditionally and rearrange the data as needed. 
+ + movdqu OFFSETOF_BUF+0*16(SCTX), MSG0_A + movdqu OFFSETOF_BUF+1*16(SCTX), MSG1_A + movdqu OFFSETOF_BUF+2*16(SCTX), MSG2_A + movdqu OFFSETOF_BUF+3*16(SCTX), MSG3_A + movdqa MSG0_A, 0*16(%rsp) + movdqa MSG1_A, 1*16(%rsp) + movdqa MSG2_A, 2*16(%rsp) + movdqa MSG3_A, 3*16(%rsp) + + movdqu 0*16(DATA1), MSG0_A + movdqu 1*16(DATA1), MSG1_A + movdqu 2*16(DATA1), MSG2_A + movdqu 3*16(DATA1), MSG3_A + movdqu MSG0_A, 0*16(%rsp,%rbx) + movdqu MSG1_A, 1*16(%rsp,%rbx) + movdqu MSG2_A, 2*16(%rsp,%rbx) + movdqu MSG3_A, 3*16(%rsp,%rbx) + movdqa 0*16(%rsp), MSG0_A + movdqa 1*16(%rsp), MSG1_A + movdqa 2*16(%rsp), MSG2_A + movdqa 3*16(%rsp), MSG3_A + + movdqu 0*16(DATA2), MSG0_B + movdqu 1*16(DATA2), MSG1_B + movdqu 2*16(DATA2), MSG2_B + movdqu 3*16(DATA2), MSG3_B + movdqu MSG0_B, 0*16(%rsp,%rbx) + movdqu MSG1_B, 1*16(%rsp,%rbx) + movdqu MSG2_B, 2*16(%rsp,%rbx) + movdqu MSG3_B, 3*16(%rsp,%rbx) + movdqa 0*16(%rsp), MSG0_B + movdqa 1*16(%rsp), MSG1_B + movdqa 2*16(%rsp), MSG2_B + movdqa 3*16(%rsp), MSG3_B + + sub $64, %rbx // rbx = buffered - 64 + sub %rbx, DATA1 // DATA1 += 64 - buffered + sub %rbx, DATA2 // DATA2 += 64 - buffered + add %ebx, LEN // LEN += buffered - 64 + movdqa STATE0_A, STATE0_B + movdqa STATE1_A, STATE1_B + jmp .Lfinup2x_loop_have_data + +.Lfinup2x_enter_loop: + sub $64, LEN + movdqa STATE0_A, STATE0_B + movdqa STATE1_A, STATE1_B +.Lfinup2x_loop: + // Load the next two data blocks. + movdqu 0*16(DATA1), MSG0_A + movdqu 0*16(DATA2), MSG0_B + movdqu 1*16(DATA1), MSG1_A + movdqu 1*16(DATA2), MSG1_B + movdqu 2*16(DATA1), MSG2_A + movdqu 2*16(DATA2), MSG2_B + movdqu 3*16(DATA1), MSG3_A + movdqu 3*16(DATA2), MSG3_B + add $64, DATA1 + add $64, DATA2 +.Lfinup2x_loop_have_data: + // Convert the words of the data blocks from big endian. + pshufb SHUF_MASK, MSG0_A + pshufb SHUF_MASK, MSG0_B + pshufb SHUF_MASK, MSG1_A + pshufb SHUF_MASK, MSG1_B + pshufb SHUF_MASK, MSG2_A + pshufb SHUF_MASK, MSG2_B + pshufb SHUF_MASK, MSG3_A + pshufb SHUF_MASK, MSG3_B +.Lfinup2x_loop_have_bswapped_data: + + // Save the original state for each block. + movdqa STATE0_A, 0*16(%rsp) + movdqa STATE0_B, 1*16(%rsp) + movdqa STATE1_A, 2*16(%rsp) + movdqa STATE1_B, 3*16(%rsp) + + // Do the SHA-256 rounds on each block. +.irp i, 0, 16, 32, 48 + do_4rounds_2x (\i + 0), MSG0_A, MSG1_A, MSG2_A, MSG3_A, \ + MSG0_B, MSG1_B, MSG2_B, MSG3_B + do_4rounds_2x (\i + 4), MSG1_A, MSG2_A, MSG3_A, MSG0_A, \ + MSG1_B, MSG2_B, MSG3_B, MSG0_B + do_4rounds_2x (\i + 8), MSG2_A, MSG3_A, MSG0_A, MSG1_A, \ + MSG2_B, MSG3_B, MSG0_B, MSG1_B + do_4rounds_2x (\i + 12), MSG3_A, MSG0_A, MSG1_A, MSG2_A, \ + MSG3_B, MSG0_B, MSG1_B, MSG2_B +.endr + + // Add the original state for each block. + paddd 0*16(%rsp), STATE0_A + paddd 1*16(%rsp), STATE0_B + paddd 2*16(%rsp), STATE1_A + paddd 3*16(%rsp), STATE1_B + + // Update LEN and loop back if more blocks remain. + sub $64, LEN + jge .Lfinup2x_loop + + // Check if any final blocks need to be handled. + // FINAL_STEP = 2: all done + // FINAL_STEP = 1: need to do count-only padding block + // FINAL_STEP = 0: need to do the block with 0x80 padding byte + cmp $1, FINAL_STEP + jg .Lfinup2x_done + je .Lfinup2x_finalize_countonly + add $64, LEN + jz .Lfinup2x_finalize_blockaligned + + // Not block-aligned; 1 <= LEN <= 63 data bytes remain. Pad the block. + // To do this, write the padding starting with the 0x80 byte to + // &sp[64]. Then for each message, copy the last 64 data bytes to sp + // and load from &sp[64 - LEN] to get the needed padding block. 
This + // code relies on the data buffers being >= 64 bytes in length. + mov $64, %ebx + sub LEN, %ebx // ebx = 64 - LEN + sub %rbx, DATA1 // DATA1 -= 64 - LEN + sub %rbx, DATA2 // DATA2 -= 64 - LEN + mov $0x80, FINAL_STEP // using FINAL_STEP as a temporary + movd FINAL_STEP, MSG0_A + pxor MSG1_A, MSG1_A + movdqa MSG0_A, 4*16(%rsp) + movdqa MSG1_A, 5*16(%rsp) + movdqa MSG1_A, 6*16(%rsp) + movdqa MSG1_A, 7*16(%rsp) + cmp $56, LEN + jge 1f // will COUNT spill into its own block? + shl $3, COUNT + bswap COUNT + mov COUNT, 56(%rsp,%rbx) + mov $2, FINAL_STEP // won't need count-only block + jmp 2f +1: + mov $1, FINAL_STEP // will need count-only block +2: + movdqu 0*16(DATA1), MSG0_A + movdqu 1*16(DATA1), MSG1_A + movdqu 2*16(DATA1), MSG2_A + movdqu 3*16(DATA1), MSG3_A + movdqa MSG0_A, 0*16(%rsp) + movdqa MSG1_A, 1*16(%rsp) + movdqa MSG2_A, 2*16(%rsp) + movdqa MSG3_A, 3*16(%rsp) + movdqu 0*16(%rsp,%rbx), MSG0_A + movdqu 1*16(%rsp,%rbx), MSG1_A + movdqu 2*16(%rsp,%rbx), MSG2_A + movdqu 3*16(%rsp,%rbx), MSG3_A + + movdqu 0*16(DATA2), MSG0_B + movdqu 1*16(DATA2), MSG1_B + movdqu 2*16(DATA2), MSG2_B + movdqu 3*16(DATA2), MSG3_B + movdqa MSG0_B, 0*16(%rsp) + movdqa MSG1_B, 1*16(%rsp) + movdqa MSG2_B, 2*16(%rsp) + movdqa MSG3_B, 3*16(%rsp) + movdqu 0*16(%rsp,%rbx), MSG0_B + movdqu 1*16(%rsp,%rbx), MSG1_B + movdqu 2*16(%rsp,%rbx), MSG2_B + movdqu 3*16(%rsp,%rbx), MSG3_B + jmp .Lfinup2x_loop_have_data + + // Prepare a padding block, either: + // + // {0x80, 0, 0, 0, ..., count (as __be64)} + // This is for a block aligned message. + // + // { 0, 0, 0, 0, ..., count (as __be64)} + // This is for a message whose length mod 64 is >= 56. + // + // Pre-swap the endianness of the words. +.Lfinup2x_finalize_countonly: + pxor MSG0_A, MSG0_A + jmp 1f + +.Lfinup2x_finalize_blockaligned: + mov $0x80000000, %ebx + movd %ebx, MSG0_A +1: + pxor MSG1_A, MSG1_A + pxor MSG2_A, MSG2_A + ror $29, COUNT + movq COUNT, MSG3_A + pslldq $8, MSG3_A + movdqa MSG0_A, MSG0_B + pxor MSG1_B, MSG1_B + pxor MSG2_B, MSG2_B + movdqa MSG3_A, MSG3_B + mov $2, FINAL_STEP + jmp .Lfinup2x_loop_have_bswapped_data + +.Lfinup2x_done: + // Write the two digests with all bytes in the correct order. 
+ movdqa STATE0_A, TMP_A + movdqa STATE0_B, TMP_B + punpcklqdq STATE1_A, STATE0_A // GHEF + punpcklqdq STATE1_B, STATE0_B + punpckhqdq TMP_A, STATE1_A // ABCD + punpckhqdq TMP_B, STATE1_B + pshufd $0xB1, STATE0_A, STATE0_A // HGFE + pshufd $0xB1, STATE0_B, STATE0_B + pshufd $0x1B, STATE1_A, STATE1_A // DCBA + pshufd $0x1B, STATE1_B, STATE1_B + pshufb SHUF_MASK, STATE0_A + pshufb SHUF_MASK, STATE0_B + pshufb SHUF_MASK, STATE1_A + pshufb SHUF_MASK, STATE1_B + movdqu STATE0_A, 1*16(OUT1) + movdqu STATE0_B, 1*16(OUT2) + movdqu STATE1_A, 0*16(OUT1) + movdqu STATE1_B, 0*16(OUT2) + + mov %rbp, %rsp + pop %rbp + pop %rbx + RET +SYM_FUNC_END(__sha256_ni_finup2x) + .section .rodata.cst256.K256, "aM", @progbits, 256 .align 64 K256: .long 0x428a2f98,0x71374491,0xb5c0fbcf,0xe9b5dba5 .long 0x3956c25b,0x59f111f1,0x923f82a4,0xab1c5ed5 diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index e04a43d9f7d5..ff688bb1d560 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -331,10 +331,15 @@ static void unregister_sha256_avx2(void) #ifdef CONFIG_AS_SHA256_NI asmlinkage void sha256_ni_transform(struct sha256_state *digest, const u8 *data, int rounds); +asmlinkage void __sha256_ni_finup2x(const struct sha256_state *sctx, + const u8 *data1, const u8 *data2, int len, + u8 out1[SHA256_DIGEST_SIZE], + u8 out2[SHA256_DIGEST_SIZE]); + static int sha256_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return _sha256_update(desc, data, len, sha256_ni_transform); } @@ -355,18 +360,52 @@ static int sha256_ni_digest(struct shash_desc *desc, const u8 *data, { return sha256_base_init(desc) ?: sha256_ni_finup(desc, data, len, out); } +static int sha256_ni_finup_mb(struct shash_desc *desc, + const u8 * const data[], unsigned int len, + u8 * const outs[], unsigned int num_msgs) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + + /* + * num_msgs != 2 should not happen here, since this algorithm sets + * mb_max_msgs=2, and the crypto API handles num_msgs <= 1 before + * calling into the algorithm's finup_mb method. + */ + if (WARN_ON_ONCE(num_msgs != 2)) + return -EOPNOTSUPP; + + if (unlikely(!crypto_simd_usable())) + return -EOPNOTSUPP; + + /* __sha256_ni_finup2x() assumes SHA256_BLOCK_SIZE <= len <= INT_MAX. */ + if (unlikely(len < SHA256_BLOCK_SIZE || len > INT_MAX)) + return -EOPNOTSUPP; + + /* __sha256_ni_finup2x() assumes the following offsets. 
*/ + BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); + BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32); + BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40); + + kernel_fpu_begin(); + __sha256_ni_finup2x(sctx, data[0], data[1], len, outs[0], outs[1]); + kernel_fpu_end(); + return 0; +} + static struct shash_alg sha256_ni_algs[] = { { .digestsize = SHA256_DIGEST_SIZE, .init = sha256_base_init, .update = sha256_ni_update, .final = sha256_ni_final, .finup = sha256_ni_finup, .digest = sha256_ni_digest, + .finup_mb = sha256_ni_finup_mb, .descsize = sizeof(struct sha256_state), + .mb_max_msgs = 2, .base = { .cra_name = "sha256", .cra_driver_name = "sha256-ni", .cra_priority = 250, .cra_blocksize = SHA256_BLOCK_SIZE,

From patchwork Fri Jun 21 16:59:12 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Bart Van Assche, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v6 05/15] crypto: arm64/sha256-ce - add support for finup_mb
Date: Fri, 21 Jun 2024 09:59:12 -0700
Message-ID: <20240621165922.77672-6-ebiggers@kernel.org>
In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org>
References: <20240621165922.77672-1-ebiggers@kernel.org>

Add an implementation of finup_mb to sha256-ce, using an interleaving factor of 2. It interleaves a finup operation for two equal-length messages that share a common prefix. dm-verity and fs-verity will take advantage of this for greatly improved performance on capable CPUs.

On an ARM Cortex-X1, this increases the throughput of SHA-256 hashing 4096-byte messages by 70%.

Reviewed-by: Ard Biesheuvel
Reviewed-by: Sami Tolvanen
Acked-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 arch/arm64/crypto/sha2-ce-core.S | 281 ++++++++++++++++++++++++++++++-
 arch/arm64/crypto/sha2-ce-glue.c |  40 +++++
 2 files changed, 315 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S index fce84d88ddb2..fb5d5227e585 100644 --- a/arch/arm64/crypto/sha2-ce-core.S +++ b/arch/arm64/crypto/sha2-ce-core.S @@ -68,22 +68,26 @@ .word 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5 .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 + .macro load_round_constants tmp + adr_l \tmp, .Lsha2_rcon + ld1 { v0.4s- v3.4s}, [\tmp], #64 + ld1 { v4.4s- v7.4s}, [\tmp], #64 + ld1 { v8.4s-v11.4s}, [\tmp], #64 + ld1 {v12.4s-v15.4s}, [\tmp] + .endm + /* * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src, * int blocks) */ .text SYM_FUNC_START(__sha256_ce_transform) - /* load round constants */ - adr_l x8, .Lsha2_rcon - ld1 { v0.4s- v3.4s}, [x8], #64 - ld1 { v4.4s- v7.4s}, [x8], #64 - ld1 { v8.4s-v11.4s}, [x8], #64 - ld1 {v12.4s-v15.4s}, [x8] + + load_round_constants x8 /* load state */ ld1 {dgav.4s, dgbv.4s}, [x0] /* load sha256_ce_state::finalize */ @@ -153,5 +157,270 @@ CPU_LE( rev32 v19.16b, v19.16b ) /* store new state */ 3: st1 {dgav.4s, dgbv.4s}, [x0] mov w0, w2 ret SYM_FUNC_END(__sha256_ce_transform) + + .unreq dga + .unreq dgav + .unreq dgb + .unreq dgbv + .unreq t0 + .unreq t1 + .unreq dg0q + .unreq dg0v + .unreq dg1q + .unreq dg1v + .unreq dg2q + .unreq dg2v + + // parameters for __sha256_ce_finup2x() + sctx .req x0 + data1 .req x1 + data2 .req x2 + len .req w3 + out1 .req x4 + out2 .req x5 + + // other scalar variables + count .req x6 + final_step .req w7 + + // x8-x9 are used as temporaries. + + // v0-v15 are used to cache the SHA-256 round constants. + // v16-v19 are used for the message schedule for the first message. + // v20-v23 are used for the message schedule for the second message. + // v24-v31 are used for the state and temporaries as given below.
+ // *_a are for the first message and *_b for the second. + state0_a_q .req q24 + state0_a .req v24 + state1_a_q .req q25 + state1_a .req v25 + state0_b_q .req q26 + state0_b .req v26 + state1_b_q .req q27 + state1_b .req v27 + t0_a .req v28 + t0_b .req v29 + t1_a_q .req q30 + t1_a .req v30 + t1_b_q .req q31 + t1_b .req v31 + +#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count) +#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf) +// offsetof(struct sha256_state, state) is assumed to be 0. + + // Do 4 rounds of SHA-256 for each of two messages (interleaved). m0_a + // and m0_b contain the current 4 message schedule words for the first + // and second message respectively. + // + // If not all the message schedule words have been computed yet, then + // this also computes 4 more message schedule words for each message. + // m1_a-m3_a contain the next 3 groups of 4 message schedule words for + // the first message, and likewise m1_b-m3_b for the second. After + // consuming the current value of m0_a, this macro computes the group + // after m3_a and writes it to m0_a, and likewise for *_b. This means + // that the next (m0_a, m1_a, m2_a, m3_a) is the current (m1_a, m2_a, + // m3_a, m0_a), and likewise for *_b, so the caller must cycle through + // the registers accordingly. + .macro do_4rounds_2x i, k, m0_a, m1_a, m2_a, m3_a, \ + m0_b, m1_b, m2_b, m3_b + add t0_a\().4s, \m0_a\().4s, \k\().4s + add t0_b\().4s, \m0_b\().4s, \k\().4s + .if \i < 48 + sha256su0 \m0_a\().4s, \m1_a\().4s + sha256su0 \m0_b\().4s, \m1_b\().4s + sha256su1 \m0_a\().4s, \m2_a\().4s, \m3_a\().4s + sha256su1 \m0_b\().4s, \m2_b\().4s, \m3_b\().4s + .endif + mov t1_a.16b, state0_a.16b + mov t1_b.16b, state0_b.16b + sha256h state0_a_q, state1_a_q, t0_a\().4s + sha256h state0_b_q, state1_b_q, t0_b\().4s + sha256h2 state1_a_q, t1_a_q, t0_a\().4s + sha256h2 state1_b_q, t1_b_q, t0_b\().4s + .endm + + .macro do_16rounds_2x i, k0, k1, k2, k3 + do_4rounds_2x \i + 0, \k0, v16, v17, v18, v19, v20, v21, v22, v23 + do_4rounds_2x \i + 4, \k1, v17, v18, v19, v16, v21, v22, v23, v20 + do_4rounds_2x \i + 8, \k2, v18, v19, v16, v17, v22, v23, v20, v21 + do_4rounds_2x \i + 12, \k3, v19, v16, v17, v18, v23, v20, v21, v22 + .endm + +// +// void __sha256_ce_finup2x(const struct sha256_state *sctx, +// const u8 *data1, const u8 *data2, int len, +// u8 out1[SHA256_DIGEST_SIZE], +// u8 out2[SHA256_DIGEST_SIZE]); +// +// This function computes the SHA-256 digests of two messages |data1| and +// |data2| that are both |len| bytes long, starting from the initial state +// |sctx|. |len| must be at least SHA256_BLOCK_SIZE. +// +// The instructions for the two SHA-256 operations are interleaved. On many +// CPUs, this is almost twice as fast as hashing each message individually due +// to taking better advantage of the CPU's SHA-256 and SIMD throughput. +// +SYM_FUNC_START(__sha256_ce_finup2x) + sub sp, sp, #128 + mov final_step, #0 + load_round_constants x8 + + // Load the initial state from sctx->state. + ld1 {state0_a.4s-state1_a.4s}, [sctx] + + // Load sctx->count. Take the mod 64 of it to get the number of bytes + // that are buffered in sctx->buf. Also save it in a register with len + // added to it. + ldr x8, [sctx, #OFFSETOF_COUNT] + add count, x8, len, sxtw + and x8, x8, #63 + cbz x8, .Lfinup2x_enter_loop // No bytes buffered? + + // x8 bytes (1 to 63) are currently buffered in sctx->buf. Load them + // followed by the first 64 - x8 bytes of data. 
Since len >= 64, we + // just load 64 bytes from each of sctx->buf, data1, and data2 + // unconditionally and rearrange the data as needed. + add x9, sctx, #OFFSETOF_BUF + ld1 {v16.16b-v19.16b}, [x9] + st1 {v16.16b-v19.16b}, [sp] + + ld1 {v16.16b-v19.16b}, [data1], #64 + add x9, sp, x8 + st1 {v16.16b-v19.16b}, [x9] + ld1 {v16.4s-v19.4s}, [sp] + + ld1 {v20.16b-v23.16b}, [data2], #64 + st1 {v20.16b-v23.16b}, [x9] + ld1 {v20.4s-v23.4s}, [sp] + + sub len, len, #64 + sub data1, data1, x8 + sub data2, data2, x8 + add len, len, w8 + mov state0_b.16b, state0_a.16b + mov state1_b.16b, state1_a.16b + b .Lfinup2x_loop_have_data + +.Lfinup2x_enter_loop: + sub len, len, #64 + mov state0_b.16b, state0_a.16b + mov state1_b.16b, state1_a.16b +.Lfinup2x_loop: + // Load the next two data blocks. + ld1 {v16.4s-v19.4s}, [data1], #64 + ld1 {v20.4s-v23.4s}, [data2], #64 +.Lfinup2x_loop_have_data: + // Convert the words of the data blocks from big endian. +CPU_LE( rev32 v16.16b, v16.16b ) +CPU_LE( rev32 v17.16b, v17.16b ) +CPU_LE( rev32 v18.16b, v18.16b ) +CPU_LE( rev32 v19.16b, v19.16b ) +CPU_LE( rev32 v20.16b, v20.16b ) +CPU_LE( rev32 v21.16b, v21.16b ) +CPU_LE( rev32 v22.16b, v22.16b ) +CPU_LE( rev32 v23.16b, v23.16b ) +.Lfinup2x_loop_have_bswapped_data: + + // Save the original state for each block. + st1 {state0_a.4s-state1_b.4s}, [sp] + + // Do the SHA-256 rounds on each block. + do_16rounds_2x 0, v0, v1, v2, v3 + do_16rounds_2x 16, v4, v5, v6, v7 + do_16rounds_2x 32, v8, v9, v10, v11 + do_16rounds_2x 48, v12, v13, v14, v15 + + // Add the original state for each block. + ld1 {v16.4s-v19.4s}, [sp] + add state0_a.4s, state0_a.4s, v16.4s + add state1_a.4s, state1_a.4s, v17.4s + add state0_b.4s, state0_b.4s, v18.4s + add state1_b.4s, state1_b.4s, v19.4s + + // Update len and loop back if more blocks remain. + sub len, len, #64 + tbz len, #31, .Lfinup2x_loop // len >= 0? + + // Check if any final blocks need to be handled. + // final_step = 2: all done + // final_step = 1: need to do count-only padding block + // final_step = 0: need to do the block with 0x80 padding byte + tbnz final_step, #1, .Lfinup2x_done + tbnz final_step, #0, .Lfinup2x_finalize_countonly + add len, len, #64 + cbz len, .Lfinup2x_finalize_blockaligned + + // Not block-aligned; 1 <= len <= 63 data bytes remain. Pad the block. + // To do this, write the padding starting with the 0x80 byte to + // &sp[64]. Then for each message, copy the last 64 data bytes to sp + // and load from &sp[64 - len] to get the needed padding block. This + // code relies on the data buffers being >= 64 bytes in length. + sub w8, len, #64 // w8 = len - 64 + add data1, data1, w8, sxtw // data1 += len - 64 + add data2, data2, w8, sxtw // data2 += len - 64 + mov x9, 0x80 + fmov d16, x9 + movi v17.16b, #0 + stp q16, q17, [sp, #64] + stp q17, q17, [sp, #96] + sub x9, sp, w8, sxtw // x9 = &sp[64 - len] + cmp len, #56 + b.ge 1f // will count spill into its own block? + lsl count, count, #3 + rev count, count + str count, [x9, #56] + mov final_step, #2 // won't need count-only block + b 2f +1: + mov final_step, #1 // will need count-only block +2: + ld1 {v16.16b-v19.16b}, [data1] + st1 {v16.16b-v19.16b}, [sp] + ld1 {v16.4s-v19.4s}, [x9] + ld1 {v20.16b-v23.16b}, [data2] + st1 {v20.16b-v23.16b}, [sp] + ld1 {v20.4s-v23.4s}, [x9] + b .Lfinup2x_loop_have_data + + // Prepare a padding block, either: + // + // {0x80, 0, 0, 0, ..., count (as __be64)} + // This is for a block aligned message. 
+ // + // { 0, 0, 0, 0, ..., count (as __be64)} + // This is for a message whose length mod 64 is >= 56. + // + // Pre-swap the endianness of the words. +.Lfinup2x_finalize_countonly: + movi v16.2d, #0 + b 1f +.Lfinup2x_finalize_blockaligned: + mov x8, #0x80000000 + fmov d16, x8 +1: + movi v17.2d, #0 + movi v18.2d, #0 + ror count, count, #29 // ror(lsl(count, 3), 32) + mov v19.d[0], xzr + mov v19.d[1], count + mov v20.16b, v16.16b + movi v21.2d, #0 + movi v22.2d, #0 + mov v23.16b, v19.16b + mov final_step, #2 + b .Lfinup2x_loop_have_bswapped_data + +.Lfinup2x_done: + // Write the two digests with all bytes in the correct order. +CPU_LE( rev32 state0_a.16b, state0_a.16b ) +CPU_LE( rev32 state1_a.16b, state1_a.16b ) +CPU_LE( rev32 state0_b.16b, state0_b.16b ) +CPU_LE( rev32 state1_b.16b, state1_b.16b ) + st1 {state0_a.4s-state1_a.4s}, [out1] + st1 {state0_b.4s-state1_b.4s}, [out2] + add sp, sp, #128 + ret +SYM_FUNC_END(__sha256_ce_finup2x) diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c index 0a44d2e7ee1f..b37cffc4191f 100644 --- a/arch/arm64/crypto/sha2-ce-glue.c +++ b/arch/arm64/crypto/sha2-ce-glue.c @@ -31,10 +31,15 @@ extern const u32 sha256_ce_offsetof_count; extern const u32 sha256_ce_offsetof_finalize; asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src, int blocks); +asmlinkage void __sha256_ce_finup2x(const struct sha256_state *sctx, + const u8 *data1, const u8 *data2, int len, + u8 out1[SHA256_DIGEST_SIZE], + u8 out2[SHA256_DIGEST_SIZE]); + static void sha256_ce_transform(struct sha256_state *sst, u8 const *src, int blocks) { while (blocks) { int rem; @@ -122,10 +127,43 @@ static int sha256_ce_digest(struct shash_desc *desc, const u8 *data, { sha256_base_init(desc); return sha256_ce_finup(desc, data, len, out); } +static int sha256_ce_finup_mb(struct shash_desc *desc, + const u8 * const data[], unsigned int len, + u8 * const outs[], unsigned int num_msgs) +{ + struct sha256_ce_state *sctx = shash_desc_ctx(desc); + + /* + * num_msgs != 2 should not happen here, since this algorithm sets + * mb_max_msgs=2, and the crypto API handles num_msgs <= 1 before + * calling into the algorithm's finup_mb method. + */ + if (WARN_ON_ONCE(num_msgs != 2)) + return -EOPNOTSUPP; + + if (unlikely(!crypto_simd_usable())) + return -EOPNOTSUPP; + + /* __sha256_ce_finup2x() assumes SHA256_BLOCK_SIZE <= len <= INT_MAX. */ + if (unlikely(len < SHA256_BLOCK_SIZE || len > INT_MAX)) + return -EOPNOTSUPP; + + /* __sha256_ce_finup2x() assumes the following offsets. 
*/ + BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); + BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32); + BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40); + + kernel_neon_begin(); + __sha256_ce_finup2x(&sctx->sst, data[0], data[1], len, outs[0], + outs[1]); + kernel_neon_end(); + return 0; +} + static int sha256_ce_export(struct shash_desc *desc, void *out) { struct sha256_ce_state *sctx = shash_desc_ctx(desc); memcpy(out, &sctx->sst, sizeof(struct sha256_state)); @@ -162,13 +200,15 @@ static struct shash_alg algs[] = { { .init = sha256_base_init, .update = sha256_ce_update, .final = sha256_ce_final, .finup = sha256_ce_finup, .digest = sha256_ce_digest, + .finup_mb = sha256_ce_finup_mb, .export = sha256_ce_export, .import = sha256_ce_import, .descsize = sizeof(struct sha256_ce_state), + .mb_max_msgs = 2, .statesize = sizeof(struct sha256_state), .digestsize = SHA256_DIGEST_SIZE, .base = { .cra_name = "sha256", .cra_driver_name = "sha256-ce", From patchwork Fri Jun 21 16:59:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707919 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D0288C2BD05 for ; Fri, 21 Jun 2024 17:00:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=pqi2LzR+eXWGijl++j2bQ+HmItloRsUaENwFpVU9R9k=; b=3DkFBVx7+ls/C3a02RZTRELDan 65IwjkkY3sg+2PciaY2MmhYHlN09mERjRiatoZoEN07yqyM2GFBCssSTZthOgE02mkGvcJqp8LBQk 8/XMkSP/tp3qtdYVko7rOwXgHvWd1n7ubWo6JXPO4TdebHkkDJtxCB1WNET8dEwalZPU9ZJdQHIAT NnzV0j3KkGICfq47yKlHUFN4+Bkl0u1qQjgb3ynWWUv9Fj9QohaAZAWns1XhQHrTxtpnN/pdYsGFV OTciSbwpeydgiaa3pNzrY+w9tVToxNKO+IgoVVaSVmMlJZwTNMhrxz8K+O7r0++dsbL3UMgkq+9sr eRtxVElQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcp-0000000A1ir-3810; Fri, 21 Jun 2024 17:00:35 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcZ-0000000A1WU-3DB6 for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:22 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 1D61662B64; Fri, 21 Jun 2024 17:00:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8CE9BC4AF08; Fri, 21 Jun 2024 17:00:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989218; bh=h4goIHE8JZiiBX5at0AKaJh1622KaTu4aeyplnUlqSA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=S+ljcL6Vw29bFpZwU5rAHqO/h3jllmrcCt0hf0RV/ATwttH8GbykWTkIjhW2xvoJH VIjuj9WvI60M8wpYLcpVN3uYa1WuXA42/urW/anwqAQYxecRv1qRvIPWN+Wx8noFco RTvZa8PV2sh0dW7vaIxFsogk9l8h9CVM8prw+zwUpBtcVFkcklKXCAGnah0RUmoAyU 
jTiR+vnaQgzBco9BZZymRAMGRCB/afqLQkmfXIAGFHvmgSlfRZjyVjqexrddjZdIIK tKWlddceeLb+EkgtFT9Vp8lSpYQP17Z3XTyQsF+7tkkCo8oSYAVcAf72Dz4SuowwnQ k0kEelMRjcGzw== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 06/15] fsverity: improve performance by using multibuffer hashing Date: Fri, 21 Jun 2024 09:59:13 -0700 Message-ID: <20240621165922.77672-7-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_100019_994724_961F80E5 X-CRM114-Status: GOOD ( 35.09 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers When supported by the hash algorithm, use crypto_shash_finup_mb() to interleave the hashing of pairs of data blocks. On some CPUs this nearly doubles hashing performance. The increase in overall throughput of cold-cache fsverity reads that I'm seeing on arm64 and x86_64 is roughly 35% (though this metric is hard to measure as it jumps around a lot). For now this is only done on the verification path, and only for data blocks, not Merkle tree blocks. We could use finup_mb on Merkle tree blocks too, but that is less important as there aren't as many Merkle tree blocks as data blocks, and that would require some additional code restructuring. We could also use finup_mb to accelerate building the Merkle tree, but verification performance is more important. Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- fs/verity/fsverity_private.h | 7 ++ fs/verity/hash_algs.c | 8 +- fs/verity/verify.c | 169 +++++++++++++++++++++++++++++------ 3 files changed, 153 insertions(+), 31 deletions(-) diff --git a/fs/verity/fsverity_private.h b/fs/verity/fsverity_private.h index b3506f56e180..7535c9d9b516 100644 --- a/fs/verity/fsverity_private.h +++ b/fs/verity/fsverity_private.h @@ -27,10 +27,15 @@ struct fsverity_hash_alg { /* * The HASH_ALGO_* constant for this algorithm. This is different from * FS_VERITY_HASH_ALG_*, which uses a different numbering scheme. 
*/ enum hash_algo algo_id; + /* + * The maximum supported interleaving factor for multibuffer hashing, or + * 1 if the algorithm doesn't support multibuffer hashing + */ + int mb_max_msgs; }; /* Merkle tree parameters: hash algorithm, initial hash state, and topology */ struct merkle_tree_params { const struct fsverity_hash_alg *hash_alg; /* the hash algorithm */ @@ -150,8 +155,10 @@ static inline void fsverity_init_signature(void) } #endif /* !CONFIG_FS_VERITY_BUILTIN_SIGNATURES */ /* verify.c */ +#define FS_VERITY_MAX_PENDING_DATA_BLOCKS 2 + void __init fsverity_init_workqueue(void); #endif /* _FSVERITY_PRIVATE_H */ diff --git a/fs/verity/hash_algs.c b/fs/verity/hash_algs.c index 6b08b1d9a7d7..f24d7c295455 100644 --- a/fs/verity/hash_algs.c +++ b/fs/verity/hash_algs.c @@ -82,12 +82,16 @@ const struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode, if (WARN_ON_ONCE(alg->digest_size != crypto_shash_digestsize(tfm))) goto err_free_tfm; if (WARN_ON_ONCE(alg->block_size != crypto_shash_blocksize(tfm))) goto err_free_tfm; - pr_info("%s using implementation \"%s\"\n", - alg->name, crypto_shash_driver_name(tfm)); + alg->mb_max_msgs = min(crypto_shash_mb_max_msgs(tfm), + FS_VERITY_MAX_PENDING_DATA_BLOCKS); + + pr_info("%s using implementation \"%s\"%s\n", + alg->name, crypto_shash_driver_name(tfm), + alg->mb_max_msgs > 1 ? " (multibuffer)" : ""); /* pairs with smp_load_acquire() above */ smp_store_release(&alg->tfm, tfm); goto out_unlock; diff --git a/fs/verity/verify.c b/fs/verity/verify.c index 4fcad0825a12..a07651c22520 100644 --- a/fs/verity/verify.c +++ b/fs/verity/verify.c @@ -8,10 +8,31 @@ #include "fsverity_private.h" #include #include +struct fsverity_pending_block { + const void *data; + u64 pos; + u8 real_hash[FS_VERITY_MAX_DIGEST_SIZE]; +}; + +struct fsverity_verification_context { + struct inode *inode; + struct fsverity_info *vi; + unsigned long max_ra_pages; + + /* + * This is the queue of data blocks that are pending verification. We + * allow multiple blocks to be queued up in order to support multibuffer + * hashing, i.e. interleaving the hashing of multiple messages. On many + * CPUs this improves performance significantly. + */ + int num_pending; + struct fsverity_pending_block pending_blocks[FS_VERITY_MAX_PENDING_DATA_BLOCKS]; +}; + static struct workqueue_struct *fsverity_read_workqueue; /* * Returns true if the hash block with index @hblock_idx in the tree, located in * @hpage, has already been verified. @@ -77,23 +98,25 @@ static bool is_hash_block_verified(struct fsverity_info *vi, struct page *hpage, SetPageChecked(hpage); return false; } /* - * Verify a single data block against the file's Merkle tree. + * Verify the hash of a single data block against the file's Merkle tree. * * In principle, we need to verify the entire path to the root node. However, * for efficiency the filesystem may cache the hash blocks. Therefore we need * only ascend the tree until an already-verified hash block is seen, and then * verify the path to that block. * * Return: %true if the data block is valid, else %false. 
*/ static bool verify_data_block(struct inode *inode, struct fsverity_info *vi, - const void *data, u64 data_pos, unsigned long max_ra_pages) + const struct fsverity_pending_block *dblock, + unsigned long max_ra_pages) { + const u64 data_pos = dblock->pos; const struct merkle_tree_params *params = &vi->tree_params; const unsigned int hsize = params->digest_size; int level; u8 _want_hash[FS_VERITY_MAX_DIGEST_SIZE]; const u8 *want_hash; @@ -113,23 +136,27 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi, * The index of the previous level's block within that level; also the * index of that block's hash within the current level. */ u64 hidx = data_pos >> params->log_blocksize; - /* Up to 1 + FS_VERITY_MAX_LEVELS pages may be mapped at once */ - BUILD_BUG_ON(1 + FS_VERITY_MAX_LEVELS > KM_MAX_IDX); + /* + * Up to FS_VERITY_MAX_PENDING_DATA_BLOCKS + FS_VERITY_MAX_LEVELS pages + * may be mapped at once. + */ + BUILD_BUG_ON(FS_VERITY_MAX_PENDING_DATA_BLOCKS + + FS_VERITY_MAX_LEVELS > KM_MAX_IDX); if (unlikely(data_pos >= inode->i_size)) { /* * This can happen in the data page spanning EOF when the Merkle * tree block size is less than the page size. The Merkle tree * doesn't cover data blocks fully past EOF. But the entire * page spanning EOF can be visible to userspace via a mmap, and * any part past EOF should be all zeroes. Therefore, we need * to verify that any data blocks fully past EOF are all zeroes. */ - if (memchr_inv(data, 0, params->block_size)) { + if (memchr_inv(dblock->data, 0, params->block_size)) { fsverity_err(inode, "FILE CORRUPTED! Data past EOF is not zeroed"); return false; } return true; @@ -219,54 +246,120 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi, want_hash = _want_hash; kunmap_local(haddr); put_page(hpage); } - /* Finally, verify the data block. */ - if (fsverity_hash_block(params, inode, data, real_hash) != 0) - goto error; - if (memcmp(want_hash, real_hash, hsize) != 0) + /* Finally, verify the hash of the data block. */ + if (memcmp(want_hash, dblock->real_hash, hsize) != 0) goto corrupted; return true; corrupted: fsverity_err(inode, "FILE CORRUPTED! pos=%llu, level=%d, want_hash=%s:%*phN, real_hash=%s:%*phN", data_pos, level - 1, params->hash_alg->name, hsize, want_hash, - params->hash_alg->name, hsize, real_hash); + params->hash_alg->name, hsize, + level == 0 ? 
dblock->real_hash : real_hash); error: for (; level > 0; level--) { kunmap_local(hblocks[level - 1].addr); put_page(hblocks[level - 1].page); } return false; } +static void +fsverity_init_verification_context(struct fsverity_verification_context *ctx, + struct inode *inode, + unsigned long max_ra_pages) +{ + ctx->inode = inode; + ctx->vi = inode->i_verity_info; + ctx->max_ra_pages = max_ra_pages; + ctx->num_pending = 0; +} + +static void +fsverity_clear_pending_blocks(struct fsverity_verification_context *ctx) +{ + int i; + + for (i = ctx->num_pending - 1; i >= 0; i--) { + kunmap_local(ctx->pending_blocks[i].data); + ctx->pending_blocks[i].data = NULL; + } + ctx->num_pending = 0; +} + +static bool +fsverity_verify_pending_blocks(struct fsverity_verification_context *ctx) +{ + struct inode *inode = ctx->inode; + struct fsverity_info *vi = ctx->vi; + const struct merkle_tree_params *params = &vi->tree_params; + SHASH_DESC_ON_STACK(desc, params->hash_alg->tfm); + const u8 *data[FS_VERITY_MAX_PENDING_DATA_BLOCKS]; + u8 *real_hashes[FS_VERITY_MAX_PENDING_DATA_BLOCKS]; + int i; + int err; + + if (ctx->num_pending == 0) + return true; + + for (i = 0; i < ctx->num_pending; i++) { + data[i] = ctx->pending_blocks[i].data; + real_hashes[i] = ctx->pending_blocks[i].real_hash; + } + + desc->tfm = params->hash_alg->tfm; + if (params->hashstate) + err = crypto_shash_import(desc, params->hashstate); + else + err = crypto_shash_init(desc); + if (err) { + fsverity_err(inode, "Error %d importing hash state", err); + return false; + } + err = crypto_shash_finup_mb(desc, data, params->block_size, real_hashes, + ctx->num_pending); + if (err) { + fsverity_err(inode, "Error %d computing block hashes", err); + return false; + } + + for (i = 0; i < ctx->num_pending; i++) { + if (!verify_data_block(inode, vi, &ctx->pending_blocks[i], + ctx->max_ra_pages)) + return false; + } + + fsverity_clear_pending_blocks(ctx); + return true; +} + static bool -verify_data_blocks(struct folio *data_folio, size_t len, size_t offset, - unsigned long max_ra_pages) +fsverity_add_data_blocks(struct fsverity_verification_context *ctx, + struct folio *data_folio, size_t len, size_t offset) { - struct inode *inode = data_folio->mapping->host; - struct fsverity_info *vi = inode->i_verity_info; - const unsigned int block_size = vi->tree_params.block_size; + struct fsverity_info *vi = ctx->vi; + const struct merkle_tree_params *params = &vi->tree_params; + const unsigned int block_size = params->block_size; + const int mb_max_msgs = params->hash_alg->mb_max_msgs; u64 pos = (u64)data_folio->index << PAGE_SHIFT; if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offset, block_size))) return false; if (WARN_ON_ONCE(!folio_test_locked(data_folio) || folio_test_uptodate(data_folio))) return false; do { - void *data; - bool valid; - - data = kmap_local_folio(data_folio, offset); - valid = verify_data_block(inode, vi, data, pos + offset, - max_ra_pages); - kunmap_local(data); - if (!valid) + ctx->pending_blocks[ctx->num_pending].data = + kmap_local_folio(data_folio, offset); + ctx->pending_blocks[ctx->num_pending].pos = pos + offset; + if (++ctx->num_pending == mb_max_msgs && + !fsverity_verify_pending_blocks(ctx)) return false; offset += block_size; len -= block_size; } while (len); return true; @@ -284,11 +377,19 @@ verify_data_blocks(struct folio *data_folio, size_t len, size_t offset, * * Return: %true if the data is valid, else %false. 
*/ bool fsverity_verify_blocks(struct folio *folio, size_t len, size_t offset) { - return verify_data_blocks(folio, len, offset, 0); + struct fsverity_verification_context ctx; + + fsverity_init_verification_context(&ctx, folio->mapping->host, 0); + + if (fsverity_add_data_blocks(&ctx, folio, len, offset) && + fsverity_verify_pending_blocks(&ctx)) + return true; + fsverity_clear_pending_blocks(&ctx); + return false; } EXPORT_SYMBOL_GPL(fsverity_verify_blocks); #ifdef CONFIG_BLOCK /** @@ -305,10 +406,12 @@ EXPORT_SYMBOL_GPL(fsverity_verify_blocks); * filesystems) must instead call fsverity_verify_page() directly on each page. * All filesystems must also call fsverity_verify_page() on holes. */ void fsverity_verify_bio(struct bio *bio) { + struct inode *inode = bio_first_folio_all(bio)->mapping->host; + struct fsverity_verification_context ctx; struct folio_iter fi; unsigned long max_ra_pages = 0; if (bio->bi_opf & REQ_RAHEAD) { /* @@ -321,17 +424,25 @@ void fsverity_verify_bio(struct bio *bio) * reduces the number of I/O requests made to the Merkle tree. */ max_ra_pages = bio->bi_iter.bi_size >> (PAGE_SHIFT + 2); } + fsverity_init_verification_context(&ctx, inode, max_ra_pages); + bio_for_each_folio_all(fi, bio) { - if (!verify_data_blocks(fi.folio, fi.length, fi.offset, - max_ra_pages)) { - bio->bi_status = BLK_STS_IOERR; - break; - } + if (!fsverity_add_data_blocks(&ctx, fi.folio, fi.length, + fi.offset)) + goto ioerr; } + + if (!fsverity_verify_pending_blocks(&ctx)) + goto ioerr; + return; + +ioerr: + fsverity_clear_pending_blocks(&ctx); + bio->bi_status = BLK_STS_IOERR; } EXPORT_SYMBOL_GPL(fsverity_verify_bio); #endif /* CONFIG_BLOCK */ /** From patchwork Fri Jun 21 16:59:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707918 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2DAAEC2D0CE for ; Fri, 21 Jun 2024 17:00:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=hlWn0plhKEvs9CILKD0643ZcI2soC+xhx2CeDdDTQ9g=; b=MLfZ9NBRfOKAFAL7bGh5yhCfnA E+j1P7KiNtDlvjnwmzuMZqf/yqfF9ad7SbCbpV+HxogR5SxB855d/KnNH2Jvberg4LISgDZ+d6g0v oKeCTktn1vEl8h+n4hIi49scpdHiRnKB4zuFh4UCheJFoh51ux7dDL4qWLSDdgeG7NoLhBDpSiwqZ VBAx1M2+IT8OXnkd8tcxgX8BoJULVo2HEPUtIEexsUjRR8r0lwhMXv5o7wkXyuzSuFPdyUHttPbuE iNx3iqwbWBjJcU4i/jJSrGVTVdwzUwd7fMo0ykPgjSEvlkzKtXqYKPA9SGk7z5mg2EB/CP9chJPFM n9fnKpsw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcr-0000000A1jz-2ahc; Fri, 21 Jun 2024 17:00:37 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhca-0000000A1Wx-1cek for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:23 +0000 Received: from 
smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 9D69162B59; Fri, 21 Jun 2024 17:00:19 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 174F4C3277B; Fri, 21 Jun 2024 17:00:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989219; bh=ypS3wDjrki602PRpZAix8qK+2Aw3HThVdjAOH9xqvE4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=s0qyhFijQyiGvRe1i3Husc3d43tP6AUy3GN22wuCtRmhMwDbr7uCpjEqBcvJJAzyv icqD39KaL9IYIj2jozTEbbeya9NZO5os4ONh4R1WuwIZngQ+5JDNGdjJwRLl/mhvhp pekZInU35oP3gTqX4KnPe1RgKa1SF9pVR3vehFY89WHzHsex0Cw/Re+Gli99HFMWoE Y7OOu8fWgtd1PpG/3AXiftn5jXgAHlOc5RJ2AM5nwQbN3KUq8WMXcGRzoKFk3Bef1P gS6XpDu4A69+vLNCmC1gTue8+Sx1bIJr3lD+u/a4vJXz6b2ZnqWlEjbLBB8gMQwoYO SfHtzxP9W8ThQ== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 07/15] dm-verity: move hash algorithm setup into its own function Date: Fri, 21 Jun 2024 09:59:14 -0700 Message-ID: <20240621165922.77672-8-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_100020_745673_F7B91C4D X-CRM114-Status: GOOD ( 15.17 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Move the code that sets up the hash transformation into its own function. No change in behavior. Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-target.c | 70 +++++++++++++++++++---------------- 1 file changed, 39 insertions(+), 31 deletions(-) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index bb5da66da4c1..88d2a49dca43 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1224,10 +1224,47 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, } while (argc && !r); return r; } +static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name) +{ + struct dm_target *ti = v->ti; + struct crypto_ahash *ahash; + + v->alg_name = kstrdup(alg_name, GFP_KERNEL); + if (!v->alg_name) { + ti->error = "Cannot allocate algorithm name"; + return -ENOMEM; + } + + ahash = crypto_alloc_ahash(alg_name, 0, + v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0); + if (IS_ERR(ahash)) { + ti->error = "Cannot initialize hash function"; + return PTR_ERR(ahash); + } + v->tfm = ahash; + + /* + * dm-verity performance can vary greatly depending on which hash + * algorithm implementation is used. Help people debug performance + * problems by logging the ->cra_driver_name. 
+ */ + DMINFO("%s using implementation \"%s\"", alg_name, + crypto_hash_alg_common(ahash)->base.cra_driver_name); + + v->digest_size = crypto_ahash_digestsize(ahash); + if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { + ti->error = "Digest size too big"; + return -EINVAL; + } + v->ahash_reqsize = sizeof(struct ahash_request) + + crypto_ahash_reqsize(ahash); + return 0; +} + /* * Target parameters: * The current format is version 1. * Vsn 0 is compatible with original Chromium OS releases. * @@ -1348,42 +1385,13 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) r = -EINVAL; goto bad; } v->hash_start = num_ll; - v->alg_name = kstrdup(argv[7], GFP_KERNEL); - if (!v->alg_name) { - ti->error = "Cannot allocate algorithm name"; - r = -ENOMEM; - goto bad; - } - - v->tfm = crypto_alloc_ahash(v->alg_name, 0, - v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0); - if (IS_ERR(v->tfm)) { - ti->error = "Cannot initialize hash function"; - r = PTR_ERR(v->tfm); - v->tfm = NULL; - goto bad; - } - - /* - * dm-verity performance can vary greatly depending on which hash - * algorithm implementation is used. Help people debug performance - * problems by logging the ->cra_driver_name. - */ - DMINFO("%s using implementation \"%s\"", v->alg_name, - crypto_hash_alg_common(v->tfm)->base.cra_driver_name); - - v->digest_size = crypto_ahash_digestsize(v->tfm); - if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { - ti->error = "Digest size too big"; - r = -EINVAL; + r = verity_setup_hash_alg(v, argv[7]); + if (r) goto bad; - } - v->ahash_reqsize = sizeof(struct ahash_request) + - crypto_ahash_reqsize(v->tfm); v->root_digest = kmalloc(v->digest_size, GFP_KERNEL); if (!v->root_digest) { ti->error = "Cannot allocate root digest"; r = -ENOMEM; From patchwork Fri Jun 21 16:59:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707920 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 36D98C2BB85 for ; Fri, 21 Jun 2024 17:00:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=43nNacA326WAHA9z5sG5q+Y27jhjZLlTtpxWfy/tyz0=; b=NuQGaeLQaPCouVLPAy9nDcJWW4 HVgWGLhc2JJZ8Ary9JHASN1Or400cFu59wjTeQERIFz0zhdvaKmvNvnqiwrKXcRj6BNoojRERdDwO ecxaa3FY9Anlaxfl4m79RogXKHK5UaMbBdolIBQAH1T+iwEb/c6O7mhj+kpL0OwjiwRFQA0DiSjvP 3G8k7sqVqbdTw+tRlQWO9GAgAh+Ta1QygMQMh4RFqFDp7unjRMQ9fi8A9szYUyAD4oYLBOIm0LPb2 g2tJMvVnAJTwQJsqFT+hG/uICzWansepi6I2hSL+CeqIS+6mvxBTn9snmxXQhfxHOS8ANQYBufMLI jLTmGsIw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcs-0000000A1kw-3cfb; Fri, 21 Jun 2024 17:00:38 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 
1sKhca-0000000A1Xe-34ib for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:24 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 2887262B5B; Fri, 21 Jun 2024 17:00:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9685FC4AF10; Fri, 21 Jun 2024 17:00:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989220; bh=KKuxmjZHgDTU3MsqwzhYo+9QcXkXm3thNcp3gDEyqPY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LpCtmj3nujzipFUQrhmHF1VUqZr7KB/HzGNL3mCd2NHOEpTJrCK+RmIfayu2/KLme suXnoiO9QzWitufX3vIDBirSjAwecWqF0hp/G1jV9rQjzQ/yahDR1mmakxlegEed6p SqX7Aosi5Z33GaiKtMegRA+lASh2Q01iJ3mwVW7qkBcTmCw2OUBwg3OC2kKZH20VLa yyVvS1dynurnAaoFrp2gHOZ+owqf7PJxUoFcbUsj3kTsLS70tdlOvU5yrK3N2WIbbk +vF3qCfFHXjZC5OegQ8hvETirnUFENqevBayFqAdHM98tTDvFzaecIhhka70746iju ShKaAfvYZggHQ== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 08/15] dm-verity: move data hash mismatch handling into its own function Date: Fri, 21 Jun 2024 09:59:15 -0700 Message-ID: <20240621165922.77672-9-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_100021_029964_E51441BB X-CRM114-Status: GOOD ( 15.50 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Move the code that handles mismatches of data block hashes into its own function so that it doesn't clutter up verity_verify_io(). Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-target.c | 64 ++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 28 deletions(-) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 88d2a49dca43..796d85526696 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -540,10 +540,42 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, mempool_free(page, &v->recheck_pool); return r; } +static int verity_handle_data_hash_mismatch(struct dm_verity *v, + struct dm_verity_io *io, + struct bio *bio, sector_t blkno, + struct bvec_iter *start) +{ + if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { + /* + * Error handling code (FEC included) cannot be run in the + * BH workqueue, so fallback to a standard workqueue. 
+ */ + return -EAGAIN; + } + if (verity_recheck(v, io, *start, blkno) == 0) { + if (v->validated_blocks) + set_bit(blkno, v->validated_blocks); + return 0; + } +#if defined(CONFIG_DM_VERITY_FEC) + if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, blkno, + NULL, start) == 0) + return 0; +#endif + if (bio->bi_status) + return -EIO; /* Error correction failed; Just return error */ + + if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, blkno)) { + dm_audit_log_bio(DM_MSG_PREFIX, "verify-data", bio, blkno, 0); + return -EIO; + } + return 0; +} + static int verity_bv_zero(struct dm_verity *v, struct dm_verity_io *io, u8 *data, size_t len) { memset(data, 0, len); return 0; @@ -632,39 +664,15 @@ static int verity_verify_io(struct dm_verity_io *io) if (likely(memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), v->digest_size) == 0)) { if (v->validated_blocks) set_bit(cur_block, v->validated_blocks); continue; - } else if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { - /* - * Error handling code (FEC included) cannot be run in a - * tasklet since it may sleep, so fallback to work-queue. - */ - return -EAGAIN; - } else if (verity_recheck(v, io, start, cur_block) == 0) { - if (v->validated_blocks) - set_bit(cur_block, v->validated_blocks); - continue; -#if defined(CONFIG_DM_VERITY_FEC) - } else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, - cur_block, NULL, &start) == 0) { - continue; -#endif - } else { - if (bio->bi_status) { - /* - * Error correction failed; Just return error - */ - return -EIO; - } - if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, - cur_block)) { - dm_audit_log_bio(DM_MSG_PREFIX, "verify-data", - bio, cur_block, 0); - return -EIO; - } } + r = verity_handle_data_hash_mismatch(v, io, bio, cur_block, + &start); + if (unlikely(r)) + return r; } return 0; } From patchwork Fri Jun 21 16:59:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707922 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 14D21C27C4F for ; Fri, 21 Jun 2024 17:01:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=owgwgc4wtWlUeusjqc/vfiG9sUthjzLcczlf7gJFXYM=; b=CJMLIN28cyBlHq3GfSZET0ZCAR TkG+4CRa7/eGLDvQbuAwBBp3eRdCfCRLpBI+6vgRFf5vW5So+vUM+k3kRkGhIv8+iBrGkiSAH0mhK F/21c4nHi/OPQboqfOawBVGQjpEP5pZrjUlAFrxM8qUapidEBHMqGFPlByCj8TFNKXKZInnQ9tcK9 6s5hmZUs9Fdb5ckWXqmQZ/6QKP9uMBBuImQeEeFf2dC5IX86JnIUk8r5jp/Ubp4OkrQtine3HECT1 zybZ55XNveO01vppgl+HXF7s4bGzhKVcVEdowbEmB8qM7jaNd2+69iaSh8JuH6wjfeTM2tyVQsDqL bax9rCog==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhd2-0000000A1su-0fvY; Fri, 21 Jun 2024 17:00:48 +0000 Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by 
bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcb-0000000A1YO-1Fa9 for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:24 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id A67C762B6E; Fri, 21 Jun 2024 17:00:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2090CC3277B; Fri, 21 Jun 2024 17:00:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989220; bh=FQIlGe1ODogKmI2ftbLTyX6pGZq9j7L+IwPSLI+6oQE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jrPFiXKi4LvCNOgnzw+phKBA5RMKNvKjBEYelo/7go6XV8yi4vFGII9AS/EJkWvdd GDOuq2HZ63To6E3tgcjpWX9Dcsar7WBntHgV6iRE/v0bTWlCGXk8f2THnUMc221xSJ DmaUJBepd428UBiTB3dqkMvg87ZQLgfdPsvBEllsUTsjwtBl7PhAzCUup2kmVwvAZq SccO2T5TsxGGtff0hPnRwsiJACJlRzRpvTdCDNy5+dLKvB4Ul2hjmvVV/DD8kHfE0j +0x2CBv9sWxCqIJdnBHhpQvnFbgCRSCIMS0D1DCyDjncDmOwWiKx08b+qbwJjYk6Gh Z6l5e45qmzHXQ== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 09/15] dm-verity: make real_digest and want_digest fixed-length Date: Fri, 21 Jun 2024 09:59:16 -0700 Message-ID: <20240621165922.77672-10-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_100021_607961_35882641 X-CRM114-Status: GOOD ( 15.60 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Change the digest fields in struct dm_verity_io from variable-length to fixed-length, since their maximum length is fixed at HASH_MAX_DIGESTSIZE, i.e. 64 bytes, which is not too big. This is simpler and makes the fields a bit faster to access. (HASH_MAX_DIGESTSIZE did not exist when this code was written, which may explain why it wasn't used.) This makes the verity_io_real_digest() and verity_io_want_digest() functions trivial, but this patch leaves them in place temporarily since most of their callers will go away in a later patch anyway. 
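To make the layout change concrete, here is a minimal standalone sketch (these are illustrative structs, not the real struct dm_verity_io; the reqsize member and the helper names are made up for the example). With a variable-length tail, every digest access needs runtime pointer arithmetic; with fixed-length arrays sized to HASH_MAX_DIGESTSIZE, the offsets are compile-time constants:

#include <stddef.h>
#include <stdint.h>

#define HASH_MAX_DIGESTSIZE 64	/* upper bound on any supported digest size */

/*
 * Old layout: both digests live in a variable-length tail behind the
 * struct, so every access needs runtime pointer arithmetic.
 */
struct io_varlen {
	size_t reqsize;		/* size of the hash request that follows */
	size_t digest_size;	/* digest size of the configured algorithm */
	/*
	 * u8 hash_req[reqsize];
	 * u8 real_digest[digest_size];
	 * u8 want_digest[digest_size];
	 */
};

static inline uint8_t *varlen_real_digest(struct io_varlen *io)
{
	return (uint8_t *)(io + 1) + io->reqsize;
}

/*
 * New layout: fixed-size arrays sized for the largest supported digest,
 * so the offsets are known at build time; only the hash request is still
 * a variable-length tail.
 */
struct io_fixed {
	size_t reqsize;
	size_t digest_size;
	uint8_t real_digest[HASH_MAX_DIGESTSIZE];
	uint8_t want_digest[HASH_MAX_DIGESTSIZE];
	/* u8 hash_req[reqsize]; */
};

static inline uint8_t *fixed_real_digest(struct io_fixed *io)
{
	return io->real_digest;
}

The trade-off is a little extra per-I/O memory, since the arrays are sized for the largest possible digest rather than the actual one, in exchange for simpler accesses at offsets the compiler knows in advance.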
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-target.c | 3 +-- drivers/md/dm-verity.h | 17 +++++++---------- 2 files changed, 8 insertions(+), 12 deletions(-) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 796d85526696..4ef814a7faf4 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1527,12 +1527,11 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) ti->error = "Cannot allocate workqueue"; r = -ENOMEM; goto bad; } - ti->per_io_data_size = sizeof(struct dm_verity_io) + - v->ahash_reqsize + v->digest_size * 2; + ti->per_io_data_size = sizeof(struct dm_verity_io) + v->ahash_reqsize; r = verity_fec_ctr(v); if (r) goto bad; diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 20b1bcf03474..5d3da9f5fc95 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -89,19 +89,16 @@ struct dm_verity_io { struct work_struct work; struct work_struct bh_work; char *recheck_buffer; + u8 real_digest[HASH_MAX_DIGESTSIZE]; + u8 want_digest[HASH_MAX_DIGESTSIZE]; + /* - * Three variably-size fields follow this struct: - * - * u8 hash_req[v->ahash_reqsize]; - * u8 real_digest[v->digest_size]; - * u8 want_digest[v->digest_size]; - * - * To access them use: verity_io_hash_req(), verity_io_real_digest() - * and verity_io_want_digest(). + * This struct is followed by a variable-sized struct ahash_request of + * size v->ahash_reqsize. To access it, use verity_io_hash_req(). */ }; static inline struct ahash_request *verity_io_hash_req(struct dm_verity *v, struct dm_verity_io *io) @@ -110,17 +107,17 @@ static inline struct ahash_request *verity_io_hash_req(struct dm_verity *v, } static inline u8 *verity_io_real_digest(struct dm_verity *v, struct dm_verity_io *io) { - return (u8 *)(io + 1) + v->ahash_reqsize; + return io->real_digest; } static inline u8 *verity_io_want_digest(struct dm_verity *v, struct dm_verity_io *io) { - return (u8 *)(io + 1) + v->ahash_reqsize + v->digest_size; + return io->want_digest; } extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io, struct bvec_iter *iter, int (*process)(struct dm_verity *v, From patchwork Fri Jun 21 16:59:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707923 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 882B2C2BB85 for ; Fri, 21 Jun 2024 17:02:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=wDVqVIy5Z7XyzfP//TWSgLNHRLpkeW+GJVNRaAVdFM8=; b=EgDqykSc87iT0rAZDyWrhqTrPJ v/SkmLbkNyQS0pfFYykMorT4pEWswHF982uJmyFbvF3btcnoRisqOpqwLCJvfUXaZbpKyBRRXZAsb 6vYkHCdXN18muYOtgveKfaITJnmhrYDhwry+ISgdj2j8WxlnA3+lBoAWN2Noh5DFYhA2VbiL0aZyI 
SOK5hCrmb8TQfLS5em9AznRiLYaDzoj5901WKdwk3lfvIbIOYzbXJ3/ZLNtzwGlhBvnaGr/wkXTc+ MSvzcKmwF4N7m4cmXiTuEZ9jKzomMPWewb1z1jBbRkEmrjDb2Duf7ZI3rnAxNrBAko0jRpj9kJX0F hvYIJq/A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKheI-0000000A2Rb-3MFp; Fri, 21 Jun 2024 17:02:06 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhck-0000000A1eL-1GdO for linux-arm-kernel@bombadil.infradead.org; Fri, 21 Jun 2024 17:00:30 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=wDVqVIy5Z7XyzfP//TWSgLNHRLpkeW+GJVNRaAVdFM8=; b=SpEuWSli+QpwP3V1OGp0ByBQrH bZe071xEdhGsXOexKwiwmiE0A3uK64fTbjkhHCIhsrsMigeCg9ci/J3fIbrdnxdAFWLZ5Obads2z7 CuyaA8mZPqIrMhaspca9nRdLhen7kI0Ta2fkTQQjzg0P2Vi0RehdUQUac/UsCd9JJUPl9dVwXI1xf E0D6sTTBINqYdsyAup1t5g1Yt1CXuejPBCS3Gqcf+VzK1dygBoTOpqh0lXk+eWyNVhGZZTSwgaXdu HwfC33j2Ff3gYTZ0lunrtfFdhMwvYMU3nR/w0mFP4qOufLrT1+h2SNeJt9lbX8D7JonUeLKy3i/zA 9y4uHK/Q==; Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcg-00000007jkL-3ehU for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:29 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 32E9362B68; Fri, 21 Jun 2024 17:00:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9E275C4AF08; Fri, 21 Jun 2024 17:00:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989221; bh=L8QaQlnQZqDSbKGcaArdBOq7kSV7Mtdn6A22XnKLLFE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=B5xMtIKI0abIpZKaD5d6w6YP0ikA0bpre7moHPS1gFNG8CmW1HR3KD8ZdL+DlqdCu jQvst1jVu7M+gbPCo7bM4AYdIlZozo2VHC8iI+aThWLb7vP0FbqoCRbNfKP8LGsime 6AJk1W+zHnhJ+4Gs0xwbBqX1UZwJUBbARuNC/6oHkfMbFgSu+ZNRDF4OoqRKQD7BkM r8N+Y0gG7jmbrh2+IBWo89FNGozD3iHFz5GRs0WeUo0QM9ssRd1RQkUIl08mbj6vA7 bGRVcELKNtCF87Nj0/FZs+lS/28sbO6cM7J019o2kdLHlXteNZQIJ/0mEpg7AfvG49 qccGCsjQOoMiw== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 10/15] dm-verity: provide dma_alignment limit in io_hints Date: Fri, 21 Jun 2024 09:59:17 -0700 Message-ID: <20240621165922.77672-11-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_180027_370084_BA9B9AD9 X-CRM114-Status: GOOD ( 12.28 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Since Linux v6.1, some filesystems support submitting direct I/O that is aligned to only dma_alignment instead of the 
logical_block_size alignment that was required before. I/O that is not aligned to the logical_block_size is difficult to handle in device-mapper targets that do cryptographic processing of data, as it makes the units of data that are hashed or encrypted possibly be split across pages, creating rarely used and rarely tested edge cases. As such, dm-crypt and dm-integrity have already opted out of this by setting dma_alignment to 'logical_block_size - 1'. Although dm-verity does have code that handles these cases (or at least is intended to do so), supporting direct I/O with such a low amount of alignment is not really useful on dm-verity devices. So, opt dm-verity out of it too so that it's not necessary to handle these edge cases. Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-target.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 4ef814a7faf4..c6a0e3280e39 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1021,10 +1021,12 @@ static void verity_io_hints(struct dm_target *ti, struct queue_limits *limits) if (limits->physical_block_size < 1 << v->data_dev_block_bits) limits->physical_block_size = 1 << v->data_dev_block_bits; blk_limits_io_min(limits, limits->logical_block_size); + + limits->dma_alignment = limits->logical_block_size - 1; } static void verity_dtr(struct dm_target *ti) { struct dm_verity *v = ti->private; From patchwork Fri Jun 21 16:59:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707927 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1672FC27C4F for ; Fri, 21 Jun 2024 17:02:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=663X2TwReF+syR10aevUYmu69yt7ol0EmTc4PnZCsBo=; b=Hy0JeFBXBU9BFCoSB1rLFG4M1T aK47HaSJwCPofy6Vfx7n5SJD1+/MxJ5MvIl1MGIpUQ86xuUEr3UzQrN6LgDtvJd0Imr+XTRMYCzFh BIAlSvLqRjAJAyXEwFkeETj8cVSaZ63ip53794GOQ8olF0BoERHtcFSEZrJL4NPre6kGBNCilstBv BnuV4UlJXC3BcK5paeE95ayF0I1nvIRL2hJm71+uNpGpjOyn/7GnUTATYhYZi7WQrkfst9d8OoDBg SjJD4ID2PrXVtSNkWkLk0hMoYruJkjIup6LAJtKNvWhVkuEKjnpXeP5sagNbulCH7uYuMYXQPkluC JnCbl/oQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKheQ-0000000A2Vf-1dLG; Fri, 21 Jun 2024 17:02:14 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcm-0000000A1g4-41Mb for linux-arm-kernel@bombadil.infradead.org; Fri, 21 Jun 2024 17:00:33 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version 
:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=663X2TwReF+syR10aevUYmu69yt7ol0EmTc4PnZCsBo=; b=lTWIbdQwGKRVyW4Q8YOXE/b++I BHrpoyE3ubKx0DmgbsHv/mdLxFDeHkDqjVWtBnwGfBT5LP+UVgiuTjTJqMVyK/lrLqfhHbGiQIGg2 Hh9qxhoRLytWj5/LHMRKf5ryDigUXE3lveVe3FfVNaR9LKYDgPmreWd/3tuKGFyYjPHcDeXt8QzNr iqT6942U2oWC51hsWzKDDPRMmJxWuBVV6aTvVLJp811KRrt/eywHzJknOI60O9mYmkdSfDl6ZbJyf rCcFEAO1fm8UiMnE9vu1md/B+xXNfTyWTOvYhLGHtwyzK6hPTH9l7RZzXVZU1hSfl7bJgwCs1BwO4 kcHaAP3Q==; Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcg-00000007jkM-3YES for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:31 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id B096262B78; Fri, 21 Jun 2024 17:00:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2C012C4AF0E; Fri, 21 Jun 2024 17:00:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989221; bh=tTxV7MtH6fbnfhO1L9UNItC2Kd0/groTwc13CgXMAJE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BClH1L1XAr4YmfjCEFyq/mWwFLAPHVw5hbP2DRYf2E5B1jKDoKC4zUBmQu8CKDtFN j4lQ1Y6kjvCtkAcnjTzoZchl8lS12LqTRilYhEQwePV45lSm5ES/1JeY4j0FjsKoYP n4On5pOqgxWWcWxzAiaLlQAjzVAKUpBDIiJ8PAt4fTRf1e4eCnNufuMekBY0IujJQ3 rHyzgIy/HJKPf1rWJ8LTbibl1na4WYlAf975RQiYDDdJX8vv89KtktRz6+3fmIelHl cPIN+GCW9In760NlvUQemPbc/8nPMkel450YDsqRhePgXwffkSah2Stqt5vpjJQORN ruyC6E+dVKPDg== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 11/15] dm-verity: always "map" the data blocks Date: Fri, 21 Jun 2024 09:59:18 -0700 Message-ID: <20240621165922.77672-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_180027_370073_70CAD1B6 X-CRM114-Status: GOOD ( 27.71 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers dm-verity needs to access data blocks by virtual address in three different cases (zeroization, recheck, and forward error correction), and one more case (shash support) is coming. Since it's guaranteed that dm-verity data blocks never cross pages, and kmap_local_page and kunmap_local are no-ops on modern platforms anyway, just unconditionally "map" every data block's page and work with the virtual buffer directly. This simplifies the code and eliminates unnecessary overhead. 
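The resulting pattern looks roughly like the sketch below (this is not the actual verity_verify_io() loop; verity_check_block() is a hypothetical stand-in for the hash comparison and the mismatch-handling paths, which all consume the same virtual address):

#include <linux/bio.h>
#include <linux/highmem.h>

/*
 * Hypothetical stand-in for the real per-block work (hash comparison,
 * zero-block handling, recheck, FEC); it just takes a plain pointer.
 */
int verity_check_block(void *data, unsigned int block_size);

static int check_one_block(struct bio *bio, struct bvec_iter *iter,
			   unsigned int block_size)
{
	/*
	 * A data block never spans pages: the data block size is at most
	 * PAGE_SIZE and the dma_alignment limit keeps it page-contained,
	 * so a single bio_vec covers the whole block.
	 */
	struct bio_vec bv = bio_iter_iovec(bio, *iter);
	void *data;
	int r;

	if (bv.bv_len < block_size)
		return -EIO;

	/*
	 * kmap_local_page()/kunmap_local() are no-ops on !HIGHMEM
	 * configurations, so "mapping" the block is effectively free.
	 */
	data = bvec_kmap_local(&bv);
	r = verity_check_block(data, block_size);
	kunmap_local(data);

	bio_advance_iter(bio, iter, block_size);
	return r;
}

Because every consumer now gets a plain pointer into the mapped page, the bvec-iterating callback helpers that previously copied or processed the data piecewise can go away, which is where most of the deleted lines in this patch come from.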
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-fec.c | 26 +---- drivers/md/dm-verity-fec.h | 6 +- drivers/md/dm-verity-target.c | 182 +++++++--------------------------- drivers/md/dm-verity.h | 8 -- 4 files changed, 42 insertions(+), 180 deletions(-) diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c index e46aee6f932e..b838d21183b5 100644 --- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -402,28 +402,13 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, } return 0; } -static int fec_bv_copy(struct dm_verity *v, struct dm_verity_io *io, u8 *data, - size_t len) -{ - struct dm_verity_fec_io *fio = fec_io(io); - - memcpy(data, &fio->output[fio->output_pos], len); - fio->output_pos += len; - - return 0; -} - -/* - * Correct errors in a block. Copies corrected block to dest if non-NULL, - * otherwise to a bio_vec starting from iter. - */ +/* Correct errors in a block. Copies corrected block to dest. */ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, - enum verity_block_type type, sector_t block, u8 *dest, - struct bvec_iter *iter) + enum verity_block_type type, sector_t block, u8 *dest) { int r; struct dm_verity_fec_io *fio = fec_io(io); u64 offset, res, rsb; @@ -469,16 +454,11 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, r = fec_decode_rsb(v, io, fio, rsb, offset, true); if (r < 0) goto done; } - if (dest) - memcpy(dest, fio->output, 1 << v->data_dev_block_bits); - else if (iter) { - fio->output_pos = 0; - r = verity_for_bv_block(v, io, iter, fec_bv_copy); - } + memcpy(dest, fio->output, 1 << v->data_dev_block_bits); done: fio->level--; return r; } diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h index 8454070d2824..09123a612953 100644 --- a/drivers/md/dm-verity-fec.h +++ b/drivers/md/dm-verity-fec.h @@ -55,11 +55,10 @@ struct dm_verity_fec_io { struct rs_control *rs; /* Reed-Solomon state */ int erasures[DM_VERITY_FEC_MAX_RSN]; /* erasures for decode_rs8 */ u8 *bufs[DM_VERITY_FEC_BUF_MAX]; /* bufs for deinterleaving */ unsigned int nbufs; /* number of buffers allocated */ u8 *output; /* buffer for corrected output */ - size_t output_pos; unsigned int level; /* recursion level */ }; #ifdef CONFIG_DM_VERITY_FEC @@ -68,11 +67,11 @@ struct dm_verity_fec_io { extern bool verity_fec_is_enabled(struct dm_verity *v); extern int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, enum verity_block_type type, sector_t block, - u8 *dest, struct bvec_iter *iter); + u8 *dest); extern unsigned int verity_fec_status_table(struct dm_verity *v, unsigned int sz, char *result, unsigned int maxlen); extern void verity_fec_finish_io(struct dm_verity_io *io); @@ -98,12 +97,11 @@ static inline bool verity_fec_is_enabled(struct dm_verity *v) } static inline int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, enum verity_block_type type, - sector_t block, u8 *dest, - struct bvec_iter *iter) + sector_t block, u8 *dest) { return -EOPNOTSUPP; } static inline unsigned int verity_fec_status_table(struct dm_verity *v, diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index c6a0e3280e39..3e2e4f41714c 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -340,11 +340,11 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, * tasklet since it may sleep, so fallback to work-queue. 
*/ r = -EAGAIN; goto release_ret_r; } else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_METADATA, - hash_block, data, NULL) == 0) + hash_block, data) == 0) aux->hash_verified = 1; else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_METADATA, hash_block)) { struct bio *bio = @@ -402,102 +402,12 @@ int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, *is_zero = false; return r; } -/* - * Calculates the digest for the given bio - */ -static int verity_for_io_block(struct dm_verity *v, struct dm_verity_io *io, - struct bvec_iter *iter, struct crypto_wait *wait) -{ - unsigned int todo = 1 << v->data_dev_block_bits; - struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); - struct scatterlist sg; - struct ahash_request *req = verity_io_hash_req(v, io); - - do { - int r; - unsigned int len; - struct bio_vec bv = bio_iter_iovec(bio, *iter); - - sg_init_table(&sg, 1); - - len = bv.bv_len; - - if (likely(len >= todo)) - len = todo; - /* - * Operating on a single page at a time looks suboptimal - * until you consider the typical block size is 4,096B. - * Going through this loops twice should be very rare. - */ - sg_set_page(&sg, bv.bv_page, len, bv.bv_offset); - ahash_request_set_crypt(req, &sg, NULL, len); - r = crypto_wait_req(crypto_ahash_update(req), wait); - - if (unlikely(r < 0)) { - DMERR("%s crypto op failed: %d", __func__, r); - return r; - } - - bio_advance_iter(bio, iter, len); - todo -= len; - } while (todo); - - return 0; -} - -/* - * Calls function process for 1 << v->data_dev_block_bits bytes in the bio_vec - * starting from iter. - */ -int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io, - struct bvec_iter *iter, - int (*process)(struct dm_verity *v, - struct dm_verity_io *io, u8 *data, - size_t len)) -{ - unsigned int todo = 1 << v->data_dev_block_bits; - struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); - - do { - int r; - u8 *page; - unsigned int len; - struct bio_vec bv = bio_iter_iovec(bio, *iter); - - page = bvec_kmap_local(&bv); - len = bv.bv_len; - - if (likely(len >= todo)) - len = todo; - - r = process(v, io, page, len); - kunmap_local(page); - - if (r < 0) - return r; - - bio_advance_iter(bio, iter, len); - todo -= len; - } while (todo); - - return 0; -} - -static int verity_recheck_copy(struct dm_verity *v, struct dm_verity_io *io, - u8 *data, size_t len) -{ - memcpy(data, io->recheck_buffer, len); - io->recheck_buffer += len; - - return 0; -} - static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, - struct bvec_iter start, sector_t cur_block) + sector_t cur_block, u8 *dest) { struct page *page; void *buffer; int r; struct dm_io_request io_req; @@ -528,42 +438,38 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, verity_io_want_digest(v, io), v->digest_size)) { r = -EIO; goto free_ret; } - io->recheck_buffer = buffer; - r = verity_for_bv_block(v, io, &start, verity_recheck_copy); - if (unlikely(r)) - goto free_ret; - + memcpy(dest, buffer, 1 << v->data_dev_block_bits); r = 0; free_ret: mempool_free(page, &v->recheck_pool); return r; } static int verity_handle_data_hash_mismatch(struct dm_verity *v, struct dm_verity_io *io, struct bio *bio, sector_t blkno, - struct bvec_iter *start) + u8 *data) { if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Error handling code (FEC included) cannot be run in the * BH workqueue, so fallback to a standard workqueue. 
*/ return -EAGAIN; } - if (verity_recheck(v, io, *start, blkno) == 0) { + if (verity_recheck(v, io, blkno, data) == 0) { if (v->validated_blocks) set_bit(blkno, v->validated_blocks); return 0; } #if defined(CONFIG_DM_VERITY_FEC) if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, blkno, - NULL, start) == 0) + data) == 0) return 0; #endif if (bio->bi_status) return -EIO; /* Error correction failed; Just return error */ @@ -572,40 +478,19 @@ static int verity_handle_data_hash_mismatch(struct dm_verity *v, return -EIO; } return 0; } -static int verity_bv_zero(struct dm_verity *v, struct dm_verity_io *io, - u8 *data, size_t len) -{ - memset(data, 0, len); - return 0; -} - -/* - * Moves the bio iter one data block forward. - */ -static inline void verity_bv_skip_block(struct dm_verity *v, - struct dm_verity_io *io, - struct bvec_iter *iter) -{ - struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); - - bio_advance_iter(bio, iter, 1 << v->data_dev_block_bits); -} - /* * Verify one "dm_verity_io" structure. */ static int verity_verify_io(struct dm_verity_io *io) { - bool is_zero; struct dm_verity *v = io->v; - struct bvec_iter start; + const unsigned int block_size = 1 << v->data_dev_block_bits; struct bvec_iter iter_copy; struct bvec_iter *iter; - struct crypto_wait wait; struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); unsigned int b; if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* @@ -615,62 +500,69 @@ static int verity_verify_io(struct dm_verity_io *io) iter_copy = io->iter; iter = &iter_copy; } else iter = &io->iter; - for (b = 0; b < io->n_blocks; b++) { + for (b = 0; b < io->n_blocks; + b++, bio_advance_iter(bio, iter, block_size)) { int r; sector_t cur_block = io->block + b; - struct ahash_request *req = verity_io_hash_req(v, io); + bool is_zero; + struct bio_vec bv; + void *data; if (v->validated_blocks && bio->bi_status == BLK_STS_OK && - likely(test_bit(cur_block, v->validated_blocks))) { - verity_bv_skip_block(v, io, iter); + likely(test_bit(cur_block, v->validated_blocks))) continue; - } r = verity_hash_for_block(v, io, cur_block, verity_io_want_digest(v, io), &is_zero); if (unlikely(r < 0)) return r; + bv = bio_iter_iovec(bio, *iter); + if (unlikely(bv.bv_len < block_size)) { + /* + * Data block spans pages. This should not happen, + * since dm-verity sets dma_alignment to the data block + * size minus 1, and dm-verity also doesn't allow the + * data block size to be greater than PAGE_SIZE. + */ + DMERR_LIMIT("unaligned io (data block spans pages)"); + return -EIO; + } + + data = bvec_kmap_local(&bv); + if (is_zero) { /* * If we expect a zero block, don't validate, just * return zeros. 
*/ - r = verity_for_bv_block(v, io, iter, - verity_bv_zero); - if (unlikely(r < 0)) - return r; - + memset(data, 0, block_size); + kunmap_local(data); continue; } - r = verity_hash_init(v, req, &wait, !io->in_bh); - if (unlikely(r < 0)) - return r; - - start = *iter; - r = verity_for_io_block(v, io, iter, &wait); - if (unlikely(r < 0)) - return r; - - r = verity_hash_final(v, req, verity_io_real_digest(v, io), - &wait); - if (unlikely(r < 0)) + r = verity_hash(v, verity_io_hash_req(v, io), data, block_size, + verity_io_real_digest(v, io), !io->in_bh); + if (unlikely(r < 0)) { + kunmap_local(data); return r; + } if (likely(memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), v->digest_size) == 0)) { if (v->validated_blocks) set_bit(cur_block, v->validated_blocks); + kunmap_local(data); continue; } r = verity_handle_data_hash_mismatch(v, io, bio, cur_block, - &start); + data); + kunmap_local(data); if (unlikely(r)) return r; } return 0; diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 5d3da9f5fc95..bd461c28b710 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -87,12 +87,10 @@ struct dm_verity_io { bool in_bh; struct work_struct work; struct work_struct bh_work; - char *recheck_buffer; - u8 real_digest[HASH_MAX_DIGESTSIZE]; u8 want_digest[HASH_MAX_DIGESTSIZE]; /* * This struct is followed by a variable-sized struct ahash_request of @@ -116,16 +114,10 @@ static inline u8 *verity_io_want_digest(struct dm_verity *v, struct dm_verity_io *io) { return io->want_digest; } -extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io, - struct bvec_iter *iter, - int (*process)(struct dm_verity *v, - struct dm_verity_io *io, - u8 *data, size_t len)); - extern int verity_hash(struct dm_verity *v, struct ahash_request *req, const u8 *data, size_t len, u8 *digest, bool may_sleep); extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, sector_t block, u8 *digest, bool *is_zero); From patchwork Fri Jun 21 16:59:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A2D56C2BD05 for ; Fri, 21 Jun 2024 17:02:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=r6kpcMPUyPe+Q6E+ZsqXGa7lRWlAvh4+HbybXP+fSM0=; b=VBBNcBfO7b7uuj/eW071ziPnSz +ECnR9naWasvFAqRm/+8rzl2ssZtRmTsPhUiyT7NF8qORPf+CAjRVLD0AiVykmp2YLzGXubzrM3UP 63Ihw1zENGaRFjZ7IwJFDU80xcvXDxv0vvndaiuaTvxqoWEVr2o2lCZSdZZA/99nctQbbdD5S8ZL+ IFu05Ib2rfBh+h2H1OKW3JPkoHQ1dF0SoTUwDiNZTaEctr2jhoX7GWVKYPXntT+QmjF6T+JcMPwei Vg1bb/BpK1hpZ2AqVPTPyKQa5Dh4o7o67tNz6htj7JCZ/DoTxatl7TY68Wwom25ehxF3/MuJZ8RnZ BwPrb2gA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) 
id 1sKheK-0000000A2SZ-305s; Fri, 21 Jun 2024 17:02:08 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhck-0000000A1eB-14iM for linux-arm-kernel@bombadil.infradead.org; Fri, 21 Jun 2024 17:00:30 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=r6kpcMPUyPe+Q6E+ZsqXGa7lRWlAvh4+HbybXP+fSM0=; b=RjsVBLcrf/lEgSvUVa3lwYKMyl swHfbVPHKy/sH1/+RmGv1ANKBLGdHhOE7pSSoPSuLa/f4+1j7UIzh7NE7ISD87G516cnLCEjcmmm0 5SrUqge/GNdRJgxe9FM6lke5vP91HpOgyeB3M+BTila0QDeP39E6QH30aJ3z/9iaC5WmAb84AwrMZ 1xlm0xvkAxZxtTDZaq6kpZikGcmEjWJEHgkmxhDcYcDGvJujvpY8WD6qrOUlCg9SiFhkEYn5HfcFG h/ydLWF2QYtMGlnAXivbfOl+Wplr8aPoXbovvzZxri9VsPAOb1jkjp+qL04S7qm+XWAJdzdILtKjE l/O7pdxA==; Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcg-00000007jkN-3ULP for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:29 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 39A4862B74; Fri, 21 Jun 2024 17:00:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA579C4AF07; Fri, 21 Jun 2024 17:00:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989222; bh=V0k62jEzU2JMobLfaZ0TRIW0f9VCrDHDAVQoQebijdg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Y7REBj5YhzKcAWSDB6YA8QuA4Fzpz8x7pWn92ZDvy7deZLQkTAAQgOSS/w4zU9RoI XnemKqpzJSDL+uIEp2ev/06zlQY2SkeEVttjxNgE+aLT2d1MwmCaGhKV5Sxs/bMcrK cXoLPfztOkPdk2K50fwR31G6Y+mIc/1iybkzRLk3GO55ai2Rr2dSdr6qCcJV0k4SQ1 bU8Hk4mEpgMEW1Xi9Sa6Upl2jq5LoM0xj88mQpLtKVVgi2zpDXBDEju0wJnodz0nVs RFNj1lf7ayTlGkjW5PVfJhsfUsyPsjwXcql8lh0ouKeNWL8Spn+RiH2DtoybQuJahL uU5KNjnw834Rw== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 12/15] dm-verity: make verity_hash() take dm_verity_io instead of ahash_request Date: Fri, 21 Jun 2024 09:59:19 -0700 Message-ID: <20240621165922.77672-13-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_180027_370861_7ECA9847 X-CRM114-Status: GOOD ( 16.00 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers In preparation for adding shash support to dm-verity, change verity_hash() to take a pointer to a struct dm_verity_io instead of a pointer to the ahash_request embedded inside it. 
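For illustration, a typical call site changes shape like this (hypothetical simplified before/after; the real hunks follow below):

	/* Before: the caller had to dig out the embedded request itself. */
	r = verity_hash(v, verity_io_hash_req(v, io), data, len, digest, true);

	/* After: pass the per-I/O structure; verity_hash() finds the request. */
	r = verity_hash(v, io, data, len, digest, true);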
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-fec.c | 6 ++---- drivers/md/dm-verity-target.c | 21 ++++++++++----------- drivers/md/dm-verity.h | 2 +- 3 files changed, 13 insertions(+), 16 deletions(-) diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c index b838d21183b5..62b1a44b8dd2 100644 --- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -184,12 +184,11 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io, * Locate data block erasures using verity hashes. */ static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io, u8 *want_digest, u8 *data) { - if (unlikely(verity_hash(v, verity_io_hash_req(v, io), - data, 1 << v->data_dev_block_bits, + if (unlikely(verity_hash(v, io, data, 1 << v->data_dev_block_bits, verity_io_real_digest(v, io), true))) return 0; return memcmp(verity_io_real_digest(v, io), want_digest, v->digest_size) != 0; @@ -386,12 +385,11 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, pos += fio->nbufs << DM_VERITY_FEC_BUF_RS_BITS; } /* Always re-validate the corrected block against the expected hash */ - r = verity_hash(v, verity_io_hash_req(v, io), fio->output, - 1 << v->data_dev_block_bits, + r = verity_hash(v, io, fio->output, 1 << v->data_dev_block_bits, verity_io_real_digest(v, io), true); if (unlikely(r < 0)) return r; if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 3e2e4f41714c..4aa140751166 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -178,13 +178,14 @@ static int verity_hash_final(struct dm_verity *v, struct ahash_request *req, r = crypto_wait_req(crypto_ahash_final(req), wait); out: return r; } -int verity_hash(struct dm_verity *v, struct ahash_request *req, +int verity_hash(struct dm_verity *v, struct dm_verity_io *io, const u8 *data, size_t len, u8 *digest, bool may_sleep) { + struct ahash_request *req = verity_io_hash_req(v, io); int r; struct crypto_wait wait; r = verity_hash_init(v, req, &wait, may_sleep); if (unlikely(r < 0)) @@ -323,12 +324,11 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, if (skip_unverified) { r = 1; goto release_ret_r; } - r = verity_hash(v, verity_io_hash_req(v, io), - data, 1 << v->hash_dev_block_bits, + r = verity_hash(v, io, data, 1 << v->hash_dev_block_bits, verity_io_real_digest(v, io), !io->in_bh); if (unlikely(r < 0)) goto release_ret_r; if (likely(memcmp(verity_io_real_digest(v, io), want_digest, @@ -426,12 +426,11 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT); r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT); if (unlikely(r)) goto free_ret; - r = verity_hash(v, verity_io_hash_req(v, io), buffer, - 1 << v->data_dev_block_bits, + r = verity_hash(v, io, buffer, 1 << v->data_dev_block_bits, verity_io_real_digest(v, io), true); if (unlikely(r)) goto free_ret; if (memcmp(verity_io_real_digest(v, io), @@ -542,11 +541,11 @@ static int verity_verify_io(struct dm_verity_io *io) memset(data, 0, block_size); kunmap_local(data); continue; } - r = verity_hash(v, verity_io_hash_req(v, io), data, block_size, + r = verity_hash(v, io, data, block_size, verity_io_real_digest(v, io), !io->in_bh); if (unlikely(r < 0)) { kunmap_local(data); return r; } @@ -983,33 +982,33 @@ static int verity_alloc_most_once(struct dm_verity 
*v) } static int verity_alloc_zero_digest(struct dm_verity *v) { int r = -ENOMEM; - struct ahash_request *req; + struct dm_verity_io *io; u8 *zero_data; v->zero_digest = kmalloc(v->digest_size, GFP_KERNEL); if (!v->zero_digest) return r; - req = kmalloc(v->ahash_reqsize, GFP_KERNEL); + io = kmalloc(sizeof(*io) + v->ahash_reqsize, GFP_KERNEL); - if (!req) + if (!io) return r; /* verity_dtr will free zero_digest */ zero_data = kzalloc(1 << v->data_dev_block_bits, GFP_KERNEL); if (!zero_data) goto out; - r = verity_hash(v, req, zero_data, 1 << v->data_dev_block_bits, + r = verity_hash(v, io, zero_data, 1 << v->data_dev_block_bits, v->zero_digest, true); out: - kfree(req); + kfree(io); kfree(zero_data); return r; } diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index bd461c28b710..0e1dd02a916f 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -114,11 +114,11 @@ static inline u8 *verity_io_want_digest(struct dm_verity *v, struct dm_verity_io *io) { return io->want_digest; } -extern int verity_hash(struct dm_verity *v, struct ahash_request *req, +extern int verity_hash(struct dm_verity *v, struct dm_verity_io *io, const u8 *data, size_t len, u8 *digest, bool may_sleep); extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, sector_t block, u8 *digest, bool *is_zero); From patchwork Fri Jun 21 16:59:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707928 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7BF67C27C4F for ; Fri, 21 Jun 2024 17:02:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=SATghxL+6Nb3sYF5VpqIvGI5htYO64t5FhPzIr8gTZc=; b=rgyLwZfzeCJ2CTfc3ivZuzuwEU e5JoY+i887bcpkaN+XVL6Z2OF/sWgWkNhJZr8pYkmUH/+o0EXBL96S16uRlvl/MsWW73MVoDzmI9r sQWAOb0e1XSU41RaVGNxyFHsKRAR2pmkg5TwefTIWyVl6SczUqSwgs7mwh6TEhP31tZwvgHA1YY48 7yYYa6M5pVwhQwEFgRl+6YsHJ8V7Fiw6P6z0IaxvsKY9ZjpGuZcHIBz0ifMFPdn/SYaW8p4X30Q9j e6lJH9pd66ZqfwiTnmBGd4vXU7qNRlKGf//6X9sLfz0RtGFMwT7XpSY1ChPinFdH21E+Cb22pdNAU DOSVQ0ug==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKheS-0000000A2Wh-1POw; Fri, 21 Jun 2024 17:02:16 +0000 Received: from dfw.source.kernel.org ([139.178.84.217]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcd-0000000A1WU-0DY6 for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:28 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id B826562B73; Fri, 21 Jun 2024 17:00:22 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 33FBFC2BBFC; Fri, 21 Jun 2024 17:00:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; 
s=k20201202; t=1718989222; bh=zflnXarlsAz1kc4aqNG2o4Ia4e5L2b5skffcwOmm1Lc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WpMS9HF22BRCbWMmTrIBjH5S6oycm3JFIEUdyIKPJrbL8OT70RTVcMKOm397GUMIu Rb+TvThSm/CNEmT723Ji+JgctwoYSEzht0ouH5QZufNqUse0wh/o588oyaPpKHpMIQ Thx3JEjhVzDyn1Q+Eq1FjXTlQZhTJK5MtTh7vSHHZ7pYTpBgs9oBKKLHcWPHR3ITfl bAUR6EGLQSKCIgwQZ2NFRN5Dzqdb1zvNaO+j96JI53f9sgrviV6xAzdm+JhNIhAQcr J9P9TFbkd4t/3YrToyl9isarkSgSuDRrUghHZdmT/e0xIv0XTaqZJzPh5WYbvtkpuj w4PmYUwARbYxg== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 13/15] dm-verity: hash blocks with shash import+finup when possible Date: Fri, 21 Jun 2024 09:59:20 -0700 Message-ID: <20240621165922.77672-14-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_100024_120323_9A4F23F4 X-CRM114-Status: GOOD ( 36.99 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers Currently dm-verity computes the hash of each block by using multiple calls to the "ahash" crypto API. While the exact sequence depends on the chosen dm-verity settings, in the vast majority of cases it is: 1. crypto_ahash_init() 2. crypto_ahash_update() [salt] 3. crypto_ahash_update() [data] 4. crypto_ahash_final() This is inefficient for two main reasons: - It makes multiple indirect calls, which is expensive on modern CPUs especially when mitigations for CPU vulnerabilities are enabled. Since the salt is the same across all blocks on a given dm-verity device, a much more efficient sequence would be to do an import of the pre-salted state, then a finup. - It uses the ahash (asynchronous hash) API, despite the fact that CPU-based hashing is almost always used in practice, and therefore it experiences the overhead of the ahash-based wrapper for shash. Because dm-verity was intentionally converted to ahash to support off-CPU crypto accelerators, a full reversion to shash might not be acceptable. Yet, we should still provide a fast path for shash with the most common dm-verity settings. Another reason for shash over ahash is that the upcoming multibuffer hashing support, which is specific to CPU-based hashing, is much better suited for shash than for ahash. Supporting it via ahash would add significant complexity and overhead. And it's not possible for the "same" code to properly support both multibuffer hashing and HW accelerators at the same time anyway, given the different computation models. Unfortunately there will always be code specific to each model needed (for users who want to support both). Therefore, this patch adds a new shash import+finup based fast path to dm-verity. It is used automatically when appropriate. This makes dm-verity optimized for what the vast majority of users want: CPU-based hashing with the most common settings, while still retaining support for rarer settings and off-CPU crypto accelerators. 
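Concretely, the shash fast path has two pieces (sketch only; the real code is in the hunks below). At table construction time the salt is hashed once and the partial state exported; then each block is hashed by importing that state and finishing with a single finup:

	/* Table load: compute the pre-salted state once. */
	desc->tfm = v->shash_tfm;
	r = crypto_shash_init(desc) ?:
	    crypto_shash_update(desc, v->salt, v->salt_size) ?:
	    crypto_shash_export(desc, v->initial_hashstate);

	/* Per block: resume from the salted state and finish in one call. */
	r = crypto_shash_import(desc, v->initial_hashstate) ?:
	    crypto_shash_finup(desc, data, block_size, digest);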
In benchmarks with veritysetup's default parameters (SHA-256, 4K data and hash block sizes, 32-byte salt), which also match the parameters that Android currently uses, this patch improves block hashing performance by about 15% on x86_64 using the SHA-NI instructions, or by about 5% on arm64 using the ARMv8 SHA2 instructions. On x86_64 roughly two-thirds of the improvement comes from the use of import and finup, while the remaining third comes from the switch from ahash to shash. Note that another benefit of using "import" to handle the salt is that if the salt size is equal to the input size of the hash algorithm's compression function, e.g. 64 bytes for SHA-256, then the performance is exactly the same as no salt. This doesn't seem to be much better than veritysetup's current default of 32-byte salts, due to the way SHA-256's finalization padding works, but it should be marginally better. Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-target.c | 169 ++++++++++++++++++++++++---------- drivers/md/dm-verity.h | 18 ++-- 2 files changed, 130 insertions(+), 57 deletions(-) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 4aa140751166..d16c51958465 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -46,10 +46,13 @@ static unsigned int dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE module_param_named(prefetch_cluster, dm_verity_prefetch_cluster, uint, 0644); static DEFINE_STATIC_KEY_FALSE(use_bh_wq_enabled); +/* Is at least one dm-verity instance using ahash_tfm instead of shash_tfm? */ +static DEFINE_STATIC_KEY_FALSE(ahash_enabled); + struct dm_verity_prefetch_work { struct work_struct work; struct dm_verity *v; unsigned short ioprio; sector_t block; @@ -100,11 +103,11 @@ static sector_t verity_position_at_level(struct dm_verity *v, sector_t block, int level) { return block >> (level * v->hash_per_block_bits); } -static int verity_hash_update(struct dm_verity *v, struct ahash_request *req, +static int verity_ahash_update(struct dm_verity *v, struct ahash_request *req, const u8 *data, size_t len, struct crypto_wait *wait) { struct scatterlist sg; @@ -133,16 +136,16 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req, } /* * Wrapper for crypto_ahash_init, which handles verity salting. */ -static int verity_hash_init(struct dm_verity *v, struct ahash_request *req, +static int verity_ahash_init(struct dm_verity *v, struct ahash_request *req, struct crypto_wait *wait, bool may_sleep) { int r; - ahash_request_set_tfm(req, v->tfm); + ahash_request_set_tfm(req, v->ahash_tfm); ahash_request_set_callback(req, may_sleep ? 
CRYPTO_TFM_REQ_MAY_SLEEP | CRYPTO_TFM_REQ_MAY_BACKLOG : 0, crypto_req_done, (void *)wait); crypto_init_wait(wait); @@ -153,22 +156,22 @@ static int verity_hash_init(struct dm_verity *v, struct ahash_request *req, DMERR("crypto_ahash_init failed: %d", r); return r; } if (likely(v->salt_size && (v->version >= 1))) - r = verity_hash_update(v, req, v->salt, v->salt_size, wait); + r = verity_ahash_update(v, req, v->salt, v->salt_size, wait); return r; } -static int verity_hash_final(struct dm_verity *v, struct ahash_request *req, - u8 *digest, struct crypto_wait *wait) +static int verity_ahash_final(struct dm_verity *v, struct ahash_request *req, + u8 *digest, struct crypto_wait *wait) { int r; if (unlikely(v->salt_size && (!v->version))) { - r = verity_hash_update(v, req, v->salt, v->salt_size, wait); + r = verity_ahash_update(v, req, v->salt, v->salt_size, wait); if (r < 0) { DMERR("%s failed updating salt: %d", __func__, r); goto out; } @@ -181,25 +184,28 @@ static int verity_hash_final(struct dm_verity *v, struct ahash_request *req, } int verity_hash(struct dm_verity *v, struct dm_verity_io *io, const u8 *data, size_t len, u8 *digest, bool may_sleep) { - struct ahash_request *req = verity_io_hash_req(v, io); int r; - struct crypto_wait wait; - - r = verity_hash_init(v, req, &wait, may_sleep); - if (unlikely(r < 0)) - goto out; - r = verity_hash_update(v, req, data, len, &wait); - if (unlikely(r < 0)) - goto out; + if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) { + struct ahash_request *req = verity_io_hash_req(v, io); + struct crypto_wait wait; - r = verity_hash_final(v, req, digest, &wait); + r = verity_ahash_init(v, req, &wait, may_sleep) ?: + verity_ahash_update(v, req, data, len, &wait) ?: + verity_ahash_final(v, req, digest, &wait); + } else { + struct shash_desc *desc = verity_io_hash_req(v, io); -out: + desc->tfm = v->shash_tfm; + r = crypto_shash_import(desc, v->initial_hashstate) ?: + crypto_shash_finup(desc, data, len, digest); + } + if (unlikely(r)) + DMERR("Error hashing block: %d", r); return r; } static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level, sector_t *hash_block, unsigned int *offset) @@ -932,15 +938,20 @@ static void verity_dtr(struct dm_target *ti) if (v->bufio) dm_bufio_client_destroy(v->bufio); kvfree(v->validated_blocks); kfree(v->salt); + kfree(v->initial_hashstate); kfree(v->root_digest); kfree(v->zero_digest); - if (v->tfm) - crypto_free_ahash(v->tfm); + if (v->ahash_tfm) { + static_branch_dec(&ahash_enabled); + crypto_free_ahash(v->ahash_tfm); + } else { + crypto_free_shash(v->shash_tfm); + } kfree(v->alg_name); if (v->hash_dev) dm_put_device(ti, v->hash_dev); @@ -990,11 +1001,11 @@ static int verity_alloc_zero_digest(struct dm_verity *v) v->zero_digest = kmalloc(v->digest_size, GFP_KERNEL); if (!v->zero_digest) return r; - io = kmalloc(sizeof(*io) + v->ahash_reqsize, GFP_KERNEL); + io = kmalloc(sizeof(*io) + v->hash_reqsize, GFP_KERNEL); if (!io) return r; /* verity_dtr will free zero_digest */ zero_data = kzalloc(1 << v->data_dev_block_bits, GFP_KERNEL); @@ -1129,40 +1140,110 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v, static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name) { struct dm_target *ti = v->ti; struct crypto_ahash *ahash; + struct crypto_shash *shash = NULL; + const char *driver_name; v->alg_name = kstrdup(alg_name, GFP_KERNEL); if (!v->alg_name) { ti->error = "Cannot allocate algorithm name"; return -ENOMEM; } + /* + * Allocate the hash transformation 
object that this dm-verity instance + * will use. The vast majority of dm-verity users use CPU-based + * hashing, so when possible use the shash API to minimize the crypto + * API overhead. If the ahash API resolves to a different driver + * (likely an off-CPU hardware offload), use ahash instead. Also use + * ahash if the obsolete dm-verity format with the appended salt is + * being used, so that quirk only needs to be handled in one place. + */ ahash = crypto_alloc_ahash(alg_name, 0, v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0); if (IS_ERR(ahash)) { ti->error = "Cannot initialize hash function"; return PTR_ERR(ahash); } - v->tfm = ahash; - - /* - * dm-verity performance can vary greatly depending on which hash - * algorithm implementation is used. Help people debug performance - * problems by logging the ->cra_driver_name. - */ - DMINFO("%s using implementation \"%s\"", alg_name, - crypto_hash_alg_common(ahash)->base.cra_driver_name); - - v->digest_size = crypto_ahash_digestsize(ahash); + driver_name = crypto_ahash_driver_name(ahash); + if (v->version >= 1 /* salt prepended, not appended? */) { + shash = crypto_alloc_shash(alg_name, 0, 0); + if (!IS_ERR(shash) && + strcmp(crypto_shash_driver_name(shash), driver_name) != 0) { + /* + * ahash gave a different driver than shash, so probably + * this is a case of real hardware offload. Use ahash. + */ + crypto_free_shash(shash); + shash = NULL; + } + } + if (!IS_ERR_OR_NULL(shash)) { + crypto_free_ahash(ahash); + ahash = NULL; + v->shash_tfm = shash; + v->digest_size = crypto_shash_digestsize(shash); + v->hash_reqsize = sizeof(struct shash_desc) + + crypto_shash_descsize(shash); + DMINFO("%s using shash \"%s\"", alg_name, driver_name); + } else { + v->ahash_tfm = ahash; + static_branch_inc(&ahash_enabled); + v->digest_size = crypto_ahash_digestsize(ahash); + v->hash_reqsize = sizeof(struct ahash_request) + + crypto_ahash_reqsize(ahash); + DMINFO("%s using ahash \"%s\"", alg_name, driver_name); + } if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { ti->error = "Digest size too big"; return -EINVAL; } - v->ahash_reqsize = sizeof(struct ahash_request) + - crypto_ahash_reqsize(ahash); + return 0; +} + +static int verity_setup_salt_and_hashstate(struct dm_verity *v, const char *arg) +{ + struct dm_target *ti = v->ti; + + if (strcmp(arg, "-") != 0) { + v->salt_size = strlen(arg) / 2; + v->salt = kmalloc(v->salt_size, GFP_KERNEL); + if (!v->salt) { + ti->error = "Cannot allocate salt"; + return -ENOMEM; + } + if (strlen(arg) != v->salt_size * 2 || + hex2bin(v->salt, arg, v->salt_size)) { + ti->error = "Invalid salt"; + return -EINVAL; + } + } + if (v->shash_tfm) { + SHASH_DESC_ON_STACK(desc, v->shash_tfm); + int r; + + /* + * Compute the pre-salted hash state that can be passed to + * crypto_shash_import() for each block later. 
+ */ + v->initial_hashstate = kmalloc( + crypto_shash_statesize(v->shash_tfm), GFP_KERNEL); + if (!v->initial_hashstate) { + ti->error = "Cannot allocate initial hash state"; + return -ENOMEM; + } + desc->tfm = v->shash_tfm; + r = crypto_shash_init(desc) ?: + crypto_shash_update(desc, v->salt, v->salt_size) ?: + crypto_shash_export(desc, v->initial_hashstate); + if (r) { + ti->error = "Cannot set up initial hash state"; + return r; + } + } return 0; } /* * Target parameters: @@ -1304,25 +1385,13 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) r = -EINVAL; goto bad; } root_hash_digest_to_validate = argv[8]; - if (strcmp(argv[9], "-")) { - v->salt_size = strlen(argv[9]) / 2; - v->salt = kmalloc(v->salt_size, GFP_KERNEL); - if (!v->salt) { - ti->error = "Cannot allocate salt"; - r = -ENOMEM; - goto bad; - } - if (strlen(argv[9]) != v->salt_size * 2 || - hex2bin(v->salt, argv[9], v->salt_size)) { - ti->error = "Invalid salt"; - r = -EINVAL; - goto bad; - } - } + r = verity_setup_salt_and_hashstate(v, argv[9]); + if (r) + goto bad; argv += 10; argc -= 10; /* Optional parameters */ @@ -1420,11 +1489,11 @@ static int verity_ctr(struct dm_target *ti, unsigned int argc, char **argv) ti->error = "Cannot allocate workqueue"; r = -ENOMEM; goto bad; } - ti->per_io_data_size = sizeof(struct dm_verity_io) + v->ahash_reqsize; + ti->per_io_data_size = sizeof(struct dm_verity_io) + v->hash_reqsize; r = verity_fec_ctr(v); if (r) goto bad; diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 0e1dd02a916f..aac3a1b1d94a 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -37,13 +37,15 @@ struct dm_verity { struct dm_dev *data_dev; struct dm_dev *hash_dev; struct dm_target *ti; struct dm_bufio_client *bufio; char *alg_name; - struct crypto_ahash *tfm; + struct crypto_ahash *ahash_tfm; /* either this or shash_tfm is set */ + struct crypto_shash *shash_tfm; /* either this or ahash_tfm is set */ u8 *root_digest; /* digest of the root block */ u8 *salt; /* salt: its size is salt_size */ + u8 *initial_hashstate; /* salted initial state, if shash_tfm is set */ u8 *zero_digest; /* digest for a zero block */ unsigned int salt_size; sector_t data_start; /* data offset in 512-byte sectors */ sector_t hash_start; /* hash start in blocks */ sector_t data_blocks; /* the number of data blocks */ @@ -54,11 +56,11 @@ struct dm_verity { unsigned char levels; /* the number of tree levels */ unsigned char version; bool hash_failed:1; /* set if hash of any block failed */ bool use_bh_wq:1; /* try to verify in BH wq before normal work-queue */ unsigned int digest_size; /* digest size for the current hash algorithm */ - unsigned int ahash_reqsize;/* the size of temporary space for crypto */ + unsigned int hash_reqsize; /* the size of temporary space for crypto */ enum verity_mode mode; /* mode for handling verification errors */ unsigned int corrupted_errs;/* Number of errors for corrupted blocks */ struct workqueue_struct *verify_wq; @@ -91,19 +93,21 @@ struct dm_verity_io { u8 real_digest[HASH_MAX_DIGESTSIZE]; u8 want_digest[HASH_MAX_DIGESTSIZE]; /* - * This struct is followed by a variable-sized struct ahash_request of - * size v->ahash_reqsize. To access it, use verity_io_hash_req(). + * This struct is followed by a variable-sized hash request of size + * v->hash_reqsize, either a struct ahash_request or a struct shash_desc + * (depending on whether ahash_tfm or shash_tfm is being used). To + * access it, use verity_io_hash_req(). 
*/ }; -static inline struct ahash_request *verity_io_hash_req(struct dm_verity *v, - struct dm_verity_io *io) +static inline void *verity_io_hash_req(struct dm_verity *v, + struct dm_verity_io *io) { - return (struct ahash_request *)(io + 1); + return io + 1; } static inline u8 *verity_io_real_digest(struct dm_verity *v, struct dm_verity_io *io) { From patchwork Fri Jun 21 16:59:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707924 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D8C86C27C4F for ; Fri, 21 Jun 2024 17:02:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=4NwPPspjJtpGWgzd3xqbuCVbC3XljQLhJ0bftsAjgf0=; b=HQN1dPVfqXM9/Y5MlGjv3Q+Tnz le604Gm4ra78mRjWs48wZpM2T0/JC9JEQlx0+/t5wlLBttWH4B/XjXMSpUVpi5POLFzrSWg+si0+B qn9ifvm/YDC9AmHxqOmobH2YsZwfgv0QSom480x5O+yhPzdttSgUd+NLyNbG2uDt19Rd+3Lvmcr1d AuVSMvM7ocBpJYMyw73wWu5XEWcMc2KApLrQPKeGh5iPMVtSMEc9T1dKSZIvI2F0ZgvIcoh5XOv1a daN6nfJ8LdG80MBPgg3SUpazIL4xebtry4ieYnQvDtHD+6kHhLOnlm6pmyS6A0/Gu9YRdNhKpPf21 yunF9PSA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKheL-0000000A2T1-42EA; Fri, 21 Jun 2024 17:02:09 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcm-0000000A1g2-3m8o for linux-arm-kernel@bombadil.infradead.org; Fri, 21 Jun 2024 17:00:33 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=4NwPPspjJtpGWgzd3xqbuCVbC3XljQLhJ0bftsAjgf0=; b=ihXQLa83vzcEK0STF5ItyyGey4 0xw2H02AeGclR9d8YjTIlbwJiLfR2HUsSBIWqYhLFS5M1qjZdSyxxG8oiMiQItBFC1XRtgRhMfo8S niCP1Ysii87XJEYV0ZPbPNwRSRydrTfEMqg81SAKlUIgUr2NBufptqUB6RATjvgpVoihbCFuEs16i eOsOLNpYFTszUbhMmSHl0OzxzKjlmD6ZnEmWX7UYG5W0F1lShW1aCMcyvJwBbANtcmvVxmYbbbNz8 OLvoCeDHHNtQc4ofElMhlBt26quudZHFQK+pu8ubFQGyCN8zdAVM6p1nS2lU32ktlnQs21KCu8N/n 0hnwoMfw==; Received: from dfw.source.kernel.org ([139.178.84.217]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcg-00000007jkP-3hAF for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:31 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id 4237E62B76; Fri, 21 Jun 2024 17:00:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B17BCC4AF0A; Fri, 21 Jun 2024 17:00:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989223; bh=d9URkqI7jqaafYTtviGrRDgp27KVBzc3eKXY8QF1r2Y=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RI8vdjEjfwkSf+UaXic1vyS6BB7320+bysOaNbEKMW1RoLo/0utLtK/Lxm2y7xJs2 hW0HIfINwKmXPE0DyJZ6XiGCBH9RyULqPWJ2/tBXQwhKftnIJiKQLudeFJHudUndm8 UcZGd7L4biA1wR/xcBhRv6Y0V1up0l3YrbyOy8yaY2wtjnAknJuCLBpS3hIHOaUgXI saA5DXXVuQdiRyrNE2JH0+t7KXi+8TSC+1kH4TVIhPG7OMokIRj+b4EdG+e6u3+0/2 Oc7p32ZxQIT/NU+SES5MP74iJ8JcsBoC2gLmWyJm8WGLuLhtUGM17OKWEQOSa4S3Tp z+q+FR5ulpFEg== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 14/15] dm-verity: reduce scope of real and wanted digests Date: Fri, 21 Jun 2024 09:59:21 -0700 Message-ID: <20240621165922.77672-15-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_180027_399176_655BD549 X-CRM114-Status: GOOD ( 26.08 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers In preparation for supporting multibuffer hashing where dm-verity will need to keep track of the real and wanted digests for multiple data blocks simultaneously, stop using the want_digest and real_digest fields of struct dm_verity_io from so many different places. Specifically: - Make various functions take want_digest as a parameter rather than having it be implicitly passed via the struct dm_verity_io. - Add a new tmp_digest field, and use this instead of real_digest when computing a hash solely for the purpose of immediately checking it. The result is that real_digest and want_digest are used only by verity_verify_io(). Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-fec.c | 19 +++++++++--------- drivers/md/dm-verity-fec.h | 5 +++-- drivers/md/dm-verity-target.c | 36 ++++++++++++++++++----------------- drivers/md/dm-verity.h | 1 + 4 files changed, 32 insertions(+), 29 deletions(-) diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c index 62b1a44b8dd2..79f3794e197e 100644 --- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -185,15 +185,14 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io, */ static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io, u8 *want_digest, u8 *data) { if (unlikely(verity_hash(v, io, data, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true))) + io->tmp_digest, true))) return 0; - return memcmp(verity_io_real_digest(v, io), want_digest, - v->digest_size) != 0; + return memcmp(io->tmp_digest, want_digest, v->digest_size) != 0; } /* * Read data blocks that are part of the RS block and deinterleave as much as * fits into buffers. Check for erasure locations if @neras is non-NULL. @@ -360,11 +359,11 @@ static void fec_init_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio) * (indicated by @offset) in fio->output. If @use_erasures is non-zero, uses * hashes to locate erasures. 
*/ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, struct dm_verity_fec_io *fio, u64 rsb, u64 offset, - bool use_erasures) + const u8 *want_digest, bool use_erasures) { int r, neras = 0; unsigned int pos; r = fec_alloc_bufs(v, fio); @@ -386,27 +385,27 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, pos += fio->nbufs << DM_VERITY_FEC_BUF_RS_BITS; } /* Always re-validate the corrected block against the expected hash */ r = verity_hash(v, io, fio->output, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true); + io->tmp_digest, true); if (unlikely(r < 0)) return r; - if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), - v->digest_size)) { + if (memcmp(io->tmp_digest, want_digest, v->digest_size)) { DMERR_LIMIT("%s: FEC %llu: failed to correct (%d erasures)", v->data_dev->name, (unsigned long long)rsb, neras); return -EILSEQ; } return 0; } /* Correct errors in a block. Copies corrected block to dest. */ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, - enum verity_block_type type, sector_t block, u8 *dest) + enum verity_block_type type, const u8 *want_digest, + sector_t block, u8 *dest) { int r; struct dm_verity_fec_io *fio = fec_io(io); u64 offset, res, rsb; @@ -445,13 +444,13 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, /* * Locating erasures is slow, so attempt to recover the block without * them first. Do a second attempt with erasures if the corruption is * bad enough. */ - r = fec_decode_rsb(v, io, fio, rsb, offset, false); + r = fec_decode_rsb(v, io, fio, rsb, offset, want_digest, false); if (r < 0) { - r = fec_decode_rsb(v, io, fio, rsb, offset, true); + r = fec_decode_rsb(v, io, fio, rsb, offset, want_digest, true); if (r < 0) goto done; } memcpy(dest, fio->output, 1 << v->data_dev_block_bits); diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h index 09123a612953..a6689cdc489d 100644 --- a/drivers/md/dm-verity-fec.h +++ b/drivers/md/dm-verity-fec.h @@ -66,12 +66,12 @@ struct dm_verity_fec_io { #define DM_VERITY_OPTS_FEC 8 extern bool verity_fec_is_enabled(struct dm_verity *v); extern int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, - enum verity_block_type type, sector_t block, - u8 *dest); + enum verity_block_type type, const u8 *want_digest, + sector_t block, u8 *dest); extern unsigned int verity_fec_status_table(struct dm_verity *v, unsigned int sz, char *result, unsigned int maxlen); extern void verity_fec_finish_io(struct dm_verity_io *io); @@ -97,10 +97,11 @@ static inline bool verity_fec_is_enabled(struct dm_verity *v) } static inline int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, enum verity_block_type type, + const u8 *want_digest, sector_t block, u8 *dest) { return -EOPNOTSUPP; } diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index d16c51958465..1f23354256d3 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -283,16 +283,16 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type, /* * Verify hash of a metadata block pertaining to the specified data block * ("block" argument) at a specified level ("level" argument). * - * On successful return, verity_io_want_digest(v, io) contains the hash value - * for a lower tree level or for the data block (if we're at the lowest level). 
+ * On successful return, want_digest contains the hash value for a lower tree + * level or for the data block (if we're at the lowest level). * * If "skip_unverified" is true, unverified buffer is skipped and 1 is returned. * If "skip_unverified" is false, unverified buffer is hashed and verified - * against current value of verity_io_want_digest(v, io). + * against current value of want_digest. */ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, sector_t block, int level, bool skip_unverified, u8 *want_digest) { @@ -331,26 +331,26 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, r = 1; goto release_ret_r; } r = verity_hash(v, io, data, 1 << v->hash_dev_block_bits, - verity_io_real_digest(v, io), !io->in_bh); + io->tmp_digest, !io->in_bh); if (unlikely(r < 0)) goto release_ret_r; - if (likely(memcmp(verity_io_real_digest(v, io), want_digest, + if (likely(memcmp(io->tmp_digest, want_digest, v->digest_size) == 0)) aux->hash_verified = 1; else if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Error handling code (FEC included) cannot be run in a * tasklet since it may sleep, so fallback to work-queue. */ r = -EAGAIN; goto release_ret_r; } else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_METADATA, - hash_block, data) == 0) + want_digest, hash_block, data) == 0) aux->hash_verified = 1; else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_METADATA, hash_block)) { struct bio *bio = @@ -409,11 +409,12 @@ int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, return r; } static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, - sector_t cur_block, u8 *dest) + const u8 *want_digest, sector_t cur_block, + u8 *dest) { struct page *page; void *buffer; int r; struct dm_io_request io_req; @@ -433,16 +434,15 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT); if (unlikely(r)) goto free_ret; r = verity_hash(v, io, buffer, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true); + io->tmp_digest, true); if (unlikely(r)) goto free_ret; - if (memcmp(verity_io_real_digest(v, io), - verity_io_want_digest(v, io), v->digest_size)) { + if (memcmp(io->tmp_digest, want_digest, v->digest_size)) { r = -EIO; goto free_ret; } memcpy(dest, buffer, 1 << v->data_dev_block_bits); @@ -453,28 +453,29 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, return r; } static int verity_handle_data_hash_mismatch(struct dm_verity *v, struct dm_verity_io *io, - struct bio *bio, sector_t blkno, - u8 *data) + struct bio *bio, + const u8 *want_digest, + sector_t blkno, u8 *data) { if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Error handling code (FEC included) cannot be run in the * BH workqueue, so fallback to a standard workqueue. 
*/ return -EAGAIN; } - if (verity_recheck(v, io, blkno, data) == 0) { + if (verity_recheck(v, io, want_digest, blkno, data) == 0) { if (v->validated_blocks) set_bit(blkno, v->validated_blocks); return 0; } #if defined(CONFIG_DM_VERITY_FEC) - if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, blkno, - data) == 0) + if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, want_digest, + blkno, data) == 0) return 0; #endif if (bio->bi_status) return -EIO; /* Error correction failed; Just return error */ @@ -561,12 +562,13 @@ static int verity_verify_io(struct dm_verity_io *io) if (v->validated_blocks) set_bit(cur_block, v->validated_blocks); kunmap_local(data); continue; } - r = verity_handle_data_hash_mismatch(v, io, bio, cur_block, - data); + r = verity_handle_data_hash_mismatch(v, io, bio, + verity_io_want_digest(v, io), + cur_block, data); kunmap_local(data); if (unlikely(r)) return r; } diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index aac3a1b1d94a..3951e5a4a156 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -89,10 +89,11 @@ struct dm_verity_io { bool in_bh; struct work_struct work; struct work_struct bh_work; + u8 tmp_digest[HASH_MAX_DIGESTSIZE]; u8 real_digest[HASH_MAX_DIGESTSIZE]; u8 want_digest[HASH_MAX_DIGESTSIZE]; /* * This struct is followed by a variable-sized hash request of size From patchwork Fri Jun 21 16:59:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13707926 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EDC86C2BB85 for ; Fri, 21 Jun 2024 17:02:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=D8+90gVk5zYVKCGgiqvumROaYLqjnGKW6EKjHm9QlTQ=; b=XuvUx2gteP/6GxOFm+0z/k59kN p//4H/Ho6PpWWWZ/azqnzvBPiTTWuQDtPNPa0J1UhFQZiTFE2HGpN3Inbn6UKPD8wXrIBrPBLs7sj rVYELGKhxlBTRGt9S1QDPkMLvaVvnitIoJLnu2IY7U2YFQ4PIc7hdGB5JyrmuyXREo4TZo4aLwKMJ p1R/YdtWpxNbmJJykTj440iSyjL+DTbaTHxi7XzhWn7Z8NXxpbNpY9H6C1abGWZOtSna4Xt7QwjGl YJK3ERd4n8scxxH+1GofTFfgvTHfVp517RF5VC4/WPlrJkKIa6GWyJspkXnVqHOE/hdONhWiedCoZ KIfgN9nw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKheN-0000000A2UH-3T7J; Fri, 21 Jun 2024 17:02:11 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcm-0000000A1gG-4Bh1 for linux-arm-kernel@bombadil.infradead.org; Fri, 21 Jun 2024 17:00:33 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Transfer-Encoding:MIME-Version :References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=D8+90gVk5zYVKCGgiqvumROaYLqjnGKW6EKjHm9QlTQ=; 
b=L4RCZRr1xQi1q9ilLditcZyW51 jJ6hXd7ue0ZRpyhdUmAzpxtN9i1c0yulB22GCWJ5oozzwkfFVqM4lgfbmzS0K8h/yJ95Tjc4BPkcl D5YOXgB8/C3ZA9zqF+zO91hlhmdbIWO7H89n/zjAy5Z9E0TTB4QeY2/dhcI8Ym/rmutODoy0UTZ6Z Tw5jKeEOiccdY9FEUUkFemSpkCAopsMzrx+pa6sv6eiLDoNOif6+SzOXUOt7TivaZbxqQcgCCKcLO lG0ztlcO1aNjQ0ZGXQvIvwdK9fvPmfs4tGSYj4mzi5JSid2MeJeZlqGo6M6QISYcScnlYgU1l3UGH otDFC0TA==; Received: from dfw.source.kernel.org ([2604:1380:4641:c500::1]) by desiato.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sKhcg-00000007jkQ-3aew for linux-arm-kernel@lists.infradead.org; Fri, 21 Jun 2024 17:00:31 +0000 Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by dfw.source.kernel.org (Postfix) with ESMTP id C36C662B66; Fri, 21 Jun 2024 17:00:23 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3CBA2C4AF08; Fri, 21 Jun 2024 17:00:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1718989223; bh=fNLe52TPOxjgLpo59slUZdXGoIEJCK+i4EGVq9NPfPk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=i2d5eAYNbel8tNFaas0Y2Zbntn3EsMrJFceSmw8a+H/FjQZo8soMdzNBqFL7ITnsI QtbzcJMq9G2hFe4d/DBjWDvypthFeyDQAxWdpjlTLCwer35MFy7vFk0aoXrs/gNMlr sxKVV7PoILNJV5kx3ZRphMuLayzZptgF2ZLvRj8Su3Ch8d239rUyR19NFMYxbhmloS l0Jw8LWf6aXBAqSPowrlGo/m8sMdVk3MksDfgAdYdEbZulKCN7WGq+wOLfJqokArVe HE/V59d+lXfN/cNr9cfvriHVeiIspLlkBbBdadbBcaMA0uuE4fiVVE3LRFzBxnRs2R tdfh+UGfARpDw== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Bart Van Assche , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v6 15/15] dm-verity: improve performance by using multibuffer hashing Date: Fri, 21 Jun 2024 09:59:22 -0700 Message-ID: <20240621165922.77672-16-ebiggers@kernel.org> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240621165922.77672-1-ebiggers@kernel.org> References: <20240621165922.77672-1-ebiggers@kernel.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240621_180027_396989_EC0D90FA X-CRM114-Status: GOOD ( 33.24 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Eric Biggers When supported by the hash algorithm, use crypto_shash_finup_mb() to interleave the hashing of pairs of data blocks. On some CPUs this nearly doubles hashing performance. The increase in overall throughput of cold-cache dm-verity reads that I'm seeing on arm64 and x86_64 is roughly 35% (though this metric is hard to measure as it jumps around a lot). For now this is only done on data blocks, not Merkle tree blocks. We could use finup_mb on Merkle tree blocks too, but that is less important as there aren't as many Merkle tree blocks as data blocks, and that would require some additional code restructuring. 
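The heart of the change looks roughly like this (sketch; crypto_shash_finup_mb() is the multibuffer interface added earlier in this series, and the pending-block bookkeeping is simplified here). Data blocks are batched up while walking the bio and then hashed with one interleaved call, each message starting from the pre-salted imported state:

	const u8 *data[DM_VERITY_MAX_PENDING_DATA_BLOCKS];
	u8 *real_digests[DM_VERITY_MAX_PENDING_DATA_BLOCKS];
	struct shash_desc *desc = verity_io_hash_req(v, io);

	/* data[i] and real_digests[i] were filled in for each pending block. */
	desc->tfm = v->shash_tfm;
	r = crypto_shash_import(desc, v->initial_hashstate) ?:
	    crypto_shash_finup_mb(desc, data, 1 << v->data_dev_block_bits,
				  real_digests, io->num_pending);

	/* Each real_digests[i] is then compared against that block's wanted digest. */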
Reviewed-by: Sami Tolvanen
Acked-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
 drivers/md/dm-verity-target.c | 166 ++++++++++++++++++++++++++--------
 drivers/md/dm-verity.h        |  33 ++++---
 2 files changed, 147 insertions(+), 52 deletions(-)

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 1f23354256d3..ff91bfa40302 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -181,22 +181,28 @@ static int verity_ahash_final(struct dm_verity *v, struct ahash_request *req,
 	r = crypto_wait_req(crypto_ahash_final(req), wait);
 out:
 	return r;
 }
 
+static int verity_ahash(struct dm_verity *v, struct dm_verity_io *io,
+			const u8 *data, size_t len, u8 *digest, bool may_sleep)
+{
+	struct ahash_request *req = verity_io_hash_req(v, io);
+	struct crypto_wait wait;
+
+	return verity_ahash_init(v, req, &wait, may_sleep) ?:
+	       verity_ahash_update(v, req, data, len, &wait) ?:
+	       verity_ahash_final(v, req, digest, &wait);
+}
+
 int verity_hash(struct dm_verity *v, struct dm_verity_io *io,
 		const u8 *data, size_t len, u8 *digest, bool may_sleep)
 {
 	int r;
 
 	if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) {
-		struct ahash_request *req = verity_io_hash_req(v, io);
-		struct crypto_wait wait;
-
-		r = verity_ahash_init(v, req, &wait, may_sleep) ?:
-		    verity_ahash_update(v, req, data, len, &wait) ?:
-		    verity_ahash_final(v, req, digest, &wait);
+		r = verity_ahash(v, io, data, len, digest, may_sleep);
 	} else {
 		struct shash_desc *desc = verity_io_hash_req(v, io);
 
 		desc->tfm = v->shash_tfm;
 		r = crypto_shash_import(desc, v->initial_hashstate) ?:
@@ -205,10 +211,38 @@ int verity_hash(struct dm_verity *v, struct dm_verity_io *io,
 	if (unlikely(r))
 		DMERR("Error hashing block: %d", r);
 	return r;
 }
 
+static int verity_hash_mb(struct dm_verity *v, struct dm_verity_io *io,
+			  const u8 *data[], size_t len, u8 *digests[],
+			  int num_blocks)
+{
+	int r = 0;
+
+	if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) {
+		int i;
+
+		/* Note: in practice num_blocks is always 1 in this case. */
+		for (i = 0; i < num_blocks; i++) {
+			r = verity_ahash(v, io, data[i], len, digests[i],
+					 !io->in_bh);
+			if (r)
+				break;
+		}
+	} else {
+		struct shash_desc *desc = verity_io_hash_req(v, io);
+
+		desc->tfm = v->shash_tfm;
+		r = crypto_shash_import(desc, v->initial_hashstate) ?:
+		    crypto_shash_finup_mb(desc, data, len, digests, num_blocks);
+	}
+	if (unlikely(r))
+		DMERR("Error hashing blocks: %d", r);
+	return r;
+}
+
 static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level,
 				 sector_t *hash_block, unsigned int *offset)
 {
 	sector_t position = verity_position_at_level(v, block, level);
 	unsigned int idx;
@@ -454,13 +488,16 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
 }
 
 static int verity_handle_data_hash_mismatch(struct dm_verity *v,
 					    struct dm_verity_io *io,
 					    struct bio *bio,
-					    const u8 *want_digest,
-					    sector_t blkno, u8 *data)
+					    struct pending_block *block)
 {
+	const u8 *want_digest = block->want_digest;
+	sector_t blkno = block->blkno;
+	u8 *data = block->data;
+
 	if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
 		/*
 		 * Error handling code (FEC included) cannot be run in the
 		 * BH workqueue, so fallback to a standard workqueue.
 		 */
@@ -484,10 +521,57 @@ static int verity_handle_data_hash_mismatch(struct dm_verity *v,
 		return -EIO;
 	}
 	return 0;
 }
 
+static void verity_clear_pending_blocks(struct dm_verity_io *io)
+{
+	int i;
+
+	for (i = io->num_pending - 1; i >= 0; i--) {
+		kunmap_local(io->pending_blocks[i].data);
+		io->pending_blocks[i].data = NULL;
+	}
+	io->num_pending = 0;
+}
+
+static int verity_verify_pending_blocks(struct dm_verity *v,
+					struct dm_verity_io *io,
+					struct bio *bio)
+{
+	const u8 *data[DM_VERITY_MAX_PENDING_DATA_BLOCKS];
+	u8 *real_digests[DM_VERITY_MAX_PENDING_DATA_BLOCKS];
+	int i;
+	int r;
+
+	for (i = 0; i < io->num_pending; i++) {
+		data[i] = io->pending_blocks[i].data;
+		real_digests[i] = io->pending_blocks[i].real_digest;
+	}
+
+	r = verity_hash_mb(v, io, data, 1 << v->data_dev_block_bits,
+			   real_digests, io->num_pending);
+	if (unlikely(r))
+		return r;
+
+	for (i = 0; i < io->num_pending; i++) {
+		struct pending_block *block = &io->pending_blocks[i];
+
+		if (likely(memcmp(block->real_digest, block->want_digest,
+				  v->digest_size) == 0)) {
+			if (v->validated_blocks)
+				set_bit(block->blkno, v->validated_blocks);
+		} else {
+			r = verity_handle_data_hash_mismatch(v, io, bio, block);
+			if (unlikely(r))
+				return r;
+		}
+	}
+	verity_clear_pending_blocks(io);
+	return 0;
+}
+
 /*
  * Verify one "dm_verity_io" structure.
  */
 static int verity_verify_io(struct dm_verity_io *io)
 {
@@ -495,10 +579,13 @@ static int verity_verify_io(struct dm_verity_io *io)
 	const unsigned int block_size = 1 << v->data_dev_block_bits;
 	struct bvec_iter iter_copy;
 	struct bvec_iter *iter;
 	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
 	unsigned int b;
+	int r;
+
+	io->num_pending = 0;
 
 	if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) {
 		/*
 		 * Copy the iterator in case we need to restart
 		 * verification in a work-queue.
@@ -508,36 +595,38 @@ static int verity_verify_io(struct dm_verity_io *io)
 	} else
 		iter = &io->iter;
 
 	for (b = 0; b < io->n_blocks;
 	     b++, bio_advance_iter(bio, iter, block_size)) {
-		int r;
-		sector_t cur_block = io->block + b;
+		sector_t blkno = io->block + b;
+		struct pending_block *block;
 		bool is_zero;
 		struct bio_vec bv;
 		void *data;
 
 		if (v->validated_blocks && bio->bi_status == BLK_STS_OK &&
-		    likely(test_bit(cur_block, v->validated_blocks)))
+		    likely(test_bit(blkno, v->validated_blocks)))
 			continue;
 
-		r = verity_hash_for_block(v, io, cur_block,
-					  verity_io_want_digest(v, io),
+		block = &io->pending_blocks[io->num_pending];
+
+		r = verity_hash_for_block(v, io, blkno, block->want_digest,
 					  &is_zero);
 		if (unlikely(r < 0))
-			return r;
+			goto error;
 
 		bv = bio_iter_iovec(bio, *iter);
 		if (unlikely(bv.bv_len < block_size)) {
 			/*
 			 * Data block spans pages. This should not happen,
 			 * since dm-verity sets dma_alignment to the data block
 			 * size minus 1, and dm-verity also doesn't allow the
 			 * data block size to be greater than PAGE_SIZE.
 			 */
 			DMERR_LIMIT("unaligned io (data block spans pages)");
-			return -EIO;
+			r = -EIO;
+			goto error;
 		}
 
 		data = bvec_kmap_local(&bv);
 
 		if (is_zero) {
@@ -547,34 +636,30 @@ static int verity_verify_io(struct dm_verity_io *io)
 			 */
 			memset(data, 0, block_size);
 			kunmap_local(data);
 			continue;
 		}
-
-		r = verity_hash(v, io, data, block_size,
-				verity_io_real_digest(v, io), !io->in_bh);
-		if (unlikely(r < 0)) {
-			kunmap_local(data);
-			return r;
+		block->data = data;
+		block->blkno = blkno;
+		if (++io->num_pending == v->mb_max_msgs) {
+			r = verity_verify_pending_blocks(v, io, bio);
+			if (unlikely(r))
+				goto error;
 		}
+	}
 
-		if (likely(memcmp(verity_io_real_digest(v, io),
-				  verity_io_want_digest(v, io), v->digest_size) == 0)) {
-			if (v->validated_blocks)
-				set_bit(cur_block, v->validated_blocks);
-			kunmap_local(data);
-			continue;
-		}
-		r = verity_handle_data_hash_mismatch(v, io, bio,
-						     verity_io_want_digest(v, io),
-						     cur_block, data);
-		kunmap_local(data);
+	if (io->num_pending) {
+		r = verity_verify_pending_blocks(v, io, bio);
 		if (unlikely(r))
-			return r;
+			goto error;
 	}
 
 	return 0;
+
+error:
+	verity_clear_pending_blocks(io);
+	return r;
 }
 
 /*
  * Skip verity work in response to I/O error when system is shutting down.
  */
@@ -1155,14 +1240,15 @@ static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name)
 
 	/*
 	 * Allocate the hash transformation object that this dm-verity instance
 	 * will use. The vast majority of dm-verity users use CPU-based
 	 * hashing, so when possible use the shash API to minimize the crypto
-	 * API overhead. If the ahash API resolves to a different driver
-	 * (likely an off-CPU hardware offload), use ahash instead. Also use
-	 * ahash if the obsolete dm-verity format with the appended salt is
-	 * being used, so that quirk only needs to be handled in one place.
+	 * API overhead, especially when multibuffer hashing is used. If the
+	 * ahash API resolves to a different driver (likely an off-CPU hardware
+	 * offload), use ahash instead. Also use ahash if the obsolete
+	 * dm-verity format with the appended salt is being used, so that quirk
+	 * only needs to be handled in one place.
 	 */
 	ahash = crypto_alloc_ahash(alg_name, 0,
 				   v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0);
 	if (IS_ERR(ahash)) {
 		ti->error = "Cannot initialize hash function";
@@ -1186,17 +1272,21 @@ static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name)
 		ahash = NULL;
 		v->shash_tfm = shash;
 		v->digest_size = crypto_shash_digestsize(shash);
 		v->hash_reqsize = sizeof(struct shash_desc) +
 				  crypto_shash_descsize(shash);
-		DMINFO("%s using shash \"%s\"", alg_name, driver_name);
+		v->mb_max_msgs = min(crypto_shash_mb_max_msgs(shash),
+				     DM_VERITY_MAX_PENDING_DATA_BLOCKS);
+		DMINFO("%s using shash \"%s\"%s", alg_name, driver_name,
+		       v->mb_max_msgs > 1 ? " (multibuffer)" : "");
 	} else {
 		v->ahash_tfm = ahash;
 		static_branch_inc(&ahash_enabled);
 		v->digest_size = crypto_ahash_digestsize(ahash);
 		v->hash_reqsize = sizeof(struct ahash_request) +
 				  crypto_ahash_reqsize(ahash);
+		v->mb_max_msgs = 1;
 		DMINFO("%s using ahash \"%s\"", alg_name, driver_name);
 	}
 	if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) {
 		ti->error = "Digest size too big";
 		return -EINVAL;
diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h
index 3951e5a4a156..85f4f40f3645 100644
--- a/drivers/md/dm-verity.h
+++ b/drivers/md/dm-verity.h
@@ -55,10 +55,11 @@ struct dm_verity {
 	unsigned char hash_per_block_bits;	/* log2(hashes in hash block) */
 	unsigned char levels;	/* the number of tree levels */
 	unsigned char version;
 	bool hash_failed:1;	/* set if hash of any block failed */
 	bool use_bh_wq:1;	/* try to verify in BH wq before normal work-queue */
+	unsigned char mb_max_msgs;	/* max multibuffer hashing interleaving factor */
 	unsigned int digest_size;	/* digest size for the current hash algorithm */
 	unsigned int hash_reqsize; /* the size of temporary space for crypto */
 	enum verity_mode mode;	/* mode for handling verification errors */
 	unsigned int corrupted_errs;/* Number of errors for corrupted blocks */
 
@@ -74,10 +75,19 @@ struct dm_verity {
 
 	struct dm_io_client *io;
 	mempool_t recheck_pool;
 };
 
+#define DM_VERITY_MAX_PENDING_DATA_BLOCKS	HASH_MAX_MB_MSGS
+
+struct pending_block {
+	void *data;
+	sector_t blkno;
+	u8 want_digest[HASH_MAX_DIGESTSIZE];
+	u8 real_digest[HASH_MAX_DIGESTSIZE];
+};
+
 struct dm_verity_io {
 	struct dm_verity *v;
 
 	/* original value of bio->bi_end_io */
 	bio_end_io_t *orig_bi_end_io;
@@ -90,12 +100,19 @@ struct dm_verity_io {
 
 	struct work_struct work;
 	struct work_struct bh_work;
 
 	u8 tmp_digest[HASH_MAX_DIGESTSIZE];
-	u8 real_digest[HASH_MAX_DIGESTSIZE];
-	u8 want_digest[HASH_MAX_DIGESTSIZE];
+
+	/*
+	 * This is the queue of data blocks that are pending verification. We
+	 * allow multiple blocks to be queued up in order to support multibuffer
+	 * hashing, i.e. interleaving the hashing of multiple messages. On many
+	 * CPUs this improves performance significantly.
+	 */
+	int num_pending;
+	struct pending_block pending_blocks[DM_VERITY_MAX_PENDING_DATA_BLOCKS];
 
 	/*
 	 * This struct is followed by a variable-sized hash request of size
 	 * v->hash_reqsize, either a struct ahash_request or a struct shash_desc
 	 * (depending on whether ahash_tfm or shash_tfm is being used). To
@@ -107,22 +124,10 @@ static inline void *verity_io_hash_req(struct dm_verity *v,
 					    struct dm_verity_io *io)
 {
 	return io + 1;
 }
 
-static inline u8 *verity_io_real_digest(struct dm_verity *v,
-					struct dm_verity_io *io)
-{
-	return io->real_digest;
-}
-
-static inline u8 *verity_io_want_digest(struct dm_verity *v,
-					struct dm_verity_io *io)
-{
-	return io->want_digest;
-}
-
 extern int verity_hash(struct dm_verity *v, struct dm_verity_io *io,
 		       const u8 *data, size_t len, u8 *digest, bool may_sleep);
 
 extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io,
 				 sector_t block, u8 *digest, bool *is_zero);