crypto: adiantum - flush destination page before unmapping

Message ID: 20231027203017.57004-1-ebiggers@kernel.org (mailing list archive)
State: Accepted
Delegated to: Herbert Xu
Series: crypto: adiantum - flush destination page before unmapping

Commit Message

Eric Biggers Oct. 27, 2023, 8:30 p.m. UTC
From: Eric Biggers <ebiggers@google.com>

Upon additional review, the new fast path in adiantum_finish() is
missing the call to flush_dcache_page() that scatterwalk_map_and_copy()
was doing.  It's apparently debatable whether flush_dcache_page() is
actually needed, as per the discussion at
https://lore.kernel.org/lkml/YYP1lAq46NWzhOf0@casper.infradead.org/T/#u.
However, it appears that currently all the helper functions that write
to a page, such as scatterwalk_map_and_copy(), memcpy_to_page(), and
memzero_page(), do the dcache flush.  So do it to be consistent.

Fixes: dadf5e56c967 ("crypto: adiantum - add fast path for single-page messages")
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/adiantum.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)


base-commit: f2b88bab69c86d4dab2bfd25a0e741d7df411f7a
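
The consistency argument in the commit message is easier to see next to the pattern those write-to-page helpers share: map the page locally, write to it, flush the dcache for that page, and only then drop the mapping. A minimal sketch of that pattern, loosely modeled on memcpy_to_page() from include/linux/highmem.h (paraphrased, not a verbatim copy; the _sketch name is only for illustration):

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch of the write-to-page helper pattern the commit message refers
 * to: write under a local mapping, flush the page, then unmap.
 */
static inline void memcpy_to_page_sketch(struct page *page, size_t offset,
					 const char *from, size_t len)
{
	char *to = kmap_local_page(page);

	memcpy(to + offset, from, len);
	flush_dcache_page(page);	/* the step the fast path was missing */
	kunmap_local(to);
}

The fast path in adiantum_finish() open-codes this sequence rather than going through one of the helpers, which is why the flush_dcache_page() call has to be added by hand.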

Patch

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 9ff3376f9ed3..60f3883b736a 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -293,30 +293,32 @@  static int adiantum_finish(struct skcipher_request *req)
 
 	/*
 	 * Second hash step
 	 *	enc: C_R = C_M - H_{K_H}(T, C_L)
 	 *	dec: P_R = P_M - H_{K_H}(T, P_L)
 	 */
 	rctx->u.hash_desc.tfm = tctx->hash;
 	le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
 	if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
 		/* Fast path for single-page destination */
-		void *virt = kmap_local_page(sg_page(dst)) + dst->offset;
+		struct page *page = sg_page(dst);
+		void *virt = kmap_local_page(page) + dst->offset;
 
 		err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
 					  (u8 *)&digest);
 		if (err) {
 			kunmap_local(virt);
 			return err;
 		}
 		le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
 		memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
+		flush_dcache_page(page);
 		kunmap_local(virt);
 	} else {
 		/* Slow path that works for any destination scatterlist */
 		err = adiantum_hash_message(req, dst, dst_nents, &digest);
 		if (err)
 			return err;
 		le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
 		scatterwalk_map_and_copy(&rctx->rbuf.bignum, dst,
 					 bulk_len, sizeof(le128), 1);
 	}
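
For comparison, the slow path already gets the flush from the scatterwalk layer: when scatterwalk_map_and_copy() is called with out != 0, it flushes each destination page as it finishes writing to it. A rough sketch of that write-side completion step, paraphrased from memory rather than copied from crypto/scatterwalk.c (the _sketch name and the exact field arithmetic are approximations):

#include <crypto/scatterwalk.h>
#include <linux/highmem.h>

/*
 * Rough sketch of what the scatterwalk helpers do after copying a
 * chunk *into* the scatterlist (out != 0): flush the page that was
 * just written before the walk advances.  Simplified, not verbatim.
 */
static void scatterwalk_pagedone_sketch(struct scatter_walk *walk, int out)
{
	if (out) {
		struct page *page;

		page = sg_page(walk->sg) + ((walk->offset - 1) >> PAGE_SHIFT);
		flush_dcache_page(page);
	}
}

Since the fast path bypasses the scatterwalk helpers entirely, that responsibility moves into adiantum_finish() itself, and the patch places the flush just before kunmap_local(), matching the subject line.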