[26/29] net/tls: use the new scatterwalk functions

Message ID: 20241221091056.282098-27-ebiggers@kernel.org
State: Under Review
Delegated to: Herbert Xu
Series: crypto: scatterlist handling improvements

Commit Message

Eric Biggers Dec. 21, 2024, 9:10 a.m. UTC
From: Eric Biggers <ebiggers@google.com>

Replace calls to the deprecated function scatterwalk_copychunks() with
memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), or
scatterwalk_skip() as appropriate.
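
For reference, here is the rough before/after mapping. It is only a
sketch, reusing the variable names from the diff below and the helper
signatures introduced earlier in this series, not code copied from the
tree:

        /* old: the last argument of scatterwalk_copychunks() selected
         * the direction (0 = read, 1 = write, 2 = skip)
         */
        scatterwalk_copychunks(buf, in, len, 0);   /* walk -> buffer */
        scatterwalk_copychunks(buf, out, len, 1);  /* buffer -> walk */
        scatterwalk_copychunks(NULL, in, len, 2);  /* skip len bytes */

        /* new: one helper per operation */
        memcpy_from_scatterwalk(buf, in, len);
        memcpy_to_scatterwalk(out, buf, len);
        scatterwalk_skip(in, len);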

The new functions behave more intuitively and eliminate the need to call
scatterwalk_done() or scatterwalk_pagedone().  Those calls were not
always being made when needed, so the old code also appears to have had
a bug where the dcache of the destination page(s) was not always flushed
on architectures that require it.
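
Concretely, in the old calling convention the flush of the destination
dcache only happened inside scatterwalk_done()/scatterwalk_pagedone()
when their "out" argument was set, so omitting that call after a copy
into the walk meant no flush.  A simplified sketch, again reusing the
names from the diff below rather than quoting the tree:

        /* old: the flush depended on a separate, easy-to-forget call */
        scatterwalk_copychunks(buf, out, len, 1);
        scatterwalk_pagedone(out, 1, 1);  /* flush_dcache_page() was done here */

        /* new: the copy and the destination-page finishing step
         * (including any dcache flush) happen in one call
         */
        memcpy_to_scatterwalk(out, buf, len);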

Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: netdev@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
---

This patch is part of a long series touching many files, so I have
limited the Cc list on the full series.  If you want the full series and
did not receive it, please retrieve it from lore.kernel.org.

 net/tls/tls_device_fallback.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

Patch

diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index f9e3d3d90dcf..ec7017c80b6a 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -67,20 +67,17 @@  static int tls_enc_record(struct aead_request *aead_req,
 	DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable);
 
 	buf_size = TLS_HEADER_SIZE + cipher_desc->iv;
 	len = min_t(int, *in_len, buf_size);
 
-	scatterwalk_copychunks(buf, in, len, 0);
-	scatterwalk_copychunks(buf, out, len, 1);
+	memcpy_from_scatterwalk(buf, in, len);
+	memcpy_to_scatterwalk(out, buf, len);
 
 	*in_len -= len;
 	if (!*in_len)
 		return 0;
 
-	scatterwalk_pagedone(in, 0, 1);
-	scatterwalk_pagedone(out, 1, 1);
-
 	len = buf[4] | (buf[3] << 8);
 	len -= cipher_desc->iv;
 
 	tls_make_aad(aad, len - cipher_desc->tag, (char *)&rcd_sn, buf[0], prot);
 
@@ -108,14 +105,12 @@  static int tls_enc_record(struct aead_request *aead_req,
 
 		*in_len = 0;
 	}
 
 	if (*in_len) {
-		scatterwalk_copychunks(NULL, in, len, 2);
-		scatterwalk_pagedone(in, 0, 1);
-		scatterwalk_copychunks(NULL, out, len, 2);
-		scatterwalk_pagedone(out, 1, 1);
+		scatterwalk_skip(in, len);
+		scatterwalk_skip(out, len);
 	}
 
 	len -= cipher_desc->tag;
 	aead_request_set_crypt(aead_req, sg_in, sg_out, len, iv);
 
@@ -160,13 +155,10 @@  static int tls_enc_records(struct aead_request *aead_req,
 				    cpu_to_be64(rcd_sn), &in, &out, &len, prot);
 		rcd_sn++;
 
 	} while (rc == 0 && len);
 
-	scatterwalk_done(&in, 0, 0);
-	scatterwalk_done(&out, 1, 0);
-
 	return rc;
 }
 
 /* Can't use icsk->icsk_af_ops->send_check here because the ip addresses
  * might have been changed by NAT.