[RFC,03/24] crypto: Add 'krb5enc' hash and cipher AEAD algorithm

Message ID 20250117183538.881618-4-dhowells@redhat.com (mailing list archive)
State Under Review
Delegated to: Herbert Xu
Series crypto: Add generic Kerberos library with AEAD template for hash-then-crypt

Commit Message

David Howells Jan. 17, 2025, 6:35 p.m. UTC
Add an AEAD template that does hash-then-cipher (unlike authenc that does
cipher-then-hash).  This is required for a number of Kerberos 5 encoding
types.
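
Schematically, and glossing over the confounder, padding and key-usage
details (P = plaintext, A = associated data, Ke/Ki/Ka = keys):

	krb5enc:  C = E(Ke, P);  T = H(Ki, A || P);  output = C || T
	authenc:  C = E(Ke, P);  T = H(Ka, A || C);  output = C || T

so krb5enc can run the hash before (or in parallel with) the cipher,
whereas authenc's hash must wait for the ciphertext.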

[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned.  It may be necessary to do this here too for performance reasons -
but there are considerations both ways:

 (1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
     x86_64 that should be used instead of having two ciphers;

 (2) secondly, none of the hardware offload drivers seem to offer CTS
     support (Intel QAT does not, for instance).

However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.
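
One crude probe might be to allocate the algorithm and see which driver the
API bound to it - an untested sketch; the "-generic" name check below is a
naming-convention heuristic, not a guaranteed API:

	#include <crypto/skcipher.h>
	#include <linux/err.h>
	#include <linux/string.h>

	static bool have_accelerated_cts(void)
	{
		struct crypto_skcipher *tfm;
		bool accel;

		tfm = crypto_alloc_skcipher("cts(cbc(aes))", 0, 0);
		if (IS_ERR(tfm))
			return false;	/* No CTS implementation at all */

		/* The generic template instance resolves to something like
		 * "cts(cbc(aes-generic))"; optimised implementations carry
		 * arch suffixes such as "-aesni" instead.
		 */
		accel = !strstr(crypto_skcipher_driver_name(tfm), "generic");
		crypto_free_skcipher(tfm);
		return accel;
	}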

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
---
 crypto/Kconfig        |  12 ++
 crypto/Makefile       |   1 +
 crypto/krb5enc.c      | 491 ++++++++++++++++++++++++++++++++++++++++++
 include/crypto/krb5.h |   7 +
 4 files changed, 511 insertions(+)
 create mode 100644 crypto/krb5enc.c

Comments

Simon Horman Jan. 20, 2025, 1:57 p.m. UTC | #1
On Fri, Jan 17, 2025 at 06:35:12PM +0000, David Howells wrote:
> Add an AEAD template that does hash-then-cipher (unlike authenc that does
> cipher-then-hash).  This is required for a number of Kerberos 5 encoding
> types.
> 
> [!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
> ciphers, one non-CTS and one CTS, using the former to do all the aligned
> blocks and the latter to do the last two blocks if they aren't also
> aligned.  It may be necessary to do this here too for performance reasons -
> but there are considerations both ways:
> 
>  (1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
>      x86_64 that should be used instead of having two ciphers;
> 
>  (2) secondly, none of the hardware offload drivers seem to offer CTS
>      support (Intel QAT does not, for instance).
> 
> However, I don't know if it's possible to query the crypto API to find out
> whether there's an optimised CTS algorithm available.
> 
> Signed-off-by: David Howells <dhowells@redhat.com>

...

> diff --git a/crypto/krb5enc.c b/crypto/krb5enc.c

...

> +static int krb5enc_verify_hash(struct aead_request *req, void *hash)
> +{
> +	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
> +	struct aead_instance *inst = aead_alg_instance(krb5enc);
> +	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
> +	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
> +	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
> +	unsigned int authsize = crypto_aead_authsize(krb5enc);
> +	u8 *ihash = ahreq->result + authsize;
> +
> +	scatterwalk_map_and_copy(ihash, req->src, ahreq->nbytes, authsize, 0);
> +
> +	if (crypto_memneq(ihash, ahreq->result, authsize))
> +		return -EBADMSG;
> +	return 0;
> +}
> +
> +static void krb5enc_decrypt_hash_done(void *data, int err)
> +{
> +	struct aead_request *req = data;
> +
> +	if (err)
> +		return krb5enc_request_complete(req, err);
> +
> +	err = krb5enc_verify_hash(req, 0);

Hi David,

Sparse complains that the second argument to krb5enc_verify_hash should be
a pointer rather than an integer. So perhaps this would be slightly better
expressed as (completely untested!):

	err = krb5enc_verify_hash(req, NULL);

> +	krb5enc_request_complete(req, err);

...
> +}
David Howells Jan. 20, 2025, 2:25 p.m. UTC | #2
Simon Horman <horms@kernel.org> wrote:

> > +static void krb5enc_decrypt_hash_done(void *data, int err)
> > +{
> > +	struct aead_request *req = data;
> > +
> > +	if (err)
> > +		return krb5enc_request_complete(req, err);
> > +
> > +	err = krb5enc_verify_hash(req, 0);
> 
> Hi David,
> 
> Sparse complains that the second argument to krb5enc_verify_hash should be
> a pointer rather than an integer. So perhaps this would be slightly better
> expressed as (completely untested!):
> 
> 	err = krb5enc_verify_hash(req, NULL);

Actually, no.  It should be "ahreq->result + authsize" and
krb5enc_verify_hash() shouldn't calculate ihash, but use its hash parameter.

I wonder if the testmgr driver tests running the algorithms asynchronously...

Thanks,
David
Eric Biggers Jan. 20, 2025, 5:39 p.m. UTC | #3
On Mon, Jan 20, 2025 at 02:25:11PM +0000, David Howells wrote:
> 
> I wonder if the testmgr driver tests running the algorithms asynchronously...
> 

Multiple requests in parallel, I think you mean?  No, it doesn't, but it should.

- Eric
David Howells Jan. 20, 2025, 6:59 p.m. UTC | #4
Eric Biggers <ebiggers@kernel.org> wrote:

> Multiple requests in parallel, I think you mean?  No, it doesn't, but it
> should.

Not so much.  This bug is on the asynchronous path and is not tested by my
rxrpc/rxgk code, which only exercises the synchronous path.  I haven't tried to
make that asynchronous yet.  I presume testmgr also only tests the sync path.

David
Eric Biggers Jan. 20, 2025, 7:12 p.m. UTC | #5
On Mon, Jan 20, 2025 at 06:59:40PM +0000, David Howells wrote:
> Eric Biggers <ebiggers@kernel.org> wrote:
> 
> > Multiple requests in parallel, I think you mean?  No, it doesn't, but it
> > should.
> 
> Not so much.  This bug is on the asynchronous path and is not tested by my
> rxrpc/rxgk code, which only exercises the synchronous path.  I haven't tried to
> make that asynchronous yet.  I presume testmgr also only tests the sync path.
> 
> David
> 

I'm not sure I understand your question.  Users of the crypto API can exclude
asynchronous algorithms when selecting one, but the self-tests do not do that.

In any case, why would you need anything to be asynchronous at all here?

- Eric
David Howells Jan. 20, 2025, 8:18 p.m. UTC | #6
Eric Biggers <ebiggers@kernel.org> wrote:

> In any case, why would you need anything to be asynchronous at all here?

Because authenc, which I copied, passes the asynchronicity mode onto the two
algos it runs (one encrypt, one hash).  If authenc is run synchronously, then
the algos are run synchronously and serially; but if authenc is run async,
then the algos are run asynchronously - but they may still have to be run
serially[*] and the second is dispatched from the completion handler of the
first.  So two different paths through the code exist, and rxgk and testmgr
only test the synchronous path.

[*] Because in authenc-compatible encoding types, the output of the encryption
is hashed.  Older krb5 encodings hash the plaintext, so the hash generation
and the encrypt can be run in parallel.  For decrypting, the reverse is true;
authenc may be able to do the decrypt and the hash in parallel...  But
parallelisation also requires that the input and output buffers are not the
same.

Anyway.  If it can be done asynchronously, that should probably be tested.
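
For reference, the fork between the two paths, condensed from the decrypt
side of the patch below (request set-up elided, and using the one-argument
krb5enc_verify_hash() from the follow-up fix):

	static int krb5enc_dispatch_decrypt_hash(struct aead_request *req)
	{
		/* ahreq: hash sub-request from the request context
		 * (set-up elided). */
		int err = crypto_ahash_digest(ahreq);

		if (err < 0)		/* includes -EINPROGRESS */
			return err;
		return krb5enc_verify_hash(req);	/* sync path */
	}

	static void krb5enc_decrypt_hash_done(void *data, int err)
	{
		struct aead_request *req = data;	/* async path */

		if (!err)
			err = krb5enc_verify_hash(req);
		krb5enc_request_complete(req, err);
	}

The verification in the completion handler is the code that only runs when
the underlying hash goes asynchronous.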

David
Eric Biggers Jan. 20, 2025, 8:47 p.m. UTC | #7
On Mon, Jan 20, 2025 at 08:18:14PM +0000, David Howells wrote:
> Eric Biggers <ebiggers@kernel.org> wrote:
> 
> > In any case, why would you need anything to be asynchronous at all here?
> 
> Because authenc, which I copied, passes the asynchronicity mode onto the two
> algos it runs (one encrypt, one hash).  If authenc is run synchronously, then
> the algos are run synchronously and serially; but if authenc is run async,
> then the algos are run asynchronously - but they may still have to be run
> serially[*] and the second is dispatched from the completion handler of the
> first.  So two different paths through the code exist, and rxgk and testmgr
> only test the synchronous path.

No, it goes in the other direction.  The underlying algorithms decide whether
they are asynchronous or not, and that gets passed up.  It sounds like what you
want to do is test your template in the case where the underlying algorithms are
asynchronous.  There is a way to do that by wrapping the underlying algorithms
with cryptd.  For example the following works with gcm:

python3 <<EOF
import socket
s = socket.socket(socket.AF_ALG, 5, 0)
s.bind(("aead", "gcm_base(cryptd(ctr(aes-generic)),cryptd(ghash-generic))"))
EOF

This really should just be thought of as complying with the outdated design of
the crypto API, though.  In practice synchronous is the only case that really
matters.
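
An equivalent probe for this template might look like the C below - an
untested sketch that assumes krb5enc accepts an arbitrary hash/cipher pair
and that the instance name fits in the 64-byte salg_name field:

	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <linux/if_alg.h>

	int main(void)
	{
		struct sockaddr_alg sa = {
			.salg_family = AF_ALG,
			.salg_type   = "aead",
			/* cryptd makes both underlying algorithms
			 * asynchronous, so requests issued on this socket go
			 * through the template's completion handlers.
			 */
			.salg_name   =
			"krb5enc(cryptd(hmac(sha1-generic)),cryptd(cbc(aes-generic)))",
		};
		int fd = socket(AF_ALG, SOCK_SEQPACKET, 0);

		if (fd < 0 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
			perror("krb5enc bind");
			return 1;
		}
		puts("krb5enc instantiated over async (cryptd) algorithms");
		return 0;
	}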

> [*] Because in authenc-compatible encoding types, the output of the encryption
> is hashed.  Older krb5 encodings hash the plaintext and the hash generation
> and the encrypt can be run in parallel.  For decrypting, the reverse is true;
> authenc may be able to do the decrypt and the hash in parallel...  But
> parallellisation also requires that the input and output buffers are not the
> same.

The right way to optimize cases like that is to interleave the two computations.
Look at how the AES-GCM assembly code interleaves AES-CTR and GHASH for example.
Doing something with async threads is the completely wrong solution here and
would be much slower.  The amount of time needed to process a single message is
simply far too short for multithreading to be appropriate on a per message
basis.

- Eric
David Howells Jan. 20, 2025, 11:12 p.m. UTC | #8
David Howells <dhowells@redhat.com> wrote:

> > Sparse complains that the second argument to krb5enc_verify_hash should be
> > a pointer rather than an integer. So perhaps this would be slightly better
> > expressed as (completely untested!):
> > 
> > 	err = krb5enc_verify_hash(req, NULL);
> 
> Actually, no.  It should be "ahreq->result + authsize" and
> krb5enc_verify_hash() shouldn't calculate ihash, but use its hash parameter.

Ah.  That's wrong also.  I'm going to drop the second parameter and just
calculate the hash pointers directly.

David
---
diff --git a/crypto/krb5enc.c b/crypto/krb5enc.c
index 931387a8ee6f..e5cec47e7e42 100644
--- a/crypto/krb5enc.c
+++ b/crypto/krb5enc.c
@@ -230,7 +230,7 @@ static int krb5enc_encrypt(struct aead_request *req)
 	return krb5enc_dispatch_encrypt(req, aead_request_flags(req));
 }
 
-static int krb5enc_verify_hash(struct aead_request *req, void *hash)
+static int krb5enc_verify_hash(struct aead_request *req)
 {
 	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
 	struct aead_instance *inst = aead_alg_instance(krb5enc);
@@ -238,11 +238,12 @@ static int krb5enc_verify_hash(struct aead_request *req, void *hash)
 	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
 	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
 	unsigned int authsize = crypto_aead_authsize(krb5enc);
-	u8 *ihash = ahreq->result + authsize;
+	u8 *calc_hash = areq_ctx->tail;
+	u8 *msg_hash  = areq_ctx->tail + authsize;
 
-	scatterwalk_map_and_copy(ihash, req->src, ahreq->nbytes, authsize, 0);
+	scatterwalk_map_and_copy(msg_hash, req->src, ahreq->nbytes, authsize, 0);
 
-	if (crypto_memneq(ihash, ahreq->result, authsize))
+	if (crypto_memneq(msg_hash, calc_hash, authsize))
 		return -EBADMSG;
 	return 0;
 }
@@ -254,7 +255,7 @@ static void krb5enc_decrypt_hash_done(void *data, int err)
 	if (err)
 		return krb5enc_request_complete(req, err);
 
-	err = krb5enc_verify_hash(req, 0);
+	err = krb5enc_verify_hash(req);
 	krb5enc_request_complete(req, err);
 }
 
@@ -284,7 +285,7 @@ static int krb5enc_dispatch_decrypt_hash(struct aead_request *req)
 	if (err < 0)
 		return err;
 
-	return krb5enc_verify_hash(req, hash);
+	return krb5enc_verify_hash(req);
 }
 
 /*
@@ -352,7 +353,7 @@ static int krb5enc_init_tfm(struct crypto_aead *tfm)
 	crypto_aead_set_reqsize(
 		tfm,
 		sizeof(struct krb5enc_request_ctx) +
-		ictx->reqoff +
+		ictx->reqoff + /* Space for two checksums */
 		umax(sizeof(struct ahash_request) + crypto_ahash_reqsize(auth),
 		     sizeof(struct skcipher_request) + crypto_skcipher_reqsize(enc)));

Patch

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 6b0bfbccac08..18b1a3b3a258 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -228,6 +228,18 @@  config CRYPTO_AUTHENC
 
 	  This is required for IPSec ESP (XFRM_ESP).
 
+config CRYPTO_KRB5ENC
+	tristate "Kerberos 5 combined hash+cipher support"
+	select CRYPTO_AEAD
+	select CRYPTO_SKCIPHER
+	select CRYPTO_MANAGER
+	select CRYPTO_HASH
+	select CRYPTO_NULL
+	help
+	  Combined hash and cipher support for Kerberos 5 RFC3961 simplified
+	  profile.  This is required for Kerberos 5-style encryption, used by
+	  sunrpc/NFS and rxrpc/AFS.
+
 config CRYPTO_TEST
 	tristate "Testing module"
 	depends on m || EXPERT
diff --git a/crypto/Makefile b/crypto/Makefile
index 77abca715445..eb40638c6c04 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -160,6 +160,7 @@  CFLAGS_crc32_generic.o += -DARCH=$(ARCH)
 obj-$(CONFIG_CRYPTO_CRCT10DIF) += crct10dif_common.o crct10dif_generic.o
 obj-$(CONFIG_CRYPTO_CRC64_ROCKSOFT) += crc64_rocksoft_generic.o
 obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o authencesn.o
+obj-$(CONFIG_CRYPTO_KRB5ENC) += krb5enc.o
 obj-$(CONFIG_CRYPTO_LZO) += lzo.o lzo-rle.o
 obj-$(CONFIG_CRYPTO_LZ4) += lz4.o
 obj-$(CONFIG_CRYPTO_LZ4HC) += lz4hc.o
diff --git a/crypto/krb5enc.c b/crypto/krb5enc.c
new file mode 100644
index 000000000000..931387a8ee6f
--- /dev/null
+++ b/crypto/krb5enc.c
@@ -0,0 +1,491 @@ 
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * AEAD wrapper for Kerberos 5 RFC3961 simplified profile.
+ *
+ * Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * Derived from authenc:
+ * Copyright (c) 2007-2015 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/krb5.h>
+#include <crypto/authenc.h>
+#include <crypto/scatterwalk.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/rtnetlink.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+struct krb5enc_instance_ctx {
+	struct crypto_ahash_spawn auth;
+	struct crypto_skcipher_spawn enc;
+	unsigned int reqoff;
+};
+
+struct krb5enc_ctx {
+	struct crypto_ahash *auth;
+	struct crypto_skcipher *enc;
+};
+
+struct krb5enc_request_ctx {
+	struct scatterlist src[2];
+	struct scatterlist dst[2];
+	char tail[];
+};
+
+static void krb5enc_request_complete(struct aead_request *req, int err)
+{
+	if (err != -EINPROGRESS)
+		aead_request_complete(req, err);
+}
+
+/**
+ * crypto_krb5enc_extractkeys - Extract Ke and Ki keys from the key blob.
+ * @keys: Where to put the key sizes and pointers
+ * @key: Encoded key material
+ * @keylen: Amount of key material
+ *
+ * Decode the key blob we're given.  It starts with a __be32 that indicates the
+ * format.  Format 1 is:
+ *
+ *  be32 1 || be32 Ke len || be32 Ki len || Ke || Ki
+ */
+int crypto_krb5enc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key,
+			       unsigned int keylen)
+{
+	__be32 *k = (void *)key;
+	u32 format, Ke_len, Ki_len;
+
+	if (keylen < 16)
+		return -EINVAL;
+
+	format = get_unaligned_be32(k++);
+	if (format != 1)
+		return -EINVAL;
+	Ke_len = get_unaligned_be32(k++);
+	Ki_len = get_unaligned_be32(k++);
+	keylen -= 12;
+
+	if (Ke_len + Ki_len != keylen)
+		return -EINVAL;
+
+	keys->enckeylen		= Ke_len;
+	keys->enckey		= (void *)k;
+	keys->authkeylen	= Ki_len;
+	keys->authkey		= (void *)k + Ke_len;
+	return 0;
+}
+EXPORT_SYMBOL(crypto_krb5enc_extractkeys);
+
+static int krb5enc_setkey(struct crypto_aead *krb5enc, const u8 *key,
+			  unsigned int keylen)
+{
+	struct crypto_authenc_keys keys;
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
+	struct crypto_skcipher *enc = ctx->enc;
+	struct crypto_ahash *auth = ctx->auth;
+	unsigned int flags = crypto_aead_get_flags(krb5enc);
+	int err = -EINVAL;
+
+	if (crypto_krb5enc_extractkeys(&keys, key, keylen) != 0)
+		goto out;
+
+	crypto_ahash_clear_flags(auth, CRYPTO_TFM_REQ_MASK);
+	crypto_ahash_set_flags(auth, flags & CRYPTO_TFM_REQ_MASK);
+	err = crypto_ahash_setkey(auth, keys.authkey, keys.authkeylen);
+	if (err)
+		goto out;
+
+	crypto_skcipher_clear_flags(enc, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(enc, flags & CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(enc, keys.enckey, keys.enckeylen);
+out:
+	memzero_explicit(&keys, sizeof(keys));
+	return err;
+}
+
+static void krb5enc_encrypt_done(void *data, int err)
+{
+	struct aead_request *req = data;
+
+	krb5enc_request_complete(req, err);
+}
+
+/*
+ * Start the encryption of the plaintext.  We skip over the associated data as
+ * that only gets included in the hash.
+ */
+static int krb5enc_dispatch_encrypt(struct aead_request *req,
+				    unsigned int flags)
+{
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+	struct aead_instance *inst = aead_alg_instance(krb5enc);
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
+	struct crypto_skcipher *enc = ctx->enc;
+	struct skcipher_request *skreq = (void *)(areq_ctx->tail +
+						  ictx->reqoff);
+	struct scatterlist *src, *dst;
+
+	src = scatterwalk_ffwd(areq_ctx->src, req->src, req->assoclen);
+	if (req->src == req->dst)
+		dst = src;
+	else
+		dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+
+	skcipher_request_set_tfm(skreq, enc);
+	skcipher_request_set_callback(skreq, aead_request_flags(req),
+				      krb5enc_encrypt_done, req);
+	skcipher_request_set_crypt(skreq, src, dst, req->cryptlen, req->iv);
+
+	return crypto_skcipher_encrypt(skreq);
+}
+
+/*
+ * Insert the hash into the checksum field in the destination buffer directly
+ * after the encrypted region.
+ */
+static void krb5enc_insert_checksum(struct aead_request *req, u8 *hash)
+{
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+
+	scatterwalk_map_and_copy(hash, req->dst,
+				 req->assoclen + req->cryptlen,
+				 crypto_aead_authsize(krb5enc), 1);
+}
+
+/*
+ * Upon completion of an asynchronous digest, transfer the hash to the checksum
+ * field.
+ */
+static void krb5enc_encrypt_ahash_done(void *data, int err)
+{
+	struct aead_request *req = data;
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+	struct aead_instance *inst = aead_alg_instance(krb5enc);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
+	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+
+	if (err)
+		return krb5enc_request_complete(req, err);
+
+	krb5enc_insert_checksum(req, ahreq->result);
+
+	err = krb5enc_dispatch_encrypt(req, 0);
+	if (err != -EINPROGRESS)
+		aead_request_complete(req, err);
+}
+
+/*
+ * Start the digest of the plaintext for encryption.  In theory, this could be
+ * run in parallel with the encryption, provided the src and dst buffers don't
+ * overlap.
+ */
+static int krb5enc_dispatch_encrypt_hash(struct aead_request *req)
+{
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+	struct aead_instance *inst = aead_alg_instance(krb5enc);
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct crypto_ahash *auth = ctx->auth;
+	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
+	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+	u8 *hash = areq_ctx->tail;
+	int err;
+
+	ahash_request_set_callback(ahreq, aead_request_flags(req),
+				   krb5enc_encrypt_ahash_done, req);
+	ahash_request_set_tfm(ahreq, auth);
+	ahash_request_set_crypt(ahreq, req->src, hash, req->assoclen + req->cryptlen);
+
+	err = crypto_ahash_digest(ahreq);
+	if (err)
+		return err;
+
+	krb5enc_insert_checksum(req, hash);
+	return 0;
+}
+
+/*
+ * Process an encryption operation.  We can perform the cipher and the hash in
+ * parallel, provided the src and dst buffers are separate.
+ */
+static int krb5enc_encrypt(struct aead_request *req)
+{
+	int err;
+
+	err = krb5enc_dispatch_encrypt_hash(req);
+	if (err < 0)
+		return err;
+
+	return krb5enc_dispatch_encrypt(req, aead_request_flags(req));
+}
+
+static int krb5enc_verify_hash(struct aead_request *req, void *hash)
+{
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+	struct aead_instance *inst = aead_alg_instance(krb5enc);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
+	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+	unsigned int authsize = crypto_aead_authsize(krb5enc);
+	u8 *ihash = ahreq->result + authsize;
+
+	scatterwalk_map_and_copy(ihash, req->src, ahreq->nbytes, authsize, 0);
+
+	if (crypto_memneq(ihash, ahreq->result, authsize))
+		return -EBADMSG;
+	return 0;
+}
+
+static void krb5enc_decrypt_hash_done(void *data, int err)
+{
+	struct aead_request *req = data;
+
+	if (err)
+		return krb5enc_request_complete(req, err);
+
+	err = krb5enc_verify_hash(req, 0);
+	krb5enc_request_complete(req, err);
+}
+
+/*
+ * Dispatch the hashing of the plaintext after we've done the decryption.
+ */
+static int krb5enc_dispatch_decrypt_hash(struct aead_request *req)
+{
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+	struct aead_instance *inst = aead_alg_instance(krb5enc);
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
+	struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff);
+	struct crypto_ahash *auth = ctx->auth;
+	unsigned int authsize = crypto_aead_authsize(krb5enc);
+	u8 *hash = areq_ctx->tail;
+	int err;
+
+	ahash_request_set_tfm(ahreq, auth);
+	ahash_request_set_crypt(ahreq, req->dst, hash,
+				req->assoclen + req->cryptlen - authsize);
+	ahash_request_set_callback(ahreq, aead_request_flags(req),
+				   krb5enc_decrypt_hash_done, req);
+
+	err = crypto_ahash_digest(ahreq);
+	if (err < 0)
+		return err;
+
+	return krb5enc_verify_hash(req, hash);
+}
+
+/*
+ * Dispatch the decryption of the ciphertext.
+ */
+static int krb5enc_dispatch_decrypt(struct aead_request *req)
+{
+	struct crypto_aead *krb5enc = crypto_aead_reqtfm(req);
+	struct aead_instance *inst = aead_alg_instance(krb5enc);
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(krb5enc);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct krb5enc_request_ctx *areq_ctx = aead_request_ctx(req);
+	struct skcipher_request *skreq = (void *)(areq_ctx->tail +
+						  ictx->reqoff);
+	unsigned int authsize = crypto_aead_authsize(krb5enc);
+	struct scatterlist *src, *dst;
+
+	src = scatterwalk_ffwd(areq_ctx->src, req->src, req->assoclen);
+	dst = src;
+
+	if (req->src != req->dst)
+		dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+
+	skcipher_request_set_tfm(skreq, ctx->enc);
+	skcipher_request_set_callback(skreq, aead_request_flags(req),
+				      req->base.complete, req->base.data);
+	skcipher_request_set_crypt(skreq, src, dst,
+				   req->cryptlen - authsize, req->iv);
+
+	return crypto_skcipher_decrypt(skreq);
+}
+
+static int krb5enc_decrypt(struct aead_request *req)
+{
+	int err;
+
+	err = krb5enc_dispatch_decrypt(req);
+	if (err < 0)
+		return err;
+
+	return krb5enc_dispatch_decrypt_hash(req);
+}
+
+static int krb5enc_init_tfm(struct crypto_aead *tfm)
+{
+	struct aead_instance *inst = aead_alg_instance(tfm);
+	struct krb5enc_instance_ctx *ictx = aead_instance_ctx(inst);
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(tfm);
+	struct crypto_ahash *auth;
+	struct crypto_skcipher *enc;
+	int err;
+
+	auth = crypto_spawn_ahash(&ictx->auth);
+	if (IS_ERR(auth))
+		return PTR_ERR(auth);
+
+	enc = crypto_spawn_skcipher(&ictx->enc);
+	err = PTR_ERR(enc);
+	if (IS_ERR(enc))
+		goto err_free_ahash;
+
+	ctx->auth = auth;
+	ctx->enc = enc;
+
+	crypto_aead_set_reqsize(
+		tfm,
+		sizeof(struct krb5enc_request_ctx) +
+		ictx->reqoff +
+		umax(sizeof(struct ahash_request) + crypto_ahash_reqsize(auth),
+		     sizeof(struct skcipher_request) + crypto_skcipher_reqsize(enc)));
+
+	return 0;
+
+err_free_ahash:
+	crypto_free_ahash(auth);
+	return err;
+}
+
+static void krb5enc_exit_tfm(struct crypto_aead *tfm)
+{
+	struct krb5enc_ctx *ctx = crypto_aead_ctx(tfm);
+
+	crypto_free_ahash(ctx->auth);
+	crypto_free_skcipher(ctx->enc);
+}
+
+static void krb5enc_free(struct aead_instance *inst)
+{
+	struct krb5enc_instance_ctx *ctx = aead_instance_ctx(inst);
+
+	crypto_drop_skcipher(&ctx->enc);
+	crypto_drop_ahash(&ctx->auth);
+	kfree(inst);
+}
+
+/*
+ * Create an instance of a template for a specific hash and cipher pair.
+ */
+static int krb5enc_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+	struct krb5enc_instance_ctx *ictx;
+	struct skcipher_alg_common *enc;
+	struct hash_alg_common *auth;
+	struct aead_instance *inst;
+	struct crypto_alg *auth_base;
+	u32 mask;
+	int err;
+
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
+	if (err) {
+		pr_err("attr_type failed\n");
+		return err;
+	}
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+	ictx = aead_instance_ctx(inst);
+
+	err = crypto_grab_ahash(&ictx->auth, aead_crypto_instance(inst),
+				crypto_attr_alg_name(tb[1]), 0, mask);
+	if (err) {
+		pr_err("grab ahash failed\n");
+		goto err_free_inst;
+	}
+	auth = crypto_spawn_ahash_alg(&ictx->auth);
+	auth_base = &auth->base;
+
+	err = crypto_grab_skcipher(&ictx->enc, aead_crypto_instance(inst),
+				   crypto_attr_alg_name(tb[2]), 0, mask);
+	if (err) {
+		pr_err("grab skcipher failed\n");
+		goto err_free_inst;
+	}
+	enc = crypto_spawn_skcipher_alg_common(&ictx->enc);
+
+	ictx->reqoff = 2 * auth->digestsize;
+
+	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+		     "krb5enc(%s,%s)", auth_base->cra_name,
+		     enc->base.cra_name) >=
+	    CRYPTO_MAX_ALG_NAME)
+		goto err_free_inst;
+
+	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "krb5enc(%s,%s)", auth_base->cra_driver_name,
+		     enc->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+		goto err_free_inst;
+
+	inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
+				      auth_base->cra_priority;
+	inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
+	inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
+	inst->alg.base.cra_ctxsize = sizeof(struct krb5enc_ctx);
+
+	inst->alg.ivsize = enc->ivsize;
+	inst->alg.chunksize = enc->chunksize;
+	inst->alg.maxauthsize = auth->digestsize;
+
+	inst->alg.init = krb5enc_init_tfm;
+	inst->alg.exit = krb5enc_exit_tfm;
+
+	inst->alg.setkey = krb5enc_setkey;
+	inst->alg.encrypt = krb5enc_encrypt;
+	inst->alg.decrypt = krb5enc_decrypt;
+
+	inst->free = krb5enc_free;
+
+	err = aead_register_instance(tmpl, inst);
+	if (err) {
+		pr_err("ref failed\n");
+		goto err_free_inst;
+	}
+
+	return 0;
+
+err_free_inst:
+	krb5enc_free(inst);
+	return err;
+}
+
+static struct crypto_template crypto_krb5enc_tmpl = {
+	.name = "krb5enc",
+	.create = krb5enc_create,
+	.module = THIS_MODULE,
+};
+
+static int __init crypto_krb5enc_module_init(void)
+{
+	return crypto_register_template(&crypto_krb5enc_tmpl);
+}
+
+static void __exit crypto_krb5enc_module_exit(void)
+{
+	crypto_unregister_template(&crypto_krb5enc_tmpl);
+}
+
+subsys_initcall(crypto_krb5enc_module_init);
+module_exit(crypto_krb5enc_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Simple AEAD wrapper for Kerberos 5 RFC3961");
+MODULE_ALIAS_CRYPTO("krb5enc");
diff --git a/include/crypto/krb5.h b/include/crypto/krb5.h
index 44a6342471d7..8949a9b71de3 100644
--- a/include/crypto/krb5.h
+++ b/include/crypto/krb5.h
@@ -48,4 +48,11 @@ 
 #define KEY_USAGE_SEED_ENCRYPTION       (0xAA)
 #define KEY_USAGE_SEED_INTEGRITY        (0x55)
 
+/*
+ * krb5enc.c
+ */
+struct crypto_authenc_keys;
+int crypto_krb5enc_extractkeys(struct crypto_authenc_keys *keys, const u8 *key,
+			       unsigned int keylen);
+
 #endif /* _CRYPTO_KRB5_H */