
[v2] crypto: lib - implement library version of AES in CFB mode

Message ID 20230217144348.1537615-1-ardb@kernel.org (mailing list archive)
State Changes Requested
Delegated to: Herbert Xu

Commit Message

Ard Biesheuvel Feb. 17, 2023, 2:43 p.m. UTC
Implement AES in CFB mode using the existing, mostly constant-time
generic AES library implementation. This will be used by the TPM code
to encrypt communications with TPM hardware, which is often a discrete
component connected using sniffable wires or traces.

While a CFB template does exist, using a skcipher is a major pain for
non-performance critical synchronous crypto where the algorithm is known
at compile time and the data is in contiguous buffers with valid kernel
virtual addresses.
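
[Editor's note: the CFB construction the patch implements can be illustrated in isolation. The sketch below mirrors the shape of aescfb_encrypt()/aescfb_decrypt() from the patch, but substitutes a toy byte-mixing function for aes_encrypt(), so it demonstrates only the mode, not real cryptography; all names here are illustrative, not kernel API.]

```c
#include <stdint.h>

#define TOY_BLOCK_SIZE 16

/* Stand-in for aes_encrypt(): any keyed block function shows the mode's
 * chaining structure. This is NOT a cipher; do not use it for anything. */
static void toy_block_encrypt(const uint8_t key[TOY_BLOCK_SIZE],
			      uint8_t out[TOY_BLOCK_SIZE],
			      const uint8_t in[TOY_BLOCK_SIZE])
{
	for (int i = 0; i < TOY_BLOCK_SIZE; i++)
		out[i] = (uint8_t)((in[i] ^ key[i]) * 167 + 13);
}

/* CFB encryption: ct[i] = pt[i] XOR E(key, previous ciphertext block),
 * seeded with the IV. A trailing partial block just uses fewer
 * keystream bytes, so no padding is needed. */
static void cfb_encrypt(const uint8_t key[TOY_BLOCK_SIZE], uint8_t *dst,
			const uint8_t *src, int len,
			const uint8_t iv[TOY_BLOCK_SIZE])
{
	uint8_t ks[TOY_BLOCK_SIZE];
	const uint8_t *v = iv;

	while (len > 0) {
		int n = len < TOY_BLOCK_SIZE ? len : TOY_BLOCK_SIZE;

		toy_block_encrypt(key, ks, v);
		for (int i = 0; i < n; i++)
			dst[i] = src[i] ^ ks[i];
		v = dst;	/* next keystream comes from this ciphertext */
		dst += n;
		src += n;
		len -= n;
	}
}

/* CFB decryption: same keystream, XORed with the ciphertext. The
 * keystream for block i+1 must be derived from ciphertext block i
 * before the XOR overwrites it, which is why a two-slot ks[] buffer is
 * needed for in-place operation (as in the patch below). */
static void cfb_decrypt(const uint8_t key[TOY_BLOCK_SIZE], uint8_t *dst,
			const uint8_t *src, int len,
			const uint8_t iv[TOY_BLOCK_SIZE])
{
	uint8_t ks[2][TOY_BLOCK_SIZE];
	int i = 0;

	toy_block_encrypt(key, ks[0], iv);
	while (len > 0) {
		int n = len < TOY_BLOCK_SIZE ? len : TOY_BLOCK_SIZE;

		if (len > TOY_BLOCK_SIZE)
			toy_block_encrypt(key, ks[!i], src);
		for (int j = 0; j < n; j++)
			dst[j] = src[j] ^ ks[i][j];
		dst += n;
		src += n;
		len -= n;
		i ^= 1;
	}
}
```

Note that only the block cipher's encrypt direction is used in both directions of the mode, which is why the library below never needs the AES decryption round keys.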

Tested-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Link: https://lore.kernel.org/all/20230216201410.15010-1-James.Bottomley@HansenPartnership.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
v1 was sent out by James and is archived at the URL above

v2:
- add test cases and kerneldoc comments
- add memzero_explicit() calls to wipe the keystream buffers
- add module exports
- add James's Tb/Rb

 include/crypto/aes.h |   5 +
 lib/crypto/Kconfig   |   5 +
 lib/crypto/Makefile  |   3 +
 lib/crypto/aescfb.c  | 257 ++++++++++++++++++++
 4 files changed, 270 insertions(+)

Comments

Herbert Xu Feb. 20, 2023, 4:44 a.m. UTC | #1
On Fri, Feb 17, 2023 at 03:43:48PM +0100, Ard Biesheuvel wrote:
> Implement AES in CFB mode using the existing, mostly constant-time
> generic AES library implementation. This will be used by the TPM code
> to encrypt communications with TPM hardware, which is often a discrete
> component connected using sniffable wires or traces.
> 
> While a CFB template does exist, using a skcipher is a major pain for
> non-performance critical synchronous crypto where the algorithm is known
> at compile time and the data is in contiguous buffers with valid kernel
> virtual addresses.
> 
> Tested-by: James Bottomley <James.Bottomley@HansenPartnership.com>
> Reviewed-by: James Bottomley <James.Bottomley@HansenPartnership.com>
> Link: https://lore.kernel.org/all/20230216201410.15010-1-James.Bottomley@HansenPartnership.com/
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
> v1 was sent out by James and is archived at the URL above
> 
> v2:
> - add test cases and kerneldoc comments
> - add memzero_explicit() calls to wipe the keystream buffers
> - add module exports
> - add James's Tb/Rb
> 
>  include/crypto/aes.h |   5 +
>  lib/crypto/Kconfig   |   5 +
>  lib/crypto/Makefile  |   3 +
>  lib/crypto/aescfb.c  | 257 ++++++++++++++++++++
>  4 files changed, 270 insertions(+)

Could we remove the crypto/cfb.c implementation after this work
is complete?

Thanks,
Ard Biesheuvel Feb. 20, 2023, 7:28 a.m. UTC | #2
On Mon, 20 Feb 2023 at 05:44, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Feb 17, 2023 at 03:43:48PM +0100, Ard Biesheuvel wrote:
> > Implement AES in CFB mode using the existing, mostly constant-time
> > generic AES library implementation. This will be used by the TPM code
> > to encrypt communications with TPM hardware, which is often a discrete
> > component connected using sniffable wires or traces.
> >
> > While a CFB template does exist, using a skcipher is a major pain for
> > non-performance critical synchronous crypto where the algorithm is known
> > at compile time and the data is in contiguous buffers with valid kernel
> > virtual addresses.
> >
> > Tested-by: James Bottomley <James.Bottomley@HansenPartnership.com>
> > Reviewed-by: James Bottomley <James.Bottomley@HansenPartnership.com>
> > Link: https://lore.kernel.org/all/20230216201410.15010-1-James.Bottomley@HansenPartnership.com/
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> > v1 was sent out by James and is archived at the URL above
> >
> > v2:
> > - add test cases and kerneldoc comments
> > - add memzero_explicit() calls to wipe the keystream buffers
> > - add module exports
> > - add James's Tb/Rb
> >
> >  include/crypto/aes.h |   5 +
> >  lib/crypto/Kconfig   |   5 +
> >  lib/crypto/Makefile  |   3 +
> >  lib/crypto/aescfb.c  | 257 ++++++++++++++++++++
> >  4 files changed, 270 insertions(+)
>
> Could we remove the crypto/cfb.c implementation after this work
> is complete?
>

We would still not have any in-tree users of cfb(aes) or any other
cfb(*), so in that sense, yes.

However, skciphers can be called from user space, and we also rely on
this template for the extended testing of the various cfb() hardware
implementations that we have in the tree.

So the answer is no, I suppose. I would like to simplify it a bit,
though - it is a bit more complicated than it needs to be.
Herbert Xu March 10, 2023, 10:15 a.m. UTC | #3
On Mon, Feb 20, 2023 at 08:28:05AM +0100, Ard Biesheuvel wrote:
>
> We would still not have any in-tree users of cfb(aes) or any other
> cfb(*), so in that sense, yes.
> 
> However, skciphers can be called from user space, and we also rely on
> this template for the extended testing of the various cfb() hardware
> implementations that we have in the tree.
> 
> So the answer is no, I suppose. I would like to simplify it a bit,
> though - it is a bit more complicated than it needs to be.

Could we hold onto this for a little bit? I'd like to finally
remove crypto_cipher, and in doing so I will add a virtual address
interface (i.e., not sg) to skcipher like we do with scomp and shash.

Thanks,
Ard Biesheuvel March 10, 2023, 4:18 p.m. UTC | #4
On Fri, 10 Mar 2023 at 11:15, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Mon, Feb 20, 2023 at 08:28:05AM +0100, Ard Biesheuvel wrote:
> >
> > We would still not have any in-tree users of cfb(aes) or any other
> > cfb(*), so in that sense, yes.
> >
> > However, skciphers can be called from user space, and we also rely on
> > this template for the extended testing of the various cfb() hardware
> > implementations that we have in the tree.
> >
> > So the answer is no, I suppose. I would like to simplify it a bit,
> > though - it is a bit more complicated than it needs to be.
>
> Could we hold onto this for a little bit? I'd like to finally
> remove crypto_cipher, and in doing so I will add a virtual address
> interface (i.e., not sg) to skcipher like we do with scomp and shash.
>

Does that mean you are bringing back blkcipher? I think that would be
the right thing to do tbh, although it might make sense to enhance
skcipher (and aead) to support this.

Could we perhaps update struct skcipher_request so it can describe
virtually mapped address ranges, but permit this only for synchronous
implementations? Then, we could update the skcipher walker code to
produce a single walk step covering the entire range, and just use the
provided virtual addresses directly, rather than going through a
mapping interface?
Herbert Xu March 11, 2023, 8:06 a.m. UTC | #5
On Fri, Mar 10, 2023 at 05:18:05PM +0100, Ard Biesheuvel wrote:
>
> Does that mean you are bringing back blkcipher? I think that would be
> the right thing to do tbh, although it might make sense to enhance
> skcipher (and aead) to support this.

I haven't gone into that kind of detail yet but my first impression
is that it would be the analogue of shash and skcipher would simply
wrap around it just like ahash wraps around shash.

> Could we perhaps update struct skcipher_request so it can describe
> virtually mapped address ranges, but permit this only for synchronous
> implementations? Then, we could update the skcipher walker code to
> produce a single walk step covering the entire range, and just use the
> provided virtual addresses directly, rather than going through a
> mapping interface?

Since skcipher doesn't actually need to carry any state with it
I'd like to avoid having an skcipher_request at all.  So it would
look pretty much like the existing crypto_cipher interface except
with the addition of length and IV.

Cheers,
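
[Editor's note: the interface Herbert describes, crypto_cipher-like but with length and IV added and no request object, might look roughly like the following. Every name here is a hypothetical sketch, not existing kernel API; the toy transform is a placeholder standing in for a real mode implementation.]

```c
#include <stdint.h>

/* Hypothetical ops table for a stateless, virtual-address block cipher
 * mode interface: like crypto_cipher, plus length and IV. */
struct vkcipher_ops {
	void (*encrypt)(const void *key, uint8_t *dst, const uint8_t *src,
			unsigned int len, uint8_t iv[16]);
	void (*decrypt)(const void *key, uint8_t *dst, const uint8_t *src,
			unsigned int len, uint8_t iv[16]);
};

/* Toy stand-in transform: XOR with key and IV bytes, cycled. It is its
 * own inverse, so both ops can point at it. Not a cipher. */
static void toy_xcrypt(const void *key, uint8_t *dst, const uint8_t *src,
		       unsigned int len, uint8_t iv[16])
{
	const uint8_t *k = key;

	for (unsigned int i = 0; i < len; i++)
		dst[i] = src[i] ^ k[i % 16] ^ iv[i % 16];
}

static const struct vkcipher_ops toy_ops = {
	.encrypt = toy_xcrypt,
	.decrypt = toy_xcrypt,	/* XOR is its own inverse */
};
```

The point of the sketch is that the caller passes plain kernel pointers and an IV and gets its answer synchronously, with no request allocation and no scatterlist walk.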
Ard Biesheuvel March 11, 2023, 8:15 a.m. UTC | #6
On Sat, 11 Mar 2023 at 09:06, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Mar 10, 2023 at 05:18:05PM +0100, Ard Biesheuvel wrote:
> >
> > Does that mean you are bringing back blkcipher? I think that would be
> > the right thing to do tbh, although it might make sense to enhance
> > skcipher (and aead) to support this.
>
> I haven't gone into that kind of detail yet but my first impression
> is that it would be the analogue of shash and skcipher would simply
> wrap around it just like ahash wraps around shash.
>
> > Could we perhaps update struct skcipher_request so it can describe
> > virtually mapped address ranges, but permit this only for synchronous
> > implementations? Then, we could update the skcipher walker code to
> > produce a single walk step covering the entire range, and just use the
> > provided virtual addresses directly, rather than going through a
> > mapping interface?
>
> Since skcipher doesn't actually need to carry any state with it
> I'd like to avoid having an skcipher_request at all.

Doesn't that depend on the implementation? It might have a nonzero
request context size, no? Or do we just allocate that on the stack?
Herbert Xu March 11, 2023, 8:17 a.m. UTC | #7
On Sat, Mar 11, 2023 at 09:15:42AM +0100, Ard Biesheuvel wrote:
>
> Doesn't that depend on the implementation? It might have a nonzero
> request context size, no? Or do we just allocate that on the stack?

Do you have a concrete example of something that needs this?
Is this a temporary scratch buffer used only during computation?

Thanks,
Ard Biesheuvel March 11, 2023, 8:42 a.m. UTC | #8
On Sat, 11 Mar 2023 at 09:17, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Sat, Mar 11, 2023 at 09:15:42AM +0100, Ard Biesheuvel wrote:
> >
> > Doesn't that depend on the implementation? It might have a nonzero
> > request context size, no? Or do we just allocate that on the stack?
>
> Do you have a concrete example of something that needs this?
> Is this a temporary scratch buffer used only during computation?
>

Every call to crypto_skcipher_set_reqsize(), no?
Herbert Xu March 11, 2023, 8:47 a.m. UTC | #9
On Sat, Mar 11, 2023 at 09:42:06AM +0100, Ard Biesheuvel wrote:
>
> Every call to crypto_skcipher_set_reqsize(), no?

We'd only convert the software implementations.  But you're right,
there do seem to be a few users, such as aria, that demand a large
amount of temp space.  I'd be tempted to just leave them on skcipher.

In other cases such as ctr we can easily put the IV on the stack.

Cheers,
Ard Biesheuvel March 11, 2023, 8:55 a.m. UTC | #10
On Sat, 11 Mar 2023 at 09:47, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Sat, Mar 11, 2023 at 09:42:06AM +0100, Ard Biesheuvel wrote:
> >
> > Every call to crypto_skcipher_set_reqsize(), no?
>
> We'd only convert the software implementations.  But you're right,
> there do seem to be a few users, such as aria, that demand a large
> amount of temp space.  I'd be tempted to just leave them on skcipher.
>
> In other cases such as ctr we can easily put the IV on the stack.
>

But why can't we make skcipher just a hybrid?

- make the scatterlist members in skcipher_request unions with virtual
src and dst alternatives
- add an API that assigns those alternative members and checks that
the tfm is not ALG_ASYNC
- make the existing skcipher_en/decrypt() implementations check the
request type, and hand off to a 'sync' alternative that allocates the
request ctx on the stack, and make the accessor return the stack
version instead of the heap version
- update skcipher_walk_xxx() to return the virtually addressable dst
and src if the sync request type is encountered.

That way, the skcipher implementations can remain as they are, and the
callers can just put a struct skcipher_request on the stack (without
the padding and ctx overhead) and call the new interface with virtual
addresses.

That way, all the SYNC_SKCIPHER hacks can go, and we don't need yet
another algo type.
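
[Editor's note: the hybrid request layout proposed above could be sketched as follows. All names are hypothetical and simplified far beyond the real struct skcipher_request; the "cipher" is a placeholder XOR so the sketch is self-contained.]

```c
#include <stdint.h>

struct scatterlist;	/* opaque stand-in for the kernel type */

#define SKREQ_VIRT 0x1	/* hypothetical flag: request carries kernel VAs */

/* Proposal: make the scatterlist members unions with virtual-address
 * alternatives, valid only for synchronous tfms. */
struct skcipher_request_sketch {
	unsigned int flags;
	unsigned int cryptlen;
	union { struct scatterlist *src_sg; const uint8_t *src_virt; };
	union { struct scatterlist *dst_sg; uint8_t *dst_virt; };
};

static int sketch_encrypt(struct skcipher_request_sketch *req)
{
	if (!(req->flags & SKREQ_VIRT))
		return -1;	/* would go through the normal SG walker */

	/* A synchronous implementation sees the equivalent of one walk
	 * step covering the entire range and uses the provided virtual
	 * addresses directly; the XOR stands in for the real cipher. */
	for (unsigned int i = 0; i < req->cryptlen; i++)
		req->dst_virt[i] = req->src_virt[i] ^ 0x5a;
	return 0;
}
```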


That way, the implementations can remain the same,
Herbert Xu March 11, 2023, 9 a.m. UTC | #11
On Sat, Mar 11, 2023 at 09:55:15AM +0100, Ard Biesheuvel wrote:
>
> That way, the implementations can remain the same,

That's like doing a house renovation and keeping the scaffold
around forever :)

Yes I agree that it would save a little bit of work for now but
all the implementations would have to carry this unnecessary
walking code with them forever.

With a setup like ahash/shash the walking code disappears totally
from the underlying implementations.

Cheers,
Ard Biesheuvel March 11, 2023, 9:02 a.m. UTC | #12
On Sat, 11 Mar 2023 at 10:00, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Sat, Mar 11, 2023 at 09:55:15AM +0100, Ard Biesheuvel wrote:
> >
> > That way, the implementations can remain the same,
>
> That's like doing a house renovation and keeping the scaffold
> around forever :)
>
> Yes I agree that it would save a little bit of work for now but
> all the implementations would have to carry this unnecessary
> walking code with them forever.
>
> With a setup like ahash/shash the walking code disappears totally
> from the underlying implementations.
>

So we are basically going back to ablkcipher/blkcipher then? How about aead?
Herbert Xu March 11, 2023, 9:21 a.m. UTC | #13
On Sat, Mar 11, 2023 at 10:02:01AM +0100, Ard Biesheuvel wrote:
>
> So we are basically going back to ablkcipher/blkcipher then? How about aead?

No I just dug up the old blkcipher code and it's based on SGs
just like skcipher.

I went back to the beginning of git and we've only ever had
an SG-based encryption interface.  This would be the very first
time that we've had this in the Crypto API.

Do we have any potential users for such an AEAD interface?

Cheers,
Ard Biesheuvel March 11, 2023, 9:25 a.m. UTC | #14
On Sat, 11 Mar 2023 at 10:21, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Sat, Mar 11, 2023 at 10:02:01AM +0100, Ard Biesheuvel wrote:
> >
> > So we are basically going back to ablkcipher/blkcipher then? How about aead?
>
> No I just dug up the old blkcipher code and it's based on SGs
> just like skcipher.
>
> I went back to the beginning of git and we've only ever had
> an SG-based encryption interface.  This would be the very first
> time that we've had this in the Crypto API.
>
> Do we have any potential users for such an AEAD interface?
>

Not sure. I just added the libaesgcm library interface, but that is
used in SMP secondary bringup, so that shouldn't use the crypto API in
any case.

Synchronous AEADs are being used in the wifi code, but I am not aware
of any problematic use cases.

So what use case is driving this sync skcipher change? And how
will this work with existing templates? Do they all have to implement
two flavors now?
Herbert Xu March 11, 2023, 9:41 a.m. UTC | #15
On Sat, Mar 11, 2023 at 10:25:48AM +0100, Ard Biesheuvel wrote:
>
> So what use case is driving this sync skcipher change? And how

The main reason I wanted to do this is because I'd like to get
rid of crypto_cipher.  I'm planning on replacing the underlying
simple ciphers with their ECB equivalent.

> will this work with existing templates? Do they all have to implement
> two flavors now?

Let's say we're calling this vkcipher.  Because the existing
skcipher templates should continue to work with an underlying
vkcipher algorithm, I won't be adding any vkcipher template
unless there is a specific use-case, such as CFB here.

But I will do the common ones like CBC/CTR.

Cheers,
Ard Biesheuvel March 12, 2023, 8:06 a.m. UTC | #16
On Sat, 11 Mar 2023 at 10:42, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Sat, Mar 11, 2023 at 10:25:48AM +0100, Ard Biesheuvel wrote:
> >
> > So what use case is driving this sync skcipher change? And how
>
> The main reason I wanted to do this is because I'd like to get
> rid of crypto_cipher.  I'm planning on replacing the underlying
> simple ciphers with their ECB equivalent.
>
> > will this work with existing templates? Do they all have to implement
> > two flavors now?
>
> Let's say we're calling this vkcipher.  Because the existing
> skcipher templates should continue to work with an underlying
> vkcipher algorithm, I won't be adding any vkcipher template
> unless there is a specific use-case, such as CFB here.
>
> But I will do the common ones like CBC/CTR.
>

Interesting. So I think having an interface like this would be useful.

However, to answer your original question, I don't think it makes
sense for James's stuff to be gated on this. In fact, I think
communication with the TPM should have as few moving parts as
possible, given how disruptive it might be if it fails, so I'd suggest
we merge this code in any case, and stick to simple library interfaces
where we can.

Patch

diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index 2090729701ab6d7a..9339da7c20a8b54e 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -87,4 +87,9 @@  void aes_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
 extern const u8 crypto_aes_sbox[];
 extern const u8 crypto_aes_inv_sbox[];
 
+void aescfb_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
+		    int len, const u8 iv[AES_BLOCK_SIZE]);
+void aescfb_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
+		    int len, const u8 iv[AES_BLOCK_SIZE]);
+
 #endif
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 45436bfc6dffe1db..b01253cac70a7499 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -8,6 +8,11 @@  config CRYPTO_LIB_UTILS
 config CRYPTO_LIB_AES
 	tristate
 
+config CRYPTO_LIB_AESCFB
+	tristate
+	select CRYPTO_LIB_AES
+	select CRYPTO_LIB_UTILS
+
 config CRYPTO_LIB_AESGCM
 	tristate
 	select CRYPTO_LIB_AES
diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
index 6ec2d4543d9cad48..33213a01aab101e8 100644
--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -10,6 +10,9 @@  obj-$(CONFIG_CRYPTO_LIB_CHACHA_GENERIC)		+= libchacha.o
 obj-$(CONFIG_CRYPTO_LIB_AES)			+= libaes.o
 libaes-y					:= aes.o
 
+obj-$(CONFIG_CRYPTO_LIB_AESCFB)			+= libaescfb.o
+libaescfb-y					:= aescfb.o
+
 obj-$(CONFIG_CRYPTO_LIB_AESGCM)			+= libaesgcm.o
 libaesgcm-y					:= aesgcm.o
 
diff --git a/lib/crypto/aescfb.c b/lib/crypto/aescfb.c
new file mode 100644
index 0000000000000000..749dc1258a44b7af
--- /dev/null
+++ b/lib/crypto/aescfb.c
@@ -0,0 +1,257 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Minimal library implementation of AES in CFB mode
+ *
+ * Copyright 2023 Google LLC
+ */
+
+#include <linux/module.h>
+
+#include <crypto/algapi.h>
+#include <crypto/aes.h>
+
+#include <asm/irqflags.h>
+
+static void aescfb_encrypt_block(const struct crypto_aes_ctx *ctx, void *dst,
+				 const void *src)
+{
+	unsigned long flags;
+
+	/*
+	 * In AES-CFB, the AES encryption operates on known 'plaintext' (the IV
+	 * and ciphertext), making it susceptible to timing attacks on the
+	 * encryption key. The AES library already mitigates this risk to some
+	 * extent by pulling the entire S-box into the caches before doing any
+	 * substitutions, but this strategy is more effective when running with
+	 * interrupts disabled.
+	 */
+	local_irq_save(flags);
+	aes_encrypt(ctx, dst, src);
+	local_irq_restore(flags);
+}
+
+/**
+ * aescfb_encrypt - Perform AES-CFB encryption on a block of data
+ *
+ * @ctx:	The AES-CFB key schedule
+ * @dst:	Pointer to the ciphertext output buffer
+ * @src:	Pointer to the plaintext (may equal @dst for encryption in place)
+ * @len:	The size in bytes of the plaintext and ciphertext.
+ * @iv:		The initialization vector (IV) to use for this block of data
+ */
+void aescfb_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
+		    int len, const u8 iv[AES_BLOCK_SIZE])
+{
+	u8 ks[AES_BLOCK_SIZE];
+	const u8 *v = iv;
+
+	while (len > 0) {
+		aescfb_encrypt_block(ctx, ks, v);
+		crypto_xor_cpy(dst, src, ks, min(len, AES_BLOCK_SIZE));
+		v = dst;
+
+		dst += AES_BLOCK_SIZE;
+		src += AES_BLOCK_SIZE;
+		len -= AES_BLOCK_SIZE;
+	}
+
+	memzero_explicit(ks, sizeof(ks));
+}
+EXPORT_SYMBOL(aescfb_encrypt);
+
+/**
+ * aescfb_decrypt - Perform AES-CFB decryption on a block of data
+ *
+ * @ctx:	The AES-CFB key schedule
+ * @dst:	Pointer to the plaintext output buffer
+ * @src:	Pointer to the ciphertext (may equal @dst for decryption in place)
+ * @len:	The size in bytes of the plaintext and ciphertext.
+ * @iv:		The initialization vector (IV) to use for this block of data
+ */
+void aescfb_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
+		    int len, const u8 iv[AES_BLOCK_SIZE])
+{
+	u8 ks[2][AES_BLOCK_SIZE];
+
+	aescfb_encrypt_block(ctx, ks[0], iv);
+
+	for (int i = 0; len > 0; i ^= 1) {
+		if (len > AES_BLOCK_SIZE)
+			/*
+			 * Generate the keystream for the next block before
+			 * performing the XOR, as that may update in place and
+			 * overwrite the ciphertext.
+			 */
+			aescfb_encrypt_block(ctx, ks[!i], src);
+
+		crypto_xor_cpy(dst, src, ks[i], min(len, AES_BLOCK_SIZE));
+
+		dst += AES_BLOCK_SIZE;
+		src += AES_BLOCK_SIZE;
+		len -= AES_BLOCK_SIZE;
+	}
+
+	memzero_explicit(ks, sizeof(ks));
+}
+EXPORT_SYMBOL(aescfb_decrypt);
+
+MODULE_DESCRIPTION("Generic AES-CFB library");
+MODULE_AUTHOR("Ard Biesheuvel <ardb@kernel.org>");
+MODULE_LICENSE("GPL");
+
+#ifndef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
+
+/*
+ * Test code below. Vectors taken from crypto/testmgr.h
+ */
+
+static struct {
+	u8	ptext[64];
+	u8	ctext[64];
+
+	u8	key[AES_MAX_KEY_SIZE];
+	u8	iv[AES_BLOCK_SIZE];
+
+	int	klen;
+	int	len;
+} const aescfb_tv[] __initconst = {
+	{ /* From NIST SP800-38A */
+		.key    = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+			  "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+			  "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+			  "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+			  "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+			  "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
+			  "\xc8\xa6\x45\x37\xa0\xb3\xa9\x3f"
+			  "\xcd\xe3\xcd\xad\x9f\x1c\xe5\x8b"
+			  "\x26\x75\x1f\x67\xa3\xcb\xb1\x40"
+			  "\xb1\x80\x8c\xf1\x87\xa4\xf4\xdf"
+			  "\xc0\x4b\x05\x35\x7c\x5d\x1c\x0e"
+			  "\xea\xc4\xc6\x6f\x9f\xf7\xf2\xe6",
+		.len	= 64,
+	}, {
+		.key	= "\x8e\x73\xb0\xf7\xda\x0e\x64\x52"
+			  "\xc8\x10\xf3\x2b\x80\x90\x79\xe5"
+			  "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b",
+		.klen	= 24,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+			  "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+			  "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+			  "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+			  "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+			  "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+		.ctext	= "\xcd\xc8\x0d\x6f\xdd\xf1\x8c\xab"
+			  "\x34\xc2\x59\x09\xc9\x9a\x41\x74"
+			  "\x67\xce\x7f\x7f\x81\x17\x36\x21"
+			  "\x96\x1a\x2b\x70\x17\x1d\x3d\x7a"
+			  "\x2e\x1e\x8a\x1d\xd5\x9b\x88\xb1"
+			  "\xc8\xe6\x0f\xed\x1e\xfa\xc4\xc9"
+			  "\xc0\x5f\x9f\x9c\xa9\x83\x4f\xa0"
+			  "\x42\xae\x8f\xba\x58\x4b\x09\xff",
+		.len	= 64,
+	}, {
+		.key	= "\x60\x3d\xeb\x10\x15\xca\x71\xbe"
+			  "\x2b\x73\xae\xf0\x85\x7d\x77\x81"
+			  "\x1f\x35\x2c\x07\x3b\x61\x08\xd7"
+			  "\x2d\x98\x10\xa3\x09\x14\xdf\xf4",
+		.klen	= 32,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+			  "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+			  "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+			  "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+			  "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+			  "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+		.ctext	= "\xdc\x7e\x84\xbf\xda\x79\x16\x4b"
+			  "\x7e\xcd\x84\x86\x98\x5d\x38\x60"
+			  "\x39\xff\xed\x14\x3b\x28\xb1\xc8"
+			  "\x32\x11\x3c\x63\x31\xe5\x40\x7b"
+			  "\xdf\x10\x13\x24\x15\xe5\x4b\x92"
+			  "\xa1\x3e\xd0\xa8\x26\x7a\xe2\xf9"
+			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
+			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
+		.len	= 64,
+	}, { /* > 16 bytes, not a multiple of 16 bytes */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
+			  "\xc8",
+		.len	= 17,
+	}, { /* < 16 bytes */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
+		.len	= 7,
+	},
+};
+
+static int __init libaescfb_init(void)
+{
+	for (int i = 0; i < ARRAY_SIZE(aescfb_tv); i++) {
+		struct crypto_aes_ctx ctx;
+		u8 buf[64];
+
+		if (aes_expandkey(&ctx, aescfb_tv[i].key, aescfb_tv[i].klen)) {
+			pr_err("aes_expandkey() failed on vector %d\n", i);
+			return -ENODEV;
+		}
+
+		aescfb_encrypt(&ctx, buf, aescfb_tv[i].ptext, aescfb_tv[i].len,
+			       aescfb_tv[i].iv);
+		if (memcmp(buf, aescfb_tv[i].ctext, aescfb_tv[i].len)) {
+			pr_err("aescfb_encrypt() #1 failed on vector %d\n", i);
+			return -ENODEV;
+		}
+
+		/* decrypt in place */
+		aescfb_decrypt(&ctx, buf, buf, aescfb_tv[i].len, aescfb_tv[i].iv);
+		if (memcmp(buf, aescfb_tv[i].ptext, aescfb_tv[i].len)) {
+			pr_err("aescfb_decrypt() failed on vector %d\n", i);
+			return -ENODEV;
+		}
+
+		/* encrypt in place */
+		aescfb_encrypt(&ctx, buf, buf, aescfb_tv[i].len, aescfb_tv[i].iv);
+		if (memcmp(buf, aescfb_tv[i].ctext, aescfb_tv[i].len)) {
+			pr_err("aescfb_encrypt() #2 failed on vector %d\n", i);
+
+			return -ENODEV;
+		}
+
+	}
+	return 0;
+}
+module_init(libaescfb_init);
+
+static void __exit libaescfb_exit(void)
+{
+}
+module_exit(libaescfb_exit);
+#endif