
[v3] crypto: aesni - Convert rfc4106 to new AEAD interface

Message ID 20150601075306.GA11725@gondor.apana.org.au (mailing list archive)
State Accepted
Delegated to: Herbert Xu

Commit Message

Herbert Xu June 1, 2015, 7:53 a.m. UTC
On Mon, Jun 01, 2015 at 03:50:22PM +0800, Herbert Xu wrote:
> This patch converts the low-level __gcm-aes-aesni algorithm to
> the new AEAD interface.

Oops, I missed two more spots.

---8<---
This patch converts the low-level __gcm-aes-aesni algorithm to
the new AEAD interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Comments

Stephan Mueller June 1, 2015, 9:04 a.m. UTC | #1
Am Montag, 1. Juni 2015, 15:53:06 schrieb Herbert Xu:

Hi Herbert,

>On Mon, Jun 01, 2015 at 03:50:22PM +0800, Herbert Xu wrote:
>> This patch converts the low-level __gcm-aes-aesni algorithm to
>> the new AEAD interface.
>
>Oops, I missed two more spots.

That patch fixes the crash. Thanks

Just FYI: when testing rfc4106(gcm(aes-aesni)) with the old givcipher API, I 
got a crash in my code invoking the cipher which used to work in older kernels 
and which works with the C implementations. So, there must be some change in 
how givcipher is treated by the AESNI implementation.

As this API is sunset now, shall I dig deeper or shall we simply deactivate 
the API entirely?

--
Ciao
Stephan
--
Herbert Xu June 1, 2015, 9:09 a.m. UTC | #2
On Mon, Jun 01, 2015 at 11:04:39AM +0200, Stephan Mueller wrote:
>
> Just FYI: when testing rfc4106(gcm(aes-aesni)) with the old givcipher API, I 
> got a crash in my code invoking the cipher which used to work in older kernels 
> and which works with the C implementations. So, there must be some change in 
> how givcipher is treated by the AESNI implementation.
> 
> As this API is sunset now, shall I dig deeper or shall we simply deactivate 
> the API entirely?

What do you mean by the old givcipher API? Are you referring to the
old algif_aead that was disabled?

Can you show me the crash that you got and as much information as
you can about the caller that triggered the crash?

The assumption at this point is that all users outside of crypto/
itself are gone.  But I'd still like to look at the crash just in
case it's relevant for users within crypto.

Thanks,
Stephan Mueller June 1, 2015, 9:44 a.m. UTC | #3
Am Montag, 1. Juni 2015, 17:09:51 schrieb Herbert Xu:

Hi Herbert,

> On Mon, Jun 01, 2015 at 11:04:39AM +0200, Stephan Mueller wrote:
> > Just FYI: when testing rfc4106(gcm(aes-aesni)) with the old givcipher API,
> > I got a crash in my code invoking the cipher which used to work in older
> > kernels and which works with the C implementations. So, there must be
> > some change in how givcipher is treated by the AESNI implementation.
> > 
> > As this API is sunset now, shall I dig deeper or shall we simply
> > deactivate
> > the API entirely?
> 
> What do you mean by the old givcipher API? Are you referring to the
> old algif_aead that was disabled?

I am referring to the crypto_aead_givencrypt in-kernel API.

Before the big change, the RFC4106 ciphers could be used with the standard
crypto_aead_encrypt API call, where the caller must provide the IV. When using
crypto_aead_givencrypt, the IV was instead generated by seqiv.

When testing the current code base with the old crypto_aead_givencrypt API 
call, I get the following results: using rfc4106(gcm(aes-asm)) with the old 
API works just fine. Using rfc4106(gcm(aes-aesni)) with the very same code 
crashes somewhere in my code.
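
For reference, a minimal sketch of the difference between the two call paths
(assuming a pre-4.2 kernel where the aead_givcrypt_* helpers still exist; the
request, scatterlist and IV buffer setup is as in the full harness quoted
further down, and the function names here are illustrative only):

#include <crypto/aead.h>
#include <linux/scatterlist.h>

/* Explicit-IV path: the caller supplies the IV itself and uses the
 * ordinary aead_request / crypto_aead_encrypt() interface.
 */
static int enc_explicit_iv(struct aead_request *req, struct scatterlist *sg,
			   unsigned int len, u8 *iv)
{
	aead_request_set_crypt(req, sg, sg, len, iv);
	return crypto_aead_encrypt(req);
}

/* givencrypt path (old API): seqiv derives the IV from a sequence number
 * and writes it into the buffer passed to aead_givcrypt_set_giv(), so the
 * caller never fills it in.
 */
static int enc_generated_iv(struct aead_givcrypt_request *greq,
			    struct scatterlist *sg, unsigned int len,
			    u8 *giv, u64 seq)
{
	aead_givcrypt_set_crypt(greq, sg, sg, len, giv);
	aead_givcrypt_set_giv(greq, giv, seq);
	return crypto_aead_givencrypt(greq);
}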

> 
> Can you show me the crash that you got and as much information as
> you can about the caller that triggered the crash?

Again, the caller is my out-of-tree crypto API test harness.

That code crashes with the following stacktrace:

[ 2000.433502] BUG: unable to handle kernel NULL pointer dereference at           
(null)
[ 2000.433505] IP: [<          (null)>]           (null)
[ 2000.433508] PGD 44aa5067 PUD 7c2d3067 PMD 0 
[ 2000.433511] Oops: 0010 [#3] SMP 
[ 2000.433514] Modules linked in: kcapi_cavs(OE) ctr ghash_generic gcm ecb cbc 
sha512_ssse3 sha512_generic sha256_ssse3 sha1_ssse3 sha1_generic 
nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ip6t_REJECT 
nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 nf_conntrack_ipv4 
nf_defrag_ipv4 xt_conntrack nf_conntrack cfg80211 ebtable_nat ebtable_broute 
bridge stp llc ebtable_filter ebtables ip6table_mangle ip6table_security 
ip6table_raw ip6table_filter ip6_tables iptable_mangle iptable_security 
iptable_raw crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel 
aesni_intel aes_x86_64 glue_helper ablk_helper joydev microcode virtio_balloon 
serio_raw pcspkr acpi_cpufreq i2c_piix4 virtio_blk qxl virtio_net 
drm_kms_helper ttm drm virtio_pci virtio_ring virtio [last unloaded: 
kcapi_cavs]
[ 2000.433550] CPU: 0 PID: 12614 Comm: perl Tainted: G      D    OE   4.0.0+ 
#228
[ 2000.433552] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
1.8.1-20150318_183358- 04/01/2014
[ 2000.433554] task: ffff88007beaee80 ti: ffff880001bcc000 task.ti: 
ffff880001bcc000
[ 2000.433556] RIP: 0010:[<0000000000000000>]  [<          (null)>]           
(null)
[ 2000.433558] RSP: 0018:ffff880001bcfd20  EFLAGS: 00010206
[ 2000.433560] RAX: ffff8800363a5040 RBX: ffff88007c59e800 RCX: 
0200000000000000
[ 2000.433561] RDX: ffffffffa02e2280 RSI: ffffffffa02e051c RDI: 
ffff880064291c00
[ 2000.433563] RBP: ffff880001bcfde8 R08: 00000000000191e0 R09: 
ffff880001bcfd40
[ 2000.433564] R10: ffffffff811636d1 R11: 0000000000000006 R12: 
ffff880064291c00
[ 2000.433566] R13: ffff8800363a5000 R14: 0000000000000000 R15: 
0000000000000010
[ 2000.433568] FS:  00007f7b0e85f700(0000) GS:ffff88007fc00000(0000) 
knlGS:0000000000000000
[ 2000.433570] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2000.433571] CR2: 0000000000000000 CR3: 000000007bc7d000 CR4: 
00000000000407f0
[ 2000.433577] Stack:
[ 2000.433579]  ffffffffa02de986 ffff880001bcfd40 0200000000000000 
0000000000eaee80
[ 2000.433582]  ffffea000005b002 0000002000000000 0000000000000000 
0000000000000000
[ 2000.433585]  ffffea000005c802 0000000800000000 0000000000000000 
0000000000000000
[ 2000.433587] Call Trace:
[ 2000.433593]  [<ffffffffa02de986>] ? kccavs_test_aead_givenc+0x226/0x3b0 
[kcapi_cavs]
[ 2000.433597]  [<ffffffffa02ddf17>] kccavs_data_read+0xf7/0x130 [kcapi_cavs]
[ 2000.433602]  [<ffffffff811a9928>] __vfs_read+0x28/0xc0
[ 2000.433605]  [<ffffffff81294324>] ? security_file_permission+0x84/0xa0
[ 2000.433607]  [<ffffffff811a9ff3>] ? rw_verify_area+0x53/0x100
[ 2000.433609]  [<ffffffff811aa12a>] vfs_read+0x8a/0x140
[ 2000.433612]  [<ffffffff811aab56>] SyS_read+0x46/0xb0
[ 2000.433616]  [<ffffffff8104a0f7>] ? trace_do_page_fault+0x37/0xb0
[ 2000.433620]  [<ffffffff816877ee>] system_call_fastpath+0x12/0x71
[ 2000.433622] Code:  Bad RIP value.
[ 2000.433625] RIP  [<          (null)>]           (null)
[ 2000.433627]  RSP <ffff880001bcfd20>
[ 2000.433628] CR2: 0000000000000000
[ 2000.433630] ---[ end trace d4df29cff525b7a9 ]---

The entire code that triggers the crash is the following:

#ifndef NEWAEAD
/* tie all data structures together */
struct kccavs_givaead_def {
	struct crypto_aead *tfm;
	struct aead_givcrypt_request *req;
	struct kccavs_tcrypt_res result;
};

/* Perform encryption */
static unsigned int kccavs_givaead_enc(struct kccavs_givaead_def *aead)
{
	int rc = crypto_aead_givencrypt(aead->req);

	switch (rc) {
	case 0:
		break;
	case -EINPROGRESS:
	case -EBUSY:
		rc = wait_for_completion_interruptible(&aead->result.completion);
		if (!rc && !aead->result.err) {
#ifdef OLDASYNC
			INIT_COMPLETION(aead->result.completion);
#else
			reinit_completion(&aead->result.completion);
#endif
			break;
		}
	default:
		dbg(DRIVER_NAME": aead cipher operation returned with %d result"
		    " %d\n", rc, aead->result.err);
		break;
	}
	init_completion(&aead->result.completion);

	return rc;
}
#endif

/*
 * GIV AEAD encryption
 * input: type
 * input: name
 * input: plaintext / ciphertext in kccavs_test->data
 * input: AuthTag is appended to ciphertext
 * input: Authsize
 * input: key in kccavs_test->key
 * input: associated data in kccavs_test->aead_assoc
 * output: ciphertext / plaintext in kccavs_test->data
 * output: IV in kccavs_test->iv
 *
 * Note: for decryption, the data->data will contain deadbeef if the
 *	 authentication failed.
 */
static int kccavs_test_aead_givenc(size_t nbytes)
{
	int ret = -EFAULT;

	struct crypto_aead *tfm = NULL;
#ifdef NEWAEAD
	struct kccavs_aead_def aead;
	struct aead_request *req = NULL;
	struct scatterlist sg[3];
#else
	struct kccavs_givaead_def aead;
	struct aead_givcrypt_request *req = NULL;
	struct scatterlist sg;
	struct scatterlist assocsg;
#endif
	struct kccavs_data *data = &kccavs_test->data;
	struct kccavs_data *key = &kccavs_test->key;
	struct kccavs_data *iv = &kccavs_test->iv;
	struct kccavs_data *aead_assoc = &kccavs_test->aead_assoc;
	u32 authsize = kccavs_test->aead_authsize;

	/* data will hold plaintext and tag */
	if (kccavs_test->type & TYPE_ENC &&
	    data->len + authsize > MAXDATALEN)
		return -ENOSPC;

	tfm = crypto_alloc_aead(kccavs_test->name, 0, 0);
	if (IS_ERR(tfm)) {
		pr_info("could not allocate aead handle for %s %ld\n",
			kccavs_test->name, PTR_ERR(tfm));
		return PTR_ERR(tfm);
	}

#ifdef NEWAEAD
	req = aead_request_alloc(tfm, GFP_KERNEL);
#else
	req = aead_givcrypt_alloc(tfm, GFP_KERNEL);
#endif
	if (IS_ERR(req)) {
		pr_info("could not allocate request queue\n");
		ret = PTR_ERR(req);
		goto out;
	}

	ret = crypto_aead_setkey(tfm, key->data, key->len);
	if (ret) {
		pr_info("key could not be set %d\n", ret);
		goto out;
	}

	ret = crypto_aead_setauthsize(tfm, authsize);
	if (ret) {
		pr_info("authsize %u could not be set %d\n", authsize, ret);
		ret = -EAGAIN;
		goto out;
	}

	aead.tfm = tfm;
	aead.req = req;

#ifdef NEWAEAD
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				  kccavs_aead_cb, &aead.result);
#else
	aead_givcrypt_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				  kccavs_aead_cb, &aead.result);
#endif

#ifdef NEWAEAD
	iv->len = crypto_aead_ivsize(aead.tfm);
	sg_init_table(sg, 3);
	sg_set_buf(&sg[0], aead_assoc->data, aead_assoc->len);
	sg_set_buf(&sg[1], iv->data, iv->len);
	sg_set_buf(&sg[2], data->data, data->len +
		   (kccavs_test->type & TYPE_ENC ? authsize : 0));
	aead_request_set_ad(req, aead_assoc->len);
	aead_request_set_crypt(req, sg, sg, data->len + iv->len, iv->data);
#else
	sg_init_one(&sg, data->data, data->len +
		    ((kccavs_test->type & TYPE_ENC) ? authsize : 0));
	sg_init_one(&assocsg, aead_assoc->data, aead_assoc->len);
	aead_givcrypt_set_assoc(req, &assocsg, aead_assoc->len);
	iv->len = crypto_aead_ivsize(aead_givcrypt_reqtfm(req));
	/*
	 * The IV pointer for AEAD is moved behind the IV value to be generated
	 * by seqiv - regardless of what is found there, it will be overwritten
	 * by seqiv. Moreover, we do not care what is found there as the
	 * seqiv generated IV is stored in iv->data anyhow.
	 */
	aead_givcrypt_set_crypt(req, &sg, &sg, data->len, iv->data);

	sg_init_one(&sg, data->data, data->len +
		    ((kccavs_test->type & TYPE_ENC) ? authsize : 0));

	/*
	 * The IV pointer for the seqiv generated IV is iv->data to allow
	 * it to be extracted with the IV debugfs read function
	 *
	 * We use a sequence number starting at 0 as defined by RFC4303
	 */
	aead_givcrypt_set_giv(req, iv->data, 0);
#endif

	init_completion(&aead.result.completion);

	if (kccavs_test->type & TYPE_ENC) {
#ifdef NEWAEAD
		ret = kccavs_aead_encdec(&aead, 1);
#else
		ret = kccavs_givaead_enc(&aead);
#endif
		/* data now contains ciphertext and concatenated tag */
		data->len += authsize;
		if (0 > ret) {
			pr_info("AEAD encryption failed: %d\n", ret);
		}
	} else {
		pr_err("AEAD: givcrypt is only intended for encrypt\n");
	}

out:
	if (tfm)
		crypto_free_aead(tfm);
	if (req)
#ifdef NEWAEAD
		aead_request_free(req);
#else
		aead_givcrypt_free(req);
#endif
	return ret;
}
Herbert Xu June 1, 2015, 9:52 a.m. UTC | #4
On Mon, Jun 01, 2015 at 11:44:22AM +0200, Stephan Mueller wrote:
>
> That code crashes with the following stacktrace:
> 
> [ 2000.433502] BUG: unable to handle kernel NULL pointer dereference at           
> (null)

This crash is totally different from the previous crash you sent.
This one is expected because you're calling givencrypt on an
algorithm that has already been converted.

Once converted they no longer have a givencrypt function and
calling crypto_aead_givencrypt on them will crash.
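
The oops above (RIP at NULL, "Bad RIP value") is what an indirect call through
a missing givencrypt hook looks like. Roughly paraphrasing the old inline
helper (pre-conversion API, shown here only for illustration):

/*
 * Old-API entry point: it jumps through the transform's givencrypt
 * function pointer without checking it.  An algorithm converted to the
 * new aead_alg interface never sets this hook, so the call lands at
 * address 0.
 */
static inline int crypto_aead_givencrypt(struct aead_givcrypt_request *req)
{
	struct crypto_aead *aead = aead_givcrypt_reqtfm(req);

	return crypto_aead_crt(aead)->givencrypt(req);
}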

Cheers,
Stephan Mueller June 1, 2015, 10:08 a.m. UTC | #5
Am Montag, 1. Juni 2015, 17:52:45 schrieb Herbert Xu:

Hi Herbert,

> On Mon, Jun 01, 2015 at 11:44:22AM +0200, Stephan Mueller wrote:
> > That code crashes with the following stacktrace:
> > 
> > [ 2000.433502] BUG: unable to handle kernel NULL pointer dereference at
> > (null)
> 
> This crash is totally different from the previous crash you sent.
> This one is expected because you're calling givencrypt on an
> algorithm that has already been converted.
> 
> Once converted they no longer have a givencrypt function and
> calling crypto_aead_givencrypt on them will crash.

Thank you, that was the clarification I was looking for. Please disregard all
communication on the givcrypt API then.
> 
> Cheers,

Patch

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 5660a18..ebcb981d 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -44,13 +44,18 @@ 
 #endif
 
 
+#define AESNI_ALIGN	16
+#define AES_BLOCK_MASK	(~(AES_BLOCK_SIZE - 1))
+#define RFC4106_HASH_SUBKEY_SIZE 16
+
 /* This data is stored at the end of the crypto_tfm struct.
  * It's a type of per "session" data storage location.
  * This needs to be 16 byte aligned.
  */
 struct aesni_rfc4106_gcm_ctx {
-	u8 hash_subkey[16];
-	struct crypto_aes_ctx aes_key_expanded;
+	u8 hash_subkey[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+	struct crypto_aes_ctx aes_key_expanded
+		__attribute__ ((__aligned__(AESNI_ALIGN)));
 	u8 nonce[4];
 };
 
@@ -65,10 +70,6 @@  struct aesni_hash_subkey_req_data {
 	struct scatterlist sg;
 };
 
-#define AESNI_ALIGN	(16)
-#define AES_BLOCK_MASK	(~(AES_BLOCK_SIZE-1))
-#define RFC4106_HASH_SUBKEY_SIZE 16
-
 struct aesni_lrw_ctx {
 	struct lrw_table_ctx lrw_table;
 	u8 raw_aes_ctx[sizeof(struct crypto_aes_ctx) + AESNI_ALIGN - 1];
@@ -282,10 +283,11 @@  static void (*aesni_gcm_dec_tfm)(void *ctx, u8 *out,
 static inline struct
 aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
 {
-	return
-		(struct aesni_rfc4106_gcm_ctx *)
-		PTR_ALIGN((u8 *)
-		crypto_tfm_ctx(crypto_aead_tfm(tfm)), AESNI_ALIGN);
+	unsigned long align = AESNI_ALIGN;
+
+	if (align <= crypto_tfm_ctx_alignment())
+		align = 1;
+	return PTR_ALIGN(crypto_aead_ctx(tfm), align);
 }
 #endif
 
@@ -838,8 +840,6 @@  rfc4106_set_hash_subkey(u8 *hash_subkey, const u8 *key, unsigned int key_len)
 	if (IS_ERR(ctr_tfm))
 		return PTR_ERR(ctr_tfm);
 
-	crypto_ablkcipher_clear_flags(ctr_tfm, ~0);
-
 	ret = crypto_ablkcipher_setkey(ctr_tfm, key, key_len);
 	if (ret)
 		goto out_free_ablkcipher;
@@ -888,56 +888,20 @@  out_free_ablkcipher:
 static int common_rfc4106_set_key(struct crypto_aead *aead, const u8 *key,
 				  unsigned int key_len)
 {
-	int ret = 0;
-	struct crypto_tfm *tfm = crypto_aead_tfm(aead);
 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(aead);
-	u8 *new_key_align, *new_key_mem = NULL;
 
 	if (key_len < 4) {
-		crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		crypto_aead_set_flags(aead, CRYPTO_TFM_RES_BAD_KEY_LEN);
 		return -EINVAL;
 	}
 	/*Account for 4 byte nonce at the end.*/
 	key_len -= 4;
-	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
-	    key_len != AES_KEYSIZE_256) {
-		crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
-		return -EINVAL;
-	}
 
 	memcpy(ctx->nonce, key + key_len, sizeof(ctx->nonce));
-	/*This must be on a 16 byte boundary!*/
-	if ((unsigned long)(&(ctx->aes_key_expanded.key_enc[0])) % AESNI_ALIGN)
-		return -EINVAL;
-
-	if ((unsigned long)key % AESNI_ALIGN) {
-		/*key is not aligned: use an auxuliar aligned pointer*/
-		new_key_mem = kmalloc(key_len+AESNI_ALIGN, GFP_KERNEL);
-		if (!new_key_mem)
-			return -ENOMEM;
-
-		new_key_align = PTR_ALIGN(new_key_mem, AESNI_ALIGN);
-		memcpy(new_key_align, key, key_len);
-		key = new_key_align;
-	}
 
-	if (!irq_fpu_usable())
-		ret = crypto_aes_expand_key(&(ctx->aes_key_expanded),
-		key, key_len);
-	else {
-		kernel_fpu_begin();
-		ret = aesni_set_key(&(ctx->aes_key_expanded), key, key_len);
-		kernel_fpu_end();
-	}
-	/*This must be on a 16 byte boundary!*/
-	if ((unsigned long)(&(ctx->hash_subkey[0])) % AESNI_ALIGN) {
-		ret = -EINVAL;
-		goto exit;
-	}
-	ret = rfc4106_set_hash_subkey(ctx->hash_subkey, key, key_len);
-exit:
-	kfree(new_key_mem);
-	return ret;
+	return aes_set_key_common(crypto_aead_tfm(aead),
+				  &ctx->aes_key_expanded, key, key_len) ?:
+	       rfc4106_set_hash_subkey(ctx->hash_subkey, key, key_len);
 }
 
 static int rfc4106_set_key(struct crypto_aead *parent, const u8 *key,
@@ -960,7 +924,7 @@  static int common_rfc4106_set_authsize(struct crypto_aead *aead,
 	default:
 		return -EINVAL;
 	}
-	crypto_aead_crt(aead)->authsize = authsize;
+
 	return 0;
 }
 
@@ -975,20 +939,17 @@  static int rfc4106_set_authsize(struct crypto_aead *parent,
 	return crypto_aead_setauthsize(&cryptd_tfm->base, authsize);
 }
 
-static int __driver_rfc4106_encrypt(struct aead_request *req)
+static int helper_rfc4106_encrypt(struct aead_request *req)
 {
 	u8 one_entry_in_sg = 0;
 	u8 *src, *dst, *assoc;
 	__be32 counter = cpu_to_be32(1);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
-	u32 key_len = ctx->aes_key_expanded.key_length;
 	void *aes_ctx = &(ctx->aes_key_expanded);
 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
-	u8 iv_tab[16+AESNI_ALIGN];
-	u8* iv = (u8 *) PTR_ALIGN((u8 *)iv_tab, AESNI_ALIGN);
+	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
 	struct scatter_walk src_sg_walk;
-	struct scatter_walk assoc_sg_walk;
 	struct scatter_walk dst_sg_walk;
 	unsigned int i;
 
@@ -997,12 +958,6 @@  static int __driver_rfc4106_encrypt(struct aead_request *req)
 	/* to 8 or 12 bytes */
 	if (unlikely(req->assoclen != 8 && req->assoclen != 12))
 		return -EINVAL;
-	if (unlikely(auth_tag_len != 8 && auth_tag_len != 12 && auth_tag_len != 16))
-	        return -EINVAL;
-	if (unlikely(key_len != AES_KEYSIZE_128 &&
-	             key_len != AES_KEYSIZE_192 &&
-	             key_len != AES_KEYSIZE_256))
-	        return -EINVAL;
 
 	/* IV below built */
 	for (i = 0; i < 4; i++)
@@ -1011,55 +966,57 @@  static int __driver_rfc4106_encrypt(struct aead_request *req)
 		*(iv+4+i) = req->iv[i];
 	*((__be32 *)(iv+12)) = counter;
 
-	if ((sg_is_last(req->src)) && (sg_is_last(req->assoc))) {
+	if (sg_is_last(req->src) &&
+	    req->src->offset + req->src->length <= PAGE_SIZE &&
+	    sg_is_last(req->dst) &&
+	    req->dst->offset + req->dst->length <= PAGE_SIZE) {
 		one_entry_in_sg = 1;
 		scatterwalk_start(&src_sg_walk, req->src);
-		scatterwalk_start(&assoc_sg_walk, req->assoc);
-		src = scatterwalk_map(&src_sg_walk);
-		assoc = scatterwalk_map(&assoc_sg_walk);
+		assoc = scatterwalk_map(&src_sg_walk);
+		src = assoc + req->assoclen;
 		dst = src;
 		if (unlikely(req->src != req->dst)) {
 			scatterwalk_start(&dst_sg_walk, req->dst);
-			dst = scatterwalk_map(&dst_sg_walk);
+			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
 		}
-
 	} else {
 		/* Allocate memory for src, dst, assoc */
-		src = kmalloc(req->cryptlen + auth_tag_len + req->assoclen,
+		assoc = kmalloc(req->cryptlen + auth_tag_len + req->assoclen,
 			GFP_ATOMIC);
-		if (unlikely(!src))
+		if (unlikely(!assoc))
 			return -ENOMEM;
-		assoc = (src + req->cryptlen + auth_tag_len);
-		scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0);
-		scatterwalk_map_and_copy(assoc, req->assoc, 0,
-					req->assoclen, 0);
+		scatterwalk_map_and_copy(assoc, req->src, 0,
+					 req->assoclen + req->cryptlen, 0);
+		src = assoc + req->assoclen;
 		dst = src;
 	}
 
+	kernel_fpu_begin();
 	aesni_gcm_enc_tfm(aes_ctx, dst, src, (unsigned long)req->cryptlen, iv,
 		ctx->hash_subkey, assoc, (unsigned long)req->assoclen, dst
 		+ ((unsigned long)req->cryptlen), auth_tag_len);
+	kernel_fpu_end();
 
 	/* The authTag (aka the Integrity Check Value) needs to be written
 	 * back to the packet. */
 	if (one_entry_in_sg) {
 		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst);
-			scatterwalk_done(&dst_sg_walk, 0, 0);
+			scatterwalk_unmap(dst - req->assoclen);
+			scatterwalk_advance(&dst_sg_walk, req->dst->length);
+			scatterwalk_done(&dst_sg_walk, 1, 0);
 		}
-		scatterwalk_unmap(src);
 		scatterwalk_unmap(assoc);
-		scatterwalk_done(&src_sg_walk, 0, 0);
-		scatterwalk_done(&assoc_sg_walk, 0, 0);
+		scatterwalk_advance(&src_sg_walk, req->src->length);
+		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
 	} else {
-		scatterwalk_map_and_copy(dst, req->dst, 0,
-			req->cryptlen + auth_tag_len, 1);
-		kfree(src);
+		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
+					 req->cryptlen + auth_tag_len, 1);
+		kfree(assoc);
 	}
 	return 0;
 }
 
-static int __driver_rfc4106_decrypt(struct aead_request *req)
+static int helper_rfc4106_decrypt(struct aead_request *req)
 {
 	u8 one_entry_in_sg = 0;
 	u8 *src, *dst, *assoc;
@@ -1068,26 +1025,16 @@  static int __driver_rfc4106_decrypt(struct aead_request *req)
 	int retval = 0;
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
-	u32 key_len = ctx->aes_key_expanded.key_length;
 	void *aes_ctx = &(ctx->aes_key_expanded);
 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
-	u8 iv_and_authTag[32+AESNI_ALIGN];
-	u8 *iv = (u8 *) PTR_ALIGN((u8 *)iv_and_authTag, AESNI_ALIGN);
-	u8 *authTag = iv + 16;
+	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+	u8 authTag[16];
 	struct scatter_walk src_sg_walk;
-	struct scatter_walk assoc_sg_walk;
 	struct scatter_walk dst_sg_walk;
 	unsigned int i;
 
-	if (unlikely((req->cryptlen < auth_tag_len) ||
-		(req->assoclen != 8 && req->assoclen != 12)))
+	if (unlikely(req->assoclen != 8 && req->assoclen != 12))
 		return -EINVAL;
-	if (unlikely(auth_tag_len != 8 && auth_tag_len != 12 && auth_tag_len != 16))
-	        return -EINVAL;
-	if (unlikely(key_len != AES_KEYSIZE_128 &&
-	             key_len != AES_KEYSIZE_192 &&
-	             key_len != AES_KEYSIZE_256))
-	        return -EINVAL;
 
 	/* Assuming we are supporting rfc4106 64-bit extended */
 	/* sequence numbers We need to have the AAD length */
@@ -1101,33 +1048,36 @@  static int __driver_rfc4106_decrypt(struct aead_request *req)
 		*(iv+4+i) = req->iv[i];
 	*((__be32 *)(iv+12)) = counter;
 
-	if ((sg_is_last(req->src)) && (sg_is_last(req->assoc))) {
+	if (sg_is_last(req->src) &&
+	    req->src->offset + req->src->length <= PAGE_SIZE &&
+	    sg_is_last(req->dst) &&
+	    req->dst->offset + req->dst->length <= PAGE_SIZE) {
 		one_entry_in_sg = 1;
 		scatterwalk_start(&src_sg_walk, req->src);
-		scatterwalk_start(&assoc_sg_walk, req->assoc);
-		src = scatterwalk_map(&src_sg_walk);
-		assoc = scatterwalk_map(&assoc_sg_walk);
+		assoc = scatterwalk_map(&src_sg_walk);
+		src = assoc + req->assoclen;
 		dst = src;
 		if (unlikely(req->src != req->dst)) {
 			scatterwalk_start(&dst_sg_walk, req->dst);
-			dst = scatterwalk_map(&dst_sg_walk);
+			dst = scatterwalk_map(&dst_sg_walk) + req->assoclen;
 		}
 
 	} else {
 		/* Allocate memory for src, dst, assoc */
-		src = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
-		if (!src)
+		assoc = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
+		if (!assoc)
 			return -ENOMEM;
-		assoc = (src + req->cryptlen);
-		scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0);
-		scatterwalk_map_and_copy(assoc, req->assoc, 0,
-			req->assoclen, 0);
+		scatterwalk_map_and_copy(assoc, req->src, 0,
+					 req->assoclen + req->cryptlen, 0);
+		src = assoc + req->assoclen;
 		dst = src;
 	}
 
+	kernel_fpu_begin();
 	aesni_gcm_dec_tfm(aes_ctx, dst, src, tempCipherLen, iv,
 		ctx->hash_subkey, assoc, (unsigned long)req->assoclen,
 		authTag, auth_tag_len);
+	kernel_fpu_end();
 
 	/* Compare generated tag with passed in tag. */
 	retval = crypto_memneq(src + tempCipherLen, authTag, auth_tag_len) ?
@@ -1135,16 +1085,17 @@  static int __driver_rfc4106_decrypt(struct aead_request *req)
 
 	if (one_entry_in_sg) {
 		if (unlikely(req->src != req->dst)) {
-			scatterwalk_unmap(dst);
-			scatterwalk_done(&dst_sg_walk, 0, 0);
+			scatterwalk_unmap(dst - req->assoclen);
+			scatterwalk_advance(&dst_sg_walk, req->dst->length);
+			scatterwalk_done(&dst_sg_walk, 1, 0);
 		}
-		scatterwalk_unmap(src);
 		scatterwalk_unmap(assoc);
-		scatterwalk_done(&src_sg_walk, 0, 0);
-		scatterwalk_done(&assoc_sg_walk, 0, 0);
+		scatterwalk_advance(&src_sg_walk, req->src->length);
+		scatterwalk_done(&src_sg_walk, req->src == req->dst, 0);
 	} else {
-		scatterwalk_map_and_copy(dst, req->dst, 0, tempCipherLen, 1);
-		kfree(src);
+		scatterwalk_map_and_copy(dst, req->dst, req->assoclen,
+					 tempCipherLen, 1);
+		kfree(assoc);
 	}
 	return retval;
 }
@@ -1188,36 +1139,6 @@  static int rfc4106_decrypt(struct aead_request *req)
 
 	return crypto_aead_decrypt(subreq);
 }
-
-static int helper_rfc4106_encrypt(struct aead_request *req)
-{
-	int ret;
-
-	if (unlikely(!irq_fpu_usable())) {
-		WARN_ONCE(1, "__gcm-aes-aesni alg used in invalid context");
-		ret = -EINVAL;
-	} else {
-		kernel_fpu_begin();
-		ret = __driver_rfc4106_encrypt(req);
-		kernel_fpu_end();
-	}
-	return ret;
-}
-
-static int helper_rfc4106_decrypt(struct aead_request *req)
-{
-	int ret;
-
-	if (unlikely(!irq_fpu_usable())) {
-		WARN_ONCE(1, "__gcm-aes-aesni alg used in invalid context");
-		ret = -EINVAL;
-	} else {
-		kernel_fpu_begin();
-		ret = __driver_rfc4106_decrypt(req);
-		kernel_fpu_end();
-	}
-	return ret;
-}
 #endif
 
 static struct crypto_alg aesni_algs[] = { {
@@ -1389,27 +1310,6 @@  static struct crypto_alg aesni_algs[] = { {
 			.geniv		= "chainiv",
 		},
 	},
-}, {
-	.cra_name		= "__gcm-aes-aesni",
-	.cra_driver_name	= "__driver-gcm-aes-aesni",
-	.cra_priority		= 0,
-	.cra_flags		= CRYPTO_ALG_TYPE_AEAD | CRYPTO_ALG_INTERNAL,
-	.cra_blocksize		= 1,
-	.cra_ctxsize		= sizeof(struct aesni_rfc4106_gcm_ctx) +
-				  AESNI_ALIGN,
-	.cra_alignmask		= 0,
-	.cra_type		= &crypto_aead_type,
-	.cra_module		= THIS_MODULE,
-	.cra_u = {
-		.aead = {
-			.setkey		= common_rfc4106_set_key,
-			.setauthsize	= common_rfc4106_set_authsize,
-			.encrypt	= helper_rfc4106_encrypt,
-			.decrypt	= helper_rfc4106_decrypt,
-			.ivsize		= 8,
-			.maxauthsize	= 16,
-		},
-	},
 #endif
 #if IS_ENABLED(CONFIG_CRYPTO_PCBC)
 }, {
@@ -1526,6 +1426,22 @@  static struct crypto_alg aesni_algs[] = { {
 
 #ifdef CONFIG_X86_64
 static struct aead_alg aesni_aead_algs[] = { {
+	.setkey			= common_rfc4106_set_key,
+	.setauthsize		= common_rfc4106_set_authsize,
+	.encrypt		= helper_rfc4106_encrypt,
+	.decrypt		= helper_rfc4106_decrypt,
+	.ivsize			= 8,
+	.maxauthsize		= 16,
+	.base = {
+		.cra_name		= "__gcm-aes-aesni",
+		.cra_driver_name	= "__driver-gcm-aes-aesni",
+		.cra_flags		= CRYPTO_ALG_INTERNAL,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct aesni_rfc4106_gcm_ctx),
+		.cra_alignmask		= AESNI_ALIGN - 1,
+		.cra_module		= THIS_MODULE,
+	},
+}, {
 	.init			= rfc4106_init,
 	.exit			= rfc4106_exit,
 	.setkey			= rfc4106_set_key,