crypto: add asynchronous compression support

Message ID 1445008260-39367-1-git-send-email-weigang.li@intel.com (mailing list archive)
State RFC
Delegated to: Herbert Xu

Commit Message

Weigang Li Oct. 16, 2015, 3:11 p.m. UTC
This patch set introduces an asynchronous compression API.
It proposes a new crypto type called crypto_acomp_type,
plus a new struct acomp_alg and struct crypto_acomp, together with a number
of helper functions to register acomp-type algorithms and allocate tfm
instances. This mirrors how the existing crypto API
works for the ablkcipher and akcipher types.
The operations the new interface will provide are:

	int (*compress)(struct acompress_request *req);
	int (*decompress)(struct acompress_request *req);

The benefits of this interface are:
- it allows for asynchronous implementations and scatterlist buffers,
  so that hardware can offload the compression operations; the
  asynchronous API can be called by Linux kernel components (e.g.,
  btrfs) that want to use hardware acceleration for data compression.

New helper functions have been added to allocate crypto_acomp instances 
and invoke the operations to make it easier to use.
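
A minimal usage sketch (hypothetical caller code based on the helpers
introduced by this patch; names such as my_done_cb, my_sg_src, my_sg_dst
and the error handling are illustrative only):

	struct crypto_acomp *tfm;
	struct acompress_request *req;
	DECLARE_COMPLETION_ONSTACK(done);
	int ret;

	tfm = crypto_alloc_acomp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		goto out_free_tfm;

	/* my_done_cb() is expected to complete(&done) when the h/w finishes */
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   my_done_cb, &done);
	acomp_request_set_comp(req, my_sg_src, my_sg_dst, nbytes);

	ret = crypto_acomp_compress(req);
	if (ret == -EINPROGRESS || ret == -EBUSY)
		wait_for_completion(&done);	/* asynchronous path */

	acomp_request_free(req);
out_free_tfm:
	crypto_free_acomp(tfm);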

Signed-off-by: Weigang Li <weigang.li@intel.com>
---
 crypto/Kconfig                      |    5 +
 crypto/Makefile                     |    1 +
 crypto/acompress.c                  |  116 ++++++++++++++++
 crypto/crypto_user.c                |   21 +++
 include/crypto/acompress.h          |  251 +++++++++++++++++++++++++++++++++++
 include/crypto/internal/acompress.h |   62 +++++++++
 include/linux/crypto.h              |    1 +
 include/uapi/linux/cryptouser.h     |    5 +
 8 files changed, 462 insertions(+), 0 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h

Comments

Herbert Xu Oct. 16, 2015, 3:13 p.m. UTC | #1
On Fri, Oct 16, 2015 at 11:11:00PM +0800, Weigang Li wrote:
> This patch set introduces Asynchronous Compression API.
> What is proposed is a new crypto type called crypto_acomp_type, 
> plus new struct acomp_alg and struct crypto_acomp, together with number 
> of helper functions to register acomp type algorithms and allocate tfm 
> instances. This is to make it similar to how the existing crypto API 
> works for the ablkcipher, and akcipher types.
> The operations the new interface will provide are:
> 
> 	int (*compress)(struct acompress_request *req);
> 	int (*decompress)(struct acompress_request *req);
> 
> The benefits it gives interface are:
> - the new interface allows for asynchronous implementations and
>   scatterlist buffer that can use hardware to offload the compression
>   operations, the new asynchronous API can be called by the linux kernel
>   components (i.e., btrfs) who want to use hardware acceleration for data
>   compression.
> 
> New helper functions have been added to allocate crypto_acomp instances 
> and invoke the operations to make it easier to use.
> 
> Signed-off-by: Weigang Li <weigang.li@intel.com>

Thanks for the patch! Joonsoo Kim is also working on the compression
interface for zram.  Could you two collaborate and come up with one
interface rather than two?

Cheers,
Weigang Li Oct. 21, 2015, 7:59 a.m. UTC | #2
> -----Original Message-----
> From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
> Sent: Wednesday, October 21, 2015 3:34 PM
> To: Sergey Senozhatsky
> Cc: Minchan Kim; Joonsoo Kim; Dan Streetman; Seth Jennings; Li, Weigang;
> Struk, Tadeusz
> Subject: Re: [PATCH] crypto: add asynchronous compression support
> 
> On Wed, Oct 21, 2015 at 04:33:22PM +0900, Sergey Senozhatsky wrote:
> >
> > the thing is -- I still want to have/use SW compressors; and they [the
> > existing S/W algorithms] want virtual addresses. so we need to
> > kmap_atomic pages in every SW algorithm? isn't it simpler to keep
> > kmap_atomic_to_page around?
> 
> The Crypto API will do it for you.  Have a look at how we handle it for aead
> and ahash for example.
> 
> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
> http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

Adding the "linux-crypto" list back to the latest discussion.
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Dan Streetman Oct. 28, 2015, 11:13 p.m. UTC | #3
On Wed, Oct 21, 2015 at 3:59 AM, Li, Weigang <weigang.li@intel.com> wrote:
>> -----Original Message-----
>> From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
>> Sent: Wednesday, October 21, 2015 3:34 PM
>> To: Sergey Senozhatsky
>> Cc: Minchan Kim; Joonsoo Kim; Dan Streetman; Seth Jennings; Li, Weigang;
>> Struk, Tadeusz
>> Subject: Re: [PATCH] crypto: add asynchronous compression support
>>
>> On Wed, Oct 21, 2015 at 04:33:22PM +0900, Sergey Senozhatsky wrote:
>> >
>> > the thing is -- I still want to have/use SW compressors; and they [the
>> > existing S/W algorithms] want virtual addresses. so we need to
>> > kmap_atomic pages in every SW algorithm? isn't it simpler to keep
>> > kmap_atomic_to_page around?
>>
>> The Crypto API will do it for you.  Have a look at how we handle it for aead
>> and ahash for example.
>>
>> Cheers,
>> --
>> Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page:
>> http://gondor.apana.org.au/~herbert/
>> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
>
> Adding the "linux-crypto" list back to the latest discussion.

re: zswap, I suspect it won't be able to use async compression,
because when its store function returns the page must be available for
reallocation.  Of course, async compression can be useful elsewhere.
Weigang Li Nov. 6, 2015, 1:55 a.m. UTC | #4
After syncing up with Joonsoo Kim offline, he agreed to merge this acomp patch with his ccomp patch. Thanks, Joonsoo!

-----Original Message-----
From: Herbert Xu [mailto:herbert@gondor.apana.org.au] 
Sent: Friday, October 16, 2015 11:14 PM
To: Li, Weigang
Cc: linux-crypto@vger.kernel.org; Struk, Tadeusz; Joonsoo Kim; Sergey Senozhatsky
Subject: Re: [PATCH] crypto: add asynchronous compression support

On Fri, Oct 16, 2015 at 11:11:00PM +0800, Weigang Li wrote:
> This patch set introduces Asynchronous Compression API.
> What is proposed is a new crypto type called crypto_acomp_type, plus 
> new struct acomp_alg and struct crypto_acomp, together with number of 
> helper functions to register acomp type algorithms and allocate tfm 
> instances. This is to make it similar to how the existing crypto API 
> works for the ablkcipher, and akcipher types.
> The operations the new interface will provide are:
> 
> 	int (*compress)(struct acompress_request *req);
> 	int (*decompress)(struct acompress_request *req);
> 
> The benefits it gives interface are:
> - the new interface allows for asynchronous implementations and
>   scatterlist buffer that can use hardware to offload the compression
>   operations, the new asynchronous API can be called by the linux kernel
>   components (i.e., btrfs) who want to use hardware acceleration for data
>   compression.
> 
> New helper functions have been added to allocate crypto_acomp 
> instances and invoke the operations to make it easier to use.
> 
> Signed-off-by: Weigang Li <weigang.li@intel.com>

Thanks for the patch! Joonsoo Kim is also working on the compression interface for zram.  Could you two collaborate and come up with one interface rather than two?

Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Weigang Li Nov. 19, 2015, 5:52 a.m. UTC | #5
On 10/16/2015 11:13 PM, Herbert Xu wrote:
> On Fri, Oct 16, 2015 at 11:11:00PM +0800, Weigang Li wrote:
>> This patch set introduces Asynchronous Compression API.
>> What is proposed is a new crypto type called crypto_acomp_type,
>> plus new struct acomp_alg and struct crypto_acomp, together with number
>> of helper functions to register acomp type algorithms and allocate tfm
>> instances. This is to make it similar to how the existing crypto API
>> works for the ablkcipher, and akcipher types.
>> The operations the new interface will provide are:
>>
>> 	int (*compress)(struct acompress_request *req);
>> 	int (*decompress)(struct acompress_request *req);
>>
>> The benefits it gives interface are:
>> - the new interface allows for asynchronous implementations and
>>    scatterlist buffer that can use hardware to offload the compression
>>    operations, the new asynchronous API can be called by the linux kernel
>>    components (i.e., btrfs) who want to use hardware acceleration for data
>>    compression.
>>
>> New helper functions have been added to allocate crypto_acomp instances
>> and invoke the operations to make it easier to use.
>>
>> Signed-off-by: Weigang Li <weigang.li@intel.com>
> 
> Thanks for the patch! Joonsoo Kim is also working on the compression
> interface for zram.  Could you two collaborate and come up with one
> interface rather than two?
> 
> Cheers,
> 
Hello Herbert,

After syncing up with Joonsoo Kim, we think an s/w implementation of the sg-list based asynchronous interface may not be feasible, so we propose separate interfaces (patches) for acomp & ccomp. The reasons are:
1. supporting sg-lists in ccomp (as shash/ahash do) requires partial updates, and some algorithms (e.g., lzo) do not support partial update; that means:
2. the format of the output buffer (sg-list) will differ between algorithms: e.g., lzo needs to store the "length" of each block in the output sg-list in order to decompress, while zlib doesn't, so it is difficult to have a single async sg-list interface.
3. to compress an sg-list buffer, lzo also requires an intermediate buffer to hold the output of each block, which must then be copied back to the sg-list output buffer; this adds complexity and cost, and we don't see value in sg-list support for s/w compression.

Any suggestions?
Herbert Xu Nov. 19, 2015, 9:42 a.m. UTC | #6
On Thu, Nov 19, 2015 at 05:52:41AM +0000, Li, Weigang wrote:
>
> After sync-up with Joonsoo Kim, we think it may be not feasible for a s/w implementation of the sg-list based asynchronous interface, we propose separate interfaces (patches) for acomp & ccomp. The reasons are:
> 1. to support sg-list in the ccomp (like what shash/ahash did), the partial update is required, some algorithms do not support partial update (i.e., lzo), that means:

No this is not true.  For the ones that don't support partial
updates you can always linearise the input and then feed it in
as one chunk.  Because the overall interface you're proposing
does not allow partial updates the underlying implementation
doesn't need to do it either.  Only linearisation is necessary.

> 2. the format of output buffer (sg-list) will be different, e.g., the lzo need contain the "length" info for each block in the output sg-list in order to de-compression, while zlib doesn't need, then it is difficult to have a single async sg-list i/f.

I have no idea what you mean here.  Please explain.

> 3. to compress a sg-list buffer, the lzo also requires an intermediate buffer to save the output of a block, and copy it back to the sg-list output buffer, it will introduce the complexity and cost, we don't see value for sg-list support in a s/w compression.

Such an intermediate buffer would only be needed if the SG list is
actually non-linear.  So I don't see this as an issue.

Cheers,
Joonsoo Kim Nov. 20, 2015, 6:04 a.m. UTC | #7
Hello, Herbert.

> -----Original Message-----
> From: Herbert Xu [mailto:herbert@gondor.apana.org.au]
> Sent: Thursday, November 19, 2015 6:43 PM
> To: Li, Weigang
> Cc: linux-crypto@vger.kernel.org; Struk, Tadeusz; Joonsoo Kim; Sergey
> Senozhatsky
> Subject: Re: [PATCH] crypto: add asynchronous compression support
> 
> On Thu, Nov 19, 2015 at 05:52:41AM +0000, Li, Weigang wrote:
> >
> > After sync-up with Joonsoo Kim, we think it may be not feasible for a
> s/w implementation of the sg-list based asynchronous interface, we propose
> separate interfaces (patches) for acomp & ccomp. The reasons are:
> > 1. to support sg-list in the ccomp (like what shash/ahash did), the
> partial update is required, some algorithms do not support partial update
> (i.e., lzo), that means:
> 
> No this is not true.  For the ones that don't support partial
> updates you can always linearise the input and then feed it in
> as one chunk.  Because the overall interface you're proposing
> does not allow partial updates the underlying implementation
> doesn't need to do it either.  Only linearisation is necessary.

Linearization would be enough to use an sg-list, but it has a problem.
Linearization needs a sleepable function such as vmap(), which makes
sync (de)compression in atomic context impossible. Currently, zram
does sync (de)compression in atomic context. It uses map_vm_area(), which
isn't sleepable, to linearize two separate pages. This is possible
because zram already knows that the maximum number of spread pages is
just two and has allocated the vm area in advance. But if we implement
linearization in a general API, we can't be sure of the request input
size, so we need the sleepable function vmap().

And this sleep could degrade performance.

> > 2. the format of output buffer (sg-list) will be different, e.g., the
> lzo need contain the "length" info for each block in the output sg-list in
> order to de-compression, while zlib doesn't need, then it is difficult to
> have a single async sg-list i/f.
> 
> I have no idea what you mean here.  Please explain.
> 
> > 3. to compress a sg-list buffer, the lzo also requires an intermediate
> buffer to save the output of a block, and copy it back to the sg-list
> output buffer, it will introduce the complexity and cost, we don't see
> value for sg-list support in a s/w compression.
> 
> Such an intermediate buffer would only  be needed if the SG list is
> actually non-linear.  So I don't see this as an issue.

Intermediate buffer size could vary greatly, so the buffer would be
allocated and freed on every request. This could affect performance.

I think that supporting a unified API has more loss than gain.
I'm not an expert in this area, so please let me know what I missed.

Thanks.

Herbert Xu Nov. 20, 2015, 6:19 a.m. UTC | #8
On Fri, Nov 20, 2015 at 03:04:47PM +0900, Joonsoo Kim wrote:
>
> Linearization would be enough to use sg-list but it has a problem.
> Linearization needs sleepable function such as vmap() and it makes
> sync (de)compression in atomic context impossible. Currently, zram
> did sync (de)compression in atomic context. It uses map_vm_area() which
> isn't sleepable function to linearize two separate pages. This is possible
> because zram already knows that maximum number of spread pages is just two
> and have allocated vm area in advance. But, if we implement linearization
> in general API, we can't be sure of request input size so we need
> sleepable function, vmap().
> 
> And, this sleep could degrade performance.

Obviously you would only perform linearisation where it's needed.
And if you are in an atomic context, then clearly linearisation
can only be attempted using kmalloc/alloc_pages with GFP_ATOMIC.

I don't understand your concern with zram because either zram is
already feeding us linear buffers in which case linearisation is
never needed.  Or it can only be used with algorithms that support
SG lists or partial updates, which we can easily mark using a flag.

> Intermediate buffer size could vary greatly so it would be allocated and
> Freed whenever requested. This could affect performance.

That's for the crypto user to figure out.  Either they should
supply a completely linear buffer if they want to be able to
support every algorithm in an efficient manner, or they will
either have to eat the cost of linearisation or only use algorithms
that can deal with SG lists efficiently.

We have the same problem with network drivers and it's dealt with
in exactly the same way.  An skb can be an SG list and will be
linearised when necessary.

> I think that supporting unified API has more loss than gain.

I disagree.  I have seen no valid reason so far for adding two
compression interfaces.

Cheers,
Weigang Li Nov. 20, 2015, 7:25 a.m. UTC | #9
On 11/20/2015 2:19 PM, Herbert Xu wrote:
> On Fri, Nov 20, 2015 at 03:04:47PM +0900, Joonsoo Kim wrote:
>>
>> Linearization would be enough to use sg-list but it has a problem.
>> Linearization needs sleepable function such as vmap() and it makes
>> sync (de)compression in atomic context impossible. Currently, zram
>> did sync (de)compression in atomic context. It uses map_vm_area() which
>> isn't sleepable function to linearize two separate pages. This is possible
>> because zram already knows that maximum number of spread pages is just two
>> and have allocated vm area in advance. But, if we implement linearization
>> in general API, we can't be sure of request input size so we need
>> sleepable function, vmap().
>>
>> And, this sleep could degrade performance.
>
> Obviously you would only perform linearisation where it's needed.
> And if you are in an atomic context, then clearly linearisation
> can only be attempted using kmalloc/alloc_pages with GFP_ATOMIC.
>
> I don't understand your concern with zram because either zram is
> already feeding us linear buffers in which case linearisation is
> never needed.  Or it can only be used with algorithms that support
> SG lists or partial updates, which we can easily mark using a flag.
>
>> Intermediate buffer size could vary greatly so it would be allocated and
>> Freed whenever requested. This could affect performance.
>
> That's for the crypto user to figure out.  Either they should
> supply a completely linear buffer if they want to be able to
> support every algorithm in an efficient manner, or they will
> either have to eat the cost of linearisation or only use algorithms
> that can deal with SG lists efficiently.
>
> We have the same problem with network drivers and it's dealt with
> in exactly the same way.  An skb can be an SG list and will be
> linearised when necessary.
>
>> I think that supporting unified API has more loss than gain.
>
> I disagree.  I have seen no valid reason so far for adding two
> compression interfaces.
>
> Cheers,
>
Thanks Herbert.

If we assume the sg-list can be linearized, i.e. there are no "holes" in
the sg-list and all chunks in the middle of the list are PAGE_SIZE, then
it seems OK to support sg-lists in the s/w implementation: linearize the
sg-list and compress it as one chunk.
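
Such a linearisation path might look roughly like this (a hypothetical,
untested helper, assuming the no-holes/PAGE_SIZE-chunk constraints above
and process context, since vmap() may sleep):

	/* Map a multi-entry sg-list into one contiguous virtual range so
	 * that algorithms without partial-update support can process it
	 * as a single chunk.  For nents > 1 the caller must vunmap() the
	 * returned address when done. */
	static void *acomp_linearize(struct scatterlist *sgl, int nents)
	{
		struct page **pages;
		void *vaddr;
		int i;

		if (nents == 1)
			return sg_virt(sgl);	/* already linear, no mapping needed */

		pages = kmalloc_array(nents, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return NULL;
		for (i = 0; i < nents; i++, sgl = sg_next(sgl))
			pages[i] = sg_page(sgl);
		vaddr = vmap(pages, nents, VM_MAP, PAGE_KERNEL);
		kfree(pages);
		return vaddr;
	}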

Patch

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 7240821..8b8d2d6 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -93,6 +93,10 @@  config CRYPTO_PCOMP2
 	tristate
 	select CRYPTO_ALGAPI2
 
+config CRYPTO_ACOMP2
+	tristate
+	select CRYPTO_ALGAPI2
+
 config CRYPTO_AKCIPHER2
 	tristate
 	select CRYPTO_ALGAPI2
@@ -123,6 +127,7 @@  config CRYPTO_MANAGER2
 	select CRYPTO_HASH2
 	select CRYPTO_BLKCIPHER2
 	select CRYPTO_PCOMP2
+	select CRYPTO_ACOMP2
 	select CRYPTO_AKCIPHER2
 
 config CRYPTO_USER
diff --git a/crypto/Makefile b/crypto/Makefile
index f7aba92..6ad718f 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -29,6 +29,7 @@  crypto_hash-y += shash.o
 obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 
 obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o
+obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
new file mode 100644
index 0000000..0becbf1
--- /dev/null
+++ b/crypto/acompress.c
@@ -0,0 +1,116 @@ 
+/*
+ * Asynchronous compression operations
+ *
+ * Copyright (c) 2015, Intel Corporation
+ * Authors: Weigang Li <weigang.li@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/crypto.h>
+#include <crypto/algapi.h>
+#include <linux/cryptouser.h>
+#include <net/netlink.h>
+#include <crypto/acompress.h>
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_acompress_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct crypto_report_acomp racomp;
+
+	strncpy(racomp.type, "acompress", sizeof(racomp.type));
+
+	if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMPRESS,
+		    sizeof(struct crypto_report_acomp), &racomp))
+		goto nla_put_failure;
+	return 0;
+
+nla_put_failure:
+	return -EMSGSIZE;
+}
+#else
+static int crypto_acompress_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	return -ENOSYS;
+}
+#endif
+
+static void crypto_acompress_show(struct seq_file *m, struct crypto_alg *alg)
+	__attribute__ ((unused));
+
+static void crypto_acompress_show(struct seq_file *m, struct crypto_alg *alg)
+{
+	seq_puts(m, "type         : acompress\n");
+}
+
+static void crypto_acompress_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+	struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+	alg->exit(acomp);
+}
+
+static int crypto_acompress_init_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+	struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+	if (alg->exit)
+		acomp->base.exit = crypto_acompress_exit_tfm;
+
+	if (alg->init)
+		return alg->init(acomp);
+
+	return 0;
+}
+
+static const struct crypto_type crypto_acomp_type = {
+	.extsize = crypto_alg_extsize,
+	.init_tfm = crypto_acompress_init_tfm,
+#ifdef CONFIG_PROC_FS
+	.show = crypto_acompress_show,
+#endif
+	.report = crypto_acompress_report,
+	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.type = CRYPTO_ALG_TYPE_ACOMPRESS,
+	.tfmsize = offsetof(struct crypto_acomp, base),
+};
+
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+					u32 mask)
+{
+	return crypto_alloc_tfm(alg_name, &crypto_acomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
+
+int crypto_register_acomp(struct acomp_alg *alg)
+{
+	struct crypto_alg *base = &alg->base;
+
+	base->cra_type = &crypto_acomp_type;
+	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+	base->cra_flags |= CRYPTO_ALG_TYPE_ACOMPRESS;
+	return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_acomp);
+
+void crypto_unregister_acomp(struct acomp_alg *alg)
+{
+	crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_acomp);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Async compression type");
diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
index d94d99f..f8a3729 100644
--- a/crypto/crypto_user.c
+++ b/crypto/crypto_user.c
@@ -111,6 +111,21 @@  nla_put_failure:
 	return -EMSGSIZE;
 }
 
+static int crypto_report_acomp(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct crypto_report_acomp racomp;
+
+	strncpy(racomp.type, "acompress", sizeof(racomp.type));
+
+	if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMPRESS,
+		    sizeof(struct crypto_report_acomp), &racomp))
+		goto nla_put_failure;
+	return 0;
+
+nla_put_failure:
+	return -EMSGSIZE;
+}
+
 static int crypto_report_akcipher(struct sk_buff *skb, struct crypto_alg *alg)
 {
 	struct crypto_report_akcipher rakcipher;
@@ -171,6 +186,12 @@  static int crypto_report_one(struct crypto_alg *alg,
 
 		break;
 
+	case CRYPTO_ALG_TYPE_ACOMPRESS:
+		if (crypto_report_acomp(skb, alg))
+			goto nla_put_failure;
+
+		break;
+
 	case CRYPTO_ALG_TYPE_AKCIPHER:
 		if (crypto_report_akcipher(skb, alg))
 			goto nla_put_failure;
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
new file mode 100644
index 0000000..2b3d6d1
--- /dev/null
+++ b/include/crypto/acompress.h
@@ -0,0 +1,251 @@ 
+/*
+ * Asynchronous compression operations
+ *
+ * Copyright (c) 2015, Intel Corporation
+ * Authors: Weigang Li <weigang.li@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _CRYPTO_ACOMP_H
+#define _CRYPTO_ACOMP_H
+#include <linux/crypto.h>
+
+/**
+ * struct acompress_request - asynchronous compression request
+ *
+ * @base:	Common attributes for async crypto requests
+ * @nbytes:	Number of bytes to be processed
+ * @src:	Pointer to memory containing the input scatterlist buffer
+ * @dst:	Pointer to memory containing the output scatterlist buffer
+ * @__ctx:	Start of private context data
+ */
+struct acompress_request {
+	struct crypto_async_request base;
+	unsigned int nbytes;
+	struct scatterlist *src;
+	struct scatterlist *dst;
+	void *__ctx[] CRYPTO_MINALIGN_ATTR;
+};
+
+/**
+ * struct crypto_acomp - user-instantiated objects which encapsulate
+ * algorithms and core processing logic
+ *
+ * @base:	Common crypto API algorithm data structure
+ */
+struct crypto_acomp {
+	struct crypto_tfm base;
+};
+
+/**
+ * struct acomp_alg - async compression algorithm
+ *
+ * @compress:	Function performs a compress operation
+ * @decompress:	Function performs a de-compress operation
+ * @init:	Initialize the cryptographic transformation object.
+ *		This function is used to initialize the cryptographic
+ *		transformation object. This function is called only once at
+ *		the instantiation time, right after the transformation context
+ *		was allocated. In case the cryptographic hardware has some
+ *		special requirements which need to be handled by software, this
+ *		function shall check for the precise requirement of the
+ *		transformation and put any software fallbacks in place.
+ * @exit:	Deinitialize the cryptographic transformation object. This is a
+ *		counterpart to @init, used to remove various changes set in
+ *		@init.
+ *
+ * @reqsize:	Request size required by algorithm implementation
+ * @base:	Common crypto API algorithm data structure
+ */
+struct acomp_alg {
+	int (*compress)(struct acompress_request *req);
+	int (*decompress)(struct acompress_request *req);
+	int (*init)(struct crypto_acomp *tfm);
+	void (*exit)(struct crypto_acomp *tfm);
+	unsigned int reqsize;
+	struct crypto_alg base;
+};
+
+/**
+ * DOC: Asynchronous Compression API
+ *
+ * The Asynchronous Compression API is used with the algorithms of type
+ * CRYPTO_ALG_TYPE_ACOMPRESS (listed as type "acompress" in /proc/crypto)
+ */
+
+/**
+ * crypto_alloc_acomp() -- allocate ACOMPRESS tfm handle
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ *	      compression algorithm e.g. "deflate"
+ * @type: specifies the type of the algorithm
+ * @mask: specifies the mask for the algorithm
+ *
+ * Allocate a handle for compression algorithm. The returned struct
+ * crypto_acomp is the handle that is required for any subsequent
+ * API invocation for the compression operations.
+ *
+ * Return: allocated handle in case of success; IS_ERR() is true in case
+ *	   of an error, PTR_ERR() returns the error code.
+ */
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+					u32 mask);
+
+static inline struct crypto_tfm *crypto_acomp_tfm(struct crypto_acomp *tfm)
+{
+	return &tfm->base;
+}
+
+static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
+{
+	return container_of(alg, struct acomp_alg, base);
+}
+
+static inline struct crypto_acomp *__crypto_acomp_tfm(
+	struct crypto_tfm *tfm)
+{
+	return container_of(tfm, struct crypto_acomp, base);
+}
+
+static inline struct acomp_alg *crypto_acomp_alg(
+	struct crypto_acomp *tfm)
+{
+	return __crypto_acomp_alg(crypto_acomp_tfm(tfm)->__crt_alg);
+}
+
+static inline unsigned int crypto_acomp_reqsize(struct crypto_acomp *tfm)
+{
+	return crypto_acomp_alg(tfm)->reqsize;
+}
+
+static inline void acomp_request_set_tfm(struct acompress_request *req,
+					 struct crypto_acomp *tfm)
+{
+	req->base.tfm = crypto_acomp_tfm(tfm);
+}
+
+static inline struct crypto_acomp *crypto_acomp_reqtfm(
+				struct acompress_request *req)
+{
+	return __crypto_acomp_tfm(req->base.tfm);
+}
+
+/**
+ * crypto_free_acomp() -- free ACOMPRESS tfm handle
+ *
+ * @tfm: ACOMPRESS tfm handle allocated with crypto_alloc_acomp()
+ */
+static inline void crypto_free_acomp(struct crypto_acomp *tfm)
+{
+	crypto_destroy_tfm(tfm, crypto_acomp_tfm(tfm));
+}
+
+/**
+ * acomp_request_alloc() -- allocates async compress request
+ *
+ * @tfm:	ACOMPRESS tfm handle allocated with crypto_alloc_acomp()
+ * @gfp:	allocation flags
+ *
+ * Return: allocated handle in case of success or NULL in case of an error.
+ */
+static inline struct acompress_request *acomp_request_alloc(
+	struct crypto_acomp *tfm, gfp_t gfp)
+{
+	struct acompress_request *req;
+
+	req = kzalloc(sizeof(*req) + crypto_acomp_reqsize(tfm), gfp);
+	if (likely(req))
+		acomp_request_set_tfm(req, tfm);
+
+	return req;
+}
+
+/**
+ * acomp_request_free() -- zeroize and free async compress request
+ *
+ * @req:	request to free
+ */
+static inline void acomp_request_free(struct acompress_request *req)
+{
+	kfree(req);
+}
+
+/**
+ * acomp_request_set_callback() -- Sets an asynchronous callback.
+ *
+ * Callback will be called when an asynchronous operation on a given
+ * request is finished.
+ *
+ * @req:	request that the callback will be set for
+ * @flgs:	specify for instance if the operation may backlog
+ * @cmpl:	callback which will be called
+ * @data:	private data used by the caller
+ */
+static inline void acomp_request_set_callback(struct acompress_request *req,
+					      u32 flgs,
+					      crypto_completion_t cmpl,
+					      void *data)
+{
+	req->base.complete = cmpl;
+	req->base.data = data;
+	req->base.flags = flgs;
+}
+
+/**
+ * acomp_request_set_comp() -- Sets request parameters
+ *
+ * Sets parameters required by acomp operation
+ *
+ * @req:	async compress request
+ * @src:	ptr to input buffer list
+ * @dst:	ptr of output buffer list
+ * @nbytes:	size of the input buffer
+ */
+static inline void acomp_request_set_comp(struct acompress_request *req,
+					  struct scatterlist *src,
+					  struct scatterlist *dst,
+					  unsigned int nbytes)
+{
+	req->src = src;
+	req->dst = dst;
+	req->nbytes = nbytes;
+}
+
+/**
+ * crypto_acomp_compress() -- Invoke async compress operation
+ *
+ * Function invokes the async compress operation
+ *
+ * @req:	async compress request
+ *
+ * Return: zero on success; error code in case of error
+ */
+static inline int crypto_acomp_compress(struct acompress_request *req)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_alg *alg = crypto_acomp_alg(tfm);
+
+	return alg->compress(req);
+}
+
+/**
+ * crypto_acomp_decompress() -- Invoke async decompress operation
+ *
+ * Function invokes the async decompress operation
+ *
+ * @req:	async compress request
+ *
+ * Return: zero on success; error code in case of error
+ */
+static inline int crypto_acomp_decompress(struct acompress_request *req)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	struct acomp_alg *alg = crypto_acomp_alg(tfm);
+
+	return alg->decompress(req);
+}
+
+#endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
new file mode 100644
index 0000000..036f62b
--- /dev/null
+++ b/include/crypto/internal/acompress.h
@@ -0,0 +1,62 @@ 
+/*
+ * Asynchronous compression
+ *
+ * Copyright (c) 2015, Intel Corporation
+ * Authors: Weigang Li <weigang.li@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _CRYPTO_ACOMP_INT_H
+#define _CRYPTO_ACOMP_INT_H
+#include <crypto/acompress.h>
+
+/*
+ * Transform internal helpers.
+ */
+static inline void *acomp_request_ctx(struct acompress_request *req)
+{
+	return req->__ctx;
+}
+
+static inline void *acomp_tfm_ctx(struct crypto_acomp *tfm)
+{
+	return tfm->base.__crt_ctx;
+}
+
+static inline void acomp_request_complete(struct acompress_request *req,
+					  int err)
+{
+	req->base.complete(&req->base, err);
+}
+
+static inline const char *acomp_alg_name(struct crypto_acomp *tfm)
+{
+	return crypto_acomp_tfm(tfm)->__crt_alg->cra_name;
+}
+
+/**
+ * crypto_register_acomp() -- Register asynchronous compression algorithm
+ *
+ * Function registers an implementation of an asynchronous
+ * compression algorithm
+ *
+ * @alg:	algorithm definition
+ *
+ * Return: zero on success; error code in case of error
+ */
+int crypto_register_acomp(struct acomp_alg *alg);
+
+/**
+ * crypto_unregister_acomp() -- Unregister asynchronous compression algorithm
+ *
+ * Function unregisters an implementation of an asynchronous
+ * compression algorithm
+ *
+ * @alg:	algorithm definition
+ */
+void crypto_unregister_acomp(struct acomp_alg *alg);
+#endif
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index e71cb70..f051dac 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -54,6 +54,7 @@ 
 #define CRYPTO_ALG_TYPE_AHASH		0x0000000a
 #define CRYPTO_ALG_TYPE_RNG		0x0000000c
 #define CRYPTO_ALG_TYPE_AKCIPHER	0x0000000d
+#define CRYPTO_ALG_TYPE_ACOMPRESS	0x0000000e
 #define CRYPTO_ALG_TYPE_PCOMPRESS	0x0000000f
 
 #define CRYPTO_ALG_TYPE_HASH_MASK	0x0000000e
diff --git a/include/uapi/linux/cryptouser.h b/include/uapi/linux/cryptouser.h
index 2e67bb6..7cf256e 100644
--- a/include/uapi/linux/cryptouser.h
+++ b/include/uapi/linux/cryptouser.h
@@ -42,6 +42,7 @@  enum crypto_attr_type_t {
 	CRYPTOCFGA_REPORT_BLKCIPHER,	/* struct crypto_report_blkcipher */
 	CRYPTOCFGA_REPORT_AEAD,		/* struct crypto_report_aead */
 	CRYPTOCFGA_REPORT_COMPRESS,	/* struct crypto_report_comp */
+	CRYPTOCFGA_REPORT_ACOMPRESS,	/* struct crypto_report_acomp */
 	CRYPTOCFGA_REPORT_RNG,		/* struct crypto_report_rng */
 	CRYPTOCFGA_REPORT_CIPHER,	/* struct crypto_report_cipher */
 	CRYPTOCFGA_REPORT_AKCIPHER,	/* struct crypto_report_akcipher */
@@ -98,6 +99,10 @@  struct crypto_report_comp {
 	char type[CRYPTO_MAX_NAME];
 };
 
+struct crypto_report_acomp {
+	char type[CRYPTO_MAX_NAME];
+};
+
 struct crypto_report_rng {
 	char type[CRYPTO_MAX_NAME];
 	unsigned int seedsize;